diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 92b35e97baa05..6a4531f1bdefa 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -11,3 +11,4 @@ attention.
 - If submitting code, have you built your formula locally prior to submission with `gradle check`?
 - If submitting code, is your pull request against master? Unless there is a good reason otherwise, we prefer pull requests against master and will backport as needed.
 - If submitting code, have you checked that your submission is for an [OS that we support](https://www.elastic.co/support/matrix#show_os)?
+- If you are submitting this code for a class then read our [policy](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md#contributing-as-part-of-a-class) for that.
diff --git a/.gitignore b/.gitignore
index d1810a5a83fc7..b4ec8795057e2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -33,6 +33,7 @@ dependency-reduced-pom.xml
 # testing stuff
 **/.local*
 .vagrant/
+/logs/
 
 # osx stuff
 .DS_Store
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index b0f1e054e4693..f38a0588a9c0d 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -38,6 +38,11 @@ If you have a bugfix or new feature that you would like to contribute to Elastic
 We enjoy working with contributors to get their code accepted. There are many
 approaches to fixing a problem and it is important to find the best approach
 before writing too much code.
 
+Note that it is unlikely the project will merge refactors for the sake of refactoring. These
+types of pull requests have a high cost to maintainers in reviewing and testing with little
+to no tangible benefit. This especially includes changes generated by tools. For example,
+converting all generic interface instances to use the diamond operator.
+
 The process for contributing to any of the [Elastic repositories](https://github.com/elastic/) is similar. Details for individual projects can be found below.
 
 ### Fork and clone the repository
@@ -88,7 +93,8 @@ Contributing to the Elasticsearch codebase
 **Repository:** [https://github.com/elastic/elasticsearch](https://github.com/elastic/elasticsearch)
 
 Make sure you have [Gradle](http://gradle.org) installed, as
-Elasticsearch uses it as its build system.
+Elasticsearch uses it as its build system. Gradle must be at least
+version 3.3 in order to build successfully.
 
 Eclipse users can automatically configure their IDE: `gradle eclipse` then
 `File: Import: Existing Projects into Workspace`. Select the
@@ -137,3 +143,32 @@ Before submitting your changes, run the test suite to make sure that nothing is
 ```sh
 gradle check
 ```
+
+Contributing as part of a class
+-------------------------------
+In general Elasticsearch is happy to accept contributions that were created as
+part of a class but strongly advise against making the contribution as part of
+the class. So if you have code you wrote for a class feel free to submit it.
+
+Please, please, please do not assign contributing to Elasticsearch as part of a
+class. If you really want to assign writing code for Elasticsearch as an
+assignment then the code contributions should be made to your private clone and
+opening PRs against the primary Elasticsearch clone must be optional, fully
+voluntary, not for a grade, and without any deadlines.
+
+Because:
+
+* While the code review process is likely very educational, it can take wildly
+varying amounts of time depending on who is available, where the change is, and
+how deep the change is. There is no way to predict how long it will take unless
+we rush.
+* We do not rush reviews without a very, very good reason. Class deadlines
+aren't a good enough reason for us to rush reviews.
+* We deeply discourage opening a PR you don't intend to work through the entire
+code review process because it wastes our time.
+* We don't have the capacity to absorb an entire class full of new contributors,
+especially when they are unlikely to become long time contributors.
+
+Finally, we require that you run `gradle check` before submitting a
+non-documentation contribution. This is mentioned above, but it is worth
+repeating in this section because it has come up in this context.
diff --git a/GRADLE.CHEATSHEET b/GRADLE.CHEATSHEET
index 3362b8571e7b9..2c9c34fe1b512 100644
--- a/GRADLE.CHEATSHEET
+++ b/GRADLE.CHEATSHEET
@@ -4,4 +4,4 @@ test -> test
 verify -> check
 verify -Dskip.unit.tests -> integTest
 package -DskipTests -> assemble
-install -DskipTests -> install
+install -DskipTests -> publishToMavenLocal
diff --git a/NOTICE.txt b/NOTICE.txt
index c99b958193198..2126baed56547 100644
--- a/NOTICE.txt
+++ b/NOTICE.txt
@@ -1,5 +1,5 @@
 Elasticsearch
-Copyright 2009-2016 Elasticsearch
+Copyright 2009-2017 Elasticsearch
 
-This product includes software developed by The Apache Software
-Foundation (http://www.apache.org/).
+This product includes software developed by The Apache Software Foundation
+(http://www.apache.org/).
diff --git a/README.textile b/README.textile
index 69d3fd54767e5..91895df93b4f8 100644
--- a/README.textile
+++ b/README.textile
@@ -50,16 +50,16 @@ h3. Indexing
 
 Let's try and index some twitter like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):
 
-curl -XPUT 'http://localhost:9200/twitter/user/kimchy?pretty' -d '{ "name" : "Shay Banon" }'
+curl -XPUT 'http://localhost:9200/twitter/user/kimchy?pretty' -H 'Content-Type: application/json' -d '{ "name" : "Shay Banon" }'
 
-curl -XPUT 'http://localhost:9200/twitter/tweet/1?pretty' -d '
+curl -XPUT 'http://localhost:9200/twitter/tweet/1?pretty' -H 'Content-Type: application/json' -d '
 {
     "user": "kimchy",
     "post_date": "2009-11-15T13:12:00",
     "message": "Trying out Elasticsearch, so far so good?"
 }'
 
-curl -XPUT 'http://localhost:9200/twitter/tweet/2?pretty' -d '
+curl -XPUT 'http://localhost:9200/twitter/tweet/2?pretty' -H 'Content-Type: application/json' -d '
 {
     "user": "kimchy",
     "post_date": "2009-11-15T14:12:12",
@@ -87,7 +87,7 @@ curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=tru
 We can also use the JSON query language Elasticsearch provides instead of a query string:
 
 
-curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -H 'Content-Type: application/json' -d '
 {
     "query" : {
         "match" : { "user": "kimchy" }
@@ -98,7 +98,7 @@ curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
 Just for kicks, let's get all the documents stored (we should see the user as well):
 
 
-curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
 {
     "query" : {
         "match_all" : {}
@@ -106,10 +106,10 @@ curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
 }'
 
-We can also do range search (the @postDate@ was automatically identified as date)
+We can also do range search (the @post_date@ was automatically identified as date)
-curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
 {
     "query" : {
         "range" : {
@@ -130,16 +130,16 @@ Elasticsearch supports multiple indices, as well as multiple types per index. In
 Another way to define our simple twitter system is to have a different index per user (note, though that each index has an overhead). Here is the indexing curl's in this case:
 
 
-curl -XPUT 'http://localhost:9200/kimchy/info/1?pretty' -d '{ "name" : "Shay Banon" }'
+curl -XPUT 'http://localhost:9200/kimchy/info/1?pretty' -H 'Content-Type: application/json' -d '{ "name" : "Shay Banon" }'
 
-curl -XPUT 'http://localhost:9200/kimchy/tweet/1?pretty' -d '
+curl -XPUT 'http://localhost:9200/kimchy/tweet/1?pretty' -H 'Content-Type: application/json' -d '
 {
     "user": "kimchy",
     "post_date": "2009-11-15T13:12:00",
     "message": "Trying out Elasticsearch, so far so good?"
 }'
 
-curl -XPUT 'http://localhost:9200/kimchy/tweet/2?pretty' -d '
+curl -XPUT 'http://localhost:9200/kimchy/tweet/2?pretty' -H 'Content-Type: application/json' -d '
 {
     "user": "kimchy",
     "post_date": "2009-11-15T14:12:12",
@@ -152,7 +152,7 @@ The above will index information into the @kimchy@ index, with two types, @info@
 Complete control on the index level is allowed. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index, to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):
 
 
-curl -XPUT http://localhost:9200/another_user?pretty -d '
+curl -XPUT http://localhost:9200/another_user?pretty -H 'Content-Type: application/json' -d '
 {
     "index" : {
         "number_of_shards" : 1,
@@ -165,7 +165,7 @@ Search (and similar operations) are multi index aware. This means that we can ea
 index (twitter user), for example:
 
 
-curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -H 'Content-Type: application/json' -d '
 {
     "query" : {
         "match_all" : {}
@@ -176,7 +176,7 @@ curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
 Or on all the indices:
 
 
-curl -XGET 'http://localhost:9200/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/_search?pretty=true' -H 'Content-Type: application/json' -d '
 {
     "query" : {
         "match_all" : {}
@@ -200,7 +200,7 @@ We have just covered a very small portion of what Elasticsearch is all about. Fo
 
 h3. Building from Source
 
-Elasticsearch uses "Gradle":https://gradle.org for its build system. You'll need to have a modern version of Gradle installed - 2.13 should do.
+Elasticsearch uses "Gradle":https://gradle.org for its build system. You'll need to have at least version 3.3 of Gradle installed.
 
 In order to create a distribution, simply run the @gradle assemble@ command in the cloned directory.
 
diff --git a/TESTING.asciidoc b/TESTING.asciidoc
index 5046dc087b564..4b47f0409734d 100644
--- a/TESTING.asciidoc
+++ b/TESTING.asciidoc
@@ -41,6 +41,12 @@ run it using Gradle:
 gradle run
 -------------------------------------
 
+or to attach a remote debugger, run it as:
+
+-------------------------------------
+gradle run --debug-jvm
+-------------------------------------
+
 === Test case filtering.
 
 - `tests.class` is a class-filtering shell-like glob pattern,
@@ -335,7 +341,7 @@ vagrant plugin install vagrant-cachier
 . Validate your installed dependencies:
 
 -------------------------------------
-gradle :qa:vagrant:checkVagrantVersion
+gradle :qa:vagrant:vagrantCheckVersion
 -------------------------------------
 
 . Download and smoke test the VMs with `gradle vagrantSmokeTest` or
@@ -361,24 +367,25 @@ VM running trusty by running
 
 These are the linux flavors the Vagrantfile currently supports:
 
-* ubuntu-1204 aka precise
 * ubuntu-1404 aka trusty
-* ubuntu-1504 aka vivid
-* debian-8 aka jessie, the current debian stable distribution
+* ubuntu-1604 aka xenial
+* debian-8 aka jessie
+* debian-9 aka stretch, the current debian stable distribution
 * centos-6
 * centos-7
-* fedora-22
+* fedora-25
+* fedora-26
+* oel-6 aka Oracle Enterprise Linux 6
 * oel-7 aka Oracle Enterprise Linux 7
 * sles-12
-* opensuse-13
+* opensuse-42 aka Leap
 
 We're missing the following from the support matrix because there aren't high
 quality boxes available in vagrant atlas:
 
 * sles-11
-* oel-6
 
-We're missing the follow because our tests are very linux/bash centric:
+We're missing the following because our tests are very linux/bash centric:
 
 * Windows Server 2012
 
@@ -427,19 +434,46 @@ and in another window:
 
 ----------------------------------------------------
 vagrant up centos-7 --provider virtualbox && vagrant ssh centos-7
-cd $TESTROOT
-sudo bats $BATS/*rpm*.bats
+cd $BATS_ARCHIVES
+sudo -E bats $BATS_TESTS/*rpm*.bats
 ----------------------------------------------------
 
 If you wanted to retest all the release artifacts on a single VM you could:
 
 -------------------------------------------------
-gradle prepareTestRoot
-vagrant up ubuntu-1404 --provider virtualbox && vagrant ssh ubuntu-1404
-cd $TESTROOT
-sudo bats $BATS/*.bats
+gradle setupBats
+cd qa/vagrant; vagrant up ubuntu-1404 --provider virtualbox && vagrant ssh ubuntu-1404
+cd $BATS_ARCHIVES
+sudo -E bats $BATS_TESTS/*.bats
 -------------------------------------------------
 
+You can also use Gradle to prepare the test environment and then start a single VM:
+
+-------------------------------------------------
+gradle vagrantFedora25#up
+-------------------------------------------------
+
+Or any of vagrantCentos6#up, vagrantCentos7#up, vagrantDebian8#up,
+vagrantFedora25#up, vagrantOel6#up, vagrantOel7#up, vagrantOpensuse42#up,
+vagrantSles12#up, vagrantUbuntu1404#up, vagrantUbuntu1604#up.
+
+Once up, you can then connect to the VM using SSH from the elasticsearch directory:
+
+-------------------------------------------------
+vagrant ssh fedora-25
+-------------------------------------------------
+
+Or from another directory:
+
+-------------------------------------------------
+VAGRANT_CWD=/path/to/elasticsearch vagrant ssh fedora-25
+-------------------------------------------------
+
+Note: Starting a vagrant VM outside of the elasticsearch folder requires you to
+indicate the folder that contains the Vagrantfile using the VAGRANT_CWD
+environment variable.
+
+
 == Coverage analysis
 
 Tests can be run instrumented with jacoco to produce a coverage report in
@@ -475,12 +509,11 @@ gradle run --debug-jvm
 == Building with extra plugins
 Additional plugins may be built alongside elasticsearch, where their
 dependency on elasticsearch will be substituted with the local elasticsearch
-build. To add your plugin, create a directory called x-plugins as a sibling
-of elasticsearch. Checkout your plugin underneath x-plugins and the build
-will automatically pick it up. You can verify the plugin is included as part
-of the build by checking the projects of the build.
+build. To add your plugin, create a directory called elasticsearch-extra as
+a sibling of elasticsearch. Checkout your plugin underneath elasticsearch-extra
+and the build will automatically pick it up. You can verify the plugin is
+included as part of the build by checking the projects of the build.
 
 ---------------------------------------------------------------------------
 gradle projects
 ---------------------------------------------------------------------------
-
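The "Building with extra plugins" hunk above renames the sibling directory from
x-plugins to elasticsearch-extra. As a minimal sketch of the layout it describes
(the repository URL and plugin name below are placeholders, not part of this change):

-------------------------------------------------
cd ..
mkdir -p elasticsearch-extra
git clone https://example.com/your-plugin.git elasticsearch-extra/your-plugin
cd elasticsearch
gradle projects   # the plugin should now appear in the project listing
-------------------------------------------------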
diff --git a/Vagrantfile b/Vagrantfile
index fc148ee444319..487594bba8a1a 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -22,27 +22,27 @@
 # under the License.
 
 Vagrant.configure(2) do |config|
-  config.vm.define "ubuntu-1204" do |config|
-    config.vm.box = "elastic/ubuntu-12.04-x86_64"
-    ubuntu_common config
-  end
   config.vm.define "ubuntu-1404" do |config|
     config.vm.box = "elastic/ubuntu-14.04-x86_64"
     ubuntu_common config
   end
-  config.vm.define "ubuntu-1504" do |config|
-    config.vm.box = "elastic/ubuntu-15.04-x86_64"
+  config.vm.define "ubuntu-1604" do |config|
+    config.vm.box = "elastic/ubuntu-16.04-x86_64"
     ubuntu_common config, extra: <<-SHELL
       # Install Jayatana so we can work around it being present.
       [ -f /usr/share/java/jayatanaag.jar ] || install jayatana
     SHELL
   end
-  # Wheezy's backports don't contain Openjdk 8 and the backflips required to
-  # get the sun jdk on there just aren't worth it. We have jessie for testing
-  # debian and it works fine.
+  # Wheezy's backports don't contain Openjdk 8 and the backflips
+  # required to get the sun jdk on there just aren't worth it. We have
+  # jessie and stretch for testing debian and it works fine.
   config.vm.define "debian-8" do |config|
     config.vm.box = "elastic/debian-8-x86_64"
-    deb_common config, 'echo deb http://cloudfront.debian.net/debian jessie-backports main > /etc/apt/sources.list.d/backports.list', 'backports'
+    deb_common config
+  end
+  config.vm.define "debian-9" do |config|
+    config.vm.box = "elastic/debian-9-x86_64"
+    deb_common config
   end
   config.vm.define "centos-6" do |config|
     config.vm.box = "elastic/centos-6-x86_64"
@@ -60,12 +60,16 @@ Vagrant.configure(2) do |config|
     config.vm.box = "elastic/oraclelinux-7-x86_64"
     rpm_common config
   end
-  config.vm.define "fedora-24" do |config|
-    config.vm.box = "elastic/fedora-24-x86_64"
+  config.vm.define "fedora-25" do |config|
+    config.vm.box = "elastic/fedora-25-x86_64"
     dnf_common config
   end
-  config.vm.define "opensuse-13" do |config|
-    config.vm.box = "elastic/opensuse-13-x86_64"
+  config.vm.define "fedora-26" do |config|
+    config.vm.box = "elastic/fedora-26-x86_64"
+    dnf_common config
+  end
+  config.vm.define "opensuse-42" do |config|
+    config.vm.box = "elastic/opensuse-42-x86_64"
     opensuse_common config
   end
   config.vm.define "sles-12" do |config|
@@ -77,6 +81,9 @@ Vagrant.configure(2) do |config|
   # the elasticsearch project called vagrant....
   config.vm.synced_folder ".", "/vagrant", disabled: true
   config.vm.synced_folder ".", "/elasticsearch"
+  # Expose project directory
+  PROJECT_DIR = ENV['VAGRANT_PROJECT_DIR'] || Dir.pwd
+  config.vm.synced_folder PROJECT_DIR, "/project"
   config.vm.provider "virtualbox" do |v|
     # Give the boxes 3GB because Elasticsearch defaults to using 2GB
     v.memory = 3072
@@ -105,16 +112,22 @@ SOURCE_PROMPT
 source /etc/profile.d/elasticsearch_prompt.sh
 SOURCE_PROMPT
       SHELL
+      # Creates a file to mark the machine as created by vagrant. Tests check
+      # for this file and refuse to run if it is not present so that they can't
+      # be run unexpectedly.
+      config.vm.provision "markerfile", type: "shell", inline: <<-SHELL
+        touch /etc/is_vagrant_vm
+      SHELL
     end
     config.config_procs.push ['2', set_prompt]
   end
 end
 
 def ubuntu_common(config, extra: '')
-  deb_common config, 'apt-add-repository -y ppa:openjdk-r/ppa > /dev/null 2>&1', 'openjdk-r-*', extra: extra
+  deb_common config, extra: extra
 end
 
-def deb_common(config, add_openjdk_repository_command, openjdk_list, extra: '')
+def deb_common(config, extra: '')
   # http://foo-o-rama.com/vagrant--stdin-is-not-a-tty--fix.html
   config.vm.provision "fix-no-tty", type: "shell" do |s|
       s.privileged = false
@@ -124,24 +137,14 @@ def deb_common(config, add_openjdk_repository_command, openjdk_list, extra: '')
     update_command: "apt-get update",
     update_tracking_file: "/var/cache/apt/archives/last_update",
     install_command: "apt-get install -y",
-    java_package: "openjdk-8-jdk",
-    extra: <<-SHELL
-      export DEBIAN_FRONTEND=noninteractive
-      ls /etc/apt/sources.list.d/#{openjdk_list}.list > /dev/null 2>&1 ||
-        (echo "==> Importing java-8 ppa" &&
-          #{add_openjdk_repository_command} &&
-          apt-get update)
-      #{extra}
-SHELL
-  )
+    extra: extra)
 end
 
 def rpm_common(config)
   provision(config,
     update_command: "yum check-update",
     update_tracking_file: "/var/cache/yum/last_update",
-    install_command: "yum install -y",
-    java_package: "java-1.8.0-openjdk-devel")
+    install_command: "yum install -y")
 end
 
 def dnf_common(config)
@@ -149,7 +152,7 @@ def dnf_common(config)
     update_command: "dnf check-update",
     update_tracking_file: "/var/cache/dnf/last_update",
     install_command: "dnf install -y",
-    java_package: "java-1.8.0-openjdk-devel")
+    install_command_retries: 5)
   if Vagrant.has_plugin?("vagrant-cachier")
     # Autodetect doesn't work....
     config.cache.auto_detect = false
@@ -166,17 +169,12 @@ def suse_common(config, extra)
     update_command: "zypper --non-interactive list-updates",
     update_tracking_file: "/var/cache/zypp/packages/last_update",
     install_command: "zypper --non-interactive --quiet install --no-recommends",
-    java_package: "java-1_8_0-openjdk-devel",
     extra: extra)
 end
 
 def sles_common(config)
   extra = <<-SHELL
-    zypper rr systemsmanagement_puppet
-    zypper addrepo -t yast2 http://demeter.uni-regensburg.de/SLES12-x64/DVD1/ dvd1 || true
-    zypper addrepo -t yast2 http://demeter.uni-regensburg.de/SLES12-x64/DVD2/ dvd2 || true
-    zypper addrepo http://download.opensuse.org/repositories/Java:Factory/SLE_12/Java:Factory.repo || true
-    zypper --no-gpg-checks --non-interactive refresh
+    zypper rr systemsmanagement_puppet puppetlabs-pc1
     zypper --non-interactive install git-core
 SHELL
   suse_common config, extra
@@ -191,26 +189,50 @@ end
 #   is cached by vagrant-cachier.
 # @param install_command [String] The command used to install a package.
 #   Required. Think `apt-get install #{package}`.
-# @param java_package [String] The name of the java package. Required.
 # @param extra [String] Extra provisioning commands run before anything else.
 #   Optional. Used for things like setting up the ppa for Java 8.
 def provision(config,
     update_command: 'required',
     update_tracking_file: 'required',
     install_command: 'required',
-    java_package: 'required',
+    install_command_retries: 0,
     extra: '')
   # Vagrant run ruby 2.0.0 which doesn't have required named parameters....
   raise ArgumentError.new('update_command is required') if update_command == 'required'
   raise ArgumentError.new('update_tracking_file is required') if update_tracking_file == 'required'
   raise ArgumentError.new('install_command is required') if install_command == 'required'
-  raise ArgumentError.new('java_package is required') if java_package == 'required'
-  config.vm.provision "bats dependencies", type: "shell", inline: <<-SHELL
+  config.vm.provider "virtualbox" do |v|
+    # Give the box more memory and cpu because our tests are beasts!
+    v.memory = Integer(ENV['VAGRANT_MEMORY'] || 8192)
+    v.cpus = Integer(ENV['VAGRANT_CPUS'] || 4)
+  end
+  config.vm.synced_folder "#{Dir.home}/.gradle/caches", "/home/vagrant/.gradle/caches",
+    create: true,
+    owner: "vagrant"
+  config.vm.provision "dependencies", type: "shell", inline: <<-SHELL
     set -e
     set -o pipefail
+
+    # Retry the install command up to $2 times if it fails
+    retry_installcommand() {
+      n=0
+      while true; do
+        #{install_command} $1 && break
+        let n=n+1
+        if [ $n -ge $2 ]; then
+          echo "==> Exhausted retries to install $1"
+          return 1
+        fi
+        echo "==> Retrying installing $1, attempt $((n+1))"
+        # Add a small delay to increase chance of metalink providing updated list of mirrors
+        sleep 5
+      done
+    }
+
     installed() {
       command -v $1 2>&1 >/dev/null
     }
+
     install() {
       # Only apt-get update if we haven't in the last day
       if [ ! -f #{update_tracking_file} ] || [ "x$(find #{update_tracking_file} -mtime +0)" == "x#{update_tracking_file}" ]; then
@@ -219,15 +241,24 @@ def provision(config,
         touch #{update_tracking_file}
       fi
       echo "==> Installing $1"
-      #{install_command} $1
+      if [ #{install_command_retries} -eq 0 ]
+      then
+            #{install_command} $1
+      else
+            retry_installcommand $1 #{install_command_retries}
+      fi
     }
+
     ensure() {
       installed $1 || install $1
     }
 
     #{extra}
 
-    installed java || install #{java_package}
+    installed java || {
+      echo "==> Java is not installed on vagrant box ${config.vm.box}"
+      return 1
+    }
     ensure tar
     ensure curl
     ensure unzip
@@ -241,13 +272,39 @@ def provision(config,
       /tmp/bats/install.sh /usr
       rm -rf /tmp/bats
     }
+
+    installed gradle || {
+      echo "==> Installing Gradle"
+      curl -sS -o /tmp/gradle.zip -L https://services.gradle.org/distributions/gradle-3.3-bin.zip
+      unzip -q /tmp/gradle.zip -d /opt
+      rm -rf /tmp/gradle.zip
+      ln -s /opt/gradle-3.3/bin/gradle /usr/bin/gradle
+      # make nfs mounted gradle home dir writeable
+      chown vagrant:vagrant /home/vagrant/.gradle
+    }
+
+
     cat \<\<VARS > /etc/profile.d/elasticsearch_vars.sh
 export ZIP=/elasticsearch/distribution/zip/build/distributions
 export TAR=/elasticsearch/distribution/tar/build/distributions
 export RPM=/elasticsearch/distribution/rpm/build/distributions
 export DEB=/elasticsearch/distribution/deb/build/distributions
-export TESTROOT=/elasticsearch/qa/vagrant/build/testroot
-export BATS=/elasticsearch/qa/vagrant/src/test/resources/packaging/scripts
+export BATS=/project/build/bats
+export BATS_UTILS=/project/build/bats/utils
+export BATS_TESTS=/project/build/bats/tests
+export BATS_ARCHIVES=/project/build/bats/archives
+export GRADLE_HOME=/opt/gradle-3.3
 VARS
+    cat \<\<SUDOERS_VARS > /etc/sudoers.d/elasticsearch_vars
+Defaults   env_keep += "ZIP"
+Defaults   env_keep += "TAR"
+Defaults   env_keep += "RPM"
+Defaults   env_keep += "DEB"
+Defaults   env_keep += "BATS"
+Defaults   env_keep += "BATS_UTILS"
+Defaults   env_keep += "BATS_TESTS"
+Defaults   env_keep += "BATS_ARCHIVES"
+SUDOERS_VARS
+    chmod 0440 /etc/sudoers.d/elasticsearch_vars
   SHELL
 end
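The Vagrantfile changes above add VAGRANT_MEMORY, VAGRANT_CPUS, and VAGRANT_PROJECT_DIR
as tunables and mount the project directory at /project inside each box. A sketch of
bringing a box up by hand with those variables; the specific values are only examples:

-------------------------------------------------
# Defaults are 8192 MB and 4 CPUs; VAGRANT_PROJECT_DIR defaults to the current directory.
VAGRANT_MEMORY=4096 VAGRANT_CPUS=2 VAGRANT_PROJECT_DIR=$PWD \
  vagrant up centos-7 --provider virtualbox
-------------------------------------------------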
diff --git a/benchmarks/build.gradle b/benchmarks/build.gradle
index 3b8b92328e149..35b7c1a163d3e 100644
--- a/benchmarks/build.gradle
+++ b/benchmarks/build.gradle
@@ -34,13 +34,14 @@ apply plugin: 'com.github.johnrengelman.shadow'
 // have the shadow plugin provide the runShadow task
 apply plugin: 'application'
 
+// Not published so no need to assemble
+tasks.remove(assemble)
+build.dependsOn.remove('assemble')
+
 archivesBaseName = 'elasticsearch-benchmarks'
 mainClassName = 'org.openjdk.jmh.Main'
 
-// never try to invoke tests on the benchmark project - there aren't any
-check.dependsOn.remove(test)
-// explicitly override the test task too in case somebody invokes 'gradle test' so it won't trip
-task test(type: Test, overwrite: true)
+test.enabled = false
 
 dependencies {
     compile("org.elasticsearch:elasticsearch:${version}") {
@@ -60,7 +61,6 @@ compileJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-u
 // enable the JMH's BenchmarkProcessor to generate the final benchmark classes
 // needs to be added separately otherwise Gradle will quote it and javac will fail
 compileJava.options.compilerArgs.addAll(["-processor", "org.openjdk.jmh.generators.BenchmarkProcessor"])
-compileTestJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked"
 
 forbiddenApis {
     // classes generated by JMH can use all sorts of forbidden APIs but we have no influence at all and cannot exclude these classes
diff --git a/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/AllocationBenchmark.java b/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/AllocationBenchmark.java
index 86902b380c86e..39cfdb6582d74 100644
--- a/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/AllocationBenchmark.java
+++ b/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/AllocationBenchmark.java
@@ -27,7 +27,6 @@
 import org.elasticsearch.cluster.routing.RoutingTable;
 import org.elasticsearch.cluster.routing.ShardRoutingState;
 import org.elasticsearch.cluster.routing.allocation.AllocationService;
-import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
 import org.elasticsearch.common.settings.Settings;
 import org.openjdk.jmh.annotations.Benchmark;
 import org.openjdk.jmh.annotations.BenchmarkMode;
@@ -160,11 +159,9 @@ private int toInt(String v) {
     public ClusterState measureAllocation() {
         ClusterState clusterState = initialClusterState;
         while (clusterState.getRoutingNodes().hasUnassignedShards()) {
-            RoutingAllocation.Result result = strategy.applyStartedShards(clusterState, clusterState.getRoutingNodes()
+            clusterState = strategy.applyStartedShards(clusterState, clusterState.getRoutingNodes()
                     .shardsWithState(ShardRoutingState.INITIALIZING));
-            clusterState = ClusterState.builder(clusterState).routingResult(result).build();
-            result = strategy.reroute(clusterState, "reroute");
-            clusterState = ClusterState.builder(clusterState).routingResult(result).build();
+            clusterState = strategy.reroute(clusterState, "reroute");
         }
         return clusterState;
     }
diff --git a/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/Allocators.java b/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/Allocators.java
index ad06f75074d53..860137cf559a0 100644
--- a/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/Allocators.java
+++ b/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/Allocators.java
@@ -22,10 +22,10 @@
 import org.elasticsearch.cluster.ClusterModule;
 import org.elasticsearch.cluster.EmptyClusterInfoService;
 import org.elasticsearch.cluster.node.DiscoveryNode;
+import org.elasticsearch.cluster.routing.ShardRouting;
 import org.elasticsearch.cluster.routing.allocation.AllocationService;
-import org.elasticsearch.cluster.routing.allocation.FailedRerouteAllocation;
+import org.elasticsearch.cluster.routing.allocation.FailedShard;
 import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
-import org.elasticsearch.cluster.routing.allocation.StartedRerouteAllocation;
 import org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator;
 import org.elasticsearch.cluster.routing.allocation.decider.AllocationDecider;
 import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders;
@@ -35,9 +35,7 @@
 import org.elasticsearch.common.util.set.Sets;
 import org.elasticsearch.gateway.GatewayAllocator;
 
-import java.lang.reflect.Constructor;
 import java.lang.reflect.InvocationTargetException;
-import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
 import java.util.List;
@@ -52,12 +50,12 @@ protected NoopGatewayAllocator() {
         }
 
         @Override
-        public void applyStartedShards(StartedRerouteAllocation allocation) {
+        public void applyStartedShards(RoutingAllocation allocation, List<ShardRouting> startedShards) {
             // noop
         }
 
         @Override
-        public void applyFailedShards(FailedRerouteAllocation allocation) {
+        public void applyFailedShards(RoutingAllocation allocation, List<FailedShard> failedShards) {
             // noop
         }
 
diff --git a/distribution/licenses/HdrHistogram-NOTICE.txt b/bootstrap.memory_lock
similarity index 100%
rename from distribution/licenses/HdrHistogram-NOTICE.txt
rename to bootstrap.memory_lock
diff --git a/build.gradle b/build.gradle
index f1b57d7857bfc..2c7fff04e53ea 100644
--- a/build.gradle
+++ b/build.gradle
@@ -17,17 +17,26 @@
  * under the License.
  */
 
-import com.bmuschko.gradle.nexus.NexusPlugin
+import java.util.regex.Matcher
+import java.nio.file.Path
 import org.eclipse.jgit.lib.Repository
 import org.eclipse.jgit.lib.RepositoryBuilder
 import org.gradle.plugins.ide.eclipse.model.SourceFolder
 import org.apache.tools.ant.taskdefs.condition.Os
+import org.elasticsearch.gradle.BuildPlugin
+import org.elasticsearch.gradle.VersionProperties
+import org.elasticsearch.gradle.Version
 
 // common maven publishing configuration
 subprojects {
   group = 'org.elasticsearch'
   version = org.elasticsearch.gradle.VersionProperties.elasticsearch
   description = "Elasticsearch subproject ${project.path}"
+}
+
+Path rootPath = rootDir.toPath()
+// setup pom license info, but only for artifacts that are part of elasticsearch
+configure(subprojects.findAll { it.projectDir.toPath().startsWith(rootPath) }) {
 
   // we only use maven publish to add tasks for pom generation
   plugins.withType(MavenPublishPlugin).whenPluginAdded {
@@ -52,89 +61,128 @@ subprojects {
       }
     }
   }
+  plugins.withType(BuildPlugin).whenPluginAdded {
+    project.licenseFile = project.rootProject.file('LICENSE.txt')
+    project.noticeFile = project.rootProject.file('NOTICE.txt')
+  }
+}
 
-  plugins.withType(NexusPlugin).whenPluginAdded {
-    modifyPom {
-      project {
-        url 'https://github.com/elastic/elasticsearch'
-        inceptionYear '2009'
-
-        scm {
-          url 'https://github.com/elastic/elasticsearch'
-          connection 'scm:https://elastic@github.com/elastic/elasticsearch'
-          developerConnection 'scm:git://github.com/elastic/elasticsearch.git'
-        }
-
-        licenses {
-          license {
-            name 'The Apache Software License, Version 2.0'
-            url 'http://www.apache.org/licenses/LICENSE-2.0.txt'
-            distribution 'repo'
-          }
-        }
-      }
+// introspect all versions of ES that may be tested against for backwards compatibility
+Version currentVersion = Version.fromString(VersionProperties.elasticsearch.minus('-SNAPSHOT'))
+File versionFile = file('core/src/main/java/org/elasticsearch/Version.java')
+List<String> versionLines = versionFile.readLines('UTF-8')
+List<Version> versions = []
+// keep track of the major version's first version, so we know where wire compat begins
+int firstMajorIndex = -1 // index in the versions list of the first minor from this major
+for (String line : versionLines) {
+  Matcher match = line =~ /\W+public static final Version V_(\d+)_(\d+)_(\d+)(_UNRELEASED)? .*/
+  if (match.matches()) {
+    int major = Integer.parseInt(match.group(1))
+    int minor = Integer.parseInt(match.group(2))
+    int bugfix = Integer.parseInt(match.group(3))
+    boolean unreleased = match.group(4) != null
+    Version foundVersion = new Version(major, minor, bugfix, false, unreleased)
+    if (currentVersion != foundVersion) {
+      versions.add(foundVersion)
     }
-    extraArchive {
-      javadoc = true
-      tests = false
-    }
-    nexus {
-      String buildSnapshot = System.getProperty('build.snapshot', 'true')
-      if (buildSnapshot == 'false') {
-        Repository repo = new RepositoryBuilder().findGitDir(project.rootDir).build()
-        String shortHash = repo.resolve('HEAD')?.name?.substring(0,7)
-        repositoryUrl = project.hasProperty('build.repository') ? project.property('build.repository') : "file://${System.getenv('HOME')}/elasticsearch-releases/${version}-${shortHash}/"
-      }
-    }
-    // we have our own username/password prompts so that they only happen once
-    // TODO: add gpg signing prompts, which is tricky, as the buildDeb/buildRpm tasks are executed before this code block
-    project.gradle.taskGraph.whenReady { taskGraph ->
-      if (taskGraph.allTasks.any { it.name == 'uploadArchives' }) {
-        Console console = System.console()
-        // no need for username/password on local deploy
-        if (project.nexus.repositoryUrl.startsWith('file://')) {
-          project.rootProject.allprojects.each {
-            it.ext.nexusUsername = 'foo'
-            it.ext.nexusPassword = 'bar'
-          }
-        } else {
-          if (project.hasProperty('nexusUsername') == false) {
-            String nexusUsername = console.readLine('\nNexus username: ')
-            project.rootProject.allprojects.each {
-              it.ext.nexusUsername = nexusUsername
-            }
-          }
-          if (project.hasProperty('nexusPassword') == false) {
-            String nexusPassword = new String(console.readPassword('\nNexus password: '))
-            project.rootProject.allprojects.each {
-              it.ext.nexusPassword = nexusPassword
-            }
-          }
-        }
-      }
+    if (major == currentVersion.major && firstMajorIndex == -1) {
+      firstMajorIndex = versions.size() - 1
     }
   }
 }
+if (versions.toSorted { it.id } != versions) {
+  println "Versions: ${versions}"
+  throw new GradleException("Versions.java contains out of order version constants")
+}
+if (currentVersion.bugfix == 0) {
+  // If on a release branch, after the initial release of that branch, the bugfix version will
+  // be bumped, and will be != 0. On master and N.x branches, we want to test against the
+  // unreleased version of the closest branch. So for those cases, the version includes -SNAPSHOT,
+  // and the bwc distribution will checkout and build that version.
+  Version last = versions[-1]
+  versions[-1] = new Version(last.major, last.minor, last.bugfix,
+      true, last.unreleased)
+}
+
+// build metadata from previous build, contains eg hashes for bwc builds
+String buildMetadataValue = System.getenv('BUILD_METADATA')
+if (buildMetadataValue == null) {
+  buildMetadataValue = ''
+}
+Map<String, String> buildMetadataMap = buildMetadataValue.tokenize(';').collectEntries {
+  def (String key, String value) = it.split('=')
+  return [key, value]
+}
 
+// injecting groovy property variables into all projects
 allprojects {
   // injecting groovy property variables into all projects
   project.ext {
     // for ide hacks...
     isEclipse = System.getProperty("eclipse.launcher") != null || gradle.startParameter.taskNames.contains('eclipse') || gradle.startParameter.taskNames.contains('cleanEclipse')
     isIdea = System.getProperty("idea.active") != null || gradle.startParameter.taskNames.contains('idea') || gradle.startParameter.taskNames.contains('cleanIdea')
+    // for backcompat testing
+    indexCompatVersions = versions
+    wireCompatVersions = versions.subList(firstMajorIndex, versions.size())
+    buildMetadata = buildMetadataMap
+  }
+}
+
+task verifyVersions {
+  doLast {
+    if (gradle.startParameter.isOffline()) {
+      throw new GradleException("Must run in online mode to verify versions")
+    }
+    // Read the list from maven central
+    Node xml
+    new URL('https://repo1.maven.org/maven2/org/elasticsearch/elasticsearch/maven-metadata.xml').openStream().withStream { s ->
+        xml = new XmlParser().parse(s)
+    }
+    Set<Version> knownVersions = new TreeSet<>(xml.versioning.versions.version.collect { it.text() }.findAll { it ==~ /\d\.\d\.\d/ }.collect { Version.fromString(it) })
+
+    // Limit the known versions to those that should be index compatible, and are not future versions
+    knownVersions = knownVersions.findAll { it.major >= 2 && it.before(VersionProperties.elasticsearch) }
+
+    /* Limit the listed versions to those that have been marked as released.
+     * Versions not marked as released don't get the same testing and we want
+     * to make sure that we flip all unreleased versions to released as soon
+     * as possible after release. */
+    Set<Version> actualVersions = new TreeSet<>(indexCompatVersions.findAll { false == it.snapshot })
+
+    // Finally, compare!
+    if (knownVersions.equals(actualVersions) == false) {
+      throw new GradleException("out-of-date released versions\nActual  :" + actualVersions + "\nExpected:" + knownVersions +
+        "\nUpdate Version.java. Note that Version.CURRENT doesn't count because it is not released.")
+    }
+  }
+}
+
+/*
+ * When adding backcompat behavior that spans major versions, temporarily
+ * disabling the backcompat tests is necessary. This flag controls
+ * the enabled state of every bwc task. It should be set back to true
+ * after the backport of the backcompat code is complete.
+ */
+allprojects {
+  ext.bwc_tests_enabled = true
+}
+
+task verifyBwcTestsEnabled {
+  doLast {
+    if (project.bwc_tests_enabled == false) {
+      throw new GradleException('Bwc tests are disabled. They must be re-enabled after completing backcompat behavior backporting.')
+    }
   }
 }
 
+task branchConsistency {
+  description 'Ensures this branch is internally consistent. For example, that version constants match released versions.'
+  group 'Verification'
+  dependsOn verifyVersions, verifyBwcTestsEnabled
+}
+
 subprojects {
   project.afterEvaluate {
-    // include license and notice in jars
-    tasks.withType(Jar) {
-      into('META-INF') {
-        from project.rootProject.rootDir
-        include 'LICENSE.txt'
-        include 'NOTICE.txt'
-      }
-    }
     // ignore missing javadocs
     tasks.withType(Javadoc) { Javadoc javadoc ->
       // the -quiet here is because of a bug in gradle, in that adding a string option
@@ -163,8 +211,9 @@ subprojects {
     "org.elasticsearch.gradle:build-tools:${version}": ':build-tools',
     "org.elasticsearch:rest-api-spec:${version}": ':rest-api-spec',
     "org.elasticsearch:elasticsearch:${version}": ':core',
-    "org.elasticsearch.client:rest:${version}": ':client:rest',
-    "org.elasticsearch.client:sniffer:${version}": ':client:sniffer',
+    "org.elasticsearch.client:elasticsearch-rest-client:${version}": ':client:rest',
+    "org.elasticsearch.client:elasticsearch-rest-client-sniffer:${version}": ':client:sniffer',
+    "org.elasticsearch.client:elasticsearch-rest-high-level-client:${version}": ':client:rest-high-level',
     "org.elasticsearch.client:test:${version}": ':client:test',
     "org.elasticsearch.client:transport:${version}": ':client:transport',
     "org.elasticsearch.test:framework:${version}": ':test:framework',
@@ -179,12 +228,23 @@ subprojects {
     "org.elasticsearch.plugin:transport-netty4-client:${version}": ':modules:transport-netty4',
     "org.elasticsearch.plugin:reindex-client:${version}": ':modules:reindex',
     "org.elasticsearch.plugin:lang-mustache-client:${version}": ':modules:lang-mustache',
+    "org.elasticsearch.plugin:parent-join-client:${version}": ':modules:parent-join',
+    "org.elasticsearch.plugin:aggs-matrix-stats-client:${version}": ':modules:aggs-matrix-stats',
     "org.elasticsearch.plugin:percolator-client:${version}": ':modules:percolator',
   ]
-  configurations.all {
-    resolutionStrategy.dependencySubstitution { DependencySubstitutions subs ->
-      projectSubstitutions.each { k,v ->
-        subs.substitute(subs.module(k)).with(subs.project(v))
+  if (wireCompatVersions[-1].snapshot) {
+    // if the most previous version is a snapshot, we need to connect that version to the
+    // bwc project which will checkout and build that snapshot version
+    ext.projectSubstitutions["org.elasticsearch.distribution.deb:elasticsearch:${wireCompatVersions[-1]}"] = ':distribution:bwc'
+    ext.projectSubstitutions["org.elasticsearch.distribution.rpm:elasticsearch:${wireCompatVersions[-1]}"] = ':distribution:bwc'
+    ext.projectSubstitutions["org.elasticsearch.distribution.zip:elasticsearch:${wireCompatVersions[-1]}"] = ':distribution:bwc'
+  }
+  project.afterEvaluate {
+    configurations.all {
+      resolutionStrategy.dependencySubstitution { DependencySubstitutions subs ->
+        projectSubstitutions.each { k,v ->
+          subs.substitute(subs.module(k)).with(subs.project(v))
+        }
       }
     }
   }
@@ -341,3 +401,17 @@ task run(type: Run) {
   group = 'Verification'
   impliesSubProjects = true
 }
+
+/* Remove assemble on all qa projects because we don't need to publish
+ * artifacts for them. */
+gradle.projectsEvaluated {
+  subprojects {
+    if (project.path.startsWith(':qa')) {
+      Task assemble = project.tasks.findByName('assemble')
+      if (assemble) {
+        project.tasks.remove(assemble)
+        project.build.dependsOn.remove('assemble')
+      }
+    }
+  }
+}
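The backwards compatibility machinery added to build.gradle above is driven by scanning
Version.java with the regular expression in the version-parsing loop. A quick sketch of
inspecting the constants that loop will pick up (plain grep, not part of the build):

-------------------------------------------------
grep -E 'public static final Version V_[0-9]+_[0-9]+_[0-9]+(_UNRELEASED)? ' \
    core/src/main/java/org/elasticsearch/Version.java
-------------------------------------------------

The resulting list can then be checked against Maven Central with `gradle verifyVersions`,
which must be run online because it downloads maven-metadata.xml.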
diff --git a/buildSrc/build.gradle b/buildSrc/build.gradle
index 1be5020f4f813..73115aab88fd3 100644
--- a/buildSrc/build.gradle
+++ b/buildSrc/build.gradle
@@ -23,14 +23,12 @@ apply plugin: 'groovy'
 
 group = 'org.elasticsearch.gradle'
 
-// TODO: remove this when upgrading to a version that supports ProgressLogger
-// gradle 2.14 made internal apis unavailable to plugins, and gradle considered
-// ProgressLogger to be an internal api. Until this is made available again,
-// we can't upgrade without losing our nice progress logging
-// NOTE that this check duplicates that in BuildPlugin, but we need to check
-// early here before trying to compile the broken classes in buildSrc
-if (GradleVersion.current() != GradleVersion.version('2.13')) {
-  throw new GradleException('Gradle 2.13 is required to build elasticsearch')
+if (GradleVersion.current() < GradleVersion.version('3.3')) {
+  throw new GradleException('Gradle 3.3+ is required to build elasticsearch')
+}
+
+if (JavaVersion.current() < JavaVersion.VERSION_1_8) {
+  throw new GradleException('Java 1.8 is required to build elasticsearch gradle tools')
 }
 
 if (project == rootProject) {
@@ -94,12 +92,18 @@ dependencies {
   compile 'com.netflix.nebula:gradle-info-plugin:3.0.3'
   compile 'org.eclipse.jgit:org.eclipse.jgit:3.2.0.201312181205-r'
   compile 'com.perforce:p4java:2012.3.551082' // THIS IS SUPPOSED TO BE OPTIONAL IN THE FUTURE....
-  compile 'de.thetaphi:forbiddenapis:2.2'
-  compile 'com.bmuschko:gradle-nexus-plugin:2.3.1'
+  compile 'de.thetaphi:forbiddenapis:2.3'
   compile 'org.apache.rat:apache-rat:0.11'
-  compile 'ru.vyarus:gradle-animalsniffer-plugin:1.0.1'
+  compile "org.elasticsearch:jna:4.4.0-1"
 }
 
+// Gradle 2.14+ removed ProgressLogger(-Factory) classes from the public APIs
+// Use logging dependency instead
+
+dependencies {
+  compileOnly "org.gradle:gradle-logging:${GradleVersion.current().getVersion()}"
+  compile 'ru.vyarus:gradle-animalsniffer-plugin:1.2.0' // Gradle 2.14 requires a version > 1.0.1
+}
 
 /*****************************************************************************
  *                         Bootstrap repositories                            *
@@ -108,11 +112,10 @@ dependencies {
 if (project == rootProject) {
 
   repositories {
-    mavenCentral()
-    maven {
-      name 'sonatype-snapshots'
-      url "https://oss.sonatype.org/content/repositories/snapshots/"
+    if (System.getProperty("repos.mavenLocal") != null) {
+      mavenLocal()
     }
+    mavenCentral()
   }
   test.exclude 'org/elasticsearch/test/NamingConventionsCheckBadClasses*'
 }
@@ -154,4 +157,11 @@ if (project != rootProject) {
     testClass = 'org.elasticsearch.test.NamingConventionsCheckBadClasses$UnitTestCase'
     integTestClass = 'org.elasticsearch.test.NamingConventionsCheckBadClasses$IntegTestCase'
   }
+
+  task namingConventionsMain(type: org.elasticsearch.gradle.precommit.NamingConventionsTask) {
+    checkForTestsInMain = true
+    testClass = namingConventions.testClass
+    integTestClass = namingConventions.integTestClass
+  }
+  precommit.dependsOn namingConventionsMain
 }
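A small usage sketch for the buildSrc changes above: the Gradle 3.3 floor is enforced when
the build script is evaluated, and the new repos.mavenLocal switch is read via
System.getProperty, so it is passed with -D (the check task here is only an example):

-------------------------------------------------
gradle --version                      # must report Gradle 3.3 or newer
gradle -Drepos.mavenLocal=true check  # also resolve bootstrap dependencies from the local ~/.m2 repository
-------------------------------------------------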
diff --git a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy
index e2230b116c714..c811a4f6c268e 100644
--- a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy
+++ b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy
@@ -33,7 +33,7 @@ class RandomizedTestingPlugin implements Plugin {
         ]
         RandomizedTestingTask newTestTask = tasks.create(properties)
         newTestTask.classpath = oldTestTask.classpath
-        newTestTask.testClassesDir = oldTestTask.testClassesDir
+        newTestTask.testClassesDir = oldTestTask.project.sourceSets.test.output.classesDir
 
         // hack so check task depends on custom test
         Task checkTask = tasks.findByPath('check')
diff --git a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingTask.groovy b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingTask.groovy
index b28e7210ea41d..e24c226837d26 100644
--- a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingTask.groovy
+++ b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingTask.groovy
@@ -19,7 +19,7 @@ import org.gradle.api.tasks.Optional
 import org.gradle.api.tasks.TaskAction
 import org.gradle.api.tasks.util.PatternFilterable
 import org.gradle.api.tasks.util.PatternSet
-import org.gradle.logging.ProgressLoggerFactory
+import org.gradle.internal.logging.progress.ProgressLoggerFactory
 import org.gradle.util.ConfigureUtil
 
 import javax.inject.Inject
@@ -69,6 +69,10 @@ class RandomizedTestingTask extends DefaultTask {
     @Input
     String ifNoTests = 'ignore'
 
+    @Optional
+    @Input
+    String onNonEmptyWorkDirectory = 'fail'
+
     TestLoggingConfiguration testLoggingConfig = new TestLoggingConfiguration()
 
     BalancersConfiguration balancersConfig = new BalancersConfiguration(task: this)
@@ -81,6 +85,7 @@ class RandomizedTestingTask extends DefaultTask {
     String argLine = null
 
     Map systemProperties = new HashMap<>()
+    Map environmentVariables = new HashMap<>()
     PatternFilterable patternSet = new PatternSet()
 
     RandomizedTestingTask() {
@@ -91,7 +96,7 @@ class RandomizedTestingTask extends DefaultTask {
 
     @Inject
     ProgressLoggerFactory getProgressLoggerFactory() {
-        throw new UnsupportedOperationException();
+        throw new UnsupportedOperationException()
     }
 
     void jvmArgs(Iterable arguments) {
@@ -106,6 +111,10 @@ class RandomizedTestingTask extends DefaultTask {
         systemProperties.put(property, value)
     }
 
+    void environment(String key, Object value) {
+        environmentVariables.put(key, value)
+    }
+
     void include(String... includes) {
         this.patternSet.include(includes);
     }
@@ -194,7 +203,9 @@ class RandomizedTestingTask extends DefaultTask {
             haltOnFailure: true, // we want to capture when a build failed, but will decide whether to rethrow later
             shuffleOnSlave: shuffleOnSlave,
             leaveTemporary: leaveTemporary,
-            ifNoTests: ifNoTests
+            ifNoTests: ifNoTests,
+            onNonEmptyWorkDirectory: onNonEmptyWorkDirectory,
+            newenvironment: true
         ]
 
         DefaultLogger listener = null
@@ -250,6 +261,9 @@ class RandomizedTestingTask extends DefaultTask {
                 for (Map.Entry prop : systemProperties) {
                     sysproperty key: prop.getKey(), value: prop.getValue().toString()
                 }
+                for (Map.Entry envvar : environmentVariables) {
+                    env key: envvar.getKey(), value: envvar.getValue().toString()
+                }
                 makeListeners()
             }
         } catch (BuildException e) {
diff --git a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/TestProgressLogger.groovy b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/TestProgressLogger.groovy
index 14f5d476be3cb..da25afa938916 100644
--- a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/TestProgressLogger.groovy
+++ b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/TestProgressLogger.groovy
@@ -25,8 +25,8 @@ import com.carrotsearch.ant.tasks.junit4.events.aggregated.AggregatedStartEvent
 import com.carrotsearch.ant.tasks.junit4.events.aggregated.AggregatedSuiteResultEvent
 import com.carrotsearch.ant.tasks.junit4.events.aggregated.AggregatedTestResultEvent
 import com.carrotsearch.ant.tasks.junit4.listeners.AggregatedEventListener
-import org.gradle.logging.ProgressLogger
-import org.gradle.logging.ProgressLoggerFactory
+import org.gradle.internal.logging.progress.ProgressLogger
+import org.gradle.internal.logging.progress.ProgressLoggerFactory
 
 import static com.carrotsearch.ant.tasks.junit4.FormattingUtils.formatDurationInSeconds
 import static com.carrotsearch.ant.tasks.junit4.events.aggregated.TestStatus.ERROR
@@ -77,7 +77,7 @@ class TestProgressLogger implements AggregatedEventListener {
     /** Have we finished a whole suite yet? */
     volatile boolean suiteFinished = false
     /* Note that we probably overuse volatile here but it isn't hurting us and
-      lets us move things around without worying about breaking things. */
+       lets us move things around without worrying about breaking things. */
 
     @Subscribe
     void onStart(AggregatedStartEvent e) throws IOException {
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy
index 4d7bee866b824..c51fc0229eebe 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy
@@ -18,9 +18,12 @@
  */
 package org.elasticsearch.gradle
 
+import com.carrotsearch.gradle.junit4.RandomizedTestingTask
 import nebula.plugin.extraconfigurations.ProvidedBasePlugin
+import org.apache.tools.ant.taskdefs.condition.Os
 import org.elasticsearch.gradle.precommit.PrecommitTasks
 import org.gradle.api.GradleException
+import org.gradle.api.InvalidUserDataException
 import org.gradle.api.JavaVersion
 import org.gradle.api.Plugin
 import org.gradle.api.Project
@@ -32,19 +35,20 @@ import org.gradle.api.artifacts.ModuleVersionIdentifier
 import org.gradle.api.artifacts.ProjectDependency
 import org.gradle.api.artifacts.ResolvedArtifact
 import org.gradle.api.artifacts.dsl.RepositoryHandler
-import org.gradle.api.artifacts.maven.MavenPom
+import org.gradle.api.file.CopySpec
+import org.gradle.api.plugins.JavaPlugin
 import org.gradle.api.publish.maven.MavenPublication
 import org.gradle.api.publish.maven.plugins.MavenPublishPlugin
 import org.gradle.api.publish.maven.tasks.GenerateMavenPom
 import org.gradle.api.tasks.bundling.Jar
 import org.gradle.api.tasks.compile.JavaCompile
+import org.gradle.api.tasks.javadoc.Javadoc
 import org.gradle.internal.jvm.Jvm
 import org.gradle.process.ExecResult
 import org.gradle.util.GradleVersion
 
 import java.time.ZoneOffset
 import java.time.ZonedDateTime
-
 /**
  * Encapsulates build configuration for elasticsearch projects.
  */
@@ -54,6 +58,11 @@ class BuildPlugin implements Plugin {
 
     @Override
     void apply(Project project) {
+        if (project.pluginManager.hasPlugin('elasticsearch.standalone-rest-test')) {
+              throw new InvalidUserDataException('elasticsearch.standalone-test, '
+                + 'elasticsearch.standalone-rest-test, and elasticsearch.build '
+                + 'are mutually exclusive')
+        }
         project.pluginManager.apply('java')
         project.pluginManager.apply('carrotsearch.randomized-testing')
         // these plugins add lots of info to our jars
@@ -63,7 +72,6 @@ class BuildPlugin implements Plugin {
         project.pluginManager.apply('nebula.info-java')
         project.pluginManager.apply('nebula.info-scm')
         project.pluginManager.apply('nebula.info-jar')
-        project.pluginManager.apply('com.bmuschko.nexus')
         project.pluginManager.apply(ProvidedBasePlugin)
 
         globalBuildInfo(project)
@@ -71,6 +79,8 @@ class BuildPlugin implements Plugin {
         configureConfigurations(project)
         project.ext.versions = VersionProperties.versions
         configureCompile(project)
+        configureJavadoc(project)
+        configureSourcesJar(project)
         configurePomGeneration(project)
 
         configureTest(project)
@@ -113,7 +123,7 @@ class BuildPlugin implements Plugin {
             }
 
             // enforce gradle version
-            GradleVersion minGradle = GradleVersion.version('2.13')
+            GradleVersion minGradle = GradleVersion.version('3.3')
             if (GradleVersion.current() < minGradle) {
                 throw new GradleException("${minGradle} or above is required to build elasticsearch")
             }
@@ -157,7 +167,7 @@ class BuildPlugin implements Plugin {
     private static String findJavaHome() {
         String javaHome = System.getenv('JAVA_HOME')
         if (javaHome == null) {
-            if (System.getProperty("idea.active") != null) {
+            if (System.getProperty("idea.active") != null || System.getProperty("eclipse.launcher") != null) {
                 // intellij doesn't set JAVA_HOME, so we use the jdk gradle was run with
                 javaHome = Jvm.current().javaHome
             } else {
@@ -194,19 +204,28 @@ class BuildPlugin implements Plugin {
 
     /** Runs the given javascript using jjs from the jdk, and returns the output */
     private static String runJavascript(Project project, String javaHome, String script) {
-        File tmpScript = File.createTempFile('es-gradle-tmp', '.js')
-        tmpScript.setText(script, 'UTF-8')
-        ByteArrayOutputStream output = new ByteArrayOutputStream()
+        ByteArrayOutputStream stdout = new ByteArrayOutputStream()
+        ByteArrayOutputStream stderr = new ByteArrayOutputStream()
+        if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+            // gradle/groovy does not properly escape the double quote for windows
+            script = script.replace('"', '\\"')
+        }
+        File jrunscriptPath = new File(javaHome, 'bin/jrunscript')
         ExecResult result = project.exec {
-            executable = new File(javaHome, 'bin/jjs')
-            args tmpScript.toString()
-            standardOutput = output
-            errorOutput = new ByteArrayOutputStream()
-            ignoreExitValue = true // we do not fail so we can first cleanup the tmp file
+            executable = jrunscriptPath
+            args '-e', script
+            standardOutput = stdout
+            errorOutput = stderr
+            ignoreExitValue = true
         }
-        java.nio.file.Files.delete(tmpScript.toPath())
-        result.assertNormalExitValue()
-        return output.toString('UTF-8').trim()
+        if (result.exitValue != 0) {
+            project.logger.error("STDOUT:")
+            stdout.toString('UTF-8').eachLine { line -> project.logger.error(line) }
+            project.logger.error("STDERR:")
+            stderr.toString('UTF-8').eachLine { line -> project.logger.error(line) }
+            result.rethrowFailure()
+        }
+        return stdout.toString('UTF-8').trim()
     }
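A minimal usage sketch of the same `jrunscript -e` pattern from a build script, assuming `project` is available and `JAVA_HOME` points at a JDK that ships `jrunscript`:

```groovy
// Usage sketch: run a one-liner with jrunscript and capture stdout, as above.
import org.gradle.process.ExecResult

ByteArrayOutputStream stdout = new ByteArrayOutputStream()
String javaHome = System.getenv('JAVA_HOME')                  // assumes JAVA_HOME points at a JDK
ExecResult result = project.exec {
    executable = new File(javaHome, 'bin/jrunscript')
    args '-e', 'print(java.lang.System.getProperty("java.specification.version"))'
    standardOutput = stdout
    ignoreExitValue = true
}
if (result.exitValue == 0) {
    println "JVM spec version: ${stdout.toString('UTF-8').trim()}"
}
```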
 
     /** Return the configuration name used for finding transitive deps of the given dependency. */
@@ -267,14 +286,9 @@ class BuildPlugin implements Plugin {
         project.configurations.compile.dependencies.all(disableTransitiveDeps)
         project.configurations.testCompile.dependencies.all(disableTransitiveDeps)
         project.configurations.provided.dependencies.all(disableTransitiveDeps)
-
-        // add exclusions to the pom directly, for each of the transitive deps of this project's deps
-        project.modifyPom { MavenPom pom ->
-            pom.withXml(fixupDependencies(project))
-        }
     }
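The `disableTransitiveDeps` closure applied above is defined outside this hunk; a plausible shape for it (an assumption, not the exact implementation) is to mark every external module dependency non-transitive so that each direct dependency must be declared explicitly:

```groovy
// Assumed shape of disableTransitiveDeps: external module deps become non-transitive,
// so every dependency (and its license/sha) has to be declared explicitly.
import org.gradle.api.artifacts.Dependency
import org.gradle.api.artifacts.ModuleDependency
import org.gradle.api.artifacts.ProjectDependency

Closure disableTransitiveDeps = { Dependency dep ->
    if (dep instanceof ModuleDependency && !(dep instanceof ProjectDependency)) {
        dep.transitive = false
    }
}
project.configurations.compile.dependencies.all(disableTransitiveDeps)
```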
 
-    /** Adds repositores used by ES dependencies */
+    /** Adds repositories used by ES dependencies */
     static void configureRepositories(Project project) {
         RepositoryHandler repos = project.repositories
         if (System.getProperty("repos.mavenlocal") != null) {
@@ -284,10 +298,6 @@ class BuildPlugin implements Plugin {
             repos.mavenLocal()
         }
         repos.mavenCentral()
-        repos.maven {
-            name 'sonatype-snapshots'
-            url 'http://oss.sonatype.org/content/repositories/snapshots/'
-        }
         String luceneVersion = VersionProperties.lucene
         if (luceneVersion.contains('-snapshot')) {
             // extract the revision number from the version with a regex matcher
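For context, a hedged sketch of how a snapshot-only repository could be registered once the revision has been extracted; the URL is a placeholder, not the real Lucene snapshot mirror:

```groovy
// Placeholder sketch: extract the snapshot revision and register a repository for it.
String luceneVersion = '7.0.0-snapshot-a128fcb'               // example value only
def matcher = (luceneVersion =~ /-snapshot-([a-z0-9]+)$/)
if (matcher.find()) {
    String revision = matcher.group(1)
    project.repositories.maven {
        name 'lucene-snapshots'
        url "https://example.org/lucene-snapshots/${revision}" // placeholder URL, not the real mirror
    }
}
```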
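The next hunk replaces wildcard POM exclusions with explicit per-artifact `<exclusion>` entries. A self-contained sketch of building such entries with Groovy's `Node` API (the coordinates are made up):

```groovy
// Self-contained sketch of appending explicit <exclusion> elements to a POM dependency node.
import groovy.util.Node
import groovy.xml.XmlUtil

Node depNode = new Node(null, 'dependency')                   // stand-in for a generated POM <dependency>
Node exclusions = depNode.appendNode('exclusions')
[['commons-logging', 'commons-logging'], ['org.example', 'transitive-lib']].each { group, artifact ->
    Node exclusion = exclusions.appendNode('exclusion')
    exclusion.appendNode('groupId', group)
    exclusion.appendNode('artifactId', artifact)
}
println XmlUtil.serialize(depNode)                            // shows the exclusions block that ends up in the POM
```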
@@ -303,12 +313,14 @@ class BuildPlugin implements Plugin {
      * Returns a closure which can be used with a MavenPom for fixing problems with gradle generated poms.
      *
      * <ul>
-     * <li>Remove transitive dependencies (using wildcard exclusions, fixed in gradle 2.14)</li>
-     * <li>Set compile time deps back to compile from runtime (known issue with maven-publish plugin)</li>
+     * <li>Remove transitive dependencies. We currently exclude all artifacts explicitly instead of using wildcards
+     *     as Ivy incorrectly translates POMs with * excludes to Ivy XML with * excludes which results in the main artifact
+     *     being excluded as well (see https://issues.apache.org/jira/browse/IVY-1531). Note that Gradle 2.14+ automatically
+     *     translates non-transitive dependencies to * excludes. We should revisit this when upgrading Gradle.</li>
+     * <li>Set compile time deps back to compile from runtime (known issue with maven-publish plugin)</li>
      * </ul>
*/ private static Closure fixupDependencies(Project project) { - // TODO: remove this when enforcing gradle 2.14+, it now properly handles exclusions return { XmlProvider xml -> // first find if we have dependencies at all, and grab the node NodeList depsNodes = xml.asNode().get('dependencies') @@ -331,6 +343,13 @@ class BuildPlugin implements Plugin { depNode.scope*.value = 'compile' } + // remove any exclusions added by gradle, they contain wildcards and systems like ivy have bugs with wildcards + // see https://github.com/elastic/elasticsearch/issues/24490 + NodeList exclusionsNode = depNode.get('exclusions') + if (exclusionsNode.size() > 0) { + depNode.remove(exclusionsNode.get(0)) + } + // collect the transitive deps now that we know what this dependency is String depConfig = transitiveDepConfigName(groupId, artifactId, version) Configuration configuration = project.configurations.findByName(depConfig) @@ -343,10 +362,19 @@ class BuildPlugin implements Plugin { continue } - // we now know we have something to exclude, so add a wildcard exclusion element - Node exclusion = depNode.appendNode('exclusions').appendNode('exclusion') - exclusion.appendNode('groupId', '*') - exclusion.appendNode('artifactId', '*') + // we now know we have something to exclude, so add exclusions for all artifacts except the main one + Node exclusions = depNode.appendNode('exclusions') + for (ResolvedArtifact artifact : artifacts) { + ModuleVersionIdentifier moduleVersionIdentifier = artifact.moduleVersion.id; + String depGroupId = moduleVersionIdentifier.group + String depArtifactId = moduleVersionIdentifier.name + // add exclusions for all artifacts except the main one + if (depGroupId != groupId || depArtifactId != artifactId) { + Node exclusion = exclusions.appendNode('exclusion') + exclusion.appendNode('groupId', depGroupId) + exclusion.appendNode('artifactId', depArtifactId) + } + } } } } @@ -366,8 +394,11 @@ class BuildPlugin implements Plugin { project.tasks.withType(GenerateMavenPom.class) { GenerateMavenPom t -> // place the pom next to the jar it is for t.destination = new File(project.buildDir, "distributions/${project.archivesBaseName}-${project.version}.pom") - // build poms with assemble - project.assemble.dependsOn(t) + // build poms with assemble (if the assemble task exists) + Task assemble = project.tasks.findByName('assemble') + if (assemble) { + assemble.dependsOn(t) + } } } } @@ -376,8 +407,9 @@ class BuildPlugin implements Plugin { static void configureCompile(Project project) { project.ext.compactProfile = 'compact3' project.afterEvaluate { - // fail on all javac warnings project.tasks.withType(JavaCompile) { + File gradleJavaHome = Jvm.current().javaHome + // we fork because compiling lots of different classes in a shared jvm can eventually trigger GC overhead limitations options.fork = true options.forkOptions.executable = new File(project.javaHome, 'bin/javac') options.forkOptions.memoryMaximumSize = "1g" @@ -394,6 +426,7 @@ class BuildPlugin implements Plugin { * -serial because we don't use java serialization. 
*/ // don't even think about passing args with -J-xxx, oracle will ask you to submit a bug report :) + // fail on all javac warnings options.compilerArgs << '-Werror' << '-Xlint:all,-path,-serial,-options,-deprecation' << '-Xdoclint:all' << '-Xdoclint:-missing' // either disable annotation processor completely (default) or allow to enable them if an annotation processor is explicitly defined @@ -402,21 +435,74 @@ class BuildPlugin implements Plugin { } options.encoding = 'UTF-8' - //options.incremental = true + options.incremental = true if (project.javaVersion == JavaVersion.VERSION_1_9) { - // hack until gradle supports java 9's new "-release" arg + // hack until gradle supports java 9's new "--release" arg assert minimumJava == JavaVersion.VERSION_1_8 - options.compilerArgs << '-release' << '8' - project.sourceCompatibility = null - project.targetCompatibility = null + options.compilerArgs << '--release' << '8' + if (GradleVersion.current().getBaseVersion() < GradleVersion.version("4.1")) { + // this hack is not needed anymore since Gradle 4.1, see https://github.com/gradle/gradle/pull/2474 + doFirst { + sourceCompatibility = null + targetCompatibility = null + } + } } } } } - /** Adds additional manifest info to jars, and adds source and javadoc jars */ + static void configureJavadoc(Project project) { + String artifactsHost = VersionProperties.elasticsearch.endsWith("-SNAPSHOT") ? "https://snapshots.elastic.co" : "https://artifacts.elastic.co" + project.afterEvaluate { + project.tasks.withType(Javadoc) { + executable = new File(project.javaHome, 'bin/javadoc') + } + /* + * Order matters, the linksOffline for org.elasticsearch:elasticsearch must be the last one + * or all the links for the other packages (e.g org.elasticsearch.client) will point to core rather than their own artifacts + */ + Closure sortClosure = { a, b -> b.group <=> a.group } + Closure depJavadocClosure = { dep -> + if (dep.group != null && dep.group.startsWith('org.elasticsearch')) { + String substitution = project.ext.projectSubstitutions.get("${dep.group}:${dep.name}:${dep.version}") + if (substitution != null) { + project.javadoc.dependsOn substitution + ':javadoc' + String artifactPath = dep.group.replaceAll('\\.', '/') + '/' + dep.name.replaceAll('\\.', '/') + '/' + dep.version + project.javadoc.options.linksOffline artifactsHost + "/javadoc/" + artifactPath, "${project.project(substitution).buildDir}/docs/javadoc/" + } + } + } + project.configurations.compile.dependencies.findAll().toSorted(sortClosure).each(depJavadocClosure) + project.configurations.provided.dependencies.findAll().toSorted(sortClosure).each(depJavadocClosure) + } + configureJavadocJar(project) + } + + /** Adds a javadocJar task to generate a jar containing javadocs. */ + static void configureJavadocJar(Project project) { + Jar javadocJarTask = project.task('javadocJar', type: Jar) + javadocJarTask.classifier = 'javadoc' + javadocJarTask.group = 'build' + javadocJarTask.description = 'Assembles a jar containing javadocs.' + javadocJarTask.from(project.tasks.getByName(JavaPlugin.JAVADOC_TASK_NAME)) + project.assemble.dependsOn(javadocJarTask) + } + + static void configureSourcesJar(Project project) { + Jar sourcesJarTask = project.task('sourcesJar', type: Jar) + sourcesJarTask.classifier = 'sources' + sourcesJarTask.group = 'build' + sourcesJarTask.description = 'Assembles a jar containing source files.' 
+ sourcesJarTask.from(project.sourceSets.main.allSource) + project.assemble.dependsOn(sourcesJarTask) + } + + /** Adds additional manifest info to jars */ static void configureJars(Project project) { + project.ext.licenseFile = null + project.ext.noticeFile = null project.tasks.withType(Jar) { Jar jarTask -> // we put all our distributable files under distributions jarTask.destinationDir = new File(project.buildDir, 'distributions') @@ -437,7 +523,21 @@ class BuildPlugin implements Plugin { 'Build-Java-Version': project.javaVersion) if (jarTask.manifest.attributes.containsKey('Change') == false) { logger.warn('Building without git revision id.') - jarTask.manifest.attributes('Change': 'N/A') + jarTask.manifest.attributes('Change': 'Unknown') + } + } + // add license/notice files + project.afterEvaluate { + if (project.licenseFile == null || project.noticeFile == null) { + throw new GradleException("Must specify license and notice file for project ${project.path}") + } + jarTask.into('META-INF') { + from(project.licenseFile.parent) { + include project.licenseFile.name + } + from(project.noticeFile.parent) { + include project.noticeFile.name + } } } } @@ -449,16 +549,12 @@ class BuildPlugin implements Plugin { jvm "${project.javaHome}/bin/java" parallelism System.getProperty('tests.jvms', 'auto') ifNoTests 'fail' + onNonEmptyWorkDirectory 'wipe' leaveTemporary true // TODO: why are we not passing maxmemory to junit4? jvmArg '-Xmx' + System.getProperty('tests.heap.size', '512m') jvmArg '-Xms' + System.getProperty('tests.heap.size', '512m') - if (JavaVersion.current().isJava7()) { - // some tests need a large permgen, but that only exists on java 7 - jvmArg '-XX:MaxPermSize=128m' - } - jvmArg '-XX:MaxDirectMemorySize=512m' jvmArg '-XX:+HeapDumpOnOutOfMemoryError' File heapdumpDir = new File(project.buildDir, 'heapdump') heapdumpDir.mkdirs() @@ -472,6 +568,8 @@ class BuildPlugin implements Plugin { systemProperty 'tests.artifact', project.name systemProperty 'tests.task', path systemProperty 'tests.security.manager', 'true' + // Breaking change in JDK-9, revert to JDK-8 behavior for now, see https://github.com/elastic/elasticsearch/issues/21534 + systemProperty 'jdk.io.permissionsUseCanonicalPath', 'true' systemProperty 'jna.nosys', 'true' // default test sysprop values systemProperty 'tests.ifNoTests', 'fail' @@ -484,11 +582,9 @@ class BuildPlugin implements Plugin { } } - // System assertions (-esa) are disabled for now because of what looks like a - // JDK bug triggered by Groovy on JDK7. We should look at re-enabling system - // assertions when we upgrade to a new version of Groovy (currently 2.4.4) or - // require JDK8. See https://issues.apache.org/jira/browse/GROOVY-7528. - enableSystemAssertions false + boolean assertionsEnabled = Boolean.parseBoolean(System.getProperty('tests.asserts', 'true')) + enableSystemAssertions assertionsEnabled + enableAssertions assertionsEnabled testLogging { showNumFailuresAtEnd 25 @@ -529,11 +625,22 @@ class BuildPlugin implements Plugin { /** Configures the test task */ static Task configureTest(Project project) { - Task test = project.tasks.getByName('test') + RandomizedTestingTask test = project.tasks.getByName('test') test.configure(commonTestConfig(project)) test.configure { include '**/*Tests.class' } + + // Add a method to create additional unit tests for a project, which will share the same + // randomized testing setup, but by default run no tests. 
+ project.extensions.add('additionalTest', { String name, Closure config -> + RandomizedTestingTask additionalTest = project.tasks.create(name, RandomizedTestingTask.class) + additionalTest.classpath = test.classpath + additionalTest.testClassesDir = test.testClassesDir + additionalTest.configure(commonTestConfig(project)) + additionalTest.configure(config) + test.dependsOn(additionalTest) + }); return test } diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/NoticeTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/NoticeTask.groovy new file mode 100644 index 0000000000000..928298db7bfc2 --- /dev/null +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/NoticeTask.groovy @@ -0,0 +1,99 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.gradle + +import org.gradle.api.DefaultTask +import org.gradle.api.Project +import org.gradle.api.artifacts.Configuration +import org.gradle.api.tasks.InputFile +import org.gradle.api.tasks.OutputFile +import org.gradle.api.tasks.TaskAction + +/** + * A task to create a notice file which includes dependencies' notices. + */ +public class NoticeTask extends DefaultTask { + + @InputFile + File inputFile = project.rootProject.file('NOTICE.txt') + + @OutputFile + File outputFile = new File(project.buildDir, "notices/${name}/NOTICE.txt") + + /** Directories to include notices from */ + private List licensesDirs = new ArrayList<>() + + public NoticeTask() { + description = 'Create a notice file from dependencies' + // Default licenses directory is ${projectDir}/licenses (if it exists) + File licensesDir = new File(project.projectDir, 'licenses') + if (licensesDir.exists()) { + licensesDirs.add(licensesDir) + } + } + + /** Add notices from the specified directory. 
*/ + public void licensesDir(File licensesDir) { + licensesDirs.add(licensesDir) + } + + @TaskAction + public void generateNotice() { + StringBuilder output = new StringBuilder() + output.append(inputFile.getText('UTF-8')) + output.append('\n\n') + // This is a map rather than a set so that the sort order is the 3rd + // party component names, unaffected by the full path to the various files + Map seen = new TreeMap<>() + for (File licensesDir : licensesDirs) { + licensesDir.eachFileMatch({ it ==~ /.*-NOTICE\.txt/ }) { File file -> + String name = file.name.substring(0, file.name.length() - '-NOTICE.txt'.length()) + if (seen.containsKey(name)) { + File prevFile = seen.get(name) + if (prevFile.text != file.text) { + throw new RuntimeException("Two different notices exist for dependency '" + + name + "': " + prevFile + " and " + file) + } + } else { + seen.put(name, file) + } + } + } + for (Map.Entry entry : seen.entrySet()) { + String name = entry.getKey() + File file = entry.getValue() + appendFile(file, name, 'NOTICE', output) + appendFile(new File(file.parentFile, "${name}-LICENSE.txt"), name, 'LICENSE', output) + } + outputFile.setText(output.toString(), 'UTF-8') + } + + static void appendFile(File file, String name, String type, StringBuilder output) { + String text = file.getText('UTF-8') + if (text.trim().isEmpty()) { + return + } + output.append('================================================================================\n') + output.append("${name} ${type}\n") + output.append('================================================================================\n') + output.append(text) + output.append('\n\n') + } +} diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/Version.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/Version.groovy new file mode 100644 index 0000000000000..ace8dd34fe9e9 --- /dev/null +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/Version.groovy @@ -0,0 +1,84 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.gradle + +import groovy.transform.Sortable + +/** + * Encapsulates comparison and printing logic for an x.y.z version. + */ +@Sortable(includes=['id']) +public class Version { + + final int major + final int minor + final int bugfix + final int id + final boolean snapshot + /** + * Is the vesion listed as {@code _UNRELEASED} in Version.java. + */ + final boolean unreleased + + public Version(int major, int minor, int bugfix, boolean snapshot, + boolean unreleased) { + this.major = major + this.minor = minor + this.bugfix = bugfix + this.snapshot = snapshot + this.id = major * 100000 + minor * 1000 + bugfix * 10 + + (snapshot ? 
1 : 0) + this.unreleased = unreleased + } + + public static Version fromString(String s) { + String[] parts = s.split('\\.') + String bugfix = parts[2] + boolean snapshot = false + if (bugfix.contains('-')) { + snapshot = bugfix.endsWith('-SNAPSHOT') + bugfix = bugfix.split('-')[0] + } + return new Version(parts[0] as int, parts[1] as int, bugfix as int, + snapshot, false) + } + + @Override + public String toString() { + String snapshotStr = snapshot ? '-SNAPSHOT' : '' + return "${major}.${minor}.${bugfix}${snapshotStr}" + } + + public boolean before(String compareTo) { + return id < fromString(compareTo).id + } + + public boolean onOrBefore(String compareTo) { + return id <= fromString(compareTo).id + } + + public boolean onOrAfter(String compareTo) { + return id >= fromString(compareTo).id + } + + public boolean after(String compareTo) { + return id > fromString(compareTo).id + } +} diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/DocsTestPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/DocsTestPlugin.groovy index 0fefecc144646..d2802638ce512 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/DocsTestPlugin.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/DocsTestPlugin.groovy @@ -30,7 +30,11 @@ public class DocsTestPlugin extends RestTestPlugin { @Override public void apply(Project project) { + project.pluginManager.apply('elasticsearch.standalone-rest-test') super.apply(project) + // Docs are published separately so no need to assemble + project.tasks.remove(project.assemble) + project.build.dependsOn.remove('assemble') Map defaultSubstitutions = [ /* These match up with the asciidoc syntax for substitutions but * the values may differ. In particular {version} needs to resolve @@ -38,7 +42,7 @@ public class DocsTestPlugin extends RestTestPlugin { * the last released version for docs. */ '\\{version\\}': VersionProperties.elasticsearch.replace('-SNAPSHOT', ''), - '\\{lucene_version\\}' : VersionProperties.lucene, + '\\{lucene_version\\}' : VersionProperties.lucene.replaceAll('-snapshot-\\w+$', ''), ] Task listSnippets = project.tasks.create('listSnippets', SnippetsTask) listSnippets.group 'Docs' @@ -53,17 +57,7 @@ public class DocsTestPlugin extends RestTestPlugin { 'List snippets that probably should be marked // CONSOLE' listConsoleCandidates.defaultSubstitutions = defaultSubstitutions listConsoleCandidates.perSnippet { - if ( - it.console != null // Already marked, nothing to do - || it.testResponse // It is a response - ) { - return - } - if ( // js almost always should be `// CONSOLE` - it.language == 'js' || - // snippets containing `curl` *probably* should - // be `// CONSOLE` - it.curl) { + if (RestTestsFromSnippetsTask.isConsoleCandidate(it)) { println(it.toString()) } } diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy index dc4e6f5f70af4..2ec12fe341f6b 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy @@ -41,6 +41,16 @@ public class RestTestsFromSnippetsTask extends SnippetsTask { @Input Map setups = new HashMap() + /** + * A list of files that contain snippets that *probably* should be + * converted to `// CONSOLE` but have yet to be converted. 
If a file is in + * this list and doesn't contain unconverted snippets this task will fail. + * If there are unconverted snippets not in this list then this task will + * fail. All files are paths relative to the docs dir. + */ + @Input + List expectedUnconvertedCandidates = [] + /** * Root directory of the tests being generated. To make rest tests happy * we generate them in a testRoot() which is contained in this directory. @@ -56,6 +66,7 @@ public class RestTestsFromSnippetsTask extends SnippetsTask { TestBuilder builder = new TestBuilder() doFirst { outputRoot().delete() } perSnippet builder.&handleSnippet + doLast builder.&checkUnconverted doLast builder.&finishLastTest } @@ -67,6 +78,27 @@ public class RestTestsFromSnippetsTask extends SnippetsTask { return new File(testRoot, '/rest-api-spec/test') } + /** + * Is this snippet a candidate for conversion to `// CONSOLE`? + */ + static isConsoleCandidate(Snippet snippet) { + /* Snippets that are responses or already marked as `// CONSOLE` or + * `// NOTCONSOLE` are not candidates. */ + if (snippet.console != null || snippet.testResponse) { + return false + } + /* js snippets almost always should be marked with `// CONSOLE`. js + * snippets that shouldn't be marked `// CONSOLE`, like examples for + * js client, should always be marked with `// NOTCONSOLE`. + * + * `sh` snippets that contain `curl` almost always should be marked + * with `// CONSOLE`. In the exceptionally rare cases where they are + * not communicating with Elasticsearch, like the xamples in the ec2 + * and gce discovery plugins, the snippets should be marked + * `// NOTCONSOLE`. */ + return snippet.language == 'js' || snippet.curl + } + private class TestBuilder { private static final String SYNTAX = { String method = /(?GET|PUT|POST|HEAD|OPTIONS|DELETE)/ @@ -88,17 +120,34 @@ public class RestTestsFromSnippetsTask extends SnippetsTask { */ PrintWriter current + /** + * Files containing all snippets that *probably* should be converted + * to `// CONSOLE` but have yet to be converted. All files are paths + * relative to the docs dir. + */ + Set unconvertedCandidates = new HashSet<>() + + /** + * The last non-TESTRESPONSE snippet. + */ + Snippet previousTest + /** * Called each time a snippet is encountered. Tracks the snippets and * calls buildTest to actually build the test. */ void handleSnippet(Snippet snippet) { + if (RestTestsFromSnippetsTask.isConsoleCandidate(snippet)) { + unconvertedCandidates.add(snippet.path.toString() + .replace('\\', '/')) + } if (BAD_LANGUAGES.contains(snippet.language)) { throw new InvalidUserDataException( "$snippet: Use `js` instead of `${snippet.language}`.") } if (snippet.testSetup) { setup(snippet) + previousTest = snippet return } if (snippet.testResponse) { @@ -107,6 +156,7 @@ public class RestTestsFromSnippetsTask extends SnippetsTask { } if (snippet.test || snippet.console) { test(snippet) + previousTest = snippet return } // Must be an unmarked snippet.... @@ -115,13 +165,37 @@ public class RestTestsFromSnippetsTask extends SnippetsTask { private void test(Snippet test) { setupCurrent(test) - if (false == test.continued) { + if (test.continued) { + /* Catch some difficult to debug errors with // TEST[continued] + * and throw a helpful error message. 
*/ + if (previousTest == null || previousTest.path != test.path) { + throw new InvalidUserDataException("// TEST[continued] " + + "cannot be on first snippet in a file: $test") + } + if (previousTest != null && previousTest.testSetup) { + throw new InvalidUserDataException("// TEST[continued] " + + "cannot immediately follow // TESTSETUP: $test") + } + } else { current.println('---') current.println("\"line_$test.start\":") + /* The Elasticsearch test runner doesn't support the warnings + * construct unless you output this skip. Since we don't know + * if this snippet will use the warnings construct we emit this + * warning every time. */ + current.println(" - skip:") + current.println(" features: ") + current.println(" - stash_in_key") + current.println(" - stash_in_path") + current.println(" - stash_path_replace") + current.println(" - warnings") } if (test.skipTest) { - current.println(" - skip:") - current.println(" features: always_skip") + if (test.continued) { + throw new InvalidUserDataException("Continued snippets " + + "can't be skipped") + } + current.println(" - always_skip") current.println(" reason: $test.skipTest") } if (test.setup != null) { @@ -148,6 +222,11 @@ public class RestTestsFromSnippetsTask extends SnippetsTask { def (String path, String query) = pathAndQuery.tokenize('?') if (path == null) { path = '' // Catch requests to the root... + } else { + // Escape some characters that are also escaped by sense + path = path.replace('<', '%3C').replace('>', '%3E') + path = path.replace('{', '%7B').replace('}', '%7D') + path = path.replace('|', '%7C') } current.println(" - do:") if (catchPart != null) { @@ -250,5 +329,35 @@ public class RestTestsFromSnippetsTask extends SnippetsTask { current = null } } + + void checkUnconverted() { + List listedButNotFound = [] + for (String listed : expectedUnconvertedCandidates) { + if (false == unconvertedCandidates.remove(listed)) { + listedButNotFound.add(listed) + } + } + String message = "" + if (false == listedButNotFound.isEmpty()) { + Collections.sort(listedButNotFound) + listedButNotFound = listedButNotFound.collect {' ' + it} + message += "Expected unconverted snippets but none found in:\n" + message += listedButNotFound.join("\n") + } + if (false == unconvertedCandidates.isEmpty()) { + List foundButNotListed = + new ArrayList<>(unconvertedCandidates) + Collections.sort(foundButNotListed) + foundButNotListed = foundButNotListed.collect {' ' + it} + if (false == "".equals(message)) { + message += "\n" + } + message += "Unexpected unconverted snippets:\n" + message += foundButNotListed.join("\n") + } + if (false == "".equals(message)) { + throw new InvalidUserDataException(message); + } + } } } diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/SnippetsTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/SnippetsTask.groovy index 41f74b45be143..94af22f4aa279 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/SnippetsTask.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/SnippetsTask.groovy @@ -39,6 +39,7 @@ public class SnippetsTask extends DefaultTask { private static final String SKIP = /skip:([^\]]+)/ private static final String SETUP = /setup:([^ \]]+)/ private static final String WARNING = /warning:(.+)/ + private static final String CAT = /(_cat)/ private static final String TEST_SYNTAX = /(?:$CATCH|$SUBSTITUTION|$SKIP|(continued)|$SETUP|$WARNING) ?/ @@ -89,6 +90,7 @@ public class SnippetsTask extends DefaultTask { * tests cleaner. 
*/ subst = subst.replace('$body', '\\$body') + subst = subst.replace('$_path', '\\$_path') // \n is a new line.... subst = subst.replace('\\n', '\n') snippet.contents = snippet.contents.replaceAll( @@ -221,8 +223,17 @@ public class SnippetsTask extends DefaultTask { substitutions = [] } String loc = "$file:$lineNumber" - parse(loc, matcher.group(2), /$SUBSTITUTION ?/) { - substitutions.add([it.group(1), it.group(2)]) + parse(loc, matcher.group(2), /(?:$SUBSTITUTION|$CAT) ?/) { + if (it.group(1) != null) { + // TESTRESPONSE[s/adsf/jkl/] + substitutions.add([it.group(1), it.group(2)]) + } else if (it.group(3) != null) { + // TESTRESPONSE[_cat] + substitutions.add(['^', '/']) + substitutions.add(['\n$', '\\\\s*/']) + substitutions.add(['( +)', '$1\\\\s+']) + substitutions.add(['\n', '\\\\s*\n ']) + } } } return diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginBuildPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginBuildPlugin.groovy index 0a454ee1006ff..2e11fdc2681bc 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginBuildPlugin.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginBuildPlugin.groovy @@ -19,6 +19,7 @@ package org.elasticsearch.gradle.plugin import org.elasticsearch.gradle.BuildPlugin +import org.elasticsearch.gradle.NoticeTask import org.elasticsearch.gradle.test.RestIntegTestTask import org.elasticsearch.gradle.test.RunTask import org.gradle.api.Project @@ -49,33 +50,29 @@ public class PluginBuildPlugin extends BuildPlugin { project.afterEvaluate { boolean isModule = project.path.startsWith(':modules:') String name = project.pluginProperties.extension.name - project.jar.baseName = name - project.bundlePlugin.baseName = name + project.archivesBaseName = name if (project.pluginProperties.extension.hasClientJar) { // for plugins which work with the transport client, we copy the jar // file to a new name, copy the nebula generated pom to the same name, // and generate a different pom for the zip - project.signArchives.enabled = false addClientJarPomGeneration(project) addClientJarTask(project) - if (isModule == false) { - addZipPomGeneration(project) - } } else { // no client plugin, so use the pom file from nebula, without jar, for the zip project.ext.set("nebulaPublish.maven.jar", false) } - project.integTest.dependsOn(project.bundlePlugin) + project.integTestCluster.dependsOn(project.bundlePlugin) project.tasks.run.dependsOn(project.bundlePlugin) if (isModule) { - project.integTest.clusterConfig.module(project) + project.integTestCluster.module(project) project.tasks.run.clusterConfig.module(project) } else { - project.integTest.clusterConfig.plugin(project.path) + project.integTestCluster.plugin(project.path) project.tasks.run.clusterConfig.plugin(project.path) addZipPomGeneration(project) + addNoticeGeneration(project) } project.namingConventions { @@ -99,7 +96,7 @@ public class PluginBuildPlugin extends BuildPlugin { provided "com.vividsolutions:jts:${project.versions.jts}" provided "org.apache.logging.log4j:log4j-api:${project.versions.log4j}" provided "org.apache.logging.log4j:log4j-core:${project.versions.log4j}" - provided "net.java.dev.jna:jna:${project.versions.jna}" + provided "org.elasticsearch:jna:${project.versions.jna}" } } @@ -123,12 +120,15 @@ public class PluginBuildPlugin extends BuildPlugin { // add the plugin properties and metadata to test resources, so unit tests can // know about the plugin (used by test security code to statically initialize the plugin 
in unit tests) SourceSet testSourceSet = project.sourceSets.test - testSourceSet.output.dir(buildProperties.generatedResourcesDir, builtBy: 'pluginProperties') + testSourceSet.output.dir(buildProperties.descriptorOutput.parentFile, builtBy: 'pluginProperties') testSourceSet.resources.srcDir(pluginMetadata) // create the actual bundle task, which zips up all the files for the plugin Zip bundle = project.tasks.create(name: 'bundlePlugin', type: Zip, dependsOn: [project.jar, buildProperties]) { - from buildProperties // plugin properties file + from(buildProperties.descriptorOutput.parentFile) { + // plugin properties file + include(buildProperties.descriptorOutput.name) + } from pluginMetadata // metadata (eg custom security policy) from project.jar // this plugin's jar from project.configurations.runtime - project.configurations.provided // the dep jars @@ -152,7 +152,7 @@ public class PluginBuildPlugin extends BuildPlugin { /** Adds a task to move jar and associated files to a "-client" name. */ protected static void addClientJarTask(Project project) { Task clientJar = project.tasks.create('clientJar') - clientJar.dependsOn('generatePomFileForJarPublication', project.jar, project.javadocJar, project.sourcesJar) + clientJar.dependsOn(project.jar, 'generatePomFileForClientJarPublication', project.javadocJar, project.sourcesJar) clientJar.doFirst { Path jarFile = project.jar.outputs.files.singleFile.toPath() String clientFileName = jarFile.fileName.toString().replace(project.version, "client-${project.version}") @@ -179,7 +179,10 @@ public class PluginBuildPlugin extends BuildPlugin { static final Pattern GIT_PATTERN = Pattern.compile(/git@([^:]+):([^\.]+)\.git/) /** Find the reponame. */ - protected static String urlFromOrigin(String origin) { + static String urlFromOrigin(String origin) { + if (origin == null) { + return null // best effort, the url doesnt really matter, it is just required by maven central + } if (origin.startsWith('https')) { return origin } @@ -197,9 +200,9 @@ public class PluginBuildPlugin extends BuildPlugin { project.publishing { publications { - jar(MavenPublication) { + clientJar(MavenPublication) { from project.components.java - artifactId = artifactId + '-client' + artifactId = project.pluginProperties.extension.name + '-client' pom.withXml { XmlProvider xml -> Node root = xml.asNode() root.appendNode('name', project.pluginProperties.extension.name) @@ -213,7 +216,7 @@ public class PluginBuildPlugin extends BuildPlugin { } } - /** Adds a task to generate a*/ + /** Adds a task to generate a pom file for the zip distribution. */ protected void addZipPomGeneration(Project project) { project.plugins.apply(MavenPublishPlugin.class) @@ -221,7 +224,19 @@ public class PluginBuildPlugin extends BuildPlugin { publications { zip(MavenPublication) { artifact project.bundlePlugin - pom.packaging = 'pom' + } + /* HUGE HACK: the underlying maven publication library refuses to deploy any attached artifacts + * when the packaging type is set to 'pom'. But Sonatype's OSS repositories require source files + * for artifacts that are of type 'zip'. We already publish the source and javadoc for Elasticsearch + * under the various other subprojects. So here we create another publication using the same + * name that has the "real" pom, and rely on the fact that gradle will execute the publish tasks + * in alphabetical order. This lets us publish the zip file and even though the pom says the + * type is 'pom' instead of 'zip'. 
We cannot setup a dependency between the tasks because the + * publishing tasks are created *extremely* late in the configuration phase, so that we cannot get + * ahold of the actual task. Furthermore, this entire hack only exists so we can make publishing to + * maven local work, since we publish to maven central externally. */ + zipReal(MavenPublication) { + artifactId = project.pluginProperties.extension.name pom.withXml { XmlProvider xml -> Node root = xml.asNode() root.appendNode('name', project.pluginProperties.extension.name) @@ -234,4 +249,19 @@ public class PluginBuildPlugin extends BuildPlugin { } } } + + protected void addNoticeGeneration(Project project) { + File licenseFile = project.pluginProperties.extension.licenseFile + if (licenseFile != null) { + project.bundlePlugin.from(licenseFile.parentFile) { + include(licenseFile.name) + } + } + File noticeFile = project.pluginProperties.extension.noticeFile + if (noticeFile != null) { + NoticeTask generateNotice = project.tasks.create('generateNotice', NoticeTask.class) + generateNotice.inputFile = noticeFile + project.bundlePlugin.from(generateNotice) + } + } } diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesExtension.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesExtension.groovy index 5502266693653..1251be265da9a 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesExtension.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesExtension.groovy @@ -39,10 +39,24 @@ class PluginPropertiesExtension { @Input String classname + @Input + boolean hasNativeController = false + /** Indicates whether the plugin jar should be made available for the transport client. */ @Input boolean hasClientJar = false + /** A license file that should be included in the built plugin zip. */ + @Input + File licenseFile = null + + /** + * A notice file that should be included in the built plugin zip. This will be + * extended with notices from the {@code licenses/} directory. + */ + @Input + File noticeFile = null + PluginPropertiesExtension(Project project) { name = project.name version = project.version diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesTask.groovy index 7156c2650cbe0..91efe247a016b 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesTask.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesTask.groovy @@ -22,6 +22,7 @@ import org.elasticsearch.gradle.VersionProperties import org.gradle.api.InvalidUserDataException import org.gradle.api.Task import org.gradle.api.tasks.Copy +import org.gradle.api.tasks.OutputFile /** * Creates a plugin descriptor. 
@@ -29,20 +30,22 @@ import org.gradle.api.tasks.Copy class PluginPropertiesTask extends Copy { PluginPropertiesExtension extension - File generatedResourcesDir = new File(project.buildDir, 'generated-resources') + + @OutputFile + File descriptorOutput = new File(project.buildDir, 'generated-resources/plugin-descriptor.properties') PluginPropertiesTask() { - File templateFile = new File(project.buildDir, 'templates/plugin-descriptor.properties') + File templateFile = new File(project.buildDir, "templates/${descriptorOutput.name}") Task copyPluginPropertiesTemplate = project.tasks.create('copyPluginPropertiesTemplate') { doLast { - InputStream resourceTemplate = PluginPropertiesTask.getResourceAsStream('/plugin-descriptor.properties') + InputStream resourceTemplate = PluginPropertiesTask.getResourceAsStream("/${descriptorOutput.name}") templateFile.parentFile.mkdirs() templateFile.setText(resourceTemplate.getText('UTF-8'), 'UTF-8') } } + dependsOn(copyPluginPropertiesTemplate) extension = project.extensions.create('esplugin', PluginPropertiesExtension, project) - project.clean.delete(generatedResourcesDir) project.afterEvaluate { // check require properties are set if (extension.name == null) { @@ -55,8 +58,8 @@ class PluginPropertiesTask extends Copy { throw new InvalidUserDataException('classname is a required setting for esplugin') } // configure property substitution - from(templateFile) - into(generatedResourcesDir) + from(templateFile.parentFile).include(descriptorOutput.name) + into(descriptorOutput.parentFile) Map properties = generateSubstitutions() expand(properties) inputs.properties(properties) @@ -76,7 +79,8 @@ class PluginPropertiesTask extends Copy { 'version': stringSnap(extension.version), 'elasticsearchVersion': stringSnap(VersionProperties.elasticsearch), 'javaVersion': project.targetCompatibility as String, - 'classname': extension.classname + 'classname': extension.classname, + 'hasNativeController': extension.hasNativeController ] } } diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/DependencyLicensesTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/DependencyLicensesTask.groovy index 6fa37be309ec1..4d292d87ec39c 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/DependencyLicensesTask.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/DependencyLicensesTask.groovy @@ -86,6 +86,9 @@ public class DependencyLicensesTask extends DefaultTask { /** A map of patterns to prefix, used to find the LICENSE and NOTICE file. */ private LinkedHashMap mappings = new LinkedHashMap<>() + /** Names of dependencies whose shas should not exist. */ + private Set ignoreShas = new HashSet<>() + /** * Add a mapping from a regex pattern for the jar name, to a prefix to find * the LICENSE and NOTICE file for that jar. @@ -106,6 +109,15 @@ public class DependencyLicensesTask extends DefaultTask { mappings.put(from, to) } + /** + * Add a rule which will skip SHA checking for the given dependency name. This should be used for + * locally build dependencies, which cause the sha to change constantly. 
+ */ + @Input + public void ignoreSha(String dep) { + ignoreShas.add(dep) + } + @TaskAction public void checkDependencies() { if (dependencies.isEmpty()) { @@ -139,19 +151,27 @@ public class DependencyLicensesTask extends DefaultTask { for (File dependency : dependencies) { String jarName = dependency.getName() - logger.info("Checking license/notice/sha for " + jarName) - checkSha(dependency, jarName, shaFiles) + String depName = jarName - ~/\-\d+.*/ + if (ignoreShas.contains(depName)) { + // local deps should not have sha files! + if (getShaFile(jarName).exists()) { + throw new GradleException("SHA file ${getShaFile(jarName)} exists for ignored dependency ${depName}") + } + } else { + logger.info("Checking sha for " + jarName) + checkSha(dependency, jarName, shaFiles) + } - String name = jarName - ~/\-\d+.*/ - Matcher match = mappingsPattern.matcher(name) + logger.info("Checking license/notice for " + depName) + Matcher match = mappingsPattern.matcher(depName) if (match.matches()) { int i = 0 while (i < match.groupCount() && match.group(i + 1) == null) ++i; - logger.info("Mapped dependency name ${name} to ${mapped.get(i)} for license check") - name = mapped.get(i) + logger.info("Mapped dependency name ${depName} to ${mapped.get(i)} for license check") + depName = mapped.get(i) } - checkFile(name, jarName, licenses, 'LICENSE') - checkFile(name, jarName, notices, 'NOTICE') + checkFile(depName, jarName, licenses, 'LICENSE') + checkFile(depName, jarName, notices, 'NOTICE') } licenses.each { license, count -> @@ -169,8 +189,12 @@ public class DependencyLicensesTask extends DefaultTask { } } + private File getShaFile(String jarName) { + return new File(licensesDir, jarName + SHA_EXTENSION) + } + private void checkSha(File jar, String jarName, Set shaFiles) { - File shaFile = new File(licensesDir, jarName + SHA_EXTENSION) + File shaFile = getShaFile(jarName) if (shaFile.exists() == false) { throw new GradleException("Missing SHA for ${jarName}. Run 'gradle updateSHAs' to create") } @@ -215,6 +239,10 @@ public class DependencyLicensesTask extends DefaultTask { } for (File dependency : parentTask.dependencies) { String jarName = dependency.getName() + String depName = jarName - ~/\-\d+.*/ + if (parentTask.ignoreShas.contains(depName)) { + continue + } File shaFile = new File(parentTask.licensesDir, jarName + SHA_EXTENSION) if (shaFile.exists() == false) { logger.lifecycle("Adding sha for ${jarName}") diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/ForbiddenPatternsTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/ForbiddenPatternsTask.groovy index 7cb344bf47fc9..ed62e88c567fa 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/ForbiddenPatternsTask.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/ForbiddenPatternsTask.groovy @@ -59,11 +59,16 @@ public class ForbiddenPatternsTask extends DefaultTask { filesFilter.exclude('**/*.png') // add mandatory rules - patterns.put('nocommit', /nocommit/) + patterns.put('nocommit', /nocommit|NOCOMMIT/) + patterns.put('nocommit should be all lowercase or all uppercase', + /((?i)nocommit)(? arguments = new ArrayList<>() + + @Input + public void args(Object... args) { + arguments.addAll(args) + } + + /** + * Environment variables for the fixture process. The value can be any object, which + * will have toString() called at execution time. 
+ */ + private final Map environment = new HashMap<>() + + @Input + public void env(String key, Object value) { + environment.put(key, value) + } + + /** A flag to indicate whether the command should be executed from a shell. */ + @Input + boolean useShell = false + + /** + * A flag to indicate whether the fixture should be run in the foreground, or spawned. + * It is protected so subclasses can override (eg RunTask). + */ + protected boolean spawn = true + + /** + * A closure to call before the fixture is considered ready. The closure is passed the fixture object, + * as well as a groovy AntBuilder, to enable running ant condition checks. The default wait + * condition is for http on the http port. + */ + @Input + Closure waitCondition = { AntFixture fixture, AntBuilder ant -> + File tmpFile = new File(fixture.cwd, 'wait.success') + ant.get(src: "http://${fixture.addressAndPort}", + dest: tmpFile.toString(), + ignoreerrors: true, // do not fail on error, so logging information can be flushed + retries: 10) + return tmpFile.exists() + } + + private final Task stopTask + + public AntFixture() { + stopTask = createStopTask() + finalizedBy(stopTask) + } + + @Override + public Task getStopTask() { + return stopTask + } + + @Override + protected void runAnt(AntBuilder ant) { + project.delete(baseDir) // reset everything + cwd.mkdirs() + final String realExecutable + final List realArgs = new ArrayList<>() + final Map realEnv = environment + // We need to choose which executable we are using. In shell mode, or when we + // are spawning and thus using the wrapper script, the executable is the shell. + if (useShell || spawn) { + if (Os.isFamily(Os.FAMILY_WINDOWS)) { + realExecutable = 'cmd' + realArgs.add('/C') + realArgs.add('"') // quote the entire command + } else { + realExecutable = 'sh' + } + } else { + realExecutable = executable + realArgs.addAll(arguments) + } + if (spawn) { + writeWrapperScript(executable) + realArgs.add(wrapperScript) + realArgs.addAll(arguments) + } + if (Os.isFamily(Os.FAMILY_WINDOWS) && (useShell || spawn)) { + realArgs.add('"') + } + commandString.eachLine { line -> logger.info(line) } + + ant.exec(executable: realExecutable, spawn: spawn, dir: cwd, taskname: name) { + realEnv.each { key, value -> env(key: key, value: value) } + realArgs.each { arg(value: it) } + } + + String failedProp = "failed${name}" + // first wait for resources, or the failure marker from the wrapper script + ant.waitfor(maxwait: '30', maxwaitunit: 'second', checkevery: '500', checkeveryunit: 'millisecond', timeoutproperty: failedProp) { + or { + resourceexists { + file(file: failureMarker.toString()) + } + and { + resourceexists { + file(file: pidFile.toString()) + } + resourceexists { + file(file: portsFile.toString()) + } + } + } + } + + if (ant.project.getProperty(failedProp) || failureMarker.exists()) { + fail("Failed to start ${name}") + } + + // the process is started (has a pid) and is bound to a network interface + // so now wait undil the waitCondition has been met + // TODO: change this to a loop? + boolean success + try { + success = waitCondition(this, ant) == false + } catch (Exception e) { + String msg = "Wait condition caught exception for ${name}" + logger.error(msg, e) + fail(msg, e) + } + if (success == false) { + fail("Wait condition failed for ${name}") + } + } + + /** Returns a debug string used to log information about how the fixture was run. 
*/ + protected String getCommandString() { + String commandString = "\n${name} configuration:\n" + commandString += "-----------------------------------------\n" + commandString += " cwd: ${cwd}\n" + commandString += " command: ${executable} ${arguments.join(' ')}\n" + commandString += ' environment:\n' + environment.each { k, v -> commandString += " ${k}: ${v}\n" } + if (spawn) { + commandString += "\n [${wrapperScript.name}]\n" + wrapperScript.eachLine('UTF-8', { line -> commandString += " ${line}\n"}) + } + return commandString + } + + /** + * Writes a script to run the real executable, so that stdout/stderr can be captured. + * TODO: this could be removed if we do use our own ProcessBuilder and pump output from the process + */ + private void writeWrapperScript(String executable) { + wrapperScript.parentFile.mkdirs() + String argsPasser = '"$@"' + String exitMarker = "; if [ \$? != 0 ]; then touch run.failed; fi" + if (Os.isFamily(Os.FAMILY_WINDOWS)) { + argsPasser = '%*' + exitMarker = "\r\n if \"%errorlevel%\" neq \"0\" ( type nul >> run.failed )" + } + wrapperScript.setText("\"${executable}\" ${argsPasser} > run.log 2>&1 ${exitMarker}", 'UTF-8') + } + + /** Fail the build with the given message, and logging relevant info*/ + private void fail(String msg, Exception... suppressed) { + if (logger.isInfoEnabled() == false) { + // We already log the command at info level. No need to do it twice. + commandString.eachLine { line -> logger.error(line) } + } + logger.error("${name} output:") + logger.error("-----------------------------------------") + logger.error(" failure marker exists: ${failureMarker.exists()}") + logger.error(" pid file exists: ${pidFile.exists()}") + logger.error(" ports file exists: ${portsFile.exists()}") + // also dump the log file for the startup script (which will include ES logging output to stdout) + if (runLog.exists()) { + logger.error("\n [log]") + runLog.eachLine { line -> logger.error(" ${line}") } + } + logger.error("-----------------------------------------") + GradleException toThrow = new GradleException(msg) + for (Exception e : suppressed) { + toThrow.addSuppressed(e) + } + throw toThrow + } + + /** Adds a task to kill an elasticsearch node with the given pidfile */ + private Task createStopTask() { + final AntFixture fixture = this + final Object pid = "${ -> fixture.pid }" + Exec stop = project.tasks.create(name: "${name}#stop", type: LoggedExec) + stop.onlyIf { fixture.pidFile.exists() } + stop.doFirst { + logger.info("Shutting down ${fixture.name} with pid ${pid}") + } + if (Os.isFamily(Os.FAMILY_WINDOWS)) { + stop.executable = 'Taskkill' + stop.args('/PID', pid, '/F') + } else { + stop.executable = 'kill' + stop.args('-9', pid) + } + stop.doLast { + project.delete(fixture.pidFile) + } + return stop + } + + /** + * A path relative to the build dir that all configuration and runtime files + * will live in for this fixture + */ + protected File getBaseDir() { + return new File(project.buildDir, "fixtures/${name}") + } + + /** Returns the working directory for the process. Defaults to "cwd" inside baseDir. */ + protected File getCwd() { + return new File(baseDir, 'cwd') + } + + /** Returns the file the process writes its pid to. Defaults to "pid" inside baseDir. */ + protected File getPidFile() { + return new File(baseDir, 'pid') + } + + /** Reads the pid file and returns the process' pid */ + public int getPid() { + return Integer.parseInt(pidFile.getText('UTF-8').trim()) + } + + /** Returns the file the process writes its bound ports to. 
Defaults to "ports" inside baseDir. */ + protected File getPortsFile() { + return new File(baseDir, 'ports') + } + + /** Returns an address and port suitable for a uri to connect to this node over http */ + public String getAddressAndPort() { + return portsFile.readLines("UTF-8").get(0) + } + + /** Returns a file that wraps around the actual command when {@code spawn == true}. */ + protected File getWrapperScript() { + return new File(cwd, Os.isFamily(Os.FAMILY_WINDOWS) ? 'run.bat' : 'run') + } + + /** Returns a file that the wrapper script writes when the command failed. */ + protected File getFailureMarker() { + return new File(cwd, 'run.failed') + } + + /** Returns a file that the wrapper script writes when the command failed. */ + protected File getRunLog() { + return new File(cwd, 'run.log') + } +} diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy index 6a2375efc6299..c9965caa96e7d 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy @@ -20,8 +20,6 @@ package org.elasticsearch.gradle.test import org.gradle.api.GradleException import org.gradle.api.Project -import org.gradle.api.artifacts.Configuration -import org.gradle.api.file.FileCollection import org.gradle.api.tasks.Input /** Configuration for an elasticsearch cluster, used for integration tests. */ @@ -47,25 +45,56 @@ class ClusterConfiguration { @Input int transportPort = 0 + /** + * An override of the data directory. This may only be used with a single node. + * The value is lazily evaluated at runtime as a String path. + */ + @Input + Object dataDir = null + + /** Optional override of the cluster name. */ + @Input + String clusterName = null + @Input boolean daemonize = true @Input boolean debug = false + /** + * if true each node will be configured with discovery.zen.minimum_master_nodes set + * to the total number of nodes in the cluster. This will also cause that each node has `0s` state recovery + * timeout which can lead to issues if for instance an existing clusterstate is expected to be recovered + * before any tests start + */ + @Input + boolean useMinimumMasterNodes = true + @Input String jvmArgs = "-Xms" + System.getProperty('tests.heap.size', '512m') + - " " + "-Xmx" + System.getProperty('tests.heap.size', '512m') + - " " + System.getProperty('tests.jvm.argline', '') + " " + "-Xmx" + System.getProperty('tests.heap.size', '512m') + + " " + System.getProperty('tests.jvm.argline', '') /** - * The seed nodes port file. In the case the cluster has more than one node we use a seed node - * to form the cluster. The file is null if there is no seed node yet available. + * A closure to call which returns the unicast host to connect to for cluster formation. * - * Note: this can only be null if the cluster has only one node or if the first node is not yet - * configured. All nodes but the first node should see a non null value. + * This allows multi node clusters, or a new cluster to connect to an existing cluster. + * The closure takes two arguments, the NodeInfo for the first node in the cluster, and + * an AntBuilder which may be used to wait on conditions before returning. 
*/ - File seedNodePortsFile + @Input + Closure unicastTransportUri = { NodeInfo seedNode, NodeInfo node, AntBuilder ant -> + if (seedNode == node) { + return null + } + ant.waitfor(maxwait: '40', maxwaitunit: 'second', checkevery: '500', checkeveryunit: 'millisecond') { + resourceexists { + file(file: seedNode.transportPortsFile.toString()) + } + } + return seedNode.transportUri() + } /** * A closure to call before the cluster is considered ready. The closure is passed the node info, @@ -75,7 +104,13 @@ class ClusterConfiguration { @Input Closure waitCondition = { NodeInfo node, AntBuilder ant -> File tmpFile = new File(node.cwd, 'wait.success') - ant.get(src: "http://${node.httpUri()}/_cluster/health?wait_for_nodes=${numNodes}", + String waitUrl = "http://${node.httpUri()}/_cluster/health?wait_for_nodes=>=${numNodes}&wait_for_status=yellow" + ant.echo(message: "==> [${new Date()}] checking health: ${waitUrl}", + level: 'info') + // checking here for wait_for_nodes to be >= the number of nodes because its possible + // this cluster is attempting to connect to nodes created by another task (same cluster name), + // so there will be more nodes in that case in the cluster state + ant.get(src: waitUrl, dest: tmpFile.toString(), ignoreerrors: true, // do not fail on error, so logging buffers can be flushed by the wait task retries: 10) @@ -88,7 +123,9 @@ class ClusterConfiguration { Map systemProperties = new HashMap<>() - Map settings = new HashMap<>() + Map settings = new HashMap<>() + + Map keystoreSettings = new HashMap<>() // map from destination path, to source file Map extraConfigFiles = new HashMap<>() @@ -99,16 +136,23 @@ class ClusterConfiguration { LinkedHashMap setupCommands = new LinkedHashMap<>() + List dependencies = new ArrayList<>() + @Input void systemProperty(String property, String value) { systemProperties.put(property, value) } @Input - void setting(String name, String value) { + void setting(String name, Object value) { settings.put(name, value) } + @Input + void keystoreSetting(String name, String value) { + keystoreSettings.put(name, value) + } + @Input void plugin(String path) { Project pluginProject = project.project(path) @@ -138,11 +182,9 @@ class ClusterConfiguration { extraConfigFiles.put(path, sourceFile) } - /** Returns an address and port suitable for a uri to connect to this clusters seed node over transport protocol*/ - String seedNodeTransportUri() { - if (seedNodePortsFile != null) { - return seedNodePortsFile.readLines("UTF-8").get(0) - } - return null; + /** Add dependencies that must be run before the first task setting up the cluster. */ + @Input + void dependsOn(Object... 
deps) { + dependencies.addAll(deps) } } diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy index 8819c63080a3b..ee225904cc93e 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy @@ -23,6 +23,7 @@ import org.apache.tools.ant.taskdefs.condition.Os import org.elasticsearch.gradle.LoggedExec import org.elasticsearch.gradle.VersionProperties import org.elasticsearch.gradle.plugin.PluginBuildPlugin +import org.elasticsearch.gradle.plugin.PluginPropertiesExtension import org.gradle.api.AntBuilder import org.gradle.api.DefaultTask import org.gradle.api.GradleException @@ -30,13 +31,16 @@ import org.gradle.api.InvalidUserDataException import org.gradle.api.Project import org.gradle.api.Task import org.gradle.api.artifacts.Configuration +import org.gradle.api.artifacts.Dependency import org.gradle.api.file.FileCollection import org.gradle.api.logging.Logger import org.gradle.api.tasks.Copy import org.gradle.api.tasks.Delete import org.gradle.api.tasks.Exec +import java.nio.charset.StandardCharsets import java.nio.file.Paths +import java.util.concurrent.TimeUnit /** * A helper for creating tasks to build a cluster that is used by a task, and tear down the cluster when the task is finished. @@ -46,24 +50,20 @@ class ClusterFormationTasks { /** * Adds dependent tasks to the given task to start and stop a cluster with the given configuration. * - * Returns a NodeInfo object for the first node in the cluster. + * Returns a list of NodeInfo objects for each node in the cluster. */ - static NodeInfo setup(Project project, Task task, ClusterConfiguration config) { - if (task.getEnabled() == false) { - // no need to add cluster formation tasks if the task won't run! - return - } + static List setup(Project project, String prefix, Task runner, ClusterConfiguration config) { File sharedDir = new File(project.buildDir, "cluster/shared") // first we remove everything in the shared cluster directory to ensure there are no leftovers in repos or anything // in theory this should not be necessary but repositories are only deleted in the cluster-state and not on-disk // such that snapshots survive failures / test runs and there is no simple way today to fix that. - Task cleanup = project.tasks.create(name: "${task.name}#prepareCluster.cleanShared", type: Delete, dependsOn: task.dependsOn.collect()) { + Task cleanup = project.tasks.create(name: "${prefix}#prepareCluster.cleanShared", type: Delete, dependsOn: config.dependencies) { delete sharedDir doLast { sharedDir.mkdirs() } } - List startTasks = [cleanup] + List startTasks = [] List nodes = [] if (config.numNodes < config.numBwcNodes) { throw new GradleException("numNodes must be >= numBwcNodes [${config.numNodes} < ${config.numBwcNodes}]") @@ -72,45 +72,45 @@ class ClusterFormationTasks { throw new GradleException("bwcVersion must not be null if numBwcNodes is > 0") } // this is our current version distribution configuration we use for all kinds of REST tests etc. 
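For context, here is a hypothetical qa-project snippet exercising the options added to `ClusterConfiguration` above. With this change the cluster is configured through an extension (for the default `integTest` task the extension is `integTestCluster`, as created later in this diff by `RestIntegTestTask`), and the old seed-node ports-file plumbing is replaced by the overridable `unicastTransportUri` closure. Names and values below are illustrative only.

```groovy
// build.gradle of a qa project (illustrative values)
integTestCluster {
    numNodes = 2                        // minimum_master_nodes is derived from this while useMinimumMasterNodes is true
    clusterName = 'my-qa-cluster'       // optional override of the default project-path based name
    setting 'node.attr.box_type', 'hot'             // ends up in elasticsearch.yml
    keystoreSetting 'my.secure.setting', 'secret'   // written via elasticsearch-keystore add
    // dataDir may only be overridden for single-node clusters.
    // To join an already-running cluster instead of forming a new one,
    // the unicast host lookup can be replaced:
    // unicastTransportUri = { seedNode, node, ant -> '127.0.0.1:9350' }
}
```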
- String distroConfigName = "${task.name}_elasticsearchDistro" - Configuration distro = project.configurations.create(distroConfigName) - configureDistributionDependency(project, config.distribution, distro, VersionProperties.elasticsearch) - if (config.bwcVersion != null && config.numBwcNodes > 0) { + Configuration currentDistro = project.configurations.create("${prefix}_elasticsearchDistro") + Configuration bwcDistro = project.configurations.create("${prefix}_elasticsearchBwcDistro") + Configuration bwcPlugins = project.configurations.create("${prefix}_elasticsearchBwcPlugins") + configureDistributionDependency(project, config.distribution, currentDistro, VersionProperties.elasticsearch) + if (config.numBwcNodes > 0) { + if (config.bwcVersion == null) { + throw new IllegalArgumentException("Must specify bwcVersion when numBwcNodes > 0") + } // if we have a cluster that has a BWC cluster we also need to configure a dependency on the BWC version // this version uses the same distribution etc. and only differs in the version we depend on. // from here on everything else works the same as if it's the current version, we fetch the BWC version // from mirrors using gradles built-in mechanism etc. - project.configurations { - elasticsearchBwcDistro + + configureDistributionDependency(project, config.distribution, bwcDistro, config.bwcVersion) + for (Map.Entry entry : config.plugins.entrySet()) { + configureBwcPluginDependency("${prefix}_elasticsearchBwcPlugins", project, entry.getValue(), bwcPlugins, config.bwcVersion) } - configureDistributionDependency(project, config.distribution, project.configurations.elasticsearchBwcDistro, config.bwcVersion) + bwcDistro.resolutionStrategy.cacheChangingModulesFor(0, TimeUnit.SECONDS) + bwcPlugins.resolutionStrategy.cacheChangingModulesFor(0, TimeUnit.SECONDS) } - - for (int i = 0; i < config.numNodes; ++i) { + for (int i = 0; i < config.numNodes; i++) { // we start N nodes and out of these N nodes there might be M bwc nodes. - // for each of those nodes we might have a different configuratioon + // for each of those nodes we might have a different configuration String elasticsearchVersion = VersionProperties.elasticsearch + Configuration distro = currentDistro if (i < config.numBwcNodes) { elasticsearchVersion = config.bwcVersion - distro = project.configurations.elasticsearchBwcDistro - } - NodeInfo node = new NodeInfo(config, i, project, task, elasticsearchVersion, sharedDir) - if (i == 0) { - if (config.seedNodePortsFile != null) { - // we might allow this in the future to be set but for now we are the only authority to set this! - throw new GradleException("seedNodePortsFile has a non-null value but first node has not been intialized") - } - config.seedNodePortsFile = node.transportPortsFile; + distro = bwcDistro } + NodeInfo node = new NodeInfo(config, i, project, prefix, elasticsearchVersion, sharedDir) nodes.add(node) - startTasks.add(configureNode(project, task, cleanup, node, distro)) + Task dependsOn = startTasks.empty ? 
cleanup : startTasks.get(0) + startTasks.add(configureNode(project, prefix, runner, dependsOn, node, config, distro, nodes.get(0))) } - Task wait = configureWaitTask("${task.name}#wait", project, nodes, startTasks) - task.dependsOn(wait) + Task wait = configureWaitTask("${prefix}#wait", project, nodes, startTasks) + runner.dependsOn(wait) - // delay the resolution of the uri by wrapping in a closure, so it is not used until read for tests - return nodes[0] + return nodes } /** Adds a dependency on the given distribution */ @@ -124,6 +124,13 @@ class ClusterFormationTasks { project.dependencies.add(configuration.name, "org.elasticsearch.distribution.${distro}:elasticsearch:${elasticsearchVersion}@${packaging}") } + /** Adds a dependency on a different version of the given plugin, which will be retrieved using gradle's dependency resolution */ + static void configureBwcPluginDependency(String name, Project project, Project pluginProject, Configuration configuration, String elasticsearchVersion) { + verifyProjectHasBuildPlugin(name, elasticsearchVersion, project, pluginProject) + PluginPropertiesExtension extension = pluginProject.extensions.findByName('esplugin'); + project.dependencies.add(configuration.name, "org.elasticsearch.plugin:${extension.name}:${elasticsearchVersion}@zip") + } + /** * Adds dependent tasks to start an elasticsearch cluster before the given task is executed, * and stop it after it has finished executing. @@ -141,49 +148,83 @@ class ClusterFormationTasks { * * @return a task which starts the node. */ - static Task configureNode(Project project, Task task, Object dependsOn, NodeInfo node, Configuration configuration) { + static Task configureNode(Project project, String prefix, Task runner, Object dependsOn, NodeInfo node, ClusterConfiguration config, + Configuration distribution, NodeInfo seedNode) { // tasks are chained so their execution order is maintained - Task setup = project.tasks.create(name: taskName(task, node, 'clean'), type: Delete, dependsOn: dependsOn) { + Task setup = project.tasks.create(name: taskName(prefix, node, 'clean'), type: Delete, dependsOn: dependsOn) { delete node.homeDir delete node.cwd doLast { node.cwd.mkdirs() } } - setup = configureCheckPreviousTask(taskName(task, node, 'checkPrevious'), project, setup, node) - setup = configureStopTask(taskName(task, node, 'stopPrevious'), project, setup, node) - setup = configureExtractTask(taskName(task, node, 'extract'), project, setup, node, configuration) - setup = configureWriteConfigTask(taskName(task, node, 'configure'), project, setup, node) - setup = configureExtraConfigFilesTask(taskName(task, node, 'extraConfig'), project, setup, node) - setup = configureCopyPluginsTask(taskName(task, node, 'copyPlugins'), project, setup, node) + + setup = configureCheckPreviousTask(taskName(prefix, node, 'checkPrevious'), project, setup, node) + setup = configureStopTask(taskName(prefix, node, 'stopPrevious'), project, setup, node) + setup = configureExtractTask(taskName(prefix, node, 'extract'), project, setup, node, distribution) + setup = configureWriteConfigTask(taskName(prefix, node, 'configure'), project, setup, node, seedNode) + setup = configureCreateKeystoreTask(taskName(prefix, node, 'createKeystore'), project, setup, node) + setup = configureAddKeystoreSettingTasks(prefix, project, setup, node) + + if (node.config.plugins.isEmpty() == false) { + if (node.nodeVersion == VersionProperties.elasticsearch) { + setup = configureCopyPluginsTask(taskName(prefix, node, 'copyPlugins'), project, setup, 
node, prefix) + } else { + setup = configureCopyBwcPluginsTask(taskName(prefix, node, 'copyBwcPlugins'), project, setup, node, prefix) + } + } // install modules for (Project module : node.config.modules) { String actionName = pluginTaskName('install', module.name, 'Module') - setup = configureInstallModuleTask(taskName(task, node, actionName), project, setup, node, module) + setup = configureInstallModuleTask(taskName(prefix, node, actionName), project, setup, node, module) } // install plugins for (Map.Entry plugin : node.config.plugins.entrySet()) { String actionName = pluginTaskName('install', plugin.getKey(), 'Plugin') - setup = configureInstallPluginTask(taskName(task, node, actionName), project, setup, node, plugin.getValue()) + setup = configureInstallPluginTask(taskName(prefix, node, actionName), project, setup, node, plugin.getValue(), prefix) } + // sets up any extra config files that need to be copied over to the ES instance; + // its run after plugins have been installed, as the extra config files may belong to plugins + setup = configureExtraConfigFilesTask(taskName(prefix, node, 'extraConfig'), project, setup, node) + // extra setup commands for (Map.Entry command : node.config.setupCommands.entrySet()) { // the first argument is the actual script name, relative to home Object[] args = command.getValue().clone() - args[0] = new File(node.homeDir, args[0].toString()) - setup = configureExecTask(taskName(task, node, command.getKey()), project, setup, node, args) + final Object commandPath + if (Os.isFamily(Os.FAMILY_WINDOWS)) { + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to + * getting the short name requiring the path to already exist. Note that we have to capture the value of arg[0] now + * otherwise we would stack overflow later since arg[0] is replaced below. 
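The comments in this hunk lean on a Groovy idiom the rest of the change uses heavily: a GString whose value is a closure (`"${-> ...}"`) is not evaluated until it is converted to a String, so path lookups that require the path to exist (such as the Windows short-name resolution) are deferred from configuration time to execution time. A minimal, self-contained sketch of the difference:

```groovy
def missingDir = new File('build/not-created-yet')

// Stand-in for NodeInfo.getShortPathName, which only works once the path exists.
def resolve = { File f -> f.canonicalPath }

// Eager interpolation: resolve() runs immediately, while the build is still being configured.
String eager = "path.conf=${resolve(missingDir)}"

// Lazy interpolation: the closure runs only when the GString is turned into a String,
// for example when an Exec task finally renders its command line at execution time.
def lazy = "path.conf=${-> resolve(missingDir)}"

println lazy   // resolve() is invoked here, not at the point of declaration
```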
+ */ + String argsZero = args[0] + commandPath = "${-> Paths.get(NodeInfo.getShortPathName(node.homeDir.toString())).resolve(argsZero.toString()).toString()}" + } else { + commandPath = node.homeDir.toPath().resolve(args[0].toString()).toString() + } + args[0] = commandPath + setup = configureExecTask(taskName(prefix, node, command.getKey()), project, setup, node, args) } - Task start = configureStartTask(taskName(task, node, 'start'), project, setup, node) + Task start = configureStartTask(taskName(prefix, node, 'start'), project, setup, node) if (node.config.daemonize) { + Task stop = configureStopTask(taskName(prefix, node, 'stop'), project, [], node) // if we are running in the background, make sure to stop the server when the task completes - Task stop = configureStopTask(taskName(task, node, 'stop'), project, [], node) - task.finalizedBy(stop) + runner.finalizedBy(stop) + start.finalizedBy(stop) + for (Object dependency : config.dependencies) { + if (dependency instanceof Fixture) { + def depStop = ((Fixture)dependency).stopTask + runner.finalizedBy(depStop) + start.finalizedBy(depStop) + } + } } return start } @@ -222,9 +263,9 @@ class ClusterFormationTasks { Object rpm = "${ -> configuration.singleFile}" extract = project.tasks.create(name: name, type: LoggedExec, dependsOn: extractDependsOn) { commandLine 'rpm', '--badreloc', '--nodeps', '--noscripts', '--notriggers', - '--dbpath', rpmDatabase, - '--relocate', "/=${rpmExtracted}", - '-i', rpm + '--dbpath', rpmDatabase, + '--relocate', "/=${rpmExtracted}", + '-i', rpm doFirst { rpmDatabase.deleteDir() rpmExtracted.deleteDir() @@ -249,32 +290,37 @@ class ClusterFormationTasks { } /** Adds a task to write elasticsearch.yml for the given node configuration */ - static Task configureWriteConfigTask(String name, Project project, Task setup, NodeInfo node) { + static Task configureWriteConfigTask(String name, Project project, Task setup, NodeInfo node, NodeInfo seedNode) { Map esConfig = [ 'cluster.name' : node.clusterName, 'pidfile' : node.pidFile, 'path.repo' : "${node.sharedDir}/repo", 'path.shared_data' : "${node.sharedDir}/", // Define a node attribute so we can test that it exists - 'node.attr.testattr' : 'test', + 'node.attr.testattr' : 'test', 'repositories.url.allowed_urls': 'http://snapshot.test*' ] + // we set min master nodes to the total number of nodes in the cluster and + // basically skip initial state recovery to allow the cluster to form using a realistic master election + // this means all nodes must be up, join the seed node and do a master election. This will also allow new and + // old nodes in the BWC case to become the master + if (node.config.useMinimumMasterNodes && node.config.numNodes > 1) { + esConfig['discovery.zen.minimum_master_nodes'] = node.config.numNodes + esConfig['discovery.initial_state_timeout'] = '0s' // don't wait for state.. 
just start up quickly + } esConfig['node.max_local_storage_nodes'] = node.config.numNodes esConfig['http.port'] = node.config.httpPort esConfig['transport.tcp.port'] = node.config.transportPort + // Default the watermarks to absurdly low to prevent the tests from failing on nodes without enough disk space + esConfig['cluster.routing.allocation.disk.watermark.low'] = '1b' + esConfig['cluster.routing.allocation.disk.watermark.high'] = '1b' esConfig.putAll(node.config.settings) Task writeConfig = project.tasks.create(name: name, type: DefaultTask, dependsOn: setup) writeConfig.doFirst { - if (node.nodeNum > 0) { // multi-node cluster case, we have to wait for the seed node to startup - ant.waitfor(maxwait: '20', maxwaitunit: 'second', checkevery: '500', checkeveryunit: 'millisecond') { - resourceexists { - file(file: node.config.seedNodePortsFile.toString()) - } - } - // the seed node is enough to form the cluster - all subsequent nodes will get the seed node as a unicast - // host and join the cluster via that. - esConfig['discovery.zen.ping.unicast.hosts'] = "\"${node.config.seedNodeTransportUri()}\"" + String unicastTransportUri = node.config.unicastTransportUri(seedNode, node, project.ant) + if (unicastTransportUri != null) { + esConfig['discovery.zen.ping.unicast.hosts'] = "\"${unicastTransportUri}\"" } File configFile = new File(node.confDir, 'elasticsearch.yml') logger.info("Configuring ${configFile}") @@ -282,6 +328,42 @@ class ClusterFormationTasks { } } + /** Adds a task to create keystore */ + static Task configureCreateKeystoreTask(String name, Project project, Task setup, NodeInfo node) { + if (node.config.keystoreSettings.isEmpty()) { + return setup + } else { + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to + * getting the short name requiring the path to already exist. + */ + final Object esKeystoreUtil = "${-> node.binPath().resolve('elasticsearch-keystore').toString()}" + return configureExecTask(name, project, setup, node, esKeystoreUtil, 'create') + } + } + + /** Adds tasks to add settings to the keystore */ + static Task configureAddKeystoreSettingTasks(String parent, Project project, Task setup, NodeInfo node) { + Map kvs = node.config.keystoreSettings + Task parentTask = setup + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to getting + * the short name requiring the path to already exist. + */ + final Object esKeystoreUtil = "${-> node.binPath().resolve('elasticsearch-keystore').toString()}" + for (Map.Entry entry in kvs) { + String key = entry.getKey() + String name = taskName(parent, node, 'addToKeystore#' + key) + Task t = configureExecTask(name, project, parentTask, node, esKeystoreUtil, 'add', key, '-x') + String settingsValue = entry.getValue() // eval this early otherwise it will not use the right value + t.doFirst { + standardInput = new ByteArrayInputStream(settingsValue.getBytes(StandardCharsets.UTF_8)) + } + parentTask = t + } + return parentTask + } + static Task configureExtraConfigFilesTask(String name, Project project, Task setup, NodeInfo node) { if (node.config.extraConfigFiles.isEmpty()) { return setup @@ -318,20 +400,15 @@ class ClusterFormationTasks { * For each plugin, if the plugin has rest spec apis in its tests, those api files are also copied * to the test resources for this project. 
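The keystore tasks added above shell out to `bin/elasticsearch-keystore`; the value of each secure setting is piped through `standardInput` (the `-x` flag makes the CLI read stdin) so secrets never appear on the task's command line. Roughly equivalent standalone Gradle tasks, with hypothetical paths and setting names, might look like this:

```groovy
import java.nio.charset.StandardCharsets

def homeDir = file('build/cluster/integTestCluster node0/elasticsearch')   // hypothetical location
def keystoreTool = new File(homeDir, 'bin/elasticsearch-keystore').path

task createKeystore(type: Exec) {
    commandLine keystoreTool, 'create'
}

task addSecureSetting(type: Exec, dependsOn: createKeystore) {
    commandLine keystoreTool, 'add', 'my.secure.setting', '-x'
    doFirst {
        // resolved late, mirroring the doFirst block in configureAddKeystoreSettingTasks
        standardInput = new ByteArrayInputStream('secret'.getBytes(StandardCharsets.UTF_8))
    }
}
```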
*/ - static Task configureCopyPluginsTask(String name, Project project, Task setup, NodeInfo node) { - if (node.config.plugins.isEmpty()) { - return setup - } + static Task configureCopyPluginsTask(String name, Project project, Task setup, NodeInfo node, String prefix) { Copy copyPlugins = project.tasks.create(name: name, type: Copy, dependsOn: setup) List pluginFiles = [] for (Map.Entry plugin : node.config.plugins.entrySet()) { Project pluginProject = plugin.getValue() - if (pluginProject.plugins.hasPlugin(PluginBuildPlugin) == false) { - throw new GradleException("Task ${name} cannot project ${pluginProject.path} which is not an esplugin") - } - String configurationName = "_plugin_${pluginProject.path}" + verifyProjectHasBuildPlugin(name, node.nodeVersion, project, pluginProject) + String configurationName = "_plugin_${prefix}_${pluginProject.path}" Configuration configuration = project.configurations.findByName(configurationName) if (configuration == null) { configuration = project.configurations.create(configurationName) @@ -360,6 +437,33 @@ class ClusterFormationTasks { return copyPlugins } + /** Configures task to copy a plugin based on a zip file resolved using dependencies for an older version */ + static Task configureCopyBwcPluginsTask(String name, Project project, Task setup, NodeInfo node, String prefix) { + Configuration bwcPlugins = project.configurations.getByName("${prefix}_elasticsearchBwcPlugins") + for (Map.Entry plugin : node.config.plugins.entrySet()) { + Project pluginProject = plugin.getValue() + verifyProjectHasBuildPlugin(name, node.nodeVersion, project, pluginProject) + String configurationName = "_plugin_bwc_${prefix}_${pluginProject.path}" + Configuration configuration = project.configurations.findByName(configurationName) + if (configuration == null) { + configuration = project.configurations.create(configurationName) + } + + final String depName = pluginProject.extensions.findByName('esplugin').name + + Dependency dep = bwcPlugins.dependencies.find { + it.name == depName + } + configuration.dependencies.add(dep) + } + + Copy copyPlugins = project.tasks.create(name: name, type: Copy, dependsOn: setup) { + from bwcPlugins + into node.pluginsTmpDir + } + return copyPlugins + } + static Task configureInstallModuleTask(String name, Project project, Task setup, NodeInfo node, Project module) { if (node.config.distribution != 'integ-test-zip') { throw new GradleException("Module ${module.path} not allowed be installed distributions other than integ-test-zip because they should already have all modules bundled!") @@ -374,11 +478,21 @@ class ClusterFormationTasks { return installModule } - static Task configureInstallPluginTask(String name, Project project, Task setup, NodeInfo node, Project plugin) { - FileCollection pluginZip = project.configurations.getByName("_plugin_${plugin.path}") + static Task configureInstallPluginTask(String name, Project project, Task setup, NodeInfo node, Project plugin, String prefix) { + final FileCollection pluginZip; + if (node.nodeVersion != VersionProperties.elasticsearch) { + pluginZip = project.configurations.getByName("_plugin_bwc_${prefix}_${plugin.path}") + } else { + pluginZip = project.configurations.getByName("_plugin_${prefix}_${plugin.path}") + } // delay reading the file location until execution time by wrapping in a closure within a GString - Object file = "${-> new File(node.pluginsTmpDir, pluginZip.singleFile.getName()).toURI().toURL().toString()}" - Object[] args = [new File(node.homeDir, 'bin/elasticsearch-plugin'), 
'install', file] + final Object file = "${-> new File(node.pluginsTmpDir, pluginZip.singleFile.getName()).toURI().toURL().toString()}" + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to getting + * the short name requiring the path to already exist. + */ + final Object esPluginUtil = "${-> node.binPath().resolve('elasticsearch-plugin').toString()}" + final Object[] args = [esPluginUtil, 'install', file] return configureExecTask(name, project, setup, node, args) } @@ -490,7 +604,7 @@ class ClusterFormationTasks { anyNodeFailed |= node.failedMarker.exists() } if (ant.properties.containsKey("failed${name}".toString()) || anyNodeFailed) { - waitFailed(nodes, logger, 'Failed to start elasticsearch') + waitFailed(project, nodes, logger, 'Failed to start elasticsearch') } // go through each node checking the wait condition @@ -507,14 +621,14 @@ class ClusterFormationTasks { } if (success == false) { - waitFailed(nodes, logger, 'Elasticsearch cluster failed to pass wait condition') + waitFailed(project, nodes, logger, 'Elasticsearch cluster failed to pass wait condition') } } } return wait } - static void waitFailed(List nodes, Logger logger, String msg) { + static void waitFailed(Project project, List nodes, Logger logger, String msg) { for (NodeInfo node : nodes) { if (logger.isInfoEnabled() == false) { // We already log the command at info level. No need to do it twice. @@ -534,6 +648,17 @@ class ClusterFormationTasks { logger.error("|\n| [log]") node.startLog.eachLine { line -> logger.error("| ${line}") } } + if (node.pidFile.exists() && node.failedMarker.exists() == false && + (node.httpPortsFile.exists() == false || node.transportPortsFile.exists() == false)) { + logger.error("|\n| [jstack]") + String pid = node.pidFile.getText('UTF-8') + ByteArrayOutputStream output = new ByteArrayOutputStream() + project.exec { + commandLine = ["${project.javaHome}/bin/jstack", pid] + standardOutput = output + } + output.toString('UTF-8').eachLine { line -> logger.error("| ${line}") } + } logger.error("|-----------------------------------------") } throw new GradleException(msg) @@ -558,11 +683,11 @@ class ClusterFormationTasks { standardOutput = new ByteArrayOutputStream() doLast { String out = standardOutput.toString() - if (out.contains("${pid} org.elasticsearch.bootstrap.Elasticsearch") == false) { + if (out.contains("${ext.pid} org.elasticsearch.bootstrap.Elasticsearch") == false) { logger.error('jps -l') logger.error(out) - logger.error("pid file: ${pidFile}") - logger.error("pid: ${pid}") + logger.error("pid file: ${node.pidFile}") + logger.error("pid: ${ext.pid}") throw new GradleException("jps -l did not report any process with org.elasticsearch.bootstrap.Elasticsearch\n" + "Did you run gradle clean? 
Maybe an old pid file is still lying around.") } else { @@ -599,11 +724,11 @@ class ClusterFormationTasks { } /** Returns a unique task name for this task and node configuration */ - static String taskName(Task parentTask, NodeInfo node, String action) { + static String taskName(String prefix, NodeInfo node, String action) { if (node.config.numNodes > 1) { - return "${parentTask.name}#node${node.nodeNum}.${action}" + return "${prefix}#node${node.nodeNum}.${action}" } else { - return "${parentTask.name}#${action}" + return "${prefix}#${action}" } } @@ -625,4 +750,11 @@ class ClusterFormationTasks { project.ant.project.removeBuildListener(listener) return retVal } + + static void verifyProjectHasBuildPlugin(String name, String version, Project project, Project pluginProject) { + if (pluginProject.plugins.hasPlugin(PluginBuildPlugin) == false) { + throw new GradleException("Task [${name}] cannot add plugin [${pluginProject.path}] with version [${version}] to project's " + + "[${project.path}] dependencies: the plugin is not an esplugin") + } + } } diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/Fixture.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/Fixture.groovy index 46b81624ba3fa..498a1627b3598 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/Fixture.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/Fixture.groovy @@ -16,272 +16,15 @@ * specific language governing permissions and limitations * under the License. */ - package org.elasticsearch.gradle.test -import org.apache.tools.ant.taskdefs.condition.Os -import org.elasticsearch.gradle.AntTask -import org.elasticsearch.gradle.LoggedExec -import org.gradle.api.GradleException -import org.gradle.api.Task -import org.gradle.api.tasks.Exec -import org.gradle.api.tasks.Input - /** - * A fixture for integration tests which runs in a separate process. + * Any object that can produce an accompanying stop task, meant to tear down + * a previously instantiated service. */ -public class Fixture extends AntTask { - - /** The path to the executable that starts the fixture. */ - @Input - String executable - - private final List arguments = new ArrayList<>() - - @Input - public void args(Object... args) { - arguments.addAll(args) - } - - /** - * Environment variables for the fixture process. The value can be any object, which - * will have toString() called at execution time. - */ - private final Map environment = new HashMap<>() - - @Input - public void env(String key, Object value) { - environment.put(key, value) - } - - /** A flag to indicate whether the command should be executed from a shell. */ - @Input - boolean useShell = false - - /** - * A flag to indicate whether the fixture should be run in the foreground, or spawned. - * It is protected so subclasses can override (eg RunTask). - */ - protected boolean spawn = true - - /** - * A closure to call before the fixture is considered ready. The closure is passed the fixture object, - * as well as a groovy AntBuilder, to enable running ant condition checks. The default wait - * condition is for http on the http port. - */ - @Input - Closure waitCondition = { Fixture fixture, AntBuilder ant -> - File tmpFile = new File(fixture.cwd, 'wait.success') - ant.get(src: "http://${fixture.addressAndPort}", - dest: tmpFile.toString(), - ignoreerrors: true, // do not fail on error, so logging information can be flushed - retries: 10) - return tmpFile.exists() - } +public interface Fixture { /** A task which will stop this fixture. 
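Because cluster tasks are no longer derived from a parent `Task`, `taskName` above now builds names from a plain string prefix; for the default `integTest` task the prefix is `integTestCluster` (see `RestIntegTestTask` later in this diff). A small sketch of the resulting task names:

```groovy
// mirrors ClusterFormationTasks.taskName: a per-node suffix is only added for multi-node clusters
String taskName(String prefix, int numNodes, int nodeNum, String action) {
    return numNodes > 1 ? "${prefix}#node${nodeNum}.${action}" : "${prefix}#${action}"
}

assert taskName('integTestCluster', 1, 0, 'start')     == 'integTestCluster#start'
assert taskName('integTestCluster', 3, 2, 'configure') == 'integTestCluster#node2.configure'
```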
This should be used as a finalizedBy for any tasks that use the fixture. */ - public final Task stopTask - - public Fixture() { - stopTask = createStopTask() - finalizedBy(stopTask) - } - - @Override - protected void runAnt(AntBuilder ant) { - project.delete(baseDir) // reset everything - cwd.mkdirs() - final String realExecutable - final List realArgs = new ArrayList<>() - final Map realEnv = environment - // We need to choose which executable we are using. In shell mode, or when we - // are spawning and thus using the wrapper script, the executable is the shell. - if (useShell || spawn) { - if (Os.isFamily(Os.FAMILY_WINDOWS)) { - realExecutable = 'cmd' - realArgs.add('/C') - realArgs.add('"') // quote the entire command - } else { - realExecutable = 'sh' - } - } else { - realExecutable = executable - realArgs.addAll(arguments) - } - if (spawn) { - writeWrapperScript(executable) - realArgs.add(wrapperScript) - realArgs.addAll(arguments) - } - if (Os.isFamily(Os.FAMILY_WINDOWS) && (useShell || spawn)) { - realArgs.add('"') - } - commandString.eachLine { line -> logger.info(line) } - - ant.exec(executable: realExecutable, spawn: spawn, dir: cwd, taskname: name) { - realEnv.each { key, value -> env(key: key, value: value) } - realArgs.each { arg(value: it) } - } - - String failedProp = "failed${name}" - // first wait for resources, or the failure marker from the wrapper script - ant.waitfor(maxwait: '30', maxwaitunit: 'second', checkevery: '500', checkeveryunit: 'millisecond', timeoutproperty: failedProp) { - or { - resourceexists { - file(file: failureMarker.toString()) - } - and { - resourceexists { - file(file: pidFile.toString()) - } - resourceexists { - file(file: portsFile.toString()) - } - } - } - } - - if (ant.project.getProperty(failedProp) || failureMarker.exists()) { - fail("Failed to start ${name}") - } - - // the process is started (has a pid) and is bound to a network interface - // so now wait undil the waitCondition has been met - // TODO: change this to a loop? - boolean success - try { - success = waitCondition(this, ant) == false - } catch (Exception e) { - String msg = "Wait condition caught exception for ${name}" - logger.error(msg, e) - fail(msg, e) - } - if (success == false) { - fail("Wait condition failed for ${name}") - } - } - - /** Returns a debug string used to log information about how the fixture was run. */ - protected String getCommandString() { - String commandString = "\n${name} configuration:\n" - commandString += "-----------------------------------------\n" - commandString += " cwd: ${cwd}\n" - commandString += " command: ${executable} ${arguments.join(' ')}\n" - commandString += ' environment:\n' - environment.each { k, v -> commandString += " ${k}: ${v}\n" } - if (spawn) { - commandString += "\n [${wrapperScript.name}]\n" - wrapperScript.eachLine('UTF-8', { line -> commandString += " ${line}\n"}) - } - return commandString - } - - /** - * Writes a script to run the real executable, so that stdout/stderr can be captured. - * TODO: this could be removed if we do use our own ProcessBuilder and pump output from the process - */ - private void writeWrapperScript(String executable) { - wrapperScript.parentFile.mkdirs() - String argsPasser = '"$@"' - String exitMarker = "; if [ \$? 
!= 0 ]; then touch run.failed; fi" - if (Os.isFamily(Os.FAMILY_WINDOWS)) { - argsPasser = '%*' - exitMarker = "\r\n if \"%errorlevel%\" neq \"0\" ( type nul >> run.failed )" - } - wrapperScript.setText("\"${executable}\" ${argsPasser} > run.log 2>&1 ${exitMarker}", 'UTF-8') - } - - /** Fail the build with the given message, and logging relevant info*/ - private void fail(String msg, Exception... suppressed) { - if (logger.isInfoEnabled() == false) { - // We already log the command at info level. No need to do it twice. - commandString.eachLine { line -> logger.error(line) } - } - logger.error("${name} output:") - logger.error("-----------------------------------------") - logger.error(" failure marker exists: ${failureMarker.exists()}") - logger.error(" pid file exists: ${pidFile.exists()}") - logger.error(" ports file exists: ${portsFile.exists()}") - // also dump the log file for the startup script (which will include ES logging output to stdout) - if (runLog.exists()) { - logger.error("\n [log]") - runLog.eachLine { line -> logger.error(" ${line}") } - } - logger.error("-----------------------------------------") - GradleException toThrow = new GradleException(msg) - for (Exception e : suppressed) { - toThrow.addSuppressed(e) - } - throw toThrow - } - - /** Adds a task to kill an elasticsearch node with the given pidfile */ - private Task createStopTask() { - final Fixture fixture = this - final Object pid = "${ -> fixture.pid }" - Exec stop = project.tasks.create(name: "${name}#stop", type: LoggedExec) - stop.onlyIf { fixture.pidFile.exists() } - stop.doFirst { - logger.info("Shutting down ${fixture.name} with pid ${pid}") - } - if (Os.isFamily(Os.FAMILY_WINDOWS)) { - stop.executable = 'Taskkill' - stop.args('/PID', pid, '/F') - } else { - stop.executable = 'kill' - stop.args('-9', pid) - } - stop.doLast { - project.delete(fixture.pidFile) - } - return stop - } - - /** - * A path relative to the build dir that all configuration and runtime files - * will live in for this fixture - */ - protected File getBaseDir() { - return new File(project.buildDir, "fixtures/${name}") - } - - /** Returns the working directory for the process. Defaults to "cwd" inside baseDir. */ - protected File getCwd() { - return new File(baseDir, 'cwd') - } - - /** Returns the file the process writes its pid to. Defaults to "pid" inside baseDir. */ - protected File getPidFile() { - return new File(baseDir, 'pid') - } - - /** Reads the pid file and returns the process' pid */ - public int getPid() { - return Integer.parseInt(pidFile.getText('UTF-8').trim()) - } - - /** Returns the file the process writes its bound ports to. Defaults to "ports" inside baseDir. */ - protected File getPortsFile() { - return new File(baseDir, 'ports') - } - - /** Returns an address and port suitable for a uri to connect to this node over http */ - public String getAddressAndPort() { - return portsFile.readLines("UTF-8").get(0) - } - - /** Returns a file that wraps around the actual command when {@code spawn == true}. */ - protected File getWrapperScript() { - return new File(cwd, Os.isFamily(Os.FAMILY_WINDOWS) ? 'run.bat' : 'run') - } - - /** Returns a file that the wrapper script writes when the command failed. */ - protected File getFailureMarker() { - return new File(cwd, 'run.failed') - } + public Object getStopTask() - /** Returns a file that the wrapper script writes when the command failed. 
*/ - protected File getRunLog() { - return new File(cwd, 'run.log') - } } diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/JNAKernel32Library.java b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/JNAKernel32Library.java new file mode 100644 index 0000000000000..4d069cd434fc0 --- /dev/null +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/JNAKernel32Library.java @@ -0,0 +1,25 @@ +package org.elasticsearch.gradle.test; + +import com.sun.jna.Native; +import com.sun.jna.WString; +import org.apache.tools.ant.taskdefs.condition.Os; + +public class JNAKernel32Library { + + private static final class Holder { + private static final JNAKernel32Library instance = new JNAKernel32Library(); + } + + static JNAKernel32Library getInstance() { + return Holder.instance; + } + + private JNAKernel32Library() { + if (Os.isFamily(Os.FAMILY_WINDOWS)) { + Native.register("kernel32"); + } + } + + native int GetShortPathNameW(WString lpszLongPath, char[] lpszShortPath, int cchBuffer); + +} diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy index 5d9961a0425ee..9a237c628de98 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy @@ -18,10 +18,15 @@ */ package org.elasticsearch.gradle.test +import com.sun.jna.Native +import com.sun.jna.WString import org.apache.tools.ant.taskdefs.condition.Os import org.gradle.api.InvalidUserDataException import org.gradle.api.Project -import org.gradle.api.Task + +import java.nio.file.Files +import java.nio.file.Path +import java.nio.file.Paths /** * A container for the files and configuration associated with a single node in a test cluster. @@ -57,6 +62,9 @@ class NodeInfo { /** config directory */ File confDir + /** data directory (as an Object, to allow lazy evaluation) */ + Object dataDir + /** THE config file */ File configFile @@ -82,24 +90,40 @@ class NodeInfo { String executable /** Path to the elasticsearch start script */ - File esScript + private Object esScript /** script to run when running in the background */ - File wrapperScript + private File wrapperScript /** buffer for ant output when starting this node */ ByteArrayOutputStream buffer = new ByteArrayOutputStream() - /** Creates a node to run as part of a cluster for the given task */ - NodeInfo(ClusterConfiguration config, int nodeNum, Project project, Task task, String nodeVersion, File sharedDir) { + /** the version of elasticsearch that this node runs */ + String nodeVersion + + /** Holds node configuration for part of a test cluster. 
*/ + NodeInfo(ClusterConfiguration config, int nodeNum, Project project, String prefix, String nodeVersion, File sharedDir) { this.config = config this.nodeNum = nodeNum this.sharedDir = sharedDir - clusterName = "${task.path.replace(':', '_').substring(1)}" - baseDir = new File(project.buildDir, "cluster/${task.name} node${nodeNum}") + if (config.clusterName != null) { + clusterName = config.clusterName + } else { + clusterName = project.path.replace(':', '_').substring(1) + '_' + prefix + } + baseDir = new File(project.buildDir, "cluster/${prefix} node${nodeNum}") pidFile = new File(baseDir, 'es.pid') + this.nodeVersion = nodeVersion homeDir = homeDir(baseDir, config.distribution, nodeVersion) confDir = confDir(baseDir, config.distribution, nodeVersion) + if (config.dataDir != null) { + if (config.numNodes != 1) { + throw new IllegalArgumentException("Cannot set data dir for integ test with more than one node") + } + dataDir = config.dataDir + } else { + dataDir = new File(homeDir, "data") + } configFile = new File(confDir, 'elasticsearch.yml') // even for rpm/deb, the logs are under home because we dont start with real services File logsDir = new File(homeDir, 'logs') @@ -116,22 +140,37 @@ class NodeInfo { args.add('/C') args.add('"') // quote the entire command wrapperScript = new File(cwd, "run.bat") - esScript = new File(homeDir, 'bin/elasticsearch.bat') + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to + * getting the short name requiring the path to already exist. + */ + esScript = "${-> binPath().resolve('elasticsearch.bat').toString()}" } else { - executable = 'sh' + executable = 'bash' wrapperScript = new File(cwd, "run") - esScript = new File(homeDir, 'bin/elasticsearch') + esScript = binPath().resolve('elasticsearch') } if (config.daemonize) { - args.add("${wrapperScript}") + if (Os.isFamily(Os.FAMILY_WINDOWS)) { + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to + * getting the short name requiring the path to already exist. + */ + args.add("${-> getShortPathName(wrapperScript.toString())}") + } else { + args.add("${wrapperScript}") + } } else { args.add("${esScript}") } - env = [ 'JAVA_HOME' : project.javaHome ] + env = ['JAVA_HOME': project.javaHome] args.addAll("-E", "node.portsfile=true") String collectedSystemProperties = config.systemProperties.collect { key, value -> "-D${key}=${value}" }.join(" ") String esJavaOpts = config.jvmArgs.isEmpty() ? collectedSystemProperties : collectedSystemProperties + " " + config.jvmArgs + if (Boolean.parseBoolean(System.getProperty('tests.asserts', 'true'))) { + esJavaOpts += " -ea -esa" + } env.put('ES_JAVA_OPTS', esJavaOpts) for (Map.Entry property : System.properties.entrySet()) { if (property.key.startsWith('tests.es.')) { @@ -139,13 +178,68 @@ class NodeInfo { args.add("${property.key.substring('tests.es.'.size())}=${property.value}") } } - env.put('ES_JVM_OPTIONS', new File(confDir, 'jvm.options')) - args.addAll("-E", "path.conf=${confDir}") + final File jvmOptionsFile = new File(confDir, 'jvm.options') + if (Os.isFamily(Os.FAMILY_WINDOWS)) { + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to + * getting the short name requiring the path to already exist. 
+ */ + env.put('ES_JVM_OPTIONS', "${-> getShortPathName(jvmOptionsFile.toString())}") + } else { + env.put('ES_JVM_OPTIONS', jvmOptionsFile) + } + if (Os.isFamily(Os.FAMILY_WINDOWS)) { + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to + * getting the short name requiring the path to already exist. + */ + args.addAll("-E", "path.conf=${-> getShortPathName(confDir.toString())}") + } else { + args.addAll("-E", "path.conf=${confDir}") + } + if (!System.properties.containsKey("tests.es.path.data")) { + if (Os.isFamily(Os.FAMILY_WINDOWS)) { + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to + * getting the short name requiring the path to already exist. This one is extra tricky because usually we rely on the node + * creating its data directory on startup but we simply can not do that here because getting the short path name requires + * the directory to already exist. Therefore, we create this directory immediately before getting the short name. + */ + args.addAll("-E", "path.data=${-> Files.createDirectories(Paths.get(dataDir.toString())); getShortPathName(dataDir.toString())}") + } else { + args.addAll("-E", "path.data=${-> dataDir.toString()}") + } + } if (Os.isFamily(Os.FAMILY_WINDOWS)) { args.add('"') // end the entire command, quoted } } + Path binPath() { + if (Os.isFamily(Os.FAMILY_WINDOWS)) { + return Paths.get(getShortPathName(new File(homeDir, 'bin').toString())) + } else { + return Paths.get(new File(homeDir, 'bin').toURI()) + } + } + + static String getShortPathName(String path) { + assert Os.isFamily(Os.FAMILY_WINDOWS) + final WString longPath = new WString("\\\\?\\" + path) + // first we get the length of the buffer needed + final int length = JNAKernel32Library.getInstance().GetShortPathNameW(longPath, null, 0) + if (length == 0) { + throw new IllegalStateException("path [" + path + "] encountered error [" + Native.getLastError() + "]") + } + final char[] shortPath = new char[length] + // knowing the length of the buffer, now we get the short name + if (JNAKernel32Library.getInstance().GetShortPathNameW(longPath, shortPath, length) == 0) { + throw new IllegalStateException("path [" + path + "] encountered error [" + Native.getLastError() + "]") + } + // we have to strip the \\?\ away from the path for cmd.exe + return Native.toString(shortPath).substring(4) + } + /** Returns debug string for the command that started this node. 
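`getShortPathName` above resolves Windows 8.3 short names through the kernel32 `GetShortPathNameW` binding in the new `JNAKernel32Library`, presumably so that paths containing spaces survive being handed to `cmd.exe` and batch scripts. Because the lookup only works for paths that already exist, every call site defers it with a lazy GString. An illustrative, Windows-only use, assuming the directory exists; the short form shown is hypothetical and depends on the volume:

```groovy
import org.apache.tools.ant.taskdefs.condition.Os
import org.elasticsearch.gradle.test.NodeInfo

if (Os.isFamily(Os.FAMILY_WINDOWS)) {
    String longPath = 'C:\\Users\\jenkins\\workspace\\elasticsearch integ tests\\build'
    // e.g. C:\Users\jenkins\WORKSP~1\ELASTI~1\build - no spaces, safe for cmd.exe
    println NodeInfo.getShortPathName(longPath)
}
```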
*/ String getCommandString() { String esCommandString = "\nNode ${nodeNum} configuration:\n" @@ -184,6 +278,19 @@ class NodeInfo { return transportPortsFile.readLines("UTF-8").get(0) } + /** Returns the file which contains the transport protocol ports for this node */ + File getTransportPortsFile() { + return transportPortsFile + } + + /** Returns the data directory for this node */ + File getDataDir() { + if (!(dataDir instanceof File)) { + return new File(dataDir) + } + return dataDir + } + /** Returns the directory elasticsearch home is contained in for the given distribution */ static File homeDir(File baseDir, String distro, String nodeVersion) { String path diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestIntegTestTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestIntegTestTask.groovy index c6463d2881185..859c89c693806 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestIntegTestTask.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestIntegTestTask.groovy @@ -20,80 +20,114 @@ package org.elasticsearch.gradle.test import com.carrotsearch.gradle.junit4.RandomizedTestingTask import org.elasticsearch.gradle.BuildPlugin +import org.gradle.api.DefaultTask import org.gradle.api.Task +import org.gradle.api.execution.TaskExecutionAdapter import org.gradle.api.internal.tasks.options.Option import org.gradle.api.plugins.JavaBasePlugin import org.gradle.api.tasks.Input -import org.gradle.util.ConfigureUtil +import org.gradle.api.tasks.TaskState + +import java.nio.charset.StandardCharsets +import java.nio.file.Files +import java.util.stream.Stream /** - * Runs integration tests, but first starts an ES cluster, - * and passes the ES cluster info as parameters to the tests. + * A wrapper task around setting up a cluster and running rest tests. */ -public class RestIntegTestTask extends RandomizedTestingTask { +public class RestIntegTestTask extends DefaultTask { + + protected ClusterConfiguration clusterConfig + + protected RandomizedTestingTask runner - ClusterConfiguration clusterConfig + protected Task clusterInit + + /** Info about nodes in the integ test cluster. Note this is *not* available until runtime. */ + List nodes /** Flag indicating whether the rest tests in the rest spec should be run. */ @Input boolean includePackaged = false public RestIntegTestTask() { - description = 'Runs rest tests against an elasticsearch cluster.' 
- group = JavaBasePlugin.VERIFICATION_GROUP - dependsOn(project.testClasses) - classpath = project.sourceSets.test.runtimeClasspath - testClassesDir = project.sourceSets.test.output.classesDir - clusterConfig = new ClusterConfiguration(project) + runner = project.tasks.create("${name}Runner", RandomizedTestingTask.class) + super.dependsOn(runner) + clusterInit = project.tasks.create(name: "${name}Cluster#init", dependsOn: project.testClasses) + runner.dependsOn(clusterInit) + runner.classpath = project.sourceSets.test.runtimeClasspath + runner.testClassesDir = project.sourceSets.test.output.classesDir + clusterConfig = project.extensions.create("${name}Cluster", ClusterConfiguration.class, project) // start with the common test configuration - configure(BuildPlugin.commonTestConfig(project)) + runner.configure(BuildPlugin.commonTestConfig(project)) // override/add more for rest tests - parallelism = '1' - include('**/*IT.class') - systemProperty('tests.rest.load_packaged', 'false') + runner.parallelism = '1' + runner.include('**/*IT.class') + runner.systemProperty('tests.rest.load_packaged', 'false') + // we pass all nodes to the rest cluster to allow the clients to round-robin between them + // this is more realistic than just talking to a single node + runner.systemProperty('tests.rest.cluster', "${-> nodes.collect{it.httpUri()}.join(",")}") + runner.systemProperty('tests.config.dir', "${-> nodes[0].confDir}") + // TODO: our "client" qa tests currently use the rest-test plugin. instead they should have their own plugin + // that sets up the test cluster and passes this transport uri instead of http uri. Until then, we pass + // both as separate sysprops + runner.systemProperty('tests.cluster', "${-> nodes[0].transportUri()}") + + // dump errors and warnings from cluster log on failure + TaskExecutionAdapter logDumpListener = new TaskExecutionAdapter() { + @Override + void afterExecute(Task task, TaskState state) { + if (state.failure != null) { + for (NodeInfo nodeInfo : nodes) { + printLogExcerpt(nodeInfo) + } + } + } + } + runner.doFirst { + project.gradle.addListener(logDumpListener) + } + runner.doLast { + project.gradle.removeListener(logDumpListener) + } // copy the rest spec/tests into the test resources RestSpecHack.configureDependencies(project) project.afterEvaluate { - dependsOn(RestSpecHack.configureTask(project, includePackaged)) + runner.dependsOn(RestSpecHack.configureTask(project, includePackaged)) } // this must run after all projects have been configured, so we know any project // references can be accessed as a fully configured project.gradle.projectsEvaluated { - NodeInfo node = ClusterFormationTasks.setup(project, this, clusterConfig) - systemProperty('tests.rest.cluster', "${-> node.httpUri()}") - systemProperty('tests.config.dir', "${-> node.confDir}") - // TODO: our "client" qa tests currently use the rest-test plugin. instead they should have their own plugin - // that sets up the test cluster and passes this transport uri instead of http uri. Until then, we pass - // both as separate sysprops - systemProperty('tests.cluster', "${-> node.transportUri()}") + if (enabled == false) { + runner.enabled = false + clusterInit.enabled = false + return // no need to add cluster formation tasks if the task won't run! 
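The runner now advertises every node, not just node 0, through `tests.rest.cluster` as a comma-separated list of `host:port` pairs so REST clients can round-robin. A hedged sketch of how test code might consume that property, assuming Apache HttpCore's `HttpHost` (which the REST client stack uses) is on the classpath:

```groovy
import org.apache.http.HttpHost

String clusterProp = System.getProperty('tests.rest.cluster')   // e.g. "127.0.0.1:9200,127.0.0.1:9201"
List<HttpHost> hosts = clusterProp.split(',').collect { String addressAndPort ->
    def (String host, String port) = addressAndPort.tokenize(':')
    new HttpHost(host, port as int)
}
// a client built from 'hosts' can now spread requests across the whole cluster instead of pinning to node 0
```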
+ } + nodes = ClusterFormationTasks.setup(project, "${name}Cluster", runner, clusterConfig) + super.dependsOn(runner.finalizedBy) } } @Option( - option = "debug-jvm", - description = "Enable debugging configuration, to allow attaching a debugger to elasticsearch." + option = "debug-jvm", + description = "Enable debugging configuration, to allow attaching a debugger to elasticsearch." ) public void setDebug(boolean enabled) { clusterConfig.debug = enabled; } - @Input - public void cluster(Closure closure) { - ConfigureUtil.configure(closure, clusterConfig) - } - - public ClusterConfiguration getCluster() { - return clusterConfig + public List getNodes() { + return nodes } @Override public Task dependsOn(Object... dependencies) { - super.dependsOn(dependencies) + runner.dependsOn(dependencies) for (Object dependency : dependencies) { if (dependency instanceof Fixture) { - finalizedBy(((Fixture)dependency).stopTask) + runner.finalizedBy(((Fixture)dependency).getStopTask()) } } return this @@ -101,11 +135,54 @@ public class RestIntegTestTask extends RandomizedTestingTask { @Override public void setDependsOn(Iterable dependencies) { - super.setDependsOn(dependencies) + runner.setDependsOn(dependencies) for (Object dependency : dependencies) { if (dependency instanceof Fixture) { - finalizedBy(((Fixture)dependency).stopTask) + runner.finalizedBy(((Fixture)dependency).getStopTask()) + } + } + } + + @Override + public Task mustRunAfter(Object... tasks) { + clusterInit.mustRunAfter(tasks) + } + + /** Print out an excerpt of the log from the given node. */ + protected static void printLogExcerpt(NodeInfo nodeInfo) { + File logFile = new File(nodeInfo.homeDir, "logs/${nodeInfo.clusterName}.log") + println("\nCluster ${nodeInfo.clusterName} - node ${nodeInfo.nodeNum} log excerpt:") + println("(full log at ${logFile})") + println('-----------------------------------------') + Stream stream = Files.lines(logFile.toPath(), StandardCharsets.UTF_8) + try { + boolean inStartup = true + boolean inExcerpt = false + int linesSkipped = 0 + for (String line : stream) { + if (line.startsWith("[")) { + inExcerpt = false // clear with the next log message + } + if (line =~ /(\[WARN\])|(\[ERROR\])/) { + inExcerpt = true // show warnings and errors + } + if (inStartup || inExcerpt) { + if (linesSkipped != 0) { + println("... SKIPPED ${linesSkipped} LINES ...") + } + println(line) + linesSkipped = 0 + } else { + ++linesSkipped + } + if (line =~ /recovered \[\d+\] indices into cluster_state/) { + inStartup = false + } } + } finally { + stream.close() } + println('=========================================') + } } diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestSpecHack.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestSpecHack.groovy index 43b5c2f6f38d2..296ae7115789f 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestSpecHack.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestSpecHack.groovy @@ -43,18 +43,22 @@ public class RestSpecHack { } /** - * Creates a task to copy the rest spec files. + * Creates a task (if necessary) to copy the rest spec files. 
* * @param project The project to add the copy task to * @param includePackagedTests true if the packaged tests should be copied, false otherwise */ public static Task configureTask(Project project, boolean includePackagedTests) { + Task copyRestSpec = project.tasks.findByName('copyRestSpec') + if (copyRestSpec != null) { + return copyRestSpec + } Map copyRestSpecProps = [ name : 'copyRestSpec', type : Copy, dependsOn: [project.configurations.restSpec, 'processTestResources'] ] - Task copyRestSpec = project.tasks.create(copyRestSpecProps) { + copyRestSpec = project.tasks.create(copyRestSpecProps) { from { project.zipTree(project.configurations.restSpec.singleFile) } include 'rest-api-spec/api/**' if (includePackagedTests) { diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestTestPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestTestPlugin.groovy index dc9aa7693889d..da1462412812a 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestTestPlugin.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestTestPlugin.groovy @@ -18,18 +18,35 @@ */ package org.elasticsearch.gradle.test +import org.elasticsearch.gradle.BuildPlugin +import org.gradle.api.InvalidUserDataException import org.gradle.api.Plugin import org.gradle.api.Project +import org.gradle.api.plugins.JavaBasePlugin -/** A plugin to add rest integration tests. Used for qa projects. */ +/** + * Adds support for starting an Elasticsearch cluster before running integration + * tests. Used in conjunction with {@link StandaloneRestTestPlugin} for qa + * projects and in conjunction with {@link BuildPlugin} for testing the rest + * client. + */ public class RestTestPlugin implements Plugin { + List REQUIRED_PLUGINS = [ + 'elasticsearch.build', + 'elasticsearch.standalone-rest-test'] @Override public void apply(Project project) { - project.pluginManager.apply(StandaloneTestBasePlugin) + if (false == REQUIRED_PLUGINS.any {project.pluginManager.hasPlugin(it)}) { + throw new InvalidUserDataException('elasticsearch.rest-test ' + + 'requires either elasticsearch.build or ' + + 'elasticsearch.standalone-rest-test') + } RestIntegTestTask integTest = project.tasks.create('integTest', RestIntegTestTask.class) - integTest.cluster.distribution = 'zip' // rest tests should run with the real zip + integTest.description = 'Runs rest tests against an elasticsearch cluster.' 
+ integTest.group = JavaBasePlugin.VERIFICATION_GROUP + integTest.clusterConfig.distribution = 'zip' // rest tests should run with the real zip integTest.mustRunAfter(project.precommit) project.check.dependsOn(integTest) } diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RunTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RunTask.groovy index f045c95740ba6..a88152d7865ff 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RunTask.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RunTask.groovy @@ -16,8 +16,9 @@ public class RunTask extends DefaultTask { clusterConfig.httpPort = 9200 clusterConfig.transportPort = 9300 clusterConfig.daemonize = false + clusterConfig.distribution = 'zip' project.afterEvaluate { - ClusterFormationTasks.setup(project, this, clusterConfig) + ClusterFormationTasks.setup(project, name, this, clusterConfig) } } diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneRestTestPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneRestTestPlugin.groovy new file mode 100644 index 0000000000000..66868aafac0b0 --- /dev/null +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneRestTestPlugin.groovy @@ -0,0 +1,64 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + + +package org.elasticsearch.gradle.test + +import com.carrotsearch.gradle.junit4.RandomizedTestingPlugin +import org.elasticsearch.gradle.BuildPlugin +import org.elasticsearch.gradle.VersionProperties +import org.elasticsearch.gradle.precommit.PrecommitTasks +import org.gradle.api.InvalidUserDataException +import org.gradle.api.Plugin +import org.gradle.api.Project +import org.gradle.api.plugins.JavaBasePlugin + +/** + * Configures the build to compile tests against Elasticsearch's test framework + * and run REST tests. Use BuildPlugin if you want to build main code as well + * as tests. 
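With the guard added above, `elasticsearch.rest-test` refuses to apply unless either `elasticsearch.build` or the new `elasticsearch.standalone-rest-test` plugin is already present. A typical test-only qa project would now be wired roughly like this (illustrative):

```groovy
// qa/some-rest-tests/build.gradle
apply plugin: 'elasticsearch.standalone-rest-test'   // test-only project, no main sources
apply plugin: 'elasticsearch.rest-test'              // creates integTest, integTestRunner and integTestCluster

integTestCluster {
    distribution = 'zip'   // already the default set by RestTestPlugin, shown here for clarity
}
```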
+ */ +public class StandaloneRestTestPlugin implements Plugin { + + @Override + public void apply(Project project) { + if (project.pluginManager.hasPlugin('elasticsearch.build')) { + throw new InvalidUserDataException('elasticsearch.standalone-test ' + + 'elasticsearch.standalone-rest-test, and elasticsearch.build ' + + 'are mutually exclusive') + } + project.pluginManager.apply(JavaBasePlugin) + project.pluginManager.apply(RandomizedTestingPlugin) + + BuildPlugin.globalBuildInfo(project) + BuildPlugin.configureRepositories(project) + + // only setup tests to build + project.sourceSets.create('test') + project.dependencies.add('testCompile', "org.elasticsearch.test:framework:${VersionProperties.elasticsearch}") + + project.eclipse.classpath.sourceSets = [project.sourceSets.test] + project.eclipse.classpath.plusConfigurations = [project.configurations.testRuntime] + project.idea.module.testSourceDirs += project.sourceSets.test.java.srcDirs + project.idea.module.scopes['TEST'] = [plus: [project.configurations.testRuntime]] + + PrecommitTasks.create(project, false) + project.check.dependsOn(project.precommit) + } +} diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneTestBasePlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneTestBasePlugin.groovy deleted file mode 100644 index af2b20e4abfab..0000000000000 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneTestBasePlugin.groovy +++ /dev/null @@ -1,54 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - - -package org.elasticsearch.gradle.test - -import com.carrotsearch.gradle.junit4.RandomizedTestingPlugin -import org.elasticsearch.gradle.BuildPlugin -import org.elasticsearch.gradle.VersionProperties -import org.elasticsearch.gradle.precommit.PrecommitTasks -import org.gradle.api.Plugin -import org.gradle.api.Project -import org.gradle.api.plugins.JavaBasePlugin - -/** Configures the build to have a rest integration test. 
*/ -public class StandaloneTestBasePlugin implements Plugin { - - @Override - public void apply(Project project) { - project.pluginManager.apply(JavaBasePlugin) - project.pluginManager.apply(RandomizedTestingPlugin) - - BuildPlugin.globalBuildInfo(project) - BuildPlugin.configureRepositories(project) - - // only setup tests to build - project.sourceSets.create('test') - project.dependencies.add('testCompile', "org.elasticsearch.test:framework:${VersionProperties.elasticsearch}") - - project.eclipse.classpath.sourceSets = [project.sourceSets.test] - project.eclipse.classpath.plusConfigurations = [project.configurations.testRuntime] - project.idea.module.testSourceDirs += project.sourceSets.test.java.srcDirs - project.idea.module.scopes['TEST'] = [plus: [project.configurations.testRuntime]] - - PrecommitTasks.create(project, false) - project.check.dependsOn(project.precommit) - } -} diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneTestPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneTestPlugin.groovy index fefd08fe4e525..de52d75c6008c 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneTestPlugin.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneTestPlugin.groovy @@ -25,12 +25,15 @@ import org.gradle.api.Plugin import org.gradle.api.Project import org.gradle.api.plugins.JavaBasePlugin -/** A plugin to add tests only. Used for QA tests that run arbitrary unit tests. */ +/** + * Configures the build to compile against Elasticsearch's test framework and + * run integration and unit tests. Use BuildPlugin if you want to build main + * code as well as tests. */ public class StandaloneTestPlugin implements Plugin { @Override public void apply(Project project) { - project.pluginManager.apply(StandaloneTestBasePlugin) + project.pluginManager.apply(StandaloneRestTestPlugin) Map testOptions = [ name: 'test', diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/VagrantFixture.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/VagrantFixture.groovy new file mode 100644 index 0000000000000..fa08a8f9c6667 --- /dev/null +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/VagrantFixture.groovy @@ -0,0 +1,54 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.gradle.test + +import org.elasticsearch.gradle.vagrant.VagrantCommandTask +import org.gradle.api.Task + +/** + * A fixture for integration tests which runs in a virtual machine launched by Vagrant. 
+ */ +class VagrantFixture extends VagrantCommandTask implements Fixture { + + private VagrantCommandTask stopTask + + public VagrantFixture() { + this.stopTask = project.tasks.create(name: "${name}#stop", type: VagrantCommandTask) { + command 'halt' + } + finalizedBy this.stopTask + } + + @Override + void setBoxName(String boxName) { + super.setBoxName(boxName) + this.stopTask.setBoxName(boxName) + } + + @Override + void setEnvironmentVars(Map environmentVars) { + super.setEnvironmentVars(environmentVars) + this.stopTask.setEnvironmentVars(environmentVars) + } + + @Override + public Task getStopTask() { + return this.stopTask + } +} diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/BatsOverVagrantTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/BatsOverVagrantTask.groovy index c68e0528c9b67..110f2fc7e8461 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/BatsOverVagrantTask.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/BatsOverVagrantTask.groovy @@ -18,14 +18,7 @@ */ package org.elasticsearch.gradle.vagrant -import org.gradle.api.DefaultTask import org.gradle.api.tasks.Input -import org.gradle.api.tasks.TaskAction -import org.gradle.logging.ProgressLoggerFactory -import org.gradle.process.internal.ExecAction -import org.gradle.process.internal.ExecActionFactory - -import javax.inject.Inject /** * Runs bats over vagrant. Pretty much like running it using Exec but with a @@ -34,12 +27,15 @@ import javax.inject.Inject public class BatsOverVagrantTask extends VagrantCommandTask { @Input - String command + String remoteCommand BatsOverVagrantTask() { - project.afterEvaluate { - args 'ssh', boxName, '--command', command - } + command = 'ssh' + } + + void setRemoteCommand(String remoteCommand) { + this.remoteCommand = Objects.requireNonNull(remoteCommand) + setArgs(['--command', remoteCommand]) } @Override diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/TapLoggerOutputStream.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/TapLoggerOutputStream.groovy index 3f980c57a49a6..e15759a1fe588 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/TapLoggerOutputStream.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/TapLoggerOutputStream.groovy @@ -19,11 +19,9 @@ package org.elasticsearch.gradle.vagrant import com.carrotsearch.gradle.junit4.LoggingOutputStream -import groovy.transform.PackageScope import org.gradle.api.GradleScriptException import org.gradle.api.logging.Logger -import org.gradle.logging.ProgressLogger -import org.gradle.logging.ProgressLoggerFactory +import org.gradle.internal.logging.progress.ProgressLogger import java.util.regex.Matcher diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantCommandTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantCommandTask.groovy index d79c2533fabaf..aab120e8d049a 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantCommandTask.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantCommandTask.groovy @@ -21,9 +21,15 @@ package org.elasticsearch.gradle.vagrant import org.apache.commons.io.output.TeeOutputStream import org.elasticsearch.gradle.LoggedExec import org.gradle.api.tasks.Input -import org.gradle.logging.ProgressLoggerFactory +import org.gradle.api.tasks.Optional +import org.gradle.api.tasks.TaskAction +import org.gradle.internal.logging.progress.ProgressLoggerFactory import 
javax.inject.Inject +import java.util.concurrent.CountDownLatch +import java.util.concurrent.locks.Lock +import java.util.concurrent.locks.ReadWriteLock +import java.util.concurrent.locks.ReentrantLock /** * Runs a vagrant command. Pretty much like Exec task but with a nicer output @@ -31,17 +37,51 @@ import javax.inject.Inject */ public class VagrantCommandTask extends LoggedExec { + @Input + String command + + @Input @Optional + String subcommand + @Input String boxName + @Input + Map environmentVars + public VagrantCommandTask() { executable = 'vagrant' + + // We're using afterEvaluate here to slot in some logic that captures configurations and + // modifies the command line right before the main execution happens. The reason that we + // call doFirst instead of just doing the work in the afterEvaluate is that the latter + // restricts how subclasses can extend functionality. Calling afterEvaluate is like having + // all the logic of a task happening at construction time, instead of at execution time + // where a subclass can override or extend the logic. project.afterEvaluate { - // It'd be nice if --machine-readable were, well, nice - standardOutput = new TeeOutputStream(standardOutput, createLoggerOutputStream()) + doFirst { + if (environmentVars != null) { + environment environmentVars + } + + // Build our command line for vagrant + def vagrantCommand = [executable, command] + if (subcommand != null) { + vagrantCommand = vagrantCommand + subcommand + } + commandLine([*vagrantCommand, boxName, *args]) + + // It'd be nice if --machine-readable were, well, nice + standardOutput = new TeeOutputStream(standardOutput, createLoggerOutputStream()) + } } } + @Inject + ProgressLoggerFactory getProgressLoggerFactory() { + throw new UnsupportedOperationException() + } + protected OutputStream createLoggerOutputStream() { return new VagrantLoggerOutputStream( command: commandLine.join(' '), @@ -50,9 +90,4 @@ public class VagrantCommandTask extends LoggedExec { stuff starts with ==> $box */ squashedPrefix: "==> $boxName: ") } - - @Inject - ProgressLoggerFactory getProgressLoggerFactory() { - throw new UnsupportedOperationException(); - } } diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantLoggerOutputStream.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantLoggerOutputStream.groovy index 331a638b5cade..e899c0171298b 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantLoggerOutputStream.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantLoggerOutputStream.groovy @@ -19,9 +19,7 @@ package org.elasticsearch.gradle.vagrant import com.carrotsearch.gradle.junit4.LoggingOutputStream -import org.gradle.api.logging.Logger -import org.gradle.logging.ProgressLogger -import org.gradle.logging.ProgressLoggerFactory +import org.gradle.internal.logging.progress.ProgressLogger /** * Adapts an OutputStream being written to by vagrant into a ProcessLogger. It diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantPropertiesExtension.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantPropertiesExtension.groovy new file mode 100644 index 0000000000000..f16913d5be64a --- /dev/null +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantPropertiesExtension.groovy @@ -0,0 +1,76 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.gradle.vagrant + +import org.gradle.api.tasks.Input + +class VagrantPropertiesExtension { + + @Input + List boxes + + @Input + Long testSeed + + @Input + String formattedTestSeed + + @Input + String upgradeFromVersion + + @Input + List upgradeFromVersions + + @Input + String batsDir + + @Input + Boolean inheritTests + + @Input + Boolean inheritTestArchives + + @Input + Boolean inheritTestUtils + + VagrantPropertiesExtension(List availableBoxes) { + this.boxes = availableBoxes + this.batsDir = 'src/test/resources/packaging' + } + + void boxes(String... boxes) { + this.boxes = Arrays.asList(boxes) + } + + void setBatsDir(String batsDir) { + this.batsDir = batsDir + } + + void setInheritTests(Boolean inheritTests) { + this.inheritTests = inheritTests + } + + void setInheritTestArchives(Boolean inheritTestArchives) { + this.inheritTestArchives = inheritTestArchives + } + + void setInheritTestUtils(Boolean inheritTestUtils) { + this.inheritTestUtils = inheritTestUtils + } +} diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantSupportPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantSupportPlugin.groovy new file mode 100644 index 0000000000000..9dfe487e83018 --- /dev/null +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantSupportPlugin.groovy @@ -0,0 +1,127 @@ +package org.elasticsearch.gradle.vagrant + +import org.gradle.api.GradleException +import org.gradle.api.InvalidUserDataException +import org.gradle.api.Plugin +import org.gradle.api.Project +import org.gradle.process.ExecResult +import org.gradle.process.internal.ExecException + +/** + * Global configuration for if Vagrant tasks are supported in this + * build environment. + */ +class VagrantSupportPlugin implements Plugin { + + @Override + void apply(Project project) { + if (project.rootProject.ext.has('vagrantEnvChecksDone') == false) { + Map vagrantInstallation = getVagrantInstallation(project) + Map virtualBoxInstallation = getVirtualBoxInstallation(project) + + project.rootProject.ext.vagrantInstallation = vagrantInstallation + project.rootProject.ext.virtualBoxInstallation = virtualBoxInstallation + project.rootProject.ext.vagrantSupported = vagrantInstallation.supported && virtualBoxInstallation.supported + project.rootProject.ext.vagrantEnvChecksDone = true + + // Finding that HOME needs to be set when performing vagrant updates + String homeLocation = System.getenv("HOME") + if (project.rootProject.ext.vagrantSupported && homeLocation == null) { + throw new GradleException("Could not locate \$HOME environment variable. 
Vagrant is enabled " + + "and requires \$HOME to be set to function properly.") + } + } + + addVerifyInstallationTasks(project) + } + + private Map getVagrantInstallation(Project project) { + try { + ByteArrayOutputStream pipe = new ByteArrayOutputStream() + ExecResult runResult = project.exec { + commandLine 'vagrant', '--version' + standardOutput pipe + ignoreExitValue true + } + String version = pipe.toString().trim() + if (runResult.exitValue == 0) { + if (version ==~ /Vagrant 1\.(8\.[6-9]|9\.[0-9])+/ || version ==~ /Vagrant 2\.[0-9]+\.[0-9]+/) { + return [ 'supported' : true ] + } else { + return [ 'supported' : false, + 'info' : "Illegal version of vagrant [${version}]. Need [Vagrant 1.8.6+]" ] + } + } else { + return [ 'supported' : false, + 'info' : "Could not read installed vagrant version:\n" + version ] + } + } catch (ExecException e) { + // Exec still throws this if it cannot find the command, regardless if ignoreExitValue is set. + // Swallow error. Vagrant isn't installed. Don't halt the build here. + return [ 'supported' : false, 'info' : "Could not find vagrant: " + e.message ] + } + } + + private Map getVirtualBoxInstallation(Project project) { + try { + ByteArrayOutputStream pipe = new ByteArrayOutputStream() + ExecResult runResult = project.exec { + commandLine 'vboxmanage', '--version' + standardOutput = pipe + ignoreExitValue true + } + String version = pipe.toString().trim() + if (runResult.exitValue == 0) { + try { + String[] versions = version.split('\\.') + int major = Integer.parseInt(versions[0]) + int minor = Integer.parseInt(versions[1]) + if ((major < 5) || (major == 5 && minor < 1)) { + return [ 'supported' : false, + 'info' : "Illegal version of virtualbox [${version}]. Need [5.1+]" ] + } else { + return [ 'supported' : true ] + } + } catch (NumberFormatException | ArrayIndexOutOfBoundsException e) { + return [ 'supported' : false, + 'info' : "Unable to parse version of virtualbox [${version}]. Required [5.1+]" ] + } + } else { + return [ 'supported': false, 'info': "Could not read installed virtualbox version:\n" + version ] + } + } catch (ExecException e) { + // Exec still throws this if it cannot find the command, regardless if ignoreExitValue is set. + // Swallow error. VirtualBox isn't installed. Don't halt the build here. 
+ return [ 'supported' : false, 'info' : "Could not find virtualbox: " + e.message ] + } + } + + private void addVerifyInstallationTasks(Project project) { + createCheckVagrantVersionTask(project) + createCheckVirtualBoxVersionTask(project) + } + + private void createCheckVagrantVersionTask(Project project) { + project.tasks.create('vagrantCheckVersion') { + description 'Check the Vagrant version' + group 'Verification' + doLast { + if (project.rootProject.vagrantInstallation.supported == false) { + throw new InvalidUserDataException(project.rootProject.vagrantInstallation.info) + } + } + } + } + + private void createCheckVirtualBoxVersionTask(Project project) { + project.tasks.create('virtualboxCheckVersion') { + description 'Check the Virtualbox version' + group 'Verification' + doLast { + if (project.rootProject.virtualBoxInstallation.supported == false) { + throw new InvalidUserDataException(project.rootProject.virtualBoxInstallation.info) + } + } + } + } +} diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantTestPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantTestPlugin.groovy new file mode 100644 index 0000000000000..d0c529b263285 --- /dev/null +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantTestPlugin.groovy @@ -0,0 +1,403 @@ +package org.elasticsearch.gradle.vagrant + +import org.elasticsearch.gradle.FileContentsTask +import org.gradle.api.* +import org.gradle.api.artifacts.dsl.RepositoryHandler +import org.gradle.api.execution.TaskExecutionAdapter +import org.gradle.api.internal.artifacts.dependencies.DefaultProjectDependency +import org.gradle.api.tasks.Copy +import org.gradle.api.tasks.Delete +import org.gradle.api.tasks.Exec +import org.gradle.api.tasks.TaskState + +class VagrantTestPlugin implements Plugin { + + /** All available boxes **/ + static List BOXES = [ + 'centos-6', + 'centos-7', + 'debian-8', + 'debian-9', + 'fedora-25', + 'fedora-26', + 'oel-6', + 'oel-7', + 'opensuse-42', + 'sles-12', + 'ubuntu-1404', + 'ubuntu-1604' + ] + + /** Boxes used when sampling the tests **/ + static List SAMPLE = [ + 'centos-7', + 'ubuntu-1404', + ] + + /** All onboarded archives by default, available for Bats tests even if not used **/ + static List DISTRIBUTION_ARCHIVES = ['tar', 'rpm', 'deb'] + + /** Packages onboarded for upgrade tests **/ + static List UPGRADE_FROM_ARCHIVES = ['rpm', 'deb'] + + private static final BATS = 'bats' + private static final String BATS_TEST_COMMAND ="cd \$BATS_ARCHIVES && sudo bats --tap \$BATS_TESTS/*.$BATS" + private static final String PLATFORM_TEST_COMMAND ="rm -rf ~/elasticsearch && rsync -r /elasticsearch/ ~/elasticsearch && cd ~/elasticsearch && \$GRADLE_HOME/bin/gradle test integTest" + + @Override + void apply(Project project) { + + // Creates the Vagrant extension for the project + project.extensions.create('esvagrant', VagrantPropertiesExtension, listVagrantBoxes(project)) + + // Add required repositories for Bats tests + configureBatsRepositories(project) + + // Creates custom configurations for Bats testing files (and associated scripts and archives) + createBatsConfiguration(project) + + // Creates all the main Vagrant tasks + createVagrantTasks(project) + + if (project.extensions.esvagrant.boxes == null || project.extensions.esvagrant.boxes.size() == 0) { + throw new InvalidUserDataException('Vagrant boxes cannot be null or empty for esvagrant') + } + + for (String box : project.extensions.esvagrant.boxes) { + if (BOXES.contains(box) == false) { + throw new 
InvalidUserDataException("Vagrant box [${box}] not found, available virtual machines are ${BOXES}") + } + } + + // Creates all tasks related to the Vagrant boxes + createVagrantBoxesTasks(project) + } + + private List listVagrantBoxes(Project project) { + String vagrantBoxes = project.getProperties().get('vagrant.boxes', 'sample') + if (vagrantBoxes == 'sample') { + return SAMPLE + } else if (vagrantBoxes == 'all') { + return BOXES + } else { + return vagrantBoxes.split(',') + } + } + + private static void configureBatsRepositories(Project project) { + RepositoryHandler repos = project.repositories + + // Try maven central first, it'll have releases before 5.0.0 + repos.mavenCentral() + + /* Setup a repository that tries to download from + https://artifacts.elastic.co/downloads/elasticsearch/[module]-[revision].[ext] + which should work for 5.0.0+. This isn't a real ivy repository but gradle + is fine with that */ + repos.ivy { + artifactPattern "https://artifacts.elastic.co/downloads/elasticsearch/[module]-[revision].[ext]" + } + } + + private static void createBatsConfiguration(Project project) { + project.configurations.create(BATS) + + final long seed + final String formattedSeed + String maybeTestsSeed = System.getProperty("tests.seed") + if (maybeTestsSeed != null) { + if (maybeTestsSeed.trim().isEmpty()) { + throw new GradleException("explicit tests.seed cannot be empty") + } + String masterSeed = maybeTestsSeed.tokenize(':').get(0) + seed = new BigInteger(masterSeed, 16).longValue() + formattedSeed = maybeTestsSeed + } else { + seed = new Random().nextLong() + formattedSeed = String.format("%016X", seed) + } + + String upgradeFromVersion = System.getProperty("tests.packaging.upgradeVersion"); + if (upgradeFromVersion == null) { + upgradeFromVersion = project.indexCompatVersions[new Random(seed).nextInt(project.indexCompatVersions.size())] + } + + DISTRIBUTION_ARCHIVES.each { + // Adds a dependency for the current version + project.dependencies.add(BATS, project.dependencies.project(path: ":distribution:${it}", configuration: 'archives')) + } + + UPGRADE_FROM_ARCHIVES.each { + // The version of elasticsearch that we upgrade *from* + project.dependencies.add(BATS, "org.elasticsearch.distribution.${it}:elasticsearch:${upgradeFromVersion}@${it}") + } + + project.extensions.esvagrant.testSeed = seed + project.extensions.esvagrant.formattedTestSeed = formattedSeed + project.extensions.esvagrant.upgradeFromVersion = upgradeFromVersion + } + + private static void createCleanTask(Project project) { + project.tasks.create('clean', Delete.class) { + description 'Clean the project build directory' + group 'Build' + delete project.buildDir + } + } + + private static void createStopTask(Project project) { + project.tasks.create('stop') { + description 'Stop any tasks from tests that still may be running' + group 'Verification' + } + } + + private static void createSmokeTestTask(Project project) { + project.tasks.create('vagrantSmokeTest') { + description 'Smoke test the specified vagrant boxes' + group 'Verification' + } + } + + private static void createPrepareVagrantTestEnvTask(Project project) { + File batsDir = new File("${project.buildDir}/${BATS}") + + Task createBatsDirsTask = project.tasks.create('createBatsDirs') + createBatsDirsTask.outputs.dir batsDir + createBatsDirsTask.doLast { + batsDir.mkdirs() + } + + Copy copyBatsArchives = project.tasks.create('copyBatsArchives', Copy) { + dependsOn createBatsDirsTask + into "${batsDir}/archives" + from project.configurations[BATS] + } + + 
Copy copyBatsTests = project.tasks.create('copyBatsTests', Copy) { + dependsOn createBatsDirsTask + into "${batsDir}/tests" + from { + "${project.extensions.esvagrant.batsDir}/tests" + } + } + + Copy copyBatsUtils = project.tasks.create('copyBatsUtils', Copy) { + dependsOn createBatsDirsTask + into "${batsDir}/utils" + from { + "${project.extensions.esvagrant.batsDir}/utils" + } + } + + // Now we iterate over dependencies of the bats configuration. When a project dependency is found, + // we bring back its own archives, test files or test utils. + project.afterEvaluate { + project.configurations.bats.dependencies.findAll {it.targetConfiguration == BATS }.each { d -> + if (d instanceof DefaultProjectDependency) { + DefaultProjectDependency externalBatsDependency = (DefaultProjectDependency) d + Project externalBatsProject = externalBatsDependency.dependencyProject + String externalBatsDir = externalBatsProject.extensions.esvagrant.batsDir + + if (project.extensions.esvagrant.inheritTests) { + copyBatsTests.from(externalBatsProject.files("${externalBatsDir}/tests")) + } + if (project.extensions.esvagrant.inheritTestArchives) { + copyBatsArchives.from(externalBatsDependency.projectConfiguration.files) + } + if (project.extensions.esvagrant.inheritTestUtils) { + copyBatsUtils.from(externalBatsProject.files("${externalBatsDir}/utils")) + } + } + } + } + + Task createVersionFile = project.tasks.create('createVersionFile', FileContentsTask) { + dependsOn createBatsDirsTask + file "${batsDir}/archives/version" + contents project.version + } + + Task createUpgradeFromFile = project.tasks.create('createUpgradeFromFile', FileContentsTask) { + dependsOn createBatsDirsTask + file "${batsDir}/archives/upgrade_from_version" + contents project.extensions.esvagrant.upgradeFromVersion + } + + Task vagrantSetUpTask = project.tasks.create('setupBats') + vagrantSetUpTask.dependsOn 'vagrantCheckVersion' + vagrantSetUpTask.dependsOn copyBatsTests, copyBatsUtils, copyBatsArchives, createVersionFile, createUpgradeFromFile + } + + private static void createPackagingTestTask(Project project) { + project.tasks.create('packagingTest') { + group 'Verification' + description "Tests yum/apt packages using vagrant and bats.\n" + + " Specify the vagrant boxes to test using the gradle property 'vagrant.boxes'.\n" + + " 'sample' can be used to test a single yum and apt box. 'all' can be used to\n" + + " test all available boxes. The available boxes are: \n" + + " ${BOXES}" + dependsOn 'vagrantCheckVersion' + } + } + + private static void createPlatformTestTask(Project project) { + project.tasks.create('platformTest') { + group 'Verification' + description "Test unit and integ tests on different platforms using vagrant.\n" + + " Specify the vagrant boxes to test using the gradle property 'vagrant.boxes'.\n" + + " 'all' can be used to test all available boxes. 
The available boxes are: \n" + + " ${BOXES}" + dependsOn 'vagrantCheckVersion' + } + } + + private static void createVagrantTasks(Project project) { + createCleanTask(project) + createStopTask(project) + createSmokeTestTask(project) + createPrepareVagrantTestEnvTask(project) + createPackagingTestTask(project) + createPlatformTestTask(project) + } + + private static void createVagrantBoxesTasks(Project project) { + assert project.extensions.esvagrant.boxes != null + + assert project.tasks.stop != null + Task stop = project.tasks.stop + + assert project.tasks.vagrantSmokeTest != null + Task vagrantSmokeTest = project.tasks.vagrantSmokeTest + + assert project.tasks.vagrantCheckVersion != null + Task vagrantCheckVersion = project.tasks.vagrantCheckVersion + + assert project.tasks.virtualboxCheckVersion != null + Task virtualboxCheckVersion = project.tasks.virtualboxCheckVersion + + assert project.tasks.setupBats != null + Task setupBats = project.tasks.setupBats + + assert project.tasks.packagingTest != null + Task packagingTest = project.tasks.packagingTest + + assert project.tasks.platformTest != null + Task platformTest = project.tasks.platformTest + + /* + * We always use the main project.rootDir as Vagrant's current working directory (VAGRANT_CWD) + * so that boxes are not duplicated for every Gradle project that use this VagrantTestPlugin. + */ + def vagrantEnvVars = [ + 'VAGRANT_CWD' : "${project.rootDir.absolutePath}", + 'VAGRANT_VAGRANTFILE' : 'Vagrantfile', + 'VAGRANT_PROJECT_DIR' : "${project.projectDir.absolutePath}" + ] + + // Each box gets it own set of tasks + for (String box : BOXES) { + String boxTask = box.capitalize().replace('-', '') + + // always add a halt task for all boxes, so clean makes sure they are all shutdown + Task halt = project.tasks.create("vagrant${boxTask}#halt", VagrantCommandTask) { + command 'halt' + boxName box + environmentVars vagrantEnvVars + } + stop.dependsOn(halt) + + Task update = project.tasks.create("vagrant${boxTask}#update", VagrantCommandTask) { + command 'box' + subcommand 'update' + boxName box + environmentVars vagrantEnvVars + dependsOn vagrantCheckVersion, virtualboxCheckVersion + } + update.mustRunAfter(setupBats) + + Task up = project.tasks.create("vagrant${boxTask}#up", VagrantCommandTask) { + command 'up' + boxName box + environmentVars vagrantEnvVars + /* Its important that we try to reprovision the box even if it already + exists. That way updates to the vagrant configuration take automatically. + That isn't to say that the updates will always be compatible. Its ok to + just destroy the boxes if they get busted but that is a manual step + because its slow-ish. */ + /* We lock the provider to virtualbox because the Vagrantfile specifies + lots of boxes that only work properly in virtualbox. Virtualbox is + vagrant's default but its possible to change that default and folks do. + But the boxes that we use are unlikely to work properly with other + virtualization providers. Thus the lock. */ + args '--provision', '--provider', 'virtualbox' + /* It'd be possible to check if the box is already up here and output + SKIPPED but that would require running vagrant status which is slow! 
*/ + dependsOn update + } + + Task smoke = project.tasks.create("vagrant${boxTask}#smoketest", Exec) { + environment vagrantEnvVars + dependsOn up + finalizedBy halt + commandLine 'vagrant', 'ssh', box, '--command', + "set -o pipefail && echo 'Hello from ${project.path}' | sed -ue 's/^/ ${box}: /'" + } + vagrantSmokeTest.dependsOn(smoke) + + Task packaging = project.tasks.create("vagrant${boxTask}#packagingTest", BatsOverVagrantTask) { + remoteCommand BATS_TEST_COMMAND + boxName box + environmentVars vagrantEnvVars + dependsOn up, setupBats + finalizedBy halt + } + + TaskExecutionAdapter packagingReproListener = new TaskExecutionAdapter() { + @Override + void afterExecute(Task task, TaskState state) { + if (state.failure != null) { + println "REPRODUCE WITH: gradle ${packaging.path} " + + "-Dtests.seed=${project.extensions.esvagrant.formattedTestSeed} " + } + } + } + packaging.doFirst { + project.gradle.addListener(packagingReproListener) + } + packaging.doLast { + project.gradle.removeListener(packagingReproListener) + } + if (project.extensions.esvagrant.boxes.contains(box)) { + packagingTest.dependsOn(packaging) + } + + Task platform = project.tasks.create("vagrant${boxTask}#platformTest", VagrantCommandTask) { + command 'ssh' + boxName box + environmentVars vagrantEnvVars + dependsOn up + finalizedBy halt + args '--command', PLATFORM_TEST_COMMAND + " -Dtests.seed=${-> project.extensions.esvagrant.formattedTestSeed}" + } + TaskExecutionAdapter platformReproListener = new TaskExecutionAdapter() { + @Override + void afterExecute(Task task, TaskState state) { + if (state.failure != null) { + println "REPRODUCE WITH: gradle ${platform.path} " + + "-Dtests.seed=${project.extensions.esvagrant.formattedTestSeed} " + } + } + } + platform.doFirst { + project.gradle.addListener(platformReproListener) + } + platform.doLast { + project.gradle.removeListener(platformReproListener) + } + if (project.extensions.esvagrant.boxes.contains(box)) { + platformTest.dependsOn(platform) + } + } + } +} diff --git a/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheck.java b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheck.java index cbfa31d1aaf5b..9bd14675d34a4 100644 --- a/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheck.java +++ b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheck.java @@ -28,6 +28,7 @@ import java.nio.file.Paths; import java.nio.file.attribute.BasicFileAttributes; import java.util.HashSet; +import java.util.Objects; import java.util.Set; /** @@ -49,6 +50,7 @@ public static void main(String[] args) throws IOException { Path rootPath = null; boolean skipIntegTestsInDisguise = false; boolean selfTest = false; + boolean checkMainClasses = false; for (int i = 0; i < args.length; i++) { String arg = args[i]; switch (arg) { @@ -64,6 +66,9 @@ public static void main(String[] args) throws IOException { case "--self-test": selfTest = true; break; + case "--main": + checkMainClasses = true; + break; case "--": rootPath = Paths.get(args[++i]); break; @@ -73,28 +78,43 @@ public static void main(String[] args) throws IOException { } NamingConventionsCheck check = new NamingConventionsCheck(testClass, integTestClass); - check.check(rootPath, skipIntegTestsInDisguise); + if (checkMainClasses) { + check.checkMain(rootPath); + } else { + check.checkTests(rootPath, skipIntegTestsInDisguise); + } if (selfTest) { - assertViolation("WrongName", check.missingSuffix); - assertViolation("WrongNameTheSecond", check.missingSuffix); - 
assertViolation("DummyAbstractTests", check.notRunnable); - assertViolation("DummyInterfaceTests", check.notRunnable); - assertViolation("InnerTests", check.innerClasses); - assertViolation("NotImplementingTests", check.notImplementing); - assertViolation("PlainUnit", check.pureUnitTest); + if (checkMainClasses) { + assertViolation(NamingConventionsCheckInMainTests.class.getName(), check.testsInMain); + assertViolation(NamingConventionsCheckInMainIT.class.getName(), check.testsInMain); + } else { + assertViolation("WrongName", check.missingSuffix); + assertViolation("WrongNameTheSecond", check.missingSuffix); + assertViolation("DummyAbstractTests", check.notRunnable); + assertViolation("DummyInterfaceTests", check.notRunnable); + assertViolation("InnerTests", check.innerClasses); + assertViolation("NotImplementingTests", check.notImplementing); + assertViolation("PlainUnit", check.pureUnitTest); + } } // Now we should have no violations - assertNoViolations("Not all subclasses of " + check.testClass.getSimpleName() - + " match the naming convention. Concrete classes must end with [Tests]", check.missingSuffix); + assertNoViolations( + "Not all subclasses of " + check.testClass.getSimpleName() + + " match the naming convention. Concrete classes must end with [Tests]", + check.missingSuffix); assertNoViolations("Classes ending with [Tests] are abstract or interfaces", check.notRunnable); assertNoViolations("Found inner classes that are tests, which are excluded from the test runner", check.innerClasses); assertNoViolations("Pure Unit-Test found must subclass [" + check.testClass.getSimpleName() + "]", check.pureUnitTest); assertNoViolations("Classes ending with [Tests] must subclass [" + check.testClass.getSimpleName() + "]", check.notImplementing); + assertNoViolations( + "Classes ending with [Tests] or [IT] or extending [" + check.testClass.getSimpleName() + "] must be in src/test/java", + check.testsInMain); if (skipIntegTestsInDisguise == false) { - assertNoViolations("Subclasses of " + check.integTestClass.getSimpleName() + - " should end with IT as they are integration tests", check.integTestsInDisguise); + assertNoViolations( + "Subclasses of " + check.integTestClass.getSimpleName() + " should end with IT as they are integration tests", + check.integTestsInDisguise); } } @@ -104,84 +124,76 @@ public static void main(String[] args) throws IOException { private final Set> integTestsInDisguise = new HashSet<>(); private final Set> notRunnable = new HashSet<>(); private final Set> innerClasses = new HashSet<>(); + private final Set> testsInMain = new HashSet<>(); private final Class testClass; private final Class integTestClass; public NamingConventionsCheck(Class testClass, Class integTestClass) { - this.testClass = testClass; + this.testClass = Objects.requireNonNull(testClass, "--test-class is required"); this.integTestClass = integTestClass; } - public void check(Path rootPath, boolean skipTestsInDisguised) throws IOException { - Files.walkFileTree(rootPath, new FileVisitor() { - /** - * The package name of the directory we are currently visiting. Kept as a string rather than something fancy because we load - * just about every class and doing so requires building a string out of it anyway. At least this way we don't need to build the - * first part of the string over and over and over again. 
- */ - private String packageName; - + public void checkTests(Path rootPath, boolean skipTestsInDisguised) throws IOException { + Files.walkFileTree(rootPath, new TestClassVisitor() { @Override - public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException { - // First we visit the root directory - if (packageName == null) { - // And it package is empty string regardless of the directory name - packageName = ""; - } else { - packageName += dir.getFileName() + "."; + protected void visitTestClass(Class clazz) { + if (skipTestsInDisguised == false && integTestClass.isAssignableFrom(clazz)) { + integTestsInDisguise.add(clazz); + } + if (Modifier.isAbstract(clazz.getModifiers()) || Modifier.isInterface(clazz.getModifiers())) { + notRunnable.add(clazz); + } else if (isTestCase(clazz) == false) { + notImplementing.add(clazz); + } else if (Modifier.isStatic(clazz.getModifiers())) { + innerClasses.add(clazz); } - return FileVisitResult.CONTINUE; } @Override - public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException { - // Go up one package by jumping back to the second to last '.' - packageName = packageName.substring(0, 1 + packageName.lastIndexOf('.', packageName.length() - 2)); - return FileVisitResult.CONTINUE; + protected void visitIntegrationTestClass(Class clazz) { + if (isTestCase(clazz) == false) { + notImplementing.add(clazz); + } } @Override - public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException { - String filename = file.getFileName().toString(); - if (filename.endsWith(".class")) { - String className = filename.substring(0, filename.length() - ".class".length()); - Class clazz = loadClassWithoutInitializing(packageName + className); - if (clazz.getName().endsWith("Tests")) { - if (skipTestsInDisguised == false && integTestClass.isAssignableFrom(clazz)) { - integTestsInDisguise.add(clazz); - } - if (Modifier.isAbstract(clazz.getModifiers()) || Modifier.isInterface(clazz.getModifiers())) { - notRunnable.add(clazz); - } else if (isTestCase(clazz) == false) { - notImplementing.add(clazz); - } else if (Modifier.isStatic(clazz.getModifiers())) { - innerClasses.add(clazz); - } - } else if (clazz.getName().endsWith("IT")) { - if (isTestCase(clazz) == false) { - notImplementing.add(clazz); - } - } else if (Modifier.isAbstract(clazz.getModifiers()) == false && Modifier.isInterface(clazz.getModifiers()) == false) { - if (isTestCase(clazz)) { - missingSuffix.add(clazz); - } else if (junit.framework.Test.class.isAssignableFrom(clazz)) { - pureUnitTest.add(clazz); - } - } + protected void visitOtherClass(Class clazz) { + if (Modifier.isAbstract(clazz.getModifiers()) || Modifier.isInterface(clazz.getModifiers())) { + return; + } + if (isTestCase(clazz)) { + missingSuffix.add(clazz); + } else if (junit.framework.Test.class.isAssignableFrom(clazz)) { + pureUnitTest.add(clazz); } - return FileVisitResult.CONTINUE; + } + }); + } + + public void checkMain(Path rootPath) throws IOException { + Files.walkFileTree(rootPath, new TestClassVisitor() { + @Override + protected void visitTestClass(Class clazz) { + testsInMain.add(clazz); } - private boolean isTestCase(Class clazz) { - return testClass.isAssignableFrom(clazz); + @Override + protected void visitIntegrationTestClass(Class clazz) { + testsInMain.add(clazz); } @Override - public FileVisitResult visitFileFailed(Path file, IOException exc) throws IOException { - throw exc; + protected void visitOtherClass(Class clazz) { + if 
(Modifier.isAbstract(clazz.getModifiers()) || Modifier.isInterface(clazz.getModifiers())) { + return; + } + if (isTestCase(clazz)) { + testsInMain.add(clazz); + } } }); + } /** @@ -203,7 +215,7 @@ private static void assertNoViolations(String message, Set> set) { * similar enough. */ private static void assertViolation(String className, Set> set) { - className = "org.elasticsearch.test.NamingConventionsCheckBadClasses$" + className; + className = className.startsWith("org") ? className : "org.elasticsearch.test.NamingConventionsCheckBadClasses$" + className; if (false == set.remove(loadClassWithoutInitializing(className))) { System.err.println("Error in NamingConventionsCheck! Expected [" + className + "] to be a violation but wasn't."); System.exit(1); @@ -229,4 +241,74 @@ static Class loadClassWithoutInitializing(String name) { throw new RuntimeException(e); } } + + abstract class TestClassVisitor implements FileVisitor { + /** + * The package name of the directory we are currently visiting. Kept as a string rather than something fancy because we load + * just about every class and doing so requires building a string out of it anyway. At least this way we don't need to build the + * first part of the string over and over and over again. + */ + private String packageName; + + /** + * Visit classes named like a test. + */ + protected abstract void visitTestClass(Class clazz); + /** + * Visit classes named like an integration test. + */ + protected abstract void visitIntegrationTestClass(Class clazz); + /** + * Visit classes not named like a test at all. + */ + protected abstract void visitOtherClass(Class clazz); + + @Override + public final FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException { + // First we visit the root directory + if (packageName == null) { + // And it package is empty string regardless of the directory name + packageName = ""; + } else { + packageName += dir.getFileName() + "."; + } + return FileVisitResult.CONTINUE; + } + + @Override + public final FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException { + // Go up one package by jumping back to the second to last '.' + packageName = packageName.substring(0, 1 + packageName.lastIndexOf('.', packageName.length() - 2)); + return FileVisitResult.CONTINUE; + } + + @Override + public final FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException { + String filename = file.getFileName().toString(); + if (filename.endsWith(".class")) { + String className = filename.substring(0, filename.length() - ".class".length()); + Class clazz = loadClassWithoutInitializing(packageName + className); + if (clazz.getName().endsWith("Tests")) { + visitTestClass(clazz); + } else if (clazz.getName().endsWith("IT")) { + visitIntegrationTestClass(clazz); + } else { + visitOtherClass(clazz); + } + } + return FileVisitResult.CONTINUE; + } + + /** + * Is this class a test case? 
+ */ + protected boolean isTestCase(Class clazz) { + return testClass.isAssignableFrom(clazz); + } + + @Override + public final FileVisitResult visitFileFailed(Path file, IOException exc) throws IOException { + throw exc; + } + } } diff --git a/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainIT.java b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainIT.java new file mode 100644 index 0000000000000..46adc7f065b16 --- /dev/null +++ b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainIT.java @@ -0,0 +1,26 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.test; + +/** + * This class should fail the naming conventions self test. + */ +public class NamingConventionsCheckInMainIT { +} diff --git a/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainTests.java b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainTests.java new file mode 100644 index 0000000000000..27c0b41eb3f6a --- /dev/null +++ b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainTests.java @@ -0,0 +1,26 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.test; + +/** + * This class should fail the naming conventions self test. + */ +public class NamingConventionsCheckInMainTests { +} diff --git a/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.standalone-rest-test.properties b/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.standalone-rest-test.properties new file mode 100644 index 0000000000000..2daf4dc27c0ec --- /dev/null +++ b/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.standalone-rest-test.properties @@ -0,0 +1,20 @@ +# +# Licensed to Elasticsearch under one or more contributor +# license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright +# ownership. 
Elasticsearch licenses this file to you under +# the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +implementation-class=org.elasticsearch.gradle.test.StandaloneRestTestPlugin diff --git a/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.vagrant.properties b/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.vagrant.properties new file mode 100644 index 0000000000000..844310fa9d7fa --- /dev/null +++ b/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.vagrant.properties @@ -0,0 +1 @@ +implementation-class=org.elasticsearch.gradle.vagrant.VagrantTestPlugin diff --git a/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.vagrantsupport.properties b/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.vagrantsupport.properties new file mode 100644 index 0000000000000..73a3f4123496c --- /dev/null +++ b/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.vagrantsupport.properties @@ -0,0 +1 @@ +implementation-class=org.elasticsearch.gradle.vagrant.VagrantSupportPlugin \ No newline at end of file diff --git a/buildSrc/src/main/resources/checkstyle_suppressions.xml b/buildSrc/src/main/resources/checkstyle_suppressions.xml index 14c2bc8ca5a5f..7774fd8b13fe5 100644 --- a/buildSrc/src/main/resources/checkstyle_suppressions.xml +++ b/buildSrc/src/main/resources/checkstyle_suppressions.xml @@ -10,11 +10,12 @@ - - + + + @@ -29,7 +30,6 @@ - @@ -38,7 +38,6 @@ - @@ -58,14 +57,8 @@ - - - - - - @@ -117,8 +110,6 @@ - - @@ -141,50 +132,28 @@ - - - - - - - - - - - - - - - - - - + - - - - - @@ -202,7 +171,6 @@ - @@ -212,33 +180,23 @@ - - - - - - - - - - @@ -246,7 +204,6 @@ - @@ -259,7 +216,6 @@ - @@ -272,8 +228,6 @@ - - @@ -281,9 +235,7 @@ - - @@ -298,36 +250,22 @@ - - - - - - - - - - - - - - @@ -339,14 +277,11 @@ - - - @@ -363,67 +298,55 @@ - + + - - - + + - + + - - - - - - - - - - - - - - - - + + + + + + + + + + + + - - - - - - - - - @@ -433,11 +356,8 @@ - - - @@ -447,7 +367,6 @@ - @@ -455,153 +374,103 @@ - + - - - - - - - - - - - - - - - + + + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + - - + @@ -613,20 +482,15 @@ - - - - - @@ -662,14 +526,12 @@ - - @@ -711,26 +573,22 @@ - - - - @@ -739,28 +597,23 @@ - + - - - - - @@ -768,86 +621,71 @@ - - - - - - - - - - - - - + - - - - - - - + + + + + + - + + + + - - - - + + - + + + + + - - + - - - + + + - - - - - - - + @@ -856,7 +694,6 @@ - @@ -871,10 +708,8 @@ - - @@ -889,29 +724,20 @@ - - - - - - - - - @@ -919,15 +745,28 @@ - + + + + + + + + + + + + + + - + @@ -950,28 +789,21 @@ - - - - - - - - + @@ -980,68 +812,50 @@ - - - - - + - - - - - + - - - - - - + - - - - - @@ -1050,26 +864,21 @@ - - - - - - - - - - + - + + + + + - + @@ -1077,29 +886,17 @@ - - - - - - - - - - - - diff --git a/buildSrc/src/main/resources/eclipse.settings/org.eclipse.jdt.core.prefs b/buildSrc/src/main/resources/eclipse.settings/org.eclipse.jdt.core.prefs index 9bee5e587b03f..48c93f444ba2a 100644 --- a/buildSrc/src/main/resources/eclipse.settings/org.eclipse.jdt.core.prefs +++ 
b/buildSrc/src/main/resources/eclipse.settings/org.eclipse.jdt.core.prefs @@ -1,6 +1,5 @@ eclipse.preferences.version=1 -# previous configuration from maven build # this is merged with gradle's generated properties during 'gradle eclipse' # NOTE: null pointer analysis etc is not enabled currently, it seems very unstable diff --git a/buildSrc/src/main/resources/forbidden/es-all-signatures.txt b/buildSrc/src/main/resources/forbidden/es-all-signatures.txt index 37f03f4c91c28..1b655004cec3e 100644 --- a/buildSrc/src/main/resources/forbidden/es-all-signatures.txt +++ b/buildSrc/src/main/resources/forbidden/es-all-signatures.txt @@ -36,3 +36,12 @@ org.apache.lucene.document.FieldType#numericType() java.lang.invoke.MethodHandle#invoke(java.lang.Object[]) java.lang.invoke.MethodHandle#invokeWithArguments(java.lang.Object[]) java.lang.invoke.MethodHandle#invokeWithArguments(java.util.List) + +# This method is misleading, and uses lenient boolean parsing under the hood. If you intend to parse +# a system property as a boolean, use +# org.elasticsearch.common.Booleans#parseBoolean(java.lang.String) on the result of +# java.lang.SystemProperty#getProperty(java.lang.String) instead. If you were not intending to parse +# a system property as a boolean, but instead parse a string to a boolean, use +# org.elasticsearch.common.Booleans#parseBoolean(java.lang.String) directly on the string. +@defaultMessage use org.elasticsearch.common.Booleans#parseBoolean(java.lang.String) +java.lang.Boolean#getBoolean(java.lang.String) diff --git a/buildSrc/src/main/resources/forbidden/http-signatures.txt b/buildSrc/src/main/resources/forbidden/http-signatures.txt new file mode 100644 index 0000000000000..dcf20bbb09387 --- /dev/null +++ b/buildSrc/src/main/resources/forbidden/http-signatures.txt @@ -0,0 +1,45 @@ +# Licensed to Elasticsearch under one or more contributor +# license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright +# ownership. Elasticsearch licenses this file to you under +# the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on +# an 'AS IS' BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, +# either express or implied. See the License for the specific +# language governing permissions and limitations under the License. 
+ +@defaultMessage Explicitly specify the ContentType of HTTP entities when creating +org.apache.http.entity.StringEntity#(java.lang.String) +org.apache.http.entity.StringEntity#(java.lang.String,java.lang.String) +org.apache.http.entity.StringEntity#(java.lang.String,java.nio.charset.Charset) +org.apache.http.entity.ByteArrayEntity#(byte[]) +org.apache.http.entity.ByteArrayEntity#(byte[],int,int) +org.apache.http.entity.FileEntity#(java.io.File) +org.apache.http.entity.InputStreamEntity#(java.io.InputStream) +org.apache.http.entity.InputStreamEntity#(java.io.InputStream,long) +org.apache.http.nio.entity.NByteArrayEntity#(byte[]) +org.apache.http.nio.entity.NByteArrayEntity#(byte[],int,int) +org.apache.http.nio.entity.NFileEntity#(java.io.File) +org.apache.http.nio.entity.NStringEntity#(java.lang.String) +org.apache.http.nio.entity.NStringEntity#(java.lang.String,java.lang.String) + +@defaultMessage Use non-deprecated constructors +org.apache.http.nio.entity.NFileEntity#(java.io.File,java.lang.String) +org.apache.http.nio.entity.NFileEntity#(java.io.File,java.lang.String,boolean) +org.apache.http.entity.FileEntity#(java.io.File,java.lang.String) +org.apache.http.entity.StringEntity#(java.lang.String,java.lang.String,java.lang.String) + +@defaultMessage BasicEntity is easy to mess up and forget to set content type +org.apache.http.entity.BasicHttpEntity#() + +@defaultMessage EntityTemplate is easy to mess up and forget to set content type +org.apache.http.entity.EntityTemplate#(org.apache.http.entity.ContentProducer) + +@defaultMessage SerializableEntity uses java serialization and makes it easy to forget to set content type +org.apache.http.entity.SerializableEntity#(java.io.Serializable) diff --git a/buildSrc/src/main/resources/plugin-descriptor.properties b/buildSrc/src/main/resources/plugin-descriptor.properties index ebde46d326ba9..67c6ee39968cd 100644 --- a/buildSrc/src/main/resources/plugin-descriptor.properties +++ b/buildSrc/src/main/resources/plugin-descriptor.properties @@ -30,11 +30,15 @@ name=${name} # 'classname': the name of the class to load, fully-qualified. 
classname=${classname} # -# 'java.version' version of java the code is built against +# 'java.version': version of java the code is built against # use the system property java.specification.version # version string must be a sequence of nonnegative decimal integers # separated by "."'s and may have leading zeros java.version=${javaVersion} # -# 'elasticsearch.version' version of elasticsearch compiled against +# 'elasticsearch.version': version of elasticsearch compiled against elasticsearch.version=${elasticsearchVersion} +### optional elements for plugins: +# +# 'has.native.controller': whether or not the plugin has a native controller +has.native.controller=${hasNativeController} diff --git a/buildSrc/version.properties b/buildSrc/version.properties index e96f98245958d..eab95c5ace916 100644 --- a/buildSrc/version.properties +++ b/buildSrc/version.properties @@ -1,18 +1,21 @@ -elasticsearch = 5.0.0-alpha6 -lucene = 6.2.0 +elasticsearch = 5.6.4 +lucene = 6.6.1 # optional dependencies spatial4j = 0.6 jts = 1.13 -jackson = 2.8.1 +jackson = 2.8.6 snakeyaml = 1.15 -log4j = 2.6.2 +# when updating log4j, please update also docs/java-api/index.asciidoc +log4j = 2.9.1 slf4j = 1.6.2 -jna = 4.2.2 + +# when updating the JNA version, also update the version in buildSrc/build.gradle +jna = 4.4.0-1 # test dependencies -randomizedrunner = 2.3.2 -junit = 4.11 +randomizedrunner = 2.5.0 +junit = 4.12 httpclient = 4.5.2 httpcore = 4.4.5 commonslogging = 1.1.3 diff --git a/client/benchmark/build.gradle b/client/benchmark/build.gradle index bd4abddbd1ddf..ec0a002b1ae45 100644 --- a/client/benchmark/build.gradle +++ b/client/benchmark/build.gradle @@ -37,6 +37,10 @@ apply plugin: 'application' group = 'org.elasticsearch.client' +// Not published so no need to assemble +tasks.remove(assemble) +build.dependsOn.remove('assemble') + archivesBaseName = 'client-benchmarks' mainClassName = 'org.elasticsearch.client.benchmark.BenchmarkMain' @@ -49,7 +53,7 @@ task test(type: Test, overwrite: true) dependencies { compile 'org.apache.commons:commons-math3:3.2' - compile("org.elasticsearch.client:rest:${version}") + compile("org.elasticsearch.client:elasticsearch-rest-client:${version}") // bottleneck should be the client, not Elasticsearch compile project(path: ':client:client-benchmark-noop-api-plugin') // for transport client @@ -64,7 +68,3 @@ dependencies { // No licenses for our benchmark deps (we don't ship benchmarks) dependencyLicenses.enabled = false - -extraArchive { - javadoc = false -} diff --git a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/ops/bulk/BulkBenchmarkTask.java b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/ops/bulk/BulkBenchmarkTask.java index 214a75d12cc01..e9cde26e6c870 100644 --- a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/ops/bulk/BulkBenchmarkTask.java +++ b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/ops/bulk/BulkBenchmarkTask.java @@ -95,7 +95,7 @@ private static final class LoadGenerator { private final BlockingQueue> bulkQueue; private final int bulkSize; - public LoadGenerator(Path bulkDataFile, BlockingQueue> bulkQueue, int bulkSize) { + LoadGenerator(Path bulkDataFile, BlockingQueue> bulkQueue, int bulkSize) { this.bulkDataFile = bulkDataFile; this.bulkQueue = bulkQueue; this.bulkSize = bulkSize; @@ -143,7 +143,7 @@ private static final class BulkIndexer implements Runnable { private final BulkRequestExecutor bulkRequestExecutor; private final SampleRecorder sampleRecorder; - public 
BulkIndexer(BlockingQueue> bulkData, int warmupIterations, int measurementIterations, + BulkIndexer(BlockingQueue> bulkData, int warmupIterations, int measurementIterations, SampleRecorder sampleRecorder, BulkRequestExecutor bulkRequestExecutor) { this.bulkData = bulkData; this.warmupIterations = warmupIterations; diff --git a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/rest/RestClientBenchmark.java b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/rest/RestClientBenchmark.java index b342d93fba5a1..9210526e7c81c 100644 --- a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/rest/RestClientBenchmark.java +++ b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/rest/RestClientBenchmark.java @@ -73,7 +73,7 @@ private static final class RestBulkRequestExecutor implements BulkRequestExecuto private final RestClient client; private final String actionMetaData; - public RestBulkRequestExecutor(RestClient client, String index, String type) { + RestBulkRequestExecutor(RestClient client, String index, String type) { this.client = client; this.actionMetaData = String.format(Locale.ROOT, "{ \"index\" : { \"_index\" : \"%s\", \"_type\" : \"%s\" } }%n", index, type); } diff --git a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/transport/TransportClientBenchmark.java b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/transport/TransportClientBenchmark.java index c38234ef30241..bf25fde24cfc4 100644 --- a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/transport/TransportClientBenchmark.java +++ b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/transport/TransportClientBenchmark.java @@ -28,6 +28,7 @@ import org.elasticsearch.client.transport.TransportClient; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.InetSocketTransportAddress; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.plugin.noop.NoopPlugin; import org.elasticsearch.plugin.noop.action.bulk.NoopBulkAction; @@ -70,7 +71,7 @@ private static final class TransportBulkRequestExecutor implements BulkRequestEx private final String indexName; private final String typeName; - public TransportBulkRequestExecutor(TransportClient client, String indexName, String typeName) { + TransportBulkRequestExecutor(TransportClient client, String indexName, String typeName) { this.client = client; this.indexName = indexName; this.typeName = typeName; @@ -80,7 +81,7 @@ public TransportBulkRequestExecutor(TransportClient client, String indexName, St public boolean bulkIndex(List bulkData) { NoopBulkRequestBuilder builder = NoopBulkAction.INSTANCE.newRequestBuilder(client); for (String bulkItem : bulkData) { - builder.add(new IndexRequest(indexName, typeName).source(bulkItem.getBytes(StandardCharsets.UTF_8))); + builder.add(new IndexRequest(indexName, typeName).source(bulkItem.getBytes(StandardCharsets.UTF_8), XContentType.JSON)); } BulkResponse bulkResponse; try { diff --git a/client/client-benchmark-noop-api-plugin/build.gradle b/client/client-benchmark-noop-api-plugin/build.gradle index 9f3af3ce002ae..bee41034c3ce5 100644 --- a/client/client-benchmark-noop-api-plugin/build.gradle +++ b/client/client-benchmark-noop-api-plugin/build.gradle @@ -20,7 +20,6 @@ group = 'org.elasticsearch.plugin' apply plugin: 'elasticsearch.esplugin' -apply plugin: 'com.bmuschko.nexus' esplugin { name 
'client-benchmark-noop-api' @@ -28,9 +27,12 @@ esplugin { classname 'org.elasticsearch.plugin.noop.NoopPlugin' } +// Not published so no need to assemble +tasks.remove(assemble) +build.dependsOn.remove('assemble') + compileJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked" // no unit tests test.enabled = false integTest.enabled = false - diff --git a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/NoopPlugin.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/NoopPlugin.java index 343d3cf613a04..e8ed27715c10a 100644 --- a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/NoopPlugin.java +++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/NoopPlugin.java @@ -23,19 +23,27 @@ import org.elasticsearch.plugin.noop.action.bulk.TransportNoopBulkAction; import org.elasticsearch.action.ActionRequest; import org.elasticsearch.action.ActionResponse; +import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.node.DiscoveryNodes; +import org.elasticsearch.common.settings.ClusterSettings; +import org.elasticsearch.common.settings.IndexScopedSettings; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.settings.SettingsFilter; import org.elasticsearch.plugin.noop.action.search.NoopSearchAction; import org.elasticsearch.plugin.noop.action.search.RestNoopSearchAction; import org.elasticsearch.plugin.noop.action.search.TransportNoopSearchAction; import org.elasticsearch.plugins.ActionPlugin; import org.elasticsearch.plugins.Plugin; +import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestHandler; import java.util.Arrays; import java.util.List; +import java.util.function.Supplier; public class NoopPlugin extends Plugin implements ActionPlugin { @Override - public List, ? 
extends ActionResponse>> getActions() { + public List> getActions() { return Arrays.asList( new ActionHandler<>(NoopBulkAction.INSTANCE, TransportNoopBulkAction.class), new ActionHandler<>(NoopSearchAction.INSTANCE, TransportNoopSearchAction.class) @@ -43,7 +51,11 @@ public class NoopPlugin extends Plugin implements ActionPlugin { } @Override - public List> getRestHandlers() { - return Arrays.asList(RestNoopBulkAction.class, RestNoopSearchAction.class); + public List getRestHandlers(Settings settings, RestController restController, ClusterSettings clusterSettings, + IndexScopedSettings indexScopedSettings, SettingsFilter settingsFilter, IndexNameExpressionResolver indexNameExpressionResolver, + Supplier nodesInCluster) { + return Arrays.asList( + new RestNoopBulkAction(settings, restController), + new RestNoopSearchAction(settings, restController)); } } diff --git a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/NoopBulkRequestBuilder.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/NoopBulkRequestBuilder.java index ceaf9f8cc9d17..1034e722e8789 100644 --- a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/NoopBulkRequestBuilder.java +++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/NoopBulkRequestBuilder.java @@ -33,6 +33,7 @@ import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.XContentType; public class NoopBulkRequestBuilder extends ActionRequestBuilder implements WriteRequestBuilder { @@ -95,17 +96,17 @@ public NoopBulkRequestBuilder add(UpdateRequestBuilder request) { /** * Adds a framed data in binary format */ - public NoopBulkRequestBuilder add(byte[] data, int from, int length) throws Exception { - request.add(data, from, length, null, null); + public NoopBulkRequestBuilder add(byte[] data, int from, int length, XContentType xContentType) throws Exception { + request.add(data, from, length, null, null, xContentType); return this; } /** * Adds a framed data in binary format */ - public NoopBulkRequestBuilder add(byte[] data, int from, int length, @Nullable String defaultIndex, @Nullable String defaultType) - throws Exception { - request.add(data, from, length, defaultIndex, defaultType); + public NoopBulkRequestBuilder add(byte[] data, int from, int length, @Nullable String defaultIndex, @Nullable String defaultType, + XContentType xContentType) throws Exception { + request.add(data, from, length, defaultIndex, defaultType, xContentType); return this; } diff --git a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/RestNoopBulkAction.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/RestNoopBulkAction.java index 814c05889b2b7..df712f537e43e 100644 --- a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/RestNoopBulkAction.java +++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/RestNoopBulkAction.java @@ -18,6 +18,7 @@ */ package org.elasticsearch.plugin.noop.action.bulk; +import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.DocWriteResponse; import org.elasticsearch.action.bulk.BulkItemResponse; import 
org.elasticsearch.action.bulk.BulkRequest; @@ -27,7 +28,6 @@ import org.elasticsearch.client.Requests; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.shard.ShardId; @@ -39,12 +39,13 @@ import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.POST; import static org.elasticsearch.rest.RestRequest.Method.PUT; import static org.elasticsearch.rest.RestStatus.OK; public class RestNoopBulkAction extends BaseRestHandler { - @Inject public RestNoopBulkAction(Settings settings, RestController controller) { super(settings); @@ -57,7 +58,7 @@ public RestNoopBulkAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws Exception { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { BulkRequest bulkRequest = Requests.bulkRequest(); String defaultIndex = request.param("index"); String defaultType = request.param("type"); @@ -72,22 +73,24 @@ public void handleRequest(final RestRequest request, final RestChannel channel, } bulkRequest.timeout(request.paramAsTime("timeout", BulkShardRequest.DEFAULT_TIMEOUT)); bulkRequest.setRefreshPolicy(request.param("refresh")); - bulkRequest.add(request.content(), defaultIndex, defaultType, defaultRouting, defaultFields, defaultPipeline, null, true); + bulkRequest.add(request.requiredContent(), defaultIndex, defaultType, defaultRouting, defaultFields, + null, defaultPipeline, null, true, request.getXContentType()); // short circuit the call to the transport layer - BulkRestBuilderListener listener = new BulkRestBuilderListener(channel, request); - listener.onResponse(bulkRequest); - + return channel -> { + BulkRestBuilderListener listener = new BulkRestBuilderListener(channel, request); + listener.onResponse(bulkRequest); + }; } private static class BulkRestBuilderListener extends RestBuilderListener { - private final BulkItemResponse ITEM_RESPONSE = new BulkItemResponse(1, "update", + private final BulkItemResponse ITEM_RESPONSE = new BulkItemResponse(1, DocWriteRequest.OpType.UPDATE, new UpdateResponse(new ShardId("mock", "", 1), "mock_type", "1", 1L, DocWriteResponse.Result.CREATED)); private final RestRequest request; - public BulkRestBuilderListener(RestChannel channel, RestRequest request) { + BulkRestBuilderListener(RestChannel channel, RestRequest request) { super(channel); this.request = request; } @@ -99,9 +102,7 @@ public RestResponse buildResponse(BulkRequest bulkRequest, XContentBuilder build builder.field(Fields.ERRORS, false); builder.startArray(Fields.ITEMS); for (int idx = 0; idx < bulkRequest.numberOfActions(); idx++) { - builder.startObject(); ITEM_RESPONSE.toXContent(builder, request); - builder.endObject(); } builder.endArray(); builder.endObject(); diff --git a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/TransportNoopBulkAction.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/TransportNoopBulkAction.java index dcc225c260309..aff5863bd9298 100644 --- 
a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/TransportNoopBulkAction.java +++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/TransportNoopBulkAction.java @@ -19,6 +19,7 @@ package org.elasticsearch.plugin.noop.action.bulk; import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.DocWriteResponse; import org.elasticsearch.action.bulk.BulkItemResponse; import org.elasticsearch.action.bulk.BulkRequest; @@ -34,7 +35,7 @@ import org.elasticsearch.transport.TransportService; public class TransportNoopBulkAction extends HandledTransportAction { - private static final BulkItemResponse ITEM_RESPONSE = new BulkItemResponse(1, "update", + private static final BulkItemResponse ITEM_RESPONSE = new BulkItemResponse(1, DocWriteRequest.OpType.UPDATE, new UpdateResponse(new ShardId("mock", "", 1), "mock_type", "1", 1L, DocWriteResponse.Result.CREATED)); @Inject @@ -45,7 +46,7 @@ public TransportNoopBulkAction(Settings settings, ThreadPool threadPool, Transpo @Override protected void doExecute(BulkRequest request, ActionListener listener) { - final int itemCount = request.subRequests().size(); + final int itemCount = request.requests().size(); // simulate at least a realistic amount of data that gets serialized BulkItemResponse[] bulkItemResponses = new BulkItemResponse[itemCount]; for (int idx = 0; idx < itemCount; idx++) { diff --git a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/RestNoopSearchAction.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/RestNoopSearchAction.java index 3520876af04b4..48a453c372507 100644 --- a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/RestNoopSearchAction.java +++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/RestNoopSearchAction.java @@ -20,10 +20,8 @@ import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestStatusToXContentListener; @@ -34,8 +32,6 @@ import static org.elasticsearch.rest.RestRequest.Method.POST; public class RestNoopSearchAction extends BaseRestHandler { - - @Inject public RestNoopSearchAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_noop_search", this); @@ -47,8 +43,8 @@ public RestNoopSearchAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws IOException { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { SearchRequest searchRequest = new SearchRequest(); - client.execute(NoopSearchAction.INSTANCE, searchRequest, new RestStatusToXContentListener<>(channel)); + return channel -> client.execute(NoopSearchAction.INSTANCE, searchRequest, new RestStatusToXContentListener<>(channel)); } } diff --git 
a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/TransportNoopSearchAction.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/TransportNoopSearchAction.java index c4397684bc41e..280e0b08f2c72 100644 --- a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/TransportNoopSearchAction.java +++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/TransportNoopSearchAction.java @@ -28,8 +28,8 @@ import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.search.aggregations.InternalAggregations; -import org.elasticsearch.search.internal.InternalSearchHit; -import org.elasticsearch.search.internal.InternalSearchHits; +import org.elasticsearch.search.SearchHit; +import org.elasticsearch.search.SearchHits; import org.elasticsearch.search.internal.InternalSearchResponse; import org.elasticsearch.search.profile.SearchProfileShardResults; import org.elasticsearch.search.suggest.Suggest; @@ -49,10 +49,10 @@ public TransportNoopSearchAction(Settings settings, ThreadPool threadPool, Trans @Override protected void doExecute(SearchRequest request, ActionListener listener) { listener.onResponse(new SearchResponse(new InternalSearchResponse( - new InternalSearchHits( - new InternalSearchHit[0], 0L, 0.0f), + new SearchHits( + new SearchHit[0], 0L, 0.0f), new InternalAggregations(Collections.emptyList()), new Suggest(Collections.emptyList()), - new SearchProfileShardResults(Collections.emptyMap()), false, false), "", 1, 1, 0, new ShardSearchFailure[0])); + new SearchProfileShardResults(Collections.emptyMap()), false, false, 1), "", 1, 1, 0, 0, new ShardSearchFailure[0])); } } diff --git a/client/rest-high-level/build.gradle b/client/rest-high-level/build.gradle new file mode 100644 index 0000000000000..ba97605dba82e --- /dev/null +++ b/client/rest-high-level/build.gradle @@ -0,0 +1,63 @@ +import org.elasticsearch.gradle.precommit.PrecommitTasks + +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +apply plugin: 'elasticsearch.build' +apply plugin: 'elasticsearch.rest-test' +apply plugin: 'nebula.maven-base-publish' +apply plugin: 'nebula.maven-scm' + +group = 'org.elasticsearch.client' +archivesBaseName = 'elasticsearch-rest-high-level-client' + +publishing { + publications { + nebula { + artifactId = archivesBaseName + } + } +} + +dependencies { + compile "org.elasticsearch:elasticsearch:${version}" + compile "org.elasticsearch.client:elasticsearch-rest-client:${version}" + compile "org.elasticsearch.plugin:parent-join-client:${version}" + compile "org.elasticsearch.plugin:aggs-matrix-stats-client:${version}" + + testCompile "org.elasticsearch.client:test:${version}" + testCompile "org.elasticsearch.test:framework:${version}" + testCompile "com.carrotsearch.randomizedtesting:randomizedtesting-runner:${versions.randomizedrunner}" + testCompile "junit:junit:${versions.junit}" + testCompile "org.hamcrest:hamcrest-all:${versions.hamcrest}" +} + +dependencyLicenses { + // Don't check licenses for dependency that are part of the elasticsearch project + // But any other dependency should have its license/notice/sha1 + dependencies = project.configurations.runtime.fileCollection { + it.group.startsWith('org.elasticsearch') == false + } +} + +forbiddenApisMain { + // core does not depend on the httpclient for compile so we add the signatures here. We don't add them for test as they are already + // specified + signaturesURLs += [PrecommitTasks.getResource('/forbidden/http-signatures.txt')] + signaturesURLs += [file('src/main/resources/forbidden/rest-high-level-signatures.txt').toURI().toURL()] +} \ No newline at end of file diff --git a/client/rest-high-level/src/main/java/org/elasticsearch/client/Request.java b/client/rest-high-level/src/main/java/org/elasticsearch/client/Request.java new file mode 100644 index 0000000000000..7a95553c3c003 --- /dev/null +++ b/client/rest-high-level/src/main/java/org/elasticsearch/client/Request.java @@ -0,0 +1,578 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client; + +import org.apache.http.HttpEntity; +import org.apache.http.client.methods.HttpDelete; +import org.apache.http.client.methods.HttpGet; +import org.apache.http.client.methods.HttpHead; +import org.apache.http.client.methods.HttpPost; +import org.apache.http.client.methods.HttpPut; +import org.apache.http.entity.ByteArrayEntity; +import org.apache.http.entity.ContentType; +import org.apache.lucene.util.BytesRef; +import org.elasticsearch.action.DocWriteRequest; +import org.elasticsearch.action.bulk.BulkRequest; +import org.elasticsearch.action.delete.DeleteRequest; +import org.elasticsearch.action.get.GetRequest; +import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.action.search.ClearScrollRequest; +import org.elasticsearch.action.search.SearchRequest; +import org.elasticsearch.action.search.SearchScrollRequest; +import org.elasticsearch.action.support.ActiveShardCount; +import org.elasticsearch.action.support.IndicesOptions; +import org.elasticsearch.action.support.WriteRequest; +import org.elasticsearch.action.update.UpdateRequest; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.SuppressForbidden; +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.lucene.uid.Versions; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentHelper; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.index.VersionType; +import org.elasticsearch.rest.action.search.RestSearchAction; +import org.elasticsearch.search.fetch.subphase.FetchSourceContext; + +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.nio.charset.Charset; +import java.util.Collections; +import java.util.HashMap; +import java.util.Locale; +import java.util.Map; +import java.util.Objects; +import java.util.StringJoiner; + +public final class Request { + + static final XContentType REQUEST_BODY_CONTENT_TYPE = XContentType.JSON; + + private final String method; + private final String endpoint; + private final Map parameters; + private final HttpEntity entity; + + public Request(String method, String endpoint, Map parameters, HttpEntity entity) { + this.method = Objects.requireNonNull(method, "method cannot be null"); + this.endpoint = Objects.requireNonNull(endpoint, "endpoint cannot be null"); + this.parameters = Objects.requireNonNull(parameters, "parameters cannot be null"); + this.entity = entity; + } + + public String getMethod() { + return method; + } + + public String getEndpoint() { + return endpoint; + } + + public Map getParameters() { + return parameters; + } + + public HttpEntity getEntity() { + return entity; + } + + @Override + public String toString() { + return "Request{" + + "method='" + method + '\'' + + ", endpoint='" + endpoint + '\'' + + ", params=" + parameters + + ", hasBody=" + (entity != null) + + '}'; + } + + static Request delete(DeleteRequest deleteRequest) { + String endpoint = endpoint(deleteRequest.index(), deleteRequest.type(), deleteRequest.id()); + + Params parameters = Params.builder(); + parameters.withRouting(deleteRequest.routing()); + parameters.withParent(deleteRequest.parent()); + 
parameters.withTimeout(deleteRequest.timeout()); + parameters.withVersion(deleteRequest.version()); + parameters.withVersionType(deleteRequest.versionType()); + parameters.withRefreshPolicy(deleteRequest.getRefreshPolicy()); + parameters.withWaitForActiveShards(deleteRequest.waitForActiveShards()); + + return new Request(HttpDelete.METHOD_NAME, endpoint, parameters.getParams(), null); + } + + static Request info() { + return new Request(HttpGet.METHOD_NAME, "/", Collections.emptyMap(), null); + } + + static Request bulk(BulkRequest bulkRequest) throws IOException { + Params parameters = Params.builder(); + parameters.withTimeout(bulkRequest.timeout()); + parameters.withRefreshPolicy(bulkRequest.getRefreshPolicy()); + + // Bulk API only supports newline delimited JSON or Smile. Before executing + // the bulk, we need to check that all requests have the same content-type + // and this content-type is supported by the Bulk API. + XContentType bulkContentType = null; + for (int i = 0; i < bulkRequest.numberOfActions(); i++) { + DocWriteRequest request = bulkRequest.requests().get(i); + + DocWriteRequest.OpType opType = request.opType(); + if (opType == DocWriteRequest.OpType.INDEX || opType == DocWriteRequest.OpType.CREATE) { + bulkContentType = enforceSameContentType((IndexRequest) request, bulkContentType); + + } else if (opType == DocWriteRequest.OpType.UPDATE) { + UpdateRequest updateRequest = (UpdateRequest) request; + if (updateRequest.doc() != null) { + bulkContentType = enforceSameContentType(updateRequest.doc(), bulkContentType); + } + if (updateRequest.upsertRequest() != null) { + bulkContentType = enforceSameContentType(updateRequest.upsertRequest(), bulkContentType); + } + } + } + + if (bulkContentType == null) { + bulkContentType = XContentType.JSON; + } + + final byte separator = bulkContentType.xContent().streamSeparator(); + final ContentType requestContentType = createContentType(bulkContentType); + + ByteArrayOutputStream content = new ByteArrayOutputStream(); + for (DocWriteRequest request : bulkRequest.requests()) { + DocWriteRequest.OpType opType = request.opType(); + + try (XContentBuilder metadata = XContentBuilder.builder(bulkContentType.xContent())) { + metadata.startObject(); + { + metadata.startObject(opType.getLowercase()); + if (Strings.hasLength(request.index())) { + metadata.field("_index", request.index()); + } + if (Strings.hasLength(request.type())) { + metadata.field("_type", request.type()); + } + if (Strings.hasLength(request.id())) { + metadata.field("_id", request.id()); + } + if (Strings.hasLength(request.routing())) { + metadata.field("_routing", request.routing()); + } + if (Strings.hasLength(request.parent())) { + metadata.field("_parent", request.parent()); + } + if (request.version() != Versions.MATCH_ANY) { + metadata.field("_version", request.version()); + } + + VersionType versionType = request.versionType(); + if (versionType != VersionType.INTERNAL) { + if (versionType == VersionType.EXTERNAL) { + metadata.field("_version_type", "external"); + } else if (versionType == VersionType.EXTERNAL_GTE) { + metadata.field("_version_type", "external_gte"); + } else if (versionType == VersionType.FORCE) { + metadata.field("_version_type", "force"); + } + } + + if (opType == DocWriteRequest.OpType.INDEX || opType == DocWriteRequest.OpType.CREATE) { + IndexRequest indexRequest = (IndexRequest) request; + if (Strings.hasLength(indexRequest.getPipeline())) { + metadata.field("pipeline", indexRequest.getPipeline()); + } + } else if (opType == 
DocWriteRequest.OpType.UPDATE) { + UpdateRequest updateRequest = (UpdateRequest) request; + if (updateRequest.retryOnConflict() > 0) { + metadata.field("_retry_on_conflict", updateRequest.retryOnConflict()); + } + if (updateRequest.fetchSource() != null) { + metadata.field("_source", updateRequest.fetchSource()); + } + } + metadata.endObject(); + } + metadata.endObject(); + + BytesRef metadataSource = metadata.bytes().toBytesRef(); + content.write(metadataSource.bytes, metadataSource.offset, metadataSource.length); + content.write(separator); + } + + BytesRef source = null; + if (opType == DocWriteRequest.OpType.INDEX || opType == DocWriteRequest.OpType.CREATE) { + IndexRequest indexRequest = (IndexRequest) request; + BytesReference indexSource = indexRequest.source(); + XContentType indexXContentType = indexRequest.getContentType(); + + try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, indexSource, indexXContentType)) { + try (XContentBuilder builder = XContentBuilder.builder(bulkContentType.xContent())) { + builder.copyCurrentStructure(parser); + source = builder.bytes().toBytesRef(); + } + } + } else if (opType == DocWriteRequest.OpType.UPDATE) { + source = XContentHelper.toXContent((UpdateRequest) request, bulkContentType, false).toBytesRef(); + } + + if (source != null) { + content.write(source.bytes, source.offset, source.length); + content.write(separator); + } + } + + HttpEntity entity = new ByteArrayEntity(content.toByteArray(), 0, content.size(), requestContentType); + return new Request(HttpPost.METHOD_NAME, "/_bulk", parameters.getParams(), entity); + } + + static Request exists(GetRequest getRequest) { + Request request = get(getRequest); + return new Request(HttpHead.METHOD_NAME, request.endpoint, request.parameters, null); + } + + static Request get(GetRequest getRequest) { + String endpoint = endpoint(getRequest.index(), getRequest.type(), getRequest.id()); + + Params parameters = Params.builder(); + parameters.withPreference(getRequest.preference()); + parameters.withRouting(getRequest.routing()); + parameters.withParent(getRequest.parent()); + parameters.withRefresh(getRequest.refresh()); + parameters.withRealtime(getRequest.realtime()); + parameters.withStoredFields(getRequest.storedFields()); + parameters.withVersion(getRequest.version()); + parameters.withVersionType(getRequest.versionType()); + parameters.withFetchSourceContext(getRequest.fetchSourceContext()); + + return new Request(HttpGet.METHOD_NAME, endpoint, parameters.getParams(), null); + } + + static Request index(IndexRequest indexRequest) { + String method = Strings.hasLength(indexRequest.id()) ? HttpPut.METHOD_NAME : HttpPost.METHOD_NAME; + + boolean isCreate = (indexRequest.opType() == DocWriteRequest.OpType.CREATE); + String endpoint = endpoint(indexRequest.index(), indexRequest.type(), indexRequest.id(), isCreate ? 
"_create" : null); + + Params parameters = Params.builder(); + parameters.withRouting(indexRequest.routing()); + parameters.withParent(indexRequest.parent()); + parameters.withTimeout(indexRequest.timeout()); + parameters.withVersion(indexRequest.version()); + parameters.withVersionType(indexRequest.versionType()); + parameters.withPipeline(indexRequest.getPipeline()); + parameters.withRefreshPolicy(indexRequest.getRefreshPolicy()); + parameters.withWaitForActiveShards(indexRequest.waitForActiveShards()); + + BytesRef source = indexRequest.source().toBytesRef(); + ContentType contentType = createContentType(indexRequest.getContentType()); + HttpEntity entity = new ByteArrayEntity(source.bytes, source.offset, source.length, contentType); + + return new Request(method, endpoint, parameters.getParams(), entity); + } + + static Request ping() { + return new Request(HttpHead.METHOD_NAME, "/", Collections.emptyMap(), null); + } + + static Request update(UpdateRequest updateRequest) throws IOException { + String endpoint = endpoint(updateRequest.index(), updateRequest.type(), updateRequest.id(), "_update"); + + Params parameters = Params.builder(); + parameters.withRouting(updateRequest.routing()); + parameters.withParent(updateRequest.parent()); + parameters.withTimeout(updateRequest.timeout()); + parameters.withRefreshPolicy(updateRequest.getRefreshPolicy()); + parameters.withWaitForActiveShards(updateRequest.waitForActiveShards()); + parameters.withDocAsUpsert(updateRequest.docAsUpsert()); + parameters.withFetchSourceContext(updateRequest.fetchSource()); + parameters.withRetryOnConflict(updateRequest.retryOnConflict()); + parameters.withVersion(updateRequest.version()); + parameters.withVersionType(updateRequest.versionType()); + + // The Java API allows update requests with different content types + // set for the partial document and the upsert document. This client + // only accepts update requests that have the same content types set + // for both doc and upsert. 
+ XContentType xContentType = null; + if (updateRequest.doc() != null) { + xContentType = updateRequest.doc().getContentType(); + } + if (updateRequest.upsertRequest() != null) { + XContentType upsertContentType = updateRequest.upsertRequest().getContentType(); + if ((xContentType != null) && (xContentType != upsertContentType)) { + throw new IllegalStateException("Update request cannot have different content types for doc [" + xContentType + "]" + + " and upsert [" + upsertContentType + "] documents"); + } else { + xContentType = upsertContentType; + } + } + if (xContentType == null) { + xContentType = Requests.INDEX_CONTENT_TYPE; + } + + HttpEntity entity = createEntity(updateRequest, xContentType); + return new Request(HttpPost.METHOD_NAME, endpoint, parameters.getParams(), entity); + } + + static Request search(SearchRequest searchRequest) throws IOException { + String endpoint = endpoint(searchRequest.indices(), searchRequest.types(), "_search"); + Params params = Params.builder(); + params.putParam(RestSearchAction.TYPED_KEYS_PARAM, "true"); + params.withRouting(searchRequest.routing()); + params.withPreference(searchRequest.preference()); + params.withIndicesOptions(searchRequest.indicesOptions()); + params.putParam("search_type", searchRequest.searchType().name().toLowerCase(Locale.ROOT)); + if (searchRequest.requestCache() != null) { + params.putParam("request_cache", Boolean.toString(searchRequest.requestCache())); + } + params.putParam("batched_reduce_size", Integer.toString(searchRequest.getBatchedReduceSize())); + if (searchRequest.scroll() != null) { + params.putParam("scroll", searchRequest.scroll().keepAlive()); + } + HttpEntity entity = null; + if (searchRequest.source() != null) { + entity = createEntity(searchRequest.source(), REQUEST_BODY_CONTENT_TYPE); + } + return new Request(HttpGet.METHOD_NAME, endpoint, params.getParams(), entity); + } + + static Request searchScroll(SearchScrollRequest searchScrollRequest) throws IOException { + HttpEntity entity = createEntity(searchScrollRequest, REQUEST_BODY_CONTENT_TYPE); + return new Request("GET", "/_search/scroll", Collections.emptyMap(), entity); + } + + static Request clearScroll(ClearScrollRequest clearScrollRequest) throws IOException { + HttpEntity entity = createEntity(clearScrollRequest, REQUEST_BODY_CONTENT_TYPE); + return new Request("DELETE", "/_search/scroll", Collections.emptyMap(), entity); + } + + private static HttpEntity createEntity(ToXContent toXContent, XContentType xContentType) throws IOException { + BytesRef source = XContentHelper.toXContent(toXContent, xContentType, false).toBytesRef(); + return new ByteArrayEntity(source.bytes, source.offset, source.length, createContentType(xContentType)); + } + + static String endpoint(String[] indices, String[] types, String endpoint) { + return endpoint(String.join(",", indices), String.join(",", types), endpoint); + } + + /** + * Utility method to build request's endpoint. + */ + static String endpoint(String... parts) { + StringJoiner joiner = new StringJoiner("/", "/", ""); + for (String part : parts) { + if (Strings.hasLength(part)) { + joiner.add(part); + } + } + return joiner.toString(); + } + + /** + * Returns a {@link ContentType} from a given {@link XContentType}. 
+ * + * @param xContentType the {@link XContentType} + * @return the {@link ContentType} + */ + @SuppressForbidden(reason = "Only allowed place to convert a XContentType to a ContentType") + public static ContentType createContentType(final XContentType xContentType) { + return ContentType.create(xContentType.mediaTypeWithoutParameters(), (Charset) null); + } + + /** + * Utility class to build request's parameters map and centralize all parameter names. + */ + static class Params { + private final Map params = new HashMap<>(); + + private Params() { + } + + Params putParam(String key, String value) { + if (Strings.hasLength(value)) { + if (params.putIfAbsent(key, value) != null) { + throw new IllegalArgumentException("Request parameter [" + key + "] is already registered"); + } + } + return this; + } + + Params putParam(String key, TimeValue value) { + if (value != null) { + return putParam(key, value.getStringRep()); + } + return this; + } + + Params withDocAsUpsert(boolean docAsUpsert) { + if (docAsUpsert) { + return putParam("doc_as_upsert", Boolean.TRUE.toString()); + } + return this; + } + + Params withFetchSourceContext(FetchSourceContext fetchSourceContext) { + if (fetchSourceContext != null) { + if (fetchSourceContext.fetchSource() == false) { + putParam("_source", Boolean.FALSE.toString()); + } + if (fetchSourceContext.includes() != null && fetchSourceContext.includes().length > 0) { + putParam("_source_include", String.join(",", fetchSourceContext.includes())); + } + if (fetchSourceContext.excludes() != null && fetchSourceContext.excludes().length > 0) { + putParam("_source_exclude", String.join(",", fetchSourceContext.excludes())); + } + } + return this; + } + + Params withParent(String parent) { + return putParam("parent", parent); + } + + Params withPipeline(String pipeline) { + return putParam("pipeline", pipeline); + } + + Params withPreference(String preference) { + return putParam("preference", preference); + } + + Params withRealtime(boolean realtime) { + if (realtime == false) { + return putParam("realtime", Boolean.FALSE.toString()); + } + return this; + } + + Params withRefresh(boolean refresh) { + if (refresh) { + return withRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE); + } + return this; + } + + Params withRefreshPolicy(WriteRequest.RefreshPolicy refreshPolicy) { + if (refreshPolicy != WriteRequest.RefreshPolicy.NONE) { + return putParam("refresh", refreshPolicy.getValue()); + } + return this; + } + + Params withRetryOnConflict(int retryOnConflict) { + if (retryOnConflict > 0) { + return putParam("retry_on_conflict", String.valueOf(retryOnConflict)); + } + return this; + } + + Params withRouting(String routing) { + return putParam("routing", routing); + } + + Params withStoredFields(String[] storedFields) { + if (storedFields != null && storedFields.length > 0) { + return putParam("stored_fields", String.join(",", storedFields)); + } + return this; + } + + Params withTimeout(TimeValue timeout) { + return putParam("timeout", timeout); + } + + Params withVersion(long version) { + if (version != Versions.MATCH_ANY) { + return putParam("version", Long.toString(version)); + } + return this; + } + + Params withVersionType(VersionType versionType) { + if (versionType != VersionType.INTERNAL) { + return putParam("version_type", versionType.name().toLowerCase(Locale.ROOT)); + } + return this; + } + + Params withWaitForActiveShards(ActiveShardCount activeShardCount) { + if (activeShardCount != null && activeShardCount != ActiveShardCount.DEFAULT) { + return 
putParam("wait_for_active_shards", activeShardCount.toString().toLowerCase(Locale.ROOT)); + } + return this; + } + + Params withIndicesOptions(IndicesOptions indicesOptions) { + putParam("ignore_unavailable", Boolean.toString(indicesOptions.ignoreUnavailable())); + putParam("allow_no_indices", Boolean.toString(indicesOptions.allowNoIndices())); + String expandWildcards; + if (indicesOptions.expandWildcardsOpen() == false && indicesOptions.expandWildcardsClosed() == false) { + expandWildcards = "none"; + } else { + StringJoiner joiner = new StringJoiner(","); + if (indicesOptions.expandWildcardsOpen()) { + joiner.add("open"); + } + if (indicesOptions.expandWildcardsClosed()) { + joiner.add("closed"); + } + expandWildcards = joiner.toString(); + } + putParam("expand_wildcards", expandWildcards); + return this; + } + + Map getParams() { + return Collections.unmodifiableMap(params); + } + + static Params builder() { + return new Params(); + } + } + + /** + * Ensure that the {@link IndexRequest}'s content type is supported by the Bulk API and that it conforms + * to the current {@link BulkRequest}'s content type (if it's known at the time of this method get called). + * + * @return the {@link IndexRequest}'s content type + */ + static XContentType enforceSameContentType(IndexRequest indexRequest, @Nullable XContentType xContentType) { + XContentType requestContentType = indexRequest.getContentType(); + if (requestContentType != XContentType.JSON && requestContentType != XContentType.SMILE) { + throw new IllegalArgumentException("Unsupported content-type found for request with content-type [" + requestContentType + + "], only JSON and SMILE are supported"); + } + if (xContentType == null) { + return requestContentType; + } + if (requestContentType != xContentType) { + throw new IllegalArgumentException("Mismatching content-type found for request with content-type [" + requestContentType + + "], previous requests have content-type [" + xContentType + "]"); + } + return xContentType; + } +} diff --git a/client/rest-high-level/src/main/java/org/elasticsearch/client/RestHighLevelClient.java b/client/rest-high-level/src/main/java/org/elasticsearch/client/RestHighLevelClient.java new file mode 100644 index 0000000000000..59684b18508ee --- /dev/null +++ b/client/rest-high-level/src/main/java/org/elasticsearch/client/RestHighLevelClient.java @@ -0,0 +1,601 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client; + +import org.apache.http.Header; +import org.apache.http.HttpEntity; +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.ElasticsearchStatusException; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.ActionRequest; +import org.elasticsearch.action.ActionRequestValidationException; +import org.elasticsearch.action.bulk.BulkRequest; +import org.elasticsearch.action.bulk.BulkResponse; +import org.elasticsearch.action.delete.DeleteRequest; +import org.elasticsearch.action.delete.DeleteResponse; +import org.elasticsearch.action.get.GetRequest; +import org.elasticsearch.action.get.GetResponse; +import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.action.index.IndexResponse; +import org.elasticsearch.action.main.MainRequest; +import org.elasticsearch.action.main.MainResponse; +import org.elasticsearch.action.search.ClearScrollRequest; +import org.elasticsearch.action.search.ClearScrollResponse; +import org.elasticsearch.action.search.SearchRequest; +import org.elasticsearch.action.search.SearchResponse; +import org.elasticsearch.action.search.SearchScrollRequest; +import org.elasticsearch.action.update.UpdateRequest; +import org.elasticsearch.action.update.UpdateResponse; +import org.elasticsearch.common.CheckedFunction; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.xcontent.ContextParser; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.plugins.spi.NamedXContentProvider; +import org.elasticsearch.rest.BytesRestResponse; +import org.elasticsearch.rest.RestStatus; +import org.elasticsearch.search.aggregations.Aggregation; +import org.elasticsearch.search.aggregations.bucket.adjacency.AdjacencyMatrixAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.adjacency.ParsedAdjacencyMatrix; +import org.elasticsearch.search.aggregations.bucket.filter.FilterAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.filter.ParsedFilter; +import org.elasticsearch.search.aggregations.bucket.filters.FiltersAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.filters.ParsedFilters; +import org.elasticsearch.search.aggregations.bucket.geogrid.GeoGridAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.geogrid.ParsedGeoHashGrid; +import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.global.ParsedGlobal; +import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.histogram.ParsedDateHistogram; +import org.elasticsearch.search.aggregations.bucket.histogram.ParsedHistogram; +import org.elasticsearch.search.aggregations.bucket.missing.MissingAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.missing.ParsedMissing; +import org.elasticsearch.search.aggregations.bucket.nested.NestedAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.nested.ParsedNested; +import org.elasticsearch.search.aggregations.bucket.nested.ParsedReverseNested; +import org.elasticsearch.search.aggregations.bucket.nested.ReverseNestedAggregationBuilder; +import 
org.elasticsearch.search.aggregations.bucket.range.ip.IpRangeAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.range.ParsedBinaryRange; +import org.elasticsearch.search.aggregations.bucket.range.ParsedRange; +import org.elasticsearch.search.aggregations.bucket.range.RangeAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.range.date.DateRangeAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.range.date.ParsedDateRange; +import org.elasticsearch.search.aggregations.bucket.range.geodistance.GeoDistanceAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.range.geodistance.ParsedGeoDistance; +import org.elasticsearch.search.aggregations.bucket.sampler.InternalSampler; +import org.elasticsearch.search.aggregations.bucket.sampler.ParsedSampler; +import org.elasticsearch.search.aggregations.bucket.significant.ParsedSignificantLongTerms; +import org.elasticsearch.search.aggregations.bucket.significant.ParsedSignificantStringTerms; +import org.elasticsearch.search.aggregations.bucket.significant.SignificantLongTerms; +import org.elasticsearch.search.aggregations.bucket.significant.SignificantStringTerms; +import org.elasticsearch.search.aggregations.bucket.terms.DoubleTerms; +import org.elasticsearch.search.aggregations.bucket.terms.LongTerms; +import org.elasticsearch.search.aggregations.bucket.terms.ParsedDoubleTerms; +import org.elasticsearch.search.aggregations.bucket.terms.ParsedLongTerms; +import org.elasticsearch.search.aggregations.bucket.terms.ParsedStringTerms; +import org.elasticsearch.search.aggregations.bucket.terms.StringTerms; +import org.elasticsearch.search.aggregations.metrics.avg.AvgAggregationBuilder; +import org.elasticsearch.search.aggregations.metrics.avg.ParsedAvg; +import org.elasticsearch.search.aggregations.metrics.cardinality.CardinalityAggregationBuilder; +import org.elasticsearch.search.aggregations.metrics.cardinality.ParsedCardinality; +import org.elasticsearch.search.aggregations.metrics.geobounds.GeoBoundsAggregationBuilder; +import org.elasticsearch.search.aggregations.metrics.geobounds.ParsedGeoBounds; +import org.elasticsearch.search.aggregations.metrics.geocentroid.GeoCentroidAggregationBuilder; +import org.elasticsearch.search.aggregations.metrics.geocentroid.ParsedGeoCentroid; +import org.elasticsearch.search.aggregations.metrics.max.MaxAggregationBuilder; +import org.elasticsearch.search.aggregations.metrics.max.ParsedMax; +import org.elasticsearch.search.aggregations.metrics.min.MinAggregationBuilder; +import org.elasticsearch.search.aggregations.metrics.min.ParsedMin; +import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.InternalHDRPercentileRanks; +import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.InternalHDRPercentiles; +import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.ParsedHDRPercentileRanks; +import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.ParsedHDRPercentiles; +import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.InternalTDigestPercentileRanks; +import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.InternalTDigestPercentiles; +import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.ParsedTDigestPercentileRanks; +import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.ParsedTDigestPercentiles; +import org.elasticsearch.search.aggregations.metrics.scripted.ParsedScriptedMetric; +import 
org.elasticsearch.search.aggregations.metrics.scripted.ScriptedMetricAggregationBuilder; +import org.elasticsearch.search.aggregations.metrics.stats.ParsedStats; +import org.elasticsearch.search.aggregations.metrics.stats.StatsAggregationBuilder; +import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStatsAggregationBuilder; +import org.elasticsearch.search.aggregations.metrics.stats.extended.ParsedExtendedStats; +import org.elasticsearch.search.aggregations.metrics.sum.ParsedSum; +import org.elasticsearch.search.aggregations.metrics.sum.SumAggregationBuilder; +import org.elasticsearch.search.aggregations.metrics.tophits.ParsedTopHits; +import org.elasticsearch.search.aggregations.metrics.tophits.TopHitsAggregationBuilder; +import org.elasticsearch.search.aggregations.metrics.valuecount.ParsedValueCount; +import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCountAggregationBuilder; +import org.elasticsearch.search.aggregations.pipeline.InternalSimpleValue; +import org.elasticsearch.search.aggregations.pipeline.ParsedSimpleValue; +import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.InternalBucketMetricValue; +import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.ParsedBucketMetricValue; +import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.percentile.ParsedPercentilesBucket; +import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.percentile.PercentilesBucketPipelineAggregationBuilder; +import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.ParsedStatsBucket; +import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.StatsBucketPipelineAggregationBuilder; +import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.extended.ExtendedStatsBucketPipelineAggregationBuilder; +import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.extended.ParsedExtendedStatsBucket; +import org.elasticsearch.search.aggregations.pipeline.derivative.DerivativePipelineAggregationBuilder; +import org.elasticsearch.search.aggregations.pipeline.derivative.ParsedDerivative; +import org.elasticsearch.search.suggest.Suggest; +import org.elasticsearch.search.suggest.completion.CompletionSuggestion; +import org.elasticsearch.search.suggest.phrase.PhraseSuggestion; +import org.elasticsearch.search.suggest.term.TermSuggestion; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.ServiceLoader; +import java.util.Set; +import java.util.function.Function; +import java.util.stream.Collectors; +import java.util.stream.Stream; + +import static java.util.Collections.emptySet; +import static java.util.Collections.singleton; +import static java.util.stream.Collectors.toList; + +/** + * High level REST client that wraps an instance of the low level {@link RestClient} and allows to build requests and read responses. + * The provided {@link RestClient} is externally built and closed. + * Can be sub-classed to expose additional client methods that make use of endpoints added to Elasticsearch through plugins, or to + * add support for custom response sections, again added to Elasticsearch through plugins. 
+ */ +public class RestHighLevelClient { + + private final RestClient client; + private final NamedXContentRegistry registry; + + /** + * Creates a {@link RestHighLevelClient} given the low level {@link RestClient} that it should use to perform requests. + */ + public RestHighLevelClient(RestClient restClient) { + this(restClient, Collections.emptyList()); + } + + /** + * Creates a {@link RestHighLevelClient} given the low level {@link RestClient} that it should use to perform requests and + * a list of entries that allow to parse custom response sections added to Elasticsearch through plugins. + */ + protected RestHighLevelClient(RestClient restClient, List namedXContentEntries) { + this.client = Objects.requireNonNull(restClient); + this.registry = new NamedXContentRegistry( + Stream.of(getDefaultNamedXContents().stream(), getProvidedNamedXContents().stream(), namedXContentEntries.stream()) + .flatMap(Function.identity()).collect(toList())); + } + + /** + * Executes a bulk request using the Bulk API + * + * See Bulk API on elastic.co + */ + public BulkResponse bulk(BulkRequest bulkRequest, Header... headers) throws IOException { + return performRequestAndParseEntity(bulkRequest, Request::bulk, BulkResponse::fromXContent, emptySet(), headers); + } + + /** + * Asynchronously executes a bulk request using the Bulk API + * + * See Bulk API on elastic.co + */ + public void bulkAsync(BulkRequest bulkRequest, ActionListener listener, Header... headers) { + performRequestAsyncAndParseEntity(bulkRequest, Request::bulk, BulkResponse::fromXContent, listener, emptySet(), headers); + } + + /** + * Pings the remote Elasticsearch cluster and returns true if the ping succeeded, false otherwise + */ + public boolean ping(Header... headers) throws IOException { + return performRequest(new MainRequest(), (request) -> Request.ping(), RestHighLevelClient::convertExistsResponse, + emptySet(), headers); + } + + /** + * Get the cluster info otherwise provided when sending an HTTP request to port 9200 + */ + public MainResponse info(Header... headers) throws IOException { + return performRequestAndParseEntity(new MainRequest(), (request) -> Request.info(), MainResponse::fromXContent, emptySet(), + headers); + } + + /** + * Retrieves a document by id using the Get API + * + * See Get API on elastic.co + */ + public GetResponse get(GetRequest getRequest, Header... headers) throws IOException { + return performRequestAndParseEntity(getRequest, Request::get, GetResponse::fromXContent, singleton(404), headers); + } + + /** + * Asynchronously retrieves a document by id using the Get API + * + * See Get API on elastic.co + */ + public void getAsync(GetRequest getRequest, ActionListener listener, Header... headers) { + performRequestAsyncAndParseEntity(getRequest, Request::get, GetResponse::fromXContent, listener, singleton(404), headers); + } + + /** + * Checks for the existence of a document. Returns true if it exists, false otherwise + * + * See Get API on elastic.co + */ + public boolean exists(GetRequest getRequest, Header... headers) throws IOException { + return performRequest(getRequest, Request::exists, RestHighLevelClient::convertExistsResponse, emptySet(), headers); + } + + /** + * Asynchronously checks for the existence of a document. Returns true if it exists, false otherwise + * + * See Get API on elastic.co + */ + public void existsAsync(GetRequest getRequest, ActionListener listener, Header... 
headers) { + performRequestAsync(getRequest, Request::exists, RestHighLevelClient::convertExistsResponse, listener, emptySet(), headers); + } + + /** + * Index a document using the Index API + * + * See Index API on elastic.co + */ + public IndexResponse index(IndexRequest indexRequest, Header... headers) throws IOException { + return performRequestAndParseEntity(indexRequest, Request::index, IndexResponse::fromXContent, emptySet(), headers); + } + + /** + * Asynchronously index a document using the Index API + * + * See Index API on elastic.co + */ + public void indexAsync(IndexRequest indexRequest, ActionListener listener, Header... headers) { + performRequestAsyncAndParseEntity(indexRequest, Request::index, IndexResponse::fromXContent, listener, emptySet(), headers); + } + + /** + * Updates a document using the Update API + *
+ * See Update API on elastic.co + */ + public UpdateResponse update(UpdateRequest updateRequest, Header... headers) throws IOException { + return performRequestAndParseEntity(updateRequest, Request::update, UpdateResponse::fromXContent, emptySet(), headers); + } + + /** + * Asynchronously updates a document using the Update API + *
+ * See Update API on elastic.co + */ + public void updateAsync(UpdateRequest updateRequest, ActionListener listener, Header... headers) { + performRequestAsyncAndParseEntity(updateRequest, Request::update, UpdateResponse::fromXContent, listener, emptySet(), headers); + } + + /** + * Deletes a document by id using the Delete api + * + * See Delete API on elastic.co + */ + public DeleteResponse delete(DeleteRequest deleteRequest, Header... headers) throws IOException { + return performRequestAndParseEntity(deleteRequest, Request::delete, DeleteResponse::fromXContent, Collections.singleton(404), + headers); + } + + /** + * Asynchronously deletes a document by id using the Delete api + * + * See Delete API on elastic.co + */ + public void deleteAsync(DeleteRequest deleteRequest, ActionListener listener, Header... headers) { + performRequestAsyncAndParseEntity(deleteRequest, Request::delete, DeleteResponse::fromXContent, listener, + Collections.singleton(404), headers); + } + + /** + * Executes a search using the Search api + * + * See Search API on elastic.co + */ + public SearchResponse search(SearchRequest searchRequest, Header... headers) throws IOException { + return performRequestAndParseEntity(searchRequest, Request::search, SearchResponse::fromXContent, emptySet(), headers); + } + + /** + * Asynchronously executes a search using the Search api + * + * See Search API on elastic.co + */ + public void searchAsync(SearchRequest searchRequest, ActionListener listener, Header... headers) { + performRequestAsyncAndParseEntity(searchRequest, Request::search, SearchResponse::fromXContent, listener, emptySet(), headers); + } + + /** + * Executes a search using the Search Scroll api + * + * See Search Scroll + * API on elastic.co + */ + public SearchResponse searchScroll(SearchScrollRequest searchScrollRequest, Header... headers) throws IOException { + return performRequestAndParseEntity(searchScrollRequest, Request::searchScroll, SearchResponse::fromXContent, emptySet(), headers); + } + + /** + * Asynchronously executes a search using the Search Scroll api + * + * See Search Scroll + * API on elastic.co + */ + public void searchScrollAsync(SearchScrollRequest searchScrollRequest, ActionListener listener, Header... headers) { + performRequestAsyncAndParseEntity(searchScrollRequest, Request::searchScroll, SearchResponse::fromXContent, + listener, emptySet(), headers); + } + + /** + * Clears one or more scroll ids using the Clear Scroll api + * + * See + * Clear Scroll API on elastic.co + */ + public ClearScrollResponse clearScroll(ClearScrollRequest clearScrollRequest, Header... headers) throws IOException { + return performRequestAndParseEntity(clearScrollRequest, Request::clearScroll, ClearScrollResponse::fromXContent, + emptySet(), headers); + } + + /** + * Asynchronously clears one or more scroll ids using the Clear Scroll api + * + * See + * Clear Scroll API on elastic.co + */ + public void clearScrollAsync(ClearScrollRequest clearScrollRequest, ActionListener listener, Header... headers) { + performRequestAsyncAndParseEntity(clearScrollRequest, Request::clearScroll, ClearScrollResponse::fromXContent, + listener, emptySet(), headers); + } + + protected Resp performRequestAndParseEntity(Req request, + CheckedFunction requestConverter, + CheckedFunction entityParser, + Set ignores, Header... 
headers) throws IOException { + return performRequest(request, requestConverter, (response) -> parseEntity(response.getEntity(), entityParser), ignores, headers); + } + + protected Resp performRequest(Req request, + CheckedFunction requestConverter, + CheckedFunction responseConverter, + Set ignores, Header... headers) throws IOException { + ActionRequestValidationException validationException = request.validate(); + if (validationException != null) { + throw validationException; + } + Request req = requestConverter.apply(request); + Response response; + try { + response = client.performRequest(req.getMethod(), req.getEndpoint(), req.getParameters(), req.getEntity(), headers); + } catch (ResponseException e) { + if (ignores.contains(e.getResponse().getStatusLine().getStatusCode())) { + try { + return responseConverter.apply(e.getResponse()); + } catch (Exception innerException) { + //the exception is ignored as we now try to parse the response as an error. + //this covers cases like get where 404 can either be a valid document not found response, + //or an error for which parsing is completely different. We try to consider the 404 response as a valid one + //first. If parsing of the response breaks, we fall back to parsing it as an error. + throw parseResponseException(e); + } + } + throw parseResponseException(e); + } + + try { + return responseConverter.apply(response); + } catch(Exception e) { + throw new IOException("Unable to parse response body for " + response, e); + } + } + + protected void performRequestAsyncAndParseEntity(Req request, + CheckedFunction requestConverter, + CheckedFunction entityParser, + ActionListener listener, Set ignores, Header... headers) { + performRequestAsync(request, requestConverter, (response) -> parseEntity(response.getEntity(), entityParser), + listener, ignores, headers); + } + + protected void performRequestAsync(Req request, + CheckedFunction requestConverter, + CheckedFunction responseConverter, + ActionListener listener, Set ignores, Header... headers) { + ActionRequestValidationException validationException = request.validate(); + if (validationException != null) { + listener.onFailure(validationException); + return; + } + Request req; + try { + req = requestConverter.apply(request); + } catch (Exception e) { + listener.onFailure(e); + return; + } + + ResponseListener responseListener = wrapResponseListener(responseConverter, listener, ignores); + client.performRequestAsync(req.getMethod(), req.getEndpoint(), req.getParameters(), req.getEntity(), responseListener, headers); + } + + ResponseListener wrapResponseListener(CheckedFunction responseConverter, + ActionListener actionListener, Set ignores) { + return new ResponseListener() { + @Override + public void onSuccess(Response response) { + try { + actionListener.onResponse(responseConverter.apply(response)); + } catch(Exception e) { + IOException ioe = new IOException("Unable to parse response body for " + response, e); + onFailure(ioe); + } + } + + @Override + public void onFailure(Exception exception) { + if (exception instanceof ResponseException) { + ResponseException responseException = (ResponseException) exception; + Response response = responseException.getResponse(); + if (ignores.contains(response.getStatusLine().getStatusCode())) { + try { + actionListener.onResponse(responseConverter.apply(response)); + } catch (Exception innerException) { + //the exception is ignored as we now try to parse the response as an error. 
+ //this covers cases like get where 404 can either be a valid document not found response, + //or an error for which parsing is completely different. We try to consider the 404 response as a valid one + //first. If parsing of the response breaks, we fall back to parsing it as an error. + actionListener.onFailure(parseResponseException(responseException)); + } + } else { + actionListener.onFailure(parseResponseException(responseException)); + } + } else { + actionListener.onFailure(exception); + } + } + }; + } + + /** + * Converts a {@link ResponseException} obtained from the low level REST client into an {@link ElasticsearchException}. + * If a response body was returned, tries to parse it as an error returned from Elasticsearch. + * If no response body was returned or anything goes wrong while parsing the error, returns a new {@link ElasticsearchStatusException} + * that wraps the original {@link ResponseException}. The potential exception obtained while parsing is added to the returned + * exception as a suppressed exception. This method is guaranteed to not throw any exception eventually thrown while parsing. + */ + protected ElasticsearchStatusException parseResponseException(ResponseException responseException) { + Response response = responseException.getResponse(); + HttpEntity entity = response.getEntity(); + ElasticsearchStatusException elasticsearchException; + if (entity == null) { + elasticsearchException = new ElasticsearchStatusException( + responseException.getMessage(), RestStatus.fromCode(response.getStatusLine().getStatusCode()), responseException); + } else { + try { + elasticsearchException = parseEntity(entity, BytesRestResponse::errorFromXContent); + elasticsearchException.addSuppressed(responseException); + } catch (Exception e) { + RestStatus restStatus = RestStatus.fromCode(response.getStatusLine().getStatusCode()); + elasticsearchException = new ElasticsearchStatusException("Unable to parse response body", restStatus, responseException); + elasticsearchException.addSuppressed(e); + } + } + return elasticsearchException; + } + + protected Resp parseEntity(final HttpEntity entity, + final CheckedFunction entityParser) throws IOException { + if (entity == null) { + throw new IllegalStateException("Response body expected but not returned"); + } + if (entity.getContentType() == null) { + throw new IllegalStateException("Elasticsearch didn't return the [Content-Type] header, unable to parse response body"); + } + XContentType xContentType = XContentType.fromMediaTypeOrFormat(entity.getContentType().getValue()); + if (xContentType == null) { + throw new IllegalStateException("Unsupported Content-Type: " + entity.getContentType().getValue()); + } + try (XContentParser parser = xContentType.xContent().createParser(registry, entity.getContent())) { + return entityParser.apply(parser); + } + } + + static boolean convertExistsResponse(Response response) { + return response.getStatusLine().getStatusCode() == 200; + } + + static List getDefaultNamedXContents() { + Map> map = new HashMap<>(); + map.put(CardinalityAggregationBuilder.NAME, (p, c) -> ParsedCardinality.fromXContent(p, (String) c)); + map.put(InternalHDRPercentiles.NAME, (p, c) -> ParsedHDRPercentiles.fromXContent(p, (String) c)); + map.put(InternalHDRPercentileRanks.NAME, (p, c) -> ParsedHDRPercentileRanks.fromXContent(p, (String) c)); + map.put(InternalTDigestPercentiles.NAME, (p, c) -> ParsedTDigestPercentiles.fromXContent(p, (String) c)); + map.put(InternalTDigestPercentileRanks.NAME, (p, c) -> 
ParsedTDigestPercentileRanks.fromXContent(p, (String) c)); + map.put(PercentilesBucketPipelineAggregationBuilder.NAME, (p, c) -> ParsedPercentilesBucket.fromXContent(p, (String) c)); + map.put(MinAggregationBuilder.NAME, (p, c) -> ParsedMin.fromXContent(p, (String) c)); + map.put(MaxAggregationBuilder.NAME, (p, c) -> ParsedMax.fromXContent(p, (String) c)); + map.put(SumAggregationBuilder.NAME, (p, c) -> ParsedSum.fromXContent(p, (String) c)); + map.put(AvgAggregationBuilder.NAME, (p, c) -> ParsedAvg.fromXContent(p, (String) c)); + map.put(ValueCountAggregationBuilder.NAME, (p, c) -> ParsedValueCount.fromXContent(p, (String) c)); + map.put(InternalSimpleValue.NAME, (p, c) -> ParsedSimpleValue.fromXContent(p, (String) c)); + map.put(DerivativePipelineAggregationBuilder.NAME, (p, c) -> ParsedDerivative.fromXContent(p, (String) c)); + map.put(InternalBucketMetricValue.NAME, (p, c) -> ParsedBucketMetricValue.fromXContent(p, (String) c)); + map.put(StatsAggregationBuilder.NAME, (p, c) -> ParsedStats.fromXContent(p, (String) c)); + map.put(StatsBucketPipelineAggregationBuilder.NAME, (p, c) -> ParsedStatsBucket.fromXContent(p, (String) c)); + map.put(ExtendedStatsAggregationBuilder.NAME, (p, c) -> ParsedExtendedStats.fromXContent(p, (String) c)); + map.put(ExtendedStatsBucketPipelineAggregationBuilder.NAME, + (p, c) -> ParsedExtendedStatsBucket.fromXContent(p, (String) c)); + map.put(GeoBoundsAggregationBuilder.NAME, (p, c) -> ParsedGeoBounds.fromXContent(p, (String) c)); + map.put(GeoCentroidAggregationBuilder.NAME, (p, c) -> ParsedGeoCentroid.fromXContent(p, (String) c)); + map.put(HistogramAggregationBuilder.NAME, (p, c) -> ParsedHistogram.fromXContent(p, (String) c)); + map.put(DateHistogramAggregationBuilder.NAME, (p, c) -> ParsedDateHistogram.fromXContent(p, (String) c)); + map.put(StringTerms.NAME, (p, c) -> ParsedStringTerms.fromXContent(p, (String) c)); + map.put(LongTerms.NAME, (p, c) -> ParsedLongTerms.fromXContent(p, (String) c)); + map.put(DoubleTerms.NAME, (p, c) -> ParsedDoubleTerms.fromXContent(p, (String) c)); + map.put(MissingAggregationBuilder.NAME, (p, c) -> ParsedMissing.fromXContent(p, (String) c)); + map.put(NestedAggregationBuilder.NAME, (p, c) -> ParsedNested.fromXContent(p, (String) c)); + map.put(ReverseNestedAggregationBuilder.NAME, (p, c) -> ParsedReverseNested.fromXContent(p, (String) c)); + map.put(GlobalAggregationBuilder.NAME, (p, c) -> ParsedGlobal.fromXContent(p, (String) c)); + map.put(FilterAggregationBuilder.NAME, (p, c) -> ParsedFilter.fromXContent(p, (String) c)); + map.put(InternalSampler.PARSER_NAME, (p, c) -> ParsedSampler.fromXContent(p, (String) c)); + map.put(GeoGridAggregationBuilder.NAME, (p, c) -> ParsedGeoHashGrid.fromXContent(p, (String) c)); + map.put(RangeAggregationBuilder.NAME, (p, c) -> ParsedRange.fromXContent(p, (String) c)); + map.put(DateRangeAggregationBuilder.NAME, (p, c) -> ParsedDateRange.fromXContent(p, (String) c)); + map.put(GeoDistanceAggregationBuilder.NAME, (p, c) -> ParsedGeoDistance.fromXContent(p, (String) c)); + map.put(FiltersAggregationBuilder.NAME, (p, c) -> ParsedFilters.fromXContent(p, (String) c)); + map.put(AdjacencyMatrixAggregationBuilder.NAME, (p, c) -> ParsedAdjacencyMatrix.fromXContent(p, (String) c)); + map.put(SignificantLongTerms.NAME, (p, c) -> ParsedSignificantLongTerms.fromXContent(p, (String) c)); + map.put(SignificantStringTerms.NAME, (p, c) -> ParsedSignificantStringTerms.fromXContent(p, (String) c)); + map.put(ScriptedMetricAggregationBuilder.NAME, (p, c) -> ParsedScriptedMetric.fromXContent(p, (String) 
c)); + map.put(IpRangeAggregationBuilder.NAME, (p, c) -> ParsedBinaryRange.fromXContent(p, (String) c)); + map.put(TopHitsAggregationBuilder.NAME, (p, c) -> ParsedTopHits.fromXContent(p, (String) c)); + List entries = map.entrySet().stream() + .map(entry -> new NamedXContentRegistry.Entry(Aggregation.class, new ParseField(entry.getKey()), entry.getValue())) + .collect(Collectors.toList()); + entries.add(new NamedXContentRegistry.Entry(Suggest.Suggestion.class, new ParseField(TermSuggestion.NAME), + (parser, context) -> TermSuggestion.fromXContent(parser, (String)context))); + entries.add(new NamedXContentRegistry.Entry(Suggest.Suggestion.class, new ParseField(PhraseSuggestion.NAME), + (parser, context) -> PhraseSuggestion.fromXContent(parser, (String)context))); + entries.add(new NamedXContentRegistry.Entry(Suggest.Suggestion.class, new ParseField(CompletionSuggestion.NAME), + (parser, context) -> CompletionSuggestion.fromXContent(parser, (String)context))); + return entries; + } + + /** + * Loads and returns the {@link NamedXContentRegistry.Entry} parsers provided by plugins. + */ + static List getProvidedNamedXContents() { + List entries = new ArrayList<>(); + for (NamedXContentProvider service : ServiceLoader.load(NamedXContentProvider.class)) { + entries.addAll(service.getNamedXContentParsers()); + } + return entries; + } +} diff --git a/client/rest-high-level/src/main/resources/forbidden/rest-high-level-signatures.txt b/client/rest-high-level/src/main/resources/forbidden/rest-high-level-signatures.txt new file mode 100644 index 0000000000000..fb2330f3f083c --- /dev/null +++ b/client/rest-high-level/src/main/resources/forbidden/rest-high-level-signatures.txt @@ -0,0 +1,21 @@ +# Licensed to Elasticsearch under one or more contributor +# license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright +# ownership. Elasticsearch licenses this file to you under +# the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on +# an 'AS IS' BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, +# either express or implied. See the License for the specific +# language governing permissions and limitations under the License. + +@defaultMessage Use Request#createContentType(XContentType) to be sure to pass the right MIME type +org.apache.http.entity.ContentType#create(java.lang.String) +org.apache.http.entity.ContentType#create(java.lang.String,java.lang.String) +org.apache.http.entity.ContentType#create(java.lang.String,java.nio.charset.Charset) +org.apache.http.entity.ContentType#create(java.lang.String,org.apache.http.NameValuePair[]) diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java new file mode 100644 index 0000000000000..a28686b27aa0d --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java @@ -0,0 +1,704 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.client; + +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.ElasticsearchStatusException; +import org.elasticsearch.action.DocWriteRequest; +import org.elasticsearch.action.DocWriteResponse; +import org.elasticsearch.action.bulk.BulkItemResponse; +import org.elasticsearch.action.bulk.BulkProcessor; +import org.elasticsearch.action.bulk.BulkRequest; +import org.elasticsearch.action.bulk.BulkResponse; +import org.elasticsearch.action.delete.DeleteRequest; +import org.elasticsearch.action.delete.DeleteResponse; +import org.elasticsearch.action.get.GetRequest; +import org.elasticsearch.action.get.GetResponse; +import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.action.index.IndexResponse; +import org.elasticsearch.action.update.UpdateRequest; +import org.elasticsearch.action.update.UpdateResponse; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.ByteSizeUnit; +import org.elasticsearch.common.unit.ByteSizeValue; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.index.VersionType; +import org.elasticsearch.index.get.GetResult; +import org.elasticsearch.rest.RestStatus; +import org.elasticsearch.script.Script; +import org.elasticsearch.script.ScriptType; +import org.elasticsearch.search.fetch.subphase.FetchSourceContext; +import org.elasticsearch.threadpool.ThreadPool; + +import java.io.IOException; +import java.util.Collections; +import java.util.Map; + +import java.util.concurrent.atomic.AtomicReference; + +import static java.util.Collections.singletonMap; + +public class CrudIT extends ESRestHighLevelClientTestCase { + + public void testDelete() throws IOException { + { + // Testing deletion + String docId = "id"; + highLevelClient().index(new IndexRequest("index", "type", docId).source(Collections.singletonMap("foo", "bar"))); + DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId); + if (randomBoolean()) { + deleteRequest.version(1L); + } + DeleteResponse deleteResponse = execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync); + assertEquals("index", deleteResponse.getIndex()); + assertEquals("type", deleteResponse.getType()); + assertEquals(docId, deleteResponse.getId()); + assertEquals(DocWriteResponse.Result.DELETED, deleteResponse.getResult()); + } + { + // Testing non existing document + String docId = "does_not_exist"; + DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId); + DeleteResponse deleteResponse = execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync); + assertEquals("index", deleteResponse.getIndex()); + 
assertEquals("type", deleteResponse.getType()); + assertEquals(docId, deleteResponse.getId()); + assertEquals(DocWriteResponse.Result.NOT_FOUND, deleteResponse.getResult()); + } + { + // Testing version conflict + String docId = "version_conflict"; + highLevelClient().index(new IndexRequest("index", "type", docId).source(Collections.singletonMap("foo", "bar"))); + DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId).version(2); + ElasticsearchException exception = expectThrows(ElasticsearchException.class, + () -> execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync)); + assertEquals(RestStatus.CONFLICT, exception.status()); + assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, reason=[type][" + docId + "]: " + + "version conflict, current version [1] is different than the one provided [2]]", exception.getMessage()); + assertEquals("index", exception.getMetadata("es.index").get(0)); + } + { + // Testing version type + String docId = "version_type"; + highLevelClient().index(new IndexRequest("index", "type", docId).source(Collections.singletonMap("foo", "bar")) + .versionType(VersionType.EXTERNAL).version(12)); + DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId).versionType(VersionType.EXTERNAL).version(13); + DeleteResponse deleteResponse = execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync); + assertEquals("index", deleteResponse.getIndex()); + assertEquals("type", deleteResponse.getType()); + assertEquals(docId, deleteResponse.getId()); + assertEquals(DocWriteResponse.Result.DELETED, deleteResponse.getResult()); + } + { + // Testing version type with a wrong version + String docId = "wrong_version"; + highLevelClient().index(new IndexRequest("index", "type", docId).source(Collections.singletonMap("foo", "bar")) + .versionType(VersionType.EXTERNAL).version(12)); + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> { + DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId).versionType(VersionType.EXTERNAL).version(10); + execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync); + }); + assertEquals(RestStatus.CONFLICT, exception.status()); + assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, reason=[type][" + + docId + "]: version conflict, current version [12] is higher or equal to the one provided [10]]", exception.getMessage()); + assertEquals("index", exception.getMetadata("es.index").get(0)); + } + { + // Testing routing + String docId = "routing"; + highLevelClient().index(new IndexRequest("index", "type", docId).source(Collections.singletonMap("foo", "bar")).routing("foo")); + DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId).routing("foo"); + DeleteResponse deleteResponse = execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync); + assertEquals("index", deleteResponse.getIndex()); + assertEquals("type", deleteResponse.getType()); + assertEquals(docId, deleteResponse.getId()); + assertEquals(DocWriteResponse.Result.DELETED, deleteResponse.getResult()); + } + } + + public void testExists() throws IOException { + { + GetRequest getRequest = new GetRequest("index", "type", "id"); + assertFalse(execute(getRequest, highLevelClient()::exists, highLevelClient()::existsAsync)); + } + String document = "{\"field1\":\"value1\",\"field2\":\"value2\"}"; + StringEntity stringEntity = new StringEntity(document, 
ContentType.APPLICATION_JSON); + Response response = client().performRequest("PUT", "/index/type/id", Collections.singletonMap("refresh", "wait_for"), stringEntity); + assertEquals(201, response.getStatusLine().getStatusCode()); + { + GetRequest getRequest = new GetRequest("index", "type", "id"); + assertTrue(execute(getRequest, highLevelClient()::exists, highLevelClient()::existsAsync)); + } + { + GetRequest getRequest = new GetRequest("index", "type", "does_not_exist"); + assertFalse(execute(getRequest, highLevelClient()::exists, highLevelClient()::existsAsync)); + } + { + GetRequest getRequest = new GetRequest("index", "type", "does_not_exist").version(1); + assertFalse(execute(getRequest, highLevelClient()::exists, highLevelClient()::existsAsync)); + } + } + + public void testGet() throws IOException { + { + GetRequest getRequest = new GetRequest("index", "type", "id"); + ElasticsearchException exception = expectThrows(ElasticsearchException.class, + () -> execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync)); + assertEquals(RestStatus.NOT_FOUND, exception.status()); + assertEquals("Elasticsearch exception [type=index_not_found_exception, reason=no such index]", exception.getMessage()); + assertEquals("index", exception.getMetadata("es.index").get(0)); + } + + String document = "{\"field1\":\"value1\",\"field2\":\"value2\"}"; + StringEntity stringEntity = new StringEntity(document, ContentType.APPLICATION_JSON); + Response response = client().performRequest("PUT", "/index/type/id", Collections.singletonMap("refresh", "wait_for"), stringEntity); + assertEquals(201, response.getStatusLine().getStatusCode()); + { + GetRequest getRequest = new GetRequest("index", "type", "id").version(2); + ElasticsearchException exception = expectThrows(ElasticsearchException.class, + () -> execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync)); + assertEquals(RestStatus.CONFLICT, exception.status()); + assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, " + "reason=[type][id]: " + + "version conflict, current version [1] is different than the one provided [2]]", exception.getMessage()); + assertEquals("index", exception.getMetadata("es.index").get(0)); + } + { + GetRequest getRequest = new GetRequest("index", "type", "id"); + if (randomBoolean()) { + getRequest.version(1L); + } + GetResponse getResponse = execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync); + assertEquals("index", getResponse.getIndex()); + assertEquals("type", getResponse.getType()); + assertEquals("id", getResponse.getId()); + assertTrue(getResponse.isExists()); + assertFalse(getResponse.isSourceEmpty()); + assertEquals(1L, getResponse.getVersion()); + assertEquals(document, getResponse.getSourceAsString()); + } + { + GetRequest getRequest = new GetRequest("index", "type", "does_not_exist"); + GetResponse getResponse = execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync); + assertEquals("index", getResponse.getIndex()); + assertEquals("type", getResponse.getType()); + assertEquals("does_not_exist", getResponse.getId()); + assertFalse(getResponse.isExists()); + assertEquals(-1, getResponse.getVersion()); + assertTrue(getResponse.isSourceEmpty()); + assertNull(getResponse.getSourceAsString()); + } + { + GetRequest getRequest = new GetRequest("index", "type", "id"); + getRequest.fetchSourceContext(new FetchSourceContext(false, Strings.EMPTY_ARRAY, Strings.EMPTY_ARRAY)); + GetResponse getResponse = execute(getRequest, 
highLevelClient()::get, highLevelClient()::getAsync); + assertEquals("index", getResponse.getIndex()); + assertEquals("type", getResponse.getType()); + assertEquals("id", getResponse.getId()); + assertTrue(getResponse.isExists()); + assertTrue(getResponse.isSourceEmpty()); + assertEquals(1L, getResponse.getVersion()); + assertNull(getResponse.getSourceAsString()); + } + { + GetRequest getRequest = new GetRequest("index", "type", "id"); + if (randomBoolean()) { + getRequest.fetchSourceContext(new FetchSourceContext(true, new String[]{"field1"}, Strings.EMPTY_ARRAY)); + } else { + getRequest.fetchSourceContext(new FetchSourceContext(true, Strings.EMPTY_ARRAY, new String[]{"field2"})); + } + GetResponse getResponse = execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync); + assertEquals("index", getResponse.getIndex()); + assertEquals("type", getResponse.getType()); + assertEquals("id", getResponse.getId()); + assertTrue(getResponse.isExists()); + assertFalse(getResponse.isSourceEmpty()); + assertEquals(1L, getResponse.getVersion()); + Map sourceAsMap = getResponse.getSourceAsMap(); + assertEquals(1, sourceAsMap.size()); + assertEquals("value1", sourceAsMap.get("field1")); + } + } + + public void testIndex() throws IOException { + final XContentType xContentType = randomFrom(XContentType.values()); + { + IndexRequest indexRequest = new IndexRequest("index", "type"); + indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("test", "test").endObject()); + + IndexResponse indexResponse = execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync); + assertEquals(RestStatus.CREATED, indexResponse.status()); + assertEquals(DocWriteResponse.Result.CREATED, indexResponse.getResult()); + assertEquals("index", indexResponse.getIndex()); + assertEquals("type", indexResponse.getType()); + assertTrue(Strings.hasLength(indexResponse.getId())); + assertEquals(1L, indexResponse.getVersion()); + assertNotNull(indexResponse.getShardId()); + assertEquals(-1, indexResponse.getShardId().getId()); + assertEquals("index", indexResponse.getShardId().getIndexName()); + assertEquals("index", indexResponse.getShardId().getIndex().getName()); + assertEquals("_na_", indexResponse.getShardId().getIndex().getUUID()); + assertNotNull(indexResponse.getShardInfo()); + assertEquals(0, indexResponse.getShardInfo().getFailed()); + assertTrue(indexResponse.getShardInfo().getSuccessful() > 0); + assertTrue(indexResponse.getShardInfo().getTotal() > 0); + } + { + IndexRequest indexRequest = new IndexRequest("index", "type", "id"); + indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("version", 1).endObject()); + + IndexResponse indexResponse = execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync); + assertEquals(RestStatus.CREATED, indexResponse.status()); + assertEquals("index", indexResponse.getIndex()); + assertEquals("type", indexResponse.getType()); + assertEquals("id", indexResponse.getId()); + assertEquals(1L, indexResponse.getVersion()); + + indexRequest = new IndexRequest("index", "type", "id"); + indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("version", 2).endObject()); + + indexResponse = execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync); + assertEquals(RestStatus.OK, indexResponse.status()); + assertEquals("index", indexResponse.getIndex()); + assertEquals("type", indexResponse.getType()); + assertEquals("id", 
indexResponse.getId()); + assertEquals(2L, indexResponse.getVersion()); + + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> { + IndexRequest wrongRequest = new IndexRequest("index", "type", "id"); + wrongRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("field", "test").endObject()); + wrongRequest.version(5L); + + execute(wrongRequest, highLevelClient()::index, highLevelClient()::indexAsync); + }); + assertEquals(RestStatus.CONFLICT, exception.status()); + assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, reason=[type][id]: " + + "version conflict, current version [2] is different than the one provided [5]]", exception.getMessage()); + assertEquals("index", exception.getMetadata("es.index").get(0)); + } + { + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> { + IndexRequest indexRequest = new IndexRequest("index", "type", "missing_parent"); + indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("field", "test").endObject()); + indexRequest.parent("missing"); + + execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync); + }); + + assertEquals(RestStatus.BAD_REQUEST, exception.status()); + assertEquals("Elasticsearch exception [type=illegal_argument_exception, " + + "reason=can't specify parent if no parent field has been configured]", exception.getMessage()); + } + { + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> { + IndexRequest indexRequest = new IndexRequest("index", "type", "missing_pipeline"); + indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("field", "test").endObject()); + indexRequest.setPipeline("missing"); + + execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync); + }); + + assertEquals(RestStatus.BAD_REQUEST, exception.status()); + assertEquals("Elasticsearch exception [type=illegal_argument_exception, " + + "reason=pipeline with id [missing] does not exist]", exception.getMessage()); + } + { + IndexRequest indexRequest = new IndexRequest("index", "type", "external_version_type"); + indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("field", "test").endObject()); + indexRequest.version(12L); + indexRequest.versionType(VersionType.EXTERNAL); + + IndexResponse indexResponse = execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync); + assertEquals(RestStatus.CREATED, indexResponse.status()); + assertEquals("index", indexResponse.getIndex()); + assertEquals("type", indexResponse.getType()); + assertEquals("external_version_type", indexResponse.getId()); + assertEquals(12L, indexResponse.getVersion()); + } + { + final IndexRequest indexRequest = new IndexRequest("index", "type", "with_create_op_type"); + indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("field", "test").endObject()); + indexRequest.opType(DocWriteRequest.OpType.CREATE); + + IndexResponse indexResponse = execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync); + assertEquals(RestStatus.CREATED, indexResponse.status()); + assertEquals("index", indexResponse.getIndex()); + assertEquals("type", indexResponse.getType()); + assertEquals("with_create_op_type", indexResponse.getId()); + + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, 
() -> { + execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync); + }); + + assertEquals(RestStatus.CONFLICT, exception.status()); + assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, reason=[type][with_create_op_type]: " + + "version conflict, document already exists (current version [1])]", exception.getMessage()); + } + } + + public void testUpdate() throws IOException { + { + UpdateRequest updateRequest = new UpdateRequest("index", "type", "does_not_exist"); + updateRequest.doc(singletonMap("field", "value"), randomFrom(XContentType.values())); + + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> + execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync)); + assertEquals(RestStatus.NOT_FOUND, exception.status()); + assertEquals("Elasticsearch exception [type=document_missing_exception, reason=[type][does_not_exist]: document missing]", + exception.getMessage()); + } + { + IndexRequest indexRequest = new IndexRequest("index", "type", "id"); + indexRequest.source(singletonMap("field", "value")); + IndexResponse indexResponse = highLevelClient().index(indexRequest); + assertEquals(RestStatus.CREATED, indexResponse.status()); + + UpdateRequest updateRequest = new UpdateRequest("index", "type", "id"); + updateRequest.doc(singletonMap("field", "updated"), randomFrom(XContentType.values())); + + UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + assertEquals(RestStatus.OK, updateResponse.status()); + assertEquals(indexResponse.getVersion() + 1, updateResponse.getVersion()); + + UpdateRequest updateRequestConflict = new UpdateRequest("index", "type", "id"); + updateRequestConflict.doc(singletonMap("field", "with_version_conflict"), randomFrom(XContentType.values())); + updateRequestConflict.version(indexResponse.getVersion()); + + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> + execute(updateRequestConflict, highLevelClient()::update, highLevelClient()::updateAsync)); + assertEquals(RestStatus.CONFLICT, exception.status()); + assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, reason=[type][id]: version conflict, " + + "current version [2] is different than the one provided [1]]", exception.getMessage()); + } + { + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> { + UpdateRequest updateRequest = new UpdateRequest("index", "type", "id"); + updateRequest.doc(singletonMap("field", "updated"), randomFrom(XContentType.values())); + if (randomBoolean()) { + updateRequest.parent("missing"); + } else { + updateRequest.routing("missing"); + } + execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + }); + + assertEquals(RestStatus.NOT_FOUND, exception.status()); + assertEquals("Elasticsearch exception [type=document_missing_exception, reason=[type][id]: document missing]", + exception.getMessage()); + } + { + IndexRequest indexRequest = new IndexRequest("index", "type", "with_script"); + indexRequest.source(singletonMap("counter", 12)); + IndexResponse indexResponse = highLevelClient().index(indexRequest); + assertEquals(RestStatus.CREATED, indexResponse.status()); + + UpdateRequest updateRequest = new UpdateRequest("index", "type", "with_script"); + Script script = new Script(ScriptType.INLINE, "painless", "ctx._source.counter += params.count", 
singletonMap("count", 8)); + updateRequest.script(script); + updateRequest.fetchSource(true); + + UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + assertEquals(RestStatus.OK, updateResponse.status()); + assertEquals(DocWriteResponse.Result.UPDATED, updateResponse.getResult()); + assertEquals(2L, updateResponse.getVersion()); + assertEquals(20, updateResponse.getGetResult().sourceAsMap().get("counter")); + + } + { + IndexRequest indexRequest = new IndexRequest("index", "type", "with_doc"); + indexRequest.source("field_1", "one", "field_3", "three"); + indexRequest.version(12L); + indexRequest.versionType(VersionType.EXTERNAL); + IndexResponse indexResponse = highLevelClient().index(indexRequest); + assertEquals(RestStatus.CREATED, indexResponse.status()); + assertEquals(12L, indexResponse.getVersion()); + + UpdateRequest updateRequest = new UpdateRequest("index", "type", "with_doc"); + updateRequest.doc(singletonMap("field_2", "two"), randomFrom(XContentType.values())); + updateRequest.fetchSource("field_*", "field_3"); + + UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + assertEquals(RestStatus.OK, updateResponse.status()); + assertEquals(DocWriteResponse.Result.UPDATED, updateResponse.getResult()); + assertEquals(13L, updateResponse.getVersion()); + GetResult getResult = updateResponse.getGetResult(); + assertEquals(13L, updateResponse.getVersion()); + Map sourceAsMap = getResult.sourceAsMap(); + assertEquals("one", sourceAsMap.get("field_1")); + assertEquals("two", sourceAsMap.get("field_2")); + assertFalse(sourceAsMap.containsKey("field_3")); + } + { + IndexRequest indexRequest = new IndexRequest("index", "type", "noop"); + indexRequest.source("field", "value"); + IndexResponse indexResponse = highLevelClient().index(indexRequest); + assertEquals(RestStatus.CREATED, indexResponse.status()); + assertEquals(1L, indexResponse.getVersion()); + + UpdateRequest updateRequest = new UpdateRequest("index", "type", "noop"); + updateRequest.doc(singletonMap("field", "value"), randomFrom(XContentType.values())); + + UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + assertEquals(RestStatus.OK, updateResponse.status()); + assertEquals(DocWriteResponse.Result.NOOP, updateResponse.getResult()); + assertEquals(1L, updateResponse.getVersion()); + + updateRequest.detectNoop(false); + + updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + assertEquals(RestStatus.OK, updateResponse.status()); + assertEquals(DocWriteResponse.Result.UPDATED, updateResponse.getResult()); + assertEquals(2L, updateResponse.getVersion()); + } + { + UpdateRequest updateRequest = new UpdateRequest("index", "type", "with_upsert"); + updateRequest.upsert(singletonMap("doc_status", "created")); + updateRequest.doc(singletonMap("doc_status", "updated")); + updateRequest.fetchSource(true); + + UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + assertEquals(RestStatus.CREATED, updateResponse.status()); + assertEquals("index", updateResponse.getIndex()); + assertEquals("type", updateResponse.getType()); + assertEquals("with_upsert", updateResponse.getId()); + GetResult getResult = updateResponse.getGetResult(); + assertEquals(1L, updateResponse.getVersion()); + assertEquals("created", 
getResult.sourceAsMap().get("doc_status")); + } + { + UpdateRequest updateRequest = new UpdateRequest("index", "type", "with_doc_as_upsert"); + updateRequest.doc(singletonMap("field", "initialized")); + updateRequest.fetchSource(true); + updateRequest.docAsUpsert(true); + + UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + assertEquals(RestStatus.CREATED, updateResponse.status()); + assertEquals("index", updateResponse.getIndex()); + assertEquals("type", updateResponse.getType()); + assertEquals("with_doc_as_upsert", updateResponse.getId()); + GetResult getResult = updateResponse.getGetResult(); + assertEquals(1L, updateResponse.getVersion()); + assertEquals("initialized", getResult.sourceAsMap().get("field")); + } + { + UpdateRequest updateRequest = new UpdateRequest("index", "type", "with_scripted_upsert"); + updateRequest.fetchSource(true); + updateRequest.script(new Script(ScriptType.INLINE, "painless", "ctx._source.level = params.test", singletonMap("test", "C"))); + updateRequest.scriptedUpsert(true); + updateRequest.upsert(singletonMap("level", "A")); + + UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + assertEquals(RestStatus.CREATED, updateResponse.status()); + assertEquals("index", updateResponse.getIndex()); + assertEquals("type", updateResponse.getType()); + assertEquals("with_scripted_upsert", updateResponse.getId()); + + GetResult getResult = updateResponse.getGetResult(); + assertEquals(1L, updateResponse.getVersion()); + assertEquals("C", getResult.sourceAsMap().get("level")); + } + { + IllegalStateException exception = expectThrows(IllegalStateException.class, () -> { + UpdateRequest updateRequest = new UpdateRequest("index", "type", "id"); + updateRequest.doc(new IndexRequest().source(Collections.singletonMap("field", "doc"), XContentType.JSON)); + updateRequest.upsert(new IndexRequest().source(Collections.singletonMap("field", "upsert"), XContentType.YAML)); + execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + }); + assertEquals("Update request cannot have different content types for doc [JSON] and upsert [YAML] documents", + exception.getMessage()); + } + } + + public void testBulk() throws IOException { + int nbItems = randomIntBetween(10, 100); + boolean[] errors = new boolean[nbItems]; + + XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE); + + BulkRequest bulkRequest = new BulkRequest(); + for (int i = 0; i < nbItems; i++) { + String id = String.valueOf(i); + boolean erroneous = randomBoolean(); + errors[i] = erroneous; + + DocWriteRequest.OpType opType = randomFrom(DocWriteRequest.OpType.values()); + if (opType == DocWriteRequest.OpType.DELETE) { + if (erroneous == false) { + assertEquals(RestStatus.CREATED, + highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status()); + } + DeleteRequest deleteRequest = new DeleteRequest("index", "test", id); + bulkRequest.add(deleteRequest); + + } else { + BytesReference source = XContentBuilder.builder(xContentType.xContent()).startObject().field("id", i).endObject().bytes(); + if (opType == DocWriteRequest.OpType.INDEX) { + IndexRequest indexRequest = new IndexRequest("index", "test", id).source(source, xContentType); + if (erroneous) { + indexRequest.version(12L); + } + bulkRequest.add(indexRequest); + + } else if (opType == DocWriteRequest.OpType.CREATE) { + IndexRequest createRequest = new 
IndexRequest("index", "test", id).source(source, xContentType).create(true); + if (erroneous) { + assertEquals(RestStatus.CREATED, highLevelClient().index(createRequest).status()); + } + bulkRequest.add(createRequest); + + } else if (opType == DocWriteRequest.OpType.UPDATE) { + UpdateRequest updateRequest = new UpdateRequest("index", "test", id) + .doc(new IndexRequest().source(source, xContentType)); + if (erroneous == false) { + assertEquals(RestStatus.CREATED, + highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status()); + } + bulkRequest.add(updateRequest); + } + } + } + + BulkResponse bulkResponse = execute(bulkRequest, highLevelClient()::bulk, highLevelClient()::bulkAsync); + assertEquals(RestStatus.OK, bulkResponse.status()); + assertTrue(bulkResponse.getTook().getMillis() > 0); + assertEquals(nbItems, bulkResponse.getItems().length); + + validateBulkResponses(nbItems, errors, bulkResponse, bulkRequest); + } + + public void testBulkProcessorIntegration() throws IOException, InterruptedException { + int nbItems = randomIntBetween(10, 100); + boolean[] errors = new boolean[nbItems]; + + XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE); + + AtomicReference responseRef = new AtomicReference<>(); + AtomicReference requestRef = new AtomicReference<>(); + AtomicReference error = new AtomicReference<>(); + + BulkProcessor.Listener listener = new BulkProcessor.Listener() { + @Override + public void beforeBulk(long executionId, BulkRequest request) { + + } + + @Override + public void afterBulk(long executionId, BulkRequest request, BulkResponse response) { + responseRef.set(response); + requestRef.set(request); + } + + @Override + public void afterBulk(long executionId, BulkRequest request, Throwable failure) { + error.set(failure); + } + }; + + ThreadPool threadPool = new ThreadPool(Settings.builder().put("node.name", getClass().getName()).build()); + try(BulkProcessor processor = new BulkProcessor.Builder(highLevelClient()::bulkAsync, listener, threadPool) + .setConcurrentRequests(0) + .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.GB)) + .setBulkActions(nbItems + 1) + .build()) { + for (int i = 0; i < nbItems; i++) { + String id = String.valueOf(i); + boolean erroneous = randomBoolean(); + errors[i] = erroneous; + + DocWriteRequest.OpType opType = randomFrom(DocWriteRequest.OpType.values()); + if (opType == DocWriteRequest.OpType.DELETE) { + if (erroneous == false) { + assertEquals(RestStatus.CREATED, + highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status()); + } + DeleteRequest deleteRequest = new DeleteRequest("index", "test", id); + processor.add(deleteRequest); + + } else { + if (opType == DocWriteRequest.OpType.INDEX) { + IndexRequest indexRequest = new IndexRequest("index", "test", id).source(xContentType, "id", i); + if (erroneous) { + indexRequest.version(12L); + } + processor.add(indexRequest); + + } else if (opType == DocWriteRequest.OpType.CREATE) { + IndexRequest createRequest = new IndexRequest("index", "test", id).source(xContentType, "id", i).create(true); + if (erroneous) { + assertEquals(RestStatus.CREATED, highLevelClient().index(createRequest).status()); + } + processor.add(createRequest); + + } else if (opType == DocWriteRequest.OpType.UPDATE) { + UpdateRequest updateRequest = new UpdateRequest("index", "test", id) + .doc(new IndexRequest().source(xContentType, "id", i)); + if (erroneous == false) { + assertEquals(RestStatus.CREATED, + highLevelClient().index(new 
IndexRequest("index", "test", id).source("field", -1)).status()); + } + processor.add(updateRequest); + } + } + } + assertNull(responseRef.get()); + assertNull(requestRef.get()); + } + + + BulkResponse bulkResponse = responseRef.get(); + BulkRequest bulkRequest = requestRef.get(); + + assertEquals(RestStatus.OK, bulkResponse.status()); + assertTrue(bulkResponse.getTookInMillis() > 0); + assertEquals(nbItems, bulkResponse.getItems().length); + assertNull(error.get()); + + validateBulkResponses(nbItems, errors, bulkResponse, bulkRequest); + + terminate(threadPool); + } + + private void validateBulkResponses(int nbItems, boolean[] errors, BulkResponse bulkResponse, BulkRequest bulkRequest) { + for (int i = 0; i < nbItems; i++) { + BulkItemResponse bulkItemResponse = bulkResponse.getItems()[i]; + + assertEquals(i, bulkItemResponse.getItemId()); + assertEquals("index", bulkItemResponse.getIndex()); + assertEquals("test", bulkItemResponse.getType()); + assertEquals(String.valueOf(i), bulkItemResponse.getId()); + + DocWriteRequest.OpType requestOpType = bulkRequest.requests().get(i).opType(); + if (requestOpType == DocWriteRequest.OpType.INDEX || requestOpType == DocWriteRequest.OpType.CREATE) { + assertEquals(errors[i], bulkItemResponse.isFailed()); + assertEquals(errors[i] ? RestStatus.CONFLICT : RestStatus.CREATED, bulkItemResponse.status()); + } else if (requestOpType == DocWriteRequest.OpType.UPDATE) { + assertEquals(errors[i], bulkItemResponse.isFailed()); + assertEquals(errors[i] ? RestStatus.NOT_FOUND : RestStatus.OK, bulkItemResponse.status()); + } else if (requestOpType == DocWriteRequest.OpType.DELETE) { + assertFalse(bulkItemResponse.isFailed()); + assertEquals(errors[i] ? RestStatus.NOT_FOUND : RestStatus.OK, bulkItemResponse.status()); + } + } + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/CustomRestHighLevelClientTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/CustomRestHighLevelClientTests.java new file mode 100644 index 0000000000000..18ca074dd58b1 --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/CustomRestHighLevelClientTests.java @@ -0,0 +1,209 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client; + +import org.apache.http.Header; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHost; +import org.apache.http.ProtocolVersion; +import org.apache.http.RequestLine; +import org.apache.http.client.methods.HttpGet; +import org.apache.http.entity.ByteArrayEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.message.BasicHeader; +import org.apache.http.message.BasicRequestLine; +import org.apache.http.message.BasicStatusLine; +import org.apache.lucene.util.BytesRef; +import org.elasticsearch.Build; +import org.elasticsearch.Version; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.main.MainRequest; +import org.elasticsearch.action.main.MainResponse; +import org.elasticsearch.action.support.PlainActionFuture; +import org.elasticsearch.client.Request; +import org.elasticsearch.client.Response; +import org.elasticsearch.client.ResponseListener; +import org.elasticsearch.client.RestClient; +import org.elasticsearch.client.RestHighLevelClient; +import org.elasticsearch.cluster.ClusterName; +import org.elasticsearch.common.SuppressForbidden; +import org.elasticsearch.common.xcontent.XContentHelper; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.test.ESTestCase; +import org.junit.Before; + +import java.io.IOException; +import java.lang.reflect.Method; +import java.lang.reflect.Modifier; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; + +import static java.util.Collections.emptyMap; +import static java.util.Collections.emptySet; +import static org.hamcrest.Matchers.containsInAnyOrder; +import static org.mockito.Matchers.any; +import static org.mockito.Matchers.anyMapOf; +import static org.mockito.Matchers.anyObject; +import static org.mockito.Matchers.anyVararg; +import static org.mockito.Matchers.eq; +import static org.mockito.Mockito.doAnswer; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +/** + * Test and demonstrates how {@link RestHighLevelClient} can be extended to support custom endpoints. 
+ */ +public class CustomRestHighLevelClientTests extends ESTestCase { + + private static final String ENDPOINT = "/_custom"; + + private CustomRestClient restHighLevelClient; + + @Before + @SuppressWarnings("unchecked") + public void initClients() throws IOException { + if (restHighLevelClient == null) { + final RestClient restClient = mock(RestClient.class); + restHighLevelClient = new CustomRestClient(restClient); + + doAnswer(mock -> mockPerformRequest((Header) mock.getArguments()[4])) + .when(restClient) + .performRequest(eq(HttpGet.METHOD_NAME), eq(ENDPOINT), anyMapOf(String.class, String.class), anyObject(), anyVararg()); + + doAnswer(mock -> mockPerformRequestAsync((Header) mock.getArguments()[5], (ResponseListener) mock.getArguments()[4])) + .when(restClient) + .performRequestAsync(eq(HttpGet.METHOD_NAME), eq(ENDPOINT), anyMapOf(String.class, String.class), + any(HttpEntity.class), any(ResponseListener.class), anyVararg()); + } + } + + public void testCustomEndpoint() throws IOException { + final MainRequest request = new MainRequest(); + final Header header = new BasicHeader("node_name", randomAlphaOfLengthBetween(1, 10)); + + MainResponse response = restHighLevelClient.custom(request, header); + assertEquals(header.getValue(), response.getNodeName()); + + response = restHighLevelClient.customAndParse(request, header); + assertEquals(header.getValue(), response.getNodeName()); + } + + public void testCustomEndpointAsync() throws Exception { + final MainRequest request = new MainRequest(); + final Header header = new BasicHeader("node_name", randomAlphaOfLengthBetween(1, 10)); + + PlainActionFuture future = PlainActionFuture.newFuture(); + restHighLevelClient.customAsync(request, future, header); + assertEquals(header.getValue(), future.get().getNodeName()); + + future = PlainActionFuture.newFuture(); + restHighLevelClient.customAndParseAsync(request, future, header); + assertEquals(header.getValue(), future.get().getNodeName()); + } + + /** + * The {@link RestHighLevelClient} must declare the following execution methods using the protected modifier + * so that they can be used by subclasses to implement custom logic. + */ + @SuppressForbidden(reason = "We're forced to uses Class#getDeclaredMethods() here because this test checks protected methods") + public void testMethodsVisibility() throws ClassNotFoundException { + final String[] methodNames = new String[]{"performRequest", + "performRequestAsync", + "performRequestAndParseEntity", + "performRequestAsyncAndParseEntity", + "parseEntity", + "parseResponseException"}; + + final List protectedMethods = Arrays.stream(RestHighLevelClient.class.getDeclaredMethods()) + .filter(method -> Modifier.isProtected(method.getModifiers())) + .map(Method::getName) + .collect(Collectors.toList()); + + assertThat(protectedMethods, containsInAnyOrder(methodNames)); + } + + /** + * Mocks the asynchronous request execution by calling the {@link #mockPerformRequest(Header)} method. + */ + private Void mockPerformRequestAsync(Header httpHeader, ResponseListener responseListener) { + try { + responseListener.onSuccess(mockPerformRequest(httpHeader)); + } catch (IOException e) { + responseListener.onFailure(e); + } + return null; + } + + /** + * Mocks the synchronous request execution like if it was executed by Elasticsearch. 
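+ * The mocked response entity is a {@link MainResponse} serialized as JSON whose node name echoes the value of the provided header, which is what the test assertions check.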
+ */ + private Response mockPerformRequest(Header httpHeader) throws IOException { + final Response mockResponse = mock(Response.class); + when(mockResponse.getHost()).thenReturn(new HttpHost("localhost", 9200)); + + ProtocolVersion protocol = new ProtocolVersion("HTTP", 1, 1); + when(mockResponse.getStatusLine()).thenReturn(new BasicStatusLine(protocol, 200, "OK")); + + MainResponse response = new MainResponse(httpHeader.getValue(), Version.CURRENT, ClusterName.DEFAULT, "_na", Build.CURRENT, true); + BytesRef bytesRef = XContentHelper.toXContent(response, XContentType.JSON, false).toBytesRef(); + when(mockResponse.getEntity()).thenReturn(new ByteArrayEntity(bytesRef.bytes, ContentType.APPLICATION_JSON)); + + RequestLine requestLine = new BasicRequestLine(HttpGet.METHOD_NAME, ENDPOINT, protocol); + when(mockResponse.getRequestLine()).thenReturn(requestLine); + + return mockResponse; + } + + /** + * A custom high level client that provides custom methods to execute a request and get its associate response back. + */ + static class CustomRestClient extends RestHighLevelClient { + + private CustomRestClient(RestClient restClient) { + super(restClient); + } + + MainResponse custom(MainRequest mainRequest, Header... headers) throws IOException { + return performRequest(mainRequest, this::toRequest, this::toResponse, emptySet(), headers); + } + + MainResponse customAndParse(MainRequest mainRequest, Header... headers) throws IOException { + return performRequestAndParseEntity(mainRequest, this::toRequest, MainResponse::fromXContent, emptySet(), headers); + } + + void customAsync(MainRequest mainRequest, ActionListener listener, Header... headers) { + performRequestAsync(mainRequest, this::toRequest, this::toResponse, listener, emptySet(), headers); + } + + void customAndParseAsync(MainRequest mainRequest, ActionListener listener, Header... headers) { + performRequestAsyncAndParseEntity(mainRequest, this::toRequest, MainResponse::fromXContent, listener, emptySet(), headers); + } + + Request toRequest(MainRequest mainRequest) throws IOException { + return new Request(HttpGet.METHOD_NAME, ENDPOINT, emptyMap(), null); + } + + MainResponse toResponse(Response response) throws IOException { + return parseEntity(response.getEntity(), MainResponse::fromXContent); + } + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/ESRestHighLevelClientTestCase.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/ESRestHighLevelClientTestCase.java new file mode 100644 index 0000000000000..cdd8317830909 --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/ESRestHighLevelClientTestCase.java @@ -0,0 +1,75 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client; + +import org.apache.http.Header; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.support.PlainActionFuture; +import org.elasticsearch.test.rest.ESRestTestCase; +import org.junit.AfterClass; +import org.junit.Before; + +import java.io.IOException; + +public abstract class ESRestHighLevelClientTestCase extends ESRestTestCase { + + private static RestHighLevelClient restHighLevelClient; + + @Before + public void initHighLevelClient() throws IOException { + super.initClient(); + if (restHighLevelClient == null) { + restHighLevelClient = new RestHighLevelClient(client()); + } + } + + @AfterClass + public static void cleanupClient() { + restHighLevelClient = null; + } + + protected static RestHighLevelClient highLevelClient() { + return restHighLevelClient; + } + + /** + * Executes the provided request using either the sync method or its async variant, both provided as functions + */ + protected static Resp execute(Req request, SyncMethod syncMethod, + AsyncMethod asyncMethod, Header... headers) throws IOException { + if (randomBoolean()) { + return syncMethod.execute(request, headers); + } else { + PlainActionFuture future = PlainActionFuture.newFuture(); + asyncMethod.execute(request, future, headers); + return future.actionGet(); + } + } + + @FunctionalInterface + protected interface SyncMethod { + Response execute(Request request, Header... headers) throws IOException; + } + + @FunctionalInterface + protected interface AsyncMethod { + void execute(Request request, ActionListener listener, Header... headers); + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/PingAndInfoIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/PingAndInfoIT.java new file mode 100644 index 0000000000000..b22ded52655df --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/PingAndInfoIT.java @@ -0,0 +1,51 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client; + +import org.elasticsearch.action.main.MainResponse; + +import java.io.IOException; +import java.util.Map; + +public class PingAndInfoIT extends ESRestHighLevelClientTestCase { + + public void testPing() throws IOException { + assertTrue(highLevelClient().ping()); + } + + @SuppressWarnings("unchecked") + public void testInfo() throws IOException { + MainResponse info = highLevelClient().info(); + // compare with what the low level client outputs + Map infoAsMap = entityAsMap(adminClient().performRequest("GET", "/")); + assertEquals(infoAsMap.get("cluster_name"), info.getClusterName().value()); + assertEquals(infoAsMap.get("cluster_uuid"), info.getClusterUuid()); + + // only check node name existence, might be a different one from what was hit by low level client in multi-node cluster + assertNotNull(info.getNodeName()); + Map versionMap = (Map) infoAsMap.get("version"); + assertEquals(versionMap.get("build_hash"), info.getBuild().shortHash()); + assertEquals(versionMap.get("build_date"), info.getBuild().date()); + assertEquals(versionMap.get("build_snapshot"), info.getBuild().isSnapshot()); + assertEquals(versionMap.get("number"), info.getVersion().toString()); + assertEquals(versionMap.get("lucene_version"), info.getVersion().luceneVersion.toString()); + } + +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/RequestTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/RequestTests.java new file mode 100644 index 0000000000000..8f52eb37fe95d --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/RequestTests.java @@ -0,0 +1,945 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client; + +import org.apache.http.HttpEntity; +import org.apache.http.entity.ByteArrayEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.util.EntityUtils; +import org.elasticsearch.action.DocWriteRequest; +import org.elasticsearch.action.bulk.BulkRequest; +import org.elasticsearch.action.bulk.BulkShardRequest; +import org.elasticsearch.action.delete.DeleteRequest; +import org.elasticsearch.action.get.GetRequest; +import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.action.search.ClearScrollRequest; +import org.elasticsearch.action.search.SearchRequest; +import org.elasticsearch.action.search.SearchScrollRequest; +import org.elasticsearch.action.search.SearchType; +import org.elasticsearch.action.support.IndicesOptions; +import org.elasticsearch.action.support.WriteRequest; +import org.elasticsearch.action.support.replication.ReplicatedWriteRequest; +import org.elasticsearch.action.support.replication.ReplicationRequest; +import org.elasticsearch.action.update.UpdateRequest; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.bytes.BytesArray; +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.io.Streams; +import org.elasticsearch.common.lucene.uid.Versions; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentHelper; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.index.VersionType; +import org.elasticsearch.index.query.TermQueryBuilder; +import org.elasticsearch.rest.action.search.RestSearchAction; +import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; +import org.elasticsearch.search.aggregations.support.ValueType; +import org.elasticsearch.search.builder.SearchSourceBuilder; +import org.elasticsearch.search.collapse.CollapseBuilder; +import org.elasticsearch.search.fetch.subphase.FetchSourceContext; +import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; +import org.elasticsearch.search.rescore.QueryRescorerBuilder; +import org.elasticsearch.search.suggest.SuggestBuilder; +import org.elasticsearch.search.suggest.completion.CompletionSuggestionBuilder; +import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.RandomObjects; + +import java.io.IOException; +import java.io.InputStream; +import java.lang.reflect.Constructor; +import java.lang.reflect.Modifier; +import java.util.HashMap; +import java.util.Locale; +import java.util.Map; +import java.util.StringJoiner; +import java.util.function.Consumer; +import java.util.function.Function; + +import static java.util.Collections.singletonMap; +import static org.elasticsearch.client.Request.enforceSameContentType; +import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertToXContentEquivalent; + +public class RequestTests extends ESTestCase { + + public void testConstructor() throws Exception { + final String method = randomFrom("GET", "PUT", "POST", "HEAD", "DELETE"); + final String endpoint = randomAlphaOfLengthBetween(1, 10); + final Map parameters = singletonMap(randomAlphaOfLength(5), randomAlphaOfLength(5)); + final HttpEntity entity = randomBoolean() ? 
new StringEntity(randomAlphaOfLengthBetween(1, 100), ContentType.TEXT_PLAIN) : null; + + NullPointerException e = expectThrows(NullPointerException.class, () -> new Request(null, endpoint, parameters, entity)); + assertEquals("method cannot be null", e.getMessage()); + + e = expectThrows(NullPointerException.class, () -> new Request(method, null, parameters, entity)); + assertEquals("endpoint cannot be null", e.getMessage()); + + e = expectThrows(NullPointerException.class, () -> new Request(method, endpoint, null, entity)); + assertEquals("parameters cannot be null", e.getMessage()); + + final Request request = new Request(method, endpoint, parameters, entity); + assertEquals(method, request.getMethod()); + assertEquals(endpoint, request.getEndpoint()); + assertEquals(parameters, request.getParameters()); + assertEquals(entity, request.getEntity()); + + final Constructor[] constructors = Request.class.getConstructors(); + assertEquals("Expected only 1 constructor", 1, constructors.length); + assertTrue("Request constructor is not public", Modifier.isPublic(constructors[0].getModifiers())); + } + + public void testClassVisibility() throws Exception { + assertTrue("Request class is not public", Modifier.isPublic(Request.class.getModifiers())); + } + + public void testPing() { + Request request = Request.ping(); + assertEquals("/", request.getEndpoint()); + assertEquals(0, request.getParameters().size()); + assertNull(request.getEntity()); + assertEquals("HEAD", request.getMethod()); + } + + public void testInfo() { + Request request = Request.info(); + assertEquals("/", request.getEndpoint()); + assertEquals(0, request.getParameters().size()); + assertNull(request.getEntity()); + assertEquals("GET", request.getMethod()); + } + + public void testGet() { + getAndExistsTest(Request::get, "GET"); + } + + public void testDelete() throws IOException { + String index = randomAlphaOfLengthBetween(3, 10); + String type = randomAlphaOfLengthBetween(3, 10); + String id = randomAlphaOfLengthBetween(3, 10); + DeleteRequest deleteRequest = new DeleteRequest(index, type, id); + + Map expectedParams = new HashMap<>(); + + setRandomTimeout(deleteRequest, expectedParams); + setRandomRefreshPolicy(deleteRequest, expectedParams); + setRandomVersion(deleteRequest, expectedParams); + setRandomVersionType(deleteRequest, expectedParams); + + if (frequently()) { + if (randomBoolean()) { + String routing = randomAlphaOfLengthBetween(3, 10); + deleteRequest.routing(routing); + expectedParams.put("routing", routing); + } + if (randomBoolean()) { + String parent = randomAlphaOfLengthBetween(3, 10); + deleteRequest.parent(parent); + expectedParams.put("parent", parent); + } + } + + Request request = Request.delete(deleteRequest); + assertEquals("/" + index + "/" + type + "/" + id, request.getEndpoint()); + assertEquals(expectedParams, request.getParameters()); + assertEquals("DELETE", request.getMethod()); + assertNull(request.getEntity()); + } + + public void testExists() { + getAndExistsTest(Request::exists, "HEAD"); + } + + private static void getAndExistsTest(Function requestConverter, String method) { + String index = randomAlphaOfLengthBetween(3, 10); + String type = randomAlphaOfLengthBetween(3, 10); + String id = randomAlphaOfLengthBetween(3, 10); + GetRequest getRequest = new GetRequest(index, type, id); + + Map expectedParams = new HashMap<>(); + if (randomBoolean()) { + if (randomBoolean()) { + String preference = randomAlphaOfLengthBetween(3, 10); + getRequest.preference(preference); + 
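+ // every optional value set on the request below is expected to be translated into the corresponding query string parameter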
expectedParams.put("preference", preference); + } + if (randomBoolean()) { + String routing = randomAlphaOfLengthBetween(3, 10); + getRequest.routing(routing); + expectedParams.put("routing", routing); + } + if (randomBoolean()) { + boolean realtime = randomBoolean(); + getRequest.realtime(realtime); + if (realtime == false) { + expectedParams.put("realtime", "false"); + } + } + if (randomBoolean()) { + boolean refresh = randomBoolean(); + getRequest.refresh(refresh); + if (refresh) { + expectedParams.put("refresh", "true"); + } + } + if (randomBoolean()) { + long version = randomLong(); + getRequest.version(version); + if (version != Versions.MATCH_ANY) { + expectedParams.put("version", Long.toString(version)); + } + } + if (randomBoolean()) { + VersionType versionType = randomFrom(VersionType.values()); + getRequest.versionType(versionType); + if (versionType != VersionType.INTERNAL) { + expectedParams.put("version_type", versionType.name().toLowerCase(Locale.ROOT)); + } + } + if (randomBoolean()) { + int numStoredFields = randomIntBetween(1, 10); + String[] storedFields = new String[numStoredFields]; + StringBuilder storedFieldsParam = new StringBuilder(); + for (int i = 0; i < numStoredFields; i++) { + String storedField = randomAlphaOfLengthBetween(3, 10); + storedFields[i] = storedField; + storedFieldsParam.append(storedField); + if (i < numStoredFields - 1) { + storedFieldsParam.append(","); + } + } + getRequest.storedFields(storedFields); + expectedParams.put("stored_fields", storedFieldsParam.toString()); + } + if (randomBoolean()) { + randomizeFetchSourceContextParams(getRequest::fetchSourceContext, expectedParams); + } + } + Request request = requestConverter.apply(getRequest); + assertEquals("/" + index + "/" + type + "/" + id, request.getEndpoint()); + assertEquals(expectedParams, request.getParameters()); + assertNull(request.getEntity()); + assertEquals(method, request.getMethod()); + } + + public void testIndex() throws IOException { + String index = randomAlphaOfLengthBetween(3, 10); + String type = randomAlphaOfLengthBetween(3, 10); + IndexRequest indexRequest = new IndexRequest(index, type); + + String id = randomBoolean() ? 
randomAlphaOfLengthBetween(3, 10) : null; + indexRequest.id(id); + + Map expectedParams = new HashMap<>(); + + String method = "POST"; + if (id != null) { + method = "PUT"; + if (randomBoolean()) { + indexRequest.opType(DocWriteRequest.OpType.CREATE); + } + } + + setRandomTimeout(indexRequest, expectedParams); + setRandomRefreshPolicy(indexRequest, expectedParams); + + // There is some logic around _create endpoint and version/version type + if (indexRequest.opType() == DocWriteRequest.OpType.CREATE) { + indexRequest.version(randomFrom(Versions.MATCH_ANY, Versions.MATCH_DELETED)); + expectedParams.put("version", Long.toString(Versions.MATCH_DELETED)); + } else { + setRandomVersion(indexRequest, expectedParams); + setRandomVersionType(indexRequest, expectedParams); + } + + if (frequently()) { + if (randomBoolean()) { + String routing = randomAlphaOfLengthBetween(3, 10); + indexRequest.routing(routing); + expectedParams.put("routing", routing); + } + if (randomBoolean()) { + String parent = randomAlphaOfLengthBetween(3, 10); + indexRequest.parent(parent); + expectedParams.put("parent", parent); + } + if (randomBoolean()) { + String pipeline = randomAlphaOfLengthBetween(3, 10); + indexRequest.setPipeline(pipeline); + expectedParams.put("pipeline", pipeline); + } + } + + XContentType xContentType = randomFrom(XContentType.values()); + int nbFields = randomIntBetween(0, 10); + try (XContentBuilder builder = XContentBuilder.builder(xContentType.xContent())) { + builder.startObject(); + for (int i = 0; i < nbFields; i++) { + builder.field("field_" + i, i); + } + builder.endObject(); + indexRequest.source(builder); + } + + Request request = Request.index(indexRequest); + if (indexRequest.opType() == DocWriteRequest.OpType.CREATE) { + assertEquals("/" + index + "/" + type + "/" + id + "/_create", request.getEndpoint()); + } else if (id != null) { + assertEquals("/" + index + "/" + type + "/" + id, request.getEndpoint()); + } else { + assertEquals("/" + index + "/" + type, request.getEndpoint()); + } + assertEquals(expectedParams, request.getParameters()); + assertEquals(method, request.getMethod()); + + HttpEntity entity = request.getEntity(); + assertTrue(entity instanceof ByteArrayEntity); + assertEquals(indexRequest.getContentType().mediaTypeWithoutParameters(), entity.getContentType().getValue()); + try (XContentParser parser = createParser(xContentType.xContent(), entity.getContent())) { + assertEquals(nbFields, parser.map().size()); + } + } + + public void testUpdate() throws IOException { + XContentType xContentType = randomFrom(XContentType.values()); + + Map expectedParams = new HashMap<>(); + String index = randomAlphaOfLengthBetween(3, 10); + String type = randomAlphaOfLengthBetween(3, 10); + String id = randomAlphaOfLengthBetween(3, 10); + + UpdateRequest updateRequest = new UpdateRequest(index, type, id); + updateRequest.detectNoop(randomBoolean()); + + if (randomBoolean()) { + BytesReference source = RandomObjects.randomSource(random(), xContentType); + updateRequest.doc(new IndexRequest().source(source, xContentType)); + + boolean docAsUpsert = randomBoolean(); + updateRequest.docAsUpsert(docAsUpsert); + if (docAsUpsert) { + expectedParams.put("doc_as_upsert", "true"); + } + } else { + updateRequest.script(mockScript("_value + 1")); + updateRequest.scriptedUpsert(randomBoolean()); + } + if (randomBoolean()) { + BytesReference source = RandomObjects.randomSource(random(), xContentType); + updateRequest.upsert(new IndexRequest().source(source, xContentType)); + } + if (randomBoolean()) 
{ + String routing = randomAlphaOfLengthBetween(3, 10); + updateRequest.routing(routing); + expectedParams.put("routing", routing); + } + if (randomBoolean()) { + String parent = randomAlphaOfLengthBetween(3, 10); + updateRequest.parent(parent); + expectedParams.put("parent", parent); + } + if (randomBoolean()) { + String timeout = randomTimeValue(); + updateRequest.timeout(timeout); + expectedParams.put("timeout", timeout); + } else { + expectedParams.put("timeout", ReplicationRequest.DEFAULT_TIMEOUT.getStringRep()); + } + if (randomBoolean()) { + WriteRequest.RefreshPolicy refreshPolicy = randomFrom(WriteRequest.RefreshPolicy.values()); + updateRequest.setRefreshPolicy(refreshPolicy); + if (refreshPolicy != WriteRequest.RefreshPolicy.NONE) { + expectedParams.put("refresh", refreshPolicy.getValue()); + } + } + if (randomBoolean()) { + int waitForActiveShards = randomIntBetween(0, 10); + updateRequest.waitForActiveShards(waitForActiveShards); + expectedParams.put("wait_for_active_shards", String.valueOf(waitForActiveShards)); + } + if (randomBoolean()) { + long version = randomLong(); + updateRequest.version(version); + if (version != Versions.MATCH_ANY) { + expectedParams.put("version", Long.toString(version)); + } + } + if (randomBoolean()) { + VersionType versionType = randomFrom(VersionType.values()); + updateRequest.versionType(versionType); + if (versionType != VersionType.INTERNAL) { + expectedParams.put("version_type", versionType.name().toLowerCase(Locale.ROOT)); + } + } + if (randomBoolean()) { + int retryOnConflict = randomIntBetween(0, 5); + updateRequest.retryOnConflict(retryOnConflict); + if (retryOnConflict > 0) { + expectedParams.put("retry_on_conflict", String.valueOf(retryOnConflict)); + } + } + if (randomBoolean()) { + randomizeFetchSourceContextParams(updateRequest::fetchSource, expectedParams); + } + + Request request = Request.update(updateRequest); + assertEquals("/" + index + "/" + type + "/" + id + "/_update", request.getEndpoint()); + assertEquals(expectedParams, request.getParameters()); + assertEquals("POST", request.getMethod()); + + HttpEntity entity = request.getEntity(); + assertTrue(entity instanceof ByteArrayEntity); + + UpdateRequest parsedUpdateRequest = new UpdateRequest(); + + XContentType entityContentType = XContentType.fromMediaTypeOrFormat(entity.getContentType().getValue()); + try (XContentParser parser = createParser(entityContentType.xContent(), entity.getContent())) { + parsedUpdateRequest.fromXContent(parser); + } + + assertEquals(updateRequest.scriptedUpsert(), parsedUpdateRequest.scriptedUpsert()); + assertEquals(updateRequest.docAsUpsert(), parsedUpdateRequest.docAsUpsert()); + assertEquals(updateRequest.detectNoop(), parsedUpdateRequest.detectNoop()); + assertEquals(updateRequest.fetchSource(), parsedUpdateRequest.fetchSource()); + assertEquals(updateRequest.script(), parsedUpdateRequest.script()); + if (updateRequest.doc() != null) { + assertToXContentEquivalent(updateRequest.doc().source(), parsedUpdateRequest.doc().source(), xContentType); + } else { + assertNull(parsedUpdateRequest.doc()); + } + if (updateRequest.upsertRequest() != null) { + assertToXContentEquivalent(updateRequest.upsertRequest().source(), parsedUpdateRequest.upsertRequest().source(), xContentType); + } else { + assertNull(parsedUpdateRequest.upsertRequest()); + } + } + + public void testUpdateWithDifferentContentTypes() throws IOException { + IllegalStateException exception = expectThrows(IllegalStateException.class, () -> { + UpdateRequest updateRequest = new 
UpdateRequest(); + updateRequest.doc(new IndexRequest().source(singletonMap("field", "doc"), XContentType.JSON)); + updateRequest.upsert(new IndexRequest().source(singletonMap("field", "upsert"), XContentType.YAML)); + Request.update(updateRequest); + }); + assertEquals("Update request cannot have different content types for doc [JSON] and upsert [YAML] documents", + exception.getMessage()); + } + + public void testBulk() throws IOException { + Map expectedParams = new HashMap<>(); + + BulkRequest bulkRequest = new BulkRequest(); + if (randomBoolean()) { + String timeout = randomTimeValue(); + bulkRequest.timeout(timeout); + expectedParams.put("timeout", timeout); + } else { + expectedParams.put("timeout", BulkShardRequest.DEFAULT_TIMEOUT.getStringRep()); + } + + if (randomBoolean()) { + WriteRequest.RefreshPolicy refreshPolicy = randomFrom(WriteRequest.RefreshPolicy.values()); + bulkRequest.setRefreshPolicy(refreshPolicy); + if (refreshPolicy != WriteRequest.RefreshPolicy.NONE) { + expectedParams.put("refresh", refreshPolicy.getValue()); + } + } + + XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE); + + int nbItems = randomIntBetween(10, 100); + for (int i = 0; i < nbItems; i++) { + String index = randomAlphaOfLength(5); + String type = randomAlphaOfLength(5); + String id = randomAlphaOfLength(5); + + BytesReference source = RandomObjects.randomSource(random(), xContentType); + DocWriteRequest.OpType opType = randomFrom(DocWriteRequest.OpType.values()); + + DocWriteRequest docWriteRequest = null; + if (opType == DocWriteRequest.OpType.INDEX) { + IndexRequest indexRequest = new IndexRequest(index, type, id).source(source, xContentType); + docWriteRequest = indexRequest; + if (randomBoolean()) { + indexRequest.setPipeline(randomAlphaOfLength(5)); + } + if (randomBoolean()) { + indexRequest.parent(randomAlphaOfLength(5)); + } + } else if (opType == DocWriteRequest.OpType.CREATE) { + IndexRequest createRequest = new IndexRequest(index, type, id).source(source, xContentType).create(true); + docWriteRequest = createRequest; + if (randomBoolean()) { + createRequest.parent(randomAlphaOfLength(5)); + } + } else if (opType == DocWriteRequest.OpType.UPDATE) { + final UpdateRequest updateRequest = new UpdateRequest(index, type, id).doc(new IndexRequest().source(source, xContentType)); + docWriteRequest = updateRequest; + if (randomBoolean()) { + updateRequest.retryOnConflict(randomIntBetween(1, 5)); + } + if (randomBoolean()) { + randomizeFetchSourceContextParams(updateRequest::fetchSource, new HashMap<>()); + } + if (randomBoolean()) { + updateRequest.parent(randomAlphaOfLength(5)); + } + } else if (opType == DocWriteRequest.OpType.DELETE) { + docWriteRequest = new DeleteRequest(index, type, id); + } + + if (randomBoolean()) { + docWriteRequest.routing(randomAlphaOfLength(10)); + } + if (randomBoolean()) { + docWriteRequest.version(randomNonNegativeLong()); + } + if (randomBoolean()) { + docWriteRequest.versionType(randomFrom(VersionType.values())); + } + bulkRequest.add(docWriteRequest); + } + + Request request = Request.bulk(bulkRequest); + assertEquals("/_bulk", request.getEndpoint()); + assertEquals(expectedParams, request.getParameters()); + assertEquals("POST", request.getMethod()); + assertEquals(xContentType.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue()); + byte[] content = new byte[(int) request.getEntity().getContentLength()]; + try (InputStream inputStream = request.getEntity().getContent()) { + Streams.readFully(inputStream, 
content); + } + + BulkRequest parsedBulkRequest = new BulkRequest(); + parsedBulkRequest.add(content, 0, content.length, xContentType); + assertEquals(bulkRequest.numberOfActions(), parsedBulkRequest.numberOfActions()); + + for (int i = 0; i < bulkRequest.numberOfActions(); i++) { + DocWriteRequest originalRequest = bulkRequest.requests().get(i); + DocWriteRequest parsedRequest = parsedBulkRequest.requests().get(i); + + assertEquals(originalRequest.opType(), parsedRequest.opType()); + assertEquals(originalRequest.index(), parsedRequest.index()); + assertEquals(originalRequest.type(), parsedRequest.type()); + assertEquals(originalRequest.id(), parsedRequest.id()); + assertEquals(originalRequest.routing(), parsedRequest.routing()); + assertEquals(originalRequest.parent(), parsedRequest.parent()); + assertEquals(originalRequest.version(), parsedRequest.version()); + assertEquals(originalRequest.versionType(), parsedRequest.versionType()); + + DocWriteRequest.OpType opType = originalRequest.opType(); + if (opType == DocWriteRequest.OpType.INDEX) { + IndexRequest indexRequest = (IndexRequest) originalRequest; + IndexRequest parsedIndexRequest = (IndexRequest) parsedRequest; + + assertEquals(indexRequest.getPipeline(), parsedIndexRequest.getPipeline()); + assertToXContentEquivalent(indexRequest.source(), parsedIndexRequest.source(), xContentType); + } else if (opType == DocWriteRequest.OpType.UPDATE) { + UpdateRequest updateRequest = (UpdateRequest) originalRequest; + UpdateRequest parsedUpdateRequest = (UpdateRequest) parsedRequest; + + assertEquals(updateRequest.retryOnConflict(), parsedUpdateRequest.retryOnConflict()); + assertEquals(updateRequest.fetchSource(), parsedUpdateRequest.fetchSource()); + if (updateRequest.doc() != null) { + assertToXContentEquivalent(updateRequest.doc().source(), parsedUpdateRequest.doc().source(), xContentType); + } else { + assertNull(parsedUpdateRequest.doc()); + } + } + } + } + + public void testBulkWithDifferentContentTypes() throws IOException { + { + BulkRequest bulkRequest = new BulkRequest(); + bulkRequest.add(new DeleteRequest("index", "type", "0")); + bulkRequest.add(new UpdateRequest("index", "type", "1").script(mockScript("test"))); + bulkRequest.add(new DeleteRequest("index", "type", "2")); + + Request request = Request.bulk(bulkRequest); + assertEquals(XContentType.JSON.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue()); + } + { + XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE); + BulkRequest bulkRequest = new BulkRequest(); + bulkRequest.add(new DeleteRequest("index", "type", "0")); + bulkRequest.add(new IndexRequest("index", "type", "0").source(singletonMap("field", "value"), xContentType)); + bulkRequest.add(new DeleteRequest("index", "type", "2")); + + Request request = Request.bulk(bulkRequest); + assertEquals(xContentType.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue()); + } + { + XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE); + UpdateRequest updateRequest = new UpdateRequest("index", "type", "0"); + if (randomBoolean()) { + updateRequest.doc(new IndexRequest().source(singletonMap("field", "value"), xContentType)); + } else { + updateRequest.upsert(new IndexRequest().source(singletonMap("field", "value"), xContentType)); + } + + Request request = Request.bulk(new BulkRequest().add(updateRequest)); + assertEquals(xContentType.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue()); + } + { + BulkRequest 
bulkRequest = new BulkRequest(); + bulkRequest.add(new IndexRequest("index", "type", "0").source(singletonMap("field", "value"), XContentType.SMILE)); + bulkRequest.add(new IndexRequest("index", "type", "1").source(singletonMap("field", "value"), XContentType.JSON)); + IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> Request.bulk(bulkRequest)); + assertEquals("Mismatching content-type found for request with content-type [JSON], " + + "previous requests have content-type [SMILE]", exception.getMessage()); + } + { + BulkRequest bulkRequest = new BulkRequest(); + bulkRequest.add(new IndexRequest("index", "type", "0") + .source(singletonMap("field", "value"), XContentType.JSON)); + bulkRequest.add(new IndexRequest("index", "type", "1") + .source(singletonMap("field", "value"), XContentType.JSON)); + bulkRequest.add(new UpdateRequest("index", "type", "2") + .doc(new IndexRequest().source(singletonMap("field", "value"), XContentType.JSON)) + .upsert(new IndexRequest().source(singletonMap("field", "value"), XContentType.SMILE)) + ); + IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> Request.bulk(bulkRequest)); + assertEquals("Mismatching content-type found for request with content-type [SMILE], " + + "previous requests have content-type [JSON]", exception.getMessage()); + } + { + XContentType xContentType = randomFrom(XContentType.CBOR, XContentType.YAML); + BulkRequest bulkRequest = new BulkRequest(); + bulkRequest.add(new DeleteRequest("index", "type", "0")); + bulkRequest.add(new IndexRequest("index", "type", "1").source(singletonMap("field", "value"), XContentType.JSON)); + bulkRequest.add(new DeleteRequest("index", "type", "2")); + bulkRequest.add(new DeleteRequest("index", "type", "3")); + bulkRequest.add(new IndexRequest("index", "type", "4").source(singletonMap("field", "value"), XContentType.JSON)); + bulkRequest.add(new IndexRequest("index", "type", "1").source(singletonMap("field", "value"), xContentType)); + IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> Request.bulk(bulkRequest)); + assertEquals("Unsupported content-type found for request with content-type [" + xContentType + + "], only JSON and SMILE are supported", exception.getMessage()); + } + } + + public void testSearch() throws Exception { + SearchRequest searchRequest = new SearchRequest(); + int numIndices = randomIntBetween(0, 5); + String[] indices = new String[numIndices]; + for (int i = 0; i < numIndices; i++) { + indices[i] = "index-" + randomAlphaOfLengthBetween(2, 5); + } + searchRequest.indices(indices); + int numTypes = randomIntBetween(0, 5); + String[] types = new String[numTypes]; + for (int i = 0; i < numTypes; i++) { + types[i] = "type-" + randomAlphaOfLengthBetween(2, 5); + } + searchRequest.types(types); + + Map expectedParams = new HashMap<>(); + expectedParams.put(RestSearchAction.TYPED_KEYS_PARAM, "true"); + if (randomBoolean()) { + searchRequest.routing(randomAlphaOfLengthBetween(3, 10)); + expectedParams.put("routing", searchRequest.routing()); + } + if (randomBoolean()) { + searchRequest.preference(randomAlphaOfLengthBetween(3, 10)); + expectedParams.put("preference", searchRequest.preference()); + } + if (randomBoolean()) { + searchRequest.searchType(randomFrom(SearchType.values())); + } + expectedParams.put("search_type", searchRequest.searchType().name().toLowerCase(Locale.ROOT)); + if (randomBoolean()) { + searchRequest.requestCache(randomBoolean()); + 
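+ // request_cache is only expected as a parameter when it has been explicitly set on the request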
expectedParams.put("request_cache", Boolean.toString(searchRequest.requestCache())); + } + if (randomBoolean()) { + searchRequest.setBatchedReduceSize(randomIntBetween(2, Integer.MAX_VALUE)); + } + expectedParams.put("batched_reduce_size", Integer.toString(searchRequest.getBatchedReduceSize())); + if (randomBoolean()) { + searchRequest.scroll(randomTimeValue()); + expectedParams.put("scroll", searchRequest.scroll().keepAlive().getStringRep()); + } + + if (randomBoolean()) { + searchRequest.indicesOptions(IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean())); + } + expectedParams.put("ignore_unavailable", Boolean.toString(searchRequest.indicesOptions().ignoreUnavailable())); + expectedParams.put("allow_no_indices", Boolean.toString(searchRequest.indicesOptions().allowNoIndices())); + if (searchRequest.indicesOptions().expandWildcardsOpen() && searchRequest.indicesOptions().expandWildcardsClosed()) { + expectedParams.put("expand_wildcards", "open,closed"); + } else if (searchRequest.indicesOptions().expandWildcardsOpen()) { + expectedParams.put("expand_wildcards", "open"); + } else if (searchRequest.indicesOptions().expandWildcardsClosed()) { + expectedParams.put("expand_wildcards", "closed"); + } else { + expectedParams.put("expand_wildcards", "none"); + } + + SearchSourceBuilder searchSourceBuilder = null; + if (frequently()) { + searchSourceBuilder = new SearchSourceBuilder(); + if (randomBoolean()) { + searchSourceBuilder.size(randomIntBetween(0, Integer.MAX_VALUE)); + } + if (randomBoolean()) { + searchSourceBuilder.from(randomIntBetween(0, Integer.MAX_VALUE)); + } + if (randomBoolean()) { + searchSourceBuilder.minScore(randomFloat()); + } + if (randomBoolean()) { + searchSourceBuilder.explain(randomBoolean()); + } + if (randomBoolean()) { + searchSourceBuilder.profile(randomBoolean()); + } + if (randomBoolean()) { + searchSourceBuilder.highlighter(new HighlightBuilder().field(randomAlphaOfLengthBetween(3, 10))); + } + if (randomBoolean()) { + searchSourceBuilder.query(new TermQueryBuilder(randomAlphaOfLengthBetween(3, 10), randomAlphaOfLengthBetween(3, 10))); + } + if (randomBoolean()) { + searchSourceBuilder.aggregation(new TermsAggregationBuilder(randomAlphaOfLengthBetween(3, 10), ValueType.STRING) + .field(randomAlphaOfLengthBetween(3, 10))); + } + if (randomBoolean()) { + searchSourceBuilder.suggest(new SuggestBuilder().addSuggestion(randomAlphaOfLengthBetween(3, 10), + new CompletionSuggestionBuilder(randomAlphaOfLengthBetween(3, 10)))); + } + if (randomBoolean()) { + searchSourceBuilder.addRescorer(new QueryRescorerBuilder( + new TermQueryBuilder(randomAlphaOfLengthBetween(3, 10), randomAlphaOfLengthBetween(3, 10)))); + } + if (randomBoolean()) { + searchSourceBuilder.collapse(new CollapseBuilder(randomAlphaOfLengthBetween(3, 10))); + } + searchRequest.source(searchSourceBuilder); + } + + Request request = Request.search(searchRequest); + StringJoiner endpoint = new StringJoiner("/", "/", ""); + String index = String.join(",", indices); + if (Strings.hasLength(index)) { + endpoint.add(index); + } + String type = String.join(",", types); + if (Strings.hasLength(type)) { + endpoint.add(type); + } + endpoint.add("_search"); + assertEquals(endpoint.toString(), request.getEndpoint()); + assertEquals(expectedParams, request.getParameters()); + if (searchSourceBuilder == null) { + assertNull(request.getEntity()); + } else { + assertToXContentBody(searchSourceBuilder, request.getEntity()); + } + } + + public void testSearchScroll() throws 
IOException { + SearchScrollRequest searchScrollRequest = new SearchScrollRequest(); + searchScrollRequest.scrollId(randomAlphaOfLengthBetween(5, 10)); + if (randomBoolean()) { + searchScrollRequest.scroll(randomPositiveTimeValue()); + } + Request request = Request.searchScroll(searchScrollRequest); + assertEquals("GET", request.getMethod()); + assertEquals("/_search/scroll", request.getEndpoint()); + assertEquals(0, request.getParameters().size()); + assertToXContentBody(searchScrollRequest, request.getEntity()); + assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue()); + } + + public void testClearScroll() throws IOException { + ClearScrollRequest clearScrollRequest = new ClearScrollRequest(); + int numScrolls = randomIntBetween(1, 10); + for (int i = 0; i < numScrolls; i++) { + clearScrollRequest.addScrollId(randomAlphaOfLengthBetween(5, 10)); + } + Request request = Request.clearScroll(clearScrollRequest); + assertEquals("DELETE", request.getMethod()); + assertEquals("/_search/scroll", request.getEndpoint()); + assertEquals(0, request.getParameters().size()); + assertToXContentBody(clearScrollRequest, request.getEntity()); + assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue()); + } + + private static void assertToXContentBody(ToXContent expectedBody, HttpEntity actualEntity) throws IOException { + BytesReference expectedBytes = XContentHelper.toXContent(expectedBody, Request.REQUEST_BODY_CONTENT_TYPE, false); + assertEquals(XContentType.JSON.mediaTypeWithoutParameters(), actualEntity.getContentType().getValue()); + assertEquals(expectedBytes, new BytesArray(EntityUtils.toByteArray(actualEntity))); + } + + public void testParams() { + final int nbParams = randomIntBetween(0, 10); + Request.Params params = Request.Params.builder(); + Map expectedParams = new HashMap<>(); + for (int i = 0; i < nbParams; i++) { + String paramName = "p_" + i; + String paramValue = randomAlphaOfLength(5); + params.putParam(paramName, paramValue); + expectedParams.put(paramName, paramValue); + } + + Map requestParams = params.getParams(); + assertEquals(nbParams, requestParams.size()); + assertEquals(expectedParams, requestParams); + } + + public void testParamsNoDuplicates() { + Request.Params params = Request.Params.builder(); + params.putParam("test", "1"); + + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> params.putParam("test", "2")); + assertEquals("Request parameter [test] is already registered", e.getMessage()); + + Map requestParams = params.getParams(); + assertEquals(1L, requestParams.size()); + assertEquals("1", requestParams.values().iterator().next()); + } + + public void testEndpoint() { + assertEquals("/", Request.endpoint()); + assertEquals("/", Request.endpoint(Strings.EMPTY_ARRAY)); + assertEquals("/", Request.endpoint("")); + assertEquals("/a/b", Request.endpoint("a", "b")); + assertEquals("/a/b/_create", Request.endpoint("a", "b", "_create")); + assertEquals("/a/b/c/_create", Request.endpoint("a", "b", "c", "_create")); + assertEquals("/a/_create", Request.endpoint("a", null, null, "_create")); + } + + public void testCreateContentType() { + final XContentType xContentType = randomFrom(XContentType.values()); + assertEquals(xContentType.mediaTypeWithoutParameters(), Request.createContentType(xContentType).getMimeType()); + } + + public void testEnforceSameContentType() { + XContentType xContentType = 
randomFrom(XContentType.JSON, XContentType.SMILE); + IndexRequest indexRequest = new IndexRequest().source(singletonMap("field", "value"), xContentType); + assertEquals(xContentType, enforceSameContentType(indexRequest, null)); + assertEquals(xContentType, enforceSameContentType(indexRequest, xContentType)); + + XContentType bulkContentType = randomBoolean() ? xContentType : null; + + IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> + enforceSameContentType(new IndexRequest().source(singletonMap("field", "value"), XContentType.CBOR), bulkContentType)); + assertEquals("Unsupported content-type found for request with content-type [CBOR], only JSON and SMILE are supported", + exception.getMessage()); + + exception = expectThrows(IllegalArgumentException.class, () -> + enforceSameContentType(new IndexRequest().source(singletonMap("field", "value"), XContentType.YAML), bulkContentType)); + assertEquals("Unsupported content-type found for request with content-type [YAML], only JSON and SMILE are supported", + exception.getMessage()); + + XContentType requestContentType = xContentType == XContentType.JSON ? XContentType.SMILE : XContentType.JSON; + + exception = expectThrows(IllegalArgumentException.class, () -> + enforceSameContentType(new IndexRequest().source(singletonMap("field", "value"), requestContentType), xContentType)); + assertEquals("Mismatching content-type found for request with content-type [" + requestContentType + "], " + + "previous requests have content-type [" + xContentType + "]", exception.getMessage()); + } + + /** + * Randomize the {@link FetchSourceContext} request parameters. + */ + private static void randomizeFetchSourceContextParams(Consumer consumer, Map expectedParams) { + if (randomBoolean()) { + if (randomBoolean()) { + boolean fetchSource = randomBoolean(); + consumer.accept(new FetchSourceContext(fetchSource)); + if (fetchSource == false) { + expectedParams.put("_source", "false"); + } + } else { + int numIncludes = randomIntBetween(0, 5); + String[] includes = new String[numIncludes]; + StringBuilder includesParam = new StringBuilder(); + for (int i = 0; i < numIncludes; i++) { + String include = randomAlphaOfLengthBetween(3, 10); + includes[i] = include; + includesParam.append(include); + if (i < numIncludes - 1) { + includesParam.append(","); + } + } + if (numIncludes > 0) { + expectedParams.put("_source_include", includesParam.toString()); + } + int numExcludes = randomIntBetween(0, 5); + String[] excludes = new String[numExcludes]; + StringBuilder excludesParam = new StringBuilder(); + for (int i = 0; i < numExcludes; i++) { + String exclude = randomAlphaOfLengthBetween(3, 10); + excludes[i] = exclude; + excludesParam.append(exclude); + if (i < numExcludes - 1) { + excludesParam.append(","); + } + } + if (numExcludes > 0) { + expectedParams.put("_source_exclude", excludesParam.toString()); + } + consumer.accept(new FetchSourceContext(true, includes, excludes)); + } + } + } + + private static void setRandomTimeout(ReplicationRequest request, Map expectedParams) { + if (randomBoolean()) { + String timeout = randomTimeValue(); + request.timeout(timeout); + expectedParams.put("timeout", timeout); + } else { + expectedParams.put("timeout", ReplicationRequest.DEFAULT_TIMEOUT.getStringRep()); + } + } + + private static void setRandomRefreshPolicy(ReplicatedWriteRequest request, Map expectedParams) { + if (randomBoolean()) { + WriteRequest.RefreshPolicy refreshPolicy = randomFrom(WriteRequest.RefreshPolicy.values()); + 
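+ // the default refresh policy (NONE) is not expected to show up in the request parameters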
request.setRefreshPolicy(refreshPolicy); + if (refreshPolicy != WriteRequest.RefreshPolicy.NONE) { + expectedParams.put("refresh", refreshPolicy.getValue()); + } + } + } + + private static void setRandomVersion(DocWriteRequest request, Map expectedParams) { + if (randomBoolean()) { + long version = randomFrom(Versions.MATCH_ANY, Versions.MATCH_DELETED, Versions.NOT_FOUND, randomNonNegativeLong()); + request.version(version); + if (version != Versions.MATCH_ANY) { + expectedParams.put("version", Long.toString(version)); + } + } + } + + private static void setRandomVersionType(DocWriteRequest request, Map expectedParams) { + if (randomBoolean()) { + VersionType versionType = randomFrom(VersionType.values()); + request.versionType(versionType); + if (versionType != VersionType.INTERNAL) { + expectedParams.put("version_type", versionType.name().toLowerCase(Locale.ROOT)); + } + } + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientExtTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientExtTests.java new file mode 100644 index 0000000000000..8c12cbeb1e563 --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientExtTests.java @@ -0,0 +1,138 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.client; + +import org.apache.http.HttpEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.test.ESTestCase; +import org.junit.Before; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; + +import static org.hamcrest.CoreMatchers.instanceOf; +import static org.mockito.Mockito.mock; + +/** + * This test works against a {@link RestHighLevelClient} subclass that simulates how custom response sections returned by + * Elasticsearch plugins can be parsed using the high level client. 
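+ * The subclass registers additional {@link NamedXContentRegistry.Entry} instances with the client so that each custom section can be parsed by name.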
+ */ +public class RestHighLevelClientExtTests extends ESTestCase { + + private RestHighLevelClient restHighLevelClient; + + @Before + public void initClient() throws IOException { + RestClient restClient = mock(RestClient.class); + restHighLevelClient = new RestHighLevelClientExt(restClient); + } + + public void testParseEntityCustomResponseSection() throws IOException { + { + HttpEntity jsonEntity = new StringEntity("{\"custom1\":{ \"field\":\"value\"}}", ContentType.APPLICATION_JSON); + BaseCustomResponseSection customSection = restHighLevelClient.parseEntity(jsonEntity, BaseCustomResponseSection::fromXContent); + assertThat(customSection, instanceOf(CustomResponseSection1.class)); + CustomResponseSection1 customResponseSection1 = (CustomResponseSection1) customSection; + assertEquals("value", customResponseSection1.value); + } + { + HttpEntity jsonEntity = new StringEntity("{\"custom2\":{ \"array\": [\"item1\", \"item2\"]}}", ContentType.APPLICATION_JSON); + BaseCustomResponseSection customSection = restHighLevelClient.parseEntity(jsonEntity, BaseCustomResponseSection::fromXContent); + assertThat(customSection, instanceOf(CustomResponseSection2.class)); + CustomResponseSection2 customResponseSection2 = (CustomResponseSection2) customSection; + assertArrayEquals(new String[]{"item1", "item2"}, customResponseSection2.values); + } + } + + private static class RestHighLevelClientExt extends RestHighLevelClient { + + private RestHighLevelClientExt(RestClient restClient) { + super(restClient, getNamedXContentsExt()); + } + + private static List getNamedXContentsExt() { + List entries = new ArrayList<>(); + entries.add(new NamedXContentRegistry.Entry(BaseCustomResponseSection.class, new ParseField("custom1"), + CustomResponseSection1::fromXContent)); + entries.add(new NamedXContentRegistry.Entry(BaseCustomResponseSection.class, new ParseField("custom2"), + CustomResponseSection2::fromXContent)); + return entries; + } + } + + private abstract static class BaseCustomResponseSection { + + static BaseCustomResponseSection fromXContent(XContentParser parser) throws IOException { + assertEquals(XContentParser.Token.START_OBJECT, parser.nextToken()); + assertEquals(XContentParser.Token.FIELD_NAME, parser.nextToken()); + BaseCustomResponseSection custom = parser.namedObject(BaseCustomResponseSection.class, parser.currentName(), null); + assertEquals(XContentParser.Token.END_OBJECT, parser.nextToken()); + return custom; + } + } + + private static class CustomResponseSection1 extends BaseCustomResponseSection { + + private final String value; + + private CustomResponseSection1(String value) { + this.value = value; + } + + static CustomResponseSection1 fromXContent(XContentParser parser) throws IOException { + assertEquals(XContentParser.Token.START_OBJECT, parser.nextToken()); + assertEquals(XContentParser.Token.FIELD_NAME, parser.nextToken()); + assertEquals("field", parser.currentName()); + assertEquals(XContentParser.Token.VALUE_STRING, parser.nextToken()); + CustomResponseSection1 responseSection1 = new CustomResponseSection1(parser.text()); + assertEquals(XContentParser.Token.END_OBJECT, parser.nextToken()); + return responseSection1; + } + } + + private static class CustomResponseSection2 extends BaseCustomResponseSection { + + private final String[] values; + + private CustomResponseSection2(String[] values) { + this.values = values; + } + + static CustomResponseSection2 fromXContent(XContentParser parser) throws IOException { + assertEquals(XContentParser.Token.START_OBJECT, 
parser.nextToken()); + assertEquals(XContentParser.Token.FIELD_NAME, parser.nextToken()); + assertEquals("array", parser.currentName()); + assertEquals(XContentParser.Token.START_ARRAY, parser.nextToken()); + List values = new ArrayList<>(); + while(parser.nextToken().isValue()) { + values.add(parser.text()); + } + assertEquals(XContentParser.Token.END_ARRAY, parser.currentToken()); + CustomResponseSection2 responseSection2 = new CustomResponseSection2(values.toArray(new String[values.size()])); + assertEquals(XContentParser.Token.END_OBJECT, parser.nextToken()); + return responseSection2; + } + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientTests.java new file mode 100644 index 0000000000000..a6d015afca713 --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientTests.java @@ -0,0 +1,691 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client; + +import com.fasterxml.jackson.core.JsonParseException; +import org.apache.http.Header; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHost; +import org.apache.http.HttpResponse; +import org.apache.http.ProtocolVersion; +import org.apache.http.RequestLine; +import org.apache.http.StatusLine; +import org.apache.http.entity.ByteArrayEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.message.BasicHttpResponse; +import org.apache.http.message.BasicRequestLine; +import org.apache.http.message.BasicStatusLine; +import org.apache.http.nio.entity.NStringEntity; + +import org.elasticsearch.Build; +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.Version; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.ActionRequest; +import org.elasticsearch.action.ActionRequestValidationException; +import org.elasticsearch.action.main.MainRequest; +import org.elasticsearch.action.main.MainResponse; +import org.elasticsearch.action.search.ClearScrollRequest; +import org.elasticsearch.action.search.ClearScrollResponse; +import org.elasticsearch.action.search.SearchResponse; +import org.elasticsearch.action.search.SearchResponseSections; +import org.elasticsearch.action.search.SearchScrollRequest; +import org.elasticsearch.action.search.ShardSearchFailure; +import org.elasticsearch.cluster.ClusterName; +import org.elasticsearch.common.CheckedFunction; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.cbor.CborXContent; +import org.elasticsearch.common.xcontent.smile.SmileXContent; +import org.elasticsearch.join.aggregations.ChildrenAggregationBuilder; +import org.elasticsearch.rest.RestStatus; +import org.elasticsearch.search.SearchHits; +import org.elasticsearch.search.aggregations.Aggregation; +import org.elasticsearch.search.aggregations.InternalAggregations; +import org.elasticsearch.search.aggregations.matrix.stats.MatrixStatsAggregationBuilder; +import org.elasticsearch.search.suggest.Suggest; +import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.InternalAggregationTestCase; +import org.junit.Before; +import org.mockito.ArgumentMatcher; +import org.mockito.internal.matchers.ArrayEquals; +import org.mockito.internal.matchers.VarargMatcher; + +import java.io.IOException; +import java.net.SocketTimeoutException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicReference; + +import static org.elasticsearch.client.RestClientTestUtil.randomHeaders; +import static org.elasticsearch.common.xcontent.XContentHelper.toXContent; +import static org.hamcrest.CoreMatchers.instanceOf; +import static org.mockito.Matchers.anyMapOf; +import static org.mockito.Matchers.anyObject; +import static org.mockito.Matchers.anyString; +import static org.mockito.Matchers.anyVararg; +import static org.mockito.Matchers.argThat; +import static org.mockito.Matchers.eq; +import static org.mockito.Matchers.isNotNull; +import static org.mockito.Matchers.isNull; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verify; 
+import static org.mockito.Mockito.when; + +public class RestHighLevelClientTests extends ESTestCase { + + private static final ProtocolVersion HTTP_PROTOCOL = new ProtocolVersion("http", 1, 1); + private static final RequestLine REQUEST_LINE = new BasicRequestLine("GET", "/", HTTP_PROTOCOL); + + private RestClient restClient; + private RestHighLevelClient restHighLevelClient; + + @Before + public void initClient() { + restClient = mock(RestClient.class); + restHighLevelClient = new RestHighLevelClient(restClient); + } + + public void testPingSuccessful() throws IOException { + Header[] headers = randomHeaders(random(), "Header"); + Response response = mock(Response.class); + when(response.getStatusLine()).thenReturn(newStatusLine(RestStatus.OK)); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenReturn(response); + assertTrue(restHighLevelClient.ping(headers)); + verify(restClient).performRequest(eq("HEAD"), eq("/"), eq(Collections.emptyMap()), + isNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers))); + } + + public void testPing404NotFound() throws IOException { + Header[] headers = randomHeaders(random(), "Header"); + Response response = mock(Response.class); + when(response.getStatusLine()).thenReturn(newStatusLine(RestStatus.NOT_FOUND)); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenReturn(response); + assertFalse(restHighLevelClient.ping(headers)); + verify(restClient).performRequest(eq("HEAD"), eq("/"), eq(Collections.emptyMap()), + isNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers))); + } + + public void testPingSocketTimeout() throws IOException { + Header[] headers = randomHeaders(random(), "Header"); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenThrow(new SocketTimeoutException()); + expectThrows(SocketTimeoutException.class, () -> restHighLevelClient.ping(headers)); + verify(restClient).performRequest(eq("HEAD"), eq("/"), eq(Collections.emptyMap()), + isNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers))); + } + + public void testInfo() throws IOException { + Header[] headers = randomHeaders(random(), "Header"); + MainResponse testInfo = new MainResponse("nodeName", Version.CURRENT, new ClusterName("clusterName"), "clusterUuid", + Build.CURRENT, true); + mockResponse(testInfo); + MainResponse receivedInfo = restHighLevelClient.info(headers); + assertEquals(testInfo, receivedInfo); + verify(restClient).performRequest(eq("GET"), eq("/"), eq(Collections.emptyMap()), + isNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers))); + } + + public void testSearchScroll() throws IOException { + Header[] headers = randomHeaders(random(), "Header"); + SearchResponse mockSearchResponse = new SearchResponse(new SearchResponseSections(SearchHits.empty(), InternalAggregations.EMPTY, + null, false, false, null, 1), randomAlphaOfLengthBetween(5, 10), 5, 5, 0, 100, new ShardSearchFailure[0]); + mockResponse(mockSearchResponse); + SearchResponse searchResponse = restHighLevelClient.searchScroll(new SearchScrollRequest(randomAlphaOfLengthBetween(5, 10)), + headers); + assertEquals(mockSearchResponse.getScrollId(), searchResponse.getScrollId()); + assertEquals(0, searchResponse.getHits().totalHits); + assertEquals(5, searchResponse.getTotalShards()); + assertEquals(5, 
searchResponse.getSuccessfulShards()); + assertEquals(100, searchResponse.getTook().getMillis()); + verify(restClient).performRequest(eq("GET"), eq("/_search/scroll"), eq(Collections.emptyMap()), + isNotNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers))); + } + + public void testClearScroll() throws IOException { + Header[] headers = randomHeaders(random(), "Header"); + ClearScrollResponse mockClearScrollResponse = new ClearScrollResponse(randomBoolean(), randomIntBetween(0, Integer.MAX_VALUE)); + mockResponse(mockClearScrollResponse); + ClearScrollRequest clearScrollRequest = new ClearScrollRequest(); + clearScrollRequest.addScrollId(randomAlphaOfLengthBetween(5, 10)); + ClearScrollResponse clearScrollResponse = restHighLevelClient.clearScroll(clearScrollRequest, headers); + assertEquals(mockClearScrollResponse.isSucceeded(), clearScrollResponse.isSucceeded()); + assertEquals(mockClearScrollResponse.getNumFreed(), clearScrollResponse.getNumFreed()); + verify(restClient).performRequest(eq("DELETE"), eq("/_search/scroll"), eq(Collections.emptyMap()), + isNotNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers))); + } + + private void mockResponse(ToXContent toXContent) throws IOException { + Response response = mock(Response.class); + ContentType contentType = ContentType.parse(Request.REQUEST_BODY_CONTENT_TYPE.mediaType()); + String requestBody = toXContent(toXContent, Request.REQUEST_BODY_CONTENT_TYPE, false).utf8ToString(); + when(response.getEntity()).thenReturn(new NStringEntity(requestBody, contentType)); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenReturn(response); + } + + public void testRequestValidation() { + ActionRequestValidationException validationException = new ActionRequestValidationException(); + validationException.addValidationError("validation error"); + ActionRequest request = new ActionRequest() { + @Override + public ActionRequestValidationException validate() { + return validationException; + } + }; + + { + ActionRequestValidationException actualException = expectThrows(ActionRequestValidationException.class, + () -> restHighLevelClient.performRequest(request, null, null, null)); + assertSame(validationException, actualException); + } + { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + restHighLevelClient.performRequestAsync(request, null, null, trackingActionListener, null); + assertSame(validationException, trackingActionListener.exception.get()); + } + } + + public void testParseEntity() throws IOException { + { + IllegalStateException ise = expectThrows(IllegalStateException.class, () -> restHighLevelClient.parseEntity(null, null)); + assertEquals("Response body expected but not returned", ise.getMessage()); + } + { + IllegalStateException ise = expectThrows(IllegalStateException.class, + () -> restHighLevelClient.parseEntity(new StringEntity("", (ContentType) null), null)); + assertEquals("Elasticsearch didn't return the [Content-Type] header, unable to parse response body", ise.getMessage()); + } + { + StringEntity entity = new StringEntity("", ContentType.APPLICATION_SVG_XML); + IllegalStateException ise = expectThrows(IllegalStateException.class, () -> restHighLevelClient.parseEntity(entity, null)); + assertEquals("Unsupported Content-Type: " + entity.getContentType().getValue(), ise.getMessage()); + } + { + CheckedFunction entityParser = parser -> { + assertEquals(XContentParser.Token.START_OBJECT, 
parser.nextToken()); + assertEquals(XContentParser.Token.FIELD_NAME, parser.nextToken()); + assertTrue(parser.nextToken().isValue()); + String value = parser.text(); + assertEquals(XContentParser.Token.END_OBJECT, parser.nextToken()); + return value; + }; + HttpEntity jsonEntity = new StringEntity("{\"field\":\"value\"}", ContentType.APPLICATION_JSON); + assertEquals("value", restHighLevelClient.parseEntity(jsonEntity, entityParser)); + HttpEntity yamlEntity = new StringEntity("---\nfield: value\n", ContentType.create("application/yaml")); + assertEquals("value", restHighLevelClient.parseEntity(yamlEntity, entityParser)); + HttpEntity smileEntity = createBinaryEntity(SmileXContent.contentBuilder(), ContentType.create("application/smile")); + assertEquals("value", restHighLevelClient.parseEntity(smileEntity, entityParser)); + HttpEntity cborEntity = createBinaryEntity(CborXContent.contentBuilder(), ContentType.create("application/cbor")); + assertEquals("value", restHighLevelClient.parseEntity(cborEntity, entityParser)); + } + } + + private static HttpEntity createBinaryEntity(XContentBuilder xContentBuilder, ContentType contentType) throws IOException { + try (XContentBuilder builder = xContentBuilder) { + builder.startObject(); + builder.field("field", "value"); + builder.endObject(); + return new ByteArrayEntity(builder.bytes().toBytesRef().bytes, contentType); + } + } + + public void testConvertExistsResponse() { + RestStatus restStatus = randomBoolean() ? RestStatus.OK : randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + boolean result = RestHighLevelClient.convertExistsResponse(response); + assertEquals(restStatus == RestStatus.OK, result); + } + + public void testParseResponseException() throws IOException { + { + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + ElasticsearchException elasticsearchException = restHighLevelClient.parseResponseException(responseException); + assertEquals(responseException.getMessage(), elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + } + { + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"error\":\"test error message\",\"status\":" + restStatus.getStatus() + "}", + ContentType.APPLICATION_JSON)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + ElasticsearchException elasticsearchException = restHighLevelClient.parseResponseException(responseException); + assertEquals("Elasticsearch exception [type=exception, reason=test error message]", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getSuppressed()[0]); + } + { + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + 
httpResponse.setEntity(new StringEntity("{\"error\":", ContentType.APPLICATION_JSON)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + ElasticsearchException elasticsearchException = restHighLevelClient.parseResponseException(responseException); + assertEquals("Unable to parse response body", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + assertThat(elasticsearchException.getSuppressed()[0], instanceOf(IOException.class)); + } + { + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"status\":" + restStatus.getStatus() + "}", ContentType.APPLICATION_JSON)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + ElasticsearchException elasticsearchException = restHighLevelClient.parseResponseException(responseException); + assertEquals("Unable to parse response body", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + assertThat(elasticsearchException.getSuppressed()[0], instanceOf(IllegalStateException.class)); + } + } + + public void testPerformRequestOnSuccess() throws IOException { + MainRequest mainRequest = new MainRequest(); + CheckedFunction requestConverter = request -> + new Request("GET", "/", Collections.emptyMap(), null); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenReturn(mockResponse); + { + Integer result = restHighLevelClient.performRequest(mainRequest, requestConverter, + response -> response.getStatusLine().getStatusCode(), Collections.emptySet()); + assertEquals(restStatus.getStatus(), result.intValue()); + } + { + IOException ioe = expectThrows(IOException.class, () -> restHighLevelClient.performRequest(mainRequest, + requestConverter, response -> {throw new IllegalStateException();}, Collections.emptySet())); + assertEquals("Unable to parse response body for Response{requestLine=GET / http/1.1, host=http://localhost:9200, " + + "response=http/1.1 " + restStatus.getStatus() + " " + restStatus.name() + "}", ioe.getMessage()); + } + } + + public void testPerformRequestOnResponseExceptionWithoutEntity() throws IOException { + MainRequest mainRequest = new MainRequest(); + CheckedFunction requestConverter = request -> + new Request("GET", "/", Collections.emptyMap(), null); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(mockResponse); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenThrow(responseException); + 
ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class, + () -> restHighLevelClient.performRequest(mainRequest, requestConverter, + response -> response.getStatusLine().getStatusCode(), Collections.emptySet())); + assertEquals(responseException.getMessage(), elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + } + + public void testPerformRequestOnResponseExceptionWithEntity() throws IOException { + MainRequest mainRequest = new MainRequest(); + CheckedFunction requestConverter = request -> + new Request("GET", "/", Collections.emptyMap(), null); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"error\":\"test error message\",\"status\":" + restStatus.getStatus() + "}", + ContentType.APPLICATION_JSON)); + Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(mockResponse); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenThrow(responseException); + ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class, + () -> restHighLevelClient.performRequest(mainRequest, requestConverter, + response -> response.getStatusLine().getStatusCode(), Collections.emptySet())); + assertEquals("Elasticsearch exception [type=exception, reason=test error message]", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getSuppressed()[0]); + } + + public void testPerformRequestOnResponseExceptionWithBrokenEntity() throws IOException { + MainRequest mainRequest = new MainRequest(); + CheckedFunction requestConverter = request -> + new Request("GET", "/", Collections.emptyMap(), null); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"error\":", ContentType.APPLICATION_JSON)); + Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(mockResponse); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenThrow(responseException); + ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class, + () -> restHighLevelClient.performRequest(mainRequest, requestConverter, + response -> response.getStatusLine().getStatusCode(), Collections.emptySet())); + assertEquals("Unable to parse response body", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + assertThat(elasticsearchException.getSuppressed()[0], instanceOf(JsonParseException.class)); + } + + public void testPerformRequestOnResponseExceptionWithBrokenEntity2() throws IOException { + MainRequest mainRequest = new MainRequest(); + CheckedFunction requestConverter = request -> + new Request("GET", "/", Collections.emptyMap(), null); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = 
new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"status\":" + restStatus.getStatus() + "}", ContentType.APPLICATION_JSON)); + Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(mockResponse); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenThrow(responseException); + ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class, + () -> restHighLevelClient.performRequest(mainRequest, requestConverter, + response -> response.getStatusLine().getStatusCode(), Collections.emptySet())); + assertEquals("Unable to parse response body", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + assertThat(elasticsearchException.getSuppressed()[0], instanceOf(IllegalStateException.class)); + } + + public void testPerformRequestOnResponseExceptionWithIgnores() throws IOException { + MainRequest mainRequest = new MainRequest(); + CheckedFunction requestConverter = request -> + new Request("GET", "/", Collections.emptyMap(), null); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND)); + Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(mockResponse); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenThrow(responseException); + //although we got an exception, we turn it into a successful response because the status code was provided among ignores + assertEquals(Integer.valueOf(404), restHighLevelClient.performRequest(mainRequest, requestConverter, + response -> response.getStatusLine().getStatusCode(), Collections.singleton(404))); + } + + public void testPerformRequestOnResponseExceptionWithIgnoresErrorNoBody() throws IOException { + MainRequest mainRequest = new MainRequest(); + CheckedFunction requestConverter = request -> + new Request("GET", "/", Collections.emptyMap(), null); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND)); + Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(mockResponse); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenThrow(responseException); + ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class, + () -> restHighLevelClient.performRequest(mainRequest, requestConverter, + response -> {throw new IllegalStateException();}, Collections.singleton(404))); + assertEquals(RestStatus.NOT_FOUND, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + assertEquals(responseException.getMessage(), elasticsearchException.getMessage()); + } + + public void testPerformRequestOnResponseExceptionWithIgnoresErrorValidBody() throws IOException { + MainRequest mainRequest = new MainRequest(); + CheckedFunction requestConverter = request -> + new Request("GET", "/", Collections.emptyMap(), null); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND)); + 
httpResponse.setEntity(new StringEntity("{\"error\":\"test error message\",\"status\":404}", + ContentType.APPLICATION_JSON)); + Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(mockResponse); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenThrow(responseException); + ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class, + () -> restHighLevelClient.performRequest(mainRequest, requestConverter, + response -> {throw new IllegalStateException();}, Collections.singleton(404))); + assertEquals(RestStatus.NOT_FOUND, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getSuppressed()[0]); + assertEquals("Elasticsearch exception [type=exception, reason=test error message]", elasticsearchException.getMessage()); + } + + public void testWrapResponseListenerOnSuccess() { + { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet()); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + responseListener.onSuccess(new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse)); + assertNull(trackingActionListener.exception.get()); + assertEquals(restStatus.getStatus(), trackingActionListener.statusCode.get()); + } + { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> {throw new IllegalStateException();}, trackingActionListener, Collections.emptySet()); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + responseListener.onSuccess(new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse)); + assertThat(trackingActionListener.exception.get(), instanceOf(IOException.class)); + IOException ioe = (IOException) trackingActionListener.exception.get(); + assertEquals("Unable to parse response body for Response{requestLine=GET / http/1.1, host=http://localhost:9200, " + + "response=http/1.1 " + restStatus.getStatus() + " " + restStatus.name() + "}", ioe.getMessage()); + assertThat(ioe.getCause(), instanceOf(IllegalStateException.class)); + } + } + + public void testWrapResponseListenerOnException() { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet()); + IllegalStateException exception = new IllegalStateException(); + responseListener.onFailure(exception); + assertSame(exception, trackingActionListener.exception.get()); + } + + public void testWrapResponseListenerOnResponseExceptionWithoutEntity() throws IOException { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet()); + RestStatus restStatus = 
randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + responseListener.onFailure(responseException); + assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class)); + ElasticsearchException elasticsearchException = (ElasticsearchException) trackingActionListener.exception.get(); + assertEquals(responseException.getMessage(), elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + } + + public void testWrapResponseListenerOnResponseExceptionWithEntity() throws IOException { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet()); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"error\":\"test error message\",\"status\":" + restStatus.getStatus() + "}", + ContentType.APPLICATION_JSON)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + responseListener.onFailure(responseException); + assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class)); + ElasticsearchException elasticsearchException = (ElasticsearchException)trackingActionListener.exception.get(); + assertEquals("Elasticsearch exception [type=exception, reason=test error message]", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getSuppressed()[0]); + } + + public void testWrapResponseListenerOnResponseExceptionWithBrokenEntity() throws IOException { + { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet()); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"error\":", ContentType.APPLICATION_JSON)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + responseListener.onFailure(responseException); + assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class)); + ElasticsearchException elasticsearchException = (ElasticsearchException)trackingActionListener.exception.get(); + assertEquals("Unable to parse response body", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + assertThat(elasticsearchException.getSuppressed()[0], instanceOf(JsonParseException.class)); + } + { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + ResponseListener 
responseListener = restHighLevelClient.wrapResponseListener( + response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet()); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"status\":" + restStatus.getStatus() + "}", ContentType.APPLICATION_JSON)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + responseListener.onFailure(responseException); + assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class)); + ElasticsearchException elasticsearchException = (ElasticsearchException)trackingActionListener.exception.get(); + assertEquals("Unable to parse response body", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + assertThat(elasticsearchException.getSuppressed()[0], instanceOf(IllegalStateException.class)); + } + } + + public void testWrapResponseListenerOnResponseExceptionWithIgnores() throws IOException { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.singleton(404)); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + responseListener.onFailure(responseException); + //although we got an exception, we turn it into a successful response because the status code was provided among ignores + assertNull(trackingActionListener.exception.get()); + assertEquals(404, trackingActionListener.statusCode.get()); + } + + public void testWrapResponseListenerOnResponseExceptionWithIgnoresErrorNoBody() throws IOException { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + //response parsing throws exception while handling ignores. same as when GetResponse#fromXContent throws error when trying + //to parse a 404 response which contains an error rather than a valid document not found response. 
+ ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> { throw new IllegalStateException(); }, trackingActionListener, Collections.singleton(404)); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + responseListener.onFailure(responseException); + assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class)); + ElasticsearchException elasticsearchException = (ElasticsearchException)trackingActionListener.exception.get(); + assertEquals(RestStatus.NOT_FOUND, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + assertEquals(responseException.getMessage(), elasticsearchException.getMessage()); + } + + public void testWrapResponseListenerOnResponseExceptionWithIgnoresErrorValidBody() throws IOException { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + //response parsing throws exception while handling ignores. same as when GetResponse#fromXContent throws error when trying + //to parse a 404 response which contains an error rather than a valid document not found response. + ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> { throw new IllegalStateException(); }, trackingActionListener, Collections.singleton(404)); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND)); + httpResponse.setEntity(new StringEntity("{\"error\":\"test error message\",\"status\":404}", + ContentType.APPLICATION_JSON)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + responseListener.onFailure(responseException); + assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class)); + ElasticsearchException elasticsearchException = (ElasticsearchException)trackingActionListener.exception.get(); + assertEquals(RestStatus.NOT_FOUND, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getSuppressed()[0]); + assertEquals("Elasticsearch exception [type=exception, reason=test error message]", elasticsearchException.getMessage()); + } + + public void testDefaultNamedXContents() { + List namedXContents = RestHighLevelClient.getDefaultNamedXContents(); + int expectedInternalAggregations = InternalAggregationTestCase.getDefaultNamedXContents().size(); + int expectedSuggestions = 3; + assertEquals(expectedInternalAggregations + expectedSuggestions, namedXContents.size()); + Map, Integer> categories = new HashMap<>(); + for (NamedXContentRegistry.Entry namedXContent : namedXContents) { + Integer counter = categories.putIfAbsent(namedXContent.categoryClass, 1); + if (counter != null) { + categories.put(namedXContent.categoryClass, counter + 1); + } + } + assertEquals(2, categories.size()); + assertEquals(expectedInternalAggregations, categories.get(Aggregation.class).intValue()); + assertEquals(expectedSuggestions, categories.get(Suggest.Suggestion.class).intValue()); + } + + public void testProvidedNamedXContents() { + List namedXContents = RestHighLevelClient.getProvidedNamedXContents(); + assertEquals(2, namedXContents.size()); + Map, Integer> categories = new HashMap<>(); + List names = new ArrayList<>(); + for 
(NamedXContentRegistry.Entry namedXContent : namedXContents) { + names.add(namedXContent.name.getPreferredName()); + Integer counter = categories.putIfAbsent(namedXContent.categoryClass, 1); + if (counter != null) { + categories.put(namedXContent.categoryClass, counter + 1); + } + } + assertEquals(1, categories.size()); + assertEquals(Integer.valueOf(2), categories.get(Aggregation.class)); + assertTrue(names.contains(ChildrenAggregationBuilder.NAME)); + assertTrue(names.contains(MatrixStatsAggregationBuilder.NAME)); + } + + private static class TrackingActionListener implements ActionListener { + private final AtomicInteger statusCode = new AtomicInteger(-1); + private final AtomicReference exception = new AtomicReference<>(); + + @Override + public void onResponse(Integer statusCode) { + assertTrue(this.statusCode.compareAndSet(-1, statusCode)); + } + + @Override + public void onFailure(Exception e) { + assertTrue(exception.compareAndSet(null, e)); + } + } + + private static class HeadersVarargMatcher extends ArgumentMatcher implements VarargMatcher { + private Header[] expectedHeaders; + + HeadersVarargMatcher(Header... expectedHeaders) { + this.expectedHeaders = expectedHeaders; + } + + @Override + public boolean matches(Object varargArgument) { + if (varargArgument instanceof Header[]) { + Header[] actualHeaders = (Header[]) varargArgument; + return new ArrayEquals(expectedHeaders).matches(actualHeaders); + } + return false; + } + } + + private static StatusLine newStatusLine(RestStatus restStatus) { + return new BasicStatusLine(HTTP_PROTOCOL, restStatus.getStatus(), restStatus.name()); + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/SearchIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/SearchIT.java new file mode 100644 index 0000000000000..6c53d191dfc98 --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/SearchIT.java @@ -0,0 +1,464 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client; + +import org.apache.http.HttpEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.nio.entity.NStringEntity; +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.ElasticsearchStatusException; +import org.elasticsearch.action.search.ClearScrollRequest; +import org.elasticsearch.action.search.ClearScrollResponse; +import org.elasticsearch.action.search.SearchRequest; +import org.elasticsearch.action.search.SearchResponse; +import org.elasticsearch.action.search.SearchScrollRequest; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.MatchQueryBuilder; +import org.elasticsearch.join.aggregations.Children; +import org.elasticsearch.join.aggregations.ChildrenAggregationBuilder; +import org.elasticsearch.rest.RestStatus; +import org.elasticsearch.search.SearchHit; +import org.elasticsearch.search.aggregations.bucket.range.Range; +import org.elasticsearch.search.aggregations.bucket.range.RangeAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.terms.Terms; +import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; +import org.elasticsearch.search.aggregations.matrix.stats.MatrixStats; +import org.elasticsearch.search.aggregations.matrix.stats.MatrixStatsAggregationBuilder; +import org.elasticsearch.search.aggregations.support.ValueType; +import org.elasticsearch.search.builder.SearchSourceBuilder; +import org.elasticsearch.search.sort.SortOrder; +import org.elasticsearch.search.suggest.Suggest; +import org.elasticsearch.search.suggest.SuggestBuilder; +import org.elasticsearch.search.suggest.phrase.PhraseSuggestionBuilder; +import org.junit.Before; + +import java.io.IOException; +import java.util.Arrays; +import java.util.Collections; + +import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; +import static org.hamcrest.Matchers.both; +import static org.hamcrest.Matchers.containsString; +import static org.hamcrest.Matchers.either; +import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.greaterThan; +import static org.hamcrest.Matchers.greaterThanOrEqualTo; +import static org.hamcrest.Matchers.instanceOf; +import static org.hamcrest.Matchers.lessThan; + +public class SearchIT extends ESRestHighLevelClientTestCase { + + @Before + public void indexDocuments() throws IOException { + StringEntity doc1 = new StringEntity("{\"type\":\"type1\", \"num\":10, \"num2\":50}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "/index/type/1", Collections.emptyMap(), doc1); + StringEntity doc2 = new StringEntity("{\"type\":\"type1\", \"num\":20, \"num2\":40}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "/index/type/2", Collections.emptyMap(), doc2); + StringEntity doc3 = new StringEntity("{\"type\":\"type1\", \"num\":50, \"num2\":35}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "/index/type/3", Collections.emptyMap(), doc3); + StringEntity doc4 = new StringEntity("{\"type\":\"type2\", \"num\":100, \"num2\":10}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "/index/type/4", Collections.emptyMap(), doc4); + StringEntity doc5 = new StringEntity("{\"type\":\"type2\", \"num\":100, \"num2\":10}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "/index/type/5", Collections.emptyMap(), doc5); + 
client().performRequest("POST", "/index/_refresh"); + } + + public void testSearchNoQuery() throws IOException { + SearchRequest searchRequest = new SearchRequest(); + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + assertSearchHeader(searchResponse); + assertNull(searchResponse.getAggregations()); + assertNull(searchResponse.getSuggest()); + assertEquals(Collections.emptyMap(), searchResponse.getProfileResults()); + assertEquals(5, searchResponse.getHits().totalHits); + assertEquals(5, searchResponse.getHits().getHits().length); + for (SearchHit searchHit : searchResponse.getHits().getHits()) { + assertEquals("index", searchHit.getIndex()); + assertEquals("type", searchHit.getType()); + assertThat(Integer.valueOf(searchHit.getId()), both(greaterThan(0)).and(lessThan(6))); + assertEquals(1.0f, searchHit.getScore(), 0); + assertEquals(-1L, searchHit.getVersion()); + assertNotNull(searchHit.getSourceAsMap()); + assertEquals(3, searchHit.getSourceAsMap().size()); + assertTrue(searchHit.getSourceAsMap().containsKey("type")); + assertTrue(searchHit.getSourceAsMap().containsKey("num")); + assertTrue(searchHit.getSourceAsMap().containsKey("num2")); + } + } + + public void testSearchMatchQuery() throws IOException { + SearchRequest searchRequest = new SearchRequest(); + searchRequest.source(new SearchSourceBuilder().query(new MatchQueryBuilder("num", 10))); + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + assertSearchHeader(searchResponse); + assertNull(searchResponse.getAggregations()); + assertNull(searchResponse.getSuggest()); + assertEquals(Collections.emptyMap(), searchResponse.getProfileResults()); + assertEquals(1, searchResponse.getHits().totalHits); + assertEquals(1, searchResponse.getHits().getHits().length); + assertThat(searchResponse.getHits().getMaxScore(), greaterThan(0f)); + SearchHit searchHit = searchResponse.getHits().getHits()[0]; + assertEquals("index", searchHit.getIndex()); + assertEquals("type", searchHit.getType()); + assertEquals("1", searchHit.getId()); + assertThat(searchHit.getScore(), greaterThan(0f)); + assertEquals(-1L, searchHit.getVersion()); + assertNotNull(searchHit.getSourceAsMap()); + assertEquals(3, searchHit.getSourceAsMap().size()); + assertEquals("type1", searchHit.getSourceAsMap().get("type")); + assertEquals(50, searchHit.getSourceAsMap().get("num2")); + } + + public void testSearchWithTermsAgg() throws IOException { + SearchRequest searchRequest = new SearchRequest(); + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + searchSourceBuilder.aggregation(new TermsAggregationBuilder("agg1", ValueType.STRING).field("type.keyword")); + searchSourceBuilder.size(0); + searchRequest.source(searchSourceBuilder); + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + assertSearchHeader(searchResponse); + assertNull(searchResponse.getSuggest()); + assertEquals(Collections.emptyMap(), searchResponse.getProfileResults()); + assertEquals(0, searchResponse.getHits().getHits().length); + assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f); + Terms termsAgg = searchResponse.getAggregations().get("agg1"); + assertEquals("agg1", termsAgg.getName()); + assertEquals(2, termsAgg.getBuckets().size()); + Terms.Bucket type1 = termsAgg.getBucketByKey("type1"); + assertEquals(3, type1.getDocCount()); + assertEquals(0, 
type1.getAggregations().asList().size()); + Terms.Bucket type2 = termsAgg.getBucketByKey("type2"); + assertEquals(2, type2.getDocCount()); + assertEquals(0, type2.getAggregations().asList().size()); + } + + public void testSearchWithRangeAgg() throws IOException { + { + SearchRequest searchRequest = new SearchRequest(); + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + searchSourceBuilder.aggregation(new RangeAggregationBuilder("agg1").field("num")); + searchSourceBuilder.size(0); + searchRequest.source(searchSourceBuilder); + + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, + () -> execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync)); + assertEquals(RestStatus.BAD_REQUEST, exception.status()); + } + + SearchRequest searchRequest = new SearchRequest(); + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + searchSourceBuilder.aggregation(new RangeAggregationBuilder("agg1").field("num") + .addRange("first", 0, 30).addRange("second", 31, 200)); + searchSourceBuilder.size(0); + searchRequest.source(searchSourceBuilder); + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + assertSearchHeader(searchResponse); + assertNull(searchResponse.getSuggest()); + assertEquals(Collections.emptyMap(), searchResponse.getProfileResults()); + assertEquals(5, searchResponse.getHits().totalHits); + assertEquals(0, searchResponse.getHits().getHits().length); + assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f); + Range rangeAgg = searchResponse.getAggregations().get("agg1"); + assertEquals("agg1", rangeAgg.getName()); + assertEquals(2, rangeAgg.getBuckets().size()); + { + Range.Bucket bucket = rangeAgg.getBuckets().get(0); + assertEquals("first", bucket.getKeyAsString()); + assertEquals(2, bucket.getDocCount()); + } + { + Range.Bucket bucket = rangeAgg.getBuckets().get(1); + assertEquals("second", bucket.getKeyAsString()); + assertEquals(3, bucket.getDocCount()); + } + } + + public void testSearchWithTermsAndRangeAgg() throws IOException { + SearchRequest searchRequest = new SearchRequest(); + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + TermsAggregationBuilder agg = new TermsAggregationBuilder("agg1", ValueType.STRING).field("type.keyword"); + agg.subAggregation(new RangeAggregationBuilder("subagg").field("num") + .addRange("first", 0, 30).addRange("second", 31, 200)); + searchSourceBuilder.aggregation(agg); + searchSourceBuilder.size(0); + searchRequest.source(searchSourceBuilder); + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + assertSearchHeader(searchResponse); + assertNull(searchResponse.getSuggest()); + assertEquals(Collections.emptyMap(), searchResponse.getProfileResults()); + assertEquals(0, searchResponse.getHits().getHits().length); + assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f); + Terms termsAgg = searchResponse.getAggregations().get("agg1"); + assertEquals("agg1", termsAgg.getName()); + assertEquals(2, termsAgg.getBuckets().size()); + Terms.Bucket type1 = termsAgg.getBucketByKey("type1"); + assertEquals(3, type1.getDocCount()); + assertEquals(1, type1.getAggregations().asList().size()); + { + Range rangeAgg = type1.getAggregations().get("subagg"); + assertEquals(2, rangeAgg.getBuckets().size()); + { + Range.Bucket bucket = rangeAgg.getBuckets().get(0); + assertEquals("first", bucket.getKeyAsString()); 
+ assertEquals(2, bucket.getDocCount()); + } + { + Range.Bucket bucket = rangeAgg.getBuckets().get(1); + assertEquals("second", bucket.getKeyAsString()); + assertEquals(1, bucket.getDocCount()); + } + } + Terms.Bucket type2 = termsAgg.getBucketByKey("type2"); + assertEquals(2, type2.getDocCount()); + assertEquals(1, type2.getAggregations().asList().size()); + { + Range rangeAgg = type2.getAggregations().get("subagg"); + assertEquals(2, rangeAgg.getBuckets().size()); + { + Range.Bucket bucket = rangeAgg.getBuckets().get(0); + assertEquals("first", bucket.getKeyAsString()); + assertEquals(0, bucket.getDocCount()); + } + { + Range.Bucket bucket = rangeAgg.getBuckets().get(1); + assertEquals("second", bucket.getKeyAsString()); + assertEquals(2, bucket.getDocCount()); + } + } + } + + public void testSearchWithMatrixStats() throws IOException { + SearchRequest searchRequest = new SearchRequest(); + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + searchSourceBuilder.aggregation(new MatrixStatsAggregationBuilder("agg1").fields(Arrays.asList("num", "num2"))); + searchSourceBuilder.size(0); + searchRequest.source(searchSourceBuilder); + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + assertSearchHeader(searchResponse); + assertNull(searchResponse.getSuggest()); + assertEquals(Collections.emptyMap(), searchResponse.getProfileResults()); + assertEquals(5, searchResponse.getHits().totalHits); + assertEquals(0, searchResponse.getHits().getHits().length); + assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f); + assertEquals(1, searchResponse.getAggregations().asList().size()); + MatrixStats matrixStats = searchResponse.getAggregations().get("agg1"); + assertEquals(5, matrixStats.getFieldCount("num")); + assertEquals(56d, matrixStats.getMean("num"), 0d); + assertEquals(1830d, matrixStats.getVariance("num"), 0d); + assertEquals(0.09340198804973057, matrixStats.getSkewness("num"), 0d); + assertEquals(1.2741646510794589, matrixStats.getKurtosis("num"), 0d); + assertEquals(5, matrixStats.getFieldCount("num2")); + assertEquals(29d, matrixStats.getMean("num2"), 0d); + assertEquals(330d, matrixStats.getVariance("num2"), 0d); + assertEquals(-0.13568039346585542, matrixStats.getSkewness("num2"), 1.0e-16); + assertEquals(1.3517561983471074, matrixStats.getKurtosis("num2"), 0d); + assertEquals(-767.5, matrixStats.getCovariance("num", "num2"), 0d); + assertEquals(-0.9876336291667923, matrixStats.getCorrelation("num", "num2"), 0d); + } + + public void testSearchWithParentJoin() throws IOException { + StringEntity parentMapping = new StringEntity("{\n" + + " \"mappings\": {\n" + + " \"answer\" : {\n" + + " \"_parent\" : {\n" + + " \"type\" : \"question\"\n" + + " }\n" + + " }\n" + + " },\n" + + " \"settings\": {\n" + + " \"index.mapping.single_type\": false" + + " }\n" + + "}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "/child_example", Collections.emptyMap(), parentMapping); + StringEntity questionDoc = new StringEntity("{\n" + + " \"body\": \"
I have Windows 2003 server and i bought a new Windows 2008 server...\",\n" + + " \"title\": \"Whats the best way to file transfer my site from server to a newer one?\",\n" + + " \"tags\": [\n" + + " \"windows-server-2003\",\n" + + " \"windows-server-2008\",\n" + + " \"file-transfer\"\n" + + " ]\n" + + "}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "/child_example/question/1", Collections.emptyMap(), questionDoc); + StringEntity answerDoc1 = new StringEntity("{\n" + + " \"owner\": {\n" + + " \"location\": \"Norfolk, United Kingdom\",\n" + + " \"display_name\": \"Sam\",\n" + + " \"id\": 48\n" + + " },\n" + + " \"body\": \"
Unfortunately you're pretty much limited to FTP...\",\n" + + " \"creation_date\": \"2009-05-04T13:45:37.030\"\n" + + "}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "child_example/answer/1", Collections.singletonMap("parent", "1"), answerDoc1); + StringEntity answerDoc2 = new StringEntity("{\n" + + " \"owner\": {\n" + + " \"location\": \"Norfolk, United Kingdom\",\n" + + " \"display_name\": \"Troll\",\n" + + " \"id\": 49\n" + + " },\n" + + " \"body\": \"
Use Linux...\",\n" + + " \"creation_date\": \"2009-05-05T13:45:37.030\"\n" + + "}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "/child_example/answer/2", Collections.singletonMap("parent", "1"), answerDoc2); + client().performRequest("POST", "/_refresh"); + + TermsAggregationBuilder leafTermAgg = new TermsAggregationBuilder("top-names", ValueType.STRING) + .field("owner.display_name.keyword").size(10); + ChildrenAggregationBuilder childrenAgg = new ChildrenAggregationBuilder("to-answers", "answer").subAggregation(leafTermAgg); + TermsAggregationBuilder termsAgg = new TermsAggregationBuilder("top-tags", ValueType.STRING).field("tags.keyword") + .size(10).subAggregation(childrenAgg); + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + searchSourceBuilder.size(0).aggregation(termsAgg); + SearchRequest searchRequest = new SearchRequest("child_example"); + searchRequest.source(searchSourceBuilder); + + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + assertSearchHeader(searchResponse); + assertNull(searchResponse.getSuggest()); + assertEquals(Collections.emptyMap(), searchResponse.getProfileResults()); + assertEquals(3, searchResponse.getHits().totalHits); + assertEquals(0, searchResponse.getHits().getHits().length); + assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f); + assertEquals(1, searchResponse.getAggregations().asList().size()); + Terms terms = searchResponse.getAggregations().get("top-tags"); + assertEquals(0, terms.getDocCountError()); + assertEquals(0, terms.getSumOfOtherDocCounts()); + assertEquals(3, terms.getBuckets().size()); + for (Terms.Bucket bucket : terms.getBuckets()) { + assertThat(bucket.getKeyAsString(), + either(equalTo("file-transfer")).or(equalTo("windows-server-2003")).or(equalTo("windows-server-2008"))); + assertEquals(1, bucket.getDocCount()); + assertEquals(1, bucket.getAggregations().asList().size()); + Children children = bucket.getAggregations().get("to-answers"); + assertEquals(2, children.getDocCount()); + assertEquals(1, children.getAggregations().asList().size()); + Terms leafTerms = children.getAggregations().get("top-names"); + assertEquals(0, leafTerms.getDocCountError()); + assertEquals(0, leafTerms.getSumOfOtherDocCounts()); + assertEquals(2, leafTerms.getBuckets().size()); + assertEquals(2, leafTerms.getBuckets().size()); + Terms.Bucket sam = leafTerms.getBucketByKey("Sam"); + assertEquals(1, sam.getDocCount()); + Terms.Bucket troll = leafTerms.getBucketByKey("Troll"); + assertEquals(1, troll.getDocCount()); + } + } + + public void testSearchWithSuggest() throws IOException { + SearchRequest searchRequest = new SearchRequest(); + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + searchSourceBuilder.suggest(new SuggestBuilder().addSuggestion("sugg1", new PhraseSuggestionBuilder("type")) + .setGlobalText("type")); + searchSourceBuilder.size(0); + searchRequest.source(searchSourceBuilder); + + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + assertSearchHeader(searchResponse); + assertNull(searchResponse.getAggregations()); + assertEquals(Collections.emptyMap(), searchResponse.getProfileResults()); + assertEquals(0, searchResponse.getHits().totalHits); + assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f); + assertEquals(0, searchResponse.getHits().getHits().length); + assertEquals(1, searchResponse.getSuggest().size()); + + 
Suggest.Suggestion<? extends Suggest.Suggestion.Entry<? extends Suggest.Suggestion.Entry.Option>> sugg = searchResponse + .getSuggest().iterator().next(); + assertEquals("sugg1", sugg.getName()); + for (Suggest.Suggestion.Entry<? extends Suggest.Suggestion.Entry.Option> options : sugg) { + assertEquals("type", options.getText().string()); + assertEquals(0, options.getOffset()); + assertEquals(4, options.getLength()); + assertEquals(2, options.getOptions().size()); + for (Suggest.Suggestion.Entry.Option option : options) { + assertThat(option.getScore(), greaterThan(0f)); + assertThat(option.getText().string(), either(equalTo("type1")).or(equalTo("type2"))); + } + } + } + + public void testSearchScroll() throws Exception { + + for (int i = 0; i < 100; i++) { + XContentBuilder builder = jsonBuilder().startObject().field("field", i).endObject(); + HttpEntity entity = new NStringEntity(builder.string(), ContentType.APPLICATION_JSON); + client().performRequest("PUT", "test/type1/" + Integer.toString(i), Collections.emptyMap(), entity); + } + client().performRequest("POST", "/test/_refresh"); + + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder().size(35).sort("field", SortOrder.ASC); + SearchRequest searchRequest = new SearchRequest("test").scroll(TimeValue.timeValueMinutes(2)).source(searchSourceBuilder); + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + + try { + long counter = 0; + assertSearchHeader(searchResponse); + assertThat(searchResponse.getHits().getTotalHits(), equalTo(100L)); + assertThat(searchResponse.getHits().getHits().length, equalTo(35)); + for (SearchHit hit : searchResponse.getHits()) { + assertThat(((Number) hit.getSortValues()[0]).longValue(), equalTo(counter++)); + } + + searchResponse = execute(new SearchScrollRequest(searchResponse.getScrollId()).scroll(TimeValue.timeValueMinutes(2)), + highLevelClient()::searchScroll, highLevelClient()::searchScrollAsync); + + assertThat(searchResponse.getHits().getTotalHits(), equalTo(100L)); + assertThat(searchResponse.getHits().getHits().length, equalTo(35)); + for (SearchHit hit : searchResponse.getHits()) { + assertEquals(counter++, ((Number) hit.getSortValues()[0]).longValue()); + } + + searchResponse = execute(new SearchScrollRequest(searchResponse.getScrollId()).scroll(TimeValue.timeValueMinutes(2)), + highLevelClient()::searchScroll, highLevelClient()::searchScrollAsync); + + assertThat(searchResponse.getHits().getTotalHits(), equalTo(100L)); + assertThat(searchResponse.getHits().getHits().length, equalTo(30)); + for (SearchHit hit : searchResponse.getHits()) { + assertEquals(counter++, ((Number) hit.getSortValues()[0]).longValue()); + } + } finally { + ClearScrollRequest clearScrollRequest = new ClearScrollRequest(); + clearScrollRequest.addScrollId(searchResponse.getScrollId()); + ClearScrollResponse clearScrollResponse = execute(clearScrollRequest, + // Not using a method reference to work around https://bugs.eclipse.org/bugs/show_bug.cgi?id=517951 + (request, headers) -> highLevelClient().clearScroll(request, headers), + (request, listener, headers) -> highLevelClient().clearScrollAsync(request, listener, headers)); + assertThat(clearScrollResponse.getNumFreed(), greaterThan(0)); + assertTrue(clearScrollResponse.isSucceeded()); + + SearchScrollRequest scrollRequest = new SearchScrollRequest(searchResponse.getScrollId()).scroll(TimeValue.timeValueMinutes(2)); + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> execute(scrollRequest, + highLevelClient()::searchScroll, highLevelClient()::searchScrollAsync)); +
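+            // Reusing the scroll id after the scroll has been cleared should fail with 404 NOT_FOUND.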
assertEquals(RestStatus.NOT_FOUND, exception.status()); + assertThat(exception.getRootCause(), instanceOf(ElasticsearchException.class)); + ElasticsearchException rootCause = (ElasticsearchException) exception.getRootCause(); + assertThat(rootCause.getMessage(), containsString("No search context found for")); + } + } + + private static void assertSearchHeader(SearchResponse searchResponse) { + assertThat(searchResponse.getTook().nanos(), greaterThanOrEqualTo(0L)); + assertEquals(0, searchResponse.getFailedShards()); + assertThat(searchResponse.getTotalShards(), greaterThan(0)); + assertEquals(searchResponse.getTotalShards(), searchResponse.getSuccessfulShards()); + assertEquals(0, searchResponse.getShardFailures().length); + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/CRUDDocumentationIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/CRUDDocumentationIT.java new file mode 100644 index 0000000000000..30907cd050531 --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/CRUDDocumentationIT.java @@ -0,0 +1,962 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client.documentation; + +import org.apache.http.HttpEntity; +import org.apache.http.client.methods.HttpPost; +import org.apache.http.entity.ContentType; +import org.apache.http.nio.entity.NStringEntity; +import org.elasticsearch.Build; +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.Version; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.DocWriteRequest; +import org.elasticsearch.action.DocWriteResponse; +import org.elasticsearch.action.bulk.BackoffPolicy; +import org.elasticsearch.action.bulk.BulkItemResponse; +import org.elasticsearch.action.bulk.BulkProcessor; +import org.elasticsearch.action.bulk.BulkRequest; +import org.elasticsearch.action.bulk.BulkResponse; +import org.elasticsearch.action.delete.DeleteRequest; +import org.elasticsearch.action.delete.DeleteResponse; +import org.elasticsearch.action.get.GetRequest; +import org.elasticsearch.action.get.GetResponse; +import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.action.index.IndexResponse; +import org.elasticsearch.action.main.MainResponse; +import org.elasticsearch.action.support.ActiveShardCount; +import org.elasticsearch.action.support.WriteRequest; +import org.elasticsearch.action.support.replication.ReplicationResponse; +import org.elasticsearch.action.update.UpdateRequest; +import org.elasticsearch.action.update.UpdateResponse; +import org.elasticsearch.client.ESRestHighLevelClientTestCase; +import org.elasticsearch.client.Response; +import org.elasticsearch.client.RestHighLevelClient; +import org.elasticsearch.cluster.ClusterName; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.ByteSizeUnit; +import org.elasticsearch.common.unit.ByteSizeValue; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.index.VersionType; +import org.elasticsearch.index.get.GetResult; +import org.elasticsearch.rest.RestStatus; +import org.elasticsearch.script.Script; +import org.elasticsearch.script.ScriptType; +import org.elasticsearch.search.fetch.subphase.FetchSourceContext; +import org.elasticsearch.threadpool.ThreadPool; + +import java.io.IOException; +import java.util.Collections; +import java.util.Date; +import java.util.HashMap; +import java.util.Map; +import java.util.concurrent.TimeUnit; + +import static java.util.Collections.emptyMap; +import static java.util.Collections.singletonMap; + +/** + * This class is used to generate the Java CRUD API documentation. + * You need to wrap your code between two tags like: + * // tag::example[] + * // end::example[] + * + * Where example is your tag name. 
+ * + * Then in the documentation, you can extract what is between tag and end tags with + * ["source","java",subs="attributes,callouts,macros"] + * -------------------------------------------------- + * include-tagged::{doc-tests}/CRUDDocumentationIT.java[example] + * -------------------------------------------------- + */ +public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase { + + public void testIndex() throws IOException { + RestHighLevelClient client = highLevelClient(); + + { + //tag::index-request-map + Map jsonMap = new HashMap<>(); + jsonMap.put("user", "kimchy"); + jsonMap.put("postDate", new Date()); + jsonMap.put("message", "trying out Elasticsearch"); + IndexRequest indexRequest = new IndexRequest("posts", "doc", "1") + .source(jsonMap); // <1> + //end::index-request-map + IndexResponse indexResponse = client.index(indexRequest); + assertEquals(indexResponse.getResult(), DocWriteResponse.Result.CREATED); + } + { + //tag::index-request-xcontent + XContentBuilder builder = XContentFactory.jsonBuilder(); + builder.startObject(); + { + builder.field("user", "kimchy"); + builder.field("postDate", new Date()); + builder.field("message", "trying out Elasticsearch"); + } + builder.endObject(); + IndexRequest indexRequest = new IndexRequest("posts", "doc", "1") + .source(builder); // <1> + //end::index-request-xcontent + IndexResponse indexResponse = client.index(indexRequest); + assertEquals(indexResponse.getResult(), DocWriteResponse.Result.UPDATED); + } + { + //tag::index-request-shortcut + IndexRequest indexRequest = new IndexRequest("posts", "doc", "1") + .source("user", "kimchy", + "postDate", new Date(), + "message", "trying out Elasticsearch"); // <1> + //end::index-request-shortcut + IndexResponse indexResponse = client.index(indexRequest); + assertEquals(indexResponse.getResult(), DocWriteResponse.Result.UPDATED); + } + { + //tag::index-request-string + IndexRequest request = new IndexRequest( + "posts", // <1> + "doc", // <2> + "1"); // <3> + String jsonString = "{" + + "\"user\":\"kimchy\"," + + "\"postDate\":\"2013-01-30\"," + + "\"message\":\"trying out Elasticsearch\"" + + "}"; + request.source(jsonString, XContentType.JSON); // <4> + //end::index-request-string + + // tag::index-execute + IndexResponse indexResponse = client.index(request); + // end::index-execute + assertEquals(indexResponse.getResult(), DocWriteResponse.Result.UPDATED); + + // tag::index-response + String index = indexResponse.getIndex(); + String type = indexResponse.getType(); + String id = indexResponse.getId(); + long version = indexResponse.getVersion(); + if (indexResponse.getResult() == DocWriteResponse.Result.CREATED) { + // <1> + } else if (indexResponse.getResult() == DocWriteResponse.Result.UPDATED) { + // <2> + } + ReplicationResponse.ShardInfo shardInfo = indexResponse.getShardInfo(); + if (shardInfo.getTotal() != shardInfo.getSuccessful()) { + // <3> + } + if (shardInfo.getFailed() > 0) { + for (ReplicationResponse.ShardInfo.Failure failure : shardInfo.getFailures()) { + String reason = failure.reason(); // <4> + } + } + // end::index-response + + // tag::index-execute-async + client.indexAsync(request, new ActionListener() { + @Override + public void onResponse(IndexResponse indexResponse) { + // <1> + } + + @Override + public void onFailure(Exception e) { + // <2> + } + }); + // end::index-execute-async + } + { + IndexRequest request = new IndexRequest("posts", "doc", "1"); + // tag::index-request-routing + request.routing("routing"); // <1> + // 
end::index-request-routing + // tag::index-request-parent + request.parent("parent"); // <1> + // end::index-request-parent + // tag::index-request-timeout + request.timeout(TimeValue.timeValueSeconds(1)); // <1> + request.timeout("1s"); // <2> + // end::index-request-timeout + // tag::index-request-refresh + request.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL); // <1> + request.setRefreshPolicy("wait_for"); // <2> + // end::index-request-refresh + // tag::index-request-version + request.version(2); // <1> + // end::index-request-version + // tag::index-request-version-type + request.versionType(VersionType.EXTERNAL); // <1> + // end::index-request-version-type + // tag::index-request-op-type + request.opType(DocWriteRequest.OpType.CREATE); // <1> + request.opType("create"); // <2> + // end::index-request-op-type + // tag::index-request-pipeline + request.setPipeline("pipeline"); // <1> + // end::index-request-pipeline + } + { + // tag::index-conflict + IndexRequest request = new IndexRequest("posts", "doc", "1") + .source("field", "value") + .version(1); + try { + IndexResponse response = client.index(request); + } catch(ElasticsearchException e) { + if (e.status() == RestStatus.CONFLICT) { + // <1> + } + } + // end::index-conflict + } + { + // tag::index-optype + IndexRequest request = new IndexRequest("posts", "doc", "1") + .source("field", "value") + .opType(DocWriteRequest.OpType.CREATE); + try { + IndexResponse response = client.index(request); + } catch(ElasticsearchException e) { + if (e.status() == RestStatus.CONFLICT) { + // <1> + } + } + // end::index-optype + } + } + + public void testUpdate() throws IOException { + RestHighLevelClient client = highLevelClient(); + { + IndexRequest indexRequest = new IndexRequest("posts", "doc", "1").source("field", 0); + IndexResponse indexResponse = client.index(indexRequest); + assertSame(indexResponse.status(), RestStatus.CREATED); + + XContentType xContentType = XContentType.JSON; + String script = XContentBuilder.builder(xContentType.xContent()) + .startObject() + .startObject("script") + .field("lang", "painless") + .field("code", "ctx._source.field += params.count") + .endObject() + .endObject().string(); + HttpEntity body = new NStringEntity(script, ContentType.create(xContentType.mediaType())); + Response response = client().performRequest(HttpPost.METHOD_NAME, "/_scripts/increment-field", emptyMap(), body); + assertEquals(response.getStatusLine().getStatusCode(), RestStatus.OK.getStatus()); + } + { + //tag::update-request + UpdateRequest request = new UpdateRequest( + "posts", // <1> + "doc", // <2> + "1"); // <3> + //end::update-request + request.fetchSource(true); + //tag::update-request-with-inline-script + Map parameters = singletonMap("count", 4); // <1> + + Script inline = new Script(ScriptType.INLINE, "painless", "ctx._source.field += params.count", parameters); // <2> + request.script(inline); // <3> + //end::update-request-with-inline-script + UpdateResponse updateResponse = client.update(request); + assertEquals(updateResponse.getResult(), DocWriteResponse.Result.UPDATED); + assertEquals(4, updateResponse.getGetResult().getSource().get("field")); + + request = new UpdateRequest("posts", "doc", "1").fetchSource(true); + //tag::update-request-with-stored-script + Script stored = + new Script(ScriptType.STORED, "painless", "increment-field", parameters); // <1> + request.script(stored); // <2> + //end::update-request-with-stored-script + updateResponse = client.update(request); + 
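+            // The stored "increment-field" script adds params.count (4) a second time, so the field should now be 8.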
assertEquals(updateResponse.getResult(), DocWriteResponse.Result.UPDATED); + assertEquals(8, updateResponse.getGetResult().getSource().get("field")); + } + { + //tag::update-request-with-doc-as-map + Map jsonMap = new HashMap<>(); + jsonMap.put("updated", new Date()); + jsonMap.put("reason", "daily update"); + UpdateRequest request = new UpdateRequest("posts", "doc", "1") + .doc(jsonMap); // <1> + //end::update-request-with-doc-as-map + UpdateResponse updateResponse = client.update(request); + assertEquals(updateResponse.getResult(), DocWriteResponse.Result.UPDATED); + } + { + //tag::update-request-with-doc-as-xcontent + XContentBuilder builder = XContentFactory.jsonBuilder(); + builder.startObject(); + { + builder.field("updated", new Date()); + builder.field("reason", "daily update"); + } + builder.endObject(); + UpdateRequest request = new UpdateRequest("posts", "doc", "1") + .doc(builder); // <1> + //end::update-request-with-doc-as-xcontent + UpdateResponse updateResponse = client.update(request); + assertEquals(updateResponse.getResult(), DocWriteResponse.Result.UPDATED); + } + { + //tag::update-request-shortcut + UpdateRequest request = new UpdateRequest("posts", "doc", "1") + .doc("updated", new Date(), + "reason", "daily update"); // <1> + //end::update-request-shortcut + UpdateResponse updateResponse = client.update(request); + assertEquals(updateResponse.getResult(), DocWriteResponse.Result.UPDATED); + } + { + //tag::update-request-with-doc-as-string + UpdateRequest request = new UpdateRequest("posts", "doc", "1"); + String jsonString = "{" + + "\"updated\":\"2017-01-01\"," + + "\"reason\":\"daily update\"" + + "}"; + request.doc(jsonString, XContentType.JSON); // <1> + //end::update-request-with-doc-as-string + request.fetchSource(true); + // tag::update-execute + UpdateResponse updateResponse = client.update(request); + // end::update-execute + assertEquals(updateResponse.getResult(), DocWriteResponse.Result.UPDATED); + + // tag::update-response + String index = updateResponse.getIndex(); + String type = updateResponse.getType(); + String id = updateResponse.getId(); + long version = updateResponse.getVersion(); + if (updateResponse.getResult() == DocWriteResponse.Result.CREATED) { + // <1> + } else if (updateResponse.getResult() == DocWriteResponse.Result.UPDATED) { + // <2> + } else if (updateResponse.getResult() == DocWriteResponse.Result.DELETED) { + // <3> + } else if (updateResponse.getResult() == DocWriteResponse.Result.NOOP) { + // <4> + } + // end::update-response + + // tag::update-getresult + GetResult result = updateResponse.getGetResult(); // <1> + if (result.isExists()) { + String sourceAsString = result.sourceAsString(); // <2> + Map sourceAsMap = result.sourceAsMap(); // <3> + byte[] sourceAsBytes = result.source(); // <4> + } else { + // <5> + } + // end::update-getresult + assertNotNull(result); + assertEquals(3, result.sourceAsMap().size()); + // tag::update-failure + ReplicationResponse.ShardInfo shardInfo = updateResponse.getShardInfo(); + if (shardInfo.getTotal() != shardInfo.getSuccessful()) { + // <1> + } + if (shardInfo.getFailed() > 0) { + for (ReplicationResponse.ShardInfo.Failure failure : shardInfo.getFailures()) { + String reason = failure.reason(); // <2> + } + } + // end::update-failure + + // tag::update-execute-async + client.updateAsync(request, new ActionListener() { + @Override + public void onResponse(UpdateResponse updateResponse) { + // <1> + } + + @Override + public void onFailure(Exception e) { + // <2> + } + }); + // 
end::update-execute-async + } + { + //tag::update-docnotfound + UpdateRequest request = new UpdateRequest("posts", "type", "does_not_exist").doc("field", "value"); + try { + UpdateResponse updateResponse = client.update(request); + } catch (ElasticsearchException e) { + if (e.status() == RestStatus.NOT_FOUND) { + // <1> + } + } + //end::update-docnotfound + } + { + // tag::update-conflict + UpdateRequest request = new UpdateRequest("posts", "doc", "1") + .doc("field", "value") + .version(1); + try { + UpdateResponse updateResponse = client.update(request); + } catch(ElasticsearchException e) { + if (e.status() == RestStatus.CONFLICT) { + // <1> + } + } + // end::update-conflict + } + { + UpdateRequest request = new UpdateRequest("posts", "doc", "1").doc("reason", "no source"); + //tag::update-request-no-source + request.fetchSource(true); // <1> + //end::update-request-no-source + UpdateResponse updateResponse = client.update(request); + assertEquals(updateResponse.getResult(), DocWriteResponse.Result.UPDATED); + assertNotNull(updateResponse.getGetResult()); + assertEquals(3, updateResponse.getGetResult().sourceAsMap().size()); + } + { + UpdateRequest request = new UpdateRequest("posts", "doc", "1").doc("reason", "source includes"); + //tag::update-request-source-include + String[] includes = new String[]{"updated", "r*"}; + String[] excludes = Strings.EMPTY_ARRAY; + request.fetchSource(new FetchSourceContext(true, includes, excludes)); // <1> + //end::update-request-source-include + UpdateResponse updateResponse = client.update(request); + assertEquals(updateResponse.getResult(), DocWriteResponse.Result.UPDATED); + Map sourceAsMap = updateResponse.getGetResult().sourceAsMap(); + assertEquals(2, sourceAsMap.size()); + assertEquals("source includes", sourceAsMap.get("reason")); + assertTrue(sourceAsMap.containsKey("updated")); + } + { + UpdateRequest request = new UpdateRequest("posts", "doc", "1").doc("reason", "source excludes"); + //tag::update-request-source-exclude + String[] includes = Strings.EMPTY_ARRAY; + String[] excludes = new String[]{"updated"}; + request.fetchSource(new FetchSourceContext(true, includes, excludes)); // <1> + //end::update-request-source-exclude + UpdateResponse updateResponse = client.update(request); + assertEquals(updateResponse.getResult(), DocWriteResponse.Result.UPDATED); + Map sourceAsMap = updateResponse.getGetResult().sourceAsMap(); + assertEquals(2, sourceAsMap.size()); + assertEquals("source excludes", sourceAsMap.get("reason")); + assertTrue(sourceAsMap.containsKey("field")); + } + { + UpdateRequest request = new UpdateRequest("posts", "doc", "id"); + // tag::update-request-routing + request.routing("routing"); // <1> + // end::update-request-routing + // tag::update-request-parent + request.parent("parent"); // <1> + // end::update-request-parent + // tag::update-request-timeout + request.timeout(TimeValue.timeValueSeconds(1)); // <1> + request.timeout("1s"); // <2> + // end::update-request-timeout + // tag::update-request-retry + request.retryOnConflict(3); // <1> + // end::update-request-retry + // tag::update-request-refresh + request.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL); // <1> + request.setRefreshPolicy("wait_for"); // <2> + // end::update-request-refresh + // tag::update-request-version + request.version(2); // <1> + // end::update-request-version + // tag::update-request-detect-noop + request.detectNoop(false); // <1> + // end::update-request-detect-noop + // tag::update-request-upsert + String jsonString = 
"{\"created\":\"2017-01-01\"}"; + request.upsert(jsonString, XContentType.JSON); // <1> + // end::update-request-upsert + // tag::update-request-scripted-upsert + request.scriptedUpsert(true); // <1> + // end::update-request-scripted-upsert + // tag::update-request-doc-upsert + request.docAsUpsert(true); // <1> + // end::update-request-doc-upsert + // tag::update-request-active-shards + request.waitForActiveShards(2); // <1> + request.waitForActiveShards(ActiveShardCount.ALL); // <2> + // end::update-request-active-shards + } + } + + public void testDelete() throws IOException { + RestHighLevelClient client = highLevelClient(); + + { + IndexRequest indexRequest = new IndexRequest("posts", "doc", "1").source("field", "value"); + IndexResponse indexResponse = client.index(indexRequest); + assertSame(indexResponse.status(), RestStatus.CREATED); + } + + { + // tag::delete-request + DeleteRequest request = new DeleteRequest( + "posts", // <1> + "doc", // <2> + "1"); // <3> + // end::delete-request + + // tag::delete-execute + DeleteResponse deleteResponse = client.delete(request); + // end::delete-execute + assertSame(deleteResponse.getResult(), DocWriteResponse.Result.DELETED); + + // tag::delete-response + String index = deleteResponse.getIndex(); + String type = deleteResponse.getType(); + String id = deleteResponse.getId(); + long version = deleteResponse.getVersion(); + ReplicationResponse.ShardInfo shardInfo = deleteResponse.getShardInfo(); + if (shardInfo.getTotal() != shardInfo.getSuccessful()) { + // <1> + } + if (shardInfo.getFailed() > 0) { + for (ReplicationResponse.ShardInfo.Failure failure : shardInfo.getFailures()) { + String reason = failure.reason(); // <2> + } + } + // end::delete-response + + // tag::delete-execute-async + client.deleteAsync(request, new ActionListener() { + @Override + public void onResponse(DeleteResponse deleteResponse) { + // <1> + } + + @Override + public void onFailure(Exception e) { + // <2> + } + }); + // end::delete-execute-async + } + + { + DeleteRequest request = new DeleteRequest("posts", "doc", "1"); + // tag::delete-request-routing + request.routing("routing"); // <1> + // end::delete-request-routing + // tag::delete-request-parent + request.parent("parent"); // <1> + // end::delete-request-parent + // tag::delete-request-timeout + request.timeout(TimeValue.timeValueMinutes(2)); // <1> + request.timeout("2m"); // <2> + // end::delete-request-timeout + // tag::delete-request-refresh + request.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL); // <1> + request.setRefreshPolicy("wait_for"); // <2> + // end::delete-request-refresh + // tag::delete-request-version + request.version(2); // <1> + // end::delete-request-version + // tag::delete-request-version-type + request.versionType(VersionType.EXTERNAL); // <1> + // end::delete-request-version-type + } + + { + // tag::delete-notfound + DeleteRequest request = new DeleteRequest("posts", "doc", "does_not_exist"); + DeleteResponse deleteResponse = client.delete(request); + if (deleteResponse.getResult() == DocWriteResponse.Result.NOT_FOUND) { + // <1> + } + // end::delete-notfound + } + + { + IndexResponse indexResponse = client.index(new IndexRequest("posts", "doc", "1").source("field", "value")); + assertSame(indexResponse.status(), RestStatus.CREATED); + + // tag::delete-conflict + try { + DeleteRequest request = new DeleteRequest("posts", "doc", "1").version(2); + DeleteResponse deleteResponse = client.delete(request); + } catch (ElasticsearchException exception) { + if (exception.status() 
== RestStatus.CONFLICT) { + // <1> + } + } + // end::delete-conflict + } + } + + public void testBulk() throws IOException { + RestHighLevelClient client = highLevelClient(); + { + // tag::bulk-request + BulkRequest request = new BulkRequest(); // <1> + request.add(new IndexRequest("posts", "doc", "1") // <2> + .source(XContentType.JSON,"field", "foo")); + request.add(new IndexRequest("posts", "doc", "2") // <3> + .source(XContentType.JSON,"field", "bar")); + request.add(new IndexRequest("posts", "doc", "3") // <4> + .source(XContentType.JSON,"field", "baz")); + // end::bulk-request + // tag::bulk-execute + BulkResponse bulkResponse = client.bulk(request); + // end::bulk-execute + assertSame(bulkResponse.status(), RestStatus.OK); + assertFalse(bulkResponse.hasFailures()); + } + { + // tag::bulk-request-with-mixed-operations + BulkRequest request = new BulkRequest(); + request.add(new DeleteRequest("posts", "doc", "3")); // <1> + request.add(new UpdateRequest("posts", "doc", "2") // <2> + .doc(XContentType.JSON,"other", "test")); + request.add(new IndexRequest("posts", "doc", "4") // <3> + .source(XContentType.JSON,"field", "baz")); + // end::bulk-request-with-mixed-operations + BulkResponse bulkResponse = client.bulk(request); + assertSame(bulkResponse.status(), RestStatus.OK); + assertFalse(bulkResponse.hasFailures()); + + // tag::bulk-response + for (BulkItemResponse bulkItemResponse : bulkResponse) { // <1> + DocWriteResponse itemResponse = bulkItemResponse.getResponse(); // <2> + + if (bulkItemResponse.getOpType() == DocWriteRequest.OpType.INDEX + || bulkItemResponse.getOpType() == DocWriteRequest.OpType.CREATE) { // <3> + IndexResponse indexResponse = (IndexResponse) itemResponse; + + } else if (bulkItemResponse.getOpType() == DocWriteRequest.OpType.UPDATE) { // <4> + UpdateResponse updateResponse = (UpdateResponse) itemResponse; + + } else if (bulkItemResponse.getOpType() == DocWriteRequest.OpType.DELETE) { // <5> + DeleteResponse deleteResponse = (DeleteResponse) itemResponse; + } + } + // end::bulk-response + // tag::bulk-has-failures + if (bulkResponse.hasFailures()) { // <1> + + } + // end::bulk-has-failures + // tag::bulk-errors + for (BulkItemResponse bulkItemResponse : bulkResponse) { + if (bulkItemResponse.isFailed()) { // <1> + BulkItemResponse.Failure failure = bulkItemResponse.getFailure(); // <2> + + } + } + // end::bulk-errors + } + { + BulkRequest request = new BulkRequest(); + // tag::bulk-request-timeout + request.timeout(TimeValue.timeValueMinutes(2)); // <1> + request.timeout("2m"); // <2> + // end::bulk-request-timeout + // tag::bulk-request-refresh + request.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL); // <1> + request.setRefreshPolicy("wait_for"); // <2> + // end::bulk-request-refresh + // tag::bulk-request-active-shards + request.waitForActiveShards(2); // <1> + request.waitForActiveShards(ActiveShardCount.ALL); // <2> + // end::bulk-request-active-shards + + // tag::bulk-execute-async + client.bulkAsync(request, new ActionListener() { + @Override + public void onResponse(BulkResponse bulkResponse) { + // <1> + } + + @Override + public void onFailure(Exception e) { + // <2> + } + }); + // end::bulk-execute-async + } + } + + public void testGet() throws IOException { + RestHighLevelClient client = highLevelClient(); + { + String mappings = "{\n" + + " \"mappings\" : {\n" + + " \"doc\" : {\n" + + " \"properties\" : {\n" + + " \"message\" : {\n" + + " \"type\": \"text\",\n" + + " \"store\": true\n" + + " }\n" + + " }\n" + + " }\n" + + " }\n" + + "}"; + 
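+            // Create the "posts" index with "message" mapped as a stored field so the stored-fields example below can retrieve it.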
+ NStringEntity entity = new NStringEntity(mappings, ContentType.APPLICATION_JSON); + Response response = client().performRequest("PUT", "/posts", Collections.emptyMap(), entity); + assertEquals(200, response.getStatusLine().getStatusCode()); + + IndexRequest indexRequest = new IndexRequest("posts", "doc", "1") + .source("user", "kimchy", + "postDate", new Date(), + "message", "trying out Elasticsearch"); + IndexResponse indexResponse = client.index(indexRequest); + assertEquals(indexResponse.getResult(), DocWriteResponse.Result.CREATED); + } + { + //tag::get-request + GetRequest getRequest = new GetRequest( + "posts", // <1> + "doc", // <2> + "1"); // <3> + //end::get-request + + //tag::get-execute + GetResponse getResponse = client.get(getRequest); + //end::get-execute + assertTrue(getResponse.isExists()); + assertEquals(3, getResponse.getSourceAsMap().size()); + //tag::get-response + String index = getResponse.getIndex(); + String type = getResponse.getType(); + String id = getResponse.getId(); + if (getResponse.isExists()) { + long version = getResponse.getVersion(); + String sourceAsString = getResponse.getSourceAsString(); // <1> + Map sourceAsMap = getResponse.getSourceAsMap(); // <2> + byte[] sourceAsBytes = getResponse.getSourceAsBytes(); // <3> + } else { + // <4> + } + //end::get-response + } + { + GetRequest request = new GetRequest("posts", "doc", "1"); + //tag::get-request-no-source + request.fetchSourceContext(new FetchSourceContext(false)); // <1> + //end::get-request-no-source + GetResponse getResponse = client.get(request); + assertNull(getResponse.getSourceInternal()); + } + { + GetRequest request = new GetRequest("posts", "doc", "1"); + //tag::get-request-source-include + String[] includes = new String[]{"message", "*Date"}; + String[] excludes = Strings.EMPTY_ARRAY; + FetchSourceContext fetchSourceContext = new FetchSourceContext(true, includes, excludes); + request.fetchSourceContext(fetchSourceContext); // <1> + //end::get-request-source-include + GetResponse getResponse = client.get(request); + Map sourceAsMap = getResponse.getSourceAsMap(); + assertEquals(2, sourceAsMap.size()); + assertEquals("trying out Elasticsearch", sourceAsMap.get("message")); + assertTrue(sourceAsMap.containsKey("postDate")); + } + { + GetRequest request = new GetRequest("posts", "doc", "1"); + //tag::get-request-source-exclude + String[] includes = Strings.EMPTY_ARRAY; + String[] excludes = new String[]{"message"}; + FetchSourceContext fetchSourceContext = new FetchSourceContext(true, includes, excludes); + request.fetchSourceContext(fetchSourceContext); // <1> + //end::get-request-source-exclude + GetResponse getResponse = client.get(request); + Map sourceAsMap = getResponse.getSourceAsMap(); + assertEquals(2, sourceAsMap.size()); + assertEquals("kimchy", sourceAsMap.get("user")); + assertTrue(sourceAsMap.containsKey("postDate")); + } + { + GetRequest request = new GetRequest("posts", "doc", "1"); + //tag::get-request-stored + request.storedFields("message"); // <1> + GetResponse getResponse = client.get(request); + String message = (String) getResponse.getField("message").getValue(); // <2> + //end::get-request-stored + assertEquals("trying out Elasticsearch", message); + assertEquals(1, getResponse.getFields().size()); + assertNull(getResponse.getSourceInternal()); + } + { + GetRequest request = new GetRequest("posts", "doc", "1"); + //tag::get-request-routing + request.routing("routing"); // <1> + //end::get-request-routing + //tag::get-request-parent + request.parent("parent"); // <1> 
+ //end::get-request-parent + //tag::get-request-preference + request.preference("preference"); // <1> + //end::get-request-preference + //tag::get-request-realtime + request.realtime(false); // <1> + //end::get-request-realtime + //tag::get-request-refresh + request.refresh(true); // <1> + //end::get-request-refresh + //tag::get-request-version + request.version(2); // <1> + //end::get-request-version + //tag::get-request-version-type + request.versionType(VersionType.EXTERNAL); // <1> + //end::get-request-version-type + } + { + GetRequest request = new GetRequest("posts", "doc", "1"); + //tag::get-execute-async + client.getAsync(request, new ActionListener() { + @Override + public void onResponse(GetResponse getResponse) { + // <1> + } + + @Override + public void onFailure(Exception e) { + // <2> + } + }); + //end::get-execute-async + } + { + //tag::get-indexnotfound + GetRequest request = new GetRequest("does_not_exist", "doc", "1"); + try { + GetResponse getResponse = client.get(request); + } catch (ElasticsearchException e) { + if (e.status() == RestStatus.NOT_FOUND) { + // <1> + } + } + //end::get-indexnotfound + } + { + // tag::get-conflict + try { + GetRequest request = new GetRequest("posts", "doc", "1").version(2); + GetResponse getResponse = client.get(request); + } catch (ElasticsearchException exception) { + if (exception.status() == RestStatus.CONFLICT) { + // <1> + } + } + // end::get-conflict + } + } + + public void testBulkProcessor() throws InterruptedException, IOException { + Settings settings = Settings.builder().put("node.name", "my-application").build(); + RestHighLevelClient client = highLevelClient(); + { + // tag::bulk-processor-init + ThreadPool threadPool = new ThreadPool(settings); // <1> + + BulkProcessor.Listener listener = new BulkProcessor.Listener() { // <2> + @Override + public void beforeBulk(long executionId, BulkRequest request) { + // <3> + } + + @Override + public void afterBulk(long executionId, BulkRequest request, BulkResponse response) { + // <4> + } + + @Override + public void afterBulk(long executionId, BulkRequest request, Throwable failure) { + // <5> + } + }; + + BulkProcessor bulkProcessor = new BulkProcessor.Builder(client::bulkAsync, listener, threadPool) + .build(); // <6> + // end::bulk-processor-init + assertNotNull(bulkProcessor); + + // tag::bulk-processor-add + IndexRequest one = new IndexRequest("posts", "doc", "1"). 
+ source(XContentType.JSON, "title", "In which order are my Elasticsearch queries executed?"); + IndexRequest two = new IndexRequest("posts", "doc", "2") + .source(XContentType.JSON, "title", "Current status and upcoming changes in Elasticsearch"); + IndexRequest three = new IndexRequest("posts", "doc", "3") + .source(XContentType.JSON, "title", "The Future of Federated Search in Elasticsearch"); + + bulkProcessor.add(one); + bulkProcessor.add(two); + bulkProcessor.add(three); + // end::bulk-processor-add + + // tag::bulk-processor-await + boolean terminated = bulkProcessor.awaitClose(30L, TimeUnit.SECONDS); // <1> + // end::bulk-processor-await + assertTrue(terminated); + + // tag::bulk-processor-close + bulkProcessor.close(); + // end::bulk-processor-close + terminate(threadPool); + } + { + // tag::bulk-processor-listener + BulkProcessor.Listener listener = new BulkProcessor.Listener() { + @Override + public void beforeBulk(long executionId, BulkRequest request) { + int numberOfActions = request.numberOfActions(); // <1> + logger.debug("Executing bulk [{}] with {} requests", executionId, numberOfActions); + } + + @Override + public void afterBulk(long executionId, BulkRequest request, BulkResponse response) { + if (response.hasFailures()) { // <2> + logger.warn("Bulk [{}] executed with failures", executionId); + } else { + logger.debug("Bulk [{}] completed in {} milliseconds", executionId, response.getTook().getMillis()); + } + } + + @Override + public void afterBulk(long executionId, BulkRequest request, Throwable failure) { + logger.error("Failed to execute bulk", failure); // <3> + } + }; + // end::bulk-processor-listener + + ThreadPool threadPool = new ThreadPool(settings); + try { + // tag::bulk-processor-options + BulkProcessor.Builder builder = new BulkProcessor.Builder(client::bulkAsync, listener, threadPool); + builder.setBulkActions(500); // <1> + builder.setBulkSize(new ByteSizeValue(1L, ByteSizeUnit.MB)); // <2> + builder.setConcurrentRequests(0); // <3> + builder.setFlushInterval(TimeValue.timeValueSeconds(10L)); // <4> + builder.setBackoffPolicy(BackoffPolicy.constantBackoff(TimeValue.timeValueSeconds(1L), 3)); // <5> + // end::bulk-processor-options + } finally { + terminate(threadPool); + } + } + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/MainDocumentationIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/MainDocumentationIT.java new file mode 100644 index 0000000000000..877ae910e83fa --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/MainDocumentationIT.java @@ -0,0 +1,68 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client.documentation; + +import org.elasticsearch.Build; +import org.elasticsearch.Version; +import org.elasticsearch.action.main.MainResponse; +import org.elasticsearch.client.ESRestHighLevelClientTestCase; +import org.elasticsearch.client.RestHighLevelClient; +import org.elasticsearch.cluster.ClusterName; + +import java.io.IOException; + +/** + * This class is used to generate the Java Main API documentation. + * You need to wrap your code between two tags like: + * // tag::example[] + * // end::example[] + * + * Where example is your tag name. + * + * Then in the documentation, you can extract what is between tag and end tags with + * ["source","java",subs="attributes,callouts,macros"] + * -------------------------------------------------- + * include-tagged::{doc-tests}/MainDocumentationIT.java[example] + * -------------------------------------------------- + */ +public class MainDocumentationIT extends ESRestHighLevelClientTestCase { + + public void testMain() throws IOException { + RestHighLevelClient client = highLevelClient(); + { + //tag::main-execute + MainResponse response = client.info(); + //end::main-execute + assertTrue(response.isAvailable()); + //tag::main-response + ClusterName clusterName = response.getClusterName(); // <1> + String clusterUuid = response.getClusterUuid(); // <2> + String nodeName = response.getNodeName(); // <3> + Version version = response.getVersion(); // <4> + Build build = response.getBuild(); // <5> + //end::main-response + assertNotNull(clusterName); + assertNotNull(clusterUuid); + assertNotNull(nodeName); + assertNotNull(version); + assertNotNull(build); + } + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/MigrationDocumentationIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/MigrationDocumentationIT.java new file mode 100644 index 0000000000000..0984e7d1c93fe --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/MigrationDocumentationIT.java @@ -0,0 +1,162 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client.documentation; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.delete.DeleteRequest; +import org.elasticsearch.action.delete.DeleteResponse; +import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.action.index.IndexRequestBuilder; +import org.elasticsearch.action.index.IndexResponse; +import org.elasticsearch.client.ESRestHighLevelClientTestCase; +import org.elasticsearch.client.Response; +import org.elasticsearch.client.RestClient; +import org.elasticsearch.client.RestHighLevelClient; +import org.apache.http.HttpEntity; +import org.apache.http.HttpStatus; +import org.apache.http.entity.ContentType; +import org.apache.http.nio.entity.NStringEntity; +import org.elasticsearch.cluster.health.ClusterHealthStatus; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentHelper; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.rest.RestStatus; + +import java.io.IOException; +import java.io.InputStream; +import java.util.Map; + +import static java.util.Collections.emptyMap; +import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS; +import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS; + +/** + * This class is used to generate the documentation for the + * docs/java-rest/high-level/migration.asciidoc page. + * + * You need to wrap your code between two tags like: + * // tag::example[] + * // end::example[] + * + * Where example is your tag name. + * + * Then in the documentation, you can extract what is between tag and end tags with + * ["source","java",subs="attributes,callouts,macros"] + * -------------------------------------------------- + * include-tagged::{doc-tests}/MigrationDocumentationIT.java[example] + * -------------------------------------------------- + */ +public class MigrationDocumentationIT extends ESRestHighLevelClientTestCase { + + public void testCreateIndex() throws IOException { + RestClient restClient = client(); + { + //tag::migration-create-inded + Settings indexSettings = Settings.builder() // <1> + .put(SETTING_NUMBER_OF_SHARDS, 1) + .put(SETTING_NUMBER_OF_REPLICAS, 0) + .build(); + + String payload = XContentFactory.jsonBuilder() // <2> + .startObject() + .startObject("settings") // <3> + .value(indexSettings) + .endObject() + .startObject("mappings") // <4> + .startObject("doc") + .startObject("properties") + .startObject("time") + .field("type", "date") + .endObject() + .endObject() + .endObject() + .endObject() + .endObject().string(); + + HttpEntity entity = new NStringEntity(payload, ContentType.APPLICATION_JSON); // <5> + + Response response = restClient.performRequest("PUT", "my-index", emptyMap(), entity); // <6> + if (response.getStatusLine().getStatusCode() != HttpStatus.SC_OK) { + // <7> + } + //end::migration-create-inded + assertEquals(200, response.getStatusLine().getStatusCode()); + } + } + + public void testClusterHealth() throws IOException { + RestClient restClient = client(); + { + //tag::migration-cluster-health + Response response = restClient.performRequest("GET", "/_cluster/health"); // <1> + + ClusterHealthStatus healthStatus; + try (InputStream is = response.getEntity().getContent()) { // <2> + Map map = XContentHelper.convertToMap(XContentType.JSON.xContent(), is, true); // <3> + healthStatus = ClusterHealthStatus.fromString((String) 
map.get("status")); // <4> + } + + if (healthStatus == ClusterHealthStatus.GREEN) { + // <5> + } + //end::migration-cluster-health + assertSame(ClusterHealthStatus.GREEN, healthStatus); + } + } + + public void testRequests() throws IOException { + RestHighLevelClient client = highLevelClient(); + { + //tag::migration-request-ctor + IndexRequest request = new IndexRequest("index", "doc", "id"); // <1> + request.source("{\"field\":\"value\"}", XContentType.JSON); + //end::migration-request-ctor + + //tag::migration-request-ctor-execution + IndexResponse response = client.index(request); + //end::migration-request-ctor-execution + assertEquals(RestStatus.CREATED, response.status()); + } + { + //tag::migration-request-sync-execution + DeleteRequest request = new DeleteRequest("index", "doc", "id"); + DeleteResponse response = client.delete(request); // <1> + //end::migration-request-sync-execution + assertEquals(RestStatus.OK, response.status()); + } + { + //tag::migration-request-async-execution + DeleteRequest request = new DeleteRequest("index", "doc", "id"); // <1> + client.deleteAsync(request, new ActionListener() { // <2> + @Override + public void onResponse(DeleteResponse deleteResponse) { + // <3> + } + + @Override + public void onFailure(Exception e) { + // <4> + } + }); + //end::migration-request-async-execution + } + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/QueryDSLDocumentationTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/QueryDSLDocumentationTests.java new file mode 100644 index 0000000000000..2459d2e587297 --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/QueryDSLDocumentationTests.java @@ -0,0 +1,452 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client.documentation; + +import org.apache.lucene.search.join.ScoreMode; +import org.elasticsearch.common.geo.GeoPoint; +import org.elasticsearch.common.geo.ShapeRelation; +import org.elasticsearch.common.geo.builders.CoordinatesBuilder; +import org.elasticsearch.common.geo.builders.ShapeBuilders; +import org.elasticsearch.common.unit.DistanceUnit; +import org.elasticsearch.index.query.GeoShapeQueryBuilder; +import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder; +import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder.FilterFunctionBuilder; +import org.elasticsearch.join.query.JoinQueryBuilders; +import org.elasticsearch.script.Script; +import org.elasticsearch.script.ScriptType; +import org.elasticsearch.test.ESTestCase; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +import static java.util.Collections.singletonMap; +import static org.elasticsearch.index.query.QueryBuilders.boolQuery; +import static org.elasticsearch.index.query.QueryBuilders.boostingQuery; +import static org.elasticsearch.index.query.QueryBuilders.commonTermsQuery; +import static org.elasticsearch.index.query.QueryBuilders.constantScoreQuery; +import static org.elasticsearch.index.query.QueryBuilders.disMaxQuery; +import static org.elasticsearch.index.query.QueryBuilders.existsQuery; +import static org.elasticsearch.index.query.QueryBuilders.functionScoreQuery; +import static org.elasticsearch.index.query.QueryBuilders.fuzzyQuery; +import static org.elasticsearch.index.query.QueryBuilders.geoBoundingBoxQuery; +import static org.elasticsearch.index.query.QueryBuilders.geoDistanceQuery; +import static org.elasticsearch.index.query.QueryBuilders.geoPolygonQuery; +import static org.elasticsearch.index.query.QueryBuilders.geoShapeQuery; +import static org.elasticsearch.index.query.QueryBuilders.idsQuery; +import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery; +import static org.elasticsearch.index.query.QueryBuilders.matchQuery; +import static org.elasticsearch.index.query.QueryBuilders.moreLikeThisQuery; +import static org.elasticsearch.index.query.QueryBuilders.multiMatchQuery; +import static org.elasticsearch.index.query.QueryBuilders.nestedQuery; +import static org.elasticsearch.index.query.QueryBuilders.prefixQuery; +import static org.elasticsearch.index.query.QueryBuilders.queryStringQuery; +import static org.elasticsearch.index.query.QueryBuilders.rangeQuery; +import static org.elasticsearch.index.query.QueryBuilders.regexpQuery; +import static org.elasticsearch.index.query.QueryBuilders.scriptQuery; +import static org.elasticsearch.index.query.QueryBuilders.simpleQueryStringQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanContainingQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanFirstQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanMultiTermQueryBuilder; +import static org.elasticsearch.index.query.QueryBuilders.spanNearQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanNotQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanOrQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanTermQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanWithinQuery; +import static org.elasticsearch.index.query.QueryBuilders.termQuery; +import static org.elasticsearch.index.query.QueryBuilders.termsQuery; +import static 
org.elasticsearch.index.query.QueryBuilders.typeQuery; +import static org.elasticsearch.index.query.QueryBuilders.wildcardQuery; +import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.exponentialDecayFunction; +import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.randomFunction; + +/** + * Examples of using the transport client that are imported into the transport client documentation. + * There are no assertions here because we're mostly concerned with making sure that the examples + * compile and don't throw weird runtime exceptions. Assertions and example data would be nice, but + * that is secondary. + */ +public class QueryDSLDocumentationTests extends ESTestCase { + public void testBool() { + // tag::bool + boolQuery() + .must(termQuery("content", "test1")) // <1> + .must(termQuery("content", "test4")) // <1> + .mustNot(termQuery("content", "test2")) // <2> + .should(termQuery("content", "test3")) // <3> + .filter(termQuery("content", "test5")); // <4> + // end::bool + } + + public void testBoosting() { + // tag::boosting + boostingQuery( + termQuery("name","kimchy"), // <1> + termQuery("name","dadoonet")) // <2> + .negativeBoost(0.2f); // <3> + // end::boosting + } + + public void testCommonTerms() { + // tag::common_terms + commonTermsQuery("name", // <1> + "kimchy"); // <2> + // end::common_terms + } + + public void testConstantScore() { + // tag::constant_score + constantScoreQuery( + termQuery("name","kimchy")) // <1> + .boost(2.0f); // <2> + // end::constant_score + } + + public void testDisMax() { + // tag::dis_max + disMaxQuery() + .add(termQuery("name", "kimchy")) // <1> + .add(termQuery("name", "elasticsearch")) // <2> + .boost(1.2f) // <3> + .tieBreaker(0.7f); // <4> + // end::dis_max + } + + public void testExists() { + // tag::exists + existsQuery("name"); // <1> + // end::exists + } + + public void testFunctionScore() { + // tag::function_score + FilterFunctionBuilder[] functions = { + new FunctionScoreQueryBuilder.FilterFunctionBuilder( + matchQuery("name", "kimchy"), // <1> + randomFunction("ABCDEF")), // <2> + new FunctionScoreQueryBuilder.FilterFunctionBuilder( + exponentialDecayFunction("age", 0L, 1L)) // <3> + }; + functionScoreQuery(functions); + // end::function_score + } + + public void testFuzzy() { + // tag::fuzzy + fuzzyQuery( + "name", // <1> + "kimchy"); // <2> + // end::fuzzy + } + + public void testGeoBoundingBox() { + // tag::geo_bounding_box + geoBoundingBoxQuery("pin.location") // <1> + .setCorners(40.73, -74.1, // <2> + 40.717, -73.99); // <3> + // end::geo_bounding_box + } + + public void testGeoDistance() { + // tag::geo_distance + geoDistanceQuery("pin.location") // <1> + .point(40, -70) // <2> + .distance(200, DistanceUnit.KILOMETERS); // <3> + // end::geo_distance + } + + public void testGeoPolygon() { + // tag::geo_polygon + List points = new ArrayList(); // <1> + points.add(new GeoPoint(40, -70)); + points.add(new GeoPoint(30, -80)); + points.add(new GeoPoint(20, -90)); + geoPolygonQuery("pin.location", points); // <2> + // end::geo_polygon + } + + public void testGeoShape() throws IOException { + { + // tag::geo_shape + GeoShapeQueryBuilder qb = geoShapeQuery( + "pin.location", // <1> + ShapeBuilders.newMultiPoint( // <2> + new CoordinatesBuilder() + .coordinate(0, 0) + .coordinate(0, 10) + .coordinate(10, 10) + .coordinate(10, 0) + .coordinate(0, 0) + .build())); + qb.relation(ShapeRelation.WITHIN); // <3> + // end::geo_shape + } + + { + // tag::indexed_geo_shape + // Using pre-indexed 
shapes + GeoShapeQueryBuilder qb = geoShapeQuery( + "pin.location", // <1> + "DEU", // <2> + "countries"); // <3> + qb.relation(ShapeRelation.WITHIN) // <4> + .indexedShapeIndex("shapes") // <5> + .indexedShapePath("location"); // <6> + // end::indexed_geo_shape + } + } + + public void testHasChild() { + // tag::has_child + JoinQueryBuilders.hasChildQuery( + "blog_tag", // <1> + termQuery("tag","something"), // <2> + ScoreMode.None); // <3> + // end::has_child + } + + public void testHasParent() { + // tag::has_parent + JoinQueryBuilders.hasParentQuery( + "blog", // <1> + termQuery("tag","something"), // <2> + false); // <3> + // end::has_parent + } + + public void testIds() { + // tag::ids + idsQuery("my_type", "type2") + .addIds("1", "4", "100"); + + idsQuery() // <1> + .addIds("1", "4", "100"); + // end::ids + } + + public void testMatchAll() { + // tag::match_all + matchAllQuery(); + // end::match_all + } + + public void testMatch() { + // tag::match + matchQuery( + "name", // <1> + "kimchy elasticsearch"); // <2> + // end::match + } + + public void testMoreLikeThis() { + // tag::more_like_this + String[] fields = {"name.first", "name.last"}; // <1> + String[] texts = {"text like this one"}; // <2> + + moreLikeThisQuery(fields, texts, null) + .minTermFreq(1) // <3> + .maxQueryTerms(12); // <4> + // end::more_like_this + } + + public void testMultiMatch() { + // tag::multi_match + multiMatchQuery( + "kimchy elasticsearch", // <1> + "user", "message"); // <2> + // end::multi_match + } + + public void testNested() { + // tag::nested + nestedQuery( + "obj1", // <1> + boolQuery() // <2> + .must(matchQuery("obj1.name", "blue")) + .must(rangeQuery("obj1.count").gt(5)), + ScoreMode.Avg); // <3> + // end::nested + } + + public void testPrefix() { + // tag::prefix + prefixQuery( + "brand", // <1> + "heine"); // <2> + // end::prefix + } + + public void testQueryString() { + // tag::query_string + queryStringQuery("+kimchy -elasticsearch"); + // end::query_string + } + + public void testRange() { + // tag::range + rangeQuery("price") // <1> + .from(5) // <2> + .to(10) // <3> + .includeLower(true) // <4> + .includeUpper(false); // <5> + // end::range + + // tag::range_simplified + // A simplified form using gte, gt, lt or lte + rangeQuery("age") // <1> + .gte("10") // <2> + .lt("20"); // <3> + // end::range_simplified + } + + public void testRegExp() { + // tag::regexp + regexpQuery( + "name.first", // <1> + "s.*y"); // <2> + // end::regexp + } + + public void testScript() { + // tag::script_inline + scriptQuery( + new Script("doc['num1'].value > 1") // <1> + ); + // end::script_inline + + // tag::script_file + Map parameters = new HashMap<>(); + parameters.put("param1", 5); + scriptQuery(new Script( + ScriptType.STORED, // <1> + "painless", // <2> + "myscript", // <3> + singletonMap("param1", 5))); // <4> + // end::script_file + } + + public void testSimpleQueryString() { + // tag::simple_query_string + simpleQueryStringQuery("+kimchy -elasticsearch"); + // end::simple_query_string + } + + public void testSpanContaining() { + // tag::span_containing + spanContainingQuery( + spanNearQuery(spanTermQuery("field1","bar"), 5) // <1> + .addClause(spanTermQuery("field1","baz")) + .inOrder(true), + spanTermQuery("field1","foo")); // <2> + // end::span_containing + } + + public void testSpanFirst() { + // tag::span_first + spanFirstQuery( + spanTermQuery("user", "kimchy"), // <1> + 3 // <2> + ); + // end::span_first + } + + public void testSpanMultiTerm() { + // tag::span_multi + spanMultiTermQueryBuilder( 
+ prefixQuery("user", "ki")); // <1> + // end::span_multi + } + + public void testSpanNear() { + // tag::span_near + spanNearQuery( + spanTermQuery("field","value1"), // <1> + 12) // <2> + .addClause(spanTermQuery("field","value2")) // <1> + .addClause(spanTermQuery("field","value3")) // <1> + .inOrder(false); // <3> + // end::span_near + } + + public void testSpanNot() { + // tag::span_not + spanNotQuery( + spanTermQuery("field","value1"), // <1> + spanTermQuery("field","value2")); // <2> + // end::span_not + } + + public void testSpanOr() { + // tag::span_or + spanOrQuery(spanTermQuery("field","value1")) // <1> + .addClause(spanTermQuery("field","value2")) // <1> + .addClause(spanTermQuery("field","value3")); // <1> + // end::span_or + } + + public void testSpanTerm() { + // tag::span_term + spanTermQuery( + "user", // <1> + "kimchy"); // <2> + // end::span_term + } + + public void testSpanWithin() { + // tag::span_within + spanWithinQuery( + spanNearQuery(spanTermQuery("field1", "bar"), 5) // <1> + .addClause(spanTermQuery("field1", "baz")) + .inOrder(true), + spanTermQuery("field1", "foo")); // <2> + // end::span_within + } + + public void testTerm() { + // tag::term + termQuery( + "name", // <1> + "kimchy"); // <2> + // end::term + } + + public void testTerms() { + // tag::terms + termsQuery("tags", // <1> + "blue", "pill"); // <2> + // end::terms + } + + public void testType() { + // tag::type + typeQuery("my_type"); // <1> + // end::type + } + + public void testWildcard() { + // tag::wildcard + wildcardQuery( + "user", // <1> + "k?mch*"); // <2> + // end::wildcard + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/SearchDocumentationIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/SearchDocumentationIT.java new file mode 100644 index 0000000000000..8c900275a50d5 --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/SearchDocumentationIT.java @@ -0,0 +1,674 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client.documentation; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.bulk.BulkRequest; +import org.elasticsearch.action.bulk.BulkResponse; +import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.action.index.IndexResponse; +import org.elasticsearch.action.search.ClearScrollRequest; +import org.elasticsearch.action.search.ClearScrollResponse; +import org.elasticsearch.action.search.SearchRequest; +import org.elasticsearch.action.search.SearchResponse; +import org.elasticsearch.action.search.SearchScrollRequest; +import org.elasticsearch.action.search.ShardSearchFailure; +import org.elasticsearch.action.support.IndicesOptions; +import org.elasticsearch.action.support.WriteRequest; +import org.elasticsearch.client.ESRestHighLevelClientTestCase; +import org.elasticsearch.client.RestHighLevelClient; +import org.elasticsearch.common.text.Text; +import org.elasticsearch.common.unit.Fuzziness; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.index.query.MatchQueryBuilder; +import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.index.query.QueryBuilders; +import org.elasticsearch.rest.RestStatus; +import org.elasticsearch.search.Scroll; +import org.elasticsearch.search.SearchHit; +import org.elasticsearch.search.SearchHits; +import org.elasticsearch.search.aggregations.Aggregation; +import org.elasticsearch.search.aggregations.AggregationBuilders; +import org.elasticsearch.search.aggregations.Aggregations; +import org.elasticsearch.search.aggregations.bucket.range.Range; +import org.elasticsearch.search.aggregations.bucket.terms.Terms; +import org.elasticsearch.search.aggregations.bucket.terms.Terms.Bucket; +import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; +import org.elasticsearch.search.aggregations.metrics.avg.Avg; +import org.elasticsearch.search.builder.SearchSourceBuilder; +import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; +import org.elasticsearch.search.fetch.subphase.highlight.HighlightField; +import org.elasticsearch.search.profile.ProfileResult; +import org.elasticsearch.search.profile.ProfileShardResult; +import org.elasticsearch.search.profile.aggregation.AggregationProfileShardResult; +import org.elasticsearch.search.profile.query.CollectorResult; +import org.elasticsearch.search.profile.query.QueryProfileShardResult; +import org.elasticsearch.search.sort.FieldSortBuilder; +import org.elasticsearch.search.sort.ScoreSortBuilder; +import org.elasticsearch.search.sort.SortOrder; +import org.elasticsearch.search.suggest.Suggest; +import org.elasticsearch.search.suggest.SuggestBuilder; +import org.elasticsearch.search.suggest.SuggestBuilders; +import org.elasticsearch.search.suggest.SuggestionBuilder; +import org.elasticsearch.search.suggest.term.TermSuggestion; + +import java.io.IOException; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.concurrent.TimeUnit; + +import static org.elasticsearch.index.query.QueryBuilders.matchQuery; +import static org.hamcrest.Matchers.containsString; +import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.greaterThan; + +/** + * This class is used to generate the Java High Level REST Client Search API documentation. + *
+ * You need to wrap your code between two tags like: + * // tag::example[] + * // end::example[] + *
+ * Where example is your tag name. + *
+ * Then in the documentation, you can extract what is between tag and end tags with + * ["source","java",subs="attributes,callouts,macros"] + * -------------------------------------------------- + * include-tagged::{doc-tests}/SearchDocumentationIT.java[example] + * -------------------------------------------------- + */ +public class SearchDocumentationIT extends ESRestHighLevelClientTestCase { + + @SuppressWarnings({ "unused", "unchecked" }) + public void testSearch() throws IOException { + RestHighLevelClient client = highLevelClient(); + { + BulkRequest request = new BulkRequest(); + request.add(new IndexRequest("posts", "doc", "1") + .source(XContentType.JSON, "title", "In which order are my Elasticsearch queries executed?", "user", + Arrays.asList("kimchy", "luca"), "innerObject", Collections.singletonMap("key", "value"))); + request.add(new IndexRequest("posts", "doc", "2") + .source(XContentType.JSON, "title", "Current status and upcoming changes in Elasticsearch", "user", + Arrays.asList("kimchy", "christoph"), "innerObject", Collections.singletonMap("key", "value"))); + request.add(new IndexRequest("posts", "doc", "3") + .source(XContentType.JSON, "title", "The Future of Federated Search in Elasticsearch", "user", + Arrays.asList("kimchy", "tanguy"), "innerObject", Collections.singletonMap("key", "value"))); + request.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE); + BulkResponse bulkResponse = client.bulk(request); + assertSame(RestStatus.OK, bulkResponse.status()); + assertFalse(bulkResponse.hasFailures()); + } + { + // tag::search-request-basic + SearchRequest searchRequest = new SearchRequest(); // <1> + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); // <2> + searchSourceBuilder.query(QueryBuilders.matchAllQuery()); // <3> + // end::search-request-basic + } + { + // tag::search-request-indices-types + SearchRequest searchRequest = new SearchRequest("posts"); // <1> + searchRequest.types("doc"); // <2> + // end::search-request-indices-types + // tag::search-request-routing + searchRequest.routing("routing"); // <1> + // end::search-request-routing + // tag::search-request-indicesOptions + searchRequest.indicesOptions(IndicesOptions.lenientExpandOpen()); // <1> + // end::search-request-indicesOptions + // tag::search-request-preference + searchRequest.preference("_local"); // <1> + // end::search-request-preference + assertNotNull(client.search(searchRequest)); + } + { + // tag::search-source-basics + SearchSourceBuilder sourceBuilder = new SearchSourceBuilder(); // <1> + sourceBuilder.query(QueryBuilders.termQuery("user", "kimchy")); // <2> + sourceBuilder.from(0); // <3> + sourceBuilder.size(5); // <4> + sourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS)); // <5> + // end::search-source-basics + + // tag::search-source-sorting + sourceBuilder.sort(new ScoreSortBuilder().order(SortOrder.DESC)); // <1> + sourceBuilder.sort(new FieldSortBuilder("_uid").order(SortOrder.ASC)); // <2> + // end::search-source-sorting + + // tag::search-source-filtering-off + sourceBuilder.fetchSource(false); + // end::search-source-filtering-off + // tag::search-source-filtering-includes + String[] includeFields = new String[] {"title", "user", "innerObject.*"}; + String[] excludeFields = new String[] {"_type"}; + sourceBuilder.fetchSource(includeFields, excludeFields); + // end::search-source-filtering-includes + sourceBuilder.fetchSource(true); + + // tag::search-source-setter + SearchRequest searchRequest = new SearchRequest(); + 
searchRequest.source(sourceBuilder); + // end::search-source-setter + + // tag::search-execute + SearchResponse searchResponse = client.search(searchRequest); + // end::search-execute + + // tag::search-execute-async + client.searchAsync(searchRequest, new ActionListener() { + @Override + public void onResponse(SearchResponse searchResponse) { + // <1> + } + + @Override + public void onFailure(Exception e) { + // <2> + } + }); + // end::search-execute-async + + // tag::search-response-1 + RestStatus status = searchResponse.status(); + TimeValue took = searchResponse.getTook(); + Boolean terminatedEarly = searchResponse.isTerminatedEarly(); + boolean timedOut = searchResponse.isTimedOut(); + // end::search-response-1 + + // tag::search-response-2 + int totalShards = searchResponse.getTotalShards(); + int successfulShards = searchResponse.getSuccessfulShards(); + int failedShards = searchResponse.getFailedShards(); + for (ShardSearchFailure failure : searchResponse.getShardFailures()) { + // failures should be handled here + } + // end::search-response-2 + assertNotNull(searchResponse); + + // tag::search-hits-get + SearchHits hits = searchResponse.getHits(); + // end::search-hits-get + // tag::search-hits-info + long totalHits = hits.getTotalHits(); + float maxScore = hits.getMaxScore(); + // end::search-hits-info + // tag::search-hits-singleHit + SearchHit[] searchHits = hits.getHits(); + for (SearchHit hit : searchHits) { + // do something with the SearchHit + } + // end::search-hits-singleHit + for (SearchHit hit : searchHits) { + // tag::search-hits-singleHit-properties + String index = hit.getIndex(); + String type = hit.getType(); + String id = hit.getId(); + float score = hit.getScore(); + // end::search-hits-singleHit-properties + // tag::search-hits-singleHit-source + String sourceAsString = hit.getSourceAsString(); + Map sourceAsMap = hit.getSourceAsMap(); + String documentTitle = (String) sourceAsMap.get("title"); + List users = (List) sourceAsMap.get("user"); + Map innerObject = (Map) sourceAsMap.get("innerObject"); + // end::search-hits-singleHit-source + } + assertEquals(3, totalHits); + assertNotNull(hits.getHits()[0].getSourceAsString()); + assertNotNull(hits.getHits()[0].getSourceAsMap().get("title")); + assertNotNull(hits.getHits()[0].getSourceAsMap().get("user")); + assertNotNull(hits.getHits()[0].getSourceAsMap().get("innerObject")); + } + } + + @SuppressWarnings("unused") + public void testBuildingSearchQueries() { + RestHighLevelClient client = highLevelClient(); + { + // tag::search-query-builder-ctor + MatchQueryBuilder matchQueryBuilder = new MatchQueryBuilder("user", "kimchy"); // <1> + // end::search-query-builder-ctor + // tag::search-query-builder-options + matchQueryBuilder.fuzziness(Fuzziness.AUTO); // <1> + matchQueryBuilder.prefixLength(3); // <2> + matchQueryBuilder.maxExpansions(10); // <3> + // end::search-query-builder-options + } + { + // tag::search-query-builders + QueryBuilder matchQueryBuilder = QueryBuilders.matchQuery("user", "kimchy") + .fuzziness(Fuzziness.AUTO) + .prefixLength(3) + .maxExpansions(10); + // end::search-query-builders + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + // tag::search-query-setter + searchSourceBuilder.query(matchQueryBuilder); + // end::search-query-setter + } + } + + @SuppressWarnings({ "unused" }) + public void testSearchRequestAggregations() throws IOException { + RestHighLevelClient client = highLevelClient(); + { + BulkRequest request = new BulkRequest(); + request.add(new 
IndexRequest("posts", "doc", "1") + .source(XContentType.JSON, "company", "Elastic", "age", 20)); + request.add(new IndexRequest("posts", "doc", "2") + .source(XContentType.JSON, "company", "Elastic", "age", 30)); + request.add(new IndexRequest("posts", "doc", "3") + .source(XContentType.JSON, "company", "Elastic", "age", 40)); + request.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE); + BulkResponse bulkResponse = client.bulk(request); + assertSame(RestStatus.OK, bulkResponse.status()); + assertFalse(bulkResponse.hasFailures()); + } + { + SearchRequest searchRequest = new SearchRequest(); + // tag::search-request-aggregations + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + TermsAggregationBuilder aggregation = AggregationBuilders.terms("by_company") + .field("company.keyword"); + aggregation.subAggregation(AggregationBuilders.avg("average_age") + .field("age")); + searchSourceBuilder.aggregation(aggregation); + // end::search-request-aggregations + searchSourceBuilder.query(QueryBuilders.matchAllQuery()); + searchRequest.source(searchSourceBuilder); + SearchResponse searchResponse = client.search(searchRequest); + { + // tag::search-request-aggregations-get + Aggregations aggregations = searchResponse.getAggregations(); + Terms byCompanyAggregation = aggregations.get("by_company"); // <1> + Bucket elasticBucket = byCompanyAggregation.getBucketByKey("Elastic"); // <2> + Avg averageAge = elasticBucket.getAggregations().get("average_age"); // <3> + double avg = averageAge.getValue(); + // end::search-request-aggregations-get + + try { + // tag::search-request-aggregations-get-wrongCast + Range range = aggregations.get("by_company"); // <1> + // end::search-request-aggregations-get-wrongCast + } catch (ClassCastException ex) { + assertEquals("org.elasticsearch.search.aggregations.bucket.terms.ParsedStringTerms" + + " cannot be cast to org.elasticsearch.search.aggregations.bucket.range.Range", ex.getMessage()); + } + assertEquals(3, elasticBucket.getDocCount()); + assertEquals(30, avg, 0.0); + } + Aggregations aggregations = searchResponse.getAggregations(); + { + // tag::search-request-aggregations-asMap + Map aggregationMap = aggregations.getAsMap(); + Terms companyAggregation = (Terms) aggregationMap.get("by_company"); + // end::search-request-aggregations-asMap + } + { + // tag::search-request-aggregations-asList + List aggregationList = aggregations.asList(); + // end::search-request-aggregations-asList + } + { + // tag::search-request-aggregations-iterator + for (Aggregation agg : aggregations) { + String type = agg.getType(); + if (type.equals(TermsAggregationBuilder.NAME)) { + Bucket elasticBucket = ((Terms) agg).getBucketByKey("Elastic"); + long numberOfDocs = elasticBucket.getDocCount(); + } + } + // end::search-request-aggregations-iterator + } + } + } + + @SuppressWarnings({ "unused", "rawtypes" }) + public void testSearchRequestSuggestions() throws IOException { + RestHighLevelClient client = highLevelClient(); + { + BulkRequest request = new BulkRequest(); + request.add(new IndexRequest("posts", "doc", "1").source(XContentType.JSON, "user", "kimchy")); + request.add(new IndexRequest("posts", "doc", "2").source(XContentType.JSON, "user", "javanna")); + request.add(new IndexRequest("posts", "doc", "3").source(XContentType.JSON, "user", "tlrx")); + request.add(new IndexRequest("posts", "doc", "4").source(XContentType.JSON, "user", "cbuescher")); + request.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE); + BulkResponse bulkResponse = 
client.bulk(request); + assertSame(RestStatus.OK, bulkResponse.status()); + assertFalse(bulkResponse.hasFailures()); + } + { + SearchRequest searchRequest = new SearchRequest(); + // tag::search-request-suggestion + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + SuggestionBuilder termSuggestionBuilder = + SuggestBuilders.termSuggestion("user").text("kmichy"); // <1> + SuggestBuilder suggestBuilder = new SuggestBuilder(); + suggestBuilder.addSuggestion("suggest_user", termSuggestionBuilder); // <2> + searchSourceBuilder.suggest(suggestBuilder); + // end::search-request-suggestion + searchRequest.source(searchSourceBuilder); + SearchResponse searchResponse = client.search(searchRequest); + { + // tag::search-request-suggestion-get + Suggest suggest = searchResponse.getSuggest(); // <1> + TermSuggestion termSuggestion = suggest.getSuggestion("suggest_user"); // <2> + for (TermSuggestion.Entry entry : termSuggestion.getEntries()) { // <3> + for (TermSuggestion.Entry.Option option : entry) { // <4> + String suggestText = option.getText().string(); + } + } + // end::search-request-suggestion-get + assertEquals(1, termSuggestion.getEntries().size()); + assertEquals(1, termSuggestion.getEntries().get(0).getOptions().size()); + assertEquals("kimchy", termSuggestion.getEntries().get(0).getOptions().get(0).getText().string()); + } + } + } + + public void testSearchRequestHighlighting() throws IOException { + RestHighLevelClient client = highLevelClient(); + { + BulkRequest request = new BulkRequest(); + request.add(new IndexRequest("posts", "doc", "1") + .source(XContentType.JSON, "title", "In which order are my Elasticsearch queries executed?", "user", + Arrays.asList("kimchy", "luca"), "innerObject", Collections.singletonMap("key", "value"))); + request.add(new IndexRequest("posts", "doc", "2") + .source(XContentType.JSON, "title", "Current status and upcoming changes in Elasticsearch", "user", + Arrays.asList("kimchy", "christoph"), "innerObject", Collections.singletonMap("key", "value"))); + request.add(new IndexRequest("posts", "doc", "3") + .source(XContentType.JSON, "title", "The Future of Federated Search in Elasticsearch", "user", + Arrays.asList("kimchy", "tanguy"), "innerObject", Collections.singletonMap("key", "value"))); + request.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE); + BulkResponse bulkResponse = client.bulk(request); + assertSame(RestStatus.OK, bulkResponse.status()); + assertFalse(bulkResponse.hasFailures()); + } + { + SearchRequest searchRequest = new SearchRequest(); + // tag::search-request-highlighting + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + HighlightBuilder highlightBuilder = new HighlightBuilder(); // <1> + HighlightBuilder.Field highlightTitle = + new HighlightBuilder.Field("title"); // <2> + highlightTitle.highlighterType("unified"); // <3> + highlightBuilder.field(highlightTitle); // <4> + HighlightBuilder.Field highlightUser = new HighlightBuilder.Field("user"); + highlightBuilder.field(highlightUser); + searchSourceBuilder.highlighter(highlightBuilder); + // end::search-request-highlighting + searchSourceBuilder.query(QueryBuilders.boolQuery() + .should(matchQuery("title", "Elasticsearch")) + .should(matchQuery("user", "kimchy"))); + searchRequest.source(searchSourceBuilder); + SearchResponse searchResponse = client.search(searchRequest); + { + // tag::search-request-highlighting-get + SearchHits hits = searchResponse.getHits(); + for (SearchHit hit : hits.getHits()) { + Map highlightFields = 
hit.getHighlightFields(); + HighlightField highlight = highlightFields.get("title"); // <1> + Text[] fragments = highlight.fragments(); // <2> + String fragmentString = fragments[0].string(); + } + // end::search-request-highlighting-get + hits = searchResponse.getHits(); + for (SearchHit hit : hits.getHits()) { + Map highlightFields = hit.getHighlightFields(); + HighlightField highlight = highlightFields.get("title"); + Text[] fragments = highlight.fragments(); + assertEquals(1, fragments.length); + assertThat(fragments[0].string(), containsString("Elasticsearch")); + highlight = highlightFields.get("user"); + fragments = highlight.fragments(); + assertEquals(1, fragments.length); + assertThat(fragments[0].string(), containsString("kimchy")); + } + } + + } + } + + public void testSearchRequestProfiling() throws IOException { + RestHighLevelClient client = highLevelClient(); + { + IndexRequest request = new IndexRequest("posts", "doc", "1") + .source(XContentType.JSON, "tags", "elasticsearch", "comments", 123); + request.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL); + IndexResponse indexResponse = client.index(request); + assertSame(RestStatus.CREATED, indexResponse.status()); + } + { + SearchRequest searchRequest = new SearchRequest(); + // tag::search-request-profiling + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + searchSourceBuilder.profile(true); + // end::search-request-profiling + searchSourceBuilder.query(QueryBuilders.termQuery("tags", "elasticsearch")); + searchSourceBuilder.aggregation(AggregationBuilders.histogram("by_comments").field("comments").interval(100)); + searchRequest.source(searchSourceBuilder); + + SearchResponse searchResponse = client.search(searchRequest); + // tag::search-request-profiling-get + Map profilingResults = searchResponse.getProfileResults(); // <1> + for (Map.Entry profilingResult : profilingResults.entrySet()) { // <2> + String key = profilingResult.getKey(); // <3> + ProfileShardResult profileShardResult = profilingResult.getValue(); // <4> + } + // end::search-request-profiling-get + + ProfileShardResult profileShardResult = profilingResults.values().iterator().next(); + assertNotNull(profileShardResult); + + // tag::search-request-profiling-queries + List queryProfileShardResults = profileShardResult.getQueryProfileResults(); // <1> + for (QueryProfileShardResult queryProfileResult : queryProfileShardResults) { // <2> + + } + // end::search-request-profiling-queries + assertThat(queryProfileShardResults.size(), equalTo(1)); + + for (QueryProfileShardResult queryProfileResult : queryProfileShardResults) { + // tag::search-request-profiling-queries-results + for (ProfileResult profileResult : queryProfileResult.getQueryResults()) { // <1> + String queryName = profileResult.getQueryName(); // <2> + long queryTimeInMillis = profileResult.getTime(); // <3> + List profiledChildren = profileResult.getProfiledChildren(); // <4> + } + // end::search-request-profiling-queries-results + + // tag::search-request-profiling-queries-collectors + CollectorResult collectorResult = queryProfileResult.getCollectorResult(); // <1> + String collectorName = collectorResult.getName(); // <2> + Long collectorTimeInMillis = collectorResult.getTime(); // <3> + List profiledChildren = collectorResult.getProfiledChildren(); // <4> + // end::search-request-profiling-queries-collectors + } + + // tag::search-request-profiling-aggs + AggregationProfileShardResult aggsProfileResults = profileShardResult.getAggregationProfileResults(); // <1> 
+ for (ProfileResult profileResult : aggsProfileResults.getProfileResults()) { // <2> + String aggName = profileResult.getQueryName(); // <3> + long aggTimeInMillis = profileResult.getTime(); // <4> + List profiledChildren = profileResult.getProfiledChildren(); // <5> + } + // end::search-request-profiling-aggs + assertThat(aggsProfileResults.getProfileResults().size(), equalTo(1)); + } + } + + public void testScroll() throws IOException { + RestHighLevelClient client = highLevelClient(); + { + BulkRequest request = new BulkRequest(); + request.add(new IndexRequest("posts", "doc", "1") + .source(XContentType.JSON, "title", "In which order are my Elasticsearch queries executed?")); + request.add(new IndexRequest("posts", "doc", "2") + .source(XContentType.JSON, "title", "Current status and upcoming changes in Elasticsearch")); + request.add(new IndexRequest("posts", "doc", "3") + .source(XContentType.JSON, "title", "The Future of Federated Search in Elasticsearch")); + request.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE); + BulkResponse bulkResponse = client.bulk(request); + assertSame(RestStatus.OK, bulkResponse.status()); + assertFalse(bulkResponse.hasFailures()); + } + { + int size = 1; + // tag::search-scroll-init + SearchRequest searchRequest = new SearchRequest("posts"); + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + searchSourceBuilder.query(matchQuery("title", "Elasticsearch")); + searchSourceBuilder.size(size); // <1> + searchRequest.source(searchSourceBuilder); + searchRequest.scroll(TimeValue.timeValueMinutes(1L)); // <2> + SearchResponse searchResponse = client.search(searchRequest); + String scrollId = searchResponse.getScrollId(); // <3> + SearchHits hits = searchResponse.getHits(); // <4> + // end::search-scroll-init + assertEquals(3, hits.getTotalHits()); + assertEquals(1, hits.getHits().length); + assertNotNull(scrollId); + + // tag::search-scroll2 + SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId); // <1> + scrollRequest.scroll(TimeValue.timeValueSeconds(30)); + SearchResponse searchScrollResponse = client.searchScroll(scrollRequest); + scrollId = searchScrollResponse.getScrollId(); // <2> + hits = searchScrollResponse.getHits(); // <3> + assertEquals(3, hits.getTotalHits()); + assertEquals(1, hits.getHits().length); + assertNotNull(scrollId); + // end::search-scroll2 + + ClearScrollRequest clearScrollRequest = new ClearScrollRequest(); + clearScrollRequest.addScrollId(scrollId); + ClearScrollResponse clearScrollResponse = client.clearScroll(clearScrollRequest); + assertTrue(clearScrollResponse.isSucceeded()); + } + { + SearchRequest searchRequest = new SearchRequest(); + searchRequest.scroll("60s"); + + SearchResponse initialSearchResponse = client.search(searchRequest); + String scrollId = initialSearchResponse.getScrollId(); + + SearchScrollRequest scrollRequest = new SearchScrollRequest(); + scrollRequest.scrollId(scrollId); + + // tag::scroll-request-arguments + scrollRequest.scroll(TimeValue.timeValueSeconds(60L)); // <1> + scrollRequest.scroll("60s"); // <2> + // end::scroll-request-arguments + + // tag::search-scroll-execute-sync + SearchResponse searchResponse = client.searchScroll(scrollRequest); + // end::search-scroll-execute-sync + + assertEquals(0, searchResponse.getFailedShards()); + assertEquals(3L, searchResponse.getHits().getTotalHits()); + + // tag::search-scroll-execute-async + client.searchScrollAsync(scrollRequest, new ActionListener() { + @Override + public void onResponse(SearchResponse 
searchResponse) { + // <1> + } + + @Override + public void onFailure(Exception e) { + // <2> + } + }); + // end::search-scroll-execute-async + + // tag::clear-scroll-request + ClearScrollRequest request = new ClearScrollRequest(); // <1> + request.addScrollId(scrollId); // <2> + // end::clear-scroll-request + + // tag::clear-scroll-add-scroll-id + request.addScrollId(scrollId); + // end::clear-scroll-add-scroll-id + + List scrollIds = Collections.singletonList(scrollId); + + // tag::clear-scroll-add-scroll-ids + request.setScrollIds(scrollIds); + // end::clear-scroll-add-scroll-ids + + // tag::clear-scroll-execute + ClearScrollResponse response = client.clearScroll(request); + // end::clear-scroll-execute + + // tag::clear-scroll-response + boolean success = response.isSucceeded(); // <1> + int released = response.getNumFreed(); // <2> + // end::clear-scroll-response + assertTrue(success); + assertThat(released, greaterThan(0)); + + // tag::clear-scroll-execute-async + client.clearScrollAsync(request, new ActionListener() { + @Override + public void onResponse(ClearScrollResponse clearScrollResponse) { + // <1> + } + + @Override + public void onFailure(Exception e) { + // <2> + } + }); + // end::clear-scroll-execute-async + } + { + // tag::search-scroll-example + final Scroll scroll = new Scroll(TimeValue.timeValueMinutes(1L)); + SearchRequest searchRequest = new SearchRequest("posts"); + searchRequest.scroll(scroll); + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + searchSourceBuilder.query(matchQuery("title", "Elasticsearch")); + searchRequest.source(searchSourceBuilder); + + SearchResponse searchResponse = client.search(searchRequest); // <1> + String scrollId = searchResponse.getScrollId(); + SearchHit[] searchHits = searchResponse.getHits().getHits(); + + while (searchHits != null && searchHits.length > 0) { // <2> + SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId); // <3> + scrollRequest.scroll(scroll); + searchResponse = client.searchScroll(scrollRequest); + scrollId = searchResponse.getScrollId(); + searchHits = searchResponse.getHits().getHits(); + // <4> + } + + ClearScrollRequest clearScrollRequest = new ClearScrollRequest(); // <5> + clearScrollRequest.addScrollId(scrollId); + ClearScrollResponse clearScrollResponse = client.clearScroll(clearScrollRequest); + boolean succeeded = clearScrollResponse.isSucceeded(); + // end::search-scroll-example + assertTrue(succeeded); + } + } +} diff --git a/client/rest/build.gradle b/client/rest/build.gradle index 1c92013da9743..4f52152b8a88a 100644 --- a/client/rest/build.gradle +++ b/client/rest/build.gradle @@ -18,7 +18,6 @@ */ import org.elasticsearch.gradle.precommit.PrecommitTasks -import org.gradle.api.JavaVersion apply plugin: 'elasticsearch.build' apply plugin: 'ru.vyarus.animalsniffer' @@ -29,6 +28,15 @@ targetCompatibility = JavaVersion.VERSION_1_7 sourceCompatibility = JavaVersion.VERSION_1_7 group = 'org.elasticsearch.client' +archivesBaseName = 'elasticsearch-rest-client' + +publishing { + publications { + nebula { + artifactId = archivesBaseName + } + } +} dependencies { compile "org.apache.httpcomponents:httpclient:${versions.httpclient}" @@ -60,11 +68,6 @@ forbiddenApisTest { signaturesURLs = [PrecommitTasks.getResource('/forbidden/jdk-signatures.txt')] } -dependencyLicenses { - mapping from: /http.*/, to: 'httpclient' - mapping from: /commons-.*/, to: 'commons' -} - //JarHell is part of es core, which we don't want to pull in jarHell.enabled=false diff --git 
a/client/rest/licenses/commons-LICENSE.txt b/client/rest/licenses/commons-codec-LICENSE.txt similarity index 100% rename from client/rest/licenses/commons-LICENSE.txt rename to client/rest/licenses/commons-codec-LICENSE.txt diff --git a/client/rest/licenses/commons-NOTICE.txt b/client/rest/licenses/commons-codec-NOTICE.txt similarity index 100% rename from client/rest/licenses/commons-NOTICE.txt rename to client/rest/licenses/commons-codec-NOTICE.txt diff --git a/distribution/licenses/securesm-LICENSE.txt b/client/rest/licenses/commons-logging-LICENSE.txt similarity index 100% rename from distribution/licenses/securesm-LICENSE.txt rename to client/rest/licenses/commons-logging-LICENSE.txt diff --git a/client/rest/licenses/commons-logging-NOTICE.txt b/client/rest/licenses/commons-logging-NOTICE.txt new file mode 100644 index 0000000000000..a37977d45a168 --- /dev/null +++ b/client/rest/licenses/commons-logging-NOTICE.txt @@ -0,0 +1,6 @@ +Apache Commons Logging +Copyright 2003-2013 The Apache Software Foundation + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). + diff --git a/client/rest/licenses/httpasyncclient-LICENSE.txt b/client/rest/licenses/httpasyncclient-LICENSE.txt new file mode 100644 index 0000000000000..2c41ec88f61cf --- /dev/null +++ b/client/rest/licenses/httpasyncclient-LICENSE.txt @@ -0,0 +1,182 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + +This project contains annotations derived from JCIP-ANNOTATIONS +Copyright (c) 2005 Brian Goetz and Tim Peierls. +See http://www.jcip.net and the Creative Commons Attribution License +(http://creativecommons.org/licenses/by/2.5) + diff --git a/client/rest/licenses/httpasyncclient-NOTICE.txt b/client/rest/licenses/httpasyncclient-NOTICE.txt new file mode 100644 index 0000000000000..b45be98d168a4 --- /dev/null +++ b/client/rest/licenses/httpasyncclient-NOTICE.txt @@ -0,0 +1,5 @@ +Apache HttpComponents AsyncClient +Copyright 2010-2016 The Apache Software Foundation + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). diff --git a/client/rest/licenses/httpcore-LICENSE.txt b/client/rest/licenses/httpcore-LICENSE.txt new file mode 100644 index 0000000000000..e454a52586f29 --- /dev/null +++ b/client/rest/licenses/httpcore-LICENSE.txt @@ -0,0 +1,178 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. 
+ + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + diff --git a/client/rest/licenses/httpcore-NOTICE.txt b/client/rest/licenses/httpcore-NOTICE.txt new file mode 100644 index 0000000000000..013448d3e9561 --- /dev/null +++ b/client/rest/licenses/httpcore-NOTICE.txt @@ -0,0 +1,5 @@ +Apache HttpComponents Core +Copyright 2005-2016 The Apache Software Foundation + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). diff --git a/distribution/licenses/spatial4j-LICENSE.txt b/client/rest/licenses/httpcore-nio-LICENSE.txt similarity index 100% rename from distribution/licenses/spatial4j-LICENSE.txt rename to client/rest/licenses/httpcore-nio-LICENSE.txt diff --git a/client/rest/licenses/httpcore-nio-NOTICE.txt b/client/rest/licenses/httpcore-nio-NOTICE.txt new file mode 100644 index 0000000000000..a2e17bb60009f --- /dev/null +++ b/client/rest/licenses/httpcore-nio-NOTICE.txt @@ -0,0 +1,8 @@ + +Apache HttpCore NIO +Copyright 2005-2016 The Apache Software Foundation + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). + + diff --git a/client/rest/src/main/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumer.java b/client/rest/src/main/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumer.java index da7f5c79721bc..84753e6f75c8d 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumer.java +++ b/client/rest/src/main/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumer.java @@ -38,25 +38,15 @@ /** * Default implementation of {@link org.apache.http.nio.protocol.HttpAsyncResponseConsumer}. Buffers the whole * response content in heap memory, meaning that the size of the buffer is equal to the content-length of the response. - * Limits the size of responses that can be read to {@link #DEFAULT_BUFFER_LIMIT} by default, configurable value. - * Throws an exception in case the entity is longer than the configured buffer limit. + * Limits the size of responses that can be read based on a configurable argument. Throws an exception in case the entity is longer + * than the configured buffer limit. 
*/ public class HeapBufferedAsyncResponseConsumer extends AbstractAsyncResponseConsumer { - //default buffer limit is 10MB - public static final int DEFAULT_BUFFER_LIMIT = 10 * 1024 * 1024; - - private final int bufferLimit; + private final int bufferLimitBytes; private volatile HttpResponse response; private volatile SimpleInputBuffer buf; - /** - * Creates a new instance of this consumer with a buffer limit of {@link #DEFAULT_BUFFER_LIMIT} - */ - public HeapBufferedAsyncResponseConsumer() { - this.bufferLimit = DEFAULT_BUFFER_LIMIT; - } - /** * Creates a new instance of this consumer with the provided buffer limit */ @@ -64,7 +54,14 @@ public HeapBufferedAsyncResponseConsumer(int bufferLimit) { if (bufferLimit <= 0) { throw new IllegalArgumentException("bufferLimit must be greater than 0"); } - this.bufferLimit = bufferLimit; + this.bufferLimitBytes = bufferLimit; + } + + /** + * Get the limit of the buffer. + */ + public int getBufferLimit() { + return bufferLimitBytes; } @Override @@ -75,9 +72,9 @@ protected void onResponseReceived(HttpResponse response) throws HttpException, I @Override protected void onEntityEnclosed(HttpEntity entity, ContentType contentType) throws IOException { long len = entity.getContentLength(); - if (len > bufferLimit) { + if (len > bufferLimitBytes) { throw new ContentTooLongException("entity content is too long [" + len + - "] for the configured buffer limit [" + bufferLimit + "]"); + "] for the configured buffer limit [" + bufferLimitBytes + "]"); } if (len < 0) { len = 4096; diff --git a/client/rest/src/main/java/org/elasticsearch/client/HttpAsyncResponseConsumerFactory.java b/client/rest/src/main/java/org/elasticsearch/client/HttpAsyncResponseConsumerFactory.java new file mode 100644 index 0000000000000..1af9e0dcf0fa4 --- /dev/null +++ b/client/rest/src/main/java/org/elasticsearch/client/HttpAsyncResponseConsumerFactory.java @@ -0,0 +1,65 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.client; + +import org.apache.http.HttpResponse; +import org.apache.http.nio.protocol.HttpAsyncResponseConsumer; + +import static org.elasticsearch.client.HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory.DEFAULT_BUFFER_LIMIT; + +/** + * Factory used to create instances of {@link HttpAsyncResponseConsumer}. Each request retry needs its own instance of the + * consumer object. Users can implement this interface and pass their own instance to the specialized + * performRequest methods that accept an {@link HttpAsyncResponseConsumerFactory} instance as argument. + */ +public interface HttpAsyncResponseConsumerFactory { + + /** + * Creates the default type of {@link HttpAsyncResponseConsumer}, based on heap buffering with a buffer limit of 100MB. 
+ */ + HttpAsyncResponseConsumerFactory DEFAULT = new HeapBufferedResponseConsumerFactory(DEFAULT_BUFFER_LIMIT); + + /** + * Creates the {@link HttpAsyncResponseConsumer}, called once per request attempt. + */ + HttpAsyncResponseConsumer createHttpAsyncResponseConsumer(); + + /** + * Default factory used to create instances of {@link HttpAsyncResponseConsumer}. + * Creates one instance of {@link HeapBufferedAsyncResponseConsumer} for each request attempt, with a configurable + * buffer limit which defaults to 100MB. + */ + class HeapBufferedResponseConsumerFactory implements HttpAsyncResponseConsumerFactory { + + //default buffer limit is 100MB + static final int DEFAULT_BUFFER_LIMIT = 100 * 1024 * 1024; + + private final int bufferLimit; + + public HeapBufferedResponseConsumerFactory(int bufferLimitBytes) { + this.bufferLimit = bufferLimitBytes; + } + + @Override + public HttpAsyncResponseConsumer createHttpAsyncResponseConsumer() { + return new HeapBufferedAsyncResponseConsumer(bufferLimit); + } + } +} diff --git a/client/rest/src/main/java/org/elasticsearch/client/RequestLogger.java b/client/rest/src/main/java/org/elasticsearch/client/RequestLogger.java index fd4c360023468..ad2348762dd07 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/RequestLogger.java +++ b/client/rest/src/main/java/org/elasticsearch/client/RequestLogger.java @@ -59,6 +59,12 @@ static void logResponse(Log logger, HttpUriRequest request, HttpHost host, HttpR logger.debug("request [" + request.getMethod() + " " + host + getUri(request.getRequestLine()) + "] returned [" + httpResponse.getStatusLine() + "]"); } + if (logger.isWarnEnabled()) { + Header[] warnings = httpResponse.getHeaders("Warning"); + if (warnings != null && warnings.length > 0) { + logger.warn(buildWarningMessage(request, host, warnings)); + } + } if (tracer.isTraceEnabled()) { String requestLine; try { @@ -97,6 +103,18 @@ static void logFailedRequest(Log logger, HttpUriRequest request, HttpHost host, } } + static String buildWarningMessage(HttpUriRequest request, HttpHost host, Header[] warnings) { + StringBuilder message = new StringBuilder("request [").append(request.getMethod()).append(" ").append(host) + .append(getUri(request.getRequestLine())).append("] returned ").append(warnings.length).append(" warnings: "); + for (int i = 0; i < warnings.length; i++) { + if (i > 0) { + message.append(","); + } + message.append("[").append(warnings[i].getValue()).append("]"); + } + return message.toString(); + } + /** * Creates curl output for given request */ @@ -134,7 +152,7 @@ static String buildTraceResponse(HttpResponse httpResponse) throws IOException { httpResponse.setEntity(entity); ContentType contentType = ContentType.get(entity); Charset charset = StandardCharsets.UTF_8; - if (contentType != null) { + if (contentType != null && contentType.getCharset() != null) { charset = contentType.getCharset(); } try (BufferedReader reader = new BufferedReader(new InputStreamReader(entity.getContent(), charset))) { diff --git a/client/rest/src/main/java/org/elasticsearch/client/ResponseListener.java b/client/rest/src/main/java/org/elasticsearch/client/ResponseListener.java index ce948f6569b65..3d5873599009d 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/ResponseListener.java +++ b/client/rest/src/main/java/org/elasticsearch/client/ResponseListener.java @@ -23,6 +23,10 @@ * Listener to be provided when calling async performRequest methods provided by {@link RestClient}. 
* Those methods that do accept a listener will return immediately, execute asynchronously, and notify * the listener whenever the request yielded a response, or failed with an exception. + * + *
+ * Note that it is not safe to call {@link RestClient#close()} from either of these + * callbacks. */ public interface ResponseListener { diff --git a/client/rest/src/main/java/org/elasticsearch/client/RestClient.java b/client/rest/src/main/java/org/elasticsearch/client/RestClient.java index d2301e1e8e755..cc0f1b3089638 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/RestClient.java +++ b/client/rest/src/main/java/org/elasticsearch/client/RestClient.java @@ -25,6 +25,7 @@ import org.apache.http.HttpHost; import org.apache.http.HttpRequest; import org.apache.http.HttpResponse; +import org.apache.http.client.AuthCache; import org.apache.http.client.ClientProtocolException; import org.apache.http.client.methods.HttpEntityEnclosingRequestBase; import org.apache.http.client.methods.HttpHead; @@ -34,8 +35,11 @@ import org.apache.http.client.methods.HttpPut; import org.apache.http.client.methods.HttpRequestBase; import org.apache.http.client.methods.HttpTrace; +import org.apache.http.client.protocol.HttpClientContext; import org.apache.http.client.utils.URIBuilder; import org.apache.http.concurrent.FutureCallback; +import org.apache.http.impl.auth.BasicScheme; +import org.apache.http.impl.client.BasicAuthCache; import org.apache.http.impl.nio.client.CloseableHttpAsyncClient; import org.apache.http.nio.client.methods.HttpAsyncMethods; import org.apache.http.nio.protocol.HttpAsyncRequestProducer; @@ -49,6 +53,7 @@ import java.util.Collection; import java.util.Collections; import java.util.Comparator; +import java.util.HashMap; import java.util.HashSet; import java.util.Iterator; import java.util.List; @@ -91,7 +96,7 @@ public class RestClient implements Closeable { private final long maxRetryTimeoutMillis; private final String pathPrefix; private final AtomicInteger lastHostIndex = new AtomicInteger(0); - private volatile Set hosts; + private volatile HostTuple> hostTuple; private final ConcurrentMap blacklist = new ConcurrentHashMap<>(); private final FailureListener failureListener; @@ -121,11 +126,13 @@ public synchronized void setHosts(HttpHost... hosts) { throw new IllegalArgumentException("hosts must not be null nor empty"); } Set httpHosts = new HashSet<>(); + AuthCache authCache = new BasicAuthCache(); for (HttpHost host : hosts) { Objects.requireNonNull(host, "host cannot be null"); httpHosts.add(host); + authCache.put(host, new BasicScheme()); } - this.hosts = Collections.unmodifiableSet(httpHosts); + this.hostTuple = new HostTuple<>(Collections.unmodifiableSet(httpHosts), authCache); this.blacklist.clear(); } @@ -143,7 +150,7 @@ public synchronized void setHosts(HttpHost... hosts) { * @throws ResponseException in case Elasticsearch responded with a status code that indicated an error */ public Response performRequest(String method, String endpoint, Header... headers) throws IOException { - return performRequest(method, endpoint, Collections.emptyMap(), (HttpEntity)null, headers); + return performRequest(method, endpoint, Collections.emptyMap(), null, headers); } /** @@ -165,9 +172,9 @@ public Response performRequest(String method, String endpoint, Map params, HttpEntity entity, Header... 
headers) throws IOException { - HttpAsyncResponseConsumer responseConsumer = new HeapBufferedAsyncResponseConsumer(); - return performRequest(method, endpoint, params, entity, responseConsumer, headers); + return performRequest(method, endpoint, params, entity, HttpAsyncResponseConsumerFactory.DEFAULT, headers); } /** @@ -196,8 +202,9 @@ public Response performRequest(String method, String endpoint, Map params, - HttpEntity entity, HttpAsyncResponseConsumer responseConsumer, + HttpEntity entity, HttpAsyncResponseConsumerFactory httpAsyncResponseConsumerFactory, Header... headers) throws IOException { SyncResponseListener listener = new SyncResponseListener(maxRetryTimeoutMillis); - performRequestAsync(method, endpoint, params, entity, responseConsumer, listener, headers); + performRequestAsync(method, endpoint, params, entity, httpAsyncResponseConsumerFactory, listener, headers); return listener.get(); } @@ -245,9 +252,9 @@ public void performRequestAsync(String method, String endpoint, Map params, HttpEntity entity, ResponseListener responseListener, Header... headers) { - HttpAsyncResponseConsumer responseConsumer = new HeapBufferedAsyncResponseConsumer(); - performRequestAsync(method, endpoint, params, entity, responseConsumer, responseListener, headers); + performRequestAsync(method, endpoint, params, entity, HttpAsyncResponseConsumerFactory.DEFAULT, responseListener, headers); } /** @@ -274,36 +280,74 @@ public void performRequestAsync(String method, String endpoint, Map params, - HttpEntity entity, HttpAsyncResponseConsumer responseConsumer, + HttpEntity entity, HttpAsyncResponseConsumerFactory httpAsyncResponseConsumerFactory, ResponseListener responseListener, Header... headers) { - URI uri = buildUri(pathPrefix, endpoint, params); - HttpRequestBase request = createHttpRequest(method, uri, entity); - setHeaders(request, headers); - FailureTrackingResponseListener failureTrackingResponseListener = new FailureTrackingResponseListener(responseListener); - long startTime = System.nanoTime(); - performRequestAsync(startTime, nextHost().iterator(), request, responseConsumer, failureTrackingResponseListener); + try { + Objects.requireNonNull(params, "params must not be null"); + Map requestParams = new HashMap<>(params); + //ignore is a special parameter supported by the clients, shouldn't be sent to es + String ignoreString = requestParams.remove("ignore"); + Set ignoreErrorCodes; + if (ignoreString == null) { + if (HttpHead.METHOD_NAME.equals(method)) { + //404 never causes error if returned for a HEAD request + ignoreErrorCodes = Collections.singleton(404); + } else { + ignoreErrorCodes = Collections.emptySet(); + } + } else { + String[] ignoresArray = ignoreString.split(","); + ignoreErrorCodes = new HashSet<>(); + if (HttpHead.METHOD_NAME.equals(method)) { + //404 never causes error if returned for a HEAD request + ignoreErrorCodes.add(404); + } + for (String ignoreCode : ignoresArray) { + try { + ignoreErrorCodes.add(Integer.valueOf(ignoreCode)); + } catch (NumberFormatException e) { + throw new IllegalArgumentException("ignore value should be a number, found [" + ignoreString + "] instead", e); + } + } + } + URI uri = buildUri(pathPrefix, endpoint, requestParams); + HttpRequestBase request = createHttpRequest(method, uri, entity); + setHeaders(request, headers); + FailureTrackingResponseListener failureTrackingResponseListener = new FailureTrackingResponseListener(responseListener); + long startTime = System.nanoTime(); + performRequestAsync(startTime, nextHost(), request, 
ignoreErrorCodes, httpAsyncResponseConsumerFactory, + failureTrackingResponseListener); + } catch (Exception e) { + responseListener.onFailure(e); + } } - private void performRequestAsync(final long startTime, final Iterator hosts, final HttpRequestBase request, - final HttpAsyncResponseConsumer responseConsumer, + private void performRequestAsync(final long startTime, final HostTuple> hostTuple, final HttpRequestBase request, + final Set ignoreErrorCodes, + final HttpAsyncResponseConsumerFactory httpAsyncResponseConsumerFactory, final FailureTrackingResponseListener listener) { - final HttpHost host = hosts.next(); + final HttpHost host = hostTuple.hosts.next(); //we stream the request body if the entity allows for it - HttpAsyncRequestProducer requestProducer = HttpAsyncMethods.create(host, request); - client.execute(requestProducer, responseConsumer, new FutureCallback() { + final HttpAsyncRequestProducer requestProducer = HttpAsyncMethods.create(host, request); + final HttpAsyncResponseConsumer asyncResponseConsumer = + httpAsyncResponseConsumerFactory.createHttpAsyncResponseConsumer(); + final HttpClientContext context = HttpClientContext.create(); + context.setAuthCache(hostTuple.authCache); + client.execute(requestProducer, asyncResponseConsumer, context, new FutureCallback() { @Override public void completed(HttpResponse httpResponse) { try { RequestLogger.logResponse(logger, request, host, httpResponse); int statusCode = httpResponse.getStatusLine().getStatusCode(); Response response = new Response(request.getRequestLine(), host, httpResponse); - if (isSuccessfulResponse(request.getMethod(), statusCode)) { + if (isSuccessfulResponse(statusCode) || ignoreErrorCodes.contains(response.getStatusLine().getStatusCode())) { onResponse(host); listener.onSuccess(response); } else { @@ -311,7 +355,7 @@ public void completed(HttpResponse httpResponse) { if (isRetryStatus(statusCode)) { //mark host dead and retry against next one onFailure(host); - retryIfPossible(responseException, hosts, request); + retryIfPossible(responseException); } else { //mark host alive and don't retry, as the error should be a request problem onResponse(host); @@ -328,14 +372,14 @@ public void failed(Exception failure) { try { RequestLogger.logFailedRequest(logger, request, host, failure); onFailure(host); - retryIfPossible(failure, hosts, request); + retryIfPossible(failure); } catch(Exception e) { listener.onDefinitiveFailure(e); } } - private void retryIfPossible(Exception exception, Iterator hosts, HttpRequestBase request) { - if (hosts.hasNext()) { + private void retryIfPossible(Exception exception) { + if (hostTuple.hosts.hasNext()) { //in case we are retrying, check whether maxRetryTimeout has been reached long timeElapsedMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startTime); long timeout = maxRetryTimeoutMillis - timeElapsedMillis; @@ -346,7 +390,7 @@ private void retryIfPossible(Exception exception, Iterator hosts, Http } else { listener.trackFailure(exception); request.reset(); - performRequestAsync(startTime, hosts, request, responseConsumer, listener); + performRequestAsync(startTime, hostTuple, request, ignoreErrorCodes, httpAsyncResponseConsumerFactory, listener); } } else { listener.onDefinitiveFailure(exception); @@ -384,17 +428,18 @@ private void setHeaders(HttpRequest httpRequest, Header[] requestHeaders) { * The iterator returned will never be empty. 
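That is the whole asynchronous path a request now takes: the caller's parameters are copied, the client-only ignore parameter is peeled off, the per-host AuthCache is attached to the request context, and retries walk the remaining hosts until the retry timeout runs out. From the caller's side it surfaces as performRequestAsync plus a ResponseListener; a sketch under the same localhost:9200 assumption, with an illustrative endpoint and ignore value:

import org.apache.http.HttpHost;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseListener;
import org.elasticsearch.client.RestClient;

import java.util.Collections;
import java.util.Map;
import java.util.concurrent.CountDownLatch;

public class AsyncRequestSketch {
    public static void main(String[] args) throws Exception {
        final CountDownLatch latch = new CountDownLatch(1);
        final RestClient client = RestClient.builder(new HttpHost("localhost", 9200)).build();
        // "ignore" is consumed by the client and never sent to Elasticsearch: a 404 on this GET is
        // reported through onSuccess instead of as a ResponseException.
        // (HEAD requests get 404 added to the ignore set automatically.)
        Map<String, String> params = Collections.singletonMap("ignore", "404");
        client.performRequestAsync("GET", "/index/type/1", params, null, new ResponseListener() {
            @Override
            public void onSuccess(Response response) {
                System.out.println(response.getStatusLine());
                latch.countDown();
            }

            @Override
            public void onFailure(Exception exception) {
                exception.printStackTrace();
                latch.countDown();
            }
        });
        latch.await();
        // Per the ResponseListener javadoc above, close() must not be called from the callbacks;
        // close the client from the calling thread once the request has completed.
        client.close();
    }
}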
In case there are no healthy hosts available, or dead ones to be be retried, * one dead host gets returned so that it can be retried. */ - private Iterable nextHost() { + private HostTuple> nextHost() { + final HostTuple> hostTuple = this.hostTuple; Collection nextHosts = Collections.emptySet(); do { - Set filteredHosts = new HashSet<>(hosts); + Set filteredHosts = new HashSet<>(hostTuple.hosts); for (Map.Entry entry : blacklist.entrySet()) { if (System.nanoTime() - entry.getValue().getDeadUntilNanos() < 0) { filteredHosts.remove(entry.getKey()); } } if (filteredHosts.isEmpty()) { - //last resort: if there are no good hosts to use, return a single dead one, the one that's closest to being retried + //last resort: if there are no good host to use, return a single dead one, the one that's closest to being retried List> sortedHosts = new ArrayList<>(blacklist.entrySet()); if (sortedHosts.size() > 0) { Collections.sort(sortedHosts, new Comparator>() { @@ -413,7 +458,7 @@ public int compare(Map.Entry o1, Map.Entry(nextHosts.iterator(), hostTuple.authCache); } /** @@ -451,8 +496,8 @@ public void close() throws IOException { client.close(); } - private static boolean isSuccessfulResponse(String method, int statusCode) { - return statusCode < 300 || (HttpHead.METHOD_NAME.equals(method) && statusCode == 404); + private static boolean isSuccessfulResponse(int statusCode) { + return statusCode < 300; } private static boolean isRetryStatus(int statusCode) { @@ -508,8 +553,8 @@ private static HttpRequestBase addRequestBody(HttpRequestBase httpRequest, HttpE return httpRequest; } - private static URI buildUri(String pathPrefix, String path, Map params) { - Objects.requireNonNull(params, "params must not be null"); + static URI buildUri(String pathPrefix, String path, Map params) { + Objects.requireNonNull(path, "path must not be null"); try { String fullPath; if (pathPrefix != null) { @@ -655,4 +700,18 @@ public void onFailure(HttpHost host) { } } + + /** + * {@code HostTuple} enables the {@linkplain HttpHost}s and {@linkplain AuthCache} to be set together in a thread + * safe, volatile way. 
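nextHost() above is where the round-robin and failover behaviour lives: blacklisted hosts are skipped until their dead time elapses, and if every host is blacklisted the one closest to being retried is handed out anyway. From the outside this only shows up as configuring more than one host; a sketch with placeholder hostnames:

import org.apache.http.HttpHost;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class MultipleHostsSketch {
    public static void main(String[] args) throws Exception {
        // Hostnames are placeholders. Requests are spread across the configured hosts; a host that
        // fails with a connection error or a retryable status is blacklisted and retried again
        // once its dead time has elapsed.
        try (RestClient client = RestClient.builder(
                new HttpHost("es-node-1", 9200),
                new HttpHost("es-node-2", 9200)).build()) {
            Response response = client.performRequest("GET", "/");
            System.out.println(response.getHost() + " answered " + response.getStatusLine());

            // Hosts can also be replaced at runtime, e.g. after discovering new nodes;
            // setHosts() rebuilds the host set and the per-host AuthCache together.
            client.setHosts(new HttpHost("es-node-3", 9200));
        }
    }
}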
+ */ + private static class HostTuple { + public final T hosts; + public final AuthCache authCache; + + HostTuple(final T hosts, final AuthCache authCache) { + this.hosts = hosts; + this.authCache = authCache; + } + } } diff --git a/client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java b/client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java index d342d59ade44d..e968039324daf 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java +++ b/client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java @@ -28,6 +28,8 @@ import org.apache.http.impl.nio.client.HttpAsyncClientBuilder; import org.apache.http.nio.conn.SchemeIOSessionStrategy; +import java.security.AccessController; +import java.security.PrivilegedAction; import java.util.Objects; /** @@ -37,7 +39,7 @@ */ public final class RestClientBuilder { public static final int DEFAULT_CONNECT_TIMEOUT_MILLIS = 1000; - public static final int DEFAULT_SOCKET_TIMEOUT_MILLIS = 10000; + public static final int DEFAULT_SOCKET_TIMEOUT_MILLIS = 30000; public static final int DEFAULT_MAX_RETRY_TIMEOUT_MILLIS = DEFAULT_SOCKET_TIMEOUT_MILLIS; public static final int DEFAULT_CONNECTION_REQUEST_TIMEOUT_MILLIS = 500; public static final int DEFAULT_MAX_CONN_PER_ROUTE = 10; @@ -185,7 +187,8 @@ public RestClient build() { private CloseableHttpAsyncClient createHttpClient() { //default timeouts are all infinite - RequestConfig.Builder requestConfigBuilder = RequestConfig.custom().setConnectTimeout(DEFAULT_CONNECT_TIMEOUT_MILLIS) + RequestConfig.Builder requestConfigBuilder = RequestConfig.custom() + .setConnectTimeout(DEFAULT_CONNECT_TIMEOUT_MILLIS) .setSocketTimeout(DEFAULT_SOCKET_TIMEOUT_MILLIS) .setConnectionRequestTimeout(DEFAULT_CONNECTION_REQUEST_TIMEOUT_MILLIS); if (requestConfigCallback != null) { @@ -194,11 +197,18 @@ private CloseableHttpAsyncClient createHttpClient() { HttpAsyncClientBuilder httpClientBuilder = HttpAsyncClientBuilder.create().setDefaultRequestConfig(requestConfigBuilder.build()) //default settings for connection pooling may be too constraining - .setMaxConnPerRoute(DEFAULT_MAX_CONN_PER_ROUTE).setMaxConnTotal(DEFAULT_MAX_CONN_TOTAL); + .setMaxConnPerRoute(DEFAULT_MAX_CONN_PER_ROUTE).setMaxConnTotal(DEFAULT_MAX_CONN_TOTAL).useSystemProperties(); if (httpClientConfigCallback != null) { httpClientBuilder = httpClientConfigCallback.customizeHttpClient(httpClientBuilder); } - return httpClientBuilder.build(); + + final HttpAsyncClientBuilder finalBuilder = httpClientBuilder; + return AccessController.doPrivileged(new PrivilegedAction() { + @Override + public CloseableHttpAsyncClient run() { + return finalBuilder.build(); + } + }); } /** diff --git a/client/rest/src/test/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumerTests.java b/client/rest/src/test/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumerTests.java index d30a9e00b53da..c1eb1044a2cab 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumerTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumerTests.java @@ -30,11 +30,17 @@ import org.apache.http.message.BasicStatusLine; import org.apache.http.nio.ContentDecoder; import org.apache.http.nio.IOControl; +import org.apache.http.nio.protocol.HttpAsyncResponseConsumer; import org.apache.http.protocol.HttpContext; -import static org.elasticsearch.client.HeapBufferedAsyncResponseConsumer.DEFAULT_BUFFER_LIMIT; +import 
java.lang.reflect.Constructor; +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Modifier; + +import static org.hamcrest.CoreMatchers.instanceOf; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertSame; +import static org.junit.Assert.assertThat; import static org.junit.Assert.assertTrue; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.spy; @@ -45,13 +51,14 @@ public class HeapBufferedAsyncResponseConsumerTests extends RestClientTestCase { //maximum buffer that this test ends up allocating is 50MB private static final int MAX_TEST_BUFFER_SIZE = 50 * 1024 * 1024; + private static final int TEST_BUFFER_LIMIT = 10 * 1024 * 1024; public void testResponseProcessing() throws Exception { ContentDecoder contentDecoder = mock(ContentDecoder.class); IOControl ioControl = mock(IOControl.class); HttpContext httpContext = mock(HttpContext.class); - HeapBufferedAsyncResponseConsumer consumer = spy(new HeapBufferedAsyncResponseConsumer()); + HeapBufferedAsyncResponseConsumer consumer = spy(new HeapBufferedAsyncResponseConsumer(TEST_BUFFER_LIMIT)); ProtocolVersion protocolVersion = new ProtocolVersion("HTTP", 1, 1); StatusLine statusLine = new BasicStatusLine(protocolVersion, 200, "OK"); @@ -74,8 +81,8 @@ public void testResponseProcessing() throws Exception { } public void testDefaultBufferLimit() throws Exception { - HeapBufferedAsyncResponseConsumer consumer = new HeapBufferedAsyncResponseConsumer(); - bufferLimitTest(consumer, DEFAULT_BUFFER_LIMIT); + HeapBufferedAsyncResponseConsumer consumer = new HeapBufferedAsyncResponseConsumer(TEST_BUFFER_LIMIT); + bufferLimitTest(consumer, TEST_BUFFER_LIMIT); } public void testConfiguredBufferLimit() throws Exception { @@ -94,6 +101,26 @@ public void testConfiguredBufferLimit() throws Exception { bufferLimitTest(consumer, bufferLimit); } + public void testCanConfigureHeapBufferLimitFromOutsidePackage() throws ClassNotFoundException, NoSuchMethodException, + IllegalAccessException, InvocationTargetException, InstantiationException { + int bufferLimit = randomIntBetween(1, Integer.MAX_VALUE); + //we use reflection to make sure that the class can be instantiated from the outside, and the constructor is public + Constructor constructor = HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory.class.getConstructor(Integer.TYPE); + assertEquals(Modifier.PUBLIC, constructor.getModifiers() & Modifier.PUBLIC); + Object object = constructor.newInstance(bufferLimit); + assertThat(object, instanceOf(HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory.class)); + HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory consumerFactory = + (HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory) object; + HttpAsyncResponseConsumer consumer = consumerFactory.createHttpAsyncResponseConsumer(); + assertThat(consumer, instanceOf(HeapBufferedAsyncResponseConsumer.class)); + HeapBufferedAsyncResponseConsumer bufferedAsyncResponseConsumer = (HeapBufferedAsyncResponseConsumer) consumer; + assertEquals(bufferLimit, bufferedAsyncResponseConsumer.getBufferLimit()); + } + + public void testHttpAsyncResponseConsumerFactoryVisibility() throws ClassNotFoundException { + assertEquals(Modifier.PUBLIC, HttpAsyncResponseConsumerFactory.class.getModifiers() & Modifier.PUBLIC); + } + private static void bufferLimitTest(HeapBufferedAsyncResponseConsumer consumer, int bufferLimit) throws Exception { ProtocolVersion protocolVersion = new 
ProtocolVersion("HTTP", 1, 1); StatusLine statusLine = new BasicStatusLine(protocolVersion, 200, "OK"); diff --git a/client/rest/src/test/java/org/elasticsearch/client/RequestLoggerTests.java b/client/rest/src/test/java/org/elasticsearch/client/RequestLoggerTests.java index 789f2bf6f6d65..68717dfe223cd 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/RequestLoggerTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/RequestLoggerTests.java @@ -19,7 +19,7 @@ package org.elasticsearch.client; -import com.carrotsearch.randomizedtesting.generators.RandomInts; +import org.apache.http.Header; import org.apache.http.HttpEntity; import org.apache.http.HttpEntityEnclosingRequest; import org.apache.http.HttpHost; @@ -29,10 +29,11 @@ import org.apache.http.client.methods.HttpPatch; import org.apache.http.client.methods.HttpPost; import org.apache.http.client.methods.HttpPut; -import org.apache.http.client.methods.HttpRequestBase; import org.apache.http.client.methods.HttpTrace; +import org.apache.http.client.methods.HttpUriRequest; import org.apache.http.entity.InputStreamEntity; import org.apache.http.entity.StringEntity; +import org.apache.http.message.BasicHeader; import org.apache.http.message.BasicHttpResponse; import org.apache.http.message.BasicStatusLine; import org.apache.http.nio.entity.NByteArrayEntity; @@ -43,16 +44,16 @@ import java.io.IOException; import java.net.URI; import java.net.URISyntaxException; +import java.nio.charset.Charset; import java.nio.charset.StandardCharsets; import static org.hamcrest.CoreMatchers.equalTo; +import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertThat; public class RequestLoggerTests extends RestClientTestCase { - public void testTraceRequest() throws IOException, URISyntaxException { - HttpHost host = new HttpHost("localhost", 9200, getRandom().nextBoolean() ? "http" : "https"); - + HttpHost host = new HttpHost("localhost", 9200, randomBoolean() ? 
"http" : "https"); String expectedEndpoint = "/index/type/_api"; URI uri; if (randomBoolean()) { @@ -60,46 +61,15 @@ public void testTraceRequest() throws IOException, URISyntaxException { } else { uri = new URI("index/type/_api"); } - - HttpRequestBase request; - int requestType = RandomInts.randomIntBetween(getRandom(), 0, 7); - switch(requestType) { - case 0: - request = new HttpGetWithEntity(uri); - break; - case 1: - request = new HttpPost(uri); - break; - case 2: - request = new HttpPut(uri); - break; - case 3: - request = new HttpDeleteWithEntity(uri); - break; - case 4: - request = new HttpHead(uri); - break; - case 5: - request = new HttpTrace(uri); - break; - case 6: - request = new HttpOptions(uri); - break; - case 7: - request = new HttpPatch(uri); - break; - default: - throw new UnsupportedOperationException(); - } - + HttpUriRequest request = randomHttpRequest(uri); String expected = "curl -iX " + request.getMethod() + " '" + host + expectedEndpoint + "'"; - boolean hasBody = request instanceof HttpEntityEnclosingRequest && getRandom().nextBoolean(); + boolean hasBody = request instanceof HttpEntityEnclosingRequest && randomBoolean(); String requestBody = "{ \"field\": \"value\" }"; if (hasBody) { expected += " -d '" + requestBody + "'"; HttpEntityEnclosingRequest enclosingRequest = (HttpEntityEnclosingRequest) request; HttpEntity entity; - switch(RandomInts.randomIntBetween(getRandom(), 0, 3)) { + switch(randomIntBetween(0, 4)) { case 0: entity = new StringEntity(requestBody, StandardCharsets.UTF_8); break; @@ -112,6 +82,10 @@ public void testTraceRequest() throws IOException, URISyntaxException { case 3: entity = new NByteArrayEntity(requestBody.getBytes(StandardCharsets.UTF_8)); break; + case 4: + // Evil entity without a charset + entity = new StringEntity(requestBody, (Charset) null); + break; default: throw new UnsupportedOperationException(); } @@ -128,12 +102,12 @@ public void testTraceRequest() throws IOException, URISyntaxException { public void testTraceResponse() throws IOException { ProtocolVersion protocolVersion = new ProtocolVersion("HTTP", 1, 1); - int statusCode = RandomInts.randomIntBetween(getRandom(), 200, 599); + int statusCode = randomIntBetween(200, 599); String reasonPhrase = "REASON"; BasicStatusLine statusLine = new BasicStatusLine(protocolVersion, statusCode, reasonPhrase); String expected = "# " + statusLine.toString(); BasicHttpResponse httpResponse = new BasicHttpResponse(statusLine); - int numHeaders = RandomInts.randomIntBetween(getRandom(), 0, 3); + int numHeaders = randomIntBetween(0, 3); for (int i = 0; i < numHeaders; i++) { httpResponse.setHeader("header" + i, "value"); expected += "\n# header" + i + ": value"; @@ -146,11 +120,20 @@ public void testTraceResponse() throws IOException { expected += "\n# \"field\": \"value\""; expected += "\n# }"; HttpEntity entity; - if (getRandom().nextBoolean()) { + switch(randomIntBetween(0, 2)) { + case 0: entity = new StringEntity(responseBody, StandardCharsets.UTF_8); - } else { + break; + case 1: //test a non repeatable entity entity = new InputStreamEntity(new ByteArrayInputStream(responseBody.getBytes(StandardCharsets.UTF_8))); + break; + case 2: + // Evil entity without a charset + entity = new StringEntity(responseBody, (Charset) null); + break; + default: + throw new UnsupportedOperationException(); } httpResponse.setEntity(entity); } @@ -162,4 +145,46 @@ public void testTraceResponse() throws IOException { assertThat(body, equalTo(responseBody)); } } + + public void testResponseWarnings() 
throws Exception { + HttpHost host = new HttpHost("localhost", 9200); + HttpUriRequest request = randomHttpRequest(new URI("/index/type/_api")); + int numWarnings = randomIntBetween(1, 5); + StringBuilder expected = new StringBuilder("request [").append(request.getMethod()).append(" ").append(host) + .append("/index/type/_api] returned ").append(numWarnings).append(" warnings: "); + Header[] warnings = new Header[numWarnings]; + for (int i = 0; i < numWarnings; i++) { + String warning = "this is warning number " + i; + warnings[i] = new BasicHeader("Warning", warning); + if (i > 0) { + expected.append(","); + } + expected.append("[").append(warning).append("]"); + } + assertEquals(expected.toString(), RequestLogger.buildWarningMessage(request, host, warnings)); + } + + private static HttpUriRequest randomHttpRequest(URI uri) { + int requestType = randomIntBetween(0, 7); + switch(requestType) { + case 0: + return new HttpGetWithEntity(uri); + case 1: + return new HttpPost(uri); + case 2: + return new HttpPut(uri); + case 3: + return new HttpDeleteWithEntity(uri); + case 4: + return new HttpHead(uri); + case 5: + return new HttpTrace(uri); + case 6: + return new HttpOptions(uri); + case 7: + return new HttpPatch(uri); + default: + throw new UnsupportedOperationException(); + } + } } diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientBuilderIntegTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientBuilderIntegTests.java new file mode 100644 index 0000000000000..8fb33e6f395a8 --- /dev/null +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientBuilderIntegTests.java @@ -0,0 +1,118 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client; + +import com.sun.net.httpserver.HttpExchange; +import com.sun.net.httpserver.HttpHandler; +import com.sun.net.httpserver.HttpsConfigurator; +import com.sun.net.httpserver.HttpsServer; +import org.apache.http.HttpHost; +import org.codehaus.mojo.animal_sniffer.IgnoreJRERequirement; +import org.junit.AfterClass; +import org.junit.BeforeClass; + +import javax.net.ssl.KeyManagerFactory; +import javax.net.ssl.SSLContext; +import javax.net.ssl.TrustManagerFactory; +import java.io.IOException; +import java.io.InputStream; +import java.net.InetAddress; +import java.net.InetSocketAddress; +import java.security.KeyStore; + +import static org.hamcrest.Matchers.containsString; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertThat; +import static org.junit.Assert.fail; + +/** + * Integration test to validate the builder builds a client with the correct configuration + */ +//animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes +@IgnoreJRERequirement +public class RestClientBuilderIntegTests extends RestClientTestCase { + + private static HttpsServer httpsServer; + + @BeforeClass + public static void startHttpServer() throws Exception { + httpsServer = HttpsServer.create(new InetSocketAddress(InetAddress.getLoopbackAddress(), 0), 0); + httpsServer.setHttpsConfigurator(new HttpsConfigurator(getSslContext())); + httpsServer.createContext("/", new ResponseHandler()); + httpsServer.start(); + } + + //animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes + @IgnoreJRERequirement + private static class ResponseHandler implements HttpHandler { + @Override + public void handle(HttpExchange httpExchange) throws IOException { + httpExchange.sendResponseHeaders(200, -1); + httpExchange.close(); + } + } + + @AfterClass + public static void stopHttpServers() throws IOException { + httpsServer.stop(0); + httpsServer = null; + } + + public void testBuilderUsesDefaultSSLContext() throws Exception { + final SSLContext defaultSSLContext = SSLContext.getDefault(); + try { + try (RestClient client = buildRestClient()) { + try { + client.performRequest("GET", "/"); + fail("connection should have been rejected due to SSL handshake"); + } catch (Exception e) { + assertThat(e.getMessage(), containsString("General SSLEngine problem")); + } + } + + SSLContext.setDefault(getSslContext()); + try (RestClient client = buildRestClient()) { + Response response = client.performRequest("GET", "/"); + assertEquals(200, response.getStatusLine().getStatusCode()); + } + } finally { + SSLContext.setDefault(defaultSSLContext); + } + } + + private RestClient buildRestClient() { + InetSocketAddress address = httpsServer.getAddress(); + return RestClient.builder(new HttpHost(address.getHostString(), address.getPort(), "https")).build(); + } + + private static SSLContext getSslContext() throws Exception { + SSLContext sslContext = SSLContext.getInstance("TLS"); + try (InputStream in = RestClientBuilderIntegTests.class.getResourceAsStream("/testks.jks")) { + KeyStore keyStore = KeyStore.getInstance("JKS"); + keyStore.load(in, "password".toCharArray()); + KeyManagerFactory kmf = KeyManagerFactory.getInstance("SunX509"); + kmf.init(keyStore, "password".toCharArray()); + TrustManagerFactory tmf = TrustManagerFactory.getInstance("SunX509"); + tmf.init(keyStore); + sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null); + } + return sslContext; + } +} diff --git 
a/client/rest/src/test/java/org/elasticsearch/client/RestClientIntegTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientIntegTests.java deleted file mode 100644 index 9c5c50946d884..0000000000000 --- a/client/rest/src/test/java/org/elasticsearch/client/RestClientIntegTests.java +++ /dev/null @@ -1,298 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.client; - -import com.sun.net.httpserver.Headers; -import com.sun.net.httpserver.HttpContext; -import com.sun.net.httpserver.HttpExchange; -import com.sun.net.httpserver.HttpHandler; -import com.sun.net.httpserver.HttpServer; -import org.apache.http.Consts; -import org.apache.http.Header; -import org.apache.http.HttpHost; -import org.apache.http.entity.StringEntity; -import org.apache.http.util.EntityUtils; -import org.codehaus.mojo.animal_sniffer.IgnoreJRERequirement; -import org.junit.AfterClass; -import org.junit.BeforeClass; - -import java.io.IOException; -import java.io.InputStreamReader; -import java.io.OutputStream; -import java.net.InetAddress; -import java.net.InetSocketAddress; -import java.util.Arrays; -import java.util.Collections; -import java.util.HashMap; -import java.util.HashSet; -import java.util.List; -import java.util.Map; -import java.util.Set; -import java.util.concurrent.CopyOnWriteArrayList; -import java.util.concurrent.CountDownLatch; -import java.util.concurrent.TimeUnit; - -import static org.elasticsearch.client.RestClientTestUtil.getAllStatusCodes; -import static org.elasticsearch.client.RestClientTestUtil.getHttpMethods; -import static org.elasticsearch.client.RestClientTestUtil.randomStatusCode; -import static org.hamcrest.CoreMatchers.equalTo; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertThat; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; - -/** - * Integration test to check interaction between {@link RestClient} and {@link org.apache.http.client.HttpClient}. - * Works against a real http server, one single host. 
- */ -//animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes -@IgnoreJRERequirement -public class RestClientIntegTests extends RestClientTestCase { - - private static HttpServer httpServer; - private static RestClient restClient; - private static Header[] defaultHeaders; - - @BeforeClass - public static void startHttpServer() throws Exception { - httpServer = HttpServer.create(new InetSocketAddress(InetAddress.getLoopbackAddress(), 0), 0); - httpServer.start(); - //returns a different status code depending on the path - for (int statusCode : getAllStatusCodes()) { - createStatusCodeContext(httpServer, statusCode); - } - int numHeaders = randomIntBetween(0, 5); - defaultHeaders = generateHeaders("Header-default", "Header-array", numHeaders); - restClient = RestClient.builder(new HttpHost(httpServer.getAddress().getHostString(), httpServer.getAddress().getPort())) - .setDefaultHeaders(defaultHeaders).build(); - } - - private static void createStatusCodeContext(HttpServer httpServer, final int statusCode) { - httpServer.createContext("/" + statusCode, new ResponseHandler(statusCode)); - } - - //animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes - @IgnoreJRERequirement - private static class ResponseHandler implements HttpHandler { - private final int statusCode; - - ResponseHandler(int statusCode) { - this.statusCode = statusCode; - } - - @Override - public void handle(HttpExchange httpExchange) throws IOException { - StringBuilder body = new StringBuilder(); - try (InputStreamReader reader = new InputStreamReader(httpExchange.getRequestBody(), Consts.UTF_8)) { - char[] buffer = new char[256]; - int read; - while ((read = reader.read(buffer)) != -1) { - body.append(buffer, 0, read); - } - } - Headers requestHeaders = httpExchange.getRequestHeaders(); - Headers responseHeaders = httpExchange.getResponseHeaders(); - for (Map.Entry> header : requestHeaders.entrySet()) { - responseHeaders.put(header.getKey(), header.getValue()); - } - httpExchange.getRequestBody().close(); - httpExchange.sendResponseHeaders(statusCode, body.length() == 0 ? -1 : body.length()); - if (body.length() > 0) { - try (OutputStream out = httpExchange.getResponseBody()) { - out.write(body.toString().getBytes(Consts.UTF_8)); - } - } - httpExchange.close(); - } - } - - @AfterClass - public static void stopHttpServers() throws IOException { - restClient.close(); - restClient = null; - httpServer.stop(0); - httpServer = null; - } - - /** - * End to end test for headers. We test it explicitly against a real http client as there are different ways - * to set/add headers to the {@link org.apache.http.client.HttpClient}. - * Exercises the test http server ability to send back whatever headers it received. 
- */ - public void testHeaders() throws IOException { - for (String method : getHttpMethods()) { - final Set standardHeaders = new HashSet<>(Arrays.asList("Connection", "Host", "User-agent", "Date")); - if (method.equals("HEAD") == false) { - standardHeaders.add("Content-length"); - } - - final int numHeaders = randomIntBetween(1, 5); - final Header[] headers = generateHeaders("Header", "Header-array", numHeaders); - final Map> expectedHeaders = new HashMap<>(); - - addHeaders(expectedHeaders, defaultHeaders, headers); - - final int statusCode = randomStatusCode(getRandom()); - Response esResponse; - try { - esResponse = restClient.performRequest(method, "/" + statusCode, Collections.emptyMap(), headers); - } catch(ResponseException e) { - esResponse = e.getResponse(); - } - assertThat(esResponse.getStatusLine().getStatusCode(), equalTo(statusCode)); - for (final Header responseHeader : esResponse.getHeaders()) { - final String name = responseHeader.getName(); - final String value = responseHeader.getValue(); - if (name.startsWith("Header")) { - final List values = expectedHeaders.get(name); - assertNotNull("found response header [" + name + "] that wasn't originally sent: " + value, values); - assertTrue("found incorrect response header [" + name + "]: " + value, values.remove(value)); - - // we've collected them all - if (values.isEmpty()) { - expectedHeaders.remove(name); - } - } else { - assertTrue("unknown header was returned " + name, standardHeaders.remove(name)); - } - } - assertTrue("some headers that were sent weren't returned: " + expectedHeaders, expectedHeaders.isEmpty()); - assertTrue("some expected standard headers weren't returned: " + standardHeaders, standardHeaders.isEmpty()); - } - } - - /** - * End to end test for delete with body. We test it explicitly as it is not supported - * out of the box by {@link org.apache.http.client.HttpClient}. - * Exercises the test http server ability to send back whatever body it received. - */ - public void testDeleteWithBody() throws IOException { - bodyTest("DELETE"); - } - - /** - * End to end test for get with body. We test it explicitly as it is not supported - * out of the box by {@link org.apache.http.client.HttpClient}. - * Exercises the test http server ability to send back whatever body it received. - */ - public void testGetWithBody() throws IOException { - bodyTest("GET"); - } - - /** - * Ensure that pathPrefix works as expected. - */ - public void testPathPrefix() throws IOException { - // guarantee no other test setup collides with this one and lets it sneak through - final String uniqueContextSuffix = "/testPathPrefix"; - final String pathPrefix = "base/" + randomAsciiOfLengthBetween(1, 5) + "/"; - final int statusCode = randomStatusCode(getRandom()); - - final HttpContext context = - httpServer.createContext("/" + pathPrefix + statusCode + uniqueContextSuffix, new ResponseHandler(statusCode)); - - try (final RestClient client = - RestClient.builder(new HttpHost(httpServer.getAddress().getHostString(), httpServer.getAddress().getPort())) - .setPathPrefix((randomBoolean() ? 
"/" : "") + pathPrefix).build()) { - - for (final String method : getHttpMethods()) { - Response esResponse; - try { - esResponse = client.performRequest(method, "/" + statusCode + uniqueContextSuffix); - } catch(ResponseException e) { - esResponse = e.getResponse(); - } - - assertThat(esResponse.getRequestLine().getUri(), equalTo("/" + pathPrefix + statusCode + uniqueContextSuffix)); - assertThat(esResponse.getStatusLine().getStatusCode(), equalTo(statusCode)); - } - } finally { - httpServer.removeContext(context); - } - } - - private void bodyTest(String method) throws IOException { - String requestBody = "{ \"field\": \"value\" }"; - StringEntity entity = new StringEntity(requestBody); - int statusCode = randomStatusCode(getRandom()); - Response esResponse; - try { - esResponse = restClient.performRequest(method, "/" + statusCode, Collections.emptyMap(), entity); - } catch(ResponseException e) { - esResponse = e.getResponse(); - } - assertEquals(statusCode, esResponse.getStatusLine().getStatusCode()); - assertEquals(requestBody, EntityUtils.toString(esResponse.getEntity())); - } - - public void testAsyncRequests() throws Exception { - int numRequests = randomIntBetween(5, 20); - final CountDownLatch latch = new CountDownLatch(numRequests); - final List responses = new CopyOnWriteArrayList<>(); - for (int i = 0; i < numRequests; i++) { - final String method = RestClientTestUtil.randomHttpMethod(getRandom()); - final int statusCode = randomStatusCode(getRandom()); - restClient.performRequestAsync(method, "/" + statusCode, new ResponseListener() { - @Override - public void onSuccess(Response response) { - responses.add(new TestResponse(method, statusCode, response)); - latch.countDown(); - } - - @Override - public void onFailure(Exception exception) { - responses.add(new TestResponse(method, statusCode, exception)); - latch.countDown(); - } - }); - } - assertTrue(latch.await(5, TimeUnit.SECONDS)); - - assertEquals(numRequests, responses.size()); - for (TestResponse response : responses) { - assertEquals(response.method, response.getResponse().getRequestLine().getMethod()); - assertEquals(response.statusCode, response.getResponse().getStatusLine().getStatusCode()); - - } - } - - private static class TestResponse { - private final String method; - private final int statusCode; - private final Object response; - - TestResponse(String method, int statusCode, Object response) { - this.method = method; - this.statusCode = statusCode; - this.response = response; - } - - Response getResponse() { - if (response instanceof Response) { - return (Response) response; - } - if (response instanceof ResponseException) { - return ((ResponseException) response).getResponse(); - } - throw new AssertionError("unexpected response " + response.getClass()); - } - } -} diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsIntegTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsIntegTests.java new file mode 100644 index 0000000000000..f997f798712e6 --- /dev/null +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsIntegTests.java @@ -0,0 +1,210 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.client; + +import com.sun.net.httpserver.HttpExchange; +import com.sun.net.httpserver.HttpHandler; +import com.sun.net.httpserver.HttpServer; +import org.apache.http.HttpHost; +import org.codehaus.mojo.animal_sniffer.IgnoreJRERequirement; +import org.junit.AfterClass; +import org.junit.Before; +import org.junit.BeforeClass; + +import java.io.IOException; +import java.net.InetAddress; +import java.net.InetSocketAddress; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.CopyOnWriteArrayList; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; + +import static org.elasticsearch.client.RestClientTestUtil.getAllStatusCodes; +import static org.elasticsearch.client.RestClientTestUtil.randomErrorNoRetryStatusCode; +import static org.elasticsearch.client.RestClientTestUtil.randomOkStatusCode; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertTrue; + +/** + * Integration test to check interaction between {@link RestClient} and {@link org.apache.http.client.HttpClient}. + * Works against real http servers, multiple hosts. Also tests failover by randomly shutting down hosts. + */ +//animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes +@IgnoreJRERequirement +public class RestClientMultipleHostsIntegTests extends RestClientTestCase { + + private static HttpServer[] httpServers; + private static RestClient restClient; + private static String pathPrefix; + + @BeforeClass + public static void startHttpServer() throws Exception { + String pathPrefixWithoutLeadingSlash; + if (randomBoolean()) { + pathPrefixWithoutLeadingSlash = "testPathPrefix/" + randomAsciiOfLengthBetween(1, 5); + pathPrefix = "/" + pathPrefixWithoutLeadingSlash; + } else { + pathPrefix = pathPrefixWithoutLeadingSlash = ""; + } + int numHttpServers = randomIntBetween(2, 4); + httpServers = new HttpServer[numHttpServers]; + HttpHost[] httpHosts = new HttpHost[numHttpServers]; + for (int i = 0; i < numHttpServers; i++) { + HttpServer httpServer = createHttpServer(); + httpServers[i] = httpServer; + httpHosts[i] = new HttpHost(httpServer.getAddress().getHostString(), httpServer.getAddress().getPort()); + } + RestClientBuilder restClientBuilder = RestClient.builder(httpHosts); + if (pathPrefix.length() > 0) { + restClientBuilder.setPathPrefix((randomBoolean() ? 
"/" : "") + pathPrefixWithoutLeadingSlash); + } + restClient = restClientBuilder.build(); + } + + private static HttpServer createHttpServer() throws Exception { + HttpServer httpServer = HttpServer.create(new InetSocketAddress(InetAddress.getLoopbackAddress(), 0), 0); + httpServer.start(); + //returns a different status code depending on the path + for (int statusCode : getAllStatusCodes()) { + httpServer.createContext(pathPrefix + "/" + statusCode, new ResponseHandler(statusCode)); + } + return httpServer; + } + + //animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes + @IgnoreJRERequirement + private static class ResponseHandler implements HttpHandler { + private final int statusCode; + + ResponseHandler(int statusCode) { + this.statusCode = statusCode; + } + + @Override + public void handle(HttpExchange httpExchange) throws IOException { + httpExchange.getRequestBody().close(); + httpExchange.sendResponseHeaders(statusCode, -1); + httpExchange.close(); + } + } + + @AfterClass + public static void stopHttpServers() throws IOException { + restClient.close(); + restClient = null; + for (HttpServer httpServer : httpServers) { + httpServer.stop(0); + } + httpServers = null; + } + + @Before + public void stopRandomHost() { + //verify that shutting down some hosts doesn't matter as long as one working host is left behind + if (httpServers.length > 1 && randomBoolean()) { + List updatedHttpServers = new ArrayList<>(httpServers.length - 1); + int nodeIndex = randomInt(httpServers.length - 1); + for (int i = 0; i < httpServers.length; i++) { + HttpServer httpServer = httpServers[i]; + if (i == nodeIndex) { + httpServer.stop(0); + } else { + updatedHttpServers.add(httpServer); + } + } + httpServers = updatedHttpServers.toArray(new HttpServer[updatedHttpServers.size()]); + } + } + + public void testSyncRequests() throws IOException { + int numRequests = randomIntBetween(5, 20); + for (int i = 0; i < numRequests; i++) { + final String method = RestClientTestUtil.randomHttpMethod(getRandom()); + //we don't test status codes that are subject to retries as they interfere with hosts being stopped + final int statusCode = randomBoolean() ? randomOkStatusCode(getRandom()) : randomErrorNoRetryStatusCode(getRandom()); + Response response; + try { + response = restClient.performRequest(method, "/" + statusCode); + } catch(ResponseException responseException) { + response = responseException.getResponse(); + } + assertEquals(method, response.getRequestLine().getMethod()); + assertEquals(statusCode, response.getStatusLine().getStatusCode()); + assertEquals((pathPrefix.length() > 0 ? pathPrefix : "") + "/" + statusCode, response.getRequestLine().getUri()); + } + } + + public void testAsyncRequests() throws Exception { + int numRequests = randomIntBetween(5, 20); + final CountDownLatch latch = new CountDownLatch(numRequests); + final List responses = new CopyOnWriteArrayList<>(); + for (int i = 0; i < numRequests; i++) { + final String method = RestClientTestUtil.randomHttpMethod(getRandom()); + //we don't test status codes that are subject to retries as they interfere with hosts being stopped + final int statusCode = randomBoolean() ? 
randomOkStatusCode(getRandom()) : randomErrorNoRetryStatusCode(getRandom()); + restClient.performRequestAsync(method, "/" + statusCode, new ResponseListener() { + @Override + public void onSuccess(Response response) { + responses.add(new TestResponse(method, statusCode, response)); + latch.countDown(); + } + + @Override + public void onFailure(Exception exception) { + responses.add(new TestResponse(method, statusCode, exception)); + latch.countDown(); + } + }); + } + assertTrue(latch.await(5, TimeUnit.SECONDS)); + + assertEquals(numRequests, responses.size()); + for (TestResponse testResponse : responses) { + Response response = testResponse.getResponse(); + assertEquals(testResponse.method, response.getRequestLine().getMethod()); + assertEquals(testResponse.statusCode, response.getStatusLine().getStatusCode()); + assertEquals((pathPrefix.length() > 0 ? pathPrefix : "") + "/" + testResponse.statusCode, + response.getRequestLine().getUri()); + } + } + + private static class TestResponse { + private final String method; + private final int statusCode; + private final Object response; + + TestResponse(String method, int statusCode, Object response) { + this.method = method; + this.statusCode = statusCode; + this.response = response; + } + + Response getResponse() { + if (response instanceof Response) { + return (Response) response; + } + if (response instanceof ResponseException) { + return ((ResponseException) response).getResponse(); + } + throw new AssertionError("unexpected response " + response.getClass()); + } + } +} diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsTests.java index 049a216936f7a..6f87a244ff59f 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsTests.java @@ -19,15 +19,17 @@ package org.elasticsearch.client; -import com.carrotsearch.randomizedtesting.generators.RandomInts; +import com.carrotsearch.randomizedtesting.generators.RandomNumbers; import org.apache.http.Header; import org.apache.http.HttpHost; import org.apache.http.HttpResponse; import org.apache.http.ProtocolVersion; import org.apache.http.StatusLine; import org.apache.http.client.methods.HttpUriRequest; +import org.apache.http.client.protocol.HttpClientContext; import org.apache.http.concurrent.FutureCallback; import org.apache.http.conn.ConnectTimeoutException; +import org.apache.http.impl.auth.BasicScheme; import org.apache.http.impl.nio.client.CloseableHttpAsyncClient; import org.apache.http.message.BasicHttpResponse; import org.apache.http.message.BasicStatusLine; @@ -73,13 +75,15 @@ public class RestClientMultipleHostsTests extends RestClientTestCase { public void createRestClient() throws IOException { CloseableHttpAsyncClient httpClient = mock(CloseableHttpAsyncClient.class); when(httpClient.execute(any(HttpAsyncRequestProducer.class), any(HttpAsyncResponseConsumer.class), - any(FutureCallback.class))).thenAnswer(new Answer>() { + any(HttpClientContext.class), any(FutureCallback.class))).thenAnswer(new Answer>() { @Override public Future answer(InvocationOnMock invocationOnMock) throws Throwable { HttpAsyncRequestProducer requestProducer = (HttpAsyncRequestProducer) invocationOnMock.getArguments()[0]; HttpUriRequest request = (HttpUriRequest)requestProducer.generateRequest(); HttpHost httpHost = requestProducer.getTarget(); - FutureCallback 
futureCallback = (FutureCallback) invocationOnMock.getArguments()[2]; + HttpClientContext context = (HttpClientContext) invocationOnMock.getArguments()[2]; + assertThat(context.getAuthCache().get(httpHost), instanceOf(BasicScheme.class)); + FutureCallback futureCallback = (FutureCallback) invocationOnMock.getArguments()[3]; //return the desired status code or exception depending on the path if (request.getURI().getPath().equals("/soe")) { futureCallback.failed(new SocketTimeoutException(httpHost.toString())); @@ -95,7 +99,7 @@ public Future answer(InvocationOnMock invocationOnMock) throws Thr return null; } }); - int numHosts = RandomInts.randomIntBetween(getRandom(), 2, 5); + int numHosts = RandomNumbers.randomIntBetween(getRandom(), 2, 5); httpHosts = new HttpHost[numHosts]; for (int i = 0; i < numHosts; i++) { httpHosts[i] = new HttpHost("localhost", 9200 + i); @@ -105,7 +109,7 @@ public Future answer(InvocationOnMock invocationOnMock) throws Thr } public void testRoundRobinOkStatusCodes() throws IOException { - int numIters = RandomInts.randomIntBetween(getRandom(), 1, 5); + int numIters = RandomNumbers.randomIntBetween(getRandom(), 1, 5); for (int i = 0; i < numIters; i++) { Set hostsSet = new HashSet<>(); Collections.addAll(hostsSet, httpHosts); @@ -121,7 +125,7 @@ public void testRoundRobinOkStatusCodes() throws IOException { } public void testRoundRobinNoRetryErrors() throws IOException { - int numIters = RandomInts.randomIntBetween(getRandom(), 1, 5); + int numIters = RandomNumbers.randomIntBetween(getRandom(), 1, 5); for (int i = 0; i < numIters; i++) { Set hostsSet = new HashSet<>(); Collections.addAll(hostsSet, httpHosts); @@ -198,7 +202,7 @@ public void testRoundRobinRetryErrors() throws IOException { assertEquals("every host should have been used but some weren't: " + hostsSet, 0, hostsSet.size()); } - int numIters = RandomInts.randomIntBetween(getRandom(), 2, 5); + int numIters = RandomNumbers.randomIntBetween(getRandom(), 2, 5); for (int i = 1; i <= numIters; i++) { //check that one different host is resurrected at each new attempt Set hostsSet = new HashSet<>(); @@ -228,7 +232,7 @@ public void testRoundRobinRetryErrors() throws IOException { if (getRandom().nextBoolean()) { //mark one host back alive through a successful request and check that all requests after that are sent to it HttpHost selectedHost = null; - int iters = RandomInts.randomIntBetween(getRandom(), 2, 10); + int iters = RandomNumbers.randomIntBetween(getRandom(), 2, 10); for (int y = 0; y < iters; y++) { int statusCode = randomErrorNoRetryStatusCode(getRandom()); Response response; @@ -269,7 +273,7 @@ public void testRoundRobinRetryErrors() throws IOException { } private static String randomErrorRetryEndpoint() { - switch(RandomInts.randomIntBetween(getRandom(), 0, 3)) { + switch(RandomNumbers.randomIntBetween(getRandom(), 0, 3)) { case 0: return "/" + randomErrorRetryStatusCode(getRandom()); case 1: diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostIntegTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostIntegTests.java new file mode 100644 index 0000000000000..9b63e3492fd80 --- /dev/null +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostIntegTests.java @@ -0,0 +1,266 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.client; + +import com.sun.net.httpserver.Headers; +import com.sun.net.httpserver.HttpExchange; +import com.sun.net.httpserver.HttpHandler; +import com.sun.net.httpserver.HttpServer; +import org.apache.http.Consts; +import org.apache.http.Header; +import org.apache.http.HttpHost; +import org.apache.http.auth.AuthScope; +import org.apache.http.auth.UsernamePasswordCredentials; +import org.apache.http.entity.StringEntity; +import org.apache.http.impl.client.BasicCredentialsProvider; +import org.apache.http.impl.nio.client.HttpAsyncClientBuilder; +import org.apache.http.util.EntityUtils; +import org.codehaus.mojo.animal_sniffer.IgnoreJRERequirement; +import org.junit.AfterClass; +import org.junit.BeforeClass; + +import java.io.IOException; +import java.io.InputStreamReader; +import java.io.OutputStream; +import java.net.InetAddress; +import java.net.InetSocketAddress; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; + +import static org.elasticsearch.client.RestClientTestUtil.getAllStatusCodes; +import static org.elasticsearch.client.RestClientTestUtil.getHttpMethods; +import static org.elasticsearch.client.RestClientTestUtil.randomStatusCode; +import static org.hamcrest.Matchers.nullValue; +import static org.hamcrest.Matchers.startsWith; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertThat; +import static org.junit.Assert.assertTrue; + +/** + * Integration test to check interaction between {@link RestClient} and {@link org.apache.http.client.HttpClient}. + * Works against a real http server, one single host. + */ +//animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes +@IgnoreJRERequirement +public class RestClientSingleHostIntegTests extends RestClientTestCase { + + private static HttpServer httpServer; + private static RestClient restClient; + private static String pathPrefix; + private static Header[] defaultHeaders; + + @BeforeClass + public static void startHttpServer() throws Exception { + pathPrefix = randomBoolean() ? 
"/testPathPrefix/" + randomAsciiOfLengthBetween(1, 5) : ""; + httpServer = createHttpServer(); + defaultHeaders = RestClientTestUtil.randomHeaders(getRandom(), "Header-default"); + restClient = createRestClient(false, true); + } + + private static HttpServer createHttpServer() throws Exception { + HttpServer httpServer = HttpServer.create(new InetSocketAddress(InetAddress.getLoopbackAddress(), 0), 0); + httpServer.start(); + //returns a different status code depending on the path + for (int statusCode : getAllStatusCodes()) { + httpServer.createContext(pathPrefix + "/" + statusCode, new ResponseHandler(statusCode)); + } + return httpServer; + } + + //animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes + @IgnoreJRERequirement + private static class ResponseHandler implements HttpHandler { + private final int statusCode; + + ResponseHandler(int statusCode) { + this.statusCode = statusCode; + } + + @Override + public void handle(HttpExchange httpExchange) throws IOException { + StringBuilder body = new StringBuilder(); + try (InputStreamReader reader = new InputStreamReader(httpExchange.getRequestBody(), Consts.UTF_8)) { + char[] buffer = new char[256]; + int read; + while ((read = reader.read(buffer)) != -1) { + body.append(buffer, 0, read); + } + } + Headers requestHeaders = httpExchange.getRequestHeaders(); + Headers responseHeaders = httpExchange.getResponseHeaders(); + for (Map.Entry> header : requestHeaders.entrySet()) { + responseHeaders.put(header.getKey(), header.getValue()); + } + httpExchange.getRequestBody().close(); + httpExchange.sendResponseHeaders(statusCode, body.length() == 0 ? -1 : body.length()); + if (body.length() > 0) { + try (OutputStream out = httpExchange.getResponseBody()) { + out.write(body.toString().getBytes(Consts.UTF_8)); + } + } + httpExchange.close(); + } + } + + private static RestClient createRestClient(final boolean useAuth, final boolean usePreemptiveAuth) { + // provide the username/password for every request + final BasicCredentialsProvider credentialsProvider = new BasicCredentialsProvider(); + credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials("user", "pass")); + + final RestClientBuilder restClientBuilder = RestClient.builder( + new HttpHost(httpServer.getAddress().getHostString(), httpServer.getAddress().getPort())).setDefaultHeaders(defaultHeaders); + if (pathPrefix.length() > 0) { + // sometimes cut off the leading slash + restClientBuilder.setPathPrefix(randomBoolean() ? pathPrefix.substring(1) : pathPrefix); + } + + if (useAuth) { + restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() { + @Override + public HttpAsyncClientBuilder customizeHttpClient(final HttpAsyncClientBuilder httpClientBuilder) { + if (usePreemptiveAuth == false) { + // disable preemptive auth by ignoring any authcache + httpClientBuilder.disableAuthCaching(); + } + + return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider); + } + }); + } + + return restClientBuilder.build(); + } + + @AfterClass + public static void stopHttpServers() throws IOException { + restClient.close(); + restClient = null; + httpServer.stop(0); + httpServer = null; + } + + /** + * End to end test for headers. We test it explicitly against a real http client as there are different ways + * to set/add headers to the {@link org.apache.http.client.HttpClient}. + * Exercises the test http server ability to send back whatever headers it received. 
+ */ + public void testHeaders() throws IOException { + for (String method : getHttpMethods()) { + final Set standardHeaders = new HashSet<>(Arrays.asList("Connection", "Host", "User-agent", "Date")); + if (method.equals("HEAD") == false) { + standardHeaders.add("Content-length"); + } + final Header[] requestHeaders = RestClientTestUtil.randomHeaders(getRandom(), "Header"); + final int statusCode = randomStatusCode(getRandom()); + Response esResponse; + try { + esResponse = restClient.performRequest(method, "/" + statusCode, Collections.emptyMap(), requestHeaders); + } catch(ResponseException e) { + esResponse = e.getResponse(); + } + + assertEquals(method, esResponse.getRequestLine().getMethod()); + assertEquals(statusCode, esResponse.getStatusLine().getStatusCode()); + assertEquals(pathPrefix + "/" + statusCode, esResponse.getRequestLine().getUri()); + assertHeaders(defaultHeaders, requestHeaders, esResponse.getHeaders(), standardHeaders); + for (final Header responseHeader : esResponse.getHeaders()) { + String name = responseHeader.getName(); + if (name.startsWith("Header") == false) { + assertTrue("unknown header was returned " + name, standardHeaders.remove(name)); + } + } + assertTrue("some expected standard headers weren't returned: " + standardHeaders, standardHeaders.isEmpty()); + } + } + + /** + * End to end test for delete with body. We test it explicitly as it is not supported + * out of the box by {@link org.apache.http.client.HttpClient}. + * Exercises the test http server ability to send back whatever body it received. + */ + public void testDeleteWithBody() throws IOException { + bodyTest("DELETE"); + } + + /** + * End to end test for get with body. We test it explicitly as it is not supported + * out of the box by {@link org.apache.http.client.HttpClient}. + * Exercises the test http server ability to send back whatever body it received. + */ + public void testGetWithBody() throws IOException { + bodyTest("GET"); + } + + /** + * Verify that credentials are sent on the first request with preemptive auth enabled (default when provided with credentials). + */ + public void testPreemptiveAuthEnabled() throws IOException { + final String[] methods = { "POST", "PUT", "GET", "DELETE" }; + + try (RestClient restClient = createRestClient(true, true)) { + for (final String method : methods) { + final Response response = bodyTest(restClient, method); + + assertThat(response.getHeader("Authorization"), startsWith("Basic")); + } + } + } + + /** + * Verify that credentials are not sent on the first request with preemptive auth disabled. 
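In client code, the behaviour these two tests verify is toggled through the HttpClientConfigCallback: supplying a credentials provider enables preemptive authentication by default, and calling disableAuthCaching() falls back to challenge-based authentication. A minimal sketch with placeholder host and credentials:

import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;

public class PreemptiveAuthSketch {
    public static RestClient build(final boolean preemptive) {
        final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials("user", "pass"));
        return RestClient.builder(new HttpHost("localhost", 9200))
                .setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
                    @Override
                    public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
                        if (preemptive == false) {
                            // ignore the auth cache so credentials are only sent after a 401 challenge
                            httpClientBuilder.disableAuthCaching();
                        }
                        return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
                    }
                })
                .build();
    }
}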
+ */ + public void testPreemptiveAuthDisabled() throws IOException { + final String[] methods = { "POST", "PUT", "GET", "DELETE" }; + + try (RestClient restClient = createRestClient(true, false)) { + for (final String method : methods) { + final Response response = bodyTest(restClient, method); + + assertThat(response.getHeader("Authorization"), nullValue()); + } + } + } + + private Response bodyTest(final String method) throws IOException { + return bodyTest(restClient, method); + } + + private Response bodyTest(final RestClient restClient, final String method) throws IOException { + String requestBody = "{ \"field\": \"value\" }"; + StringEntity entity = new StringEntity(requestBody); + int statusCode = randomStatusCode(getRandom()); + Response esResponse; + try { + esResponse = restClient.performRequest(method, "/" + statusCode, Collections.emptyMap(), entity); + } catch(ResponseException e) { + esResponse = e.getResponse(); + } + assertEquals(method, esResponse.getRequestLine().getMethod()); + assertEquals(statusCode, esResponse.getStatusLine().getStatusCode()); + assertEquals(pathPrefix + "/" + statusCode, esResponse.getRequestLine().getUri()); + assertEquals(requestBody, EntityUtils.toString(esResponse.getEntity())); + + return esResponse; + } +} diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostTests.java index 92e2b0da971df..a74310daa0140 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostTests.java @@ -34,10 +34,12 @@ import org.apache.http.client.methods.HttpPut; import org.apache.http.client.methods.HttpTrace; import org.apache.http.client.methods.HttpUriRequest; +import org.apache.http.client.protocol.HttpClientContext; import org.apache.http.client.utils.URIBuilder; import org.apache.http.concurrent.FutureCallback; import org.apache.http.conn.ConnectTimeoutException; import org.apache.http.entity.StringEntity; +import org.apache.http.impl.auth.BasicScheme; import org.apache.http.impl.nio.client.CloseableHttpAsyncClient; import org.apache.http.message.BasicHttpResponse; import org.apache.http.message.BasicStatusLine; @@ -56,7 +58,6 @@ import java.util.Collections; import java.util.HashMap; import java.util.HashSet; -import java.util.List; import java.util.Map; import java.util.Set; import java.util.concurrent.Future; @@ -70,7 +71,6 @@ import static org.hamcrest.CoreMatchers.instanceOf; import static org.junit.Assert.assertArrayEquals; import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNotNull; import static org.junit.Assert.assertThat; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; @@ -98,11 +98,13 @@ public class RestClientSingleHostTests extends RestClientTestCase { public void createRestClient() throws IOException { httpClient = mock(CloseableHttpAsyncClient.class); when(httpClient.execute(any(HttpAsyncRequestProducer.class), any(HttpAsyncResponseConsumer.class), - any(FutureCallback.class))).thenAnswer(new Answer>() { + any(HttpClientContext.class), any(FutureCallback.class))).thenAnswer(new Answer>() { @Override public Future answer(InvocationOnMock invocationOnMock) throws Throwable { HttpAsyncRequestProducer requestProducer = (HttpAsyncRequestProducer) invocationOnMock.getArguments()[0]; - FutureCallback futureCallback = (FutureCallback) invocationOnMock.getArguments()[2]; 
+ HttpClientContext context = (HttpClientContext) invocationOnMock.getArguments()[2]; + assertThat(context.getAuthCache().get(httpHost), instanceOf(BasicScheme.class)); + FutureCallback futureCallback = (FutureCallback) invocationOnMock.getArguments()[3]; HttpUriRequest request = (HttpUriRequest)requestProducer.generateRequest(); //return the desired status code or exception depending on the path if (request.getURI().getPath().equals("/soe")) { @@ -132,13 +134,23 @@ public Future answer(InvocationOnMock invocationOnMock) throws Thr }); - int numHeaders = randomIntBetween(0, 3); - defaultHeaders = generateHeaders("Header-default", "Header-array", numHeaders); + defaultHeaders = RestClientTestUtil.randomHeaders(getRandom(), "Header-default"); httpHost = new HttpHost("localhost", 9200); failureListener = new HostsTrackingFailureListener(); restClient = new RestClient(httpClient, 10000, defaultHeaders, new HttpHost[]{httpHost}, null, failureListener); } + public void testNullPath() throws IOException { + for (String method : getHttpMethods()) { + try { + restClient.performRequest(method, null); + fail("path set to null should fail!"); + } catch (NullPointerException e) { + assertEquals("path must not be null", e.getMessage()); + } + } + } + /** * Verifies the content of the {@link HttpRequest} that's internally created and passed through to the http client */ @@ -149,7 +161,7 @@ public void testInternalHttpRequest() throws Exception { for (String httpMethod : getHttpMethods()) { HttpUriRequest expectedRequest = performRandomRequest(httpMethod); verify(httpClient, times(++times)).execute(requestArgumentCaptor.capture(), - any(HttpAsyncResponseConsumer.class), any(FutureCallback.class)); + any(HttpAsyncResponseConsumer.class), any(HttpClientContext.class), any(FutureCallback.class)); HttpUriRequest actualRequest = (HttpUriRequest)requestArgumentCaptor.getValue().generateRequest(); assertEquals(expectedRequest.getURI(), actualRequest.getURI()); assertEquals(expectedRequest.getClass(), actualRequest.getClass()); @@ -209,23 +221,45 @@ public void testOkStatusCodes() throws IOException { */ public void testErrorStatusCodes() throws IOException { for (String method : getHttpMethods()) { + Set expectedIgnores = new HashSet<>(); + String ignoreParam = ""; + if (HttpHead.METHOD_NAME.equals(method)) { + expectedIgnores.add(404); + } + if (randomBoolean()) { + int numIgnores = randomIntBetween(1, 3); + for (int i = 0; i < numIgnores; i++) { + Integer code = randomFrom(getAllErrorStatusCodes()); + expectedIgnores.add(code); + ignoreParam += code; + if (i < numIgnores - 1) { + ignoreParam += ","; + } + } + } //error status codes should cause an exception to be thrown for (int errorStatusCode : getAllErrorStatusCodes()) { try { - Response response = performRequest(method, "/" + errorStatusCode); - if (method.equals("HEAD") && errorStatusCode == 404) { - //no exception gets thrown although we got a 404 - assertThat(response.getStatusLine().getStatusCode(), equalTo(errorStatusCode)); + Map params; + if (ignoreParam.isEmpty()) { + params = Collections.emptyMap(); + } else { + params = Collections.singletonMap("ignore", ignoreParam); + } + Response response = performRequest(method, "/" + errorStatusCode, params); + if (expectedIgnores.contains(errorStatusCode)) { + //no exception gets thrown although we got an error status code, as it was configured to be ignored + assertEquals(errorStatusCode, response.getStatusLine().getStatusCode()); } else { fail("request should have failed"); } } catch(ResponseException e) 
{ - if (method.equals("HEAD") && errorStatusCode == 404) { + if (expectedIgnores.contains(errorStatusCode)) { throw e; } - assertThat(e.getResponse().getStatusLine().getStatusCode(), equalTo(errorStatusCode)); + assertEquals(errorStatusCode, e.getResponse().getStatusLine().getStatusCode()); } - if (errorStatusCode <= 500) { + if (errorStatusCode <= 500 || expectedIgnores.contains(errorStatusCode)) { failureListener.assertNotCalled(); } else { failureListener.assertCalled(httpHost); @@ -328,44 +362,27 @@ public void testNullParams() throws IOException { */ public void testHeaders() throws IOException { for (String method : getHttpMethods()) { - final int numHeaders = randomIntBetween(1, 5); - final Header[] headers = generateHeaders("Header", null, numHeaders); - final Map> expectedHeaders = new HashMap<>(); - - addHeaders(expectedHeaders, defaultHeaders, headers); + final Header[] requestHeaders = RestClientTestUtil.randomHeaders(getRandom(), "Header"); final int statusCode = randomStatusCode(getRandom()); Response esResponse; try { - esResponse = restClient.performRequest(method, "/" + statusCode, headers); + esResponse = restClient.performRequest(method, "/" + statusCode, requestHeaders); } catch(ResponseException e) { esResponse = e.getResponse(); } assertThat(esResponse.getStatusLine().getStatusCode(), equalTo(statusCode)); - for (Header responseHeader : esResponse.getHeaders()) { - final String name = responseHeader.getName(); - final String value = responseHeader.getValue(); - final List values = expectedHeaders.get(name); - assertNotNull("found response header [" + name + "] that wasn't originally sent: " + value, values); - assertTrue("found incorrect response header [" + name + "]: " + value, values.remove(value)); - - // we've collected them all - if (values.isEmpty()) { - expectedHeaders.remove(name); - } - } - assertTrue("some headers that were sent weren't returned " + expectedHeaders, expectedHeaders.isEmpty()); + assertHeaders(defaultHeaders, requestHeaders, esResponse.getHeaders(), Collections.emptySet()); } } private HttpUriRequest performRandomRequest(String method) throws Exception { String uriAsString = "/" + randomStatusCode(getRandom()); URIBuilder uriBuilder = new URIBuilder(uriAsString); - Map params = Collections.emptyMap(); + final Map params = new HashMap<>(); boolean hasParams = randomBoolean(); if (hasParams) { int numParams = randomIntBetween(1, 3); - params = new HashMap<>(numParams); for (int i = 0; i < numParams; i++) { String paramKey = "param-" + i; String paramValue = randomAsciiOfLengthBetween(3, 10); @@ -373,6 +390,14 @@ private HttpUriRequest performRandomRequest(String method) throws Exception { uriBuilder.addParameter(paramKey, paramValue); } } + if (randomBoolean()) { + //randomly add some ignore parameter, which doesn't get sent as part of the request + String ignore = Integer.toString(randomFrom(RestClientTestUtil.getAllErrorStatusCodes())); + if (randomBoolean()) { + ignore += "," + Integer.toString(randomFrom(RestClientTestUtil.getAllErrorStatusCodes())); + } + params.put("ignore", ignore); + } URI uri = uriBuilder.build(); HttpUriRequest request; @@ -413,10 +438,9 @@ private HttpUriRequest performRandomRequest(String method) throws Exception { } Header[] headers = new Header[0]; - final int numHeaders = randomIntBetween(1, 5); - final Set uniqueNames = new HashSet<>(numHeaders); + final Set uniqueNames = new HashSet<>(); if (randomBoolean()) { - headers = generateHeaders("Header", "Header-array", numHeaders); + headers = 
RestClientTestUtil.randomHeaders(getRandom(), "Header"); for (Header header : headers) { request.addHeader(header); uniqueNames.add(header.getName()); @@ -444,16 +468,25 @@ private HttpUriRequest performRandomRequest(String method) throws Exception { } private Response performRequest(String method, String endpoint, Header... headers) throws IOException { - switch(randomIntBetween(0, 2)) { + return performRequest(method, endpoint, Collections.emptyMap(), headers); + } + + private Response performRequest(String method, String endpoint, Map params, Header... headers) throws IOException { + int methodSelector; + if (params.isEmpty()) { + methodSelector = randomIntBetween(0, 2); + } else { + methodSelector = randomIntBetween(1, 2); + } + switch(methodSelector) { case 0: return restClient.performRequest(method, endpoint, headers); case 1: - return restClient.performRequest(method, endpoint, Collections.emptyMap(), headers); + return restClient.performRequest(method, endpoint, params, headers); case 2: - return restClient.performRequest(method, endpoint, Collections.emptyMap(), (HttpEntity)null, headers); + return restClient.performRequest(method, endpoint, params, (HttpEntity)null, headers); default: throw new UnsupportedOperationException(); } } - } diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientTests.java new file mode 100644 index 0000000000000..6978aab58fe71 --- /dev/null +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientTests.java @@ -0,0 +1,103 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client; + +import org.apache.http.Header; +import org.apache.http.HttpHost; +import org.apache.http.impl.nio.client.CloseableHttpAsyncClient; + +import java.net.URI; +import java.util.Collections; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.fail; +import static org.mockito.Mockito.mock; + +public class RestClientTests extends RestClientTestCase { + + public void testPerformAsyncWithUnsupportedMethod() throws Exception { + RestClient.SyncResponseListener listener = new RestClient.SyncResponseListener(10000); + try (RestClient restClient = createRestClient()) { + restClient.performRequestAsync("unsupported", randomAsciiOfLength(5), listener); + listener.get(); + + fail("should have failed because of unsupported method"); + } catch (UnsupportedOperationException exception) { + assertEquals("http method not supported: unsupported", exception.getMessage()); + } + } + + public void testPerformAsyncWithNullParams() throws Exception { + RestClient.SyncResponseListener listener = new RestClient.SyncResponseListener(10000); + try (RestClient restClient = createRestClient()) { + restClient.performRequestAsync(randomAsciiOfLength(5), randomAsciiOfLength(5), null, listener); + listener.get(); + + fail("should have failed because of null parameters"); + } catch (NullPointerException exception) { + assertEquals("params must not be null", exception.getMessage()); + } + } + + public void testPerformAsyncWithNullHeaders() throws Exception { + RestClient.SyncResponseListener listener = new RestClient.SyncResponseListener(10000); + try (RestClient restClient = createRestClient()) { + restClient.performRequestAsync("GET", randomAsciiOfLength(5), listener, (Header) null); + listener.get(); + + fail("should have failed because of null headers"); + } catch (NullPointerException exception) { + assertEquals("request header must not be null", exception.getMessage()); + } + } + + public void testPerformAsyncWithWrongEndpoint() throws Exception { + RestClient.SyncResponseListener listener = new RestClient.SyncResponseListener(10000); + try (RestClient restClient = createRestClient()) { + restClient.performRequestAsync("GET", "::http:///", listener); + listener.get(); + + fail("should have failed because of wrong endpoint"); + } catch (IllegalArgumentException exception) { + assertEquals("Expected scheme name at index 0: ::http:///", exception.getMessage()); + } + } + + public void testBuildUriLeavesPathUntouched() { + { + URI uri = RestClient.buildUri("/foo$bar", "/index/type/id", Collections.emptyMap()); + assertEquals("/foo$bar/index/type/id", uri.getPath()); + } + { + URI uri = RestClient.buildUri(null, "/foo$bar/ty/pe/i/d", Collections.emptyMap()); + assertEquals("/foo$bar/ty/pe/i/d", uri.getPath()); + } + { + URI uri = RestClient.buildUri(null, "/index/type/id", Collections.singletonMap("foo$bar", "x/y/z")); + assertEquals("/index/type/id", uri.getPath()); + assertEquals("foo$bar=x/y/z", uri.getQuery()); + } + } + + private static RestClient createRestClient() { + HttpHost[] hosts = new HttpHost[]{new HttpHost("localhost", 9200)}; + return new RestClient(mock(CloseableHttpAsyncClient.class), randomLongBetween(1_000, 30_000), new Header[]{}, hosts, null, null); + } +} diff --git a/client/rest/src/test/java/org/elasticsearch/client/documentation/RestClientDocumentation.java b/client/rest/src/test/java/org/elasticsearch/client/documentation/RestClientDocumentation.java new file mode 100644 index 0000000000000..1bad6b5f6d6fd --- /dev/null +++ 
b/client/rest/src/test/java/org/elasticsearch/client/documentation/RestClientDocumentation.java @@ -0,0 +1,337 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.client.documentation; + +import org.apache.http.Header; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHost; +import org.apache.http.RequestLine; +import org.apache.http.auth.AuthScope; +import org.apache.http.auth.UsernamePasswordCredentials; +import org.apache.http.client.CredentialsProvider; +import org.apache.http.client.config.RequestConfig; +import org.apache.http.entity.ContentType; +import org.apache.http.impl.client.BasicCredentialsProvider; +import org.apache.http.impl.nio.client.HttpAsyncClientBuilder; +import org.apache.http.impl.nio.reactor.IOReactorConfig; +import org.apache.http.message.BasicHeader; +import org.apache.http.nio.entity.NStringEntity; +import org.apache.http.ssl.SSLContextBuilder; +import org.apache.http.ssl.SSLContexts; +import org.apache.http.util.EntityUtils; +import org.elasticsearch.client.HttpAsyncResponseConsumerFactory; +import org.elasticsearch.client.Response; +import org.elasticsearch.client.ResponseListener; +import org.elasticsearch.client.RestClient; +import org.elasticsearch.client.RestClientBuilder; + +import javax.net.ssl.SSLContext; +import java.io.IOException; +import java.io.InputStream; +import java.nio.file.Files; +import java.nio.file.Path; +import java.nio.file.Paths; +import java.security.KeyStore; +import java.util.Collections; +import java.util.Map; +import java.util.concurrent.CountDownLatch; + +/** + * This class is used to generate the Java low-level REST client documentation. + * You need to wrap your code between two tags like: + * // tag::example[] + * // end::example[] + * + * Where example is your tag name. + * + * Then in the documentation, you can extract what is between tag and end tags with + * ["source","java",subs="attributes,callouts,macros"] + * -------------------------------------------------- + * include-tagged::{doc-tests}/RestClientDocumentation.java[example] + * -------------------------------------------------- + * + * Note that this is not a test class as we are only interested in testing that docs snippets compile. We don't want + * to send requests to a node and we don't even have the tools to do it. 
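Concretely, a documented snippet is just a compiling method whose relevant lines sit between the two tag comments; the docs page then pulls those lines in with include-tagged::. The class and tag name below ("init-example") are made up for illustration only:

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

import java.io.IOException;

public class SnippetSketch {
    // Never executed; it only has to compile, so published snippets cannot drift out of date.
    public void initExample() throws IOException {
        // tag::init-example[]
        RestClient restClient = RestClient.builder(
                new HttpHost("localhost", 9200, "http")).build();
        restClient.close();
        // end::init-example[]
    }
}

The matching docs page would then reference it with include-tagged::{doc-tests}/SnippetSketch.java[init-example].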
+ */ +@SuppressWarnings("unused") +public class RestClientDocumentation { + + @SuppressWarnings("unused") + public void testUsage() throws IOException, InterruptedException { + + //tag::rest-client-init + RestClient restClient = RestClient.builder( + new HttpHost("localhost", 9200, "http"), + new HttpHost("localhost", 9201, "http")).build(); + //end::rest-client-init + + //tag::rest-client-close + restClient.close(); + //end::rest-client-close + + { + //tag::rest-client-init-default-headers + RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "http")); + Header[] defaultHeaders = new Header[]{new BasicHeader("header", "value")}; + builder.setDefaultHeaders(defaultHeaders); // <1> + //end::rest-client-init-default-headers + } + { + //tag::rest-client-init-max-retry-timeout + RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "http")); + builder.setMaxRetryTimeoutMillis(10000); // <1> + //end::rest-client-init-max-retry-timeout + } + { + //tag::rest-client-init-failure-listener + RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "http")); + builder.setFailureListener(new RestClient.FailureListener() { + @Override + public void onFailure(HttpHost host) { + // <1> + } + }); + //end::rest-client-init-failure-listener + } + { + //tag::rest-client-init-request-config-callback + RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "http")); + builder.setRequestConfigCallback(new RestClientBuilder.RequestConfigCallback() { + @Override + public RequestConfig.Builder customizeRequestConfig(RequestConfig.Builder requestConfigBuilder) { + return requestConfigBuilder.setSocketTimeout(10000); // <1> + } + }); + //end::rest-client-init-request-config-callback + } + { + //tag::rest-client-init-client-config-callback + RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "http")); + builder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() { + @Override + public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) { + return httpClientBuilder.setProxy(new HttpHost("proxy", 9000, "http")); // <1> + } + }); + //end::rest-client-init-client-config-callback + } + + { + //tag::rest-client-verb-endpoint + Response response = restClient.performRequest("GET", "/"); // <1> + //end::rest-client-verb-endpoint + } + { + //tag::rest-client-headers + Response response = restClient.performRequest("GET", "/", new BasicHeader("header", "value")); + //end::rest-client-headers + } + { + //tag::rest-client-verb-endpoint-params + Map params = Collections.singletonMap("pretty", "true"); + Response response = restClient.performRequest("GET", "/", params); // <1> + //end::rest-client-verb-endpoint-params + } + { + //tag::rest-client-verb-endpoint-params-body + Map params = Collections.emptyMap(); + String jsonString = "{" + + "\"user\":\"kimchy\"," + + "\"postDate\":\"2013-01-30\"," + + "\"message\":\"trying out Elasticsearch\"" + + "}"; + HttpEntity entity = new NStringEntity(jsonString, ContentType.APPLICATION_JSON); + Response response = restClient.performRequest("PUT", "/posts/doc/1", params, entity); // <1> + //end::rest-client-verb-endpoint-params-body + } + { + //tag::rest-client-response-consumer + Map params = Collections.emptyMap(); + HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory consumerFactory = + new HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory(30 * 1024 * 1024); + Response 
response = restClient.performRequest("GET", "/posts/_search", params, null, consumerFactory); // <1> + //end::rest-client-response-consumer + } + { + //tag::rest-client-verb-endpoint-async + ResponseListener responseListener = new ResponseListener() { + @Override + public void onSuccess(Response response) { + // <1> + } + + @Override + public void onFailure(Exception exception) { + // <2> + } + }; + restClient.performRequestAsync("GET", "/", responseListener); // <3> + //end::rest-client-verb-endpoint-async + + //tag::rest-client-headers-async + Header[] headers = { + new BasicHeader("header1", "value1"), + new BasicHeader("header2", "value2") + }; + restClient.performRequestAsync("GET", "/", responseListener, headers); + //end::rest-client-headers-async + + //tag::rest-client-verb-endpoint-params-async + Map params = Collections.singletonMap("pretty", "true"); + restClient.performRequestAsync("GET", "/", params, responseListener); // <1> + //end::rest-client-verb-endpoint-params-async + + //tag::rest-client-verb-endpoint-params-body-async + String jsonString = "{" + + "\"user\":\"kimchy\"," + + "\"postDate\":\"2013-01-30\"," + + "\"message\":\"trying out Elasticsearch\"" + + "}"; + HttpEntity entity = new NStringEntity(jsonString, ContentType.APPLICATION_JSON); + restClient.performRequestAsync("PUT", "/posts/doc/1", params, entity, responseListener); // <1> + //end::rest-client-verb-endpoint-params-body-async + + //tag::rest-client-response-consumer-async + HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory consumerFactory = + new HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory(30 * 1024 * 1024); + restClient.performRequestAsync("GET", "/posts/_search", params, null, consumerFactory, responseListener); // <1> + //end::rest-client-response-consumer-async + } + { + //tag::rest-client-response2 + Response response = restClient.performRequest("GET", "/"); + RequestLine requestLine = response.getRequestLine(); // <1> + HttpHost host = response.getHost(); // <2> + int statusCode = response.getStatusLine().getStatusCode(); // <3> + Header[] headers = response.getHeaders(); // <4> + String responseBody = EntityUtils.toString(response.getEntity()); // <5> + //end::rest-client-response2 + } + { + HttpEntity[] documents = new HttpEntity[10]; + //tag::rest-client-async-example + final CountDownLatch latch = new CountDownLatch(documents.length); + for (int i = 0; i < documents.length; i++) { + restClient.performRequestAsync( + "PUT", + "/posts/doc/" + i, + Collections.emptyMap(), + //let's assume that the documents are stored in an HttpEntity array + documents[i], + new ResponseListener() { + @Override + public void onSuccess(Response response) { + // <1> + latch.countDown(); + } + + @Override + public void onFailure(Exception exception) { + // <2> + latch.countDown(); + } + } + ); + } + latch.await(); + //end::rest-client-async-example + } + + } + + @SuppressWarnings("unused") + public void testCommonConfiguration() throws Exception { + { + //tag::rest-client-config-timeouts + RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200)) + .setRequestConfigCallback(new RestClientBuilder.RequestConfigCallback() { + @Override + public RequestConfig.Builder customizeRequestConfig(RequestConfig.Builder requestConfigBuilder) { + return requestConfigBuilder.setConnectTimeout(5000) + .setSocketTimeout(60000); + } + }) + .setMaxRetryTimeoutMillis(60000); + //end::rest-client-config-timeouts + } + { + //tag::rest-client-config-threads + 
RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200)) + .setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() { + @Override + public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) { + return httpClientBuilder.setDefaultIOReactorConfig( + IOReactorConfig.custom().setIoThreadCount(1).build()); + } + }); + //end::rest-client-config-threads + } + { + //tag::rest-client-config-basic-auth + final CredentialsProvider credentialsProvider = new BasicCredentialsProvider(); + credentialsProvider.setCredentials(AuthScope.ANY, + new UsernamePasswordCredentials("user", "password")); + + RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200)) + .setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() { + @Override + public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) { + return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider); + } + }); + //end::rest-client-config-basic-auth + } + { + //tag::rest-client-config-disable-preemptive-auth + final CredentialsProvider credentialsProvider = new BasicCredentialsProvider(); + credentialsProvider.setCredentials(AuthScope.ANY, + new UsernamePasswordCredentials("user", "password")); + + RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200)) + .setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() { + @Override + public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) { + httpClientBuilder.disableAuthCaching(); // <1> + return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider); + } + }); + //end::rest-client-config-disable-preemptive-auth + } + { + Path keyStorePath = Paths.get(""); + String keyStorePass = ""; + //tag::rest-client-config-encrypted-communication + KeyStore truststore = KeyStore.getInstance("jks"); + try (InputStream is = Files.newInputStream(keyStorePath)) { + truststore.load(is, keyStorePass.toCharArray()); + } + SSLContextBuilder sslBuilder = SSLContexts.custom().loadTrustMaterial(truststore, null); + final SSLContext sslContext = sslBuilder.build(); + RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "https")) + .setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() { + @Override + public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) { + return httpClientBuilder.setSSLContext(sslContext); + } + }); + //end::rest-client-config-encrypted-communication + } + } +} diff --git a/client/rest/src/test/resources/testks.jks b/client/rest/src/test/resources/testks.jks new file mode 100644 index 0000000000000..cd706db5e3834 Binary files /dev/null and b/client/rest/src/test/resources/testks.jks differ diff --git a/client/sniffer/build.gradle b/client/sniffer/build.gradle index f35110e4f9e61..6d6541301ae6d 100644 --- a/client/sniffer/build.gradle +++ b/client/sniffer/build.gradle @@ -18,7 +18,6 @@ */ import org.elasticsearch.gradle.precommit.PrecommitTasks -import org.gradle.api.JavaVersion apply plugin: 'elasticsearch.build' apply plugin: 'ru.vyarus.animalsniffer' @@ -29,9 +28,18 @@ targetCompatibility = JavaVersion.VERSION_1_7 sourceCompatibility = JavaVersion.VERSION_1_7 group = 'org.elasticsearch.client' +archivesBaseName = 'elasticsearch-rest-client-sniffer' + +publishing { + publications { + nebula { + artifactId = archivesBaseName + } + } +} dependencies { - compile 
"org.elasticsearch.client:rest:${version}" + compile "org.elasticsearch.client:elasticsearch-rest-client:${version}" compile "org.apache.httpcomponents:httpclient:${versions.httpclient}" compile "org.apache.httpcomponents:httpcore:${versions.httpcore}" compile "commons-codec:commons-codec:${versions.commonscodec}" @@ -92,4 +100,4 @@ thirdPartyAudit.excludes = [ //commons-logging provided dependencies 'javax.servlet.ServletContextEvent', 'javax.servlet.ServletContextListener' -] +] \ No newline at end of file diff --git a/client/sniffer/licenses/jackson-core-2.8.1.jar.sha1 b/client/sniffer/licenses/jackson-core-2.8.1.jar.sha1 deleted file mode 100644 index b92131d6fab45..0000000000000 --- a/client/sniffer/licenses/jackson-core-2.8.1.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -fd13b1c033741d48291315c6370f7d475a42dccf \ No newline at end of file diff --git a/client/sniffer/licenses/jackson-core-2.8.6.jar.sha1 b/client/sniffer/licenses/jackson-core-2.8.6.jar.sha1 new file mode 100644 index 0000000000000..af7677d13c28c --- /dev/null +++ b/client/sniffer/licenses/jackson-core-2.8.6.jar.sha1 @@ -0,0 +1 @@ +2ef7b1cc34de149600f5e75bc2d5bf40de894e60 \ No newline at end of file diff --git a/client/sniffer/src/main/java/org/elasticsearch/client/sniff/Sniffer.java b/client/sniffer/src/main/java/org/elasticsearch/client/sniff/Sniffer.java index 89a7d9df8e60d..c655babd9ed3d 100644 --- a/client/sniffer/src/main/java/org/elasticsearch/client/sniff/Sniffer.java +++ b/client/sniffer/src/main/java/org/elasticsearch/client/sniff/Sniffer.java @@ -27,12 +27,16 @@ import java.io.Closeable; import java.io.IOException; +import java.security.AccessController; +import java.security.PrivilegedAction; import java.util.List; import java.util.concurrent.Executors; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.ScheduledFuture; +import java.util.concurrent.ThreadFactory; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; /** * Class responsible for sniffing nodes from some source (default is elasticsearch itself) and setting them to a provided instance of @@ -45,6 +49,7 @@ public class Sniffer implements Closeable { private static final Log logger = LogFactory.getLog(Sniffer.class); + private static final String SNIFFER_THREAD_NAME = "es_rest_client_sniffer"; private final Task task; @@ -79,7 +84,8 @@ private Task(HostsSniffer hostsSniffer, RestClient restClient, long sniffInterva this.restClient = restClient; this.sniffIntervalMillis = sniffIntervalMillis; this.sniffAfterFailureDelayMillis = sniffAfterFailureDelayMillis; - this.scheduledExecutorService = Executors.newScheduledThreadPool(1); + SnifferThreadFactory threadFactory = new SnifferThreadFactory(SNIFFER_THREAD_NAME); + this.scheduledExecutorService = Executors.newScheduledThreadPool(1, threadFactory); scheduleNextRun(0); } @@ -151,4 +157,34 @@ synchronized void shutdown() { public static SnifferBuilder builder(RestClient restClient) { return new SnifferBuilder(restClient); } + + private static class SnifferThreadFactory implements ThreadFactory { + + private final AtomicInteger threadNumber = new AtomicInteger(1); + private final String namePrefix; + private final ThreadFactory originalThreadFactory; + + private SnifferThreadFactory(String namePrefix) { + this.namePrefix = namePrefix; + this.originalThreadFactory = AccessController.doPrivileged(new PrivilegedAction() { + @Override + public ThreadFactory run() { + return 
Executors.defaultThreadFactory(); + } + }); + } + + @Override + public Thread newThread(final Runnable r) { + return AccessController.doPrivileged(new PrivilegedAction() { + @Override + public Thread run() { + Thread t = originalThreadFactory.newThread(r); + t.setName(namePrefix + "[T#" + threadNumber.getAndIncrement() + "]"); + t.setDaemon(true); + return t; + } + }); + } + } } diff --git a/client/sniffer/src/test/java/org/elasticsearch/client/sniff/ElasticsearchHostsSnifferTests.java b/client/sniffer/src/test/java/org/elasticsearch/client/sniff/ElasticsearchHostsSnifferTests.java index a926cabb87d7b..aeb0620134b55 100644 --- a/client/sniffer/src/test/java/org/elasticsearch/client/sniff/ElasticsearchHostsSnifferTests.java +++ b/client/sniffer/src/test/java/org/elasticsearch/client/sniff/ElasticsearchHostsSnifferTests.java @@ -19,7 +19,7 @@ package org.elasticsearch.client.sniff; -import com.carrotsearch.randomizedtesting.generators.RandomInts; +import com.carrotsearch.randomizedtesting.generators.RandomNumbers; import com.carrotsearch.randomizedtesting.generators.RandomPicks; import com.carrotsearch.randomizedtesting.generators.RandomStrings; import com.fasterxml.jackson.core.JsonFactory; @@ -69,7 +69,7 @@ public class ElasticsearchHostsSnifferTests extends RestClientTestCase { @Before public void startHttpServer() throws IOException { - this.sniffRequestTimeout = RandomInts.randomIntBetween(getRandom(), 1000, 10000); + this.sniffRequestTimeout = RandomNumbers.randomIntBetween(getRandom(), 1000, 10000); this.scheme = RandomPicks.randomFrom(getRandom(), ElasticsearchHostsSniffer.Scheme.values()); if (rarely()) { this.sniffResponse = SniffResponse.buildFailure(); @@ -101,7 +101,7 @@ public void testConstructorValidation() throws IOException { assertEquals(e.getMessage(), "scheme cannot be null"); } try { - new ElasticsearchHostsSniffer(restClient, RandomInts.randomIntBetween(getRandom(), Integer.MIN_VALUE, 0), + new ElasticsearchHostsSniffer(restClient, RandomNumbers.randomIntBetween(getRandom(), Integer.MIN_VALUE, 0), ElasticsearchHostsSniffer.Scheme.HTTP); fail("should have failed"); } catch (IllegalArgumentException e) { @@ -175,7 +175,7 @@ public void handle(HttpExchange httpExchange) throws IOException { } private static SniffResponse buildSniffResponse(ElasticsearchHostsSniffer.Scheme scheme) throws IOException { - int numNodes = RandomInts.randomIntBetween(getRandom(), 1, 5); + int numNodes = RandomNumbers.randomIntBetween(getRandom(), 1, 5); List hosts = new ArrayList<>(numNodes); JsonFactory jsonFactory = new JsonFactory(); StringWriter writer = new StringWriter(); @@ -205,7 +205,7 @@ private static SniffResponse buildSniffResponse(ElasticsearchHostsSniffer.Scheme boolean isHttpEnabled = rarely() == false; if (isHttpEnabled) { String host = "host" + i; - int port = RandomInts.randomIntBetween(getRandom(), 9200, 9299); + int port = RandomNumbers.randomIntBetween(getRandom(), 9200, 9299); HttpHost httpHost = new HttpHost(host, port, scheme.toString()); hosts.add(httpHost); generator.writeObjectFieldStart("http"); @@ -228,7 +228,7 @@ private static SniffResponse buildSniffResponse(ElasticsearchHostsSniffer.Scheme } if (getRandom().nextBoolean()) { String[] roles = {"master", "data", "ingest"}; - int numRoles = RandomInts.randomIntBetween(getRandom(), 0, 3); + int numRoles = RandomNumbers.randomIntBetween(getRandom(), 0, 3); Set nodeRoles = new HashSet<>(numRoles); for (int j = 0; j < numRoles; j++) { String role; @@ -242,7 +242,7 @@ private static SniffResponse 
buildSniffResponse(ElasticsearchHostsSniffer.Scheme } generator.writeEndArray(); } - int numAttributes = RandomInts.randomIntBetween(getRandom(), 0, 3); + int numAttributes = RandomNumbers.randomIntBetween(getRandom(), 0, 3); Map attributes = new HashMap<>(numAttributes); for (int j = 0; j < numAttributes; j++) { attributes.put("attr" + j, "value" + j); @@ -291,6 +291,6 @@ static SniffResponse buildResponse(String nodesInfoBody, List hosts) { } private static int randomErrorResponseCode() { - return RandomInts.randomIntBetween(getRandom(), 400, 599); + return RandomNumbers.randomIntBetween(getRandom(), 400, 599); } } diff --git a/client/sniffer/src/test/java/org/elasticsearch/client/sniff/SnifferBuilderTests.java b/client/sniffer/src/test/java/org/elasticsearch/client/sniff/SnifferBuilderTests.java index b0c387d733aad..9a7359e9c7215 100644 --- a/client/sniffer/src/test/java/org/elasticsearch/client/sniff/SnifferBuilderTests.java +++ b/client/sniffer/src/test/java/org/elasticsearch/client/sniff/SnifferBuilderTests.java @@ -19,7 +19,7 @@ package org.elasticsearch.client.sniff; -import com.carrotsearch.randomizedtesting.generators.RandomInts; +import com.carrotsearch.randomizedtesting.generators.RandomNumbers; import org.apache.http.HttpHost; import org.elasticsearch.client.RestClient; import org.elasticsearch.client.RestClientTestCase; @@ -31,7 +31,7 @@ public class SnifferBuilderTests extends RestClientTestCase { public void testBuild() throws Exception { - int numNodes = RandomInts.randomIntBetween(getRandom(), 1, 5); + int numNodes = RandomNumbers.randomIntBetween(getRandom(), 1, 5); HttpHost[] hosts = new HttpHost[numNodes]; for (int i = 0; i < numNodes; i++) { hosts[i] = new HttpHost("localhost", 9200 + i); @@ -46,14 +46,14 @@ public void testBuild() throws Exception { } try { - Sniffer.builder(client).setSniffIntervalMillis(RandomInts.randomIntBetween(getRandom(), Integer.MIN_VALUE, 0)); + Sniffer.builder(client).setSniffIntervalMillis(RandomNumbers.randomIntBetween(getRandom(), Integer.MIN_VALUE, 0)); fail("should have failed"); } catch(IllegalArgumentException e) { assertEquals("sniffIntervalMillis must be greater than 0", e.getMessage()); } try { - Sniffer.builder(client).setSniffAfterFailureDelayMillis(RandomInts.randomIntBetween(getRandom(), Integer.MIN_VALUE, 0)); + Sniffer.builder(client).setSniffAfterFailureDelayMillis(RandomNumbers.randomIntBetween(getRandom(), Integer.MIN_VALUE, 0)); fail("should have failed"); } catch(IllegalArgumentException e) { assertEquals("sniffAfterFailureDelayMillis must be greater than 0", e.getMessage()); @@ -74,10 +74,10 @@ public void testBuild() throws Exception { SnifferBuilder builder = Sniffer.builder(client); if (getRandom().nextBoolean()) { - builder.setSniffIntervalMillis(RandomInts.randomIntBetween(getRandom(), 1, Integer.MAX_VALUE)); + builder.setSniffIntervalMillis(RandomNumbers.randomIntBetween(getRandom(), 1, Integer.MAX_VALUE)); } if (getRandom().nextBoolean()) { - builder.setSniffAfterFailureDelayMillis(RandomInts.randomIntBetween(getRandom(), 1, Integer.MAX_VALUE)); + builder.setSniffAfterFailureDelayMillis(RandomNumbers.randomIntBetween(getRandom(), 1, Integer.MAX_VALUE)); } if (getRandom().nextBoolean()) { builder.setHostsSniffer(new MockHostsSniffer()); diff --git a/client/sniffer/src/test/java/org/elasticsearch/client/sniff/documentation/SnifferDocumentation.java b/client/sniffer/src/test/java/org/elasticsearch/client/sniff/documentation/SnifferDocumentation.java new file mode 100644 index 0000000000000..199632d478f81 --- 
/dev/null +++ b/client/sniffer/src/test/java/org/elasticsearch/client/sniff/documentation/SnifferDocumentation.java @@ -0,0 +1,131 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.client.sniff.documentation; + +import org.apache.http.HttpHost; +import org.elasticsearch.client.RestClient; +import org.elasticsearch.client.sniff.ElasticsearchHostsSniffer; +import org.elasticsearch.client.sniff.HostsSniffer; +import org.elasticsearch.client.sniff.SniffOnFailureListener; +import org.elasticsearch.client.sniff.Sniffer; + +import java.io.IOException; +import java.util.List; +import java.util.concurrent.TimeUnit; + +/** + * This class is used to generate the Java low-level REST client documentation. + * You need to wrap your code between two tags like: + * // tag::example[] + * // end::example[] + * + * Where example is your tag name. + * + * Then in the documentation, you can extract what is between tag and end tags with + * ["source","java",subs="attributes,callouts,macros"] + * -------------------------------------------------- + * include-tagged::{doc-tests}/SnifferDocumentation.java[example] + * -------------------------------------------------- + * + * Note that this is not a test class as we are only interested in testing that docs snippets compile. We don't want + * to send requests to a node and we don't even have the tools to do it. 
+ */ +@SuppressWarnings("unused") +public class SnifferDocumentation { + + @SuppressWarnings("unused") + public void testUsage() throws IOException { + { + //tag::sniffer-init + RestClient restClient = RestClient.builder( + new HttpHost("localhost", 9200, "http")) + .build(); + Sniffer sniffer = Sniffer.builder(restClient).build(); + //end::sniffer-init + + //tag::sniffer-close + sniffer.close(); + restClient.close(); + //end::sniffer-close + } + { + //tag::sniffer-interval + RestClient restClient = RestClient.builder( + new HttpHost("localhost", 9200, "http")) + .build(); + Sniffer sniffer = Sniffer.builder(restClient) + .setSniffIntervalMillis(60000).build(); + //end::sniffer-interval + } + { + //tag::sniff-on-failure + SniffOnFailureListener sniffOnFailureListener = new SniffOnFailureListener(); + RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200)) + .setFailureListener(sniffOnFailureListener) // <1> + .build(); + Sniffer sniffer = Sniffer.builder(restClient) + .setSniffAfterFailureDelayMillis(30000) // <2> + .build(); + sniffOnFailureListener.setSniffer(sniffer); // <3> + //end::sniff-on-failure + } + { + //tag::sniffer-https + RestClient restClient = RestClient.builder( + new HttpHost("localhost", 9200, "http")) + .build(); + HostsSniffer hostsSniffer = new ElasticsearchHostsSniffer( + restClient, + ElasticsearchHostsSniffer.DEFAULT_SNIFF_REQUEST_TIMEOUT, + ElasticsearchHostsSniffer.Scheme.HTTPS); + Sniffer sniffer = Sniffer.builder(restClient) + .setHostsSniffer(hostsSniffer).build(); + //end::sniffer-https + } + { + //tag::sniff-request-timeout + RestClient restClient = RestClient.builder( + new HttpHost("localhost", 9200, "http")) + .build(); + HostsSniffer hostsSniffer = new ElasticsearchHostsSniffer( + restClient, + TimeUnit.SECONDS.toMillis(5), + ElasticsearchHostsSniffer.Scheme.HTTP); + Sniffer sniffer = Sniffer.builder(restClient) + .setHostsSniffer(hostsSniffer).build(); + //end::sniff-request-timeout + } + { + //tag::custom-hosts-sniffer + RestClient restClient = RestClient.builder( + new HttpHost("localhost", 9200, "http")) + .build(); + HostsSniffer hostsSniffer = new HostsSniffer() { + @Override + public List sniffHosts() throws IOException { + return null; // <1> + } + }; + Sniffer sniffer = Sniffer.builder(restClient) + .setHostsSniffer(hostsSniffer).build(); + //end::custom-hosts-sniffer + } + } +} diff --git a/client/test/build.gradle b/client/test/build.gradle index a7ffe79ac5c08..e57d415e9eaab 100644 --- a/client/test/build.gradle +++ b/client/test/build.gradle @@ -26,9 +26,6 @@ apply plugin: 'ru.vyarus.animalsniffer' targetCompatibility = JavaVersion.VERSION_1_7 sourceCompatibility = JavaVersion.VERSION_1_7 -install.enabled = false -uploadArchives.enabled = false - dependencies { compile "org.apache.httpcomponents:httpcore:${versions.httpcore}" compile "com.carrotsearch.randomizedtesting:randomizedtesting-runner:${versions.randomizedrunner}" @@ -61,4 +58,4 @@ namingConventions.enabled = false //we aren't releasing this jar thirdPartyAudit.enabled = false -test.enabled = false \ No newline at end of file +test.enabled = false diff --git a/client/test/src/main/java/org/elasticsearch/client/RestClientTestCase.java b/client/test/src/main/java/org/elasticsearch/client/RestClientTestCase.java index 4296932a00208..6a2a45ef2813c 100644 --- a/client/test/src/main/java/org/elasticsearch/client/RestClientTestCase.java +++ b/client/test/src/main/java/org/elasticsearch/client/RestClientTestCase.java @@ -30,16 +30,19 @@ import 
com.carrotsearch.randomizedtesting.annotations.ThreadLeakScope; import com.carrotsearch.randomizedtesting.annotations.ThreadLeakZombies; import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite; - import org.apache.http.Header; -import org.apache.http.message.BasicHeader; import java.util.ArrayList; +import java.util.HashMap; import java.util.HashSet; import java.util.List; import java.util.Map; import java.util.Set; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertTrue; + @TestMethodProviders({ JUnit3MethodProvider.class }) @@ -53,70 +56,56 @@ public abstract class RestClientTestCase extends RandomizedTest { /** - * Create the specified number of {@link Header}s. - *
<p>
- * Generated header names will be the {@code baseName} plus its index or, rarely, the {@code arrayName} if it's supplied. + * Assert that the actual headers are the expected ones given the original default and request headers. Some headers can be ignored, + * for instance in case the http client is adding its own automatically. * - * @param baseName The base name to use for all headers. - * @param arrayName The optional ({@code null}able) array name to use randomly. - * @param headers The number of headers to create. - * @return Never {@code null}. + * @param defaultHeaders the default headers set to the REST client instance + * @param requestHeaders the request headers sent with a particular request + * @param actualHeaders the actual headers as a result of the provided default and request headers + * @param ignoreHeaders header keys to be ignored as they are not part of default nor request headers, yet they + * will be part of the actual ones */ - protected static Header[] generateHeaders(final String baseName, final String arrayName, final int headers) { - final Header[] generated = new Header[headers]; - for (int i = 0; i < headers; i++) { - String headerName = baseName + i; - if (arrayName != null && rarely()) { - headerName = arrayName; + protected static void assertHeaders(final Header[] defaultHeaders, final Header[] requestHeaders, + final Header[] actualHeaders, final Set ignoreHeaders) { + final Map> expectedHeaders = new HashMap<>(); + final Set requestHeaderKeys = new HashSet<>(); + for (final Header header : requestHeaders) { + final String name = header.getName(); + addValueToListEntry(expectedHeaders, name, header.getValue()); + requestHeaderKeys.add(name); + } + for (final Header defaultHeader : defaultHeaders) { + final String name = defaultHeader.getName(); + if (requestHeaderKeys.contains(name) == false) { + addValueToListEntry(expectedHeaders, name, defaultHeader.getValue()); } - - generated[i] = new BasicHeader(headerName, randomAsciiOfLengthBetween(3, 10)); } - return generated; + Set actualIgnoredHeaders = new HashSet<>(); + for (Header responseHeader : actualHeaders) { + final String name = responseHeader.getName(); + if (ignoreHeaders.contains(name)) { + expectedHeaders.remove(name); + actualIgnoredHeaders.add(name); + continue; + } + final String value = responseHeader.getValue(); + final List values = expectedHeaders.get(name); + assertNotNull("found response header [" + name + "] that wasn't originally sent: " + value, values); + assertTrue("found incorrect response header [" + name + "]: " + value, values.remove(value)); + if (values.isEmpty()) { + expectedHeaders.remove(name); + } + } + assertEquals("some headers meant to be ignored were not part of the actual headers", ignoreHeaders, actualIgnoredHeaders); + assertTrue("some headers that were sent weren't returned " + expectedHeaders, expectedHeaders.isEmpty()); } - /** - * Create a new {@link List} within the {@code map} if none exists for {@code name} or append to the existing list. - * - * @param map The map to manipulate. - * @param name The name to create/append the list for. - * @param value The value to add. 
- */ - private static void createOrAppendList(final Map> map, final String name, final String value) { + private static void addValueToListEntry(final Map> map, final String name, final String value) { List values = map.get(name); - if (values == null) { values = new ArrayList<>(); map.put(name, values); } - values.add(value); } - - /** - * Add the {@code headers} to the {@code map} so that related tests can more easily assert that they exist. - *
- * If both the {@code defaultHeaders} and {@code headers} contain the same {@link Header}, based on its - * {@linkplain Header#getName() name}, then this will only use the {@code Header}(s) from {@code headers}. - * - * @param map The map to build with name/value(s) pairs. - * @param defaultHeaders The headers to add to the map representing default headers. - * @param headers The headers to add to the map representing request-level headers. - * @see #createOrAppendList(Map, String, String) - */ - protected static void addHeaders(final Map> map, final Header[] defaultHeaders, final Header[] headers) { - final Set uniqueHeaders = new HashSet<>(); - for (final Header header : headers) { - final String name = header.getName(); - createOrAppendList(map, name, header.getValue()); - uniqueHeaders.add(name); - } - for (final Header defaultHeader : defaultHeaders) { - final String name = defaultHeader.getName(); - if (uniqueHeaders.contains(name) == false) { - createOrAppendList(map, name, defaultHeader.getValue()); - } - } - } - } diff --git a/client/test/src/main/java/org/elasticsearch/client/RestClientTestUtil.java b/client/test/src/main/java/org/elasticsearch/client/RestClientTestUtil.java index 4d4aa00f4929f..7cda8a71d6178 100644 --- a/client/test/src/main/java/org/elasticsearch/client/RestClientTestUtil.java +++ b/client/test/src/main/java/org/elasticsearch/client/RestClientTestUtil.java @@ -19,7 +19,11 @@ package org.elasticsearch.client; +import com.carrotsearch.randomizedtesting.generators.RandomNumbers; import com.carrotsearch.randomizedtesting.generators.RandomPicks; +import com.carrotsearch.randomizedtesting.generators.RandomStrings; +import org.apache.http.Header; +import org.apache.http.message.BasicHeader; import java.util.ArrayList; import java.util.Arrays; @@ -55,7 +59,7 @@ static String randomHttpMethod(Random random) { } static int randomStatusCode(Random random) { - return RandomPicks.randomFrom(random, ALL_ERROR_STATUS_CODES); + return RandomPicks.randomFrom(random, ALL_STATUS_CODES); } static int randomOkStatusCode(Random random) { @@ -81,4 +85,23 @@ static List getAllErrorStatusCodes() { static List getAllStatusCodes() { return ALL_STATUS_CODES; } + + /** + * Create a random number of {@link org.apache.http.Header}s. + * Generated header names will either be the {@code baseName} plus its index, or exactly the provided {@code baseName} so that the + * we test also support for multiple headers with same key and different values. 
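The two test helpers introduced above are meant to be used together: `randomHeaders` (in `RestClientTestUtil`, body in the hunk below) builds a randomized set of headers, and `assertHeaders` (in `RestClientTestCase`) checks that the headers that actually went over the wire match the default and request headers. A minimal sketch, not part of this diff, assuming the class sits in the `org.elasticsearch.client` test package and that `capturedHeaders` is a hypothetical stand-in for whatever the test's mock web server recorded:

```java
package org.elasticsearch.client;

import java.util.Collections;
import java.util.Random;

import org.apache.http.Header;

// Hypothetical sketch: placed in the client test package so it can reach the
// package-private randomHeaders(...) and the protected assertHeaders(...).
public class HeaderAssertionSketch extends RestClientTestCase {

    void verifyHeaders(Random random, Header[] capturedHeaders) {
        // headers configured once on the client vs. headers sent with a single request
        Header[] defaultHeaders = RestClientTestUtil.randomHeaders(random, "Header-default");
        Header[] requestHeaders = RestClientTestUtil.randomHeaders(random, "Header");
        // ... perform a request with requestHeaders through a client built with defaultHeaders ...
        // request headers override default headers with the same name; nothing is ignored here
        assertHeaders(defaultHeaders, requestHeaders, capturedHeaders, Collections.<String>emptySet());
    }
}
```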
+ */ + static Header[] randomHeaders(Random random, final String baseName) { + int numHeaders = RandomNumbers.randomIntBetween(random, 0, 5); + final Header[] headers = new Header[numHeaders]; + for (int i = 0; i < numHeaders; i++) { + String headerName = baseName; + //randomly exercise the code path that supports multiple headers with same key + if (random.nextBoolean()) { + headerName = headerName + i; + } + headers[i] = new BasicHeader(headerName, RandomStrings.randomAsciiOfLengthBetween(random, 3, 10)); + } + return headers; + } } diff --git a/client/transport/build.gradle b/client/transport/build.gradle index c3dc2d84982b1..91e463f84683e 100644 --- a/client/transport/build.gradle +++ b/client/transport/build.gradle @@ -32,6 +32,7 @@ dependencies { compile "org.elasticsearch.plugin:reindex-client:${version}" compile "org.elasticsearch.plugin:lang-mustache-client:${version}" compile "org.elasticsearch.plugin:percolator-client:${version}" + compile "org.elasticsearch.plugin:parent-join-client:${version}" testCompile "com.carrotsearch.randomizedtesting:randomizedtesting-runner:${versions.randomizedrunner}" testCompile "junit:junit:${versions.junit}" testCompile "org.hamcrest:hamcrest-all:${versions.hamcrest}" @@ -54,4 +55,4 @@ namingConventions { testClass = 'com.carrotsearch.randomizedtesting.RandomizedTest' //we don't have integration tests skipIntegTestInDisguise = true -} +} \ No newline at end of file diff --git a/client/transport/src/main/java/org/elasticsearch/transport/client/PreBuiltTransportClient.java b/client/transport/src/main/java/org/elasticsearch/transport/client/PreBuiltTransportClient.java index dbd04079d53b1..08ac14f3ca18b 100644 --- a/client/transport/src/main/java/org/elasticsearch/transport/client/PreBuiltTransportClient.java +++ b/client/transport/src/main/java/org/elasticsearch/transport/client/PreBuiltTransportClient.java @@ -22,10 +22,12 @@ import io.netty.util.ThreadDeathWatcher; import io.netty.util.concurrent.GlobalEventExecutor; import org.elasticsearch.client.transport.TransportClient; +import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.network.NetworkModule; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.reindex.ReindexPlugin; +import org.elasticsearch.join.ParentJoinPlugin; import org.elasticsearch.percolator.PercolatorPlugin; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.script.mustache.MustachePlugin; @@ -39,34 +41,93 @@ import java.util.concurrent.TimeUnit; /** - * A builder to create an instance of {@link TransportClient} - * This class pre-installs the - * {@link Netty3Plugin}, + * A builder to create an instance of {@link TransportClient}. This class pre-installs the + * {@link Netty3Plugin} * {@link Netty4Plugin}, * {@link ReindexPlugin}, * {@link PercolatorPlugin}, - * and {@link MustachePlugin} - * for the client. These plugins are all elasticsearch core modules required. + * {@link MustachePlugin}, + * {@link ParentJoinPlugin} + * plugins for the client. These plugins are all the required modules for Elasticsearch. */ @SuppressWarnings({"unchecked","varargs"}) public class PreBuiltTransportClient extends TransportClient { + static { + // initialize Netty system properties before triggering any Netty class loads + initializeNetty(); + } + + /** + * Netty wants to do some unwelcome things like use unsafe and replace a private field, or use a poorly considered buffer recycler. 
This + * method disables these things by default, but can be overridden by setting the corresponding system properties. + */ + private static void initializeNetty() { + /* + * We disable three pieces of Netty functionality here: + * - we disable Netty from being unsafe + * - we disable Netty from replacing the selector key set + * - we disable Netty from using the recycler + * + * While permissions are needed to read and set these, the permissions needed here are innocuous and thus should simply be granted + * rather than us handling a security exception here. + */ + setSystemPropertyIfUnset("io.netty.noUnsafe", Boolean.toString(true)); + setSystemPropertyIfUnset("io.netty.noKeySetOptimization", Boolean.toString(true)); + setSystemPropertyIfUnset("io.netty.recycler.maxCapacityPerThread", Integer.toString(0)); + } + + @SuppressForbidden(reason = "set system properties to configure Netty") + private static void setSystemPropertyIfUnset(final String key, final String value) { + final String currentValue = System.getProperty(key); + if (currentValue == null) { + System.setProperty(key, value); + } + } + private static final Collection> PRE_INSTALLED_PLUGINS = - Collections.unmodifiableList( - Arrays.asList( - Netty3Plugin.class, - Netty4Plugin.class, - ReindexPlugin.class, - PercolatorPlugin.class, - MustachePlugin.class)); + Collections.unmodifiableList( + Arrays.asList( + Netty3Plugin.class, + Netty4Plugin.class, + ReindexPlugin.class, + PercolatorPlugin.class, + MustachePlugin.class, + ParentJoinPlugin.class)); + /** + * Creates a new transport client with pre-installed plugins. + * + * @param settings the settings passed to this transport client + * @param plugins an optional array of additional plugins to run with this client + */ @SafeVarargs public PreBuiltTransportClient(Settings settings, Class... plugins) { this(settings, Arrays.asList(plugins)); } + /** + * Creates a new transport client with pre-installed plugins. + * + * @param settings the settings passed to this transport client + * @param plugins a collection of additional plugins to run with this client + */ public PreBuiltTransportClient(Settings settings, Collection> plugins) { - super(settings, Settings.EMPTY, addPlugins(plugins, PRE_INSTALLED_PLUGINS)); + this(settings, plugins, null); + } + + /** + * Creates a new transport client with pre-installed plugins. 
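A minimal usage sketch, not part of this diff, of the client described above: it picks up all pre-installed plugins (now including `ParentJoinPlugin`) and shows how one of the Netty defaults applied by `initializeNetty()` can be overridden with a system property before the class is first loaded. The cluster name and address are placeholders.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

public class TransportClientSketch {

    public static TransportClient build() throws UnknownHostException {
        // optional: override one of the Netty defaults; this must run before
        // PreBuiltTransportClient is first loaded, otherwise initializeNetty()
        // sets io.netty.noUnsafe=true because the property is still unset
        System.setProperty("io.netty.noUnsafe", "false");

        Settings settings = Settings.builder()
                .put("cluster.name", "my-cluster") // placeholder cluster name
                .build();
        TransportClient client = new PreBuiltTransportClient(settings);
        client.addTransportAddress(
                new InetSocketTransportAddress(InetAddress.getByName("localhost"), 9300));
        return client;
    }
}
```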
+ * + * @param settings the settings passed to this transport client + * @param plugins a collection of additional plugins to run with this client + * @param hostFailureListener a failure listener that is invoked if a node is disconnected; this can be null + */ + public PreBuiltTransportClient( + Settings settings, + Collection> plugins, + HostFailureListener hostFailureListener) { + super(settings, Settings.EMPTY, addPlugins(plugins, PRE_INSTALLED_PLUGINS), hostFailureListener); } @Override diff --git a/client/transport/src/test/java/org/elasticsearch/transport/client/PreBuiltTransportClientTests.java b/client/transport/src/test/java/org/elasticsearch/transport/client/PreBuiltTransportClientTests.java index a1d95b68af703..dbcf3571125de 100644 --- a/client/transport/src/test/java/org/elasticsearch/transport/client/PreBuiltTransportClientTests.java +++ b/client/transport/src/test/java/org/elasticsearch/transport/client/PreBuiltTransportClientTests.java @@ -25,6 +25,7 @@ import org.elasticsearch.common.network.NetworkModule; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.reindex.ReindexPlugin; +import org.elasticsearch.join.ParentJoinPlugin; import org.elasticsearch.percolator.PercolatorPlugin; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.script.mustache.MustachePlugin; @@ -41,8 +42,6 @@ public class PreBuiltTransportClientTests extends RandomizedTest { @Test public void testPluginInstalled() { - // TODO: remove when Netty 4.1.5 is upgraded to Netty 4.1.6 including https://github.com/netty/netty/pull/5778 - assumeFalse(Constants.JRE_IS_MINIMUM_JAVA9); try (TransportClient client = new PreBuiltTransportClient(Settings.EMPTY)) { Settings settings = client.settings(); assertEquals(Netty4Plugin.NETTY_TRANSPORT_NAME, NetworkModule.HTTP_DEFAULT_TYPE_SETTING.get(settings)); @@ -52,7 +51,8 @@ public void testPluginInstalled() { @Test public void testInstallPluginTwice() { - for (Class plugin : Arrays.asList(ReindexPlugin.class, PercolatorPlugin.class, MustachePlugin.class)) { + for (Class plugin : + Arrays.asList(ParentJoinPlugin.class, ReindexPlugin.class, PercolatorPlugin.class, MustachePlugin.class)) { try { new PreBuiltTransportClient(Settings.EMPTY, plugin); fail("exception expected"); diff --git a/core/build.gradle b/core/build.gradle index 4eab7ed5d5833..a1265e2eeaa11 100644 --- a/core/build.gradle +++ b/core/build.gradle @@ -22,7 +22,6 @@ import com.carrotsearch.gradle.junit4.RandomizedTestingTask import org.elasticsearch.gradle.BuildPlugin apply plugin: 'elasticsearch.build' -apply plugin: 'com.bmuschko.nexus' apply plugin: 'nebula.optional-base' apply plugin: 'nebula.maven-base-publish' apply plugin: 'nebula.maven-scm' @@ -63,10 +62,7 @@ dependencies { compile 'com.carrotsearch:hppc:0.7.1' // time handling, remove with java 8 time - compile 'joda-time:joda-time:2.9.4' - // joda 2.0 moved to using volatile fields for datetime - // When updating to a new version, make sure to update our copy of BaseDateTime - compile 'org.joda:joda-convert:1.2' + compile 'joda-time:joda-time:2.9.5' // json and yaml compile "org.yaml:snakeyaml:${versions.snakeyaml}" @@ -78,19 +74,20 @@ dependencies { // percentiles aggregation compile 'com.tdunning:t-digest:3.0' // precentil ranks aggregation - compile 'org.hdrhistogram:HdrHistogram:2.1.6' + compile 'org.hdrhistogram:HdrHistogram:2.1.9' // lucene spatial compile "org.locationtech.spatial4j:spatial4j:${versions.spatial4j}", optional compile "com.vividsolutions:jts:${versions.jts}", optional // logging - 
compile "org.apache.logging.log4j:log4j-api:${versions.log4j}", optional + compile "org.apache.logging.log4j:log4j-api:${versions.log4j}" compile "org.apache.logging.log4j:log4j-core:${versions.log4j}", optional // to bridge dependencies that are still on Log4j 1 to Log4j 2 compile "org.apache.logging.log4j:log4j-1.2-api:${versions.log4j}", optional - compile "net.java.dev.jna:jna:${versions.jna}" + // repackaged jna with native bits linked against all elastic supported platforms + compile "org.elasticsearch:jna:${versions.jna}" if (isEclipse == false || project.path == ":core-tests") { testCompile("org.elasticsearch.test:framework:${version}") { @@ -98,6 +95,8 @@ dependencies { exclude group: 'org.elasticsearch', module: 'elasticsearch' } } + testCompile 'com.google.jimfs:jimfs:1.1' + testCompile 'com.google.guava:guava:18.0' } if (isEclipse) { @@ -125,8 +124,8 @@ forbiddenPatterns { task generateModulesList { List modules = project(':modules').subprojects.collect { it.name } File modulesFile = new File(buildDir, 'generated-resources/modules.txt') - processResources.from(modulesFile) - inputs.property('modules', modules) + processResources.from(modulesFile) + inputs.property('modules', modules) outputs.file(modulesFile) doLast { modulesFile.parentFile.mkdirs() @@ -139,8 +138,8 @@ task generatePluginsList { .findAll { it.name.contains('example') == false } .collect { it.name } File pluginsFile = new File(buildDir, 'generated-resources/plugins.txt') - processResources.from(pluginsFile) - inputs.property('plugins', plugins) + processResources.from(pluginsFile) + inputs.property('plugins', plugins) outputs.file(pluginsFile) doLast { pluginsFile.parentFile.mkdirs() @@ -159,8 +158,11 @@ thirdPartyAudit.excludes = [ 'com.fasterxml.jackson.databind.ObjectMapper', // from log4j + 'com.conversantmedia.util.concurrent.DisruptorBlockingQueue', + 'com.conversantmedia.util.concurrent.SpinPolicy', 'com.fasterxml.jackson.annotation.JsonInclude$Include', 'com.fasterxml.jackson.databind.DeserializationContext', + 'com.fasterxml.jackson.databind.DeserializationFeature', 'com.fasterxml.jackson.databind.JsonMappingException', 'com.fasterxml.jackson.databind.JsonNode', 'com.fasterxml.jackson.databind.Module$SetupContext', @@ -177,6 +179,10 @@ thirdPartyAudit.excludes = [ 'com.fasterxml.jackson.dataformat.xml.JacksonXmlModule', 'com.fasterxml.jackson.dataformat.xml.XmlMapper', 'com.fasterxml.jackson.dataformat.xml.util.DefaultXmlPrettyPrinter', + 'com.fasterxml.jackson.databind.node.JsonNodeFactory', + 'com.fasterxml.jackson.databind.node.ObjectNode', + 'org.fusesource.jansi.Ansi', + 'org.fusesource.jansi.AnsiRenderer$Code', 'com.lmax.disruptor.BlockingWaitStrategy', 'com.lmax.disruptor.BusySpinWaitStrategy', 'com.lmax.disruptor.EventFactory', @@ -197,11 +203,11 @@ thirdPartyAudit.excludes = [ 'javax.jms.Connection', 'javax.jms.ConnectionFactory', 'javax.jms.Destination', + 'javax.jms.JMSException', + 'javax.jms.MapMessage', 'javax.jms.Message', 'javax.jms.MessageConsumer', - 'javax.jms.MessageListener', 'javax.jms.MessageProducer', - 'javax.jms.ObjectMessage', 'javax.jms.Session', 'javax.mail.Authenticator', 'javax.mail.Message$RecipientType', @@ -225,10 +231,14 @@ thirdPartyAudit.excludes = [ 'org.apache.commons.compress.utils.IOUtils', 'org.apache.commons.csv.CSVFormat', 'org.apache.commons.csv.QuoteMode', + 'org.apache.kafka.clients.producer.Callback', 'org.apache.kafka.clients.producer.KafkaProducer', 'org.apache.kafka.clients.producer.Producer', 'org.apache.kafka.clients.producer.ProducerRecord', + 
'org.apache.kafka.clients.producer.RecordMetadata', 'org.codehaus.stax2.XMLStreamWriter2', + 'org.jctools.queues.MessagePassingQueue$Consumer', + 'org.jctools.queues.MpscArrayQueue', 'org.osgi.framework.AdaptPermission', 'org.osgi.framework.AdminPermission', 'org.osgi.framework.Bundle', @@ -237,6 +247,7 @@ thirdPartyAudit.excludes = [ 'org.osgi.framework.BundleEvent', 'org.osgi.framework.BundleReference', 'org.osgi.framework.FrameworkUtil', + 'org.osgi.framework.ServiceRegistration', 'org.osgi.framework.SynchronousBundleListener', 'org.osgi.framework.wiring.BundleWire', 'org.osgi.framework.wiring.BundleWiring', @@ -245,11 +256,17 @@ thirdPartyAudit.excludes = [ 'org.zeromq.ZMQ', // from org.locationtech.spatial4j.io.GeoJSONReader (spatial4j) - 'org.noggit.JSONParser', + 'org.noggit.JSONParser', ] -// dependency license are currently checked in distribution -dependencyLicenses.enabled = false +if (JavaVersion.current() > JavaVersion.VERSION_1_8) { + thirdPartyAudit.excludes += ['javax.xml.bind.DatatypeConverter'] +} + +dependencyLicenses { + mapping from: /lucene-.*/, to: 'lucene' + mapping from: /jackson-.*/, to: 'jackson' +} if (isEclipse == false || project.path == ":core-tests") { task integTest(type: RandomizedTestingTask, diff --git a/core/licenses/HdrHistogram-2.1.9.jar.sha1 b/core/licenses/HdrHistogram-2.1.9.jar.sha1 new file mode 100644 index 0000000000000..2378df07b2c0c --- /dev/null +++ b/core/licenses/HdrHistogram-2.1.9.jar.sha1 @@ -0,0 +1 @@ +e4631ce165eb400edecfa32e03d3f1be53dee754 \ No newline at end of file diff --git a/distribution/licenses/HdrHistogram-LICENSE.txt b/core/licenses/HdrHistogram-LICENSE.txt similarity index 100% rename from distribution/licenses/HdrHistogram-LICENSE.txt rename to core/licenses/HdrHistogram-LICENSE.txt diff --git a/distribution/licenses/jopt-simple-NOTICE.txt b/core/licenses/HdrHistogram-NOTICE.txt similarity index 100% rename from distribution/licenses/jopt-simple-NOTICE.txt rename to core/licenses/HdrHistogram-NOTICE.txt diff --git a/distribution/licenses/apache-log4j-extras-DEPENDENCIES b/core/licenses/apache-log4j-extras-DEPENDENCIES similarity index 100% rename from distribution/licenses/apache-log4j-extras-DEPENDENCIES rename to core/licenses/apache-log4j-extras-DEPENDENCIES diff --git a/core/licenses/hppc-0.7.1.jar.sha1 b/core/licenses/hppc-0.7.1.jar.sha1 new file mode 100644 index 0000000000000..aa191a6c93b99 --- /dev/null +++ b/core/licenses/hppc-0.7.1.jar.sha1 @@ -0,0 +1 @@ +8b5057f74ea378c0150a1860874a3ebdcb713767 \ No newline at end of file diff --git a/distribution/licenses/hppc-LICENSE.txt b/core/licenses/hppc-LICENSE.txt similarity index 100% rename from distribution/licenses/hppc-LICENSE.txt rename to core/licenses/hppc-LICENSE.txt diff --git a/distribution/licenses/hppc-NOTICE.txt b/core/licenses/hppc-NOTICE.txt similarity index 100% rename from distribution/licenses/hppc-NOTICE.txt rename to core/licenses/hppc-NOTICE.txt diff --git a/distribution/licenses/jackson-LICENSE b/core/licenses/jackson-LICENSE similarity index 100% rename from distribution/licenses/jackson-LICENSE rename to core/licenses/jackson-LICENSE diff --git a/distribution/licenses/jackson-NOTICE b/core/licenses/jackson-NOTICE similarity index 100% rename from distribution/licenses/jackson-NOTICE rename to core/licenses/jackson-NOTICE diff --git a/core/licenses/jackson-core-2.8.6.jar.sha1 b/core/licenses/jackson-core-2.8.6.jar.sha1 new file mode 100644 index 0000000000000..af7677d13c28c --- /dev/null +++ b/core/licenses/jackson-core-2.8.6.jar.sha1 @@ -0,0 +1 @@ 
+2ef7b1cc34de149600f5e75bc2d5bf40de894e60 \ No newline at end of file diff --git a/core/licenses/jackson-dataformat-cbor-2.8.6.jar.sha1 b/core/licenses/jackson-dataformat-cbor-2.8.6.jar.sha1 new file mode 100644 index 0000000000000..6a2e980235381 --- /dev/null +++ b/core/licenses/jackson-dataformat-cbor-2.8.6.jar.sha1 @@ -0,0 +1 @@ +b88721371cfa2d7242bb5e52fe70861aa061c050 \ No newline at end of file diff --git a/core/licenses/jackson-dataformat-smile-2.8.6.jar.sha1 b/core/licenses/jackson-dataformat-smile-2.8.6.jar.sha1 new file mode 100644 index 0000000000000..19be9a2040bed --- /dev/null +++ b/core/licenses/jackson-dataformat-smile-2.8.6.jar.sha1 @@ -0,0 +1 @@ +71590ad45cee21249774e2f93e5eca66e446cef3 \ No newline at end of file diff --git a/core/licenses/jackson-dataformat-yaml-2.8.6.jar.sha1 b/core/licenses/jackson-dataformat-yaml-2.8.6.jar.sha1 new file mode 100644 index 0000000000000..c61dad3bbcdd7 --- /dev/null +++ b/core/licenses/jackson-dataformat-yaml-2.8.6.jar.sha1 @@ -0,0 +1 @@ +8bd44d50f9a6cdff9c7578ea39d524eb519e35ab \ No newline at end of file diff --git a/core/licenses/jna-4.4.0-1.jar.sha1 b/core/licenses/jna-4.4.0-1.jar.sha1 new file mode 100644 index 0000000000000..6b564834b5733 --- /dev/null +++ b/core/licenses/jna-4.4.0-1.jar.sha1 @@ -0,0 +1 @@ +c9dfcec6f07ee4b1d7a6c09a7eaa9dd4fb6d2c79 \ No newline at end of file diff --git a/distribution/licenses/jna-LICENSE.txt b/core/licenses/jna-LICENSE.txt similarity index 100% rename from distribution/licenses/jna-LICENSE.txt rename to core/licenses/jna-LICENSE.txt diff --git a/distribution/licenses/jna-NOTICE.txt b/core/licenses/jna-NOTICE.txt similarity index 100% rename from distribution/licenses/jna-NOTICE.txt rename to core/licenses/jna-NOTICE.txt diff --git a/core/licenses/joda-time-2.9.5.jar.sha1 b/core/licenses/joda-time-2.9.5.jar.sha1 new file mode 100644 index 0000000000000..ecf1c781556ee --- /dev/null +++ b/core/licenses/joda-time-2.9.5.jar.sha1 @@ -0,0 +1 @@ +5f01da7306363fad2028b916f3eab926262de928 \ No newline at end of file diff --git a/distribution/licenses/joda-time-LICENSE.txt b/core/licenses/joda-time-LICENSE.txt similarity index 100% rename from distribution/licenses/joda-time-LICENSE.txt rename to core/licenses/joda-time-LICENSE.txt diff --git a/distribution/licenses/joda-time-NOTICE.txt b/core/licenses/joda-time-NOTICE.txt similarity index 100% rename from distribution/licenses/joda-time-NOTICE.txt rename to core/licenses/joda-time-NOTICE.txt diff --git a/distribution/licenses/jopt-simple-5.0.2.jar.sha1 b/core/licenses/jopt-simple-5.0.2.jar.sha1 similarity index 100% rename from distribution/licenses/jopt-simple-5.0.2.jar.sha1 rename to core/licenses/jopt-simple-5.0.2.jar.sha1 diff --git a/distribution/licenses/jopt-simple-LICENSE.txt b/core/licenses/jopt-simple-LICENSE.txt similarity index 100% rename from distribution/licenses/jopt-simple-LICENSE.txt rename to core/licenses/jopt-simple-LICENSE.txt diff --git a/core/licenses/jopt-simple-NOTICE.txt b/core/licenses/jopt-simple-NOTICE.txt new file mode 100644 index 0000000000000..e69de29bb2d1d diff --git a/core/licenses/jts-1.13.jar.sha1 b/core/licenses/jts-1.13.jar.sha1 new file mode 100644 index 0000000000000..5b9e3902cf493 --- /dev/null +++ b/core/licenses/jts-1.13.jar.sha1 @@ -0,0 +1 @@ +3ccfb9b60f04d71add996a666ceb8902904fd805 \ No newline at end of file diff --git a/distribution/licenses/jts-LICENSE.txt b/core/licenses/jts-LICENSE.txt similarity index 100% rename from distribution/licenses/jts-LICENSE.txt rename to core/licenses/jts-LICENSE.txt diff --git 
a/distribution/licenses/jts-NOTICE.txt b/core/licenses/jts-NOTICE.txt similarity index 100% rename from distribution/licenses/jts-NOTICE.txt rename to core/licenses/jts-NOTICE.txt diff --git a/core/licenses/log4j-1.2-api-2.9.1.jar.sha1 b/core/licenses/log4j-1.2-api-2.9.1.jar.sha1 new file mode 100644 index 0000000000000..0b5acc62b7a13 --- /dev/null +++ b/core/licenses/log4j-1.2-api-2.9.1.jar.sha1 @@ -0,0 +1 @@ +894f96d677880d4ab834a1356f62b875e579caaa \ No newline at end of file diff --git a/distribution/licenses/log4j-LICENSE.txt b/core/licenses/log4j-LICENSE.txt similarity index 100% rename from distribution/licenses/log4j-LICENSE.txt rename to core/licenses/log4j-LICENSE.txt diff --git a/distribution/licenses/log4j-NOTICE.txt b/core/licenses/log4j-NOTICE.txt similarity index 100% rename from distribution/licenses/log4j-NOTICE.txt rename to core/licenses/log4j-NOTICE.txt diff --git a/core/licenses/log4j-api-2.9.1.jar.sha1 b/core/licenses/log4j-api-2.9.1.jar.sha1 new file mode 100644 index 0000000000000..e1a89fadfed95 --- /dev/null +++ b/core/licenses/log4j-api-2.9.1.jar.sha1 @@ -0,0 +1 @@ +7a2999229464e7a324aa503c0a52ec0f05efe7bd \ No newline at end of file diff --git a/distribution/licenses/log4j-api-LICENSE.txt b/core/licenses/log4j-api-LICENSE.txt similarity index 100% rename from distribution/licenses/log4j-api-LICENSE.txt rename to core/licenses/log4j-api-LICENSE.txt diff --git a/distribution/licenses/log4j-api-NOTICE.txt b/core/licenses/log4j-api-NOTICE.txt similarity index 100% rename from distribution/licenses/log4j-api-NOTICE.txt rename to core/licenses/log4j-api-NOTICE.txt diff --git a/core/licenses/log4j-core-2.9.1.jar.sha1 b/core/licenses/log4j-core-2.9.1.jar.sha1 new file mode 100644 index 0000000000000..990ea322a7613 --- /dev/null +++ b/core/licenses/log4j-core-2.9.1.jar.sha1 @@ -0,0 +1 @@ +c041978c686866ee8534f538c6220238db3bb6be \ No newline at end of file diff --git a/distribution/licenses/log4j-core-LICENSE.txt b/core/licenses/log4j-core-LICENSE.txt similarity index 100% rename from distribution/licenses/log4j-core-LICENSE.txt rename to core/licenses/log4j-core-LICENSE.txt diff --git a/distribution/licenses/log4j-core-NOTICE.txt b/core/licenses/log4j-core-NOTICE.txt similarity index 100% rename from distribution/licenses/log4j-core-NOTICE.txt rename to core/licenses/log4j-core-NOTICE.txt diff --git a/distribution/licenses/lucene-LICENSE.txt b/core/licenses/lucene-LICENSE.txt similarity index 100% rename from distribution/licenses/lucene-LICENSE.txt rename to core/licenses/lucene-LICENSE.txt diff --git a/distribution/licenses/lucene-NOTICE.txt b/core/licenses/lucene-NOTICE.txt similarity index 100% rename from distribution/licenses/lucene-NOTICE.txt rename to core/licenses/lucene-NOTICE.txt diff --git a/core/licenses/lucene-analyzers-common-6.6.1.jar.sha1 b/core/licenses/lucene-analyzers-common-6.6.1.jar.sha1 new file mode 100644 index 0000000000000..b05e43efdf51d --- /dev/null +++ b/core/licenses/lucene-analyzers-common-6.6.1.jar.sha1 @@ -0,0 +1 @@ +52cb2bbc52221d33972faacf67e5da0ab92956bd \ No newline at end of file diff --git a/core/licenses/lucene-backward-codecs-6.6.1.jar.sha1 b/core/licenses/lucene-backward-codecs-6.6.1.jar.sha1 new file mode 100644 index 0000000000000..b570be50213a3 --- /dev/null +++ b/core/licenses/lucene-backward-codecs-6.6.1.jar.sha1 @@ -0,0 +1 @@ +4ad390d10b0290af6dac83a519956b98b1fd18f0 \ No newline at end of file diff --git a/core/licenses/lucene-core-6.6.1.jar.sha1 b/core/licenses/lucene-core-6.6.1.jar.sha1 new file mode 100644 index 
0000000000000..d81efa2861d48 --- /dev/null +++ b/core/licenses/lucene-core-6.6.1.jar.sha1 @@ -0,0 +1 @@ +b51e719d781e6ec2dbf6d6eacc20a9c2df30269a \ No newline at end of file diff --git a/core/licenses/lucene-grouping-6.6.1.jar.sha1 b/core/licenses/lucene-grouping-6.6.1.jar.sha1 new file mode 100644 index 0000000000000..3aad8585184a9 --- /dev/null +++ b/core/licenses/lucene-grouping-6.6.1.jar.sha1 @@ -0,0 +1 @@ +fa9069bd2b75b219a295d15394607350195b0665 \ No newline at end of file diff --git a/core/licenses/lucene-highlighter-6.6.1.jar.sha1 b/core/licenses/lucene-highlighter-6.6.1.jar.sha1 new file mode 100644 index 0000000000000..422105a06bf05 --- /dev/null +++ b/core/licenses/lucene-highlighter-6.6.1.jar.sha1 @@ -0,0 +1 @@ +6cc18a6e4a60b8fca62fcfaf8b9fc3ff6bf1864d \ No newline at end of file diff --git a/core/licenses/lucene-join-6.6.1.jar.sha1 b/core/licenses/lucene-join-6.6.1.jar.sha1 new file mode 100644 index 0000000000000..0752b32b10807 --- /dev/null +++ b/core/licenses/lucene-join-6.6.1.jar.sha1 @@ -0,0 +1 @@ +355dc2046a1574cf23d325171372531e687a72cb \ No newline at end of file diff --git a/core/licenses/lucene-memory-6.6.1.jar.sha1 b/core/licenses/lucene-memory-6.6.1.jar.sha1 new file mode 100644 index 0000000000000..f41087ddf8af9 --- /dev/null +++ b/core/licenses/lucene-memory-6.6.1.jar.sha1 @@ -0,0 +1 @@ +4df5d3018bf7853b4f44eada0c3d823f25800fc3 \ No newline at end of file diff --git a/core/licenses/lucene-misc-6.6.1.jar.sha1 b/core/licenses/lucene-misc-6.6.1.jar.sha1 new file mode 100644 index 0000000000000..8d53bf1a1a3c3 --- /dev/null +++ b/core/licenses/lucene-misc-6.6.1.jar.sha1 @@ -0,0 +1 @@ +4a434f20c15a1e651ba9d3db1167fec695b557d4 \ No newline at end of file diff --git a/core/licenses/lucene-queries-6.6.1.jar.sha1 b/core/licenses/lucene-queries-6.6.1.jar.sha1 new file mode 100644 index 0000000000000..8b675bba89333 --- /dev/null +++ b/core/licenses/lucene-queries-6.6.1.jar.sha1 @@ -0,0 +1 @@ +e138ad9807b029ca3ee0276eeb0257812c9c9179 \ No newline at end of file diff --git a/core/licenses/lucene-queryparser-6.6.1.jar.sha1 b/core/licenses/lucene-queryparser-6.6.1.jar.sha1 new file mode 100644 index 0000000000000..4ddb0f20e6c9b --- /dev/null +++ b/core/licenses/lucene-queryparser-6.6.1.jar.sha1 @@ -0,0 +1 @@ +f80e27fee9595ced0276e3caa53b6d12cc779b0e \ No newline at end of file diff --git a/core/licenses/lucene-sandbox-6.6.1.jar.sha1 b/core/licenses/lucene-sandbox-6.6.1.jar.sha1 new file mode 100644 index 0000000000000..bd79ca46cf3af --- /dev/null +++ b/core/licenses/lucene-sandbox-6.6.1.jar.sha1 @@ -0,0 +1 @@ +3a4d147697dfb27b3a0f01f67c0b61175c14b011 \ No newline at end of file diff --git a/core/licenses/lucene-spatial-6.6.1.jar.sha1 b/core/licenses/lucene-spatial-6.6.1.jar.sha1 new file mode 100644 index 0000000000000..7f4593c67e80b --- /dev/null +++ b/core/licenses/lucene-spatial-6.6.1.jar.sha1 @@ -0,0 +1 @@ +a83dc0e68cc3aeb8835610022f8d2cff34096d40 \ No newline at end of file diff --git a/core/licenses/lucene-spatial-extras-6.6.1.jar.sha1 b/core/licenses/lucene-spatial-extras-6.6.1.jar.sha1 new file mode 100644 index 0000000000000..e0846a30ac504 --- /dev/null +++ b/core/licenses/lucene-spatial-extras-6.6.1.jar.sha1 @@ -0,0 +1 @@ +c1a3c9892f1d57b14adc4bcf30509c2bec2ebafb \ No newline at end of file diff --git a/core/licenses/lucene-spatial3d-6.6.1.jar.sha1 b/core/licenses/lucene-spatial3d-6.6.1.jar.sha1 new file mode 100644 index 0000000000000..f2e7580b53fef --- /dev/null +++ b/core/licenses/lucene-spatial3d-6.6.1.jar.sha1 @@ -0,0 +1 @@ 
+87ed4ef7f3b18bf106c6f780eea39c88c9a39ad1 \ No newline at end of file diff --git a/core/licenses/lucene-suggest-6.6.1.jar.sha1 b/core/licenses/lucene-suggest-6.6.1.jar.sha1 new file mode 100644 index 0000000000000..d1d442f41baf2 --- /dev/null +++ b/core/licenses/lucene-suggest-6.6.1.jar.sha1 @@ -0,0 +1 @@ +9c74240a249fd7c35fdeb6379ed2a929fb7c8acb \ No newline at end of file diff --git a/distribution/licenses/securesm-1.1.jar.sha1 b/core/licenses/securesm-1.1.jar.sha1 similarity index 100% rename from distribution/licenses/securesm-1.1.jar.sha1 rename to core/licenses/securesm-1.1.jar.sha1 diff --git a/modules/transport-netty4/licenses/netty-buffer-LICENSE.txt b/core/licenses/securesm-LICENSE.txt similarity index 100% rename from modules/transport-netty4/licenses/netty-buffer-LICENSE.txt rename to core/licenses/securesm-LICENSE.txt diff --git a/distribution/licenses/securesm-NOTICE.txt b/core/licenses/securesm-NOTICE.txt similarity index 100% rename from distribution/licenses/securesm-NOTICE.txt rename to core/licenses/securesm-NOTICE.txt diff --git a/distribution/licenses/snakeyaml-1.15.jar.sha1 b/core/licenses/snakeyaml-1.15.jar.sha1 similarity index 100% rename from distribution/licenses/snakeyaml-1.15.jar.sha1 rename to core/licenses/snakeyaml-1.15.jar.sha1 diff --git a/distribution/licenses/snakeyaml-LICENSE.txt b/core/licenses/snakeyaml-LICENSE.txt similarity index 100% rename from distribution/licenses/snakeyaml-LICENSE.txt rename to core/licenses/snakeyaml-LICENSE.txt diff --git a/distribution/licenses/snakeyaml-NOTICE.txt b/core/licenses/snakeyaml-NOTICE.txt similarity index 100% rename from distribution/licenses/snakeyaml-NOTICE.txt rename to core/licenses/snakeyaml-NOTICE.txt diff --git a/distribution/licenses/spatial4j-0.6.jar.sha1 b/core/licenses/spatial4j-0.6.jar.sha1 similarity index 100% rename from distribution/licenses/spatial4j-0.6.jar.sha1 rename to core/licenses/spatial4j-0.6.jar.sha1 diff --git a/distribution/licenses/spatial4j-ABOUT.txt b/core/licenses/spatial4j-ABOUT.txt similarity index 100% rename from distribution/licenses/spatial4j-ABOUT.txt rename to core/licenses/spatial4j-ABOUT.txt diff --git a/modules/transport-netty4/licenses/netty-codec-LICENSE.txt b/core/licenses/spatial4j-LICENSE.txt similarity index 100% rename from modules/transport-netty4/licenses/netty-codec-LICENSE.txt rename to core/licenses/spatial4j-LICENSE.txt diff --git a/distribution/licenses/spatial4j-NOTICE.txt b/core/licenses/spatial4j-NOTICE.txt similarity index 100% rename from distribution/licenses/spatial4j-NOTICE.txt rename to core/licenses/spatial4j-NOTICE.txt diff --git a/core/licenses/t-digest-3.0.jar.sha1 b/core/licenses/t-digest-3.0.jar.sha1 new file mode 100644 index 0000000000000..ce2f2e2f04098 --- /dev/null +++ b/core/licenses/t-digest-3.0.jar.sha1 @@ -0,0 +1 @@ +84ccf145ac2215e6bfa63baa3101c0af41017cfc \ No newline at end of file diff --git a/distribution/licenses/t-digest-LICENSE.txt b/core/licenses/t-digest-LICENSE.txt similarity index 100% rename from distribution/licenses/t-digest-LICENSE.txt rename to core/licenses/t-digest-LICENSE.txt diff --git a/distribution/licenses/t-digest-NOTICE.txt b/core/licenses/t-digest-NOTICE.txt similarity index 100% rename from distribution/licenses/t-digest-NOTICE.txt rename to core/licenses/t-digest-NOTICE.txt diff --git a/core/src/main/java/org/apache/logging/log4j/core/impl/ThrowableProxy.java b/core/src/main/java/org/apache/logging/log4j/core/impl/ThrowableProxy.java deleted file mode 100644 index 37ab0a1539154..0000000000000 --- 
a/core/src/main/java/org/apache/logging/log4j/core/impl/ThrowableProxy.java +++ /dev/null @@ -1,665 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache license, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the license for the specific language governing permissions and - * limitations under the license. - */ - -package org.apache.logging.log4j.core.impl; - -import java.io.Serializable; -import java.net.URL; -import java.security.CodeSource; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.HashMap; -import java.util.HashSet; -import java.util.List; -import java.util.Map; -import java.util.Set; -import java.util.Stack; - -import org.apache.logging.log4j.core.util.Loader; -import org.apache.logging.log4j.status.StatusLogger; -import org.apache.logging.log4j.util.ReflectionUtil; -import org.apache.logging.log4j.util.Strings; - -/** - * Wraps a Throwable to add packaging information about each stack trace element. - * - *
- * A proxy is used to represent a throwable that may not exist in a different class loader or JVM. When an application - * deserializes a ThrowableProxy, the throwable may not be set, but the throwable's information is preserved in other - * fields of the proxy like the message and stack trace. - *
- * - *
- * TODO: Move this class to org.apache.logging.log4j.core because it is used from LogEvent. - *
- *
- * TODO: Deserialize: Try to rebuild Throwable if the target exception is in this class loader? - *
- */ -public class ThrowableProxy implements Serializable { - - private static final String CAUSED_BY_LABEL = "Caused by: "; - private static final String SUPPRESSED_LABEL = "Suppressed: "; - private static final String WRAPPED_BY_LABEL = "Wrapped by: "; - - /** - * Cached StackTracePackageElement and ClassLoader. - *
- * Consider this class private. - *
- */ - static class CacheEntry { - private final ExtendedClassInfo element; - private final ClassLoader loader; - - public CacheEntry(final ExtendedClassInfo element, final ClassLoader loader) { - this.element = element; - this.loader = loader; - } - } - - private static final ThrowableProxy[] EMPTY_THROWABLE_PROXY_ARRAY = new ThrowableProxy[0]; - - private static final char EOL = '\n'; - - private static final long serialVersionUID = -2752771578252251910L; - - private final ThrowableProxy causeProxy; - - private int commonElementCount; - - private final ExtendedStackTraceElement[] extendedStackTrace; - - private final String localizedMessage; - - private final String message; - - private final String name; - - private final ThrowableProxy[] suppressedProxies; - - private final transient Throwable throwable; - - /** - * For JSON and XML IO via Jackson. - */ - @SuppressWarnings("unused") - private ThrowableProxy() { - this.throwable = null; - this.name = null; - this.extendedStackTrace = null; - this.causeProxy = null; - this.message = null; - this.localizedMessage = null; - this.suppressedProxies = EMPTY_THROWABLE_PROXY_ARRAY; - } - - /** - * Constructs the wrapper for the Throwable that includes packaging data. - * - * @param throwable - * The Throwable to wrap, must not be null. - */ - public ThrowableProxy(final Throwable throwable) { - this(throwable, null); - } - - /** - * Constructs the wrapper for the Throwable that includes packaging data. - * - * @param throwable - * The Throwable to wrap, must not be null. - * @param visited - * The set of visited suppressed exceptions. - */ - private ThrowableProxy(final Throwable throwable, final Set visited) { - this.throwable = throwable; - this.name = throwable.getClass().getName(); - this.message = throwable.getMessage(); - this.localizedMessage = throwable.getLocalizedMessage(); - final Map map = new HashMap<>(); - final Stack> stack = ReflectionUtil.getCurrentStackTrace(); - this.extendedStackTrace = this.toExtendedStackTrace(stack, map, null, throwable.getStackTrace()); - final Throwable throwableCause = throwable.getCause(); - final Set causeVisited = new HashSet<>(1); - this.causeProxy = throwableCause == null ? null : new ThrowableProxy(throwable, stack, map, throwableCause, visited, causeVisited); - this.suppressedProxies = this.toSuppressedProxies(throwable, visited); - } - - /** - * Constructs the wrapper for a Throwable that is referenced as the cause by another Throwable. - * - * @param parent - * The Throwable referencing this Throwable. - * @param stack - * The Class stack. - * @param map - * The cache containing the packaging data. - * @param cause - * The Throwable to wrap. - * @param suppressedVisited TODO - * @param causeVisited TODO - */ - private ThrowableProxy(final Throwable parent, final Stack> stack, final Map map, - final Throwable cause, final Set suppressedVisited, final Set causeVisited) { - causeVisited.add(cause); - this.throwable = cause; - this.name = cause.getClass().getName(); - this.message = this.throwable.getMessage(); - this.localizedMessage = this.throwable.getLocalizedMessage(); - this.extendedStackTrace = this.toExtendedStackTrace(stack, map, parent.getStackTrace(), cause.getStackTrace()); - final Throwable causeCause = cause.getCause(); - this.causeProxy = causeCause == null || causeVisited.contains(causeCause) ? 
null : new ThrowableProxy(parent, - stack, map, causeCause, suppressedVisited, causeVisited); - this.suppressedProxies = this.toSuppressedProxies(cause, suppressedVisited); - } - - @Override - public boolean equals(final Object obj) { - if (this == obj) { - return true; - } - if (obj == null) { - return false; - } - if (this.getClass() != obj.getClass()) { - return false; - } - final ThrowableProxy other = (ThrowableProxy) obj; - if (this.causeProxy == null) { - if (other.causeProxy != null) { - return false; - } - } else if (!this.causeProxy.equals(other.causeProxy)) { - return false; - } - if (this.commonElementCount != other.commonElementCount) { - return false; - } - if (this.name == null) { - if (other.name != null) { - return false; - } - } else if (!this.name.equals(other.name)) { - return false; - } - if (!Arrays.equals(this.extendedStackTrace, other.extendedStackTrace)) { - return false; - } - if (!Arrays.equals(this.suppressedProxies, other.suppressedProxies)) { - return false; - } - return true; - } - - private void formatCause(final StringBuilder sb, final String prefix, final ThrowableProxy cause, final List ignorePackages) { - formatThrowableProxy(sb, prefix, CAUSED_BY_LABEL, cause, ignorePackages); - } - - private void formatThrowableProxy(final StringBuilder sb, final String prefix, final String causeLabel, - final ThrowableProxy throwableProxy, final List ignorePackages) { - if (throwableProxy == null) { - return; - } - sb.append(prefix).append(causeLabel).append(throwableProxy).append(EOL); - this.formatElements(sb, prefix, throwableProxy.commonElementCount, - throwableProxy.getStackTrace(), throwableProxy.extendedStackTrace, ignorePackages); - this.formatSuppressed(sb, prefix + "\t", throwableProxy.suppressedProxies, ignorePackages); - this.formatCause(sb, prefix, throwableProxy.causeProxy, ignorePackages); - } - - private void formatSuppressed(final StringBuilder sb, final String prefix, final ThrowableProxy[] suppressedProxies, - final List ignorePackages) { - if (suppressedProxies == null) { - return; - } - for (final ThrowableProxy suppressedProxy : suppressedProxies) { - final ThrowableProxy cause = suppressedProxy; - formatThrowableProxy(sb, prefix, SUPPRESSED_LABEL, cause, ignorePackages); - } - } - - private void formatElements(final StringBuilder sb, final String prefix, final int commonCount, - final StackTraceElement[] causedTrace, final ExtendedStackTraceElement[] extStackTrace, - final List ignorePackages) { - if (ignorePackages == null || ignorePackages.isEmpty()) { - for (final ExtendedStackTraceElement element : extStackTrace) { - this.formatEntry(element, sb, prefix); - } - } else { - int count = 0; - for (int i = 0; i < extStackTrace.length; ++i) { - if (!this.ignoreElement(causedTrace[i], ignorePackages)) { - if (count > 0) { - appendSuppressedCount(sb, prefix, count); - count = 0; - } - this.formatEntry(extStackTrace[i], sb, prefix); - } else { - ++count; - } - } - if (count > 0) { - appendSuppressedCount(sb, prefix, count); - } - } - if (commonCount != 0) { - sb.append(prefix).append("\t... ").append(commonCount).append(" more").append(EOL); - } - } - - private void appendSuppressedCount(final StringBuilder sb, final String prefix, final int count) { - sb.append(prefix); - if (count == 1) { - sb.append("\t....").append(EOL); - } else { - sb.append("\t... 
suppressed ").append(count).append(" lines").append(EOL); - } - } - - private void formatEntry(final ExtendedStackTraceElement extStackTraceElement, final StringBuilder sb, final String prefix) { - sb.append(prefix); - sb.append("\tat "); - sb.append(extStackTraceElement); - sb.append(EOL); - } - - /** - * Formats the specified Throwable. - * - * @param sb - * StringBuilder to contain the formatted Throwable. - * @param cause - * The Throwable to format. - */ - public void formatWrapper(final StringBuilder sb, final ThrowableProxy cause) { - this.formatWrapper(sb, cause, null); - } - - /** - * Formats the specified Throwable. - * - * @param sb - * StringBuilder to contain the formatted Throwable. - * @param cause - * The Throwable to format. - * @param packages - * The List of packages to be suppressed from the trace. - */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") - public void formatWrapper(final StringBuilder sb, final ThrowableProxy cause, final List packages) { - final Throwable caused = cause.getCauseProxy() != null ? cause.getCauseProxy().getThrowable() : null; - if (caused != null) { - this.formatWrapper(sb, cause.causeProxy); - sb.append(WRAPPED_BY_LABEL); - } - sb.append(cause).append(EOL); - this.formatElements(sb, "", cause.commonElementCount, - cause.getThrowable().getStackTrace(), cause.extendedStackTrace, packages); - } - - public ThrowableProxy getCauseProxy() { - return this.causeProxy; - } - - /** - * Format the Throwable that is the cause of this Throwable. - * - * @return The formatted Throwable that caused this Throwable. - */ - public String getCauseStackTraceAsString() { - return this.getCauseStackTraceAsString(null); - } - - /** - * Format the Throwable that is the cause of this Throwable. - * - * @param packages - * The List of packages to be suppressed from the trace. - * @return The formatted Throwable that caused this Throwable. - */ - public String getCauseStackTraceAsString(final List packages) { - final StringBuilder sb = new StringBuilder(); - if (this.causeProxy != null) { - this.formatWrapper(sb, this.causeProxy); - sb.append(WRAPPED_BY_LABEL); - } - sb.append(this.toString()); - sb.append(EOL); - this.formatElements(sb, "", 0, this.throwable.getStackTrace(), this.extendedStackTrace, packages); - return sb.toString(); - } - - /** - * Return the number of elements that are being omitted because they are common with the parent Throwable's stack - * trace. - * - * @return The number of elements omitted from the stack trace. - */ - public int getCommonElementCount() { - return this.commonElementCount; - } - - /** - * Gets the stack trace including packaging information. - * - * @return The stack trace including packaging information. - */ - public ExtendedStackTraceElement[] getExtendedStackTrace() { - return this.extendedStackTrace; - } - - /** - * Format the stack trace including packaging information. - * - * @return The formatted stack trace including packaging information. - */ - public String getExtendedStackTraceAsString() { - return this.getExtendedStackTraceAsString(null); - } - - /** - * Format the stack trace including packaging information. - * - * @param ignorePackages - * List of packages to be ignored in the trace. - * @return The formatted stack trace including packaging information. 
- */ - public String getExtendedStackTraceAsString(final List ignorePackages) { - final StringBuilder sb = new StringBuilder(this.name); - final String msg = this.message; - if (msg != null) { - sb.append(": ").append(msg); - } - sb.append(EOL); - final StackTraceElement[] causedTrace = this.throwable != null ? this.throwable.getStackTrace() : null; - this.formatElements(sb, "", 0, causedTrace, this.extendedStackTrace, ignorePackages); - this.formatSuppressed(sb, "\t", this.suppressedProxies, ignorePackages); - this.formatCause(sb, "", this.causeProxy, ignorePackages); - return sb.toString(); - } - - public String getLocalizedMessage() { - return this.localizedMessage; - } - - public String getMessage() { - return this.message; - } - - /** - * Return the FQCN of the Throwable. - * - * @return The FQCN of the Throwable. - */ - public String getName() { - return this.name; - } - - public StackTraceElement[] getStackTrace() { - return this.throwable == null ? null : this.throwable.getStackTrace(); - } - - /** - * Gets proxies for suppressed exceptions. - * - * @return proxies for suppressed exceptions. - */ - public ThrowableProxy[] getSuppressedProxies() { - return this.suppressedProxies; - } - - /** - * Format the suppressed Throwables. - * - * @return The formatted suppressed Throwables. - */ - public String getSuppressedStackTrace() { - final ThrowableProxy[] suppressed = this.getSuppressedProxies(); - if (suppressed == null || suppressed.length == 0) { - return Strings.EMPTY; - } - final StringBuilder sb = new StringBuilder("Suppressed Stack Trace Elements:").append(EOL); - for (final ThrowableProxy proxy : suppressed) { - sb.append(proxy.getExtendedStackTraceAsString()); - } - return sb.toString(); - } - - /** - * The throwable or null if this object is deserialized from XML or JSON. - * - * @return The throwable or null if this object is deserialized from XML or JSON. - */ - public Throwable getThrowable() { - return this.throwable; - } - - @Override - public int hashCode() { - final int prime = 31; - int result = 1; - result = prime * result + (this.causeProxy == null ? 0 : this.causeProxy.hashCode()); - result = prime * result + this.commonElementCount; - result = prime * result + (this.extendedStackTrace == null ? 0 : Arrays.hashCode(this.extendedStackTrace)); - result = prime * result + (this.suppressedProxies == null ? 0 : Arrays.hashCode(this.suppressedProxies)); - result = prime * result + (this.name == null ? 0 : this.name.hashCode()); - return result; - } - - private boolean ignoreElement(final StackTraceElement element, final List ignorePackages) { - final String className = element.getClassName(); - for (final String pkg : ignorePackages) { - if (className.startsWith(pkg)) { - return true; - } - } - return false; - } - - /** - * Loads classes not located via Reflection.getCallerClass. - * - * @param lastLoader - * The ClassLoader that loaded the Class that called this Class. - * @param className - * The name of the Class. - * @return The Class object for the Class or null if it could not be located. - */ - private Class loadClass(final ClassLoader lastLoader, final String className) { - // XXX: this is overly complicated - Class clazz; - if (lastLoader != null) { - try { - clazz = Loader.initializeClass(className, lastLoader); - if (clazz != null) { - return clazz; - } - } catch (final Throwable ignore) { - // Ignore exception. 
- } - } - try { - clazz = Loader.loadClass(className); - } catch (final ClassNotFoundException ignored) { - return initializeClass(className); - } catch (final NoClassDefFoundError ignored) { - return initializeClass(className); - } catch (final SecurityException ignored) { - return initializeClass(className); - } - return clazz; - } - - private Class initializeClass(final String className) { - try { - return Loader.initializeClass(className, this.getClass().getClassLoader()); - } catch (final ClassNotFoundException ignore) { - return null; - } catch (final NoClassDefFoundError ignore) { - return null; - } catch (final SecurityException ignore) { - return null; - } - } - - /** - * Construct the CacheEntry from the Class's information. - * - * @param stackTraceElement - * The stack trace element - * @param callerClass - * The Class. - * @param exact - * True if the class was obtained via Reflection.getCallerClass. - * - * @return The CacheEntry. - */ - private CacheEntry toCacheEntry(final StackTraceElement stackTraceElement, final Class callerClass, - final boolean exact) { - String location = "?"; - String version = "?"; - ClassLoader lastLoader = null; - if (callerClass != null) { - try { - final CodeSource source = callerClass.getProtectionDomain().getCodeSource(); - if (source != null) { - final URL locationURL = source.getLocation(); - if (locationURL != null) { - final String str = locationURL.toString().replace('\\', '/'); - int index = str.lastIndexOf("/"); - if (index >= 0 && index == str.length() - 1) { - index = str.lastIndexOf("/", index - 1); - location = str.substring(index + 1); - } else { - location = str.substring(index + 1); - } - } - } - } catch (final Exception ex) { - // Ignore the exception. - } - final Package pkg = callerClass.getPackage(); - if (pkg != null) { - final String ver = pkg.getImplementationVersion(); - if (ver != null) { - version = ver; - } - } - lastLoader = callerClass.getClassLoader(); - } - return new CacheEntry(new ExtendedClassInfo(exact, location, version), lastLoader); - } - - /** - * Resolve all the stack entries in this stack trace that are not common with the parent. - * - * @param stack - * The callers Class stack. - * @param map - * The cache of CacheEntry objects. - * @param rootTrace - * The first stack trace resolve or null. - * @param stackTrace - * The stack trace being resolved. - * @return The StackTracePackageElement array. - */ - ExtendedStackTraceElement[] toExtendedStackTrace(final Stack> stack, final Map map, - final StackTraceElement[] rootTrace, final StackTraceElement[] stackTrace) { - int stackLength; - if (rootTrace != null) { - int rootIndex = rootTrace.length - 1; - int stackIndex = stackTrace.length - 1; - while (rootIndex >= 0 && stackIndex >= 0 && rootTrace[rootIndex].equals(stackTrace[stackIndex])) { - --rootIndex; - --stackIndex; - } - this.commonElementCount = stackTrace.length - 1 - stackIndex; - stackLength = stackIndex + 1; - } else { - this.commonElementCount = 0; - stackLength = stackTrace.length; - } - final ExtendedStackTraceElement[] extStackTrace = new ExtendedStackTraceElement[stackLength]; - Class clazz = stack.isEmpty() ? null : stack.peek(); - ClassLoader lastLoader = null; - for (int i = stackLength - 1; i >= 0; --i) { - final StackTraceElement stackTraceElement = stackTrace[i]; - final String className = stackTraceElement.getClassName(); - // The stack returned from getCurrentStack may be missing entries for java.lang.reflect.Method.invoke() - // and its implementation. 
The Throwable might also contain stack entries that are no longer - // present as those methods have returned. - ExtendedClassInfo extClassInfo; - if (clazz != null && className.equals(clazz.getName())) { - final CacheEntry entry = this.toCacheEntry(stackTraceElement, clazz, true); - extClassInfo = entry.element; - lastLoader = entry.loader; - stack.pop(); - clazz = stack.isEmpty() ? null : stack.peek(); - } else { - final CacheEntry cacheEntry = map.get(className); - if (cacheEntry != null) { - final CacheEntry entry = cacheEntry; - extClassInfo = entry.element; - if (entry.loader != null) { - lastLoader = entry.loader; - } - } else { - final CacheEntry entry = this.toCacheEntry(stackTraceElement, - this.loadClass(lastLoader, className), false); - extClassInfo = entry.element; - map.put(stackTraceElement.toString(), entry); - if (entry.loader != null) { - lastLoader = entry.loader; - } - } - } - extStackTrace[i] = new ExtendedStackTraceElement(stackTraceElement, extClassInfo); - } - return extStackTrace; - } - - @Override - public String toString() { - final String msg = this.message; - return msg != null ? this.name + ": " + msg : this.name; - } - - private ThrowableProxy[] toSuppressedProxies(final Throwable thrown, Set suppressedVisited) { - try { - final Throwable[] suppressed = thrown.getSuppressed(); - if (suppressed == null) { - return EMPTY_THROWABLE_PROXY_ARRAY; - } - final List proxies = new ArrayList<>(suppressed.length); - if (suppressedVisited == null) { - suppressedVisited = new HashSet<>(proxies.size()); - } - for (int i = 0; i < suppressed.length; i++) { - final Throwable candidate = suppressed[i]; - if (!suppressedVisited.contains(candidate)) { - suppressedVisited.add(candidate); - proxies.add(new ThrowableProxy(candidate, suppressedVisited)); - } - } - return proxies.toArray(new ThrowableProxy[proxies.size()]); - } catch (final Exception e) { - StatusLogger.getLogger().error(e); - } - return null; - } -} diff --git a/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DisableGraphAttribute.java b/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DisableGraphAttribute.java new file mode 100644 index 0000000000000..16b9ecd1a5247 --- /dev/null +++ b/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DisableGraphAttribute.java @@ -0,0 +1,32 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.lucene.analysis.miscellaneous; + +import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.util.Attribute; +import org.apache.lucene.analysis.tokenattributes.PositionLengthAttribute; + +/** + * This attribute can be used to indicate that the {@link PositionLengthAttribute} + * should not be taken in account in this {@link TokenStream}. 
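To make the role of the new attribute concrete, here is a hypothetical sketch, not part of this diff, of a token filter that marks its stream with `DisableGraphAttribute` so that consumers skip graph analysis; the interface's javadoc continues in the hunk below.

```java
package org.apache.lucene.analysis.miscellaneous;

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

// Hypothetical filter: adding the attribute is enough, since a query parser can later
// test for it with tokenStream.hasAttribute(DisableGraphAttribute.class).
public final class DisableGraphFilterSketch extends TokenFilter {

    public DisableGraphFilterSketch(TokenStream input) {
        super(input);
        // DisableGraphAttributeImpl (added in this change) is resolved automatically by
        // Lucene's default attribute factory via the "Impl" naming convention.
        addAttribute(DisableGraphAttribute.class);
    }

    @Override
    public boolean incrementToken() throws IOException {
        return input.incrementToken(); // tokens pass through unchanged
    }
}
```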
+ * Query parsers can extract this information to decide if this token stream should be analyzed + * as a graph or not. + */ +public interface DisableGraphAttribute extends Attribute {} diff --git a/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DisableGraphAttributeImpl.java b/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DisableGraphAttributeImpl.java new file mode 100644 index 0000000000000..5a4e7f79f238e --- /dev/null +++ b/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DisableGraphAttributeImpl.java @@ -0,0 +1,38 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.lucene.analysis.miscellaneous; + +import org.apache.lucene.util.AttributeImpl; +import org.apache.lucene.util.AttributeReflector; + +/** Default implementation of {@link DisableGraphAttribute}. */ +public class DisableGraphAttributeImpl extends AttributeImpl implements DisableGraphAttribute { + public DisableGraphAttributeImpl() {} + + @Override + public void clear() {} + + @Override + public void reflectWith(AttributeReflector reflector) { + } + + @Override + public void copyTo(AttributeImpl target) {} +} diff --git a/core/src/main/java/org/apache/lucene/index/OneMergeHelper.java b/core/src/main/java/org/apache/lucene/index/OneMergeHelper.java index 99ef7f4dd7fef..f8b8c6178225b 100644 --- a/core/src/main/java/org/apache/lucene/index/OneMergeHelper.java +++ b/core/src/main/java/org/apache/lucene/index/OneMergeHelper.java @@ -19,6 +19,8 @@ package org.apache.lucene.index; +import java.io.IOException; + /** * Allows pkg private access */ @@ -27,4 +29,33 @@ private OneMergeHelper() {} public static String getSegmentName(MergePolicy.OneMerge merge) { return merge.info != null ? merge.info.info.name : "_na_"; } + + /** + * The current MB per second rate limit for this merge. + **/ + public static double getMbPerSec(Thread thread, MergePolicy.OneMerge merge) { + if (thread instanceof ConcurrentMergeScheduler.MergeThread) { + return ((ConcurrentMergeScheduler.MergeThread) thread).rateLimiter.getMBPerSec(); + } + assert false: "this is not merge thread"; + return Double.POSITIVE_INFINITY; + } + + /** + * Returns total bytes written by this merge. + **/ + public static long getTotalBytesWritten(Thread thread, + MergePolicy.OneMerge merge) throws IOException { + /** + * TODO: The number of bytes written during the merge should be accessible in OneMerge. 
+ */ + if (thread instanceof ConcurrentMergeScheduler.MergeThread) { + return ((ConcurrentMergeScheduler.MergeThread) thread).rateLimiter + .getTotalBytesWritten(); + } + assert false: "this is not merge thread"; + return merge.totalBytesSize(); + } + + } diff --git a/core/src/main/java/org/apache/lucene/index/XPointValues.java b/core/src/main/java/org/apache/lucene/index/XPointValues.java deleted file mode 100644 index c4fa0b4d6232e..0000000000000 --- a/core/src/main/java/org/apache/lucene/index/XPointValues.java +++ /dev/null @@ -1,130 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.apache.lucene.index; -import org.apache.lucene.util.StringHelper; - -import java.io.IOException; - -/** - * Forked utility methods from Lucene's PointValues until LUCENE-7257 is released. - */ -public class XPointValues { - /** Return the cumulated number of points across all leaves of the given - * {@link IndexReader}. Leaves that do not have points for the given field - * are ignored. - * @see PointValues#size(String) */ - public static long size(IndexReader reader, String field) throws IOException { - long size = 0; - for (LeafReaderContext ctx : reader.leaves()) { - FieldInfo info = ctx.reader().getFieldInfos().fieldInfo(field); - if (info == null || info.getPointDimensionCount() == 0) { - continue; - } - PointValues values = ctx.reader().getPointValues(); - size += values.size(field); - } - return size; - } - - /** Return the cumulated number of docs that have points across all leaves - * of the given {@link IndexReader}. Leaves that do not have points for the - * given field are ignored. - * @see PointValues#getDocCount(String) */ - public static int getDocCount(IndexReader reader, String field) throws IOException { - int count = 0; - for (LeafReaderContext ctx : reader.leaves()) { - FieldInfo info = ctx.reader().getFieldInfos().fieldInfo(field); - if (info == null || info.getPointDimensionCount() == 0) { - continue; - } - PointValues values = ctx.reader().getPointValues(); - count += values.getDocCount(field); - } - return count; - } - - /** Return the minimum packed values across all leaves of the given - * {@link IndexReader}. Leaves that do not have points for the given field - * are ignored. 
- * @see PointValues#getMinPackedValue(String) */ - public static byte[] getMinPackedValue(IndexReader reader, String field) throws IOException { - byte[] minValue = null; - for (LeafReaderContext ctx : reader.leaves()) { - FieldInfo info = ctx.reader().getFieldInfos().fieldInfo(field); - if (info == null || info.getPointDimensionCount() == 0) { - continue; - } - PointValues values = ctx.reader().getPointValues(); - byte[] leafMinValue = values.getMinPackedValue(field); - if (leafMinValue == null) { - continue; - } - if (minValue == null) { - minValue = leafMinValue.clone(); - } else { - final int numDimensions = values.getNumDimensions(field); - final int numBytesPerDimension = values.getBytesPerDimension(field); - for (int i = 0; i < numDimensions; ++i) { - int offset = i * numBytesPerDimension; - if (StringHelper.compare(numBytesPerDimension, leafMinValue, offset, minValue, offset) < 0) { - System.arraycopy(leafMinValue, offset, minValue, offset, numBytesPerDimension); - } - } - } - } - return minValue; - } - - /** Return the maximum packed values across all leaves of the given - * {@link IndexReader}. Leaves that do not have points for the given field - * are ignored. - * @see PointValues#getMaxPackedValue(String) */ - public static byte[] getMaxPackedValue(IndexReader reader, String field) throws IOException { - byte[] maxValue = null; - for (LeafReaderContext ctx : reader.leaves()) { - FieldInfo info = ctx.reader().getFieldInfos().fieldInfo(field); - if (info == null || info.getPointDimensionCount() == 0) { - continue; - } - PointValues values = ctx.reader().getPointValues(); - byte[] leafMaxValue = values.getMaxPackedValue(field); - if (leafMaxValue == null) { - continue; - } - if (maxValue == null) { - maxValue = leafMaxValue.clone(); - } else { - final int numDimensions = values.getNumDimensions(field); - final int numBytesPerDimension = values.getBytesPerDimension(field); - for (int i = 0; i < numDimensions; ++i) { - int offset = i * numBytesPerDimension; - if (StringHelper.compare(numBytesPerDimension, leafMaxValue, offset, maxValue, offset) > 0) { - System.arraycopy(leafMaxValue, offset, maxValue, offset, numBytesPerDimension); - } - } - } - } - return maxValue; - } - - /** Default constructor */ - private XPointValues() { - } -} diff --git a/core/src/main/java/org/apache/lucene/queries/BlendedTermQuery.java b/core/src/main/java/org/apache/lucene/queries/BlendedTermQuery.java index a4b94b007fd28..0b34a95710cfe 100644 --- a/core/src/main/java/org/apache/lucene/queries/BlendedTermQuery.java +++ b/core/src/main/java/org/apache/lucene/queries/BlendedTermQuery.java @@ -163,7 +163,7 @@ protected int compare(int i, int j) { if (prev > current) { actualDf++; } - contexts[i] = ctx = adjustDF(ctx, Math.min(maxDoc, actualDf)); + contexts[i] = ctx = adjustDF(reader.getContext(), ctx, Math.min(maxDoc, actualDf)); prev = current; if (sumTTF >= 0 && ctx.totalTermFreq() >= 0) { sumTTF += ctx.totalTermFreq(); @@ -179,16 +179,17 @@ protected int compare(int i, int j) { } // the blended sumTTF can't be greater than the sumTTTF on the field final long fixedTTF = sumTTF == -1 ? 
-1 : sumTTF; - contexts[i] = adjustTTF(contexts[i], fixedTTF); + contexts[i] = adjustTTF(reader.getContext(), contexts[i], fixedTTF); } } - private TermContext adjustTTF(TermContext termContext, long sumTTF) { + private TermContext adjustTTF(IndexReaderContext readerContext, TermContext termContext, long sumTTF) { + assert termContext.wasBuiltFor(readerContext); if (sumTTF == -1 && termContext.totalTermFreq() == -1) { return termContext; } - TermContext newTermContext = new TermContext(termContext.topReaderContext); - List leaves = termContext.topReaderContext.leaves(); + TermContext newTermContext = new TermContext(readerContext); + List leaves = readerContext.leaves(); final int len; if (leaves == null) { len = 1; @@ -209,7 +210,8 @@ private TermContext adjustTTF(TermContext termContext, long sumTTF) { return newTermContext; } - private static TermContext adjustDF(TermContext ctx, int newDocFreq) { + private static TermContext adjustDF(IndexReaderContext readerContext, TermContext ctx, int newDocFreq) { + assert ctx.wasBuiltFor(readerContext); // Use a value of ttf that is consistent with the doc freq (ie. gte) long newTTF; if (ctx.totalTermFreq() < 0) { @@ -217,14 +219,14 @@ private static TermContext adjustDF(TermContext ctx, int newDocFreq) { } else { newTTF = Math.max(ctx.totalTermFreq(), newDocFreq); } - List leaves = ctx.topReaderContext.leaves(); + List leaves = readerContext.leaves(); final int len; if (leaves == null) { len = 1; } else { len = leaves.size(); } - TermContext newCtx = new TermContext(ctx.topReaderContext); + TermContext newCtx = new TermContext(readerContext); for (int i = 0; i < len; ++i) { TermState termState = ctx.get(i); if (termState == null) { diff --git a/core/src/main/java/org/apache/lucene/queries/MinDocQuery.java b/core/src/main/java/org/apache/lucene/queries/MinDocQuery.java index a8b7dc9299ff0..ea5d4e7e77be1 100644 --- a/core/src/main/java/org/apache/lucene/queries/MinDocQuery.java +++ b/core/src/main/java/org/apache/lucene/queries/MinDocQuery.java @@ -19,6 +19,7 @@ package org.apache.lucene.queries; +import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.ConstantScoreScorer; import org.apache.lucene.search.ConstantScoreWeight; @@ -35,16 +36,26 @@ * to a configured doc ID. */ public final class MinDocQuery extends Query { + // Matching documents depend on the sequence of segments that the index reader + // wraps. Yet matches must be cacheable per-segment, so we need to incorporate + // the reader id in the identity of the query so that a cache entry may only + // be reused if this query is run against the same index reader. + private final Object readerId; private final int minDoc; /** Sole constructor. 
*/ public MinDocQuery(int minDoc) { + this(minDoc, null); + } + + MinDocQuery(int minDoc, Object readerId) { this.minDoc = minDoc; + this.readerId = readerId; } @Override public int hashCode() { - return Objects.hash(classHash(), minDoc); + return Objects.hash(classHash(), minDoc, readerId); } @Override @@ -53,11 +64,24 @@ public boolean equals(Object obj) { return false; } MinDocQuery that = (MinDocQuery) obj; - return minDoc == that.minDoc; + return minDoc == that.minDoc && Objects.equals(readerId, that.readerId); + } + + @Override + public Query rewrite(IndexReader reader) throws IOException { + if (Objects.equals(reader.getContext().id(), readerId) == false) { + return new MinDocQuery(minDoc, reader.getContext().id()); + } + return this; } @Override public Weight createWeight(IndexSearcher searcher, boolean needsScores) throws IOException { + if (readerId == null) { + throw new IllegalStateException("Rewrite first"); + } else if (Objects.equals(searcher.getIndexReader().getContext().id(), readerId) == false) { + throw new IllegalStateException("Executing against a different reader than the query has been rewritten against"); + } return new ConstantScoreWeight(this) { @Override public Scorer scorer(LeafReaderContext context) throws IOException { diff --git a/core/src/main/java/org/apache/lucene/queryparser/classic/ExistsFieldQueryExtension.java b/core/src/main/java/org/apache/lucene/queryparser/classic/ExistsFieldQueryExtension.java index cb4bee30aaaf4..503fef4f18622 100644 --- a/core/src/main/java/org/apache/lucene/queryparser/classic/ExistsFieldQueryExtension.java +++ b/core/src/main/java/org/apache/lucene/queryparser/classic/ExistsFieldQueryExtension.java @@ -19,8 +19,11 @@ package org.apache.lucene.queryparser.classic; -import org.apache.lucene.search.ConstantScoreQuery; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.WildcardQuery; +import org.elasticsearch.index.mapper.FieldNamesFieldMapper; import org.elasticsearch.index.query.ExistsQueryBuilder; import org.elasticsearch.index.query.QueryShardContext; @@ -33,6 +36,16 @@ public class ExistsFieldQueryExtension implements FieldQueryExtension { @Override public Query query(QueryShardContext context, String queryText) { - return new ConstantScoreQuery(ExistsQueryBuilder.newFilter(context, queryText)); + final FieldNamesFieldMapper.FieldNamesFieldType fieldNamesFieldType = + (FieldNamesFieldMapper.FieldNamesFieldType) context.getMapperService().fullName(FieldNamesFieldMapper.NAME); + if (fieldNamesFieldType == null) { + return new MatchNoDocsQuery("No mappings yet"); + } + if (fieldNamesFieldType.isEnabled() == false) { + // The field_names_field is disabled so we switch to a wildcard query that matches all terms + return new WildcardQuery(new Term(queryText, "*")); + } + + return ExistsQueryBuilder.newFilter(context, queryText); } } diff --git a/core/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java b/core/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java index bcf0a2b201a31..130066788e71c 100644 --- a/core/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java +++ b/core/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java @@ -20,10 +20,12 @@ package org.apache.lucene.queryparser.classic; import org.apache.lucene.analysis.Analyzer; +import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute; import 
org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.tokenattributes.CharTermAttribute; import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute; import org.apache.lucene.index.Term; +import org.apache.lucene.queryparser.analyzing.AnalyzingQueryParser; import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.BoostQuery; @@ -34,16 +36,23 @@ import org.apache.lucene.search.PhraseQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.SynonymQuery; +import org.apache.lucene.search.spans.SpanNearQuery; +import org.apache.lucene.search.spans.SpanOrQuery; +import org.apache.lucene.search.spans.SpanQuery; +import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.IOUtils; import org.apache.lucene.util.automaton.RegExp; import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.common.unit.Fuzziness; +import org.elasticsearch.index.mapper.AllFieldMapper; import org.elasticsearch.index.mapper.DateFieldMapper; import org.elasticsearch.index.mapper.LegacyDateFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.mapper.StringFieldType; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.query.support.QueryParsers; +import org.elasticsearch.index.analysis.ShingleTokenFilterFactory; import java.io.IOException; import java.util.ArrayList; @@ -51,8 +60,7 @@ import java.util.HashMap; import java.util.List; import java.util.Map; -import java.util.Objects; - +import java.util.Collections; import static java.util.Collections.unmodifiableMap; import static org.elasticsearch.common.lucene.search.Queries.fixNegativeQueryIfNeeded; @@ -63,7 +71,7 @@ * Also breaks fields with [type].[name] into a boolean query that must include the type * as well as the query on the name. 
*/ -public class MapperQueryParser extends QueryParser { +public class MapperQueryParser extends AnalyzingQueryParser { public static final Map FIELD_QUERY_EXTENSIONS; @@ -86,7 +94,8 @@ public MapperQueryParser(QueryShardContext context) { public void reset(QueryParserSettings settings) { this.settings = settings; - if (settings.fieldsAndWeights().isEmpty()) { + if (settings.fieldsAndWeights() == null) { + // this query has no explicit fields to query so we fallback to the default field this.field = settings.defaultField(); } else if (settings.fieldsAndWeights().size() == 1) { this.field = settings.fieldsAndWeights().keySet().iterator().next(); @@ -99,11 +108,11 @@ public void reset(QueryParserSettings settings) { setAutoGeneratePhraseQueries(settings.autoGeneratePhraseQueries()); setMaxDeterminizedStates(settings.maxDeterminizedStates()); setAllowLeadingWildcard(settings.allowLeadingWildcard()); - setLowercaseExpandedTerms(settings.lowercaseExpandedTerms()); + setLowercaseExpandedTerms(false); setPhraseSlop(settings.phraseSlop()); setDefaultOperator(settings.defaultOperator()); setFuzzyPrefixLength(settings.fuzzyPrefixLength()); - setLocale(settings.locale()); + setSplitOnWhitespace(settings.splitOnWhitespace()); } /** @@ -144,6 +153,11 @@ public Query getFieldQuery(String field, String queryText, boolean quoted) throw if (fields != null) { if (fields.size() == 1) { return getFieldQuerySingle(fields.iterator().next(), queryText, quoted); + } else if (fields.isEmpty()) { + // the requested fields do not match any field in the mapping + // happens for wildcard fields only since we cannot expand to a valid field name + // if there is no match in the mappings. + return new MatchNoDocsQuery("empty fields"); } if (settings.useDisMax()) { List queries = new ArrayList<>(); @@ -180,17 +194,17 @@ private Query getFieldQuerySingle(String field, String queryText, boolean quoted if (queryText.charAt(0) == '>') { if (queryText.length() > 2) { if (queryText.charAt(1) == '=') { - return getRangeQuerySingle(field, queryText.substring(2), null, true, true); + return getRangeQuerySingle(field, queryText.substring(2), null, true, true, context); } } - return getRangeQuerySingle(field, queryText.substring(1), null, false, true); + return getRangeQuerySingle(field, queryText.substring(1), null, false, true, context); } else if (queryText.charAt(0) == '<') { if (queryText.length() > 2) { if (queryText.charAt(1) == '=') { - return getRangeQuerySingle(field, null, queryText.substring(2), true, true); + return getRangeQuerySingle(field, null, queryText.substring(2), true, true, context); } } - return getRangeQuerySingle(field, null, queryText.substring(1), true, false); + return getRangeQuerySingle(field, null, queryText.substring(1), true, false, context); } } currentFieldType = null; @@ -290,19 +304,19 @@ protected Query getRangeQuery(String field, String part1, String part2, Collection fields = extractMultiFields(field); if (fields == null) { - return getRangeQuerySingle(field, part1, part2, startInclusive, endInclusive); + return getRangeQuerySingle(field, part1, part2, startInclusive, endInclusive, context); } if (fields.size() == 1) { - return getRangeQuerySingle(fields.iterator().next(), part1, part2, startInclusive, endInclusive); + return getRangeQuerySingle(fields.iterator().next(), part1, part2, startInclusive, endInclusive, context); } if (settings.useDisMax()) { List queries = new ArrayList<>(); boolean added = false; for (String mField : fields) { - Query q = getRangeQuerySingle(mField, part1, part2, 
startInclusive, endInclusive); + Query q = getRangeQuerySingle(mField, part1, part2, startInclusive, endInclusive, context); if (q != null) { added = true; queries.add(applyBoost(mField, q)); @@ -315,7 +329,7 @@ protected Query getRangeQuery(String field, String part1, String part2, } else { List clauses = new ArrayList<>(); for (String mField : fields) { - Query q = getRangeQuerySingle(mField, part1, part2, startInclusive, endInclusive); + Query q = getRangeQuerySingle(mField, part1, part2, startInclusive, endInclusive, context); if (q != null) { clauses.add(new BooleanClause(applyBoost(mField, q), BooleanClause.Occur.SHOULD)); } @@ -326,24 +340,23 @@ protected Query getRangeQuery(String field, String part1, String part2, } private Query getRangeQuerySingle(String field, String part1, String part2, - boolean startInclusive, boolean endInclusive) { + boolean startInclusive, boolean endInclusive, QueryShardContext context) { currentFieldType = context.fieldMapper(field); if (currentFieldType != null) { - if (lowercaseExpandedTerms && currentFieldType.tokenized()) { - part1 = part1 == null ? null : part1.toLowerCase(locale); - part2 = part2 == null ? null : part2.toLowerCase(locale); - } - try { + BytesRef part1Binary = part1 == null ? null : getAnalyzer().normalize(field, part1); + BytesRef part2Binary = part2 == null ? null : getAnalyzer().normalize(field, part2); Query rangeQuery; if (currentFieldType instanceof LegacyDateFieldMapper.DateFieldType && settings.timeZone() != null) { LegacyDateFieldMapper.DateFieldType dateFieldType = (LegacyDateFieldMapper.DateFieldType) this.currentFieldType; - rangeQuery = dateFieldType.rangeQuery(part1, part2, startInclusive, endInclusive, settings.timeZone(), null); + rangeQuery = dateFieldType.rangeQuery(part1Binary, part2Binary, + startInclusive, endInclusive, settings.timeZone(), null, context); } else if (currentFieldType instanceof DateFieldMapper.DateFieldType && settings.timeZone() != null) { DateFieldMapper.DateFieldType dateFieldType = (DateFieldMapper.DateFieldType) this.currentFieldType; - rangeQuery = dateFieldType.rangeQuery(part1, part2, startInclusive, endInclusive, settings.timeZone(), null); + rangeQuery = dateFieldType.rangeQuery(part1Binary, part2Binary, + startInclusive, endInclusive, settings.timeZone(), null, context); } else { - rangeQuery = currentFieldType.rangeQuery(part1, part2, startInclusive, endInclusive); + rangeQuery = currentFieldType.rangeQuery(part1Binary, part2Binary, startInclusive, endInclusive, context); } return rangeQuery; } catch (RuntimeException e) { @@ -357,9 +370,6 @@ private Query getRangeQuerySingle(String field, String part1, String part2, } protected Query getFuzzyQuery(String field, String termStr, String minSimilarity) throws ParseException { - if (lowercaseExpandedTerms) { - termStr = termStr.toLowerCase(locale); - } Collection fields = extractMultiFields(field); if (fields != null) { if (fields.size() == 1) { @@ -398,8 +408,9 @@ private Query getFuzzyQuerySingle(String field, String termStr, String minSimila currentFieldType = context.fieldMapper(field); if (currentFieldType != null) { try { - return currentFieldType.fuzzyQuery(termStr, Fuzziness.build(minSimilarity), - fuzzyPrefixLength, settings.fuzzyMaxExpansions(), FuzzyQuery.defaultTranspositions); + BytesRef term = termStr == null ? 
null : getAnalyzer().normalize(field, termStr); + return currentFieldType.fuzzyQuery(term, Fuzziness.build(minSimilarity), + getFuzzyPrefixLength(), settings.fuzzyMaxExpansions(), FuzzyQuery.defaultTranspositions); } catch (RuntimeException e) { if (settings.lenient()) { return null; @@ -422,9 +433,6 @@ protected Query newFuzzyQuery(Term term, float minimumSimilarity, int prefixLeng @Override protected Query getPrefixQuery(String field, String termStr) throws ParseException { - if (lowercaseExpandedTerms) { - termStr = termStr.toLowerCase(locale); - } Collection fields = extractMultiFields(field); if (fields != null) { if (fields.size() == 1) { @@ -470,8 +478,8 @@ private Query getPrefixQuerySingle(String field, String termStr) throws ParseExc setAnalyzer(context.getSearchAnalyzer(currentFieldType)); } Query query = null; - if (currentFieldType.tokenized() == false) { - query = currentFieldType.prefixQuery(termStr, multiTermRewriteMethod, context); + if (currentFieldType instanceof StringFieldType == false) { + query = currentFieldType.prefixQuery(termStr, getMultiTermRewriteMethod(), context); } if (query == null) { query = getPossiblyAnalyzedPrefixQuery(currentFieldType.name(), termStr); @@ -572,25 +580,20 @@ private Query getPossiblyAnalyzedPrefixQuery(String field, String termStr) throw @Override protected Query getWildcardQuery(String field, String termStr) throws ParseException { - if (termStr.equals("*")) { - // we want to optimize for match all query for the "*:*", and "*" cases - if ("*".equals(field) || Objects.equals(field, this.field)) { - String actualField = field; - if (actualField == null) { - actualField = this.field; - } - if (actualField == null) { - return newMatchAllDocsQuery(); - } - if ("*".equals(actualField) || "_all".equals(actualField)) { - return newMatchAllDocsQuery(); - } - // effectively, we check if a field exists or not - return FIELD_QUERY_EXTENSIONS.get(ExistsFieldQueryExtension.NAME).query(context, actualField); + if (termStr.equals("*") && field != null) { + /** + * We rewrite _all:* to a match all query. + * TODO: We can remove this special case when _all is completely removed. 
+ */ + if ("*".equals(field) || AllFieldMapper.NAME.equals(field)) { + return newMatchAllDocsQuery(); } - } - if (lowercaseExpandedTerms) { - termStr = termStr.toLowerCase(locale); + String actualField = field; + if (actualField == null) { + actualField = this.field; + } + // effectively, we check if a field exists or not + return FIELD_QUERY_EXTENSIONS.get(ExistsFieldQueryExtension.NAME).query(context, actualField); } Collection fields = extractMultiFields(field); if (fields != null) { @@ -628,6 +631,10 @@ protected Query getWildcardQuery(String field, String termStr) throws ParseExcep } private Query getWildcardQuerySingle(String field, String termStr) throws ParseException { + if ("*".equals(termStr)) { + // effectively, we check if a field exists or not + return FIELD_QUERY_EXTENSIONS.get(ExistsFieldQueryExtension.NAME).query(context, field); + } String indexedNameField = field; currentFieldType = null; Analyzer oldAnalyzer = getAnalyzer(); @@ -638,9 +645,8 @@ private Query getWildcardQuerySingle(String field, String termStr) throws ParseE setAnalyzer(context.getSearchAnalyzer(currentFieldType)); } indexedNameField = currentFieldType.name(); - return getPossiblyAnalyzedWildcardQuery(indexedNameField, termStr); } - return getPossiblyAnalyzedWildcardQuery(indexedNameField, termStr); + return super.getWildcardQuery(indexedNameField, termStr); } catch (RuntimeException e) { if (settings.lenient()) { return null; @@ -651,75 +657,8 @@ private Query getWildcardQuerySingle(String field, String termStr) throws ParseE } } - private Query getPossiblyAnalyzedWildcardQuery(String field, String termStr) throws ParseException { - if (!settings.analyzeWildcard()) { - return super.getWildcardQuery(field, termStr); - } - boolean isWithinToken = (!termStr.startsWith("?") && !termStr.startsWith("*")); - StringBuilder aggStr = new StringBuilder(); - StringBuilder tmp = new StringBuilder(); - for (int i = 0; i < termStr.length(); i++) { - char c = termStr.charAt(i); - if (c == '?' 
|| c == '*') { - if (isWithinToken) { - try (TokenStream source = getAnalyzer().tokenStream(field, tmp.toString())) { - source.reset(); - CharTermAttribute termAtt = source.addAttribute(CharTermAttribute.class); - if (source.incrementToken()) { - String term = termAtt.toString(); - if (term.length() == 0) { - // no tokens, just use what we have now - aggStr.append(tmp); - } else { - aggStr.append(term); - } - } else { - // no tokens, just use what we have now - aggStr.append(tmp); - } - } catch (IOException e) { - aggStr.append(tmp); - } - tmp.setLength(0); - } - isWithinToken = false; - aggStr.append(c); - } else { - tmp.append(c); - isWithinToken = true; - } - } - if (isWithinToken) { - try { - try (TokenStream source = getAnalyzer().tokenStream(field, tmp.toString())) { - source.reset(); - CharTermAttribute termAtt = source.addAttribute(CharTermAttribute.class); - if (source.incrementToken()) { - String term = termAtt.toString(); - if (term.length() == 0) { - // no tokens, just use what we have now - aggStr.append(tmp); - } else { - aggStr.append(term); - } - } else { - // no tokens, just use what we have now - aggStr.append(tmp); - } - } - } catch (IOException e) { - aggStr.append(tmp); - } - } - - return super.getWildcardQuery(field, aggStr.toString()); - } - @Override protected Query getRegexpQuery(String field, String termStr) throws ParseException { - if (lowercaseExpandedTerms) { - termStr = termStr.toLowerCase(locale); - } Collection fields = extractMultiFields(field); if (fields != null) { if (fields.size() == 1) { @@ -767,7 +706,7 @@ private Query getRegexpQuerySingle(String field, String termStr) throws ParseExc Query query = null; if (currentFieldType.tokenized() == false) { query = currentFieldType.regexpQuery(termStr, RegExp.ALL, - maxDeterminizedStates, multiTermRewriteMethod, context); + getMaxDeterminizedStates(), getMultiTermRewriteMethod(), context); } if (query == null) { query = super.getRegexpQuery(field, termStr); @@ -809,7 +748,7 @@ protected Query getBooleanQuery(List clauses) throws ParseExcepti } private Query applyBoost(String field, Query q) { - Float fieldBoost = settings.fieldsAndWeights().get(field); + Float fieldBoost = settings.fieldsAndWeights() == null ? 
null : settings.fieldsAndWeights().get(field); if (fieldBoost != null && fieldBoost != 1f) { return new BoostQuery(q, fieldBoost); } @@ -818,33 +757,58 @@ private Query applyBoost(String field, Query q) { private Query applySlop(Query q, int slop) { if (q instanceof PhraseQuery) { - PhraseQuery pq = (PhraseQuery) q; - PhraseQuery.Builder builder = new PhraseQuery.Builder(); - builder.setSlop(slop); - final Term[] terms = pq.getTerms(); - final int[] positions = pq.getPositions(); - for (int i = 0; i < terms.length; ++i) { - builder.add(terms[i], positions[i]); - } - pq = builder.build(); //make sure that the boost hasn't been set beforehand, otherwise we'd lose it assert q instanceof BoostQuery == false; - return pq; + return addSlopToPhrase((PhraseQuery) q, slop); } else if (q instanceof MultiPhraseQuery) { MultiPhraseQuery.Builder builder = new MultiPhraseQuery.Builder((MultiPhraseQuery) q); builder.setSlop(slop); return builder.build(); + } else if (q instanceof SpanQuery) { + return addSlopToSpan((SpanQuery) q, slop); } else { return q; } } + private Query addSlopToSpan(SpanQuery query, int slop) { + if (query instanceof SpanNearQuery) { + return new SpanNearQuery(((SpanNearQuery) query).getClauses(), slop, + ((SpanNearQuery) query).isInOrder()); + } else if (query instanceof SpanOrQuery) { + SpanQuery[] clauses = new SpanQuery[((SpanOrQuery) query).getClauses().length]; + int pos = 0; + for (SpanQuery clause : ((SpanOrQuery) query).getClauses()) { + clauses[pos++] = (SpanQuery) addSlopToSpan(clause, slop); + } + return new SpanOrQuery(clauses); + } else { + return query; + } + } + + /** + * Rebuild a phrase query with a slop value + */ + private PhraseQuery addSlopToPhrase(PhraseQuery query, int slop) { + PhraseQuery.Builder builder = new PhraseQuery.Builder(); + builder.setSlop(slop); + final Term[] terms = query.getTerms(); + final int[] positions = query.getPositions(); + for (int i = 0; i < terms.length; ++i) { + builder.add(terms[i], positions[i]); + } + + return builder.build(); + } + private Collection extractMultiFields(String field) { Collection fields; if (field != null) { fields = context.simpleMatchToIndexNames(field); } else { - fields = settings.fieldsAndWeights().keySet(); + Map fieldsAndWeights = settings.fieldsAndWeights(); + fields = fieldsAndWeights == null ? Collections.emptyList() : fieldsAndWeights.keySet(); } return fields; } @@ -859,4 +823,30 @@ public Query parse(String query) throws ParseException { } return super.parse(query); } + + /** + * Checks if graph analysis should be enabled for the field depending + * on the provided {@link Analyzer} + */ + protected Query createFieldQuery(Analyzer analyzer, BooleanClause.Occur operator, String field, + String queryText, boolean quoted, int phraseSlop) { + assert operator == BooleanClause.Occur.SHOULD || operator == BooleanClause.Occur.MUST; + + // Use the analyzer to get all the tokens, and then build an appropriate + // query based on the analysis chain. + try (TokenStream source = analyzer.tokenStream(field, queryText)) { + if (source.hasAttribute(DisableGraphAttribute.class)) { + /** + * A {@link TokenFilter} in this {@link TokenStream} disabled the graph analysis to avoid + * paths explosion. See {@link ShingleTokenFilterFactory} for details. 
+ */ + setEnableGraphQueries(false); + } + Query query = super.createFieldQuery(source, operator, field, quoted, phraseSlop); + setEnableGraphQueries(true); + return query; + } catch (IOException e) { + throw new RuntimeException("Error analyzing query text", e); + } + } } diff --git a/core/src/main/java/org/apache/lucene/queryparser/classic/QueryParserSettings.java b/core/src/main/java/org/apache/lucene/queryparser/classic/QueryParserSettings.java index c1fc2ae556ea7..295c1ace4f637 100644 --- a/core/src/main/java/org/apache/lucene/queryparser/classic/QueryParserSettings.java +++ b/core/src/main/java/org/apache/lucene/queryparser/classic/QueryParserSettings.java @@ -24,7 +24,6 @@ import org.elasticsearch.common.unit.Fuzziness; import org.joda.time.DateTimeZone; -import java.util.Locale; import java.util.Map; /** @@ -53,12 +52,8 @@ public class QueryParserSettings { private boolean analyzeWildcard; - private boolean lowercaseExpandedTerms; - private boolean enablePositionIncrements; - private Locale locale; - private Fuzziness fuzziness; private int fuzzyPrefixLength; private int fuzzyMaxExpansions; @@ -79,6 +74,8 @@ public class QueryParserSettings { /** To limit effort spent determinizing regexp queries. */ private int maxDeterminizedStates; + private boolean splitOnWhitespace; + public QueryParserSettings(String queryString) { this.queryString = queryString; } @@ -135,14 +132,6 @@ public void allowLeadingWildcard(boolean allowLeadingWildcard) { this.allowLeadingWildcard = allowLeadingWildcard; } - public boolean lowercaseExpandedTerms() { - return lowercaseExpandedTerms; - } - - public void lowercaseExpandedTerms(boolean lowercaseExpandedTerms) { - this.lowercaseExpandedTerms = lowercaseExpandedTerms; - } - public boolean enablePositionIncrements() { return enablePositionIncrements; } @@ -267,14 +256,6 @@ public void useDisMax(boolean useDisMax) { this.useDisMax = useDisMax; } - public void locale(Locale locale) { - this.locale = locale; - } - - public Locale locale() { - return this.locale; - } - public void timeZone(DateTimeZone timeZone) { this.timeZone = timeZone; } @@ -290,4 +271,12 @@ public void fuzziness(Fuzziness fuzziness) { public Fuzziness fuzziness() { return fuzziness; } + + public void splitOnWhitespace(boolean value) { + this.splitOnWhitespace = value; + } + + public boolean splitOnWhitespace() { + return splitOnWhitespace; + } } diff --git a/core/src/main/java/org/apache/lucene/search/grouping/CollapseTopFieldDocs.java b/core/src/main/java/org/apache/lucene/search/grouping/CollapseTopFieldDocs.java new file mode 100644 index 0000000000000..b4d3c82343957 --- /dev/null +++ b/core/src/main/java/org/apache/lucene/search/grouping/CollapseTopFieldDocs.java @@ -0,0 +1,242 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.apache.lucene.search.grouping; + +import org.apache.lucene.search.FieldComparator; +import org.apache.lucene.search.FieldDoc; +import org.apache.lucene.search.ScoreDoc; +import org.apache.lucene.search.Sort; +import org.apache.lucene.search.SortField; +import org.apache.lucene.search.TopFieldDocs; +import org.apache.lucene.util.PriorityQueue; + +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.Set; + +/** + * Represents hits returned by {@link CollapsingTopDocsCollector#getTopDocs()}. + */ +public final class CollapseTopFieldDocs extends TopFieldDocs { + /** The field used for collapsing **/ + public final String field; + /** The collapse value for each top doc */ + public final Object[] collapseValues; + + public CollapseTopFieldDocs(String field, int totalHits, ScoreDoc[] scoreDocs, + SortField[] sortFields, Object[] values, float maxScore) { + super(totalHits, scoreDocs, sortFields, maxScore); + this.field = field; + this.collapseValues = values; + } + + // Refers to one hit: + private static final class ShardRef { + // Which shard (index into shardHits[]): + final int shardIndex; + + // True if we should use the incoming ScoreDoc.shardIndex for sort order + final boolean useScoreDocIndex; + + // Which hit within the shard: + int hitIndex; + + ShardRef(int shardIndex, boolean useScoreDocIndex) { + this.shardIndex = shardIndex; + this.useScoreDocIndex = useScoreDocIndex; + } + + @Override + public String toString() { + return "ShardRef(shardIndex=" + shardIndex + " hitIndex=" + hitIndex + ")"; + } + + int getShardIndex(ScoreDoc scoreDoc) { + if (useScoreDocIndex) { + if (scoreDoc.shardIndex == -1) { + throw new IllegalArgumentException("setShardIndex is false but TopDocs[" + + shardIndex + "].scoreDocs[" + hitIndex + "] is not set"); + } + return scoreDoc.shardIndex; + } else { + // NOTE: we don't assert that shardIndex is -1 here, because caller could in fact have set it but asked us to ignore it now + return shardIndex; + } + } + } + + /** + * if we need to tie-break since score / sort value are the same we first compare shard index (lower shard wins) + * and then iff shard index is the same we use the hit index. 
+ */ + static boolean tieBreakLessThan(ShardRef first, ScoreDoc firstDoc, ShardRef second, ScoreDoc secondDoc) { + final int firstShardIndex = first.getShardIndex(firstDoc); + final int secondShardIndex = second.getShardIndex(secondDoc); + // Tie break: earlier shard wins + if (firstShardIndex < secondShardIndex) { + return true; + } else if (firstShardIndex > secondShardIndex) { + return false; + } else { + // Tie break in same shard: resolve however the + // shard had resolved it: + assert first.hitIndex != second.hitIndex; + return first.hitIndex < second.hitIndex; + } + } + + private static class MergeSortQueue extends PriorityQueue { + // These are really FieldDoc instances: + final ScoreDoc[][] shardHits; + final FieldComparator[] comparators; + final int[] reverseMul; + + MergeSortQueue(Sort sort, CollapseTopFieldDocs[] shardHits) { + super(shardHits.length); + this.shardHits = new ScoreDoc[shardHits.length][]; + for (int shardIDX = 0; shardIDX < shardHits.length; shardIDX++) { + final ScoreDoc[] shard = shardHits[shardIDX].scoreDocs; + if (shard != null) { + this.shardHits[shardIDX] = shard; + // Fail gracefully if API is misused: + for (int hitIDX = 0; hitIDX < shard.length; hitIDX++) { + final ScoreDoc sd = shard[hitIDX]; + final FieldDoc gd = (FieldDoc) sd; + assert gd.fields != null; + } + } + } + + final SortField[] sortFields = sort.getSort(); + comparators = new FieldComparator[sortFields.length]; + reverseMul = new int[sortFields.length]; + for (int compIDX = 0; compIDX < sortFields.length; compIDX++) { + final SortField sortField = sortFields[compIDX]; + comparators[compIDX] = sortField.getComparator(1, compIDX); + reverseMul[compIDX] = sortField.getReverse() ? -1 : 1; + } + } + + // Returns true if first is < second + @Override + public boolean lessThan(ShardRef first, ShardRef second) { + assert first != second; + final FieldDoc firstFD = (FieldDoc) shardHits[first.shardIndex][first.hitIndex]; + final FieldDoc secondFD = (FieldDoc) shardHits[second.shardIndex][second.hitIndex]; + + for (int compIDX = 0; compIDX < comparators.length; compIDX++) { + final FieldComparator comp = comparators[compIDX]; + + final int cmp = + reverseMul[compIDX] * comp.compareValues(firstFD.fields[compIDX], secondFD.fields[compIDX]); + + if (cmp != 0) { + return cmp < 0; + } + } + return tieBreakLessThan(first, firstFD, second, secondFD); + } + } + + /** + * Returns a new CollapseTopDocs, containing topN collapsed results across + * the provided CollapseTopDocs, sorting by score. Each {@link CollapseTopFieldDocs} instance must be sorted. 
+ **/ + public static CollapseTopFieldDocs merge(Sort sort, int start, int size, + CollapseTopFieldDocs[] shardHits, boolean setShardIndex) { + String collapseField = shardHits[0].field; + for (int i = 1; i < shardHits.length; i++) { + if (collapseField.equals(shardHits[i].field) == false) { + throw new IllegalArgumentException("collapse field differ across shards [" + + collapseField + "] != [" + shardHits[i].field + "]"); + } + } + final PriorityQueue queue = new MergeSortQueue(sort, shardHits); + + int totalHitCount = 0; + int availHitCount = 0; + float maxScore = Float.MIN_VALUE; + for(int shardIDX=0;shardIDX 0) { + availHitCount += shard.scoreDocs.length; + queue.add(new ShardRef(shardIDX, setShardIndex == false)); + maxScore = Math.max(maxScore, shard.getMaxScore()); + } + } + + if (availHitCount == 0) { + maxScore = Float.NaN; + } + + final ScoreDoc[] hits; + final Object[] values; + if (availHitCount <= start) { + hits = new ScoreDoc[0]; + values = new Object[0]; + } else { + List hitList = new ArrayList<>(); + List collapseList = new ArrayList<>(); + int requestedResultWindow = start + size; + int numIterOnHits = Math.min(availHitCount, requestedResultWindow); + int hitUpto = 0; + Set seen = new HashSet<>(); + while (hitUpto < numIterOnHits) { + if (queue.size() == 0) { + break; + } + ShardRef ref = queue.top(); + final ScoreDoc hit = shardHits[ref.shardIndex].scoreDocs[ref.hitIndex]; + final Object collapseValue = shardHits[ref.shardIndex].collapseValues[ref.hitIndex++]; + if (seen.contains(collapseValue)) { + if (ref.hitIndex < shardHits[ref.shardIndex].scoreDocs.length) { + queue.updateTop(); + } else { + queue.pop(); + } + continue; + } + seen.add(collapseValue); + if (setShardIndex) { + hit.shardIndex = ref.shardIndex; + } + if (hitUpto >= start) { + hitList.add(hit); + collapseList.add(collapseValue); + } + + hitUpto++; + + if (ref.hitIndex < shardHits[ref.shardIndex].scoreDocs.length) { + // Not done with this these TopDocs yet: + queue.updateTop(); + } else { + queue.pop(); + } + } + hits = hitList.toArray(new ScoreDoc[0]); + values = collapseList.toArray(new Object[0]); + } + return new CollapseTopFieldDocs(collapseField, totalHitCount, hits, sort.getSort(), values, maxScore); + } +} diff --git a/core/src/main/java/org/apache/lucene/search/grouping/CollapsingDocValuesSource.java b/core/src/main/java/org/apache/lucene/search/grouping/CollapsingDocValuesSource.java new file mode 100644 index 0000000000000..5bc8afb347c71 --- /dev/null +++ b/core/src/main/java/org/apache/lucene/search/grouping/CollapsingDocValuesSource.java @@ -0,0 +1,211 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.apache.lucene.search.grouping; + +import org.apache.lucene.index.DocValues; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.NumericDocValues; +import org.apache.lucene.index.SortedDocValues; +import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.index.SortedSetDocValues; +import org.apache.lucene.util.ArrayUtil; +import org.apache.lucene.util.Bits; +import org.apache.lucene.util.BytesRef; + +import java.io.IOException; + +/** + * Utility class that ensures that a single collapse key is extracted per document. + */ +abstract class CollapsingDocValuesSource { + protected final String field; + + CollapsingDocValuesSource(String field) throws IOException { + this.field = field; + } + + abstract T get(int doc); + + abstract T copy(T value, T reuse); + + abstract void setNextReader(LeafReader reader) throws IOException; + + /** + * Implementation for {@link NumericDocValues} and {@link SortedNumericDocValues}. + * Fails with an {@link IllegalStateException} if a document contains multiple values for the specified field. + */ + static class Numeric extends CollapsingDocValuesSource { + private NumericDocValues values; + private Bits docsWithField; + + Numeric(String field) throws IOException { + super(field); + } + + @Override + public Long get(int doc) { + if (docsWithField.get(doc)) { + return values.get(doc); + } else { + return null; + } + } + + @Override + public Long copy(Long value, Long reuse) { + return value; + } + + @Override + public void setNextReader(LeafReader reader) throws IOException { + DocValuesType type = getDocValuesType(reader, field); + if (type == null || type == DocValuesType.NONE) { + values = DocValues.emptyNumeric(); + docsWithField = new Bits.MatchNoBits(reader.maxDoc()); + return ; + } + docsWithField = DocValues.getDocsWithField(reader, field); + switch (type) { + case NUMERIC: + values = DocValues.getNumeric(reader, field); + break; + + case SORTED_NUMERIC: + final SortedNumericDocValues sorted = DocValues.getSortedNumeric(reader, field); + values = DocValues.unwrapSingleton(sorted); + if (values == null) { + values = new NumericDocValues() { + @Override + public long get(int docID) { + sorted.setDocument(docID); + assert sorted.count() > 0; + if (sorted.count() > 1) { + throw new IllegalStateException("failed to collapse " + docID + + ", the collapse field must be single valued"); + } + return sorted.valueAt(0); + } + }; + } + break; + + default: + throw new IllegalStateException("unexpected doc values type " + + type + "` for field `" + field + "`"); + } + } + } + + /** + * Implementation for {@link SortedDocValues} and {@link SortedSetDocValues}. + * Fails with an {@link IllegalStateException} if a document contains multiple values for the specified field. 
+ */ + static class Keyword extends CollapsingDocValuesSource { + private Bits docsWithField; + private SortedDocValues values; + + Keyword(String field) throws IOException { + super(field); + } + + @Override + public BytesRef get(int doc) { + if (docsWithField.get(doc)) { + return values.get(doc); + } else { + return null; + } + } + + @Override + public BytesRef copy(BytesRef value, BytesRef reuse) { + if (value == null) { + return null; + } + if (reuse != null) { + reuse.bytes = ArrayUtil.grow(reuse.bytes, value.length); + reuse.offset = 0; + reuse.length = value.length; + System.arraycopy(value.bytes, value.offset, reuse.bytes, 0, value.length); + return reuse; + } else { + return BytesRef.deepCopyOf(value); + } + } + + @Override + public void setNextReader(LeafReader reader) throws IOException { + DocValuesType type = getDocValuesType(reader, field); + if (type == null || type == DocValuesType.NONE) { + values = DocValues.emptySorted(); + docsWithField = new Bits.MatchNoBits(reader.maxDoc()); + return ; + } + docsWithField = DocValues.getDocsWithField(reader, field); + switch (type) { + case SORTED: + values = DocValues.getSorted(reader, field); + break; + + case SORTED_SET: + final SortedSetDocValues sorted = DocValues.getSortedSet(reader, field); + values = DocValues.unwrapSingleton(sorted); + if (values == null) { + values = new SortedDocValues() { + @Override + public int getOrd(int docID) { + sorted.setDocument(docID); + int ord = (int) sorted.nextOrd(); + if (sorted.nextOrd() != SortedSetDocValues.NO_MORE_ORDS) { + throw new IllegalStateException("failed to collapse " + docID + + ", the collapse field must be single valued"); + } + return ord; + } + + @Override + public BytesRef lookupOrd(int ord) { + return sorted.lookupOrd(ord); + } + + @Override + public int getValueCount() { + return (int) sorted.getValueCount(); + } + }; + } + break; + + default: + throw new IllegalStateException("unexpected doc values type " + + type + "` for field `" + field + "`"); + } + } + } + + private static DocValuesType getDocValuesType(LeafReader in, String field) { + FieldInfo fi = in.getFieldInfos().fieldInfo(field); + if (fi != null) { + return fi.getDocValuesType(); + } + return null; + } +} diff --git a/core/src/main/java/org/apache/lucene/search/grouping/CollapsingTopDocsCollector.java b/core/src/main/java/org/apache/lucene/search/grouping/CollapsingTopDocsCollector.java new file mode 100644 index 0000000000000..955a63e5483b7 --- /dev/null +++ b/core/src/main/java/org/apache/lucene/search/grouping/CollapsingTopDocsCollector.java @@ -0,0 +1,214 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.apache.lucene.search.grouping; + +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.FieldDoc; +import org.apache.lucene.search.ScoreDoc; +import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.Sort; +import org.apache.lucene.search.SortField; +import org.apache.lucene.util.BytesRef; + +import java.io.IOException; +import java.util.Collection; +import java.util.Iterator; + +import static org.apache.lucene.search.SortField.Type.SCORE; + +/** + * A collector that groups documents based on field values and returns {@link CollapseTopFieldDocs} + * output. The collapsing is done in a single pass by selecting only the top sorted document per collapse key. + * The value used for the collapse key of each group can be found in {@link CollapseTopFieldDocs#collapseValues}. + */ +public abstract class CollapsingTopDocsCollector extends FirstPassGroupingCollector { + protected final String collapseField; + + protected final Sort sort; + protected Scorer scorer; + + private int totalHitCount; + private float maxScore; + private final boolean trackMaxScore; + + private CollapsingTopDocsCollector(String collapseField, Sort sort, + int topN, boolean trackMaxScore) throws IOException { + super(sort, topN); + this.collapseField = collapseField; + this.trackMaxScore = trackMaxScore; + if (trackMaxScore) { + maxScore = Float.NEGATIVE_INFINITY; + } else { + maxScore = Float.NaN; + } + this.sort = sort; + } + + /** + * Transform {@link FirstPassGroupingCollector#getTopGroups(int, boolean)} output in + * {@link CollapseTopFieldDocs}. The collapsing needs only one pass so we can create the final top docs at the end + * of the first pass. + */ + public CollapseTopFieldDocs getTopDocs() { + Collection> groups = super.getTopGroups(0, true); + if (groups == null) { + return new CollapseTopFieldDocs(collapseField, totalHitCount, new ScoreDoc[0], + sort.getSort(), new Object[0], Float.NaN); + } + FieldDoc[] docs = new FieldDoc[groups.size()]; + Object[] collapseValues = new Object[groups.size()]; + int scorePos = -1; + for (int index = 0; index < sort.getSort().length; index++) { + SortField sortField = sort.getSort()[index]; + if (sortField.getType() == SCORE) { + scorePos = index; + break; + } + } + int pos = 0; + Iterator> it = orderedGroups.iterator(); + for (SearchGroup group : groups) { + assert it.hasNext(); + CollectedSearchGroup col = it.next(); + float score = Float.NaN; + if (scorePos != -1) { + score = (float) group.sortValues[scorePos]; + } + docs[pos] = new FieldDoc(col.topDoc, score, group.sortValues); + collapseValues[pos] = group.groupValue; + pos++; + } + return new CollapseTopFieldDocs(collapseField, totalHitCount, docs, sort.getSort(), + collapseValues, maxScore); + } + + @Override + public boolean needsScores() { + if (super.needsScores() == false) { + return trackMaxScore; + } + return true; + } + + @Override + public void setScorer(Scorer scorer) throws IOException { + super.setScorer(scorer); + this.scorer = scorer; + } + + @Override + public void collect(int doc) throws IOException { + super.collect(doc); + if (trackMaxScore) { + maxScore = Math.max(maxScore, scorer.score()); + } + totalHitCount++; + } + + private static class Numeric extends CollapsingTopDocsCollector { + private final CollapsingDocValuesSource.Numeric source; + + private Numeric(String collapseField, Sort sort, int topN, boolean trackMaxScore) throws IOException { + super(collapseField, sort, topN, trackMaxScore); + source = new 
CollapsingDocValuesSource.Numeric(collapseField); + } + + @Override + protected void doSetNextReader(LeafReaderContext readerContext) throws IOException { + super.doSetNextReader(readerContext); + source.setNextReader(readerContext.reader()); + } + + @Override + protected Long getDocGroupValue(int doc) { + return source.get(doc); + } + + @Override + protected Long copyDocGroupValue(Long groupValue, Long reuse) { + return source.copy(groupValue, reuse); + } + } + + private static class Keyword extends CollapsingTopDocsCollector { + private final CollapsingDocValuesSource.Keyword source; + + private Keyword(String collapseField, Sort sort, int topN, boolean trackMaxScore) throws IOException { + super(collapseField, sort, topN, trackMaxScore); + source = new CollapsingDocValuesSource.Keyword(collapseField); + + } + + @Override + protected void doSetNextReader(LeafReaderContext readerContext) throws IOException { + super.doSetNextReader(readerContext); + source.setNextReader(readerContext.reader()); + } + + @Override + protected BytesRef getDocGroupValue(int doc) { + return source.get(doc); + } + + @Override + protected BytesRef copyDocGroupValue(BytesRef groupValue, BytesRef reuse) { + return source.copy(groupValue, reuse); + } + } + + /** + * Create a collapsing top docs collector on a {@link org.apache.lucene.index.NumericDocValues} field. + * It accepts also {@link org.apache.lucene.index.SortedNumericDocValues} field but + * the collect will fail with an {@link IllegalStateException} if a document contains more than one value for the + * field. + * + * @param collapseField The sort field used to group + * documents. + * @param sort The {@link Sort} used to sort the collapsed hits. + * The collapsing keeps only the top sorted document per collapsed key. + * This must be non-null, ie, if you want to groupSort by relevance + * use Sort.RELEVANCE. + * @param topN How many top groups to keep. + * @throws IOException When I/O related errors occur + */ + public static CollapsingTopDocsCollector createNumeric(String collapseField, Sort sort, + int topN, boolean trackMaxScore) throws IOException { + return new Numeric(collapseField, sort, topN, trackMaxScore); + } + + /** + * Create a collapsing top docs collector on a {@link org.apache.lucene.index.SortedDocValues} field. + * It accepts also {@link org.apache.lucene.index.SortedSetDocValues} field but + * the collect will fail with an {@link IllegalStateException} if a document contains more than one value for the + * field. + * + * @param collapseField The sort field used to group + * documents. + * @param sort The {@link Sort} used to sort the collapsed hits. The collapsing keeps only the top sorted + * document per collapsed key. + * This must be non-null, ie, if you want to groupSort by relevance use Sort.RELEVANCE. + * @param topN How many top groups to keep. 
+ * @throws IOException When I/O related errors occur + */ + public static CollapsingTopDocsCollector createKeyword(String collapseField, Sort sort, + int topN, boolean trackMaxScore) throws IOException { + return new Keyword(collapseField, sort, topN, trackMaxScore); + } +} + diff --git a/core/src/main/java/org/apache/lucene/search/postingshighlight/Snippet.java b/core/src/main/java/org/apache/lucene/search/highlight/Snippet.java similarity index 96% rename from core/src/main/java/org/apache/lucene/search/postingshighlight/Snippet.java rename to core/src/main/java/org/apache/lucene/search/highlight/Snippet.java index f3bfa1b9c652a..81a3d406ea346 100644 --- a/core/src/main/java/org/apache/lucene/search/postingshighlight/Snippet.java +++ b/core/src/main/java/org/apache/lucene/search/highlight/Snippet.java @@ -17,7 +17,7 @@ * under the License. */ -package org.apache.lucene.search.postingshighlight; +package org.apache.lucene.search.highlight; /** * Represents a scored highlighted snippet. diff --git a/core/src/main/java/org/apache/lucene/search/postingshighlight/CustomPassageFormatter.java b/core/src/main/java/org/apache/lucene/search/postingshighlight/CustomPassageFormatter.java index 889e7f741ed80..a33bf16dee4c7 100644 --- a/core/src/main/java/org/apache/lucene/search/postingshighlight/CustomPassageFormatter.java +++ b/core/src/main/java/org/apache/lucene/search/postingshighlight/CustomPassageFormatter.java @@ -19,6 +19,7 @@ package org.apache.lucene.search.postingshighlight; +import org.apache.lucene.search.highlight.Snippet; import org.apache.lucene.search.highlight.Encoder; import org.elasticsearch.search.fetch.subphase.highlight.HighlightUtils; @@ -46,10 +47,10 @@ public Snippet[] format(Passage[] passages, String content) { for (int j = 0; j < passages.length; j++) { Passage passage = passages[j]; StringBuilder sb = new StringBuilder(); - pos = passage.startOffset; - for (int i = 0; i < passage.numMatches; i++) { - int start = passage.matchStarts[i]; - int end = passage.matchEnds[i]; + pos = passage.getStartOffset(); + for (int i = 0; i < passage.getNumMatches(); i++) { + int start = passage.getMatchStarts()[i]; + int end = passage.getMatchEnds()[i]; // its possible to have overlapping terms if (start > pos) { append(sb, content, pos, start); @@ -62,7 +63,7 @@ public Snippet[] format(Passage[] passages, String content) { } } // its possible a "term" from the analyzer could span a sentence boundary. 
- append(sb, content, pos, Math.max(pos, passage.endOffset)); + append(sb, content, pos, Math.max(pos, passage.getEndOffset())); //we remove the paragraph separator if present at the end of the snippet (we used it as separator between values) if (sb.charAt(sb.length() - 1) == HighlightUtils.PARAGRAPH_SEPARATOR) { sb.deleteCharAt(sb.length() - 1); @@ -70,7 +71,7 @@ public Snippet[] format(Passage[] passages, String content) { sb.deleteCharAt(sb.length() - 1); } //and we trim the snippets too - snippets[j] = new Snippet(sb.toString().trim(), passage.score, passage.numMatches > 0); + snippets[j] = new Snippet(sb.toString().trim(), passage.getScore(), passage.getNumMatches() > 0); } return snippets; } diff --git a/core/src/main/java/org/apache/lucene/search/postingshighlight/CustomPostingsHighlighter.java b/core/src/main/java/org/apache/lucene/search/postingshighlight/CustomPostingsHighlighter.java index 30f57b2626c4b..ac90a3e57aee7 100644 --- a/core/src/main/java/org/apache/lucene/search/postingshighlight/CustomPostingsHighlighter.java +++ b/core/src/main/java/org/apache/lucene/search/postingshighlight/CustomPostingsHighlighter.java @@ -22,6 +22,7 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.Query; +import org.apache.lucene.search.highlight.Snippet; import java.io.IOException; import java.text.BreakIterator; diff --git a/core/src/main/java/org/apache/lucene/search/suggest/analyzing/XAnalyzingSuggester.java b/core/src/main/java/org/apache/lucene/search/suggest/analyzing/XAnalyzingSuggester.java index c65f962dbb8b0..5eaf63369b962 100644 --- a/core/src/main/java/org/apache/lucene/search/suggest/analyzing/XAnalyzingSuggester.java +++ b/core/src/main/java/org/apache/lucene/search/suggest/analyzing/XAnalyzingSuggester.java @@ -392,7 +392,7 @@ private static final class EscapingTokenStreamToAutomaton extends TokenStreamTo final BytesRefBuilder spare = new BytesRefBuilder(); private char sepLabel; - public EscapingTokenStreamToAutomaton(char sepLabel) { + EscapingTokenStreamToAutomaton(char sepLabel) { this.sepLabel = sepLabel; } @@ -432,7 +432,7 @@ private static class AnalyzingComparator implements Comparator { private final boolean hasPayloads; - public AnalyzingComparator(boolean hasPayloads) { + AnalyzingComparator(boolean hasPayloads) { this.hasPayloads = hasPayloads; } @@ -1114,7 +1114,7 @@ private static final class SurfaceFormAndPayload implements Comparable windowStart && offset < windowEnd) { + innerStart = innerEnd; + innerEnd = windowEnd; + } else { + windowStart = innerStart = mainBreak.preceding(offset); + windowEnd = innerEnd = mainBreak.following(offset-1); + } + + if (innerEnd - innerStart > maxLen) { + // the current split is too big, + // so starting from the current term we try to find boundaries on the left first + if (offset - maxLen > innerStart) { + innerStart = Math.max(innerStart, + innerBreak.preceding(offset - maxLen)); + } + // and then we try to expand the passage to the right with the remaining size + int remaining = Math.max(0, maxLen - (offset - innerStart)); + if (offset + remaining < windowEnd) { + innerEnd = Math.min(windowEnd, + innerBreak.following(offset + remaining)); + } + } + lastPrecedingOffset = offset - 1; + return innerStart; + } + + /** + * Can be invoked only after a call to preceding(offset+1). + * See {@link FieldHighlighter} for usage. 
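
Editorial aside: the calling contract described above, where `following(offset)` is only legal immediately after `preceding(offset + 1)`, can be sketched as follows. The 100-character bound and the `org.apache.lucene.search.uhighlight` package are assumptions.

```java
import java.text.BreakIterator;
import java.util.Locale;

import org.apache.lucene.search.uhighlight.BoundedBreakIteratorScanner;

class BoundedScannerExample {
    // Compute passage bounds around a match starting at matchStart, with passages
    // bounded to roughly 100 characters, mirroring how the unified highlighter
    // drives this scanner: preceding(offset + 1) first, then following(offset).
    static int[] passageBounds(String content, int matchStart) {
        BreakIterator bi = BoundedBreakIteratorScanner.getSentence(Locale.ROOT, 100);
        bi.setText(content);
        int start = bi.preceding(matchStart + 1);
        int end = bi.following(matchStart); // only valid right after the preceding() call
        return new int[] {start, end};
    }
}
```
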
+ */ + @Override + public int following(int offset) { + if (offset != lastPrecedingOffset || innerEnd == -1) { + throw new IllegalArgumentException("offset != lastPrecedingOffset: " + + "usage doesn't look like UnifiedHighlighter"); + } + return innerEnd; + } + + /** + * Returns a {@link BreakIterator#getSentenceInstance(Locale)} bounded to maxLen. + * Secondary boundaries are found using a {@link BreakIterator#getWordInstance(Locale)}. + */ + public static BreakIterator getSentence(Locale locale, int maxLen) { + final BreakIterator sBreak = BreakIterator.getSentenceInstance(locale); + final BreakIterator wBreak = BreakIterator.getWordInstance(locale); + return new BoundedBreakIteratorScanner(sBreak, wBreak, maxLen); + } + + + @Override + public int current() { + // Returns the last offset of the current split + return this.innerEnd; + } + + @Override + public int first() { + throw new IllegalStateException("first() should not be called in this context"); + } + + @Override + public int next() { + throw new IllegalStateException("next() should not be called in this context"); + } + + @Override + public int last() { + throw new IllegalStateException("last() should not be called in this context"); + } + + @Override + public int next(int n) { + throw new IllegalStateException("next(n) should not be called in this context"); + } + + @Override + public int previous() { + throw new IllegalStateException("previous() should not be called in this context"); + } +} diff --git a/core/src/main/java/org/apache/lucene/search/uhighlight/CustomFieldHighlighter.java b/core/src/main/java/org/apache/lucene/search/uhighlight/CustomFieldHighlighter.java new file mode 100644 index 0000000000000..915e7cc153128 --- /dev/null +++ b/core/src/main/java/org/apache/lucene/search/uhighlight/CustomFieldHighlighter.java @@ -0,0 +1,79 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.lucene.search.uhighlight; + +import java.text.BreakIterator; +import java.util.Locale; + +import static org.apache.lucene.search.uhighlight.CustomUnifiedHighlighter.MULTIVAL_SEP_CHAR; + +/** + * Custom {@link FieldHighlighter} that creates a single passage bounded to {@code noMatchSize} when + * no highlights were found. 
+ */ +class CustomFieldHighlighter extends FieldHighlighter { + private static final Passage[] EMPTY_PASSAGE = new Passage[0]; + + private final Locale breakIteratorLocale; + private final int noMatchSize; + private final String fieldValue; + + CustomFieldHighlighter(String field, FieldOffsetStrategy fieldOffsetStrategy, + Locale breakIteratorLocale, BreakIterator breakIterator, + PassageScorer passageScorer, int maxPassages, int maxNoHighlightPassages, + PassageFormatter passageFormatter, int noMatchSize, String fieldValue) { + super(field, fieldOffsetStrategy, breakIterator, passageScorer, maxPassages, + maxNoHighlightPassages, passageFormatter); + this.breakIteratorLocale = breakIteratorLocale; + this.noMatchSize = noMatchSize; + this.fieldValue = fieldValue; + } + + @Override + protected Passage[] getSummaryPassagesNoHighlight(int maxPassages) { + if (noMatchSize > 0) { + int pos = 0; + while (pos < fieldValue.length() && fieldValue.charAt(pos) == MULTIVAL_SEP_CHAR) { + pos ++; + } + if (pos < fieldValue.length()) { + int end = fieldValue.indexOf(MULTIVAL_SEP_CHAR, pos); + if (end == -1) { + end = fieldValue.length(); + } + if (noMatchSize+pos < end) { + BreakIterator bi = BreakIterator.getWordInstance(breakIteratorLocale); + bi.setText(fieldValue); + // Finds the next word boundary **after** noMatchSize. + end = bi.following(noMatchSize + pos); + if (end == BreakIterator.DONE) { + end = fieldValue.length(); + } + } + Passage passage = new Passage(); + passage.setScore(Float.NaN); + passage.setStartOffset(pos); + passage.setEndOffset(end); + return new Passage[]{passage}; + } + } + return EMPTY_PASSAGE; + } +} diff --git a/core/src/main/java/org/apache/lucene/search/uhighlight/CustomPassageFormatter.java b/core/src/main/java/org/apache/lucene/search/uhighlight/CustomPassageFormatter.java new file mode 100644 index 0000000000000..7a34a805db623 --- /dev/null +++ b/core/src/main/java/org/apache/lucene/search/uhighlight/CustomPassageFormatter.java @@ -0,0 +1,82 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
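
Editorial aside: to make the no-match truncation in `CustomFieldHighlighter#getSummaryPassagesNoHighlight` above concrete, here is a small self-contained sketch of the word-boundary cut it performs; the sample text and `noMatchSize` value are illustrative only.

```java
import java.text.BreakIterator;
import java.util.Locale;

class NoMatchSizeExample {
    // Cut at the first word boundary after noMatchSize instead of mid-word.
    public static void main(String[] args) {
        String fieldValue = "The quick brown fox jumps over the lazy dog";
        int noMatchSize = 10;
        BreakIterator bi = BreakIterator.getWordInstance(Locale.ROOT);
        bi.setText(fieldValue);
        int end = bi.following(noMatchSize);
        if (end == BreakIterator.DONE) {
            end = fieldValue.length();
        }
        System.out.println(fieldValue.substring(0, end)); // -> "The quick brown"
    }
}
```
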
+ */ + +package org.apache.lucene.search.uhighlight; + +import org.apache.lucene.search.highlight.Encoder; +import org.apache.lucene.search.highlight.Snippet; +import org.elasticsearch.search.fetch.subphase.highlight.HighlightUtils; + +/** + * Custom passage formatter that allows us to: + * 1) extract different snippets (instead of a single big string) together with their scores ({@link Snippet}) + * 2) use the {@link Encoder} implementations that are already used with the other highlighters + */ +public class CustomPassageFormatter extends PassageFormatter { + + private final String preTag; + private final String postTag; + private final Encoder encoder; + + public CustomPassageFormatter(String preTag, String postTag, Encoder encoder) { + this.preTag = preTag; + this.postTag = postTag; + this.encoder = encoder; + } + + @Override + public Snippet[] format(Passage[] passages, String content) { + Snippet[] snippets = new Snippet[passages.length]; + int pos; + for (int j = 0; j < passages.length; j++) { + Passage passage = passages[j]; + StringBuilder sb = new StringBuilder(); + pos = passage.getStartOffset(); + for (int i = 0; i < passage.getNumMatches(); i++) { + int start = passage.getMatchStarts()[i]; + int end = passage.getMatchEnds()[i]; + // its possible to have overlapping terms + if (start > pos) { + append(sb, content, pos, start); + } + if (end > pos) { + sb.append(preTag); + append(sb, content, Math.max(pos, start), end); + sb.append(postTag); + pos = end; + } + } + // its possible a "term" from the analyzer could span a sentence boundary. + append(sb, content, pos, Math.max(pos, passage.getEndOffset())); + //we remove the paragraph separator if present at the end of the snippet (we used it as separator between values) + if (sb.charAt(sb.length() - 1) == HighlightUtils.PARAGRAPH_SEPARATOR) { + sb.deleteCharAt(sb.length() - 1); + } else if (sb.charAt(sb.length() - 1) == HighlightUtils.NULL_SEPARATOR) { + sb.deleteCharAt(sb.length() - 1); + } + //and we trim the snippets too + snippets[j] = new Snippet(sb.toString().trim(), passage.getScore(), passage.getNumMatches() > 0); + } + return snippets; + } + + private void append(StringBuilder dest, String content, int start, int end) { + dest.append(encoder.encodeText(content.substring(start, end))); + } +} diff --git a/core/src/main/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighter.java b/core/src/main/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighter.java new file mode 100644 index 0000000000000..ef570033d0c8e --- /dev/null +++ b/core/src/main/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighter.java @@ -0,0 +1,238 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.apache.lucene.search.uhighlight; + +import org.apache.lucene.analysis.Analyzer; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.Term; +import org.apache.lucene.queries.CommonTermsQuery; +import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.PrefixQuery; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; +import org.apache.lucene.search.highlight.Snippet; +import org.apache.lucene.search.spans.SpanMultiTermQueryWrapper; +import org.apache.lucene.search.spans.SpanNearQuery; +import org.apache.lucene.search.spans.SpanOrQuery; +import org.apache.lucene.search.spans.SpanQuery; +import org.apache.lucene.search.spans.SpanTermQuery; +import org.apache.lucene.util.BytesRef; +import org.apache.lucene.util.automaton.CharacterRunAutomaton; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.lucene.all.AllTermQuery; +import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery; +import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery; +import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; +import org.elasticsearch.index.search.ESToParentBlockJoinQuery; + +import java.io.IOException; +import java.text.BreakIterator; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import java.util.Locale; +import java.util.Map; +import java.util.Set; + +/** + * Subclass of the {@link UnifiedHighlighter} that works for a single field in a single document. + * Uses a custom {@link PassageFormatter}. Accepts field content as a constructor + * argument, given that loadings field value can be done reading from _source field. + * Supports using different {@link BreakIterator} to break the text into fragments. Considers every distinct field + * value as a discrete passage for highlighting (unless the whole content needs to be highlighted). + * Supports both returning empty snippets and non highlighted snippets when no highlighting can be performed. + */ +public class CustomUnifiedHighlighter extends UnifiedHighlighter { + public static final char MULTIVAL_SEP_CHAR = (char) 0; + private static final Snippet[] EMPTY_SNIPPET = new Snippet[0]; + + private final OffsetSource offsetSource; + private final String fieldValue; + private final PassageFormatter passageFormatter; + private final BreakIterator breakIterator; + private final Locale breakIteratorLocale; + private final int noMatchSize; + + /** + * Creates a new instance of {@link CustomUnifiedHighlighter} + * + * @param analyzer the analyzer used for the field at index time, used for multi term queries internally. + * @param passageFormatter our own {@link CustomPassageFormatter} + * which generates snippets in forms of {@link Snippet} objects. + * @param offsetSource the {@link OffsetSource} to used for offsets retrieval. + * @param breakIteratorLocale the {@link Locale} to use for dividing text into passages. + * If null {@link Locale#ROOT} is used. + * @param breakIterator the {@link BreakIterator} to use for dividing text into passages. + * If null {@link BreakIterator#getSentenceInstance(Locale)} is used. + * @param fieldValue the original field values delimited by MULTIVAL_SEP_CHAR. + * @param noMatchSize The size of the text that should be returned when no highlighting can be performed. 
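
Editorial aside: a hypothetical construction of the highlighter described by the javadoc above. The field name, tags, `noMatchSize`, `maxPassages`, and the choice of `OffsetSource.ANALYSIS` are assumptions, not values mandated by the patch.

```java
import java.io.IOException;
import java.text.BreakIterator;
import java.util.Locale;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.highlight.DefaultEncoder;
import org.apache.lucene.search.highlight.Snippet;
import org.apache.lucene.search.uhighlight.CustomPassageFormatter;
import org.apache.lucene.search.uhighlight.CustomUnifiedHighlighter;
import org.apache.lucene.search.uhighlight.UnifiedHighlighter.OffsetSource;

class HighlightExample {
    // fieldValue is the document's field content, values joined by MULTIVAL_SEP_CHAR.
    static Snippet[] highlight(IndexSearcher searcher, Analyzer analyzer, Query query,
                               String fieldValue, int docId) throws IOException {
        CustomPassageFormatter formatter =
            new CustomPassageFormatter("<em>", "</em>", new DefaultEncoder());
        CustomUnifiedHighlighter highlighter = new CustomUnifiedHighlighter(
            searcher, analyzer, OffsetSource.ANALYSIS, formatter,
            Locale.ROOT, BreakIterator.getSentenceInstance(Locale.ROOT),
            fieldValue, /*noMatchSize=*/ 100);
        return highlighter.highlightField("body", query, docId, /*maxPassages=*/ 3);
    }
}
```
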
+ */ + public CustomUnifiedHighlighter(IndexSearcher searcher, + Analyzer analyzer, + OffsetSource offsetSource, + PassageFormatter passageFormatter, + @Nullable Locale breakIteratorLocale, + @Nullable BreakIterator breakIterator, + String fieldValue, + int noMatchSize) { + super(searcher, analyzer); + this.offsetSource = offsetSource; + this.breakIterator = breakIterator; + this.breakIteratorLocale = breakIteratorLocale == null ? Locale.ROOT : breakIteratorLocale; + this.passageFormatter = passageFormatter; + this.fieldValue = fieldValue; + this.noMatchSize = noMatchSize; + } + + /** + * Highlights terms extracted from the provided query within the content of the provided field name + */ + public Snippet[] highlightField(String field, Query query, int docId, int maxPassages) throws IOException { + Map fieldsAsObjects = super.highlightFieldsAsObjects(new String[]{field}, query, + new int[]{docId}, new int[]{maxPassages}); + Object[] snippetObjects = fieldsAsObjects.get(field); + if (snippetObjects != null) { + //one single document at a time + assert snippetObjects.length == 1; + Object snippetObject = snippetObjects[0]; + if (snippetObject != null && snippetObject instanceof Snippet[]) { + return (Snippet[]) snippetObject; + } + } + return EMPTY_SNIPPET; + } + + @Override + protected List loadFieldValues(String[] fields, DocIdSetIterator docIter, + int cacheCharsThreshold) throws IOException { + // we only highlight one field, one document at a time + return Collections.singletonList(new String[]{fieldValue}); + } + + @Override + protected BreakIterator getBreakIterator(String field) { + return breakIterator; + } + + @Override + protected PassageFormatter getFormatter(String field) { + return passageFormatter; + } + + @Override + protected FieldHighlighter getFieldHighlighter(String field, Query query, Set allTerms, int maxPassages) { + BytesRef[] terms = filterExtractedTerms(getFieldMatcher(field), allTerms); + Set highlightFlags = getFlags(field); + PhraseHelper phraseHelper = getPhraseHelper(field, query, highlightFlags); + CharacterRunAutomaton[] automata = getAutomata(field, query, highlightFlags); + OffsetSource offsetSource = getOptimizedOffsetSource(field, terms, phraseHelper, automata); + BreakIterator breakIterator = new SplittingBreakIterator(getBreakIterator(field), + UnifiedHighlighter.MULTIVAL_SEP_CHAR); + FieldOffsetStrategy strategy = + getOffsetStrategy(offsetSource, field, terms, phraseHelper, automata, highlightFlags); + return new CustomFieldHighlighter(field, strategy, breakIteratorLocale, breakIterator, + getScorer(field), maxPassages, (noMatchSize > 0 ? 1 : 0), getFormatter(field), noMatchSize, fieldValue); + } + + @Override + protected Collection preMultiTermQueryRewrite(Query query) { + return rewriteCustomQuery(query); + } + + @Override + protected Collection preSpanQueryRewrite(Query query) { + return rewriteCustomQuery(query); + } + + /** + * Translate custom queries in queries that are supported by the unified highlighter. 
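
Editorial aside: roughly the shape of the span query the rewrite below produces for a multi-phrase-prefix query equivalent to `quick bro*`; the `body` field and the terms are made up for illustration.

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.spans.SpanMultiTermQueryWrapper;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

class RewriteIllustration {
    // Exact positions become SpanTermQuery, the trailing prefix is wrapped so the
    // span layer can expand it, and inOrder is true because the original slop is 0.
    static SpanQuery quickBroPrefix() {
        return new SpanNearQuery(new SpanQuery[] {
            new SpanTermQuery(new Term("body", "quick")),
            new SpanMultiTermQueryWrapper<>(new PrefixQuery(new Term("body", "bro")))
        }, 0, true);
    }
}
```
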
+ */ + private Collection rewriteCustomQuery(Query query) { + if (query instanceof MultiPhrasePrefixQuery) { + MultiPhrasePrefixQuery mpq = (MultiPhrasePrefixQuery) query; + Term[][] terms = mpq.getTerms(); + int[] positions = mpq.getPositions(); + SpanQuery[] positionSpanQueries = new SpanQuery[positions.length]; + int sizeMinus1 = terms.length - 1; + for (int i = 0; i < positions.length; i++) { + SpanQuery[] innerQueries = new SpanQuery[terms[i].length]; + for (int j = 0; j < terms[i].length; j++) { + if (i == sizeMinus1) { + innerQueries[j] = new SpanMultiTermQueryWrapper(new PrefixQuery(terms[i][j])); + } else { + innerQueries[j] = new SpanTermQuery(terms[i][j]); + } + } + if (innerQueries.length > 1) { + positionSpanQueries[i] = new SpanOrQuery(innerQueries); + } else { + positionSpanQueries[i] = innerQueries[0]; + } + } + + if (positionSpanQueries.length == 1) { + return Collections.singletonList(positionSpanQueries[0]); + } + // sum position increments beyond 1 + int positionGaps = 0; + if (positions.length >= 2) { + // positions are in increasing order. max(0,...) is just a safeguard. + positionGaps = Math.max(0, positions[positions.length - 1] - positions[0] - positions.length + 1); + } + //if original slop is 0 then require inOrder + boolean inorder = (mpq.getSlop() == 0); + return Collections.singletonList(new SpanNearQuery(positionSpanQueries, + mpq.getSlop() + positionGaps, inorder)); + } else if (query instanceof CommonTermsQuery) { + CommonTermsQuery ctq = (CommonTermsQuery) query; + List tqs = new ArrayList<> (); + for (Term term : ctq.getTerms()) { + tqs.add(new TermQuery(term)); + } + return tqs; + } else if (query instanceof AllTermQuery) { + AllTermQuery atq = (AllTermQuery) query; + return Collections.singletonList(new TermQuery(atq.getTerm())); + } else if (query instanceof FunctionScoreQuery) { + return Collections.singletonList(((FunctionScoreQuery) query).getSubQuery()); + } else if (query instanceof FiltersFunctionScoreQuery) { + return Collections.singletonList(((FiltersFunctionScoreQuery) query).getSubQuery()); + } else if (query instanceof ESToParentBlockJoinQuery) { + return Collections.singletonList(((ESToParentBlockJoinQuery) query).getChildQuery()); + } else { + return null; + } + } + + /** + * Forces the offset source for this highlighter + */ + @Override + protected OffsetSource getOffsetSource(String field) { + if (offsetSource == null) { + return super.getOffsetSource(field); + } + return offsetSource; + } + +} diff --git a/core/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java b/core/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java index 88b045d2a5e90..1339873d3180d 100644 --- a/core/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java +++ b/core/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java @@ -28,12 +28,14 @@ import org.apache.lucene.search.MultiPhraseQuery; import org.apache.lucene.search.PhraseQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.SynonymQuery; import org.apache.lucene.search.TermQuery; import org.apache.lucene.search.join.ToParentBlockJoinQuery; import org.apache.lucene.search.spans.SpanTermQuery; import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery; import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery; import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; +import org.elasticsearch.index.search.ESToParentBlockJoinQuery; import 
java.io.IOException; import java.util.Collection; @@ -79,8 +81,8 @@ void flatten(Query sourceQuery, IndexReader reader, Collection flatQuerie } else if (sourceQuery instanceof BlendedTermQuery) { final BlendedTermQuery blendedTermQuery = (BlendedTermQuery) sourceQuery; flatten(blendedTermQuery.rewrite(reader), reader, flatQueries, boost); - } else if (sourceQuery instanceof ToParentBlockJoinQuery) { - ToParentBlockJoinQuery blockJoinQuery = (ToParentBlockJoinQuery) sourceQuery; + } else if (sourceQuery instanceof ESToParentBlockJoinQuery) { + ESToParentBlockJoinQuery blockJoinQuery = (ESToParentBlockJoinQuery) sourceQuery; flatten(blockJoinQuery.getChildQuery(), reader, flatQueries, boost); } else if (sourceQuery instanceof BoostingQuery) { BoostingQuery boostingQuery = (BoostingQuery) sourceQuery; @@ -88,6 +90,18 @@ void flatten(Query sourceQuery, IndexReader reader, Collection flatQuerie flatten(boostingQuery.getMatch(), reader, flatQueries, boost); //flatten negative query with negative boost flatten(boostingQuery.getContext(), reader, flatQueries, boostingQuery.getBoost()); + } else if (sourceQuery instanceof SynonymQuery) { + // SynonymQuery should be handled by the parent class directly. + // This statement should be removed when https://issues.apache.org/jira/browse/LUCENE-7484 is merged. + SynonymQuery synQuery = (SynonymQuery) sourceQuery; + for (Term term : synQuery.getTerms()) { + flatten(new TermQuery(term), reader, flatQueries, boost); + } + } else if (sourceQuery instanceof ESToParentBlockJoinQuery) { + Query childQuery = ((ESToParentBlockJoinQuery) sourceQuery).getChildQuery(); + if (childQuery != null) { + flatten(childQuery, reader, flatQueries, boost); + } } else { super.flatten(sourceQuery, reader, flatQueries, boost); } diff --git a/core/src/main/java/org/apache/lucene/spatial/geopoint/search/XGeoPointDistanceRangeQuery.java b/core/src/main/java/org/apache/lucene/spatial/geopoint/search/XGeoPointDistanceRangeQuery.java index 3cf290e035ea8..2606b43af8c45 100644 --- a/core/src/main/java/org/apache/lucene/spatial/geopoint/search/XGeoPointDistanceRangeQuery.java +++ b/core/src/main/java/org/apache/lucene/spatial/geopoint/search/XGeoPointDistanceRangeQuery.java @@ -1,18 +1,20 @@ /* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at * - * http://www.apache.org/licenses/LICENSE-2.0 + * http://www.apache.org/licenses/LICENSE-2.0 * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
+ * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. */ package org.apache.lucene.spatial.geopoint.search; diff --git a/core/src/main/java/org/apache/lucene/store/StoreRateLimiting.java b/core/src/main/java/org/apache/lucene/store/StoreRateLimiting.java index e1ae7b938b3de..2dff741392b61 100644 --- a/core/src/main/java/org/apache/lucene/store/StoreRateLimiting.java +++ b/core/src/main/java/org/apache/lucene/store/StoreRateLimiting.java @@ -20,12 +20,16 @@ import org.apache.lucene.store.RateLimiter.SimpleRateLimiter; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.unit.ByteSizeValue; /** */ public class StoreRateLimiting { + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(StoreRateLimiting.class)); + public interface Provider { StoreRateLimiting rateLimiting(); @@ -68,14 +72,14 @@ public RateLimiter getRateLimiter() { } public void setMaxRate(ByteSizeValue rate) { - if (rate.bytes() <= 0) { + if (rate.getBytes() <= 0) { actualRateLimiter = null; } else if (actualRateLimiter == null) { actualRateLimiter = rateLimiter; - actualRateLimiter.setMBPerSec(rate.mbFrac()); + actualRateLimiter.setMBPerSec(rate.getMbFrac()); } else { assert rateLimiter == actualRateLimiter; - rateLimiter.setMBPerSec(rate.mbFrac()); + rateLimiter.setMBPerSec(rate.getMbFrac()); } } @@ -85,9 +89,12 @@ public Type getType() { public void setType(Type type) { this.type = type; + if (type != Type.NONE) { + DEPRECATION_LOGGER.deprecated("Store rate limiting is deprecated and will be removed in 6.0"); + } } public void setType(String type) { - this.type = Type.fromString(type); + setType(Type.fromString(type)); } } diff --git a/core/src/main/java/org/elasticsearch/Assertions.java b/core/src/main/java/org/elasticsearch/Assertions.java new file mode 100644 index 0000000000000..8783101db0a88 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/Assertions.java @@ -0,0 +1,47 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch; + +/** + * Provides a static final field that can be used to check if assertions are enabled. 
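
Editorial aside: a hypothetical caller of the new `Assertions.ENABLED` flag might look like this (sketch only, not part of the patch).

```java
import org.elasticsearch.Assertions;

class InvariantExample {
    // Run an expensive consistency check only when the JVM was started with -ea,
    // without paying the cost in production where assertions are disabled.
    static void maybeVerify(Runnable expensiveCheck) {
        if (Assertions.ENABLED) {
            expensiveCheck.run();
        }
    }
}
```
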
Since this field might be used elsewhere to check if + * assertions are enabled, if you are running with assertions enabled for specific packages or classes, you should enable assertions on this + * class too (e.g., {@code -ea org.elasticsearch.Assertions -ea org.elasticsearch.cluster.service.MasterService}). + */ +public final class Assertions { + + private Assertions() { + + } + + public static final boolean ENABLED; + + static { + boolean enabled = false; + /* + * If assertions are enabled, the following line will be evaluated and enabled will have the value true, otherwise when assertions + * are disabled enabled will have the value false. + */ + // noinspection ConstantConditions,AssertWithSideEffects + assert enabled = true; + // noinspection ConstantConditions + ENABLED = enabled; + } + +} diff --git a/core/src/main/java/org/elasticsearch/Build.java b/core/src/main/java/org/elasticsearch/Build.java index 25da5f281665f..7fbf09b649bfc 100644 --- a/core/src/main/java/org/elasticsearch/Build.java +++ b/core/src/main/java/org/elasticsearch/Build.java @@ -19,6 +19,7 @@ package org.elasticsearch; +import org.elasticsearch.common.Booleans; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -42,8 +43,10 @@ public class Build { final String date; final boolean isSnapshot; + final String esPrefix = "elasticsearch-" + Version.CURRENT; final URL url = getElasticsearchCodebase(); - if (url.toString().endsWith(".jar")) { + final String urlStr = url.toString(); + if (urlStr.startsWith("file:/") && (urlStr.endsWith(esPrefix + ".jar") || urlStr.endsWith(esPrefix + "-SNAPSHOT.jar"))) { try (JarInputStream jar = new JarInputStream(url.openStream())) { Manifest manifest = jar.getManifest(); shortHash = manifest.getMainAttributes().getValue("Change"); @@ -53,10 +56,21 @@ public class Build { throw new RuntimeException(e); } } else { - // not running from a jar (unit tests, IDE) + // not running from the official elasticsearch jar file (unit tests, IDE, uber client jar, shadiness) shortHash = "Unknown"; date = "Unknown"; - isSnapshot = true; + final String buildSnapshot = System.getProperty("build.snapshot"); + if (buildSnapshot != null) { + try { + Class.forName("com.carrotsearch.randomizedtesting.RandomizedContext"); + } catch (final ClassNotFoundException e) { + // we are not in tests but build.snapshot is set, bail hard + throw new IllegalStateException("build.snapshot set to [" + buildSnapshot + "] but not running tests"); + } + isSnapshot = Booleans.parseBooleanExact(buildSnapshot); + } else { + isSnapshot = true; + } } if (shortHash == null) { throw new IllegalStateException("Error finding the build shortHash. 
" + @@ -79,10 +93,10 @@ static URL getElasticsearchCodebase() { return Build.class.getProtectionDomain().getCodeSource().getLocation(); } - private String shortHash; - private String date; + private final String shortHash; + private final String date; - Build(String shortHash, String date, boolean isSnapshot) { + public Build(String shortHash, String date, boolean isSnapshot) { this.shortHash = shortHash; this.date = date; this.isSnapshot = isSnapshot; diff --git a/core/src/main/java/org/elasticsearch/ElasticsearchException.java b/core/src/main/java/org/elasticsearch/ElasticsearchException.java index 750f133ea1705..4e2821019b8a8 100644 --- a/core/src/main/java/org/elasticsearch/ElasticsearchException.java +++ b/core/src/main/java/org/elasticsearch/ElasticsearchException.java @@ -19,49 +19,79 @@ package org.elasticsearch; -import org.apache.logging.log4j.message.ParameterizedMessage; import org.elasticsearch.action.support.replication.ReplicationOperation; import org.elasticsearch.cluster.action.shard.ShardStateAction; +import org.elasticsearch.common.CheckedFunction; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.logging.LoggerMessageFormat; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.Index; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.transport.TcpTransport; import java.io.IOException; +import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; import java.util.HashMap; +import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.Set; import java.util.stream.Collectors; +import static java.util.Collections.emptyMap; +import static java.util.Collections.singletonMap; import static java.util.Collections.unmodifiableMap; import static org.elasticsearch.cluster.metadata.IndexMetaData.INDEX_UUID_NA_VALUE; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureFieldName; /** * A base class for all elasticsearch exceptions. */ public class ElasticsearchException extends RuntimeException implements ToXContent, Writeable { - public static final String REST_EXCEPTION_SKIP_CAUSE = "rest.exception.cause.skip"; + private static final Version UNKNOWN_VERSION_ADDED = Version.fromId(0); + + /** + * Passed in the {@link Params} of {@link #generateThrowableXContent(XContentBuilder, Params, Throwable)} + * to control if the {@code caused_by} element should render. Unlike most parameters to {@code toXContent} methods this parameter is + * internal only and not available as a URL parameter. + */ + private static final String REST_EXCEPTION_SKIP_CAUSE = "rest.exception.cause.skip"; + /** + * Passed in the {@link Params} of {@link #generateThrowableXContent(XContentBuilder, Params, Throwable)} + * to control if the {@code stack_trace} element should render. Unlike most parameters to {@code toXContent} methods this parameter is + * internal only and not available as a URL parameter. Use the {@code error_trace} parameter instead. 
+ */ public static final String REST_EXCEPTION_SKIP_STACK_TRACE = "rest.exception.stacktrace.skip"; public static final boolean REST_EXCEPTION_SKIP_STACK_TRACE_DEFAULT = true; - public static final boolean REST_EXCEPTION_SKIP_CAUSE_DEFAULT = false; - private static final String INDEX_HEADER_KEY = "es.index"; - private static final String INDEX_HEADER_KEY_UUID = "es.index_uuid"; - private static final String SHARD_HEADER_KEY = "es.shard"; - private static final String RESOURCE_HEADER_TYPE_KEY = "es.resource.type"; - private static final String RESOURCE_HEADER_ID_KEY = "es.resource.id"; - - private static final Map> ID_TO_SUPPLIER; + private static final boolean REST_EXCEPTION_SKIP_CAUSE_DEFAULT = false; + private static final String INDEX_METADATA_KEY = "es.index"; + private static final String INDEX_METADATA_KEY_UUID = "es.index_uuid"; + private static final String SHARD_METADATA_KEY = "es.shard"; + private static final String RESOURCE_METADATA_TYPE_KEY = "es.resource.type"; + private static final String RESOURCE_METADATA_ID_KEY = "es.resource.id"; + + private static final String TYPE = "type"; + private static final String REASON = "reason"; + private static final String CAUSED_BY = "caused_by"; + private static final String STACK_TRACE = "stack_trace"; + private static final String HEADER = "header"; + private static final String ERROR = "error"; + private static final String ROOT_CAUSE = "root_cause"; + + private static final Map> ID_TO_SUPPLIER; private static final Map, ElasticsearchExceptionHandle> CLASS_TO_ELASTICSEARCH_EXCEPTION_HANDLE; + private final Map> metadata = new HashMap<>(); private final Map> headers = new HashMap<>(); /** @@ -103,14 +133,56 @@ public ElasticsearchException(StreamInput in) throws IOException { super(in.readOptionalString(), in.readException()); readStackTrace(this, in); headers.putAll(in.readMapOfLists(StreamInput::readString, StreamInput::readString)); + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + metadata.putAll(in.readMapOfLists(StreamInput::readString, StreamInput::readString)); + } else { + for (Iterator>> iterator = headers.entrySet().iterator(); iterator.hasNext(); ) { + Map.Entry> header = iterator.next(); + if (header.getKey().startsWith("es.")) { + metadata.put(header.getKey(), header.getValue()); + iterator.remove(); + } + } + } } /** - * Adds a new header with the given key. - * This method will replace existing header if a header with the same key already exists + * Adds a new piece of metadata with the given key. + * If the provided key is already present, the corresponding metadata will be replaced */ - public void addHeader(String key, String... value) { - this.headers.put(key, Arrays.asList(value)); + public void addMetadata(String key, String... values) { + addMetadata(key, Arrays.asList(values)); + } + + /** + * Adds a new piece of metadata with the given key. + * If the provided key is already present, the corresponding metadata will be replaced + */ + public void addMetadata(String key, List values) { + //we need to enforce this otherwise bw comp doesn't work properly, as "es." 
was the previous criteria to split headers in two sets + if (key.startsWith("es.") == false) { + throw new IllegalArgumentException("exception metadata must start with [es.], found [" + key + "] instead"); + } + this.metadata.put(key, values); + } + + /** + * Returns a set of all metadata keys on this exception + */ + public Set getMetadataKeys() { + return metadata.keySet(); + } + + /** + * Returns the list of metadata values for the given key or {@code null} if no metadata for the + * given key exists. + */ + public List getMetadata(String key) { + return metadata.get(key); + } + + protected Map> getMetadata() { + return metadata; } /** @@ -118,9 +190,20 @@ public void addHeader(String key, String... value) { * This method will replace existing header if a header with the same key already exists */ public void addHeader(String key, List value) { + //we need to enforce this otherwise bw comp doesn't work properly, as "es." was the previous criteria to split headers in two sets + if (key.startsWith("es.")) { + throw new IllegalArgumentException("exception headers must not start with [es.], found [" + key + "] instead"); + } this.headers.put(key, value); } + /** + * Adds a new header with the given key. + * This method will replace existing header if a header with the same key already exists + */ + public void addHeader(String key, String... value) { + addHeader(key, Arrays.asList(value)); + } /** * Returns a set of all header keys on this exception @@ -130,13 +213,17 @@ public Set getHeaderKeys() { } /** - * Returns the list of header values for the given key or {@code null} if not header for the + * Returns the list of header values for the given key or {@code null} if no header for the * given key exists. */ public List getHeader(String key) { return headers.get(key); } + protected Map> getHeaders() { + return headers; + } + /** * Returns the rest status code associated with this exception. */ @@ -197,11 +284,19 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalString(this.getMessage()); out.writeException(this.getCause()); writeStackTraces(this, out); - out.writeMapOfLists(headers, StreamOutput::writeString, StreamOutput::writeString); + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + out.writeMapOfLists(headers, StreamOutput::writeString, StreamOutput::writeString); + out.writeMapOfLists(metadata, StreamOutput::writeString, StreamOutput::writeString); + } else { + HashMap> finalHeaders = new HashMap<>(headers.size() + metadata.size()); + finalHeaders.putAll(headers); + finalHeaders.putAll(metadata); + out.writeMapOfLists(finalHeaders, StreamOutput::writeString, StreamOutput::writeString); + } } public static ElasticsearchException readException(StreamInput input, int id) throws IOException { - FunctionThatThrowsIOException elasticsearchException = ID_TO_SUPPLIER.get(id); + CheckedFunction elasticsearchException = ID_TO_SUPPLIER.get(id); if (elasticsearchException == null) { throw new IllegalStateException("unknown exception for id: " + id); } @@ -211,8 +306,12 @@ public static ElasticsearchException readException(StreamInput input, int id) th /** * Returns true iff the given class is a registered for an exception to be read. 
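
Editorial aside: a short sketch of the metadata/header split enforced above; the key names and values are illustrative only.

```java
import org.elasticsearch.ElasticsearchException;

class MetadataVsHeadersExample {
    static ElasticsearchException build() {
        ElasticsearchException e = new ElasticsearchException("shard failure");
        // metadata keys must carry the "es." prefix; the prefix is stripped when rendered
        e.addMetadata("es.index", "twitter");
        // header keys must NOT start with "es." and are rendered under "header"
        e.addHeader("WWW-Authenticate", "Basic realm=\"security\"");
        return e;
    }
}
```
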
*/ - public static boolean isRegistered(Class exception) { - return CLASS_TO_ELASTICSEARCH_EXCEPTION_HANDLE.containsKey(exception); + public static boolean isRegistered(Class exception, Version version) { + ElasticsearchExceptionHandle elasticsearchExceptionHandle = CLASS_TO_ELASTICSEARCH_EXCEPTION_HANDLE.get(exception); + if (elasticsearchExceptionHandle != null) { + return version.onOrAfter(elasticsearchExceptionHandle.versionAdded); + } + return false; } static Set> getRegisteredKeys() { // for testing @@ -230,64 +329,51 @@ public static int getId(Class exception) { public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { Throwable ex = ExceptionsHelper.unwrapCause(this); if (ex != this) { - toXContent(builder, params, this); + generateThrowableXContent(builder, params, this); } else { - builder.field("type", getExceptionName()); - builder.field("reason", getMessage()); - for (String key : headers.keySet()) { - if (key.startsWith("es.")) { - List values = headers.get(key); - xContentHeader(builder, key.substring("es.".length()), values); - } - } - innerToXContent(builder, params); - renderHeader(builder, params); - if (params.paramAsBoolean(REST_EXCEPTION_SKIP_STACK_TRACE, REST_EXCEPTION_SKIP_STACK_TRACE_DEFAULT) == false) { - builder.field("stack_trace", ExceptionsHelper.stackTrace(this)); - } + innerToXContent(builder, params, this, getExceptionName(), getMessage(), headers, metadata, getCause()); } return builder; } - /** - * Renders additional per exception information into the xcontent - */ - protected void innerToXContent(XContentBuilder builder, Params params) throws IOException { - causeToXContent(builder, params); - } + protected static void innerToXContent(XContentBuilder builder, Params params, + Throwable throwable, String type, String message, Map> headers, + Map> metadata, Throwable cause) throws IOException { + builder.field(TYPE, type); + builder.field(REASON, message); - /** - * Renders a cause exception as xcontent - */ - protected void causeToXContent(XContentBuilder builder, Params params) throws IOException { - final Throwable cause = getCause(); - if (cause != null && params.paramAsBoolean(REST_EXCEPTION_SKIP_CAUSE, REST_EXCEPTION_SKIP_CAUSE_DEFAULT) == false) { - builder.field("caused_by"); - builder.startObject(); - toXContent(builder, params, cause); - builder.endObject(); + for (Map.Entry> entry : metadata.entrySet()) { + headerToXContent(builder, entry.getKey().substring("es.".length()), entry.getValue()); } - } - protected final void renderHeader(XContentBuilder builder, Params params) throws IOException { - boolean hasHeader = false; - for (String key : headers.keySet()) { - if (key.startsWith("es.")) { - continue; - } - if (hasHeader == false) { - builder.startObject("header"); - hasHeader = true; + if (throwable instanceof ElasticsearchException) { + ElasticsearchException exception = (ElasticsearchException) throwable; + exception.metadataToXContent(builder, params); + } + + if (params.paramAsBoolean(REST_EXCEPTION_SKIP_CAUSE, REST_EXCEPTION_SKIP_CAUSE_DEFAULT) == false) { + if (cause != null) { + builder.field(CAUSED_BY); + builder.startObject(); + generateThrowableXContent(builder, params, cause); + builder.endObject(); } - List values = headers.get(key); - xContentHeader(builder, key, values); } - if (hasHeader) { + + if (headers.isEmpty() == false) { + builder.startObject(HEADER); + for (Map.Entry> entry : headers.entrySet()) { + headerToXContent(builder, entry.getKey(), entry.getValue()); + } builder.endObject(); 
} + + if (params.paramAsBoolean(REST_EXCEPTION_SKIP_STACK_TRACE, REST_EXCEPTION_SKIP_STACK_TRACE_DEFAULT) == false) { + builder.field(STACK_TRACE, ExceptionsHelper.stackTrace(throwable)); + } } - private void xContentHeader(XContentBuilder builder, String key, List values) throws IOException { + private static void headerToXContent(XContentBuilder builder, String key, List values) throws IOException { if (values != null && values.isEmpty() == false) { if (values.size() == 1) { builder.field(key, values.get(0)); @@ -302,25 +388,210 @@ private void xContentHeader(XContentBuilder builder, String key, List va } /** - * Statis toXContent helper method that also renders non {@link org.elasticsearch.ElasticsearchException} instances as XContent. + * Renders additional per exception information into the XContent */ - public static void toXContent(XContentBuilder builder, Params params, Throwable ex) throws IOException { - ex = ExceptionsHelper.unwrapCause(ex); - if (ex instanceof ElasticsearchException) { - ((ElasticsearchException) ex).toXContent(builder, params); + protected void metadataToXContent(XContentBuilder builder, Params params) throws IOException { + } + + /** + * Generate a {@link ElasticsearchException} from a {@link XContentParser}. This does not + * return the original exception type (ie NodeClosedException for example) but just wraps + * the type, the reason and the cause of the exception. It also recursively parses the + * tree structure of the cause, returning it as a tree structure of {@link ElasticsearchException} + * instances. + */ + public static ElasticsearchException fromXContent(XContentParser parser) throws IOException { + XContentParser.Token token = parser.nextToken(); + ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, parser::getTokenLocation); + return innerFromXContent(parser, false); + } + + private static ElasticsearchException innerFromXContent(XContentParser parser, boolean parseRootCauses) throws IOException { + XContentParser.Token token = parser.currentToken(); + ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, parser::getTokenLocation); + + String type = null, reason = null, stack = null; + ElasticsearchException cause = null; + Map> metadata = new HashMap<>(); + Map> headers = new HashMap<>(); + List rootCauses = new ArrayList<>(); + + for (; token == XContentParser.Token.FIELD_NAME; token = parser.nextToken()) { + String currentFieldName = parser.currentName(); + token = parser.nextToken(); + + if (token.isValue()) { + if (TYPE.equals(currentFieldName)) { + type = parser.text(); + } else if (REASON.equals(currentFieldName)) { + reason = parser.text(); + } else if (STACK_TRACE.equals(currentFieldName)) { + stack = parser.text(); + } else if (token == XContentParser.Token.VALUE_STRING) { + metadata.put(currentFieldName, Collections.singletonList(parser.text())); + } + } else if (token == XContentParser.Token.START_OBJECT) { + if (CAUSED_BY.equals(currentFieldName)) { + cause = fromXContent(parser); + } else if (HEADER.equals(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else { + List values = headers.getOrDefault(currentFieldName, new ArrayList<>()); + if (token == XContentParser.Token.VALUE_STRING) { + values.add(parser.text()); + } else if (token == XContentParser.Token.START_ARRAY) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + if (token == 
XContentParser.Token.VALUE_STRING) { + values.add(parser.text()); + } else { + parser.skipChildren(); + } + } + } else if (token == XContentParser.Token.START_OBJECT) { + parser.skipChildren(); + } + headers.put(currentFieldName, values); + } + } + } else { + // Any additional metadata object added by the metadataToXContent method is ignored + // and skipped, so that the parser does not fail on unknown fields. The parser only + // support metadata key-pairs and metadata arrays of values. + parser.skipChildren(); + } + } else if (token == XContentParser.Token.START_ARRAY) { + if (parseRootCauses && ROOT_CAUSE.equals(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + rootCauses.add(fromXContent(parser)); + } + } else { + // Parse the array and add each item to the corresponding list of metadata. + // Arrays of objects are not supported yet and just ignored and skipped. + List values = new ArrayList<>(); + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + if (token == XContentParser.Token.VALUE_STRING) { + values.add(parser.text()); + } else { + parser.skipChildren(); + } + } + if (values.size() > 0) { + if (metadata.containsKey(currentFieldName)) { + values.addAll(metadata.get(currentFieldName)); + } + metadata.put(currentFieldName, values); + } + } + } + } + + ElasticsearchException e = new ElasticsearchException(buildMessage(type, reason, stack), cause); + for (Map.Entry> entry : metadata.entrySet()) { + //subclasses can print out additional metadata through the metadataToXContent method. Simple key-value pairs will be + //parsed back and become part of this metadata set, while objects and arrays are not supported when parsing back. + //Those key-value pairs become part of the metadata set and inherit the "es." prefix as that is currently required + //by addMetadata. The prefix will get stripped out when printing metadata out so it will be effectively invisible. + //TODO move subclasses that print out simple metadata to using addMetadata directly and support also numbers and booleans. + //TODO rename metadataToXContent and have only SearchPhaseExecutionException use it, which prints out complex objects + e.addMetadata("es." + entry.getKey(), entry.getValue()); + } + for (Map.Entry> header : headers.entrySet()) { + e.addHeader(header.getKey(), header.getValue()); + } + + // Adds root causes as suppressed exception. This way they are not lost + // after parsing and can be retrieved using getSuppressed() method. + for (ElasticsearchException rootCause : rootCauses) { + e.addSuppressed(rootCause); + } + return e; + } + + /** + * Static toXContent helper method that renders {@link org.elasticsearch.ElasticsearchException} or {@link Throwable} instances + * as XContent, delegating the rendering to {@link #toXContent(XContentBuilder, Params)} + * or {@link #innerToXContent(XContentBuilder, Params, Throwable, String, String, Map, Map, Throwable)}. + * + * This method is usually used when the {@link Throwable} is rendered as a part of another XContent object, and its result can + * be parsed back using the {@link #fromXContent(XContentParser)} method. 
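
Editorial aside: a minimal sketch of rendering a throwable with the helper described above, assuming the caller owns the enclosing JSON object; the resulting content is what `fromXContent` can later parse back.

```java
import java.io.IOException;

import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

class ThrowableXContentExample {
    // Render any throwable as a JSON object with type/reason/caused_by fields.
    static XContentBuilder render(Throwable t) throws IOException {
        XContentBuilder builder = XContentFactory.jsonBuilder();
        builder.startObject();
        ElasticsearchException.generateThrowableXContent(builder, ToXContent.EMPTY_PARAMS, t);
        builder.endObject();
        return builder;
    }
}
```
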
+ */ + public static void generateThrowableXContent(XContentBuilder builder, Params params, Throwable t) throws IOException { + t = ExceptionsHelper.unwrapCause(t); + + if (t instanceof ElasticsearchException) { + ((ElasticsearchException) t).toXContent(builder, params); } else { - builder.field("type", getExceptionName(ex)); - builder.field("reason", ex.getMessage()); - if (ex.getCause() != null) { - builder.field("caused_by"); + innerToXContent(builder, params, t, getExceptionName(t), t.getMessage(), emptyMap(), emptyMap(), t.getCause()); + } + } + + /** + * Render any exception as a xcontent, encapsulated within a field or object named "error". The level of details that are rendered + * depends on the value of the "detailed" parameter: when it's false only a simple message based on the type and message of the + * exception is rendered. When it's true all detail are provided including guesses root causes, cause and potentially stack + * trace. + * + * This method is usually used when the {@link Exception} is rendered as a full XContent object, and its output can be parsed + * by the {@link #failureFromXContent(XContentParser)} method. + */ + public static void generateFailureXContent(XContentBuilder builder, Params params, @Nullable Exception e, boolean detailed) + throws IOException { + // No exception to render as an error + if (e == null) { + builder.field(ERROR, "unknown"); + return; + } + + // Render the exception with a simple message + if (detailed == false) { + String message = "No ElasticsearchException found"; + Throwable t = e; + for (int counter = 0; counter < 10 && t != null; counter++) { + if (t instanceof ElasticsearchException) { + message = t.getClass().getSimpleName() + "[" + t.getMessage() + "]"; + break; + } + t = t.getCause(); + } + builder.field(ERROR, message); + return; + } + + // Render the exception with all details + final ElasticsearchException[] rootCauses = ElasticsearchException.guessRootCauses(e); + builder.startObject(ERROR); + { + builder.startArray(ROOT_CAUSE); + for (ElasticsearchException rootCause : rootCauses) { builder.startObject(); - toXContent(builder, params, ex.getCause()); + rootCause.toXContent(builder, new DelegatingMapParams(singletonMap(REST_EXCEPTION_SKIP_CAUSE, "true"), params)); builder.endObject(); } - if (params.paramAsBoolean(REST_EXCEPTION_SKIP_STACK_TRACE, REST_EXCEPTION_SKIP_STACK_TRACE_DEFAULT) == false) { - builder.field("stack_trace", ExceptionsHelper.stackTrace(ex)); - } + builder.endArray(); } + generateThrowableXContent(builder, params, e); + builder.endObject(); + } + + /** + * Parses the output of {@link #generateFailureXContent(XContentBuilder, Params, Exception, boolean)} + */ + public static ElasticsearchException failureFromXContent(XContentParser parser) throws IOException { + XContentParser.Token token = parser.currentToken(); + ensureFieldName(parser, token, ERROR); + + token = parser.nextToken(); + if (token.isValue()) { + return new ElasticsearchException(buildMessage("exception", parser.text(), null)); + } + + ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation); + token = parser.nextToken(); + + // Root causes are parsed in the innerFromXContent() and are added as suppressed exceptions. 
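
Editorial aside: a similar sketch for the failure rendering helper above, whose output is what `failureFromXContent` expects; the `detailed` flag toggles between the terse form and the full form with `root_cause` and `caused_by`.

```java
import java.io.IOException;

import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

class FailureRenderingExample {
    // Render an exception the way a REST error body might look.
    static XContentBuilder render(Exception e, boolean detailed) throws IOException {
        XContentBuilder builder = XContentFactory.jsonBuilder();
        builder.startObject();
        ElasticsearchException.generateFailureXContent(builder, ToXContent.EMPTY_PARAMS, e, detailed);
        builder.endObject();
        return builder;
    }
}
```
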
+ return innerFromXContent(parser, true); } /** @@ -368,12 +639,23 @@ public static String getExceptionName(Throwable ex) { return toUnderscoreCase(simpleName); } + static String buildMessage(String type, String reason, String stack) { + StringBuilder message = new StringBuilder("Elasticsearch exception ["); + message.append(TYPE).append('=').append(type).append(", "); + message.append(REASON).append('=').append(reason); + if (stack != null) { + message.append(", ").append(STACK_TRACE).append('=').append(stack); + } + message.append(']'); + return message.toString(); + } + @Override public String toString() { StringBuilder builder = new StringBuilder(); - if (headers.containsKey(INDEX_HEADER_KEY)) { + if (metadata.containsKey(INDEX_METADATA_KEY)) { builder.append(getIndex()); - if (headers.containsKey(SHARD_HEADER_KEY)) { + if (metadata.containsKey(SHARD_METADATA_KEY)) { builder.append('[').append(getShardId()).append(']'); } builder.append(' '); @@ -431,284 +713,327 @@ public static T writeStackTraces(T throwable, StreamOutput * in id order below. If you want to remove an exception leave a tombstone comment and mark the id as null in * ExceptionSerializationTests.testIds.ids. */ - enum ElasticsearchExceptionHandle { + private enum ElasticsearchExceptionHandle { INDEX_SHARD_SNAPSHOT_FAILED_EXCEPTION(org.elasticsearch.index.snapshots.IndexShardSnapshotFailedException.class, - org.elasticsearch.index.snapshots.IndexShardSnapshotFailedException::new, 0), + org.elasticsearch.index.snapshots.IndexShardSnapshotFailedException::new, 0, UNKNOWN_VERSION_ADDED), DFS_PHASE_EXECUTION_EXCEPTION(org.elasticsearch.search.dfs.DfsPhaseExecutionException.class, - org.elasticsearch.search.dfs.DfsPhaseExecutionException::new, 1), + org.elasticsearch.search.dfs.DfsPhaseExecutionException::new, 1, UNKNOWN_VERSION_ADDED), EXECUTION_CANCELLED_EXCEPTION(org.elasticsearch.common.util.CancellableThreads.ExecutionCancelledException.class, - org.elasticsearch.common.util.CancellableThreads.ExecutionCancelledException::new, 2), + org.elasticsearch.common.util.CancellableThreads.ExecutionCancelledException::new, 2, UNKNOWN_VERSION_ADDED), MASTER_NOT_DISCOVERED_EXCEPTION(org.elasticsearch.discovery.MasterNotDiscoveredException.class, - org.elasticsearch.discovery.MasterNotDiscoveredException::new, 3), + org.elasticsearch.discovery.MasterNotDiscoveredException::new, 3, UNKNOWN_VERSION_ADDED), ELASTICSEARCH_SECURITY_EXCEPTION(org.elasticsearch.ElasticsearchSecurityException.class, - org.elasticsearch.ElasticsearchSecurityException::new, 4), + org.elasticsearch.ElasticsearchSecurityException::new, 4, UNKNOWN_VERSION_ADDED), INDEX_SHARD_RESTORE_EXCEPTION(org.elasticsearch.index.snapshots.IndexShardRestoreException.class, - org.elasticsearch.index.snapshots.IndexShardRestoreException::new, 5), + org.elasticsearch.index.snapshots.IndexShardRestoreException::new, 5, UNKNOWN_VERSION_ADDED), INDEX_CLOSED_EXCEPTION(org.elasticsearch.indices.IndexClosedException.class, - org.elasticsearch.indices.IndexClosedException::new, 6), + org.elasticsearch.indices.IndexClosedException::new, 6, UNKNOWN_VERSION_ADDED), BIND_HTTP_EXCEPTION(org.elasticsearch.http.BindHttpException.class, - org.elasticsearch.http.BindHttpException::new, 7), + org.elasticsearch.http.BindHttpException::new, 7, UNKNOWN_VERSION_ADDED), REDUCE_SEARCH_PHASE_EXCEPTION(org.elasticsearch.action.search.ReduceSearchPhaseException.class, - org.elasticsearch.action.search.ReduceSearchPhaseException::new, 8), + org.elasticsearch.action.search.ReduceSearchPhaseException::new, 
8, UNKNOWN_VERSION_ADDED), NODE_CLOSED_EXCEPTION(org.elasticsearch.node.NodeClosedException.class, - org.elasticsearch.node.NodeClosedException::new, 9), + org.elasticsearch.node.NodeClosedException::new, 9, UNKNOWN_VERSION_ADDED), SNAPSHOT_FAILED_ENGINE_EXCEPTION(org.elasticsearch.index.engine.SnapshotFailedEngineException.class, - org.elasticsearch.index.engine.SnapshotFailedEngineException::new, 10), + org.elasticsearch.index.engine.SnapshotFailedEngineException::new, 10, UNKNOWN_VERSION_ADDED), SHARD_NOT_FOUND_EXCEPTION(org.elasticsearch.index.shard.ShardNotFoundException.class, - org.elasticsearch.index.shard.ShardNotFoundException::new, 11), + org.elasticsearch.index.shard.ShardNotFoundException::new, 11, UNKNOWN_VERSION_ADDED), CONNECT_TRANSPORT_EXCEPTION(org.elasticsearch.transport.ConnectTransportException.class, - org.elasticsearch.transport.ConnectTransportException::new, 12), + org.elasticsearch.transport.ConnectTransportException::new, 12, UNKNOWN_VERSION_ADDED), NOT_SERIALIZABLE_TRANSPORT_EXCEPTION(org.elasticsearch.transport.NotSerializableTransportException.class, - org.elasticsearch.transport.NotSerializableTransportException::new, 13), + org.elasticsearch.transport.NotSerializableTransportException::new, 13, UNKNOWN_VERSION_ADDED), RESPONSE_HANDLER_FAILURE_TRANSPORT_EXCEPTION(org.elasticsearch.transport.ResponseHandlerFailureTransportException.class, - org.elasticsearch.transport.ResponseHandlerFailureTransportException::new, 14), + org.elasticsearch.transport.ResponseHandlerFailureTransportException::new, 14, UNKNOWN_VERSION_ADDED), INDEX_CREATION_EXCEPTION(org.elasticsearch.indices.IndexCreationException.class, - org.elasticsearch.indices.IndexCreationException::new, 15), + org.elasticsearch.indices.IndexCreationException::new, 15, UNKNOWN_VERSION_ADDED), INDEX_NOT_FOUND_EXCEPTION(org.elasticsearch.index.IndexNotFoundException.class, - org.elasticsearch.index.IndexNotFoundException::new, 16), + org.elasticsearch.index.IndexNotFoundException::new, 16, UNKNOWN_VERSION_ADDED), ILLEGAL_SHARD_ROUTING_STATE_EXCEPTION(org.elasticsearch.cluster.routing.IllegalShardRoutingStateException.class, - org.elasticsearch.cluster.routing.IllegalShardRoutingStateException::new, 17), + org.elasticsearch.cluster.routing.IllegalShardRoutingStateException::new, 17, UNKNOWN_VERSION_ADDED), BROADCAST_SHARD_OPERATION_FAILED_EXCEPTION(org.elasticsearch.action.support.broadcast.BroadcastShardOperationFailedException.class, - org.elasticsearch.action.support.broadcast.BroadcastShardOperationFailedException::new, 18), + org.elasticsearch.action.support.broadcast.BroadcastShardOperationFailedException::new, 18, UNKNOWN_VERSION_ADDED), RESOURCE_NOT_FOUND_EXCEPTION(org.elasticsearch.ResourceNotFoundException.class, - org.elasticsearch.ResourceNotFoundException::new, 19), + org.elasticsearch.ResourceNotFoundException::new, 19, UNKNOWN_VERSION_ADDED), ACTION_TRANSPORT_EXCEPTION(org.elasticsearch.transport.ActionTransportException.class, - org.elasticsearch.transport.ActionTransportException::new, 20), + org.elasticsearch.transport.ActionTransportException::new, 20, UNKNOWN_VERSION_ADDED), ELASTICSEARCH_GENERATION_EXCEPTION(org.elasticsearch.ElasticsearchGenerationException.class, - org.elasticsearch.ElasticsearchGenerationException::new, 21), + org.elasticsearch.ElasticsearchGenerationException::new, 21, UNKNOWN_VERSION_ADDED), // 22 was CreateFailedEngineException INDEX_SHARD_STARTED_EXCEPTION(org.elasticsearch.index.shard.IndexShardStartedException.class, - 
org.elasticsearch.index.shard.IndexShardStartedException::new, 23), + org.elasticsearch.index.shard.IndexShardStartedException::new, 23, UNKNOWN_VERSION_ADDED), SEARCH_CONTEXT_MISSING_EXCEPTION(org.elasticsearch.search.SearchContextMissingException.class, - org.elasticsearch.search.SearchContextMissingException::new, 24), + org.elasticsearch.search.SearchContextMissingException::new, 24, UNKNOWN_VERSION_ADDED), GENERAL_SCRIPT_EXCEPTION(org.elasticsearch.script.GeneralScriptException.class, - org.elasticsearch.script.GeneralScriptException::new, 25), + org.elasticsearch.script.GeneralScriptException::new, 25, UNKNOWN_VERSION_ADDED), BATCH_OPERATION_EXCEPTION(org.elasticsearch.index.shard.TranslogRecoveryPerformer.BatchOperationException.class, - org.elasticsearch.index.shard.TranslogRecoveryPerformer.BatchOperationException::new, 26), + org.elasticsearch.index.shard.TranslogRecoveryPerformer.BatchOperationException::new, 26, UNKNOWN_VERSION_ADDED), SNAPSHOT_CREATION_EXCEPTION(org.elasticsearch.snapshots.SnapshotCreationException.class, - org.elasticsearch.snapshots.SnapshotCreationException::new, 27), + org.elasticsearch.snapshots.SnapshotCreationException::new, 27, UNKNOWN_VERSION_ADDED), DELETE_FAILED_ENGINE_EXCEPTION(org.elasticsearch.index.engine.DeleteFailedEngineException.class, - org.elasticsearch.index.engine.DeleteFailedEngineException::new, 28), + org.elasticsearch.index.engine.DeleteFailedEngineException::new, 28, UNKNOWN_VERSION_ADDED), DOCUMENT_MISSING_EXCEPTION(org.elasticsearch.index.engine.DocumentMissingException.class, - org.elasticsearch.index.engine.DocumentMissingException::new, 29), + org.elasticsearch.index.engine.DocumentMissingException::new, 29, UNKNOWN_VERSION_ADDED), SNAPSHOT_EXCEPTION(org.elasticsearch.snapshots.SnapshotException.class, - org.elasticsearch.snapshots.SnapshotException::new, 30), + org.elasticsearch.snapshots.SnapshotException::new, 30, UNKNOWN_VERSION_ADDED), INVALID_ALIAS_NAME_EXCEPTION(org.elasticsearch.indices.InvalidAliasNameException.class, - org.elasticsearch.indices.InvalidAliasNameException::new, 31), + org.elasticsearch.indices.InvalidAliasNameException::new, 31, UNKNOWN_VERSION_ADDED), INVALID_INDEX_NAME_EXCEPTION(org.elasticsearch.indices.InvalidIndexNameException.class, - org.elasticsearch.indices.InvalidIndexNameException::new, 32), + org.elasticsearch.indices.InvalidIndexNameException::new, 32, UNKNOWN_VERSION_ADDED), INDEX_PRIMARY_SHARD_NOT_ALLOCATED_EXCEPTION(org.elasticsearch.indices.IndexPrimaryShardNotAllocatedException.class, - org.elasticsearch.indices.IndexPrimaryShardNotAllocatedException::new, 33), + org.elasticsearch.indices.IndexPrimaryShardNotAllocatedException::new, 33, UNKNOWN_VERSION_ADDED), TRANSPORT_EXCEPTION(org.elasticsearch.transport.TransportException.class, - org.elasticsearch.transport.TransportException::new, 34), + org.elasticsearch.transport.TransportException::new, 34, UNKNOWN_VERSION_ADDED), ELASTICSEARCH_PARSE_EXCEPTION(org.elasticsearch.ElasticsearchParseException.class, - org.elasticsearch.ElasticsearchParseException::new, 35), + org.elasticsearch.ElasticsearchParseException::new, 35, UNKNOWN_VERSION_ADDED), SEARCH_EXCEPTION(org.elasticsearch.search.SearchException.class, - org.elasticsearch.search.SearchException::new, 36), + org.elasticsearch.search.SearchException::new, 36, UNKNOWN_VERSION_ADDED), MAPPER_EXCEPTION(org.elasticsearch.index.mapper.MapperException.class, - org.elasticsearch.index.mapper.MapperException::new, 37), + org.elasticsearch.index.mapper.MapperException::new, 37, 
UNKNOWN_VERSION_ADDED), INVALID_TYPE_NAME_EXCEPTION(org.elasticsearch.indices.InvalidTypeNameException.class, - org.elasticsearch.indices.InvalidTypeNameException::new, 38), + org.elasticsearch.indices.InvalidTypeNameException::new, 38, UNKNOWN_VERSION_ADDED), SNAPSHOT_RESTORE_EXCEPTION(org.elasticsearch.snapshots.SnapshotRestoreException.class, - org.elasticsearch.snapshots.SnapshotRestoreException::new, 39), - PARSING_EXCEPTION(org.elasticsearch.common.ParsingException.class, org.elasticsearch.common.ParsingException::new, 40), + org.elasticsearch.snapshots.SnapshotRestoreException::new, 39, UNKNOWN_VERSION_ADDED), + PARSING_EXCEPTION(org.elasticsearch.common.ParsingException.class, org.elasticsearch.common.ParsingException::new, 40, + UNKNOWN_VERSION_ADDED), INDEX_SHARD_CLOSED_EXCEPTION(org.elasticsearch.index.shard.IndexShardClosedException.class, - org.elasticsearch.index.shard.IndexShardClosedException::new, 41), + org.elasticsearch.index.shard.IndexShardClosedException::new, 41, UNKNOWN_VERSION_ADDED), RECOVER_FILES_RECOVERY_EXCEPTION(org.elasticsearch.indices.recovery.RecoverFilesRecoveryException.class, - org.elasticsearch.indices.recovery.RecoverFilesRecoveryException::new, 42), + org.elasticsearch.indices.recovery.RecoverFilesRecoveryException::new, 42, UNKNOWN_VERSION_ADDED), TRUNCATED_TRANSLOG_EXCEPTION(org.elasticsearch.index.translog.TruncatedTranslogException.class, - org.elasticsearch.index.translog.TruncatedTranslogException::new, 43), + org.elasticsearch.index.translog.TruncatedTranslogException::new, 43, UNKNOWN_VERSION_ADDED), RECOVERY_FAILED_EXCEPTION(org.elasticsearch.indices.recovery.RecoveryFailedException.class, - org.elasticsearch.indices.recovery.RecoveryFailedException::new, 44), + org.elasticsearch.indices.recovery.RecoveryFailedException::new, 44, UNKNOWN_VERSION_ADDED), INDEX_SHARD_RELOCATED_EXCEPTION(org.elasticsearch.index.shard.IndexShardRelocatedException.class, - org.elasticsearch.index.shard.IndexShardRelocatedException::new, 45), + org.elasticsearch.index.shard.IndexShardRelocatedException::new, 45, UNKNOWN_VERSION_ADDED), NODE_SHOULD_NOT_CONNECT_EXCEPTION(org.elasticsearch.transport.NodeShouldNotConnectException.class, - org.elasticsearch.transport.NodeShouldNotConnectException::new, 46), + org.elasticsearch.transport.NodeShouldNotConnectException::new, 46, UNKNOWN_VERSION_ADDED), INDEX_TEMPLATE_ALREADY_EXISTS_EXCEPTION(org.elasticsearch.indices.IndexTemplateAlreadyExistsException.class, - org.elasticsearch.indices.IndexTemplateAlreadyExistsException::new, 47), + org.elasticsearch.indices.IndexTemplateAlreadyExistsException::new, 47, UNKNOWN_VERSION_ADDED), TRANSLOG_CORRUPTED_EXCEPTION(org.elasticsearch.index.translog.TranslogCorruptedException.class, - org.elasticsearch.index.translog.TranslogCorruptedException::new, 48), + org.elasticsearch.index.translog.TranslogCorruptedException::new, 48, UNKNOWN_VERSION_ADDED), CLUSTER_BLOCK_EXCEPTION(org.elasticsearch.cluster.block.ClusterBlockException.class, - org.elasticsearch.cluster.block.ClusterBlockException::new, 49), + org.elasticsearch.cluster.block.ClusterBlockException::new, 49, UNKNOWN_VERSION_ADDED), FETCH_PHASE_EXECUTION_EXCEPTION(org.elasticsearch.search.fetch.FetchPhaseExecutionException.class, - org.elasticsearch.search.fetch.FetchPhaseExecutionException::new, 50), + org.elasticsearch.search.fetch.FetchPhaseExecutionException::new, 50, UNKNOWN_VERSION_ADDED), INDEX_SHARD_ALREADY_EXISTS_EXCEPTION(org.elasticsearch.index.IndexShardAlreadyExistsException.class, - 
org.elasticsearch.index.IndexShardAlreadyExistsException::new, 51), + org.elasticsearch.index.IndexShardAlreadyExistsException::new, 51, UNKNOWN_VERSION_ADDED), VERSION_CONFLICT_ENGINE_EXCEPTION(org.elasticsearch.index.engine.VersionConflictEngineException.class, - org.elasticsearch.index.engine.VersionConflictEngineException::new, 52), - ENGINE_EXCEPTION(org.elasticsearch.index.engine.EngineException.class, org.elasticsearch.index.engine.EngineException::new, 53), + org.elasticsearch.index.engine.VersionConflictEngineException::new, 52, UNKNOWN_VERSION_ADDED), + ENGINE_EXCEPTION(org.elasticsearch.index.engine.EngineException.class, org.elasticsearch.index.engine.EngineException::new, 53, + UNKNOWN_VERSION_ADDED), // 54 was DocumentAlreadyExistsException, which is superseded by VersionConflictEngineException - NO_SUCH_NODE_EXCEPTION(org.elasticsearch.action.NoSuchNodeException.class, org.elasticsearch.action.NoSuchNodeException::new, 55), + NO_SUCH_NODE_EXCEPTION(org.elasticsearch.action.NoSuchNodeException.class, org.elasticsearch.action.NoSuchNodeException::new, 55, + UNKNOWN_VERSION_ADDED), SETTINGS_EXCEPTION(org.elasticsearch.common.settings.SettingsException.class, - org.elasticsearch.common.settings.SettingsException::new, 56), + org.elasticsearch.common.settings.SettingsException::new, 56, UNKNOWN_VERSION_ADDED), INDEX_TEMPLATE_MISSING_EXCEPTION(org.elasticsearch.indices.IndexTemplateMissingException.class, - org.elasticsearch.indices.IndexTemplateMissingException::new, 57), + org.elasticsearch.indices.IndexTemplateMissingException::new, 57, UNKNOWN_VERSION_ADDED), SEND_REQUEST_TRANSPORT_EXCEPTION(org.elasticsearch.transport.SendRequestTransportException.class, - org.elasticsearch.transport.SendRequestTransportException::new, 58), + org.elasticsearch.transport.SendRequestTransportException::new, 58, UNKNOWN_VERSION_ADDED), ES_REJECTED_EXECUTION_EXCEPTION(org.elasticsearch.common.util.concurrent.EsRejectedExecutionException.class, - org.elasticsearch.common.util.concurrent.EsRejectedExecutionException::new, 59), + org.elasticsearch.common.util.concurrent.EsRejectedExecutionException::new, 59, UNKNOWN_VERSION_ADDED), EARLY_TERMINATION_EXCEPTION(org.elasticsearch.common.lucene.Lucene.EarlyTerminationException.class, - org.elasticsearch.common.lucene.Lucene.EarlyTerminationException::new, 60), + org.elasticsearch.common.lucene.Lucene.EarlyTerminationException::new, 60, UNKNOWN_VERSION_ADDED), // 61 used to be for RoutingValidationException NOT_SERIALIZABLE_EXCEPTION_WRAPPER(org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper.class, - org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper::new, 62), + org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper::new, 62, UNKNOWN_VERSION_ADDED), ALIAS_FILTER_PARSING_EXCEPTION(org.elasticsearch.indices.AliasFilterParsingException.class, - org.elasticsearch.indices.AliasFilterParsingException::new, 63), - // 64 was DeleteByQueryFailedEngineException, which was removed in 3.0 - GATEWAY_EXCEPTION(org.elasticsearch.gateway.GatewayException.class, org.elasticsearch.gateway.GatewayException::new, 65), + org.elasticsearch.indices.AliasFilterParsingException::new, 63, UNKNOWN_VERSION_ADDED), + // 64 was DeleteByQueryFailedEngineException, which was removed in 5.0 + GATEWAY_EXCEPTION(org.elasticsearch.gateway.GatewayException.class, org.elasticsearch.gateway.GatewayException::new, 65, + UNKNOWN_VERSION_ADDED), INDEX_SHARD_NOT_RECOVERING_EXCEPTION(org.elasticsearch.index.shard.IndexShardNotRecoveringException.class, 
- org.elasticsearch.index.shard.IndexShardNotRecoveringException::new, 66), - HTTP_EXCEPTION(org.elasticsearch.http.HttpException.class, org.elasticsearch.http.HttpException::new, 67), + org.elasticsearch.index.shard.IndexShardNotRecoveringException::new, 66, UNKNOWN_VERSION_ADDED), + HTTP_EXCEPTION(org.elasticsearch.http.HttpException.class, org.elasticsearch.http.HttpException::new, 67, UNKNOWN_VERSION_ADDED), ELASTICSEARCH_EXCEPTION(org.elasticsearch.ElasticsearchException.class, - org.elasticsearch.ElasticsearchException::new, 68), + org.elasticsearch.ElasticsearchException::new, 68, UNKNOWN_VERSION_ADDED), SNAPSHOT_MISSING_EXCEPTION(org.elasticsearch.snapshots.SnapshotMissingException.class, - org.elasticsearch.snapshots.SnapshotMissingException::new, 69), + org.elasticsearch.snapshots.SnapshotMissingException::new, 69, UNKNOWN_VERSION_ADDED), PRIMARY_MISSING_ACTION_EXCEPTION(org.elasticsearch.action.PrimaryMissingActionException.class, - org.elasticsearch.action.PrimaryMissingActionException::new, 70), - FAILED_NODE_EXCEPTION(org.elasticsearch.action.FailedNodeException.class, org.elasticsearch.action.FailedNodeException::new, 71), - SEARCH_PARSE_EXCEPTION(org.elasticsearch.search.SearchParseException.class, org.elasticsearch.search.SearchParseException::new, 72), + org.elasticsearch.action.PrimaryMissingActionException::new, 70, UNKNOWN_VERSION_ADDED), + FAILED_NODE_EXCEPTION(org.elasticsearch.action.FailedNodeException.class, org.elasticsearch.action.FailedNodeException::new, 71, + UNKNOWN_VERSION_ADDED), + SEARCH_PARSE_EXCEPTION(org.elasticsearch.search.SearchParseException.class, org.elasticsearch.search.SearchParseException::new, 72, + UNKNOWN_VERSION_ADDED), CONCURRENT_SNAPSHOT_EXECUTION_EXCEPTION(org.elasticsearch.snapshots.ConcurrentSnapshotExecutionException.class, - org.elasticsearch.snapshots.ConcurrentSnapshotExecutionException::new, 73), + org.elasticsearch.snapshots.ConcurrentSnapshotExecutionException::new, 73, UNKNOWN_VERSION_ADDED), BLOB_STORE_EXCEPTION(org.elasticsearch.common.blobstore.BlobStoreException.class, - org.elasticsearch.common.blobstore.BlobStoreException::new, 74), + org.elasticsearch.common.blobstore.BlobStoreException::new, 74, UNKNOWN_VERSION_ADDED), INCOMPATIBLE_CLUSTER_STATE_VERSION_EXCEPTION(org.elasticsearch.cluster.IncompatibleClusterStateVersionException.class, - org.elasticsearch.cluster.IncompatibleClusterStateVersionException::new, 75), + org.elasticsearch.cluster.IncompatibleClusterStateVersionException::new, 75, UNKNOWN_VERSION_ADDED), RECOVERY_ENGINE_EXCEPTION(org.elasticsearch.index.engine.RecoveryEngineException.class, - org.elasticsearch.index.engine.RecoveryEngineException::new, 76), + org.elasticsearch.index.engine.RecoveryEngineException::new, 76, UNKNOWN_VERSION_ADDED), UNCATEGORIZED_EXECUTION_EXCEPTION(org.elasticsearch.common.util.concurrent.UncategorizedExecutionException.class, - org.elasticsearch.common.util.concurrent.UncategorizedExecutionException::new, 77), + org.elasticsearch.common.util.concurrent.UncategorizedExecutionException::new, 77, UNKNOWN_VERSION_ADDED), TIMESTAMP_PARSING_EXCEPTION(org.elasticsearch.action.TimestampParsingException.class, - org.elasticsearch.action.TimestampParsingException::new, 78), + org.elasticsearch.action.TimestampParsingException::new, 78, UNKNOWN_VERSION_ADDED), ROUTING_MISSING_EXCEPTION(org.elasticsearch.action.RoutingMissingException.class, - org.elasticsearch.action.RoutingMissingException::new, 79), + org.elasticsearch.action.RoutingMissingException::new, 79, UNKNOWN_VERSION_ADDED), 
INDEX_FAILED_ENGINE_EXCEPTION(org.elasticsearch.index.engine.IndexFailedEngineException.class, - org.elasticsearch.index.engine.IndexFailedEngineException::new, 80), + org.elasticsearch.index.engine.IndexFailedEngineException::new, 80, UNKNOWN_VERSION_ADDED), INDEX_SHARD_RESTORE_FAILED_EXCEPTION(org.elasticsearch.index.snapshots.IndexShardRestoreFailedException.class, - org.elasticsearch.index.snapshots.IndexShardRestoreFailedException::new, 81), + org.elasticsearch.index.snapshots.IndexShardRestoreFailedException::new, 81, UNKNOWN_VERSION_ADDED), REPOSITORY_EXCEPTION(org.elasticsearch.repositories.RepositoryException.class, - org.elasticsearch.repositories.RepositoryException::new, 82), + org.elasticsearch.repositories.RepositoryException::new, 82, UNKNOWN_VERSION_ADDED), RECEIVE_TIMEOUT_TRANSPORT_EXCEPTION(org.elasticsearch.transport.ReceiveTimeoutTransportException.class, - org.elasticsearch.transport.ReceiveTimeoutTransportException::new, 83), + org.elasticsearch.transport.ReceiveTimeoutTransportException::new, 83, UNKNOWN_VERSION_ADDED), NODE_DISCONNECTED_EXCEPTION(org.elasticsearch.transport.NodeDisconnectedException.class, - org.elasticsearch.transport.NodeDisconnectedException::new, 84), + org.elasticsearch.transport.NodeDisconnectedException::new, 84, UNKNOWN_VERSION_ADDED), ALREADY_EXPIRED_EXCEPTION(org.elasticsearch.index.AlreadyExpiredException.class, - org.elasticsearch.index.AlreadyExpiredException::new, 85), + org.elasticsearch.index.AlreadyExpiredException::new, 85, UNKNOWN_VERSION_ADDED), AGGREGATION_EXECUTION_EXCEPTION(org.elasticsearch.search.aggregations.AggregationExecutionException.class, - org.elasticsearch.search.aggregations.AggregationExecutionException::new, 86), + org.elasticsearch.search.aggregations.AggregationExecutionException::new, 86, UNKNOWN_VERSION_ADDED), // 87 used to be for MergeMappingException INVALID_INDEX_TEMPLATE_EXCEPTION(org.elasticsearch.indices.InvalidIndexTemplateException.class, - org.elasticsearch.indices.InvalidIndexTemplateException::new, 88), + org.elasticsearch.indices.InvalidIndexTemplateException::new, 88, UNKNOWN_VERSION_ADDED), REFRESH_FAILED_ENGINE_EXCEPTION(org.elasticsearch.index.engine.RefreshFailedEngineException.class, - org.elasticsearch.index.engine.RefreshFailedEngineException::new, 90), + org.elasticsearch.index.engine.RefreshFailedEngineException::new, 90, UNKNOWN_VERSION_ADDED), AGGREGATION_INITIALIZATION_EXCEPTION(org.elasticsearch.search.aggregations.AggregationInitializationException.class, - org.elasticsearch.search.aggregations.AggregationInitializationException::new, 91), + org.elasticsearch.search.aggregations.AggregationInitializationException::new, 91, UNKNOWN_VERSION_ADDED), DELAY_RECOVERY_EXCEPTION(org.elasticsearch.indices.recovery.DelayRecoveryException.class, - org.elasticsearch.indices.recovery.DelayRecoveryException::new, 92), + org.elasticsearch.indices.recovery.DelayRecoveryException::new, 92, UNKNOWN_VERSION_ADDED), // 93 used to be for IndexWarmerMissingException NO_NODE_AVAILABLE_EXCEPTION(org.elasticsearch.client.transport.NoNodeAvailableException.class, - org.elasticsearch.client.transport.NoNodeAvailableException::new, 94), + org.elasticsearch.client.transport.NoNodeAvailableException::new, 94, UNKNOWN_VERSION_ADDED), INVALID_SNAPSHOT_NAME_EXCEPTION(org.elasticsearch.snapshots.InvalidSnapshotNameException.class, - org.elasticsearch.snapshots.InvalidSnapshotNameException::new, 96), + org.elasticsearch.snapshots.InvalidSnapshotNameException::new, 96, UNKNOWN_VERSION_ADDED), 
ILLEGAL_INDEX_SHARD_STATE_EXCEPTION(org.elasticsearch.index.shard.IllegalIndexShardStateException.class, - org.elasticsearch.index.shard.IllegalIndexShardStateException::new, 97), + org.elasticsearch.index.shard.IllegalIndexShardStateException::new, 97, UNKNOWN_VERSION_ADDED), INDEX_SHARD_SNAPSHOT_EXCEPTION(org.elasticsearch.index.snapshots.IndexShardSnapshotException.class, - org.elasticsearch.index.snapshots.IndexShardSnapshotException::new, 98), + org.elasticsearch.index.snapshots.IndexShardSnapshotException::new, 98, UNKNOWN_VERSION_ADDED), INDEX_SHARD_NOT_STARTED_EXCEPTION(org.elasticsearch.index.shard.IndexShardNotStartedException.class, - org.elasticsearch.index.shard.IndexShardNotStartedException::new, 99), + org.elasticsearch.index.shard.IndexShardNotStartedException::new, 99, UNKNOWN_VERSION_ADDED), SEARCH_PHASE_EXECUTION_EXCEPTION(org.elasticsearch.action.search.SearchPhaseExecutionException.class, - org.elasticsearch.action.search.SearchPhaseExecutionException::new, 100), + org.elasticsearch.action.search.SearchPhaseExecutionException::new, 100, UNKNOWN_VERSION_ADDED), ACTION_NOT_FOUND_TRANSPORT_EXCEPTION(org.elasticsearch.transport.ActionNotFoundTransportException.class, - org.elasticsearch.transport.ActionNotFoundTransportException::new, 101), + org.elasticsearch.transport.ActionNotFoundTransportException::new, 101, UNKNOWN_VERSION_ADDED), TRANSPORT_SERIALIZATION_EXCEPTION(org.elasticsearch.transport.TransportSerializationException.class, - org.elasticsearch.transport.TransportSerializationException::new, 102), + org.elasticsearch.transport.TransportSerializationException::new, 102, UNKNOWN_VERSION_ADDED), REMOTE_TRANSPORT_EXCEPTION(org.elasticsearch.transport.RemoteTransportException.class, - org.elasticsearch.transport.RemoteTransportException::new, 103), + org.elasticsearch.transport.RemoteTransportException::new, 103, UNKNOWN_VERSION_ADDED), ENGINE_CREATION_FAILURE_EXCEPTION(org.elasticsearch.index.engine.EngineCreationFailureException.class, - org.elasticsearch.index.engine.EngineCreationFailureException::new, 104), + org.elasticsearch.index.engine.EngineCreationFailureException::new, 104, UNKNOWN_VERSION_ADDED), ROUTING_EXCEPTION(org.elasticsearch.cluster.routing.RoutingException.class, - org.elasticsearch.cluster.routing.RoutingException::new, 105), + org.elasticsearch.cluster.routing.RoutingException::new, 105, UNKNOWN_VERSION_ADDED), INDEX_SHARD_RECOVERY_EXCEPTION(org.elasticsearch.index.shard.IndexShardRecoveryException.class, - org.elasticsearch.index.shard.IndexShardRecoveryException::new, 106), + org.elasticsearch.index.shard.IndexShardRecoveryException::new, 106, UNKNOWN_VERSION_ADDED), REPOSITORY_MISSING_EXCEPTION(org.elasticsearch.repositories.RepositoryMissingException.class, - org.elasticsearch.repositories.RepositoryMissingException::new, 107), + org.elasticsearch.repositories.RepositoryMissingException::new, 107, UNKNOWN_VERSION_ADDED), DOCUMENT_SOURCE_MISSING_EXCEPTION(org.elasticsearch.index.engine.DocumentSourceMissingException.class, - org.elasticsearch.index.engine.DocumentSourceMissingException::new, 109), - FLUSH_NOT_ALLOWED_ENGINE_EXCEPTION(org.elasticsearch.index.engine.FlushNotAllowedEngineException.class, - org.elasticsearch.index.engine.FlushNotAllowedEngineException::new, 110), + org.elasticsearch.index.engine.DocumentSourceMissingException::new, 109, UNKNOWN_VERSION_ADDED), + // 110 used to be FlushNotAllowedEngineException NO_CLASS_SETTINGS_EXCEPTION(org.elasticsearch.common.settings.NoClassSettingsException.class, - 
org.elasticsearch.common.settings.NoClassSettingsException::new, 111), + org.elasticsearch.common.settings.NoClassSettingsException::new, 111, UNKNOWN_VERSION_ADDED), BIND_TRANSPORT_EXCEPTION(org.elasticsearch.transport.BindTransportException.class, - org.elasticsearch.transport.BindTransportException::new, 112), + org.elasticsearch.transport.BindTransportException::new, 112, UNKNOWN_VERSION_ADDED), ALIASES_NOT_FOUND_EXCEPTION(org.elasticsearch.rest.action.admin.indices.AliasesNotFoundException.class, - org.elasticsearch.rest.action.admin.indices.AliasesNotFoundException::new, 113), + org.elasticsearch.rest.action.admin.indices.AliasesNotFoundException::new, 113, UNKNOWN_VERSION_ADDED), INDEX_SHARD_RECOVERING_EXCEPTION(org.elasticsearch.index.shard.IndexShardRecoveringException.class, - org.elasticsearch.index.shard.IndexShardRecoveringException::new, 114), + org.elasticsearch.index.shard.IndexShardRecoveringException::new, 114, UNKNOWN_VERSION_ADDED), TRANSLOG_EXCEPTION(org.elasticsearch.index.translog.TranslogException.class, - org.elasticsearch.index.translog.TranslogException::new, 115), + org.elasticsearch.index.translog.TranslogException::new, 115, UNKNOWN_VERSION_ADDED), PROCESS_CLUSTER_EVENT_TIMEOUT_EXCEPTION(org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException.class, - org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException::new, 116), + org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException::new, 116, UNKNOWN_VERSION_ADDED), RETRY_ON_PRIMARY_EXCEPTION(ReplicationOperation.RetryOnPrimaryException.class, - ReplicationOperation.RetryOnPrimaryException::new, 117), + ReplicationOperation.RetryOnPrimaryException::new, 117, UNKNOWN_VERSION_ADDED), ELASTICSEARCH_TIMEOUT_EXCEPTION(org.elasticsearch.ElasticsearchTimeoutException.class, - org.elasticsearch.ElasticsearchTimeoutException::new, 118), + org.elasticsearch.ElasticsearchTimeoutException::new, 118, UNKNOWN_VERSION_ADDED), QUERY_PHASE_EXECUTION_EXCEPTION(org.elasticsearch.search.query.QueryPhaseExecutionException.class, - org.elasticsearch.search.query.QueryPhaseExecutionException::new, 119), + org.elasticsearch.search.query.QueryPhaseExecutionException::new, 119, UNKNOWN_VERSION_ADDED), REPOSITORY_VERIFICATION_EXCEPTION(org.elasticsearch.repositories.RepositoryVerificationException.class, - org.elasticsearch.repositories.RepositoryVerificationException::new, 120), + org.elasticsearch.repositories.RepositoryVerificationException::new, 120, UNKNOWN_VERSION_ADDED), INVALID_AGGREGATION_PATH_EXCEPTION(org.elasticsearch.search.aggregations.InvalidAggregationPathException.class, - org.elasticsearch.search.aggregations.InvalidAggregationPathException::new, 121), - INDEX_ALREADY_EXISTS_EXCEPTION(org.elasticsearch.indices.IndexAlreadyExistsException.class, - org.elasticsearch.indices.IndexAlreadyExistsException::new, 123), + org.elasticsearch.search.aggregations.InvalidAggregationPathException::new, 121, UNKNOWN_VERSION_ADDED), + // 123 used to be IndexAlreadyExistsException and was renamed + RESOURCE_ALREADY_EXISTS_EXCEPTION(ResourceAlreadyExistsException.class, + ResourceAlreadyExistsException::new, 123, UNKNOWN_VERSION_ADDED), // 124 used to be Script.ScriptParseException HTTP_ON_TRANSPORT_EXCEPTION(TcpTransport.HttpOnTransportException.class, - TcpTransport.HttpOnTransportException::new, 125), + TcpTransport.HttpOnTransportException::new, 125, UNKNOWN_VERSION_ADDED), MAPPER_PARSING_EXCEPTION(org.elasticsearch.index.mapper.MapperParsingException.class, - 
org.elasticsearch.index.mapper.MapperParsingException::new, 126), + org.elasticsearch.index.mapper.MapperParsingException::new, 126, UNKNOWN_VERSION_ADDED), SEARCH_CONTEXT_EXCEPTION(org.elasticsearch.search.SearchContextException.class, - org.elasticsearch.search.SearchContextException::new, 127), + org.elasticsearch.search.SearchContextException::new, 127, UNKNOWN_VERSION_ADDED), SEARCH_SOURCE_BUILDER_EXCEPTION(org.elasticsearch.search.builder.SearchSourceBuilderException.class, - org.elasticsearch.search.builder.SearchSourceBuilderException::new, 128), + org.elasticsearch.search.builder.SearchSourceBuilderException::new, 128, UNKNOWN_VERSION_ADDED), ENGINE_CLOSED_EXCEPTION(org.elasticsearch.index.engine.EngineClosedException.class, - org.elasticsearch.index.engine.EngineClosedException::new, 129), + org.elasticsearch.index.engine.EngineClosedException::new, 129, UNKNOWN_VERSION_ADDED), NO_SHARD_AVAILABLE_ACTION_EXCEPTION(org.elasticsearch.action.NoShardAvailableActionException.class, - org.elasticsearch.action.NoShardAvailableActionException::new, 130), + org.elasticsearch.action.NoShardAvailableActionException::new, 130, UNKNOWN_VERSION_ADDED), UNAVAILABLE_SHARDS_EXCEPTION(org.elasticsearch.action.UnavailableShardsException.class, - org.elasticsearch.action.UnavailableShardsException::new, 131), + org.elasticsearch.action.UnavailableShardsException::new, 131, UNKNOWN_VERSION_ADDED), FLUSH_FAILED_ENGINE_EXCEPTION(org.elasticsearch.index.engine.FlushFailedEngineException.class, - org.elasticsearch.index.engine.FlushFailedEngineException::new, 132), + org.elasticsearch.index.engine.FlushFailedEngineException::new, 132, UNKNOWN_VERSION_ADDED), CIRCUIT_BREAKING_EXCEPTION(org.elasticsearch.common.breaker.CircuitBreakingException.class, - org.elasticsearch.common.breaker.CircuitBreakingException::new, 133), + org.elasticsearch.common.breaker.CircuitBreakingException::new, 133, UNKNOWN_VERSION_ADDED), NODE_NOT_CONNECTED_EXCEPTION(org.elasticsearch.transport.NodeNotConnectedException.class, - org.elasticsearch.transport.NodeNotConnectedException::new, 134), + org.elasticsearch.transport.NodeNotConnectedException::new, 134, UNKNOWN_VERSION_ADDED), STRICT_DYNAMIC_MAPPING_EXCEPTION(org.elasticsearch.index.mapper.StrictDynamicMappingException.class, - org.elasticsearch.index.mapper.StrictDynamicMappingException::new, 135), + org.elasticsearch.index.mapper.StrictDynamicMappingException::new, 135, UNKNOWN_VERSION_ADDED), RETRY_ON_REPLICA_EXCEPTION(org.elasticsearch.action.support.replication.TransportReplicationAction.RetryOnReplicaException.class, - org.elasticsearch.action.support.replication.TransportReplicationAction.RetryOnReplicaException::new, 136), + org.elasticsearch.action.support.replication.TransportReplicationAction.RetryOnReplicaException::new, 136, + UNKNOWN_VERSION_ADDED), TYPE_MISSING_EXCEPTION(org.elasticsearch.indices.TypeMissingException.class, - org.elasticsearch.indices.TypeMissingException::new, 137), + org.elasticsearch.indices.TypeMissingException::new, 137, UNKNOWN_VERSION_ADDED), FAILED_TO_COMMIT_CLUSTER_STATE_EXCEPTION(org.elasticsearch.discovery.Discovery.FailedToCommitClusterStateException.class, - org.elasticsearch.discovery.Discovery.FailedToCommitClusterStateException::new, 140), + org.elasticsearch.discovery.Discovery.FailedToCommitClusterStateException::new, 140, UNKNOWN_VERSION_ADDED), QUERY_SHARD_EXCEPTION(org.elasticsearch.index.query.QueryShardException.class, - org.elasticsearch.index.query.QueryShardException::new, 141), + 
org.elasticsearch.index.query.QueryShardException::new, 141, UNKNOWN_VERSION_ADDED), NO_LONGER_PRIMARY_SHARD_EXCEPTION(ShardStateAction.NoLongerPrimaryShardException.class, - ShardStateAction.NoLongerPrimaryShardException::new, 142), - SCRIPT_EXCEPTION(org.elasticsearch.script.ScriptException.class, org.elasticsearch.script.ScriptException::new, 143), - NOT_MASTER_EXCEPTION(org.elasticsearch.cluster.NotMasterException.class, org.elasticsearch.cluster.NotMasterException::new, 144), - STATUS_EXCEPTION(org.elasticsearch.ElasticsearchStatusException.class, org.elasticsearch.ElasticsearchStatusException::new, 145); + ShardStateAction.NoLongerPrimaryShardException::new, 142, UNKNOWN_VERSION_ADDED), + SCRIPT_EXCEPTION(org.elasticsearch.script.ScriptException.class, org.elasticsearch.script.ScriptException::new, 143, + UNKNOWN_VERSION_ADDED), + NOT_MASTER_EXCEPTION(org.elasticsearch.cluster.NotMasterException.class, org.elasticsearch.cluster.NotMasterException::new, 144, + UNKNOWN_VERSION_ADDED), + STATUS_EXCEPTION(org.elasticsearch.ElasticsearchStatusException.class, org.elasticsearch.ElasticsearchStatusException::new, 145, + UNKNOWN_VERSION_ADDED), + TASK_CANCELLED_EXCEPTION(org.elasticsearch.tasks.TaskCancelledException.class, + org.elasticsearch.tasks.TaskCancelledException::new, 146, Version.V_5_1_1), + SHARD_LOCK_OBTAIN_FAILED_EXCEPTION(org.elasticsearch.env.ShardLockObtainFailedException.class, + org.elasticsearch.env.ShardLockObtainFailedException::new, 147, Version.V_5_0_2), + UNKNOWN_NAMED_OBJECT_EXCEPTION(org.elasticsearch.common.xcontent.NamedXContentRegistry.UnknownNamedObjectException.class, + org.elasticsearch.common.xcontent.NamedXContentRegistry.UnknownNamedObjectException::new, 148, Version.V_5_2_0); final Class exceptionClass; - final FunctionThatThrowsIOException constructor; + final CheckedFunction constructor; final int id; + final Version versionAdded; ElasticsearchExceptionHandle(Class exceptionClass, - FunctionThatThrowsIOException constructor, int id) { + CheckedFunction constructor, int id, + Version versionAdded) { // We need the exceptionClass because you can't dig it out of the constructor reliably. this.exceptionClass = exceptionClass; this.constructor = constructor; + this.versionAdded = versionAdded; this.id = id; } } + /** + * Returns an array of all registered handle IDs. These are the IDs for every registered + * exception. + * + * @return an array of all registered handle IDs + */ + static int[] ids() { + return Arrays.stream(ElasticsearchExceptionHandle.values()).mapToInt(h -> h.id).toArray(); + } + + /** + * Returns an array of all registered pairs of handle IDs and exception classes. These pairs are + * provided for every registered exception. 
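// A minimal sketch of how a handle drives wire deserialization, assuming the ID_TO_SUPPLIER map built in the
// static block below; the method name readByHandleId is illustrative. The sender writes handle.id, the receiver
// resolves the id back to the registered StreamInput constructor, and versionAdded records the first version that
// understands the id so callers can fall back to NotSerializableExceptionWrapper when talking to an older node.
static ElasticsearchException readByHandleId(StreamInput in, int id) throws IOException {
    CheckedFunction<StreamInput, ? extends ElasticsearchException, IOException> ctor = ID_TO_SUPPLIER.get(id);
    if (ctor == null) {
        throw new IllegalStateException("unknown exception id [" + id + "]");
    }
    return ctor.apply(in);
}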
+ * + * @return an array of all registered pairs of handle IDs and exception classes + */ + static Tuple>[] classes() { + @SuppressWarnings("unchecked") + final Tuple>[] ts = + Arrays.stream(ElasticsearchExceptionHandle.values()) + .map(h -> Tuple.tuple(h.id, h.exceptionClass)).toArray(Tuple[]::new); + return ts; + } + static { ID_TO_SUPPLIER = unmodifiableMap(Arrays .stream(ElasticsearchExceptionHandle.values()).collect(Collectors.toMap(e -> e.id, e -> e.constructor))); @@ -717,9 +1042,9 @@ ElasticsearchExceptionHandle(Class excepti } public Index getIndex() { - List index = getHeader(INDEX_HEADER_KEY); + List index = getMetadata(INDEX_METADATA_KEY); if (index != null && index.isEmpty() == false) { - List index_uuid = getHeader(INDEX_HEADER_KEY_UUID); + List index_uuid = getMetadata(INDEX_METADATA_KEY_UUID); return new Index(index.get(0), index_uuid.get(0)); } @@ -727,7 +1052,7 @@ public Index getIndex() { } public ShardId getShardId() { - List shard = getHeader(SHARD_HEADER_KEY); + List shard = getMetadata(SHARD_METADATA_KEY); if (shard != null && shard.isEmpty() == false) { return new ShardId(getIndex(), Integer.parseInt(shard.get(0))); } @@ -736,8 +1061,8 @@ public ShardId getShardId() { public void setIndex(Index index) { if (index != null) { - addHeader(INDEX_HEADER_KEY, index.getName()); - addHeader(INDEX_HEADER_KEY_UUID, index.getUUID()); + addMetadata(INDEX_METADATA_KEY, index.getName()); + addMetadata(INDEX_METADATA_KEY_UUID, index.getUUID()); } } @@ -750,27 +1075,22 @@ public void setIndex(String index) { public void setShard(ShardId shardId) { if (shardId != null) { setIndex(shardId.getIndex()); - addHeader(SHARD_HEADER_KEY, Integer.toString(shardId.id())); + addMetadata(SHARD_METADATA_KEY, Integer.toString(shardId.id())); } } - public void setShard(String index, int shardId) { - setIndex(index); - addHeader(SHARD_HEADER_KEY, Integer.toString(shardId)); - } - public void setResources(String type, String... 
id) { assert type != null; - addHeader(RESOURCE_HEADER_ID_KEY, id); - addHeader(RESOURCE_HEADER_TYPE_KEY, type); + addMetadata(RESOURCE_METADATA_ID_KEY, id); + addMetadata(RESOURCE_METADATA_TYPE_KEY, type); } public List getResourceId() { - return getHeader(RESOURCE_HEADER_ID_KEY); + return getMetadata(RESOURCE_METADATA_ID_KEY); } public String getResourceType() { - List header = getHeader(RESOURCE_HEADER_TYPE_KEY); + List header = getMetadata(RESOURCE_METADATA_TYPE_KEY); if (header != null && header.isEmpty() == false) { assert header.size() == 1; return header.get(0); @@ -778,26 +1098,6 @@ public String getResourceType() { return null; } - public static void renderException(XContentBuilder builder, Params params, Exception e) throws IOException { - builder.startObject("error"); - final ElasticsearchException[] rootCauses = ElasticsearchException.guessRootCauses(e); - builder.field("root_cause"); - builder.startArray(); - for (ElasticsearchException rootCause : rootCauses) { - builder.startObject(); - rootCause.toXContent(builder, new ToXContent.DelegatingMapParams( - Collections.singletonMap(ElasticsearchException.REST_EXCEPTION_SKIP_CAUSE, "true"), params)); - builder.endObject(); - } - builder.endArray(); - ElasticsearchException.toXContent(builder, params, e); - builder.endObject(); - } - - interface FunctionThatThrowsIOException { - R apply(T t) throws IOException; - } - // lower cases and adds underscores to transitions in a name private static String toUnderscoreCase(String value) { StringBuilder sb = new StringBuilder(); @@ -832,4 +1132,5 @@ private static String toUnderscoreCase(String value) { } return sb.toString(); } + } diff --git a/core/src/main/java/org/elasticsearch/ElasticsearchParseException.java b/core/src/main/java/org/elasticsearch/ElasticsearchParseException.java index 1358ef54d9d89..1711e9a3aaf91 100644 --- a/core/src/main/java/org/elasticsearch/ElasticsearchParseException.java +++ b/core/src/main/java/org/elasticsearch/ElasticsearchParseException.java @@ -25,7 +25,7 @@ import java.io.IOException; /** - * + * Unchecked exception that is translated into a {@code 400 BAD REQUEST} error when it bubbles out over HTTP. */ public class ElasticsearchParseException extends ElasticsearchException { diff --git a/core/src/main/java/org/elasticsearch/ExceptionsHelper.java b/core/src/main/java/org/elasticsearch/ExceptionsHelper.java index c30662a093479..e89e04a301da1 100644 --- a/core/src/main/java/org/elasticsearch/ExceptionsHelper.java +++ b/core/src/main/java/org/elasticsearch/ExceptionsHelper.java @@ -214,7 +214,7 @@ static class GroupBy { final String index; final Class causeType; - public GroupBy(Throwable t) { + GroupBy(Throwable t) { if (t instanceof ElasticsearchException) { final Index index = ((ElasticsearchException) t).getIndex(); if (index != null) { diff --git a/core/src/main/java/org/elasticsearch/ResourceAlreadyExistsException.java b/core/src/main/java/org/elasticsearch/ResourceAlreadyExistsException.java new file mode 100644 index 0000000000000..8ab5cb433f41d --- /dev/null +++ b/core/src/main/java/org/elasticsearch/ResourceAlreadyExistsException.java @@ -0,0 +1,57 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch; + +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.index.Index; +import org.elasticsearch.rest.RestStatus; + +import java.io.IOException; + +public class ResourceAlreadyExistsException extends ElasticsearchException { + + public ResourceAlreadyExistsException(Index index) { + this("index {} already exists", index.toString()); + setIndex(index); + } + + public ResourceAlreadyExistsException(String msg, Object... args) { + super(msg, args); + } + + public ResourceAlreadyExistsException(StreamInput in) throws IOException{ + super(in); + } + + @Override + public RestStatus status() { + return RestStatus.BAD_REQUEST; + } + + @Override + protected String getExceptionName() { + if (getIndex() != null) { + // This is only for BWC and is removed in 6.0.0 + return "index_already_exists_exception"; + } else { + return super.getExceptionName(); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/Version.java b/core/src/main/java/org/elasticsearch/Version.java index 3b4282c9b5624..5c4295a9ff080 100644 --- a/core/src/main/java/org/elasticsearch/Version.java +++ b/core/src/main/java/org/elasticsearch/Version.java @@ -29,9 +29,7 @@ import java.io.IOException; -/** - */ -public class Version { +public class Version implements Comparable { /* * The logic for ID is: XXYYZZAA, where XX is major version, YY is minor version, ZZ is revision, and AA is alpha/beta/rc indicator AA * values below 25 are for alpha builder (since 5.0), and above 25 and below 50 are beta builds, and below 99 are RC builds, with 99 @@ -75,6 +73,18 @@ public class Version { public static final Version V_2_3_5 = new Version(V_2_3_5_ID, org.apache.lucene.util.Version.LUCENE_5_5_0); public static final int V_2_4_0_ID = 2040099; public static final Version V_2_4_0 = new Version(V_2_4_0_ID, org.apache.lucene.util.Version.LUCENE_5_5_2); + public static final int V_2_4_1_ID = 2040199; + public static final Version V_2_4_1 = new Version(V_2_4_1_ID, org.apache.lucene.util.Version.LUCENE_5_5_2); + public static final int V_2_4_2_ID = 2040299; + public static final Version V_2_4_2 = new Version(V_2_4_2_ID, org.apache.lucene.util.Version.LUCENE_5_5_2); + public static final int V_2_4_3_ID = 2040399; + public static final Version V_2_4_3 = new Version(V_2_4_3_ID, org.apache.lucene.util.Version.LUCENE_5_5_2); + public static final int V_2_4_4_ID = 2040499; + public static final Version V_2_4_4 = new Version(V_2_4_4_ID, org.apache.lucene.util.Version.LUCENE_5_5_2); + public static final int V_2_4_5_ID = 2040599; + public static final Version V_2_4_5 = new Version(V_2_4_5_ID, org.apache.lucene.util.Version.LUCENE_5_5_4); + public static final int V_2_4_6_ID = 2040699; + public static final Version V_2_4_6 = new Version(V_2_4_6_ID, org.apache.lucene.util.Version.LUCENE_5_5_4); public static final int V_5_0_0_alpha1_ID = 5000001; public static final Version V_5_0_0_alpha1 = new Version(V_5_0_0_alpha1_ID, org.apache.lucene.util.Version.LUCENE_6_0_0); public static final int V_5_0_0_alpha2_ID = 5000002; @@ -85,9 +95,64 @@ public class Version { public static 
final Version V_5_0_0_alpha4 = new Version(V_5_0_0_alpha4_ID, org.apache.lucene.util.Version.LUCENE_6_1_0); public static final int V_5_0_0_alpha5_ID = 5000005; public static final Version V_5_0_0_alpha5 = new Version(V_5_0_0_alpha5_ID, org.apache.lucene.util.Version.LUCENE_6_1_0); - public static final int V_5_0_0_alpha6_ID = 5000006; - public static final Version V_5_0_0_alpha6 = new Version(V_5_0_0_alpha6_ID, org.apache.lucene.util.Version.LUCENE_6_2_0); - public static final Version CURRENT = V_5_0_0_alpha6; + public static final int V_5_0_0_beta1_ID = 5000026; + public static final Version V_5_0_0_beta1 = new Version(V_5_0_0_beta1_ID, org.apache.lucene.util.Version.LUCENE_6_2_0); + public static final int V_5_0_0_rc1_ID = 5000051; + public static final Version V_5_0_0_rc1 = new Version(V_5_0_0_rc1_ID, org.apache.lucene.util.Version.LUCENE_6_2_0); + public static final int V_5_0_0_ID = 5000099; + public static final Version V_5_0_0 = new Version(V_5_0_0_ID, org.apache.lucene.util.Version.LUCENE_6_2_0); + public static final int V_5_0_1_ID = 5000199; + public static final Version V_5_0_1 = new Version(V_5_0_1_ID, org.apache.lucene.util.Version.LUCENE_6_2_1); + public static final int V_5_0_2_ID = 5000299; + public static final Version V_5_0_2 = new Version(V_5_0_2_ID, org.apache.lucene.util.Version.LUCENE_6_2_1); + // no version constant for 5.1.0 due to inadvertent release + public static final int V_5_1_1_ID = 5010199; + public static final Version V_5_1_1 = new Version(V_5_1_1_ID, org.apache.lucene.util.Version.LUCENE_6_3_0); + public static final int V_5_1_2_ID = 5010299; + public static final Version V_5_1_2 = new Version(V_5_1_2_ID, org.apache.lucene.util.Version.LUCENE_6_3_0); + public static final int V_5_2_0_ID = 5020099; + public static final Version V_5_2_0 = new Version(V_5_2_0_ID, org.apache.lucene.util.Version.LUCENE_6_4_0); + public static final int V_5_2_1_ID = 5020199; + public static final Version V_5_2_1 = new Version(V_5_2_1_ID, org.apache.lucene.util.Version.LUCENE_6_4_1); + public static final int V_5_2_2_ID = 5020299; + public static final Version V_5_2_2 = new Version(V_5_2_2_ID, org.apache.lucene.util.Version.LUCENE_6_4_1); + public static final int V_5_3_0_ID = 5030099; + public static final Version V_5_3_0 = new Version(V_5_3_0_ID, org.apache.lucene.util.Version.LUCENE_6_4_1); + public static final int V_5_3_1_ID = 5030199; + public static final Version V_5_3_1 = new Version(V_5_3_1_ID, org.apache.lucene.util.Version.LUCENE_6_4_2); + public static final int V_5_3_2_ID = 5030299; + public static final Version V_5_3_2 = new Version(V_5_3_2_ID, org.apache.lucene.util.Version.LUCENE_6_4_2); + public static final int V_5_3_3_ID = 5030399; + public static final Version V_5_3_3 = new Version(V_5_3_3_ID, org.apache.lucene.util.Version.LUCENE_6_4_2); + public static final int V_5_4_0_ID = 5040099; + public static final Version V_5_4_0 = new Version(V_5_4_0_ID, org.apache.lucene.util.Version.LUCENE_6_5_0); + public static final int V_5_4_1_ID = 5040199; + public static final Version V_5_4_1 = new Version(V_5_4_1_ID, org.apache.lucene.util.Version.LUCENE_6_5_1); + public static final int V_5_4_2_ID = 5040299; + public static final Version V_5_4_2 = new Version(V_5_4_2_ID, org.apache.lucene.util.Version.LUCENE_6_5_1); + public static final int V_5_4_3_ID = 5040399; + public static final Version V_5_4_3 = new Version(V_5_4_3_ID, org.apache.lucene.util.Version.LUCENE_6_5_1); + public static final int V_5_5_0_ID = 5050099; + public static final Version V_5_5_0 = new 
Version(V_5_5_0_ID, org.apache.lucene.util.Version.LUCENE_6_6_0); + public static final int V_5_5_1_ID = 5050199; + public static final Version V_5_5_1 = new Version(V_5_5_1_ID, org.apache.lucene.util.Version.LUCENE_6_6_0); + public static final int V_5_5_2_ID = 5050299; + public static final Version V_5_5_2 = new Version(V_5_5_2_ID, org.apache.lucene.util.Version.LUCENE_6_6_0); + public static final int V_5_5_3_ID = 5050399; + public static final Version V_5_5_3 = new Version(V_5_5_3_ID, org.apache.lucene.util.Version.LUCENE_6_6_0); + public static final int V_5_6_0_ID = 5060099; + public static final Version V_5_6_0 = new Version(V_5_6_0_ID, org.apache.lucene.util.Version.LUCENE_6_6_0); + public static final int V_5_6_1_ID = 5060199; + public static final Version V_5_6_1 = new Version(V_5_6_1_ID, org.apache.lucene.util.Version.LUCENE_6_6_1); + public static final int V_5_6_2_ID = 5060299; + public static final Version V_5_6_2 = new Version(V_5_6_2_ID, org.apache.lucene.util.Version.LUCENE_6_6_1); + public static final int V_5_6_3_ID = 5060399; + public static final Version V_5_6_3 = new Version(V_5_6_3_ID, org.apache.lucene.util.Version.LUCENE_6_6_1); + public static final int V_5_6_4_ID_UNRELEASED = 5060499; + public static final Version V_5_6_4_UNRELEASED = new Version(V_5_6_4_ID_UNRELEASED, org.apache.lucene.util.Version.LUCENE_6_6_1); + public static final Version CURRENT = V_5_6_4_UNRELEASED; + + // unreleased versions must be added to the above list with the suffix _UNRELEASED (with the exception of CURRENT) static { assert CURRENT.luceneVersion.equals(org.apache.lucene.util.Version.LATEST) : "Version must be upgraded to [" @@ -100,8 +165,60 @@ public static Version readVersion(StreamInput in) throws IOException { public static Version fromId(int id) { switch (id) { - case V_5_0_0_alpha6_ID: - return V_5_0_0_alpha6; + case V_5_6_4_ID_UNRELEASED: + return V_5_6_4_UNRELEASED; + case V_5_6_3_ID: + return V_5_6_3; + case V_5_6_2_ID: + return V_5_6_2; + case V_5_6_1_ID: + return V_5_6_1; + case V_5_6_0_ID: + return V_5_6_0; + case V_5_5_3_ID: + return V_5_5_3; + case V_5_5_2_ID: + return V_5_5_2; + case V_5_5_1_ID: + return V_5_5_1; + case V_5_5_0_ID: + return V_5_5_0; + case V_5_4_3_ID: + return V_5_4_3; + case V_5_4_2_ID: + return V_5_4_2; + case V_5_4_1_ID: + return V_5_4_1; + case V_5_4_0_ID: + return V_5_4_0; + case V_5_3_3_ID: + return V_5_3_3; + case V_5_3_2_ID: + return V_5_3_2; + case V_5_3_1_ID: + return V_5_3_1; + case V_5_3_0_ID: + return V_5_3_0; + case V_5_2_2_ID: + return V_5_2_2; + case V_5_2_1_ID: + return V_5_2_1; + case V_5_2_0_ID: + return V_5_2_0; + case V_5_1_2_ID: + return V_5_1_2; + case V_5_1_1_ID: + return V_5_1_1; + case V_5_0_2_ID: + return V_5_0_2; + case V_5_0_1_ID: + return V_5_0_1; + case V_5_0_0_ID: + return V_5_0_0; + case V_5_0_0_rc1_ID: + return V_5_0_0_rc1; + case V_5_0_0_beta1_ID: + return V_5_0_0_beta1; case V_5_0_0_alpha5_ID: return V_5_0_0_alpha5; case V_5_0_0_alpha4_ID: @@ -112,6 +229,18 @@ public static Version fromId(int id) { return V_5_0_0_alpha2; case V_5_0_0_alpha1_ID: return V_5_0_0_alpha1; + case V_2_4_6_ID: + return V_2_4_6; + case V_2_4_5_ID: + return V_2_4_5; + case V_2_4_4_ID: + return V_2_4_4; + case V_2_4_3_ID: + return V_2_4_3; + case V_2_4_2_ID: + return V_2_4_2; + case V_2_4_1_ID: + return V_2_4_1; case V_2_4_0_ID: return V_2_4_0; case V_2_3_5_ID: @@ -176,12 +305,26 @@ public static void writeVersion(Version version, StreamOutput out) throws IOExce } /** - * Returns the smallest version between the 2. 
+ * Returns the minimum version between the 2. */ - public static Version smallest(Version version1, Version version2) { + public static Version min(Version version1, Version version2) { return version1.id < version2.id ? version1 : version2; } + /** + * Returns the minimum version between the 2. + * @deprecated use {@link #min(Version, Version)} instead + */ + @Deprecated + public static Version smallest(Version version1, Version version2) { + return min(version1, version2); + } + + /** + * Returns the maximum version between the 2 + */ + public static Version max(Version version1, Version version2) { return version1.id > version2.id ? version1 : version2; } + /** * Returns the version given its string representation, current version if the argument is null or empty */ @@ -193,7 +336,7 @@ public static Version fromString(String version) { if (snapshot = version.endsWith("-SNAPSHOT")) { version = version.substring(0, version.length() - 9); } - String[] parts = version.split("\\.|\\-"); + String[] parts = version.split("[.-]"); if (parts.length < 3 || parts.length > 4) { throw new IllegalArgumentException( "the version needs to contain major, minor, and revision, and optionally the build: " + version); @@ -267,6 +410,11 @@ public boolean onOrBefore(Version version) { return version.id >= id; } + @Override + public int compareTo(Version other) { + return Integer.compare(this.id, other.id); + } + /** * Returns the minimum compatible version based on the current * version. Ie a node needs to have at least the return version in order @@ -275,7 +423,52 @@ public boolean onOrBefore(Version version) { * is a beta or RC release then the version itself is returned. */ public Version minimumCompatibilityVersion() { - return Version.smallest(this, fromId(major * 1000000 + 99)); + final int bwcMajor; + final int bwcMinor; + if (major >= 6) { + bwcMajor = major - 1; + bwcMinor = CURRENT.minor; + } else { + bwcMajor = major; + bwcMinor = 0; + } + return Version.min(this, fromId(bwcMajor * 1000000 + bwcMinor * 10000 + 99)); + } + + /** + * Returns the minimum created index version that this version supports. Indices created with lower versions + * can't be used with this version. + */ + public Version minimumIndexCompatibilityVersion() { + final int bwcMajor; + if (major == 5) { + bwcMajor = 2; // we jumped from 2 to 5 + } else { + bwcMajor = major - 1; + } + final int bwcMinor; + if (bwcMajor <= 2) { + // we do support beta1 prior to 5.x + // this allows clusters that have upgraded to 5.0 with an index created in 2.0.0.beta1 to go to 5.2 etc. + // otherwise the upgrade will fail and that is really not what we want. from 5 onwards we are supporting only GA + // releases + bwcMinor = 01; + } else { + bwcMinor = 99; + } + + return Version.min(this, fromId(bwcMajor * 1000000 + bwcMinor)); + } + + /** + * Returns true iff both version are compatible. 
Otherwise false + */ + public boolean isCompatible(Version version) { + boolean compatible = onOrAfter(version.minimumCompatibilityVersion()) + && version.onOrAfter(minimumCompatibilityVersion()); + + assert compatible == false || Math.max(major, version.major) - Math.min(major, version.major) <= 1; + return compatible; } @SuppressForbidden(reason = "System.out.*") @@ -348,4 +541,9 @@ public boolean isAlpha() { public boolean isRC() { return build > 50 && build < 99; } + + public boolean isRelease() { + return build == 99; + } + } diff --git a/core/src/main/java/org/elasticsearch/action/ActionListener.java b/core/src/main/java/org/elasticsearch/action/ActionListener.java index 6ce8592879e08..f9fafa9f95a2e 100644 --- a/core/src/main/java/org/elasticsearch/action/ActionListener.java +++ b/core/src/main/java/org/elasticsearch/action/ActionListener.java @@ -19,6 +19,11 @@ package org.elasticsearch.action; +import org.elasticsearch.ExceptionsHelper; +import org.elasticsearch.common.CheckedConsumer; + +import java.util.ArrayList; +import java.util.List; import java.util.function.Consumer; /** @@ -45,7 +50,8 @@ public interface ActionListener { * @param the type of the response * @return a listener that listens for responses and invokes the consumer when received */ - static ActionListener wrap(Consumer onResponse, Consumer onFailure) { + static ActionListener wrap(CheckedConsumer onResponse, + Consumer onFailure) { return new ActionListener() { @Override public void onResponse(Response response) { @@ -62,4 +68,41 @@ public void onFailure(Exception e) { } }; } + + /** + * Notifies every given listener with the response passed to {@link #onResponse(Object)}. If a listener itself throws an exception + * the exception is forwarded to {@link #onFailure(Exception)}. If in turn {@link #onFailure(Exception)} fails all remaining + * listeners will be processed and the caught exception will be re-thrown. + */ + static void onResponse(Iterable> listeners, Response response) { + List exceptionList = new ArrayList<>(); + for (ActionListener listener : listeners) { + try { + listener.onResponse(response); + } catch (Exception ex) { + try { + listener.onFailure(ex); + } catch (Exception ex1) { + exceptionList.add(ex1); + } + } + } + ExceptionsHelper.maybeThrowRuntimeAndSuppress(exceptionList); + } + + /** + * Notifies every given listener with the failure passed to {@link #onFailure(Exception)}. If a listener itself throws an exception + * all remaining listeners will be processed and the caught exception will be re-thrown. 
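// A minimal usage sketch for the CheckedConsumer-based wrap above; the client, searchRequest, processHits and
// logger used here are assumptions for illustration. Any checked exception thrown by the response consumer is
// caught by wrap and routed to the failure handler instead of escaping the calling thread.
ActionListener<SearchResponse> listener = ActionListener.wrap(
        response -> processHits(response.getHits()),   // may throw, e.g. an IOException
        e -> logger.warn("search failed", e));
client.search(searchRequest, listener);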
+ */ + static void onFailure(Iterable> listeners, Exception failure) { + List exceptionList = new ArrayList<>(); + for (ActionListener listener : listeners) { + try { + listener.onFailure(failure); + } catch (Exception ex) { + exceptionList.add(ex); + } + } + ExceptionsHelper.maybeThrowRuntimeAndSuppress(exceptionList); + } } diff --git a/core/src/main/java/org/elasticsearch/action/ActionModule.java b/core/src/main/java/org/elasticsearch/action/ActionModule.java index 1be1ddda9a470..8aa52de08569d 100644 --- a/core/src/main/java/org/elasticsearch/action/ActionModule.java +++ b/core/src/main/java/org/elasticsearch/action/ActionModule.java @@ -19,13 +19,7 @@ package org.elasticsearch.action; -import java.util.ArrayList; -import java.util.HashSet; -import java.util.List; -import java.util.Map; -import java.util.Set; -import java.util.stream.Collectors; - +import org.apache.logging.log4j.Logger; import org.elasticsearch.action.admin.cluster.allocation.ClusterAllocationExplainAction; import org.elasticsearch.action.admin.cluster.allocation.TransportClusterAllocationExplainAction; import org.elasticsearch.action.admin.cluster.health.ClusterHealthAction; @@ -43,6 +37,8 @@ import org.elasticsearch.action.admin.cluster.node.tasks.get.TransportGetTaskAction; import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksAction; import org.elasticsearch.action.admin.cluster.node.tasks.list.TransportListTasksAction; +import org.elasticsearch.action.admin.cluster.remote.RemoteInfoAction; +import org.elasticsearch.action.admin.cluster.remote.TransportRemoteInfoAction; import org.elasticsearch.action.admin.cluster.repositories.delete.DeleteRepositoryAction; import org.elasticsearch.action.admin.cluster.repositories.delete.TransportDeleteRepositoryAction; import org.elasticsearch.action.admin.cluster.repositories.get.GetRepositoriesAction; @@ -155,6 +151,9 @@ import org.elasticsearch.action.delete.TransportDeleteAction; import org.elasticsearch.action.explain.ExplainAction; import org.elasticsearch.action.explain.TransportExplainAction; +import org.elasticsearch.action.fieldcaps.FieldCapabilitiesAction; +import org.elasticsearch.action.fieldcaps.TransportFieldCapabilitiesAction; +import org.elasticsearch.action.fieldcaps.TransportFieldCapabilitiesIndexAction; import org.elasticsearch.action.fieldstats.FieldStatsAction; import org.elasticsearch.action.fieldstats.TransportFieldStatsAction; import org.elasticsearch.action.get.GetAction; @@ -168,8 +167,6 @@ import org.elasticsearch.action.ingest.DeletePipelineTransportAction; import org.elasticsearch.action.ingest.GetPipelineAction; import org.elasticsearch.action.ingest.GetPipelineTransportAction; -import org.elasticsearch.action.ingest.IngestActionFilter; -import org.elasticsearch.action.ingest.IngestProxyActionFilter; import org.elasticsearch.action.ingest.PutPipelineAction; import org.elasticsearch.action.ingest.PutPipelineTransportAction; import org.elasticsearch.action.ingest.SimulatePipelineAction; @@ -196,18 +193,24 @@ import org.elasticsearch.action.termvectors.TransportTermVectorsAction; import org.elasticsearch.action.update.TransportUpdateAction; import org.elasticsearch.action.update.UpdateAction; +import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.common.NamedRegistry; import org.elasticsearch.common.inject.AbstractModule; import org.elasticsearch.common.inject.multibindings.MapBinder; import 
org.elasticsearch.common.inject.multibindings.Multibinder; -import org.elasticsearch.common.network.NetworkModule; +import org.elasticsearch.common.logging.ESLoggerFactory; import org.elasticsearch.common.settings.ClusterSettings; +import org.elasticsearch.common.settings.IndexScopedSettings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.settings.SettingsFilter; +import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.plugins.ActionPlugin; import org.elasticsearch.plugins.ActionPlugin.ActionHandler; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestHandler; +import org.elasticsearch.rest.action.RestFieldCapabilitiesAction; import org.elasticsearch.rest.action.RestFieldStatsAction; import org.elasticsearch.rest.action.RestMainAction; import org.elasticsearch.rest.action.admin.cluster.RestCancelTasksAction; @@ -234,10 +237,10 @@ import org.elasticsearch.rest.action.admin.cluster.RestPendingClusterTasksAction; import org.elasticsearch.rest.action.admin.cluster.RestPutRepositoryAction; import org.elasticsearch.rest.action.admin.cluster.RestPutStoredScriptAction; +import org.elasticsearch.rest.action.admin.cluster.RestRemoteClusterInfoAction; import org.elasticsearch.rest.action.admin.cluster.RestRestoreSnapshotAction; import org.elasticsearch.rest.action.admin.cluster.RestSnapshotsStatusAction; import org.elasticsearch.rest.action.admin.cluster.RestVerifyRepositoryAction; -import org.elasticsearch.rest.action.admin.indices.RestAliasesExistAction; import org.elasticsearch.rest.action.admin.indices.RestAnalyzeAction; import org.elasticsearch.rest.action.admin.indices.RestClearIndicesCacheAction; import org.elasticsearch.rest.action.admin.indices.RestCloseIndexAction; @@ -252,11 +255,9 @@ import org.elasticsearch.rest.action.admin.indices.RestGetIndicesAction; import org.elasticsearch.rest.action.admin.indices.RestGetMappingAction; import org.elasticsearch.rest.action.admin.indices.RestGetSettingsAction; -import org.elasticsearch.rest.action.admin.indices.RestHeadIndexTemplateAction; import org.elasticsearch.rest.action.admin.indices.RestIndexDeleteAliasesAction; import org.elasticsearch.rest.action.admin.indices.RestIndexPutAliasAction; import org.elasticsearch.rest.action.admin.indices.RestIndicesAliasesAction; -import org.elasticsearch.rest.action.admin.indices.RestIndicesExistsAction; import org.elasticsearch.rest.action.admin.indices.RestIndicesSegmentsAction; import org.elasticsearch.rest.action.admin.indices.RestIndicesShardStoresAction; import org.elasticsearch.rest.action.admin.indices.RestIndicesStatsAction; @@ -268,7 +269,6 @@ import org.elasticsearch.rest.action.admin.indices.RestRolloverIndexAction; import org.elasticsearch.rest.action.admin.indices.RestShrinkIndexAction; import org.elasticsearch.rest.action.admin.indices.RestSyncedFlushAction; -import org.elasticsearch.rest.action.admin.indices.RestTypesExistsAction; import org.elasticsearch.rest.action.admin.indices.RestUpdateSettingsAction; import org.elasticsearch.rest.action.admin.indices.RestUpgradeAction; import org.elasticsearch.rest.action.admin.indices.RestValidateQueryAction; @@ -288,12 +288,12 @@ import org.elasticsearch.rest.action.cat.RestShardsAction; import org.elasticsearch.rest.action.cat.RestSnapshotAction; import org.elasticsearch.rest.action.cat.RestTasksAction; +import org.elasticsearch.rest.action.cat.RestTemplatesAction; import org.elasticsearch.rest.action.cat.RestThreadPoolAction; import 
org.elasticsearch.rest.action.document.RestBulkAction; import org.elasticsearch.rest.action.document.RestDeleteAction; import org.elasticsearch.rest.action.document.RestGetAction; import org.elasticsearch.rest.action.document.RestGetSourceAction; -import org.elasticsearch.rest.action.document.RestHeadAction; import org.elasticsearch.rest.action.document.RestIndexAction; import org.elasticsearch.rest.action.document.RestMultiGetAction; import org.elasticsearch.rest.action.document.RestMultiTermVectorsAction; @@ -309,6 +309,16 @@ import org.elasticsearch.rest.action.search.RestSearchAction; import org.elasticsearch.rest.action.search.RestSearchScrollAction; import org.elasticsearch.rest.action.search.RestSuggestAction; +import org.elasticsearch.threadpool.ThreadPool; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.function.Consumer; +import java.util.function.Supplier; +import java.util.function.UnaryOperator; +import java.util.stream.Collectors; import static java.util.Collections.unmodifiableList; import static java.util.Collections.unmodifiableMap; @@ -318,8 +328,14 @@ */ public class ActionModule extends AbstractModule { + private static final Logger logger = ESLoggerFactory.getLogger(ActionModule.class); + private final boolean transportClient; private final Settings settings; + private final IndexNameExpressionResolver indexNameExpressionResolver; + private final IndexScopedSettings indexScopedSettings; + private final ClusterSettings clusterSettings; + private final SettingsFilter settingsFilter; private final List actionPlugins; private final Map> actions; private final List> actionFilters; @@ -327,19 +343,41 @@ public class ActionModule extends AbstractModule { private final DestructiveOperations destructiveOperations; private final RestController restController; - public ActionModule(boolean ingestEnabled, boolean transportClient, Settings settings, IndexNameExpressionResolver resolver, - ClusterSettings clusterSettings, List actionPlugins) { + public ActionModule(boolean transportClient, Settings settings, IndexNameExpressionResolver indexNameExpressionResolver, + IndexScopedSettings indexScopedSettings, ClusterSettings clusterSettings, SettingsFilter settingsFilter, + ThreadPool threadPool, List actionPlugins, NodeClient nodeClient, + CircuitBreakerService circuitBreakerService) { this.transportClient = transportClient; this.settings = settings; + this.indexNameExpressionResolver = indexNameExpressionResolver; + this.indexScopedSettings = indexScopedSettings; + this.clusterSettings = clusterSettings; + this.settingsFilter = settingsFilter; this.actionPlugins = actionPlugins; actions = setupActions(actionPlugins); - actionFilters = setupActionFilters(actionPlugins, ingestEnabled); - autoCreateIndex = transportClient ? null : new AutoCreateIndex(settings, clusterSettings, resolver); + actionFilters = setupActionFilters(actionPlugins); + autoCreateIndex = transportClient ? 
null : new AutoCreateIndex(settings, clusterSettings, indexNameExpressionResolver); destructiveOperations = new DestructiveOperations(settings, clusterSettings); Set headers = actionPlugins.stream().flatMap(p -> p.getRestHeaders().stream()).collect(Collectors.toSet()); - restController = new RestController(settings, headers); + UnaryOperator restWrapper = null; + for (ActionPlugin plugin : actionPlugins) { + UnaryOperator newRestWrapper = plugin.getRestHandlerWrapper(threadPool.getThreadContext()); + if (newRestWrapper != null) { + logger.debug("Using REST wrapper from plugin " + plugin.getClass().getName()); + if (restWrapper != null) { + throw new IllegalArgumentException("Cannot have more than one plugin implementing a REST wrapper"); + } + restWrapper = newRestWrapper; + } + } + if (transportClient) { + restController = null; + } else { + restController = new RestController(settings, headers, restWrapper, nodeClient, circuitBreakerService); + } } + public Map> getActions() { return actions; } @@ -347,7 +385,7 @@ public ActionModule(boolean ingestEnabled, boolean transportClient, Settings set static Map> setupActions(List actionPlugins) { // Subclass NamedRegistry for easy registration class ActionRegistry extends NamedRegistry> { - public ActionRegistry() { + ActionRegistry() { super("action"); } @@ -355,7 +393,7 @@ public void register(ActionHandler handler) { register(handler.getAction().name(), handler); } - public , Response extends ActionResponse> void register( + public void register( GenericAction action, Class> transportAction, Class... supportTransportActions) { register(new ActionHandler<>(action, transportAction, supportTransportActions)); @@ -365,6 +403,7 @@ public , Response extends ActionResponse> actions.register(MainAction.INSTANCE, TransportMainAction.class); actions.register(NodesInfoAction.INSTANCE, TransportNodesInfoAction.class); + actions.register(RemoteInfoAction.INSTANCE, TransportRemoteInfoAction.class); actions.register(NodesStatsAction.INSTANCE, TransportNodesStatsAction.class); actions.register(NodesHotThreadsAction.INSTANCE, TransportNodesHotThreadsAction.class); actions.register(ListTasksAction.INSTANCE, TransportListTasksAction.class); @@ -448,6 +487,8 @@ public , Response extends ActionResponse> actions.register(DeleteStoredScriptAction.INSTANCE, TransportDeleteStoredScriptAction.class); actions.register(FieldStatsAction.INSTANCE, TransportFieldStatsAction.class); + actions.register(FieldCapabilitiesAction.INSTANCE, TransportFieldCapabilitiesAction.class, + TransportFieldCapabilitiesIndexAction.class); actions.register(PutPipelineAction.INSTANCE, PutPipelineTransportAction.class); actions.register(GetPipelineAction.INSTANCE, GetPipelineTransportAction.class); @@ -459,163 +500,146 @@ public , Response extends ActionResponse> return unmodifiableMap(actions.getRegistry()); } - private List> setupActionFilters(List actionPlugins, boolean ingestEnabled) { - List> filters = new ArrayList<>(); - if (transportClient == false) { - if (ingestEnabled) { - filters.add(IngestActionFilter.class); - } else { - filters.add(IngestProxyActionFilter.class); - } - } - - for (ActionPlugin plugin : actionPlugins) { - filters.addAll(plugin.getActionFilters()); - } - return unmodifiableList(filters); + private List> setupActionFilters(List actionPlugins) { + return unmodifiableList(actionPlugins.stream().flatMap(p -> p.getActionFilters().stream()).collect(Collectors.toList())); } - static Set> setupRestHandlers(List actionPlugins) { - Set> handlers = new HashSet<>(); - 
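
The new ActionModule constructor above allows at most one installed plugin to supply a REST handler wrapper. The following standalone sketch illustrates that selection rule under simplified assumptions: `PluginLike` and `HandlerLike` are hypothetical stand-ins, not Elasticsearch types.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.UnaryOperator;

// Minimal sketch of the "at most one wrapper" rule: each plugin may return a
// UnaryOperator that wraps a handler, or null if it does not wrap anything.
public class SingleWrapperSketch {
    interface HandlerLike { String handle(String request); }
    interface PluginLike { UnaryOperator<HandlerLike> getHandlerWrapper(); }

    static UnaryOperator<HandlerLike> selectWrapper(List<PluginLike> plugins) {
        UnaryOperator<HandlerLike> wrapper = null;
        for (PluginLike plugin : plugins) {
            UnaryOperator<HandlerLike> candidate = plugin.getHandlerWrapper();
            if (candidate != null) {
                if (wrapper != null) {
                    // mirrors the IllegalArgumentException thrown when two plugins try to wrap handlers
                    throw new IllegalArgumentException("Cannot have more than one plugin implementing a wrapper");
                }
                wrapper = candidate;
            }
        }
        // fall back to the identity wrapper when no plugin supplied one
        return wrapper == null ? UnaryOperator.identity() : wrapper;
    }

    public static void main(String[] args) {
        PluginLike noWrap = () -> null;
        PluginLike logging = () -> handler -> request -> "[logged] " + handler.handle(request);
        HandlerLike base = request -> "handled " + request;
        HandlerLike wrapped = selectWrapper(Arrays.asList(noWrap, logging)).apply(base);
        System.out.println(wrapped.handle("GET /"));   // prints: [logged] handled GET /
    }
}
```
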
registerRestHandler(handlers, RestMainAction.class); - registerRestHandler(handlers, RestNodesInfoAction.class); - registerRestHandler(handlers, RestNodesStatsAction.class); - registerRestHandler(handlers, RestNodesHotThreadsAction.class); - registerRestHandler(handlers, RestClusterAllocationExplainAction.class); - registerRestHandler(handlers, RestClusterStatsAction.class); - registerRestHandler(handlers, RestClusterStateAction.class); - registerRestHandler(handlers, RestClusterHealthAction.class); - registerRestHandler(handlers, RestClusterUpdateSettingsAction.class); - registerRestHandler(handlers, RestClusterGetSettingsAction.class); - registerRestHandler(handlers, RestClusterRerouteAction.class); - registerRestHandler(handlers, RestClusterSearchShardsAction.class); - registerRestHandler(handlers, RestPendingClusterTasksAction.class); - registerRestHandler(handlers, RestPutRepositoryAction.class); - registerRestHandler(handlers, RestGetRepositoriesAction.class); - registerRestHandler(handlers, RestDeleteRepositoryAction.class); - registerRestHandler(handlers, RestVerifyRepositoryAction.class); - registerRestHandler(handlers, RestGetSnapshotsAction.class); - registerRestHandler(handlers, RestCreateSnapshotAction.class); - registerRestHandler(handlers, RestRestoreSnapshotAction.class); - registerRestHandler(handlers, RestDeleteSnapshotAction.class); - registerRestHandler(handlers, RestSnapshotsStatusAction.class); - - registerRestHandler(handlers, RestIndicesExistsAction.class); - registerRestHandler(handlers, RestTypesExistsAction.class); - registerRestHandler(handlers, RestGetIndicesAction.class); - registerRestHandler(handlers, RestIndicesStatsAction.class); - registerRestHandler(handlers, RestIndicesSegmentsAction.class); - registerRestHandler(handlers, RestIndicesShardStoresAction.class); - registerRestHandler(handlers, RestGetAliasesAction.class); - registerRestHandler(handlers, RestAliasesExistAction.class); - registerRestHandler(handlers, RestIndexDeleteAliasesAction.class); - registerRestHandler(handlers, RestIndexPutAliasAction.class); - registerRestHandler(handlers, RestIndicesAliasesAction.class); - registerRestHandler(handlers, RestCreateIndexAction.class); - registerRestHandler(handlers, RestShrinkIndexAction.class); - registerRestHandler(handlers, RestRolloverIndexAction.class); - registerRestHandler(handlers, RestDeleteIndexAction.class); - registerRestHandler(handlers, RestCloseIndexAction.class); - registerRestHandler(handlers, RestOpenIndexAction.class); - - registerRestHandler(handlers, RestUpdateSettingsAction.class); - registerRestHandler(handlers, RestGetSettingsAction.class); - - registerRestHandler(handlers, RestAnalyzeAction.class); - registerRestHandler(handlers, RestGetIndexTemplateAction.class); - registerRestHandler(handlers, RestPutIndexTemplateAction.class); - registerRestHandler(handlers, RestDeleteIndexTemplateAction.class); - registerRestHandler(handlers, RestHeadIndexTemplateAction.class); - - registerRestHandler(handlers, RestPutMappingAction.class); - registerRestHandler(handlers, RestGetMappingAction.class); - registerRestHandler(handlers, RestGetFieldMappingAction.class); - - registerRestHandler(handlers, RestRefreshAction.class); - registerRestHandler(handlers, RestFlushAction.class); - registerRestHandler(handlers, RestSyncedFlushAction.class); - registerRestHandler(handlers, RestForceMergeAction.class); - registerRestHandler(handlers, RestUpgradeAction.class); - registerRestHandler(handlers, RestClearIndicesCacheAction.class); - - 
registerRestHandler(handlers, RestIndexAction.class); - registerRestHandler(handlers, RestGetAction.class); - registerRestHandler(handlers, RestGetSourceAction.class); - registerRestHandler(handlers, RestHeadAction.Document.class); - registerRestHandler(handlers, RestHeadAction.Source.class); - registerRestHandler(handlers, RestMultiGetAction.class); - registerRestHandler(handlers, RestDeleteAction.class); - registerRestHandler(handlers, org.elasticsearch.rest.action.document.RestCountAction.class); - registerRestHandler(handlers, RestSuggestAction.class); - registerRestHandler(handlers, RestTermVectorsAction.class); - registerRestHandler(handlers, RestMultiTermVectorsAction.class); - registerRestHandler(handlers, RestBulkAction.class); - registerRestHandler(handlers, RestUpdateAction.class); - - registerRestHandler(handlers, RestSearchAction.class); - registerRestHandler(handlers, RestSearchScrollAction.class); - registerRestHandler(handlers, RestClearScrollAction.class); - registerRestHandler(handlers, RestMultiSearchAction.class); - - registerRestHandler(handlers, RestValidateQueryAction.class); - - registerRestHandler(handlers, RestExplainAction.class); - - registerRestHandler(handlers, RestRecoveryAction.class); + public void initRestHandlers(Supplier nodesInCluster) { + List catActions = new ArrayList<>(); + Consumer registerHandler = a -> { + if (a instanceof AbstractCatAction) { + catActions.add((AbstractCatAction) a); + } + }; + registerHandler.accept(new RestMainAction(settings, restController)); + registerHandler.accept(new RestNodesInfoAction(settings, restController, settingsFilter)); + registerHandler.accept(new RestRemoteClusterInfoAction(settings, restController)); + registerHandler.accept(new RestNodesStatsAction(settings, restController)); + registerHandler.accept(new RestNodesHotThreadsAction(settings, restController)); + registerHandler.accept(new RestClusterAllocationExplainAction(settings, restController)); + registerHandler.accept(new RestClusterStatsAction(settings, restController)); + registerHandler.accept(new RestClusterStateAction(settings, restController, settingsFilter)); + registerHandler.accept(new RestClusterHealthAction(settings, restController)); + registerHandler.accept(new RestClusterUpdateSettingsAction(settings, restController)); + registerHandler.accept(new RestClusterGetSettingsAction(settings, restController, clusterSettings, settingsFilter)); + registerHandler.accept(new RestClusterRerouteAction(settings, restController, settingsFilter)); + registerHandler.accept(new RestClusterSearchShardsAction(settings, restController)); + registerHandler.accept(new RestPendingClusterTasksAction(settings, restController)); + registerHandler.accept(new RestPutRepositoryAction(settings, restController)); + registerHandler.accept(new RestGetRepositoriesAction(settings, restController, settingsFilter)); + registerHandler.accept(new RestDeleteRepositoryAction(settings, restController)); + registerHandler.accept(new RestVerifyRepositoryAction(settings, restController)); + registerHandler.accept(new RestGetSnapshotsAction(settings, restController)); + registerHandler.accept(new RestCreateSnapshotAction(settings, restController)); + registerHandler.accept(new RestRestoreSnapshotAction(settings, restController)); + registerHandler.accept(new RestDeleteSnapshotAction(settings, restController)); + registerHandler.accept(new RestSnapshotsStatusAction(settings, restController)); + + registerHandler.accept(new RestGetIndicesAction(settings, restController, 
indexScopedSettings, settingsFilter)); + registerHandler.accept(new RestIndicesStatsAction(settings, restController)); + registerHandler.accept(new RestIndicesSegmentsAction(settings, restController)); + registerHandler.accept(new RestIndicesShardStoresAction(settings, restController)); + registerHandler.accept(new RestGetAliasesAction(settings, restController)); + registerHandler.accept(new RestIndexDeleteAliasesAction(settings, restController)); + registerHandler.accept(new RestIndexPutAliasAction(settings, restController)); + registerHandler.accept(new RestIndicesAliasesAction(settings, restController)); + registerHandler.accept(new RestCreateIndexAction(settings, restController)); + registerHandler.accept(new RestShrinkIndexAction(settings, restController)); + registerHandler.accept(new RestRolloverIndexAction(settings, restController)); + registerHandler.accept(new RestDeleteIndexAction(settings, restController)); + registerHandler.accept(new RestCloseIndexAction(settings, restController)); + registerHandler.accept(new RestOpenIndexAction(settings, restController)); + + registerHandler.accept(new RestUpdateSettingsAction(settings, restController)); + registerHandler.accept(new RestGetSettingsAction(settings, restController, indexScopedSettings, settingsFilter)); + + registerHandler.accept(new RestAnalyzeAction(settings, restController)); + registerHandler.accept(new RestGetIndexTemplateAction(settings, restController)); + registerHandler.accept(new RestPutIndexTemplateAction(settings, restController)); + registerHandler.accept(new RestDeleteIndexTemplateAction(settings, restController)); + + registerHandler.accept(new RestPutMappingAction(settings, restController)); + registerHandler.accept(new RestGetMappingAction(settings, restController)); + registerHandler.accept(new RestGetFieldMappingAction(settings, restController)); + + registerHandler.accept(new RestRefreshAction(settings, restController)); + registerHandler.accept(new RestFlushAction(settings, restController)); + registerHandler.accept(new RestSyncedFlushAction(settings, restController)); + registerHandler.accept(new RestForceMergeAction(settings, restController)); + registerHandler.accept(new RestUpgradeAction(settings, restController)); + registerHandler.accept(new RestClearIndicesCacheAction(settings, restController)); + + registerHandler.accept(new RestIndexAction(settings, restController)); + registerHandler.accept(new RestGetAction(settings, restController)); + registerHandler.accept(new RestGetSourceAction(settings, restController)); + registerHandler.accept(new RestMultiGetAction(settings, restController)); + registerHandler.accept(new RestDeleteAction(settings, restController)); + registerHandler.accept(new org.elasticsearch.rest.action.document.RestCountAction(settings, restController)); + registerHandler.accept(new RestTermVectorsAction(settings, restController)); + registerHandler.accept(new RestSuggestAction(settings, restController)); + registerHandler.accept(new RestMultiTermVectorsAction(settings, restController)); + registerHandler.accept(new RestBulkAction(settings, restController)); + registerHandler.accept(new RestUpdateAction(settings, restController)); + + registerHandler.accept(new RestSearchAction(settings, restController)); + registerHandler.accept(new RestSearchScrollAction(settings, restController)); + registerHandler.accept(new RestClearScrollAction(settings, restController)); + registerHandler.accept(new RestMultiSearchAction(settings, restController)); + + registerHandler.accept(new 
RestValidateQueryAction(settings, restController)); + + registerHandler.accept(new RestExplainAction(settings, restController)); + + registerHandler.accept(new RestRecoveryAction(settings, restController)); // Scripts API - registerRestHandler(handlers, RestGetStoredScriptAction.class); - registerRestHandler(handlers, RestPutStoredScriptAction.class); - registerRestHandler(handlers, RestDeleteStoredScriptAction.class); + registerHandler.accept(new RestGetStoredScriptAction(settings, restController)); + registerHandler.accept(new RestPutStoredScriptAction(settings, restController)); + registerHandler.accept(new RestDeleteStoredScriptAction(settings, restController)); - registerRestHandler(handlers, RestFieldStatsAction.class); + registerHandler.accept(new RestFieldStatsAction(settings, restController)); + registerHandler.accept(new RestFieldCapabilitiesAction(settings, restController)); // Tasks API - registerRestHandler(handlers, RestListTasksAction.class); - registerRestHandler(handlers, RestGetTaskAction.class); - registerRestHandler(handlers, RestCancelTasksAction.class); + registerHandler.accept(new RestListTasksAction(settings, restController, nodesInCluster)); + registerHandler.accept(new RestGetTaskAction(settings, restController)); + registerHandler.accept(new RestCancelTasksAction(settings, restController, nodesInCluster)); // Ingest API - registerRestHandler(handlers, RestPutPipelineAction.class); - registerRestHandler(handlers, RestGetPipelineAction.class); - registerRestHandler(handlers, RestDeletePipelineAction.class); - registerRestHandler(handlers, RestSimulatePipelineAction.class); + registerHandler.accept(new RestPutPipelineAction(settings, restController)); + registerHandler.accept(new RestGetPipelineAction(settings, restController)); + registerHandler.accept(new RestDeletePipelineAction(settings, restController)); + registerHandler.accept(new RestSimulatePipelineAction(settings, restController)); // CAT API - registerRestHandler(handlers, RestCatAction.class); - registerRestHandler(handlers, RestAllocationAction.class); - registerRestHandler(handlers, RestShardsAction.class); - registerRestHandler(handlers, RestMasterAction.class); - registerRestHandler(handlers, RestNodesAction.class); - registerRestHandler(handlers, RestTasksAction.class); - registerRestHandler(handlers, RestIndicesAction.class); - registerRestHandler(handlers, RestSegmentsAction.class); + registerHandler.accept(new RestAllocationAction(settings, restController)); + registerHandler.accept(new RestShardsAction(settings, restController)); + registerHandler.accept(new RestMasterAction(settings, restController)); + registerHandler.accept(new RestNodesAction(settings, restController)); + registerHandler.accept(new RestTasksAction(settings, restController, nodesInCluster)); + registerHandler.accept(new RestIndicesAction(settings, restController, indexNameExpressionResolver)); + registerHandler.accept(new RestSegmentsAction(settings, restController)); // Fully qualified to prevent interference with rest.action.count.RestCountAction - registerRestHandler(handlers, org.elasticsearch.rest.action.cat.RestCountAction.class); + registerHandler.accept(new org.elasticsearch.rest.action.cat.RestCountAction(settings, restController)); // Fully qualified to prevent interference with rest.action.indices.RestRecoveryAction - registerRestHandler(handlers, org.elasticsearch.rest.action.cat.RestRecoveryAction.class); - registerRestHandler(handlers, RestHealthAction.class); - registerRestHandler(handlers, 
org.elasticsearch.rest.action.cat.RestPendingClusterTasksAction.class); - registerRestHandler(handlers, RestAliasAction.class); - registerRestHandler(handlers, RestThreadPoolAction.class); - registerRestHandler(handlers, RestPluginsAction.class); - registerRestHandler(handlers, RestFielddataAction.class); - registerRestHandler(handlers, RestNodeAttrsAction.class); - registerRestHandler(handlers, RestRepositoriesAction.class); - registerRestHandler(handlers, RestSnapshotAction.class); + registerHandler.accept(new org.elasticsearch.rest.action.cat.RestRecoveryAction(settings, restController)); + registerHandler.accept(new RestHealthAction(settings, restController)); + registerHandler.accept(new org.elasticsearch.rest.action.cat.RestPendingClusterTasksAction(settings, restController)); + registerHandler.accept(new RestAliasAction(settings, restController)); + registerHandler.accept(new RestThreadPoolAction(settings, restController)); + registerHandler.accept(new RestPluginsAction(settings, restController)); + registerHandler.accept(new RestFielddataAction(settings, restController)); + registerHandler.accept(new RestNodeAttrsAction(settings, restController)); + registerHandler.accept(new RestRepositoriesAction(settings, restController)); + registerHandler.accept(new RestSnapshotAction(settings, restController)); + registerHandler.accept(new RestTemplatesAction(settings, restController)); for (ActionPlugin plugin : actionPlugins) { - for (Class handler : plugin.getRestHandlers()) { - registerRestHandler(handlers, handler); + for (RestHandler handler : plugin.getRestHandlers(settings, restController, clusterSettings, indexScopedSettings, + settingsFilter, indexNameExpressionResolver, nodesInCluster)) { + registerHandler.accept(handler); } } - return handlers; - } - - private static void registerRestHandler(Set> handlers, Class handler) { - if (handlers.contains(handler)) { - throw new IllegalArgumentException("can't register the same [rest_handler] more than once for [" + handler.getName() + "]"); - } - handlers.add(handler); + registerHandler.accept(new RestCatAction(settings, restController, catActions)); } @Override @@ -644,23 +668,10 @@ protected void configure() { bind(supportAction).asEagerSingleton(); } } - - // Bind the RestController which is required (by Node) even if rest isn't enabled. 
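
The replacement `initRestHandlers` above switches from registering handler classes to registering handler instances, routing every instance through one `Consumer` so that cat actions can be collected for the catch-all `RestCatAction`. The sketch below shows that collection pattern in isolation; `Handler` and `CatHandler` are hypothetical stand-ins, and unlike the real change (where each `Rest*Action` registers itself with the `RestController` in its constructor) the consumer here also tracks the full handler list for illustration.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

// One Consumer sees every handler instance and keeps a side list of the "cat"
// handlers so a catch-all cat action could be built from them at the end.
public class RegistrationSketch {
    static class Handler { final String path; Handler(String path) { this.path = path; } }
    static class CatHandler extends Handler { CatHandler(String path) { super(path); } }

    public static void main(String[] args) {
        List<Handler> registered = new ArrayList<>();
        List<CatHandler> catHandlers = new ArrayList<>();

        Consumer<Handler> register = handler -> {
            if (handler instanceof CatHandler) {
                catHandlers.add((CatHandler) handler);   // remember cat handlers separately
            }
            registered.add(handler);                     // every handler is tracked normally
        };

        for (Handler handler : Arrays.asList(
                new Handler("/_search"), new CatHandler("/_cat/indices"), new CatHandler("/_cat/nodes"))) {
            register.accept(handler);
        }

        System.out.println(registered.size() + " handlers, " + catHandlers.size() + " of them cat handlers");
    }
}
```
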
- bind(RestController.class).toInstance(restController); - - // Setup the RestHandlers - if (NetworkModule.HTTP_ENABLED.get(settings)) { - Multibinder restHandlers = Multibinder.newSetBinder(binder(), RestHandler.class); - Multibinder catHandlers = Multibinder.newSetBinder(binder(), AbstractCatAction.class); - for (Class handler : setupRestHandlers(actionPlugins)) { - bind(handler).asEagerSingleton(); - if (AbstractCatAction.class.isAssignableFrom(handler)) { - catHandlers.addBinding().to(handler.asSubclass(AbstractCatAction.class)); - } else { - restHandlers.addBinding().to(handler); - } - } - } } } + + public RestController getRestController() { + return restController; + } } diff --git a/core/src/main/java/org/elasticsearch/action/ActionRequest.java b/core/src/main/java/org/elasticsearch/action/ActionRequest.java index 970afa413cc49..379bbc8481b17 100644 --- a/core/src/main/java/org/elasticsearch/action/ActionRequest.java +++ b/core/src/main/java/org/elasticsearch/action/ActionRequest.java @@ -28,7 +28,7 @@ /** * */ -public abstract class ActionRequest> extends TransportRequest { +public abstract class ActionRequest extends TransportRequest { public ActionRequest() { super(); diff --git a/core/src/main/java/org/elasticsearch/action/CompositeIndicesRequest.java b/core/src/main/java/org/elasticsearch/action/CompositeIndicesRequest.java index 126874d0c3ce7..9c661e93be8fe 100644 --- a/core/src/main/java/org/elasticsearch/action/CompositeIndicesRequest.java +++ b/core/src/main/java/org/elasticsearch/action/CompositeIndicesRequest.java @@ -19,16 +19,11 @@ package org.elasticsearch.action; -import java.util.List; - /** - * Needs to be implemented by all {@link org.elasticsearch.action.ActionRequest} subclasses that are composed of - * multiple subrequests which relate to one or more indices. Allows to retrieve those subrequests. + * Marker interface that needs to be implemented by all {@link org.elasticsearch.action.ActionRequest} subclasses that are composed of + * multiple sub-requests which relate to one or more indices. A composite request is executed by its own transport action class + * (e.g. {@link org.elasticsearch.action.search.TransportMultiSearchAction}), which goes through all sub-requests and delegates their + * execution to the appropriate transport action (e.g. {@link org.elasticsearch.action.search.TransportSearchAction}) for each single item. */ public interface CompositeIndicesRequest { - - /** - * Returns the subrequests that a composite request is composed of - */ - List subRequests(); } diff --git a/core/src/main/java/org/elasticsearch/action/DocWriteRequest.java b/core/src/main/java/org/elasticsearch/action/DocWriteRequest.java new file mode 100644 index 0000000000000..09db7089ff629 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/DocWriteRequest.java @@ -0,0 +1,203 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. 
See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action; + +import org.elasticsearch.action.delete.DeleteRequest; +import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.action.support.IndicesOptions; +import org.elasticsearch.action.update.UpdateRequest; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.index.VersionType; + +import java.io.IOException; +import java.util.Locale; + +/** + * Generic interface to group ActionRequest, which perform writes to a single document + * Action requests implementing this can be part of {@link org.elasticsearch.action.bulk.BulkRequest} + */ +public interface DocWriteRequest extends IndicesRequest { + + /** + * Get the index that this request operates on + * @return the index + */ + String index(); + + /** + * Get the type that this request operates on + * @return the type + */ + String type(); + + /** + * Get the id of the document for this request + * @return the id + */ + String id(); + + /** + * Get the options for this request + * @return the indices options + */ + IndicesOptions indicesOptions(); + + /** + * Set the routing for this request + * @return the Request + */ + T routing(String routing); + + /** + * Get the routing for this request + * @return the Routing + */ + String routing(); + + + /** + * Get the parent for this request + * @return the Parent + */ + String parent(); + + /** + * Get the document version for this request + * @return the document version + */ + long version(); + + /** + * Sets the version, which will perform the operation only if a matching + * version exists and no changes happened on the doc since then. + */ + T version(long version); + + /** + * Get the document version type for this request + * @return the document version type + */ + VersionType versionType(); + + /** + * Sets the versioning type. Defaults to {@link VersionType#INTERNAL}. + */ + T versionType(VersionType versionType); + + /** + * Get the requested document operation type of the request + * @return the operation type {@link OpType} + */ + OpType opType(); + + /** + * Requested operation type to perform on the document + */ + enum OpType { + /** + * Index the source. If there an existing document with the id, it will + * be replaced. + */ + INDEX(0), + /** + * Creates the resource. Simply adds it to the index, if there is an existing + * document with the id, then it won't be removed. 
+ */ + CREATE(1), + /** Updates a document */ + UPDATE(2), + /** Deletes a document */ + DELETE(3); + + private final byte op; + private final String lowercase; + + OpType(int op) { + this.op = (byte) op; + this.lowercase = this.toString().toLowerCase(Locale.ROOT); + } + + public byte getId() { + return op; + } + + public String getLowercase() { + return lowercase; + } + + public static OpType fromId(byte id) { + switch (id) { + case 0: return INDEX; + case 1: return CREATE; + case 2: return UPDATE; + case 3: return DELETE; + default: throw new IllegalArgumentException("Unknown opType: [" + id + "]"); + } + } + + public static OpType fromString(String sOpType) { + String lowerCase = sOpType.toLowerCase(Locale.ROOT); + for (OpType opType : OpType.values()) { + if (opType.getLowercase().equals(lowerCase)) { + return opType; + } + } + throw new IllegalArgumentException("Unknown opType: [" + sOpType + "]"); + } + } + + /** read a document write (index/delete/update) request */ + static DocWriteRequest readDocumentRequest(StreamInput in) throws IOException { + byte type = in.readByte(); + DocWriteRequest docWriteRequest; + if (type == 0) { + IndexRequest indexRequest = new IndexRequest(); + indexRequest.readFrom(in); + docWriteRequest = indexRequest; + } else if (type == 1) { + DeleteRequest deleteRequest = new DeleteRequest(); + deleteRequest.readFrom(in); + docWriteRequest = deleteRequest; + } else if (type == 2) { + UpdateRequest updateRequest = new UpdateRequest(); + updateRequest.readFrom(in); + docWriteRequest = updateRequest; + } else { + throw new IllegalStateException("invalid request type [" + type+ " ]"); + } + return docWriteRequest; + } + + /** write a document write (index/delete/update) request*/ + static void writeDocumentRequest(StreamOutput out, DocWriteRequest request) throws IOException { + if (request instanceof IndexRequest) { + out.writeByte((byte) 0); + ((IndexRequest) request).writeTo(out); + } else if (request instanceof DeleteRequest) { + out.writeByte((byte) 1); + ((DeleteRequest) request).writeTo(out); + } else if (request instanceof UpdateRequest) { + out.writeByte((byte) 2); + ((UpdateRequest) request).writeTo(out); + } else { + throw new IllegalStateException("invalid request [" + request.getClass().getSimpleName() + " ]"); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/DocWriteResponse.java b/core/src/main/java/org/elasticsearch/action/DocWriteResponse.java index 0b64f3afa7d09..089ee8bbadd3c 100644 --- a/core/src/main/java/org/elasticsearch/action/DocWriteResponse.java +++ b/core/src/main/java/org/elasticsearch/action/DocWriteResponse.java @@ -19,26 +19,41 @@ package org.elasticsearch.action; import org.elasticsearch.action.support.WriteRequest; -import org.elasticsearch.action.support.WriteResponse; import org.elasticsearch.action.support.WriteRequest.RefreshPolicy; +import org.elasticsearch.action.support.WriteResponse; import org.elasticsearch.action.support.replication.ReplicationResponse; +import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.StatusToXContent; +import org.elasticsearch.common.xcontent.StatusToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.Index; import 
org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; import java.io.IOException; +import java.io.UnsupportedEncodingException; +import java.net.URLEncoder; import java.util.Locale; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + /** * A base class for the response of a write operation that involves a single doc */ -public abstract class DocWriteResponse extends ReplicationResponse implements WriteResponse, StatusToXContent { +public abstract class DocWriteResponse extends ReplicationResponse implements WriteResponse, StatusToXContentObject { + + private static final String _SHARDS = "_shards"; + private static final String _INDEX = "_index"; + private static final String _TYPE = "_type"; + private static final String _ID = "_id"; + private static final String _VERSION = "_version"; + private static final String RESULT = "result"; + private static final String FORCED_REFRESH = "forced_refresh"; /** * An enum that represents the the results of CRUD operations, primarily used to communicate the type of @@ -124,7 +139,6 @@ public String getIndex() { return this.shardId.getIndexName(); } - /** * The exact shard the document was changed in. */ @@ -168,32 +182,48 @@ public void setForcedRefresh(boolean forcedRefresh) { } /** returns the rest status for this response (based on {@link ShardInfo#status()} */ + @Override public RestStatus status() { return getShardInfo().status(); } /** - * Gets the location of the written document as a string suitable for a {@code Location} header. - * @param routing any routing used in the request. If null the location doesn't include routing information. + * Return the relative URI for the location of the document suitable for use in the {@code Location} header. The use of relative URIs is + * permitted as of HTTP/1.1 (cf. https://tools.ietf.org/html/rfc7231#section-7.1.2). + * + * @param routing custom routing or {@code null} if custom routing is not used + * @return the relative URI for the location of the document */ public String getLocation(@Nullable String routing) { - // Absolute path for the location of the document. This should be allowed as of HTTP/1.1: - // https://tools.ietf.org/html/rfc7231#section-7.1.2 - String index = getIndex(); - String type = getType(); - String id = getId(); - String routingStart = "?routing="; - int bufferSize = 3 + index.length() + type.length() + id.length(); - if (routing != null) { - bufferSize += routingStart.length() + routing.length(); - } - StringBuilder location = new StringBuilder(bufferSize); - location.append('/').append(index); - location.append('/').append(type); - location.append('/').append(id); - if (routing != null) { - location.append(routingStart).append(routing); + final String encodedIndex; + final String encodedType; + final String encodedId; + final String encodedRouting; + try { + // encode the path components separately otherwise the path separators will be encoded + encodedIndex = URLEncoder.encode(getIndex(), "UTF-8"); + encodedType = URLEncoder.encode(getType(), "UTF-8"); + encodedId = URLEncoder.encode(getId(), "UTF-8"); + encodedRouting = routing == null ? 
null : URLEncoder.encode(routing, "UTF-8"); + } catch (final UnsupportedEncodingException e) { + throw new AssertionError(e); } + final String routingStart = "?routing="; + final int bufferSizeExcludingRouting = 3 + encodedIndex.length() + encodedType.length() + encodedId.length(); + final int bufferSize; + if (encodedRouting == null) { + bufferSize = bufferSizeExcludingRouting; + } else { + bufferSize = bufferSizeExcludingRouting + routingStart.length() + encodedRouting.length(); + } + final StringBuilder location = new StringBuilder(bufferSize); + location.append('/').append(encodedIndex); + location.append('/').append(encodedType); + location.append('/').append(encodedId); + if (encodedRouting != null) { + location.append(routingStart).append(encodedRouting); + } + return location.toString(); } @@ -220,17 +250,128 @@ public void writeTo(StreamOutput out) throws IOException { } @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + innerToXContent(builder, params); + builder.endObject(); + return builder; + } + + public XContentBuilder innerToXContent(XContentBuilder builder, Params params) throws IOException { ReplicationResponse.ShardInfo shardInfo = getShardInfo(); - builder.field("_index", shardId.getIndexName()) - .field("_type", type) - .field("_id", id) - .field("_version", version) - .field("result", getResult().getLowercase()); + builder.field(_INDEX, shardId.getIndexName()) + .field(_TYPE, type) + .field(_ID, id) + .field(_VERSION, version) + .field(RESULT, getResult().getLowercase()); if (forcedRefresh) { - builder.field("forced_refresh", forcedRefresh); + builder.field(FORCED_REFRESH, true); } - shardInfo.toXContent(builder, params); + builder.field(_SHARDS, shardInfo); return builder; } + + /** + * Parse the output of the {@link #innerToXContent(XContentBuilder, Params)} method. + * + * This method is intended to be called by subclasses and must be called multiple times to parse all the information concerning + * {@link DocWriteResponse} objects. It always parses the current token, updates the given parsing context accordingly + * if needed and then immediately returns. + */ + protected static void parseInnerToXContent(XContentParser parser, Builder context) throws IOException { + XContentParser.Token token = parser.currentToken(); + ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, parser::getTokenLocation); + + String currentFieldName = parser.currentName(); + token = parser.nextToken(); + + if (token.isValue()) { + if (_INDEX.equals(currentFieldName)) { + // index uuid and shard id are unknown and can't be parsed back for now. 
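
The rewritten `getLocation` above percent-encodes each path component separately so that a `/` inside an index, type, or id cannot introduce an extra path segment in the `Location` header. A minimal JDK-only sketch of the same idea, not the production method:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Encode each path component on its own so path separators inside a value are escaped.
public class LocationSketch {
    static String location(String index, String type, String id, String routing) {
        try {
            StringBuilder location = new StringBuilder();
            location.append('/').append(URLEncoder.encode(index, "UTF-8"));
            location.append('/').append(URLEncoder.encode(type, "UTF-8"));
            location.append('/').append(URLEncoder.encode(id, "UTF-8"));
            if (routing != null) {
                location.append("?routing=").append(URLEncoder.encode(routing, "UTF-8"));
            }
            return location.toString();
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError(e); // UTF-8 is always supported
        }
    }

    public static void main(String[] args) {
        // an id containing a slash stays a single path segment: /index/type/a%2Fb
        System.out.println(location("index", "type", "a/b", null));
        System.out.println(location("index", "type", "1", "user-7"));
    }
}
```
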
+ context.setShardId(new ShardId(new Index(parser.text(), IndexMetaData.INDEX_UUID_NA_VALUE), -1)); + } else if (_TYPE.equals(currentFieldName)) { + context.setType(parser.text()); + } else if (_ID.equals(currentFieldName)) { + context.setId(parser.text()); + } else if (_VERSION.equals(currentFieldName)) { + context.setVersion(parser.longValue()); + } else if (RESULT.equals(currentFieldName)) { + String result = parser.text(); + for (Result r : Result.values()) { + if (r.getLowercase().equals(result)) { + context.setResult(r); + break; + } + } + } else if (FORCED_REFRESH.equals(currentFieldName)) { + context.setForcedRefresh(parser.booleanValue()); + } + } else if (token == XContentParser.Token.START_OBJECT) { + if (_SHARDS.equals(currentFieldName)) { + context.setShardInfo(ShardInfo.fromXContent(parser)); + } else { + parser.skipChildren(); // skip potential inner objects for forward compatibility + } + } else if (token == XContentParser.Token.START_ARRAY) { + parser.skipChildren(); // skip potential inner arrays for forward compatibility + } + } + + /** + * Base class of all {@link DocWriteResponse} builders. These {@link DocWriteResponse.Builder} are used during + * xcontent parsing to temporarily store the parsed values, then the {@link Builder#build()} method is called to + * instantiate the appropriate {@link DocWriteResponse} with the parsed values. + */ + public abstract static class Builder { + + protected ShardId shardId = null; + protected String type = null; + protected String id = null; + protected Long version = null; + protected Result result = null; + protected boolean forcedRefresh; + protected ShardInfo shardInfo = null; + + public ShardId getShardId() { + return shardId; + } + + public void setShardId(ShardId shardId) { + this.shardId = shardId; + } + + public String getType() { + return type; + } + + public void setType(String type) { + this.type = type; + } + + public String getId() { + return id; + } + + public void setId(String id) { + this.id = id; + } + + public void setVersion(Long version) { + this.version = version; + } + + public void setResult(Result result) { + this.result = result; + } + + public void setForcedRefresh(boolean forcedRefresh) { + this.forcedRefresh = forcedRefresh; + } + + public void setShardInfo(ShardInfo shardInfo) { + this.shardInfo = shardInfo; + } + + public abstract DocWriteResponse build(); + } } diff --git a/core/src/main/java/org/elasticsearch/action/DocumentRequest.java b/core/src/main/java/org/elasticsearch/action/DocumentRequest.java deleted file mode 100644 index a90f013a6b9ab..0000000000000 --- a/core/src/main/java/org/elasticsearch/action/DocumentRequest.java +++ /dev/null @@ -1,73 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
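
The `DocWriteRequest.OpType` enum introduced earlier pairs each operation with a stable byte id for the wire format and a lowercase name for rendering. The standalone sketch below shows that byte round-trip, with plain `DataOutputStream`/`DataInputStream` standing in for the `StreamOutput`/`StreamInput` classes used by the real serialization:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Locale;

public class OpTypeSketch {
    enum Op {
        INDEX(0), CREATE(1), UPDATE(2), DELETE(3);

        private final byte id;
        Op(int id) { this.id = (byte) id; }

        byte id() { return id; }
        String lowercase() { return name().toLowerCase(Locale.ROOT); }

        static Op fromId(byte id) {
            switch (id) {
                case 0: return INDEX;
                case 1: return CREATE;
                case 2: return UPDATE;
                case 3: return DELETE;
                default: throw new IllegalArgumentException("Unknown op id: [" + id + "]");
            }
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeByte(Op.UPDATE.id());                       // write the discriminator byte
        }
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            Op op = Op.fromId(in.readByte());                    // read it back and resolve the enum
            System.out.println(op.lowercase());                  // prints: update
        }
    }
}
```
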
- */ -package org.elasticsearch.action; - -import org.elasticsearch.action.support.IndicesOptions; - -/** - * Generic interface to group ActionRequest, which work on single document level - * - * Forces this class return index/type/id getters - */ -public interface DocumentRequest extends IndicesRequest { - - /** - * Get the index that this request operates on - * @return the index - */ - String index(); - - /** - * Get the type that this request operates on - * @return the type - */ - String type(); - - /** - * Get the id of the document for this request - * @return the id - */ - String id(); - - /** - * Get the options for this request - * @return the indices options - */ - IndicesOptions indicesOptions(); - - /** - * Set the routing for this request - * @return the Request - */ - T routing(String routing); - - /** - * Get the routing for this request - * @return the Routing - */ - String routing(); - - - /** - * Get the parent for this request - * @return the Parent - */ - String parent(); - -} diff --git a/core/src/main/java/org/elasticsearch/action/ListenableActionFuture.java b/core/src/main/java/org/elasticsearch/action/ListenableActionFuture.java index 29b5a2a877495..87e4df3bc7975 100644 --- a/core/src/main/java/org/elasticsearch/action/ListenableActionFuture.java +++ b/core/src/main/java/org/elasticsearch/action/ListenableActionFuture.java @@ -29,5 +29,5 @@ public interface ListenableActionFuture extends ActionFuture { /** * Add an action listener to be invoked when a response has received. */ - void addListener(final ActionListener listener); + void addListener(ActionListener listener); } diff --git a/core/src/main/java/org/elasticsearch/action/NotifyOnceListener.java b/core/src/main/java/org/elasticsearch/action/NotifyOnceListener.java new file mode 100644 index 0000000000000..1b717dcc6c05a --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/NotifyOnceListener.java @@ -0,0 +1,50 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action; + +import java.util.concurrent.atomic.AtomicBoolean; + +/** + * A listener that ensures that only one of onResponse or onFailure is called. And the method + * the is called is only called once. Subclasses should implement notification logic with + * innerOnResponse and innerOnFailure. 
+ */ +public abstract class NotifyOnceListener implements ActionListener { + + private final AtomicBoolean hasBeenCalled = new AtomicBoolean(false); + + protected abstract void innerOnResponse(Response response); + + protected abstract void innerOnFailure(Exception e); + + @Override + public final void onResponse(Response response) { + if (hasBeenCalled.compareAndSet(false, true)) { + innerOnResponse(response); + } + } + + @Override + public final void onFailure(Exception e) { + if (hasBeenCalled.compareAndSet(false, true)) { + innerOnFailure(e); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/OriginalIndices.java b/core/src/main/java/org/elasticsearch/action/OriginalIndices.java index cc299f544b38f..0642326d2b48e 100644 --- a/core/src/main/java/org/elasticsearch/action/OriginalIndices.java +++ b/core/src/main/java/org/elasticsearch/action/OriginalIndices.java @@ -24,11 +24,15 @@ import org.elasticsearch.common.io.stream.StreamOutput; import java.io.IOException; +import java.util.Arrays; /** * Used to keep track of original indices within internal (e.g. shard level) requests */ -public class OriginalIndices implements IndicesRequest { +public final class OriginalIndices implements IndicesRequest { + + //constant to use when original indices are not applicable and will not be serialized across the wire + public static final OriginalIndices NONE = new OriginalIndices(null, null); private final String[] indices; private final IndicesOptions indicesOptions; @@ -39,7 +43,6 @@ public OriginalIndices(IndicesRequest indicesRequest) { public OriginalIndices(String[] indices, IndicesOptions indicesOptions) { this.indices = indices; - assert indicesOptions != null; this.indicesOptions = indicesOptions; } @@ -57,9 +60,17 @@ public static OriginalIndices readOriginalIndices(StreamInput in) throws IOExcep return new OriginalIndices(in.readStringArray(), IndicesOptions.readIndicesOptions(in)); } - public static void writeOriginalIndices(OriginalIndices originalIndices, StreamOutput out) throws IOException { + assert originalIndices != NONE; out.writeStringArrayNullable(originalIndices.indices); originalIndices.indicesOptions.writeIndicesOptions(out); } + + @Override + public String toString() { + return "OriginalIndices{" + + "indices=" + Arrays.toString(indices) + + ", indicesOptions=" + indicesOptions + + '}'; + } } diff --git a/core/src/main/java/org/elasticsearch/action/TaskOperationFailure.java b/core/src/main/java/org/elasticsearch/action/TaskOperationFailure.java index 6704f610ec0aa..8c8f263c34db0 100644 --- a/core/src/main/java/org/elasticsearch/action/TaskOperationFailure.java +++ b/core/src/main/java/org/elasticsearch/action/TaskOperationFailure.java @@ -88,7 +88,7 @@ public RestStatus getStatus() { return status; } - public Throwable getCause() { + public Exception getCause() { return reason; } @@ -105,7 +105,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws if (reason != null) { builder.field("reason"); builder.startObject(); - ElasticsearchException.toXContent(builder, params, reason); + ElasticsearchException.generateThrowableXContent(builder, params, reason); builder.endObject(); } return builder; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequest.java index f31b1d3737684..aea1ee57dca87 100644 --- 
a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequest.java @@ -19,13 +19,11 @@ package org.elasticsearch.action.admin.cluster.allocation; -import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.support.master.MasterNodeRequest; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ObjectParser; @@ -40,36 +38,47 @@ */ public class ClusterAllocationExplainRequest extends MasterNodeRequest { - private static ObjectParser PARSER = new ObjectParser( - "cluster/allocation/explain"); + private static ObjectParser PARSER = new ObjectParser<>("cluster/allocation/explain"); static { PARSER.declareString(ClusterAllocationExplainRequest::setIndex, new ParseField("index")); PARSER.declareInt(ClusterAllocationExplainRequest::setShard, new ParseField("shard")); PARSER.declareBoolean(ClusterAllocationExplainRequest::setPrimary, new ParseField("primary")); + PARSER.declareString(ClusterAllocationExplainRequest::setCurrentNode, new ParseField("current_node")); } + @Nullable private String index; + @Nullable private Integer shard; + @Nullable private Boolean primary; + @Nullable + private String currentNode; private boolean includeYesDecisions = false; private boolean includeDiskInfo = false; - /** Explain the first unassigned shard */ + /** + * Create a new allocation explain request to explain any unassigned shard in the cluster. + */ public ClusterAllocationExplainRequest() { this.index = null; this.shard = null; this.primary = null; + this.currentNode = null; } /** * Create a new allocation explain request. If {@code primary} is false, the first unassigned replica * will be picked for explanation. If no replicas are unassigned, the first assigned replica will * be explained. + * + * Package private for testing. */ - public ClusterAllocationExplainRequest(String index, int shard, boolean primary) { + ClusterAllocationExplainRequest(String index, int shard, boolean primary, @Nullable String currentNode) { this.index = index; this.shard = shard; this.primary = primary; + this.currentNode = currentNode; } @Override @@ -93,54 +102,103 @@ public ActionRequestValidationException validate() { * Returns {@code true} iff the first unassigned shard is to be used */ public boolean useAnyUnassignedShard() { - return this.index == null && this.shard == null && this.primary == null; + return this.index == null && this.shard == null && this.primary == null && this.currentNode == null; } + /** + * Sets the index name of the shard to explain. + */ public ClusterAllocationExplainRequest setIndex(String index) { this.index = index; return this; } + /** + * Returns the index name of the shard to explain, or {@code null} to use any unassigned shard (see {@link #useAnyUnassignedShard()}). + */ @Nullable public String getIndex() { return this.index; } + /** + * Sets the shard id of the shard to explain. 
+ */ public ClusterAllocationExplainRequest setShard(Integer shard) { this.shard = shard; return this; } + /** + * Returns the shard id of the shard to explain, or {@code null} to use any unassigned shard (see {@link #useAnyUnassignedShard()}). + */ @Nullable public Integer getShard() { return this.shard; } + /** + * Sets whether to explain the allocation of the primary shard or a replica shard copy + * for the shard id (see {@link #getShard()}). + */ public ClusterAllocationExplainRequest setPrimary(Boolean primary) { this.primary = primary; return this; } + /** + * Returns {@code true} if explaining the primary shard for the shard id (see {@link #getShard()}), + * {@code false} if explaining a replica shard copy for the shard id, or {@code null} to use any + * unassigned shard (see {@link #useAnyUnassignedShard()}). + */ @Nullable public Boolean isPrimary() { return this.primary; } + /** + * Requests the explain API to explain an already assigned replica shard currently allocated to + * the given node. + */ + public ClusterAllocationExplainRequest setCurrentNode(String currentNodeId) { + this.currentNode = currentNodeId; + return this; + } + + /** + * Returns the node holding the replica shard to be explained. Returns {@code null} if any replica shard + * can be explained. + */ + @Nullable + public String getCurrentNode() { + return currentNode; + } + + /** + * Set to {@code true} to include yes decisions for a particular node. + */ public void includeYesDecisions(boolean includeYesDecisions) { this.includeYesDecisions = includeYesDecisions; } - /** Returns true if all decisions should be included. Otherwise only "NO" and "THROTTLE" decisions are returned */ + /** + * Returns {@code true} if yes decisions should be included. Otherwise only "no" and "throttle" + * decisions are returned. + */ public boolean includeYesDecisions() { return this.includeYesDecisions; } - /** {@code true} to include information about the gathered disk information of nodes in the cluster */ + /** + * Set to {@code true} to include information about the gathered disk information of nodes in the cluster. + */ public void includeDiskInfo(boolean includeDiskInfo) { this.includeDiskInfo = includeDiskInfo; } - /** Returns true if information about disk usage and shard sizes should also be returned */ + /** + * Returns {@code true} if information about disk usage and shard sizes should also be returned. 
+ */ public boolean includeDiskInfo() { return this.includeDiskInfo; } @@ -154,37 +212,46 @@ public String toString() { sb.append("index=").append(index); sb.append(",shard=").append(shard); sb.append(",primary?=").append(primary); + if (currentNode != null) { + sb.append(",currentNode=").append(currentNode); + } } sb.append(",includeYesDecisions?=").append(includeYesDecisions); return sb.toString(); } public static ClusterAllocationExplainRequest parse(XContentParser parser) throws IOException { - ClusterAllocationExplainRequest req = PARSER.parse(parser, new ClusterAllocationExplainRequest(), () -> ParseFieldMatcher.STRICT); - Exception e = req.validate(); - if (e != null) { - throw new ElasticsearchParseException("'index', 'shard', and 'primary' must be specified in allocation explain request", e); - } - return req; + return PARSER.parse(parser, new ClusterAllocationExplainRequest(), null); } @Override public void readFrom(StreamInput in) throws IOException { + checkVersion(in.getVersion()); super.readFrom(in); this.index = in.readOptionalString(); this.shard = in.readOptionalVInt(); this.primary = in.readOptionalBoolean(); + this.currentNode = in.readOptionalString(); this.includeYesDecisions = in.readBoolean(); this.includeDiskInfo = in.readBoolean(); } @Override public void writeTo(StreamOutput out) throws IOException { + checkVersion(out.getVersion()); super.writeTo(out); out.writeOptionalString(index); out.writeOptionalVInt(shard); out.writeOptionalBoolean(primary); + out.writeOptionalString(currentNode); out.writeBoolean(includeYesDecisions); out.writeBoolean(includeDiskInfo); } + + private void checkVersion(Version version) { + if (version.before(Version.V_5_2_0)) { + throw new IllegalArgumentException("cannot explain shards in a mixed-cluster with pre-" + Version.V_5_2_0 + + " nodes, node version [" + version + "]"); + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequestBuilder.java index 343d1756d7524..038657a277cd5 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequestBuilder.java @@ -19,7 +19,6 @@ package org.elasticsearch.action.admin.cluster.allocation; -import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; @@ -65,6 +64,15 @@ public ClusterAllocationExplainRequestBuilder setIncludeDiskInfo(boolean include return this; } + /** + * Requests the explain API to explain an already assigned replica shard currently allocated to + * the given node. 
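// Illustrative sketch only, not part of this change: how a caller might combine the request
// setters added above. The index name, shard id and "node-1" node id are placeholder values.
ClusterAllocationExplainRequest explainRequest = new ClusterAllocationExplainRequest()
        .setIndex("my-index")
        .setShard(0)
        .setPrimary(false)
        .setCurrentNode("node-1");    // new in this change: target the shard copy on a specific node
explainRequest.includeYesDecisions(true);  // also report "yes" decisions, not only no/throttle
explainRequest.includeDiskInfo(true);      // attach the gathered disk usage information
// Sending such a request to a pre-5.2.0 node now fails fast via checkVersion(...).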
+ */ + public ClusterAllocationExplainRequestBuilder setCurrentNode(String currentNode) { + request.setCurrentNode(currentNode); + return this; + } + /** * Signal that the first unassigned shard should be used */ diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainResponse.java index cc586bd1a5813..83a847ba01e54 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainResponse.java @@ -20,7 +20,6 @@ package org.elasticsearch.action.admin.cluster.allocation; import org.elasticsearch.action.ActionResponse; -import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplanation.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplanation.java index fd06c685b95f8..e2911c227a868 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplanation.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplanation.java @@ -21,285 +21,184 @@ import org.elasticsearch.cluster.ClusterInfo; import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.routing.ShardRouting; +import org.elasticsearch.cluster.routing.ShardRoutingState; import org.elasticsearch.cluster.routing.UnassignedInfo; +import org.elasticsearch.cluster.routing.allocation.AllocationDecision; +import org.elasticsearch.cluster.routing.allocation.ShardAllocationDecision; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.shard.ShardId; import java.io.IOException; -import java.util.HashMap; -import java.util.Map; +import java.util.Locale; + +import static org.elasticsearch.cluster.routing.allocation.AbstractAllocationDecision.discoveryNodeToXContent; /** - * A {@code ClusterAllocationExplanation} is an explanation of why a shard may or may not be allocated to nodes. It also includes weights - * for where the shard is likely to be assigned. It is an immutable class + * A {@code ClusterAllocationExplanation} is an explanation of why a shard is unassigned, + * or if it is not unassigned, then which nodes it could possibly be relocated to. + * It is an immutable class. 
*/ public final class ClusterAllocationExplanation implements ToXContent, Writeable { - private final ShardId shard; - private final boolean primary; - private final boolean hasPendingAsyncFetch; - private final String assignedNodeId; - private final UnassignedInfo unassignedInfo; - private final long allocationDelayMillis; - private final long remainingDelayMillis; - private final Map nodeExplanations; + private final ShardRouting shardRouting; + private final DiscoveryNode currentNode; + private final DiscoveryNode relocationTargetNode; private final ClusterInfo clusterInfo; - - public ClusterAllocationExplanation(ShardId shard, boolean primary, @Nullable String assignedNodeId, long allocationDelayMillis, - long remainingDelayMillis, @Nullable UnassignedInfo unassignedInfo, boolean hasPendingAsyncFetch, - Map nodeExplanations, @Nullable ClusterInfo clusterInfo) { - this.shard = shard; - this.primary = primary; - this.hasPendingAsyncFetch = hasPendingAsyncFetch; - this.assignedNodeId = assignedNodeId; - this.unassignedInfo = unassignedInfo; - this.allocationDelayMillis = allocationDelayMillis; - this.remainingDelayMillis = remainingDelayMillis; - this.nodeExplanations = nodeExplanations; + private final ShardAllocationDecision shardAllocationDecision; + + public ClusterAllocationExplanation(ShardRouting shardRouting, @Nullable DiscoveryNode currentNode, + @Nullable DiscoveryNode relocationTargetNode, @Nullable ClusterInfo clusterInfo, + ShardAllocationDecision shardAllocationDecision) { + this.shardRouting = shardRouting; + this.currentNode = currentNode; + this.relocationTargetNode = relocationTargetNode; this.clusterInfo = clusterInfo; + this.shardAllocationDecision = shardAllocationDecision; } public ClusterAllocationExplanation(StreamInput in) throws IOException { - this.shard = ShardId.readShardId(in); - this.primary = in.readBoolean(); - this.hasPendingAsyncFetch = in.readBoolean(); - this.assignedNodeId = in.readOptionalString(); - this.unassignedInfo = in.readOptionalWriteable(UnassignedInfo::new); - this.allocationDelayMillis = in.readVLong(); - this.remainingDelayMillis = in.readVLong(); - - int mapSize = in.readVInt(); - Map nodeToExplanation = new HashMap<>(mapSize); - for (int i = 0; i < mapSize; i++) { - NodeExplanation nodeExplanation = new NodeExplanation(in); - nodeToExplanation.put(nodeExplanation.getNode(), nodeExplanation); - } - this.nodeExplanations = nodeToExplanation; - if (in.readBoolean()) { - this.clusterInfo = new ClusterInfo(in); - } else { - this.clusterInfo = null; - } + this.shardRouting = new ShardRouting(in); + this.currentNode = in.readOptionalWriteable(DiscoveryNode::new); + this.relocationTargetNode = in.readOptionalWriteable(DiscoveryNode::new); + this.clusterInfo = in.readOptionalWriteable(ClusterInfo::new); + this.shardAllocationDecision = new ShardAllocationDecision(in); } @Override public void writeTo(StreamOutput out) throws IOException { - this.getShard().writeTo(out); - out.writeBoolean(this.isPrimary()); - out.writeBoolean(this.isStillFetchingShardData()); - out.writeOptionalString(this.getAssignedNodeId()); - out.writeOptionalWriteable(this.getUnassignedInfo()); - out.writeVLong(allocationDelayMillis); - out.writeVLong(remainingDelayMillis); - - out.writeVInt(this.nodeExplanations.size()); - for (NodeExplanation explanation : this.nodeExplanations.values()) { - explanation.writeTo(out); - } - if (this.clusterInfo != null) { - out.writeBoolean(true); - this.clusterInfo.writeTo(out); - } else { - out.writeBoolean(false); - } + 
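// The wire format is now simply: the ShardRouting, an optional current node, an optional
// relocation target node, an optional ClusterInfo, and the ShardAllocationDecision, replacing
// the per-node explanation map, delay values and async-fetch flag that were written before.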
shardRouting.writeTo(out); + out.writeOptionalWriteable(currentNode); + out.writeOptionalWriteable(relocationTargetNode); + out.writeOptionalWriteable(clusterInfo); + shardAllocationDecision.writeTo(out); } - /** Return the shard that the explanation is about */ + /** + * Returns the shard that the explanation is about. + */ public ShardId getShard() { - return this.shard; + return shardRouting.shardId(); } - /** Return true if the explained shard is primary, false otherwise */ + /** + * Returns {@code true} if the explained shard is primary, {@code false} otherwise. + */ public boolean isPrimary() { - return this.primary; + return shardRouting.primary(); } - /** Return turn if shard data is still being fetched for the allocation */ - public boolean isStillFetchingShardData() { - return this.hasPendingAsyncFetch; + /** + * Returns the current {@link ShardRoutingState} of the shard. + */ + public ShardRoutingState getShardState() { + return shardRouting.state(); } - /** Return turn if the shard is assigned to a node */ - public boolean isAssigned() { - return this.assignedNodeId != null; + /** + * Returns the currently assigned node, or {@code null} if the shard is unassigned. + */ + @Nullable + public DiscoveryNode getCurrentNode() { + return currentNode; } - /** Return the assigned node id or null if not assigned */ + /** + * Returns the relocating target node, or {@code null} if the shard is not in the {@link ShardRoutingState#RELOCATING} state. + */ @Nullable - public String getAssignedNodeId() { - return this.assignedNodeId; + public DiscoveryNode getRelocationTargetNode() { + return relocationTargetNode; } - /** Return the unassigned info for the shard or null if the shard is assigned */ + /** + * Returns the unassigned info for the shard, or {@code null} if the shard is active. + */ @Nullable public UnassignedInfo getUnassignedInfo() { - return this.unassignedInfo; - } - - /** Return the configured delay before the shard can be allocated in milliseconds */ - public long getAllocationDelayMillis() { - return this.allocationDelayMillis; - } - - /** Return the remaining allocation delay for this shard in milliseconds */ - public long getRemainingDelayMillis() { - return this.remainingDelayMillis; - } - - /** Return a map of node to the explanation for that node */ - public Map getNodeExplanations() { - return this.nodeExplanations; + return shardRouting.unassignedInfo(); } - /** Return the cluster disk info for the cluster or null if none available */ + /** + * Returns the cluster disk info for the cluster, or {@code null} if none available. + */ @Nullable public ClusterInfo getClusterInfo() { return this.clusterInfo; } + /** \ + * Returns the shard allocation decision for attempting to assign or move the shard. 
+ */ + public ShardAllocationDecision getShardAllocationDecision() { + return shardAllocationDecision; + } + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(); { - builder.startObject("shard"); { - builder.field("index", shard.getIndexName()); - builder.field("index_uuid", shard.getIndex().getUUID()); - builder.field("id", shard.getId()); - builder.field("primary", primary); - } - builder.endObject(); // end shard - builder.field("assigned", this.assignedNodeId != null); - // If assigned, show the node id of the node it's assigned to - if (assignedNodeId != null) { - builder.field("assigned_node_id", this.assignedNodeId); + builder.field("index", shardRouting.getIndexName()); + builder.field("shard", shardRouting.getId()); + builder.field("primary", shardRouting.primary()); + builder.field("current_state", shardRouting.state().toString().toLowerCase(Locale.ROOT)); + if (shardRouting.unassignedInfo() != null) { + unassignedInfoToXContent(shardRouting.unassignedInfo(), builder); } - builder.field("shard_state_fetch_pending", this.hasPendingAsyncFetch); - // If we have unassigned info, show that - if (unassignedInfo != null) { - unassignedInfo.toXContent(builder, params); - builder.timeValueField("allocation_delay_in_millis", "allocation_delay", TimeValue.timeValueMillis(allocationDelayMillis)); - builder.timeValueField("remaining_delay_in_millis", "remaining_delay", TimeValue.timeValueMillis(remainingDelayMillis)); - } - builder.startObject("nodes"); { - for (NodeExplanation explanation : nodeExplanations.values()) { - explanation.toXContent(builder, params); + if (currentNode != null) { + builder.startObject("current_node"); + { + discoveryNodeToXContent(currentNode, true, builder); + if (shardAllocationDecision.getMoveDecision().isDecisionTaken() + && shardAllocationDecision.getMoveDecision().getCurrentNodeRanking() > 0) { + builder.field("weight_ranking", shardAllocationDecision.getMoveDecision().getCurrentNodeRanking()); + } } + builder.endObject(); } - builder.endObject(); // end nodes if (this.clusterInfo != null) { builder.startObject("cluster_info"); { this.clusterInfo.toXContent(builder, params); } builder.endObject(); // end "cluster_info" } + if (shardAllocationDecision.isDecisionTaken()) { + shardAllocationDecision.toXContent(builder, params); + } else { + String explanation; + if (shardRouting.state() == ShardRoutingState.RELOCATING) { + explanation = "the shard is in the process of relocating from node [" + currentNode.getName() + "] " + + "to node [" + relocationTargetNode.getName() + "], wait until relocation has completed"; + } else { + assert shardRouting.state() == ShardRoutingState.INITIALIZING; + explanation = "the shard is in the process of initializing on node [" + currentNode.getName() + "], " + + "wait until initialization has completed"; + } + builder.field("explanation", explanation); + } } builder.endObject(); // end wrapping object return builder; } - /** An Enum representing the final decision for a shard allocation on a node */ - public enum FinalDecision { - // Yes, the shard can be assigned - YES((byte) 0), - // No, the shard cannot be assigned - NO((byte) 1), - // The shard is already assigned to this node - ALREADY_ASSIGNED((byte) 2); + private XContentBuilder unassignedInfoToXContent(UnassignedInfo unassignedInfo, XContentBuilder builder) + throws IOException { - private final byte id; - - FinalDecision (byte id) { - this.id = id; + builder.startObject("unassigned_info"); + 
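// Writes the shard's UnassignedInfo as an "unassigned_info" object: the "reason" (for example
// NODE_LEFT), the "at" timestamp, "failed_allocation_attempts" and "details" when present, and
// the "last_allocation_status"; any concrete values named here are hypothetical.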
builder.field("reason", unassignedInfo.getReason()); + builder.field("at", UnassignedInfo.DATE_TIME_FORMATTER.printer().print(unassignedInfo.getUnassignedTimeInMillis())); + if (unassignedInfo.getNumFailedAllocations() > 0) { + builder.field("failed_allocation_attempts", unassignedInfo.getNumFailedAllocations()); } - - private static FinalDecision fromId(byte id) { - switch (id) { - case 0: return YES; - case 1: return NO; - case 2: return ALREADY_ASSIGNED; - default: - throw new IllegalArgumentException("unknown id for final decision: [" + id + "]"); - } - } - - @Override - public String toString() { - switch (id) { - case 0: return "YES"; - case 1: return "NO"; - case 2: return "ALREADY_ASSIGNED"; - default: - throw new IllegalArgumentException("unknown id for final decision: [" + id + "]"); - } - } - - static FinalDecision readFrom(StreamInput in) throws IOException { - return fromId(in.readByte()); - } - - void writeTo(StreamOutput out) throws IOException { - out.writeByte(id); - } - } - - /** An Enum representing the state of the shard store's copy of the data on a node */ - public enum StoreCopy { - // No data for this shard is on the node - NONE((byte) 0), - // A copy of the data is available on this node - AVAILABLE((byte) 1), - // The copy of the data on the node is corrupt - CORRUPT((byte) 2), - // There was an error reading this node's copy of the data - IO_ERROR((byte) 3), - // The copy of the data on the node is stale - STALE((byte) 4), - // It's unknown what the copy of the data is - UNKNOWN((byte) 5); - - private final byte id; - - StoreCopy (byte id) { - this.id = id; - } - - private static StoreCopy fromId(byte id) { - switch (id) { - case 0: return NONE; - case 1: return AVAILABLE; - case 2: return CORRUPT; - case 3: return IO_ERROR; - case 4: return STALE; - case 5: return UNKNOWN; - default: - throw new IllegalArgumentException("unknown id for store copy: [" + id + "]"); - } - } - - @Override - public String toString() { - switch (id) { - case 0: return "NONE"; - case 1: return "AVAILABLE"; - case 2: return "CORRUPT"; - case 3: return "IO_ERROR"; - case 4: return "STALE"; - case 5: return "UNKNOWN"; - default: - throw new IllegalArgumentException("unknown id for store copy: [" + id + "]"); - } - } - - static StoreCopy readFrom(StreamInput in) throws IOException { - return fromId(in.readByte()); - } - - void writeTo(StreamOutput out) throws IOException { - out.writeByte(id); + String details = unassignedInfo.getDetails(); + if (details != null) { + builder.field("details", details); } + builder.field("last_allocation_status", AllocationDecision.fromAllocationStatus(unassignedInfo.getLastAllocationStatus())); + builder.endObject(); + return builder; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/NodeExplanation.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/NodeExplanation.java deleted file mode 100644 index e564711d4188d..0000000000000 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/NodeExplanation.java +++ /dev/null @@ -1,145 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.action.admin.cluster.allocation; - -import org.elasticsearch.ExceptionsHelper; -import org.elasticsearch.action.admin.indices.shards.IndicesShardStoresResponse; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.routing.allocation.decider.Decision; -import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; -import org.elasticsearch.common.xcontent.XContentBuilder; - -import java.io.IOException; -import java.util.Map; -/** The cluster allocation explanation for a single node */ -public class NodeExplanation implements Writeable, ToXContent { - private final DiscoveryNode node; - private final Decision nodeDecision; - private final Float nodeWeight; - private final IndicesShardStoresResponse.StoreStatus storeStatus; - private final ClusterAllocationExplanation.FinalDecision finalDecision; - private final ClusterAllocationExplanation.StoreCopy storeCopy; - private final String finalExplanation; - - public NodeExplanation(final DiscoveryNode node, final Decision nodeDecision, final Float nodeWeight, - @Nullable final IndicesShardStoresResponse.StoreStatus storeStatus, - final ClusterAllocationExplanation.FinalDecision finalDecision, - final String finalExplanation, - final ClusterAllocationExplanation.StoreCopy storeCopy) { - this.node = node; - this.nodeDecision = nodeDecision; - this.nodeWeight = nodeWeight; - this.storeStatus = storeStatus; - this.finalDecision = finalDecision; - this.finalExplanation = finalExplanation; - this.storeCopy = storeCopy; - } - - public NodeExplanation(StreamInput in) throws IOException { - this.node = new DiscoveryNode(in); - this.nodeDecision = Decision.readFrom(in); - this.nodeWeight = in.readFloat(); - if (in.readBoolean()) { - this.storeStatus = IndicesShardStoresResponse.StoreStatus.readStoreStatus(in); - } else { - this.storeStatus = null; - } - this.finalDecision = ClusterAllocationExplanation.FinalDecision.readFrom(in); - this.finalExplanation = in.readString(); - this.storeCopy = ClusterAllocationExplanation.StoreCopy.readFrom(in); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - node.writeTo(out); - Decision.writeTo(nodeDecision, out); - out.writeFloat(nodeWeight); - if (storeStatus == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - storeStatus.writeTo(out); - } - finalDecision.writeTo(out); - out.writeString(finalExplanation); - storeCopy.writeTo(out); - } - - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(node.getId()); { - builder.field("node_name", node.getName()); - builder.startObject("node_attributes"); { - for (Map.Entry attrEntry : node.getAttributes().entrySet()) { - builder.field(attrEntry.getKey(), attrEntry.getValue()); - } - } - builder.endObject(); // end attributes - builder.startObject("store"); { - 
builder.field("shard_copy", storeCopy.toString()); - if (storeStatus != null) { - final Throwable storeErr = storeStatus.getStoreException(); - if (storeErr != null) { - builder.field("store_exception", ExceptionsHelper.detailedMessage(storeErr)); - } - } - } - builder.endObject(); // end store - builder.field("final_decision", finalDecision.toString()); - builder.field("final_explanation", finalExplanation.toString()); - builder.field("weight", nodeWeight); - nodeDecision.toXContent(builder, params); - } - builder.endObject(); // end node - return builder; - } - - public DiscoveryNode getNode() { - return this.node; - } - - public Decision getDecision() { - return this.nodeDecision; - } - - public Float getWeight() { - return this.nodeWeight; - } - - @Nullable - public IndicesShardStoresResponse.StoreStatus getStoreStatus() { - return this.storeStatus; - } - - public ClusterAllocationExplanation.FinalDecision getFinalDecision() { - return this.finalDecision; - } - - public String getFinalExplanation() { - return this.finalExplanation; - } - - public ClusterAllocationExplanation.StoreCopy getStoreCopy() { - return this.storeCopy; - } -} diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/TransportClusterAllocationExplainAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/TransportClusterAllocationExplainAction.java index a304fa60cb798..77d4b24d8cc23 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/TransportClusterAllocationExplainAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/TransportClusterAllocationExplainAction.java @@ -19,13 +19,7 @@ package org.elasticsearch.action.admin.cluster.allocation; -import org.apache.lucene.index.CorruptIndexException; -import org.elasticsearch.ElasticsearchException; -import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.admin.indices.shards.IndicesShardStoresRequest; -import org.elasticsearch.action.admin.indices.shards.IndicesShardStoresResponse; -import org.elasticsearch.action.admin.indices.shards.TransportIndicesShardStoresAction; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.master.TransportMasterNodeAction; import org.elasticsearch.cluster.ClusterInfo; @@ -33,34 +27,25 @@ import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.block.ClusterBlockException; import org.elasticsearch.cluster.block.ClusterBlockLevel; -import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.routing.RecoverySource; -import org.elasticsearch.cluster.routing.RoutingNode; import org.elasticsearch.cluster.routing.RoutingNodes; import org.elasticsearch.cluster.routing.ShardRouting; -import org.elasticsearch.cluster.routing.UnassignedInfo; +import org.elasticsearch.cluster.routing.allocation.MoveDecision; import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; +import org.elasticsearch.cluster.routing.allocation.AllocateUnassignedDecision; +import org.elasticsearch.cluster.routing.allocation.RoutingAllocation.DebugMode; +import org.elasticsearch.cluster.routing.allocation.ShardAllocationDecision; import org.elasticsearch.cluster.routing.allocation.allocator.ShardsAllocator; import 
org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders; -import org.elasticsearch.cluster.routing.allocation.decider.Decision; import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.collect.ImmutableOpenIntMap; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.gateway.GatewayAllocator; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; -import java.util.HashMap; import java.util.List; -import java.util.Map; -import java.util.Set; - -import static org.elasticsearch.cluster.routing.UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING; /** * The {@code TransportClusterAllocationExplainAction} is responsible for actually executing the explanation of a shard's allocation on the @@ -72,7 +57,6 @@ public class TransportClusterAllocationExplainAction private final ClusterInfoService clusterInfoService; private final AllocationDeciders allocationDeciders; private final ShardsAllocator shardAllocator; - private final TransportIndicesShardStoresAction shardStoresAction; private final GatewayAllocator gatewayAllocator; @Inject @@ -80,14 +64,12 @@ public TransportClusterAllocationExplainAction(Settings settings, TransportServi ThreadPool threadPool, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, ClusterInfoService clusterInfoService, AllocationDeciders allocationDeciders, - ShardsAllocator shardAllocator, TransportIndicesShardStoresAction shardStoresAction, - GatewayAllocator gatewayAllocator) { + ShardsAllocator shardAllocator, GatewayAllocator gatewayAllocator) { super(settings, ClusterAllocationExplainAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, ClusterAllocationExplainRequest::new); this.clusterInfoService = clusterInfoService; this.allocationDeciders = allocationDeciders; this.shardAllocator = shardAllocator; - this.shardStoresAction = shardStoresAction; this.gatewayAllocator = gatewayAllocator; } @@ -106,240 +88,114 @@ protected ClusterAllocationExplainResponse newResponse() { return new ClusterAllocationExplainResponse(); } - /** - * Return the decisions for the given {@code ShardRouting} on the given {@code RoutingNode}. If {@code includeYesDecisions} is not true, - * only non-YES (NO and THROTTLE) decisions are returned. - */ - public static Decision tryShardOnNode(ShardRouting shard, RoutingNode node, RoutingAllocation allocation, boolean includeYesDecisions) { - Decision d = allocation.deciders().canAllocate(shard, node, allocation); - if (includeYesDecisions) { - return d; - } else { - Decision.Multi nonYesDecisions = new Decision.Multi(); - List decisions = d.getDecisions(); - for (Decision decision : decisions) { - if (decision.type() != Decision.Type.YES) { - nonYesDecisions.add(decision); - } - } - return nonYesDecisions; - } - } - - /** - * Construct a {@code NodeExplanation} object for the given shard given all the metadata. This also attempts to construct the human - * readable FinalDecision and final explanation as part of the explanation. 
- */ - public static NodeExplanation calculateNodeExplanation(ShardRouting shard, - IndexMetaData indexMetaData, - DiscoveryNode node, - Decision nodeDecision, - Float nodeWeight, - IndicesShardStoresResponse.StoreStatus storeStatus, - String assignedNodeId, - Set activeAllocationIds, - boolean hasPendingAsyncFetch) { - final ClusterAllocationExplanation.FinalDecision finalDecision; - final ClusterAllocationExplanation.StoreCopy storeCopy; - final String finalExplanation; + @Override + protected void masterOperation(final ClusterAllocationExplainRequest request, final ClusterState state, + final ActionListener listener) { + final RoutingNodes routingNodes = state.getRoutingNodes(); + final ClusterInfo clusterInfo = clusterInfoService.getClusterInfo(); + final RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, state, + clusterInfo, System.nanoTime(), false); - if (storeStatus == null) { - // No copies of the data - storeCopy = ClusterAllocationExplanation.StoreCopy.NONE; - } else { - final Exception storeErr = storeStatus.getStoreException(); - if (storeErr != null) { - if (ExceptionsHelper.unwrapCause(storeErr) instanceof CorruptIndexException) { - storeCopy = ClusterAllocationExplanation.StoreCopy.CORRUPT; - } else { - storeCopy = ClusterAllocationExplanation.StoreCopy.IO_ERROR; - } - } else if (activeAllocationIds.isEmpty()) { - // The ids are only empty if dealing with a legacy index - // TODO: fetch the shard state versions and display here? - storeCopy = ClusterAllocationExplanation.StoreCopy.UNKNOWN; - } else if (activeAllocationIds.contains(storeStatus.getAllocationId())) { - storeCopy = ClusterAllocationExplanation.StoreCopy.AVAILABLE; - } else { - // Otherwise, this is a stale copy of the data (allocation ids don't match) - storeCopy = ClusterAllocationExplanation.StoreCopy.STALE; - } - } + ShardRouting shardRouting = findShardToExplain(request, allocation); + logger.debug("explaining the allocation for [{}], found shard [{}]", request, shardRouting); - if (node.getId().equals(assignedNodeId)) { - finalDecision = ClusterAllocationExplanation.FinalDecision.ALREADY_ASSIGNED; - finalExplanation = "the shard is already assigned to this node"; - } else if (shard.unassigned() && shard.primary() == false && - shard.unassignedInfo().getReason() != UnassignedInfo.Reason.INDEX_CREATED && nodeDecision.type() != Decision.Type.YES) { - finalExplanation = "the shard cannot be assigned because allocation deciders return a " + nodeDecision.type().name() + - " decision"; - finalDecision = ClusterAllocationExplanation.FinalDecision.NO; - } else if (shard.unassigned() && shard.primary() == false && - shard.unassignedInfo().getReason() != UnassignedInfo.Reason.INDEX_CREATED && hasPendingAsyncFetch) { - finalExplanation = "the shard's state is still being fetched so it cannot be allocated"; - finalDecision = ClusterAllocationExplanation.FinalDecision.NO; - } else if (shard.primary() && shard.unassigned() && - (shard.recoverySource().getType() == RecoverySource.Type.EXISTING_STORE || - shard.recoverySource().getType() == RecoverySource.Type.SNAPSHOT) - && hasPendingAsyncFetch) { - finalExplanation = "the shard's state is still being fetched so it cannot be allocated"; - finalDecision = ClusterAllocationExplanation.FinalDecision.NO; - } else if (shard.primary() && shard.unassigned() && shard.recoverySource().getType() == RecoverySource.Type.EXISTING_STORE && - storeCopy == ClusterAllocationExplanation.StoreCopy.STALE) { - finalExplanation = "the copy of the shard is 
stale, allocation ids do not match"; - finalDecision = ClusterAllocationExplanation.FinalDecision.NO; - } else if (shard.primary() && shard.unassigned() && shard.recoverySource().getType() == RecoverySource.Type.EXISTING_STORE && - storeCopy == ClusterAllocationExplanation.StoreCopy.NONE) { - finalExplanation = "there is no copy of the shard available"; - finalDecision = ClusterAllocationExplanation.FinalDecision.NO; - } else if (shard.primary() && shard.unassigned() && shard.recoverySource().getType() == RecoverySource.Type.EXISTING_STORE && - storeCopy == ClusterAllocationExplanation.StoreCopy.CORRUPT) { - finalExplanation = "the copy of the shard is corrupt"; - finalDecision = ClusterAllocationExplanation.FinalDecision.NO; - } else if (shard.primary() && shard.unassigned() && shard.recoverySource().getType() == RecoverySource.Type.EXISTING_STORE && - storeCopy == ClusterAllocationExplanation.StoreCopy.IO_ERROR) { - finalExplanation = "the copy of the shard cannot be read"; - finalDecision = ClusterAllocationExplanation.FinalDecision.NO; - } else { - if (nodeDecision.type() == Decision.Type.NO) { - finalDecision = ClusterAllocationExplanation.FinalDecision.NO; - finalExplanation = "the shard cannot be assigned because one or more allocation decider returns a 'NO' decision"; - } else { - // TODO: handle throttling decision better here - finalDecision = ClusterAllocationExplanation.FinalDecision.YES; - if (storeCopy == ClusterAllocationExplanation.StoreCopy.AVAILABLE) { - finalExplanation = "the shard can be assigned and the node contains a valid copy of the shard data"; - } else { - finalExplanation = "the shard can be assigned"; - } - } - } - return new NodeExplanation(node, nodeDecision, nodeWeight, storeStatus, finalDecision, finalExplanation, storeCopy); + ClusterAllocationExplanation cae = explainShard(shardRouting, allocation, + request.includeDiskInfo() ? clusterInfo : null, request.includeYesDecisions(), gatewayAllocator, shardAllocator); + listener.onResponse(new ClusterAllocationExplainResponse(cae)); } + // public for testing + public static ClusterAllocationExplanation explainShard(ShardRouting shardRouting, RoutingAllocation allocation, + ClusterInfo clusterInfo, boolean includeYesDecisions, + GatewayAllocator gatewayAllocator, ShardsAllocator shardAllocator) { + allocation.setDebugMode(includeYesDecisions ? DebugMode.ON : DebugMode.EXCLUDE_YES_DECISIONS); - /** - * For the given {@code ShardRouting}, return the explanation of the allocation for that shard on all nodes. If {@code - * includeYesDecisions} is true, returns all decisions, otherwise returns only 'NO' and 'THROTTLE' decisions. 
- */ - public static ClusterAllocationExplanation explainShard(ShardRouting shard, RoutingAllocation allocation, RoutingNodes routingNodes, - boolean includeYesDecisions, ShardsAllocator shardAllocator, - List shardStores, - GatewayAllocator gatewayAllocator, ClusterInfo clusterInfo) { - // don't short circuit deciders, we want a full explanation - allocation.debugDecision(true); - // get the existing unassigned info if available - UnassignedInfo ui = shard.unassignedInfo(); - - Map nodeToDecision = new HashMap<>(); - for (RoutingNode node : routingNodes) { - DiscoveryNode discoNode = node.node(); - if (discoNode.isDataNode()) { - Decision d = tryShardOnNode(shard, node, allocation, includeYesDecisions); - nodeToDecision.put(discoNode, d); + ShardAllocationDecision shardDecision; + if (shardRouting.initializing() || shardRouting.relocating()) { + shardDecision = ShardAllocationDecision.NOT_TAKEN; + } else { + AllocateUnassignedDecision allocateDecision = shardRouting.unassigned() ? + gatewayAllocator.decideUnassignedShardAllocation(shardRouting, allocation) : AllocateUnassignedDecision.NOT_TAKEN; + if (allocateDecision.isDecisionTaken() == false) { + shardDecision = shardAllocator.decideShardAllocation(shardRouting, allocation); + } else { + shardDecision = new ShardAllocationDecision(allocateDecision, MoveDecision.NOT_TAKEN); } } - long remainingDelayMillis = 0; - final MetaData metadata = allocation.metaData(); - final IndexMetaData indexMetaData = metadata.index(shard.index()); - long allocationDelayMillis = INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.get(indexMetaData.getSettings()).getMillis(); - if (ui != null && ui.isDelayed()) { - long remainingDelayNanos = ui.getRemainingDelay(System.nanoTime(), indexMetaData.getSettings()); - remainingDelayMillis = TimeValue.timeValueNanos(remainingDelayNanos).millis(); - } - // Calculate weights for each of the nodes - Map weights = shardAllocator.weighShard(allocation, shard); - - Map nodeToStatus = new HashMap<>(shardStores.size()); - for (IndicesShardStoresResponse.StoreStatus status : shardStores) { - nodeToStatus.put(status.getNode(), status); - } - - Map explanations = new HashMap<>(shardStores.size()); - for (Map.Entry entry : nodeToDecision.entrySet()) { - DiscoveryNode node = entry.getKey(); - Decision decision = entry.getValue(); - Float weight = weights.get(node); - IndicesShardStoresResponse.StoreStatus storeStatus = nodeToStatus.get(node); - NodeExplanation nodeExplanation = calculateNodeExplanation(shard, indexMetaData, node, decision, weight, - storeStatus, shard.currentNodeId(), indexMetaData.inSyncAllocationIds(shard.getId()), - allocation.hasPendingAsyncFetch()); - explanations.put(node, nodeExplanation); - } - return new ClusterAllocationExplanation(shard.shardId(), shard.primary(), - shard.currentNodeId(), allocationDelayMillis, remainingDelayMillis, ui, - gatewayAllocator.hasFetchPending(shard.shardId(), shard.primary()), explanations, clusterInfo); + return new ClusterAllocationExplanation(shardRouting, + shardRouting.currentNodeId() != null ? allocation.nodes().get(shardRouting.currentNodeId()) : null, + shardRouting.relocatingNodeId() != null ? 
allocation.nodes().get(shardRouting.relocatingNodeId()) : null, + clusterInfo, shardDecision); } - @Override - protected void masterOperation(final ClusterAllocationExplainRequest request, final ClusterState state, - final ActionListener listener) { - final RoutingNodes routingNodes = state.getRoutingNodes(); - final ClusterInfo clusterInfo = clusterInfoService.getClusterInfo(); - final RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, state, - clusterInfo, System.nanoTime(), false); - + // public for testing + public static ShardRouting findShardToExplain(ClusterAllocationExplainRequest request, RoutingAllocation allocation) { ShardRouting foundShard = null; if (request.useAnyUnassignedShard()) { // If we can use any shard, just pick the first unassigned one (if there are any) - RoutingNodes.UnassignedShards.UnassignedIterator ui = routingNodes.unassigned().iterator(); + RoutingNodes.UnassignedShards.UnassignedIterator ui = allocation.routingNodes().unassigned().iterator(); if (ui.hasNext()) { foundShard = ui.next(); } + if (foundShard == null) { + throw new IllegalArgumentException("unable to find any unassigned shards to explain [" + request + "]"); + } } else { String index = request.getIndex(); int shard = request.getShard(); if (request.isPrimary()) { // If we're looking for the primary shard, there's only one copy, so pick it directly foundShard = allocation.routingTable().shardRoutingTable(index, shard).primaryShard(); + if (request.getCurrentNode() != null) { + DiscoveryNode primaryNode = allocation.nodes().resolveNode(request.getCurrentNode()); + // the primary is assigned to a node other than the node specified in the request + if (primaryNode.getId().equals(foundShard.currentNodeId()) == false) { + throw new IllegalArgumentException( + "unable to find primary shard assigned to node [" + request.getCurrentNode() + "]"); + } + } } else { // If looking for a replica, go through all the replica shards List replicaShardRoutings = allocation.routingTable().shardRoutingTable(index, shard).replicaShards(); - if (replicaShardRoutings.size() > 0) { - // Pick the first replica at the very least - foundShard = replicaShardRoutings.get(0); - // In case there are multiple replicas where some are assigned and some aren't, - // try to find one that is unassigned at least + if (request.getCurrentNode() != null) { + // the request is to explain a replica shard already assigned on a particular node, + // so find that shard copy + DiscoveryNode replicaNode = allocation.nodes().resolveNode(request.getCurrentNode()); for (ShardRouting replica : replicaShardRoutings) { - if (replica.unassigned()) { + if (replicaNode.getId().equals(replica.currentNodeId())) { foundShard = replica; break; } } + if (foundShard == null) { + throw new IllegalArgumentException("unable to find a replica shard assigned to node [" + + request.getCurrentNode() + "]"); + } + } else { + if (replicaShardRoutings.size() > 0) { + // Pick the first replica at the very least + foundShard = replicaShardRoutings.get(0); + for (ShardRouting replica : replicaShardRoutings) { + // In case there are multiple replicas where some are assigned and some aren't, + // try to find one that is unassigned at least + if (replica.unassigned()) { + foundShard = replica; + break; + } else if (replica.started() && (foundShard.initializing() || foundShard.relocating())) { + // prefer started shards to initializing or relocating shards because started shards + // can be explained + foundShard = replica; + } + } + } } } 
} if (foundShard == null) { - listener.onFailure(new ElasticsearchException("unable to find any shards to explain [{}] in the routing table", request)); - return; + throw new IllegalArgumentException("unable to find any shards to explain [" + request + "] in the routing table"); } - final ShardRouting shardRouting = foundShard; - logger.debug("explaining the allocation for [{}], found shard [{}]", request, shardRouting); - - getShardStores(shardRouting, new ActionListener() { - @Override - public void onResponse(IndicesShardStoresResponse shardStoreResponse) { - ImmutableOpenIntMap> shardStatuses = - shardStoreResponse.getStoreStatuses().get(shardRouting.getIndexName()); - List shardStoreStatus = shardStatuses.get(shardRouting.id()); - ClusterAllocationExplanation cae = explainShard(shardRouting, allocation, routingNodes, - request.includeYesDecisions(), shardAllocator, shardStoreStatus, gatewayAllocator, - request.includeDiskInfo() ? clusterInfo : null); - listener.onResponse(new ClusterAllocationExplainResponse(cae)); - } - - @Override - public void onFailure(Exception e) { - listener.onFailure(e); - } - }); - } - - private void getShardStores(ShardRouting shard, final ActionListener listener) { - IndicesShardStoresRequest request = new IndicesShardStoresRequest(shard.getIndexName()); - request.shardStatuses("all"); - shardStoresAction.execute(request, listener); + return foundShard; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/health/ClusterHealthResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/health/ClusterHealthResponse.java index d483ae86bf76e..a9a2c36970ee4 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/health/ClusterHealthResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/health/ClusterHealthResponse.java @@ -24,22 +24,19 @@ import org.elasticsearch.cluster.health.ClusterHealthStatus; import org.elasticsearch.cluster.health.ClusterIndexHealth; import org.elasticsearch.cluster.health.ClusterStateHealth; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.xcontent.StatusToXContent; +import org.elasticsearch.common.xcontent.StatusToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.rest.RestStatus; import java.io.IOException; import java.util.Locale; import java.util.Map; -/** - * - */ -public class ClusterHealthResponse extends ActionResponse implements StatusToXContent { +public class ClusterHealthResponse extends ActionResponse implements StatusToXContentObject { private String clusterName; private int numberOfPendingTasks = 0; private int numberOfInFlightFetch = 0; @@ -203,18 +200,9 @@ public void writeTo(StreamOutput out) throws IOException { taskMaxWaitingTime.writeTo(out); } - @Override public String toString() { - try { - XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint(); - builder.startObject(); - toXContent(builder, EMPTY_PARAMS); - builder.endObject(); - return builder.string(); - } catch (IOException e) { - return "{ \"error\" : \"" + e.getMessage() + "\"}"; - } + return Strings.toString(this); } @Override @@ -243,6 +231,7 @@ public RestStatus status() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + 
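// Now that the response is a StatusToXContentObject, toXContent() opens and closes its own JSON
// object (the startObject()/endObject() calls below), which is what lets toString() above simply
// delegate to Strings.toString(this).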
builder.startObject(); builder.field(CLUSTER_NAME, getClusterName()); builder.field(STATUS, getStatus().name().toLowerCase(Locale.ROOT)); builder.field(TIMED_OUT, isTimedOut()); @@ -271,6 +260,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws } builder.endObject(); } + builder.endObject(); return builder; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java index 9314079424012..44c604dc8b845 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java @@ -26,6 +26,7 @@ import org.elasticsearch.action.support.ActiveShardCount; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.action.support.master.TransportMasterNodeReadAction; +import org.elasticsearch.cluster.LocalClusterUpdateTask; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.ClusterStateObserver; import org.elasticsearch.cluster.ClusterStateUpdateTask; @@ -44,9 +45,8 @@ import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; -/** - * - */ +import java.util.function.Predicate; + public class TransportClusterHealthAction extends TransportMasterNodeReadAction { private final GatewayAllocator gatewayAllocator; @@ -86,37 +86,55 @@ protected final void masterOperation(ClusterHealthRequest request, ClusterState protected void masterOperation(Task task, final ClusterHealthRequest request, final ClusterState unusedState, final ActionListener listener) { if (request.waitForEvents() != null) { final long endTimeMS = TimeValue.nsecToMSec(System.nanoTime()) + request.timeout().millis(); - clusterService.submitStateUpdateTask("cluster_health (wait_for_events [" + request.waitForEvents() + "])", new ClusterStateUpdateTask(request.waitForEvents()) { - @Override - public ClusterState execute(ClusterState currentState) { - return currentState; - } + if (request.local()) { + clusterService.submitStateUpdateTask("cluster_health (wait_for_events [" + request.waitForEvents() + "])", new LocalClusterUpdateTask(request.waitForEvents()) { + @Override + public ClusterTasksResult execute(ClusterState currentState) { + return unchanged(); + } - @Override - public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) { - final long timeoutInMillis = Math.max(0, endTimeMS - TimeValue.nsecToMSec(System.nanoTime())); - final TimeValue newTimeout = TimeValue.timeValueMillis(timeoutInMillis); - request.timeout(newTimeout); - executeHealth(request, listener); - } + @Override + public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) { + final long timeoutInMillis = Math.max(0, endTimeMS - TimeValue.nsecToMSec(System.nanoTime())); + final TimeValue newTimeout = TimeValue.timeValueMillis(timeoutInMillis); + request.timeout(newTimeout); + executeHealth(request, listener); + } - @Override - public void onNoLongerMaster(String source) { - logger.trace("stopped being master while waiting for events with priority [{}]. 
retrying.", request.waitForEvents()); - doExecute(task, request, listener); - } + @Override + public void onFailure(String source, Exception e) { + logger.error((Supplier) () -> new ParameterizedMessage("unexpected failure during [{}]", source), e); + listener.onFailure(e); + } + }); + } else { + clusterService.submitStateUpdateTask("cluster_health (wait_for_events [" + request.waitForEvents() + "])", new ClusterStateUpdateTask(request.waitForEvents()) { + @Override + public ClusterState execute(ClusterState currentState) { + return currentState; + } - @Override - public void onFailure(String source, Exception e) { - logger.error((Supplier) () -> new ParameterizedMessage("unexpected failure during [{}]", source), e); - listener.onFailure(e); - } + @Override + public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) { + final long timeoutInMillis = Math.max(0, endTimeMS - TimeValue.nsecToMSec(System.nanoTime())); + final TimeValue newTimeout = TimeValue.timeValueMillis(timeoutInMillis); + request.timeout(newTimeout); + executeHealth(request, listener); + } - @Override - public boolean runOnlyOnMaster() { - return !request.local(); - } - }); + @Override + public void onNoLongerMaster(String source) { + logger.trace("stopped being master while waiting for events with priority [{}]. retrying.", request.waitForEvents()); + doExecute(task, request, listener); + } + + @Override + public void onFailure(String source, Exception e) { + logger.error((Supplier) () -> new ParameterizedMessage("unexpected failure during [{}]", source), e); + listener.onFailure(e); + } + }); + } } else { executeHealth(request, listener); } @@ -142,19 +160,14 @@ private void executeHealth(final ClusterHealthRequest request, final ActionListe } assert waitFor >= 0; - final ClusterStateObserver observer = new ClusterStateObserver(clusterService, logger, threadPool.getThreadContext()); - final ClusterState state = observer.observedState(); + final ClusterState state = clusterService.state(); + final ClusterStateObserver observer = new ClusterStateObserver(state, clusterService, null, logger, threadPool.getThreadContext()); if (request.timeout().millis() == 0) { listener.onResponse(getResponse(request, state, waitFor, request.timeout().millis() == 0)); return; } final int concreteWaitFor = waitFor; - final ClusterStateObserver.ChangePredicate validationPredicate = new ClusterStateObserver.ValidationPredicate() { - @Override - protected boolean validate(ClusterState newState) { - return newState.status() == ClusterState.ClusterStateStatus.APPLIED && validateRequest(request, newState, concreteWaitFor); - } - }; + final Predicate validationPredicate = newState -> validateRequest(request, newState, concreteWaitFor); final ClusterStateObserver.Listener stateListener = new ClusterStateObserver.Listener() { @Override @@ -169,12 +182,11 @@ public void onClusterServiceClose() { @Override public void onTimeout(TimeValue timeout) { - final ClusterState clusterState = clusterService.state(); - final ClusterHealthResponse response = getResponse(request, clusterState, concreteWaitFor, true); + final ClusterHealthResponse response = getResponse(request, observer.setAndGetObservedState(), concreteWaitFor, true); listener.onResponse(response); } }; - if (state.status() == ClusterState.ClusterStateStatus.APPLIED && validateRequest(request, state, concreteWaitFor)) { + if (validationPredicate.test(state)) { stateListener.onNewClusterState(state); } else { observer.waitForNextChange(stateListener, 
validationPredicate, request.timeout()); diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/hotthreads/TransportNodesHotThreadsAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/hotthreads/TransportNodesHotThreadsAction.java index 73403f40318cb..53d74432bae62 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/hotthreads/TransportNodesHotThreadsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/hotthreads/TransportNodesHotThreadsAction.java @@ -24,7 +24,6 @@ import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.nodes.BaseNodeRequest; import org.elasticsearch.action.support.nodes.TransportNodesAction; -import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; @@ -85,11 +84,6 @@ protected NodeHotThreads nodeOperation(NodeRequest request) { } } - @Override - protected boolean accumulateExceptions() { - return false; - } - public static class NodeRequest extends BaseNodeRequest { NodesHotThreadsRequest request; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodeInfo.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodeInfo.java index dd243cf6d88eb..0ab12fe6c0912 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodeInfo.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodeInfo.java @@ -220,7 +220,7 @@ public void writeTo(StreamOutput out) throws IOException { out.writeBoolean(false); } else { out.writeBoolean(true); - out.writeLong(totalIndexingBuffer.bytes()); + out.writeLong(totalIndexingBuffer.getBytes()); } if (settings == null) { out.writeBoolean(false); diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/TransportNodesInfoAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/TransportNodesInfoAction.java index 028198cf83140..6b6b74165fb24 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/TransportNodesInfoAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/TransportNodesInfoAction.java @@ -29,7 +29,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -79,11 +79,6 @@ protected NodeInfo nodeOperation(NodeInfoRequest nodeRequest) { request.transport(), request.http(), request.plugins(), request.ingest(), request.indices()); } - @Override - protected boolean accumulateExceptions() { - return false; - } - public static class NodeInfoRequest extends BaseNodeRequest { NodesInfoRequest request; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/liveness/LivenessRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/liveness/LivenessRequest.java index 033dd5957d9fe..d6441bb8e77f5 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/liveness/LivenessRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/liveness/LivenessRequest.java @@ -25,7 +25,7 @@ * 
Transport level private response for the transport handler registered under * {@value org.elasticsearch.action.admin.cluster.node.liveness.TransportLivenessAction#NAME} */ -public final class LivenessRequest extends ActionRequest { +public final class LivenessRequest extends ActionRequest { @Override public ActionRequestValidationException validate() { return null; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStats.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStats.java index 38e91e342413a..9e301368453eb 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStats.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStats.java @@ -249,26 +249,26 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - if (!params.param("node_info_format", "default").equals("none")) { - builder.field("name", getNode().getName()); - builder.field("transport_address", getNode().getAddress().toString()); - builder.field("host", getNode().getHostName()); - builder.field("ip", getNode().getAddress()); - - builder.startArray("roles"); - for (DiscoveryNode.Role role : getNode().getRoles()) { - builder.value(role.getRoleName()); - } - builder.endArray(); - - if (!getNode().getAttributes().isEmpty()) { - builder.startObject("attributes"); - for (Map.Entry attrEntry : getNode().getAttributes().entrySet()) { - builder.field(attrEntry.getKey(), attrEntry.getValue()); - } - builder.endObject(); + + builder.field("name", getNode().getName()); + builder.field("transport_address", getNode().getAddress().toString()); + builder.field("host", getNode().getHostName()); + builder.field("ip", getNode().getAddress()); + + builder.startArray("roles"); + for (DiscoveryNode.Role role : getNode().getRoles()) { + builder.value(role.getRoleName()); + } + builder.endArray(); + + if (!getNode().getAttributes().isEmpty()) { + builder.startObject("attributes"); + for (Map.Entry attrEntry : getNode().getAttributes().entrySet()) { + builder.field(attrEntry.getKey(), attrEntry.getValue()); } + builder.endObject(); } + if (getIndices() != null) { getIndices().toXContent(builder, params); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java index 5863e54d08fa1..f5ae1a780c048 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java @@ -29,7 +29,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -79,11 +79,6 @@ protected NodeStats nodeOperation(NodeStatsRequest nodeStatsRequest) { request.ingest()); } - @Override - protected boolean accumulateExceptions() { - return false; - } - public static class NodeStatsRequest extends BaseNodeRequest { NodesStatsRequest request; diff --git 
a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/cancel/TransportCancelTasksAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/cancel/TransportCancelTasksAction.java index dc52e4fd5080b..aca1be7adff4c 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/cancel/TransportCancelTasksAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/cancel/TransportCancelTasksAction.java @@ -19,14 +19,16 @@ package org.elasticsearch.action.admin.cluster.node.tasks.cancel; +import com.carrotsearch.hppc.cursors.ObjectObjectCursor; import org.elasticsearch.ResourceNotFoundException; +import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.FailedNodeException; import org.elasticsearch.action.TaskOperationFailure; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.tasks.TransportTasksAction; -import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.io.stream.StreamInput; @@ -45,10 +47,9 @@ import org.elasticsearch.transport.TransportService; import java.io.IOException; +import java.util.ArrayList; import java.util.List; -import java.util.Set; import java.util.concurrent.atomic.AtomicInteger; -import java.util.concurrent.atomic.AtomicReference; import java.util.function.Consumer; /** @@ -111,92 +112,109 @@ protected void processTasks(CancelTasksRequest request, Consumer removeBanOnNodes(cancellableTask, nodes)); - Set childNodes = taskManager.cancel(cancellableTask, request.getReason(), banLock::onTaskFinished); - if (childNodes != null) { - if (childNodes.isEmpty()) { - logger.trace("cancelling task {} with no children", cancellableTask.getId()); - return cancellableTask.taskInfo(clusterService.localNode(), false); - } else { - logger.trace("cancelling task {} with children on nodes [{}]", cancellableTask.getId(), childNodes); - setBanOnNodes(request.getReason(), cancellableTask, childNodes, banLock); - return cancellableTask.taskInfo(clusterService.localNode(), false); + protected synchronized void taskOperation(CancelTasksRequest request, CancellableTask cancellableTask, + ActionListener listener) { + String nodeId = clusterService.localNode().getId(); + final boolean canceled; + if (cancellableTask.shouldCancelChildrenOnCancellation()) { + DiscoveryNodes childNodes = clusterService.state().nodes(); + final BanLock banLock = new BanLock(childNodes.getSize(), () -> removeBanOnNodes(cancellableTask, childNodes)); + canceled = taskManager.cancel(cancellableTask, request.getReason(), banLock::onTaskFinished); + if (canceled) { + // /In case the task has some child tasks, we need to wait for until ban is set on all nodes + logger.trace("cancelling task {} on child nodes", cancellableTask.getId()); + AtomicInteger responses = new AtomicInteger(childNodes.getSize()); + List failures = new ArrayList<>(); + setBanOnNodes(request.getReason(), cancellableTask, childNodes, new ActionListener() { + @Override + public void onResponse(Void aVoid) { + processResponse(); + } + + @Override + public void onFailure(Exception e) { + synchronized (failures) { + failures.add(e); + } + processResponse(); + } + + private void processResponse() { + 
banLock.onBanSet(); + if (responses.decrementAndGet() == 0) { + if (failures.isEmpty() == false) { + IllegalStateException exception = new IllegalStateException("failed to cancel children of the task [" + + cancellableTask.getId() + "]"); + failures.forEach(exception::addSuppressed); + listener.onFailure(exception); + } else { + listener.onResponse(cancellableTask.taskInfo(nodeId, false)); + } + } + } + }); } - } else { + } else { + canceled = taskManager.cancel(cancellableTask, request.getReason(), + () -> listener.onResponse(cancellableTask.taskInfo(nodeId, false))); + if (canceled) { + logger.trace("task {} doesn't have any children that should be cancelled", cancellableTask.getId()); + } + } + if (canceled == false) { logger.trace("task {} is already cancelled", cancellableTask.getId()); throw new IllegalStateException("task with id " + cancellableTask.getId() + " is already cancelled"); } } - @Override - protected boolean accumulateExceptions() { - return true; - } - - private void setBanOnNodes(String reason, CancellableTask task, Set nodes, BanLock banLock) { + private void setBanOnNodes(String reason, CancellableTask task, DiscoveryNodes nodes, ActionListener listener) { sendSetBanRequest(nodes, BanParentTaskRequest.createSetBanParentTaskRequest(new TaskId(clusterService.localNode().getId(), task.getId()), reason), - banLock); + listener); } - private void removeBanOnNodes(CancellableTask task, Set nodes) { + private void removeBanOnNodes(CancellableTask task, DiscoveryNodes nodes) { sendRemoveBanRequest(nodes, BanParentTaskRequest.createRemoveBanParentTaskRequest(new TaskId(clusterService.localNode().getId(), task.getId()))); } - private void sendSetBanRequest(Set nodes, BanParentTaskRequest request, BanLock banLock) { - ClusterState clusterState = clusterService.state(); - for (String node : nodes) { - DiscoveryNode discoveryNode = clusterState.getNodes().get(node); - if (discoveryNode != null) { - // Check if node still in the cluster - logger.debug("Sending ban for tasks with the parent [{}] to the node [{}], ban [{}]", request.parentTaskId, node, - request.ban); - transportService.sendRequest(discoveryNode, BAN_PARENT_ACTION_NAME, request, - new EmptyTransportResponseHandler(ThreadPool.Names.SAME) { - @Override - public void handleResponse(TransportResponse.Empty response) { - banLock.onBanSet(); - } - - @Override - public void handleException(TransportException exp) { - banLock.onBanSet(); - } - }); - } else { - banLock.onBanSet(); - logger.debug("Cannot send ban for tasks with the parent [{}] to the node [{}] - the node no longer in the cluster", - request.parentTaskId, node); - } + private void sendSetBanRequest(DiscoveryNodes nodes, BanParentTaskRequest request, ActionListener listener) { + for (ObjectObjectCursor node : nodes.getNodes()) { + logger.trace("Sending ban for tasks with the parent [{}] to the node [{}], ban [{}]", request.parentTaskId, node.key, + request.ban); + transportService.sendRequest(node.value, BAN_PARENT_ACTION_NAME, request, + new EmptyTransportResponseHandler(ThreadPool.Names.SAME) { + @Override + public void handleResponse(TransportResponse.Empty response) { + listener.onResponse(null); + } + + @Override + public void handleException(TransportException exp) { + logger.warn("Cannot send ban for tasks with the parent [{}] to the node [{}]", request.parentTaskId, node.key); + listener.onFailure(exp); + } + }); } } - private void sendRemoveBanRequest(Set nodes, BanParentTaskRequest request) { - ClusterState clusterState = clusterService.state(); - for 
(String node : nodes) { - DiscoveryNode discoveryNode = clusterState.getNodes().get(node); - if (discoveryNode != null) { - // Check if node still in the cluster - logger.debug("Sending remove ban for tasks with the parent [{}] to the node [{}]", request.parentTaskId, node); - transportService.sendRequest(discoveryNode, BAN_PARENT_ACTION_NAME, request, EmptyTransportResponseHandler - .INSTANCE_SAME); - } else { - logger.debug("Cannot send remove ban request for tasks with the parent [{}] to the node [{}] - the node no longer in " + - "the cluster", request.parentTaskId, node); - } + private void sendRemoveBanRequest(DiscoveryNodes nodes, BanParentTaskRequest request) { + for (ObjectObjectCursor node : nodes.getNodes()) { + logger.debug("Sending remove ban for tasks with the parent [{}] to the node [{}]", request.parentTaskId, node.key); + transportService.sendRequest(node.value, BAN_PARENT_ACTION_NAME, request, EmptyTransportResponseHandler + .INSTANCE_SAME); } } private static class BanLock { - private final Consumer> finish; + private final Runnable finish; private final AtomicInteger counter; - private final AtomicReference> nodes = new AtomicReference<>(); + private final int nodesSize; - public BanLock(Consumer> finish) { + BanLock(int nodesSize, Runnable finish) { counter = new AtomicInteger(0); this.finish = finish; + this.nodesSize = nodesSize; } public void onBanSet() { @@ -205,15 +223,14 @@ public void onBanSet() { } } - public void onTaskFinished(Set nodes) { - this.nodes.set(nodes); - if (counter.addAndGet(nodes.size()) == 0) { + public void onTaskFinished() { + if (counter.addAndGet(nodesSize) == 0) { finish(); } } public void finish() { - finish.accept(nodes.get()); + finish.run(); } } @@ -245,7 +262,7 @@ private BanParentTaskRequest(TaskId parentTaskId) { this.ban = false; } - public BanParentTaskRequest() { + BanParentTaskRequest() { } @Override @@ -285,5 +302,4 @@ public void messageReceived(final BanParentTaskRequest request, final TransportC } } - } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/GetTaskRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/GetTaskRequest.java index efbc9679e714e..07d40b5ffcaa0 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/GetTaskRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/GetTaskRequest.java @@ -33,7 +33,7 @@ /** * A request to get node tasks */ -public class GetTaskRequest extends ActionRequest { +public class GetTaskRequest extends ActionRequest { private TaskId taskId = TaskId.EMPTY_TASK_ID; private boolean waitForCompletion = false; private TimeValue timeout = null; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/GetTaskResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/GetTaskResponse.java index ffd4b35831421..72f26d2d57692 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/GetTaskResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/GetTaskResponse.java @@ -23,7 +23,7 @@ import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import 
org.elasticsearch.tasks.TaskResult; @@ -34,7 +34,7 @@ /** * Returns the list of tasks currently running on the nodes */ -public class GetTaskResponse extends ActionResponse implements ToXContent { +public class GetTaskResponse extends ActionResponse implements ToXContentObject { private TaskResult task; public GetTaskResponse() { @@ -65,7 +65,10 @@ public TaskResult getTask() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - return task.innerToXContent(builder, params); + builder.startObject(); + task.innerToXContent(builder, params); + builder.endObject(); + return builder; } @Override diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/TransportGetTaskAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/TransportGetTaskAction.java index dc77a1a6e8415..30d71a992fd95 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/TransportGetTaskAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/TransportGetTaskAction.java @@ -31,17 +31,17 @@ import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.AbstractRunnable; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.IndexNotFoundException; -import org.elasticsearch.tasks.TaskResult; import org.elasticsearch.tasks.Task; import org.elasticsearch.tasks.TaskId; import org.elasticsearch.tasks.TaskInfo; +import org.elasticsearch.tasks.TaskResult; import org.elasticsearch.tasks.TaskResultsService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportException; @@ -67,14 +67,17 @@ public class TransportGetTaskAction extends HandledTransportAction { + if (e instanceof ResourceNotFoundException) { + e = new ResourceNotFoundException( + "task [" + request.getTaskId() + "] belongs to the node [" + request.getTaskId().getNodeId() + + "] which isn't part of the cluster and there is no record of the task", + e); + } + listener.onFailure(e); + })); return; } GetTaskRequest nodeRequest = request.nodeRequest(clusterService.localNode().getId(), thisTask.getId()); - taskManager.registerChildTask(thisTask, node.getId()); transportService.sendRequest(node, GetTaskAction.NAME, nodeRequest, builder.build(), new TransportResponseHandler() { @Override @@ -150,7 +160,7 @@ void getRunningTaskFromNode(Task thisTask, GetTaskRequest request, ActionListene @Override protected void doRun() throws Exception { taskManager.waitForTaskCompletion(runningTask, waitForCompletionTimeout(request.getTimeout())); - waitedForCompletion(thisTask, request, runningTask.taskInfo(clusterService.localNode(), true), listener); + waitedForCompletion(thisTask, request, runningTask.taskInfo(clusterService.localNode().getId(), true), listener); } @Override @@ -159,7 +169,7 @@ public void onFailure(Exception e) { } }); } else { - TaskInfo info = runningTask.taskInfo(clusterService.localNode(), true); + TaskInfo info = runningTask.taskInfo(clusterService.localNode().getId(), true); 
listener.onResponse(new GetTaskResponse(new TaskResult(false, info))); } } @@ -216,7 +226,7 @@ public void onResponse(GetResponse getResponse) { public void onFailure(Exception e) { if (ExceptionsHelper.unwrap(e, IndexNotFoundException.class) != null) { // We haven't yet created the index for the task results so it can't be found. - listener.onFailure(new ResourceNotFoundException("task [{}] isn't running or stored its results", e, + listener.onFailure(new ResourceNotFoundException("task [{}] isn't running and hasn't stored its results", e, request.getTaskId())); } else { listener.onFailure(e); @@ -231,15 +241,15 @@ public void onFailure(Exception e) { */ void onGetFinishedTaskFromIndex(GetResponse response, ActionListener listener) throws IOException { if (false == response.isExists()) { - listener.onFailure(new ResourceNotFoundException("task [{}] isn't running or stored its results", response.getId())); + listener.onFailure(new ResourceNotFoundException("task [{}] isn't running and hasn't stored its results", response.getId())); return; } if (response.isSourceEmpty()) { listener.onFailure(new ElasticsearchException("Stored task status for [{}] didn't contain any source!", response.getId())); return; } - try (XContentParser parser = XContentHelper.createParser(response.getSourceAsBytesRef())) { - TaskResult result = TaskResult.PARSER.apply(parser, () -> ParseFieldMatcher.STRICT); + try (XContentParser parser = XContentHelper.createParser(xContentRegistry, response.getSourceAsBytesRef())) { + TaskResult result = TaskResult.PARSER.apply(parser, null); listener.onResponse(new GetTaskResponse(result)); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/ListTasksResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/ListTasksResponse.java index b33226b973ba7..a203dd35b47ff 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/ListTasksResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/ListTasksResponse.java @@ -27,7 +27,7 @@ import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.tasks.TaskId; import org.elasticsearch.tasks.TaskInfo; @@ -43,7 +43,7 @@ /** * Returns the list of tasks currently running on the nodes */ -public class ListTasksResponse extends BaseTasksResponse implements ToXContent { +public class ListTasksResponse extends BaseTasksResponse implements ToXContentObject { private List tasks; @@ -161,8 +161,9 @@ public XContentBuilder toXContentGroupedByNode(XContentBuilder builder, Params p } builder.startObject("tasks"); for(TaskInfo task : entry.getValue()) { - builder.field(task.getTaskId().toString()); + builder.startObject(task.getTaskId().toString()); task.toXContent(builder, params); + builder.endObject(); } builder.endObject(); builder.endObject(); @@ -187,7 +188,10 @@ public XContentBuilder toXContentGroupedByParents(XContentBuilder builder, Param @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - return toXContentGroupedByParents(builder, params); + builder.startObject(); + toXContentGroupedByParents(builder, params); + builder.endObject(); + return builder; } private void 
toXContentCommon(XContentBuilder builder, Params params) throws IOException { @@ -214,6 +218,6 @@ private void toXContentCommon(XContentBuilder builder, Params params) throws IOE @Override public String toString() { - return Strings.toString(this, true); + return Strings.toString(this); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TaskGroup.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TaskGroup.java index b254137163d75..87bf70acede44 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TaskGroup.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TaskGroup.java @@ -81,7 +81,7 @@ public List getChildTasks() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(); - task.innerToXContent(builder, params); + task.toXContent(builder, params); if (childTasks.isEmpty() == false) { builder.startArray("children"); for (TaskGroup taskGroup : childTasks) { diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TransportListTasksAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TransportListTasksAction.java index 32c0c3c1845c3..a3cca310d6ab2 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TransportListTasksAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TransportListTasksAction.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.cluster.node.tasks.list; +import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.FailedNodeException; import org.elasticsearch.action.TaskOperationFailure; import org.elasticsearch.action.support.ActionFilters; @@ -72,8 +73,8 @@ protected TaskInfo readTaskResponse(StreamInput in) throws IOException { } @Override - protected TaskInfo taskOperation(ListTasksRequest request, Task task) { - return task.taskInfo(clusterService.localNode(), request.getDetailed()); + protected void taskOperation(ListTasksRequest request, Task task, ActionListener listener) { + listener.onResponse(task.taskInfo(clusterService.localNode().getId(), request.getDetailed())); } @Override @@ -92,8 +93,4 @@ protected void processTasks(ListTasksRequest request, Consumer operation) super.processTasks(request, operation); } - @Override - protected boolean accumulateExceptions() { - return true; - } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoAction.java new file mode 100644 index 0000000000000..aa546c7dffd26 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoAction.java @@ -0,0 +1,43 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.action.admin.cluster.remote;
+
+import org.elasticsearch.action.Action;
+import org.elasticsearch.client.ElasticsearchClient;
+
+public final class RemoteInfoAction extends Action<RemoteInfoRequest, RemoteInfoResponse, RemoteInfoRequestBuilder> {
+
+    public static final String NAME = "cluster:monitor/remote/info";
+    public static final RemoteInfoAction INSTANCE = new RemoteInfoAction();
+
+    public RemoteInfoAction() {
+        super(NAME);
+    }
+
+    @Override
+    public RemoteInfoRequestBuilder newRequestBuilder(ElasticsearchClient client) {
+        return new RemoteInfoRequestBuilder(client, INSTANCE);
+    }
+
+    @Override
+    public RemoteInfoResponse newResponse() {
+        return new RemoteInfoResponse();
+    }
+}
diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoRequest.java
new file mode 100644
index 0000000000000..6e41f145b65e7
--- /dev/null
+++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoRequest.java
@@ -0,0 +1,32 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.action.admin.cluster.remote;
+
+import org.elasticsearch.action.ActionRequest;
+import org.elasticsearch.action.ActionRequestValidationException;
+
+public final class RemoteInfoRequest extends ActionRequest {
+
+    @Override
+    public ActionRequestValidationException validate() {
+        return null;
+    }
+
+}
diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoRequestBuilder.java
new file mode 100644
index 0000000000000..f46f5ecd2d3ca
--- /dev/null
+++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoRequestBuilder.java
@@ -0,0 +1,30 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.action.admin.cluster.remote;
+
+import org.elasticsearch.action.ActionRequestBuilder;
+import org.elasticsearch.client.ElasticsearchClient;
+
+public final class RemoteInfoRequestBuilder extends ActionRequestBuilder<RemoteInfoRequest, RemoteInfoResponse, RemoteInfoRequestBuilder> {
+
+    public RemoteInfoRequestBuilder(ElasticsearchClient client, RemoteInfoAction action) {
+        super(client, action, new RemoteInfoRequest());
+    }
+}
diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoResponse.java
new file mode 100644
index 0000000000000..8e9360bdb1238
--- /dev/null
+++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoResponse.java
@@ -0,0 +1,67 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.action.admin.cluster.remote;
+
+import org.elasticsearch.action.ActionResponse;
+import org.elasticsearch.transport.RemoteConnectionInfo;
+import org.elasticsearch.common.io.stream.StreamInput;
+import org.elasticsearch.common.io.stream.StreamOutput;
+import org.elasticsearch.common.xcontent.ToXContentObject;
+import org.elasticsearch.common.xcontent.XContentBuilder;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+
+public final class RemoteInfoResponse extends ActionResponse implements ToXContentObject {
+
+    private List<RemoteConnectionInfo> infos;
+
+    RemoteInfoResponse() {
+    }
+
+    RemoteInfoResponse(Collection<RemoteConnectionInfo> infos) {
+        this.infos = Collections.unmodifiableList(new ArrayList<>(infos));
+    }
+
+    @Override
+    public void writeTo(StreamOutput out) throws IOException {
+        super.writeTo(out);
+        out.writeList(infos);
+    }
+
+    @Override
+    public void readFrom(StreamInput in) throws IOException {
+        super.readFrom(in);
+        infos = in.readList(RemoteConnectionInfo::new);
+    }
+
+    @Override
+    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
+        builder.startObject();
+        for (RemoteConnectionInfo info : infos) {
+            info.toXContent(builder, params);
+        }
+        builder.endObject();
+        return builder;
+    }
+}
diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/TransportRemoteInfoAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/TransportRemoteInfoAction.java
new file mode 100644
index 0000000000000..33254a9aed9ab
--- /dev/null
+++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/TransportRemoteInfoAction.java
@@ -0,0 +1,51 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.action.admin.cluster.remote;
+
+import org.elasticsearch.action.ActionListener;
+import org.elasticsearch.transport.RemoteClusterService;
+import org.elasticsearch.action.search.SearchTransportService;
+import org.elasticsearch.action.support.ActionFilters;
+import org.elasticsearch.action.support.HandledTransportAction;
+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
+import org.elasticsearch.common.inject.Inject;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.threadpool.ThreadPool;
+import org.elasticsearch.transport.TransportService;
+
+public final class TransportRemoteInfoAction extends HandledTransportAction<RemoteInfoRequest, RemoteInfoResponse> {
+
+    private final RemoteClusterService remoteClusterService;
+
+    @Inject
+    public TransportRemoteInfoAction(Settings settings, ThreadPool threadPool, TransportService transportService,
+                                     ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver,
+                                     SearchTransportService searchTransportService) {
+        super(settings, RemoteInfoAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver,
+            RemoteInfoRequest::new);
+        this.remoteClusterService = searchTransportService.getRemoteClusterService();
+    }
+
+    @Override
+    protected void doExecute(RemoteInfoRequest remoteInfoRequest, ActionListener<RemoteInfoResponse> listener) {
+        remoteClusterService.getRemoteConnectionInfos(ActionListener.wrap(remoteConnectionInfos
+            -> listener.onResponse(new RemoteInfoResponse(remoteConnectionInfos)), listener::onFailure));
+    }
+}
diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/get/GetRepositoriesResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/get/GetRepositoriesResponse.java
index c933156fcb077..6d4cb83934548 100644
--- a/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/get/GetRepositoriesResponse.java
+++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/get/GetRepositoriesResponse.java
@@ -60,7 +60,7 @@ public List<RepositoryMetaData> repositories() {
     public void readFrom(StreamInput in) throws IOException {
         super.readFrom(in);
         int size = in.readVInt();
-        List<RepositoryMetaData> repositoryListBuilder = new ArrayList<>();
+        List<RepositoryMetaData> repositoryListBuilder = new ArrayList<>(size);
         for (int j = 0; j < size; j++) {
             repositoryListBuilder.add(new RepositoryMetaData(
                 in.readString(),
diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequest.java
index 83beb0016a972..f0f8d50b4c1c9 100644
--- a/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequest.java
+++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequest.java
@@ -22,22 +22,20 @@
 import org.elasticsearch.ElasticsearchGenerationException;
 import org.elasticsearch.action.ActionRequestValidationException;
 import org.elasticsearch.action.support.master.AcknowledgedRequest;
-import org.elasticsearch.common.bytes.BytesReference;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.xcontent.XContentBuilder;
 import org.elasticsearch.common.xcontent.XContentFactory;
-import org.elasticsearch.common.xcontent.XContentParser;
 import org.elasticsearch.common.xcontent.XContentType;
 import
java.io.IOException; import java.util.Map; import static org.elasticsearch.action.ValidateActions.addValidationError; -import static org.elasticsearch.common.settings.Settings.Builder.EMPTY_SETTINGS; import static org.elasticsearch.common.settings.Settings.readSettingsFromStream; import static org.elasticsearch.common.settings.Settings.writeSettingsToStream; +import static org.elasticsearch.common.settings.Settings.Builder.EMPTY_SETTINGS; /** * Register repository request. @@ -144,14 +142,28 @@ public PutRepositoryRequest settings(Settings.Builder settings) { /** * Sets the repository settings. * - * @param source repository settings in json, yaml or properties format + * @param source repository settings in json or yaml format * @return this request + * @deprecated use {@link #settings(String, XContentType)} to avoid content type auto-detection */ + @Deprecated public PutRepositoryRequest settings(String source) { this.settings = Settings.builder().loadFromSource(source).build(); return this; } + /** + * Sets the repository settings. + * + * @param source repository settings in json or yaml format + * @param xContentType the content type of the source + * @return this request + */ + public PutRepositoryRequest settings(String source, XContentType xContentType) { + this.settings = Settings.builder().loadFromSource(source, xContentType).build(); + return this; + } + /** * Sets the repository settings. * @@ -162,7 +174,7 @@ public PutRepositoryRequest settings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - settings(builder.string()); + settings(builder.string(), builder.contentType()); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -198,18 +210,8 @@ public boolean verify() { * * @param repositoryDefinition repository definition */ - public PutRepositoryRequest source(XContentBuilder repositoryDefinition) { - return source(repositoryDefinition.bytes()); - } - - /** - * Parses repository definition. - * - * @param repositoryDefinition repository definition - */ - public PutRepositoryRequest source(Map repositoryDefinition) { - Map source = repositoryDefinition; - for (Map.Entry entry : source.entrySet()) { + public PutRepositoryRequest source(Map repositoryDefinition) { + for (Map.Entry entry : repositoryDefinition.entrySet()) { String name = entry.getKey(); if (name.equals("type")) { type(entry.getValue().toString()); @@ -217,64 +219,14 @@ public PutRepositoryRequest source(Map repositoryDefinition) { if (!(entry.getValue() instanceof Map)) { throw new IllegalArgumentException("Malformed settings section, should include an inner object"); } - settings((Map) entry.getValue()); + @SuppressWarnings("unchecked") + Map sub = (Map) entry.getValue(); + settings(sub); } } return this; } - /** - * Parses repository definition. - * JSON, Smile and YAML formats are supported - * - * @param repositoryDefinition repository definition - */ - public PutRepositoryRequest source(String repositoryDefinition) { - try (XContentParser parser = XContentFactory.xContent(repositoryDefinition).createParser(repositoryDefinition)) { - return source(parser.mapOrdered()); - } catch (IOException e) { - throw new IllegalArgumentException("failed to parse repository source [" + repositoryDefinition + "]", e); - } - } - - /** - * Parses repository definition. 
- * JSON, Smile and YAML formats are supported - * - * @param repositoryDefinition repository definition - */ - public PutRepositoryRequest source(byte[] repositoryDefinition) { - return source(repositoryDefinition, 0, repositoryDefinition.length); - } - - /** - * Parses repository definition. - * JSON, Smile and YAML formats are supported - * - * @param repositoryDefinition repository definition - */ - public PutRepositoryRequest source(byte[] repositoryDefinition, int offset, int length) { - try (XContentParser parser = XContentFactory.xContent(repositoryDefinition, offset, length).createParser(repositoryDefinition, offset, length)) { - return source(parser.mapOrdered()); - } catch (IOException e) { - throw new IllegalArgumentException("failed to parse repository source", e); - } - } - - /** - * Parses repository definition. - * JSON, Smile and YAML formats are supported - * - * @param repositoryDefinition repository definition - */ - public PutRepositoryRequest source(BytesReference repositoryDefinition) { - try (XContentParser parser = XContentFactory.xContent(repositoryDefinition).createParser(repositoryDefinition)) { - return source(parser.mapOrdered()); - } catch (IOException e) { - throw new IllegalArgumentException("failed to parse template source", e); - } - } - @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequestBuilder.java index 39cfa6af7f750..aed09daff2a86 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequestBuilder.java @@ -22,6 +22,7 @@ import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentType; import java.util.Map; @@ -89,16 +90,30 @@ public PutRepositoryRequestBuilder setSettings(Settings.Builder settings) { } /** - * Sets the repository settings in Json, Yaml or properties format + * Sets the repository settings in Json or Yaml format * * @param source repository settings * @return this builder + * @deprecated use {@link #setSettings(String, XContentType)} instead to avoid content type auto detection */ + @Deprecated public PutRepositoryRequestBuilder setSettings(String source) { request.settings(source); return this; } + /** + * Sets the repository settings in Json or Yaml format + * + * @param source repository settings + * @param xContentType the contenty type of the source + * @return this builder + */ + public PutRepositoryRequestBuilder setSettings(String source, XContentType xContentType) { + request.settings(source, xContentType); + return this; + } + /** * Sets the repository settings * diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/verify/VerifyRepositoryResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/verify/VerifyRepositoryResponse.java index 5de45fafa6df4..27612a3dab24b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/verify/VerifyRepositoryResponse.java +++ 
b/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/verify/VerifyRepositoryResponse.java @@ -22,18 +22,18 @@ import org.elasticsearch.action.ActionResponse; import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentHelper; import java.io.IOException; /** * Unregister repository response */ -public class VerifyRepositoryResponse extends ActionResponse implements ToXContent { +public class VerifyRepositoryResponse extends ActionResponse implements ToXContentObject { private DiscoveryNode[] nodes; @@ -83,6 +83,7 @@ static final class Fields { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startObject(Fields.NODES); for (DiscoveryNode node : nodes) { builder.startObject(node.getId()); @@ -90,11 +91,12 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.endObject(); } builder.endObject(); + builder.endObject(); return builder; } @Override public String toString() { - return XContentHelper.toString(this); + return Strings.toString(this); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/reroute/ClusterRerouteResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/reroute/ClusterRerouteResponse.java index fbb6a8d18e8b6..74bf8f341f393 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/reroute/ClusterRerouteResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/reroute/ClusterRerouteResponse.java @@ -59,7 +59,7 @@ public RoutingExplanations getExplanations() { @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - state = ClusterState.Builder.readFrom(in, null); + state = ClusterState.readFrom(in, null); readAcknowledged(in); explanations = RoutingExplanations.readFrom(in); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/reroute/TransportClusterRerouteAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/reroute/TransportClusterRerouteAction.java index fbfe3ad37133b..7aade821f838a 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/reroute/TransportClusterRerouteAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/reroute/TransportClusterRerouteAction.java @@ -31,7 +31,6 @@ import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.routing.allocation.AllocationService; -import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.routing.allocation.RoutingExplanations; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Priority; @@ -111,15 +110,14 @@ public void onFailure(String source, Exception e) { @Override public ClusterState execute(ClusterState currentState) { - RoutingAllocation.Result routingResult = allocationService.reroute(currentState, request.getCommands(), request.explain(), - request.isRetryFailed()); - ClusterState newState = 
ClusterState.builder(currentState).routingResult(routingResult).build(); - clusterStateToSend = newState; - explanations = routingResult.explanations(); + AllocationService.CommandsResult commandsResult = + allocationService.reroute(currentState, request.getCommands(), request.explain(), request.isRetryFailed()); + clusterStateToSend = commandsResult.getClusterState(); + explanations = commandsResult.explanations(); if (request.dryRun()) { return currentState; } - return newState; + return commandsResult.getClusterState(); } } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequest.java index 16310b58cbc5b..bd0110e644cb6 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequest.java @@ -51,7 +51,7 @@ public ClusterUpdateSettingsRequest() { @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; - if (transientSettings.getAsMap().isEmpty() && persistentSettings.getAsMap().isEmpty()) { + if (transientSettings.isEmpty() && persistentSettings.isEmpty()) { validationException = addValidationError("no settings to update", validationException); } return validationException; @@ -83,12 +83,22 @@ public ClusterUpdateSettingsRequest transientSettings(Settings.Builder settings) /** * Sets the source containing the transient settings to be updated. They will not survive a full cluster restart + * @deprecated use {@link #transientSettings(String, XContentType)} to avoid content type detection */ + @Deprecated public ClusterUpdateSettingsRequest transientSettings(String source) { this.transientSettings = Settings.builder().loadFromSource(source).build(); return this; } + /** + * Sets the source containing the transient settings to be updated. They will not survive a full cluster restart + */ + public ClusterUpdateSettingsRequest transientSettings(String source, XContentType xContentType) { + this.transientSettings = Settings.builder().loadFromSource(source, xContentType).build(); + return this; + } + /** * Sets the transient settings to be updated. They will not survive a full cluster restart */ @@ -97,7 +107,7 @@ public ClusterUpdateSettingsRequest transientSettings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - transientSettings(builder.string()); + transientSettings(builder.string(), builder.contentType()); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -122,12 +132,22 @@ public ClusterUpdateSettingsRequest persistentSettings(Settings.Builder settings /** * Sets the source containing the persistent settings to be updated. They will get applied cross restarts + * @deprecated use {@link #persistentSettings(String, XContentType)} to avoid content type detection */ + @Deprecated public ClusterUpdateSettingsRequest persistentSettings(String source) { this.persistentSettings = Settings.builder().loadFromSource(source).build(); return this; } + /** + * Sets the source containing the persistent settings to be updated. 
They will get applied cross restarts + */ + public ClusterUpdateSettingsRequest persistentSettings(String source, XContentType xContentType) { + this.persistentSettings = Settings.builder().loadFromSource(source, xContentType).build(); + return this; + } + /** * Sets the persistent settings to be updated. They will get applied cross restarts */ @@ -136,7 +156,7 @@ public ClusterUpdateSettingsRequest persistentSettings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - persistentSettings(builder.string()); + persistentSettings(builder.string(), builder.contentType()); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequestBuilder.java index f0492edfeb19f..906b1867b1f09 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequestBuilder.java @@ -22,6 +22,7 @@ import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentType; import java.util.Map; @@ -52,12 +53,22 @@ public ClusterUpdateSettingsRequestBuilder setTransientSettings(Settings.Builder /** * Sets the source containing the transient settings to be updated. They will not survive a full cluster restart + * @deprecated use {@link #setTransientSettings(String, XContentType)} to avoid content type detection */ + @Deprecated public ClusterUpdateSettingsRequestBuilder setTransientSettings(String settings) { request.transientSettings(settings); return this; } + /** + * Sets the source containing the transient settings to be updated. They will not survive a full cluster restart + */ + public ClusterUpdateSettingsRequestBuilder setTransientSettings(String settings, XContentType xContentType) { + request.transientSettings(settings, xContentType); + return this; + } + /** * Sets the transient settings to be updated. They will not survive a full cluster restart */ @@ -84,12 +95,22 @@ public ClusterUpdateSettingsRequestBuilder setPersistentSettings(Settings.Builde /** * Sets the source containing the persistent settings to be updated. They will get applied cross restarts + * @deprecated use {@link #setPersistentSettings(String, XContentType)} to avoid content type detection */ + @Deprecated public ClusterUpdateSettingsRequestBuilder setPersistentSettings(String settings) { request.persistentSettings(settings); return this; } + /** + * Sets the source containing the persistent settings to be updated. They will get applied cross restarts + */ + public ClusterUpdateSettingsRequestBuilder setPersistentSettings(String settings, XContentType xContentType) { + request.persistentSettings(settings, xContentType); + return this; + } + /** * Sets the persistent settings to be updated. 
They will get applied cross restarts */ diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdater.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdater.java index 575fbcd3b9827..e9fec716a90c7 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdater.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdater.java @@ -67,12 +67,20 @@ synchronized ClusterState updateSettings(final ClusterState currentState, Settin .transientSettings(transientSettings.build()); ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks()); - boolean updatedReadOnly = MetaData.SETTING_READ_ONLY_SETTING.get(metaData.persistentSettings()) || MetaData.SETTING_READ_ONLY_SETTING.get(metaData.transientSettings()); + boolean updatedReadOnly = MetaData.SETTING_READ_ONLY_SETTING.get(metaData.persistentSettings()) + || MetaData.SETTING_READ_ONLY_SETTING.get(metaData.transientSettings()); if (updatedReadOnly) { blocks.addGlobalBlock(MetaData.CLUSTER_READ_ONLY_BLOCK); } else { blocks.removeGlobalBlock(MetaData.CLUSTER_READ_ONLY_BLOCK); } + boolean updatedReadOnlyAllowDelete = MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.get(metaData.persistentSettings()) + || MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.get(metaData.transientSettings()); + if (updatedReadOnlyAllowDelete) { + blocks.addGlobalBlock(MetaData.CLUSTER_READ_ONLY_ALLOW_DELETE_BLOCK); + } else { + blocks.removeGlobalBlock(MetaData.CLUSTER_READ_ONLY_ALLOW_DELETE_BLOCK); + } ClusterState build = builder(currentState).metaData(metaData).blocks(blocks).build(); Settings settings = build.metaData().settings(); // now we try to apply things and if they are invalid we fail diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java index 2907716eaae0d..dae55b2fc048a 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java @@ -33,7 +33,6 @@ import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.allocation.AllocationService; -import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Priority; @@ -43,19 +42,19 @@ import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; -/** - * - */ -public class TransportClusterUpdateSettingsAction extends TransportMasterNodeAction { +public class TransportClusterUpdateSettingsAction extends + TransportMasterNodeAction { private final AllocationService allocationService; private final ClusterSettings clusterSettings; @Inject - public TransportClusterUpdateSettingsAction(Settings settings, TransportService transportService, ClusterService clusterService, ThreadPool threadPool, - AllocationService allocationService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, ClusterSettings clusterSettings) { - super(settings, ClusterUpdateSettingsAction.NAME, transportService, clusterService, threadPool, actionFilters, 
indexNameExpressionResolver, ClusterUpdateSettingsRequest::new); + public TransportClusterUpdateSettingsAction(Settings settings, TransportService transportService, ClusterService clusterService, + ThreadPool threadPool, AllocationService allocationService, ActionFilters actionFilters, + IndexNameExpressionResolver indexNameExpressionResolver, ClusterSettings clusterSettings) { + super(settings, ClusterUpdateSettingsAction.NAME, false, transportService, clusterService, threadPool, actionFilters, + indexNameExpressionResolver, ClusterUpdateSettingsRequest::new); this.allocationService = allocationService; this.clusterSettings = clusterSettings; } @@ -68,9 +67,15 @@ protected String executor() { @Override protected ClusterBlockException checkBlock(ClusterUpdateSettingsRequest request, ClusterState state) { // allow for dedicated changes to the metadata blocks, so we don't block those to allow to "re-enable" it - if ((request.transientSettings().getAsMap().isEmpty() && request.persistentSettings().getAsMap().size() == 1 && MetaData.SETTING_READ_ONLY_SETTING.exists(request.persistentSettings())) || - request.persistentSettings().getAsMap().isEmpty() && request.transientSettings().getAsMap().size() == 1 && MetaData.SETTING_READ_ONLY_SETTING.exists(request.transientSettings())) { - return null; + if (request.transientSettings().size() + request.persistentSettings().size() == 1) { + // only one setting + if (MetaData.SETTING_READ_ONLY_SETTING.exists(request.persistentSettings()) + || MetaData.SETTING_READ_ONLY_SETTING.exists(request.transientSettings()) + || MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.exists(request.transientSettings()) + || MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.exists(request.persistentSettings())) { + // one of the settings above as the only setting in the request means - resetting the block! + return null; + } } return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE); } @@ -82,7 +87,8 @@ protected ClusterUpdateSettingsResponse newResponse() { } @Override - protected void masterOperation(final ClusterUpdateSettingsRequest request, final ClusterState state, final ActionListener listener) { + protected void masterOperation(final ClusterUpdateSettingsRequest request, final ClusterState state, + final ActionListener listener) { final SettingsUpdater updater = new SettingsUpdater(clusterSettings); clusterService.submitStateUpdateTask("cluster_update_settings", new AckedClusterStateUpdateTask(Priority.IMMEDIATE, request, listener) { @@ -118,7 +124,8 @@ private void reroute(final boolean updateSettingsAcked) { // so we should *not* execute the reroute. 
 if (!clusterService.state().nodes().isLocalNodeElectedMaster()) {
     logger.debug("Skipping reroute after cluster update settings, because node is no longer master");
-    listener.onResponse(new ClusterUpdateSettingsResponse(updateSettingsAcked, updater.getTransientUpdates(), updater.getPersistentUpdate()));
+    listener.onResponse(new ClusterUpdateSettingsResponse(updateSettingsAcked, updater.getTransientUpdates(),
+        updater.getPersistentUpdate()));
     return;
 }
@@ -136,15 +143,18 @@ public boolean mustAck(DiscoveryNode discoveryNode) {
 }

 @Override
-//we return when the cluster reroute is acked or it times out but the acknowledged flag depends on whether the update settings was acknowledged
+// we return when the cluster reroute is acked or it times out but the acknowledged flag depends on whether the
+// update settings was acknowledged
 protected ClusterUpdateSettingsResponse newResponse(boolean acknowledged) {
-    return new ClusterUpdateSettingsResponse(updateSettingsAcked && acknowledged, updater.getTransientUpdates(), updater.getPersistentUpdate());
+    return new ClusterUpdateSettingsResponse(updateSettingsAcked && acknowledged, updater.getTransientUpdates(),
+        updater.getPersistentUpdate());
 }

 @Override
 public void onNoLongerMaster(String source) {
     logger.debug("failed to preform reroute after cluster settings were updated - current node is no longer a master");
-    listener.onResponse(new ClusterUpdateSettingsResponse(updateSettingsAcked, updater.getTransientUpdates(), updater.getPersistentUpdate()));
+    listener.onResponse(new ClusterUpdateSettingsResponse(updateSettingsAcked, updater.getTransientUpdates(),
+        updater.getPersistentUpdate()));
 }

 @Override
@@ -157,11 +167,7 @@ public void onFailure(String source, Exception e) {

 @Override
 public ClusterState execute(final ClusterState currentState) {
     // now, reroute in case things that require it changed (e.g.
number of replicas) - RoutingAllocation.Result routingResult = allocationService.reroute(currentState, "reroute after cluster update settings"); - if (!routingResult.changed()) { - return currentState; - } - return ClusterState.builder(currentState).routingResult(routingResult).build(); + return allocationService.reroute(currentState, "reroute after cluster update settings"); } }); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsGroup.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsGroup.java index ccb4d32465eda..a89e86f2a4fee 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsGroup.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsGroup.java @@ -25,7 +25,6 @@ import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.index.Index; import org.elasticsearch.index.shard.ShardId; import java.io.IOException; @@ -35,9 +34,9 @@ public class ClusterSearchShardsGroup implements Streamable, ToXContent { private ShardId shardId; - ShardRouting[] shards; + private ShardRouting[] shards; - ClusterSearchShardsGroup() { + private ClusterSearchShardsGroup() { } @@ -46,18 +45,14 @@ public ClusterSearchShardsGroup(ShardId shardId, ShardRouting[] shards) { this.shards = shards; } - public static ClusterSearchShardsGroup readSearchShardsGroupResponse(StreamInput in) throws IOException { + static ClusterSearchShardsGroup readSearchShardsGroupResponse(StreamInput in) throws IOException { ClusterSearchShardsGroup response = new ClusterSearchShardsGroup(); response.readFrom(in); return response; } - public String getIndex() { - return shardId.getIndexName(); - } - - public int getShardId() { - return shardId.id(); + public ShardId getShardId() { + return shardId; } public ShardRouting[] getShards() { diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequest.java index 21ecf8a4c4feb..2f643bef7212d 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequest.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.cluster.shards; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.support.IndicesOptions; @@ -29,16 +30,17 @@ import org.elasticsearch.common.io.stream.StreamOutput; import java.io.IOException; +import java.util.Objects; /** */ public class ClusterSearchShardsRequest extends MasterNodeReadRequest implements IndicesRequest.Replaceable { - private String[] indices; + + private String[] indices = Strings.EMPTY_ARRAY; @Nullable private String routing; @Nullable private String preference; - private String[] types = Strings.EMPTY_ARRAY; private IndicesOptions indicesOptions = IndicesOptions.lenientExpandOpen(); @@ -59,14 +61,9 @@ public ActionRequestValidationException validate() { */ @Override public ClusterSearchShardsRequest indices(String... 
indices) { - if (indices == null) { - throw new IllegalArgumentException("indices must not be null"); - } else { - for (int i = 0; i < indices.length; i++) { - if (indices[i] == null) { - throw new IllegalArgumentException("indices[" + i + "] must not be null"); - } - } + Objects.requireNonNull(indices, "indices must not be null"); + for (int i = 0; i < indices.length; i++) { + Objects.requireNonNull(indices[i], "indices[" + i + "] must not be null"); } this.indices = indices; return this; @@ -90,23 +87,6 @@ public ClusterSearchShardsRequest indicesOptions(IndicesOptions indicesOptions) return this; } - /** - * The document types to execute the search against. Defaults to be executed against - * all types. - */ - public String[] types() { - return types; - } - - /** - * The document types to execute the search against. Defaults to be executed against - * all types. - */ - public ClusterSearchShardsRequest types(String... types) { - this.types = types; - return this; - } - /** * A comma separated list of routing values to control the shards the search will be executed on. */ @@ -156,7 +136,10 @@ public void readFrom(StreamInput in) throws IOException { routing = in.readOptionalString(); preference = in.readOptionalString(); - types = in.readStringArray(); + if (in.getVersion().onOrBefore(Version.V_5_1_1)) { + //types + in.readStringArray(); + } indicesOptions = IndicesOptions.readIndicesOptions(in); } @@ -172,7 +155,10 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalString(routing); out.writeOptionalString(preference); - out.writeStringArray(types); + if (out.getVersion().onOrBefore(Version.V_5_1_1)) { + //types + out.writeStringArray(Strings.EMPTY_ARRAY); + } indicesOptions.writeIndicesOptions(out); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequestBuilder.java index 0b9c9d044e7b6..100c9b258f2cc 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequestBuilder.java @@ -39,15 +39,6 @@ public ClusterSearchShardsRequestBuilder setIndices(String... indices) { return this; } - /** - * The document types to execute the search against. Defaults to be executed against - * all types. - */ - public ClusterSearchShardsRequestBuilder setTypes(String... types) { - request.types(types); - return this; - } - /** * A comma separated list of routing values to control the shards the search will be executed on. 
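With the `types` setter gone and `ClusterSearchShardsGroup` now returning the full `ShardId`, callers interact with the search-shards API slightly differently. A hedged usage sketch, assuming a connected `Client` named `client` and an index called `twitter` (both illustrative):

```java
import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsGroup;
import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsResponse;
import org.elasticsearch.client.Client;

// Illustrative only: ask the cluster which shards a search against "twitter" would hit.
// setTypes(...) no longer exists, and each group exposes the whole ShardId object
// instead of separate index-name and shard-number accessors.
final class SearchShardsExample {
    static void printSearchShards(Client client) {
        ClusterSearchShardsResponse response = client.admin().cluster()
            .prepareSearchShards("twitter")
            .get();
        for (ClusterSearchShardsGroup group : response.getGroups()) {
            System.out.println(group.getShardId() + " -> " + group.getShards().length + " shard copies");
        }
    }
}
```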
*/ diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsResponse.java index 5f45025351e65..b5b28e2b8f79a 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsResponse.java @@ -19,24 +19,35 @@ package org.elasticsearch.action.admin.cluster.shards; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionResponse; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.search.internal.AliasFilter; import java.io.IOException; +import java.util.Arrays; +import java.util.HashMap; +import java.util.Map; -/** - */ -public class ClusterSearchShardsResponse extends ActionResponse implements ToXContent { +public class ClusterSearchShardsResponse extends ActionResponse implements ToXContentObject { private ClusterSearchShardsGroup[] groups; private DiscoveryNode[] nodes; + private Map indicesAndFilters; + + public ClusterSearchShardsResponse() { - ClusterSearchShardsResponse() { + } + public ClusterSearchShardsResponse(ClusterSearchShardsGroup[] groups, DiscoveryNode[] nodes, + Map indicesAndFilters) { + this.groups = groups; + this.nodes = nodes; + this.indicesAndFilters = indicesAndFilters; } public ClusterSearchShardsGroup[] getGroups() { @@ -47,9 +58,8 @@ public DiscoveryNode[] getNodes() { return nodes; } - public ClusterSearchShardsResponse(ClusterSearchShardsGroup[] groups, DiscoveryNode[] nodes) { - this.groups = groups; - this.nodes = nodes; + public Map getIndicesAndFilters() { + return indicesAndFilters; } @Override @@ -63,7 +73,15 @@ public void readFrom(StreamInput in) throws IOException { for (int i = 0; i < nodes.length; i++) { nodes[i] = new DiscoveryNode(in); } - + if (in.getVersion().onOrAfter(Version.V_5_1_1)) { + int size = in.readVInt(); + indicesAndFilters = new HashMap<>(); + for (int i = 0; i < size; i++) { + String index = in.readString(); + AliasFilter aliasFilter = new AliasFilter(in); + indicesAndFilters.put(index, aliasFilter); + } + } } @Override @@ -77,20 +95,48 @@ public void writeTo(StreamOutput out) throws IOException { for (DiscoveryNode node : nodes) { node.writeTo(out); } + if (out.getVersion().onOrAfter(Version.V_5_1_1)) { + out.writeVInt(indicesAndFilters.size()); + for (Map.Entry entry : indicesAndFilters.entrySet()) { + out.writeString(entry.getKey()); + entry.getValue().writeTo(out); + } + } } @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startObject("nodes"); for (DiscoveryNode node : nodes) { node.toXContent(builder, params); } builder.endObject(); + if (indicesAndFilters != null) { + builder.startObject("indices"); + for (Map.Entry entry : indicesAndFilters.entrySet()) { + String index = entry.getKey(); + builder.startObject(index); + AliasFilter aliasFilter = entry.getValue(); + String[] aliases = aliasFilter.getAliases(); + if (aliases.length > 0) { + Arrays.sort(aliases); // we want consistent ordering here and these values might be generated from a set / map + 
builder.array("aliases", aliases); + if (aliasFilter.getQueryBuilder() != null) { // might be null if we include non-filtering aliases + builder.field("filter"); + aliasFilter.getQueryBuilder().toXContent(builder, params); + } + } + builder.endObject(); + } + builder.endObject(); + } builder.startArray("shards"); for (ClusterSearchShardsGroup group : groups) { group.toXContent(builder, params); } builder.endArray(); + builder.endObject(); return builder; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java index 2f9a6e7dede56..20ed69ae5a92f 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java @@ -33,23 +33,29 @@ import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.index.Index; import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.indices.IndicesService; +import org.elasticsearch.search.internal.AliasFilter; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; +import java.util.HashMap; import java.util.HashSet; import java.util.Map; import java.util.Set; -/** - */ -public class TransportClusterSearchShardsAction extends TransportMasterNodeReadAction { +public class TransportClusterSearchShardsAction extends + TransportMasterNodeReadAction { + + private final IndicesService indicesService; @Inject public TransportClusterSearchShardsAction(Settings settings, TransportService transportService, ClusterService clusterService, - ThreadPool threadPool, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) { - super(settings, ClusterSearchShardsAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, ClusterSearchShardsRequest::new); + IndicesService indicesService, ThreadPool threadPool, ActionFilters actionFilters, + IndexNameExpressionResolver indexNameExpressionResolver) { + super(settings, ClusterSearchShardsAction.NAME, transportService, clusterService, threadPool, actionFilters, + indexNameExpressionResolver, ClusterSearchShardsRequest::new); + this.indicesService = indicesService; } @Override @@ -60,7 +66,8 @@ protected String executor() { @Override protected ClusterBlockException checkBlock(ClusterSearchShardsRequest request, ClusterState state) { - return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, indexNameExpressionResolver.concreteIndexNames(state, request)); + return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, + indexNameExpressionResolver.concreteIndexNames(state, request)); } @Override @@ -69,12 +76,22 @@ protected ClusterSearchShardsResponse newResponse() { } @Override - protected void masterOperation(final ClusterSearchShardsRequest request, final ClusterState state, final ActionListener listener) { + protected void masterOperation(final ClusterSearchShardsRequest request, final ClusterState state, + final ActionListener listener) { ClusterState clusterState = clusterService.state(); String[] concreteIndices = indexNameExpressionResolver.concreteIndexNames(clusterState, request); Map> routingMap = 
indexNameExpressionResolver.resolveSearchRouting(state, request.routing(), request.indices()); + Map indicesAndFilters = new HashMap<>(); + for (String index : concreteIndices) { + final AliasFilter aliasFilter = indicesService.buildAliasFilter(clusterState, index, request.indices()); + final String[] aliases = indexNameExpressionResolver.indexAliases(clusterState, index, aliasMetadata -> true, true, + request.indices()); + indicesAndFilters.put(index, new AliasFilter(aliasFilter.getQueryBuilder(), aliases)); + } + Set nodeIds = new HashSet<>(); - GroupShardsIterator groupShardsIterator = clusterService.operationRouting().searchShards(clusterState, concreteIndices, routingMap, request.preference()); + GroupShardsIterator groupShardsIterator = clusterService.operationRouting().searchShards(clusterState, concreteIndices, + routingMap, request.preference()); ShardRouting shard; ClusterSearchShardsGroup[] groupResponses = new ClusterSearchShardsGroup[groupShardsIterator.size()]; int currentGroup = 0; @@ -94,6 +111,6 @@ protected void masterOperation(final ClusterSearchShardsRequest request, final C for (String nodeId : nodeIds) { nodes[currentNode++] = clusterState.getNodes().get(nodeId); } - listener.onResponse(new ClusterSearchShardsResponse(groupResponses, nodes)); + listener.onResponse(new ClusterSearchShardsResponse(groupResponses, nodes, indicesAndFilters)); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java index 2d0943845caeb..7c14f97482fb1 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java @@ -25,13 +25,11 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.action.support.master.MasterNodeRequest; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; -import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentType; import java.io.IOException; @@ -41,11 +39,11 @@ import static org.elasticsearch.action.ValidateActions.addValidationError; import static org.elasticsearch.common.Strings.EMPTY_ARRAY; -import static org.elasticsearch.common.Strings.hasLength; -import static org.elasticsearch.common.settings.Settings.Builder.EMPTY_SETTINGS; import static org.elasticsearch.common.settings.Settings.readSettingsFromStream; import static org.elasticsearch.common.settings.Settings.writeSettingsToStream; +import static org.elasticsearch.common.settings.Settings.Builder.EMPTY_SETTINGS; import static org.elasticsearch.common.xcontent.support.XContentMapValues.lenientNodeBooleanValue; +import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeBooleanValue; /** * Create snapshot request @@ -291,18 +289,34 @@ public CreateSnapshotRequest settings(Settings.Builder settings) { } /** - * Sets repository-specific snapshot settings in JSON, YAML or properties format + * Sets repository-specific snapshot settings in JSON or YAML format *
* See repository documentation for more information. * * @param source repository-specific snapshot settings * @return this request + * @deprecated use {@link #settings(String, XContentType)} to avoid content type detection */ + @Deprecated public CreateSnapshotRequest settings(String source) { this.settings = Settings.builder().loadFromSource(source).build(); return this; } + /** + * Sets repository-specific snapshot settings in JSON or YAML format + *
+ * See repository documentation for more information. + * + * @param source repository-specific snapshot settings + * @param xContentType the content type of the source + * @return this request + */ + public CreateSnapshotRequest settings(String source, XContentType xContentType) { + this.settings = Settings.builder().loadFromSource(source, xContentType).build(); + return this; + } + /** * Sets repository-specific snapshot settings. *
@@ -315,7 +329,7 @@ public CreateSnapshotRequest settings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - settings(builder.string()); + settings(builder.string(), builder.contentType()); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -357,17 +371,7 @@ public boolean includeGlobalState() { * @param source snapshot definition * @return this request */ - public CreateSnapshotRequest source(XContentBuilder source) { - return source(source.bytes()); - } - - /** - * Parses snapshot definition. - * - * @param source snapshot definition - * @return this request - */ - public CreateSnapshotRequest source(Map source) { + public CreateSnapshotRequest source(Map source) { for (Map.Entry entry : ((Map) source).entrySet()) { String name = entry.getKey(); if (name.equals("indices")) { @@ -379,80 +383,20 @@ public CreateSnapshotRequest source(Map source) { throw new IllegalArgumentException("malformed indices section, should be an array of strings"); } } else if (name.equals("partial")) { - partial(lenientNodeBooleanValue(entry.getValue())); + partial(lenientNodeBooleanValue(entry.getValue(), name)); } else if (name.equals("settings")) { if (!(entry.getValue() instanceof Map)) { throw new IllegalArgumentException("malformed settings section, should indices an inner object"); } settings((Map) entry.getValue()); } else if (name.equals("include_global_state")) { - includeGlobalState = lenientNodeBooleanValue(entry.getValue()); + includeGlobalState = lenientNodeBooleanValue(entry.getValue(), name); } } indicesOptions(IndicesOptions.fromMap((Map) source, IndicesOptions.lenientExpandOpen())); return this; } - /** - * Parses snapshot definition. JSON, YAML and properties formats are supported - * - * @param source snapshot definition - * @return this request - */ - public CreateSnapshotRequest source(String source) { - if (hasLength(source)) { - try (XContentParser parser = XContentFactory.xContent(source).createParser(source)) { - return source(parser.mapOrdered()); - } catch (Exception e) { - throw new IllegalArgumentException("failed to parse repository source [" + source + "]", e); - } - } - return this; - } - - /** - * Parses snapshot definition. JSON, YAML and properties formats are supported - * - * @param source snapshot definition - * @return this request - */ - public CreateSnapshotRequest source(byte[] source) { - return source(source, 0, source.length); - } - - /** - * Parses snapshot definition. JSON, YAML and properties formats are supported - * - * @param source snapshot definition - * @param offset offset - * @param length length - * @return this request - */ - public CreateSnapshotRequest source(byte[] source, int offset, int length) { - if (length > 0) { - try (XContentParser parser = XContentFactory.xContent(source, offset, length).createParser(source, offset, length)) { - return source(parser.mapOrdered()); - } catch (IOException e) { - throw new IllegalArgumentException("failed to parse repository source", e); - } - } - return this; - } - - /** - * Parses snapshot definition. 
JSON, YAML and properties formats are supported - * - * @param source snapshot definition - * @return this request - */ - public CreateSnapshotRequest source(BytesReference source) { - try (XContentParser parser = XContentFactory.xContent(source).createParser(source)) { - return source(parser.mapOrdered()); - } catch (IOException e) { - throw new IllegalArgumentException("failed to parse snapshot source", e); - } - } - @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); @@ -478,4 +422,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeBoolean(waitForCompletion); out.writeBoolean(partial); } + + @Override + public String getDescription() { + return "snapshot [" + repository + ":" + snapshot + "]"; + } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequestBuilder.java index ebdd206b5c3e8..d3b5e12351c28 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequestBuilder.java @@ -23,6 +23,7 @@ import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentType; import java.util.Map; @@ -147,12 +148,28 @@ public CreateSnapshotRequestBuilder setSettings(Settings.Builder settings) { * * @param source repository-specific snapshot settings * @return this builder + * @deprecated use {@link #setSettings(String, XContentType)} to avoid content type detection */ + @Deprecated public CreateSnapshotRequestBuilder setSettings(String source) { request.settings(source); return this; } + /** + * Sets repository-specific snapshot settings in YAML or JSON format + *
+ * See repository documentation for more information. + * + * @param source repository-specific snapshot settings + * @param xContentType the content type of the source + * @return this builder + */ + public CreateSnapshotRequestBuilder setSettings(String source, XContentType xContentType) { + request.settings(source, xContentType); + return this; + } + /** * Sets repository-specific snapshot settings. *
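Both `CreateSnapshotRequest` and its builder now deprecate the `String`-only settings setter in favour of the overload that states the content type explicitly. A short sketch of the preferred call, assuming a connected `Client` named `client`; the repository name, snapshot name and settings key are placeholders, not values from this patch:

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.xcontent.XContentType;

// Illustrative only: create a snapshot, handing the repository-specific settings over as
// a JSON string together with XContentType.JSON so that no content-type detection is
// needed (the deprecated setSettings(String) overload would have had to guess).
final class CreateSnapshotExample {
    static void createSnapshot(Client client) {
        client.admin().cluster()
            .prepareCreateSnapshot("my_repository", "snapshot_1")
            .setWaitForCompletion(true)
            .setSettings("{\"my_setting\": \"value\"}", XContentType.JSON)
            .get();
    }
}
```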
diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotResponse.java index efc2fbeb5b580..1f9f77f9ed3df 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotResponse.java @@ -23,7 +23,7 @@ import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.snapshots.SnapshotInfo; @@ -33,7 +33,7 @@ /** * Create snapshot response */ -public class CreateSnapshotResponse extends ActionResponse implements ToXContent { +public class CreateSnapshotResponse extends ActionResponse implements ToXContentObject { @Nullable private SnapshotInfo snapshotInfo; @@ -83,12 +83,14 @@ public RestStatus status() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); if (snapshotInfo != null) { builder.field("snapshot"); snapshotInfo.toXContent(builder, params); } else { builder.field("accepted", true); } + builder.endObject(); return builder; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/delete/TransportDeleteSnapshotAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/delete/TransportDeleteSnapshotAction.java index 3c37d1870e576..5cce5482ec508 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/delete/TransportDeleteSnapshotAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/delete/TransportDeleteSnapshotAction.java @@ -75,6 +75,6 @@ public void onResponse() { public void onFailure(Exception e) { listener.onFailure(e); } - }); + }, false); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequest.java index fd2c97ed5d43c..32c9493b440d5 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequest.java @@ -28,6 +28,7 @@ import java.io.IOException; import static org.elasticsearch.action.ValidateActions.addValidationError; +import static org.elasticsearch.snapshots.SnapshotInfo.VERBOSE_INTRODUCED; /** * Get snapshot request @@ -36,6 +37,7 @@ public class GetSnapshotsRequest extends MasterNodeRequest public static final String ALL_SNAPSHOTS = "_all"; public static final String CURRENT_SNAPSHOT = "_current"; + public static final boolean DEFAULT_VERBOSE_MODE = true; private String repository; @@ -43,6 +45,8 @@ public class GetSnapshotsRequest extends MasterNodeRequest private boolean ignoreUnavailable; + private boolean verbose = DEFAULT_VERBOSE_MODE; + public GetSnapshotsRequest() { } @@ -123,6 +127,7 @@ public GetSnapshotsRequest ignoreUnavailable(boolean ignoreUnavailable) { this.ignoreUnavailable = ignoreUnavailable; return this; } + /** * @return Whether snapshots should be ignored when 
unavailable (corrupt or temporarily not fetchable) */ @@ -130,12 +135,36 @@ public boolean ignoreUnavailable() { return ignoreUnavailable; } + /** + * Set to {@code false} to only show the snapshot names and the indices they contain. + * This is useful when the snapshots belong to a cloud-based repository where each + * blob read is a concern (cost wise and performance wise), as the snapshot names and + * indices they contain can be retrieved from a single index blob in the repository, + * whereas the rest of the information requires reading a snapshot metadata file for + * each snapshot requested. Defaults to {@code true}, which returns all information + * about each requested snapshot. + */ + public GetSnapshotsRequest verbose(boolean verbose) { + this.verbose = verbose; + return this; + } + + /** + * Returns whether the request will return a verbose response. + */ + public boolean verbose() { + return verbose; + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); repository = in.readString(); snapshots = in.readStringArray(); ignoreUnavailable = in.readBoolean(); + if (in.getVersion().onOrAfter(VERBOSE_INTRODUCED)) { + verbose = in.readBoolean(); + } } @Override @@ -144,5 +173,8 @@ public void writeTo(StreamOutput out) throws IOException { out.writeString(repository); out.writeStringArray(snapshots); out.writeBoolean(ignoreUnavailable); + if (out.getVersion().onOrAfter(VERBOSE_INTRODUCED)) { + out.writeBoolean(verbose); + } } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequestBuilder.java index 3b0ac47c69f9f..2115bd0bc3b81 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequestBuilder.java @@ -96,4 +96,18 @@ public GetSnapshotsRequestBuilder setIgnoreUnavailable(boolean ignoreUnavailable return this; } + /** + * Set to {@code false} to only show the snapshot names and the indices they contain. + * This is useful when the snapshots belong to a cloud-based repository where each + * blob read is a concern (cost wise and performance wise), as the snapshot names and + * indices they contain can be retrieved from a single index blob in the repository, + * whereas the rest of the information requires reading a snapshot metadata file for + * each snapshot requested. Defaults to {@code true}, which returns all information + * about each requested snapshot. 
+ */ + public GetSnapshotsRequestBuilder setVerbose(boolean verbose) { + request.verbose(verbose); + return this; + } + } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsResponse.java index 924f5a90d4256..0d1e5eda7f2d2 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsResponse.java @@ -23,6 +23,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.snapshots.SnapshotInfo; @@ -34,7 +35,7 @@ /** * Get snapshots response */ -public class GetSnapshotsResponse extends ActionResponse implements ToXContent { +public class GetSnapshotsResponse extends ActionResponse implements ToXContentObject { private List snapshots = Collections.emptyList(); @@ -58,7 +59,7 @@ public List getSnapshots() { public void readFrom(StreamInput in) throws IOException { super.readFrom(in); int size = in.readVInt(); - List builder = new ArrayList<>(); + List builder = new ArrayList<>(size); for (int i = 0; i < size; i++) { builder.add(new SnapshotInfo(in)); } @@ -76,11 +77,13 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException { + builder.startObject(); builder.startArray("snapshots"); for (SnapshotInfo snapshotInfo : snapshots) { snapshotInfo.toXContent(builder, params); } builder.endArray(); + builder.endObject(); return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java index ad8cb1ae88e52..eec218a4119ba 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.cluster.snapshots.get; +import org.apache.lucene.util.CollectionUtil; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.master.TransportMasterNodeAction; @@ -30,6 +31,8 @@ import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.repositories.IndexId; +import org.elasticsearch.repositories.RepositoryData; import org.elasticsearch.snapshots.SnapshotId; import org.elasticsearch.snapshots.SnapshotInfo; import org.elasticsearch.snapshots.SnapshotMissingException; @@ -38,11 +41,13 @@ import org.elasticsearch.transport.TransportService; import java.util.ArrayList; +import java.util.Collections; import java.util.HashMap; -import java.util.LinkedHashSet; +import java.util.HashSet; import java.util.List; import java.util.Map; import java.util.Set; +import java.util.stream.Collectors; /** * Transport Action for get snapshots operation @@ -75,30 +80,36 @@ protected ClusterBlockException 
checkBlock(GetSnapshotsRequest request, ClusterS } @Override - protected void masterOperation(final GetSnapshotsRequest request, ClusterState state, + protected void masterOperation(final GetSnapshotsRequest request, final ClusterState state, final ActionListener listener) { try { final String repository = request.repository(); - List snapshotInfoBuilder = new ArrayList<>(); - if (isAllSnapshots(request.snapshots())) { - snapshotInfoBuilder.addAll(snapshotsService.currentSnapshots(repository)); - snapshotInfoBuilder.addAll(snapshotsService.snapshots(repository, - snapshotsService.snapshotIds(repository), - request.ignoreUnavailable())); - } else if (isCurrentSnapshots(request.snapshots())) { - snapshotInfoBuilder.addAll(snapshotsService.currentSnapshots(repository)); - } else { - final Map allSnapshotIds = new HashMap<>(); - for (SnapshotInfo snapshotInfo : snapshotsService.currentSnapshots(repository)) { - SnapshotId snapshotId = snapshotInfo.snapshotId(); - allSnapshotIds.put(snapshotId.getName(), snapshotId); - } - for (SnapshotId snapshotId : snapshotsService.snapshotIds(repository)) { + final Map allSnapshotIds = new HashMap<>(); + final List currentSnapshots = new ArrayList<>(); + for (SnapshotInfo snapshotInfo : snapshotsService.currentSnapshots(repository)) { + SnapshotId snapshotId = snapshotInfo.snapshotId(); + allSnapshotIds.put(snapshotId.getName(), snapshotId); + currentSnapshots.add(snapshotInfo); + } + + final RepositoryData repositoryData; + if (isCurrentSnapshotsOnly(request.snapshots()) == false) { + repositoryData = snapshotsService.getRepositoryData(repository); + for (SnapshotId snapshotId : repositoryData.getAllSnapshotIds()) { allSnapshotIds.put(snapshotId.getName(), snapshotId); } - final Set toResolve = new LinkedHashSet<>(); // maintain order + } else { + repositoryData = null; + } + + final Set toResolve = new HashSet<>(); + if (isAllSnapshots(request.snapshots())) { + toResolve.addAll(allSnapshotIds.values()); + } else { for (String snapshotOrPattern : request.snapshots()) { - if (Regex.isSimpleMatchPattern(snapshotOrPattern) == false) { + if (GetSnapshotsRequest.CURRENT_SNAPSHOT.equalsIgnoreCase(snapshotOrPattern)) { + toResolve.addAll(currentSnapshots.stream().map(SnapshotInfo::snapshotId).collect(Collectors.toList())); + } else if (Regex.isSimpleMatchPattern(snapshotOrPattern) == false) { if (allSnapshotIds.containsKey(snapshotOrPattern)) { toResolve.add(allSnapshotIds.get(snapshotOrPattern)); } else if (request.ignoreUnavailable() == false) { @@ -113,13 +124,28 @@ protected void masterOperation(final GetSnapshotsRequest request, ClusterState s } } - if (toResolve.isEmpty() && request.ignoreUnavailable() == false) { + if (toResolve.isEmpty() && request.ignoreUnavailable() == false && isCurrentSnapshotsOnly(request.snapshots()) == false) { throw new SnapshotMissingException(repository, request.snapshots()[0]); } + } - snapshotInfoBuilder.addAll(snapshotsService.snapshots(repository, new ArrayList<>(toResolve), request.ignoreUnavailable())); + final List snapshotInfos; + if (request.verbose()) { + final Set incompatibleSnapshots = repositoryData != null ? 
+ new HashSet<>(repositoryData.getIncompatibleSnapshotIds()) : Collections.emptySet(); + snapshotInfos = snapshotsService.snapshots(repository, new ArrayList<>(toResolve), + incompatibleSnapshots, request.ignoreUnavailable()); + } else { + if (repositoryData != null) { + // want non-current snapshots as well, which are found in the repository data + snapshotInfos = buildSimpleSnapshotInfos(toResolve, repositoryData, currentSnapshots); + } else { + // only want current snapshots + snapshotInfos = currentSnapshots.stream().map(SnapshotInfo::basic).collect(Collectors.toList()); + CollectionUtil.timSort(snapshotInfos); + } } - listener.onResponse(new GetSnapshotsResponse(snapshotInfoBuilder)); + listener.onResponse(new GetSnapshotsResponse(snapshotInfos)); } catch (Exception e) { listener.onFailure(e); } @@ -129,7 +155,35 @@ private boolean isAllSnapshots(String[] snapshots) { return (snapshots.length == 0) || (snapshots.length == 1 && GetSnapshotsRequest.ALL_SNAPSHOTS.equalsIgnoreCase(snapshots[0])); } - private boolean isCurrentSnapshots(String[] snapshots) { + private boolean isCurrentSnapshotsOnly(String[] snapshots) { return (snapshots.length == 1 && GetSnapshotsRequest.CURRENT_SNAPSHOT.equalsIgnoreCase(snapshots[0])); } + + private List buildSimpleSnapshotInfos(final Set toResolve, + final RepositoryData repositoryData, + final List currentSnapshots) { + List snapshotInfos = new ArrayList<>(); + for (SnapshotInfo snapshotInfo : currentSnapshots) { + if (toResolve.remove(snapshotInfo.snapshotId())) { + snapshotInfos.add(snapshotInfo.basic()); + } + } + Map> snapshotsToIndices = new HashMap<>(); + for (IndexId indexId : repositoryData.getIndices().values()) { + for (SnapshotId snapshotId : repositoryData.getSnapshots(indexId)) { + if (toResolve.contains(snapshotId)) { + snapshotsToIndices.computeIfAbsent(snapshotId, (k) -> new ArrayList<>()) + .add(indexId.getName()); + } + } + } + for (Map.Entry> entry : snapshotsToIndices.entrySet()) { + final List indices = entry.getValue(); + CollectionUtil.timSort(indices); + final SnapshotId snapshotId = entry.getKey(); + snapshotInfos.add(new SnapshotInfo(snapshotId, indices, repositoryData.getSnapshotState(snapshotId))); + } + CollectionUtil.timSort(snapshotInfos); + return Collections.unmodifiableList(snapshotInfos); + } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequest.java index 1a41a776c7340..9561b50eaeee7 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequest.java @@ -24,13 +24,11 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.action.support.master.MasterNodeRequest; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; -import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentType; import java.io.IOException; @@ -39,10 +37,9 @@ import java.util.Map; import static 
org.elasticsearch.action.ValidateActions.addValidationError; -import static org.elasticsearch.common.Strings.hasLength; -import static org.elasticsearch.common.settings.Settings.Builder.EMPTY_SETTINGS; import static org.elasticsearch.common.settings.Settings.readSettingsFromStream; import static org.elasticsearch.common.settings.Settings.writeSettingsToStream; +import static org.elasticsearch.common.settings.Settings.Builder.EMPTY_SETTINGS; import static org.elasticsearch.common.xcontent.support.XContentMapValues.lenientNodeBooleanValue; /** @@ -316,18 +313,34 @@ public RestoreSnapshotRequest settings(Settings.Builder settings) { } /** - * Sets repository-specific restore settings in JSON, YAML or properties format + * Sets repository-specific restore settings in JSON or YAML format *
* See repository documentation for more information. * * @param source repository-specific snapshot settings * @return this request + * @deprecated use {@link #settings(String, XContentType)} to avoid content type detection */ + @Deprecated public RestoreSnapshotRequest settings(String source) { this.settings = Settings.builder().loadFromSource(source).build(); return this; } + /** + * Sets repository-specific restore settings in JSON or YAML format + *
+ * See repository documentation for more information. + * + * @param source repository-specific snapshot settings + * @param xContentType the content type of the source + * @return this request + */ + public RestoreSnapshotRequest settings(String source, XContentType xContentType) { + this.settings = Settings.builder().loadFromSource(source, xContentType).build(); + return this; + } + /** * Sets repository-specific restore settings *
@@ -340,7 +353,7 @@ public RestoreSnapshotRequest settings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - settings(builder.string()); + settings(builder.string(), builder.contentType()); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -439,12 +452,22 @@ public RestoreSnapshotRequest indexSettings(Settings.Builder settings) { /** * Sets settings that should be added/changed in all restored indices + * @deprecated use {@link #indexSettings(String, XContentType)} to avoid content type detection */ + @Deprecated public RestoreSnapshotRequest indexSettings(String source) { this.indexSettings = Settings.builder().loadFromSource(source).build(); return this; } + /** + * Sets settings that should be added/changed in all restored indices + */ + public RestoreSnapshotRequest indexSettings(String source, XContentType xContentType) { + this.indexSettings = Settings.builder().loadFromSource(source, xContentType).build(); + return this; + } + /** * Sets settings that should be added/changed in all restored indices */ @@ -452,7 +475,7 @@ public RestoreSnapshotRequest indexSettings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - indexSettings(builder.string()); + indexSettings(builder.string(), builder.contentType()); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -472,22 +495,8 @@ public Settings indexSettings() { * @param source restore definition * @return this request */ - public RestoreSnapshotRequest source(XContentBuilder source) { - try { - return source(source.bytes()); - } catch (Exception e) { - throw new IllegalArgumentException("Failed to build json for repository request", e); - } - } - - /** - * Parses restore definition - * - * @param source restore definition - * @return this request - */ - public RestoreSnapshotRequest source(Map source) { - for (Map.Entry entry : ((Map) source).entrySet()) { + public RestoreSnapshotRequest source(Map source) { + for (Map.Entry entry : source.entrySet()) { String name = entry.getKey(); if (name.equals("indices")) { if (entry.getValue() instanceof String) { @@ -498,16 +507,16 @@ public RestoreSnapshotRequest source(Map source) { throw new IllegalArgumentException("malformed indices section, should be an array of strings"); } } else if (name.equals("partial")) { - partial(lenientNodeBooleanValue(entry.getValue())); + partial(lenientNodeBooleanValue(entry.getValue(), name)); } else if (name.equals("settings")) { if (!(entry.getValue() instanceof Map)) { throw new IllegalArgumentException("malformed settings section"); } settings((Map) entry.getValue()); } else if (name.equals("include_global_state")) { - includeGlobalState = lenientNodeBooleanValue(entry.getValue()); + includeGlobalState = lenientNodeBooleanValue(entry.getValue(), name); } else if (name.equals("include_aliases")) { - includeAliases = lenientNodeBooleanValue(entry.getValue()); + includeAliases = lenientNodeBooleanValue(entry.getValue(), name); } else if (name.equals("rename_pattern")) { if (entry.getValue() instanceof String) { renamePattern((String) entry.getValue()); @@ -543,74 +552,6 @@ public RestoreSnapshotRequest source(Map source) { return this; } - /** - * Parses restore definition - *
- * JSON, YAML and properties formats are supported - * - * @param source restore definition - * @return this request - */ - public RestoreSnapshotRequest source(String source) { - if (hasLength(source)) { - try (XContentParser parser = XContentFactory.xContent(source).createParser(source)) { - return source(parser.mapOrdered()); - } catch (Exception e) { - throw new IllegalArgumentException("failed to parse repository source [" + source + "]", e); - } - } - return this; - } - - /** - * Parses restore definition - *
- * JSON, YAML and properties formats are supported - * - * @param source restore definition - * @return this request - */ - public RestoreSnapshotRequest source(byte[] source) { - return source(source, 0, source.length); - } - - /** - * Parses restore definition - *
- * JSON, YAML and properties formats are supported - * - * @param source restore definition - * @param offset offset - * @param length length - * @return this request - */ - public RestoreSnapshotRequest source(byte[] source, int offset, int length) { - if (length > 0) { - try (XContentParser parser = XContentFactory.xContent(source, offset, length).createParser(source, offset, length)) { - return source(parser.mapOrdered()); - } catch (IOException e) { - throw new IllegalArgumentException("failed to parse repository source", e); - } - } - return this; - } - - /** - * Parses restore definition - *
- * JSON, YAML and properties formats are supported - * - * @param source restore definition - * @return this request - */ - public RestoreSnapshotRequest source(BytesReference source) { - try (XContentParser parser = XContentFactory.xContent(source).createParser(source)) { - return source(parser.mapOrdered()); - } catch (IOException e) { - throw new IllegalArgumentException("failed to parse template source", e); - } - } - @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); @@ -646,4 +587,10 @@ public void writeTo(StreamOutput out) throws IOException { writeSettingsToStream(indexSettings, out); out.writeStringArray(ignoreIndexSettings); } + + @Override + public String getDescription() { + return "snapshot [" + repository + ":" + snapshot + "]"; + } + } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequestBuilder.java index 661a1a1d018af..807e238724364 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequestBuilder.java @@ -23,6 +23,7 @@ import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentType; import java.util.List; import java.util.Map; @@ -153,18 +154,34 @@ public RestoreSnapshotRequestBuilder setSettings(Settings.Builder settings) { } /** - * Sets repository-specific restore settings in JSON, YAML or properties format + * Sets repository-specific restore settings in JSON or YAML format *
* See repository documentation for more information. * * @param source repository-specific snapshot settings * @return this builder + * @deprecated use {@link #setSettings(String, XContentType)} to avoid content type detection */ + @Deprecated public RestoreSnapshotRequestBuilder setSettings(String source) { request.settings(source); return this; } + /** + * Sets repository-specific restore settings in JSON or YAML format + *
+ * See repository documentation for more information. + * + * @param source repository-specific snapshot settings + * @param xContentType the content type of the source + * @return this builder + */ + public RestoreSnapshotRequestBuilder setSettings(String source, XContentType xContentType) { + request.settings(source, xContentType); + return this; + } + /** * Sets repository-specific restore settings *
@@ -251,12 +268,26 @@ public RestoreSnapshotRequestBuilder setIndexSettings(Settings.Builder settings) * * @param source index settings * @return this builder + * @deprecated use {@link #setIndexSettings(String, XContentType)} to avoid content type detection */ + @Deprecated public RestoreSnapshotRequestBuilder setIndexSettings(String source) { request.indexSettings(source); return this; } + /** + * Sets index settings that should be added or replaced during restore + * + * @param source index settings + * @param xContentType the content type of the source + * @return this builder + */ + public RestoreSnapshotRequestBuilder setIndexSettings(String source, XContentType xContentType) { + request.indexSettings(source, xContentType); + return this; + } + /** * Sets index settings that should be added or replaced during restore * diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotResponse.java index 70f4f2aa4f24c..5a02e4bcb1387 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotResponse.java @@ -24,6 +24,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.snapshots.RestoreInfo; @@ -33,7 +34,7 @@ /** * Contains information about restores snapshot */ -public class RestoreSnapshotResponse extends ActionResponse implements ToXContent { +public class RestoreSnapshotResponse extends ActionResponse implements ToXContentObject { @Nullable private RestoreInfo restoreInfo; @@ -75,12 +76,14 @@ public RestStatus status() { @Override public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException { + builder.startObject(); if (restoreInfo != null) { builder.field("snapshot"); restoreInfo.toXContent(builder, params); } else { builder.field("accepted", true); } + builder.endObject(); return builder; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java index 070db6c5248d8..5ff5bd17fe5e6 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java @@ -22,19 +22,27 @@ import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.master.TransportMasterNodeAction; +import org.elasticsearch.cluster.ClusterChangedEvent; import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.ClusterStateListener; +import org.elasticsearch.cluster.RestoreInProgress; import org.elasticsearch.cluster.block.ClusterBlockException; import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.service.ClusterService; 
+import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.snapshots.RestoreInfo; import org.elasticsearch.snapshots.RestoreService; +import org.elasticsearch.snapshots.RestoreService.RestoreCompletionResponse; import org.elasticsearch.snapshots.Snapshot; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; +import static org.elasticsearch.snapshots.RestoreService.restoreInProgress; + /** * Transport action for restore snapshot operation */ @@ -78,28 +86,44 @@ protected void masterOperation(final RestoreSnapshotRequest request, final Clust request.settings(), request.masterNodeTimeout(), request.includeGlobalState(), request.partial(), request.includeAliases(), request.indexSettings(), request.ignoreIndexSettings(), "restore_snapshot[" + request.snapshot() + "]"); - restoreService.restoreSnapshot(restoreRequest, new ActionListener() { + restoreService.restoreSnapshot(restoreRequest, new ActionListener() { @Override - public void onResponse(RestoreInfo restoreInfo) { - if (restoreInfo == null && request.waitForCompletion()) { - restoreService.addListener(new ActionListener() { + public void onResponse(RestoreCompletionResponse restoreCompletionResponse) { + if (restoreCompletionResponse.getRestoreInfo() == null && request.waitForCompletion()) { + final Snapshot snapshot = restoreCompletionResponse.getSnapshot(); + + ClusterStateListener clusterStateListener = new ClusterStateListener() { @Override - public void onResponse(RestoreService.RestoreCompletionResponse restoreCompletionResponse) { - final Snapshot snapshot = restoreCompletionResponse.getSnapshot(); - if (snapshot.getRepository().equals(request.repository()) && - snapshot.getSnapshotId().getName().equals(request.snapshot())) { - listener.onResponse(new RestoreSnapshotResponse(restoreCompletionResponse.getRestoreInfo())); - restoreService.removeListener(this); + public void clusterChanged(ClusterChangedEvent changedEvent) { + final RestoreInProgress.Entry prevEntry = restoreInProgress(changedEvent.previousState(), snapshot); + final RestoreInProgress.Entry newEntry = restoreInProgress(changedEvent.state(), snapshot); + if (prevEntry == null) { + // When there is a master failure after a restore has been started, this listener might not be registered + // on the current master and as such it might miss some intermediary cluster states due to batching. + // Clean up listener in that case and acknowledge completion of restore operation to client. 
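From the caller's point of view, the listener above is exercised by a restore issued with `wait_for_completion=true`: the response is only sent once the `RestoreInProgress` entry for the snapshot has disappeared from the cluster state. A hedged client-side sketch, with the repository and snapshot names as assumptions:

```java
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.snapshots.RestoreInfo;

// Illustrative only: restore a snapshot and block until the ClusterStateListener-based
// tracking reports completion. The RestoreInfo is populated once the restore finishes
// (it can still be null in the rare master-failover case handled above).
final class RestoreSnapshotExample {
    static RestoreInfo restoreAndWait(Client client) {
        RestoreSnapshotResponse response = client.admin().cluster()
            .prepareRestoreSnapshot("my_repository", "snapshot_1")
            .setWaitForCompletion(true)
            .get();
        return response.getRestoreInfo();
    }
}
```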
+ clusterService.removeListener(this); + listener.onResponse(new RestoreSnapshotResponse(null)); + } else if (newEntry == null) { + clusterService.removeListener(this); + ImmutableOpenMap shards = prevEntry.shards(); + assert prevEntry.state().completed() : "expected completed snapshot state but was " + prevEntry.state(); + assert RestoreService.completed(shards) : "expected all restore entries to be completed"; + RestoreInfo ri = new RestoreInfo(prevEntry.snapshot().getSnapshotId().getName(), + prevEntry.indices(), + shards.size(), + shards.size() - RestoreService.failedShards(shards)); + RestoreSnapshotResponse response = new RestoreSnapshotResponse(ri); + logger.debug("restore of [{}] completed", snapshot); + listener.onResponse(response); + } else { + // restore not completed yet, wait for next cluster state update } } + }; - @Override - public void onFailure(Exception e) { - listener.onFailure(e); - } - }); + clusterService.addListener(clusterStateListener); } else { - listener.onResponse(new RestoreSnapshotResponse(restoreInfo)); + listener.onResponse(new RestoreSnapshotResponse(restoreCompletionResponse.getRestoreInfo())); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotIndexShardStage.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotIndexShardStage.java index efbc82c9b6a83..a01fec42304cc 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotIndexShardStage.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotIndexShardStage.java @@ -49,7 +49,7 @@ public enum SnapshotIndexShardStage { private boolean completed; - private SnapshotIndexShardStage(byte value, boolean completed) { + SnapshotIndexShardStage(byte value, boolean completed) { this.value = value; this.completed = completed; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusResponse.java index b9800a2d9edb8..d44a490680c9b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusResponse.java @@ -22,7 +22,7 @@ import org.elasticsearch.action.ActionResponse; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -33,7 +33,7 @@ /** * Snapshot status response */ -public class SnapshotsStatusResponse extends ActionResponse implements ToXContent { +public class SnapshotsStatusResponse extends ActionResponse implements ToXContentObject { private List snapshots = Collections.emptyList(); @@ -75,11 +75,13 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startArray("snapshots"); for (SnapshotStatus snapshot : snapshots) { snapshot.toXContent(builder, params); } builder.endArray(); + builder.endObject(); return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportNodesSnapshotsStatus.java 
b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportNodesSnapshotsStatus.java index 71a709f0b5b40..872793f6ef21a 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportNodesSnapshotsStatus.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportNodesSnapshotsStatus.java @@ -122,11 +122,6 @@ protected NodeSnapshotStatus nodeOperation(NodeRequest request) { } } - @Override - protected boolean accumulateExceptions() { - return true; - } - public static class Request extends BaseNodesRequest { private Snapshot[] snapshots; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java index cf00784dc3f09..ba8132e8a2e94 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java @@ -36,7 +36,9 @@ import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.snapshots.IndexShardSnapshotStatus; +import org.elasticsearch.repositories.RepositoryData; import org.elasticsearch.snapshots.Snapshot; +import org.elasticsearch.snapshots.SnapshotException; import org.elasticsearch.snapshots.SnapshotId; import org.elasticsearch.snapshots.SnapshotInfo; import org.elasticsearch.snapshots.SnapshotMissingException; @@ -203,7 +205,8 @@ private SnapshotsStatusResponse buildResponse(SnapshotsStatusRequest request, Li final String repositoryName = request.repository(); if (Strings.hasText(repositoryName) && request.snapshots() != null && request.snapshots().length > 0) { final Set requestedSnapshotNames = Sets.newHashSet(request.snapshots()); - final Map matchedSnapshotIds = snapshotsService.snapshotIds(repositoryName).stream() + final RepositoryData repositoryData = snapshotsService.getRepositoryData(repositoryName); + final Map matchedSnapshotIds = repositoryData.getAllSnapshotIds().stream() .filter(s -> requestedSnapshotNames.contains(s.getName())) .collect(Collectors.toMap(SnapshotId::getName, Function.identity())); for (final String snapshotName : request.snapshots()) { @@ -222,6 +225,8 @@ private SnapshotsStatusResponse buildResponse(SnapshotsStatusRequest request, Li } else { throw new SnapshotMissingException(repositoryName, snapshotName); } + } else if (repositoryData.getIncompatibleSnapshotIds().contains(snapshotId)) { + throw new SnapshotException(repositoryName, snapshotName, "cannot get the status for an incompatible snapshot"); } SnapshotInfo snapshotInfo = snapshotsService.snapshot(repositoryName, snapshotId); List shardStatusBuilder = new ArrayList<>(); @@ -245,7 +250,7 @@ private SnapshotsStatusResponse buildResponse(SnapshotsStatusRequest request, Li default: throw new IllegalArgumentException("Unknown snapshot state " + snapshotInfo.state()); } - builder.add(new SnapshotStatus(new Snapshot(repositoryName, snapshotInfo.snapshotId()), state, Collections.unmodifiableList(shardStatusBuilder))); + builder.add(new SnapshotStatus(new Snapshot(repositoryName, snapshotId), state, Collections.unmodifiableList(shardStatusBuilder))); } } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateResponse.java 
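In the TransportSnapshotsStatusAction hunk, the requested snapshot names are resolved against the repository's snapshot ids by streaming them into a name-keyed map. Below is a plain-Java sketch of that lookup idiom; SnapshotRef is a hypothetical stand-in for SnapshotId.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;

class SnapshotLookup {

    /** Hypothetical stand-in for SnapshotId: a name plus a unique id. */
    static final class SnapshotRef {
        final String name;
        final String uuid;

        SnapshotRef(String name, String uuid) {
            this.name = name;
            this.uuid = uuid;
        }

        String getName() {
            return name;
        }
    }

    /**
     * Keep only the snapshots whose names were requested and index them by name,
     * so each requested name can later be resolved with a single map lookup.
     */
    static Map<String, SnapshotRef> matchRequested(List<SnapshotRef> known, Set<String> requestedNames) {
        return known.stream()
            .filter(s -> requestedNames.contains(s.getName()))
            .collect(Collectors.toMap(SnapshotRef::getName, Function.identity()));
    }
}
```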
b/core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateResponse.java index 2a2f4707f69ba..7c4844ae5f090 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateResponse.java @@ -55,7 +55,7 @@ public ClusterName getClusterName() { public void readFrom(StreamInput in) throws IOException { super.readFrom(in); clusterName = new ClusterName(in); - clusterState = ClusterState.Builder.readFrom(in, null); + clusterState = ClusterState.readFrom(in, null); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsNodeResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsNodeResponse.java index 3a6168315e03e..6347a027f4f89 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsNodeResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsNodeResponse.java @@ -86,8 +86,8 @@ public void readFrom(StreamInput in) throws IOException { this.nodeStats = NodeStats.readNodeStats(in); int size = in.readVInt(); shardsStats = new ShardStats[size]; - for (size--; size >= 0; size--) { - shardsStats[size] = ShardStats.readShardStats(in); + for (int i = 0; i < size; i++) { + shardsStats[i] = ShardStats.readShardStats(in); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsNodes.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsNodes.java index 62255df369bd9..d5a2912f549cf 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsNodes.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsNodes.java @@ -25,6 +25,7 @@ import org.elasticsearch.action.admin.cluster.node.info.NodeInfo; import org.elasticsearch.action.admin.cluster.node.stats.NodeStats; import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.network.NetworkModule; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.InetSocketTransportAddress; @@ -65,8 +66,8 @@ public class ClusterStatsNodes implements ToXContent { this.plugins = new HashSet<>(); Set seenAddresses = new HashSet<>(nodeResponses.size()); - List nodeInfos = new ArrayList<>(); - List nodeStats = new ArrayList<>(); + List nodeInfos = new ArrayList<>(nodeResponses.size()); + List nodeStats = new ArrayList<>(nodeResponses.size()); for (ClusterStatsNodeResponse nodeResponse : nodeResponses) { nodeInfos.add(nodeResponse.nodeInfo()); nodeStats.add(nodeResponse.nodeStats()); @@ -250,11 +251,11 @@ private OsStats(List nodeInfos, List nodeStatsList) { long freeMemory = 0; for (NodeStats nodeStats : nodeStatsList) { if (nodeStats.getOs() != null) { - long total = nodeStats.getOs().getMem().getTotal().bytes(); + long total = nodeStats.getOs().getMem().getTotal().getBytes(); if (total > 0) { totalMemory += total; } - long free = nodeStats.getOs().getMem().getFree().bytes(); + long free = nodeStats.getOs().getMem().getFree().getBytes(); if (free > 0) { freeMemory += free; } @@ -423,8 +424,8 @@ private JvmStats(List nodeInfos, List nodeStatsList) { } maxUptime = Math.max(maxUptime, js.getUptime().millis()); if (js.getMem() != null) { - heapUsed += js.getMem().getHeapUsed().bytes(); - heapMax += js.getMem().getHeapMax().bytes(); + heapUsed += 
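The ClusterStatsNodeResponse fix replaces a backwards-counting read loop with a forward one, so element i of the wire payload lands in slot i of the array, matching the order in which it was written. A plain-Java sketch of the corrected pattern, with java.io.DataInput standing in for Elasticsearch's StreamInput:

```java
import java.io.DataInput;
import java.io.IOException;

class ArrayReading {

    /**
     * Read a length-prefixed array in the order it was written. Iterating
     * forward keeps stream element i in array slot i, which is what the
     * fix above restores.
     */
    static long[] readLongArray(DataInput in) throws IOException {
        int size = in.readInt();
        long[] values = new long[size];
        for (int i = 0; i < size; i++) {
            values[i] = in.readLong();
        }
        return values;
    }
}
```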
js.getMem().getHeapUsed().getBytes(); + heapMax += js.getMem().getHeapMax().getBytes(); } } this.threads = threads; @@ -544,7 +545,7 @@ static class NetworkTypes implements ToXContent { private final Map transportTypes; private final Map httpTypes; - private NetworkTypes(final List nodeInfos) { + NetworkTypes(final List nodeInfos) { final Map transportTypes = new HashMap<>(); final Map httpTypes = new HashMap<>(); for (final NodeInfo nodeInfo : nodeInfos) { @@ -553,8 +554,12 @@ private NetworkTypes(final List nodeInfos) { settings.get(NetworkModule.TRANSPORT_TYPE_KEY, NetworkModule.TRANSPORT_DEFAULT_TYPE_SETTING.get(settings)); final String httpType = settings.get(NetworkModule.HTTP_TYPE_KEY, NetworkModule.HTTP_DEFAULT_TYPE_SETTING.get(settings)); - transportTypes.computeIfAbsent(transportType, k -> new AtomicInteger()).incrementAndGet(); - httpTypes.computeIfAbsent(httpType, k -> new AtomicInteger()).incrementAndGet(); + if (Strings.hasText(transportType)) { + transportTypes.computeIfAbsent(transportType, k -> new AtomicInteger()).incrementAndGet(); + } + if (Strings.hasText(httpType)) { + httpTypes.computeIfAbsent(httpType, k -> new AtomicInteger()).incrementAndGet(); + } } this.transportTypes = Collections.unmodifiableMap(transportTypes); this.httpTypes = Collections.unmodifiableMap(httpTypes); diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsResponse.java index efc72d104f86e..40c71c098c897 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsResponse.java @@ -40,19 +40,18 @@ public class ClusterStatsResponse extends BaseNodesResponse nodes, List failures) { + public ClusterStatsResponse(long timestamp, + ClusterName clusterName, + List nodes, + List failures) { super(clusterName, nodes, failures); this.timestamp = timestamp; - this.clusterUUID = clusterUUID; nodesStats = new ClusterStatsNodes(nodes); indicesStats = new ClusterStatsIndices(nodes); for (ClusterStatsNodeResponse response : nodes) { @@ -84,7 +83,6 @@ public ClusterStatsIndices getIndicesStats() { public void readFrom(StreamInput in) throws IOException { super.readFrom(in); timestamp = in.readVLong(); - clusterUUID = in.readString(); // it may be that the master switched on us while doing the operation. In this case the status may be null. 
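The NetworkTypes hunk tallies transport and HTTP implementation names per node, now skipping blank values and creating each counter lazily. The stand-alone sketch below shows the same computeIfAbsent counting idiom; the trim/isEmpty check approximates the Strings.hasText guard used above.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

class TypeCounts {

    /**
     * Count how often each non-blank value occurs. computeIfAbsent creates the
     * counter the first time a value is seen; null or blank values are ignored.
     */
    static Map<String, AtomicInteger> count(Iterable<String> values) {
        Map<String, AtomicInteger> counts = new HashMap<>();
        for (String value : values) {
            if (value != null && value.trim().isEmpty() == false) {
                counts.computeIfAbsent(value, k -> new AtomicInteger()).incrementAndGet();
            }
        }
        return counts;
    }
}
```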
status = in.readOptionalWriteable(ClusterHealthStatus::readFrom); } @@ -93,7 +91,6 @@ public void readFrom(StreamInput in) throws IOException { public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); out.writeVLong(timestamp); - out.writeString(clusterUUID); out.writeOptionalWriteable(status); } @@ -117,9 +114,6 @@ protected void writeNodesTo(StreamOutput out, List nod @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.field("timestamp", getTimestamp()); - if (params.paramAsBoolean("output_uuid", false)) { - builder.field("uuid", clusterUUID); - } if (status != null) { builder.field("status", status.name().toLowerCase(Locale.ROOT)); } @@ -144,4 +138,5 @@ public String toString() { return "{ \"error\" : \"" + e.getMessage() + "\"}"; } } + } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java index 3eb73273832e0..d5659c48723d6 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java @@ -39,7 +39,7 @@ import org.elasticsearch.index.IndexService; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.indices.IndicesService; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -75,8 +75,11 @@ public TransportClusterStatsAction(Settings settings, ThreadPool threadPool, @Override protected ClusterStatsResponse newResponse(ClusterStatsRequest request, List responses, List failures) { - return new ClusterStatsResponse(System.currentTimeMillis(), clusterService.getClusterName(), - clusterService.state().metaData().clusterUUID(), responses, failures); + return new ClusterStatsResponse( + System.currentTimeMillis(), + clusterService.getClusterName(), + responses, + failures); } @Override @@ -112,11 +115,6 @@ protected ClusterStatsNodeResponse nodeOperation(ClusterStatsNodeRequest nodeReq } - @Override - protected boolean accumulateExceptions() { - return false; - } - public static class ClusterStatsNodeRequest extends BaseNodeRequest { ClusterStatsRequest request; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequest.java index d01c128cf36b3..c30eae12a82ea 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequest.java @@ -31,66 +31,79 @@ public class DeleteStoredScriptRequest extends AcknowledgedRequest { private String id; - private String scriptLang; + private String lang; DeleteStoredScriptRequest() { + super(); } - public DeleteStoredScriptRequest(String scriptLang, String id) { - this.scriptLang = scriptLang; + public DeleteStoredScriptRequest(String id, String lang) { + super(); + this.id = id; + this.lang = lang; } @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; - if (id == null) { - validationException = addValidationError("id is missing", 
validationException); + + if (id == null || id.isEmpty()) { + validationException = addValidationError("must specify id for stored script", validationException); } else if (id.contains("#")) { - validationException = addValidationError("id can't contain: '#'", validationException); + validationException = addValidationError("id cannot contain '#' for stored script", validationException); } - if (scriptLang == null) { - validationException = addValidationError("lang is missing", validationException); - } else if (scriptLang.contains("#")) { - validationException = addValidationError("lang can't contain: '#'", validationException); + + if (lang != null && lang.contains("#")) { + validationException = addValidationError("lang cannot contain '#' for stored script", validationException); } + return validationException; } - public String scriptLang() { - return scriptLang; + public String id() { + return id; } - public DeleteStoredScriptRequest scriptLang(String type) { - this.scriptLang = type; + public DeleteStoredScriptRequest id(String id) { + this.id = id; + return this; } - public String id() { - return id; + public String lang() { + return lang; } - public DeleteStoredScriptRequest id(String id) { - this.id = id; + public DeleteStoredScriptRequest lang(String lang) { + this.lang = lang; + return this; } @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - scriptLang = in.readString(); + + lang = in.readString(); + + if (lang.isEmpty()) { + lang = null; + } + id = in.readString(); } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - out.writeString(scriptLang); + + out.writeString(lang == null ? "" : lang); out.writeString(id); } @Override public String toString() { - return "delete script {[" + scriptLang + "][" + id + "]}"; + return "delete stored script {id [" + id + "]" + (lang != null ? 
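DeleteStoredScriptRequest now treats lang as optional and encodes the missing case as an empty string on the wire, mapping it back to null on read. A plain-Java sketch of that null-to-empty-string round trip, with DataOutput/DataInput and writeUTF/readUTF standing in for StreamOutput/StreamInput and writeString/readString:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

class OptionalStringWire {

    /** Write an optional string as "" when absent, since the wire slot is always present. */
    static void writeOptionalAsEmpty(DataOutput out, String lang) throws IOException {
        out.writeUTF(lang == null ? "" : lang);
    }

    /** Read the slot back and map the empty-string sentinel to null again. */
    static String readOptionalAsEmpty(DataInput in) throws IOException {
        String lang = in.readUTF();
        return lang.isEmpty() ? null : lang;
    }
}
```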
", lang [" + lang + "]" : "") + "}"; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequestBuilder.java index caf55a03f18e3..8a65506dabd34 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequestBuilder.java @@ -29,13 +29,15 @@ public DeleteStoredScriptRequestBuilder(ElasticsearchClient client, DeleteStored super(client, action, new DeleteStoredScriptRequest()); } - public DeleteStoredScriptRequestBuilder setScriptLang(String scriptLang) { - request.scriptLang(scriptLang); + public DeleteStoredScriptRequestBuilder setLang(String lang) { + request.lang(lang); + return this; } public DeleteStoredScriptRequestBuilder setId(String id) { request.id(id); + return this; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequest.java index bb7a9effd32eb..2bfd547362c80 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequest.java @@ -28,61 +28,79 @@ import java.io.IOException; +import static org.elasticsearch.action.ValidateActions.addValidationError; + public class GetStoredScriptRequest extends MasterNodeReadRequest { protected String id; protected String lang; GetStoredScriptRequest() { + super(); } - public GetStoredScriptRequest(String lang, String id) { - this.lang = lang; + public GetStoredScriptRequest(String id, String lang) { + super(); + this.id = id; + this.lang = lang; } @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; - if (lang == null) { - validationException = ValidateActions.addValidationError("lang is missing", validationException); + + if (id == null || id.isEmpty()) { + validationException = addValidationError("must specify id for stored script", validationException); + } else if (id.contains("#")) { + validationException = addValidationError("id cannot contain '#' for stored script", validationException); } - if (id == null) { - validationException = ValidateActions.addValidationError("id is missing", validationException); + + if (lang != null && lang.contains("#")) { + validationException = addValidationError("lang cannot contain '#' for stored script", validationException); } + return validationException; } - public GetStoredScriptRequest lang(@Nullable String type) { - this.lang = type; - return this; + public String id() { + return id; } public GetStoredScriptRequest id(String id) { this.id = id; + return this; } - public String lang() { return lang; } - public String id() { - return id; + public GetStoredScriptRequest lang(String lang) { + this.lang = lang; + + return this; } @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); + lang = in.readString(); + + if (lang.isEmpty()) { + lang = null; + } + id = in.readString(); } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - out.writeString(lang); + + out.writeString(lang == null ? 
"" : lang); out.writeString(id); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptResponse.java index 36dd9beb38a7a..d543ac67e1d91 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptResponse.java @@ -19,49 +19,70 @@ package org.elasticsearch.action.admin.cluster.storedscripts; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionResponse; -import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.script.Script; +import org.elasticsearch.script.StoredScriptSource; import java.io.IOException; public class GetStoredScriptResponse extends ActionResponse implements ToXContent { - private String storedScript; + private StoredScriptSource source; GetStoredScriptResponse() { } - GetStoredScriptResponse(String storedScript) { - this.storedScript = storedScript; + GetStoredScriptResponse(StoredScriptSource source) { + this.source = source; } /** * @return if a stored script and if not found null */ - public String getStoredScript() { - return storedScript; + public StoredScriptSource getSource() { + return source; } @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.value(storedScript); + source.toXContent(builder, params); + return builder; } @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - storedScript = in.readOptionalString(); + + if (in.readBoolean()) { + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + source = new StoredScriptSource(in); + } else { + source = new StoredScriptSource(in.readString()); + } + } else { + source = null; + } } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - out.writeOptionalString(storedScript); + + if (source == null) { + out.writeBoolean(false); + } else { + out.writeBoolean(true); + + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + source.writeTo(out); + } else { + out.writeString(source.getSource()); + } + } } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequest.java index cfe153d7d9641..1e3d251e7a284 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequest.java @@ -19,108 +19,153 @@ package org.elasticsearch.action.admin.cluster.storedscripts; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.support.master.AcknowledgedRequest; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentHelper; +import 
org.elasticsearch.common.xcontent.XContentType; import java.io.IOException; +import java.util.Objects; import static org.elasticsearch.action.ValidateActions.addValidationError; public class PutStoredScriptRequest extends AcknowledgedRequest { private String id; - private String scriptLang; - private BytesReference script; + private String lang; + private BytesReference content; + private XContentType xContentType; public PutStoredScriptRequest() { super(); } - public PutStoredScriptRequest(String scriptLang) { - super(); - this.scriptLang = scriptLang; + @Deprecated + public PutStoredScriptRequest(String id, String lang, BytesReference content) { + this(id, lang, content, XContentFactory.xContentType(content)); } - public PutStoredScriptRequest(String scriptLang, String id) { + public PutStoredScriptRequest(String id, String lang, BytesReference content, XContentType xContentType) { super(); - this.scriptLang = scriptLang; this.id = id; + this.lang = lang; + this.content = content; + this.xContentType = Objects.requireNonNull(xContentType); } @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; - if (id == null) { - validationException = addValidationError("id is missing", validationException); + + if (id == null || id.isEmpty()) { + validationException = addValidationError("must specify id for stored script", validationException); } else if (id.contains("#")) { - validationException = addValidationError("id can't contain: '#'", validationException); + validationException = addValidationError("id cannot contain '#' for stored script", validationException); } - if (scriptLang == null) { - validationException = addValidationError("lang is missing", validationException); - } else if (scriptLang.contains("#")) { - validationException = addValidationError("lang can't contain: '#'", validationException); + + if (lang != null && lang.contains("#")) { + validationException = addValidationError("lang cannot contain '#' for stored script", validationException); } - if (script == null) { - validationException = addValidationError("script is missing", validationException); + + if (content == null) { + validationException = addValidationError("must specify code for stored script", validationException); } + return validationException; } - public String scriptLang() { - return scriptLang; + public String id() { + return id; } - public PutStoredScriptRequest scriptLang(String scriptLang) { - this.scriptLang = scriptLang; + public PutStoredScriptRequest id(String id) { + this.id = id; + return this; } - public String id() { - return id; + public String lang() { + return lang; } - public PutStoredScriptRequest id(String id) { - this.id = id; + public PutStoredScriptRequest lang(String lang) { + this.lang = lang; + return this; } - public BytesReference script() { - return script; + public BytesReference content() { + return content; + } + + public XContentType xContentType() { + return xContentType; } - public PutStoredScriptRequest script(BytesReference source) { - this.script = source; + /** + * Set the script source using bytes. + * @deprecated this method is deprecated as it relies on content type detection. Use {@link #content(BytesReference, XContentType)} + */ + @Deprecated + public PutStoredScriptRequest content(BytesReference content) { + return content(content, XContentFactory.xContentType(content)); + } + + /** + * Set the script source and the content type of the bytes. 
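PutStoredScriptRequest now carries an explicit XContentType next to the content bytes, and the constructor and setter that sniff the type from the bytes are deprecated. A usage sketch under that assumption; the id, lang, and JSON body below are placeholders, not the exact stored-script body format.

```java
import org.elasticsearch.action.admin.cluster.storedscripts.PutStoredScriptRequest;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.xcontent.XContentType;

class PutStoredScriptExample {

    static PutStoredScriptRequest buildRequest() {
        // Placeholder body: whatever JSON the stored-script API expects for the chosen lang.
        BytesArray content = new BytesArray("{\"script\": \"...\"}");
        // Passing XContentType.JSON explicitly avoids the deprecated content-type detection.
        return new PutStoredScriptRequest("my-script-id", "painless", content, XContentType.JSON);
    }
}
```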
+ */ + public PutStoredScriptRequest content(BytesReference content, XContentType xContentType) { + this.content = content; + this.xContentType = Objects.requireNonNull(xContentType); return this; } @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - scriptLang = in.readString(); + + lang = in.readString(); + + if (lang.isEmpty()) { + lang = null; + } + id = in.readOptionalString(); - script = in.readBytesReference(); + content = in.readBytesReference(); + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType = XContentType.readFrom(in); + } else { + xContentType = XContentFactory.xContentType(content); + } } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - out.writeString(scriptLang); + + out.writeString(lang == null ? "" : lang); out.writeOptionalString(id); - out.writeBytesReference(script); + out.writeBytesReference(content); + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType.writeTo(out); + } } @Override public String toString() { - String sSource = "_na_"; + String source = "_na_"; + try { - sSource = XContentHelper.convertToJson(script, false); + source = XContentHelper.convertToJson(content, false, xContentType); } catch (Exception e) { // ignore } - return "put script {[" + id + "][" + scriptLang + "], script[" + sSource + "]}"; + + return "put stored script {id [" + id + "]" + (lang != null ? ", lang [" + lang + "]" : "") + ", content [" + source + "]}"; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequestBuilder.java index 15c51c2ccd7e5..f8223d691999b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequestBuilder.java @@ -22,6 +22,7 @@ import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.xcontent.XContentType; public class PutStoredScriptRequestBuilder extends AcknowledgedRequestBuilder { @@ -30,19 +31,31 @@ public PutStoredScriptRequestBuilder(ElasticsearchClient client, PutStoredScript super(client, action, new PutStoredScriptRequest()); } - public PutStoredScriptRequestBuilder setScriptLang(String scriptLang) { - request.scriptLang(scriptLang); + public PutStoredScriptRequestBuilder setId(String id) { + request.id(id); return this; } - public PutStoredScriptRequestBuilder setId(String id) { - request.id(id); + /** + * Set the source of the script. + * @deprecated this method requires content type detection. 
Use {@link #setContent(BytesReference, XContentType)} instead + */ + @Deprecated + public PutStoredScriptRequestBuilder setContent(BytesReference content) { + request.content(content); return this; } - public PutStoredScriptRequestBuilder setSource(BytesReference source) { - request.script(source); + /** + * Set the source of the script along with the content type of the source + */ + public PutStoredScriptRequestBuilder setContent(BytesReference source, XContentType xContentType) { + request.content(source, xContentType); return this; } + public PutStoredScriptRequestBuilder setLang(String lang) { + request.lang(lang); + return this; + } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/TransportPutStoredScriptAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/TransportPutStoredScriptAction.java index a91dec8d9b30a..8b4079aee7379 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/TransportPutStoredScriptAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/TransportPutStoredScriptAction.java @@ -59,7 +59,7 @@ protected PutStoredScriptResponse newResponse() { @Override protected void masterOperation(PutStoredScriptRequest request, ClusterState state, ActionListener listener) throws Exception { - scriptService.storeScript(clusterService, request, listener); + scriptService.putStoredScript(clusterService, request, listener); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/PendingClusterTasksResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/PendingClusterTasksResponse.java index 35d5b3efb7b7d..ec42e34ec9614 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/PendingClusterTasksResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/PendingClusterTasksResponse.java @@ -23,18 +23,15 @@ import org.elasticsearch.cluster.service.PendingClusterTask; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentFactory; import java.io.IOException; import java.util.ArrayList; import java.util.Iterator; import java.util.List; -/** - */ -public class PendingClusterTasksResponse extends ActionResponse implements Iterable, ToXContent { +public class PendingClusterTasksResponse extends ActionResponse implements Iterable, ToXContentObject { private List pendingTasks; @@ -61,30 +58,20 @@ public Iterator iterator() { return pendingTasks.iterator(); } - public String prettyPrint() { + @Override + public String toString() { StringBuilder sb = new StringBuilder(); sb.append("tasks: (").append(pendingTasks.size()).append("):\n"); for (PendingClusterTask pendingClusterTask : this) { - sb.append(pendingClusterTask.getInsertOrder()).append("/").append(pendingClusterTask.getPriority()).append("/").append(pendingClusterTask.getSource()).append("/").append(pendingClusterTask.getTimeInQueue()).append("\n"); + sb.append(pendingClusterTask.getInsertOrder()).append("/").append(pendingClusterTask.getPriority()).append("/") + .append(pendingClusterTask.getSource()).append("/").append(pendingClusterTask.getTimeInQueue()).append("\n"); } return sb.toString(); } - @Override - public String toString() 
{ - try { - XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint(); - builder.startObject(); - toXContent(builder, EMPTY_PARAMS); - builder.endObject(); - return builder.string(); - } catch (IOException e) { - return "{ \"error\" : \"" + e.getMessage() + "\"}"; - } - } - @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startArray(Fields.TASKS); for (PendingClusterTask pendingClusterTask : this) { builder.startObject(); @@ -97,6 +84,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.endObject(); } builder.endArray(); + builder.endObject(); return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java index 63493210f7c27..e7393efd01ce7 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java @@ -19,23 +19,15 @@ package org.elasticsearch.action.admin.indices.alias; -import com.carrotsearch.hppc.cursors.ObjectCursor; - import org.elasticsearch.ElasticsearchGenerationException; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.AliasesRequest; -import org.elasticsearch.action.CompositeIndicesRequest; -import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.action.support.master.AcknowledgedRequest; import org.elasticsearch.cluster.metadata.AliasAction; -import org.elasticsearch.cluster.metadata.AliasMetaData; -import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; @@ -45,6 +37,7 @@ import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.query.QueryBuilder; @@ -63,7 +56,7 @@ /** * A request to add/remove aliases for one or more indices. 
*/ -public class IndicesAliasesRequest extends AcknowledgedRequest implements CompositeIndicesRequest { +public class IndicesAliasesRequest extends AcknowledgedRequest { private List allAliasActions = new ArrayList<>(); //indices options that require every specified index to exist, expand wildcards only to open indices and @@ -95,10 +88,10 @@ public byte value() { public static Type fromValue(byte value) { switch (value) { - case 0: return ADD; - case 1: return REMOVE; - case 2: return REMOVE_INDEX; - default: throw new IllegalArgumentException("No type for action [" + value + "]"); + case 0: return ADD; + case 1: return REMOVE; + case 2: return REMOVE_INDEX; + default: throw new IllegalArgumentException("No type for action [" + value + "]"); } } } @@ -109,20 +102,23 @@ public static Type fromValue(byte value) { public static AliasActions add() { return new AliasActions(AliasActions.Type.ADD); } + /** * Build a new {@link AliasAction} to remove aliases. */ public static AliasActions remove() { return new AliasActions(AliasActions.Type.REMOVE); } + /** - * Build a new {@link AliasAction} to remove aliases. + * Build a new {@link AliasAction} to remove an index. */ public static AliasActions removeIndex() { return new AliasActions(AliasActions.Type.REMOVE_INDEX); } - private static ObjectParser parser(String name, Supplier supplier) { - ObjectParser parser = new ObjectParser<>(name, supplier); + + private static ObjectParser parser(String name, Supplier supplier) { + ObjectParser parser = new ObjectParser<>(name, supplier); parser.declareString((action, index) -> { if (action.indices() != null) { throw new IllegalArgumentException("Only one of [index] and [indices] is supported"); @@ -150,7 +146,7 @@ private static ObjectParser parser(Stri return parser; } - private static final ObjectParser ADD_PARSER = parser("add", AliasActions::add); + private static final ObjectParser ADD_PARSER = parser("add", AliasActions::add); static { ADD_PARSER.declareObject(AliasActions::filter, (parser, m) -> { try { @@ -160,18 +156,17 @@ private static ObjectParser parser(Stri } }, new ParseField("filter")); // Since we need to support numbers AND strings here we have to use ValueType.INT. - ADD_PARSER.declareField(AliasActions::routing, p -> p.text(), new ParseField("routing"), ValueType.INT); - ADD_PARSER.declareField(AliasActions::indexRouting, p -> p.text(), new ParseField("index_routing"), ValueType.INT); - ADD_PARSER.declareField(AliasActions::searchRouting, p -> p.text(), new ParseField("search_routing"), ValueType.INT); + ADD_PARSER.declareField(AliasActions::routing, XContentParser::text, new ParseField("routing"), ValueType.INT); + ADD_PARSER.declareField(AliasActions::indexRouting, XContentParser::text, new ParseField("index_routing"), ValueType.INT); + ADD_PARSER.declareField(AliasActions::searchRouting, XContentParser::text, new ParseField("search_routing"), ValueType.INT); } - private static final ObjectParser REMOVE_PARSER = parser("remove", AliasActions::remove); - private static final ObjectParser REMOVE_INDEX_PARSER = parser("remove_index", - AliasActions::removeIndex); + private static final ObjectParser REMOVE_PARSER = parser("remove", AliasActions::remove); + private static final ObjectParser REMOVE_INDEX_PARSER = parser("remove_index", AliasActions::removeIndex); /** * Parser for any one {@link AliasAction}. 
*/ - public static final ConstructingObjectParser PARSER = new ConstructingObjectParser<>( + public static final ConstructingObjectParser PARSER = new ConstructingObjectParser<>( "alias_action", a -> { // Take the first action and complain if there are more than one actions AliasActions action = null; @@ -406,24 +401,6 @@ public IndicesOptions indicesOptions() { return INDICES_OPTIONS; } - public String[] concreteAliases(MetaData metaData, String concreteIndex) { - if (expandAliasesWildcards()) { - //for DELETE we expand the aliases - String[] indexAsArray = {concreteIndex}; - ImmutableOpenMap> aliasMetaData = metaData.findAliases(aliases, indexAsArray); - List finalAliases = new ArrayList<>(); - for (ObjectCursor> curAliases : aliasMetaData.values()) { - for (AliasMetaData aliasMeta: curAliases.value) { - finalAliases.add(aliasMeta.alias()); - } - } - return finalAliases.toArray(new String[finalAliases.size()]); - } else { - //for add we just return the current aliases - return aliases; - } - } - @Override public String toString() { return "AliasActions[" @@ -502,9 +479,4 @@ public void writeTo(StreamOutput out) throws IOException { public IndicesOptions indicesOptions() { return INDICES_OPTIONS; } - - @Override - public List subRequests() { - return allAliasActions; - } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/alias/TransportIndicesAliasesAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/alias/TransportIndicesAliasesAction.java index 44de63c028db1..9dcd361ae6421 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/alias/TransportIndicesAliasesAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/alias/TransportIndicesAliasesAction.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.indices.alias; +import com.carrotsearch.hppc.cursors.ObjectCursor; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions; import org.elasticsearch.action.support.ActionFilters; @@ -28,9 +29,12 @@ import org.elasticsearch.cluster.block.ClusterBlockException; import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.cluster.metadata.AliasAction; +import org.elasticsearch.cluster.metadata.AliasMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.metadata.MetaDataIndexAliasesService; import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.action.admin.indices.AliasesNotFoundException; @@ -75,9 +79,7 @@ protected IndicesAliasesResponse newResponse() { protected ClusterBlockException checkBlock(IndicesAliasesRequest request, ClusterState state) { Set indices = new HashSet<>(); for (AliasActions aliasAction : request.aliasActions()) { - for (String index : aliasAction.indices()) { - indices.add(index); - } + Collections.addAll(indices, aliasAction.indices()); } return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, indices.toArray(new String[indices.size()])); } @@ -97,12 +99,12 @@ protected void masterOperation(final IndicesAliasesRequest request, final Cluste for (String index : concreteIndices) { switch (action.actionType()) { case ADD: - for (String alias : 
action.concreteAliases(state.metaData(), index)) { + for (String alias : concreteAliases(action, state.metaData(), index)) { finalActions.add(new AliasAction.Add(index, alias, action.filter(), action.indexRouting(), action.searchRouting())); } break; case REMOVE: - for (String alias : action.concreteAliases(state.metaData(), index)) { + for (String alias : concreteAliases(action, state.metaData(), index)) { finalActions.add(new AliasAction.Remove(index, alias)); } break; @@ -134,4 +136,22 @@ public void onFailure(Exception t) { } }); } + + private static String[] concreteAliases(AliasActions action, MetaData metaData, String concreteIndex) { + if (action.expandAliasesWildcards()) { + //for DELETE we expand the aliases + String[] indexAsArray = {concreteIndex}; + ImmutableOpenMap> aliasMetaData = metaData.findAliases(action.aliases(), indexAsArray); + List finalAliases = new ArrayList<>(); + for (ObjectCursor> curAliases : aliasMetaData.values()) { + for (AliasMetaData aliasMeta: curAliases.value) { + finalAliases.add(aliasMeta.alias()); + } + } + return finalAliases.toArray(new String[finalAliases.size()]); + } else { + //for ADD and REMOVE_INDEX we just return the current aliases + return action.aliases(); + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/alias/get/GetAliasesResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/alias/get/GetAliasesResponse.java index e23faa1cbbf9c..0c307d643246f 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/alias/get/GetAliasesResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/alias/get/GetAliasesResponse.java @@ -59,7 +59,7 @@ public void readFrom(StreamInput in) throws IOException { int valueSize = in.readVInt(); List value = new ArrayList<>(valueSize); for (int j = 0; j < valueSize; j++) { - value.add(AliasMetaData.Builder.readFrom(in)); + value.add(new AliasMetaData(in)); } aliasesBuilder.put(key, Collections.unmodifiableList(value)); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequest.java index 6d0824eeb31c5..08f220e0199d8 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequest.java @@ -75,7 +75,7 @@ public static class NameOrDefinition implements Writeable { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(definition); - this.definition = Settings.builder().loadFromSource(builder.string()).build(); + this.definition = Settings.builder().loadFromSource(builder.string(), builder.contentType()).build(); } catch (IOException e) { throw new IllegalArgumentException("Failed to parse [" + definition + "]", e); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequestBuilder.java index 78d06185423dd..67a7dca45b933 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequestBuilder.java @@ -116,7 +116,7 @@ public AnalyzeRequestBuilder setExplain(boolean explain) { /** * Sets attributes that will include results */ - public AnalyzeRequestBuilder setAttributes(String attributes){ + public 
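The concreteAliases helper moved from the request into TransportIndicesAliasesAction: for REMOVE it expands alias wildcards against the aliases that exist on the concrete index, while for ADD and REMOVE_INDEX it passes the requested names through untouched. The plain-Java sketch below captures that expand-or-pass-through decision, with Regex.simpleMatch standing in as a simplification of the MetaData#findAliases lookup used above.

```java
import org.elasticsearch.common.regex.Regex;

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

class AliasExpansion {

    /**
     * Either expand wildcard patterns against the aliases that actually exist on
     * the index, or return the requested names as-is.
     */
    static String[] concreteAliases(boolean expandWildcards, String[] requested, Set<String> existingAliases) {
        if (expandWildcards == false) {
            return requested;
        }
        List<String> expanded = new ArrayList<>();
        for (String pattern : requested) {
            for (String alias : existingAliases) {
                if (Regex.simpleMatch(pattern, alias)) {
                    expanded.add(alias);
                }
            }
        }
        return expanded.toArray(new String[expanded.size()]);
    }
}
```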
AnalyzeRequestBuilder setAttributes(String... attributes){ request.attributes(attributes); return this; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeResponse.java index 48db340a1c75e..07e14211a36ae 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeResponse.java @@ -23,7 +23,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -32,28 +32,27 @@ import java.util.List; import java.util.Map; -/** - * - */ -public class AnalyzeResponse extends ActionResponse implements Iterable, ToXContent { +public class AnalyzeResponse extends ActionResponse implements Iterable, ToXContentObject { - public static class AnalyzeToken implements Streamable, ToXContent { + public static class AnalyzeToken implements Streamable, ToXContentObject { private String term; private int startOffset; private int endOffset; private int position; + private int positionLength = 1; private Map attributes; private String type; AnalyzeToken() { } - public AnalyzeToken(String term, int position, int startOffset, int endOffset, String type, - Map attributes) { + public AnalyzeToken(String term, int position, int startOffset, int endOffset, int positionLength, + String type, Map attributes) { this.term = term; this.position = position; this.startOffset = startOffset; this.endOffset = endOffset; + this.positionLength = positionLength; this.type = type; this.attributes = attributes; } @@ -74,6 +73,10 @@ public int getPosition() { return this.position; } + public int getPositionLength() { + return this.positionLength; + } + public String getType() { return this.type; } @@ -90,6 +93,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(Fields.END_OFFSET, endOffset); builder.field(Fields.TYPE, type); builder.field(Fields.POSITION, position); + if (positionLength > 1) { + builder.field(Fields.POSITION_LENGTH, positionLength); + } if (attributes != null && !attributes.isEmpty()) { for (Map.Entry entity : attributes.entrySet()) { builder.field(entity.getKey(), entity.getValue()); @@ -111,6 +117,14 @@ public void readFrom(StreamInput in) throws IOException { startOffset = in.readInt(); endOffset = in.readInt(); position = in.readVInt(); + if (in.getVersion().onOrAfter(Version.V_5_2_0)) { + Integer len = in.readOptionalVInt(); + if (len != null) { + positionLength = len; + } else { + positionLength = 1; + } + } type = in.readOptionalString(); if (in.getVersion().onOrAfter(Version.V_2_2_0)) { attributes = (Map) in.readGenericValue(); @@ -123,6 +137,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeInt(startOffset); out.writeInt(endOffset); out.writeVInt(position); + if (out.getVersion().onOrAfter(Version.V_5_2_0)) { + out.writeOptionalVInt(positionLength > 1 ? 
positionLength : null); + } out.writeOptionalString(type); if (out.getVersion().onOrAfter(Version.V_2_2_0)) { out.writeGenericValue(attributes); @@ -157,6 +174,7 @@ public Iterator iterator() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); if (tokens != null) { builder.startArray(Fields.TOKENS); for (AnalyzeToken token : tokens) { @@ -170,6 +188,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws detail.toXContent(builder, params); builder.endObject(); } + builder.endObject(); return builder; } @@ -209,6 +228,7 @@ static final class Fields { static final String END_OFFSET = "end_offset"; static final String TYPE = "type"; static final String POSITION = "position"; + static final String POSITION_LENGTH = "positionLength"; static final String DETAIL = "detail"; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/DetailAnalyzeResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/DetailAnalyzeResponse.java index c67c036023097..2d1ba22b989e5 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/DetailAnalyzeResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/DetailAnalyzeResponse.java @@ -292,7 +292,7 @@ public String[] getTexts() { public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(); builder.field(Fields.NAME, name); - builder.field(Fields.FILTERED_TEXT, texts); + builder.array(Fields.FILTERED_TEXT, texts); builder.endObject(); return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/TransportAnalyzeAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/TransportAnalyzeAction.java index f035bc0f4b7d1..d7e299b1cf1b5 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/TransportAnalyzeAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/TransportAnalyzeAction.java @@ -24,6 +24,7 @@ import org.apache.lucene.analysis.tokenattributes.CharTermAttribute; import org.apache.lucene.analysis.tokenattributes.OffsetAttribute; import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute; +import org.apache.lucene.analysis.tokenattributes.PositionLengthAttribute; import org.apache.lucene.analysis.tokenattributes.TypeAttribute; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.IOUtils; @@ -45,13 +46,14 @@ import org.elasticsearch.index.IndexService; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.analysis.AnalysisRegistry; -import org.elasticsearch.index.analysis.AnalysisService; import org.elasticsearch.index.analysis.CharFilterFactory; import org.elasticsearch.index.analysis.CustomAnalyzer; +import org.elasticsearch.index.analysis.IndexAnalyzers; import org.elasticsearch.index.analysis.NamedAnalyzer; import org.elasticsearch.index.analysis.TokenFilterFactory; import org.elasticsearch.index.analysis.TokenizerFactory; import org.elasticsearch.index.mapper.AllFieldMapper; +import org.elasticsearch.index.mapper.KeywordFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.indices.IndicesService; @@ -130,10 +132,17 @@ protected AnalyzeResponse shardOperation(AnalyzeRequest request, ShardId shardId } MappedFieldType fieldType = 
indexService.mapperService().fullName(request.field()); if (fieldType != null) { - if (fieldType.tokenized() == false) { + if (fieldType.tokenized()) { + analyzer = fieldType.indexAnalyzer(); + } else if (fieldType instanceof KeywordFieldMapper.KeywordFieldType) { + analyzer = ((KeywordFieldMapper.KeywordFieldType) fieldType).normalizer(); + if (analyzer == null) { + // this will be KeywordAnalyzer + analyzer = fieldType.indexAnalyzer(); + } + } else { throw new IllegalArgumentException("Can't process field [" + request.field() + "], Analysis requests are only supported on tokenized fields"); } - analyzer = fieldType.indexAnalyzer(); field = fieldType.name(); } } @@ -145,45 +154,46 @@ protected AnalyzeResponse shardOperation(AnalyzeRequest request, ShardId shardId } } final AnalysisRegistry analysisRegistry = indicesService.getAnalysis(); - return analyze(request, field, analyzer, indexService != null ? indexService.analysisService() : null, analysisRegistry, environment); + return analyze(request, field, analyzer, indexService != null ? indexService.getIndexAnalyzers() : null, analysisRegistry, environment); } catch (IOException e) { throw new ElasticsearchException("analysis failed", e); } } - public static AnalyzeResponse analyze(AnalyzeRequest request, String field, Analyzer analyzer, AnalysisService analysisService, AnalysisRegistry analysisRegistry, Environment environment) throws IOException { + public static AnalyzeResponse analyze(AnalyzeRequest request, String field, Analyzer analyzer, IndexAnalyzers indexAnalyzers, AnalysisRegistry analysisRegistry, Environment environment) throws IOException { boolean closeAnalyzer = false; if (analyzer == null && request.analyzer() != null) { - if (analysisService == null) { + if (indexAnalyzers == null) { analyzer = analysisRegistry.getAnalyzer(request.analyzer()); if (analyzer == null) { throw new IllegalArgumentException("failed to find global analyzer [" + request.analyzer() + "]"); } } else { - analyzer = analysisService.analyzer(request.analyzer()); + analyzer = indexAnalyzers.get(request.analyzer()); if (analyzer == null) { throw new IllegalArgumentException("failed to find analyzer [" + request.analyzer() + "]"); } } } else if (request.tokenizer() != null) { - TokenizerFactory tokenizerFactory = parseTokenizerFactory(request, analysisService, analysisRegistry, environment); + final IndexSettings indexSettings = indexAnalyzers == null ? 
null : indexAnalyzers.getIndexSettings(); + TokenizerFactory tokenizerFactory = parseTokenizerFactory(request, indexAnalyzers, analysisRegistry, environment); TokenFilterFactory[] tokenFilterFactories = new TokenFilterFactory[0]; - tokenFilterFactories = getTokenFilterFactories(request, analysisService, analysisRegistry, environment, tokenFilterFactories); + tokenFilterFactories = getTokenFilterFactories(request, indexSettings, analysisRegistry, environment, tokenFilterFactories); CharFilterFactory[] charFilterFactories = new CharFilterFactory[0]; - charFilterFactories = getCharFilterFactories(request, analysisService, analysisRegistry, environment, charFilterFactories); + charFilterFactories = getCharFilterFactories(request, indexSettings, analysisRegistry, environment, charFilterFactories); analyzer = new CustomAnalyzer(tokenizerFactory, charFilterFactories, tokenFilterFactories); closeAnalyzer = true; } else if (analyzer == null) { - if (analysisService == null) { + if (indexAnalyzers == null) { analyzer = analysisRegistry.getAnalyzer("standard"); } else { - analyzer = analysisService.defaultIndexAnalyzer(); + analyzer = indexAnalyzers.getDefaultIndexAnalyzer(); } } if (analyzer == null) { @@ -217,13 +227,15 @@ private static List simpleAnalyze(AnalyzeRequest r PositionIncrementAttribute posIncr = stream.addAttribute(PositionIncrementAttribute.class); OffsetAttribute offset = stream.addAttribute(OffsetAttribute.class); TypeAttribute type = stream.addAttribute(TypeAttribute.class); + PositionLengthAttribute posLen = stream.addAttribute(PositionLengthAttribute.class); while (stream.incrementToken()) { int increment = posIncr.getPositionIncrement(); if (increment > 0) { lastPosition = lastPosition + increment; } - tokens.add(new AnalyzeResponse.AnalyzeToken(term.toString(), lastPosition, lastOffset + offset.startOffset(), lastOffset + offset.endOffset(), type.type(), null)); + tokens.add(new AnalyzeResponse.AnalyzeToken(term.toString(), lastPosition, lastOffset + offset.startOffset(), + lastOffset + offset.endOffset(), posLen.getPositionLength(), type.type(), null)); } stream.end(); @@ -380,6 +392,7 @@ private void analyze(TokenStream stream, Analyzer analyzer, String field, Set extractExtendedAttributes(TokenStream stream, return extendedAttributes; } - private static CharFilterFactory[] getCharFilterFactories(AnalyzeRequest request, AnalysisService analysisService, AnalysisRegistry analysisRegistry, + private static CharFilterFactory[] getCharFilterFactories(AnalyzeRequest request, IndexSettings indexSettings, AnalysisRegistry analysisRegistry, Environment environment, CharFilterFactory[] charFilterFactories) throws IOException { if (request.charFilters() != null && request.charFilters().size() > 0) { charFilterFactories = new CharFilterFactory[request.charFilters().size()]; @@ -468,19 +481,19 @@ private static CharFilterFactory[] getCharFilterFactories(AnalyzeRequest request charFilterFactories[i] = charFilterFactoryFactory.get(getNaIndexSettings(settings), environment, "_anonymous_charfilter_[" + i + "]", settings); } else { AnalysisModule.AnalysisProvider charFilterFactoryFactory; - if (analysisService == null) { + if (indexSettings == null) { charFilterFactoryFactory = analysisRegistry.getCharFilterProvider(charFilter.name); if (charFilterFactoryFactory == null) { throw new IllegalArgumentException("failed to find global char filter under [" + charFilter.name + "]"); } charFilterFactories[i] = charFilterFactoryFactory.get(environment, charFilter.name); } else { - 
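The simpleAnalyze loop above now also reads the PositionLengthAttribute so multi-position tokens (for example from synonym graphs) report their span. The following stand-alone Lucene sketch, not Elasticsearch code, walks a token stream with the same attributes; StandardAnalyzer and the sample text are arbitrary choices for illustration.

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionLengthAttribute;

import java.io.IOException;

public class TokenStreamWalk {

    public static void main(String[] args) throws IOException {
        try (Analyzer analyzer = new StandardAnalyzer();
             TokenStream stream = analyzer.tokenStream("field", "quick brown fox")) {
            CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
            PositionIncrementAttribute posIncr = stream.addAttribute(PositionIncrementAttribute.class);
            OffsetAttribute offset = stream.addAttribute(OffsetAttribute.class);
            PositionLengthAttribute posLen = stream.addAttribute(PositionLengthAttribute.class);

            stream.reset();
            int position = -1;
            while (stream.incrementToken()) {
                position += posIncr.getPositionIncrement();
                // positionLength is 1 for ordinary tokens; > 1 only for tokens that
                // span several positions, e.g. multi-word synonyms in a token graph.
                System.out.println(term.toString()
                    + " pos=" + position
                    + " posLen=" + posLen.getPositionLength()
                    + " offsets=[" + offset.startOffset() + "," + offset.endOffset() + "]");
            }
            stream.end();
        }
    }
}
```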
charFilterFactoryFactory = analysisRegistry.getCharFilterProvider(charFilter.name, analysisService.getIndexSettings()); + charFilterFactoryFactory = analysisRegistry.getCharFilterProvider(charFilter.name, indexSettings); if (charFilterFactoryFactory == null) { throw new IllegalArgumentException("failed to find char filter under [" + charFilter.name + "]"); } - charFilterFactories[i] = charFilterFactoryFactory.get(analysisService.getIndexSettings(), environment, charFilter.name, - AnalysisRegistry.getSettingsFromIndexSettings(analysisService.getIndexSettings(), + charFilterFactories[i] = charFilterFactoryFactory.get(indexSettings, environment, charFilter.name, + AnalysisRegistry.getSettingsFromIndexSettings(indexSettings, AnalysisRegistry.INDEX_ANALYSIS_CHAR_FILTER + "." + charFilter.name)); } } @@ -492,7 +505,7 @@ private static CharFilterFactory[] getCharFilterFactories(AnalyzeRequest request return charFilterFactories; } - private static TokenFilterFactory[] getTokenFilterFactories(AnalyzeRequest request, AnalysisService analysisService, AnalysisRegistry analysisRegistry, + private static TokenFilterFactory[] getTokenFilterFactories(AnalyzeRequest request, IndexSettings indexSettings, AnalysisRegistry analysisRegistry, Environment environment, TokenFilterFactory[] tokenFilterFactories) throws IOException { if (request.tokenFilters() != null && request.tokenFilters().size() > 0) { tokenFilterFactories = new TokenFilterFactory[request.tokenFilters().size()]; @@ -514,19 +527,19 @@ private static TokenFilterFactory[] getTokenFilterFactories(AnalyzeRequest reque tokenFilterFactories[i] = tokenFilterFactoryFactory.get(getNaIndexSettings(settings), environment, "_anonymous_tokenfilter_[" + i + "]", settings); } else { AnalysisModule.AnalysisProvider tokenFilterFactoryFactory; - if (analysisService == null) { + if (indexSettings == null) { tokenFilterFactoryFactory = analysisRegistry.getTokenFilterProvider(tokenFilter.name); if (tokenFilterFactoryFactory == null) { throw new IllegalArgumentException("failed to find global token filter under [" + tokenFilter.name + "]"); } tokenFilterFactories[i] = tokenFilterFactoryFactory.get(environment, tokenFilter.name); } else { - tokenFilterFactoryFactory = analysisRegistry.getTokenFilterProvider(tokenFilter.name, analysisService.getIndexSettings()); + tokenFilterFactoryFactory = analysisRegistry.getTokenFilterProvider(tokenFilter.name, indexSettings); if (tokenFilterFactoryFactory == null) { throw new IllegalArgumentException("failed to find token filter under [" + tokenFilter.name + "]"); } - tokenFilterFactories[i] = tokenFilterFactoryFactory.get(analysisService.getIndexSettings(), environment, tokenFilter.name, - AnalysisRegistry.getSettingsFromIndexSettings(analysisService.getIndexSettings(), + tokenFilterFactories[i] = tokenFilterFactoryFactory.get(indexSettings, environment, tokenFilter.name, + AnalysisRegistry.getSettingsFromIndexSettings(indexSettings, AnalysisRegistry.INDEX_ANALYSIS_FILTER + "." 
+ tokenFilter.name)); } } @@ -538,7 +551,7 @@ private static TokenFilterFactory[] getTokenFilterFactories(AnalyzeRequest reque return tokenFilterFactories; } - private static TokenizerFactory parseTokenizerFactory(AnalyzeRequest request, AnalysisService analysisService, + private static TokenizerFactory parseTokenizerFactory(AnalyzeRequest request, IndexAnalyzers indexAnalzyers, AnalysisRegistry analysisRegistry, Environment environment) throws IOException { TokenizerFactory tokenizerFactory; final AnalyzeRequest.NameOrDefinition tokenizer = request.tokenizer(); @@ -558,19 +571,19 @@ private static TokenizerFactory parseTokenizerFactory(AnalyzeRequest request, An tokenizerFactory = tokenizerFactoryFactory.get(getNaIndexSettings(settings), environment, "_anonymous_tokenizer", settings); } else { AnalysisModule.AnalysisProvider tokenizerFactoryFactory; - if (analysisService == null) { + if (indexAnalzyers == null) { tokenizerFactoryFactory = analysisRegistry.getTokenizerProvider(tokenizer.name); if (tokenizerFactoryFactory == null) { throw new IllegalArgumentException("failed to find global tokenizer under [" + tokenizer.name + "]"); } tokenizerFactory = tokenizerFactoryFactory.get(environment, tokenizer.name); } else { - tokenizerFactoryFactory = analysisRegistry.getTokenizerProvider(tokenizer.name, analysisService.getIndexSettings()); + tokenizerFactoryFactory = analysisRegistry.getTokenizerProvider(tokenizer.name, indexAnalzyers.getIndexSettings()); if (tokenizerFactoryFactory == null) { throw new IllegalArgumentException("failed to find tokenizer under [" + tokenizer.name + "]"); } - tokenizerFactory = tokenizerFactoryFactory.get(analysisService.getIndexSettings(), environment, tokenizer.name, - AnalysisRegistry.getSettingsFromIndexSettings(analysisService.getIndexSettings(), + tokenizerFactory = tokenizerFactoryFactory.get(indexAnalzyers.getIndexSettings(), environment, tokenizer.name, + AnalysisRegistry.getSettingsFromIndexSettings(indexAnalzyers.getIndexSettings(), AnalysisRegistry.INDEX_ANALYSIS_TOKENIZER + "." 
+ tokenizer.name)); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/close/TransportCloseIndexAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/close/TransportCloseIndexAction.java index d33f37defec1e..244b8a24b9b67 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/close/TransportCloseIndexAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/close/TransportCloseIndexAction.java @@ -22,6 +22,7 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.admin.indices.open.OpenIndexResponse; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.DestructiveOperations; import org.elasticsearch.action.support.master.TransportMasterNodeAction; @@ -97,6 +98,10 @@ protected ClusterBlockException checkBlock(CloseIndexRequest request, ClusterSta @Override protected void masterOperation(final CloseIndexRequest request, final ClusterState state, final ActionListener listener) { final Index[] concreteIndices = indexNameExpressionResolver.concreteIndices(state, request); + if (concreteIndices == null || concreteIndices.length == 0) { + listener.onResponse(new CloseIndexResponse(true)); + return; + } CloseIndexClusterStateUpdateRequest updateRequest = new CloseIndexClusterStateUpdateRequest() .ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout()) .indices(concreteIndices); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexClusterStateUpdateRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexClusterStateUpdateRequest.java index 7b3b2a0a2f0c6..a2290a5e2556e 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexClusterStateUpdateRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexClusterStateUpdateRequest.java @@ -41,6 +41,7 @@ public class CreateIndexClusterStateUpdateRequest extends ClusterStateUpdateRequ private final TransportMessage originalMessage; private final String cause; private final String index; + private final String providedName; private final boolean updateAllTypes; private Index shrinkFrom; @@ -59,11 +60,13 @@ public class CreateIndexClusterStateUpdateRequest extends ClusterStateUpdateRequ private ActiveShardCount waitForActiveShards = ActiveShardCount.DEFAULT; - public CreateIndexClusterStateUpdateRequest(TransportMessage originalMessage, String cause, String index, boolean updateAllTypes) { + public CreateIndexClusterStateUpdateRequest(TransportMessage originalMessage, String cause, String index, String providedName, + boolean updateAllTypes) { this.originalMessage = originalMessage; this.cause = cause; this.index = index; this.updateAllTypes = updateAllTypes; + this.providedName = providedName; } public CreateIndexClusterStateUpdateRequest settings(Settings settings) { @@ -151,6 +154,14 @@ public boolean updateAllTypes() { return updateAllTypes; } + /** + * The name that was provided by the user. This might contain a date math expression. 
+ * @see IndexMetaData#SETTING_INDEX_PROVIDED_NAME + */ + public String getProvidedName() { + return providedName; + } + public ActiveShardCount waitForActiveShards() { return waitForActiveShards; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java index 17df06dbf4b84..daf1415a0aad5 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java @@ -21,6 +21,7 @@ import org.elasticsearch.ElasticsearchGenerationException; import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.admin.indices.alias.Alias; @@ -35,6 +36,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentHelper; @@ -42,10 +44,11 @@ import org.elasticsearch.common.xcontent.XContentType; import java.io.IOException; -import java.nio.charset.StandardCharsets; +import java.io.UncheckedIOException; import java.util.HashMap; import java.util.HashSet; import java.util.Map; +import java.util.Objects; import java.util.Set; import static org.elasticsearch.action.ValidateActions.addValidationError; @@ -168,19 +171,29 @@ public CreateIndexRequest settings(Settings.Builder settings) { } /** - * The settings to create the index with (either json/yaml/properties format) + * The settings to create the index with (either json or yaml format) + * @deprecated use {@link #source(String, XContentType)} instead to avoid content type detection */ + @Deprecated public CreateIndexRequest settings(String source) { this.settings = Settings.builder().loadFromSource(source).build(); return this; } + /** + * The settings to create the index with (either json or yaml format) + */ + public CreateIndexRequest settings(String source, XContentType xContentType) { + this.settings = Settings.builder().loadFromSource(source, xContentType).build(); + return this; + } + /** * Allows to set the settings using a json builder. 
*/ public CreateIndexRequest settings(XContentBuilder builder) { try { - settings(builder.string()); + settings(builder.string(), builder.contentType()); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate json settings from builder", e); } @@ -195,7 +208,7 @@ public CreateIndexRequest settings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - settings(builder.string()); + settings(builder.string(), XContentType.JSON); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -207,13 +220,42 @@ public CreateIndexRequest settings(Map source) { * * @param type The mapping type * @param source The mapping source + * @deprecated use {@link #mapping(String, String, XContentType)} to avoid content type detection */ + @Deprecated public CreateIndexRequest mapping(String type, String source) { + return mapping(type, new BytesArray(source), XContentFactory.xContentType(source)); + } + + /** + * Adds mapping that will be added when the index gets created. + * + * @param type The mapping type + * @param source The mapping source + * @param xContentType The content type of the source + */ + public CreateIndexRequest mapping(String type, String source, XContentType xContentType) { + return mapping(type, new BytesArray(source), xContentType); + } + + /** + * Adds mapping that will be added when the index gets created. + * + * @param type The mapping type + * @param source The mapping source + * @param xContentType the content type of the mapping source + */ + private CreateIndexRequest mapping(String type, BytesReference source, XContentType xContentType) { if (mappings.containsKey(type)) { throw new IllegalStateException("mappings for type \"" + type + "\" were already defined"); } - mappings.put(type, source); - return this; + Objects.requireNonNull(xContentType); + try { + mappings.put(type, XContentHelper.convertToJson(source, false, false, xContentType)); + return this; + } catch (IOException e) { + throw new UncheckedIOException("failed to convert to json", e); + } } /** @@ -231,15 +273,7 @@ public CreateIndexRequest cause(String cause) { * @param source The mapping source */ public CreateIndexRequest mapping(String type, XContentBuilder source) { - if (mappings.containsKey(type)) { - throw new IllegalStateException("mappings for type \"" + type + "\" were already defined"); - } - try { - mappings.put(type, source.string()); - } catch (IOException e) { - throw new IllegalArgumentException("Failed to build json for mapping request", e); - } - return this; + return mapping(type, source.bytes(), source.contentType()); } /** @@ -260,7 +294,7 @@ public CreateIndexRequest mapping(String type, Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - return mapping(type, builder.string()); + return mapping(type, builder); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -307,7 +341,8 @@ public CreateIndexRequest aliases(String source) { * Sets the aliases that will be associated with the index when it gets created */ public CreateIndexRequest aliases(BytesReference source) { - try (XContentParser parser = XContentHelper.createParser(source)) { + // EMPTY is safe here because we never call namedObject + try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, source)) { //move to the 
first alias parser.nextToken(); while ((parser.nextToken()) != XContentParser.Token.END_OBJECT) { @@ -329,9 +364,18 @@ public CreateIndexRequest alias(Alias alias) { /** * Sets the settings and mappings as a single source. + * @deprecated use {@link #source(String, XContentType)} */ + @Deprecated public CreateIndexRequest source(String source) { - return source(source.getBytes(StandardCharsets.UTF_8)); + return source(new BytesArray(source)); + } + + /** + * Sets the settings and mappings as a single source. + */ + public CreateIndexRequest source(String source, XContentType xContentType) { + return source(new BytesArray(source), xContentType); } /** @@ -343,7 +387,9 @@ public CreateIndexRequest source(XContentBuilder source) { /** * Sets the settings and mappings as a single source. + * @deprecated use {@link #source(byte[], XContentType)} */ + @Deprecated public CreateIndexRequest source(byte[] source) { return source(source, 0, source.length); } @@ -351,6 +397,15 @@ public CreateIndexRequest source(byte[] source) { /** * Sets the settings and mappings as a single source. */ + public CreateIndexRequest source(byte[] source, XContentType xContentType) { + return source(source, 0, source.length, xContentType); + } + + /** + * Sets the settings and mappings as a single source. + * @deprecated use {@link #source(byte[], int, int, XContentType)} + */ + @Deprecated public CreateIndexRequest source(byte[] source, int offset, int length) { return source(new BytesArray(source, offset, length)); } @@ -358,17 +413,27 @@ public CreateIndexRequest source(byte[] source, int offset, int length) { /** * Sets the settings and mappings as a single source. */ + public CreateIndexRequest source(byte[] source, int offset, int length, XContentType xContentType) { + return source(new BytesArray(source, offset, length), xContentType); + } + + /** + * Sets the settings and mappings as a single source. + * @deprecated use {@link #source(BytesReference, XContentType)} + */ + @Deprecated public CreateIndexRequest source(BytesReference source) { XContentType xContentType = XContentFactory.xContentType(source); - if (xContentType != null) { - try (XContentParser parser = XContentFactory.xContent(xContentType).createParser(source)) { - source(parser.map()); - } catch (IOException e) { - throw new ElasticsearchParseException("failed to parse source for create index", e); - } - } else { - settings(source.utf8ToString()); - } + source(source, xContentType); + return this; + } + + /** + * Sets the settings and mappings as a single source. 
+ */ + public CreateIndexRequest source(BytesReference source, XContentType xContentType) { + Objects.requireNonNull(xContentType); + source(XContentHelper.convertToMap(source, false, xContentType).v2()); return this; } @@ -485,7 +550,13 @@ public void readFrom(StreamInput in) throws IOException { readTimeout(in); int size = in.readVInt(); for (int i = 0; i < size; i++) { - mappings.put(in.readString(), in.readString()); + final String type = in.readString(); + String source = in.readString(); + if (in.getVersion().before(Version.V_5_3_0)) { + // we do not know the content type that comes from earlier versions so we autodetect and convert + source = XContentHelper.convertToJson(new BytesArray(source), false, false, XContentFactory.xContentType(source)); + } + mappings.put(type, source); } int customSize = in.readVInt(); for (int i = 0; i < customSize; i++) { diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilder.java index eaae4d53b73fd..237c88244b4cd 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilder.java @@ -27,6 +27,7 @@ import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentType; import java.util.Map; @@ -76,13 +77,23 @@ public CreateIndexRequestBuilder setSettings(XContentBuilder builder) { } /** - * The settings to create the index with (either json/yaml/properties format) + * The settings to create the index with (either json or yaml format) + * @deprecated use {@link #setSettings(String, XContentType)} to avoid content type detection */ + @Deprecated public CreateIndexRequestBuilder setSettings(String source) { request.settings(source); return this; } + /** + * The settings to create the index with (either json or yaml format) + */ + public CreateIndexRequestBuilder setSettings(String source, XContentType xContentType) { + request.settings(source, xContentType); + return this; + } + /** * A simplified version of settings that takes key value pairs settings. */ @@ -104,12 +115,26 @@ public CreateIndexRequestBuilder setSettings(Map source) { * * @param type The mapping type * @param source The mapping source + * @deprecated use {@link #addMapping(String, String, XContentType)} to avoid content type auto-detection */ + @Deprecated public CreateIndexRequestBuilder addMapping(String type, String source) { request.mapping(type, source); return this; } + /** + * Adds mapping that will be added when the index gets created. + * + * @param type The mapping type + * @param source The mapping source + * @param xContentType The content type of the source + */ + public CreateIndexRequestBuilder addMapping(String type, String source, XContentType xContentType) { + request.mapping(type, source, xContentType); + return this; + } + /** * The cause for this index creation. */ @@ -191,7 +216,9 @@ public CreateIndexRequestBuilder addAlias(Alias alias) { /** * Sets the settings and mappings as a single source. 
+ * @deprecated use {@link #setSource(String, XContentType)} */ + @Deprecated public CreateIndexRequestBuilder setSource(String source) { request.source(source); return this; @@ -200,6 +227,16 @@ public CreateIndexRequestBuilder setSource(String source) { /** * Sets the settings and mappings as a single source. */ + public CreateIndexRequestBuilder setSource(String source, XContentType xContentType) { + request.source(source, xContentType); + return this; + } + + /** + * Sets the settings and mappings as a single source. + * @deprecated use {@link #setSource(BytesReference, XContentType)} + */ + @Deprecated public CreateIndexRequestBuilder setSource(BytesReference source) { request.source(source); return this; @@ -208,6 +245,16 @@ public CreateIndexRequestBuilder setSource(BytesReference source) { /** * Sets the settings and mappings as a single source. */ + public CreateIndexRequestBuilder setSource(BytesReference source, XContentType xContentType) { + request.source(source, xContentType); + return this; + } + + /** + * Sets the settings and mappings as a single source. + * @deprecated use {@link #setSource(byte[], XContentType)} + */ + @Deprecated public CreateIndexRequestBuilder setSource(byte[] source) { request.source(source); return this; @@ -216,11 +263,29 @@ public CreateIndexRequestBuilder setSource(byte[] source) { /** * Sets the settings and mappings as a single source. */ + public CreateIndexRequestBuilder setSource(byte[] source, XContentType xContentType) { + request.source(source, xContentType); + return this; + } + + /** + * Sets the settings and mappings as a single source. + * @deprecated use {@link #setSource(byte[], int, int, XContentType)} + */ + @Deprecated public CreateIndexRequestBuilder setSource(byte[] source, int offset, int length) { request.source(source, offset, length); return this; } + /** + * Sets the settings and mappings as a single source. + */ + public CreateIndexRequestBuilder setSource(byte[] source, int offset, int length, XContentType xContentType) { + request.source(source, offset, length, xContentType); + return this; + } + /** * Sets the settings and mappings as a single source. 
*/ diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexResponse.java index 35dd53276cd6d..7d948e7137ebf 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexResponse.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.indices.create; +import org.elasticsearch.Version; import org.elasticsearch.action.support.master.AcknowledgedResponse; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -32,14 +33,16 @@ public class CreateIndexResponse extends AcknowledgedResponse { private boolean shardsAcked; + private String index; protected CreateIndexResponse() { } - protected CreateIndexResponse(boolean acknowledged, boolean shardsAcked) { + protected CreateIndexResponse(boolean acknowledged, boolean shardsAcked, String index) { super(acknowledged); assert acknowledged || shardsAcked == false; // if its not acknowledged, then shards acked should be false too this.shardsAcked = shardsAcked; + this.index = index; } @Override @@ -47,6 +50,9 @@ public void readFrom(StreamInput in) throws IOException { super.readFrom(in); readAcknowledged(in); shardsAcked = in.readBoolean(); + if (in.getVersion().onOrAfter(Version.V_5_6_0)) { + index = in.readString(); + } } @Override @@ -54,6 +60,9 @@ public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); writeAcknowledged(out); out.writeBoolean(shardsAcked); + if (out.getVersion().onOrAfter(Version.V_5_6_0)) { + out.writeString(index); + } } /** @@ -65,7 +74,12 @@ public boolean isShardsAcked() { return shardsAcked; } + public String index() { + return index; + } + public void addCustomFields(XContentBuilder builder) throws IOException { builder.field("shards_acknowledged", isShardsAcked()); + builder.field("index", index()); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/create/TransportCreateIndexAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/create/TransportCreateIndexAction.java index d3ce1975e8917..0ac8d02f97760 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/create/TransportCreateIndexAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/create/TransportCreateIndexAction.java @@ -72,14 +72,14 @@ protected void masterOperation(final CreateIndexRequest request, final ClusterSt } final String indexName = indexNameExpressionResolver.resolveDateMathExpression(request.index()); - final CreateIndexClusterStateUpdateRequest updateRequest = new CreateIndexClusterStateUpdateRequest(request, cause, indexName, request.updateAllTypes()) + final CreateIndexClusterStateUpdateRequest updateRequest = new CreateIndexClusterStateUpdateRequest(request, cause, indexName, request.index(), request.updateAllTypes()) .ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout()) .settings(request.settings()).mappings(request.mappings()) .aliases(request.aliases()).customs(request.customs()) .waitForActiveShards(request.waitForActiveShards()); createIndexService.createIndex(updateRequest, ActionListener.wrap(response -> - listener.onResponse(new CreateIndexResponse(response.isAcknowledged(), response.isShardsAcked())), + listener.onResponse(new CreateIndexResponse(response.isAcknowledged(), response.isShardsAcked(), indexName)), 
listener::onFailure)); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/delete/DeleteIndexRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/delete/DeleteIndexRequestBuilder.java index 9e5dc88b983a0..e4c7a9fa21430 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/delete/DeleteIndexRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/delete/DeleteIndexRequestBuilder.java @@ -20,37 +20,18 @@ package org.elasticsearch.action.admin.indices.delete; import org.elasticsearch.action.support.IndicesOptions; -import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder; +import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; -import org.elasticsearch.common.unit.TimeValue; /** * */ -public class DeleteIndexRequestBuilder extends MasterNodeOperationRequestBuilder { +public class DeleteIndexRequestBuilder extends AcknowledgedRequestBuilder { public DeleteIndexRequestBuilder(ElasticsearchClient client, DeleteIndexAction action, String... indices) { super(client, action, new DeleteIndexRequest(indices)); } - /** - * Timeout to wait for the index deletion to be acknowledged by current cluster nodes. Defaults - * to 60s. - */ - public DeleteIndexRequestBuilder setTimeout(TimeValue timeout) { - request.timeout(timeout); - return this; - } - - /** - * Timeout to wait for the index deletion to be acknowledged by current cluster nodes. Defaults - * to 10s. - */ - public DeleteIndexRequestBuilder setTimeout(String timeout) { - request.timeout(timeout); - return this; - } - /** * Specifies what type of requested indices to ignore and wildcard indices expressions. *

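Editor's note: to make the intent of the `CreateIndexRequest`/`CreateIndexRequestBuilder` changes above concrete, here is a minimal usage sketch against the Java client. It is not part of the patch; the index name, type name, mapping body, and the `CreateIndexExample` helper are illustrative assumptions. It only relies on methods that already exist in the client plus the `XContentType`-explicit overloads and the `CreateIndexResponse#index()` getter introduced in the hunks above.

```java
import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.xcontent.XContentType;

// Hypothetical helper class, not part of this patch.
public class CreateIndexExample {

    // Creates a "twitter" index using the new overloads that take an explicit
    // XContentType instead of relying on deprecated content-type auto-detection.
    public static String createTwitterIndex(Client client) {
        CreateIndexResponse response = client.admin().indices()
                .prepareCreate("twitter")
                .setSettings("{\"index\":{\"number_of_shards\":3}}", XContentType.JSON)
                .addMapping("tweet",
                        "{\"properties\":{\"message\":{\"type\":\"text\"}}}",
                        XContentType.JSON)
                .get();
        // with this patch the response also reports the concrete index name that was created
        return response.index();
    }
}
```

Passing the content type explicitly is what the deprecation notes above recommend: the caller already knows whether the source is JSON or YAML, so the request no longer has to sniff it.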
diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/delete/TransportDeleteIndexAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/delete/TransportDeleteIndexAction.java index 251eed8bdb88b..f5c63bd470d40 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/delete/TransportDeleteIndexAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/delete/TransportDeleteIndexAction.java @@ -78,7 +78,7 @@ protected void doExecute(Task task, DeleteIndexRequest request, ActionListener { private boolean force = false; - private boolean waitIfOngoing = false; + private boolean waitIfOngoing = true; /** * Constructs a new flush request against one or more indices. If nothing is provided, all indices will @@ -61,6 +61,7 @@ public boolean waitIfOngoing() { /** * if set to true the flush will block * if a another flush operation is already running until the flush can be performed. + * The default is true */ public FlushRequest waitIfOngoing(boolean waitIfOngoing) { this.waitIfOngoing = waitIfOngoing; diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/flush/TransportShardFlushAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/flush/TransportShardFlushAction.java index 570307a717da2..d79668ea73ed9 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/flush/TransportShardFlushAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/flush/TransportShardFlushAction.java @@ -23,7 +23,7 @@ import org.elasticsearch.action.support.replication.ReplicationResponse; import org.elasticsearch.action.support.replication.TransportReplicationAction; import org.elasticsearch.cluster.action.shard.ShardStateAction; -import org.elasticsearch.cluster.block.ClusterBlockLevel; +import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; @@ -54,33 +54,21 @@ protected ReplicationResponse newResponseInstance() { } @Override - protected PrimaryResult shardOperationOnPrimary(ShardFlushRequest shardRequest) { - IndexShard indexShard = indicesService.indexServiceSafe(shardRequest.shardId().getIndex()).getShard(shardRequest.shardId().id()); - indexShard.flush(shardRequest.getRequest()); - logger.trace("{} flush request executed on primary", indexShard.shardId()); + protected PrimaryResult shardOperationOnPrimary(ShardFlushRequest shardRequest, IndexShard primary) { + primary.flush(shardRequest.getRequest()); + logger.trace("{} flush request executed on primary", primary.shardId()); return new PrimaryResult(shardRequest, new ReplicationResponse()); } @Override - protected ReplicaResult shardOperationOnReplica(ShardFlushRequest request) { - IndexShard indexShard = indicesService.indexServiceSafe(request.shardId().getIndex()).getShard(request.shardId().id()); - indexShard.flush(request.getRequest()); - logger.trace("{} flush request executed on replica", indexShard.shardId()); + protected ReplicaResult shardOperationOnReplica(ShardFlushRequest request, IndexShard replica) { + replica.flush(request.getRequest()); + logger.trace("{} flush request executed on replica", replica.shardId()); return new ReplicaResult(); } @Override - protected ClusterBlockLevel globalBlockLevel() { - return ClusterBlockLevel.METADATA_WRITE; - } - - @Override - protected ClusterBlockLevel indexBlockLevel() { - return 
ClusterBlockLevel.METADATA_WRITE; - } - - @Override - protected boolean shouldExecuteReplication(Settings settings) { + protected boolean shouldExecuteReplication(IndexMetaData indexMetaData) { return true; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexRequest.java index 6887857925979..48886c38aa4bc 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexRequest.java @@ -34,7 +34,7 @@ */ public class GetIndexRequest extends ClusterInfoRequest { - public static enum Feature { + public enum Feature { ALIASES((byte) 0, "_aliases", "_alias"), MAPPINGS((byte) 1, "_mappings", "_mapping"), SETTINGS((byte) 2, "_settings"); @@ -52,7 +52,7 @@ public static enum Feature { private final String preferredName; private final byte id; - private Feature(byte id, String... validNames) { + Feature(byte id, String... validNames) { assert validNames != null && validNames.length > 0; this.id = id; this.validNames = Arrays.asList(validNames); @@ -77,7 +77,7 @@ public static Feature fromName(String name) { return feature; } } - throw new IllegalArgumentException("No feature for name [" + name + "]"); + throw new IllegalArgumentException("No endpoint or operation is available at [" + name + "]"); } public static Feature fromId(byte id) { diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexResponse.java index 3a29237faeb2b..36bfa81a33416 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexResponse.java @@ -104,7 +104,7 @@ public void readFrom(StreamInput in) throws IOException { int valueSize = in.readVInt(); ImmutableOpenMap.Builder mappingEntryBuilder = ImmutableOpenMap.builder(); for (int j = 0; j < valueSize; j++) { - mappingEntryBuilder.put(in.readString(), MappingMetaData.PROTO.readFrom(in)); + mappingEntryBuilder.put(in.readString(), new MappingMetaData(in)); } mappingsMapBuilder.put(key, mappingEntryBuilder.build()); } @@ -114,9 +114,9 @@ public void readFrom(StreamInput in) throws IOException { for (int i = 0; i < aliasesSize; i++) { String key = in.readString(); int valueSize = in.readVInt(); - List aliasEntryBuilder = new ArrayList<>(); + List aliasEntryBuilder = new ArrayList<>(valueSize); for (int j = 0; j < valueSize; j++) { - aliasEntryBuilder.add(AliasMetaData.Builder.readFrom(in)); + aliasEntryBuilder.add(new AliasMetaData(in)); } aliasesMapBuilder.put(key, Collections.unmodifiableList(aliasEntryBuilder)); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsRequest.java index 967ea31c84a09..819d2de999ccf 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsRequest.java @@ -30,7 +30,7 @@ import java.io.IOException; /** Request the mappings of specific fields */ -public class GetFieldMappingsRequest extends ActionRequest implements IndicesRequest.Replaceable { +public class GetFieldMappingsRequest extends ActionRequest implements 
IndicesRequest.Replaceable { protected boolean local = false; diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsResponse.java index e0cedcf841e47..3f4ddaf08db2c 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsResponse.java @@ -27,6 +27,7 @@ import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentHelper; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.mapper.Mapper; import java.io.IOException; @@ -108,20 +109,25 @@ public String fullName() { /** Returns the mappings as a map. Note that the returned map has a single key which is always the field's {@link Mapper#name}. */ public Map sourceAsMap() { - return XContentHelper.convertToMap(source, true).v2(); + return XContentHelper.convertToMap(source, true, XContentType.JSON).v2(); } public boolean isNull() { return NULL.fullName().equals(fullName) && NULL.source.length() == source.length(); } + //pkg-private for testing + BytesReference getSource() { + return source; + } + @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.field("full_name", fullName); if (params.paramAsBoolean("pretty", false)) { builder.field("mapping", sourceAsMap()); } else { - builder.rawField("mapping", source); + builder.rawField("mapping", source, XContentType.JSON); } return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetMappingsResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetMappingsResponse.java index 30e9e24c4937b..04e257aad5b1a 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetMappingsResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetMappingsResponse.java @@ -59,7 +59,7 @@ public void readFrom(StreamInput in) throws IOException { int valueSize = in.readVInt(); ImmutableOpenMap.Builder typeMapBuilder = ImmutableOpenMap.builder(); for (int j = 0; j < valueSize; j++) { - typeMapBuilder.put(in.readString(), MappingMetaData.PROTO.readFrom(in)); + typeMapBuilder.put(in.readString(), new MappingMetaData(in)); } indexMapBuilder.put(key, typeMapBuilder.build()); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetFieldMappingsIndexAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetFieldMappingsIndexAction.java index 864c6703c48e0..92c23bb856865 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetFieldMappingsIndexAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetFieldMappingsIndexAction.java @@ -50,12 +50,10 @@ import java.io.IOException; import java.util.ArrayList; import java.util.Collection; -import java.util.Iterator; import java.util.Map; import java.util.stream.Collectors; import static java.util.Collections.singletonMap; -import static org.elasticsearch.common.util.CollectionUtils.newLinkedList; /** * Transport action used to retrieve the mappings related to fields that belong to a specific index @@ -174,24 +172,12 @@ 
private Map findFieldMappingsByType(DocumentMapper addFieldMapper(fieldMapper.fieldType().name(), fieldMapper, fieldMappings, request.includeDefaults()); } } else if (Regex.isSimpleMatchPattern(field)) { - // go through the field mappers 3 times, to make sure we give preference to the resolve order: full name, index name, name. - // also make sure we only store each mapper once. - Collection remainingFieldMappers = newLinkedList(allFieldMappers); - for (Iterator it = remainingFieldMappers.iterator(); it.hasNext(); ) { - final FieldMapper fieldMapper = it.next(); - if (Regex.simpleMatch(field, fieldMapper.fieldType().name())) { - addFieldMapper(fieldMapper.fieldType().name(), fieldMapper, fieldMappings, request.includeDefaults()); - it.remove(); - } - } - for (Iterator it = remainingFieldMappers.iterator(); it.hasNext(); ) { - final FieldMapper fieldMapper = it.next(); + for (FieldMapper fieldMapper : allFieldMappers) { if (Regex.simpleMatch(field, fieldMapper.fieldType().name())) { - addFieldMapper(fieldMapper.fieldType().name(), fieldMapper, fieldMappings, request.includeDefaults()); - it.remove(); + addFieldMapper(fieldMapper.fieldType().name(), fieldMapper, fieldMappings, + request.includeDefaults()); } } - } else { // not a pattern FieldMapper fieldMapper = allFieldMappers.smartNameFieldMapper(field); @@ -220,4 +206,4 @@ private void addFieldMapper(String field, FieldMapper fieldMapper, MapBuilder im private static ObjectHashSet RESERVED_FIELDS = ObjectHashSet.from( "_uid", "_id", "_type", "_source", "_all", "_analyzer", "_parent", "_routing", "_index", - "_size", "_timestamp", "_ttl" + "_size", "_timestamp", "_ttl", "_field_names" ); private String[] indices; @@ -245,7 +250,7 @@ public static XContentBuilder buildFromSimplifiedDef(String type, Object... sour */ public PutMappingRequest source(XContentBuilder mappingBuilder) { try { - return source(mappingBuilder.string()); + return source(mappingBuilder.string(), mappingBuilder.contentType()); } catch (IOException e) { throw new IllegalArgumentException("Failed to build json for mapping request", e); } @@ -259,7 +264,7 @@ public PutMappingRequest source(Map mappingSource) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(mappingSource); - return source(builder.string()); + return source(builder.string(), XContentType.JSON); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + mappingSource + "]", e); } @@ -267,10 +272,31 @@ public PutMappingRequest source(Map mappingSource) { /** * The mapping source definition. + * @deprecated use {@link #source(String, XContentType)} */ + @Deprecated public PutMappingRequest source(String mappingSource) { - this.source = mappingSource; - return this; + return source(mappingSource, XContentFactory.xContentType(mappingSource)); + } + + /** + * The mapping source definition. + */ + public PutMappingRequest source(String mappingSource, XContentType xContentType) { + return source(new BytesArray(mappingSource), xContentType); + } + + /** + * The mapping source definition. 
+ */ + public PutMappingRequest source(BytesReference mappingSource, XContentType xContentType) { + Objects.requireNonNull(xContentType); + try { + this.source = XContentHelper.convertToJson(mappingSource, false, false, xContentType); + return this; + } catch (IOException e) { + throw new UncheckedIOException("failed to convert source to json", e); + } } /** True if all fields that span multiple types should be updated, false otherwise */ @@ -291,6 +317,10 @@ public void readFrom(StreamInput in) throws IOException { indicesOptions = IndicesOptions.readIndicesOptions(in); type = in.readOptionalString(); source = in.readString(); + if (in.getVersion().before(Version.V_5_3_0)) { + // we do not know the format from earlier versions so convert if necessary + source = XContentHelper.convertToJson(new BytesArray(source), false, false, XContentFactory.xContentType(source)); + } updateAllTypes = in.readBoolean(); readTimeout(in); concreteIndex = in.readOptionalWriteable(Index::new); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequestBuilder.java index c21c40cf041ea..012a593ebc473 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequestBuilder.java @@ -23,6 +23,7 @@ import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.Index; import java.util.Map; @@ -82,12 +83,22 @@ public PutMappingRequestBuilder setSource(Map mappingSource) { /** * The mapping source definition. + * @deprecated use {@link #setSource(String, XContentType)} */ + @Deprecated public PutMappingRequestBuilder setSource(String mappingSource) { request.source(mappingSource); return this; } + /** + * The mapping source definition. + */ + public PutMappingRequestBuilder setSource(String mappingSource, XContentType xContentType) { + request.source(mappingSource, xContentType); + return this; + } + /** * A specialized simplified mapping source method, takes the form of simple properties definition: * ("field1", "type=string,store=true"). 
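Editor's note: the put-mapping API gains the same explicit-`XContentType` treatment as the create-index API earlier in this patch. Below is a minimal, hypothetical sketch of the new `PutMappingRequestBuilder#setSource(String, XContentType)` overload; the index, type, and field names are assumptions made for illustration, not values taken from the patch.

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.xcontent.XContentType;

// Hypothetical helper class, not part of this patch.
public class PutMappingExample {

    // Adds a "user" keyword field to an assumed existing "twitter" index using the
    // new setSource(String, XContentType) overload instead of the deprecated
    // variant that auto-detects the content type of the mapping source.
    public static void addUserField(Client client) {
        client.admin().indices()
                .preparePutMapping("twitter")
                .setType("tweet")
                .setSource("{\"properties\":{\"user\":{\"type\":\"keyword\"}}}",
                        XContentType.JSON)
                .get();
    }
}
```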
diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java index 1128ebf9875fd..451b9a280be68 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java @@ -22,6 +22,7 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.admin.indices.delete.DeleteIndexResponse; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.DestructiveOperations; import org.elasticsearch.action.support.master.TransportMasterNodeAction; @@ -82,6 +83,10 @@ protected ClusterBlockException checkBlock(OpenIndexRequest request, ClusterStat @Override protected void masterOperation(final OpenIndexRequest request, final ClusterState state, final ActionListener listener) { final Index[] concreteIndices = indexNameExpressionResolver.concreteIndices(state, request); + if (concreteIndices == null || concreteIndices.length == 0) { + listener.onResponse(new OpenIndexResponse(true)); + return; + } OpenIndexClusterStateUpdateRequest updateRequest = new OpenIndexClusterStateUpdateRequest() .ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout()) .indices(concreteIndices); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/refresh/TransportShardRefreshAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/refresh/TransportShardRefreshAction.java index cf9f568195382..d1d8b4078b647 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/refresh/TransportShardRefreshAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/refresh/TransportShardRefreshAction.java @@ -24,13 +24,12 @@ import org.elasticsearch.action.support.replication.ReplicationResponse; import org.elasticsearch.action.support.replication.TransportReplicationAction; import org.elasticsearch.cluster.action.shard.ShardStateAction; -import org.elasticsearch.cluster.block.ClusterBlockLevel; +import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.shard.IndexShard; -import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -55,34 +54,21 @@ protected ReplicationResponse newResponseInstance() { } @Override - protected PrimaryResult shardOperationOnPrimary(BasicReplicationRequest shardRequest) { - IndexShard indexShard = indicesService.indexServiceSafe(shardRequest.shardId().getIndex()).getShard(shardRequest.shardId().id()); - indexShard.refresh("api"); - logger.trace("{} refresh request executed on primary", indexShard.shardId()); + protected PrimaryResult shardOperationOnPrimary(BasicReplicationRequest shardRequest, IndexShard primary) { + primary.refresh("api"); + logger.trace("{} refresh request executed on primary", primary.shardId()); return new PrimaryResult(shardRequest, new ReplicationResponse()); } @Override - protected 
ReplicaResult shardOperationOnReplica(BasicReplicationRequest request) { - final ShardId shardId = request.shardId(); - IndexShard indexShard = indicesService.indexServiceSafe(shardId.getIndex()).getShard(shardId.id()); - indexShard.refresh("api"); - logger.trace("{} refresh request executed on replica", indexShard.shardId()); + protected ReplicaResult shardOperationOnReplica(BasicReplicationRequest request, IndexShard replica) { + replica.refresh("api"); + logger.trace("{} refresh request executed on replica", replica.shardId()); return new ReplicaResult(); } @Override - protected ClusterBlockLevel globalBlockLevel() { - return ClusterBlockLevel.METADATA_WRITE; - } - - @Override - protected ClusterBlockLevel indexBlockLevel() { - return ClusterBlockLevel.METADATA_WRITE; - } - - @Override - protected boolean shouldExecuteReplication(Settings settings) { + protected boolean shouldExecuteReplication(IndexMetaData indexMetaData) { return true; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/Condition.java b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/Condition.java index 8d9b48f20008e..d6bfaf0a48cec 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/Condition.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/Condition.java @@ -20,7 +20,6 @@ package org.elasticsearch.action.admin.indices.rollover; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.io.stream.NamedWriteable; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.ObjectParser; @@ -32,8 +31,7 @@ */ public abstract class Condition implements NamedWriteable { - public static ObjectParser, ParseFieldMatcherSupplier> PARSER = - new ObjectParser<>("conditions", null); + public static ObjectParser, Void> PARSER = new ObjectParser<>("conditions", null); static { PARSER.declareString((conditions, s) -> conditions.add(new MaxAgeCondition(TimeValue.parseTimeValue(s, MaxAgeCondition.NAME))), @@ -49,7 +47,7 @@ protected Condition(String name) { this.name = name; } - public abstract Result evaluate(final Stats stats); + public abstract Result evaluate(Stats stats); @Override public final String toString() { diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequest.java index 854611658dfe5..4804bc577fc58 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequest.java @@ -18,7 +18,6 @@ */ package org.elasticsearch.action.admin.indices.rollover; -import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.admin.indices.create.CreateIndexRequest; @@ -26,16 +25,10 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.action.support.master.AcknowledgedRequest; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; -import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import 
org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.ObjectParser; -import org.elasticsearch.common.xcontent.XContentFactory; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentType; import java.io.IOException; import java.util.HashSet; @@ -50,23 +43,19 @@ */ public class RolloverRequest extends AcknowledgedRequest implements IndicesRequest { - public static ObjectParser PARSER = - new ObjectParser<>("conditions", null); + public static final ObjectParser PARSER = new ObjectParser<>("conditions", null); static { - PARSER.declareField((parser, request, parseFieldMatcherSupplier) -> - Condition.PARSER.parse(parser, request.conditions, parseFieldMatcherSupplier), + PARSER.declareField((parser, request, context) -> Condition.PARSER.parse(parser, request.conditions, null), new ParseField("conditions"), ObjectParser.ValueType.OBJECT); - PARSER.declareField((parser, request, parseFieldMatcherSupplier) -> - request.createIndexRequest.settings(parser.map()), + PARSER.declareField((parser, request, context) -> request.createIndexRequest.settings(parser.map()), new ParseField("settings"), ObjectParser.ValueType.OBJECT); - PARSER.declareField((parser, request, parseFieldMatcherSupplier) -> { + PARSER.declareField((parser, request, context) -> { for (Map.Entry mappingsEntry : parser.map().entrySet()) { request.createIndexRequest.mapping(mappingsEntry.getKey(), (Map) mappingsEntry.getValue()); } }, new ParseField("mappings"), ObjectParser.ValueType.OBJECT); - PARSER.declareField((parser, request, parseFieldMatcherSupplier) -> - request.createIndexRequest.aliases(parser.map()), + PARSER.declareField((parser, request, context) -> request.createIndexRequest.aliases(parser.map()), new ParseField("aliases"), ObjectParser.ValueType.OBJECT); } @@ -194,19 +183,6 @@ CreateIndexRequest getCreateIndexRequest() { return createIndexRequest; } - public void source(BytesReference source) { - XContentType xContentType = XContentFactory.xContentType(source); - if (xContentType != null) { - try (XContentParser parser = XContentFactory.xContent(xContentType).createParser(source)) { - PARSER.parse(parser, this, () -> ParseFieldMatcher.EMPTY); - } catch (IOException e) { - throw new ElasticsearchParseException("failed to parse source for rollover index", e); - } - } else { - throw new ElasticsearchParseException("failed to parse content type for rollover index source"); - } - } - /** * Sets the number of shard copies that should be active for creation of the * new rollover index to return. 
Defaults to {@link ActiveShardCount#DEFAULT}, which will diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverResponse.java index b495e3c6a0f32..8c1be3501a820 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverResponse.java @@ -22,7 +22,7 @@ import org.elasticsearch.action.ActionResponse; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -32,7 +32,7 @@ import java.util.Set; import java.util.stream.Collectors; -public final class RolloverResponse extends ActionResponse implements ToXContent { +public final class RolloverResponse extends ActionResponse implements ToXContentObject { private static final String NEW_INDEX = "new_index"; private static final String OLD_INDEX = "old_index"; @@ -157,6 +157,7 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.field(OLD_INDEX, oldIndex); builder.field(NEW_INDEX, newIndex); builder.field(ROLLED_OVER, rolledOver); @@ -168,6 +169,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(entry.getKey(), entry.getValue()); } builder.endObject(); + builder.endObject(); return builder; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverAction.java index eaf9025bf0440..2abe0dad74ee3 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverAction.java @@ -61,7 +61,7 @@ */ public class TransportRolloverAction extends TransportMasterNodeAction { - private static final Pattern INDEX_NAME_PATTERN = Pattern.compile("^.*-(\\d)+$"); + private static final Pattern INDEX_NAME_PATTERN = Pattern.compile("^.*-\\d+$"); private final MetaDataCreateIndexService createIndexService; private final MetaDataIndexAliasesService indexAliasesService; private final ActiveShardsObserver activeShardsObserver; @@ -106,23 +106,29 @@ protected void masterOperation(final RolloverRequest rolloverRequest, final Clus validate(metaData, rolloverRequest); final AliasOrIndex aliasOrIndex = metaData.getAliasAndIndexLookup().get(rolloverRequest.getAlias()); final IndexMetaData indexMetaData = aliasOrIndex.getIndices().get(0); + final String sourceProvidedName = indexMetaData.getSettings().get(IndexMetaData.SETTING_INDEX_PROVIDED_NAME, + indexMetaData.getIndex().getName()); final String sourceIndexName = indexMetaData.getIndex().getName(); + final String unresolvedName = (rolloverRequest.getNewIndexName() != null) + ? 
rolloverRequest.getNewIndexName() + : generateRolloverIndexName(sourceProvidedName, indexNameExpressionResolver); + final String rolloverIndexName = indexNameExpressionResolver.resolveDateMathExpression(unresolvedName); + MetaDataCreateIndexService.validateIndexName(rolloverIndexName, state); // will fail if the index already exists client.admin().indices().prepareStats(sourceIndexName).clear().setDocs(true).execute( new ActionListener() { @Override public void onResponse(IndicesStatsResponse statsResponse) { final Set conditionResults = evaluateConditions(rolloverRequest.getConditions(), - statsResponse.getTotal().getDocs(), metaData.index(sourceIndexName)); - final String rolloverIndexName = (rolloverRequest.getNewIndexName() != null) - ? rolloverRequest.getNewIndexName() - : generateRolloverIndexName(sourceIndexName); + metaData.index(sourceIndexName), statsResponse); + if (rolloverRequest.isDryRun()) { listener.onResponse( new RolloverResponse(sourceIndexName, rolloverIndexName, conditionResults, true, false, false, false)); return; } if (conditionResults.size() == 0 || conditionResults.stream().anyMatch(result -> result.matched)) { - CreateIndexClusterStateUpdateRequest updateRequest = prepareCreateIndexRequest(rolloverIndexName, rolloverRequest); + CreateIndexClusterStateUpdateRequest updateRequest = prepareCreateIndexRequest(unresolvedName, rolloverIndexName, + rolloverRequest); createIndexService.createIndex(updateRequest, ActionListener.wrap(createIndexClusterStateUpdateResponse -> { // switch the alias to point to the newly created index indexAliasesService.indicesAliases( @@ -145,7 +151,7 @@ public void onResponse(IndicesStatsResponse statsResponse) { } else { // conditions not met listener.onResponse( - new RolloverResponse(sourceIndexName, sourceIndexName, conditionResults, false, false, false, false) + new RolloverResponse(sourceIndexName, rolloverIndexName, conditionResults, false, false, false, false) ); } } @@ -170,14 +176,19 @@ static IndicesAliasesClusterStateUpdateRequest prepareRolloverAliasesUpdateReque } - static String generateRolloverIndexName(String sourceIndexName) { - if (INDEX_NAME_PATTERN.matcher(sourceIndexName).matches()) { + static String generateRolloverIndexName(String sourceIndexName, IndexNameExpressionResolver indexNameExpressionResolver) { + String resolvedName = indexNameExpressionResolver.resolveDateMathExpression(sourceIndexName); + final boolean isDateMath = sourceIndexName.equals(resolvedName) == false; + if (INDEX_NAME_PATTERN.matcher(resolvedName).matches()) { int numberIndex = sourceIndexName.lastIndexOf("-"); assert numberIndex != -1 : "no separator '-' found"; - int counter = Integer.parseInt(sourceIndexName.substring(numberIndex + 1)); - return String.join("-", sourceIndexName.substring(0, numberIndex), String.format(Locale.ROOT, "%06d", ++counter)); + int counter = Integer.parseInt(sourceIndexName.substring(numberIndex + 1, isDateMath ? sourceIndexName.length()-1 : + sourceIndexName.length())); + String newName = sourceIndexName.substring(0, numberIndex) + "-" + String.format(Locale.ROOT, "%06d", ++counter) + + (isDateMath ? 
">" : ""); + return newName; } else { - throw new IllegalArgumentException("index name [" + sourceIndexName + "] does not match pattern '^.*-(\\d)+$'"); + throw new IllegalArgumentException("index name [" + sourceIndexName + "] does not match pattern '^.*-\\d+$'"); } } @@ -190,6 +201,11 @@ static Set evaluateConditions(final Set conditions, .collect(Collectors.toSet()); } + static Set evaluateConditions(final Set conditions, final IndexMetaData metaData, + final IndicesStatsResponse statsResponse) { + return evaluateConditions(conditions, statsResponse.getPrimaries().getDocs(), metaData); + } + static void validate(MetaData metaData, RolloverRequest request) { final AliasOrIndex aliasOrIndex = metaData.getAliasAndIndexLookup().get(request.getAlias()); if (aliasOrIndex == null) { @@ -203,14 +219,14 @@ static void validate(MetaData metaData, RolloverRequest request) { } } - static CreateIndexClusterStateUpdateRequest prepareCreateIndexRequest(final String targetIndexName, + static CreateIndexClusterStateUpdateRequest prepareCreateIndexRequest(final String providedIndexName, final String targetIndexName, final RolloverRequest rolloverRequest) { final CreateIndexRequest createIndexRequest = rolloverRequest.getCreateIndexRequest(); createIndexRequest.cause("rollover_index"); createIndexRequest.index(targetIndexName); return new CreateIndexClusterStateUpdateRequest(createIndexRequest, - "rollover_index", targetIndexName, true) + "rollover_index", targetIndexName, providedIndexName, true) .ackTimeout(createIndexRequest.timeout()) .masterNodeTimeout(createIndexRequest.masterNodeTimeout()) .settings(createIndexRequest.settings()) diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/TransportUpdateSettingsAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/TransportUpdateSettingsAction.java index f9ebff0663617..8ae075c66340a 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/TransportUpdateSettingsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/TransportUpdateSettingsAction.java @@ -65,7 +65,10 @@ protected ClusterBlockException checkBlock(UpdateSettingsRequest request, Cluste if (globalBlock != null) { return globalBlock; } - if (request.settings().getAsMap().size() == 1 && IndexMetaData.INDEX_BLOCKS_METADATA_SETTING.exists(request.settings()) || IndexMetaData.INDEX_READ_ONLY_SETTING.exists(request.settings())) { + if (request.settings().size() == 1 && // we have to allow resetting these settings otherwise users can't unblock an index + IndexMetaData.INDEX_BLOCKS_METADATA_SETTING.exists(request.settings()) + || IndexMetaData.INDEX_READ_ONLY_SETTING.exists(request.settings()) + || IndexMetaData.INDEX_BLOCKS_READ_ONLY_ALLOW_DELETE_SETTING.exists(request.settings())) { return null; } return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, indexNameExpressionResolver.concreteIndexNames(state, request)); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequest.java index 494b9df7bd37b..80f3fb1a0718d 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequest.java @@ -70,7 +70,7 @@ public UpdateSettingsRequest(Settings settings, String... 
indices) { @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; - if (settings.getAsMap().isEmpty()) { + if (settings.isEmpty()) { validationException = addValidationError("no settings to update", validationException); } return validationException; @@ -121,13 +121,23 @@ public UpdateSettingsRequest settings(Settings.Builder settings) { } /** - * Sets the settings to be updated (either json/yaml/properties format) + * Sets the settings to be updated (either json or yaml format) + * @deprecated use {@link #settings(String, XContentType)} to avoid content type detection */ + @Deprecated public UpdateSettingsRequest settings(String source) { this.settings = Settings.builder().loadFromSource(source).build(); return this; } + /** + * Sets the settings to be updated (either json or yaml format) + */ + public UpdateSettingsRequest settings(String source, XContentType xContentType) { + this.settings = Settings.builder().loadFromSource(source, xContentType).build(); + return this; + } + /** * Returns true iff the settings update should only add but not update settings. If the setting already exists * it should not be overwritten by this update. The default is false @@ -146,14 +156,14 @@ public UpdateSettingsRequest setPreserveExisting(boolean preserveExisting) { } /** - * Sets the settings to be updated (either json/yaml/properties format) + * Sets the settings to be updated (either json or yaml format) */ @SuppressWarnings("unchecked") public UpdateSettingsRequest settings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - settings(builder.string()); + settings(builder.string(), builder.contentType()); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequestBuilder.java index 36dfbf3b2d49b..a9cecbfc5a434 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequestBuilder.java @@ -23,6 +23,7 @@ import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentType; import java.util.Map; @@ -70,15 +71,25 @@ public UpdateSettingsRequestBuilder setSettings(Settings.Builder settings) { } /** - * Sets the settings to be updated (either json/yaml/properties format) + * Sets the settings to be updated (either json or yaml format) + * @deprecated use {@link #setSettings(String, XContentType)} to avoid content type detection */ + @Deprecated public UpdateSettingsRequestBuilder setSettings(String source) { request.settings(source); return this; } /** - * Sets the settings to be updated (either json/yaml/properties format) + * Sets the settings to be updated (either json or yaml format) + */ + public UpdateSettingsRequestBuilder setSettings(String source, XContentType xContentType) { + request.settings(source, xContentType); + return this; + } + + /** + * Sets the settings to be updated */ public UpdateSettingsRequestBuilder setSettings(Map source) { request.settings(source); diff --git 
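// Illustrative usage (not part of the patch): the new settings(String, XContentType)
// overload lets callers state the payload's content type explicitly, which is why the
// auto-detecting settings(String) variant is now deprecated. Only methods visible in
// this patch are used; the JSON body is an example.
import org.elasticsearch.action.admin.indices.settings.put.UpdateSettingsRequest;
import org.elasticsearch.common.xcontent.XContentType;

class TypedUpdateSettingsSketch {
    static UpdateSettingsRequest withJsonSettings(UpdateSettingsRequest request) {
        return request.settings("{\"index.number_of_replicas\": 1}", XContentType.JSON);
    }
}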
a/core/src/main/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoresResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoresResponse.java index f1df6d53e185d..3657e3272658f 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoresResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoresResponse.java @@ -207,7 +207,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(Fields.ALLOCATED, allocationStatus.value()); if (storeException != null) { builder.startObject(Fields.STORE_EXCEPTION); - ElasticsearchException.toXContent(builder, params, storeException); + ElasticsearchException.generateThrowableXContent(builder, params, storeException); builder.endObject(); } return builder; diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java index e13578d66de63..27b89cd8c6297 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java @@ -29,7 +29,6 @@ import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.cluster.health.ClusterHealthStatus; import org.elasticsearch.cluster.health.ClusterShardHealth; -import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.node.DiscoveryNodes; @@ -155,7 +154,7 @@ private class InternalAsyncFetch extends AsyncShardFetch responses, List failures) { + protected synchronized void processAsyncFetch(List responses, List failures, long fetchingRound) { fetchResponses.add(new Response(shardId, responses, failures)); if (expectedOps.countDown()) { finish(); @@ -226,7 +225,7 @@ public class Response { private final List responses; private final List failures; - public Response(ShardId shardId, List responses, List failures) { + Response(ShardId shardId, List responses, List failures) { this.shardId = shardId; this.responses = responses; this.failures = failures; diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkRequest.java index 9cb60415a12cc..faa0a63c54dcf 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkRequest.java @@ -18,7 +18,6 @@ */ package org.elasticsearch.action.admin.indices.shrink; -import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.admin.indices.create.CreateIndexRequest; @@ -26,15 +25,9 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.action.support.master.AcknowledgedRequest; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; -import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import 
org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ObjectParser; -import org.elasticsearch.common.xcontent.XContentFactory; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentType; import java.io.IOException; import java.util.Objects; @@ -46,14 +39,11 @@ */ public class ShrinkRequest extends AcknowledgedRequest implements IndicesRequest { - public static ObjectParser PARSER = - new ObjectParser<>("shrink_request", null); + public static final ObjectParser PARSER = new ObjectParser<>("shrink_request", null); static { - PARSER.declareField((parser, request, parseFieldMatcherSupplier) -> - request.getShrinkIndexRequest().settings(parser.map()), + PARSER.declareField((parser, request, context) -> request.getShrinkIndexRequest().settings(parser.map()), new ParseField("settings"), ObjectParser.ValueType.OBJECT); - PARSER.declareField((parser, request, parseFieldMatcherSupplier) -> - request.getShrinkIndexRequest().aliases(parser.map()), + PARSER.declareField((parser, request, context) -> request.getShrinkIndexRequest().aliases(parser.map()), new ParseField("aliases"), ObjectParser.ValueType.OBJECT); } @@ -152,17 +142,4 @@ public void setWaitForActiveShards(ActiveShardCount waitForActiveShards) { public void setWaitForActiveShards(final int waitForActiveShards) { setWaitForActiveShards(ActiveShardCount.from(waitForActiveShards)); } - - public void source(BytesReference source) { - XContentType xContentType = XContentFactory.xContentType(source); - if (xContentType != null) { - try (XContentParser parser = XContentFactory.xContent(xContentType).createParser(source)) { - PARSER.parse(parser, this, () -> ParseFieldMatcher.EMPTY); - } catch (IOException e) { - throw new ElasticsearchParseException("failed to parse source for shrink index", e); - } - } else { - throw new ElasticsearchParseException("failed to parse content type for shrink index source"); - } - } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkResponse.java index e7ad0afe3aa17..0c5149f6bf353 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkResponse.java @@ -25,7 +25,7 @@ public final class ShrinkResponse extends CreateIndexResponse { ShrinkResponse() { } - ShrinkResponse(boolean acknowledged, boolean shardsAcked) { - super(acknowledged, shardsAcked); + ShrinkResponse(boolean acknowledged, boolean shardsAcked, String index) { + super(acknowledged, shardsAcked, index); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/TransportShrinkAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/TransportShrinkAction.java index 4667f1e98255a..2555299709cda 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/TransportShrinkAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/TransportShrinkAction.java @@ -91,8 +91,13 @@ public void onResponse(IndicesStatsResponse indicesStatsResponse) { IndexShardStats shard = indicesStatsResponse.getIndex(sourceIndex).getIndexShards().get(i); return shard == null ? 
null : shard.getPrimary().getDocs(); }, indexNameExpressionResolver); - createIndexService.createIndex(updateRequest, ActionListener.wrap(response -> - listener.onResponse(new ShrinkResponse(response.isAcknowledged(), response.isShardsAcked())), listener::onFailure)); + createIndexService.createIndex( + updateRequest, + ActionListener.wrap(response -> + listener.onResponse(new ShrinkResponse(response.isAcknowledged(), response.isShardsAcked(), updateRequest.index())), + listener::onFailure + ) + ); } @Override @@ -104,10 +109,10 @@ public void onFailure(Exception e) { } // static for unittesting this method - static CreateIndexClusterStateUpdateRequest prepareCreateIndexRequest(final ShrinkRequest shrinkReqeust, final ClusterState state + static CreateIndexClusterStateUpdateRequest prepareCreateIndexRequest(final ShrinkRequest shrinkRequest, final ClusterState state , final IntFunction perShardDocStats, IndexNameExpressionResolver indexNameExpressionResolver) { - final String sourceIndex = indexNameExpressionResolver.resolveDateMathExpression(shrinkReqeust.getSourceIndex()); - final CreateIndexRequest targetIndex = shrinkReqeust.getShrinkIndexRequest(); + final String sourceIndex = indexNameExpressionResolver.resolveDateMathExpression(shrinkRequest.getSourceIndex()); + final CreateIndexRequest targetIndex = shrinkRequest.getShrinkIndexRequest(); final String targetIndexName = indexNameExpressionResolver.resolveDateMathExpression(targetIndex.index()); final IndexMetaData metaData = state.metaData().index(sourceIndex); final Settings targetIndexSettings = Settings.builder().put(targetIndex.settings()) @@ -131,13 +136,16 @@ static CreateIndexClusterStateUpdateRequest prepareCreateIndexRequest(final Shri } } + if (IndexMetaData.INDEX_ROUTING_PARTITION_SIZE_SETTING.exists(targetIndexSettings)) { + throw new IllegalArgumentException("cannot provide a routing partition size value when shrinking an index"); + } targetIndex.cause("shrink_index"); Settings.Builder settingsBuilder = Settings.builder().put(targetIndexSettings); settingsBuilder.put("index.number_of_shards", numShards); targetIndex.settings(settingsBuilder); return new CreateIndexClusterStateUpdateRequest(targetIndex, - "shrink_index", targetIndexName, true) + "shrink_index", targetIndex.index(), targetIndexName, true) // mappings are updated on the node when merging in the shards, this prevents race-conditions since all mapping must be // applied once we took the snapshot and if somebody fucks things up and switches the index read/write and adds docs we miss // the mappings for everything is corrupted and hard to debug diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/CommonStats.java b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/CommonStats.java index ce90858f49a9d..b5e91ddf2a104 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/CommonStats.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/CommonStats.java @@ -46,6 +46,9 @@ import org.elasticsearch.search.suggest.completion.CompletionStats; import java.io.IOException; +import java.util.Arrays; +import java.util.Objects; +import java.util.stream.Stream; public class CommonStats implements Writeable, ToXContent { @@ -225,45 +228,19 @@ public CommonStats(IndicesQueryCache indicesQueryCache, IndexShard indexShard, C } public CommonStats(StreamInput in) throws IOException { - if (in.readBoolean()) { - docs = DocsStats.readDocStats(in); - } - if (in.readBoolean()) { - store = 
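// Illustrative note (not part of the patch): the guard added in prepareCreateIndexRequest
// above makes shrink fail fast if the target settings try to set a routing partition size.
// A hedged sketch of acceptable target settings; the concrete setting keys are assumptions.
import org.elasticsearch.common.settings.Settings;

class ShrinkTargetSettingsSketch {
    static Settings shrinkTargetSettings() {
        return Settings.builder()
                .put("index.number_of_replicas", 1) // ordinary overrides are still allowed
                .build();
        // Adding "index.routing_partition_size" here would now be rejected with
        // "cannot provide a routing partition size value when shrinking an index".
    }
}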
StoreStats.readStoreStats(in); - } - if (in.readBoolean()) { - indexing = IndexingStats.readIndexingStats(in); - } - if (in.readBoolean()) { - get = GetStats.readGetStats(in); - } - if (in.readBoolean()) { - search = SearchStats.readSearchStats(in); - } - if (in.readBoolean()) { - merge = MergeStats.readMergeStats(in); - } - if (in.readBoolean()) { - refresh = RefreshStats.readRefreshStats(in); - } - if (in.readBoolean()) { - flush = FlushStats.readFlushStats(in); - } - if (in.readBoolean()) { - warmer = WarmerStats.readWarmerStats(in); - } - if (in.readBoolean()) { - queryCache = QueryCacheStats.readQueryCacheStats(in); - } - if (in.readBoolean()) { - fieldData = FieldDataStats.readFieldDataStats(in); - } - if (in.readBoolean()) { - completion = CompletionStats.readCompletionStats(in); - } - if (in.readBoolean()) { - segments = SegmentsStats.readSegmentsStats(in); - } + docs = in.readOptionalStreamable(DocsStats::new); + store = in.readOptionalStreamable(StoreStats::new); + indexing = in.readOptionalStreamable(IndexingStats::new); + get = in.readOptionalStreamable(GetStats::new); + search = in.readOptionalStreamable(SearchStats::new); + merge = in.readOptionalStreamable(MergeStats::new); + refresh = in.readOptionalStreamable(RefreshStats::new); + flush = in.readOptionalStreamable(FlushStats::new); + warmer = in.readOptionalStreamable(WarmerStats::new); + queryCache = in.readOptionalStreamable(QueryCacheStats::new); + fieldData = in.readOptionalStreamable(FieldDataStats::new); + completion = in.readOptionalStreamable(CompletionStats::new); + segments = in.readOptionalStreamable(SegmentsStats::new); translog = in.readOptionalStreamable(TranslogStats::new); requestCache = in.readOptionalStreamable(RequestCacheStats::new); recoveryStats = in.readOptionalStreamable(RecoveryStats::new); @@ -271,84 +248,19 @@ public CommonStats(StreamInput in) throws IOException { @Override public void writeTo(StreamOutput out) throws IOException { - if (docs == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - docs.writeTo(out); - } - if (store == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - store.writeTo(out); - } - if (indexing == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - indexing.writeTo(out); - } - if (get == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - get.writeTo(out); - } - if (search == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - search.writeTo(out); - } - if (merge == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - merge.writeTo(out); - } - if (refresh == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - refresh.writeTo(out); - } - if (flush == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - flush.writeTo(out); - } - if (warmer == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - warmer.writeTo(out); - } - if (queryCache == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - queryCache.writeTo(out); - } - if (fieldData == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - fieldData.writeTo(out); - } - if (completion == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - completion.writeTo(out); - } - if (segments == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - segments.writeTo(out); - } + out.writeOptionalStreamable(docs); + out.writeOptionalStreamable(store); + 
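// Illustrative sketch (not part of the patch): the serialization refactor above swaps the
// hand-rolled "boolean flag, then maybe the value" pattern for the existing
// readOptionalStreamable/writeOptionalStreamable helpers. Shown for one nullable stats
// field; the DocsStats import path is assumed.
import java.io.IOException;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.index.shard.DocsStats;

class OptionalStreamableSketch {
    // before: if (in.readBoolean()) { docs = DocsStats.readDocStats(in); }
    static DocsStats readDocs(StreamInput in) throws IOException {
        return in.readOptionalStreamable(DocsStats::new); // null when the flag was false
    }

    // before: write the boolean flag by hand, then the value if present
    static void writeDocs(StreamOutput out, DocsStats docs) throws IOException {
        out.writeOptionalStreamable(docs); // handles null fields itself
    }
}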
out.writeOptionalStreamable(indexing); + out.writeOptionalStreamable(get); + out.writeOptionalStreamable(search); + out.writeOptionalStreamable(merge); + out.writeOptionalStreamable(refresh); + out.writeOptionalStreamable(flush); + out.writeOptionalStreamable(warmer); + out.writeOptionalStreamable(queryCache); + out.writeOptionalStreamable(fieldData); + out.writeOptionalStreamable(completion); + out.writeOptionalStreamable(segments); out.writeOptionalStreamable(translog); out.writeOptionalStreamable(requestCache); out.writeOptionalStreamable(recoveryStats); @@ -590,53 +502,12 @@ public ByteSizeValue getTotalMemory() { // note, requires a wrapping object @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - if (docs != null) { - docs.toXContent(builder, params); - } - if (store != null) { - store.toXContent(builder, params); - } - if (indexing != null) { - indexing.toXContent(builder, params); - } - if (get != null) { - get.toXContent(builder, params); - } - if (search != null) { - search.toXContent(builder, params); - } - if (merge != null) { - merge.toXContent(builder, params); - } - if (refresh != null) { - refresh.toXContent(builder, params); - } - if (flush != null) { - flush.toXContent(builder, params); - } - if (warmer != null) { - warmer.toXContent(builder, params); - } - if (queryCache != null) { - queryCache.toXContent(builder, params); - } - if (fieldData != null) { - fieldData.toXContent(builder, params); - } - if (completion != null) { - completion.toXContent(builder, params); - } - if (segments != null) { - segments.toXContent(builder, params); - } - if (translog != null) { - translog.toXContent(builder, params); - } - if (requestCache != null) { - requestCache.toXContent(builder, params); - } - if (recoveryStats != null) { - recoveryStats.toXContent(builder, params); + final Stream stream = Arrays.stream(new ToXContent[] { + docs, store, indexing, get, search, merge, refresh, flush, warmer, queryCache, + fieldData, completion, segments, translog, requestCache, recoveryStats}) + .filter(Objects::nonNull); + for (ToXContent toXContent : ((Iterable)stream::iterator)) { + toXContent.toXContent(builder, params); } return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/IndicesStatsResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/IndicesStatsResponse.java index 2caa0da956942..26c28da1ae63b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/IndicesStatsResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/IndicesStatsResponse.java @@ -137,29 +137,25 @@ public CommonStats getPrimaries() { @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - shards = new ShardStats[in.readVInt()]; - for (int i = 0; i < shards.length; i++) { - shards[i] = ShardStats.readShardStats(in); - } + shards = in.readArray(ShardStats::readShardStats, (size) -> new ShardStats[size]); } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - out.writeVInt(shards.length); - for (ShardStats shard : shards) { - shard.writeTo(out); - } + out.writeArray(shards); } @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - String level = params.param("level", "indices"); - boolean isLevelValid = "indices".equalsIgnoreCase(level) || "shards".equalsIgnoreCase(level) || "cluster".equalsIgnoreCase(level); + final String level = 
params.param("level", "indices"); + final boolean isLevelValid = + "cluster".equalsIgnoreCase(level) || "indices".equalsIgnoreCase(level) || "shards".equalsIgnoreCase(level); if (!isLevelValid) { - return builder; + throw new IllegalArgumentException("level parameter must be one of [cluster] or [indices] or [shards] but was [" + level + "]"); } + builder.startObject("_all"); builder.startObject("primaries"); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java index 5bc6ce8106459..794df89d233aa 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java @@ -24,6 +24,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; +import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.engine.CommitStats; @@ -31,9 +32,7 @@ import java.io.IOException; -/** - */ -public class ShardStats implements Streamable, ToXContent { +public class ShardStats implements Streamable, Writeable, ToXContent { private ShardRouting shardRouting; private CommonStats commonStats; @Nullable diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/template/delete/DeleteIndexTemplateResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/template/delete/DeleteIndexTemplateResponse.java index 5c2a2b166bc3c..9519f0f9fcf4f 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/template/delete/DeleteIndexTemplateResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/template/delete/DeleteIndexTemplateResponse.java @@ -32,7 +32,7 @@ public class DeleteIndexTemplateResponse extends AcknowledgedResponse { DeleteIndexTemplateResponse() { } - DeleteIndexTemplateResponse(boolean acknowledged) { + protected DeleteIndexTemplateResponse(boolean acknowledged) { super(acknowledged); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/template/get/GetIndexTemplatesResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/template/get/GetIndexTemplatesResponse.java index 7693419df436d..3c5fb36d6c6aa 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/template/get/GetIndexTemplatesResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/template/get/GetIndexTemplatesResponse.java @@ -23,17 +23,16 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; import java.util.ArrayList; -import java.util.HashMap; import java.util.List; -import java.util.Map; import static java.util.Collections.singletonMap; -public class GetIndexTemplatesResponse extends ActionResponse implements ToXContent { +public class GetIndexTemplatesResponse extends ActionResponse implements ToXContentObject { private List indexTemplates; @@ -54,7 +53,7 @@ public void readFrom(StreamInput in) throws IOException { int size = in.readVInt(); indexTemplates = new ArrayList<>(size); for (int i = 0 ; 
i < size ; i++) { - indexTemplates.add(0, IndexTemplateMetaData.Builder.readFrom(in)); + indexTemplates.add(0, IndexTemplateMetaData.readFrom(in)); } } @@ -70,10 +69,11 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { params = new ToXContent.DelegatingMapParams(singletonMap("reduce_mappings", "true"), params); - + builder.startObject(); for (IndexTemplateMetaData indexTemplateMetaData : getIndexTemplates()) { IndexTemplateMetaData.Builder.toXContent(indexTemplateMetaData, builder, params); } + builder.endObject(); return builder; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequest.java index 76c665c7b8fef..ccb02c664acc9 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequest.java @@ -20,6 +20,7 @@ import org.elasticsearch.ElasticsearchGenerationException; import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.admin.indices.alias.Alias; @@ -33,6 +34,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentHelper; @@ -41,9 +43,11 @@ import org.elasticsearch.common.xcontent.support.XContentMapValues; import java.io.IOException; +import java.io.UncheckedIOException; import java.util.HashMap; import java.util.HashSet; import java.util.Map; +import java.util.Objects; import java.util.Set; import static org.elasticsearch.action.ValidateActions.addValidationError; @@ -74,6 +78,8 @@ public class PutIndexTemplateRequest extends MasterNodeRequest customs = new HashMap<>(); + private Integer version; + public PutIndexTemplateRequest() { } @@ -129,9 +135,18 @@ public int order() { return this.order; } + public PutIndexTemplateRequest version(Integer version) { + this.version = version; + return this; + } + + public Integer version() { + return this.version; + } + /** * Set to true to force only creation, not an update of an index template. If it already - * exists, it will fail with an {@link org.elasticsearch.indices.IndexTemplateAlreadyExistsException}. + * exists, it will fail with an {@link IllegalArgumentException}. */ public PutIndexTemplateRequest create(boolean create) { this.create = create; @@ -159,21 +174,31 @@ public PutIndexTemplateRequest settings(Settings.Builder settings) { } /** - * The settings to create the index template with (either json/yaml/properties format). + * The settings to create the index template with (either json/yaml format). + * @deprecated use {@link #settings(String, XContentType)} */ + @Deprecated public PutIndexTemplateRequest settings(String source) { this.settings = Settings.builder().loadFromSource(source).build(); return this; } /** - * The settings to crete the index template with (either json/yaml/properties format). 
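// Illustrative sketch (not part of the patch): responses converted to ToXContentObject,
// such as RolloverResponse and GetIndexTemplatesResponse above, now emit a complete,
// self-contained object by opening and closing it themselves. The field name below is
// only an example.
import java.io.IOException;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;

class SelfContainedResponseSketch implements ToXContentObject {
    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        builder.startObject();               // the response owns its enclosing object now
        builder.field("acknowledged", true);
        builder.endObject();
        return builder;
    }
}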
+ * The settings to create the index template with (either json/yaml format). + */ + public PutIndexTemplateRequest settings(String source, XContentType xContentType) { + this.settings = Settings.builder().loadFromSource(source, xContentType).build(); + return this; + } + + /** + * The settings to create the index template with (either json or yaml format). */ public PutIndexTemplateRequest settings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - settings(builder.string()); + settings(builder.string(), XContentType.JSON); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -189,10 +214,23 @@ public Settings settings() { * * @param type The mapping type * @param source The mapping source + * @deprecated use {@link #mapping(String, String, XContentType)} */ + @Deprecated public PutIndexTemplateRequest mapping(String type, String source) { - mappings.put(type, source); - return this; + XContentType xContentType = XContentFactory.xContentType(source); + return mapping(type, source, xContentType); + } + + /** + * Adds mapping that will be added when the index gets created. + * + * @param type The mapping type + * @param source The mapping source + * @param xContentType The type of content contained within the source + */ + public PutIndexTemplateRequest mapping(String type, String source, XContentType xContentType) { + return mapping(type, new BytesArray(source), xContentType); } /** @@ -214,12 +252,24 @@ public String cause() { * @param source The mapping source */ public PutIndexTemplateRequest mapping(String type, XContentBuilder source) { + return mapping(type, source.bytes(), source.contentType()); + } + + /** + * Adds mapping that will be added when the index gets created. 
+ * + * @param type The mapping type + * @param source The mapping source + * @param xContentType the source content type + */ + public PutIndexTemplateRequest mapping(String type, BytesReference source, XContentType xContentType) { + Objects.requireNonNull(xContentType); try { - mappings.put(type, source.string()); + mappings.put(type, XContentHelper.convertToJson(source, false, false, xContentType)); + return this; } catch (IOException e) { - throw new IllegalArgumentException("Failed to build json for mapping request", e); + throw new UncheckedIOException("failed to convert source to json", e); } - return this; } /** @@ -236,7 +286,7 @@ public PutIndexTemplateRequest mapping(String type, Map source) try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - return mapping(type, builder.string()); + return mapping(type, builder); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -260,7 +310,7 @@ public Map mappings() { */ public PutIndexTemplateRequest source(XContentBuilder templateBuilder) { try { - return source(templateBuilder.bytes()); + return source(templateBuilder.bytes(), templateBuilder.contentType()); } catch (Exception e) { throw new IllegalArgumentException("Failed to build json for template request", e); } @@ -278,16 +328,23 @@ public PutIndexTemplateRequest source(Map templateSource) { template(entry.getValue().toString()); } else if (name.equals("order")) { order(XContentMapValues.nodeIntegerValue(entry.getValue(), order())); + } else if ("version".equals(name)) { + if ((entry.getValue() instanceof Integer) == false) { + throw new IllegalArgumentException("Malformed [version] value, should be an integer"); + } + version((Integer)entry.getValue()); } else if (name.equals("settings")) { if (!(entry.getValue() instanceof Map)) { - throw new IllegalArgumentException("Malformed settings section, should include an inner object"); + throw new IllegalArgumentException("Malformed [settings] section, should include an inner object"); } settings((Map) entry.getValue()); } else if (name.equals("mappings")) { Map mappings = (Map) entry.getValue(); for (Map.Entry entry1 : mappings.entrySet()) { if (!(entry1.getValue() instanceof Map)) { - throw new IllegalArgumentException("Malformed mappings section for type [" + entry1.getKey() + "], should include an inner object describing the mapping"); + throw new IllegalArgumentException( + "Malformed [mappings] section for type [" + entry1.getKey() + + "], should include an inner object describing the mapping"); } mapping(entry1.getKey(), (Map) entry1.getValue()); } @@ -310,18 +367,25 @@ public PutIndexTemplateRequest source(Map templateSource) { /** * The template source definition. + * @deprecated use {@link #source(String, XContentType)} */ + @Deprecated public PutIndexTemplateRequest source(String templateSource) { - try (XContentParser parser = XContentFactory.xContent(templateSource).createParser(templateSource)) { - return source(parser.mapOrdered()); - } catch (Exception e) { - throw new IllegalArgumentException("failed to parse template source [" + templateSource + "]", e); - } + return source(XContentHelper.convertToMap(XContentFactory.xContent(templateSource), templateSource, true)); } /** * The template source definition. 
*/ + public PutIndexTemplateRequest source(String templateSource, XContentType xContentType) { + return source(XContentHelper.convertToMap(xContentType.xContent(), templateSource, true)); + } + + /** + * The template source definition. + * @deprecated use {@link #source(byte[], XContentType)} + */ + @Deprecated public PutIndexTemplateRequest source(byte[] source) { return source(source, 0, source.length); } @@ -329,23 +393,40 @@ public PutIndexTemplateRequest source(byte[] source) { /** * The template source definition. */ + public PutIndexTemplateRequest source(byte[] source, XContentType xContentType) { + return source(source, 0, source.length, xContentType); + } + + /** + * The template source definition. + * @deprecated use {@link #source(byte[], int, int, XContentType)} + */ + @Deprecated public PutIndexTemplateRequest source(byte[] source, int offset, int length) { - try (XContentParser parser = XContentFactory.xContent(source, offset, length).createParser(source, offset, length)) { - return source(parser.mapOrdered()); - } catch (IOException e) { - throw new IllegalArgumentException("failed to parse template source", e); - } + return source(new BytesArray(source, offset, length)); } /** * The template source definition. */ + public PutIndexTemplateRequest source(byte[] source, int offset, int length, XContentType xContentType) { + return source(new BytesArray(source, offset, length), xContentType); + } + + /** + * The template source definition. + * @deprecated use {@link #source(BytesReference, XContentType)} + */ + @Deprecated public PutIndexTemplateRequest source(BytesReference source) { - try (XContentParser parser = XContentFactory.xContent(source).createParser(source)) { - return source(parser.mapOrdered()); - } catch (IOException e) { - throw new IllegalArgumentException("failed to parse template source", e); - } + return source(XContentHelper.convertToMap(source, true).v2()); + } + + /** + * The template source definition. 
+ */ + public PutIndexTemplateRequest source(BytesReference source, XContentType xContentType) { + return source(XContentHelper.convertToMap(source, true, xContentType).v2()); } public PutIndexTemplateRequest custom(IndexMetaData.Custom custom) { @@ -393,7 +474,8 @@ public PutIndexTemplateRequest aliases(String source) { * Sets the aliases that will be associated with the index when it gets created */ public PutIndexTemplateRequest aliases(BytesReference source) { - try (XContentParser parser = XContentHelper.createParser(source)) { + // EMPTY is safe here because we never call namedObject + try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, source)) { //move to the first alias parser.nextToken(); while ((parser.nextToken()) != XContentParser.Token.END_OBJECT) { @@ -437,7 +519,14 @@ public void readFrom(StreamInput in) throws IOException { settings = readSettingsFromStream(in); int size = in.readVInt(); for (int i = 0; i < size; i++) { - mappings.put(in.readString(), in.readString()); + final String type = in.readString(); + String mappingSource = in.readString(); + if (in.getVersion().before(Version.V_5_3_0)) { + // we do not know the incoming type so convert it if needed + mappingSource = + XContentHelper.convertToJson(new BytesArray(mappingSource), false, false, XContentFactory.xContentType(mappingSource)); + } + mappings.put(type, mappingSource); } int customSize = in.readVInt(); for (int i = 0; i < customSize; i++) { @@ -449,6 +538,7 @@ public void readFrom(StreamInput in) throws IOException { for (int i = 0; i < aliasesSize; i++) { aliases.add(Alias.read(in)); } + version = in.readOptionalVInt(); } @Override @@ -474,5 +564,6 @@ public void writeTo(StreamOutput out) throws IOException { for (Alias alias : aliases) { alias.writeTo(out); } + out.writeOptionalVInt(version); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequestBuilder.java index 5207cacf6b080..0e4e17a956849 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequestBuilder.java @@ -24,13 +24,15 @@ import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentType; import java.util.Map; /** * */ -public class PutIndexTemplateRequestBuilder extends MasterNodeOperationRequestBuilder { +public class PutIndexTemplateRequestBuilder + extends MasterNodeOperationRequestBuilder { public PutIndexTemplateRequestBuilder(ElasticsearchClient client, PutIndexTemplateAction action) { super(client, action, new PutIndexTemplateRequest()); @@ -56,9 +58,17 @@ public PutIndexTemplateRequestBuilder setOrder(int order) { return this; } + /** + * Sets the optional version of this template. + */ + public PutIndexTemplateRequestBuilder setVersion(Integer version) { + request.version(version); + return this; + } + /** * Set to true to force only creation, not an update of an index template. If it already - * exists, it will fail with an {@link org.elasticsearch.indices.IndexTemplateAlreadyExistsException}. + * exists, it will fail with an {@link IllegalArgumentException}. 
*/ public PutIndexTemplateRequestBuilder setCreate(boolean create) { request.create(create); @@ -82,15 +92,25 @@ public PutIndexTemplateRequestBuilder setSettings(Settings.Builder settings) { } /** - * The settings to crete the index template with (either json/yaml/properties format) + * The settings to crete the index template with (either json or yaml format) + * @deprecated use {@link #setSettings(String, XContentType)} */ + @Deprecated public PutIndexTemplateRequestBuilder setSettings(String source) { request.settings(source); return this; } /** - * The settings to crete the index template with (either json/yaml/properties format) + * The settings to crete the index template with (either json or yaml format) + */ + public PutIndexTemplateRequestBuilder setSettings(String source, XContentType xContentType) { + request.settings(source, xContentType); + return this; + } + + /** + * The settings to crete the index template with (either json or yaml format) */ public PutIndexTemplateRequestBuilder setSettings(Map source) { request.settings(source); @@ -102,12 +122,26 @@ public PutIndexTemplateRequestBuilder setSettings(Map source) { * * @param type The mapping type * @param source The mapping source + * @deprecated use {@link #addMapping(String, String, XContentType)} */ + @Deprecated public PutIndexTemplateRequestBuilder addMapping(String type, String source) { request.mapping(type, source); return this; } + /** + * Adds mapping that will be added when the index template gets created. + * + * @param type The mapping type + * @param source The mapping source + * @param xContentType The type/format of the source + */ + public PutIndexTemplateRequestBuilder addMapping(String type, String source, XContentType xContentType) { + request.mapping(type, source, xContentType); + return this; + } + /** * A specialized simplified mapping source method, takes the form of simple properties definition: * ("field1", "type=string,store=true"). @@ -208,7 +242,9 @@ public PutIndexTemplateRequestBuilder setSource(Map templateSource) { /** * The template source definition. + * @deprecated use {@link #setSource(BytesReference, XContentType)} */ + @Deprecated public PutIndexTemplateRequestBuilder setSource(String templateSource) { request.source(templateSource); return this; @@ -217,6 +253,16 @@ public PutIndexTemplateRequestBuilder setSource(String templateSource) { /** * The template source definition. */ + public PutIndexTemplateRequestBuilder setSource(BytesReference templateSource, XContentType xContentType) { + request.source(templateSource, xContentType); + return this; + } + + /** + * The template source definition. + * @deprecated use {@link #setSource(BytesReference, XContentType)} + */ + @Deprecated public PutIndexTemplateRequestBuilder setSource(BytesReference templateSource) { request.source(templateSource); return this; @@ -224,7 +270,9 @@ public PutIndexTemplateRequestBuilder setSource(BytesReference templateSource) { /** * The template source definition. + * @deprecated use {@link #setSource(byte[], XContentType)} */ + @Deprecated public PutIndexTemplateRequestBuilder setSource(byte[] templateSource) { request.source(templateSource); return this; @@ -233,8 +281,26 @@ public PutIndexTemplateRequestBuilder setSource(byte[] templateSource) { /** * The template source definition. */ + public PutIndexTemplateRequestBuilder setSource(byte[] templateSource, XContentType xContentType) { + request.source(templateSource, xContentType); + return this; + } + + /** + * The template source definition. 
+ * @deprecated use {@link #setSource(byte[], int, int, XContentType)} + */ + @Deprecated public PutIndexTemplateRequestBuilder setSource(byte[] templateSource, int offset, int length) { request.source(templateSource, offset, length); return this; } + + /** + * The template source definition. + */ + public PutIndexTemplateRequestBuilder setSource(byte[] templateSource, int offset, int length, XContentType xContentType) { + request.source(templateSource, offset, length, xContentType); + return this; + } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/TransportPutIndexTemplateAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/TransportPutIndexTemplateAction.java index 64a01b90ac479..77746b395e15e 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/TransportPutIndexTemplateAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/TransportPutIndexTemplateAction.java @@ -86,7 +86,8 @@ protected void masterOperation(final PutIndexTemplateRequest request, final Clus .aliases(request.aliases()) .customs(request.customs()) .create(request.create()) - .masterTimeout(request.masterNodeTimeout()), + .masterTimeout(request.masterNodeTimeout()) + .version(request.version()), new MetaDataIndexTemplateService.PutListener() { @Override diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/QueryExplanation.java b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/QueryExplanation.java index ea145ba15bc52..042857a097eb5 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/QueryExplanation.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/QueryExplanation.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.indices.validate.query; +import org.elasticsearch.Version; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; @@ -30,20 +31,26 @@ */ public class QueryExplanation implements Streamable { + public static final int RANDOM_SHARD = -1; + private String index; - + + private int shard = RANDOM_SHARD; + private boolean valid; - + private String explanation; - + private String error; QueryExplanation() { - + } - - public QueryExplanation(String index, boolean valid, String explanation, String error) { + + public QueryExplanation(String index, int shard, boolean valid, String explanation, + String error) { this.index = index; + this.shard = shard; this.valid = valid; this.explanation = explanation; this.error = error; @@ -53,6 +60,10 @@ public String getIndex() { return this.index; } + public int getShard() { + return this.shard; + } + public boolean isValid() { return this.valid; } @@ -68,6 +79,11 @@ public String getExplanation() { @Override public void readFrom(StreamInput in) throws IOException { index = in.readString(); + if (in.getVersion().onOrAfter(Version.V_5_4_0)) { + shard = in.readInt(); + } else { + shard = RANDOM_SHARD; + } valid = in.readBoolean(); explanation = in.readOptionalString(); error = in.readOptionalString(); @@ -76,6 +92,9 @@ public void readFrom(StreamInput in) throws IOException { @Override public void writeTo(StreamOutput out) throws IOException { out.writeString(index); + if (out.getVersion().onOrAfter(Version.V_5_4_0)) { + out.writeInt(shard); + } out.writeBoolean(valid); out.writeOptionalString(explanation); 
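// Illustrative usage (not part of the patch): the new optional template version plus the
// typed settings/mapping overloads added above. Only methods visible in this patch are
// used; the pattern, settings, and mapping bodies are made-up examples.
import org.elasticsearch.action.admin.indices.template.put.PutIndexTemplateRequest;
import org.elasticsearch.common.xcontent.XContentType;

class PutTemplateSketch {
    static PutIndexTemplateRequest logsTemplate() {
        PutIndexTemplateRequest request = new PutIndexTemplateRequest();
        request.template("logs-*");   // index pattern the template covers
        request.version(2);           // optional, user-managed template version
        request.settings("{\"index.number_of_shards\": 1}", XContentType.JSON);
        request.mapping("doc", "{\"properties\":{\"message\":{\"type\":\"text\"}}}", XContentType.JSON);
        return request;
    }
}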
out.writeOptionalString(error); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ShardValidateQueryRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ShardValidateQueryRequest.java index 831ef6e106015..2ccf2f1bd3e21 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ShardValidateQueryRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ShardValidateQueryRequest.java @@ -20,14 +20,15 @@ package org.elasticsearch.action.admin.indices.validate.query; import org.elasticsearch.action.support.broadcast.BroadcastShardRequest; -import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.search.internal.AliasFilter; import java.io.IOException; +import java.util.Objects; /** * Internal validate request executed directly against a specific index shard. @@ -39,21 +40,18 @@ public class ShardValidateQueryRequest extends BroadcastShardRequest { private boolean explain; private boolean rewrite; private long nowInMillis; - - @Nullable - private String[] filteringAliases; + private AliasFilter filteringAliases; public ShardValidateQueryRequest() { - } - ShardValidateQueryRequest(ShardId shardId, @Nullable String[] filteringAliases, ValidateQueryRequest request) { + public ShardValidateQueryRequest(ShardId shardId, AliasFilter filteringAliases, ValidateQueryRequest request) { super(shardId, request); this.query = request.query(); this.types = request.types(); this.explain = request.explain(); this.rewrite = request.rewrite(); - this.filteringAliases = filteringAliases; + this.filteringAliases = Objects.requireNonNull(filteringAliases, "filteringAliases must not be null"); this.nowInMillis = request.nowInMillis; } @@ -69,11 +67,11 @@ public boolean explain() { return this.explain; } - public boolean rewrite() { - return this.rewrite; + public boolean rewrite() { + return this.rewrite; } - public String[] filteringAliases() { + public AliasFilter filteringAliases() { return filteringAliases; } @@ -93,14 +91,7 @@ public void readFrom(StreamInput in) throws IOException { types[i] = in.readString(); } } - int aliasesSize = in.readVInt(); - if (aliasesSize > 0) { - filteringAliases = new String[aliasesSize]; - for (int i = 0; i < aliasesSize; i++) { - filteringAliases[i] = in.readString(); - } - } - + filteringAliases = new AliasFilter(in); explain = in.readBoolean(); rewrite = in.readBoolean(); nowInMillis = in.readVLong(); @@ -110,20 +101,11 @@ public void readFrom(StreamInput in) throws IOException { public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); out.writeNamedWriteable(query); - out.writeVInt(types.length); for (String type : types) { out.writeString(type); } - if (filteringAliases != null) { - out.writeVInt(filteringAliases.length); - for (String alias : filteringAliases) { - out.writeString(alias); - } - } else { - out.writeVInt(0); - } - + filteringAliases.writeTo(out); out.writeBoolean(explain); out.writeBoolean(rewrite); out.writeVLong(nowInMillis); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java 
index d1405e92e1c0d..7f9f69a3aae39 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java @@ -19,7 +19,6 @@ package org.elasticsearch.action.admin.indices.validate.query; -import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; import org.elasticsearch.action.ActionListener; @@ -38,17 +37,12 @@ import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.Randomness; import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.util.BigArrays; -import org.elasticsearch.index.IndexService; -import org.elasticsearch.index.engine.Engine; +import org.elasticsearch.index.query.ParsedQuery; import org.elasticsearch.index.query.QueryShardException; -import org.elasticsearch.index.shard.IndexShard; -import org.elasticsearch.indices.IndicesService; -import org.elasticsearch.script.ScriptService; import org.elasticsearch.search.SearchService; -import org.elasticsearch.search.fetch.FetchPhase; -import org.elasticsearch.search.internal.DefaultSearchContext; +import org.elasticsearch.search.internal.AliasFilter; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.internal.ShardSearchLocalRequest; import org.elasticsearch.tasks.Task; @@ -67,25 +61,15 @@ */ public class TransportValidateQueryAction extends TransportBroadcastAction { - private final IndicesService indicesService; - - private final ScriptService scriptService; - - private final BigArrays bigArrays; - - private final FetchPhase fetchPhase; + private final SearchService searchService; @Inject public TransportValidateQueryAction(Settings settings, ThreadPool threadPool, ClusterService clusterService, - TransportService transportService, IndicesService indicesService, ScriptService scriptService, - BigArrays bigArrays, ActionFilters actionFilters, - IndexNameExpressionResolver indexNameExpressionResolver, FetchPhase fetchPhase) { + TransportService transportService, SearchService searchService, ActionFilters actionFilters, + IndexNameExpressionResolver indexNameExpressionResolver) { super(settings, ValidateQueryAction.NAME, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver, ValidateQueryRequest::new, ShardValidateQueryRequest::new, ThreadPool.Names.SEARCH); - this.indicesService = indicesService; - this.scriptService = scriptService; - this.bigArrays = bigArrays; - this.fetchPhase = fetchPhase; + this.searchService = searchService; } @Override @@ -96,8 +80,9 @@ protected void doExecute(Task task, ValidateQueryRequest request, ActionListener @Override protected ShardValidateQueryRequest newShardRequest(int numShards, ShardRouting shard, ValidateQueryRequest request) { - String[] filteringAliases = indexNameExpressionResolver.filteringAliases(clusterService.state(), shard.getIndexName(), request.indices()); - return new ShardValidateQueryRequest(shard.shardId(), filteringAliases, request); + final AliasFilter aliasFilter = searchService.buildAliasFilter(clusterService.state(), shard.getIndexName(), + request.indices()); + return new ShardValidateQueryRequest(shard.shardId(), aliasFilter, request); } @Override @@ -107,8 +92,14 @@ protected ShardValidateQueryResponse 
newShardResponse() { @Override protected GroupShardsIterator shards(ClusterState clusterState, ValidateQueryRequest request, String[] concreteIndices) { - // Hard-code routing to limit request to a single shard, but still, randomize it... - Map> routingMap = indexNameExpressionResolver.resolveSearchRouting(clusterState, Integer.toString(Randomness.get().nextInt(1000)), request.indices()); + final String routing; + if (request.allShards()) { + routing = null; + } else { + // Random routing to limit request to a single shard + routing = Integer.toString(Randomness.get().nextInt(1000)); + } + Map> routingMap = indexNameExpressionResolver.resolveSearchRouting(clusterState, routing, request.indices()); return clusterService.operationRouting().searchShards(clusterState, concreteIndices, routingMap, "_local"); } @@ -142,12 +133,13 @@ protected ValidateQueryResponse newResponse(ValidateQueryRequest request, Atomic } else { ShardValidateQueryResponse validateQueryResponse = (ShardValidateQueryResponse) shardResponse; valid = valid && validateQueryResponse.isValid(); - if (request.explain() || request.rewrite()) { + if (request.explain() || request.rewrite() || request.allShards()) { if (queryExplanations == null) { queryExplanations = new ArrayList<>(); } queryExplanations.add(new QueryExplanation( validateQueryResponse.getIndex(), + request.allShards() ? validateQueryResponse.getShardId().getId() : QueryExplanation.RANDOM_SHARD, validateQueryResponse.isValid(), validateQueryResponse.getExplanation(), validateQueryResponse.getError() @@ -160,30 +152,19 @@ protected ValidateQueryResponse newResponse(ValidateQueryRequest request, Atomic } @Override - protected ShardValidateQueryResponse shardOperation(ShardValidateQueryRequest request) { - IndexService indexService = indicesService.indexServiceSafe(request.shardId().getIndex()); - IndexShard indexShard = indexService.getShard(request.shardId().id()); - + protected ShardValidateQueryResponse shardOperation(ShardValidateQueryRequest request) throws IOException { boolean valid; String explanation = null; String error = null; - Engine.Searcher searcher = indexShard.acquireSearcher("validate_query"); - - DefaultSearchContext searchContext = new DefaultSearchContext(0, - new ShardSearchLocalRequest(request.types(), request.nowInMillis(), request.filteringAliases()), null, searcher, - indexService, indexShard, scriptService, bigArrays, threadPool.estimatedTimeInMillisCounter(), - parseFieldMatcher, SearchService.NO_TIMEOUT, fetchPhase); - SearchContext.setCurrent(searchContext); + ShardSearchLocalRequest shardSearchLocalRequest = new ShardSearchLocalRequest(request.shardId(), request.types(), + request.nowInMillis(), request.filteringAliases()); + SearchContext searchContext = searchService.createSearchContext(shardSearchLocalRequest, SearchService.NO_TIMEOUT, null); try { - searchContext.parsedQuery(searchContext.getQueryShardContext().toQuery(request.query())); - searchContext.preProcess(); - + ParsedQuery parsedQuery = searchContext.getQueryShardContext().toQuery(request.query()); + searchContext.parsedQuery(parsedQuery); + searchContext.preProcess(request.rewrite()); valid = true; - if (request.rewrite()) { - explanation = getRewrittenQuery(searcher.searcher(), searchContext.query()); - } else if (request.explain()) { - explanation = searchContext.filteredQuery().query().toString(); - } + explanation = explain(searchContext, request.rewrite()); } catch (QueryShardException|ParsingException e) { valid = false; error = e.getDetailedMessage(); @@ -191,19 
+172,18 @@ protected ShardValidateQueryResponse shardOperation(ShardValidateQueryRequest re valid = false; error = e.getMessage(); } finally { - searchContext.close(); - SearchContext.removeCurrent(); + Releasables.close(searchContext); } return new ShardValidateQueryResponse(request.shardId(), valid, explanation, error); } - private String getRewrittenQuery(IndexSearcher searcher, Query query) throws IOException { - Query queryRewrite = searcher.rewrite(query); - if (queryRewrite instanceof MatchNoDocsQuery) { - return query.toString(); + private String explain(SearchContext context, boolean rewritten) throws IOException { + Query query = context.query(); + if (rewritten && query instanceof MatchNoDocsQuery) { + return context.parsedQuery().query().toString(); } else { - return queryRewrite.toString(); + return query.toString(); } } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ValidateQueryRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ValidateQueryRequest.java index 41ef37ad621f1..5953a5548c465 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ValidateQueryRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ValidateQueryRequest.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.indices.validate.query; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.ValidateActions; import org.elasticsearch.action.support.IndicesOptions; @@ -43,6 +44,7 @@ public class ValidateQueryRequest extends BroadcastRequest private boolean explain; private boolean rewrite; + private boolean allShards; private String[] types = Strings.EMPTY_ARRAY; @@ -125,6 +127,20 @@ public boolean rewrite() { return rewrite; } + /** + * Indicates whether the query should be validated on all shards instead of one random shard + */ + public void allShards(boolean allShards) { + this.allShards = allShards; + } + + /** + * Indicates whether the query should be validated on all shards instead of one random shard + */ + public boolean allShards() { + return allShards; + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); @@ -138,6 +154,9 @@ public void readFrom(StreamInput in) throws IOException { } explain = in.readBoolean(); rewrite = in.readBoolean(); + if (in.getVersion().onOrAfter(Version.V_5_4_0)) { + allShards = in.readBoolean(); + } } @Override @@ -150,11 +169,14 @@ public void writeTo(StreamOutput out) throws IOException { } out.writeBoolean(explain); out.writeBoolean(rewrite); + if (out.getVersion().onOrAfter(Version.V_5_4_0)) { + out.writeBoolean(allShards); + } } @Override public String toString() { return "[" + Arrays.toString(indices) + "]" + Arrays.toString(types) + ", query[" + query + "], explain:" + explain + - ", rewrite:" + rewrite; + ", rewrite:" + rewrite + ", all_shards:" + allShards; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ValidateQueryRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ValidateQueryRequestBuilder.java index bfee7ec6b99e6..9240357ad7184 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ValidateQueryRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ValidateQueryRequestBuilder.java @@ -67,4 +67,12 @@ public ValidateQueryRequestBuilder 
setRewrite(boolean rewrite) { request.rewrite(rewrite); return this; } + + /** + * Indicates whether the query should be validated on all shards + */ + public ValidateQueryRequestBuilder setAllShards(boolean rewrite) { + request.allShards(rewrite); + return this; + } } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BackoffPolicy.java b/core/src/main/java/org/elasticsearch/action/bulk/BackoffPolicy.java index bc8f8c347ab0e..81084e22377e5 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BackoffPolicy.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BackoffPolicy.java @@ -171,7 +171,7 @@ private static final class ConstantBackoff extends BackoffPolicy { private final int numberOfElements; - public ConstantBackoff(TimeValue delay, int numberOfElements) { + ConstantBackoff(TimeValue delay, int numberOfElements) { assert numberOfElements >= 0; this.delay = delay; this.numberOfElements = numberOfElements; @@ -188,7 +188,7 @@ private static final class ConstantBackoffIterator implements Iterator private final Iterator delegate; private final Runnable onBackoff; - public WrappedBackoffIterator(Iterator delegate, Runnable onBackoff) { + WrappedBackoffIterator(Iterator delegate, Runnable onBackoff) { this.delegate = delegate; this.onBackoff = onBackoff; } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkItemRequest.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkItemRequest.java index 760c5781aea0a..96c35d661c0bc 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkItemRequest.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkItemRequest.java @@ -19,16 +19,17 @@ package org.elasticsearch.action.bulk; -import org.elasticsearch.action.ActionRequest; +import org.elasticsearch.Version; +import org.elasticsearch.action.DocWriteRequest; +import org.elasticsearch.action.DocWriteResponse; import org.elasticsearch.action.IndicesRequest; -import org.elasticsearch.action.delete.DeleteRequest; -import org.elasticsearch.action.index.IndexRequest; -import org.elasticsearch.action.update.UpdateRequest; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import java.io.IOException; +import java.util.Objects; /** * @@ -36,15 +37,14 @@ public class BulkItemRequest implements Streamable { private int id; - private ActionRequest request; + private DocWriteRequest request; private volatile BulkItemResponse primaryResponse; - private volatile boolean ignoreOnReplica; BulkItemRequest() { } - public BulkItemRequest(int id, ActionRequest request) { + public BulkItemRequest(int id, DocWriteRequest request) { assert request instanceof IndicesRequest; this.id = id; this.request = request; @@ -54,14 +54,13 @@ public int id() { return id; } - public ActionRequest request() { + public DocWriteRequest request() { return request; } public String index() { - IndicesRequest indicesRequest = (IndicesRequest) request; - assert indicesRequest.indices().length == 1; - return indicesRequest.indices()[0]; + assert request.indices().length == 1; + return request.indices()[0]; } BulkItemResponse getPrimaryResponse() { @@ -72,15 +71,34 @@ void setPrimaryResponse(BulkItemResponse primaryResponse) { this.primaryResponse = primaryResponse; } - /** - * Marks this request to be ignored and *not* execute on a replica. 
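As an aside on the `allShards` flag added to the validate-query API earlier in this change: a minimal usage sketch, assuming a 5.4+ transport `Client`; the index name, query and result handling are placeholders, and only `setAllShards` is new relative to the pre-existing builder API.

```java
import org.elasticsearch.action.admin.indices.validate.query.QueryExplanation;
import org.elasticsearch.action.admin.indices.validate.query.ValidateQueryResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;

// Sketch only: validate (and explain) a query on every shard instead of one random shard.
static void validateOnAllShards(Client client) {
    ValidateQueryResponse response = client.admin().indices()
            .prepareValidateQuery("twitter")                      // placeholder index
            .setQuery(QueryBuilders.termQuery("user", "kimchy"))  // placeholder query
            .setExplain(true)
            .setAllShards(true)  // new flag introduced in this change
            .get();
    for (QueryExplanation explanation : response.getQueryExplanation()) {
        System.out.println(explanation.getIndex() + ": " + explanation.getExplanation());
    }
}
```

When the flag is set, the response carries one `QueryExplanation` per shard rather than a single entry for a randomly routed shard, matching the routing change in `TransportValidateQueryAction` above.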
- */ - void setIgnoreOnReplica() { - this.ignoreOnReplica = true; - } boolean isIgnoreOnReplica() { - return ignoreOnReplica; + return primaryResponse != null && + (primaryResponse.isFailed() || primaryResponse.getResponse().getResult() == DocWriteResponse.Result.NOOP); + } + + /** + * Abort this request, and store a {@link org.elasticsearch.action.bulk.BulkItemResponse.Failure} response. + * + * @param index The concrete index that was resolved for this request + * @param cause The cause of the rejection (may not be null) + * @throws IllegalStateException If a response already exists for this request + */ + public void abort(String index, Exception cause) { + if (primaryResponse == null) { + final BulkItemResponse.Failure failure = new BulkItemResponse.Failure(index, request.type(), request.id(), + Objects.requireNonNull(cause), true); + setPrimaryResponse(new BulkItemResponse(id, request.opType(), failure)); + } else { + assert primaryResponse.isFailed() && primaryResponse.getFailure().isAborted() + : "response [" + Strings.toString(primaryResponse) + "]; cause [" + cause + "]"; + if (primaryResponse.isFailed() && primaryResponse.getFailure().isAborted()) { + primaryResponse.getFailure().getCause().addSuppressed(cause); + } else { + throw new IllegalStateException( + "aborting item that with response [" + primaryResponse + "] that was previously processed", cause); + } + } } public static BulkItemRequest readBulkItem(StreamInput in) throws IOException { @@ -92,33 +110,30 @@ public static BulkItemRequest readBulkItem(StreamInput in) throws IOException { @Override public void readFrom(StreamInput in) throws IOException { id = in.readVInt(); - byte type = in.readByte(); - if (type == 0) { - request = new IndexRequest(); - } else if (type == 1) { - request = new DeleteRequest(); - } else if (type == 2) { - request = new UpdateRequest(); - } - request.readFrom(in); + request = DocWriteRequest.readDocumentRequest(in); if (in.readBoolean()) { primaryResponse = BulkItemResponse.readBulkItem(in); + // This is a bwc layer for 6.0 which no longer mutates the requests with these + // Since 5.x still requires it we do it here. Note that these are harmless + // as both operations are idempotent. This is something we rely on and assert on + // in InternalEngine.planIndexingAsNonPrimary() + request.version(primaryResponse.getVersion()); + request.versionType(request.versionType().versionTypeForReplicationAndRecovery()); + } + if (in.getVersion().before(Version.V_5_6_0)) { + boolean ignoreOnReplica = in.readBoolean(); + assert ignoreOnReplica == isIgnoreOnReplica() : + "ignoreOnReplica mismatch. 
wire [" + ignoreOnReplica + "], ours [" + isIgnoreOnReplica() + "]"; } - ignoreOnReplica = in.readBoolean(); } @Override public void writeTo(StreamOutput out) throws IOException { out.writeVInt(id); - if (request instanceof IndexRequest) { - out.writeByte((byte) 0); - } else if (request instanceof DeleteRequest) { - out.writeByte((byte) 1); - } else if (request instanceof UpdateRequest) { - out.writeByte((byte) 2); - } - request.writeTo(out); + DocWriteRequest.writeDocumentRequest(out, request); out.writeOptionalStreamable(primaryResponse); - out.writeBoolean(ignoreOnReplica); + if (out.getVersion().before(Version.V_5_6_0)) { + out.writeBoolean(isIgnoreOnReplica()); + } } } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkItemResponse.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkItemResponse.java index ad45ace84c909..69b9180fdef91 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkItemResponse.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkItemResponse.java @@ -21,27 +21,40 @@ import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ExceptionsHelper; +import org.elasticsearch.Version; +import org.elasticsearch.action.DocWriteRequest.OpType; import org.elasticsearch.action.DocWriteResponse; import org.elasticsearch.action.delete.DeleteResponse; import org.elasticsearch.action.index.IndexResponse; import org.elasticsearch.action.update.UpdateResponse; +import org.elasticsearch.common.CheckedConsumer; import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.StatusToXContent; +import org.elasticsearch.common.xcontent.StatusToXContentObject; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.rest.RestStatus; import java.io.IOException; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; +import static org.elasticsearch.common.xcontent.XContentParserUtils.throwUnknownField; + /** * Represents a single item response for an action executed as part of the bulk API. Holds the index/type/id * of the relevant action, and if it has failed or not (with the failure message incase it failed). 
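The serialization changes above follow the usual wire-compatibility pattern: a field is only read or written when the stream version says the other node knows about it. A hypothetical sketch of that pattern (class and field names invented for illustration, version chosen arbitrarily):

```java
import java.io.IOException;
import org.elasticsearch.Version;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Streamable;

// Hypothetical request with a flag that only exists from 5.4.0 onwards.
class ExampleRequest implements Streamable {
    private boolean newFlag;

    @Override
    public void readFrom(StreamInput in) throws IOException {
        if (in.getVersion().onOrAfter(Version.V_5_4_0)) {
            newFlag = in.readBoolean();   // older senders never wrote it; keep the default
        }
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        if (out.getVersion().onOrAfter(Version.V_5_4_0)) {
            out.writeBoolean(newFlag);    // don't send bytes an old node cannot read
        }
    }
}
```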
*/ -public class BulkItemResponse implements Streamable, StatusToXContent { +public class BulkItemResponse implements Streamable, StatusToXContentObject { + + private static final String _INDEX = "_index"; + private static final String _TYPE = "_type"; + private static final String _ID = "_id"; + private static final String STATUS = "status"; + private static final String ERROR = "error"; @Override public RestStatus status() { @@ -50,29 +63,97 @@ public RestStatus status() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(opType); + builder.startObject(); + builder.startObject(opType.getLowercase()); if (failure == null) { - response.toXContent(builder, params); - builder.field(Fields.STATUS, response.status().getStatus()); + response.innerToXContent(builder, params); + builder.field(STATUS, response.status().getStatus()); } else { - builder.field(Fields._INDEX, failure.getIndex()); - builder.field(Fields._TYPE, failure.getType()); - builder.field(Fields._ID, failure.getId()); - builder.field(Fields.STATUS, failure.getStatus().getStatus()); - builder.startObject(Fields.ERROR); - ElasticsearchException.toXContent(builder, params, failure.getCause()); + builder.field(_INDEX, failure.getIndex()); + builder.field(_TYPE, failure.getType()); + builder.field(_ID, failure.getId()); + builder.field(STATUS, failure.getStatus().getStatus()); + builder.startObject(ERROR); + ElasticsearchException.generateThrowableXContent(builder, params, failure.getCause()); builder.endObject(); } builder.endObject(); + builder.endObject(); return builder; } - static final class Fields { - static final String _INDEX = "_index"; - static final String _TYPE = "_type"; - static final String _ID = "_id"; - static final String STATUS = "status"; - static final String ERROR = "error"; + /** + * Reads a {@link BulkItemResponse} from a {@link XContentParser}. + * + * @param parser the {@link XContentParser} + * @param id the id to assign to the parsed {@link BulkItemResponse}. It is usually the index of + * the item in the {@link BulkResponse#getItems} array. 
+ */ + public static BulkItemResponse fromXContent(XContentParser parser, int id) throws IOException { + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.currentToken(), parser::getTokenLocation); + + XContentParser.Token token = parser.nextToken(); + ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, parser::getTokenLocation); + + String currentFieldName = parser.currentName(); + token = parser.nextToken(); + + final OpType opType = OpType.fromString(currentFieldName); + ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation); + + DocWriteResponse.Builder builder = null; + CheckedConsumer itemParser = null; + + if (opType == OpType.INDEX || opType == OpType.CREATE) { + final IndexResponse.Builder indexResponseBuilder = new IndexResponse.Builder(); + builder = indexResponseBuilder; + itemParser = (indexParser) -> IndexResponse.parseXContentFields(indexParser, indexResponseBuilder); + + } else if (opType == OpType.UPDATE) { + final UpdateResponse.Builder updateResponseBuilder = new UpdateResponse.Builder(); + builder = updateResponseBuilder; + itemParser = (updateParser) -> UpdateResponse.parseXContentFields(updateParser, updateResponseBuilder); + + } else if (opType == OpType.DELETE) { + final DeleteResponse.Builder deleteResponseBuilder = new DeleteResponse.Builder(); + builder = deleteResponseBuilder; + itemParser = (deleteParser) -> DeleteResponse.parseXContentFields(deleteParser, deleteResponseBuilder); + } else { + throwUnknownField(currentFieldName, parser.getTokenLocation()); + } + + RestStatus status = null; + ElasticsearchException exception = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } + + if (ERROR.equals(currentFieldName)) { + if (token == XContentParser.Token.START_OBJECT) { + exception = ElasticsearchException.fromXContent(parser); + } + } else if (STATUS.equals(currentFieldName)) { + if (token == XContentParser.Token.VALUE_NUMBER) { + status = RestStatus.fromCode(parser.intValue()); + } + } else { + itemParser.accept(parser); + } + } + + ensureExpectedToken(XContentParser.Token.END_OBJECT, token, parser::getTokenLocation); + token = parser.nextToken(); + ensureExpectedToken(XContentParser.Token.END_OBJECT, token, parser::getTokenLocation); + + BulkItemResponse bulkItemResponse; + if (exception != null) { + Failure failure = new Failure(builder.getShardId().getIndexName(), builder.getType(), builder.getId(), exception, status); + bulkItemResponse = new BulkItemResponse(id, opType, failure); + } else { + bulkItemResponse = new BulkItemResponse(id, opType, builder.build()); + } + return bulkItemResponse; } /** @@ -88,15 +169,29 @@ public static class Failure implements Writeable, ToXContent { private final String index; private final String type; private final String id; - private final Throwable cause; + private final Exception cause; private final RestStatus status; + private final boolean aborted; + + public Failure(String index, String type, String id, Exception cause) { + this(index, type, id, cause, ExceptionsHelper.status(cause)); + } + + Failure(String index, String type, String id, Exception cause, RestStatus status) { + this(index, type, id, cause, status, false); + } - public Failure(String index, String type, String id, Throwable cause) { + Failure(String index, String type, String id, Exception cause, boolean aborted) { + this(index, type, id, cause, 
ExceptionsHelper.status(cause), aborted); + } + + Failure(String index, String type, String id, Exception cause, RestStatus status, boolean aborted) { this.index = index; this.type = type; this.id = id; this.cause = cause; - this.status = ExceptionsHelper.status(cause); + this.status = status; + this.aborted = aborted; } /** @@ -108,6 +203,11 @@ public Failure(StreamInput in) throws IOException { id = in.readOptionalString(); cause = in.readException(); status = ExceptionsHelper.status(cause); + if (supportsAbortedFlag(in.getVersion())) { + aborted = in.readBoolean(); + } else { + aborted = false; + } } @Override @@ -116,8 +216,16 @@ public void writeTo(StreamOutput out) throws IOException { out.writeString(getType()); out.writeOptionalString(getId()); out.writeException(getCause()); + if (supportsAbortedFlag(out.getVersion())) { + out.writeBoolean(aborted); + } } + private static Version V_6_0_0_BETA_2 = Version.fromString("6.0.0-beta2"); + private static boolean supportsAbortedFlag(Version version) { + // The "aborted" flag was added for 5.5.3 and 5.6.0, but was not in 6.0.0-beta2 + return version.after(V_6_0_0_BETA_2) || (version.major == 5 && version.onOrAfter(Version.V_5_5_3)); + } /** * The index name of the action. @@ -157,10 +265,19 @@ public RestStatus getStatus() { /** * The actual cause of the failure. */ - public Throwable getCause() { + public Exception getCause() { return cause; } + /** + * Whether this failure is the result of an abort. + * If {@code true}, the request to which this failure relates should never be retried, regardless of the {@link #getCause() cause}. + * @see BulkItemRequest#abort(String, Exception) + */ + public boolean isAborted() { + return aborted; + } + @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.field(INDEX_FIELD, index); @@ -169,7 +286,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(ID_FIELD, id); } builder.startObject(CAUSE_FIELD); - ElasticsearchException.toXContent(builder, params, cause); + ElasticsearchException.generateThrowableXContent(builder, params, cause); builder.endObject(); builder.field(STATUS_FIELD, status.getStatus()); return builder; @@ -177,13 +294,13 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws @Override public String toString() { - return Strings.toString(this, true); + return Strings.toString(this); } } private int id; - private String opType; + private OpType opType; private DocWriteResponse response; @@ -193,13 +310,13 @@ public String toString() { } - public BulkItemResponse(int id, String opType, DocWriteResponse response) { + public BulkItemResponse(int id, OpType opType, DocWriteResponse response) { this.id = id; this.opType = opType; this.response = response; } - public BulkItemResponse(int id, String opType, Failure failure) { + public BulkItemResponse(int id, OpType opType, Failure failure) { this.id = id; this.opType = opType; this.failure = failure; @@ -215,7 +332,7 @@ public int getItemId() { /** * The operation type ("index", "create" or "delete"). 
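A practical consequence of the new `aborted` flag on `Failure`: callers that resubmit failed bulk items can skip aborted ones, since those must never be retried. A small sketch, assuming an existing `bulkResponse` variable (retry mechanics omitted):

```java
// Sketch only: separate retryable failures from aborted ones.
for (BulkItemResponse item : bulkResponse) {
    if (item.isFailed()) {
        if (item.getFailure().isAborted()) {
            continue; // see BulkItemRequest#abort: never retry, regardless of the cause
        }
        // non-aborted failure: candidate for retry or error reporting
    }
}
```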
*/ - public String getOpType() { + public OpType getOpType() { return this.opType; } @@ -300,8 +417,11 @@ public static BulkItemResponse readBulkItem(StreamInput in) throws IOException { @Override public void readFrom(StreamInput in) throws IOException { id = in.readVInt(); - opType = in.readString(); - + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + opType = OpType.fromId(in.readByte()); + } else { + opType = OpType.fromString(in.readString()); + } byte type = in.readByte(); if (type == 0) { response = new IndexResponse(); @@ -322,8 +442,11 @@ public void readFrom(StreamInput in) throws IOException { @Override public void writeTo(StreamOutput out) throws IOException { out.writeVInt(id); - out.writeString(opType); - + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + out.writeByte(opType.getId()); + } else { + out.writeString(opType.getLowercase()); + } if (response == null) { out.writeByte((byte) 2); } else { diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java index c54b3588c178d..02ba2f39048bb 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java @@ -19,7 +19,8 @@ package org.elasticsearch.action.bulk; -import org.elasticsearch.action.ActionRequest; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.delete.DeleteRequest; import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.client.Client; @@ -28,16 +29,14 @@ import org.elasticsearch.common.unit.ByteSizeUnit; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.util.concurrent.EsExecutors; -import org.elasticsearch.common.util.concurrent.FutureUtils; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.threadpool.ThreadPool; import java.io.Closeable; import java.util.Objects; -import java.util.concurrent.Executors; -import java.util.concurrent.ScheduledFuture; -import java.util.concurrent.ScheduledThreadPoolExecutor; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; +import java.util.function.BiConsumer; /** * A bulk processor is a thread safe bulk processing class, allowing to easily set when to "flush" a new bulk request @@ -65,7 +64,7 @@ public interface Listener { /** * Callback after a failed execution of bulk request. - * + *
* Note that in case an instance of InterruptedException is passed, which means that request processing has been * cancelled externally, the thread's interruption status has been restored prior to calling this method. */ @@ -77,10 +76,10 @@ public interface Listener { */ public static class Builder { - private final Client client; + private final BiConsumer> consumer; private final Listener listener; + private final ThreadPool threadPool; - private String name; private int concurrentRequests = 1; private int bulkActions = 1000; private ByteSizeValue bulkSize = new ByteSizeValue(5, ByteSizeUnit.MB); @@ -91,17 +90,10 @@ public static class Builder { * Creates a builder of bulk processor with the client to use and the listener that will be used * to be notified on the completion of bulk requests. */ - public Builder(Client client, Listener listener) { - this.client = client; + public Builder(BiConsumer> consumer, Listener listener, ThreadPool threadPool) { + this.consumer = consumer; this.listener = listener; - } - - /** - * Sets an optional name to identify this bulk processor. - */ - public Builder setName(String name) { - this.name = name; - return this; + this.threadPool = threadPool; } /** @@ -163,7 +155,7 @@ public Builder setBackoffPolicy(BackoffPolicy backoffPolicy) { * Builds a new bulk processor. */ public BulkProcessor build() { - return new BulkProcessor(client, backoffPolicy, listener, name, concurrentRequests, bulkActions, bulkSize, flushInterval); + return new BulkProcessor(consumer, backoffPolicy, listener, concurrentRequests, bulkActions, bulkSize, flushInterval, threadPool); } } @@ -171,15 +163,13 @@ public static Builder builder(Client client, Listener listener) { Objects.requireNonNull(client, "client"); Objects.requireNonNull(listener, "listener"); - return new Builder(client, listener); + return new Builder(client::bulk, listener, client.threadPool()); } private final int bulkActions; private final long bulkSize; - - private final ScheduledThreadPoolExecutor scheduler; - private final ScheduledFuture scheduledFuture; + private final ThreadPool.Cancellable cancellableFlushTask; private final AtomicLong executionIdGen = new AtomicLong(); @@ -188,22 +178,21 @@ public static Builder builder(Client client, Listener listener) { private volatile boolean closed = false; - BulkProcessor(Client client, BackoffPolicy backoffPolicy, Listener listener, @Nullable String name, int concurrentRequests, int bulkActions, ByteSizeValue bulkSize, @Nullable TimeValue flushInterval) { + BulkProcessor(BiConsumer> consumer, BackoffPolicy backoffPolicy, Listener listener, + int concurrentRequests, int bulkActions, ByteSizeValue bulkSize, @Nullable TimeValue flushInterval, + ThreadPool threadPool) { this.bulkActions = bulkActions; - this.bulkSize = bulkSize.bytes(); - + this.bulkSize = bulkSize.getBytes(); this.bulkRequest = new BulkRequest(); - this.bulkRequestHandler = (concurrentRequests == 0) ? BulkRequestHandler.syncHandler(client, backoffPolicy, listener) : BulkRequestHandler.asyncHandler(client, backoffPolicy, listener, concurrentRequests); - if (flushInterval != null) { - this.scheduler = (ScheduledThreadPoolExecutor) Executors.newScheduledThreadPool(1, EsExecutors.daemonThreadFactory(client.settings(), (name != null ? 
"[" + name + "]" : "") + "bulk_processor")); - this.scheduler.setExecuteExistingDelayedTasksAfterShutdownPolicy(false); - this.scheduler.setContinueExistingPeriodicTasksAfterShutdownPolicy(false); - this.scheduledFuture = this.scheduler.scheduleWithFixedDelay(new Flush(), flushInterval.millis(), flushInterval.millis(), TimeUnit.MILLISECONDS); + if (concurrentRequests == 0) { + this.bulkRequestHandler = BulkRequestHandler.syncHandler(consumer, backoffPolicy, listener, threadPool); } else { - this.scheduler = null; - this.scheduledFuture = null; + this.bulkRequestHandler = BulkRequestHandler.asyncHandler(consumer, backoffPolicy, listener, threadPool, concurrentRequests); } + + // Start period flushing task after everything is setup + this.cancellableFlushTask = startFlushTask(flushInterval, threadPool); } /** @@ -213,20 +202,20 @@ public static Builder builder(Client client, Listener listener) { public void close() { try { awaitClose(0, TimeUnit.NANOSECONDS); - } catch(InterruptedException exc) { + } catch (InterruptedException exc) { Thread.currentThread().interrupt(); } } /** * Closes the processor. If flushing by time is enabled, then it's shutdown. Any remaining bulk actions are flushed. - * + *
* If concurrent requests are not enabled, returns {@code true} immediately. * If concurrent requests are enabled, waits for up to the specified timeout for all bulk requests to complete then returns {@code true}, * If the specified waiting time elapses before all bulk requests complete, {@code false} is returned. * * @param timeout The maximum time to wait for the bulk requests to complete - * @param unit The time unit of the {@code timeout} argument + * @param unit The time unit of the {@code timeout} argument * @return {@code true} if all bulk requests completed and {@code false} if the waiting time elapsed before all the bulk requests completed * @throws InterruptedException If the current thread is interrupted */ @@ -235,10 +224,9 @@ public synchronized boolean awaitClose(long timeout, TimeUnit unit) throws Inter return true; } closed = true; - if (this.scheduledFuture != null) { - FutureUtils.cancel(this.scheduledFuture); - this.scheduler.shutdown(); - } + + this.cancellableFlushTask.cancel(); + if (bulkRequest.numberOfActions() > 0) { execute(); } @@ -250,24 +238,24 @@ public synchronized boolean awaitClose(long timeout, TimeUnit unit) throws Inter * (for example, if no id is provided, one will be generated, or usage of the create flag). */ public BulkProcessor add(IndexRequest request) { - return add((ActionRequest) request); + return add((DocWriteRequest) request); } /** * Adds an {@link DeleteRequest} to the list of actions to execute. */ public BulkProcessor add(DeleteRequest request) { - return add((ActionRequest) request); + return add((DocWriteRequest) request); } /** * Adds either a delete or an index request. */ - public BulkProcessor add(ActionRequest request) { + public BulkProcessor add(DocWriteRequest request) { return add(request, null); } - public BulkProcessor add(ActionRequest request, @Nullable Object payload) { + public BulkProcessor add(DocWriteRequest request, @Nullable Object payload) { internalAdd(request, payload); return this; } @@ -282,22 +270,69 @@ protected void ensureOpen() { } } - private synchronized void internalAdd(ActionRequest request, @Nullable Object payload) { + private synchronized void internalAdd(DocWriteRequest request, @Nullable Object payload) { ensureOpen(); bulkRequest.add(request, payload); executeIfNeeded(); } + /** + * Adds the data from the bytes to be processed by the bulk processor + * @deprecated use {@link #add(BytesReference, String, String, XContentType)} instead to avoid content type auto-detection + */ + @Deprecated public BulkProcessor add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType) throws Exception { return add(data, defaultIndex, defaultType, null, null); } - public synchronized BulkProcessor add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, @Nullable String defaultPipeline, @Nullable Object payload) throws Exception { - bulkRequest.add(data, defaultIndex, defaultType, null, null, defaultPipeline, payload, true); + /** + * Adds the data from the bytes to be processed by the bulk processor + */ + public BulkProcessor add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, + XContentType xContentType) throws Exception { + return add(data, defaultIndex, defaultType, null, null, xContentType); + } + + /** + * Adds the data from the bytes to be processed by the bulk processor + * @deprecated use {@link #add(BytesReference, String, String, String, Object, XContentType)} instead to avoid content type + * auto-detection + */ + 
@Deprecated + public synchronized BulkProcessor add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, + @Nullable String defaultPipeline, @Nullable Object payload) throws Exception { + bulkRequest.add(data, defaultIndex, defaultType, null, null, null, defaultPipeline, payload, true); executeIfNeeded(); return this; } + /** + * Adds the data from the bytes to be processed by the bulk processor + */ + public synchronized BulkProcessor add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, + @Nullable String defaultPipeline, @Nullable Object payload, XContentType xContentType) throws Exception { + bulkRequest.add(data, defaultIndex, defaultType, null, null, null, defaultPipeline, payload, true, xContentType); + executeIfNeeded(); + return this; + } + + private ThreadPool.Cancellable startFlushTask(TimeValue flushInterval, ThreadPool threadPool) { + if (flushInterval == null) { + return new ThreadPool.Cancellable() { + @Override + public void cancel() {} + + @Override + public boolean isCancelled() { + return true; + } + }; + } + + final Runnable flushRunnable = threadPool.getThreadContext().preserveContext(new Flush()); + return threadPool.scheduleWithFixedDelay(flushRunnable, flushInterval, ThreadPool.Names.GENERIC); + } + private void executeIfNeeded() { ensureOpen(); if (!isOverTheLimit()) { diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkRequest.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkRequest.java index 7e7aa4ce603be..d31c4b1464edf 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkRequest.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkRequest.java @@ -22,7 +22,7 @@ import org.elasticsearch.action.ActionRequest; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.CompositeIndicesRequest; -import org.elasticsearch.action.IndicesRequest; +import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.delete.DeleteRequest; import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.action.support.ActiveShardCount; @@ -35,12 +35,17 @@ import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.lucene.uid.Versions; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContent; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.VersionType; +import org.elasticsearch.search.fetch.subphase.FetchSourceContext; import java.io.IOException; import java.util.ArrayList; @@ -56,7 +61,9 @@ * Note that we only support refresh on the bulk request not per item. 
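For orientation, a minimal sketch of driving the refactored `BulkProcessor` through its public entry point; the `BiConsumer`/`ThreadPool` wiring above stays internal to `builder(Client, Listener)`, so the caller-facing API is unchanged. Index, type and field names are placeholders.

```java
import java.util.concurrent.TimeUnit;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentFactory;

// Sketch only: the periodic flush task now runs on the client's ThreadPool instead of a private scheduler.
static void indexWithBulkProcessor(Client client) throws Exception {
    BulkProcessor processor = BulkProcessor.builder(client, new BulkProcessor.Listener() {
        @Override
        public void beforeBulk(long executionId, BulkRequest request) {}

        @Override
        public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {}

        @Override
        public void afterBulk(long executionId, BulkRequest request, Throwable failure) {}
    })
        .setBulkActions(500)
        .setConcurrentRequests(1)
        .setFlushInterval(TimeValue.timeValueSeconds(5))
        .build();

    processor.add(new IndexRequest("twitter", "tweet", "1")
        .source(XContentFactory.jsonBuilder().startObject().field("user", "kimchy").endObject()));
    processor.awaitClose(30, TimeUnit.SECONDS);
}
```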
* @see org.elasticsearch.client.Client#bulk(BulkRequest) */ -public class BulkRequest extends ActionRequest implements CompositeIndicesRequest, WriteRequest { +public class BulkRequest extends ActionRequest implements CompositeIndicesRequest, WriteRequest { + private static final DeprecationLogger DEPRECATION_LOGGER = + new DeprecationLogger(Loggers.getLogger(BulkRequest.class)); private static final int REQUEST_OVERHEAD = 50; @@ -65,7 +72,7 @@ public class BulkRequest extends ActionRequest implements Composite * {@link WriteRequest}s to this but java doesn't support syntax to declare that everything in the array has both types so we declare * the one with the least casts. */ - final List> requests = new ArrayList<>(); + final List requests = new ArrayList<>(); List payloads = null; protected TimeValue timeout = BulkShardRequest.DEFAULT_TIMEOUT; @@ -80,14 +87,14 @@ public BulkRequest() { /** * Adds a list of requests to be executed. Either index or delete requests. */ - public BulkRequest add(ActionRequest... requests) { - for (ActionRequest request : requests) { + public BulkRequest add(DocWriteRequest... requests) { + for (DocWriteRequest request : requests) { add(request, null); } return this; } - public BulkRequest add(ActionRequest request) { + public BulkRequest add(DocWriteRequest request) { return add(request, null); } @@ -97,7 +104,7 @@ public BulkRequest add(ActionRequest request) { * @param payload Optional payload * @return the current bulk request */ - public BulkRequest add(ActionRequest request, @Nullable Object payload) { + public BulkRequest add(DocWriteRequest request, @Nullable Object payload) { if (request instanceof IndexRequest) { add((IndexRequest) request, payload); } else if (request instanceof DeleteRequest) { @@ -113,8 +120,8 @@ public BulkRequest add(ActionRequest request, @Nullable Object payload) { /** * Adds a list of requests to be executed. Either index or delete requests. */ - public BulkRequest add(Iterable> requests) { - for (ActionRequest request : requests) { + public BulkRequest add(Iterable requests) { + for (DocWriteRequest request : requests) { add(request); } return this; @@ -163,7 +170,7 @@ BulkRequest internalAdd(UpdateRequest request, @Nullable Object payload) { sizeInBytes += request.upsertRequest().source().length(); } if (request.script() != null) { - sizeInBytes += request.script().getScript().length() * 2; + sizeInBytes += request.script().getIdOrCode().length() * 2; } return this; } @@ -200,20 +207,10 @@ private void addPayload(Object payload) { /** * The list of requests in this bulk request. */ - public List> requests() { + public List requests() { return this.requests; } - @Override - public List subRequests() { - List indicesRequests = new ArrayList<>(); - for (ActionRequest request : requests) { - assert request instanceof IndicesRequest; - indicesRequests.add((IndicesRequest) request); - } - return indicesRequests; - } - /** * The list of optional payloads associated with requests in the same order as the requests. Note, elements within * it might be null if no payload has been provided. 
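Since `BulkRequest#add` now takes `DocWriteRequest` instead of raw `ActionRequest`, index, update and delete requests all flow through the same overloads with no per-type dispatch. A short sketch with placeholder index/type/field names:

```java
import java.io.IOException;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.common.xcontent.XContentFactory;

// Sketch only: all three concrete request types implement DocWriteRequest after this change.
static BulkRequest buildMixedBulk() throws IOException {
    BulkRequest bulk = new BulkRequest();
    bulk.add(new IndexRequest("twitter", "tweet", "1")
            .source(XContentFactory.jsonBuilder().startObject().field("user", "kimchy").endObject()));
    bulk.add(new UpdateRequest("twitter", "tweet", "2")
            .doc(XContentFactory.jsonBuilder().startObject().field("user", "kimchy").endObject()));
    bulk.add(new DeleteRequest("twitter", "tweet", "3"));
    return bulk;
}
```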
@@ -241,34 +238,84 @@ public long estimatedSizeInBytes() { /** * Adds a framed data in binary format + * @deprecated use {@link #add(byte[], int, int, XContentType)} */ - public BulkRequest add(byte[] data, int from, int length) throws Exception { + @Deprecated + public BulkRequest add(byte[] data, int from, int length) throws IOException { return add(data, from, length, null, null); } /** * Adds a framed data in binary format */ - public BulkRequest add(byte[] data, int from, int length, @Nullable String defaultIndex, @Nullable String defaultType) throws Exception { + public BulkRequest add(byte[] data, int from, int length, XContentType xContentType) throws IOException { + return add(data, from, length, null, null, xContentType); + } + + /** + * Adds a framed data in binary format + * @deprecated use {@link #add(byte[], int, int, String, String, XContentType)} + */ + @Deprecated + public BulkRequest add(byte[] data, int from, int length, @Nullable String defaultIndex, @Nullable String defaultType) throws IOException { return add(new BytesArray(data, from, length), defaultIndex, defaultType); } /** * Adds a framed data in binary format */ - public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType) throws Exception { - return add(data, defaultIndex, defaultType, null, null, null, null, true); + public BulkRequest add(byte[] data, int from, int length, @Nullable String defaultIndex, @Nullable String defaultType, + XContentType xContentType) throws IOException { + return add(new BytesArray(data, from, length), defaultIndex, defaultType, xContentType); + } + + /** + * Adds a framed data in binary format + * + * @deprecated use {@link #add(BytesReference, String, String, XContentType)} + */ + @Deprecated + public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType) throws IOException { + return add(data, defaultIndex, defaultType, null, null, null, null, null, true); + } + + /** + * Adds a framed data in binary format + */ + public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, + XContentType xContentType) throws IOException { + return add(data, defaultIndex, defaultType, null, null, null, null, null, true, xContentType); } /** * Adds a framed data in binary format + * + * @deprecated use {@link #add(BytesReference, String, String, boolean, XContentType)} */ - public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, boolean allowExplicitIndex) throws Exception { - return add(data, defaultIndex, defaultType, null, null, null, null, allowExplicitIndex); + @Deprecated + public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, boolean allowExplicitIndex) throws IOException { + return add(data, defaultIndex, defaultType, null, null, null, null, null, allowExplicitIndex); } - public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, @Nullable String defaultRouting, @Nullable String[] defaultFields, @Nullable String defaultPipeline, @Nullable Object payload, boolean allowExplicitIndex) throws Exception { - XContent xContent = XContentFactory.xContent(data); + /** + * Adds a framed data in binary format + */ + public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, boolean allowExplicitIndex, + XContentType xContentType) throws IOException { + return add(data, 
defaultIndex, defaultType, null, null, null, null, null, allowExplicitIndex, xContentType); + } + + @Deprecated + public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, @Nullable String defaultRouting, @Nullable String[] defaultFields, @Nullable FetchSourceContext defaultFetchSourceContext, @Nullable String defaultPipeline, @Nullable Object payload, boolean allowExplicitIndex) throws IOException { + XContentType xContentType = XContentFactory.xContentType(data); + return add(data, defaultIndex, defaultType, defaultRouting, defaultFields, defaultFetchSourceContext, defaultPipeline, payload, + allowExplicitIndex, xContentType); + } + + public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, @Nullable String + defaultRouting, @Nullable String[] defaultFields, @Nullable FetchSourceContext defaultFetchSourceContext, @Nullable String + defaultPipeline, @Nullable Object payload, boolean allowExplicitIndex, XContentType xContentType) throws IOException { + XContent xContent = xContentType.xContent(); int line = 0; int from = 0; int length = data.length(); @@ -281,7 +328,8 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null line++; // now parse the action - try (XContentParser parser = xContent.createParser(data.slice(from, nextMarker - from))) { + // EMPTY is safe here because we never call namedObject + try (XContentParser parser = xContent.createParser(NamedXContentRegistry.EMPTY, data.slice(from, nextMarker - from))) { // move pointers from = nextMarker + 1; @@ -290,10 +338,16 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null if (token == null) { continue; } - assert token == XContentParser.Token.START_OBJECT; + if (token != XContentParser.Token.START_OBJECT) { + throw new IllegalArgumentException("Malformed action/metadata line [" + line + "], expected " + + XContentParser.Token.START_OBJECT + " but found [" + token + "]"); + } // Move to FIELD_NAME, that's the action token = parser.nextToken(); - assert token == XContentParser.Token.FIELD_NAME; + if (token != XContentParser.Token.FIELD_NAME) { + throw new IllegalArgumentException("Malformed action/metadata line [" + line + "], expected " + + XContentParser.Token.FIELD_NAME + " but found [" + token + "]"); + } String action = parser.currentName(); String index = defaultIndex; @@ -301,6 +355,7 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null String id = null; String routing = defaultRouting; String parent = null; + FetchSourceContext fetchSourceContext = defaultFetchSourceContext; String[] fields = defaultFields; String timestamp = null; TimeValue ttl = null; @@ -334,8 +389,10 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null } else if ("_parent".equals(currentFieldName) || "parent".equals(currentFieldName)) { parent = parser.text(); } else if ("_timestamp".equals(currentFieldName) || "timestamp".equals(currentFieldName)) { + DEPRECATION_LOGGER.deprecated("The [timestamp] parameter of index requests is deprecated"); timestamp = parser.text(); } else if ("_ttl".equals(currentFieldName) || "ttl".equals(currentFieldName)) { + DEPRECATION_LOGGER.deprecated("The [ttl] parameter of index requests is deprecated"); if (parser.currentToken() == XContentParser.Token.VALUE_STRING) { ttl = TimeValue.parseTimeValue(parser.text(), null, currentFieldName); } else { @@ -353,16 +410,21 @@ public BulkRequest add(BytesReference data, 
@Nullable String defaultIndex, @Null pipeline = parser.text(); } else if ("fields".equals(currentFieldName)) { throw new IllegalArgumentException("Action/metadata line [" + line + "] contains a simple value for parameter [fields] while a list is expected"); + } else if ("_source".equals(currentFieldName)) { + fetchSourceContext = FetchSourceContext.fromXContent(parser); } else { throw new IllegalArgumentException("Action/metadata line [" + line + "] contains an unknown parameter [" + currentFieldName + "]"); } } else if (token == XContentParser.Token.START_ARRAY) { if ("fields".equals(currentFieldName)) { + DEPRECATION_LOGGER.deprecated("Deprecated field [fields] used, expected [_source] instead"); List values = parser.list(); fields = values.toArray(new String[values.size()]); } else { throw new IllegalArgumentException("Malformed action/metadata line [" + line + "], expected a simple value for field [" + currentFieldName + "] but found [" + token + "]"); } + } else if (token == XContentParser.Token.START_OBJECT && "_source".equals(currentFieldName)) { + fetchSourceContext = FetchSourceContext.fromXContent(parser); } else if (token != XContentParser.Token.VALUE_NULL) { throw new IllegalArgumentException("Malformed action/metadata line [" + line + "], expected a simple value for field [" + currentFieldName + "] but found [" + token + "]"); } @@ -387,22 +449,30 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null if ("index".equals(action)) { if (opType == null) { internalAdd(new IndexRequest(index, type, id).routing(routing).parent(parent).timestamp(timestamp).ttl(ttl).version(version).versionType(versionType) - .setPipeline(pipeline).source(data.slice(from, nextMarker - from)), payload); + .setPipeline(pipeline) + .source(sliceTrimmingCarriageReturn(data, from, nextMarker, xContentType), xContentType), payload); } else { internalAdd(new IndexRequest(index, type, id).routing(routing).parent(parent).timestamp(timestamp).ttl(ttl).version(version).versionType(versionType) .create("create".equals(opType)).setPipeline(pipeline) - .source(data.slice(from, nextMarker - from)), payload); + .source(sliceTrimmingCarriageReturn(data, from, nextMarker, xContentType), xContentType), payload); } } else if ("create".equals(action)) { internalAdd(new IndexRequest(index, type, id).routing(routing).parent(parent).timestamp(timestamp).ttl(ttl).version(version).versionType(versionType) .create(true).setPipeline(pipeline) - .source(data.slice(from, nextMarker - from)), payload); + .source(sliceTrimmingCarriageReturn(data, from, nextMarker, xContentType), xContentType), payload); } else if ("update".equals(action)) { UpdateRequest updateRequest = new UpdateRequest(index, type, id).routing(routing).parent(parent).retryOnConflict(retryOnConflict) .version(version).versionType(versionType) .routing(routing) - .parent(parent) - .source(data.slice(from, nextMarker - from)); + .parent(parent); + // EMPTY is safe here because we never call namedObject + try (XContentParser sliceParser = xContent.createParser(NamedXContentRegistry.EMPTY, + sliceTrimmingCarriageReturn(data, from, nextMarker, xContentType))) { + updateRequest.fromXContent(sliceParser); + } + if (fetchSourceContext != null) { + updateRequest.fetchSource(fetchSourceContext); + } if (fields != null) { updateRequest.fields(fields); } @@ -432,6 +502,20 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null return this; } + /** + * Returns the sliced {@link BytesReference}. 
If the {@link XContentType} is JSON, the byte preceding the marker is checked to see + * if it is a carriage return and if so, the BytesReference is sliced so that the carriage return is ignored + */ + private BytesReference sliceTrimmingCarriageReturn(BytesReference bytesReference, int from, int nextMarker, XContentType xContentType) { + final int length; + if (XContentType.JSON == xContentType && bytesReference.get(nextMarker - 1) == (byte) '\r') { + length = nextMarker - from - 1; + } else { + length = nextMarker - from; + } + return bytesReference.slice(from, length); + } + /** * Sets the number of shard copies that must be active before proceeding with the write. * See {@link ReplicationRequest#waitForActiveShards(ActiveShardCount)} for details. @@ -497,7 +581,7 @@ private int findNextMarker(byte marker, int from, BytesReference data, int lengt * @return Whether this bulk request contains index request with an ingest pipeline enabled. */ public boolean hasIndexRequestsWithPipelines() { - for (ActionRequest actionRequest : requests) { + for (DocWriteRequest actionRequest : requests) { if (actionRequest instanceof IndexRequest) { IndexRequest indexRequest = (IndexRequest) actionRequest; if (Strings.hasText(indexRequest.getPipeline())) { @@ -515,13 +599,13 @@ public ActionRequestValidationException validate() { if (requests.isEmpty()) { validationException = addValidationError("no requests added", validationException); } - for (ActionRequest request : requests) { + for (DocWriteRequest request : requests) { // We first check if refresh has been set if (((WriteRequest) request).getRefreshPolicy() != RefreshPolicy.NONE) { validationException = addValidationError( "RefreshPolicy is not supported on an item request. Set it on the BulkRequest instead.", validationException); } - ActionRequestValidationException ex = request.validate(); + ActionRequestValidationException ex = ((WriteRequest) request).validate(); if (ex != null) { if (validationException == null) { validationException = new ActionRequestValidationException(); @@ -539,20 +623,7 @@ public void readFrom(StreamInput in) throws IOException { waitForActiveShards = ActiveShardCount.readFrom(in); int size = in.readVInt(); for (int i = 0; i < size; i++) { - byte type = in.readByte(); - if (type == 0) { - IndexRequest request = new IndexRequest(); - request.readFrom(in); - requests.add(request); - } else if (type == 1) { - DeleteRequest request = new DeleteRequest(); - request.readFrom(in); - requests.add(request); - } else if (type == 2) { - UpdateRequest request = new UpdateRequest(); - request.readFrom(in); - requests.add(request); - } + requests.add(DocWriteRequest.readDocumentRequest(in)); } refreshPolicy = RefreshPolicy.readFrom(in); timeout = new TimeValue(in); @@ -563,15 +634,8 @@ public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); waitForActiveShards.writeTo(out); out.writeVInt(requests.size()); - for (ActionRequest request : requests) { - if (request instanceof IndexRequest) { - out.writeByte((byte) 0); - } else if (request instanceof DeleteRequest) { - out.writeByte((byte) 1); - } else if (request instanceof UpdateRequest) { - out.writeByte((byte) 2); - } - request.writeTo(out); + for (DocWriteRequest request : requests) { + DocWriteRequest.writeDocumentRequest(out, request); } refreshPolicy.writeTo(out); timeout.writeTo(out); diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestBuilder.java index 
c48a8f507b862..8f634fa28a41b 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestBuilder.java @@ -32,6 +32,7 @@ import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.XContentType; /** * A bulk request holds an ordered {@link IndexRequest}s and {@link DeleteRequest}s and allows to executes @@ -97,7 +98,9 @@ public BulkRequestBuilder add(UpdateRequestBuilder request) { /** * Adds a framed data in binary format + * @deprecated use {@link #add(byte[], int, int, XContentType)} */ + @Deprecated public BulkRequestBuilder add(byte[] data, int from, int length) throws Exception { request.add(data, from, length, null, null); return this; @@ -106,11 +109,30 @@ public BulkRequestBuilder add(byte[] data, int from, int length) throws Exceptio /** * Adds a framed data in binary format */ + public BulkRequestBuilder add(byte[] data, int from, int length, XContentType xContentType) throws Exception { + request.add(data, from, length, null, null, xContentType); + return this; + } + + /** + * Adds a framed data in binary format + * @deprecated use {@link #add(byte[], int, int, String, String, XContentType)} + */ + @Deprecated public BulkRequestBuilder add(byte[] data, int from, int length, @Nullable String defaultIndex, @Nullable String defaultType) throws Exception { request.add(data, from, length, defaultIndex, defaultType); return this; } + /** + * Adds a framed data in binary format + */ + public BulkRequestBuilder add(byte[] data, int from, int length, @Nullable String defaultIndex, @Nullable String defaultType, + XContentType xContentType) throws Exception { + request.add(data, from, length, defaultIndex, defaultType, xContentType); + return this; + } + /** * Sets the number of shard copies that must be active before proceeding with the write. * See {@link ReplicationRequest#waitForActiveShards(ActiveShardCount)} for details. 
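The `XContentType`-typed overloads added above let callers hand a raw bulk payload to `BulkRequest` while stating its content type explicitly, instead of relying on the now-deprecated auto-detection. A minimal sketch, assuming a small UTF-8 NDJSON payload:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.common.xcontent.XContentType;

// Sketch only: the content type is declared by the caller, not sniffed from the bytes.
static BulkRequest fromRawNdjson() throws IOException {
    String body =
        "{\"index\":{\"_index\":\"twitter\",\"_type\":\"tweet\",\"_id\":\"1\"}}\n" +
        "{\"user\":\"kimchy\"}\n";
    byte[] data = body.getBytes(StandardCharsets.UTF_8);

    BulkRequest request = new BulkRequest();
    request.add(data, 0, data.length, XContentType.JSON);
    return request;
}
```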
diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestHandler.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestHandler.java index 6ad566ca50019..e1755bfb8bf1e 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestHandler.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestHandler.java @@ -22,23 +22,27 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; import org.elasticsearch.action.ActionListener; -import org.elasticsearch.client.Client; import org.elasticsearch.common.logging.Loggers; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException; +import org.elasticsearch.threadpool.ThreadPool; import java.util.concurrent.Semaphore; import java.util.concurrent.TimeUnit; +import java.util.function.BiConsumer; /** * Abstracts the low-level details of bulk request handling */ abstract class BulkRequestHandler { protected final Logger logger; - protected final Client client; + protected final BiConsumer> consumer; + protected final ThreadPool threadPool; - protected BulkRequestHandler(Client client) { - this.client = client; - this.logger = Loggers.getLogger(getClass(), client.settings()); + protected BulkRequestHandler(BiConsumer> consumer, ThreadPool threadPool) { + this.logger = Loggers.getLogger(getClass()); + this.consumer = consumer; + this.threadPool = threadPool; } @@ -47,20 +51,25 @@ protected BulkRequestHandler(Client client) { public abstract boolean awaitClose(long timeout, TimeUnit unit) throws InterruptedException; - public static BulkRequestHandler syncHandler(Client client, BackoffPolicy backoffPolicy, BulkProcessor.Listener listener) { - return new SyncBulkRequestHandler(client, backoffPolicy, listener); + public static BulkRequestHandler syncHandler(BiConsumer> consumer, + BackoffPolicy backoffPolicy, BulkProcessor.Listener listener, + ThreadPool threadPool) { + return new SyncBulkRequestHandler(consumer, backoffPolicy, listener, threadPool); } - public static BulkRequestHandler asyncHandler(Client client, BackoffPolicy backoffPolicy, BulkProcessor.Listener listener, int concurrentRequests) { - return new AsyncBulkRequestHandler(client, backoffPolicy, listener, concurrentRequests); + public static BulkRequestHandler asyncHandler(BiConsumer> consumer, + BackoffPolicy backoffPolicy, BulkProcessor.Listener listener, + ThreadPool threadPool, int concurrentRequests) { + return new AsyncBulkRequestHandler(consumer, backoffPolicy, listener, threadPool, concurrentRequests); } private static class SyncBulkRequestHandler extends BulkRequestHandler { private final BulkProcessor.Listener listener; private final BackoffPolicy backoffPolicy; - public SyncBulkRequestHandler(Client client, BackoffPolicy backoffPolicy, BulkProcessor.Listener listener) { - super(client); + SyncBulkRequestHandler(BiConsumer> consumer, BackoffPolicy backoffPolicy, + BulkProcessor.Listener listener, ThreadPool threadPool) { + super(consumer, threadPool); this.backoffPolicy = backoffPolicy; this.listener = listener; } @@ -71,9 +80,10 @@ public void execute(BulkRequest bulkRequest, long executionId) { try { listener.beforeBulk(executionId, bulkRequest); BulkResponse bulkResponse = Retry - .on(EsRejectedExecutionException.class) - .policy(backoffPolicy) - .withSyncBackoff(client, bulkRequest); + .on(EsRejectedExecutionException.class) + .policy(backoffPolicy) + .using(threadPool) + .withSyncBackoff(consumer, 
bulkRequest, Settings.EMPTY); afterCalled = true; listener.afterBulk(executionId, bulkRequest, bulkResponse); } catch (InterruptedException e) { @@ -103,8 +113,10 @@ private static class AsyncBulkRequestHandler extends BulkRequestHandler { private final Semaphore semaphore; private final int concurrentRequests; - private AsyncBulkRequestHandler(Client client, BackoffPolicy backoffPolicy, BulkProcessor.Listener listener, int concurrentRequests) { - super(client); + private AsyncBulkRequestHandler(BiConsumer> consumer, BackoffPolicy backoffPolicy, + BulkProcessor.Listener listener, ThreadPool threadPool, + int concurrentRequests) { + super(consumer, threadPool); this.backoffPolicy = backoffPolicy; assert concurrentRequests > 0; this.listener = listener; @@ -121,26 +133,27 @@ public void execute(BulkRequest bulkRequest, long executionId) { semaphore.acquire(); acquired = true; Retry.on(EsRejectedExecutionException.class) - .policy(backoffPolicy) - .withAsyncBackoff(client, bulkRequest, new ActionListener() { - @Override - public void onResponse(BulkResponse response) { - try { - listener.afterBulk(executionId, bulkRequest, response); - } finally { - semaphore.release(); - } + .policy(backoffPolicy) + .using(threadPool) + .withAsyncBackoff(consumer, bulkRequest, new ActionListener() { + @Override + public void onResponse(BulkResponse response) { + try { + listener.afterBulk(executionId, bulkRequest, response); + } finally { + semaphore.release(); } - - @Override - public void onFailure(Exception e) { - try { - listener.afterBulk(executionId, bulkRequest, e); - } finally { - semaphore.release(); - } + } + + @Override + public void onFailure(Exception e) { + try { + listener.afterBulk(executionId, bulkRequest, e); + } finally { + semaphore.release(); } - }); + } + }, Settings.EMPTY); bulkRequestSetupSuccessful = true; } catch (InterruptedException e) { Thread.currentThread().interrupt(); diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkResponse.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkResponse.java index e214f87ddb63b..8e0b48143dc92 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkResponse.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkResponse.java @@ -23,17 +23,32 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.StatusToXContentObject; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.rest.RestStatus; import java.io.IOException; +import java.util.ArrayList; import java.util.Arrays; import java.util.Iterator; +import java.util.List; + +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; +import static org.elasticsearch.common.xcontent.XContentParserUtils.throwUnknownField; +import static org.elasticsearch.common.xcontent.XContentParserUtils.throwUnknownToken; /** * A response of a bulk execution. Holding a response for each item responding (in order) of the * bulk requests. Each item holds the index/type/id is operated on, and if it failed or not (with the * failure message). 
*/ -public class BulkResponse extends ActionResponse implements Iterable { +public class BulkResponse extends ActionResponse implements Iterable, StatusToXContentObject { + + private static final String ITEMS = "items"; + private static final String ERRORS = "errors"; + private static final String TOOK = "took"; + private static final String INGEST_TOOK = "ingest_took"; public static final long NO_INGEST_TOOK = -1L; @@ -141,4 +156,61 @@ public void writeTo(StreamOutput out) throws IOException { out.writeVLong(tookInMillis); out.writeZLong(ingestTookInMillis); } + + @Override + public RestStatus status() { + return RestStatus.OK; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field(TOOK, tookInMillis); + if (ingestTookInMillis != BulkResponse.NO_INGEST_TOOK) { + builder.field(INGEST_TOOK, ingestTookInMillis); + } + builder.field(ERRORS, hasFailures()); + builder.startArray(ITEMS); + for (BulkItemResponse item : this) { + item.toXContent(builder, params); + } + builder.endArray(); + builder.endObject(); + return builder; + } + + public static BulkResponse fromXContent(XContentParser parser) throws IOException { + XContentParser.Token token = parser.nextToken(); + ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation); + + long took = -1L; + long ingestTook = NO_INGEST_TOOK; + List items = new ArrayList<>(); + + String currentFieldName = parser.currentName(); + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (TOOK.equals(currentFieldName)) { + took = parser.longValue(); + } else if (INGEST_TOOK.equals(currentFieldName)) { + ingestTook = parser.longValue(); + } else if (ERRORS.equals(currentFieldName) == false) { + throwUnknownField(currentFieldName, parser.getTokenLocation()); + } + } else if (token == XContentParser.Token.START_ARRAY) { + if (ITEMS.equals(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + items.add(BulkItemResponse.fromXContent(parser, items.size())); + } + } else { + throwUnknownField(currentFieldName, parser.getTokenLocation()); + } + } else { + throwUnknownToken(token, parser.getTokenLocation()); + } + } + return new BulkResponse(items.toArray(new BulkItemResponse[items.size()]), took, ingestTook); + } } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkShardRequest.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkShardRequest.java index ffc2407b8a478..b888c5670c86b 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkShardRequest.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkShardRequest.java @@ -39,13 +39,13 @@ public class BulkShardRequest extends ReplicatedWriteRequest { public BulkShardRequest() { } - BulkShardRequest(ShardId shardId, RefreshPolicy refreshPolicy, BulkItemRequest[] items) { + public BulkShardRequest(ShardId shardId, RefreshPolicy refreshPolicy, BulkItemRequest[] items) { super(shardId); this.items = items; setRefreshPolicy(refreshPolicy); } - BulkItemRequest[] items() { + public BulkItemRequest[] items() { return items; } @@ -88,8 +88,14 @@ public void readFrom(StreamInput in) throws IOException { @Override public String toString() { // This is included in error messages so we'll try to make it somewhat user friendly. 
- StringBuilder b = new StringBuilder("BulkShardRequest to ["); - b.append(index).append("] containing [").append(items.length).append("] requests"); + StringBuilder b = new StringBuilder("BulkShardRequest ["); + b.append(shardId).append("] containing ["); + if (items.length > 1) { + b.append(items.length).append("] requests"); + } else { + b.append(items[0].request()).append("]"); + } + switch (getRefreshPolicy()) { case IMMEDIATE: b.append(" and a refresh"); diff --git a/core/src/main/java/org/elasticsearch/action/bulk/Retry.java b/core/src/main/java/org/elasticsearch/action/bulk/Retry.java index 375796ae8017d..e1ba1a6bee112 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/Retry.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/Retry.java @@ -20,19 +20,25 @@ import org.apache.logging.log4j.Logger; import org.elasticsearch.ExceptionsHelper; -import org.elasticsearch.action.ActionFuture; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.support.PlainActionFuture; -import org.elasticsearch.client.Client; import org.elasticsearch.common.logging.Loggers; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.EsExecutors; import org.elasticsearch.common.util.concurrent.FutureUtils; import org.elasticsearch.threadpool.ThreadPool; import java.util.ArrayList; import java.util.Iterator; import java.util.List; +import java.util.concurrent.Executors; +import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.ScheduledFuture; +import java.util.concurrent.ScheduledThreadPoolExecutor; +import java.util.concurrent.TimeUnit; +import java.util.function.BiConsumer; +import java.util.function.BiFunction; import java.util.function.Predicate; /** @@ -42,11 +48,16 @@ public class Retry { private final Class retryOnThrowable; private BackoffPolicy backoffPolicy; + private ThreadPool threadPool; public static Retry on(Class retryOnThrowable) { return new Retry(retryOnThrowable); } + Retry(Class retryOnThrowable) { + this.retryOnThrowable = retryOnThrowable; + } + /** * @param backoffPolicy The backoff policy that defines how long and how often to wait for retries. */ @@ -55,42 +66,48 @@ public Retry policy(BackoffPolicy backoffPolicy) { return this; } - Retry(Class retryOnThrowable) { - this.retryOnThrowable = retryOnThrowable; + /** + * @param threadPool The threadPool that will be used to schedule retries. + */ + public Retry using(ThreadPool threadPool) { + this.threadPool = threadPool; + return this; } /** - * Invokes #bulk(BulkRequest, ActionListener) on the provided client. Backs off on the provided exception and delegates results to the - * provided listener. - * - * @param client Client invoking the bulk request. + * Invokes #apply(BulkRequest, ActionListener). Backs off on the provided exception and delegates results to the + * provided listener. Retries will be attempted using the provided schedule function + * @param consumer The consumer to which apply the request and listener * @param bulkRequest The bulk request that should be executed. - * @param listener A listener that is invoked when the bulk request finishes or completes with an exception. The listener is not + * @param listener A listener that is invoked when the bulk request finishes or completes with an exception. 
The listener is not + * @param settings settings */ - public void withAsyncBackoff(Client client, BulkRequest bulkRequest, ActionListener listener) { - AsyncRetryHandler r = new AsyncRetryHandler(retryOnThrowable, backoffPolicy, client, listener); + public void withAsyncBackoff(BiConsumer> consumer, BulkRequest bulkRequest, ActionListener listener, Settings settings) { + RetryHandler r = new RetryHandler(retryOnThrowable, backoffPolicy, consumer, listener, settings, threadPool); r.execute(bulkRequest); - } /** - * Invokes #bulk(BulkRequest) on the provided client. Backs off on the provided exception. + * Invokes #apply(BulkRequest, ActionListener). Backs off on the provided exception. Retries will be attempted using + * the provided schedule function. * - * @param client Client invoking the bulk request. + * @param consumer The consumer to which apply the request and listener * @param bulkRequest The bulk request that should be executed. + * @param settings settings * @return the bulk response as returned by the client. * @throws Exception Any exception thrown by the callable. */ - public BulkResponse withSyncBackoff(Client client, BulkRequest bulkRequest) throws Exception { - return SyncRetryHandler - .create(retryOnThrowable, backoffPolicy, client) - .executeBlocking(bulkRequest) - .actionGet(); + public BulkResponse withSyncBackoff(BiConsumer> consumer, BulkRequest bulkRequest, Settings settings) throws Exception { + PlainActionFuture actionFuture = PlainActionFuture.newFuture(); + RetryHandler r = new RetryHandler(retryOnThrowable, backoffPolicy, consumer, actionFuture, settings, threadPool); + r.execute(bulkRequest); + return actionFuture.actionGet(); } - static class AbstractRetryHandler implements ActionListener { + static class RetryHandler implements ActionListener { private final Logger logger; - private final Client client; + private final ThreadPool threadPool; + private final BiConsumer> consumer; private final ActionListener listener; private final Iterator backoff; private final Class retryOnThrowable; @@ -102,12 +119,15 @@ static class AbstractRetryHandler implements ActionListener { private volatile BulkRequest currentBulkRequest; private volatile ScheduledFuture scheduledRequestFuture; - public AbstractRetryHandler(Class retryOnThrowable, BackoffPolicy backoffPolicy, Client client, ActionListener listener) { + RetryHandler(Class retryOnThrowable, BackoffPolicy backoffPolicy, + BiConsumer> consumer, ActionListener listener, + Settings settings, ThreadPool threadPool) { this.retryOnThrowable = retryOnThrowable; this.backoff = backoffPolicy.iterator(); - this.client = client; + this.consumer = consumer; this.listener = listener; - this.logger = Loggers.getLogger(getClass(), client.settings()); + this.logger = Loggers.getLogger(getClass(), settings); + this.threadPool = threadPool; // in contrast to System.currentTimeMillis(), nanoTime() uses a monotonic clock under the hood this.startTimestampNanos = System.nanoTime(); } @@ -142,7 +162,8 @@ private void retry(BulkRequest bulkRequestForRetry) { assert backoff.hasNext(); TimeValue next = backoff.next(); logger.trace("Retry of bulk request scheduled in {} ms.", next.millis()); - scheduledRequestFuture = client.threadPool().schedule(next, ThreadPool.Names.SAME, (() -> this.execute(bulkRequestForRetry))); + Runnable command = threadPool.getThreadContext().preserveContext(() -> this.execute(bulkRequestForRetry)); + scheduledRequestFuture = threadPool.schedule(next, ThreadPool.Names.SAME, command); } private BulkRequest 
createBulkRequestForRetry(BulkResponse bulkItemResponses) { @@ -206,32 +227,7 @@ private BulkResponse getAccumulatedResponse() { public void execute(BulkRequest bulkRequest) { this.currentBulkRequest = bulkRequest; - client.bulk(bulkRequest, this); - } - } - - static class AsyncRetryHandler extends AbstractRetryHandler { - public AsyncRetryHandler(Class retryOnThrowable, BackoffPolicy backoffPolicy, Client client, ActionListener listener) { - super(retryOnThrowable, backoffPolicy, client, listener); - } - } - - static class SyncRetryHandler extends AbstractRetryHandler { - private final PlainActionFuture actionFuture; - - public static SyncRetryHandler create(Class retryOnThrowable, BackoffPolicy backoffPolicy, Client client) { - PlainActionFuture actionFuture = PlainActionFuture.newFuture(); - return new SyncRetryHandler(retryOnThrowable, backoffPolicy, client, actionFuture); - } - - public SyncRetryHandler(Class retryOnThrowable, BackoffPolicy backoffPolicy, Client client, PlainActionFuture actionFuture) { - super(retryOnThrowable, backoffPolicy, client, actionFuture); - this.actionFuture = actionFuture; - } - - public ActionFuture executeBlocking(BulkRequest bulkRequest) { - super.execute(bulkRequest); - return actionFuture; + consumer.accept(bulkRequest, this); } } } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java b/core/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java index da080b54b2531..ecc9fc35fe1e2 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java @@ -19,25 +19,28 @@ package org.elasticsearch.action.bulk; -import org.elasticsearch.ElasticsearchException; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.logging.log4j.util.Supplier; +import org.apache.lucene.util.SparseFixedBitSet; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.ExceptionsHelper; +import org.elasticsearch.ResourceAlreadyExistsException; import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.ActionRequest; -import org.elasticsearch.action.DocumentRequest; +import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.RoutingMissingException; import org.elasticsearch.action.admin.indices.create.CreateIndexRequest; import org.elasticsearch.action.admin.indices.create.CreateIndexResponse; import org.elasticsearch.action.admin.indices.create.TransportCreateIndexAction; -import org.elasticsearch.action.delete.DeleteRequest; -import org.elasticsearch.action.delete.TransportDeleteAction; import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.action.ingest.IngestActionForwarder; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.AutoCreateIndex; import org.elasticsearch.action.support.HandledTransportAction; import org.elasticsearch.action.update.TransportUpdateAction; import org.elasticsearch.action.update.UpdateRequest; import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.ClusterStateObserver; +import org.elasticsearch.cluster.block.ClusterBlockException; import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; @@ -46,12 +49,16 @@ import org.elasticsearch.cluster.service.ClusterService; import 
org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.index.shard.ShardId; -import org.elasticsearch.indices.IndexAlreadyExistsException; import org.elasticsearch.indices.IndexClosedException; +import org.elasticsearch.ingest.IngestService; +import org.elasticsearch.node.NodeClosedException; import org.elasticsearch.tasks.Task; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -59,52 +66,62 @@ import java.util.ArrayList; import java.util.HashMap; import java.util.HashSet; +import java.util.Iterator; import java.util.List; -import java.util.Locale; import java.util.Map; import java.util.Objects; import java.util.Set; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import java.util.function.LongSupplier; +import java.util.stream.Collectors; + +import static java.util.Collections.emptyMap; /** - * + * Groups bulk request items by shard, optionally creating non-existent indices and + * delegates to {@link TransportShardBulkAction} for shard-level bulk execution */ public class TransportBulkAction extends HandledTransportAction { private final AutoCreateIndex autoCreateIndex; private final boolean allowIdGeneration; private final ClusterService clusterService; + private final IngestService ingestService; private final TransportShardBulkAction shardBulkAction; private final TransportCreateIndexAction createIndexAction; private final LongSupplier relativeTimeProvider; + private final IngestActionForwarder ingestForwarder; @Inject - public TransportBulkAction(Settings settings, ThreadPool threadPool, TransportService transportService, ClusterService clusterService, + public TransportBulkAction(Settings settings, ThreadPool threadPool, TransportService transportService, + ClusterService clusterService, IngestService ingestService, TransportShardBulkAction shardBulkAction, TransportCreateIndexAction createIndexAction, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, AutoCreateIndex autoCreateIndex) { - this(settings, threadPool, transportService, clusterService, + this(settings, threadPool, transportService, clusterService, ingestService, shardBulkAction, createIndexAction, actionFilters, indexNameExpressionResolver, autoCreateIndex, System::nanoTime); } - public TransportBulkAction(Settings settings, ThreadPool threadPool, TransportService transportService, ClusterService clusterService, + public TransportBulkAction(Settings settings, ThreadPool threadPool, TransportService transportService, + ClusterService clusterService, IngestService ingestService, TransportShardBulkAction shardBulkAction, TransportCreateIndexAction createIndexAction, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, AutoCreateIndex autoCreateIndex, LongSupplier relativeTimeProvider) { super(settings, BulkAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, BulkRequest::new); Objects.requireNonNull(relativeTimeProvider); this.clusterService = clusterService; + this.ingestService = ingestService; this.shardBulkAction = shardBulkAction; 
this.createIndexAction = createIndexAction; - this.autoCreateIndex = autoCreateIndex; this.allowIdGeneration = this.settings.getAsBoolean("action.bulk.action.allow_id_generation", true); this.relativeTimeProvider = relativeTimeProvider; + this.ingestForwarder = new IngestActionForwarder(transportService); + clusterService.addStateApplier(this.ingestForwarder); } @Override @@ -114,69 +131,78 @@ protected final void doExecute(final BulkRequest bulkRequest, final ActionListen @Override protected void doExecute(Task task, BulkRequest bulkRequest, ActionListener listener) { + if (bulkRequest.hasIndexRequestsWithPipelines()) { + if (clusterService.localNode().isIngestNode()) { + processBulkIndexIngestRequest(task, bulkRequest, listener); + } else { + ingestForwarder.forwardIngestRequest(BulkAction.INSTANCE, bulkRequest, listener); + } + return; + } + final long startTime = relativeTime(); final AtomicArray responses = new AtomicArray<>(bulkRequest.requests.size()); if (needToCheck()) { - // Keep track of all unique indices and all unique types per index for the create index requests: - final Set autoCreateIndices = new HashSet<>(); - for (ActionRequest request : bulkRequest.requests) { - if (request instanceof DocumentRequest) { - DocumentRequest req = (DocumentRequest) request; - autoCreateIndices.add(req.index()); - } else { - throw new ElasticsearchException("Parsed unknown request in bulk actions: " + request.getClass().getSimpleName()); + // Attempt to create all the indices that we're going to need during the bulk before we start. + // Step 1: collect all the indices in the request + final Set indices = bulkRequest.requests.stream() + .map(DocWriteRequest::index) + .collect(Collectors.toSet()); + /* Step 2: filter that to indices that don't exist and we can create. At the same time build a map of indices we can't create + * that we'll use when we try to run the requests. */ + final Map indicesThatCannotBeCreated = new HashMap<>(); + Set autoCreateIndices = new HashSet<>(); + ClusterState state = clusterService.state(); + for (String index : indices) { + boolean shouldAutoCreate; + try { + shouldAutoCreate = shouldAutoCreate(index, state); + } catch (IndexNotFoundException e) { + shouldAutoCreate = false; + indicesThatCannotBeCreated.put(index, e); + } + if (shouldAutoCreate) { + autoCreateIndices.add(index); } } - final AtomicInteger counter = new AtomicInteger(autoCreateIndices.size()); - ClusterState state = clusterService.state(); - for (String index : autoCreateIndices) { - if (shouldAutoCreate(index, state)) { - CreateIndexRequest createIndexRequest = new CreateIndexRequest(); - createIndexRequest.index(index); - createIndexRequest.cause("auto(bulk api)"); - createIndexRequest.masterNodeTimeout(bulkRequest.timeout()); - createIndexAction.execute(createIndexRequest, new ActionListener() { + // Step 3: create all the indices that are missing, if there are any missing. start the bulk after all the creates come back. 
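+                // (Flow of the step below: if nothing needs creating the bulk executes immediately; otherwise each create
+                // response decrements a counter and the bulk runs once it reaches zero. A create failure other than
+                // "already exists" fails only the items that target that index; the rest of the bulk still proceeds.)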
+ if (autoCreateIndices.isEmpty()) { + executeBulk(task, bulkRequest, startTime, listener, responses, indicesThatCannotBeCreated); + } else { + final AtomicInteger counter = new AtomicInteger(autoCreateIndices.size()); + for (String index : autoCreateIndices) { + createIndex(index, bulkRequest.timeout(), new ActionListener() { @Override public void onResponse(CreateIndexResponse result) { if (counter.decrementAndGet() == 0) { - try { - executeBulk(task, bulkRequest, startTime, listener, responses); - } catch (Exception e) { - listener.onFailure(e); - } + executeBulk(task, bulkRequest, startTime, listener, responses, indicesThatCannotBeCreated); } } @Override public void onFailure(Exception e) { - if (!(ExceptionsHelper.unwrapCause(e) instanceof IndexAlreadyExistsException)) { - // fail all requests involving this index, if create didnt work + if (!(ExceptionsHelper.unwrapCause(e) instanceof ResourceAlreadyExistsException)) { + // fail all requests involving this index, if create didn't work for (int i = 0; i < bulkRequest.requests.size(); i++) { - ActionRequest request = bulkRequest.requests.get(i); + DocWriteRequest request = bulkRequest.requests.get(i); if (request != null && setResponseFailureIfIndexMatches(responses, i, request, index, e)) { bulkRequest.requests.set(i, null); } } } if (counter.decrementAndGet() == 0) { - try { - executeBulk(task, bulkRequest, startTime, listener, responses); - } catch (Exception inner) { + executeBulk(task, bulkRequest, startTime, ActionListener.wrap(listener::onResponse, inner -> { inner.addSuppressed(e); listener.onFailure(inner); - } + }), responses, indicesThatCannotBeCreated); } } }); - } else { - if (counter.decrementAndGet() == 0) { - executeBulk(task, bulkRequest, startTime, listener, responses); - } } } } else { - executeBulk(task, bulkRequest, startTime, listener, responses); + executeBulk(task, bulkRequest, startTime, listener, responses, emptyMap()); } } @@ -188,30 +214,21 @@ boolean shouldAutoCreate(String index, ClusterState state) { return autoCreateIndex.shouldAutoCreate(index, state); } - private boolean setResponseFailureIfIndexMatches(AtomicArray responses, int idx, ActionRequest request, String index, Exception e) { - if (request instanceof IndexRequest) { - IndexRequest indexRequest = (IndexRequest) request; - if (index.equals(indexRequest.index())) { - responses.set(idx, new BulkItemResponse(idx, "index", new BulkItemResponse.Failure(indexRequest.index(), indexRequest.type(), indexRequest.id(), e))); - return true; - } - } else if (request instanceof DeleteRequest) { - DeleteRequest deleteRequest = (DeleteRequest) request; - if (index.equals(deleteRequest.index())) { - responses.set(idx, new BulkItemResponse(idx, "delete", new BulkItemResponse.Failure(deleteRequest.index(), deleteRequest.type(), deleteRequest.id(), e))); - return true; - } - } else if (request instanceof UpdateRequest) { - UpdateRequest updateRequest = (UpdateRequest) request; - if (index.equals(updateRequest.index())) { - responses.set(idx, new BulkItemResponse(idx, "update", new BulkItemResponse.Failure(updateRequest.index(), updateRequest.type(), updateRequest.id(), e))); - return true; - } - } else { - throw new ElasticsearchException("Parsed unknown request in bulk actions: " + request.getClass().getSimpleName()); + void createIndex(String index, TimeValue timeout, ActionListener listener) { + CreateIndexRequest createIndexRequest = new CreateIndexRequest(); + createIndexRequest.index(index); + createIndexRequest.cause("auto(bulk api)"); + 
createIndexRequest.masterNodeTimeout(timeout); + createIndexAction.execute(createIndexRequest, listener); + } + + private boolean setResponseFailureIfIndexMatches(AtomicArray responses, int idx, DocWriteRequest request, String index, Exception e) { + if (index.equals(request.index())) { + responses.set(idx, new BulkItemResponse(idx, request.opType(), new BulkItemResponse.Failure(request.index(), request.type(), request.id(), e))); + return true; } return false; - } + } /** * This method executes the {@link BulkRequest} and calls the given listener once the request returns. @@ -221,213 +238,240 @@ private boolean setResponseFailureIfIndexMatches(AtomicArray r */ public void executeBulk(final BulkRequest bulkRequest, final ActionListener listener) { final long startTimeNanos = relativeTime(); - executeBulk(null, bulkRequest, startTimeNanos, listener, new AtomicArray<>(bulkRequest.requests.size())); + executeBulk(null, bulkRequest, startTimeNanos, listener, new AtomicArray<>(bulkRequest.requests.size()), emptyMap()); } private long buildTookInMillis(long startTimeNanos) { return TimeUnit.NANOSECONDS.toMillis(relativeTime() - startTimeNanos); } - void executeBulk(Task task, final BulkRequest bulkRequest, final long startTimeNanos, final ActionListener listener, final AtomicArray responses ) { - final ClusterState clusterState = clusterService.state(); - // TODO use timeout to wait here if its blocked... - clusterState.blocks().globalBlockedRaiseException(ClusterBlockLevel.WRITE); - - final ConcreteIndices concreteIndices = new ConcreteIndices(clusterState, indexNameExpressionResolver); - MetaData metaData = clusterState.metaData(); - for (int i = 0; i < bulkRequest.requests.size(); i++) { - ActionRequest request = bulkRequest.requests.get(i); - //the request can only be null because we set it to null in the previous step, so it gets ignored - if (request == null) { - continue; - } - DocumentRequest documentRequest = (DocumentRequest) request; - if (addFailureIfIndexIsUnavailable(documentRequest, bulkRequest, responses, i, concreteIndices, metaData)) { - continue; + /** + * retries on retryable cluster blocks, resolves item requests, + * constructs shard bulk requests and delegates execution to shard bulk action + * */ + private final class BulkOperation extends AbstractRunnable { + private final Task task; + private final BulkRequest bulkRequest; + private final ActionListener listener; + private final AtomicArray responses; + private final long startTimeNanos; + private final ClusterStateObserver observer; + private final Map indicesThatCannotBeCreated; + + BulkOperation(Task task, BulkRequest bulkRequest, ActionListener listener, AtomicArray responses, + long startTimeNanos, Map indicesThatCannotBeCreated) { + this.task = task; + this.bulkRequest = bulkRequest; + this.listener = listener; + this.responses = responses; + this.startTimeNanos = startTimeNanos; + this.indicesThatCannotBeCreated = indicesThatCannotBeCreated; + this.observer = new ClusterStateObserver(clusterService, bulkRequest.timeout(), logger, threadPool.getThreadContext()); + } + + @Override + public void onFailure(Exception e) { + listener.onFailure(e); + } + + @Override + protected void doRun() throws Exception { + final ClusterState clusterState = observer.setAndGetObservedState(); + if (handleBlockExceptions(clusterState)) { + return; } - Index concreteIndex = concreteIndices.resolveIfAbsent(documentRequest); - if (request instanceof IndexRequest) { - IndexRequest indexRequest = (IndexRequest) request; - MappingMetaData 
mappingMd = null; - final IndexMetaData indexMetaData = metaData.index(concreteIndex); - if (indexMetaData != null) { - mappingMd = indexMetaData.mappingOrDefault(indexRequest.type()); + final ConcreteIndices concreteIndices = new ConcreteIndices(clusterState, indexNameExpressionResolver); + MetaData metaData = clusterState.metaData(); + for (int i = 0; i < bulkRequest.requests.size(); i++) { + DocWriteRequest docWriteRequest = bulkRequest.requests.get(i); + //the request can only be null because we set it to null in the previous step, so it gets ignored + if (docWriteRequest == null) { + continue; } - try { - indexRequest.resolveRouting(metaData); - indexRequest.process(mappingMd, allowIdGeneration, concreteIndex.getName()); - } catch (ElasticsearchParseException | RoutingMissingException e) { - BulkItemResponse.Failure failure = new BulkItemResponse.Failure(concreteIndex.getName(), indexRequest.type(), indexRequest.id(), e); - BulkItemResponse bulkItemResponse = new BulkItemResponse(i, "index", failure); - responses.set(i, bulkItemResponse); - // make sure the request gets never processed again - bulkRequest.requests.set(i, null); + if (addFailureIfIndexIsUnavailable(docWriteRequest, i, concreteIndices, metaData)) { + continue; } - } else if (request instanceof DeleteRequest) { + Index concreteIndex = concreteIndices.resolveIfAbsent(docWriteRequest); try { - TransportDeleteAction.resolveAndValidateRouting(metaData, concreteIndex.getName(), (DeleteRequest)request); - } catch(RoutingMissingException e) { - BulkItemResponse.Failure failure = new BulkItemResponse.Failure(concreteIndex.getName(), documentRequest.type(), documentRequest.id(), e); - BulkItemResponse bulkItemResponse = new BulkItemResponse(i, "delete", failure); + switch (docWriteRequest.opType()) { + case CREATE: + case INDEX: + IndexRequest indexRequest = (IndexRequest) docWriteRequest; + MappingMetaData mappingMd = null; + final IndexMetaData indexMetaData = metaData.index(concreteIndex); + if (indexMetaData != null) { + mappingMd = indexMetaData.mappingOrDefault(indexRequest.type()); + } + indexRequest.resolveRouting(metaData); + indexRequest.process(mappingMd, allowIdGeneration, concreteIndex.getName()); + break; + case UPDATE: + TransportUpdateAction.resolveAndValidateRouting(metaData, concreteIndex.getName(), (UpdateRequest) docWriteRequest); + break; + case DELETE: + docWriteRequest.routing(metaData.resolveIndexRouting(docWriteRequest.parent(), docWriteRequest.routing(), docWriteRequest.index())); + // check if routing is required, if so, throw error if routing wasn't specified + if (docWriteRequest.routing() == null && metaData.routingRequired(concreteIndex.getName(), docWriteRequest.type())) { + throw new RoutingMissingException(concreteIndex.getName(), docWriteRequest.type(), docWriteRequest.id()); + } + break; + default: throw new AssertionError("request type not supported: [" + docWriteRequest.opType() + "]"); + } + } catch (ElasticsearchParseException | IllegalArgumentException | RoutingMissingException e) { + BulkItemResponse.Failure failure = new BulkItemResponse.Failure(concreteIndex.getName(), docWriteRequest.type(), docWriteRequest.id(), e); + BulkItemResponse bulkItemResponse = new BulkItemResponse(i, docWriteRequest.opType(), failure); responses.set(i, bulkItemResponse); // make sure the request gets never processed again bulkRequest.requests.set(i, null); } + } - } else if (request instanceof UpdateRequest) { - try { - TransportUpdateAction.resolveAndValidateRouting(metaData, concreteIndex.getName(), 
(UpdateRequest)request); - } catch(RoutingMissingException e) { - BulkItemResponse.Failure failure = new BulkItemResponse.Failure(concreteIndex.getName(), documentRequest.type(), documentRequest.id(), e); - BulkItemResponse bulkItemResponse = new BulkItemResponse(i, "update", failure); - responses.set(i, bulkItemResponse); - // make sure the request gets never processed again - bulkRequest.requests.set(i, null); - } - } else { - throw new AssertionError("request type not supported: [" + request.getClass().getName() + "]"); + // first, go over all the requests and create a ShardId -> Operations mapping + Map> requestsByShard = new HashMap<>(); + for (int i = 0; i < bulkRequest.requests.size(); i++) { + DocWriteRequest request = bulkRequest.requests.get(i); + if (request == null) { + continue; + } + String concreteIndex = concreteIndices.getConcreteIndex(request.index()).getName(); + ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, request.id(), request.routing()).shardId(); + List shardRequests = requestsByShard.computeIfAbsent(shardId, shard -> new ArrayList<>()); + shardRequests.add(new BulkItemRequest(i, request)); + } + + if (requestsByShard.isEmpty()) { + listener.onResponse(new BulkResponse(responses.toArray(new BulkItemResponse[responses.length()]), buildTookInMillis(startTimeNanos))); + return; } - } - // first, go over all the requests and create a ShardId -> Operations mapping - Map> requestsByShard = new HashMap<>(); - - for (int i = 0; i < bulkRequest.requests.size(); i++) { - ActionRequest request = bulkRequest.requests.get(i); - if (request instanceof IndexRequest) { - IndexRequest indexRequest = (IndexRequest) request; - String concreteIndex = concreteIndices.getConcreteIndex(indexRequest.index()).getName(); - ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, indexRequest.id(), indexRequest.routing()).shardId(); - List list = requestsByShard.get(shardId); - if (list == null) { - list = new ArrayList<>(); - requestsByShard.put(shardId, list); - } - list.add(new BulkItemRequest(i, request)); - } else if (request instanceof DeleteRequest) { - DeleteRequest deleteRequest = (DeleteRequest) request; - String concreteIndex = concreteIndices.getConcreteIndex(deleteRequest.index()).getName(); - ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, deleteRequest.id(), deleteRequest.routing()).shardId(); - List list = requestsByShard.get(shardId); - if (list == null) { - list = new ArrayList<>(); - requestsByShard.put(shardId, list); - } - list.add(new BulkItemRequest(i, request)); - } else if (request instanceof UpdateRequest) { - UpdateRequest updateRequest = (UpdateRequest) request; - String concreteIndex = concreteIndices.getConcreteIndex(updateRequest.index()).getName(); - ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, updateRequest.id(), updateRequest.routing()).shardId(); - List list = requestsByShard.get(shardId); - if (list == null) { - list = new ArrayList<>(); - requestsByShard.put(shardId, list); + final AtomicInteger counter = new AtomicInteger(requestsByShard.size()); + String nodeId = clusterService.localNode().getId(); + for (Map.Entry> entry : requestsByShard.entrySet()) { + final ShardId shardId = entry.getKey(); + final List requests = entry.getValue(); + BulkShardRequest bulkShardRequest = new BulkShardRequest(shardId, bulkRequest.getRefreshPolicy(), + requests.toArray(new 
BulkItemRequest[requests.size()])); + bulkShardRequest.waitForActiveShards(bulkRequest.waitForActiveShards()); + bulkShardRequest.timeout(bulkRequest.timeout()); + if (task != null) { + bulkShardRequest.setParentTask(nodeId, task.getId()); } - list.add(new BulkItemRequest(i, request)); + shardBulkAction.execute(bulkShardRequest, new ActionListener() { + @Override + public void onResponse(BulkShardResponse bulkShardResponse) { + for (BulkItemResponse bulkItemResponse : bulkShardResponse.getResponses()) { + // we may have no response if item failed + if (bulkItemResponse.getResponse() != null) { + bulkItemResponse.getResponse().setShardInfo(bulkShardResponse.getShardInfo()); + } + responses.set(bulkItemResponse.getItemId(), bulkItemResponse); + } + if (counter.decrementAndGet() == 0) { + finishHim(); + } + } + + @Override + public void onFailure(Exception e) { + // create failures for all relevant requests + for (BulkItemRequest request : requests) { + final String indexName = concreteIndices.getConcreteIndex(request.index()).getName(); + DocWriteRequest docWriteRequest = request.request(); + responses.set(request.id(), new BulkItemResponse(request.id(), docWriteRequest.opType(), + new BulkItemResponse.Failure(indexName, docWriteRequest.type(), docWriteRequest.id(), e))); + } + if (counter.decrementAndGet() == 0) { + finishHim(); + } + } + + private void finishHim() { + listener.onResponse(new BulkResponse(responses.toArray(new BulkItemResponse[responses.length()]), buildTookInMillis(startTimeNanos))); + } + }); } } - if (requestsByShard.isEmpty()) { - listener.onResponse(new BulkResponse(responses.toArray(new BulkItemResponse[responses.length()]), buildTookInMillis(startTimeNanos))); - return; + private boolean handleBlockExceptions(ClusterState state) { + ClusterBlockException blockException = state.blocks().globalBlockedException(ClusterBlockLevel.WRITE); + if (blockException != null) { + if (blockException.retryable()) { + logger.trace("cluster is blocked, scheduling a retry", blockException); + retry(blockException); + } else { + onFailure(blockException); + } + return true; + } + return false; } - final AtomicInteger counter = new AtomicInteger(requestsByShard.size()); - String nodeId = clusterService.localNode().getId(); - for (Map.Entry> entry : requestsByShard.entrySet()) { - final ShardId shardId = entry.getKey(); - final List requests = entry.getValue(); - BulkShardRequest bulkShardRequest = new BulkShardRequest(shardId, bulkRequest.getRefreshPolicy(), - requests.toArray(new BulkItemRequest[requests.size()])); - bulkShardRequest.waitForActiveShards(bulkRequest.waitForActiveShards()); - bulkShardRequest.timeout(bulkRequest.timeout()); - if (task != null) { - bulkShardRequest.setParentTask(nodeId, task.getId()); + void retry(Exception failure) { + assert failure != null; + if (observer.isTimedOut()) { + // we running as a last attempt after a timeout has happened. 
don't retry + onFailure(failure); + return; } - shardBulkAction.execute(bulkShardRequest, new ActionListener() { + final ThreadContext.StoredContext context = threadPool.getThreadContext().newStoredContext(true); + observer.waitForNextChange(new ClusterStateObserver.Listener() { @Override - public void onResponse(BulkShardResponse bulkShardResponse) { - for (BulkItemResponse bulkItemResponse : bulkShardResponse.getResponses()) { - // we may have no response if item failed - if (bulkItemResponse.getResponse() != null) { - bulkItemResponse.getResponse().setShardInfo(bulkShardResponse.getShardInfo()); - } - responses.set(bulkItemResponse.getItemId(), bulkItemResponse); - } - if (counter.decrementAndGet() == 0) { - finishHim(); - } + public void onNewClusterState(ClusterState state) { + context.close(); + run(); } @Override - public void onFailure(Exception e) { - // create failures for all relevant requests - for (BulkItemRequest request : requests) { - final String indexName = concreteIndices.getConcreteIndex(request.index()).getName(); - if (request.request() instanceof IndexRequest) { - IndexRequest indexRequest = (IndexRequest) request.request(); - responses.set(request.id(), new BulkItemResponse(request.id(), indexRequest.opType().toString().toLowerCase(Locale.ENGLISH), - new BulkItemResponse.Failure(indexName, indexRequest.type(), indexRequest.id(), e))); - } else if (request.request() instanceof DeleteRequest) { - DeleteRequest deleteRequest = (DeleteRequest) request.request(); - responses.set(request.id(), new BulkItemResponse(request.id(), "delete", - new BulkItemResponse.Failure(indexName, deleteRequest.type(), deleteRequest.id(), e))); - } else if (request.request() instanceof UpdateRequest) { - UpdateRequest updateRequest = (UpdateRequest) request.request(); - responses.set(request.id(), new BulkItemResponse(request.id(), "update", - new BulkItemResponse.Failure(indexName, updateRequest.type(), updateRequest.id(), e))); - } - } - if (counter.decrementAndGet() == 0) { - finishHim(); - } + public void onClusterServiceClose() { + onFailure(new NodeClosedException(clusterService.localNode())); } - private void finishHim() { - listener.onResponse(new BulkResponse(responses.toArray(new BulkItemResponse[responses.length()]), buildTookInMillis(startTimeNanos))); + @Override + public void onTimeout(TimeValue timeout) { + context.close(); + // Try one more time... 
+ run(); } }); } - } - private boolean addFailureIfIndexIsUnavailable(DocumentRequest request, BulkRequest bulkRequest, AtomicArray responses, int idx, - final ConcreteIndices concreteIndices, - final MetaData metaData) { - Index concreteIndex = concreteIndices.getConcreteIndex(request.index()); - Exception unavailableException = null; - if (concreteIndex == null) { - try { - concreteIndex = concreteIndices.resolveIfAbsent(request); - } catch (IndexClosedException | IndexNotFoundException ex) { - // Fix for issue where bulk request references an index that - // cannot be auto-created see issue #8125 - unavailableException = ex; + private boolean addFailureIfIndexIsUnavailable(DocWriteRequest request, int idx, final ConcreteIndices concreteIndices, + final MetaData metaData) { + IndexNotFoundException cannotCreate = indicesThatCannotBeCreated.get(request.index()); + if (cannotCreate != null) { + addFailure(request, idx, cannotCreate); + return true; + } + Index concreteIndex = concreteIndices.getConcreteIndex(request.index()); + if (concreteIndex == null) { + try { + concreteIndex = concreteIndices.resolveIfAbsent(request); + } catch (IndexClosedException | IndexNotFoundException ex) { + addFailure(request, idx, ex); + return true; + } } - } - if (unavailableException == null) { IndexMetaData indexMetaData = metaData.getIndexSafe(concreteIndex); if (indexMetaData.getState() == IndexMetaData.State.CLOSE) { - unavailableException = new IndexClosedException(concreteIndex); + addFailure(request, idx, new IndexClosedException(concreteIndex)); + return true; } + return false; } - if (unavailableException != null) { + + private void addFailure(DocWriteRequest request, int idx, Exception unavailableException) { BulkItemResponse.Failure failure = new BulkItemResponse.Failure(request.index(), request.type(), request.id(), unavailableException); - String operationType = "unknown"; - if (request instanceof IndexRequest) { - operationType = "index"; - } else if (request instanceof DeleteRequest) { - operationType = "delete"; - } else if (request instanceof UpdateRequest) { - operationType = "update"; - } - BulkItemResponse bulkItemResponse = new BulkItemResponse(idx, operationType, failure); + BulkItemResponse bulkItemResponse = new BulkItemResponse(idx, request.opType(), failure); responses.set(idx, bulkItemResponse); // make sure the request gets never processed again bulkRequest.requests.set(idx, null); - return true; } - return false; + } + + void executeBulk(Task task, final BulkRequest bulkRequest, final long startTimeNanos, final ActionListener listener, + final AtomicArray responses, Map indicesThatCannotBeCreated) { + new BulkOperation(task, bulkRequest, listener, responses, startTimeNanos, indicesThatCannotBeCreated).run(); } private static class ConcreteIndices { @@ -444,7 +488,7 @@ Index getConcreteIndex(String indexOrAlias) { return indices.get(indexOrAlias); } - Index resolveIfAbsent(DocumentRequest request) { + Index resolveIfAbsent(DocWriteRequest request) { Index concreteIndex = indices.get(request.index()); if (concreteIndex == null) { concreteIndex = indexNameExpressionResolver.concreteSingleIndex(state, request); @@ -458,4 +502,131 @@ private long relativeTime() { return relativeTimeProvider.getAsLong(); } + void processBulkIndexIngestRequest(Task task, BulkRequest original, ActionListener listener) { + long ingestStartTimeInNanos = System.nanoTime(); + BulkRequestModifier bulkRequestModifier = new BulkRequestModifier(original); + 
ingestService.getPipelineExecutionService().executeBulkRequest(() -> bulkRequestModifier, (indexRequest, exception) -> { + logger.debug((Supplier) () -> new ParameterizedMessage("failed to execute pipeline [{}] for document [{}/{}/{}]", + indexRequest.getPipeline(), indexRequest.index(), indexRequest.type(), indexRequest.id()), exception); + bulkRequestModifier.markCurrentItemAsFailed(exception); + }, (exception) -> { + if (exception != null) { + logger.error("failed to execute pipeline for a bulk request", exception); + listener.onFailure(exception); + } else { + long ingestTookInMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - ingestStartTimeInNanos); + BulkRequest bulkRequest = bulkRequestModifier.getBulkRequest(); + ActionListener actionListener = bulkRequestModifier.wrapActionListenerIfNeeded(ingestTookInMillis, listener); + if (bulkRequest.requests().isEmpty()) { + // at this stage, the transport bulk action can't deal with a bulk request with no requests, + // so we stop and send an empty response back to the client. + // (this will happen if pre-processing all items in the bulk failed) + actionListener.onResponse(new BulkResponse(new BulkItemResponse[0], 0)); + } else { + doExecute(task, bulkRequest, actionListener); + } + } + }); + } + + static final class BulkRequestModifier implements Iterator { + + final BulkRequest bulkRequest; + final SparseFixedBitSet failedSlots; + final List itemResponses; + + int currentSlot = -1; + int[] originalSlots; + + BulkRequestModifier(BulkRequest bulkRequest) { + this.bulkRequest = bulkRequest; + this.failedSlots = new SparseFixedBitSet(bulkRequest.requests().size()); + this.itemResponses = new ArrayList<>(bulkRequest.requests().size()); + } + + @Override + public DocWriteRequest next() { + return bulkRequest.requests().get(++currentSlot); + } + + @Override + public boolean hasNext() { + return (currentSlot + 1) < bulkRequest.requests().size(); + } + + BulkRequest getBulkRequest() { + if (itemResponses.isEmpty()) { + return bulkRequest; + } else { + BulkRequest modifiedBulkRequest = new BulkRequest(); + modifiedBulkRequest.setRefreshPolicy(bulkRequest.getRefreshPolicy()); + modifiedBulkRequest.waitForActiveShards(bulkRequest.waitForActiveShards()); + modifiedBulkRequest.timeout(bulkRequest.timeout()); + + int slot = 0; + List requests = bulkRequest.requests(); + originalSlots = new int[requests.size()]; // oversize, but that's ok + for (int i = 0; i < requests.size(); i++) { + DocWriteRequest request = requests.get(i); + if (failedSlots.get(i) == false) { + modifiedBulkRequest.add(request); + originalSlots[slot++] = i; + } + } + return modifiedBulkRequest; + } + } + + ActionListener wrapActionListenerIfNeeded(long ingestTookInMillis, ActionListener actionListener) { + if (itemResponses.isEmpty()) { + return ActionListener.wrap( + response -> actionListener.onResponse( + new BulkResponse(response.getItems(), response.getTookInMillis(), ingestTookInMillis)), + actionListener::onFailure); + } else { + return new IngestBulkResponseListener(ingestTookInMillis, originalSlots, itemResponses, actionListener); + } + } + + void markCurrentItemAsFailed(Exception e) { + IndexRequest indexRequest = (IndexRequest) bulkRequest.requests().get(currentSlot); + // We hit a error during preprocessing a request, so we: + // 1) Remember the request item slot from the bulk, so that we're done processing all requests we know what failed + // 2) Add a bulk item failure for this request + // 3) Continue with the next request in the bulk. 
+ failedSlots.set(currentSlot); + BulkItemResponse.Failure failure = new BulkItemResponse.Failure(indexRequest.index(), indexRequest.type(), indexRequest.id(), e); + itemResponses.add(new BulkItemResponse(currentSlot, indexRequest.opType(), failure)); + } + + } + + static final class IngestBulkResponseListener implements ActionListener { + + private final long ingestTookInMillis; + private final int[] originalSlots; + private final List itemResponses; + private final ActionListener actionListener; + + IngestBulkResponseListener(long ingestTookInMillis, int[] originalSlots, List itemResponses, ActionListener actionListener) { + this.ingestTookInMillis = ingestTookInMillis; + this.itemResponses = itemResponses; + this.actionListener = actionListener; + this.originalSlots = originalSlots; + } + + @Override + public void onResponse(BulkResponse response) { + BulkItemResponse[] items = response.getItems(); + for (int i = 0; i < items.length; i++) { + itemResponses.add(originalSlots[i], response.getItems()[i]); + } + actionListener.onResponse(new BulkResponse(itemResponses.toArray(new BulkItemResponse[itemResponses.size()]), response.getTookInMillis(), ingestTookInMillis)); + } + + @Override + public void onFailure(Exception e) { + actionListener.onFailure(e); + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java b/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java index 754316f3de008..dfa18a2deb65d 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java @@ -21,17 +21,16 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; -import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ExceptionsHelper; -import org.elasticsearch.action.ActionRequest; +import org.elasticsearch.action.DocWriteRequest; +import org.elasticsearch.action.DocWriteResponse; import org.elasticsearch.action.delete.DeleteRequest; import org.elasticsearch.action.delete.DeleteResponse; -import org.elasticsearch.action.delete.TransportDeleteAction; import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.action.index.IndexResponse; -import org.elasticsearch.action.index.TransportIndexAction; import org.elasticsearch.action.support.ActionFilters; -import org.elasticsearch.action.support.replication.ReplicationRequest; +import org.elasticsearch.action.support.TransportActions; +import org.elasticsearch.action.support.replication.ReplicationOperation; import org.elasticsearch.action.support.replication.ReplicationResponse.ShardInfo; import org.elasticsearch.action.support.replication.TransportWriteAction; import org.elasticsearch.action.update.UpdateHelper; @@ -49,32 +48,26 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentType; -import org.elasticsearch.index.IndexService; import org.elasticsearch.index.VersionType; import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.engine.VersionConflictEngineException; +import org.elasticsearch.index.mapper.MapperParsingException; +import org.elasticsearch.index.get.GetResult; +import org.elasticsearch.index.mapper.Mapping; +import org.elasticsearch.index.mapper.SourceToParse; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.ShardId; import 
org.elasticsearch.index.translog.Translog; -import org.elasticsearch.index.translog.Translog.Location; import org.elasticsearch.indices.IndicesService; -import org.elasticsearch.rest.RestStatus; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportRequestOptions; import org.elasticsearch.transport.TransportService; +import java.io.IOException; import java.util.Map; -import static org.elasticsearch.action.support.replication.ReplicationOperation.ignoreReplicaException; -import static org.elasticsearch.action.support.replication.ReplicationOperation.isConflictException; - -/** - * Performs the index operation. - */ -public class TransportShardBulkAction extends TransportWriteAction { - - private static final String OP_TYPE_UPDATE = "update"; - private static final String OP_TYPE_DELETE = "delete"; +/** Performs shard-level bulk (index, delete or update) operations */ +public class TransportShardBulkAction extends TransportWriteAction { public static final String ACTION_NAME = BulkAction.NAME + "[s]"; @@ -88,7 +81,7 @@ public TransportShardBulkAction(Settings settings, TransportService transportSer MappingUpdatedAction mappingUpdatedAction, UpdateHelper updateHelper, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) { super(settings, ACTION_NAME, transportService, clusterService, indicesService, threadPool, shardStateAction, actionFilters, - indexNameExpressionResolver, BulkShardRequest::new, ThreadPool.Names.BULK); + indexNameExpressionResolver, BulkShardRequest::new, BulkShardRequest::new, ThreadPool.Names.BULK); this.updateHelper = updateHelper; this.allowIdGeneration = settings.getAsBoolean("action.allow_id_generation", true); this.mappingUpdatedAction = mappingUpdatedAction; @@ -110,17 +103,17 @@ protected boolean resolveIndex() { } @Override - protected WriteResult onPrimaryShard(BulkShardRequest request, IndexShard indexShard) throws Exception { - ShardId shardId = request.shardId(); - final IndexService indexService = indicesService.indexServiceSafe(shardId.getIndex()); - final IndexMetaData metaData = indexService.getIndexSettings().getIndexMetaData(); + public WritePrimaryResult shardOperationOnPrimary( + BulkShardRequest request, IndexShard primary) throws Exception { + final IndexMetaData metaData = primary.indexSettings().getIndexMetaData(); long[] preVersions = new long[request.items().length]; VersionType[] preVersionTypes = new VersionType[request.items().length]; Translog.Location location = null; for (int requestIndex = 0; requestIndex < request.items().length; requestIndex++) { - BulkItemRequest item = request.items()[requestIndex]; - location = handleItem(metaData, request, indexShard, preVersions, preVersionTypes, location, requestIndex, item); + if (isAborted(request.items()[requestIndex].getPrimaryResponse()) == false) { + location = executeBulkItemRequest(metaData, primary, request, preVersions, preVersionTypes, location, requestIndex); + } } BulkItemResponse[] responses = new BulkItemResponse[request.items().length]; @@ -129,369 +122,296 @@ protected WriteResult onPrimaryShard(BulkShardRequest request responses[i] = items[i].getPrimaryResponse(); } BulkShardResponse response = new BulkShardResponse(request.shardId(), responses); - return new WriteResult<>(response, location); - } - - private Translog.Location handleItem(IndexMetaData metaData, BulkShardRequest request, IndexShard indexShard, long[] preVersions, VersionType[] preVersionTypes, Translog.Location location, int requestIndex, 
BulkItemRequest item) { - if (item.request() instanceof IndexRequest) { - location = index(metaData, request, indexShard, preVersions, preVersionTypes, location, requestIndex, item); - } else if (item.request() instanceof DeleteRequest) { - location = delete(request, indexShard, preVersions, preVersionTypes, location, requestIndex, item); - } else if (item.request() instanceof UpdateRequest) { - Tuple tuple = update(metaData, request, indexShard, preVersions, preVersionTypes, location, requestIndex, item); - location = tuple.v1(); - item = tuple.v2(); - } else { - throw new IllegalStateException("Unexpected index operation: " + item.request()); - } - - assert item.getPrimaryResponse() != null; - assert preVersionTypes[requestIndex] != null; - return location; + return new WritePrimaryResult<>(request, response, location, null, primary, logger); } - private Translog.Location index(IndexMetaData metaData, BulkShardRequest request, IndexShard indexShard, long[] preVersions, VersionType[] preVersionTypes, Translog.Location location, int requestIndex, BulkItemRequest item) { - IndexRequest indexRequest = (IndexRequest) item.request(); - preVersions[requestIndex] = indexRequest.version(); - preVersionTypes[requestIndex] = indexRequest.versionType(); + /** Executes bulk item requests and handles request execution exceptions */ + private Translog.Location executeBulkItemRequest(IndexMetaData metaData, IndexShard primary, + BulkShardRequest request, + long[] preVersions, VersionType[] preVersionTypes, + Translog.Location location, int requestIndex) throws Exception { + final DocWriteRequest itemRequest = request.items()[requestIndex].request(); + preVersions[requestIndex] = itemRequest.version(); + preVersionTypes[requestIndex] = itemRequest.versionType(); + DocWriteRequest.OpType opType = itemRequest.opType(); try { - WriteResult result = shardIndexOperation(request, indexRequest, metaData, indexShard, true); - location = locationToSync(location, result.getLocation()); - // add the response - IndexResponse indexResponse = result.getResponse(); - setResponse(item, new BulkItemResponse(item.id(), indexRequest.opType().lowercase(), indexResponse)); - } catch (Exception e) { - // rethrow the failure if we are going to retry on primary and let parent failure to handle it - if (retryPrimaryException(e)) { - // restore updated versions... 
- for (int j = 0; j < requestIndex; j++) { - applyVersion(request.items()[j], preVersions[j], preVersionTypes[j]); - } - throw (ElasticsearchException) e; + // execute item request + final Engine.Result operationResult; + final DocWriteResponse response; + final BulkItemRequest replicaRequest; + switch (itemRequest.opType()) { + case CREATE: + case INDEX: + final IndexRequest indexRequest = (IndexRequest) itemRequest; + Engine.IndexResult indexResult = executeIndexRequestOnPrimary(indexRequest, primary, mappingUpdatedAction); + if (indexResult.hasFailure()) { + response = null; + } else { + // update the version on request so it will happen on the replicas + final long version = indexResult.getVersion(); + indexRequest.version(version); + indexRequest.versionType(indexRequest.versionType().versionTypeForReplicationAndRecovery()); + assert indexRequest.versionType().validateVersionForWrites(indexRequest.version()); + response = new IndexResponse(primary.shardId(), indexRequest.type(), indexRequest.id(), + indexResult.getVersion(), indexResult.isCreated()); + } + operationResult = indexResult; + replicaRequest = request.items()[requestIndex]; + break; + case UPDATE: + UpdateResultHolder updateResultHolder = executeUpdateRequest(((UpdateRequest) itemRequest), + primary, metaData, request, requestIndex); + operationResult = updateResultHolder.operationResult; + response = updateResultHolder.response; + replicaRequest = updateResultHolder.replicaRequest; + break; + case DELETE: + final DeleteRequest deleteRequest = (DeleteRequest) itemRequest; + Engine.DeleteResult deleteResult = executeDeleteRequestOnPrimary(deleteRequest, primary, mappingUpdatedAction); + if (deleteResult.hasFailure()) { + response = null; + } else { + // update the request with the version so it will go to the replicas + deleteRequest.versionType(deleteRequest.versionType().versionTypeForReplicationAndRecovery()); + deleteRequest.version(deleteResult.getVersion()); + assert deleteRequest.versionType().validateVersionForWrites(deleteRequest.version()); + response = new DeleteResponse(request.shardId(), deleteRequest.type(), deleteRequest.id(), + deleteResult.getVersion(), deleteResult.isFound()); + } + operationResult = deleteResult; + replicaRequest = request.items()[requestIndex]; + break; + default: + throw new IllegalStateException("unexpected opType [" + itemRequest.opType() + "] found"); } - logFailure(e, "index", request.shardId(), indexRequest); - // if its a conflict failure, and we already executed the request on a primary (and we execute it - // again, due to primary relocation and only processing up to N bulk items when the shard gets closed) - // then just use the response we got from the successful execution - if (item.getPrimaryResponse() != null && isConflictException(e)) { - setResponse(item, item.getPrimaryResponse()); + // update the bulk item request because update request execution can mutate the bulk item request + request.items()[requestIndex] = replicaRequest; + if (operationResult == null) { // in case of noop update operation + assert response.getResult() == DocWriteResponse.Result.NOOP + : "only noop update can have null operation"; + replicaRequest.setPrimaryResponse(new BulkItemResponse(replicaRequest.id(), opType, response)); + assert replicaRequest.isIgnoreOnReplica(); + } else if (operationResult.hasFailure() == false) { + location = locationToSync(location, operationResult.getTranslogLocation()); + BulkItemResponse primaryResponse = new BulkItemResponse(replicaRequest.id(), opType, response); 
+ replicaRequest.setPrimaryResponse(primaryResponse); + // set an empty ShardInfo to indicate no shards participated in the request execution + // so we can safely send it to the replicas. We won't use it in the real response though. + primaryResponse.getResponse().setShardInfo(new ShardInfo()); + assert replicaRequest.isIgnoreOnReplica() == false; } else { - setResponse(item, new BulkItemResponse(item.id(), indexRequest.opType().lowercase(), - new BulkItemResponse.Failure(request.index(), indexRequest.type(), indexRequest.id(), e))); + DocWriteRequest docWriteRequest = replicaRequest.request(); + Exception failure = operationResult.getFailure(); + if (isConflictException(failure)) { + logger.trace((Supplier) () -> new ParameterizedMessage("{} failed to execute bulk item ({}) {}", + request.shardId(), docWriteRequest.opType().getLowercase(), request), failure); + } else { + logger.debug((Supplier) () -> new ParameterizedMessage("{} failed to execute bulk item ({}) {}", + request.shardId(), docWriteRequest.opType().getLowercase(), request), failure); + } + // if its a conflict failure, and we already executed the request on a primary (and we execute it + // again, due to primary relocation and only processing up to N bulk items when the shard gets closed) + // then just use the response we got from the successful execution + if (replicaRequest.getPrimaryResponse() == null || isConflictException(failure) == false) { + replicaRequest.setPrimaryResponse(new BulkItemResponse(replicaRequest.id(), docWriteRequest.opType(), + new BulkItemResponse.Failure(request.index(), docWriteRequest.type(), docWriteRequest.id(), failure))); + assert replicaRequest.isIgnoreOnReplica(); + } } - } - return location; - } - - private > void logFailure(Throwable t, String operation, ShardId shardId, ReplicationRequest request) { - if (ExceptionsHelper.status(t) == RestStatus.CONFLICT) { - logger.trace((Supplier) () -> new ParameterizedMessage("{} failed to execute bulk item ({}) {}", shardId, operation, request), t); - } else { - logger.debug((Supplier) () -> new ParameterizedMessage("{} failed to execute bulk item ({}) {}", shardId, operation, request), t); - } - } - - private Translog.Location delete(BulkShardRequest request, IndexShard indexShard, long[] preVersions, VersionType[] preVersionTypes, Translog.Location location, int requestIndex, BulkItemRequest item) { - DeleteRequest deleteRequest = (DeleteRequest) item.request(); - preVersions[requestIndex] = deleteRequest.version(); - preVersionTypes[requestIndex] = deleteRequest.versionType(); - - try { - // add the response - final WriteResult writeResult = TransportDeleteAction.executeDeleteRequestOnPrimary(deleteRequest, indexShard); - DeleteResponse deleteResponse = writeResult.getResponse(); - location = locationToSync(location, writeResult.getLocation()); - setResponse(item, new BulkItemResponse(item.id(), OP_TYPE_DELETE, deleteResponse)); + assert replicaRequest.getPrimaryResponse() != null; + assert preVersionTypes[requestIndex] != null; } catch (Exception e) { // rethrow the failure if we are going to retry on primary and let parent failure to handle it if (retryPrimaryException(e)) { // restore updated versions... 
for (int j = 0; j < requestIndex; j++) { - applyVersion(request.items()[j], preVersions[j], preVersionTypes[j]); + DocWriteRequest docWriteRequest = request.items()[j].request(); + docWriteRequest.version(preVersions[j]); + docWriteRequest.versionType(preVersionTypes[j]); } - throw (ElasticsearchException) e; - } - logFailure(e, "delete", request.shardId(), deleteRequest); - // if its a conflict failure, and we already executed the request on a primary (and we execute it - // again, due to primary relocation and only processing up to N bulk items when the shard gets closed) - // then just use the response we got from the successful execution - if (item.getPrimaryResponse() != null && isConflictException(e)) { - setResponse(item, item.getPrimaryResponse()); - } else { - setResponse(item, new BulkItemResponse(item.id(), OP_TYPE_DELETE, - new BulkItemResponse.Failure(request.index(), deleteRequest.type(), deleteRequest.id(), e))); } + throw e; } return location; } - private Tuple update(IndexMetaData metaData, BulkShardRequest request, IndexShard indexShard, long[] preVersions, VersionType[] preVersionTypes, Translog.Location location, int requestIndex, BulkItemRequest item) { - UpdateRequest updateRequest = (UpdateRequest) item.request(); - preVersions[requestIndex] = updateRequest.version(); - preVersionTypes[requestIndex] = updateRequest.versionType(); - // We need to do the requested retries plus the initial attempt. We don't do < 1+retry_on_conflict because retry_on_conflict may be Integer.MAX_VALUE - for (int updateAttemptsCount = 0; updateAttemptsCount <= updateRequest.retryOnConflict(); updateAttemptsCount++) { - UpdateResult updateResult; - try { - updateResult = shardUpdateOperation(metaData, request, updateRequest, indexShard); - } catch (Exception t) { - updateResult = new UpdateResult(null, null, false, t, null); - } - if (updateResult.success()) { - if (updateResult.writeResult != null) { - location = locationToSync(location, updateResult.writeResult.getLocation()); - } - switch (updateResult.result.getResponseResult()) { - case CREATED: - case UPDATED: - @SuppressWarnings("unchecked") - WriteResult result = updateResult.writeResult; - IndexRequest indexRequest = updateResult.request(); - BytesReference indexSourceAsBytes = indexRequest.source(); - // add the response - IndexResponse indexResponse = result.getResponse(); - UpdateResponse updateResponse = new UpdateResponse(indexResponse.getShardInfo(), indexResponse.getShardId(), indexResponse.getType(), indexResponse.getId(), indexResponse.getVersion(), indexResponse.getResult()); - if (updateRequest.fields() != null && updateRequest.fields().length > 0) { - Tuple> sourceAndContent = XContentHelper.convertToMap(indexSourceAsBytes, true); - updateResponse.setGetResult(updateHelper.extractGetResult(updateRequest, request.index(), indexResponse.getVersion(), sourceAndContent.v2(), sourceAndContent.v1(), indexSourceAsBytes)); - } - item = request.items()[requestIndex] = new BulkItemRequest(request.items()[requestIndex].id(), indexRequest); - setResponse(item, new BulkItemResponse(item.id(), OP_TYPE_UPDATE, updateResponse)); - break; - case DELETED: - @SuppressWarnings("unchecked") - WriteResult writeResult = updateResult.writeResult; - DeleteResponse response = writeResult.getResponse(); - DeleteRequest deleteRequest = updateResult.request(); - updateResponse = new UpdateResponse(response.getShardInfo(), response.getShardId(), response.getType(), response.getId(), response.getVersion(), response.getResult()); - 
updateResponse.setGetResult(updateHelper.extractGetResult(updateRequest, request.index(), response.getVersion(), updateResult.result.updatedSourceAsMap(), updateResult.result.updateSourceContentType(), null)); - // Replace the update request to the translated delete request to execute on the replica. - item = request.items()[requestIndex] = new BulkItemRequest(request.items()[requestIndex].id(), deleteRequest); - setResponse(item, new BulkItemResponse(item.id(), OP_TYPE_UPDATE, updateResponse)); - break; - case NOOP: - setResponse(item, new BulkItemResponse(item.id(), OP_TYPE_UPDATE, updateResult.noopResult)); - item.setIgnoreOnReplica(); // no need to go to the replica - break; - default: - throw new IllegalStateException("Illegal operation " + updateResult.result.getResponseResult()); - } - // NOTE: Breaking out of the retry_on_conflict loop! - break; - } else if (updateResult.failure()) { - Throwable e = updateResult.error; - if (updateResult.retry) { - // updateAttemptCount is 0 based and marks current attempt, if it's equal to retryOnConflict we are going out of the iteration - if (updateAttemptsCount >= updateRequest.retryOnConflict()) { - setResponse(item, new BulkItemResponse(item.id(), OP_TYPE_UPDATE, - new BulkItemResponse.Failure(request.index(), updateRequest.type(), updateRequest.id(), e))); - } - } else { - // rethrow the failure if we are going to retry on primary and let parent failure to handle it - if (retryPrimaryException(e)) { - // restore updated versions... - for (int j = 0; j < requestIndex; j++) { - applyVersion(request.items()[j], preVersions[j], preVersionTypes[j]); - } - throw (ElasticsearchException) e; - } - // if its a conflict failure, and we already executed the request on a primary (and we execute it - // again, due to primary relocation and only processing up to N bulk items when the shard gets closed) - // then just use the response we got from the successful execution - if (item.getPrimaryResponse() != null && isConflictException(e)) { - setResponse(item, item.getPrimaryResponse()); - } else if (updateResult.result == null) { - setResponse(item, new BulkItemResponse(item.id(), OP_TYPE_UPDATE, new BulkItemResponse.Failure(request.index(), updateRequest.type(), updateRequest.id(), e))); - } else { - switch (updateResult.result.getResponseResult()) { - case CREATED: - case UPDATED: - IndexRequest indexRequest = updateResult.request(); - logFailure(e, "index", request.shardId(), indexRequest); - setResponse(item, new BulkItemResponse(item.id(), OP_TYPE_UPDATE, - new BulkItemResponse.Failure(request.index(), indexRequest.type(), indexRequest.id(), e))); - break; - case DELETED: - DeleteRequest deleteRequest = updateResult.request(); - logFailure(e, "delete", request.shardId(), deleteRequest); - setResponse(item, new BulkItemResponse(item.id(), OP_TYPE_DELETE, - new BulkItemResponse.Failure(request.index(), deleteRequest.type(), deleteRequest.id(), e))); - break; - default: - throw new IllegalStateException("Illegal operation " + updateResult.result.getResponseResult()); - } - } - // NOTE: Breaking out of the retry_on_conflict loop! 
- break; - } - - } - } - return Tuple.tuple(location, item); + private static boolean isAborted(BulkItemResponse response) { + return response != null && response.isFailed() && response.getFailure().isAborted(); } - private void setResponse(BulkItemRequest request, BulkItemResponse response) { - request.setPrimaryResponse(response); - if (response.isFailed()) { - request.setIgnoreOnReplica(); - } else { - // Set the ShardInfo to 0 so we can safely send it to the replicas. We won't use it in the real response though. - response.getResponse().setShardInfo(new ShardInfo()); - } + private static boolean isConflictException(final Exception e) { + return ExceptionsHelper.unwrapCause(e) instanceof VersionConflictEngineException; } - private WriteResult shardIndexOperation(BulkShardRequest request, IndexRequest indexRequest, IndexMetaData metaData, - IndexShard indexShard, boolean processed) throws Exception { + private static class UpdateResultHolder { + final BulkItemRequest replicaRequest; + final Engine.Result operationResult; + final DocWriteResponse response; - MappingMetaData mappingMd = metaData.mappingOrDefault(indexRequest.type()); - if (!processed) { - indexRequest.process(mappingMd, allowIdGeneration, request.index()); + private UpdateResultHolder(BulkItemRequest replicaRequest, Engine.Result operationResult, + DocWriteResponse response) { + this.replicaRequest = replicaRequest; + this.operationResult = operationResult; + this.response = response; } - return TransportIndexAction.executeIndexRequestOnPrimary(indexRequest, indexShard, mappingUpdatedAction); } - static class UpdateResult { - - final UpdateHelper.Result result; - final ActionRequest actionRequest; - final boolean retry; - final Throwable error; - final WriteResult writeResult; - final UpdateResponse noopResult; - - UpdateResult(UpdateHelper.Result result, ActionRequest actionRequest, boolean retry, Throwable error, WriteResult writeResult) { - this.result = result; - this.actionRequest = actionRequest; - this.retry = retry; - this.error = error; - this.writeResult = writeResult; - this.noopResult = null; - } - - UpdateResult(UpdateHelper.Result result, ActionRequest actionRequest, WriteResult writeResult) { - this.result = result; - this.actionRequest = actionRequest; - this.writeResult = writeResult; - this.retry = false; - this.error = null; - this.noopResult = null; - } - - public UpdateResult(UpdateHelper.Result result, UpdateResponse updateResponse) { - this.result = result; - this.noopResult = updateResponse; - this.actionRequest = null; - this.writeResult = null; - this.retry = false; - this.error = null; - } - - - boolean failure() { - return error != null; - } - - boolean success() { - return noopResult != null || writeResult != null; - } - - @SuppressWarnings("unchecked") - T request() { - return (T) actionRequest; - } - - - } - - private UpdateResult shardUpdateOperation(IndexMetaData metaData, BulkShardRequest bulkShardRequest, UpdateRequest updateRequest, IndexShard indexShard) { - UpdateHelper.Result translate = updateHelper.prepare(updateRequest, indexShard); - switch (translate.getResponseResult()) { - case CREATED: - case UPDATED: - IndexRequest indexRequest = translate.action(); - try { - WriteResult result = shardIndexOperation(bulkShardRequest, indexRequest, metaData, indexShard, false); - return new UpdateResult(translate, indexRequest, result); - } catch (Exception e) { - final Throwable cause = ExceptionsHelper.unwrapCause(e); - boolean retry = false; - if (cause instanceof 
VersionConflictEngineException) { - retry = true; + /** + * Executes update request, delegating to a index or delete operation after translation, + * handles retries on version conflict and constructs update response + * NOTE: reassigns bulk item request at requestIndex for replicas to + * execute translated update request (NOOP update is an exception). NOOP updates are + * indicated by returning a null operation in {@link UpdateResultHolder} + * */ + private UpdateResultHolder executeUpdateRequest(UpdateRequest updateRequest, IndexShard primary, + IndexMetaData metaData, BulkShardRequest request, + int requestIndex) throws Exception { + Engine.Result updateOperationResult = null; + UpdateResponse updateResponse = null; + BulkItemRequest replicaRequest = request.items()[requestIndex]; + int maxAttempts = updateRequest.retryOnConflict(); + for (int attemptCount = 0; attemptCount <= maxAttempts; attemptCount++) { + final UpdateHelper.Result translate; + // translate update request + try { + translate = updateHelper.prepare(updateRequest, primary, threadPool::absoluteTimeInMillis); + } catch (Exception failure) { + // we may fail translating a update to index or delete operation + // we use index result to communicate failure while translating update request + updateOperationResult = new Engine.IndexResult(failure, updateRequest.version()); + break; // out of retry loop + } + // execute translated update request + switch (translate.getResponseResult()) { + case CREATED: + case UPDATED: + IndexRequest indexRequest = translate.action(); + MappingMetaData mappingMd = metaData.mappingOrDefault(indexRequest.type()); + indexRequest.process(mappingMd, allowIdGeneration, request.index()); + updateOperationResult = executeIndexRequestOnPrimary(indexRequest, primary, mappingUpdatedAction); + if (updateOperationResult.hasFailure() == false) { + // update the version on request so it will happen on the replicas + final long version = updateOperationResult.getVersion(); + indexRequest.version(version); + indexRequest.versionType(indexRequest.versionType().versionTypeForReplicationAndRecovery()); + assert indexRequest.versionType().validateVersionForWrites(indexRequest.version()); } - return new UpdateResult(translate, indexRequest, retry, cause, null); - } - case DELETED: - DeleteRequest deleteRequest = translate.action(); - try { - WriteResult result = TransportDeleteAction.executeDeleteRequestOnPrimary(deleteRequest, indexShard); - return new UpdateResult(translate, deleteRequest, result); - } catch (Exception e) { - final Throwable cause = ExceptionsHelper.unwrapCause(e); - boolean retry = false; - if (cause instanceof VersionConflictEngineException) { - retry = true; + break; + case DELETED: + DeleteRequest deleteRequest = translate.action(); + updateOperationResult = executeDeleteRequestOnPrimary(deleteRequest, primary, mappingUpdatedAction); + if (updateOperationResult.hasFailure() == false) { + // update the request with the version so it will go to the replicas + deleteRequest.versionType(deleteRequest.versionType().versionTypeForReplicationAndRecovery()); + deleteRequest.version(updateOperationResult.getVersion()); + assert deleteRequest.versionType().validateVersionForWrites(deleteRequest.version()); } - return new UpdateResult(translate, deleteRequest, retry, cause, null); + break; + case NOOP: + primary.noopUpdate(updateRequest.type()); + break; + default: throw new IllegalStateException("Illegal update operation " + translate.getResponseResult()); + } + if (updateOperationResult == null) { + 
// this is a noop operation + updateResponse = translate.action(); + break; // out of retry loop + } else if (updateOperationResult.hasFailure() == false) { + // enrich update response and + // set translated update (index/delete) request for replica execution in bulk items + switch (updateOperationResult.getOperationType()) { + case INDEX: + IndexRequest updateIndexRequest = translate.action(); + final IndexResponse indexResponse = new IndexResponse(primary.shardId(), + updateIndexRequest.type(), updateIndexRequest.id(), + updateOperationResult.getVersion(), ((Engine.IndexResult) updateOperationResult).isCreated()); + BytesReference indexSourceAsBytes = updateIndexRequest.source(); + updateResponse = new UpdateResponse(indexResponse.getShardInfo(), + indexResponse.getShardId(), indexResponse.getType(), indexResponse.getId(), + indexResponse.getVersion(), indexResponse.getResult()); + if ((updateRequest.fetchSource() != null && updateRequest.fetchSource().fetchSource()) || + (updateRequest.fields() != null && updateRequest.fields().length > 0)) { + Tuple> sourceAndContent = + XContentHelper.convertToMap(indexSourceAsBytes, true, updateIndexRequest.getContentType()); + updateResponse.setGetResult(updateHelper.extractGetResult(updateRequest, request.index(), + indexResponse.getVersion(), sourceAndContent.v2(), sourceAndContent.v1(), indexSourceAsBytes)); + } + // set translated request as replica request + replicaRequest = new BulkItemRequest(request.items()[requestIndex].id(), updateIndexRequest); + break; + case DELETE: + DeleteRequest updateDeleteRequest = translate.action(); + DeleteResponse deleteResponse = new DeleteResponse(primary.shardId(), + updateDeleteRequest.type(), updateDeleteRequest.id(), + updateOperationResult.getVersion(), ((Engine.DeleteResult) updateOperationResult).isFound()); + updateResponse = new UpdateResponse(deleteResponse.getShardInfo(), + deleteResponse.getShardId(), deleteResponse.getType(), deleteResponse.getId(), + deleteResponse.getVersion(), deleteResponse.getResult()); + updateResponse.setGetResult(updateHelper.extractGetResult(updateRequest, + request.index(), deleteResponse.getVersion(), translate.updatedSourceAsMap(), + translate.updateSourceContentType(), null)); + // set translated request as replica request + replicaRequest = new BulkItemRequest(request.items()[requestIndex].id(), updateDeleteRequest); + break; + default: throw new IllegalStateException("Illegal update operation " + + updateOperationResult.getOperationType().getLowercase()); } - case NOOP: - UpdateResponse updateResponse = translate.action(); - indexShard.noopUpdate(updateRequest.type()); - return new UpdateResult(translate, updateResponse); - default: - throw new IllegalStateException("Illegal update operation " + translate.getResponseResult()); + // successful operation + break; // out of retry loop + } else if (updateOperationResult.getFailure() instanceof VersionConflictEngineException == false) { + // not a version conflict exception + break; // out of retry loop + } } + return new UpdateResultHolder(replicaRequest, updateOperationResult, updateResponse); } @Override - protected Location onReplicaShard(BulkShardRequest request, IndexShard indexShard) { + public WriteReplicaResult shardOperationOnReplica(BulkShardRequest request, IndexShard replica) throws Exception { Translog.Location location = null; for (int i = 0; i < request.items().length; i++) { BulkItemRequest item = request.items()[i]; - if (item == null || item.isIgnoreOnReplica()) { - continue; - } - if (item.request() 
instanceof IndexRequest) { - IndexRequest indexRequest = (IndexRequest) item.request(); + if (item.isIgnoreOnReplica() == false) { + DocWriteRequest docWriteRequest = item.request(); + // ensure request version is updated for replica operation during request execution in the primary + assert docWriteRequest.versionType() == docWriteRequest.versionType().versionTypeForReplicationAndRecovery() + : "unexpected version in replica " + docWriteRequest.version(); + final Engine.Result operationResult; try { - Engine.Index operation = TransportIndexAction.executeIndexRequestOnReplica(indexRequest, indexShard); - location = locationToSync(location, operation.getTranslogLocation()); - } catch (Exception e) { - // if its not an ignore replica failure, we need to make sure to bubble up the failure - // so we will fail the shard - if (!ignoreReplicaException(e)) { - throw e; + switch (docWriteRequest.opType()) { + case CREATE: + case INDEX: + operationResult = executeIndexRequestOnReplica(((IndexRequest) docWriteRequest), replica); + break; + case DELETE: + operationResult = executeDeleteRequestOnReplica(((DeleteRequest) docWriteRequest), replica); + break; + default: + throw new IllegalStateException("Unexpected request operation type on replica: " + + docWriteRequest.opType().getLowercase()); + } + if (operationResult.hasFailure()) { + // check if any transient write operation failures should be bubbled up + Exception failure = operationResult.getFailure(); + assert failure instanceof VersionConflictEngineException + || failure instanceof MapperParsingException + : "expected version conflict or mapper parsing failures"; + if (!TransportActions.isShardNotAvailableException(failure)) { + throw failure; + } + } else if (operationResult.getTranslogLocation() != null) { // out of order ops are not added to the translog + location = locationToSync(location, operationResult.getTranslogLocation()); } - } - } else if (item.request() instanceof DeleteRequest) { - DeleteRequest deleteRequest = (DeleteRequest) item.request(); - try { - Engine.Delete delete = TransportDeleteAction.executeDeleteRequestOnReplica(deleteRequest, indexShard); - indexShard.delete(delete); - location = locationToSync(location, delete.getTranslogLocation()); } catch (Exception e) { // if its not an ignore replica failure, we need to make sure to bubble up the failure // so we will fail the shard - if (!ignoreReplicaException(e)) { + if (!TransportActions.isShardNotAvailableException(e)) { throw e; } } - } else { - throw new IllegalStateException("Unexpected index operation: " + item.request()); } } - return location; - } - - private void applyVersion(BulkItemRequest item, long version, VersionType versionType) { - if (item.request() instanceof IndexRequest) { - ((IndexRequest) item.request()).version(version).versionType(versionType); - } else if (item.request() instanceof DeleteRequest) { - ((DeleteRequest) item.request()).version(version).versionType(); - } else if (item.request() instanceof UpdateRequest) { - ((UpdateRequest) item.request()).version(version).versionType(); - } else { - // log? 
- } + return new WriteReplicaResult<>(request, location, null, replica, logger); } private Translog.Location locationToSync(Translog.Location current, Translog.Location next) { @@ -504,4 +424,124 @@ private Translog.Location locationToSync(Translog.Location current, Translog.Loc assert current == null || current.compareTo(next) < 0 : "translog locations are not increasing"; return next; } + + /** + * Execute the given {@link IndexRequest} on a replica shard, throwing a + * {@link RetryOnReplicaException} if the operation needs to be re-tried. + */ + public static Engine.IndexResult executeIndexRequestOnReplica(IndexRequest request, IndexShard replica) throws IOException { + final ShardId shardId = replica.shardId(); + SourceToParse sourceToParse = + SourceToParse.source(SourceToParse.Origin.REPLICA, shardId.getIndexName(), request.type(), request.id(), request.source(), + request.getContentType()).routing(request.routing()).parent(request.parent()) + .timestamp(request.timestamp()).ttl(request.ttl()); + + final Engine.Index operation; + try { + operation = replica.prepareIndexOnReplica(sourceToParse, request.version(), request.versionType(), request.getAutoGeneratedTimestamp(), request.isRetry()); + } catch (MapperParsingException e) { + return new Engine.IndexResult(e, request.version()); + } + Mapping update = operation.parsedDoc().dynamicMappingsUpdate(); + if (update != null) { + throw new RetryOnReplicaException(shardId, "Mappings are not available on the replica yet, triggered update: " + update); + } + return replica.index(operation); + } + + /** Utility method to prepare an index operation on primary shards */ + static Engine.Index prepareIndexOperationOnPrimary(IndexRequest request, IndexShard primary) { + SourceToParse sourceToParse = + SourceToParse.source(SourceToParse.Origin.PRIMARY, request.index(), request.type(), request.id(), request.source(), + request.getContentType()).routing(request.routing()).parent(request.parent()) + .timestamp(request.timestamp()).ttl(request.ttl()); + return primary.prepareIndexOnPrimary(sourceToParse, request.version(), request.versionType(), request.getAutoGeneratedTimestamp(), request.isRetry()); + } + + /** Executes index operation on primary shard after updates mapping if dynamic mappings are found */ + public static Engine.IndexResult executeIndexRequestOnPrimary(IndexRequest request, IndexShard primary, + MappingUpdatedAction mappingUpdatedAction) throws Exception { + Engine.Index operation; + try { + operation = prepareIndexOperationOnPrimary(request, primary); + } catch (MapperParsingException | IllegalArgumentException e) { + return new Engine.IndexResult(e, request.version()); + } + Mapping update = operation.parsedDoc().dynamicMappingsUpdate(); + final ShardId shardId = primary.shardId(); + if (update != null) { + // can throw timeout exception when updating mappings or ISE for attempting to update default mappings + // which are bubbled up + try { + mappingUpdatedAction.updateMappingOnMaster(shardId.getIndex(), request.type(), update); + } catch (IllegalArgumentException e) { + // throws IAE on conflicts merging dynamic mappings + return new Engine.IndexResult(e, request.version()); + } + try { + operation = prepareIndexOperationOnPrimary(request, primary); + } catch (MapperParsingException | IllegalArgumentException e) { + return new Engine.IndexResult(e, request.version()); + } + update = operation.parsedDoc().dynamicMappingsUpdate(); + if (update != null) { + throw new ReplicationOperation.RetryOnPrimaryException(shardId, + 
"Dynamic mappings are not available on the node that holds the primary yet"); + } + } + return primary.index(operation); + } + + public static Engine.DeleteResult executeDeleteRequestOnPrimary(DeleteRequest request, IndexShard primary, + MappingUpdatedAction mappingUpdatedAction) throws Exception { + boolean mappingUpdateNeeded = false; + final ShardId shardId = primary.shardId(); + if (primary.indexSettings().isSingleType()) { + // When there is a single type, the unique identifier is only composed of the _id, + // so there is no way to differenciate foo#1 from bar#1. This is especially an issue + // if a user first deletes foo#1 and then indexes bar#1: since we do not encode the + // _type in the uid it might look like we are reindexing the same document, which + // would fail if bar#1 is indexed with a lower version than foo#1 was deleted with. + // In order to work around this issue, we make deletions create types. This way, we + // fail if index and delete operations do not use the same type. + try { + Mapping update = primary.mapperService().documentMapperWithAutoCreate(request.type()).getMapping(); + if (update != null) { + mappingUpdateNeeded = true; + mappingUpdatedAction.updateMappingOnMaster(shardId.getIndex(), request.type(), update); + } + } catch (MapperParsingException | IllegalArgumentException e) { + return new Engine.DeleteResult(e, request.version(), false); + } + } + if (mappingUpdateNeeded) { + Mapping update = primary.mapperService().documentMapperWithAutoCreate(request.type()).getMapping(); + if (update != null) { + throw new ReplicationOperation.RetryOnPrimaryException(shardId, + "Dynamic mappings are not available on the node that holds the primary yet"); + } + } + final Engine.Delete delete = primary.prepareDeleteOnPrimary(request.type(), request.id(), request.version(), request.versionType()); + return primary.delete(delete); + } + + public static Engine.DeleteResult executeDeleteRequestOnReplica(DeleteRequest request, IndexShard replica) throws IOException { + if (replica.indexSettings().isSingleType()) { + // We need to wait for the replica to have the mappings + Mapping update; + try { + update = replica.mapperService().documentMapperWithAutoCreate(request.type()).getMapping(); + } catch (MapperParsingException | IllegalArgumentException e) { + return new Engine.DeleteResult(e, request.version(), false); + } + if (update != null) { + final ShardId shardId = replica.shardId(); + throw new RetryOnReplicaException(shardId, + "Mappings are not available on the replica yet, triggered update: " + update); + } + } + final Engine.Delete delete = replica.prepareDeleteOnReplica(request.type(), request.id(), + request.version(), request.versionType()); + return replica.delete(delete); + } } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/TransportSingleItemBulkWriteAction.java b/core/src/main/java/org/elasticsearch/action/bulk/TransportSingleItemBulkWriteAction.java new file mode 100644 index 0000000000000..fd71f504ea9b1 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/bulk/TransportSingleItemBulkWriteAction.java @@ -0,0 +1,132 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.bulk; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.DocWriteRequest; +import org.elasticsearch.action.DocWriteResponse; +import org.elasticsearch.action.support.ActionFilters; +import org.elasticsearch.action.support.WriteRequest; +import org.elasticsearch.action.support.WriteResponse; +import org.elasticsearch.action.support.replication.ReplicatedWriteRequest; +import org.elasticsearch.action.support.replication.ReplicationResponse; +import org.elasticsearch.action.support.replication.TransportWriteAction; +import org.elasticsearch.cluster.action.shard.ShardStateAction; +import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.shard.IndexShard; +import org.elasticsearch.indices.IndicesService; +import org.elasticsearch.tasks.Task; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.TransportService; + +import java.util.function.Supplier; + +/** use transport bulk action directly */ +@Deprecated +public abstract class TransportSingleItemBulkWriteAction< + Request extends ReplicatedWriteRequest, + Response extends ReplicationResponse & WriteResponse + > extends TransportWriteAction { + + private final TransportBulkAction bulkAction; + private final TransportShardBulkAction shardBulkAction; + + + protected TransportSingleItemBulkWriteAction(Settings settings, String actionName, TransportService transportService, + ClusterService clusterService, IndicesService indicesService, ThreadPool threadPool, + ShardStateAction shardStateAction, ActionFilters actionFilters, + IndexNameExpressionResolver indexNameExpressionResolver, Supplier request, + Supplier replicaRequest, String executor, + TransportBulkAction bulkAction, TransportShardBulkAction shardBulkAction) { + super(settings, actionName, transportService, clusterService, indicesService, threadPool, shardStateAction, actionFilters, + indexNameExpressionResolver, request, replicaRequest, executor); + this.bulkAction = bulkAction; + this.shardBulkAction = shardBulkAction; + } + + + @Override + protected void doExecute(Task task, final Request request, final ActionListener listener) { + bulkAction.execute(task, toSingleItemBulkRequest(request), wrapBulkResponse(listener)); + } + + @Override + protected WritePrimaryResult shardOperationOnPrimary( + Request request, final IndexShard primary) throws Exception { + BulkItemRequest[] itemRequests = new BulkItemRequest[1]; + WriteRequest.RefreshPolicy refreshPolicy = request.getRefreshPolicy(); + request.setRefreshPolicy(WriteRequest.RefreshPolicy.NONE); + itemRequests[0] = new BulkItemRequest(0, ((DocWriteRequest) request)); + BulkShardRequest bulkShardRequest = new BulkShardRequest(request.shardId(), refreshPolicy, itemRequests); + WritePrimaryResult bulkResult = + shardBulkAction.shardOperationOnPrimary(bulkShardRequest, primary); + assert bulkResult.finalResponseIfSuccessful.getResponses().length == 1 : "expected 
only one bulk shard response"; + BulkItemResponse itemResponse = bulkResult.finalResponseIfSuccessful.getResponses()[0]; + final Response response; + final Exception failure; + if (itemResponse.isFailed()) { + failure = itemResponse.getFailure().getCause(); + response = null; + } else { + response = (Response) itemResponse.getResponse(); + failure = null; + } + return new WritePrimaryResult<>(request, response, bulkResult.location, failure, primary, logger); + } + + @Override + protected WriteReplicaResult<Request> shardOperationOnReplica( + Request replicaRequest, IndexShard replica) throws Exception { + BulkItemRequest[] itemRequests = new BulkItemRequest[1]; + WriteRequest.RefreshPolicy refreshPolicy = replicaRequest.getRefreshPolicy(); + itemRequests[0] = new BulkItemRequest(0, ((DocWriteRequest) replicaRequest)); + BulkShardRequest bulkShardRequest = new BulkShardRequest(replicaRequest.shardId(), refreshPolicy, itemRequests); + WriteReplicaResult<BulkShardRequest> result = shardBulkAction.shardOperationOnReplica(bulkShardRequest, replica); + // a replica operation can never throw a document-level failure, + // as the same document has been already indexed successfully in the primary + return new WriteReplicaResult<>(replicaRequest, result.location, null, replica, logger); + } + + + private ActionListener<BulkResponse> wrapBulkResponse(ActionListener<Response> listener) { + return ActionListener.wrap(bulkItemResponses -> { + assert bulkItemResponses.getItems().length == 1 : "expected only one item in bulk request"; + BulkItemResponse bulkItemResponse = bulkItemResponses.getItems()[0]; + if (bulkItemResponse.isFailed() == false) { + final DocWriteResponse response = bulkItemResponse.getResponse(); + listener.onResponse((Response) response); + } else { + listener.onFailure(bulkItemResponse.getFailure().getCause()); + } + }, listener::onFailure); + } + + public static BulkRequest toSingleItemBulkRequest(ReplicatedWriteRequest request) { + BulkRequest bulkRequest = new BulkRequest(); + bulkRequest.add(((DocWriteRequest) request)); + bulkRequest.setRefreshPolicy(request.getRefreshPolicy()); + bulkRequest.timeout(request.timeout()); + bulkRequest.waitForActiveShards(request.waitForActiveShards()); + request.setRefreshPolicy(WriteRequest.RefreshPolicy.NONE); + return bulkRequest; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/delete/DeleteRequest.java b/core/src/main/java/org/elasticsearch/action/delete/DeleteRequest.java index bdf09e3e532fd..5d44fc87e43ec 100644 --- a/core/src/main/java/org/elasticsearch/action/delete/DeleteRequest.java +++ b/core/src/main/java/org/elasticsearch/action/delete/DeleteRequest.java @@ -20,11 +20,14 @@ package org.elasticsearch.action.delete; import org.elasticsearch.action.ActionRequestValidationException; -import org.elasticsearch.action.DocumentRequest; +import org.elasticsearch.action.CompositeIndicesRequest; +import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.support.replication.ReplicatedWriteRequest; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.lucene.uid.Versions; import org.elasticsearch.index.VersionType; @@ -43,7 +46,7 @@ * @see org.elasticsearch.client.Client#delete(DeleteRequest) * @see org.elasticsearch.client.Requests#deleteRequest(String) */ -public class DeleteRequest extends ReplicatedWriteRequest 
implements DocumentRequest { +public class DeleteRequest extends ReplicatedWriteRequest implements DocWriteRequest, CompositeIndicesRequest { private String type; private String id; @@ -53,6 +56,7 @@ public class DeleteRequest extends ReplicatedWriteRequest impleme private String parent; private long version = Versions.MATCH_ANY; private VersionType versionType = VersionType.INTERNAL; + private static DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(DeleteRequest.class)); public DeleteRequest() { } @@ -90,6 +94,9 @@ public ActionRequestValidationException validate() { if (!versionType.validateVersionForWrites(version)) { validationException = addValidationError("illegal version value [" + version + "] for version type [" + versionType.name() + "]", validationException); } + if (versionType == VersionType.FORCE) { + deprecationLogger.deprecated("version type FORCE is deprecated and will be removed in the next major version"); + } return validationException; } @@ -164,28 +171,33 @@ public String routing() { return this.routing; } - /** - * Sets the version, which will cause the delete operation to only be performed if a matching - * version exists and no changes happened on the doc since then. - */ + @Override public DeleteRequest version(long version) { this.version = version; return this; } + @Override public long version() { return this.version; } + @Override public DeleteRequest versionType(VersionType versionType) { this.versionType = versionType; return this; } + @Override public VersionType versionType() { return this.versionType; } + @Override + public OpType opType() { + return OpType.DELETE; + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); diff --git a/core/src/main/java/org/elasticsearch/action/delete/DeleteResponse.java b/core/src/main/java/org/elasticsearch/action/delete/DeleteResponse.java index 4742c573e686c..bf43978582ad4 100644 --- a/core/src/main/java/org/elasticsearch/action/delete/DeleteResponse.java +++ b/core/src/main/java/org/elasticsearch/action/delete/DeleteResponse.java @@ -21,11 +21,14 @@ import org.elasticsearch.action.DocWriteResponse; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; import java.io.IOException; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + /** * The response of the delete action. * @@ -34,8 +37,9 @@ */ public class DeleteResponse extends DocWriteResponse { - public DeleteResponse() { + private static final String FOUND = "found"; + public DeleteResponse() { } public DeleteResponse(ShardId shardId, String type, String id, long version, boolean found) { @@ -47,13 +51,6 @@ public RestStatus status() { return result == Result.DELETED ? 
super.status() : RestStatus.NOT_FOUND; } - @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.field("found", result == Result.DELETED); - super.toXContent(builder, params); - return builder; - } - @Override public String toString() { StringBuilder builder = new StringBuilder(); @@ -66,4 +63,61 @@ public String toString() { builder.append(",shards=").append(getShardInfo()); return builder.append("]").toString(); } + + @Override + public XContentBuilder innerToXContent(XContentBuilder builder, Params params) throws IOException { + builder.field(FOUND, result == Result.DELETED); + super.innerToXContent(builder, params); + return builder; + } + + public static DeleteResponse fromXContent(XContentParser parser) throws IOException { + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.nextToken(), parser::getTokenLocation); + + Builder context = new Builder(); + while (parser.nextToken() != XContentParser.Token.END_OBJECT) { + parseXContentFields(parser, context); + } + return context.build(); + } + + /** + * Parse the current token and update the parsing context appropriately. + */ + public static void parseXContentFields(XContentParser parser, Builder context) throws IOException { + XContentParser.Token token = parser.currentToken(); + String currentFieldName = parser.currentName(); + + if (FOUND.equals(currentFieldName)) { + if (token.isValue()) { + context.setFound(parser.booleanValue()); + } + } else { + DocWriteResponse.parseInnerToXContent(parser, context); + } + } + + /** + * Builder class for {@link DeleteResponse}. This builder is usually used during xcontent parsing to + * temporarily store the parsed values, then the {@link DocWriteResponse.Builder#build()} method is called to + * instantiate the {@link DeleteResponse}. 
+ */ + public static class Builder extends DocWriteResponse.Builder { + + private boolean found = false; + + public void setFound(boolean found) { + this.found = found; + } + + @Override + public DeleteResponse build() { + DeleteResponse deleteResponse = new DeleteResponse(shardId, type, id, version, found); + deleteResponse.setForcedRefresh(forcedRefresh); + if (shardInfo != null) { + deleteResponse.setShardInfo(shardInfo); + } + return deleteResponse; + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/delete/TransportDeleteAction.java b/core/src/main/java/org/elasticsearch/action/delete/TransportDeleteAction.java index 6f3d27ea36908..3aaf4a472facf 100644 --- a/core/src/main/java/org/elasticsearch/action/delete/TransportDeleteAction.java +++ b/core/src/main/java/org/elasticsearch/action/delete/TransportDeleteAction.java @@ -19,130 +19,39 @@ package org.elasticsearch.action.delete; -import org.elasticsearch.ExceptionsHelper; -import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.RoutingMissingException; -import org.elasticsearch.action.admin.indices.create.CreateIndexRequest; -import org.elasticsearch.action.admin.indices.create.CreateIndexResponse; -import org.elasticsearch.action.admin.indices.create.TransportCreateIndexAction; +import org.elasticsearch.action.bulk.TransportBulkAction; +import org.elasticsearch.action.bulk.TransportShardBulkAction; +import org.elasticsearch.action.bulk.TransportSingleItemBulkWriteAction; import org.elasticsearch.action.support.ActionFilters; -import org.elasticsearch.action.support.AutoCreateIndex; -import org.elasticsearch.action.support.replication.TransportWriteAction; -import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.action.shard.ShardStateAction; -import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.index.shard.IndexShard; -import org.elasticsearch.index.shard.ShardId; -import org.elasticsearch.index.translog.Translog.Location; -import org.elasticsearch.indices.IndexAlreadyExistsException; import org.elasticsearch.indices.IndicesService; -import org.elasticsearch.tasks.Task; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; /** * Performs the delete operation. 
+ * + * Deprecated use TransportBulkAction with a single item instead */ -public class TransportDeleteAction extends TransportWriteAction { - - private final AutoCreateIndex autoCreateIndex; - private final TransportCreateIndexAction createIndexAction; +@Deprecated +public class TransportDeleteAction extends TransportSingleItemBulkWriteAction { @Inject public TransportDeleteAction(Settings settings, TransportService transportService, ClusterService clusterService, IndicesService indicesService, ThreadPool threadPool, ShardStateAction shardStateAction, - TransportCreateIndexAction createIndexAction, ActionFilters actionFilters, - IndexNameExpressionResolver indexNameExpressionResolver, - AutoCreateIndex autoCreateIndex) { - super(settings, DeleteAction.NAME, transportService, clusterService, indicesService, threadPool, shardStateAction, actionFilters, - indexNameExpressionResolver, DeleteRequest::new, ThreadPool.Names.INDEX); - this.createIndexAction = createIndexAction; - this.autoCreateIndex = autoCreateIndex; - } - - @Override - protected void doExecute(Task task, final DeleteRequest request, final ActionListener listener) { - ClusterState state = clusterService.state(); - if (autoCreateIndex.shouldAutoCreate(request.index(), state)) { - createIndexAction.execute(task, new CreateIndexRequest().index(request.index()).cause("auto(delete api)").masterNodeTimeout(request.timeout()), new ActionListener() { - @Override - public void onResponse(CreateIndexResponse result) { - innerExecute(task, request, listener); - } - - @Override - public void onFailure(Exception e) { - if (ExceptionsHelper.unwrapCause(e) instanceof IndexAlreadyExistsException) { - // we have the index, do it - innerExecute(task, request, listener); - } else { - listener.onFailure(e); - } - } - }); - } else { - innerExecute(task, request, listener); - } - } - - @Override - protected void resolveRequest(final MetaData metaData, IndexMetaData indexMetaData, DeleteRequest request) { - super.resolveRequest(metaData, indexMetaData, request); - resolveAndValidateRouting(metaData, indexMetaData.getIndex().getName(), request); - ShardId shardId = clusterService.operationRouting().shardId(clusterService.state(), - indexMetaData.getIndex().getName(), request.id(), request.routing()); - request.setShardId(shardId); - } - - public static void resolveAndValidateRouting(final MetaData metaData, final String concreteIndex, - DeleteRequest request) { - request.routing(metaData.resolveIndexRouting(request.parent(), request.routing(), request.index())); - // check if routing is required, if so, throw error if routing wasn't specified - if (request.routing() == null && metaData.routingRequired(concreteIndex, request.type())) { - throw new RoutingMissingException(concreteIndex, request.type(), request.id()); - } - } - - private void innerExecute(Task task, final DeleteRequest request, final ActionListener listener) { - super.doExecute(task, request, listener); + ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, + TransportBulkAction bulkAction, TransportShardBulkAction shardBulkAction) { + super(settings, DeleteAction.NAME, transportService, clusterService, indicesService, threadPool, shardStateAction, + actionFilters, indexNameExpressionResolver, DeleteRequest::new, DeleteRequest::new, ThreadPool.Names.INDEX, + bulkAction, shardBulkAction); } @Override protected DeleteResponse newResponseInstance() { return new DeleteResponse(); } - - @Override - protected WriteResult onPrimaryShard(DeleteRequest request, 
IndexShard indexShard) { - return executeDeleteRequestOnPrimary(request, indexShard); - } - - @Override - protected Location onReplicaShard(DeleteRequest request, IndexShard indexShard) { - return executeDeleteRequestOnReplica(request, indexShard).getTranslogLocation(); - } - - public static WriteResult executeDeleteRequestOnPrimary(DeleteRequest request, IndexShard indexShard) { - Engine.Delete delete = indexShard.prepareDeleteOnPrimary(request.type(), request.id(), request.version(), request.versionType()); - indexShard.delete(delete); - // update the request with the version so it will go to the replicas - request.versionType(delete.versionType().versionTypeForReplicationAndRecovery()); - request.version(delete.version()); - - assert request.versionType().validateVersionForWrites(request.version()); - DeleteResponse response = new DeleteResponse(indexShard.shardId(), request.type(), request.id(), delete.version(), delete.found()); - return new WriteResult<>(response, delete.getTranslogLocation()); - } - - public static Engine.Delete executeDeleteRequestOnReplica(DeleteRequest request, IndexShard indexShard) { - Engine.Delete delete = indexShard.prepareDeleteOnReplica(request.type(), request.id(), request.version(), request.versionType()); - indexShard.delete(delete); - return delete; - } } diff --git a/core/src/main/java/org/elasticsearch/action/explain/ExplainRequest.java b/core/src/main/java/org/elasticsearch/action/explain/ExplainRequest.java index 851d9e6573d0d..5d8ca27657f92 100644 --- a/core/src/main/java/org/elasticsearch/action/explain/ExplainRequest.java +++ b/core/src/main/java/org/elasticsearch/action/explain/ExplainRequest.java @@ -27,6 +27,7 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; +import org.elasticsearch.search.internal.AliasFilter; import java.io.IOException; @@ -40,10 +41,10 @@ public class ExplainRequest extends SingleShardRequest { private String routing; private String preference; private QueryBuilder query; - private String[] fields; + private String[] storedFields; private FetchSourceContext fetchSourceContext; - private String[] filteringAlias = Strings.EMPTY_ARRAY; + private AliasFilter filteringAlias = new AliasFilter(null, Strings.EMPTY_ARRAY); long nowInMillis; @@ -122,20 +123,20 @@ public FetchSourceContext fetchSourceContext() { } - public String[] fields() { - return fields; + public String[] storedFields() { + return storedFields; } - public ExplainRequest fields(String[] fields) { - this.fields = fields; + public ExplainRequest storedFields(String[] fields) { + this.storedFields = fields; return this; } - public String[] filteringAlias() { + public AliasFilter filteringAlias() { return filteringAlias; } - public ExplainRequest filteringAlias(String[] filteringAlias) { + public ExplainRequest filteringAlias(AliasFilter filteringAlias) { if (filteringAlias != null) { this.filteringAlias = filteringAlias; } @@ -166,9 +167,9 @@ public void readFrom(StreamInput in) throws IOException { routing = in.readOptionalString(); preference = in.readOptionalString(); query = in.readNamedWriteable(QueryBuilder.class); - filteringAlias = in.readStringArray(); - fields = in.readOptionalStringArray(); - fetchSourceContext = in.readOptionalStreamable(FetchSourceContext::new); + filteringAlias = new AliasFilter(in); + storedFields = in.readOptionalStringArray(); + fetchSourceContext = in.readOptionalWriteable(FetchSourceContext::new); 
nowInMillis = in.readVLong(); } @@ -180,9 +181,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalString(routing); out.writeOptionalString(preference); out.writeNamedWriteable(query); - out.writeStringArray(filteringAlias); - out.writeOptionalStringArray(fields); - out.writeOptionalStreamable(fetchSourceContext); + filteringAlias.writeTo(out); + out.writeOptionalStringArray(storedFields); + out.writeOptionalWriteable(fetchSourceContext); out.writeVLong(nowInMillis); } } diff --git a/core/src/main/java/org/elasticsearch/action/explain/ExplainRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/explain/ExplainRequestBuilder.java index c201315cbd832..d2d9bb3b820a8 100644 --- a/core/src/main/java/org/elasticsearch/action/explain/ExplainRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/explain/ExplainRequestBuilder.java @@ -88,10 +88,10 @@ public ExplainRequestBuilder setQuery(QueryBuilder query) { } /** - * Explicitly specify the fields that will be returned for the explained document. By default, nothing is returned. + * Explicitly specify the stored fields that will be returned for the explained document. By default, nothing is returned. */ - public ExplainRequestBuilder setFields(String... fields) { - request.fields(fields); + public ExplainRequestBuilder setStoredFields(String... fields) { + request.storedFields(fields); return this; } @@ -99,12 +99,9 @@ public ExplainRequestBuilder setFields(String... fields) { * Indicates whether the response should contain the stored _source */ public ExplainRequestBuilder setFetchSource(boolean fetch) { - FetchSourceContext context = request.fetchSourceContext(); - if (context == null) { - request.fetchSourceContext(new FetchSourceContext(fetch)); - } else { - context.fetchSource(fetch); - } + FetchSourceContext fetchSourceContext = request.fetchSourceContext() != null ? request.fetchSourceContext() + : FetchSourceContext.FETCH_SOURCE; + request.fetchSourceContext(new FetchSourceContext(fetch, fetchSourceContext.includes(), fetchSourceContext.excludes())); return this; } @@ -129,14 +126,9 @@ public ExplainRequestBuilder setFetchSource(@Nullable String include, @Nullable * @param excludes An optional list of exclude (optionally wildcarded) pattern to filter the returned _source */ public ExplainRequestBuilder setFetchSource(@Nullable String[] includes, @Nullable String[] excludes) { - FetchSourceContext context = request.fetchSourceContext(); - if (context == null) { - request.fetchSourceContext(new FetchSourceContext(includes, excludes)); - } else { - context.fetchSource(true); - context.includes(includes); - context.excludes(excludes); - } + FetchSourceContext fetchSourceContext = request.fetchSourceContext() != null ? 
request.fetchSourceContext() + : FetchSourceContext.FETCH_SOURCE; + request.fetchSourceContext(new FetchSourceContext(fetchSourceContext.fetchSource(), includes, excludes)); return this; } } diff --git a/core/src/main/java/org/elasticsearch/action/explain/TransportExplainAction.java b/core/src/main/java/org/elasticsearch/action/explain/TransportExplainAction.java index 95177853d4198..72aaeb9eb371a 100644 --- a/core/src/main/java/org/elasticsearch/action/explain/TransportExplainAction.java +++ b/core/src/main/java/org/elasticsearch/action/explain/TransportExplainAction.java @@ -31,20 +31,13 @@ import org.elasticsearch.cluster.routing.ShardIterator; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.util.BigArrays; -import org.elasticsearch.index.IndexService; import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.get.GetResult; -import org.elasticsearch.index.mapper.Uid; -import org.elasticsearch.index.mapper.UidFieldMapper; -import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.ShardId; -import org.elasticsearch.indices.IndicesService; -import org.elasticsearch.script.ScriptService; import org.elasticsearch.search.SearchService; -import org.elasticsearch.search.fetch.FetchPhase; -import org.elasticsearch.search.internal.DefaultSearchContext; +import org.elasticsearch.search.internal.AliasFilter; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.internal.ShardSearchLocalRequest; import org.elasticsearch.search.rescore.RescoreSearchContext; @@ -60,26 +53,15 @@ // TODO: AggregatedDfs. Currently the idf can be different then when executing a normal search with explain. 
public class TransportExplainAction extends TransportSingleShardAction { - private final IndicesService indicesService; - - private final ScriptService scriptService; - - - private final BigArrays bigArrays; - - private final FetchPhase fetchPhase; + private final SearchService searchService; @Inject public TransportExplainAction(Settings settings, ThreadPool threadPool, ClusterService clusterService, - TransportService transportService, IndicesService indicesService, ScriptService scriptService, - BigArrays bigArrays, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, - FetchPhase fetchPhase) { + TransportService transportService, SearchService searchService, ActionFilters actionFilters, + IndexNameExpressionResolver indexNameExpressionResolver) { super(settings, ExplainAction.NAME, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver, ExplainRequest::new, ThreadPool.Names.GET); - this.indicesService = indicesService; - this.scriptService = scriptService; - this.bigArrays = bigArrays; - this.fetchPhase = fetchPhase; + this.searchService = searchService; } @Override @@ -95,7 +77,9 @@ protected boolean resolveIndex(ExplainRequest request) { @Override protected void resolveRequest(ClusterState state, InternalRequest request) { - request.request().filteringAlias(indexNameExpressionResolver.filteringAliases(state, request.concreteIndex(), request.request().index())); + final AliasFilter aliasFilter = searchService.buildAliasFilter(state, request.concreteIndex(), + request.request().index()); + request.request().filteringAlias(aliasFilter); // Fail fast on the node that received the request. if (request.request().routing() == null && state.getMetaData().routingRequired(request.concreteIndex(), request.request().type())) { throw new RoutingMissingException(request.concreteIndex(), request.request().type(), request.request().id()); @@ -103,35 +87,33 @@ protected void resolveRequest(ClusterState state, InternalRequest request) { } @Override - protected ExplainResponse shardOperation(ExplainRequest request, ShardId shardId) { - IndexService indexService = indicesService.indexServiceSafe(shardId.getIndex()); - IndexShard indexShard = indexService.getShard(shardId.id()); - Term uidTerm = new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(request.type(), request.id())); - Engine.GetResult result = indexShard.get(new Engine.Get(false, uidTerm)); - if (!result.exists()) { - return new ExplainResponse(shardId.getIndexName(), request.type(), request.id(), false); - } - - SearchContext context = new DefaultSearchContext(0, - new ShardSearchLocalRequest(new String[] { request.type() }, request.nowInMillis, request.filteringAlias()), null, - result.searcher(), indexService, indexShard, scriptService, bigArrays, - threadPool.estimatedTimeInMillisCounter(), parseFieldMatcher, SearchService.NO_TIMEOUT, fetchPhase); - SearchContext.setCurrent(context); - + protected ExplainResponse shardOperation(ExplainRequest request, ShardId shardId) throws IOException { + ShardSearchLocalRequest shardSearchLocalRequest = new ShardSearchLocalRequest(shardId, + new String[]{request.type()}, request.nowInMillis, request.filteringAlias()); + SearchContext context = searchService.createSearchContext(shardSearchLocalRequest, SearchService.NO_TIMEOUT, null); + Engine.GetResult result = null; try { + Term uidTerm = context.mapperService().createUidTerm(request.type(), request.id()); + if (uidTerm == null) { + return new ExplainResponse(shardId.getIndexName(), 
request.type(), request.id(), false); + } + result = context.indexShard().get(new Engine.Get(false, request.type(), request.id(), uidTerm)); + if (!result.exists()) { + return new ExplainResponse(shardId.getIndexName(), request.type(), request.id(), false); + } context.parsedQuery(context.getQueryShardContext().toQuery(request.query())); - context.preProcess(); + context.preProcess(true); int topLevelDocId = result.docIdAndVersion().docId + result.docIdAndVersion().context.docBase; Explanation explanation = context.searcher().explain(context.query(), topLevelDocId); for (RescoreSearchContext ctx : context.rescore()) { Rescorer rescorer = ctx.rescorer(); explanation = rescorer.explain(topLevelDocId, context, ctx, explanation); } - if (request.fields() != null || (request.fetchSourceContext() != null && request.fetchSourceContext().fetchSource())) { + if (request.storedFields() != null || (request.fetchSourceContext() != null && request.fetchSourceContext().fetchSource())) { // Advantage is that we're not opening a second searcher to retrieve the _source. Also // because we are working in the same searcher in engineGetResult we can be sure that a // doc isn't deleted between the initial get and this call. - GetResult getResult = indexShard.getService().get(result, request.id(), request.type(), request.fields(), request.fetchSourceContext()); + GetResult getResult = context.indexShard().getService().get(result, request.id(), request.type(), request.storedFields(), request.fetchSourceContext()); return new ExplainResponse(shardId.getIndexName(), request.type(), request.id(), true, explanation, getResult); } else { return new ExplainResponse(shardId.getIndexName(), request.type(), request.id(), true, explanation); @@ -139,8 +121,7 @@ protected ExplainResponse shardOperation(ExplainRequest request, ShardId shardId } catch (IOException e) { throw new ElasticsearchException("Could not explain", e); } finally { - context.close(); - SearchContext.removeCurrent(); + Releasables.close(result, context); } } diff --git a/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilities.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilities.java new file mode 100644 index 0000000000000..ef7513f38abc2 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilities.java @@ -0,0 +1,282 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.Collections; +import java.util.Arrays; +import java.util.List; +import java.util.ArrayList; +import java.util.Comparator; + +/** + * Describes the capabilities of a field optionally merged across multiple indices. + */ +public class FieldCapabilities implements Writeable, ToXContent { + private final String name; + private final String type; + private final boolean isSearchable; + private final boolean isAggregatable; + + private final String[] indices; + private final String[] nonSearchableIndices; + private final String[] nonAggregatableIndices; + + /** + * Constructor + * @param name The name of the field. + * @param type The type associated with the field. + * @param isSearchable Whether this field is indexed for search. + * @param isAggregatable Whether this field can be aggregated on. + */ + FieldCapabilities(String name, String type, boolean isSearchable, boolean isAggregatable) { + this(name, type, isSearchable, isAggregatable, null, null, null); + } + + /** + * Constructor + * @param name The name of the field + * @param type The type associated with the field. + * @param isSearchable Whether this field is indexed for search. + * @param isAggregatable Whether this field can be aggregated on. + * @param indices The list of indices where this field name is defined as {@code type}, + * or null if all indices have the same {@code type} for the field. + * @param nonSearchableIndices The list of indices where this field is not searchable, + * or null if the field is searchable in all indices. + * @param nonAggregatableIndices The list of indices where this field is not aggregatable, + * or null if the field is aggregatable in all indices. 
+ */ + FieldCapabilities(String name, String type, + boolean isSearchable, boolean isAggregatable, + String[] indices, + String[] nonSearchableIndices, + String[] nonAggregatableIndices) { + this.name = name; + this.type = type; + this.isSearchable = isSearchable; + this.isAggregatable = isAggregatable; + this.indices = indices; + this.nonSearchableIndices = nonSearchableIndices; + this.nonAggregatableIndices = nonAggregatableIndices; + } + + FieldCapabilities(StreamInput in) throws IOException { + this.name = in.readString(); + this.type = in.readString(); + this.isSearchable = in.readBoolean(); + this.isAggregatable = in.readBoolean(); + this.indices = in.readOptionalStringArray(); + this.nonSearchableIndices = in.readOptionalStringArray(); + this.nonAggregatableIndices = in.readOptionalStringArray(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeString(name); + out.writeString(type); + out.writeBoolean(isSearchable); + out.writeBoolean(isAggregatable); + out.writeOptionalStringArray(indices); + out.writeOptionalStringArray(nonSearchableIndices); + out.writeOptionalStringArray(nonAggregatableIndices); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field("type", type); + builder.field("searchable", isSearchable); + builder.field("aggregatable", isAggregatable); + if (indices != null) { + builder.field("indices", indices); + } + if (nonSearchableIndices != null) { + builder.field("non_searchable_indices", nonSearchableIndices); + } + if (nonAggregatableIndices != null) { + builder.field("non_aggregatable_indices", nonAggregatableIndices); + } + builder.endObject(); + return builder; + } + + /** + * The name of the field. + */ + public String getName() { + return name; + } + + /** + * Whether this field can be aggregated on all indices. + */ + public boolean isAggregatable() { + return isAggregatable; + } + + /** + * Whether this field is indexed for search on all indices. + */ + public boolean isSearchable() { + return isSearchable; + } + + /** + * The type of the field. + */ + public String getType() { + return type; + } + + /** + * The list of indices where this field name is defined as {@code type}, + * or null if all indices have the same {@code type} for the field. + */ + public String[] indices() { + return indices; + } + + /** + * The list of indices where this field is not searchable, + * or null if the field is searchable in all indices. + */ + public String[] nonSearchableIndices() { + return nonSearchableIndices; + } + + /** + * The list of indices where this field is not aggregatable, + * or null if the field is aggregatable in all indices.
+ */ + public String[] nonAggregatableIndices() { + return nonAggregatableIndices; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + + FieldCapabilities that = (FieldCapabilities) o; + + if (isSearchable != that.isSearchable) return false; + if (isAggregatable != that.isAggregatable) return false; + if (!name.equals(that.name)) return false; + if (!type.equals(that.type)) return false; + if (!Arrays.equals(indices, that.indices)) return false; + if (!Arrays.equals(nonSearchableIndices, that.nonSearchableIndices)) return false; + return Arrays.equals(nonAggregatableIndices, that.nonAggregatableIndices); + } + + @Override + public int hashCode() { + int result = name.hashCode(); + result = 31 * result + type.hashCode(); + result = 31 * result + (isSearchable ? 1 : 0); + result = 31 * result + (isAggregatable ? 1 : 0); + result = 31 * result + Arrays.hashCode(indices); + result = 31 * result + Arrays.hashCode(nonSearchableIndices); + result = 31 * result + Arrays.hashCode(nonAggregatableIndices); + return result; + } + + static class Builder { + private String name; + private String type; + private boolean isSearchable; + private boolean isAggregatable; + private List indiceList; + + Builder(String name, String type) { + this.name = name; + this.type = type; + this.isSearchable = true; + this.isAggregatable = true; + this.indiceList = new ArrayList<>(); + } + + void add(String index, boolean search, boolean agg) { + IndexCaps indexCaps = new IndexCaps(index, search, agg); + indiceList.add(indexCaps); + this.isSearchable &= search; + this.isAggregatable &= agg; + } + + FieldCapabilities build(boolean withIndices) { + final String[] indices; + /* Eclipse can't deal with o -> o.name, maybe because of + * https://bugs.eclipse.org/bugs/show_bug.cgi?id=511750 */ + Collections.sort(indiceList, Comparator.comparing((IndexCaps o) -> o.name)); + if (withIndices) { + indices = indiceList.stream() + .map(caps -> caps.name) + .toArray(String[]::new); + } else { + indices = null; + } + + final String[] nonSearchableIndices; + if (isSearchable == false && + indiceList.stream().anyMatch((caps) -> caps.isSearchable)) { + // Iff this field is searchable in some indices AND non-searchable in others + // we record the list of non-searchable indices + nonSearchableIndices = indiceList.stream() + .filter((caps) -> caps.isSearchable == false) + .map(caps -> caps.name) + .toArray(String[]::new); + } else { + nonSearchableIndices = null; + } + + final String[] nonAggregatableIndices; + if (isAggregatable == false && + indiceList.stream().anyMatch((caps) -> caps.isAggregatable)) { + // Iff this field is aggregatable in some indices AND non-aggregatable in others + // we keep the list of non-aggregatable indices + nonAggregatableIndices = indiceList.stream() + .filter((caps) -> caps.isAggregatable == false) + .map(caps -> caps.name) + .toArray(String[]::new); + } else { + nonAggregatableIndices = null; + } + return new FieldCapabilities(name, type, isSearchable, isAggregatable, + indices, nonSearchableIndices, nonAggregatableIndices); + } + } + + private static class IndexCaps { + final String name; + final boolean isSearchable; + final boolean isAggregatable; + + IndexCaps(String name, boolean isSearchable, boolean isAggregatable) { + this.name = name; + this.isSearchable = isSearchable; + this.isAggregatable = isAggregatable; + } + } +} diff --git
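[Editor's aside, not part of the patch.] The FieldCapabilities class added above carries a package-private Builder that ANDs the per-index searchable/aggregatable flags together and records which indices dissent. A minimal sketch of that merge behaviour, assuming package-private access (Builder and its methods have default visibility, so this would live in org.elasticsearch.action.fieldcaps) and hypothetical index names:

```java
package org.elasticsearch.action.fieldcaps; // Builder is package-private, so the sketch sits in the same package

public class FieldCapabilitiesMergeSketch {
    public static void main(String[] args) {
        // "timestamp" is mapped as "date" in two hypothetical indices.
        FieldCapabilities.Builder builder = new FieldCapabilities.Builder("timestamp", "date");
        builder.add("logs-2017-01", true, true);   // searchable and aggregatable here
        builder.add("logs-2017-02", true, false);  // searchable, but not aggregatable here

        FieldCapabilities merged = builder.build(false); // false: leave the full indices list out
        System.out.println(merged.isSearchable());              // true  - searchable in every index
        System.out.println(merged.isAggregatable());            // false - not aggregatable everywhere
        System.out.println(merged.nonAggregatableIndices()[0]); // logs-2017-02
        System.out.println(merged.nonSearchableIndices());      // null  - searchable in all indices
    }
}
```

Passing false to build mirrors how TransportFieldCapabilitiesAction later in this diff only asks for the full indices list when a field resolves to more than one type (build(multiTypes)).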
a/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesAction.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesAction.java new file mode 100644 index 0000000000000..93d67f3fc3cc4 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesAction.java @@ -0,0 +1,44 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.action.Action; +import org.elasticsearch.client.ElasticsearchClient; + +public class FieldCapabilitiesAction extends Action { + + public static final FieldCapabilitiesAction INSTANCE = new FieldCapabilitiesAction(); + public static final String NAME = "indices:data/read/field_caps"; + + private FieldCapabilitiesAction() { + super(NAME); + } + + @Override + public FieldCapabilitiesResponse newResponse() { + return new FieldCapabilitiesResponse(); + } + + @Override + public FieldCapabilitiesRequestBuilder newRequestBuilder(ElasticsearchClient client) { + return new FieldCapabilitiesRequestBuilder(client, this); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesIndexRequest.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesIndexRequest.java new file mode 100644 index 0000000000000..460a21ae866aa --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesIndexRequest.java @@ -0,0 +1,65 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.action.ActionRequestValidationException; +import org.elasticsearch.action.support.single.shard.SingleShardRequest; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; + +import java.io.IOException; + +public class FieldCapabilitiesIndexRequest + extends SingleShardRequest { + + private String[] fields; + + // For serialization + FieldCapabilitiesIndexRequest() {} + + FieldCapabilitiesIndexRequest(String[] fields, String index) { + super(index); + if (fields == null || fields.length == 0) { + throw new IllegalArgumentException("specified fields can't be null or empty"); + } + this.fields = fields; + } + + public String[] fields() { + return fields; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + fields = in.readStringArray(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeStringArray(fields); + } + + @Override + public ActionRequestValidationException validate() { + return null; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesIndexResponse.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesIndexResponse.java new file mode 100644 index 0000000000000..1e4686245165b --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesIndexResponse.java @@ -0,0 +1,102 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.action.ActionResponse; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; + +import java.io.IOException; +import java.util.Map; + +/** + * Response for {@link FieldCapabilitiesIndexRequest} requests. 
+ */ +public class FieldCapabilitiesIndexResponse extends ActionResponse implements Writeable { + private String indexName; + private Map responseMap; + + FieldCapabilitiesIndexResponse(String indexName, Map responseMap) { + this.indexName = indexName; + this.responseMap = responseMap; + } + + FieldCapabilitiesIndexResponse() { + } + + FieldCapabilitiesIndexResponse(StreamInput input) throws IOException { + this.readFrom(input); + } + + + /** + * Get the index name + */ + public String getIndexName() { + return indexName; + } + + /** + * Get the field capabilities map + */ + public Map get() { + return responseMap; + } + + /** + * + * Get the field capabilities for the provided {@code field} + */ + public FieldCapabilities getField(String field) { + return responseMap.get(field); + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + this.indexName = in.readString(); + this.responseMap = + in.readMap(StreamInput::readString, FieldCapabilities::new); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeString(indexName); + out.writeMap(responseMap, + StreamOutput::writeString, (valueOut, fc) -> fc.writeTo(valueOut)); + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + + FieldCapabilitiesIndexResponse that = (FieldCapabilitiesIndexResponse) o; + + return responseMap.equals(that.responseMap); + } + + @Override + public int hashCode() { + return responseMap.hashCode(); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesRequest.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesRequest.java new file mode 100644 index 0000000000000..b04f882076326 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesRequest.java @@ -0,0 +1,174 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.Version; +import org.elasticsearch.action.ActionRequest; +import org.elasticsearch.action.ActionRequestValidationException; +import org.elasticsearch.action.IndicesRequest; +import org.elasticsearch.action.ValidateActions; +import org.elasticsearch.action.support.IndicesOptions; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentParser; + +import java.io.IOException; +import java.util.Arrays; +import java.util.HashSet; +import java.util.Objects; +import java.util.Set; + +import static org.elasticsearch.common.xcontent.ObjectParser.fromList; + +public final class FieldCapabilitiesRequest extends ActionRequest implements IndicesRequest.Replaceable { + public static final ParseField FIELDS_FIELD = new ParseField("fields"); + public static final String NAME = "field_caps_request"; + private String[] indices = Strings.EMPTY_ARRAY; + private IndicesOptions indicesOptions = IndicesOptions.strictExpandOpen(); + private String[] fields = Strings.EMPTY_ARRAY; + // pkg private API mainly for cross cluster search to signal that we do multiple reductions ie. the results should not be merged + private boolean mergeResults = true; + + private static ObjectParser PARSER = + new ObjectParser<>(NAME, FieldCapabilitiesRequest::new); + + static { + PARSER.declareStringArray(fromList(String.class, FieldCapabilitiesRequest::fields), + FIELDS_FIELD); + } + + public FieldCapabilitiesRequest() {} + + /** + * Returns true iff the results should be merged. + */ + boolean isMergeResults() { + return mergeResults; + } + + /** + * if set to true the response will contain only a merged view of the per index field capabilities. Otherwise only + * unmerged per index field capabilities are returned. + */ + void setMergeResults(boolean mergeResults) { + this.mergeResults = mergeResults; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + fields = in.readStringArray(); + if (in.getVersion().onOrAfter(Version.V_5_5_0)) { + indices = in.readStringArray(); + indicesOptions = IndicesOptions.readIndicesOptions(in); + mergeResults = in.readBoolean(); + } else { + mergeResults = true; + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeStringArray(fields); + if (out.getVersion().onOrAfter(Version.V_5_5_0)) { + out.writeStringArray(indices); + indicesOptions.writeIndicesOptions(out); + out.writeBoolean(mergeResults); + } + } + + public static FieldCapabilitiesRequest parseFields(XContentParser parser) throws IOException { + return PARSER.parse(parser, null); + } + + /** + * The list of field names to retrieve + */ + public FieldCapabilitiesRequest fields(String... fields) { + if (fields == null || fields.length == 0) { + throw new IllegalArgumentException("specified fields can't be null or empty"); + } + Set fieldSet = new HashSet<>(Arrays.asList(fields)); + this.fields = fieldSet.toArray(new String[0]); + return this; + } + + public String[] fields() { + return fields; + } + + /** + * + * The list of indices to lookup + */ + public FieldCapabilitiesRequest indices(String... 
indices) { + this.indices = Objects.requireNonNull(indices, "indices must not be null"); + return this; + } + + public FieldCapabilitiesRequest indicesOptions(IndicesOptions indicesOptions) { + this.indicesOptions = Objects.requireNonNull(indicesOptions, "indices options must not be null"); + return this; + } + + @Override + public String[] indices() { + return indices; + } + + @Override + public IndicesOptions indicesOptions() { + return indicesOptions; + } + + @Override + public ActionRequestValidationException validate() { + ActionRequestValidationException validationException = null; + if (fields == null || fields.length == 0) { + validationException = + ValidateActions.addValidationError("no fields specified", validationException); + } + return validationException; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + + FieldCapabilitiesRequest that = (FieldCapabilitiesRequest) o; + + if (!Arrays.equals(indices, that.indices)) return false; + if (!indicesOptions.equals(that.indicesOptions)) return false; + return Arrays.equals(fields, that.fields); + } + + @Override + public int hashCode() { + int result = Arrays.hashCode(indices); + result = 31 * result + indicesOptions.hashCode(); + result = 31 * result + Arrays.hashCode(fields); + return result; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesRequestBuilder.java new file mode 100644 index 0000000000000..742d5b3ee3297 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesRequestBuilder.java @@ -0,0 +1,41 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.action.ActionRequestBuilder; +import org.elasticsearch.client.ElasticsearchClient; + +public class FieldCapabilitiesRequestBuilder extends + ActionRequestBuilder { + public FieldCapabilitiesRequestBuilder(ElasticsearchClient client, + FieldCapabilitiesAction action, + String... indices) { + super(client, action, new FieldCapabilitiesRequest().indices(indices)); + } + + /** + * The list of field names to retrieve. + */ + public FieldCapabilitiesRequestBuilder setFields(String... 
fields) { + request().fields(fields); + return this; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesResponse.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesResponse.java new file mode 100644 index 0000000000000..ae5db5835670a --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesResponse.java @@ -0,0 +1,135 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.Version; +import org.elasticsearch.action.ActionResponse; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.Collections; +import java.util.List; +import java.util.Map; + +/** + * Response for {@link FieldCapabilitiesRequest} requests. + */ +public class FieldCapabilitiesResponse extends ActionResponse implements ToXContent { + private Map> responseMap; + private List indexResponses; + + FieldCapabilitiesResponse(Map> responseMap) { + this(responseMap, Collections.emptyList()); + } + + FieldCapabilitiesResponse(List indexResponses) { + this(Collections.emptyMap(), indexResponses); + } + + private FieldCapabilitiesResponse(Map> responseMap, + List indexResponses) { + this.responseMap = responseMap; + this.indexResponses = indexResponses; + } + + /** + * Used for serialization + */ + FieldCapabilitiesResponse() { + this.responseMap = Collections.emptyMap(); + } + + /** + * Get the field capabilities map. + */ + public Map> get() { + return responseMap; + } + + + /** + * Returns the actual per-index field caps responses + */ + List getIndexResponses() { + return indexResponses; + } + /** + * + * Get the field capabilities per type for the provided {@code field}. 
+ */ + public Map getField(String field) { + return responseMap.get(field); + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + this.responseMap = + in.readMap(StreamInput::readString, FieldCapabilitiesResponse::readField); + if (in.getVersion().onOrAfter(Version.V_5_5_0)) { + indexResponses = in.readList(FieldCapabilitiesIndexResponse::new); + } else { + indexResponses = Collections.emptyList(); + } + } + + private static Map readField(StreamInput in) throws IOException { + return in.readMap(StreamInput::readString, FieldCapabilities::new); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeMap(responseMap, StreamOutput::writeString, FieldCapabilitiesResponse::writeField); + if (out.getVersion().onOrAfter(Version.V_5_5_0)) { + out.writeList(indexResponses); + } + + } + + private static void writeField(StreamOutput out, + Map map) throws IOException { + out.writeMap(map, StreamOutput::writeString, (valueOut, fc) -> fc.writeTo(valueOut)); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.field("fields", responseMap); + return builder; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + + FieldCapabilitiesResponse that = (FieldCapabilitiesResponse) o; + + return responseMap.equals(that.responseMap); + } + + @Override + public int hashCode() { + return responseMap.hashCode(); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/fieldcaps/TransportFieldCapabilitiesAction.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/TransportFieldCapabilitiesAction.java new file mode 100644 index 0000000000000..3f0fb77781bdd --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/TransportFieldCapabilitiesAction.java @@ -0,0 +1,197 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
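[Editor's aside, not part of the patch.] With the request, builder, and response classes above in place, a transport-level caller could exercise the new indices:data/read/field_caps action roughly as sketched below. The client variable and the index/field names are hypothetical; only entry points introduced in this diff (FieldCapabilitiesRequestBuilder, setFields, FieldCapabilitiesResponse#getField) plus the standard blocking ActionRequestBuilder#get() are assumed:

```java
import java.util.Map;

import org.elasticsearch.action.fieldcaps.FieldCapabilities;
import org.elasticsearch.action.fieldcaps.FieldCapabilitiesAction;
import org.elasticsearch.action.fieldcaps.FieldCapabilitiesRequestBuilder;
import org.elasticsearch.action.fieldcaps.FieldCapabilitiesResponse;
import org.elasticsearch.client.ElasticsearchClient;

public class FieldCapsUsageSketch {
    static void printTimestampCaps(ElasticsearchClient client) {
        // Ask the hypothetical "logs-*" indices what they know about the "timestamp" field.
        FieldCapabilitiesResponse response =
            new FieldCapabilitiesRequestBuilder(client, FieldCapabilitiesAction.INSTANCE, "logs-*")
                .setFields("timestamp")
                .get();
        // One entry per mapped type, since the same field name may be mapped differently per index.
        Map<String, FieldCapabilities> perType = response.getField("timestamp");
        perType.forEach((type, caps) -> System.out.println(
            type + " searchable=" + caps.isSearchable() + " aggregatable=" + caps.isAggregatable()));
    }
}
```

The per-type map read here is exactly what TransportFieldCapabilitiesAction#merge assembles on the coordinating node before the response is returned.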
+ */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.OriginalIndices; +import org.elasticsearch.action.support.ActionFilters; +import org.elasticsearch.action.support.HandledTransportAction; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.CountDown; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.RemoteClusterAware; +import org.elasticsearch.transport.RemoteClusterService; +import org.elasticsearch.transport.Transport; +import org.elasticsearch.transport.TransportException; +import org.elasticsearch.transport.TransportRequestOptions; +import org.elasticsearch.transport.TransportResponseHandler; +import org.elasticsearch.transport.TransportService; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +public class TransportFieldCapabilitiesAction extends HandledTransportAction { + private final ClusterService clusterService; + private final TransportFieldCapabilitiesIndexAction shardAction; + private final RemoteClusterService remoteClusterService; + private final TransportService transportService; + + @Inject + public TransportFieldCapabilitiesAction(Settings settings, TransportService transportService, + ClusterService clusterService, ThreadPool threadPool, + TransportFieldCapabilitiesIndexAction shardAction, + ActionFilters actionFilters, + IndexNameExpressionResolver + indexNameExpressionResolver) { + super(settings, FieldCapabilitiesAction.NAME, threadPool, transportService, + actionFilters, indexNameExpressionResolver, FieldCapabilitiesRequest::new); + this.clusterService = clusterService; + this.remoteClusterService = transportService.getRemoteClusterService(); + this.transportService = transportService; + this.shardAction = shardAction; + } + + @Override + protected void doExecute(FieldCapabilitiesRequest request, + final ActionListener listener) { + final ClusterState clusterState = clusterService.state(); + final Map remoteClusterIndices = remoteClusterService.groupIndices(request.indicesOptions(), + request.indices(), idx -> indexNameExpressionResolver.hasIndexOrAlias(idx, clusterState)); + final OriginalIndices localIndices = remoteClusterIndices.remove(RemoteClusterAware.LOCAL_CLUSTER_GROUP_KEY); + final String[] concreteIndices; + if (remoteClusterIndices.isEmpty() == false && localIndices.indices().length == 0) { + // in the case we have one or more remote indices but no local we don't expand to all local indices and just do remote + // indices + concreteIndices = Strings.EMPTY_ARRAY; + } else { + concreteIndices = indexNameExpressionResolver.concreteIndexNames(clusterState, localIndices); + } + final int totalNumRequest = concreteIndices.length + remoteClusterIndices.size(); + final CountDown completionCounter = new CountDown(totalNumRequest); + final List indexResponses = Collections.synchronizedList(new ArrayList<>()); + final Runnable onResponse = () -> { + if (completionCounter.countDown()) { + if (request.isMergeResults()) { + listener.onResponse(merge(indexResponses)); + } else { + listener.onResponse(new FieldCapabilitiesResponse(indexResponses)); + } 
+ } + }; + if (totalNumRequest == 0) { + listener.onResponse(new FieldCapabilitiesResponse()); + } else { + ActionListener innerListener = new ActionListener() { + @Override + public void onResponse(FieldCapabilitiesIndexResponse result) { + indexResponses.add(result); + onResponse.run(); + } + + @Override + public void onFailure(Exception e) { + // TODO we should somehow inform the user that we failed + onResponse.run(); + } + }; + for (String index : concreteIndices) { + shardAction.execute(new FieldCapabilitiesIndexRequest(request.fields(), index), innerListener); + } + + // this is the cross cluster part of this API - we force the other cluster to not merge the results but instead + // send us back all individual index results. + for (Map.Entry remoteIndices : remoteClusterIndices.entrySet()) { + String clusterAlias = remoteIndices.getKey(); + OriginalIndices originalIndices = remoteIndices.getValue(); + // if we are connected this is basically a no-op, if we are not we try to connect in parallel in a non-blocking fashion + remoteClusterService.ensureConnected(clusterAlias, ActionListener.wrap(v -> { + Transport.Connection connection = remoteClusterService.getConnection(clusterAlias); + FieldCapabilitiesRequest remoteRequest = new FieldCapabilitiesRequest(); + remoteRequest.setMergeResults(false); // we need to merge on this node + remoteRequest.indicesOptions(originalIndices.indicesOptions()); + remoteRequest.indices(originalIndices.indices()); + remoteRequest.fields(request.fields()); + transportService.sendRequest(connection, FieldCapabilitiesAction.NAME, remoteRequest, TransportRequestOptions.EMPTY, + new TransportResponseHandler() { + + @Override + public FieldCapabilitiesResponse newInstance() { + return new FieldCapabilitiesResponse(); + } + + @Override + public void handleResponse(FieldCapabilitiesResponse response) { + try { + for (FieldCapabilitiesIndexResponse res : response.getIndexResponses()) { + indexResponses.add(new FieldCapabilitiesIndexResponse(RemoteClusterAware. 
+ buildRemoteIndexName(clusterAlias, res.getIndexName()), res.get())); + } + } finally { + onResponse.run(); + } + } + + @Override + public void handleException(TransportException exp) { + onResponse.run(); + } + + @Override + public String executor() { + return ThreadPool.Names.SAME; + } + }); + }, e -> onResponse.run())); + } + + } + } + + private FieldCapabilitiesResponse merge(List indexResponses) { + Map> responseMapBuilder = new HashMap<> (); + for (FieldCapabilitiesIndexResponse response : indexResponses) { + innerMerge(responseMapBuilder, response.getIndexName(), response.get()); + } + + Map> responseMap = new HashMap<>(); + for (Map.Entry> entry : + responseMapBuilder.entrySet()) { + Map typeMap = new HashMap<>(); + boolean multiTypes = entry.getValue().size() > 1; + for (Map.Entry fieldEntry : + entry.getValue().entrySet()) { + typeMap.put(fieldEntry.getKey(), fieldEntry.getValue().build(multiTypes)); + } + responseMap.put(entry.getKey(), typeMap); + } + + return new FieldCapabilitiesResponse(responseMap); + } + + private void innerMerge(Map> responseMapBuilder, String indexName, + Map map) { + for (Map.Entry entry : map.entrySet()) { + final String field = entry.getKey(); + final FieldCapabilities fieldCap = entry.getValue(); + Map typeMap = responseMapBuilder.computeIfAbsent(field, f -> new HashMap<>()); + FieldCapabilities.Builder builder = typeMap.computeIfAbsent(fieldCap.getType(), key -> new FieldCapabilities.Builder(field, + key)); + builder.add(indexName, fieldCap.isSearchable(), fieldCap.isAggregatable()); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/fieldcaps/TransportFieldCapabilitiesIndexAction.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/TransportFieldCapabilitiesIndexAction.java new file mode 100644 index 0000000000000..b9e6f56b6d7ad --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/TransportFieldCapabilitiesIndexAction.java @@ -0,0 +1,100 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.action.support.ActionFilters; +import org.elasticsearch.action.support.single.shard.TransportSingleShardAction; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.block.ClusterBlockException; +import org.elasticsearch.cluster.block.ClusterBlockLevel; +import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.routing.ShardsIterator; +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.indices.IndicesService; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.TransportService; + +import java.util.HashMap; +import java.util.HashSet; +import java.util.Map; +import java.util.Set; + +public class TransportFieldCapabilitiesIndexAction extends TransportSingleShardAction { + + private static final String ACTION_NAME = FieldCapabilitiesAction.NAME + "[index]"; + + private final IndicesService indicesService; + + @Inject + public TransportFieldCapabilitiesIndexAction(Settings settings, ClusterService clusterService, TransportService transportService, + IndicesService indicesService, ThreadPool threadPool, ActionFilters actionFilters, + IndexNameExpressionResolver indexNameExpressionResolver) { + super(settings, ACTION_NAME, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver, + FieldCapabilitiesIndexRequest::new, ThreadPool.Names.MANAGEMENT); + this.indicesService = indicesService; + } + + @Override + protected boolean resolveIndex(FieldCapabilitiesIndexRequest request) { + //internal action, index already resolved + return false; + } + + @Override + protected ShardsIterator shards(ClusterState state, InternalRequest request) { + // Will balance requests between shards + // Resolve patterns and deduplicate + return state.routingTable().index(request.concreteIndex()).randomAllActiveShardsIt(); + } + + @Override + protected FieldCapabilitiesIndexResponse shardOperation(final FieldCapabilitiesIndexRequest request, ShardId shardId) { + MapperService mapperService = indicesService.indexServiceSafe(shardId.getIndex()).mapperService(); + Set fieldNames = new HashSet<>(); + for (String field : request.fields()) { + fieldNames.addAll(mapperService.simpleMatchToIndexNames(field)); + } + Map responseMap = new HashMap<>(); + for (String field : fieldNames) { + MappedFieldType ft = mapperService.fullName(field); + if (ft != null) { + FieldCapabilities fieldCap = new FieldCapabilities(field, ft.typeName(), ft.isSearchable(), ft.isAggregatable()); + responseMap.put(field, fieldCap); + } + } + return new FieldCapabilitiesIndexResponse(shardId.getIndexName(), responseMap); + } + + @Override + protected FieldCapabilitiesIndexResponse newResponse() { + return new FieldCapabilitiesIndexResponse(); + } + + @Override + protected ClusterBlockException checkRequestBlock(ClusterState state, InternalRequest request) { + return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, request.concreteIndex()); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStats.java b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStats.java index 4a4f106b08575..6c6d0da2e36d3 
100644 --- a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStats.java +++ b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStats.java @@ -20,8 +20,10 @@ package org.elasticsearch.action.fieldstats; import org.apache.lucene.document.InetAddressPoint; +import org.apache.lucene.index.TermsEnum; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.StringHelper; +import org.elasticsearch.Version; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; @@ -34,6 +36,7 @@ import java.io.IOException; import java.net.InetAddress; +import java.util.Objects; public abstract class FieldStats implements Writeable, ToXContent { private final byte type; @@ -43,16 +46,55 @@ public abstract class FieldStats implements Writeable, ToXContent { private long sumTotalTermFreq; private boolean isSearchable; private boolean isAggregatable; + private boolean hasMinMax; protected T minValue; protected T maxValue; - FieldStats(byte type, long maxDoc, boolean isSearchable, boolean isAggregatable) { - this(type, maxDoc, 0, 0, 0, isSearchable, isAggregatable, null, null); + /** + * Builds a FieldStats where min and max value are not available for the field. + * @param type The native type of this FieldStats + * @param maxDoc Max number of docs + * @param docCount the number of documents that have at least one term for this field, + * or -1 if this information isn't available for this field. + * @param sumDocFreq the sum of {@link TermsEnum#docFreq()} for all terms in this field, + * or -1 if this information isn't available for this field. + * @param sumTotalTermFreq the sum of {@link TermsEnum#totalTermFreq} for all terms in this field, + * or -1 if this measure isn't available for this field. + * @param isSearchable true if this field is searchable + * @param isAggregatable true if this field is aggregatable + */ + FieldStats(byte type, long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, + boolean isSearchable, boolean isAggregatable) { + this.type = type; + this.maxDoc = maxDoc; + this.docCount = docCount; + this.sumDocFreq = sumDocFreq; + this.sumTotalTermFreq = sumTotalTermFreq; + this.isSearchable = isSearchable; + this.isAggregatable = isAggregatable; + this.hasMinMax = false; } + /** + * Builds a FieldStats with min and max value for the field. + * @param type The native type of this FieldStats + * @param maxDoc Max number of docs + * @param docCount the number of documents that have at least one term for this field, + * or -1 if this information isn't available for this field. + * @param sumDocFreq the sum of {@link TermsEnum#docFreq()} for all terms in this field, + * or -1 if this information isn't available for this field. + * @param sumTotalTermFreq the sum of {@link TermsEnum#totalTermFreq} for all terms in this field, + * or -1 if this measure isn't available for this field. 
+ * @param isSearchable true if this field is searchable + * @param isAggregatable true if this field is aggregatable + * @param minValue the minimum value indexed in this field + * @param maxValue the maximum value indexed in this field + */ FieldStats(byte type, long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, boolean isSearchable, boolean isAggregatable, T minValue, T maxValue) { + Objects.requireNonNull(minValue, "minValue must not be null"); + Objects.requireNonNull(maxValue, "maxValue must not be null"); this.type = type; this.maxDoc = maxDoc; this.docCount = docCount; @@ -60,6 +102,7 @@ public abstract class FieldStats implements Writeable, ToXContent { this.sumTotalTermFreq = sumTotalTermFreq; this.isSearchable = isSearchable; this.isAggregatable = isAggregatable; + this.hasMinMax = true; this.minValue = minValue; this.maxValue = maxValue; } @@ -80,11 +123,20 @@ public String getDisplayType() { return "string"; case 4: return "ip"; + case 5: + return "geo_point"; default: throw new IllegalArgumentException("Unknown type."); } } + /** + * @return true if min/max informations are available for this field + */ + public boolean hasMinMax() { + return hasMinMax; + } + /** * @return the total number of documents. * @@ -216,18 +268,20 @@ public final void accumulate(FieldStats other) { isAggregatable |= other.isAggregatable; assert type == other.getType(); - updateMinMax((T) other.minValue, (T) other.maxValue); + if (hasMinMax && other.hasMinMax) { + updateMinMax((T) other.minValue, (T) other.maxValue); + } else { + hasMinMax = false; + minValue = null; + maxValue = null; + } } - private void updateMinMax(T min, T max) { - if (minValue == null) { - minValue = min; - } else if (min != null && compare(minValue, min) > 0) { + protected void updateMinMax(T min, T max) { + if (compare(minValue, min) > 0) { minValue = min; } - if (maxValue == null) { - maxValue = max; - } else if (max != null && compare(maxValue, max) < 0) { + if (compare(maxValue, max) < 0) { maxValue = max; } } @@ -245,7 +299,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(SUM_TOTAL_TERM_FREQ_FIELD, sumTotalTermFreq); builder.field(SEARCHABLE_FIELD, isSearchable); builder.field(AGGREGATABLE_FIELD, isAggregatable); - toInnerXContent(builder); + if (hasMinMax) { + toInnerXContent(builder); + } builder.endObject(); return builder; } @@ -266,9 +322,14 @@ public final void writeTo(StreamOutput out) throws IOException { out.writeLong(sumTotalTermFreq); out.writeBoolean(isSearchable); out.writeBoolean(isAggregatable); - boolean hasMinMax = minValue != null; - out.writeBoolean(hasMinMax); - if (hasMinMax) { + if (out.getVersion().onOrAfter(Version.V_5_2_0)) { + out.writeBoolean(hasMinMax); + if (hasMinMax) { + writeMinMax(out); + } + } else { + assert hasMinMax : "cannot serialize null min/max fieldstats in a mixed-cluster " + + "with pre-" + Version.V_5_2_0 + " nodes, remote version [" + out.getVersion() + "]"; writeMinMax(out); } } @@ -280,7 +341,7 @@ public final void writeTo(StreamOutput out) throws IOException { * otherwise false is returned */ public boolean match(IndexConstraint constraint) { - if (minValue == null) { + if (hasMinMax == false) { return false; } int cmp; @@ -307,23 +368,47 @@ public boolean match(IndexConstraint constraint) { } } + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + + FieldStats that = (FieldStats) o; + + if (type != that.type) return 
false; + if (maxDoc != that.maxDoc) return false; + if (docCount != that.docCount) return false; + if (sumDocFreq != that.sumDocFreq) return false; + if (sumTotalTermFreq != that.sumTotalTermFreq) return false; + if (isSearchable != that.isSearchable) return false; + if (isAggregatable != that.isAggregatable) return false; + if (hasMinMax != that.hasMinMax) return false; + if (hasMinMax == false) { + return true; + } + if (!minValue.equals(that.minValue)) return false; + return maxValue.equals(that.maxValue); + + } + + @Override + public int hashCode() { + return Objects.hash(type, maxDoc, docCount, sumDocFreq, sumTotalTermFreq, isSearchable, isAggregatable, + hasMinMax, minValue, maxValue); + } + public static class Long extends FieldStats { public Long(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, - boolean isSearchable, boolean isAggregatable, - long minValue, long maxValue) { + boolean isSearchable, boolean isAggregatable) { super((byte) 0, maxDoc, docCount, sumDocFreq, sumTotalTermFreq, - isSearchable, isAggregatable, minValue, maxValue); + isSearchable, isAggregatable); } public Long(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, - boolean isSearchable, boolean isAggregatable) { + boolean isSearchable, boolean isAggregatable, + long minValue, long maxValue) { super((byte) 0, maxDoc, docCount, sumDocFreq, sumTotalTermFreq, - isSearchable, isAggregatable, null, null); - } - - public Long(long maxDoc, - boolean isSearchable, boolean isAggregatable) { - super((byte) 0, maxDoc, isSearchable, isAggregatable); + isSearchable, isAggregatable, minValue, maxValue); } @Override @@ -344,16 +429,21 @@ public java.lang.Long valueOf(String value, String optionalFormat) { @Override public String getMinValueAsString() { - return minValue != null ? java.lang.Long.toString(minValue) : null; + return java.lang.Long.toString(minValue); } @Override public String getMaxValueAsString() { - return maxValue != null ? java.lang.Long.toString(maxValue) : null; + return java.lang.Long.toString(maxValue); } } public static class Double extends FieldStats { + public Double(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, + boolean isSearchable, boolean isAggregatable) { + super((byte) 1, maxDoc, docCount, sumDocFreq, sumTotalTermFreq, isSearchable, isAggregatable); + } + public Double(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, boolean isSearchable, boolean isAggregatable, double minValue, double maxValue) { @@ -361,15 +451,6 @@ public Double(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq minValue, maxValue); } - public Double(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, - boolean isSearchable, boolean isAggregatable) { - super((byte) 1, maxDoc, docCount, sumDocFreq, sumTotalTermFreq, isSearchable, isAggregatable, null, null); - } - - public Double(long maxDoc, boolean isSearchable, boolean isAggregatable) { - super((byte) 1, maxDoc, isSearchable, isAggregatable); - } - @Override public int compare(java.lang.Double o1, java.lang.Double o2) { return o1.compareTo(o2); @@ -391,12 +472,12 @@ public java.lang.Double valueOf(String value, String optionalFormat) { @Override public String getMinValueAsString() { - return minValue != null ? java.lang.Double.toString(minValue) : null; + return java.lang.Double.toString(minValue); } @Override public String getMaxValueAsString() { - return maxValue != null ? 
java.lang.Double.toString(maxValue) : null; + return java.lang.Double.toString(maxValue); } } @@ -404,25 +485,17 @@ public static class Date extends FieldStats { private FormatDateTimeFormatter formatter; public Date(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, - boolean isSearchable, boolean isAggregatable, - FormatDateTimeFormatter formatter, - long minValue, long maxValue) { - super((byte) 2, maxDoc, docCount, sumDocFreq, sumTotalTermFreq, isSearchable, isAggregatable, - minValue, maxValue); - this.formatter = formatter; + boolean isSearchable, boolean isAggregatable) { + super((byte) 2, maxDoc, docCount, sumDocFreq, sumTotalTermFreq, isSearchable, isAggregatable); + this.formatter = null; } public Date(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, boolean isSearchable, boolean isAggregatable, - FormatDateTimeFormatter formatter) { + FormatDateTimeFormatter formatter, + long minValue, long maxValue) { super((byte) 2, maxDoc, docCount, sumDocFreq, sumTotalTermFreq, isSearchable, isAggregatable, - null, null); - this.formatter = formatter; - } - - public Date(long maxDoc, boolean isSearchable, boolean isAggregatable, - FormatDateTimeFormatter formatter) { - super((byte) 2, maxDoc, isSearchable, isAggregatable); + minValue, maxValue); this.formatter = formatter; } @@ -449,16 +522,37 @@ public java.lang.Long valueOf(String value, String fmt) { @Override public String getMinValueAsString() { - return minValue != null ? formatter.printer().print(minValue) : null; + return formatter.printer().print(minValue); } @Override public String getMaxValueAsString() { - return maxValue != null ? formatter.printer().print(maxValue) : null; + return formatter.printer().print(maxValue); + } + + @Override + public boolean equals(Object o) { + if (!super.equals(o)) return false; + Date that = (Date) o; + return Objects.equals(formatter == null ? null : formatter.format(), + that.formatter == null ? null : that.formatter.format()); + } + + @Override + public int hashCode() { + int result = super.hashCode(); + result = 31 * result + (formatter == null ? 0 : formatter.format().hashCode()); + return result; } } public static class Text extends FieldStats { + public Text(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, + boolean isSearchable, boolean isAggregatable) { + super((byte) 3, maxDoc, docCount, sumDocFreq, sumTotalTermFreq, + isSearchable, isAggregatable); + } + public Text(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, boolean isSearchable, boolean isAggregatable, BytesRef minValue, BytesRef maxValue) { @@ -467,10 +561,6 @@ public Text(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, minValue, maxValue); } - public Text(long maxDoc, boolean isSearchable, boolean isAggregatable) { - super((byte) 3, maxDoc, isSearchable, isAggregatable); - } - @Override public int compare(BytesRef o1, BytesRef o2) { return o1.compareTo(o2); @@ -492,12 +582,12 @@ protected BytesRef valueOf(String value, String optionalFormat) { @Override public String getMinValueAsString() { - return minValue != null ? minValue.utf8ToString() : null; + return minValue.utf8ToString(); } @Override public String getMaxValueAsString() { - return maxValue != null ? 
maxValue.utf8ToString() : null; + return maxValue.utf8ToString(); } @Override @@ -508,6 +598,13 @@ protected void toInnerXContent(XContentBuilder builder) throws IOException { } public static class Ip extends FieldStats { + public Ip(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, + boolean isSearchable, boolean isAggregatable) { + super((byte) 4, maxDoc, docCount, sumDocFreq, sumTotalTermFreq, + isSearchable, isAggregatable); + } + + public Ip(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, boolean isSearchable, boolean isAggregatable, InetAddress minValue, InetAddress maxValue) { @@ -516,10 +613,6 @@ public Ip(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, minValue, maxValue); } - public Ip(long maxDoc, boolean isSearchable, boolean isAggregatable) { - super((byte) 4, maxDoc, isSearchable, isAggregatable); - } - @Override public int compare(InetAddress o1, InetAddress o2) { byte[] b1 = InetAddressPoint.encode(o1); @@ -544,12 +637,61 @@ public InetAddress valueOf(String value, String fmt) { @Override public String getMinValueAsString() { - return minValue != null ? NetworkAddress.format(minValue) : null; + return NetworkAddress.format(minValue); + } + + @Override + public String getMaxValueAsString() { + return NetworkAddress.format(maxValue); + } + } + + public static class GeoPoint extends FieldStats { + public GeoPoint(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, + boolean isSearchable, boolean isAggregatable) { + super((byte) 5, maxDoc, docCount, sumDocFreq, sumTotalTermFreq, + isSearchable, isAggregatable); + } + + public GeoPoint(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, + boolean isSearchable, boolean isAggregatable, + org.elasticsearch.common.geo.GeoPoint minValue, org.elasticsearch.common.geo.GeoPoint maxValue) { + super((byte) 5, maxDoc, docCount, sumDocFreq, sumTotalTermFreq, isSearchable, isAggregatable, + minValue, maxValue); + } + + @Override + public org.elasticsearch.common.geo.GeoPoint valueOf(String value, String fmt) { + return org.elasticsearch.common.geo.GeoPoint.parseFromLatLon(value); + } + + @Override + protected void updateMinMax(org.elasticsearch.common.geo.GeoPoint min, org.elasticsearch.common.geo.GeoPoint max) { + minValue.reset(Math.min(min.lat(), minValue.lat()), Math.min(min.lon(), minValue.lon())); + maxValue.reset(Math.max(max.lat(), maxValue.lat()), Math.max(max.lon(), maxValue.lon())); + } + + @Override + public int compare(org.elasticsearch.common.geo.GeoPoint p1, org.elasticsearch.common.geo.GeoPoint p2) { + throw new IllegalArgumentException("compare is not supported for geo_point field stats"); + } + + @Override + public void writeMinMax(StreamOutput out) throws IOException { + out.writeDouble(minValue.lat()); + out.writeDouble(minValue.lon()); + out.writeDouble(maxValue.lat()); + out.writeDouble(maxValue.lon()); + } + + @Override + public String getMinValueAsString() { + return minValue.toString(); } @Override public String getMaxValueAsString() { - return maxValue != null ? 
NetworkAddress.format(maxValue) : null; + return maxValue.toString(); } } @@ -561,56 +703,71 @@ public static FieldStats readFrom(StreamInput in) throws IOException { long sumTotalTermFreq = in.readLong(); boolean isSearchable = in.readBoolean(); boolean isAggregatable = in.readBoolean(); - boolean hasMinMax = in.readBoolean(); - + boolean hasMinMax = true; + if (in.getVersion().onOrAfter(Version.V_5_2_0)) { + hasMinMax = in.readBoolean(); + } switch (type) { case 0: if (hasMinMax) { return new Long(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, isSearchable, isAggregatable, in.readLong(), in.readLong()); + } else { + return new Long(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, + isSearchable, isAggregatable); } - return new Long(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, - isSearchable, isAggregatable); - case 1: if (hasMinMax) { return new Double(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, isSearchable, isAggregatable, in.readDouble(), in.readDouble()); + } else { + return new Double(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, + isSearchable, isAggregatable); } - return new Double(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, - isSearchable, isAggregatable); - case 2: - FormatDateTimeFormatter formatter = Joda.forPattern(in.readString()); if (hasMinMax) { + FormatDateTimeFormatter formatter = Joda.forPattern(in.readString()); return new Date(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, isSearchable, isAggregatable, formatter, in.readLong(), in.readLong()); + } else { + return new Date(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, + isSearchable, isAggregatable); } - return new Date(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, - isSearchable, isAggregatable, formatter); - case 3: if (hasMinMax) { return new Text(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, isSearchable, isAggregatable, in.readBytesRef(), in.readBytesRef()); + } else { + return new Text(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, + isSearchable, isAggregatable); } - return new Text(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, - isSearchable, isAggregatable, null, null); - case 4: - InetAddress min = null; - InetAddress max = null; - if (hasMinMax) { - int l1 = in.readByte(); - byte[] b1 = new byte[l1]; - int l2 = in.readByte(); - byte[] b2 = new byte[l2]; - min = InetAddressPoint.decode(b1); - max = InetAddressPoint.decode(b2); + case 4: { + if (hasMinMax == false) { + return new Ip(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, + isSearchable, isAggregatable); } + int l1 = in.readByte(); + byte[] b1 = new byte[l1]; + in.readBytes(b1, 0, l1); + int l2 = in.readByte(); + byte[] b2 = new byte[l2]; + in.readBytes(b2, 0, l2); + InetAddress min = InetAddressPoint.decode(b1); + InetAddress max = InetAddressPoint.decode(b2); return new Ip(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, isSearchable, isAggregatable, min, max); - + } + case 5: { + if (hasMinMax == false) { + return new GeoPoint(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, + isSearchable, isAggregatable); + } + org.elasticsearch.common.geo.GeoPoint min = new org.elasticsearch.common.geo.GeoPoint(in.readDouble(), in.readDouble()); + org.elasticsearch.common.geo.GeoPoint max = new org.elasticsearch.common.geo.GeoPoint(in.readDouble(), in.readDouble()); + return new GeoPoint(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, + isSearchable, isAggregatable, min, max); + } default: throw new IllegalArgumentException("Unknown type."); } diff --git a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsRequest.java 
b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsRequest.java index d0b40374d6bb3..7d1bc0c56f09b 100644 --- a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsRequest.java @@ -24,10 +24,8 @@ import org.elasticsearch.action.ValidateActions; import org.elasticsearch.action.support.broadcast.BroadcastRequest; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentParser.Token; @@ -76,41 +74,39 @@ public void setIndexConstraints(IndexConstraint[] indexConstraints) { this.indexConstraints = indexConstraints; } - public void source(BytesReference content) throws IOException { + public void source(XContentParser parser) throws IOException { List indexConstraints = new ArrayList<>(); List fields = new ArrayList<>(); - try (XContentParser parser = XContentHelper.createParser(content)) { - String fieldName = null; - Token token = parser.nextToken(); - assert token == Token.START_OBJECT; - for (token = parser.nextToken(); token != Token.END_OBJECT; token = parser.nextToken()) { - switch (token) { - case FIELD_NAME: - fieldName = parser.currentName(); - break; - case START_OBJECT: - if ("index_constraints".equals(fieldName)) { - parseIndexConstraints(indexConstraints, parser); - } else { - throw new IllegalArgumentException("unknown field [" + fieldName + "]"); - } - break; - case START_ARRAY: - if ("fields".equals(fieldName)) { - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - if (token.isValue()) { - fields.add(parser.text()); - } else { - throw new IllegalArgumentException("unexpected token [" + token + "]"); - } + String fieldName = null; + Token token = parser.nextToken(); + assert token == Token.START_OBJECT; + for (token = parser.nextToken(); token != Token.END_OBJECT; token = parser.nextToken()) { + switch (token) { + case FIELD_NAME: + fieldName = parser.currentName(); + break; + case START_OBJECT: + if ("index_constraints".equals(fieldName)) { + parseIndexConstraints(indexConstraints, parser); + } else { + throw new IllegalArgumentException("unknown field [" + fieldName + "]"); + } + break; + case START_ARRAY: + if ("fields".equals(fieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + if (token.isValue()) { + fields.add(parser.text()); + } else { + throw new IllegalArgumentException("unexpected token [" + token + "]"); } - } else { - throw new IllegalArgumentException("unknown field [" + fieldName + "]"); } - break; - default: - throw new IllegalArgumentException("unexpected token [" + token + "]"); - } + } else { + throw new IllegalArgumentException("unknown field [" + fieldName + "]"); + } + break; + default: + throw new IllegalArgumentException("unexpected token [" + token + "]"); } } this.fields = fields.toArray(new String[fields.size()]); diff --git a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsResponse.java b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsResponse.java index 14e2f13d4ff32..aeeacbf5a26f2 100644 --- a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsResponse.java +++ 
b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsResponse.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.fieldstats; +import org.elasticsearch.Version; import org.elasticsearch.action.ShardOperationFailedException; import org.elasticsearch.action.support.broadcast.BroadcastResponse; import org.elasticsearch.common.Nullable; @@ -93,10 +94,21 @@ public void writeTo(StreamOutput out) throws IOException { out.writeVInt(indicesMergedFieldStats.size()); for (Map.Entry> entry1 : indicesMergedFieldStats.entrySet()) { out.writeString(entry1.getKey()); - out.writeVInt(entry1.getValue().size()); + int size = entry1.getValue().size(); + if (out.getVersion().before(Version.V_5_2_0)) { + // filter fieldstats without min/max information + for (FieldStats stats : entry1.getValue().values()) { + if (stats.hasMinMax() == false) { + size--; + } + } + } + out.writeVInt(size); for (Map.Entry entry2 : entry1.getValue().entrySet()) { - out.writeString(entry2.getKey()); - entry2.getValue().writeTo(out); + if (entry2.getValue().hasMinMax() || out.getVersion().onOrAfter(Version.V_5_2_0)) { + out.writeString(entry2.getKey()); + entry2.getValue().writeTo(out); + } } } out.writeVInt(conflicts.size()); diff --git a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsShardRequest.java b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsShardRequest.java index 85a0d469541fe..9152d64f2d527 100644 --- a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsShardRequest.java +++ b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsShardRequest.java @@ -41,8 +41,7 @@ public FieldStatsShardRequest() { public FieldStatsShardRequest(ShardId shardId, FieldStatsRequest request) { super(shardId, request); - Set fields = new HashSet<>(); - fields.addAll(Arrays.asList(request.getFields())); + Set fields = new HashSet<>(Arrays.asList(request.getFields())); for (IndexConstraint indexConstraint : request.getIndexConstraints()) { fields.add(indexConstraint.getField()); } diff --git a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsShardResponse.java b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsShardResponse.java index 7cc298729f03c..58598540ef45a 100644 --- a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsShardResponse.java +++ b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsShardResponse.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.fieldstats; +import org.elasticsearch.Version; import org.elasticsearch.action.support.broadcast.BroadcastShardResponse; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -27,6 +28,7 @@ import java.io.IOException; import java.util.HashMap; import java.util.Map; +import java.util.stream.Collectors; /** */ @@ -46,6 +48,12 @@ public Map> getFieldStats() { return fieldStats; } + Map > filterNullMinMax() { + return fieldStats.entrySet().stream() + .filter((e) -> e.getValue().hasMinMax()) + .collect(Collectors.toMap(p -> p.getKey(), p -> p.getValue())); + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); @@ -61,8 +69,17 @@ public void readFrom(StreamInput in) throws IOException { @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - out.writeVInt(fieldStats.size()); - for (Map.Entry> entry : fieldStats.entrySet()) { + final Map > stats; + if (out.getVersion().before(Version.V_5_2_0)) { + /** + * FieldStats 
with null min/max are not (de)serializable in versions prior to {@link Version.V_5_2_0_UNRELEASED} + */ + stats = filterNullMinMax(); + } else { + stats = getFieldStats(); + } + out.writeVInt(stats.size()); + for (Map.Entry> entry : stats.entrySet()) { out.writeString(entry.getKey()); entry.getValue().writeTo(out); } diff --git a/core/src/main/java/org/elasticsearch/action/fieldstats/TransportFieldStatsAction.java b/core/src/main/java/org/elasticsearch/action/fieldstats/TransportFieldStatsAction.java index e65f69514320f..9ee72223a6684 100644 --- a/core/src/main/java/org/elasticsearch/action/fieldstats/TransportFieldStatsAction.java +++ b/core/src/main/java/org/elasticsearch/action/fieldstats/TransportFieldStatsAction.java @@ -36,8 +36,6 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.IndexService; import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.index.mapper.MappedFieldType; -import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.indices.IndicesService; diff --git a/core/src/main/java/org/elasticsearch/action/get/GetRequest.java b/core/src/main/java/org/elasticsearch/action/get/GetRequest.java index 42c4ccc701d26..abcf6aa671155 100644 --- a/core/src/main/java/org/elasticsearch/action/get/GetRequest.java +++ b/core/src/main/java/org/elasticsearch/action/get/GetRequest.java @@ -26,6 +26,8 @@ import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.lucene.uid.Versions; import org.elasticsearch.index.VersionType; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; @@ -51,7 +53,7 @@ public class GetRequest extends SingleShardRequest implements Realti private String parent; private String preference; - private String[] fields; + private String[] storedFields; private FetchSourceContext fetchSourceContext; @@ -61,7 +63,7 @@ public class GetRequest extends SingleShardRequest implements Realti private VersionType versionType = VersionType.INTERNAL; private long version = Versions.MATCH_ANY; - private boolean ignoreErrorsOnGeneratedFields; + private static DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(GetRequest.class)); public GetRequest() { type = "_all"; @@ -102,6 +104,9 @@ public ActionRequestValidationException validate() { validationException = ValidateActions.addValidationError("illegal version value [" + version + "] for version type [" + versionType.name() + "]", validationException); } + if (versionType == VersionType.FORCE) { + deprecationLogger.deprecated("version type FORCE is deprecated and will be removed in the next major version"); + } return validationException; } @@ -187,20 +192,20 @@ public FetchSourceContext fetchSourceContext() { } /** - * Explicitly specify the fields that will be returned. By default, the _source + * Explicitly specify the stored fields that will be returned. By default, the _source * field will be returned. */ - public GetRequest fields(String... fields) { - this.fields = fields; + public GetRequest storedFields(String... fields) { + this.storedFields = fields; return this; } /** - * Explicitly specify the fields that will be returned. 
By default, the _source + * Explicitly specify the stored fields that will be returned. By default, the _source * field will be returned. */ - public String[] fields() { - return this.fields; + public String[] storedFields() { + return this.storedFields; } /** @@ -248,19 +253,10 @@ public GetRequest versionType(VersionType versionType) { return this; } - public GetRequest ignoreErrorsOnGeneratedFields(boolean ignoreErrorsOnGeneratedFields) { - this.ignoreErrorsOnGeneratedFields = ignoreErrorsOnGeneratedFields; - return this; - } - public VersionType versionType() { return this.versionType; } - public boolean ignoreErrorsOnGeneratedFields() { - return ignoreErrorsOnGeneratedFields; - } - @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); @@ -270,19 +266,12 @@ public void readFrom(StreamInput in) throws IOException { parent = in.readOptionalString(); preference = in.readOptionalString(); refresh = in.readBoolean(); - int size = in.readInt(); - if (size >= 0) { - fields = new String[size]; - for (int i = 0; i < size; i++) { - fields[i] = in.readString(); - } - } + storedFields = in.readOptionalStringArray(); realtime = in.readBoolean(); - this.ignoreErrorsOnGeneratedFields = in.readBoolean(); this.versionType = VersionType.fromValue(in.readByte()); this.version = in.readLong(); - fetchSourceContext = in.readOptionalStreamable(FetchSourceContext::new); + fetchSourceContext = in.readOptionalWriteable(FetchSourceContext::new); } @Override @@ -295,19 +284,11 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalString(preference); out.writeBoolean(refresh); - if (fields == null) { - out.writeInt(-1); - } else { - out.writeInt(fields.length); - for (String field : fields) { - out.writeString(field); - } - } + out.writeOptionalStringArray(storedFields); out.writeBoolean(realtime); - out.writeBoolean(ignoreErrorsOnGeneratedFields); out.writeByte(versionType.getValue()); out.writeLong(version); - out.writeOptionalStreamable(fetchSourceContext); + out.writeOptionalWriteable(fetchSourceContext); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/get/GetRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/get/GetRequestBuilder.java index 7827de12eac30..973b130bedbd2 100644 --- a/core/src/main/java/org/elasticsearch/action/get/GetRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/get/GetRequestBuilder.java @@ -88,8 +88,8 @@ public GetRequestBuilder setPreference(String preference) { * Explicitly specify the fields that will be returned. By default, the _source * field will be returned. */ - public GetRequestBuilder setFields(String... fields) { - request.fields(fields); + public GetRequestBuilder setStoredFields(String... fields) { + request.storedFields(fields); return this; } @@ -99,12 +99,8 @@ public GetRequestBuilder setFields(String... fields) { * @return this for chaining */ public GetRequestBuilder setFetchSource(boolean fetch) { - FetchSourceContext context = request.fetchSourceContext(); - if (context == null) { - request.fetchSourceContext(new FetchSourceContext(fetch)); - } else { - context.fetchSource(fetch); - } + FetchSourceContext context = request.fetchSourceContext() == null ? 
FetchSourceContext.FETCH_SOURCE : request.fetchSourceContext(); + request.fetchSourceContext(new FetchSourceContext(fetch, context.includes(), context.excludes())); return this; } @@ -129,14 +125,8 @@ public GetRequestBuilder setFetchSource(@Nullable String include, @Nullable Stri * @param excludes An optional list of exclude (optionally wildcarded) pattern to filter the returned _source */ public GetRequestBuilder setFetchSource(@Nullable String[] includes, @Nullable String[] excludes) { - FetchSourceContext context = request.fetchSourceContext(); - if (context == null) { - request.fetchSourceContext(new FetchSourceContext(includes, excludes)); - } else { - context.fetchSource(true); - context.includes(includes); - context.excludes(excludes); - } + FetchSourceContext context = request.fetchSourceContext() == null ? FetchSourceContext.FETCH_SOURCE : request.fetchSourceContext(); + request.fetchSourceContext(new FetchSourceContext(context.fetchSource(), includes, excludes)); return this; } @@ -155,11 +145,6 @@ public GetRequestBuilder setRealtime(boolean realtime) { return this; } - public GetRequestBuilder setIgnoreErrorsOnGeneratedFields(Boolean ignoreErrorsOnGeneratedFields) { - request.ignoreErrorsOnGeneratedFields(ignoreErrorsOnGeneratedFields); - return this; - } - /** * Sets the version, which will cause the get operation to only be performed if a matching * version exists and no changes happened on the doc since then. diff --git a/core/src/main/java/org/elasticsearch/action/get/GetResponse.java b/core/src/main/java/org/elasticsearch/action/get/GetResponse.java index 5741984d35f39..156005fab243e 100644 --- a/core/src/main/java/org/elasticsearch/action/get/GetResponse.java +++ b/core/src/main/java/org/elasticsearch/action/get/GetResponse.java @@ -21,18 +21,22 @@ import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.action.ActionResponse; +import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.Strings; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.get.GetField; import org.elasticsearch.index.get.GetResult; import java.io.IOException; import java.util.Iterator; +import java.util.Locale; import java.util.Map; +import java.util.Objects; /** * The response of a get action. 
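(Aside, not part of the patch: the `GetRequest`/`GetRequestBuilder` hunks above rename `fields` to `storedFields` and stop mutating `FetchSourceContext` in place, rebuilding it instead. A minimal sketch of what client code might look like after this change; the `new GetRequest(index, type, id)` constructor is assumed and not shown in this diff.)

```java
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.search.fetch.subphase.FetchSourceContext;

public class GetRequestSketch {
    public static void main(String[] args) {
        // stored fields are now requested via storedFields(...), replacing the removed fields(...)
        GetRequest request = new GetRequest("twitter", "tweet", "1")
                .storedFields("counter", "tags");

        // FetchSourceContext is treated as immutable: read the current context (or the
        // FETCH_SOURCE default) and build a replacement, mirroring GetRequestBuilder#setFetchSource
        FetchSourceContext current = request.fetchSourceContext() == null
                ? FetchSourceContext.FETCH_SOURCE
                : request.fetchSourceContext();
        request.fetchSourceContext(new FetchSourceContext(
                current.fetchSource(), new String[]{"user"}, new String[]{"retweeted"}));
    }
}
```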
@@ -40,9 +44,9 @@ * @see GetRequest * @see org.elasticsearch.client.Client#get(GetRequest) */ -public class GetResponse extends ActionResponse implements Iterable, ToXContent { +public class GetResponse extends ActionResponse implements Iterable, ToXContentObject { - private GetResult getResult; + GetResult getResult; GetResponse() { } @@ -142,6 +146,10 @@ public GetField getField(String name) { return getResult.field(name); } + /** + * @deprecated Use {@link GetResponse#getSource()} instead + */ + @Deprecated @Override public Iterator iterator() { return getResult.iterator(); @@ -152,10 +160,33 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws return getResult.toXContent(builder, params); } - public static GetResponse readGetResponse(StreamInput in) throws IOException { - GetResponse result = new GetResponse(); - result.readFrom(in); - return result; + /** + * This method can be used to parse a {@link GetResponse} object when it has been printed out + * as a xcontent using the {@link #toXContent(XContentBuilder, Params)} method. + *
* <p>
+ * For forward compatibility reason this method might not fail if it tries to parse a field it + * doesn't know. But before returning the result it will check that enough information were + * parsed to return a valid {@link GetResponse} instance and throws a {@link ParsingException} + * otherwise. This is the case when we get a 404 back, which can be parsed as a normal + * {@link GetResponse} with found set to false, or as an elasticsearch exception. The caller + * of this method needs a way to figure out whether we got back a valid get response, which + * can be done by catching ParsingException. + * + * @param parser {@link XContentParser} to parse the response from + * @return a {@link GetResponse} + * @throws IOException is an I/O exception occurs during the parsing + */ + public static GetResponse fromXContent(XContentParser parser) throws IOException { + GetResult getResult = GetResult.fromXContent(parser); + + // At this stage we ensure that we parsed enough information to return + // a valid GetResponse instance. If it's not the case, we throw an + // exception so that callers know it and can handle it correctly. + if (getResult.getIndex() == null && getResult.getType() == null && getResult.getId() == null) { + throw new ParsingException(parser.getTokenLocation(), + String.format(Locale.ROOT, "Missing required fields [%s,%s,%s]", GetResult._INDEX, GetResult._TYPE, GetResult._ID)); + } + return new GetResponse(getResult); } @Override @@ -170,8 +201,25 @@ public void writeTo(StreamOutput out) throws IOException { getResult.writeTo(out); } + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + GetResponse getResponse = (GetResponse) o; + return Objects.equals(getResult, getResponse.getResult); + } + + @Override + public int hashCode() { + return Objects.hash(getResult); + } + @Override public String toString() { - return Strings.toString(this, true); + return Strings.toString(this); } } diff --git a/core/src/main/java/org/elasticsearch/action/get/MultiGetRequest.java b/core/src/main/java/org/elasticsearch/action/get/MultiGetRequest.java index 001e4ebd7a031..dc007cf141b18 100644 --- a/core/src/main/java/org/elasticsearch/action/get/MultiGetRequest.java +++ b/core/src/main/java/org/elasticsearch/action/get/MultiGetRequest.java @@ -28,14 +28,12 @@ import org.elasticsearch.action.ValidateActions; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.lucene.uid.Versions; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.VersionType; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; @@ -46,8 +44,9 @@ import java.util.Collections; import java.util.Iterator; import java.util.List; +import java.util.Locale; -public class MultiGetRequest extends ActionRequest implements Iterable, CompositeIndicesRequest, RealtimeRequest { +public class MultiGetRequest extends ActionRequest implements Iterable, CompositeIndicesRequest, RealtimeRequest { /** * A single 
get item. @@ -58,7 +57,7 @@ public static class Item implements Streamable, IndicesRequest { private String id; private String routing; private String parent; - private String[] fields; + private String[] storedFields; private long version = Versions.MATCH_ANY; private VersionType versionType = VersionType.INTERNAL; private FetchSourceContext fetchSourceContext; @@ -136,13 +135,13 @@ public String parent() { return parent; } - public Item fields(String... fields) { - this.fields = fields; + public Item storedFields(String... fields) { + this.storedFields = fields; return this; } - public String[] fields() { - return this.fields; + public String[] storedFields() { + return this.storedFields; } public long version() { @@ -188,17 +187,11 @@ public void readFrom(StreamInput in) throws IOException { id = in.readString(); routing = in.readOptionalString(); parent = in.readOptionalString(); - int size = in.readVInt(); - if (size > 0) { - fields = new String[size]; - for (int i = 0; i < size; i++) { - fields[i] = in.readString(); - } - } + storedFields = in.readOptionalStringArray(); version = in.readLong(); versionType = VersionType.fromValue(in.readByte()); - fetchSourceContext = in.readOptionalStreamable(FetchSourceContext::new); + fetchSourceContext = in.readOptionalWriteable(FetchSourceContext::new); } @Override @@ -208,19 +201,11 @@ public void writeTo(StreamOutput out) throws IOException { out.writeString(id); out.writeOptionalString(routing); out.writeOptionalString(parent); - if (fields == null) { - out.writeVInt(0); - } else { - out.writeVInt(fields.length); - for (String field : fields) { - out.writeString(field); - } - } - + out.writeOptionalStringArray(storedFields); out.writeLong(version); out.writeByte(versionType.getValue()); - out.writeOptionalStreamable(fetchSourceContext); + out.writeOptionalWriteable(fetchSourceContext); } @Override @@ -233,7 +218,7 @@ public boolean equals(Object o) { if (version != item.version) return false; if (fetchSourceContext != null ? !fetchSourceContext.equals(item.fetchSourceContext) : item.fetchSourceContext != null) return false; - if (!Arrays.equals(fields, item.fields)) return false; + if (!Arrays.equals(storedFields, item.storedFields)) return false; if (!id.equals(item.id)) return false; if (!index.equals(item.index)) return false; if (routing != null ? !routing.equals(item.routing) : item.routing != null) return false; @@ -251,7 +236,7 @@ public int hashCode() { result = 31 * result + id.hashCode(); result = 31 * result + (routing != null ? routing.hashCode() : 0); result = 31 * result + (parent != null ? parent.hashCode() : 0); - result = 31 * result + (fields != null ? Arrays.hashCode(fields) : 0); + result = 31 * result + (storedFields != null ? Arrays.hashCode(storedFields) : 0); result = 31 * result + Long.hashCode(version); result = 31 * result + versionType.hashCode(); result = 31 * result + (fetchSourceContext != null ? fetchSourceContext.hashCode() : 0); @@ -262,8 +247,6 @@ public int hashCode() { String preference; boolean realtime = true; boolean refresh; - public boolean ignoreErrorsOnGeneratedFields = false; - List items = new ArrayList<>(); public List getItems() { @@ -299,11 +282,6 @@ public ActionRequestValidationException validate() { return validationException; } - @Override - public List subRequests() { - return items; - } - /** * Sets the preference to execute the search. Defaults to randomize across shards. 
Can be set to * _local to prefer local shards, _primary to execute only on primary shards, or @@ -337,38 +315,43 @@ public MultiGetRequest refresh(boolean refresh) { return this; } - - public MultiGetRequest ignoreErrorsOnGeneratedFields(boolean ignoreErrorsOnGeneratedFields) { - this.ignoreErrorsOnGeneratedFields = ignoreErrorsOnGeneratedFields; - return this; - } - - public MultiGetRequest add(@Nullable String defaultIndex, @Nullable String defaultType, @Nullable String[] defaultFields, @Nullable FetchSourceContext defaultFetchSource, byte[] data, int from, int length) throws Exception { - return add(defaultIndex, defaultType, defaultFields, defaultFetchSource, new BytesArray(data, from, length), true); - } - - public MultiGetRequest add(@Nullable String defaultIndex, @Nullable String defaultType, @Nullable String[] defaultFields, @Nullable FetchSourceContext defaultFetchSource, BytesReference data) throws Exception { - return add(defaultIndex, defaultType, defaultFields, defaultFetchSource, data, true); - } - - public MultiGetRequest add(@Nullable String defaultIndex, @Nullable String defaultType, @Nullable String[] defaultFields, @Nullable FetchSourceContext defaultFetchSource, BytesReference data, boolean allowExplicitIndex) throws Exception { - return add(defaultIndex, defaultType, defaultFields, defaultFetchSource, null, data, allowExplicitIndex); - } - - public MultiGetRequest add(@Nullable String defaultIndex, @Nullable String defaultType, @Nullable String[] defaultFields, @Nullable FetchSourceContext defaultFetchSource, @Nullable String defaultRouting, BytesReference data, boolean allowExplicitIndex) throws Exception { - try (XContentParser parser = XContentFactory.xContent(data).createParser(data)) { - XContentParser.Token token; - String currentFieldName = null; - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (token == XContentParser.Token.START_ARRAY) { - if ("docs".equals(currentFieldName)) { - parseDocuments(parser, this.items, defaultIndex, defaultType, defaultFields, defaultFetchSource, defaultRouting, allowExplicitIndex); - } else if ("ids".equals(currentFieldName)) { - parseIds(parser, this.items, defaultIndex, defaultType, defaultFields, defaultFetchSource, defaultRouting); - } + public MultiGetRequest add(@Nullable String defaultIndex, @Nullable String defaultType, @Nullable String[] defaultFields, + @Nullable FetchSourceContext defaultFetchSource, @Nullable String defaultRouting, XContentParser parser, + boolean allowExplicitIndex) throws IOException { + XContentParser.Token token; + String currentFieldName = null; + if ((token = parser.nextToken()) != XContentParser.Token.START_OBJECT) { + final String message = String.format( + Locale.ROOT, + "unexpected token [%s], expected [%s]", + token, + XContentParser.Token.START_OBJECT); + throw new ParsingException(parser.getTokenLocation(), message); + } + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token == XContentParser.Token.START_ARRAY) { + if ("docs".equals(currentFieldName)) { + parseDocuments(parser, this.items, defaultIndex, defaultType, defaultFields, defaultFetchSource, defaultRouting, allowExplicitIndex); + } else if ("ids".equals(currentFieldName)) { + parseIds(parser, this.items, defaultIndex, defaultType, defaultFields, defaultFetchSource, 
defaultRouting); + } else { + final String message = String.format( + Locale.ROOT, + "unknown key [%s] for a %s, expected [docs] or [ids]", + currentFieldName, + token); + throw new ParsingException(parser.getTokenLocation(), message); } + } else { + final String message = String.format( + Locale.ROOT, + "unexpected token [%s], expected [%s] or [%s]", + token, + XContentParser.Token.FIELD_NAME, + XContentParser.Token.START_ARRAY); + throw new ParsingException(parser.getTokenLocation(), message); } } return this; @@ -386,11 +369,11 @@ public static void parseDocuments(XContentParser parser, List items, @Null String id = null; String routing = defaultRouting; String parent = null; - List fields = null; + List storedFields = null; long version = Versions.MATCH_ANY; VersionType versionType = VersionType.INTERNAL; - FetchSourceContext fetchSourceContext = null; + FetchSourceContext fetchSourceContext = FetchSourceContext.FETCH_SOURCE; while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { @@ -410,33 +393,42 @@ public static void parseDocuments(XContentParser parser, List items, @Null } else if ("_parent".equals(currentFieldName) || "parent".equals(currentFieldName)) { parent = parser.text(); } else if ("fields".equals(currentFieldName)) { - fields = new ArrayList<>(); - fields.add(parser.text()); + throw new ParsingException(parser.getTokenLocation(), + "Unsupported field [fields] used, expected [stored_fields] instead"); + } else if ("stored_fields".equals(currentFieldName)) { + storedFields = new ArrayList<>(); + storedFields.add(parser.text()); } else if ("_version".equals(currentFieldName) || "version".equals(currentFieldName)) { version = parser.longValue(); } else if ("_version_type".equals(currentFieldName) || "_versionType".equals(currentFieldName) || "version_type".equals(currentFieldName) || "versionType".equals(currentFieldName)) { versionType = VersionType.fromString(parser.text()); } else if ("_source".equals(currentFieldName)) { if (parser.isBooleanValue()) { - fetchSourceContext = new FetchSourceContext(parser.booleanValue()); + fetchSourceContext = new FetchSourceContext(parser.booleanValue(), fetchSourceContext.includes(), + fetchSourceContext.excludes()); } else if (token == XContentParser.Token.VALUE_STRING) { - fetchSourceContext = new FetchSourceContext(new String[]{parser.text()}); + fetchSourceContext = new FetchSourceContext(fetchSourceContext.fetchSource(), + new String[]{parser.text()}, fetchSourceContext.excludes()); } else { throw new ElasticsearchParseException("illegal type for _source: [{}]", token); } } } else if (token == XContentParser.Token.START_ARRAY) { if ("fields".equals(currentFieldName)) { - fields = new ArrayList<>(); + throw new ParsingException(parser.getTokenLocation(), + "Unsupported field [fields] used, expected [stored_fields] instead"); + } else if ("stored_fields".equals(currentFieldName)) { + storedFields = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - fields.add(parser.text()); + storedFields.add(parser.text()); } } else if ("_source".equals(currentFieldName)) { ArrayList includes = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { includes.add(parser.text()); } - fetchSourceContext = new FetchSourceContext(includes.toArray(Strings.EMPTY_ARRAY)); + fetchSourceContext = new FetchSourceContext(fetchSourceContext.fetchSource(), includes.toArray(Strings.EMPTY_ARRAY) + , 
fetchSourceContext.excludes()); } } else if (token == XContentParser.Token.START_OBJECT) { @@ -464,20 +456,20 @@ public static void parseDocuments(XContentParser parser, List items, @Null } } - fetchSourceContext = new FetchSourceContext( + fetchSourceContext = new FetchSourceContext(fetchSourceContext.fetchSource(), includes == null ? Strings.EMPTY_ARRAY : includes.toArray(new String[includes.size()]), excludes == null ? Strings.EMPTY_ARRAY : excludes.toArray(new String[excludes.size()])); } } } String[] aFields; - if (fields != null) { - aFields = fields.toArray(new String[fields.size()]); + if (storedFields != null) { + aFields = storedFields.toArray(new String[storedFields.size()]); } else { aFields = defaultFields; } - items.add(new Item(index, type, id).routing(routing).fields(aFields).parent(parent).version(version).versionType(versionType) - .fetchSourceContext(fetchSourceContext == null ? defaultFetchSource : fetchSourceContext)); + items.add(new Item(index, type, id).routing(routing).storedFields(aFields).parent(parent).version(version).versionType(versionType) + .fetchSourceContext(fetchSourceContext == FetchSourceContext.FETCH_SOURCE ? defaultFetchSource : fetchSourceContext)); } } @@ -491,7 +483,7 @@ public static void parseIds(XContentParser parser, List items, @Nullable S if (!token.isValue()) { throw new IllegalArgumentException("ids array element should only contain ids"); } - items.add(new Item(defaultIndex, defaultType, parser.text()).fields(defaultFields).fetchSourceContext(defaultFetchSource).routing(defaultRouting)); + items.add(new Item(defaultIndex, defaultType, parser.text()).storedFields(defaultFields).fetchSourceContext(defaultFetchSource).routing(defaultRouting)); } } @@ -510,7 +502,6 @@ public void readFrom(StreamInput in) throws IOException { preference = in.readOptionalString(); refresh = in.readBoolean(); realtime = in.readBoolean(); - ignoreErrorsOnGeneratedFields = in.readBoolean(); int size = in.readVInt(); items = new ArrayList<>(size); @@ -525,7 +516,6 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalString(preference); out.writeBoolean(refresh); out.writeBoolean(realtime); - out.writeBoolean(ignoreErrorsOnGeneratedFields); out.writeVInt(items.size()); for (Item item : items) { diff --git a/core/src/main/java/org/elasticsearch/action/get/MultiGetRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/get/MultiGetRequestBuilder.java index 6e32e1caf3099..a2cb204d5eabf 100644 --- a/core/src/main/java/org/elasticsearch/action/get/MultiGetRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/get/MultiGetRequestBuilder.java @@ -80,9 +80,4 @@ public MultiGetRequestBuilder setRealtime(boolean realtime) { request.realtime(realtime); return this; } - - public MultiGetRequestBuilder setIgnoreErrorsOnGeneratedFields(boolean ignoreErrorsOnGeneratedFields) { - request.ignoreErrorsOnGeneratedFields(ignoreErrorsOnGeneratedFields); - return this; - } } diff --git a/core/src/main/java/org/elasticsearch/action/get/MultiGetResponse.java b/core/src/main/java/org/elasticsearch/action/get/MultiGetResponse.java index e1fe435fd107d..93e4272bd956c 100644 --- a/core/src/main/java/org/elasticsearch/action/get/MultiGetResponse.java +++ b/core/src/main/java/org/elasticsearch/action/get/MultiGetResponse.java @@ -24,14 +24,14 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; -import 
org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; import java.util.Arrays; import java.util.Iterator; -public class MultiGetResponse extends ActionResponse implements Iterable, ToXContent { +public class MultiGetResponse extends ActionResponse implements Iterable, ToXContentObject { /** * Represents a failure. @@ -128,6 +128,7 @@ public Iterator iterator() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startArray(Fields.DOCS); for (MultiGetItemResponse response : responses) { if (response.isFailed()) { @@ -136,16 +137,15 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(Fields._INDEX, failure.getIndex()); builder.field(Fields._TYPE, failure.getType()); builder.field(Fields._ID, failure.getId()); - ElasticsearchException.renderException(builder, params, failure.getFailure()); + ElasticsearchException.generateFailureXContent(builder, params, failure.getFailure(), true); builder.endObject(); } else { GetResponse getResponse = response.getResponse(); - builder.startObject(); getResponse.toXContent(builder, params); - builder.endObject(); } } builder.endArray(); + builder.endObject(); return builder; } @@ -154,9 +154,6 @@ static final class Fields { static final String _INDEX = "_index"; static final String _TYPE = "_type"; static final String _ID = "_id"; - static final String ERROR = "error"; - static final String ROOT_CAUSE = "root_cause"; - } @Override diff --git a/core/src/main/java/org/elasticsearch/action/get/MultiGetShardRequest.java b/core/src/main/java/org/elasticsearch/action/get/MultiGetShardRequest.java index 47f07c524888a..25a624b2eb558 100644 --- a/core/src/main/java/org/elasticsearch/action/get/MultiGetShardRequest.java +++ b/core/src/main/java/org/elasticsearch/action/get/MultiGetShardRequest.java @@ -35,7 +35,6 @@ public class MultiGetShardRequest extends SingleShardRequest items; @@ -52,7 +51,6 @@ public MultiGetShardRequest() { preference = multiGetRequest.preference; realtime = multiGetRequest.realtime; refresh = multiGetRequest.refresh; - ignoreErrorsOnGeneratedFields = multiGetRequest.ignoreErrorsOnGeneratedFields; } @Override @@ -87,11 +85,6 @@ public MultiGetShardRequest realtime(boolean realtime) { return this; } - public MultiGetShardRequest ignoreErrorsOnGeneratedFields(Boolean ignoreErrorsOnGeneratedFields) { - this.ignoreErrorsOnGeneratedFields = ignoreErrorsOnGeneratedFields; - return this; - } - public boolean refresh() { return this.refresh; } @@ -130,7 +123,6 @@ public void readFrom(StreamInput in) throws IOException { preference = in.readOptionalString(); refresh = in.readBoolean(); realtime = in.readBoolean(); - ignoreErrorsOnGeneratedFields = in.readBoolean(); } @Override @@ -146,11 +138,5 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalString(preference); out.writeBoolean(refresh); out.writeBoolean(realtime); - out.writeBoolean(ignoreErrorsOnGeneratedFields); - - } - - public boolean ignoreErrorsOnGeneratedFields() { - return ignoreErrorsOnGeneratedFields; } } diff --git a/core/src/main/java/org/elasticsearch/action/get/TransportGetAction.java b/core/src/main/java/org/elasticsearch/action/get/TransportGetAction.java index 240035aee2aeb..6b9de7ecf64e3 100644 --- a/core/src/main/java/org/elasticsearch/action/get/TransportGetAction.java +++ 
b/core/src/main/java/org/elasticsearch/action/get/TransportGetAction.java @@ -92,8 +92,8 @@ protected GetResponse shardOperation(GetRequest request, ShardId shardId) { indexShard.refresh("refresh_flag_get"); } - GetResult result = indexShard.getService().get(request.type(), request.id(), request.fields(), - request.realtime(), request.version(), request.versionType(), request.fetchSourceContext(), request.ignoreErrorsOnGeneratedFields()); + GetResult result = indexShard.getService().get(request.type(), request.id(), request.storedFields(), + request.realtime(), request.version(), request.versionType(), request.fetchSourceContext()); return new GetResponse(result); } diff --git a/core/src/main/java/org/elasticsearch/action/get/TransportMultiGetAction.java b/core/src/main/java/org/elasticsearch/action/get/TransportMultiGetAction.java index 4e62030d3298e..bea65283cc034 100644 --- a/core/src/main/java/org/elasticsearch/action/get/TransportMultiGetAction.java +++ b/core/src/main/java/org/elasticsearch/action/get/TransportMultiGetAction.java @@ -47,8 +47,8 @@ public class TransportMultiGetAction extends HandledTransportAction listener) { ClusterState clusterState = clusterService.state(); - clusterState.blocks().globalBlockedRaiseException(ClusterBlockLevel.READ); final AtomicArray responses = new AtomicArray<>(request.items.size()); + final Map shardRequests = new HashMap<>(); - Map shardRequests = new HashMap<>(); for (int i = 0; i < request.items.size(); i++) { MultiGetRequest.Item item = request.items.get(i); - if (!clusterState.metaData().hasConcreteIndex(item.index())) { - responses.set(i, new MultiGetItemResponse(null, new MultiGetResponse.Failure(item.index(), item.type(), item.id(), new IndexNotFoundException(item.index())))); - continue; - } - item.routing(clusterState.metaData().resolveIndexRouting(item.parent(), item.routing(), item.index())); - String concreteSingleIndex = indexNameExpressionResolver.concreteSingleIndex(clusterState, item).getName(); - if (item.routing() == null && clusterState.getMetaData().routingRequired(concreteSingleIndex, item.type())) { - responses.set(i, new MultiGetItemResponse(null, new MultiGetResponse.Failure(concreteSingleIndex, item.type(), item.id(), - new IllegalArgumentException("routing is required for [" + concreteSingleIndex + "]/[" + item.type() + "]/[" + item.id() + "]")))); + + String concreteSingleIndex; + try { + concreteSingleIndex = indexNameExpressionResolver.concreteSingleIndex(clusterState, item).getName(); + + item.routing(clusterState.metaData().resolveIndexRouting(item.parent(), item.routing(), item.index())); + if ((item.routing() == null) && (clusterState.getMetaData().routingRequired(concreteSingleIndex, item.type()))) { + String message = "routing is required for [" + concreteSingleIndex + "]/[" + item.type() + "]/[" + item.id() + "]"; + responses.set(i, newItemFailure(concreteSingleIndex, item.type(), item.id(), new IllegalArgumentException(message))); + continue; + } + } catch (Exception e) { + responses.set(i, newItemFailure(item.index(), item.type(), item.id(), e)); continue; } + ShardId shardId = clusterService.operationRouting() - .getShards(clusterState, concreteSingleIndex, item.id(), item.routing(), null).shardId(); + .getShards(clusterState, concreteSingleIndex, item.id(), item.routing(), null) + .shardId(); + MultiGetShardRequest shardRequest = shardRequests.get(shardId); if (shardRequest == null) { - shardRequest = new MultiGetShardRequest(request, shardId.getIndexName(), shardId.id()); + shardRequest = new 
MultiGetShardRequest(request, shardId.getIndexName(), shardId.getId()); shardRequests.put(shardId, shardRequest); } shardRequest.add(i, item); } - if (shardRequests.size() == 0) { + if (shardRequests.isEmpty()) { // only failures.. listener.onResponse(new MultiGetResponse(responses.toArray(new MultiGetItemResponse[responses.length()]))); } @@ -97,7 +103,8 @@ protected void doExecute(final MultiGetRequest request, final ActionListener) () -> new ParameterizedMessage("{} failed to execute multi_get for [{}]/[{}]", shardId, item.type(), item.id()), e); + logger.debug((Supplier) () -> new ParameterizedMessage("{} failed to execute multi_get for [{}]/[{}]", shardId, + item.type(), item.id()), e); response.add(request.locations.get(i), new MultiGetResponse.Failure(request.index(), item.type(), item.id(), e)); } } diff --git a/core/src/main/java/org/elasticsearch/action/index/IndexRequest.java b/core/src/main/java/org/elasticsearch/action/index/IndexRequest.java index 1f6bc9108c22c..11dda29637857 100644 --- a/core/src/main/java/org/elasticsearch/action/index/IndexRequest.java +++ b/core/src/main/java/org/elasticsearch/action/index/IndexRequest.java @@ -20,11 +20,14 @@ package org.elasticsearch.action.index; import org.elasticsearch.ElasticsearchGenerationException; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; -import org.elasticsearch.action.DocumentRequest; +import org.elasticsearch.action.CompositeIndicesRequest; +import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.RoutingMissingException; import org.elasticsearch.action.TimestampParsingException; import org.elasticsearch.action.support.replication.ReplicatedWriteRequest; +import org.elasticsearch.action.support.replication.ReplicationRequest; import org.elasticsearch.client.Requests; import org.elasticsearch.cluster.metadata.MappingMetaData; import org.elasticsearch.cluster.metadata.MetaData; @@ -34,7 +37,10 @@ import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.lucene.uid.Versions; +import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; @@ -47,6 +53,7 @@ import java.nio.charset.StandardCharsets; import java.util.Locale; import java.util.Map; +import java.util.Objects; import static org.elasticsearch.action.ValidateActions.addValidationError; @@ -55,10 +62,10 @@ * created using {@link org.elasticsearch.client.Requests#indexRequest(String)}. * * The index requires the {@link #index()}, {@link #type(String)}, {@link #id(String)} and - * {@link #source(byte[])} to be set. + * {@link #source(byte[], XContentType)} to be set. * - * The source (content to index) can be set in its bytes form using ({@link #source(byte[])}), - * its string form ({@link #source(String)}) or using a {@link org.elasticsearch.common.xcontent.XContentBuilder} + * The source (content to index) can be set in its bytes form using ({@link #source(byte[], XContentType)}), + * its string form ({@link #source(String, XContentType)}) or using a {@link org.elasticsearch.common.xcontent.XContentBuilder} * ({@link #source(org.elasticsearch.common.xcontent.XContentBuilder)}). 
* * If the {@link #id(String)} is not set, it will be automatically generated. @@ -67,68 +74,14 @@ * @see org.elasticsearch.client.Requests#indexRequest(String) * @see org.elasticsearch.client.Client#index(IndexRequest) */ -public class IndexRequest extends ReplicatedWriteRequest implements DocumentRequest { +public class IndexRequest extends ReplicatedWriteRequest implements DocWriteRequest, CompositeIndicesRequest { /** - * Operation type controls if the type of the index operation. + * Max length of the source document to include into toString() + * + * @see ReplicationRequest#createTask(long, java.lang.String, java.lang.String, org.elasticsearch.tasks.TaskId) */ - public enum OpType { - /** - * Index the source. If there an existing document with the id, it will - * be replaced. - */ - INDEX((byte) 0), - /** - * Creates the resource. Simply adds it to the index, if there is an existing - * document with the id, then it won't be removed. - */ - CREATE((byte) 1); - - private final byte id; - private final String lowercase; - - OpType(byte id) { - this.id = id; - this.lowercase = this.toString().toLowerCase(Locale.ENGLISH); - } - - /** - * The internal representation of the operation type. - */ - public byte id() { - return id; - } - - public String lowercase() { - return this.lowercase; - } - - /** - * Constructs the operation type from its internal representation. - */ - public static OpType fromId(byte id) { - if (id == 0) { - return INDEX; - } else if (id == 1) { - return CREATE; - } else { - throw new IllegalArgumentException("No type match for [" + id + "]"); - } - } - - public static OpType fromString(String sOpType) { - String lowersOpType = sOpType.toLowerCase(Locale.ROOT); - switch (lowersOpType) { - case "create": - return OpType.CREATE; - case "index": - return OpType.INDEX; - default: - throw new IllegalArgumentException("opType [" + sOpType + "] not allowed, either [index] or [create] are allowed"); - } - } - - } + static final int MAX_SOURCE_LENGTH_IN_TOSTRING = 2048; private String type; private String id; @@ -148,7 +101,7 @@ public static OpType fromString(String sOpType) { private long version = Versions.MATCH_ANY; private VersionType versionType = VersionType.INTERNAL; - private XContentType contentType = Requests.INDEX_CONTENT_TYPE; + private XContentType contentType; private String pipeline; @@ -161,6 +114,7 @@ public static OpType fromString(String sOpType) { private long autoGeneratedTimestamp = UNSET_AUTO_GENERATED_TIMESTAMP; private boolean isRetry = false; + private static DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(IndexRequest.class)); public IndexRequest() { @@ -168,7 +122,7 @@ public IndexRequest() { /** * Constructs a new index request against the specific index. The {@link #type(String)} - * {@link #source(byte[])} must be set. + * {@link #source(byte[], XContentType)} must be set. */ public IndexRequest(String index) { this.index = index; @@ -205,10 +159,18 @@ public ActionRequestValidationException validate() { if (source == null) { validationException = addValidationError("source is missing", validationException); } - + if (contentType == null) { + validationException = addValidationError("content type is missing", validationException); + } + final long resolvedVersion = resolveVersionDefaults(); if (opType() == OpType.CREATE) { - if (versionType != VersionType.INTERNAL || version != Versions.MATCH_DELETED) { - validationException = addValidationError("create operations do not support versioning. 
use index instead", validationException); + if (versionType != VersionType.INTERNAL) { + validationException = addValidationError("create operations only support internal versioning. use index instead", validationException); + return validationException; + } + + if (resolvedVersion != Versions.MATCH_DELETED) { + validationException = addValidationError("create operations do not support explicit versions. use index instead", validationException); return validationException; } } @@ -217,8 +179,8 @@ public ActionRequestValidationException validate() { addValidationError("an id is required for a " + opType() + " operation", validationException); } - if (!versionType.validateVersionForWrites(version)) { - validationException = addValidationError("illegal version value [" + version + "] for version type [" + versionType.name() + "]", validationException); + if (!versionType.validateVersionForWrites(resolvedVersion)) { + validationException = addValidationError("illegal version value [" + resolvedVersion + "] for version type [" + versionType.name() + "]", validationException); } if (ttl != null) { @@ -232,28 +194,24 @@ public ActionRequestValidationException validate() { id.getBytes(StandardCharsets.UTF_8).length, validationException); } - if (id == null && (versionType == VersionType.INTERNAL && version == Versions.MATCH_ANY) == false) { + if (id == null && (versionType == VersionType.INTERNAL && resolvedVersion == Versions.MATCH_ANY) == false) { validationException = addValidationError("an id must be provided if version type or value are set", validationException); } + if (versionType == VersionType.FORCE) { + deprecationLogger.deprecated("version type FORCE is deprecated and will be removed in the next major version"); + } return validationException; } /** - * The content type that will be used when generating a document from user provided objects like Maps. + * The content type. This will be used when generating a document from user provided objects like Maps and when parsing the + * source at index time */ public XContentType getContentType() { return contentType; } - /** - * Sets the content type that will be used when generating a document from user provided objects (like Map). - */ - public IndexRequest contentType(XContentType contentType) { - this.contentType = contentType; - return this; - } - /** * The type of the indexed document. */ @@ -325,11 +283,13 @@ public String parent() { /** * Sets the timestamp either as millis since the epoch, or, in the configured date format. */ + @Deprecated public IndexRequest timestamp(String timestamp) { this.timestamp = timestamp; return this; } + @Deprecated public String timestamp() { return this.timestamp; } @@ -337,6 +297,7 @@ public String timestamp() { /** * Sets the ttl value as a time value expression. */ + @Deprecated public IndexRequest ttl(String ttl) { this.ttl = TimeValue.parseTimeValue(ttl, null, "ttl"); return this; @@ -345,6 +306,7 @@ public IndexRequest ttl(String ttl) { /** * Sets the ttl as a {@link TimeValue} instance. */ + @Deprecated public IndexRequest ttl(TimeValue ttl) { this.ttl = ttl; return this; @@ -353,6 +315,7 @@ public IndexRequest ttl(TimeValue ttl) { /** * Sets the relative ttl value in milliseconds. It musts be greater than 0 as it makes little sense otherwise. 
*/ + @Deprecated public IndexRequest ttl(long ttl) { this.ttl = new TimeValue(ttl); return this; @@ -361,6 +324,7 @@ public IndexRequest ttl(long ttl) { /** * Returns the ttl as a {@link TimeValue} */ + @Deprecated public TimeValue ttl() { return this.ttl; } @@ -388,16 +352,16 @@ public BytesReference source() { } public Map sourceAsMap() { - return XContentHelper.convertToMap(source, false).v2(); + return XContentHelper.convertToMap(source, false, contentType).v2(); } /** - * Index the Map as a {@link org.elasticsearch.client.Requests#INDEX_CONTENT_TYPE}. + * Index the Map in {@link Requests#INDEX_CONTENT_TYPE} format * * @param source The map to index */ public IndexRequest source(Map source) throws ElasticsearchGenerationException { - return source(source, contentType); + return source(source, Requests.INDEX_CONTENT_TYPE); } /** @@ -418,63 +382,51 @@ public IndexRequest source(Map source, XContentType contentType) throws Elastics /** * Sets the document source to index. * - * Note, its preferable to either set it using {@link #source(org.elasticsearch.common.xcontent.XContentBuilder)} - * or using the {@link #source(byte[])}. + * @deprecated use {@link #source(String, XContentType)} */ + @Deprecated public IndexRequest source(String source) { - this.source = new BytesArray(source.getBytes(StandardCharsets.UTF_8)); - return this; + return source(new BytesArray(source), XContentFactory.xContentType(source)); } /** - * Sets the content source to index. + * Sets the document source to index. + * + * Note, its preferable to either set it using {@link #source(org.elasticsearch.common.xcontent.XContentBuilder)} + * or using the {@link #source(byte[], XContentType)}. */ - public IndexRequest source(XContentBuilder sourceBuilder) { - source = sourceBuilder.bytes(); - return this; - } - - public IndexRequest source(String field1, Object value1) { - try { - XContentBuilder builder = XContentFactory.contentBuilder(contentType); - builder.startObject().field(field1, value1).endObject(); - return source(builder); - } catch (IOException e) { - throw new ElasticsearchGenerationException("Failed to generate", e); - } - } - - public IndexRequest source(String field1, Object value1, String field2, Object value2) { - try { - XContentBuilder builder = XContentFactory.contentBuilder(contentType); - builder.startObject().field(field1, value1).field(field2, value2).endObject(); - return source(builder); - } catch (IOException e) { - throw new ElasticsearchGenerationException("Failed to generate", e); - } + public IndexRequest source(String source, XContentType xContentType) { + return source(new BytesArray(source), xContentType); } - public IndexRequest source(String field1, Object value1, String field2, Object value2, String field3, Object value3) { - try { - XContentBuilder builder = XContentFactory.contentBuilder(contentType); - builder.startObject().field(field1, value1).field(field2, value2).field(field3, value3).endObject(); - return source(builder); - } catch (IOException e) { - throw new ElasticsearchGenerationException("Failed to generate", e); - } + /** + * Sets the content source to index. 
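The `source(...)` changes above all follow one rule: the content type now travels with the source instead of being auto-detected or taken from a request-level default. A minimal sketch of the difference, using the signatures added in this hunk (index, type and id values are illustrative):

```java
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.common.xcontent.XContentType;

public class ExplicitContentTypeSketch {
    public static void main(String[] args) {
        String json = "{\"user\":\"kimchy\",\"message\":\"trying out Elasticsearch\"}";

        // Preferred: the content type is supplied together with the source.
        IndexRequest typed = new IndexRequest("twitter")
                .type("tweet")
                .id("1")
                .source(json, XContentType.JSON);

        // Deprecated path: the type is sniffed from the bytes on the way in.
        IndexRequest sniffed = new IndexRequest("twitter")
                .type("tweet")
                .id("2")
                .source(json);

        System.out.println(typed.getContentType());   // JSON
        System.out.println(sniffed.getContentType()); // JSON, but only because detection succeeded
    }
}
```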
+ */ + public IndexRequest source(XContentBuilder sourceBuilder) { + return source(sourceBuilder.bytes(), sourceBuilder.contentType()); } - public IndexRequest source(String field1, Object value1, String field2, Object value2, String field3, Object value3, String field4, Object value4) { - try { - XContentBuilder builder = XContentFactory.contentBuilder(contentType); - builder.startObject().field(field1, value1).field(field2, value2).field(field3, value3).field(field4, value4).endObject(); - return source(builder); - } catch (IOException e) { - throw new ElasticsearchGenerationException("Failed to generate", e); - } + /** + * Sets the content source to index using the default content type ({@link Requests#INDEX_CONTENT_TYPE}) + *

+ * Note: the number of objects passed to this method must be an even
+ * number. Also the first argument in each pair (the field name) must have a
+ * valid String representation.
+ *

+ */ + public IndexRequest source(Object... source) { + return source(Requests.INDEX_CONTENT_TYPE, source); } - public IndexRequest source(Object... source) { + /** + * Sets the content source to index. + *

+ * Note: the number of objects passed to this method as varargs must be an even
+ * number. Also the first argument in each pair (the field name) must have a
+ * valid String representation.
+ *

+ */ + public IndexRequest source(XContentType xContentType, Object... source) { if (source.length % 2 != 0) { throw new IllegalArgumentException("The number of object passed must be even but was [" + source.length + "]"); } @@ -482,7 +434,7 @@ public IndexRequest source(Object... source) { throw new IllegalArgumentException("you are using the removed method for source with bytes and unsafe flag, the unsafe flag was removed, please just use source(BytesReference)"); } try { - XContentBuilder builder = XContentFactory.contentBuilder(contentType); + XContentBuilder builder = XContentFactory.contentBuilder(xContentType); builder.startObject(); for (int i = 0; i < source.length; i++) { builder.field(source[i++].toString(), source[i]); @@ -496,19 +448,39 @@ public IndexRequest source(Object... source) { /** * Sets the document to index in bytes form. + * @deprecated use {@link #source(BytesReference, XContentType)} */ + @Deprecated public IndexRequest source(BytesReference source) { - this.source = source; + return source(source, XContentFactory.xContentType(source)); + + } + + /** + * Sets the document to index in bytes form. + */ + public IndexRequest source(BytesReference source, XContentType xContentType) { + this.source = Objects.requireNonNull(source); + this.contentType = Objects.requireNonNull(xContentType); return this; } /** * Sets the document to index in bytes form. + * @deprecated use {@link #source(byte[], XContentType)} */ + @Deprecated public IndexRequest source(byte[] source) { return source(source, 0, source.length); } + /** + * Sets the document to index in bytes form. + */ + public IndexRequest source(byte[] source, XContentType xContentType) { + return source(source, 0, source.length, xContentType); + } + /** * Sets the document to index in bytes form (assumed to be safe to be used from different * threads). @@ -516,30 +488,50 @@ public IndexRequest source(byte[] source) { * @param source The source to index * @param offset The offset in the byte array * @param length The length of the data + * @deprecated use {@link #source(byte[], int, int, XContentType)} */ + @Deprecated public IndexRequest source(byte[] source, int offset, int length) { - this.source = new BytesArray(source, offset, length); - return this; + return source(new BytesArray(source, offset, length), XContentFactory.xContentType(source)); + } + + /** + * Sets the document to index in bytes form (assumed to be safe to be used from different + * threads). + * + * @param source The source to index + * @param offset The offset in the byte array + * @param length The length of the data + */ + public IndexRequest source(byte[] source, int offset, int length, XContentType xContentType) { + return source(new BytesArray(source, offset, length), xContentType); } /** * Sets the type of operation to perform. */ public IndexRequest opType(OpType opType) { - this.opType = opType; - if (opType == OpType.CREATE) { - version(Versions.MATCH_DELETED); - versionType(VersionType.INTERNAL); + if (opType != OpType.CREATE && opType != OpType.INDEX) { + throw new IllegalArgumentException("opType must be 'create' or 'index', found: [" + opType + "]"); } + this.opType = opType; return this; } /** - * Sets a string representation of the {@link #opType(org.elasticsearch.action.index.IndexRequest.OpType)}. Can + * Sets a string representation of the {@link #opType(OpType)}. Can * be either "index" or "create". 
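For the varargs overloads above, the argument list must have even length and each even-indexed element becomes a field name via `toString()`; odd lengths are rejected before any building happens. A short sketch (field names and values are made up):

```java
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.common.xcontent.XContentType;

public class PairSourceSketch {
    public static void main(String[] args) {
        IndexRequest ok = new IndexRequest("twitter")
                .type("tweet")
                .id("1")
                .source(XContentType.JSON,
                        "user", "kimchy",
                        "post_date", "2009-11-15T14:12:12",
                        "message", "trying out Elasticsearch");
        System.out.println(ok.sourceAsMap());

        try {
            // Odd number of varargs: fails fast with IllegalArgumentException.
            new IndexRequest("twitter").type("tweet").source(XContentType.JSON, "user");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```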
*/ public IndexRequest opType(String opType) { - return opType(OpType.fromString(opType)); + String op = opType.toLowerCase(Locale.ROOT); + if (op.equals("create")) { + opType(OpType.CREATE); + } else if (op.equals("index")) { + opType(OpType.INDEX); + } else { + throw new IllegalArgumentException("opType must be 'create' or 'index', found: [" + opType + "]"); + } + return this; } @@ -554,34 +546,44 @@ public IndexRequest create(boolean create) { } } - /** - * The type of operation to perform. - */ + @Override public OpType opType() { return this.opType; } - /** - * Sets the version, which will cause the index operation to only be performed if a matching - * version exists and no changes happened on the doc since then. - */ + @Override public IndexRequest version(long version) { this.version = version; return this; } + /** + * Returns stored version. If currently stored version is {@link Versions#MATCH_ANY} and + * opType is {@link OpType#CREATE}, returns {@link Versions#MATCH_DELETED}. + */ + @Override public long version() { - return this.version; + return resolveVersionDefaults(); } /** - * Sets the versioning type. Defaults to {@link VersionType#INTERNAL}. + * Resolves the version based on operation type {@link #opType()}. */ + private long resolveVersionDefaults() { + if (opType == OpType.CREATE && version == Versions.MATCH_ANY) { + return Versions.MATCH_DELETED; + } else { + return version; + } + } + + @Override public IndexRequest versionType(VersionType versionType) { this.versionType = versionType; return this; } + @Override public VersionType versionType() { return this.versionType; } @@ -600,14 +602,18 @@ public void process(@Nullable MappingMetaData mappingMd, boolean allowIdGenerati } if (parent != null && !mappingMd.hasParentField()) { - throw new IllegalArgumentException("Can't specify parent if no parent field has been configured"); + throw new IllegalArgumentException("can't specify parent if no parent field has been configured"); } } else { if (parent != null) { - throw new IllegalArgumentException("Can't specify parent if no parent field has been configured"); + throw new IllegalArgumentException("can't specify parent if no parent field has been configured"); } } + if ("".equals(id)) { + throw new IllegalArgumentException("if _id is specified it must not be empty"); + } + // generate id if not already provided and id generation is allowed if (allowIdGeneration && id == null) { assert autoGeneratedTimestamp == -1; @@ -661,6 +667,11 @@ public void readFrom(StreamInput in) throws IOException { pipeline = in.readOptionalString(); isRetry = in.readBoolean(); autoGeneratedTimestamp = in.readLong(); + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + contentType = in.readOptionalWriteable(XContentType::readFrom); + } else { + contentType = XContentFactory.xContentType(source); + } } @Override @@ -673,19 +684,32 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalString(timestamp); out.writeOptionalWriteable(ttl); out.writeBytesReference(source); - out.writeByte(opType.id()); - out.writeLong(version); + out.writeByte(opType.getId()); + // ES versions below 5.1.2 don't know about resolveVersionDefaults but resolve the version eagerly (which messes with validation). 
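Two behavioural details in the block above are easy to miss: `opType(...)` now accepts only `create` and `index`, and a create request no longer has its version rewritten eagerly; `version()` resolves the default lazily through `resolveVersionDefaults()`. A compact sketch of both, using the public API as changed here (values are illustrative):

```java
import org.elasticsearch.action.DocWriteRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.common.lucene.uid.Versions;
import org.elasticsearch.common.xcontent.XContentType;

public class CreateOpTypeSketch {
    public static void main(String[] args) {
        IndexRequest request = new IndexRequest("twitter")
                .type("tweet")
                .id("1")
                .source("{\"user\":\"kimchy\"}", XContentType.JSON);

        request.opType(DocWriteRequest.OpType.CREATE);
        // No explicit version was set, so the getter resolves the create default.
        System.out.println(request.version() == Versions.MATCH_DELETED); // true

        try {
            request.opType("delete"); // only "create" and "index" are accepted
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```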
+ if (out.getVersion().before(Version.V_5_1_2)) { + out.writeLong(resolveVersionDefaults()); + } else { + out.writeLong(version); + } out.writeByte(versionType.getValue()); out.writeOptionalString(pipeline); out.writeBoolean(isRetry); out.writeLong(autoGeneratedTimestamp); + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + out.writeOptionalWriteable(contentType); + } } @Override public String toString() { String sSource = "_na_"; try { - sSource = XContentHelper.convertToJson(source, false); + if (source.length() > MAX_SOURCE_LENGTH_IN_TOSTRING) { + sSource = "n/a, actual length: [" + new ByteSizeValue(source.length()).toString() + "], max length: " + + new ByteSizeValue(MAX_SOURCE_LENGTH_IN_TOSTRING).toString(); + } else { + sSource = XContentHelper.convertToJson(source, false); + } } catch (Exception e) { // ignore } diff --git a/core/src/main/java/org/elasticsearch/action/index/IndexRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/index/IndexRequestBuilder.java index c4609e03aa5ed..309712ed23ee1 100644 --- a/core/src/main/java/org/elasticsearch/action/index/IndexRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/index/IndexRequestBuilder.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.index; +import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.support.WriteRequestBuilder; import org.elasticsearch.action.support.replication.ReplicationRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; @@ -82,12 +83,22 @@ public IndexRequestBuilder setParent(String parent) { /** * Sets the source. + * @deprecated use {@link #setSource(BytesReference, XContentType)} */ + @Deprecated public IndexRequestBuilder setSource(BytesReference source) { request.source(source); return this; } + /** + * Sets the source. + */ + public IndexRequestBuilder setSource(BytesReference source, XContentType xContentType) { + request.source(source, xContentType); + return this; + } + /** * Index the Map as a JSON. * @@ -112,13 +123,26 @@ public IndexRequestBuilder setSource(Map source, XContentType content * Sets the document source to index. *

* Note, its preferable to either set it using {@link #setSource(org.elasticsearch.common.xcontent.XContentBuilder)} - * or using the {@link #setSource(byte[])}. + * or using the {@link #setSource(byte[], XContentType)}. + * @deprecated use {@link #setSource(String, XContentType)} */ + @Deprecated public IndexRequestBuilder setSource(String source) { request.source(source); return this; } + /** + * Sets the document source to index. + *

+ * Note, its preferable to either set it using {@link #setSource(org.elasticsearch.common.xcontent.XContentBuilder)} + * or using the {@link #setSource(byte[], XContentType)}. + */ + public IndexRequestBuilder setSource(String source, XContentType xContentType) { + request.source(source, xContentType); + return this; + } + /** * Sets the content source to index. */ @@ -129,12 +153,22 @@ public IndexRequestBuilder setSource(XContentBuilder sourceBuilder) { /** * Sets the document to index in bytes form. + * @deprecated use {@link #setSource(byte[], XContentType)} */ + @Deprecated public IndexRequestBuilder setSource(byte[] source) { request.source(source); return this; } + /** + * Sets the document to index in bytes form. + */ + public IndexRequestBuilder setSource(byte[] source, XContentType xContentType) { + request.source(source, xContentType); + return this; + } + /** * Sets the document to index in bytes form (assumed to be safe to be used from different * threads). @@ -142,47 +176,35 @@ public IndexRequestBuilder setSource(byte[] source) { * @param source The source to index * @param offset The offset in the byte array * @param length The length of the data + * @deprecated use {@link #setSource(byte[], int, int, XContentType)} */ + @Deprecated public IndexRequestBuilder setSource(byte[] source, int offset, int length) { request.source(source, offset, length); return this; } /** - * Constructs a simple document with a field and a value. - */ - public IndexRequestBuilder setSource(String field1, Object value1) { - request.source(field1, value1); - return this; - } - - /** - * Constructs a simple document with a field and value pairs. - */ - public IndexRequestBuilder setSource(String field1, Object value1, String field2, Object value2) { - request.source(field1, value1, field2, value2); - return this; - } - - /** - * Constructs a simple document with a field and value pairs. - */ - public IndexRequestBuilder setSource(String field1, Object value1, String field2, Object value2, String field3, Object value3) { - request.source(field1, value1, field2, value2, field3, value3); - return this; - } - - /** - * Constructs a simple document with a field and value pairs. + * Sets the document to index in bytes form (assumed to be safe to be used from different + * threads). + * + * @param source The source to index + * @param offset The offset in the byte array + * @param length The length of the data + * @param xContentType The type/format of the source */ - public IndexRequestBuilder setSource(String field1, Object value1, String field2, Object value2, String field3, Object value3, String field4, Object value4) { - request.source(field1, value1, field2, value2, field3, value3, field4, value4); + public IndexRequestBuilder setSource(byte[] source, int offset, int length, XContentType xContentType) { + request.source(source, offset, length, xContentType); return this; } /** * Constructs a simple document with a field name and value pairs. - * Note: the number of objects passed to this method must be an even number. + *

+ * Note: the number of objects passed to this method must be an even
+ * number. Also the first argument in each pair (the field name) must have a
+ * valid String representation.
+ *

*/ public IndexRequestBuilder setSource(Object... source) { request.source(source); @@ -190,17 +212,22 @@ public IndexRequestBuilder setSource(Object... source) { } /** - * The content type that will be used to generate a document from user provided objects (like Map). + * Constructs a simple document with a field name and value pairs. + *

+ * Note: the number of objects passed as varargs to this method must be an even
+ * number. Also the first argument in each pair (the field name) must have a
+ * valid String representation.
+ *

*/ - public IndexRequestBuilder setContentType(XContentType contentType) { - request.contentType(contentType); + public IndexRequestBuilder setSource(XContentType xContentType, Object... source) { + request.source(xContentType, source); return this; } /** * Sets the type of operation to perform. */ - public IndexRequestBuilder setOpType(IndexRequest.OpType opType) { + public IndexRequestBuilder setOpType(DocWriteRequest.OpType opType) { request.opType(opType); return this; } @@ -233,6 +260,7 @@ public IndexRequestBuilder setVersionType(VersionType versionType) { /** * Sets the timestamp either as millis since the epoch, or, in the configured date format. */ + @Deprecated public IndexRequestBuilder setTimestamp(String timestamp) { request.timestamp(timestamp); return this; @@ -241,6 +269,7 @@ public IndexRequestBuilder setTimestamp(String timestamp) { /** * Sets the ttl value as a time value expression. */ + @Deprecated public IndexRequestBuilder setTTL(String ttl) { request.ttl(ttl); return this; @@ -249,6 +278,7 @@ public IndexRequestBuilder setTTL(String ttl) { /** * Sets the relative ttl value in milliseconds. It musts be greater than 0 as it makes little sense otherwise. */ + @Deprecated public IndexRequestBuilder setTTL(long ttl) { request.ttl(ttl); return this; @@ -257,6 +287,7 @@ public IndexRequestBuilder setTTL(long ttl) { /** * Sets the ttl as a {@link TimeValue} instance. */ + @Deprecated public IndexRequestBuilder setTTL(TimeValue ttl) { request.ttl(ttl); return this; diff --git a/core/src/main/java/org/elasticsearch/action/index/IndexResponse.java b/core/src/main/java/org/elasticsearch/action/index/IndexResponse.java index 9167740567c12..682005480720c 100644 --- a/core/src/main/java/org/elasticsearch/action/index/IndexResponse.java +++ b/core/src/main/java/org/elasticsearch/action/index/IndexResponse.java @@ -20,12 +20,16 @@ package org.elasticsearch.action.index; import org.elasticsearch.action.DocWriteResponse; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; import java.io.IOException; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + /** * A response of an index operation, * @@ -34,8 +38,9 @@ */ public class IndexResponse extends DocWriteResponse { - public IndexResponse() { + private static final String CREATED = "created"; + public IndexResponse() { } public IndexResponse(ShardId shardId, String type, String id, long version, boolean created) { @@ -56,14 +61,64 @@ public String toString() { builder.append(",id=").append(getId()); builder.append(",version=").append(getVersion()); builder.append(",result=").append(getResult().getLowercase()); - builder.append(",shards=").append(getShardInfo()); + builder.append(",shards=").append(Strings.toString(getShardInfo())); return builder.append("]").toString(); } @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - super.toXContent(builder, params); - builder.field("created", result == Result.CREATED); + public XContentBuilder innerToXContent(XContentBuilder builder, Params params) throws IOException { + super.innerToXContent(builder, params); + builder.field(CREATED, result == Result.CREATED); return builder; } + + public static IndexResponse fromXContent(XContentParser parser) throws IOException { + 
ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.nextToken(), parser::getTokenLocation); + + Builder context = new Builder(); + while (parser.nextToken() != XContentParser.Token.END_OBJECT) { + parseXContentFields(parser, context); + } + return context.build(); + } + + /** + * Parse the current token and update the parsing context appropriately. + */ + public static void parseXContentFields(XContentParser parser, Builder context) throws IOException { + XContentParser.Token token = parser.currentToken(); + String currentFieldName = parser.currentName(); + + if (CREATED.equals(currentFieldName)) { + if (token.isValue()) { + context.setCreated(parser.booleanValue()); + } + } else { + DocWriteResponse.parseInnerToXContent(parser, context); + } + } + + /** + * Builder class for {@link IndexResponse}. This builder is usually used during xcontent parsing to + * temporarily store the parsed values, then the {@link Builder#build()} method is called to + * instantiate the {@link IndexResponse}. + */ + public static class Builder extends DocWriteResponse.Builder { + + private boolean created = false; + + public void setCreated(boolean created) { + this.created = created; + } + + @Override + public IndexResponse build() { + IndexResponse indexResponse = new IndexResponse(shardId, type, id, version, created); + indexResponse.setForcedRefresh(forcedRefresh); + if (shardInfo != null) { + indexResponse.setShardInfo(shardInfo); + } + return indexResponse; + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/index/TransportIndexAction.java b/core/src/main/java/org/elasticsearch/action/index/TransportIndexAction.java index cc3fbb7906d2e..c925f86df973d 100644 --- a/core/src/main/java/org/elasticsearch/action/index/TransportIndexAction.java +++ b/core/src/main/java/org/elasticsearch/action/index/TransportIndexAction.java @@ -19,34 +19,16 @@ package org.elasticsearch.action.index; -import org.elasticsearch.ExceptionsHelper; -import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.admin.indices.create.CreateIndexRequest; -import org.elasticsearch.action.admin.indices.create.CreateIndexResponse; -import org.elasticsearch.action.admin.indices.create.TransportCreateIndexAction; +import org.elasticsearch.action.bulk.TransportBulkAction; +import org.elasticsearch.action.bulk.TransportShardBulkAction; +import org.elasticsearch.action.bulk.TransportSingleItemBulkWriteAction; import org.elasticsearch.action.support.ActionFilters; -import org.elasticsearch.action.support.AutoCreateIndex; -import org.elasticsearch.action.support.replication.ReplicationOperation; -import org.elasticsearch.action.support.replication.TransportWriteAction; -import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.action.index.MappingUpdatedAction; import org.elasticsearch.cluster.action.shard.ShardStateAction; -import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.metadata.MappingMetaData; -import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.index.mapper.Mapping; -import org.elasticsearch.index.mapper.SourceToParse; -import org.elasticsearch.index.shard.IndexShard; -import org.elasticsearch.index.shard.ShardId; -import 
org.elasticsearch.index.translog.Translog.Location; -import org.elasticsearch.indices.IndexAlreadyExistsException; import org.elasticsearch.indices.IndicesService; -import org.elasticsearch.tasks.Task; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -59,146 +41,26 @@ * Defaults to true. *
  • allowIdGeneration: If the id is set not, should it be generated. Defaults to true. * + * + * Deprecated use TransportBulkAction with a single item instead */ -public class TransportIndexAction extends TransportWriteAction { - - private final AutoCreateIndex autoCreateIndex; - private final boolean allowIdGeneration; - private final TransportCreateIndexAction createIndexAction; - - private final ClusterService clusterService; - private final MappingUpdatedAction mappingUpdatedAction; +@Deprecated +public class TransportIndexAction extends TransportSingleItemBulkWriteAction { @Inject public TransportIndexAction(Settings settings, TransportService transportService, ClusterService clusterService, - IndicesService indicesService, ThreadPool threadPool, ShardStateAction shardStateAction, - TransportCreateIndexAction createIndexAction, MappingUpdatedAction mappingUpdatedAction, + IndicesService indicesService, + ThreadPool threadPool, ShardStateAction shardStateAction, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, - AutoCreateIndex autoCreateIndex) { + TransportBulkAction bulkAction, TransportShardBulkAction shardBulkAction) { super(settings, IndexAction.NAME, transportService, clusterService, indicesService, threadPool, shardStateAction, - actionFilters, indexNameExpressionResolver, IndexRequest::new, ThreadPool.Names.INDEX); - this.mappingUpdatedAction = mappingUpdatedAction; - this.createIndexAction = createIndexAction; - this.autoCreateIndex = autoCreateIndex; - this.allowIdGeneration = settings.getAsBoolean("action.allow_id_generation", true); - this.clusterService = clusterService; - } - - @Override - protected void doExecute(Task task, final IndexRequest request, final ActionListener listener) { - // if we don't have a master, we don't have metadata, that's fine, let it find a master using create index API - ClusterState state = clusterService.state(); - if (autoCreateIndex.shouldAutoCreate(request.index(), state)) { - CreateIndexRequest createIndexRequest = new CreateIndexRequest(); - createIndexRequest.index(request.index()); - createIndexRequest.cause("auto(index api)"); - createIndexRequest.masterNodeTimeout(request.timeout()); - createIndexAction.execute(task, createIndexRequest, new ActionListener() { - @Override - public void onResponse(CreateIndexResponse result) { - innerExecute(task, request, listener); - } - - @Override - public void onFailure(Exception e) { - if (ExceptionsHelper.unwrapCause(e) instanceof IndexAlreadyExistsException) { - // we have the index, do it - try { - innerExecute(task, request, listener); - } catch (Exception inner) { - inner.addSuppressed(e); - listener.onFailure(inner); - } - } else { - listener.onFailure(e); - } - } - }); - } else { - innerExecute(task, request, listener); - } - } - - @Override - protected void resolveRequest(MetaData metaData, IndexMetaData indexMetaData, IndexRequest request) { - super.resolveRequest(metaData, indexMetaData, request); - MappingMetaData mappingMd =indexMetaData.mappingOrDefault(request.type()); - request.resolveRouting(metaData); - request.process(mappingMd, allowIdGeneration, indexMetaData.getIndex().getName()); - ShardId shardId = clusterService.operationRouting().shardId(clusterService.state(), - indexMetaData.getIndex().getName(), request.id(), request.routing()); - request.setShardId(shardId); - } - - private void innerExecute(Task task, final IndexRequest request, final ActionListener listener) { - super.doExecute(task, request, listener); + actionFilters, 
indexNameExpressionResolver, IndexRequest::new, IndexRequest::new, ThreadPool.Names.INDEX, + bulkAction, shardBulkAction); } @Override protected IndexResponse newResponseInstance() { return new IndexResponse(); } - - @Override - protected WriteResult onPrimaryShard(IndexRequest request, IndexShard indexShard) throws Exception { - return executeIndexRequestOnPrimary(request, indexShard, mappingUpdatedAction); - } - - @Override - protected Location onReplicaShard(IndexRequest request, IndexShard indexShard) { - return executeIndexRequestOnReplica(request, indexShard).getTranslogLocation(); - } - - /** - * Execute the given {@link IndexRequest} on a replica shard, throwing a - * {@link RetryOnReplicaException} if the operation needs to be re-tried. - */ - public static Engine.Index executeIndexRequestOnReplica(IndexRequest request, IndexShard indexShard) { - final ShardId shardId = indexShard.shardId(); - SourceToParse sourceToParse = SourceToParse.source(SourceToParse.Origin.REPLICA, shardId.getIndexName(), request.type(), request.id(), request.source()) - .routing(request.routing()).parent(request.parent()).timestamp(request.timestamp()).ttl(request.ttl()); - - final Engine.Index operation = indexShard.prepareIndexOnReplica(sourceToParse, request.version(), request.versionType(), request.getAutoGeneratedTimestamp(), request.isRetry()); - Mapping update = operation.parsedDoc().dynamicMappingsUpdate(); - if (update != null) { - throw new RetryOnReplicaException(shardId, "Mappings are not available on the replica yet, triggered update: " + update); - } - indexShard.index(operation); - return operation; - } - - /** Utility method to prepare an index operation on primary shards */ - public static Engine.Index prepareIndexOperationOnPrimary(IndexRequest request, IndexShard indexShard) { - SourceToParse sourceToParse = SourceToParse.source(SourceToParse.Origin.PRIMARY, request.index(), request.type(), request.id(), request.source()) - .routing(request.routing()).parent(request.parent()).timestamp(request.timestamp()).ttl(request.ttl()); - return indexShard.prepareIndexOnPrimary(sourceToParse, request.version(), request.versionType(), request.getAutoGeneratedTimestamp(), request.isRetry()); - } - - public static WriteResult executeIndexRequestOnPrimary(IndexRequest request, IndexShard indexShard, - MappingUpdatedAction mappingUpdatedAction) throws Exception { - Engine.Index operation = prepareIndexOperationOnPrimary(request, indexShard); - Mapping update = operation.parsedDoc().dynamicMappingsUpdate(); - final ShardId shardId = indexShard.shardId(); - if (update != null) { - mappingUpdatedAction.updateMappingOnMaster(shardId.getIndex(), request.type(), update); - operation = prepareIndexOperationOnPrimary(request, indexShard); - update = operation.parsedDoc().dynamicMappingsUpdate(); - if (update != null) { - throw new ReplicationOperation.RetryOnPrimaryException(shardId, - "Dynamic mappings are not available on the node that holds the primary yet"); - } - } - indexShard.index(operation); - - // update the version on request so it will happen on the replicas - final long version = operation.version(); - request.version(version); - request.versionType(request.versionType().versionTypeForReplicationAndRecovery()); - - assert request.versionType().validateVersionForWrites(request.version()); - - IndexResponse response = new IndexResponse(shardId, request.type(), request.id(), request.version(), operation.isCreated()); - return new WriteResult<>(response, operation.getTranslogLocation()); - } } diff 
--git a/core/src/main/java/org/elasticsearch/action/ingest/DeletePipelineRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/ingest/DeletePipelineRequestBuilder.java index fc14e0de2dfaa..90cbce135af2c 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/DeletePipelineRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/DeletePipelineRequestBuilder.java @@ -32,4 +32,12 @@ public DeletePipelineRequestBuilder(ElasticsearchClient client, DeletePipelineAc super(client, action, new DeletePipelineRequest(id)); } + /** + * Sets the id of the pipeline to delete. + */ + public DeletePipelineRequestBuilder setId(String id) { + request.setId(id); + return this; + } + } diff --git a/core/src/main/java/org/elasticsearch/action/ingest/DeletePipelineTransportAction.java b/core/src/main/java/org/elasticsearch/action/ingest/DeletePipelineTransportAction.java index 74ce894b05321..45cb83634f84f 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/DeletePipelineTransportAction.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/DeletePipelineTransportAction.java @@ -30,7 +30,7 @@ import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.ingest.PipelineStore; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineResponse.java b/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineResponse.java index f603a354f4b03..30843bdff9b28 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineResponse.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineResponse.java @@ -22,7 +22,7 @@ import org.elasticsearch.action.ActionResponse; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.StatusToXContent; +import org.elasticsearch.common.xcontent.StatusToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.ingest.PipelineConfiguration; import org.elasticsearch.rest.RestStatus; @@ -31,7 +31,7 @@ import java.util.ArrayList; import java.util.List; -public class GetPipelineResponse extends ActionResponse implements StatusToXContent { +public class GetPipelineResponse extends ActionResponse implements StatusToXContentObject { private List pipelines; @@ -52,7 +52,7 @@ public void readFrom(StreamInput in) throws IOException { int size = in.readVInt(); pipelines = new ArrayList<>(size); for (int i = 0; i < size; i++) { - pipelines.add(PipelineConfiguration.readPipelineConfiguration(in)); + pipelines.add(PipelineConfiguration.readFrom(in)); } } @@ -76,9 +76,11 @@ public RestStatus status() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); for (PipelineConfiguration pipeline : pipelines) { builder.field(pipeline.getId(), pipeline.getConfigAsMap()); } + builder.endObject(); return builder; } } diff --git a/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineTransportAction.java b/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineTransportAction.java index 8bac5c7b80434..f64b36d47aedb 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineTransportAction.java +++ 
b/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineTransportAction.java @@ -30,7 +30,7 @@ import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.ingest.PipelineStore; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/IngestActionFilter.java b/core/src/main/java/org/elasticsearch/action/ingest/IngestActionFilter.java deleted file mode 100644 index 68983ccfd22e7..0000000000000 --- a/core/src/main/java/org/elasticsearch/action/ingest/IngestActionFilter.java +++ /dev/null @@ -1,242 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.action.ingest; - -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; -import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.ActionRequest; -import org.elasticsearch.action.ActionResponse; -import org.elasticsearch.action.bulk.BulkAction; -import org.elasticsearch.action.bulk.BulkItemResponse; -import org.elasticsearch.action.bulk.BulkRequest; -import org.elasticsearch.action.bulk.BulkResponse; -import org.elasticsearch.action.index.IndexAction; -import org.elasticsearch.action.index.IndexRequest; -import org.elasticsearch.action.support.ActionFilter; -import org.elasticsearch.action.support.ActionFilterChain; -import org.elasticsearch.common.Strings; -import org.elasticsearch.common.component.AbstractComponent; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.ingest.PipelineExecutionService; -import org.elasticsearch.node.service.NodeService; -import org.elasticsearch.tasks.Task; - -import java.util.ArrayList; -import java.util.HashSet; -import java.util.Iterator; -import java.util.List; -import java.util.Set; -import java.util.concurrent.TimeUnit; - -public final class IngestActionFilter extends AbstractComponent implements ActionFilter { - - private final PipelineExecutionService executionService; - - @Inject - public IngestActionFilter(Settings settings, NodeService nodeService) { - super(settings); - this.executionService = nodeService.getIngestService().getPipelineExecutionService(); - } - - @Override - public , Response extends ActionResponse> void apply(Task task, String action, Request request, ActionListener listener, ActionFilterChain chain) { - switch (action) { - case IndexAction.NAME: - IndexRequest indexRequest = (IndexRequest) request; - if (Strings.hasText(indexRequest.getPipeline())) { - processIndexRequest(task, action, 
listener, chain, (IndexRequest) request); - } else { - chain.proceed(task, action, request, listener); - } - break; - case BulkAction.NAME: - BulkRequest bulkRequest = (BulkRequest) request; - if (bulkRequest.hasIndexRequestsWithPipelines()) { - @SuppressWarnings("unchecked") - ActionListener actionListener = (ActionListener) listener; - processBulkIndexRequest(task, bulkRequest, action, chain, actionListener); - } else { - chain.proceed(task, action, request, listener); - } - break; - default: - chain.proceed(task, action, request, listener); - break; - } - } - - @Override - public void apply(String action, Response response, ActionListener listener, ActionFilterChain chain) { - chain.proceed(action, response, listener); - } - - void processIndexRequest(Task task, String action, ActionListener listener, ActionFilterChain chain, IndexRequest indexRequest) { - - executionService.executeIndexRequest(indexRequest, t -> { - logger.error((Supplier) () -> new ParameterizedMessage("failed to execute pipeline [{}]", indexRequest.getPipeline()), t); - listener.onFailure(t); - }, success -> { - // TransportIndexAction uses IndexRequest and same action name on the node that receives the request and the node that - // processes the primary action. This could lead to a pipeline being executed twice for the same - // index request, hence we set the pipeline to null once its execution completed. - indexRequest.setPipeline(null); - chain.proceed(task, action, indexRequest, listener); - }); - } - - void processBulkIndexRequest(Task task, BulkRequest original, String action, ActionFilterChain chain, ActionListener listener) { - long ingestStartTimeInNanos = System.nanoTime(); - BulkRequestModifier bulkRequestModifier = new BulkRequestModifier(original); - executionService.executeBulkRequest(() -> bulkRequestModifier, (indexRequest, exception) -> { - logger.debug((Supplier) () -> new ParameterizedMessage("failed to execute pipeline [{}] for document [{}/{}/{}]", indexRequest.getPipeline(), indexRequest.index(), indexRequest.type(), indexRequest.id()), exception); - bulkRequestModifier.markCurrentItemAsFailed(exception); - }, (exception) -> { - if (exception != null) { - logger.error("failed to execute pipeline for a bulk request", exception); - listener.onFailure(exception); - } else { - long ingestTookInMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - ingestStartTimeInNanos); - BulkRequest bulkRequest = bulkRequestModifier.getBulkRequest(); - ActionListener actionListener = bulkRequestModifier.wrapActionListenerIfNeeded(ingestTookInMillis, listener); - if (bulkRequest.requests().isEmpty()) { - // at this stage, the transport bulk action can't deal with a bulk request with no requests, - // so we stop and send an empty response back to the client. 
- // (this will happen if pre-processing all items in the bulk failed) - actionListener.onResponse(new BulkResponse(new BulkItemResponse[0], 0)); - } else { - chain.proceed(task, action, bulkRequest, actionListener); - } - } - }); - } - - @Override - public int order() { - return Integer.MAX_VALUE; - } - - static final class BulkRequestModifier implements Iterator> { - - final BulkRequest bulkRequest; - final Set failedSlots; - final List itemResponses; - - int currentSlot = -1; - int[] originalSlots; - - BulkRequestModifier(BulkRequest bulkRequest) { - this.bulkRequest = bulkRequest; - this.failedSlots = new HashSet<>(); - this.itemResponses = new ArrayList<>(bulkRequest.requests().size()); - } - - @Override - public ActionRequest next() { - return bulkRequest.requests().get(++currentSlot); - } - - @Override - public boolean hasNext() { - return (currentSlot + 1) < bulkRequest.requests().size(); - } - - BulkRequest getBulkRequest() { - if (itemResponses.isEmpty()) { - return bulkRequest; - } else { - BulkRequest modifiedBulkRequest = new BulkRequest(); - modifiedBulkRequest.setRefreshPolicy(bulkRequest.getRefreshPolicy()); - modifiedBulkRequest.waitForActiveShards(bulkRequest.waitForActiveShards()); - modifiedBulkRequest.timeout(bulkRequest.timeout()); - - int slot = 0; - originalSlots = new int[bulkRequest.requests().size() - failedSlots.size()]; - for (int i = 0; i < bulkRequest.requests().size(); i++) { - ActionRequest request = bulkRequest.requests().get(i); - if (failedSlots.contains(i) == false) { - modifiedBulkRequest.add(request); - originalSlots[slot++] = i; - } - } - return modifiedBulkRequest; - } - } - - ActionListener wrapActionListenerIfNeeded(long ingestTookInMillis, ActionListener actionListener) { - if (itemResponses.isEmpty()) { - return new ActionListener() { - @Override - public void onResponse(BulkResponse response) { - actionListener.onResponse(new BulkResponse(response.getItems(), response.getTookInMillis(), ingestTookInMillis)); - } - - @Override - public void onFailure(Exception e) { - actionListener.onFailure(e); - } - }; - } else { - return new IngestBulkResponseListener(ingestTookInMillis, originalSlots, itemResponses, actionListener); - } - } - - void markCurrentItemAsFailed(Exception e) { - IndexRequest indexRequest = (IndexRequest) bulkRequest.requests().get(currentSlot); - // We hit a error during preprocessing a request, so we: - // 1) Remember the request item slot from the bulk, so that we're done processing all requests we know what failed - // 2) Add a bulk item failure for this request - // 3) Continue with the next request in the bulk. 
- failedSlots.add(currentSlot); - BulkItemResponse.Failure failure = new BulkItemResponse.Failure(indexRequest.index(), indexRequest.type(), indexRequest.id(), e); - itemResponses.add(new BulkItemResponse(currentSlot, indexRequest.opType().lowercase(), failure)); - } - - } - - static final class IngestBulkResponseListener implements ActionListener { - - private final long ingestTookInMillis; - private final int[] originalSlots; - private final List itemResponses; - private final ActionListener actionListener; - - IngestBulkResponseListener(long ingestTookInMillis, int[] originalSlots, List itemResponses, ActionListener actionListener) { - this.ingestTookInMillis = ingestTookInMillis; - this.itemResponses = itemResponses; - this.actionListener = actionListener; - this.originalSlots = originalSlots; - } - - @Override - public void onResponse(BulkResponse response) { - for (int i = 0; i < response.getItems().length; i++) { - itemResponses.add(originalSlots[i], response.getItems()[i]); - } - actionListener.onResponse(new BulkResponse(itemResponses.toArray(new BulkItemResponse[itemResponses.size()]), response.getTookInMillis(), ingestTookInMillis)); - } - - @Override - public void onFailure(Exception e) { - actionListener.onFailure(e); - } - } -} diff --git a/core/src/main/java/org/elasticsearch/action/ingest/IngestActionForwarder.java b/core/src/main/java/org/elasticsearch/action/ingest/IngestActionForwarder.java new file mode 100644 index 0000000000000..8b163eb1eedf8 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/ingest/IngestActionForwarder.java @@ -0,0 +1,68 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.ingest; + +import org.elasticsearch.action.Action; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.ActionListenerResponseHandler; +import org.elasticsearch.action.ActionRequest; +import org.elasticsearch.cluster.ClusterChangedEvent; +import org.elasticsearch.cluster.ClusterStateApplier; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.common.Randomness; +import org.elasticsearch.transport.TransportService; + +import java.util.concurrent.atomic.AtomicInteger; + +/** + * A utility for forwarding ingest requests to ingest nodes in a round-robin fashion. 
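The `IngestActionForwarder` introduced in the hunk just above takes over the node picking that used to live in the deleted `IngestProxyActionFilter` (removed further below). The detail worth noting in the body that follows is `Math.floorMod` over an ever-incrementing counter, which stays in range even when the counter overflows and so replaces the old reset-to-zero workaround. A self-contained sketch of just that selection logic; the class and method names here are invented for illustration:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinPickerSketch {

    // Random starting point so independent coordinators do not all hit the same node first.
    private final AtomicInteger generator = new AtomicInteger(ThreadLocalRandom.current().nextInt());

    <T> T pick(T[] candidates) {
        if (candidates.length == 0) {
            throw new IllegalStateException("no candidates available to forward to");
        }
        // floorMod keeps the index in [0, length) even after the counter wraps past Integer.MAX_VALUE.
        return candidates[Math.floorMod(generator.incrementAndGet(), candidates.length)];
    }

    public static void main(String[] args) {
        RoundRobinPickerSketch picker = new RoundRobinPickerSketch();
        String[] ingestNodes = {"node-a", "node-b", "node-c"};
        for (int i = 0; i < 6; i++) {
            System.out.println(picker.pick(ingestNodes));
        }
    }
}
```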
+ * + * TODO: move this into IngestService and make index/bulk actions call that + */ +public final class IngestActionForwarder implements ClusterStateApplier { + + private final TransportService transportService; + private final AtomicInteger ingestNodeGenerator = new AtomicInteger(Randomness.get().nextInt()); + private DiscoveryNode[] ingestNodes; + + public IngestActionForwarder(TransportService transportService) { + this.transportService = transportService; + ingestNodes = new DiscoveryNode[0]; + } + + public void forwardIngestRequest(Action action, ActionRequest request, ActionListener listener) { + transportService.sendRequest(randomIngestNode(), action.name(), request, + new ActionListenerResponseHandler(listener, action::newResponse)); + } + + private DiscoveryNode randomIngestNode() { + final DiscoveryNode[] nodes = ingestNodes; + if (nodes.length == 0) { + throw new IllegalStateException("There are no ingest nodes in this cluster, unable to forward request to an ingest node."); + } + + return nodes[Math.floorMod(ingestNodeGenerator.incrementAndGet(), nodes.length)]; + } + + @Override + public void applyClusterState(ClusterChangedEvent event) { + ingestNodes = event.state().getNodes().getIngestNodes().values().toArray(DiscoveryNode.class); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/ingest/IngestProxyActionFilter.java b/core/src/main/java/org/elasticsearch/action/ingest/IngestProxyActionFilter.java deleted file mode 100644 index 5d2aea389dceb..0000000000000 --- a/core/src/main/java/org/elasticsearch/action/ingest/IngestProxyActionFilter.java +++ /dev/null @@ -1,119 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ -package org.elasticsearch.action.ingest; - -import org.elasticsearch.action.Action; -import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.ActionListenerResponseHandler; -import org.elasticsearch.action.ActionRequest; -import org.elasticsearch.action.ActionResponse; -import org.elasticsearch.action.bulk.BulkAction; -import org.elasticsearch.action.bulk.BulkRequest; -import org.elasticsearch.action.index.IndexAction; -import org.elasticsearch.action.index.IndexRequest; -import org.elasticsearch.action.support.ActionFilter; -import org.elasticsearch.action.support.ActionFilterChain; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.node.DiscoveryNodes; -import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.Randomness; -import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.tasks.Task; -import org.elasticsearch.transport.TransportResponse; -import org.elasticsearch.transport.TransportService; - -import java.util.concurrent.atomic.AtomicInteger; - -public final class IngestProxyActionFilter implements ActionFilter { - - private final ClusterService clusterService; - private final TransportService transportService; - private final AtomicInteger randomNodeGenerator = new AtomicInteger(Randomness.get().nextInt()); - - @Inject - public IngestProxyActionFilter(ClusterService clusterService, TransportService transportService) { - this.clusterService = clusterService; - this.transportService = transportService; - } - - @Override - public , Response extends ActionResponse> void apply(Task task, String action, Request request, ActionListener listener, ActionFilterChain chain) { - Action ingestAction; - switch (action) { - case IndexAction.NAME: - ingestAction = IndexAction.INSTANCE; - IndexRequest indexRequest = (IndexRequest) request; - if (Strings.hasText(indexRequest.getPipeline())) { - forwardIngestRequest(ingestAction, request, listener); - } else { - chain.proceed(task, action, request, listener); - } - break; - case BulkAction.NAME: - ingestAction = BulkAction.INSTANCE; - BulkRequest bulkRequest = (BulkRequest) request; - if (bulkRequest.hasIndexRequestsWithPipelines()) { - forwardIngestRequest(ingestAction, request, listener); - } else { - chain.proceed(task, action, request, listener); - } - break; - default: - chain.proceed(task, action, request, listener); - break; - } - } - - @SuppressWarnings("unchecked") - private void forwardIngestRequest(Action action, ActionRequest request, ActionListener listener) { - transportService.sendRequest(randomIngestNode(), action.name(), request, new ActionListenerResponseHandler(listener, action::newResponse)); - } - - @Override - public void apply(String action, Response response, ActionListener listener, ActionFilterChain chain) { - chain.proceed(action, response, listener); - } - - @Override - public int order() { - return Integer.MAX_VALUE; - } - - private DiscoveryNode randomIngestNode() { - assert clusterService.localNode().isIngestNode() == false; - DiscoveryNodes nodes = clusterService.state().getNodes(); - DiscoveryNode[] ingestNodes = nodes.getIngestNodes().values().toArray(DiscoveryNode.class); - if (ingestNodes.length == 0) { - throw new IllegalStateException("There are no ingest nodes in this cluster, unable to forward request to an ingest node."); - } - - int index = getNodeNumber(); - return ingestNodes[(index) % ingestNodes.length]; - } - - private int getNodeNumber() { - int 
index = randomNodeGenerator.incrementAndGet(); - if (index < 0) { - index = 0; - randomNodeGenerator.set(0); - } - return index; - } -} diff --git a/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequest.java b/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequest.java index 10416146ba853..394349ca01691 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequest.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequest.java @@ -19,32 +19,40 @@ package org.elasticsearch.action.ingest; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.support.master.AcknowledgedRequest; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentType; import java.io.IOException; import java.util.Objects; -import static org.elasticsearch.action.ValidateActions.addValidationError; - public class PutPipelineRequest extends AcknowledgedRequest { private String id; private BytesReference source; + private XContentType xContentType; + /** + * Create a new pipeline request + * @deprecated use {@link #PutPipelineRequest(String, BytesReference, XContentType)} to avoid content type auto-detection + */ + @Deprecated public PutPipelineRequest(String id, BytesReference source) { - if (id == null) { - throw new IllegalArgumentException("id is missing"); - } - if (source == null) { - throw new IllegalArgumentException("source is missing"); - } + this(id, source, XContentFactory.xContentType(source)); + } - this.id = id; - this.source = source; + /** + * Create a new pipeline request with the id and source along with the content type of the source + */ + public PutPipelineRequest(String id, BytesReference source, XContentType xContentType) { + this.id = Objects.requireNonNull(id); + this.source = Objects.requireNonNull(source); + this.xContentType = Objects.requireNonNull(xContentType); } PutPipelineRequest() { @@ -63,11 +71,20 @@ public BytesReference getSource() { return source; } + public XContentType getXContentType() { + return xContentType; + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); id = in.readString(); source = in.readBytesReference(); + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType = XContentType.readFrom(in); + } else { + xContentType = XContentFactory.xContentType(source); + } } @Override @@ -75,5 +92,8 @@ public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); out.writeString(id); out.writeBytesReference(source); + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType.writeTo(out); + } } } diff --git a/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequestBuilder.java index bd927115fb5ff..c03b3b84f8b5b 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequestBuilder.java @@ -22,6 +22,7 @@ import org.elasticsearch.action.ActionRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.xcontent.XContentType; public class 
PutPipelineRequestBuilder extends ActionRequestBuilder { @@ -29,8 +30,13 @@ public PutPipelineRequestBuilder(ElasticsearchClient client, PutPipelineAction a super(client, action, new PutPipelineRequest()); } + @Deprecated public PutPipelineRequestBuilder(ElasticsearchClient client, PutPipelineAction action, String id, BytesReference source) { super(client, action, new PutPipelineRequest(id, source)); } + public PutPipelineRequestBuilder(ElasticsearchClient client, PutPipelineAction action, String id, BytesReference source, + XContentType xContentType) { + super(client, action, new PutPipelineRequest(id, source, xContentType)); + } } diff --git a/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineTransportAction.java b/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineTransportAction.java index 82cd8d8eb7b32..7dde981804939 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineTransportAction.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineTransportAction.java @@ -36,7 +36,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.ingest.PipelineStore; import org.elasticsearch.ingest.IngestInfo; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/SimulateDocumentBaseResult.java b/core/src/main/java/org/elasticsearch/action/ingest/SimulateDocumentBaseResult.java index 82b39ac897242..c6252feea276c 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/SimulateDocumentBaseResult.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/SimulateDocumentBaseResult.java @@ -84,7 +84,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws if (failure == null) { ingestDocument.toXContent(builder, params); } else { - ElasticsearchException.renderException(builder, params, failure); + ElasticsearchException.generateFailureXContent(builder, params, failure, true); } builder.endObject(); return builder; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequest.java b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequest.java index a63f7a30dbeac..8d93fa5747558 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequest.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequest.java @@ -19,11 +19,14 @@ package org.elasticsearch.action.ingest; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequest; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.ingest.ConfigurationUtils; import org.elasticsearch.ingest.IngestDocument; import org.elasticsearch.ingest.Pipeline; @@ -34,20 +37,32 @@ import java.util.Collections; import java.util.List; import java.util.Map; +import java.util.Objects; import static org.elasticsearch.ingest.IngestDocument.MetaData; -public class SimulatePipelineRequest extends ActionRequest { +public class SimulatePipelineRequest extends ActionRequest { private String id; private boolean verbose; private 
BytesReference source; + private XContentType xContentType; + /** + * Create a new request + * @deprecated use {@link #SimulatePipelineRequest(BytesReference, XContentType)} that does not attempt content autodetection + */ + @Deprecated public SimulatePipelineRequest(BytesReference source) { - if (source == null) { - throw new IllegalArgumentException("source is missing"); - } - this.source = source; + this(source, XContentFactory.xContentType(source)); + } + + /** + * Creates a new request with the given source and its content type + */ + public SimulatePipelineRequest(BytesReference source, XContentType xContentType) { + this.source = Objects.requireNonNull(source); + this.xContentType = Objects.requireNonNull(xContentType); } SimulatePipelineRequest() { @@ -78,12 +93,21 @@ public BytesReference getSource() { return source; } + public XContentType getXContentType() { + return xContentType; + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); id = in.readOptionalString(); verbose = in.readBoolean(); source = in.readBytesReference(); + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType = XContentType.readFrom(in); + } else { + xContentType = XContentFactory.xContentType(source); + } } @Override @@ -92,6 +116,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalString(id); out.writeBoolean(verbose); out.writeBytesReference(source); + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType.writeTo(out); + } } public static final class Fields { @@ -135,18 +162,18 @@ static Parsed parseWithPipelineId(String pipelineId, Map config, if (pipeline == null) { throw new IllegalArgumentException("pipeline [" + pipelineId + "] does not exist"); } - List ingestDocumentList = parseDocs(config); + List ingestDocumentList = parseDocs(config, pipelineStore.isNewIngestDateFormat()); return new Parsed(pipeline, ingestDocumentList, verbose); } static Parsed parse(Map config, boolean verbose, PipelineStore pipelineStore) throws Exception { Map pipelineConfig = ConfigurationUtils.readMap(null, null, config, Fields.PIPELINE); Pipeline pipeline = PIPELINE_FACTORY.create(SIMULATED_PIPELINE_ID, pipelineConfig, pipelineStore.getProcessorFactories()); - List ingestDocumentList = parseDocs(config); + List ingestDocumentList = parseDocs(config, pipelineStore.isNewIngestDateFormat()); return new Parsed(pipeline, ingestDocumentList, verbose); } - private static List parseDocs(Map config) { + private static List parseDocs(Map config, boolean newDateFormat) { List> docs = ConfigurationUtils.readList(null, null, config, Fields.DOCS); List ingestDocumentList = new ArrayList<>(); for (Map dataMap : docs) { @@ -158,7 +185,7 @@ private static List parseDocs(Map config) { ConfigurationUtils.readOptionalStringProperty(null, null, dataMap, MetaData.PARENT.getFieldName()), ConfigurationUtils.readOptionalStringProperty(null, null, dataMap, MetaData.TIMESTAMP.getFieldName()), ConfigurationUtils.readOptionalStringProperty(null, null, dataMap, MetaData.TTL.getFieldName()), - document); + document, newDateFormat); ingestDocumentList.add(ingestDocument); } return ingestDocumentList; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequestBuilder.java index 4a13fa111e6a2..bb5d0e4e40003 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequestBuilder.java +++ 
b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequestBuilder.java @@ -22,22 +22,46 @@ import org.elasticsearch.action.ActionRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.xcontent.XContentType; public class SimulatePipelineRequestBuilder extends ActionRequestBuilder { + /** + * Create a new builder for {@link SimulatePipelineRequest}s + */ public SimulatePipelineRequestBuilder(ElasticsearchClient client, SimulatePipelineAction action) { super(client, action, new SimulatePipelineRequest()); } + /** + * Create a new builder for {@link SimulatePipelineRequest}s + * @deprecated use {@link #SimulatePipelineRequestBuilder(ElasticsearchClient, SimulatePipelineAction, BytesReference, XContentType)} to + * avoid content type auto-detection on the source bytes + */ + @Deprecated public SimulatePipelineRequestBuilder(ElasticsearchClient client, SimulatePipelineAction action, BytesReference source) { super(client, action, new SimulatePipelineRequest(source)); } + /** + * Create a new builder for {@link SimulatePipelineRequest}s + */ + public SimulatePipelineRequestBuilder(ElasticsearchClient client, SimulatePipelineAction action, BytesReference source, + XContentType xContentType) { + super(client, action, new SimulatePipelineRequest(source, xContentType)); + } + + /** + * Set the id for the pipeline to simulate + */ public SimulatePipelineRequestBuilder setId(String id) { request.setId(id); return this; } + /** + * Enable or disable verbose mode + */ public SimulatePipelineRequestBuilder setVerbose(boolean verbose) { request.setVerbose(verbose); return this; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineResponse.java b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineResponse.java index 83029a1aab502..e9ea1a7750738 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineResponse.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineResponse.java @@ -22,7 +22,7 @@ import org.elasticsearch.action.ActionResponse; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -30,7 +30,7 @@ import java.util.Collections; import java.util.List; -public class SimulatePipelineResponse extends ActionResponse implements ToXContent { +public class SimulatePipelineResponse extends ActionResponse implements ToXContentObject { private String pipelineId; private boolean verbose; private List results; @@ -88,11 +88,13 @@ public void readFrom(StreamInput in) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startArray(Fields.DOCUMENTS); for (SimulateDocumentResult response : results) { response.toXContent(builder, params); } builder.endArray(); + builder.endObject(); return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineTransportAction.java b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineTransportAction.java index 4f9a219c8ad9e..3f67007df690d 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineTransportAction.java +++ 
b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineTransportAction.java @@ -19,7 +19,6 @@ package org.elasticsearch.action.ingest; -import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.HandledTransportAction; @@ -28,7 +27,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.ingest.PipelineStore; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -48,7 +47,7 @@ public SimulatePipelineTransportAction(Settings settings, ThreadPool threadPool, @Override protected void doExecute(SimulatePipelineRequest request, ActionListener listener) { - final Map source = XContentHelper.convertToMap(request.getSource(), false).v2(); + final Map source = XContentHelper.convertToMap(request.getSource(), false, request.getXContentType()).v2(); final SimulatePipelineRequest.Parsed simulateRequest; try { diff --git a/core/src/main/java/org/elasticsearch/action/ingest/SimulateProcessorResult.java b/core/src/main/java/org/elasticsearch/action/ingest/SimulateProcessorResult.java index c978cc56d9ef1..3ebcb6cb6f373 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/SimulateProcessorResult.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/SimulateProcessorResult.java @@ -19,7 +19,6 @@ package org.elasticsearch.action.ingest; import org.elasticsearch.ElasticsearchException; -import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; @@ -99,10 +98,10 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws if (failure != null && ingestDocument != null) { builder.startObject("ignored_error"); - ElasticsearchException.renderException(builder, params, failure); + ElasticsearchException.generateFailureXContent(builder, params, failure, true); builder.endObject(); } else if (failure != null) { - ElasticsearchException.renderException(builder, params, failure); + ElasticsearchException.generateFailureXContent(builder, params, failure, true); } if (ingestDocument != null) { diff --git a/core/src/main/java/org/elasticsearch/action/main/MainRequest.java b/core/src/main/java/org/elasticsearch/action/main/MainRequest.java index 1484bc2a3e9c1..1736e56a8dc06 100644 --- a/core/src/main/java/org/elasticsearch/action/main/MainRequest.java +++ b/core/src/main/java/org/elasticsearch/action/main/MainRequest.java @@ -22,7 +22,7 @@ import org.elasticsearch.action.ActionRequest; import org.elasticsearch.action.ActionRequestValidationException; -public class MainRequest extends ActionRequest { +public class MainRequest extends ActionRequest { @Override public ActionRequestValidationException validate() { diff --git a/core/src/main/java/org/elasticsearch/action/main/MainResponse.java b/core/src/main/java/org/elasticsearch/action/main/MainResponse.java index 2403c3ee49cfd..39d4f31a1939a 100644 --- a/core/src/main/java/org/elasticsearch/action/main/MainResponse.java +++ b/core/src/main/java/org/elasticsearch/action/main/MainResponse.java @@ -23,28 +23,34 @@ import org.elasticsearch.Version; import org.elasticsearch.action.ActionResponse; import 
org.elasticsearch.cluster.ClusterName; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import java.io.IOException; +import java.util.Objects; -public class MainResponse extends ActionResponse implements ToXContent { +public class MainResponse extends ActionResponse implements ToXContentObject { private String nodeName; private Version version; private ClusterName clusterName; + private String clusterUuid; private Build build; private boolean available; MainResponse() { } - public MainResponse(String nodeName, Version version, ClusterName clusterName, Build build, boolean available) { + public MainResponse(String nodeName, Version version, ClusterName clusterName, String clusterUuid, Build build, boolean available) { this.nodeName = nodeName; this.version = version; this.clusterName = clusterName; + this.clusterUuid = clusterUuid; this.build = build; this.available = available; } @@ -61,6 +67,10 @@ public ClusterName getClusterName() { return clusterName; } + public String getClusterUuid() { + return clusterUuid; + } + public Build getBuild() { return build; } @@ -75,6 +85,7 @@ public void writeTo(StreamOutput out) throws IOException { out.writeString(nodeName); Version.writeVersion(version, out); clusterName.writeTo(out); + out.writeString(clusterUuid); Build.writeBuild(build, out); out.writeBoolean(available); } @@ -85,6 +96,7 @@ public void readFrom(StreamInput in) throws IOException { nodeName = in.readString(); version = Version.readVersion(in); clusterName = new ClusterName(in); + clusterUuid = in.readString(); build = Build.readBuild(in); available = in.readBoolean(); } @@ -94,6 +106,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.startObject(); builder.field("name", nodeName); builder.field("cluster_name", clusterName.value()); + builder.field("cluster_uuid", clusterUuid); builder.startObject("version") .field("number", version.toString()) .field("build_hash", build.shortHash()) @@ -105,4 +118,46 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.endObject(); return builder; } + + private static final ObjectParser PARSER = new ObjectParser<>(MainResponse.class.getName(), true, + () -> new MainResponse()); + + static { + PARSER.declareString((response, value) -> response.nodeName = value, new ParseField("name")); + PARSER.declareString((response, value) -> response.clusterName = new ClusterName(value), new ParseField("cluster_name")); + PARSER.declareString((response, value) -> response.clusterUuid = value, new ParseField("cluster_uuid")); + PARSER.declareString((response, value) -> {}, new ParseField("tagline")); + PARSER.declareObject((response, value) -> { + response.build = new Build((String) value.get("build_hash"), (String) value.get("build_date"), + (boolean) value.get("build_snapshot")); + response.version = Version.fromString((String) value.get("number")); + response.available = true; + }, (parser, context) -> parser.map(), new ParseField("version")); + } + + public static MainResponse fromXContent(XContentParser parser) { + return PARSER.apply(parser, null); + } + + @Override + public boolean 
equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + MainResponse other = (MainResponse) o; + return Objects.equals(nodeName, other.nodeName) && + Objects.equals(version, other.version) && + Objects.equals(clusterUuid, other.clusterUuid) && + Objects.equals(build, other.build) && + Objects.equals(available, other.available) && + Objects.equals(clusterName, other.clusterName); + } + + @Override + public int hashCode() { + return Objects.hash(nodeName, version, clusterUuid, build, clusterName, available); + } } diff --git a/core/src/main/java/org/elasticsearch/action/main/TransportMainAction.java b/core/src/main/java/org/elasticsearch/action/main/TransportMainAction.java index c37268a52de8b..368696a9553d9 100644 --- a/core/src/main/java/org/elasticsearch/action/main/TransportMainAction.java +++ b/core/src/main/java/org/elasticsearch/action/main/TransportMainAction.java @@ -52,7 +52,7 @@ protected void doExecute(MainRequest request, ActionListener liste assert Node.NODE_NAME_SETTING.exists(settings); final boolean available = clusterState.getBlocks().hasGlobalBlock(RestStatus.SERVICE_UNAVAILABLE) == false; listener.onResponse( - new MainResponse(Node.NODE_NAME_SETTING.get(settings), Version.CURRENT, clusterState.getClusterName(), Build.CURRENT, - available)); + new MainResponse(Node.NODE_NAME_SETTING.get(settings), Version.CURRENT, clusterState.getClusterName(), + clusterState.metaData().clusterUUID(), Build.CURRENT, available)); } } diff --git a/core/src/main/java/org/elasticsearch/action/search/AbstractAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/AbstractAsyncAction.java deleted file mode 100644 index 3ce14d8dacd87..0000000000000 --- a/core/src/main/java/org/elasticsearch/action/search/AbstractAsyncAction.java +++ /dev/null @@ -1,50 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.action.search; - -/** - * Base implementation for an async action. - */ -abstract class AbstractAsyncAction { - - private final long startTime; - - protected AbstractAsyncAction() { - this.startTime = System.currentTimeMillis(); - } - - /** - * Return the time when the action started. - */ - protected final long startTime() { - return startTime; - } - - /** - * Builds how long it took to execute the search. 
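The deleted base class guards against the wall clock going backwards with `Math.max(1, ...)` over `System.currentTimeMillis()`; the replacement AbstractSearchAsyncAction further down instead derives the took time from relative nanosecond timestamps supplied by a time provider, which a monotonic `System.nanoTime`-style clock makes safe by construction. A rough sketch of that pattern, assuming the time provider simply wraps `System.nanoTime`:

```java
import java.util.concurrent.TimeUnit;

final class TookTimer {
    private final long relativeStartNanos = System.nanoTime();

    // Monotonic difference: never negative, unaffected by system clock adjustments.
    long buildTookInMillis() {
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - relativeStartNanos);
    }
}
```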
- */ - protected final long buildTookInMillis() { - // protect ourselves against time going backwards - // negative values don't make sense and we want to be able to serialize that thing as a vLong - return Math.max(1, System.currentTimeMillis() - startTime); - } - - abstract void start(); -} diff --git a/core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java index d86134c7d4467..89be2ecabeb24 100644 --- a/core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java @@ -19,397 +19,305 @@ package org.elasticsearch.action.search; -import com.carrotsearch.hppc.IntArrayList; import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; -import org.apache.lucene.search.ScoreDoc; +import org.apache.lucene.util.SetOnce; +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.NoShardAvailableActionException; +import org.elasticsearch.action.ShardOperationFailedException; import org.elasticsearch.action.support.TransportActions; -import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.block.ClusterBlockLevel; -import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.routing.GroupShardsIterator; -import org.elasticsearch.cluster.routing.ShardIterator; -import org.elasticsearch.cluster.routing.ShardRouting; -import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.util.concurrent.AtomicArray; import org.elasticsearch.search.SearchPhaseResult; import org.elasticsearch.search.SearchShardTarget; -import org.elasticsearch.search.action.SearchTransportService; -import org.elasticsearch.search.controller.SearchPhaseController; -import org.elasticsearch.search.fetch.ShardFetchSearchRequest; +import org.elasticsearch.search.internal.AliasFilter; import org.elasticsearch.search.internal.InternalSearchResponse; import org.elasticsearch.search.internal.ShardSearchTransportRequest; -import org.elasticsearch.search.query.QuerySearchResult; -import org.elasticsearch.search.query.QuerySearchResultProvider; -import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.Transport; import java.util.List; import java.util.Map; -import java.util.Set; +import java.util.concurrent.Executor; +import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; - -import static org.elasticsearch.action.search.TransportSearchHelper.internalSearchRequest; - -abstract class AbstractSearchAsyncAction extends AbstractAsyncAction { - - protected final Logger logger; - protected final SearchTransportService searchTransportService; - private final IndexNameExpressionResolver indexNameExpressionResolver; - protected final SearchPhaseController searchPhaseController; - protected final ThreadPool threadPool; - protected final ActionListener listener; - protected final GroupShardsIterator shardsIts; - protected final SearchRequest request; - protected final ClusterState clusterState; - protected final DiscoveryNodes nodes; - 
protected final int expectedSuccessfulOps; - private final int expectedTotalOps; - protected final AtomicInteger successfulOps = new AtomicInteger(); - private final AtomicInteger totalOps = new AtomicInteger(); - protected final AtomicArray firstResults; - private volatile AtomicArray shardFailures; +import java.util.function.BiFunction; +import java.util.stream.Collectors; + +abstract class AbstractSearchAsyncAction extends InitialSearchPhase + implements SearchPhaseContext { + private static final float DEFAULT_INDEX_BOOST = 1.0f; + private final Logger logger; + private final SearchTransportService searchTransportService; + private final Executor executor; + private final ActionListener listener; + private final SearchRequest request; + /** + * Used by subclasses to resolve node ids to DiscoveryNodes. + **/ + private final BiFunction nodeIdToConnection; + private final SearchTask task; + private final SearchPhaseResults results; + private final long clusterStateVersion; + private final Map aliasFilter; + private final Map concreteIndexBoosts; + private final SetOnce> shardFailures = new SetOnce<>(); private final Object shardFailuresMutex = new Object(); - protected volatile ScoreDoc[] sortedShardDocs; - - protected AbstractSearchAsyncAction(Logger logger, SearchTransportService searchTransportService, ClusterService clusterService, - IndexNameExpressionResolver indexNameExpressionResolver, - SearchPhaseController searchPhaseController, ThreadPool threadPool, SearchRequest request, - ActionListener listener) { + private final AtomicInteger successfulOps = new AtomicInteger(); + private final AtomicInteger skippedOps = new AtomicInteger(); + private final TransportSearchAction.SearchTimeProvider timeProvider; + + + protected AbstractSearchAsyncAction(String name, Logger logger, SearchTransportService searchTransportService, + BiFunction nodeIdToConnection, + Map aliasFilter, Map concreteIndexBoosts, + Executor executor, SearchRequest request, + ActionListener listener, GroupShardsIterator shardsIts, + TransportSearchAction.SearchTimeProvider timeProvider, long clusterStateVersion, + SearchTask task, SearchPhaseResults resultConsumer) { + super(name, request, shardsIts, logger); + this.timeProvider = timeProvider; this.logger = logger; this.searchTransportService = searchTransportService; - this.indexNameExpressionResolver = indexNameExpressionResolver; - this.searchPhaseController = searchPhaseController; - this.threadPool = threadPool; + this.executor = executor; this.request = request; + this.task = task; this.listener = listener; - - this.clusterState = clusterService.state(); - nodes = clusterState.nodes(); - - clusterState.blocks().globalBlockedRaiseException(ClusterBlockLevel.READ); - - // TODO: I think startTime() should become part of ActionRequest and that should be used both for index name - // date math expressions and $now in scripts. 
This way all apis will deal with now in the same way instead - // of just for the _search api - String[] concreteIndices = indexNameExpressionResolver.concreteIndexNames(clusterState, request.indicesOptions(), - startTime(), request.indices()); - - for (String index : concreteIndices) { - clusterState.blocks().indexBlockedRaiseException(ClusterBlockLevel.READ, index); - } - - Map> routingMap = indexNameExpressionResolver.resolveSearchRouting(clusterState, request.routing(), - request.indices()); - - shardsIts = clusterService.operationRouting().searchShards(clusterState, concreteIndices, routingMap, request.preference()); - final int shardCount = shardsIts.size(); - failIfOverShardCountLimit(clusterService, shardCount); - expectedSuccessfulOps = shardCount; - // we need to add 1 for non active partition, since we count it in the total! - expectedTotalOps = shardsIts.totalSizeWith1ForEmpty(); - - firstResults = new AtomicArray<>(shardsIts.size()); + this.nodeIdToConnection = nodeIdToConnection; + this.clusterStateVersion = clusterStateVersion; + this.concreteIndexBoosts = concreteIndexBoosts; + this.aliasFilter = aliasFilter; + this.results = resultConsumer; } - private void failIfOverShardCountLimit(ClusterService clusterService, int shardCount) { - final long shardCountLimit = clusterService.getClusterSettings().get(TransportSearchAction.SHARD_COUNT_LIMIT_SETTING); - if (shardCount > shardCountLimit) { - throw new IllegalArgumentException("Trying to query " + shardCount + " shards, which is over the limit of " - + shardCountLimit + ". This limit exists because querying many shards at the same time can make the " - + "job of the coordinating node very CPU and/or memory intensive. It is usually a better idea to " - + "have a smaller number of larger shards. Update [" + TransportSearchAction.SHARD_COUNT_LIMIT_SETTING.getKey() - + "] to a greater value if you really want to query that many shards at the same time."); - } + /** + * Builds how long it took to execute the search. + */ + long buildTookInMillis() { + return TimeUnit.NANOSECONDS.toMillis( + timeProvider.getRelativeCurrentNanos() - timeProvider.getRelativeStartNanos()); } - public void start() { - if (expectedSuccessfulOps == 0) { + /** + * This is the main entry point for a search. This method starts the search execution of the initial phase. + */ + public final void start() { + if (getNumShards() == 0) { //no search shards to search on, bail with empty response //(it happens with search across _all with no indices around and consistent with broadcast operations) - listener.onResponse(new SearchResponse(InternalSearchResponse.empty(), null, 0, 0, buildTookInMillis(), + listener.onResponse(new SearchResponse(InternalSearchResponse.empty(), null, 0, 0, 0, buildTookInMillis(), ShardSearchFailure.EMPTY_ARRAY)); return; } - int shardIndex = -1; - for (final ShardIterator shardIt : shardsIts) { - shardIndex++; - final ShardRouting shard = shardIt.nextOrNull(); - if (shard != null) { - performFirstPhase(shardIndex, shardIt, shard); - } else { - // really, no shards active in this group - onFirstPhaseResult(shardIndex, null, null, shardIt, new NoShardAvailableActionException(shardIt.shardId())); - } - } + executePhase(this); } - void performFirstPhase(final int shardIndex, final ShardIterator shardIt, final ShardRouting shard) { - if (shard == null) { - // no more active shards... 
(we should not really get here, but just for safety) - onFirstPhaseResult(shardIndex, null, null, shardIt, new NoShardAvailableActionException(shardIt.shardId())); - } else { - final DiscoveryNode node = nodes.get(shard.currentNodeId()); - if (node == null) { - onFirstPhaseResult(shardIndex, shard, null, shardIt, new NoShardAvailableActionException(shardIt.shardId())); - } else { - String[] filteringAliases = indexNameExpressionResolver.filteringAliases(clusterState, - shard.index().getName(), request.indices()); - sendExecuteFirstPhase(node, internalSearchRequest(shard, shardsIts.size(), request, filteringAliases, - startTime()), new ActionListener() { - @Override - public void onResponse(FirstResult result) { - onFirstPhaseResult(shardIndex, shard, result, shardIt); - } - - @Override - public void onFailure(Exception t) { - onFirstPhaseResult(shardIndex, shard, node.getId(), shardIt, t); - } - }); + @Override + public final void executeNextPhase(SearchPhase currentPhase, SearchPhase nextPhase) { + /* This is the main search phase transition where we move to the next phase. At this point we check if there is + * at least one successful operation left and if so we move to the next phase. If not we immediately fail the + * search phase as "all shards failed"*/ + if (successfulOps.get() == 0) { // we have 0 successful results that means we shortcut stuff and return a failure + if (logger.isDebugEnabled()) { + final ShardOperationFailedException[] shardSearchFailures = ExceptionsHelper.groupBy(buildShardFailures()); + Throwable cause = shardSearchFailures.length == 0 ? null : + ElasticsearchException.guessRootCauses(shardSearchFailures[0].getCause())[0]; + logger.debug((Supplier) () -> new ParameterizedMessage("All shards failed for phase: [{}]", getName()), + cause); } - } - } - - void onFirstPhaseResult(int shardIndex, ShardRouting shard, FirstResult result, ShardIterator shardIt) { - result.shardTarget(new SearchShardTarget(shard.currentNodeId(), shard.index(), shard.id())); - processFirstPhaseResult(shardIndex, result); - // we need to increment successful ops first before we compare the exit condition otherwise if we - // are fast we could concurrently update totalOps but then preempt one of the threads which can - // cause the successor to read a wrong value from successfulOps if second phase is very fast ie. count etc. - successfulOps.incrementAndGet(); - // increment all the "future" shards to update the total ops since we some may work and some may not... 
- // and when that happens, we break on total ops, so we must maintain them - final int xTotalOps = totalOps.addAndGet(shardIt.remaining() + 1); - if (xTotalOps == expectedTotalOps) { - try { - innerMoveToSecondPhase(); - } catch (Exception e) { - if (logger.isDebugEnabled()) { - logger.debug( - (Supplier) () -> new ParameterizedMessage( - "{}: Failed to execute [{}] while moving to second phase", - shardIt.shardId(), - request), - e); - } - raiseEarlyFailure(new ReduceSearchPhaseException(firstPhaseName(), "", e, buildShardFailures())); + onPhaseFailure(currentPhase, "all shards failed", null); + } else { + if (logger.isTraceEnabled()) { + final String resultsFrom = results.getSuccessfulResults() + .map(r -> r.getSearchShardTarget().toString()).collect(Collectors.joining(",")); + logger.trace("[{}] Moving to next phase: [{}], based on results from: {} (cluster state version: {})", + currentPhase.getName(), nextPhase.getName(), resultsFrom, clusterStateVersion); } - } else if (xTotalOps > expectedTotalOps) { - raiseEarlyFailure(new IllegalStateException("unexpected higher total ops [" + xTotalOps + "] compared " + - "to expected [" + expectedTotalOps + "]")); + executePhase(nextPhase); } } - void onFirstPhaseResult(final int shardIndex, @Nullable ShardRouting shard, @Nullable String nodeId, - final ShardIterator shardIt, Exception e) { - // we always add the shard failure for a specific shard instance - // we do make sure to clean it on a successful response from a shard - SearchShardTarget shardTarget = new SearchShardTarget(nodeId, shardIt.shardId().getIndex(), shardIt.shardId().getId()); - addShardFailure(shardIndex, shardTarget, e); - - if (totalOps.incrementAndGet() == expectedTotalOps) { + private void executePhase(SearchPhase phase) { + try { + phase.run(); + } catch (Exception e) { if (logger.isDebugEnabled()) { - if (e != null && !TransportActions.isShardNotAvailableException(e)) { - logger.debug( - (Supplier) () -> new ParameterizedMessage( - "{}: Failed to execute [{}]", - shard != null ? shard.shortSummary() : - shardIt.shardId(), - request), - e); - } else if (logger.isTraceEnabled()) { - logger.trace((Supplier) () -> new ParameterizedMessage("{}: Failed to execute [{}]", shard, request), e); - } - } - final ShardSearchFailure[] shardSearchFailures = buildShardFailures(); - if (successfulOps.get() == 0) { - if (logger.isDebugEnabled()) { - logger.debug((Supplier) () -> new ParameterizedMessage("All shards failed for phase: [{}]", firstPhaseName()), e); - } - - // no successful ops, raise an exception - raiseEarlyFailure(new SearchPhaseExecutionException(firstPhaseName(), "all shards failed", e, shardSearchFailures)); - } else { - try { - innerMoveToSecondPhase(); - } catch (Exception inner) { - inner.addSuppressed(e); - raiseEarlyFailure(new ReduceSearchPhaseException(firstPhaseName(), "", inner, shardSearchFailures)); - } - } - } else { - final ShardRouting nextShard = shardIt.nextOrNull(); - final boolean lastShard = nextShard == null; - // trace log this exception - logger.trace( - (Supplier) () -> new ParameterizedMessage( - "{}: Failed to execute [{}] lastShard [{}]", - shard != null ? 
shard.shortSummary() : shardIt.shardId(), - request, - lastShard), - e); - if (!lastShard) { - try { - performFirstPhase(shardIndex, shardIt, nextShard); - } catch (Exception inner) { - inner.addSuppressed(e); - onFirstPhaseResult(shardIndex, shard, shard.currentNodeId(), shardIt, inner); - } - } else { - // no more shards active, add a failure - if (logger.isDebugEnabled() && !logger.isTraceEnabled()) { // do not double log this exception - if (e != null && !TransportActions.isShardNotAvailableException(e)) { - logger.debug( - (Supplier) () -> new ParameterizedMessage( - "{}: Failed to execute [{}] lastShard [{}]", - shard != null ? shard.shortSummary() : - shardIt.shardId(), - request, - lastShard), - e); - } - } + logger.debug( + (Supplier) () -> new ParameterizedMessage( + "Failed to execute [{}] while moving to [{}] phase", request, phase.getName()), + e); } + onPhaseFailure(phase, "", e); } } - protected final ShardSearchFailure[] buildShardFailures() { - AtomicArray shardFailures = this.shardFailures; + + private ShardSearchFailure[] buildShardFailures() { + AtomicArray shardFailures = this.shardFailures.get(); if (shardFailures == null) { return ShardSearchFailure.EMPTY_ARRAY; } - List> entries = shardFailures.asList(); + List entries = shardFailures.asList(); ShardSearchFailure[] failures = new ShardSearchFailure[entries.size()]; for (int i = 0; i < failures.length; i++) { - failures[i] = entries.get(i).value; + failures[i] = entries.get(i); } return failures; } - protected final void addShardFailure(final int shardIndex, @Nullable SearchShardTarget shardTarget, Exception e) { + public final void onShardFailure(final int shardIndex, @Nullable SearchShardTarget shardTarget, Exception e) { // we don't aggregate shard failures on non active shards (but do keep the header counts right) - if (TransportActions.isShardNotAvailableException(e)) { - return; - } - - // lazily create shard failures, so we can early build the empty shard failure list in most cases (no failures) - if (shardFailures == null) { - synchronized (shardFailuresMutex) { - if (shardFailures == null) { - shardFailures = new AtomicArray<>(shardsIts.size()); + if (TransportActions.isShardNotAvailableException(e) == false) { + AtomicArray shardFailures = this.shardFailures.get(); + // lazily create shard failures, so we can early build the empty shard failure list in most cases (no failures) + if (shardFailures == null) { // this is double checked locking but it's fine since SetOnce uses a volatile read internally + synchronized (shardFailuresMutex) { + shardFailures = this.shardFailures.get(); // read again otherwise somebody else has created it? 
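The surrounding onShardFailure code lazily allocates the per-shard failure array with classic double-checked locking: an unsynchronized read, a synchronized re-read, and only then the allocation, relying on SetOnce's volatile read for safe publication. A condensed sketch of the idiom, using a plain volatile field in place of SetOnce:

```java
final class LazyFailures {
    private volatile String[] failures;   // volatile gives safe publication for double-checked locking
    private final Object mutex = new Object();

    String[] getOrCreate(int numShards) {
        String[] local = failures;        // first, unsynchronized read
        if (local == null) {
            synchronized (mutex) {
                local = failures;         // re-read: someone else may have created it meanwhile
                if (local == null) {
                    local = new String[numShards];
                    failures = local;     // publish only once fully constructed
                }
            }
        }
        return local;
    }
}
```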
+ if (shardFailures == null) { // still null so we are the first and create a new instance + shardFailures = new AtomicArray<>(getNumShards()); + this.shardFailures.set(shardFailures); + } } } - } - ShardSearchFailure failure = shardFailures.get(shardIndex); - if (failure == null) { - shardFailures.set(shardIndex, new ShardSearchFailure(e, shardTarget)); - } else { - // the failure is already present, try and not override it with an exception that is less meaningless - // for example, getting illegal shard state - if (TransportActions.isReadOverrideException(e)) { + ShardSearchFailure failure = shardFailures.get(shardIndex); + if (failure == null) { shardFailures.set(shardIndex, new ShardSearchFailure(e, shardTarget)); + } else { + // the failure is already present, try and not override it with an exception that is less meaningless + // for example, getting illegal shard state + if (TransportActions.isReadOverrideException(e)) { + shardFailures.set(shardIndex, new ShardSearchFailure(e, shardTarget)); + } + } + + if (results.hasResult(shardIndex)) { + assert failure == null : "shard failed before but shouldn't: " + failure; + successfulOps.decrementAndGet(); // if this shard was successful before (initial phase) we have to adjust the counter } } + results.consumeShardFailure(shardIndex); } - private void raiseEarlyFailure(Exception e) { - for (AtomicArray.Entry entry : firstResults.asList()) { + /** + * This method should be called if a search phase failed to ensure all relevant search contexts and resources are released. + * this method will also notify the listener and sends back a failure to the user. + * + * @param exception the exception explaining or causing the phase failure + */ + private void raisePhaseFailure(SearchPhaseExecutionException exception) { + results.getSuccessfulResults().forEach((entry) -> { try { - DiscoveryNode node = nodes.get(entry.value.shardTarget().nodeId()); - sendReleaseSearchContext(entry.value.id(), node); + SearchShardTarget searchShardTarget = entry.getSearchShardTarget(); + Transport.Connection connection = getConnection(null, searchShardTarget.getNodeId()); + sendReleaseSearchContext(entry.getRequestId(), connection, searchShardTarget.getOriginalIndices()); } catch (Exception inner) { - inner.addSuppressed(e); + inner.addSuppressed(exception); logger.trace("failed to release context", inner); } - } - listener.onFailure(e); + }); + listener.onFailure(exception); } - /** - * Releases shard targets that are not used in the docsIdsToLoad. - */ - protected void releaseIrrelevantSearchContexts(AtomicArray queryResults, - AtomicArray docIdsToLoad) { - if (docIdsToLoad == null) { - return; + @Override + public final void onShardSuccess(Result result) { + successfulOps.incrementAndGet(); + results.consumeResult(result); + if (logger.isTraceEnabled()) { + logger.trace("got first-phase result from {}", result != null ? 
result.getSearchShardTarget() : null); } - // we only release search context that we did not fetch from if we are not scrolling - if (request.scroll() == null) { - for (AtomicArray.Entry entry : queryResults.asList()) { - QuerySearchResult queryResult = entry.value.queryResult(); - if (queryResult.hasHits() - && docIdsToLoad.get(entry.index) == null) { // but none of them made it to the global top docs - try { - DiscoveryNode node = nodes.get(entry.value.queryResult().shardTarget().nodeId()); - sendReleaseSearchContext(entry.value.queryResult().id(), node); - } catch (Exception e) { - logger.trace("failed to release context", e); - } - } - } + // clean a previous error on this shard group (note, this code will be serialized on the same shardIndex value level + // so its ok concurrency wise to miss potentially the shard failures being created because of another failure + // in the #addShardFailure, because by definition, it will happen on *another* shardIndex + AtomicArray shardFailures = this.shardFailures.get(); + if (shardFailures != null) { + shardFailures.set(result.getShardIndex(), null); } } - protected void sendReleaseSearchContext(long contextId, DiscoveryNode node) { - if (node != null) { - searchTransportService.sendFreeContext(node, contextId, request); - } + @Override + public final void onPhaseDone() { + executeNextPhase(this, getNextPhase(results, this)); } - protected ShardFetchSearchRequest createFetchRequest(QuerySearchResult queryResult, AtomicArray.Entry entry, - ScoreDoc[] lastEmittedDocPerShard) { - final ScoreDoc lastEmittedDoc = (lastEmittedDocPerShard != null) ? lastEmittedDocPerShard[entry.index] : null; - return new ShardFetchSearchRequest(request, queryResult.id(), entry.value, lastEmittedDoc); + @Override + public final int getNumShards() { + return results.getNumShards(); } - protected abstract void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportRequest request, - ActionListener listener); + @Override + public final Logger getLogger() { + return logger; + } - protected final void processFirstPhaseResult(int shardIndex, FirstResult result) { - firstResults.set(shardIndex, result); + @Override + public final SearchTask getTask() { + return task; + } - if (logger.isTraceEnabled()) { - logger.trace("got first-phase result from {}", result != null ? 
result.shardTarget() : null); - } + @Override + public final SearchRequest getRequest() { + return request; + } - // clean a previous error on this shard group (note, this code will be serialized on the same shardIndex value level - // so its ok concurrency wise to miss potentially the shard failures being created because of another failure - // in the #addShardFailure, because by definition, it will happen on *another* shardIndex - AtomicArray shardFailures = this.shardFailures; - if (shardFailures != null) { - shardFailures.set(shardIndex, null); - } + @Override + public final SearchResponse buildSearchResponse(InternalSearchResponse internalSearchResponse, String scrollId) { + return new SearchResponse(internalSearchResponse, scrollId, getNumShards(), successfulOps.get(), + skippedOps.get(), buildTookInMillis(), buildShardFailures()); } - final void innerMoveToSecondPhase() throws Exception { - if (logger.isTraceEnabled()) { - StringBuilder sb = new StringBuilder(); - boolean hadOne = false; - for (int i = 0; i < firstResults.length(); i++) { - FirstResult result = firstResults.get(i); - if (result == null) { - continue; // failure - } - if (hadOne) { - sb.append(","); - } else { - hadOne = true; - } - sb.append(result.shardTarget()); - } + @Override + public final void onPhaseFailure(SearchPhase phase, String msg, Throwable cause) { + raisePhaseFailure(new SearchPhaseExecutionException(phase.getName(), msg, cause, buildShardFailures())); + } - logger.trace("Moving to second phase, based on results from: {} (cluster state version: {})", sb, clusterState.version()); - } - moveToSecondPhase(); + @Override + public final Transport.Connection getConnection(String clusterAlias, String nodeId) { + return nodeIdToConnection.apply(clusterAlias, nodeId); + } + + @Override + public final SearchTransportService getSearchTransport() { + return searchTransportService; } - protected abstract void moveToSecondPhase() throws Exception; + @Override + public final void execute(Runnable command) { + executor.execute(command); + } + + @Override + public final void onResponse(SearchResponse response) { + listener.onResponse(response); + } - protected abstract String firstPhaseName(); + @Override + public final void onFailure(Exception e) { + listener.onFailure(e); + } + + public final ShardSearchTransportRequest buildShardSearchRequest(SearchShardIterator shardIt) { + String clusterAlias = shardIt.getClusterAlias(); + AliasFilter filter = aliasFilter.get(shardIt.shardId().getIndex().getUUID()); + assert filter != null; + float indexBoost = concreteIndexBoosts.getOrDefault(shardIt.shardId().getIndex().getUUID(), DEFAULT_INDEX_BOOST); + return new ShardSearchTransportRequest(shardIt.getOriginalIndices(), request, shardIt.shardId(), getNumShards(), + filter, indexBoost, timeProvider.getAbsoluteStartMillis(), clusterAlias); + } + + /** + * Returns the next phase based on the results of the initial search phase + * @param results the results of the initial search phase. 
Each non null element in the result array represent a successfully + * executed shard request + * @param context the search context for the next phase + */ + protected abstract SearchPhase getNextPhase(SearchPhaseResults results, SearchPhaseContext context); + + @Override + protected void skipShard(SearchShardIterator iterator) { + successfulOps.incrementAndGet(); + skippedOps.incrementAndGet(); + super.skipShard(iterator); + } } diff --git a/core/src/main/java/org/elasticsearch/action/search/CanMatchPreFilterSearchPhase.java b/core/src/main/java/org/elasticsearch/action/search/CanMatchPreFilterSearchPhase.java new file mode 100644 index 0000000000000..ea5cf831859de --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/CanMatchPreFilterSearchPhase.java @@ -0,0 +1,143 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import org.apache.logging.log4j.Logger; +import org.apache.lucene.util.FixedBitSet; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.cluster.routing.GroupShardsIterator; +import org.elasticsearch.cluster.routing.ShardRouting; +import org.elasticsearch.search.internal.AliasFilter; +import org.elasticsearch.transport.Transport; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.concurrent.Executor; +import java.util.function.BiFunction; +import java.util.function.Function; +import java.util.stream.Stream; + +/** + * This search phrase can be used as an initial search phase to pre-filter search shards based on query rewriting. + * The queries are rewritten against the shards and based on the rewrite result shards might be able to be excluded + * from the search. The extra round trip to the search shards is very cheap and is not subject to rejections + * which allows to fan out to more shards at the same time without running into rejections even if we are hitting a + * large portion of the clusters indices. 
+ */ +final class CanMatchPreFilterSearchPhase extends AbstractSearchAsyncAction { + + private final Function, SearchPhase> phaseFactory; + private final GroupShardsIterator shardsIts; + + CanMatchPreFilterSearchPhase(Logger logger, SearchTransportService searchTransportService, + BiFunction nodeIdToConnection, + Map aliasFilter, Map concreteIndexBoosts, + Executor executor, SearchRequest request, + ActionListener listener, GroupShardsIterator shardsIts, + TransportSearchAction.SearchTimeProvider timeProvider, long clusterStateVersion, + SearchTask task, Function, SearchPhase> phaseFactory) { + super("can_match", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, executor, request, + listener, + shardsIts, timeProvider, clusterStateVersion, task, new BitSetSearchPhaseResults(shardsIts.size())); + this.phaseFactory = phaseFactory; + this.shardsIts = shardsIts; + } + + @Override + protected void executePhaseOnShard(SearchShardIterator shardIt, ShardRouting shard, + SearchActionListener listener) { + getSearchTransport().sendCanMatch(getConnection(shardIt.getClusterAlias(), shard.currentNodeId()), + buildShardSearchRequest(shardIt), getTask(), listener); + } + + @Override + protected SearchPhase getNextPhase(SearchPhaseResults results, + SearchPhaseContext context) { + + return phaseFactory.apply(getIterator((BitSetSearchPhaseResults) results, shardsIts)); + } + + private GroupShardsIterator getIterator(BitSetSearchPhaseResults results, + GroupShardsIterator shardsIts) { + int cardinality = results.getNumPossibleMatches(); + FixedBitSet possibleMatches = results.getPossibleMatches(); + if (cardinality == 0) { + // this is a special case where we have no hit but we need to get at least one search response in order + // to produce a valid search result with all the aggs etc. + possibleMatches.set(0); + } + int i = 0; + for (SearchShardIterator iter : shardsIts) { + if (possibleMatches.get(i++)) { + iter.reset(); + } else { + iter.resetAndSkip(); + } + } + return shardsIts; + } + + private static final class BitSetSearchPhaseResults extends InitialSearchPhase. + SearchPhaseResults { + + private final FixedBitSet possibleMatches; + private int numPossibleMatches; + + BitSetSearchPhaseResults(int size) { + super(size); + possibleMatches = new FixedBitSet(size); + } + + @Override + void consumeResult(SearchTransportService.CanMatchResponse result) { + if (result.canMatch()) { + consumeShardFailure(result.getShardIndex()); + } + } + + @Override + boolean hasResult(int shardIndex) { + return false; // unneeded + } + + @Override + synchronized void consumeShardFailure(int shardIndex) { + // we have to carry over shard failures in order to account for them in the response. + possibleMatches.set(shardIndex); + numPossibleMatches++; + } + + + synchronized int getNumPossibleMatches() { + return numPossibleMatches; + } + + synchronized FixedBitSet getPossibleMatches() { + return possibleMatches; + } + + @Override + Stream getSuccessfulResults() { + return Stream.empty(); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/ClearScrollController.java b/core/src/main/java/org/elasticsearch/action/search/ClearScrollController.java new file mode 100644 index 0000000000000..ac708d9b6b0c7 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/ClearScrollController.java @@ -0,0 +1,143 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.logging.log4j.util.Supplier; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.node.DiscoveryNodes; +import org.elasticsearch.common.util.concurrent.CountDown; +import org.elasticsearch.transport.Transport; +import org.elasticsearch.transport.TransportResponse; + +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; + +import static org.elasticsearch.action.search.TransportSearchHelper.parseScrollId; + +final class ClearScrollController implements Runnable { + private final DiscoveryNodes nodes; + private final SearchTransportService searchTransportService; + private final CountDown expectedOps; + private final ActionListener listener; + private final AtomicBoolean hasFailed = new AtomicBoolean(false); + private final AtomicInteger freedSearchContexts = new AtomicInteger(0); + private final Logger logger; + private final Runnable runner; + + ClearScrollController(ClearScrollRequest request, ActionListener listener, DiscoveryNodes nodes, Logger logger, + SearchTransportService searchTransportService) { + this.nodes = nodes; + this.logger = logger; + this.searchTransportService = searchTransportService; + this.listener = listener; + List scrollIds = request.getScrollIds(); + final int expectedOps; + if (scrollIds.size() == 1 && "_all".equals(scrollIds.get(0))) { + expectedOps = nodes.getSize(); + runner = this::cleanAllScrolls; + } else { + List parsedScrollIds = new ArrayList<>(); + for (String parsedScrollId : request.getScrollIds()) { + ScrollIdForNode[] context = parseScrollId(parsedScrollId).getContext(); + for (ScrollIdForNode id : context) { + parsedScrollIds.add(id); + } + } + if (parsedScrollIds.isEmpty()) { + expectedOps = 0; + runner = () -> listener.onResponse(new ClearScrollResponse(true, 0)); + } else { + expectedOps = parsedScrollIds.size(); + runner = () -> cleanScrollIds(parsedScrollIds); + } + } + this.expectedOps = new CountDown(expectedOps); + + } + + @Override + public void run() { + runner.run(); + } + + void cleanAllScrolls() { + for (final DiscoveryNode node : nodes) { + try { + Transport.Connection connection = searchTransportService.getConnection(null, node); + searchTransportService.sendClearAllScrollContexts(connection, new ActionListener() { + @Override + public void onResponse(TransportResponse response) { + onFreedContext(true); + } + + @Override + public void onFailure(Exception e) { + onFailedFreedContext(e, node); + } + }); + } catch (Exception e) { + onFailedFreedContext(e, node); + } + } + } + + void 
cleanScrollIds(List parsedScrollIds) { + SearchScrollAsyncAction.collectNodesAndRun(parsedScrollIds, nodes, searchTransportService, ActionListener.wrap( + lookup -> { + for (ScrollIdForNode target : parsedScrollIds) { + final DiscoveryNode node = lookup.apply(target.getClusterAlias(), target.getNode()); + if (node == null) { + onFreedContext(false); + } else { + try { + Transport.Connection connection = searchTransportService.getConnection(target.getClusterAlias(), node); + searchTransportService.sendFreeContext(connection, target.getScrollId(), + ActionListener.wrap(freed -> onFreedContext(freed.isFreed()), e -> onFailedFreedContext(e, node))); + } catch (Exception e) { + onFailedFreedContext(e, node); + } + } + } + }, listener::onFailure)); + } + + private void onFreedContext(boolean freed) { + if (freed) { + freedSearchContexts.incrementAndGet(); + } + if (expectedOps.countDown()) { + boolean succeeded = hasFailed.get() == false; + listener.onResponse(new ClearScrollResponse(succeeded, freedSearchContexts.get())); + } + } + + private void onFailedFreedContext(Throwable e, DiscoveryNode node) { + logger.warn((Supplier) () -> new ParameterizedMessage("Clear SC failed on node[{}]", node), e); + if (expectedOps.countDown()) { + listener.onResponse(new ClearScrollResponse(false, freedSearchContexts.get())); + } else { + hasFailed.set(true); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/ClearScrollRequest.java b/core/src/main/java/org/elasticsearch/action/search/ClearScrollRequest.java index 17343e8691255..4770818867c84 100644 --- a/core/src/main/java/org/elasticsearch/action/search/ClearScrollRequest.java +++ b/core/src/main/java/org/elasticsearch/action/search/ClearScrollRequest.java @@ -23,6 +23,9 @@ import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ToXContentObject; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import java.io.IOException; import java.util.ArrayList; @@ -31,9 +34,7 @@ import static org.elasticsearch.action.ValidateActions.addValidationError; -/** - */ -public class ClearScrollRequest extends ActionRequest { +public class ClearScrollRequest extends ActionRequest implements ToXContentObject { private List scrollIds; @@ -85,4 +86,47 @@ public void writeTo(StreamOutput out) throws IOException { } } + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.startArray("scroll_id"); + for (String scrollId : scrollIds) { + builder.value(scrollId); + } + builder.endArray(); + builder.endObject(); + return builder; + } + + public void fromXContent(XContentParser parser) throws IOException { + scrollIds = null; + if (parser.nextToken() != XContentParser.Token.START_OBJECT) { + throw new IllegalArgumentException("Malformed content, must start with an object"); + } else { + XContentParser.Token token; + String currentFieldName = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if ("scroll_id".equals(currentFieldName)){ + if (token == XContentParser.Token.START_ARRAY) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + if (token.isValue() == false) { + throw new 
IllegalArgumentException("scroll_id array element should only contain scroll_id"); + } + addScrollId(parser.text()); + } + } else { + if (token.isValue() == false) { + throw new IllegalArgumentException("scroll_id element should only contain scroll_id"); + } + addScrollId(parser.text()); + } + } else { + throw new IllegalArgumentException("Unknown parameter [" + currentFieldName + + "] in request body or parameter is of the wrong type[" + token + "] "); + } + } + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/search/ClearScrollResponse.java b/core/src/main/java/org/elasticsearch/action/search/ClearScrollResponse.java index 0887d2681992d..d5e2c754a2000 100644 --- a/core/src/main/java/org/elasticsearch/action/search/ClearScrollResponse.java +++ b/core/src/main/java/org/elasticsearch/action/search/ClearScrollResponse.java @@ -20,20 +20,33 @@ package org.elasticsearch.action.search; import org.elasticsearch.action.ActionResponse; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.StatusToXContent; +import org.elasticsearch.common.xcontent.ConstructingObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.StatusToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.rest.RestStatus; import java.io.IOException; +import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg; import static org.elasticsearch.rest.RestStatus.NOT_FOUND; import static org.elasticsearch.rest.RestStatus.OK; -/** - */ -public class ClearScrollResponse extends ActionResponse implements StatusToXContent { +public class ClearScrollResponse extends ActionResponse implements StatusToXContentObject { + + private static final ParseField SUCCEEDED = new ParseField("succeeded"); + private static final ParseField NUMFREED = new ParseField("num_freed"); + + private static final ConstructingObjectParser PARSER = new ConstructingObjectParser<>("clear_scroll", + true, a -> new ClearScrollResponse((boolean)a[0], (int)a[1])); + static { + PARSER.declareField(constructorArg(), (parser, context) -> parser.booleanValue(), SUCCEEDED, ObjectParser.ValueType.BOOLEAN); + PARSER.declareField(constructorArg(), (parser, context) -> parser.intValue(), NUMFREED, ObjectParser.ValueType.INT); + } private boolean succeeded; private int numFreed; @@ -68,11 +81,20 @@ public RestStatus status() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.field(Fields.SUCCEEDED, succeeded); - builder.field(Fields.NUMFREED, numFreed); + builder.startObject(); + builder.field(SUCCEEDED.getPreferredName(), succeeded); + builder.field(NUMFREED.getPreferredName(), numFreed); + builder.endObject(); return builder; } + /** + * Parse the clear scroll response body into a new {@link ClearScrollResponse} object + */ + public static ClearScrollResponse fromXContent(XContentParser parser) throws IOException { + return PARSER.apply(parser, null); + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); @@ -86,10 +108,4 @@ public void writeTo(StreamOutput out) throws IOException { out.writeBoolean(succeeded); out.writeVInt(numFreed); } - - static final class Fields { - static final String SUCCEEDED = "succeeded"; - static final 
String NUMFREED = "num_freed"; - } - } diff --git a/core/src/main/java/org/elasticsearch/action/search/CountedCollector.java b/core/src/main/java/org/elasticsearch/action/search/CountedCollector.java new file mode 100644 index 0000000000000..2dd255aa14c69 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/CountedCollector.java @@ -0,0 +1,79 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.util.concurrent.CountDown; +import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.SearchShardTarget; + +import java.util.function.Consumer; + +/** + * This is a simple base class to simplify fan out to shards and collect their results. Each results passed to + * {@link #onResult(SearchPhaseResult)} will be set to the provided result array + * where the given index is used to set the result on the array. + */ +final class CountedCollector { + private final Consumer resultConsumer; + private final CountDown counter; + private final Runnable onFinish; + private final SearchPhaseContext context; + + CountedCollector(Consumer resultConsumer, int expectedOps, Runnable onFinish, SearchPhaseContext context) { + this.resultConsumer = resultConsumer; + this.counter = new CountDown(expectedOps); + this.onFinish = onFinish; + this.context = context; + } + + /** + * Forcefully counts down an operation and executes the provided runnable + * if all expected operations where executed + */ + void countDown() { + assert counter.isCountedDown() == false : "more operations executed than specified"; + if (counter.countDown()) { + onFinish.run(); + } + } + + /** + * Sets the result to the given array index and then runs {@link #countDown()} + */ + void onResult(R result) { + try { + resultConsumer.accept(result); + } finally { + countDown(); + } + } + + /** + * Escalates the failure via {@link SearchPhaseContext#onShardFailure(int, SearchShardTarget, Exception)} + * and then runs {@link #countDown()} + */ + void onFailure(final int shardIndex, @Nullable SearchShardTarget shardTarget, Exception e) { + try { + context.onShardFailure(shardIndex, shardTarget, e); + } finally { + countDown(); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/DfsQueryPhase.java b/core/src/main/java/org/elasticsearch/action/search/DfsQueryPhase.java new file mode 100644 index 0000000000000..db0425db7c320 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/DfsQueryPhase.java @@ -0,0 +1,105 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.logging.log4j.util.Supplier; +import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.SearchShardTarget; +import org.elasticsearch.search.dfs.AggregatedDfs; +import org.elasticsearch.search.dfs.DfsSearchResult; +import org.elasticsearch.search.query.QuerySearchRequest; +import org.elasticsearch.search.query.QuerySearchResult; +import org.elasticsearch.transport.Transport; + +import java.io.IOException; +import java.util.List; +import java.util.function.Function; + +/** + * This search phase fans out to every shards to execute a distributed search with a pre-collected distributed frequencies for all + * search terms used in the actual search query. This phase is very similar to a the default query-then-fetch search phase but it doesn't + * retry on another shard if any of the shards are failing. Failures are treated as shard failures and are counted as a non-successful + * operation. + * @see CountedCollector#onFailure(int, SearchShardTarget, Exception) + */ +final class DfsQueryPhase extends SearchPhase { + private final InitialSearchPhase.ArraySearchPhaseResults queryResult; + private final SearchPhaseController searchPhaseController; + private final AtomicArray dfsSearchResults; + private final Function, SearchPhase> nextPhaseFactory; + private final SearchPhaseContext context; + private final SearchTransportService searchTransportService; + + DfsQueryPhase(AtomicArray dfsSearchResults, + SearchPhaseController searchPhaseController, + Function, SearchPhase> nextPhaseFactory, + SearchPhaseContext context) { + super("dfs_query"); + this.queryResult = searchPhaseController.newSearchPhaseResults(context.getRequest(), context.getNumShards()); + this.searchPhaseController = searchPhaseController; + this.dfsSearchResults = dfsSearchResults; + this.nextPhaseFactory = nextPhaseFactory; + this.context = context; + this.searchTransportService = context.getSearchTransport(); + } + + @Override + public void run() throws IOException { + // TODO we can potentially also consume the actual per shard results from the initial phase here in the aggregateDfs + // to free up memory early + final List resultList = dfsSearchResults.asList(); + final AggregatedDfs dfs = searchPhaseController.aggregateDfs(resultList); + final CountedCollector counter = new CountedCollector<>(queryResult::consumeResult, + resultList.size(), + () -> context.executeNextPhase(this, nextPhaseFactory.apply(queryResult)), context); + for (final DfsSearchResult dfsResult : resultList) { + final SearchShardTarget searchShardTarget = dfsResult.getSearchShardTarget(); + Transport.Connection connection = context.getConnection(searchShardTarget.getClusterAlias(), searchShardTarget.getNodeId()); + QuerySearchRequest querySearchRequest = new 
QuerySearchRequest(searchShardTarget.getOriginalIndices(), + dfsResult.getRequestId(), dfs); + final int shardIndex = dfsResult.getShardIndex(); + searchTransportService.sendExecuteQuery(connection, querySearchRequest, context.getTask(), + new SearchActionListener(searchShardTarget, shardIndex) { + + @Override + protected void innerOnResponse(QuerySearchResult response) { + counter.onResult(response); + } + + @Override + public void onFailure(Exception exception) { + try { + if (context.getLogger().isDebugEnabled()) { + context.getLogger().debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute query phase", + querySearchRequest.id()), exception); + } + counter.onFailure(shardIndex, searchShardTarget, exception); + } finally { + // the query might not have been executed at all (for example because thread pool rejected + // execution) and the search context that was created in dfs phase might not be released. + // release it again to be in the safe side + context.sendReleaseSearchContext(querySearchRequest.id(), connection, searchShardTarget.getOriginalIndices()); + } + } + }); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/ExpandSearchPhase.java b/core/src/main/java/org/elasticsearch/action/search/ExpandSearchPhase.java new file mode 100644 index 0000000000000..bc673644a0683 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/ExpandSearchPhase.java @@ -0,0 +1,156 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.search; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.index.query.BoolQueryBuilder; +import org.elasticsearch.index.query.InnerHitBuilder; +import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.index.query.QueryBuilders; +import org.elasticsearch.search.SearchHit; +import org.elasticsearch.search.SearchHits; +import org.elasticsearch.search.builder.SearchSourceBuilder; +import org.elasticsearch.search.collapse.CollapseBuilder; +import org.elasticsearch.search.internal.InternalSearchResponse; + +import java.io.IOException; +import java.util.HashMap; +import java.util.Iterator; +import java.util.List; +import java.util.function.Function; + +/** + * This search phase is an optional phase that will be executed once all hits are fetched from the shards that executes + * field-collapsing on the inner hits. This phase only executes if field collapsing is requested in the search request and otherwise + * forwards to the next phase immediately. 
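 + * <p>For illustration only (request syntax from the public search API; the field and inner-hit names are invented): a search body such as
 + * <pre>
 + * { "collapse": { "field": "user_id", "inner_hits": { "name": "recent", "size": 3 } } }
 + * </pre>
 + * makes this phase build one follow-up search per returned hit and inner-hit definition, send them together as a
 + * single multi-search request, and attach each group's hits to the matching top-level hit as inner hits.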
+ */ +final class ExpandSearchPhase extends SearchPhase { + private final SearchPhaseContext context; + private final InternalSearchResponse searchResponse; + private final Function nextPhaseFactory; + + ExpandSearchPhase(SearchPhaseContext context, InternalSearchResponse searchResponse, + Function nextPhaseFactory) { + super("expand"); + this.context = context; + this.searchResponse = searchResponse; + this.nextPhaseFactory = nextPhaseFactory; + } + + /** + * Returns true iff the search request has inner hits and needs field collapsing + */ + private boolean isCollapseRequest() { + final SearchRequest searchRequest = context.getRequest(); + return searchRequest.source() != null && + searchRequest.source().collapse() != null && + searchRequest.source().collapse().getInnerHits().isEmpty() == false; + } + + @Override + public void run() throws IOException { + if (isCollapseRequest() && searchResponse.hits().getHits().length > 0) { + SearchRequest searchRequest = context.getRequest(); + CollapseBuilder collapseBuilder = searchRequest.source().collapse(); + final List innerHitBuilders = collapseBuilder.getInnerHits(); + MultiSearchRequest multiRequest = new MultiSearchRequest(); + if (collapseBuilder.getMaxConcurrentGroupRequests() > 0) { + multiRequest.maxConcurrentSearchRequests(collapseBuilder.getMaxConcurrentGroupRequests()); + } + for (SearchHit hit : searchResponse.hits().getHits()) { + BoolQueryBuilder groupQuery = new BoolQueryBuilder(); + Object collapseValue = hit.field(collapseBuilder.getField()).getValue(); + if (collapseValue != null) { + groupQuery.filter(QueryBuilders.matchQuery(collapseBuilder.getField(), collapseValue)); + } else { + groupQuery.mustNot(QueryBuilders.existsQuery(collapseBuilder.getField())); + } + QueryBuilder origQuery = searchRequest.source().query(); + if (origQuery != null) { + groupQuery.must(origQuery); + } + for (InnerHitBuilder innerHitBuilder : innerHitBuilders) { + SearchSourceBuilder sourceBuilder = buildExpandSearchSourceBuilder(innerHitBuilder) + .query(groupQuery); + SearchRequest groupRequest = new SearchRequest(searchRequest.indices()) + .types(searchRequest.types()) + .source(sourceBuilder); + multiRequest.add(groupRequest); + } + } + context.getSearchTransport().sendExecuteMultiSearch(multiRequest, context.getTask(), + ActionListener.wrap(response -> { + Iterator it = response.iterator(); + for (SearchHit hit : searchResponse.hits.getHits()) { + for (InnerHitBuilder innerHitBuilder : innerHitBuilders) { + MultiSearchResponse.Item item = it.next(); + if (item.isFailure()) { + context.onPhaseFailure(this, "failed to expand hits", item.getFailure()); + return; + } + SearchHits innerHits = item.getResponse().getHits(); + if (hit.getInnerHits() == null) { + hit.setInnerHits(new HashMap<>(innerHitBuilders.size())); + } + hit.getInnerHits().put(innerHitBuilder.getName(), innerHits); + } + } + context.executeNextPhase(this, nextPhaseFactory.apply(searchResponse)); + }, context::onFailure) + ); + } else { + context.executeNextPhase(this, nextPhaseFactory.apply(searchResponse)); + } + } + + private SearchSourceBuilder buildExpandSearchSourceBuilder(InnerHitBuilder options) { + SearchSourceBuilder groupSource = new SearchSourceBuilder(); + groupSource.from(options.getFrom()); + groupSource.size(options.getSize()); + if (options.getSorts() != null) { + options.getSorts().forEach(groupSource::sort); + } + if (options.getFetchSourceContext() != null) { + if (options.getFetchSourceContext().includes() == null && options.getFetchSourceContext().excludes() == 
null) { + groupSource.fetchSource(options.getFetchSourceContext().fetchSource()); + } else { + groupSource.fetchSource(options.getFetchSourceContext().includes(), + options.getFetchSourceContext().excludes()); + } + } + if (options.getDocValueFields() != null) { + options.getDocValueFields().forEach(groupSource::docValueField); + } + if (options.getStoredFieldsContext() != null && options.getStoredFieldsContext().fieldNames() != null) { + options.getStoredFieldsContext().fieldNames().forEach(groupSource::storedField); + } + if (options.getScriptFields() != null) { + for (SearchSourceBuilder.ScriptField field : options.getScriptFields()) { + groupSource.scriptField(field.fieldName(), field.script()); + } + } + if (options.getHighlightBuilder() != null) { + groupSource.highlighter(options.getHighlightBuilder()); + } + groupSource.explain(options.isExplain()); + groupSource.trackScores(options.isTrackScores()); + return groupSource; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/FetchSearchPhase.java b/core/src/main/java/org/elasticsearch/action/search/FetchSearchPhase.java new file mode 100644 index 0000000000000..4712496bc37ec --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/FetchSearchPhase.java @@ -0,0 +1,220 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import com.carrotsearch.hppc.IntArrayList; +import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.logging.log4j.util.Supplier; +import org.apache.lucene.search.ScoreDoc; +import org.elasticsearch.action.ActionRunnable; +import org.elasticsearch.action.OriginalIndices; +import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.SearchShardTarget; +import org.elasticsearch.search.fetch.FetchSearchResult; +import org.elasticsearch.search.fetch.ShardFetchSearchRequest; +import org.elasticsearch.search.internal.InternalSearchResponse; +import org.elasticsearch.search.query.QuerySearchResult; +import org.elasticsearch.transport.Transport; + +import java.io.IOException; +import java.util.List; +import java.util.function.BiFunction; + +/** + * This search phase merges the query results from the previous phase together and calculates the topN hits for this search. + * Then it reaches out to all relevant shards to fetch the topN hits. 
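 + * <p>A worked example (illustrative numbers, not taken from this change): with 5 shards and a requested size of 10,
 + * the reduce step picks the global top 10 documents out of up to 50 per-shard candidates; fetch requests are then sent
 + * only to the shards that own at least one of those 10 documents, while search contexts on shards whose hits did not
 + * make the global top N are released instead of fetched.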
+ */ +final class FetchSearchPhase extends SearchPhase { + private final AtomicArray fetchResults; + private final SearchPhaseController searchPhaseController; + private final AtomicArray queryResults; + private final BiFunction nextPhaseFactory; + private final SearchPhaseContext context; + private final Logger logger; + private final InitialSearchPhase.SearchPhaseResults resultConsumer; + + FetchSearchPhase(InitialSearchPhase.SearchPhaseResults resultConsumer, + SearchPhaseController searchPhaseController, + SearchPhaseContext context) { + this(resultConsumer, searchPhaseController, context, + (response, scrollId) -> new ExpandSearchPhase(context, response, // collapse only happens if the request has inner hits + (finalResponse) -> sendResponsePhase(finalResponse, scrollId, context))); + } + + FetchSearchPhase(InitialSearchPhase.SearchPhaseResults resultConsumer, + SearchPhaseController searchPhaseController, + SearchPhaseContext context, BiFunction nextPhaseFactory) { + super("fetch"); + if (context.getNumShards() != resultConsumer.getNumShards()) { + throw new IllegalStateException("number of shards must match the length of the query results but doesn't:" + + context.getNumShards() + "!=" + resultConsumer.getNumShards()); + } + this.fetchResults = new AtomicArray<>(resultConsumer.getNumShards()); + this.searchPhaseController = searchPhaseController; + this.queryResults = resultConsumer.getAtomicArray(); + this.nextPhaseFactory = nextPhaseFactory; + this.context = context; + this.logger = context.getLogger(); + this.resultConsumer = resultConsumer; + } + + @Override + public void run() throws IOException { + context.execute(new ActionRunnable(context) { + @Override + public void doRun() throws IOException { + // we do the heavy lifting in this inner run method where we reduce aggs etc. that's why we fork this phase + // off immediately instead of forking when we send back the response to the user since there we only need + // to merge together the fetched results which is a linear operation. + innerRun(); + } + + @Override + public void onFailure(Exception e) { + context.onPhaseFailure(FetchSearchPhase.this, "", e); + } + }); + } + + private void innerRun() throws IOException { + final int numShards = context.getNumShards(); + final boolean isScrollSearch = context.getRequest().scroll() != null; + List phaseResults = queryResults.asList(); + String scrollId = isScrollSearch ? TransportSearchHelper.buildScrollId(queryResults) : null; + final SearchPhaseController.ReducedQueryPhase reducedQueryPhase = resultConsumer.reduce(); + final boolean queryAndFetchOptimization = queryResults.length() == 1; + final Runnable finishPhase = () + -> moveToNextPhase(searchPhaseController, scrollId, reducedQueryPhase, queryAndFetchOptimization ? 
+ queryResults : fetchResults); + if (queryAndFetchOptimization) { + assert phaseResults.isEmpty() || phaseResults.get(0).fetchResult() != null : "phaseResults empty [" + phaseResults.isEmpty() + + "], single result: " + phaseResults.get(0).fetchResult(); + // query AND fetch optimization + finishPhase.run(); + } else { + final IntArrayList[] docIdsToLoad = searchPhaseController.fillDocIdsToLoad(numShards, reducedQueryPhase.scoreDocs); + if (reducedQueryPhase.scoreDocs.length == 0) { // no docs to fetch -- sidestep everything and return + phaseResults.stream() + .map(SearchPhaseResult::queryResult) + .forEach(this::releaseIrrelevantSearchContext); // we have to release contexts here to free up resources + finishPhase.run(); + } else { + final ScoreDoc[] lastEmittedDocPerShard = isScrollSearch ? + searchPhaseController.getLastEmittedDocPerShard(reducedQueryPhase, numShards) + : null; + final CountedCollector counter = new CountedCollector<>(r -> fetchResults.set(r.getShardIndex(), r), + docIdsToLoad.length, // we count down every shard in the result no matter if we got any results or not + finishPhase, context); + for (int i = 0; i < docIdsToLoad.length; i++) { + IntArrayList entry = docIdsToLoad[i]; + SearchPhaseResult queryResult = queryResults.get(i); + if (entry == null) { // no results for this shard ID + if (queryResult != null) { + // if we got some hits from this shard we have to release the context there + // we do this as we go since it will free up resources and passing on the request on the + // transport layer is cheap. + releaseIrrelevantSearchContext(queryResult.queryResult()); + } + // in any case we count down this result since we don't talk to this shard anymore + counter.countDown(); + } else { + SearchShardTarget searchShardTarget = queryResult.getSearchShardTarget(); + Transport.Connection connection = context.getConnection(searchShardTarget.getClusterAlias(), + searchShardTarget.getNodeId()); + ShardFetchSearchRequest fetchSearchRequest = createFetchRequest(queryResult.queryResult().getRequestId(), i, entry, + lastEmittedDocPerShard, searchShardTarget.getOriginalIndices()); + executeFetch(i, searchShardTarget, counter, fetchSearchRequest, queryResult.queryResult(), + connection); + } + } + } + } + } + + protected ShardFetchSearchRequest createFetchRequest(long queryId, int index, IntArrayList entry, + ScoreDoc[] lastEmittedDocPerShard, OriginalIndices originalIndices) { + final ScoreDoc lastEmittedDoc = (lastEmittedDocPerShard != null) ? 
lastEmittedDocPerShard[index] : null; + return new ShardFetchSearchRequest(originalIndices, queryId, entry, lastEmittedDoc); + } + + private void executeFetch(final int shardIndex, final SearchShardTarget shardTarget, + final CountedCollector counter, + final ShardFetchSearchRequest fetchSearchRequest, final QuerySearchResult querySearchResult, + final Transport.Connection connection) { + context.getSearchTransport().sendExecuteFetch(connection, fetchSearchRequest, context.getTask(), + new SearchActionListener(shardTarget, shardIndex) { + @Override + public void innerOnResponse(FetchSearchResult result) { + counter.onResult(result); + } + + @Override + public void onFailure(Exception e) { + try { + if (logger.isDebugEnabled()) { + logger.debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute fetch phase", + fetchSearchRequest.id()), e); + } + counter.onFailure(shardIndex, shardTarget, e); + } finally { + // the search context might not be cleared on the node where the fetch was executed for example + // because the action was rejected by the thread pool. in this case we need to send a dedicated + // request to clear the search context. + releaseIrrelevantSearchContext(querySearchResult); + } + } + }); + } + + /** + * Releases shard targets that are not used in the docsIdsToLoad. + */ + private void releaseIrrelevantSearchContext(QuerySearchResult queryResult) { + // we only release search context that we did not fetch from if we are not scrolling + // and if it has at lease one hit that didn't make it to the global topDocs + if (context.getRequest().scroll() == null && queryResult.hasSearchContext()) { + try { + SearchShardTarget searchShardTarget = queryResult.getSearchShardTarget(); + Transport.Connection connection = context.getConnection(searchShardTarget.getClusterAlias(), searchShardTarget.getNodeId()); + context.sendReleaseSearchContext(queryResult.getRequestId(), connection, searchShardTarget.getOriginalIndices()); + } catch (Exception e) { + context.getLogger().trace("failed to release context", e); + } + } + } + + private void moveToNextPhase(SearchPhaseController searchPhaseController, + String scrollId, SearchPhaseController.ReducedQueryPhase reducedQueryPhase, + AtomicArray fetchResultsArr) { + final InternalSearchResponse internalResponse = searchPhaseController.merge(context.getRequest().scroll() != null, + reducedQueryPhase, fetchResultsArr.asList(), fetchResultsArr::get); + context.executeNextPhase(this, nextPhaseFactory.apply(internalResponse, scrollId)); + } + + private static SearchPhase sendResponsePhase(InternalSearchResponse response, String scrollId, SearchPhaseContext context) { + return new SearchPhase("response") { + @Override + public void run() throws IOException { + context.onResponse(context.buildSearchResponse(response, scrollId)); + } + }; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java b/core/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java new file mode 100644 index 0000000000000..fcee980379bf1 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java @@ -0,0 +1,326 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.logging.log4j.util.Supplier; +import org.elasticsearch.action.NoShardAvailableActionException; +import org.elasticsearch.action.support.TransportActions; +import org.elasticsearch.cluster.routing.GroupShardsIterator; +import org.elasticsearch.cluster.routing.ShardRouting; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.SearchShardTarget; +import org.elasticsearch.transport.ConnectTransportException; + +import java.io.IOException; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.stream.Stream; + +/** + * This is an abstract base class that encapsulates the logic to fan out to all shards in provided {@link GroupShardsIterator} + * and collect the results. If a shard request returns a failure this class handles the advance to the next replica of the shard until + * the shards replica iterator is exhausted. Each shard is referenced by position in the {@link GroupShardsIterator} which is later + * referred to as the shardIndex. + * The fan out and collect algorithm is traditionally used as the initial phase which can either be a query execution or collection + * distributed frequencies + */ +abstract class InitialSearchPhase extends SearchPhase { + private final SearchRequest request; + private final GroupShardsIterator shardsIts; + private final Logger logger; + private final int expectedTotalOps; + private final AtomicInteger totalOps = new AtomicInteger(); + private final AtomicInteger shardExecutionIndex = new AtomicInteger(0); + private final int maxConcurrentShardRequests; + + InitialSearchPhase(String name, SearchRequest request, GroupShardsIterator shardsIts, Logger logger) { + super(name); + this.request = request; + this.shardsIts = shardsIts; + this.logger = logger; + // we need to add 1 for non active partition, since we count it in the total. This means for each shard in the iterator we sum up + // it's number of active shards but use 1 as the default if no replica of a shard is active at this point. + // on a per shards level we use shardIt.remaining() to increment the totalOps pointer but add 1 for the current shard result + // we process hence we add one for the non active partition here. 
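 + // For example (illustrative numbers): with three shard groups holding 2, 1 and 0 active copies respectively,
 + // expectedTotalOps = 2 + 1 + 1 = 4, because the group without any active copy still contributes one expected
 + // operation, namely the shard failure that will be recorded for it.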
+ this.expectedTotalOps = shardsIts.totalSizeWith1ForEmpty(); + maxConcurrentShardRequests = Math.min(request.getMaxConcurrentShardRequests(), shardsIts.size()); + } + + private void onShardFailure(final int shardIndex, @Nullable ShardRouting shard, @Nullable String nodeId, + final SearchShardIterator shardIt, Exception e) { + // we always add the shard failure for a specific shard instance + // we do make sure to clean it on a successful response from a shard + SearchShardTarget shardTarget = new SearchShardTarget(nodeId, shardIt.shardId(), shardIt.getClusterAlias(), + shardIt.getOriginalIndices()); + onShardFailure(shardIndex, shardTarget, e); + + if (totalOps.incrementAndGet() == expectedTotalOps) { + if (logger.isDebugEnabled()) { + if (e != null && !TransportActions.isShardNotAvailableException(e)) { + logger.debug( + (Supplier) () -> new ParameterizedMessage( + "{}: Failed to execute [{}]", + shard != null ? shard.shortSummary() : + shardIt.shardId(), + request), + e); + } else if (logger.isTraceEnabled()) { + logger.trace((Supplier) () -> new ParameterizedMessage("{}: Failed to execute [{}]", shard, request), e); + } + } + onPhaseDone(); + } else { + final ShardRouting nextShard = shardIt.nextOrNull(); + final boolean lastShard = nextShard == null; + // trace log this exception + logger.trace( + (Supplier) () -> new ParameterizedMessage( + "{}: Failed to execute [{}] lastShard [{}]", + shard != null ? shard.shortSummary() : shardIt.shardId(), + request, + lastShard), + e); + if (!lastShard) { + try { + performPhaseOnShard(shardIndex, shardIt, nextShard); + } catch (Exception inner) { + inner.addSuppressed(e); + onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, inner); + } + } else { + maybeExecuteNext(); // move to the next execution if needed + // no more shards active, add a failure + if (logger.isDebugEnabled() && !logger.isTraceEnabled()) { // do not double log this exception + if (e != null && !TransportActions.isShardNotAvailableException(e)) { + logger.debug( + (Supplier) () -> new ParameterizedMessage( + "{}: Failed to execute [{}] lastShard [{}]", + shard != null ? 
shard.shortSummary() : + shardIt.shardId(), + request, + lastShard), + e); + } + } + } + } + } + + @Override + public final void run() throws IOException { + boolean success = shardExecutionIndex.compareAndSet(0, maxConcurrentShardRequests); + assert success; + for (int i = 0; i < maxConcurrentShardRequests; i++) { + SearchShardIterator shardRoutings = shardsIts.get(i); + if (shardRoutings.skip()) { + skipShard(shardRoutings); + } else { + performPhaseOnShard(i, shardRoutings, shardRoutings.nextOrNull()); + } + } + } + + private void maybeExecuteNext() { + final int index = shardExecutionIndex.getAndIncrement(); + if (index < shardsIts.size()) { + SearchShardIterator shardRoutings = shardsIts.get(index); + if (shardRoutings.skip()) { + skipShard(shardRoutings); + } else { + performPhaseOnShard(index, shardRoutings, shardRoutings.nextOrNull()); + } + } + } + + + private void performPhaseOnShard(final int shardIndex, final SearchShardIterator shardIt, final ShardRouting shard) { + if (shard == null) { + onShardFailure(shardIndex, null, null, shardIt, new NoShardAvailableActionException(shardIt.shardId())); + } else { + try { + executePhaseOnShard(shardIt, shard, new SearchActionListener(new SearchShardTarget(shard.currentNodeId(), + shardIt.shardId(), shardIt.getClusterAlias(), shardIt.getOriginalIndices()), shardIndex) { + @Override + public void innerOnResponse(FirstResult result) { + onShardResult(result, shardIt); + } + + @Override + public void onFailure(Exception t) { + onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, t); + } + }); + } catch (ConnectTransportException | IllegalArgumentException ex) { + // we are getting the connection early here so we might run into nodes that are not connected. in that case we move on to + // the next shard. previously when using discovery nodes here we had a special case for null when a node was not connected + // at all which is not not needed anymore. + onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, ex); + } + } + } + + private void onShardResult(FirstResult result, SearchShardIterator shardIt) { + assert result.getShardIndex() != -1 : "shard index is not set"; + assert result.getSearchShardTarget() != null : "search shard target must not be null"; + onShardSuccess(result); + // we need to increment successful ops first before we compare the exit condition otherwise if we + // are fast we could concurrently update totalOps but then preempt one of the threads which can + // cause the successor to read a wrong value from successfulOps if second phase is very fast ie. count etc. + // increment all the "future" shards to update the total ops since we some may work and some may not... 
+ // and when that happens, we break on total ops, so we must maintain them + successfulShardExecution(shardIt); + } + + private void successfulShardExecution(SearchShardIterator shardsIt) { + final int remainingOpsOnIterator; + if (shardsIt.skip()) { + remainingOpsOnIterator = shardsIt.remaining(); + } else { + remainingOpsOnIterator = shardsIt.remaining() + 1; + } + final int xTotalOps = totalOps.addAndGet(remainingOpsOnIterator); + if (xTotalOps == expectedTotalOps) { + onPhaseDone(); + } else if (xTotalOps > expectedTotalOps) { + throw new AssertionError("unexpected higher total ops [" + xTotalOps + "] compared to expected [" + + expectedTotalOps + "]"); + } else { + maybeExecuteNext(); + } + } + + + /** + * Executed once all shard results have been received and processed + * @see #onShardFailure(int, SearchShardTarget, Exception) + * @see #onShardSuccess(SearchPhaseResult) + */ + abstract void onPhaseDone(); // as a tribute to @kimchy aka. finishHim() + + /** + * Executed once for every failed shard level request. This method is invoked before the next replica is tried for the given + * shard target. + * @param shardIndex the internal index for this shard. Each shard has an index / ordinal assigned that is used to reference + * it's results + * @param shardTarget the shard target for this failure + * @param ex the failure reason + */ + abstract void onShardFailure(int shardIndex, SearchShardTarget shardTarget, Exception ex); + + /** + * Executed once for every successful shard level request. + * @param result the result returned form the shard + * + */ + abstract void onShardSuccess(FirstResult result); + + /** + * Sends the request to the actual shard. + * @param shardIt the shards iterator + * @param shard the shard routing to send the request for + * @param listener the listener to notify on response + */ + protected abstract void executePhaseOnShard(SearchShardIterator shardIt, ShardRouting shard, + SearchActionListener listener); + + /** + * This class acts as a basic result collection that can be extended to do on-the-fly reduction or result processing + */ + abstract static class SearchPhaseResults { + private final int numShards; + + protected SearchPhaseResults(int numShards) { + this.numShards = numShards; + } + /** + * Returns the number of expected results this class should collect + */ + final int getNumShards() { + return numShards; + } + + /** + * A stream of all non-null (successful) shard results + */ + abstract Stream getSuccessfulResults(); + + /** + * Consumes a single shard result + * @param result the shards result + */ + abstract void consumeResult(Result result); + + /** + * Returns true iff a result if present for the given shard ID. 
+ */ + abstract boolean hasResult(int shardIndex); + + void consumeShardFailure(int shardIndex) {} + + AtomicArray getAtomicArray() { + throw new UnsupportedOperationException(); + } + + /** + * Reduces the collected results + */ + SearchPhaseController.ReducedQueryPhase reduce() { + throw new UnsupportedOperationException("reduce is not supported"); + } + } + + /** + * This class acts as a basic result collection that can be extended to do on-the-fly reduction or result processing + */ + static class ArraySearchPhaseResults extends SearchPhaseResults { + final AtomicArray results; + + ArraySearchPhaseResults(int size) { + super(size); + this.results = new AtomicArray<>(size); + } + + Stream getSuccessfulResults() { + return results.asList().stream(); + } + + void consumeResult(Result result) { + assert results.get(result.getShardIndex()) == null : "shardIndex: " + result.getShardIndex() + " is already set"; + results.set(result.getShardIndex(), result); + } + + boolean hasResult(int shardIndex) { + return results.get(shardIndex) != null; + } + + @Override + AtomicArray getAtomicArray() { + return results; + } + } + + protected void skipShard(SearchShardIterator iterator) { + assert iterator.skip(); + successfulShardExecution(iterator); + } + +} diff --git a/core/src/main/java/org/elasticsearch/action/search/MultiSearchRequest.java b/core/src/main/java/org/elasticsearch/action/search/MultiSearchRequest.java index 08a1ec5b3de21..7ab97f9bc570e 100644 --- a/core/src/main/java/org/elasticsearch/action/search/MultiSearchRequest.java +++ b/core/src/main/java/org/elasticsearch/action/search/MultiSearchRequest.java @@ -22,7 +22,6 @@ import org.elasticsearch.action.ActionRequest; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.CompositeIndicesRequest; -import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -36,7 +35,7 @@ /** * A multi search API request. 
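 * <p>A minimal usage sketch (illustrative only; assumes the usual request builders from this package):
 * <pre>
 * MultiSearchRequest msearch = new MultiSearchRequest();
 * msearch.add(new SearchRequest("index-a"));
 * msearch.add(new SearchRequest("index-b").source(new SearchSourceBuilder().size(0)));
 * </pre>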
*/ -public class MultiSearchRequest extends ActionRequest implements CompositeIndicesRequest { +public class MultiSearchRequest extends ActionRequest implements CompositeIndicesRequest { private int maxConcurrentSearchRequests = 0; private List requests = new ArrayList<>(); @@ -84,11 +83,6 @@ public List requests() { return this.requests; } - @Override - public List subRequests() { - return this.requests; - } - @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; diff --git a/core/src/main/java/org/elasticsearch/action/search/MultiSearchResponse.java b/core/src/main/java/org/elasticsearch/action/search/MultiSearchResponse.java index 317b775a40369..4d42ad334a9f0 100644 --- a/core/src/main/java/org/elasticsearch/action/search/MultiSearchResponse.java +++ b/core/src/main/java/org/elasticsearch/action/search/MultiSearchResponse.java @@ -23,12 +23,12 @@ import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.action.ActionResponse; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentFactory; import java.io.IOException; import java.util.Arrays; @@ -37,7 +37,7 @@ /** * A multi search response. */ -public class MultiSearchResponse extends ActionResponse implements Iterable, ToXContent { +public class MultiSearchResponse extends ActionResponse implements Iterable, ToXContentObject { /** * A search response item, holding the actual search response, or an error message if it failed. 
@@ -151,39 +151,31 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startArray(Fields.RESPONSES); for (Item item : items) { builder.startObject(); if (item.isFailure()) { - ElasticsearchException.renderException(builder, params, item.getFailure()); + ElasticsearchException.generateFailureXContent(builder, params, item.getFailure(), true); builder.field(Fields.STATUS, ExceptionsHelper.status(item.getFailure()).getStatus()); } else { - item.getResponse().toXContent(builder, params); + item.getResponse().innerToXContent(builder, params); builder.field(Fields.STATUS, item.getResponse().status().getStatus()); } builder.endObject(); } builder.endArray(); + builder.endObject(); return builder; } static final class Fields { static final String RESPONSES = "responses"; static final String STATUS = "status"; - static final String ERROR = "error"; - static final String ROOT_CAUSE = "root_cause"; } @Override public String toString() { - try { - XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint(); - builder.startObject(); - toXContent(builder, EMPTY_PARAMS); - builder.endObject(); - return builder.string(); - } catch (IOException e) { - return "{ \"error\" : \"" + e.getMessage() + "\"}"; - } + return Strings.toString(this); } } diff --git a/core/src/main/java/org/elasticsearch/action/search/ParsedScrollId.java b/core/src/main/java/org/elasticsearch/action/search/ParsedScrollId.java index 2ddb35e135736..1f34cca8e461c 100644 --- a/core/src/main/java/org/elasticsearch/action/search/ParsedScrollId.java +++ b/core/src/main/java/org/elasticsearch/action/search/ParsedScrollId.java @@ -34,7 +34,7 @@ class ParsedScrollId { private final ScrollIdForNode[] context; - public ParsedScrollId(String source, String type, ScrollIdForNode[] context) { + ParsedScrollId(String source, String type, ScrollIdForNode[] context) { this.source = source; this.type = type; this.context = context; diff --git a/core/src/main/java/org/elasticsearch/action/search/ScrollIdForNode.java b/core/src/main/java/org/elasticsearch/action/search/ScrollIdForNode.java index 488132fdda23d..59e1a3310672b 100644 --- a/core/src/main/java/org/elasticsearch/action/search/ScrollIdForNode.java +++ b/core/src/main/java/org/elasticsearch/action/search/ScrollIdForNode.java @@ -19,12 +19,16 @@ package org.elasticsearch.action.search; +import org.elasticsearch.common.inject.internal.Nullable; + class ScrollIdForNode { private final String node; private final long scrollId; + private final String clusterAlias; - public ScrollIdForNode(String node, long scrollId) { + ScrollIdForNode(@Nullable String clusterAlias, String node, long scrollId) { this.node = node; + this.clusterAlias = clusterAlias; this.scrollId = scrollId; } @@ -32,7 +36,20 @@ public String getNode() { return node; } + public String getClusterAlias() { + return clusterAlias; + } + public long getScrollId() { return scrollId; } + + @Override + public String toString() { + return "ScrollIdForNode{" + + "node='" + node + '\'' + + ", scrollId=" + scrollId + + ", clusterAlias='" + clusterAlias + '\'' + + '}'; + } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchActionListener.java b/core/src/main/java/org/elasticsearch/action/search/SearchActionListener.java new file mode 100644 index 0000000000000..67de87b1bb173 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/SearchActionListener.java @@ 
-0,0 +1,52 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.SearchShardTarget; + +/** + * An base action listener that ensures shard target and shard index is set on all responses + * received by this listener. + */ +abstract class SearchActionListener implements ActionListener { + private final int requestIndex; + private final SearchShardTarget searchShardTarget; + + protected SearchActionListener(SearchShardTarget searchShardTarget, + int shardIndex) { + assert shardIndex >= 0 : "shard index must be positive"; + this.searchShardTarget = searchShardTarget; + this.requestIndex = shardIndex; + } + + @Override + public final void onResponse(T response) { + response.setShardIndex(requestIndex); + setSearchShardTarget(response); + innerOnResponse(response); + } + + protected void setSearchShardTarget(T response) { // some impls need to override this + response.setSearchShardTarget(searchShardTarget); + } + + protected abstract void innerOnResponse(T response); +} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryAndFetchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryAndFetchAsyncAction.java deleted file mode 100644 index 367832afab340..0000000000000 --- a/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryAndFetchAsyncAction.java +++ /dev/null @@ -1,145 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.action.search; - -import org.apache.logging.log4j.Logger; -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; -import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.ActionRunnable; -import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.util.concurrent.AtomicArray; -import org.elasticsearch.search.action.SearchTransportService; -import org.elasticsearch.search.controller.SearchPhaseController; -import org.elasticsearch.search.dfs.AggregatedDfs; -import org.elasticsearch.search.dfs.DfsSearchResult; -import org.elasticsearch.search.fetch.QueryFetchSearchResult; -import org.elasticsearch.search.internal.InternalSearchResponse; -import org.elasticsearch.search.internal.ShardSearchTransportRequest; -import org.elasticsearch.search.query.QuerySearchRequest; -import org.elasticsearch.threadpool.ThreadPool; - -import java.io.IOException; -import java.util.concurrent.atomic.AtomicInteger; - -class SearchDfsQueryAndFetchAsyncAction extends AbstractSearchAsyncAction { - - private final AtomicArray queryFetchResults; - - SearchDfsQueryAndFetchAsyncAction(Logger logger, SearchTransportService searchTransportService, - ClusterService clusterService, IndexNameExpressionResolver indexNameExpressionResolver, - SearchPhaseController searchPhaseController, ThreadPool threadPool, - SearchRequest request, ActionListener listener) { - super(logger, searchTransportService, clusterService, indexNameExpressionResolver, searchPhaseController, threadPool, - request, listener); - queryFetchResults = new AtomicArray<>(firstResults.length()); - } - - @Override - protected String firstPhaseName() { - return "dfs"; - } - - @Override - protected void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportRequest request, - ActionListener listener) { - searchTransportService.sendExecuteDfs(node, request, listener); - } - - @Override - protected void moveToSecondPhase() { - final AggregatedDfs dfs = searchPhaseController.aggregateDfs(firstResults); - final AtomicInteger counter = new AtomicInteger(firstResults.asList().size()); - - for (final AtomicArray.Entry entry : firstResults.asList()) { - DfsSearchResult dfsResult = entry.value; - DiscoveryNode node = nodes.get(dfsResult.shardTarget().nodeId()); - QuerySearchRequest querySearchRequest = new QuerySearchRequest(request, dfsResult.id(), dfs); - executeSecondPhase(entry.index, dfsResult, counter, node, querySearchRequest); - } - } - - void executeSecondPhase(final int shardIndex, final DfsSearchResult dfsResult, final AtomicInteger counter, - final DiscoveryNode node, final QuerySearchRequest querySearchRequest) { - searchTransportService.sendExecuteFetch(node, querySearchRequest, new ActionListener() { - @Override - public void onResponse(QueryFetchSearchResult result) { - result.shardTarget(dfsResult.shardTarget()); - queryFetchResults.set(shardIndex, result); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - - @Override - public void onFailure(Exception t) { - try { - onSecondPhaseFailure(t, querySearchRequest, shardIndex, dfsResult, counter); - } finally { - // the query might not have been executed at all (for example because thread pool rejected execution) - // and the search context that was created in dfs phase might not be released. 
- // release it again to be in the safe side - sendReleaseSearchContext(querySearchRequest.id(), node); - } - } - }); - } - - void onSecondPhaseFailure(Exception e, QuerySearchRequest querySearchRequest, int shardIndex, DfsSearchResult dfsResult, - AtomicInteger counter) { - if (logger.isDebugEnabled()) { - logger.debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute query phase", querySearchRequest.id()), e); - } - this.addShardFailure(shardIndex, dfsResult.shardTarget(), e); - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - - private void finishHim() { - threadPool.executor(ThreadPool.Names.SEARCH).execute(new ActionRunnable(listener) { - @Override - public void doRun() throws IOException { - sortedShardDocs = searchPhaseController.sortDocs(true, queryFetchResults); - final InternalSearchResponse internalResponse = searchPhaseController.merge(true, sortedShardDocs, queryFetchResults, - queryFetchResults); - String scrollId = null; - if (request.scroll() != null) { - scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults); - } - listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(), - buildTookInMillis(), buildShardFailures())); - } - - @Override - public void onFailure(Exception e) { - ReduceSearchPhaseException failure = new ReduceSearchPhaseException("query_fetch", "", e, buildShardFailures()); - if (logger.isDebugEnabled()) { - logger.debug("failed to reduce search", failure); - } - super.onFailure(e); - } - }); - - } -} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryThenFetchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryThenFetchAsyncAction.java index 7ceefb1998c7c..a901d71157137 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryThenFetchAsyncAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryThenFetchAsyncAction.java @@ -19,205 +19,43 @@ package org.elasticsearch.action.search; -import com.carrotsearch.hppc.IntArrayList; import org.apache.logging.log4j.Logger; -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; -import org.apache.lucene.search.ScoreDoc; import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.ActionRunnable; -import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.util.concurrent.AtomicArray; -import org.elasticsearch.search.SearchShardTarget; -import org.elasticsearch.search.action.SearchTransportService; -import org.elasticsearch.search.controller.SearchPhaseController; -import org.elasticsearch.search.dfs.AggregatedDfs; +import org.elasticsearch.cluster.routing.GroupShardsIterator; +import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.search.dfs.DfsSearchResult; -import org.elasticsearch.search.fetch.FetchSearchResult; -import org.elasticsearch.search.fetch.ShardFetchSearchRequest; -import org.elasticsearch.search.internal.InternalSearchResponse; -import org.elasticsearch.search.internal.ShardSearchTransportRequest; -import org.elasticsearch.search.query.QuerySearchRequest; -import org.elasticsearch.search.query.QuerySearchResult; -import org.elasticsearch.threadpool.ThreadPool; +import 
org.elasticsearch.search.internal.AliasFilter; +import org.elasticsearch.transport.Transport; -import java.io.IOException; -import java.util.concurrent.atomic.AtomicInteger; +import java.util.Map; +import java.util.concurrent.Executor; +import java.util.function.BiFunction; -class SearchDfsQueryThenFetchAsyncAction extends AbstractSearchAsyncAction { +final class SearchDfsQueryThenFetchAsyncAction extends AbstractSearchAsyncAction { - final AtomicArray queryResults; - final AtomicArray fetchResults; - final AtomicArray docIdsToLoad; + private final SearchPhaseController searchPhaseController; - SearchDfsQueryThenFetchAsyncAction(Logger logger, SearchTransportService searchTransportService, - ClusterService clusterService, IndexNameExpressionResolver indexNameExpressionResolver, - SearchPhaseController searchPhaseController, ThreadPool threadPool, - SearchRequest request, ActionListener listener) { - super(logger, searchTransportService, clusterService, indexNameExpressionResolver, searchPhaseController, threadPool, - request, listener); - queryResults = new AtomicArray<>(firstResults.length()); - fetchResults = new AtomicArray<>(firstResults.length()); - docIdsToLoad = new AtomicArray<>(firstResults.length()); + SearchDfsQueryThenFetchAsyncAction(final Logger logger, final SearchTransportService searchTransportService, + final BiFunction nodeIdToConnection, final Map aliasFilter, + final Map concreteIndexBoosts, final SearchPhaseController searchPhaseController, final Executor executor, + final SearchRequest request, final ActionListener listener, + final GroupShardsIterator shardsIts, final TransportSearchAction.SearchTimeProvider timeProvider, + final long clusterStateVersion, final SearchTask task) { + super("dfs", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, executor, request, listener, + shardsIts, timeProvider, clusterStateVersion, task, new ArraySearchPhaseResults<>(shardsIts.size())); + this.searchPhaseController = searchPhaseController; } @Override - protected String firstPhaseName() { - return "dfs"; + protected void executePhaseOnShard(final SearchShardIterator shardIt, final ShardRouting shard, + final SearchActionListener listener) { + getSearchTransport().sendExecuteDfs(getConnection(shardIt.getClusterAlias(), shard.currentNodeId()), + buildShardSearchRequest(shardIt) , getTask(), listener); } @Override - protected void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportRequest request, - ActionListener listener) { - searchTransportService.sendExecuteDfs(node, request, listener); - } - - @Override - protected void moveToSecondPhase() { - final AggregatedDfs dfs = searchPhaseController.aggregateDfs(firstResults); - final AtomicInteger counter = new AtomicInteger(firstResults.asList().size()); - for (final AtomicArray.Entry entry : firstResults.asList()) { - DfsSearchResult dfsResult = entry.value; - DiscoveryNode node = nodes.get(dfsResult.shardTarget().nodeId()); - QuerySearchRequest querySearchRequest = new QuerySearchRequest(request, dfsResult.id(), dfs); - executeQuery(entry.index, dfsResult, counter, querySearchRequest, node); - } - } - - void executeQuery(final int shardIndex, final DfsSearchResult dfsResult, final AtomicInteger counter, - final QuerySearchRequest querySearchRequest, final DiscoveryNode node) { - searchTransportService.sendExecuteQuery(node, querySearchRequest, new ActionListener() { - @Override - public void onResponse(QuerySearchResult result) { - result.shardTarget(dfsResult.shardTarget()); - 
queryResults.set(shardIndex, result); - if (counter.decrementAndGet() == 0) { - executeFetchPhase(); - } - } - - @Override - public void onFailure(Exception t) { - try { - onQueryFailure(t, querySearchRequest, shardIndex, dfsResult, counter); - } finally { - // the query might not have been executed at all (for example because thread pool rejected - // execution) and the search context that was created in dfs phase might not be released. - // release it again to be in the safe side - sendReleaseSearchContext(querySearchRequest.id(), node); - } - } - }); - } - - void onQueryFailure(Exception e, QuerySearchRequest querySearchRequest, int shardIndex, DfsSearchResult dfsResult, - AtomicInteger counter) { - if (logger.isDebugEnabled()) { - logger.debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute query phase", querySearchRequest.id()), e); - } - this.addShardFailure(shardIndex, dfsResult.shardTarget(), e); - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - if (successfulOps.get() == 0) { - listener.onFailure(new SearchPhaseExecutionException("query", "all shards failed", buildShardFailures())); - } else { - executeFetchPhase(); - } - } - } - - void executeFetchPhase() { - try { - innerExecuteFetchPhase(); - } catch (Exception e) { - listener.onFailure(new ReduceSearchPhaseException("query", "", e, buildShardFailures())); - } - } - - void innerExecuteFetchPhase() throws Exception { - final boolean isScrollRequest = request.scroll() != null; - sortedShardDocs = searchPhaseController.sortDocs(isScrollRequest, queryResults); - searchPhaseController.fillDocIdsToLoad(docIdsToLoad, sortedShardDocs); - - if (docIdsToLoad.asList().isEmpty()) { - finishHim(); - return; - } - - final ScoreDoc[] lastEmittedDocPerShard = (request.scroll() != null) ? - searchPhaseController.getLastEmittedDocPerShard(queryResults.asList(), sortedShardDocs, firstResults.length()) : null; - final AtomicInteger counter = new AtomicInteger(docIdsToLoad.asList().size()); - for (final AtomicArray.Entry entry : docIdsToLoad.asList()) { - QuerySearchResult queryResult = queryResults.get(entry.index); - DiscoveryNode node = nodes.get(queryResult.shardTarget().nodeId()); - ShardFetchSearchRequest fetchSearchRequest = createFetchRequest(queryResult, entry, lastEmittedDocPerShard); - executeFetch(entry.index, queryResult.shardTarget(), counter, fetchSearchRequest, node); - } - } - - void executeFetch(final int shardIndex, final SearchShardTarget shardTarget, final AtomicInteger counter, - final ShardFetchSearchRequest fetchSearchRequest, DiscoveryNode node) { - searchTransportService.sendExecuteFetch(node, fetchSearchRequest, new ActionListener() { - @Override - public void onResponse(FetchSearchResult result) { - result.shardTarget(shardTarget); - fetchResults.set(shardIndex, result); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - - @Override - public void onFailure(Exception t) { - // the search context might not be cleared on the node where the fetch was executed for example - // because the action was rejected by the thread pool. in this case we need to send a dedicated - // request to clear the search context. by setting docIdsToLoad to null, the context will be cleared - // in TransportSearchTypeAction.releaseIrrelevantSearchContexts() after the search request is done. 
- docIdsToLoad.set(shardIndex, null); - onFetchFailure(t, fetchSearchRequest, shardIndex, shardTarget, counter); - } - }); - } - - void onFetchFailure(Exception e, ShardFetchSearchRequest fetchSearchRequest, int shardIndex, - SearchShardTarget shardTarget, AtomicInteger counter) { - if (logger.isDebugEnabled()) { - logger.debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute fetch phase", fetchSearchRequest.id()), e); - } - this.addShardFailure(shardIndex, shardTarget, e); - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - - private void finishHim() { - threadPool.executor(ThreadPool.Names.SEARCH).execute(new ActionRunnable(listener) { - @Override - public void doRun() throws IOException { - final boolean isScrollRequest = request.scroll() != null; - final InternalSearchResponse internalResponse = searchPhaseController.merge(isScrollRequest, sortedShardDocs, queryResults, - fetchResults); - String scrollId = isScrollRequest ? TransportSearchHelper.buildScrollId(request.searchType(), firstResults) : null; - listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(), - buildTookInMillis(), buildShardFailures())); - releaseIrrelevantSearchContexts(queryResults, docIdsToLoad); - } - - @Override - public void onFailure(Exception e) { - try { - ReduceSearchPhaseException failure = new ReduceSearchPhaseException("merge", "", e, buildShardFailures()); - if (logger.isDebugEnabled()) { - logger.debug("failed to reduce search", failure); - } - super.onFailure(failure); - } finally { - releaseIrrelevantSearchContexts(queryResults, docIdsToLoad); - } - } - }); + protected SearchPhase getNextPhase(final SearchPhaseResults results, final SearchPhaseContext context) { + return new DfsQueryPhase(results.getAtomicArray(), searchPhaseController, (queryResults) -> + new FetchSearchPhase(queryResults, searchPhaseController, context), context); } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchPhase.java b/core/src/main/java/org/elasticsearch/action/search/SearchPhase.java new file mode 100644 index 0000000000000..7bb9c2ba28a89 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/SearchPhase.java @@ -0,0 +1,42 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import org.elasticsearch.common.CheckedRunnable; + +import java.io.IOException; +import java.util.Objects; + +/** + * Base class for all individual search phases like collecting distributed frequencies, fetching documents, querying shards. 
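As an aside for readers of this patch (illustration only, not part of the diff): the SearchPhase abstraction introduced above is essentially a named runnable whose work may throw IOException, so the coordinating action can report failures by phase name. A minimal standalone sketch of that idea, using invented class names rather than the Elasticsearch types:

```java
import java.io.IOException;
import java.util.Objects;

// Standalone sketch of a "named phase": the name travels with the unit of work
// so a failure can be reported as, e.g., "phase [fetch] failed".
abstract class NamedPhase {
    private final String name;

    protected NamedPhase(String name) {
        this.name = Objects.requireNonNull(name, "name must not be null");
    }

    final String getName() {
        return name;
    }

    abstract void run() throws IOException;
}

class PhaseDemo {
    public static void main(String[] args) {
        NamedPhase phase = new NamedPhase("fetch") {
            @Override
            void run() {
                System.out.println("running phase [" + getName() + "]");
            }
        };
        try {
            phase.run();
        } catch (IOException e) {
            System.err.println("phase [" + phase.getName() + "] failed: " + e.getMessage());
        }
    }
}
```

In the patch itself the equivalent role is played by SearchPhase, which additionally implements CheckedRunnable so the async action can compose and chain phases.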
+ */ +abstract class SearchPhase implements CheckedRunnable { + private final String name; + + protected SearchPhase(String name) { + this.name = Objects.requireNonNull(name, "name must not be null"); + } + + /** + * Returns the phases name. + */ + public String getName() { + return name; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchPhaseContext.java b/core/src/main/java/org/elasticsearch/action/search/SearchPhaseContext.java new file mode 100644 index 0000000000000..9829ff6a98337 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/SearchPhaseContext.java @@ -0,0 +1,117 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import org.apache.logging.log4j.Logger; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.OriginalIndices; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.search.SearchShardTarget; +import org.elasticsearch.search.internal.InternalSearchResponse; +import org.elasticsearch.search.internal.ShardSearchTransportRequest; +import org.elasticsearch.transport.Transport; + +import java.util.concurrent.Executor; + +/** + * This class provide contextual state and access to resources across multiple search phases. + */ +interface SearchPhaseContext extends ActionListener, Executor { + // TODO maybe we can make this concrete later - for now we just implement this in the base class for all initial phases + + /** + * Returns the total number of shards to the current search across all indices + */ + int getNumShards(); + + /** + * Returns a logger for this context to prevent each individual phase to create their own logger. + */ + Logger getLogger(); + + /** + * Returns the currently executing search task + */ + SearchTask getTask(); + + /** + * Returns the currently executing search request + */ + SearchRequest getRequest(); + + /** + * Builds the final search response that should be send back to the user. + * @param internalSearchResponse the internal search response + * @param scrollId an optional scroll ID if this search is a scroll search + */ + SearchResponse buildSearchResponse(InternalSearchResponse internalSearchResponse, String scrollId); + + /** + * This method will communicate a fatal phase failure back to the user. In contrast to a shard failure + * will this method immediately fail the search request and return the failure to the issuer of the request + * @param phase the phase that failed + * @param msg an optional message + * @param cause the cause of the phase failure + */ + void onPhaseFailure(SearchPhase phase, String msg, Throwable cause); + + /** + * This method will record a shard failure for the given shard index. 
In contrast to a phase failure + * ({@link #onPhaseFailure(SearchPhase, String, Throwable)}) this method will immediately return to the user but will record + * a shard failure for the given shard index. This should be called if a shard failure happens after we successfully retrieved + * a result from that shard in a previous phase. + */ + void onShardFailure(int shardIndex, @Nullable SearchShardTarget shardTarget, Exception e); + + /** + * Returns a connection to the node if connected otherwise and {@link org.elasticsearch.transport.ConnectTransportException} will be + * thrown. + */ + Transport.Connection getConnection(String clusterAlias, String nodeId); + + /** + * Returns the {@link SearchTransportService} to send shard request to other nodes + */ + SearchTransportService getSearchTransport(); + + /** + * Releases a search context with the given context ID on the node the given connection is connected to. + * @see org.elasticsearch.search.query.QuerySearchResult#getRequestId() + * @see org.elasticsearch.search.fetch.FetchSearchResult#getRequestId() + * + */ + default void sendReleaseSearchContext(long contextId, Transport.Connection connection, OriginalIndices originalIndices) { + if (connection != null) { + getSearchTransport().sendFreeContext(connection, contextId, originalIndices); + } + } + + /** + * Builds an request for the initial search phase. + */ + ShardSearchTransportRequest buildShardSearchRequest(SearchShardIterator shardIt); + + /** + * Processes the phase transition from on phase to another. This method handles all errors that happen during the initial run execution + * of the next phase. If there are no successful operations in the context when this method is executed the search is aborted and + * a response is returned to the user indicating that all shards have failed. + */ + void executeNextPhase(SearchPhase currentPhase, SearchPhase nextPhase); + +} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchPhaseController.java b/core/src/main/java/org/elasticsearch/action/search/SearchPhaseController.java new file mode 100644 index 0000000000000..afcb8656af631 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/SearchPhaseController.java @@ -0,0 +1,756 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
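To make the failure-handling contract described above concrete (an illustrative sketch only, not part of this diff): a per-shard failure is recorded and the request keeps going as long as at least one shard produced a result, while a phase failure aborts the whole request. The mini-coordinator below is invented for the example and only mirrors the behaviour the interface documents:

```java
import java.util.ArrayList;
import java.util.List;

// Invented mini-coordinator mirroring the documented contract: shard failures are
// collected; only "all shards failed" (or a fatal phase error) fails the request.
class MiniCoordinator {
    private final int numShards;
    private final List<String> shardFailures = new ArrayList<>();

    MiniCoordinator(int numShards) {
        this.numShards = numShards;
    }

    void onShardFailure(int shardIndex, Exception e) {
        shardFailures.add("shard [" + shardIndex + "]: " + e.getMessage());
        if (shardFailures.size() == numShards) { // no shard succeeded at all
            onPhaseFailure("query", "all shards failed", e);
        }
    }

    void onPhaseFailure(String phase, String msg, Throwable cause) {
        // In the real interface this fails the listener for the whole search request.
        throw new RuntimeException("phase [" + phase + "] failed: " + msg, cause);
    }
}
```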
+ */ + +package org.elasticsearch.action.search; + +import com.carrotsearch.hppc.IntArrayList; +import com.carrotsearch.hppc.ObjectObjectHashMap; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.CollectionStatistics; +import org.apache.lucene.search.FieldDoc; +import org.apache.lucene.search.ScoreDoc; +import org.apache.lucene.search.Sort; +import org.apache.lucene.search.SortField; +import org.apache.lucene.search.TermStatistics; +import org.apache.lucene.search.TopDocs; +import org.apache.lucene.search.TopFieldDocs; +import org.apache.lucene.search.grouping.CollapseTopFieldDocs; +import org.elasticsearch.common.collect.HppcMaps; +import org.elasticsearch.common.component.AbstractComponent; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.BigArrays; +import org.elasticsearch.script.ScriptService; +import org.elasticsearch.search.DocValueFormat; +import org.elasticsearch.search.SearchHit; +import org.elasticsearch.search.SearchHits; +import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.aggregations.InternalAggregation; +import org.elasticsearch.search.aggregations.InternalAggregation.ReduceContext; +import org.elasticsearch.search.aggregations.InternalAggregations; +import org.elasticsearch.search.aggregations.pipeline.SiblingPipelineAggregator; +import org.elasticsearch.search.builder.SearchSourceBuilder; +import org.elasticsearch.search.dfs.AggregatedDfs; +import org.elasticsearch.search.dfs.DfsSearchResult; +import org.elasticsearch.search.fetch.FetchSearchResult; +import org.elasticsearch.search.internal.InternalSearchResponse; +import org.elasticsearch.search.profile.ProfileShardResult; +import org.elasticsearch.search.profile.SearchProfileShardResults; +import org.elasticsearch.search.query.QuerySearchResult; +import org.elasticsearch.search.suggest.Suggest; +import org.elasticsearch.search.suggest.Suggest.Suggestion; +import org.elasticsearch.search.suggest.Suggest.Suggestion.Entry; +import org.elasticsearch.search.suggest.completion.CompletionSuggestion; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.function.IntFunction; +import java.util.stream.Collectors; +import java.util.stream.StreamSupport; + +public final class SearchPhaseController extends AbstractComponent { + + private static final ScoreDoc[] EMPTY_DOCS = new ScoreDoc[0]; + + private final BigArrays bigArrays; + private final ScriptService scriptService; + + public SearchPhaseController(Settings settings, BigArrays bigArrays, ScriptService scriptService) { + super(settings); + this.bigArrays = bigArrays; + this.scriptService = scriptService; + } + + public AggregatedDfs aggregateDfs(Collection results) { + ObjectObjectHashMap termStatistics = HppcMaps.newNoNullKeysMap(); + ObjectObjectHashMap fieldStatistics = HppcMaps.newNoNullKeysMap(); + long aggMaxDoc = 0; + for (DfsSearchResult lEntry : results) { + final Term[] terms = lEntry.terms(); + final TermStatistics[] stats = lEntry.termStatistics(); + assert terms.length == stats.length; + for (int i = 0; i < terms.length; i++) { + assert terms[i] != null; + TermStatistics existing = termStatistics.get(terms[i]); + if (existing != null) { + assert terms[i].bytes().equals(existing.term()); + // totalTermFrequency is an optional statistic we need to check if either one or both + // are set to -1 which means not present and 
then set it globally to -1 + termStatistics.put(terms[i], new TermStatistics(existing.term(), + existing.docFreq() + stats[i].docFreq(), + optionalSum(existing.totalTermFreq(), stats[i].totalTermFreq()))); + } else { + termStatistics.put(terms[i], stats[i]); + } + + } + + assert !lEntry.fieldStatistics().containsKey(null); + final Object[] keys = lEntry.fieldStatistics().keys; + final Object[] values = lEntry.fieldStatistics().values; + for (int i = 0; i < keys.length; i++) { + if (keys[i] != null) { + String key = (String) keys[i]; + CollectionStatistics value = (CollectionStatistics) values[i]; + assert key != null; + CollectionStatistics existing = fieldStatistics.get(key); + if (existing != null) { + CollectionStatistics merged = new CollectionStatistics( + key, existing.maxDoc() + value.maxDoc(), + optionalSum(existing.docCount(), value.docCount()), + optionalSum(existing.sumTotalTermFreq(), value.sumTotalTermFreq()), + optionalSum(existing.sumDocFreq(), value.sumDocFreq()) + ); + fieldStatistics.put(key, merged); + } else { + fieldStatistics.put(key, value); + } + } + } + aggMaxDoc += lEntry.maxDoc(); + } + return new AggregatedDfs(termStatistics, fieldStatistics, aggMaxDoc); + } + + private static long optionalSum(long left, long right) { + return Math.min(left, right) == -1 ? -1 : left + right; + } + + /** + * Returns a score doc array of top N search docs across all shards, followed by top suggest docs for each + * named completion suggestion across all shards. If more than one named completion suggestion is specified in the + * request, the suggest docs for a named suggestion are ordered by the suggestion name. + * + * Note: The order of the sorted score docs depends on the shard index in the result array if the merge process needs to disambiguate + * the result. In oder to obtain stable results the shard index (index of the result in the result array) must be the same. + * + * @param ignoreFrom Whether to ignore the from and sort all hits in each shard result. + * Enabled only for scroll search, because that only retrieves hits of length 'size' in the query phase. + * @param results the search phase results to obtain the sort docs from + * @param bufferedTopDocs the pre-consumed buffered top docs + * @param topDocsStats the top docs stats to fill + * @param from the offset into the search results top docs + * @param size the number of hits to return from the merged top docs + */ + public SortedTopDocs sortDocs(boolean ignoreFrom, Collection results, + final Collection bufferedTopDocs, final TopDocsStats topDocsStats, int from, int size) { + if (results.isEmpty()) { + return SortedTopDocs.EMPTY; + } + final Collection topDocs = bufferedTopDocs == null ? new ArrayList<>() : bufferedTopDocs; + final Map>> groupedCompletionSuggestions = new HashMap<>(); + for (SearchPhaseResult sortedResult : results) { // TODO we can move this loop into the reduce call to only loop over this once + /* We loop over all results once, group together the completion suggestions if there are any and collect relevant + * top docs results. Each top docs gets it's shard index set on all top docs to simplify top docs merging down the road + * this allowed to remove a single shared optimization code here since now we don't materialized a dense array of + * top docs anymore but instead only pass relevant results / top docs to the merge method*/ + QuerySearchResult queryResult = sortedResult.queryResult(); + if (queryResult.hasConsumedTopDocs() == false) { // already consumed? 
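Purely as a worked illustration of the optionalSum rule above (not part of the patch): -1 marks a statistic as "not present" and is contagious when merging, so shards reporting 10 and -1 merge to -1, while 10 and 5 merge to 15. A tiny self-contained check, with the merge rule copied from the method above:

```java
// Worked example of the merge rule used for optional term/field statistics:
// -1 means "statistic not available" and must stay -1 after merging.
public class OptionalSumDemo {
    static long optionalSum(long left, long right) {
        return Math.min(left, right) == -1 ? -1 : left + right;
    }

    public static void main(String[] args) {
        System.out.println(optionalSum(10, 5));   // 15 -> both shards reported the statistic
        System.out.println(optionalSum(10, -1));  // -1 -> one shard could not report it
        System.out.println(optionalSum(-1, -1));  // -1 -> nobody reported it
    }
}
```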
+ final TopDocs td = queryResult.consumeTopDocs(); + assert td != null; + topDocsStats.add(td); + if (td.scoreDocs.length > 0) { // make sure we set the shard index before we add it - the consumer didn't do that yet + setShardIndex(td, queryResult.getShardIndex()); + topDocs.add(td); + } + } + if (queryResult.hasSuggestHits()) { + Suggest shardSuggest = queryResult.suggest(); + for (CompletionSuggestion suggestion : shardSuggest.filter(CompletionSuggestion.class)) { + suggestion.setShardIndex(sortedResult.getShardIndex()); + List> suggestions = + groupedCompletionSuggestions.computeIfAbsent(suggestion.getName(), s -> new ArrayList<>()); + suggestions.add(suggestion); + } + } + } + final boolean hasHits = (groupedCompletionSuggestions.isEmpty() && topDocs.isEmpty()) == false; + if (hasHits) { + final TopDocs mergedTopDocs = mergeTopDocs(topDocs, size, ignoreFrom ? 0 : from); + final ScoreDoc[] mergedScoreDocs = mergedTopDocs == null ? EMPTY_DOCS : mergedTopDocs.scoreDocs; + ScoreDoc[] scoreDocs = mergedScoreDocs; + if (groupedCompletionSuggestions.isEmpty() == false) { + int numSuggestDocs = 0; + List>> completionSuggestions = + new ArrayList<>(groupedCompletionSuggestions.size()); + for (List> groupedSuggestions : groupedCompletionSuggestions.values()) { + final CompletionSuggestion completionSuggestion = CompletionSuggestion.reduceTo(groupedSuggestions); + assert completionSuggestion != null; + numSuggestDocs += completionSuggestion.getOptions().size(); + completionSuggestions.add(completionSuggestion); + } + scoreDocs = new ScoreDoc[mergedScoreDocs.length + numSuggestDocs]; + System.arraycopy(mergedScoreDocs, 0, scoreDocs, 0, mergedScoreDocs.length); + int offset = mergedScoreDocs.length; + Suggest suggestions = new Suggest(completionSuggestions); + for (CompletionSuggestion completionSuggestion : suggestions.filter(CompletionSuggestion.class)) { + for (CompletionSuggestion.Entry.Option option : completionSuggestion.getOptions()) { + scoreDocs[offset++] = option.getDoc(); + } + } + } + final boolean isSortedByField; + final SortField[] sortFields; + if (mergedTopDocs != null && mergedTopDocs instanceof TopFieldDocs) { + TopFieldDocs fieldDocs = (TopFieldDocs) mergedTopDocs; + isSortedByField = (fieldDocs instanceof CollapseTopFieldDocs && + fieldDocs.fields.length == 1 && fieldDocs.fields[0].getType() == SortField.Type.SCORE) == false; + sortFields = fieldDocs.fields; + } else { + isSortedByField = false; + sortFields = null; + } + return new SortedTopDocs(scoreDocs, isSortedByField, sortFields); + } else { + // no relevant docs + return SortedTopDocs.EMPTY; + } + } + + TopDocs mergeTopDocs(Collection results, int topN, int from) { + if (results.isEmpty()) { + return null; + } + assert results.isEmpty() == false; + final boolean setShardIndex = false; + final TopDocs topDocs = results.stream().findFirst().get(); + final TopDocs mergedTopDocs; + final int numShards = results.size(); + if (numShards == 1 && from == 0) { // only one shard and no pagination we can just return the topDocs as we got them. 
+ return topDocs; + } else if (topDocs instanceof CollapseTopFieldDocs) { + CollapseTopFieldDocs firstTopDocs = (CollapseTopFieldDocs) topDocs; + final Sort sort = new Sort(firstTopDocs.fields); + final CollapseTopFieldDocs[] shardTopDocs = results.toArray(new CollapseTopFieldDocs[numShards]); + mergedTopDocs = CollapseTopFieldDocs.merge(sort, from, topN, shardTopDocs, setShardIndex); + } else if (topDocs instanceof TopFieldDocs) { + TopFieldDocs firstTopDocs = (TopFieldDocs) topDocs; + final Sort sort = new Sort(firstTopDocs.fields); + final TopFieldDocs[] shardTopDocs = results.toArray(new TopFieldDocs[numShards]); + mergedTopDocs = TopDocs.merge(sort, from, topN, shardTopDocs, setShardIndex); + } else { + final TopDocs[] shardTopDocs = results.toArray(new TopDocs[numShards]); + mergedTopDocs = TopDocs.merge(from, topN, shardTopDocs, setShardIndex); + } + return mergedTopDocs; + } + + private static void setShardIndex(TopDocs topDocs, int shardIndex) { + for (ScoreDoc doc : topDocs.scoreDocs) { + if (doc.shardIndex != -1) { + // once there is a single shard index initialized all others will be initialized too + // there are many asserts down in lucene land that this is actually true. we can shortcut it here. + return; + } + doc.shardIndex = shardIndex; + } + } + + public ScoreDoc[] getLastEmittedDocPerShard(ReducedQueryPhase reducedQueryPhase, int numShards) { + final ScoreDoc[] lastEmittedDocPerShard = new ScoreDoc[numShards]; + if (reducedQueryPhase.isEmptyResult == false) { + final ScoreDoc[] sortedScoreDocs = reducedQueryPhase.scoreDocs; + // from is always zero as when we use scroll, we ignore from + long size = Math.min(reducedQueryPhase.fetchHits, reducedQueryPhase.size); + // with collapsing we can have more hits than sorted docs + size = Math.min(sortedScoreDocs.length, size); + for (int sortedDocsIndex = 0; sortedDocsIndex < size; sortedDocsIndex++) { + ScoreDoc scoreDoc = sortedScoreDocs[sortedDocsIndex]; + lastEmittedDocPerShard[scoreDoc.shardIndex] = scoreDoc; + } + } + return lastEmittedDocPerShard; + + } + + /** + * Builds an array, with potential null elements, with docs to load. 
+ */ + public IntArrayList[] fillDocIdsToLoad(int numShards, ScoreDoc[] shardDocs) { + IntArrayList[] docIdsToLoad = new IntArrayList[numShards]; + for (ScoreDoc shardDoc : shardDocs) { + IntArrayList shardDocIdsToLoad = docIdsToLoad[shardDoc.shardIndex]; + if (shardDocIdsToLoad == null) { + shardDocIdsToLoad = docIdsToLoad[shardDoc.shardIndex] = new IntArrayList(); + } + shardDocIdsToLoad.add(shardDoc.doc); + } + return docIdsToLoad; + } + + /** + * Enriches search hits and completion suggestion hits from sortedDocs using fetchResultsArr, + * merges suggestions, aggregations and profile results + * + * Expects sortedDocs to have top search docs across all shards, optionally followed by top suggest docs for each named + * completion suggestion ordered by suggestion name + */ + public InternalSearchResponse merge(boolean ignoreFrom, ReducedQueryPhase reducedQueryPhase, + Collection fetchResults, IntFunction resultsLookup) { + if (reducedQueryPhase.isEmptyResult) { + return InternalSearchResponse.empty(); + } + ScoreDoc[] sortedDocs = reducedQueryPhase.scoreDocs; + SearchHits hits = getHits(reducedQueryPhase, ignoreFrom, fetchResults, resultsLookup); + if (reducedQueryPhase.suggest != null) { + if (!fetchResults.isEmpty()) { + int currentOffset = hits.getHits().length; + for (CompletionSuggestion suggestion : reducedQueryPhase.suggest.filter(CompletionSuggestion.class)) { + final List suggestionOptions = suggestion.getOptions(); + for (int scoreDocIndex = currentOffset; scoreDocIndex < currentOffset + suggestionOptions.size(); scoreDocIndex++) { + ScoreDoc shardDoc = sortedDocs[scoreDocIndex]; + SearchPhaseResult searchResultProvider = resultsLookup.apply(shardDoc.shardIndex); + if (searchResultProvider == null) { + // this can happen if we are hitting a shard failure during the fetch phase + // in this case we referenced the shard result via teh ScoreDoc but never got a + // result from fetch. + // TODO it would be nice to assert this in the future + continue; + } + FetchSearchResult fetchResult = searchResultProvider.fetchResult(); + final int index = fetchResult.counterGetAndIncrement(); + assert index < fetchResult.hits().internalHits().length : "not enough hits fetched. index [" + index + "] length: " + + fetchResult.hits().internalHits().length; + SearchHit hit = fetchResult.hits().internalHits()[index]; + CompletionSuggestion.Entry.Option suggestOption = + suggestionOptions.get(scoreDocIndex - currentOffset); + hit.score(shardDoc.score); + hit.shard(fetchResult.getSearchShardTarget()); + suggestOption.setHit(hit); + } + currentOffset += suggestionOptions.size(); + } + assert currentOffset == sortedDocs.length : "expected no more score doc slices"; + } + } + return reducedQueryPhase.buildResponse(hits); + } + + private SearchHits getHits(ReducedQueryPhase reducedQueryPhase, boolean ignoreFrom, + Collection fetchResults, IntFunction resultsLookup) { + final boolean sorted = reducedQueryPhase.isSortedByField; + ScoreDoc[] sortedDocs = reducedQueryPhase.scoreDocs; + int sortScoreIndex = -1; + if (sorted) { + for (int i = 0; i < reducedQueryPhase.sortField.length; i++) { + if (reducedQueryPhase.sortField[i].getType() == SortField.Type.SCORE) { + sortScoreIndex = i; + } + } + } + // clean the fetch counter + for (SearchPhaseResult entry : fetchResults) { + entry.fetchResult().initCounter(); + } + int from = ignoreFrom ? 
0 : reducedQueryPhase.from; + int numSearchHits = (int) Math.min(reducedQueryPhase.fetchHits - from, reducedQueryPhase.size); + // with collapsing we can have more fetch hits than sorted docs + numSearchHits = Math.min(sortedDocs.length, numSearchHits); + // merge hits + List hits = new ArrayList<>(); + if (!fetchResults.isEmpty()) { + for (int i = 0; i < numSearchHits; i++) { + ScoreDoc shardDoc = sortedDocs[i]; + SearchPhaseResult fetchResultProvider = resultsLookup.apply(shardDoc.shardIndex); + if (fetchResultProvider == null) { + // this can happen if we are hitting a shard failure during the fetch phase + // in this case we referenced the shard result via teh ScoreDoc but never got a + // result from fetch. + // TODO it would be nice to assert this in the future + continue; + } + FetchSearchResult fetchResult = fetchResultProvider.fetchResult(); + final int index = fetchResult.counterGetAndIncrement(); + assert index < fetchResult.hits().internalHits().length : "not enough hits fetched. index [" + index + "] length: " + + fetchResult.hits().internalHits().length; + SearchHit searchHit = fetchResult.hits().internalHits()[index]; + searchHit.score(shardDoc.score); + searchHit.shard(fetchResult.getSearchShardTarget()); + if (sorted) { + FieldDoc fieldDoc = (FieldDoc) shardDoc; + searchHit.sortValues(fieldDoc.fields, reducedQueryPhase.sortValueFormats); + if (sortScoreIndex != -1) { + searchHit.score(((Number) fieldDoc.fields[sortScoreIndex]).floatValue()); + } + } + hits.add(searchHit); + } + } + return new SearchHits(hits.toArray(new SearchHit[hits.size()]), reducedQueryPhase.totalHits, + reducedQueryPhase.maxScore); + } + + /** + * Reduces the given query results and consumes all aggregations and profile results. + * @param queryResults a list of non-null query shard results + */ + public ReducedQueryPhase reducedQueryPhase(Collection queryResults, boolean isScrollRequest) { + return reducedQueryPhase(queryResults, null, new ArrayList<>(), new TopDocsStats(), 0, isScrollRequest); + } + + /** + * Reduces the given query results and consumes all aggregations and profile results. + * @param queryResults a list of non-null query shard results + * @param bufferedAggs a list of pre-collected / buffered aggregations. if this list is non-null all aggregations have been consumed + * from all non-null query results. + * @param bufferedTopDocs a list of pre-collected / buffered top docs. if this list is non-null all top docs have been consumed + * from all non-null query results. + * @param numReducePhases the number of non-final reduce phases applied to the query results. 
+ * @see QuerySearchResult#consumeAggs() + * @see QuerySearchResult#consumeProfileResult() + */ + private ReducedQueryPhase reducedQueryPhase(Collection queryResults, + List bufferedAggs, List bufferedTopDocs, + TopDocsStats topDocsStats, int numReducePhases, boolean isScrollRequest) { + assert numReducePhases >= 0 : "num reduce phases must be >= 0 but was: " + numReducePhases; + numReducePhases++; // increment for this phase + boolean timedOut = false; + Boolean terminatedEarly = null; + if (queryResults.isEmpty()) { // early terminate we have nothing to reduce + return new ReducedQueryPhase(topDocsStats.totalHits, topDocsStats.fetchHits, topDocsStats.maxScore, + timedOut, terminatedEarly, null, null, null, EMPTY_DOCS, null, null, numReducePhases, false, 0, 0, true); + } + final QuerySearchResult firstResult = queryResults.stream().findFirst().get().queryResult(); + final boolean hasSuggest = firstResult.suggest() != null; + final boolean hasProfileResults = firstResult.hasProfileResults(); + final boolean consumeAggs; + final List aggregationsList; + if (bufferedAggs != null) { + consumeAggs = false; + // we already have results from intermediate reduces and just need to perform the final reduce + assert firstResult.hasAggs() : "firstResult has no aggs but we got non null buffered aggs?"; + aggregationsList = bufferedAggs; + } else if (firstResult.hasAggs()) { + // the number of shards was less than the buffer size so we reduce agg results directly + aggregationsList = new ArrayList<>(queryResults.size()); + consumeAggs = true; + } else { + // no aggregations + aggregationsList = Collections.emptyList(); + consumeAggs = false; + } + + // count the total (we use the query result provider here, since we might not get any hits (we scrolled past them)) + final Map> groupedSuggestions = hasSuggest ? new HashMap<>() : Collections.emptyMap(); + final Map profileResults = hasProfileResults ? new HashMap<>(queryResults.size()) + : Collections.emptyMap(); + int from = 0; + int size = 0; + for (SearchPhaseResult entry : queryResults) { + QuerySearchResult result = entry.queryResult(); + from = result.from(); + size = result.size(); + if (result.searchTimedOut()) { + timedOut = true; + } + if (result.terminatedEarly() != null) { + if (terminatedEarly == null) { + terminatedEarly = result.terminatedEarly(); + } else if (result.terminatedEarly()) { + terminatedEarly = true; + } + } + if (hasSuggest) { + assert result.suggest() != null; + for (Suggestion> suggestion : result.suggest()) { + List suggestionList = groupedSuggestions.computeIfAbsent(suggestion.getName(), s -> new ArrayList<>()); + suggestionList.add(suggestion); + } + } + if (consumeAggs) { + aggregationsList.add((InternalAggregations) result.consumeAggs()); + } + if (hasProfileResults) { + String key = result.getSearchShardTarget().toString(); + profileResults.put(key, result.consumeProfileResult()); + } + } + final Suggest suggest = groupedSuggestions.isEmpty() ? null : new Suggest(Suggest.reduce(groupedSuggestions)); + ReduceContext reduceContext = new ReduceContext(bigArrays, scriptService, true); + final InternalAggregations aggregations = aggregationsList.isEmpty() ? null : reduceAggs(aggregationsList, + firstResult.pipelineAggregators(), reduceContext); + final SearchProfileShardResults shardResults = profileResults.isEmpty() ? 
null : new SearchProfileShardResults(profileResults); + final SortedTopDocs scoreDocs = this.sortDocs(isScrollRequest, queryResults, bufferedTopDocs, topDocsStats, from, size); + return new ReducedQueryPhase(topDocsStats.totalHits, topDocsStats.fetchHits, topDocsStats.maxScore, + timedOut, terminatedEarly, suggest, aggregations, shardResults, scoreDocs.scoreDocs, scoreDocs.sortFields, + firstResult != null ? firstResult.sortValueFormats() : null, + numReducePhases, scoreDocs.isSortedByField, size, from, firstResult == null); + } + + + /** + * Performs an intermediate reduce phase on the aggregations. For instance with this reduce phase never prune information + * that relevant for the final reduce step. For final reduce see {@link #reduceAggs(List, List, ReduceContext)} + */ + private InternalAggregations reduceAggsIncrementally(List aggregationsList) { + ReduceContext reduceContext = new ReduceContext(bigArrays, scriptService, false); + return aggregationsList.isEmpty() ? null : reduceAggs(aggregationsList, + null, reduceContext); + } + + private InternalAggregations reduceAggs(List aggregationsList, + List pipelineAggregators, ReduceContext reduceContext) { + InternalAggregations aggregations = InternalAggregations.reduce(aggregationsList, reduceContext); + if (pipelineAggregators != null) { + List newAggs = StreamSupport.stream(aggregations.spliterator(), false) + .map((p) -> (InternalAggregation) p) + .collect(Collectors.toList()); + for (SiblingPipelineAggregator pipelineAggregator : pipelineAggregators) { + InternalAggregation newAgg = pipelineAggregator.doReduce(new InternalAggregations(newAggs), reduceContext); + newAggs.add(newAgg); + } + return new InternalAggregations(newAggs); + } + return aggregations; + } + + public static final class ReducedQueryPhase { + // the sum of all hits across all reduces shards + final long totalHits; + // the number of returned hits (doc IDs) across all reduces shards + final long fetchHits; + // the max score across all reduces hits or {@link Float#NaN} if no hits returned + final float maxScore; + // true if at least one reduced result timed out + final boolean timedOut; + // non null and true if at least one reduced result was terminated early + final Boolean terminatedEarly; + // the reduced suggest results + final Suggest suggest; + // the reduced internal aggregations + final InternalAggregations aggregations; + // the reduced profile results + final SearchProfileShardResults shardResults; + // the number of reduces phases + final int numReducePhases; + // the searches merged top docs + final ScoreDoc[] scoreDocs; + // the top docs sort fields used to sort the score docs, null if the results are not sorted + final SortField[] sortField; + // true iff the result score docs is sorted by a field (not score), this implies that sortField is set. + final boolean isSortedByField; + // the size of the top hits to return + final int size; + // true iff the query phase had no results. 
Otherwise false + final boolean isEmptyResult; + // the offset into the merged top hits + final int from; + // sort value formats used to sort / format the result + final DocValueFormat[] sortValueFormats; + + ReducedQueryPhase(long totalHits, long fetchHits, float maxScore, boolean timedOut, Boolean terminatedEarly, Suggest suggest, + InternalAggregations aggregations, SearchProfileShardResults shardResults, ScoreDoc[] scoreDocs, + SortField[] sortFields, DocValueFormat[] sortValueFormats, int numReducePhases, boolean isSortedByField, int size, + int from, boolean isEmptyResult) { + if (numReducePhases <= 0) { + throw new IllegalArgumentException("at least one reduce phase must have been applied but was: " + numReducePhases); + } + this.totalHits = totalHits; + this.fetchHits = fetchHits; + if (Float.isInfinite(maxScore)) { + this.maxScore = Float.NaN; + } else { + this.maxScore = maxScore; + } + this.timedOut = timedOut; + this.terminatedEarly = terminatedEarly; + this.suggest = suggest; + this.aggregations = aggregations; + this.shardResults = shardResults; + this.numReducePhases = numReducePhases; + this.scoreDocs = scoreDocs; + this.sortField = sortFields; + this.isSortedByField = isSortedByField; + this.size = size; + this.from = from; + this.isEmptyResult = isEmptyResult; + this.sortValueFormats = sortValueFormats; + } + + /** + * Creates a new search response from the given merged hits. + * @see #merge(boolean, ReducedQueryPhase, Collection, IntFunction) + */ + public InternalSearchResponse buildResponse(SearchHits hits) { + return new InternalSearchResponse(hits, aggregations, suggest, shardResults, timedOut, terminatedEarly, numReducePhases); + } + } + + /** + * A {@link InitialSearchPhase.ArraySearchPhaseResults} implementation + * that incrementally reduces aggregation results as shard results are consumed. + * This implementation can be configured to batch up a certain amount of results and only reduce them + * iff the buffer is exhausted. + */ + static final class QueryPhaseResultConsumer extends InitialSearchPhase.ArraySearchPhaseResults { + private final InternalAggregations[] aggsBuffer; + private final TopDocs[] topDocsBuffer; + private final boolean hasAggs; + private final boolean hasTopDocs; + private final int bufferSize; + private int index; + private final SearchPhaseController controller; + private int numReducePhases = 0; + private final TopDocsStats topDocsStats = new TopDocsStats(); + + /** + * Creates a new {@link QueryPhaseResultConsumer} + * @param controller a controller instance to reduce the query response objects + * @param expectedResultSize the expected number of query results. Corresponds to the number of shards queried + * @param bufferSize the size of the reduce buffer. if the buffer size is smaller than the number of expected results + * the buffer is used to incrementally reduce aggregation results before all shards responded. 
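The batched reduce described in the constructor comment above can be pictured with a small standalone sketch (illustrative only, not the Elasticsearch implementation): results are buffered up to bufferSize, and once the buffer is full it is collapsed into a single partial result that occupies slot 0, so memory stays bounded while shards keep responding. All names below are invented, and summing longs stands in for reducing aggregations:

```java
import java.util.Arrays;

// Invented sketch of the buffer-and-partially-reduce pattern: when the buffer is
// full, collapse it into one partial result at slot 0 and keep consuming.
class BatchedSumReducer {
    private final long[] buffer;
    private int index;
    private int numReducePhases;

    BatchedSumReducer(int bufferSize) {
        if (bufferSize < 2) {
            throw new IllegalArgumentException("buffer size must be >= 2");
        }
        this.buffer = new long[bufferSize];
    }

    synchronized void consume(long value) {
        if (index == buffer.length) {
            long partial = 0;
            for (long v : buffer) {
                partial += v;
            }
            Arrays.fill(buffer, 0L);
            buffer[0] = partial; // the partial result keeps slot 0
            index = 1;
            numReducePhases++;
        }
        buffer[index++] = value;
    }

    synchronized long reduce() {
        long total = 0;
        for (int i = 0; i < index; i++) {
            total += buffer[i];
        }
        numReducePhases++;
        return total;
    }

    int getNumReducePhases() {
        return numReducePhases;
    }
}
```

The trade-off mirrors the one the patch documents: a smaller buffer lowers peak memory while shards are still responding, at the cost of more intermediate reduce phases.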
+ */ + private QueryPhaseResultConsumer(SearchPhaseController controller, int expectedResultSize, int bufferSize, + boolean hasTopDocs, boolean hasAggs) { + super(expectedResultSize); + if (expectedResultSize != 1 && bufferSize < 2) { + throw new IllegalArgumentException("buffer size must be >= 2 if there is more than one expected result"); + } + if (expectedResultSize <= bufferSize) { + throw new IllegalArgumentException("buffer size must be less than the expected result size"); + } + if (hasAggs == false && hasTopDocs == false) { + throw new IllegalArgumentException("either aggs or top docs must be present"); + } + this.controller = controller; + // no need to buffer anything if we have less expected results. in this case we don't consume any results ahead of time. + this.aggsBuffer = new InternalAggregations[hasAggs ? bufferSize : 0]; + this.topDocsBuffer = new TopDocs[hasTopDocs ? bufferSize : 0]; + this.hasTopDocs = hasTopDocs; + this.hasAggs = hasAggs; + this.bufferSize = bufferSize; + + } + + @Override + public void consumeResult(SearchPhaseResult result) { + super.consumeResult(result); + QuerySearchResult queryResult = result.queryResult(); + consumeInternal(queryResult); + } + + private synchronized void consumeInternal(QuerySearchResult querySearchResult) { + if (index == bufferSize) { + if (hasAggs) { + InternalAggregations reducedAggs = controller.reduceAggsIncrementally(Arrays.asList(aggsBuffer)); + Arrays.fill(aggsBuffer, null); + aggsBuffer[0] = reducedAggs; + } + if (hasTopDocs) { + TopDocs reducedTopDocs = controller.mergeTopDocs(Arrays.asList(topDocsBuffer), + querySearchResult.from() + querySearchResult.size() // we have to merge here in the same way we collect on a shard + , 0); + Arrays.fill(topDocsBuffer, null); + topDocsBuffer[0] = reducedTopDocs; + } + numReducePhases++; + index = 1; + } + final int i = index++; + if (hasAggs) { + aggsBuffer[i] = (InternalAggregations) querySearchResult.consumeAggs(); + } + if (hasTopDocs) { + final TopDocs topDocs = querySearchResult.consumeTopDocs(); // can't be null + topDocsStats.add(topDocs); + SearchPhaseController.setShardIndex(topDocs, querySearchResult.getShardIndex()); + topDocsBuffer[i] = topDocs; + } + } + + private synchronized List getRemainingAggs() { + return hasAggs ? Arrays.asList(aggsBuffer).subList(0, index) : null; + } + + private synchronized List getRemainingTopDocs() { + return hasTopDocs ? Arrays.asList(topDocsBuffer).subList(0, index) : null; + } + + + @Override + public ReducedQueryPhase reduce() { + return controller.reducedQueryPhase(results.asList(), getRemainingAggs(), getRemainingTopDocs(), topDocsStats, + numReducePhases, false); + } + + /** + * Returns the number of buffered results + */ + int getNumBuffered() { + return index; + } + + int getNumReducePhases() { return numReducePhases; } + } + + /** + * Returns a new ArraySearchPhaseResults instance. This might return an instance that reduces search responses incrementally. + */ + InitialSearchPhase.ArraySearchPhaseResults newSearchPhaseResults(SearchRequest request, int numShards) { + SearchSourceBuilder source = request.source(); + boolean isScrollRequest = request.scroll() != null; + final boolean hasAggs = source != null && source.aggregations() != null; + final boolean hasTopDocs = source == null || source.size() != 0; + + if (isScrollRequest == false && (hasAggs || hasTopDocs)) { + // no incremental reduce if scroll is used - we only hit a single shard or sometimes more... 
+ if (request.getBatchedReduceSize() < numShards) { + // only use this if there are aggs and if there are more shards than we should reduce at once + return new QueryPhaseResultConsumer(this, numShards, request.getBatchedReduceSize(), hasTopDocs, hasAggs); + } + } + return new InitialSearchPhase.ArraySearchPhaseResults(numShards) { + @Override + public ReducedQueryPhase reduce() { + return reducedQueryPhase(results.asList(), isScrollRequest); + } + }; + } + + static final class TopDocsStats { + long totalHits; + long fetchHits; + float maxScore = Float.NEGATIVE_INFINITY; + + void add(TopDocs topDocs) { + totalHits += topDocs.totalHits; + fetchHits += topDocs.scoreDocs.length; + if (!Float.isNaN(topDocs.getMaxScore())) { + maxScore = Math.max(maxScore, topDocs.getMaxScore()); + } + } + } + + static final class SortedTopDocs { + static final SortedTopDocs EMPTY = new SortedTopDocs(EMPTY_DOCS, false, null); + final ScoreDoc[] scoreDocs; + final boolean isSortedByField; + final SortField[] sortFields; + + SortedTopDocs(ScoreDoc[] scoreDocs, boolean isSortedByField, SortField[] sortFields) { + this.scoreDocs = scoreDocs; + this.isSortedByField = isSortedByField; + this.sortFields = sortFields; + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchPhaseExecutionException.java b/core/src/main/java/org/elasticsearch/action/search/SearchPhaseExecutionException.java index 515d3204fb6a8..c6e0b21dffd5d 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchPhaseExecutionException.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchPhaseExecutionException.java @@ -103,6 +103,7 @@ public ShardSearchFailure[] shardFailures() { return shardFailures; } + @Override public Throwable getCause() { Throwable cause = super.getCause(); if (cause == null) { @@ -131,28 +132,34 @@ private static String buildMessage(String phaseName, String msg, ShardSearchFail } @Override - protected void innerToXContent(XContentBuilder builder, Params params) throws IOException { + protected void metadataToXContent(XContentBuilder builder, Params params) throws IOException { builder.field("phase", phaseName); final boolean group = params.paramAsBoolean("group_shard_failures", true); // we group by default builder.field("grouped", group); // notify that it's grouped builder.field("failed_shards"); builder.startArray(); - ShardOperationFailedException[] failures = params.paramAsBoolean("group_shard_failures", true) ? ExceptionsHelper.groupBy(shardFailures) : shardFailures; + ShardOperationFailedException[] failures = params.paramAsBoolean("group_shard_failures", true) ? 
+ ExceptionsHelper.groupBy(shardFailures) : shardFailures; for (ShardOperationFailedException failure : failures) { builder.startObject(); failure.toXContent(builder, params); builder.endObject(); } builder.endArray(); - super.innerToXContent(builder, params); } @Override - protected void causeToXContent(XContentBuilder builder, Params params) throws IOException { - if (super.getCause() != null) { - // if the cause is null we inject a guessed root cause that will then be rendered twice so wi disable it manually - super.causeToXContent(builder, params); + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + Throwable ex = ExceptionsHelper.unwrapCause(this); + if (ex != this) { + generateThrowableXContent(builder, params, this); + } else { + // We don't have a cause when all shards failed, but we do have shards failures so we can "guess" a cause + // (see {@link #getCause()}). Here, we use super.getCause() because we don't want the guessed exception to + // be rendered twice (one in the "cause" field, one in "failed_shards") + innerToXContent(builder, params, this, getExceptionName(), getMessage(), getHeaders(), getMetadata(), super.getCause()); } + return builder; } @Override diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchQueryAndFetchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/SearchQueryAndFetchAsyncAction.java deleted file mode 100644 index 2e13a0d26e870..0000000000000 --- a/core/src/main/java/org/elasticsearch/action/search/SearchQueryAndFetchAsyncAction.java +++ /dev/null @@ -1,82 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.action.search; - -import org.apache.logging.log4j.Logger; -import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.ActionRunnable; -import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.search.action.SearchTransportService; -import org.elasticsearch.search.controller.SearchPhaseController; -import org.elasticsearch.search.fetch.QueryFetchSearchResult; -import org.elasticsearch.search.internal.InternalSearchResponse; -import org.elasticsearch.search.internal.ShardSearchTransportRequest; -import org.elasticsearch.threadpool.ThreadPool; - -import java.io.IOException; - -class SearchQueryAndFetchAsyncAction extends AbstractSearchAsyncAction { - - SearchQueryAndFetchAsyncAction(Logger logger, SearchTransportService searchTransportService, - ClusterService clusterService, IndexNameExpressionResolver indexNameExpressionResolver, - SearchPhaseController searchPhaseController, ThreadPool threadPool, - SearchRequest request, ActionListener listener) { - super(logger, searchTransportService, clusterService, indexNameExpressionResolver, searchPhaseController, threadPool, - request, listener); - } - - @Override - protected String firstPhaseName() { - return "query_fetch"; - } - - @Override - protected void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportRequest request, - ActionListener listener) { - searchTransportService.sendExecuteFetch(node, request, listener); - } - - @Override - protected void moveToSecondPhase() throws Exception { - threadPool.executor(ThreadPool.Names.SEARCH).execute(new ActionRunnable(listener) { - @Override - public void doRun() throws IOException { - final boolean isScrollRequest = request.scroll() != null; - sortedShardDocs = searchPhaseController.sortDocs(isScrollRequest, firstResults); - final InternalSearchResponse internalResponse = searchPhaseController.merge(isScrollRequest, sortedShardDocs, firstResults, - firstResults); - String scrollId = isScrollRequest ? 
TransportSearchHelper.buildScrollId(request.searchType(), firstResults) : null; - listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(), - buildTookInMillis(), buildShardFailures())); - } - - @Override - public void onFailure(Exception e) { - ReduceSearchPhaseException failure = new ReduceSearchPhaseException("merge", "", e, buildShardFailures()); - if (logger.isDebugEnabled()) { - logger.debug("failed to reduce search", failure); - } - super.onFailure(failure); - } - }); - } -} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchQueryThenFetchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/SearchQueryThenFetchAsyncAction.java index 3987b48c5615a..de8109aadd8fe 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchQueryThenFetchAsyncAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchQueryThenFetchAsyncAction.java @@ -19,138 +19,41 @@ package org.elasticsearch.action.search; -import com.carrotsearch.hppc.IntArrayList; import org.apache.logging.log4j.Logger; -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; -import org.apache.lucene.search.ScoreDoc; import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.ActionRunnable; -import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.util.concurrent.AtomicArray; -import org.elasticsearch.search.SearchShardTarget; -import org.elasticsearch.search.action.SearchTransportService; -import org.elasticsearch.search.controller.SearchPhaseController; -import org.elasticsearch.search.fetch.FetchSearchResult; -import org.elasticsearch.search.fetch.ShardFetchSearchRequest; -import org.elasticsearch.search.internal.InternalSearchResponse; -import org.elasticsearch.search.internal.ShardSearchTransportRequest; -import org.elasticsearch.search.query.QuerySearchResultProvider; -import org.elasticsearch.threadpool.ThreadPool; - -import java.io.IOException; -import java.util.concurrent.atomic.AtomicInteger; - -class SearchQueryThenFetchAsyncAction extends AbstractSearchAsyncAction { - - final AtomicArray fetchResults; - final AtomicArray docIdsToLoad; - - SearchQueryThenFetchAsyncAction(Logger logger, SearchTransportService searchService, - ClusterService clusterService, IndexNameExpressionResolver indexNameExpressionResolver, - SearchPhaseController searchPhaseController, ThreadPool threadPool, - SearchRequest request, ActionListener listener) { - super(logger, searchService, clusterService, indexNameExpressionResolver, searchPhaseController, threadPool, request, listener); - fetchResults = new AtomicArray<>(firstResults.length()); - docIdsToLoad = new AtomicArray<>(firstResults.length()); +import org.elasticsearch.cluster.routing.GroupShardsIterator; +import org.elasticsearch.cluster.routing.ShardRouting; +import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.internal.AliasFilter; +import org.elasticsearch.transport.Transport; + +import java.util.Map; +import java.util.concurrent.Executor; +import java.util.function.BiFunction; + +final class SearchQueryThenFetchAsyncAction extends AbstractSearchAsyncAction { + + private final SearchPhaseController searchPhaseController; + + SearchQueryThenFetchAsyncAction(final Logger logger, final SearchTransportService 
searchTransportService, + final BiFunction nodeIdToConnection, final Map aliasFilter, + final Map concreteIndexBoosts, final SearchPhaseController searchPhaseController, final Executor executor, + final SearchRequest request, final ActionListener listener, + final GroupShardsIterator shardsIts, final TransportSearchAction.SearchTimeProvider timeProvider, + long clusterStateVersion, SearchTask task) { + super("query", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, executor, request, listener, + shardsIts, timeProvider, clusterStateVersion, task, searchPhaseController.newSearchPhaseResults(request, shardsIts.size())); + this.searchPhaseController = searchPhaseController; } - @Override - protected String firstPhaseName() { - return "query"; + protected void executePhaseOnShard(final SearchShardIterator shardIt, final ShardRouting shard, + final SearchActionListener listener) { + getSearchTransport().sendExecuteQuery(getConnection(shardIt.getClusterAlias(), shard.currentNodeId()), + buildShardSearchRequest(shardIt), getTask(), listener); } @Override - protected void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportRequest request, - ActionListener listener) { - searchTransportService.sendExecuteQuery(node, request, listener); - } - - @Override - protected void moveToSecondPhase() throws Exception { - final boolean isScrollRequest = request.scroll() != null; - sortedShardDocs = searchPhaseController.sortDocs(isScrollRequest, firstResults); - searchPhaseController.fillDocIdsToLoad(docIdsToLoad, sortedShardDocs); - - if (docIdsToLoad.asList().isEmpty()) { - finishHim(); - return; - } - - final ScoreDoc[] lastEmittedDocPerShard = isScrollRequest ? - searchPhaseController.getLastEmittedDocPerShard(firstResults.asList(), sortedShardDocs, firstResults.length()) : null; - final AtomicInteger counter = new AtomicInteger(docIdsToLoad.asList().size()); - for (AtomicArray.Entry entry : docIdsToLoad.asList()) { - QuerySearchResultProvider queryResult = firstResults.get(entry.index); - DiscoveryNode node = nodes.get(queryResult.shardTarget().nodeId()); - ShardFetchSearchRequest fetchSearchRequest = createFetchRequest(queryResult.queryResult(), entry, lastEmittedDocPerShard); - executeFetch(entry.index, queryResult.shardTarget(), counter, fetchSearchRequest, node); - } - } - - void executeFetch(final int shardIndex, final SearchShardTarget shardTarget, final AtomicInteger counter, - final ShardFetchSearchRequest fetchSearchRequest, DiscoveryNode node) { - searchTransportService.sendExecuteFetch(node, fetchSearchRequest, new ActionListener() { - @Override - public void onResponse(FetchSearchResult result) { - result.shardTarget(shardTarget); - fetchResults.set(shardIndex, result); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - - @Override - public void onFailure(Exception t) { - // the search context might not be cleared on the node where the fetch was executed for example - // because the action was rejected by the thread pool. in this case we need to send a dedicated - // request to clear the search context. by setting docIdsToLoad to null, the context will be cleared - // in TransportSearchTypeAction.releaseIrrelevantSearchContexts() after the search request is done. 
- docIdsToLoad.set(shardIndex, null); - onFetchFailure(t, fetchSearchRequest, shardIndex, shardTarget, counter); - } - }); - } - - void onFetchFailure(Exception e, ShardFetchSearchRequest fetchSearchRequest, int shardIndex, SearchShardTarget shardTarget, - AtomicInteger counter) { - if (logger.isDebugEnabled()) { - logger.debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute fetch phase", fetchSearchRequest.id()), e); - } - this.addShardFailure(shardIndex, shardTarget, e); - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - - private void finishHim() { - threadPool.executor(ThreadPool.Names.SEARCH).execute(new ActionRunnable(listener) { - @Override - public void doRun() throws IOException { - final boolean isScrollRequest = request.scroll() != null; - final InternalSearchResponse internalResponse = searchPhaseController.merge(isScrollRequest, sortedShardDocs, firstResults, - fetchResults); - String scrollId = isScrollRequest ? TransportSearchHelper.buildScrollId(request.searchType(), firstResults) : null; - listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, - successfulOps.get(), buildTookInMillis(), buildShardFailures())); - releaseIrrelevantSearchContexts(firstResults, docIdsToLoad); - } - - @Override - public void onFailure(Exception e) { - try { - ReduceSearchPhaseException failure = new ReduceSearchPhaseException("fetch", "", e, buildShardFailures()); - if (logger.isDebugEnabled()) { - logger.debug("failed to reduce search", failure); - } - super.onFailure(failure); - } finally { - releaseIrrelevantSearchContexts(firstResults, docIdsToLoad); - } - } - }); + protected SearchPhase getNextPhase(final SearchPhaseResults results, final SearchPhaseContext context) { + return new FetchSearchPhase(results, searchPhaseController, context); } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchRequest.java b/core/src/main/java/org/elasticsearch/action/search/SearchRequest.java index a1b1a02a97e5b..73a4a074637d5 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchRequest.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchRequest.java @@ -19,21 +19,25 @@ package org.elasticsearch.action.search; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequest; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.search.Scroll; import org.elasticsearch.search.builder.SearchSourceBuilder; +import org.elasticsearch.tasks.Task; +import org.elasticsearch.tasks.TaskId; import java.io.IOException; import java.util.Arrays; +import java.util.Collections; import java.util.Objects; /** @@ -43,11 +47,16 @@ * Note, the search {@link #source(org.elasticsearch.search.builder.SearchSourceBuilder)} * is required. The search source is the different search options, including aggregations and such. *

+ * * @see org.elasticsearch.client.Requests#searchRequest(String...) + * @see org.elasticsearch.client.Client#search(SearchRequest) + * @see SearchResponse */ -public final class SearchRequest extends ActionRequest implements IndicesRequest.Replaceable { +public final class SearchRequest extends ActionRequest implements IndicesRequest.Replaceable { + + private static final ToXContent.Params FORMAT_PARAMS = new ToXContent.MapParams(Collections.singletonMap("pretty", "false")); + + public static final int DEFAULT_PRE_FILTER_SHARD_SIZE = 128; private SearchType searchType = SearchType.DEFAULT; @@ -64,6 +73,12 @@ public final class SearchRequest extends ActionRequest implements private Scroll scroll; + private int batchedReduceSize = 512; + + private int maxConcurrentShardRequests = 0; + + private int preFilterShardSize = DEFAULT_PRE_FILTER_SHARD_SIZE; + private String[] types = Strings.EMPTY_ARRAY; public static final IndicesOptions DEFAULT_INDICES_OPTIONS = IndicesOptions.strictExpandOpenAndForbidClosed(); @@ -192,7 +207,7 @@ public SearchRequest searchType(SearchType searchType) { * "query_then_fetch"/"queryThenFetch", and "query_and_fetch"/"queryAndFetch". */ public SearchRequest searchType(String searchType) { - return searchType(SearchType.fromString(searchType, ParseFieldMatcher.EMPTY)); + return searchType(SearchType.fromString(searchType)); } /** @@ -268,6 +283,75 @@ public Boolean requestCache() { return this.requestCache; } + /** + * Sets the number of shard results that should be reduced at once on the coordinating node. This value should be used as a protection + * mechanism to reduce the memory overhead per search request if the potential number of shards in the request can be large. + */ + public void setBatchedReduceSize(int batchedReduceSize) { + if (batchedReduceSize <= 1) { + throw new IllegalArgumentException("batchedReduceSize must be >= 2"); + } + this.batchedReduceSize = batchedReduceSize; + } + + /** + * Returns the number of shard results that should be reduced at once on the coordinating node. This value should be used as a + * protection mechanism to reduce the memory overhead per search request if the potential number of shards in the request can be large. + */ + public int getBatchedReduceSize() { + return batchedReduceSize; + } + + /** + * Returns the number of shard requests that should be executed concurrently. This value should be used as a protection mechanism to + * reduce the number of shard requests fired per high level search request. Searches that hit the entire cluster can be throttled + * with this number to reduce the cluster load. The default grows with the number of nodes in the cluster but is at most 256. + */ + public int getMaxConcurrentShardRequests() { + return maxConcurrentShardRequests == 0 ? 256 : maxConcurrentShardRequests; + } + + /** + * Sets the number of shard requests that should be executed concurrently. This value should be used as a protection mechanism to + * reduce the number of shard requests fired per high level search request. Searches that hit the entire cluster can be throttled + * with this number to reduce the cluster load. The default grows with the number of nodes in the cluster but is at most 256.
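Editor's note: taken together, the new request-level knobs can be set directly on a `SearchRequest`, and out-of-range values are rejected up front by the setters defined in this class. A hedged usage fragment follows; the index pattern and numbers are illustrative only, and it assumes the varargs `SearchRequest(String...)` constructor.

```java
import org.elasticsearch.action.search.SearchRequest;

SearchRequest request = new SearchRequest("logs-*");       // hypothetical index pattern
request.setBatchedReduceSize(64);            // must be >= 2; batches the per-shard reduce on the coordinating node
request.setMaxConcurrentShardRequests(32);   // must be >= 1; left unset (0) it falls back to a default capped at 256
request.setPreFilterShardSize(256);          // must be >= 1; pre-filter roundtrip kicks in past this many shards (default 128)
// request.setBatchedReduceSize(1);          // would throw IllegalArgumentException("batchedReduceSize must be >= 2")
```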
+ */ + public void setMaxConcurrentShardRequests(int maxConcurrentShardRequests) { + if (maxConcurrentShardRequests < 1) { + throw new IllegalArgumentException("maxConcurrentShardRequests must be >= 1"); + } + this.maxConcurrentShardRequests = maxConcurrentShardRequests; + } + /** + * Sets a threshold that enforces a pre-filter roundtrip to pre-filter search shards based on query rewriting if the number of shards + * the search request expands to exceeds the threshold. This filter roundtrip can limit the number of shards significantly if for + * instance a shard can not match any documents based on it's rewrite method ie. if date filters are mandatory to match but the shard + * bounds and the query are disjoint. The default is 128 + */ + public void setPreFilterShardSize(int preFilterShardSize) { + if (preFilterShardSize < 1) { + throw new IllegalArgumentException("preFilterShardSize must be >= 1"); + } + this.preFilterShardSize = preFilterShardSize; + } + + /** + * Returns a threshold that enforces a pre-filter roundtrip to pre-filter search shards based on query rewriting if the number of shards + * the search request expands to exceeds the threshold. This filter roundtrip can limit the number of shards significantly if for + * instance a shard can not match any documents based on it's rewrite method ie. if date filters are mandatory to match but the shard + * bounds and the query are disjoint. The default is 128 + */ + public int getPreFilterShardSize() { + return preFilterShardSize; + } + + /** + * Returns true iff the maxConcurrentShardRequest is set. + */ + boolean isMaxConcurrentShardRequestsSet() { + return maxConcurrentShardRequests != 0; + } + /** * @return true if the request only has suggest */ @@ -275,6 +359,30 @@ public boolean isSuggestOnly() { return source != null && source.isSuggestOnly(); } + @Override + public Task createTask(long id, String type, String action, TaskId parentTaskId) { + // generating description in a lazy way since source can be quite big + return new SearchTask(id, type, action, null, parentTaskId) { + @Override + public String getDescription() { + StringBuilder sb = new StringBuilder(); + sb.append("indices["); + Strings.arrayToDelimitedString(indices, ",", sb); + sb.append("], "); + sb.append("types["); + Strings.arrayToDelimitedString(types, ",", sb); + sb.append("], "); + sb.append("search_type[").append(searchType).append("], "); + if (source != null) { + sb.append("source[").append(source.toString(FORMAT_PARAMS)).append("]"); + } else { + sb.append("source[]"); + } + return sb.toString(); + } + }; + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); @@ -290,6 +398,14 @@ public void readFrom(StreamInput in) throws IOException { types = in.readStringArray(); indicesOptions = IndicesOptions.readIndicesOptions(in); requestCache = in.readOptionalBoolean(); + if (in.getVersion().onOrAfter(Version.V_5_4_0)) { + batchedReduceSize = in.readVInt(); + if (in.getVersion().onOrAfter(Version.V_5_6_0)) { + maxConcurrentShardRequests = in.readVInt(); + preFilterShardSize = in.readVInt(); + } + } + } @Override @@ -307,6 +423,14 @@ public void writeTo(StreamOutput out) throws IOException { out.writeStringArray(types); indicesOptions.writeIndicesOptions(out); out.writeOptionalBoolean(requestCache); + if (out.getVersion().onOrAfter(Version.V_5_4_0)) { + out.writeVInt(batchedReduceSize); + if (out.getVersion().onOrAfter(Version.V_5_6_0)) { + out.writeVInt(maxConcurrentShardRequests); + out.writeVInt(preFilterShardSize); + } + 
} + } @Override @@ -326,13 +450,16 @@ public boolean equals(Object o) { Objects.equals(requestCache, that.requestCache) && Objects.equals(scroll, that.scroll) && Arrays.equals(types, that.types) && + Objects.equals(batchedReduceSize, that.batchedReduceSize) && + Objects.equals(maxConcurrentShardRequests, that.maxConcurrentShardRequests) && + Objects.equals(preFilterShardSize, that.preFilterShardSize) && Objects.equals(indicesOptions, that.indicesOptions); } @Override public int hashCode() { return Objects.hash(searchType, Arrays.hashCode(indices), routing, preference, source, requestCache, - scroll, Arrays.hashCode(types), indicesOptions); + scroll, Arrays.hashCode(types), indicesOptions, batchedReduceSize, maxConcurrentShardRequests, preFilterShardSize); } @Override @@ -346,6 +473,9 @@ public String toString() { ", preference='" + preference + '\'' + ", requestCache=" + requestCache + ", scroll=" + scroll + + ", maxConcurrentShardRequests=" + maxConcurrentShardRequests + + ", batchedReduceSize=" + batchedReduceSize + + ", preFilterShardSize=" + preFilterShardSize + ", source=" + source + '}'; } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java index 3c320447fe833..42241477493cb 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java @@ -26,13 +26,14 @@ import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.script.Script; +import org.elasticsearch.search.collapse.CollapseBuilder; import org.elasticsearch.search.Scroll; import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; -import org.elasticsearch.search.slice.SliceBuilder; import org.elasticsearch.search.builder.SearchSourceBuilder; import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; import org.elasticsearch.search.rescore.RescoreBuilder; +import org.elasticsearch.search.slice.SliceBuilder; import org.elasticsearch.search.sort.SortBuilder; import org.elasticsearch.search.sort.SortOrder; import org.elasticsearch.search.suggest.SuggestBuilder; @@ -503,6 +504,11 @@ public SearchRequestBuilder setProfile(boolean profile) { return this; } + public SearchRequestBuilder setCollapse(CollapseBuilder collapse) { + sourceBuilder().collapse(collapse); + return this; + } + @Override public String toString() { if (request.source() != null) { @@ -517,4 +523,34 @@ private SearchSourceBuilder sourceBuilder() { } return request.source(); } + + /** + * Sets the number of shard results that should be reduced at once on the coordinating node. This value should be used as a protection + * mechanism to reduce the memory overhead per search request if the potential number of shards in the request can be large. + */ + public SearchRequestBuilder setBatchedReduceSize(int batchedReduceSize) { + this.request.setBatchedReduceSize(batchedReduceSize); + return this; + } + + /** + * Sets the number of shard requests that should be executed concurrently. This value should be used as a protection mechanism to + * reduce the number of shard requests fired per high level search request. Searches that hit the entire cluster can be throttled + * with this number to reduce the cluster load. The default grows with the number of nodes in the cluster but is at most 256. 
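Editor's note: the same knobs are also exposed fluently here on `SearchRequestBuilder`, so they can be chained onto an ordinary client search. A sketch under the assumption that `client` is an `org.elasticsearch.client.Client` and `QueryBuilders` is `org.elasticsearch.index.query.QueryBuilders`; index name and values are illustrative only.

```java
SearchResponse response = client.prepareSearch("logs-*")
        .setQuery(QueryBuilders.matchAllQuery())
        .setBatchedReduceSize(64)
        .setMaxConcurrentShardRequests(32)
        .setPreFilterShardSize(256)
        .get();
```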
+ */ + public SearchRequestBuilder setMaxConcurrentShardRequests(int maxConcurrentShardRequests) { + this.request.setMaxConcurrentShardRequests(maxConcurrentShardRequests); + return this; + } + + /** + * Sets a threshold that enforces a pre-filter roundtrip to pre-filter search shards based on query rewriting if the number of shards + * the search request expands to exceeds the threshold. This filter roundtrip can limit the number of shards significantly if for + * instance a shard can not match any documents based on it's rewrite method ie. if date filters are mandatory to match but the shard + * bounds and the query are disjoint. The default is 128 + */ + public SearchRequestBuilder setPreFilterShardSize(int preFilterShardSize) { + this.request.setPreFilterShardSize(preFilterShardSize); + return this; + } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchResponse.java b/core/src/main/java/org/elasticsearch/action/search/SearchResponse.java index 3135d2c8f53b0..aace8c12460e7 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchResponse.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchResponse.java @@ -19,34 +19,47 @@ package org.elasticsearch.action.search; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionResponse; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.xcontent.StatusToXContent; +import org.elasticsearch.common.xcontent.StatusToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.rest.action.RestActions; import org.elasticsearch.search.SearchHits; import org.elasticsearch.search.aggregations.Aggregations; import org.elasticsearch.search.internal.InternalSearchResponse; import org.elasticsearch.search.profile.ProfileShardResult; +import org.elasticsearch.search.profile.SearchProfileShardResults; import org.elasticsearch.search.suggest.Suggest; import java.io.IOException; +import java.util.ArrayList; +import java.util.List; import java.util.Map; import static org.elasticsearch.action.search.ShardSearchFailure.readShardSearchFailure; -import static org.elasticsearch.search.internal.InternalSearchResponse.readInternalSearchResponse; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + /** * A response of a search request. 
*/ -public class SearchResponse extends ActionResponse implements StatusToXContent { +public class SearchResponse extends ActionResponse implements StatusToXContentObject { + + private static final ParseField SCROLL_ID = new ParseField("_scroll_id"); + private static final ParseField TOOK = new ParseField("took"); + private static final ParseField TIMED_OUT = new ParseField("timed_out"); + private static final ParseField TERMINATED_EARLY = new ParseField("terminated_early"); + private static final ParseField NUM_REDUCE_PHASES = new ParseField("num_reduce_phases"); - private InternalSearchResponse internalResponse; + private SearchResponseSections internalResponse; private String scrollId; @@ -54,6 +67,8 @@ public class SearchResponse extends ActionResponse implements StatusToXContent { private int successfulShards; + private int skippedShards; + private ShardSearchFailure[] shardFailures; private long tookInMillis; @@ -61,13 +76,16 @@ public class SearchResponse extends ActionResponse implements StatusToXContent { public SearchResponse() { } - public SearchResponse(InternalSearchResponse internalResponse, String scrollId, int totalShards, int successfulShards, long tookInMillis, ShardSearchFailure[] shardFailures) { + public SearchResponse(SearchResponseSections internalResponse, String scrollId, int totalShards, int successfulShards, + int skippedShards, long tookInMillis, ShardSearchFailure[] shardFailures) { this.internalResponse = internalResponse; this.scrollId = scrollId; this.totalShards = totalShards; this.successfulShards = successfulShards; + this.skippedShards = skippedShards; this.tookInMillis = tookInMillis; this.shardFailures = shardFailures; + assert skippedShards <= totalShards : "skipped: " + skippedShards + " total: " + totalShards; } @Override @@ -106,6 +124,13 @@ public Boolean isTerminatedEarly() { return internalResponse.terminatedEarly(); } + /** + * Returns the number of reduce phases applied to obtain this search response + */ + public int getNumReducePhases() { + return internalResponse.getNumReducePhases(); + } + /** * How long the search took. */ @@ -134,6 +159,14 @@ public int getSuccessfulShards() { return successfulShards; } + + /** + * The number of shards skipped due to pre-filtering + */ + public int getSkippedShards() { + return skippedShards; + } + /** * The failed number of shards the search was executed on. 
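Editor's note: on the response side, the additions above surface as `getSkippedShards()` (shards skipped by the pre-filter phase) and `getNumReducePhases()` (how many incremental reduces the coordinating node performed; it stays 1 unless batched reduction kicked in, and is only rendered in the JSON when it differs from 1). A short hedged fragment, assuming `response` is the `SearchResponse` from a search call such as the one sketched earlier.

```java
int skipped = response.getSkippedShards();        // 0 when talking to nodes that predate pre-filtering
int reducePhases = response.getNumReducePhases(); // 1 unless the batched reduce was used
if (reducePhases > 1) {
    // shard results were reduced incrementally on the coordinating node
}
```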
*/ @@ -168,36 +201,123 @@ public void scrollId(String scrollId) { * * @return The profile results or an empty map */ - @Nullable public Map getProfileResults() { + @Nullable + public Map getProfileResults() { return internalResponse.profile(); } - static final class Fields { - static final String _SCROLL_ID = "_scroll_id"; - static final String TOOK = "took"; - static final String TIMED_OUT = "timed_out"; - static final String TERMINATED_EARLY = "terminated_early"; - } - @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + innerToXContent(builder, params); + builder.endObject(); + return builder; + } + + public XContentBuilder innerToXContent(XContentBuilder builder, Params params) throws IOException { if (scrollId != null) { - builder.field(Fields._SCROLL_ID, scrollId); + builder.field(SCROLL_ID.getPreferredName(), scrollId); } - builder.field(Fields.TOOK, tookInMillis); - builder.field(Fields.TIMED_OUT, isTimedOut()); + builder.field(TOOK.getPreferredName(), tookInMillis); + builder.field(TIMED_OUT.getPreferredName(), isTimedOut()); if (isTerminatedEarly() != null) { - builder.field(Fields.TERMINATED_EARLY, isTerminatedEarly()); + builder.field(TERMINATED_EARLY.getPreferredName(), isTerminatedEarly()); + } + if (getNumReducePhases() != 1) { + builder.field(NUM_REDUCE_PHASES.getPreferredName(), getNumReducePhases()); } - RestActions.buildBroadcastShardsHeader(builder, params, getTotalShards(), getSuccessfulShards(), getFailedShards(), getShardFailures()); + RestActions.buildBroadcastShardsHeader(builder, params, getTotalShards(), getSuccessfulShards(), getSkippedShards(), + getFailedShards(), getShardFailures()); internalResponse.toXContent(builder, params); return builder; } + public static SearchResponse fromXContent(XContentParser parser) throws IOException { + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.nextToken(), parser::getTokenLocation); + XContentParser.Token token; + String currentFieldName = null; + SearchHits hits = null; + Aggregations aggs = null; + Suggest suggest = null; + SearchProfileShardResults profile = null; + boolean timedOut = false; + Boolean terminatedEarly = null; + int numReducePhases = 1; + long tookInMillis = -1; + int successfulShards = -1; + int totalShards = -1; + int skippedShards = 0; // 0 for BWC + String scrollId = null; + List failures = new ArrayList<>(); + while((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (SCROLL_ID.match(currentFieldName)) { + scrollId = parser.text(); + } else if (TOOK.match(currentFieldName)) { + tookInMillis = parser.longValue(); + } else if (TIMED_OUT.match(currentFieldName)) { + timedOut = parser.booleanValue(); + } else if (TERMINATED_EARLY.match(currentFieldName)) { + terminatedEarly = parser.booleanValue(); + } else if (NUM_REDUCE_PHASES.match(currentFieldName)) { + numReducePhases = parser.intValue(); + } else { + parser.skipChildren(); + } + } else if (token == XContentParser.Token.START_OBJECT) { + if (SearchHits.Fields.HITS.equals(currentFieldName)) { + hits = SearchHits.fromXContent(parser); + } else if (Aggregations.AGGREGATIONS_FIELD.equals(currentFieldName)) { + aggs = Aggregations.fromXContent(parser); + } else if (Suggest.NAME.equals(currentFieldName)) { + suggest = Suggest.fromXContent(parser); + } else if 
(SearchProfileShardResults.PROFILE_FIELD.equals(currentFieldName)) { + profile = SearchProfileShardResults.fromXContent(parser); + } else if (RestActions._SHARDS_FIELD.match(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (RestActions.FAILED_FIELD.match(currentFieldName)) { + parser.intValue(); // we don't need it but need to consume it + } else if (RestActions.SUCCESSFUL_FIELD.match(currentFieldName)) { + successfulShards = parser.intValue(); + } else if (RestActions.TOTAL_FIELD.match(currentFieldName)) { + totalShards = parser.intValue(); + } else if (RestActions.SKIPPED_FIELD.match(currentFieldName)) { + skippedShards = parser.intValue(); + } else { + parser.skipChildren(); + } + } else if (token == XContentParser.Token.START_ARRAY) { + if (RestActions.FAILURES_FIELD.match(currentFieldName)) { + while((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + failures.add(ShardSearchFailure.fromXContent(parser)); + } + } else { + parser.skipChildren(); + } + } else { + parser.skipChildren(); + } + } + } else { + parser.skipChildren(); + } + } + } + SearchResponseSections searchResponseSections = new SearchResponseSections(hits, aggs, suggest, timedOut, terminatedEarly, + profile, numReducePhases); + return new SearchResponse(searchResponseSections, scrollId, totalShards, successfulShards, skippedShards, tookInMillis, + failures.toArray(new ShardSearchFailure[failures.size()])); + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - internalResponse = readInternalSearchResponse(in); + internalResponse = new InternalSearchResponse(in); totalShards = in.readVInt(); successfulShards = in.readVInt(); int size = in.readVInt(); @@ -211,6 +331,9 @@ public void readFrom(StreamInput in) throws IOException { } scrollId = in.readOptionalString(); tookInMillis = in.readVLong(); + if (in.getVersion().onOrAfter(Version.V_5_6_0)) { + skippedShards = in.readVInt(); + } } @Override @@ -227,10 +350,14 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalString(scrollId); out.writeVLong(tookInMillis); + if(out.getVersion().onOrAfter(Version.V_5_6_0)) { + out.writeVInt(skippedShards); + } } @Override public String toString() { - return Strings.toString(this, true); + return Strings.toString(this); } + } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchResponseSections.java b/core/src/main/java/org/elasticsearch/action/search/SearchResponseSections.java new file mode 100644 index 0000000000000..1757acbfd6d93 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/SearchResponseSections.java @@ -0,0 +1,122 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. 
See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.search; + +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.search.SearchHits; +import org.elasticsearch.search.aggregations.Aggregations; +import org.elasticsearch.search.profile.ProfileShardResult; +import org.elasticsearch.search.profile.SearchProfileShardResults; +import org.elasticsearch.search.suggest.Suggest; + +import java.io.IOException; +import java.util.Collections; +import java.util.Map; + +/** + * Base class that holds the various sections which a search response is + * composed of (hits, aggs, suggestions etc.) and allows to retrieve them. + * + * The reason why this class exists is that the high level REST client uses its own classes + * to parse aggregations into, which are not serializable. This is the common part that can be + * shared between core and client. + */ +public class SearchResponseSections implements ToXContent { + + protected final SearchHits hits; + protected final Aggregations aggregations; + protected final Suggest suggest; + protected final SearchProfileShardResults profileResults; + protected final boolean timedOut; + protected final Boolean terminatedEarly; + protected final int numReducePhases; + + public SearchResponseSections(SearchHits hits, Aggregations aggregations, Suggest suggest, boolean timedOut, Boolean terminatedEarly, + SearchProfileShardResults profileResults, int numReducePhases) { + this.hits = hits; + this.aggregations = aggregations; + this.suggest = suggest; + this.profileResults = profileResults; + this.timedOut = timedOut; + this.terminatedEarly = terminatedEarly; + this.numReducePhases = numReducePhases; + } + + public final boolean timedOut() { + return this.timedOut; + } + + public final Boolean terminatedEarly() { + return this.terminatedEarly; + } + + public final SearchHits hits() { + return hits; + } + + public final Aggregations aggregations() { + return aggregations; + } + + public final Suggest suggest() { + return suggest; + } + + /** + * Returns the number of reduce phases applied to obtain this search response + */ + public final int getNumReducePhases() { + return numReducePhases; + } + + /** + * Returns the profile results for this search response (including all shards). 
+ * An empty map is returned if profiling was not enabled + * + * @return Profile results + */ + public final Map profile() { + if (profileResults == null) { + return Collections.emptyMap(); + } + return profileResults.getShardResults(); + } + + @Override + public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + hits.toXContent(builder, params); + if (aggregations != null) { + aggregations.toXContent(builder, params); + } + if (suggest != null) { + suggest.toXContent(builder, params); + } + if (profileResults != null) { + profileResults.toXContent(builder, params); + } + return builder; + } + + protected void writeTo(StreamOutput out) throws IOException { + throw new UnsupportedOperationException(); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchScrollAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/SearchScrollAsyncAction.java new file mode 100644 index 0000000000000..109a1f30ffc8e --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/SearchScrollAsyncAction.java @@ -0,0 +1,286 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.search; + +import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.logging.log4j.util.Supplier; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.node.DiscoveryNodes; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.common.util.concurrent.CountDown; +import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.SearchShardTarget; +import org.elasticsearch.search.internal.InternalScrollSearchRequest; +import org.elasticsearch.search.internal.InternalSearchResponse; +import org.elasticsearch.transport.RemoteClusterService; +import org.elasticsearch.transport.Transport; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.BiFunction; + +import static org.elasticsearch.action.search.TransportSearchHelper.internalScrollSearchRequest; + +/** + * Abstract base class for scroll execution modes. This class encapsulates the basic logic to + * fan out to nodes and execute the query part of the scroll request. Subclasses can for instance + * run separate fetch phases etc. 
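Editor's note: the contract described in the javadoc below boils down to a fan-out/count-down template: fire one asynchronous request per scroll context, record each per-shard result as it arrives, and move to the next phase only once every shard has either answered or failed. The sketch that follows shows that shape in plain Java; none of these names are the actual Elasticsearch types, and `CompletableFuture` merely stands in for the transport layer.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntFunction;

// Sketch of the fan-out / count-down shape used by the scroll actions: one async
// request per shard context, results recorded as they arrive, next phase only
// once the pending counter reaches zero.
final class FanOutSketch {
    private final AtomicInteger pending;
    private final Map<Integer, String> results = new ConcurrentHashMap<>();

    FanOutSketch(int shardCount) {
        this.pending = new AtomicInteger(shardCount);
    }

    void run(IntFunction<CompletableFuture<String>> executePhaseOnShard, Runnable nextPhase) {
        final int shardCount = pending.get();
        for (int i = 0; i < shardCount; i++) {
            final int shardIndex = i;
            executePhaseOnShard.apply(shardIndex).whenComplete((result, failure) -> {
                if (failure == null) {
                    results.put(shardIndex, result);   // analogous to onFirstPhaseResult(shardIndex, result)
                }                                       // a real implementation would record a shard failure here
                if (pending.decrementAndGet() == 0) {
                    nextPhase.run();                    // analogous to moveToNextPhase(...)
                }
            });
        }
    }

    public static void main(String[] args) {
        FanOutSketch action = new FanOutSketch(3);
        action.run(
            shard -> CompletableFuture.completedFuture("result-from-shard-" + shard),
            () -> System.out.println("all shards answered; running the fetch/merge phase"));
    }
}
```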
+ */ +abstract class SearchScrollAsyncAction implements Runnable { + /* + * Some random TODO: + * Today we still have a dedicated execution mode for scrolls while we could simplify this by implementing + * scroll-like functionality (mainly syntactic sugar) as an ordinary search with search_after. We could even go further and + * make the scroll entirely stateless and encode the state per shard in the scroll ID. + * + * Today we also hold a context per shard but maybe + * we want the context per coordinating node such that we route the scroll to the same coordinator all the time and hold the context + * here? This would have the advantage that if we lose that node the entire scroll is dead, not just one shard. + * + * Additionally there is the possibility to associate the scroll with a seq. id. such that we can talk to any replica as long as + * the shard's engine hasn't advanced that seq. id yet. Such a resume is possible and best effort; it could even be a safety net since + * if you rely on indices being read-only things can change in-between without notification, and it's hard to detect if there were any + * changes while scrolling. These are all options to improve the current situation, which we can look into down the road. + */ + protected final Logger logger; + protected final ActionListener listener; + protected final ParsedScrollId scrollId; + protected final DiscoveryNodes nodes; + protected final SearchPhaseController searchPhaseController; + protected final SearchScrollRequest request; + protected final SearchTransportService searchTransportService; + private final long startTime; + private final List shardFailures = new ArrayList<>(); + private final AtomicInteger successfulOps; + + protected SearchScrollAsyncAction(ParsedScrollId scrollId, Logger logger, DiscoveryNodes nodes, + ActionListener listener, SearchPhaseController searchPhaseController, + SearchScrollRequest request, + SearchTransportService searchTransportService) { + this.startTime = System.currentTimeMillis(); + this.scrollId = scrollId; + this.successfulOps = new AtomicInteger(scrollId.getContext().length); + this.logger = logger; + this.listener = listener; + this.nodes = nodes; + this.searchPhaseController = searchPhaseController; + this.request = request; + this.searchTransportService = searchTransportService; + } + + /** + * Builds how long it took to execute the search. + */ + private long buildTookInMillis() { + // protect ourselves against time going backwards + // negative values don't make sense and we want to be able to serialize that thing as a vLong + return Math.max(1, System.currentTimeMillis() - startTime); + } + + public final void run() { + final ScrollIdForNode[] context = scrollId.getContext(); + if (context.length == 0) { + listener.onFailure(new SearchPhaseExecutionException("query", "no nodes to search on", ShardSearchFailure.EMPTY_ARRAY)); + } else { + collectNodesAndRun(Arrays.asList(context), nodes, searchTransportService, ActionListener.wrap(lookup -> run(lookup, context), + listener::onFailure)); + } + } + + /** + * This method collects nodes from the remote clusters asynchronously if any of the scroll IDs references a remote cluster. + * Otherwise the action listener will be invoked immediately with a function based on the given discovery nodes.
+ */ + static void collectNodesAndRun(final Iterable scrollIds, DiscoveryNodes nodes, + SearchTransportService searchTransportService, + ActionListener> listener) { + Set clusters = new HashSet<>(); + for (ScrollIdForNode target : scrollIds) { + if (target.getClusterAlias() != null) { + clusters.add(target.getClusterAlias()); + } + } + if (clusters.isEmpty()) { // no remote clusters + listener.onResponse((cluster, node) -> nodes.get(node)); + } else { + RemoteClusterService remoteClusterService = searchTransportService.getRemoteClusterService(); + remoteClusterService.collectNodes(clusters, ActionListener.wrap(nodeFunction -> { + final BiFunction clusterNodeLookup = (clusterAlias, node) -> { + if (clusterAlias == null) { + return nodes.get(node); + } else { + return nodeFunction.apply(clusterAlias, node); + } + }; + listener.onResponse(clusterNodeLookup); + }, listener::onFailure)); + } + } + + private void run(BiFunction clusterNodeLookup, final ScrollIdForNode[] context) { + final CountDown counter = new CountDown(scrollId.getContext().length); + for (int i = 0; i < context.length; i++) { + ScrollIdForNode target = context[i]; + final int shardIndex = i; + final Transport.Connection connection; + try { + DiscoveryNode node = clusterNodeLookup.apply(target.getClusterAlias(), target.getNode()); + if (node == null) { + throw new IllegalStateException("node [" + target.getNode() + "] is not available"); + } + connection = getConnection(target.getClusterAlias(), node); + } catch (Exception ex) { + onShardFailure("query", counter, target.getScrollId(), + ex, null, () -> SearchScrollAsyncAction.this.moveToNextPhase(clusterNodeLookup)); + continue; + } + final InternalScrollSearchRequest internalRequest = internalScrollSearchRequest(target.getScrollId(), request); + // we can't create a SearchShardTarget here since we don't know the index and shard ID we are talking to + // we only know the node and the search context ID. Yet, the response will contain the SearchShardTarget + // from the target node instead...that's why we pass null here + SearchActionListener searchActionListener = new SearchActionListener(null, shardIndex) { + + @Override + protected void setSearchShardTarget(T response) { + // don't do this - it's part of the response... + assert response.getSearchShardTarget() != null : "search shard target must not be null"; + if (target.getClusterAlias() != null) { + // re-create the search target and add the cluster alias if there is any, + // we need this down the road for subseq. 
phases + SearchShardTarget searchShardTarget = response.getSearchShardTarget(); + response.setSearchShardTarget(new SearchShardTarget(searchShardTarget.getNodeId(), searchShardTarget.getShardId(), + target.getClusterAlias(), null)); + } + } + + @Override + protected void innerOnResponse(T result) { + assert shardIndex == result.getShardIndex() : "shard index mismatch: " + shardIndex + " but got: " + + result.getShardIndex(); + onFirstPhaseResult(shardIndex, result); + if (counter.countDown()) { + SearchPhase phase = moveToNextPhase(clusterNodeLookup); + try { + phase.run(); + } catch (Exception e) { + // we need to fail the entire request here - the entire phase just blew up + // don't call onShardFailure or onFailure here since otherwise we'd countDown the counter + // again which would result in an exception + listener.onFailure(new SearchPhaseExecutionException(phase.getName(), "Phase failed", e, + ShardSearchFailure.EMPTY_ARRAY)); + } + } + } + + @Override + public void onFailure(Exception t) { + onShardFailure("query", counter, target.getScrollId(), t, null, + () -> SearchScrollAsyncAction.this.moveToNextPhase(clusterNodeLookup)); + } + }; + executeInitialPhase(connection, internalRequest, searchActionListener); + } + } + + synchronized ShardSearchFailure[] buildShardFailures() { // pkg private for testing + if (shardFailures.isEmpty()) { + return ShardSearchFailure.EMPTY_ARRAY; + } + return shardFailures.toArray(new ShardSearchFailure[shardFailures.size()]); + } + + // we do our best to return the shard failures, but its ok if its not fully concurrently safe + // we simply try and return as much as possible + private synchronized void addShardFailure(ShardSearchFailure failure) { + shardFailures.add(failure); + } + + protected abstract void executeInitialPhase(Transport.Connection connection, InternalScrollSearchRequest internalRequest, + SearchActionListener searchActionListener); + + protected abstract SearchPhase moveToNextPhase(BiFunction clusterNodeLookup); + + protected abstract void onFirstPhaseResult(int shardId, T result); + + protected SearchPhase sendResponsePhase(SearchPhaseController.ReducedQueryPhase queryPhase, + final AtomicArray fetchResults) { + return new SearchPhase("fetch") { + @Override + public void run() throws IOException { + sendResponse(queryPhase, fetchResults); + } + }; + } + + protected final void sendResponse(SearchPhaseController.ReducedQueryPhase queryPhase, + final AtomicArray fetchResults) { + try { + final InternalSearchResponse internalResponse = searchPhaseController.merge(true, queryPhase, fetchResults.asList(), + fetchResults::get); + // the scroll ID never changes we always return the same ID. This ID contains all the shards and their context ids + // such that we can talk to them abgain in the next roundtrip. 
+ String scrollId = null; + if (request.scroll() != null) { + scrollId = request.scrollId(); + } + listener.onResponse(new SearchResponse(internalResponse, scrollId, this.scrollId.getContext().length, successfulOps.get(), + 0, buildTookInMillis(), buildShardFailures())); + } catch (Exception e) { + listener.onFailure(new ReduceSearchPhaseException("fetch", "inner finish failed", e, buildShardFailures())); + } + } + + protected void onShardFailure(String phaseName, final CountDown counter, final long searchId, Exception failure, + @Nullable SearchShardTarget searchShardTarget, + Supplier nextPhaseSupplier) { + if (logger.isDebugEnabled()) { + logger.debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute {} phase", searchId, phaseName), failure); + } + addShardFailure(new ShardSearchFailure(failure, searchShardTarget)); + int successfulOperations = successfulOps.decrementAndGet(); + assert successfulOperations >= 0 : "successfulOperations must be >= 0 but was: " + successfulOperations; + if (counter.countDown()) { + if (successfulOps.get() == 0) { + listener.onFailure(new SearchPhaseExecutionException(phaseName, "all shards failed", failure, buildShardFailures())); + } else { + SearchPhase phase = nextPhaseSupplier.get(); + try { + phase.run(); + } catch (Exception e) { + e.addSuppressed(failure); + listener.onFailure(new SearchPhaseExecutionException(phase.getName(), "Phase failed", e, + ShardSearchFailure.EMPTY_ARRAY)); + } + } + } + } + + protected Transport.Connection getConnection(String clusterAlias, DiscoveryNode node) { + return searchTransportService.getConnection(clusterAlias, node); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryAndFetchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryAndFetchAsyncAction.java index 24e497954a7ab..7f36d71ae256b 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryAndFetchAsyncAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryAndFetchAsyncAction.java @@ -20,164 +20,43 @@ package org.elasticsearch.action.search; import org.apache.logging.log4j.Logger; -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; -import org.apache.lucene.search.ScoreDoc; import org.elasticsearch.action.ActionListener; import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.util.concurrent.AtomicArray; -import org.elasticsearch.search.action.SearchTransportService; -import org.elasticsearch.search.controller.SearchPhaseController; import org.elasticsearch.search.fetch.QueryFetchSearchResult; import org.elasticsearch.search.fetch.ScrollQueryFetchSearchResult; import org.elasticsearch.search.internal.InternalScrollSearchRequest; -import org.elasticsearch.search.internal.InternalSearchResponse; +import org.elasticsearch.transport.Transport; -import java.util.List; -import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.BiFunction; -import static org.elasticsearch.action.search.TransportSearchHelper.internalScrollSearchRequest; +final class SearchScrollQueryAndFetchAsyncAction extends SearchScrollAsyncAction { -class SearchScrollQueryAndFetchAsyncAction extends AbstractAsyncAction { - - private final Logger logger; - private final SearchPhaseController searchPhaseController; - private final 
SearchTransportService searchTransportService; - private final SearchScrollRequest request; - private final ActionListener listener; - private final ParsedScrollId scrollId; - private final DiscoveryNodes nodes; - private volatile AtomicArray shardFailures; + private final SearchTask task; private final AtomicArray queryFetchResults; - private final AtomicInteger successfulOps; - private final AtomicInteger counter; - - SearchScrollQueryAndFetchAsyncAction(Logger logger, ClusterService clusterService, - SearchTransportService searchTransportService, SearchPhaseController searchPhaseController, - SearchScrollRequest request, ParsedScrollId scrollId, ActionListener listener) { - this.logger = logger; - this.searchPhaseController = searchPhaseController; - this.searchTransportService = searchTransportService; - this.request = request; - this.listener = listener; - this.scrollId = scrollId; - this.nodes = clusterService.state().nodes(); - this.successfulOps = new AtomicInteger(scrollId.getContext().length); - this.counter = new AtomicInteger(scrollId.getContext().length); + SearchScrollQueryAndFetchAsyncAction(Logger logger, ClusterService clusterService, SearchTransportService searchTransportService, + SearchPhaseController searchPhaseController, SearchScrollRequest request, SearchTask task, + ParsedScrollId scrollId, ActionListener listener) { + super(scrollId, logger, clusterService.state().nodes(), listener, searchPhaseController, request, searchTransportService); + this.task = task; this.queryFetchResults = new AtomicArray<>(scrollId.getContext().length); } - protected final ShardSearchFailure[] buildShardFailures() { - if (shardFailures == null) { - return ShardSearchFailure.EMPTY_ARRAY; - } - List> entries = shardFailures.asList(); - ShardSearchFailure[] failures = new ShardSearchFailure[entries.size()]; - for (int i = 0; i < failures.length; i++) { - failures[i] = entries.get(i).value; - } - return failures; - } - - // we do our best to return the shard failures, but its ok if its not fully concurrently safe - // we simply try and return as much as possible - protected final void addShardFailure(final int shardIndex, ShardSearchFailure failure) { - if (shardFailures == null) { - shardFailures = new AtomicArray<>(scrollId.getContext().length); - } - shardFailures.set(shardIndex, failure); - } - - public void start() { - if (scrollId.getContext().length == 0) { - listener.onFailure(new SearchPhaseExecutionException("query", "no nodes to search on", ShardSearchFailure.EMPTY_ARRAY)); - return; - } - - ScrollIdForNode[] context = scrollId.getContext(); - for (int i = 0; i < context.length; i++) { - ScrollIdForNode target = context[i]; - DiscoveryNode node = nodes.get(target.getNode()); - if (node != null) { - executePhase(i, node, target.getScrollId()); - } else { - if (logger.isDebugEnabled()) { - logger.debug("Node [{}] not available for scroll request [{}]", target.getNode(), scrollId.getSource()); - } - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - } - - for (ScrollIdForNode target : scrollId.getContext()) { - DiscoveryNode node = nodes.get(target.getNode()); - if (node == null) { - if (logger.isDebugEnabled()) { - logger.debug("Node [{}] not available for scroll request [{}]", target.getNode(), scrollId.getSource()); - } - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - } - } - - void executePhase(final int shardIndex, DiscoveryNode node, final long searchId) { - 
InternalScrollSearchRequest internalRequest = internalScrollSearchRequest(searchId, request); - searchTransportService.sendExecuteFetch(node, internalRequest, new ActionListener() { - @Override - public void onResponse(ScrollQueryFetchSearchResult result) { - queryFetchResults.set(shardIndex, result.result()); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - - @Override - public void onFailure(Exception t) { - onPhaseFailure(t, searchId, shardIndex); - } - }); - } - - private void onPhaseFailure(Exception e, long searchId, int shardIndex) { - if (logger.isDebugEnabled()) { - logger.debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute query phase", searchId), e); - } - addShardFailure(shardIndex, new ShardSearchFailure(e)); - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - if (successfulOps.get() == 0) { - listener.onFailure(new SearchPhaseExecutionException("query_fetch", "all shards failed", e, buildShardFailures())); - } else { - finishHim(); - } - } + @Override + protected void executeInitialPhase(Transport.Connection connection, InternalScrollSearchRequest internalRequest, + SearchActionListener searchActionListener) { + searchTransportService.sendExecuteScrollFetch(connection, internalRequest, task, searchActionListener); } - private void finishHim() { - try { - innerFinishHim(); - } catch (Exception e) { - listener.onFailure(new ReduceSearchPhaseException("fetch", "", e, buildShardFailures())); - } + @Override + protected SearchPhase moveToNextPhase(BiFunction clusterNodeLookup) { + return sendResponsePhase(searchPhaseController.reducedQueryPhase(queryFetchResults.asList(), true), queryFetchResults); } - private void innerFinishHim() throws Exception { - ScoreDoc[] sortedShardDocs = searchPhaseController.sortDocs(true, queryFetchResults); - final InternalSearchResponse internalResponse = searchPhaseController.merge(true, sortedShardDocs, queryFetchResults, - queryFetchResults); - String scrollId = null; - if (request.scroll() != null) { - scrollId = request.scrollId(); - } - listener.onResponse(new SearchResponse(internalResponse, scrollId, this.scrollId.getContext().length, successfulOps.get(), - buildTookInMillis(), buildShardFailures())); + @Override + protected void onFirstPhaseResult(int shardId, ScrollQueryFetchSearchResult result) { + queryFetchResults.setOnce(shardId, result.result()); } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryThenFetchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryThenFetchAsyncAction.java index 21f1c4ce68a34..7363ca41c20ad 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryThenFetchAsyncAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryThenFetchAsyncAction.java @@ -21,210 +21,104 @@ import com.carrotsearch.hppc.IntArrayList; import org.apache.logging.log4j.Logger; -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; import org.apache.lucene.search.ScoreDoc; import org.elasticsearch.action.ActionListener; import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.util.concurrent.AtomicArray; -import org.elasticsearch.search.action.SearchTransportService; -import org.elasticsearch.search.controller.SearchPhaseController; +import 
org.elasticsearch.common.util.concurrent.CountDown; +import org.elasticsearch.search.SearchShardTarget; import org.elasticsearch.search.fetch.FetchSearchResult; import org.elasticsearch.search.fetch.ShardFetchRequest; import org.elasticsearch.search.internal.InternalScrollSearchRequest; -import org.elasticsearch.search.internal.InternalSearchResponse; import org.elasticsearch.search.query.QuerySearchResult; import org.elasticsearch.search.query.ScrollQuerySearchResult; +import org.elasticsearch.transport.Transport; -import java.util.List; -import java.util.concurrent.atomic.AtomicInteger; +import java.io.IOException; +import java.util.function.BiFunction; -import static org.elasticsearch.action.search.TransportSearchHelper.internalScrollSearchRequest; +final class SearchScrollQueryThenFetchAsyncAction extends SearchScrollAsyncAction { -class SearchScrollQueryThenFetchAsyncAction extends AbstractAsyncAction { + private final SearchTask task; + private final AtomicArray fetchResults; + private final AtomicArray queryResults; - private final Logger logger; - private final SearchTransportService searchTransportService; - private final SearchPhaseController searchPhaseController; - private final SearchScrollRequest request; - private final ActionListener listener; - private final ParsedScrollId scrollId; - private final DiscoveryNodes nodes; - private volatile AtomicArray shardFailures; - final AtomicArray queryResults; - final AtomicArray fetchResults; - private volatile ScoreDoc[] sortedShardDocs; - private final AtomicInteger successfulOps; - - SearchScrollQueryThenFetchAsyncAction(Logger logger, ClusterService clusterService, - SearchTransportService searchTransportService, SearchPhaseController searchPhaseController, - SearchScrollRequest request, ParsedScrollId scrollId, ActionListener listener) { - this.logger = logger; - this.searchTransportService = searchTransportService; - this.searchPhaseController = searchPhaseController; - this.request = request; - this.listener = listener; - this.scrollId = scrollId; - this.nodes = clusterService.state().nodes(); - this.successfulOps = new AtomicInteger(scrollId.getContext().length); - this.queryResults = new AtomicArray<>(scrollId.getContext().length); + SearchScrollQueryThenFetchAsyncAction(Logger logger, ClusterService clusterService, SearchTransportService searchTransportService, + SearchPhaseController searchPhaseController, SearchScrollRequest request, SearchTask task, + ParsedScrollId scrollId, ActionListener listener) { + super(scrollId, logger, clusterService.state().nodes(), listener, searchPhaseController, request, + searchTransportService); + this.task = task; this.fetchResults = new AtomicArray<>(scrollId.getContext().length); + this.queryResults = new AtomicArray<>(scrollId.getContext().length); } - protected final ShardSearchFailure[] buildShardFailures() { - if (shardFailures == null) { - return ShardSearchFailure.EMPTY_ARRAY; - } - List> entries = shardFailures.asList(); - ShardSearchFailure[] failures = new ShardSearchFailure[entries.size()]; - for (int i = 0; i < failures.length; i++) { - failures[i] = entries.get(i).value; - } - return failures; - } - - // we do our best to return the shard failures, but its ok if its not fully concurrently safe - // we simply try and return as much as possible - protected final void addShardFailure(final int shardIndex, ShardSearchFailure failure) { - if (shardFailures == null) { - shardFailures = new AtomicArray<>(scrollId.getContext().length); - } - shardFailures.set(shardIndex, failure); + 
protected void onFirstPhaseResult(int shardId, ScrollQuerySearchResult result) { + queryResults.setOnce(shardId, result.queryResult()); } - public void start() { - if (scrollId.getContext().length == 0) { - listener.onFailure(new SearchPhaseExecutionException("query", "no nodes to search on", ShardSearchFailure.EMPTY_ARRAY)); - return; - } - final AtomicInteger counter = new AtomicInteger(scrollId.getContext().length); - - ScrollIdForNode[] context = scrollId.getContext(); - for (int i = 0; i < context.length; i++) { - ScrollIdForNode target = context[i]; - DiscoveryNode node = nodes.get(target.getNode()); - if (node != null) { - executeQueryPhase(i, counter, node, target.getScrollId()); - } else { - if (logger.isDebugEnabled()) { - logger.debug("Node [{}] not available for scroll request [{}]", target.getNode(), scrollId.getSource()); - } - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - try { - executeFetchPhase(); - } catch (Exception e) { - listener.onFailure(new SearchPhaseExecutionException("query", "Fetch failed", e, ShardSearchFailure.EMPTY_ARRAY)); - return; - } - } - } - } + @Override + protected void executeInitialPhase(Transport.Connection connection, InternalScrollSearchRequest internalRequest, + SearchActionListener searchActionListener) { + searchTransportService.sendExecuteScrollQuery(connection, internalRequest, task, searchActionListener); } - private void executeQueryPhase(final int shardIndex, final AtomicInteger counter, DiscoveryNode node, final long searchId) { - InternalScrollSearchRequest internalRequest = internalScrollSearchRequest(searchId, request); - searchTransportService.sendExecuteQuery(node, internalRequest, new ActionListener() { - @Override - public void onResponse(ScrollQuerySearchResult result) { - queryResults.set(shardIndex, result.queryResult()); - if (counter.decrementAndGet() == 0) { - try { - executeFetchPhase(); - } catch (Exception e) { - onFailure(e); - } - } - } - + @Override + protected SearchPhase moveToNextPhase(BiFunction clusterNodeLookup) { + return new SearchPhase("fetch") { @Override - public void onFailure(Exception t) { - onQueryPhaseFailure(shardIndex, counter, searchId, t); - } - }); - } - - void onQueryPhaseFailure(final int shardIndex, final AtomicInteger counter, final long searchId, Exception failure) { - if (logger.isDebugEnabled()) { - logger.debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute query phase", searchId), failure); - } - addShardFailure(shardIndex, new ShardSearchFailure(failure)); - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - if (successfulOps.get() == 0) { - listener.onFailure(new SearchPhaseExecutionException("query", "all shards failed", failure, buildShardFailures())); - } else { - try { - executeFetchPhase(); - } catch (Exception e) { - e.addSuppressed(failure); - listener.onFailure(new SearchPhaseExecutionException("query", "Fetch failed", e, ShardSearchFailure.EMPTY_ARRAY)); - } - } - } - } - - private void executeFetchPhase() throws Exception { - sortedShardDocs = searchPhaseController.sortDocs(true, queryResults); - AtomicArray docIdsToLoad = new AtomicArray<>(queryResults.length()); - searchPhaseController.fillDocIdsToLoad(docIdsToLoad, sortedShardDocs); - - if (docIdsToLoad.asList().isEmpty()) { - finishHim(); - return; - } - - - final ScoreDoc[] lastEmittedDocPerShard = searchPhaseController.getLastEmittedDocPerShard(queryResults.asList(), - sortedShardDocs, queryResults.length()); - final AtomicInteger counter = new 
AtomicInteger(docIdsToLoad.asList().size()); - for (final AtomicArray.Entry entry : docIdsToLoad.asList()) { - IntArrayList docIds = entry.value; - final QuerySearchResult querySearchResult = queryResults.get(entry.index); - ScoreDoc lastEmittedDoc = lastEmittedDocPerShard[entry.index]; - ShardFetchRequest shardFetchRequest = new ShardFetchRequest(querySearchResult.id(), docIds, lastEmittedDoc); - DiscoveryNode node = nodes.get(querySearchResult.shardTarget().nodeId()); - searchTransportService.sendExecuteFetchScroll(node, shardFetchRequest, new ActionListener() { - @Override - public void onResponse(FetchSearchResult result) { - result.shardTarget(querySearchResult.shardTarget()); - fetchResults.set(entry.index, result); - if (counter.decrementAndGet() == 0) { - finishHim(); - } + public void run() throws IOException { + final SearchPhaseController.ReducedQueryPhase reducedQueryPhase = searchPhaseController.reducedQueryPhase( + queryResults.asList(), true); + if (reducedQueryPhase.scoreDocs.length == 0) { + sendResponse(reducedQueryPhase, fetchResults); + return; } - @Override - public void onFailure(Exception t) { - if (logger.isDebugEnabled()) { - logger.debug("Failed to execute fetch phase", t); - } - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - finishHim(); + final IntArrayList[] docIdsToLoad = searchPhaseController.fillDocIdsToLoad(queryResults.length(), + reducedQueryPhase.scoreDocs); + final ScoreDoc[] lastEmittedDocPerShard = searchPhaseController.getLastEmittedDocPerShard(reducedQueryPhase, + queryResults.length()); + final CountDown counter = new CountDown(docIdsToLoad.length); + for (int i = 0; i < docIdsToLoad.length; i++) { + final int index = i; + final IntArrayList docIds = docIdsToLoad[index]; + if (docIds != null) { + final QuerySearchResult querySearchResult = queryResults.get(index); + ScoreDoc lastEmittedDoc = lastEmittedDocPerShard[index]; + ShardFetchRequest shardFetchRequest = new ShardFetchRequest(querySearchResult.getRequestId(), docIds, + lastEmittedDoc); + SearchShardTarget searchShardTarget = querySearchResult.getSearchShardTarget(); + DiscoveryNode node = clusterNodeLookup.apply(searchShardTarget.getClusterAlias(), searchShardTarget.getNodeId()); + assert node != null : "target node is null in secondary phase"; + Transport.Connection connection = getConnection(searchShardTarget.getClusterAlias(), node); + searchTransportService.sendExecuteFetchScroll(connection, shardFetchRequest, task, + new SearchActionListener(querySearchResult.getSearchShardTarget(), index) { + @Override + protected void innerOnResponse(FetchSearchResult response) { + fetchResults.setOnce(response.getShardIndex(), response); + if (counter.countDown()) { + sendResponse(reducedQueryPhase, fetchResults); + } + } + + @Override + public void onFailure(Exception t) { + onShardFailure(getName(), counter, querySearchResult.getRequestId(), + t, querySearchResult.getSearchShardTarget(), + () -> sendResponsePhase(reducedQueryPhase, fetchResults)); + } + }); + } else { + // the counter is set to the total size of docIdsToLoad + // which can have null values so we have to count them down too + if (counter.countDown()) { + sendResponse(reducedQueryPhase, fetchResults); + } } } - }); - } - } - - private void finishHim() { - try { - innerFinishHim(); - } catch (Exception e) { - listener.onFailure(new ReduceSearchPhaseException("fetch", "inner finish failed", e, buildShardFailures())); - } - } - - private void innerFinishHim() { - InternalSearchResponse internalResponse = 
searchPhaseController.merge(true, sortedShardDocs, queryResults, fetchResults); - String scrollId = null; - if (request.scroll() != null) { - scrollId = request.scrollId(); - } - listener.onResponse(new SearchResponse(internalResponse, scrollId, this.scrollId.getContext().length, successfulOps.get(), - buildTookInMillis(), buildShardFailures())); + } + }; } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchScrollRequest.java b/core/src/main/java/org/elasticsearch/action/search/SearchScrollRequest.java index 9ab2a4cf5605a..7bd6677abbd89 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchScrollRequest.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchScrollRequest.java @@ -24,18 +24,19 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.ToXContentObject; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.search.Scroll; +import org.elasticsearch.tasks.Task; +import org.elasticsearch.tasks.TaskId; import java.io.IOException; import java.util.Objects; import static org.elasticsearch.action.ValidateActions.addValidationError; -/** - * - */ -public class SearchScrollRequest extends ActionRequest { - +public class SearchScrollRequest extends ActionRequest implements ToXContentObject { private String scrollId; private Scroll scroll; @@ -110,6 +111,11 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalWriteable(scroll); } + @Override + public Task createTask(long id, String type, String action, TaskId parentTaskId) { + return new SearchTask(id, type, action, getDescription(), parentTaskId); + } + @Override public boolean equals(Object o) { if (this == o) { @@ -135,4 +141,45 @@ public String toString() { ", scroll=" + scroll + '}'; } + + @Override + public String getDescription() { + return "scrollId[" + scrollId + "], scroll[" + scroll + "]"; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field("scroll_id", scrollId); + if (scroll != null) { + builder.field("scroll", scroll.keepAlive().getStringRep()); + } + builder.endObject(); + return builder; + } + + /** + * Parse a search scroll request from a request body provided through the REST layer. + * Values that are already be set and are also found while parsing will be overridden. 
+ */ + public void fromXContent(XContentParser parser) throws IOException { + if (parser.nextToken() != XContentParser.Token.START_OBJECT) { + throw new IllegalArgumentException("Malformed content, must start with an object"); + } else { + XContentParser.Token token; + String currentFieldName = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if ("scroll_id".equals(currentFieldName) && token == XContentParser.Token.VALUE_STRING) { + scrollId(parser.text()); + } else if ("scroll".equals(currentFieldName) && token == XContentParser.Token.VALUE_STRING) { + scroll(new Scroll(TimeValue.parseTimeValue(parser.text(), null, "scroll"))); + } else { + throw new IllegalArgumentException("Unknown parameter [" + currentFieldName + + "] in request body or parameter is of the wrong type[" + token + "] "); + } + } + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchShardIterator.java b/core/src/main/java/org/elasticsearch/action/search/SearchShardIterator.java new file mode 100644 index 0000000000000..c36d2b7908f78 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/SearchShardIterator.java @@ -0,0 +1,78 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.search; + +import org.elasticsearch.action.OriginalIndices; +import org.elasticsearch.cluster.routing.PlainShardIterator; +import org.elasticsearch.cluster.routing.ShardRouting; +import org.elasticsearch.index.shard.ShardId; + +import java.util.List; + +/** + * Extension of {@link PlainShardIterator} used in the search api, which also holds the {@link OriginalIndices} + * of the search request. Useful especially with cross cluster search, as each cluster has its own set of original indices. + */ +public final class SearchShardIterator extends PlainShardIterator { + + private final OriginalIndices originalIndices; + private String clusterAlias; + private boolean skip = false; + + /** + * Creates a {@link PlainShardIterator} instance that iterates over a subset of the given shards + * this the a given shardId. + * + * @param shardId shard id of the group + * @param shards shards to iterate + */ + public SearchShardIterator(String clusterAlias, ShardId shardId, List shards, OriginalIndices originalIndices) { + super(shardId, shards); + this.originalIndices = originalIndices; + this.clusterAlias = clusterAlias; + } + + /** + * Returns the original indices associated with this shard iterator, specifically with the cluster that this shard belongs to. 
+ */ + public OriginalIndices getOriginalIndices() { + return originalIndices; + } + + public String getClusterAlias() { + return clusterAlias; + } + + /** + * Reset the iterator and mark it as skippable + * @see #skip() + */ + void resetAndSkip() { + reset(); + skip = true; + } + + /** + * Returns true if the search execution should skip this shard since it can not match any documents given the query. + */ + boolean skip() { + return skip; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchTask.java b/core/src/main/java/org/elasticsearch/action/search/SearchTask.java new file mode 100644 index 0000000000000..d0a1cdd456f47 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/SearchTask.java @@ -0,0 +1,39 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.search; + +import org.elasticsearch.tasks.CancellableTask; +import org.elasticsearch.tasks.TaskId; + +/** + * Task storing information about a currently running search request. + */ +public class SearchTask extends CancellableTask { + + public SearchTask(long id, String type, String action, String description, TaskId parentTaskId) { + super(id, type, action, description, parentTaskId); + } + + @Override + public boolean shouldCancelChildrenOnCancellation() { + return true; + } + +} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchTransportService.java b/core/src/main/java/org/elasticsearch/action/search/SearchTransportService.java new file mode 100644 index 0000000000000..052f2bb0fdeba --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/SearchTransportService.java @@ -0,0 +1,469 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.search; + +import org.elasticsearch.Version; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.ActionListenerResponseHandler; +import org.elasticsearch.action.IndicesRequest; +import org.elasticsearch.action.OriginalIndices; +import org.elasticsearch.action.support.IndicesOptions; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.common.component.AbstractComponent; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.SearchService; +import org.elasticsearch.search.dfs.DfsSearchResult; +import org.elasticsearch.search.fetch.FetchSearchResult; +import org.elasticsearch.search.fetch.QueryFetchSearchResult; +import org.elasticsearch.search.fetch.ScrollQueryFetchSearchResult; +import org.elasticsearch.search.fetch.ShardFetchRequest; +import org.elasticsearch.search.fetch.ShardFetchSearchRequest; +import org.elasticsearch.search.internal.InternalScrollSearchRequest; +import org.elasticsearch.search.internal.ShardSearchRequest; +import org.elasticsearch.search.internal.ShardSearchTransportRequest; +import org.elasticsearch.search.query.QuerySearchRequest; +import org.elasticsearch.search.query.QuerySearchResult; +import org.elasticsearch.search.query.ScrollQuerySearchResult; +import org.elasticsearch.tasks.Task; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.RemoteClusterService; +import org.elasticsearch.transport.Transport; +import org.elasticsearch.transport.TransportActionProxy; +import org.elasticsearch.transport.TaskAwareTransportRequestHandler; +import org.elasticsearch.transport.TransportChannel; +import org.elasticsearch.transport.TransportRequest; +import org.elasticsearch.transport.TransportRequestOptions; +import org.elasticsearch.transport.TransportResponse; +import org.elasticsearch.transport.TransportService; + +import java.io.IOException; +import java.util.function.Supplier; + +/** + * An encapsulation of {@link org.elasticsearch.search.SearchService} operations exposed through + * transport. 
+ */ +public class SearchTransportService extends AbstractComponent { + + public static final String FREE_CONTEXT_SCROLL_ACTION_NAME = "indices:data/read/search[free_context/scroll]"; + public static final String FREE_CONTEXT_ACTION_NAME = "indices:data/read/search[free_context]"; + public static final String CLEAR_SCROLL_CONTEXTS_ACTION_NAME = "indices:data/read/search[clear_scroll_contexts]"; + public static final String DFS_ACTION_NAME = "indices:data/read/search[phase/dfs]"; + public static final String QUERY_ACTION_NAME = "indices:data/read/search[phase/query]"; + public static final String QUERY_ID_ACTION_NAME = "indices:data/read/search[phase/query/id]"; + public static final String QUERY_SCROLL_ACTION_NAME = "indices:data/read/search[phase/query/scroll]"; + @Deprecated + public static final String QUERY_FETCH_ACTION_NAME = "indices:data/read/search[phase/query+fetch]"; + public static final String QUERY_FETCH_SCROLL_ACTION_NAME = "indices:data/read/search[phase/query+fetch/scroll]"; + public static final String FETCH_ID_SCROLL_ACTION_NAME = "indices:data/read/search[phase/fetch/id/scroll]"; + public static final String FETCH_ID_ACTION_NAME = "indices:data/read/search[phase/fetch/id]"; + public static final String QUERY_CAN_MATCH_NAME = "indices:data/read/search[can_match]"; + + private final TransportService transportService; + + public SearchTransportService(Settings settings, TransportService transportService) { + super(settings); + this.transportService = transportService; + } + + public void sendFreeContext(Transport.Connection connection, final long contextId, OriginalIndices originalIndices) { + transportService.sendRequest(connection, FREE_CONTEXT_ACTION_NAME, new SearchFreeContextRequest(originalIndices, contextId), + TransportRequestOptions.EMPTY, new ActionListenerResponseHandler<>(new ActionListener() { + @Override + public void onResponse(SearchFreeContextResponse response) { + // no need to respond if it was freed or not + } + + @Override + public void onFailure(Exception e) { + + } + }, SearchFreeContextResponse::new)); + } + + public void sendFreeContext(Transport.Connection connection, long contextId, final ActionListener listener) { + transportService.sendRequest(connection, FREE_CONTEXT_SCROLL_ACTION_NAME, new ScrollFreeContextRequest(contextId), + TransportRequestOptions.EMPTY, new ActionListenerResponseHandler<>(listener, SearchFreeContextResponse::new)); + } + + public void sendCanMatch(Transport.Connection connection, final ShardSearchTransportRequest request, SearchTask task, final + ActionListener listener) { + if (connection.getNode().getVersion().onOrAfter(Version.V_5_6_0)) { + transportService.sendChildRequest(connection, QUERY_CAN_MATCH_NAME, request, task, + TransportRequestOptions.EMPTY, new ActionListenerResponseHandler<>(listener, CanMatchResponse::new)); + } else { + throw new IllegalArgumentException("can_match is not supported on pre 5.6.0 nodes"); + } + } + + public void sendClearAllScrollContexts(Transport.Connection connection, final ActionListener listener) { + transportService.sendRequest(connection, CLEAR_SCROLL_CONTEXTS_ACTION_NAME, TransportRequest.Empty.INSTANCE, + TransportRequestOptions.EMPTY, new ActionListenerResponseHandler<>(listener, () -> TransportResponse.Empty.INSTANCE)); + } + + public void sendExecuteDfs(Transport.Connection connection, final ShardSearchTransportRequest request, SearchTask task, + final SearchActionListener listener) { + transportService.sendChildRequest(connection, DFS_ACTION_NAME, request, task, + new 
ActionListenerResponseHandler<>(listener, DfsSearchResult::new)); + } + + public void sendExecuteQuery(Transport.Connection connection, final ShardSearchTransportRequest request, SearchTask task, + final SearchActionListener listener) { + // we optimize this and expect a QueryFetchSearchResult if we only have a single shard in the search request + // this used to be the QUERY_AND_FETCH which doesn't exists anymore. + final boolean fetchDocuments = request.numberOfShards() == 1; + Supplier supplier = fetchDocuments ? QueryFetchSearchResult::new : QuerySearchResult::new; + if (connection.getVersion().before(Version.V_5_3_0) && fetchDocuments) { + // this is a BWC layer for pre 5.3 indices + if (request.scroll() != null) { + /** + * This is needed for nodes pre 5.3 when the single shard optimization is used. + * These nodes will set the last emitted doc only if the removed `query_and_fetch` search type is set + * in the request. See {@link SearchType}. + */ + request.searchType(SearchType.QUERY_AND_FETCH); + } + transportService.sendChildRequest(connection, QUERY_FETCH_ACTION_NAME, request, task, + new ActionListenerResponseHandler<>(listener, supplier)); + } else { + transportService.sendChildRequest(connection, QUERY_ACTION_NAME, request, task, + new ActionListenerResponseHandler<>(listener, supplier)); + } + } + + public void sendExecuteQuery(Transport.Connection connection, final QuerySearchRequest request, SearchTask task, + final SearchActionListener listener) { + transportService.sendChildRequest(connection, QUERY_ID_ACTION_NAME, request, task, + new ActionListenerResponseHandler<>(listener, QuerySearchResult::new)); + } + + public void sendExecuteScrollQuery(Transport.Connection connection, final InternalScrollSearchRequest request, SearchTask task, + final SearchActionListener listener) { + transportService.sendChildRequest(connection, QUERY_SCROLL_ACTION_NAME, request, task, + new ActionListenerResponseHandler<>(listener, ScrollQuerySearchResult::new)); + } + + public void sendExecuteScrollFetch(Transport.Connection connection, final InternalScrollSearchRequest request, SearchTask task, + final SearchActionListener listener) { + transportService.sendChildRequest(connection, QUERY_FETCH_SCROLL_ACTION_NAME, request, task, + new ActionListenerResponseHandler<>(listener, ScrollQueryFetchSearchResult::new)); + } + + public void sendExecuteFetch(Transport.Connection connection, final ShardFetchSearchRequest request, SearchTask task, + final SearchActionListener listener) { + sendExecuteFetch(connection, FETCH_ID_ACTION_NAME, request, task, listener); + } + + public void sendExecuteFetchScroll(Transport.Connection connection, final ShardFetchRequest request, SearchTask task, + final SearchActionListener listener) { + sendExecuteFetch(connection, FETCH_ID_SCROLL_ACTION_NAME, request, task, listener); + } + + private void sendExecuteFetch(Transport.Connection connection, String action, final ShardFetchRequest request, SearchTask task, + final SearchActionListener listener) { + transportService.sendChildRequest(connection, action, request, task, + new ActionListenerResponseHandler<>(listener, FetchSearchResult::new)); + } + + /** + * Used by {@link TransportSearchAction} to send the expand queries (field collapsing). 
+ */ + void sendExecuteMultiSearch(final MultiSearchRequest request, SearchTask task, + final ActionListener listener) { + transportService.sendChildRequest(transportService.getConnection(transportService.getLocalNode()), MultiSearchAction.NAME, request, + task, new ActionListenerResponseHandler<>(listener, MultiSearchResponse::new)); + } + + public RemoteClusterService getRemoteClusterService() { + return transportService.getRemoteClusterService(); + } + + static class ScrollFreeContextRequest extends TransportRequest { + private long id; + + ScrollFreeContextRequest() { + } + + ScrollFreeContextRequest(long id) { + this.id = id; + } + + public long id() { + return this.id; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + id = in.readLong(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeLong(id); + } + } + + static class SearchFreeContextRequest extends ScrollFreeContextRequest implements IndicesRequest { + private OriginalIndices originalIndices; + + SearchFreeContextRequest() { + } + + SearchFreeContextRequest(OriginalIndices originalIndices, long id) { + super(id); + this.originalIndices = originalIndices; + } + + @Override + public String[] indices() { + if (originalIndices == null) { + return null; + } + return originalIndices.indices(); + } + + @Override + public IndicesOptions indicesOptions() { + if (originalIndices == null) { + return null; + } + return originalIndices.indicesOptions(); + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + originalIndices = OriginalIndices.readOriginalIndices(in); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + OriginalIndices.writeOriginalIndices(originalIndices, out); + } + } + + public static class SearchFreeContextResponse extends TransportResponse { + + private boolean freed; + + SearchFreeContextResponse() { + } + + SearchFreeContextResponse(boolean freed) { + this.freed = freed; + } + + public boolean isFreed() { + return freed; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + freed = in.readBoolean(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeBoolean(freed); + } + } + + public static void registerRequestHandler(TransportService transportService, SearchService searchService) { + transportService.registerRequestHandler(FREE_CONTEXT_SCROLL_ACTION_NAME, ScrollFreeContextRequest::new, ThreadPool.Names.SAME, + new TaskAwareTransportRequestHandler() { + @Override + public void messageReceived(ScrollFreeContextRequest request, TransportChannel channel, Task task) throws Exception { + boolean freed = searchService.freeContext(request.id()); + channel.sendResponse(new SearchFreeContextResponse(freed)); + } + }); + TransportActionProxy.registerProxyAction(transportService, FREE_CONTEXT_SCROLL_ACTION_NAME, + (Supplier) SearchFreeContextResponse::new); + transportService.registerRequestHandler(FREE_CONTEXT_ACTION_NAME, SearchFreeContextRequest::new, ThreadPool.Names.SAME, + new TaskAwareTransportRequestHandler() { + @Override + public void messageReceived(SearchFreeContextRequest request, TransportChannel channel, Task task) throws Exception { + boolean freed = searchService.freeContext(request.id()); + channel.sendResponse(new SearchFreeContextResponse(freed)); + } + }); + 
TransportActionProxy.registerProxyAction(transportService, FREE_CONTEXT_ACTION_NAME, + (Supplier) SearchFreeContextResponse::new); + transportService.registerRequestHandler(CLEAR_SCROLL_CONTEXTS_ACTION_NAME, () -> TransportRequest.Empty.INSTANCE, + ThreadPool.Names.SAME, new TaskAwareTransportRequestHandler() { + @Override + public void messageReceived(TransportRequest.Empty request, TransportChannel channel, Task task) throws Exception { + searchService.freeAllScrollContexts(); + channel.sendResponse(TransportResponse.Empty.INSTANCE); + } + }); + TransportActionProxy.registerProxyAction(transportService, CLEAR_SCROLL_CONTEXTS_ACTION_NAME, + () -> TransportResponse.Empty.INSTANCE); + + transportService.registerRequestHandler(DFS_ACTION_NAME, ShardSearchTransportRequest::new, ThreadPool.Names.SEARCH, + new TaskAwareTransportRequestHandler() { + @Override + public void messageReceived(ShardSearchTransportRequest request, TransportChannel channel, Task task) throws Exception { + DfsSearchResult result = searchService.executeDfsPhase(request, (SearchTask)task); + channel.sendResponse(result); + + } + }); + TransportActionProxy.registerProxyAction(transportService, DFS_ACTION_NAME, DfsSearchResult::new); + + transportService.registerRequestHandler(QUERY_ACTION_NAME, ShardSearchTransportRequest::new, ThreadPool.Names.SEARCH, + new TaskAwareTransportRequestHandler() { + @Override + public void messageReceived(ShardSearchTransportRequest request, TransportChannel channel, Task task) throws Exception { + SearchPhaseResult result = searchService.executeQueryPhase(request, (SearchTask)task); + channel.sendResponse(result); + } + }); + TransportActionProxy.registerProxyAction(transportService, QUERY_ACTION_NAME, + (request) -> ((ShardSearchRequest)request).numberOfShards() == 1 ? 
QueryFetchSearchResult::new : QuerySearchResult::new); + + transportService.registerRequestHandler(QUERY_ID_ACTION_NAME, QuerySearchRequest::new, ThreadPool.Names.SEARCH, + new TaskAwareTransportRequestHandler() { + @Override + public void messageReceived(QuerySearchRequest request, TransportChannel channel, Task task) throws Exception { + QuerySearchResult result = searchService.executeQueryPhase(request, (SearchTask)task); + channel.sendResponse(result); + } + }); + TransportActionProxy.registerProxyAction(transportService, QUERY_ID_ACTION_NAME, QuerySearchResult::new); + + transportService.registerRequestHandler(QUERY_SCROLL_ACTION_NAME, InternalScrollSearchRequest::new, ThreadPool.Names.SEARCH, + new TaskAwareTransportRequestHandler() { + @Override + public void messageReceived(InternalScrollSearchRequest request, TransportChannel channel, Task task) throws Exception { + ScrollQuerySearchResult result = searchService.executeQueryPhase(request, (SearchTask)task); + channel.sendResponse(result); + } + }); + TransportActionProxy.registerProxyAction(transportService, QUERY_SCROLL_ACTION_NAME, ScrollQuerySearchResult::new); + + // this is for BWC with pre 5.3 indices in 5.3 we will only execute a `indices:data/read/search[phase/query+fetch]` + // if the node is pre 5.3 + transportService.registerRequestHandler(QUERY_FETCH_ACTION_NAME, ShardSearchTransportRequest::new, ThreadPool.Names.SEARCH, + new TaskAwareTransportRequestHandler() { + @Override + public void messageReceived(ShardSearchTransportRequest request, TransportChannel channel, Task task) throws Exception { + assert request.numberOfShards() == 1 : "expected single shard request but got: " + request.numberOfShards(); + SearchPhaseResult result = searchService.executeQueryPhase(request, (SearchTask)task); + channel.sendResponse(result); + } + }); + TransportActionProxy.registerProxyAction(transportService, QUERY_FETCH_ACTION_NAME, QueryFetchSearchResult::new); + + transportService.registerRequestHandler(QUERY_FETCH_SCROLL_ACTION_NAME, InternalScrollSearchRequest::new, ThreadPool.Names.SEARCH, + new TaskAwareTransportRequestHandler() { + @Override + public void messageReceived(InternalScrollSearchRequest request, TransportChannel channel, Task task) throws Exception { + ScrollQueryFetchSearchResult result = searchService.executeFetchPhase(request, (SearchTask)task); + channel.sendResponse(result); + } + }); + TransportActionProxy.registerProxyAction(transportService, QUERY_FETCH_SCROLL_ACTION_NAME, ScrollQueryFetchSearchResult::new); + + transportService.registerRequestHandler(FETCH_ID_SCROLL_ACTION_NAME, ShardFetchRequest::new, ThreadPool.Names.SEARCH, + new TaskAwareTransportRequestHandler() { + @Override + public void messageReceived(ShardFetchRequest request, TransportChannel channel, Task task) throws Exception { + FetchSearchResult result = searchService.executeFetchPhase(request, (SearchTask)task); + channel.sendResponse(result); + } + }); + TransportActionProxy.registerProxyAction(transportService, FETCH_ID_SCROLL_ACTION_NAME, FetchSearchResult::new); + + transportService.registerRequestHandler(FETCH_ID_ACTION_NAME, ShardFetchSearchRequest::new, ThreadPool.Names.SEARCH, + new TaskAwareTransportRequestHandler() { + @Override + public void messageReceived(ShardFetchSearchRequest request, TransportChannel channel, Task task) throws Exception { + FetchSearchResult result = searchService.executeFetchPhase(request, (SearchTask)task); + channel.sendResponse(result); + } + }); + 
TransportActionProxy.registerProxyAction(transportService, FETCH_ID_ACTION_NAME, FetchSearchResult::new); + + // this is super cheap and should not hit thread-pool rejections + transportService.registerRequestHandler(QUERY_CAN_MATCH_NAME, ShardSearchTransportRequest::new, ThreadPool.Names.SEARCH, + false, true, new TaskAwareTransportRequestHandler() { + @Override + public void messageReceived(ShardSearchTransportRequest request, TransportChannel channel, Task task) throws Exception { + boolean canMatch = searchService.canMatch(request); + channel.sendResponse(new CanMatchResponse(canMatch)); + } + }); + TransportActionProxy.registerProxyAction(transportService, QUERY_CAN_MATCH_NAME, + (Supplier) CanMatchResponse::new); + } + + + public static final class CanMatchResponse extends SearchPhaseResult { + private boolean canMatch; + + public CanMatchResponse() { + } + + public CanMatchResponse(boolean canMatch) { + this.canMatch = canMatch; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + canMatch = in.readBoolean(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeBoolean(canMatch); + } + + public boolean canMatch() { + return canMatch; + } + } + + + /** + * Returns a connection to the given node on the provided cluster. If the cluster alias is null the node will be resolved + * against the local cluster. + * @param clusterAlias the cluster alias the node should be resolve against + * @param node the node to resolve + * @return a connection to the given node belonging to the cluster with the provided alias. + */ + Transport.Connection getConnection(String clusterAlias, DiscoveryNode node) { + if (clusterAlias == null) { + return transportService.getConnection(node); + } else { + return transportService.getRemoteClusterService().getConnection(node, clusterAlias); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchType.java b/core/src/main/java/org/elasticsearch/action/search/SearchType.java index 31535736957cd..cbbeef60f6c04 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchType.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchType.java @@ -19,8 +19,6 @@ package org.elasticsearch.action.search; -import org.elasticsearch.common.ParseFieldMatcher; - /** * Search type represent the manner at which the search operation is executed. * @@ -39,16 +37,12 @@ public enum SearchType { * are fetched. This is very handy when the index has a lot of shards (not replicas, shard id groups). */ QUERY_THEN_FETCH((byte) 1), + // 2 used to be DFS_QUERY_AND_FETCH + /** - * Same as {@link #QUERY_AND_FETCH}, except for an initial scatter phase which goes and computes the distributed - * term frequencies for more accurate scoring. - */ - DFS_QUERY_AND_FETCH((byte) 2), - /** - * The most naive (and possibly fastest) implementation is to simply execute the query on all relevant shards - * and return the results. Each shard returns size results. Since each shard already returns size hits, this - * type actually returns size times number of shards results back to the caller. 
+ * Only used for pre 5.3 request where this type is still needed */ + @Deprecated QUERY_AND_FETCH((byte) 3); /** @@ -75,12 +69,9 @@ public byte id() { public static SearchType fromId(byte id) { if (id == 0) { return DFS_QUERY_THEN_FETCH; - } else if (id == 1) { + } else if (id == 1 + || id == 3) { // This is a BWC layer for pre 5.3 indices where QUERY_AND_FETCH was id 3 but we don't have it anymore from 5.3 on return QUERY_THEN_FETCH; - } else if (id == 2) { - return DFS_QUERY_AND_FETCH; - } else if (id == 3) { - return QUERY_AND_FETCH; } else { throw new IllegalArgumentException("No search type for [" + id + "]"); } @@ -91,18 +82,14 @@ public static SearchType fromId(byte id) { * one of "dfs_query_then_fetch"/"dfsQueryThenFetch", "dfs_query_and_fetch"/"dfsQueryAndFetch", * "query_then_fetch"/"queryThenFetch" and "query_and_fetch"/"queryAndFetch". */ - public static SearchType fromString(String searchType, ParseFieldMatcher parseFieldMatcher) { + public static SearchType fromString(String searchType) { if (searchType == null) { return SearchType.DEFAULT; } if ("dfs_query_then_fetch".equals(searchType)) { return SearchType.DFS_QUERY_THEN_FETCH; - } else if ("dfs_query_and_fetch".equals(searchType)) { - return SearchType.DFS_QUERY_AND_FETCH; } else if ("query_then_fetch".equals(searchType)) { return SearchType.QUERY_THEN_FETCH; - } else if ("query_and_fetch".equals(searchType)) { - return SearchType.QUERY_AND_FETCH; } else { throw new IllegalArgumentException("No search type for [" + searchType + "]"); } diff --git a/core/src/main/java/org/elasticsearch/action/search/ShardSearchFailure.java b/core/src/main/java/org/elasticsearch/action/search/ShardSearchFailure.java index 8070081dcd865..7eb939ca8274e 100644 --- a/core/src/main/java/org/elasticsearch/action/search/ShardSearchFailure.java +++ b/core/src/main/java/org/elasticsearch/action/search/ShardSearchFailure.java @@ -21,22 +21,34 @@ import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ExceptionsHelper; +import org.elasticsearch.action.OriginalIndices; import org.elasticsearch.action.ShardOperationFailedException; +import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.Index; +import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.search.SearchException; import org.elasticsearch.search.SearchShardTarget; import java.io.IOException; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + /** * Represents a failure to search on a specific shard. 
*/ public class ShardSearchFailure implements ShardOperationFailedException { + private static final String REASON_FIELD = "reason"; + private static final String NODE_FIELD = "node"; + private static final String INDEX_FIELD = "index"; + private static final String SHARD_FIELD = "shard"; + public static final ShardSearchFailure[] EMPTY_ARRAY = new ShardSearchFailure[0]; private SearchShardTarget shardTarget; @@ -68,7 +80,7 @@ public ShardSearchFailure(String reason, SearchShardTarget shardTarget) { this(reason, shardTarget, RestStatus.INTERNAL_SERVER_ERROR); } - public ShardSearchFailure(String reason, SearchShardTarget shardTarget, RestStatus status) { + private ShardSearchFailure(String reason, SearchShardTarget shardTarget, RestStatus status) { this.shardTarget = shardTarget; this.reason = reason; this.status = status; @@ -93,7 +105,7 @@ public RestStatus status() { @Override public String index() { if (shardTarget != null) { - return shardTarget.index(); + return shardTarget.getIndex(); } return null; } @@ -104,7 +116,7 @@ public String index() { @Override public int shardId() { if (shardTarget != null) { - return shardTarget.shardId().id(); + return shardTarget.getShardId().id(); } return -1; } @@ -153,20 +165,56 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.field("shard", shardId()); - builder.field("index", index()); + builder.field(SHARD_FIELD, shardId()); + builder.field(INDEX_FIELD, index()); if (shardTarget != null) { - builder.field("node", shardTarget.nodeId()); + builder.field(NODE_FIELD, shardTarget.getNodeId()); } if (cause != null) { - builder.field("reason"); + builder.field(REASON_FIELD); builder.startObject(); - ElasticsearchException.toXContent(builder, params, cause); + ElasticsearchException.generateThrowableXContent(builder, params, cause); builder.endObject(); } return builder; } + public static ShardSearchFailure fromXContent(XContentParser parser) throws IOException { + XContentParser.Token token; + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.currentToken(), parser::getTokenLocation); + String currentFieldName = null; + int shardId = -1; + String indexName = null; + String nodeId = null; + ElasticsearchException exception = null; + while((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (SHARD_FIELD.equals(currentFieldName)) { + shardId = parser.intValue(); + } else if (INDEX_FIELD.equals(currentFieldName)) { + indexName = parser.text(); + } else if (NODE_FIELD.equals(currentFieldName)) { + nodeId = parser.text(); + } else { + parser.skipChildren(); + } + } else if (token == XContentParser.Token.START_OBJECT) { + if (REASON_FIELD.equals(currentFieldName)) { + exception = ElasticsearchException.fromXContent(parser); + } else { + parser.skipChildren(); + } + } else { + parser.skipChildren(); + } + } + return new ShardSearchFailure(exception, + new SearchShardTarget(nodeId, + new ShardId(new Index(indexName, IndexMetaData.INDEX_UUID_NA_VALUE), shardId), null, OriginalIndices.NONE)); + } + @Override public Throwable getCause() { return cause; diff --git a/core/src/main/java/org/elasticsearch/action/search/TransportClearScrollAction.java b/core/src/main/java/org/elasticsearch/action/search/TransportClearScrollAction.java index 220e7f5b25054..d9afbdacafe3c 100644 --- 
a/core/src/main/java/org/elasticsearch/action/search/TransportClearScrollAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/TransportClearScrollAction.java @@ -19,33 +19,16 @@ package org.elasticsearch.action.search; -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.HandledTransportAction; -import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.util.concurrent.CountDown; -import org.elasticsearch.search.action.SearchTransportService; import org.elasticsearch.threadpool.ThreadPool; -import org.elasticsearch.transport.TransportResponse; import org.elasticsearch.transport.TransportService; -import java.util.ArrayList; -import java.util.List; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.concurrent.atomic.AtomicReference; - -import static org.elasticsearch.action.search.TransportSearchHelper.parseScrollId; - -/** - */ public class TransportClearScrollAction extends HandledTransportAction { private final ClusterService clusterService; @@ -53,107 +36,19 @@ public class TransportClearScrollAction extends HandledTransportAction listener) { - new Async(request, listener, clusterService.state()).run(); - } - - private class Async { - final DiscoveryNodes nodes; - final CountDown expectedOps; - final List contexts = new ArrayList<>(); - final ActionListener listener; - final AtomicReference expHolder; - final AtomicInteger numberOfFreedSearchContexts = new AtomicInteger(0); - - private Async(ClearScrollRequest request, ActionListener listener, ClusterState clusterState) { - int expectedOps = 0; - this.nodes = clusterState.nodes(); - if (request.getScrollIds().size() == 1 && "_all".equals(request.getScrollIds().get(0))) { - expectedOps = nodes.getSize(); - } else { - for (String parsedScrollId : request.getScrollIds()) { - ScrollIdForNode[] context = parseScrollId(parsedScrollId).getContext(); - expectedOps += context.length; - this.contexts.add(context); - } - } - this.listener = listener; - this.expHolder = new AtomicReference<>(); - this.expectedOps = new CountDown(expectedOps); - } - - public void run() { - if (expectedOps.isCountedDown()) { - listener.onResponse(new ClearScrollResponse(true, 0)); - return; - } - - if (contexts.isEmpty()) { - for (final DiscoveryNode node : nodes) { - searchTransportService.sendClearAllScrollContexts(node, new ActionListener() { - @Override - public void onResponse(TransportResponse response) { - onFreedContext(true); - } - - @Override - public void onFailure(Exception e) { - onFailedFreedContext(e, node); - } - }); - } - } else { - for (ScrollIdForNode[] context : contexts) { - for (ScrollIdForNode target : context) { - final DiscoveryNode node = nodes.get(target.getNode()); - if (node == null) { - onFreedContext(false); - continue; - } - - searchTransportService.sendFreeContext(node, target.getScrollId(), new ActionListener() { - @Override - public void onResponse(SearchTransportService.SearchFreeContextResponse freed) { - onFreedContext(freed.isFreed()); - } - - 
@Override - public void onFailure(Exception e) { - onFailedFreedContext(e, node); - } - }); - } - } - } - } - - void onFreedContext(boolean freed) { - if (freed) { - numberOfFreedSearchContexts.incrementAndGet(); - } - if (expectedOps.countDown()) { - boolean succeeded = expHolder.get() == null; - listener.onResponse(new ClearScrollResponse(succeeded, numberOfFreedSearchContexts.get())); - } - } - - void onFailedFreedContext(Throwable e, DiscoveryNode node) { - logger.warn((Supplier) () -> new ParameterizedMessage("Clear SC failed on node[{}]", node), e); - if (expectedOps.countDown()) { - listener.onResponse(new ClearScrollResponse(false, numberOfFreedSearchContexts.get())); - } else { - expHolder.set(e); - } - } - + Runnable runnable = new ClearScrollController(request, listener, clusterService.state().nodes(), logger, searchTransportService); + runnable.run(); } } diff --git a/core/src/main/java/org/elasticsearch/action/search/TransportMultiSearchAction.java b/core/src/main/java/org/elasticsearch/action/search/TransportMultiSearchAction.java index efd0403527657..cc2638070a94f 100644 --- a/core/src/main/java/org/elasticsearch/action/search/TransportMultiSearchAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/TransportMultiSearchAction.java @@ -34,6 +34,7 @@ import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; +import java.util.List; import java.util.Queue; import java.util.concurrent.ConcurrentLinkedQueue; import java.util.concurrent.atomic.AtomicInteger; @@ -46,19 +47,18 @@ public class TransportMultiSearchAction extends HandledTransportAction searchAction, - IndexNameExpressionResolver indexNameExpressionResolver, int availableProcessors) { - super(Settings.EMPTY, MultiSearchAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, MultiSearchRequest::new); + IndexNameExpressionResolver resolver, int availableProcessors) { + super(Settings.EMPTY, MultiSearchAction.NAME, threadPool, transportService, actionFilters, resolver, MultiSearchRequest::new); this.clusterService = clusterService; this.searchAction = searchAction; this.availableProcessors = availableProcessors; @@ -90,10 +90,9 @@ protected void doExecute(MultiSearchRequest request, ActionListener requests, AtomicArray responses, - AtomicInteger responseCounter, ActionListener listener) { + /** + * Executes a single request from the queue of requests. When a request finishes, another request is taken from the queue. When a + * request is executed, a permit is taken on the specified semaphore, and released as each request completes. + * + * @param requests the queue of multi-search requests to execute + * @param responses atomic array to hold the responses corresponding to each search request slot + * @param responseCounter incremented on each response + * @param listener the listener attached to the multi-search request + */ + private void executeSearch( + final Queue requests, + final AtomicArray responses, + final AtomicInteger responseCounter, + final ActionListener listener) { SearchRequestSlot request = requests.poll(); if (request == null) { - // Ok... so there're no more requests then this is ok, we're then waiting for running requests to complete + /* + * The number of times that we poll an item from the queue here is the minimum of the number of requests and the maximum number + * of concurrent requests. 
At first glance, it appears that we should never poll from the queue and not obtain a request given + * that we only poll here no more times than the number of requests. However, this is not the only consumer of this queue as + * earlier requests that have already completed will poll from the queue too and they could complete before later polls are + * invoked here. Thus, it can be the case that we poll here and and the queue was empty. + */ return; } + + /* + * With a request in hand, we are now prepared to execute the search request. There are two possibilities, either we go asynchronous + * or we do not (this can happen if the request does not resolve to any shards). If we do not go asynchronous, we are going to come + * back on the same thread that attempted to execute the search request. At this point, or any other point where we come back on the + * same thread as when the request was submitted, we should not recurse lest we might descend into a stack overflow. To avoid this, + * when we handle the response rather than going recursive, we fork to another thread, otherwise we recurse. + */ + final Thread thread = Thread.currentThread(); searchAction.execute(request.request, new ActionListener() { @Override - public void onResponse(SearchResponse searchResponse) { - responses.set(request.responseSlot, new MultiSearchResponse.Item(searchResponse, null)); - handleResponse(); + public void onResponse(final SearchResponse searchResponse) { + handleResponse(request.responseSlot, new MultiSearchResponse.Item(searchResponse, null)); } @Override - public void onFailure(Exception e) { - responses.set(request.responseSlot, new MultiSearchResponse.Item(null, e)); - handleResponse(); + public void onFailure(final Exception e) { + handleResponse(request.responseSlot, new MultiSearchResponse.Item(null, e)); } - private void handleResponse() { + private void handleResponse(final int responseSlot, final MultiSearchResponse.Item item) { + responses.set(responseSlot, item); if (responseCounter.decrementAndGet() == 0) { - listener.onResponse(new MultiSearchResponse(responses.toArray(new MultiSearchResponse.Item[responses.length()]))); + assert requests.isEmpty(); + finish(); } else { - executeSearch(requests, responses, responseCounter, listener); + if (thread == Thread.currentThread()) { + // we are on the same thread, we need to fork to another thread to avoid recursive stack overflow on a single thread + threadPool.generic().execute(() -> executeSearch(requests, responses, responseCounter, listener)); + } else { + // we are on a different thread (we went asynchronous), it's safe to recurse + executeSearch(requests, responses, responseCounter, listener); + } } } + + private void finish() { + listener.onResponse(new MultiSearchResponse(responses.toArray(new MultiSearchResponse.Item[responses.length()]))); + } }); } @@ -142,5 +178,7 @@ static final class SearchRequestSlot { this.request = request; this.responseSlot = responseSlot; } + } + } diff --git a/core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java b/core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java index 8a33bff8f0e9a..21a1097f8fbbd 100644 --- a/core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java @@ -20,105 +20,380 @@ package org.elasticsearch.action.search; import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.OriginalIndices; +import 
org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsGroup; +import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsResponse; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.HandledTransportAction; import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.block.ClusterBlockLevel; +import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.node.DiscoveryNodes; +import org.elasticsearch.cluster.routing.GroupShardsIterator; +import org.elasticsearch.cluster.routing.ShardIterator; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.index.IndexNotFoundException; -import org.elasticsearch.indices.IndexClosedException; -import org.elasticsearch.search.action.SearchTransportService; -import org.elasticsearch.search.controller.SearchPhaseController; +import org.elasticsearch.index.Index; +import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.search.SearchService; +import org.elasticsearch.search.builder.SearchSourceBuilder; +import org.elasticsearch.search.internal.AliasFilter; +import org.elasticsearch.tasks.Task; import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.RemoteClusterAware; +import org.elasticsearch.transport.RemoteClusterService; +import org.elasticsearch.transport.Transport; import org.elasticsearch.transport.TransportService; +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; import java.util.Map; import java.util.Set; +import java.util.concurrent.Executor; +import java.util.function.BiFunction; +import java.util.function.LongSupplier; -import static org.elasticsearch.action.search.SearchType.QUERY_AND_FETCH; import static org.elasticsearch.action.search.SearchType.QUERY_THEN_FETCH; public class TransportSearchAction extends HandledTransportAction { /** The maximum number of shards for a single search request. 
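The `SHARD_COUNT_LIMIT_SETTING` registered just below is declared `Dynamic`, so it can be adjusted on a running cluster. A hedged sketch of raising it through the Java client's cluster-settings API; the `client` instance and the chosen value of `2000` are illustrative assumptions, not part of this change.

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;

public class RaiseShardCountLimit {
    // Assumes an existing node or transport client; the limit of 2000 is illustrative.
    static void raiseShardCountLimit(Client client) {
        Settings transientSettings = Settings.builder()
                .put("action.search.shard_count.limit", 2000L)
                .build();
        client.admin().cluster().prepareUpdateSettings()
                .setTransientSettings(transientSettings)
                .get();
    }
}
```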
*/ public static final Setting SHARD_COUNT_LIMIT_SETTING = Setting.longSetting( - "action.search.shard_count.limit", 1000L, 1L, Property.Dynamic, Property.NodeScope); + "action.search.shard_count.limit", Long.MAX_VALUE, 1L, Property.Dynamic, Property.NodeScope); private final ClusterService clusterService; private final SearchTransportService searchTransportService; + private final RemoteClusterService remoteClusterService; private final SearchPhaseController searchPhaseController; + private final SearchService searchService; @Inject - public TransportSearchAction(Settings settings, ThreadPool threadPool, SearchPhaseController searchPhaseController, - TransportService transportService, SearchTransportService searchTransportService, - ClusterService clusterService, ActionFilters actionFilters, IndexNameExpressionResolver - indexNameExpressionResolver) { + public TransportSearchAction(Settings settings, ThreadPool threadPool, TransportService transportService, SearchService searchService, + SearchTransportService searchTransportService, SearchPhaseController searchPhaseController, + ClusterService clusterService, ActionFilters actionFilters, + IndexNameExpressionResolver indexNameExpressionResolver) { super(settings, SearchAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, SearchRequest::new); this.searchPhaseController = searchPhaseController; this.searchTransportService = searchTransportService; + this.remoteClusterService = searchTransportService.getRemoteClusterService(); + SearchTransportService.registerRequestHandler(transportService, searchService); this.clusterService = clusterService; + this.searchService = searchService; + } + + private Map buildPerIndexAliasFilter(SearchRequest request, ClusterState clusterState, + Index[] concreteIndices, Map remoteAliasMap) { + final Map aliasFilterMap = new HashMap<>(); + for (Index index : concreteIndices) { + clusterState.blocks().indexBlockedRaiseException(ClusterBlockLevel.READ, index.getName()); + AliasFilter aliasFilter = searchService.buildAliasFilter(clusterState, index.getName(), request.indices()); + assert aliasFilter != null; + aliasFilterMap.put(index.getUUID(), aliasFilter); + } + aliasFilterMap.putAll(remoteAliasMap); + return aliasFilterMap; + } + + private Map resolveIndexBoosts(SearchRequest searchRequest, ClusterState clusterState) { + if (searchRequest.source() == null) { + return Collections.emptyMap(); + } + + SearchSourceBuilder source = searchRequest.source(); + if (source.indexBoosts() == null) { + return Collections.emptyMap(); + } + + Map concreteIndexBoosts = new HashMap<>(); + for (SearchSourceBuilder.IndexBoost ib : source.indexBoosts()) { + Index[] concreteIndices = + indexNameExpressionResolver.concreteIndices(clusterState, searchRequest.indicesOptions(), ib.getIndex()); + + for (Index concreteIndex : concreteIndices) { + concreteIndexBoosts.putIfAbsent(concreteIndex.getUUID(), ib.getBoost()); + } + } + return Collections.unmodifiableMap(concreteIndexBoosts); + } + + /** + * Search operations need two clocks. One clock is to fulfill real clock needs (e.g., resolving + * "now" to an index name). Another clock is needed for measuring how long a search operation + * took. These two uses are at odds with each other. There are many issues with using a real + * clock for measuring how long an operation took (they often lack precision, they are subject + * to moving backwards due to NTP and other such complexities, etc.). 
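The `SearchTimeProvider` javadoc here motivates keeping two separate clocks. A minimal, self-contained sketch of the idea (not the Elasticsearch class; the `Thread.sleep` stands in for the actual search work):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.LongSupplier;

public class TimeProviderSketch {
    public static void main(String[] args) throws InterruptedException {
        final long absoluteStartMillis = System.currentTimeMillis(); // real clock: resolves "now" / date math
        final long relativeStartNanos = System.nanoTime();           // relative clock: measures duration
        final LongSupplier relativeCurrentNanos = System::nanoTime;

        Thread.sleep(25); // stand-in for the search work

        long tookMillis = TimeUnit.NANOSECONDS.toMillis(relativeCurrentNanos.getAsLong() - relativeStartNanos);
        System.out.println("resolved now at " + absoluteStartMillis + " ms since epoch, took " + tookMillis + " ms");
    }
}
```

The relative clock never feeds user-visible timestamps and the absolute clock never measures durations, which is the separation the javadoc argues for.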
There are also issues with + * using a relative clock for reporting real time. Thus, we simply separate these two uses. + */ + static class SearchTimeProvider { + + private final long absoluteStartMillis; + private final long relativeStartNanos; + private final LongSupplier relativeCurrentNanosProvider; + + /** + * Instantiates a new search time provider. The absolute start time is the real clock time + * used for resolving index expressions that include dates. The relative start time is the + * start of the search operation according to a relative clock. The total time the search + * operation took can be measured against the provided relative clock and the relative start + * time. + * + * @param absoluteStartMillis the absolute start time in milliseconds since the epoch + * @param relativeStartNanos the relative start time in nanoseconds + * @param relativeCurrentNanosProvider provides the current relative time + */ + SearchTimeProvider( + final long absoluteStartMillis, + final long relativeStartNanos, + final LongSupplier relativeCurrentNanosProvider) { + this.absoluteStartMillis = absoluteStartMillis; + this.relativeStartNanos = relativeStartNanos; + this.relativeCurrentNanosProvider = relativeCurrentNanosProvider; + } + + long getAbsoluteStartMillis() { + return absoluteStartMillis; + } + + long getRelativeStartNanos() { + return relativeStartNanos; + } + + long getRelativeCurrentNanos() { + return relativeCurrentNanosProvider.getAsLong(); + } } @Override - protected void doExecute(SearchRequest searchRequest, ActionListener listener) { - // optimize search type for cases where there is only one shard group to search on - try { - ClusterState clusterState = clusterService.state(); - String[] concreteIndices = indexNameExpressionResolver.concreteIndexNames(clusterState, searchRequest); - Map> routingMap = indexNameExpressionResolver.resolveSearchRouting(clusterState, - searchRequest.routing(), searchRequest.indices()); - int shardCount = clusterService.operationRouting().searchShardsCount(clusterState, concreteIndices, routingMap); - if (shardCount == 1) { - // if we only have one group, then we always want Q_A_F, no need for DFS, and no need to do THEN since we hit one shard - searchRequest.searchType(QUERY_AND_FETCH); + protected void doExecute(Task task, SearchRequest searchRequest, ActionListener listener) { + final long absoluteStartMillis = System.currentTimeMillis(); + final long relativeStartNanos = System.nanoTime(); + final SearchTimeProvider timeProvider = + new SearchTimeProvider(absoluteStartMillis, relativeStartNanos, System::nanoTime); + + + final ClusterState clusterState = clusterService.state(); + final Map remoteClusterIndices = remoteClusterService.groupIndices(searchRequest.indicesOptions(), + searchRequest.indices(), idx -> indexNameExpressionResolver.hasIndexOrAlias(idx, clusterState)); + OriginalIndices localIndices = remoteClusterIndices.remove(RemoteClusterAware.LOCAL_CLUSTER_GROUP_KEY); + if (remoteClusterIndices.isEmpty()) { + executeSearch((SearchTask)task, timeProvider, searchRequest, localIndices, remoteClusterIndices, Collections.emptyList(), + (clusterName, nodeId) -> null, clusterState, Collections.emptyMap(), listener, clusterState.getNodes() + .getDataNodes().size()); + } else { + remoteClusterService.collectSearchShards(searchRequest.indicesOptions(), searchRequest.preference(), searchRequest.routing(), + remoteClusterIndices, ActionListener.wrap((searchShardsResponses) -> { + List remoteShardIterators = new ArrayList<>(); + Map remoteAliasFilters 
= new HashMap<>(); + BiFunction clusterNodeLookup = processRemoteShards(searchShardsResponses, + remoteClusterIndices, remoteShardIterators, remoteAliasFilters); + int numNodesInvovled = searchShardsResponses.values().stream().mapToInt(r -> r.getNodes().length).sum() + + clusterState.getNodes().getDataNodes().size(); + executeSearch((SearchTask) task, timeProvider, searchRequest, localIndices, remoteClusterIndices, remoteShardIterators, + clusterNodeLookup, clusterState, remoteAliasFilters, listener, numNodesInvovled); + }, listener::onFailure)); + } + } + + static BiFunction processRemoteShards(Map searchShardsResponses, + Map remoteIndicesByCluster, + List remoteShardIterators, + Map aliasFilterMap) { + Map> clusterToNode = new HashMap<>(); + for (Map.Entry entry : searchShardsResponses.entrySet()) { + String clusterAlias = entry.getKey(); + ClusterSearchShardsResponse searchShardsResponse = entry.getValue(); + HashMap idToDiscoveryNode = new HashMap<>(); + clusterToNode.put(clusterAlias, idToDiscoveryNode); + for (DiscoveryNode remoteNode : searchShardsResponse.getNodes()) { + idToDiscoveryNode.put(remoteNode.getId(), remoteNode); } - if (searchRequest.isSuggestOnly()) { - // disable request cache if we have only suggest - searchRequest.requestCache(false); - switch (searchRequest.searchType()) { - case DFS_QUERY_AND_FETCH: - case DFS_QUERY_THEN_FETCH: - // convert to Q_T_F if we have only suggest - searchRequest.searchType(QUERY_THEN_FETCH); - break; + final Map indicesAndFilters = searchShardsResponse.getIndicesAndFilters(); + for (ClusterSearchShardsGroup clusterSearchShardsGroup : searchShardsResponse.getGroups()) { + //add the cluster name to the remote index names for indices disambiguation + //this ends up in the hits returned with the search response + ShardId shardId = clusterSearchShardsGroup.getShardId(); + final AliasFilter aliasFilter; + if (indicesAndFilters == null) { + aliasFilter = AliasFilter.EMPTY; + } else { + aliasFilter = indicesAndFilters.get(shardId.getIndexName()); + assert aliasFilter != null : "alias filter must not be null for index: " + shardId.getIndex(); } + String[] aliases = aliasFilter.getAliases(); + String[] finalIndices = aliases.length == 0 ? 
new String[] {shardId.getIndexName()} : aliases; + // here we have to map the filters to the UUID since from now on we use the uuid for the lookup + aliasFilterMap.put(shardId.getIndex().getUUID(), aliasFilter); + final OriginalIndices originalIndices = remoteIndicesByCluster.get(clusterAlias); + assert originalIndices != null : "original indices are null for clusterAlias: " + clusterAlias; + SearchShardIterator shardIterator = new SearchShardIterator(clusterAlias, shardId, + Arrays.asList(clusterSearchShardsGroup.getShards()), new OriginalIndices(finalIndices, + originalIndices.indicesOptions())); + remoteShardIterators.add(shardIterator); } - } catch (IndexNotFoundException | IndexClosedException e) { - // ignore these failures, we will notify the search response if its really the case from the actual action - } catch (Exception e) { - logger.debug("failed to optimize search type, continue as normal", e); } + return (clusterAlias, nodeId) -> { + Map clusterNodes = clusterToNode.get(clusterAlias); + if (clusterNodes == null) { + throw new IllegalArgumentException("unknown remote cluster: " + clusterAlias); + } + return clusterNodes.get(nodeId); + }; + } + + private void executeSearch(SearchTask task, SearchTimeProvider timeProvider, SearchRequest searchRequest, OriginalIndices localIndices, + Map remoteClusterIndices, List remoteShardIterators, + BiFunction remoteConnections, ClusterState clusterState, + Map remoteAliasMap, ActionListener listener, int nodeCount) { + + clusterState.blocks().globalBlockedRaiseException(ClusterBlockLevel.READ); + // TODO: I think startTime() should become part of ActionRequest and that should be used both for index name + // date math expressions and $now in scripts. This way all apis will deal with now in the same way instead + // of just for the _search api + final Index[] indices; + if (localIndices.indices().length == 0 && remoteClusterIndices.isEmpty() == false) { + indices = Index.EMPTY_ARRAY; // don't search on _all if only remote indices were specified + } else { + indices = indexNameExpressionResolver.concreteIndices(clusterState, searchRequest.indicesOptions(), + timeProvider.getAbsoluteStartMillis(), localIndices.indices()); + } + Map aliasFilter = buildPerIndexAliasFilter(searchRequest, clusterState, indices, remoteAliasMap); + Map> routingMap = indexNameExpressionResolver.resolveSearchRouting(clusterState, searchRequest.routing(), + searchRequest.indices()); + String[] concreteIndices = new String[indices.length]; + for (int i = 0; i < indices.length; i++) { + concreteIndices[i] = indices[i].getName(); + } + GroupShardsIterator localShardsIterator = clusterService.operationRouting().searchShards(clusterState, + concreteIndices, routingMap, searchRequest.preference()); + GroupShardsIterator shardIterators = mergeShardsIterators(localShardsIterator, localIndices, + remoteShardIterators); + + failIfOverShardCountLimit(clusterService, shardIterators.size()); + + Map concreteIndexBoosts = resolveIndexBoosts(searchRequest, clusterState); + + // optimize search type for cases where there is only one shard group to search on + if (shardIterators.size() == 1) { + // if we only have one group, then we always want Q_A_F, no need for DFS, and no need to do THEN since we hit one shard + searchRequest.searchType(QUERY_THEN_FETCH); + } + if (searchRequest.isSuggestOnly()) { + // disable request cache if we have only suggest + searchRequest.requestCache(false); + switch (searchRequest.searchType()) { + case DFS_QUERY_THEN_FETCH: + // convert to Q_T_F if we 
have only suggest + searchRequest.searchType(QUERY_THEN_FETCH); + break; + } + } + + final DiscoveryNodes nodes = clusterState.nodes(); + BiFunction connectionLookup = (clusterName, nodeId) -> { + final DiscoveryNode discoveryNode = clusterName == null ? nodes.get(nodeId) : remoteConnections.apply(clusterName, nodeId); + if (discoveryNode == null) { + throw new IllegalStateException("no node found for id: " + nodeId); + } + return searchTransportService.getConnection(clusterName, discoveryNode); + }; + if (searchRequest.isMaxConcurrentShardRequestsSet() == false) { + // we try to set a default of max concurrent shard requests based on + // the node count but upper-bound it by 256 by default to keep it sane. A single + // search request that fans out lots of shards should hit a cluster too hard while 256 is already a lot + // we multiply is by the default number of shards such that a single request in a cluster of 1 would hit all shards of a + // default index. + searchRequest.setMaxConcurrentShardRequests(Math.min(256, nodeCount + * IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.getDefault(Settings.EMPTY))); + } + boolean preFilterSearchShards = shouldPreFilterSearchShards(searchRequest, shardIterators); + searchAsyncAction(task, searchRequest, shardIterators, timeProvider, connectionLookup, clusterState.version(), + Collections.unmodifiableMap(aliasFilter), concreteIndexBoosts, listener, preFilterSearchShards).start(); + } + + private boolean shouldPreFilterSearchShards(SearchRequest searchRequest, GroupShardsIterator shardIterators) { + SearchSourceBuilder source = searchRequest.source(); + return searchRequest.searchType() == QUERY_THEN_FETCH && // we can't do this for DFS it needs to fan out to all shards all the time + SearchService.canRewriteToMatchNone(source) && + searchRequest.getPreFilterShardSize() < shardIterators.size(); + } + + static GroupShardsIterator mergeShardsIterators(GroupShardsIterator localShardsIterator, + OriginalIndices localIndices, + List remoteShardIterators) { + List shards = new ArrayList<>(); + for (SearchShardIterator shardIterator : remoteShardIterators) { + shards.add(shardIterator); + } + for (ShardIterator shardIterator : localShardsIterator) { + shards.add(new SearchShardIterator(null, shardIterator.shardId(), shardIterator.getShardRoutings(), localIndices)); + } + return new GroupShardsIterator<>(shards); + } - searchAsyncAction(searchRequest, listener).start(); + @Override + protected final void doExecute(SearchRequest searchRequest, ActionListener listener) { + throw new UnsupportedOperationException("the task parameter is required"); } - private AbstractSearchAsyncAction searchAsyncAction(SearchRequest searchRequest, ActionListener listener) { - AbstractSearchAsyncAction searchAsyncAction; - switch(searchRequest.searchType()) { - case DFS_QUERY_THEN_FETCH: - searchAsyncAction = new SearchDfsQueryThenFetchAsyncAction(logger, searchTransportService, clusterService, - indexNameExpressionResolver, searchPhaseController, threadPool, searchRequest, listener); - break; - case QUERY_THEN_FETCH: - searchAsyncAction = new SearchQueryThenFetchAsyncAction(logger, searchTransportService, clusterService, - indexNameExpressionResolver, searchPhaseController, threadPool, searchRequest, listener); - break; - case DFS_QUERY_AND_FETCH: - searchAsyncAction = new SearchDfsQueryAndFetchAsyncAction(logger, searchTransportService, clusterService, - indexNameExpressionResolver, searchPhaseController, threadPool, searchRequest, listener); - break; - case QUERY_AND_FETCH: - 
searchAsyncAction = new SearchQueryAndFetchAsyncAction(logger, searchTransportService, clusterService, - indexNameExpressionResolver, searchPhaseController, threadPool, searchRequest, listener); - break; - default: - throw new IllegalStateException("Unknown search type: [" + searchRequest.searchType() + "]"); - } - return searchAsyncAction; + private AbstractSearchAsyncAction searchAsyncAction(SearchTask task, SearchRequest searchRequest, + GroupShardsIterator shardIterators, + SearchTimeProvider timeProvider, + BiFunction connectionLookup, + long clusterStateVersion, Map aliasFilter, + Map concreteIndexBoosts, + ActionListener listener, boolean preFilter) { + Executor executor = threadPool.executor(ThreadPool.Names.SEARCH); + if (preFilter) { + return new CanMatchPreFilterSearchPhase(logger, searchTransportService, connectionLookup, + aliasFilter, concreteIndexBoosts, executor, searchRequest, listener, shardIterators, + timeProvider, clusterStateVersion, task, (iter) -> { + AbstractSearchAsyncAction action = searchAsyncAction(task, searchRequest, iter, timeProvider, connectionLookup, + clusterStateVersion, aliasFilter, concreteIndexBoosts, listener, false); + return new SearchPhase(action.getName()) { + @Override + public void run() throws IOException { + action.start(); + } + }; + }); + } else { + AbstractSearchAsyncAction searchAsyncAction; + switch (searchRequest.searchType()) { + case DFS_QUERY_THEN_FETCH: + searchAsyncAction = new SearchDfsQueryThenFetchAsyncAction(logger, searchTransportService, connectionLookup, + aliasFilter, concreteIndexBoosts, searchPhaseController, executor, searchRequest, listener, shardIterators, + timeProvider, clusterStateVersion, task); + break; + case QUERY_AND_FETCH: + case QUERY_THEN_FETCH: + searchAsyncAction = new SearchQueryThenFetchAsyncAction(logger, searchTransportService, connectionLookup, + aliasFilter, concreteIndexBoosts, searchPhaseController, executor, searchRequest, listener, shardIterators, + timeProvider, clusterStateVersion, task); + break; + default: + throw new IllegalStateException("Unknown search type: [" + searchRequest.searchType() + "]"); + } + return searchAsyncAction; + } } + private static void failIfOverShardCountLimit(ClusterService clusterService, int shardCount) { + final long shardCountLimit = clusterService.getClusterSettings().get(SHARD_COUNT_LIMIT_SETTING); + if (shardCount > shardCountLimit) { + throw new IllegalArgumentException("Trying to query " + shardCount + " shards, which is over the limit of " + + shardCountLimit + ". This limit exists because querying many shards at the same time can make the " + + "job of the coordinating node very CPU and/or memory intensive. It is usually a better idea to " + + "have a smaller number of larger shards. 
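The defaulting logic for `maxConcurrentShardRequests` in the `executeSearch` hunk above caps the value at 256 and otherwise multiplies the node count by the default number of shards per index, so that a single fan-out request does not hit the cluster too hard. A small worked example, assuming the era's default of five shards per index (`IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING`); the node counts are illustrative.

```java
public class MaxConcurrentShardRequestsDefault {
    public static void main(String[] args) {
        // Assumed default of 5 shards per index for this code base generation.
        final int defaultShardsPerIndex = 5;
        System.out.println(Math.min(256, 3 * defaultShardsPerIndex));   // 3 nodes   -> 15
        System.out.println(Math.min(256, 100 * defaultShardsPerIndex)); // 100 nodes -> 256 (capped)
    }
}
```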
Update [" + SHARD_COUNT_LIMIT_SETTING.getKey() + + "] to a greater value if you really want to query that many shards at the same time."); + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/search/TransportSearchHelper.java b/core/src/main/java/org/elasticsearch/action/search/TransportSearchHelper.java index db42527b125fa..94632973984b5 100644 --- a/core/src/main/java/org/elasticsearch/action/search/TransportSearchHelper.java +++ b/core/src/main/java/org/elasticsearch/action/search/TransportSearchHelper.java @@ -21,11 +21,11 @@ import org.apache.lucene.store.ByteArrayDataInput; import org.apache.lucene.store.RAMOutputStream; -import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.common.util.concurrent.AtomicArray; import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.SearchShardTarget; import org.elasticsearch.search.internal.InternalScrollSearchRequest; -import org.elasticsearch.search.internal.ShardSearchTransportRequest; +import org.elasticsearch.transport.RemoteClusterAware; import java.io.IOException; import java.util.Base64; @@ -35,33 +35,23 @@ */ final class TransportSearchHelper { - static ShardSearchTransportRequest internalSearchRequest(ShardRouting shardRouting, int numberOfShards, SearchRequest request, - String[] filteringAliases, long nowInMillis) { - return new ShardSearchTransportRequest(request, shardRouting, numberOfShards, filteringAliases, nowInMillis); - } - static InternalScrollSearchRequest internalScrollSearchRequest(long id, SearchScrollRequest request) { return new InternalScrollSearchRequest(request, id); } - static String buildScrollId(SearchType searchType, AtomicArray searchPhaseResults) throws IOException { - if (searchType == SearchType.DFS_QUERY_THEN_FETCH || searchType == SearchType.QUERY_THEN_FETCH) { - return buildScrollId(ParsedScrollId.QUERY_THEN_FETCH_TYPE, searchPhaseResults); - } else if (searchType == SearchType.QUERY_AND_FETCH || searchType == SearchType.DFS_QUERY_AND_FETCH) { - return buildScrollId(ParsedScrollId.QUERY_AND_FETCH_TYPE, searchPhaseResults); - } else { - throw new IllegalStateException("search_type [" + searchType + "] not supported"); - } - } - - static String buildScrollId(String type, AtomicArray searchPhaseResults) throws IOException { + static String buildScrollId(AtomicArray searchPhaseResults) throws IOException { try (RAMOutputStream out = new RAMOutputStream()) { - out.writeString(type); + out.writeString(searchPhaseResults.length() == 1 ? 
ParsedScrollId.QUERY_AND_FETCH_TYPE : ParsedScrollId.QUERY_THEN_FETCH_TYPE); out.writeVInt(searchPhaseResults.asList().size()); - for (AtomicArray.Entry entry : searchPhaseResults.asList()) { - SearchPhaseResult searchPhaseResult = entry.value; - out.writeLong(searchPhaseResult.id()); - out.writeString(searchPhaseResult.shardTarget().nodeId()); + for (SearchPhaseResult searchPhaseResult : searchPhaseResults.asList()) { + out.writeLong(searchPhaseResult.getRequestId()); + SearchShardTarget searchShardTarget = searchPhaseResult.getSearchShardTarget(); + if (searchShardTarget.getClusterAlias() != null) { + out.writeString(RemoteClusterAware.buildRemoteIndexName(searchShardTarget.getClusterAlias(), + searchShardTarget.getNodeId())); + } else { + out.writeString(searchShardTarget.getNodeId()); + } } byte[] bytes = new byte[(int) out.getFilePointer()]; out.writeTo(bytes, 0); @@ -78,7 +68,15 @@ static ParsedScrollId parseScrollId(String scrollId) { for (int i = 0; i < context.length; ++i) { long id = in.readLong(); String target = in.readString(); - context[i] = new ScrollIdForNode(target, id); + String clusterAlias; + final int index = target.indexOf(RemoteClusterAware.REMOTE_CLUSTER_INDEX_SEPARATOR); + if (index == -1) { + clusterAlias = null; + } else { + clusterAlias = target.substring(0, index); + target = target.substring(index+1); + } + context[i] = new ScrollIdForNode(clusterAlias, target, id); } if (in.getPosition() != bytes.length) { throw new IllegalArgumentException("Not all bytes were read"); diff --git a/core/src/main/java/org/elasticsearch/action/search/TransportSearchScrollAction.java b/core/src/main/java/org/elasticsearch/action/search/TransportSearchScrollAction.java index 485baaa022b40..448dd9d636763 100644 --- a/core/src/main/java/org/elasticsearch/action/search/TransportSearchScrollAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/TransportSearchScrollAction.java @@ -26,8 +26,7 @@ import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.search.action.SearchTransportService; -import org.elasticsearch.search.controller.SearchPhaseController; +import org.elasticsearch.tasks.Task; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -46,9 +45,9 @@ public class TransportSearchScrollAction extends HandledTransportAction listener) { + protected final void doExecute(SearchScrollRequest request, ActionListener listener) { + throw new UnsupportedOperationException("the task parameter is required"); + } + @Override + protected void doExecute(Task task, SearchScrollRequest request, ActionListener listener) { try { ParsedScrollId scrollId = parseScrollId(request.scrollId()); - AbstractAsyncAction action; + Runnable action; switch (scrollId.getType()) { case QUERY_THEN_FETCH_TYPE: action = new SearchScrollQueryThenFetchAsyncAction(logger, clusterService, searchTransportService, - searchPhaseController, request, scrollId, listener); + searchPhaseController, request, (SearchTask)task, scrollId, listener); break; - case QUERY_AND_FETCH_TYPE: + case QUERY_AND_FETCH_TYPE: // TODO can we get rid of this? 
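The scroll-id parsing in the `TransportSearchHelper` hunk above stores each target either as a plain node id or, for remote clusters, as the cluster alias joined to the node id by `RemoteClusterAware.REMOTE_CLUSTER_INDEX_SEPARATOR`. A standalone sketch of that split, assuming the separator is the `:` character (an assumption; the constant's value is not shown in this hunk):

```java
import java.util.Arrays;

public class ScrollTargetSplitExample {
    // Assumed separator; RemoteClusterAware.REMOTE_CLUSTER_INDEX_SEPARATOR is not shown in this hunk.
    private static final char SEPARATOR = ':';

    static String[] splitClusterAliasAndNode(String target) {
        int index = target.indexOf(SEPARATOR);
        if (index == -1) {
            return new String[] { null, target };               // local node, no cluster alias
        }
        return new String[] { target.substring(0, index),       // remote cluster alias
                target.substring(index + 1) };                  // node id on that cluster
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(splitClusterAliasAndNode("node_1")));
        System.out.println(Arrays.toString(splitClusterAliasAndNode("cluster_two:node_7")));
    }
}
```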
action = new SearchScrollQueryAndFetchAsyncAction(logger, clusterService, searchTransportService, - searchPhaseController, request, scrollId, listener); + searchPhaseController, request, (SearchTask)task, scrollId, listener); break; default: throw new IllegalArgumentException("Scroll id type [" + scrollId.getType() + "] unrecognized"); } - action.start(); + action.run(); } catch (Exception e) { listener.onFailure(e); } diff --git a/core/src/main/java/org/elasticsearch/action/support/ActionFilter.java b/core/src/main/java/org/elasticsearch/action/support/ActionFilter.java index f536d9e0ceb69..3e12d0cc84223 100644 --- a/core/src/main/java/org/elasticsearch/action/support/ActionFilter.java +++ b/core/src/main/java/org/elasticsearch/action/support/ActionFilter.java @@ -40,29 +40,21 @@ public interface ActionFilter { * Enables filtering the execution of an action on the request side, either by sending a response through the * {@link ActionListener} or by continuing the execution through the given {@link ActionFilterChain chain} */ - , Response extends ActionResponse> void apply(Task task, String action, Request request, + void apply(Task task, String action, Request request, ActionListener listener, ActionFilterChain chain); - - /** - * Enables filtering the execution of an action on the response side, either by sending a response through the - * {@link ActionListener} or by continuing the execution through the given {@link ActionFilterChain chain} - */ - void apply(String action, Response response, ActionListener listener, - ActionFilterChain chain); - /** * A simple base class for injectable action filters that spares the implementation from handling the * filter chain. This base class should serve any action filter implementations that doesn't require * to apply async filtering logic. */ - public abstract static class Simple extends AbstractComponent implements ActionFilter { + abstract class Simple extends AbstractComponent implements ActionFilter { protected Simple(Settings settings) { super(settings); } @Override - public final , Response extends ActionResponse> void apply(Task task, String action, Request request, + public final void apply(Task task, String action, Request request, ActionListener listener, ActionFilterChain chain) { if (apply(action, request, listener)) { chain.proceed(task, action, request, listener); @@ -73,20 +65,6 @@ public final , Response extends ActionRes * Applies this filter and returns {@code true} if the execution chain should proceed, or {@code false} * if it should be aborted since the filter already handled the request and called the given listener. */ - protected abstract boolean apply(String action, ActionRequest request, ActionListener listener); - - @Override - public final void apply(String action, Response response, ActionListener listener, - ActionFilterChain chain) { - if (apply(action, response, listener)) { - chain.proceed(action, response, listener); - } - } - - /** - * Applies this filter and returns {@code true} if the execution chain should proceed, or {@code false} - * if it should be aborted since the filter already handled the response by calling the given listener. 
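With response-side filtering removed, an `ActionFilter` only participates on the request path: it either handles the request itself or lets the chain proceed, as the retained javadoc above describes. A plain-Java sketch of that control flow follows; all names are illustrative and it deliberately does not use the Elasticsearch types, only the "return true to proceed, false when already handled" contract.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

public class RequestFilterSketch {

    interface Filter {
        /** @return true to continue down the chain, false if the request was already handled */
        boolean apply(String action, Consumer<String> respond);
    }

    static void execute(List<Filter> filters, String action, Consumer<String> respond) {
        for (Filter filter : filters) {
            if (filter.apply(action, respond) == false) {
                return; // a filter short-circuited the request
            }
        }
        respond.accept("executed " + action); // no filter intervened: run the action itself
    }

    public static void main(String[] args) {
        Filter blockDeletes = (action, respond) -> {
            if (action.startsWith("indices:admin/delete")) { // illustrative action name
                respond.accept("rejected " + action);
                return false;
            }
            return true;
        };
        execute(Arrays.asList(blockDeletes), "indices:data/read/search", System.out::println);
        execute(Arrays.asList(blockDeletes), "indices:admin/delete", System.out::println);
    }
}
```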
- */ - protected abstract boolean apply(String action, ActionResponse response, ActionListener listener); + protected abstract boolean apply(String action, ActionRequest request, ActionListener listener); } } diff --git a/core/src/main/java/org/elasticsearch/action/support/ActionFilterChain.java b/core/src/main/java/org/elasticsearch/action/support/ActionFilterChain.java index 54f55e187a995..97e0c535bffdf 100644 --- a/core/src/main/java/org/elasticsearch/action/support/ActionFilterChain.java +++ b/core/src/main/java/org/elasticsearch/action/support/ActionFilterChain.java @@ -27,17 +27,11 @@ /** * A filter chain allowing to continue and process the transport action request */ -public interface ActionFilterChain, Response extends ActionResponse> { +public interface ActionFilterChain { /** * Continue processing the request. Should only be called if a response has not been sent through * the given {@link ActionListener listener} */ - void proceed(Task task, final String action, final Request request, final ActionListener listener); - - /** - * Continue processing the response. Should only be called if a response has not been sent through - * the given {@link ActionListener listener} - */ - void proceed(final String action, final Response response, final ActionListener listener); + void proceed(Task task, String action, Request request, ActionListener listener); } diff --git a/core/src/main/java/org/elasticsearch/action/support/ActiveShardsObserver.java b/core/src/main/java/org/elasticsearch/action/support/ActiveShardsObserver.java index 7217961d899c4..280ba6ac94dc6 100644 --- a/core/src/main/java/org/elasticsearch/action/support/ActiveShardsObserver.java +++ b/core/src/main/java/org/elasticsearch/action/support/ActiveShardsObserver.java @@ -30,6 +30,7 @@ import org.elasticsearch.threadpool.ThreadPool; import java.util.function.Consumer; +import java.util.function.Predicate; /** * This class provides primitives for waiting for a configured number of shards @@ -68,17 +69,12 @@ public void waitForActiveShards(final String indexName, return; } - final ClusterStateObserver observer = new ClusterStateObserver(clusterService, logger, threadPool.getThreadContext()); - if (activeShardCount.enoughShardsActive(observer.observedState(), indexName)) { - onResult.accept(true); + final ClusterState state = clusterService.state(); + final ClusterStateObserver observer = new ClusterStateObserver(state, clusterService, null, logger, threadPool.getThreadContext()); + if (activeShardCount.enoughShardsActive(state, indexName)) { + onResult.accept(true); } else { - final ClusterStateObserver.ChangePredicate shardsAllocatedPredicate = - new ClusterStateObserver.ValidationPredicate() { - @Override - protected boolean validate(final ClusterState newState) { - return activeShardCount.enoughShardsActive(newState, indexName); - } - }; + final Predicate shardsAllocatedPredicate = newState -> activeShardCount.enoughShardsActive(newState, indexName); final ClusterStateObserver.Listener observerListener = new ClusterStateObserver.Listener() { @Override diff --git a/core/src/main/java/org/elasticsearch/action/support/AdapterActionFuture.java b/core/src/main/java/org/elasticsearch/action/support/AdapterActionFuture.java index eab486f492934..b0f914e1a6298 100644 --- a/core/src/main/java/org/elasticsearch/action/support/AdapterActionFuture.java +++ b/core/src/main/java/org/elasticsearch/action/support/AdapterActionFuture.java @@ -41,6 +41,7 @@ public T actionGet() { try { return get(); } catch (InterruptedException e) { + 
Thread.currentThread().interrupt(); throw new IllegalStateException("Future got interrupted", e); } catch (ExecutionException e) { throw rethrowExecutionException(e); @@ -69,6 +70,7 @@ public T actionGet(long timeout, TimeUnit unit) { } catch (TimeoutException e) { throw new ElasticsearchTimeoutException(e); } catch (InterruptedException e) { + Thread.currentThread().interrupt(); throw new IllegalStateException("Future got interrupted", e); } catch (ExecutionException e) { throw rethrowExecutionException(e); @@ -103,4 +105,5 @@ public void onFailure(Exception e) { } protected abstract T convert(L listenerResponse); + } diff --git a/core/src/main/java/org/elasticsearch/action/support/AutoCreateIndex.java b/core/src/main/java/org/elasticsearch/action/support/AutoCreateIndex.java index 5c9152b475179..6a53b456166bf 100644 --- a/core/src/main/java/org/elasticsearch/action/support/AutoCreateIndex.java +++ b/core/src/main/java/org/elasticsearch/action/support/AutoCreateIndex.java @@ -24,11 +24,14 @@ import org.elasticsearch.common.Booleans; import org.elasticsearch.common.Strings; import org.elasticsearch.common.collect.Tuple; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.index.mapper.MapperService; import java.util.ArrayList; @@ -39,6 +42,7 @@ * a write operation is about to happen in a non existing index. */ public final class AutoCreateIndex { + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(AutoCreateIndex.class)); public static final Setting AUTO_CREATE_INDEX_SETTING = new Setting<>("action.auto_create_index", "true", AutoCreate::new, Property.NodeScope, Setting.Property.Dynamic); @@ -63,18 +67,20 @@ public boolean needToCheck() { /** * Should the index be auto created? 
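For context on the `AutoCreateIndex` hunk: the `action.auto_create_index` setting accepts either a boolean or a comma-separated pattern list, with the `+`/`-` prefix handling shown in the parsing code later in this hunk. A hedged illustration of typical values; the index patterns are made up.

```java
import org.elasticsearch.common.settings.Settings;

public class AutoCreateIndexSettingExamples {
    public static void main(String[] args) {
        // "true"  -> auto-create any missing index (the default shown above)
        // "false" -> never auto-create
        // "+logs-*,-*" -> only auto-create indices matching logs-*, forbid everything else
        Settings patterns = Settings.builder()
                .put("action.auto_create_index", "+logs-*,-*")
                .build();
        System.out.println(patterns.get("action.auto_create_index")); // -> +logs-*,-*
    }
}
```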
+ * @throws IndexNotFoundException if the the index doesn't exist and shouldn't be auto created */ public boolean shouldAutoCreate(String index, ClusterState state) { + if (resolver.hasIndexOrAlias(index, state)) { + return false; + } // One volatile read, so that all checks are done against the same instance: final AutoCreate autoCreate = this.autoCreate; if (autoCreate.autoCreateIndex == false) { - return false; + throw new IndexNotFoundException("no such index and [" + AUTO_CREATE_INDEX_SETTING.getKey() + "] is [false]", index); } if (dynamicMappingDisabled) { - return false; - } - if (resolver.hasIndexOrAlias(index, state)) { - return false; + throw new IndexNotFoundException("no such index and [" + MapperService.INDEX_MAPPER_DYNAMIC_SETTING.getKey() + "] is [false]", + index); } // matches not set, default value of "true" if (autoCreate.expressions.isEmpty()) { @@ -84,10 +90,15 @@ public boolean shouldAutoCreate(String index, ClusterState state) { String indexExpression = expression.v1(); boolean include = expression.v2(); if (Regex.simpleMatch(indexExpression, index)) { - return include; + if (include) { + return true; + } + throw new IndexNotFoundException("no such index and [" + AUTO_CREATE_INDEX_SETTING.getKey() + "] contains [-" + + indexExpression + "] which forbids automatic creation of the index", index); } } - return false; + throw new IndexNotFoundException("no such index and [" + AUTO_CREATE_INDEX_SETTING.getKey() + "] ([" + autoCreate + + "]) doesn't match", index); } AutoCreate getAutoCreate() { @@ -101,28 +112,37 @@ void setAutoCreate(AutoCreate autoCreate) { static class AutoCreate { private final boolean autoCreateIndex; private final List> expressions; + private final String string; private AutoCreate(String value) { boolean autoCreateIndex; List> expressions = new ArrayList<>(); try { autoCreateIndex = Booleans.parseBooleanExact(value); + if (Booleans.isStrictlyBoolean(value) == false) { + DEPRECATION_LOGGER.deprecated("Expected a boolean [true/false] or index name pattern for setting [{}] but got [{}]", + AUTO_CREATE_INDEX_SETTING.getKey(), value); + } } catch (IllegalArgumentException ex) { try { String[] patterns = Strings.commaDelimitedListToStringArray(value); for (String pattern : patterns) { - if (pattern == null || pattern.length() == 0) { - throw new IllegalArgumentException("Can't parse [" + value + "] for setting [action.auto_create_index] must be either [true, false, or a comma separated list of index patterns]"); + if (pattern == null || pattern.trim().length() == 0) { + throw new IllegalArgumentException("Can't parse [" + value + "] for setting [action.auto_create_index] must " + + "be either [true, false, or a comma separated list of index patterns]"); } + pattern = pattern.trim(); Tuple expression; if (pattern.startsWith("-")) { if (pattern.length() == 1) { - throw new IllegalArgumentException("Can't parse [" + value + "] for setting [action.auto_create_index] must contain an index name after [-]"); + throw new IllegalArgumentException("Can't parse [" + value + "] for setting [action.auto_create_index] " + + "must contain an index name after [-]"); } expression = new Tuple<>(pattern.substring(1), false); } else if(pattern.startsWith("+")) { if (pattern.length() == 1) { - throw new IllegalArgumentException("Can't parse [" + value + "] for setting [action.auto_create_index] must contain an index name after [+]"); + throw new IllegalArgumentException("Can't parse [" + value + "] for setting [action.auto_create_index] " + + "must contain an index name 
after [+]"); } expression = new Tuple<>(pattern.substring(1), true); } else { @@ -138,6 +158,7 @@ private AutoCreate(String value) { } this.expressions = expressions; this.autoCreateIndex = autoCreateIndex; + this.string = value; } boolean isAutoCreateIndex() { @@ -147,5 +168,10 @@ boolean isAutoCreateIndex() { List> getExpressions() { return expressions; } + + @Override + public String toString() { + return string; + } } } diff --git a/core/src/main/java/org/elasticsearch/action/support/ContextPreservingActionListener.java b/core/src/main/java/org/elasticsearch/action/support/ContextPreservingActionListener.java new file mode 100644 index 0000000000000..72f1e7c1d6643 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/support/ContextPreservingActionListener.java @@ -0,0 +1,61 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.support; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.common.util.concurrent.ThreadContext; + +import java.util.function.Supplier; + +/** + * Restores the given {@link org.elasticsearch.common.util.concurrent.ThreadContext.StoredContext} + * once the listener is invoked + */ +public final class ContextPreservingActionListener implements ActionListener { + + private final ActionListener delegate; + private final Supplier context; + + public ContextPreservingActionListener(Supplier contextSupplier, ActionListener delegate) { + this.delegate = delegate; + this.context = contextSupplier; + } + + @Override + public void onResponse(R r) { + try (ThreadContext.StoredContext ignore = context.get()) { + delegate.onResponse(r); + } + } + + @Override + public void onFailure(Exception e) { + try (ThreadContext.StoredContext ignore = context.get()) { + delegate.onFailure(e); + } + } + + /** + * Wraps the provided action listener in a {@link ContextPreservingActionListener} that will + * also copy the response headers when the {@link ThreadContext.StoredContext} is closed + */ + public static ContextPreservingActionListener wrapPreservingContext(ActionListener listener, ThreadContext threadContext) { + return new ContextPreservingActionListener<>(threadContext.newRestorableContext(true), listener); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/support/DefaultShardOperationFailedException.java b/core/src/main/java/org/elasticsearch/action/support/DefaultShardOperationFailedException.java index 0fe3be1ad634b..656a0eb90d9d3 100644 --- a/core/src/main/java/org/elasticsearch/action/support/DefaultShardOperationFailedException.java +++ b/core/src/main/java/org/elasticsearch/action/support/DefaultShardOperationFailedException.java @@ -128,7 +128,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws if (reason != null) { 
builder.field("reason"); builder.startObject(); - ElasticsearchException.toXContent(builder, params, reason); + ElasticsearchException.generateThrowableXContent(builder, params, reason); builder.endObject(); } return builder; diff --git a/core/src/main/java/org/elasticsearch/action/support/GroupedActionListener.java b/core/src/main/java/org/elasticsearch/action/support/GroupedActionListener.java new file mode 100644 index 0000000000000..ed9b7c8d15d60 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/support/GroupedActionListener.java @@ -0,0 +1,81 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.support; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.common.util.concurrent.CountDown; + +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicReference; + +/** + * An action listener that delegates it's results to another listener once + * it has received one or more failures or N results. This allows synchronous + * tasks to be forked off in a loop with the same listener and respond to a + * higher level listener once all tasks responded. 
+ */ +public final class GroupedActionListener implements ActionListener { + private final CountDown countDown; + private final AtomicInteger pos = new AtomicInteger(); + private final AtomicArray results; + private final ActionListener> delegate; + private final Collection defaults; + private final AtomicReference failure = new AtomicReference<>(); + + /** + * Creates a new listener + * @param delegate the delegate listener + * @param groupSize the group size + */ + public GroupedActionListener(ActionListener> delegate, int groupSize, + Collection defaults) { + results = new AtomicArray<>(groupSize); + countDown = new CountDown(groupSize); + this.delegate = delegate; + this.defaults = defaults; + } + + @Override + public void onResponse(T element) { + results.setOnce(pos.incrementAndGet() - 1, element); + if (countDown.countDown()) { + if (failure.get() != null) { + delegate.onFailure(failure.get()); + } else { + List collect = this.results.asList(); + collect.addAll(defaults); + delegate.onResponse(Collections.unmodifiableList(collect)); + } + } + } + + @Override + public void onFailure(Exception e) { + if (failure.compareAndSet(null, e) == false) { + failure.get().addSuppressed(e); + } + if (countDown.countDown()) { + delegate.onFailure(failure.get()); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/support/HandledTransportAction.java b/core/src/main/java/org/elasticsearch/action/support/HandledTransportAction.java index 0a53b63b662af..68b699cb110ba 100644 --- a/core/src/main/java/org/elasticsearch/action/support/HandledTransportAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/HandledTransportAction.java @@ -35,7 +35,7 @@ /** * A TransportAction that self registers a handler into the transport service */ -public abstract class HandledTransportAction, Response extends ActionResponse> +public abstract class HandledTransportAction extends TransportAction { protected HandledTransportAction(Settings settings, String actionName, ThreadPool threadPool, TransportService transportService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, diff --git a/core/src/main/java/org/elasticsearch/action/support/IndicesOptions.java b/core/src/main/java/org/elasticsearch/action/support/IndicesOptions.java index 2bc49f7e9f869..a79044f2d6923 100644 --- a/core/src/main/java/org/elasticsearch/action/support/IndicesOptions.java +++ b/core/src/main/java/org/elasticsearch/action/support/IndicesOptions.java @@ -195,8 +195,8 @@ public static IndicesOptions fromParameters(Object wildcardsString, Object ignor //note that allowAliasesToMultipleIndices is not exposed, always true (only for internal use) return fromOptions( - lenientNodeBooleanValue(ignoreUnavailableString, defaultSettings.ignoreUnavailable()), - lenientNodeBooleanValue(allowNoIndicesString, defaultSettings.allowNoIndices()), + lenientNodeBooleanValue(ignoreUnavailableString, "ignore_unavailable", defaultSettings.ignoreUnavailable()), + lenientNodeBooleanValue(allowNoIndicesString, "allow_no_indices", defaultSettings.allowNoIndices()), expandWildcardsOpen, expandWildcardsClosed, defaultSettings.allowAliasesToMultipleIndices(), diff --git a/core/src/main/java/org/elasticsearch/action/support/ToXContentToBytes.java b/core/src/main/java/org/elasticsearch/action/support/ToXContentToBytes.java index 445967252a237..741b197dbc836 100644 --- a/core/src/main/java/org/elasticsearch/action/support/ToXContentToBytes.java +++ 
b/core/src/main/java/org/elasticsearch/action/support/ToXContentToBytes.java @@ -69,10 +69,16 @@ public final BytesReference buildAsBytes(XContentType contentType) { @Override public final String toString() { + return toString(EMPTY_PARAMS); + } + + public final String toString(Params params) { try { XContentBuilder builder = XContentFactory.jsonBuilder(); - builder.prettyPrint(); - toXContent(builder, EMPTY_PARAMS); + if (params.paramAsBoolean("pretty", true)) { + builder.prettyPrint(); + } + toXContent(builder, params); return builder.string(); } catch (Exception e) { // So we have a stack trace logged somewhere diff --git a/core/src/main/java/org/elasticsearch/action/support/TransportAction.java b/core/src/main/java/org/elasticsearch/action/support/TransportAction.java index 7d1a091d6b3c8..ea606a15d7cc0 100644 --- a/core/src/main/java/org/elasticsearch/action/support/TransportAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/TransportAction.java @@ -26,7 +26,6 @@ import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.ActionResponse; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.tasks.Task; @@ -41,12 +40,11 @@ /** * */ -public abstract class TransportAction, Response extends ActionResponse> extends AbstractComponent { +public abstract class TransportAction extends AbstractComponent { protected final ThreadPool threadPool; protected final String actionName; private final ActionFilter[] filters; - protected final ParseFieldMatcher parseFieldMatcher; protected final IndexNameExpressionResolver indexNameExpressionResolver; protected final TaskManager taskManager; @@ -56,7 +54,6 @@ protected TransportAction(Settings settings, String actionName, ThreadPool threa this.threadPool = threadPool; this.actionName = actionName; this.filters = actionFilters.filters(); - this.parseFieldMatcher = new ParseFieldMatcher(settings); this.indexNameExpressionResolver = indexNameExpressionResolver; this.taskManager = taskManager; } @@ -141,17 +138,8 @@ public final void execute(Task task, Request request, ActionListener l listener = new TaskResultStoringActionListener<>(taskManager, task, listener); } - if (filters.length == 0) { - try { - doExecute(task, request, listener); - } catch(Exception e) { - logger.trace("Error during transport action execution.", e); - listener.onFailure(e); - } - } else { - RequestFilterChain requestFilterChain = new RequestFilterChain<>(this, logger); - requestFilterChain.proceed(task, actionName, request, listener); - } + RequestFilterChain requestFilterChain = new RequestFilterChain<>(this, logger); + requestFilterChain.proceed(task, actionName, request, listener); } protected void doExecute(Task task, Request request, ActionListener listener) { @@ -160,7 +148,7 @@ protected void doExecute(Task task, Request request, ActionListener li protected abstract void doExecute(Request request, ActionListener listener); - private static class RequestFilterChain, Response extends ActionResponse> + private static class RequestFilterChain implements ActionFilterChain { private final TransportAction action; @@ -179,8 +167,7 @@ public void proceed(Task task, String actionName, Request request, ActionListene if (i < this.action.filters.length) { this.action.filters[i].apply(task, actionName, request, listener, this); } 
else if (i == this.action.filters.length) { - this.action.doExecute(task, request, new FilteredActionListener<>(actionName, listener, - new ResponseFilterChain<>(this.action.filters, logger))); + this.action.doExecute(task, request, listener); } else { listener.onFailure(new IllegalStateException("proceed was called too many times")); } @@ -190,69 +177,6 @@ public void proceed(Task task, String actionName, Request request, ActionListene } } - @Override - public void proceed(String action, Response response, ActionListener listener) { - assert false : "request filter chain should never be called on the response side"; - } - } - - private static class ResponseFilterChain, Response extends ActionResponse> - implements ActionFilterChain { - - private final ActionFilter[] filters; - private final AtomicInteger index; - private final Logger logger; - - private ResponseFilterChain(ActionFilter[] filters, Logger logger) { - this.filters = filters; - this.index = new AtomicInteger(filters.length); - this.logger = logger; - } - - @Override - public void proceed(Task task, String action, Request request, ActionListener listener) { - assert false : "response filter chain should never be called on the request side"; - } - - @Override - public void proceed(String action, Response response, ActionListener listener) { - int i = index.decrementAndGet(); - try { - if (i >= 0) { - filters[i].apply(action, response, listener, this); - } else if (i == -1) { - listener.onResponse(response); - } else { - listener.onFailure(new IllegalStateException("proceed was called too many times")); - } - } catch (Exception e) { - logger.trace("Error during transport action execution.", e); - listener.onFailure(e); - } - } - } - - private static class FilteredActionListener implements ActionListener { - - private final String actionName; - private final ActionListener listener; - private final ResponseFilterChain chain; - - private FilteredActionListener(String actionName, ActionListener listener, ResponseFilterChain chain) { - this.actionName = actionName; - this.listener = listener; - this.chain = chain; - } - - @Override - public void onResponse(Response response) { - chain.proceed(actionName, response, listener); - } - - @Override - public void onFailure(Exception e) { - listener.onFailure(e); - } } /** diff --git a/core/src/main/java/org/elasticsearch/action/support/WriteRequest.java b/core/src/main/java/org/elasticsearch/action/support/WriteRequest.java index 6379a4fb259c3..50edcd39bd16e 100644 --- a/core/src/main/java/org/elasticsearch/action/support/WriteRequest.java +++ b/core/src/main/java/org/elasticsearch/action/support/WriteRequest.java @@ -65,36 +65,43 @@ enum RefreshPolicy implements Writeable { /** * Don't refresh after this request. The default. */ - NONE, + NONE("false"), /** * Force a refresh as part of this request. This refresh policy does not scale for high indexing or search throughput but is useful * to present a consistent view to for indices with very low traffic. And it is wonderful for tests! */ - IMMEDIATE, + IMMEDIATE("true"), /** * Leave this request open until a refresh has made the contents of this request visible to search. This refresh policy is * compatible with high indexing and search throughput but it causes the request to wait to reply until a refresh occurs. 
*/ - WAIT_UNTIL; + WAIT_UNTIL("wait_for"); + + private final String value; + + RefreshPolicy(String value) { + this.value = value; + } + + public String getValue() { + return value; + } /** * Parse the string representation of a refresh policy, usually from a request parameter. */ - public static RefreshPolicy parse(String string) { - switch (string) { - case "false": - return NONE; - /* - * Empty string is IMMEDIATE because that makes "POST /test/test/1?refresh" perform a refresh which reads well and is what folks - * are used to. - */ - case "": - case "true": + public static RefreshPolicy parse(String value) { + for (RefreshPolicy policy : values()) { + if (policy.getValue().equals(value)) { + return policy; + } + } + if ("".equals(value)) { + // Empty string is IMMEDIATE because that makes "POST /test/test/1?refresh" perform + // a refresh which reads well and is what folks are used to. return IMMEDIATE; - case "wait_for": - return WAIT_UNTIL; } - throw new IllegalArgumentException("Unknown value for refresh: [" + string + "]."); + throw new IllegalArgumentException("Unknown value for refresh: [" + value + "]."); } public static RefreshPolicy readFrom(StreamInput in) throws IOException { diff --git a/core/src/main/java/org/elasticsearch/action/support/broadcast/BroadcastRequest.java b/core/src/main/java/org/elasticsearch/action/support/broadcast/BroadcastRequest.java index 508581050a6eb..1a4a3b39ef066 100644 --- a/core/src/main/java/org/elasticsearch/action/support/broadcast/BroadcastRequest.java +++ b/core/src/main/java/org/elasticsearch/action/support/broadcast/BroadcastRequest.java @@ -31,7 +31,7 @@ /** * */ -public class BroadcastRequest> extends ActionRequest implements IndicesRequest.Replaceable { +public class BroadcastRequest> extends ActionRequest implements IndicesRequest.Replaceable { protected String[] indices; private IndicesOptions indicesOptions = IndicesOptions.strictExpandOpenAndForbidClosed(); diff --git a/core/src/main/java/org/elasticsearch/action/support/broadcast/TransportBroadcastAction.java b/core/src/main/java/org/elasticsearch/action/support/broadcast/TransportBroadcastAction.java index 87ef385a243c8..f8b0c087ea7e2 100644 --- a/core/src/main/java/org/elasticsearch/action/support/broadcast/TransportBroadcastAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/broadcast/TransportBroadcastAction.java @@ -44,6 +44,7 @@ import org.elasticsearch.transport.TransportResponseHandler; import org.elasticsearch.transport.TransportService; +import java.io.IOException; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicReferenceArray; import java.util.function.Supplier; @@ -86,9 +87,9 @@ protected final void doExecute(Request request, ActionListener listene protected abstract ShardResponse newShardResponse(); - protected abstract ShardResponse shardOperation(ShardRequest request); + protected abstract ShardResponse shardOperation(ShardRequest request) throws IOException; - protected ShardResponse shardOperation(ShardRequest request, Task task) { + protected ShardResponse shardOperation(ShardRequest request, Task task) throws IOException { return shardOperation(request); } @@ -96,7 +97,7 @@ protected ShardResponse shardOperation(ShardRequest request, Task task) { * Determines the shards this operation will be executed on. The operation is executed once per shard iterator, typically * on the first shard in it. If the operation fails, it will be retried on the next shard in the iterator. 
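The `WriteRequest.RefreshPolicy` change above replaces the switch-based `parse` with enum constants that carry their request-parameter value and a loop over `values()`. The following standalone class mirrors the shape of the patched enum so it can be compiled and tried in isolation; it is a sketch of the same logic, not the Elasticsearch class itself.

```java
// Sketch of the patched RefreshPolicy: each constant knows its wire/request value,
// and parse() walks the constants instead of hard-coding a switch.
public enum RefreshPolicySketch {
    NONE("false"),
    IMMEDIATE("true"),
    WAIT_UNTIL("wait_for");

    private final String value;

    RefreshPolicySketch(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }

    public static RefreshPolicySketch parse(String value) {
        for (RefreshPolicySketch policy : values()) {
            if (policy.getValue().equals(value)) {
                return policy;
            }
        }
        if ("".equals(value)) {
            // "?refresh" with no value still means an immediate refresh
            return IMMEDIATE;
        }
        throw new IllegalArgumentException("Unknown value for refresh: [" + value + "].");
    }

    public static void main(String[] args) {
        System.out.println(parse("wait_for")); // WAIT_UNTIL
        System.out.println(parse(""));         // IMMEDIATE
    }
}
```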
*/ - protected abstract GroupShardsIterator shards(ClusterState clusterState, Request request, String[] concreteIndices); + protected abstract GroupShardsIterator shards(ClusterState clusterState, Request request, String[] concreteIndices); protected abstract ClusterBlockException checkGlobalBlock(ClusterState state, Request request); @@ -109,7 +110,7 @@ protected class AsyncBroadcastAction { private final ActionListener listener; private final ClusterState clusterState; private final DiscoveryNodes nodes; - private final GroupShardsIterator shardsIts; + private final GroupShardsIterator shardsIts; private final int expectedOps; private final AtomicInteger counterOps = new AtomicInteger(); private final AtomicReferenceArray shardsResponses; @@ -177,7 +178,6 @@ protected void performOperation(final ShardIterator shardIt, final ShardRouting // no node connected, act as failure onOperation(shard, shardIt, shardIndex, new NoShardAvailableActionException(shardIt.shardId())); } else { - taskManager.registerChildTask(task, node.getId()); transportService.sendRequest(node, transportShardAction, shardRequest, new TransportResponseHandler() { @Override public ShardResponse newInstance() { diff --git a/core/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java b/core/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java index 98c962b3eec69..b0c539be39e4f 100644 --- a/core/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java @@ -270,7 +270,7 @@ protected AsyncAction(Task task, Request request, ActionListener liste ShardsIterator shardIt = shards(clusterState, request, concreteIndices); nodeIds = new HashMap<>(); - for (ShardRouting shard : shardIt.asUnordered()) { + for (ShardRouting shard : shardIt) { // send a request to the shard only if it is assigned to a node that is in the local node's cluster state // a scenario in which a shard can be assigned but to a node that is not in the local node's cluster state // is when the shard is assigned to the master node, the local node has detected the master as failed @@ -318,7 +318,6 @@ private void sendNodeRequest(final DiscoveryNode node, List shards NodeRequest nodeRequest = new NodeRequest(node.getId(), request, shards); if (task != null) { nodeRequest.setParentTask(clusterService.localNode().getId(), task.getId()); - taskManager.registerChildTask(task, node.getId()); } transportService.sendRequest(node, transportNodeBroadcastAction, nodeRequest, new TransportResponseHandler() { @Override @@ -439,7 +438,6 @@ private void onShardOperation(final NodeRequest request, final Object[] shardRes } catch (Exception e) { BroadcastShardOperationFailedException failure = new BroadcastShardOperationFailedException(shardRouting.shardId(), "operation " + actionName + " failed", e); - failure.setIndex(shardRouting.getIndexName()); failure.setShard(shardRouting.shardId()); shardResults[shardIndex] = failure; if (TransportActions.isShardNotAvailableException(e)) { @@ -505,11 +503,7 @@ public IndicesOptions indicesOptions() { public void readFrom(StreamInput in) throws IOException { super.readFrom(in); indicesLevelRequest = readRequestFrom(in); - int size = in.readVInt(); - shards = new ArrayList<>(size); - for (int i = 0; i < size; i++) { - shards.add(new ShardRouting(in)); - } + shards = in.readList(ShardRouting::new); nodeId = 
in.readString(); } @@ -517,11 +511,7 @@ public void readFrom(StreamInput in) throws IOException { public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); indicesLevelRequest.writeTo(out); - int size = shards.size(); - out.writeVInt(size); - for (int i = 0; i < size; i++) { - shards.get(i).writeTo(out); - } + out.writeList(shards); out.writeString(nodeId); } } @@ -532,10 +522,10 @@ class NodeResponse extends TransportResponse { protected List exceptions; protected List results; - public NodeResponse() { + NodeResponse() { } - public NodeResponse(String nodeId, + NodeResponse(String nodeId, int totalShards, List results, List exceptions) { @@ -566,18 +556,9 @@ public void readFrom(StreamInput in) throws IOException { super.readFrom(in); nodeId = in.readString(); totalShards = in.readVInt(); - int resultsSize = in.readVInt(); - results = new ArrayList<>(resultsSize); - for (; resultsSize > 0; resultsSize--) { - final ShardOperationResult result = in.readBoolean() ? readShardResult(in) : null; - results.add(result); - } + results = in.readList((stream) -> stream.readBoolean() ? readShardResult(stream) : null); if (in.readBoolean()) { - int failureShards = in.readVInt(); - exceptions = new ArrayList<>(failureShards); - for (int i = 0; i < failureShards; i++) { - exceptions.add(new BroadcastShardOperationFailedException(in)); - } + exceptions = in.readList(BroadcastShardOperationFailedException::new); } else { exceptions = null; } @@ -594,11 +575,7 @@ public void writeTo(StreamOutput out) throws IOException { } out.writeBoolean(exceptions != null); if (exceptions != null) { - int failureShards = exceptions.size(); - out.writeVInt(failureShards); - for (int i = 0; i < failureShards; i++) { - exceptions.get(i).writeTo(out); - } + out.writeList(exceptions); } } } diff --git a/core/src/main/java/org/elasticsearch/action/support/master/AcknowledgedRequest.java b/core/src/main/java/org/elasticsearch/action/support/master/AcknowledgedRequest.java index e3f32543bf2c3..54c1ee6675390 100644 --- a/core/src/main/java/org/elasticsearch/action/support/master/AcknowledgedRequest.java +++ b/core/src/main/java/org/elasticsearch/action/support/master/AcknowledgedRequest.java @@ -18,6 +18,7 @@ */ package org.elasticsearch.action.support.master; +import org.elasticsearch.Version; import org.elasticsearch.cluster.ack.AckedRequest; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -73,19 +74,45 @@ public final TimeValue timeout() { /** * Reads the timeout value */ + @Deprecated protected void readTimeout(StreamInput in) throws IOException { - timeout = new TimeValue(in); + // in older ES versions, we would explicitly call this method in subclasses + // now we properly serialize the timeout value as part of the readFrom method + if (in.getVersion().before(Version.V_5_6_0)) { + timeout = new TimeValue(in); + } } /** * writes the timeout value */ + @Deprecated protected void writeTimeout(StreamOutput out) throws IOException { - timeout.writeTo(out); + // in older ES versions, we would explicitly call this method in subclasses + // now we properly serialize the timeout value as part of the writeTo method + if (out.getVersion().before(Version.V_5_6_0)) { + timeout.writeTo(out); + } } @Override public TimeValue ackTimeout() { return timeout; } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + if (in.getVersion().onOrAfter(Version.V_5_6_0)) { + timeout = new TimeValue(in); + } + } + + 
@Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + if (out.getVersion().onOrAfter(Version.V_5_6_0)) { + timeout.writeTo(out); + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/support/master/MasterNodeRequest.java b/core/src/main/java/org/elasticsearch/action/support/master/MasterNodeRequest.java index efbcadf445ff4..6f2ce6c4ef96d 100644 --- a/core/src/main/java/org/elasticsearch/action/support/master/MasterNodeRequest.java +++ b/core/src/main/java/org/elasticsearch/action/support/master/MasterNodeRequest.java @@ -29,7 +29,7 @@ /** * A based request for master based operation. */ -public abstract class MasterNodeRequest> extends ActionRequest { +public abstract class MasterNodeRequest> extends ActionRequest { public static final TimeValue DEFAULT_MASTER_NODE_TIMEOUT = TimeValue.timeValueSeconds(30); diff --git a/core/src/main/java/org/elasticsearch/action/support/master/TransportMasterNodeAction.java b/core/src/main/java/org/elasticsearch/action/support/master/TransportMasterNodeAction.java index a664c325a4b1d..f2bc4da423dea 100644 --- a/core/src/main/java/org/elasticsearch/action/support/master/TransportMasterNodeAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/master/TransportMasterNodeAction.java @@ -45,6 +45,7 @@ import org.elasticsearch.transport.TransportException; import org.elasticsearch.transport.TransportService; +import java.util.function.Predicate; import java.util.function.Supplier; /** @@ -110,14 +111,6 @@ class AsyncSingleAction { private volatile ClusterStateObserver observer; private final Task task; - private final ClusterStateObserver.ChangePredicate retryableOrNoBlockPredicate = new ClusterStateObserver.ValidationPredicate() { - @Override - protected boolean validate(ClusterState newState) { - ClusterBlockException blockException = checkBlock(request, newState); - return (blockException == null || !blockException.retryable()); - } - }; - AsyncSingleAction(Task task, Request request, ActionListener listener) { this.task = task; this.request = request; @@ -128,12 +121,13 @@ protected boolean validate(ClusterState newState) { } public void start() { - this.observer = new ClusterStateObserver(clusterService, request.masterNodeTimeout(), logger, threadPool.getThreadContext()); - doStart(); + ClusterState state = clusterService.state(); + this.observer = new ClusterStateObserver(state, clusterService, request.masterNodeTimeout(), logger, threadPool.getThreadContext()); + doStart(state); } - protected void doStart() { - final ClusterState clusterState = observer.observedState(); + protected void doStart(ClusterState clusterState) { + final Predicate masterChangePredicate = MasterNodeChangePredicate.build(clusterState); final DiscoveryNodes nodes = clusterState.nodes(); if (nodes.isLocalNodeElectedMaster() || localExecute(request)) { // check for block, if blocked, retry, else, execute locally @@ -143,7 +137,10 @@ protected void doStart() { listener.onFailure(blockException); } else { logger.trace("can't execute due to a cluster block, retrying", blockException); - retry(blockException, retryableOrNoBlockPredicate); + retry(blockException, newState -> { + ClusterBlockException newException = checkBlock(request, newState); + return (newException == null || !newException.retryable()); + }); } } else { ActionListener delegate = new ActionListener() { @@ -157,26 +154,24 @@ public void onFailure(Exception t) { if (t instanceof Discovery.FailedToCommitClusterStateException || (t instanceof 
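The `AcknowledgedRequest` hunk above moves the acknowledge timeout into `readFrom`/`writeTo` for nodes on or after `Version.V_5_6_0`, while the now-deprecated `readTimeout`/`writeTimeout` hooks keep writing it for older peers. The sketch below shows that version-gated serialization pattern with plain `DataInput`/`DataOutput` streams and an int in place of the real `Version` and `StreamInput`/`StreamOutput` types; it is an illustration of the idea, not the patched class.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch: a field is serialized in the "new" place only when the peer is recent
// enough, and the deprecated hooks cover the old wire format for older peers.
public class VersionGatedFieldSketch {
    static final int CUTOVER_VERSION = 50600; // stands in for Version.V_5_6_0

    long timeoutMillis = 30_000;

    void writeTo(DataOutputStream out, int targetVersion) throws IOException {
        // superclass fields would be written first in the real request
        if (targetVersion >= CUTOVER_VERSION) {
            out.writeLong(timeoutMillis); // new wire format: timeout written here
        }
    }

    @Deprecated
    void writeTimeout(DataOutputStream out, int targetVersion) throws IOException {
        if (targetVersion < CUTOVER_VERSION) {
            out.writeLong(timeoutMillis); // old wire format: subclasses still call this explicitly
        }
    }

    void readFrom(DataInputStream in, int sourceVersion) throws IOException {
        if (sourceVersion >= CUTOVER_VERSION) {
            timeoutMillis = in.readLong();
        }
    }

    @Deprecated
    void readTimeout(DataInputStream in, int sourceVersion) throws IOException {
        if (sourceVersion < CUTOVER_VERSION) {
            timeoutMillis = in.readLong();
        }
    }

    public static void main(String[] args) throws IOException {
        VersionGatedFieldSketch sender = new VersionGatedFieldSketch();
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        sender.writeTo(new DataOutputStream(bytes), 50600); // new-format round trip
        VersionGatedFieldSketch receiver = new VersionGatedFieldSketch();
        receiver.readFrom(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())), 50600);
        System.out.println(receiver.timeoutMillis); // 30000
    }
}
```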
NotMasterException)) { logger.debug((org.apache.logging.log4j.util.Supplier) () -> new ParameterizedMessage("master could not publish cluster state or stepped down before publishing action [{}], scheduling a retry", actionName), t); - retry(t, MasterNodeChangePredicate.INSTANCE); + retry(t, masterChangePredicate); } else { listener.onFailure(t); } } }; - taskManager.registerChildTask(task, nodes.getLocalNodeId()); threadPool.executor(executor).execute(new ActionRunnable(delegate) { @Override protected void doRun() throws Exception { - masterOperation(task, request, clusterService.state(), delegate); + masterOperation(task, request, clusterState, delegate); } }); } } else { if (nodes.getMasterNode() == null) { logger.debug("no known master node, scheduling a retry"); - retry(null, MasterNodeChangePredicate.INSTANCE); + retry(null, masterChangePredicate); } else { - taskManager.registerChildTask(task, nodes.getMasterNode().getId()); transportService.sendRequest(nodes.getMasterNode(), actionName, request, new ActionListenerResponseHandler(listener, TransportMasterNodeAction.this::newResponse) { @Override public void handleException(final TransportException exp) { @@ -185,7 +180,7 @@ public void handleException(final TransportException exp) { // we want to retry here a bit to see if a new master is elected logger.debug("connection exception while trying to forward request with action name [{}] to master node [{}], scheduling a retry. Error: [{}]", actionName, nodes.getMasterNode(), exp.getDetailedMessage()); - retry(cause, MasterNodeChangePredicate.INSTANCE); + retry(cause, masterChangePredicate); } else { listener.onFailure(exp); } @@ -195,12 +190,12 @@ public void handleException(final TransportException exp) { } } - private void retry(final Throwable failure, final ClusterStateObserver.ChangePredicate changePredicate) { + private void retry(final Throwable failure, final Predicate statePredicate) { observer.waitForNextChange( new ClusterStateObserver.Listener() { @Override public void onNewClusterState(ClusterState state) { - doStart(); + doStart(state); } @Override @@ -213,7 +208,7 @@ public void onTimeout(TimeValue timeout) { logger.debug((org.apache.logging.log4j.util.Supplier) () -> new ParameterizedMessage("timed out while retrying [{}] after failure (timeout [{}])", actionName, timeout), failure); listener.onFailure(new MasterNotDiscoveredException(failure)); } - }, changePredicate + }, statePredicate ); } } diff --git a/core/src/main/java/org/elasticsearch/action/support/master/info/TransportClusterInfoAction.java b/core/src/main/java/org/elasticsearch/action/support/master/info/TransportClusterInfoAction.java index 66b9fce5d711d..8894704c2ea02 100644 --- a/core/src/main/java/org/elasticsearch/action/support/master/info/TransportClusterInfoAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/master/info/TransportClusterInfoAction.java @@ -54,5 +54,5 @@ protected final void masterOperation(final Request request, final ClusterState s doMasterOperation(request, concreteIndices, state, listener); } - protected abstract void doMasterOperation(Request request, String[] concreteIndices, ClusterState state, final ActionListener listener); + protected abstract void doMasterOperation(Request request, String[] concreteIndices, ClusterState state, ActionListener listener); } diff --git a/core/src/main/java/org/elasticsearch/action/support/nodes/BaseNodesRequest.java b/core/src/main/java/org/elasticsearch/action/support/nodes/BaseNodesRequest.java index 
663537f25da59..b6760c1cfcba9 100644 --- a/core/src/main/java/org/elasticsearch/action/support/nodes/BaseNodesRequest.java +++ b/core/src/main/java/org/elasticsearch/action/support/nodes/BaseNodesRequest.java @@ -32,7 +32,7 @@ /** * */ -public abstract class BaseNodesRequest> extends ActionRequest { +public abstract class BaseNodesRequest> extends ActionRequest { /** * the list of nodesIds that will be used to resolve this request and {@link #concreteNodes} diff --git a/core/src/main/java/org/elasticsearch/action/support/nodes/TransportNodesAction.java b/core/src/main/java/org/elasticsearch/action/support/nodes/TransportNodesAction.java index 3582f5f5aaf8f..4583e47bc1db7 100644 --- a/core/src/main/java/org/elasticsearch/action/support/nodes/TransportNodesAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/nodes/TransportNodesAction.java @@ -109,14 +109,10 @@ protected NodesResponse newResponse(NodesRequest request, AtomicReferenceArray n for (int i = 0; i < nodesResponses.length(); ++i) { Object response = nodesResponses.get(i); - if (nodeResponseClass.isInstance(response)) { - responses.add(nodeResponseClass.cast(response)); - } else if (response instanceof FailedNodeException) { + if (response instanceof FailedNodeException) { failures.add((FailedNodeException)response); } else { - logger.warn("ignoring unexpected response [{}] of type [{}], expected [{}] or [{}]", - response, response != null ? response.getClass().getName() : null, - nodeResponseClass.getSimpleName(), FailedNodeException.class.getSimpleName()); + responses.add(nodeResponseClass.cast(response)); } } @@ -144,8 +140,6 @@ protected NodeResponse nodeOperation(NodeRequest request, Task task) { return nodeOperation(request); } - protected abstract boolean accumulateExceptions(); - /** * resolve node ids to concrete nodes of the incoming request **/ @@ -198,7 +192,6 @@ void start() { TransportRequest nodeRequest = newNodeRequest(nodeId, request); if (task != null) { nodeRequest.setParentTask(clusterService.localNode().getId(), task.getId()); - taskManager.registerChildTask(task, node.getId()); } transportService.sendRequest(node, transportNodeAction, nodeRequest, builder.build(), @@ -243,9 +236,7 @@ private void onFailure(int idx, String nodeId, Throwable t) { (org.apache.logging.log4j.util.Supplier) () -> new ParameterizedMessage("failed to execute on node [{}]", nodeId), t); } - if (accumulateExceptions()) { - responses.set(idx, new FailedNodeException(nodeId, "Failed node [" + nodeId + "]", t)); - } + responses.set(idx, new FailedNodeException(nodeId, "Failed node [" + nodeId + "]", t)); if (counter.incrementAndGet() == responses.length()) { finishHim(); } diff --git a/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java b/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java index d541ef6a35c11..daa180bb49de8 100644 --- a/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java +++ b/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java @@ -32,9 +32,9 @@ import org.elasticsearch.cluster.routing.IndexRoutingTable; import org.elasticsearch.cluster.routing.IndexShardRoutingTable; import org.elasticsearch.cluster.routing.ShardRouting; +import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.util.set.Sets; -import org.elasticsearch.index.engine.VersionConflictEngineException; import 
org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.transport.TransportResponse; @@ -112,21 +112,22 @@ public void execute() throws Exception { pendingActions.incrementAndGet(); primaryResult = primary.perform(request); final ReplicaRequest replicaRequest = primaryResult.replicaRequest(); - assert replicaRequest.primaryTerm() > 0 : "replicaRequest doesn't have a primary term"; - if (logger.isTraceEnabled()) { - logger.trace("[{}] op [{}] completed on primary for request [{}]", primaryId, opType, request); - } + if (replicaRequest != null) { + if (logger.isTraceEnabled()) { + logger.trace("[{}] op [{}] completed on primary for request [{}]", primaryId, opType, request); + } - // we have to get a new state after successfully indexing into the primary in order to honour recovery semantics. - // we have to make sure that every operation indexed into the primary after recovery start will also be replicated - // to the recovery target. If we use an old cluster state, we may miss a relocation that has started since then. - ClusterState clusterState = clusterStateSupplier.get(); - final List shards = getShards(primaryId, clusterState); - Set inSyncAllocationIds = getInSyncAllocationIds(primaryId, clusterState); + // we have to get a new state after successfully indexing into the primary in order to honour recovery semantics. + // we have to make sure that every operation indexed into the primary after recovery start will also be replicated + // to the recovery target. If we use an old cluster state, we may miss a relocation that has started since then. + ClusterState clusterState = clusterStateSupplier.get(); + final List shards = getShards(primaryId, clusterState); + Set inSyncAllocationIds = getInSyncAllocationIds(primaryId, clusterState); - markUnavailableShardsAsStale(replicaRequest, inSyncAllocationIds, shards); + markUnavailableShardsAsStale(replicaRequest, inSyncAllocationIds, shards); - performOnReplicas(replicaRequest, shards); + performOnReplicas(replicaRequest, shards); + } successfulShards.incrementAndGet(); decPendingAndFinishIfNeeded(); @@ -144,7 +145,7 @@ private void markUnavailableShardsAsStale(ReplicaRequest replicaRequest, Set decPendingAndFinishIfNeeded() @@ -198,7 +199,7 @@ public void onFailure(Exception replicaException) { shard, replicaRequest), replicaException); - if (ignoreReplicaException(replicaException)) { + if (TransportActions.isShardNotAvailableException(replicaException)) { decPendingAndFinishIfNeeded(); } else { RestStatus restStatus = ExceptionsHelper.status(replicaException); @@ -208,7 +209,7 @@ public void onFailure(Exception replicaException) { logger.warn( (org.apache.logging.log4j.util.Supplier) () -> new ParameterizedMessage("[{}] {}", shard.shardId(), message), replicaException); - replicasProxy.failShard(shard, replicaRequest.primaryTerm(), message, replicaException, + replicasProxy.failShard(shard, message, replicaException, ReplicationOperation.this::decPendingAndFinishIfNeeded, ReplicationOperation.this::onPrimaryDemoted, throwable -> decPendingAndFinishIfNeeded() @@ -310,30 +311,6 @@ private void finishAsFailed(Exception exception) { } } - - /** - * Should an exception be ignored when the operation is performed on the replica. 
- */ - public static boolean ignoreReplicaException(Exception e) { - if (TransportActions.isShardNotAvailableException(e)) { - return true; - } - // on version conflict or document missing, it means - // that a new change has crept into the replica, and it's fine - if (isConflictException(e)) { - return true; - } - return false; - } - - public static boolean isConflictException(Throwable t) { - final Throwable cause = ExceptionsHelper.unwrapCause(t); - // on version conflict or document missing, it means - // that a new change has crept into the replica, and it's fine - return cause instanceof VersionConflictEngineException; - } - - public interface Primary< Request extends ReplicationRequest, ReplicaRequest extends ReplicationRequest, @@ -359,7 +336,6 @@ public interface Primary< * @return the request to send to the repicas */ PrimaryResultT perform(Request request) throws Exception; - } public interface Replicas> { @@ -371,12 +347,12 @@ public interface Replicas listener); + void performOn(ShardRouting replica, ReplicaRequest replicaRequest, + ActionListener listener); /** * Fail the specified shard, removing it from the current set of active shards * @param replica shard to fail - * @param primaryTerm the primary term of the primary shard when requesting the failure * @param message a (short) description of the reason * @param exception the original exception which caused the ReplicationOperation to request the shard to be failed * @param onSuccess a callback to call when the shard has been successfully removed from the active set. @@ -384,7 +360,7 @@ public interface Replicas onPrimaryDemoted, Consumer onIgnoredFailure); /** @@ -392,13 +368,12 @@ void failShard(ShardRouting replica, long primaryTerm, String message, Exception * * @param shardId shard id * @param allocationId allocation id to remove from the set of in-sync allocation ids - * @param primaryTerm the primary term of the primary shard when requesting the failure * @param onSuccess a callback to call when the allocation id has been successfully removed from the in-sync set. * @param onPrimaryDemoted a callback to call when the request failed because the current primary was already demoted * by the master. * @param onIgnoredFailure a callback to call when the request failed, but the failure can be safely ignored. 
*/ - void markShardCopyAsStale(ShardId shardId, String allocationId, long primaryTerm, Runnable onSuccess, + void markShardCopyAsStale(ShardId shardId, String allocationId, Runnable onSuccess, Consumer onPrimaryDemoted, Consumer onIgnoredFailure); } @@ -419,7 +394,11 @@ public RetryOnPrimaryException(StreamInput in) throws IOException { public interface PrimaryResult> { - R replicaRequest(); + /** + * @return null if no operation needs to be sent to a replica + * (for example when the operation failed on the primary due to a parsing exception) + */ + @Nullable R replicaRequest(); void setShardInfo(ReplicationResponse.ShardInfo shardInfo); } diff --git a/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationRequest.java b/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationRequest.java index 596d2581a7953..d50909b62c564 100644 --- a/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationRequest.java +++ b/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationRequest.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.support.replication; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequest; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.IndicesRequest; @@ -43,7 +44,7 @@ * Requests that are run on a particular replica, first on the primary and then on the replicas like {@link IndexRequest} or * {@link TransportShardRefreshAction}. */ -public abstract class ReplicationRequest> extends ActionRequest +public abstract class ReplicationRequest> extends ActionRequest implements IndicesRequest { public static final TimeValue DEFAULT_TIMEOUT = new TimeValue(1, TimeUnit.MINUTES); @@ -55,7 +56,7 @@ public abstract class ReplicationRequest 0) { - builder.startArray(Fields.FAILURES); + builder.startArray(FAILURES); for (Failure failure : failures) { failure.toXContent(builder, params); } @@ -160,9 +171,51 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws return builder; } + public static ShardInfo fromXContent(XContentParser parser) throws IOException { + XContentParser.Token token = parser.currentToken(); + ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation); + + int total = 0, successful = 0; + List failuresList = null; + String currentFieldName = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (TOTAL.equals(currentFieldName)) { + total = parser.intValue(); + } else if (SUCCESSFUL.equals(currentFieldName)) { + successful = parser.intValue(); + } else { + parser.skipChildren(); + } + } else if (token == XContentParser.Token.START_ARRAY) { + if (FAILURES.equals(currentFieldName)) { + failuresList = new ArrayList<>(); + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + failuresList.add(Failure.fromXContent(parser)); + } + } else { + parser.skipChildren(); // skip potential inner arrays for forward compatibility + } + } else if (token == XContentParser.Token.START_OBJECT) { + parser.skipChildren(); // skip potential inner arrays for forward compatibility + } + } + Failure[] failures = EMPTY; + if (failuresList != null) { + failures = failuresList.toArray(new Failure[failuresList.size()]); + } + return new ShardInfo(total, successful, failures); + } + @Override public String toString() { 
- return Strings.toString(this); + return "ShardInfo{" + + "total=" + total + + ", successful=" + successful + + ", failures=" + Arrays.toString(failures) + + '}'; } public static ShardInfo readShardInfo(StreamInput in) throws IOException { @@ -171,7 +224,14 @@ public static ShardInfo readShardInfo(StreamInput in) throws IOException { return shardInfo; } - public static class Failure implements ShardOperationFailedException, ToXContent { + public static class Failure implements ShardOperationFailedException, ToXContentObject { + + private static final String _INDEX = "_index"; + private static final String _SHARD = "_shard"; + private static final String _NODE = "_node"; + private static final String REASON = "reason"; + private static final String STATUS = "status"; + private static final String PRIMARY = "primary"; private ShardId shardId; private String nodeId; @@ -268,39 +328,57 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(); - builder.field(Fields._INDEX, shardId.getIndexName()); - builder.field(Fields._SHARD, shardId.id()); - builder.field(Fields._NODE, nodeId); - builder.field(Fields.REASON); + builder.field(_INDEX, shardId.getIndexName()); + builder.field(_SHARD, shardId.id()); + builder.field(_NODE, nodeId); + builder.field(REASON); builder.startObject(); - ElasticsearchException.toXContent(builder, params, cause); + ElasticsearchException.generateThrowableXContent(builder, params, cause); builder.endObject(); - builder.field(Fields.STATUS, status); - builder.field(Fields.PRIMARY, primary); + builder.field(STATUS, status); + builder.field(PRIMARY, primary); builder.endObject(); return builder; } - private static class Fields { - - private static final String _INDEX = "_index"; - private static final String _SHARD = "_shard"; - private static final String _NODE = "_node"; - private static final String REASON = "reason"; - private static final String STATUS = "status"; - private static final String PRIMARY = "primary"; - + public static Failure fromXContent(XContentParser parser) throws IOException { + XContentParser.Token token = parser.currentToken(); + ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation); + + String shardIndex = null, nodeId = null; + int shardId = -1; + boolean primary = false; + RestStatus status = null; + ElasticsearchException reason = null; + + String currentFieldName = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (_INDEX.equals(currentFieldName)) { + shardIndex = parser.text(); + } else if (_SHARD.equals(currentFieldName)) { + shardId = parser.intValue(); + } else if (_NODE.equals(currentFieldName)) { + nodeId = parser.text(); + } else if (STATUS.equals(currentFieldName)) { + status = RestStatus.valueOf(parser.text()); + } else if (PRIMARY.equals(currentFieldName)) { + primary = parser.booleanValue(); + } + } else if (token == XContentParser.Token.START_OBJECT) { + if (REASON.equals(currentFieldName)) { + reason = ElasticsearchException.fromXContent(parser); + } else { + parser.skipChildren(); // skip potential inner objects for forward compatibility + } + } else if (token == XContentParser.Token.START_ARRAY) { + parser.skipChildren(); // skip potential inner arrays for forward compatibility + } + } + return new 
Failure(new ShardId(shardIndex, IndexMetaData.INDEX_UUID_NA_VALUE, shardId), nodeId, reason, status, primary); } } - - private static class Fields { - - private static final String _SHARDS = "_shards"; - private static final String TOTAL = "total"; - private static final String SUCCESSFUL = "successful"; - private static final String FAILED = "failed"; - private static final String FAILURES = "failures"; - - } } } diff --git a/core/src/main/java/org/elasticsearch/action/support/replication/TransportBroadcastReplicationAction.java b/core/src/main/java/org/elasticsearch/action/support/replication/TransportBroadcastReplicationAction.java index e33d10eaa25ad..8193cf77cebef 100644 --- a/core/src/main/java/org/elasticsearch/action/support/replication/TransportBroadcastReplicationAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/replication/TransportBroadcastReplicationAction.java @@ -119,7 +119,6 @@ public void onFailure(Exception e) { protected void shardExecute(Task task, Request request, ShardId shardId, ActionListener shardActionListener) { ShardRequest shardRequest = newShardRequest(request, shardId); shardRequest.setParentTask(clusterService.localNode().getId(), task.getId()); - taskManager.registerChildTask(task, clusterService.localNode().getId()); replicatedBroadcastShardAction.execute(shardRequest, shardActionListener); } diff --git a/core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java b/core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java index 9587b4e6b2cc5..65471e366fea2 100644 --- a/core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java @@ -21,6 +21,7 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.ActionListenerResponseHandler; import org.elasticsearch.action.UnavailableShardsException; @@ -42,6 +43,7 @@ import org.elasticsearch.cluster.routing.IndexShardRoutingTable; import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.lease.Releasable; @@ -49,13 +51,13 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.concurrent.AbstractRunnable; -import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.index.IndexService; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.IndexShardState; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.shard.ShardNotFoundException; +import org.elasticsearch.indices.IndexClosedException; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.node.NodeClosedException; import org.elasticsearch.tasks.Task; @@ -93,17 +95,16 @@ public abstract class TransportReplicationAction< Response extends ReplicationResponse > extends TransportAction { - protected final TransportService transportService; + private final TransportService transportService; 
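The new `ShardInfo.fromXContent` and `Failure.fromXContent` parsers above read only the fields they know and call `skipChildren()` on any unexpected object or array, so newer producers do not break older readers. The example below expresses that forward-compatibility idea with Jackson's streaming parser purely so it is self-contained and runnable; the patch itself uses Elasticsearch's `XContentParser`, which follows the same token model.

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import java.io.IOException;

// Sketch: pull out known scalar fields, skip whole unknown sub-structures.
public class LenientParsingSketch {
    public static void main(String[] args) throws IOException {
        String json = "{\"total\":5,\"successful\":4,\"future_object\":{\"x\":1},\"future_array\":[1,2]}";
        try (JsonParser parser = new JsonFactory().createParser(json)) {
            int total = 0;
            int successful = 0;
            String currentField = null;
            parser.nextToken(); // move to START_OBJECT
            JsonToken token;
            while ((token = parser.nextToken()) != JsonToken.END_OBJECT) {
                if (token == JsonToken.FIELD_NAME) {
                    currentField = parser.getCurrentName();
                } else if (token.isNumeric()) {
                    if ("total".equals(currentField)) {
                        total = parser.getIntValue();
                    } else if ("successful".equals(currentField)) {
                        successful = parser.getIntValue();
                    }
                } else if (token == JsonToken.START_OBJECT || token == JsonToken.START_ARRAY) {
                    // unknown nested structure from a newer producer: skip it whole
                    parser.skipChildren();
                }
            }
            System.out.println("total=" + total + ", successful=" + successful);
        }
    }
}
```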
protected final ClusterService clusterService; - protected final IndicesService indicesService; + private final IndicesService indicesService; private final ShardStateAction shardStateAction; private final TransportRequestOptions transportOptions; private final String executor; // package private for testing - final String transportReplicaAction; - final String transportPrimaryAction; - private final ReplicasProxy replicasProxy; + private final String transportReplicaAction; + private final String transportPrimaryAction; protected TransportReplicationAction(Settings settings, String actionName, TransportService transportService, ClusterService clusterService, IndicesService indicesService, @@ -130,8 +131,6 @@ protected TransportReplicationAction(Settings settings, String actionName, Trans new ReplicaOperationTransportHandler()); this.transportOptions = transportOptions(); - - this.replicasProxy = new ReplicasProxy(); } @Override @@ -167,27 +166,34 @@ protected void resolveRequest(MetaData metaData, IndexMetaData indexMetaData, Re * Primary operation on node with primary copy. * * @param shardRequest the request to the primary shard + * @param primary the primary shard to perform the operation on */ - protected abstract PrimaryResult shardOperationOnPrimary(Request shardRequest) throws Exception; + protected abstract PrimaryResult shardOperationOnPrimary( + Request shardRequest, IndexShard primary) throws Exception; /** * Synchronous replica operation on nodes with replica copies. This is done under the lock form - * {@link #acquireReplicaOperationLock(ShardId, long, String, ActionListener)}. + * {@link IndexShard#acquireReplicaOperationLock(long, ActionListener, String)} + * + * @param shardRequest the request to the replica shard + * @param replica the replica shard to perform the operation on */ - protected abstract ReplicaResult shardOperationOnReplica(ReplicaRequest shardRequest); + protected abstract ReplicaResult shardOperationOnReplica(ReplicaRequest shardRequest, IndexShard replica) throws Exception; /** - * Cluster level block to check before request execution + * Cluster level block to check before request execution. Returning null means that no blocks need to be checked. */ + @Nullable protected ClusterBlockLevel globalBlockLevel() { - return ClusterBlockLevel.WRITE; + return null; } /** - * Index level block to check before request execution + * Index level block to check before request execution. Returning null means that no blocks need to be checked. 
*/ + @Nullable protected ClusterBlockLevel indexBlockLevel() { - return ClusterBlockLevel.WRITE; + return null; } /** @@ -203,7 +209,7 @@ protected TransportRequestOptions transportOptions() { protected boolean retryPrimaryException(final Throwable e) { return e.getClass() == ReplicationOperation.RetryOnPrimaryException.class - || TransportActions.isShardNotAvailableException(e); + || TransportActions.isShardNotAvailableException(e); } class OperationTransportHandler implements TransportRequestHandler { @@ -247,28 +253,42 @@ public void messageReceived(final ConcreteShardRequest request, final T @Override public void messageReceived(ConcreteShardRequest request, TransportChannel channel, Task task) { - new AsyncPrimaryAction(request.request, request.targetAllocationID, channel, (ReplicationTask) task).run(); + // incoming primary term can be 0 if request is coming from a < 5.6 node (relocated primary) + // just use as speculative term the one from the current cluster state, it's validated against the actual primary term + // within acquirePrimaryShardReference + final long primaryTerm; + if (request.primaryTerm > 0L) { + primaryTerm = request.primaryTerm; + } else { + ShardId shardId = request.request.shardId(); + primaryTerm = clusterService.state().metaData().getIndexSafe(shardId.getIndex()).primaryTerm(shardId.id()); + } + new AsyncPrimaryAction(request.request, request.targetAllocationID, primaryTerm, channel, (ReplicationTask) task).run(); } } class AsyncPrimaryAction extends AbstractRunnable implements ActionListener { private final Request request; - /** targetAllocationID of the shard this request is meant for */ + // targetAllocationID of the shard this request is meant for private final String targetAllocationID; + // primary term of the shard this request is meant for + private final long primaryTerm; private final TransportChannel channel; private final ReplicationTask replicationTask; - AsyncPrimaryAction(Request request, String targetAllocationID, TransportChannel channel, ReplicationTask replicationTask) { + AsyncPrimaryAction(Request request, String targetAllocationID, long primaryTerm, TransportChannel channel, + ReplicationTask replicationTask) { this.request = request; this.targetAllocationID = targetAllocationID; + this.primaryTerm = primaryTerm; this.channel = channel; this.replicationTask = replicationTask; } @Override protected void doRun() throws Exception { - acquirePrimaryShardReference(request.shardId(), targetAllocationID, this); + acquirePrimaryShardReference(request.shardId(), targetAllocationID, primaryTerm, this); } @Override @@ -283,8 +303,21 @@ public void onResponse(PrimaryShardReference primaryShardReference) { final ShardRouting primary = primaryShardReference.routingEntry(); assert primary.relocating() : "indexShard is marked as relocated but routing isn't" + primary; DiscoveryNode relocatingNode = clusterService.state().nodes().get(primary.relocatingNodeId()); + if (relocatingNode != null && relocatingNode.getVersion().major > Version.CURRENT.major) { + // ES 6.x requires a primary context hand-off during primary relocation which is not implemented on ES 5.x, + // otherwise it might not be aware of a replica that finished recovery and became activated on the master before + // the new primary became in charge of replicating operations, as the cluster state with that in-sync information + // might not be applied yet on the primary relocation target before it would be in charge of replicating operations. 
+ // This would mean that the new primary could advance the global checkpoint too quickly, not taking into account + // the newly in-sync replica. + // ES 6.x detects that the primary is relocating from a 5.x node, and activates the primary mode of the global + // checkpoint tracker only after activation of the relocation target, which means, however, that requests cannot + // be handled as long as the relocation target shard has not been activated. + throw new ReplicationOperation.RetryOnPrimaryException(request.shardId(), + "waiting for 6.x primary to be activated"); + } transportService.sendRequest(relocatingNode, transportPrimaryAction, - new ConcreteShardRequest<>(request, primary.allocationId().getRelocationId()), + new ConcreteShardRequest<>(request, primary.allocationId().getRelocationId(), primaryTerm), transportOptions, new TransportChannelResponseHandler(logger, channel, "rerouting indexing to target primary " + primary, TransportReplicationAction.this::newResponseInstance) { @@ -304,19 +337,12 @@ public void handleException(TransportException exp) { } else { setPhase(replicationTask, "primary"); final IndexMetaData indexMetaData = clusterService.state().getMetaData().index(request.shardId().getIndex()); - final boolean executeOnReplicas = (indexMetaData == null) || shouldExecuteReplication(indexMetaData.getSettings()); + final boolean executeOnReplicas = (indexMetaData == null) || shouldExecuteReplication(indexMetaData); final ActionListener listener = createResponseListener(primaryShardReference); - createReplicatedOperation(request, new ActionListener() { - @Override - public void onResponse(PrimaryResult result) { - result.respond(listener); - } - - @Override - public void onFailure(Exception e) { - listener.onFailure(e); - } - }, primaryShardReference, executeOnReplicas).execute(); + createReplicatedOperation(request, + ActionListener.wrap(result -> result.respond(listener), listener::onFailure), + primaryShardReference, executeOnReplicas) + .execute(); } } catch (Exception e) { Releasables.closeWhileHandlingException(primaryShardReference); // release shard operation lock before responding to caller @@ -361,22 +387,37 @@ public void onFailure(Exception e) { }; } - protected ReplicationOperation createReplicatedOperation( - Request request, ActionListener listener, + protected ReplicationOperation> createReplicatedOperation( + Request request, ActionListener> listener, PrimaryShardReference primaryShardReference, boolean executeOnReplicas) { return new ReplicationOperation<>(request, primaryShardReference, listener, - executeOnReplicas, replicasProxy, clusterService::state, logger, actionName + executeOnReplicas, new ReplicasProxy(primaryTerm), clusterService::state, logger, actionName ); } } - protected class PrimaryResult implements ReplicationOperation.PrimaryResult { + protected static class PrimaryResult, + Response extends ReplicationResponse> + implements ReplicationOperation.PrimaryResult { final ReplicaRequest replicaRequest; - final Response finalResponse; + public final Response finalResponseIfSuccessful; + public final Exception finalFailure; - public PrimaryResult(ReplicaRequest replicaRequest, Response finalResponse) { + /** + * Result of executing a primary operation + * expects finalResponseIfSuccessful or finalFailure to be not-null + */ + public PrimaryResult(ReplicaRequest replicaRequest, Response finalResponseIfSuccessful, Exception finalFailure) { + assert finalFailure != null ^ finalResponseIfSuccessful != null + : "either a response or a failure has 
to be not null, " + + "found [" + finalFailure + "] failure and ["+ finalResponseIfSuccessful + "] response"; this.replicaRequest = replicaRequest; - this.finalResponse = finalResponse; + this.finalResponseIfSuccessful = finalResponseIfSuccessful; + this.finalFailure = finalFailure; + } + + public PrimaryResult(ReplicaRequest replicaRequest, Response replicationResponse) { + this(replicaRequest, replicationResponse, null); } @Override @@ -386,22 +427,37 @@ public ReplicaRequest replicaRequest() { @Override public void setShardInfo(ReplicationResponse.ShardInfo shardInfo) { - finalResponse.setShardInfo(shardInfo); + if (finalResponseIfSuccessful != null) { + finalResponseIfSuccessful.setShardInfo(shardInfo); + } } public void respond(ActionListener listener) { - listener.onResponse(finalResponse); + if (finalResponseIfSuccessful != null) { + listener.onResponse(finalResponseIfSuccessful); + } else { + listener.onFailure(finalFailure); + } } } - protected class ReplicaResult { - /** - * Public constructor so subclasses can call it. - */ - public ReplicaResult() {} + protected static class ReplicaResult { + final Exception finalFailure; + + public ReplicaResult(Exception finalFailure) { + this.finalFailure = finalFailure; + } + + public ReplicaResult() { + this(null); + } public void respond(ActionListener listener) { - listener.onResponse(TransportResponse.Empty.INSTANCE); + if (finalFailure == null) { + listener.onResponse(TransportResponse.Empty.INSTANCE); + } else { + listener.onFailure(finalFailure); + } } } @@ -415,7 +471,8 @@ public void messageReceived(final ConcreteShardRequest request, @Override public void messageReceived(ConcreteShardRequest requestWithAID, TransportChannel channel, Task task) throws Exception { - new AsyncReplicaAction(requestWithAID.request, requestWithAID.targetAllocationID, channel, (ReplicationTask) task).run(); + new AsyncReplicaAction(requestWithAID.request, requestWithAID.targetAllocationID, requestWithAID.getPrimaryTerm(), channel, + (ReplicationTask) task).run(); } } @@ -435,7 +492,9 @@ private final class AsyncReplicaAction extends AbstractRunnable implements Actio private final ReplicaRequest request; // allocation id of the replica this request is meant for private final String targetAllocationID; + private final long primaryTerm; private final TransportChannel channel; + private final IndexShard replica; /** * The task on the node with the replica shard. 
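The reworked `PrimaryResult` and `ReplicaResult` above hold either a successful response or a failure (never both), and `respond()` routes to the matching listener callback. The following small sketch captures that shape; the `Listener` interface is a stand-in for Elasticsearch's `ActionListener`, and the names are illustrative only.

```java
// Sketch: a result object that carries exactly one of {response, failure} and
// dispatches to the appropriate listener method when asked to respond.
public class EitherResultSketch<R> {
    public interface Listener<R> {
        void onResponse(R response);
        void onFailure(Exception e);
    }

    private final R responseIfSuccessful;
    private final Exception failure;

    public EitherResultSketch(R responseIfSuccessful, Exception failure) {
        assert (responseIfSuccessful != null) ^ (failure != null)
            : "exactly one of response and failure must be set";
        this.responseIfSuccessful = responseIfSuccessful;
        this.failure = failure;
    }

    public void respond(Listener<R> listener) {
        if (responseIfSuccessful != null) {
            listener.onResponse(responseIfSuccessful);
        } else {
            listener.onFailure(failure);
        }
    }

    public static void main(String[] args) {
        Listener<String> printer = new Listener<String>() {
            @Override public void onResponse(String response) { System.out.println("ok: " + response); }
            @Override public void onFailure(Exception e) { System.out.println("failed: " + e.getMessage()); }
        };
        new EitherResultSketch<String>("indexed", null).respond(printer);
        new EitherResultSketch<String>(null, new IllegalStateException("mapping failure")).respond(printer);
    }
}
```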
*/ @@ -444,17 +503,23 @@ private final class AsyncReplicaAction extends AbstractRunnable implements Actio // something we want to avoid at all costs private final ClusterStateObserver observer = new ClusterStateObserver(clusterService, null, logger, threadPool.getThreadContext()); - AsyncReplicaAction(ReplicaRequest request, String targetAllocationID, TransportChannel channel, ReplicationTask task) { + AsyncReplicaAction(ReplicaRequest request, String targetAllocationID, long primaryTerm, TransportChannel channel, + ReplicationTask task) { this.request = request; this.channel = channel; this.task = task; this.targetAllocationID = targetAllocationID; + assert primaryTerm > 0L : "primary term can't be zero"; + this.primaryTerm = primaryTerm; + final ShardId shardId = request.shardId(); + assert shardId != null : "request shardId must be set"; + this.replica = getIndexShard(shardId); } @Override public void onResponse(Releasable releasable) { try { - ReplicaResult replicaResult = shardOperationOnReplica(request); + ReplicaResult replicaResult = shardOperationOnReplica(request, replica); releasable.close(); // release shard operation lock before responding to caller replicaResult.respond(new ResponseListener()); } catch (Exception e) { @@ -473,18 +538,18 @@ public void onFailure(Exception e) { transportReplicaAction, request), e); - final ThreadContext.StoredContext context = threadPool.getThreadContext().newStoredContext(); + request.onRetry(); observer.waitForNextChange(new ClusterStateObserver.Listener() { @Override public void onNewClusterState(ClusterState state) { - context.close(); // Forking a thread on local node via transport service so that custom transport service have an // opportunity to execute custom logic before the replica operation begins String extraMessage = "action [" + transportReplicaAction + "], request[" + request + "]"; TransportChannelResponseHandler handler = - new TransportChannelResponseHandler<>(logger, channel, extraMessage, () -> TransportResponse.Empty.INSTANCE); + new TransportChannelResponseHandler<>(logger, channel, extraMessage, + () -> TransportResponse.Empty.INSTANCE); transportService.sendRequest(clusterService.localNode(), transportReplicaAction, - new ConcreteShardRequest<>(request, targetAllocationID), + new ConcreteShardRequest<>(request, targetAllocationID, primaryTerm), handler); } @@ -521,8 +586,12 @@ protected void responseWithFailure(Exception e) { @Override protected void doRun() throws Exception { setPhase(task, "replica"); - assert request.shardId() != null : "request shardId must be set"; - acquireReplicaOperationLock(request.shardId(), request.primaryTerm(), targetAllocationID, this); + final String actualAllocationId = this.replica.routingEntry().allocationId().getId(); + if (actualAllocationId.equals(targetAllocationID) == false) { + throw new ShardNotFoundException(this.replica.shardId(), "expected aID [{}] but found [{}]", targetAllocationID, + actualAllocationId); + } + replica.acquireReplicaOperationLock(primaryTerm, this, executor); } /** @@ -550,6 +619,11 @@ public void onFailure(Exception e) { } } + private IndexShard getIndexShard(ShardId shardId) { + IndexService indexService = indicesService.indexServiceSafe(shardId.getIndex()); + return indexService.getShard(shardId.id()); + } + /** * Responsible for routing and retrying failed operations on the primary. 
* The actual primary operation is done in {@link ReplicationOperation} on the @@ -582,7 +656,7 @@ public void onFailure(Exception e) { @Override protected void doRun() { setPhase(task, "routing"); - final ClusterState state = observer.observedState(); + final ClusterState state = observer.setAndGetObservedState(); if (handleBlockExceptions(state)) { return; } @@ -594,6 +668,9 @@ protected void doRun() { retry(new IndexNotFoundException(concreteIndex)); return; } + if (indexMetaData.getState() == IndexMetaData.State.CLOSE) { + throw new IndexClosedException(indexMetaData.getIndex()); + } // resolve all derived request fields, so we can route and apply it resolveRequest(state.metaData(), indexMetaData, request); @@ -605,21 +682,21 @@ protected void doRun() { return; } final DiscoveryNode node = state.nodes().get(primary.currentNodeId()); - taskManager.registerChildTask(task, node.getId()); if (primary.currentNodeId().equals(state.nodes().getLocalNodeId())) { - performLocalAction(state, primary, node); + performLocalAction(state, primary, node, indexMetaData); } else { performRemoteAction(state, primary, node); } } - private void performLocalAction(ClusterState state, ShardRouting primary, DiscoveryNode node) { + private void performLocalAction(ClusterState state, ShardRouting primary, DiscoveryNode node, IndexMetaData indexMetaData) { setPhase(task, "waiting_on_primary"); if (logger.isTraceEnabled()) { logger.trace("send action [{}] on primary [{}] for request [{}] with cluster state version [{}] to [{}] ", transportPrimaryAction, request.shardId(), request, state.version(), primary.currentNodeId()); } - performAction(node, transportPrimaryAction, true, new ConcreteShardRequest<>(request, primary.allocationId().getId())); + performAction(node, transportPrimaryAction, true, + new ConcreteShardRequest<>(request, primary.allocationId().getId(), indexMetaData.primaryTerm(primary.id()))); } private void performRemoteAction(ClusterState state, ShardRouting primary, DiscoveryNode node) { @@ -670,15 +747,21 @@ private ShardRouting primary(ClusterState state) { } private boolean handleBlockExceptions(ClusterState state) { - ClusterBlockException blockException = state.blocks().globalBlockedException(globalBlockLevel()); - if (blockException != null) { - handleBlockException(blockException); - return true; + ClusterBlockLevel globalBlockLevel = globalBlockLevel(); + if (globalBlockLevel != null) { + ClusterBlockException blockException = state.blocks().globalBlockedException(globalBlockLevel); + if (blockException != null) { + handleBlockException(blockException); + return true; + } } - blockException = state.blocks().indexBlockedException(indexBlockLevel(), concreteIndex(state)); - if (blockException != null) { - handleBlockException(blockException); - return true; + ClusterBlockLevel indexBlockLevel = indexBlockLevel(); + if (indexBlockLevel != null) { + ClusterBlockException blockException = state.blocks().indexBlockedException(indexBlockLevel, concreteIndex(state)); + if (blockException != null) { + handleBlockException(blockException); + return true; + } } return false; } @@ -745,11 +828,10 @@ void retry(Exception failure) { } setPhase(task, "waiting_for_retry"); request.onRetry(); - final ThreadContext.StoredContext context = threadPool.getThreadContext().newStoredContext(); + request.primaryTerm(0L); observer.waitForNextChange(new ClusterStateObserver.Listener() { @Override public void onNewClusterState(ClusterState state) { - context.close(); run(); } @@ -760,7 +842,6 @@ public void 
onClusterServiceClose() { @Override public void onTimeout(TimeValue timeout) { - context.close(); // Try one more time... run(); } @@ -816,10 +897,9 @@ void retryBecauseUnavailable(ShardId shardId, String message) { * tries to acquire reference to {@link IndexShard} to perform a primary operation. Released after performing primary operation locally * and replication of the operation to all replica shards is completed / failed (see {@link ReplicationOperation}). */ - protected void acquirePrimaryShardReference(ShardId shardId, String allocationId, - ActionListener onReferenceAcquired) { - IndexService indexService = indicesService.indexServiceSafe(shardId.getIndex()); - IndexShard indexShard = indexService.getShard(shardId.id()); + private void acquirePrimaryShardReference(ShardId shardId, String allocationId, long primaryTerm, + ActionListener onReferenceAcquired) { + IndexShard indexShard = getIndexShard(shardId); // we may end up here if the cluster state used to route the primary is so stale that the underlying // index shard was replaced with a replica. For example - in a two node cluster, if the primary fails // the replica will take over and a replica will be assigned to the first node. @@ -827,10 +907,16 @@ protected void acquirePrimaryShardReference(ShardId shardId, String allocationId throw new ReplicationOperation.RetryOnPrimaryException(indexShard.shardId(), "actual shard is not a primary " + indexShard.routingEntry()); } + final String actualAllocationId = indexShard.routingEntry().allocationId().getId(); if (actualAllocationId.equals(allocationId) == false) { throw new ShardNotFoundException(shardId, "expected aID [{}] but found [{}]", allocationId, actualAllocationId); } + final long actualTerm = indexShard.getPrimaryTerm(); + if (actualTerm != primaryTerm) { + throw new ShardNotFoundException(shardId, "expected aID [{}] with term [{}] but found [{}]", allocationId, + primaryTerm, actualTerm); + } ActionListener onAcquired = new ActionListener() { @Override @@ -847,29 +933,16 @@ public void onFailure(Exception e) { indexShard.acquirePrimaryOperationLock(onAcquired, executor); } - /** - * tries to acquire an operation on replicas. The lock is closed as soon as replication is completed on the node. - */ - protected void acquireReplicaOperationLock(ShardId shardId, long primaryTerm, final String allocationId, - ActionListener onLockAcquired) { - IndexService indexService = indicesService.indexServiceSafe(shardId.getIndex()); - IndexShard indexShard = indexService.getShard(shardId.id()); - final String actualAllocationId = indexShard.routingEntry().allocationId().getId(); - if (actualAllocationId.equals(allocationId) == false) { - throw new ShardNotFoundException(shardId, "expected aID [{}] but found [{}]", allocationId, actualAllocationId); - } - indexShard.acquireReplicaOperationLock(primaryTerm, onLockAcquired, executor); - } - /** * Indicated whether this operation should be replicated to shadow replicas or not. If this method returns true the replication phase * will be skipped. For example writes such as index and delete don't need to be replicated on shadow replicas but refresh and flush do. 
*/ - protected boolean shouldExecuteReplication(Settings settings) { - return IndexMetaData.isIndexUsingShadowReplicas(settings) == false; + protected boolean shouldExecuteReplication(IndexMetaData indexMetaData) { + return IndexMetaData.isIndexUsingShadowReplicas(indexMetaData.getSettings()) == false; } - class PrimaryShardReference implements ReplicationOperation.Primary, Releasable { + class PrimaryShardReference implements + ReplicationOperation.Primary>, Releasable { private final IndexShard indexShard; private final Releasable operationLock; @@ -899,9 +972,7 @@ public void failShard(String reason, Exception e) { @Override public PrimaryResult perform(Request request) throws Exception { - PrimaryResult result = shardOperationOnPrimary(request); - result.replicaRequest().primaryTerm(indexShard.getPrimaryTerm()); - return result; + return shardOperationOnPrimary(request, indexShard); } @Override @@ -912,8 +983,15 @@ public ShardRouting routingEntry() { final class ReplicasProxy implements ReplicationOperation.Replicas { + private final long primaryTerm; + + ReplicasProxy(long primaryTerm) { + this.primaryTerm = primaryTerm; + } + @Override - public void performOn(ShardRouting replica, ReplicaRequest request, ActionListener listener) { + public void performOn(ShardRouting replica, ReplicaRequest request, + ActionListener listener) { String nodeId = replica.currentNodeId(); final DiscoveryNode node = clusterService.state().nodes().get(nodeId); if (node == null) { @@ -921,19 +999,19 @@ public void performOn(ShardRouting replica, ReplicaRequest request, ActionListen return; } transportService.sendRequest(node, transportReplicaAction, - new ConcreteShardRequest<>(request, replica.allocationId().getId()), transportOptions, + new ConcreteShardRequest<>(request, replica.allocationId().getId(), primaryTerm), transportOptions, new ActionListenerResponseHandler<>(listener, () -> TransportResponse.Empty.INSTANCE)); } @Override - public void failShard(ShardRouting replica, long primaryTerm, String message, Exception exception, + public void failShard(ShardRouting replica, String message, Exception exception, Runnable onSuccess, Consumer onPrimaryDemoted, Consumer onIgnoredFailure) { shardStateAction.remoteShardFailed(replica.shardId(), replica.allocationId().getId(), primaryTerm, message, exception, createListener(onSuccess, onPrimaryDemoted, onIgnoredFailure)); } @Override - public void markShardCopyAsStale(ShardId shardId, String allocationId, long primaryTerm, Runnable onSuccess, + public void markShardCopyAsStale(ShardId shardId, String allocationId, Runnable onSuccess, Consumer onPrimaryDemoted, Consumer onIgnoredFailure) { shardStateAction.remoteShardFailed(shardId, allocationId, primaryTerm, "mark copy as stale", null, createListener(onSuccess, onPrimaryDemoted, onIgnoredFailure)); @@ -964,24 +1042,29 @@ public void onFailure(Exception shardFailedError) { } /** a wrapper class to encapsulate a request when being sent to a specific allocation id **/ - public static final class ConcreteShardRequest extends TransportRequest { + public static final class ConcreteShardRequest> extends TransportRequest { /** {@link AllocationId#getId()} of the shard this request is sent to **/ private String targetAllocationID; + private long primaryTerm; + private R request; ConcreteShardRequest(Supplier requestSupplier) { request = requestSupplier.get(); // null now, but will be populated by reading from the streams targetAllocationID = null; + primaryTerm = 0L; } - ConcreteShardRequest(R request, String 
targetAllocationID) { + ConcreteShardRequest(R request, String targetAllocationID, long primaryTerm) { Objects.requireNonNull(request); Objects.requireNonNull(targetAllocationID); this.request = request; this.targetAllocationID = targetAllocationID; + this.primaryTerm = primaryTerm; + this.request.primaryTerm(primaryTerm); // for bwc with ES < v5.6, that has the primary term on the inner ReplicationRequest } @Override @@ -1005,18 +1088,30 @@ public Task createTask(long id, String type, String action, TaskId parentTaskId) @Override public String getDescription() { - return "[" + request.getDescription() + "] for aID [" + targetAllocationID + "]"; + return "[" + request.getDescription() + "] for aID [" + targetAllocationID + "] and term [" + primaryTerm + "]"; } @Override public void readFrom(StreamInput in) throws IOException { targetAllocationID = in.readString(); + if (in.getVersion().onOrAfter(Version.V_5_6_0)) { + primaryTerm = in.readVLong(); + } request.readFrom(in); + if (in.getVersion().before(Version.V_5_6_0)) { + primaryTerm = request.primaryTerm(); // for bwc with ES < v5.6, that has the primary term on the inner ReplicationRequest + } } @Override public void writeTo(StreamOutput out) throws IOException { out.writeString(targetAllocationID); + if (out.getVersion().onOrAfter(Version.V_5_6_0)) { + out.writeVLong(primaryTerm); + } else { + // ensure that inner ReplicationRequest has primary term set + assert request.primaryTerm() == primaryTerm : "term on inner replication request not properly set"; + } request.writeTo(out); } @@ -1027,6 +1122,15 @@ public R getRequest() { public String getTargetAllocationID() { return targetAllocationID; } + + public long getPrimaryTerm() { + return primaryTerm; + } + + @Override + public String toString() { + return "request: " + request + ", target allocation id: " + targetAllocationID + ", primary term: " + primaryTerm; + } } /** diff --git a/core/src/main/java/org/elasticsearch/action/support/replication/TransportWriteAction.java b/core/src/main/java/org/elasticsearch/action/support/replication/TransportWriteAction.java index 39b49a4a409d9..0322b2d2d12f3 100644 --- a/core/src/main/java/org/elasticsearch/action/support/replication/TransportWriteAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/replication/TransportWriteAction.java @@ -21,17 +21,18 @@ import org.apache.logging.log4j.Logger; import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.bulk.BulkRequest; +import org.elasticsearch.action.bulk.BulkShardRequest; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.WriteRequest; import org.elasticsearch.action.support.WriteResponse; import org.elasticsearch.cluster.action.shard.ShardStateAction; +import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.index.IndexService; import org.elasticsearch.index.shard.IndexShard; -import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.translog.Translog; import org.elasticsearch.index.translog.Translog.Location; import org.elasticsearch.indices.IndicesService; @@ -46,92 +47,69 @@ /** * Base class for transport actions that modify data in some shard like index, delete, and shardBulk. + * Allows performing async actions (e.g. 
refresh) after performing write operations on primary and replica shards */ public abstract class TransportWriteAction< Request extends ReplicatedWriteRequest, + ReplicaRequest extends ReplicatedWriteRequest, Response extends ReplicationResponse & WriteResponse - > extends TransportReplicationAction { + > extends TransportReplicationAction { protected TransportWriteAction(Settings settings, String actionName, TransportService transportService, ClusterService clusterService, IndicesService indicesService, ThreadPool threadPool, ShardStateAction shardStateAction, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, Supplier request, - String executor) { + Supplier replicaRequest, String executor) { super(settings, actionName, transportService, clusterService, indicesService, threadPool, shardStateAction, actionFilters, - indexNameExpressionResolver, request, request, executor); + indexNameExpressionResolver, request, replicaRequest, executor); } /** - * Called on the primary with a reference to the {@linkplain IndexShard} to modify. - */ - protected abstract WriteResult onPrimaryShard(Request request, IndexShard indexShard) throws Exception; - - /** - * Called once per replica with a reference to the {@linkplain IndexShard} to modify. + * Called on the primary with a reference to the primary {@linkplain IndexShard} to modify. * - * @return the translog location of the {@linkplain IndexShard} after the write was completed or null if no write occurred + * @return the result of the operation on primary, including current translog location and operation response and failure + * async refresh is performed on the primary shard according to the Request refresh policy */ - protected abstract Translog.Location onReplicaShard(Request request, IndexShard indexShard); - @Override - protected final WritePrimaryResult shardOperationOnPrimary(Request request) throws Exception { - IndexShard indexShard = indexShard(request); - WriteResult result = onPrimaryShard(request, indexShard); - return new WritePrimaryResult(request, result.getResponse(), result.getLocation(), indexShard); - } - - @Override - protected final WriteReplicaResult shardOperationOnReplica(Request request) { - IndexShard indexShard = indexShard(request); - Translog.Location location = onReplicaShard(request, indexShard); - return new WriteReplicaResult(indexShard, request, location); - } + protected abstract WritePrimaryResult shardOperationOnPrimary( + Request request, IndexShard primary) throws Exception; /** - * Fetch the IndexShard for the request. Protected so it can be mocked in tests. - */ - protected IndexShard indexShard(Request request) { - final ShardId shardId = request.shardId(); - IndexService indexService = indicesService.indexServiceSafe(shardId.getIndex()); - return indexService.getShard(shardId.id()); - } - - /** - * Simple result from a write action. Write actions have static method to return these so they can integrate with bulk. + * Called once per replica with a reference to the replica {@linkplain IndexShard} to modify. 
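TransportWriteAction above now takes a separate ReplicaRequest type parameter and hands the already resolved shard to the primary and replica hooks, rather than having each hook look it up again through indicesService. The sketch below mirrors that shape with placeholder types (Shard instead of IndexShard, plain generics instead of the real bounded ones); it is illustrative only, not the actual class hierarchy.

```java
/** Placeholder for the shard a write operates on; the real hooks receive an IndexShard. */
final class Shard {
    final int id;
    Shard(int id) { this.id = id; }
}

/**
 * Sketch of the new shape: the request handled on the primary and the request that is
 * replicated may be different types (bulk uses a distinct per-shard request), and both
 * hooks get the resolved shard handed to them instead of resolving it themselves.
 */
abstract class WriteActionSketch<Request, ReplicaRequest, Response> {

    /** Called on the primary with the resolved shard; produces the client response and the request to replicate. */
    protected abstract PrimaryResult performOnPrimary(Request request, Shard primary) throws Exception;

    /** Called on each replica with the resolved shard. */
    protected abstract void performOnReplica(ReplicaRequest request, Shard replica) throws Exception;

    /** Carries both what goes back to the client and what gets sent to the replicas. */
    final class PrimaryResult {
        final Response response;
        final ReplicaRequest replicaRequest;

        PrimaryResult(Response response, ReplicaRequest replicaRequest) {
            this.response = response;
            this.replicaRequest = replicaRequest;
        }
    }
}
```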
+ * + * @return the result of the operation on replica, including current translog location and operation response and failure + * async refresh is performed on the replica shard according to the ReplicaRequest refresh policy */ - public static class WriteResult { - private final Response response; - private final Translog.Location location; - - public WriteResult(Response response, @Nullable Location location) { - this.response = response; - this.location = location; - } - - public Response getResponse() { - return response; - } - - public Translog.Location getLocation() { - return location; - } - } + @Override + protected abstract WriteReplicaResult shardOperationOnReplica( + ReplicaRequest request, IndexShard replica) throws Exception; /** * Result of taking the action on the primary. */ - class WritePrimaryResult extends PrimaryResult implements RespondingWriteResult { + protected static class WritePrimaryResult, + Response extends ReplicationResponse & WriteResponse> extends PrimaryResult + implements RespondingWriteResult { boolean finishedAsyncActions; + public final Location location; ActionListener listener = null; - public WritePrimaryResult(Request request, Response finalResponse, - @Nullable Translog.Location location, - IndexShard indexShard) { - super(request, finalResponse); - /* - * We call this before replication because this might wait for a refresh and that can take a while. This way we wait for the - * refresh in parallel on the primary and on the replica. - */ - new AsyncAfterWriteAction(indexShard, request, location, this, logger).run(); + public WritePrimaryResult(ReplicaRequest request, @Nullable Response finalResponse, + @Nullable Location location, @Nullable Exception operationFailure, + IndexShard primary, Logger logger) { + super(request, finalResponse, operationFailure); + this.location = location; + assert location == null || operationFailure == null + : "expected either failure to be null or translog location to be null, " + + "but found: [" + location + "] translog location and [" + operationFailure + "] failure"; + if (operationFailure != null) { + this.finishedAsyncActions = true; + } else { + /* + * We call this before replication because this might wait for a refresh and that can take a while. + * This way we wait for the refresh in parallel on the primary and on the replica. + */ + new AsyncAfterWriteAction(primary, request, location, this, logger).run(); + } } @Override @@ -160,7 +138,7 @@ public synchronized void onFailure(Exception exception) { @Override public synchronized void onSuccess(boolean forcedRefresh) { - finalResponse.setForcedRefresh(forcedRefresh); + finalResponseIfSuccessful.setForcedRefresh(forcedRefresh); finishedAsyncActions = true; respondIfPossible(null); } @@ -169,12 +147,21 @@ public synchronized void onSuccess(boolean forcedRefresh) { /** * Result of taking the action on the replica. 
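The new WritePrimaryResult constructor accepts either a translog location (success) or an operation failure, asserts that the two are mutually exclusive, and only starts the post-write refresh/fsync work when there was no failure. A compact stand-alone illustration of that invariant, with placeholder types instead of Translog.Location and AsyncAfterWriteAction:

```java
/** Placeholder for Translog.Location: where a successful operation ended up in the translog. */
final class TranslogLocationSketch {
    final long generation;
    final long translogOffset;

    TranslogLocationSketch(long generation, long translogOffset) {
        this.generation = generation;
        this.translogOffset = translogOffset;
    }
}

/** Mirrors the invariant of the new WritePrimaryResult / WriteReplicaResult constructors. */
final class WriteResultSketch {
    final TranslogLocationSketch location; // non-null only on success
    final Exception failure;               // non-null only on failure
    boolean finishedAsyncActions;

    WriteResultSketch(TranslogLocationSketch location, Exception failure, Runnable asyncAfterWrite) {
        assert location == null || failure == null
                : "a write result may carry a translog location or a failure, never both";
        this.location = location;
        this.failure = failure;
        if (failure != null) {
            // nothing was written, so there is nothing to refresh or fsync
            this.finishedAsyncActions = true;
        } else {
            // refresh/fsync runs in parallel with replication, as in the real AsyncAfterWriteAction
            asyncAfterWrite.run();
        }
    }
}
```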
*/ - class WriteReplicaResult extends ReplicaResult implements RespondingWriteResult { + protected static class WriteReplicaResult> + extends ReplicaResult implements RespondingWriteResult { + public final Location location; boolean finishedAsyncActions; private ActionListener listener; - public WriteReplicaResult(IndexShard indexShard, ReplicatedWriteRequest request, Translog.Location location) { - new AsyncAfterWriteAction(indexShard, request, location, this, logger).run(); + public WriteReplicaResult(ReplicaRequest request, @Nullable Location location, + @Nullable Exception operationFailure, IndexShard replica, Logger logger) { + super(operationFailure); + this.location = location; + if (operationFailure != null) { + this.finishedAsyncActions = true; + } else { + new AsyncAfterWriteAction(replica, request, location, this, logger).run(); + } } @Override @@ -209,6 +196,16 @@ public synchronized void onSuccess(boolean forcedRefresh) { } } + @Override + protected ClusterBlockLevel globalBlockLevel() { + return ClusterBlockLevel.WRITE; + } + + @Override + protected ClusterBlockLevel indexBlockLevel() { + return ClusterBlockLevel.WRITE; + } + /** * callback used by {@link AsyncAfterWriteAction} to notify that all post * process actions have been executed diff --git a/core/src/main/java/org/elasticsearch/action/support/single/instance/InstanceShardOperationRequest.java b/core/src/main/java/org/elasticsearch/action/support/single/instance/InstanceShardOperationRequest.java index cb9a6ab9f6984..d91de908311a9 100644 --- a/core/src/main/java/org/elasticsearch/action/support/single/instance/InstanceShardOperationRequest.java +++ b/core/src/main/java/org/elasticsearch/action/support/single/instance/InstanceShardOperationRequest.java @@ -35,7 +35,7 @@ /** * */ -public abstract class InstanceShardOperationRequest> extends ActionRequest +public abstract class InstanceShardOperationRequest> extends ActionRequest implements IndicesRequest { public static final TimeValue DEFAULT_TIMEOUT = new TimeValue(1, TimeUnit.MINUTES); diff --git a/core/src/main/java/org/elasticsearch/action/support/single/instance/TransportInstanceSingleOperationAction.java b/core/src/main/java/org/elasticsearch/action/support/single/instance/TransportInstanceSingleOperationAction.java index 81da5ec9a86ff..f05d254c266af 100644 --- a/core/src/main/java/org/elasticsearch/action/support/single/instance/TransportInstanceSingleOperationAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/single/instance/TransportInstanceSingleOperationAction.java @@ -39,12 +39,12 @@ import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.node.NodeClosedException; import org.elasticsearch.threadpool.ThreadPool; -import org.elasticsearch.transport.TransportResponseHandler; import org.elasticsearch.transport.ConnectTransportException; import org.elasticsearch.transport.TransportChannel; import org.elasticsearch.transport.TransportException; import org.elasticsearch.transport.TransportRequestHandler; import org.elasticsearch.transport.TransportRequestOptions; +import org.elasticsearch.transport.TransportResponseHandler; import org.elasticsearch.transport.TransportService; import java.util.function.Supplier; @@ -114,7 +114,6 @@ class AsyncSingleAction { private final Request request; private volatile ClusterStateObserver observer; private ShardIterator shardIt; - private DiscoveryNodes nodes; AsyncSingleAction(Request request, ActionListener listener) { this.request = request; @@ -122,14 +121,14 @@ class AsyncSingleAction { } 
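WriteReplicaResult follows the same success-or-failure split, and the two overrides added at the end of TransportWriteAction subject write actions to WRITE-level cluster and index blocks, which ties back to the null-guarded block check shown earlier. A tiny sketch of that opt-in, with a simplified Level enum standing in for ClusterBlockLevel and a made-up base class:

```java
/** Simplified stand-in for ClusterBlockLevel. */
enum Level { READ, WRITE, METADATA_READ, METADATA_WRITE }

abstract class ReplicationActionSketch {
    /** In this sketch the base returns null, which the null guards above treat as "skip the check". */
    protected Level globalBlockLevel() { return null; }

    protected Level indexBlockLevel() { return null; }
}

/** Write actions opt back in at WRITE level, as the two overrides above do. */
class WriteActionBlocksSketch extends ReplicationActionSketch {
    @Override
    protected Level globalBlockLevel() { return Level.WRITE; }

    @Override
    protected Level indexBlockLevel() { return Level.WRITE; }
}
```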
public void start() { - this.observer = new ClusterStateObserver(clusterService, request.timeout(), logger, threadPool.getThreadContext()); - doStart(); + ClusterState state = clusterService.state(); + this.observer = new ClusterStateObserver(state, clusterService, request.timeout(), logger, threadPool.getThreadContext()); + doStart(state); } - protected void doStart() { - nodes = observer.observedState().nodes(); + protected void doStart(ClusterState clusterState) { try { - ClusterBlockException blockException = checkGlobalBlock(observer.observedState()); + ClusterBlockException blockException = checkGlobalBlock(clusterState); if (blockException != null) { if (blockException.retryable()) { retry(blockException); @@ -138,9 +137,9 @@ protected void doStart() { throw blockException; } } - request.concreteIndex(indexNameExpressionResolver.concreteSingleIndex(observer.observedState(), request).getName()); - resolveRequest(observer.observedState(), request); - blockException = checkRequestBlock(observer.observedState(), request); + request.concreteIndex(indexNameExpressionResolver.concreteSingleIndex(clusterState, request).getName()); + resolveRequest(clusterState, request); + blockException = checkRequestBlock(clusterState, request); if (blockException != null) { if (blockException.retryable()) { retry(blockException); @@ -149,7 +148,7 @@ protected void doStart() { throw blockException; } } - shardIt = shards(observer.observedState(), request); + shardIt = shards(clusterState, request); } catch (Exception e) { listener.onFailure(e); return; @@ -173,7 +172,7 @@ protected void doStart() { } request.shardId = shardIt.shardId(); - DiscoveryNode node = nodes.get(shard.currentNodeId()); + DiscoveryNode node = clusterState.nodes().get(shard.currentNodeId()); transportService.sendRequest(node, shardActionName, request, transportOptions(), new TransportResponseHandler() { @Override @@ -223,18 +222,18 @@ void retry(@Nullable final Exception failure) { observer.waitForNextChange(new ClusterStateObserver.Listener() { @Override public void onNewClusterState(ClusterState state) { - doStart(); + doStart(state); } @Override public void onClusterServiceClose() { - listener.onFailure(new NodeClosedException(nodes.getLocalNode())); + listener.onFailure(new NodeClosedException(clusterService.localNode())); } @Override public void onTimeout(TimeValue timeout) { // just to be on the safe side, see if we can start it now? 
- doStart(); + doStart(observer.setAndGetObservedState()); } }, request.timeout()); } diff --git a/core/src/main/java/org/elasticsearch/action/support/single/shard/SingleShardRequest.java b/core/src/main/java/org/elasticsearch/action/support/single/shard/SingleShardRequest.java index 499932fce687e..f3b7a7e5fe241 100644 --- a/core/src/main/java/org/elasticsearch/action/support/single/shard/SingleShardRequest.java +++ b/core/src/main/java/org/elasticsearch/action/support/single/shard/SingleShardRequest.java @@ -34,7 +34,7 @@ /** * */ -public abstract class SingleShardRequest> extends ActionRequest implements IndicesRequest { +public abstract class SingleShardRequest> extends ActionRequest implements IndicesRequest { public static final IndicesOptions INDICES_OPTIONS = IndicesOptions.strictSingleIndexNoExpandForbidClosed(); diff --git a/core/src/main/java/org/elasticsearch/action/support/single/shard/TransportSingleShardAction.java b/core/src/main/java/org/elasticsearch/action/support/single/shard/TransportSingleShardAction.java index 8981caa60f760..811dcbed3dcf9 100644 --- a/core/src/main/java/org/elasticsearch/action/support/single/shard/TransportSingleShardAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/single/shard/TransportSingleShardAction.java @@ -46,6 +46,7 @@ import org.elasticsearch.transport.TransportResponseHandler; import org.elasticsearch.transport.TransportService; +import java.io.IOException; import java.util.function.Supplier; import static org.elasticsearch.action.support.TransportActions.isShardNotAvailableException; @@ -94,7 +95,7 @@ protected void doExecute(Request request, ActionListener listener) { new AsyncSingleAction(request, listener).start(); } - protected abstract Response shardOperation(Request request, ShardId shardId); + protected abstract Response shardOperation(Request request, ShardId shardId) throws IOException; protected abstract Response newResponse(); diff --git a/core/src/main/java/org/elasticsearch/action/support/tasks/BaseTasksRequest.java b/core/src/main/java/org/elasticsearch/action/support/tasks/BaseTasksRequest.java index abc081dbc071f..e912eebb4fb39 100644 --- a/core/src/main/java/org/elasticsearch/action/support/tasks/BaseTasksRequest.java +++ b/core/src/main/java/org/elasticsearch/action/support/tasks/BaseTasksRequest.java @@ -36,13 +36,13 @@ /** * A base class for task requests */ -public class BaseTasksRequest> extends ActionRequest { +public class BaseTasksRequest> extends ActionRequest { public static final String[] ALL_ACTIONS = Strings.EMPTY_ARRAY; public static final String[] ALL_NODES = Strings.EMPTY_ARRAY; - private String[] nodesIds = ALL_NODES; + private String[] nodes = ALL_NODES; private TimeValue timeout; @@ -58,7 +58,7 @@ public BaseTasksRequest() { @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; - if (taskId.isSet() && nodesIds.length > 0) { + if (taskId.isSet() && nodes.length > 0) { validationException = addValidationError("task id cannot be used together with node ids", validationException); } @@ -81,13 +81,13 @@ public String[] getActions() { return actions; } - public final String[] getNodesIds() { - return nodesIds; + public final String[] getNodes() { + return nodes; } @SuppressWarnings("unchecked") - public final Request setNodesIds(String... nodesIds) { - this.nodesIds = nodesIds; + public final Request setNodes(String... 
nodes) { + this.nodes = nodes; return (Request) this; } @@ -142,7 +142,7 @@ public void readFrom(StreamInput in) throws IOException { super.readFrom(in); taskId = TaskId.readFromStream(in); parentTaskId = TaskId.readFromStream(in); - nodesIds = in.readStringArray(); + nodes = in.readStringArray(); actions = in.readStringArray(); timeout = in.readOptionalWriteable(TimeValue::new); } @@ -152,7 +152,7 @@ public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); taskId.writeTo(out); parentTaskId.writeTo(out); - out.writeStringArrayNullable(nodesIds); + out.writeStringArrayNullable(nodes); out.writeStringArrayNullable(actions); out.writeOptionalWriteable(timeout); } diff --git a/core/src/main/java/org/elasticsearch/action/support/tasks/BaseTasksResponse.java b/core/src/main/java/org/elasticsearch/action/support/tasks/BaseTasksResponse.java index 43be2b46db1c4..fdbd8e6fe708f 100644 --- a/core/src/main/java/org/elasticsearch/action/support/tasks/BaseTasksResponse.java +++ b/core/src/main/java/org/elasticsearch/action/support/tasks/BaseTasksResponse.java @@ -19,16 +19,22 @@ package org.elasticsearch.action.support.tasks; +import org.elasticsearch.ElasticsearchException; import org.elasticsearch.action.ActionResponse; import org.elasticsearch.action.FailedNodeException; import org.elasticsearch.action.TaskOperationFailure; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.tasks.TaskId; import java.io.IOException; import java.util.ArrayList; import java.util.Collections; import java.util.List; +import java.util.stream.Stream; + +import static java.util.stream.Collectors.toList; +import static org.elasticsearch.ExceptionsHelper.rethrowAndSuppress; /** @@ -38,9 +44,6 @@ public class BaseTasksResponse extends ActionResponse { private List taskFailures; private List nodeFailures; - public BaseTasksResponse() { - } - public BaseTasksResponse(List taskFailures, List nodeFailures) { this.taskFailures = taskFailures == null ? Collections.emptyList() : Collections.unmodifiableList(new ArrayList<>(taskFailures)); this.nodeFailures = nodeFailures == null ? Collections.emptyList() : Collections.unmodifiableList(new ArrayList<>(nodeFailures)); @@ -60,17 +63,28 @@ public List getNodeFailures() { return nodeFailures; } + /** + * Rethrow task failures if there are any. 
+ */ + public void rethrowFailures(String operationName) { + rethrowAndSuppress(Stream.concat( + getNodeFailures().stream(), + getTaskFailures().stream().map(f -> new ElasticsearchException( + "{} of [{}] failed", f.getCause(), operationName, new TaskId(f.getNodeId(), f.getTaskId())))) + .collect(toList())); + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); int size = in.readVInt(); - List taskFailures = new ArrayList<>(); + List taskFailures = new ArrayList<>(size); for (int i = 0; i < size; i++) { taskFailures.add(new TaskOperationFailure(in)); } size = in.readVInt(); this.taskFailures = Collections.unmodifiableList(taskFailures); - List nodeFailures = new ArrayList<>(); + List nodeFailures = new ArrayList<>(size); for (int i = 0; i < size; i++) { nodeFailures.add(new FailedNodeException(in)); } diff --git a/core/src/main/java/org/elasticsearch/action/support/tasks/TasksRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/support/tasks/TasksRequestBuilder.java index a3528cb75c4d3..656dae9992889 100644 --- a/core/src/main/java/org/elasticsearch/action/support/tasks/TasksRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/support/tasks/TasksRequestBuilder.java @@ -48,7 +48,7 @@ public final RequestBuilder setTaskId(TaskId taskId) { @SuppressWarnings("unchecked") public final RequestBuilder setNodesIds(String... nodesIds) { - request.setNodesIds(nodesIds); + request.setNodes(nodesIds); return (RequestBuilder) this; } @@ -63,5 +63,14 @@ public final RequestBuilder setTimeout(TimeValue timeout) { request.setTimeout(timeout); return (RequestBuilder) this; } + + /** + * Match all children of the provided task. + */ + @SuppressWarnings("unchecked") + public final RequestBuilder setParentTaskId(TaskId taskId) { + request.setParentTaskId(taskId); + return (RequestBuilder) this; + } } diff --git a/core/src/main/java/org/elasticsearch/action/support/tasks/TransportTasksAction.java b/core/src/main/java/org/elasticsearch/action/support/tasks/TransportTasksAction.java index 6752ccd7293a7..35b2b41dfda6e 100644 --- a/core/src/main/java/org/elasticsearch/action/support/tasks/TransportTasksAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/tasks/TransportTasksAction.java @@ -33,10 +33,12 @@ import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.collect.ImmutableOpenMap; +import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.AtomicArray; import org.elasticsearch.tasks.Task; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.NodeShouldNotConnectException; @@ -57,6 +59,8 @@ import java.util.function.Consumer; import java.util.function.Supplier; +import static java.util.Collections.emptyList; + /** * The base class for transport actions that are interacting with currently running tasks. 
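The new rethrowFailures folds node-level and task-level failures into one exception by delegating to ExceptionsHelper.rethrowAndSuppress. The sketch below re-implements the throw-first, suppress-the-rest idea in isolation; it illustrates the pattern and is not the actual helper.

```java
import java.util.Arrays;
import java.util.List;

final class FailureCollector {

    /** Throws the first failure and attaches all remaining ones as suppressed exceptions. */
    static void rethrowAndSuppress(List<? extends Exception> failures) throws Exception {
        Exception main = null;
        for (Exception failure : failures) {
            if (main == null) {
                main = failure;
            } else {
                main.addSuppressed(failure);
            }
        }
        if (main != null) {
            throw main;
        }
    }

    public static void main(String[] args) throws Exception {
        // Typical use after a tasks API call: surface node failures and task failures together.
        rethrowAndSuppress(Arrays.asList(
                new RuntimeException("node [n1] failed"),
                new RuntimeException("cancellation of [n2:42] failed")));
    }
}
```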
*/ @@ -100,21 +104,56 @@ protected void doExecute(Task task, TasksRequest request, ActionListener listener) { TasksRequest request = nodeTaskRequest.tasksRequest; - List results = new ArrayList<>(); - List exceptions = new ArrayList<>(); - processTasks(request, task -> { - try { - TaskResponse response = taskOperation(request, task); - if (response != null) { - results.add(response); + List tasks = new ArrayList<>(); + processTasks(request, tasks::add); + if (tasks.isEmpty()) { + listener.onResponse(new NodeTasksResponse(clusterService.localNode().getId(), emptyList(), emptyList())); + return; + } + AtomicArray> responses = new AtomicArray<>(tasks.size()); + final AtomicInteger counter = new AtomicInteger(tasks.size()); + for (int i = 0; i < tasks.size(); i++) { + final int taskIndex = i; + ActionListener taskListener = new ActionListener() { + @Override + public void onResponse(TaskResponse response) { + responses.setOnce(taskIndex, response == null ? null : new Tuple<>(response, null)); + respondIfFinished(); } - } catch (Exception ex) { - exceptions.add(new TaskOperationFailure(clusterService.localNode().getId(), task.getId(), ex)); + + @Override + public void onFailure(Exception e) { + responses.setOnce(taskIndex, new Tuple<>(null, e)); + respondIfFinished(); + } + + private void respondIfFinished() { + if (counter.decrementAndGet() != 0) { + return; + } + List results = new ArrayList<>(); + List exceptions = new ArrayList<>(); + for (Tuple response : responses.asList()) { + if (response.v1() == null) { + assert response.v2() != null; + exceptions.add(new TaskOperationFailure(clusterService.localNode().getId(), tasks.get(taskIndex).getId(), + response.v2())); + } else { + assert response.v2() == null; + results.add(response.v1()); + } + } + listener.onResponse(new NodeTasksResponse(clusterService.localNode().getId(), results, exceptions)); + } + }; + try { + taskOperation(request, tasks.get(taskIndex), taskListener); + } catch (Exception e) { + taskListener.onFailure(e); } - }); - return new NodeTasksResponse(clusterService.localNode().getId(), results, exceptions); + } } protected String[] filterNodeIds(DiscoveryNodes nodes, String[] nodesIds) { @@ -125,7 +164,7 @@ protected String[] resolveNodes(TasksRequest request, ClusterState clusterState) if (request.getTaskId().isSet()) { return new String[]{request.getTaskId().getNodeId()}; } else { - return clusterState.nodes().resolveNodes(request.getNodesIds()); + return clusterState.nodes().resolveNodes(request.getNodes()); } } @@ -178,14 +217,15 @@ protected TasksResponse newResponse(TasksRequest request, AtomicReferenceArray r protected abstract TaskResponse readTaskResponse(StreamInput in) throws IOException; - protected abstract TaskResponse taskOperation(TasksRequest request, OperationTask task); + /** + * Perform the required operation on the task. It is OK start an asynchronous operation or to throw an exception but not both. 
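nodeOperation is now asynchronous: it fires taskOperation once per matched task, records each outcome in a fixed-size array, and sends the node response only when an atomic countdown reaches zero. A self-contained sketch of that fan-out/fan-in pattern using plain JDK types (the real code uses Elasticsearch's AtomicArray, Tuple and ActionListener):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;
import java.util.function.BiConsumer;
import java.util.function.Consumer;

final class PerTaskFanOut<T, R> {

    /** Runs an async operation per task, collects results or failures, and calls onDone exactly once. */
    void run(List<T> tasks,
             BiConsumer<T, BiConsumer<R, Exception>> asyncOperation,
             Consumer<List<Object>> onDone) {
        if (tasks.isEmpty()) {
            onDone.accept(new ArrayList<>());
            return;
        }
        AtomicReferenceArray<Object> outcomes = new AtomicReferenceArray<>(tasks.size());
        AtomicInteger pending = new AtomicInteger(tasks.size());
        for (int i = 0; i < tasks.size(); i++) {
            final int slot = i;
            BiConsumer<R, Exception> perTaskListener = (result, failure) -> {
                outcomes.set(slot, failure != null ? failure : result);
                if (pending.decrementAndGet() == 0) {        // the last listener to finish responds
                    List<Object> collected = new ArrayList<>();
                    for (int j = 0; j < outcomes.length(); j++) {
                        collected.add(outcomes.get(j));
                    }
                    onDone.accept(collected);
                }
            };
            try {
                asyncOperation.accept(tasks.get(i), perTaskListener);
            } catch (Exception e) {
                perTaskListener.accept(null, e);             // a throwing operation counts as a failure
            }
        }
    }
}
```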
+ */ + protected abstract void taskOperation(TasksRequest request, OperationTask task, ActionListener listener); protected boolean transportCompress() { return false; } - protected abstract boolean accumulateExceptions(); - private class AsyncAction { private final TasksRequest request; @@ -236,7 +276,6 @@ private void start() { } else { NodeTaskRequest nodeRequest = new NodeTaskRequest(request); nodeRequest.setParentTask(clusterService.localNode().getId(), task.getId()); - taskManager.registerChildTask(task, node.getId()); transportService.sendRequest(node, transportNodeAction, nodeRequest, builder.build(), new TransportResponseHandler() { @Override @@ -280,9 +319,9 @@ private void onFailure(int idx, String nodeId, Throwable t) { (org.apache.logging.log4j.util.Supplier) () -> new ParameterizedMessage("failed to execute on node [{}]", nodeId), t); } - if (accumulateExceptions()) { - responses.set(idx, new FailedNodeException(nodeId, "Failed node [" + nodeId + "]", t)); - } + + responses.set(idx, new FailedNodeException(nodeId, "Failed node [" + nodeId + "]", t)); + if (counter.incrementAndGet() == responses.length()) { finishHim(); } @@ -305,7 +344,27 @@ class NodeTransportHandler implements TransportRequestHandler { @Override public void messageReceived(final NodeTaskRequest request, final TransportChannel channel) throws Exception { - channel.sendResponse(nodeOperation(request)); + nodeOperation(request, new ActionListener() { + @Override + public void onResponse( + TransportTasksAction.NodeTasksResponse response) { + try { + channel.sendResponse(response); + } catch (Exception e) { + onFailure(e); + } + } + + @Override + public void onFailure(Exception e) { + try { + channel.sendResponse(e); + } catch (IOException e1) { + e1.addSuppressed(e); + logger.warn("Failed to send failure", e1); + } + } + }); } } @@ -341,10 +400,10 @@ private class NodeTasksResponse extends TransportResponse { protected List exceptions; protected List results; - public NodeTasksResponse() { + NodeTasksResponse() { } - public NodeTasksResponse(String nodeId, + NodeTasksResponse(String nodeId, List results, List exceptions) { this.nodeId = nodeId; diff --git a/core/src/main/java/org/elasticsearch/action/termvectors/MultiTermVectorsRequest.java b/core/src/main/java/org/elasticsearch/action/termvectors/MultiTermVectorsRequest.java index af192cea600c6..fda393e375c9e 100644 --- a/core/src/main/java/org/elasticsearch/action/termvectors/MultiTermVectorsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/termvectors/MultiTermVectorsRequest.java @@ -23,14 +23,11 @@ import org.elasticsearch.action.ActionRequest; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.CompositeIndicesRequest; -import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.RealtimeRequest; import org.elasticsearch.action.ValidateActions; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; import java.io.IOException; @@ -41,7 +38,7 @@ import java.util.List; import java.util.Set; -public class MultiTermVectorsRequest extends ActionRequest implements Iterable, CompositeIndicesRequest, RealtimeRequest { +public class MultiTermVectorsRequest extends ActionRequest implements Iterable, CompositeIndicesRequest, 
RealtimeRequest { String preference; List requests = new ArrayList<>(); @@ -76,11 +73,6 @@ public ActionRequestValidationException validate() { return validationException; } - @Override - public List subRequests() { - return requests; - } - @Override public Iterator iterator() { return Collections.unmodifiableCollection(requests).iterator(); @@ -94,43 +86,41 @@ public List getRequests() { return requests; } - public void add(TermVectorsRequest template, BytesReference data) throws Exception { + public void add(TermVectorsRequest template, @Nullable XContentParser parser) throws IOException { XContentParser.Token token; String currentFieldName = null; - if (data.length() > 0) { - try (XContentParser parser = XContentFactory.xContent(data).createParser(data)) { - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (token == XContentParser.Token.START_ARRAY) { - if ("docs".equals(currentFieldName)) { - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - if (token != XContentParser.Token.START_OBJECT) { - throw new IllegalArgumentException("docs array element should include an object"); - } - TermVectorsRequest termVectorsRequest = new TermVectorsRequest(template); - TermVectorsRequest.parseRequest(termVectorsRequest, parser); - add(termVectorsRequest); + if (parser != null) { + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token == XContentParser.Token.START_ARRAY) { + if ("docs".equals(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + if (token != XContentParser.Token.START_OBJECT) { + throw new IllegalArgumentException("docs array element should include an object"); } - } else if ("ids".equals(currentFieldName)) { - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - if (!token.isValue()) { - throw new IllegalArgumentException("ids array element should only contain ids"); - } - ids.add(parser.text()); - } - } else { - throw new ElasticsearchParseException("no parameter named [{}] and type ARRAY", currentFieldName); + TermVectorsRequest termVectorsRequest = new TermVectorsRequest(template); + TermVectorsRequest.parseRequest(termVectorsRequest, parser); + add(termVectorsRequest); } - } else if (token == XContentParser.Token.START_OBJECT && currentFieldName != null) { - if ("parameters".equals(currentFieldName)) { - TermVectorsRequest.parseRequest(template, parser); - } else { - throw new ElasticsearchParseException("no parameter named [{}] and type OBJECT", currentFieldName); + } else if ("ids".equals(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + if (!token.isValue()) { + throw new IllegalArgumentException("ids array element should only contain ids"); + } + ids.add(parser.text()); } - } else if (currentFieldName != null) { - throw new ElasticsearchParseException("_mtermvectors: Parameter [{}] not supported", currentFieldName); + } else { + throw new ElasticsearchParseException("no parameter named [{}] and type ARRAY", currentFieldName); + } + } else if (token == XContentParser.Token.START_OBJECT && currentFieldName != null) { + if ("parameters".equals(currentFieldName)) { + TermVectorsRequest.parseRequest(template, parser); + } else { + throw new ElasticsearchParseException("no parameter named 
[{}] and type OBJECT", currentFieldName); } + } else if (currentFieldName != null) { + throw new ElasticsearchParseException("_mtermvectors: Parameter [{}] not supported", currentFieldName); } } } diff --git a/core/src/main/java/org/elasticsearch/action/termvectors/MultiTermVectorsResponse.java b/core/src/main/java/org/elasticsearch/action/termvectors/MultiTermVectorsResponse.java index 233d4b0c63884..8508c834a9f36 100644 --- a/core/src/main/java/org/elasticsearch/action/termvectors/MultiTermVectorsResponse.java +++ b/core/src/main/java/org/elasticsearch/action/termvectors/MultiTermVectorsResponse.java @@ -24,14 +24,14 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; import java.util.Arrays; import java.util.Iterator; -public class MultiTermVectorsResponse extends ActionResponse implements Iterable, ToXContent { +public class MultiTermVectorsResponse extends ActionResponse implements Iterable, ToXContentObject { /** * Represents a failure. @@ -124,6 +124,7 @@ public Iterator iterator() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startArray(Fields.DOCS); for (MultiTermVectorsItemResponse response : responses) { if (response.isFailed()) { @@ -132,16 +133,15 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(Fields._INDEX, failure.getIndex()); builder.field(Fields._TYPE, failure.getType()); builder.field(Fields._ID, failure.getId()); - ElasticsearchException.renderException(builder, params, failure.getCause()); + ElasticsearchException.generateFailureXContent(builder, params, failure.getCause(), true); builder.endObject(); } else { TermVectorsResponse getResponse = response.getResponse(); - builder.startObject(); getResponse.toXContent(builder, params); - builder.endObject(); } } builder.endArray(); + builder.endObject(); return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsFields.java b/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsFields.java index 0ae8824ce8d7f..088691a5c9c19 100644 --- a/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsFields.java +++ b/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsFields.java @@ -105,13 +105,13 @@ *
  * <li>vint: frequency (always returned)</li>
  * <li>
  * <ul>
- * <li>vint: position_1 (if positions == true)</li>
- * <li>vint: startOffset_1 (if offset == true)</li>
- * <li>vint: endOffset_1 (if offset == true)</li>
- * <li>BytesRef: payload_1 (if payloads == true)</li>
+ * <li>vint: position_1 (if positions)</li>
+ * <li>vint: startOffset_1 (if offset)</li>
+ * <li>vint: endOffset_1 (if offset)</li>
+ * <li>BytesRef: payload_1 (if payloads)</li>
  * <li>...</li>
- * <li>vint: endOffset_freqency (if offset == true)</li>
- * <li>BytesRef: payload_freqency (if payloads == true)</li>
+ * <li>vint: endOffset_freqency (if offset)</li>
+ * <li>BytesRef: payload_freqency (if payloads)</li>
  * </ul></li>
  • * */ @@ -200,7 +200,7 @@ private final class TermVector extends Terms { private long sumDocFreq; private int docCount; - public TermVector(BytesReference termVectors, long readOffset) throws IOException { + TermVector(BytesReference termVectors, long readOffset) throws IOException { this.perFieldTermVectorInput = termVectors.streamInput(); this.readOffset = readOffset; reset(); diff --git a/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequest.java b/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequest.java index 3f33b2e3901e7..0fe83e214463a 100644 --- a/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequest.java @@ -20,8 +20,8 @@ package org.elasticsearch.action.termvectors; import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; -import org.elasticsearch.action.DocumentRequest; import org.elasticsearch.action.RealtimeRequest; import org.elasticsearch.action.ValidateActions; import org.elasticsearch.action.get.MultiGetRequest; @@ -34,7 +34,9 @@ import org.elasticsearch.common.lucene.uid.Versions; import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.VersionType; import java.io.IOException; @@ -56,7 +58,7 @@ * Note, the {@link #index()}, {@link #type(String)} and {@link #id(String)} are * required. */ -public class TermVectorsRequest extends SingleShardRequest implements DocumentRequest, RealtimeRequest { +public class TermVectorsRequest extends SingleShardRequest implements RealtimeRequest { private String type; @@ -64,6 +66,8 @@ public class TermVectorsRequest extends SingleShardRequest i private BytesReference doc; + private XContentType xContentType; + private String routing; private String parent; @@ -157,8 +161,9 @@ public TermVectorsRequest(TermVectorsRequest other) { super(other.index()); this.id = other.id(); this.type = other.type(); - if (this.doc != null) { + if (other.doc != null) { this.doc = new BytesArray(other.doc().toBytesRef(), true); + this.xContentType = other.xContentType; } this.flagsEnum = other.getFlags().clone(); this.preference = other.preference(); @@ -180,7 +185,7 @@ public TermVectorsRequest(MultiGetRequest.Item item) { super(item.index()); this.id = item.id(); this.type = item.type(); - this.selectedFields(item.fields()); + this.selectedFields(item.storedFields()); this.routing(item.routing()); this.parent(item.parent()); } @@ -200,7 +205,6 @@ public TermVectorsRequest type(String type) { /** * Returns the type of document to get the term vector for. */ - @Override public String type() { return type; } @@ -208,7 +212,6 @@ public String type() { /** * Returns the id of document the term vector is requested for. */ - @Override public String id() { return id; } @@ -228,40 +231,51 @@ public BytesReference doc() { return doc; } + public XContentType xContentType() { + return xContentType; + } + /** * Sets an artificial document from which term vectors are requested for. 
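With the xContentType field introduced above, an artificial document is expected to travel together with its content type instead of being auto-detected (the old single-flag overload is deprecated in the next hunk). A hedged usage sketch based on the methods visible in these hunks; the index/type setters and the surrounding setup are illustrative and may not match the real API exactly.

```java
import org.elasticsearch.action.termvectors.TermVectorsRequest;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

import java.io.IOException;

final class ArtificialDocTermVectors {

    static TermVectorsRequest buildRequest() throws IOException {
        XContentBuilder artificialDoc = XContentFactory.jsonBuilder()
                .startObject()
                    .field("title", "term vectors for a document that is not indexed")
                .endObject();

        // The content type travels with the bytes, so the receiving side no longer has to
        // guess it from the payload (older nodes still auto-detect, see the version checks
        // in readFrom/writeTo further down).
        return new TermVectorsRequest()
                .index("my_index")
                .type("my_type")
                .doc(artificialDoc.bytes(), true, artificialDoc.contentType());
    }
}
```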
*/ public TermVectorsRequest doc(XContentBuilder documentBuilder) { - return this.doc(documentBuilder.bytes(), true); + return this.doc(documentBuilder.bytes(), true, documentBuilder.contentType()); } /** * Sets an artificial document from which term vectors are requested for. + * @deprecated use {@link #doc(BytesReference, boolean, XContentType)} to avoid content auto detection */ + @Deprecated public TermVectorsRequest doc(BytesReference doc, boolean generateRandomId) { + return this.doc(doc, generateRandomId, XContentFactory.xContentType(doc)); + } + + /** + * Sets an artificial document from which term vectors are requested for. + */ + public TermVectorsRequest doc(BytesReference doc, boolean generateRandomId, XContentType xContentType) { // assign a random id to this artificial document, for routing if (generateRandomId) { this.id(String.valueOf(randomInt.getAndAdd(1))); } this.doc = doc; + this.xContentType = xContentType; return this; } /** * @return The routing for this request. */ - @Override public String routing() { return routing; } - @Override public TermVectorsRequest routing(String routing) { this.routing = routing; return this; } - @Override public String parent() { return parent; } @@ -485,6 +499,11 @@ public void readFrom(StreamInput in) throws IOException { if (in.readBoolean()) { doc = in.readBytesReference(); + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType = XContentType.readFrom(in); + } else { + xContentType = XContentFactory.xContentType(doc); + } } routing = in.readOptionalString(); parent = in.readOptionalString(); @@ -525,6 +544,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeBoolean(doc != null); if (doc != null) { out.writeBytesReference(doc); + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType.writeTo(out); + } } out.writeOptionalString(routing); out.writeOptionalString(parent); @@ -555,7 +577,7 @@ public void writeTo(StreamOutput out) throws IOException { out.writeLong(version); } - public static enum Flag { + public enum Flag { // Do not change the order of these flags we use // the ordinal for encoding! Only append to the end! 
Positions, Offsets, Payloads, FieldStatistics, TermStatistics diff --git a/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsResponse.java b/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsResponse.java index 964aa00b5c35d..c63400be7e941 100644 --- a/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsResponse.java +++ b/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsResponse.java @@ -36,7 +36,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.search.dfs.AggregatedDfs; @@ -46,7 +46,7 @@ import java.util.Iterator; import java.util.Set; -public class TermVectorsResponse extends ActionResponse implements ToXContent { +public class TermVectorsResponse extends ActionResponse implements ToXContentObject { private static class FieldStrings { // term statistics strings @@ -174,6 +174,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws assert index != null; assert type != null; assert id != null; + builder.startObject(); builder.field(FieldStrings._INDEX, index); builder.field(FieldStrings._TYPE, type); if (!isArtificial()) { @@ -182,15 +183,15 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(FieldStrings._VERSION, docVersion); builder.field(FieldStrings.FOUND, isExists()); builder.field(FieldStrings.TOOK, tookInMillis); - if (!isExists()) { - return builder; - } - builder.startObject(FieldStrings.TERM_VECTORS); - final CharsRefBuilder spare = new CharsRefBuilder(); - Fields theFields = getFields(); - Iterator fieldIter = theFields.iterator(); - while (fieldIter.hasNext()) { - buildField(builder, spare, theFields, fieldIter); + if (isExists()) { + builder.startObject(FieldStrings.TERM_VECTORS); + final CharsRefBuilder spare = new CharsRefBuilder(); + Fields theFields = getFields(); + Iterator fieldIter = theFields.iterator(); + while (fieldIter.hasNext()) { + buildField(builder, spare, theFields, fieldIter); + } + builder.endObject(); } builder.endObject(); return builder; diff --git a/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsWriter.java b/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsWriter.java index 6b5e497b8e512..06eea6367edca 100644 --- a/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsWriter.java +++ b/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsWriter.java @@ -71,7 +71,7 @@ void setFields(Fields termVectorsByField, Set selectedFields, EnumSet groupShardsIter = clusterService.operationRouting().searchShards(state, + new String[] { request.concreteIndex() }, null, request.request().preference()); + return groupShardsIter.iterator().next(); + } + return clusterService.operationRouting().getShards(state, request.concreteIndex(), request.request().id(), request.request().routing(), request.request().preference()); } diff --git a/core/src/main/java/org/elasticsearch/action/update/TransportUpdateAction.java b/core/src/main/java/org/elasticsearch/action/update/TransportUpdateAction.java index d35c7bdb5847c..6a8501878e3af 100644 --- a/core/src/main/java/org/elasticsearch/action/update/TransportUpdateAction.java +++ 
b/core/src/main/java/org/elasticsearch/action/update/TransportUpdateAction.java @@ -54,7 +54,7 @@ import org.elasticsearch.index.engine.VersionConflictEngineException; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.ShardId; -import org.elasticsearch.indices.IndexAlreadyExistsException; +import org.elasticsearch.ResourceAlreadyExistsException; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -129,7 +129,7 @@ public void onResponse(CreateIndexResponse result) { @Override public void onFailure(Exception e) { - if (unwrapCause(e) instanceof IndexAlreadyExistsException) { + if (unwrapCause(e) instanceof ResourceAlreadyExistsException) { // we have the index, do it try { innerExecute(request, listener); @@ -176,7 +176,7 @@ protected void shardOperation(final UpdateRequest request, final ActionListener< final ShardId shardId = request.getShardId(); final IndexService indexService = indicesService.indexServiceSafe(shardId.getIndex()); final IndexShard indexShard = indexService.getShard(shardId.getId()); - final UpdateHelper.Result result = updateHelper.prepare(request, indexShard); + final UpdateHelper.Result result = updateHelper.prepare(request, indexShard, threadPool::absoluteTimeInMillis); switch (result.getResponseResult()) { case CREATED: IndexRequest upsertRequest = result.action(); @@ -186,8 +186,10 @@ protected void shardOperation(final UpdateRequest request, final ActionListener< @Override public void onResponse(IndexResponse response) { UpdateResponse update = new UpdateResponse(response.getShardInfo(), response.getShardId(), response.getType(), response.getId(), response.getVersion(), response.getResult()); - if (request.fields() != null && request.fields().length > 0) { - Tuple> sourceAndContent = XContentHelper.convertToMap(upsertSourceBytes, true); + if ((request.fetchSource() != null && request.fetchSource().fetchSource()) || + (request.fields() != null && request.fields().length > 0)) { + Tuple> sourceAndContent = + XContentHelper.convertToMap(upsertSourceBytes, true, upsertRequest.getContentType()); update.setGetResult(updateHelper.extractGetResult(request, request.concreteIndex(), response.getVersion(), sourceAndContent.v2(), sourceAndContent.v1(), upsertSourceBytes)); } else { update.setGetResult(null); diff --git a/core/src/main/java/org/elasticsearch/action/update/UpdateHelper.java b/core/src/main/java/org/elasticsearch/action/update/UpdateHelper.java index 081cfd951b363..d2bbdc2bd8026 100644 --- a/core/src/main/java/org/elasticsearch/action/update/UpdateHelper.java +++ b/core/src/main/java/org/elasticsearch/action/update/UpdateHelper.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.update; +import org.elasticsearch.ElasticsearchException; import org.elasticsearch.action.DocWriteResponse; import org.elasticsearch.action.delete.DeleteRequest; import org.elasticsearch.action.index.IndexRequest; @@ -27,10 +28,11 @@ import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.component.AbstractComponent; -import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.XContentBuilder; import 
org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.VersionType; @@ -44,6 +46,7 @@ import org.elasticsearch.index.mapper.TimestampFieldMapper; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.script.CompiledScript; import org.elasticsearch.script.ExecutableScript; import org.elasticsearch.script.Script; import org.elasticsearch.script.ScriptContext; @@ -51,19 +54,19 @@ import org.elasticsearch.search.fetch.subphase.FetchSourceContext; import org.elasticsearch.search.lookup.SourceLookup; +import java.io.IOException; import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; import java.util.Map; +import java.util.function.LongSupplier; /** * Helper for translating an update request to an index, delete request or update response. */ public class UpdateHelper extends AbstractComponent { - private final ScriptService scriptService; - @Inject public UpdateHelper(Settings settings, ScriptService scriptService) { super(settings); this.scriptService = scriptService; @@ -72,19 +75,18 @@ public UpdateHelper(Settings settings, ScriptService scriptService) { /** * Prepares an update request by converting it into an index or delete request or an update response (no action). */ - @SuppressWarnings("unchecked") - public Result prepare(UpdateRequest request, IndexShard indexShard) { + public Result prepare(UpdateRequest request, IndexShard indexShard, LongSupplier nowInMillis) { final GetResult getResult = indexShard.getService().get(request.type(), request.id(), new String[]{RoutingFieldMapper.NAME, ParentFieldMapper.NAME, TTLFieldMapper.NAME, TimestampFieldMapper.NAME}, - true, request.version(), request.versionType(), FetchSourceContext.FETCH_SOURCE, false); - return prepare(indexShard.shardId(), request, getResult); + true, request.version(), request.versionType(), FetchSourceContext.FETCH_SOURCE); + return prepare(indexShard.shardId(), request, getResult, nowInMillis); } /** * Prepares an update request by converting it into an index or delete request or an update response (no action). 
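UpdateHelper.prepare now receives the current time as a LongSupplier (TransportUpdateAction passes threadPool::absoluteTimeInMillis) and exposes it to update scripts as ctx._now, so scripts no longer need to read the wall clock themselves. A small stand-alone sketch of that injection, useful mainly because tests can pass a fixed clock; only the ctx keys are taken from the hunks, the rest is simplified.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.LongSupplier;

final class UpdateContextSketch {

    /** Builds the script context, taking "now" from the supplied clock rather than the system clock. */
    static Map<String, Object> buildCtx(Map<String, Object> source, LongSupplier nowInMillis) {
        Map<String, Object> ctx = new HashMap<>();
        ctx.put("op", "create");
        ctx.put("_source", source);
        ctx.put("_now", nowInMillis.getAsLong());   // the value the hunk makes visible to scripts
        return ctx;
    }

    public static void main(String[] args) {
        // Production would pass something like threadPool::absoluteTimeInMillis;
        // a test can pass a fixed clock and get a deterministic _now value.
        Map<String, Object> ctx = buildCtx(new HashMap<>(), () -> 1_489_000_000_000L);
        System.out.println(ctx.get("_now"));
    }
}
```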
*/ @SuppressWarnings("unchecked") - protected Result prepare(ShardId shardId, UpdateRequest request, final GetResult getResult) { + protected Result prepare(ShardId shardId, UpdateRequest request, final GetResult getResult, LongSupplier nowInMillis) { long getDateNS = System.nanoTime(); if (!getResult.isExists()) { if (request.upsertRequest() == null && !request.docAsUpsert()) { @@ -100,6 +102,7 @@ protected Result prepare(ShardId shardId, UpdateRequest request, final GetResult // Tell the script that this is a create and not an update ctx.put("op", "create"); ctx.put("_source", upsertDoc); + ctx.put("_now", nowInMillis.getAsLong()); ctx = executeScript(request.script, ctx); //Allow the script to set TTL using ctx._ttl if (ttl == null) { @@ -114,7 +117,7 @@ protected Result prepare(ShardId shardId, UpdateRequest request, final GetResult if (!"create".equals(scriptOpChoice)) { if (!"none".equals(scriptOpChoice)) { logger.warn("Used upsert operation [{}] for script [{}], doing nothing...", scriptOpChoice, - request.script.getScript()); + request.script.getIdOrCode()); } UpdateResponse update = new UpdateResponse(shardId, getResult.getType(), getResult.getId(), getResult.getVersion(), DocWriteResponse.Result.NOOP); @@ -131,6 +134,7 @@ protected Result prepare(ShardId shardId, UpdateRequest request, final GetResult .setRefreshPolicy(request.getRefreshPolicy()) .routing(request.routing()) .parent(request.parent()) + .timeout(request.timeout()) .waitForActiveShards(request.waitForActiveShards()); if (request.versionType() != VersionType.INTERNAL) { // in all but the internal versioning mode, we want to create the new document using the given version. @@ -139,7 +143,12 @@ protected Result prepare(ShardId shardId, UpdateRequest request, final GetResult return new Result(indexRequest, DocWriteResponse.Result.CREATED, null, null); } - final long updateVersion = getResult.getVersion(); + long updateVersion = getResult.getVersion(); + + if (request.versionType() != VersionType.INTERNAL) { + assert request.versionType() == VersionType.FORCE; + updateVersion = request.version(); // remember, match_any is excluded by the conflict test + } if (getResult.internalSourceRef() == null) { // no source, we can't do nothing, through a failure... 
@@ -188,6 +197,7 @@ protected Result prepare(ShardId shardId, UpdateRequest request, final GetResult ctx.put("_timestamp", originalTimestamp); ctx.put("_ttl", originalTtl); ctx.put("_source", sourceAndContent.v2()); + ctx.put("_now", nowInMillis.getAsLong()); ctx = executeScript(request.script, ctx); @@ -221,12 +231,14 @@ protected Result prepare(ShardId shardId, UpdateRequest request, final GetResult .version(updateVersion).versionType(request.versionType()) .waitForActiveShards(request.waitForActiveShards()) .timestamp(timestamp).ttl(ttl) + .timeout(request.timeout()) .setRefreshPolicy(request.getRefreshPolicy()); return new Result(indexRequest, DocWriteResponse.Result.UPDATED, updatedSourceAsMap, updateSourceContentType); } else if ("delete".equals(operation)) { DeleteRequest deleteRequest = Requests.deleteRequest(request.index()).type(request.type()).id(request.id()).routing(routing).parent(parent) .version(updateVersion).versionType(request.versionType()) .waitForActiveShards(request.waitForActiveShards()) + .timeout(request.timeout()) .setRefreshPolicy(request.getRefreshPolicy()); return new Result(deleteRequest, DocWriteResponse.Result.DELETED, updatedSourceAsMap, updateSourceContentType); } else if ("none".equals(operation)) { @@ -234,7 +246,7 @@ protected Result prepare(ShardId shardId, UpdateRequest request, final GetResult update.setGetResult(extractGetResult(request, request.index(), getResult.getVersion(), updatedSourceAsMap, updateSourceContentType, getResult.internalSourceRef())); return new Result(update, DocWriteResponse.Result.NOOP, updatedSourceAsMap, updateSourceContentType); } else { - logger.warn("Used update operation [{}] for script [{}], doing nothing...", operation, request.script.getScript()); + logger.warn("Used update operation [{}] for script [{}], doing nothing...", operation, request.script.getIdOrCode()); UpdateResponse update = new UpdateResponse(shardId, getResult.getType(), getResult.getId(), getResult.getVersion(), DocWriteResponse.Result.NOOP); return new Result(update, DocWriteResponse.Result.NOOP, updatedSourceAsMap, updateSourceContentType); } @@ -243,7 +255,8 @@ protected Result prepare(ShardId shardId, UpdateRequest request, final GetResult private Map executeScript(Script script, Map ctx) { try { if (scriptService != null) { - ExecutableScript executableScript = scriptService.executable(script, ScriptContext.Standard.UPDATE, Collections.emptyMap()); + CompiledScript compiledScript = scriptService.compile(script, ScriptContext.Standard.UPDATE); + ExecutableScript executableScript = scriptService.executable(compiledScript, script.getParams()); executableScript.setNextVar("ctx", ctx); executableScript.run(); // we need to unwrap the ctx... @@ -267,17 +280,19 @@ private TimeValue getTTLFromScriptContext(Map ctx) { } /** - * Extracts the fields from the updated document to be returned in a update response + * Applies {@link UpdateRequest#fetchSource()} to the _source of the updated document to be returned in a update response. 
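With `_now` placed into the script context on both the upsert and the update paths, an update script can read it directly. A sketch assuming the default script language and a hypothetical `last_updated` field:

```java
// Sketch: ctx._now is the millisecond timestamp handed to UpdateHelper.prepare.
UpdateRequest stamp = new UpdateRequest("index", "type", "1")
    .script(new Script(ScriptType.INLINE, Script.DEFAULT_SCRIPT_LANG,
            "ctx._source.last_updated = ctx._now", Collections.emptyMap()));
```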
+ * For BWC this function also extracts the {@link UpdateRequest#fields()} from the updated document to be returned in a update response */ public GetResult extractGetResult(final UpdateRequest request, String concreteIndex, long version, final Map source, XContentType sourceContentType, @Nullable final BytesReference sourceAsBytes) { - if (request.fields() == null || request.fields().length == 0) { + if ((request.fields() == null || request.fields().length == 0) && + (request.fetchSource() == null || request.fetchSource().fetchSource() == false)) { return null; } + SourceLookup sourceLookup = new SourceLookup(); + sourceLookup.setSource(source); boolean sourceRequested = false; Map fields = null; if (request.fields() != null && request.fields().length > 0) { - SourceLookup sourceLookup = new SourceLookup(); - sourceLookup.setSource(source); for (String field : request.fields()) { if (field.equals("_source")) { sourceRequested = true; @@ -298,8 +313,26 @@ public GetResult extractGetResult(final UpdateRequest request, String concreteIn } } + BytesReference sourceFilteredAsBytes = sourceAsBytes; + if (request.fetchSource() != null && request.fetchSource().fetchSource()) { + sourceRequested = true; + if (request.fetchSource().includes().length > 0 || request.fetchSource().excludes().length > 0) { + Object value = sourceLookup.filter(request.fetchSource()); + try { + final int initialCapacity = Math.min(1024, sourceAsBytes.length()); + BytesStreamOutput streamOutput = new BytesStreamOutput(initialCapacity); + try (XContentBuilder builder = new XContentBuilder(sourceContentType.xContent(), streamOutput)) { + builder.value(value); + sourceFilteredAsBytes = builder.bytes(); + } + } catch (IOException e) { + throw new ElasticsearchException("Error filtering source", e); + } + } + } + // TODO when using delete/none, we can still return the source as bytes by generating it (using the sourceContentType) - return new GetResult(concreteIndex, request.type(), request.id(), version, true, sourceRequested ? sourceAsBytes : null, fields); + return new GetResult(concreteIndex, request.type(), request.id(), version, true, sourceRequested ? 
sourceFilteredAsBytes : null, fields); } public static class Result { diff --git a/core/src/main/java/org/elasticsearch/action/update/UpdateRequest.java b/core/src/main/java/org/elasticsearch/action/update/UpdateRequest.java index 3f2dde6784aea..416b9e2c68508 100644 --- a/core/src/main/java/org/elasticsearch/action/update/UpdateRequest.java +++ b/core/src/main/java/org/elasticsearch/action/update/UpdateRequest.java @@ -20,28 +20,30 @@ package org.elasticsearch.action.update; import org.elasticsearch.action.ActionRequestValidationException; -import org.elasticsearch.action.DocumentRequest; +import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.action.support.ActiveShardCount; import org.elasticsearch.action.support.WriteRequest; import org.elasticsearch.action.support.replication.ReplicationRequest; import org.elasticsearch.action.support.single.instance.InstanceShardOperationRequest; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.lucene.uid.Versions; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.VersionType; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.script.Script; -import org.elasticsearch.script.ScriptService; -import org.elasticsearch.script.ScriptService.ScriptType; +import org.elasticsearch.script.ScriptType; +import org.elasticsearch.search.fetch.subphase.FetchSourceContext; import java.io.IOException; import java.util.Collections; @@ -54,7 +56,7 @@ /** */ public class UpdateRequest extends InstanceShardOperationRequest - implements DocumentRequest, WriteRequest { + implements DocWriteRequest, WriteRequest, ToXContentObject { private String type; private String id; @@ -68,6 +70,7 @@ public class UpdateRequest extends InstanceShardOperationRequest Script script; private String[] fields; + private FetchSourceContext fetchSourceContext; private long version = Versions.MATCH_ANY; private VersionType versionType = VersionType.INTERNAL; @@ -82,6 +85,7 @@ public class UpdateRequest extends InstanceShardOperationRequest private boolean scriptedUpsert = false; private boolean docAsUpsert = false; private boolean detectNoop = true; + private static DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(UpdateRequest.class)); @Nullable private IndexRequest doc; @@ -106,9 +110,8 @@ public ActionRequestValidationException validate() { validationException = addValidationError("id is missing", validationException); } - if (versionType != VersionType.INTERNAL) { - validationException = addValidationError("version type [" + versionType + "] is not supported by the update API", - validationException); + if (!(versionType == VersionType.INTERNAL || versionType == VersionType.FORCE)) { + 
validationException = addValidationError("version type [" + versionType + "] is not supported by the update API", validationException); } else { if (version != Versions.MATCH_ANY && retryOnConflict > 0) { @@ -129,6 +132,9 @@ public ActionRequestValidationException validate() { if (doc == null && docAsUpsert) { validationException = addValidationError("doc must be specified if doc_as_upsert is enabled", validationException); } + if (versionType == VersionType.FORCE) { + deprecationLogger.deprecated("version type FORCE is deprecated and will be removed in the next major version"); + } return validationException; } @@ -221,14 +227,14 @@ public UpdateRequest script(Script script) { */ @Deprecated public String scriptString() { - return this.script == null ? null : this.script.getScript(); + return this.script == null ? null : this.script.getIdOrCode(); } /** * @deprecated Use {@link #script()} instead */ @Deprecated - public ScriptService.ScriptType scriptType() { + public ScriptType scriptType() { return this.script == null ? null : this.script.getType(); } @@ -248,7 +254,7 @@ public Map scriptParams() { * @deprecated Use {@link #script(Script)} instead */ @Deprecated - public UpdateRequest script(String script, ScriptService.ScriptType scriptType) { + public UpdateRequest script(String script, ScriptType scriptType) { updateOrCreateScript(script, scriptType, null, null); return this; } @@ -324,13 +330,13 @@ public UpdateRequest scriptParams(Map scriptParams) { private void updateOrCreateScript(String scriptContent, ScriptType type, String lang, Map params) { Script script = script(); if (script == null) { - script = new Script(scriptContent == null ? "" : scriptContent, type == null ? ScriptType.INLINE : type, lang, params); + script = new Script(type == null ? ScriptType.INLINE : type, lang, scriptContent == null ? "" : scriptContent, params); } else { - String newScriptContent = scriptContent == null ? script.getScript() : scriptContent; + String newScriptContent = scriptContent == null ? script.getIdOrCode() : scriptContent; ScriptType newScriptType = type == null ? script.getType() : type; String newScriptLang = lang == null ? script.getLang() : lang; Map newScriptParams = params == null ? 
script.getParams() : params; - script = new Script(newScriptContent, newScriptType, newScriptLang, newScriptParams); + script = new Script(newScriptType, newScriptLang, newScriptContent, newScriptParams); } script(script); } @@ -343,8 +349,8 @@ private void updateOrCreateScript(String scriptContent, ScriptType type, String * @deprecated Use {@link #script(Script)} instead */ @Deprecated - public UpdateRequest script(String script, ScriptService.ScriptType scriptType, @Nullable Map scriptParams) { - this.script = new Script(script, scriptType, null, scriptParams); + public UpdateRequest script(String script, ScriptType scriptType, @Nullable Map scriptParams) { + this.script = new Script(scriptType, Script.DEFAULT_SCRIPT_LANG, script, scriptParams); return this; } @@ -365,25 +371,91 @@ public UpdateRequest script(String script, ScriptService.ScriptType scriptType, * @deprecated Use {@link #script(Script)} instead */ @Deprecated - public UpdateRequest script(String script, @Nullable String scriptLang, ScriptService.ScriptType scriptType, + public UpdateRequest script(String script, @Nullable String scriptLang, ScriptType scriptType, @Nullable Map scriptParams) { - this.script = new Script(script, scriptType, scriptLang, scriptParams); + this.script = new Script(scriptType, scriptLang, script, scriptParams); return this; } /** * Explicitly specify the fields that will be returned. By default, nothing is returned. + * @deprecated Use {@link UpdateRequest#fetchSource(String[], String[])} instead */ + @Deprecated public UpdateRequest fields(String... fields) { this.fields = fields; return this; } + /** + * Indicate that _source should be returned with every hit, with an + * "include" and/or "exclude" set which can include simple wildcard + * elements. + * + * @param include + * An optional include (optionally wildcarded) pattern to filter + * the returned _source + * @param exclude + * An optional exclude (optionally wildcarded) pattern to filter + * the returned _source + */ + public UpdateRequest fetchSource(@Nullable String include, @Nullable String exclude) { + FetchSourceContext context = this.fetchSourceContext == null ? FetchSourceContext.FETCH_SOURCE : this.fetchSourceContext; + this.fetchSourceContext = new FetchSourceContext(context.fetchSource(), new String[] {include}, new String[]{exclude}); + return this; + } + + /** + * Indicate that _source should be returned, with an + * "include" and/or "exclude" set which can include simple wildcard + * elements. + * + * @param includes + * An optional list of include (optionally wildcarded) pattern to + * filter the returned _source + * @param excludes + * An optional list of exclude (optionally wildcarded) pattern to + * filter the returned _source + */ + public UpdateRequest fetchSource(@Nullable String[] includes, @Nullable String[] excludes) { + FetchSourceContext context = this.fetchSourceContext == null ? FetchSourceContext.FETCH_SOURCE : this.fetchSourceContext; + this.fetchSourceContext = new FetchSourceContext(context.fetchSource(), includes, excludes); + return this; + } + + /** + * Indicates whether the response should contain the updated _source. + */ + public UpdateRequest fetchSource(boolean fetchSource) { + FetchSourceContext context = this.fetchSourceContext == null ? 
FetchSourceContext.FETCH_SOURCE : this.fetchSourceContext; + this.fetchSourceContext = new FetchSourceContext(fetchSource, context.includes(), context.excludes()); + return this; + } + + /** + * Explicitely set the fetch source context for this request + */ + public UpdateRequest fetchSource(FetchSourceContext context) { + this.fetchSourceContext = context; + return this; + } + + /** * Get the fields to be returned. + * @deprecated Use {@link UpdateRequest#fetchSource()} instead */ + @Deprecated public String[] fields() { - return this.fields; + return fields; + } + + /** + * Gets the {@link FetchSourceContext} which defines how the _source should + * be fetched. + */ + public FetchSourceContext fetchSource() { + return fetchSourceContext; } /** @@ -399,31 +471,33 @@ public int retryOnConflict() { return this.retryOnConflict; } - /** - * Sets the version, which will cause the index operation to only be performed if a matching - * version exists and no changes happened on the doc since then. - */ + @Override public UpdateRequest version(long version) { this.version = version; return this; } + @Override public long version() { return this.version; } - /** - * Sets the versioning type. Defaults to {@link VersionType#INTERNAL}. - */ + @Override public UpdateRequest versionType(VersionType versionType) { this.versionType = versionType; return this; } + @Override public VersionType versionType() { return this.versionType; } + @Override + public OpType opType() { + return OpType.UPDATE; + } + @Override public UpdateRequest setRefreshPolicy(RefreshPolicy refreshPolicy) { this.refreshPolicy = refreshPolicy; @@ -491,7 +565,9 @@ public UpdateRequest doc(Map source, XContentType contentType) { /** * Sets the doc to use for updates when a script is not specified. + * @deprecated use {@link #doc(String, XContentType)} */ + @Deprecated public UpdateRequest doc(String source) { safeDoc().source(source); return this; @@ -500,6 +576,16 @@ public UpdateRequest doc(String source) { /** * Sets the doc to use for updates when a script is not specified. */ + public UpdateRequest doc(String source, XContentType xContentType) { + safeDoc().source(source, xContentType); + return this; + } + + /** + * Sets the doc to use for updates when a script is not specified. + * @deprecated use {@link #doc(byte[], XContentType)} + */ + @Deprecated public UpdateRequest doc(byte[] source) { safeDoc().source(source); return this; @@ -508,11 +594,29 @@ public UpdateRequest doc(byte[] source) { /** * Sets the doc to use for updates when a script is not specified. */ + public UpdateRequest doc(byte[] source, XContentType xContentType) { + safeDoc().source(source, xContentType); + return this; + } + + /** + * Sets the doc to use for updates when a script is not specified. + * @deprecated use {@link #doc(byte[], int, int, XContentType)} + */ + @Deprecated public UpdateRequest doc(byte[] source, int offset, int length) { safeDoc().source(source, offset, length); return this; } + /** + * Sets the doc to use for updates when a script is not specified. + */ + public UpdateRequest doc(byte[] source, int offset, int length, XContentType xContentType) { + safeDoc().source(source, offset, length, xContentType); + return this; + } + /** * Sets the doc to use for updates when a script is not specified, the doc provided * is a field and value pairs. @@ -523,10 +627,11 @@ public UpdateRequest doc(Object... source) { } /** - * Sets the doc to use for updates when a script is not specified. 
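A short sketch of the new source filtering in place of the deprecated `fields(...)`; the index, id, and include/exclude patterns are illustrative:

```java
// Sketch: ask for a filtered _source in the update response rather than stored fields.
UpdateRequest request = new UpdateRequest("index", "type", "1")
    .doc("{\"views\": 1}", XContentType.JSON)
    .fetchSource(new String[] {"views", "tags.*"},   // includes, wildcards allowed
                 new String[] {"*.raw"});            // excludes
```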
+ * Sets the doc to use for updates when a script is not specified, the doc provided + * is a field and value pairs. */ - public UpdateRequest doc(String field, Object value) { - safeDoc().source(field, value); + public UpdateRequest doc(XContentType xContentType, Object... source) { + safeDoc().source(xContentType, source); return this; } @@ -576,7 +681,9 @@ public UpdateRequest upsert(Map source, XContentType contentType) { /** * Sets the doc source of the update request to be used when the document does not exists. + * @deprecated use {@link #upsert(String, XContentType)} */ + @Deprecated public UpdateRequest upsert(String source) { safeUpsertRequest().source(source); return this; @@ -585,6 +692,16 @@ public UpdateRequest upsert(String source) { /** * Sets the doc source of the update request to be used when the document does not exists. */ + public UpdateRequest upsert(String source, XContentType xContentType) { + safeUpsertRequest().source(source, xContentType); + return this; + } + + /** + * Sets the doc source of the update request to be used when the document does not exists. + * @deprecated use {@link #upsert(byte[], XContentType)} + */ + @Deprecated public UpdateRequest upsert(byte[] source) { safeUpsertRequest().source(source); return this; @@ -593,11 +710,29 @@ public UpdateRequest upsert(byte[] source) { /** * Sets the doc source of the update request to be used when the document does not exists. */ + public UpdateRequest upsert(byte[] source, XContentType xContentType) { + safeUpsertRequest().source(source, xContentType); + return this; + } + + /** + * Sets the doc source of the update request to be used when the document does not exists. + * @deprecated use {@link #upsert(byte[], int, int, XContentType)} + */ + @Deprecated public UpdateRequest upsert(byte[] source, int offset, int length) { safeUpsertRequest().source(source, offset, length); return this; } + /** + * Sets the doc source of the update request to be used when the document does not exists. + */ + public UpdateRequest upsert(byte[] source, int offset, int length, XContentType xContentType) { + safeUpsertRequest().source(source, offset, length, xContentType); + return this; + } + /** * Sets the doc source of the update request to be used when the document does not exists. The doc * includes field and value pairs. @@ -607,6 +742,15 @@ public UpdateRequest upsert(Object... source) { return this; } + /** + * Sets the doc source of the update request to be used when the document does not exists. The doc + * includes field and value pairs. + */ + public UpdateRequest upsert(XContentType xContentType, Object... source) { + safeUpsertRequest().source(xContentType, source); + return this; + } + public IndexRequest upsertRequest() { return this.upsertRequest; } @@ -618,18 +762,6 @@ private IndexRequest safeUpsertRequest() { return upsertRequest; } - public UpdateRequest source(XContentBuilder source) throws Exception { - return source(source.bytes()); - } - - public UpdateRequest source(byte[] source) throws Exception { - return source(source, 0, source.length); - } - - public UpdateRequest source(byte[] source, int offset, int length) throws Exception { - return source(new BytesArray(source, offset, length)); - } - /** * Should this update attempt to detect if it is a noop? Defaults to true. 
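Because the overloads without a content type are deprecated above (and the old raw `source(...)` setters are removed), callers now state the format of what they hand in; the field/value-pairs form takes it as the first argument. A small sketch with invented field names:

```java
// Sketch: doc and upsert both carry an explicit XContentType now.
UpdateRequest request = new UpdateRequest("index", "type", "1")
    .doc(XContentType.JSON, "views", 1)
    .upsert(XContentType.JSON, "views", 1, "created", true);
```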
* @return this for chaining @@ -646,51 +778,49 @@ public boolean detectNoop() { return detectNoop; } - public UpdateRequest source(BytesReference source) throws Exception { + public UpdateRequest fromXContent(XContentParser parser) throws IOException { Script script = null; - try (XContentParser parser = XContentFactory.xContent(source).createParser(source)) { - XContentParser.Token token = parser.nextToken(); - if (token == null) { - return this; - } - String currentFieldName = null; - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if ("script".equals(currentFieldName)) { - script = Script.parse(parser, ParseFieldMatcher.EMPTY); - } else if ("scripted_upsert".equals(currentFieldName)) { - scriptedUpsert = parser.booleanValue(); - } else if ("upsert".equals(currentFieldName)) { - XContentType xContentType = XContentFactory.xContentType(source); - XContentBuilder builder = XContentFactory.contentBuilder(xContentType); - builder.copyCurrentStructure(parser); - safeUpsertRequest().source(builder); - } else if ("doc".equals(currentFieldName)) { - XContentType xContentType = XContentFactory.xContentType(source); - XContentBuilder docBuilder = XContentFactory.contentBuilder(xContentType); - docBuilder.copyCurrentStructure(parser); - safeDoc().source(docBuilder); - } else if ("doc_as_upsert".equals(currentFieldName)) { - docAsUpsert(parser.booleanValue()); - } else if ("detect_noop".equals(currentFieldName)) { - detectNoop(parser.booleanValue()); - } else if ("fields".equals(currentFieldName)) { - List fields = null; - if (token == XContentParser.Token.START_ARRAY) { - fields = (List) parser.list(); - } else if (token.isValue()) { - fields = Collections.singletonList(parser.text()); - } - if (fields != null) { - fields(fields.toArray(new String[fields.size()])); - } + XContentParser.Token token = parser.nextToken(); + if (token == null) { + return this; + } + String currentFieldName = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if ("script".equals(currentFieldName)) { + script = Script.parse(parser); + } else if ("scripted_upsert".equals(currentFieldName)) { + scriptedUpsert = parser.booleanValue(); + } else if ("upsert".equals(currentFieldName)) { + XContentBuilder builder = XContentFactory.contentBuilder(parser.contentType()); + builder.copyCurrentStructure(parser); + safeUpsertRequest().source(builder); + } else if ("doc".equals(currentFieldName)) { + XContentBuilder docBuilder = XContentFactory.contentBuilder(parser.contentType()); + docBuilder.copyCurrentStructure(parser); + safeDoc().source(docBuilder); + } else if ("doc_as_upsert".equals(currentFieldName)) { + docAsUpsert(parser.booleanValue()); + } else if ("detect_noop".equals(currentFieldName)) { + detectNoop(parser.booleanValue()); + } else if ("fields".equals(currentFieldName)) { + List fields = null; + if (token == XContentParser.Token.START_ARRAY) { + fields = (List) parser.list(); + } else if (token.isValue()) { + fields = Collections.singletonList(parser.text()); } - } - if (script != null) { - this.script = script; + if (fields != null) { + fields(fields.toArray(new String[fields.size()])); + } + } else if ("_source".equals(currentFieldName)) { + fetchSourceContext = FetchSourceContext.fromXContent(parser); } } + if (script != null) { + this.script = script; + } 
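The REST layer now constructs a parser itself and hands it to `fromXContent` instead of passing raw bytes to the removed `source(...)` methods. A sketch; the body content, field names, and the empty registry are assumptions:

```java
// Sketch: parse a typical update body with the same helper used by toXContent below.
BytesReference body = new BytesArray(
        "{\"doc\": {\"views\": 1}, \"detect_noop\": true, \"_source\": [\"views\"]}");
try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, body, XContentType.JSON)) {
    UpdateRequest request = new UpdateRequest("index", "type", "1").fromXContent(parser);
}
```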
return this; } @@ -729,13 +859,8 @@ public void readFrom(StreamInput in) throws IOException { doc = new IndexRequest(); doc.readFrom(in); } - int size = in.readInt(); - if (size >= 0) { - fields = new String[size]; - for (int i = 0; i < size; i++) { - fields[i] = in.readString(); - } - } + fields = in.readOptionalStringArray(); + fetchSourceContext = in.readOptionalWriteable(FetchSourceContext::new); if (in.readBoolean()) { upsertRequest = new IndexRequest(); upsertRequest.readFrom(in); @@ -772,14 +897,8 @@ public void writeTo(StreamOutput out) throws IOException { doc.id(id); doc.writeTo(out); } - if (fields == null) { - out.writeInt(-1); - } else { - out.writeInt(fields.length); - for (String field : fields) { - out.writeString(field); - } - } + out.writeOptionalStringArray(fields); + out.writeOptionalWriteable(fetchSourceContext); if (upsertRequest == null) { out.writeBoolean(false); } else { @@ -797,4 +916,42 @@ public void writeTo(StreamOutput out) throws IOException { out.writeBoolean(scriptedUpsert); } + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + if (docAsUpsert) { + builder.field("doc_as_upsert", docAsUpsert); + } + if (doc != null) { + XContentType xContentType = doc.getContentType(); + try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, doc.source(), xContentType)) { + builder.field("doc"); + builder.copyCurrentStructure(parser); + } + } + if (script != null) { + builder.field("script", script); + } + if (upsertRequest != null) { + XContentType xContentType = upsertRequest.getContentType(); + try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, upsertRequest.source(), xContentType)) { + builder.field("upsert"); + builder.copyCurrentStructure(parser); + } + } + if (scriptedUpsert) { + builder.field("scripted_upsert", scriptedUpsert); + } + if (detectNoop == false) { + builder.field("detect_noop", detectNoop); + } + if (fields != null) { + builder.array("fields", fields); + } + if (fetchSourceContext != null) { + builder.field("_source", fetchSourceContext); + } + builder.endObject(); + return builder; + } } diff --git a/core/src/main/java/org/elasticsearch/action/update/UpdateRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/update/UpdateRequestBuilder.java index f2d80bfe66e8f..2846ac4977fc5 100644 --- a/core/src/main/java/org/elasticsearch/action/update/UpdateRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/update/UpdateRequestBuilder.java @@ -25,17 +25,22 @@ import org.elasticsearch.action.support.replication.ReplicationRequest; import org.elasticsearch.action.support.single.instance.InstanceShardOperationRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; -import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.VersionType; +import org.elasticsearch.rest.action.document.RestUpdateAction; import org.elasticsearch.script.Script; import java.util.Map; public class UpdateRequestBuilder extends InstanceShardOperationRequestBuilder implements WriteRequestBuilder { + private static final DeprecationLogger DEPRECATION_LOGGER = + new 
DeprecationLogger(Loggers.getLogger(RestUpdateAction.class)); public UpdateRequestBuilder(ElasticsearchClient client, UpdateAction action) { super(client, action, new UpdateRequest()); @@ -90,12 +95,57 @@ public UpdateRequestBuilder setScript(Script script) { /** * Explicitly specify the fields that will be returned. By default, nothing is returned. + * @deprecated Use {@link UpdateRequestBuilder#setFetchSource(String[], String[])} instead */ + @Deprecated public UpdateRequestBuilder setFields(String... fields) { + DEPRECATION_LOGGER.deprecated("Deprecated field [fields] used, expected [_source] instead"); request.fields(fields); return this; } + /** + * Indicate that _source should be returned with every hit, with an + * "include" and/or "exclude" set which can include simple wildcard + * elements. + * + * @param include + * An optional include (optionally wildcarded) pattern to filter + * the returned _source + * @param exclude + * An optional exclude (optionally wildcarded) pattern to filter + * the returned _source + */ + public UpdateRequestBuilder setFetchSource(@Nullable String include, @Nullable String exclude) { + request.fetchSource(include, exclude); + return this; + } + + /** + * Indicate that _source should be returned, with an + * "include" and/or "exclude" set which can include simple wildcard + * elements. + * + * @param includes + * An optional list of include (optionally wildcarded) pattern to + * filter the returned _source + * @param excludes + * An optional list of exclude (optionally wildcarded) pattern to + * filter the returned _source + */ + public UpdateRequestBuilder setFetchSource(@Nullable String[] includes, @Nullable String[] excludes) { + request.fetchSource(includes, excludes); + return this; + } + + /** + * Indicates whether the response should contain the updated _source. + */ + public UpdateRequestBuilder setFetchSource(boolean fetchSource) { + request.fetchSource(fetchSource); + return this; + } + /** * Sets the number of retries of a version conflict occurs because the document was updated between * getting it and updating it. Defaults to 0. @@ -174,7 +224,9 @@ public UpdateRequestBuilder setDoc(Map source, XContentType contentType) { /** * Sets the doc to use for updates when a script is not specified. + * @deprecated use {@link #setDoc(String, XContentType)} */ + @Deprecated public UpdateRequestBuilder setDoc(String source) { request.doc(source); return this; @@ -183,6 +235,16 @@ public UpdateRequestBuilder setDoc(String source) { /** * Sets the doc to use for updates when a script is not specified. */ + public UpdateRequestBuilder setDoc(String source, XContentType xContentType) { + request.doc(source, xContentType); + return this; + } + + /** + * Sets the doc to use for updates when a script is not specified. + * @deprecated use {@link #setDoc(byte[], XContentType)} + */ + @Deprecated public UpdateRequestBuilder setDoc(byte[] source) { request.doc(source); return this; @@ -191,6 +253,16 @@ public UpdateRequestBuilder setDoc(byte[] source) { /** * Sets the doc to use for updates when a script is not specified. */ + public UpdateRequestBuilder setDoc(byte[] source, XContentType xContentType) { + request.doc(source, xContentType); + return this; + } + + /** + * Sets the doc to use for updates when a script is not specified. 
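A client-side sketch of the builder equivalents; `client` is assumed to be an available `Client` instance, and the field names are invented:

```java
// Sketch: setFields(...) now logs a deprecation warning, so request a filtered
// _source instead and read it back from the response's GetResult.
UpdateResponse response = client.prepareUpdate("index", "type", "1")
        .setDoc("{\"views\": 1}", XContentType.JSON)
        .setFetchSource(new String[] {"views"}, new String[0])
        .get();
```

When fetching was requested, `response.getGetResult()` then carries the filtered source.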
+ * @deprecated use {@link #setDoc(byte[], int, int, XContentType)} + */ + @Deprecated public UpdateRequestBuilder setDoc(byte[] source, int offset, int length) { request.doc(source, offset, length); return this; @@ -199,8 +271,8 @@ public UpdateRequestBuilder setDoc(byte[] source, int offset, int length) { /** * Sets the doc to use for updates when a script is not specified. */ - public UpdateRequestBuilder setDoc(String field, Object value) { - request.doc(field, value); + public UpdateRequestBuilder setDoc(byte[] source, int offset, int length, XContentType xContentType) { + request.doc(source, offset, length, xContentType); return this; } @@ -213,6 +285,15 @@ public UpdateRequestBuilder setDoc(Object... source) { return this; } + /** + * Sets the doc to use for updates when a script is not specified, the doc provided + * is a field and value pairs. + */ + public UpdateRequestBuilder setDoc(XContentType xContentType, Object... source) { + request.doc(xContentType, source); + return this; + } + /** * Sets the index request to be used if the document does not exists. Otherwise, a {@link org.elasticsearch.index.engine.DocumentMissingException} * is thrown. @@ -248,7 +329,9 @@ public UpdateRequestBuilder setUpsert(Map source, XContentType contentType) { /** * Sets the doc source of the update request to be used when the document does not exists. + * @deprecated use {@link #setUpsert(String, XContentType)} */ + @Deprecated public UpdateRequestBuilder setUpsert(String source) { request.upsert(source); return this; @@ -257,45 +340,62 @@ public UpdateRequestBuilder setUpsert(String source) { /** * Sets the doc source of the update request to be used when the document does not exists. */ - public UpdateRequestBuilder setUpsert(byte[] source) { - request.upsert(source); + public UpdateRequestBuilder setUpsert(String source, XContentType xContentType) { + request.upsert(source, xContentType); return this; } /** * Sets the doc source of the update request to be used when the document does not exists. + * @deprecated use {@link #setDoc(byte[], XContentType)} */ - public UpdateRequestBuilder setUpsert(byte[] source, int offset, int length) { - request.upsert(source, offset, length); + @Deprecated + public UpdateRequestBuilder setUpsert(byte[] source) { + request.upsert(source); return this; } /** - * Sets the doc source of the update request to be used when the document does not exists. The doc - * includes field and value pairs. + * Sets the doc source of the update request to be used when the document does not exists. */ - public UpdateRequestBuilder setUpsert(Object... source) { - request.upsert(source); + public UpdateRequestBuilder setUpsert(byte[] source, XContentType xContentType) { + request.upsert(source, xContentType); return this; } - public UpdateRequestBuilder setSource(XContentBuilder source) throws Exception { - request.source(source); + /** + * Sets the doc source of the update request to be used when the document does not exists. + * @deprecated use {@link #setUpsert(byte[], int, int, XContentType)} + */ + @Deprecated + public UpdateRequestBuilder setUpsert(byte[] source, int offset, int length) { + request.upsert(source, offset, length); return this; } - public UpdateRequestBuilder setSource(byte[] source) throws Exception { - request.source(source); + /** + * Sets the doc source of the update request to be used when the document does not exists. 
+ */ + public UpdateRequestBuilder setUpsert(byte[] source, int offset, int length, XContentType xContentType) { + request.upsert(source, offset, length, xContentType); return this; } - public UpdateRequestBuilder setSource(byte[] source, int offset, int length) throws Exception { - request.source(source, offset, length); + /** + * Sets the doc source of the update request to be used when the document does not exists. The doc + * includes field and value pairs. + */ + public UpdateRequestBuilder setUpsert(Object... source) { + request.upsert(source); return this; } - public UpdateRequestBuilder setSource(BytesReference source) throws Exception { - request.source(source); + /** + * Sets the doc source of the update request to be used when the document does not exists. The doc + * includes field and value pairs. + */ + public UpdateRequestBuilder setUpsert(XContentType xContentType, Object... source) { + request.upsert(xContentType, source); return this; } @@ -330,6 +430,7 @@ public UpdateRequestBuilder setScriptedUpsert(boolean scriptedUpsert) { * and the source of the document isn't changed then the ttl update won't take * effect. */ + @Deprecated public UpdateRequestBuilder setTtl(Long ttl) { request.doc().ttl(ttl); return this; @@ -340,6 +441,7 @@ public UpdateRequestBuilder setTtl(Long ttl) { * and the source of the document isn't changed then the ttl update won't take * effect. */ + @Deprecated public UpdateRequestBuilder setTtl(String ttl) { request.doc().ttl(ttl); return this; @@ -350,6 +452,7 @@ public UpdateRequestBuilder setTtl(String ttl) { * and the source of the document isn't changed then the ttl update won't take * effect. */ + @Deprecated public UpdateRequestBuilder setTtl(TimeValue ttl) { request.doc().ttl(ttl); return this; diff --git a/core/src/main/java/org/elasticsearch/action/update/UpdateResponse.java b/core/src/main/java/org/elasticsearch/action/update/UpdateResponse.java index 8061174d091a1..bc7ed479eaab3 100644 --- a/core/src/main/java/org/elasticsearch/action/update/UpdateResponse.java +++ b/core/src/main/java/org/elasticsearch/action/update/UpdateResponse.java @@ -23,14 +23,19 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.get.GetResult; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; import java.io.IOException; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + public class UpdateResponse extends DocWriteResponse { + private static final String GET = "get"; + private GetResult getResult; public UpdateResponse() { @@ -82,16 +87,11 @@ public void writeTo(StreamOutput out) throws IOException { } } - - static final class Fields { - static final String GET = "get"; - } - @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - super.toXContent(builder, params); + public XContentBuilder innerToXContent(XContentBuilder builder, Params params) throws IOException { + super.innerToXContent(builder, params); if (getGetResult() != null) { - builder.startObject(Fields.GET); + builder.startObject(GET); getGetResult().toXContentEmbedded(builder, params); builder.endObject(); } @@ -111,4 +111,59 @@ public String toString() { return builder.append("]").toString(); } + public static UpdateResponse fromXContent(XContentParser parser) throws 
IOException { + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.nextToken(), parser::getTokenLocation); + + Builder context = new Builder(); + while (parser.nextToken() != XContentParser.Token.END_OBJECT) { + parseXContentFields(parser, context); + } + return context.build(); + } + + /** + * Parse the current token and update the parsing context appropriately. + */ + public static void parseXContentFields(XContentParser parser, Builder context) throws IOException { + XContentParser.Token token = parser.currentToken(); + String currentFieldName = parser.currentName(); + + if (GET.equals(currentFieldName)) { + if (token == XContentParser.Token.START_OBJECT) { + context.setGetResult(GetResult.fromXContentEmbedded(parser)); + } + } else { + DocWriteResponse.parseInnerToXContent(parser, context); + } + } + + /** + * Builder class for {@link UpdateResponse}. This builder is usually used during xcontent parsing to + * temporarily store the parsed values, then the {@link DocWriteResponse.Builder#build()} method is called to + * instantiate the {@link UpdateResponse}. + */ + public static class Builder extends DocWriteResponse.Builder { + + private GetResult getResult = null; + + public void setGetResult(GetResult getResult) { + this.getResult = getResult; + } + + @Override + public UpdateResponse build() { + UpdateResponse update; + if (shardInfo != null) { + update = new UpdateResponse(shardInfo, shardId, type, id, version, result); + } else { + update = new UpdateResponse(shardId, type, id, version, result); + } + if (getResult != null) { + update.setGetResult(new GetResult(update.getIndex(), update.getType(), update.getId(), update.getVersion(), + getResult.isExists(),getResult.internalSourceRef(), getResult.getFields())); + } + update.setForcedRefresh(forcedRefresh); + return update; + } + } } diff --git a/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java b/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java index 2a38a020fee56..1d6cef6fffed2 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java @@ -19,18 +19,29 @@ package org.elasticsearch.bootstrap; +import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.core.Appender; +import org.apache.logging.log4j.core.LoggerContext; +import org.apache.logging.log4j.core.appender.ConsoleAppender; +import org.apache.logging.log4j.core.config.Configurator; import org.apache.lucene.util.Constants; import org.apache.lucene.util.IOUtils; import org.apache.lucene.util.StringHelper; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.Version; +import org.elasticsearch.cli.ExitCodes; import org.elasticsearch.cli.Terminal; +import org.elasticsearch.cli.UserException; import org.elasticsearch.common.PidFile; import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.inject.CreationException; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.ESLoggerFactory; import org.elasticsearch.common.logging.LogConfigurator; import org.elasticsearch.common.logging.Loggers; +import org.elasticsearch.common.settings.KeyStoreWrapper; +import org.elasticsearch.common.settings.SecureSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.BoundTransportAddress; import org.elasticsearch.env.Environment; @@ -38,13 +49,19 @@ import 
org.elasticsearch.monitor.os.OsProbe; import org.elasticsearch.monitor.process.ProcessProbe; import org.elasticsearch.node.Node; -import org.elasticsearch.node.internal.InternalSettingsPreparer; +import org.elasticsearch.node.NodeValidationException; +import org.elasticsearch.node.InternalSettingsPreparer; import java.io.ByteArrayOutputStream; import java.io.IOException; import java.io.PrintStream; +import java.io.UnsupportedEncodingException; +import java.net.URISyntaxException; import java.nio.file.Path; -import java.util.Map; +import java.security.NoSuchAlgorithmException; +import java.util.List; +import java.util.Collections; +import java.util.Locale; import java.util.concurrent.CountDownLatch; /** @@ -56,6 +73,7 @@ final class Bootstrap { private volatile Node node; private final CountDownLatch keepAliveLatch = new CountDownLatch(1); private final Thread keepAliveThread; + private final Spawner spawner = new Spawner(); /** creates a new instance */ Bootstrap() { @@ -80,7 +98,7 @@ public void run() { } /** initialize native resources */ - public static void initializeNatives(Path tmpFile, boolean mlockAll, boolean seccomp, boolean ctrlHandler) { + public static void initializeNatives(Path tmpFile, boolean mlockAll, boolean systemCallFilter, boolean ctrlHandler) { final Logger logger = Loggers.getLogger(Bootstrap.class); // check if the user is running as root, and bail @@ -88,9 +106,9 @@ public static void initializeNatives(Path tmpFile, boolean mlockAll, boolean sec throw new RuntimeException("can not run elasticsearch as root"); } - // enable secure computing mode - if (seccomp) { - Natives.trySeccomp(tmpFile); + // enable system call filter + if (systemCallFilter) { + Natives.tryInstallSystemCallFilter(tmpFile); } // mlockall if requested @@ -130,6 +148,7 @@ public boolean handle(int code) { Natives.trySetMaxNumberOfThreads(); Natives.trySetMaxSizeVirtualMemory(); + Natives.trySetMaxFileSize(); // init lucene random seed. 
it will use /dev/urandom where available: StringHelper.randomId(); @@ -142,12 +161,41 @@ static void initializeProbes() { JvmInfo.jvmInfo(); } - private void setup(boolean addShutdownHook, Environment environment) throws Exception { + private void setup(boolean addShutdownHook, Environment environment) throws BootstrapException { Settings settings = environment.settings(); + + try { + spawner.spawnNativePluginControllers(environment); + } catch (IOException e) { + throw new BootstrapException(e); + } + + // TODO: remove in 6.0.0 + if (BootstrapSettings.SECCOMP_SETTING.exists(settings)) { + final DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(Bootstrap.class)); + deprecationLogger.deprecated( + "[{}] is deprecated, use [{}]", + BootstrapSettings.SECCOMP_SETTING.getKey(), + BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.getKey()); + + if (!BootstrapSettings.SECCOMP_SETTING.get(settings).equals(BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.get(settings))) { + final String message = String.format( + Locale.ROOT, + "[%s=%s] and [%s=%s] are set to inconsistent values; as [%s] is deprecated, prefer [%s]", + BootstrapSettings.SECCOMP_SETTING.getKey(), + Boolean.toString(BootstrapSettings.SECCOMP_SETTING.get(settings)), + BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.getKey(), + Boolean.toString(BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.get(settings)), + BootstrapSettings.SECCOMP_SETTING.getKey(), + BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.getKey()); + throw new BootstrapException(new UserException(ExitCodes.CONFIG, message)); + } + } + initializeNatives( environment.tmpFile(), BootstrapSettings.MEMORY_LOCK_SETTING.get(settings), - BootstrapSettings.SECCOMP_SETTING.get(settings), + BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.get(settings), BootstrapSettings.CTRLHANDLER_SETTING.get(settings)); // initialize probes before the security manager is installed @@ -158,7 +206,9 @@ private void setup(boolean addShutdownHook, Environment environment) throws Exce @Override public void run() { try { - IOUtils.close(node); + IOUtils.close(node, spawner); + LoggerContext context = (LoggerContext) LogManager.getContext(false); + Configurator.shutdown(context); } catch (IOException ex) { throw new ElasticsearchException("failed to stop node", ex); } @@ -166,83 +216,120 @@ public void run() { }); } - // look for jar hell - JarHell.checkJarHell(); + try { + // look for jar hell + JarHell.checkJarHell(); + } catch (IOException | URISyntaxException e) { + throw new BootstrapException(e); + } // install SM after natives, shutdown hooks, etc. 
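For reference, a configuration sketch for the renamed setting; the literal keys are quoted from memory of `BootstrapSettings` (the class itself is not public API) and the value is only illustrative:

```java
// Sketch: prefer the new key; bootstrap.seccomp is still honored but deprecated
// and, if set, must agree with bootstrap.system_call_filter.
Settings settings = Settings.builder()
        .put("bootstrap.system_call_filter", true)   // replaces bootstrap.seccomp
        .build();
```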
- Security.configure(environment, BootstrapSettings.SECURITY_FILTER_BAD_DEFAULTS_SETTING.get(settings)); + try { + Security.configure(environment, BootstrapSettings.SECURITY_FILTER_BAD_DEFAULTS_SETTING.get(settings)); + } catch (IOException | NoSuchAlgorithmException e) { + throw new BootstrapException(e); + } node = new Node(environment) { @Override - protected void validateNodeBeforeAcceptingRequests(Settings settings, BoundTransportAddress boundTransportAddress) { - BootstrapCheck.check(settings, boundTransportAddress); + protected void validateNodeBeforeAcceptingRequests( + final Settings settings, + final BoundTransportAddress boundTransportAddress, List checks) throws NodeValidationException { + BootstrapChecks.check(settings, boundTransportAddress, checks); } }; } - private static Environment initialEnvironment(boolean foreground, Path pidFile, Map esSettings) { + private static SecureSettings loadSecureSettings(Environment initialEnv) throws BootstrapException { + final KeyStoreWrapper keystore; + try { + keystore = KeyStoreWrapper.load(initialEnv.configFile()); + } catch (IOException e) { + throw new BootstrapException(e); + } + if (keystore == null) { + return null; // no keystore + } + + try { + keystore.decrypt(new char[0] /* TODO: read password from stdin */); + } catch (Exception e) { + throw new BootstrapException(e); + } + return keystore; + } + + + private static Environment createEnvironment(boolean foreground, Path pidFile, + SecureSettings secureSettings, Settings initialSettings) { Terminal terminal = foreground ? Terminal.DEFAULT : null; Settings.Builder builder = Settings.builder(); if (pidFile != null) { builder.put(Environment.PIDFILE_SETTING.getKey(), pidFile); } - return InternalSettingsPreparer.prepareEnvironment(builder.build(), terminal, esSettings); + builder.put(initialSettings); + if (secureSettings != null) { + builder.setSecureSettings(secureSettings); + } + return InternalSettingsPreparer.prepareEnvironment(builder.build(), terminal, Collections.emptyMap()); } - private void start() { + private void start() throws NodeValidationException { node.start(); keepAliveThread.start(); } static void stop() throws IOException { try { - IOUtils.close(INSTANCE.node); + IOUtils.close(INSTANCE.node, INSTANCE.spawner); } finally { INSTANCE.keepAliveLatch.countDown(); } } - /** Set the system property before anything has a chance to trigger its use */ - // TODO: why? is it just a bad default somewhere? or is it some BS around 'but the client' garbage <-- my guess - @SuppressForbidden(reason = "sets logger prefix on initialization") - static void initLoggerPrefix() { - System.setProperty("es.logger.prefix", ""); - } - /** - * This method is invoked by {@link Elasticsearch#main(String[])} - * to startup elasticsearch. + * This method is invoked by {@link Elasticsearch#main(String[])} to startup elasticsearch. 
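A condensed sketch of the keystore handling that `loadSecureSettings` introduces, with error handling elided and the config directory left as a parameter:

```java
// Sketch mirroring loadSecureSettings: a missing elasticsearch.keystore simply
// means there are no secure settings to attach.
static SecureSettings loadKeystore(Path configDir) throws Exception {
    KeyStoreWrapper keystore = KeyStoreWrapper.load(configDir);
    if (keystore == null) {
        return null;
    }
    keystore.decrypt(new char[0]);   // no password support yet, per the TODO above
    return keystore;
}
```

The result is attached via `Settings.Builder#setSecureSettings` before the environment is prepared, and the keystore is closed again once the node has been constructed, which is why `init` closes it right after `setup`.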
*/ static void init( final boolean foreground, final Path pidFile, - final Map esSettings) throws Exception { - // Set the system property before anything has a chance to trigger its use - initLoggerPrefix(); - + final boolean quiet, + final Environment initialEnv) throws BootstrapException, NodeValidationException, UserException { // force the class initializer for BootstrapInfo to run before // the security manager is installed BootstrapInfo.init(); INSTANCE = new Bootstrap(); - Environment environment = initialEnvironment(foreground, pidFile, esSettings); - LogConfigurator.configure(environment, true); + final SecureSettings keystore = loadSecureSettings(initialEnv); + Environment environment = createEnvironment(foreground, pidFile, keystore, initialEnv.settings()); + try { + LogConfigurator.configure(environment); + } catch (IOException e) { + throw new BootstrapException(e); + } checkForCustomConfFile(); + checkConfigExtension(environment.configExtension()); if (environment.pidFile() != null) { - PidFile.create(environment.pidFile(), true); + try { + PidFile.create(environment.pidFile(), true); + } catch (IOException e) { + throw new BootstrapException(e); + } } + final boolean closeStandardStreams = (foreground == false) || quiet; try { - if (!foreground) { - Loggers.disableConsoleLogging(); + if (closeStandardStreams) { + final Logger rootLogger = ESLoggerFactory.getRootLogger(); + final Appender maybeConsoleAppender = Loggers.findAppender(rootLogger, ConsoleAppender.class); + if (maybeConsoleAppender != null) { + Loggers.removeAppender(rootLogger, maybeConsoleAppender); + } closeSystOut(); } - // fail if using broken version - JVMCheck.check(); - // fail if somebody replaced the lucene jars checkLucene(); @@ -254,15 +341,24 @@ static void init( INSTANCE.setup(true, environment); + try { + // any secure settings must be read during node construction + IOUtils.close(keystore); + } catch (IOException e) { + throw new BootstrapException(e); + } + INSTANCE.start(); - if (!foreground) { + if (closeStandardStreams) { closeSysError(); } - } catch (Exception e) { + } catch (NodeValidationException | RuntimeException e) { // disable console logging, so user does not see the exception twice (jvm will show it already) - if (foreground) { - Loggers.disableConsoleLogging(); + final Logger rootLogger = ESLoggerFactory.getRootLogger(); + final Appender maybeConsoleAppender = Loggers.findAppender(rootLogger, ConsoleAppender.class); + if (foreground && maybeConsoleAppender != null) { + Loggers.removeAppender(rootLogger, maybeConsoleAppender); } Logger logger = Loggers.getLogger(Bootstrap.class); if (INSTANCE.node != null) { @@ -272,17 +368,30 @@ static void init( if (e instanceof CreationException) { // guice: log the shortened exc to the log file ByteArrayOutputStream os = new ByteArrayOutputStream(); - PrintStream ps = new PrintStream(os, false, "UTF-8"); + PrintStream ps = null; + try { + ps = new PrintStream(os, false, "UTF-8"); + } catch (UnsupportedEncodingException uee) { + assert false; + e.addSuppressed(uee); + } new StartupException(e).printStackTrace(ps); ps.flush(); - logger.error("Guice Exception: {}", os.toString("UTF-8")); + try { + logger.error("Guice Exception: {}", os.toString("UTF-8")); + } catch (UnsupportedEncodingException uee) { + assert false; + e.addSuppressed(uee); + } + } else if (e instanceof NodeValidationException) { + logger.error("node validation exception\n{}", e.getMessage()); } else { // full exception logger.error("Exception", e); } // re-enable it if appropriate, 
so they can see any logging during the shutdown process - if (foreground) { - Loggers.enableConsoleLogging(); + if (foreground && maybeConsoleAppender != null) { + Loggers.addAppender(rootLogger, maybeConsoleAppender); } throw e; @@ -316,6 +425,14 @@ private static void checkUnsetAndMaybeExit(String confFileSetting, String settin } } + // pkg private for tests + static void checkConfigExtension(String extension) { + if (".yaml".equals(extension) || ".json".equals(extension)) { + final DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(Bootstrap.class)); + deprecationLogger.deprecated("elasticsearch{} is deprecated; rename your configuration file to elasticsearch.yml", extension); + } + } + @SuppressForbidden(reason = "Allowed to exit explicitly in bootstrap phase") private static void exit(int status) { System.exit(status); diff --git a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapCheck.java b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapCheck.java index 2091b4f3d3582..ffe52dfe5b957 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapCheck.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapCheck.java @@ -19,555 +19,27 @@ package org.elasticsearch.bootstrap; -import org.apache.logging.log4j.Logger; -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; -import org.apache.lucene.util.Constants; -import org.elasticsearch.common.SuppressForbidden; -import org.elasticsearch.common.io.PathUtils; -import org.elasticsearch.common.logging.Loggers; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.transport.BoundTransportAddress; -import org.elasticsearch.common.transport.TransportAddress; -import org.elasticsearch.monitor.jvm.JvmInfo; -import org.elasticsearch.monitor.process.ProcessProbe; -import org.elasticsearch.node.Node; - -import java.io.BufferedReader; -import java.io.IOException; -import java.nio.file.Files; -import java.nio.file.Path; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.Collections; -import java.util.List; -import java.util.Locale; - /** - * We enforce limits once any network host is configured. In this case we assume the node is running in production - * and all production limit checks must pass. This should be extended as we go to settings like: - * - discovery.zen.ping.unicast.hosts is set if we use zen disco - * - ensure we can write in all data directories - * - fail if the default cluster.name is used, if this is setup on network a real clustername should be used? + * Encapsulates a bootstrap check. */ -final class BootstrapCheck { - - private BootstrapCheck() { - } +public interface BootstrapCheck { /** - * checks the current limits against the snapshot or release build - * checks + * Test if the node fails the check. 
* - * @param settings the current node settings - * @param boundTransportAddress the node network bindings + * @return {@code true} if the node failed the check */ - static void check(final Settings settings, final BoundTransportAddress boundTransportAddress) { - check( - enforceLimits(boundTransportAddress), - BootstrapSettings.IGNORE_SYSTEM_BOOTSTRAP_CHECKS.get(settings), - checks(settings), - Node.NODE_NAME_SETTING.get(settings)); - } - - /** - * executes the provided checks and fails the node if - * enforceLimits is true, otherwise logs warnings - * - * @param enforceLimits true if the checks should be enforced or - * otherwise warned - * @param ignoreSystemChecks true if system checks should be enforced - * or otherwise warned - * @param checks the checks to execute - * @param nodeName the node name to be used as a logging prefix - */ - // visible for testing - static void check(final boolean enforceLimits, final boolean ignoreSystemChecks, final List checks, final String nodeName) { - check(enforceLimits, ignoreSystemChecks, checks, Loggers.getLogger(BootstrapCheck.class, nodeName)); - } - - /** - * executes the provided checks and fails the node if - * enforceLimits is true, otherwise logs warnings - * - * @param enforceLimits true if the checks should be enforced or - * otherwise warned - * @param ignoreSystemChecks true if system checks should be enforced - * or otherwise warned - * @param checks the checks to execute - * @param logger the logger to - */ - static void check( - final boolean enforceLimits, - final boolean ignoreSystemChecks, - final List checks, - final Logger logger) { - final List errors = new ArrayList<>(); - final List ignoredErrors = new ArrayList<>(); - - if (enforceLimits) { - logger.info("bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks"); - } - if (enforceLimits && ignoreSystemChecks) { - logger.warn("enforcing bootstrap checks but ignoring system bootstrap checks, consider not ignoring system checks"); - } - - for (final Check check : checks) { - if (check.check()) { - if ((!enforceLimits || (check.isSystemCheck() && ignoreSystemChecks)) && !check.alwaysEnforce()) { - ignoredErrors.add(check.errorMessage()); - } else { - errors.add(check.errorMessage()); - } - } - } - - if (!ignoredErrors.isEmpty()) { - ignoredErrors.forEach(error -> log(logger, error)); - } - - if (!errors.isEmpty()) { - final List messages = new ArrayList<>(1 + errors.size()); - messages.add("bootstrap checks failed"); - messages.addAll(errors); - final RuntimeException re = new RuntimeException(String.join("\n", messages)); - errors.stream().map(IllegalStateException::new).forEach(re::addSuppressed); - throw re; - } - - } - - static void log(final Logger logger, final String error) { - logger.warn(error); - } + boolean check(); /** - * Tests if the checks should be enforced + * The error message for a failed check. 
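With `BootstrapCheck` promoted to a standalone interface, a check is now just an object exposing `check()` plus an error message (assuming the interface also declares `errorMessage()`, as its Javadoc here indicates), and additional checks can be supplied through the list passed to `validateNodeBeforeAcceptingRequests`. A purely hypothetical example to show the shape; the path and threshold are invented and this check does not ship with Elasticsearch:

```java
// Hypothetical check, shown only to illustrate the new interface shape.
BootstrapCheck freeSpaceCheck = new BootstrapCheck() {
    final Path dataPath = PathUtils.get("/var/lib/elasticsearch");

    @Override
    public boolean check() {
        return dataPath.toFile().getUsableSpace() < (1L << 30);   // fail below ~1 GiB free
    }

    @Override
    public String errorMessage() {
        return "usable disk space for [" + dataPath + "] is below 1gb";
    }
};
```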
* - * @param boundTransportAddress the node network bindings - * @return true if the checks should be enforced + * @return the error message on check failure */ - // visible for testing - static boolean enforceLimits(BoundTransportAddress boundTransportAddress) { - return !(Arrays.stream(boundTransportAddress.boundAddresses()).allMatch(TransportAddress::isLoopbackOrLinkLocalAddress) && - boundTransportAddress.publishAddress().isLoopbackOrLinkLocalAddress()); - } - - // the list of checks to execute - static List checks(final Settings settings) { - final List checks = new ArrayList<>(); - checks.add(new HeapSizeCheck()); - final FileDescriptorCheck fileDescriptorCheck - = Constants.MAC_OS_X ? new OsXFileDescriptorCheck() : new FileDescriptorCheck(); - checks.add(fileDescriptorCheck); - checks.add(new MlockallCheck(BootstrapSettings.MEMORY_LOCK_SETTING.get(settings))); - if (Constants.LINUX) { - checks.add(new MaxNumberOfThreadsCheck()); - } - if (Constants.LINUX || Constants.MAC_OS_X) { - checks.add(new MaxSizeVirtualMemoryCheck()); - } - if (Constants.LINUX) { - checks.add(new MaxMapCountCheck()); - } - checks.add(new ClientJvmCheck()); - checks.add(new OnErrorCheck()); - checks.add(new OnOutOfMemoryErrorCheck()); - return Collections.unmodifiableList(checks); - } - - /** - * Encapsulates a limit check - */ - interface Check { - - /** - * test if the node fails the check - * - * @return true if the node failed the check - */ - boolean check(); - - /** - * the message for a failed check - * - * @return the error message on check failure - */ - String errorMessage(); - - /** - * test if the check is a system-level check - * - * @return true if the check is a system-level check as opposed - * to an Elasticsearch-level check - */ - boolean isSystemCheck(); - - default boolean alwaysEnforce() { - return false; - } - - } - - static class HeapSizeCheck implements BootstrapCheck.Check { - - @Override - public boolean check() { - final long initialHeapSize = getInitialHeapSize(); - final long maxHeapSize = getMaxHeapSize(); - return initialHeapSize != 0 && maxHeapSize != 0 && initialHeapSize != maxHeapSize; - } - - @Override - public String errorMessage() { - return String.format( - Locale.ROOT, - "initial heap size [%d] not equal to maximum heap size [%d]; " + - "this can cause resize pauses and prevents mlockall from locking the entire heap", - getInitialHeapSize(), - getMaxHeapSize() - ); - } - - // visible for testing - long getInitialHeapSize() { - return JvmInfo.jvmInfo().getConfiguredInitialHeapSize(); - } - - // visible for testing - long getMaxHeapSize() { - return JvmInfo.jvmInfo().getConfiguredMaxHeapSize(); - } - - @Override - public final boolean isSystemCheck() { - return false; - } - - } - - static class OsXFileDescriptorCheck extends FileDescriptorCheck { - - public OsXFileDescriptorCheck() { - // see constant OPEN_MAX defined in - // /usr/include/sys/syslimits.h on OS X and its use in JVM - // initialization in int os:init_2(void) defined in the JVM - // code for BSD (contains OS X) - super(10240); - } - - } - - static class FileDescriptorCheck implements Check { - - private final int limit; - - FileDescriptorCheck() { - this(1 << 16); - } - - protected FileDescriptorCheck(final int limit) { - if (limit <= 0) { - throw new IllegalArgumentException("limit must be positive but was [" + limit + "]"); - } - this.limit = limit; - } - - public final boolean check() { - final long maxFileDescriptorCount = getMaxFileDescriptorCount(); - return maxFileDescriptorCount != -1 && 
maxFileDescriptorCount < limit; - } - - @Override - public final String errorMessage() { - return String.format( - Locale.ROOT, - "max file descriptors [%d] for elasticsearch process likely too low, increase to at least [%d]", - getMaxFileDescriptorCount(), - limit - ); - } - - // visible for testing - long getMaxFileDescriptorCount() { - return ProcessProbe.getInstance().getMaxFileDescriptorCount(); - } - - @Override - public final boolean isSystemCheck() { - return true; - } - - } - - static class MlockallCheck implements Check { - - private final boolean mlockallSet; - - public MlockallCheck(final boolean mlockAllSet) { - this.mlockallSet = mlockAllSet; - } - - @Override - public boolean check() { - return mlockallSet && !isMemoryLocked(); - } - - @Override - public String errorMessage() { - return "memory locking requested for elasticsearch process but memory is not locked"; - } - - // visible for testing - boolean isMemoryLocked() { - return Natives.isMemoryLocked(); - } - - @Override - public final boolean isSystemCheck() { - return true; - } - - } - - static class MaxNumberOfThreadsCheck implements Check { - - private final long maxNumberOfThreadsThreshold = 1 << 11; - - @Override - public boolean check() { - return getMaxNumberOfThreads() != -1 && getMaxNumberOfThreads() < maxNumberOfThreadsThreshold; - } - - @Override - public String errorMessage() { - return String.format( - Locale.ROOT, - "max number of threads [%d] for user [%s] likely too low, increase to at least [%d]", - getMaxNumberOfThreads(), - BootstrapInfo.getSystemProperties().get("user.name"), - maxNumberOfThreadsThreshold); - } - - // visible for testing - long getMaxNumberOfThreads() { - return JNANatives.MAX_NUMBER_OF_THREADS; - } - - @Override - public final boolean isSystemCheck() { - return true; - } - - } - - static class MaxSizeVirtualMemoryCheck implements Check { - - @Override - public boolean check() { - return getMaxSizeVirtualMemory() != Long.MIN_VALUE && getMaxSizeVirtualMemory() != getRlimInfinity(); - } - - @Override - public String errorMessage() { - return String.format( - Locale.ROOT, - "max size virtual memory [%d] for user [%s] likely too low, increase to [unlimited]", - getMaxSizeVirtualMemory(), - BootstrapInfo.getSystemProperties().get("user.name")); - } - - // visible for testing - long getRlimInfinity() { - return JNACLibrary.RLIM_INFINITY; - } - - // visible for testing - long getMaxSizeVirtualMemory() { - return JNANatives.MAX_SIZE_VIRTUAL_MEMORY; - } - - @Override - public final boolean isSystemCheck() { - return true; - } - - } - - static class MaxMapCountCheck implements Check { - - private final long limit = 1 << 18; - - @Override - public boolean check() { - return getMaxMapCount() != -1 && getMaxMapCount() < limit; - } - - @Override - public String errorMessage() { - return String.format( - Locale.ROOT, - "max virtual memory areas vm.max_map_count [%d] likely too low, increase to at least [%d]", - getMaxMapCount(), - limit); - } - - // visible for testing - long getMaxMapCount() { - return getMaxMapCount(Loggers.getLogger(BootstrapCheck.class)); - } - - // visible for testing - long getMaxMapCount(Logger logger) { - final Path path = getProcSysVmMaxMapCountPath(); - try (final BufferedReader bufferedReader = getBufferedReader(path)) { - final String rawProcSysVmMaxMapCount = readProcSysVmMaxMapCount(bufferedReader); - if (rawProcSysVmMaxMapCount != null) { - try { - return parseProcSysVmMaxMapCount(rawProcSysVmMaxMapCount); - } catch (final NumberFormatException e) { - logger.warn( - 
(Supplier) () -> new ParameterizedMessage( - "unable to parse vm.max_map_count [{}]", - rawProcSysVmMaxMapCount), - e); - } - } - } catch (final IOException e) { - logger.warn((Supplier) () -> new ParameterizedMessage("I/O exception while trying to read [{}]", path), e); - } - return -1; - } - - @SuppressForbidden(reason = "access /proc/sys/vm/max_map_count") - private Path getProcSysVmMaxMapCountPath() { - return PathUtils.get("/proc/sys/vm/max_map_count"); - } - - // visible for testing - BufferedReader getBufferedReader(final Path path) throws IOException { - return Files.newBufferedReader(path); - } - - // visible for testing - String readProcSysVmMaxMapCount(final BufferedReader bufferedReader) throws IOException { - return bufferedReader.readLine(); - } - - // visible for testing - long parseProcSysVmMaxMapCount(final String procSysVmMaxMapCount) throws NumberFormatException { - return Long.parseLong(procSysVmMaxMapCount); - } - - @Override - public final boolean isSystemCheck() { - return true; - } - - } - - static class ClientJvmCheck implements BootstrapCheck.Check { - - @Override - public boolean check() { - return getVmName().toLowerCase(Locale.ROOT).contains("client"); - } - - // visible for testing - String getVmName() { - return JvmInfo.jvmInfo().getVmName(); - } - - @Override - public String errorMessage() { - return String.format( - Locale.ROOT, - "JVM is using the client VM [%s] but should be using a server VM for the best performance", - getVmName()); - } - - @Override - public final boolean isSystemCheck() { - return false; - } - - } - - abstract static class MightForkCheck implements BootstrapCheck.Check { - - @Override - public boolean check() { - return isSeccompInstalled() && mightFork(); - } - - // visible for testing - boolean isSeccompInstalled() { - return Natives.isSeccompInstalled(); - } - - // visible for testing - abstract boolean mightFork(); - - @Override - public final boolean isSystemCheck() { - return false; - } - - @Override - public final boolean alwaysEnforce() { - return true; - } - - } - - static class OnErrorCheck extends MightForkCheck { - - @Override - boolean mightFork() { - final String onError = onError(); - return onError != null && !onError.equals(""); - } - - // visible for testing - String onError() { - return JvmInfo.jvmInfo().onError(); - } - - @Override - public String errorMessage() { - return String.format( - Locale.ROOT, - "OnError [%s] requires forking but is prevented by system call filters ([%s=true]);" + - " upgrade to at least Java 8u92 and use ExitOnOutOfMemoryError", - onError(), - BootstrapSettings.SECCOMP_SETTING.getKey()); - } - - } - - static class OnOutOfMemoryErrorCheck extends MightForkCheck { - - @Override - boolean mightFork() { - final String onOutOfMemoryError = onOutOfMemoryError(); - return onOutOfMemoryError != null && !onOutOfMemoryError.equals(""); - } - - // visible for testing - String onOutOfMemoryError() { - return JvmInfo.jvmInfo().onOutOfMemoryError(); - } - - @Override - public String errorMessage() { - return String.format( - Locale.ROOT, - "OnOutOfMemoryError [%s] requires forking but is prevented by system call filters ([%s=true]);" + - " upgrade to at least Java 8u92 and use ExitOnOutOfMemoryError", - onOutOfMemoryError(), - BootstrapSettings.SECCOMP_SETTING.getKey()); - } + String errorMessage(); + default boolean alwaysEnforce() { + return false; } } diff --git a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapChecks.java b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapChecks.java 
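The interface above replaces the old nested BootstrapCheck.Check contract with a public, top-level one: check() returns true when the check fails, errorMessage() describes the failure, and alwaysEnforce() defaults to false. External checks written against it can be passed to the new BootstrapChecks class below through its additionalChecks parameter, and the class also honors the es.enforce.bootstrap.checks system property for forcing enforcement. A minimal sketch of a custom check against that contract follows; the class name and the condition it tests are hypothetical, only the interface itself comes from the change above.

```java
import org.elasticsearch.bootstrap.BootstrapCheck;

// Hypothetical example only: a custom check written against the new public BootstrapCheck interface.
public class ExampleMinimumHeapCheck implements BootstrapCheck {

    private final long configuredMaxHeapBytes;
    private final long requiredMinimumBytes;

    public ExampleMinimumHeapCheck(final long configuredMaxHeapBytes, final long requiredMinimumBytes) {
        this.configuredMaxHeapBytes = configuredMaxHeapBytes;
        this.requiredMinimumBytes = requiredMinimumBytes;
    }

    @Override
    public boolean check() {
        // returning true means the check FAILED, mirroring the built-in checks
        return configuredMaxHeapBytes < requiredMinimumBytes;
    }

    @Override
    public String errorMessage() {
        return "max heap size [" + configuredMaxHeapBytes
                + "] is below the example threshold [" + requiredMinimumBytes + "]";
    }

    // alwaysEnforce() is inherited and defaults to false, so this check is only enforced
    // when the node binds to a non-loopback address or es.enforce.bootstrap.checks=true
}
```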
new file mode 100644 index 0000000000000..45cd04e794225 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapChecks.java @@ -0,0 +1,701 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.bootstrap; + +import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.logging.log4j.util.Supplier; +import org.apache.lucene.util.Constants; +import org.elasticsearch.common.SuppressForbidden; +import org.elasticsearch.common.io.PathUtils; +import org.elasticsearch.common.logging.Loggers; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.transport.BoundTransportAddress; +import org.elasticsearch.common.transport.TransportAddress; +import org.elasticsearch.discovery.DiscoveryModule; +import org.elasticsearch.monitor.jvm.JvmInfo; +import org.elasticsearch.monitor.process.ProcessProbe; +import org.elasticsearch.node.Node; +import org.elasticsearch.node.NodeValidationException; + +import java.io.BufferedReader; +import java.io.IOException; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.Locale; +import java.util.function.Predicate; +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +/** + * We enforce bootstrap checks once a node has the transport protocol bound to a non-loopback interface or if the system property {@code + * es.enforce.bootstrap.checks} is set to {@code true}. In this case we assume the node is running in production and all bootstrap checks must + * pass. + */ +final class BootstrapChecks { + + private BootstrapChecks() { + } + + static final String ES_ENFORCE_BOOTSTRAP_CHECKS = "es.enforce.bootstrap.checks"; + + /** + * Executes the bootstrap checks if the node has the transport protocol bound to a non-loopback interface. If the system property + * {@code es.enforce.bootstrap.checks} is set to {@code true} then the bootstrap checks will be enforced regardless of whether or not + * the transport protocol is bound to a non-loopback interface.
+ * + * @param settings the current node settings + * @param boundTransportAddress the node network bindings + */ + static void check(final Settings settings, final BoundTransportAddress boundTransportAddress, List additionalChecks) + throws NodeValidationException { + final List builtInChecks = checks(settings); + final List combinedChecks = new ArrayList<>(builtInChecks); + combinedChecks.addAll(additionalChecks); + check( + enforceLimits(boundTransportAddress, DiscoveryModule.DISCOVERY_TYPE_SETTING.get(settings)), + Collections.unmodifiableList(combinedChecks), + Node.NODE_NAME_SETTING.get(settings)); + } + + /** + * Executes the provided checks and fails the node if {@code enforceLimits} is {@code true}, otherwise logs warnings. If the system + * property {@code es.enforce.bootstrap.checks} is set to {@code true} then the bootstrap checks will be enforced regardless of whether + * or not the transport protocol is bound to a non-loopback interface. + * + * @param enforceLimits {@code true} if the checks should be enforced or otherwise warned + * @param checks the checks to execute + * @param nodeName the node name to be used as a logging prefix + */ + static void check( + final boolean enforceLimits, + final List checks, + final String nodeName) throws NodeValidationException { + check(enforceLimits, checks, Loggers.getLogger(BootstrapChecks.class, nodeName)); + } + + /** + * Executes the provided checks and fails the node if {@code enforceLimits} is {@code true}, otherwise logs warnings. If the system + * property {@code es.enforce.bootstrap.checks} is set to {@code true} then the bootstrap checks will be enforced regardless of whether + * or not the transport protocol is bound to a non-loopback interface. + * + * @param enforceLimits {@code true} if the checks should be enforced or otherwise warned + * @param checks the checks to execute + * @param logger the logger to log to + */ + static void check( + final boolean enforceLimits, + final List checks, + final Logger logger) throws NodeValidationException { + final List errors = new ArrayList<>(); + final List ignoredErrors = new ArrayList<>(); + + final String esEnforceBootstrapChecks = System.getProperty(ES_ENFORCE_BOOTSTRAP_CHECKS); + final boolean enforceBootstrapChecks; + if (esEnforceBootstrapChecks == null) { + enforceBootstrapChecks = false; + } else if (Boolean.TRUE.toString().equals(esEnforceBootstrapChecks)) { + enforceBootstrapChecks = true; + } else { + final String message = + String.format( + Locale.ROOT, + "[%s] must be [true] but was [%s]", + ES_ENFORCE_BOOTSTRAP_CHECKS, + esEnforceBootstrapChecks); + throw new IllegalArgumentException(message); + } + + if (enforceLimits) { + logger.info("bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks"); + } else if (enforceBootstrapChecks) { + logger.info("explicitly enforcing bootstrap checks"); + } + + for (final BootstrapCheck check : checks) { + if (check.check()) { + if (!(enforceLimits || enforceBootstrapChecks) && !check.alwaysEnforce()) { + ignoredErrors.add(check.errorMessage()); + } else { + errors.add(check.errorMessage()); + } + } + } + + if (!ignoredErrors.isEmpty()) { + ignoredErrors.forEach(error -> log(logger, error)); + } + + if (!errors.isEmpty()) { + final List messages = new ArrayList<>(1 + errors.size()); + messages.add("[" + errors.size() + "] bootstrap checks failed"); + for (int i = 0; i < errors.size(); i++) { + messages.add("[" + (i + 1) + "]: " + errors.get(i)); + } + final NodeValidationException ne = new
NodeValidationException(String.join("\n", messages)); + errors.stream().map(IllegalStateException::new).forEach(ne::addSuppressed); + throw ne; + } + } + + static void log(final Logger logger, final String error) { + logger.warn(error); + } + + /** + * Tests if the checks should be enforced. + * + * @param boundTransportAddress the node network bindings + * @param discoveryType the discovery type + * @return {@code true} if the checks should be enforced + */ + static boolean enforceLimits(final BoundTransportAddress boundTransportAddress, final String discoveryType) { + final boolean bound = + !(Arrays.stream(boundTransportAddress.boundAddresses()).allMatch(TransportAddress::isLoopbackOrLinkLocalAddress) && + boundTransportAddress.publishAddress().isLoopbackOrLinkLocalAddress()); + return bound && !"single-node".equals(discoveryType); + } + + // the list of checks to execute + static List checks(final Settings settings) { + final List checks = new ArrayList<>(); + checks.add(new HeapSizeCheck()); + final FileDescriptorCheck fileDescriptorCheck + = Constants.MAC_OS_X ? new OsXFileDescriptorCheck() : new FileDescriptorCheck(); + checks.add(fileDescriptorCheck); + checks.add(new MlockallCheck(BootstrapSettings.MEMORY_LOCK_SETTING.get(settings))); + if (Constants.LINUX) { + checks.add(new MaxNumberOfThreadsCheck()); + } + if (Constants.LINUX || Constants.MAC_OS_X) { + checks.add(new MaxSizeVirtualMemoryCheck()); + } + if (Constants.LINUX || Constants.MAC_OS_X) { + checks.add(new MaxFileSizeCheck()); + } + if (Constants.LINUX) { + checks.add(new MaxMapCountCheck()); + } + checks.add(new ClientJvmCheck()); + checks.add(new UseSerialGCCheck()); + checks.add(new SystemCallFilterCheck(BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.get(settings))); + checks.add(new OnErrorCheck()); + checks.add(new OnOutOfMemoryErrorCheck()); + checks.add(new EarlyAccessCheck()); + checks.add(new G1GCCheck()); + return Collections.unmodifiableList(checks); + } + + static class HeapSizeCheck implements BootstrapCheck { + + @Override + public boolean check() { + final long initialHeapSize = getInitialHeapSize(); + final long maxHeapSize = getMaxHeapSize(); + return initialHeapSize != 0 && maxHeapSize != 0 && initialHeapSize != maxHeapSize; + } + + @Override + public String errorMessage() { + return String.format( + Locale.ROOT, + "initial heap size [%d] not equal to maximum heap size [%d]; " + + "this can cause resize pauses and prevents mlockall from locking the entire heap", + getInitialHeapSize(), + getMaxHeapSize() + ); + } + + // visible for testing + long getInitialHeapSize() { + return JvmInfo.jvmInfo().getConfiguredInitialHeapSize(); + } + + // visible for testing + long getMaxHeapSize() { + return JvmInfo.jvmInfo().getConfiguredMaxHeapSize(); + } + + } + + static class OsXFileDescriptorCheck extends FileDescriptorCheck { + + OsXFileDescriptorCheck() { + // see constant OPEN_MAX defined in + // /usr/include/sys/syslimits.h on OS X and its use in JVM + // initialization in int os:init_2(void) defined in the JVM + // code for BSD (contains OS X) + super(10240); + } + + } + + static class FileDescriptorCheck implements BootstrapCheck { + + private final int limit; + + FileDescriptorCheck() { + this(1 << 16); + } + + protected FileDescriptorCheck(final int limit) { + if (limit <= 0) { + throw new IllegalArgumentException("limit must be positive but was [" + limit + "]"); + } + this.limit = limit; + } + + public final boolean check() { + final long maxFileDescriptorCount = getMaxFileDescriptorCount(); + return 
maxFileDescriptorCount != -1 && maxFileDescriptorCount < limit; + } + + @Override + public final String errorMessage() { + return String.format( + Locale.ROOT, + "max file descriptors [%d] for elasticsearch process is too low, increase to at least [%d]", + getMaxFileDescriptorCount(), + limit + ); + } + + // visible for testing + long getMaxFileDescriptorCount() { + return ProcessProbe.getInstance().getMaxFileDescriptorCount(); + } + + } + + static class MlockallCheck implements BootstrapCheck { + + private final boolean mlockallSet; + + MlockallCheck(final boolean mlockAllSet) { + this.mlockallSet = mlockAllSet; + } + + @Override + public boolean check() { + return mlockallSet && !isMemoryLocked(); + } + + @Override + public String errorMessage() { + return "memory locking requested for elasticsearch process but memory is not locked"; + } + + // visible for testing + boolean isMemoryLocked() { + return Natives.isMemoryLocked(); + } + + } + + static class MaxNumberOfThreadsCheck implements BootstrapCheck { + + // this should be plenty for machines up to 256 cores + private static final long MAX_NUMBER_OF_THREADS_THRESHOLD = 1 << 11; + + @Override + public boolean check() { + return getMaxNumberOfThreads() != -1 && getMaxNumberOfThreads() < MAX_NUMBER_OF_THREADS_THRESHOLD; + } + + @Override + public String errorMessage() { + return String.format( + Locale.ROOT, + "max number of threads [%d] for user [%s] is too low, increase to at least [%d]", + getMaxNumberOfThreads(), + BootstrapInfo.getSystemProperties().get("user.name"), + MAX_NUMBER_OF_THREADS_THRESHOLD); + } + + // visible for testing + long getMaxNumberOfThreads() { + return JNANatives.MAX_NUMBER_OF_THREADS; + } + + } + + static class MaxSizeVirtualMemoryCheck implements BootstrapCheck { + + @Override + public boolean check() { + return getMaxSizeVirtualMemory() != Long.MIN_VALUE && getMaxSizeVirtualMemory() != getRlimInfinity(); + } + + @Override + public String errorMessage() { + return String.format( + Locale.ROOT, + "max size virtual memory [%d] for user [%s] is too low, increase to [unlimited]", + getMaxSizeVirtualMemory(), + BootstrapInfo.getSystemProperties().get("user.name")); + } + + // visible for testing + long getRlimInfinity() { + return JNACLibrary.RLIM_INFINITY; + } + + // visible for testing + long getMaxSizeVirtualMemory() { + return JNANatives.MAX_SIZE_VIRTUAL_MEMORY; + } + + } + + /** + * Bootstrap check that the maximum file size is unlimited (otherwise Elasticsearch could run into an I/O exception writing files).
+ */ + static class MaxFileSizeCheck implements BootstrapCheck { + + @Override + public boolean check() { + final long maxFileSize = getMaxFileSize(); + return maxFileSize != Long.MIN_VALUE && maxFileSize != getRlimInfinity(); + } + + @Override + public String errorMessage() { + return String.format( + Locale.ROOT, + "max file size [%d] for user [%s] is too low, increase to [unlimited]", + getMaxFileSize(), + BootstrapInfo.getSystemProperties().get("user.name")); + } + + long getRlimInfinity() { + return JNACLibrary.RLIM_INFINITY; + } + + long getMaxFileSize() { + return JNANatives.MAX_FILE_SIZE; + } + + } + + static class MaxMapCountCheck implements BootstrapCheck { + + private static final long LIMIT = 1 << 18; + + @Override + public boolean check() { + return getMaxMapCount() != -1 && getMaxMapCount() < LIMIT; + } + + @Override + public String errorMessage() { + return String.format( + Locale.ROOT, + "max virtual memory areas vm.max_map_count [%d] is too low, increase to at least [%d]", + getMaxMapCount(), + LIMIT); + } + + // visible for testing + long getMaxMapCount() { + return getMaxMapCount(Loggers.getLogger(BootstrapChecks.class)); + } + + // visible for testing + long getMaxMapCount(Logger logger) { + final Path path = getProcSysVmMaxMapCountPath(); + try (BufferedReader bufferedReader = getBufferedReader(path)) { + final String rawProcSysVmMaxMapCount = readProcSysVmMaxMapCount(bufferedReader); + if (rawProcSysVmMaxMapCount != null) { + try { + return parseProcSysVmMaxMapCount(rawProcSysVmMaxMapCount); + } catch (final NumberFormatException e) { + logger.warn( + (Supplier) () -> new ParameterizedMessage( + "unable to parse vm.max_map_count [{}]", + rawProcSysVmMaxMapCount), + e); + } + } + } catch (final IOException e) { + logger.warn((Supplier) () -> new ParameterizedMessage("I/O exception while trying to read [{}]", path), e); + } + return -1; + } + + @SuppressForbidden(reason = "access /proc/sys/vm/max_map_count") + private Path getProcSysVmMaxMapCountPath() { + return PathUtils.get("/proc/sys/vm/max_map_count"); + } + + // visible for testing + BufferedReader getBufferedReader(final Path path) throws IOException { + return Files.newBufferedReader(path); + } + + // visible for testing + String readProcSysVmMaxMapCount(final BufferedReader bufferedReader) throws IOException { + return bufferedReader.readLine(); + } + + // visible for testing + long parseProcSysVmMaxMapCount(final String procSysVmMaxMapCount) throws NumberFormatException { + return Long.parseLong(procSysVmMaxMapCount); + } + + } + + static class ClientJvmCheck implements BootstrapCheck { + + @Override + public boolean check() { + return getVmName().toLowerCase(Locale.ROOT).contains("client"); + } + + // visible for testing + String getVmName() { + return JvmInfo.jvmInfo().getVmName(); + } + + @Override + public String errorMessage() { + return String.format( + Locale.ROOT, + "JVM is using the client VM [%s] but should be using a server VM for the best performance", + getVmName()); + } + + } + + /** + * Checks if the serial collector is in use. This collector is single-threaded and devastating + * for performance and should not be used for a server application like Elasticsearch. 
+ */ + static class UseSerialGCCheck implements BootstrapCheck { + + @Override + public boolean check() { + return getUseSerialGC().equals("true"); + } + + // visible for testing + String getUseSerialGC() { + return JvmInfo.jvmInfo().useSerialGC(); + } + + @Override + public String errorMessage() { + return String.format( + Locale.ROOT, + "JVM is using the serial collector but should not be for the best performance; " + + "either it's the default for the VM [%s] or -XX:+UseSerialGC was explicitly specified", + JvmInfo.jvmInfo().getVmName()); + } + + } + + /** + * Bootstrap check that if system call filters are enabled, then system call filters must have installed successfully. + */ + static class SystemCallFilterCheck implements BootstrapCheck { + + private final boolean areSystemCallFiltersEnabled; + + SystemCallFilterCheck(final boolean areSystemCallFiltersEnabled) { + this.areSystemCallFiltersEnabled = areSystemCallFiltersEnabled; + } + + @Override + public boolean check() { + return areSystemCallFiltersEnabled && !isSystemCallFilterInstalled(); + } + + // visible for testing + boolean isSystemCallFilterInstalled() { + return Natives.isSystemCallFilterInstalled(); + } + + @Override + public String errorMessage() { + return "system call filters failed to install; " + + "check the logs and fix your configuration or disable system call filters at your own risk"; + } + + } + + abstract static class MightForkCheck implements BootstrapCheck { + + @Override + public boolean check() { + return isSystemCallFilterInstalled() && mightFork(); + } + + // visible for testing + boolean isSystemCallFilterInstalled() { + return Natives.isSystemCallFilterInstalled(); + } + + // visible for testing + abstract boolean mightFork(); + + @Override + public final boolean alwaysEnforce() { + return true; + } + + } + + static class OnErrorCheck extends MightForkCheck { + + @Override + boolean mightFork() { + final String onError = onError(); + return onError != null && !onError.equals(""); + } + + // visible for testing + String onError() { + return JvmInfo.jvmInfo().onError(); + } + + @Override + public String errorMessage() { + return String.format( + Locale.ROOT, + "OnError [%s] requires forking but is prevented by system call filters ([%s=true]);" + + " upgrade to at least Java 8u92 and use ExitOnOutOfMemoryError", + onError(), + BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.getKey()); + } + + } + + static class OnOutOfMemoryErrorCheck extends MightForkCheck { + + @Override + boolean mightFork() { + final String onOutOfMemoryError = onOutOfMemoryError(); + return onOutOfMemoryError != null && !onOutOfMemoryError.equals(""); + } + + // visible for testing + String onOutOfMemoryError() { + return JvmInfo.jvmInfo().onOutOfMemoryError(); + } + + @Override + public String errorMessage() { + return String.format( + Locale.ROOT, + "OnOutOfMemoryError [%s] requires forking but is prevented by system call filters ([%s=true]);" + + " upgrade to at least Java 8u92 and use ExitOnOutOfMemoryError", + onOutOfMemoryError(), + BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.getKey()); + } + + } + + /** + * Bootstrap check for early-access builds from OpenJDK. 
+ */ + static class EarlyAccessCheck implements BootstrapCheck { + + @Override + public boolean check() { + return "Oracle Corporation".equals(jvmVendor()) && javaVersion().endsWith("-ea"); + } + + String jvmVendor() { + return Constants.JVM_VENDOR; + } + + String javaVersion() { + return Constants.JAVA_VERSION; + } + + @Override + public String errorMessage() { + return String.format( + Locale.ROOT, + "Java version [%s] is an early-access build, only use release builds", + javaVersion()); + } + + } + + /** + * Bootstrap check for versions of HotSpot that are known to have issues that can lead to index corruption when G1GC is enabled. + */ + static class G1GCCheck implements BootstrapCheck { + + @Override + public boolean check() { + if ("Oracle Corporation".equals(jvmVendor()) && isJava8() && isG1GCEnabled()) { + final String jvmVersion = jvmVersion(); + // HotSpot versions on Java 8 match this regular expression; note that this changes with Java 9 after JEP-223 + final Pattern pattern = Pattern.compile("(\\d+)\\.(\\d+)-b\\d+"); + final Matcher matcher = pattern.matcher(jvmVersion); + final boolean matches = matcher.matches(); + assert matches : jvmVersion; + final int major = Integer.parseInt(matcher.group(1)); + final int update = Integer.parseInt(matcher.group(2)); + // HotSpot versions for Java 8 have major version 25, the bad versions are all versions prior to update 40 + return major == 25 && update < 40; + } else { + return false; + } + } + + // visible for testing + String jvmVendor() { + return Constants.JVM_VENDOR; + } + + // visible for testing + boolean isG1GCEnabled() { + assert "Oracle Corporation".equals(jvmVendor()); + return JvmInfo.jvmInfo().useG1GC().equals("true"); + } + + // visible for testing + String jvmVersion() { + assert "Oracle Corporation".equals(jvmVendor()); + return Constants.JVM_VERSION; + } + + // visible for testing + boolean isJava8() { + assert "Oracle Corporation".equals(jvmVendor()); + return JavaVersion.current().equals(JavaVersion.parse("1.8")); + } + + @Override + public String errorMessage() { + return String.format( + Locale.ROOT, + "JVM version [%s] can cause data corruption when used with G1GC; upgrade to at least Java 8u40", jvmVersion()); + } + + } + +} diff --git a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapException.java b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapException.java new file mode 100644 index 0000000000000..635afaf959967 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapException.java @@ -0,0 +1,43 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.bootstrap; + +import java.nio.file.Path; + +/** + * Wrapper exception for checked exceptions thrown during the bootstrap process. 
Methods invoked + * during bootstrap should explicitly declare the checked exceptions that they can throw, rather + * than declaring the top-level checked exception {@link Exception}. This exception exists to wrap + * these checked exceptions so that + * {@link Bootstrap#init(boolean, Path, boolean, org.elasticsearch.env.Environment)} + * does not have to declare all of these checked exceptions. + */ +class BootstrapException extends Exception { + + /** + * Wraps an existing exception. + * + * @param cause the underlying cause of bootstrap failing + */ + BootstrapException(final Exception cause) { + super(cause); + } + +} diff --git a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapInfo.java b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapInfo.java index 791836bf8a4a7..3ff6639de9aad 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapInfo.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapInfo.java @@ -24,16 +24,16 @@ import java.util.Dictionary; import java.util.Enumeration; -/** - * Exposes system startup information +/** + * Exposes system startup information */ @SuppressForbidden(reason = "exposes read-only view of system properties") public final class BootstrapInfo { /** no instantiation */ private BootstrapInfo() {} - - /** + + /** * Returns true if we successfully loaded native libraries. *
    * If this returns false, then native operations such as locking @@ -42,19 +42,19 @@ private BootstrapInfo() {} public static boolean isNativesAvailable() { return Natives.JNA_AVAILABLE; } - - /** + + /** * Returns true if we were able to lock the process's address space. */ public static boolean isMemoryLocked() { return Natives.isMemoryLocked(); } - + /** - * Returns true if secure computing mode is enabled (supported systems only) + * Returns true if system call filter is installed (supported systems only) */ - public static boolean isSeccompInstalled() { - return Natives.isSeccompInstalled(); + public static boolean isSystemCallFilterInstalled() { + return Natives.isSystemCallFilterInstalled(); } /** diff --git a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapSettings.java b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapSettings.java index ad37916881bfd..2d907088f1f9b 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapSettings.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapSettings.java @@ -33,11 +33,12 @@ private BootstrapSettings() { public static final Setting MEMORY_LOCK_SETTING = Setting.boolSetting("bootstrap.memory_lock", false, Property.NodeScope); + // TODO: remove in 6.0.0 public static final Setting SECCOMP_SETTING = Setting.boolSetting("bootstrap.seccomp", true, Property.NodeScope); + public static final Setting SYSTEM_CALL_FILTER_SETTING = + Setting.boolSetting("bootstrap.system_call_filter", SECCOMP_SETTING, Property.NodeScope); public static final Setting CTRLHANDLER_SETTING = Setting.boolSetting("bootstrap.ctrlhandler", true, Property.NodeScope); - public static final Setting IGNORE_SYSTEM_BOOTSTRAP_CHECKS = - Setting.boolSetting("bootstrap.ignore_system_bootstrap_checks", false, Property.NodeScope); } diff --git a/core/src/main/java/org/elasticsearch/bootstrap/ESPolicy.java b/core/src/main/java/org/elasticsearch/bootstrap/ESPolicy.java index ddc25c8853519..74fa7e0c1d5ac 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/ESPolicy.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/ESPolicy.java @@ -21,17 +21,19 @@ import org.elasticsearch.common.SuppressForbidden; -import java.net.SocketPermission; -import java.net.URL; import java.io.FilePermission; import java.io.IOException; +import java.net.SocketPermission; +import java.net.URL; import java.security.CodeSource; import java.security.Permission; import java.security.PermissionCollection; import java.security.Permissions; import java.security.Policy; import java.security.ProtectionDomain; +import java.util.Collections; import java.util.Map; +import java.util.function.Predicate; /** custom policy for union of static and dynamic permissions */ final class ESPolicy extends Policy { @@ -47,9 +49,9 @@ final class ESPolicy extends Policy { final PermissionCollection dynamic; final Map plugins; - public ESPolicy(PermissionCollection dynamic, Map plugins, boolean filterBadDefaults) { + ESPolicy(PermissionCollection dynamic, Map plugins, boolean filterBadDefaults) { this.template = Security.readPolicy(getClass().getResource(POLICY_RESOURCE), JarHell.parseClassPath()); - this.untrusted = Security.readPolicy(getClass().getResource(UNTRUSTED_RESOURCE), new URL[0]); + this.untrusted = Security.readPolicy(getClass().getResource(UNTRUSTED_RESOURCE), Collections.emptySet()); if (filterBadDefaults) { this.system = new SystemPolicy(Policy.getPolicy()); } else { @@ -133,18 +135,66 @@ public PermissionCollection getPermissions(CodeSource codesource) { // TODO: 
remove this hack when insecure defaults are removed from java + /** + * Wraps a bad default permission, applying a pre-implies to any permissions before checking if the wrapped bad default permission + * implies a permission. + */ + private static class BadDefaultPermission extends Permission { + + private final Permission badDefaultPermission; + private final Predicate preImplies; + + /** + * Construct an instance with a pre-implies check to apply to desired permissions. + * + * @param badDefaultPermission the bad default permission to wrap + * @param preImplies a test that is applied to a desired permission before checking if the bad default permission that + * this instance wraps implies the desired permission + */ + BadDefaultPermission(final Permission badDefaultPermission, final Predicate preImplies) { + super(badDefaultPermission.getName()); + this.badDefaultPermission = badDefaultPermission; + this.preImplies = preImplies; + } + + @Override + public final boolean implies(Permission permission) { + return preImplies.test(permission) && badDefaultPermission.implies(permission); + } + + @Override + public final boolean equals(Object obj) { + return badDefaultPermission.equals(obj); + } + + @Override + public int hashCode() { + return badDefaultPermission.hashCode(); + } + + @Override + public String getActions() { + return badDefaultPermission.getActions(); + } + + } + // default policy file states: // "It is strongly recommended that you either remove this permission // from this policy file or further restrict it to code sources // that you specify, because Thread.stop() is potentially unsafe." // not even sure this method still works... - static final Permission BAD_DEFAULT_NUMBER_ONE = new RuntimePermission("stopThread"); + private static final Permission BAD_DEFAULT_NUMBER_ONE = new BadDefaultPermission(new RuntimePermission("stopThread"), p -> true); // default policy file states: // "allows anyone to listen on dynamic ports" // specified exactly because that is what we want, and fastest since it won't imply any // expensive checks for the implicit "resolve" - static final Permission BAD_DEFAULT_NUMBER_TWO = new SocketPermission("localhost:0", "listen"); + private static final Permission BAD_DEFAULT_NUMBER_TWO = + new BadDefaultPermission( + new SocketPermission("localhost:0", "listen"), + // we apply this pre-implies test because some SocketPermission#implies calls do expensive reverse-DNS resolves + p -> p instanceof SocketPermission && p.getActions().contains("listen")); /** * Wraps the Java system policy, filtering out bad default permissions that @@ -159,7 +209,7 @@ static class SystemPolicy extends Policy { @Override public boolean implies(ProtectionDomain domain, Permission permission) { - if (BAD_DEFAULT_NUMBER_ONE.equals(permission) || BAD_DEFAULT_NUMBER_TWO.equals(permission)) { + if (BAD_DEFAULT_NUMBER_ONE.implies(permission) || BAD_DEFAULT_NUMBER_TWO.implies(permission)) { return false; } return delegate.implies(domain, permission); diff --git a/core/src/main/java/org/elasticsearch/bootstrap/Elasticsearch.java b/core/src/main/java/org/elasticsearch/bootstrap/Elasticsearch.java index fa9b153047730..a8152569083e0 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/Elasticsearch.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/Elasticsearch.java @@ -24,25 +24,29 @@ import joptsimple.OptionSpecBuilder; import joptsimple.util.PathConverter; import org.elasticsearch.Build; +import org.elasticsearch.cli.EnvironmentAwareCommand; import 
org.elasticsearch.cli.ExitCodes; -import org.elasticsearch.cli.SettingCommand; import org.elasticsearch.cli.Terminal; import org.elasticsearch.cli.UserException; +import org.elasticsearch.common.logging.LogConfigurator; +import org.elasticsearch.env.Environment; import org.elasticsearch.monitor.jvm.JvmInfo; +import org.elasticsearch.node.NodeValidationException; import java.io.IOException; import java.nio.file.Path; +import java.security.Permission; import java.util.Arrays; -import java.util.Map; /** * This class starts elasticsearch. */ -class Elasticsearch extends SettingCommand { +class Elasticsearch extends EnvironmentAwareCommand { private final OptionSpecBuilder versionOption; private final OptionSpecBuilder daemonizeOption; private final OptionSpec pidfileOption; + private final OptionSpecBuilder quietOption; // visible for testing Elasticsearch() { @@ -57,12 +61,25 @@ class Elasticsearch extends SettingCommand { .availableUnless(versionOption) .withRequiredArg() .withValuesConvertedBy(new PathConverter()); + quietOption = parser.acceptsAll(Arrays.asList("q", "quiet"), + "Turns off standard output/error streams logging in console") + .availableUnless(versionOption) + .availableUnless(daemonizeOption); } /** * Main entry point for starting elasticsearch */ public static void main(final String[] args) throws Exception { + // we want the JVM to think there is a security manager installed so that internal policy decisions that would be based on the + // presence of a security manager or lack thereof act as if there is a security manager present (e.g., DNS cache policy) + System.setSecurityManager(new SecurityManager() { + @Override + public void checkPermission(Permission perm) { + // grant all permissions so that we can later set the security manager to the one that we want + } + }); + LogConfigurator.registerErrorListener(); final Elasticsearch elasticsearch = new Elasticsearch(); int status = main(args, elasticsearch, Terminal.DEFAULT); if (status != ExitCodes.OK) { @@ -75,7 +92,16 @@ static int main(final String[] args, final Elasticsearch elasticsearch, final Te } @Override - protected void execute(Terminal terminal, OptionSet options, Map settings) throws Exception { + protected boolean shouldConfigureLoggingWithoutConfig() { + /* + * If we allow logging to be configured without a config before we are ready to read the log4j2.properties file, then we will fail + * to detect uses of logging before it is properly configured.
+ */ + return false; + } + + @Override + protected void execute(Terminal terminal, OptionSet options, Environment env) throws UserException { if (options.nonOptionArguments().isEmpty() == false) { throw new UserException(ExitCodes.USAGE, "Positional arguments not allowed, found " + options.nonOptionArguments()); } @@ -91,17 +117,23 @@ protected void execute(Terminal terminal, OptionSet options, Map final boolean daemonize = options.has(daemonizeOption); final Path pidFile = pidfileOption.value(options); + final boolean quiet = options.has(quietOption); - init(daemonize, pidFile, settings); + try { + init(daemonize, pidFile, quiet, env); + } catch (NodeValidationException e) { + throw new UserException(ExitCodes.CONFIG, e.getMessage()); + } } - void init(final boolean daemonize, final Path pidFile, final Map esSettings) { + void init(final boolean daemonize, final Path pidFile, final boolean quiet, Environment initialEnv) + throws NodeValidationException, UserException { try { - Bootstrap.init(!daemonize, pidFile, esSettings); - } catch (final Throwable t) { + Bootstrap.init(!daemonize, pidFile, quiet, initialEnv); + } catch (BootstrapException | RuntimeException e) { // format exceptions to the console in a special way // to avoid 2MB stacktraces from guice, etc. - throw new StartupException(t); + throw new StartupException(e); } } @@ -111,7 +143,8 @@ void init(final boolean daemonize, final Path pidFile, final Map * * http://commons.apache.org/proper/commons-daemon/procrun.html * - * NOTE: If this method is renamed and/or moved, make sure to update service.bat! + * NOTE: If this method is renamed and/or moved, make sure to + * update elasticsearch-service.bat! */ static void close(String[] args) throws IOException { Bootstrap.stop(); diff --git a/core/src/main/java/org/elasticsearch/bootstrap/JNACLibrary.java b/core/src/main/java/org/elasticsearch/bootstrap/JNACLibrary.java index fe0f400698fdb..64dabe9236306 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/JNACLibrary.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/JNACLibrary.java @@ -40,6 +40,7 @@ final class JNACLibrary { public static final int ENOMEM = 12; public static final int RLIMIT_MEMLOCK = Constants.MAC_OS_X ? 6 : 8; public static final int RLIMIT_AS = Constants.MAC_OS_X ? 5 : 9; + public static final int RLIMIT_FSIZE = Constants.MAC_OS_X ? 1 : 1; public static final long RLIM_INFINITY = Constants.MAC_OS_X ? 
9223372036854775807L : -1L; static { @@ -61,7 +62,7 @@ public static final class Rlimit extends Structure implements Structure.ByRefere @Override protected List getFieldOrder() { - return Arrays.asList(new String[] { "rlim_cur", "rlim_max" }); + return Arrays.asList("rlim_cur", "rlim_max"); } } diff --git a/core/src/main/java/org/elasticsearch/bootstrap/JNAKernel32Library.java b/core/src/main/java/org/elasticsearch/bootstrap/JNAKernel32Library.java index 747ba2e458f4b..99574c2b39bb6 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/JNAKernel32Library.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/JNAKernel32Library.java @@ -24,6 +24,7 @@ import com.sun.jna.NativeLong; import com.sun.jna.Pointer; import com.sun.jna.Structure; +import com.sun.jna.WString; import com.sun.jna.win32.StdCallLibrary; import org.apache.logging.log4j.Logger; import org.apache.lucene.util.Constants; @@ -109,7 +110,7 @@ class NativeHandlerCallback implements StdCallLibrary.StdCallCallback { private final ConsoleCtrlHandler handler; - public NativeHandlerCallback(ConsoleCtrlHandler handler) { + NativeHandlerCallback(ConsoleCtrlHandler handler) { this.handler = handler; } @@ -149,17 +150,19 @@ public static class MemoryBasicInformation extends Structure { @Override protected List getFieldOrder() { - return Arrays.asList(new String[]{"BaseAddress", "AllocationBase", "AllocationProtect", "RegionSize", "State", "Protect", "Type"}); + return Arrays.asList("BaseAddress", "AllocationBase", "AllocationProtect", "RegionSize", "State", "Protect", "Type"); } } public static class SizeT extends IntegerType { + // JNA requires this no-arg constructor to be public, + // otherwise it fails to register kernel32 library public SizeT() { this(0); } - public SizeT(long value) { + SizeT(long value) { super(Native.SIZE_T_SIZE, value); } @@ -221,6 +224,17 @@ public SizeT(long value) { */ native boolean CloseHandle(Pointer handle); + /** + * Retrieves the short path form of the specified path. See + * {@code GetShortPathName}. 
+ * + * @param lpszLongPath the path string + * @param lpszShortPath a buffer to receive the short name + * @param cchBuffer the size of the buffer + * @return the length of the string copied into {@code lpszShortPath}, otherwise zero for failure + */ + native int GetShortPathNameW(WString lpszLongPath, char[] lpszShortPath, int cchBuffer); + /** * Creates or opens a new job object * @@ -261,10 +275,8 @@ public static class JOBOBJECT_BASIC_LIMIT_INFORMATION extends Structure implemen @Override protected List getFieldOrder() { - return Arrays.asList(new String[] { - "PerProcessUserTimeLimit", "PerJobUserTimeLimit", "LimitFlags", "MinimumWorkingSetSize", - "MaximumWorkingSetSize", "ActiveProcessLimit", "Affinity", "PriorityClass", "SchedulingClass" - }); + return Arrays.asList("PerProcessUserTimeLimit", "PerJobUserTimeLimit", "LimitFlags", "MinimumWorkingSetSize", + "MaximumWorkingSetSize", "ActiveProcessLimit", "Affinity", "PriorityClass", "SchedulingClass"); } } diff --git a/core/src/main/java/org/elasticsearch/bootstrap/JNANatives.java b/core/src/main/java/org/elasticsearch/bootstrap/JNANatives.java index 5f3e357ff5f72..4a40db846e0df 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/JNANatives.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/JNANatives.java @@ -21,6 +21,7 @@ import com.sun.jna.Native; import com.sun.jna.Pointer; +import com.sun.jna.WString; import org.apache.logging.log4j.Logger; import org.apache.lucene.util.Constants; import org.elasticsearch.common.logging.Loggers; @@ -43,17 +44,19 @@ private JNANatives() {} // Set to true, in case native mlockall call was successful static boolean LOCAL_MLOCKALL = false; - // Set to true, in case native seccomp call was successful - static boolean LOCAL_SECCOMP = false; + // Set to true, in case native system call filter install was successful + static boolean LOCAL_SYSTEM_CALL_FILTER = false; // Set to true, in case policy can be applied to all threads of the process (even existing ones) // otherwise they are only inherited for new threads (ES app threads) - static boolean LOCAL_SECCOMP_ALL = false; + static boolean LOCAL_SYSTEM_CALL_FILTER_ALL = false; // set to the maximum number of threads that can be created for // the user ID that owns the running Elasticsearch process static long MAX_NUMBER_OF_THREADS = -1; static long MAX_SIZE_VIRTUAL_MEMORY = Long.MIN_VALUE; + static long MAX_FILE_SIZE = Long.MIN_VALUE; + static void tryMlockall() { int errno = Integer.MIN_VALUE; String errMsg = null; @@ -137,6 +140,17 @@ static void trySetMaxSizeVirtualMemory() { } } + static void trySetMaxFileSize() { + if (Constants.LINUX || Constants.MAC_OS_X) { + final JNACLibrary.Rlimit rlimit = new JNACLibrary.Rlimit(); + if (JNACLibrary.getrlimit(JNACLibrary.RLIMIT_FSIZE, rlimit) == 0) { + MAX_FILE_SIZE = rlimit.rlim_cur.longValue(); + } else { + logger.warn("unable to retrieve max file size [" + JNACLibrary.strerror(Native.getLastError()) + "]"); + } + } + } + static String rlimitToString(long value) { assert Constants.LINUX || Constants.MAC_OS_X; if (value == JNACLibrary.RLIM_INFINITY) { @@ -194,6 +208,35 @@ static void tryVirtualLock() { } } + /** + * Retrieves the short path form of the specified path. 
+ * + * @param path the path + * @return the short path name (or the original path if getting the short path name fails for any reason) + */ + static String getShortPathName(String path) { + assert Constants.WINDOWS; + try { + final WString longPath = new WString("\\\\?\\" + path); + // first we get the length of the buffer needed + final int length = JNAKernel32Library.getInstance().GetShortPathNameW(longPath, null, 0); + if (length == 0) { + logger.warn("failed to get short path name: {}", Native.getLastError()); + return path; + } + final char[] shortPath = new char[length]; + // knowing the length of the buffer, now we get the short name + if (JNAKernel32Library.getInstance().GetShortPathNameW(longPath, shortPath, length) > 0) { + return Native.toString(shortPath); + } else { + logger.warn("failed to get short path name: {}", Native.getLastError()); + return path; + } + } catch (final UnsatisfiedLinkError e) { + return path; + } + } + static void addConsoleCtrlHandler(ConsoleCtrlHandler handler) { // The console Ctrl handler is necessary on Windows platforms only. if (Constants.WINDOWS) { @@ -210,12 +253,12 @@ static void addConsoleCtrlHandler(ConsoleCtrlHandler handler) { } } - static void trySeccomp(Path tmpFile) { + static void tryInstallSystemCallFilter(Path tmpFile) { try { - int ret = Seccomp.init(tmpFile); - LOCAL_SECCOMP = true; + int ret = SystemCallFilter.init(tmpFile); + LOCAL_SYSTEM_CALL_FILTER = true; if (ret == 1) { - LOCAL_SECCOMP_ALL = true; + LOCAL_SYSTEM_CALL_FILTER_ALL = true; } } catch (Exception e) { // this is likely to happen unless the kernel is newish, its a best effort at the moment diff --git a/core/src/main/java/org/elasticsearch/bootstrap/JVMCheck.java b/core/src/main/java/org/elasticsearch/bootstrap/JVMCheck.java deleted file mode 100644 index c367b38a79baa..0000000000000 --- a/core/src/main/java/org/elasticsearch/bootstrap/JVMCheck.java +++ /dev/null @@ -1,246 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.bootstrap; - -import org.apache.lucene.util.Constants; -import org.elasticsearch.common.logging.Loggers; -import org.elasticsearch.monitor.jvm.JvmInfo; - -import java.lang.management.ManagementFactory; -import java.util.Collections; -import java.util.HashMap; -import java.util.Map; -import java.util.Optional; - -/** Checks that the JVM is ok and won't cause index corruption */ -final class JVMCheck { - /** no instantiation */ - private JVMCheck() {} - - /** - * URL with latest JVM recommendations - */ - static final String JVM_RECOMMENDATIONS = "http://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html"; - - /** - * System property which if set causes us to bypass the check completely (but issues a warning in doing so) - */ - static final String JVM_BYPASS = "es.bypass.vm.check"; - - /** - * Metadata and messaging for checking and reporting HotSpot - * issues. - */ - interface HotSpotCheck { - /** - * If this HotSpot check should be executed. - * - * @return true if this HotSpot check should be executed - */ - boolean check(); - - /** - * The error message to display when this HotSpot issue is - * present. - * - * @return the error message for this HotSpot issue - */ - String getErrorMessage(); - - /** - * The warning message for this HotSpot issue if a workaround - * exists and is used. - * - * @return the warning message for this HotSpot issue - */ - Optional getWarningMessage(); - - /** - * The workaround for this HotSpot issue, if one exists. - * - * @return the workaround for this HotSpot issue, if one exists - */ - Optional getWorkaround(); - } - - /** - * Metadata and messaging for hotspot bugs. - */ - static class HotspotBug implements HotSpotCheck { - - /** OpenJDK bug URL */ - final String bugUrl; - - /** Compiler workaround flag (null if there is no workaround) */ - final String workAround; - - HotspotBug(String bugUrl, String workAround) { - this.bugUrl = bugUrl; - this.workAround = workAround; - } - - /** Returns an error message to the user for a broken version */ - public String getErrorMessage() { - StringBuilder sb = new StringBuilder(); - sb.append("Java version: ").append(fullVersion()); - sb.append(" suffers from critical bug ").append(bugUrl); - sb.append(" which can cause data corruption."); - sb.append(System.lineSeparator()); - sb.append("Please upgrade the JVM, see ").append(JVM_RECOMMENDATIONS); - sb.append(" for current recommendations."); - if (workAround != null) { - sb.append(System.lineSeparator()); - sb.append("If you absolutely cannot upgrade, please add ").append(workAround); - sb.append(" to the ES_JAVA_OPTS environment variable."); - sb.append(System.lineSeparator()); - sb.append("Upgrading is preferred, this workaround will result in degraded performance."); - } - return sb.toString(); - } - - /** Warns the user when a workaround is being used to dodge the bug */ - public Optional getWarningMessage() { - StringBuilder sb = new StringBuilder(); - sb.append("Workaround flag ").append(workAround); - sb.append(" for bug ").append(bugUrl); - sb.append(" found. 
"); - sb.append(System.lineSeparator()); - sb.append("This will result in degraded performance!"); - sb.append(System.lineSeparator()); - sb.append("Upgrading is preferred, see ").append(JVM_RECOMMENDATIONS); - sb.append(" for current recommendations."); - return Optional.of(sb.toString()); - } - - public boolean check() { - return true; - } - - @Override - public Optional getWorkaround() { - return Optional.of(workAround); - } - } - - static class G1GCCheck implements HotSpotCheck { - @Override - public boolean check() { - return JvmInfo.jvmInfo().useG1GC().equals("true"); - } - - /** Returns an error message to the user for a broken version */ - public String getErrorMessage() { - StringBuilder sb = new StringBuilder(); - sb.append("Java version: ").append(fullVersion()); - sb.append(" can cause data corruption"); - sb.append(" when used with G1GC."); - sb.append(System.lineSeparator()); - sb.append("Please upgrade the JVM, see ").append(JVM_RECOMMENDATIONS); - sb.append(" for current recommendations."); - return sb.toString(); - } - - @Override - public Optional getWarningMessage() { - return Optional.empty(); - } - - @Override - public Optional getWorkaround() { - return Optional.empty(); - } - } - - /** mapping of hotspot version to hotspot bug information for the most serious bugs */ - static final Map JVM_BROKEN_HOTSPOT_VERSIONS; - - static { - Map bugs = new HashMap<>(); - - // 1.7.0: loop optimizer bug - bugs.put("21.0-b17", new HotspotBug("https://bugs.openjdk.java.net/browse/JDK-7070134", "-XX:-UseLoopPredicate")); - // register allocation issues (technically only x86/amd64). This impacted update 40, 45, and 51 - bugs.put("24.0-b56", new HotspotBug("https://bugs.openjdk.java.net/browse/JDK-8024830", "-XX:-UseSuperWord")); - bugs.put("24.45-b08", new HotspotBug("https://bugs.openjdk.java.net/browse/JDK-8024830", "-XX:-UseSuperWord")); - bugs.put("24.51-b03", new HotspotBug("https://bugs.openjdk.java.net/browse/JDK-8024830", "-XX:-UseSuperWord")); - G1GCCheck g1GcCheck = new G1GCCheck(); - bugs.put("25.0-b70", g1GcCheck); - bugs.put("25.11-b03", g1GcCheck); - bugs.put("25.20-b23", g1GcCheck); - bugs.put("25.25-b02", g1GcCheck); - bugs.put("25.31-b07", g1GcCheck); - - JVM_BROKEN_HOTSPOT_VERSIONS = Collections.unmodifiableMap(bugs); - } - - /** - * Checks that the current JVM is "ok". This means it doesn't have severe bugs that cause data corruption. - */ - static void check() { - if (Boolean.parseBoolean(System.getProperty(JVM_BYPASS))) { - Loggers.getLogger(JVMCheck.class).warn("bypassing jvm version check for version [{}], this can result in data corruption!", fullVersion()); - } else if ("Oracle Corporation".equals(Constants.JVM_VENDOR)) { - HotSpotCheck bug = JVM_BROKEN_HOTSPOT_VERSIONS.get(Constants.JVM_VERSION); - if (bug != null && bug.check()) { - if (bug.getWorkaround().isPresent() && ManagementFactory.getRuntimeMXBean().getInputArguments().contains(bug.getWorkaround().get())) { - Loggers.getLogger(JVMCheck.class).warn("{}", bug.getWarningMessage().get()); - } else { - throw new RuntimeException(bug.getErrorMessage()); - } - } - } else if ("IBM Corporation".equals(Constants.JVM_VENDOR)) { - // currently some old JVM versions from IBM will easily result in index corruption. - // 2.8+ seems ok for ES from testing. - float version = Float.POSITIVE_INFINITY; - try { - version = Float.parseFloat(Constants.JVM_VERSION); - } catch (NumberFormatException ignored) { - // this is just a simple best-effort to detect old runtimes, - // if we cannot parse it, we don't fail. 
- } - if (version < 2.8f) { - StringBuilder sb = new StringBuilder(); - sb.append("IBM J9 runtimes < 2.8 suffer from several bugs which can cause data corruption."); - sb.append(System.lineSeparator()); - sb.append("Your version: " + fullVersion()); - sb.append(System.lineSeparator()); - sb.append("Please upgrade the JVM to a recent IBM JDK"); - throw new RuntimeException(sb.toString()); - } - } - } - - /** - * Returns java + jvm version, looks like this: - * {@code Oracle Corporation 1.8.0_45 [Java HotSpot(TM) 64-Bit Server VM 25.45-b02]} - */ - static String fullVersion() { - StringBuilder sb = new StringBuilder(); - sb.append(Constants.JAVA_VENDOR); - sb.append(" "); - sb.append(Constants.JAVA_VERSION); - sb.append(" ["); - sb.append(Constants.JVM_NAME); - sb.append(" "); - sb.append(Constants.JVM_VERSION); - sb.append("]"); - return sb.toString(); - } -} diff --git a/core/src/main/java/org/elasticsearch/bootstrap/JarHell.java b/core/src/main/java/org/elasticsearch/bootstrap/JarHell.java index 4a0aaec68c250..1959e5e81394b 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/JarHell.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/JarHell.java @@ -27,6 +27,7 @@ import java.io.IOException; import java.net.MalformedURLException; +import java.net.URISyntaxException; import java.net.URL; import java.net.URLClassLoader; import java.nio.file.FileVisitResult; @@ -35,9 +36,11 @@ import java.nio.file.SimpleFileVisitor; import java.nio.file.attribute.BasicFileAttributes; import java.util.Arrays; +import java.util.Collections; import java.util.Enumeration; import java.util.HashMap; import java.util.HashSet; +import java.util.LinkedHashSet; import java.util.Locale; import java.util.Map; import java.util.Set; @@ -74,7 +77,7 @@ public static void main(String args[]) throws Exception { * Checks the current classpath for duplicate classes * @throws IllegalStateException if jar hell was found */ - public static void checkJarHell() throws Exception { + public static void checkJarHell() throws IOException, URISyntaxException { ClassLoader loader = JarHell.class.getClassLoader(); Logger logger = Loggers.getLogger(JarHell.class); if (logger.isDebugEnabled()) { @@ -92,7 +95,7 @@ public static void checkJarHell() throws Exception { * @return array of URLs * @throws IllegalStateException if the classpath contains empty elements */ - public static URL[] parseClassPath() { + public static Set parseClassPath() { return parseClassPath(System.getProperty("java.class.path")); } @@ -103,13 +106,12 @@ public static URL[] parseClassPath() { * @throws IllegalStateException if the classpath contains empty elements */ @SuppressForbidden(reason = "resolves against CWD because that is how classpaths work") - static URL[] parseClassPath(String classPath) { + static Set parseClassPath(String classPath) { String pathSeparator = System.getProperty("path.separator"); String fileSeparator = System.getProperty("file.separator"); String elements[] = classPath.split(pathSeparator); - URL urlElements[] = new URL[elements.length]; - for (int i = 0; i < elements.length; i++) { - String element = elements[i]; + Set urlElements = new LinkedHashSet<>(); // order is already lost, but some filesystems have it + for (String element : elements) { // Technically empty classpath element behaves like CWD. 
// So below is the "correct" code, however in practice with ES, this is usually just a misconfiguration, // from old shell scripts left behind or something: @@ -135,13 +137,17 @@ static URL[] parseClassPath(String classPath) { } // now just parse as ordinary file try { - urlElements[i] = PathUtils.get(element).toUri().toURL(); + URL url = PathUtils.get(element).toUri().toURL(); + if (urlElements.add(url) == false) { + throw new IllegalStateException("jar hell!" + System.lineSeparator() + + "duplicate jar [" + element + "] on classpath: " + classPath); + } } catch (MalformedURLException e) { // should not happen, as we use the filesystem API throw new RuntimeException(e); } } - return urlElements; + return Collections.unmodifiableSet(urlElements); } /** @@ -149,7 +155,7 @@ static URL[] parseClassPath(String classPath) { * @throws IllegalStateException if jar hell was found */ @SuppressForbidden(reason = "needs JarFile for speed, just reading entries") - public static void checkJarHell(URL urls[]) throws Exception { + public static void checkJarHell(Set urls) throws URISyntaxException, IOException { Logger logger = Loggers.getLogger(JarHell.class); // we don't try to be sneaky and use deprecated/internal/not portable stuff // like sun.boot.class.path, and with jigsaw we don't yet have a way to get @@ -167,8 +173,8 @@ public static void checkJarHell(URL urls[]) throws Exception { } if (path.toString().endsWith(".jar")) { if (!seenJars.add(path)) { - logger.debug("excluding duplicate classpath element: {}", path); - continue; + throw new IllegalStateException("jar hell!" + System.lineSeparator() + + "duplicate jar on classpath: " + path); } logger.debug("examining jar: {}", path); try (JarFile file = new JarFile(path.toString())) { @@ -197,8 +203,8 @@ public static void checkJarHell(URL urls[]) throws Exception { public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException { String entry = root.relativize(file).toString(); if (entry.endsWith(".class")) { - // normalize with the os separator - entry = entry.replace(sep, ".").substring(0, entry.length() - 6); + // normalize with the os separator, remove '.class' + entry = entry.replace(sep, ".").substring(0, entry.length() - ".class".length()); checkClass(clazzes, entry, path); } return super.visitFile(file, attrs); @@ -271,14 +277,6 @@ static void checkClass(Map clazzes, String clazz, Path jarpath) { "class: " + clazz + System.lineSeparator() + "exists multiple times in jar: " + jarpath + " !!!!!!!!!"); } else { - if (clazz.startsWith("org.apache.logging.log4j.core.impl.ThrowableProxy")) { - /* - * deliberate to hack around a bug in Log4j - * cf. https://github.com/elastic/elasticsearch/issues/20304 - * cf. https://issues.apache.org/jira/browse/LOG4J2-1560 - */ - return; - } throw new IllegalStateException("jar hell!" 
+ System.lineSeparator() + "class: " + clazz + System.lineSeparator() + "jar1: " + previous + System.lineSeparator() + diff --git a/core/src/main/java/org/elasticsearch/bootstrap/JavaVersion.java b/core/src/main/java/org/elasticsearch/bootstrap/JavaVersion.java index 63de83d88d0f8..03722e03060a7 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/JavaVersion.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/JavaVersion.java @@ -33,9 +33,7 @@ public List getVersion() { } private JavaVersion(List version) { - if (version.size() >= 2 - && version.get(0).intValue() == 1 - && version.get(1).intValue() == 8) { + if (version.size() >= 2 && version.get(0) == 1 && version.get(1) == 8) { // for Java 8 there is ambiguity since both 1.8 and 8 are supported, // so we rewrite the former to the latter version = new ArrayList<>(version.subList(1, version.size())); diff --git a/core/src/main/java/org/elasticsearch/bootstrap/Natives.java b/core/src/main/java/org/elasticsearch/bootstrap/Natives.java index 9fad34e329f58..ef40b5b862e26 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/Natives.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/Natives.java @@ -76,6 +76,20 @@ static void tryVirtualLock() { JNANatives.tryVirtualLock(); } + /** + * Retrieves the short path form of the specified path. + * + * @param path the path + * @return the short path name (or the original path if getting the short path name fails for any reason) + */ + static String getShortPathName(final String path) { + if (!JNA_AVAILABLE) { + logger.warn("cannot obtain short path for [{}] because JNA is not avilable", path); + return path; + } + return JNANatives.getShortPathName(path); + } + static void addConsoleCtrlHandler(ConsoleCtrlHandler handler) { if (!JNA_AVAILABLE) { logger.warn("cannot register console handler because JNA is not available"); @@ -91,12 +105,12 @@ static boolean isMemoryLocked() { return JNANatives.LOCAL_MLOCKALL; } - static void trySeccomp(Path tmpFile) { + static void tryInstallSystemCallFilter(Path tmpFile) { if (!JNA_AVAILABLE) { - logger.warn("cannot install syscall filters because JNA is not available"); + logger.warn("cannot install system call filter because JNA is not available"); return; } - JNANatives.trySeccomp(tmpFile); + JNANatives.tryInstallSystemCallFilter(tmpFile); } static void trySetMaxNumberOfThreads() { @@ -115,10 +129,18 @@ static void trySetMaxSizeVirtualMemory() { JNANatives.trySetMaxSizeVirtualMemory(); } - static boolean isSeccompInstalled() { + static void trySetMaxFileSize() { + if (!JNA_AVAILABLE) { + logger.warn("cannot getrlimit RLIMIT_FSIZE because JNA is not available"); + return; + } + JNANatives.trySetMaxFileSize(); + } + + static boolean isSystemCallFilterInstalled() { if (!JNA_AVAILABLE) { return false; } - return JNANatives.LOCAL_SECCOMP; + return JNANatives.LOCAL_SYSTEM_CALL_FILTER; } } diff --git a/core/src/main/java/org/elasticsearch/bootstrap/Seccomp.java b/core/src/main/java/org/elasticsearch/bootstrap/Seccomp.java deleted file mode 100644 index 88c618d445cd0..0000000000000 --- a/core/src/main/java/org/elasticsearch/bootstrap/Seccomp.java +++ /dev/null @@ -1,649 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.bootstrap; - -import com.sun.jna.Library; -import com.sun.jna.Memory; -import com.sun.jna.Native; -import com.sun.jna.NativeLong; -import com.sun.jna.Pointer; -import com.sun.jna.Structure; -import com.sun.jna.ptr.PointerByReference; -import org.apache.logging.log4j.Logger; -import org.apache.lucene.util.Constants; -import org.apache.lucene.util.IOUtils; -import org.elasticsearch.common.logging.Loggers; - -import java.io.IOException; -import java.nio.ByteBuffer; -import java.nio.ByteOrder; -import java.nio.file.Files; -import java.nio.file.Path; -import java.util.Arrays; -import java.util.Collections; -import java.util.HashMap; -import java.util.List; -import java.util.Map; - -/** - * Installs a limited form of secure computing mode, - * to filters system calls to block process execution. - *

- * This is supported on Linux, Solaris, FreeBSD, OpenBSD, Mac OS X, and Windows.
- * <p>
- * On Linux it currently supports amd64 and i386 architectures, requires Linux kernel 3.5 or above, and requires
- * {@code CONFIG_SECCOMP} and {@code CONFIG_SECCOMP_FILTER} compiled into the kernel.
- * <p>
- * On Linux BPF Filters are installed using either {@code seccomp(2)} (3.17+) or {@code prctl(2)} (3.5+). {@code seccomp(2)}
- * is preferred, as it allows filters to be applied to any existing threads in the process, and one motivation
- * here is to protect against bugs in the JVM. Otherwise, code will fall back to the {@code prctl(2)} method
- * which will at least protect elasticsearch application threads.
- * <p>
- * Linux BPF filters will return {@code EACCES} (Access Denied) for the following system calls:
- * <ul>
- *   <li>{@code execve}</li>
- *   <li>{@code fork}</li>
- *   <li>{@code vfork}</li>
- *   <li>{@code execveat}</li>
- * </ul>
- * <p>
- * On Solaris 10 or higher, the following privileges are dropped with {@code priv_set(3C)}:
- * <ul>
- *   <li>{@code PRIV_PROC_FORK}</li>
- *   <li>{@code PRIV_PROC_EXEC}</li>
- * </ul>
- * <p>
- * On BSD systems, process creation is restricted with {@code setrlimit(RLIMIT_NPROC)}.
- * <p>
- * On Mac OS X Leopard or above, a custom {@code sandbox(7)} ("Seatbelt") profile is installed that
- * denies the following rules:
- * <ul>
- *   <li>{@code process-fork}</li>
- *   <li>{@code process-exec}</li>
- * </ul>
- * <p>
- * On Windows, process creation is restricted with {@code SetInformationJobObject/ActiveProcessLimit}.
- * <p>

    - * This is not intended as a sandbox. It is another level of security, mostly intended to annoy - * security researchers and make their lives more difficult in achieving "remote execution" exploits. - * @see - * http://www.kernel.org/doc/Documentation/prctl/seccomp_filter.txt - * @see - * https://reverse.put.as/wp-content/uploads/2011/06/The-Apple-Sandbox-BHDC2011-Paper.pdf - * @see - * https://docs.oracle.com/cd/E23824_01/html/821-1456/prbac-2.html - */ -// not an example of how to write code!!! -final class Seccomp { - private static final Logger logger = Loggers.getLogger(Seccomp.class); - - // Linux implementation, based on seccomp(2) or prctl(2) with bpf filtering - - /** Access to non-standard Linux libc methods */ - interface LinuxLibrary extends Library { - /** - * maps to prctl(2) - */ - int prctl(int option, NativeLong arg2, NativeLong arg3, NativeLong arg4, NativeLong arg5); - /** - * used to call seccomp(2), its too new... - * this is the only way, DON'T use it on some other architecture unless you know wtf you are doing - */ - NativeLong syscall(NativeLong number, Object... args); - } - - // null if unavailable or something goes wrong. - private static final LinuxLibrary linux_libc; - - static { - LinuxLibrary lib = null; - if (Constants.LINUX) { - try { - lib = (LinuxLibrary) Native.loadLibrary("c", LinuxLibrary.class); - } catch (UnsatisfiedLinkError e) { - logger.warn("unable to link C library. native methods (seccomp) will be disabled.", e); - } - } - linux_libc = lib; - } - - /** the preferred method is seccomp(2), since we can apply to all threads of the process */ - static final int SECCOMP_SET_MODE_FILTER = 1; // since Linux 3.17 - static final int SECCOMP_FILTER_FLAG_TSYNC = 1; // since Linux 3.17 - - /** otherwise, we can use prctl(2), which will at least protect ES application threads */ - static final int PR_GET_NO_NEW_PRIVS = 39; // since Linux 3.5 - static final int PR_SET_NO_NEW_PRIVS = 38; // since Linux 3.5 - static final int PR_GET_SECCOMP = 21; // since Linux 2.6.23 - static final int PR_SET_SECCOMP = 22; // since Linux 2.6.23 - static final long SECCOMP_MODE_FILTER = 2; // since Linux Linux 3.5 - - /** corresponds to struct sock_filter */ - static final class SockFilter { - short code; // insn - byte jt; // number of insn to jump (skip) if true - byte jf; // number of insn to jump (skip) if false - int k; // additional data - - SockFilter(short code, byte jt, byte jf, int k) { - this.code = code; - this.jt = jt; - this.jf = jf; - this.k = k; - } - } - - /** corresponds to struct sock_fprog */ - public static final class SockFProg extends Structure implements Structure.ByReference { - public short len; // number of filters - public Pointer filter; // filters - - public SockFProg(SockFilter filters[]) { - len = (short) filters.length; - // serialize struct sock_filter * explicitly, its less confusing than the JNA magic we would need - Memory filter = new Memory(len * 8); - ByteBuffer bbuf = filter.getByteBuffer(0, len * 8); - bbuf.order(ByteOrder.nativeOrder()); // little endian - for (SockFilter f : filters) { - bbuf.putShort(f.code); - bbuf.put(f.jt); - bbuf.put(f.jf); - bbuf.putInt(f.k); - } - this.filter = filter; - } - - @Override - protected List getFieldOrder() { - return Arrays.asList(new String[] { "len", "filter" }); - } - } - - // BPF "macros" and constants - static final int BPF_LD = 0x00; - static final int BPF_W = 0x00; - static final int BPF_ABS = 0x20; - static final int BPF_JMP = 0x05; - static final int BPF_JEQ = 0x10; - static final 
int BPF_JGE = 0x30; - static final int BPF_JGT = 0x20; - static final int BPF_RET = 0x06; - static final int BPF_K = 0x00; - - static SockFilter BPF_STMT(int code, int k) { - return new SockFilter((short) code, (byte) 0, (byte) 0, k); - } - - static SockFilter BPF_JUMP(int code, int k, int jt, int jf) { - return new SockFilter((short) code, (byte) jt, (byte) jf, k); - } - - static final int SECCOMP_RET_ERRNO = 0x00050000; - static final int SECCOMP_RET_DATA = 0x0000FFFF; - static final int SECCOMP_RET_ALLOW = 0x7FFF0000; - - // some errno constants for error checking/handling - static final int EPERM = 0x01; - static final int EACCES = 0x0D; - static final int EFAULT = 0x0E; - static final int EINVAL = 0x16; - static final int ENOSYS = 0x26; - - // offsets that our BPF checks - // check with offsetof() when adding a new arch, move to Arch if different. - static final int SECCOMP_DATA_NR_OFFSET = 0x00; - static final int SECCOMP_DATA_ARCH_OFFSET = 0x04; - - static class Arch { - /** AUDIT_ARCH_XXX constant from linux/audit.h */ - final int audit; - /** syscall limit (necessary for blacklisting on amd64, to ban 32-bit syscalls) */ - final int limit; - /** __NR_fork */ - final int fork; - /** __NR_vfork */ - final int vfork; - /** __NR_execve */ - final int execve; - /** __NR_execveat */ - final int execveat; - /** __NR_seccomp */ - final int seccomp; - - Arch(int audit, int limit, int fork, int vfork, int execve, int execveat, int seccomp) { - this.audit = audit; - this.limit = limit; - this.fork = fork; - this.vfork = vfork; - this.execve = execve; - this.execveat = execveat; - this.seccomp = seccomp; - } - } - - /** supported architectures map keyed by os.arch */ - private static final Map ARCHITECTURES; - static { - Map m = new HashMap<>(); - m.put("amd64", new Arch(0xC000003E, 0x3FFFFFFF, 57, 58, 59, 322, 317)); - m.put("i386", new Arch(0x40000003, 0xFFFFFFFF, 2, 190, 11, 358, 354)); - ARCHITECTURES = Collections.unmodifiableMap(m); - } - - /** invokes prctl() from linux libc library */ - private static int linux_prctl(int option, long arg2, long arg3, long arg4, long arg5) { - return linux_libc.prctl(option, new NativeLong(arg2), new NativeLong(arg3), new NativeLong(arg4), new NativeLong(arg5)); - } - - /** invokes syscall() from linux libc library */ - private static long linux_syscall(long number, Object... args) { - return linux_libc.syscall(new NativeLong(number), args).longValue(); - } - - /** try to install our BPF filters via seccomp() or prctl() to block execution */ - private static int linuxImpl() { - // first be defensive: we can give nice errors this way, at the very least. - // also, some of these security features get backported to old versions, checking kernel version here is a big no-no! - final Arch arch = ARCHITECTURES.get(Constants.OS_ARCH); - boolean supported = Constants.LINUX && arch != null; - if (supported == false) { - throw new UnsupportedOperationException("seccomp unavailable: '" + Constants.OS_ARCH + "' architecture unsupported"); - } - - // we couldn't link methods, could be some really ancient kernel (e.g. < 2.1.57) or some bug - if (linux_libc == null) { - throw new UnsupportedOperationException("seccomp unavailable: could not link methods. requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in"); - } - - // pure paranoia: - - // check that unimplemented syscalls actually return ENOSYS - // you never know (e.g. 
https://code.google.com/p/chromium/issues/detail?id=439795) - if (linux_syscall(999) >= 0) { - throw new UnsupportedOperationException("seccomp unavailable: your kernel is buggy and you should upgrade"); - } - - switch (Native.getLastError()) { - case ENOSYS: - break; // ok - case EPERM: - // NOT ok, but likely a docker container - if (logger.isDebugEnabled()) { - logger.debug("syscall(BOGUS) bogusly gets EPERM instead of ENOSYS"); - } - break; - default: - throw new UnsupportedOperationException("seccomp unavailable: your kernel is buggy and you should upgrade"); - } - - // try to check system calls really are who they claim - // you never know (e.g. https://chromium.googlesource.com/chromium/src.git/+/master/sandbox/linux/seccomp-bpf/sandbox_bpf.cc#57) - final int bogusArg = 0xf7a46a5c; - - // test seccomp(BOGUS) - long ret = linux_syscall(arch.seccomp, bogusArg); - if (ret != -1) { - throw new UnsupportedOperationException("seccomp unavailable: seccomp(BOGUS_OPERATION) returned " + ret); - } else { - int errno = Native.getLastError(); - switch (errno) { - case ENOSYS: break; // ok - case EINVAL: break; // ok - default: throw new UnsupportedOperationException("seccomp(BOGUS_OPERATION): " + JNACLibrary.strerror(errno)); - } - } - - // test seccomp(VALID, BOGUS) - ret = linux_syscall(arch.seccomp, SECCOMP_SET_MODE_FILTER, bogusArg); - if (ret != -1) { - throw new UnsupportedOperationException("seccomp unavailable: seccomp(SECCOMP_SET_MODE_FILTER, BOGUS_FLAG) returned " + ret); - } else { - int errno = Native.getLastError(); - switch (errno) { - case ENOSYS: break; // ok - case EINVAL: break; // ok - default: throw new UnsupportedOperationException("seccomp(SECCOMP_SET_MODE_FILTER, BOGUS_FLAG): " + JNACLibrary.strerror(errno)); - } - } - - // test prctl(BOGUS) - ret = linux_prctl(bogusArg, 0, 0, 0, 0); - if (ret != -1) { - throw new UnsupportedOperationException("seccomp unavailable: prctl(BOGUS_OPTION) returned " + ret); - } else { - int errno = Native.getLastError(); - switch (errno) { - case ENOSYS: break; // ok - case EINVAL: break; // ok - default: throw new UnsupportedOperationException("prctl(BOGUS_OPTION): " + JNACLibrary.strerror(errno)); - } - } - - // now just normal defensive checks - - // check for GET_NO_NEW_PRIVS - switch (linux_prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0)) { - case 0: break; // not yet set - case 1: break; // already set by caller - default: - int errno = Native.getLastError(); - if (errno == EINVAL) { - // friendly error, this will be the typical case for an old kernel - throw new UnsupportedOperationException("seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in"); - } else { - throw new UnsupportedOperationException("prctl(PR_GET_NO_NEW_PRIVS): " + JNACLibrary.strerror(errno)); - } - } - // check for SECCOMP - switch (linux_prctl(PR_GET_SECCOMP, 0, 0, 0, 0)) { - case 0: break; // not yet set - case 2: break; // already in filter mode by caller - default: - int errno = Native.getLastError(); - if (errno == EINVAL) { - throw new UnsupportedOperationException("seccomp unavailable: CONFIG_SECCOMP not compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed"); - } else { - throw new UnsupportedOperationException("prctl(PR_GET_SECCOMP): " + JNACLibrary.strerror(errno)); - } - } - // check for SECCOMP_MODE_FILTER - if (linux_prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, 0, 0, 0) != 0) { - int errno = Native.getLastError(); - switch (errno) { - case EFAULT: break; // available - case EINVAL: throw new 
UnsupportedOperationException("seccomp unavailable: CONFIG_SECCOMP_FILTER not compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed"); - default: throw new UnsupportedOperationException("prctl(PR_SET_SECCOMP): " + JNACLibrary.strerror(errno)); - } - } - - // ok, now set PR_SET_NO_NEW_PRIVS, needed to be able to set a seccomp filter as ordinary user - if (linux_prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0) { - throw new UnsupportedOperationException("prctl(PR_SET_NO_NEW_PRIVS): " + JNACLibrary.strerror(Native.getLastError())); - } - - // check it worked - if (linux_prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0) != 1) { - throw new UnsupportedOperationException("seccomp filter did not really succeed: prctl(PR_GET_NO_NEW_PRIVS): " + JNACLibrary.strerror(Native.getLastError())); - } - - // BPF installed to check arch, limit, then syscall. See https://www.kernel.org/doc/Documentation/prctl/seccomp_filter.txt for details. - SockFilter insns[] = { - /* 1 */ BPF_STMT(BPF_LD + BPF_W + BPF_ABS, SECCOMP_DATA_ARCH_OFFSET), // - /* 2 */ BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, arch.audit, 0, 7), // if (arch != audit) goto fail; - /* 3 */ BPF_STMT(BPF_LD + BPF_W + BPF_ABS, SECCOMP_DATA_NR_OFFSET), // - /* 4 */ BPF_JUMP(BPF_JMP + BPF_JGT + BPF_K, arch.limit, 5, 0), // if (syscall > LIMIT) goto fail; - /* 5 */ BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, arch.fork, 4, 0), // if (syscall == FORK) goto fail; - /* 6 */ BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, arch.vfork, 3, 0), // if (syscall == VFORK) goto fail; - /* 7 */ BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, arch.execve, 2, 0), // if (syscall == EXECVE) goto fail; - /* 8 */ BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, arch.execveat, 1, 0), // if (syscall == EXECVEAT) goto fail; - /* 9 */ BPF_STMT(BPF_RET + BPF_K, SECCOMP_RET_ALLOW), // pass: return OK; - /* 10 */ BPF_STMT(BPF_RET + BPF_K, SECCOMP_RET_ERRNO | (EACCES & SECCOMP_RET_DATA)), // fail: return EACCES; - }; - // seccomp takes a long, so we pass it one explicitly to keep the JNA simple - SockFProg prog = new SockFProg(insns); - prog.write(); - long pointer = Pointer.nativeValue(prog.getPointer()); - - int method = 1; - // install filter, if this works, after this there is no going back! - // first try it with seccomp(SECCOMP_SET_MODE_FILTER), falling back to prctl() - if (linux_syscall(arch.seccomp, SECCOMP_SET_MODE_FILTER, SECCOMP_FILTER_FLAG_TSYNC, new NativeLong(pointer)) != 0) { - method = 0; - int errno1 = Native.getLastError(); - if (logger.isDebugEnabled()) { - logger.debug("seccomp(SECCOMP_SET_MODE_FILTER): {}, falling back to prctl(PR_SET_SECCOMP)...", JNACLibrary.strerror(errno1)); - } - if (linux_prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, pointer, 0, 0) != 0) { - int errno2 = Native.getLastError(); - throw new UnsupportedOperationException("seccomp(SECCOMP_SET_MODE_FILTER): " + JNACLibrary.strerror(errno1) + - ", prctl(PR_SET_SECCOMP): " + JNACLibrary.strerror(errno2)); - } - } - - // now check that the filter was really installed, we should be in filter mode. - if (linux_prctl(PR_GET_SECCOMP, 0, 0, 0, 0) != 2) { - throw new UnsupportedOperationException("seccomp filter installation did not really succeed. seccomp(PR_GET_SECCOMP): " + JNACLibrary.strerror(Native.getLastError())); - } - - logger.debug("Linux seccomp filter installation successful, threads: [{}]", method == 1 ? 
"all" : "app" ); - return method; - } - - // OS X implementation via sandbox(7) - - /** Access to non-standard OS X libc methods */ - interface MacLibrary extends Library { - /** - * maps to sandbox_init(3), since Leopard - */ - int sandbox_init(String profile, long flags, PointerByReference errorbuf); - - /** - * releases memory when an error occurs during initialization (e.g. syntax bug) - */ - void sandbox_free_error(Pointer errorbuf); - } - - // null if unavailable, or something goes wrong. - private static final MacLibrary libc_mac; - - static { - MacLibrary lib = null; - if (Constants.MAC_OS_X) { - try { - lib = (MacLibrary) Native.loadLibrary("c", MacLibrary.class); - } catch (UnsatisfiedLinkError e) { - logger.warn("unable to link C library. native methods (seatbelt) will be disabled.", e); - } - } - libc_mac = lib; - } - - /** The only supported flag... */ - static final int SANDBOX_NAMED = 1; - /** Allow everything except process fork and execution */ - static final String SANDBOX_RULES = "(version 1) (allow default) (deny process-fork) (deny process-exec)"; - - /** try to install our custom rule profile into sandbox_init() to block execution */ - private static void macImpl(Path tmpFile) throws IOException { - // first be defensive: we can give nice errors this way, at the very least. - boolean supported = Constants.MAC_OS_X; - if (supported == false) { - throw new IllegalStateException("bug: should not be trying to initialize seatbelt for an unsupported OS"); - } - - // we couldn't link methods, could be some really ancient OS X (< Leopard) or some bug - if (libc_mac == null) { - throw new UnsupportedOperationException("seatbelt unavailable: could not link methods. requires Leopard or above."); - } - - // write rules to a temporary file, which will be passed to sandbox_init() - Path rules = Files.createTempFile(tmpFile, "es", "sb"); - Files.write(rules, Collections.singleton(SANDBOX_RULES)); - - boolean success = false; - try { - PointerByReference errorRef = new PointerByReference(); - int ret = libc_mac.sandbox_init(rules.toAbsolutePath().toString(), SANDBOX_NAMED, errorRef); - // if sandbox_init() fails, add the message from the OS (e.g. syntax error) and free the buffer - if (ret != 0) { - Pointer errorBuf = errorRef.getValue(); - RuntimeException e = new UnsupportedOperationException("sandbox_init(): " + errorBuf.getString(0)); - libc_mac.sandbox_free_error(errorBuf); - throw e; - } - logger.debug("OS X seatbelt initialization successful"); - success = true; - } finally { - if (success) { - Files.delete(rules); - } else { - IOUtils.deleteFilesIgnoringExceptions(rules); - } - } - } - - // Solaris implementation via priv_set(3C) - - /** Access to non-standard Solaris libc methods */ - interface SolarisLibrary extends Library { - /** - * see priv_set(3C), a convenience method for setppriv(2). - */ - int priv_set(int op, String which, String... privs); - } - - // null if unavailable, or something goes wrong. - private static final SolarisLibrary libc_solaris; - - static { - SolarisLibrary lib = null; - if (Constants.SUN_OS) { - try { - lib = (SolarisLibrary) Native.loadLibrary("c", SolarisLibrary.class); - } catch (UnsatisfiedLinkError e) { - logger.warn("unable to link C library. 
native methods (priv_set) will be disabled.", e); - } - } - libc_solaris = lib; - } - - // constants for priv_set(2) - static final int PRIV_OFF = 1; - static final String PRIV_ALLSETS = null; - // see privileges(5) for complete list of these - static final String PRIV_PROC_FORK = "proc_fork"; - static final String PRIV_PROC_EXEC = "proc_exec"; - - static void solarisImpl() { - // first be defensive: we can give nice errors this way, at the very least. - boolean supported = Constants.SUN_OS; - if (supported == false) { - throw new IllegalStateException("bug: should not be trying to initialize priv_set for an unsupported OS"); - } - - // we couldn't link methods, could be some really ancient Solaris or some bug - if (libc_solaris == null) { - throw new UnsupportedOperationException("priv_set unavailable: could not link methods. requires Solaris 10+"); - } - - // drop a null-terminated list of privileges - if (libc_solaris.priv_set(PRIV_OFF, PRIV_ALLSETS, PRIV_PROC_FORK, PRIV_PROC_EXEC, null) != 0) { - throw new UnsupportedOperationException("priv_set unavailable: priv_set(): " + JNACLibrary.strerror(Native.getLastError())); - } - - logger.debug("Solaris priv_set initialization successful"); - } - - // BSD implementation via setrlimit(2) - - // TODO: add OpenBSD to Lucene Constants - // TODO: JNA doesn't have netbsd support, but this mechanism should work there too. - static final boolean OPENBSD = Constants.OS_NAME.startsWith("OpenBSD"); - - // not a standard limit, means something different on linux, etc! - static final int RLIMIT_NPROC = 7; - - static void bsdImpl() { - boolean supported = Constants.FREE_BSD || OPENBSD || Constants.MAC_OS_X; - if (supported == false) { - throw new IllegalStateException("bug: should not be trying to initialize RLIMIT_NPROC for an unsupported OS"); - } - - JNACLibrary.Rlimit limit = new JNACLibrary.Rlimit(); - limit.rlim_cur.setValue(0); - limit.rlim_max.setValue(0); - if (JNACLibrary.setrlimit(RLIMIT_NPROC, limit) != 0) { - throw new UnsupportedOperationException("RLIMIT_NPROC unavailable: " + JNACLibrary.strerror(Native.getLastError())); - } - - logger.debug("BSD RLIMIT_NPROC initialization successful"); - } - - // windows impl via job ActiveProcessLimit - - static void windowsImpl() { - if (!Constants.WINDOWS) { - throw new IllegalStateException("bug: should not be trying to initialize ActiveProcessLimit for an unsupported OS"); - } - - JNAKernel32Library lib = JNAKernel32Library.getInstance(); - - // create a new Job - Pointer job = lib.CreateJobObjectW(null, null); - if (job == null) { - throw new UnsupportedOperationException("CreateJobObject: " + Native.getLastError()); - } - - try { - // retrieve the current basic limits of the job - int clazz = JNAKernel32Library.JOBOBJECT_BASIC_LIMIT_INFORMATION_CLASS; - JNAKernel32Library.JOBOBJECT_BASIC_LIMIT_INFORMATION limits = new JNAKernel32Library.JOBOBJECT_BASIC_LIMIT_INFORMATION(); - limits.write(); - if (!lib.QueryInformationJobObject(job, clazz, limits.getPointer(), limits.size(), null)) { - throw new UnsupportedOperationException("QueryInformationJobObject: " + Native.getLastError()); - } - limits.read(); - // modify the number of active processes to be 1 (exactly the one process we will add to the job). 
- limits.ActiveProcessLimit = 1; - limits.LimitFlags = JNAKernel32Library.JOB_OBJECT_LIMIT_ACTIVE_PROCESS; - limits.write(); - if (!lib.SetInformationJobObject(job, clazz, limits.getPointer(), limits.size())) { - throw new UnsupportedOperationException("SetInformationJobObject: " + Native.getLastError()); - } - // assign ourselves to the job - if (!lib.AssignProcessToJobObject(job, lib.GetCurrentProcess())) { - throw new UnsupportedOperationException("AssignProcessToJobObject: " + Native.getLastError()); - } - } finally { - lib.CloseHandle(job); - } - - logger.debug("Windows ActiveProcessLimit initialization successful"); - } - - /** - * Attempt to drop the capability to execute for the process. - *

    - * This is best effort and OS and architecture dependent. It may throw any Throwable. - * @return 0 if we can do this for application threads, 1 for the entire process - */ - static int init(Path tmpFile) throws Exception { - if (Constants.LINUX) { - return linuxImpl(); - } else if (Constants.MAC_OS_X) { - // try to enable both mechanisms if possible - bsdImpl(); - macImpl(tmpFile); - return 1; - } else if (Constants.SUN_OS) { - solarisImpl(); - return 1; - } else if (Constants.FREE_BSD || OPENBSD) { - bsdImpl(); - return 1; - } else if (Constants.WINDOWS) { - windowsImpl(); - return 1; - } else { - throw new UnsupportedOperationException("syscall filtering not supported for OS: '" + Constants.OS_NAME + "'"); - } - } -} diff --git a/core/src/main/java/org/elasticsearch/bootstrap/Security.java b/core/src/main/java/org/elasticsearch/bootstrap/Security.java index 909d3dc153c7d..72b4ae15feb02 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/Security.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/Security.java @@ -19,11 +19,12 @@ package org.elasticsearch.bootstrap; +import org.elasticsearch.Build; import org.elasticsearch.SecureSM; import org.elasticsearch.Version; -import org.elasticsearch.common.Strings; import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.io.PathUtils; +import org.elasticsearch.common.network.NetworkModule; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; import org.elasticsearch.http.HttpTransportSettings; @@ -48,8 +49,11 @@ import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; +import java.util.HashSet; +import java.util.LinkedHashSet; import java.util.List; import java.util.Map; +import java.util.Set; /** * Initializes SecurityManager with necessary permissions. @@ -85,12 +89,6 @@ * cleanups to the scripting apis). But still it can provide some defense for users * that enable dynamic scripting without being fully aware of the consequences. *
- * <h1> Disabling Security </h1>
- * SecurityManager can be disabled completely with this setting:
- * <pre>
- * es.security.manager.enabled = false
- * </pre>
- * <p>
 * <h1> Debugging Security </h1>
 * A good place to start when there is a problem is to turn on security debugging:
 * <pre>
    @@ -114,13 +112,13 @@ private Security() {}
          * @param environment configuration for generating dynamic permissions
          * @param filterBadDefaults true if we should filter out bad java defaults in the system policy.
          */
    -    static void configure(Environment environment, boolean filterBadDefaults) throws Exception {
    +    static void configure(Environment environment, boolean filterBadDefaults) throws IOException, NoSuchAlgorithmException {
     
             // enable security policy: union of template and environment-based paths, and possibly plugin permissions
             Policy.setPolicy(new ESPolicy(createPermissions(environment), getPluginPermissions(environment), filterBadDefaults));
     
             // enable security manager
    -        System.setSecurityManager(new SecureSM(new String[] { "org.elasticsearch.bootstrap." }));
    +        System.setSecurityManager(new SecureSM(new String[] { "org.elasticsearch.bootstrap.", "org.elasticsearch.cli" }));
     
             // do some basic tests
             selfTest();
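
For readers unfamiliar with the JDK security APIs that the hunk above relies on: the policy must be installed before the security manager is enabled, because every permission check the manager performs consults the active policy. The sketch below is a minimal, hypothetical illustration of that ordering; `FixedPolicy` and its blanket grant are stand-ins for illustration only, not the ESPolicy/SecureSM implementation.

```java
import java.security.AllPermission;
import java.security.PermissionCollection;
import java.security.Permissions;
import java.security.Policy;
import java.security.ProtectionDomain;

// Hypothetical stand-in for ESPolicy: grants a fixed permission set to every protection domain.
final class FixedPolicy extends Policy {
    private final PermissionCollection permissions;

    FixedPolicy() {
        final Permissions perms = new Permissions();
        perms.add(new AllPermission()); // placeholder grant; a real policy is far more restrictive
        perms.setReadOnly();
        this.permissions = perms;
    }

    @Override
    public PermissionCollection getPermissions(ProtectionDomain domain) {
        return permissions;
    }
}

final class SecuritySketch {
    static void configure() {
        Policy.setPolicy(new FixedPolicy());               // 1. install the policy first
        System.setSecurityManager(new SecurityManager());  // 2. only then turn on enforcement
    }
}
```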
    @@ -133,19 +131,23 @@ static void configure(Environment environment, boolean filterBadDefaults) throws
         @SuppressForbidden(reason = "proper use of URL")
    static Map<String, Policy> getPluginPermissions(Environment environment) throws IOException, NoSuchAlgorithmException {
        Map<String, Policy> map = new HashMap<>();
    -        // collect up lists of plugins and modules
-        List<Path> pluginsAndModules = new ArrayList<>();
+        // collect up set of plugins and modules by listing directories.
+        Set<Path> pluginsAndModules = new LinkedHashSet<>(); // order is already lost, but some filesystems have it
             if (Files.exists(environment.pluginsFile())) {
            try (DirectoryStream<Path> stream = Files.newDirectoryStream(environment.pluginsFile())) {
                     for (Path plugin : stream) {
    -                    pluginsAndModules.add(plugin);
    +                    if (pluginsAndModules.add(plugin) == false) {
    +                        throw new IllegalStateException("duplicate plugin: " + plugin);
    +                    }
                     }
                 }
             }
             if (Files.exists(environment.modulesFile())) {
            try (DirectoryStream<Path> stream = Files.newDirectoryStream(environment.modulesFile())) {
    -                for (Path plugin : stream) {
    -                    pluginsAndModules.add(plugin);
    +                for (Path module : stream) {
    +                    if (pluginsAndModules.add(module) == false) {
    +                        throw new IllegalStateException("duplicate module: " + module);
    +                    }
                     }
                 }
             }
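
As a side note, here is a small, self-contained sketch of the directory-listing pattern the hunk above switches to: a `LinkedHashSet` preserves whatever order the filesystem yields while letting the code fail fast if the same entry ever shows up twice. The plugins path below is an assumption for illustration only, not how Elasticsearch resolves it.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.LinkedHashSet;
import java.util.Set;

public class ListPluginDirs {
    public static void main(String[] args) throws IOException {
        // hypothetical plugins directory; pass a real one as the first argument
        final Path pluginsDir = Paths.get(args.length > 0 ? args[0] : "plugins");
        final Set<Path> pluginsAndModules = new LinkedHashSet<>();
        if (Files.exists(pluginsDir)) {
            try (DirectoryStream<Path> stream = Files.newDirectoryStream(pluginsDir)) {
                for (Path plugin : stream) {
                    // fail fast instead of silently collapsing duplicates
                    if (pluginsAndModules.add(plugin) == false) {
                        throw new IllegalStateException("duplicate plugin: " + plugin);
                    }
                }
            }
        }
        pluginsAndModules.forEach(p -> System.out.println(p.getFileName()));
    }
}
```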
    @@ -155,15 +157,18 @@ static Map getPluginPermissions(Environment environment) throws I
                 if (Files.exists(policyFile)) {
                     // first get a list of URLs for the plugins' jars:
                     // we resolve symlinks so map is keyed on the normalize codebase name
-                List<URL> codebases = new ArrayList<>();
+                Set<URL> codebases = new LinkedHashSet<>(); // order is already lost, but some filesystems have it
                    try (DirectoryStream<Path> jarStream = Files.newDirectoryStream(plugin, "*.jar")) {
                         for (Path jar : jarStream) {
    -                        codebases.add(jar.toRealPath().toUri().toURL());
    +                        URL url = jar.toRealPath().toUri().toURL();
    +                        if (codebases.add(url) == false) {
    +                            throw new IllegalStateException("duplicate module/plugin: " + url);
    +                        }
                         }
                     }
     
                     // parse the plugin's policy file into a set of permissions
    -                Policy policy = readPolicy(policyFile.toUri().toURL(), codebases.toArray(new URL[codebases.size()]));
    +                Policy policy = readPolicy(policyFile.toUri().toURL(), codebases);
     
                     // consult this policy for each of the plugin's jars:
                     for (URL url : codebases) {
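
Between these two hunks, a brief aside on the mechanism they rely on: Java's policy parser expands `${...}` references from system properties, so setting a `codebase.<jar>` property before calling `Policy.getInstance` and clearing it afterwards lets a plugin's policy file grant permissions to its own jars by name. The sketch below is a minimal, hypothetical illustration; the jar name and `plugin-security.policy` path are made up and not the paths Elasticsearch actually resolves.

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.Policy;
import java.security.URIParameter;

public class ReadPluginPolicy {
    public static void main(String[] args) throws Exception {
        // hypothetical locations; a real plugin ships its own jars and policy file
        final Path jar = Paths.get("plugins/example/example-1.0.jar");
        final Path policyFile = Paths.get("plugins/example/plugin-security.policy");

        final String property = "codebase." + jar.getFileName();
        // the policy file may then contain: grant codeBase "${codebase.example-1.0.jar}" { ... };
        System.setProperty(property, jar.toUri().toURL().toString());
        try {
            final Policy policy = Policy.getInstance("JavaPolicy", new URIParameter(policyFile.toUri()));
            System.out.println("parsed policy from " + policyFile + ": " + policy);
        } finally {
            // always clear the temporary property, mirroring the try/finally in the change above
            System.clearProperty(property);
        }
    }
}
```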
    @@ -181,25 +186,46 @@ static Map getPluginPermissions(Environment environment) throws I
         /**
          * Reads and returns the specified {@code policyFile}.
     * <p>

    - * Resources (e.g. jar files and directories) listed in {@code codebases} location - * will be provided to the policy file via a system property of the short name: - * e.g. ${codebase.joda-convert-1.2.jar} would map to full URL. + * Jar files listed in {@code codebases} location will be provided to the policy file via + * a system property of the short name: e.g. ${codebase.joda-convert-1.2.jar} + * would map to full URL. */ @SuppressForbidden(reason = "accesses fully qualified URLs to configure security") - static Policy readPolicy(URL policyFile, URL codebases[]) { + static Policy readPolicy(URL policyFile, Set codebases) { try { + List propertiesSet = new ArrayList<>(); try { // set codebase properties for (URL url : codebases) { String shortName = PathUtils.get(url.toURI()).getFileName().toString(); - System.setProperty("codebase." + shortName, url.toString()); + if (shortName.endsWith(".jar") == false) { + continue; // tests :( + } + String property = "codebase." + shortName; + if (shortName.startsWith("elasticsearch-rest-client")) { + // The rest client is currently the only example where we have an elasticsearch built artifact + // which needs special permissions in policy files when used. This temporary solution is to + // pass in an extra system property that omits the -version.jar suffix the other properties have. + // That allows the snapshots to reference snapshot builds of the client, and release builds to + // referenced release builds of the client, all with the same grant statements. + final String esVersion = Version.CURRENT + (Build.CURRENT.isSnapshot() ? "-SNAPSHOT" : ""); + final int index = property.indexOf("-" + esVersion + ".jar"); + assert index >= 0; + String restClientAlias = property.substring(0, index); + propertiesSet.add(restClientAlias); + System.setProperty(restClientAlias, url.toString()); + } + propertiesSet.add(property); + String previous = System.setProperty(property, url.toString()); + if (previous != null) { + throw new IllegalStateException("codebase property already set: " + shortName + "->" + previous); + } } return Policy.getInstance("JavaPolicy", new URIParameter(policyFile.toURI())); } finally { // clear codebase properties - for (URL url : codebases) { - String shortName = PathUtils.get(url.toURI()).getFileName().toString(); - System.clearProperty("codebase." + shortName); + for (String property : propertiesSet) { + System.clearProperty(property); } } } catch (NoSuchAlgorithmException | URISyntaxException e) { @@ -254,14 +280,48 @@ static void addFilePermissions(Permissions policy, Environment environment) { if (environment.sharedDataFile() != null) { addPath(policy, Environment.PATH_SHARED_DATA_SETTING.getKey(), environment.sharedDataFile(), "read,readlink,write,delete"); } + final Set dataFilesPaths = new HashSet<>(); for (Path path : environment.dataFiles()) { addPath(policy, Environment.PATH_DATA_SETTING.getKey(), path, "read,readlink,write,delete"); + /* + * We have to do this after adding the path because a side effect of that is that the directory is created; the Path#toRealPath + * invocation will fail if the directory does not already exist. We use Path#toRealPath to follow symlinks and handle issues + * like unicode normalization or case-insensitivity on some filesystems (e.g., the case-insensitive variant of HFS+ on macOS). 
+ */ + try { + final Path realPath = path.toRealPath(); + if (!dataFilesPaths.add(realPath)) { + throw new IllegalStateException("path [" + realPath + "] is duplicated by [" + path + "]"); + } + } catch (final IOException e) { + throw new IllegalStateException("unable to access [" + path + "]", e); + } } // TODO: this should be removed in ES 6.0! We will no longer support data paths with the cluster as a folder assert Version.CURRENT.major < 6 : "cluster name is no longer used in data path"; for (Path path : environment.dataWithClusterFiles()) { addPathIfExists(policy, Environment.PATH_DATA_SETTING.getKey(), path, "read,readlink,write,delete"); } + /* + * If path.data and default.path.data are set, we need read access to the paths in default.path.data to check for the existence of + * index directories there that could have arisen from a bug in the handling of simultaneous configuration of path.data and + * default.path.data that was introduced in Elasticsearch 5.3.0. + * + * If path.data is not set then default.path.data would take precedence in setting the data paths for the environment and + * permissions would have been granted above. + * + * If path.data is not set and default.path.data is not set, then we would fallback to the default data directory under + * Elasticsearch home and again permissions would have been granted above. + * + * If path.data is set and default.path.data is not set, there is nothing to do here. + */ + if (Environment.PATH_DATA_SETTING.exists(environment.settings()) + && Environment.DEFAULT_PATH_DATA_SETTING.exists(environment.settings())) { + for (final String path : Environment.DEFAULT_PATH_DATA_SETTING.get(environment.settings())) { + // write permissions are not needed here, we are not going to be writing to any paths here + addPath(policy, Environment.DEFAULT_PATH_DATA_SETTING.getKey(), getPath(path), "read,readlink"); + } + } for (Path path : environment.repoFiles()) { addPath(policy, Environment.PATH_REPO_SETTING.getKey(), path, "read,readlink,write,delete"); } @@ -271,42 +331,110 @@ static void addFilePermissions(Permissions policy, Environment environment) { } } - static void addBindPermissions(Permissions policy, Settings settings) throws IOException { + @SuppressForbidden(reason = "read path that is not configured in environment") + private static Path getPath(final String path) { + return PathUtils.get(path); + } + + /** + * Add dynamic {@link SocketPermission}s based on HTTP and transport settings. + * + * @param policy the {@link Permissions} instance to apply the dynamic {@link SocketPermission}s to. + * @param settings the {@link Settings} instance to read the HTTP and transport settings from + */ + private static void addBindPermissions(Permissions policy, Settings settings) { + addSocketPermissionForHttp(policy, settings); + addSocketPermissionForTransportProfiles(policy, settings); + addSocketPermissionForTribeNodes(policy, settings); + } + + /** + * Add dynamic {@link SocketPermission} based on HTTP settings. + * + * @param policy the {@link Permissions} instance to apply the dynamic {@link SocketPermission}s to. + * @param settings the {@link Settings} instance to read the HTTP settingsfrom + */ + private static void addSocketPermissionForHttp(final Permissions policy, final Settings settings) { // http is simple - String httpRange = HttpTransportSettings.SETTING_HTTP_PORT.get(settings).getPortRangeString(); - // listen is always called with 'localhost' but use wildcard to be sure, no name service is consulted. 
- // see SocketPermission implies() code - policy.add(new SocketPermission("*:" + httpRange, "listen,resolve")); - // transport is waaaay overengineered - Map profiles = TransportSettings.TRANSPORT_PROFILES_SETTING.get(settings).getAsGroups(); - if (!profiles.containsKey(TransportSettings.DEFAULT_PROFILE)) { - profiles = new HashMap<>(profiles); - profiles.put(TransportSettings.DEFAULT_PROFILE, Settings.EMPTY); - } + final String httpRange = HttpTransportSettings.SETTING_HTTP_PORT.get(settings).getPortRangeString(); + addSocketPermissionForPortRange(policy, httpRange); + } + + /** + * Add dynamic {@link SocketPermission} based on transport settings. This method will first check if there is a port range specified in + * the transport profile specified by {@code profileSettings} and will fall back to {@code settings}. + * + * @param policy the {@link Permissions} instance to apply the dynamic {@link SocketPermission}s to + * @param settings the {@link Settings} instance to read the transport settings from + */ + private static void addSocketPermissionForTransportProfiles( + final Permissions policy, + final Settings settings) { + // transport is way over-engineered + final Map profiles = new HashMap<>(TransportSettings.TRANSPORT_PROFILES_SETTING.get(settings).getAsGroups()); + profiles.putIfAbsent(TransportSettings.DEFAULT_PROFILE, Settings.EMPTY); - // loop through all profiles and add permissions for each one, if its valid. - // (otherwise Netty transports are lenient and ignores it) - for (Map.Entry entry : profiles.entrySet()) { - Settings profileSettings = entry.getValue(); - String name = entry.getKey(); - String transportRange = profileSettings.get("port", TransportSettings.PORT.get(settings)); + // loop through all profiles and add permissions for each one, if it's valid; otherwise Netty transports are lenient and ignores it + for (final Map.Entry entry : profiles.entrySet()) { + final Settings profileSettings = entry.getValue(); + final String name = entry.getKey(); - // a profile is only valid if its the default profile, or if it has an actual name and specifies a port - boolean valid = TransportSettings.DEFAULT_PROFILE.equals(name) || (Strings.hasLength(name) && profileSettings.get("port") != null); + // a profile is only valid if it's the default profile, or if it has an actual name and specifies a port + // TODO: can this leniency be removed? + final boolean valid = + TransportSettings.DEFAULT_PROFILE.equals(name) || + (name != null && name.length() > 0 && profileSettings.get("port") != null); if (valid) { - // listen is always called with 'localhost' but use wildcard to be sure, no name service is consulted. - // see SocketPermission implies() code - policy.add(new SocketPermission("*:" + transportRange, "listen,resolve")); + final String transportRange = profileSettings.get("port"); + if (transportRange != null) { + addSocketPermissionForPortRange(policy, transportRange); + } else { + addSocketPermissionForTransport(policy, settings); + } } } } /** - * Add access to path (and all files underneath it) - * @param policy current policy to add permissions to + * Add dynamic {@link SocketPermission} based on transport settings. 
+ * + * @param policy the {@link Permissions} instance to apply the dynamic {@link SocketPermission}s to + * @param settings the {@link Settings} instance to read the transport settings from + */ + private static void addSocketPermissionForTransport(final Permissions policy, final Settings settings) { + final String transportRange = TransportSettings.PORT.get(settings); + addSocketPermissionForPortRange(policy, transportRange); + } + + private static void addSocketPermissionForTribeNodes(final Permissions policy, final Settings settings) { + for (final Settings tribeNodeSettings : settings.getGroups("tribe", true).values()) { + // tribe nodes have HTTP disabled by default, so we check if HTTP is enabled before granting + if (NetworkModule.HTTP_ENABLED.exists(tribeNodeSettings) && NetworkModule.HTTP_ENABLED.get(tribeNodeSettings)) { + addSocketPermissionForHttp(policy, tribeNodeSettings); + } + addSocketPermissionForTransport(policy, tribeNodeSettings); + } + } + + /** + * Add dynamic {@link SocketPermission} for the specified port range. + * + * @param policy the {@link Permissions} instance to apply the dynamic {@link SocketPermission} to. + * @param portRange the port range + */ + private static void addSocketPermissionForPortRange(final Permissions policy, final String portRange) { + // listen is always called with 'localhost' but use wildcard to be sure, no name service is consulted. + // see SocketPermission implies() code + policy.add(new SocketPermission("*:" + portRange, "listen,resolve")); + } + + /** + * Add access to path (and all files underneath it); this also creates the directory if it does not exist. + * + * @param policy current policy to add permissions to * @param configurationName the configuration name associated with the path (for error messages only) - * @param path the path itself - * @param permissions set of filepermissions to grant to the path + * @param path the path itself + * @param permissions set of file permissions to grant to the path */ static void addPath(Permissions policy, String configurationName, Path path, String permissions) { // paths may not exist yet, this also checks accessibility diff --git a/core/src/main/java/org/elasticsearch/bootstrap/Spawner.java b/core/src/main/java/org/elasticsearch/bootstrap/Spawner.java new file mode 100644 index 0000000000000..f1616ba0eea09 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/bootstrap/Spawner.java @@ -0,0 +1,137 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.bootstrap; + +import org.apache.lucene.util.Constants; +import org.apache.lucene.util.IOUtils; +import org.elasticsearch.env.Environment; +import org.elasticsearch.plugins.Platforms; +import org.elasticsearch.plugins.PluginInfo; + +import java.io.Closeable; +import java.io.IOException; +import java.nio.file.DirectoryStream; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Locale; +import java.util.concurrent.atomic.AtomicBoolean; + +/** + * Spawns native plugin controller processes if present. Will only work prior to a system call + * filter being installed. + */ +final class Spawner implements Closeable { + + /* + * References to the processes that have been spawned, so that we can destroy them. + */ + private final List processes = new ArrayList<>(); + private AtomicBoolean spawned = new AtomicBoolean(); + + @Override + public void close() throws IOException { + IOUtils.close(() -> processes.stream().map(s -> (Closeable) s::destroy).iterator()); + } + + /** + * Spawns the native controllers for each plugin + * + * @param environment the node environment + * @throws IOException if an I/O error occurs reading the plugins or spawning a native process + */ + void spawnNativePluginControllers(final Environment environment) throws IOException { + if (!spawned.compareAndSet(false, true)) { + throw new IllegalStateException("native controllers already spawned"); + } + final Path pluginsFile = environment.pluginsFile(); + if (!Files.exists(pluginsFile)) { + throw new IllegalStateException("plugins directory [" + pluginsFile + "] not found"); + } + /* + * For each plugin, attempt to spawn the controller daemon. Silently ignore any plugin that + * don't include a controller for the correct platform. + */ + try (DirectoryStream stream = Files.newDirectoryStream(pluginsFile)) { + for (final Path plugin : stream) { + final PluginInfo info = PluginInfo.readFromProperties(plugin); + final Path spawnPath = Platforms.nativeControllerPath(plugin); + if (!Files.isRegularFile(spawnPath)) { + continue; + } + if (!info.hasNativeController()) { + final String message = String.format( + Locale.ROOT, + "plugin [%s] does not have permission to fork native controller", + plugin.getFileName()); + throw new IllegalArgumentException(message); + } + final Process process = + spawnNativePluginController(spawnPath, environment.tmpFile()); + processes.add(process); + } + } + } + + /** + * Attempt to spawn the controller daemon for a given plugin. The spawned process will remain + * connected to this JVM via its stdin, stdout, and stderr streams, but the references to these + * streams are not available to code outside this package. + */ + private Process spawnNativePluginController( + final Path spawnPath, + final Path tmpPath) throws IOException { + final String command; + if (Constants.WINDOWS) { + /* + * We have to get the short path name or starting the process could fail due to max path limitations. The underlying issue here + * is that starting the process on Windows ultimately involves the use of CreateProcessW. CreateProcessW has a limitation that + * if its first argument (the application name) is null, then its second argument (the command line for the process to start) is + * restricted in length to 260 characters (cf. https://msdn.microsoft.com/en-us/library/windows/desktop/ms682425.aspx). Since + * this is exactly how the JDK starts the process on Windows (cf. 
+ * http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/java/lang/ProcessImpl_md.c#l319), this + * limitation is in force. As such, we use the short name to avoid any such problems. + */ + command = Natives.getShortPathName(spawnPath.toString()); + } else { + command = spawnPath.toString(); + } + final ProcessBuilder pb = new ProcessBuilder(command); + + // the only environment variable passes on the path to the temporary directory + pb.environment().clear(); + pb.environment().put("TMPDIR", tmpPath.toString()); + + // the output stream of the process object corresponds to the daemon's stdin + return pb.start(); + } + + /** + * The collection of processes representing spawned native controllers. + * + * @return the processes + */ + List getProcesses() { + return Collections.unmodifiableList(processes); + } + +} diff --git a/core/src/main/java/org/elasticsearch/bootstrap/StartupException.java b/core/src/main/java/org/elasticsearch/bootstrap/StartupException.java index a78f82ef3e732..59629eb5b3b19 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/StartupException.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/StartupException.java @@ -104,7 +104,7 @@ private void printStackTrace(Consumer consumer) { continue; } - consumer.accept("\tat " + line.toString()); + consumer.accept("\tat " + line); linesWritten++; } } diff --git a/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java b/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java new file mode 100644 index 0000000000000..2be2d768ab346 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java @@ -0,0 +1,658 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.bootstrap; + +import com.sun.jna.Library; +import com.sun.jna.Memory; +import com.sun.jna.Native; +import com.sun.jna.NativeLong; +import com.sun.jna.Pointer; +import com.sun.jna.Structure; +import com.sun.jna.ptr.PointerByReference; +import org.apache.logging.log4j.Logger; +import org.apache.lucene.util.Constants; +import org.apache.lucene.util.IOUtils; +import org.elasticsearch.common.logging.Loggers; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.nio.ByteOrder; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +/** + * Installs a system call filter to block process execution. + *
+ * <p>
+ * This is supported on Linux, Solaris, FreeBSD, OpenBSD, Mac OS X, and Windows.
+ * <p>
+ * On Linux it currently supports amd64 and i386 architectures, requires Linux kernel 3.5 or above, and requires
+ * {@code CONFIG_SECCOMP} and {@code CONFIG_SECCOMP_FILTER} compiled into the kernel.
+ * <p>
+ * On Linux BPF Filters are installed using either {@code seccomp(2)} (3.17+) or {@code prctl(2)} (3.5+). {@code seccomp(2)}
+ * is preferred, as it allows filters to be applied to any existing threads in the process, and one motivation
+ * here is to protect against bugs in the JVM. Otherwise, code will fall back to the {@code prctl(2)} method
+ * which will at least protect elasticsearch application threads.
+ * <p>
+ * Linux BPF filters will return {@code EACCES} (Access Denied) for the following system calls:
+ * <ul>
+ *   <li>{@code execve}</li>
+ *   <li>{@code fork}</li>
+ *   <li>{@code vfork}</li>
+ *   <li>{@code execveat}</li>
+ * </ul>
+ * <p>
+ * On Solaris 10 or higher, the following privileges are dropped with {@code priv_set(3C)}:
+ * <ul>
+ *   <li>{@code PRIV_PROC_FORK}</li>
+ *   <li>{@code PRIV_PROC_EXEC}</li>
+ * </ul>
+ * <p>
+ * On BSD systems, process creation is restricted with {@code setrlimit(RLIMIT_NPROC)}.
+ * <p>
+ * On Mac OS X Leopard or above, a custom {@code sandbox(7)} ("Seatbelt") profile is installed that
+ * denies the following rules:
+ * <ul>
+ *   <li>{@code process-fork}</li>
+ *   <li>{@code process-exec}</li>
+ * </ul>
+ * <p>
+ * On Windows, process creation is restricted with {@code SetInformationJobObject/ActiveProcessLimit}.
+ * <p>
    + * This is not intended as a sandbox. It is another level of security, mostly intended to annoy + * security researchers and make their lives more difficult in achieving "remote execution" exploits. + * @see + * http://www.kernel.org/doc/Documentation/prctl/seccomp_filter.txt + * @see + * https://reverse.put.as/wp-content/uploads/2011/06/The-Apple-Sandbox-BHDC2011-Paper.pdf + * @see + * https://docs.oracle.com/cd/E23824_01/html/821-1456/prbac-2.html + */ +// not an example of how to write code!!! +final class SystemCallFilter { + private static final Logger logger = Loggers.getLogger(SystemCallFilter.class); + + // Linux implementation, based on seccomp(2) or prctl(2) with bpf filtering + + /** Access to non-standard Linux libc methods */ + interface LinuxLibrary extends Library { + /** + * maps to prctl(2) + */ + int prctl(int option, NativeLong arg2, NativeLong arg3, NativeLong arg4, NativeLong arg5); + /** + * used to call seccomp(2), its too new... + * this is the only way, DON'T use it on some other architecture unless you know wtf you are doing + */ + NativeLong syscall(NativeLong number, Object... args); + } + + // null if unavailable or something goes wrong. + private static final LinuxLibrary linux_libc; + + static { + LinuxLibrary lib = null; + if (Constants.LINUX) { + try { + lib = (LinuxLibrary) Native.loadLibrary("c", LinuxLibrary.class); + } catch (UnsatisfiedLinkError e) { + logger.warn("unable to link C library. native methods (seccomp) will be disabled.", e); + } + } + linux_libc = lib; + } + + /** the preferred method is seccomp(2), since we can apply to all threads of the process */ + static final int SECCOMP_SET_MODE_FILTER = 1; // since Linux 3.17 + static final int SECCOMP_FILTER_FLAG_TSYNC = 1; // since Linux 3.17 + + /** otherwise, we can use prctl(2), which will at least protect ES application threads */ + static final int PR_GET_NO_NEW_PRIVS = 39; // since Linux 3.5 + static final int PR_SET_NO_NEW_PRIVS = 38; // since Linux 3.5 + static final int PR_GET_SECCOMP = 21; // since Linux 2.6.23 + static final int PR_SET_SECCOMP = 22; // since Linux 2.6.23 + static final long SECCOMP_MODE_FILTER = 2; // since Linux Linux 3.5 + + /** corresponds to struct sock_filter */ + static final class SockFilter { + short code; // insn + byte jt; // number of insn to jump (skip) if true + byte jf; // number of insn to jump (skip) if false + int k; // additional data + + SockFilter(short code, byte jt, byte jf, int k) { + this.code = code; + this.jt = jt; + this.jf = jf; + this.k = k; + } + } + + /** corresponds to struct sock_fprog */ + public static final class SockFProg extends Structure implements Structure.ByReference { + public short len; // number of filters + public Pointer filter; // filters + + SockFProg(SockFilter filters[]) { + len = (short) filters.length; + // serialize struct sock_filter * explicitly, its less confusing than the JNA magic we would need + Memory filter = new Memory(len * 8); + ByteBuffer bbuf = filter.getByteBuffer(0, len * 8); + bbuf.order(ByteOrder.nativeOrder()); // little endian + for (SockFilter f : filters) { + bbuf.putShort(f.code); + bbuf.put(f.jt); + bbuf.put(f.jf); + bbuf.putInt(f.k); + } + this.filter = filter; + } + + @Override + protected List getFieldOrder() { + return Arrays.asList("len", "filter"); + } + } + + // BPF "macros" and constants + static final int BPF_LD = 0x00; + static final int BPF_W = 0x00; + static final int BPF_ABS = 0x20; + static final int BPF_JMP = 0x05; + static final int BPF_JEQ = 0x10; + static final int 
BPF_JGE = 0x30; + static final int BPF_JGT = 0x20; + static final int BPF_RET = 0x06; + static final int BPF_K = 0x00; + + static SockFilter BPF_STMT(int code, int k) { + return new SockFilter((short) code, (byte) 0, (byte) 0, k); + } + + static SockFilter BPF_JUMP(int code, int k, int jt, int jf) { + return new SockFilter((short) code, (byte) jt, (byte) jf, k); + } + + static final int SECCOMP_RET_ERRNO = 0x00050000; + static final int SECCOMP_RET_DATA = 0x0000FFFF; + static final int SECCOMP_RET_ALLOW = 0x7FFF0000; + + // some errno constants for error checking/handling + static final int EPERM = 0x01; + static final int EACCES = 0x0D; + static final int EFAULT = 0x0E; + static final int EINVAL = 0x16; + static final int ENOSYS = 0x26; + + // offsets that our BPF checks + // check with offsetof() when adding a new arch, move to Arch if different. + static final int SECCOMP_DATA_NR_OFFSET = 0x00; + static final int SECCOMP_DATA_ARCH_OFFSET = 0x04; + + static class Arch { + /** AUDIT_ARCH_XXX constant from linux/audit.h */ + final int audit; + /** syscall limit (necessary for blacklisting on amd64, to ban 32-bit syscalls) */ + final int limit; + /** __NR_fork */ + final int fork; + /** __NR_vfork */ + final int vfork; + /** __NR_execve */ + final int execve; + /** __NR_execveat */ + final int execveat; + /** __NR_seccomp */ + final int seccomp; + + Arch(int audit, int limit, int fork, int vfork, int execve, int execveat, int seccomp) { + this.audit = audit; + this.limit = limit; + this.fork = fork; + this.vfork = vfork; + this.execve = execve; + this.execveat = execveat; + this.seccomp = seccomp; + } + } + + /** supported architectures map keyed by os.arch */ + private static final Map ARCHITECTURES; + static { + Map m = new HashMap<>(); + m.put("amd64", new Arch(0xC000003E, 0x3FFFFFFF, 57, 58, 59, 322, 317)); + m.put("i386", new Arch(0x40000003, 0xFFFFFFFF, 2, 190, 11, 358, 354)); + m.put("aarch64", new Arch(0xC00000B7, 0xFFFFFFFF, 1079, 1071, 221, 281, 277)); + ARCHITECTURES = Collections.unmodifiableMap(m); + } + + /** invokes prctl() from linux libc library */ + private static int linux_prctl(int option, long arg2, long arg3, long arg4, long arg5) { + return linux_libc.prctl(option, new NativeLong(arg2), new NativeLong(arg3), new NativeLong(arg4), new NativeLong(arg5)); + } + + /** invokes syscall() from linux libc library */ + private static long linux_syscall(long number, Object... args) { + return linux_libc.syscall(new NativeLong(number), args).longValue(); + } + + /** try to install our BPF filters via seccomp() or prctl() to block execution */ + private static int linuxImpl() { + // first be defensive: we can give nice errors this way, at the very least. + // also, some of these security features get backported to old versions, checking kernel version here is a big no-no! + final Arch arch = ARCHITECTURES.get(Constants.OS_ARCH); + boolean supported = Constants.LINUX && arch != null; + if (supported == false) { + throw new UnsupportedOperationException("seccomp unavailable: '" + Constants.OS_ARCH + "' architecture unsupported"); + } + + // we couldn't link methods, could be some really ancient kernel (e.g. < 2.1.57) or some bug + if (linux_libc == null) { + throw new UnsupportedOperationException("seccomp unavailable: could not link methods. requires kernel 3.5+ " + + "with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in"); + } + + // pure paranoia: + + // check that unimplemented syscalls actually return ENOSYS + // you never know (e.g. 
https://code.google.com/p/chromium/issues/detail?id=439795) + if (linux_syscall(999) >= 0) { + throw new UnsupportedOperationException("seccomp unavailable: your kernel is buggy and you should upgrade"); + } + + switch (Native.getLastError()) { + case ENOSYS: + break; // ok + case EPERM: + // NOT ok, but likely a docker container + if (logger.isDebugEnabled()) { + logger.debug("syscall(BOGUS) bogusly gets EPERM instead of ENOSYS"); + } + break; + default: + throw new UnsupportedOperationException("seccomp unavailable: your kernel is buggy and you should upgrade"); + } + + // try to check system calls really are who they claim + // you never know (e.g. https://chromium.googlesource.com/chromium/src.git/+/master/sandbox/linux/seccomp-bpf/sandbox_bpf.cc#57) + final int bogusArg = 0xf7a46a5c; + + // test seccomp(BOGUS) + long ret = linux_syscall(arch.seccomp, bogusArg); + if (ret != -1) { + throw new UnsupportedOperationException("seccomp unavailable: seccomp(BOGUS_OPERATION) returned " + ret); + } else { + int errno = Native.getLastError(); + switch (errno) { + case ENOSYS: break; // ok + case EINVAL: break; // ok + default: throw new UnsupportedOperationException("seccomp(BOGUS_OPERATION): " + JNACLibrary.strerror(errno)); + } + } + + // test seccomp(VALID, BOGUS) + ret = linux_syscall(arch.seccomp, SECCOMP_SET_MODE_FILTER, bogusArg); + if (ret != -1) { + throw new UnsupportedOperationException("seccomp unavailable: seccomp(SECCOMP_SET_MODE_FILTER, BOGUS_FLAG) returned " + ret); + } else { + int errno = Native.getLastError(); + switch (errno) { + case ENOSYS: break; // ok + case EINVAL: break; // ok + default: throw new UnsupportedOperationException("seccomp(SECCOMP_SET_MODE_FILTER, BOGUS_FLAG): " + + JNACLibrary.strerror(errno)); + } + } + + // test prctl(BOGUS) + ret = linux_prctl(bogusArg, 0, 0, 0, 0); + if (ret != -1) { + throw new UnsupportedOperationException("seccomp unavailable: prctl(BOGUS_OPTION) returned " + ret); + } else { + int errno = Native.getLastError(); + switch (errno) { + case ENOSYS: break; // ok + case EINVAL: break; // ok + default: throw new UnsupportedOperationException("prctl(BOGUS_OPTION): " + JNACLibrary.strerror(errno)); + } + } + + // now just normal defensive checks + + // check for GET_NO_NEW_PRIVS + switch (linux_prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0)) { + case 0: break; // not yet set + case 1: break; // already set by caller + default: + int errno = Native.getLastError(); + if (errno == EINVAL) { + // friendly error, this will be the typical case for an old kernel + throw new UnsupportedOperationException("seccomp unavailable: requires kernel 3.5+ with" + + " CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in"); + } else { + throw new UnsupportedOperationException("prctl(PR_GET_NO_NEW_PRIVS): " + JNACLibrary.strerror(errno)); + } + } + // check for SECCOMP + switch (linux_prctl(PR_GET_SECCOMP, 0, 0, 0, 0)) { + case 0: break; // not yet set + case 2: break; // already in filter mode by caller + default: + int errno = Native.getLastError(); + if (errno == EINVAL) { + throw new UnsupportedOperationException("seccomp unavailable: CONFIG_SECCOMP not compiled into kernel," + + " CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed"); + } else { + throw new UnsupportedOperationException("prctl(PR_GET_SECCOMP): " + JNACLibrary.strerror(errno)); + } + } + // check for SECCOMP_MODE_FILTER + if (linux_prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, 0, 0, 0) != 0) { + int errno = Native.getLastError(); + switch (errno) { + case EFAULT: break; // available + case EINVAL: 
throw new UnsupportedOperationException("seccomp unavailable: CONFIG_SECCOMP_FILTER not" + + " compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed"); + default: throw new UnsupportedOperationException("prctl(PR_SET_SECCOMP): " + JNACLibrary.strerror(errno)); + } + } + + // ok, now set PR_SET_NO_NEW_PRIVS, needed to be able to set a seccomp filter as ordinary user + if (linux_prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0) { + throw new UnsupportedOperationException("prctl(PR_SET_NO_NEW_PRIVS): " + JNACLibrary.strerror(Native.getLastError())); + } + + // check it worked + if (linux_prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0) != 1) { + throw new UnsupportedOperationException("seccomp filter did not really succeed: prctl(PR_GET_NO_NEW_PRIVS): " + + JNACLibrary.strerror(Native.getLastError())); + } + + // BPF installed to check arch, limit, then syscall. + // See https://www.kernel.org/doc/Documentation/prctl/seccomp_filter.txt for details. + SockFilter insns[] = { + /* 1 */ BPF_STMT(BPF_LD + BPF_W + BPF_ABS, SECCOMP_DATA_ARCH_OFFSET), // + /* 2 */ BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, arch.audit, 0, 7), // if (arch != audit) goto fail; + /* 3 */ BPF_STMT(BPF_LD + BPF_W + BPF_ABS, SECCOMP_DATA_NR_OFFSET), // + /* 4 */ BPF_JUMP(BPF_JMP + BPF_JGT + BPF_K, arch.limit, 5, 0), // if (syscall > LIMIT) goto fail; + /* 5 */ BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, arch.fork, 4, 0), // if (syscall == FORK) goto fail; + /* 6 */ BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, arch.vfork, 3, 0), // if (syscall == VFORK) goto fail; + /* 7 */ BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, arch.execve, 2, 0), // if (syscall == EXECVE) goto fail; + /* 8 */ BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, arch.execveat, 1, 0), // if (syscall == EXECVEAT) goto fail; + /* 9 */ BPF_STMT(BPF_RET + BPF_K, SECCOMP_RET_ALLOW), // pass: return OK; + /* 10 */ BPF_STMT(BPF_RET + BPF_K, SECCOMP_RET_ERRNO | (EACCES & SECCOMP_RET_DATA)), // fail: return EACCES; + }; + // seccomp takes a long, so we pass it one explicitly to keep the JNA simple + SockFProg prog = new SockFProg(insns); + prog.write(); + long pointer = Pointer.nativeValue(prog.getPointer()); + + int method = 1; + // install filter, if this works, after this there is no going back! + // first try it with seccomp(SECCOMP_SET_MODE_FILTER), falling back to prctl() + if (linux_syscall(arch.seccomp, SECCOMP_SET_MODE_FILTER, SECCOMP_FILTER_FLAG_TSYNC, new NativeLong(pointer)) != 0) { + method = 0; + int errno1 = Native.getLastError(); + if (logger.isDebugEnabled()) { + logger.debug("seccomp(SECCOMP_SET_MODE_FILTER): {}, falling back to prctl(PR_SET_SECCOMP)...", + JNACLibrary.strerror(errno1)); + } + if (linux_prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, pointer, 0, 0) != 0) { + int errno2 = Native.getLastError(); + throw new UnsupportedOperationException("seccomp(SECCOMP_SET_MODE_FILTER): " + JNACLibrary.strerror(errno1) + + ", prctl(PR_SET_SECCOMP): " + JNACLibrary.strerror(errno2)); + } + } + + // now check that the filter was really installed, we should be in filter mode. + if (linux_prctl(PR_GET_SECCOMP, 0, 0, 0, 0) != 2) { + throw new UnsupportedOperationException("seccomp filter installation did not really succeed. seccomp(PR_GET_SECCOMP): " + + JNACLibrary.strerror(Native.getLastError())); + } + + logger.debug("Linux seccomp filter installation successful, threads: [{}]", method == 1 ? 
"all" : "app" ); + return method; + } + + // OS X implementation via sandbox(7) + + /** Access to non-standard OS X libc methods */ + interface MacLibrary extends Library { + /** + * maps to sandbox_init(3), since Leopard + */ + int sandbox_init(String profile, long flags, PointerByReference errorbuf); + + /** + * releases memory when an error occurs during initialization (e.g. syntax bug) + */ + void sandbox_free_error(Pointer errorbuf); + } + + // null if unavailable, or something goes wrong. + private static final MacLibrary libc_mac; + + static { + MacLibrary lib = null; + if (Constants.MAC_OS_X) { + try { + lib = (MacLibrary) Native.loadLibrary("c", MacLibrary.class); + } catch (UnsatisfiedLinkError e) { + logger.warn("unable to link C library. native methods (seatbelt) will be disabled.", e); + } + } + libc_mac = lib; + } + + /** The only supported flag... */ + static final int SANDBOX_NAMED = 1; + /** Allow everything except process fork and execution */ + static final String SANDBOX_RULES = "(version 1) (allow default) (deny process-fork) (deny process-exec)"; + + /** try to install our custom rule profile into sandbox_init() to block execution */ + private static void macImpl(Path tmpFile) throws IOException { + // first be defensive: we can give nice errors this way, at the very least. + boolean supported = Constants.MAC_OS_X; + if (supported == false) { + throw new IllegalStateException("bug: should not be trying to initialize seatbelt for an unsupported OS"); + } + + // we couldn't link methods, could be some really ancient OS X (< Leopard) or some bug + if (libc_mac == null) { + throw new UnsupportedOperationException("seatbelt unavailable: could not link methods. requires Leopard or above."); + } + + // write rules to a temporary file, which will be passed to sandbox_init() + Path rules = Files.createTempFile(tmpFile, "es", "sb"); + Files.write(rules, Collections.singleton(SANDBOX_RULES)); + + boolean success = false; + try { + PointerByReference errorRef = new PointerByReference(); + int ret = libc_mac.sandbox_init(rules.toAbsolutePath().toString(), SANDBOX_NAMED, errorRef); + // if sandbox_init() fails, add the message from the OS (e.g. syntax error) and free the buffer + if (ret != 0) { + Pointer errorBuf = errorRef.getValue(); + RuntimeException e = new UnsupportedOperationException("sandbox_init(): " + errorBuf.getString(0)); + libc_mac.sandbox_free_error(errorBuf); + throw e; + } + logger.debug("OS X seatbelt initialization successful"); + success = true; + } finally { + if (success) { + Files.delete(rules); + } else { + IOUtils.deleteFilesIgnoringExceptions(rules); + } + } + } + + // Solaris implementation via priv_set(3C) + + /** Access to non-standard Solaris libc methods */ + interface SolarisLibrary extends Library { + /** + * see priv_set(3C), a convenience method for setppriv(2). + */ + int priv_set(int op, String which, String... privs); + } + + // null if unavailable, or something goes wrong. + private static final SolarisLibrary libc_solaris; + + static { + SolarisLibrary lib = null; + if (Constants.SUN_OS) { + try { + lib = (SolarisLibrary) Native.loadLibrary("c", SolarisLibrary.class); + } catch (UnsatisfiedLinkError e) { + logger.warn("unable to link C library. 
native methods (priv_set) will be disabled.", e); + } + } + libc_solaris = lib; + } + + // constants for priv_set(2) + static final int PRIV_OFF = 1; + static final String PRIV_ALLSETS = null; + // see privileges(5) for complete list of these + static final String PRIV_PROC_FORK = "proc_fork"; + static final String PRIV_PROC_EXEC = "proc_exec"; + + static void solarisImpl() { + // first be defensive: we can give nice errors this way, at the very least. + boolean supported = Constants.SUN_OS; + if (supported == false) { + throw new IllegalStateException("bug: should not be trying to initialize priv_set for an unsupported OS"); + } + + // we couldn't link methods, could be some really ancient Solaris or some bug + if (libc_solaris == null) { + throw new UnsupportedOperationException("priv_set unavailable: could not link methods. requires Solaris 10+"); + } + + // drop a null-terminated list of privileges + if (libc_solaris.priv_set(PRIV_OFF, PRIV_ALLSETS, PRIV_PROC_FORK, PRIV_PROC_EXEC, null) != 0) { + throw new UnsupportedOperationException("priv_set unavailable: priv_set(): " + JNACLibrary.strerror(Native.getLastError())); + } + + logger.debug("Solaris priv_set initialization successful"); + } + + // BSD implementation via setrlimit(2) + + // TODO: add OpenBSD to Lucene Constants + // TODO: JNA doesn't have netbsd support, but this mechanism should work there too. + static final boolean OPENBSD = Constants.OS_NAME.startsWith("OpenBSD"); + + // not a standard limit, means something different on linux, etc! + static final int RLIMIT_NPROC = 7; + + static void bsdImpl() { + boolean supported = Constants.FREE_BSD || OPENBSD || Constants.MAC_OS_X; + if (supported == false) { + throw new IllegalStateException("bug: should not be trying to initialize RLIMIT_NPROC for an unsupported OS"); + } + + JNACLibrary.Rlimit limit = new JNACLibrary.Rlimit(); + limit.rlim_cur.setValue(0); + limit.rlim_max.setValue(0); + if (JNACLibrary.setrlimit(RLIMIT_NPROC, limit) != 0) { + throw new UnsupportedOperationException("RLIMIT_NPROC unavailable: " + JNACLibrary.strerror(Native.getLastError())); + } + + logger.debug("BSD RLIMIT_NPROC initialization successful"); + } + + // windows impl via job ActiveProcessLimit + + static void windowsImpl() { + if (!Constants.WINDOWS) { + throw new IllegalStateException("bug: should not be trying to initialize ActiveProcessLimit for an unsupported OS"); + } + + JNAKernel32Library lib = JNAKernel32Library.getInstance(); + + // create a new Job + Pointer job = lib.CreateJobObjectW(null, null); + if (job == null) { + throw new UnsupportedOperationException("CreateJobObject: " + Native.getLastError()); + } + + try { + // retrieve the current basic limits of the job + int clazz = JNAKernel32Library.JOBOBJECT_BASIC_LIMIT_INFORMATION_CLASS; + JNAKernel32Library.JOBOBJECT_BASIC_LIMIT_INFORMATION limits = new JNAKernel32Library.JOBOBJECT_BASIC_LIMIT_INFORMATION(); + limits.write(); + if (!lib.QueryInformationJobObject(job, clazz, limits.getPointer(), limits.size(), null)) { + throw new UnsupportedOperationException("QueryInformationJobObject: " + Native.getLastError()); + } + limits.read(); + // modify the number of active processes to be 1 (exactly the one process we will add to the job). 
+ limits.ActiveProcessLimit = 1; + limits.LimitFlags = JNAKernel32Library.JOB_OBJECT_LIMIT_ACTIVE_PROCESS; + limits.write(); + if (!lib.SetInformationJobObject(job, clazz, limits.getPointer(), limits.size())) { + throw new UnsupportedOperationException("SetInformationJobObject: " + Native.getLastError()); + } + // assign ourselves to the job + if (!lib.AssignProcessToJobObject(job, lib.GetCurrentProcess())) { + throw new UnsupportedOperationException("AssignProcessToJobObject: " + Native.getLastError()); + } + } finally { + lib.CloseHandle(job); + } + + logger.debug("Windows ActiveProcessLimit initialization successful"); + } + + /** + * Attempt to drop the capability to execute for the process. + *
+ * <p>
    + * This is best effort and OS and architecture dependent. It may throw any Throwable. + * @return 0 if we can do this for application threads, 1 for the entire process + */ + static int init(Path tmpFile) throws Exception { + if (Constants.LINUX) { + return linuxImpl(); + } else if (Constants.MAC_OS_X) { + // try to enable both mechanisms if possible + bsdImpl(); + macImpl(tmpFile); + return 1; + } else if (Constants.SUN_OS) { + solarisImpl(); + return 1; + } else if (Constants.FREE_BSD || OPENBSD) { + bsdImpl(); + return 1; + } else if (Constants.WINDOWS) { + windowsImpl(); + return 1; + } else { + throw new UnsupportedOperationException("syscall filtering not supported for OS: '" + Constants.OS_NAME + "'"); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/cli/Command.java b/core/src/main/java/org/elasticsearch/cli/Command.java index 2e896759ebbfa..a60dece26113a 100644 --- a/core/src/main/java/org/elasticsearch/cli/Command.java +++ b/core/src/main/java/org/elasticsearch/cli/Command.java @@ -23,15 +23,22 @@ import joptsimple.OptionParser; import joptsimple.OptionSet; import joptsimple.OptionSpec; +import org.apache.logging.log4j.Level; +import org.apache.lucene.util.SetOnce; import org.elasticsearch.common.SuppressForbidden; +import org.elasticsearch.common.logging.LogConfigurator; +import org.elasticsearch.common.settings.Settings; +import java.io.Closeable; import java.io.IOException; +import java.io.PrintWriter; +import java.io.StringWriter; import java.util.Arrays; /** * An action to execute within a cli. */ -public abstract class Command { +public abstract class Command implements Closeable { /** A description of the command, used in the help output. */ protected final String description; @@ -41,15 +48,44 @@ public abstract class Command { private final OptionSpec helpOption = parser.acceptsAll(Arrays.asList("h", "help"), "show help").forHelp(); private final OptionSpec silentOption = parser.acceptsAll(Arrays.asList("s", "silent"), "show minimal output"); - private final OptionSpec verboseOption = parser.acceptsAll(Arrays.asList("v", "verbose"), "show verbose output") - .availableUnless(silentOption); + private final OptionSpec verboseOption = + parser.acceptsAll(Arrays.asList("v", "verbose"), "show verbose output").availableUnless(silentOption); public Command(String description) { this.description = description; } + final SetOnce shutdownHookThread = new SetOnce<>(); + /** Parses options for this command from args and executes it. 
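Editor's note: the shutdown hook registered in Command#main above reports a failure from close() by rendering the exception's stack trace into a String and handing it to the terminal. A standalone sketch of that StringWriter/PrintWriter idiom follows; the class and method names are illustrative, not part of the change.

import java.io.PrintWriter;
import java.io.StringWriter;

final class StackTraceToStringSketch {
    // render a Throwable's stack trace as a String instead of writing it straight to System.err
    static String render(final Throwable t) {
        final StringWriter sw = new StringWriter();
        try (PrintWriter pw = new PrintWriter(sw)) {
            t.printStackTrace(pw);
        }
        return sw.toString();
    }

    public static void main(final String[] args) {
        System.out.println(render(new IllegalStateException("example failure")));
    }
}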
*/ public final int main(String[] args, Terminal terminal) throws Exception { + if (addShutdownHook()) { + shutdownHookThread.set(new Thread(() -> { + try { + this.close(); + } catch (final IOException e) { + try ( + StringWriter sw = new StringWriter(); + PrintWriter pw = new PrintWriter(sw)) { + e.printStackTrace(pw); + terminal.println(sw.toString()); + } catch (final IOException impossible) { + // StringWriter#close declares a checked IOException from the Closeable interface but the Javadocs for StringWriter + // say that an exception here is impossible + throw new AssertionError(impossible); + } + } + })); + Runtime.getRuntime().addShutdownHook(shutdownHookThread.get()); + } + + if (shouldConfigureLoggingWithoutConfig()) { + // initialize default for es.logger.level because we will not read the log4j2.properties + final String loggerLevel = System.getProperty("es.logger.level", Level.INFO.name()); + final Settings settings = Settings.builder().put("logger.level", loggerLevel).build(); + LogConfigurator.configureWithoutConfig(settings); + } + try { mainWithoutErrorHandling(args, terminal); } catch (OptionException e) { @@ -66,6 +102,16 @@ public final int main(String[] args, Terminal terminal) throws Exception { return ExitCodes.OK; } + /** + * Indicate whether or not logging should be configured without reading a log4j2.properties. Most commands should do this because we do + * not configure logging for CLI tools. Only commands that configure logging on their own should not do this. + * + * @return true if logging should be configured without reading a log4j2.properties file + */ + protected boolean shouldConfigureLoggingWithoutConfig() { + return true; + } + /** * Executes the command, but all errors are thrown. */ @@ -110,4 +156,19 @@ protected static void exit(int status) { * Any runtime user errors (like an input file that does not exist), should throw a {@link UserException}. */ protected abstract void execute(Terminal terminal, OptionSet options) throws Exception; + /** + * Return whether or not to install the shutdown hook to cleanup resources on exit. This method should only be overridden in test + * classes. + * + * @return whether or not to install the shutdown hook + */ + protected boolean addShutdownHook() { + return true; + } + + @Override + public void close() throws IOException { + + } + } diff --git a/core/src/main/java/org/elasticsearch/cli/EnvironmentAwareCommand.java b/core/src/main/java/org/elasticsearch/cli/EnvironmentAwareCommand.java new file mode 100644 index 0000000000000..df063de18d3d2 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cli/EnvironmentAwareCommand.java @@ -0,0 +1,100 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.cli; + +import joptsimple.OptionSet; +import joptsimple.OptionSpec; +import joptsimple.util.KeyValuePair; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.env.Environment; +import org.elasticsearch.node.InternalSettingsPreparer; + +import java.util.HashMap; +import java.util.Locale; +import java.util.Map; + +/** A cli command which requires an {@link org.elasticsearch.env.Environment} to use current paths and settings. */ +public abstract class EnvironmentAwareCommand extends Command { + + private final OptionSpec settingOption; + + public EnvironmentAwareCommand(String description) { + super(description); + this.settingOption = parser.accepts("E", "Configure a setting").withRequiredArg().ofType(KeyValuePair.class); + } + + @Override + protected void execute(Terminal terminal, OptionSet options) throws Exception { + final Map settings = new HashMap<>(); + for (final KeyValuePair kvp : settingOption.values(options)) { + if (kvp.value.isEmpty()) { + throw new UserException(ExitCodes.USAGE, "setting [" + kvp.key + "] must not be empty"); + } + if (settings.containsKey(kvp.key)) { + final String message = String.format( + Locale.ROOT, + "setting [%s] already set, saw [%s] and [%s]", + kvp.key, + settings.get(kvp.key), + kvp.value); + throw new UserException(ExitCodes.USAGE, message); + } + settings.put(kvp.key, kvp.value); + } + + putSystemPropertyIfSettingIsMissing(settings, "default.path.conf", "es.default.path.conf"); + putSystemPropertyIfSettingIsMissing(settings, "default.path.data", "es.default.path.data"); + putSystemPropertyIfSettingIsMissing(settings, "default.path.logs", "es.default.path.logs"); + putSystemPropertyIfSettingIsMissing(settings, "path.conf", "es.path.conf"); + putSystemPropertyIfSettingIsMissing(settings, "path.data", "es.path.data"); + putSystemPropertyIfSettingIsMissing(settings, "path.home", "es.path.home"); + putSystemPropertyIfSettingIsMissing(settings, "path.logs", "es.path.logs"); + + execute(terminal, options, createEnv(terminal, settings)); + } + + /** Create an {@link Environment} for the command to use. Overrideable for tests. */ + protected Environment createEnv(Terminal terminal, Map settings) { + return InternalSettingsPreparer.prepareEnvironment(Settings.EMPTY, terminal, settings); + } + + /** Ensure the given setting exists, reading it from system properties if not already set. */ + protected static void putSystemPropertyIfSettingIsMissing(final Map settings, final String setting, final String key) { + final String value = System.getProperty(key); + if (value != null) { + if (settings.containsKey(setting)) { + final String message = + String.format( + Locale.ROOT, + "duplicate setting [%s] found via command-line [%s] and system property [%s]", + setting, + settings.get(setting), + value); + throw new IllegalArgumentException(message); + } else { + settings.put(setting, value); + } + } + } + + /** Execute the command with the initialized {@link Environment}. */ + protected abstract void execute(Terminal terminal, OptionSet options, Environment env) throws Exception; + +} diff --git a/core/src/main/java/org/elasticsearch/cli/SettingCommand.java b/core/src/main/java/org/elasticsearch/cli/SettingCommand.java deleted file mode 100644 index 17f7c9e5204cb..0000000000000 --- a/core/src/main/java/org/elasticsearch/cli/SettingCommand.java +++ /dev/null @@ -1,77 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. 
See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.cli; - -import joptsimple.OptionSet; -import joptsimple.OptionSpec; -import joptsimple.util.KeyValuePair; - -import java.util.HashMap; -import java.util.Locale; -import java.util.Map; - -public abstract class SettingCommand extends Command { - - private final OptionSpec settingOption; - - public SettingCommand(String description) { - super(description); - this.settingOption = parser.accepts("E", "Configure a setting").withRequiredArg().ofType(KeyValuePair.class); - } - - @Override - protected void execute(Terminal terminal, OptionSet options) throws Exception { - final Map settings = new HashMap<>(); - for (final KeyValuePair kvp : settingOption.values(options)) { - if (kvp.value.isEmpty()) { - throw new UserException(ExitCodes.USAGE, "Setting [" + kvp.key + "] must not be empty"); - } - settings.put(kvp.key, kvp.value); - } - - putSystemPropertyIfSettingIsMissing(settings, "path.conf", "es.path.conf"); - putSystemPropertyIfSettingIsMissing(settings, "path.data", "es.path.data"); - putSystemPropertyIfSettingIsMissing(settings, "path.home", "es.path.home"); - putSystemPropertyIfSettingIsMissing(settings, "path.logs", "es.path.logs"); - - execute(terminal, options, settings); - } - - protected static void putSystemPropertyIfSettingIsMissing(final Map settings, final String setting, final String key) { - final String value = System.getProperty(key); - if (value != null) { - if (settings.containsKey(setting)) { - final String message = - String.format( - Locale.ROOT, - "duplicate setting [%s] found via command-line [%s] and system property [%s]", - setting, - settings.get(setting), - value); - throw new IllegalArgumentException(message); - } else { - settings.put(setting, value); - } - } - } - - protected abstract void execute(Terminal terminal, OptionSet options, Map settings) throws Exception; - -} diff --git a/core/src/main/java/org/elasticsearch/cli/Terminal.java b/core/src/main/java/org/elasticsearch/cli/Terminal.java index 58eb5012d07e0..d42e3475dc491 100644 --- a/core/src/main/java/org/elasticsearch/cli/Terminal.java +++ b/core/src/main/java/org/elasticsearch/cli/Terminal.java @@ -27,6 +27,7 @@ import java.io.InputStreamReader; import java.io.PrintWriter; import java.nio.charset.Charset; +import java.util.Locale; /** * A Terminal wraps access to reading input and writing output for a cli. @@ -92,6 +93,27 @@ public final void print(Verbosity verbosity, String msg) { } } + /** + * Prompt for a yes or no answer from the user. This method will loop until 'y' or 'n' + * (or the default empty value) is entered. + */ + public final boolean promptYesNo(String prompt, boolean defaultYes) { + String answerPrompt = defaultYes ? 
" [Y/n]" : " [y/N]"; + while (true) { + String answer = readText(prompt + answerPrompt); + if (answer == null || answer.isEmpty()) { + return defaultYes; + } + answer = answer.toLowerCase(Locale.ROOT); + boolean answerYes = answer.equals("y"); + if (answerYes == false && answer.equals("n") == false) { + println("Did not understand answer '" + answer + "'"); + continue; + } + return answerYes; + } + } + private static class ConsoleTerminal extends Terminal { private static final Console CONSOLE = System.console(); diff --git a/core/src/main/java/org/elasticsearch/client/Client.java b/core/src/main/java/org/elasticsearch/client/Client.java index 0cf22d7a2c4fc..2080856e30204 100644 --- a/core/src/main/java/org/elasticsearch/client/Client.java +++ b/core/src/main/java/org/elasticsearch/client/Client.java @@ -30,6 +30,10 @@ import org.elasticsearch.action.explain.ExplainRequest; import org.elasticsearch.action.explain.ExplainRequestBuilder; import org.elasticsearch.action.explain.ExplainResponse; +import org.elasticsearch.action.fieldcaps.FieldCapabilities; +import org.elasticsearch.action.fieldcaps.FieldCapabilitiesRequest; +import org.elasticsearch.action.fieldcaps.FieldCapabilitiesRequestBuilder; +import org.elasticsearch.action.fieldcaps.FieldCapabilitiesResponse; import org.elasticsearch.action.fieldstats.FieldStatsRequest; import org.elasticsearch.action.fieldstats.FieldStatsRequestBuilder; import org.elasticsearch.action.fieldstats.FieldStatsResponse; @@ -452,12 +456,39 @@ public interface Client extends ElasticsearchClient, Releasable { */ void clearScroll(ClearScrollRequest request, ActionListener listener); + /** + * @deprecated Use _field_caps instead or run a min/max aggregations on the desired fields + */ + @Deprecated FieldStatsRequestBuilder prepareFieldStats(); + /** + * @deprecated Use _field_caps instead or run a min/max aggregations on the desired fields + */ + @Deprecated ActionFuture fieldStats(FieldStatsRequest request); + /** + * @deprecated Use _field_caps instead or run a min/max aggregations on the desired fields + */ + @Deprecated void fieldStats(FieldStatsRequest request, ActionListener listener); + /** + * Builder for the field capabilities request. 
+ */ + FieldCapabilitiesRequestBuilder prepareFieldCaps(); + + /** + * An action that returns the field capabilities from the provided request + */ + ActionFuture fieldCaps(FieldCapabilitiesRequest request); + + /** + * An action that returns the field capabilities from the provided request + */ + void fieldCaps(FieldCapabilitiesRequest request, ActionListener listener); + /** * Returns this clients settings */ diff --git a/core/src/main/java/org/elasticsearch/client/ClusterAdminClient.java b/core/src/main/java/org/elasticsearch/client/ClusterAdminClient.java index 9e0d1a941192c..3f705a215ea11 100644 --- a/core/src/main/java/org/elasticsearch/client/ClusterAdminClient.java +++ b/core/src/main/java/org/elasticsearch/client/ClusterAdminClient.java @@ -112,6 +112,7 @@ import org.elasticsearch.action.ingest.WritePipelineResponse; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.tasks.TaskId; /** @@ -545,9 +546,16 @@ public interface ClusterAdminClient extends ElasticsearchClient { /** * Stores an ingest pipeline + * @deprecated use {@link #preparePutPipeline(String, BytesReference, XContentType)} */ + @Deprecated PutPipelineRequestBuilder preparePutPipeline(String id, BytesReference source); + /** + * Stores an ingest pipeline + */ + PutPipelineRequestBuilder preparePutPipeline(String id, BytesReference source, XContentType xContentType); + /** * Deletes a stored ingest pipeline */ @@ -563,6 +571,11 @@ public interface ClusterAdminClient extends ElasticsearchClient { */ DeletePipelineRequestBuilder prepareDeletePipeline(); + /** + * Deletes a stored ingest pipeline + */ + DeletePipelineRequestBuilder prepareDeletePipeline(String id); + /** * Returns a stored ingest pipeline */ @@ -591,8 +604,14 @@ public interface ClusterAdminClient extends ElasticsearchClient { /** * Simulates an ingest pipeline */ + @Deprecated SimulatePipelineRequestBuilder prepareSimulatePipeline(BytesReference source); + /** + * Simulates an ingest pipeline + */ + SimulatePipelineRequestBuilder prepareSimulatePipeline(BytesReference source, XContentType xContentType); + /** * Explain the allocation of a shard */ diff --git a/core/src/main/java/org/elasticsearch/client/ElasticsearchClient.java b/core/src/main/java/org/elasticsearch/client/ElasticsearchClient.java index d9ddc56d48a07..c6aa4991a8232 100644 --- a/core/src/main/java/org/elasticsearch/client/ElasticsearchClient.java +++ b/core/src/main/java/org/elasticsearch/client/ElasticsearchClient.java @@ -40,8 +40,8 @@ public interface ElasticsearchClient { * @param The request builder type. * @return A future allowing to get back the response. */ - , Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder> ActionFuture execute( - final Action action, final Request request); + > ActionFuture execute( + Action action, Request request); /** * Executes a generic action, denoted by an {@link Action}. @@ -53,8 +53,8 @@ , Response extends ActionResponse, Reques * @param The response type. * @param The request builder type. */ - , Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder> void execute( - final Action action, final Request request, ActionListener listener); + > void execute( + Action action, Request request, ActionListener listener); /** * Prepares a request builder to execute, specified by {@link Action}. @@ -65,8 +65,8 @@ , Response extends ActionResponse, Reques * @param The request builder. 
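Editor's note: a possible usage sketch for the new field capabilities client API introduced above. Only fieldCaps(request, listener) and the listener shape are taken from the interfaces in this change; the chained fields(...) setter on FieldCapabilitiesRequest and the "user" field name are assumptions made for the sake of the example.

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.fieldcaps.FieldCapabilitiesRequest;
import org.elasticsearch.action.fieldcaps.FieldCapabilitiesResponse;
import org.elasticsearch.client.Client;

final class FieldCapsUsageSketch {
    static void fetchFieldCaps(final Client client) {
        // "user" is a made-up field name; fields(...) is assumed to exist on the request
        final FieldCapabilitiesRequest request = new FieldCapabilitiesRequest().fields("user");
        client.fieldCaps(request, new ActionListener<FieldCapabilitiesResponse>() {
            @Override
            public void onResponse(final FieldCapabilitiesResponse response) {
                // inspect the per-field capabilities here
            }

            @Override
            public void onFailure(final Exception e) {
                // handle the failure
            }
        });
    }
}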
* @return The request builder, that can, at a later stage, execute the request. */ - , Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder> RequestBuilder prepareExecute( - final Action action); + > RequestBuilder prepareExecute( + Action action); /** * Returns the threadpool used to execute requests on this client diff --git a/core/src/main/java/org/elasticsearch/client/FilterClient.java b/core/src/main/java/org/elasticsearch/client/FilterClient.java index d0f52282c7683..23d3c2c3d0c2f 100644 --- a/core/src/main/java/org/elasticsearch/client/FilterClient.java +++ b/core/src/main/java/org/elasticsearch/client/FilterClient.java @@ -62,7 +62,7 @@ public void close() { } @Override - protected , Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder> void doExecute( + protected > void doExecute( Action action, Request request, ActionListener listener) { in().execute(action, request, listener); } diff --git a/core/src/main/java/org/elasticsearch/client/IndicesAdminClient.java b/core/src/main/java/org/elasticsearch/client/IndicesAdminClient.java index 24d190c68a10d..176cbf60b10c1 100644 --- a/core/src/main/java/org/elasticsearch/client/IndicesAdminClient.java +++ b/core/src/main/java/org/elasticsearch/client/IndicesAdminClient.java @@ -50,6 +50,9 @@ import org.elasticsearch.action.admin.indices.exists.types.TypesExistsRequest; import org.elasticsearch.action.admin.indices.exists.types.TypesExistsRequestBuilder; import org.elasticsearch.action.admin.indices.exists.types.TypesExistsResponse; +import org.elasticsearch.action.fieldcaps.FieldCapabilitiesRequest; +import org.elasticsearch.action.fieldcaps.FieldCapabilitiesRequestBuilder; +import org.elasticsearch.action.fieldcaps.FieldCapabilitiesResponse; import org.elasticsearch.action.admin.indices.flush.FlushRequest; import org.elasticsearch.action.admin.indices.flush.FlushRequestBuilder; import org.elasticsearch.action.admin.indices.flush.FlushResponse; @@ -817,5 +820,4 @@ public interface IndicesAdminClient extends ElasticsearchClient { * Swaps the index pointed to by an alias given all provided conditions are satisfied */ void rolloverIndex(RolloverRequest request, ActionListener listener); - } diff --git a/core/src/main/java/org/elasticsearch/client/ParentTaskAssigningClient.java b/core/src/main/java/org/elasticsearch/client/ParentTaskAssigningClient.java index 44ba2b76e43c0..62843c41b7027 100644 --- a/core/src/main/java/org/elasticsearch/client/ParentTaskAssigningClient.java +++ b/core/src/main/java/org/elasticsearch/client/ParentTaskAssigningClient.java @@ -58,7 +58,7 @@ public Client unwrap() { } @Override - protected < Request extends ActionRequest, + protected < Request extends ActionRequest, Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder > void doExecute(Action action, Request request, ActionListener listener) { diff --git a/core/src/main/java/org/elasticsearch/client/node/NodeClient.java b/core/src/main/java/org/elasticsearch/client/node/NodeClient.java index e68b902e2592d..e4f26b157026d 100644 --- a/core/src/main/java/org/elasticsearch/client/node/NodeClient.java +++ b/core/src/main/java/org/elasticsearch/client/node/NodeClient.java @@ -28,12 +28,14 @@ import org.elasticsearch.action.support.TransportAction; import org.elasticsearch.client.Client; import org.elasticsearch.client.support.AbstractClient; +import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.tasks.Task; import 
org.elasticsearch.tasks.TaskListener; import org.elasticsearch.threadpool.ThreadPool; import java.util.Map; +import java.util.function.Supplier; /** * Client that executes actions on the local node. @@ -41,13 +43,19 @@ public class NodeClient extends AbstractClient { private Map actions; + /** + * The id of the local {@link DiscoveryNode}. Useful for generating task ids from tasks returned by + * {@link #executeLocally(GenericAction, ActionRequest, TaskListener)}. + */ + private Supplier localNodeId; public NodeClient(Settings settings, ThreadPool threadPool) { super(settings, threadPool); } - public void intialize(Map actions) { + public void initialize(Map actions, Supplier localNodeId) { this.actions = actions; + this.localNodeId = localNodeId; } @Override @@ -56,7 +64,7 @@ public void close() { } @Override - public < Request extends ActionRequest, + public < Request extends ActionRequest, Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder > void doExecute(Action action, Request request, ActionListener listener) { @@ -69,7 +77,7 @@ > void doExecute(Action action, Request reque * method if you don't need access to the task when listening for the response. This is the method used to implement the {@link Client} * interface. */ - public < Request extends ActionRequest, + public < Request extends ActionRequest, Response extends ActionResponse > Task executeLocally(GenericAction action, Request request, ActionListener listener) { return transportAction(action).execute(request, listener); @@ -79,17 +87,25 @@ > Task executeLocally(GenericAction action, Request request, * Execute an {@link Action} locally, returning that {@link Task} used to track it, and linking an {@link TaskListener}. Prefer this * method if you need access to the task when listening for the response. */ - public < Request extends ActionRequest, + public < Request extends ActionRequest, Response extends ActionResponse > Task executeLocally(GenericAction action, Request request, TaskListener listener) { return transportAction(action).execute(request, listener); } + /** + * The id of the local {@link DiscoveryNode}. Useful for generating task ids from tasks returned by + * {@link #executeLocally(GenericAction, ActionRequest, TaskListener)}. + */ + public String getLocalNodeId() { + return localNodeId.get(); + } + /** * Get the {@link TransportAction} for an {@link Action}, throwing exceptions if the action isn't available. 
*/ @SuppressWarnings("unchecked") - private < Request extends ActionRequest, + private < Request extends ActionRequest, Response extends ActionResponse > TransportAction transportAction(GenericAction action) { if (actions == null) { diff --git a/core/src/main/java/org/elasticsearch/client/support/AbstractClient.java b/core/src/main/java/org/elasticsearch/client/support/AbstractClient.java index c3816d8d37fad..fa050af77cb16 100644 --- a/core/src/main/java/org/elasticsearch/client/support/AbstractClient.java +++ b/core/src/main/java/org/elasticsearch/client/support/AbstractClient.java @@ -272,6 +272,10 @@ import org.elasticsearch.action.explain.ExplainRequest; import org.elasticsearch.action.explain.ExplainRequestBuilder; import org.elasticsearch.action.explain.ExplainResponse; +import org.elasticsearch.action.fieldcaps.FieldCapabilitiesAction; +import org.elasticsearch.action.fieldcaps.FieldCapabilitiesRequest; +import org.elasticsearch.action.fieldcaps.FieldCapabilitiesRequestBuilder; +import org.elasticsearch.action.fieldcaps.FieldCapabilitiesResponse; import org.elasticsearch.action.fieldstats.FieldStatsAction; import org.elasticsearch.action.fieldstats.FieldStatsRequest; import org.elasticsearch.action.fieldstats.FieldStatsRequestBuilder; @@ -343,6 +347,7 @@ import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.ThreadContext; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.tasks.TaskId; import org.elasticsearch.threadpool.ThreadPool; @@ -380,13 +385,13 @@ public final AdminClient admin() { } @Override - public final , Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder> RequestBuilder prepareExecute( + public final > RequestBuilder prepareExecute( final Action action) { return action.newRequestBuilder(this); } @Override - public final , Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder> ActionFuture execute( + public final > ActionFuture execute( Action action, Request request) { PlainActionFuture actionFuture = PlainActionFuture.newFuture(); execute(action, request, actionFuture); @@ -397,13 +402,13 @@ public final , Response extends ActionRes * This is the single execution point of *all* clients. 
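Editor's note: NodeClient above defers knowledge of the local node id by accepting a Supplier in initialize() and dereferencing it only when getLocalNodeId() is called. A minimal sketch of that pattern outside of Elasticsearch follows; the class name and the "node-0" id are illustrative.

import java.util.function.Supplier;

final class LocalNodeIdHolderSketch {
    private Supplier<String> localNodeId;

    // the id is not known at construction time, so a supplier is handed over later
    void initialize(final Supplier<String> localNodeId) {
        this.localNodeId = localNodeId;
    }

    String getLocalNodeId() {
        return localNodeId.get();
    }

    public static void main(final String[] args) {
        final LocalNodeIdHolderSketch holder = new LocalNodeIdHolderSketch();
        holder.initialize(() -> "node-0"); // "node-0" is a made-up id
        System.out.println(holder.getLocalNodeId());
    }
}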
*/ @Override - public final , Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder> void execute( + public final > void execute( Action action, Request request, ActionListener listener) { listener = threadedWrapper.wrap(listener); doExecute(action, request, listener); } - protected abstract , Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder> void doExecute(final Action action, final Request request, ActionListener listener); + protected abstract > void doExecute(Action action, Request request, ActionListener listener); @Override public ActionFuture index(final IndexRequest request) { @@ -669,12 +674,27 @@ public FieldStatsRequestBuilder prepareFieldStats() { return new FieldStatsRequestBuilder(this, FieldStatsAction.INSTANCE); } + @Override + public void fieldCaps(FieldCapabilitiesRequest request, ActionListener listener) { + execute(FieldCapabilitiesAction.INSTANCE, request, listener); + } + + @Override + public ActionFuture fieldCaps(FieldCapabilitiesRequest request) { + return execute(FieldCapabilitiesAction.INSTANCE, request); + } + + @Override + public FieldCapabilitiesRequestBuilder prepareFieldCaps() { + return new FieldCapabilitiesRequestBuilder(this, FieldCapabilitiesAction.INSTANCE); + } + static class Admin implements AdminClient { private final ClusterAdmin clusterAdmin; private final IndicesAdmin indicesAdmin; - public Admin(ElasticsearchClient client) { + Admin(ElasticsearchClient client) { this.clusterAdmin = new ClusterAdmin(client); this.indicesAdmin = new IndicesAdmin(client); } @@ -694,24 +714,24 @@ static class ClusterAdmin implements ClusterAdminClient { private final ElasticsearchClient client; - public ClusterAdmin(ElasticsearchClient client) { + ClusterAdmin(ElasticsearchClient client) { this.client = client; } @Override - public , Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder> ActionFuture execute( + public > ActionFuture execute( Action action, Request request) { return client.execute(action, request); } @Override - public , Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder> void execute( + public > void execute( Action action, Request request, ActionListener listener) { client.execute(action, request, listener); } @Override - public , Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder> RequestBuilder prepareExecute( + public > RequestBuilder prepareExecute( Action action) { return client.prepareExecute(action); } @@ -1084,6 +1104,11 @@ public PutPipelineRequestBuilder preparePutPipeline(String id, BytesReference so return new PutPipelineRequestBuilder(this, PutPipelineAction.INSTANCE, id, source); } + @Override + public PutPipelineRequestBuilder preparePutPipeline(String id, BytesReference source, XContentType xContentType) { + return new PutPipelineRequestBuilder(this, PutPipelineAction.INSTANCE, id, source, xContentType); + } + @Override public void deletePipeline(DeletePipelineRequest request, ActionListener listener) { execute(DeletePipelineAction.INSTANCE, request, listener); @@ -1099,6 +1124,11 @@ public DeletePipelineRequestBuilder prepareDeletePipeline() { return new DeletePipelineRequestBuilder(this, DeletePipelineAction.INSTANCE); } + @Override + public DeletePipelineRequestBuilder prepareDeletePipeline(String id) { + return new DeletePipelineRequestBuilder(this, DeletePipelineAction.INSTANCE, id); + } + @Override public void getPipeline(GetPipelineRequest request, ActionListener listener) { 
execute(GetPipelineAction.INSTANCE, request, listener); @@ -1129,6 +1159,11 @@ public SimulatePipelineRequestBuilder prepareSimulatePipeline(BytesReference sou return new SimulatePipelineRequestBuilder(this, SimulatePipelineAction.INSTANCE, source); } + @Override + public SimulatePipelineRequestBuilder prepareSimulatePipeline(BytesReference source, XContentType xContentType) { + return new SimulatePipelineRequestBuilder(this, SimulatePipelineAction.INSTANCE, source, xContentType); + } + @Override public void allocationExplain(ClusterAllocationExplainRequest request, ActionListener listener) { execute(ClusterAllocationExplainAction.INSTANCE, request, listener); @@ -1197,7 +1232,7 @@ public DeleteStoredScriptRequestBuilder prepareDeleteStoredScript(){ @Override public DeleteStoredScriptRequestBuilder prepareDeleteStoredScript(@Nullable String scriptLang, String id){ - return prepareDeleteStoredScript().setScriptLang(scriptLang).setId(id); + return prepareDeleteStoredScript().setLang(scriptLang).setId(id); } } @@ -1205,24 +1240,24 @@ static class IndicesAdmin implements IndicesAdminClient { private final ElasticsearchClient client; - public IndicesAdmin(ElasticsearchClient client) { + IndicesAdmin(ElasticsearchClient client) { this.client = client; } @Override - public , Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder> ActionFuture execute( + public > ActionFuture execute( Action action, Request request) { return client.execute(action, request); } @Override - public , Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder> void execute( + public > void execute( Action action, Request request, ActionListener listener) { client.execute(action, request, listener); } @Override - public , Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder> RequestBuilder prepareExecute( + public > RequestBuilder prepareExecute( Action action) { return client.prepareExecute(action); } @@ -1743,7 +1778,7 @@ public void getSettings(GetSettingsRequest request, ActionListener headers) { return new FilterClient(this) { @Override - protected , Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder> void doExecute(Action action, Request request, ActionListener listener) { + protected > void doExecute(Action action, Request request, ActionListener listener) { ThreadContext threadContext = threadPool().getThreadContext(); try (ThreadContext.StoredContext ctx = threadContext.stashAndMergeHeaders(headers)) { super.doExecute(action, request, listener); diff --git a/core/src/main/java/org/elasticsearch/client/transport/TransportClient.java b/core/src/main/java/org/elasticsearch/client/transport/TransportClient.java index f7ce9f929bdc0..d748af3685035 100644 --- a/core/src/main/java/org/elasticsearch/client/transport/TransportClient.java +++ b/core/src/main/java/org/elasticsearch/client/transport/TransportClient.java @@ -27,8 +27,9 @@ import org.elasticsearch.action.ActionRequestBuilder; import org.elasticsearch.action.ActionResponse; import org.elasticsearch.client.support.AbstractClient; -import org.elasticsearch.client.transport.support.TransportProxyClient; +import org.elasticsearch.cluster.ClusterModule; import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.component.LifecycleComponent; import org.elasticsearch.common.inject.Injector; import org.elasticsearch.common.inject.Module; @@ -40,11 +41,14 @@ import org.elasticsearch.common.settings.Settings; import 
org.elasticsearch.common.settings.SettingsModule; import org.elasticsearch.common.transport.TransportAddress; +import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.BigArrays; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.node.Node; -import org.elasticsearch.node.internal.InternalSettingsPreparer; +import org.elasticsearch.node.InternalSettingsPreparer; import org.elasticsearch.plugins.ActionPlugin; +import org.elasticsearch.plugins.NetworkPlugin; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.plugins.PluginsService; import org.elasticsearch.plugins.SearchPlugin; @@ -52,16 +56,23 @@ import org.elasticsearch.threadpool.ExecutorBuilder; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TcpTransport; +import org.elasticsearch.transport.Transport; import org.elasticsearch.transport.TransportService; import java.io.Closeable; +import java.net.UnknownHostException; import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; import java.util.Collections; import java.util.List; import java.util.concurrent.TimeUnit; +import java.util.function.Function; import java.util.stream.Collectors; +import java.util.stream.Stream; + +import static java.util.stream.Collectors.toList; +import static org.elasticsearch.common.unit.TimeValue.timeValueSeconds; /** * The transport client allows to create a client that is not part of the cluster, but simply connects to one @@ -72,6 +83,15 @@ */ public abstract class TransportClient extends AbstractClient { + public static final Setting CLIENT_TRANSPORT_NODES_SAMPLER_INTERVAL = + Setting.positiveTimeSetting("client.transport.nodes_sampler_interval", timeValueSeconds(5), Setting.Property.NodeScope); + public static final Setting CLIENT_TRANSPORT_PING_TIMEOUT = + Setting.positiveTimeSetting("client.transport.ping_timeout", timeValueSeconds(5), Setting.Property.NodeScope); + public static final Setting CLIENT_TRANSPORT_IGNORE_CLUSTER_NAME = + Setting.boolSetting("client.transport.ignore_cluster_name", false, Setting.Property.NodeScope); + public static final Setting CLIENT_TRANSPORT_SNIFF = + Setting.boolSetting("client.transport.sniff", false, Setting.Property.NodeScope); + private static PluginsService newPluginService(final Settings settings, Collection> plugins) { final Settings.Builder settingsBuilder = Settings.builder() .put(TcpTransport.PING_SCHEDULE.getKey(), "5s") // enable by default the transport schedule ping interval @@ -99,7 +119,7 @@ protected static Collection> addPlugins(Collection> plugins) { + Collection> plugins, HostFailureListener failureListner) { if (Node.NODE_NAME_SETTING.exists(providedSettings) == false) { providedSettings = Settings.builder().put(providedSettings).put(Node.NODE_NAME_SETTING.getKey(), "_client_").build(); } @@ -110,60 +130,75 @@ private static ClientTemplate buildTemplate(Settings providedSettings, Settings resourcesToClose.add(() -> ThreadPool.terminate(threadPool, 10, TimeUnit.SECONDS)); final NetworkService networkService = new NetworkService(settings, Collections.emptyList()); try { - final List> additionalSettings = new ArrayList<>(); - final List additionalSettingsFilter = new ArrayList<>(); - additionalSettings.addAll(pluginsService.getPluginSettings()); - additionalSettingsFilter.addAll(pluginsService.getPluginSettingsFilter()); + final List> additionalSettings = new 
ArrayList<>(pluginsService.getPluginSettings()); + final List additionalSettingsFilter = new ArrayList<>(pluginsService.getPluginSettingsFilter()); for (final ExecutorBuilder builder : threadPool.builders()) { additionalSettings.addAll(builder.getRegisteredSettings()); } SettingsModule settingsModule = new SettingsModule(settings, additionalSettings, additionalSettingsFilter); - NetworkModule networkModule = new NetworkModule(networkService, settings, true); SearchModule searchModule = new SearchModule(settings, true, pluginsService.filterPlugins(SearchPlugin.class)); List entries = new ArrayList<>(); - entries.addAll(networkModule.getNamedWriteables()); + entries.addAll(NetworkModule.getNamedWriteables()); entries.addAll(searchModule.getNamedWriteables()); + entries.addAll(ClusterModule.getNamedWriteables()); entries.addAll(pluginsService.filterPlugins(Plugin.class).stream() .flatMap(p -> p.getNamedWriteables().stream()) .collect(Collectors.toList())); NamedWriteableRegistry namedWriteableRegistry = new NamedWriteableRegistry(entries); + NamedXContentRegistry xContentRegistry = new NamedXContentRegistry(Stream.of( + searchModule.getNamedXContents().stream(), + pluginsService.filterPlugins(Plugin.class).stream() + .flatMap(p -> p.getNamedXContent().stream()) + ).flatMap(Function.identity()).collect(toList())); ModulesBuilder modules = new ModulesBuilder(); // plugin modules must be added here, before others or we can get crazy injection errors... for (Module pluginModule : pluginsService.createGuiceModules()) { modules.add(pluginModule); } - modules.add(networkModule); modules.add(b -> b.bind(ThreadPool.class).toInstance(threadPool)); - modules.add(searchModule); - ActionModule actionModule = new ActionModule(false, true, settings, null, settingsModule.getClusterSettings(), - pluginsService.filterPlugins(ActionPlugin.class)); + ActionModule actionModule = new ActionModule(true, settings, null, settingsModule.getIndexScopedSettings(), + settingsModule.getClusterSettings(), settingsModule.getSettingsFilter(), threadPool, + pluginsService.filterPlugins(ActionPlugin.class), null, null); modules.add(actionModule); - pluginsService.processModules(modules); CircuitBreakerService circuitBreakerService = Node.createCircuitBreakerService(settingsModule.getSettings(), settingsModule.getClusterSettings()); resourcesToClose.add(circuitBreakerService); BigArrays bigArrays = new BigArrays(settings, circuitBreakerService); resourcesToClose.add(bigArrays); modules.add(settingsModule); + NetworkModule networkModule = new NetworkModule(settings, true, pluginsService.filterPlugins(NetworkPlugin.class), threadPool, + bigArrays, circuitBreakerService, namedWriteableRegistry, xContentRegistry, networkService, null); + final Transport transport = networkModule.getTransportSupplier().get(); + final TransportAddress address; + try { + address = transport.addressesFromString("0.0.0.0:0", 1)[0]; // this is just a dummy transport address + } catch (UnknownHostException e) { + throw new RuntimeException(e); + } + final TransportService transportService = new TransportService(settings, transport, threadPool, + networkModule.getTransportInterceptor(), + boundTransportAddress -> DiscoveryNode.createLocal(settings, address, UUIDs.randomBase64UUID()), null); modules.add((b -> { b.bind(BigArrays.class).toInstance(bigArrays); b.bind(PluginsService.class).toInstance(pluginsService); b.bind(CircuitBreakerService.class).toInstance(circuitBreakerService); b.bind(NamedWriteableRegistry.class).toInstance(namedWriteableRegistry); + 
b.bind(Transport.class).toInstance(transport); + b.bind(TransportService.class).toInstance(transportService); + b.bind(NetworkService.class).toInstance(networkService); })); Injector injector = modules.createInjector(); - final TransportService transportService = injector.getInstance(TransportService.class); final TransportClientNodesService nodesService = - new TransportClientNodesService(settings, transportService, threadPool); + new TransportClientNodesService(settings, transportService, threadPool, failureListner == null + ? (t, e) -> {} : failureListner); final TransportProxyClient proxy = new TransportProxyClient(settings, transportService, nodesService, actionModule.getActions().values().stream().map(x -> x.getAction()).collect(Collectors.toList())); - List pluginLifecycleComponents = new ArrayList<>(); - pluginLifecycleComponents.addAll(pluginsService.getGuiceServiceClasses().stream() + List pluginLifecycleComponents = new ArrayList<>(pluginsService.getGuiceServiceClasses().stream() .map(injector::getInstance).collect(Collectors.toList())); resourcesToClose.addAll(pluginLifecycleComponents); @@ -216,7 +251,7 @@ ThreadPool getThreadPool() { * Creates a new TransportClient with the given settings and plugins */ public TransportClient(Settings settings, Collection> plugins) { - this(buildTemplate(settings, Settings.EMPTY, plugins)); + this(buildTemplate(settings, Settings.EMPTY, plugins, null)); } /** @@ -225,8 +260,9 @@ public TransportClient(Settings settings, Collection> pl * @param defaultSettings default settings that are merged after the plugins have added it's additional settings. * @param plugins the client plugins */ - protected TransportClient(Settings settings, Settings defaultSettings, Collection> plugins) { - this(buildTemplate(settings, defaultSettings, plugins)); + protected TransportClient(Settings settings, Settings defaultSettings, Collection> plugins, + HostFailureListener hostFailureListener) { + this(buildTemplate(settings, defaultSettings, plugins, hostFailureListener)); } private TransportClient(ClientTemplate template) { @@ -323,7 +359,25 @@ public void close() { } @Override - protected , Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder> void doExecute(Action action, Request request, ActionListener listener) { + protected > void doExecute(Action action, Request request, ActionListener listener) { proxy.execute(action, request, listener); } + + /** + * Listener that allows to be notified whenever a node failure / disconnect happens + */ + @FunctionalInterface + public interface HostFailureListener { + /** + * Called once a node disconnect is detected. 
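The `HostFailureListener` introduced in this hunk is a `@FunctionalInterface`, so callers can wire it up with a lambda. A minimal sketch, assuming a custom `TransportClient` subclass that simply forwards the listener to the protected constructor shown above (the `NotifyingTransportClient` class and the logging are illustrative, not part of the patch):

```java
import java.util.Collections;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.settings.Settings;

// Hypothetical subclass: the listener-accepting constructor is protected, so a small
// subclass is one way to pass a HostFailureListener in.
class NotifyingTransportClient extends TransportClient {
    NotifyingTransportClient(Settings settings, TransportClient.HostFailureListener onFailure) {
        super(settings, Settings.EMPTY, Collections.emptyList(), onFailure);
    }
}

class HostFailureExample {
    static TransportClient build(Settings settings) {
        // lambda implements onNodeDisconnected(DiscoveryNode, Exception)
        return new NotifyingTransportClient(settings,
            (DiscoveryNode node, Exception ex) ->
                System.err.println("lost connection to " + node + ": " + ex));
    }
}
```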
+ * @param node the node that has been disconnected + * @param ex the exception causing the disconnection + */ + void onNodeDisconnected(DiscoveryNode node, Exception ex); + } + + // pkg private for testing + TransportClientNodesService getNodesService() { + return nodesService; + } } diff --git a/core/src/main/java/org/elasticsearch/client/transport/TransportClientNodesService.java b/core/src/main/java/org/elasticsearch/client/transport/TransportClientNodesService.java index 18c2d15ec390f..3e50b4d74c916 100644 --- a/core/src/main/java/org/elasticsearch/client/transport/TransportClientNodesService.java +++ b/core/src/main/java/org/elasticsearch/client/transport/TransportClientNodesService.java @@ -22,6 +22,7 @@ import com.carrotsearch.hppc.cursors.ObjectCursor; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; +import org.apache.lucene.util.IOUtils; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.Version; import org.elasticsearch.action.ActionListener; @@ -35,16 +36,20 @@ import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.common.Randomness; import org.elasticsearch.common.component.AbstractComponent; -import org.elasticsearch.common.settings.Setting; -import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.TransportAddress; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.common.util.concurrent.ConcurrentCollections; import org.elasticsearch.common.util.concurrent.FutureUtils; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.ConnectTransportException; +import org.elasticsearch.transport.ConnectionProfile; import org.elasticsearch.transport.FutureTransportResponseHandler; +import org.elasticsearch.transport.NodeDisconnectedException; +import org.elasticsearch.transport.NodeNotConnectedException; +import org.elasticsearch.transport.PlainTransportFuture; +import org.elasticsearch.transport.Transport; import org.elasticsearch.transport.TransportException; import org.elasticsearch.transport.TransportRequestOptions; import org.elasticsearch.transport.TransportResponseHandler; @@ -64,12 +69,7 @@ import java.util.concurrent.ScheduledFuture; import java.util.concurrent.atomic.AtomicInteger; -import static org.elasticsearch.common.unit.TimeValue.timeValueSeconds; - -/** - * - */ -public class TransportClientNodesService extends AbstractComponent implements Closeable { +final class TransportClientNodesService extends AbstractComponent implements Closeable { private final TimeValue nodesSamplerInterval; @@ -103,37 +103,45 @@ public class TransportClientNodesService extends AbstractComponent implements Cl private volatile boolean closed; + private final TransportClient.HostFailureListener hostFailureListener; + + // TODO: migrate this to use low level connections and single type channels + /** {@link ConnectionProfile} to use when to connecting to the listed nodes and doing a liveness check */ + private static final ConnectionProfile LISTED_NODES_PROFILE; + + static { + ConnectionProfile.Builder builder = new ConnectionProfile.Builder(); + builder.addConnections(1, + TransportRequestOptions.Type.BULK, + TransportRequestOptions.Type.PING, + TransportRequestOptions.Type.RECOVERY, + TransportRequestOptions.Type.REG, + TransportRequestOptions.Type.STATE); + LISTED_NODES_PROFILE = 
builder.build(); + } - public static final Setting CLIENT_TRANSPORT_NODES_SAMPLER_INTERVAL = - Setting.positiveTimeSetting("client.transport.nodes_sampler_interval", timeValueSeconds(5), Property.NodeScope); - public static final Setting CLIENT_TRANSPORT_PING_TIMEOUT = - Setting.positiveTimeSetting("client.transport.ping_timeout", timeValueSeconds(5), Property.NodeScope); - public static final Setting CLIENT_TRANSPORT_IGNORE_CLUSTER_NAME = - Setting.boolSetting("client.transport.ignore_cluster_name", false, Property.NodeScope); - public static final Setting CLIENT_TRANSPORT_SNIFF = - Setting.boolSetting("client.transport.sniff", false, Property.NodeScope); - - public TransportClientNodesService(Settings settings,TransportService transportService, - ThreadPool threadPool) { + TransportClientNodesService(Settings settings, TransportService transportService, + ThreadPool threadPool, TransportClient.HostFailureListener hostFailureListener) { super(settings); this.clusterName = ClusterName.CLUSTER_NAME_SETTING.get(settings); this.transportService = transportService; this.threadPool = threadPool; this.minCompatibilityVersion = Version.CURRENT.minimumCompatibilityVersion(); - this.nodesSamplerInterval = CLIENT_TRANSPORT_NODES_SAMPLER_INTERVAL.get(this.settings); - this.pingTimeout = CLIENT_TRANSPORT_PING_TIMEOUT.get(this.settings).millis(); - this.ignoreClusterName = CLIENT_TRANSPORT_IGNORE_CLUSTER_NAME.get(this.settings); + this.nodesSamplerInterval = TransportClient.CLIENT_TRANSPORT_NODES_SAMPLER_INTERVAL.get(this.settings); + this.pingTimeout = TransportClient.CLIENT_TRANSPORT_PING_TIMEOUT.get(this.settings).millis(); + this.ignoreClusterName = TransportClient.CLIENT_TRANSPORT_IGNORE_CLUSTER_NAME.get(this.settings); if (logger.isDebugEnabled()) { logger.debug("node_sampler_interval[{}]", nodesSamplerInterval); } - if (CLIENT_TRANSPORT_SNIFF.get(this.settings)) { + if (TransportClient.CLIENT_TRANSPORT_SNIFF.get(this.settings)) { this.nodesSampler = new SniffNodesSampler(); } else { this.nodesSampler = new SimpleNodeSampler(); } + this.hostFailureListener = hostFailureListener; this.nodesSamplerFuture = threadPool.schedule(nodesSamplerInterval, ThreadPool.Names.GENERIC, new ScheduledNodeSampler()); } @@ -179,8 +187,7 @@ public TransportClientNodesService addTransportAddresses(TransportAddress... 
tra if (filtered.isEmpty()) { return this; } - List builder = new ArrayList<>(); - builder.addAll(listedNodes()); + List builder = new ArrayList<>(listedNodes); for (TransportAddress transportAddress : filtered) { DiscoveryNode node = new DiscoveryNode("#transport#-" + tempNodeIdGenerator.incrementAndGet(), transportAddress, Collections.emptyMap(), Collections.emptySet(), minCompatibilityVersion); @@ -198,15 +205,25 @@ public TransportClientNodesService removeTransportAddress(TransportAddress trans if (closed) { throw new IllegalStateException("transport client is closed, can't remove an address"); } - List builder = new ArrayList<>(); + List listNodesBuilder = new ArrayList<>(); for (DiscoveryNode otherNode : listedNodes) { if (!otherNode.getAddress().equals(transportAddress)) { - builder.add(otherNode); + listNodesBuilder.add(otherNode); } else { - logger.debug("removing address [{}]", otherNode); + logger.debug("removing address [{}] from listed nodes", otherNode); } } - listedNodes = Collections.unmodifiableList(builder); + listedNodes = Collections.unmodifiableList(listNodesBuilder); + List nodesBuilder = new ArrayList<>(); + for (DiscoveryNode otherNode : nodes) { + if (!otherNode.getAddress().equals(transportAddress)) { + nodesBuilder.add(otherNode); + } else { + logger.debug("disconnecting from node with address [{}]", otherNode); + transportService.disconnectFromNode(otherNode); + } + } + nodes = Collections.unmodifiableList(nodesBuilder); nodesSampler.sample(); } return this; @@ -227,13 +244,17 @@ public void execute(NodeListenerCallback callback, ActionLi } ensureNodesAreAvailable(nodes); int index = getNodeNumber(); - RetryListener retryListener = new RetryListener<>(callback, listener, nodes, index); - DiscoveryNode node = nodes.get((index) % nodes.size()); + RetryListener retryListener = new RetryListener<>(callback, listener, nodes, index, hostFailureListener); + DiscoveryNode node = retryListener.getNode(0); try { callback.doWithNode(node, retryListener); } catch (Exception e) { - //this exception can't come from the TransportService as it doesn't throw exception at all - listener.onFailure(e); + try { + //this exception can't come from the TransportService as it doesn't throw exception at all + listener.onFailure(e); + } finally { + retryListener.maybeNodeFailed(node, e); + } } } @@ -242,15 +263,17 @@ public static class RetryListener implements ActionListener private final ActionListener listener; private final List nodes; private final int index; + private final TransportClient.HostFailureListener hostFailureListener; private volatile int i; - public RetryListener(NodeListenerCallback callback, ActionListener listener, - List nodes, int index) { + RetryListener(NodeListenerCallback callback, ActionListener listener, + List nodes, int index, TransportClient.HostFailureListener hostFailureListener) { this.callback = callback; this.listener = listener; this.nodes = nodes; this.index = index; + this.hostFailureListener = hostFailureListener; } @Override @@ -260,13 +283,15 @@ public void onResponse(Response response) { @Override public void onFailure(Exception e) { - if (ExceptionsHelper.unwrapCause(e) instanceof ConnectTransportException) { + Throwable throwable = ExceptionsHelper.unwrapCause(e); + if (throwable instanceof ConnectTransportException) { + maybeNodeFailed(getNode(this.i), (ConnectTransportException) throwable); int i = ++this.i; if (i >= nodes.size()) { listener.onFailure(new NoNodeAvailableException("None of the configured nodes were available: " + nodes, e)); 
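The `getNode(i)` helper added a few lines below rotates through the node list starting from the offset chosen for the first attempt, so retries visit every configured node exactly once before the `NoNodeAvailableException` above is raised. A tiny self-contained sketch of that arithmetic (node names are placeholders):

```java
import java.util.Arrays;
import java.util.List;

class RetryRotationSketch {
    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("node-0", "node-1", "node-2");
        int index = 2; // offset chosen for the original attempt
        // attempt i goes to nodes.get((index + i) % nodes.size()), cycling through the list
        for (int i = 0; i < nodes.size(); i++) {
            System.out.println("attempt " + i + " -> " + nodes.get((index + i) % nodes.size()));
        }
    }
}
```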
} else { try { - callback.doWithNode(nodes.get((index + i) % nodes.size()), this); + callback.doWithNode(getNode(i), this); } catch(final Exception inner) { inner.addSuppressed(e); // this exception can't come from the TransportService as it doesn't throw exceptions at all @@ -278,7 +303,15 @@ public void onFailure(Exception e) { } } + final DiscoveryNode getNode(int i) { + return nodes.get((index + i) % nodes.size()); + } + final void maybeNodeFailed(DiscoveryNode node, Exception ex) { + if (ex instanceof NodeDisconnectedException || ex instanceof NodeNotConnectedException) { + hostFailureListener.onNodeDisconnected(node, ex); + } + } } @Override @@ -371,49 +404,37 @@ protected void doSample() { HashSet newNodes = new HashSet<>(); HashSet newFilteredNodes = new HashSet<>(); for (DiscoveryNode listedNode : listedNodes) { - if (!transportService.nodeConnected(listedNode)) { - try { - // its a listed node, light connect to it... - logger.trace("connecting to listed node (light) [{}]", listedNode); - transportService.connectToNodeLight(listedNode); - } catch (Exception e) { - logger.debug( - (Supplier) - () -> new ParameterizedMessage("failed to connect to node [{}], removed from nodes list", listedNode), e); - newFilteredNodes.add(listedNode); - continue; - } - } - try { - LivenessResponse livenessResponse = transportService.submitRequest(listedNode, TransportLivenessAction.NAME, - new LivenessRequest(), - TransportRequestOptions.builder().withType(TransportRequestOptions.Type.STATE).withTimeout(pingTimeout).build(), - new FutureTransportResponseHandler() { - @Override - public LivenessResponse newInstance() { - return new LivenessResponse(); - } - }).txGet(); + try (Transport.Connection connection = transportService.openConnection(listedNode, LISTED_NODES_PROFILE)){ + final PlainTransportFuture handler = new PlainTransportFuture<>( + new FutureTransportResponseHandler() { + @Override + public LivenessResponse newInstance() { + return new LivenessResponse(); + } + }); + transportService.sendRequest(connection, TransportLivenessAction.NAME, new LivenessRequest(), + TransportRequestOptions.builder().withType(TransportRequestOptions.Type.STATE).withTimeout(pingTimeout).build(), + handler); + final LivenessResponse livenessResponse = handler.txGet(); if (!ignoreClusterName && !clusterName.equals(livenessResponse.getClusterName())) { logger.warn("node {} not part of the cluster {}, ignoring...", listedNode, clusterName); newFilteredNodes.add(listedNode); - } else if (livenessResponse.getDiscoveryNode() != null) { + } else { // use discovered information but do keep the original transport address, // so people can control which address is exactly used. 
DiscoveryNode nodeWithInfo = livenessResponse.getDiscoveryNode(); newNodes.add(new DiscoveryNode(nodeWithInfo.getName(), nodeWithInfo.getId(), nodeWithInfo.getEphemeralId(), nodeWithInfo.getHostName(), nodeWithInfo.getHostAddress(), listedNode.getAddress(), nodeWithInfo.getAttributes(), nodeWithInfo.getRoles(), nodeWithInfo.getVersion())); - } else { - // although we asked for one node, our target may not have completed - // initialization yet and doesn't have cluster nodes - logger.debug("node {} didn't return any discovery info, temporarily using transport discovery node", listedNode); - newNodes.add(listedNode); } + } catch (ConnectTransportException e) { + logger.debug( + (Supplier) + () -> new ParameterizedMessage("failed to connect to node [{}], ignoring...", listedNode), e); + hostFailureListener.onNodeDisconnected(listedNode, e); } catch (Exception e) { logger.info( (Supplier) () -> new ParameterizedMessage("failed to get node info for {}, disconnecting...", listedNode), e); - transportService.disconnectFromNode(listedNode); } } @@ -438,76 +459,93 @@ protected void doSample() { final CountDownLatch latch = new CountDownLatch(nodesToPing.size()); final ConcurrentMap clusterStateResponses = ConcurrentCollections.newConcurrentMap(); - for (final DiscoveryNode listedNode : nodesToPing) { - threadPool.executor(ThreadPool.Names.MANAGEMENT).execute(new Runnable() { - @Override - public void run() { - try { - if (!transportService.nodeConnected(listedNode)) { - try { + try { + for (final DiscoveryNode nodeToPing : nodesToPing) { + threadPool.executor(ThreadPool.Names.MANAGEMENT).execute(new AbstractRunnable() { + + /** + * we try to reuse existing connections but if needed we will open a temporary connection + * that will be closed at the end of the execution. + */ + Transport.Connection connectionToClose = null; + + void onDone() { + try { + IOUtils.closeWhileHandlingException(connectionToClose); + } finally { + latch.countDown(); + } + } - // if its one of the actual nodes we will talk to, not to listed nodes, fully connect - if (nodes.contains(listedNode)) { - logger.trace("connecting to cluster node [{}]", listedNode); - transportService.connectToNode(listedNode); - } else { - // its a listed node, light connect to it... 
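The reworked sniff sampler above introduces `connectionToClose`: it prefers an already-established connection and only opens a temporary one, closed in `onDone()`, when the node is not connected yet. A self-contained sketch of that reuse-or-open-temporarily pattern, with a plain `Closeable` standing in for `Transport.Connection`:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.Map;
import java.util.function.Function;

class ReuseOrOpenSketch {
    /** Stand-in for Transport.Connection; illustrative only. */
    interface Connection extends Closeable {
        void ping();
    }

    static void pingNode(String node, Map<String, Connection> established,
                         Function<String, Connection> opener) throws IOException {
        Connection temporary = null;                    // only set if we had to open our own connection
        Connection connection = established.get(node);  // reuse an existing connection when possible
        if (connection == null) {
            temporary = opener.apply(node);             // otherwise open a temporary one
            connection = temporary;
        }
        try {
            connection.ping();
        } finally {
            if (temporary != null) {
                temporary.close();                      // temporary connections are closed once we are done
            }
        }
    }
}
```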
- logger.trace("connecting to listed node (light) [{}]", listedNode); - transportService.connectToNodeLight(listedNode); - } - } catch (Exception e) { - logger.debug( - (Supplier) - () -> new ParameterizedMessage("failed to connect to node [{}], ignoring...", listedNode), e); - latch.countDown(); - return; + @Override + public void onFailure(Exception e) { + onDone(); + if (e instanceof ConnectTransportException) { + logger.debug((Supplier) + () -> new ParameterizedMessage("failed to connect to node [{}], ignoring...", nodeToPing), e); + hostFailureListener.onNodeDisconnected(nodeToPing, e); + } else { + logger.info( + (Supplier) () -> new ParameterizedMessage( + "failed to get local cluster state info for {}, disconnecting...", nodeToPing), e); + } + } + + @Override + protected void doRun() throws Exception { + Transport.Connection pingConnection = null; + if (nodes.contains(nodeToPing)) { + try { + pingConnection = transportService.getConnection(nodeToPing); + } catch (NodeNotConnectedException e) { + // will use a temp connection } } - transportService.sendRequest(listedNode, ClusterStateAction.NAME, - Requests.clusterStateRequest().clear().nodes(true).local(true), - TransportRequestOptions.builder().withType(TransportRequestOptions.Type.STATE) - .withTimeout(pingTimeout).build(), - new TransportResponseHandler() { - - @Override - public ClusterStateResponse newInstance() { - return new ClusterStateResponse(); - } + if (pingConnection == null) { + logger.trace("connecting to cluster node [{}]", nodeToPing); + connectionToClose = transportService.openConnection(nodeToPing, LISTED_NODES_PROFILE); + pingConnection = connectionToClose; + } + transportService.sendRequest(pingConnection, ClusterStateAction.NAME, + Requests.clusterStateRequest().clear().nodes(true).local(true), + TransportRequestOptions.builder().withType(TransportRequestOptions.Type.STATE) + .withTimeout(pingTimeout).build(), + new TransportResponseHandler() { + + @Override + public ClusterStateResponse newInstance() { + return new ClusterStateResponse(); + } - @Override - public String executor() { - return ThreadPool.Names.SAME; - } + @Override + public String executor() { + return ThreadPool.Names.SAME; + } - @Override - public void handleResponse(ClusterStateResponse response) { - clusterStateResponses.put(listedNode, response); - latch.countDown(); - } + @Override + public void handleResponse(ClusterStateResponse response) { + clusterStateResponses.put(nodeToPing, response); + onDone(); + } - @Override - public void handleException(TransportException e) { - logger.info( - (Supplier) () -> new ParameterizedMessage( - "failed to get local cluster state for {}, disconnecting...", listedNode), e); - transportService.disconnectFromNode(listedNode); - latch.countDown(); + @Override + public void handleException(TransportException e) { + logger.info( + (Supplier) () -> new ParameterizedMessage( + "failed to get local cluster state for {}, disconnecting...", nodeToPing), e); + try { + hostFailureListener.onNodeDisconnected(nodeToPing, e); + } finally { + onDone(); } - }); - } catch (Exception e) { - logger.info( - (Supplier)() -> new ParameterizedMessage( - "failed to get local cluster state info for {}, disconnecting...", listedNode), e); - transportService.disconnectFromNode(listedNode); - latch.countDown(); + } + }); } - } - }); - } - - try { + }); + } latch.await(); } catch (InterruptedException e) { + Thread.currentThread().interrupt(); return; } @@ -534,4 +572,9 @@ public interface NodeListenerCallback { void 
doWithNode(DiscoveryNode node, ActionListener listener); } + + // pkg private for testing + void doSample() { + nodesSampler.doSample(); + } } diff --git a/core/src/main/java/org/elasticsearch/client/transport/support/TransportProxyClient.java b/core/src/main/java/org/elasticsearch/client/transport/TransportProxyClient.java similarity index 79% rename from core/src/main/java/org/elasticsearch/client/transport/support/TransportProxyClient.java rename to core/src/main/java/org/elasticsearch/client/transport/TransportProxyClient.java index 34833e9400a8e..5436bef172a47 100644 --- a/core/src/main/java/org/elasticsearch/client/transport/support/TransportProxyClient.java +++ b/core/src/main/java/org/elasticsearch/client/transport/TransportProxyClient.java @@ -17,7 +17,7 @@ * under the License. */ -package org.elasticsearch.client.transport.support; +package org.elasticsearch.client.transport; import org.elasticsearch.action.Action; import org.elasticsearch.action.ActionListener; @@ -26,9 +26,6 @@ import org.elasticsearch.action.ActionResponse; import org.elasticsearch.action.GenericAction; import org.elasticsearch.action.TransportActionNodeProxy; -import org.elasticsearch.client.transport.TransportClientNodesService; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.transport.TransportService; @@ -38,15 +35,12 @@ import static java.util.Collections.unmodifiableMap; -/** - * - */ -public class TransportProxyClient { +final class TransportProxyClient { private final TransportClientNodesService nodesService; private final Map proxies; - public TransportProxyClient(Settings settings, TransportService transportService, + TransportProxyClient(Settings settings, TransportService transportService, TransportClientNodesService nodesService, List actions) { this.nodesService = nodesService; Map proxies = new HashMap<>(); @@ -58,7 +52,9 @@ public TransportProxyClient(Settings settings, TransportService transportService this.proxies = unmodifiableMap(proxies); } - public > void execute(final Action action, final Request request, ActionListener listener) { + public > void execute(final Action action, + final Request request, ActionListener listener) { final TransportActionNodeProxy proxy = proxies.get(action); nodesService.execute((n, l) -> proxy.execute(n, request, l), listener); } diff --git a/core/src/main/java/org/elasticsearch/cluster/AbstractDiffable.java b/core/src/main/java/org/elasticsearch/cluster/AbstractDiffable.java index f9d5f33cad6ca..8e63bc2b9d700 100644 --- a/core/src/main/java/org/elasticsearch/cluster/AbstractDiffable.java +++ b/core/src/main/java/org/elasticsearch/cluster/AbstractDiffable.java @@ -40,12 +40,7 @@ public Diff diff(T previousState) { } } - @Override - public Diff readDiffFrom(StreamInput in) throws IOException { - return new CompleteDiff<>(this, in); - } - - public static > Diff readDiffFrom(T reader, StreamInput in) throws IOException { + public static > Diff readDiffFrom(Reader reader, StreamInput in) throws IOException { return new CompleteDiff(reader, in); } @@ -57,23 +52,23 @@ private static class CompleteDiff> implements Diff { /** * Creates simple diff with changes */ - public CompleteDiff(T part) { + CompleteDiff(T part) { this.part = part; } /** * Creates simple diff without changes */ - public CompleteDiff() { + CompleteDiff() { this.part = null; } /** * Read simple diff from the stream */ - public CompleteDiff(Diffable reader, StreamInput 
in) throws IOException { + CompleteDiff(Reader reader, StreamInput in) throws IOException { if (in.readBoolean()) { - this.part = reader.readFrom(in); + this.part = reader.read(in); } else { this.part = null; } diff --git a/core/src/main/java/org/elasticsearch/cluster/AbstractNamedDiffable.java b/core/src/main/java/org/elasticsearch/cluster/AbstractNamedDiffable.java new file mode 100644 index 0000000000000..fb253a1a5df69 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/AbstractNamedDiffable.java @@ -0,0 +1,133 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.cluster; + +import org.elasticsearch.Version; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.io.stream.NamedWriteable; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; + +import java.io.IOException; + +/** + * Abstract diffable object with simple diffs implementation that sends the entire object if object has changed or + * nothing is object remained the same. Comparing to AbstractDiffable, this class also works with NamedWriteables + */ +public abstract class AbstractNamedDiffable> implements Diffable, NamedWriteable { + + @Override + public Diff diff(T previousState) { + if (this.get().equals(previousState)) { + return new CompleteNamedDiff<>(previousState.getWriteableName(), previousState.getMinimalSupportedVersion()); + } else { + return new CompleteNamedDiff<>(get()); + } + } + + public static > NamedDiff readDiffFrom(Class tClass, String name, StreamInput in) + throws IOException { + return new CompleteNamedDiff<>(tClass, name, in); + } + + private static class CompleteNamedDiff> implements NamedDiff { + + @Nullable + private final T part; + + private final String name; + + /** + * A non-null value is only required for write operation, if the diff was just read from the stream the version + * is unnecessary. 
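`CompleteNamedDiff`, added in this new file, implements the simplest possible diff: ship the whole object when it changed, ship nothing when it did not. A stripped-down, self-contained sketch of that idea using plain Java types instead of the stream/NamedWriteable machinery:

```java
import java.util.Optional;

// Minimal illustration of a "complete diff": it either carries the new value or is empty.
final class CompleteDiffSketch<T> {
    private final Optional<T> changed;

    private CompleteDiffSketch(Optional<T> changed) {
        this.changed = changed;
    }

    static <T> CompleteDiffSketch<T> of(T previous, T current) {
        // send the entire object if it changed, nothing if it stayed the same
        return current.equals(previous)
            ? new CompleteDiffSketch<>(Optional.empty())
            : new CompleteDiffSketch<>(Optional.of(current));
    }

    /** Applying the diff returns the shipped object, or the existing part when nothing was sent. */
    T apply(T part) {
        return changed.orElse(part);
    }
}
```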
+ */ + @Nullable + private final Version minimalSupportedVersion; + + /** + * Creates simple diff with changes + */ + CompleteNamedDiff(T part) { + this.part = part; + this.name = part.getWriteableName(); + this.minimalSupportedVersion = part.getMinimalSupportedVersion(); + } + + /** + * Creates simple diff without changes + */ + CompleteNamedDiff(String name, Version minimalSupportedVersion) { + this.part = null; + this.name = name; + this.minimalSupportedVersion = minimalSupportedVersion; + } + + /** + * Read simple diff from the stream + */ + CompleteNamedDiff(Class tClass, String name, StreamInput in) throws IOException { + if (in.readBoolean()) { + this.part = in.readNamedWriteable(tClass, name); + this.minimalSupportedVersion = part.getMinimalSupportedVersion(); + } else { + this.part = null; + this.minimalSupportedVersion = null; // We just read this diff, so it's not going to be written + } + this.name = name; + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + assert minimalSupportedVersion != null : "shouldn't be called on diff that was de-serialized from the stream"; + if (part != null) { + out.writeBoolean(true); + part.writeTo(out); + } else { + out.writeBoolean(false); + } + } + + @Override + public T apply(T part) { + if (this.part != null) { + return this.part; + } else { + return part; + } + } + + @Override + public String getWriteableName() { + return name; + } + + @Override + public Version getMinimalSupportedVersion() { + assert minimalSupportedVersion != null : "shouldn't be called on the diff that was de-serialized from the stream"; + return minimalSupportedVersion; + } + } + + @SuppressWarnings("unchecked") + public T get() { + return (T) this; + } + +} diff --git a/core/src/main/java/org/elasticsearch/cluster/ClusterChangedEvent.java b/core/src/main/java/org/elasticsearch/cluster/ClusterChangedEvent.java index e3164eacdbb6b..701656db9ce8f 100644 --- a/core/src/main/java/org/elasticsearch/cluster/ClusterChangedEvent.java +++ b/core/src/main/java/org/elasticsearch/cluster/ClusterChangedEvent.java @@ -20,17 +20,21 @@ package org.elasticsearch.cluster; import com.carrotsearch.hppc.cursors.ObjectCursor; +import com.carrotsearch.hppc.cursors.ObjectObjectCursor; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexGraveyard; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.node.DiscoveryNodes; +import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.gateway.GatewayService; import org.elasticsearch.index.Index; import java.util.ArrayList; import java.util.Collections; +import java.util.HashSet; import java.util.List; import java.util.Objects; +import java.util.Set; import java.util.stream.Collectors; /** @@ -143,6 +147,33 @@ public boolean metaDataChanged() { return state.metaData() != previousState.metaData(); } + /** + * Returns a set of custom meta data types when any custom metadata for the cluster has changed + * between the previous cluster state and the new cluster state. 
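The new `changedCustomMetaDataSet()` (continued below) reports which custom metadata types differ between two cluster states: keys that were added or whose value changed, plus keys that were removed. The same set logic on plain maps, as a self-contained sketch:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class ChangedCustomsSketch {
    /** Keys whose value changed, keys newly added, and keys removed between the two maps. */
    static Set<String> changedKeys(Map<String, String> previous, Map<String, String> current) {
        Set<String> result = new HashSet<>();
        if (current.equals(previous) == false) {
            for (Map.Entry<String, String> e : current.entrySet()) {
                if (e.getValue().equals(previous.get(e.getKey())) == false) {
                    result.add(e.getKey()); // added or updated
                }
            }
            for (String key : previous.keySet()) {
                if (current.containsKey(key) == false) {
                    result.add(key); // deleted
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> prev = new HashMap<>();
        prev.put("ingest", "v1");
        prev.put("repositories", "v1");
        Map<String, String> curr = new HashMap<>();
        curr.put("ingest", "v2");
        curr.put("licenses", "v1");
        // prints the three changed keys (iteration order of the set is unspecified)
        System.out.println(changedKeys(prev, curr));
    }
}
```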
custom meta data types are + * returned iff they have been added, updated or removed between the previous and the current state + */ + public Set changedCustomMetaDataSet() { + Set result = new HashSet<>(); + ImmutableOpenMap currentCustoms = state.metaData().customs(); + ImmutableOpenMap previousCustoms = previousState.metaData().customs(); + if (currentCustoms.equals(previousCustoms) == false) { + for (ObjectObjectCursor currentCustomMetaData : currentCustoms) { + // new custom md added or existing custom md changed + if (previousCustoms.containsKey(currentCustomMetaData.key) == false + || currentCustomMetaData.value.equals(previousCustoms.get(currentCustomMetaData.key)) == false) { + result.add(currentCustomMetaData.key); + } + } + // existing custom md deleted + for (ObjectObjectCursor previousCustomMetaData : previousCustoms) { + if (currentCustoms.containsKey(previousCustomMetaData.key) == false) { + result.add(previousCustomMetaData.key); + } + } + } + return result; + } + /** * Returns true iff the {@link IndexMetaData} for a given index * has changed between the previous cluster state and the new cluster state. diff --git a/core/src/main/java/org/elasticsearch/cluster/ClusterModule.java b/core/src/main/java/org/elasticsearch/cluster/ClusterModule.java index 4e582cb32ca03..2d59abdeb9eab 100644 --- a/core/src/main/java/org/elasticsearch/cluster/ClusterModule.java +++ b/core/src/main/java/org/elasticsearch/cluster/ClusterModule.java @@ -19,12 +19,12 @@ package org.elasticsearch.cluster; -import org.apache.logging.log4j.Logger; import org.elasticsearch.cluster.action.index.MappingUpdatedAction; import org.elasticsearch.cluster.action.index.NodeMappingRefreshAction; import org.elasticsearch.cluster.action.shard.ShardStateAction; +import org.elasticsearch.cluster.metadata.IndexGraveyard; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.metadata.IndexTemplateFilter; +import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.metadata.MetaDataCreateIndexService; import org.elasticsearch.cluster.metadata.MetaDataDeleteIndexService; import org.elasticsearch.cluster.metadata.MetaDataIndexAliasesService; @@ -32,6 +32,7 @@ import org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService; import org.elasticsearch.cluster.metadata.MetaDataMappingService; import org.elasticsearch.cluster.metadata.MetaDataUpdateSettingsService; +import org.elasticsearch.cluster.metadata.RepositoriesMetaData; import org.elasticsearch.cluster.routing.DelayedAllocationService; import org.elasticsearch.cluster.routing.RoutingService; import org.elasticsearch.cluster.routing.allocation.AllocationService; @@ -54,19 +55,28 @@ import org.elasticsearch.cluster.routing.allocation.decider.SnapshotInProgressAllocationDecider; import org.elasticsearch.cluster.routing.allocation.decider.ThrottlingAllocationDecider; import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.inject.AbstractModule; -import org.elasticsearch.common.logging.Loggers; +import org.elasticsearch.common.io.stream.NamedWriteable; +import org.elasticsearch.common.io.stream.NamedWriteableRegistry; +import org.elasticsearch.common.io.stream.NamedWriteableRegistry.Entry; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.io.stream.Writeable.Reader; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Setting; import 
org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.util.ExtensionPoint; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.gateway.GatewayAllocator; +import org.elasticsearch.ingest.IngestMetadata; import org.elasticsearch.plugins.ClusterPlugin; +import org.elasticsearch.script.ScriptMetaData; import org.elasticsearch.tasks.TaskResultsService; +import java.util.ArrayList; import java.util.Collection; import java.util.HashMap; +import java.util.LinkedHashMap; import java.util.List; import java.util.Map; import java.util.Objects; @@ -89,9 +99,6 @@ public class ClusterModule extends AbstractModule { final Collection allocationDeciders; final ShardsAllocator shardsAllocator; - // pkg private so tests can mock - Class clusterInfoServiceImpl = InternalClusterInfoService.class; - public ClusterModule(Settings settings, ClusterService clusterService, List clusterPlugins) { this.settings = settings; this.allocationDeciders = createAllocationDeciders(settings, clusterService.getClusterSettings(), clusterPlugins); @@ -100,6 +107,52 @@ public ClusterModule(Settings settings, ClusterService clusterService, List getNamedWriteables() { + List entries = new ArrayList<>(); + // Cluster State + registerClusterCustom(entries, SnapshotsInProgress.TYPE, SnapshotsInProgress::new, SnapshotsInProgress::readDiffFrom); + registerClusterCustom(entries, RestoreInProgress.TYPE, RestoreInProgress::new, RestoreInProgress::readDiffFrom); + registerClusterCustom(entries, SnapshotDeletionsInProgress.TYPE, SnapshotDeletionsInProgress::new, + SnapshotDeletionsInProgress::readDiffFrom); + // Metadata + registerMetaDataCustom(entries, RepositoriesMetaData.TYPE, RepositoriesMetaData::new, RepositoriesMetaData::readDiffFrom); + registerMetaDataCustom(entries, IngestMetadata.TYPE, IngestMetadata::new, IngestMetadata::readDiffFrom); + registerMetaDataCustom(entries, ScriptMetaData.TYPE, ScriptMetaData::new, ScriptMetaData::readDiffFrom); + registerMetaDataCustom(entries, IndexGraveyard.TYPE, IndexGraveyard::new, IndexGraveyard::readDiffFrom); + return entries; + } + + public static List getNamedXWriteables() { + List entries = new ArrayList<>(); + // Metadata + entries.add(new NamedXContentRegistry.Entry(MetaData.Custom.class, new ParseField(RepositoriesMetaData.TYPE), + RepositoriesMetaData::fromXContent)); + entries.add(new NamedXContentRegistry.Entry(MetaData.Custom.class, new ParseField(IngestMetadata.TYPE), + IngestMetadata::fromXContent)); + entries.add(new NamedXContentRegistry.Entry(MetaData.Custom.class, new ParseField(ScriptMetaData.TYPE), + ScriptMetaData::fromXContent)); + entries.add(new NamedXContentRegistry.Entry(MetaData.Custom.class, new ParseField(IndexGraveyard.TYPE), + IndexGraveyard::fromXContent)); + return entries; + } + + private static void registerClusterCustom(List entries, String name, Reader reader, + Reader diffReader) { + registerCustom(entries, ClusterState.Custom.class, name, reader, diffReader); + } + + private static void registerMetaDataCustom(List entries, String name, Reader reader, + Reader diffReader) { + registerCustom(entries, MetaData.Custom.class, name, reader, diffReader); + } + + private static void registerCustom(List entries, Class category, String name, + Reader reader, Reader diffReader) { + entries.add(new Entry(category, name, reader)); + entries.add(new Entry(NamedDiff.class, name, diffReader)); + } + public IndexNameExpressionResolver 
getIndexNameExpressionResolver() { return indexNameExpressionResolver; } @@ -109,21 +162,21 @@ public IndexNameExpressionResolver getIndexNameExpressionResolver() { public static Collection createAllocationDeciders(Settings settings, ClusterSettings clusterSettings, List clusterPlugins) { // collect deciders by class so that we can detect duplicates - Map deciders = new HashMap<>(); + Map deciders = new LinkedHashMap<>(); addAllocationDecider(deciders, new MaxRetryAllocationDecider(settings)); - addAllocationDecider(deciders, new SameShardAllocationDecider(settings)); - addAllocationDecider(deciders, new FilterAllocationDecider(settings, clusterSettings)); addAllocationDecider(deciders, new ReplicaAfterPrimaryActiveAllocationDecider(settings)); - addAllocationDecider(deciders, new ThrottlingAllocationDecider(settings, clusterSettings)); addAllocationDecider(deciders, new RebalanceOnlyWhenActiveAllocationDecider(settings)); addAllocationDecider(deciders, new ClusterRebalanceAllocationDecider(settings, clusterSettings)); addAllocationDecider(deciders, new ConcurrentRebalanceAllocationDecider(settings, clusterSettings)); addAllocationDecider(deciders, new EnableAllocationDecider(settings, clusterSettings)); - addAllocationDecider(deciders, new AwarenessAllocationDecider(settings, clusterSettings)); - addAllocationDecider(deciders, new ShardsLimitAllocationDecider(settings, clusterSettings)); addAllocationDecider(deciders, new NodeVersionAllocationDecider(settings)); - addAllocationDecider(deciders, new DiskThresholdDecider(settings, clusterSettings)); addAllocationDecider(deciders, new SnapshotInProgressAllocationDecider(settings, clusterSettings)); + addAllocationDecider(deciders, new FilterAllocationDecider(settings, clusterSettings)); + addAllocationDecider(deciders, new SameShardAllocationDecider(settings, clusterSettings)); + addAllocationDecider(deciders, new DiskThresholdDecider(settings, clusterSettings)); + addAllocationDecider(deciders, new ThrottlingAllocationDecider(settings, clusterSettings)); + addAllocationDecider(deciders, new ShardsLimitAllocationDecider(settings, clusterSettings)); + addAllocationDecider(deciders, new AwarenessAllocationDecider(settings, clusterSettings)); clusterPlugins.stream() .flatMap(p -> p.createAllocationDeciders(settings, clusterSettings).stream()) @@ -162,7 +215,6 @@ private static ShardsAllocator createShardsAllocator(Settings settings, ClusterS @Override protected void configure() { - bind(ClusterInfoService.class).to(clusterInfoServiceImpl).asEagerSingleton(); bind(GatewayAllocator.class).asEagerSingleton(); bind(AllocationService.class).asEagerSingleton(); bind(ClusterService.class).toInstance(clusterService); diff --git a/core/src/main/java/org/elasticsearch/cluster/ClusterState.java b/core/src/main/java/org/elasticsearch/cluster/ClusterState.java index 202fc0f7bad1f..f0fd06c2d28c9 100644 --- a/core/src/main/java/org/elasticsearch/cluster/ClusterState.java +++ b/core/src/main/java/org/elasticsearch/cluster/ClusterState.java @@ -22,7 +22,6 @@ import com.carrotsearch.hppc.cursors.IntObjectCursor; import com.carrotsearch.hppc.cursors.ObjectCursor; import com.carrotsearch.hppc.cursors.ObjectObjectCursor; - import org.elasticsearch.cluster.block.ClusterBlock; import org.elasticsearch.cluster.block.ClusterBlocks; import org.elasticsearch.cluster.metadata.IndexMetaData; @@ -37,25 +36,24 @@ import org.elasticsearch.cluster.routing.RoutingNodes; import org.elasticsearch.cluster.routing.RoutingTable; import org.elasticsearch.cluster.routing.ShardRouting; 
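A few lines above, the allocation-decider map changes from `HashMap` to `LinkedHashMap`: duplicate detection by class is kept, while iteration now follows registration order rather than hash order. A quick, self-contained illustration of that difference (decider names here are placeholders):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

class DeciderOrderSketch {
    public static void main(String[] args) {
        Map<String, Integer> hashed = new HashMap<>();
        Map<String, Integer> linked = new LinkedHashMap<>();
        for (String name : new String[] {"max_retry", "replica_after_primary", "rebalance", "filter"}) {
            hashed.put(name, name.length());
            linked.put(name, name.length());
        }
        // HashMap iteration order depends on hashing; LinkedHashMap preserves insertion order,
        // so registered entries are visited in the order they were added.
        System.out.println("HashMap:       " + hashed.keySet());
        System.out.println("LinkedHashMap: " + linked.keySet());
    }
}
```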
-import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Strings; import org.elasticsearch.common.UUIDs; +import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.compress.CompressedXContent; import org.elasticsearch.common.io.stream.BytesStreamOutput; +import org.elasticsearch.common.io.stream.NamedWriteableAwareStreamInput; +import org.elasticsearch.common.io.stream.NamedWriteableRegistry; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentFactory; -import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.discovery.Discovery; -import org.elasticsearch.discovery.local.LocalDiscovery; -import org.elasticsearch.discovery.zen.publish.PublishClusterStateAction; +import org.elasticsearch.discovery.zen.PublishClusterStateAction; import java.io.IOException; import java.util.EnumSet; @@ -67,83 +65,34 @@ /** * Represents the current state of the cluster. *

 - * The cluster state object is immutable with an
 - * exception of the {@link RoutingNodes} structure, which is built on demand from the {@link RoutingTable},
 - * and cluster state {@link #status}, which is updated during cluster state publishing and applying
 - * processing. The cluster state can be updated only on the master node. All updates are performed by on a
 + * The cluster state object is immutable with the exception of the {@link RoutingNodes} structure, which is
 + * built on demand from the {@link RoutingTable}.
 + * The cluster state can be updated only on the master node. All updates are performed by on a
   * single thread and controlled by the {@link ClusterService}. After every update the
 - * {@link Discovery#publish} method publishes new version of the cluster state to all other nodes in the
 + * {@link Discovery#publish} method publishes a new version of the cluster state to all other nodes in the
   * cluster. The actual publishing mechanism is delegated to the {@link Discovery#publish} method and depends on
 - * the type of discovery. For example, for local discovery it is implemented by the {@link LocalDiscovery#publish}
 - * method. In the Zen Discovery it is handled in the {@link PublishClusterStateAction#publish} method. The
 + * the type of discovery. In the Zen Discovery it is handled in the {@link PublishClusterStateAction#publish} method. The
   * publishing mechanism can be overridden by other discovery.
   * <p>

    * The cluster state implements the {@link Diffable} interface in order to support publishing of cluster state * differences instead of the entire state on each change. The publishing mechanism should only send differences - * to a node if this node was present in the previous version of the cluster state. If a node is not present was - * not present in the previous version of the cluster state, such node is unlikely to have the previous cluster - * state version and should be sent a complete version. In order to make sure that the differences are applied to + * to a node if this node was present in the previous version of the cluster state. If a node was + * not present in the previous version of the cluster state, this node is unlikely to have the previous cluster + * state version and should be sent a complete version. In order to make sure that the differences are applied to the * correct version of the cluster state, each cluster state version update generates {@link #stateUUID} that uniquely * identifies this version of the state. This uuid is verified by the {@link ClusterStateDiff#apply} method to - * makes sure that the correct diffs are applied. If uuids don’t match, the {@link ClusterStateDiff#apply} method - * throws the {@link IncompatibleClusterStateVersionException}, which should cause the publishing mechanism to send + * make sure that the correct diffs are applied. If uuids don’t match, the {@link ClusterStateDiff#apply} method + * throws the {@link IncompatibleClusterStateVersionException}, which causes the publishing mechanism to send * a full version of the cluster state to the node on which this exception was thrown. */ public class ClusterState implements ToXContent, Diffable { - public static final ClusterState PROTO = builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)).build(); - - public static enum ClusterStateStatus { - UNKNOWN((byte) 0), - RECEIVED((byte) 1), - BEING_APPLIED((byte) 2), - APPLIED((byte) 3); - - private final byte id; - - ClusterStateStatus(byte id) { - this.id = id; - } + public static final ClusterState EMPTY_STATE = builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)).build(); - public byte id() { - return this.id; - } + public interface Custom extends NamedDiffable, ToXContent { } - public interface Custom extends Diffable, ToXContent { - - String type(); - } - - private static final Map customPrototypes = new HashMap<>(); - - /** - * Register a custom index meta data factory. Make sure to call it from a static block. 
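The rewritten class Javadoc above describes diff-based publishing: a receiver only applies a diff if it still holds the state version (identified by `stateUUID`) the diff was built against; otherwise the sender falls back to shipping the full state. A self-contained sketch of that UUID check, with simplified stand-in types rather than the real `ClusterStateDiff#apply` logic:

```java
// Simplified stand-ins: a "state" is just a uuid plus a payload.
final class StateSketch {
    final String uuid;
    final String payload;
    StateSketch(String uuid, String payload) { this.uuid = uuid; this.payload = payload; }
}

final class DiffSketch {
    final String fromUuid;   // uuid of the state this diff was computed against
    final String toUuid;     // uuid of the resulting state
    final String newPayload;

    DiffSketch(String fromUuid, String toUuid, String newPayload) {
        this.fromUuid = fromUuid; this.toUuid = toUuid; this.newPayload = newPayload;
    }

    StateSketch apply(StateSketch current) {
        if (!current.uuid.equals(fromUuid)) {
            // plays the role of IncompatibleClusterStateVersionException:
            // the sender should fall back to sending a full state
            throw new IllegalStateException("diff expects state " + fromUuid + " but node has " + current.uuid);
        }
        return new StateSketch(toUuid, newPayload);
    }
}
```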
- */ - public static void registerPrototype(String type, Custom proto) { - customPrototypes.put(type, proto); - } - - static { - // register non plugin custom parts - registerPrototype(SnapshotsInProgress.TYPE, SnapshotsInProgress.PROTO); - registerPrototype(RestoreInProgress.TYPE, RestoreInProgress.PROTO); - } - - @Nullable - public static T lookupPrototype(String type) { - //noinspection unchecked - return (T) customPrototypes.get(type); - } - - public static T lookupPrototypeSafe(String type) { - @SuppressWarnings("unchecked") - T proto = (T) customPrototypes.get(type); - if (proto == null) { - throw new IllegalArgumentException("No custom state prototype registered for type [" + type + "], node likely missing plugins"); - } - return proto; - } + private static final NamedDiffableValueSerializer CUSTOM_VALUE_SERIALIZER = new NamedDiffableValueSerializer<>(Custom.class); public static final String UNKNOWN_UUID = "_na_"; @@ -170,13 +119,13 @@ public static T lookupPrototypeSafe(String type) { // built on demand private volatile RoutingNodes routingNodes; - private volatile ClusterStateStatus status; - public ClusterState(long version, String stateUUID, ClusterState state) { - this(state.clusterName, version, stateUUID, state.metaData(), state.routingTable(), state.nodes(), state.blocks(), state.customs(), false); + this(state.clusterName, version, stateUUID, state.metaData(), state.routingTable(), state.nodes(), state.blocks(), state.customs(), + false); } - public ClusterState(ClusterName clusterName, long version, String stateUUID, MetaData metaData, RoutingTable routingTable, DiscoveryNodes nodes, ClusterBlocks blocks, ImmutableOpenMap customs, boolean wasReadFromDiff) { + public ClusterState(ClusterName clusterName, long version, String stateUUID, MetaData metaData, RoutingTable routingTable, + DiscoveryNodes nodes, ClusterBlocks blocks, ImmutableOpenMap customs, boolean wasReadFromDiff) { this.version = version; this.stateUUID = stateUUID; this.clusterName = clusterName; @@ -185,19 +134,9 @@ public ClusterState(ClusterName clusterName, long version, String stateUUID, Met this.nodes = nodes; this.blocks = blocks; this.customs = customs; - this.status = ClusterStateStatus.UNKNOWN; this.wasReadFromDiff = wasReadFromDiff; } - public ClusterStateStatus status() { - return status; - } - - public ClusterState status(ClusterStateStatus newStatus) { - this.status = newStatus; - return this; - } - public long version() { return this.version; } @@ -278,50 +217,48 @@ public RoutingNodes getRoutingNodes() { return routingNodes; } - public String prettyPrint() { + @Override + public String toString() { StringBuilder sb = new StringBuilder(); sb.append("cluster uuid: ").append(metaData.clusterUUID()).append("\n"); sb.append("version: ").append(version).append("\n"); sb.append("state uuid: ").append(stateUUID).append("\n"); sb.append("from_diff: ").append(wasReadFromDiff).append("\n"); sb.append("meta data version: ").append(metaData.version()).append("\n"); + final String TAB = " "; for (IndexMetaData indexMetaData : metaData) { - final String TAB = " "; sb.append(TAB).append(indexMetaData.getIndex()); sb.append(": v[").append(indexMetaData.getVersion()).append("]\n"); for (int shard = 0; shard < indexMetaData.getNumberOfShards(); shard++) { sb.append(TAB).append(TAB).append(shard).append(": "); sb.append("p_term [").append(indexMetaData.primaryTerm(shard)).append("], "); - sb.append("a_ids ").append(indexMetaData.inSyncAllocationIds(shard)).append("\n"); + sb.append("isa_ids 
").append(indexMetaData.inSyncAllocationIds(shard)).append("\n"); } } - sb.append(blocks().prettyPrint()); - sb.append(nodes().prettyPrint()); - sb.append(routingTable().prettyPrint()); - sb.append(getRoutingNodes().prettyPrint()); - return sb.toString(); - } - - @Override - public String toString() { - try { - XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint(); - builder.startObject(); - toXContent(builder, EMPTY_PARAMS); - builder.endObject(); - return builder.string(); - } catch (IOException e) { - return "{ \"error\" : \"" + e.getMessage() + "\"}"; + sb.append(blocks()); + sb.append(nodes()); + sb.append(routingTable()); + sb.append(getRoutingNodes()); + if (customs.isEmpty() == false) { + sb.append("customs:\n"); + for (ObjectObjectCursor cursor : customs) { + final String type = cursor.key; + final Custom custom = cursor.value; + sb.append(TAB).append(type).append(": ").append(custom); + } } + return sb.toString(); } /** - * a cluster state supersedes another state iff they are from the same master and the version this state is higher thant the other state. + * a cluster state supersedes another state if they are from the same master and the version of this state is higher than that of the + * other state. *
    * In essence that means that all the changes from the other cluster state are also reflected by the current one */ public boolean supersedes(ClusterState other) { - return this.nodes().getMasterNodeId() != null && this.nodes().getMasterNodeId().equals(other.nodes().getMasterNodeId()) && this.version() > other.version(); + return this.nodes().getMasterNodeId() != null && this.nodes().getMasterNodeId().equals(other.nodes().getMasterNodeId()) + && this.version() > other.version(); } @@ -346,7 +283,7 @@ public enum Metric { private final String value; - private Metric(String value) { + Metric(String value) { this.value = value; } @@ -443,11 +380,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.startObject("mappings"); for (ObjectObjectCursor cursor1 : templateMetaData.mappings()) { - byte[] mappingSource = cursor1.value.uncompressed(); - Map mapping; - try (XContentParser parser = XContentFactory.xContent(mappingSource).createParser(mappingSource)) { - mapping = parser.map(); - } + Map mapping = XContentHelper.convertToMap(new BytesArray(cursor1.value.uncompressed()), false).v2(); if (mapping.size() == 1 && mapping.containsKey(cursor1.key)) { // the type name is the root value, reduce it mapping = (Map) mapping.get(cursor1.key); @@ -475,11 +408,8 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.startObject("mappings"); for (ObjectObjectCursor cursor : indexMetaData.getMappings()) { - byte[] mappingSource = cursor.value.source().uncompressed(); - Map mapping; - try (XContentParser parser = XContentFactory.xContent(mappingSource).createParser(mappingSource)) { - mapping = parser.map(); - } + Map mapping = XContentHelper + .convertToMap(new BytesArray(cursor.value.source().uncompressed()), false).v2(); if (mapping.size() == 1 && mapping.containsKey(cursor.key)) { // the type name is the root value, reduce it mapping = (Map) mapping.get(cursor.key); @@ -629,12 +559,6 @@ public DiscoveryNodes nodes() { return nodes; } - public Builder routingResult(RoutingAllocation.Result routingResult) { - this.routingTable = routingResult.routingTable(); - this.metaData = routingResult.metaData(); - return this; - } - public Builder routingTable(RoutingTable routingTable) { this.routingTable = routingTable; return this; @@ -674,10 +598,6 @@ public Builder stateUUID(String uuid) { return this; } - public Custom getCustom(String type) { - return customs.get(type); - } - public Builder putCustom(String type, Custom custom) { customs.put(type, custom); return this; @@ -715,53 +635,39 @@ public static byte[] toBytes(ClusterState state) throws IOException { * @param data input bytes * @param localNode used to set the local node in the cluster state. */ - public static ClusterState fromBytes(byte[] data, DiscoveryNode localNode) throws IOException { - return readFrom(StreamInput.wrap(data), localNode); + public static ClusterState fromBytes(byte[] data, DiscoveryNode localNode, NamedWriteableRegistry registry) throws IOException { + StreamInput in = new NamedWriteableAwareStreamInput(StreamInput.wrap(data), registry); + return readFrom(in, localNode); } - - /** - * @param in input stream - * @param localNode used to set the local node in the cluster state. can be null. 
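Because custom sections are now resolved through `readNamedWriteable` rather than static prototypes, deserializing a state requires a `NamedWriteableRegistry`. A minimal hedged sketch of the round trip using the `Builder.toBytes`/`fromBytes` signatures from this hunk, assuming the registry has already been populated with every `Custom` the state may contain (normally wired up at node startup):

```java
import java.io.IOException;

import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.io.stream.NamedWriteableRegistry;

/** Hedged sketch: serialize a cluster state and read it back through the registry-aware API. */
class ClusterStateRoundTripSketch {
    ClusterState roundTrip(ClusterState state, DiscoveryNode localNode, NamedWriteableRegistry registry) throws IOException {
        byte[] bytes = ClusterState.Builder.toBytes(state);
        // fromBytes wraps the stream in a NamedWriteableAwareStreamInput so that
        // readNamedWriteable can resolve each custom section by its writeable name.
        return ClusterState.Builder.fromBytes(bytes, localNode, registry);
    }
}
```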
- */ - public static ClusterState readFrom(StreamInput in, @Nullable DiscoveryNode localNode) throws IOException { - return PROTO.readFrom(in, localNode); - } } @Override - public Diff diff(ClusterState previousState) { + public Diff diff(ClusterState previousState) { return new ClusterStateDiff(previousState, this); } - @Override - public Diff readDiffFrom(StreamInput in) throws IOException { - return new ClusterStateDiff(in, this); + public static Diff readDiffFrom(StreamInput in, DiscoveryNode localNode) throws IOException { + return new ClusterStateDiff(in, localNode); } - public ClusterState readFrom(StreamInput in, DiscoveryNode localNode) throws IOException { + public static ClusterState readFrom(StreamInput in, DiscoveryNode localNode) throws IOException { ClusterName clusterName = new ClusterName(in); Builder builder = new Builder(clusterName); builder.version = in.readLong(); builder.uuid = in.readString(); - builder.metaData = MetaData.Builder.readFrom(in); - builder.routingTable = RoutingTable.Builder.readFrom(in); - builder.nodes = DiscoveryNodes.Builder.readFrom(in, localNode); - builder.blocks = ClusterBlocks.Builder.readClusterBlocks(in); + builder.metaData = MetaData.readFrom(in); + builder.routingTable = RoutingTable.readFrom(in); + builder.nodes = DiscoveryNodes.readFrom(in, localNode); + builder.blocks = new ClusterBlocks(in); int customSize = in.readVInt(); for (int i = 0; i < customSize; i++) { - String type = in.readString(); - Custom customIndexMetaData = lookupPrototypeSafe(type).readFrom(in); - builder.putCustom(type, customIndexMetaData); + Custom customIndexMetaData = in.readNamedWriteable(Custom.class); + builder.putCustom(customIndexMetaData.getWriteableName(), customIndexMetaData); } return builder.build(); } - @Override - public ClusterState readFrom(StreamInput in) throws IOException { - return readFrom(in, nodes.getLocalNode()); - } - @Override public void writeTo(StreamOutput out) throws IOException { clusterName.writeTo(out); @@ -771,10 +677,18 @@ public void writeTo(StreamOutput out) throws IOException { routingTable.writeTo(out); nodes.writeTo(out); blocks.writeTo(out); - out.writeVInt(customs.size()); - for (ObjectObjectCursor cursor : customs) { - out.writeString(cursor.key); - cursor.value.writeTo(out); + // filter out custom states not supported by the other node + int numberOfCustoms = 0; + for (ObjectCursor cursor : customs.values()) { + if (out.getVersion().onOrAfter(cursor.value.getMinimalSupportedVersion())) { + numberOfCustoms++; + } + } + out.writeVInt(numberOfCustoms); + for (ObjectCursor cursor : customs.values()) { + if (out.getVersion().onOrAfter(cursor.value.getMinimalSupportedVersion())) { + out.writeNamedWriteable(cursor.value); + } } } @@ -798,7 +712,7 @@ private static class ClusterStateDiff implements Diff { private final Diff> customs; - public ClusterStateDiff(ClusterState before, ClusterState after) { + ClusterStateDiff(ClusterState before, ClusterState after) { fromUuid = before.stateUUID; toUuid = after.stateUUID; toVersion = after.version; @@ -807,30 +721,19 @@ public ClusterStateDiff(ClusterState before, ClusterState after) { nodes = after.nodes.diff(before.nodes); metaData = after.metaData.diff(before.metaData); blocks = after.blocks.diff(before.blocks); - customs = DiffableUtils.diff(before.customs, after.customs, DiffableUtils.getStringKeySerializer()); + customs = DiffableUtils.diff(before.customs, after.customs, DiffableUtils.getStringKeySerializer(), CUSTOM_VALUE_SERIALIZER); } - public ClusterStateDiff(StreamInput in, 
ClusterState proto) throws IOException { + ClusterStateDiff(StreamInput in, DiscoveryNode localNode) throws IOException { clusterName = new ClusterName(in); fromUuid = in.readString(); toUuid = in.readString(); toVersion = in.readLong(); - routingTable = proto.routingTable.readDiffFrom(in); - nodes = proto.nodes.readDiffFrom(in); - metaData = proto.metaData.readDiffFrom(in); - blocks = proto.blocks.readDiffFrom(in); - customs = DiffableUtils.readImmutableOpenMapDiff(in, DiffableUtils.getStringKeySerializer(), - new DiffableUtils.DiffableValueSerializer() { - @Override - public Custom read(StreamInput in, String key) throws IOException { - return lookupPrototypeSafe(key).readFrom(in); - } - - @Override - public Diff readDiff(StreamInput in, String key) throws IOException { - return lookupPrototypeSafe(key).readDiffFrom(in); - } - }); + routingTable = RoutingTable.readDiffFrom(in); + nodes = DiscoveryNodes.readDiffFrom(in, localNode); + metaData = MetaData.readDiffFrom(in); + blocks = ClusterBlocks.readDiffFrom(in); + customs = DiffableUtils.readImmutableOpenMapDiff(in, DiffableUtils.getStringKeySerializer(), CUSTOM_VALUE_SERIALIZER); } @Override diff --git a/core/src/main/java/org/elasticsearch/cluster/ClusterStateApplier.java b/core/src/main/java/org/elasticsearch/cluster/ClusterStateApplier.java new file mode 100644 index 0000000000000..c339a8ed97e7a --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/ClusterStateApplier.java @@ -0,0 +1,34 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.cluster; + +import org.elasticsearch.cluster.service.ClusterService; + +/** + * A component that is in charge of applying an incoming cluster state to the node internal data structures. + * The single apply method is called before the cluster state becomes visible via {@link ClusterService#state()}. 
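The new `ClusterStateApplier` above is the hook for node-local bookkeeping that must be in place before the state becomes visible. A hedged sketch of an implementation follows; the component name and the work it does are hypothetical, and how it gets registered with the cluster service is outside this hunk.

```java
import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateApplier;

/** Hedged sketch: update node-internal structures before the state is exposed via ClusterService#state(). */
class LocalCacheApplier implements ClusterStateApplier {

    @Override
    public void applyClusterState(ClusterChangedEvent event) {
        if (event.metaDataChanged()) {          // only rebuild when something relevant changed
            rebuild(event.state());
        }
    }

    private void rebuild(ClusterState state) {
        // node-local data structures derived from the new state go here
    }
}
```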
+ */ +public interface ClusterStateApplier { + + /** + * Called when a new cluster state ({@link ClusterChangedEvent#state()} needs to be applied + */ + void applyClusterState(ClusterChangedEvent event); +} diff --git a/core/src/main/java/org/elasticsearch/cluster/ClusterStateObserver.java b/core/src/main/java/org/elasticsearch/cluster/ClusterStateObserver.java index e18ec5543d944..fd4783c9bd8b6 100644 --- a/core/src/main/java/org/elasticsearch/cluster/ClusterStateObserver.java +++ b/core/src/main/java/org/elasticsearch/cluster/ClusterStateObserver.java @@ -26,7 +26,10 @@ import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.concurrent.ThreadContext; +import java.util.Objects; import java.util.concurrent.atomic.AtomicReference; +import java.util.function.Predicate; +import java.util.function.Supplier; /** * A utility class which simplifies interacting with the cluster state in cases where @@ -37,20 +40,14 @@ public class ClusterStateObserver { protected final Logger logger; - public final ChangePredicate MATCH_ALL_CHANGES_PREDICATE = new EventPredicate() { - - @Override - public boolean apply(ClusterChangedEvent changedEvent) { - return changedEvent.previousState().version() != changedEvent.state().version(); - } - }; + private final Predicate MATCH_ALL_CHANGES_PREDICATE = state -> true; private final ClusterService clusterService; private final ThreadContext contextHolder; volatile TimeValue timeOutValue; - final AtomicReference lastObservedState; + final AtomicReference lastObservedState; final TimeoutClusterStateListener clusterStateListener = new ObserverClusterStateListener(); // observingContext is not null when waiting on cluster state changes final AtomicReference observingContext = new AtomicReference<>(null); @@ -68,8 +65,17 @@ public ClusterStateObserver(ClusterService clusterService, Logger logger, Thread * to wait indefinitely */ public ClusterStateObserver(ClusterService clusterService, @Nullable TimeValue timeout, Logger logger, ThreadContext contextHolder) { + this(clusterService.state(), clusterService, timeout, logger, contextHolder); + } + /** + * @param timeout a global timeout for this observer. After it has expired the observer + * will fail any existing or new #waitForNextChange calls. Set to null + * to wait indefinitely + */ + public ClusterStateObserver(ClusterState initialState, ClusterService clusterService, @Nullable TimeValue timeout, Logger logger, + ThreadContext contextHolder) { this.clusterService = clusterService; - this.lastObservedState = new AtomicReference<>(new ObservedState(clusterService.state())); + this.lastObservedState = new AtomicReference<>(new StoredState(initialState)); this.timeOutValue = timeout; if (timeOutValue != null) { this.startTimeNS = System.nanoTime(); @@ -78,11 +84,14 @@ public ClusterStateObserver(ClusterService clusterService, @Nullable TimeValue t this.contextHolder = contextHolder; } - /** last cluster state observer by this observer. 
Note that this may not be the current one */ - public ClusterState observedState() { - ObservedState state = lastObservedState.get(); - assert state != null; - return state.clusterState; + /** sets the last observed state to the currently applied cluster state and returns it */ + public ClusterState setAndGetObservedState() { + if (observingContext.get() != null) { + throw new ElasticsearchException("cannot set current cluster state while waiting for a cluster state change"); + } + ClusterState clusterState = clusterService.state(); + lastObservedState.set(new StoredState(clusterState)); + return clusterState; } /** indicates whether this observer has timedout */ @@ -98,19 +107,19 @@ public void waitForNextChange(Listener listener, @Nullable TimeValue timeOutValu waitForNextChange(listener, MATCH_ALL_CHANGES_PREDICATE, timeOutValue); } - public void waitForNextChange(Listener listener, ChangePredicate changePredicate) { - waitForNextChange(listener, changePredicate, null); + public void waitForNextChange(Listener listener, Predicate statePredicate) { + waitForNextChange(listener, statePredicate, null); } /** - * Wait for the next cluster state which satisfies changePredicate + * Wait for the next cluster state which satisfies statePredicate * * @param listener callback listener - * @param changePredicate predicate to check whether cluster state changes are relevant and the callback should be called + * @param statePredicate predicate to check whether cluster state changes are relevant and the callback should be called * @param timeOutValue a timeout for waiting. If null the global observer timeout will be used. */ - public void waitForNextChange(Listener listener, ChangePredicate changePredicate, @Nullable TimeValue timeOutValue) { - + public void waitForNextChange(Listener listener, Predicate statePredicate, @Nullable TimeValue timeOutValue) { + listener = new ContextPreservingListener(listener, contextHolder.newRestorableContext(false)); if (observingContext.get() != null) { throw new ElasticsearchException("already waiting for a cluster state change"); } @@ -126,7 +135,7 @@ public void waitForNextChange(Listener listener, ChangePredicate changePredicate logger.trace("observer timed out. notifying listener. timeout setting [{}], time since start [{}]", timeOutValue, new TimeValue(timeSinceStartMS)); // update to latest, in case people want to retry timedOut = true; - lastObservedState.set(new ObservedState(clusterService.state())); + lastObservedState.set(new StoredState(clusterService.state())); listener.onTimeout(timeOutValue); return; } @@ -140,34 +149,24 @@ public void waitForNextChange(Listener listener, ChangePredicate changePredicate timedOut = false; } - // sample a new state - ObservedState newState = new ObservedState(clusterService.state()); - ObservedState lastState = lastObservedState.get(); - if (changePredicate.apply(lastState.clusterState, lastState.status, newState.clusterState, newState.status)) { + // sample a new state. This state maybe *older* than the supplied state if we are called from an applier, + // which wants to wait for something else to happen + ClusterState newState = clusterService.state(); + if (lastObservedState.get().isOlderOrDifferentMaster(newState) && statePredicate.test(newState)) { // good enough, let's go. 
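With `ChangePredicate` removed, callers now express their condition as a plain `Predicate<ClusterState>`. A hedged usage sketch of `waitForNextChange`: the index-existence condition and the 30 second timeout are example values, and `observer` is assumed to have been created with one of the constructors above.

```java
import java.util.function.Predicate;

import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateObserver;
import org.elasticsearch.common.unit.TimeValue;

/** Hedged sketch: block an operation until a hypothetical index shows up in the cluster state. */
class WaitForIndexSketch {

    void waitForIndex(ClusterStateObserver observer, String indexName) {
        Predicate<ClusterState> indexPresent = state -> state.metaData().hasIndex(indexName);
        observer.waitForNextChange(new ClusterStateObserver.Listener() {
            @Override
            public void onNewClusterState(ClusterState state) {
                // the predicate accepted this state; resume the operation
            }

            @Override
            public void onClusterServiceClose() {
                // node is shutting down; fail the operation
            }

            @Override
            public void onTimeout(TimeValue timeout) {
                // give up, or re-check observer.setAndGetObservedState() and retry
            }
        }, indexPresent, TimeValue.timeValueSeconds(30));
    }
}
```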
logger.trace("observer: sampled state accepted by predicate ({})", newState); - lastObservedState.set(newState); - listener.onNewClusterState(newState.clusterState); + lastObservedState.set(new StoredState(newState)); + listener.onNewClusterState(newState); } else { logger.trace("observer: sampled state rejected by predicate ({}). adding listener to ClusterService", newState); - ObservingContext context = new ObservingContext(new ContextPreservingListener(listener, contextHolder.newStoredContext()), changePredicate); + final ObservingContext context = new ObservingContext(listener, statePredicate); if (!observingContext.compareAndSet(null, context)) { throw new ElasticsearchException("already waiting for a cluster state change"); } - clusterService.add(timeoutTimeLeftMS == null ? null : new TimeValue(timeoutTimeLeftMS), clusterStateListener); + clusterService.addTimeoutListener(timeoutTimeLeftMS == null ? null : new TimeValue(timeoutTimeLeftMS), clusterStateListener); } } - /** - * reset this observer to the give cluster state. Any pending waits will be canceled. - */ - public void reset(ClusterState toState) { - if (observingContext.getAndSet(null) != null) { - clusterService.remove(clusterStateListener); - } - lastObservedState.set(new ObservedState(toState)); - } - class ObserverClusterStateListener implements TimeoutClusterStateListener { @Override @@ -177,18 +176,18 @@ public void clusterChanged(ClusterChangedEvent event) { // No need to remove listener as it is the responsibility of the thread that set observingContext to null return; } - if (context.changePredicate.apply(event)) { + final ClusterState state = event.state(); + if (context.statePredicate.test(state)) { if (observingContext.compareAndSet(context, null)) { - clusterService.remove(this); - ObservedState state = new ObservedState(event.state()); + clusterService.removeTimeoutListener(this); logger.trace("observer: accepting cluster state change ({})", state); - lastObservedState.set(state); - context.listener.onNewClusterState(state.clusterState); + lastObservedState.set(new StoredState(state)); + context.listener.onNewClusterState(state); } else { - logger.trace("observer: predicate approved change but observing context has changed - ignoring (new cluster state version [{}])", event.state().version()); + logger.trace("observer: predicate approved change but observing context has changed - ignoring (new cluster state version [{}])", state.version()); } } else { - logger.trace("observer: predicate rejected change (new cluster state version [{}])", event.state().version()); + logger.trace("observer: predicate rejected change (new cluster state version [{}])", state.version()); } } @@ -199,15 +198,14 @@ public void postAdded() { // No need to remove listener as it is the responsibility of the thread that set observingContext to null return; } - ObservedState newState = new ObservedState(clusterService.state()); - ObservedState lastState = lastObservedState.get(); - if (context.changePredicate.apply(lastState.clusterState, lastState.status, newState.clusterState, newState.status)) { + ClusterState newState = clusterService.state(); + if (lastObservedState.get().isOlderOrDifferentMaster(newState) && context.statePredicate.test(newState)) { // double check we're still listening if (observingContext.compareAndSet(context, null)) { logger.trace("observer: post adding listener: accepting current cluster state ({})", newState); - clusterService.remove(this); - lastObservedState.set(newState); - 
context.listener.onNewClusterState(newState.clusterState); + clusterService.removeTimeoutListener(this); + lastObservedState.set(new StoredState(newState)); + context.listener.onNewClusterState(newState); } else { logger.trace("observer: postAdded - predicate approved state but observing context has changed - ignoring ({})", newState); } @@ -222,7 +220,7 @@ public void onClose() { if (context != null) { logger.trace("observer: cluster service closed. notifying listener."); - clusterService.remove(this); + clusterService.removeTimeoutListener(this); context.listener.onClusterServiceClose(); } } @@ -231,17 +229,37 @@ public void onClose() { public void onTimeout(TimeValue timeout) { ObservingContext context = observingContext.getAndSet(null); if (context != null) { - clusterService.remove(this); + clusterService.removeTimeoutListener(this); long timeSinceStartMS = TimeValue.nsecToMSec(System.nanoTime() - startTimeNS); logger.trace("observer: timeout notification from cluster service. timeout setting [{}], time since start [{}]", timeOutValue, new TimeValue(timeSinceStartMS)); // update to latest, in case people want to retry - lastObservedState.set(new ObservedState(clusterService.state())); + lastObservedState.set(new StoredState(clusterService.state())); timedOut = true; context.listener.onTimeout(timeOutValue); } } } + /** + * The observer considers two cluster states to be the same if they have the same version and master node id (i.e. null or set) + */ + private static class StoredState { + private final String masterNodeId; + private final long version; + + StoredState(ClusterState clusterState) { + this.masterNodeId = clusterState.nodes().getMasterNodeId(); + this.version = clusterState.version(); + } + + /** + * returns true if stored state is older then given state or they are from a different master, meaning they can't be compared + * */ + public boolean isOlderOrDifferentMaster(ClusterState clusterState) { + return version < clusterState.version() || Objects.equals(masterNodeId, clusterState.nodes().getMasterNodeId()) == false; + } + } + public interface Listener { /** called when a new state is observed */ @@ -253,101 +271,45 @@ public interface Listener { void onTimeout(TimeValue timeout); } - public interface ChangePredicate { - - /** - * a rough check used when starting to monitor for a new change. Called infrequently can be less accurate. 
- * - * @return true if newState should be accepted - */ - boolean apply(ClusterState previousState, - ClusterState.ClusterStateStatus previousStatus, - ClusterState newState, - ClusterState.ClusterStateStatus newStatus); - - /** - * called to see whether a cluster change should be accepted - * - * @return true if changedEvent.state() should be accepted - */ - boolean apply(ClusterChangedEvent changedEvent); - } - - - public abstract static class ValidationPredicate implements ChangePredicate { - - @Override - public boolean apply(ClusterState previousState, ClusterState.ClusterStateStatus previousStatus, ClusterState newState, ClusterState.ClusterStateStatus newStatus) { - return (previousState != newState || previousStatus != newStatus) && validate(newState); - } - - protected abstract boolean validate(ClusterState newState); - - @Override - public boolean apply(ClusterChangedEvent changedEvent) { - return changedEvent.previousState().version() != changedEvent.state().version() && validate(changedEvent.state()); - } - } - - public abstract static class EventPredicate implements ChangePredicate { - @Override - public boolean apply(ClusterState previousState, ClusterState.ClusterStateStatus previousStatus, ClusterState newState, ClusterState.ClusterStateStatus newStatus) { - return previousState != newState || previousStatus != newStatus; - } - - } - static class ObservingContext { public final Listener listener; - public final ChangePredicate changePredicate; + public final Predicate statePredicate; - public ObservingContext(Listener listener, ChangePredicate changePredicate) { + ObservingContext(Listener listener, Predicate statePredicate) { this.listener = listener; - this.changePredicate = changePredicate; - } - } - - static class ObservedState { - public final ClusterState clusterState; - public final ClusterState.ClusterStateStatus status; - - public ObservedState(ClusterState clusterState) { - this.clusterState = clusterState; - this.status = clusterState.status(); - } - - @Override - public String toString() { - return "version [" + clusterState.version() + "], status [" + status + "]"; + this.statePredicate = statePredicate; } } private static final class ContextPreservingListener implements Listener { private final Listener delegate; - private final ThreadContext.StoredContext tempContext; + private final Supplier contextSupplier; - private ContextPreservingListener(Listener delegate, ThreadContext.StoredContext storedContext) { - this.tempContext = storedContext; + private ContextPreservingListener(Listener delegate, Supplier contextSupplier) { + this.contextSupplier = contextSupplier; this.delegate = delegate; } @Override public void onNewClusterState(ClusterState state) { - tempContext.restore(); - delegate.onNewClusterState(state); + try (ThreadContext.StoredContext context = contextSupplier.get()) { + delegate.onNewClusterState(state); + } } @Override public void onClusterServiceClose() { - tempContext.restore(); - delegate.onClusterServiceClose(); + try (ThreadContext.StoredContext context = contextSupplier.get()) { + delegate.onClusterServiceClose(); + } } @Override public void onTimeout(TimeValue timeout) { - tempContext.restore(); - delegate.onTimeout(timeout); + try (ThreadContext.StoredContext context = contextSupplier.get()) { + delegate.onTimeout(timeout); + } } } } diff --git a/core/src/main/java/org/elasticsearch/cluster/ClusterStateTaskExecutor.java b/core/src/main/java/org/elasticsearch/cluster/ClusterStateTaskExecutor.java index 9324e075e90d6..3693447cfb6d2 
100644 --- a/core/src/main/java/org/elasticsearch/cluster/ClusterStateTaskExecutor.java +++ b/core/src/main/java/org/elasticsearch/cluster/ClusterStateTaskExecutor.java @@ -18,20 +18,21 @@ */ package org.elasticsearch.cluster; +import org.elasticsearch.common.Nullable; + import java.util.IdentityHashMap; import java.util.List; import java.util.Map; -import java.util.function.Consumer; public interface ClusterStateTaskExecutor { /** * Update the cluster state based on the current state and the given tasks. Return the *same instance* if no state * should be changed. */ - BatchResult execute(ClusterState currentState, List tasks) throws Exception; + ClusterTasksResult execute(ClusterState currentState, List tasks) throws Exception; /** - * indicates whether this task should only run if current node is master + * indicates whether this executor should only run if the current node is master */ default boolean runOnlyOnMaster() { return true; @@ -69,18 +70,22 @@ default String describeTasks(List tasks) { * Represents the result of a batched execution of cluster state update tasks * @param the type of the cluster state update task */ - class BatchResult { + class ClusterTasksResult { + public final boolean noMaster; + @Nullable public final ClusterState resultingState; public final Map executionResults; /** * Construct an execution result instance with a correspondence between the tasks and their execution result + * @param noMaster whether this node steps down as master or has lost connection to the master * @param resultingState the resulting cluster state * @param executionResults the correspondence between tasks and their outcome */ - BatchResult(ClusterState resultingState, Map executionResults) { + ClusterTasksResult(boolean noMaster, ClusterState resultingState, Map executionResults) { this.resultingState = resultingState; this.executionResults = executionResults; + this.noMaster = noMaster; } public static Builder builder() { @@ -118,8 +123,13 @@ private Builder result(T task, TaskResult executionResult) { return this; } - public BatchResult build(ClusterState resultingState) { - return new BatchResult<>(resultingState, executionResults); + public ClusterTasksResult build(ClusterState resultingState) { + return new ClusterTasksResult<>(false, resultingState, executionResults); + } + + ClusterTasksResult build(ClusterTasksResult result, ClusterState previousState) { + return new ClusterTasksResult<>(result.noMaster, result.resultingState == null ? previousState : result.resultingState, + executionResults); } } } @@ -149,18 +159,5 @@ public Exception getFailure() { assert !isSuccess(); return failure; } - - /** - * Handle the execution result with the provided consumers - * @param onSuccess handler to invoke on success - * @param onFailure handler to invoke on failure; the throwable passed through will not be null - */ - public void handle(Runnable onSuccess, Consumer onFailure) { - if (failure == null) { - onSuccess.run(); - } else { - onFailure.accept(failure); - } - } } } diff --git a/core/src/main/java/org/elasticsearch/cluster/ClusterStateUpdateTask.java b/core/src/main/java/org/elasticsearch/cluster/ClusterStateUpdateTask.java index a679d09861679..b298e7e915dea 100644 --- a/core/src/main/java/org/elasticsearch/cluster/ClusterStateUpdateTask.java +++ b/core/src/main/java/org/elasticsearch/cluster/ClusterStateUpdateTask.java @@ -28,7 +28,7 @@ /** * A task that can update the cluster state. 
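A hedged sketch of what a task built on these pieces looks like: a `ClusterStateUpdateTask` returns the same instance when nothing changed and a rebuilt state otherwise, and its single-task batch is wrapped into a `ClusterTasksResult` by the final `execute(ClusterState, List)` shown below. The task name and the helpers computing the new metadata are hypothetical, and submitting the task to the cluster service is outside this hunk.

```java
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateUpdateTask;
import org.elasticsearch.cluster.metadata.MetaData;

/** Hedged sketch of a master-only state update task. */
class ExampleMetaDataUpdateTask extends ClusterStateUpdateTask {

    @Override
    public ClusterState execute(ClusterState currentState) {
        if (alreadyApplied(currentState)) {
            return currentState;                               // same instance => nothing to publish
        }
        return ClusterState.builder(currentState)
                .metaData(updatedMetaData(currentState))       // hypothetical helper building new metadata
                .build();
    }

    @Override
    public void onFailure(String source, Exception e) {
        // the task was rejected, or the resulting state could not be published
    }

    private boolean alreadyApplied(ClusterState state) {
        return false;                                          // placeholder check for the sketch
    }

    private MetaData updatedMetaData(ClusterState state) {
        return MetaData.builder(state.metaData()).build();     // placeholder: real code adds its changes here
    }
}
```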
*/ -public abstract class ClusterStateUpdateTask implements ClusterStateTaskConfig, ClusterStateTaskExecutor, ClusterStateTaskListener { +public abstract class ClusterStateUpdateTask implements ClusterStateTaskConfig, ClusterStateTaskExecutor, ClusterStateTaskListener { private final Priority priority; @@ -41,9 +41,9 @@ public ClusterStateUpdateTask(Priority priority) { } @Override - public final BatchResult execute(ClusterState currentState, List tasks) throws Exception { + public final ClusterTasksResult execute(ClusterState currentState, List tasks) throws Exception { ClusterState result = execute(currentState); - return BatchResult.builder().successes(tasks).build(result); + return ClusterTasksResult.builder().successes(tasks).build(result); } @Override @@ -75,4 +75,13 @@ public TimeValue timeout() { public Priority priority() { return priority; } + + /** + * Marked as final as cluster state update tasks should only run on master. + * For local requests, use {@link LocalClusterUpdateTask} instead. + */ + @Override + public final boolean runOnlyOnMaster() { + return true; + } } diff --git a/core/src/main/java/org/elasticsearch/cluster/Diffable.java b/core/src/main/java/org/elasticsearch/cluster/Diffable.java index b039f5e9b8bf7..57d5ea9ed1f75 100644 --- a/core/src/main/java/org/elasticsearch/cluster/Diffable.java +++ b/core/src/main/java/org/elasticsearch/cluster/Diffable.java @@ -34,13 +34,4 @@ public interface Diffable extends Writeable { */ Diff diff(T previousState); - /** - * Reads the {@link org.elasticsearch.cluster.Diff} from StreamInput - */ - Diff readDiffFrom(StreamInput in) throws IOException; - - /** - * Reads an object of this type from the provided {@linkplain StreamInput}. The receiving instance remains unchanged. - */ - T readFrom(StreamInput in) throws IOException; } diff --git a/core/src/main/java/org/elasticsearch/cluster/DiffableUtils.java b/core/src/main/java/org/elasticsearch/cluster/DiffableUtils.java index 1a3557890dd38..f7bb42b8dc368 100644 --- a/core/src/main/java/org/elasticsearch/cluster/DiffableUtils.java +++ b/core/src/main/java/org/elasticsearch/cluster/DiffableUtils.java @@ -23,10 +23,12 @@ import com.carrotsearch.hppc.cursors.IntObjectCursor; import com.carrotsearch.hppc.cursors.ObjectCursor; import com.carrotsearch.hppc.cursors.ObjectObjectCursor; +import org.elasticsearch.Version; import org.elasticsearch.common.collect.ImmutableOpenIntMap; import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable.Reader; import java.io.IOException; import java.util.ArrayList; @@ -74,7 +76,7 @@ public static > MapDiff> d /** * Calculates diff between two ImmutableOpenMaps of non-diffable objects */ - public static MapDiff> diff(ImmutableOpenMap before, ImmutableOpenMap after, KeySerializer keySerializer, NonDiffableValueSerializer valueSerializer) { + public static MapDiff> diff(ImmutableOpenMap before, ImmutableOpenMap after, KeySerializer keySerializer, ValueSerializer valueSerializer) { assert after != null && before != null; return new ImmutableOpenMapDiff<>(before, after, keySerializer, valueSerializer); } @@ -90,7 +92,7 @@ public static > MapDiff /** * Calculates diff between two ImmutableOpenIntMaps of non-diffable objects */ - public static MapDiff> diff(ImmutableOpenIntMap before, ImmutableOpenIntMap after, KeySerializer keySerializer, NonDiffableValueSerializer valueSerializer) { + public static 
MapDiff> diff(ImmutableOpenIntMap before, ImmutableOpenIntMap after, KeySerializer keySerializer, ValueSerializer valueSerializer) { assert after != null && before != null; return new ImmutableOpenIntMapDiff<>(before, after, keySerializer, valueSerializer); } @@ -106,7 +108,7 @@ public static > MapDiff> diff(Map /** * Calculates diff between two Maps of non-diffable objects */ - public static MapDiff> diff(Map before, Map after, KeySerializer keySerializer, NonDiffableValueSerializer valueSerializer) { + public static MapDiff> diff(Map before, Map after, KeySerializer keySerializer, ValueSerializer valueSerializer) { assert after != null && before != null; return new JdkMapDiff<>(before, after, keySerializer, valueSerializer); } @@ -135,22 +137,22 @@ public static MapDiff> readJdkMapDiff(StreamInput in, Key /** * Loads an object that represents difference between two ImmutableOpenMaps of Diffable objects using Diffable proto object */ - public static > MapDiff> readImmutableOpenMapDiff(StreamInput in, KeySerializer keySerializer, T proto) throws IOException { - return new ImmutableOpenMapDiff<>(in, keySerializer, new DiffablePrototypeValueReader<>(proto)); + public static > MapDiff> readImmutableOpenMapDiff(StreamInput in, KeySerializer keySerializer, Reader reader, Reader> diffReader) throws IOException { + return new ImmutableOpenMapDiff<>(in, keySerializer, new DiffableValueReader<>(reader, diffReader)); } /** * Loads an object that represents difference between two ImmutableOpenIntMaps of Diffable objects using Diffable proto object */ - public static > MapDiff> readImmutableOpenIntMapDiff(StreamInput in, KeySerializer keySerializer, T proto) throws IOException { - return new ImmutableOpenIntMapDiff<>(in, keySerializer, new DiffablePrototypeValueReader<>(proto)); + public static > MapDiff> readImmutableOpenIntMapDiff(StreamInput in, KeySerializer keySerializer, Reader reader, Reader> diffReader) throws IOException { + return new ImmutableOpenIntMapDiff<>(in, keySerializer, new DiffableValueReader<>(reader, diffReader)); } /** * Loads an object that represents difference between two Maps of Diffable objects using Diffable proto object */ - public static > MapDiff> readJdkMapDiff(StreamInput in, KeySerializer keySerializer, T proto) throws IOException { - return new JdkMapDiff<>(in, keySerializer, new DiffablePrototypeValueReader<>(proto)); + public static > MapDiff> readJdkMapDiff(StreamInput in, KeySerializer keySerializer, Reader reader, Reader> diffReader) throws IOException { + return new JdkMapDiff<>(in, keySerializer, new DiffableValueReader<>(reader, diffReader)); } /** @@ -164,7 +166,7 @@ protected JdkMapDiff(StreamInput in, KeySerializer keySerializer, ValueSerial super(in, keySerializer, valueSerializer); } - public JdkMapDiff(Map before, Map after, + JdkMapDiff(Map before, Map after, KeySerializer keySerializer, ValueSerializer valueSerializer) { super(keySerializer, valueSerializer); assert after != null && before != null; @@ -191,8 +193,7 @@ public JdkMapDiff(Map before, Map after, @Override public Map apply(Map map) { - Map builder = new HashMap<>(); - builder.putAll(map); + Map builder = new HashMap<>(map); for (K part : deletes) { builder.remove(part); @@ -214,12 +215,17 @@ public Map apply(Map map) { * * @param the object type */ - private static class ImmutableOpenMapDiff extends MapDiff> { + public static class ImmutableOpenMapDiff extends MapDiff> { protected ImmutableOpenMapDiff(StreamInput in, KeySerializer keySerializer, ValueSerializer valueSerializer) throws 
IOException { super(in, keySerializer, valueSerializer); } + private ImmutableOpenMapDiff(KeySerializer keySerializer, ValueSerializer valueSerializer, + List deletes, Map> diffs, Map upserts) { + super(keySerializer, valueSerializer, deletes, diffs, upserts); + } + public ImmutableOpenMapDiff(ImmutableOpenMap before, ImmutableOpenMap after, KeySerializer keySerializer, ValueSerializer valueSerializer) { super(keySerializer, valueSerializer); @@ -245,6 +251,21 @@ public ImmutableOpenMapDiff(ImmutableOpenMap before, ImmutableOpenMap withKeyRemoved(K key) { + if (this.diffs.containsKey(key) == false && this.upserts.containsKey(key) == false) { + return this; + } + Map> newDiffs = new HashMap<>(this.diffs); + newDiffs.remove(key); + Map newUpserts = new HashMap<>(this.upserts); + newUpserts.remove(key); + return new ImmutableOpenMapDiff<>(this.keySerializer, this.valueSerializer, this.deletes, newDiffs, newUpserts); + } + @Override public ImmutableOpenMap apply(ImmutableOpenMap map) { ImmutableOpenMap.Builder builder = ImmutableOpenMap.builder(); @@ -276,7 +297,7 @@ protected ImmutableOpenIntMapDiff(StreamInput in, KeySerializer keySeri super(in, keySerializer, valueSerializer); } - public ImmutableOpenIntMapDiff(ImmutableOpenIntMap before, ImmutableOpenIntMap after, + ImmutableOpenIntMapDiff(ImmutableOpenIntMap before, ImmutableOpenIntMap after, KeySerializer keySerializer, ValueSerializer valueSerializer) { super(keySerializer, valueSerializer); assert after != null && before != null; @@ -346,6 +367,15 @@ protected MapDiff(KeySerializer keySerializer, ValueSerializer valueSer upserts = new HashMap<>(); } + protected MapDiff(KeySerializer keySerializer, ValueSerializer valueSerializer, + List deletes, Map> diffs, Map upserts) { + this.keySerializer = keySerializer; + this.valueSerializer = valueSerializer; + this.deletes = deletes; + this.diffs = diffs; + this.upserts = upserts; + } + protected MapDiff(StreamInput in, KeySerializer keySerializer, ValueSerializer valueSerializer) throws IOException { this.keySerializer = keySerializer; this.valueSerializer = valueSerializer; @@ -406,12 +436,29 @@ public void writeTo(StreamOutput out) throws IOException { for (K delete : deletes) { keySerializer.writeKey(delete, out); } - out.writeVInt(diffs.size()); + Version version = out.getVersion(); + // filter out custom states not supported by the other node + int diffCount = 0; + for (Diff diff : diffs.values()) { + if(valueSerializer.supportsVersion(diff, version)) { + diffCount++; + } + } + out.writeVInt(diffCount); for (Map.Entry> entry : diffs.entrySet()) { - keySerializer.writeKey(entry.getKey(), out); - valueSerializer.writeDiff(entry.getValue(), out); + if(valueSerializer.supportsVersion(entry.getValue(), version)) { + keySerializer.writeKey(entry.getKey(), out); + valueSerializer.writeDiff(entry.getValue(), out); + } + } + // filter out custom states not supported by the other node + int upsertsCount = 0; + for (T upsert : upserts.values()) { + if(valueSerializer.supportsVersion(upsert, version)) { + upsertsCount++; + } } - out.writeVInt(upserts.size()); + out.writeVInt(upsertsCount); for (Map.Entry entry : upserts.entrySet()) { keySerializer.writeKey(entry.getKey(), out); valueSerializer.write(entry.getValue(), out); @@ -511,6 +558,20 @@ public interface ValueSerializer { */ boolean supportsDiffableValues(); + /** + * Whether this serializer supports the version of the output stream + */ + default boolean supportsVersion(Diff value, Version version) { + return true; + } + + /** + * Whether 
this serializer supports the version of the output stream + */ + default boolean supportsVersion(V value, Version version) { + return true; + } + /** * Computes diff if this serializer supports diffable values */ @@ -600,25 +661,27 @@ public Diff readDiff(StreamInput in, K key) throws IOException { } /** - * Implementation of the ValueSerializer that uses a prototype object for reading operations + * Implementation of the ValueSerializer that wraps value and diff readers. * * Note: this implementation is ignoring the key. */ - public static class DiffablePrototypeValueReader> extends DiffableValueSerializer { - private final V proto; + public static class DiffableValueReader> extends DiffableValueSerializer { + private final Reader reader; + private final Reader> diffReader; - public DiffablePrototypeValueReader(V proto) { - this.proto = proto; + public DiffableValueReader(Reader reader, Reader> diffReader) { + this.reader = reader; + this.diffReader = diffReader; } @Override public V read(StreamInput in, K key) throws IOException { - return proto.readFrom(in); + return reader.read(in); } @Override public Diff readDiff(StreamInput in, K key) throws IOException { - return proto.readDiffFrom(in); + return diffReader.read(in); } } diff --git a/core/src/main/java/org/elasticsearch/cluster/InternalClusterInfoService.java b/core/src/main/java/org/elasticsearch/cluster/InternalClusterInfoService.java index c2cdaf90bc440..b8ac2a5eb50c3 100644 --- a/core/src/main/java/org/elasticsearch/cluster/InternalClusterInfoService.java +++ b/core/src/main/java/org/elasticsearch/cluster/InternalClusterInfoService.java @@ -25,11 +25,10 @@ import org.elasticsearch.action.admin.cluster.node.stats.NodeStats; import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsRequest; import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsResponse; -import org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction; import org.elasticsearch.action.admin.indices.stats.IndicesStatsRequest; import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse; import org.elasticsearch.action.admin.indices.stats.ShardStats; -import org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction; +import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.cluster.block.ClusterBlockException; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.MetaData; @@ -39,7 +38,6 @@ import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.component.AbstractComponent; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; @@ -66,7 +64,8 @@ * Every time the timer runs, gathers information about the disk usage and * shard sizes across the cluster. 
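Stepping back from the serializer details above, the overall `DiffableUtils` flow is unchanged: compute a map diff, ship it, and apply it on the other side. What changed is that reading now takes explicit value and diff readers instead of a prototype. A hedged sketch, assuming `IndexMetaData` exposes static `readFrom`/`readDiffFrom` readers in the same style as `MetaData` and `RoutingTable` elsewhere in this change:

```java
import java.io.IOException;

import org.elasticsearch.cluster.DiffableUtils;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.common.collect.ImmutableOpenMap;
import org.elasticsearch.common.io.stream.StreamInput;

/** Hedged sketch of the reader-based map diff round trip. */
class MapDiffSketch {

    DiffableUtils.MapDiff<String, IndexMetaData, ImmutableOpenMap<String, IndexMetaData>> diffIndices(
            ImmutableOpenMap<String, IndexMetaData> before,
            ImmutableOpenMap<String, IndexMetaData> after) {
        // values are Diffable, so changed entries become per-key diffs rather than full copies
        return DiffableUtils.diff(before, after, DiffableUtils.getStringKeySerializer());
    }

    ImmutableOpenMap<String, IndexMetaData> readAndApply(StreamInput in,
            ImmutableOpenMap<String, IndexMetaData> before) throws IOException {
        // explicit readers replace the old prototype lookup
        return DiffableUtils.readImmutableOpenMapDiff(in, DiffableUtils.getStringKeySerializer(),
                IndexMetaData::readFrom, IndexMetaData::readDiffFrom).apply(before);
    }
}
```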
*/ -public class InternalClusterInfoService extends AbstractComponent implements ClusterInfoService, LocalNodeMasterListener, ClusterStateListener { +public class InternalClusterInfoService extends AbstractComponent + implements ClusterInfoService, LocalNodeMasterListener, ClusterStateListener { public static final Setting INTERNAL_CLUSTER_INFO_UPDATE_INTERVAL_SETTING = Setting.timeSetting("cluster.info.update.interval", TimeValue.timeValueSeconds(30), TimeValue.timeValueSeconds(10), @@ -84,37 +83,32 @@ public class InternalClusterInfoService extends AbstractComponent implements Clu private volatile boolean isMaster = false; private volatile boolean enabled; private volatile TimeValue fetchTimeout; - private final TransportNodesStatsAction transportNodesStatsAction; - private final TransportIndicesStatsAction transportIndicesStatsAction; private final ClusterService clusterService; private final ThreadPool threadPool; + private final NodeClient client; private final List listeners = new CopyOnWriteArrayList<>(); - @Inject - public InternalClusterInfoService(Settings settings, ClusterSettings clusterSettings, - TransportNodesStatsAction transportNodesStatsAction, - TransportIndicesStatsAction transportIndicesStatsAction, ClusterService clusterService, - ThreadPool threadPool) { + public InternalClusterInfoService(Settings settings, ClusterService clusterService, ThreadPool threadPool, NodeClient client) { super(settings); this.leastAvailableSpaceUsages = ImmutableOpenMap.of(); this.mostAvailableSpaceUsages = ImmutableOpenMap.of(); this.shardRoutingToDataPath = ImmutableOpenMap.of(); this.shardSizes = ImmutableOpenMap.of(); - this.transportNodesStatsAction = transportNodesStatsAction; - this.transportIndicesStatsAction = transportIndicesStatsAction; this.clusterService = clusterService; this.threadPool = threadPool; + this.client = client; this.updateFrequency = INTERNAL_CLUSTER_INFO_UPDATE_INTERVAL_SETTING.get(settings); this.fetchTimeout = INTERNAL_CLUSTER_INFO_TIMEOUT_SETTING.get(settings); this.enabled = DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_DISK_THRESHOLD_ENABLED_SETTING.get(settings); + ClusterSettings clusterSettings = clusterService.getClusterSettings(); clusterSettings.addSettingsUpdateConsumer(INTERNAL_CLUSTER_INFO_TIMEOUT_SETTING, this::setFetchTimeout); clusterSettings.addSettingsUpdateConsumer(INTERNAL_CLUSTER_INFO_UPDATE_INTERVAL_SETTING, this::setUpdateFrequency); clusterSettings.addSettingsUpdateConsumer(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_DISK_THRESHOLD_ENABLED_SETTING, this::setEnabled); // Add InternalClusterInfoService to listen for Master changes - this.clusterService.add((LocalNodeMasterListener)this); + this.clusterService.addLocalNodeMasterListener(this); // Add to listen for state changes (when nodes are added) - this.clusterService.add((ClusterStateListener)this); + this.clusterService.addListener(this); } private void setEnabled(boolean enabled) { @@ -174,7 +168,7 @@ public void clusterChanged(ClusterChangedEvent event) { } } - if (this.isMaster && dataNodeAdded && clusterService.state().getNodes().getDataNodes().size() > 1) { + if (this.isMaster && dataNodeAdded && event.state().getNodes().getDataNodes().size() > 1) { if (logger.isDebugEnabled()) { logger.debug("data node was added, retrieving new cluster info"); } @@ -259,8 +253,7 @@ protected CountDownLatch updateNodeStats(final ActionListener(listener, latch)); + client.admin().cluster().nodesStats(nodesStatsRequest, new LatchedActionListener<>(listener, latch)); return latch; } @@ 
-274,7 +267,7 @@ protected CountDownLatch updateIndicesStats(final ActionListener(listener, latch)); + client.admin().indices().stats(indicesStatsRequest, new LatchedActionListener<>(listener, latch)); return latch; } @@ -415,9 +408,9 @@ static void fillDiskUsagePerNode(Logger logger, List nodeStatsArray, if (leastAvailablePath == null) { assert mostAvailablePath == null; mostAvailablePath = leastAvailablePath = info; - } else if (leastAvailablePath.getAvailable().bytes() > info.getAvailable().bytes()){ + } else if (leastAvailablePath.getAvailable().getBytes() > info.getAvailable().getBytes()){ leastAvailablePath = info; - } else if (mostAvailablePath.getAvailable().bytes() < info.getAvailable().bytes()) { + } else if (mostAvailablePath.getAvailable().getBytes() < info.getAvailable().getBytes()) { mostAvailablePath = info; } } @@ -428,21 +421,21 @@ static void fillDiskUsagePerNode(Logger logger, List nodeStatsArray, nodeId, mostAvailablePath.getTotal(), leastAvailablePath.getAvailable(), leastAvailablePath.getTotal(), leastAvailablePath.getAvailable()); } - if (leastAvailablePath.getTotal().bytes() < 0) { + if (leastAvailablePath.getTotal().getBytes() < 0) { if (logger.isTraceEnabled()) { logger.trace("node: [{}] least available path has less than 0 total bytes of disk [{}], skipping", - nodeId, leastAvailablePath.getTotal().bytes()); + nodeId, leastAvailablePath.getTotal().getBytes()); } } else { - newLeastAvaiableUsages.put(nodeId, new DiskUsage(nodeId, nodeName, leastAvailablePath.getPath(), leastAvailablePath.getTotal().bytes(), leastAvailablePath.getAvailable().bytes())); + newLeastAvaiableUsages.put(nodeId, new DiskUsage(nodeId, nodeName, leastAvailablePath.getPath(), leastAvailablePath.getTotal().getBytes(), leastAvailablePath.getAvailable().getBytes())); } - if (mostAvailablePath.getTotal().bytes() < 0) { + if (mostAvailablePath.getTotal().getBytes() < 0) { if (logger.isTraceEnabled()) { logger.trace("node: [{}] most available path has less than 0 total bytes of disk [{}], skipping", - nodeId, mostAvailablePath.getTotal().bytes()); + nodeId, mostAvailablePath.getTotal().getBytes()); } } else { - newMostAvaiableUsages.put(nodeId, new DiskUsage(nodeId, nodeName, mostAvailablePath.getPath(), mostAvailablePath.getTotal().bytes(), mostAvailablePath.getAvailable().bytes())); + newMostAvaiableUsages.put(nodeId, new DiskUsage(nodeId, nodeName, mostAvailablePath.getPath(), mostAvailablePath.getTotal().getBytes(), mostAvailablePath.getAvailable().getBytes())); } } diff --git a/core/src/main/java/org/elasticsearch/cluster/LocalClusterUpdateTask.java b/core/src/main/java/org/elasticsearch/cluster/LocalClusterUpdateTask.java new file mode 100644 index 0000000000000..9692ff8d4e1ed --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/LocalClusterUpdateTask.java @@ -0,0 +1,93 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. 
See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.cluster; + +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.Priority; +import org.elasticsearch.common.unit.TimeValue; + +import java.util.List; + +/** + * Used to apply state updates on nodes that are not necessarily master + */ +public abstract class LocalClusterUpdateTask implements ClusterStateTaskConfig, ClusterStateTaskExecutor, + ClusterStateTaskListener { + + private final Priority priority; + + public LocalClusterUpdateTask() { + this(Priority.NORMAL); + } + + public LocalClusterUpdateTask(Priority priority) { + this.priority = priority; + } + + public abstract ClusterTasksResult execute(ClusterState currentState) throws Exception; + + @Override + public final ClusterTasksResult execute(ClusterState currentState, + List tasks) throws Exception { + assert tasks.size() == 1 && tasks.get(0) == this : "expected one-element task list containing current object but was " + tasks; + ClusterTasksResult result = execute(currentState); + return ClusterTasksResult.builder().successes(tasks).build(result, currentState); + } + + /** + * node stepped down as master or has lost connection to the master + */ + public static ClusterTasksResult noMaster() { + return new ClusterTasksResult(true, null, null); + } + + /** + * no changes were made to the cluster state. Useful to execute a runnable on the cluster state applier thread + */ + public static ClusterTasksResult unchanged() { + return new ClusterTasksResult(false, null, null); + } + + /** + * locally apply cluster state received from a master + */ + public static ClusterTasksResult newState(ClusterState clusterState) { + return new ClusterTasksResult(false, clusterState, null); + } + + @Override + public String describeTasks(List tasks) { + return ""; // one of task, source is enough + } + + @Nullable + public TimeValue timeout() { + return null; + } + + @Override + public Priority priority() { + return priority; + } + + @Override + public final boolean runOnlyOnMaster() { + return false; + } +} diff --git a/core/src/main/java/org/elasticsearch/cluster/MasterNodeChangePredicate.java b/core/src/main/java/org/elasticsearch/cluster/MasterNodeChangePredicate.java index afb557e5bbcd5..5bcfecaebafba 100644 --- a/core/src/main/java/org/elasticsearch/cluster/MasterNodeChangePredicate.java +++ b/core/src/main/java/org/elasticsearch/cluster/MasterNodeChangePredicate.java @@ -19,22 +19,35 @@ package org.elasticsearch.cluster; -public enum MasterNodeChangePredicate implements ClusterStateObserver.ChangePredicate { - INSTANCE; +import org.elasticsearch.cluster.node.DiscoveryNode; + +import java.util.function.Predicate; + +public final class MasterNodeChangePredicate { + + private MasterNodeChangePredicate() { - @Override - public boolean apply( - ClusterState previousState, - ClusterState.ClusterStateStatus previousStatus, - ClusterState newState, - ClusterState.ClusterStateStatus newStatus) { - // checking if the masterNodeId changed is insufficient as the - // same master node might get re-elected after a disruption - return newState.nodes().getMasterNodeId() != null && newState != previousState; } - @Override - public boolean apply(ClusterChangedEvent changedEvent) { - return changedEvent.nodesDelta().masterNodeChanged(); + /** + * builds a predicate that will accept a cluster state only if it was generated after the current has + * (re-)joined the master + */ + public static Predicate 
build(ClusterState currentState) { + final long currentVersion = currentState.version(); + final DiscoveryNode masterNode = currentState.nodes().getMasterNode(); + final String currentMasterId = masterNode == null ? null : masterNode.getEphemeralId(); + return newState -> { + final DiscoveryNode newMaster = newState.nodes().getMasterNode(); + final boolean accept; + if (newMaster == null) { + accept = false; + } else if (newMaster.getEphemeralId().equals(currentMasterId) == false) { + accept = true; + } else { + accept = newState.version() > currentVersion; + } + return accept; + }; } } diff --git a/core/src/main/java/org/elasticsearch/cluster/NamedDiff.java b/core/src/main/java/org/elasticsearch/cluster/NamedDiff.java new file mode 100644 index 0000000000000..9da3167ae88c5 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/NamedDiff.java @@ -0,0 +1,36 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.cluster; + +import org.elasticsearch.Version; +import org.elasticsearch.common.io.stream.NamedWriteable; + +/** + * Diff that also support NamedWriteable interface + */ +public interface NamedDiff> extends Diff, NamedWriteable { + /** + * The minimal version of the recipient this custom object can be sent to + */ + default Version getMinimalSupportedVersion() { + return Version.CURRENT.minimumCompatibilityVersion(); + } + +} diff --git a/core/src/main/java/org/elasticsearch/cluster/NamedDiffable.java b/core/src/main/java/org/elasticsearch/cluster/NamedDiffable.java new file mode 100644 index 0000000000000..0797442209689 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/NamedDiffable.java @@ -0,0 +1,35 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
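For a plugin or module that contributes its own cluster state section, the contract introduced here is the `NamedDiffable` interface plus registration as a named writeable. A hedged, deliberately abstract sketch of such a custom: the type name is illustrative, `writeTo`, `diff` and `toXContent` are left out, and registration with the `NamedWriteableRegistry` is assumed to happen elsewhere.

```java
import org.elasticsearch.Version;
import org.elasticsearch.cluster.ClusterState;

/** Hedged sketch of a custom cluster state section; kept abstract so the omitted methods need not be shown. */
abstract class FeatureUsageCustom implements ClusterState.Custom {

    static final String TYPE = "feature_usage";   // hypothetical name, also the key in the customs map

    @Override
    public String getWriteableName() {
        return TYPE;                              // used by readNamedWriteable to find the registered reader
    }

    @Override
    public Version getMinimalSupportedVersion() {
        // nodes older than this never receive the section; writeTo and the diff serializer filter it out
        return Version.CURRENT.minimumCompatibilityVersion();
    }
}
```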
+ */ + +package org.elasticsearch.cluster; + +import org.elasticsearch.Version; +import org.elasticsearch.common.io.stream.NamedWriteable; + +/** + * Diff that also support NamedWriteable interface + */ +public interface NamedDiffable extends Diffable, NamedWriteable { + /** + * The minimal version of the recipient this custom object can be sent to + */ + default Version getMinimalSupportedVersion() { + return Version.CURRENT.minimumCompatibilityVersion(); + } +} diff --git a/core/src/main/java/org/elasticsearch/cluster/NamedDiffableValueSerializer.java b/core/src/main/java/org/elasticsearch/cluster/NamedDiffableValueSerializer.java new file mode 100644 index 0000000000000..c6434db9e8777 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/NamedDiffableValueSerializer.java @@ -0,0 +1,58 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.cluster; + +import org.elasticsearch.Version; +import org.elasticsearch.common.io.stream.StreamInput; + +import java.io.IOException; + +/** + * Value Serializer for named diffables + */ +public class NamedDiffableValueSerializer> extends DiffableUtils.DiffableValueSerializer { + + private final Class tClass; + + public NamedDiffableValueSerializer(Class tClass) { + this.tClass = tClass; + } + + @Override + public T read(StreamInput in, String key) throws IOException { + return in.readNamedWriteable(tClass, key); + } + + @Override + public boolean supportsVersion(Diff value, Version version) { + return version.onOrAfter(((NamedDiff)value).getMinimalSupportedVersion()); + } + + @Override + public boolean supportsVersion(T value, Version version) { + return version.onOrAfter(value.getMinimalSupportedVersion()); + } + + @SuppressWarnings("unchecked") + @Override + public Diff readDiff(StreamInput in, String key) throws IOException { + return in.readNamedWriteable(NamedDiff.class, key); + } +} diff --git a/core/src/main/java/org/elasticsearch/cluster/NodeConnectionsService.java b/core/src/main/java/org/elasticsearch/cluster/NodeConnectionsService.java index 99f161b9da5fa..aab75eb2aad7b 100644 --- a/core/src/main/java/org/elasticsearch/cluster/NodeConnectionsService.java +++ b/core/src/main/java/org/elasticsearch/cluster/NodeConnectionsService.java @@ -21,6 +21,7 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.common.component.AbstractLifecycleComponent; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.lease.Releasable; @@ -31,10 +32,15 @@ import org.elasticsearch.common.util.concurrent.ConcurrentCollections; import 
org.elasticsearch.common.util.concurrent.FutureUtils; import org.elasticsearch.common.util.concurrent.KeyedLock; +import org.elasticsearch.discovery.zen.MasterFaultDetection; +import org.elasticsearch.discovery.zen.NodesFaultDetection; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; +import java.util.HashSet; +import java.util.Set; import java.util.concurrent.ConcurrentMap; +import java.util.concurrent.CountDownLatch; import java.util.concurrent.ScheduledFuture; import static org.elasticsearch.common.settings.Setting.Property; @@ -45,8 +51,8 @@ * This component is responsible for connecting to nodes once they are added to the cluster state, and disconnect when they are * removed. Also, it periodically checks that all connections are still open and if needed restores them. * Note that this component is *not* responsible for removing nodes from the cluster if they disconnect / do not respond - * to pings. This is done by {@link org.elasticsearch.discovery.zen.fd.NodesFaultDetection}. Master fault detection - * is done by {@link org.elasticsearch.discovery.zen.fd.MasterFaultDetection}. + * to pings. This is done by {@link NodesFaultDetection}. Master fault detection + * is done by {@link MasterFaultDetection}. */ public class NodeConnectionsService extends AbstractLifecycleComponent { @@ -73,20 +79,58 @@ public NodeConnectionsService(Settings settings, ThreadPool threadPool, Transpor this.reconnectInterval = NodeConnectionsService.CLUSTER_NODE_RECONNECT_INTERVAL_SETTING.get(settings); } - public void connectToAddedNodes(ClusterChangedEvent event) { - - // TODO: do this in parallel (and wait) - for (final DiscoveryNode node : event.nodesDelta().addedNodes()) { + public void connectToNodes(DiscoveryNodes discoveryNodes) { + CountDownLatch latch = new CountDownLatch(discoveryNodes.getSize()); + for (final DiscoveryNode node : discoveryNodes) { + final boolean connected; try (Releasable ignored = nodeLocks.acquire(node)) { - Integer current = nodes.put(node, 0); - assert current == null : "node " + node + " was added in event but already in internal nodes"; - validateNodeConnected(node); + nodes.putIfAbsent(node, 0); + connected = transportService.nodeConnected(node); + } + if (connected) { + latch.countDown(); + } else { + // spawn to another thread to do in parallel + threadPool.executor(ThreadPool.Names.MANAGEMENT).execute(new AbstractRunnable() { + @Override + public void onFailure(Exception e) { + // both errors and rejections are logged here. the service + // will try again after `cluster.nodes.reconnect_interval` on all nodes but the current master. + // On the master, node fault detection will remove these nodes from the cluster as they are not + // connected. Note that it is very rare that we end up here on the master.
+ logger.warn((Supplier) () -> new ParameterizedMessage("failed to connect to {}", node), e); + } + + @Override + protected void doRun() throws Exception { + try (Releasable ignored = nodeLocks.acquire(node)) { + validateAndConnectIfNeeded(node); + } + } + + @Override + public void onAfter() { + latch.countDown(); + } + }); } } + try { + latch.await(); + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + } } - public void disconnectFromRemovedNodes(ClusterChangedEvent event) { - for (final DiscoveryNode node : event.nodesDelta().removedNodes()) { + /** + * Disconnects from all nodes except the ones provided as parameter + */ + public void disconnectFromNodesExcept(DiscoveryNodes nodesToKeep) { + Set currentNodes = new HashSet<>(nodes.keySet()); + for (DiscoveryNode node : nodesToKeep) { + currentNodes.remove(node); + } + for (final DiscoveryNode node : currentNodes) { try (Releasable ignored = nodeLocks.acquire(node)) { Integer current = nodes.remove(node); assert current != null : "node " + node + " was removed in event but not in internal nodes"; @@ -99,8 +143,8 @@ public void disconnectFromRemovedNodes(ClusterChangedEvent event) { } } - void validateNodeConnected(DiscoveryNode node) { - assert nodeLocks.isHeldByCurrentThread(node) : "validateNodeConnected must be called under lock"; + void validateAndConnectIfNeeded(DiscoveryNode node) { + assert nodeLocks.isHeldByCurrentThread(node) : "validateAndConnectIfNeeded must be called under lock"; if (lifecycle.stoppedOrClosed() || nodes.containsKey(node) == false) { // we double check existence of node since connectToNode might take time... // nothing to do @@ -136,7 +180,7 @@ public void onFailure(Exception e) { protected void doRun() { for (DiscoveryNode node : nodes.keySet()) { try (Releasable ignored = nodeLocks.acquire(node)) { - validateNodeConnected(node); + validateAndConnectIfNeeded(node); } } } diff --git a/core/src/main/java/org/elasticsearch/cluster/RestoreInProgress.java b/core/src/main/java/org/elasticsearch/cluster/RestoreInProgress.java index 55a09f87f759a..55e70dbe644ff 100644 --- a/core/src/main/java/org/elasticsearch/cluster/RestoreInProgress.java +++ b/core/src/main/java/org/elasticsearch/cluster/RestoreInProgress.java @@ -39,12 +39,10 @@ /** * Meta data about restore processes that are currently executing */ -public class RestoreInProgress extends AbstractDiffable implements Custom { +public class RestoreInProgress extends AbstractNamedDiffable implements Custom { public static final String TYPE = "restore"; - public static final RestoreInProgress PROTO = new RestoreInProgress(); - private final List entries; /** @@ -377,15 +375,15 @@ public static State fromValue(byte value) { * {@inheritDoc} */ @Override - public String type() { + public String getWriteableName() { return TYPE; } - /** - * {@inheritDoc} - */ - @Override - public RestoreInProgress readFrom(StreamInput in) throws IOException { + public static NamedDiff readDiffFrom(StreamInput in) throws IOException { + return readDiffFrom(Custom.class, TYPE, in); + } + + public RestoreInProgress(StreamInput in) throws IOException { Entry[] entries = new Entry[in.readVInt()]; for (int i = 0; i < entries.length; i++) { Snapshot snapshot = new Snapshot(in); @@ -404,7 +402,7 @@ public RestoreInProgress readFrom(StreamInput in) throws IOException { } entries[i] = new Entry(snapshot, state, Collections.unmodifiableList(indexBuilder), builder.build()); } - return new RestoreInProgress(entries); + this.entries = Arrays.asList(entries); } /** diff --git 
a/core/src/main/java/org/elasticsearch/cluster/SnapshotDeletionsInProgress.java b/core/src/main/java/org/elasticsearch/cluster/SnapshotDeletionsInProgress.java new file mode 100644 index 0000000000000..9641c95a827a1 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/SnapshotDeletionsInProgress.java @@ -0,0 +1,220 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.cluster; + +import org.elasticsearch.Version; +import org.elasticsearch.cluster.ClusterState.Custom; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.snapshots.Snapshot; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Objects; + +/** + * A class that represents the snapshot deletions that are in progress in the cluster. + */ +public class SnapshotDeletionsInProgress extends AbstractNamedDiffable implements Custom { + + public static final String TYPE = "snapshot_deletions"; + // the version where SnapshotDeletionsInProgress was introduced + public static final Version VERSION_INTRODUCED = Version.V_5_2_0; + + // the list of snapshot deletion request entries + private final List entries; + + private SnapshotDeletionsInProgress(List entries) { + this.entries = Collections.unmodifiableList(entries); + } + + public SnapshotDeletionsInProgress(StreamInput in) throws IOException { + this.entries = Collections.unmodifiableList(in.readList(Entry::new)); + } + + /** + * Returns a new instance of {@link SnapshotDeletionsInProgress} with the given + * {@link Entry} added. + */ + public static SnapshotDeletionsInProgress newInstance(Entry entry) { + return new SnapshotDeletionsInProgress(Collections.singletonList(entry)); + } + + /** + * Returns a new instance of {@link SnapshotDeletionsInProgress} which adds + * the given {@link Entry} to the invoking instance. + */ + public SnapshotDeletionsInProgress withAddedEntry(Entry entry) { + List entries = new ArrayList<>(getEntries()); + entries.add(entry); + return new SnapshotDeletionsInProgress(entries); + } + + /** + * Returns a new instance of {@link SnapshotDeletionsInProgress} which removes + * the given entry from the invoking instance. + */ + public SnapshotDeletionsInProgress withRemovedEntry(Entry entry) { + List entries = new ArrayList<>(getEntries()); + entries.remove(entry); + return new SnapshotDeletionsInProgress(entries); + } + + /** + * Returns an unmodifiable list of snapshot deletion entries. 
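
The new `SnapshotDeletionsInProgress` custom above is immutable: callers never mutate an instance, they derive a new one via `newInstance`, `withAddedEntry` or `withRemovedEntry`. A minimal sketch of how a cluster-state update could thread a new deletion entry through those accessors (the helper name and the null handling are illustrative assumptions, not part of this change):

```java
// Illustrative sketch only, not part of the diff above; it only uses the accessors shown there.
import org.elasticsearch.cluster.SnapshotDeletionsInProgress;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.snapshots.Snapshot;

class SnapshotDeletionsExample {
    /** Derives the custom that should replace {@code current} once a deletion request is accepted. */
    static SnapshotDeletionsInProgress withNewDeletion(@Nullable SnapshotDeletionsInProgress current,
                                                       Snapshot snapshot, long startTime, long repositoryStateId) {
        SnapshotDeletionsInProgress.Entry entry =
            new SnapshotDeletionsInProgress.Entry(snapshot, startTime, repositoryStateId);
        if (current == null) {
            // no deletions tracked yet: start a fresh single-entry instance
            return SnapshotDeletionsInProgress.newInstance(entry);
        }
        // otherwise derive a new immutable instance that also carries the new entry
        return current.withAddedEntry(entry);
    }
}
```

`withRemovedEntry` is the mirror step once the deletion completes, and `hasDeletionsInProgress()` gives callers a cheap emptiness check.
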
+ */ + public List getEntries() { + return entries; + } + + /** + * Returns {@code true} if there are snapshot deletions in progress in the cluster, + * returns {@code false} otherwise. + */ + public boolean hasDeletionsInProgress() { + return entries.isEmpty() == false; + } + + @Override + public String getWriteableName() { + return TYPE; + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + + SnapshotDeletionsInProgress that = (SnapshotDeletionsInProgress) o; + return entries.equals(that.entries); + } + + @Override + public int hashCode() { + return 31 + entries.hashCode(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeList(entries); + } + + public static NamedDiff readDiffFrom(StreamInput in) throws IOException { + return readDiffFrom(Custom.class, TYPE, in); + } + + @Override + public Version getMinimalSupportedVersion() { + return VERSION_INTRODUCED; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startArray(TYPE); + for (Entry entry : entries) { + builder.startObject(); + { + builder.field("repository", entry.snapshot.getRepository()); + builder.field("snapshot", entry.snapshot.getSnapshotId().getName()); + builder.timeValueField("start_time_millis", "start_time", entry.startTime); + builder.field("repository_state_id", entry.repositoryStateId); + } + builder.endObject(); + } + builder.endArray(); + return builder; + } + + /** + * A class representing a snapshot deletion request entry in the cluster state. + */ + public static final class Entry implements Writeable { + private final Snapshot snapshot; + private final long startTime; + private final long repositoryStateId; + + public Entry(Snapshot snapshot, long startTime, long repositoryStateId) { + this.snapshot = snapshot; + this.startTime = startTime; + this.repositoryStateId = repositoryStateId; + } + + public Entry(StreamInput in) throws IOException { + this.snapshot = new Snapshot(in); + this.startTime = in.readVLong(); + this.repositoryStateId = in.readLong(); + } + + /** + * The snapshot to delete. + */ + public Snapshot getSnapshot() { + return snapshot; + } + + /** + * The start time in milliseconds for deleting the snapshots. + */ + public long getStartTime() { + return startTime; + } + + /** + * The repository state id at the time the snapshot deletion began. 
+ */ + public long getRepositoryStateId() { + return repositoryStateId; + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + Entry that = (Entry) o; + return snapshot.equals(that.snapshot) + && startTime == that.startTime + && repositoryStateId == that.repositoryStateId; + } + + @Override + public int hashCode() { + return Objects.hash(snapshot, startTime, repositoryStateId); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + snapshot.writeTo(out); + out.writeVLong(startTime); + out.writeLong(repositoryStateId); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/cluster/SnapshotsInProgress.java b/core/src/main/java/org/elasticsearch/cluster/SnapshotsInProgress.java index 7088037353096..6e085fe7ebecc 100644 --- a/core/src/main/java/org/elasticsearch/cluster/SnapshotsInProgress.java +++ b/core/src/main/java/org/elasticsearch/cluster/SnapshotsInProgress.java @@ -22,6 +22,7 @@ import com.carrotsearch.hppc.ObjectContainer; import com.carrotsearch.hppc.cursors.ObjectCursor; import com.carrotsearch.hppc.cursors.ObjectObjectCursor; +import org.elasticsearch.Version; import org.elasticsearch.cluster.ClusterState.Custom; import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.io.stream.StreamInput; @@ -43,10 +44,14 @@ /** * Meta data about snapshots that are currently executing */ -public class SnapshotsInProgress extends AbstractDiffable implements Custom { +public class SnapshotsInProgress extends AbstractNamedDiffable implements Custom { public static final String TYPE = "snapshots"; - public static final SnapshotsInProgress PROTO = new SnapshotsInProgress(); + // denotes an undefined repository state id, which will happen when receiving a cluster state with + // a snapshot in progress from a pre 5.2.x node + public static final long UNDEFINED_REPOSITORY_STATE_ID = -2L; + // the version where repository state ids were introduced + private static final Version REPOSITORY_ID_INTRODUCED_VERSION = Version.V_5_2_0; @Override public boolean equals(Object o) { @@ -74,9 +79,10 @@ public static class Entry { private final List indices; private final ImmutableOpenMap> waitingIndices; private final long startTime; + private final long repositoryStateId; public Entry(Snapshot snapshot, boolean includeGlobalState, boolean partial, State state, List indices, - long startTime, ImmutableOpenMap shards) { + long startTime, long repositoryStateId, ImmutableOpenMap shards) { this.state = state; this.snapshot = snapshot; this.includeGlobalState = includeGlobalState; @@ -90,10 +96,12 @@ public Entry(Snapshot snapshot, boolean includeGlobalState, boolean partial, Sta this.shards = shards; this.waitingIndices = findWaitingIndices(shards); } + this.repositoryStateId = repositoryStateId; } public Entry(Entry entry, State state, ImmutableOpenMap shards) { - this(entry.snapshot, entry.includeGlobalState, entry.partial, state, entry.indices, entry.startTime, shards); + this(entry.snapshot, entry.includeGlobalState, entry.partial, state, entry.indices, entry.startTime, + entry.repositoryStateId, shards); } public Entry(Entry entry, ImmutableOpenMap shards) { @@ -132,6 +140,10 @@ public long startTime() { return startTime; } + public long getRepositoryStateId() { + return repositoryStateId; + } + @Override public boolean equals(Object o) { if (this == o) return true; @@ -147,6 +159,7 @@ public boolean equals(Object o) { if 
(!snapshot.equals(entry.snapshot)) return false; if (state != entry.state) return false; if (!waitingIndices.equals(entry.waitingIndices)) return false; + if (repositoryStateId != entry.repositoryStateId) return false; return true; } @@ -161,6 +174,7 @@ public int hashCode() { result = 31 * result + indices.hashCode(); result = 31 * result + waitingIndices.hashCode(); result = 31 * result + Long.hashCode(startTime); + result = 31 * result + Long.hashCode(repositoryStateId); return result; } @@ -169,14 +183,16 @@ public String toString() { return snapshot.toString(); } - private ImmutableOpenMap> findWaitingIndices(ImmutableOpenMap shards) { + // package private for testing + ImmutableOpenMap> findWaitingIndices(ImmutableOpenMap shards) { Map> waitingIndicesMap = new HashMap<>(); for (ObjectObjectCursor entry : shards) { if (entry.value.state() == State.WAITING) { - List waitingShards = waitingIndicesMap.get(entry.key.getIndex()); + final String indexName = entry.key.getIndexName(); + List waitingShards = waitingIndicesMap.get(indexName); if (waitingShards == null) { waitingShards = new ArrayList<>(); - waitingIndicesMap.put(entry.key.getIndexName(), waitingShards); + waitingIndicesMap.put(indexName, waitingShards); } waitingShards.add(entry.key); } @@ -190,7 +206,6 @@ private ImmutableOpenMap> findWaitingIndices(ImmutableOpen } return waitingIndicesBuilder.build(); } - } /** @@ -210,12 +225,9 @@ public static boolean completed(ObjectContainer shards) { public static class ShardSnapshotStatus { - private State state; - private String nodeId; - private String reason; - - private ShardSnapshotStatus() { - } + private final State state; + private final String nodeId; + private final String reason; public ShardSnapshotStatus(String nodeId) { this(nodeId, State.INIT); @@ -229,6 +241,14 @@ public ShardSnapshotStatus(String nodeId, State state, String reason) { this.nodeId = nodeId; this.state = state; this.reason = reason; + // If the state is failed we have to have a reason for this failure + assert state.failed() == false || reason != null; + } + + public ShardSnapshotStatus(StreamInput in) throws IOException { + nodeId = in.readOptionalString(); + state = State.fromValue(in.readByte()); + reason = in.readOptionalString(); } public State state() { @@ -243,18 +263,6 @@ public String reason() { return reason; } - public static ShardSnapshotStatus readShardSnapshotStatus(StreamInput in) throws IOException { - ShardSnapshotStatus shardSnapshotStatus = new ShardSnapshotStatus(); - shardSnapshotStatus.readFrom(in); - return shardSnapshotStatus; - } - - public void readFrom(StreamInput in) throws IOException { - nodeId = in.readOptionalString(); - state = State.fromValue(in.readByte()); - reason = in.readOptionalString(); - } - public void writeTo(StreamOutput out) throws IOException { out.writeOptionalString(nodeId); out.writeByte(state.value); @@ -282,6 +290,11 @@ public int hashCode() { result = 31 * result + (reason != null ? 
reason.hashCode() : 0); return result; } + + @Override + public String toString() { + return "ShardSnapshotStatus[state=" + state + ", nodeId=" + nodeId + ", reason=" + reason + "]"; + } } public enum State { @@ -365,12 +378,15 @@ public Entry snapshot(final Snapshot snapshot) { } @Override - public String type() { + public String getWriteableName() { return TYPE; } - @Override - public SnapshotsInProgress readFrom(StreamInput in) throws IOException { + public static NamedDiff readDiffFrom(StreamInput in) throws IOException { + return readDiffFrom(Custom.class, TYPE, in); + } + + public SnapshotsInProgress(StreamInput in) throws IOException { Entry[] entries = new Entry[in.readVInt()]; for (int i = 0; i < entries.length; i++) { Snapshot snapshot = new Snapshot(in); @@ -389,7 +405,13 @@ public SnapshotsInProgress readFrom(StreamInput in) throws IOException { ShardId shardId = ShardId.readShardId(in); String nodeId = in.readOptionalString(); State shardState = State.fromValue(in.readByte()); - builder.put(shardId, new ShardSnapshotStatus(nodeId, shardState)); + // Workaround for https://github.com/elastic/elasticsearch/issues/25878 + String reason = shardState.failed() ? "" : null; + builder.put(shardId, new ShardSnapshotStatus(nodeId, shardState, reason)); + } + long repositoryStateId = UNDEFINED_REPOSITORY_STATE_ID; + if (in.getVersion().onOrAfter(REPOSITORY_ID_INTRODUCED_VERSION)) { + repositoryStateId = in.readLong(); } entries[i] = new Entry(snapshot, includeGlobalState, @@ -397,9 +419,10 @@ public SnapshotsInProgress readFrom(StreamInput in) throws IOException { state, Collections.unmodifiableList(indexBuilder), startTime, + repositoryStateId, builder.build()); } - return new SnapshotsInProgress(entries); + this.entries = Arrays.asList(entries); } @Override @@ -421,6 +444,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalString(shardEntry.value.nodeId()); out.writeByte(shardEntry.value.state().value()); } + if (out.getVersion().onOrAfter(REPOSITORY_ID_INTRODUCED_VERSION)) { + out.writeLong(entry.repositoryStateId); + } } } @@ -434,6 +460,7 @@ public void writeTo(StreamOutput out) throws IOException { private static final String INDICES = "indices"; private static final String START_TIME_MILLIS = "start_time_millis"; private static final String START_TIME = "start_time"; + private static final String REPOSITORY_STATE_ID = "repository_state_id"; private static final String SHARDS = "shards"; private static final String INDEX = "index"; private static final String SHARD = "shard"; @@ -465,6 +492,7 @@ public void toXContent(Entry entry, XContentBuilder builder, ToXContent.Params p } builder.endArray(); builder.timeValueField(START_TIME_MILLIS, START_TIME, entry.startTime()); + builder.field(REPOSITORY_STATE_ID, entry.getRepositoryStateId()); builder.startArray(SHARDS); { for (ObjectObjectCursor shardEntry : entry.shards) { diff --git a/core/src/main/java/org/elasticsearch/cluster/action/shard/ShardStateAction.java b/core/src/main/java/org/elasticsearch/cluster/action/shard/ShardStateAction.java index 13aa148f8b19f..543118a172f95 100644 --- a/core/src/main/java/org/elasticsearch/cluster/action/shard/ShardStateAction.java +++ b/core/src/main/java/org/elasticsearch/cluster/action/shard/ShardStateAction.java @@ -25,10 +25,10 @@ import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.cluster.ClusterChangedEvent; +import org.elasticsearch.cluster.ClusterStateTaskExecutor; import 
org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.ClusterStateObserver; import org.elasticsearch.cluster.ClusterStateTaskConfig; -import org.elasticsearch.cluster.ClusterStateTaskExecutor; import org.elasticsearch.cluster.ClusterStateTaskListener; import org.elasticsearch.cluster.MasterNodeChangePredicate; import org.elasticsearch.cluster.NotMasterException; @@ -37,8 +37,8 @@ import org.elasticsearch.cluster.routing.RoutingService; import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.routing.allocation.AllocationService; -import org.elasticsearch.cluster.routing.allocation.FailedRerouteAllocation; -import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; +import org.elasticsearch.cluster.routing.allocation.FailedShard; +import org.elasticsearch.cluster.routing.allocation.StaleShard; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Priority; @@ -69,6 +69,7 @@ import java.util.List; import java.util.Locale; import java.util.Set; +import java.util.function.Predicate; public class ShardStateAction extends AbstractComponent { @@ -91,11 +92,13 @@ public ShardStateAction(Settings settings, ClusterService clusterService, Transp transportService.registerRequestHandler(SHARD_FAILED_ACTION_NAME, ShardEntry::new, ThreadPool.Names.SAME, new ShardFailedTransportHandler(clusterService, new ShardFailedClusterStateTaskExecutor(allocationService, routingService, logger), logger)); } - private void sendShardAction(final String actionName, final ClusterStateObserver observer, final ShardEntry shardEntry, final Listener listener) { - DiscoveryNode masterNode = observer.observedState().nodes().getMasterNode(); + private void sendShardAction(final String actionName, final ClusterState currentState, final ShardEntry shardEntry, final Listener listener) { + ClusterStateObserver observer = new ClusterStateObserver(currentState, clusterService, null, logger, threadPool.getThreadContext()); + DiscoveryNode masterNode = currentState.nodes().getMasterNode(); + Predicate changePredicate = MasterNodeChangePredicate.build(currentState); if (masterNode == null) { logger.warn("{} no master known for action [{}] for shard entry [{}]", shardEntry.shardId, actionName, shardEntry); - waitForNewMasterAndRetry(actionName, observer, shardEntry, listener); + waitForNewMasterAndRetry(actionName, observer, shardEntry, listener, changePredicate); } else { logger.debug("{} sending [{}] to [{}] for shard entry [{}]", shardEntry.shardId, actionName, masterNode.getId(), shardEntry); transportService.sendRequest(masterNode, @@ -108,7 +111,7 @@ public void handleResponse(TransportResponse.Empty response) { @Override public void handleException(TransportException exp) { if (isMasterChannelException(exp)) { - waitForNewMasterAndRetry(actionName, observer, shardEntry, listener); + waitForNewMasterAndRetry(actionName, observer, shardEntry, listener, changePredicate); } else { logger.warn((Supplier) () -> new ParameterizedMessage("{} unexpected failure while sending request [{}] to [{}] for shard entry [{}]", shardEntry.shardId, actionName, masterNode, shardEntry), exp); listener.onFailure(exp instanceof RemoteTransportException ? (Exception) (exp.getCause() instanceof Exception ? 
exp.getCause() : new ElasticsearchException(exp.getCause())) : exp); @@ -142,31 +145,39 @@ private static boolean isMasterChannelException(TransportException exp) { */ public void remoteShardFailed(final ShardId shardId, String allocationId, long primaryTerm, final String message, @Nullable final Exception failure, Listener listener) { assert primaryTerm > 0L : "primary term should be strictly positive"; - shardFailed(shardId, allocationId, primaryTerm, message, failure, listener); + shardFailed(shardId, allocationId, primaryTerm, message, failure, listener, clusterService.state()); } /** * Send a shard failed request to the master node to update the cluster state when a shard on the local node failed. */ public void localShardFailed(final ShardRouting shardRouting, final String message, @Nullable final Exception failure, Listener listener) { - shardFailed(shardRouting.shardId(), shardRouting.allocationId().getId(), 0L, message, failure, listener); + localShardFailed(shardRouting, message, failure, listener, clusterService.state()); + } + + /** + * Send a shard failed request to the master node to update the cluster state when a shard on the local node failed. + */ + public void localShardFailed(final ShardRouting shardRouting, final String message, @Nullable final Exception failure, Listener listener, + final ClusterState currentState) { + shardFailed(shardRouting.shardId(), shardRouting.allocationId().getId(), 0L, message, failure, listener, currentState); } - private void shardFailed(final ShardId shardId, String allocationId, long primaryTerm, final String message, @Nullable final Exception failure, Listener listener) { - ClusterStateObserver observer = new ClusterStateObserver(clusterService, null, logger, threadPool.getThreadContext()); + private void shardFailed(final ShardId shardId, String allocationId, long primaryTerm, final String message, + @Nullable final Exception failure, Listener listener, ClusterState currentState) { ShardEntry shardEntry = new ShardEntry(shardId, allocationId, primaryTerm, message, failure); - sendShardAction(SHARD_FAILED_ACTION_NAME, observer, shardEntry, listener); + sendShardAction(SHARD_FAILED_ACTION_NAME, currentState, shardEntry, listener); } // visible for testing - protected void waitForNewMasterAndRetry(String actionName, ClusterStateObserver observer, ShardEntry shardEntry, Listener listener) { + protected void waitForNewMasterAndRetry(String actionName, ClusterStateObserver observer, ShardEntry shardEntry, Listener listener, Predicate changePredicate) { observer.waitForNextChange(new ClusterStateObserver.Listener() { @Override public void onNewClusterState(ClusterState state) { if (logger.isTraceEnabled()) { - logger.trace("new cluster state [{}] after waiting for master election to fail shard entry [{}]", state.prettyPrint(), shardEntry); + logger.trace("new cluster state [{}] after waiting for master election to fail shard entry [{}]", state, shardEntry); } - sendShardAction(actionName, observer, shardEntry, listener); + sendShardAction(actionName, state, shardEntry, listener); } @Override @@ -180,7 +191,7 @@ public void onTimeout(TimeValue timeout) { // we wait indefinitely for a new master assert false; } - }, MasterNodeChangePredicate.INSTANCE); + }, changePredicate); } private static class ShardFailedTransportHandler implements TransportRequestHandler { @@ -188,7 +199,7 @@ private static class ShardFailedTransportHandler implements TransportRequestHand private final ShardFailedClusterStateTaskExecutor shardFailedClusterStateTaskExecutor; 
private final Logger logger; - public ShardFailedTransportHandler(ClusterService clusterService, ShardFailedClusterStateTaskExecutor shardFailedClusterStateTaskExecutor, Logger logger) { + ShardFailedTransportHandler(ClusterService clusterService, ShardFailedClusterStateTaskExecutor shardFailedClusterStateTaskExecutor, Logger logger) { this.clusterService = clusterService; this.shardFailedClusterStateTaskExecutor = shardFailedClusterStateTaskExecutor; this.logger = logger; @@ -249,11 +260,11 @@ public ShardFailedClusterStateTaskExecutor(AllocationService allocationService, } @Override - public BatchResult execute(ClusterState currentState, List tasks) throws Exception { - BatchResult.Builder batchResultBuilder = BatchResult.builder(); + public ClusterTasksResult execute(ClusterState currentState, List tasks) throws Exception { + ClusterTasksResult.Builder batchResultBuilder = ClusterTasksResult.builder(); List tasksToBeApplied = new ArrayList<>(); - List shardRoutingsToBeApplied = new ArrayList<>(); - List staleShardsToBeApplied = new ArrayList<>(); + List failedShardsToBeApplied = new ArrayList<>(); + List staleShardsToBeApplied = new ArrayList<>(); for (ShardEntry task : tasks) { IndexMetaData indexMetaData = currentState.metaData().index(task.shardId.getIndex()); @@ -293,7 +304,7 @@ public BatchResult execute(ClusterState currentState, List 0 && inSyncAllocationIds.contains(task.allocationId)) { logger.debug("{} marking shard {} as stale (shard failed task: [{}])", task.shardId, task.allocationId, task); tasksToBeApplied.add(task); - staleShardsToBeApplied.add(new FailedRerouteAllocation.StaleShard(task.shardId, task.allocationId)); + staleShardsToBeApplied.add(new StaleShard(task.shardId, task.allocationId)); } else { // tasks that correspond to non-existent shards are marked as successful logger.debug("{} ignoring shard failed task [{}] (shard does not exist anymore)", task.shardId, task); @@ -303,21 +314,18 @@ public BatchResult execute(ClusterState currentState, List) () -> new ParameterizedMessage("failed to apply failed shards {}", shardRoutingsToBeApplied), e); + logger.warn((Supplier) () -> new ParameterizedMessage("failed to apply failed shards {}", failedShardsToBeApplied), e); // failures are communicated back to the requester // cluster state will not be updated in this case batchResultBuilder.failures(tasksToBeApplied, e); @@ -327,8 +335,7 @@ public BatchResult execute(ClusterState currentState, List failedShards, - List staleShards) { + ClusterState applyFailedShards(ClusterState currentState, List failedShards, List staleShards) { return allocationService.applyFailedShards(currentState, failedShards, staleShards); } @@ -346,9 +353,11 @@ public void clusterStatePublished(ClusterChangedEvent clusterChangedEvent) { } public void shardStarted(final ShardRouting shardRouting, final String message, Listener listener) { - ClusterStateObserver observer = new ClusterStateObserver(clusterService, null, logger, threadPool.getThreadContext()); + shardStarted(shardRouting, message, listener, clusterService.state()); + } + public void shardStarted(final ShardRouting shardRouting, final String message, Listener listener, ClusterState currentState) { ShardEntry shardEntry = new ShardEntry(shardRouting.shardId(), shardRouting.allocationId().getId(), 0L, message, null); - sendShardAction(SHARD_STARTED_ACTION_NAME, observer, shardEntry, listener); + sendShardAction(SHARD_STARTED_ACTION_NAME, currentState, shardEntry, listener); } private static class ShardStartedTransportHandler implements 
TransportRequestHandler { @@ -356,7 +365,7 @@ private static class ShardStartedTransportHandler implements TransportRequestHan private final ShardStartedClusterStateTaskExecutor shardStartedClusterStateTaskExecutor; private final Logger logger; - public ShardStartedTransportHandler(ClusterService clusterService, ShardStartedClusterStateTaskExecutor shardStartedClusterStateTaskExecutor, Logger logger) { + ShardStartedTransportHandler(ClusterService clusterService, ShardStartedClusterStateTaskExecutor shardStartedClusterStateTaskExecutor, Logger logger) { this.clusterService = clusterService; this.shardStartedClusterStateTaskExecutor = shardStartedClusterStateTaskExecutor; this.logger = logger; @@ -366,7 +375,7 @@ public ShardStartedTransportHandler(ClusterService clusterService, ShardStartedC public void messageReceived(ShardEntry request, TransportChannel channel) throws Exception { logger.debug("{} received shard started for [{}]", request.shardId, request); clusterService.submitStateUpdateTask( - "shard-started", + "shard-started " + request, request, ClusterStateTaskConfig.build(Priority.URGENT), shardStartedClusterStateTaskExecutor, @@ -385,8 +394,8 @@ public ShardStartedClusterStateTaskExecutor(AllocationService allocationService, } @Override - public BatchResult execute(ClusterState currentState, List tasks) throws Exception { - BatchResult.Builder builder = BatchResult.builder(); + public ClusterTasksResult execute(ClusterState currentState, List tasks) throws Exception { + ClusterTasksResult.Builder builder = ClusterTasksResult.builder(); List tasksToBeApplied = new ArrayList<>(); List shardRoutingsToBeApplied = new ArrayList<>(tasks.size()); Set seenShardRoutings = new HashSet<>(); // to prevent duplicates @@ -426,11 +435,7 @@ public BatchResult execute(ClusterState currentState, List) () -> new ParameterizedMessage("failed to apply started shards {}", shardRoutingsToBeApplied), e); diff --git a/core/src/main/java/org/elasticsearch/cluster/block/ClusterBlock.java b/core/src/main/java/org/elasticsearch/cluster/block/ClusterBlock.java index 5a7f8f7c0a934..9f4dfde7022f8 100644 --- a/core/src/main/java/org/elasticsearch/cluster/block/ClusterBlock.java +++ b/core/src/main/java/org/elasticsearch/cluster/block/ClusterBlock.java @@ -19,6 +19,7 @@ package org.elasticsearch.cluster.block; +import org.elasticsearch.Version; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; @@ -46,18 +47,22 @@ public class ClusterBlock implements Streamable, ToXContent { private boolean disableStatePersistence = false; + private boolean allowReleaseResources; + private RestStatus status; ClusterBlock() { } - public ClusterBlock(int id, String description, boolean retryable, boolean disableStatePersistence, RestStatus status, EnumSet levels) { + public ClusterBlock(int id, String description, boolean retryable, boolean disableStatePersistence, boolean allowReleaseResources, RestStatus status, + EnumSet levels) { this.id = id; this.description = description; this.retryable = retryable; this.disableStatePersistence = disableStatePersistence; this.status = status; this.levels = levels; + this.allowReleaseResources = allowReleaseResources; } public int id() { @@ -128,14 +133,19 @@ public void readFrom(StreamInput in) throws IOException { id = in.readVInt(); description = in.readString(); final int len = in.readVInt(); - ArrayList levels = new ArrayList<>(); + ArrayList levels = new ArrayList<>(len); 
for (int i = 0; i < len; i++) { - levels.add(ClusterBlockLevel.fromId(in.readVInt())); + levels.add(in.readEnum(ClusterBlockLevel.class)); } this.levels = EnumSet.copyOf(levels); retryable = in.readBoolean(); disableStatePersistence = in.readBoolean(); status = RestStatus.readFrom(in); + if (in.getVersion().onOrAfter(Version.V_5_5_0)) { + allowReleaseResources = in.readBoolean(); + } else { + allowReleaseResources = false; + } } @Override @@ -144,11 +154,14 @@ public void writeTo(StreamOutput out) throws IOException { out.writeString(description); out.writeVInt(levels.size()); for (ClusterBlockLevel level : levels) { - out.writeVInt(level.id()); + out.writeEnum(level); } out.writeBoolean(retryable); out.writeBoolean(disableStatePersistence); RestStatus.writeTo(out, status); + if (out.getVersion().onOrAfter(Version.V_5_5_0)) { + out.writeBoolean(allowReleaseResources); + } } @Override @@ -179,4 +192,8 @@ public boolean equals(Object o) { public int hashCode() { return id; } + + public boolean isAllowReleaseResources() { + return allowReleaseResources; + } } diff --git a/core/src/main/java/org/elasticsearch/cluster/block/ClusterBlockLevel.java b/core/src/main/java/org/elasticsearch/cluster/block/ClusterBlockLevel.java index 45ff1d3707bcf..127c5eef744c6 100644 --- a/core/src/main/java/org/elasticsearch/cluster/block/ClusterBlockLevel.java +++ b/core/src/main/java/org/elasticsearch/cluster/block/ClusterBlockLevel.java @@ -26,34 +26,11 @@ * */ public enum ClusterBlockLevel { - READ(0), - WRITE(1), - METADATA_READ(2), - METADATA_WRITE(3); + READ, + WRITE, + METADATA_READ, + METADATA_WRITE; - public static final EnumSet ALL = EnumSet.of(READ, WRITE, METADATA_READ, METADATA_WRITE); + public static final EnumSet ALL = EnumSet.allOf(ClusterBlockLevel.class); public static final EnumSet READ_WRITE = EnumSet.of(READ, WRITE); - - private final int id; - - ClusterBlockLevel(int id) { - this.id = id; - } - - public int id() { - return this.id; - } - - static ClusterBlockLevel fromId(int id) { - if (id == 0) { - return READ; - } else if (id == 1) { - return WRITE; - } else if (id == 2) { - return METADATA_READ; - } else if (id == 3) { - return METADATA_WRITE; - } - throw new IllegalArgumentException("No cluster block level matching [" + id + "]"); - } } diff --git a/core/src/main/java/org/elasticsearch/cluster/block/ClusterBlocks.java b/core/src/main/java/org/elasticsearch/cluster/block/ClusterBlocks.java index e6f04c8702cf4..9e05d50831882 100644 --- a/core/src/main/java/org/elasticsearch/cluster/block/ClusterBlocks.java +++ b/core/src/main/java/org/elasticsearch/cluster/block/ClusterBlocks.java @@ -21,6 +21,7 @@ import com.carrotsearch.hppc.cursors.ObjectObjectCursor; import org.elasticsearch.cluster.AbstractDiffable; +import org.elasticsearch.cluster.Diff; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.MetaDataIndexStateService; import org.elasticsearch.common.collect.ImmutableOpenMap; @@ -48,8 +49,6 @@ public class ClusterBlocks extends AbstractDiffable { public static final ClusterBlocks EMPTY_CLUSTER_BLOCK = new ClusterBlocks(emptySet(), ImmutableOpenMap.of()); - public static final ClusterBlocks PROTO = EMPTY_CLUSTER_BLOCK; - private final Set global; private final ImmutableOpenMap> indicesBlocks; @@ -59,23 +58,7 @@ public class ClusterBlocks extends AbstractDiffable { ClusterBlocks(Set global, ImmutableOpenMap> indicesBlocks) { this.global = global; this.indicesBlocks = indicesBlocks; - - levelHolders = new 
ImmutableLevelHolder[ClusterBlockLevel.values().length]; - for (final ClusterBlockLevel level : ClusterBlockLevel.values()) { - Predicate containsLevel = block -> block.contains(level); - Set newGlobal = unmodifiableSet(global.stream() - .filter(containsLevel) - .collect(toSet())); - - ImmutableOpenMap.Builder> indicesBuilder = ImmutableOpenMap.builder(); - for (ObjectObjectCursor> entry : indicesBlocks) { - indicesBuilder.put(entry.key, unmodifiableSet(entry.value.stream() - .filter(containsLevel) - .collect(toSet()))); - } - - levelHolders[level.id()] = new ImmutableLevelHolder(newGlobal, indicesBuilder.build()); - } + levelHolders = generateLevelHolders(global, indicesBlocks); } public Set global() { @@ -87,17 +70,38 @@ public ImmutableOpenMap> indices() { } public Set global(ClusterBlockLevel level) { - return levelHolders[level.id()].global(); + return levelHolders[level.ordinal()].global(); } public ImmutableOpenMap> indices(ClusterBlockLevel level) { - return levelHolders[level.id()].indices(); + return levelHolders[level.ordinal()].indices(); } private Set blocksForIndex(ClusterBlockLevel level, String index) { return indices(level).getOrDefault(index, emptySet()); } + private static ImmutableLevelHolder[] generateLevelHolders(Set global, + ImmutableOpenMap> indicesBlocks) { + ImmutableLevelHolder[] levelHolders = new ImmutableLevelHolder[ClusterBlockLevel.values().length]; + for (final ClusterBlockLevel level : ClusterBlockLevel.values()) { + Predicate containsLevel = block -> block.contains(level); + Set newGlobal = unmodifiableSet(global.stream() + .filter(containsLevel) + .collect(toSet())); + + ImmutableOpenMap.Builder> indicesBuilder = ImmutableOpenMap.builder(); + for (ObjectObjectCursor> entry : indicesBlocks) { + indicesBuilder.put(entry.key, unmodifiableSet(entry.value.stream() + .filter(containsLevel) + .collect(toSet()))); + } + + levelHolders[level.ordinal()] = new ImmutableLevelHolder(newGlobal, indicesBuilder.build()); + } + return levelHolders; + } + /** * Returns true if one of the global blocks has its disable state persistence flag set. */ @@ -199,7 +203,28 @@ public ClusterBlockException indicesBlockedException(ClusterBlockLevel level, St return new ClusterBlockException(unmodifiableSet(blocks.collect(toSet()))); } - public String prettyPrint() { + /** + * Returns true iff none of the given indices have a {@link ClusterBlockLevel#METADATA_WRITE} in place where the + * {@link ClusterBlock#isAllowReleaseResources()} returns false. This is used in places where resources will be released + * like the deletion of an index to free up resources on nodes.
+ * @param indices the indices to check + */ + public ClusterBlockException indicesAllowReleaseResources(String[] indices) { + final Function> blocksForIndexAtLevel = index -> + blocksForIndex(ClusterBlockLevel.METADATA_WRITE, index).stream(); + Stream blocks = concat( + global(ClusterBlockLevel.METADATA_WRITE).stream(), + Stream.of(indices).flatMap(blocksForIndexAtLevel)).filter(clusterBlock -> clusterBlock.isAllowReleaseResources() == false); + Set clusterBlocks = unmodifiableSet(blocks.collect(toSet())); + if (clusterBlocks.isEmpty()) { + return null; + } + return new ClusterBlockException(clusterBlocks); + } + + + @Override + public String toString() { if (global.isEmpty() && indices().isEmpty()) { return ""; } @@ -238,15 +263,16 @@ private static void writeBlockSet(Set blocks, StreamOutput out) th } } - @Override - public ClusterBlocks readFrom(StreamInput in) throws IOException { + public ClusterBlocks(StreamInput in) throws IOException { Set global = readBlockSet(in); int size = in.readVInt(); ImmutableOpenMap.Builder> indicesBuilder = ImmutableOpenMap.builder(size); for (int j = 0; j < size; j++) { indicesBuilder.put(in.readString().intern(), readBlockSet(in)); } - return new ClusterBlocks(global, indicesBuilder.build()); + this.global = global; + this.indicesBlocks = indicesBuilder.build(); + levelHolders = generateLevelHolders(global, indicesBlocks); } private static Set readBlockSet(StreamInput in) throws IOException { @@ -258,9 +284,11 @@ private static Set readBlockSet(StreamInput in) throws IOException return unmodifiableSet(blocks); } - static class ImmutableLevelHolder { + public static Diff readDiffFrom(StreamInput in) throws IOException { + return AbstractDiffable.readDiffFrom(ClusterBlocks::new, in); + } - static final ImmutableLevelHolder EMPTY = new ImmutableLevelHolder(emptySet(), ImmutableOpenMap.of()); + static class ImmutableLevelHolder { private final Set global; private final ImmutableOpenMap> indices; @@ -304,30 +332,31 @@ public Builder blocks(ClusterBlocks blocks) { } public Builder addBlocks(IndexMetaData indexMetaData) { + String indexName = indexMetaData.getIndex().getName(); if (indexMetaData.getState() == IndexMetaData.State.CLOSE) { - addIndexBlock(indexMetaData.getIndex().getName(), MetaDataIndexStateService.INDEX_CLOSED_BLOCK); + addIndexBlock(indexName, MetaDataIndexStateService.INDEX_CLOSED_BLOCK); } if (IndexMetaData.INDEX_READ_ONLY_SETTING.get(indexMetaData.getSettings())) { - addIndexBlock(indexMetaData.getIndex().getName(), IndexMetaData.INDEX_READ_ONLY_BLOCK); + addIndexBlock(indexName, IndexMetaData.INDEX_READ_ONLY_BLOCK); } if (IndexMetaData.INDEX_BLOCKS_READ_SETTING.get(indexMetaData.getSettings())) { - addIndexBlock(indexMetaData.getIndex().getName(), IndexMetaData.INDEX_READ_BLOCK); + addIndexBlock(indexName, IndexMetaData.INDEX_READ_BLOCK); } if (IndexMetaData.INDEX_BLOCKS_WRITE_SETTING.get(indexMetaData.getSettings())) { - addIndexBlock(indexMetaData.getIndex().getName(), IndexMetaData.INDEX_WRITE_BLOCK); + addIndexBlock(indexName, IndexMetaData.INDEX_WRITE_BLOCK); } if (IndexMetaData.INDEX_BLOCKS_METADATA_SETTING.get(indexMetaData.getSettings())) { - addIndexBlock(indexMetaData.getIndex().getName(), IndexMetaData.INDEX_METADATA_BLOCK); + addIndexBlock(indexName, IndexMetaData.INDEX_METADATA_BLOCK); + } + if (IndexMetaData.INDEX_BLOCKS_READ_ONLY_ALLOW_DELETE_SETTING.get(indexMetaData.getSettings())) { + addIndexBlock(indexName, IndexMetaData.INDEX_READ_ONLY_ALLOW_DELETE_BLOCK); } return this; } public Builder updateBlocks(IndexMetaData 
indexMetaData) { - removeIndexBlock(indexMetaData.getIndex().getName(), MetaDataIndexStateService.INDEX_CLOSED_BLOCK); - removeIndexBlock(indexMetaData.getIndex().getName(), IndexMetaData.INDEX_READ_ONLY_BLOCK); - removeIndexBlock(indexMetaData.getIndex().getName(), IndexMetaData.INDEX_READ_BLOCK); - removeIndexBlock(indexMetaData.getIndex().getName(), IndexMetaData.INDEX_WRITE_BLOCK); - removeIndexBlock(indexMetaData.getIndex().getName(), IndexMetaData.INDEX_METADATA_BLOCK); + // let's remove all blocks for this index and add them back -- no need to remove all individual blocks.... + indices.remove(indexMetaData.getIndex().getName()); return addBlocks(indexMetaData); } @@ -382,9 +411,5 @@ public ClusterBlocks build() { } return new ClusterBlocks(unmodifiableSet(new HashSet<>(global)), indicesBuilder.build()); } - - public static ClusterBlocks readClusterBlocks(StreamInput in) throws IOException { - return PROTO.readFrom(in); - } } } diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/AliasMetaData.java b/core/src/main/java/org/elasticsearch/cluster/metadata/AliasMetaData.java index 7e4d19174852a..f8c360bed39f2 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/AliasMetaData.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/AliasMetaData.java @@ -21,13 +21,16 @@ import org.elasticsearch.ElasticsearchGenerationException; import org.elasticsearch.cluster.AbstractDiffable; +import org.elasticsearch.cluster.Diff; import org.elasticsearch.common.Strings; +import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.compress.CompressedXContent; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; import java.io.IOException; @@ -42,8 +45,6 @@ */ public class AliasMetaData extends AbstractDiffable { - public static final AliasMetaData PROTO = new AliasMetaData("", null, null, null); - private final String alias; private final CompressedXContent filter; @@ -174,22 +175,29 @@ public void writeTo(StreamOutput out) throws IOException { } - @Override - public AliasMetaData readFrom(StreamInput in) throws IOException { - String alias = in.readString(); - CompressedXContent filter = null; + public AliasMetaData(StreamInput in) throws IOException { + alias = in.readString(); if (in.readBoolean()) { filter = CompressedXContent.readCompressedString(in); + } else { + filter = null; } - String indexRouting = null; if (in.readBoolean()) { indexRouting = in.readString(); + } else { + indexRouting = null; } - String searchRouting = null; if (in.readBoolean()) { searchRouting = in.readString(); + searchRoutingValues = Collections.unmodifiableSet(Strings.splitStringByCommaToSet(searchRouting)); + } else { + searchRouting = null; + searchRoutingValues = emptySet(); } - return new AliasMetaData(alias, filter, indexRouting, searchRouting); + } + + public static Diff readDiffFrom(StreamInput in) throws IOException { + return readDiffFrom(AliasMetaData::new, in); } public static class Builder { @@ -228,14 +236,7 @@ public Builder filter(String filter) { this.filter = null; return this; } - try { - try (XContentParser parser = XContentFactory.xContent(filter).createParser(filter)) { - filter(parser.mapOrdered()); - } - 
return this; - } catch (IOException e) { - throw new ElasticsearchGenerationException("Failed to generate [" + filter + "]", e); - } + return filter(XContentHelper.convertToMap(XContentFactory.xContent(filter), filter, true)); } public Builder filter(Map filter) { @@ -289,11 +290,7 @@ public static void toXContent(AliasMetaData aliasMetaData, XContentBuilder build if (binary) { builder.field("filter", aliasMetaData.filter.compressed()); } else { - byte[] data = aliasMetaData.filter().uncompressed(); - try (XContentParser parser = XContentFactory.xContent(data).createParser(data)) { - Map filter = parser.mapOrdered(); - builder.field("filter", filter); - } + builder.field("filter", XContentHelper.convertToMap(new BytesArray(aliasMetaData.filter().uncompressed()), true).v2()); } } if (aliasMetaData.indexRouting() != null) { @@ -339,14 +336,6 @@ public static AliasMetaData fromXContent(XContentParser parser) throws IOExcepti } return builder.build(); } - - public void writeTo(AliasMetaData aliasMetaData, StreamOutput out) throws IOException { - aliasMetaData.writeTo(out); - } - - public static AliasMetaData readFrom(StreamInput in) throws IOException { - return PROTO.readFrom(in); - } } } diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/AliasOrIndex.java b/core/src/main/java/org/elasticsearch/cluster/metadata/AliasOrIndex.java index 4ad9b7e5317e7..786bd9af78a4c 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/AliasOrIndex.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/AliasOrIndex.java @@ -121,7 +121,7 @@ public Tuple next() { } @Override - public final void remove() { + public void remove() { throw new UnsupportedOperationException(); } diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/AliasValidator.java b/core/src/main/java/org/elasticsearch/cluster/metadata/AliasValidator.java index cb46b22fe7e43..1a28117cf70c2 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/AliasValidator.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/AliasValidator.java @@ -25,7 +25,9 @@ import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryParseContext; @@ -75,8 +77,8 @@ public void validateAliasMetaData(AliasMetaData aliasMetaData, String index, Met public void validateAliasStandalone(Alias alias) { validateAliasStandalone(alias.name(), alias.indexRouting()); if (Strings.hasLength(alias.filter())) { - try (XContentParser parser = XContentFactory.xContent(alias.filter()).createParser(alias.filter())) { - parser.map(); + try { + XContentHelper.convertToMap(XContentFactory.xContent(alias.filter()), alias.filter(), false); } catch (Exception e) { throw new IllegalArgumentException("failed to parse filter for alias [" + alias.name() + "]", e); } @@ -99,10 +101,11 @@ public void validateAlias(String alias, String index, @Nullable String indexRout } } - private void validateAliasStandalone(String alias, String indexRouting) { + void validateAliasStandalone(String alias, String indexRouting) { if (!Strings.hasText(alias)) { throw new IllegalArgumentException("alias name is required"); } + 
MetaDataCreateIndexService.validateIndexOrAliasName(alias, InvalidAliasNameException::new); if (indexRouting != null && indexRouting.indexOf(',') != -1) { throw new IllegalArgumentException("alias [" + alias + "] has several index routing values associated with it"); } @@ -113,9 +116,10 @@ private void validateAliasStandalone(String alias, String indexRouting) { * provided {@link org.elasticsearch.index.query.QueryShardContext} * @throws IllegalArgumentException if the filter is not valid */ - public void validateAliasFilter(String alias, String filter, QueryShardContext queryShardContext) { + public void validateAliasFilter(String alias, String filter, QueryShardContext queryShardContext, + NamedXContentRegistry xContentRegistry) { assert queryShardContext != null; - try (XContentParser parser = XContentFactory.xContent(filter).createParser(filter)) { + try (XContentParser parser = XContentFactory.xContent(filter).createParser(xContentRegistry, filter)) { validateAliasFilter(parser, queryShardContext); } catch (Exception e) { throw new IllegalArgumentException("failed to parse filter for alias [" + alias + "]", e); @@ -127,9 +131,10 @@ public void validateAliasFilter(String alias, String filter, QueryShardContext q * provided {@link org.elasticsearch.index.query.QueryShardContext} * @throws IllegalArgumentException if the filter is not valid */ - public void validateAliasFilter(String alias, byte[] filter, QueryShardContext queryShardContext) { + public void validateAliasFilter(String alias, byte[] filter, QueryShardContext queryShardContext, + NamedXContentRegistry xContentRegistry) { assert queryShardContext != null; - try (XContentParser parser = XContentFactory.xContent(filter).createParser(filter)) { + try (XContentParser parser = XContentFactory.xContent(filter).createParser(xContentRegistry, filter)) { validateAliasFilter(parser, queryShardContext); } catch (Exception e) { throw new IllegalArgumentException("failed to parse filter for alias [" + alias + "]", e); diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/AutoExpandReplicas.java b/core/src/main/java/org/elasticsearch/cluster/metadata/AutoExpandReplicas.java index 4b4a8e54d7c6c..0031aaf19de02 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/AutoExpandReplicas.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/AutoExpandReplicas.java @@ -19,6 +19,8 @@ package org.elasticsearch.cluster.metadata; import org.elasticsearch.common.Booleans; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; @@ -28,12 +30,17 @@ * based on the number of datanodes in the cluster. This class handles all the parsing and streamlines the access to these values. 
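
The class Javadoc above says `AutoExpandReplicas` handles all parsing of the `index.auto_expand_replicas` value. As a rough, standalone sketch of that value format, a lower and an upper bound separated by a dash where the upper bound may be the literal `all` (the class and constant names below are invented for illustration; this is not the Elasticsearch parser itself):

```java
// Standalone illustration of the "min-max" format described above (e.g. "0-1", "0-5", "0-all").
// Simplified sketch only, not the AutoExpandReplicas implementation.
final class AutoExpandReplicasSketch {
    static final int DISABLED = -1;

    /** Returns {min, max}, or {DISABLED, DISABLED} when the value is "false". */
    static int[] parse(String value) {
        if ("false".equals(value)) {
            return new int[] {DISABLED, DISABLED}; // auto-expansion disabled
        }
        int dash = value.indexOf('-');
        if (dash < 0) {
            throw new IllegalArgumentException("expected [min-max] but got [" + value + "]");
        }
        int min = Integer.parseInt(value.substring(0, dash));
        String upper = value.substring(dash + 1);
        int max = "all".equals(upper) ? Integer.MAX_VALUE : Integer.parseInt(upper);
        return new int[] {min, max}; // the effective replica count is later derived from the data node count
    }
}
```

The diff itself only tightens the boolean case: an explicit `false` disables expansion, and non-strict spellings of false now log a deprecation warning via `DeprecationLogger`.
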
*/ final class AutoExpandReplicas { + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(AutoExpandReplicas.class)); + // the value we recognize in the "max" position to mean all the nodes private static final String ALL_NODES_VALUE = "all"; public static final Setting SETTING = new Setting<>(IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, "false", (value) -> { final int min; final int max; - if (Booleans.parseBoolean(value, true) == false) { + if (Booleans.isExplicitFalse(value)) { + if (Booleans.isStrictlyBoolean(value) == false) { + DEPRECATION_LOGGER.deprecated("Expected [false] for setting [{}] but got [{}]", IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, value); + } return new AutoExpandReplicas(0, 0, false); } final int dash = value.indexOf('-'); diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/ClusterNameExpressionResolver.java b/core/src/main/java/org/elasticsearch/cluster/metadata/ClusterNameExpressionResolver.java new file mode 100644 index 0000000000000..2032c2f4ef3ba --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/ClusterNameExpressionResolver.java @@ -0,0 +1,100 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.cluster.metadata; + +import org.elasticsearch.common.component.AbstractComponent; +import org.elasticsearch.common.regex.Regex; +import org.elasticsearch.common.settings.Settings; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Set; +import java.util.stream.Collectors; + +/** + * Resolves cluster names from an expression. The expression must be the exact match of a cluster + * name or must be a wildcard expression. + */ +public final class ClusterNameExpressionResolver extends AbstractComponent { + + private final WildcardExpressionResolver wildcardResolver = new WildcardExpressionResolver(); + + public ClusterNameExpressionResolver(Settings settings) { + super(settings); + } + + /** + * Resolves the provided cluster expression to matching cluster names. This method only + * supports exact or wildcard matches. + * + * @param remoteClusters the aliases for remote clusters + * @param clusterExpression the expressions that can be resolved to cluster names. + * @return the resolved cluster aliases. 
+ */ + public List resolveClusterNames(Set remoteClusters, String clusterExpression) { + if (remoteClusters.contains(clusterExpression)) { + return Collections.singletonList(clusterExpression); + } else if (Regex.isSimpleMatchPattern(clusterExpression)) { + return wildcardResolver.resolve(remoteClusters, clusterExpression); + } else { + return Collections.emptyList(); + } + } + + private static class WildcardExpressionResolver { + + private List resolve(Set remoteClusters, String clusterExpression) { + if (isTrivialWildcard(clusterExpression)) { + return resolveTrivialWildcard(remoteClusters); + } + + Set matches = matches(remoteClusters, clusterExpression); + if (matches.isEmpty()) { + return Collections.emptyList(); + } else { + return new ArrayList<>(matches); + } + } + + private boolean isTrivialWildcard(String clusterExpression) { + return Regex.isMatchAllPattern(clusterExpression); + } + + private List resolveTrivialWildcard(Set remoteClusters) { + return new ArrayList<>(remoteClusters); + } + + private static Set matches(Set remoteClusters, String expression) { + if (expression.indexOf("*") == expression.length() - 1) { + return otherWildcard(remoteClusters, expression); + } else { + return otherWildcard(remoteClusters, expression); + } + } + + private static Set otherWildcard(Set remoteClusters, String expression) { + final String pattern = expression; + return remoteClusters.stream() + .filter(n -> Regex.simpleMatch(pattern, n)) + .collect(Collectors.toSet()); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexGraveyard.java b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexGraveyard.java index f6a54fc82d72f..d60617ea6423a 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexGraveyard.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexGraveyard.java @@ -20,15 +20,15 @@ package org.elasticsearch.cluster.metadata; import org.elasticsearch.cluster.Diff; +import org.elasticsearch.cluster.NamedDiff; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.joda.Joda; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.ContextParser; import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -43,7 +43,6 @@ import java.util.List; import java.util.Objects; import java.util.concurrent.TimeUnit; -import java.util.function.BiFunction; /** * A collection of tombstones for explicitly marking indices as deleted in the cluster state. 
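The resolver added above only supports exact matches and wildcard patterns, as its javadoc states. A minimal usage sketch under that contract, assuming the new class is on the classpath; the remote cluster aliases are made up for illustration:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.elasticsearch.cluster.metadata.ClusterNameExpressionResolver;
import org.elasticsearch.common.settings.Settings;

public class ClusterNameResolutionSketch {
    public static void main(String[] args) {
        ClusterNameExpressionResolver resolver = new ClusterNameExpressionResolver(Settings.EMPTY);
        Set<String> remotes = new HashSet<>(Arrays.asList("cluster1", "cluster2", "other"));

        List<String> exact = resolver.resolveClusterNames(remotes, "cluster1");    // exact match: [cluster1]
        List<String> wildcard = resolver.resolveClusterNames(remotes, "cluster*"); // matches cluster1 and cluster2 (order unspecified)
        List<String> none = resolver.resolveClusterNames(remotes, "unknown");      // neither exact nor a pattern: empty list
        System.out.println(exact + " / " + wildcard + " / " + none);
    }
}
```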
@@ -67,10 +66,9 @@ public final class IndexGraveyard implements MetaData.Custom { 500, // the default maximum number of tombstones Setting.Property.NodeScope); - public static final IndexGraveyard PROTO = new IndexGraveyard(new ArrayList<>()); public static final String TYPE = "index-graveyard"; private static final ParseField TOMBSTONES_FIELD = new ParseField("tombstones"); - private static final ObjectParser, ParseFieldMatcherSupplier> GRAVEYARD_PARSER; + private static final ObjectParser, Void> GRAVEYARD_PARSER; static { GRAVEYARD_PARSER = new ObjectParser<>("index_graveyard", ArrayList::new); GRAVEYARD_PARSER.declareObjectArray(List::addAll, Tombstone.getParser(), TOMBSTONES_FIELD); @@ -83,7 +81,7 @@ private IndexGraveyard(final List list) { tombstones = Collections.unmodifiableList(list); } - private IndexGraveyard(final StreamInput in) throws IOException { + public IndexGraveyard(final StreamInput in) throws IOException { final int queueSize = in.readVInt(); List tombstones = new ArrayList<>(queueSize); for (int i = 0; i < queueSize; i++) { @@ -92,12 +90,8 @@ private IndexGraveyard(final StreamInput in) throws IOException { this.tombstones = Collections.unmodifiableList(tombstones); } - public static IndexGraveyard fromStream(final StreamInput in) throws IOException { - return new IndexGraveyard(in); - } - @Override - public String type() { + public String getWriteableName() { return TYPE; } @@ -144,8 +138,8 @@ public XContentBuilder toXContent(final XContentBuilder builder, final Params pa return builder.endArray(); } - public IndexGraveyard fromXContent(final XContentParser parser) throws IOException { - return new IndexGraveyard(GRAVEYARD_PARSER.parse(parser, () -> ParseFieldMatcher.STRICT)); + public static IndexGraveyard fromXContent(final XContentParser parser) throws IOException { + return new IndexGraveyard(GRAVEYARD_PARSER.parse(parser, null)); } @Override @@ -161,19 +155,13 @@ public void writeTo(final StreamOutput out) throws IOException { } } - @Override - public IndexGraveyard readFrom(final StreamInput in) throws IOException { - return new IndexGraveyard(in); - } - @Override @SuppressWarnings("unchecked") public Diff diff(final MetaData.Custom previous) { return new IndexGraveyardDiff((IndexGraveyard) previous, this); } - @Override - public Diff readDiffFrom(final StreamInput in) throws IOException { + public static NamedDiff readDiffFrom(final StreamInput in) throws IOException { return new IndexGraveyardDiff(in); } @@ -273,7 +261,7 @@ public IndexGraveyard build(final Settings settings) { /** * A class representing a diff of two IndexGraveyard objects. 
*/ - public static final class IndexGraveyardDiff implements Diff { + public static final class IndexGraveyardDiff implements NamedDiff { private final List added; private final int removedCount; @@ -349,6 +337,11 @@ public List getAdded() { public int getRemovedCount() { return removedCount; } + + @Override + public String getWriteableName() { + return TYPE; + } } /** @@ -359,16 +352,17 @@ public static final class Tombstone implements ToXContent, Writeable { private static final String INDEX_KEY = "index"; private static final String DELETE_DATE_IN_MILLIS_KEY = "delete_date_in_millis"; private static final String DELETE_DATE_KEY = "delete_date"; - private static final ObjectParser TOMBSTONE_PARSER; + private static final ObjectParser TOMBSTONE_PARSER; static { TOMBSTONE_PARSER = new ObjectParser<>("tombstoneEntry", Tombstone.Builder::new); - TOMBSTONE_PARSER.declareObject(Tombstone.Builder::index, Index::parseIndex, new ParseField(INDEX_KEY)); + TOMBSTONE_PARSER.declareObject(Tombstone.Builder::index, (parser, context) -> Index.fromXContent(parser), + new ParseField(INDEX_KEY)); TOMBSTONE_PARSER.declareLong(Tombstone.Builder::deleteDateInMillis, new ParseField(DELETE_DATE_IN_MILLIS_KEY)); TOMBSTONE_PARSER.declareString((b, s) -> {}, new ParseField(DELETE_DATE_KEY)); } - static BiFunction getParser() { - return (p, c) -> TOMBSTONE_PARSER.apply(p, c).build(); + static ContextParser getParser() { + return (parser, context) -> TOMBSTONE_PARSER.apply(parser, null).build(); } private final Index index; @@ -443,7 +437,7 @@ public XContentBuilder toXContent(final XContentBuilder builder, final Params pa } public static Tombstone fromXContent(final XContentParser parser) throws IOException { - return TOMBSTONE_PARSER.parse(parser, () -> ParseFieldMatcher.STRICT).build(); + return TOMBSTONE_PARSER.parse(parser, null).build(); } /** diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java index 4ab3b85e46ac3..57a2876128342 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java @@ -33,7 +33,7 @@ import org.elasticsearch.cluster.node.DiscoveryNodeFilters; import org.elasticsearch.cluster.routing.allocation.IndexMetaDataUpdater; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.ParseFieldMatcher; +import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.collect.ImmutableOpenIntMap; import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.collect.MapBuilder; @@ -44,10 +44,10 @@ import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.loader.SettingsLoader; -import org.elasticsearch.common.xcontent.FromXContentBuilder; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.gateway.MetaDataStateFormat; @@ -59,7 +59,6 @@ import org.joda.time.DateTimeZone; import java.io.IOException; -import java.text.ParseException; import java.util.Arrays; import java.util.Collections; import java.util.EnumSet; @@ -70,16 +69,18 @@ 
import java.util.Set; import java.util.function.Function; +import static org.elasticsearch.cluster.node.DiscoveryNodeFilters.IP_VALIDATOR; import static org.elasticsearch.cluster.node.DiscoveryNodeFilters.OpType.AND; import static org.elasticsearch.cluster.node.DiscoveryNodeFilters.OpType.OR; import static org.elasticsearch.common.settings.Settings.readSettingsFromStream; import static org.elasticsearch.common.settings.Settings.writeSettingsToStream; -/** - * - */ -public class IndexMetaData implements Diffable, FromXContentBuilder, ToXContent { +public class IndexMetaData implements Diffable, ToXContent { + /** + * This class will be removed in v7.0 + */ + @Deprecated public interface Custom extends Diffable, ToXContent { String type(); @@ -88,6 +89,16 @@ public interface Custom extends Diffable, ToXContent { Custom fromXContent(XContentParser parser) throws IOException; + /** + * Reads the {@link org.elasticsearch.cluster.Diff} from StreamInput + */ + Diff readDiffFrom(StreamInput in) throws IOException; + + /** + * Reads an object of this type from the provided {@linkplain StreamInput}. The receiving instance remains unchanged. + */ + Custom readFrom(StreamInput in) throws IOException; + /** * Merges from this to another, with this being more important, i.e., if something exists in this and another, * this will prevail. @@ -119,12 +130,13 @@ public static T lookupPrototypeSafe(String type) { return proto; } - public static final ClusterBlock INDEX_READ_ONLY_BLOCK = new ClusterBlock(5, "index read-only (api)", false, false, RestStatus.FORBIDDEN, EnumSet.of(ClusterBlockLevel.WRITE, ClusterBlockLevel.METADATA_WRITE)); - public static final ClusterBlock INDEX_READ_BLOCK = new ClusterBlock(7, "index read (api)", false, false, RestStatus.FORBIDDEN, EnumSet.of(ClusterBlockLevel.READ)); - public static final ClusterBlock INDEX_WRITE_BLOCK = new ClusterBlock(8, "index write (api)", false, false, RestStatus.FORBIDDEN, EnumSet.of(ClusterBlockLevel.WRITE)); - public static final ClusterBlock INDEX_METADATA_BLOCK = new ClusterBlock(9, "index metadata (api)", false, false, RestStatus.FORBIDDEN, EnumSet.of(ClusterBlockLevel.METADATA_WRITE, ClusterBlockLevel.METADATA_READ)); + public static final ClusterBlock INDEX_READ_ONLY_BLOCK = new ClusterBlock(5, "index read-only (api)", false, false, false, RestStatus.FORBIDDEN, EnumSet.of(ClusterBlockLevel.WRITE, ClusterBlockLevel.METADATA_WRITE)); + public static final ClusterBlock INDEX_READ_BLOCK = new ClusterBlock(7, "index read (api)", false, false, false, RestStatus.FORBIDDEN, EnumSet.of(ClusterBlockLevel.READ)); + public static final ClusterBlock INDEX_WRITE_BLOCK = new ClusterBlock(8, "index write (api)", false, false, false, RestStatus.FORBIDDEN, EnumSet.of(ClusterBlockLevel.WRITE)); + public static final ClusterBlock INDEX_METADATA_BLOCK = new ClusterBlock(9, "index metadata (api)", false, false, false, RestStatus.FORBIDDEN, EnumSet.of(ClusterBlockLevel.METADATA_WRITE, ClusterBlockLevel.METADATA_READ)); + public static final ClusterBlock INDEX_READ_ONLY_ALLOW_DELETE_BLOCK = new ClusterBlock(12, "index read-only / allow delete (api)", false, false, true, RestStatus.FORBIDDEN, EnumSet.of(ClusterBlockLevel.METADATA_WRITE, ClusterBlockLevel.WRITE)); - public static enum State { + public enum State { OPEN((byte) 0), CLOSE((byte) 1); @@ -157,20 +169,37 @@ public static State fromString(String state) { } } + static Setting buildNumberOfShardsSetting() { + /* This is a safety limit that should only be exceeded in very rare and special cases. 
The assumption is that + * 99% of the users have less than 1024 shards per index. We also make it a hard check that requires restart of nodes + * if a cluster should allow to create more than 1024 shards per index. NOTE: this does not limit the number of shards per cluster. + * this also prevents creating stuff like a new index with millions of shards by accident which essentially kills the entire cluster + * with OOM on the spot.*/ + final int maxNumShards = Integer.parseInt(System.getProperty("es.index.max_number_of_shards", "1024")); + if (maxNumShards < 1) { + throw new IllegalArgumentException("es.index.max_number_of_shards must be > 0"); + } + return Setting.intSetting(SETTING_NUMBER_OF_SHARDS, Math.min(5, maxNumShards), 1, maxNumShards, + Property.IndexScope, Property.Final); + } + public static final String INDEX_SETTING_PREFIX = "index."; public static final String SETTING_NUMBER_OF_SHARDS = "index.number_of_shards"; - public static final Setting INDEX_NUMBER_OF_SHARDS_SETTING = - Setting.intSetting(SETTING_NUMBER_OF_SHARDS, 5, 1, Property.IndexScope); + public static final Setting INDEX_NUMBER_OF_SHARDS_SETTING = buildNumberOfShardsSetting(); public static final String SETTING_NUMBER_OF_REPLICAS = "index.number_of_replicas"; public static final Setting INDEX_NUMBER_OF_REPLICAS_SETTING = Setting.intSetting(SETTING_NUMBER_OF_REPLICAS, 1, 0, Property.Dynamic, Property.IndexScope); public static final String SETTING_SHADOW_REPLICAS = "index.shadow_replicas"; public static final Setting INDEX_SHADOW_REPLICAS_SETTING = - Setting.boolSetting(SETTING_SHADOW_REPLICAS, false, Property.IndexScope); + Setting.boolSetting(SETTING_SHADOW_REPLICAS, false, Property.IndexScope, Property.Deprecated); + + public static final String SETTING_ROUTING_PARTITION_SIZE = "index.routing_partition_size"; + public static final Setting INDEX_ROUTING_PARTITION_SIZE_SETTING = + Setting.intSetting(SETTING_ROUTING_PARTITION_SIZE, 1, 1, Property.IndexScope); public static final String SETTING_SHARED_FILESYSTEM = "index.shared_filesystem"; public static final Setting INDEX_SHARED_FILESYSTEM_SETTING = - Setting.boolSetting(SETTING_SHARED_FILESYSTEM, false, Property.IndexScope); + Setting.boolSetting(SETTING_SHARED_FILESYSTEM, INDEX_SHADOW_REPLICAS_SETTING, Property.IndexScope, Property.Deprecated); public static final String SETTING_AUTO_EXPAND_REPLICAS = "index.auto_expand_replicas"; public static final Setting INDEX_AUTO_EXPAND_REPLICAS_SETTING = AutoExpandReplicas.SETTING; @@ -190,12 +219,20 @@ public static State fromString(String state) { public static final Setting INDEX_BLOCKS_METADATA_SETTING = Setting.boolSetting(SETTING_BLOCKS_METADATA, false, Property.Dynamic, Property.IndexScope); + public static final String SETTING_READ_ONLY_ALLOW_DELETE = "index.blocks.read_only_allow_delete"; + public static final Setting INDEX_BLOCKS_READ_ONLY_ALLOW_DELETE_SETTING = + Setting.boolSetting(SETTING_READ_ONLY_ALLOW_DELETE, false, Property.Dynamic, Property.IndexScope); + public static final String SETTING_VERSION_CREATED = "index.version.created"; public static final String SETTING_VERSION_CREATED_STRING = "index.version.created_string"; public static final String SETTING_VERSION_UPGRADED = "index.version.upgraded"; public static final String SETTING_VERSION_UPGRADED_STRING = "index.version.upgraded_string"; - public static final String SETTING_VERSION_MINIMUM_COMPATIBLE = "index.version.minimum_compatible"; public static final String SETTING_CREATION_DATE = "index.creation_date"; + /** + * The user provided name for an 
index. This is the plain string provided by the user when the index was created. + * It might still contain date math expressions etc. (added in 5.0) + */ + public static final String SETTING_INDEX_PROVIDED_NAME = "index.provided_name"; public static final String SETTING_PRIORITY = "index.priority"; public static final Setting INDEX_PRIORITY_SETTING = Setting.intSetting("index.priority", 1, 0, Property.Dynamic, Property.IndexScope); @@ -206,15 +243,19 @@ public static State fromString(String state) { new Setting<>(SETTING_DATA_PATH, "", Function.identity(), Property.IndexScope); public static final String SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE = "index.shared_filesystem.recover_on_any_node"; public static final Setting INDEX_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE_SETTING = - Setting.boolSetting(SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false, Property.Dynamic, Property.IndexScope); + Setting.boolSetting(SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false, + Property.Dynamic, Property.IndexScope, Property.Deprecated); public static final String INDEX_UUID_NA_VALUE = "_na_"; + public static final String INDEX_ROUTING_REQUIRE_GROUP_PREFIX = "index.routing.allocation.require"; + public static final String INDEX_ROUTING_INCLUDE_GROUP_PREFIX = "index.routing.allocation.include"; + public static final String INDEX_ROUTING_EXCLUDE_GROUP_PREFIX = "index.routing.allocation.exclude"; public static final Setting INDEX_ROUTING_REQUIRE_GROUP_SETTING = - Setting.groupSetting("index.routing.allocation.require.", Property.Dynamic, Property.IndexScope); + Setting.groupSetting(INDEX_ROUTING_REQUIRE_GROUP_PREFIX + ".", IP_VALIDATOR, Property.Dynamic, Property.IndexScope); public static final Setting INDEX_ROUTING_INCLUDE_GROUP_SETTING = - Setting.groupSetting("index.routing.allocation.include.", Property.Dynamic, Property.IndexScope); + Setting.groupSetting(INDEX_ROUTING_INCLUDE_GROUP_PREFIX + ".", IP_VALIDATOR, Property.Dynamic, Property.IndexScope); public static final Setting INDEX_ROUTING_EXCLUDE_GROUP_SETTING = - Setting.groupSetting("index.routing.allocation.exclude.", Property.Dynamic, Property.IndexScope); + Setting.groupSetting(INDEX_ROUTING_EXCLUDE_GROUP_PREFIX + ".", IP_VALIDATOR, Property.Dynamic, Property.IndexScope); public static final Setting INDEX_ROUTING_INITIAL_RECOVERY_GROUP_SETTING = Setting.groupSetting("index.routing.allocation.initial_recovery."); // this is only setable internally not a registered setting!! 
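The comment above describes a hard ceiling on shards per index driven by a system property. A minimal sketch of that derivation, reusing the property name, the default ceiling of 1024 and the default shard count of 5 from the hunk; the standalone main() is for illustration only:

```java
public class MaxShardsCeilingSketch {
    public static void main(String[] args) {
        final int maxNumShards = Integer.parseInt(System.getProperty("es.index.max_number_of_shards", "1024"));
        if (maxNumShards < 1) {
            throw new IllegalArgumentException("es.index.max_number_of_shards must be > 0");
        }
        // index.number_of_shards keeps its default of 5 unless the ceiling is lower
        final int defaultNumberOfShards = Math.min(5, maxNumShards);
        System.out.println("ceiling=" + maxNumShards + ", default=" + defaultNumberOfShards);
    }
}
```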
@@ -228,9 +269,12 @@ public static State fromString(String state) { Setting.Property.Dynamic, Setting.Property.IndexScope); - public static final IndexMetaData PROTO = IndexMetaData.builder("") - .settings(Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)) - .numberOfShards(1).numberOfReplicas(0).build(); + /** + * an internal index format description, allowing us to find out if this index is upgraded or needs upgrading + */ + private static final String INDEX_FORMAT = "index.format"; + public static final Setting INDEX_FORMAT_SETTING = + Setting.intSetting(INDEX_FORMAT, 0, Setting.Property.IndexScope, Setting.Property.Final); public static final String KEY_IN_SYNC_ALLOCATIONS = "in_sync_allocations"; static final String KEY_VERSION = "version"; @@ -244,6 +288,7 @@ public static State fromString(String state) { public static final String INDEX_STATE_FILE_PREFIX = "state-"; private final int routingNumShards; private final int routingFactor; + private final int routingPartitionSize; private final int numberOfShards; private final int numberOfReplicas; @@ -273,7 +318,6 @@ public static State fromString(String state) { private final Version indexCreatedVersion; private final Version indexUpgradedVersion; - private final org.apache.lucene.util.Version minimumCompatibleLuceneVersion; private final ActiveShardCount waitForActiveShards; @@ -281,8 +325,8 @@ private IndexMetaData(Index index, long version, long[] primaryTerms, State stat ImmutableOpenMap mappings, ImmutableOpenMap aliases, ImmutableOpenMap customs, ImmutableOpenIntMap> inSyncAllocationIds, DiscoveryNodeFilters requireFilters, DiscoveryNodeFilters initialRecoveryFilters, DiscoveryNodeFilters includeFilters, DiscoveryNodeFilters excludeFilters, - Version indexCreatedVersion, Version indexUpgradedVersion, org.apache.lucene.util.Version minimumCompatibleLuceneVersion, - int routingNumShards, ActiveShardCount waitForActiveShards) { + Version indexCreatedVersion, Version indexUpgradedVersion, + int routingNumShards, int routingPartitionSize, ActiveShardCount waitForActiveShards) { this.index = index; this.version = version; @@ -303,9 +347,9 @@ private IndexMetaData(Index index, long version, long[] primaryTerms, State stat this.initialRecoveryFilters = initialRecoveryFilters; this.indexCreatedVersion = indexCreatedVersion; this.indexUpgradedVersion = indexUpgradedVersion; - this.minimumCompatibleLuceneVersion = minimumCompatibleLuceneVersion; this.routingNumShards = routingNumShards; this.routingFactor = routingNumShards / numberOfShards; + this.routingPartitionSize = routingPartitionSize; this.waitForActiveShards = waitForActiveShards; assert numberOfShards * routingFactor == routingNumShards : routingNumShards + " must be a multiple of " + numberOfShards; } @@ -362,13 +406,6 @@ public Version getUpgradedVersion() { return indexUpgradedVersion; } - /** - * Return the {@link org.apache.lucene.util.Version} of the oldest lucene segment in the index - */ - public org.apache.lucene.util.Version getMinimumCompatibleVersion() { - return minimumCompatibleLuceneVersion; - } - public long getCreationDate() { return settings.getAsLong(SETTING_CREATION_DATE, -1L); } @@ -385,6 +422,14 @@ public int getNumberOfReplicas() { return numberOfReplicas; } + public int getRoutingPartitionSize() { + return routingPartitionSize; + } + + public boolean isRoutingPartitionedIndex() { + return routingPartitionSize != 1; + } + public int getTotalNumberOfShards() { return totalNumberOfShards; } @@ -414,12 +459,14 @@ public 
MappingMetaData mapping(String mappingType) { return mappings.get(mappingType); } - public static final Setting INDEX_SHRINK_SOURCE_UUID = Setting.simpleString("index.shrink.source.uuid"); - public static final Setting INDEX_SHRINK_SOURCE_NAME = Setting.simpleString("index.shrink.source.name"); + public static final String INDEX_SHRINK_SOURCE_UUID_KEY = "index.shrink.source.uuid"; + public static final String INDEX_SHRINK_SOURCE_NAME_KEY = "index.shrink.source.name"; + public static final Setting INDEX_SHRINK_SOURCE_UUID = Setting.simpleString(INDEX_SHRINK_SOURCE_UUID_KEY); + public static final Setting INDEX_SHRINK_SOURCE_NAME = Setting.simpleString(INDEX_SHRINK_SOURCE_NAME_KEY); public Index getMergeSourceIndex() { - return INDEX_SHRINK_SOURCE_UUID.exists(settings) ? new Index(INDEX_SHRINK_SOURCE_NAME.get(settings), INDEX_SHRINK_SOURCE_UUID.get(settings)) : null; + return INDEX_SHRINK_SOURCE_UUID.exists(settings) ? new Index(INDEX_SHRINK_SOURCE_NAME.get(settings), INDEX_SHRINK_SOURCE_UUID.get(settings)) : null; } /** @@ -546,13 +593,11 @@ public Diff diff(IndexMetaData previousState) { return new IndexMetaDataDiff(previousState, this); } - @Override - public Diff readDiffFrom(StreamInput in) throws IOException { + public static Diff readDiffFrom(StreamInput in) throws IOException { return new IndexMetaDataDiff(in); } - @Override - public IndexMetaData fromXContent(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException { + public static IndexMetaData fromXContent(XContentParser parser) throws IOException { return Builder.fromXContent(parser); } @@ -575,7 +620,7 @@ private static class IndexMetaDataDiff implements Diff { private final Diff> customs; private final Diff>> inSyncAllocationIds; - public IndexMetaDataDiff(IndexMetaData before, IndexMetaData after) { + IndexMetaDataDiff(IndexMetaData before, IndexMetaData after) { index = after.index.getName(); version = after.version; routingNumShards = after.routingNumShards; @@ -589,15 +634,17 @@ public IndexMetaDataDiff(IndexMetaData before, IndexMetaData after) { DiffableUtils.getVIntKeySerializer(), DiffableUtils.StringSetValueSerializer.getInstance()); } - public IndexMetaDataDiff(StreamInput in) throws IOException { + IndexMetaDataDiff(StreamInput in) throws IOException { index = in.readString(); routingNumShards = in.readInt(); version = in.readLong(); state = State.fromId(in.readByte()); settings = Settings.readSettingsFromStream(in); primaryTerms = in.readVLongArray(); - mappings = DiffableUtils.readImmutableOpenMapDiff(in, DiffableUtils.getStringKeySerializer(), MappingMetaData.PROTO); - aliases = DiffableUtils.readImmutableOpenMapDiff(in, DiffableUtils.getStringKeySerializer(), AliasMetaData.PROTO); + mappings = DiffableUtils.readImmutableOpenMapDiff(in, DiffableUtils.getStringKeySerializer(), MappingMetaData::new, + MappingMetaData::readDiffFrom); + aliases = DiffableUtils.readImmutableOpenMapDiff(in, DiffableUtils.getStringKeySerializer(), AliasMetaData::new, + AliasMetaData::readDiffFrom); customs = DiffableUtils.readImmutableOpenMapDiff(in, DiffableUtils.getStringKeySerializer(), new DiffableUtils.DiffableValueSerializer() { @Override @@ -605,6 +652,7 @@ public Custom read(StreamInput in, String key) throws IOException { return lookupPrototypeSafe(key).readFrom(in); } + @SuppressWarnings("unchecked") @Override public Diff readDiff(StreamInput in, String key) throws IOException { return lookupPrototypeSafe(key).readDiffFrom(in); @@ -644,8 +692,7 @@ public IndexMetaData apply(IndexMetaData part) { } } - 
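The getters added above expose the new index.routing_partition_size value, and the build() change later in this file's diff rejects values that are neither 1 nor strictly smaller than the shard count. A minimal sketch of that rule, using hypothetical shard counts:

```java
public class RoutingPartitionSizeSketch {
    static void validate(int routingPartitionSize, int routingNumShards) {
        if (routingPartitionSize != 1 && routingPartitionSize >= routingNumShards) {
            throw new IllegalArgumentException("routing partition size [" + routingPartitionSize
                    + "] should be a positive number less than the number of shards [" + routingNumShards + "]");
        }
    }

    public static void main(String[] args) {
        validate(1, 5); // default value, always accepted
        validate(3, 5); // partitioned routing over a subset of the shards
        try {
            validate(5, 5); // rejected: must stay strictly below the shard count
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```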
@Override - public IndexMetaData readFrom(StreamInput in) throws IOException { + public static IndexMetaData readFrom(StreamInput in) throws IOException { Builder builder = new Builder(in.readString()); builder.version(in.readLong()); builder.setRoutingNumShards(in.readInt()); @@ -654,12 +701,12 @@ public IndexMetaData readFrom(StreamInput in) throws IOException { builder.primaryTerms(in.readVLongArray()); int mappingsSize = in.readVInt(); for (int i = 0; i < mappingsSize; i++) { - MappingMetaData mappingMd = MappingMetaData.PROTO.readFrom(in); + MappingMetaData mappingMd = new MappingMetaData(in); builder.putMapping(mappingMd); } int aliasesSize = in.readVInt(); for (int i = 0; i < aliasesSize; i++) { - AliasMetaData aliasMd = AliasMetaData.Builder.readFrom(in); + AliasMetaData aliasMd = new AliasMetaData(in); builder.putAlias(aliasMd); } int customSize = in.readVInt(); @@ -781,6 +828,11 @@ public int getRoutingNumShards() { return routingNumShards == null ? numberOfShards() : routingNumShards; } + /** + * Returns the number of shards. + * + * @return the provided value or -1 if it has not been set. + */ public int numberOfShards() { return settings.getAsInt(SETTING_NUMBER_OF_SHARDS, -1); } @@ -790,10 +842,29 @@ public Builder numberOfReplicas(int numberOfReplicas) { return this; } + /** + * Returns the number of replicas. + * + * @return the provided value or -1 if it has not been set. + */ public int numberOfReplicas() { return settings.getAsInt(SETTING_NUMBER_OF_REPLICAS, -1); } + public Builder routingPartitionSize(int routingPartitionSize) { + settings = Settings.builder().put(settings).put(SETTING_ROUTING_PARTITION_SIZE, routingPartitionSize).build(); + return this; + } + + /** + * Returns the routing partition size. + * + * @return the provided value or -1 if it has not been set. 
+ */ + public int routingPartitionSize() { + return settings.getAsInt(SETTING_ROUTING_PARTITION_SIZE, -1); + } + public Builder creationDate(long creationDate) { settings = Settings.builder().put(settings).put(SETTING_CREATION_DATE, creationDate).build(); return this; @@ -813,9 +884,7 @@ public MappingMetaData mapping(String type) { } public Builder putMapping(String type, String source) throws IOException { - try (XContentParser parser = XContentFactory.xContent(source).createParser(source)) { - putMapping(new MappingMetaData(type, parser.mapOrdered())); - } + putMapping(new MappingMetaData(type, XContentHelper.convertToMap(XContentFactory.xContent(source), source, true))); return this; } @@ -859,7 +928,7 @@ public Set getInSyncAllocationIds(int shardId) { } public Builder putInSyncAllocationIds(int shardId, Set allocationIds) { - inSyncAllocationIds.put(shardId, new HashSet(allocationIds)); + inSyncAllocationIds.put(shardId, new HashSet<>(allocationIds)); return this; } @@ -938,6 +1007,12 @@ public IndexMetaData build() { throw new IllegalArgumentException("must specify non-negative number of shards for index [" + index + "]"); } + int routingPartitionSize = INDEX_ROUTING_PARTITION_SIZE_SETTING.get(settings); + if (routingPartitionSize != 1 && routingPartitionSize >= getRoutingNumShards()) { + throw new IllegalArgumentException("routing partition size [" + routingPartitionSize + "] should be a positive number" + + " less than the number of shards [" + getRoutingNumShards() + "] for [" + index + "]"); + } + // fill missing slots in inSyncAllocationIds with empty set if needed and make all entries immutable ImmutableOpenIntMap.Builder> filledInSyncAllocationIds = ImmutableOpenIntMap.builder(); for (int i = 0; i < numberOfShards; i++) { @@ -977,17 +1052,6 @@ public IndexMetaData build() { } Version indexCreatedVersion = Version.indexCreated(settings); Version indexUpgradedVersion = settings.getAsVersion(IndexMetaData.SETTING_VERSION_UPGRADED, indexCreatedVersion); - String stringLuceneVersion = settings.get(SETTING_VERSION_MINIMUM_COMPATIBLE); - final org.apache.lucene.util.Version minimumCompatibleLuceneVersion; - if (stringLuceneVersion != null) { - try { - minimumCompatibleLuceneVersion = org.apache.lucene.util.Version.parse(stringLuceneVersion); - } catch (ParseException ex) { - throw new IllegalStateException("Cannot parse lucene version [" + stringLuceneVersion + "] in the [" + SETTING_VERSION_MINIMUM_COMPATIBLE + "] setting", ex); - } - } else { - minimumCompatibleLuceneVersion = null; - } if (primaryTerms == null) { initializePrimaryTerms(); @@ -1004,9 +1068,10 @@ public IndexMetaData build() { } final String uuid = settings.get(SETTING_INDEX_UUID, INDEX_UUID_NA_VALUE); + return new IndexMetaData(new Index(index, uuid), version, primaryTerms, state, numberOfShards, numberOfReplicas, tmpSettings, mappings.build(), tmpAliases.build(), customs.build(), filledInSyncAllocationIds.build(), requireFilters, initialRecoveryFilters, includeFilters, excludeFilters, - indexCreatedVersion, indexUpgradedVersion, minimumCompatibleLuceneVersion, getRoutingNumShards(), waitForActiveShards); + indexCreatedVersion, indexUpgradedVersion, getRoutingNumShards(), routingPartitionSize, waitForActiveShards); } public static void toXContent(IndexMetaData indexMetaData, XContentBuilder builder, ToXContent.Params params) throws IOException { @@ -1029,11 +1094,7 @@ public static void toXContent(IndexMetaData indexMetaData, XContentBuilder build if (binary) { builder.value(cursor.value.source().compressed()); } 
else { - byte[] data = cursor.value.source().uncompressed(); - try (XContentParser parser = XContentFactory.xContent(data).createParser(data)) { - Map mapping = parser.mapOrdered(); - builder.map(mapping); - } + builder.map(XContentHelper.convertToMap(new BytesArray(cursor.value.source().uncompressed()), true).v2()); } } builder.endArray(); @@ -1185,10 +1246,6 @@ public static IndexMetaData fromXContent(XContentParser parser) throws IOExcepti } return builder.build(); } - - public static IndexMetaData readFrom(StreamInput in) throws IOException { - return PROTO.readFrom(in); - } } /** @@ -1199,6 +1256,7 @@ public static IndexMetaData readFrom(StreamInput in) throws IOException { * {@link #isIndexUsingShadowReplicas(org.elasticsearch.common.settings.Settings)}. */ public static boolean isOnSharedFilesystem(Settings settings) { + // don't use the setting directly, not to trigger verbose deprecation logging return settings.getAsBoolean(SETTING_SHARED_FILESYSTEM, isIndexUsingShadowReplicas(settings)); } @@ -1208,6 +1266,7 @@ public static boolean isOnSharedFilesystem(Settings settings) { * setting for this is false. */ public static boolean isIndexUsingShadowReplicas(Settings settings) { + // don't use the setting directly, not to trigger verbose deprecation logging return settings.getAsBoolean(SETTING_SHADOW_REPLICAS, false); } diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java index df53395fe27e1..711d685c1d668 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java @@ -29,6 +29,8 @@ import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.joda.DateMathParser; import org.elasticsearch.common.joda.FormatDateTimeFormatter; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.Index; @@ -48,13 +50,15 @@ import java.util.Locale; import java.util.Map; import java.util.Set; -import java.util.concurrent.Callable; +import java.util.function.Predicate; import java.util.stream.Collectors; public class IndexNameExpressionResolver extends AbstractComponent { private final List expressionResolvers; private final DateMathExpressionResolver dateMathExpressionResolver; + private static final DeprecationLogger DEPRECATION_LOGGER = + new DeprecationLogger(Loggers.getLogger(IndexNameExpressionResolver.class)); public IndexNameExpressionResolver(Settings settings) { super(settings); @@ -131,9 +135,9 @@ public Index[] concreteIndices(ClusterState state, IndicesOptions options, Strin * @throws IllegalArgumentException if one of the aliases resolve to multiple indices and the provided * indices options in the context don't allow such a case. */ - public String[] concreteIndexNames(ClusterState state, IndicesOptions options, long startTime, String... indexExpressions) { + public Index[] concreteIndices(ClusterState state, IndicesOptions options, long startTime, String... indexExpressions) { Context context = new Context(state, options, startTime); - return concreteIndexNames(context, indexExpressions); + return concreteIndices(context, indexExpressions); } String[] concreteIndexNames(Context context, String... 
indexExpressions) { @@ -159,7 +163,6 @@ Index[] concreteIndices(Context context, String... indexExpressions) { if (indexExpressions.length == 1) { failNoIndices = options.allowNoIndices() == false; } - List expressions = Arrays.asList(indexExpressions); for (ExpressionResolver expressionResolver : expressionResolvers) { expressions = expressionResolver.resolve(context, expressions); @@ -269,8 +272,19 @@ public String resolveDateMathExpression(String dateExpression) { * the index itself - null is returned. Returns null if no filtering is required. */ public String[] filteringAliases(ClusterState state, String index, String... expressions) { + return indexAliases(state, index, AliasMetaData::filteringRequired, false, expressions); + } + + /** + * Iterates through the list of indices and selects the effective list of required aliases for the + * given index. + *

    Only aliases where the given predicate tests successfully are returned. If the indices list contains a non-required reference to + * the index itself - null is returned. Returns null if no filtering is required. + */ + public String[] indexAliases(ClusterState state, String index, Predicate requiredAlias, boolean skipIdentity, + String... expressions) { // expand the aliases wildcard - List resolvedExpressions = expressions != null ? Arrays.asList(expressions) : Collections.emptyList(); + List resolvedExpressions = expressions != null ? Arrays.asList(expressions) : Collections.emptyList(); Context context = new Context(state, IndicesOptions.lenientExpandOpen(), true); for (ExpressionResolver expressionResolver : expressionResolvers) { resolvedExpressions = expressionResolver.resolve(context, resolvedExpressions); @@ -279,54 +293,50 @@ public String[] filteringAliases(ClusterState state, String index, String... exp if (isAllIndices(resolvedExpressions)) { return null; } + final IndexMetaData indexMetaData = state.metaData().getIndices().get(index); + if (indexMetaData == null) { + // Shouldn't happen + throw new IndexNotFoundException(index); + } // optimize for the most common single index/alias scenario if (resolvedExpressions.size() == 1) { String alias = resolvedExpressions.get(0); - IndexMetaData indexMetaData = state.metaData().getIndices().get(index); - if (indexMetaData == null) { - // Shouldn't happen - throw new IndexNotFoundException(index); - } + AliasMetaData aliasMetaData = indexMetaData.getAliases().get(alias); - boolean filteringRequired = aliasMetaData != null && aliasMetaData.filteringRequired(); - if (!filteringRequired) { + if (aliasMetaData == null || requiredAlias.test(aliasMetaData) == false) { return null; } return new String[]{alias}; } - List filteringAliases = null; + List aliases = null; for (String alias : resolvedExpressions) { if (alias.equals(index)) { - return null; - } - - IndexMetaData indexMetaData = state.metaData().getIndices().get(index); - if (indexMetaData == null) { - // Shouldn't happen - throw new IndexNotFoundException(index); + if (skipIdentity) { + continue; + } else { + return null; + } } - AliasMetaData aliasMetaData = indexMetaData.getAliases().get(alias); // Check that this is an alias for the current index // Otherwise - skip it if (aliasMetaData != null) { - boolean filteringRequired = aliasMetaData.filteringRequired(); - if (filteringRequired) { - // If filtering required - add it to the list of filters - if (filteringAliases == null) { - filteringAliases = new ArrayList<>(); + if (requiredAlias.test(aliasMetaData)) { + // If required - add it to the list of aliases + if (aliases == null) { + aliases = new ArrayList<>(); } - filteringAliases.add(alias); + aliases.add(alias); } else { - // If not, we have a non filtering alias for this index - no filtering needed + // If not, we have a non required alias for this index - no futher checking needed return null; } } } - if (filteringAliases == null) { + if (aliases == null) { return null; } - return filteringAliases.toArray(new String[filteringAliases.size()]); + return aliases.toArray(new String[aliases.size()]); } /** @@ -503,11 +513,11 @@ static final class Context { this(state, options, System.currentTimeMillis(), preserveAliases); } - public Context(ClusterState state, IndicesOptions options, long startTime) { + Context(ClusterState state, IndicesOptions options, long startTime) { this(state, options, startTime, false); } - public Context(ClusterState state, IndicesOptions options, 
long startTime, boolean preserveAliases) { + Context(ClusterState state, IndicesOptions options, long startTime, boolean preserveAliases) { this.state = state; this.options = options; this.startTime = startTime; @@ -580,6 +590,8 @@ public List resolve(Context context, List expressions) { private Set innerResolve(Context context, List expressions, IndicesOptions options, MetaData metaData) { Set result = null; + boolean wildcardSeen = false; + boolean plusSeen = false; for (int i = 0; i < expressions.size(); i++) { String expression = expressions.get(i); if (aliasOrIndexExists(metaData, expression)) { @@ -594,36 +606,36 @@ private Set innerResolve(Context context, List expressions, Indi boolean add = true; if (expression.charAt(0) == '+') { // if its the first, add empty result set + plusSeen = true; if (i == 0) { result = new HashSet<>(); } expression = expression.substring(1); } else if (expression.charAt(0) == '-') { - // if its the first, fill it with all the indices... - if (i == 0) { - List concreteIndices = resolveEmptyOrTrivialWildcard(options, metaData, false); - result = new HashSet<>(concreteIndices); + // if there is a negation without a wildcard being previously seen, add it verbatim, + // otherwise return the expression + if (wildcardSeen) { + add = false; + expression = expression.substring(1); + } else { + add = true; } - add = false; - expression = expression.substring(1); + } + if (result == null) { + // add all the previous ones... + result = new HashSet<>(expressions.subList(0, i)); } if (!Regex.isSimpleMatchPattern(expression)) { if (!unavailableIgnoredOrExists(options, metaData, expression)) { throw infe(expression); } - if (result != null) { - if (add) { - result.add(expression); - } else { - result.remove(expression); - } + if (add) { + result.add(expression); + } else { + result.remove(expression); } continue; } - if (result == null) { - // add all the previous ones... 
- result = new HashSet<>(expressions.subList(0, i)); - } final IndexMetaData.State excludeState = excludeState(options); final Map matches = matches(metaData, expression); @@ -637,6 +649,13 @@ private Set innerResolve(Context context, List expressions, Indi if (!noIndicesAllowedOrMatches(options, matches)) { throw infe(expression); } + + if (Regex.isSimpleMatchPattern(expression)) { + wildcardSeen = true; + } + } + if (plusSeen) { + DEPRECATION_LOGGER.deprecated("support for '+' as part of index expressions is deprecated"); } return result; } @@ -751,7 +770,7 @@ static final class DateMathExpressionResolver implements ExpressionResolver { private final String defaultDateFormatterPattern; private final DateTimeFormatter defaultDateFormatter; - public DateMathExpressionResolver(Settings settings) { + DateMathExpressionResolver(Settings settings) { String defaultTimeZoneId = settings.get("date_math_expression_resolver.default_time_zone", "UTC"); this.defaultTimeZone = DateTimeZone.forID(defaultTimeZoneId); defaultDateFormatterPattern = settings.get("date_math_expression_resolver.default_date_format", "YYYY.MM.dd"); @@ -850,12 +869,7 @@ String resolveExpression(String expression, final Context context) { DateTimeFormatter parser = dateFormatter.withZone(timeZone); FormatDateTimeFormatter formatter = new FormatDateTimeFormatter(dateFormatterPattern, parser, Locale.ROOT); DateMathParser dateMathParser = new DateMathParser(formatter); - long millis = dateMathParser.parse(mathExpression, new Callable() { - @Override - public Long call() throws Exception { - return context.getStartTime(); - } - }, false, timeZone); + long millis = dateMathParser.parse(mathExpression, context::getStartTime, false, timeZone); String time = formatter.printer().print(millis); beforePlaceHolderSb.append(time); diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexTemplateMetaData.java b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexTemplateMetaData.java index 13d7e152ba672..50535519e1edd 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexTemplateMetaData.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexTemplateMetaData.java @@ -21,7 +21,11 @@ import com.carrotsearch.hppc.cursors.ObjectCursor; import com.carrotsearch.hppc.cursors.ObjectObjectCursor; +import org.elasticsearch.Version; import org.elasticsearch.cluster.AbstractDiffable; +import org.elasticsearch.cluster.Diff; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.collect.MapBuilder; import org.elasticsearch.common.compress.CompressedXContent; @@ -33,10 +37,12 @@ import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; import java.io.IOException; import java.util.Map; +import java.util.Objects; import java.util.Set; /** @@ -44,12 +50,30 @@ */ public class IndexTemplateMetaData extends AbstractDiffable { - public static final IndexTemplateMetaData PROTO = IndexTemplateMetaData.builder("").build(); - private final String name; private final int order; + /** + * The version is an arbitrary number managed by the user so that they can easily and quickly verify the existence of a given template. + * Expected usage: + *

    
    +     * PUT /_template/my_template
    +     * {
    +     *   "template": "my_index-*",
    +     *   "mappings": { ... },
    +     *   "version": 1
    +     * }
    +     * 
    + * Then, some process from the user can occasionally verify that the template exists with the appropriate version without having to + * check the template's content: + *
    
    +     * GET /_template/my_template?filter_path=*.version
    +     * 
    + */ + @Nullable + private final Integer version; + private final String template; private final Settings settings; @@ -61,10 +85,14 @@ public class IndexTemplateMetaData extends AbstractDiffable customs; - public IndexTemplateMetaData(String name, int order, String template, Settings settings, ImmutableOpenMap mappings, - ImmutableOpenMap aliases, ImmutableOpenMap customs) { + public IndexTemplateMetaData(String name, int order, Integer version, + String template, Settings settings, + ImmutableOpenMap mappings, + ImmutableOpenMap aliases, + ImmutableOpenMap customs) { this.name = name; this.order = order; + this.version = version; this.template = template; this.settings = settings; this.mappings = mappings; @@ -84,6 +112,16 @@ public int getOrder() { return order(); } + @Nullable + public Integer getVersion() { + return version(); + } + + @Nullable + public Integer version() { + return version; + } + public String getName() { return this.name; } @@ -150,21 +188,21 @@ public boolean equals(Object o) { if (!settings.equals(that.settings)) return false; if (!template.equals(that.template)) return false; - return true; + return Objects.equals(version, that.version); } @Override public int hashCode() { int result = name.hashCode(); result = 31 * result + order; + result = 31 * result + Objects.hashCode(version); result = 31 * result + template.hashCode(); result = 31 * result + settings.hashCode(); result = 31 * result + mappings.hashCode(); return result; } - @Override - public IndexTemplateMetaData readFrom(StreamInput in) throws IOException { + public static IndexTemplateMetaData readFrom(StreamInput in) throws IOException { Builder builder = new Builder(in.readString()); builder.order(in.readInt()); builder.template(in.readString()); @@ -175,7 +213,7 @@ public IndexTemplateMetaData readFrom(StreamInput in) throws IOException { } int aliasesSize = in.readVInt(); for (int i = 0; i < aliasesSize; i++) { - AliasMetaData aliasMd = AliasMetaData.Builder.readFrom(in); + AliasMetaData aliasMd = new AliasMetaData(in); builder.putAlias(aliasMd); } int customSize = in.readVInt(); @@ -184,9 +222,16 @@ public IndexTemplateMetaData readFrom(StreamInput in) throws IOException { IndexMetaData.Custom customIndexMetaData = IndexMetaData.lookupPrototypeSafe(type).readFrom(in); builder.putCustom(type, customIndexMetaData); } + if (in.getVersion().onOrAfter(Version.V_5_0_0_beta1)) { + builder.version(in.readOptionalVInt()); + } return builder.build(); } + public static Diff readDiffFrom(StreamInput in) throws IOException { + return readDiffFrom(IndexTemplateMetaData::readFrom, in); + } + @Override public void writeTo(StreamOutput out) throws IOException { out.writeString(name); @@ -207,6 +252,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeString(cursor.key); cursor.value.writeTo(out); } + if (out.getVersion().onOrAfter(Version.V_5_0_0_beta1)) { + out.writeOptionalVInt(version); + } } public static class Builder { @@ -220,6 +268,8 @@ public static class Builder { private int order; + private Integer version; + private String template; private Settings settings = Settings.Builder.EMPTY_SETTINGS; @@ -240,6 +290,7 @@ public Builder(String name) { public Builder(IndexTemplateMetaData indexTemplateMetaData) { this.name = indexTemplateMetaData.name(); order(indexTemplateMetaData.order()); + version(indexTemplateMetaData.version()); template(indexTemplateMetaData.template()); settings(indexTemplateMetaData.settings()); @@ -253,6 +304,11 @@ public Builder order(int order) { return this; } 
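The new optional template version is carried by the field and accessors above and set through the Builder#version method the patch adds immediately below. A minimal usage sketch, assuming the Elasticsearch core classes are on the classpath; the template name, pattern and version number are made up:

```java
import org.elasticsearch.cluster.metadata.IndexTemplateMetaData;

public class TemplateVersionSketch {
    public static void main(String[] args) {
        IndexTemplateMetaData template = IndexTemplateMetaData.builder("my_template")
                .order(0)
                .version(1)               // nullable: templates created before this change simply have no version
                .template("my_index-*")
                .build();
        System.out.println("version = " + template.version());
    }
}
```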
+ public Builder version(Integer version) { + this.version = version; + return this; + } + public Builder template(String template) { this.template = template; return this; @@ -312,14 +368,26 @@ public IndexMetaData.Custom getCustom(String type) { } public IndexTemplateMetaData build() { - return new IndexTemplateMetaData(name, order, template, settings, mappings.build(), aliases.build(), customs.build()); + return new IndexTemplateMetaData(name, order, version, template, settings, mappings.build(), aliases.build(), customs.build()); } @SuppressWarnings("unchecked") - public static void toXContent(IndexTemplateMetaData indexTemplateMetaData, XContentBuilder builder, ToXContent.Params params) throws IOException { + public static void toXContent(IndexTemplateMetaData indexTemplateMetaData, XContentBuilder builder, ToXContent.Params params) + throws IOException { builder.startObject(indexTemplateMetaData.name()); + toInnerXContent(indexTemplateMetaData, builder, params); + + builder.endObject(); + } + + public static void toInnerXContent(IndexTemplateMetaData indexTemplateMetaData, XContentBuilder builder, ToXContent.Params params) + throws IOException { + builder.field("order", indexTemplateMetaData.order()); + if (indexTemplateMetaData.version() != null) { + builder.field("version", indexTemplateMetaData.version()); + } builder.field("template", indexTemplateMetaData.template()); builder.startObject("settings"); @@ -330,10 +398,7 @@ public static void toXContent(IndexTemplateMetaData indexTemplateMetaData, XCont builder.startObject("mappings"); for (ObjectObjectCursor cursor : indexTemplateMetaData.mappings()) { byte[] mappingSource = cursor.value.uncompressed(); - Map mapping; - try (XContentParser parser = XContentFactory.xContent(mappingSource).createParser(mappingSource)) {; - mapping = parser.map(); - } + Map mapping = XContentHelper.convertToMap(new BytesArray(mappingSource), true).v2(); if (mapping.size() == 1 && mapping.containsKey(cursor.key)) { // the type name is the root value, reduce it mapping = (Map) mapping.get(cursor.key); @@ -346,10 +411,7 @@ public static void toXContent(IndexTemplateMetaData indexTemplateMetaData, XCont builder.startArray("mappings"); for (ObjectObjectCursor cursor : indexTemplateMetaData.mappings()) { byte[] data = cursor.value.uncompressed(); - try (XContentParser parser = XContentFactory.xContent(data).createParser(data)) { - Map mapping = parser.mapOrdered(); - builder.map(mapping); - } + builder.map(XContentHelper.convertToMap(new BytesArray(data), true).v2()); } builder.endArray(); } @@ -365,8 +427,6 @@ public static void toXContent(IndexTemplateMetaData indexTemplateMetaData, XCont AliasMetaData.Builder.toXContent(cursor.value, builder, params); } builder.endObject(); - - builder.endObject(); } public static IndexTemplateMetaData fromXContent(XContentParser parser, String templateName) throws IOException { @@ -380,7 +440,9 @@ public static IndexTemplateMetaData fromXContent(XContentParser parser, String t } else if (token == XContentParser.Token.START_OBJECT) { if ("settings".equals(currentFieldName)) { Settings.Builder templateSettingsBuilder = Settings.builder(); - templateSettingsBuilder.put(SettingsLoader.Helper.loadNestedFromMap(parser.mapOrdered())).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX); + templateSettingsBuilder.put( + SettingsLoader.Helper.loadNestedFromMap(parser.mapOrdered())) + .normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX); builder.settings(templateSettingsBuilder.build()); } else if 
("mappings".equals(currentFieldName)) { while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { @@ -388,7 +450,8 @@ public static IndexTemplateMetaData fromXContent(XContentParser parser, String t currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_OBJECT) { String mappingType = currentFieldName; - Map mappingSource = MapBuilder.newMapBuilder().put(mappingType, parser.mapOrdered()).map(); + Map mappingSource = + MapBuilder.newMapBuilder().put(mappingType, parser.mapOrdered()).map(); builder.putMapping(mappingType, XContentFactory.jsonBuilder().map(mappingSource).string()); } } @@ -428,6 +491,8 @@ public static IndexTemplateMetaData fromXContent(XContentParser parser, String t builder.template(parser.text()); } else if ("order".equals(currentFieldName)) { builder.order(parser.intValue()); + } else if ("version".equals(currentFieldName)) { + builder.version(parser.intValue()); } } } @@ -451,10 +516,6 @@ private static String skipTemplateName(XContentParser parser) throws IOException return null; } - - public static IndexTemplateMetaData readFrom(StreamInput in) throws IOException { - return PROTO.readFrom(in); - } } } diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MappingMetaData.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MappingMetaData.java index 0798dff1c9315..06dfbafaeb040 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MappingMetaData.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MappingMetaData.java @@ -19,8 +19,10 @@ package org.elasticsearch.cluster.metadata; +import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.action.TimestampParsingException; import org.elasticsearch.cluster.AbstractDiffable; +import org.elasticsearch.cluster.Diff; import org.elasticsearch.common.compress.CompressedXContent; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -29,7 +31,6 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentHelper; -import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.mapper.DocumentMapper; import org.elasticsearch.index.mapper.TimestampFieldMapper; @@ -43,8 +44,6 @@ */ public class MappingMetaData extends AbstractDiffable { - public static final MappingMetaData PROTO = new MappingMetaData(); - public static class Routing { public static final Routing EMPTY = new Routing(false); @@ -177,10 +176,7 @@ public MappingMetaData(DocumentMapper docMapper) { public MappingMetaData(CompressedXContent mapping) throws IOException { this.source = mapping; - Map mappingMap; - try (XContentParser parser = XContentHelper.createParser(mapping.compressedReference())) { - mappingMap = parser.mapOrdered(); - } + Map mappingMap = XContentHelper.convertToMap(mapping.compressedReference(), true).v2(); if (mappingMap.size() != 1) { throw new IllegalStateException("Can't derive type from mapping, no root type: " + mapping.string()); } @@ -220,7 +216,7 @@ private void initMappers(Map withoutType) { String fieldName = entry.getKey(); Object fieldNode = entry.getValue(); if (fieldName.equals("required")) { - required = lenientNodeBooleanValue(fieldNode); + required = lenientNodeBooleanValue(fieldNode, fieldName); } } this.routing = new Routing(required); @@ -237,13 +233,13 @@ private void initMappers(Map withoutType) { String fieldName = 
entry.getKey(); Object fieldNode = entry.getValue(); if (fieldName.equals("enabled")) { - enabled = lenientNodeBooleanValue(fieldNode); + enabled = lenientNodeBooleanValue(fieldNode, fieldName); } else if (fieldName.equals("format")) { format = fieldNode.toString(); } else if (fieldName.equals("default") && fieldNode != null) { defaultTimestamp = fieldNode.toString(); } else if (fieldName.equals("ignore_missing")) { - ignoreMissing = lenientNodeBooleanValue(fieldNode); + ignoreMissing = lenientNodeBooleanValue(fieldNode, fieldName); } } this.timestamp = new Timestamp(enabled, format, defaultTimestamp, ignoreMissing); @@ -289,7 +285,7 @@ public boolean hasParentField() { /** * Converts the serialized compressed form of the mappings into a parsed map. */ - public Map sourceAsMap() throws IOException { + public Map sourceAsMap() throws ElasticsearchParseException { Map mapping = XContentHelper.convertToMap(source.compressedReference(), true).v2(); if (mapping.size() == 1 && mapping.containsKey(type())) { // the type name is the root value, reduce it @@ -301,7 +297,7 @@ public Map sourceAsMap() throws IOException { /** * Converts the serialized compressed form of the mappings into a parsed map. */ - public Map getSourceAsMap() throws IOException { + public Map getSourceAsMap() throws ElasticsearchParseException { return sourceAsMap(); } @@ -351,11 +347,11 @@ public int hashCode() { return result; } - public MappingMetaData readFrom(StreamInput in) throws IOException { - String type = in.readString(); - CompressedXContent source = CompressedXContent.readCompressedString(in); + public MappingMetaData(StreamInput in) throws IOException { + type = in.readString(); + source = CompressedXContent.readCompressedString(in); // routing - Routing routing = new Routing(in.readBoolean()); + routing = new Routing(in.readBoolean()); // timestamp boolean enabled = in.readBoolean(); @@ -365,9 +361,12 @@ public MappingMetaData readFrom(StreamInput in) throws IOException { ignoreMissing = in.readOptionalBoolean(); - final Timestamp timestamp = new Timestamp(enabled, format, defaultTimestamp, ignoreMissing); - final boolean hasParentField = in.readBoolean(); - return new MappingMetaData(type, source, routing, timestamp, hasParentField); + timestamp = new Timestamp(enabled, format, defaultTimestamp, ignoreMissing); + hasParentField = in.readBoolean(); + } + + public static Diff readDiffFrom(StreamInput in) throws IOException { + return readDiffFrom(MappingMetaData::new, in); } } diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java index fd7e08fec3117..847f520d767ec 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java @@ -28,23 +28,26 @@ import org.elasticsearch.cluster.Diffable; import org.elasticsearch.cluster.DiffableUtils; import org.elasticsearch.cluster.InternalClusterInfoService; +import org.elasticsearch.cluster.NamedDiffable; +import org.elasticsearch.cluster.NamedDiffableValueSerializer; import org.elasticsearch.cluster.block.ClusterBlock; import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.cluster.routing.allocation.DiskThresholdSettings; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.ParseFieldMatcher; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.UUIDs; import 
org.elasticsearch.common.collect.HppcMaps; import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.loader.SettingsLoader; -import org.elasticsearch.common.xcontent.FromXContentBuilder; +import org.elasticsearch.common.xcontent.NamedXContentRegistry.UnknownNamedObjectException; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; @@ -57,9 +60,7 @@ import org.elasticsearch.index.store.IndexStoreConfig; import org.elasticsearch.indices.recovery.RecoverySettings; import org.elasticsearch.indices.ttl.IndicesTTLService; -import org.elasticsearch.ingest.IngestMetadata; import org.elasticsearch.rest.RestStatus; -import org.elasticsearch.script.ScriptMetaData; import java.io.IOException; import java.util.ArrayList; @@ -68,6 +69,7 @@ import java.util.Comparator; import java.util.EnumSet; import java.util.HashMap; +import java.util.HashSet; import java.util.Iterator; import java.util.List; import java.util.Map; @@ -80,9 +82,9 @@ import static org.elasticsearch.common.settings.Settings.writeSettingsToStream; import static org.elasticsearch.common.util.set.Sets.newHashSet; -public class MetaData implements Iterable, Diffable, FromXContentBuilder, ToXContent { +public class MetaData implements Iterable, Diffable, ToXContent { - public static final MetaData PROTO = builder().build(); + private static final Logger logger = Loggers.getLogger(MetaData.class); public static final String ALL = "_all"; @@ -97,56 +99,45 @@ public enum XContentContext { SNAPSHOT } + /** + * Indicates that this custom metadata will be returned as part of an API call but will not be persisted + */ public static EnumSet API_ONLY = EnumSet.of(XContentContext.API); - public static EnumSet API_AND_GATEWAY = EnumSet.of(XContentContext.API, XContentContext.GATEWAY); - public static EnumSet API_AND_SNAPSHOT = EnumSet.of(XContentContext.API, XContentContext.SNAPSHOT); - - public interface Custom extends Diffable, ToXContent { - - String type(); - - Custom fromXContent(XContentParser parser) throws IOException; - - EnumSet context(); - } - public static Map customPrototypes = new HashMap<>(); + /** + * Indicates that this custom metadata will be returned as part of an API call and will be persisted between + * node restarts, but will not be a part of a snapshot global state + */ + public static EnumSet API_AND_GATEWAY = EnumSet.of(XContentContext.API, XContentContext.GATEWAY); - static { - // register non plugin custom metadata - registerPrototype(RepositoriesMetaData.TYPE, RepositoriesMetaData.PROTO); - registerPrototype(IngestMetadata.TYPE, IngestMetadata.PROTO); - registerPrototype(ScriptMetaData.TYPE, ScriptMetaData.PROTO); - registerPrototype(IndexGraveyard.TYPE, IndexGraveyard.PROTO); - } + /** + * Indicates that this custom metadata will be returned as part of an API call and stored as a part of + * a snapshot global state, but will not be persisted between node restarts + */ + public static EnumSet API_AND_SNAPSHOT = EnumSet.of(XContentContext.API, XContentContext.SNAPSHOT); /** - * Register a custom index meta data 
factory. Make sure to call it from a static block. + * Indicates that this custom metadata will be returned as part of an API call, stored as a part of + * a snapshot global state, and will be persisted between node restarts */ - public static void registerPrototype(String type, Custom proto) { - customPrototypes.put(type, proto); - } + public static EnumSet ALL_CONTEXTS = EnumSet.allOf(XContentContext.class); - @Nullable - public static T lookupPrototype(String type) { - //noinspection unchecked - return (T) customPrototypes.get(type); - } + public interface Custom extends NamedDiffable, ToXContent { - public static T lookupPrototypeSafe(String type) { - //noinspection unchecked - T proto = (T) customPrototypes.get(type); - if (proto == null) { - throw new IllegalArgumentException("No custom metadata prototype registered for type [" + type + "], node likely missing plugins"); - } - return proto; + EnumSet context(); } - public static final Setting SETTING_READ_ONLY_SETTING = Setting.boolSetting("cluster.blocks.read_only", false, Property.Dynamic, Property.NodeScope); - public static final ClusterBlock CLUSTER_READ_ONLY_BLOCK = new ClusterBlock(6, "cluster read-only (api)", false, false, RestStatus.FORBIDDEN, EnumSet.of(ClusterBlockLevel.WRITE, ClusterBlockLevel.METADATA_WRITE)); + public static final ClusterBlock CLUSTER_READ_ONLY_BLOCK = new ClusterBlock(6, "cluster read-only (api)", false, false, + false, RestStatus.FORBIDDEN, EnumSet.of(ClusterBlockLevel.WRITE, ClusterBlockLevel.METADATA_WRITE)); + + public static final Setting SETTING_READ_ONLY_ALLOW_DELETE_SETTING = + Setting.boolSetting("cluster.blocks.read_only_allow_delete", false, Property.Dynamic, Property.NodeScope); + + public static final ClusterBlock CLUSTER_READ_ONLY_ALLOW_DELETE_BLOCK = new ClusterBlock(13, "cluster read-only / allow delete (api)", + false, false, true, RestStatus.FORBIDDEN, EnumSet.of(ClusterBlockLevel.WRITE, ClusterBlockLevel.METADATA_WRITE)); public static final MetaData EMPTY_META_DATA = builder().build(); @@ -158,6 +149,8 @@ public static T lookupPrototypeSafe(String type) { public static final String GLOBAL_STATE_FILE_PREFIX = "global-"; + private static final NamedDiffableValueSerializer CUSTOM_VALUE_SERIALIZER = new NamedDiffableValueSerializer<>(Custom.class); + private final String clusterUUID; private final long version; @@ -579,14 +572,14 @@ public static boolean isGlobalStateEquals(MetaData metaData1, MetaData metaData2 // Check if any persistent metadata needs to be saved int customCount1 = 0; for (ObjectObjectCursor cursor : metaData1.customs) { - if (customPrototypes.get(cursor.key).context().contains(XContentContext.GATEWAY)) { + if (cursor.value.context().contains(XContentContext.GATEWAY)) { if (!cursor.value.equals(metaData2.custom(cursor.key))) return false; customCount1++; } } int customCount2 = 0; - for (ObjectObjectCursor cursor : metaData2.customs) { - if (customPrototypes.get(cursor.key).context().contains(XContentContext.GATEWAY)) { + for (ObjectCursor cursor : metaData2.customs.values()) { + if (cursor.value.context().contains(XContentContext.GATEWAY)) { customCount2++; } } @@ -599,13 +592,11 @@ public Diff diff(MetaData previousState) { return new MetaDataDiff(previousState, this); } - @Override - public Diff readDiffFrom(StreamInput in) throws IOException { + public static Diff readDiffFrom(StreamInput in) throws IOException { return new MetaDataDiff(in); } - @Override - public MetaData fromXContent(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException 
{ + public static MetaData fromXContent(XContentParser parser) throws IOException { return Builder.fromXContent(parser); } @@ -627,35 +618,26 @@ private static class MetaDataDiff implements Diff { private Diff> templates; private Diff> customs; - public MetaDataDiff(MetaData before, MetaData after) { + MetaDataDiff(MetaData before, MetaData after) { clusterUUID = after.clusterUUID; version = after.version; transientSettings = after.transientSettings; persistentSettings = after.persistentSettings; indices = DiffableUtils.diff(before.indices, after.indices, DiffableUtils.getStringKeySerializer()); templates = DiffableUtils.diff(before.templates, after.templates, DiffableUtils.getStringKeySerializer()); - customs = DiffableUtils.diff(before.customs, after.customs, DiffableUtils.getStringKeySerializer()); + customs = DiffableUtils.diff(before.customs, after.customs, DiffableUtils.getStringKeySerializer(), CUSTOM_VALUE_SERIALIZER); } - public MetaDataDiff(StreamInput in) throws IOException { + MetaDataDiff(StreamInput in) throws IOException { clusterUUID = in.readString(); version = in.readLong(); transientSettings = Settings.readSettingsFromStream(in); persistentSettings = Settings.readSettingsFromStream(in); - indices = DiffableUtils.readImmutableOpenMapDiff(in, DiffableUtils.getStringKeySerializer(), IndexMetaData.PROTO); - templates = DiffableUtils.readImmutableOpenMapDiff(in, DiffableUtils.getStringKeySerializer(), IndexTemplateMetaData.PROTO); - customs = DiffableUtils.readImmutableOpenMapDiff(in, DiffableUtils.getStringKeySerializer(), - new DiffableUtils.DiffableValueSerializer() { - @Override - public Custom read(StreamInput in, String key) throws IOException { - return lookupPrototypeSafe(key).readFrom(in); - } - - @Override - public Diff readDiff(StreamInput in, String key) throws IOException { - return lookupPrototypeSafe(key).readDiffFrom(in); - } - }); + indices = DiffableUtils.readImmutableOpenMapDiff(in, DiffableUtils.getStringKeySerializer(), IndexMetaData::readFrom, + IndexMetaData::readDiffFrom); + templates = DiffableUtils.readImmutableOpenMapDiff(in, DiffableUtils.getStringKeySerializer(), IndexTemplateMetaData::readFrom, + IndexTemplateMetaData::readDiffFrom); + customs = DiffableUtils.readImmutableOpenMapDiff(in, DiffableUtils.getStringKeySerializer(), CUSTOM_VALUE_SERIALIZER); } @Override @@ -683,8 +665,7 @@ public MetaData apply(MetaData part) { } } - @Override - public MetaData readFrom(StreamInput in) throws IOException { + public static MetaData readFrom(StreamInput in) throws IOException { Builder builder = new Builder(); builder.version = in.readLong(); builder.clusterUUID = in.readString(); @@ -692,17 +673,16 @@ public MetaData readFrom(StreamInput in) throws IOException { builder.persistentSettings(readSettingsFromStream(in)); int size = in.readVInt(); for (int i = 0; i < size; i++) { - builder.put(IndexMetaData.Builder.readFrom(in), false); + builder.put(IndexMetaData.readFrom(in), false); } size = in.readVInt(); for (int i = 0; i < size; i++) { - builder.put(IndexTemplateMetaData.Builder.readFrom(in)); + builder.put(IndexTemplateMetaData.readFrom(in)); } int customSize = in.readVInt(); for (int i = 0; i < customSize; i++) { - String type = in.readString(); - Custom customIndexMetaData = lookupPrototypeSafe(type).readFrom(in); - builder.putCustom(type, customIndexMetaData); + Custom customIndexMetaData = in.readNamedWriteable(Custom.class); + builder.putCustom(customIndexMetaData.getWriteableName(), customIndexMetaData); } return builder.build(); } @@ -721,10 
+701,18 @@ public void writeTo(StreamOutput out) throws IOException { for (ObjectCursor cursor : templates.values()) { cursor.value.writeTo(out); } - out.writeVInt(customs.size()); - for (ObjectObjectCursor cursor : customs) { - out.writeString(cursor.key); - cursor.value.writeTo(out); + // filter out custom states not supported by the other node + int numberOfCustoms = 0; + for (ObjectCursor cursor : customs.values()) { + if (out.getVersion().onOrAfter(cursor.value.getMinimalSupportedVersion())) { + numberOfCustoms++; + } + } + out.writeVInt(numberOfCustoms); + for (ObjectCursor cursor : customs.values()) { + if (out.getVersion().onOrAfter(cursor.value.getMinimalSupportedVersion())) { + out.writeNamedWriteable(cursor.value); + } } } @@ -1012,55 +1000,70 @@ public MetaData build() { // while these datastructures aren't even used. // 2) The aliasAndIndexLookup can be updated instead of rebuilding it all the time. - // build all concrete indices arrays: - // TODO: I think we can remove these arrays. it isn't worth the effort, for operations on all indices. - // When doing an operation across all indices, most of the time is spent on actually going to all shards and - // do the required operations, the bottleneck isn't resolving expressions into concrete indices. - List allIndicesLst = new ArrayList<>(); - for (ObjectCursor cursor : indices.values()) { - allIndicesLst.add(cursor.value.getIndex().getName()); - } - String[] allIndices = allIndicesLst.toArray(new String[allIndicesLst.size()]); - - List allOpenIndicesLst = new ArrayList<>(); - List allClosedIndicesLst = new ArrayList<>(); + final Set allIndices = new HashSet<>(indices.size()); + final List allOpenIndices = new ArrayList<>(); + final List allClosedIndices = new ArrayList<>(); + final Set duplicateAliasesIndices = new HashSet<>(); for (ObjectCursor cursor : indices.values()) { - IndexMetaData indexMetaData = cursor.value; + final IndexMetaData indexMetaData = cursor.value; + final String name = indexMetaData.getIndex().getName(); + boolean added = allIndices.add(name); + assert added : "double index named [" + name + "]"; if (indexMetaData.getState() == IndexMetaData.State.OPEN) { - allOpenIndicesLst.add(indexMetaData.getIndex().getName()); + allOpenIndices.add(indexMetaData.getIndex().getName()); } else if (indexMetaData.getState() == IndexMetaData.State.CLOSE) { - allClosedIndicesLst.add(indexMetaData.getIndex().getName()); + allClosedIndices.add(indexMetaData.getIndex().getName()); + } + indexMetaData.getAliases().keysIt().forEachRemaining(duplicateAliasesIndices::add); + } + duplicateAliasesIndices.retainAll(allIndices); + if (duplicateAliasesIndices.isEmpty() == false) { + // iterate again and constructs a helpful message + ArrayList duplicates = new ArrayList<>(); + for (ObjectCursor cursor : indices.values()) { + for (String alias: duplicateAliasesIndices) { + if (cursor.value.getAliases().containsKey(alias)) { + duplicates.add(alias + " (alias of " + cursor.value.getIndex() + ")"); + } + } } + assert duplicates.size() > 0; + throw new IllegalStateException("index and alias names need to be unique, but the following duplicates were found [" + + Strings.collectionToCommaDelimitedString(duplicates)+ "]"); + } - String[] allOpenIndices = allOpenIndicesLst.toArray(new String[allOpenIndicesLst.size()]); - String[] allClosedIndices = allClosedIndicesLst.toArray(new String[allClosedIndicesLst.size()]); // build all indices map SortedMap aliasAndIndexLookup = new TreeMap<>(); for (ObjectCursor cursor : indices.values()) { 
IndexMetaData indexMetaData = cursor.value; - aliasAndIndexLookup.put(indexMetaData.getIndex().getName(), new AliasOrIndex.Index(indexMetaData)); + AliasOrIndex existing = aliasAndIndexLookup.put(indexMetaData.getIndex().getName(), new AliasOrIndex.Index(indexMetaData)); + assert existing == null : "duplicate for " + indexMetaData.getIndex(); for (ObjectObjectCursor aliasCursor : indexMetaData.getAliases()) { AliasMetaData aliasMetaData = aliasCursor.value; - AliasOrIndex aliasOrIndex = aliasAndIndexLookup.get(aliasMetaData.getAlias()); - if (aliasOrIndex == null) { - aliasOrIndex = new AliasOrIndex.Alias(aliasMetaData, indexMetaData); - aliasAndIndexLookup.put(aliasMetaData.getAlias(), aliasOrIndex); - } else if (aliasOrIndex instanceof AliasOrIndex.Alias) { - AliasOrIndex.Alias alias = (AliasOrIndex.Alias) aliasOrIndex; - alias.addIndex(indexMetaData); - } else if (aliasOrIndex instanceof AliasOrIndex.Index) { - AliasOrIndex.Index index = (AliasOrIndex.Index) aliasOrIndex; - throw new IllegalStateException("index and alias names need to be unique, but alias [" + aliasMetaData.getAlias() + "] and index " + index.getIndex().getIndex() + " have the same name"); - } else { - throw new IllegalStateException("unexpected alias [" + aliasMetaData.getAlias() + "][" + aliasOrIndex + "]"); - } + aliasAndIndexLookup.compute(aliasMetaData.getAlias(), (aliasName, alias) -> { + if (alias == null) { + return new AliasOrIndex.Alias(aliasMetaData, indexMetaData); + } else { + assert alias instanceof AliasOrIndex.Alias : alias.getClass().getName(); + ((AliasOrIndex.Alias) alias).addIndex(indexMetaData); + return alias; + } + }); } } aliasAndIndexLookup = Collections.unmodifiableSortedMap(aliasAndIndexLookup); + // build all concrete indices arrays: + // TODO: I think we can remove these arrays. it isn't worth the effort, for operations on all indices. + // When doing an operation across all indices, most of the time is spent on actually going to all shards and + // do the required operations, the bottleneck isn't resolving expressions into concrete indices. 
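// --- Illustrative aside (not part of the patch) -------------------------------
// The rewritten MetaData.Builder#build() above folds each index's aliases into the
// sorted lookup with Map#compute instead of the old get/put/instanceof branching.
// A minimal, self-contained sketch of that accumulation pattern, using a plain
// alias-name -> index-names map as a hypothetical stand-in for AliasOrIndex:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

class AliasLookupSketch {
    // each entry is { indexName, alias1, alias2, ... } (hypothetical input shape)
    static SortedMap<String, List<String>> buildLookup(List<String[]> indexToAliases) {
        SortedMap<String, List<String>> lookup = new TreeMap<>();
        for (String[] entry : indexToAliases) {
            String index = entry[0];
            for (int i = 1; i < entry.length; i++) {
                // create the bucket on first sight, otherwise extend it in place
                lookup.compute(entry[i], (alias, indices) -> {
                    List<String> result = indices == null ? new ArrayList<>() : indices;
                    result.add(index);
                    return result;
                });
            }
        }
        return lookup;
    }

    public static void main(String[] args) {
        System.out.println(buildLookup(Arrays.asList(
            new String[] {"logs-1", "logs", "recent"},
            new String[] {"logs-2", "logs"})));
        // prints {logs=[logs-1, logs-2], recent=[logs-1]}
    }
}
// -------------------------------------------------------------------------------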
+ String[] allIndicesArray = allIndices.toArray(new String[allIndices.size()]); + String[] allOpenIndicesArray = allOpenIndices.toArray(new String[allOpenIndices.size()]); + String[] allClosedIndicesArray = allClosedIndices.toArray(new String[allClosedIndices.size()]); + return new MetaData(clusterUUID, version, transientSettings, persistentSettings, indices.build(), templates.build(), - customs.build(), allIndices, allOpenIndices, allClosedIndices, aliasAndIndexLookup); + customs.build(), allIndicesArray, allOpenIndicesArray, allClosedIndicesArray, aliasAndIndexLookup); } public static String toXContent(MetaData metaData) throws IOException { @@ -1079,7 +1082,7 @@ public static void toXContent(MetaData metaData, XContentBuilder builder, ToXCon builder.field("version", metaData.version()); builder.field("cluster_uuid", metaData.clusterUUID); - if (!metaData.persistentSettings().getAsMap().isEmpty()) { + if (!metaData.persistentSettings().isEmpty()) { builder.startObject("settings"); for (Map.Entry entry : metaData.persistentSettings().getAsMap().entrySet()) { builder.field(entry.getKey(), entry.getValue()); @@ -1087,7 +1090,7 @@ public static void toXContent(MetaData metaData, XContentBuilder builder, ToXCon builder.endObject(); } - if (context == XContentContext.API && !metaData.transientSettings().getAsMap().isEmpty()) { + if (context == XContentContext.API && !metaData.transientSettings().isEmpty()) { builder.startObject("transient_settings"); for (Map.Entry entry : metaData.transientSettings().getAsMap().entrySet()) { builder.field(entry.getKey(), entry.getValue()); @@ -1110,8 +1113,7 @@ public static void toXContent(MetaData metaData, XContentBuilder builder, ToXCon } for (ObjectObjectCursor cursor : metaData.customs()) { - Custom proto = lookupPrototypeSafe(cursor.key); - if (proto.context().contains(context)) { + if (cursor.value.context().contains(context)) { builder.startObject(cursor.key); cursor.value.toXContent(builder, params); builder.endObject(); @@ -1162,14 +1164,12 @@ public static MetaData fromXContent(XContentParser parser) throws IOException { builder.put(IndexTemplateMetaData.Builder.fromXContent(parser, parser.currentName())); } } else { - // check if its a custom index metadata - Custom proto = lookupPrototype(currentFieldName); - if (proto == null) { - //TODO warn + try { + Custom custom = parser.namedObject(Custom.class, currentFieldName, null); + builder.putCustom(custom.getWriteableName(), custom); + } catch (UnknownNamedObjectException ex) { + logger.warn("Skipping unknown custom object with type {}", currentFieldName); parser.skipChildren(); - } else { - Custom custom = proto.fromXContent(parser); - builder.putCustom(custom.type(), custom); } } } else if (token.isValue()) { @@ -1186,10 +1186,6 @@ public static MetaData fromXContent(XContentParser parser) throws IOException { } return builder.build(); } - - public static MetaData readFrom(StreamInput in) throws IOException { - return PROTO.readFrom(in); - } } private static final ToXContent.Params FORMAT_PARAMS; diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java index 8b3cbec0ebded..a27c42f55dcb9 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java @@ -25,6 +25,7 @@ import org.apache.logging.log4j.util.Supplier; import 
org.apache.lucene.util.CollectionUtil; import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.ResourceAlreadyExistsException; import org.elasticsearch.Version; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.admin.indices.alias.Alias; @@ -46,7 +47,6 @@ import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.routing.ShardRoutingState; import org.elasticsearch.cluster.routing.allocation.AllocationService; -import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Priority; import org.elasticsearch.common.Strings; @@ -59,20 +59,20 @@ import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.settings.IndexScopedSettings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.env.Environment; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.index.IndexService; -import org.elasticsearch.index.NodeServicesProvider; import org.elasticsearch.index.mapper.DocumentMapper; -import org.elasticsearch.index.mapper.MapperParsingException; import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.mapper.MapperService.MergeReason; import org.elasticsearch.index.query.QueryShardContext; -import org.elasticsearch.indices.IndexAlreadyExistsException; import org.elasticsearch.indices.IndexCreationException; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.indices.InvalidIndexNameException; +import org.elasticsearch.indices.cluster.IndicesClusterStateService.AllocatedIndices.IndexRemovalReason; import org.elasticsearch.threadpool.ThreadPool; import org.joda.time.DateTime; import org.joda.time.DateTimeZone; @@ -89,8 +89,10 @@ import java.util.Map; import java.util.Set; import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.BiFunction; import java.util.function.Predicate; +import static org.elasticsearch.action.support.ContextPreservingActionListener.wrapPreservingContext; import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS; import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_CREATION_DATE; import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_INDEX_UUID; @@ -110,45 +112,60 @@ public class MetaDataCreateIndexService extends AbstractComponent { private final AllocationService allocationService; private final AliasValidator aliasValidator; private final Environment env; - private final NodeServicesProvider nodeServicesProvider; private final IndexScopedSettings indexScopedSettings; private final ActiveShardsObserver activeShardsObserver; + private final NamedXContentRegistry xContentRegistry; + private final ThreadPool threadPool; @Inject public MetaDataCreateIndexService(Settings settings, ClusterService clusterService, IndicesService indicesService, AllocationService allocationService, AliasValidator aliasValidator, Environment env, - NodeServicesProvider nodeServicesProvider, IndexScopedSettings indexScopedSettings, - ThreadPool threadPool) { + IndexScopedSettings indexScopedSettings, ThreadPool threadPool, + NamedXContentRegistry xContentRegistry) { super(settings); this.clusterService = clusterService; this.indicesService = indicesService; 
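// --- Illustrative aside (not part of the patch) -------------------------------
// In the MetaData.fromXContent change above, each custom section is resolved through
// the named-object registry and, when the name is unknown (for example a plugin that
// is not installed), the parser warns and skips the section instead of failing the
// whole parse. A rough sketch of that tolerant dispatch, with a plain Map standing
// in for the registry and System.err standing in for the logger (both hypothetical):
import java.util.Map;
import java.util.function.Function;

class TolerantSectionParserSketch {
    static Object parseSection(Map<String, Function<String, Object>> registry,
                               String sectionName, String rawSection) {
        Function<String, Object> sectionParser = registry.get(sectionName);
        if (sectionParser == null) {
            // unknown section: warn and skip rather than throwing
            System.err.println("Skipping unknown custom object with type " + sectionName);
            return null;
        }
        return sectionParser.apply(rawSection);
    }
}
// -------------------------------------------------------------------------------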
this.allocationService = allocationService; this.aliasValidator = aliasValidator; this.env = env; - this.nodeServicesProvider = nodeServicesProvider; this.indexScopedSettings = indexScopedSettings; this.activeShardsObserver = new ActiveShardsObserver(settings, clusterService, threadPool); + this.threadPool = threadPool; + this.xContentRegistry = xContentRegistry; } + /** + * Validate the name for an index against some static rules and a cluster state. + */ public static void validateIndexName(String index, ClusterState state) { + validateIndexOrAliasName(index, InvalidIndexNameException::new); + if (!index.toLowerCase(Locale.ROOT).equals(index)) { + throw new InvalidIndexNameException(index, "must be lowercase"); + } if (state.routingTable().hasIndex(index)) { - throw new IndexAlreadyExistsException(state.routingTable().index(index).getIndex()); + throw new ResourceAlreadyExistsException(state.routingTable().index(index).getIndex()); } if (state.metaData().hasIndex(index)) { - throw new IndexAlreadyExistsException(state.metaData().index(index).getIndex()); + throw new ResourceAlreadyExistsException(state.metaData().index(index).getIndex()); + } + if (state.metaData().hasAlias(index)) { + throw new InvalidIndexNameException(index, "already exists as alias"); } + } + + /** + * Validate the name for an index or alias against some static rules. + */ + public static void validateIndexOrAliasName(String index, BiFunction exceptionCtor) { if (!Strings.validFileName(index)) { - throw new InvalidIndexNameException(index, "must not contain the following characters " + Strings.INVALID_FILENAME_CHARS); + throw exceptionCtor.apply(index, "must not contain the following characters " + Strings.INVALID_FILENAME_CHARS); } if (index.contains("#")) { - throw new InvalidIndexNameException(index, "must not contain '#'"); + throw exceptionCtor.apply(index, "must not contain '#'"); } if (index.charAt(0) == '_' || index.charAt(0) == '-' || index.charAt(0) == '+') { - throw new InvalidIndexNameException(index, "must not start with '_', '-', or '+'"); - } - if (!index.toLowerCase(Locale.ROOT).equals(index)) { - throw new InvalidIndexNameException(index, "must be lowercase"); + throw exceptionCtor.apply(index, "must not start with '_', '-', or '+'"); } int byteCount = 0; try { @@ -158,15 +175,10 @@ public static void validateIndexName(String index, ClusterState state) { throw new ElasticsearchException("Unable to determine length of index name", e); } if (byteCount > MAX_INDEX_NAME_BYTES) { - throw new InvalidIndexNameException(index, - "index name is too long, (" + byteCount + - " > " + MAX_INDEX_NAME_BYTES + ")"); - } - if (state.metaData().hasAlias(index)) { - throw new InvalidIndexNameException(index, "already exists as alias"); + throw exceptionCtor.apply(index, "index name is too long, (" + byteCount + " > " + MAX_INDEX_NAME_BYTES + ")"); } if (index.equals(".") || index.equals("..")) { - throw new InvalidIndexNameException(index, "must not be '.' or '..'"); + throw exceptionCtor.apply(index, "must not be '.' 
or '..'"); } } @@ -210,7 +222,9 @@ private void onlyCreateIndex(final CreateIndexClusterStateUpdateRequest request, request.settings(updatedSettingsBuilder.build()); clusterService.submitStateUpdateTask("create-index [" + request.index() + "], cause [" + request.cause() + "]", - new AckedClusterStateUpdateTask(Priority.URGENT, request, listener) { + new AckedClusterStateUpdateTask(Priority.URGENT, request, + wrapPreservingContext(listener, threadPool.getThreadContext())) { + @Override protected ClusterStateUpdateResponse newResponse(boolean acknowledged) { return new ClusterStateUpdateResponse(acknowledged); @@ -219,7 +233,8 @@ protected ClusterStateUpdateResponse newResponse(boolean acknowledged) { @Override public ClusterState execute(ClusterState currentState) throws Exception { Index createdIndex = null; - String removalReason = null; + String removalExtraInfo = null; + IndexRemovalReason removalReason = IndexRemovalReason.FAILURE; try { validate(request, currentState); @@ -241,62 +256,71 @@ public ClusterState execute(ClusterState currentState) throws Exception { List templateNames = new ArrayList<>(); for (Map.Entry entry : request.mappings().entrySet()) { - mappings.put(entry.getKey(), MapperService.parseMapping(entry.getValue())); + mappings.put(entry.getKey(), MapperService.parseMapping(xContentRegistry, entry.getValue())); } for (Map.Entry entry : request.customs().entrySet()) { customs.put(entry.getKey(), entry.getValue()); } - // apply templates, merging the mappings into the request mapping if exists - for (IndexTemplateMetaData template : templates) { - templateNames.add(template.getName()); - for (ObjectObjectCursor cursor : template.mappings()) { - if (mappings.containsKey(cursor.key)) { - XContentHelper.mergeDefaults(mappings.get(cursor.key), MapperService.parseMapping(cursor.value.string())); - } else { - mappings.put(cursor.key, MapperService.parseMapping(cursor.value.string())); - } - } - // handle custom - for (ObjectObjectCursor cursor : template.customs()) { - String type = cursor.key; - IndexMetaData.Custom custom = cursor.value; - IndexMetaData.Custom existing = customs.get(type); - if (existing == null) { - customs.put(type, custom); - } else { - IndexMetaData.Custom merged = existing.mergeWith(custom); - customs.put(type, merged); - } - } - //handle aliases - for (ObjectObjectCursor cursor : template.aliases()) { - AliasMetaData aliasMetaData = cursor.value; - //if an alias with same name came with the create index request itself, - // ignore this one taken from the index template - if (request.aliases().contains(new Alias(aliasMetaData.alias()))) { - continue; + final Index shrinkFromIndex = request.shrinkFrom(); + + if (shrinkFromIndex == null) { + // apply templates, merging the mappings into the request mapping if exists + for (IndexTemplateMetaData template : templates) { + templateNames.add(template.getName()); + for (ObjectObjectCursor cursor : template.mappings()) { + String mappingString = cursor.value.string(); + if (mappings.containsKey(cursor.key)) { + XContentHelper.mergeDefaults(mappings.get(cursor.key), + MapperService.parseMapping(xContentRegistry, mappingString)); + } else { + mappings.put(cursor.key, + MapperService.parseMapping(xContentRegistry, mappingString)); + } } - //if an alias with same name was already processed, ignore this one - if (templatesAliases.containsKey(cursor.key)) { - continue; + // handle custom + for (ObjectObjectCursor cursor : template.customs()) { + String type = cursor.key; + IndexMetaData.Custom custom = cursor.value; 
+ IndexMetaData.Custom existing = customs.get(type); + if (existing == null) { + customs.put(type, custom); + } else { + IndexMetaData.Custom merged = existing.mergeWith(custom); + customs.put(type, merged); + } } - - //Allow templatesAliases to be templated by replacing a token with the name of the index that we are applying it to - if (aliasMetaData.alias().contains("{index}")) { - String templatedAlias = aliasMetaData.alias().replace("{index}", request.index()); - aliasMetaData = AliasMetaData.newAliasMetaData(aliasMetaData, templatedAlias); + //handle aliases + for (ObjectObjectCursor cursor : template.aliases()) { + AliasMetaData aliasMetaData = cursor.value; + //if an alias with same name came with the create index request itself, + // ignore this one taken from the index template + if (request.aliases().contains(new Alias(aliasMetaData.alias()))) { + continue; + } + //if an alias with same name was already processed, ignore this one + if (templatesAliases.containsKey(cursor.key)) { + continue; + } + + //Allow templatesAliases to be templated by replacing a token with the name of the index that we are applying it to + if (aliasMetaData.alias().contains("{index}")) { + String templatedAlias = aliasMetaData.alias().replace("{index}", request.index()); + aliasMetaData = AliasMetaData.newAliasMetaData(aliasMetaData, templatedAlias); + } + + aliasValidator.validateAliasMetaData(aliasMetaData, request.index(), currentState.metaData()); + templatesAliases.put(aliasMetaData.alias(), aliasMetaData); } - - aliasValidator.validateAliasMetaData(aliasMetaData, request.index(), currentState.metaData()); - templatesAliases.put(aliasMetaData.alias(), aliasMetaData); } } Settings.Builder indexSettingsBuilder = Settings.builder(); - // apply templates, here, in reverse order, since first ones are better matching - for (int i = templates.size() - 1; i >= 0; i--) { - indexSettingsBuilder.put(templates.get(i).settings()); + if (shrinkFromIndex == null) { + // apply templates, here, in reverse order, since first ones are better matching + for (int i = templates.size() - 1; i >= 0; i--) { + indexSettingsBuilder.put(templates.get(i).settings()); + } } // now, put the request settings, so they override templates indexSettingsBuilder.put(request.settings()); @@ -312,29 +336,35 @@ public ClusterState execute(ClusterState currentState) throws Exception { if (indexSettingsBuilder.get(SETTING_VERSION_CREATED) == null) { DiscoveryNodes nodes = currentState.nodes(); - final Version createdVersion = Version.smallest(Version.CURRENT, nodes.getSmallestNonClientNodeVersion()); + final Version createdVersion = Version.min(Version.CURRENT, nodes.getSmallestNonClientNodeVersion()); indexSettingsBuilder.put(SETTING_VERSION_CREATED, createdVersion); } if (indexSettingsBuilder.get(SETTING_CREATION_DATE) == null) { indexSettingsBuilder.put(SETTING_CREATION_DATE, new DateTime(DateTimeZone.UTC).getMillis()); } - + indexSettingsBuilder.put(IndexMetaData.SETTING_INDEX_PROVIDED_NAME, request.getProvidedName()); indexSettingsBuilder.put(SETTING_INDEX_UUID, UUIDs.randomBase64UUID()); - final Index shrinkFromIndex = request.shrinkFrom(); - int routingNumShards = IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.get(indexSettingsBuilder.build());; - if (shrinkFromIndex != null) { - prepareShrinkIndexSettings(currentState, mappings.keySet(), indexSettingsBuilder, shrinkFromIndex, - request.index()); - IndexMetaData sourceMetaData = currentState.metaData().getIndexSafe(shrinkFromIndex); + final IndexMetaData.Builder tmpImdBuilder = 
IndexMetaData.builder(request.index()); + + final int routingNumShards; + if (shrinkFromIndex == null) { + routingNumShards = IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.get(indexSettingsBuilder.build()); + } else { + final IndexMetaData sourceMetaData = currentState.metaData().getIndexSafe(shrinkFromIndex); routingNumShards = sourceMetaData.getRoutingNumShards(); } + tmpImdBuilder.setRoutingNumShards(routingNumShards); + + if (shrinkFromIndex != null) { + prepareShrinkIndexSettings( + currentState, mappings.keySet(), indexSettingsBuilder, shrinkFromIndex, request.index()); + } + final Settings actualIndexSettings = indexSettingsBuilder.build(); + tmpImdBuilder.settings(actualIndexSettings); - Settings actualIndexSettings = indexSettingsBuilder.build(); - IndexMetaData.Builder tmpImdBuilder = IndexMetaData.builder(request.index()) - .setRoutingNumShards(routingNumShards); // Set up everything, now locally create the index to see that things are ok, and apply - final IndexMetaData tmpImd = tmpImdBuilder.settings(actualIndexSettings).build(); + final IndexMetaData tmpImd = tmpImdBuilder.build(); ActiveShardCount waitForActiveShards = request.waitForActiveShards(); if (waitForActiveShards == ActiveShardCount.DEFAULT) { waitForActiveShards = tmpImd.getWaitForActiveShards(); @@ -345,26 +375,29 @@ public ClusterState execute(ClusterState currentState) throws Exception { (tmpImd.getNumberOfReplicas() + 1) + "]"); } // create the index here (on the master) to validate it can be created, as well as adding the mapping - final IndexService indexService = indicesService.createIndex(nodeServicesProvider, tmpImd, Collections.emptyList()); + final IndexService indexService = indicesService.createIndex(tmpImd, Collections.emptyList()); createdIndex = indexService.index(); // now add the mappings MapperService mapperService = indexService.mapperService(); try { - mapperService.merge(mappings, request.updateAllTypes()); - } catch (MapperParsingException mpe) { - removalReason = "failed on parsing default mapping/mappings on index creation"; - throw mpe; + mapperService.merge(mappings, MergeReason.MAPPING_UPDATE, request.updateAllTypes()); + } catch (Exception e) { + removalExtraInfo = "failed on parsing default mapping/mappings on index creation"; + throw e; } - final QueryShardContext queryShardContext = indexService.newQueryShardContext(); + // the context is only used for validation so it's fine to pass fake values for the shard id and the current + // timestamp + final QueryShardContext queryShardContext = indexService.newQueryShardContext(0, null, () -> 0L); for (Alias alias : request.aliases()) { if (Strings.hasLength(alias.filter())) { - aliasValidator.validateAliasFilter(alias.name(), alias.filter(), queryShardContext); + aliasValidator.validateAliasFilter(alias.name(), alias.filter(), queryShardContext, xContentRegistry); } } for (AliasMetaData aliasMetaData : templatesAliases.values()) { if (aliasMetaData.filter() != null) { - aliasValidator.validateAliasFilter(aliasMetaData.alias(), aliasMetaData.filter().uncompressed(), queryShardContext); + aliasValidator.validateAliasFilter(aliasMetaData.alias(), aliasMetaData.filter().uncompressed(), + queryShardContext, xContentRegistry); } } @@ -378,6 +411,11 @@ public ClusterState execute(ClusterState currentState) throws Exception { final IndexMetaData.Builder indexMetaDataBuilder = IndexMetaData.builder(request.index()) .settings(actualIndexSettings) .setRoutingNumShards(routingNumShards); + + for (int shardId = 0; shardId < tmpImd.getNumberOfShards(); 
shardId++) { + indexMetaDataBuilder.primaryTerm(shardId, tmpImd.primaryTerm(shardId)); + } + for (MappingMetaData mappingMd : mappingsMetaData.values()) { indexMetaDataBuilder.putMapping(mappingMd); } @@ -401,7 +439,7 @@ public ClusterState execute(ClusterState currentState) throws Exception { try { indexMetaData = indexMetaDataBuilder.build(); } catch (Exception e) { - removalReason = "failed to build index metadata"; + removalExtraInfo = "failed to build index metadata"; throw e; } @@ -430,24 +468,24 @@ public ClusterState execute(ClusterState currentState) throws Exception { if (request.state() == State.OPEN) { RoutingTable.Builder routingTableBuilder = RoutingTable.builder(updatedState.routingTable()) .addAsNew(updatedState.metaData().index(request.index())); - RoutingAllocation.Result routingResult = allocationService.reroute( + updatedState = allocationService.reroute( ClusterState.builder(updatedState).routingTable(routingTableBuilder.build()).build(), "index [" + request.index() + "] created"); - updatedState = ClusterState.builder(updatedState).routingResult(routingResult).build(); } - removalReason = "cleaning up after validating index on master"; + removalExtraInfo = "cleaning up after validating index on master"; + removalReason = IndexRemovalReason.NO_LONGER_ASSIGNED; return updatedState; } finally { if (createdIndex != null) { // Index was already partially created - need to clean up - indicesService.removeIndex(createdIndex, removalReason != null ? removalReason : "failed to create index"); + indicesService.removeIndex(createdIndex, removalReason, removalExtraInfo); } } } @Override public void onFailure(String source, Exception e) { - if (e instanceof IndexAlreadyExistsException) { + if (e instanceof ResourceAlreadyExistsException) { logger.trace((Supplier) () -> new ParameterizedMessage("[{}] failed to create", request.index()), e); } else { logger.debug((Supplier) () -> new ParameterizedMessage("[{}] failed to create", request.index()), e); @@ -466,12 +504,7 @@ private List findTemplates(CreateIndexClusterStateUpdateR } } - CollectionUtil.timSort(templates, new Comparator() { - @Override - public int compare(IndexTemplateMetaData o1, IndexTemplateMetaData o2) { - return o2.order() - o1.order(); - } - }); + CollectionUtil.timSort(templates, Comparator.comparingInt(IndexTemplateMetaData::order).reversed()); return templates; } @@ -500,15 +533,6 @@ List getIndexSettingsValidationErrors(Settings settings) { validationErrors.add("custom path [" + customPath + "] is not a sub-path of path.shared_data [" + env.sharedDataFile() + "]"); } } - //norelease - this can be removed? 
- Integer number_of_primaries = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_SHARDS, null); - Integer number_of_replicas = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, null); - if (number_of_primaries != null && number_of_primaries <= 0) { - validationErrors.add("index must have 1 or more primary shards"); - } - if (number_of_replicas != null && number_of_replicas < 0) { - validationErrors.add("index must have 0 or more replica shards"); - } return validationErrors; } @@ -520,7 +544,7 @@ static List validateShrinkIndex(ClusterState state, String sourceIndex, Set targetIndexMappingsTypes, String targetIndexName, Settings targetIndexSettings) { if (state.metaData().hasIndex(targetIndexName)) { - throw new IndexAlreadyExistsException(state.metaData().index(targetIndexName).getIndex()); + throw new ResourceAlreadyExistsException(state.metaData().index(targetIndexName).getIndex()); } final IndexMetaData sourceMetaData = state.metaData().index(sourceIndex); if (sourceMetaData == null) { @@ -541,6 +565,7 @@ static List validateShrinkIndex(ClusterState state, String sourceIndex, throw new IllegalArgumentException("mappings are not allowed when shrinking indices" + ", all mappings are copied from the source index"); } + if (IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.exists(targetIndexSettings)) { // this method applies all necessary checks ie. if the target shards are less than the source shards // of if the source shards are divisible by the number of target shards @@ -578,13 +603,16 @@ static void prepareShrinkIndexSettings(ClusterState currentState, Set ma indexSettingsBuilder // we use "i.r.a.initial_recovery" rather than "i.r.a.require|include" since we want the replica to allocate right away // once we are allocated. - .put("index.routing.allocation.initial_recovery._id", + .put(IndexMetaData.INDEX_ROUTING_INITIAL_RECOVERY_GROUP_SETTING.getKey() + "_id", Strings.arrayToCommaDelimitedString(nodesToAllocateOn.toArray())) // we only try once and then give up with a shrink index .put("index.allocation.max_retries", 1) // now copy all similarity / analysis settings - this overrides all settings from the user unless they // wanna add extra settings + .put(IndexMetaData.SETTING_VERSION_CREATED, sourceMetaData.getCreationVersion()) + .put(IndexMetaData.SETTING_VERSION_UPGRADED, sourceMetaData.getUpgradedVersion()) .put(sourceMetaData.getSettings().filter(analysisSimilarityPredicate)) + .put(IndexMetaData.SETTING_ROUTING_PARTITION_SIZE, sourceMetaData.getRoutingPartitionSize()) .put(IndexMetaData.INDEX_SHRINK_SOURCE_NAME.getKey(), shrinkFromIndex.getName()) .put(IndexMetaData.INDEX_SHRINK_SOURCE_UUID.getKey(), shrinkFromIndex.getUUID()); } diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataDeleteIndexService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataDeleteIndexService.java index 07e64eac619ea..a2212b5c3f01c 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataDeleteIndexService.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataDeleteIndexService.java @@ -23,21 +23,23 @@ import org.elasticsearch.action.admin.indices.delete.DeleteIndexClusterStateUpdateRequest; import org.elasticsearch.cluster.AckedClusterStateUpdateTask; import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.RestoreInProgress; import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse; import org.elasticsearch.cluster.block.ClusterBlocks; import 
org.elasticsearch.cluster.routing.RoutingTable; import org.elasticsearch.cluster.routing.allocation.AllocationService; -import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Priority; +import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.index.Index; +import org.elasticsearch.snapshots.RestoreService; import org.elasticsearch.snapshots.SnapshotsService; import java.util.Arrays; -import java.util.Collection; import java.util.Set; import static java.util.stream.Collectors.toSet; @@ -64,7 +66,7 @@ public void deleteIndices(final DeleteIndexClusterStateUpdateRequest request, throw new IllegalArgumentException("Index name is required"); } - clusterService.submitStateUpdateTask("delete-index " + request.indices(), + clusterService.submitStateUpdateTask("delete-index " + Arrays.toString(request.indices()), new AckedClusterStateUpdateTask(Priority.URGENT, request, listener) { @Override @@ -74,7 +76,7 @@ protected ClusterStateUpdateResponse newResponse(boolean acknowledged) { @Override public ClusterState execute(final ClusterState currentState) { - return deleteIndices(currentState, Arrays.asList(request.indices())); + return deleteIndices(currentState, Sets.newHashSet(request.indices())); } }); } @@ -82,7 +84,7 @@ public ClusterState execute(final ClusterState currentState) { /** * Delete some indices from the cluster state. */ - public ClusterState deleteIndices(ClusterState currentState, Collection indices) { + public ClusterState deleteIndices(ClusterState currentState, Set indices) { final MetaData meta = currentState.metaData(); final Set metaDatas = indices.stream().map(i -> meta.getIndexSafe(i)).collect(toSet()); // Check if index deletion conflicts with any running snapshots @@ -95,7 +97,7 @@ public ClusterState deleteIndices(ClusterState currentState, Collection i final int previousGraveyardSize = graveyardBuilder.tombstones().size(); for (final Index index : indices) { String indexName = index.getName(); - logger.debug("[{}] deleting index", index); + logger.info("{} deleting index", index); routingTableBuilder.remove(indexName); clusterBlocksBuilder.removeIndexBlocks(indexName); metaDataBuilder.remove(indexName); @@ -108,9 +110,26 @@ public ClusterState deleteIndices(ClusterState currentState, Collection i MetaData newMetaData = metaDataBuilder.build(); ClusterBlocks blocks = clusterBlocksBuilder.build(); - RoutingAllocation.Result routingResult = allocationService.reroute( - ClusterState.builder(currentState).routingTable(routingTableBuilder.build()).metaData(newMetaData).build(), + + // update snapshot restore entries + ImmutableOpenMap customs = currentState.getCustoms(); + final RestoreInProgress restoreInProgress = currentState.custom(RestoreInProgress.TYPE); + if (restoreInProgress != null) { + RestoreInProgress updatedRestoreInProgress = RestoreService.updateRestoreStateWithDeletedIndices(restoreInProgress, indices); + if (updatedRestoreInProgress != restoreInProgress) { + ImmutableOpenMap.Builder builder = ImmutableOpenMap.builder(customs); + builder.put(RestoreInProgress.TYPE, updatedRestoreInProgress); + customs = builder.build(); + } + } + + return allocationService.reroute( + ClusterState.builder(currentState) + 
.routingTable(routingTableBuilder.build()) + .metaData(newMetaData) + .blocks(blocks) + .customs(customs) + .build(), "deleted indices [" + indices + "]"); - return ClusterState.builder(currentState).routingResult(routingResult).metaData(newMetaData).blocks(blocks).build(); } } diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexAliasesService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexAliasesService.java index c21454a09a0f0..feee41d742326 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexAliasesService.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexAliasesService.java @@ -19,7 +19,6 @@ package org.elasticsearch.cluster.metadata; -import com.carrotsearch.hppc.cursors.ObjectCursor; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.admin.indices.alias.IndicesAliasesClusterStateUpdateRequest; @@ -33,10 +32,10 @@ import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.index.IndexService; -import org.elasticsearch.index.NodeServicesProvider; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.indices.IndicesService; @@ -50,6 +49,7 @@ import java.util.function.Function; import static java.util.Collections.emptyList; +import static org.elasticsearch.indices.cluster.IndicesClusterStateService.AllocatedIndices.IndexRemovalReason.NO_LONGER_ASSIGNED; /** * Service responsible for submitting add and remove aliases requests @@ -62,19 +62,19 @@ public class MetaDataIndexAliasesService extends AbstractComponent { private final AliasValidator aliasValidator; - private final NodeServicesProvider nodeServicesProvider; - private final MetaDataDeleteIndexService deleteIndexService; + private final NamedXContentRegistry xContentRegistry; + @Inject public MetaDataIndexAliasesService(Settings settings, ClusterService clusterService, IndicesService indicesService, - AliasValidator aliasValidator, NodeServicesProvider nodeServicesProvider, MetaDataDeleteIndexService deleteIndexService) { + AliasValidator aliasValidator, MetaDataDeleteIndexService deleteIndexService, NamedXContentRegistry xContentRegistry) { super(settings); this.clusterService = clusterService; this.indicesService = indicesService; this.aliasValidator = aliasValidator; - this.nodeServicesProvider = nodeServicesProvider; this.deleteIndexService = deleteIndexService; + this.xContentRegistry = xContentRegistry; } public void indicesAliases(final IndicesAliasesClusterStateUpdateRequest request, @@ -139,20 +139,19 @@ ClusterState innerExecute(ClusterState currentState, Iterable actio if (indexService == null) { // temporarily create the index and add mappings so we can parse the filter try { - indexService = indicesService.createIndex(nodeServicesProvider, index, emptyList()); + indexService = indicesService.createIndex(index, emptyList()); + indicesToClose.add(index.getIndex()); } catch (IOException e) { throw new ElasticsearchException("Failed to create temporary index for parsing the alias", e); } - for (ObjectCursor cursor : index.getMappings().values()) { - MappingMetaData mappingMetaData = cursor.value; - 
indexService.mapperService().merge(mappingMetaData.type(), mappingMetaData.source(), - MapperService.MergeReason.MAPPING_RECOVERY, false); - } - indicesToClose.add(index.getIndex()); + indexService.mapperService().merge(index, MapperService.MergeReason.MAPPING_RECOVERY, false); } indices.put(action.getIndex(), indexService); } - aliasValidator.validateAliasFilter(alias, filter, indexService.newQueryShardContext()); + // the context is only used for validation so it's fine to pass fake values for the shard id and the current + // timestamp + aliasValidator.validateAliasFilter(alias, filter, indexService.newQueryShardContext(0, null, () -> 0L), + xContentRegistry); } }; changed |= action.apply(newAliasValidator, metadata, index); @@ -169,7 +168,7 @@ ClusterState innerExecute(ClusterState currentState, Iterable actio return currentState; } finally { for (Index index : indicesToClose) { - indicesService.removeIndex(index, "created for alias processing"); + indicesService.removeIndex(index, NO_LONGER_ASSIGNED, "created for alias processing"); } } } diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexStateService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexStateService.java index 53a0ede809a55..7f8a176243aa6 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexStateService.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexStateService.java @@ -20,6 +20,7 @@ package org.elasticsearch.cluster.metadata; import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.admin.indices.close.CloseIndexClusterStateUpdateRequest; import org.elasticsearch.action.admin.indices.open.OpenIndexClusterStateUpdateRequest; @@ -31,14 +32,12 @@ import org.elasticsearch.cluster.block.ClusterBlocks; import org.elasticsearch.cluster.routing.RoutingTable; import org.elasticsearch.cluster.routing.allocation.AllocationService; -import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Priority; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.Index; -import org.elasticsearch.index.NodeServicesProvider; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.snapshots.RestoreService; @@ -55,22 +54,20 @@ */ public class MetaDataIndexStateService extends AbstractComponent { - public static final ClusterBlock INDEX_CLOSED_BLOCK = new ClusterBlock(4, "index closed", false, false, RestStatus.FORBIDDEN, ClusterBlockLevel.READ_WRITE); + public static final ClusterBlock INDEX_CLOSED_BLOCK = new ClusterBlock(4, "index closed", false, false, false, RestStatus.FORBIDDEN, ClusterBlockLevel.READ_WRITE); private final ClusterService clusterService; private final AllocationService allocationService; private final MetaDataIndexUpgradeService metaDataIndexUpgradeService; - private final NodeServicesProvider nodeServiceProvider; private final IndicesService indicesService; @Inject public MetaDataIndexStateService(Settings settings, ClusterService clusterService, AllocationService allocationService, MetaDataIndexUpgradeService metaDataIndexUpgradeService, - NodeServicesProvider nodeServicesProvider, IndicesService indicesService) { + 
IndicesService indicesService) { super(settings); - this.nodeServiceProvider = nodeServicesProvider; this.indicesService = indicesService; this.clusterService = clusterService; this.allocationService = allocationService; @@ -125,11 +122,10 @@ public ClusterState execute(ClusterState currentState) { rtBuilder.remove(index.getIndex().getName()); } - RoutingAllocation.Result routingResult = allocationService.reroute( + //no explicit wait for other nodes needed as we use AckedClusterStateUpdateTask + return allocationService.reroute( ClusterState.builder(updatedState).routingTable(rtBuilder.build()).build(), "indices closed [" + indicesAsString + "]"); - //no explicit wait for other nodes needed as we use AckedClusterStateUpdateTask - return ClusterState.builder(updatedState).routingResult(routingResult).build(); } }); } @@ -165,14 +161,16 @@ public ClusterState execute(ClusterState currentState) { MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData()); ClusterBlocks.Builder blocksBuilder = ClusterBlocks.builder() .blocks(currentState.blocks()); + final Version minIndexCompatibilityVersion = currentState.getNodes().getMaxNodeVersion() + .minimumIndexCompatibilityVersion(); for (IndexMetaData closedMetaData : indicesToOpen) { final String indexName = closedMetaData.getIndex().getName(); IndexMetaData indexMetaData = IndexMetaData.builder(closedMetaData).state(IndexMetaData.State.OPEN).build(); // The index might be closed because we couldn't import it due to old incompatible version // We need to check that this index can be upgraded to the current version - indexMetaData = metaDataIndexUpgradeService.upgradeIndexMetaData(indexMetaData); + indexMetaData = metaDataIndexUpgradeService.upgradeIndexMetaData(indexMetaData, minIndexCompatibilityVersion); try { - indicesService.verifyIndexMetadata(nodeServiceProvider, indexMetaData, indexMetaData); + indicesService.verifyIndexMetadata(indexMetaData, indexMetaData); } catch (Exception e) { throw new ElasticsearchException("Failed to verify index " + indexMetaData.getIndex(), e); } @@ -188,11 +186,10 @@ public ClusterState execute(ClusterState currentState) { rtBuilder.addAsFromCloseToOpen(updatedState.metaData().getIndexSafe(index.getIndex())); } - RoutingAllocation.Result routingResult = allocationService.reroute( + //no explicit wait for other nodes needed as we use AckedClusterStateUpdateTask + return allocationService.reroute( ClusterState.builder(updatedState).routingTable(rtBuilder.build()).build(), "indices opened [" + indicesAsString + "]"); - //no explicit wait for other nodes needed as we use AckedClusterStateUpdateTask - return ClusterState.builder(updatedState).routingResult(routingResult).build(); } }); } diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java index 101f59f3aecbe..0a16a97a62d23 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java @@ -19,6 +19,7 @@ package org.elasticsearch.cluster.metadata; import com.carrotsearch.hppc.cursors.ObjectCursor; + import org.elasticsearch.Version; import org.elasticsearch.action.admin.indices.alias.Alias; import org.elasticsearch.action.support.master.MasterNodeRequest; @@ -32,14 +33,15 @@ import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.inject.Inject; import 
org.elasticsearch.common.regex.Regex; +import org.elasticsearch.common.settings.IndexScopedSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexService; -import org.elasticsearch.index.NodeServicesProvider; import org.elasticsearch.index.mapper.MapperParsingException; import org.elasticsearch.index.mapper.MapperService; -import org.elasticsearch.indices.IndexTemplateAlreadyExistsException; +import org.elasticsearch.index.mapper.MapperService.MergeReason; import org.elasticsearch.indices.IndexTemplateMissingException; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.indices.InvalidIndexTemplateException; @@ -53,6 +55,8 @@ import java.util.Map; import java.util.Set; +import static org.elasticsearch.indices.cluster.IndicesClusterStateService.AllocatedIndices.IndexRemovalReason.NO_LONGER_ASSIGNED; + /** * Service responsible for submitting index templates updates */ @@ -62,16 +66,21 @@ public class MetaDataIndexTemplateService extends AbstractComponent { private final AliasValidator aliasValidator; private final IndicesService indicesService; private final MetaDataCreateIndexService metaDataCreateIndexService; - private final NodeServicesProvider nodeServicesProvider; + private final IndexScopedSettings indexScopedSettings; + private final NamedXContentRegistry xContentRegistry; @Inject - public MetaDataIndexTemplateService(Settings settings, ClusterService clusterService, MetaDataCreateIndexService metaDataCreateIndexService, AliasValidator aliasValidator, IndicesService indicesService, NodeServicesProvider nodeServicesProvider) { + public MetaDataIndexTemplateService(Settings settings, ClusterService clusterService, + MetaDataCreateIndexService metaDataCreateIndexService, + AliasValidator aliasValidator, IndicesService indicesService, + IndexScopedSettings indexScopedSettings, NamedXContentRegistry xContentRegistry) { super(settings); this.clusterService = clusterService; this.aliasValidator = aliasValidator; this.indicesService = indicesService; this.metaDataCreateIndexService = metaDataCreateIndexService; - this.nodeServicesProvider = nodeServicesProvider; + this.indexScopedSettings = indexScopedSettings; + this.xContentRegistry = xContentRegistry; } public void removeTemplates(final RemoveRequest request, final RemoveListener listener) { @@ -157,10 +166,10 @@ public void onFailure(String source, Exception e) { @Override public ClusterState execute(ClusterState currentState) throws Exception { if (request.create && currentState.metaData().templates().containsKey(request.name)) { - throw new IndexTemplateAlreadyExistsException(request.name); + throw new IllegalArgumentException("index_template [" + request.name + "] already exists"); } - validateAndAddTemplate(request, templateBuilder, indicesService, nodeServicesProvider); + validateAndAddTemplate(request, templateBuilder, indicesService, xContentRegistry); for (Alias alias : request.aliases) { AliasMetaData aliasMetaData = AliasMetaData.builder(alias.name()).filter(alias.filter()) @@ -184,26 +193,31 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS }); } - private static void validateAndAddTemplate(final PutRequest request, IndexTemplateMetaData.Builder templateBuilder, IndicesService indicesService, - NodeServicesProvider nodeServicesProvider) throws Exception { + private static void 
validateAndAddTemplate(final PutRequest request, IndexTemplateMetaData.Builder templateBuilder, + IndicesService indicesService, NamedXContentRegistry xContentRegistry) throws Exception { Index createdIndex = null; final String temporaryIndexName = UUIDs.randomBase64UUID(); try { + // use the provided values, otherwise just pick valid dummy values + int dummyPartitionSize = IndexMetaData.INDEX_ROUTING_PARTITION_SIZE_SETTING.get(request.settings); + int dummyShards = request.settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_SHARDS, + dummyPartitionSize == 1 ? 1 : dummyPartitionSize + 1); //create index service for parsing and validating "mappings" Settings dummySettings = Settings.builder() .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) .put(request.settings) - .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1) + .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, dummyShards) .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0) .put(IndexMetaData.SETTING_INDEX_UUID, UUIDs.randomBase64UUID()) .build(); final IndexMetaData tmpIndexMetadata = IndexMetaData.builder(temporaryIndexName).settings(dummySettings).build(); - IndexService dummyIndexService = indicesService.createIndex(nodeServicesProvider, tmpIndexMetadata, Collections.emptyList()); + IndexService dummyIndexService = indicesService.createIndex(tmpIndexMetadata, Collections.emptyList()); createdIndex = dummyIndexService.index(); templateBuilder.order(request.order); + templateBuilder.version(request.version); templateBuilder.template(request.template); templateBuilder.settings(request.settings); @@ -214,14 +228,14 @@ private static void validateAndAddTemplate(final PutRequest request, IndexTempla } catch (Exception e) { throw new MapperParsingException("Failed to parse mapping [{}]: {}", e, entry.getKey(), e.getMessage()); } - mappingsForValidation.put(entry.getKey(), MapperService.parseMapping(entry.getValue())); + mappingsForValidation.put(entry.getKey(), MapperService.parseMapping(xContentRegistry, entry.getValue())); } - dummyIndexService.mapperService().merge(mappingsForValidation, false); + dummyIndexService.mapperService().merge(mappingsForValidation, MergeReason.MAPPING_UPDATE, false); } finally { if (createdIndex != null) { - indicesService.removeIndex(createdIndex, " created for parsing template mapping"); + indicesService.removeIndex(createdIndex, NO_LONGER_ASSIGNED, " created for parsing template mapping"); } } } @@ -259,6 +273,14 @@ private void validate(PutRequest request) { validationErrors.add("template must not contain the following characters " + Strings.INVALID_FILENAME_CHARS); } + try { + indexScopedSettings.validate(request.settings); + } catch (IllegalArgumentException iae) { + validationErrors.add(iae.getMessage()); + for (Throwable t : iae.getSuppressed()) { + validationErrors.add(t.getMessage()); + } + } List indexSettingsValidation = metaDataCreateIndexService.getIndexSettingsValidationErrors(request.settings); validationErrors.addAll(indexSettingsValidation); if (!validationErrors.isEmpty()) { @@ -288,6 +310,7 @@ public static class PutRequest { final String cause; boolean create; int order; + Integer version; String template; Settings settings = Settings.Builder.EMPTY_SETTINGS; Map mappings = new HashMap<>(); @@ -345,6 +368,11 @@ public PutRequest masterTimeout(TimeValue masterTimeout) { this.masterTimeout = masterTimeout; return this; } + + public PutRequest version(Integer version) { + this.version = version; + return this; + } } public static class PutResponse { diff --git 
a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java index d1141aeb9f477..909f60f1addd3 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java @@ -19,20 +19,30 @@ package org.elasticsearch.cluster.metadata; import com.carrotsearch.hppc.cursors.ObjectCursor; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.logging.log4j.util.Supplier; import org.apache.lucene.analysis.Analyzer; import org.elasticsearch.Version; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.IndexScopedSettings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.index.IndexSettings; -import org.elasticsearch.index.analysis.AnalysisService; +import org.elasticsearch.index.analysis.IndexAnalyzers; +import org.elasticsearch.index.analysis.AnalyzerScope; import org.elasticsearch.index.analysis.NamedAnalyzer; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.similarity.SimilarityService; import org.elasticsearch.indices.mapper.MapperRegistry; +import org.elasticsearch.plugins.Plugin; +import java.util.AbstractMap; +import java.util.Collection; import java.util.Collections; +import java.util.Map; +import java.util.Set; +import java.util.function.UnaryOperator; /** * This service is responsible for upgrading legacy index metadata to the current version @@ -44,14 +54,26 @@ */ public class MetaDataIndexUpgradeService extends AbstractComponent { + private final NamedXContentRegistry xContentRegistry; private final MapperRegistry mapperRegistry; private final IndexScopedSettings indexScopedSettings; + private final UnaryOperator upgraders; @Inject - public MetaDataIndexUpgradeService(Settings settings, MapperRegistry mapperRegistry, IndexScopedSettings indexScopedSettings) { + public MetaDataIndexUpgradeService(Settings settings, NamedXContentRegistry xContentRegistry, MapperRegistry mapperRegistry, + IndexScopedSettings indexScopedSettings, + Collection> indexMetaDataUpgraders) { super(settings); + this.xContentRegistry = xContentRegistry; this.mapperRegistry = mapperRegistry; this.indexScopedSettings = indexScopedSettings; + this.upgraders = indexMetaData -> { + IndexMetaData newIndexMetaData = indexMetaData; + for (UnaryOperator upgrader : indexMetaDataUpgraders) { + newIndexMetaData = upgrader.apply(newIndexMetaData); + } + return newIndexMetaData; + }; } /** @@ -61,19 +83,21 @@ public MetaDataIndexUpgradeService(Settings settings, MapperRegistry mapperRegis * If the index does not need upgrade it returns the index metadata unchanged, otherwise it returns a modified index metadata. If index * cannot be updated the method throws an exception. 
*/ - public IndexMetaData upgradeIndexMetaData(IndexMetaData indexMetaData) { + public IndexMetaData upgradeIndexMetaData(IndexMetaData indexMetaData, Version minimumIndexCompatibilityVersion) { // Throws an exception if there are too-old segments: if (isUpgraded(indexMetaData)) { assert indexMetaData == archiveBrokenIndexSettings(indexMetaData) : "all settings must have been upgraded before"; return indexMetaData; } - checkSupportedVersion(indexMetaData); + checkSupportedVersion(indexMetaData, minimumIndexCompatibilityVersion); IndexMetaData newMetaData = indexMetaData; // we have to run this first otherwise in we try to create IndexSettings // with broken settings and fail in checkMappingsCompatibility newMetaData = archiveBrokenIndexSettings(newMetaData); // only run the check with the upgraded settings!! checkMappingsCompatibility(newMetaData); + // apply plugin checks + newMetaData = upgraders.apply(newMetaData); return markAsUpgraded(newMetaData); } @@ -86,30 +110,26 @@ boolean isUpgraded(IndexMetaData indexMetaData) { } /** - * Elasticsearch 5.0 no longer supports indices with pre Lucene v5.0 (Elasticsearch v2.0.0.beta1) segments. All indices - * that were created before Elasticsearch v2.0.0.beta1 should be reindexed in Elasticsearch 2.x - * before they can be opened by this version of elasticsearch. */ - private void checkSupportedVersion(IndexMetaData indexMetaData) { - if (indexMetaData.getState() == IndexMetaData.State.OPEN && isSupportedVersion(indexMetaData) == false) { - throw new IllegalStateException("The index [" + indexMetaData.getIndex() + "] was created before v2.0.0.beta1." - + " It should be reindexed in Elasticsearch 2.x before upgrading to " + Version.CURRENT + "."); + * Elasticsearch v6.0 no longer supports indices created pre v5.0. All indices + * that were created before Elasticsearch v5.0 should be re-indexed in Elasticsearch 5.x + * before they can be opened by this version of elasticsearch. + */ + private void checkSupportedVersion(IndexMetaData indexMetaData, Version minimumIndexCompatibilityVersion) { + if (indexMetaData.getState() == IndexMetaData.State.OPEN && isSupportedVersion(indexMetaData, + minimumIndexCompatibilityVersion) == false) { + throw new IllegalStateException("The index [" + indexMetaData.getIndex() + "] was created with version [" + + indexMetaData.getCreationVersion() + "] but the minimum compatible version is [" + + + minimumIndexCompatibilityVersion + "]. It should be re-indexed in Elasticsearch " + minimumIndexCompatibilityVersion.major + + ".x before upgrading to " + Version.CURRENT + "."); } } /* * Returns true if this index can be supported by the current version of elasticsearch */ - private static boolean isSupportedVersion(IndexMetaData indexMetaData) { - if (indexMetaData.getCreationVersion().onOrAfter(Version.V_2_0_0_beta1)) { - // The index was created with elasticsearch that was using Lucene 5.2.1 - return true; - } - if (indexMetaData.getMinimumCompatibleVersion() != null && - indexMetaData.getMinimumCompatibleVersion().onOrAfter(org.apache.lucene.util.Version.LUCENE_5_0_0)) { - //The index was upgraded we can work with it - return true; - } - return false; + private static boolean isSupportedVersion(IndexMetaData indexMetaData, Version minimumIndexCompatibilityVersion) { + return indexMetaData.getCreationVersion().onOrAfter(minimumIndexCompatibilityVersion); } /** @@ -121,13 +141,31 @@ private void checkMappingsCompatibility(IndexMetaData indexMetaData) { // been started yet. 
However, we don't really need real analyzers at this stage - so we can fake it IndexSettings indexSettings = new IndexSettings(indexMetaData, this.settings); SimilarityService similarityService = new SimilarityService(indexSettings, Collections.emptyMap()); + final NamedAnalyzer fakeDefault = new NamedAnalyzer("fake_default", AnalyzerScope.INDEX, new Analyzer() { + @Override + protected TokenStreamComponents createComponents(String fieldName) { + throw new UnsupportedOperationException("shouldn't be here"); + } + }); + // this is just a fake map that always returns the same value for any possible string key + // also the entrySet impl isn't fully correct but we implement it since internally + // IndexAnalyzers will iterate over all analyzers to close them. + final Map analyzerMap = new AbstractMap() { + @Override + public NamedAnalyzer get(Object key) { + assert key instanceof String : "key must be a string but was: " + key.getClass(); + return new NamedAnalyzer((String)key, AnalyzerScope.INDEX, fakeDefault.analyzer()); + } - try (AnalysisService analysisService = new FakeAnalysisService(indexSettings)) { - MapperService mapperService = new MapperService(indexSettings, analysisService, similarityService, mapperRegistry, () -> null); - for (ObjectCursor cursor : indexMetaData.getMappings().values()) { - MappingMetaData mappingMetaData = cursor.value; - mapperService.merge(mappingMetaData.type(), mappingMetaData.source(), MapperService.MergeReason.MAPPING_RECOVERY, false); + @Override + public Set> entrySet() { + return Collections.emptySet(); } + }; + try (IndexAnalyzers fakeIndexAnalzyers = new IndexAnalyzers(indexSettings, fakeDefault, fakeDefault, fakeDefault, analyzerMap, analyzerMap)) { + MapperService mapperService = new MapperService(indexSettings, fakeIndexAnalzyers, xContentRegistry, similarityService, + mapperRegistry, () -> null); + mapperService.merge(indexMetaData, MapperService.MergeReason.MAPPING_RECOVERY, false); } } catch (Exception ex) { // Wrap the inner exception so we have the index name in the exception message @@ -143,37 +181,12 @@ private IndexMetaData markAsUpgraded(IndexMetaData indexMetaData) { return IndexMetaData.builder(indexMetaData).settings(settings).build(); } - /** - * A fake analysis server that returns the same keyword analyzer for all requests - */ - private static class FakeAnalysisService extends AnalysisService { - - private Analyzer fakeAnalyzer = new Analyzer() { - @Override - protected TokenStreamComponents createComponents(String fieldName) { - throw new UnsupportedOperationException("shouldn't be here"); - } - }; - - public FakeAnalysisService(IndexSettings indexSettings) { - super(indexSettings, Collections.emptyMap(), Collections.emptyMap(), Collections.emptyMap(), Collections.emptyMap()); - } - - @Override - public NamedAnalyzer analyzer(String name) { - return new NamedAnalyzer(name, fakeAnalyzer); - } - - @Override - public void close() { - fakeAnalyzer.close(); - super.close(); - } - } - IndexMetaData archiveBrokenIndexSettings(IndexMetaData indexMetaData) { final Settings settings = indexMetaData.getSettings(); - final Settings upgrade = indexScopedSettings.archiveUnknownOrBrokenSettings(settings); + final Settings upgrade = indexScopedSettings.archiveUnknownOrInvalidSettings( + settings, + e -> logger.warn("{} ignoring unknown index setting: [{}] with value [{}]; archiving", indexMetaData.getIndex(), e.getKey(), e.getValue()), + (e, ex) -> logger.warn((Supplier) () -> new ParameterizedMessage("{} ignoring invalid index setting: [{}] with 
value [{}]; archiving", indexMetaData.getIndex(), e.getKey(), e.getValue()), ex)); if (upgrade != settings) { return IndexMetaData.builder(indexMetaData).settings(upgrade).build(); } else { diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java index 8ce58637b1550..865b58c468a52 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java @@ -22,18 +22,18 @@ import com.carrotsearch.hppc.cursors.ObjectCursor; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; +import org.apache.lucene.util.IOUtils; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.admin.indices.mapping.put.PutMappingClusterStateUpdateRequest; import org.elasticsearch.cluster.AckedClusterStateTaskListener; +import org.elasticsearch.cluster.ClusterStateTaskExecutor; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.ClusterStateTaskConfig; -import org.elasticsearch.cluster.ClusterStateTaskExecutor; import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Priority; -import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.compress.CompressedXContent; import org.elasticsearch.common.inject.Inject; @@ -41,9 +41,9 @@ import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexService; -import org.elasticsearch.index.NodeServicesProvider; import org.elasticsearch.index.mapper.DocumentMapper; import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.mapper.MapperService.MergeReason; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.indices.InvalidTypeNameException; @@ -51,10 +51,11 @@ import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; -import java.util.HashSet; import java.util.List; import java.util.Map; -import java.util.Set; + +import static org.elasticsearch.indices.cluster.IndicesClusterStateService.AllocatedIndices.IndexRemovalReason.NO_LONGER_ASSIGNED; + /** * Service responsible for submitting mapping changes */ @@ -63,17 +64,15 @@ public class MetaDataMappingService extends AbstractComponent { private final ClusterService clusterService; private final IndicesService indicesService; - final ClusterStateTaskExecutor refreshExecutor = new RefreshTaskExecutor(); - final ClusterStateTaskExecutor putMappingExecutor = new PutMappingExecutor(); - private final NodeServicesProvider nodeServicesProvider; + final RefreshTaskExecutor refreshExecutor = new RefreshTaskExecutor(); + final PutMappingExecutor putMappingExecutor = new PutMappingExecutor(); @Inject - public MetaDataMappingService(Settings settings, ClusterService clusterService, IndicesService indicesService, NodeServicesProvider nodeServicesProvider) { + public MetaDataMappingService(Settings settings, ClusterService clusterService, IndicesService indicesService) { super(settings); this.clusterService = clusterService; this.indicesService = indicesService; - this.nodeServicesProvider = 
nodeServicesProvider; } static class RefreshTask { @@ -93,9 +92,9 @@ public String toString() { class RefreshTaskExecutor implements ClusterStateTaskExecutor { @Override - public BatchResult execute(ClusterState currentState, List tasks) throws Exception { + public ClusterTasksResult execute(ClusterState currentState, List tasks) throws Exception { ClusterState newClusterState = executeRefresh(currentState, tasks); - return BatchResult.builder().successes(tasks).build(newClusterState); + return ClusterTasksResult.builder().successes(tasks).build(newClusterState); } } @@ -146,12 +145,9 @@ ClusterState executeRefresh(final ClusterState currentState, final List metaData : indexMetaData.getMappings().values()) { - // don't apply the default mapping, it has been applied when the mapping was created - indexService.mapperService().merge(metaData.value.type(), metaData.value.source(), MapperService.MergeReason.MAPPING_RECOVERY, true); - } + indexService.mapperService().merge(indexMetaData, MergeReason.MAPPING_RECOVERY, true); } IndexMetaData.Builder builder = IndexMetaData.builder(indexMetaData); @@ -163,7 +159,7 @@ ClusterState executeRefresh(final ClusterState currentState, final List { @Override - public BatchResult execute(ClusterState currentState, - List tasks) throws Exception { - Set indicesToClose = new HashSet<>(); - BatchResult.Builder builder = BatchResult.builder(); + public ClusterTasksResult execute(ClusterState currentState, + List tasks) throws Exception { + Map indexMapperServices = new HashMap<>(); + ClusterTasksResult.Builder builder = ClusterTasksResult.builder(); try { - // precreate incoming indices; for (PutMappingClusterStateUpdateRequest request : tasks) { try { for (Index index : request.indices()) { final IndexMetaData indexMetaData = currentState.metaData().getIndexSafe(index); - if (indicesService.hasIndex(indexMetaData.getIndex()) == false) { - // if the index does not exists we create it once, add all types to the mapper service and - // close it later once we are done with mapping update - indicesToClose.add(indexMetaData.getIndex()); - IndexService indexService = indicesService.createIndex(nodeServicesProvider, indexMetaData, - Collections.emptyList()); + if (indexMapperServices.containsKey(indexMetaData.getIndex()) == false) { + MapperService mapperService = indicesService.createIndexMapperService(indexMetaData); + indexMapperServices.put(index, mapperService); // add mappings for all types, we need them for cross-type validation - for (ObjectCursor mapping : indexMetaData.getMappings().values()) { - indexService.mapperService().merge(mapping.value.type(), mapping.value.source(), - MapperService.MergeReason.MAPPING_RECOVERY, request.updateAllTypes()); - } + mapperService.merge(indexMetaData, MergeReason.MAPPING_RECOVERY, request.updateAllTypes()); } } - currentState = applyRequest(currentState, request); + currentState = applyRequest(currentState, request, indexMapperServices); builder.success(request); } catch (Exception e) { builder.failure(request, e); @@ -246,34 +235,33 @@ public BatchResult execute(ClusterState cur } return builder.build(currentState); } finally { - for (Index index : indicesToClose) { - indicesService.removeIndex(index, "created for mapping processing"); - } + IOUtils.close(indexMapperServices.values()); } } - private ClusterState applyRequest(ClusterState currentState, PutMappingClusterStateUpdateRequest request) throws IOException { + private ClusterState applyRequest(ClusterState currentState, PutMappingClusterStateUpdateRequest request, 
+ Map indexMapperServices) throws IOException { String mappingType = request.type(); CompressedXContent mappingUpdateSource = new CompressedXContent(request.source()); final MetaData metaData = currentState.metaData(); - final List> updateList = new ArrayList<>(); + final List updateList = new ArrayList<>(); for (Index index : request.indices()) { - IndexService indexService = indicesService.indexServiceSafe(index); + MapperService mapperService = indexMapperServices.get(index); // IMPORTANT: always get the metadata from the state since it get's batched // and if we pull it from the indexService we might miss an update etc. final IndexMetaData indexMetaData = currentState.getMetaData().getIndexSafe(index); - // this is paranoia... just to be sure we use the exact same indexService and metadata tuple on the update that + // this is paranoia... just to be sure we use the exact same metadata tuple on the update that // we used for the validation, it makes this mechanism little less scary (a little) - updateList.add(new Tuple<>(indexService, indexMetaData)); + updateList.add(indexMetaData); // try and parse it (no need to add it here) so we can bail early in case of parsing exception DocumentMapper newMapper; - DocumentMapper existingMapper = indexService.mapperService().documentMapper(request.type()); + DocumentMapper existingMapper = mapperService.documentMapper(request.type()); if (MapperService.DEFAULT_MAPPING.equals(request.type())) { // _default_ types do not go through merging, but we do test the new settings. Also don't apply the old default - newMapper = indexService.mapperService().parse(request.type(), mappingUpdateSource, false); + newMapper = mapperService.parse(request.type(), mappingUpdateSource, false); } else { - newMapper = indexService.mapperService().parse(request.type(), mappingUpdateSource, existingMapper == null); + newMapper = mapperService.parse(request.type(), mappingUpdateSource, existingMapper == null); if (existingMapper != null) { // first, simulate: just call merge and ignore the result existingMapper.merge(newMapper.mapping(), request.updateAllTypes()); @@ -289,9 +277,9 @@ private ClusterState applyRequest(ClusterState currentState, PutMappingClusterSt for (ObjectCursor mapping : indexMetaData.getMappings().values()) { String parentType = newMapper.parentFieldMapper().type(); if (parentType.equals(mapping.value.type()) && - indexService.mapperService().getParentTypes().contains(parentType) == false) { + mapperService.getParentTypes().contains(parentType) == false) { throw new IllegalArgumentException("can't add a _parent field that points to an " + - "already existing type, that isn't already a parent"); + "already existing type, that isn't already a parent"); } } } @@ -309,24 +297,25 @@ private ClusterState applyRequest(ClusterState currentState, PutMappingClusterSt throw new InvalidTypeNameException("Document mapping type name can't start with '_', found: [" + mappingType + "]"); } MetaData.Builder builder = MetaData.builder(metaData); - for (Tuple toUpdate : updateList) { + boolean updated = false; + for (IndexMetaData indexMetaData : updateList) { // do the actual merge here on the master, and update the mapping source // we use the exact same indexService and metadata we used to validate above here to actually apply the update - final IndexService indexService = toUpdate.v1(); - final IndexMetaData indexMetaData = toUpdate.v2(); final Index index = indexMetaData.getIndex(); + final MapperService mapperService = indexMapperServices.get(index); 
CompressedXContent existingSource = null; - DocumentMapper existingMapper = indexService.mapperService().documentMapper(mappingType); + DocumentMapper existingMapper = mapperService.documentMapper(mappingType); if (existingMapper != null) { existingSource = existingMapper.mappingSource(); } - DocumentMapper mergedMapper = indexService.mapperService().merge(mappingType, mappingUpdateSource, MapperService.MergeReason.MAPPING_UPDATE, request.updateAllTypes()); + DocumentMapper mergedMapper = mapperService.merge(mappingType, mappingUpdateSource, MergeReason.MAPPING_UPDATE, request.updateAllTypes()); CompressedXContent updatedSource = mergedMapper.mappingSource(); if (existingSource != null) { if (existingSource.equals(updatedSource)) { // same source, no changes, ignore it } else { + updated = true; // use the merged mapping source if (logger.isDebugEnabled()) { logger.debug("{} update_mapping [{}] with source [{}]", index, mergedMapper.type(), updatedSource); @@ -336,6 +325,7 @@ private ClusterState applyRequest(ClusterState currentState, PutMappingClusterSt } } else { + updated = true; if (logger.isDebugEnabled()) { logger.debug("{} create_mapping [{}] with source [{}]", index, mappingType, updatedSource); } else if (logger.isInfoEnabled()) { @@ -346,13 +336,16 @@ private ClusterState applyRequest(ClusterState currentState, PutMappingClusterSt IndexMetaData.Builder indexMetaDataBuilder = IndexMetaData.builder(indexMetaData); // Mapping updates on a single type may have side-effects on other types so we need to // update mapping metadata on all types - for (DocumentMapper mapper : indexService.mapperService().docMappers(true)) { + for (DocumentMapper mapper : mapperService.docMappers(true)) { indexMetaDataBuilder.putMapping(new MappingMetaData(mapper.mappingSource())); } builder.put(indexMetaDataBuilder); } - - return ClusterState.builder(currentState).metaData(builder).build(); + if (updated) { + return ClusterState.builder(currentState).metaData(builder).build(); + } else { + return currentState; + } } @Override diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java index 9db777a4794b9..47a986d3b1eea 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java @@ -33,7 +33,6 @@ import org.elasticsearch.cluster.block.ClusterBlocks; import org.elasticsearch.cluster.routing.RoutingTable; import org.elasticsearch.cluster.routing.allocation.AllocationService; -import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Priority; import org.elasticsearch.common.collect.Tuple; @@ -44,8 +43,8 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.index.Index; -import org.elasticsearch.index.NodeServicesProvider; import org.elasticsearch.indices.IndicesService; +import org.elasticsearch.threadpool.ThreadPool; import java.io.IOException; import java.util.ArrayList; @@ -56,6 +55,8 @@ import java.util.Map; import java.util.Set; +import static org.elasticsearch.action.support.ContextPreservingActionListener.wrapPreservingContext; + /** * Service responsible for submitting update index settings requests */ @@ -67,18 +68,18 @@ public class 
MetaDataUpdateSettingsService extends AbstractComponent implements private final IndexScopedSettings indexScopedSettings; private final IndicesService indicesService; - private final NodeServicesProvider nodeServiceProvider; + private final ThreadPool threadPool; @Inject public MetaDataUpdateSettingsService(Settings settings, ClusterService clusterService, AllocationService allocationService, - IndexScopedSettings indexScopedSettings, IndicesService indicesService, NodeServicesProvider nodeServicesProvider) { + IndexScopedSettings indexScopedSettings, IndicesService indicesService, ThreadPool threadPool) { super(settings); this.clusterService = clusterService; - this.clusterService.add(this); + this.threadPool = threadPool; + this.clusterService.addListener(this); this.allocationService = allocationService; this.indexScopedSettings = indexScopedSettings; this.indicesService = indicesService; - this.nodeServiceProvider = nodeServicesProvider; } @Override @@ -165,10 +166,6 @@ public void updateSettings(final UpdateSettingsClusterStateUpdateRequest request indexScopedSettings.validate(normalizedSettings); // never allow to change the number of shards for (Map.Entry entry : normalizedSettings.getAsMap().entrySet()) { - if (entry.getKey().equals(IndexMetaData.SETTING_NUMBER_OF_SHARDS)) { - listener.onFailure(new IllegalArgumentException("can't change the number of shards for an index")); - return; - } Setting setting = indexScopedSettings.get(entry.getKey()); assert setting != null; // we already validated the normalized settings settingsForClosedIndices.put(entry.getKey(), entry.getValue()); @@ -184,7 +181,8 @@ public void updateSettings(final UpdateSettingsClusterStateUpdateRequest request final boolean preserveExisting = request.isPreserveExisting(); clusterService.submitStateUpdateTask("update-settings", - new AckedClusterStateUpdateTask(Priority.URGENT, request, listener) { + new AckedClusterStateUpdateTask(Priority.URGENT, request, + wrapPreservingContext(listener, threadPool.getThreadContext())) { @Override protected ClusterStateUpdateResponse newResponse(boolean acknowledged) { @@ -218,7 +216,7 @@ public ClusterState execute(ClusterState currentState) { closeIndices )); } - if (!skippedSettigns.getAsMap().isEmpty() && !openIndices.isEmpty()) { + if (!skippedSettigns.isEmpty() && !openIndices.isEmpty()) { throw new IllegalArgumentException(String.format(Locale.ROOT, "Can't update non dynamic settings [%s] for open indices %s", skippedSettigns.getAsMap().keySet(), @@ -228,6 +226,9 @@ public ClusterState execute(ClusterState currentState) { int updatedNumberOfReplicas = openSettings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, -1); if (updatedNumberOfReplicas != -1 && preserveExisting == false) { + // we do *not* update the in sync allocation ids as they will be removed upon the first index + // operation which make these copies stale + // TODO: update the list once the data is deleted by the node? 
routingTableBuilder.updateNumberOfReplicas(updatedNumberOfReplicas, actualIndices); metaDataBuilder.updateNumberOfReplicas(updatedNumberOfReplicas, actualIndices); logger.info("updating number_of_replicas to [{}] for indices {}", updatedNumberOfReplicas, actualIndices); @@ -235,6 +236,7 @@ public ClusterState execute(ClusterState currentState) { ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks()); maybeUpdateClusterBlock(actualIndices, blocks, IndexMetaData.INDEX_READ_ONLY_BLOCK, IndexMetaData.INDEX_READ_ONLY_SETTING, openSettings); + maybeUpdateClusterBlock(actualIndices, blocks, IndexMetaData.INDEX_READ_ONLY_ALLOW_DELETE_BLOCK, IndexMetaData.INDEX_BLOCKS_READ_ONLY_ALLOW_DELETE_SETTING, openSettings); maybeUpdateClusterBlock(actualIndices, blocks, IndexMetaData.INDEX_METADATA_BLOCK, IndexMetaData.INDEX_BLOCKS_METADATA_SETTING, openSettings); maybeUpdateClusterBlock(actualIndices, blocks, IndexMetaData.INDEX_WRITE_BLOCK, IndexMetaData.INDEX_BLOCKS_WRITE_SETTING, openSettings); maybeUpdateClusterBlock(actualIndices, blocks, IndexMetaData.INDEX_READ_BLOCK, IndexMetaData.INDEX_BLOCKS_READ_SETTING, openSettings); @@ -271,18 +273,21 @@ public ClusterState execute(ClusterState currentState) { ClusterState updatedState = ClusterState.builder(currentState).metaData(metaDataBuilder).routingTable(routingTableBuilder.build()).blocks(blocks).build(); // now, reroute in case things change that require it (like number of replicas) - RoutingAllocation.Result routingResult = allocationService.reroute(updatedState, "settings update"); - updatedState = ClusterState.builder(updatedState).routingResult(routingResult).build(); + updatedState = allocationService.reroute(updatedState, "settings update"); try { for (Index index : openIndices) { final IndexMetaData currentMetaData = currentState.getMetaData().getIndexSafe(index); final IndexMetaData updatedMetaData = updatedState.metaData().getIndexSafe(index); - indicesService.verifyIndexMetadata(nodeServiceProvider, currentMetaData, updatedMetaData); + indicesService.verifyIndexMetadata(currentMetaData, updatedMetaData); } for (Index index : closeIndices) { final IndexMetaData currentMetaData = currentState.getMetaData().getIndexSafe(index); final IndexMetaData updatedMetaData = updatedState.metaData().getIndexSafe(index); - indicesService.verifyIndexMetadata(nodeServiceProvider, currentMetaData, updatedMetaData); + // Verifies that the current index settings can be updated with the updated dynamic settings. + indicesService.verifyIndexMetadata(currentMetaData, updatedMetaData); + // Now check that we can create the index with the updated settings (dynamic and non-dynamic). + // This step is mandatory since we allow to update non-dynamic settings on closed indices. 
+ indicesService.verifyIndexMetadata(updatedMetaData, updatedMetaData); } } catch (IOException ex) { throw ExceptionsHelper.convertToElastic(ex); @@ -310,9 +315,9 @@ private static void maybeUpdateClusterBlock(String[] actualIndices, ClusterBlock public void upgradeIndexSettings(final UpgradeSettingsClusterStateUpdateRequest request, final ActionListener listener) { - - - clusterService.submitStateUpdateTask("update-index-compatibility-versions", new AckedClusterStateUpdateTask(Priority.URGENT, request, listener) { + clusterService.submitStateUpdateTask("update-index-compatibility-versions", + new AckedClusterStateUpdateTask(Priority.URGENT, request, + wrapPreservingContext(listener, threadPool.getThreadContext())) { @Override protected ClusterStateUpdateResponse newResponse(boolean acknowledged) { @@ -330,7 +335,6 @@ public ClusterState execute(ClusterState currentState) { // No reason to pollute the settings, we didn't really upgrade anything metaDataBuilder.put(IndexMetaData.builder(indexMetaData) .settings(Settings.builder().put(indexMetaData.getSettings()) - .put(IndexMetaData.SETTING_VERSION_MINIMUM_COMPATIBLE, entry.getValue().v2()) .put(IndexMetaData.SETTING_VERSION_UPGRADED, entry.getValue().v1()) ) ); diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/RepositoriesMetaData.java b/core/src/main/java/org/elasticsearch/cluster/metadata/RepositoriesMetaData.java index 2dc842ceaae3e..67909bff61449 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/RepositoriesMetaData.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/RepositoriesMetaData.java @@ -21,6 +21,9 @@ import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.cluster.AbstractDiffable; +import org.elasticsearch.cluster.AbstractNamedDiffable; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.NamedDiff; import org.elasticsearch.cluster.metadata.MetaData.Custom; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -39,12 +42,10 @@ /** * Contains metadata about registered snapshot repositories */ -public class RepositoriesMetaData extends AbstractDiffable implements MetaData.Custom { +public class RepositoriesMetaData extends AbstractNamedDiffable implements Custom { public static final String TYPE = "repositories"; - public static final RepositoriesMetaData PROTO = new RepositoriesMetaData(); - private final List repositories; /** @@ -100,20 +101,20 @@ public int hashCode() { * {@inheritDoc} */ @Override - public String type() { + public String getWriteableName() { return TYPE; } - /** - * {@inheritDoc} - */ - @Override - public Custom readFrom(StreamInput in) throws IOException { + public RepositoriesMetaData(StreamInput in) throws IOException { RepositoryMetaData[] repository = new RepositoryMetaData[in.readVInt()]; for (int i = 0; i < repository.length; i++) { - repository[i] = RepositoryMetaData.readFrom(in); + repository[i] = new RepositoryMetaData(in); } - return new RepositoriesMetaData(repository); + this.repositories = Arrays.asList(repository); + } + + public static NamedDiff readDiffFrom(StreamInput in) throws IOException { + return readDiffFrom(Custom.class, TYPE, in); } /** @@ -127,11 +128,7 @@ public void writeTo(StreamOutput out) throws IOException { } } - /** - * {@inheritDoc} - */ - @Override - public RepositoriesMetaData fromXContent(XContentParser parser) throws IOException { + public static RepositoriesMetaData fromXContent(XContentParser parser) throws 
IOException { XContentParser.Token token; List repository = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/RepositoryMetaData.java b/core/src/main/java/org/elasticsearch/cluster/metadata/RepositoryMetaData.java index 3c13a10c1cfd8..847db915b8bce 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/RepositoryMetaData.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/RepositoryMetaData.java @@ -73,17 +73,10 @@ public Settings settings() { } - /** - * Reads repository metadata from stream input - * - * @param in stream input - * @return repository metadata - */ - public static RepositoryMetaData readFrom(StreamInput in) throws IOException { - String name = in.readString(); - String type = in.readString(); - Settings settings = Settings.readSettingsFromStream(in); - return new RepositoryMetaData(name, type, settings); + public RepositoryMetaData(StreamInput in) throws IOException { + name = in.readString(); + type = in.readString(); + settings = Settings.readSettingsFromStream(in); } /** diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/TemplateUpgradeService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/TemplateUpgradeService.java new file mode 100644 index 0000000000000..8e8f0c594bc71 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/TemplateUpgradeService.java @@ -0,0 +1,268 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.cluster.metadata; + +import com.carrotsearch.hppc.cursors.ObjectCursor; +import com.carrotsearch.hppc.cursors.ObjectObjectCursor; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.elasticsearch.Version; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.admin.indices.template.delete.DeleteIndexTemplateRequest; +import org.elasticsearch.action.admin.indices.template.delete.DeleteIndexTemplateResponse; +import org.elasticsearch.action.admin.indices.template.put.PutIndexTemplateRequest; +import org.elasticsearch.action.admin.indices.template.put.PutIndexTemplateResponse; +import org.elasticsearch.client.Client; +import org.elasticsearch.cluster.ClusterChangedEvent; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.ClusterStateListener; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.node.DiscoveryNodes; +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.collect.ImmutableOpenMap; +import org.elasticsearch.common.collect.Tuple; +import org.elasticsearch.common.component.AbstractComponent; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentHelper; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.gateway.GatewayService; +import org.elasticsearch.indices.IndexTemplateMissingException; +import org.elasticsearch.plugins.Plugin; +import org.elasticsearch.threadpool.ThreadPool; + +import java.io.IOException; +import java.util.Collection; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Map; +import java.util.Optional; +import java.util.Set; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.UnaryOperator; + +import static java.util.Collections.singletonMap; + +/** + * Upgrades Templates on behalf of installed {@link Plugin}s when a node joins the cluster + */ +public class TemplateUpgradeService extends AbstractComponent implements ClusterStateListener { + private final UnaryOperator> indexTemplateMetaDataUpgraders; + + public final ClusterService clusterService; + + public final ThreadPool threadPool; + + public final Client client; + + private final AtomicInteger updatesInProgress = new AtomicInteger(); + + private ImmutableOpenMap lastTemplateMetaData; + + public TemplateUpgradeService(Settings settings, Client client, ClusterService clusterService, ThreadPool threadPool, + Collection>> indexTemplateMetaDataUpgraders) { + super(settings); + this.client = client; + this.clusterService = clusterService; + this.threadPool = threadPool; + this.indexTemplateMetaDataUpgraders = templates -> { + Map upgradedTemplates = new HashMap<>(templates); + for (UnaryOperator> upgrader : indexTemplateMetaDataUpgraders) { + upgradedTemplates = upgrader.apply(upgradedTemplates); + } + return upgradedTemplates; + }; + clusterService.addListener(this); + } + + @Override + public void clusterChanged(ClusterChangedEvent event) { + ClusterState state = event.state(); + if (state.blocks().hasGlobalBlock(GatewayService.STATE_NOT_RECOVERED_BLOCK)) { + // wait until the gateway has recovered from disk, otherwise we may think we don't have the index templates, + // while they actually do exist + return; + } + + if (updatesInProgress.get() > 0) { + 
// we are already running some updates - skip this cluster state update + return; + } + + ImmutableOpenMap templates = state.getMetaData().getTemplates(); + + if (templates == lastTemplateMetaData) { + // we already checked these sets of templates - no reason to check it again + // we can do identity check here because due to cluster state diffs the actual map will not change + // if there were no changes + return; + } + + if (shouldLocalNodeUpdateTemplates(state.nodes()) == false) { + return; + } + + lastTemplateMetaData = templates; + Optional, Set>> changes = calculateTemplateChanges(templates); + if (changes.isPresent()) { + if (updatesInProgress.compareAndSet(0, changes.get().v1().size() + changes.get().v2().size())) { + logger.info("Starting template upgrade to version {}, {} templates will be updated and {} will be removed", + Version.CURRENT, + changes.get().v1().size(), + changes.get().v2().size()); + threadPool.generic().execute(() -> updateTemplates(changes.get().v1(), changes.get().v2())); + } + } + } + + /** + * Checks if the current node should update the templates + * + * If the master has the newest version in the cluster - it will be the dedicated template updater. + * Otherwise the node with the highest id among nodes with the highest version should update the templates + */ + boolean shouldLocalNodeUpdateTemplates(DiscoveryNodes nodes) { + DiscoveryNode localNode = nodes.getLocalNode(); + // Only data and master nodes should update the template + if (localNode.isDataNode() || localNode.isMasterNode()) { + DiscoveryNode masterNode = nodes.getMasterNode(); + if (masterNode == null) { + return false; + } + Version maxVersion = nodes.getLargestNonClientNodeVersion(); + if (maxVersion.equals(masterNode.getVersion())) { + // If the master has the latest version - we will allow it to handle the update + return nodes.isLocalNodeElectedMaster(); + } else { + if (maxVersion.equals(localNode.getVersion()) == false) { + // The local node doesn't have the latest version - not going to update + return false; + } + for (ObjectCursor node : nodes.getMasterAndDataNodes().values()) { + if (node.value.getVersion().equals(maxVersion) && node.value.getId().compareTo(localNode.getId()) > 0) { + // We have a node with a higher id than mine - it should update + return false; + } + } + // We have the highest version and highest id - we should perform the update + return true; + } + } else { + return false; + } + } + + void updateTemplates(Map changes, Set deletions) { + for (Map.Entry change : changes.entrySet()) { + PutIndexTemplateRequest request = + new PutIndexTemplateRequest(change.getKey()).source(change.getValue(), XContentType.JSON); + request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); + client.admin().indices().putTemplate(request, new ActionListener() { + @Override + public void onResponse(PutIndexTemplateResponse response) { + if(updatesInProgress.decrementAndGet() == 0) { + logger.info("Finished upgrading templates to version {}", Version.CURRENT); + } + if (response.isAcknowledged() == false) { + logger.warn("Error updating template [{}], request was not acknowledged", change.getKey()); + } + } + + @Override + public void onFailure(Exception e) { + if(updatesInProgress.decrementAndGet() == 0) { + logger.info("Templates were upgraded to version {}", Version.CURRENT); + } + logger.warn(new ParameterizedMessage("Error updating template [{}]", change.getKey()), e); + } + }); + } + + for (String template : deletions) { + DeleteIndexTemplateRequest request = new 
DeleteIndexTemplateRequest(template); + request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); + client.admin().indices().deleteTemplate(request, new ActionListener() { + @Override + public void onResponse(DeleteIndexTemplateResponse response) { + updatesInProgress.decrementAndGet(); + if (response.isAcknowledged() == false) { + logger.warn("Error deleting template [{}], request was not acknowledged", template); + } + } + + @Override + public void onFailure(Exception e) { + updatesInProgress.decrementAndGet(); + if (e instanceof IndexTemplateMissingException == false) { + // we might attempt to delete the same template from different nodes - so that's ok if template doesn't exist + // otherwise we need to warn + logger.warn(new ParameterizedMessage("Error deleting template [{}]", template), e); + } + } + }); + } + } + + int getUpdatesInProgress() { + return updatesInProgress.get(); + } + + Optional, Set>> calculateTemplateChanges( + ImmutableOpenMap templates) { + // collect current templates + Map existingMap = new HashMap<>(); + for (ObjectObjectCursor customCursor : templates) { + existingMap.put(customCursor.key, customCursor.value); + } + // upgrade global custom meta data + Map upgradedMap = indexTemplateMetaDataUpgraders.apply(existingMap); + if (upgradedMap.equals(existingMap) == false) { + Set deletes = new HashSet<>(); + Map changes = new HashMap<>(); + // remove templates if needed + existingMap.keySet().forEach(s -> { + if (upgradedMap.containsKey(s) == false) { + deletes.add(s); + } + }); + upgradedMap.forEach((key, value) -> { + if (value.equals(existingMap.get(key)) == false) { + changes.put(key, toBytesReference(value)); + } + }); + return Optional.of(new Tuple<>(changes, deletes)); + } + return Optional.empty(); + } + + private static final ToXContent.Params PARAMS = new ToXContent.MapParams(singletonMap("reduce_mappings", "true")); + + private BytesReference toBytesReference(IndexTemplateMetaData templateMetaData) { + try { + return XContentHelper.toXContent((builder, params) -> { + IndexTemplateMetaData.Builder.toInnerXContent(templateMetaData, builder, params); + return builder; + }, XContentType.JSON, PARAMS, false); + } catch (IOException ex) { + throw new IllegalStateException("Cannot serialize template [" + templateMetaData.getName() + "]", ex); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java b/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java index 0c39c43bc9fe0..31b59b9f281fc 100644 --- a/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java +++ b/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java @@ -35,7 +35,6 @@ import java.util.Collections; import java.util.EnumSet; import java.util.HashMap; -import java.util.HashSet; import java.util.Map; import java.util.Set; import java.util.function.Predicate; @@ -96,7 +95,7 @@ public static boolean isIngestNode(Settings settings) { * @param version the version of the node */ public DiscoveryNode(final String id, TransportAddress address, Version version) { - this(id, address, Collections.emptyMap(), Collections.emptySet(), version); + this(id, address, Collections.emptyMap(), EnumSet.allOf(Role.class), version); } /** @@ -192,18 +191,24 @@ public DiscoveryNode(String nodeName, String nodeId, String ephemeralId, String /** Creates a DiscoveryNode representing the local node. 
*/ public static DiscoveryNode createLocal(Settings settings, TransportAddress publishAddress, String nodeId) { Map attributes = new HashMap<>(Node.NODE_ATTRIBUTES.get(settings).getAsMap()); - Set roles = new HashSet<>(); + Set roles = getRolesFromSettings(settings); + + return new DiscoveryNode(Node.NODE_NAME_SETTING.get(settings), nodeId, publishAddress, attributes, roles, Version.CURRENT); + } + + /** extract node roles from the given settings */ + public static Set getRolesFromSettings(Settings settings) { + Set roles = EnumSet.noneOf(Role.class); if (Node.NODE_INGEST_SETTING.get(settings)) { - roles.add(DiscoveryNode.Role.INGEST); + roles.add(Role.INGEST); } if (Node.NODE_MASTER_SETTING.get(settings)) { - roles.add(DiscoveryNode.Role.MASTER); + roles.add(Role.MASTER); } if (Node.NODE_DATA_SETTING.get(settings)) { - roles.add(DiscoveryNode.Role.DATA); + roles.add(Role.DATA); } - - return new DiscoveryNode(Node.NODE_NAME_SETTING.get(settings), nodeId, publishAddress,attributes, roles, Version.CURRENT); + return roles; } /** @@ -217,7 +222,14 @@ public DiscoveryNode(StreamInput in) throws IOException { this.ephemeralId = in.readString().intern(); this.hostName = in.readString().intern(); this.hostAddress = in.readString().intern(); - this.address = TransportAddressSerializers.addressFromStream(in); + if (in.getVersion().after(Version.V_5_0_2)) { + this.address = TransportAddressSerializers.addressFromStream(in); + } else { + // we need to do this to preserve the host information during pinging and joining of a master. Since the version of the + // DiscoveryNode is set to Version#minimumCompatibilityVersion(), the host information gets lost as we do not serialize the + // hostString for the address + this.address = TransportAddressSerializers.addressFromStream(in, hostName); + } int size = in.readVInt(); this.attributes = new HashMap<>(size); for (int i = 0; i < size; i++) { @@ -226,11 +238,7 @@ public DiscoveryNode(StreamInput in) throws IOException { int rolesSize = in.readVInt(); this.roles = EnumSet.noneOf(Role.class); for (int i = 0; i < rolesSize; i++) { - int ordinal = in.readVInt(); - if (ordinal < 0 || ordinal >= Role.values().length) { - throw new IOException("Unknown Role ordinal [" + ordinal + "]"); - } - this.roles.add(Role.values()[ordinal]); + this.roles.add(in.readEnum(Role.class)); } this.version = Version.readVersion(in); } @@ -250,7 +258,7 @@ public void writeTo(StreamOutput out) throws IOException { } out.writeVInt(roles.size()); for (Role role : roles) { - out.writeVInt(role.ordinal()); + out.writeEnum(role); } Version.writeVersion(version, out); } diff --git a/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNodeFilters.java b/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNodeFilters.java index e8ede54f4a2d1..23f8257227776 100644 --- a/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNodeFilters.java +++ b/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNodeFilters.java @@ -21,6 +21,7 @@ import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Strings; +import org.elasticsearch.common.network.InetAddresses; import org.elasticsearch.common.network.NetworkAddress; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.settings.Settings; @@ -28,6 +29,7 @@ import java.util.HashMap; import java.util.Map; +import java.util.function.Consumer; /** */ @@ -38,6 +40,28 @@ public enum OpType { OR } + /** + * Validates the IP addresses in a group of {@link Settings} by looking for the keys + * 
"_ip", "_host_ip", and "_publish_ip" and ensuring each of their comma separated values + * that has no wildcards is a valid IP address. + */ + public static final Consumer IP_VALIDATOR = (settings) -> { + Map settingsMap = settings.getAsMap(); + for (Map.Entry entry : settingsMap.entrySet()) { + String propertyKey = entry.getKey(); + if (entry.getValue() == null) { + continue; // this setting gets reset + } + if ("_ip".equals(propertyKey) || "_host_ip".equals(propertyKey) || "_publish_ip".equals(propertyKey)) { + for (String value : Strings.tokenizeToStringArray(entry.getValue(), ",")) { + if (Regex.isSimpleMatchPattern(value) == false && InetAddresses.isInetAddress(value) == false) { + throw new IllegalArgumentException("invalid IP address [" + value + "] for [" + propertyKey + "]"); + } + } + } + } + }; + public static DiscoveryNodeFilters buildFromSettings(OpType opType, String prefix, Settings settings) { return buildFromKeyValue(opType, settings.getByPrefix(prefix).getAsMap()); } @@ -45,7 +69,7 @@ public static DiscoveryNodeFilters buildFromSettings(OpType opType, String prefi public static DiscoveryNodeFilters buildFromKeyValue(OpType opType, Map filters) { Map bFilters = new HashMap<>(); for (Map.Entry entry : filters.entrySet()) { - String[] values = Strings.splitStringByCommaToArray(entry.getValue()); + String[] values = Strings.tokenizeToStringArray(entry.getValue(), ","); if (values.length > 0) { bFilters.put(entry.getKey(), values); } diff --git a/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNodes.java b/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNodes.java index 9d0edf7b910db..b0f6604cac4f0 100644 --- a/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNodes.java +++ b/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNodes.java @@ -24,6 +24,7 @@ import com.carrotsearch.hppc.cursors.ObjectObjectCursor; import org.elasticsearch.Version; import org.elasticsearch.cluster.AbstractDiffable; +import org.elasticsearch.cluster.Diff; import org.elasticsearch.common.Booleans; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.collect.ImmutableOpenMap; @@ -47,7 +48,6 @@ public class DiscoveryNodes extends AbstractDiffable implements Iterable { public static final DiscoveryNodes EMPTY_NODES = builder().build(); - public static final DiscoveryNodes PROTO = EMPTY_NODES; private final ImmutableOpenMap nodes; private final ImmutableOpenMap dataNodes; @@ -56,20 +56,25 @@ public class DiscoveryNodes extends AbstractDiffable implements private final String masterNodeId; private final String localNodeId; - private final Version minNodeVersion; private final Version minNonClientNodeVersion; + private final Version maxNonClientNodeVersion; + private final Version maxNodeVersion; + private final Version minNodeVersion; private DiscoveryNodes(ImmutableOpenMap nodes, ImmutableOpenMap dataNodes, ImmutableOpenMap masterNodes, ImmutableOpenMap ingestNodes, - String masterNodeId, String localNodeId, Version minNodeVersion, Version minNonClientNodeVersion) { + String masterNodeId, String localNodeId, Version minNonClientNodeVersion, Version maxNonClientNodeVersion, + Version maxNodeVersion, Version minNodeVersion) { this.nodes = nodes; this.dataNodes = dataNodes; this.masterNodes = masterNodes; this.ingestNodes = ingestNodes; this.masterNodeId = masterNodeId; this.localNodeId = localNodeId; - this.minNodeVersion = minNodeVersion; this.minNonClientNodeVersion = minNonClientNodeVersion; + this.maxNonClientNodeVersion = 
maxNonClientNodeVersion; + this.minNodeVersion = minNodeVersion; + this.maxNodeVersion = maxNodeVersion; } @Override @@ -173,7 +178,6 @@ public boolean nodeExists(DiscoveryNode node) { return existing != null && existing.equals(node); } - /** * Get the id of the master node * @@ -230,23 +234,44 @@ public boolean isAllNodes(String... nodesIds) { return nodesIds == null || nodesIds.length == 0 || (nodesIds.length == 1 && nodesIds[0].equals("_all")); } + /** + * Returns the version of the node with the oldest version in the cluster that is not a client node + * + * If there are no non-client nodes, Version.CURRENT will be returned. + * + * @return the oldest version in the cluster + */ + public Version getSmallestNonClientNodeVersion() { + return minNonClientNodeVersion; + } /** - * Returns the version of the node with the oldest version in the cluster + * Returns the version of the node with the youngest version in the cluster that is not a client node. + * + * If there are no non-client nodes, Version.CURRENT will be returned. + * + * @return the youngest version in the cluster + */ + public Version getLargestNonClientNodeVersion() { + return maxNonClientNodeVersion; + } + + /** + * Returns the version of the node with the oldest version in the cluster. * * @return the oldest version in the cluster */ - public Version getSmallestVersion() { + public Version getMinNodeVersion() { return minNodeVersion; } /** - * Returns the version of the node with the oldest version in the cluster that is not a client node + * Returns the version of the node with the youngest version in the cluster * - * @return the oldest version in the cluster + * @return the youngest version in the cluster */ - public Version getSmallestNonClientNodeVersion() { - return minNonClientNodeVersion; + public Version getMaxNodeVersion() { + return maxNodeVersion; } /** @@ -397,16 +422,6 @@ public Delta delta(DiscoveryNodes other) { @Override public String toString() { - StringBuilder sb = new StringBuilder(); - sb.append("{"); - for (DiscoveryNode node : this) { - sb.append(node).append(','); - } - sb.append("}"); - return sb.toString(); - } - - public String prettyPrint() { StringBuilder sb = new StringBuilder(); sb.append("nodes: \n"); for (DiscoveryNode node : this) { @@ -430,11 +445,7 @@ public static class Delta { private final List removed; private final List added; - public Delta(String localNodeId, List removed, List added) { - this(null, null, localNodeId, removed, added); - } - - public Delta(@Nullable DiscoveryNode previousMasterNode, @Nullable DiscoveryNode newMasterNode, String localNodeId, + private Delta(@Nullable DiscoveryNode previousMasterNode, @Nullable DiscoveryNode newMasterNode, String localNodeId, List removed, List added) { this.previousMasterNode = previousMasterNode; this.newMasterNode = newMasterNode; @@ -538,7 +549,7 @@ public void writeTo(StreamOutput out) throws IOException { } } - private DiscoveryNodes readFrom(StreamInput in, DiscoveryNode localNode) throws IOException { + public static DiscoveryNodes readFrom(StreamInput in, DiscoveryNode localNode) throws IOException { Builder builder = new Builder(); if (in.readBoolean()) { builder.masterNodeId(in.readString()); @@ -561,9 +572,8 @@ private DiscoveryNodes readFrom(StreamInput in, DiscoveryNode localNode) throws return builder.build(); } - @Override - public DiscoveryNodes readFrom(StreamInput in) throws IOException { - return readFrom(in, getLocalNode()); + public static Diff readDiffFrom(StreamInput in, DiscoveryNode localNode) throws 
IOException { + return AbstractDiffable.readDiffFrom(in1 -> readFrom(in1, localNode), in); } public static Builder builder() { @@ -668,33 +678,43 @@ public DiscoveryNodes build() { ImmutableOpenMap.Builder dataNodesBuilder = ImmutableOpenMap.builder(); ImmutableOpenMap.Builder masterNodesBuilder = ImmutableOpenMap.builder(); ImmutableOpenMap.Builder ingestNodesBuilder = ImmutableOpenMap.builder(); - Version minNodeVersion = Version.CURRENT; - Version minNonClientNodeVersion = Version.CURRENT; + Version minNodeVersion = null; + Version maxNodeVersion = null; + Version minNonClientNodeVersion = null; + Version maxNonClientNodeVersion = null; for (ObjectObjectCursor nodeEntry : nodes) { if (nodeEntry.value.isDataNode()) { dataNodesBuilder.put(nodeEntry.key, nodeEntry.value); - minNonClientNodeVersion = Version.smallest(minNonClientNodeVersion, nodeEntry.value.getVersion()); } if (nodeEntry.value.isMasterNode()) { masterNodesBuilder.put(nodeEntry.key, nodeEntry.value); - minNonClientNodeVersion = Version.smallest(minNonClientNodeVersion, nodeEntry.value.getVersion()); + } + final Version version = nodeEntry.value.getVersion(); + if (nodeEntry.value.isDataNode() || nodeEntry.value.isMasterNode()) { + if (minNonClientNodeVersion == null) { + minNonClientNodeVersion = version; + maxNonClientNodeVersion = version; + } else { + minNonClientNodeVersion = Version.min(minNonClientNodeVersion, version); + maxNonClientNodeVersion = Version.max(maxNonClientNodeVersion, version); + } } if (nodeEntry.value.isIngestNode()) { ingestNodesBuilder.put(nodeEntry.key, nodeEntry.value); } - minNodeVersion = Version.smallest(minNodeVersion, nodeEntry.value.getVersion()); + minNodeVersion = minNodeVersion == null ? version : Version.min(minNodeVersion, version); + maxNodeVersion = maxNodeVersion == null ? version : Version.max(maxNodeVersion, version); } return new DiscoveryNodes( nodes.build(), dataNodesBuilder.build(), masterNodesBuilder.build(), ingestNodesBuilder.build(), - masterNodeId, localNodeId, minNodeVersion, minNonClientNodeVersion + masterNodeId, localNodeId, minNonClientNodeVersion == null ? Version.CURRENT : minNonClientNodeVersion, + maxNonClientNodeVersion == null ? Version.CURRENT : maxNonClientNodeVersion, + maxNodeVersion == null ? Version.CURRENT : maxNodeVersion, + minNodeVersion == null ? 
Version.CURRENT : minNodeVersion ); } - public static DiscoveryNodes readFrom(StreamInput in, @Nullable DiscoveryNode localNode) throws IOException { - return PROTO.readFrom(in, localNode); - } - public boolean isLocalNodeElectedMaster() { return masterNodeId != null && masterNodeId.equals(localNodeId); } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/AllocationId.java b/core/src/main/java/org/elasticsearch/cluster/routing/AllocationId.java index cb0fb487693e2..9ab91f9f8de2b 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/AllocationId.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/AllocationId.java @@ -21,8 +21,6 @@ import org.elasticsearch.common.Nullable; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -48,7 +46,7 @@ public class AllocationId implements ToXContent, Writeable { private static final String ID_KEY = "id"; private static final String RELOCATION_ID_KEY = "relocation_id"; - private static final ObjectParser ALLOCATION_ID_PARSER = new ObjectParser<>( + private static final ObjectParser ALLOCATION_ID_PARSER = new ObjectParser<>( "allocationId"); static { @@ -203,6 +201,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws } public static AllocationId fromXContent(XContentParser parser) throws IOException { - return ALLOCATION_ID_PARSER.parse(parser, new AllocationId.Builder(), () -> ParseFieldMatcher.STRICT).build(); + return ALLOCATION_ID_PARSER.parse(parser, new AllocationId.Builder(), null).build(); } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/DelayedAllocationService.java b/core/src/main/java/org/elasticsearch/cluster/routing/DelayedAllocationService.java index 29d74dd893351..4522dfcf98f18 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/DelayedAllocationService.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/DelayedAllocationService.java @@ -106,12 +106,7 @@ public void onFailure(Exception e) { @Override public ClusterState execute(ClusterState currentState) throws Exception { removeIfSameTask(this); - RoutingAllocation.Result routingResult = allocationService.reroute(currentState, "assign delayed unassigned shards"); - if (routingResult.changed()) { - return ClusterState.builder(currentState).routingResult(routingResult).build(); - } else { - return currentState; - } + return allocationService.reroute(currentState, "assign delayed unassigned shards"); } @Override @@ -138,7 +133,7 @@ public DelayedAllocationService(Settings settings, ThreadPool threadPool, Cluste this.threadPool = threadPool; this.clusterService = clusterService; this.allocationService = allocationService; - clusterService.addFirst(this); + clusterService.addListener(this); } @Override @@ -151,7 +146,7 @@ protected void doStop() { @Override protected void doClose() { - clusterService.remove(this); + clusterService.removeListener(this); removeTaskAndCancel(); } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/GroupShardsIterator.java b/core/src/main/java/org/elasticsearch/cluster/routing/GroupShardsIterator.java index 9cf429383fdb7..21b02043a2249 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/GroupShardsIterator.java +++ 
b/core/src/main/java/org/elasticsearch/cluster/routing/GroupShardsIterator.java @@ -30,14 +30,14 @@ * ShardsIterators are always returned in ascending order independently of their order at construction * time. The incoming iterators are sorted to ensure consistent iteration behavior across Nodes / JVMs. */ -public class GroupShardsIterator implements Iterable { +public final class GroupShardsIterator implements Iterable { - private final List iterators; + private final List iterators; /** * Constructs a enw GroupShardsIterator from the given list. */ - public GroupShardsIterator(List iterators) { + public GroupShardsIterator(List iterators) { CollectionUtil.timSort(iterators); this.iterators = iterators; } @@ -60,13 +60,8 @@ public int totalSize() { */ public int totalSizeWith1ForEmpty() { int size = 0; - for (ShardIterator shard : iterators) { - int sizeActive = shard.size(); - if (sizeActive == 0) { - size += 1; - } else { - size += sizeActive; - } + for (ShardIt shard : iterators) { + size += Math.max(1, shard.size()); } return size; } @@ -80,7 +75,11 @@ public int size() { } @Override - public Iterator iterator() { + public Iterator iterator() { return iterators.iterator(); } + + public ShardIt get(int index) { + return iterators.get(index); + } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/IndexRoutingTable.java b/core/src/main/java/org/elasticsearch/cluster/routing/IndexRoutingTable.java index fd8e7d02b28a6..9de95fab72695 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/IndexRoutingTable.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/IndexRoutingTable.java @@ -25,13 +25,13 @@ import org.apache.lucene.util.CollectionUtil; import org.elasticsearch.Version; import org.elasticsearch.cluster.AbstractDiffable; +import org.elasticsearch.cluster.Diff; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.routing.RecoverySource.LocalShardsRecoverySource; import org.elasticsearch.cluster.routing.RecoverySource.PeerRecoverySource; import org.elasticsearch.cluster.routing.RecoverySource.SnapshotRecoverySource; import org.elasticsearch.cluster.routing.RecoverySource.StoreRecoverySource; -import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Randomness; import org.elasticsearch.common.collect.ImmutableOpenIntMap; import org.elasticsearch.common.io.stream.StreamInput; @@ -64,8 +64,6 @@ */ public class IndexRoutingTable extends AbstractDiffable implements Iterable { - public static final IndexRoutingTable PROTO = builder(new Index("", "_na_")).build(); - private final Index index; private final ShardShuffler shuffler; @@ -134,11 +132,22 @@ boolean validate(MetaData metaData) { throw new IllegalStateException("shard routing has an index [" + shardRouting.index() + "] that is different " + "from the routing table"); } + final Set inSyncAllocationIds = indexMetaData.inSyncAllocationIds(shardRouting.id()); if (shardRouting.active() && - indexMetaData.inSyncAllocationIds(shardRouting.id()).contains(shardRouting.allocationId().getId()) == false) { + inSyncAllocationIds.contains(shardRouting.allocationId().getId()) == false) { throw new IllegalStateException("active shard routing " + shardRouting + " has no corresponding entry in the in-sync " + - "allocation set " + indexMetaData.inSyncAllocationIds(shardRouting.id())); + "allocation set " + inSyncAllocationIds); } + + if (indexMetaData.getCreationVersion().onOrAfter(Version.V_5_0_0_alpha1) && 
+ IndexMetaData.isIndexUsingShadowReplicas(indexMetaData.getSettings()) == false && // see #20650 + shardRouting.primary() && shardRouting.initializing() && + shardRouting.recoverySource().getType() == RecoverySource.Type.SNAPSHOT && + inSyncAllocationIds.contains(shardRouting.allocationId().getId()) == false) + throw new IllegalStateException("a primary shard routing " + shardRouting + " is a primary that is recovering from " + + "a known allocation id but has no corresponding entry in the in-sync " + + "allocation set " + inSyncAllocationIds); + } } return true; @@ -258,37 +267,6 @@ public ShardsIterator randomAllActiveShardsIt() { return new PlainShardsIterator(shuffler.shuffle(allActiveShards)); } - /** - * A group shards iterator where each group ({@link ShardIterator} - * is an iterator across shard replication group. - */ - public GroupShardsIterator groupByShardsIt() { - // use list here since we need to maintain identity across shards - ArrayList set = new ArrayList<>(shards.size()); - for (IndexShardRoutingTable indexShard : this) { - set.add(indexShard.shardsIt()); - } - return new GroupShardsIterator(set); - } - - /** - * A groups shards iterator where each groups is a single {@link ShardRouting} and a group - * is created for each shard routing. - *

    - * This basically means that components that use the {@link GroupShardsIterator} will iterate - * over *all* the shards (all the replicas) within the index.

    - */ - public GroupShardsIterator groupByAllIt() { - // use list here since we need to maintain identity across shards - ArrayList set = new ArrayList<>(); - for (IndexShardRoutingTable indexShard : this) { - for (ShardRouting shardRouting : indexShard) { - set.add(shardRouting.shardsIt()); - } - } - return new GroupShardsIterator(set); - } - @Override public boolean equals(Object o) { if (this == o) return true; @@ -309,8 +287,7 @@ public int hashCode() { return result; } - @Override - public IndexRoutingTable readFrom(StreamInput in) throws IOException { + public static IndexRoutingTable readFrom(StreamInput in) throws IOException { Index index = new Index(in); Builder builder = new Builder(index); @@ -322,6 +299,10 @@ public IndexRoutingTable readFrom(StreamInput in) throws IOException { return builder.build(); } + public static Diff readDiffFrom(StreamInput in) throws IOException { + return readDiffFrom(IndexRoutingTable::readFrom, in); + } + @Override public void writeTo(StreamOutput out) throws IOException { index.writeTo(out); @@ -344,46 +325,32 @@ public Builder(Index index) { this.index = index; } - /** - * Reads an {@link IndexRoutingTable} from an {@link StreamInput} - * - * @param in {@link StreamInput} to read the {@link IndexRoutingTable} from - * @return {@link IndexRoutingTable} read - * @throws IOException if something happens during read - */ - public static IndexRoutingTable readFrom(StreamInput in) throws IOException { - return PROTO.readFrom(in); - } - /** * Initializes a new empty index, as if it was created from an API. */ public Builder initializeAsNew(IndexMetaData indexMetaData) { - RecoverySource primaryRecoverySource = indexMetaData.getMergeSourceIndex() != null ? - LocalShardsRecoverySource.INSTANCE : - StoreRecoverySource.EMPTY_STORE_INSTANCE; - return initializeEmpty(indexMetaData, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, null), primaryRecoverySource); + return initializeEmpty(indexMetaData, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, null)); } /** * Initializes an existing index. */ public Builder initializeAsRecovery(IndexMetaData indexMetaData) { - return initializeEmpty(indexMetaData, new UnassignedInfo(UnassignedInfo.Reason.CLUSTER_RECOVERED, null), null); + return initializeEmpty(indexMetaData, new UnassignedInfo(UnassignedInfo.Reason.CLUSTER_RECOVERED, null)); } /** * Initializes a new index caused by dangling index imported. */ public Builder initializeAsFromDangling(IndexMetaData indexMetaData) { - return initializeEmpty(indexMetaData, new UnassignedInfo(UnassignedInfo.Reason.DANGLING_INDEX_IMPORTED, null), null); + return initializeEmpty(indexMetaData, new UnassignedInfo(UnassignedInfo.Reason.DANGLING_INDEX_IMPORTED, null)); } /** * Initializes a new empty index, as as a result of opening a closed index. */ public Builder initializeAsFromCloseToOpen(IndexMetaData indexMetaData) { - return initializeEmpty(indexMetaData, new UnassignedInfo(UnassignedInfo.Reason.INDEX_REOPENED, null), null); + return initializeEmpty(indexMetaData, new UnassignedInfo(UnassignedInfo.Reason.INDEX_REOPENED, null)); } /** @@ -435,28 +402,36 @@ private Builder initializeAsRestore(IndexMetaData indexMetaData, SnapshotRecover /** * Initializes a new empty index, with an option to control if its from an API or not. - * - * @param primaryRecoverySource recovery source for primary shards. 
If null, it is automatically determined based on active - * allocation ids */ - private Builder initializeEmpty(IndexMetaData indexMetaData, UnassignedInfo unassignedInfo, @Nullable RecoverySource primaryRecoverySource) { + private Builder initializeEmpty(IndexMetaData indexMetaData, UnassignedInfo unassignedInfo) { assert indexMetaData.getIndex().equals(index); if (!shards.isEmpty()) { throw new IllegalStateException("trying to initialize an index with fresh shards, but already has shards created"); } for (int shardNumber = 0; shardNumber < indexMetaData.getNumberOfShards(); shardNumber++) { ShardId shardId = new ShardId(index, shardNumber); - if (primaryRecoverySource == null) { - if (indexMetaData.inSyncAllocationIds(shardNumber).isEmpty() && indexMetaData.getCreationVersion().onOrAfter(Version.V_5_0_0_alpha1)) { - primaryRecoverySource = indexMetaData.getMergeSourceIndex() != null ? LocalShardsRecoverySource.INSTANCE : StoreRecoverySource.EMPTY_STORE_INSTANCE; - } else { - primaryRecoverySource = StoreRecoverySource.EXISTING_STORE_INSTANCE; - } + final RecoverySource primaryRecoverySource; + if (indexMetaData.inSyncAllocationIds(shardNumber).isEmpty() == false) { + // we have previous valid copies for this shard. use them for recovery + primaryRecoverySource = StoreRecoverySource.EXISTING_STORE_INSTANCE; + } else if (indexMetaData.getCreationVersion().before(Version.V_5_0_0_alpha1) && + unassignedInfo.getReason() != UnassignedInfo.Reason.INDEX_CREATED // tests can create old indices + ) { + // the index is old and didn't maintain inSyncAllocationIds. Fall back to old behavior and require + // finding existing copies + primaryRecoverySource = StoreRecoverySource.EXISTING_STORE_INSTANCE; + } else if (indexMetaData.getMergeSourceIndex() != null) { + // this is a new index but the initial shards should be merged from another index + primaryRecoverySource = LocalShardsRecoverySource.INSTANCE; + } else { + // a freshly created index with no restriction + primaryRecoverySource = StoreRecoverySource.EMPTY_STORE_INSTANCE; } IndexShardRoutingTable.Builder indexShardRoutingBuilder = new IndexShardRoutingTable.Builder(shardId); for (int i = 0; i <= indexMetaData.getNumberOfReplicas(); i++) { boolean primary = i == 0; - indexShardRoutingBuilder.addShard(ShardRouting.newUnassigned(shardId, primary, primary ? primaryRecoverySource : PeerRecoverySource.INSTANCE, unassignedInfo)); + indexShardRoutingBuilder.addShard(ShardRouting.newUnassigned(shardId, primary, + primary ? 
primaryRecoverySource : PeerRecoverySource.INSTANCE, unassignedInfo)); } shards.put(shardNumber, indexShardRoutingBuilder.build()); } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/IndexShardRoutingTable.java b/core/src/main/java/org/elasticsearch/cluster/routing/IndexShardRoutingTable.java index 619959923e942..2028cc5b8c943 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/IndexShardRoutingTable.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/IndexShardRoutingTable.java @@ -33,6 +33,7 @@ import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; +import java.util.HashSet; import java.util.Iterator; import java.util.LinkedList; import java.util.List; @@ -184,6 +185,15 @@ public List activeShards() { return this.activeShards; } + /** + * Returns a {@link List} of all initializing shards, including target shards of relocations + * + * @return a {@link List} of shards + */ + public List getAllInitializingShards() { + return this.allInitializingShards; + } + /** * Returns a {@link List} of active shards * @@ -572,14 +582,6 @@ public Builder(ShardId shardId) { } public Builder addShard(ShardRouting shardEntry) { - for (ShardRouting shard : shards) { - // don't add two that map to the same node id - // we rely on the fact that a node does not have primary and backup of the same shard - if (shard.assignedToNode() && shardEntry.assignedToNode() - && shard.currentNodeId().equals(shardEntry.currentNodeId())) { - return this; - } - } shards.add(shardEntry); return this; } @@ -590,9 +592,28 @@ public Builder removeShard(ShardRouting shardEntry) { } public IndexShardRoutingTable build() { + // don't allow more than one shard copy with same id to be allocated to same node + assert distinctNodes(shards) : "more than one shard with same id assigned to same node (shards: " + shards + ")"; return new IndexShardRoutingTable(shardId, Collections.unmodifiableList(new ArrayList<>(shards))); } + static boolean distinctNodes(List shards) { + Set nodes = new HashSet<>(); + for (ShardRouting shard : shards) { + if (shard.assignedToNode()) { + if (nodes.add(shard.currentNodeId()) == false) { + return false; + } + if (shard.relocating()) { + if (nodes.add(shard.relocatingNodeId()) == false) { + return false; + } + } + } + } + return true; + } + public static IndexShardRoutingTable readFrom(StreamInput in) throws IOException { Index index = new Index(in); return readFromThin(in, index); diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/OperationRouting.java b/core/src/main/java/org/elasticsearch/cluster/routing/OperationRouting.java index ef3fae4830137..5280725169980 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/OperationRouting.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/OperationRouting.java @@ -68,12 +68,7 @@ public ShardIterator getShards(ClusterState clusterState, String index, int shar return preferenceActiveShardIterator(indexShard, clusterState.nodes().getLocalNodeId(), clusterState.nodes(), preference); } - public int searchShardsCount(ClusterState clusterState, String[] concreteIndices, @Nullable Map> routing) { - final Set shards = computeTargetedShards(clusterState, concreteIndices, routing); - return shards.size(); - } - - public GroupShardsIterator searchShards(ClusterState clusterState, String[] concreteIndices, @Nullable Map> routing, @Nullable String preference) { + public GroupShardsIterator searchShards(ClusterState clusterState, String[] concreteIndices, @Nullable Map> 
routing, @Nullable String preference) { final Set shards = computeTargetedShards(clusterState, concreteIndices, routing); final Set set = new HashSet<>(shards.size()); for (IndexShardRoutingTable shard : shards) { @@ -82,7 +77,7 @@ public GroupShardsIterator searchShards(ClusterState clusterState, String[] conc set.add(iterator); } } - return new GroupShardsIterator(new ArrayList<>(set)); + return new GroupShardsIterator<>(new ArrayList<>(set)); } private static final Map> EMPTY_ROUTING = Collections.emptyMap(); @@ -97,13 +92,10 @@ private Set computeTargetedShards(ClusterState clusterSt final Set effectiveRouting = routing.get(index); if (effectiveRouting != null) { for (String r : effectiveRouting) { - int shardId = generateShardId(indexMetaData, null, r); - IndexShardRoutingTable indexShard = indexRouting.shard(shardId); - if (indexShard == null) { - throw new ShardNotFoundException(new ShardId(indexRouting.getIndex(), shardId)); + final int routingPartitionSize = indexMetaData.getRoutingPartitionSize(); + for (int partitionOffset = 0; partitionOffset < routingPartitionSize; partitionOffset++) { + set.add(shardRoutingTable(indexRouting, calculateScaledShardId(indexMetaData, r, partitionOffset))); } - // we might get duplicates, but that's ok, they will override one another - set.add(indexShard); } } else { for (IndexShardRoutingTable indexShard : indexRouting) { @@ -126,7 +118,7 @@ private ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable index Preference preferenceType = Preference.parse(preference); if (preferenceType == Preference.SHARDS) { // starts with _shards, so execute on specific ones - int index = preference.indexOf(';'); + int index = preference.indexOf('|'); String shards; if (index == -1) { @@ -192,6 +184,14 @@ private ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable index } } + private IndexShardRoutingTable shardRoutingTable(IndexRoutingTable indexRouting, int shardId) { + IndexShardRoutingTable indexShard = indexRouting.shard(shardId); + if (indexShard == null) { + throw new ShardNotFoundException(new ShardId(indexRouting.getIndex(), shardId)); + } + return indexShard; + } + protected IndexRoutingTable indexRoutingTable(ClusterState clusterState, String index) { IndexRoutingTable indexRouting = clusterState.routingTable().index(index); if (indexRouting == null) { @@ -218,15 +218,33 @@ public ShardId shardId(ClusterState clusterState, String index, String id, @Null return new ShardId(indexMetaData.getIndex(), generateShardId(indexMetaData, id, routing)); } - static int generateShardId(IndexMetaData indexMetaData, String id, @Nullable String routing) { - final int hash; + static int generateShardId(IndexMetaData indexMetaData, @Nullable String id, @Nullable String routing) { + final String effectiveRouting; + final int partitionOffset; + if (routing == null) { - hash = Murmur3HashFunction.hash(id); + assert(indexMetaData.isRoutingPartitionedIndex() == false) : "A routing value is required for gets from a partitioned index"; + effectiveRouting = id; } else { - hash = Murmur3HashFunction.hash(routing); + effectiveRouting = routing; } + + if (indexMetaData.isRoutingPartitionedIndex()) { + partitionOffset = Math.floorMod(Murmur3HashFunction.hash(id), indexMetaData.getRoutingPartitionSize()); + } else { + // we would have still got 0 above but this check just saves us an unnecessary hash calculation + partitionOffset = 0; + } + + return calculateScaledShardId(indexMetaData, effectiveRouting, partitionOffset); + } + + private static int 
calculateScaledShardId(IndexMetaData indexMetaData, String effectiveRouting, int partitionOffset) { + final int hash = Murmur3HashFunction.hash(effectiveRouting) + partitionOffset; + // we don't use IMD#getNumberOfShards since the index might have been shrunk such that we need to use the size // of original index to hash documents return Math.floorMod(hash, indexMetaData.getRoutingNumShards()) / indexMetaData.getRoutingFactor(); } + } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/PlainShardIterator.java b/core/src/main/java/org/elasticsearch/cluster/routing/PlainShardIterator.java index 5950bd35d37f0..bb45ca66956f8 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/PlainShardIterator.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/PlainShardIterator.java @@ -43,7 +43,6 @@ public PlainShardIterator(ShardId shardId, List shards) { this.shardId = shardId; } - @Override public ShardId shardId() { return this.shardId; diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/PlainShardsIterator.java b/core/src/main/java/org/elasticsearch/cluster/routing/PlainShardsIterator.java index 3c2a338ae6467..6cb1989a8dd02 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/PlainShardsIterator.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/PlainShardsIterator.java @@ -18,6 +18,8 @@ */ package org.elasticsearch.cluster.routing; +import java.util.Collections; +import java.util.Iterator; import java.util.List; /** @@ -74,33 +76,12 @@ public int sizeActive() { } @Override - public int assignedReplicasIncludingRelocating() { - int count = 0; - for (ShardRouting shard : shards) { - if (shard.unassigned()) { - continue; - } - // if the shard is primary and relocating, add one to the counter since we perform it on the replica as well - // (and we already did it on the primary) - if (shard.primary()) { - if (shard.relocating()) { - count++; - } - } else { - count++; - // if we are relocating the replica, we want to perform the index operation on both the relocating - // shard and the target shard. This means that we won't loose index operations between end of recovery - // and reassignment of the shard by the master node - if (shard.relocating()) { - count++; - } - } - } - return count; + public List getShardRoutings() { + return Collections.unmodifiableList(shards); } @Override - public Iterable asUnordered() { - return shards; + public Iterator iterator() { + return shards.iterator(); } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/RoutingChangesObserver.java b/core/src/main/java/org/elasticsearch/cluster/routing/RoutingChangesObserver.java index 0f3a8c6f214c6..883b4c22f7fc0 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/RoutingChangesObserver.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/RoutingChangesObserver.java @@ -69,6 +69,12 @@ public interface RoutingChangesObserver { */ void replicaPromoted(ShardRouting replicaShard); + /** + * Called when an initializing replica is reinitialized. This happens when a primary relocation completes, which + * reinitializes all currently initializing replicas as their recovery source node changes + */ + void initializedReplicaReinitialized(ShardRouting oldReplica, ShardRouting reinitializedReplica); + /** * Abstract implementation of {@link RoutingChangesObserver} that does not take any action. 
Useful for subclasses that only override @@ -120,6 +126,11 @@ public void startedPrimaryReinitialized(ShardRouting startedPrimaryShard, ShardR public void replicaPromoted(ShardRouting replicaShard) { } + + @Override + public void initializedReplicaReinitialized(ShardRouting oldReplica, ShardRouting reinitializedReplica) { + + } } class DelegatingRoutingChangesObserver implements RoutingChangesObserver { @@ -192,5 +203,12 @@ public void replicaPromoted(ShardRouting replicaShard) { routingChangesObserver.replicaPromoted(replicaShard); } } + + @Override + public void initializedReplicaReinitialized(ShardRouting oldReplica, ShardRouting reinitializedReplica) { + for (RoutingChangesObserver routingChangesObserver : routingChangesObservers) { + routingChangesObserver.initializedReplicaReinitialized(oldReplica, reinitializedReplica); + } + } } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/RoutingNode.java b/core/src/main/java/org/elasticsearch/cluster/routing/RoutingNode.java index 8403f45a550b1..4ba277d99cd5d 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/RoutingNode.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/RoutingNode.java @@ -19,18 +19,15 @@ package org.elasticsearch.cluster.routing; -import org.apache.lucene.util.CollectionUtil; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.common.Nullable; import org.elasticsearch.index.shard.ShardId; import java.util.ArrayList; import java.util.Collections; -import java.util.Comparator; import java.util.Iterator; import java.util.LinkedHashMap; import java.util.List; -import java.util.Map; /** * A {@link RoutingNode} represents a cluster node associated with a single {@link DiscoveryNode} including all shards @@ -103,7 +100,8 @@ public int size() { */ void add(ShardRouting shard) { if (shards.containsKey(shard.shardId())) { - throw new IllegalStateException("Trying to add a shard " + shard.shardId() + " to a node [" + nodeId + "] where it already exists"); + throw new IllegalStateException("Trying to add a shard " + shard.shardId() + " to a node [" + nodeId + + "] where it already exists. current [" + shards.get(shard.shardId()) + "]. new [" + shard + "]"); } shards.put(shard.shardId(), shard); } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/RoutingNodes.java b/core/src/main/java/org/elasticsearch/cluster/routing/RoutingNodes.java index 553e3b7324769..5016330e92bc8 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/RoutingNodes.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/RoutingNodes.java @@ -23,6 +23,8 @@ import com.carrotsearch.hppc.cursors.ObjectCursor; import org.apache.logging.log4j.Logger; import org.apache.lucene.util.CollectionUtil; +import org.elasticsearch.Assertions; +import org.elasticsearch.Version; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.MetaData; @@ -319,14 +321,35 @@ public ShardRouting activePrimary(ShardId shardId) { /** * Returns one active replica shard for the given shard id or null if * no active replica is found. + * + * Since replicas could possibly be on nodes with a older version of ES than + * the primary is, this will return replicas on the highest version of ES. 
+ * */ - public ShardRouting activeReplica(ShardId shardId) { + public ShardRouting activeReplicaWithHighestVersion(ShardId shardId) { + Version highestVersionSeen = null; + ShardRouting candidate = null; for (ShardRouting shardRouting : assignedShards(shardId)) { if (!shardRouting.primary() && shardRouting.active()) { - return shardRouting; + // It's possible for replicaNodeVersion to be null, when deassociating dead nodes + // that have been removed, the shards are failed, and part of the shard failing + // calls this method with an out-of-date RoutingNodes, where the version might not + // be accessible. Therefore, we need to protect against the version being null + // (meaning the node will be going away). + RoutingNode replicaNode = node(shardRouting.currentNodeId()); + if (replicaNode != null && replicaNode.node() != null) { + Version replicaNodeVersion = replicaNode.node().getVersion(); + if (highestVersionSeen == null || replicaNodeVersion.after(highestVersionSeen)) { + highestVersionSeen = replicaNodeVersion; + candidate = shardRouting; + } else if (candidate == null) { + // Only use this replica if there are no other candidates + candidate = shardRouting; + } + } } } - return null; + return candidate; } /** @@ -391,7 +414,8 @@ public List shardsWithState(String index, ShardRoutingState... sta return shards; } - public String prettyPrint() { + @Override + public String toString() { StringBuilder sb = new StringBuilder("routing_nodes:\n"); for (RoutingNode routingNode : this) { sb.append(routingNode.prettyPrint()); @@ -450,6 +474,9 @@ public Tuple relocateShard(ShardRouting startedShard, * * Moves the initializing shard to started. If the shard is a relocation target, also removes the relocation source. * + * If the started shard is a primary relocation target, this also reinitializes currently initializing replicas as their + * recovery source changes + * * @return the started shard */ public ShardRouting startShard(Logger logger, ShardRouting initializingShard, RoutingChangesObserver routingChangesObserver) { @@ -467,6 +494,30 @@ public ShardRouting startShard(Logger logger, ShardRouting initializingShard, Ro + initializingShard + " but was: " + relocationSourceShard.getTargetRelocatingShard(); remove(relocationSourceShard); routingChangesObserver.relocationCompleted(relocationSourceShard); + + // if this is a primary shard with ongoing replica recoveries, reinitialize them as their recovery source changed + if (startedShard.primary()) { + List assignedShards = assignedShards(startedShard.shardId()); + // copy list to prevent ConcurrentModificationException + for (ShardRouting routing : new ArrayList<>(assignedShards)) { + if (routing.initializing() && routing.primary() == false) { + if (routing.isRelocationTarget()) { + // find the relocation source + ShardRouting sourceShard = getByAllocationId(routing.shardId(), routing.allocationId().getRelocationId()); + // cancel relocation and start relocation to same node again + ShardRouting startedReplica = cancelRelocation(sourceShard); + remove(routing); + routingChangesObserver.shardFailed(routing, + new UnassignedInfo(UnassignedInfo.Reason.REINITIALIZED, "primary changed")); + relocateShard(startedReplica, sourceShard.relocatingNodeId(), + sourceShard.getExpectedShardSize(), routingChangesObserver); + } else { + ShardRouting reinitializedReplica = reinitReplica(routing); + routingChangesObserver.initializedReplicaReinitialized(routing, reinitializedReplica); + } + } + } + } } return startedShard; } @@ -536,8 +587,22 @@ assert 
getByAllocationId(failedShard.shardId(), failedShard.allocationId().getId // fail actual shard if (failedShard.initializing()) { if (failedShard.relocatingNodeId() == null) { - // initializing shard that is not relocation target, just move to unassigned - moveToUnassigned(failedShard, unassignedInfo); + if (failedShard.primary()) { + // promote active replica to primary if active replica exists (only the case for shadow replicas) + ShardRouting activeReplica = activeReplicaWithHighestVersion(failedShard.shardId()); + assert activeReplica == null || IndexMetaData.isIndexUsingShadowReplicas(indexMetaData.getSettings()) : + "initializing primary [" + failedShard + "] with active replicas [" + activeReplica + "] only expected when " + + "using shadow replicas"; + if (activeReplica == null) { + moveToUnassigned(failedShard, unassignedInfo); + } else { + movePrimaryToUnassignedAndDemoteToReplica(failedShard, unassignedInfo); + promoteReplicaToPrimary(activeReplica, indexMetaData, routingChangesObserver); + } + } else { + // initializing shard that is not relocation target, just move to unassigned + moveToUnassigned(failedShard, unassignedInfo); + } } else { // The shard is a target of a relocating shard. In that case we only need to remove the target shard and cancel the source // relocation. No shard is left unassigned @@ -556,20 +621,12 @@ assert getByAllocationId(failedShard.shardId(), failedShard.allocationId().getId assert failedShard.active(); if (failedShard.primary()) { // promote active replica to primary if active replica exists - ShardRouting activeReplica = activeReplica(failedShard.shardId()); + ShardRouting activeReplica = activeReplicaWithHighestVersion(failedShard.shardId()); if (activeReplica == null) { moveToUnassigned(failedShard, unassignedInfo); } else { - // if the activeReplica was relocating before this call to failShard, its relocation was cancelled above when we - // failed initializing replica shards (and moved replica relocation source back to started) - assert activeReplica.started() : "replica relocation should have been cancelled: " + activeReplica; movePrimaryToUnassignedAndDemoteToReplica(failedShard, unassignedInfo); - ShardRouting primarySwappedCandidate = promoteActiveReplicaShardToPrimary(activeReplica); - routingChangesObserver.replicaPromoted(activeReplica); - if (IndexMetaData.isIndexUsingShadowReplicas(indexMetaData.getSettings())) { - ShardRouting initializedShard = reinitShadowPrimary(primarySwappedCandidate); - routingChangesObserver.startedPrimaryReinitialized(primarySwappedCandidate, initializedShard); - } + promoteReplicaToPrimary(activeReplica, indexMetaData, routingChangesObserver); } } else { assert failedShard.primary() == false; @@ -585,6 +642,19 @@ assert node(failedShard.currentNodeId()).getByShardId(failedShard.shardId()) == " was matched but wasn't removed"; } + private void promoteReplicaToPrimary(ShardRouting activeReplica, IndexMetaData indexMetaData, + RoutingChangesObserver routingChangesObserver) { + // if the activeReplica was relocating before this call to failShard, its relocation was cancelled earlier when we + // failed initializing replica shards (and moved replica relocation source back to started) + assert activeReplica.started() : "replica relocation should have been cancelled: " + activeReplica; + ShardRouting primarySwappedCandidate = promoteActiveReplicaShardToPrimary(activeReplica); + routingChangesObserver.replicaPromoted(activeReplica); + if (IndexMetaData.isIndexUsingShadowReplicas(indexMetaData.getSettings())) { + 
ShardRouting initializedShard = reinitShadowPrimary(primarySwappedCandidate); + routingChangesObserver.startedPrimaryReinitialized(primarySwappedCandidate, initializedShard); + } + } + /** * Mark a shard as started and adjusts internal statistics. * @@ -706,6 +776,16 @@ private ShardRouting reinitShadowPrimary(ShardRouting candidate) { updateAssigned(candidate, reinitializedShard); inactivePrimaryCount++; inactiveShardCount++; + addRecovery(reinitializedShard); + return reinitializedShard; + } + + private ShardRouting reinitReplica(ShardRouting shard) { + assert shard.primary() == false : "shard must be a replica: " + shard; + assert shard.initializing() : "can only reinitialize an initializing replica: " + shard; + assert shard.isRelocationTarget() == false : "replication target cannot be reinitialized: " + shard; + ShardRouting reinitializedShard = shard.reinitializeReplicaShard(); + updateAssigned(shard, reinitializedShard); return reinitializedShard; } @@ -968,9 +1048,7 @@ public ShardRouting[] drain() { * this method does nothing. */ public static boolean assertShardStats(RoutingNodes routingNodes) { - boolean run = false; - assert (run = true); // only run if assertions are enabled! - if (!run) { + if (!Assertions.ENABLED) { return true; } int unassignedPrimaryCount = 0; diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/RoutingService.java b/core/src/main/java/org/elasticsearch/cluster/routing/RoutingService.java index 8300d3e37fd64..1c3d629a72fea 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/RoutingService.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/RoutingService.java @@ -25,7 +25,6 @@ import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.ClusterStateUpdateTask; import org.elasticsearch.cluster.routing.allocation.AllocationService; -import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Priority; import org.elasticsearch.common.component.AbstractLifecycleComponent; @@ -96,12 +95,7 @@ protected void performReroute(String reason) { @Override public ClusterState execute(ClusterState currentState) { rerouting.set(false); - RoutingAllocation.Result routingResult = allocationService.reroute(currentState, reason); - if (!routingResult.changed()) { - // no state changed - return currentState; - } - return ClusterState.builder(currentState).routingResult(routingResult).build(); + return allocationService.reroute(currentState, reason); } @Override @@ -115,7 +109,7 @@ public void onFailure(String source, Exception e) { rerouting.set(false); ClusterState state = clusterService.state(); if (logger.isTraceEnabled()) { - logger.error((Supplier) () -> new ParameterizedMessage("unexpected failure during [{}], current state:\n{}", source, state.prettyPrint()), e); + logger.error((Supplier) () -> new ParameterizedMessage("unexpected failure during [{}], current state:\n{}", source, state), e); } else { logger.error((Supplier) () -> new ParameterizedMessage("unexpected failure during [{}], current state version [{}]", source, state.version()), e); } @@ -124,7 +118,7 @@ public void onFailure(String source, Exception e) { } catch (Exception e) { rerouting.set(false); ClusterState state = clusterService.state(); - logger.warn((Supplier) () -> new ParameterizedMessage("failed to reroute routing table, current state:\n{}", state.prettyPrint()), e); + logger.warn((Supplier) () -> new ParameterizedMessage("failed to reroute 
routing table, current state:\n{}", state), e); } } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/RoutingTable.java b/core/src/main/java/org/elasticsearch/cluster/routing/RoutingTable.java index 2d960ce0450bb..a248d6a939a0c 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/RoutingTable.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/RoutingTable.java @@ -56,8 +56,6 @@ */ public class RoutingTable implements Iterable, Diffable { - public static RoutingTable PROTO = builder().build(); - public static final RoutingTable EMPTY_ROUTING_TABLE = builder().build(); private final long version; @@ -240,7 +238,7 @@ public GroupShardsIterator allActiveShardsGrouped(String[] indices, boolean incl return allSatisfyingPredicateShardsGrouped(indices, includeEmpty, includeRelocationTargets, ACTIVE_PREDICATE); } - public GroupShardsIterator allAssignedShardsGrouped(String[] indices, boolean includeEmpty) { + public GroupShardsIterator allAssignedShardsGrouped(String[] indices, boolean includeEmpty) { return allAssignedShardsGrouped(indices, includeEmpty, false); } @@ -251,14 +249,14 @@ public GroupShardsIterator allAssignedShardsGrouped(String[] indices, boolean in * @param includeRelocationTargets if true, an extra shard iterator will be added for relocating shards. The extra * iterator contains a single ShardRouting pointing at the relocating target */ - public GroupShardsIterator allAssignedShardsGrouped(String[] indices, boolean includeEmpty, boolean includeRelocationTargets) { + public GroupShardsIterator allAssignedShardsGrouped(String[] indices, boolean includeEmpty, boolean includeRelocationTargets) { return allSatisfyingPredicateShardsGrouped(indices, includeEmpty, includeRelocationTargets, ASSIGNED_PREDICATE); } - private static Predicate ACTIVE_PREDICATE = shardRouting -> shardRouting.active(); - private static Predicate ASSIGNED_PREDICATE = shardRouting -> shardRouting.assignedToNode(); + private static Predicate ACTIVE_PREDICATE = ShardRouting::active; + private static Predicate ASSIGNED_PREDICATE = ShardRouting::assignedToNode; - private GroupShardsIterator allSatisfyingPredicateShardsGrouped(String[] indices, boolean includeEmpty, boolean includeRelocationTargets, Predicate predicate) { + private GroupShardsIterator allSatisfyingPredicateShardsGrouped(String[] indices, boolean includeEmpty, boolean includeRelocationTargets, Predicate predicate) { // use list here since we need to maintain identity across shards ArrayList set = new ArrayList<>(); for (String index : indices) { @@ -280,7 +278,7 @@ private GroupShardsIterator allSatisfyingPredicateShardsGrouped(String[] indices } } } - return new GroupShardsIterator(set); + return new GroupShardsIterator<>(set); } public ShardsIterator allShards(String[] indices) { @@ -322,9 +320,8 @@ private ShardsIterator allShardsSatisfyingPredicate(String[] indices, Predicate< * @param indices The indices to return all the shards (replicas) * @return All the primary shards grouped into a single shard element group each * @throws IndexNotFoundException If an index passed does not exists - * @see IndexRoutingTable#groupByAllIt() */ - public GroupShardsIterator activePrimaryShardsGrouped(String[] indices, boolean includeEmpty) { + public GroupShardsIterator activePrimaryShardsGrouped(String[] indices, boolean includeEmpty) { // use list here since we need to maintain identity across shards ArrayList set = new ArrayList<>(); for (String index : indices) { @@ -341,7 +338,7 @@ public GroupShardsIterator 
activePrimaryShardsGrouped(String[] indices, boolean } } } - return new GroupShardsIterator(set); + return new GroupShardsIterator<>(set); } @Override @@ -349,18 +346,16 @@ public Diff diff(RoutingTable previousState) { return new RoutingTableDiff(previousState, this); } - @Override - public Diff readDiffFrom(StreamInput in) throws IOException { + public static Diff readDiffFrom(StreamInput in) throws IOException { return new RoutingTableDiff(in); } - @Override - public RoutingTable readFrom(StreamInput in) throws IOException { + public static RoutingTable readFrom(StreamInput in) throws IOException { Builder builder = new Builder(); builder.version = in.readLong(); int size = in.readVInt(); for (int i = 0; i < size; i++) { - IndexRoutingTable index = IndexRoutingTable.Builder.readFrom(in); + IndexRoutingTable index = IndexRoutingTable.readFrom(in); builder.add(index); } @@ -382,14 +377,15 @@ private static class RoutingTableDiff implements Diff { private final Diff> indicesRouting; - public RoutingTableDiff(RoutingTable before, RoutingTable after) { + RoutingTableDiff(RoutingTable before, RoutingTable after) { version = after.version; indicesRouting = DiffableUtils.diff(before.indicesRouting, after.indicesRouting, DiffableUtils.getStringKeySerializer()); } - public RoutingTableDiff(StreamInput in) throws IOException { + RoutingTableDiff(StreamInput in) throws IOException { version = in.readLong(); - indicesRouting = DiffableUtils.readImmutableOpenMapDiff(in, DiffableUtils.getStringKeySerializer(), IndexRoutingTable.PROTO); + indicesRouting = DiffableUtils.readImmutableOpenMapDiff(in, DiffableUtils.getStringKeySerializer(), IndexRoutingTable::readFrom, + IndexRoutingTable::readDiffFrom); } @Override @@ -607,13 +603,10 @@ public RoutingTable build() { indicesRouting = null; return table; } - - public static RoutingTable readFrom(StreamInput in) throws IOException { - return PROTO.readFrom(in); - } } - public String prettyPrint() { + @Override + public String toString() { StringBuilder sb = new StringBuilder("routing_table (version ").append(version).append("):\n"); for (ObjectObjectCursor entry : indicesRouting) { sb.append(entry.value.prettyPrint()).append('\n'); diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/ShardRouting.java b/core/src/main/java/org/elasticsearch/cluster/routing/ShardRouting.java index e441fd8111335..3a60e5338d798 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/ShardRouting.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/ShardRouting.java @@ -390,7 +390,18 @@ public ShardRouting reinitializePrimaryShard() { assert primary : this; return new ShardRouting(shardId, currentNodeId, null, primary, ShardRoutingState.INITIALIZING, StoreRecoverySource.EXISTING_STORE_INSTANCE, new UnassignedInfo(UnassignedInfo.Reason.REINITIALIZED, null), - AllocationId.newInitializing(), UNAVAILABLE_EXPECTED_SHARD_SIZE); + allocationId, UNAVAILABLE_EXPECTED_SHARD_SIZE); + } + + /** + * Reinitializes a replica shard, giving it a fresh allocation id + */ + public ShardRouting reinitializeReplicaShard() { + assert state == ShardRoutingState.INITIALIZING : this; + assert primary == false : this; + assert isRelocationTarget() == false : this; + return new ShardRouting(shardId, currentNodeId, null, primary, ShardRoutingState.INITIALIZING, + recoverySource, unassignedInfo, AllocationId.newInitializing(), expectedShardSize); } /** diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/ShardsIterator.java 
b/core/src/main/java/org/elasticsearch/cluster/routing/ShardsIterator.java index bdfe35ad2b9fb..638875ea07138 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/ShardsIterator.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/ShardsIterator.java @@ -18,10 +18,12 @@ */ package org.elasticsearch.cluster.routing; +import java.util.List; + /** * Allows to iterate over unrelated shards. */ -public interface ShardsIterator { +public interface ShardsIterator extends Iterable { /** * Resets the iterator to its initial state. @@ -42,13 +44,6 @@ public interface ShardsIterator { */ int sizeActive(); - /** - * Returns the number of replicas in this iterator that are not in the - * {@link ShardRoutingState#UNASSIGNED}. The returned double-counts replicas - * that are in the state {@link ShardRoutingState#RELOCATING} - */ - int assignedReplicasIncludingRelocating(); - /** * Returns the next shard, or null if none available. */ @@ -67,6 +62,9 @@ public interface ShardsIterator { @Override boolean equals(Object other); - Iterable asUnordered(); + /** + * Returns the {@link ShardRouting}s that this shards iterator holds. + */ + List getShardRoutings(); } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/UnassignedInfo.java b/core/src/main/java/org/elasticsearch/cluster/routing/UnassignedInfo.java index 4670e1e473671..3726bac781e3c 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/UnassignedInfo.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/UnassignedInfo.java @@ -185,15 +185,15 @@ public static AllocationStatus readFrom(StreamInput in) throws IOException { } } - public static AllocationStatus fromDecision(Decision decision) { + public static AllocationStatus fromDecision(Decision.Type decision) { Objects.requireNonNull(decision); - switch (decision.type()) { + switch (decision) { case NO: return DECIDERS_NO; case THROTTLE: return DECIDERS_THROTTLED; default: - throw new IllegalArgumentException("no allocation attempt from decision[" + decision.type() + "]"); + throw new IllegalArgumentException("no allocation attempt from decision[" + decision + "]"); } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AbstractAllocationDecision.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AbstractAllocationDecision.java new file mode 100644 index 0000000000000..7ee17c558f70d --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AbstractAllocationDecision.java @@ -0,0 +1,187 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.cluster.routing.allocation; + +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.routing.allocation.decider.Decision.Type; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.stream.Collectors; + +/** + * An abstract class for representing various types of allocation decisions. + */ +public abstract class AbstractAllocationDecision implements ToXContent, Writeable { + + @Nullable + protected final DiscoveryNode targetNode; + @Nullable + protected final List nodeDecisions; + + protected AbstractAllocationDecision(@Nullable DiscoveryNode targetNode, @Nullable List nodeDecisions) { + this.targetNode = targetNode; + this.nodeDecisions = nodeDecisions != null ? sortNodeDecisions(nodeDecisions) : null; + } + + protected AbstractAllocationDecision(StreamInput in) throws IOException { + targetNode = in.readOptionalWriteable(DiscoveryNode::new); + nodeDecisions = in.readBoolean() ? Collections.unmodifiableList(in.readList(NodeAllocationResult::new)) : null; + } + + /** + * Returns {@code true} if a decision was taken by the allocator, {@code false} otherwise. + * If no decision was taken, then the rest of the fields in this object cannot be accessed and will + * throw an {@code IllegalStateException}. + */ + public abstract boolean isDecisionTaken(); + + /** + * Get the node that the allocator will assign the shard to, returning {@code null} if there is no node to + * which the shard will be assigned or moved. If {@link #isDecisionTaken()} returns {@code false}, then + * invoking this method will throw an {@code IllegalStateException}. + */ + @Nullable + public DiscoveryNode getTargetNode() { + checkDecisionState(); + return targetNode; + } + + /** + * Gets the sorted list of individual node-level decisions that went into making the ultimate decision whether + * to allocate or move the shard. If {@link #isDecisionTaken()} returns {@code false}, then + * invoking this method will throw an {@code IllegalStateException}. + */ + @Nullable + public List getNodeDecisions() { + checkDecisionState(); + return nodeDecisions; + } + + /** + * Gets the explanation for the decision. If {@link #isDecisionTaken()} returns {@code false}, then invoking + * this method will throw an {@code IllegalStateException}. + */ + public abstract String getExplanation(); + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeOptionalWriteable(targetNode); + if (nodeDecisions != null) { + out.writeBoolean(true); + out.writeList(nodeDecisions); + } else { + out.writeBoolean(false); + } + } + + protected void checkDecisionState() { + if (isDecisionTaken() == false) { + throw new IllegalStateException("decision was not taken, individual object fields cannot be accessed"); + } + } + + /** + * Generates X-Content for a {@link DiscoveryNode} that leaves off some of the non-critical fields. + */ + public static XContentBuilder discoveryNodeToXContent(DiscoveryNode node, boolean outerObjectWritten, XContentBuilder builder) + throws IOException { + + builder.field(outerObjectWritten ? 
"id" : "node_id", node.getId()); + builder.field(outerObjectWritten ? "name" : "node_name", node.getName()); + builder.field("transport_address", node.getAddress().toString()); + if (node.getAttributes().isEmpty() == false) { + builder.startObject(outerObjectWritten ? "attributes" : "node_attributes"); + for (Map.Entry entry : node.getAttributes().entrySet()) { + builder.field(entry.getKey(), entry.getValue()); + } + builder.endObject(); + } + return builder; + } + + /** + * Sorts a list of node level decisions by the decision type, then by weight ranking, and finally by node id. + */ + public List sortNodeDecisions(List nodeDecisions) { + return Collections.unmodifiableList(nodeDecisions.stream().sorted().collect(Collectors.toList())); + } + + /** + * Generates X-Content for the node-level decisions, creating the outer "node_decisions" object + * in which they are serialized. + */ + public XContentBuilder nodeDecisionsToXContent(List nodeDecisions, XContentBuilder builder, Params params) + throws IOException { + + if (nodeDecisions != null && nodeDecisions.isEmpty() == false) { + builder.startArray("node_allocation_decisions"); + { + for (NodeAllocationResult explanation : nodeDecisions) { + explanation.toXContent(builder, params); + } + } + builder.endArray(); + } + return builder; + } + + /** + * Returns {@code true} if there is at least one node that returned a {@link Type#YES} decision for allocating this shard. + */ + protected boolean atLeastOneNodeWithYesDecision() { + if (nodeDecisions == null) { + return false; + } + for (NodeAllocationResult result : nodeDecisions) { + if (result.getNodeDecision() == AllocationDecision.YES) { + return true; + } + } + return false; + } + + @Override + public boolean equals(Object other) { + if (this == other) { + return true; + } + if (other == null || other instanceof AbstractAllocationDecision == false) { + return false; + } + @SuppressWarnings("unchecked") AbstractAllocationDecision that = (AbstractAllocationDecision) other; + return Objects.equals(targetNode, that.targetNode) && Objects.equals(nodeDecisions, that.nodeDecisions); + } + + @Override + public int hashCode() { + return Objects.hash(targetNode, nodeDecisions); + } + +} diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocateUnassignedDecision.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocateUnassignedDecision.java new file mode 100644 index 0000000000000..decdafd724c7d --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocateUnassignedDecision.java @@ -0,0 +1,331 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.cluster.routing.allocation; + +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.routing.UnassignedInfo.AllocationStatus; +import org.elasticsearch.cluster.routing.allocation.decider.Decision; +import org.elasticsearch.cluster.routing.allocation.decider.Decision.Type; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.Collections; +import java.util.EnumMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; + +/** + * Represents the allocation decision by an allocator for an unassigned shard. + */ +public class AllocateUnassignedDecision extends AbstractAllocationDecision { + /** a constant representing a shard decision where no decision was taken */ + public static final AllocateUnassignedDecision NOT_TAKEN = + new AllocateUnassignedDecision(AllocationStatus.NO_ATTEMPT, null, null, null, false, 0L, 0L); + /** + * a map of cached common no/throttle decisions that don't need explanations, + * this helps prevent unnecessary object allocations for the non-explain API case + */ + private static final Map CACHED_DECISIONS; + static { + Map cachedDecisions = new EnumMap<>(AllocationStatus.class); + cachedDecisions.put(AllocationStatus.FETCHING_SHARD_DATA, + new AllocateUnassignedDecision(AllocationStatus.FETCHING_SHARD_DATA, null, null, null, false, 0L, 0L)); + cachedDecisions.put(AllocationStatus.NO_VALID_SHARD_COPY, + new AllocateUnassignedDecision(AllocationStatus.NO_VALID_SHARD_COPY, null, null, null, false, 0L, 0L)); + cachedDecisions.put(AllocationStatus.DECIDERS_NO, + new AllocateUnassignedDecision(AllocationStatus.DECIDERS_NO, null, null, null, false, 0L, 0L)); + cachedDecisions.put(AllocationStatus.DECIDERS_THROTTLED, + new AllocateUnassignedDecision(AllocationStatus.DECIDERS_THROTTLED, null, null, null, false, 0L, 0L)); + cachedDecisions.put(AllocationStatus.DELAYED_ALLOCATION, + new AllocateUnassignedDecision(AllocationStatus.DELAYED_ALLOCATION, null, null, null, false, 0L, 0L)); + CACHED_DECISIONS = Collections.unmodifiableMap(cachedDecisions); + } + + @Nullable + private final AllocationStatus allocationStatus; + @Nullable + private final String allocationId; + private final boolean reuseStore; + private final long remainingDelayInMillis; + private final long configuredDelayInMillis; + + private AllocateUnassignedDecision(AllocationStatus allocationStatus, + DiscoveryNode assignedNode, + String allocationId, + List nodeDecisions, + boolean reuseStore, + long remainingDelayInMillis, + long configuredDelayInMillis) { + super(assignedNode, nodeDecisions); + assert assignedNode != null || allocationStatus != null : + "a yes decision must have a node to assign the shard to"; + assert allocationId == null || assignedNode != null : + "allocation id can only be null if the assigned node is null"; + this.allocationStatus = allocationStatus; + this.allocationId = allocationId; + this.reuseStore = reuseStore; + this.remainingDelayInMillis = remainingDelayInMillis; + this.configuredDelayInMillis = configuredDelayInMillis; + } + + public AllocateUnassignedDecision(StreamInput in) throws IOException { + super(in); + allocationStatus = in.readOptionalWriteable(AllocationStatus::readFrom); + allocationId = in.readOptionalString(); + reuseStore = 
in.readBoolean(); + remainingDelayInMillis = in.readVLong(); + configuredDelayInMillis = in.readVLong(); + } + + /** + * Returns a NO decision with the given {@link AllocationStatus}, and the individual node-level + * decisions that comprised the final NO decision if in explain mode. + */ + public static AllocateUnassignedDecision no(AllocationStatus allocationStatus, @Nullable List decisions) { + return no(allocationStatus, decisions, false); + } + + /** + * Returns a NO decision for a delayed shard allocation on a replica shard, with the individual node-level + * decisions that comprised the final NO decision, if in explain mode. Instances created with this + * method will return {@link AllocationStatus#DELAYED_ALLOCATION} for {@link #getAllocationStatus()}. + */ + public static AllocateUnassignedDecision delayed(long remainingDelay, long totalDelay, + @Nullable List decisions) { + return no(AllocationStatus.DELAYED_ALLOCATION, decisions, false, remainingDelay, totalDelay); + } + + /** + * Returns a NO decision with the given {@link AllocationStatus}, and the individual node-level + * decisions that comprised the final NO decision if in explain mode. + */ + public static AllocateUnassignedDecision no(AllocationStatus allocationStatus, @Nullable List decisions, + boolean reuseStore) { + return no(allocationStatus, decisions, reuseStore, 0L, 0L); + } + + private static AllocateUnassignedDecision no(AllocationStatus allocationStatus, @Nullable List decisions, + boolean reuseStore, long remainingDelay, long totalDelay) { + if (decisions != null) { + return new AllocateUnassignedDecision(allocationStatus, null, null, decisions, reuseStore, remainingDelay, totalDelay); + } else { + return getCachedDecision(allocationStatus); + } + } + + /** + * Returns a THROTTLE decision, with the individual node-level decisions that + * comprised the final THROTTLE decision if in explain mode. + */ + public static AllocateUnassignedDecision throttle(@Nullable List decisions) { + if (decisions != null) { + return new AllocateUnassignedDecision(AllocationStatus.DECIDERS_THROTTLED, null, null, decisions, false, 0L, 0L); + } else { + return getCachedDecision(AllocationStatus.DECIDERS_THROTTLED); + } + } + + /** + * Creates a YES decision with the given individual node-level decisions that + * comprised the final YES decision, along with the node id to which the shard is assigned and + * the allocation id for the shard, if available. + */ + public static AllocateUnassignedDecision yes(DiscoveryNode assignedNode, @Nullable String allocationId, + @Nullable List decisions, boolean reuseStore) { + return new AllocateUnassignedDecision(null, assignedNode, allocationId, decisions, reuseStore, 0L, 0L); + } + + /** + * Creates a {@link AllocateUnassignedDecision} from the given {@link Decision} and the assigned node, if any. + */ + public static AllocateUnassignedDecision fromDecision(Decision decision, @Nullable DiscoveryNode assignedNode, + @Nullable List nodeDecisions) { + final Type decisionType = decision.type(); + AllocationStatus allocationStatus = decisionType != Type.YES ? 
AllocationStatus.fromDecision(decisionType) : null; + return new AllocateUnassignedDecision(allocationStatus, assignedNode, null, nodeDecisions, false, 0L, 0L); + } + + private static AllocateUnassignedDecision getCachedDecision(AllocationStatus allocationStatus) { + AllocateUnassignedDecision decision = CACHED_DECISIONS.get(allocationStatus); + return Objects.requireNonNull(decision, "precomputed decision not found for " + allocationStatus); + } + + @Override + public boolean isDecisionTaken() { + return allocationStatus != AllocationStatus.NO_ATTEMPT; + } + + /** + * Returns the {@link AllocationDecision} denoting the result of an allocation attempt. + * If {@link #isDecisionTaken()} returns {@code false}, then invoking this method will + * throw an {@code IllegalStateException}. + */ + public AllocationDecision getAllocationDecision() { + checkDecisionState(); + return AllocationDecision.fromAllocationStatus(allocationStatus); + } + + /** + * Returns the status of an unsuccessful allocation attempt. This value will be {@code null} if + * no decision was taken or if the decision was {@link Decision.Type#YES}. If {@link #isDecisionTaken()} + * returns {@code false}, then invoking this method will throw an {@code IllegalStateException}. + */ + @Nullable + public AllocationStatus getAllocationStatus() { + checkDecisionState(); + return allocationStatus; + } + + /** + * Gets the allocation id for the existing shard copy that the allocator is assigning the shard to. + * This method returns a non-null value iff {@link #getTargetNode()} returns a non-null value + * and the node on which the shard is assigned already has a shard copy with an in-sync allocation id + * that we can re-use. If {@link #isDecisionTaken()} returns {@code false}, then invoking this method + * will throw an {@code IllegalStateException}. + */ + @Nullable + public String getAllocationId() { + checkDecisionState(); + return allocationId; + } + + /** + * Gets the remaining delay for allocating the replica shard when a node holding the replica left + * the cluster and the deciders are waiting to see if the node returns before allocating the replica + * elsewhere. Only returns a meaningful positive value if {@link #getAllocationStatus()} returns + * {@link AllocationStatus#DELAYED_ALLOCATION}. If {@link #isDecisionTaken()} returns {@code false}, + * then invoking this method will throw an {@code IllegalStateException}. + */ + public long getRemainingDelayInMillis() { + checkDecisionState(); + return remainingDelayInMillis; + } + + /** + * Gets the total configured delay for allocating the replica shard when a node holding the replica left + * the cluster and the deciders are waiting to see if the node returns before allocating the replica + * elsewhere. Only returns a meaningful positive value if {@link #getAllocationStatus()} returns + * {@link AllocationStatus#DELAYED_ALLOCATION}. If {@link #isDecisionTaken()} returns {@code false}, + * then invoking this method will throw an {@code IllegalStateException}. 
+ */ + public long getConfiguredDelayInMillis() { + checkDecisionState(); + return configuredDelayInMillis; + } + + @Override + public String getExplanation() { + checkDecisionState(); + AllocationDecision allocationDecision = getAllocationDecision(); + if (allocationDecision == AllocationDecision.YES) { + return "can allocate the shard"; + } else if (allocationDecision == AllocationDecision.THROTTLED) { + return "allocation temporarily throttled"; + } else if (allocationDecision == AllocationDecision.AWAITING_INFO) { + return "cannot allocate because information about existing shard data is still being retrieved from some of the nodes"; + } else if (allocationDecision == AllocationDecision.NO_VALID_SHARD_COPY) { + if (hasNodeWithStaleOrCorruptShard()) { + return "cannot allocate because all found copies of the shard are either stale or corrupt"; + } else { + return "cannot allocate because a previous copy of the primary shard existed but can no longer be found on " + + "the nodes in the cluster"; + } + } else if (allocationDecision == AllocationDecision.ALLOCATION_DELAYED) { + return "cannot allocate because the cluster is still waiting " + + TimeValue.timeValueMillis(remainingDelayInMillis) + + " for the departed node holding a replica to rejoin" + + (atLeastOneNodeWithYesDecision() ? + ", despite being allowed to allocate the shard to at least one other node" : ""); + } else { + assert allocationDecision == AllocationDecision.NO; + if (reuseStore) { + return "cannot allocate because allocation is not permitted to any of the nodes that hold an in-sync shard copy"; + } else { + return "cannot allocate because allocation is not permitted to any of the nodes"; + } + } + } + + private boolean hasNodeWithStaleOrCorruptShard() { + return getNodeDecisions() != null && getNodeDecisions().stream().anyMatch(result -> + result.getShardStoreInfo() != null + && (result.getShardStoreInfo().getAllocationId() != null + || result.getShardStoreInfo().getStoreException() != null)); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + checkDecisionState(); + builder.field("can_allocate", getAllocationDecision()); + builder.field("allocate_explanation", getExplanation()); + if (targetNode != null) { + builder.startObject("target_node"); + discoveryNodeToXContent(targetNode, true, builder); + builder.endObject(); + } + if (allocationId != null) { + builder.field("allocation_id", allocationId); + } + if (allocationStatus == AllocationStatus.DELAYED_ALLOCATION) { + builder.timeValueField("configured_delay_in_millis", "configured_delay", TimeValue.timeValueMillis(configuredDelayInMillis)); + builder.timeValueField("remaining_delay_in_millis", "remaining_delay", TimeValue.timeValueMillis(remainingDelayInMillis)); + } + nodeDecisionsToXContent(nodeDecisions, builder, params); + return builder; + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeOptionalWriteable(allocationStatus); + out.writeOptionalString(allocationId); + out.writeBoolean(reuseStore); + out.writeVLong(remainingDelayInMillis); + out.writeVLong(configuredDelayInMillis); + } + + @Override + public boolean equals(Object other) { + if (super.equals(other) == false) { + return false; + } + if (other instanceof AllocateUnassignedDecision == false) { + return false; + } + @SuppressWarnings("unchecked") AllocateUnassignedDecision that = (AllocateUnassignedDecision) other; + return Objects.equals(allocationStatus, 
that.allocationStatus) + && Objects.equals(allocationId, that.allocationId) + && reuseStore == that.reuseStore + && configuredDelayInMillis == that.configuredDelayInMillis + && remainingDelayInMillis == that.remainingDelayInMillis; + } + + @Override + public int hashCode() { + return 31 * super.hashCode() + Objects.hash(allocationStatus, allocationId, reuseStore, + configuredDelayInMillis, remainingDelayInMillis); + } + +} diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationDecision.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationDecision.java new file mode 100644 index 0000000000000..0fe9549635ecd --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationDecision.java @@ -0,0 +1,156 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.cluster.routing.allocation; + +import org.elasticsearch.cluster.routing.UnassignedInfo.AllocationStatus; +import org.elasticsearch.cluster.routing.allocation.decider.Decision; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; + +import java.io.IOException; +import java.util.Locale; + +/** + * An enum which represents the various decision types that can be taken by the + * allocators and deciders for allocating a shard to a node. + */ +public enum AllocationDecision implements Writeable { + /** + * The shard can be allocated to a node. + */ + YES((byte) 0), + /** + * The allocation attempt was throttled for the shard. + */ + THROTTLED((byte) 1), + /** + * The shard cannot be allocated, which can happen for any number of reasons, + * including the allocation deciders gave a NO decision for allocating. + */ + NO((byte) 2), + /** + * The shard could not be rebalanced to another node despite rebalancing + * being allowed, because moving the shard to the other node would not form + * a better cluster balance. + */ + WORSE_BALANCE((byte) 3), + /** + * Waiting on getting shard data from all nodes before making a decision + * about where to allocate the shard. + */ + AWAITING_INFO((byte) 4), + /** + * The allocation decision has been delayed waiting for a replica with a shard copy + * that left the cluster to rejoin. + */ + ALLOCATION_DELAYED((byte) 5), + /** + * The shard was denied allocation because there were no valid shard copies + * found for it amongst the nodes in the cluster. 
+ */ + NO_VALID_SHARD_COPY((byte) 6), + /** + * No attempt was made to allocate the shard + */ + NO_ATTEMPT((byte) 7); + + + private final byte id; + + AllocationDecision(byte id) { + this.id = id; + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeByte(id); + } + + public static AllocationDecision readFrom(StreamInput in) throws IOException { + byte id = in.readByte(); + switch (id) { + case 0: + return YES; + case 1: + return THROTTLED; + case 2: + return NO; + case 3: + return WORSE_BALANCE; + case 4: + return AWAITING_INFO; + case 5: + return ALLOCATION_DELAYED; + case 6: + return NO_VALID_SHARD_COPY; + case 7: + return NO_ATTEMPT; + default: + throw new IllegalArgumentException("Unknown value [" + id + "]"); + } + } + + /** + * Gets an {@link AllocationDecision} from a {@link AllocationStatus}. + */ + public static AllocationDecision fromAllocationStatus(AllocationStatus allocationStatus) { + if (allocationStatus == null) { + return YES; + } else { + switch (allocationStatus) { + case DECIDERS_THROTTLED: + return THROTTLED; + case FETCHING_SHARD_DATA: + return AWAITING_INFO; + case DELAYED_ALLOCATION: + return ALLOCATION_DELAYED; + case NO_VALID_SHARD_COPY: + return NO_VALID_SHARD_COPY; + case NO_ATTEMPT: + return NO_ATTEMPT; + default: + assert allocationStatus == AllocationStatus.DECIDERS_NO : "unhandled AllocationStatus type [" + allocationStatus + "]"; + return NO; + } + } + } + + /** + * Gets an {@link AllocationDecision} from a {@link Decision.Type} + */ + public static AllocationDecision fromDecisionType(Decision.Type type) { + switch (type) { + case YES: + return YES; + case THROTTLE: + return THROTTLED; + default: + assert type == Decision.Type.NO; + return NO; + } + } + + @Override + public String toString() { + return super.toString().toLowerCase(Locale.ROOT); + } + +} diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationExplanation.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationExplanation.java deleted file mode 100644 index e2b5de9b52dae..0000000000000 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationExplanation.java +++ /dev/null @@ -1,153 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.cluster.routing.allocation; - -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.index.shard.ShardId; - -import java.io.IOException; -import java.util.ArrayList; -import java.util.HashMap; -import java.util.List; -import java.util.Map; - -/** - * Instances of this class keeps explanations of decisions that have been made by allocation. - * An {@link AllocationExplanation} consists of a set of per node explanations. - * Since {@link NodeExplanation}s are related to shards an {@link AllocationExplanation} maps - * a shards id to a set of {@link NodeExplanation}s. - */ -public class AllocationExplanation implements Streamable { - - public static final AllocationExplanation EMPTY = new AllocationExplanation(); - - /** - * Instances of this class keep messages and informations about nodes of an allocation - */ - public static class NodeExplanation { - private final DiscoveryNode node; - - private final String description; - - /** - * Creates a new {@link NodeExplanation} - * - * @param node node referenced by this {@link NodeExplanation} - * @param description a message associated with the given node - */ - public NodeExplanation(DiscoveryNode node, String description) { - this.node = node; - this.description = description; - } - - /** - * The node referenced by the explanation - * @return referenced node - */ - public DiscoveryNode node() { - return node; - } - - /** - * Get the explanation for the node - * @return explanation for the node - */ - public String description() { - return description; - } - } - - private final Map> explanations = new HashMap<>(); - - /** - * Create and add a node explanation to this explanation referencing a shard - * @param shardId id the of the referenced shard - * @param nodeExplanation Explanation itself - * @return AllocationExplanation involving the explanation - */ - public AllocationExplanation add(ShardId shardId, NodeExplanation nodeExplanation) { - List list = explanations.get(shardId); - if (list == null) { - list = new ArrayList<>(); - explanations.put(shardId, list); - } - list.add(nodeExplanation); - return this; - } - - /** - * List of explanations involved by this AllocationExplanation - * @return Map of shard ids and corresponding explanations - */ - public Map> explanations() { - return this.explanations; - } - - /** - * Read an {@link AllocationExplanation} from an {@link StreamInput} - * @param in {@link StreamInput} to read from - * @return a new {@link AllocationExplanation} read from the stream - * @throws IOException if something bad happened while reading - */ - public static AllocationExplanation readAllocationExplanation(StreamInput in) throws IOException { - AllocationExplanation e = new AllocationExplanation(); - e.readFrom(in); - return e; - } - - @Override - public void readFrom(StreamInput in) throws IOException { - int size = in.readVInt(); - for (int i = 0; i < size; i++) { - ShardId shardId = ShardId.readShardId(in); - int size2 = in.readVInt(); - List ne = new ArrayList<>(size2); - for (int j = 0; j < size2; j++) { - DiscoveryNode node = null; - if (in.readBoolean()) { - node = new DiscoveryNode(in); - } - ne.add(new NodeExplanation(node, in.readString())); - } - explanations.put(shardId, ne); - } - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - 
out.writeVInt(explanations.size()); - for (Map.Entry> entry : explanations.entrySet()) { - entry.getKey().writeTo(out); - out.writeVInt(entry.getValue().size()); - for (NodeExplanation nodeExplanation : entry.getValue()) { - if (nodeExplanation.node() == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - nodeExplanation.node().writeTo(out); - } - out.writeString(nodeExplanation.description()); - } - } - } -} diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java index 214bedc324c66..3eaac2a2057d8 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java @@ -20,8 +20,8 @@ package org.elasticsearch.cluster.routing.allocation; import org.elasticsearch.cluster.ClusterInfoService; -import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.RestoreInProgress; import org.elasticsearch.cluster.health.ClusterHealthStatus; import org.elasticsearch.cluster.health.ClusterStateHealth; import org.elasticsearch.cluster.metadata.IndexMetaData; @@ -32,16 +32,18 @@ import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.routing.UnassignedInfo; import org.elasticsearch.cluster.routing.UnassignedInfo.AllocationStatus; -import org.elasticsearch.cluster.routing.allocation.RoutingAllocation.Result; import org.elasticsearch.cluster.routing.allocation.allocator.ShardsAllocator; import org.elasticsearch.cluster.routing.allocation.command.AllocationCommands; import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders; +import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.gateway.GatewayAllocator; +import java.util.ArrayList; import java.util.Collections; +import java.util.Comparator; import java.util.Iterator; import java.util.List; import java.util.function.Function; @@ -62,7 +64,6 @@ public class AllocationService extends AbstractComponent { private final GatewayAllocator gatewayAllocator; private final ShardsAllocator shardsAllocator; private final ClusterInfoService clusterInfoService; - private final ClusterName clusterName; @Inject public AllocationService(Settings settings, AllocationDeciders allocationDeciders, GatewayAllocator gatewayAllocator, @@ -72,63 +73,66 @@ public AllocationService(Settings settings, AllocationDeciders allocationDecider this.gatewayAllocator = gatewayAllocator; this.shardsAllocator = shardsAllocator; this.clusterInfoService = clusterInfoService; - clusterName = ClusterName.CLUSTER_NAME_SETTING.get(settings); } /** * Applies the started shards. Note, only initializing ShardRouting instances that exist in the routing table should be * provided as parameter and no duplicates should be contained. *
    - * If the same instance of the routing table is returned, then no change has been made.
    + * If the same instance of the {@link ClusterState} is returned, then no change has been made.
    */ - public Result applyStartedShards(ClusterState clusterState, List startedShards) { - return applyStartedShards(clusterState, startedShards, true); - } - - public Result applyStartedShards(ClusterState clusterState, List startedShards, boolean withReroute) { + public ClusterState applyStartedShards(ClusterState clusterState, List startedShards) { if (startedShards.isEmpty()) { - return Result.unchanged(clusterState); + return clusterState; } RoutingNodes routingNodes = getMutableRoutingNodes(clusterState); // shuffle the unassigned nodes, just so we won't have things like poison failed shards routingNodes.unassigned().shuffle(); - StartedRerouteAllocation allocation = new StartedRerouteAllocation(allocationDeciders, routingNodes, clusterState, startedShards, - clusterInfoService.getClusterInfo(), currentNanoTime()); + RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, clusterState, + clusterInfoService.getClusterInfo(), currentNanoTime(), false); + // as starting a primary relocation target can reinitialize replica shards, start replicas first + startedShards = new ArrayList<>(startedShards); + Collections.sort(startedShards, Comparator.comparing(ShardRouting::primary)); applyStartedShards(allocation, startedShards); - gatewayAllocator.applyStartedShards(allocation); - if (withReroute) { - reroute(allocation); - } + gatewayAllocator.applyStartedShards(allocation, startedShards); + reroute(allocation); String startedShardsAsString = firstListElementsToCommaDelimitedString(startedShards, s -> s.shardId().toString()); - return buildResultAndLogHealthChange(allocation, "shards started [" + startedShardsAsString + "] ..."); - } - - protected Result buildResultAndLogHealthChange(RoutingAllocation allocation, String reason) { - return buildResultAndLogHealthChange(allocation, reason, new RoutingExplanations()); + return buildResultAndLogHealthChange(clusterState, allocation, "shards started [" + startedShardsAsString + "] ..."); } - protected Result buildResultAndLogHealthChange(RoutingAllocation allocation, String reason, RoutingExplanations explanations) { - RoutingTable oldRoutingTable = allocation.routingTable(); + protected ClusterState buildResultAndLogHealthChange(ClusterState oldState, RoutingAllocation allocation, String reason) { + RoutingTable oldRoutingTable = oldState.routingTable(); RoutingNodes newRoutingNodes = allocation.routingNodes(); final RoutingTable newRoutingTable = new RoutingTable.Builder().updateNodes(oldRoutingTable.version(), newRoutingNodes).build(); MetaData newMetaData = allocation.updateMetaDataWithRoutingChanges(newRoutingTable); assert newRoutingTable.validate(newMetaData); // validates the routing table is coherent with the cluster state metadata + final ClusterState.Builder newStateBuilder = ClusterState.builder(oldState) + .routingTable(newRoutingTable) + .metaData(newMetaData); + final RestoreInProgress restoreInProgress = allocation.custom(RestoreInProgress.TYPE); + if (restoreInProgress != null) { + RestoreInProgress updatedRestoreInProgress = allocation.updateRestoreInfoWithRoutingChanges(restoreInProgress); + if (updatedRestoreInProgress != restoreInProgress) { + ImmutableOpenMap.Builder customsBuilder = ImmutableOpenMap.builder(allocation.getCustoms()); + customsBuilder.put(RestoreInProgress.TYPE, updatedRestoreInProgress); + newStateBuilder.customs(customsBuilder.build()); + } + } + final ClusterState newState = newStateBuilder.build(); logClusterHealthStateChange( - new 
ClusterStateHealth(ClusterState.builder(clusterName). - metaData(allocation.metaData()).routingTable(oldRoutingTable).build()), - new ClusterStateHealth(ClusterState.builder(clusterName). - metaData(newMetaData).routingTable(newRoutingTable).build()), + new ClusterStateHealth(oldState), + new ClusterStateHealth(newState), reason ); - return Result.changed(newRoutingTable, newMetaData, explanations); + return newState; } - public Result applyFailedShard(ClusterState clusterState, ShardRouting failedShard) { - return applyFailedShards(clusterState, Collections.singletonList(new FailedRerouteAllocation.FailedShard(failedShard, null, null)), + public ClusterState applyFailedShard(ClusterState clusterState, ShardRouting failedShard) { + return applyFailedShards(clusterState, Collections.singletonList(new FailedShard(failedShard, null, null)), Collections.emptyList()); } - public Result applyFailedShards(ClusterState clusterState, List failedShards) { + public ClusterState applyFailedShards(ClusterState clusterState, List failedShards) { return applyFailedShards(clusterState, failedShards, Collections.emptyList()); } @@ -138,24 +142,24 @@ public Result applyFailedShards(ClusterState clusterState, List - * If the same instance of the routing table is returned, then no change has been made.
    + * If the same instance of ClusterState is returned, then no change has been made.
    */ - public Result applyFailedShards(ClusterState clusterState, List failedShards, - List staleShards) { + public ClusterState applyFailedShards(final ClusterState clusterState, final List failedShards, + final List staleShards) { if (staleShards.isEmpty() && failedShards.isEmpty()) { - return Result.unchanged(clusterState); + return clusterState; } - clusterState = IndexMetaDataUpdater.removeStaleIdsWithoutRoutings(clusterState, staleShards); + ClusterState tmpState = IndexMetaDataUpdater.removeStaleIdsWithoutRoutings(clusterState, staleShards); - RoutingNodes routingNodes = getMutableRoutingNodes(clusterState); + RoutingNodes routingNodes = getMutableRoutingNodes(tmpState); // shuffle the unassigned nodes, just so we won't have things like poison failed shards routingNodes.unassigned().shuffle(); long currentNanoTime = currentNanoTime(); - FailedRerouteAllocation allocation = new FailedRerouteAllocation(allocationDeciders, routingNodes, clusterState, failedShards, - clusterInfoService.getClusterInfo(), currentNanoTime); + RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, tmpState, + clusterInfoService.getClusterInfo(), currentNanoTime, false); - for (FailedRerouteAllocation.FailedShard failedShardEntry : failedShards) { - ShardRouting shardToFail = failedShardEntry.routingEntry; + for (FailedShard failedShardEntry : failedShards) { + ShardRouting shardToFail = failedShardEntry.getRoutingEntry(); IndexMetaData indexMetaData = allocation.metaData().getIndexSafe(shardToFail.shardId().getIndex()); allocation.addIgnoreShardForNode(shardToFail.shardId(), shardToFail.currentNodeId()); // failing a primary also fails initializing replica shards, re-resolve ShardRouting @@ -166,26 +170,26 @@ public Result applyFailedShards(ClusterState clusterState, List s.routingEntry.shardId().toString()); - return buildResultAndLogHealthChange(allocation, "shards failed [" + failedShardsAsString + "] ..."); + String failedShardsAsString = firstListElementsToCommaDelimitedString(failedShards, s -> s.getRoutingEntry().shardId().toString()); + return buildResultAndLogHealthChange(clusterState, allocation, "shards failed [" + failedShardsAsString + "] ..."); } /** * unassigned an shards that are associated with nodes that are no longer part of the cluster, potentially promoting replicas * if needed. 
*/ - public RoutingAllocation.Result deassociateDeadNodes(ClusterState clusterState, boolean reroute, String reason) { + public ClusterState deassociateDeadNodes(final ClusterState clusterState, boolean reroute, String reason) { RoutingNodes routingNodes = getMutableRoutingNodes(clusterState); // shuffle the unassigned nodes, just so we won't have things like poison failed shards routingNodes.unassigned().shuffle(); @@ -200,9 +204,9 @@ public RoutingAllocation.Result deassociateDeadNodes(ClusterState clusterState, } if (allocation.routingNodesChanged() == false) { - return Result.unchanged(clusterState); + return clusterState; } - return buildResultAndLogHealthChange(allocation, reason); + return buildResultAndLogHealthChange(clusterState, allocation, reason); } /** @@ -244,7 +248,7 @@ private String firstListElementsToCommaDelimitedString(List elements, Fun .collect(Collectors.joining(", ")); } - public Result reroute(ClusterState clusterState, AllocationCommands commands, boolean explain, boolean retryFailed) { + public CommandsResult reroute(final ClusterState clusterState, AllocationCommands commands, boolean explain, boolean retryFailed) { RoutingNodes routingNodes = getMutableRoutingNodes(clusterState); // we don't shuffle the unassigned shards here, to try and get as close as possible to // a consistent result of the effect the commands have on the routing @@ -261,25 +265,25 @@ public Result reroute(ClusterState clusterState, AllocationCommands commands, bo // the assumption is that commands will move / act on shards (or fail through exceptions) // so, there will always be shard "movements", so no need to check on reroute reroute(allocation); - return buildResultAndLogHealthChange(allocation, "reroute commands", explanations); + return new CommandsResult(explanations, buildResultAndLogHealthChange(clusterState, allocation, "reroute commands")); } /** * Reroutes the routing table based on the live nodes. *
    - * If the same instance of the routing table is returned, then no change has been made. + * If the same instance of ClusterState is returned, then no change has been made. */ - public Result reroute(ClusterState clusterState, String reason) { + public ClusterState reroute(ClusterState clusterState, String reason) { return reroute(clusterState, reason, false); } /** * Reroutes the routing table based on the live nodes. *
    - * If the same instance of the routing table is returned, then no change has been made. + * If the same instance of ClusterState is returned, then no change has been made. */ - protected Result reroute(ClusterState clusterState, String reason, boolean debug) { + protected ClusterState reroute(final ClusterState clusterState, String reason, boolean debug) { RoutingNodes routingNodes = getMutableRoutingNodes(clusterState); // shuffle the unassigned nodes, just so we won't have things like poison failed shards routingNodes.unassigned().shuffle(); @@ -288,9 +292,9 @@ protected Result reroute(ClusterState clusterState, String reason, boolean debug allocation.debugDecision(debug); reroute(allocation); if (allocation.routingNodesChanged() == false) { - return Result.unchanged(clusterState); + return clusterState; } - return buildResultAndLogHealthChange(allocation, reason); + return buildResultAndLogHealthChange(clusterState, allocation, reason); } private void logClusterHealthStateChange(ClusterStateHealth previousStateHealth, ClusterStateHealth newStateHealth, String reason) { @@ -368,4 +372,39 @@ private RoutingNodes getMutableRoutingNodes(ClusterState clusterState) { protected long currentNanoTime() { return System.nanoTime(); } + + /** + * this class is used to describe results of applying a set of + * {@link org.elasticsearch.cluster.routing.allocation.command.AllocationCommand} + */ + public static class CommandsResult { + + private final RoutingExplanations explanations; + + private final ClusterState clusterState; + + /** + * Creates a new {@link CommandsResult} + * @param explanations Explanation for the reroute actions + * @param clusterState Resulting cluster state + */ + private CommandsResult(RoutingExplanations explanations, ClusterState clusterState) { + this.clusterState = clusterState; + this.explanations = explanations; + } + + /** + * Get the explanation of this result + */ + public RoutingExplanations explanations() { + return explanations; + } + + /** + * thre resulting cluster state, after the commands were applied + */ + public ClusterState getClusterState() { + return clusterState; + } + } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/DiskThresholdMonitor.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/DiskThresholdMonitor.java index 103aa87dcd38f..390acda0fa519 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/DiskThresholdMonitor.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/DiskThresholdMonitor.java @@ -62,10 +62,10 @@ public DiskThresholdMonitor(Settings settings, ClusterSettings clusterSettings, */ private void warnAboutDiskIfNeeded(DiskUsage usage) { // Check absolute disk values - if (usage.getFreeBytes() < diskThresholdSettings.getFreeBytesThresholdHigh().bytes()) { + if (usage.getFreeBytes() < diskThresholdSettings.getFreeBytesThresholdHigh().getBytes()) { logger.warn("high disk watermark [{}] exceeded on {}, shards will be relocated away from this node", diskThresholdSettings.getFreeBytesThresholdHigh(), usage); - } else if (usage.getFreeBytes() < diskThresholdSettings.getFreeBytesThresholdLow().bytes()) { + } else if (usage.getFreeBytes() < diskThresholdSettings.getFreeBytesThresholdLow().getBytes()) { logger.info("low disk watermark [{}] exceeded on {}, replicas will not be assigned to this node", diskThresholdSettings.getFreeBytesThresholdLow(), usage); } @@ -100,7 +100,7 @@ public void onNewInfo(ClusterInfo info) { String node = entry.key; 
DiskUsage usage = entry.value; warnAboutDiskIfNeeded(usage); - if (usage.getFreeBytes() < diskThresholdSettings.getFreeBytesThresholdHigh().bytes() || + if (usage.getFreeBytes() < diskThresholdSettings.getFreeBytesThresholdHigh().getBytes() || usage.getFreeDiskAsPercentage() < diskThresholdSettings.getFreeDiskThresholdHigh()) { if ((System.nanoTime() - lastRunNS) > diskThresholdSettings.getRerouteInterval().nanos()) { lastRunNS = System.nanoTime(); @@ -112,7 +112,7 @@ public void onNewInfo(ClusterInfo info) { node, diskThresholdSettings.getRerouteInterval()); } nodeHasPassedWatermark.add(node); - } else if (usage.getFreeBytes() < diskThresholdSettings.getFreeBytesThresholdLow().bytes() || + } else if (usage.getFreeBytes() < diskThresholdSettings.getFreeBytesThresholdLow().getBytes() || usage.getFreeDiskAsPercentage() < diskThresholdSettings.getFreeDiskThresholdLow()) { nodeHasPassedWatermark.add(node); } else { diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/DiskThresholdSettings.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/DiskThresholdSettings.java index 81b9042fb3363..b87add57ce75a 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/DiskThresholdSettings.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/DiskThresholdSettings.java @@ -49,6 +49,8 @@ public class DiskThresholdSettings { Setting.positiveTimeSetting("cluster.routing.allocation.disk.reroute_interval", TimeValue.timeValueSeconds(60), Setting.Property.Dynamic, Setting.Property.NodeScope); + private volatile String lowWatermarkRaw; + private volatile String highWatermarkRaw; private volatile Double freeDiskThresholdLow; private volatile Double freeDiskThresholdHigh; private volatile ByteSizeValue freeBytesThresholdLow; @@ -86,6 +88,7 @@ private void setEnabled(boolean enabled) { private void setLowWatermark(String lowWatermark) { // Watermark is expressed in terms of used data, but we need "free" data watermark + this.lowWatermarkRaw = lowWatermark; this.freeDiskThresholdLow = 100.0 - thresholdPercentageFromWatermark(lowWatermark); this.freeBytesThresholdLow = thresholdBytesFromWatermark(lowWatermark, CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK_SETTING.getKey()); @@ -93,11 +96,26 @@ private void setLowWatermark(String lowWatermark) { private void setHighWatermark(String highWatermark) { // Watermark is expressed in terms of used data, but we need "free" data watermark + this.highWatermarkRaw = highWatermark; this.freeDiskThresholdHigh = 100.0 - thresholdPercentageFromWatermark(highWatermark); this.freeBytesThresholdHigh = thresholdBytesFromWatermark(highWatermark, CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK_SETTING.getKey()); } + /** + * Gets the raw (uninterpreted) low watermark value as found in the settings. + */ + public String getLowWatermarkRaw() { + return lowWatermarkRaw; + } + + /** + * Gets the raw (uninterpreted) high watermark value as found in the settings. 
+ */ + public String getHighWatermarkRaw() { + return highWatermarkRaw; + } + public Double getFreeDiskThresholdLow() { return freeDiskThresholdLow; } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/FailedRerouteAllocation.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/FailedRerouteAllocation.java deleted file mode 100644 index 8a31998a7dd99..0000000000000 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/FailedRerouteAllocation.java +++ /dev/null @@ -1,87 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.cluster.routing.allocation; - -import org.elasticsearch.ExceptionsHelper; -import org.elasticsearch.cluster.ClusterInfo; -import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.routing.RoutingNodes; -import org.elasticsearch.cluster.routing.ShardRouting; -import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders; -import org.elasticsearch.index.shard.ShardId; - -import java.util.List; - -/** - * This {@link RoutingAllocation} keeps a shard which routing - * allocation has failed. - */ -public class FailedRerouteAllocation extends RoutingAllocation { - - /** - * A failed shard with the shard routing itself and an optional - * details on why it failed. - */ - public static class FailedShard { - public final ShardRouting routingEntry; - public final String message; - public final Exception failure; - - public FailedShard(ShardRouting routingEntry, String message, Exception failure) { - assert routingEntry.assignedToNode() : "only assigned shards can be failed " + routingEntry; - this.routingEntry = routingEntry; - this.message = message; - this.failure = failure; - } - - @Override - public String toString() { - return "failed shard, shard " + routingEntry + ", message [" + message + "], failure [" + - ExceptionsHelper.detailedMessage(failure) + "]"; - } - } - - public static class StaleShard { - public final ShardId shardId; - public final String allocationId; - - public StaleShard(ShardId shardId, String allocationId) { - this.shardId = shardId; - this.allocationId = allocationId; - } - - @Override - public String toString() { - return "stale shard, shard " + shardId + ", alloc. 
id [" + allocationId + "]"; - } - } - - private final List failedShards; - - public FailedRerouteAllocation(AllocationDeciders deciders, RoutingNodes routingNodes, ClusterState clusterState, - List failedShards, ClusterInfo clusterInfo, long currentNanoTime) { - super(deciders, routingNodes, clusterState, clusterInfo, currentNanoTime, false); - this.failedShards = failedShards; - } - - public List failedShards() { - return failedShards; - } -} diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/FailedShard.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/FailedShard.java new file mode 100644 index 0000000000000..9bf9fa86d1814 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/FailedShard.java @@ -0,0 +1,69 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.cluster.routing.allocation; + +import org.elasticsearch.ExceptionsHelper; +import org.elasticsearch.cluster.routing.ShardRouting; +import org.elasticsearch.common.Nullable; + +/** + * A class representing a failed shard. + */ +public class FailedShard { + private final ShardRouting routingEntry; + private final String message; + private final Exception failure; + + public FailedShard(ShardRouting routingEntry, String message, Exception failure) { + assert routingEntry.assignedToNode() : "only assigned shards can be failed " + routingEntry; + this.routingEntry = routingEntry; + this.message = message; + this.failure = failure; + } + + @Override + public String toString() { + return "failed shard, shard " + routingEntry + ", message [" + message + "], failure [" + + ExceptionsHelper.detailedMessage(failure) + "]"; + } + + /** + * The shard routing entry for the failed shard. + */ + public ShardRouting getRoutingEntry() { + return routingEntry; + } + + /** + * The failure message, if available, explaining why the shard failed. + */ + @Nullable + public String getMessage() { + return message; + } + + /** + * The exception, if present, causing the shard to fail. 
+ */ + @Nullable + public Exception getFailure() { + return failure; + } +} diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/IndexMetaDataUpdater.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/IndexMetaDataUpdater.java index edeeeea224a4f..b24a961829d15 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/IndexMetaDataUpdater.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/IndexMetaDataUpdater.java @@ -22,12 +22,12 @@ import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.MetaData; +import org.elasticsearch.cluster.routing.IndexShardRoutingTable; import org.elasticsearch.cluster.routing.RecoverySource; import org.elasticsearch.cluster.routing.RoutingChangesObserver; import org.elasticsearch.cluster.routing.RoutingTable; import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.routing.UnassignedInfo; -import org.elasticsearch.cluster.routing.allocation.FailedRerouteAllocation.StaleShard; import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.index.Index; import org.elasticsearch.index.shard.ShardId; @@ -94,11 +94,6 @@ public void relocationCompleted(ShardRouting removedRelocationSource) { removeAllocationId(removedRelocationSource); } - @Override - public void startedPrimaryReinitialized(ShardRouting startedPrimaryShard, ShardRouting initializedShard) { - removeAllocationId(startedPrimaryShard); - } - /** * Updates the current {@link MetaData} based on the changes of this RoutingChangesObserver. Specifically * we update {@link IndexMetaData#getInSyncAllocationIds()} and {@link IndexMetaData#primaryTerm(int)} based on @@ -180,10 +175,13 @@ private IndexMetaData.Builder updateInSyncAllocations(RoutingTable newRoutingTab // Prevent set of inSyncAllocationIds to grow unboundedly. This can happen for example if we don't write to a primary // but repeatedly shut down nodes that have active replicas. // We use number_of_replicas + 1 (= possible active shard copies) to bound the inSyncAllocationIds set + // Only trim the set of allocation ids when it grows, otherwise we might trim too eagerly when the number + // of replicas was decreased while shards were unassigned. int maxActiveShards = oldIndexMetaData.getNumberOfReplicas() + 1; // +1 for the primary - if (inSyncAllocationIds.size() > maxActiveShards) { + IndexShardRoutingTable newShardRoutingTable = newRoutingTable.shardRoutingTable(shardId); + if (inSyncAllocationIds.size() > oldInSyncAllocationIds.size() && inSyncAllocationIds.size() > maxActiveShards) { // trim entries that have no corresponding shard routing in the cluster state (i.e. trim unavailable copies) - List assignedShards = newRoutingTable.shardRoutingTable(shardId).assignedShards(); + List assignedShards = newShardRoutingTable.assignedShards(); assert assignedShards.size() <= maxActiveShards : "cannot have more assigned shards " + assignedShards + " than maximum possible active shards " + maxActiveShards; Set assignedAllocations = assignedShards.stream().map(s -> s.allocationId().getId()).collect(Collectors.toSet()); @@ -193,16 +191,12 @@ private IndexMetaData.Builder updateInSyncAllocations(RoutingTable newRoutingTab .collect(Collectors.toSet()); } - // only update in-sync allocation ids if there is at least one entry remaining. Assume for example that there only - // ever was a primary active and now it failed. 
If we were to remove the allocation id from the in-sync set, this would - // create an empty primary on the next allocation (see ShardRouting#allocatedPostIndexCreate) - if (inSyncAllocationIds.isEmpty() && oldInSyncAllocationIds.isEmpty() == false) { - assert updates.firstFailedPrimary != null : - "in-sync set became empty but active primary wasn't failed: " + oldInSyncAllocationIds; - if (updates.firstFailedPrimary != null) { - // add back allocation id of failed primary - inSyncAllocationIds.add(updates.firstFailedPrimary.allocationId().getId()); - } + // only remove allocation id of failed active primary if there is at least one active shard remaining. Assume for example that + // the primary fails but there is no new primary to fail over to. If we were to remove the allocation id of the primary from the + // in-sync set, this could create an empty primary on the next allocation. + if (newShardRoutingTable.activeShards().isEmpty() && updates.firstFailedPrimary != null) { + // add back allocation id of failed primary + inSyncAllocationIds.add(updates.firstFailedPrimary.allocationId().getId()); } assert inSyncAllocationIds.isEmpty() == false || oldInSyncAllocationIds.isEmpty() : @@ -229,17 +223,17 @@ public static ClusterState removeStaleIdsWithoutRoutings(ClusterState clusterSta MetaData.Builder metaDataBuilder = null; // group staleShards entries by index for (Map.Entry> indexEntry : staleShards.stream().collect( - Collectors.groupingBy(fs -> fs.shardId.getIndex())).entrySet()) { + Collectors.groupingBy(fs -> fs.getShardId().getIndex())).entrySet()) { final IndexMetaData oldIndexMetaData = oldMetaData.getIndexSafe(indexEntry.getKey()); IndexMetaData.Builder indexMetaDataBuilder = null; // group staleShards entries by shard id for (Map.Entry> shardEntry : indexEntry.getValue().stream().collect( - Collectors.groupingBy(staleShard -> staleShard.shardId)).entrySet()) { + Collectors.groupingBy(staleShard -> staleShard.getShardId())).entrySet()) { int shardNumber = shardEntry.getKey().getId(); Set oldInSyncAllocations = oldIndexMetaData.inSyncAllocationIds(shardNumber); - Set idsToRemove = shardEntry.getValue().stream().map(e -> e.allocationId).collect(Collectors.toSet()); + Set idsToRemove = shardEntry.getValue().stream().map(e -> e.getAllocationId()).collect(Collectors.toSet()); assert idsToRemove.stream().allMatch(id -> oldRoutingTable.getByAllocationId(shardEntry.getKey(), id) == null) : - "removing stale ids: " + idsToRemove + ", some of which have still a routing entry: " + oldRoutingTable.prettyPrint(); + "removing stale ids: " + idsToRemove + ", some of which have still a routing entry: " + oldRoutingTable; Set remainingInSyncAllocations = Sets.difference(oldInSyncAllocations, idsToRemove); assert remainingInSyncAllocations.isEmpty() == false : "Set of in-sync ids cannot become empty for shard " + shardEntry.getKey() + " (before: " + oldInSyncAllocations + ", ids to remove: " + idsToRemove + ")"; diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/MoveDecision.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/MoveDecision.java new file mode 100644 index 0000000000000..de9795ff4c253 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/MoveDecision.java @@ -0,0 +1,315 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.cluster.routing.allocation; + +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.routing.allocation.decider.Decision; +import org.elasticsearch.cluster.routing.allocation.decider.Decision.Type; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.List; +import java.util.Objects; + +/** + * Represents a decision to move a started shard, either because it is no longer allowed to remain on its current node + * or because moving it to another node will form a better cluster balance. + */ +public final class MoveDecision extends AbstractAllocationDecision { + /** a constant representing no decision taken */ + public static final MoveDecision NOT_TAKEN = new MoveDecision(null, null, AllocationDecision.NO_ATTEMPT, null, null, 0); + /** cached decisions so we don't have to recreate objects for common decisions when not in explain mode. */ + private static final MoveDecision CACHED_STAY_DECISION = + new MoveDecision(Decision.YES, null, AllocationDecision.NO_ATTEMPT, null, null, 0); + private static final MoveDecision CACHED_CANNOT_MOVE_DECISION = + new MoveDecision(Decision.NO, null, AllocationDecision.NO, null, null, 0); + + @Nullable + AllocationDecision allocationDecision; + @Nullable + private final Decision canRemainDecision; + @Nullable + private final Decision clusterRebalanceDecision; + private final int currentNodeRanking; + + private MoveDecision(Decision canRemainDecision, Decision clusterRebalanceDecision, AllocationDecision allocationDecision, + DiscoveryNode assignedNode, List nodeDecisions, int currentNodeRanking) { + super(assignedNode, nodeDecisions); + this.allocationDecision = allocationDecision; + this.canRemainDecision = canRemainDecision; + this.clusterRebalanceDecision = clusterRebalanceDecision; + this.currentNodeRanking = currentNodeRanking; + } + + public MoveDecision(StreamInput in) throws IOException { + super(in); + allocationDecision = in.readOptionalWriteable(AllocationDecision::readFrom); + canRemainDecision = in.readOptionalWriteable(Decision::readFrom); + clusterRebalanceDecision = in.readOptionalWriteable(Decision::readFrom); + currentNodeRanking = in.readVInt(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeOptionalWriteable(allocationDecision); + out.writeOptionalWriteable(canRemainDecision); + out.writeOptionalWriteable(clusterRebalanceDecision); + out.writeVInt(currentNodeRanking); + } + + /** + * Creates a move decision for the shard being able to remain on its current node, so the shard won't + * be forced to move to another node. 
+ */ + public static MoveDecision stay(Decision canRemainDecision) { + if (canRemainDecision != null) { + assert canRemainDecision.type() != Type.NO; + return new MoveDecision(canRemainDecision, null, AllocationDecision.NO_ATTEMPT, null, null, 0); + } else { + return CACHED_STAY_DECISION; + } + } + + /** + * Creates a move decision for the shard not being allowed to remain on its current node. + * + * @param canRemainDecision the decision for whether the shard is allowed to remain on its current node + * @param allocationDecision the {@link AllocationDecision} for moving the shard to another node + * @param assignedNode the node where the shard should move to + * @param nodeDecisions the node-level decisions that comprised the final decision, non-null iff explain is true + * @return the {@link MoveDecision} for moving the shard to another node + */ + public static MoveDecision cannotRemain(Decision canRemainDecision, AllocationDecision allocationDecision, DiscoveryNode assignedNode, + List nodeDecisions) { + assert canRemainDecision != null; + assert canRemainDecision.type() != Type.YES : "create decision with MoveDecision#stay instead"; + if (nodeDecisions == null && allocationDecision == AllocationDecision.NO) { + // the final decision is NO (no node to move the shard to) and we are not in explain mode, return a cached version + return CACHED_CANNOT_MOVE_DECISION; + } else { + assert ((assignedNode == null) == (allocationDecision != AllocationDecision.YES)); + return new MoveDecision(canRemainDecision, null, allocationDecision, assignedNode, nodeDecisions, 0); + } + } + + /** + * Creates a move decision for when rebalancing the shard is not allowed. + */ + public static MoveDecision cannotRebalance(Decision canRebalanceDecision, AllocationDecision allocationDecision, int currentNodeRanking, + List nodeDecisions) { + return new MoveDecision(null, canRebalanceDecision, allocationDecision, null, nodeDecisions, currentNodeRanking); + } + + /** + * Creates a decision for whether to move the shard to a different node to form a better cluster balance. + */ + public static MoveDecision rebalance(Decision canRebalanceDecision, AllocationDecision allocationDecision, + @Nullable DiscoveryNode assignedNode, int currentNodeRanking, + List nodeDecisions) { + return new MoveDecision(null, canRebalanceDecision, allocationDecision, assignedNode, nodeDecisions, currentNodeRanking); + } + + @Override + public boolean isDecisionTaken() { + return canRemainDecision != null || clusterRebalanceDecision != null; + } + + /** + * Creates a new move decision from this decision, plus adding a remain decision. + */ + public MoveDecision withRemainDecision(Decision canRemainDecision) { + return new MoveDecision(canRemainDecision, clusterRebalanceDecision, allocationDecision, + targetNode, nodeDecisions, currentNodeRanking); + } + + /** + * Returns {@code true} if the shard cannot remain on its current node and can be moved, + * returns {@code false} otherwise. If {@link #isDecisionTaken()} returns {@code false}, + * then invoking this method will throw an {@code IllegalStateException}. + */ + public boolean forceMove() { + checkDecisionState(); + return canRemain() == false && allocationDecision == AllocationDecision.YES; + } + + /** + * Returns {@code true} if the shard can remain on its current node, returns {@code false} otherwise. + * If {@link #isDecisionTaken()} returns {@code false}, then invoking this method will throw an {@code IllegalStateException}. 
+ */ + public boolean canRemain() { + checkDecisionState(); + return canRemainDecision.type() == Type.YES; + } + + /** + * Returns the decision for the shard being allowed to remain on its current node. If {@link #isDecisionTaken()} + * returns {@code false}, then invoking this method will throw an {@code IllegalStateException}. + */ + public Decision getCanRemainDecision() { + checkDecisionState(); + return canRemainDecision; + } + + /** + * Returns {@code true} if the shard is allowed to be rebalanced to another node in the cluster, + * returns {@code false} otherwise. If {@link #getClusterRebalanceDecision()} returns {@code null}, then + * the result of this method is meaningless, as no rebalance decision was taken. If {@link #isDecisionTaken()} + * returns {@code false}, then invoking this method will throw an {@code IllegalStateException}. + */ + public boolean canRebalanceCluster() { + checkDecisionState(); + return clusterRebalanceDecision != null && clusterRebalanceDecision.type() == Type.YES; + } + + /** + * Returns the decision for being allowed to rebalance the shard. Invoking this method will return + * {@code null} if {@link #canRemain()} ()} returns {@code false}, which means the node is not allowed to + * remain on its current node, so the cluster is forced to attempt to move the shard to a different node, + * as opposed to attempting to rebalance the shard if a better cluster balance is possible by moving it. + * If {@link #isDecisionTaken()} returns {@code false}, then invoking this method will throw an + * {@code IllegalStateException}. + */ + @Nullable + public Decision getClusterRebalanceDecision() { + checkDecisionState(); + return clusterRebalanceDecision; + } + + /** + * Returns the {@link AllocationDecision} for moving this shard to another node. If {@link #isDecisionTaken()} returns + * {@code false}, then invoking this method will throw an {@code IllegalStateException}. + */ + @Nullable + public AllocationDecision getAllocationDecision() { + return allocationDecision; + } + + /** + * Gets the current ranking of the node to which the shard is currently assigned, relative to the + * other nodes in the cluster as reported in {@link NodeAllocationResult#getWeightRanking()}. The + * ranking will only return a meaningful positive integer if {@link #getClusterRebalanceDecision()} returns + * a non-null value; otherwise, 0 will be returned. If {@link #isDecisionTaken()} returns + * {@code false}, then invoking this method will throw an {@code IllegalStateException}. + */ + public int getCurrentNodeRanking() { + checkDecisionState(); + return currentNodeRanking; + } + + @Override + public String getExplanation() { + checkDecisionState(); + String explanation; + if (clusterRebalanceDecision != null) { + // it was a decision to rebalance the shard, because the shard was allowed to remain on its current node + if (allocationDecision == AllocationDecision.AWAITING_INFO) { + explanation = "cannot rebalance as information about existing copies of this shard in the cluster is still being gathered"; + } else if (clusterRebalanceDecision.type() == Type.NO) { + explanation = "rebalancing is not allowed" + (atLeastOneNodeWithYesDecision() ? 
", even though there " + + "is at least one node on which the shard can be allocated" : ""); + } else if (clusterRebalanceDecision.type() == Type.THROTTLE) { + explanation = "rebalancing is throttled"; + } else { + assert clusterRebalanceDecision.type() == Type.YES; + if (getTargetNode() != null) { + if (allocationDecision == AllocationDecision.THROTTLED) { + explanation = "shard rebalancing throttled"; + } else { + explanation = "can rebalance shard"; + } + } else { + explanation = "cannot rebalance as no target node exists that can both allocate this shard " + + "and improve the cluster balance"; + } + } + } else { + // it was a decision to force move the shard + assert canRemain() == false; + if (allocationDecision == AllocationDecision.YES) { + explanation = "shard cannot remain on this node and is force-moved to another node"; + } else if (allocationDecision == AllocationDecision.THROTTLED) { + explanation = "shard cannot remain on this node but is throttled on moving to another node"; + } else { + assert allocationDecision == AllocationDecision.NO; + explanation = "cannot move shard to another node, even though it is not allowed to remain on its current node"; + } + } + return explanation; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + checkDecisionState(); + if (targetNode != null) { + builder.startObject("target_node"); + discoveryNodeToXContent(targetNode, true, builder); + builder.endObject(); + } + builder.field("can_remain_on_current_node", canRemain() ? "yes" : "no"); + if (canRemain() == false && canRemainDecision.getDecisions().isEmpty() == false) { + builder.startArray("can_remain_decisions"); + canRemainDecision.toXContent(builder, params); + builder.endArray(); + } + if (clusterRebalanceDecision != null) { + AllocationDecision rebalanceDecision = AllocationDecision.fromDecisionType(clusterRebalanceDecision.type()); + builder.field("can_rebalance_cluster", rebalanceDecision); + if (rebalanceDecision != AllocationDecision.YES && clusterRebalanceDecision.getDecisions().isEmpty() == false) { + builder.startArray("can_rebalance_cluster_decisions"); + clusterRebalanceDecision.toXContent(builder, params); + builder.endArray(); + } + } + if (clusterRebalanceDecision != null) { + builder.field("can_rebalance_to_other_node", allocationDecision); + builder.field("rebalance_explanation", getExplanation()); + } else { + builder.field("can_move_to_other_node", forceMove() ? 
"yes" : "no"); + builder.field("move_explanation", getExplanation()); + } + nodeDecisionsToXContent(nodeDecisions, builder, params); + return builder; + } + + @Override + public boolean equals(Object other) { + if (super.equals(other) == false) { + return false; + } + if (other instanceof MoveDecision == false) { + return false; + } + @SuppressWarnings("unchecked") MoveDecision that = (MoveDecision) other; + return Objects.equals(allocationDecision, that.allocationDecision) + && Objects.equals(canRemainDecision, that.canRemainDecision) + && Objects.equals(clusterRebalanceDecision, that.clusterRebalanceDecision) + && currentNodeRanking == that.currentNodeRanking; + } + + @Override + public int hashCode() { + return 31 * super.hashCode() + Objects.hash(allocationDecision, canRemainDecision, clusterRebalanceDecision, currentNodeRanking); + } + +} diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/NodeAllocationResult.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/NodeAllocationResult.java new file mode 100644 index 0000000000000..0d3fe2df920f6 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/NodeAllocationResult.java @@ -0,0 +1,305 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.cluster.routing.allocation; + +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.Version; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.routing.allocation.decider.Decision; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.Comparator; + +import static org.elasticsearch.cluster.routing.allocation.AbstractAllocationDecision.discoveryNodeToXContent; + +/** + * This class represents the shard allocation decision and its explanation for a single node. 
+ */ +public class NodeAllocationResult implements ToXContent, Writeable, Comparable { + + private static final Comparator nodeResultComparator = + Comparator.comparing(NodeAllocationResult::getNodeDecision) + .thenComparingInt(NodeAllocationResult::getWeightRanking) + .thenComparing(r -> r.getNode().getId()); + + private final DiscoveryNode node; + @Nullable + private final ShardStoreInfo shardStoreInfo; + private final AllocationDecision nodeDecision; + @Nullable + private final Decision canAllocateDecision; + private final int weightRanking; + + public NodeAllocationResult(DiscoveryNode node, ShardStoreInfo shardStoreInfo, @Nullable Decision decision) { + this.node = node; + this.shardStoreInfo = shardStoreInfo; + this.canAllocateDecision = decision; + this.nodeDecision = decision != null ? AllocationDecision.fromDecisionType(canAllocateDecision.type()) : AllocationDecision.NO; + this.weightRanking = 0; + } + + public NodeAllocationResult(DiscoveryNode node, AllocationDecision nodeDecision, Decision canAllocate, int weightRanking) { + this.node = node; + this.shardStoreInfo = null; + this.canAllocateDecision = canAllocate; + this.nodeDecision = nodeDecision; + this.weightRanking = weightRanking; + } + + public NodeAllocationResult(DiscoveryNode node, Decision decision, int weightRanking) { + this.node = node; + this.shardStoreInfo = null; + this.canAllocateDecision = decision; + this.nodeDecision = AllocationDecision.fromDecisionType(decision.type()); + this.weightRanking = weightRanking; + } + + public NodeAllocationResult(StreamInput in) throws IOException { + node = new DiscoveryNode(in); + shardStoreInfo = in.readOptionalWriteable(ShardStoreInfo::new); + if (in.getVersion().before(Version.V_5_2_1)) { + canAllocateDecision = Decision.readFrom(in); + } else { + canAllocateDecision = in.readOptionalWriteable(Decision::readFrom); + } + nodeDecision = AllocationDecision.readFrom(in); + weightRanking = in.readVInt(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + node.writeTo(out); + out.writeOptionalWriteable(shardStoreInfo); + if (out.getVersion().before(Version.V_5_2_1)) { + if (canAllocateDecision == null) { + Decision.NO.writeTo(out); + } else { + canAllocateDecision.writeTo(out); + } + } else { + out.writeOptionalWriteable(canAllocateDecision); + } + nodeDecision.writeTo(out); + out.writeVInt(weightRanking); + } + + /** + * Get the node that this decision is for. + */ + public DiscoveryNode getNode() { + return node; + } + + /** + * Get the shard store information for the node, if it exists. + */ + @Nullable + public ShardStoreInfo getShardStoreInfo() { + return shardStoreInfo; + } + + /** + * The decision details for allocating to this node. Returns {@code null} if + * no allocation decision was taken on the node; in this case, {@link #getNodeDecision()} + * will return {@link AllocationDecision#NO}. + */ + @Nullable + public Decision getCanAllocateDecision() { + return canAllocateDecision; + } + + /** + * Is the weight assigned for the node? + */ + public boolean isWeightRanked() { + return weightRanking > 0; + } + + /** + * The weight ranking for allocating a shard to the node. Each node will have + * a unique weight ranking that is relative to the other nodes against which the + * deciders ran. For example, suppose there are 3 nodes which the allocation deciders + * decided upon: node1, node2, and node3. 
If node2 had the best weight for holding the + * shard, followed by node3, followed by node1, then node2's weight will be 1, node3's + * weight will be 2, and node1's weight will be 3. A value of 0 means the weight was + * not calculated or factored into the decision. + */ + public int getWeightRanking() { + return weightRanking; + } + + /** + * Gets the {@link AllocationDecision} for allocating to this node. + */ + public AllocationDecision getNodeDecision() { + return nodeDecision; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + { + discoveryNodeToXContent(node, false, builder); + builder.field("node_decision", nodeDecision); + if (shardStoreInfo != null) { + shardStoreInfo.toXContent(builder, params); + } + if (isWeightRanked()) { + builder.field("weight_ranking", getWeightRanking()); + } + if (canAllocateDecision != null && canAllocateDecision.getDecisions().isEmpty() == false) { + builder.startArray("deciders"); + canAllocateDecision.toXContent(builder, params); + builder.endArray(); + } + } + builder.endObject(); + return builder; + } + + @Override + public int compareTo(NodeAllocationResult other) { + return nodeResultComparator.compare(this, other); + } + + /** A class that captures metadata about a shard store on a node. */ + public static final class ShardStoreInfo implements ToXContent, Writeable { + private final boolean inSync; + @Nullable + private final String allocationId; + private final long matchingBytes; + @Nullable + private final Exception storeException; + + public ShardStoreInfo(String allocationId, boolean inSync, Exception storeException) { + this.inSync = inSync; + this.allocationId = allocationId; + this.matchingBytes = -1; + this.storeException = storeException; + } + + public ShardStoreInfo(long matchingBytes) { + this.inSync = false; + this.allocationId = null; + this.matchingBytes = matchingBytes; + this.storeException = null; + } + + public ShardStoreInfo(StreamInput in) throws IOException { + this.inSync = in.readBoolean(); + this.allocationId = in.readOptionalString(); + this.matchingBytes = in.readLong(); + this.storeException = in.readException(); + } + + /** + * Returns {@code true} if the shard copy is in-sync and contains the latest data. + * Returns {@code false} if the shard copy is stale or if the shard copy being examined + * is for a replica shard allocation. + */ + public boolean isInSync() { + return inSync; + } + + /** + * Gets the allocation id for the shard copy, if it exists. + */ + @Nullable + public String getAllocationId() { + return allocationId; + } + + /** + * Returns {@code true} if the shard copy has a matching sync id with the primary shard. + * Returns {@code false} if the shard copy does not have a matching sync id with the primary + * shard, or this explanation pertains to the allocation of a primary shard, in which case + * matching sync ids are irrelevant. + */ + public boolean hasMatchingSyncId() { + return matchingBytes == Long.MAX_VALUE; + } + + /** + * Gets the number of matching bytes the shard copy has with the primary shard. + * Returns {@code Long.MAX_VALUE} if {@link #hasMatchingSyncId()} returns {@code true}. + * Returns -1 if not applicable (this value only applies to assigning replica shards). + */ + public long getMatchingBytes() { + return matchingBytes; + } + + /** + * Gets the store exception when trying to read the store, if there was an error. If + * there was no error, returns {@code null}.
+ */ + @Nullable + public Exception getStoreException() { + return storeException; + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeBoolean(inSync); + out.writeOptionalString(allocationId); + out.writeLong(matchingBytes); + out.writeException(storeException); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject("store"); + { + if (matchingBytes < 0) { + // dealing with a primary shard + if (allocationId == null && storeException == null) { + // there was no information we could obtain of any shard data on the node + builder.field("found", false); + } else { + builder.field("in_sync", inSync); + } + } + if (allocationId != null) { + builder.field("allocation_id", allocationId); + } + if (matchingBytes >= 0) { + if (hasMatchingSyncId()) { + builder.field("matching_sync_id", true); + } else { + builder.byteSizeField("matching_size_in_bytes", "matching_size", matchingBytes); + } + } + if (storeException != null) { + builder.startObject("store_exception"); + ElasticsearchException.generateThrowableXContent(builder, params, storeException); + builder.endObject(); + } + } + builder.endObject(); + return builder; + } + } + +} diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RerouteExplanation.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RerouteExplanation.java index d591cc131b590..36dbadec49e1b 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RerouteExplanation.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RerouteExplanation.java @@ -20,7 +20,6 @@ package org.elasticsearch.cluster.routing.allocation; import org.elasticsearch.cluster.routing.allocation.command.AllocationCommand; -import org.elasticsearch.cluster.routing.allocation.command.AllocationCommands; import org.elasticsearch.cluster.routing.allocation.decider.Decision; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -59,7 +58,7 @@ public static RerouteExplanation readFrom(StreamInput in) throws IOException { public static void writeTo(RerouteExplanation explanation, StreamOutput out) throws IOException { out.writeNamedWriteable(explanation.command); - Decision.writeTo(explanation.decisions, out); + explanation.decisions.writeTo(out); } @Override @@ -67,15 +66,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.startObject(); builder.field("command", command.name()); builder.field("parameters", command); - // The Decision could be a Multi or Single decision, and they should - // both be encoded the same, so check and wrap in an array if necessary - if (decisions instanceof Decision.Multi) { - decisions.toXContent(builder, params); - } else { - builder.startArray("decisions"); - decisions.toXContent(builder, params); - builder.endArray(); - } + builder.startArray("decisions"); + decisions.toXContent(builder, params); + builder.endArray(); builder.endObject(); return builder; } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingAllocation.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingAllocation.java index f2d40b38a4673..e1ae367bebf76 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingAllocation.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingAllocation.java @@ -21,6 +21,7 @@ 
import org.elasticsearch.cluster.ClusterInfo; import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.RestoreInProgress; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.routing.RoutingChangesObserver; @@ -30,6 +31,7 @@ import org.elasticsearch.cluster.routing.allocation.decider.Decision; import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.snapshots.RestoreService.RestoreInProgressUpdater; import java.util.HashMap; import java.util.HashSet; @@ -46,83 +48,6 @@ */ public class RoutingAllocation { - /** - * this class is used to describe results of a {@link RoutingAllocation} - */ - public static class Result { - - private final boolean changed; - - private final RoutingTable routingTable; - - private final MetaData metaData; - - private final RoutingExplanations explanations; - - /** - * Creates a new {@link RoutingAllocation.Result} where no change to the routing table was made. - * @param clusterState the unchanged {@link ClusterState} - */ - public static Result unchanged(ClusterState clusterState) { - return new Result(false, clusterState.routingTable(), clusterState.metaData(), new RoutingExplanations()); - } - - /** - * Creates a new {@link RoutingAllocation.Result} where changes were made to the routing table. - * @param routingTable the {@link RoutingTable} this Result references - * @param metaData the {@link MetaData} this Result references - * @param explanations Explanation for the reroute actions - */ - public static Result changed(RoutingTable routingTable, MetaData metaData, RoutingExplanations explanations) { - return new Result(true, routingTable, metaData, explanations); - } - - /** - * Creates a new {@link RoutingAllocation.Result} - * @param changed a flag to determine whether the actual {@link RoutingTable} has been changed - * @param routingTable the {@link RoutingTable} this Result references - * @param metaData the {@link MetaData} this Result references - * @param explanations Explanation for the reroute actions - */ - private Result(boolean changed, RoutingTable routingTable, MetaData metaData, RoutingExplanations explanations) { - this.changed = changed; - this.routingTable = routingTable; - this.metaData = metaData; - this.explanations = explanations; - } - - /** determine whether the actual {@link RoutingTable} has been changed - * @return true if the {@link RoutingTable} has been changed by allocation. 
Otherwise false - */ - public boolean changed() { - return this.changed; - } - - /** - * Get the {@link MetaData} referenced by this result - * @return referenced {@link MetaData} - */ - public MetaData metaData() { - return metaData; - } - - /** - * Get the {@link RoutingTable} referenced by this result - * @return referenced {@link RoutingTable} - */ - public RoutingTable routingTable() { - return routingTable; - } - - /** - * Get the explanation of this result - * @return explanation - */ - public RoutingExplanations explanations() { - return explanations; - } - } - private final AllocationDeciders deciders; private final RoutingNodes routingNodes; @@ -135,8 +60,6 @@ public RoutingExplanations explanations() { private final ImmutableOpenMap customs; - private final AllocationExplanation explanation = new AllocationExplanation(); - private final ClusterInfo clusterInfo; private Map> ignoredShardToNodes = null; @@ -145,7 +68,7 @@ public RoutingExplanations explanations() { private final boolean retryFailed; - private boolean debugDecision = false; + private DebugMode debugDecision = DebugMode.OFF; private boolean hasPendingAsyncFetch = false; @@ -153,8 +76,9 @@ public RoutingExplanations explanations() { private final IndexMetaDataUpdater indexMetaDataUpdater = new IndexMetaDataUpdater(); private final RoutingNodesChangedObserver nodesChangedObserver = new RoutingNodesChangedObserver(); + private final RestoreInProgressUpdater restoreInProgressUpdater = new RestoreInProgressUpdater(); private final RoutingChangesObserver routingChangesObserver = new RoutingChangesObserver.DelegatingRoutingChangesObserver( - nodesChangedObserver, indexMetaDataUpdater + nodesChangedObserver, indexMetaDataUpdater, restoreInProgressUpdater ); @@ -231,12 +155,8 @@ public T custom(String key) { return (T)customs.get(key); } - /** - * Get explanations of current routing - * @return explanation of routing - */ - public AllocationExplanation explanation() { - return explanation; + public ImmutableOpenMap getCustoms() { + return customs; } public void ignoreDisable(boolean ignoreDisable) { @@ -247,11 +167,19 @@ public boolean ignoreDisable() { return this.ignoreDisable; } - public void debugDecision(boolean debug) { + public void setDebugMode(DebugMode debug) { this.debugDecision = debug; } + public void debugDecision(boolean debug) { + this.debugDecision = debug ? DebugMode.ON : DebugMode.OFF; + } + public boolean debugDecision() { + return this.debugDecision != DebugMode.OFF; + } + + public DebugMode getDebugMode() { return this.debugDecision; } @@ -311,6 +239,13 @@ public MetaData updateMetaDataWithRoutingChanges(RoutingTable newRoutingTable) { return indexMetaDataUpdater.applyChanges(metaData, newRoutingTable); } + /** + * Returns updated {@link RestoreInProgress} based on the changes that were made to the routing nodes + */ + public RestoreInProgress updateRestoreInfoWithRoutingChanges(RestoreInProgress restoreInProgress) { + return restoreInProgressUpdater.applyChanges(restoreInProgress); + } + /** * Returns true iff changes were made to the routing nodes */ @@ -353,4 +288,20 @@ public void setHasPendingAsyncFetch() { public boolean isRetryFailed() { return retryFailed; } + + public enum DebugMode { + /** + * debug mode is off + */ + OFF, + /** + * debug mode is on + */ + ON, + /** + * debug mode is on, but YES decisions from a {@link org.elasticsearch.cluster.routing.allocation.decider.Decision.Multi} + * are not included. 
+ */ + EXCLUDE_YES_DECISIONS + } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingNodesChangedObserver.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingNodesChangedObserver.java index 42e80689eeca5..3e465e42b4453 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingNodesChangedObserver.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingNodesChangedObserver.java @@ -96,6 +96,17 @@ public void replicaPromoted(ShardRouting replicaShard) { setChanged(); } + @Override + public void initializedReplicaReinitialized(ShardRouting oldReplica, ShardRouting reinitializedReplica) { + assert oldReplica.initializing() && oldReplica.primary() == false : + "expected initializing replica shard " + oldReplica; + assert reinitializedReplica.initializing() && reinitializedReplica.primary() == false : + "expected reinitialized replica shard " + reinitializedReplica; + assert oldReplica.allocationId().getId().equals(reinitializedReplica.allocationId().getId()) == false : + "expected allocation id to change for reinitialized replica shard (old: " + oldReplica + " new: " + reinitializedReplica + ")"; + setChanged(); + } + /** * Marks the allocation as changed. */ diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/ShardAllocationDecision.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/ShardAllocationDecision.java new file mode 100644 index 0000000000000..557ce9300c610 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/ShardAllocationDecision.java @@ -0,0 +1,105 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.cluster.routing.allocation; + +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; + +import java.io.IOException; + +/** + * Represents the decision taken for the allocation of a single shard. If + * the shard is unassigned, {@link #getAllocateDecision()} will return an + * object containing the decision and its explanation, and {@link #getMoveDecision()} + * will return an object for which {@link MoveDecision#isDecisionTaken()} returns + * {@code false}. If the shard is in the started state, then {@link #getMoveDecision()} + * will return an object containing the decision to move/rebalance the shard, and + * {@link #getAllocateDecision()} will return an object for which + * {@link AllocateUnassignedDecision#isDecisionTaken()} returns {@code false}. If + * the shard is neither unassigned nor started (i.e. 
it is initializing or relocating), + * then both {@link #getAllocateDecision()} and {@link #getMoveDecision()} will return + * objects whose {@code isDecisionTaken()} method returns {@code false}. + */ +public final class ShardAllocationDecision implements ToXContent, Writeable { + public static final ShardAllocationDecision NOT_TAKEN = + new ShardAllocationDecision(AllocateUnassignedDecision.NOT_TAKEN, MoveDecision.NOT_TAKEN); + + private final AllocateUnassignedDecision allocateDecision; + private final MoveDecision moveDecision; + + public ShardAllocationDecision(AllocateUnassignedDecision allocateDecision, + MoveDecision moveDecision) { + this.allocateDecision = allocateDecision; + this.moveDecision = moveDecision; + } + + public ShardAllocationDecision(StreamInput in) throws IOException { + allocateDecision = new AllocateUnassignedDecision(in); + moveDecision = new MoveDecision(in); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + allocateDecision.writeTo(out); + moveDecision.writeTo(out); + } + + /** + * Returns {@code true} if either an allocation decision or a move decision was taken + * for the shard. If no decision was taken, as in the case of initializing or relocating + * shards, then this method returns {@code false}. + */ + public boolean isDecisionTaken() { + return allocateDecision.isDecisionTaken() || moveDecision.isDecisionTaken(); + } + + /** + * Gets the unassigned allocation decision for the shard. If the shard was not in the unassigned state, + * the instance of {@link AllocateUnassignedDecision} that is returned will have {@link AllocateUnassignedDecision#isDecisionTaken()} + * return {@code false}. + */ + public AllocateUnassignedDecision getAllocateDecision() { + return allocateDecision; + } + + /** + * Gets the move decision for the shard. If the shard was not in the started state, + * the instance of {@link MoveDecision} that is returned will have {@link MoveDecision#isDecisionTaken()} + * return {@code false}. + */ + public MoveDecision getMoveDecision() { + return moveDecision; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + if (allocateDecision.isDecisionTaken()) { + allocateDecision.toXContent(builder, params); + } + if (moveDecision.isDecisionTaken()) { + moveDecision.toXContent(builder, params); + } + return builder; + } + +} diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/StaleShard.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/StaleShard.java new file mode 100644 index 0000000000000..9454f62db9732 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/StaleShard.java @@ -0,0 +1,54 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
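As with MoveDecision, a short sketch may help show how the two halves of a ShardAllocationDecision are meant to be read depending on shard state. The class below is hypothetical and assumes AllocateUnassignedDecision exposes the same getExplanation() contract that MoveDecision does; the other methods are the ones introduced in this diff.

```java
import org.elasticsearch.cluster.routing.allocation.ShardAllocationDecision;

// Illustration only; not part of the change.
final class ShardAllocationDecisionExample {
    static String summarize(ShardAllocationDecision decision) {
        if (decision.isDecisionTaken() == false) {
            // shard was initializing or relocating: neither half carries a decision
            return "no allocation or move decision was taken";
        }
        if (decision.getAllocateDecision().isDecisionTaken()) {
            // shard was unassigned: the allocate half explains where (or why not) it is assigned
            return decision.getAllocateDecision().getExplanation();
        }
        // shard was started: the move half explains whether it stays, moves, or rebalances
        return decision.getMoveDecision().getExplanation();
    }
}
```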
+ */ + +package org.elasticsearch.cluster.routing.allocation; + +import org.elasticsearch.index.shard.ShardId; + +/** + * A class that represents a stale shard copy. + */ +public class StaleShard { + private final ShardId shardId; + private final String allocationId; + + public StaleShard(ShardId shardId, String allocationId) { + this.shardId = shardId; + this.allocationId = allocationId; + } + + @Override + public String toString() { + return "stale shard, shard " + shardId + ", alloc. id [" + allocationId + "]"; + } + + /** + * The shard id of the stale shard. + */ + public ShardId getShardId() { + return shardId; + } + + /** + * The allocation id of the stale shard. + */ + public String getAllocationId() { + return allocationId; + } +} diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/StartedRerouteAllocation.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/StartedRerouteAllocation.java deleted file mode 100644 index e63ce2b19e923..0000000000000 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/StartedRerouteAllocation.java +++ /dev/null @@ -1,51 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.cluster.routing.allocation; - -import org.elasticsearch.cluster.ClusterInfo; -import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.routing.RoutingNodes; -import org.elasticsearch.cluster.routing.ShardRouting; -import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders; - -import java.util.List; - -/** - * This {@link RoutingAllocation} holds a list of started shards within a - * cluster - */ -public class StartedRerouteAllocation extends RoutingAllocation { - - private final List startedShards; - - public StartedRerouteAllocation(AllocationDeciders deciders, RoutingNodes routingNodes, ClusterState clusterState, - List startedShards, ClusterInfo clusterInfo, long currentNanoTime) { - super(deciders, routingNodes, clusterState, clusterInfo, currentNanoTime, false); - this.startedShards = startedShards; - } - - /** - * Get started shards - * @return list of started shards - */ - public List startedShards() { - return startedShards; - } -} diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java index 7087ae57c4b3f..a60287a8487a3 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java @@ -29,8 +29,13 @@ import org.elasticsearch.cluster.routing.RoutingNodes; import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.routing.ShardRoutingState; -import org.elasticsearch.cluster.routing.UnassignedInfo; +import org.elasticsearch.cluster.routing.UnassignedInfo.AllocationStatus; +import org.elasticsearch.cluster.routing.allocation.AllocateUnassignedDecision; +import org.elasticsearch.cluster.routing.allocation.AllocationDecision; +import org.elasticsearch.cluster.routing.allocation.MoveDecision; +import org.elasticsearch.cluster.routing.allocation.NodeAllocationResult; import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; +import org.elasticsearch.cluster.routing.allocation.ShardAllocationDecision; import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders; import org.elasticsearch.cluster.routing.allocation.decider.Decision; import org.elasticsearch.cluster.routing.allocation.decider.Decision.Type; @@ -42,14 +47,17 @@ import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.gateway.PriorityComparator; +import java.util.ArrayList; import java.util.Collections; import java.util.Comparator; import java.util.HashMap; import java.util.HashSet; import java.util.IdentityHashMap; import java.util.Iterator; +import java.util.List; import java.util.Map; import java.util.Set; @@ -73,9 +81,9 @@ public class BalancedShardsAllocator extends AbstractComponent implements ShardsAllocator { public static final Setting INDEX_BALANCE_FACTOR_SETTING = - Setting.floatSetting("cluster.routing.allocation.balance.index", 0.55f, Property.Dynamic, Property.NodeScope); + Setting.floatSetting("cluster.routing.allocation.balance.index", 0.55f, 0.0f, Property.Dynamic, Property.NodeScope); public static final Setting SHARD_BALANCE_FACTOR_SETTING = - 
Setting.floatSetting("cluster.routing.allocation.balance.shard", 0.45f, Property.Dynamic, Property.NodeScope); + Setting.floatSetting("cluster.routing.allocation.balance.shard", 0.45f, 0.0f, Property.Dynamic, Property.NodeScope); public static final Setting THRESHOLD_SETTING = Setting.floatSetting("cluster.routing.allocation.balance.threshold", 1.0f, 0.0f, Property.Dynamic, Property.NodeScope); @@ -104,12 +112,6 @@ private void setThreshold(float threshold) { this.threshold = threshold; } - @Override - public Map weighShard(RoutingAllocation allocation, ShardRouting shard) { - final Balancer balancer = new Balancer(logger, allocation, weightFunction, threshold); - return balancer.weighShard(shard); - } - @Override public void allocate(RoutingAllocation allocation) { if (allocation.routingNodes().size() == 0) { @@ -122,6 +124,23 @@ public void allocate(RoutingAllocation allocation) { balancer.balance(); } + @Override + public ShardAllocationDecision decideShardAllocation(final ShardRouting shard, final RoutingAllocation allocation) { + Balancer balancer = new Balancer(logger, allocation, weightFunction, threshold); + AllocateUnassignedDecision allocateUnassignedDecision = AllocateUnassignedDecision.NOT_TAKEN; + MoveDecision moveDecision = MoveDecision.NOT_TAKEN; + if (shard.unassigned()) { + allocateUnassignedDecision = balancer.decideAllocateUnassigned(shard, Sets.newHashSet()); + } else { + moveDecision = balancer.decideMove(shard); + if (moveDecision.isDecisionTaken() && moveDecision.canRemain()) { + MoveDecision rebalanceDecision = balancer.decideRebalance(shard); + moveDecision = rebalanceDecision.withRemainDecision(moveDecision.getCanRemainDecision()); + } + } + return new ShardAllocationDecision(allocateUnassignedDecision, moveDecision); + } + /** * Returns the currently configured delta threshold */ @@ -210,7 +229,7 @@ private float weight(Balancer balancer, ModelNode node, String index, int numAdd */ public static class Balancer { private final Logger logger; - private final Map nodes = new HashMap<>(); + private final Map nodes; private final RoutingAllocation allocation; private final RoutingNodes routingNodes; private final WeightFunction weight; @@ -218,6 +237,7 @@ public static class Balancer { private final float threshold; private final MetaData metaData; private final float avgShardsPerNode; + private final NodeSorter sorter; public Balancer(Logger logger, RoutingAllocation allocation, WeightFunction weight, float threshold) { this.logger = logger; @@ -227,7 +247,8 @@ public Balancer(Logger logger, RoutingAllocation allocation, WeightFunction weig this.routingNodes = allocation.routingNodes(); this.metaData = allocation.metaData(); avgShardsPerNode = ((float) metaData.getTotalNumberOfShards()) / routingNodes.size(); - buildModelFromAssigned(); + nodes = Collections.unmodifiableMap(buildModelFromAssigned()); + sorter = newNodeSorter(); } /** @@ -261,11 +282,18 @@ private NodeSorter newNodeSorter() { return new NodeSorter(nodesArray(), weight, this); } + /** + * The absolute value difference between two weights. + */ private static float absDelta(float lower, float higher) { assert higher >= lower : higher + " lt " + lower +" but was expected to be gte"; return Math.abs(higher - lower); } + /** + * Returns {@code true} iff the weight delta between two nodes is under a defined threshold. + * See {@link #THRESHOLD_SETTING} for defining the threshold. 
+ */ private static boolean lessThan(float delta, float threshold) { /* deltas close to the threshold are "rounded" to the threshold manually to prevent floating point problems if the delta is very close to the @@ -303,12 +331,127 @@ private void balance() { balanceByWeights(); } + /** + * Makes a decision about moving a single shard to a different node to form a more + * optimally balanced cluster. This method is invoked from the cluster allocation + * explain API only. + */ + private MoveDecision decideRebalance(final ShardRouting shard) { + if (shard.started() == false) { + // we can only rebalance started shards + return MoveDecision.NOT_TAKEN; + } + + Decision canRebalance = allocation.deciders().canRebalance(shard, allocation); + + sorter.reset(shard.getIndexName()); + ModelNode[] modelNodes = sorter.modelNodes; + final String currentNodeId = shard.currentNodeId(); + // find currently assigned node + ModelNode currentNode = null; + for (ModelNode node : modelNodes) { + if (node.getNodeId().equals(currentNodeId)) { + currentNode = node; + break; + } + } + assert currentNode != null : "currently assigned node could not be found"; + + // balance the shard, if a better node can be found + final float currentWeight = sorter.weight(currentNode); + final AllocationDeciders deciders = allocation.deciders(); + final String idxName = shard.getIndexName(); + Type rebalanceDecisionType = Type.NO; + ModelNode assignedNode = null; + List> betterBalanceNodes = new ArrayList<>(); + List> sameBalanceNodes = new ArrayList<>(); + List> worseBalanceNodes = new ArrayList<>(); + for (ModelNode node : modelNodes) { + if (node == currentNode) { + continue; // skip over node we're currently allocated to + } + final Decision canAllocate = deciders.canAllocate(shard, node.getRoutingNode(), allocation); + // the current weight of the node in the cluster, as computed by the weight function; + // this is a comparison of the number of shards on this node to the number of shards + // that should be on each node on average (both taking the cluster as a whole into account + // as well as shards per index) + final float nodeWeight = sorter.weight(node); + // if the node we are examining has a worse (higher) weight than the node the shard is + // assigned to, then there is no way moving the shard to the node with the worse weight + // can make the balance of the cluster better, so we check for that here + final boolean betterWeightThanCurrent = nodeWeight <= currentWeight; + boolean rebalanceConditionsMet = false; + if (betterWeightThanCurrent) { + // get the delta between the weights of the node we are checking and the node that holds the shard + float currentDelta = absDelta(nodeWeight, currentWeight); + // checks if the weight delta is above a certain threshold; if it is not above a certain threshold, + // then even though the node we are examining has a better weight and may make the cluster balance + // more even, it doesn't make sense to execute the heavyweight operation of relocating a shard unless + // the gains make it worth it, as defined by the threshold + boolean deltaAboveThreshold = lessThan(currentDelta, threshold) == false; + // simulate the weight of the node if we were to relocate the shard to it + float weightWithShardAdded = weight.weightShardAdded(this, node, idxName); + // calculate the delta of the weights of the two nodes if we were to add the shard to the + // node in question and move it away from the node that currently holds it. 
+ float proposedDelta = weightWithShardAdded - weight.weightShardRemoved(this, currentNode, idxName); + boolean betterWeightWithShardAdded = proposedDelta < currentDelta; + rebalanceConditionsMet = deltaAboveThreshold && betterWeightWithShardAdded; + // if the simulated weight delta with the shard moved away is better than the weight delta + // with the shard remaining on the current node, and we are allowed to allocate to the + // node in question, then allow the rebalance + if (rebalanceConditionsMet && canAllocate.type().higherThan(rebalanceDecisionType)) { + // rebalance to the node, only will get overwritten if the decision here is to + // THROTTLE and we get a decision with YES on another node + rebalanceDecisionType = canAllocate.type(); + assignedNode = node; + } + } + Tuple nodeResult = Tuple.tuple(node, canAllocate); + if (rebalanceConditionsMet) { + betterBalanceNodes.add(nodeResult); + } else if (betterWeightThanCurrent) { + sameBalanceNodes.add(nodeResult); + } else { + worseBalanceNodes.add(nodeResult); + } + } + + int weightRanking = 0; + List nodeDecisions = new ArrayList<>(modelNodes.length - 1); + for (Tuple result : betterBalanceNodes) { + nodeDecisions.add(new NodeAllocationResult( + result.v1().routingNode.node(), AllocationDecision.fromDecisionType(result.v2().type()), result.v2(), ++weightRanking) + ); + } + int currentNodeWeightRanking = ++weightRanking; + for (Tuple result : sameBalanceNodes) { + AllocationDecision nodeDecision = result.v2().type() == Type.NO ? AllocationDecision.NO : AllocationDecision.WORSE_BALANCE; + nodeDecisions.add(new NodeAllocationResult( + result.v1().routingNode.node(), nodeDecision, result.v2(), currentNodeWeightRanking) + ); + } + for (Tuple result : worseBalanceNodes) { + AllocationDecision nodeDecision = result.v2().type() == Type.NO ? AllocationDecision.NO : AllocationDecision.WORSE_BALANCE; + nodeDecisions.add(new NodeAllocationResult( + result.v1().routingNode.node(), nodeDecision, result.v2(), ++weightRanking) + ); + } + + if (canRebalance.type() != Type.YES || allocation.hasPendingAsyncFetch()) { + AllocationDecision allocationDecision = allocation.hasPendingAsyncFetch() ? AllocationDecision.AWAITING_INFO : + AllocationDecision.fromDecisionType(canRebalance.type()); + return MoveDecision.cannotRebalance(canRebalance, allocationDecision, currentNodeWeightRanking, nodeDecisions); + } else { + return MoveDecision.rebalance(canRebalance, AllocationDecision.fromDecisionType(rebalanceDecisionType), + assignedNode != null ? assignedNode.routingNode.node() : null, currentNodeWeightRanking, nodeDecisions); + } + } + public Map weighShard(ShardRouting shard) { - final NodeSorter sorter = newNodeSorter(); final ModelNode[] modelNodes = sorter.modelNodes; final float[] weights = sorter.weights; - buildWeightOrderedIndices(sorter); + buildWeightOrderedIndices(); Map nodes = new HashMap<>(modelNodes.length); float currentNodeWeight = 0.0f; for (int i = 0; i < modelNodes.length; i++) { @@ -332,20 +475,19 @@ public Map weighShard(ShardRouting shard) { * weight of the maximum node and the minimum node according to the * {@link WeightFunction}. This weight is calculated per index to * distribute shards evenly per index. The balancer tries to relocate - * shards only if the delta exceeds the threshold. If the default case + * shards only if the delta exceeds the threshold. 
In the default case * the threshold is set to 1.0 to enforce gaining relocation * only, or in other words relocations that move the weight delta closer * to 0.0 */ private void balanceByWeights() { - final NodeSorter sorter = newNodeSorter(); final AllocationDeciders deciders = allocation.deciders(); final ModelNode[] modelNodes = sorter.modelNodes; final float[] weights = sorter.weights; - for (String index : buildWeightOrderedIndices(sorter)) { + for (String index : buildWeightOrderedIndices()) { IndexMetaData indexMetaData = metaData.index(index); - // find nodes that have a shard of this index or where shards of this index are allowed to stay + // find nodes that have a shard of this index or where shards of this index are allowed to be allocated to, // move these nodes to the front of modelNodes so that we can only balance based on these nodes int relevantNodes = 0; for (int i = 0; i < modelNodes.length; i++) { @@ -440,14 +582,14 @@ private void balanceByWeights() { * allocations on added nodes from one index when the weight parameters * for global balance overrule the index balance at an intermediate * state. For example this can happen if we have 3 nodes and 3 indices - * with 3 shards and 1 shard. At the first stage all three nodes hold - * 2 shard for each index. now we add another node and the first index - * is balanced moving 3 two of the nodes over to the new node since it + * with 3 primary and 1 replica shards. At the first stage all three nodes hold + * 2 shard for each index. Now we add another node and the first index + * is balanced moving three shards from two of the nodes over to the new node since it * has no shards yet and global balance for the node is way below * average. To re-balance we need to move shards back eventually likely * to the nodes we relocated them from. */ - private String[] buildWeightOrderedIndices(NodeSorter sorter) { + private String[] buildWeightOrderedIndices() { final String[] indices = allocation.routingTable().indicesRouting().keys().toArray(String.class); final float[] deltas = new float[indices.length]; for (int i = 0; i < deltas.length; i++) { @@ -501,27 +643,52 @@ public void moveShards() { // Iterate over the started shards interleaving between nodes, and check if they can remain. In the presence of throttling // shard movements, the goal of this iteration order is to achieve a fairer movement of shards from the nodes that are // offloading the shards. - final NodeSorter sorter = newNodeSorter(); for (Iterator it = allocation.routingNodes().nodeInterleavedShardIterator(); it.hasNext(); ) { ShardRouting shardRouting = it.next(); - // we can only move started shards... 
- if (shardRouting.started()) { + final MoveDecision moveDecision = decideMove(shardRouting); + if (moveDecision.isDecisionTaken() && moveDecision.forceMove()) { final ModelNode sourceNode = nodes.get(shardRouting.currentNodeId()); - assert sourceNode != null && sourceNode.containsShard(shardRouting); - RoutingNode routingNode = sourceNode.getRoutingNode(); - Decision decision = allocation.deciders().canRemain(shardRouting, routingNode, allocation); - if (decision.type() == Decision.Type.NO) { - moveShard(sorter, shardRouting, sourceNode, routingNode); + final ModelNode targetNode = nodes.get(moveDecision.getTargetNode().getId()); + sourceNode.removeShard(shardRouting); + Tuple relocatingShards = routingNodes.relocateShard(shardRouting, targetNode.getNodeId(), + allocation.clusterInfo().getShardSize(shardRouting, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE), allocation.changes()); + targetNode.addShard(relocatingShards.v2()); + if (logger.isTraceEnabled()) { + logger.trace("Moved shard [{}] to node [{}]", shardRouting, targetNode.getRoutingNode()); } + } else if (moveDecision.isDecisionTaken() && moveDecision.canRemain() == false) { + logger.trace("[{}][{}] can't move", shardRouting.index(), shardRouting.id()); } } } /** - * Move started shard to the minimal eligible node with respect to the weight function + * Makes a decision on whether to move a started shard to another node. The following rules apply + * to the {@link MoveDecision} return object: + * 1. If the shard is not started, no decision will be taken and {@link MoveDecision#isDecisionTaken()} will return false. + * 2. If the shard is allowed to remain on its current node, no attempt will be made to move the shard and + * {@link MoveDecision#canRemainDecision} will have a decision type of YES. All other fields in the object will be null. + * 3. If the shard is not allowed to remain on its current node, then {@link MoveDecision#getAllocationDecision()} will be + * populated with the decision of moving to another node. If {@link MoveDecision#forceMove()} ()} returns {@code true}, then + * {@link MoveDecision#targetNode} will return a non-null value, otherwise the assignedNodeId will be null. + * 4. If the method is invoked in explain mode (e.g. from the cluster allocation explain APIs), then + * {@link MoveDecision#nodeDecisions} will have a non-null value. */ - private void moveShard(NodeSorter sorter, ShardRouting shardRouting, ModelNode sourceNode, RoutingNode routingNode) { - logger.debug("[{}][{}] allocated on [{}], but can no longer be allocated on it, moving...", shardRouting.index(), shardRouting.id(), routingNode.node()); + public MoveDecision decideMove(final ShardRouting shardRouting) { + if (shardRouting.started() == false) { + // we can only move started shards + return MoveDecision.NOT_TAKEN; + } + + final boolean explain = allocation.debugDecision(); + final ModelNode sourceNode = nodes.get(shardRouting.currentNodeId()); + assert sourceNode != null && sourceNode.containsShard(shardRouting); + RoutingNode routingNode = sourceNode.getRoutingNode(); + Decision canRemain = allocation.deciders().canRemain(shardRouting, routingNode, allocation); + if (canRemain.type() != Decision.Type.NO) { + return MoveDecision.stay(canRemain); + } + sorter.reset(shardRouting.getIndexName()); /* * the sorter holds the minimum weight node first for the shards index. 
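As an illustrative sketch (not part of the patch; it assumes a Balancer and a ShardRouting are in scope), a caller such as moveShards() would interpret the MoveDecision contract documented above roughly as follows, using only method names that appear in this diff:

    MoveDecision moveDecision = balancer.decideMove(shardRouting);
    if (moveDecision.isDecisionTaken() == false) {
        // rule 1: the shard is not started, so no decision was made
    } else if (moveDecision.canRemain()) {
        // rule 2: the shard is allowed to stay on its current node, nothing to move
    } else if (moveDecision.forceMove()) {
        // rule 3: the shard cannot remain and an eligible target node was found
        String targetNodeId = moveDecision.getTargetNode().getId();
    } else {
        // the shard cannot remain, but no move is forced because no node returned a YES decision
    }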
@@ -529,23 +696,36 @@ private void moveShard(NodeSorter sorter, ShardRouting shardRouting, ModelNode s * This is not guaranteed to be balanced after this operation we still try best effort to * allocate on the minimal eligible node. */ + Type bestDecision = Type.NO; + RoutingNode targetNode = null; + final List nodeExplanationMap = explain ? new ArrayList<>() : null; + int weightRanking = 0; for (ModelNode currentNode : sorter.modelNodes) { if (currentNode != sourceNode) { RoutingNode target = currentNode.getRoutingNode(); // don't use canRebalance as we want hard filtering rules to apply. See #17698 Decision allocationDecision = allocation.deciders().canAllocate(shardRouting, target, allocation); - if (allocationDecision.type() == Type.YES) { // TODO maybe we can respect throttling here too? - sourceNode.removeShard(shardRouting); - Tuple relocatingShards = routingNodes.relocateShard(shardRouting, target.nodeId(), allocation.clusterInfo().getShardSize(shardRouting, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE), allocation.changes()); - currentNode.addShard(relocatingShards.v2()); - if (logger.isTraceEnabled()) { - logger.trace("Moved shard [{}] to node [{}]", shardRouting, routingNode.node()); + if (explain) { + nodeExplanationMap.add(new NodeAllocationResult( + currentNode.getRoutingNode().node(), allocationDecision, ++weightRanking)); + } + // TODO maybe we can respect throttling here too? + if (allocationDecision.type().higherThan(bestDecision)) { + bestDecision = allocationDecision.type(); + if (bestDecision == Type.YES) { + targetNode = target; + if (explain == false) { + // we are not in explain mode and already have a YES decision on the best weighted node, + // no need to continue iterating + break; + } } - return; } } } - logger.debug("[{}][{}] can't move", shardRouting.index(), shardRouting.id()); + + return MoveDecision.cannotRemain(canRemain, AllocationDecision.fromDecisionType(bestDecision), + targetNode != null ? targetNode.node() : null, nodeExplanationMap); } /** @@ -557,7 +737,8 @@ private void moveShard(NodeSorter sorter, ShardRouting shardRouting, ModelNode s * on the target node which we respect during the allocation / balancing * process. In short, this method recreates the status-quo in the cluster. */ - private void buildModelFromAssigned() { + private Map buildModelFromAssigned() { + Map nodes = new HashMap<>(); for (RoutingNode rn : routingNodes) { ModelNode node = new ModelNode(rn); nodes.put(rn.nodeId(), node); @@ -572,6 +753,7 @@ private void buildModelFromAssigned() { } } } + return nodes; } /** @@ -626,116 +808,61 @@ private void allocateUnassigned() { do { for (int i = 0; i < primaryLength; i++) { ShardRouting shard = primary[i]; - if (!shard.primary()) { - final Decision decision = deciders.canAllocate(shard, allocation); - if (decision.type() == Type.NO) { - UnassignedInfo.AllocationStatus allocationStatus = UnassignedInfo.AllocationStatus.fromDecision(decision); - unassigned.ignoreShard(shard, allocationStatus, allocation.changes()); - while(i < primaryLength-1 && comparator.compare(primary[i], primary[i+1]) == 0) { - unassigned.ignoreShard(primary[++i], allocationStatus, allocation.changes()); - } - continue; - } else { + AllocateUnassignedDecision allocationDecision = decideAllocateUnassigned(shard, throttledNodes); + final String assignedNodeId = allocationDecision.getTargetNode() != null ? + allocationDecision.getTargetNode().getId() : null; + final ModelNode minNode = assignedNodeId != null ? 
nodes.get(assignedNodeId) : null; + + if (allocationDecision.getAllocationDecision() == AllocationDecision.YES) { + if (logger.isTraceEnabled()) { + logger.trace("Assigned shard [{}] to [{}]", shard, minNode.getNodeId()); + } + + final long shardSize = DiskThresholdDecider.getExpectedShardSize(shard, allocation, + ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE); + shard = routingNodes.initializeShard(shard, minNode.getNodeId(), null, shardSize, allocation.changes()); + minNode.addShard(shard); + if (!shard.primary()) { + // copy over the same replica shards to the secondary array so they will get allocated + // in a subsequent iteration, allowing replicas of other shards to be allocated first while(i < primaryLength-1 && comparator.compare(primary[i], primary[i+1]) == 0) { secondary[secondaryLength++] = primary[++i]; } } - } - assert !shard.assignedToNode() : shard; - /* find an node with minimal weight we can allocate on*/ - float minWeight = Float.POSITIVE_INFINITY; - ModelNode minNode = null; - Decision decision = null; - if (throttledNodes.size() < nodes.size()) { - /* Don't iterate over an identity hashset here the - * iteration order is different for each run and makes testing hard */ - for (ModelNode node : nodes.values()) { - if (throttledNodes.contains(node)) { - continue; - } - if (!node.containsShard(shard)) { - // simulate weight if we would add shard to node - float currentWeight = weight.weightShardAdded(this, node, shard.getIndexName()); - /* - * Unless the operation is not providing any gains we - * don't check deciders - */ - if (currentWeight <= minWeight) { - Decision currentDecision = deciders.canAllocate(shard, node.getRoutingNode(), allocation); - NOUPDATE: - if (currentDecision.type() == Type.YES || currentDecision.type() == Type.THROTTLE) { - if (currentWeight == minWeight) { - /* we have an equal weight tie breaking: - * 1. if one decision is YES prefer it - * 2. prefer the node that holds the primary for this index with the next id in the ring ie. - * for the 3 shards 2 replica case we try to build up: - * 1 2 0 - * 2 0 1 - * 0 1 2 - * such that if we need to tie-break we try to prefer the node holding a shard with the minimal id greater - * than the id of the shard we need to assign. This works find when new indices are created since - * primaries are added first and we only add one shard set a time in this algorithm. 
- */ - if (currentDecision.type() == decision.type()) { - final int repId = shard.id(); - final int nodeHigh = node.highestPrimary(shard.index().getName()); - final int minNodeHigh = minNode.highestPrimary(shard.getIndexName()); - if ((((nodeHigh > repId && minNodeHigh > repId) || (nodeHigh < repId && minNodeHigh < repId)) && (nodeHigh < minNodeHigh)) - || (nodeHigh > minNodeHigh && nodeHigh > repId && minNodeHigh < repId)) { - // nothing to set here; the minNode, minWeight, and decision get set below - } else { - break NOUPDATE; - } - } else if (currentDecision.type() != Type.YES) { - break NOUPDATE; - } - } - minNode = node; - minWeight = currentWeight; - decision = currentDecision; - } - } - } + } else { + // did *not* receive a YES decision + if (logger.isTraceEnabled()) { + logger.trace("No eligible node found to assign shard [{}] allocation_status [{}]", shard, + allocationDecision.getAllocationStatus()); } - } - assert (decision == null) == (minNode == null); - if (minNode != null) { - final long shardSize = DiskThresholdDecider.getExpectedShardSize(shard, allocation, - ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE); - if (decision.type() == Type.YES) { - if (logger.isTraceEnabled()) { - logger.trace("Assigned shard [{}] to [{}]", shard, minNode.getNodeId()); - } - shard = routingNodes.initializeShard(shard, minNode.getNodeId(), null, shardSize, allocation.changes()); - minNode.addShard(shard); - continue; // don't add to ignoreUnassigned - } else { + if (minNode != null) { + // throttle decision scenario + assert allocationDecision.getAllocationStatus() == AllocationStatus.DECIDERS_THROTTLED; + final long shardSize = DiskThresholdDecider.getExpectedShardSize(shard, allocation, + ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE); minNode.addShard(shard.initialize(minNode.getNodeId(), null, shardSize)); final RoutingNode node = minNode.getRoutingNode(); final Decision.Type nodeLevelDecision = deciders.canAllocate(node, allocation).type(); if (nodeLevelDecision != Type.YES) { if (logger.isTraceEnabled()) { - logger.trace("Can not allocate on node [{}] remove from round decision [{}]", node, decision.type()); + logger.trace("Can not allocate on node [{}] remove from round decision [{}]", node, + allocationDecision.getAllocationStatus()); } assert nodeLevelDecision == Type.NO; throttledNodes.add(minNode); } + } else { + if (logger.isTraceEnabled()) { + logger.trace("No Node found to assign shard [{}]", shard); + } } - if (logger.isTraceEnabled()) { - logger.trace("No eligible node found to assign shard [{}] decision [{}]", shard, decision.type()); - } - } else if (logger.isTraceEnabled()) { - logger.trace("No Node found to assign shard [{}]", shard); - } - assert decision == null || decision.type() == Type.THROTTLE; - UnassignedInfo.AllocationStatus allocationStatus = - decision == null ? 
UnassignedInfo.AllocationStatus.DECIDERS_NO : - UnassignedInfo.AllocationStatus.fromDecision(decision); - unassigned.ignoreShard(shard, allocationStatus, allocation.changes()); - if (!shard.primary()) { // we could not allocate it and we are a replica - check if we can ignore the other replicas - while(secondaryLength > 0 && comparator.compare(shard, secondary[secondaryLength-1]) == 0) { - unassigned.ignoreShard(secondary[--secondaryLength], allocationStatus, allocation.changes()); + + unassigned.ignoreShard(shard, allocationDecision.getAllocationStatus(), allocation.changes()); + if (!shard.primary()) { // we could not allocate it and we are a replica - check if we can ignore the other replicas + while(i < primaryLength-1 && comparator.compare(primary[i], primary[i+1]) == 0) { + unassigned.ignoreShard(primary[++i], allocationDecision.getAllocationStatus(), allocation.changes()); + } } } } @@ -748,6 +875,114 @@ private void allocateUnassigned() { // clear everything we have either added it or moved to ignoreUnassigned } + /** + * Make a decision for allocating an unassigned shard. This method returns a two values in a tuple: the + * first value is the {@link Decision} taken to allocate the unassigned shard, the second value is the + * {@link ModelNode} representing the node that the shard should be assigned to. If the decision returned + * is of type {@link Type#NO}, then the assigned node will be null. + */ + private AllocateUnassignedDecision decideAllocateUnassigned(final ShardRouting shard, final Set throttledNodes) { + if (shard.assignedToNode()) { + // we only make decisions for unassigned shards here + return AllocateUnassignedDecision.NOT_TAKEN; + } + + final boolean explain = allocation.debugDecision(); + Decision shardLevelDecision = allocation.deciders().canAllocate(shard, allocation); + if (shardLevelDecision.type() == Type.NO && explain == false) { + // NO decision for allocating the shard, irrespective of any particular node, so exit early + return AllocateUnassignedDecision.no(AllocationStatus.DECIDERS_NO, null); + } + + /* find an node with minimal weight we can allocate on*/ + float minWeight = Float.POSITIVE_INFINITY; + ModelNode minNode = null; + Decision decision = null; + if (throttledNodes.size() >= nodes.size() && explain == false) { + // all nodes are throttled, so we know we won't be able to allocate this round, + // so if we are not in explain mode, short circuit + return AllocateUnassignedDecision.no(AllocationStatus.DECIDERS_NO, null); + } + /* Don't iterate over an identity hashset here the + * iteration order is different for each run and makes testing hard */ + Map nodeExplanationMap = explain ? new HashMap<>() : null; + List> nodeWeights = explain ? 
new ArrayList<>() : null; + for (ModelNode node : nodes.values()) { + if ((throttledNodes.contains(node) || node.containsShard(shard)) && explain == false) { + // decision is NO without needing to check anything further, so short circuit + continue; + } + + // simulate weight if we would add shard to node + float currentWeight = weight.weightShardAdded(this, node, shard.getIndexName()); + // moving the shard would not improve the balance, and we are not in explain mode, so short circuit + if (currentWeight > minWeight && explain == false) { + continue; + } + + Decision currentDecision = allocation.deciders().canAllocate(shard, node.getRoutingNode(), allocation); + if (explain) { + nodeExplanationMap.put(node.getNodeId(), + new NodeAllocationResult(node.getRoutingNode().node(), currentDecision, 0)); + nodeWeights.add(Tuple.tuple(node.getNodeId(), currentWeight)); + } + if (currentDecision.type() == Type.YES || currentDecision.type() == Type.THROTTLE) { + final boolean updateMinNode; + if (currentWeight == minWeight) { + /* we have an equal weight tie breaking: + * 1. if one decision is YES prefer it + * 2. prefer the node that holds the primary for this index with the next id in the ring ie. + * for the 3 shards 2 replica case we try to build up: + * 1 2 0 + * 2 0 1 + * 0 1 2 + * such that if we need to tie-break we try to prefer the node holding a shard with the minimal id greater + * than the id of the shard we need to assign. This works find when new indices are created since + * primaries are added first and we only add one shard set a time in this algorithm. + */ + if (currentDecision.type() == decision.type()) { + final int repId = shard.id(); + final int nodeHigh = node.highestPrimary(shard.index().getName()); + final int minNodeHigh = minNode.highestPrimary(shard.getIndexName()); + updateMinNode = ((((nodeHigh > repId && minNodeHigh > repId) + || (nodeHigh < repId && minNodeHigh < repId)) + && (nodeHigh < minNodeHigh)) + || (nodeHigh > minNodeHigh && nodeHigh > repId && minNodeHigh < repId)); + } else { + updateMinNode = currentDecision.type() == Type.YES; + } + } else { + updateMinNode = true; + } + if (updateMinNode) { + minNode = node; + minWeight = currentWeight; + decision = currentDecision; + } + } + } + if (decision == null) { + // decision was not set and a node was not assigned, so treat it as a NO decision + decision = Decision.NO; + } + List nodeDecisions = null; + if (explain) { + nodeDecisions = new ArrayList<>(); + // fill in the correct weight ranking, once we've been through all nodes + nodeWeights.sort((nodeWeight1, nodeWeight2) -> Float.compare(nodeWeight1.v2(), nodeWeight2.v2())); + int weightRanking = 0; + for (Tuple nodeWeight : nodeWeights) { + NodeAllocationResult current = nodeExplanationMap.get(nodeWeight.v1()); + nodeDecisions.add(new NodeAllocationResult(current.getNode(), current.getCanAllocateDecision(), ++weightRanking)); + } + } + return AllocateUnassignedDecision.fromDecision( + decision, + minNode != null ? minNode.routingNode.node() : null, + nodeDecisions + ); + } + /** * Tries to find a relocation from the max node to the minimal node for an arbitrary shard of the given index on the * balance model. 
Iff this method returns a true the relocation has already been executed on the @@ -792,10 +1027,8 @@ private boolean tryRelocateShard(ModelNode minNode, ModelNode maxNode, String id long shardSize = allocation.clusterInfo().getShardSize(candidate, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE); if (decision.type() == Type.YES) { /* only allocate on the cluster if we are not throttled */ - if (logger.isTraceEnabled()) { - logger.trace("Relocate shard [{}] from node [{}] to node [{}]", candidate, maxNode.getNodeId(), + logger.debug("Relocate shard [{}] from node [{}] to node [{}]", candidate, maxNode.getNodeId(), minNode.getNodeId()); - } /* now allocate on the cluster */ minNode.addShard(routingNodes.relocateShard(candidate, minNode.getNodeId(), shardSize, allocation.changes()).v1()); return true; @@ -819,7 +1052,7 @@ static class ModelNode implements Iterable { private int numShards = 0; private final RoutingNode routingNode; - public ModelNode(RoutingNode routingNode) { + ModelNode(RoutingNode routingNode) { this.routingNode = routingNode; } @@ -897,7 +1130,7 @@ static final class ModelIndex implements Iterable { private final Set shards = new HashSet<>(4); // expect few shards of same index to be allocated on same node private int highestPrimary = -1; - public ModelIndex(String id) { + ModelIndex(String id) { this.id = id; } @@ -954,7 +1187,7 @@ static final class NodeSorter extends IntroSorter { private final Balancer balancer; private float pivotWeight; - public NodeSorter(ModelNode[] modelNodes, WeightFunction function, Balancer balancer) { + NodeSorter(ModelNode[] modelNodes, WeightFunction function, Balancer balancer) { this.function = function; this.balancer = balancer; this.modelNodes = modelNodes; diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/ShardsAllocator.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/ShardsAllocator.java index 35f3b2654181e..7e9d15b45282f 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/ShardsAllocator.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/ShardsAllocator.java @@ -19,11 +19,12 @@ package org.elasticsearch.cluster.routing.allocation.allocator; -import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.ShardRouting; +import org.elasticsearch.cluster.routing.allocation.AllocateUnassignedDecision; +import org.elasticsearch.cluster.routing.allocation.MoveDecision; import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; +import org.elasticsearch.cluster.routing.allocation.ShardAllocationDecision; -import java.util.Map; /** *

    * A {@link ShardsAllocator} is the main entry point for shard allocation on nodes in the cluster. @@ -44,13 +45,17 @@ public interface ShardsAllocator { void allocate(RoutingAllocation allocation); /** - * Returns a map of node to a float "weight" of where the allocator would like to place the shard. - * Higher weights signify greater desire to place the shard on that node. - * Does not modify the allocation at all. + * Returns the decision for where a shard should reside in the cluster. If the shard is unassigned, + * then the {@link AllocateUnassignedDecision} will be non-null. If the shard is not in the unassigned + * state, then the {@link MoveDecision} will be non-null. * - * @param allocation current node allocation - * @param shard shard to weigh - * @return map of nodes to float weights + * This method is primarily used by the cluster allocation explain API to provide detailed explanations + * for the allocation of a single shard. Implementations of the {@link #allocate(RoutingAllocation)} method + * may use the results of this method implementation to decide on allocating shards in the routing table + * to the cluster. + * + * If an implementation of this interface does not support explaining decisions for a single shard through + * the cluster explain API, then this method should throw a {@code UnsupportedOperationException}. */ - Map weighShard(RoutingAllocation allocation, ShardRouting shard); + ShardAllocationDecision decideShardAllocation(ShardRouting shard, RoutingAllocation allocation); } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AbstractAllocateAllocationCommand.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AbstractAllocateAllocationCommand.java index 90a591b119938..4ffd70aee1cd8 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AbstractAllocateAllocationCommand.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AbstractAllocateAllocationCommand.java @@ -30,7 +30,6 @@ import org.elasticsearch.cluster.routing.allocation.decider.Decision; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ObjectParser; @@ -50,8 +49,8 @@ public abstract class AbstractAllocateAllocationCommand implements AllocationCom private static final String SHARD_FIELD = "shard"; private static final String NODE_FIELD = "node"; - protected static > ObjectParser createAllocateParser(String command) { - ObjectParser parser = new ObjectParser<>(command); + protected static > ObjectParser createAllocateParser(String command) { + ObjectParser parser = new ObjectParser<>(command); parser.declareString(Builder::setIndex, new ParseField(INDEX_FIELD)); parser.declareInt(Builder::setShard, new ParseField(SHARD_FIELD)); parser.declareString(Builder::setNode, new ParseField(NODE_FIELD)); diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateEmptyPrimaryAllocationCommand.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateEmptyPrimaryAllocationCommand.java index 82d6f436d2a70..157acc0e537b6 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateEmptyPrimaryAllocationCommand.java +++ 
b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateEmptyPrimaryAllocationCommand.java @@ -30,8 +30,6 @@ import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.routing.allocation.decider.Decision; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentParser; @@ -49,8 +47,7 @@ public class AllocateEmptyPrimaryAllocationCommand extends BasePrimaryAllocation public static final String NAME = "allocate_empty_primary"; public static final ParseField COMMAND_NAME_FIELD = new ParseField(NAME); - private static final ObjectParser EMPTY_PRIMARY_PARSER = BasePrimaryAllocationCommand - .createAllocatePrimaryParser(NAME); + private static final ObjectParser EMPTY_PRIMARY_PARSER = BasePrimaryAllocationCommand.createAllocatePrimaryParser(NAME); /** * Creates a new {@link AllocateEmptyPrimaryAllocationCommand} @@ -83,7 +80,7 @@ public static class Builder extends BasePrimaryAllocationCommand.Builder ParseFieldMatcher.STRICT); + return EMPTY_PRIMARY_PARSER.parse(parser, this, null); } @Override diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateReplicaAllocationCommand.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateReplicaAllocationCommand.java index 8c47deee66fe6..6ec09a9bbbbe9 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateReplicaAllocationCommand.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateReplicaAllocationCommand.java @@ -28,8 +28,6 @@ import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.routing.allocation.decider.Decision; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentParser; @@ -46,8 +44,7 @@ public class AllocateReplicaAllocationCommand extends AbstractAllocateAllocation public static final String NAME = "allocate_replica"; public static final ParseField COMMAND_NAME_FIELD = new ParseField(NAME); - private static final ObjectParser REPLICA_PARSER = - createAllocateParser(NAME); + private static final ObjectParser REPLICA_PARSER = createAllocateParser(NAME); /** * Creates a new {@link AllocateReplicaAllocationCommand} @@ -80,7 +77,7 @@ protected static class Builder extends AbstractAllocateAllocationCommand.Builder @Override public Builder parse(XContentParser parser) throws IOException { - return REPLICA_PARSER.parse(parser, this, () -> ParseFieldMatcher.STRICT); + return REPLICA_PARSER.parse(parser, this, null); } @Override diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateStalePrimaryAllocationCommand.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateStalePrimaryAllocationCommand.java index acdd5cae30b49..c643fb5c948ae 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateStalePrimaryAllocationCommand.java +++ 
b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateStalePrimaryAllocationCommand.java @@ -28,8 +28,6 @@ import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.routing.allocation.decider.Decision; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentParser; @@ -46,8 +44,7 @@ public class AllocateStalePrimaryAllocationCommand extends BasePrimaryAllocation public static final String NAME = "allocate_stale_primary"; public static final ParseField COMMAND_NAME_FIELD = new ParseField(NAME); - private static final ObjectParser STALE_PRIMARY_PARSER = BasePrimaryAllocationCommand - .createAllocatePrimaryParser(NAME); + private static final ObjectParser STALE_PRIMARY_PARSER = BasePrimaryAllocationCommand.createAllocatePrimaryParser(NAME); /** * Creates a new {@link AllocateStalePrimaryAllocationCommand} @@ -81,7 +78,7 @@ public static class Builder extends BasePrimaryAllocationCommand.Builder ParseFieldMatcher.STRICT); + return STALE_PRIMARY_PARSER.parse(parser, this, null); } @Override diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocationCommandRegistry.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocationCommandRegistry.java deleted file mode 100644 index 27c1f074c40a7..0000000000000 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocationCommandRegistry.java +++ /dev/null @@ -1,31 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.cluster.routing.allocation.command; - -import org.elasticsearch.common.xcontent.ParseFieldRegistry; - -/** - * Registry of allocation commands. This is it's own class just to make Guice happy. 
- */ -public class AllocationCommandRegistry extends ParseFieldRegistry> { - public AllocationCommandRegistry() { - super("allocation_command"); - } -} diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocationCommands.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocationCommands.java index 10ba3f5594458..5098b027f611a 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocationCommands.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocationCommands.java @@ -23,7 +23,6 @@ import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.routing.allocation.RoutingExplanations; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -126,12 +125,10 @@ public static void writeTo(AllocationCommands commands, StreamOutput out) throws * } *

    * @param parser {@link XContentParser} to read the commands from - * @param registry of allocation command parsers * @return {@link AllocationCommands} read * @throws IOException if something bad happens while reading the stream */ - public static AllocationCommands fromXContent(XContentParser parser, ParseFieldMatcher parseFieldMatcher, - AllocationCommandRegistry registry) throws IOException { + public static AllocationCommands fromXContent(XContentParser parser) throws IOException { AllocationCommands commands = new AllocationCommands(); XContentParser.Token token = parser.currentToken(); @@ -160,7 +157,7 @@ public static AllocationCommands fromXContent(XContentParser parser, ParseFieldM token = parser.nextToken(); String commandName = parser.currentName(); token = parser.nextToken(); - commands.add(registry.lookup(commandName, parseFieldMatcher, parser.getTokenLocation()).fromXContent(parser)); + commands.add(parser.namedObject(AllocationCommand.class, commandName, null)); // move to the end object one if (parser.nextToken() != XContentParser.Token.END_OBJECT) { throw new ElasticsearchParseException("allocation command is malformed, done parsing a command, but didn't get END_OBJECT, got [{}] instead", token); diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/BasePrimaryAllocationCommand.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/BasePrimaryAllocationCommand.java index f4dc4fba4b877..2cb0426012524 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/BasePrimaryAllocationCommand.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/BasePrimaryAllocationCommand.java @@ -20,7 +20,6 @@ package org.elasticsearch.cluster.routing.allocation.command; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ObjectParser; @@ -35,8 +34,8 @@ public abstract class BasePrimaryAllocationCommand extends AbstractAllocateAlloc private static final String ACCEPT_DATA_LOSS_FIELD = "accept_data_loss"; - protected static > ObjectParser createAllocatePrimaryParser(String command) { - ObjectParser parser = AbstractAllocateAllocationCommand.createAllocateParser(command); + protected static > ObjectParser createAllocatePrimaryParser(String command) { + ObjectParser parser = AbstractAllocateAllocationCommand.createAllocateParser(command); parser.declareBoolean(Builder::setAcceptDataLoss, new ParseField(ACCEPT_DATA_LOSS_FIELD)); return parser; } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/AllocationDeciders.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/AllocationDeciders.java index 5b6f145fe8f2b..53e67ba25a429 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/AllocationDeciders.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/AllocationDeciders.java @@ -23,13 +23,12 @@ import org.elasticsearch.cluster.routing.RoutingNode; import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import java.util.Collection; import java.util.Collections; -import java.util.List; -import 
java.util.Set; + +import static org.elasticsearch.cluster.routing.allocation.RoutingAllocation.DebugMode.EXCLUDE_YES_DECISIONS; /** * A composite {@link AllocationDecider} combining the "decision" of multiple @@ -56,7 +55,8 @@ public Decision canRebalance(ShardRouting shardRouting, RoutingAllocation alloca } else { ret.add(decision); } - } else if (decision != Decision.ALWAYS) { + } else if (decision != Decision.ALWAYS + && (allocation.getDebugMode() != EXCLUDE_YES_DECISIONS || decision.type() != Decision.Type.YES)) { ret.add(decision); } } @@ -74,7 +74,7 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing // short track if a NO is returned. if (decision == Decision.NO) { if (logger.isTraceEnabled()) { - logger.trace("Can not allocate [{}] on node [{}] due to [{}]", shardRouting, node.nodeId(), allocationDecider.getClass().getSimpleName()); + logger.trace("Can not allocate [{}] on node [{}] due to [{}]", shardRouting, node.node(), allocationDecider.getClass().getSimpleName()); } // short circuit only if debugging is not enabled if (!allocation.debugDecision()) { @@ -82,7 +82,8 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing } else { ret.add(decision); } - } else if (decision != Decision.ALWAYS) { + } else if (decision != Decision.ALWAYS + && (allocation.getDebugMode() != EXCLUDE_YES_DECISIONS || decision.type() != Decision.Type.YES)) { // the assumption is that a decider that returns the static instance Decision#ALWAYS // does not really implements canAllocate ret.add(decision); @@ -112,7 +113,8 @@ public Decision canRemain(ShardRouting shardRouting, RoutingNode node, RoutingAl } else { ret.add(decision); } - } else if (decision != Decision.ALWAYS) { + } else if (decision != Decision.ALWAYS + && (allocation.getDebugMode() != EXCLUDE_YES_DECISIONS || decision.type() != Decision.Type.YES)) { ret.add(decision); } } @@ -131,7 +133,8 @@ public Decision canAllocate(IndexMetaData indexMetaData, RoutingNode node, Routi } else { ret.add(decision); } - } else if (decision != Decision.ALWAYS) { + } else if (decision != Decision.ALWAYS + && (allocation.getDebugMode() != EXCLUDE_YES_DECISIONS || decision.type() != Decision.Type.YES)) { ret.add(decision); } } @@ -150,7 +153,8 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingAllocation allocat } else { ret.add(decision); } - } else if (decision != Decision.ALWAYS) { + } else if (decision != Decision.ALWAYS + && (allocation.getDebugMode() != EXCLUDE_YES_DECISIONS || decision.type() != Decision.Type.YES)) { ret.add(decision); } } @@ -169,7 +173,8 @@ public Decision canAllocate(RoutingNode node, RoutingAllocation allocation) { } else { ret.add(decision); } - } else if (decision != Decision.ALWAYS) { + } else if (decision != Decision.ALWAYS + && (allocation.getDebugMode() != EXCLUDE_YES_DECISIONS || decision.type() != Decision.Type.YES)) { ret.add(decision); } } @@ -188,7 +193,8 @@ public Decision canRebalance(RoutingAllocation allocation) { } else { ret.add(decision); } - } else if (decision != Decision.ALWAYS) { + } else if (decision != Decision.ALWAYS + && (allocation.getDebugMode() != EXCLUDE_YES_DECISIONS || decision.type() != Decision.Type.YES)) { ret.add(decision); } } @@ -216,7 +222,8 @@ public Decision canForceAllocatePrimary(ShardRouting shardRouting, RoutingNode n } else { ret.add(decision); } - } else if (decision != Decision.ALWAYS) { + } else if (decision != Decision.ALWAYS + && (allocation.getDebugMode() != EXCLUDE_YES_DECISIONS || decision.type() != 
Decision.Type.YES)) { ret.add(decision); } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/AwarenessAllocationDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/AwarenessAllocationDecider.java index ceeb23bab1a4f..4160fd224aa14 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/AwarenessAllocationDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/AwarenessAllocationDecider.java @@ -49,7 +49,7 @@ * To enable allocation awareness in this example nodes should contain a value * for the rack_id key like: *
    - * node.rack_id:1
    + * node.attr.rack_id:1
      * </pre>
    * <p>

    * Awareness can also be used to prevent over-allocation in the case of node or @@ -78,31 +78,15 @@ public class AwarenessAllocationDecider extends AllocationDecider { public static final String NAME = "awareness"; public static final Setting CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING = - new Setting<>("cluster.routing.allocation.awareness.attributes", "", Strings::splitStringByCommaToArray , Property.Dynamic, + new Setting<>("cluster.routing.allocation.awareness.attributes", "", s -> Strings.tokenizeToStringArray(s, ","), Property.Dynamic, Property.NodeScope); public static final Setting CLUSTER_ROUTING_ALLOCATION_AWARENESS_FORCE_GROUP_SETTING = Setting.groupSetting("cluster.routing.allocation.awareness.force.", Property.Dynamic, Property.NodeScope); - private String[] awarenessAttributes; + private volatile String[] awarenessAttributes; private volatile Map forcedAwarenessAttributes; - /** - * Creates a new {@link AwarenessAllocationDecider} instance - */ - public AwarenessAllocationDecider() { - this(Settings.Builder.EMPTY_SETTINGS); - } - - /** - * Creates a new {@link AwarenessAllocationDecider} instance from given settings - * - * @param settings {@link Settings} to use - */ - public AwarenessAllocationDecider(Settings settings) { - this(settings, new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS)); - } - public AwarenessAllocationDecider(Settings settings, ClusterSettings clusterSettings) { super(settings); this.awarenessAttributes = CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING.get(settings); @@ -140,7 +124,9 @@ public Decision canRemain(ShardRouting shardRouting, RoutingNode node, RoutingAl private Decision underCapacity(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation, boolean moveToNode) { if (awarenessAttributes.length == 0) { - return allocation.decision(Decision.YES, NAME, "allocation awareness is not enabled"); + return allocation.decision(Decision.YES, NAME, + "allocation awareness is not enabled, set cluster setting [%s] to enable it", + CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING.getKey()); } IndexMetaData indexMetaData = allocation.metaData().getIndexSafe(shardRouting.index()); @@ -148,7 +134,10 @@ private Decision underCapacity(ShardRouting shardRouting, RoutingNode node, Rout for (String awarenessAttribute : awarenessAttributes) { // the node the shard exists on must be associated with an awareness attribute if (!node.node().getAttributes().containsKey(awarenessAttribute)) { - return allocation.decision(Decision.NO, NAME, "node does not contain the awareness attribute: [%s]", awarenessAttribute); + return allocation.decision(Decision.NO, NAME, + "node does not contain the awareness attribute [%s]; required attributes cluster setting [%s=%s]", + awarenessAttribute, CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING.getKey(), + allocation.debugDecision() ? 
Strings.arrayToCommaDelimitedString(awarenessAttributes) : null); } // build attr_value -> nodes map @@ -206,15 +195,14 @@ private Decision underCapacity(ShardRouting shardRouting, RoutingNode node, Rout // if we are above with leftover, then we know we are not good, even with mod if (currentNodeCount > (requiredCountPerAttribute + leftoverPerAttribute)) { return allocation.decision(Decision.NO, NAME, - "there are too many shards on the node for attribute [%s], there are [%d] total shards for the index " + - " and [%d] total attributes values, expected the node count [%d] to be lower or equal to the required " + - "number of shards per attribute [%d] plus leftover [%d]", + "there are too many copies of the shard allocated to nodes with attribute [%s], there are [%d] total configured " + + "shard copies for this shard id and [%d] total attribute values, expected the allocated shard count per " + + "attribute [%d] to be less than or equal to the upper bound of the required number of shards per attribute [%d]", awarenessAttribute, shardCount, numberOfAttributes, currentNodeCount, - requiredCountPerAttribute, - leftoverPerAttribute); + requiredCountPerAttribute + leftoverPerAttribute); } // all is well, we are below or same as average if (currentNodeCount <= requiredCountPerAttribute) { diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ClusterRebalanceAllocationDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ClusterRebalanceAllocationDecider.java index c343d4254c8a5..281f6a603c3f3 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ClusterRebalanceAllocationDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ClusterRebalanceAllocationDecider.java @@ -48,14 +48,15 @@ public class ClusterRebalanceAllocationDecider extends AllocationDecider { public static final String NAME = "cluster_rebalance"; + private static final String CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE = "cluster.routing.allocation.allow_rebalance"; public static final Setting CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE_SETTING = - new Setting<>("cluster.routing.allocation.allow_rebalance", ClusterRebalanceType.INDICES_ALL_ACTIVE.name().toLowerCase(Locale.ROOT), + new Setting<>(CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE, ClusterRebalanceType.INDICES_ALL_ACTIVE.toString(), ClusterRebalanceType::parseString, Property.Dynamic, Property.NodeScope); /** * An enum representation for the configured re-balance type. 
*/ - public static enum ClusterRebalanceType { + public enum ClusterRebalanceType { /** * Re-balancing is allowed once a shard replication group is active */ @@ -80,6 +81,11 @@ public static ClusterRebalanceType parseString(String typeString) { throw new IllegalArgumentException("Illegal value for " + CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE_SETTING + ": " + typeString); } + + @Override + public String toString() { + return name().toLowerCase(Locale.ROOT); + } } private volatile ClusterRebalanceType type; @@ -94,8 +100,7 @@ public ClusterRebalanceAllocationDecider(Settings settings, ClusterSettings clus CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE_SETTING.getRaw(settings)); type = ClusterRebalanceType.INDICES_ALL_ACTIVE; } - logger.debug("using [{}] with [{}]", CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE_SETTING.getKey(), - type.toString().toLowerCase(Locale.ROOT)); + logger.debug("using [{}] with [{}]", CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE, type); clusterSettings.addSettingsUpdateConsumer(CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE_SETTING, this::setType); } @@ -115,12 +120,14 @@ public Decision canRebalance(RoutingAllocation allocation) { // check if there are unassigned primaries. if ( allocation.routingNodes().hasUnassignedPrimaries() ) { return allocation.decision(Decision.NO, NAME, - "the cluster has unassigned primary shards and rebalance type is set to [%s]", type); + "the cluster has unassigned primary shards and cluster setting [%s] is set to [%s]", + CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE, type); } // check if there are initializing primaries that don't have a relocatingNodeId entry. if ( allocation.routingNodes().hasInactivePrimaries() ) { return allocation.decision(Decision.NO, NAME, - "the cluster has inactive primary shards and rebalance type is set to [%s]", type); + "the cluster has inactive primary shards and cluster setting [%s] is set to [%s]", + CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE, type); } return allocation.decision(Decision.YES, NAME, "all primary shards are active"); @@ -129,16 +136,18 @@ public Decision canRebalance(RoutingAllocation allocation) { // check if there are unassigned shards. if (allocation.routingNodes().hasUnassignedShards() ) { return allocation.decision(Decision.NO, NAME, - "the cluster has unassigned shards and rebalance type is set to [%s]", type); + "the cluster has unassigned shards and cluster setting [%s] is set to [%s]", + CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE, type); } // in case all indices are assigned, are there initializing shards which // are not relocating? 
if ( allocation.routingNodes().hasInactiveShards() ) { return allocation.decision(Decision.NO, NAME, - "the cluster has inactive shards and rebalance type is set to [%s]", type); + "the cluster has inactive shards and cluster setting [%s] is set to [%s]", + CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE, type); } } // type == Type.ALWAYS - return allocation.decision(Decision.YES, NAME, "all shards are active, rebalance type is [%s]", type); + return allocation.decision(Decision.YES, NAME, "all shards are active"); } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ConcurrentRebalanceAllocationDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ConcurrentRebalanceAllocationDecider.java index dd3ece10dd5f1..63fbad59b922a 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ConcurrentRebalanceAllocationDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ConcurrentRebalanceAllocationDecider.java @@ -66,9 +66,11 @@ public Decision canRebalance(ShardRouting shardRouting, RoutingAllocation alloca } int relocatingShards = allocation.routingNodes().getRelocatingShardCount(); if (relocatingShards >= clusterConcurrentRebalance) { - return allocation.decision(Decision.NO, NAME, - "too many shards are concurrently rebalancing [%d], limit: [%d]", - relocatingShards, clusterConcurrentRebalance); + return allocation.decision(Decision.THROTTLE, NAME, + "reached the limit of concurrently rebalancing shards [%d], cluster setting [%s=%d]", + relocatingShards, + CLUSTER_ROUTING_ALLOCATION_CLUSTER_CONCURRENT_REBALANCE_SETTING.getKey(), + clusterConcurrentRebalance); } return allocation.decision(Decision.YES, NAME, "below threshold [%d] for concurrent rebalances, current rebalance shard count [%d]", diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/Decision.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/Decision.java index 3792f536f2f1d..a2198ad90d9b0 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/Decision.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/Decision.java @@ -22,6 +22,7 @@ import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -38,7 +39,7 @@ * * @see AllocationDecider */ -public abstract class Decision implements ToXContent { +public abstract class Decision implements ToXContent, Writeable { public static final Decision ALWAYS = new Single(Type.YES); public static final Decision YES = new Single(Type.YES); @@ -57,26 +58,6 @@ public static Decision single(Type type, @Nullable String label, @Nullable Strin return new Single(type, label, explanation, explanationParams); } - public static void writeTo(Decision decision, StreamOutput out) throws IOException { - if (decision instanceof Multi) { - // Flag specifying whether it is a Multi or Single Decision - out.writeBoolean(true); - out.writeVInt(((Multi) decision).decisions.size()); - for (Decision d : ((Multi) decision).decisions) { - writeTo(d, out); - } - } else { - // Flag specifying whether it is a Multi or Single Decision - out.writeBoolean(false); - Single d = ((Single) decision); - Type.writeTo(d.type, out); - 
out.writeOptionalString(d.label); - // Flatten explanation on serialization, so that explanationParams - // do not need to be serialized - out.writeOptionalString(d.getExplanation()); - } - } - public static Decision readFrom(StreamInput in) throws IOException { // Determine whether to read a Single or Multi Decision if (in.readBoolean()) { @@ -100,10 +81,16 @@ public static Decision readFrom(StreamInput in) throws IOException { * This enumeration defines the * possible types of decisions */ - public enum Type { - YES, - NO, - THROTTLE; + public enum Type implements Writeable { + YES(1), + THROTTLE(2), + NO(0); + + private final int id; + + Type(int id) { + this.id = id; + } public static Type resolve(String s) { return Type.valueOf(s.toUpperCase(Locale.ROOT)); @@ -123,21 +110,22 @@ public static Type readFrom(StreamInput in) throws IOException { } } - public static void writeTo(Type type, StreamOutput out) throws IOException { - switch (type) { - case NO: - out.writeVInt(0); - break; - case YES: - out.writeVInt(1); - break; - case THROTTLE: - out.writeVInt(2); - break; - default: - throw new IllegalArgumentException("Invalid Type [" + type + "]"); + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeVInt(id); + } + + public boolean higherThan(Type other) { + if (this == NO) { + return false; + } else if (other == NO) { + return true; + } else if (other == THROTTLE && this == YES) { + return true; } + return false; } + } /** @@ -152,6 +140,12 @@ public static void writeTo(Type type, StreamOutput out) throws IOException { @Nullable public abstract String label(); + /** + * Get the explanation for this decision. + */ + @Nullable + public abstract String getExplanation(); + /** * Return the list of all decisions that make up this decision */ @@ -210,8 +204,9 @@ public List getDecisions() { } /** - * Returns the explanation string, fully formatted. Only formats the string once + * Returns the explanation string, fully formatted. Only formats the string once. 
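// Illustrative sketch (not part of this patch): the higherThan() method added to Decision.Type
// above orders the types as NO < THROTTLE < YES, so a caller can keep the strongest type seen
// across several decider results, for example when summarising a Multi decision.
java.util.List<Decision> decisions = java.util.Arrays.asList(Decision.YES, Decision.THROTTLE, Decision.NO);
Decision.Type strongest = Decision.Type.NO;
for (Decision decision : decisions) {
    if (decision.type().higherThan(strongest)) {
        strongest = decision.type();
    }
}
assert strongest == Decision.Type.YES;
assert Decision.Type.THROTTLE.higherThan(Decision.Type.NO);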
*/ + @Override @Nullable public String getExplanation() { if (explanationString == null && explanation != null) { @@ -263,6 +258,16 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.endObject(); return builder; } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeBoolean(false); // flag specifying its a single decision + type.writeTo(out); + out.writeOptionalString(label); + // Flatten explanation on serialization, so that explanationParams + // do not need to be serialized + out.writeOptionalString(getExplanation()); + } } /** @@ -303,6 +308,12 @@ public String label() { return null; } + @Override + @Nullable + public String getExplanation() { + throw new UnsupportedOperationException("multi-level decisions do not have an explanation"); + } + @Override public List getDecisions() { return Collections.unmodifiableList(this.decisions); @@ -339,12 +350,19 @@ public String toString() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startArray("decisions"); for (Decision d : decisions) { d.toXContent(builder, params); } - builder.endArray(); return builder; } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeBoolean(true); // flag indicating it is a multi decision + out.writeVInt(getDecisions().size()); + for (Decision d : getDecisions()) { + d.writeTo(out); + } + } } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDecider.java index b64b74cc9cb27..56663be1ef427 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDecider.java @@ -40,6 +40,9 @@ import org.elasticsearch.index.Index; import org.elasticsearch.index.shard.ShardId; +import static org.elasticsearch.cluster.routing.allocation.DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK_SETTING; +import static org.elasticsearch.cluster.routing.allocation.DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK_SETTING; + /** * The {@link DiskThresholdDecider} checks that the node a shard is potentially * being allocated to has enough disk space. 
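// Illustrative sketch (not part of this patch) of the Decision serialization shown above:
// now that Decision implements Writeable, a decision can be round-tripped through a stream;
// the leading boolean distinguishes Multi from Single and the explanation is flattened on write.
BytesStreamOutput out = new BytesStreamOutput();
Decision original = Decision.single(Decision.Type.THROTTLE, "throttle_label", "too many recoveries [%d]", 5);
original.writeTo(out); // writes: multi/single flag, type id, label, formatted explanation
Decision copy = Decision.readFrom(out.bytes().streamInput());
assert copy.type() == Decision.Type.THROTTLE;
assert "too many recoveries [5]".equals(copy.getExplanation());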
@@ -128,16 +131,19 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing shardRouting.active() == false && shardRouting.recoverySource().getType() == RecoverySource.Type.EMPTY_STORE; // checks for exact byte comparisons - if (freeBytes < diskThresholdSettings.getFreeBytesThresholdLow().bytes()) { + if (freeBytes < diskThresholdSettings.getFreeBytesThresholdLow().getBytes()) { if (skipLowTresholdChecks == false) { if (logger.isDebugEnabled()) { logger.debug("less than the required {} free bytes threshold ({} bytes free) on node {}, preventing allocation", diskThresholdSettings.getFreeBytesThresholdLow(), freeBytes, node.nodeId()); } return allocation.decision(Decision.NO, NAME, - "the node is above the low watermark and has less than required [%s] free, free: [%s]", - diskThresholdSettings.getFreeBytesThresholdLow(), new ByteSizeValue(freeBytes)); - } else if (freeBytes > diskThresholdSettings.getFreeBytesThresholdHigh().bytes()) { + "the node is above the low watermark cluster setting [%s=%s], having less than the minimum required [%s] free " + + "space, actual free: [%s]", + CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK_SETTING.getKey(), + diskThresholdSettings.getLowWatermarkRaw(), + diskThresholdSettings.getFreeBytesThresholdLow(), new ByteSizeValue(freeBytes)); + } else if (freeBytes > diskThresholdSettings.getFreeBytesThresholdHigh().getBytes()) { // Allow the shard to be allocated because it is primary that // has never been allocated if it's under the high watermark if (logger.isDebugEnabled()) { @@ -146,7 +152,8 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing diskThresholdSettings.getFreeBytesThresholdLow(), freeBytes, node.nodeId()); } return allocation.decision(Decision.YES, NAME, - "the node is above the low watermark, but this primary shard has never been allocated before"); + "the node is above the low watermark, but less than the high watermark, and this primary shard has " + + "never been allocated before"); } else { // Even though the primary has never been allocated, the node is // above the high watermark, so don't allow allocating the shard @@ -156,9 +163,11 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing diskThresholdSettings.getFreeBytesThresholdHigh(), freeBytes, node.nodeId()); } return allocation.decision(Decision.NO, NAME, - "the node is above the high watermark even though this shard has never been allocated " + - "and has less than required [%s] free on node, free: [%s]", - diskThresholdSettings.getFreeBytesThresholdHigh(), new ByteSizeValue(freeBytes)); + "the node is above the high watermark cluster setting [%s=%s], having less than the minimum required [%s] free " + + "space, actual free: [%s]", + CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK_SETTING.getKey(), + diskThresholdSettings.getHighWatermarkRaw(), + diskThresholdSettings.getFreeBytesThresholdHigh(), new ByteSizeValue(freeBytes)); } } @@ -172,8 +181,10 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing Strings.format1Decimals(usedDiskPercentage, "%"), node.nodeId()); } return allocation.decision(Decision.NO, NAME, - "the node is above the low watermark and has more than allowed [%s%%] used disk, free: [%s%%]", - usedDiskThresholdLow, freeDiskPercentage); + "the node is above the low watermark cluster setting [%s=%s], using more disk space than the maximum allowed " + + "[%s%%], actual free: [%s%%]", + CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK_SETTING.getKey(), + 
diskThresholdSettings.getLowWatermarkRaw(), usedDiskThresholdLow, freeDiskPercentage); } else if (freeDiskPercentage > diskThresholdSettings.getFreeDiskThresholdHigh()) { // Allow the shard to be allocated because it is primary that // has never been allocated if it's under the high watermark @@ -184,7 +195,8 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing Strings.format1Decimals(usedDiskPercentage, "%"), node.nodeId()); } return allocation.decision(Decision.YES, NAME, - "the node is above the low watermark, but this primary shard has never been allocated before"); + "the node is above the low watermark, but less than the high watermark, and this primary shard has " + + "never been allocated before"); } else { // Even though the primary has never been allocated, the node is // above the high watermark, so don't allow allocating the shard @@ -195,9 +207,10 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing Strings.format1Decimals(freeDiskPercentage, "%"), node.nodeId()); } return allocation.decision(Decision.NO, NAME, - "the node is above the high watermark even though this shard has never been allocated " + - "and has more than allowed [%s%%] used disk, free: [%s%%]", - usedDiskThresholdHigh, freeDiskPercentage); + "the node is above the high watermark cluster setting [%s=%s], using more disk space than the maximum allowed " + + "[%s%%], actual free: [%s%%]", + CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK_SETTING.getKey(), + diskThresholdSettings.getHighWatermarkRaw(), usedDiskThresholdHigh, freeDiskPercentage); } } @@ -205,14 +218,16 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing final long shardSize = getExpectedShardSize(shardRouting, allocation, 0); double freeSpaceAfterShard = freeDiskPercentageAfterShardAssigned(usage, shardSize); long freeBytesAfterShard = freeBytes - shardSize; - if (freeBytesAfterShard < diskThresholdSettings.getFreeBytesThresholdHigh().bytes()) { + if (freeBytesAfterShard < diskThresholdSettings.getFreeBytesThresholdHigh().getBytes()) { logger.warn("after allocating, node [{}] would have less than the required " + "{} free bytes threshold ({} bytes free), preventing allocation", node.nodeId(), diskThresholdSettings.getFreeBytesThresholdHigh(), freeBytesAfterShard); return allocation.decision(Decision.NO, NAME, - "after allocating the shard to this node, it would be above the high watermark " + - "and have less than required [%s] free, free: [%s]", - diskThresholdSettings.getFreeBytesThresholdLow(), new ByteSizeValue(freeBytesAfterShard)); + "allocating the shard to this node will bring the node above the high watermark cluster setting [%s=%s] " + + "and cause it to have less than the minimum required [%s] of free space (free bytes after shard added: [%s])", + CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK_SETTING.getKey(), + diskThresholdSettings.getHighWatermarkRaw(), + diskThresholdSettings.getFreeBytesThresholdHigh(), new ByteSizeValue(freeBytesAfterShard)); } if (freeSpaceAfterShard < diskThresholdSettings.getFreeDiskThresholdHigh()) { logger.warn("after allocating, node [{}] would have more than the allowed " + @@ -220,9 +235,10 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing node.nodeId(), Strings.format1Decimals(diskThresholdSettings.getFreeDiskThresholdHigh(), "%"), Strings.format1Decimals(freeSpaceAfterShard, "%")); return allocation.decision(Decision.NO, NAME, - "after allocating the shard to this node, it would 
be above the high watermark " + - "and have more than allowed [%s%%] used disk, free: [%s%%]", - usedDiskThresholdLow, freeSpaceAfterShard); + "allocating the shard to this node will bring the node above the high watermark cluster setting [%s=%s] " + + "and cause it to use more disk space than the maximum allowed [%s%%] (free space after shard added: [%s%%])", + CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK_SETTING.getKey(), + diskThresholdSettings.getHighWatermarkRaw(), usedDiskThresholdHigh, freeSpaceAfterShard); } return allocation.decision(Decision.YES, NAME, @@ -258,15 +274,17 @@ public Decision canRemain(ShardRouting shardRouting, RoutingNode node, RoutingAl return allocation.decision(Decision.YES, NAME, "this shard is not allocated on the most utilized disk and can remain"); } - if (freeBytes < diskThresholdSettings.getFreeBytesThresholdHigh().bytes()) { + if (freeBytes < diskThresholdSettings.getFreeBytesThresholdHigh().getBytes()) { if (logger.isDebugEnabled()) { logger.debug("less than the required {} free bytes threshold ({} bytes free) on node {}, shard cannot remain", diskThresholdSettings.getFreeBytesThresholdHigh(), freeBytes, node.nodeId()); } return allocation.decision(Decision.NO, NAME, - "after allocating this shard this node would be above the high watermark " + - "and there would be less than required [%s] free on node, free: [%s]", - diskThresholdSettings.getFreeBytesThresholdHigh(), new ByteSizeValue(freeBytes)); + "the shard cannot remain on this node because it is above the high watermark cluster setting [%s=%s] " + + "and there is less than the required [%s] free space on node, actual free: [%s]", + CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK_SETTING.getKey(), + diskThresholdSettings.getHighWatermarkRaw(), + diskThresholdSettings.getFreeBytesThresholdHigh(), new ByteSizeValue(freeBytes)); } if (freeDiskPercentage < diskThresholdSettings.getFreeDiskThresholdHigh()) { if (logger.isDebugEnabled()) { @@ -274,9 +292,11 @@ public Decision canRemain(ShardRouting shardRouting, RoutingNode node, RoutingAl diskThresholdSettings.getFreeDiskThresholdHigh(), freeDiskPercentage, node.nodeId()); } return allocation.decision(Decision.NO, NAME, - "after allocating this shard this node would be above the high watermark " + - "and there would be less than required [%s%%] free disk on node, free: [%s%%]", - diskThresholdSettings.getFreeDiskThresholdHigh(), freeDiskPercentage); + "the shard cannot remain on this node because it is above the high watermark cluster setting [%s=%s] " + + "and there is less than the required [%s%%] free disk on node, actual free: [%s%%]", + CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK_SETTING.getKey(), + diskThresholdSettings.getHighWatermarkRaw(), + diskThresholdSettings.getFreeDiskThresholdHigh(), freeDiskPercentage); } return allocation.decision(Decision.YES, NAME, diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/EnableAllocationDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/EnableAllocationDecider.java index 64bf594214252..7bb073a4c4561 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/EnableAllocationDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/EnableAllocationDecider.java @@ -98,33 +98,39 @@ public void setEnableAllocation(Allocation enableAllocation) { @Override public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) { if 
(allocation.ignoreDisable()) { - return allocation.decision(Decision.YES, NAME, "allocation is explicitly ignoring any disabling of allocation"); + return allocation.decision(Decision.YES, NAME, + "explicitly ignoring any disabling of allocation due to manual allocation commands via the reroute API"); } final IndexMetaData indexMetaData = allocation.metaData().getIndexSafe(shardRouting.index()); final Allocation enable; + final boolean usedIndexSetting; if (INDEX_ROUTING_ALLOCATION_ENABLE_SETTING.exists(indexMetaData.getSettings())) { enable = INDEX_ROUTING_ALLOCATION_ENABLE_SETTING.get(indexMetaData.getSettings()); + usedIndexSetting = true; } else { enable = this.enableAllocation; + usedIndexSetting = false; } switch (enable) { case ALL: return allocation.decision(Decision.YES, NAME, "all allocations are allowed"); case NONE: - return allocation.decision(Decision.NO, NAME, "no allocations are allowed"); + return allocation.decision(Decision.NO, NAME, "no allocations are allowed due to %s", setting(enable, usedIndexSetting)); case NEW_PRIMARIES: if (shardRouting.primary() && shardRouting.active() == false && shardRouting.recoverySource().getType() != RecoverySource.Type.EXISTING_STORE) { return allocation.decision(Decision.YES, NAME, "new primary allocations are allowed"); } else { - return allocation.decision(Decision.NO, NAME, "non-new primary allocations are forbidden"); + return allocation.decision(Decision.NO, NAME, "non-new primary allocations are forbidden due to %s", + setting(enable, usedIndexSetting)); } case PRIMARIES: if (shardRouting.primary()) { return allocation.decision(Decision.YES, NAME, "primary allocations are allowed"); } else { - return allocation.decision(Decision.NO, NAME, "replica allocations are forbidden"); + return allocation.decision(Decision.NO, NAME, "replica allocations are forbidden due to %s", + setting(enable, usedIndexSetting)); } default: throw new IllegalStateException("Unknown allocation option"); @@ -139,33 +145,64 @@ public Decision canRebalance(ShardRouting shardRouting, RoutingAllocation alloca Settings indexSettings = allocation.metaData().getIndexSafe(shardRouting.index()).getSettings(); final Rebalance enable; + final boolean usedIndexSetting; if (INDEX_ROUTING_REBALANCE_ENABLE_SETTING.exists(indexSettings)) { enable = INDEX_ROUTING_REBALANCE_ENABLE_SETTING.get(indexSettings); + usedIndexSetting = true; } else { enable = this.enableRebalance; + usedIndexSetting = false; } switch (enable) { case ALL: return allocation.decision(Decision.YES, NAME, "all rebalancing is allowed"); case NONE: - return allocation.decision(Decision.NO, NAME, "no rebalancing is allowed"); + return allocation.decision(Decision.NO, NAME, "no rebalancing is allowed due to %s", setting(enable, usedIndexSetting)); case PRIMARIES: if (shardRouting.primary()) { return allocation.decision(Decision.YES, NAME, "primary rebalancing is allowed"); } else { - return allocation.decision(Decision.NO, NAME, "replica rebalancing is forbidden"); + return allocation.decision(Decision.NO, NAME, "replica rebalancing is forbidden due to %s", + setting(enable, usedIndexSetting)); } case REPLICAS: if (shardRouting.primary() == false) { return allocation.decision(Decision.YES, NAME, "replica rebalancing is allowed"); } else { - return allocation.decision(Decision.NO, NAME, "primary rebalancing is forbidden"); + return allocation.decision(Decision.NO, NAME, "primary rebalancing is forbidden due to %s", + setting(enable, usedIndexSetting)); } default: throw new IllegalStateException("Unknown 
rebalance option"); } } + private static String setting(Allocation allocation, boolean usedIndexSetting) { + StringBuilder buf = new StringBuilder(); + if (usedIndexSetting) { + buf.append("index setting ["); + buf.append(INDEX_ROUTING_ALLOCATION_ENABLE_SETTING.getKey()); + } else { + buf.append("cluster setting ["); + buf.append(CLUSTER_ROUTING_ALLOCATION_ENABLE_SETTING.getKey()); + } + buf.append("=").append(allocation.toString().toLowerCase(Locale.ROOT)).append("]"); + return buf.toString(); + } + + private static String setting(Rebalance rebalance, boolean usedIndexSetting) { + StringBuilder buf = new StringBuilder(); + if (usedIndexSetting) { + buf.append("index setting ["); + buf.append(INDEX_ROUTING_REBALANCE_ENABLE_SETTING.getKey()); + } else { + buf.append("cluster setting ["); + buf.append(CLUSTER_ROUTING_REBALANCE_ENABLE_SETTING.getKey()); + } + buf.append("=").append(rebalance.toString().toLowerCase(Locale.ROOT)).append("]"); + return buf.toString(); + } + /** * Allocation values or rather their string representation to be used used with * {@link EnableAllocationDecider#CLUSTER_ROUTING_ALLOCATION_ENABLE_SETTING} / diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDecider.java index 4071007c79123..933b0a829d569 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDecider.java @@ -30,6 +30,9 @@ import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; +import java.util.EnumSet; + +import static org.elasticsearch.cluster.node.DiscoveryNodeFilters.IP_VALIDATOR; import static org.elasticsearch.cluster.node.DiscoveryNodeFilters.OpType.AND; import static org.elasticsearch.cluster.node.DiscoveryNodeFilters.OpType.OR; @@ -64,12 +67,26 @@ public class FilterAllocationDecider extends AllocationDecider { public static final String NAME = "filter"; + private static final String CLUSTER_ROUTING_REQUIRE_GROUP_PREFIX = "cluster.routing.allocation.require"; + private static final String CLUSTER_ROUTING_INCLUDE_GROUP_PREFIX = "cluster.routing.allocation.include"; + private static final String CLUSTER_ROUTING_EXCLUDE_GROUP_PREFIX = "cluster.routing.allocation.exclude"; public static final Setting CLUSTER_ROUTING_REQUIRE_GROUP_SETTING = - Setting.groupSetting("cluster.routing.allocation.require.", Property.Dynamic, Property.NodeScope); + Setting.groupSetting(CLUSTER_ROUTING_REQUIRE_GROUP_PREFIX + ".", IP_VALIDATOR, Property.Dynamic, Property.NodeScope); public static final Setting CLUSTER_ROUTING_INCLUDE_GROUP_SETTING = - Setting.groupSetting("cluster.routing.allocation.include.", Property.Dynamic, Property.NodeScope); + Setting.groupSetting(CLUSTER_ROUTING_INCLUDE_GROUP_PREFIX + ".", IP_VALIDATOR, Property.Dynamic, Property.NodeScope); public static final Setting CLUSTER_ROUTING_EXCLUDE_GROUP_SETTING = - Setting.groupSetting("cluster.routing.allocation.exclude.", Property.Dynamic, Property.NodeScope); + Setting.groupSetting(CLUSTER_ROUTING_EXCLUDE_GROUP_PREFIX + ".", IP_VALIDATOR, Property.Dynamic, Property.NodeScope); + + /** + * The set of {@link RecoverySource.Type} values for which the + * {@link IndexMetaData#INDEX_ROUTING_INITIAL_RECOVERY_GROUP_SETTING} should apply. 
+ * Note that we do not include the {@link RecoverySource.Type#SNAPSHOT} type here + * because if the snapshot is restored to a different cluster that does not contain + * the initial recovery node id, or to the same cluster where the initial recovery node + * id has been decommissioned, then the primary shards will never be allocated. + */ + static EnumSet INITIAL_RECOVERY_TYPES = + EnumSet.of(RecoverySource.Type.EMPTY_STORE, RecoverySource.Type.LOCAL_SHARDS); private volatile DiscoveryNodeFilters clusterRequireFilters; private volatile DiscoveryNodeFilters clusterIncludeFilters; @@ -93,11 +110,13 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing // this is a setting that can only be set within the system! IndexMetaData indexMd = allocation.metaData().getIndexSafe(shardRouting.index()); DiscoveryNodeFilters initialRecoveryFilters = indexMd.getInitialRecoveryFilters(); - if (shardRouting.recoverySource().getType() != RecoverySource.Type.EXISTING_STORE && - initialRecoveryFilters != null && + if (initialRecoveryFilters != null && + INITIAL_RECOVERY_TYPES.contains(shardRouting.recoverySource().getType()) && initialRecoveryFilters.match(node.node()) == false) { - return allocation.decision(Decision.NO, NAME, "node does not match index initial recovery filters [%s]", - indexMd.includeFilters()); + String explanation = (shardRouting.recoverySource().getType() == RecoverySource.Type.LOCAL_SHARDS) ? + "initial allocation of the shrunken index is only allowed on nodes [%s] that hold a copy of every shard in the index" : + "initial allocation of the index is only allowed on nodes [%s]"; + return allocation.decision(Decision.NO, NAME, explanation, initialRecoveryFilters); } } return shouldFilter(shardRouting, node, allocation); @@ -136,17 +155,20 @@ private Decision shouldFilter(IndexMetaData indexMd, RoutingNode node, RoutingAl private Decision shouldIndexFilter(IndexMetaData indexMd, RoutingNode node, RoutingAllocation allocation) { if (indexMd.requireFilters() != null) { if (!indexMd.requireFilters().match(node.node())) { - return allocation.decision(Decision.NO, NAME, "node does not match index required filters [%s]", indexMd.requireFilters()); + return allocation.decision(Decision.NO, NAME, "node does not match index setting [%s] filters [%s]", + IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_PREFIX, indexMd.requireFilters()); } } if (indexMd.includeFilters() != null) { if (!indexMd.includeFilters().match(node.node())) { - return allocation.decision(Decision.NO, NAME, "node does not match index include filters [%s]", indexMd.includeFilters()); + return allocation.decision(Decision.NO, NAME, "node does not match index setting [%s] filters [%s]", + IndexMetaData.INDEX_ROUTING_INCLUDE_GROUP_PREFIX, indexMd.includeFilters()); } } if (indexMd.excludeFilters() != null) { if (indexMd.excludeFilters().match(node.node())) { - return allocation.decision(Decision.NO, NAME, "node matches index exclude filters [%s]", indexMd.excludeFilters()); + return allocation.decision(Decision.NO, NAME, "node matches index setting [%s] filters [%s]", + IndexMetaData.INDEX_ROUTING_EXCLUDE_GROUP_SETTING.getKey(), indexMd.excludeFilters()); } } return null; @@ -155,17 +177,20 @@ private Decision shouldIndexFilter(IndexMetaData indexMd, RoutingNode node, Rout private Decision shouldClusterFilter(RoutingNode node, RoutingAllocation allocation) { if (clusterRequireFilters != null) { if (!clusterRequireFilters.match(node.node())) { - return allocation.decision(Decision.NO, NAME, "node does not 
match global required filters [%s]", clusterRequireFilters); + return allocation.decision(Decision.NO, NAME, "node does not match cluster setting [%s] filters [%s]", + CLUSTER_ROUTING_REQUIRE_GROUP_PREFIX, clusterRequireFilters); } } if (clusterIncludeFilters != null) { if (!clusterIncludeFilters.match(node.node())) { - return allocation.decision(Decision.NO, NAME, "node does not match global include filters [%s]", clusterIncludeFilters); + return allocation.decision(Decision.NO, NAME, "node does not match cluster setting [%s] filters [%s]", + CLUSTER_ROUTING_INCLUDE_GROUP_PREFIX, clusterIncludeFilters); } } if (clusterExcludeFilters != null) { if (clusterExcludeFilters.match(node.node())) { - return allocation.decision(Decision.NO, NAME, "node matches global exclude filters [%s]", clusterExcludeFilters); + return allocation.decision(Decision.NO, NAME, "node matches cluster setting [%s] filters [%s]", + CLUSTER_ROUTING_EXCLUDE_GROUP_PREFIX, clusterExcludeFilters); } } return null; diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/MaxRetryAllocationDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/MaxRetryAllocationDecider.java index 395d347232939..59a836abece37 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/MaxRetryAllocationDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/MaxRetryAllocationDecider.java @@ -54,7 +54,8 @@ public MaxRetryAllocationDecider(Settings settings) { @Override public Decision canAllocate(ShardRouting shardRouting, RoutingAllocation allocation) { - UnassignedInfo unassignedInfo = shardRouting.unassignedInfo(); + final UnassignedInfo unassignedInfo = shardRouting.unassignedInfo(); + final Decision decision; if (unassignedInfo != null && unassignedInfo.getNumFailedAllocations() > 0) { final IndexMetaData indexMetaData = allocation.metaData().getIndexSafe(shardRouting.index()); final int maxRetry = SETTING_ALLOCATION_MAX_RETRY.get(indexMetaData.getSettings()); @@ -62,16 +63,21 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingAllocation allocat // if we are called via the _reroute API we ignore the failure counter and try to allocate // this improves the usability since people don't need to raise the limits to issue retries since a simple _reroute call is // enough to manually retry. - return allocation.decision(Decision.YES, NAME, "shard has already failed allocating [" + unassignedInfo.getNumFailedAllocations() + "] times vs. 
[" + maxRetry + "] retries allowed " - + unassignedInfo.toString() + " - manually call [/_cluster/reroute?retry_failed=true] to retry"); + decision = allocation.decision(Decision.NO, NAME, "shard has exceeded the maximum number of retries [%d] on " + + "failed allocation attempts - manually call [/_cluster/reroute?retry_failed=true] to retry, [%s]", + maxRetry, unassignedInfo.toString()); + } else { + decision = allocation.decision(Decision.YES, NAME, "shard has failed allocating [%d] times but [%d] retries are allowed", + unassignedInfo.getNumFailedAllocations(), maxRetry); } + } else { + decision = allocation.decision(Decision.YES, NAME, "shard has no previous failures"); } - return allocation.decision(Decision.YES, NAME, "shard has no previous failures"); + return decision; } @Override diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/RebalanceOnlyWhenActiveAllocationDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/RebalanceOnlyWhenActiveAllocationDecider.java index d8042f18a2731..c4cd2ecf50dda 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/RebalanceOnlyWhenActiveAllocationDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/RebalanceOnlyWhenActiveAllocationDecider.java @@ -37,8 +37,8 @@ public RebalanceOnlyWhenActiveAllocationDecider(Settings settings) { @Override public Decision canRebalance(ShardRouting shardRouting, RoutingAllocation allocation) { if (!allocation.routingNodes().allReplicasActive(shardRouting.shardId(), allocation.metaData())) { - return allocation.decision(Decision.NO, NAME, "rebalancing can not occur if not all replicas are active in the cluster"); + return allocation.decision(Decision.NO, NAME, "rebalancing is not allowed until all replicas in the cluster are active"); } - return allocation.decision(Decision.YES, NAME, "all replicas are active in the cluster, rebalancing can occur"); + return allocation.decision(Decision.YES, NAME, "rebalancing is allowed as all replicas are active in the cluster"); } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/SameShardAllocationDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/SameShardAllocationDecider.java index 3f2921dfcdcf4..1bf776422521c 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/SameShardAllocationDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/SameShardAllocationDecider.java @@ -23,7 +23,9 @@ import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.common.Strings; +import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Setting; +import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; /** @@ -46,53 +48,90 @@ public class SameShardAllocationDecider extends AllocationDecider { public static final String NAME = "same_shard"; public static final Setting CLUSTER_ROUTING_ALLOCATION_SAME_HOST_SETTING = - Setting.boolSetting("cluster.routing.allocation.same_shard.host", false, Setting.Property.NodeScope); + Setting.boolSetting("cluster.routing.allocation.same_shard.host", false, Property.Dynamic, Property.NodeScope); - private final boolean sameHost; + private volatile boolean sameHost; - public SameShardAllocationDecider(Settings 
settings) { + public SameShardAllocationDecider(Settings settings, ClusterSettings clusterSettings) { super(settings); - this.sameHost = CLUSTER_ROUTING_ALLOCATION_SAME_HOST_SETTING.get(settings); + clusterSettings.addSettingsUpdateConsumer(CLUSTER_ROUTING_ALLOCATION_SAME_HOST_SETTING, this::setSameHost); + } + + /** + * Sets the same host setting. {@code true} if allocating the same shard copy to the same host + * should not be allowed, even when multiple nodes are being run on the same host. {@code false} + * otherwise. + */ + private void setSameHost(boolean sameHost) { + this.sameHost = sameHost; } @Override public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) { Iterable assignedShards = allocation.routingNodes().assignedShards(shardRouting.shardId()); - for (ShardRouting assignedShard : assignedShards) { - if (node.nodeId().equals(assignedShard.currentNodeId())) { - return allocation.decision(Decision.NO, NAME, - "the shard cannot be allocated on the same node id [%s] on which it already exists", node.nodeId()); - } + Decision decision = decideSameNode(shardRouting, node, allocation, assignedShards); + if (decision.type() == Decision.Type.NO || sameHost == false) { + // if its already a NO decision looking at the node, or we aren't configured to look at the host, return the decision + return decision; } - if (sameHost) { - if (node.node() != null) { - for (RoutingNode checkNode : allocation.routingNodes()) { - if (checkNode.node() == null) { - continue; + if (node.node() != null) { + for (RoutingNode checkNode : allocation.routingNodes()) { + if (checkNode.node() == null) { + continue; + } + // check if its on the same host as the one we want to allocate to + boolean checkNodeOnSameHostName = false; + boolean checkNodeOnSameHostAddress = false; + if (Strings.hasLength(checkNode.node().getHostAddress()) && Strings.hasLength(node.node().getHostAddress())) { + if (checkNode.node().getHostAddress().equals(node.node().getHostAddress())) { + checkNodeOnSameHostAddress = true; } - // check if its on the same host as the one we want to allocate to - boolean checkNodeOnSameHost = false; - if (Strings.hasLength(checkNode.node().getHostAddress()) && Strings.hasLength(node.node().getHostAddress())) { - if (checkNode.node().getHostAddress().equals(node.node().getHostAddress())) { - checkNodeOnSameHost = true; - } - } else if (Strings.hasLength(checkNode.node().getHostName()) && Strings.hasLength(node.node().getHostName())) { - if (checkNode.node().getHostName().equals(node.node().getHostName())) { - checkNodeOnSameHost = true; - } + } else if (Strings.hasLength(checkNode.node().getHostName()) && Strings.hasLength(node.node().getHostName())) { + if (checkNode.node().getHostName().equals(node.node().getHostName())) { + checkNodeOnSameHostName = true; } - if (checkNodeOnSameHost) { - for (ShardRouting assignedShard : assignedShards) { - if (checkNode.nodeId().equals(assignedShard.currentNodeId())) { - return allocation.decision(Decision.NO, NAME, - "shard cannot be allocated on the same host [%s] on which it already exists", node.nodeId()); - } + } + if (checkNodeOnSameHostAddress || checkNodeOnSameHostName) { + for (ShardRouting assignedShard : assignedShards) { + if (checkNode.nodeId().equals(assignedShard.currentNodeId())) { + String hostType = checkNodeOnSameHostAddress ? "address" : "name"; + String host = checkNodeOnSameHostAddress ? 
node.node().getHostAddress() : node.node().getHostName(); + return allocation.decision(Decision.NO, NAME, + "the shard cannot be allocated on host %s [%s], where it already exists on node [%s]; " + + "set cluster setting [%s] to false to allow multiple nodes on the same host to hold the same " + + "shard copies", + hostType, host, node.nodeId(), CLUSTER_ROUTING_ALLOCATION_SAME_HOST_SETTING.getKey()); } } } } } - return allocation.decision(Decision.YES, NAME, "shard is not allocated to same node or host"); + return allocation.decision(Decision.YES, NAME, "the shard does not exist on the same host"); + } + + @Override + public Decision canForceAllocatePrimary(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) { + assert shardRouting.primary() : "must not call force allocate on a non-primary shard"; + Iterable assignedShards = allocation.routingNodes().assignedShards(shardRouting.shardId()); + return decideSameNode(shardRouting, node, allocation, assignedShards); + } + + private Decision decideSameNode(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation, + Iterable assignedShards) { + for (ShardRouting assignedShard : assignedShards) { + if (node.nodeId().equals(assignedShard.currentNodeId())) { + if (assignedShard.isSameAllocation(shardRouting)) { + return allocation.decision(Decision.NO, NAME, + "the shard cannot be allocated to the node on which it already exists [%s]", + shardRouting.toString()); + } else { + return allocation.decision(Decision.NO, NAME, + "the shard cannot be allocated to the same node on which a copy of the shard already exists [%s]", + assignedShard.toString()); + } + } + } + return allocation.decision(Decision.YES, NAME, "the shard does not exist on the same node"); } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ShardsLimitAllocationDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ShardsLimitAllocationDecider.java index aa4fe3d593df8..2118d37fe4717 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ShardsLimitAllocationDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ShardsLimitAllocationDecider.java @@ -29,6 +29,8 @@ import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; +import java.util.function.BiPredicate; + /** * This {@link AllocationDecider} limits the number of shards per node on a per * index or node-wide basis. The allocator prevents a single node to hold more @@ -83,45 +85,17 @@ private void setClusterShardLimit(int clusterShardLimit) { @Override public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) { - IndexMetaData indexMd = allocation.metaData().getIndexSafe(shardRouting.index()); - final int indexShardLimit = INDEX_TOTAL_SHARDS_PER_NODE_SETTING.get(indexMd.getSettings(), settings); - // Capture the limit here in case it changes during this method's - // execution - final int clusterShardLimit = this.clusterShardLimit; - - if (indexShardLimit <= 0 && clusterShardLimit <= 0) { - return allocation.decision(Decision.YES, NAME, "total shard limits are disabled: [index: %d, cluster: %d] <= 0", - indexShardLimit, clusterShardLimit); - } - - int indexShardCount = 0; - int nodeShardCount = 0; - for (ShardRouting nodeShard : node) { - // don't count relocating shards... 
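// Illustrative sketch (not part of this patch), relating to the SameShardAllocationDecider
// change above: since cluster.routing.allocation.same_shard.host is now registered as a dynamic
// setting with an update consumer, a runtime settings update reaches setSameHost() directly.
ClusterSettings clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS);
SameShardAllocationDecider decider = new SameShardAllocationDecider(Settings.EMPTY, clusterSettings);
clusterSettings.applySettings(Settings.builder()
    .put(SameShardAllocationDecider.CLUSTER_ROUTING_ALLOCATION_SAME_HOST_SETTING.getKey(), true)
    .build()); // the registered consumer flips the decider's sameHost flag without a node restart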
- if (nodeShard.relocating()) { - continue; - } - nodeShardCount++; - if (nodeShard.index().equals(shardRouting.index())) { - indexShardCount++; - } - } - if (clusterShardLimit > 0 && nodeShardCount >= clusterShardLimit) { - return allocation.decision(Decision.NO, NAME, "too many shards for this node [%d], cluster-level limit per node: [%d]", - nodeShardCount, clusterShardLimit); - } - if (indexShardLimit > 0 && indexShardCount >= indexShardLimit) { - return allocation.decision(Decision.NO, NAME, - "too many shards for this index [%s] on node [%d], index-level limit per node: [%d]", - shardRouting.index(), indexShardCount, indexShardLimit); - } - return allocation.decision(Decision.YES, NAME, - "the shard count is under index limit [%d] and cluster level node limit [%d] of total shards per node", - indexShardLimit, clusterShardLimit); + return doDecide(shardRouting, node, allocation, (count, limit) -> count >= limit); } @Override public Decision canRemain(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) { + return doDecide(shardRouting, node, allocation, (count, limit) -> count > limit); + + } + + private Decision doDecide(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation, + BiPredicate decider) { IndexMetaData indexMd = allocation.metaData().getIndexSafe(shardRouting.index()); final int indexShardLimit = INDEX_TOTAL_SHARDS_PER_NODE_SETTING.get(indexMd.getSettings(), settings); // Capture the limit here in case it changes during this method's @@ -145,20 +119,20 @@ public Decision canRemain(ShardRouting shardRouting, RoutingNode node, RoutingAl indexShardCount++; } } - // Subtle difference between the `canAllocate` and `canRemain` is that - // this checks > while canAllocate checks >= - if (clusterShardLimit > 0 && nodeShardCount > clusterShardLimit) { - return allocation.decision(Decision.NO, NAME, "too many shards for this node [%d], cluster-level limit per node: [%d]", - nodeShardCount, clusterShardLimit); + + if (clusterShardLimit > 0 && decider.test(nodeShardCount, clusterShardLimit)) { + return allocation.decision(Decision.NO, NAME, + "too many shards [%d] allocated to this node, cluster setting [%s=%d]", + nodeShardCount, CLUSTER_TOTAL_SHARDS_PER_NODE_SETTING.getKey(), clusterShardLimit); } - if (indexShardLimit > 0 && indexShardCount > indexShardLimit) { + if (indexShardLimit > 0 && decider.test(indexShardCount, indexShardLimit)) { return allocation.decision(Decision.NO, NAME, - "too many shards for this index [%s] on node [%d], index-level limit per node: [%d]", - shardRouting.index(), indexShardCount, indexShardLimit); + "too many shards [%d] allocated to this node for index [%s], index setting [%s=%d]", + indexShardCount, shardRouting.getIndexName(), INDEX_TOTAL_SHARDS_PER_NODE_SETTING.getKey(), indexShardLimit); } return allocation.decision(Decision.YES, NAME, - "the shard count is under index limit [%d] and cluster level node limit [%d] of total shards per node", - indexShardLimit, clusterShardLimit); + "the shard count [%d] for this node is under the index limit [%d] and cluster level node limit [%d]", + nodeShardCount, indexShardLimit, clusterShardLimit); } @Override @@ -182,10 +156,12 @@ public Decision canAllocate(RoutingNode node, RoutingAllocation allocation) { nodeShardCount++; } if (clusterShardLimit >= 0 && nodeShardCount >= clusterShardLimit) { - return allocation.decision(Decision.NO, NAME, "too many shards for this node [%d], cluster-level limit per node: [%d]", - nodeShardCount, clusterShardLimit); + return 
allocation.decision(Decision.NO, NAME, + "too many shards [%d] allocated to this node, cluster setting [%s=%d]", + nodeShardCount, CLUSTER_TOTAL_SHARDS_PER_NODE_SETTING.getKey(), clusterShardLimit); } - return allocation.decision(Decision.YES, NAME, "the shard count is under node limit [%d] of total shards per node", - clusterShardLimit); + return allocation.decision(Decision.YES, NAME, + "the shard count [%d] for this node is under the cluster level node limit [%d]", + nodeShardCount, clusterShardLimit); } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/SnapshotInProgressAllocationDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/SnapshotInProgressAllocationDecider.java index bd9bf35a68ed5..d7f2c04f26d71 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/SnapshotInProgressAllocationDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/SnapshotInProgressAllocationDecider.java @@ -107,15 +107,16 @@ private Decision canMove(ShardRouting shardRouting, RoutingAllocation allocation if (shardSnapshotStatus != null && !shardSnapshotStatus.state().completed() && shardSnapshotStatus.nodeId() != null && shardSnapshotStatus.nodeId().equals(shardRouting.currentNodeId())) { if (logger.isTraceEnabled()) { - logger.trace("Preventing snapshotted shard [{}] to be moved from node [{}]", + logger.trace("Preventing snapshotted shard [{}] from being moved away from node [{}]", shardRouting.shardId(), shardSnapshotStatus.nodeId()); } - return allocation.decision(Decision.NO, NAME, "snapshot for shard [%s] is currently running on node [%s]", - shardRouting.shardId(), shardSnapshotStatus.nodeId()); + return allocation.decision(Decision.THROTTLE, NAME, + "waiting for snapshotting of shard [%s] to complete on this node [%s]", + shardRouting.shardId(), shardSnapshotStatus.nodeId()); } } } - return allocation.decision(Decision.YES, NAME, "the shard is not primary or relocation is disabled"); + return allocation.decision(Decision.YES, NAME, "the shard is not being snapshotted"); } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ThrottlingAllocationDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ThrottlingAllocationDecider.java index df2e1d122347a..721de71435def 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ThrottlingAllocationDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ThrottlingAllocationDecider.java @@ -126,8 +126,10 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing } if (primariesInRecovery >= primariesInitialRecoveries) { // TODO: Should index creation not be throttled for primary shards? 
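// Illustrative sketch (not part of this patch), relating to the ShardsLimitAllocationDecider
// refactor above: the BiPredicate handed to doDecide() is the only difference between the two
// checks - canAllocate rejects once a node is at the limit (>=), canRemain only once over it (>).
java.util.function.BiPredicate<Integer, Integer> allocateCheck = (count, limit) -> count >= limit;
java.util.function.BiPredicate<Integer, Integer> remainCheck = (count, limit) -> count > limit;
assert allocateCheck.test(2, 2);          // a node already holding 2 shards cannot take another when the limit is 2
assert remainCheck.test(2, 2) == false;   // but a shard already on that node is allowed to remain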
- return allocation.decision(THROTTLE, NAME, "too many primaries are currently recovering [%d], limit: [%d]", - primariesInRecovery, primariesInitialRecoveries); + return allocation.decision(THROTTLE, NAME, + "reached the limit of ongoing initial primary recoveries [%d], cluster setting [%s=%d]", + primariesInRecovery, CLUSTER_ROUTING_ALLOCATION_NODE_INITIAL_PRIMARIES_RECOVERIES_SETTING.getKey(), + primariesInitialRecoveries); } else { return allocation.decision(YES, NAME, "below primary recovery limit of [%d]", primariesInitialRecoveries); } @@ -138,8 +140,11 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing // Allocating a shard to this node will increase the incoming recoveries int currentInRecoveries = allocation.routingNodes().getIncomingRecoveries(node.nodeId()); if (currentInRecoveries >= concurrentIncomingRecoveries) { - return allocation.decision(THROTTLE, NAME, "too many incoming shards are currently recovering [%d], limit: [%d]", - currentInRecoveries, concurrentIncomingRecoveries); + return allocation.decision(THROTTLE, NAME, + "reached the limit of incoming shard recoveries [%d], cluster setting [%s=%d] (can also be set via [%s])", + currentInRecoveries, CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_INCOMING_RECOVERIES_SETTING.getKey(), + concurrentIncomingRecoveries, + CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES_SETTING.getKey()); } else { // search for corresponding recovery source (= primary shard) and check number of outgoing recoveries on that node ShardRouting primaryShard = allocation.routingNodes().activePrimary(shardRouting.shardId()); @@ -148,8 +153,13 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing } int primaryNodeOutRecoveries = allocation.routingNodes().getOutgoingRecoveries(primaryShard.currentNodeId()); if (primaryNodeOutRecoveries >= concurrentOutgoingRecoveries) { - return allocation.decision(THROTTLE, NAME, "too many outgoing shards are currently recovering [%d], limit: [%d]", - primaryNodeOutRecoveries, concurrentOutgoingRecoveries); + return allocation.decision(THROTTLE, NAME, + "reached the limit of outgoing shard recoveries [%d] on the node [%s] which holds the primary, " + + "cluster setting [%s=%d] (can also be set via [%s])", + primaryNodeOutRecoveries, node.nodeId(), + CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_OUTGOING_RECOVERIES_SETTING.getKey(), + concurrentOutgoingRecoveries, + CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES_SETTING.getKey()); } else { return allocation.decision(YES, NAME, "below shard recovery limit of outgoing: [%d < %d] incoming: [%d < %d]", primaryNodeOutRecoveries, diff --git a/core/src/main/java/org/elasticsearch/cluster/service/ClusterService.java b/core/src/main/java/org/elasticsearch/cluster/service/ClusterService.java index e981313f4024e..c1fdbc4706f30 100644 --- a/core/src/main/java/org/elasticsearch/cluster/service/ClusterService.java +++ b/core/src/main/java/org/elasticsearch/cluster/service/ClusterService.java @@ -22,16 +22,19 @@ import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; +import org.elasticsearch.Assertions; import org.elasticsearch.cluster.AckedClusterStateTaskListener; import org.elasticsearch.cluster.ClusterChangedEvent; import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.ClusterState.Builder; +import org.elasticsearch.cluster.ClusterStateApplier; import 
org.elasticsearch.cluster.ClusterStateListener; +import org.elasticsearch.cluster.ClusterStateObserver; import org.elasticsearch.cluster.ClusterStateTaskConfig; import org.elasticsearch.cluster.ClusterStateTaskExecutor; +import org.elasticsearch.cluster.ClusterStateTaskExecutor.ClusterTasksResult; import org.elasticsearch.cluster.ClusterStateTaskListener; -import org.elasticsearch.cluster.ClusterStateUpdateTask; import org.elasticsearch.cluster.LocalNodeMasterListener; import org.elasticsearch.cluster.NodeConnectionsService; import org.elasticsearch.cluster.TimeoutClusterStateListener; @@ -59,30 +62,32 @@ import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException; import org.elasticsearch.common.util.concurrent.FutureUtils; import org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor; -import org.elasticsearch.common.util.concurrent.PrioritizedRunnable; import org.elasticsearch.common.util.iterable.Iterables; import org.elasticsearch.discovery.Discovery; +import org.elasticsearch.discovery.DiscoverySettings; import org.elasticsearch.threadpool.ThreadPool; import java.util.ArrayList; +import java.util.Arrays; import java.util.Collection; import java.util.Collections; -import java.util.HashMap; -import java.util.IdentityHashMap; import java.util.Iterator; import java.util.List; import java.util.Locale; import java.util.Map; import java.util.Objects; import java.util.Queue; +import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.CopyOnWriteArrayList; import java.util.concurrent.Executor; import java.util.concurrent.Future; import java.util.concurrent.ScheduledFuture; import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicReference; import java.util.function.BiConsumer; +import java.util.function.UnaryOperator; import java.util.stream.Collectors; +import java.util.stream.Stream; import static org.elasticsearch.common.util.concurrent.EsExecutors.daemonThreadFactory; @@ -95,6 +100,7 @@ public class ClusterService extends AbstractLifecycleComponent { public static final String UPDATE_THREAD_NAME = "clusterService#updateTask"; private final ThreadPool threadPool; private final ClusterName clusterName; + private final Supplier localNodeSupplier; private BiConsumer clusterStatePublisher; @@ -104,39 +110,44 @@ public class ClusterService extends AbstractLifecycleComponent { private TimeValue slowTaskLoggingThreshold; - private volatile PrioritizedEsThreadPoolExecutor updateTasksExecutor; + private volatile PrioritizedEsThreadPoolExecutor threadPoolExecutor; + private volatile ClusterServiceTaskBatcher taskBatcher; /** * Those 3 state listeners are changing infrequently - CopyOnWriteArrayList is just fine */ - private final Collection priorityClusterStateListeners = new CopyOnWriteArrayList<>(); + private final Collection highPriorityStateAppliers = new CopyOnWriteArrayList<>(); + private final Collection normalPriorityStateAppliers = new CopyOnWriteArrayList<>(); + private final Collection lowPriorityStateAppliers = new CopyOnWriteArrayList<>(); + private final Iterable clusterStateAppliers = Iterables.concat(highPriorityStateAppliers, + normalPriorityStateAppliers, lowPriorityStateAppliers); + private final Collection clusterStateListeners = new CopyOnWriteArrayList<>(); - private final Collection lastClusterStateListeners = new CopyOnWriteArrayList<>(); - private final Map> updateTasksPerExecutor = new HashMap<>(); - // TODO this is rather frequently changing I guess 
a Synced Set would be better here and a dedicated remove API - private final Collection postAppliedListeners = new CopyOnWriteArrayList<>(); - private final Iterable preAppliedListeners = Iterables.concat(priorityClusterStateListeners, - clusterStateListeners, lastClusterStateListeners); + private final Collection timeoutClusterStateListeners = + Collections.newSetFromMap(new ConcurrentHashMap()); private final LocalNodeMasterListeners localNodeMasterListeners; private final Queue onGoingTimeouts = ConcurrentCollections.newQueue(); - private volatile ClusterState clusterState; + private final AtomicReference state; private final ClusterBlocks.Builder initialBlocks; private NodeConnectionsService nodeConnectionsService; + private DiscoverySettings discoverySettings; + public ClusterService(Settings settings, - ClusterSettings clusterSettings, ThreadPool threadPool) { + ClusterSettings clusterSettings, ThreadPool threadPool, Supplier localNodeSupplier) { super(settings); + this.localNodeSupplier = localNodeSupplier; this.operationRouting = new OperationRouting(settings, clusterSettings); this.threadPool = threadPool; this.clusterSettings = clusterSettings; this.clusterName = ClusterName.CLUSTER_NAME_SETTING.get(settings); // will be replaced on doStart. - this.clusterState = ClusterState.builder(clusterName).build(); + this.state = new AtomicReference<>(ClusterState.builder(clusterName).build()); this.clusterSettings.addSettingsUpdateConsumer(CLUSTER_SERVICE_SLOW_TASK_LOGGING_THRESHOLD_SETTING, this::setSlowTaskLoggingThreshold); @@ -156,10 +167,8 @@ public synchronized void setClusterStatePublisher(BiConsumer updateFunction) { + this.state.getAndUpdate(updateFunction); } public synchronized void setNodeConnectionsService(NodeConnectionsService nodeConnectionsService) { @@ -197,13 +206,19 @@ public synchronized void removeInitialStateBlock(int blockId) throws IllegalStat @Override protected synchronized void doStart() { Objects.requireNonNull(clusterStatePublisher, "please set a cluster state publisher before starting"); - Objects.requireNonNull(clusterState.nodes().getLocalNode(), "please set the local node before starting"); Objects.requireNonNull(nodeConnectionsService, "please set the node connection service before starting"); - add(localNodeMasterListeners); - this.clusterState = ClusterState.builder(clusterState).blocks(initialBlocks).build(); - this.updateTasksExecutor = EsExecutors.newSinglePrioritizing(UPDATE_THREAD_NAME, daemonThreadFactory(settings, UPDATE_THREAD_NAME), - threadPool.getThreadContext()); - this.clusterState = ClusterState.builder(clusterState).blocks(initialBlocks).build(); + Objects.requireNonNull(discoverySettings, "please set discovery settings before starting"); + addListener(localNodeMasterListeners); + DiscoveryNode localNode = localNodeSupplier.get(); + assert localNode != null; + updateState(state -> { + assert state.nodes().getLocalNodeId() == null : "local node is already set"; + DiscoveryNodes nodes = DiscoveryNodes.builder(state.nodes()).add(localNode).localNodeId(localNode.getId()).build(); + return ClusterState.builder(state).nodes(nodes).blocks(initialBlocks).build(); + }); + this.threadPoolExecutor = EsExecutors.newSinglePrioritizing(UPDATE_THREAD_NAME, + daemonThreadFactory(settings, UPDATE_THREAD_NAME), threadPool.getThreadContext(), threadPool.scheduler()); + this.taskBatcher = new ClusterServiceTaskBatcher(logger, threadPoolExecutor); } @Override @@ -217,25 +232,59 @@ protected synchronized void doStop() { logger.debug("failed to notify 
listeners on shutdown", ex); } } - ThreadPool.terminate(updateTasksExecutor, 10, TimeUnit.SECONDS); + ThreadPool.terminate(threadPoolExecutor, 10, TimeUnit.SECONDS); // close timeout listeners that did not have an ongoing timeout - postAppliedListeners - .stream() - .filter(listener -> listener instanceof TimeoutClusterStateListener) - .map(listener -> (TimeoutClusterStateListener)listener) - .forEach(TimeoutClusterStateListener::onClose); - remove(localNodeMasterListeners); + timeoutClusterStateListeners.forEach(TimeoutClusterStateListener::onClose); + removeListener(localNodeMasterListeners); } @Override protected synchronized void doClose() { } + class ClusterServiceTaskBatcher extends TaskBatcher { + + ClusterServiceTaskBatcher(Logger logger, PrioritizedEsThreadPoolExecutor threadExecutor) { + super(logger, threadExecutor); + } + + @Override + protected void onTimeout(List tasks, TimeValue timeout) { + threadPool.generic().execute( + () -> tasks.forEach( + task -> ((UpdateTask) task).listener.onFailure(task.source, + new ProcessClusterEventTimeoutException(timeout, task.source)))); + } + + @Override + protected void run(Object batchingKey, List tasks, String tasksSummary) { + ClusterStateTaskExecutor taskExecutor = (ClusterStateTaskExecutor) batchingKey; + List updateTasks = (List) tasks; + runTasks(new ClusterService.TaskInputs(taskExecutor, updateTasks, tasksSummary)); + } + + class UpdateTask extends BatchedTask { + final ClusterStateTaskListener listener; + + UpdateTask(Priority priority, String source, Object task, ClusterStateTaskListener listener, + ClusterStateTaskExecutor executor) { + super(priority, source, executor, task); + this.listener = listener; + } + + @Override + public String describeTasks(List tasks) { + return ((ClusterStateTaskExecutor) batchingKey).describeTasks( + tasks.stream().map(BatchedTask::getTask).collect(Collectors.toList())); + } + } + } + /** * The local node. */ public DiscoveryNode localNode() { - DiscoveryNode localNode = clusterState.getNodes().getLocalNode(); + DiscoveryNode localNode = state().getNodes().getLocalNode(); if (localNode == null) { throw new IllegalStateException("No local node found. Is the node started?"); } @@ -247,41 +296,62 @@ public OperationRouting operationRouting() { } /** - * The current state. + * The current cluster state. */ public ClusterState state() { - return this.clusterState; + assert assertNotCalledFromClusterStateApplier("the applied cluster state is not yet available"); + return this.state.get(); } /** - * Adds a priority listener for updated cluster states. + * Adds a high priority applier of updated cluster states. */ - public void addFirst(ClusterStateListener listener) { - priorityClusterStateListeners.add(listener); + public void addHighPriorityApplier(ClusterStateApplier applier) { + highPriorityStateAppliers.add(applier); } /** - * Adds last listener. + * Adds an applier which will be called after all high priority and normal appliers have been called. */ - public void addLast(ClusterStateListener listener) { - lastClusterStateListeners.add(listener); + public void addLowPriorityApplier(ClusterStateApplier applier) { + lowPriorityStateAppliers.add(applier); } /** - * Adds a listener for updated cluster states. + * Adds a applier of updated cluster states. */ - public void add(ClusterStateListener listener) { + public void addStateApplier(ClusterStateApplier applier) { + normalPriorityStateAppliers.add(applier); + } + + /** + * Removes an applier of updated cluster states. 
+ */ + public void removeApplier(ClusterStateApplier applier) { + normalPriorityStateAppliers.remove(applier); + highPriorityStateAppliers.remove(applier); + lowPriorityStateAppliers.remove(applier); + } + + /** + * Adds a listener for updated cluster states. + */ + public void addListener(ClusterStateListener listener) { clusterStateListeners.add(listener); } /** * Removes a listener for updated cluster states. */ - public void remove(ClusterStateListener listener) { + public void removeListener(ClusterStateListener listener) { clusterStateListeners.remove(listener); - priorityClusterStateListeners.remove(listener); - lastClusterStateListeners.remove(listener); - postAppliedListeners.remove(listener); + } + + /** + * Removes a timeout listener for updated cluster states. + */ + public void removeTimeoutListener(TimeoutClusterStateListener listener) { + timeoutClusterStateListeners.remove(listener); for (Iterator it = onGoingTimeouts.iterator(); it.hasNext(); ) { NotifyTimeout timeout = it.next(); if (timeout.listener.equals(listener)) { @@ -294,32 +364,32 @@ public void remove(ClusterStateListener listener) { /** * Add a listener for on/off local node master events */ - public void add(LocalNodeMasterListener listener) { + public void addLocalNodeMasterListener(LocalNodeMasterListener listener) { localNodeMasterListeners.add(listener); } /** * Remove the given listener for on/off local master events */ - public void remove(LocalNodeMasterListener listener) { + public void removeLocalNodeMasterListener(LocalNodeMasterListener listener) { localNodeMasterListeners.remove(listener); } /** - * Adds a cluster state listener that will timeout after the provided timeout, - * and is executed after the clusterstate has been successfully applied ie. is - * in state {@link org.elasticsearch.cluster.ClusterState.ClusterStateStatus#APPLIED} - * NOTE: a {@code null} timeout means that the listener will never be removed - * automatically + * Adds a cluster state listener that is expected to be removed within a short period of time. + * If a timeout is provided, the listener will be notified once that time has elapsed. + * + * NOTE: the listener is not removed on timeout. This is the responsibility of the caller.
*/ - public void add(@Nullable final TimeValue timeout, final TimeoutClusterStateListener listener) { + public void addTimeoutListener(@Nullable final TimeValue timeout, final TimeoutClusterStateListener listener) { if (lifecycle.stoppedOrClosed()) { listener.onClose(); return; } + // call the post added notification on the same event thread try { - updateTasksExecutor.execute(new SourcePrioritizedRunnable(Priority.HIGH, "_add_listener_") { + threadPoolExecutor.execute(new SourcePrioritizedRunnable(Priority.HIGH, "_add_listener_") { @Override public void run() { if (timeout != null) { @@ -327,7 +397,7 @@ public void run() { notifyTimeout.future = threadPool.schedule(timeout, ThreadPool.Names.GENERIC, notifyTimeout); onGoingTimeouts.add(notifyTimeout); } - postAppliedListeners.add(listener); + timeoutClusterStateListeners.add(listener); listener.postAdded(); } }); @@ -349,11 +419,11 @@ public void run() { * task * */ - public void submitStateUpdateTask(final String source, final ClusterStateUpdateTask updateTask) { + public & ClusterStateTaskListener> void submitStateUpdateTask( + final String source, final T updateTask) { submitStateUpdateTask(source, updateTask, updateTask, updateTask, updateTask); } - /** * Submits a cluster state update task; submitted updates will be * batched across the same instance of executor. The exact batching @@ -399,41 +469,11 @@ public void submitStateUpdateTasks(final String source, if (!lifecycle.started()) { return; } - if (tasks.isEmpty()) { - return; - } try { - // convert to an identity map to check for dups based on update tasks semantics of using identity instead of equal - final IdentityHashMap tasksIdentity = new IdentityHashMap<>(tasks); - final List> updateTasks = tasksIdentity.entrySet().stream().map( - entry -> new UpdateTask<>(source, entry.getKey(), config, executor, safe(entry.getValue(), logger)) - ).collect(Collectors.toList()); - - synchronized (updateTasksPerExecutor) { - List existingTasks = updateTasksPerExecutor.computeIfAbsent(executor, k -> new ArrayList<>()); - for (@SuppressWarnings("unchecked") UpdateTask existing : existingTasks) { - if (tasksIdentity.containsKey(existing.task)) { - throw new IllegalStateException("task [" + executor.describeTasks(Collections.singletonList(existing.task)) + - "] with source [" + source + "] is already queued"); - } - } - existingTasks.addAll(updateTasks); - } - - final UpdateTask firstTask = updateTasks.get(0); - - if (config.timeout() != null) { - updateTasksExecutor.execute(firstTask, threadPool.scheduler(), config.timeout(), () -> threadPool.generic().execute(() -> { - for (UpdateTask task : updateTasks) { - if (task.processed.getAndSet(true) == false) { - logger.debug("cluster state update task [{}] timed out after [{}]", source, config.timeout()); - task.listener.onFailure(source, new ProcessClusterEventTimeoutException(config.timeout(), source)); - } - } - })); - } else { - updateTasksExecutor.execute(firstTask); - } + List safeTasks = tasks.entrySet().stream() + .map(e -> taskBatcher.new UpdateTask(config.priority(), source, e.getKey(), safe(e.getValue(), logger), executor)) + .collect(Collectors.toList()); + taskBatcher.submitTasks(safeTasks, config.timeout()); } catch (EsRejectedExecutionException e) { // ignore cases where we are shutting down..., there is really nothing interesting // to be done here... @@ -447,36 +487,20 @@ public void submitStateUpdateTasks(final String source, * Returns the tasks that are pending. 
*/ public List pendingTasks() { - PrioritizedEsThreadPoolExecutor.Pending[] pendings = updateTasksExecutor.getPending(); - List pendingClusterTasks = new ArrayList<>(pendings.length); - for (PrioritizedEsThreadPoolExecutor.Pending pending : pendings) { - final String source; - final long timeInQueue; - // we have to capture the task as it will be nulled after execution and we don't want to change while we check things here. - final Object task = pending.task; - if (task == null) { - continue; - } else if (task instanceof SourcePrioritizedRunnable) { - SourcePrioritizedRunnable runnable = (SourcePrioritizedRunnable) task; - source = runnable.source(); - timeInQueue = runnable.getAgeInMillis(); - } else { - assert false : "expected SourcePrioritizedRunnable got " + task.getClass(); - source = "unknown [" + task.getClass() + "]"; - timeInQueue = 0; - } - - pendingClusterTasks.add( - new PendingClusterTask(pending.insertionOrder, pending.priority, new Text(source), timeInQueue, pending.executing)); - } - return pendingClusterTasks; + return Arrays.stream(threadPoolExecutor.getPending()).map(pending -> { + assert pending.task instanceof SourcePrioritizedRunnable : + "thread pool executor should only use SourcePrioritizedRunnable instances but found: " + pending.task.getClass().getName(); + SourcePrioritizedRunnable task = (SourcePrioritizedRunnable) pending.task; + return new PendingClusterTask(pending.insertionOrder, pending.priority, new Text(task.source()), + task.getAgeInMillis(), pending.executing); + }).collect(Collectors.toList()); } /** * Returns the number of currently pending tasks. */ public int numberOfPendingTasks() { - return updateTasksExecutor.getNumberOfPendingTasks(); + return threadPoolExecutor.getNumberOfPendingTasks(); } /** @@ -485,7 +509,7 @@ public int numberOfPendingTasks() { * @return A zero time value if the queue is empty, otherwise the time value oldest task waiting in the queue */ public TimeValue getMaxTaskWaitTime() { - return updateTasksExecutor.getMaxTaskWaitTime(); + return threadPoolExecutor.getMaxTaskWaitTime(); } /** asserts that the current thread is the cluster state update thread */ @@ -495,64 +519,119 @@ public static boolean assertClusterStateThread() { return true; } - public ClusterName getClusterName() { - return clusterName; + /** asserts that the current thread is NOT the cluster state update thread */ + public static boolean assertNotClusterStateUpdateThread(String reason) { + assert Thread.currentThread().getName().contains(UPDATE_THREAD_NAME) == false : + "Expected current thread [" + Thread.currentThread() + "] to not be the cluster state update thread. 
Reason: [" + reason + "]"; + return true; } - abstract static class SourcePrioritizedRunnable extends PrioritizedRunnable { - protected final String source; - - public SourcePrioritizedRunnable(Priority priority, String source) { - super(priority); - this.source = source; + /** asserts that the current stack trace does NOT involve a cluster state applier */ + private static boolean assertNotCalledFromClusterStateApplier(String reason) { + if (Thread.currentThread().getName().contains(UPDATE_THREAD_NAME)) { + for (StackTraceElement element : Thread.currentThread().getStackTrace()) { + final String className = element.getClassName(); + final String methodName = element.getMethodName(); + if (className.equals(ClusterStateObserver.class.getName())) { + // people may start an observer from an applier + return true; + } else if (className.equals(ClusterService.class.getName()) + && methodName.equals("callClusterStateAppliers")) { + throw new AssertionError("should not be called by a cluster state applier. reason [" + reason + "]"); + } + } } + return true; + } - public String source() { - return source; - } + public ClusterName getClusterName() { + return clusterName; } - void runTasksForExecutor(ClusterStateTaskExecutor executor) { - final ArrayList> toExecute = new ArrayList<>(); - final Map> processTasksBySource = new HashMap<>(); - synchronized (updateTasksPerExecutor) { - List pending = updateTasksPerExecutor.remove(executor); - if (pending != null) { - for (UpdateTask task : pending) { - if (task.processed.getAndSet(true) == false) { - logger.trace("will process {}", task.toString(executor)); - toExecute.add(task); - processTasksBySource.computeIfAbsent(task.source, s -> new ArrayList<>()).add(task.task); - } else { - logger.trace("skipping {}, already processed", task.toString(executor)); - } - } - } - } - if (toExecute.isEmpty()) { - return; - } - final String tasksSummary = processTasksBySource.entrySet().stream().map(entry -> { - String tasks = executor.describeTasks(entry.getValue()); - return tasks.isEmpty() ?
entry.getKey() : entry.getKey() + "[" + tasks + "]"; - }).reduce((s1, s2) -> s1 + ", " + s2).orElse(""); + public void setDiscoverySettings(DiscoverySettings discoverySettings) { + this.discoverySettings = discoverySettings; + } + void runTasks(TaskInputs taskInputs) { if (!lifecycle.started()) { - logger.debug("processing [{}]: ignoring, cluster_service not started", tasksSummary); + logger.debug("processing [{}]: ignoring, cluster service not started", taskInputs.summary); return; } - logger.debug("processing [{}]: execute", tasksSummary); - ClusterState previousClusterState = clusterState; - if (!previousClusterState.nodes().isLocalNodeElectedMaster() && executor.runOnlyOnMaster()) { - logger.debug("failing [{}]: local node is no longer master", tasksSummary); - toExecute.stream().forEach(task -> task.listener.onNoLongerMaster(task.source)); + + logger.debug("processing [{}]: execute", taskInputs.summary); + ClusterState previousClusterState = state(); + + if (!previousClusterState.nodes().isLocalNodeElectedMaster() && taskInputs.runOnlyOnMaster()) { + logger.debug("failing [{}]: local node is no longer master", taskInputs.summary); + taskInputs.onNoLongerMaster(); return; } - ClusterStateTaskExecutor.BatchResult batchResult; + long startTimeNS = currentTimeInNanos(); + TaskOutputs taskOutputs = calculateTaskOutputs(taskInputs, previousClusterState, startTimeNS); + taskOutputs.notifyFailedTasks(); + + if (taskOutputs.clusterStateUnchanged()) { + taskOutputs.notifySuccessfulTasksOnUnchangedClusterState(); + TimeValue executionTime = TimeValue.timeValueMillis(Math.max(0, TimeValue.nsecToMSec(currentTimeInNanos() - startTimeNS))); + logger.debug("processing [{}]: took [{}] no change in cluster_state", taskInputs.summary, executionTime); + warnAboutSlowTaskIfNeeded(executionTime, taskInputs.summary); + } else { + ClusterState newClusterState = taskOutputs.newClusterState; + if (logger.isTraceEnabled()) { + logger.trace("cluster state updated, source [{}]\n{}", taskInputs.summary, newClusterState); + } else if (logger.isDebugEnabled()) { + logger.debug("cluster state updated, version [{}], source [{}]", newClusterState.version(), taskInputs.summary); + } + try { + publishAndApplyChanges(taskInputs, taskOutputs); + TimeValue executionTime = TimeValue.timeValueMillis(Math.max(0, TimeValue.nsecToMSec(currentTimeInNanos() - startTimeNS))); + logger.debug("processing [{}]: took [{}] done applying updated cluster_state (version: {}, uuid: {})", taskInputs.summary, + executionTime, newClusterState.version(), newClusterState.stateUUID()); + warnAboutSlowTaskIfNeeded(executionTime, taskInputs.summary); + } catch (Exception e) { + TimeValue executionTime = TimeValue.timeValueMillis(Math.max(0, TimeValue.nsecToMSec(currentTimeInNanos() - startTimeNS))); + final long version = newClusterState.version(); + final String stateUUID = newClusterState.stateUUID(); + final String fullState = newClusterState.toString(); + logger.warn( + (Supplier) () -> new ParameterizedMessage( + "failed to apply updated cluster state in [{}]:\nversion [{}], uuid [{}], source [{}]\n{}", + executionTime, + version, + stateUUID, + taskInputs.summary, + fullState), + e); + // TODO: do we want to call updateTask.onFailure here? 
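For orientation, here is a brief illustrative sketch (not part of the change itself) of the caller-side API whose work eventually reaches runTasks(TaskInputs); the source string and the no-op task body are hypothetical:

    clusterService.submitStateUpdateTask("illustrative-source", new ClusterStateUpdateTask() {
        @Override
        public ClusterState execute(ClusterState currentState) {
            // Returning the same instance signals "no change"; runTasks then takes the
            // notifySuccessfulTasksOnUnchangedClusterState path instead of publishing.
            return currentState;
        }

        @Override
        public void onFailure(String source, Exception e) {
            // Invoked if execution fails, or with ProcessClusterEventTimeoutException when the task times out.
        }
    });
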
+ } + } + } + + public TaskOutputs calculateTaskOutputs(TaskInputs taskInputs, ClusterState previousClusterState, long startTimeNS) { + ClusterTasksResult clusterTasksResult = executeTasks(taskInputs, startTimeNS, previousClusterState); + // extract those that are waiting for results + List nonFailedTasks = new ArrayList<>(); + for (ClusterServiceTaskBatcher.UpdateTask updateTask : taskInputs.updateTasks) { + assert clusterTasksResult.executionResults.containsKey(updateTask.task) : "missing " + updateTask; + final ClusterStateTaskExecutor.TaskResult taskResult = + clusterTasksResult.executionResults.get(updateTask.task); + if (taskResult.isSuccess()) { + nonFailedTasks.add(updateTask); + } + } + ClusterState newClusterState = patchVersionsAndNoMasterBlocks(previousClusterState, clusterTasksResult); + + return new TaskOutputs(taskInputs, previousClusterState, newClusterState, nonFailedTasks, + clusterTasksResult.executionResults); + } + + private ClusterTasksResult executeTasks(TaskInputs taskInputs, long startTimeNS, ClusterState previousClusterState) { + ClusterTasksResult clusterTasksResult; try { - List inputs = toExecute.stream().map(tUpdateTask -> tUpdateTask.task).collect(Collectors.toList()); - batchResult = executor.execute(previousClusterState, inputs); + List inputs = taskInputs.updateTasks.stream() + .map(ClusterServiceTaskBatcher.UpdateTask::getTask).collect(Collectors.toList()); + clusterTasksResult = taskInputs.executor.execute(previousClusterState, inputs); } catch (Exception e) { TimeValue executionTime = TimeValue.timeValueMillis(Math.max(0, TimeValue.nsecToMSec(currentTimeInNanos() - startTimeNS))); if (logger.isTraceEnabled()) { @@ -561,219 +640,294 @@ void runTasksForExecutor(ClusterStateTaskExecutor executor) { "failed to execute cluster state update in [{}], state:\nversion [{}], source [{}]\n{}{}{}", executionTime, previousClusterState.version(), - tasksSummary, - previousClusterState.nodes().prettyPrint(), - previousClusterState.routingTable().prettyPrint(), - previousClusterState.getRoutingNodes().prettyPrint()), + taskInputs.summary, + previousClusterState.nodes(), + previousClusterState.routingTable(), + previousClusterState.getRoutingNodes()), e); } - warnAboutSlowTaskIfNeeded(executionTime, tasksSummary); - batchResult = ClusterStateTaskExecutor.BatchResult.builder() - .failures(toExecute.stream().map(updateTask -> updateTask.task)::iterator, e) - .build(previousClusterState); - } - - assert batchResult.executionResults != null; - assert batchResult.executionResults.size() == toExecute.size() - : String.format(Locale.ROOT, "expected [%d] task result%s but was [%d]", toExecute.size(), - toExecute.size() == 1 ? 
"" : "s", batchResult.executionResults.size()); - boolean assertsEnabled = false; - assert (assertsEnabled = true); - if (assertsEnabled) { - for (UpdateTask updateTask : toExecute) { - assert batchResult.executionResults.containsKey(updateTask.task) : - "missing task result for " + updateTask.toString(executor); - } + warnAboutSlowTaskIfNeeded(executionTime, taskInputs.summary); + clusterTasksResult = ClusterTasksResult.builder() + .failures(taskInputs.updateTasks.stream().map(ClusterServiceTaskBatcher.UpdateTask::getTask)::iterator, e) + .build(previousClusterState); } - ClusterState newClusterState = batchResult.resultingState; - final ArrayList> proccessedListeners = new ArrayList<>(); - // fail all tasks that have failed and extract those that are waiting for results - for (UpdateTask updateTask : toExecute) { - assert batchResult.executionResults.containsKey(updateTask.task) : "missing " + updateTask.toString(executor); - final ClusterStateTaskExecutor.TaskResult executionResult = - batchResult.executionResults.get(updateTask.task); - executionResult.handle( - () -> proccessedListeners.add(updateTask), - ex -> { - logger.debug( - (Supplier) - () -> new ParameterizedMessage("cluster state update task {} failed", updateTask.toString(executor)), ex); - updateTask.listener.onFailure(updateTask.source, ex); - } - ); + assert clusterTasksResult.executionResults != null; + assert clusterTasksResult.executionResults.size() == taskInputs.updateTasks.size() + : String.format(Locale.ROOT, "expected [%d] task result%s but was [%d]", taskInputs.updateTasks.size(), + taskInputs.updateTasks.size() == 1 ? "" : "s", clusterTasksResult.executionResults.size()); + if (Assertions.ENABLED) { + for (ClusterServiceTaskBatcher.UpdateTask updateTask : taskInputs.updateTasks) { + assert clusterTasksResult.executionResults.containsKey(updateTask.task) : + "missing task result for " + updateTask; + } } - if (previousClusterState == newClusterState) { - for (UpdateTask task : proccessedListeners) { - if (task.listener instanceof AckedClusterStateTaskListener) { - //no need to wait for ack if nothing changed, the update can be counted as acknowledged - ((AckedClusterStateTaskListener) task.listener).onAllNodesAcked(null); - } - task.listener.clusterStateProcessed(task.source, previousClusterState, newClusterState); + return clusterTasksResult; + } + + private ClusterState patchVersionsAndNoMasterBlocks(ClusterState previousClusterState, ClusterTasksResult executionResult) { + ClusterState newClusterState = executionResult.resultingState; + + if (executionResult.noMaster) { + assert newClusterState == previousClusterState : "state can only be changed by ClusterService when noMaster = true"; + if (previousClusterState.nodes().getMasterNodeId() != null) { + // remove block if it already exists before adding new one + assert previousClusterState.blocks().hasGlobalBlock(discoverySettings.getNoMasterBlock().id()) == false : + "NO_MASTER_BLOCK should only be added by ClusterService"; + ClusterBlocks clusterBlocks = ClusterBlocks.builder().blocks(previousClusterState.blocks()) + .addGlobalBlock(discoverySettings.getNoMasterBlock()) + .build(); + + DiscoveryNodes discoveryNodes = new DiscoveryNodes.Builder(previousClusterState.nodes()).masterNodeId(null).build(); + newClusterState = ClusterState.builder(previousClusterState) + .blocks(clusterBlocks) + .nodes(discoveryNodes) + .build(); + } + } else if (newClusterState.nodes().isLocalNodeElectedMaster() && previousClusterState != newClusterState) { + // only the master 
controls the version numbers + Builder builder = ClusterState.builder(newClusterState).incrementVersion(); + if (previousClusterState.routingTable() != newClusterState.routingTable()) { + builder.routingTable(RoutingTable.builder(newClusterState.routingTable()) + .version(newClusterState.routingTable().version() + 1).build()); + } + if (previousClusterState.metaData() != newClusterState.metaData()) { + builder.metaData(MetaData.builder(newClusterState.metaData()).version(newClusterState.metaData().version() + 1)); } - TimeValue executionTime = TimeValue.timeValueMillis(Math.max(0, TimeValue.nsecToMSec(currentTimeInNanos() - startTimeNS))); - logger.debug("processing [{}]: took [{}] no change in cluster_state", tasksSummary, executionTime); - warnAboutSlowTaskIfNeeded(executionTime, tasksSummary); - return; - } - try { - ArrayList ackListeners = new ArrayList<>(); - if (newClusterState.nodes().isLocalNodeElectedMaster()) { - // only the master controls the version numbers - Builder builder = ClusterState.builder(newClusterState).incrementVersion(); - if (previousClusterState.routingTable() != newClusterState.routingTable()) { - builder.routingTable(RoutingTable.builder(newClusterState.routingTable()) - .version(newClusterState.routingTable().version() + 1).build()); - } - if (previousClusterState.metaData() != newClusterState.metaData()) { - builder.metaData(MetaData.builder(newClusterState.metaData()).version(newClusterState.metaData().version() + 1)); - } - newClusterState = builder.build(); - for (UpdateTask task : proccessedListeners) { - if (task.listener instanceof AckedClusterStateTaskListener) { - final AckedClusterStateTaskListener ackedListener = (AckedClusterStateTaskListener) task.listener; - if (ackedListener.ackTimeout() == null || ackedListener.ackTimeout().millis() == 0) { - ackedListener.onAckTimeout(); - } else { - try { - ackListeners.add(new AckCountDownListener(ackedListener, newClusterState.version(), newClusterState.nodes(), - threadPool)); - } catch (EsRejectedExecutionException ex) { - if (logger.isDebugEnabled()) { - logger.debug("Couldn't schedule timeout thread - node might be shutting down", ex); - } - //timeout straightaway, otherwise we could wait forever as the timeout thread has not started - ackedListener.onAckTimeout(); - } - } - } - } + // remove the no master block, if it exists + if (newClusterState.blocks().hasGlobalBlock(discoverySettings.getNoMasterBlock().id())) { + builder.blocks(ClusterBlocks.builder().blocks(newClusterState.blocks()) + .removeGlobalBlock(discoverySettings.getNoMasterBlock().id())); } - final Discovery.AckListener ackListener = new DelegetingAckListener(ackListeners); - newClusterState.status(ClusterState.ClusterStateStatus.BEING_APPLIED); + newClusterState = builder.build(); + } - if (logger.isTraceEnabled()) { - logger.trace("cluster state updated, source [{}]\n{}", tasksSummary, newClusterState.prettyPrint()); - } else if (logger.isDebugEnabled()) { - logger.debug("cluster state updated, version [{}], source [{}]", newClusterState.version(), tasksSummary); - } + assert newClusterState.nodes().getMasterNodeId() == null || + newClusterState.blocks().hasGlobalBlock(discoverySettings.getNoMasterBlock().id()) == false : + "cluster state with master node must not have NO_MASTER_BLOCK"; - ClusterChangedEvent clusterChangedEvent = new ClusterChangedEvent(tasksSummary, newClusterState, previousClusterState); - // new cluster state, notify all listeners - final DiscoveryNodes.Delta nodesDelta = clusterChangedEvent.nodesDelta(); - if 
(nodesDelta.hasChanges() && logger.isInfoEnabled()) { - String summary = nodesDelta.shortSummary(); - if (summary.length() > 0) { - logger.info("{}, reason: {}", summary, tasksSummary); - } - } + return newClusterState; + } - nodeConnectionsService.connectToAddedNodes(clusterChangedEvent); - - // if we are the master, publish the new state to all nodes - // we publish here before we send a notification to all the listeners, since if it fails - // we don't want to notify - if (newClusterState.nodes().isLocalNodeElectedMaster()) { - logger.debug("publishing cluster state version [{}]", newClusterState.version()); - try { - clusterStatePublisher.accept(clusterChangedEvent, ackListener); - } catch (Discovery.FailedToCommitClusterStateException t) { - final long version = newClusterState.version(); - logger.warn( - (Supplier) () -> new ParameterizedMessage( - "failing [{}]: failed to commit cluster state version [{}]", tasksSummary, version), - t); - proccessedListeners.forEach(task -> task.listener.onFailure(task.source, t)); - return; - } + private void publishAndApplyChanges(TaskInputs taskInputs, TaskOutputs taskOutputs) { + ClusterState previousClusterState = taskOutputs.previousClusterState; + ClusterState newClusterState = taskOutputs.newClusterState; + + ClusterChangedEvent clusterChangedEvent = new ClusterChangedEvent(taskInputs.summary, newClusterState, previousClusterState); + // new cluster state, notify all listeners + final DiscoveryNodes.Delta nodesDelta = clusterChangedEvent.nodesDelta(); + if (nodesDelta.hasChanges() && logger.isInfoEnabled()) { + String summary = nodesDelta.shortSummary(); + if (summary.length() > 0) { + logger.info("{}, reason: {}", summary, taskInputs.summary); } + } + + final Discovery.AckListener ackListener = newClusterState.nodes().isLocalNodeElectedMaster() ? 
+ taskOutputs.createAckListener(threadPool, newClusterState) : + null; - // update the current cluster state - clusterState = newClusterState; - logger.debug("set local cluster state to version {}", newClusterState.version()); + nodeConnectionsService.connectToNodes(newClusterState.nodes()); + + // if we are the master, publish the new state to all nodes + // we publish here before we send a notification to all the listeners, since if it fails + // we don't want to notify + if (newClusterState.nodes().isLocalNodeElectedMaster()) { + logger.debug("publishing cluster state version [{}]", newClusterState.version()); try { - // nothing to do until we actually recover from the gateway or any other block indicates we need to disable persistency - if (clusterChangedEvent.state().blocks().disableStatePersistence() == false && clusterChangedEvent.metaDataChanged()) { - final Settings incomingSettings = clusterChangedEvent.state().metaData().settings(); - clusterSettings.applySettings(incomingSettings); - } - } catch (Exception ex) { - logger.warn("failed to apply cluster settings", ex); + clusterStatePublisher.accept(clusterChangedEvent, ackListener); + } catch (Discovery.FailedToCommitClusterStateException t) { + final long version = newClusterState.version(); + logger.warn( + (Supplier) () -> new ParameterizedMessage( + "failing [{}]: failed to commit cluster state version [{}]", taskInputs.summary, version), + t); + // ensure that list of connected nodes in NodeConnectionsService is in-sync with the nodes of the current cluster state + nodeConnectionsService.connectToNodes(previousClusterState.nodes()); + nodeConnectionsService.disconnectFromNodesExcept(previousClusterState.nodes()); + taskOutputs.publishingFailed(t); + return; } - for (ClusterStateListener listener : preAppliedListeners) { - try { - listener.clusterChanged(clusterChangedEvent); - } catch (Exception ex) { - logger.warn("failed to notify ClusterStateListener", ex); - } + } + + logger.debug("applying cluster state version {}", newClusterState.version()); + try { + // nothing to do until we actually recover from the gateway or any other block indicates we need to disable persistency + if (clusterChangedEvent.state().blocks().disableStatePersistence() == false && clusterChangedEvent.metaDataChanged()) { + final Settings incomingSettings = clusterChangedEvent.state().metaData().settings(); + clusterSettings.applySettings(incomingSettings); } + } catch (Exception ex) { + logger.warn("failed to apply cluster settings", ex); + } - nodeConnectionsService.disconnectFromRemovedNodes(clusterChangedEvent); + logger.debug("set local cluster state to version {}", newClusterState.version()); + callClusterStateAppliers(newClusterState, clusterChangedEvent); - newClusterState.status(ClusterState.ClusterStateStatus.APPLIED); + nodeConnectionsService.disconnectFromNodesExcept(newClusterState.nodes()); - for (ClusterStateListener listener : postAppliedListeners) { - try { - listener.clusterChanged(clusterChangedEvent); - } catch (Exception ex) { - logger.warn("failed to notify ClusterStateListener", ex); - } - } + updateState(css -> newClusterState); - //manual ack only from the master at the end of the publish - if (newClusterState.nodes().isLocalNodeElectedMaster()) { - try { - ackListener.onNodeAck(newClusterState.nodes().getLocalNode(), null); - } catch (Exception e) { - final DiscoveryNode localNode = newClusterState.nodes().getLocalNode(); - logger.debug( - (Supplier) () -> new ParameterizedMessage("error while processing ack for master node 
[{}]", localNode), - e); - } + Stream.concat(clusterStateListeners.stream(), timeoutClusterStateListeners.stream()).forEach(listener -> { + try { + logger.trace("calling [{}] with change to version [{}]", listener, newClusterState.version()); + listener.clusterChanged(clusterChangedEvent); + } catch (Exception ex) { + logger.warn("failed to notify ClusterStateListener", ex); } + }); - for (UpdateTask task : proccessedListeners) { - task.listener.clusterStateProcessed(task.source, previousClusterState, newClusterState); + //manual ack only from the master at the end of the publish + if (newClusterState.nodes().isLocalNodeElectedMaster()) { + try { + ackListener.onNodeAck(newClusterState.nodes().getLocalNode(), null); + } catch (Exception e) { + final DiscoveryNode localNode = newClusterState.nodes().getLocalNode(); + logger.debug( + (Supplier) () -> new ParameterizedMessage("error while processing ack for master node [{}]", localNode), + e); } + } + taskOutputs.processedDifferentClusterState(previousClusterState, newClusterState); + + if (newClusterState.nodes().isLocalNodeElectedMaster()) { try { - executor.clusterStatePublished(clusterChangedEvent); + taskOutputs.clusterStatePublished(clusterChangedEvent); } catch (Exception e) { logger.error( (Supplier) () -> new ParameterizedMessage( "exception thrown while notifying executor of new cluster state publication [{}]", - tasksSummary), + taskInputs.summary), e); } + } + } - TimeValue executionTime = TimeValue.timeValueMillis(Math.max(0, TimeValue.nsecToMSec(currentTimeInNanos() - startTimeNS))); - logger.debug("processing [{}]: took [{}] done applying updated cluster_state (version: {}, uuid: {})", tasksSummary, - executionTime, newClusterState.version(), newClusterState.stateUUID()); - warnAboutSlowTaskIfNeeded(executionTime, tasksSummary); - } catch (Exception e) { - TimeValue executionTime = TimeValue.timeValueMillis(Math.max(0, TimeValue.nsecToMSec(currentTimeInNanos() - startTimeNS))); - final long version = newClusterState.version(); - final String stateUUID = newClusterState.stateUUID(); - final String prettyPrint = newClusterState.prettyPrint(); - logger.warn( - (Supplier) () -> new ParameterizedMessage( - "failed to apply updated cluster state in [{}]:\nversion [{}], uuid [{}], source [{}]\n{}", - executionTime, - version, - stateUUID, - tasksSummary, - prettyPrint), - e); - // TODO: do we want to call updateTask.onFailure here? 
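As an illustration of the applier/listener split used above (a sketch, not part of the change; the class name and method bodies are hypothetical, and the usual imports from org.elasticsearch.cluster and org.elasticsearch.cluster.service are assumed): an applier is called by callClusterStateAppliers while the new state is being applied, whereas a listener is only notified once the state has been applied.

    class ShardStateWatcher implements ClusterStateApplier, ClusterStateListener {
        ShardStateWatcher(ClusterService clusterService) {
            clusterService.addStateApplier(this); // participates in applying the new state
            clusterService.addListener(this);     // notified afterwards, together with timeout listeners
        }

        @Override
        public void applyClusterState(ClusterChangedEvent event) {
            // update internal data structures from event.state()
        }

        @Override
        public void clusterChanged(ClusterChangedEvent event) {
            // react to the now-applied state, e.g. schedule follow-up work
        }
    }
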
+ private void callClusterStateAppliers(ClusterState newClusterState, ClusterChangedEvent clusterChangedEvent) { + for (ClusterStateApplier applier : clusterStateAppliers) { + try { + logger.trace("calling [{}] with change to version [{}]", applier, newClusterState.version()); + applier.applyClusterState(clusterChangedEvent); + } catch (Exception ex) { + logger.warn("failed to notify ClusterStateApplier", ex); + } + } + } + + /** + * Represents a set of tasks to be processed together with their executor + */ + class TaskInputs { + public final String summary; + public final List updateTasks; + public final ClusterStateTaskExecutor executor; + + TaskInputs(ClusterStateTaskExecutor executor, List updateTasks, String summary) { + this.summary = summary; + this.executor = executor; + this.updateTasks = updateTasks; + } + + public boolean runOnlyOnMaster() { + return executor.runOnlyOnMaster(); + } + + public void onNoLongerMaster() { + updateTasks.stream().forEach(task -> task.listener.onNoLongerMaster(task.source)); + } + } + + /** + * Output created by executing a set of tasks provided as TaskInputs + */ + class TaskOutputs { + public final TaskInputs taskInputs; + public final ClusterState previousClusterState; + public final ClusterState newClusterState; + public final List nonFailedTasks; + public final Map executionResults; + + TaskOutputs(TaskInputs taskInputs, ClusterState previousClusterState, + ClusterState newClusterState, List nonFailedTasks, + Map executionResults) { + this.taskInputs = taskInputs; + this.previousClusterState = previousClusterState; + this.newClusterState = newClusterState; + this.nonFailedTasks = nonFailedTasks; + this.executionResults = executionResults; + } + + public void publishingFailed(Discovery.FailedToCommitClusterStateException t) { + nonFailedTasks.forEach(task -> task.listener.onFailure(task.source, t)); + } + + public void processedDifferentClusterState(ClusterState previousClusterState, ClusterState newClusterState) { + nonFailedTasks.forEach(task -> task.listener.clusterStateProcessed(task.source, previousClusterState, newClusterState)); + } + + public void clusterStatePublished(ClusterChangedEvent clusterChangedEvent) { + taskInputs.executor.clusterStatePublished(clusterChangedEvent); + } + + public Discovery.AckListener createAckListener(ThreadPool threadPool, ClusterState newClusterState) { + ArrayList ackListeners = new ArrayList<>(); + + //timeout straightaway, otherwise we could wait forever as the timeout thread has not started + nonFailedTasks.stream().filter(task -> task.listener instanceof AckedClusterStateTaskListener).forEach(task -> { + final AckedClusterStateTaskListener ackedListener = (AckedClusterStateTaskListener) task.listener; + if (ackedListener.ackTimeout() == null || ackedListener.ackTimeout().millis() == 0) { + ackedListener.onAckTimeout(); + } else { + try { + ackListeners.add(new AckCountDownListener(ackedListener, newClusterState.version(), newClusterState.nodes(), + threadPool)); + } catch (EsRejectedExecutionException ex) { + if (logger.isDebugEnabled()) { + logger.debug("Couldn't schedule timeout thread - node might be shutting down", ex); + } + //timeout straightaway, otherwise we could wait forever as the timeout thread has not started + ackedListener.onAckTimeout(); + } + } + }); + + return new DelegetingAckListener(ackListeners); + } + + public boolean clusterStateUnchanged() { + return previousClusterState == newClusterState; + } + + public void notifyFailedTasks() { + // fail all tasks that have failed + for 
(ClusterServiceTaskBatcher.UpdateTask updateTask : taskInputs.updateTasks) { + assert executionResults.containsKey(updateTask.task) : "missing " + updateTask; + final ClusterStateTaskExecutor.TaskResult taskResult = executionResults.get(updateTask.task); + if (taskResult.isSuccess() == false) { + updateTask.listener.onFailure(updateTask.source, taskResult.getFailure()); + } + } } + public void notifySuccessfulTasksOnUnchangedClusterState() { + nonFailedTasks.forEach(task -> { + if (task.listener instanceof AckedClusterStateTaskListener) { + //no need to wait for ack if nothing changed, the update can be counted as acknowledged + ((AckedClusterStateTaskListener) task.listener).onAllNodesAcked(null); + } + task.listener.clusterStateProcessed(task.source, newClusterState, newClusterState); + }); + } } // this one is overridden in tests so we can control time - protected long currentTimeInNanos() {return System.nanoTime();} + protected long currentTimeInNanos() { + return System.nanoTime(); + } private static SafeClusterStateTaskListener safe(ClusterStateTaskListener listener, Logger logger) { if (listener instanceof AckedClusterStateTaskListener) { @@ -787,7 +941,7 @@ private static class SafeClusterStateTaskListener implements ClusterStateTaskLis private final ClusterStateTaskListener listener; private final Logger logger; - public SafeClusterStateTaskListener(ClusterStateTaskListener listener, Logger logger) { + SafeClusterStateTaskListener(ClusterStateTaskListener listener, Logger logger) { this.listener = listener; this.logger = logger; } @@ -824,9 +978,7 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS (Supplier) () -> new ParameterizedMessage( "exception thrown by listener while notifying of cluster state processed from [{}], old cluster state:\n" + "{}\nnew cluster state:\n{}", - source, - oldState.prettyPrint(), - newState.prettyPrint()), + source, oldState, newState), e); } } @@ -836,7 +988,7 @@ private static class SafeAckedClusterStateTaskListener extends SafeClusterStateT private final AckedClusterStateTaskListener listener; private final Logger logger; - public SafeAckedClusterStateTaskListener(AckedClusterStateTaskListener listener, Logger logger) { + SafeAckedClusterStateTaskListener(AckedClusterStateTaskListener listener, Logger logger) { super(listener, logger); this.listener = listener; this.logger = logger; @@ -872,38 +1024,6 @@ public TimeValue ackTimeout() { } } - class UpdateTask extends SourcePrioritizedRunnable { - - public final T task; - public final ClusterStateTaskConfig config; - public final ClusterStateTaskExecutor executor; - public final ClusterStateTaskListener listener; - public final AtomicBoolean processed = new AtomicBoolean(); - - UpdateTask(String source, T task, ClusterStateTaskConfig config, ClusterStateTaskExecutor executor, - ClusterStateTaskListener listener) { - super(config.priority(), source); - this.task = task; - this.config = config; - this.executor = executor; - this.listener = listener; - } - - @Override - public void run() { - runTasksForExecutor(executor); - } - - public String toString(ClusterStateTaskExecutor executor) { - String taskDescription = executor.describeTasks(Collections.singletonList(task)); - if (taskDescription.isEmpty()) { - return "[" + source + "]"; - } else { - return "[" + source + "[" + taskDescription + "]]"; - } - } - } - private void warnAboutSlowTaskIfNeeded(TimeValue executionTime, String source) { if (executionTime.getMillis() > slowTaskLoggingThreshold.getMillis()) { 
logger.warn("cluster state update task [{}] took [{}] above the warn threshold of {}", source, executionTime, @@ -1057,12 +1177,7 @@ private static class AckCountDownListener implements Discovery.AckListener { countDown = Math.max(1, countDown); logger.trace("expecting {} acknowledgements for cluster_state update (version: {})", countDown, clusterStateVersion); this.countDown = new CountDown(countDown); - this.ackTimeoutCallback = threadPool.schedule(ackedTaskListener.ackTimeout(), ThreadPool.Names.GENERIC, new Runnable() { - @Override - public void run() { - onTimeout(); - } - }); + this.ackTimeoutCallback = threadPool.schedule(ackedTaskListener.ackTimeout(), ThreadPool.Names.GENERIC, () -> onTimeout()); } @Override diff --git a/core/src/main/java/org/elasticsearch/cluster/service/SourcePrioritizedRunnable.java b/core/src/main/java/org/elasticsearch/cluster/service/SourcePrioritizedRunnable.java new file mode 100644 index 0000000000000..6358acf7e1c7c --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/service/SourcePrioritizedRunnable.java @@ -0,0 +1,44 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.cluster.service; + +import org.elasticsearch.common.Priority; +import org.elasticsearch.common.util.concurrent.PrioritizedRunnable; + +/** + * PrioritizedRunnable that also has a source string + */ +public abstract class SourcePrioritizedRunnable extends PrioritizedRunnable { + protected final String source; + + public SourcePrioritizedRunnable(Priority priority, String source) { + super(priority); + this.source = source; + } + + public String source() { + return source; + } + + @Override + public String toString() { + return "[" + source + "]"; + } +} diff --git a/core/src/main/java/org/elasticsearch/cluster/service/TaskBatcher.java b/core/src/main/java/org/elasticsearch/cluster/service/TaskBatcher.java new file mode 100644 index 0000000000000..867d4191f800f --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/service/TaskBatcher.java @@ -0,0 +1,207 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. 
See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.cluster.service; + +import org.apache.logging.log4j.Logger; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.Priority; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException; +import org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.IdentityHashMap; +import java.util.LinkedHashSet; +import java.util.List; +import java.util.Map; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.function.Function; +import java.util.stream.Collectors; + +/** + * Batching support for {@link PrioritizedEsThreadPoolExecutor} + * Tasks that share the same batching key are batched (see {@link BatchedTask#batchingKey}) + */ +public abstract class TaskBatcher { + + private final Logger logger; + private final PrioritizedEsThreadPoolExecutor threadExecutor; + // package visible for tests + final Map> tasksPerBatchingKey = new HashMap<>(); + + public TaskBatcher(Logger logger, PrioritizedEsThreadPoolExecutor threadExecutor) { + this.logger = logger; + this.threadExecutor = threadExecutor; + } + + public void submitTasks(List tasks, @Nullable TimeValue timeout) throws EsRejectedExecutionException { + if (tasks.isEmpty()) { + return; + } + final BatchedTask firstTask = tasks.get(0); + assert tasks.stream().allMatch(t -> t.batchingKey == firstTask.batchingKey) : + "tasks submitted in a batch should share the same batching key: " + tasks; + // convert to an identity map to check for dups based on task identity + final Map tasksIdentity = tasks.stream().collect(Collectors.toMap( + BatchedTask::getTask, + Function.identity(), + (a, b) -> { throw new IllegalStateException("cannot add duplicate task: " + a); }, + IdentityHashMap::new)); + + synchronized (tasksPerBatchingKey) { + LinkedHashSet existingTasks = tasksPerBatchingKey.computeIfAbsent(firstTask.batchingKey, + k -> new LinkedHashSet<>(tasks.size())); + for (BatchedTask existing : existingTasks) { + // check that there won't be two tasks with the same identity for the same batching key + BatchedTask duplicateTask = tasksIdentity.get(existing.getTask()); + if (duplicateTask != null) { + throw new IllegalStateException("task [" + duplicateTask.describeTasks( + Collections.singletonList(existing)) + "] with source [" + duplicateTask.source + "] is already queued"); + } + } + existingTasks.addAll(tasks); + } + + if (timeout != null) { + threadExecutor.execute(firstTask, timeout, () -> onTimeoutInternal(tasks, timeout)); + } else { + threadExecutor.execute(firstTask); + } + } + + private void onTimeoutInternal(List tasks, TimeValue timeout) { + final ArrayList toRemove = new ArrayList<>(); + for (BatchedTask task : tasks) { + if (task.processed.getAndSet(true) == false) { + logger.debug("task [{}] timed out after [{}]", task.source, timeout); + toRemove.add(task); + } + } + if (toRemove.isEmpty() == false) { + BatchedTask firstTask = toRemove.get(0); + Object batchingKey = firstTask.batchingKey; + assert tasks.stream().allMatch(t -> t.batchingKey == batchingKey) : + "tasks submitted in a batch should share the same batching key: " + tasks; + synchronized (tasksPerBatchingKey) { + LinkedHashSet existingTasks = tasksPerBatchingKey.get(batchingKey); + if (existingTasks != null) { + 
existingTasks.removeAll(toRemove); + if (existingTasks.isEmpty()) { + tasksPerBatchingKey.remove(batchingKey); + } + } + } + onTimeout(toRemove, timeout); + } + } + + /** + * Action to be implemented by the specific batching implementation. + * All tasks have the same batching key. + */ + protected abstract void onTimeout(List tasks, TimeValue timeout); + + void runIfNotProcessed(BatchedTask updateTask) { + // if this task is already processed, it shouldn't execute other tasks with same batching key that arrived later, + // to give other tasks with different batching key a chance to execute. + if (updateTask.processed.get() == false) { + final List toExecute = new ArrayList<>(); + final Map> processTasksBySource = new HashMap<>(); + synchronized (tasksPerBatchingKey) { + LinkedHashSet pending = tasksPerBatchingKey.remove(updateTask.batchingKey); + if (pending != null) { + for (BatchedTask task : pending) { + if (task.processed.getAndSet(true) == false) { + logger.trace("will process {}", task); + toExecute.add(task); + processTasksBySource.computeIfAbsent(task.source, s -> new ArrayList<>()).add(task); + } else { + logger.trace("skipping {}, already processed", task); + } + } + } + } + + if (toExecute.isEmpty() == false) { + final String tasksSummary = processTasksBySource.entrySet().stream().map(entry -> { + String tasks = updateTask.describeTasks(entry.getValue()); + return tasks.isEmpty() ? entry.getKey() : entry.getKey() + "[" + tasks + "]"; + }).reduce((s1, s2) -> s1 + ", " + s2).orElse(""); + + run(updateTask.batchingKey, toExecute, tasksSummary); + } + } + } + + /** + * Action to be implemented by the specific batching implementation + * All tasks have the given batching key. + */ + protected abstract void run(Object batchingKey, List tasks, String tasksSummary); + + /** + * Represents a runnable task that supports batching. + * Implementors of TaskBatcher can subclass this to add a payload to the task. 
+ */ + protected abstract class BatchedTask extends SourcePrioritizedRunnable { + /** + * whether the task has been processed already + */ + protected final AtomicBoolean processed = new AtomicBoolean(); + + /** + * the object that is used as batching key + */ + protected final Object batchingKey; + /** + * the task object that is wrapped + */ + protected final Object task; + + protected BatchedTask(Priority priority, String source, Object batchingKey, Object task) { + super(priority, source); + this.batchingKey = batchingKey; + this.task = task; + } + + @Override + public void run() { + runIfNotProcessed(this); + } + + @Override + public String toString() { + String taskDescription = describeTasks(Collections.singletonList(this)); + if (taskDescription.isEmpty()) { + return "[" + source + "]"; + } else { + return "[" + source + "[" + taskDescription + "]]"; + } + } + + public abstract String describeTasks(List tasks); + + public Object getTask() { + return task; + } + } +} diff --git a/core/src/main/java/org/elasticsearch/common/Booleans.java b/core/src/main/java/org/elasticsearch/common/Booleans.java index 9c5f574663363..90303545cfb60 100644 --- a/core/src/main/java/org/elasticsearch/common/Booleans.java +++ b/core/src/main/java/org/elasticsearch/common/Booleans.java @@ -24,30 +24,6 @@ */ public class Booleans { - /** - * Returns false if text is in false, 0, off, no; else, true - */ - public static boolean parseBoolean(char[] text, int offset, int length, boolean defaultValue) { - // TODO: the leniency here is very dangerous: a simple typo will be misinterpreted and the user won't know. - // We should remove it and cutover to https://github.com/rmuir/booleanparser - if (text == null || length == 0) { - return defaultValue; - } - if (length == 1) { - return text[offset] != '0'; - } - if (length == 2) { - return !(text[offset] == 'n' && text[offset + 1] == 'o'); - } - if (length == 3) { - return !(text[offset] == 'o' && text[offset + 1] == 'f' && text[offset + 2] == 'f'); - } - if (length == 5) { - return !(text[offset] == 'f' && text[offset + 1] == 'a' && text[offset + 2] == 'l' && text[offset + 3] == 's' && text[offset + 4] == 'e'); - } - return true; - } - /** * returns true if the a sequence of chars is one of "true","false","on","off","yes","no","0","1" * @@ -78,6 +54,17 @@ public static boolean isBoolean(char[] text, int offset, int length) { return false; } + public static boolean isBoolean(String text) { + return isExplicitTrue(text) || isExplicitFalse(text); + } + + public static Boolean parseBooleanExact(String value, Boolean defaultValue) { + if (Strings.hasText(value)) { + return parseBooleanExact(value); + } + return defaultValue; + } + /*** * * @return true/false @@ -96,6 +83,13 @@ public static Boolean parseBooleanExact(String value) { throw new IllegalArgumentException("Failed to parse value [" + value + "] cannot be parsed to boolean [ true/1/on/yes OR false/0/off/no ]"); } + /** + * @return true iff the provided value is either "true" or "false". + */ + public static boolean isStrictlyBoolean(String value) { + return "false".equals(value) || "true".equals(value); + } + public static Boolean parseBoolean(String value, Boolean defaultValue) { if (value == null) { // only for the null case we do that here! 
return defaultValue; diff --git a/core/src/main/java/org/elasticsearch/common/CheckedBiConsumer.java b/core/src/main/java/org/elasticsearch/common/CheckedBiConsumer.java new file mode 100644 index 0000000000000..3f8b76bf3653f --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/CheckedBiConsumer.java @@ -0,0 +1,30 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common; + +import java.util.function.BiConsumer; + +/** + * A {@link BiConsumer}-like interface which allows throwing checked exceptions. + */ +@FunctionalInterface +public interface CheckedBiConsumer { + void accept(T t, U u) throws E; +} diff --git a/core/src/main/java/org/elasticsearch/common/CheckedConsumer.java b/core/src/main/java/org/elasticsearch/common/CheckedConsumer.java new file mode 100644 index 0000000000000..1806f42eff0dd --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/CheckedConsumer.java @@ -0,0 +1,30 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common; + +import java.util.function.Consumer; + +/** + * A {@link Consumer}-like interface which allows throwing checked exceptions. + */ +@FunctionalInterface +public interface CheckedConsumer { + void accept(T t) throws E; +} diff --git a/core/src/main/java/org/elasticsearch/common/CheckedFunction.java b/core/src/main/java/org/elasticsearch/common/CheckedFunction.java new file mode 100644 index 0000000000000..4a2d222db0b7a --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/CheckedFunction.java @@ -0,0 +1,30 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common; + +import java.util.function.Function; + +/** + * A {@link Function}-like interface which allows throwing checked exceptions. + */ +@FunctionalInterface +public interface CheckedFunction { + R apply(T t) throws E; +} diff --git a/core/src/main/java/org/elasticsearch/common/CheckedRunnable.java b/core/src/main/java/org/elasticsearch/common/CheckedRunnable.java new file mode 100644 index 0000000000000..196eb53a878d5 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/CheckedRunnable.java @@ -0,0 +1,30 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common; + +import java.lang.Runnable; + +/** + * A {@link Runnable}-like interface which allows throwing checked exceptions. + */ +@FunctionalInterface +public interface CheckedRunnable { + void run() throws E; +} diff --git a/core/src/main/java/org/elasticsearch/common/CheckedSupplier.java b/core/src/main/java/org/elasticsearch/common/CheckedSupplier.java new file mode 100644 index 0000000000000..8f4d2edea0ceb --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/CheckedSupplier.java @@ -0,0 +1,30 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common; + +import java.util.function.Supplier; + +/** + * A {@link Supplier}-like interface which allows throwing checked exceptions. 
+ */ +@FunctionalInterface +public interface CheckedSupplier { + R get() throws E; +} diff --git a/core/src/main/java/org/elasticsearch/common/FieldMemoryStats.java b/core/src/main/java/org/elasticsearch/common/FieldMemoryStats.java new file mode 100644 index 0000000000000..a09895fdbedce --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/FieldMemoryStats.java @@ -0,0 +1,132 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common; + +import com.carrotsearch.hppc.ObjectLongHashMap; +import com.carrotsearch.hppc.cursors.ObjectLongCursor; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.Iterator; +import java.util.Objects; + +/** + * A reusable class to encode field -> memory size mappings + */ +public final class FieldMemoryStats implements Writeable, Iterable>{ + + private final ObjectLongHashMap stats; + + /** + * Creates a new FieldMemoryStats instance + */ + public FieldMemoryStats(ObjectLongHashMap stats) { + this.stats = Objects.requireNonNull(stats, "status must be non-null"); + assert !stats.containsKey(null); + } + + /** + * Creates a new FieldMemoryStats instance from a stream + */ + public FieldMemoryStats(StreamInput input) throws IOException { + int size = input.readVInt(); + stats = new ObjectLongHashMap<>(size); + for (int i = 0; i < size; i++) { + stats.put(input.readString(), input.readVLong()); + } + } + + /** + * Adds / merges the given field memory stats into this stats instance + */ + public void add(FieldMemoryStats fieldMemoryStats) { + for (ObjectLongCursor entry : fieldMemoryStats.stats) { + stats.addTo(entry.key, entry.value); + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeVInt(stats.size()); + for (ObjectLongCursor entry : stats) { + out.writeString(entry.key); + out.writeVLong(entry.value); + } + } + + /** + * Generates x-content into the given builder for each of the fields in this stats instance + * @param builder the builder to generated on + * @param key the top level key for this stats object + * @param rawKey the raw byte key for each of the fields byte sizes + * @param readableKey the readable key for each of the fields byte sizes + */ + public void toXContent(XContentBuilder builder, String key, String rawKey, String readableKey) throws IOException { + builder.startObject(key); + for (ObjectLongCursor entry : stats) { + builder.startObject(entry.key); + builder.byteSizeField(rawKey, readableKey, entry.value); + builder.endObject(); + } + builder.endObject(); + } + + /** + * Creates a deep copy of this 
stats instance + */ + public FieldMemoryStats copy() { + return new FieldMemoryStats(stats.clone()); + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + FieldMemoryStats that = (FieldMemoryStats) o; + return Objects.equals(stats, that.stats); + } + + @Override + public int hashCode() { + return Objects.hash(stats); + } + + @Override + public Iterator> iterator() { + return stats.iterator(); + } + + /** + * Returns the fields value in bytes or 0 if it's not present in the stats + */ + public long get(String field) { + return stats.get(field); + } + + /** + * Returns true iff the given field is in the stats + */ + public boolean containsField(String field) { + return stats.containsKey(field); + } +} diff --git a/core/src/main/java/org/elasticsearch/common/Numbers.java b/core/src/main/java/org/elasticsearch/common/Numbers.java index 52d0337ef7300..1735a0dfa6570 100644 --- a/core/src/main/java/org/elasticsearch/common/Numbers.java +++ b/core/src/main/java/org/elasticsearch/common/Numbers.java @@ -21,6 +21,9 @@ import org.apache.lucene.util.BytesRef; +import java.math.BigDecimal; +import java.math.BigInteger; + /** * A set of utilities for numbers. */ @@ -178,4 +181,56 @@ public static boolean isValidDouble(double value) { } return true; } + + /** Return the long that {@code n} stores, or throws an exception if the + * stored value cannot be converted to a long that stores the exact same + * value. */ + public static long toLongExact(Number n) { + if (n instanceof Byte || n instanceof Short || n instanceof Integer + || n instanceof Long) { + return n.longValue(); + } else if (n instanceof Float || n instanceof Double) { + double d = n.doubleValue(); + if (d != Math.round(d)) { + throw new IllegalArgumentException(n + " is not an integer value"); + } + return n.longValue(); + } else if (n instanceof BigDecimal) { + return ((BigDecimal) n).toBigIntegerExact().longValueExact(); + } else if (n instanceof BigInteger) { + return ((BigInteger) n).longValueExact(); + } else { + throw new IllegalArgumentException("Cannot check whether [" + n + "] of class [" + n.getClass().getName() + + "] is actually a long"); + } + } + + /** Return the int that {@code n} stores, or throws an exception if the + * stored value cannot be converted to an int that stores the exact same + * value. */ + public static int toIntExact(Number n) { + return Math.toIntExact(toLongExact(n)); + } + + /** Return the short that {@code n} stores, or throws an exception if the + * stored value cannot be converted to a short that stores the exact same + * value. */ + public static short toShortExact(Number n) { + long l = toLongExact(n); + if (l != (short) l) { + throw new ArithmeticException("short overflow: " + l); + } + return (short) l; + } + + /** Return the byte that {@code n} stores, or throws an exception if the + * stored value cannot be converted to a byte that stores the exact same + * value. 
*/ + public static byte toByteExact(Number n) { + long l = toLongExact(n); + if (l != (byte) l) { + throw new ArithmeticException("byte overflow: " + l); + } + return (byte) l; + } } diff --git a/core/src/main/java/org/elasticsearch/common/ParseField.java b/core/src/main/java/org/elasticsearch/common/ParseField.java index 7121be7d1d880..fc9377eeb2f20 100644 --- a/core/src/main/java/org/elasticsearch/common/ParseField.java +++ b/core/src/main/java/org/elasticsearch/common/ParseField.java @@ -101,14 +101,10 @@ public ParseField withAllDeprecated(String allReplacedWith) { /** * @param fieldName * the field name to match against this {@link ParseField} - * @param strict - * if true an exception will be thrown if a deprecated field name - * is given. If false the deprecated name will be matched but a - * message will also be logged to the {@link DeprecationLogger} * @return true if fieldName matches any of the acceptable * names for this {@link ParseField}. */ - boolean match(String fieldName, boolean strict) { + public boolean match(String fieldName) { Objects.requireNonNull(fieldName, "fieldName cannot be null"); // if this parse field has not been completely deprecated then try to // match the preferred name @@ -128,11 +124,7 @@ boolean match(String fieldName, boolean strict) { // message to indicate what should be used instead msg = "Deprecated field [" + fieldName + "] used, replaced by [" + allReplacedWith + "]"; } - if (strict) { - throw new IllegalArgumentException(msg); - } else { - DEPRECATION_LOGGER.deprecated(msg); - } + DEPRECATION_LOGGER.deprecated(msg); return true; } } diff --git a/core/src/main/java/org/elasticsearch/common/ParseFieldMatcher.java b/core/src/main/java/org/elasticsearch/common/ParseFieldMatcher.java deleted file mode 100644 index 9866694a230ef..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/ParseFieldMatcher.java +++ /dev/null @@ -1,59 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.common; - -import org.elasticsearch.common.settings.Settings; - -/** - * Matcher to use in combination with {@link ParseField} while parsing requests. Matches a {@link ParseField} - * against a field name and throw deprecation exception depending on the current value of the {@link #PARSE_STRICT} setting. 
- */ -public class ParseFieldMatcher { - public static final String PARSE_STRICT = "index.query.parse.strict"; - public static final ParseFieldMatcher EMPTY = new ParseFieldMatcher(false); - public static final ParseFieldMatcher STRICT = new ParseFieldMatcher(true); - - private final boolean strict; - - public ParseFieldMatcher(Settings settings) { - this(settings.getAsBoolean(PARSE_STRICT, false)); - } - - public ParseFieldMatcher(boolean strict) { - this.strict = strict; - } - - /** Should deprecated settings be rejected? */ - public boolean isStrict() { - return strict; - } - - /** - * Matches a {@link ParseField} against a field name, and throws deprecation exception depending on the current - * value of the {@link #PARSE_STRICT} setting. - * @param fieldName the field name found in the request while parsing - * @param parseField the parse field that we are looking for - * @throws IllegalArgumentException whenever we are in strict mode and the request contained a deprecated field - * @return true whenever the parse field that we are looking for was found, false otherwise - */ - public boolean match(String fieldName, ParseField parseField) { - return parseField.match(fieldName, strict); - } -} diff --git a/core/src/main/java/org/elasticsearch/common/ParseFieldMatcherSupplier.java b/core/src/main/java/org/elasticsearch/common/ParseFieldMatcherSupplier.java deleted file mode 100644 index 672890c2b9725..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/ParseFieldMatcherSupplier.java +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.common; - -import org.elasticsearch.index.query.QueryParseContext; -import org.elasticsearch.index.query.QueryShardContext; - -/** - * This interface should be implemented by classes like {@link QueryParseContext} or {@link QueryShardContext} that - * are able to carry a {@link ParseFieldMatcher}. 
- */ -@FunctionalInterface -public interface ParseFieldMatcherSupplier { - - /** - * @return the parseFieldMatcher - */ - ParseFieldMatcher getParseFieldMatcher(); -} diff --git a/core/src/main/java/org/elasticsearch/common/ParsingException.java b/core/src/main/java/org/elasticsearch/common/ParsingException.java index 0519ab38339c3..5dc2c8a74e4c5 100644 --- a/core/src/main/java/org/elasticsearch/common/ParsingException.java +++ b/core/src/main/java/org/elasticsearch/common/ParsingException.java @@ -95,12 +95,11 @@ public RestStatus status() { } @Override - protected void innerToXContent(XContentBuilder builder, Params params) throws IOException { + protected void metadataToXContent(XContentBuilder builder, Params params) throws IOException { if (lineNumber != UNKNOWN_POSITION) { builder.field("line", lineNumber); builder.field("col", columnNumber); } - super.innerToXContent(builder, params); } @Override diff --git a/core/src/main/java/org/elasticsearch/common/Strings.java b/core/src/main/java/org/elasticsearch/common/Strings.java index 955b836ca1cbf..ce1cda593c82c 100644 --- a/core/src/main/java/org/elasticsearch/common/Strings.java +++ b/core/src/main/java/org/elasticsearch/common/Strings.java @@ -734,10 +734,12 @@ public static Set commaDelimitedListToSet(String str) { * @return the delimited String */ public static String collectionToDelimitedString(Iterable coll, String delim, String prefix, String suffix) { - return collectionToDelimitedString(coll, delim, prefix, suffix, new StringBuilder()); + StringBuilder sb = new StringBuilder(); + collectionToDelimitedString(coll, delim, prefix, suffix, sb); + return sb.toString(); } - public static String collectionToDelimitedString(Iterable coll, String delim, String prefix, String suffix, StringBuilder sb) { + public static void collectionToDelimitedString(Iterable coll, String delim, String prefix, String suffix, StringBuilder sb) { Iterator it = coll.iterator(); while (it.hasNext()) { sb.append(prefix).append(it.next()).append(suffix); @@ -745,7 +747,6 @@ public static String collectionToDelimitedString(Iterable coll, String delim, sb.append(delim); } } - return sb.toString(); } /** @@ -780,12 +781,14 @@ public static String collectionToCommaDelimitedString(Iterable coll) { * @return the delimited String */ public static String arrayToDelimitedString(Object[] arr, String delim) { - return arrayToDelimitedString(arr, delim, new StringBuilder()); + StringBuilder sb = new StringBuilder(); + arrayToDelimitedString(arr, delim, sb); + return sb.toString(); } - public static String arrayToDelimitedString(Object[] arr, String delim, StringBuilder sb) { + public static void arrayToDelimitedString(Object[] arr, String delim, StringBuilder sb) { if (isEmpty(arr)) { - return ""; + return; } for (int i = 0; i < arr.length; i++) { if (i > 0) { @@ -793,7 +796,6 @@ public static String arrayToDelimitedString(Object[] arr, String delim, StringBu } sb.append(arr[i]); } - return sb.toString(); } /** @@ -880,26 +882,17 @@ public static boolean isAllOrWildcard(String[] data) { } /** - * Return a {@link String} that is the json representation of the provided - * {@link ToXContent}. + * Return a {@link String} that is the json representation of the provided {@link ToXContent}. + * Wraps the output into an anonymous object. */ public static String toString(ToXContent toXContent) { - return toString(toXContent, false); - } - - /** - * Return a {@link String} that is the json representation of the provided - * {@link ToXContent}. 
- * @param wrapInObject set this to true if the ToXContent instance expects to be inside an object - */ - public static String toString(ToXContent toXContent, boolean wrapInObject) { try { XContentBuilder builder = JsonXContent.contentBuilder(); - if (wrapInObject) { + if (toXContent.isFragment()) { builder.startObject(); } toXContent.toXContent(builder, ToXContent.EMPTY_PARAMS); - if (wrapInObject) { + if (toXContent.isFragment()) { builder.endObject(); } return builder.string(); diff --git a/core/src/main/java/org/elasticsearch/common/Table.java b/core/src/main/java/org/elasticsearch/common/Table.java index ab0252b11dc33..430070ee19cca 100644 --- a/core/src/main/java/org/elasticsearch/common/Table.java +++ b/core/src/main/java/org/elasticsearch/common/Table.java @@ -30,8 +30,6 @@ import static java.util.Collections.emptyMap; -/** - */ public class Table { private List headers = new ArrayList<>(); @@ -197,6 +195,22 @@ public Cell findHeaderByName(String header) { return null; } + public Map getAliasMap() { + Map headerAliasMap = new HashMap<>(); + for (int i = 0; i < headers.size(); i++) { + Cell headerCell = headers.get(i); + String headerName = headerCell.value.toString(); + if (headerCell.attr.containsKey("alias")) { + String[] aliases = Strings.splitStringByCommaToArray(headerCell.attr.get("alias")); + for (String alias : aliases) { + headerAliasMap.put(alias, headerName); + } + } + headerAliasMap.put(headerName, headerName); + } + return headerAliasMap; + } + public static class Cell { public final Object value; public final Map attr; diff --git a/core/src/main/java/org/elasticsearch/common/blobstore/BlobContainer.java b/core/src/main/java/org/elasticsearch/common/blobstore/BlobContainer.java index 1427f34a642a7..a04c75941e7b6 100644 --- a/core/src/main/java/org/elasticsearch/common/blobstore/BlobContainer.java +++ b/core/src/main/java/org/elasticsearch/common/blobstore/BlobContainer.java @@ -105,8 +105,11 @@ public interface BlobContainer { Map listBlobsByPrefix(String blobNamePrefix) throws IOException; /** - * Atomically renames the source blob into the target blob. If the source blob does not exist or the - * target blob already exists, an exception is thrown. + * Renames the source blob into the target blob. If the source blob does not exist or the + * target blob already exists, an exception is thrown. Atomicity of the move operation + * can only be guaranteed on an implementation-by-implementation basis. The only current + * implementation of {@link BlobContainer} for which atomicity can be guaranteed is the + * {@link org.elasticsearch.common.blobstore.fs.FsBlobContainer}. * * @param sourceBlobName * The blob to rename. 
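The `Strings.toString(ToXContent)` rework above drops the caller-supplied `wrapInObject` flag and instead wraps the output in an anonymous object whenever `ToXContent#isFragment()` reports a fragment. A minimal, illustrative sketch of that behaviour (not part of this change; it assumes this version of the `org.elasticsearch` classes is on the classpath, and the `ToStringExample` class name is made up):

```java
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.xcontent.ToXContent;

public class ToStringExample {
    public static void main(String[] args) {
        // A fragment emits bare fields and reports isFragment() == true (the ToXContent default),
        // so Strings.toString is expected to wrap it in an anonymous object itself.
        ToXContent fragment = (builder, params) -> builder.field("took", 3);
        System.out.println(Strings.toString(fragment)); // expected output: {"took":3}
    }
}
```

Callers that previously passed `wrapInObject=true` can rely on the fragment check instead, so the boolean overload is no longer needed.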
diff --git a/core/src/main/java/org/elasticsearch/common/blobstore/BlobPath.java b/core/src/main/java/org/elasticsearch/common/blobstore/BlobPath.java index 9092e13eb1b40..ea02aebb0aaad 100644 --- a/core/src/main/java/org/elasticsearch/common/blobstore/BlobPath.java +++ b/core/src/main/java/org/elasticsearch/common/blobstore/BlobPath.java @@ -55,15 +55,14 @@ public String[] toArray() { } public BlobPath add(String path) { - List paths = new ArrayList<>(); - paths.addAll(this.paths); + List paths = new ArrayList<>(this.paths); paths.add(path); return new BlobPath(Collections.unmodifiableList(paths)); } public String buildAsString() { String p = String.join(SEPARATOR, paths); - if (p.isEmpty()) { + if (p.isEmpty() || p.endsWith(SEPARATOR)) { return p; } return p + SEPARATOR; diff --git a/core/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobStore.java b/core/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobStore.java index ad9fe257cf77f..725535ecadbd8 100644 --- a/core/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobStore.java +++ b/core/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobStore.java @@ -46,7 +46,7 @@ public FsBlobStore(Settings settings, Path path) throws IOException { super(settings); this.path = path; Files.createDirectories(path); - this.bufferSizeInBytes = (int) settings.getAsBytesSize("repositories.fs.buffer_size", new ByteSizeValue(100, ByteSizeUnit.KB)).bytes(); + this.bufferSizeInBytes = (int) settings.getAsBytesSize("repositories.fs.buffer_size", new ByteSizeValue(100, ByteSizeUnit.KB)).getBytes(); } @Override diff --git a/core/src/main/java/org/elasticsearch/common/blobstore/url/URLBlobContainer.java b/core/src/main/java/org/elasticsearch/common/blobstore/url/URLBlobContainer.java index 537031ef7783a..ede57d461a13d 100644 --- a/core/src/main/java/org/elasticsearch/common/blobstore/url/URLBlobContainer.java +++ b/core/src/main/java/org/elasticsearch/common/blobstore/url/URLBlobContainer.java @@ -24,9 +24,11 @@ import org.elasticsearch.common.blobstore.support.AbstractBlobContainer; import java.io.BufferedInputStream; +import java.io.FileNotFoundException; import java.io.IOException; import java.io.InputStream; import java.net.URL; +import java.nio.file.NoSuchFileException; import java.util.Map; /** @@ -99,7 +101,11 @@ public boolean blobExists(String blobName) { @Override public InputStream readBlob(String name) throws IOException { - return new BufferedInputStream(new URL(path, name).openStream(), blobStore.bufferSizeInBytes()); + try { + return new BufferedInputStream(new URL(path, name).openStream(), blobStore.bufferSizeInBytes()); + } catch (FileNotFoundException fnfe) { + throw new NoSuchFileException("[" + name + "] blob not found"); + } } @Override diff --git a/core/src/main/java/org/elasticsearch/common/blobstore/url/URLBlobStore.java b/core/src/main/java/org/elasticsearch/common/blobstore/url/URLBlobStore.java index 813a12571ec35..2386f79c076f1 100644 --- a/core/src/main/java/org/elasticsearch/common/blobstore/url/URLBlobStore.java +++ b/core/src/main/java/org/elasticsearch/common/blobstore/url/URLBlobStore.java @@ -55,7 +55,7 @@ public class URLBlobStore extends AbstractComponent implements BlobStore { public URLBlobStore(Settings settings, URL path) { super(settings); this.path = path; - this.bufferSizeInBytes = (int) settings.getAsBytesSize("repositories.uri.buffer_size", new ByteSizeValue(100, ByteSizeUnit.KB)).bytes(); + this.bufferSizeInBytes = (int) settings.getAsBytesSize("repositories.uri.buffer_size", new 
ByteSizeValue(100, ByteSizeUnit.KB)).getBytes(); } /** diff --git a/core/src/main/java/org/elasticsearch/common/breaker/ChildMemoryCircuitBreaker.java b/core/src/main/java/org/elasticsearch/common/breaker/ChildMemoryCircuitBreaker.java index 68bf52e9e0dba..e4019f9f6656d 100644 --- a/core/src/main/java/org/elasticsearch/common/breaker/ChildMemoryCircuitBreaker.java +++ b/core/src/main/java/org/elasticsearch/common/breaker/ChildMemoryCircuitBreaker.java @@ -90,12 +90,12 @@ public ChildMemoryCircuitBreaker(BreakerSettings settings, ChildMemoryCircuitBre @Override public void circuitBreak(String fieldName, long bytesNeeded) { this.trippedCount.incrementAndGet(); - final String message = "[" + this.name + "] Data too large, data for [" + - fieldName + "] would be larger than limit of [" + + final String message = "[" + this.name + "] Data too large, data for [" + fieldName + "]" + + " would be [" + bytesNeeded + "/" + new ByteSizeValue(bytesNeeded) + "]" + + ", which is larger than the limit of [" + memoryBytesLimit + "/" + new ByteSizeValue(memoryBytesLimit) + "]"; logger.debug("{}", message); - throw new CircuitBreakingException(message, - bytesNeeded, this.memoryBytesLimit); + throw new CircuitBreakingException(message, bytesNeeded, memoryBytesLimit); } /** diff --git a/core/src/main/java/org/elasticsearch/common/breaker/CircuitBreakingException.java b/core/src/main/java/org/elasticsearch/common/breaker/CircuitBreakingException.java index e700d30164480..e01fe1beee224 100644 --- a/core/src/main/java/org/elasticsearch/common/breaker/CircuitBreakingException.java +++ b/core/src/main/java/org/elasticsearch/common/breaker/CircuitBreakingException.java @@ -73,9 +73,8 @@ public RestStatus status() { } @Override - protected void innerToXContent(XContentBuilder builder, Params params) throws IOException { + protected void metadataToXContent(XContentBuilder builder, Params params) throws IOException { builder.field("bytes_wanted", bytesWanted); builder.field("bytes_limit", byteLimit); - super.innerToXContent(builder, params); } } diff --git a/core/src/main/java/org/elasticsearch/common/breaker/MemoryCircuitBreaker.java b/core/src/main/java/org/elasticsearch/common/breaker/MemoryCircuitBreaker.java index 3ac4a52994d26..dbd1fe92ffe9a 100644 --- a/core/src/main/java/org/elasticsearch/common/breaker/MemoryCircuitBreaker.java +++ b/core/src/main/java/org/elasticsearch/common/breaker/MemoryCircuitBreaker.java @@ -57,7 +57,7 @@ public MemoryCircuitBreaker(ByteSizeValue limit, double overheadConstant, Logger * @param oldBreaker the previous circuit breaker to inherit the used value from (starting offset) */ public MemoryCircuitBreaker(ByteSizeValue limit, double overheadConstant, MemoryCircuitBreaker oldBreaker, Logger logger) { - this.memoryBytesLimit = limit.bytes(); + this.memoryBytesLimit = limit.getBytes(); this.overheadConstant = overheadConstant; if (oldBreaker == null) { this.used = new AtomicLong(0); @@ -79,7 +79,9 @@ public MemoryCircuitBreaker(ByteSizeValue limit, double overheadConstant, Memory @Override public void circuitBreak(String fieldName, long bytesNeeded) throws CircuitBreakingException { this.trippedCount.incrementAndGet(); - final String message = "Data too large, data for field [" + fieldName + "] would be larger than limit of [" + + final String message = "[" + getName() + "] Data too large, data for field [" + fieldName + "]" + + " would be [" + bytesNeeded + "/" + new ByteSizeValue(bytesNeeded) + "]" + + ", which is larger than the limit of [" + memoryBytesLimit + "/" + new 
ByteSizeValue(memoryBytesLimit) + "]"; logger.debug("{}", message); throw new CircuitBreakingException(message, bytesNeeded, memoryBytesLimit); diff --git a/core/src/main/java/org/elasticsearch/common/bytes/BytesArray.java b/core/src/main/java/org/elasticsearch/common/bytes/BytesArray.java index 43c1df588b170..9b78c2fe5a788 100644 --- a/core/src/main/java/org/elasticsearch/common/bytes/BytesArray.java +++ b/core/src/main/java/org/elasticsearch/common/bytes/BytesArray.java @@ -20,12 +20,6 @@ package org.elasticsearch.common.bytes; import org.apache.lucene.util.BytesRef; -import org.elasticsearch.common.io.stream.StreamInput; - -import java.io.IOException; -import java.io.OutputStream; -import java.nio.charset.StandardCharsets; -import java.util.Arrays; public final class BytesArray extends BytesReference { diff --git a/core/src/main/java/org/elasticsearch/common/bytes/BytesReference.java b/core/src/main/java/org/elasticsearch/common/bytes/BytesReference.java index f31ea2bbf8214..92632ad7874fd 100644 --- a/core/src/main/java/org/elasticsearch/common/bytes/BytesReference.java +++ b/core/src/main/java/org/elasticsearch/common/bytes/BytesReference.java @@ -23,6 +23,7 @@ import org.apache.lucene.util.BytesRefIterator; import org.elasticsearch.common.io.stream.StreamInput; +import java.io.EOFException; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; @@ -215,6 +216,7 @@ private static void advance(final BytesRef ref, final int length) { * that way. */ private static final class MarkSupportingStreamInputWrapper extends StreamInput { + // can't use FilterStreamInput it needs to reset the delegate private final BytesReference reference; private BytesReferenceStreamInput input; private int mark = 0; @@ -254,6 +256,11 @@ public int available() throws IOException { return input.available(); } + @Override + protected void ensureCanReadBytes(int length) throws EOFException { + input.ensureCanReadBytes(length); + } + @Override public void reset() throws IOException { input = new BytesReferenceStreamInput(reference.iterator(), reference.length()); diff --git a/core/src/main/java/org/elasticsearch/common/bytes/BytesReferenceStreamInput.java b/core/src/main/java/org/elasticsearch/common/bytes/BytesReferenceStreamInput.java index 4426ea53efabd..f7f1cdc650199 100644 --- a/core/src/main/java/org/elasticsearch/common/bytes/BytesReferenceStreamInput.java +++ b/core/src/main/java/org/elasticsearch/common/bytes/BytesReferenceStreamInput.java @@ -32,17 +32,17 @@ */ final class BytesReferenceStreamInput extends StreamInput { private final BytesRefIterator iterator; - private int sliceOffset; + private int sliceIndex; private BytesRef slice; private final int length; // the total size of the stream private int offset; // the current position of the stream - public BytesReferenceStreamInput(BytesRefIterator iterator, final int length) throws IOException { + BytesReferenceStreamInput(BytesRefIterator iterator, final int length) throws IOException { this.iterator = iterator; this.slice = iterator.next(); this.length = length; this.offset = 0; - this.sliceOffset = 0; + this.sliceIndex = 0; } @Override @@ -51,15 +51,15 @@ public byte readByte() throws IOException { throw new EOFException(); } maybeNextSlice(); - byte b = slice.bytes[slice.offset + (sliceOffset++)]; + byte b = slice.bytes[slice.offset + (sliceIndex++)]; offset++; return b; } private void maybeNextSlice() throws IOException { - while (sliceOffset == slice.length) { + while (sliceIndex == slice.length) { slice = 
iterator.next(); - sliceOffset = 0; + sliceIndex = 0; if (slice == null) { throw new EOFException(); } @@ -92,12 +92,12 @@ public int read(final byte[] b, final int bOffset, final int len) throws IOExcep int destOffset = bOffset; while (remaining > 0) { maybeNextSlice(); - final int currentLen = Math.min(remaining, slice.length - sliceOffset); + final int currentLen = Math.min(remaining, slice.length - sliceIndex); assert currentLen > 0 : "length has to be > 0 to make progress but was: " + currentLen; - System.arraycopy(slice.bytes, slice.offset + sliceOffset, b, destOffset, currentLen); + System.arraycopy(slice.bytes, slice.offset + sliceIndex, b, destOffset, currentLen); destOffset += currentLen; remaining -= currentLen; - sliceOffset += currentLen; + sliceIndex += currentLen; offset += currentLen; assert remaining >= 0 : "remaining: " + remaining; } @@ -114,6 +114,14 @@ public int available() throws IOException { return length - offset; } + @Override + protected void ensureCanReadBytes(int bytesToRead) throws EOFException { + int bytesAvailable = length - offset; + if (bytesAvailable < bytesToRead) { + throw new EOFException("tried to read: " + bytesToRead + " bytes but only " + bytesAvailable + " remaining"); + } + } + @Override public long skip(long n) throws IOException { final int skip = (int) Math.min(Integer.MAX_VALUE, n); @@ -121,9 +129,9 @@ public long skip(long n) throws IOException { int remaining = numBytesSkipped; while (remaining > 0) { maybeNextSlice(); - int currentLen = Math.min(remaining, slice.length - (slice.offset + sliceOffset)); + int currentLen = Math.min(remaining, slice.length - sliceIndex); remaining -= currentLen; - sliceOffset += currentLen; + sliceIndex += currentLen; offset += currentLen; assert remaining >= 0 : "remaining: " + remaining; } diff --git a/core/src/main/java/org/elasticsearch/common/bytes/ReleasablePagedBytesReference.java b/core/src/main/java/org/elasticsearch/common/bytes/ReleasablePagedBytesReference.java index 2700ea4dc135e..ac90e546f7eb5 100644 --- a/core/src/main/java/org/elasticsearch/common/bytes/ReleasablePagedBytesReference.java +++ b/core/src/main/java/org/elasticsearch/common/bytes/ReleasablePagedBytesReference.java @@ -30,13 +30,17 @@ */ public final class ReleasablePagedBytesReference extends PagedBytesReference implements Releasable { - public ReleasablePagedBytesReference(BigArrays bigarrays, ByteArray byteArray, int length) { + private final Releasable releasable; + + public ReleasablePagedBytesReference(BigArrays bigarrays, ByteArray byteArray, int length, + Releasable releasable) { super(bigarrays, byteArray, length); + this.releasable = releasable; } @Override public void close() { - Releasables.close(byteArray); + Releasables.close(releasable); } } diff --git a/core/src/main/java/org/elasticsearch/common/cache/Cache.java b/core/src/main/java/org/elasticsearch/common/cache/Cache.java index a42d01ccf723c..91d011ba03cad 100644 --- a/core/src/main/java/org/elasticsearch/common/cache/Cache.java +++ b/core/src/main/java/org/elasticsearch/common/cache/Cache.java @@ -34,6 +34,8 @@ import java.util.concurrent.locks.ReentrantLock; import java.util.concurrent.locks.ReentrantReadWriteLock; import java.util.function.BiFunction; +import java.util.function.Consumer; +import java.util.function.Predicate; import java.util.function.ToLongBiFunction; /** @@ -67,13 +69,13 @@ */ public class Cache { // positive if entries have an expiration - private long expireAfterAccess = -1; + private long expireAfterAccessNanos = -1; // true if entries 
can expire after access private boolean entriesExpireAfterAccess; // positive if entries have an expiration after write - private long expireAfterWrite = -1; + private long expireAfterWriteNanos = -1; // true if entries can expire after initial insertion private boolean entriesExpireAfterWrite; @@ -98,22 +100,32 @@ public class Cache { Cache() { } - void setExpireAfterAccess(long expireAfterAccess) { - if (expireAfterAccess <= 0) { - throw new IllegalArgumentException("expireAfterAccess <= 0"); + void setExpireAfterAccessNanos(long expireAfterAccessNanos) { + if (expireAfterAccessNanos <= 0) { + throw new IllegalArgumentException("expireAfterAccessNanos <= 0"); } - this.expireAfterAccess = expireAfterAccess; + this.expireAfterAccessNanos = expireAfterAccessNanos; this.entriesExpireAfterAccess = true; } - void setExpireAfterWrite(long expireAfterWrite) { - if (expireAfterWrite <= 0) { - throw new IllegalArgumentException("expireAfterWrite <= 0"); + // pkg-private for testing + long getExpireAfterAccessNanos() { + return this.expireAfterAccessNanos; + } + + void setExpireAfterWriteNanos(long expireAfterWriteNanos) { + if (expireAfterWriteNanos <= 0) { + throw new IllegalArgumentException("expireAfterWriteNanos <= 0"); } - this.expireAfterWrite = expireAfterWrite; + this.expireAfterWriteNanos = expireAfterWriteNanos; this.entriesExpireAfterWrite = true; } + // pkg-private for testing + long getExpireAfterWriteNanos() { + return this.expireAfterWriteNanos; + } + void setMaximumWeight(long maximumWeight) { if (maximumWeight < 0) { throw new IllegalArgumentException("maximumWeight < 0"); @@ -156,7 +168,7 @@ static class Entry { Entry after; State state = State.NEW; - public Entry(K key, V value, long writeTime) { + Entry(K key, V value, long writeTime) { this.key = key; this.value = value; this.writeTime = this.accessTime = writeTime; @@ -183,33 +195,40 @@ private static class CacheSegment { SegmentStats segmentStats = new SegmentStats(); /** - * get an entry from the segment + * get an entry from the segment; expired entries will be returned as null but not removed from the cache until the LRU list is + * pruned or a manual {@link Cache#refresh()} is performed however a caller can take action using the provided callback * - * @param key the key of the entry to get from the cache - * @param now the access time of this entry + * @param key the key of the entry to get from the cache + * @param now the access time of this entry + * @param isExpired test if the entry is expired + * @param onExpiration a callback if the entry associated to the key is expired * @return the entry if there was one, otherwise null */ - Entry get(K key, long now) { + Entry get(K key, long now, Predicate> isExpired, Consumer> onExpiration) { CompletableFuture> future; Entry entry = null; try (ReleasableLock ignored = readLock.acquire()) { future = map.get(key); } if (future != null) { - try { - entry = future.handle((ok, ex) -> { - if (ok != null) { - segmentStats.hit(); - ok.accessTime = now; - return ok; - } else { - segmentStats.miss(); - return null; - } - }).get(); - } catch (ExecutionException | InterruptedException e) { - throw new IllegalStateException(e); - } + try { + entry = future.handle((ok, ex) -> { + if (ok != null && !isExpired.test(ok)) { + segmentStats.hit(); + ok.accessTime = now; + return ok; + } else { + segmentStats.miss(); + if (ok != null) { + assert isExpired.test(ok); + onExpiration.accept(ok); + } + return null; + } + }).get(); + } catch (ExecutionException | InterruptedException e) { + throw new 
IllegalStateException(e); + } } else { segmentStats.miss(); @@ -317,13 +336,13 @@ void eviction() { * @return the value to which the specified key is mapped, or null if this map contains no mapping for the key */ public V get(K key) { - return get(key, now()); + return get(key, now(), e -> {}); } - private V get(K key, long now) { + private V get(K key, long now, Consumer> onExpiration) { CacheSegment segment = getCacheSegment(key); - Entry entry = segment.get(key, now); - if (entry == null || isExpired(entry, now)) { + Entry entry = segment.get(key, now, e -> isExpired(e, now), onExpiration); + if (entry == null) { return null; } else { promote(entry, now); @@ -336,15 +355,23 @@ private V get(K key, long now) { * value using the given mapping function and enters it into this map unless null. The load method for a given key * will be invoked at most once. * + * Use of different {@link CacheLoader} implementations on the same key concurrently may result in only the first + * loader function being called and the second will be returned the result provided by the first including any exceptions + * thrown during the execution of the first. + * * @param key the key whose associated value is to be returned or computed for if non-existent * @param loader the function to compute a value given a key - * @return the current (existing or computed) value associated with the specified key, or null if the computed - * value is null - * @throws ExecutionException thrown if loader throws an exception + * @return the current (existing or computed) non-null value associated with the specified key + * @throws ExecutionException thrown if loader throws an exception or returns a null value */ public V computeIfAbsent(K key, CacheLoader loader) throws ExecutionException { long now = now(); - V value = get(key, now); + // we have to eagerly evict expired entries or our putIfAbsent call below will fail + V value = get(key, now, e -> { + try (ReleasableLock ignored = lruLock.acquire()) { + evictEntry(e); + } + }); if (value == null) { // we need to synchronize loading of a value for a given key; however, holding the segment lock while // invoking load can lead to deadlock against another thread due to dependent key loading; therefore, we @@ -400,6 +427,11 @@ public V computeIfAbsent(K key, CacheLoader loader) throws ExecutionExcept try { value = completableValue.get(); + // check to ensure the future hasn't been completed with an exception + if (future.isCompletedExceptionally()) { + future.get(); // call get to force the exception to be thrown for other concurrent callers + throw new IllegalStateException("the future was completed exceptionally but no exception was thrown"); + } } catch (InterruptedException e) { throw new IllegalStateException(e); } @@ -670,13 +702,18 @@ private void evict(long now) { assert lruLock.isHeldByCurrentThread(); while (tail != null && shouldPrune(tail, now)) { - CacheSegment segment = getCacheSegment(tail.key); - Entry entry = tail; - if (segment != null) { - segment.remove(tail.key); - } - delete(entry, RemovalNotification.RemovalReason.EVICTED); + evictEntry(tail); + } + } + + private void evictEntry(Entry entry) { + assert lruLock.isHeldByCurrentThread(); + + CacheSegment segment = getCacheSegment(entry.key); + if (segment != null) { + segment.remove(entry.key); } + delete(entry, RemovalNotification.RemovalReason.EVICTED); } private void delete(Entry entry, RemovalNotification.RemovalReason removalReason) { @@ -696,8 +733,8 @@ private boolean exceedsWeight() { } private boolean 
isExpired(Entry entry, long now) { - return (entriesExpireAfterAccess && now - entry.accessTime > expireAfterAccess) || - (entriesExpireAfterWrite && now - entry.writeTime > expireAfterWrite); + return (entriesExpireAfterAccess && now - entry.accessTime > expireAfterAccessNanos) || + (entriesExpireAfterWrite && now - entry.writeTime > expireAfterWriteNanos); } private boolean unlink(Entry entry) { diff --git a/core/src/main/java/org/elasticsearch/common/cache/CacheBuilder.java b/core/src/main/java/org/elasticsearch/common/cache/CacheBuilder.java index ffb0e591180ea..67c8d508ba572 100644 --- a/core/src/main/java/org/elasticsearch/common/cache/CacheBuilder.java +++ b/core/src/main/java/org/elasticsearch/common/cache/CacheBuilder.java @@ -19,13 +19,15 @@ package org.elasticsearch.common.cache; +import org.elasticsearch.common.unit.TimeValue; + import java.util.Objects; import java.util.function.ToLongBiFunction; public class CacheBuilder { private long maximumWeight = -1; - private long expireAfterAccess = -1; - private long expireAfterWrite = -1; + private long expireAfterAccessNanos = -1; + private long expireAfterWriteNanos = -1; private ToLongBiFunction weigher; private RemovalListener removalListener; @@ -44,19 +46,35 @@ public CacheBuilder setMaximumWeight(long maximumWeight) { return this; } - public CacheBuilder setExpireAfterAccess(long expireAfterAccess) { - if (expireAfterAccess <= 0) { + /** + * Sets the amount of time before an entry in the cache expires after it was last accessed. + * + * @param expireAfterAccess The amount of time before an entry expires after it was last accessed. Must not be {@code null} and must + * be greater than 0. + */ + public CacheBuilder setExpireAfterAccess(TimeValue expireAfterAccess) { + Objects.requireNonNull(expireAfterAccess); + final long expireAfterAccessNanos = expireAfterAccess.getNanos(); + if (expireAfterAccessNanos <= 0) { throw new IllegalArgumentException("expireAfterAccess <= 0"); } - this.expireAfterAccess = expireAfterAccess; + this.expireAfterAccessNanos = expireAfterAccessNanos; return this; } - public CacheBuilder setExpireAfterWrite(long expireAfterWrite) { - if (expireAfterWrite <= 0) { + /** + * Sets the amount of time before an entry in the cache expires after it was written. + * + * @param expireAfterWrite The amount of time before an entry expires after it was written. Must not be {@code null} and must be + * greater than 0. 
+ */ + public CacheBuilder setExpireAfterWrite(TimeValue expireAfterWrite) { + Objects.requireNonNull(expireAfterWrite); + final long expireAfterWriteNanos = expireAfterWrite.getNanos(); + if (expireAfterWriteNanos <= 0) { throw new IllegalArgumentException("expireAfterWrite <= 0"); } - this.expireAfterWrite = expireAfterWrite; + this.expireAfterWriteNanos = expireAfterWriteNanos; return this; } @@ -77,11 +95,11 @@ public Cache build() { if (maximumWeight != -1) { cache.setMaximumWeight(maximumWeight); } - if (expireAfterAccess != -1) { - cache.setExpireAfterAccess(expireAfterAccess); + if (expireAfterAccessNanos != -1) { + cache.setExpireAfterAccessNanos(expireAfterAccessNanos); } - if (expireAfterWrite != -1) { - cache.setExpireAfterWrite(expireAfterWrite); + if (expireAfterWriteNanos != -1) { + cache.setExpireAfterWriteNanos(expireAfterWriteNanos); } if (weigher != null) { cache.setWeigher(weigher); diff --git a/core/src/main/java/org/elasticsearch/common/collect/CopyOnWriteHashMap.java b/core/src/main/java/org/elasticsearch/common/collect/CopyOnWriteHashMap.java index 5316768673660..85d7eda836395 100644 --- a/core/src/main/java/org/elasticsearch/common/collect/CopyOnWriteHashMap.java +++ b/core/src/main/java/org/elasticsearch/common/collect/CopyOnWriteHashMap.java @@ -433,7 +433,7 @@ private static class EntryIterator implements Iterator> { private final Deque> entries; private final Deque> nodes; - public EntryIterator(Node node) { + EntryIterator(Node node) { entries = new ArrayDeque<>(); nodes = new ArrayDeque<>(); node.visit(entries, nodes); diff --git a/core/src/main/java/org/elasticsearch/common/collect/Iterators.java b/core/src/main/java/org/elasticsearch/common/collect/Iterators.java index d44bf7341c47b..a8c811a0d067d 100644 --- a/core/src/main/java/org/elasticsearch/common/collect/Iterators.java +++ b/core/src/main/java/org/elasticsearch/common/collect/Iterators.java @@ -36,7 +36,7 @@ static class ConcatenatedIterator implements Iterator { private final Iterator[] iterators; private int index = 0; - public ConcatenatedIterator(Iterator... iterators) { + ConcatenatedIterator(Iterator... iterators) { if (iterators == null) { throw new NullPointerException("iterators"); } diff --git a/core/src/main/java/org/elasticsearch/common/component/AbstractLifecycleComponent.java b/core/src/main/java/org/elasticsearch/common/component/AbstractLifecycleComponent.java index 6f1534b57d831..49e8b40b350c7 100644 --- a/core/src/main/java/org/elasticsearch/common/component/AbstractLifecycleComponent.java +++ b/core/src/main/java/org/elasticsearch/common/component/AbstractLifecycleComponent.java @@ -21,6 +21,7 @@ import org.elasticsearch.common.settings.Settings; +import java.io.IOException; import java.util.List; import java.util.concurrent.CopyOnWriteArrayList; @@ -104,11 +105,17 @@ public void close() { listener.beforeClose(); } lifecycle.moveToClosed(); - doClose(); + try { + doClose(); + } catch (IOException e) { + // TODO: we need to separate out closing (ie shutting down) services, vs releasing runtime transient + // structures. 
Shutting down services should use IOUtils.close + logger.warn("failed to close " + getClass().getName(), e); + } for (LifecycleListener listener : listeners) { listener.afterClose(); } } - protected abstract void doClose(); + protected abstract void doClose() throws IOException; } diff --git a/core/src/main/java/org/elasticsearch/common/compress/Compressor.java b/core/src/main/java/org/elasticsearch/common/compress/Compressor.java index 883078dafe8ce..edd6f2fde7b03 100644 --- a/core/src/main/java/org/elasticsearch/common/compress/Compressor.java +++ b/core/src/main/java/org/elasticsearch/common/compress/Compressor.java @@ -20,6 +20,7 @@ package org.elasticsearch.common.compress; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.io.stream.ReleasableBytesStreamOutput; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -33,5 +34,9 @@ public interface Compressor { StreamInput streamInput(StreamInput in) throws IOException; + /** + * Creates a new stream output that compresses the contents and writes to the provided stream + * output. Closing the returned {@link StreamOutput} will close the provided stream output. + */ StreamOutput streamOutput(StreamOutput out) throws IOException; } diff --git a/core/src/main/java/org/elasticsearch/common/compress/CompressorFactory.java b/core/src/main/java/org/elasticsearch/common/compress/CompressorFactory.java index 82e049704cc82..0bcc54d3d2992 100644 --- a/core/src/main/java/org/elasticsearch/common/compress/CompressorFactory.java +++ b/core/src/main/java/org/elasticsearch/common/compress/CompressorFactory.java @@ -28,6 +28,7 @@ import org.elasticsearch.common.xcontent.XContentType; import java.io.IOException; +import java.util.Objects; /** */ @@ -70,9 +71,10 @@ private static boolean isAncient(BytesReference bytes) { /** * Uncompress the provided data, data can be detected as compressed using {@link #isCompressed(BytesReference)}. 
+ * @throws NullPointerException a NullPointerException will be thrown when bytes is null */ public static BytesReference uncompressIfNeeded(BytesReference bytes) throws IOException { - Compressor compressor = compressor(bytes); + Compressor compressor = compressor(Objects.requireNonNull(bytes, "the BytesReference must not be null")); BytesReference uncompressed; if (compressor != null) { uncompressed = uncompress(bytes, compressor); diff --git a/core/src/main/java/org/elasticsearch/common/compress/DeflateCompressor.java b/core/src/main/java/org/elasticsearch/common/compress/DeflateCompressor.java index 42e2efa358cfa..794a8db4960c6 100644 --- a/core/src/main/java/org/elasticsearch/common/compress/DeflateCompressor.java +++ b/core/src/main/java/org/elasticsearch/common/compress/DeflateCompressor.java @@ -20,7 +20,6 @@ package org.elasticsearch.common.compress; import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.compress.Compressor; import org.elasticsearch.common.io.stream.InputStreamStreamInput; import org.elasticsearch.common.io.stream.OutputStreamStreamOutput; import org.elasticsearch.common.io.stream.StreamInput; @@ -47,7 +46,7 @@ public class DeflateCompressor implements Compressor { // It needs to be different from other compressors and to not be specific // enough so that no stream starting with these bytes could be detected as // a XContent - private static final byte[] HEADER = new byte[] { 'D', 'F', 'L', '\0' }; + private static final byte[] HEADER = new byte[]{'D', 'F', 'L', '\0'}; // 3 is a good trade-off between speed and compression ratio private static final int LEVEL = 3; // We use buffering on the input and output of in/def-laters in order to @@ -88,6 +87,7 @@ public StreamInput streamInput(StreamInput in) throws IOException { decompressedIn = new BufferedInputStream(decompressedIn, BUFFER_SIZE); return new InputStreamStreamInput(decompressedIn) { final AtomicBoolean closed = new AtomicBoolean(false); + public void close() throws IOException { try { super.close(); @@ -107,10 +107,11 @@ public StreamOutput streamOutput(StreamOutput out) throws IOException { final boolean nowrap = true; final Deflater deflater = new Deflater(LEVEL, nowrap); final boolean syncFlush = true; - OutputStream compressedOut = new DeflaterOutputStream(out, deflater, BUFFER_SIZE, syncFlush); - compressedOut = new BufferedOutputStream(compressedOut, BUFFER_SIZE); + DeflaterOutputStream deflaterOutputStream = new DeflaterOutputStream(out, deflater, BUFFER_SIZE, syncFlush); + OutputStream compressedOut = new BufferedOutputStream(deflaterOutputStream, BUFFER_SIZE); return new OutputStreamStreamOutput(compressedOut) { final AtomicBoolean closed = new AtomicBoolean(false); + public void close() throws IOException { try { super.close(); diff --git a/core/src/main/java/org/elasticsearch/common/geo/GeoDistance.java b/core/src/main/java/org/elasticsearch/common/geo/GeoDistance.java index f63636174f6ce..6b77d6dae6489 100644 --- a/core/src/main/java/org/elasticsearch/common/geo/GeoDistance.java +++ b/core/src/main/java/org/elasticsearch/common/geo/GeoDistance.java @@ -19,18 +19,13 @@ package org.elasticsearch.common.geo; -import org.apache.lucene.util.Bits; -import org.apache.lucene.util.SloppyMath; +import org.elasticsearch.Version; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.logging.DeprecationLogger; +import 
org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.unit.DistanceUnit; -import org.elasticsearch.index.fielddata.FieldData; -import org.elasticsearch.index.fielddata.GeoPointValues; -import org.elasticsearch.index.fielddata.MultiGeoPointValues; -import org.elasticsearch.index.fielddata.NumericDoubleValues; -import org.elasticsearch.index.fielddata.SortedNumericDoubleValues; -import org.elasticsearch.index.fielddata.SortingNumericDoubleValues; import java.io.IOException; import java.util.Locale; @@ -39,164 +34,57 @@ * Geo distance calculation. */ public enum GeoDistance implements Writeable { - /** - * Calculates distance as points on a plane. Faster, but less accurate than {@link #ARC}. - * @deprecated use {@link GeoUtils#planeDistance} - */ - @Deprecated - PLANE { - @Override - public double calculate(double sourceLatitude, double sourceLongitude, double targetLatitude, double targetLongitude, DistanceUnit unit) { - double px = targetLongitude - sourceLongitude; - double py = targetLatitude - sourceLatitude; - return Math.sqrt(px * px + py * py) * unit.getDistancePerDegree(); - } - - @Override - public double normalize(double distance, DistanceUnit unit) { - return distance; - } - - @Override - public FixedSourceDistance fixedSourceDistance(double sourceLatitude, double sourceLongitude, DistanceUnit unit) { - return new PlaneFixedSourceDistance(sourceLatitude, sourceLongitude, unit); - } - }, - - /** - * Calculates distance factor. - * Note: {@code calculate} is simply returning the RHS of the spherical law of cosines from 2 lat,lon points. - * {@code normalize} also returns the RHS of the spherical law of cosines for a given distance - * @deprecated use {@link SloppyMath#haversinMeters} to get distance in meters, law of cosines is being removed - */ - @Deprecated - FACTOR { - @Override - public double calculate(double sourceLatitude, double sourceLongitude, double targetLatitude, double targetLongitude, DistanceUnit unit) { - double longitudeDifference = targetLongitude - sourceLongitude; - double a = Math.toRadians(90D - sourceLatitude); - double c = Math.toRadians(90D - targetLatitude); - return (Math.cos(a) * Math.cos(c)) + (Math.sin(a) * Math.sin(c) * Math.cos(Math.toRadians(longitudeDifference))); - } - - @Override - public double normalize(double distance, DistanceUnit unit) { - return Math.cos(distance / unit.getEarthRadius()); - } - - @Override - public FixedSourceDistance fixedSourceDistance(double sourceLatitude, double sourceLongitude, DistanceUnit unit) { - return new FactorFixedSourceDistance(sourceLatitude, sourceLongitude); - } - }, - /** - * Calculates distance as points on a globe. - * @deprecated use {@link GeoUtils#arcDistance} - */ - @Deprecated - ARC { - @Override - public double calculate(double sourceLatitude, double sourceLongitude, double targetLatitude, double targetLongitude, DistanceUnit unit) { - double result = SloppyMath.haversinMeters(sourceLatitude, sourceLongitude, targetLatitude, targetLongitude); - return unit.fromMeters(result); - } - - @Override - public double normalize(double distance, DistanceUnit unit) { - return distance; - } - - @Override - public FixedSourceDistance fixedSourceDistance(double sourceLatitude, double sourceLongitude, DistanceUnit unit) { - return new ArcFixedSourceDistance(sourceLatitude, sourceLongitude, unit); - } - }, - /** - * Calculates distance as points on a globe in a sloppy way. Close to the pole areas the accuracy - * of this function decreases. 
- */ - @Deprecated - SLOPPY_ARC { + PLANE, ARC; - @Override - public double normalize(double distance, DistanceUnit unit) { - return distance; - } - - @Override - public double calculate(double sourceLatitude, double sourceLongitude, double targetLatitude, double targetLongitude, DistanceUnit unit) { - return unit.fromMeters(SloppyMath.haversinMeters(sourceLatitude, sourceLongitude, targetLatitude, targetLongitude)); - } - - @Override - public FixedSourceDistance fixedSourceDistance(double sourceLatitude, double sourceLongitude, DistanceUnit unit) { - return new SloppyArcFixedSourceDistance(sourceLatitude, sourceLongitude, unit); - } - }; + private static final DeprecationLogger DEPRECATION_LOGGER = + new DeprecationLogger(Loggers.getLogger(GeoDistance.class)); + /** Creates a GeoDistance instance from an input stream */ public static GeoDistance readFromStream(StreamInput in) throws IOException { + Version clientVersion = in.getVersion(); int ord = in.readVInt(); + // bwc client deprecation for FACTOR and SLOPPY_ARC + if (clientVersion.before(Version.V_5_3_3)) { + switch (ord) { + case 0: return PLANE; + case 1: // FACTOR uses PLANE + // bwc client deprecation for FACTOR + DEPRECATION_LOGGER.deprecated("[factor] is deprecated. Using [plane] instead."); + return PLANE; + case 2: return ARC; + case 3: // SLOPPY_ARC uses ARC + // bwc client deprecation for SLOPPY_ARC + DEPRECATION_LOGGER.deprecated("[sloppy_arc] is deprecated. Using [arc] instead."); + return ARC; + default: + throw new IOException("Unknown GeoDistance ordinal [" + ord + "]"); + } + } + if (ord < 0 || ord >= values().length) { throw new IOException("Unknown GeoDistance ordinal [" + ord + "]"); } return GeoDistance.values()[ord]; } + /** Writes an instance of a GeoDistance object to an output stream */ @Override public void writeTo(StreamOutput out) throws IOException { - out.writeVInt(this.ordinal()); - } - - /** - * Default {@link GeoDistance} function. This method should be used, If no specific function has been selected. 
- * This is an alias for SLOPPY_ARC - */ - @Deprecated - public static final GeoDistance DEFAULT = SLOPPY_ARC; - - public abstract double normalize(double distance, DistanceUnit unit); - - public abstract double calculate(double sourceLatitude, double sourceLongitude, double targetLatitude, double targetLongitude, DistanceUnit unit); - - public abstract FixedSourceDistance fixedSourceDistance(double sourceLatitude, double sourceLongitude, DistanceUnit unit); - - private static final double MIN_LAT = Math.toRadians(-90d); // -PI/2 - private static final double MAX_LAT = Math.toRadians(90d); // PI/2 - private static final double MIN_LON = Math.toRadians(-180d); // -PI - private static final double MAX_LON = Math.toRadians(180d); // PI - - public static DistanceBoundingCheck distanceBoundingCheck(double sourceLatitude, double sourceLongitude, double distance, DistanceUnit unit) { - // angular distance in radians on a great circle - // assume worst-case: use the minor axis - double radDist = unit.toMeters(distance) / GeoUtils.EARTH_SEMI_MINOR_AXIS; - - double radLat = Math.toRadians(sourceLatitude); - double radLon = Math.toRadians(sourceLongitude); - - double minLat = radLat - radDist; - double maxLat = radLat + radDist; - - double minLon, maxLon; - if (minLat > MIN_LAT && maxLat < MAX_LAT) { - double deltaLon = Math.asin(Math.sin(radDist) / Math.cos(radLat)); - minLon = radLon - deltaLon; - if (minLon < MIN_LON) minLon += 2d * Math.PI; - maxLon = radLon + deltaLon; - if (maxLon > MAX_LON) maxLon -= 2d * Math.PI; - } else { - // a pole is within the distance - minLat = Math.max(minLat, MIN_LAT); - maxLat = Math.min(maxLat, MAX_LAT); - minLon = MIN_LON; - maxLon = MAX_LON; + Version clientVersion = out.getVersion(); + int ord = this.ordinal(); + if (clientVersion.before(Version.V_5_3_3)) { + switch (ord) { + case 0: + out.write(0); // write PLANE ordinal + return; + case 1: + out.write(2); // write bwc ARC ordinal + return; + default: + throw new IOException("Unknown GeoDistance ordinal [" + ord + "]"); + } } - - GeoPoint topLeft = new GeoPoint(Math.toDegrees(maxLat), Math.toDegrees(minLon)); - GeoPoint bottomRight = new GeoPoint(Math.toDegrees(minLat), Math.toDegrees(maxLon)); - if (minLon > maxLon) { - return new Meridian180DistanceBoundingCheck(topLeft, bottomRight); - } - return new SimpleDistanceBoundingCheck(topLeft, bottomRight); + out.writeVInt(this.ordinal()); } /** @@ -204,8 +92,6 @@ public static DistanceBoundingCheck distanceBoundingCheck(double sourceLatitude, * *
      * <ul>
      *     <li><b>plane</b> for <code>GeoDistance.PLANE</code></li>
-     *     <li><b>sloppy_arc</b> for <code>GeoDistance.SLOPPY_ARC</code></li>
-     *     <li><b>factor</b> for <code>GeoDistance.FACTOR</code></li>
      *     <li><b>arc</b> for <code>GeoDistance.ARC</code></li>
      * </ul>
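With only `PLANE` and `ARC` remaining, resolving a name and computing a distance reduce to the two methods in this hunk. A small illustrative sketch, with arbitrary example coordinates:

```java
GeoDistance resolved = GeoDistance.fromString("sloppy_arc");   // logs a deprecation warning, resolves to ARC
double km = resolved.calculate(52.52, 13.40, 48.86, 2.35, DistanceUnit.KILOMETERS);

// PLANE is cheaper but only reasonable for short distances (see the GeoUtils#planeDistance javadoc below)
double meters = GeoDistance.PLANE.calculate(52.520, 13.400, 52.521, 13.401, DistanceUnit.METERS);
```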
    * @@ -216,224 +102,21 @@ public static GeoDistance fromString(String name) { name = name.toLowerCase(Locale.ROOT); if ("plane".equals(name)) { return PLANE; + } else if ("sloppy_arc".equals(name)) { + DEPRECATION_LOGGER.deprecated("[sloppy_arc] is deprecated. Use [arc] instead."); + return ARC; } else if ("arc".equals(name)) { return ARC; - } else if ("sloppy_arc".equals(name)) { - return SLOPPY_ARC; - } else if ("factor".equals(name)) { - return FACTOR; } throw new IllegalArgumentException("No geo distance for [" + name + "]"); } - public interface FixedSourceDistance { - - double calculate(double targetLatitude, double targetLongitude); - } - - public interface DistanceBoundingCheck { - - boolean isWithin(double targetLatitude, double targetLongitude); - - GeoPoint topLeft(); - - GeoPoint bottomRight(); - } - - public static final AlwaysDistanceBoundingCheck ALWAYS_INSTANCE = new AlwaysDistanceBoundingCheck(); - - private static class AlwaysDistanceBoundingCheck implements DistanceBoundingCheck { - @Override - public boolean isWithin(double targetLatitude, double targetLongitude) { - return true; - } - - @Override - public GeoPoint topLeft() { - return null; - } - - @Override - public GeoPoint bottomRight() { - return null; - } - } - - public static class Meridian180DistanceBoundingCheck implements DistanceBoundingCheck { - - private final GeoPoint topLeft; - private final GeoPoint bottomRight; - - public Meridian180DistanceBoundingCheck(GeoPoint topLeft, GeoPoint bottomRight) { - this.topLeft = topLeft; - this.bottomRight = bottomRight; - } - - @Override - public boolean isWithin(double targetLatitude, double targetLongitude) { - return (targetLatitude >= bottomRight.lat() && targetLatitude <= topLeft.lat()) && - (targetLongitude >= topLeft.lon() || targetLongitude <= bottomRight.lon()); - } - - @Override - public GeoPoint topLeft() { - return topLeft; - } - - @Override - public GeoPoint bottomRight() { - return bottomRight; - } - } - - public static class SimpleDistanceBoundingCheck implements DistanceBoundingCheck { - private final GeoPoint topLeft; - private final GeoPoint bottomRight; - - public SimpleDistanceBoundingCheck(GeoPoint topLeft, GeoPoint bottomRight) { - this.topLeft = topLeft; - this.bottomRight = bottomRight; - } - - @Override - public boolean isWithin(double targetLatitude, double targetLongitude) { - return (targetLatitude >= bottomRight.lat() && targetLatitude <= topLeft.lat()) && - (targetLongitude >= topLeft.lon() && targetLongitude <= bottomRight.lon()); - } - - @Override - public GeoPoint topLeft() { - return topLeft; - } - - @Override - public GeoPoint bottomRight() { - return bottomRight; - } - } - - public static class PlaneFixedSourceDistance implements FixedSourceDistance { - - private final double sourceLatitude; - private final double sourceLongitude; - private final double distancePerDegree; - - public PlaneFixedSourceDistance(double sourceLatitude, double sourceLongitude, DistanceUnit unit) { - this.sourceLatitude = sourceLatitude; - this.sourceLongitude = sourceLongitude; - this.distancePerDegree = unit.getDistancePerDegree(); - } - - @Override - public double calculate(double targetLatitude, double targetLongitude) { - double px = targetLongitude - sourceLongitude; - double py = targetLatitude - sourceLatitude; - return Math.sqrt(px * px + py * py) * distancePerDegree; - } - } - - public static class FactorFixedSourceDistance implements FixedSourceDistance { - - private final double sourceLongitude; - - private final double a; - private final 
double sinA; - private final double cosA; - - public FactorFixedSourceDistance(double sourceLatitude, double sourceLongitude) { - this.sourceLongitude = sourceLongitude; - this.a = Math.toRadians(90D - sourceLatitude); - this.sinA = Math.sin(a); - this.cosA = Math.cos(a); - } - - @Override - public double calculate(double targetLatitude, double targetLongitude) { - double longitudeDifference = targetLongitude - sourceLongitude; - double c = Math.toRadians(90D - targetLatitude); - return (cosA * Math.cos(c)) + (sinA * Math.sin(c) * Math.cos(Math.toRadians(longitudeDifference))); - } - } - - /** - * Basic implementation of {@link FixedSourceDistance}. This class keeps the basic parameters for a distance - * functions based on a fixed source. Namely latitude, longitude and unit. - */ - public abstract static class FixedSourceDistanceBase implements FixedSourceDistance { - protected final double sourceLatitude; - protected final double sourceLongitude; - protected final DistanceUnit unit; - - public FixedSourceDistanceBase(double sourceLatitude, double sourceLongitude, DistanceUnit unit) { - this.sourceLatitude = sourceLatitude; - this.sourceLongitude = sourceLongitude; - this.unit = unit; - } - } - - public static class ArcFixedSourceDistance extends FixedSourceDistanceBase { - - public ArcFixedSourceDistance(double sourceLatitude, double sourceLongitude, DistanceUnit unit) { - super(sourceLatitude, sourceLongitude, unit); - } - - @Override - public double calculate(double targetLatitude, double targetLongitude) { - return ARC.calculate(sourceLatitude, sourceLongitude, targetLatitude, targetLongitude, unit); - } - - } - - public static class SloppyArcFixedSourceDistance extends FixedSourceDistanceBase { - - public SloppyArcFixedSourceDistance(double sourceLatitude, double sourceLongitude, DistanceUnit unit) { - super(sourceLatitude, sourceLongitude, unit); - } - - @Override - public double calculate(double targetLatitude, double targetLongitude) { - return SLOPPY_ARC.calculate(sourceLatitude, sourceLongitude, targetLatitude, targetLongitude, unit); - } - } - - - /** - * Return a {@link SortedNumericDoubleValues} instance that returns the distances to a list of geo-points for each document. - */ - public static SortedNumericDoubleValues distanceValues(final MultiGeoPointValues geoPointValues, final FixedSourceDistance... 
distances) { - final GeoPointValues singleValues = FieldData.unwrapSingleton(geoPointValues); - if (singleValues != null && distances.length == 1) { - final Bits docsWithField = FieldData.unwrapSingletonBits(geoPointValues); - return FieldData.singleton(new NumericDoubleValues() { - - @Override - public double get(int docID) { - if (docsWithField != null && !docsWithField.get(docID)) { - return 0d; - } - final GeoPoint point = singleValues.get(docID); - return distances[0].calculate(point.lat(), point.lon()); - } - - }, docsWithField); - } else { - return new SortingNumericDoubleValues() { - - @Override - public void setDocument(int doc) { - geoPointValues.setDocument(doc); - resize(geoPointValues.count() * distances.length); - int valueCounter = 0; - for (FixedSourceDistance distance : distances) { - for (int i = 0; i < geoPointValues.count(); ++i) { - final GeoPoint point = geoPointValues.valueAt(i); - values[valueCounter] = distance.calculate(point.lat(), point.lon()); - valueCounter++; - } - } - sort(); - } - }; + /** compute the distance between two points using the selected algorithm (PLANE, ARC) */ + public double calculate(double srcLat, double srcLon, double dstLat, double dstLon, DistanceUnit unit) { + if (this == PLANE) { + return DistanceUnit.convert(GeoUtils.planeDistance(srcLat, srcLon, dstLat, dstLon), + DistanceUnit.METERS, unit); } + return DistanceUnit.convert(GeoUtils.arcDistance(srcLat, srcLon, dstLat, dstLon), DistanceUnit.METERS, unit); } } diff --git a/core/src/main/java/org/elasticsearch/common/geo/GeoPoint.java b/core/src/main/java/org/elasticsearch/common/geo/GeoPoint.java index 504fc41313702..15e2fb4fabbe5 100644 --- a/core/src/main/java/org/elasticsearch/common/geo/GeoPoint.java +++ b/core/src/main/java/org/elasticsearch/common/geo/GeoPoint.java @@ -19,8 +19,15 @@ package org.elasticsearch.common.geo; +import org.apache.lucene.document.LatLonDocValuesField; +import org.apache.lucene.document.LatLonPoint; +import org.apache.lucene.geo.GeoEncodingUtils; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.spatial.geopoint.document.GeoPointField; import org.apache.lucene.util.BitUtil; +import org.apache.lucene.util.BytesRef; + +import java.util.Arrays; import static org.elasticsearch.common.geo.GeoHashUtils.mortonEncode; import static org.elasticsearch.common.geo.GeoHashUtils.stringEncode; @@ -88,6 +95,24 @@ public GeoPoint resetFromIndexHash(long hash) { return this; } + // todo this is a crutch because LatLonPoint doesn't have a helper for returning .stringValue() + // todo remove with next release of lucene + public GeoPoint resetFromIndexableField(IndexableField field) { + if (field instanceof LatLonPoint) { + BytesRef br = field.binaryValue(); + byte[] bytes = Arrays.copyOfRange(br.bytes, br.offset, br.length); + return this.reset( + GeoEncodingUtils.decodeLatitude(bytes, 0), + GeoEncodingUtils.decodeLongitude(bytes, Integer.BYTES)); + } else if (field instanceof LatLonDocValuesField) { + long encoded = (long)(field.numericValue()); + return this.reset( + GeoEncodingUtils.decodeLatitude((int)(encoded >>> 32)), + GeoEncodingUtils.decodeLongitude((int)encoded)); + } + return resetFromIndexHash(Long.parseLong(field.stringValue())); + } + public GeoPoint resetFromGeoHash(String geohash) { final long hash = mortonEncode(geohash); return this.reset(GeoPointField.decodeLatitude(hash), GeoPointField.decodeLongitude(hash)); diff --git a/core/src/main/java/org/elasticsearch/common/geo/GeoUtils.java 
b/core/src/main/java/org/elasticsearch/common/geo/GeoUtils.java index b81720057c655..0c223539418af 100644 --- a/core/src/main/java/org/elasticsearch/common/geo/GeoUtils.java +++ b/core/src/main/java/org/elasticsearch/common/geo/GeoUtils.java @@ -19,14 +19,23 @@ package org.elasticsearch.common.geo; +import org.apache.lucene.geo.Rectangle; import org.apache.lucene.spatial.prefix.tree.GeohashPrefixTree; import org.apache.lucene.spatial.prefix.tree.QuadPrefixTree; +import org.apache.lucene.util.Bits; import org.apache.lucene.util.SloppyMath; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.unit.DistanceUnit; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentParser.Token; import org.elasticsearch.index.mapper.GeoPointFieldMapper; +import org.elasticsearch.index.fielddata.FieldData; +import org.elasticsearch.index.fielddata.GeoPointValues; +import org.elasticsearch.index.fielddata.MultiGeoPointValues; +import org.elasticsearch.index.fielddata.NumericDoubleValues; +import org.elasticsearch.index.fielddata.SortedNumericDoubleValues; +import org.elasticsearch.index.fielddata.SortingNumericDoubleValues; +import org.elasticsearch.index.mapper.GeoPointFieldMapper; import java.io.IOException; @@ -484,7 +493,8 @@ public static double arcDistance(double lat1, double lon1, double lat2, double l /** * Return the distance (in meters) between 2 lat,lon geo points using a simple tangential plane - * this provides a faster alternative to {@link GeoUtils#arcDistance} when points are within 5 km + * this provides a faster alternative to {@link GeoUtils#arcDistance} but is innaccurate for distances greater than + * 4 decimal degrees */ public static double planeDistance(double lat1, double lon1, double lat2, double lon2) { double x = (lon2 - lon1) * SloppyMath.TO_RADIANS * Math.cos((lat2 + lat1) / 2.0 * SloppyMath.TO_RADIANS); @@ -492,6 +502,62 @@ public static double planeDistance(double lat1, double lon1, double lat2, double return Math.sqrt(x * x + y * y) * EARTH_MEAN_RADIUS; } + /** check if point is within a rectangle + * todo: move this to lucene Rectangle class + */ + public static boolean rectangleContainsPoint(Rectangle r, double lat, double lon) { + if (lat >= r.minLat && lat <= r.maxLat) { + // if rectangle crosses the dateline we only check if the lon is >= min or max + return r.crossesDateline() ? lon >= r.minLon || lon <= r.maxLon : lon >= r.minLon && lon <= r.maxLon; + } + return false; + } + + /** + * Return a {@link SortedNumericDoubleValues} instance that returns the distances to a list of geo-points + * for each document. + */ + public static SortedNumericDoubleValues distanceValues(final GeoDistance distance, + final DistanceUnit unit, + final MultiGeoPointValues geoPointValues, + final GeoPoint... 
fromPoints) { + final GeoPointValues singleValues = FieldData.unwrapSingleton(geoPointValues); + if (singleValues != null && fromPoints.length == 1) { + final Bits docsWithField = FieldData.unwrapSingletonBits(geoPointValues); + return FieldData.singleton(new NumericDoubleValues() { + + @Override + public double get(int docID) { + if (docsWithField != null && !docsWithField.get(docID)) { + return 0d; + } + final GeoPoint to = singleValues.get(docID); + final GeoPoint from = fromPoints[0]; + return distance.calculate(from.lat(), from.lon(), to.lat(), to.lon(), unit); + } + + }, docsWithField); + } else { + return new SortingNumericDoubleValues() { + + @Override + public void setDocument(int doc) { + geoPointValues.setDocument(doc); + resize(geoPointValues.count() * fromPoints.length); + int v = 0; + for (GeoPoint from : fromPoints) { + for (int i = 0; i < geoPointValues.count(); ++i) { + final GeoPoint point = geoPointValues.valueAt(i); + values[v] = distance.calculate(from.lat(), from.lon(), point.lat(), point.lon(), unit); + v++; + } + } + sort(); + } + }; + } + } + private GeoUtils() { } } diff --git a/core/src/main/java/org/elasticsearch/common/geo/ShapeRelation.java b/core/src/main/java/org/elasticsearch/common/geo/ShapeRelation.java index e9966834a0119..e83e18ce43255 100644 --- a/core/src/main/java/org/elasticsearch/common/geo/ShapeRelation.java +++ b/core/src/main/java/org/elasticsearch/common/geo/ShapeRelation.java @@ -44,16 +44,12 @@ public enum ShapeRelation implements Writeable { } public static ShapeRelation readFromStream(StreamInput in) throws IOException { - int ordinal = in.readVInt(); - if (ordinal < 0 || ordinal >= values().length) { - throw new IOException("Unknown ShapeRelation ordinal [" + ordinal + "]"); - } - return values()[ordinal]; + return in.readEnum(ShapeRelation.class); } @Override public void writeTo(StreamOutput out) throws IOException { - out.writeVInt(ordinal()); + out.writeEnum(this); } public static ShapeRelation getRelationByName(String name) { diff --git a/core/src/main/java/org/elasticsearch/common/geo/SpatialStrategy.java b/core/src/main/java/org/elasticsearch/common/geo/SpatialStrategy.java index e1b0356b68636..d578dda7bcf65 100644 --- a/core/src/main/java/org/elasticsearch/common/geo/SpatialStrategy.java +++ b/core/src/main/java/org/elasticsearch/common/geo/SpatialStrategy.java @@ -34,7 +34,7 @@ public enum SpatialStrategy implements Writeable { private final String strategyName; - private SpatialStrategy(String strategyName) { + SpatialStrategy(String strategyName) { this.strategyName = strategyName; } @@ -43,16 +43,12 @@ public String getStrategyName() { } public static SpatialStrategy readFromStream(StreamInput in) throws IOException { - int ordinal = in.readVInt(); - if (ordinal < 0 || ordinal >= values().length) { - throw new IOException("Unknown SpatialStrategy ordinal [" + ordinal + "]"); - } - return values()[ordinal]; + return in.readEnum(SpatialStrategy.class); } @Override public void writeTo(StreamOutput out) throws IOException { - out.writeVInt(ordinal()); + out.writeEnum(this); } public static SpatialStrategy fromString(String strategyName) { diff --git a/core/src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java b/core/src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java index cb2f8bb4e78af..8f40d265acfc9 100644 --- a/core/src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java +++ b/core/src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java @@ -23,6 +23,7 @@ import 
com.vividsolutions.jts.geom.Geometry; import com.vividsolutions.jts.geom.GeometryFactory; import org.apache.logging.log4j.Logger; +import org.elasticsearch.Assertions; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.io.stream.NamedWriteable; @@ -58,9 +59,7 @@ public abstract class ShapeBuilder extends ToXContentToBytes implements NamedWri static { // if asserts are enabled we run the debug statements even if they are not logged // to prevent exceptions only present if debug enabled - boolean debug = false; - assert debug = true; - DEBUG = debug; + DEBUG = Assertions.ENABLED; } public static final double DATELINE = 180; @@ -79,21 +78,21 @@ public abstract class ShapeBuilder extends ToXContentToBytes implements NamedWri /** It's possible that some geometries in a MULTI* shape might overlap. With the possible exception of GeometryCollection, * this normally isn't allowed. */ - protected final boolean multiPolygonMayOverlap = false; + protected static final boolean MULTI_POLYGON_MAY_OVERLAP = false; /** @see org.locationtech.spatial4j.shape.jts.JtsGeometry#validate() */ - protected final boolean autoValidateJtsGeometry = true; + protected static final boolean AUTO_VALIDATE_JTS_GEOMETRY = true; /** @see org.locationtech.spatial4j.shape.jts.JtsGeometry#index() */ - protected final boolean autoIndexJtsGeometry = true;//may want to turn off once SpatialStrategy impls do it. + protected static final boolean AUTO_INDEX_JTS_GEOMETRY = true;//may want to turn off once SpatialStrategy impls do it. protected ShapeBuilder() { } protected JtsGeometry jtsGeometry(Geometry geom) { //dateline180Check is false because ElasticSearch does it's own dateline wrapping - JtsGeometry jtsGeometry = new JtsGeometry(geom, SPATIAL_CONTEXT, false, multiPolygonMayOverlap); - if (autoValidateJtsGeometry) + JtsGeometry jtsGeometry = new JtsGeometry(geom, SPATIAL_CONTEXT, false, MULTI_POLYGON_MAY_OVERLAP); + if (AUTO_VALIDATE_JTS_GEOMETRY) jtsGeometry.validate(); - if (autoIndexJtsGeometry) + if (AUTO_INDEX_JTS_GEOMETRY) jtsGeometry.index(); return jtsGeometry; } @@ -381,7 +380,7 @@ public int compare(Edge o1, Edge o2) { } } - public static enum Orientation { + public enum Orientation { LEFT, RIGHT; @@ -427,7 +426,7 @@ protected static final boolean debugEnabled() { /** * Enumeration that lists all {@link GeoShapeType}s that can be handled */ - public static enum GeoShapeType { + public enum GeoShapeType { POINT("point"), MULTIPOINT("multipoint"), LINESTRING("linestring"), @@ -440,7 +439,7 @@ public static enum GeoShapeType { private final String shapename; - private GeoShapeType(String shapename) { + GeoShapeType(String shapename) { this.shapename = shapename; } diff --git a/core/src/main/java/org/elasticsearch/common/inject/ConstantFactory.java b/core/src/main/java/org/elasticsearch/common/inject/ConstantFactory.java index 87bf31e911e91..aa7029f2a9cae 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/ConstantFactory.java +++ b/core/src/main/java/org/elasticsearch/common/inject/ConstantFactory.java @@ -30,7 +30,7 @@ class ConstantFactory implements InternalFactory { private final Initializable initializable; - public ConstantFactory(Initializable initializable) { + ConstantFactory(Initializable initializable) { this.initializable = initializable; } diff --git a/core/src/main/java/org/elasticsearch/common/inject/DeferredLookups.java b/core/src/main/java/org/elasticsearch/common/inject/DeferredLookups.java index 
40d589f37dcb1..2bc8f770bf857 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/DeferredLookups.java +++ b/core/src/main/java/org/elasticsearch/common/inject/DeferredLookups.java @@ -34,7 +34,7 @@ class DeferredLookups implements Lookups { private final InjectorImpl injector; private final List lookups = new ArrayList<>(); - public DeferredLookups(InjectorImpl injector) { + DeferredLookups(InjectorImpl injector) { this.injector = injector; } diff --git a/core/src/main/java/org/elasticsearch/common/inject/EncounterImpl.java b/core/src/main/java/org/elasticsearch/common/inject/EncounterImpl.java index 8b8b7b78218b8..ed49d28dea7c5 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/EncounterImpl.java +++ b/core/src/main/java/org/elasticsearch/common/inject/EncounterImpl.java @@ -36,7 +36,7 @@ final class EncounterImpl implements TypeEncounter { private List> injectionListeners; // lazy private boolean valid = true; - public EncounterImpl(Errors errors, Lookups lookups) { + EncounterImpl(Errors errors, Lookups lookups) { this.errors = errors; this.lookups = lookups; } diff --git a/core/src/main/java/org/elasticsearch/common/inject/ExposedKeyFactory.java b/core/src/main/java/org/elasticsearch/common/inject/ExposedKeyFactory.java index efc10b27e498e..b3cf0a14b6be6 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/ExposedKeyFactory.java +++ b/core/src/main/java/org/elasticsearch/common/inject/ExposedKeyFactory.java @@ -33,7 +33,7 @@ class ExposedKeyFactory implements InternalFactory, BindingProcessor.Creat private final PrivateElements privateElements; private BindingImpl delegate; - public ExposedKeyFactory(Key key, PrivateElements privateElements) { + ExposedKeyFactory(Key key, PrivateElements privateElements) { this.key = key; this.privateElements = privateElements; } diff --git a/core/src/main/java/org/elasticsearch/common/inject/Initializer.java b/core/src/main/java/org/elasticsearch/common/inject/Initializer.java index 1d68f163bfe23..ce7d7765ce320 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/Initializer.java +++ b/core/src/main/java/org/elasticsearch/common/inject/Initializer.java @@ -115,7 +115,7 @@ private class InjectableReference implements Initializable { private final Object source; private MembersInjectorImpl membersInjector; - public InjectableReference(InjectorImpl injector, T instance, Object source) { + InjectableReference(InjectorImpl injector, T instance, Object source) { this.injector = injector; this.instance = Objects.requireNonNull(instance, "instance"); this.source = Objects.requireNonNull(source, "source"); diff --git a/core/src/main/java/org/elasticsearch/common/inject/InjectionRequestProcessor.java b/core/src/main/java/org/elasticsearch/common/inject/InjectionRequestProcessor.java index e8b38b5133089..909779cb442a6 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/InjectionRequestProcessor.java +++ b/core/src/main/java/org/elasticsearch/common/inject/InjectionRequestProcessor.java @@ -86,7 +86,7 @@ private class StaticInjection { final StaticInjectionRequest request; List memberInjectors; - public StaticInjection(InjectorImpl injector, StaticInjectionRequest request) { + StaticInjection(InjectorImpl injector, StaticInjectionRequest request) { this.injector = injector; this.source = request.getSource(); this.request = request; diff --git a/core/src/main/java/org/elasticsearch/common/inject/InternalFactoryToProviderAdapter.java 
b/core/src/main/java/org/elasticsearch/common/inject/InternalFactoryToProviderAdapter.java index 8739d9182d8c8..54fecae9ba995 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/InternalFactoryToProviderAdapter.java +++ b/core/src/main/java/org/elasticsearch/common/inject/InternalFactoryToProviderAdapter.java @@ -33,11 +33,11 @@ class InternalFactoryToProviderAdapter implements InternalFactory { private final Initializable> initializable; private final Object source; - public InternalFactoryToProviderAdapter(Initializable> initializable) { + InternalFactoryToProviderAdapter(Initializable> initializable) { this(initializable, SourceProvider.UNKNOWN_SOURCE); } - public InternalFactoryToProviderAdapter( + InternalFactoryToProviderAdapter( Initializable> initializable, Object source) { this.initializable = Objects.requireNonNull(initializable, "provider"); this.source = Objects.requireNonNull(source, "source"); diff --git a/core/src/main/java/org/elasticsearch/common/inject/Key.java b/core/src/main/java/org/elasticsearch/common/inject/Key.java index 83ab440d23c83..833aac2d3fc4f 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/Key.java +++ b/core/src/main/java/org/elasticsearch/common/inject/Key.java @@ -380,7 +380,7 @@ private static void ensureIsBindingAnnotation( } } - static enum NullAnnotationStrategy implements AnnotationStrategy { + enum NullAnnotationStrategy implements AnnotationStrategy { INSTANCE; @Override diff --git a/core/src/main/java/org/elasticsearch/common/inject/ModulesBuilder.java b/core/src/main/java/org/elasticsearch/common/inject/ModulesBuilder.java index 3321b75f4e50b..6928033c69a1f 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/ModulesBuilder.java +++ b/core/src/main/java/org/elasticsearch/common/inject/ModulesBuilder.java @@ -20,6 +20,7 @@ package org.elasticsearch.common.inject; import java.util.ArrayList; +import java.util.Collections; import java.util.Iterator; import java.util.List; @@ -28,9 +29,7 @@ public class ModulesBuilder implements Iterable { private final List modules = new ArrayList<>(); public ModulesBuilder add(Module... 
newModules) { - for (Module module : newModules) { - modules.add(module); - } + Collections.addAll(modules, newModules); return this; } diff --git a/core/src/main/java/org/elasticsearch/common/inject/ProviderToInternalFactoryAdapter.java b/core/src/main/java/org/elasticsearch/common/inject/ProviderToInternalFactoryAdapter.java index d7b6afbe6daae..f443756908867 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/ProviderToInternalFactoryAdapter.java +++ b/core/src/main/java/org/elasticsearch/common/inject/ProviderToInternalFactoryAdapter.java @@ -30,7 +30,7 @@ class ProviderToInternalFactoryAdapter implements Provider { private final InjectorImpl injector; private final InternalFactory internalFactory; - public ProviderToInternalFactoryAdapter(InjectorImpl injector, + ProviderToInternalFactoryAdapter(InjectorImpl injector, InternalFactory internalFactory) { this.injector = injector; this.internalFactory = internalFactory; diff --git a/core/src/main/java/org/elasticsearch/common/inject/SingleFieldInjector.java b/core/src/main/java/org/elasticsearch/common/inject/SingleFieldInjector.java index 10ba17d86cda6..ce02a26ffd0aa 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/SingleFieldInjector.java +++ b/core/src/main/java/org/elasticsearch/common/inject/SingleFieldInjector.java @@ -34,7 +34,7 @@ class SingleFieldInjector implements SingleMemberInjector { final Dependency dependency; final InternalFactory factory; - public SingleFieldInjector(InjectorImpl injector, InjectionPoint injectionPoint, Errors errors) + SingleFieldInjector(InjectorImpl injector, InjectionPoint injectionPoint, Errors errors) throws ErrorsException { this.injectionPoint = injectionPoint; this.field = (Field) injectionPoint.getMember(); diff --git a/core/src/main/java/org/elasticsearch/common/inject/SingleMethodInjector.java b/core/src/main/java/org/elasticsearch/common/inject/SingleMethodInjector.java index 9c407791160c4..7330d05df3b38 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/SingleMethodInjector.java +++ b/core/src/main/java/org/elasticsearch/common/inject/SingleMethodInjector.java @@ -34,7 +34,7 @@ class SingleMethodInjector implements SingleMemberInjector { final SingleParameterInjector[] parameterInjectors; final InjectionPoint injectionPoint; - public SingleMethodInjector(InjectorImpl injector, InjectionPoint injectionPoint, Errors errors) + SingleMethodInjector(InjectorImpl injector, InjectionPoint injectionPoint, Errors errors) throws ErrorsException { this.injectionPoint = injectionPoint; final Method method = (Method) injectionPoint.getMember(); diff --git a/core/src/main/java/org/elasticsearch/common/inject/assistedinject/AssistedConstructor.java b/core/src/main/java/org/elasticsearch/common/inject/assistedinject/AssistedConstructor.java index 0716653708181..cb434a90369d3 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/assistedinject/AssistedConstructor.java +++ b/core/src/main/java/org/elasticsearch/common/inject/assistedinject/AssistedConstructor.java @@ -43,13 +43,13 @@ class AssistedConstructor { private final List allParameters; @SuppressWarnings("unchecked") - public AssistedConstructor(Constructor constructor, List> parameterTypes) { + AssistedConstructor(Constructor constructor, List> parameterTypes) { this.constructor = constructor; Annotation[][] annotations = constructor.getParameterAnnotations(); List typeList = new ArrayList<>(); - allParameters = new ArrayList<>(); + allParameters = new ArrayList<>(parameterTypes.size()); // 
categorize params as @Assisted or @Injected for (int i = 0; i < parameterTypes.size(); i++) { diff --git a/core/src/main/java/org/elasticsearch/common/inject/assistedinject/Parameter.java b/core/src/main/java/org/elasticsearch/common/inject/assistedinject/Parameter.java index 5ceb086db9f4d..a21dc3aa7f54d 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/assistedinject/Parameter.java +++ b/core/src/main/java/org/elasticsearch/common/inject/assistedinject/Parameter.java @@ -39,7 +39,7 @@ class Parameter { private final Annotation bindingAnnotation; private final boolean isProvider; - public Parameter(Type type, Annotation[] annotations) { + Parameter(Type type, Annotation[] annotations) { this.type = type; this.bindingAnnotation = getBindingAnnotation(annotations); this.isAssisted = hasAssistedAnnotation(annotations); diff --git a/core/src/main/java/org/elasticsearch/common/inject/assistedinject/ParameterListKey.java b/core/src/main/java/org/elasticsearch/common/inject/assistedinject/ParameterListKey.java index fc2a96e19dfad..7967f47394879 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/assistedinject/ParameterListKey.java +++ b/core/src/main/java/org/elasticsearch/common/inject/assistedinject/ParameterListKey.java @@ -34,11 +34,11 @@ class ParameterListKey { private final List paramList; - public ParameterListKey(List paramList) { + ParameterListKey(List paramList) { this.paramList = new ArrayList<>(paramList); } - public ParameterListKey(Type[] types) { + ParameterListKey(Type[] types) { this(Arrays.asList(types)); } diff --git a/core/src/main/java/org/elasticsearch/common/inject/internal/NullOutputException.java b/core/src/main/java/org/elasticsearch/common/inject/internal/NullOutputException.java index cc5a23a786dcf..0fec6b5bac281 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/internal/NullOutputException.java +++ b/core/src/main/java/org/elasticsearch/common/inject/internal/NullOutputException.java @@ -24,7 +24,7 @@ * @author Bob Lee */ class NullOutputException extends NullPointerException { - public NullOutputException(String s) { + NullOutputException(String s) { super(s); } } diff --git a/core/src/main/java/org/elasticsearch/common/inject/matcher/AbstractMatcher.java b/core/src/main/java/org/elasticsearch/common/inject/matcher/AbstractMatcher.java index 931d290fc19c3..76df334e4e315 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/matcher/AbstractMatcher.java +++ b/core/src/main/java/org/elasticsearch/common/inject/matcher/AbstractMatcher.java @@ -36,7 +36,7 @@ public Matcher or(Matcher other) { private static class AndMatcher extends AbstractMatcher { private final Matcher a, b; - public AndMatcher(Matcher a, Matcher b) { + AndMatcher(Matcher a, Matcher b) { this.a = a; this.b = b; } @@ -67,7 +67,7 @@ public String toString() { private static class OrMatcher extends AbstractMatcher { private final Matcher a, b; - public OrMatcher(Matcher a, Matcher b) { + OrMatcher(Matcher a, Matcher b) { this.a = a; this.b = b; } diff --git a/core/src/main/java/org/elasticsearch/common/inject/matcher/Matchers.java b/core/src/main/java/org/elasticsearch/common/inject/matcher/Matchers.java index e2ced98034e88..cc354145b11bb 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/matcher/Matchers.java +++ b/core/src/main/java/org/elasticsearch/common/inject/matcher/Matchers.java @@ -113,7 +113,7 @@ public static Matcher annotatedWith( private static class AnnotatedWithType extends AbstractMatcher { private final Class 
annotationType; - public AnnotatedWithType(Class annotationType) { + AnnotatedWithType(Class annotationType) { this.annotationType = Objects.requireNonNull(annotationType, "annotation type"); checkForRuntimeRetention(annotationType); } @@ -152,7 +152,7 @@ public static Matcher annotatedWith( private static class AnnotatedWith extends AbstractMatcher { private final Annotation annotation; - public AnnotatedWith(Annotation annotation) { + AnnotatedWith(Annotation annotation) { this.annotation = Objects.requireNonNull(annotation, "annotation"); checkForRuntimeRetention(annotation.annotationType()); } @@ -191,7 +191,7 @@ public static Matcher subclassesOf(final Class superclass) { private static class SubclassesOf extends AbstractMatcher { private final Class superclass; - public SubclassesOf(Class superclass) { + SubclassesOf(Class superclass) { this.superclass = Objects.requireNonNull(superclass, "superclass"); } @@ -227,7 +227,7 @@ public static Matcher only(Object value) { private static class Only extends AbstractMatcher { private final Object value; - public Only(Object value) { + Only(Object value) { this.value = Objects.requireNonNull(value, "value"); } @@ -263,7 +263,7 @@ public static Matcher identicalTo(final Object value) { private static class IdenticalTo extends AbstractMatcher { private final Object value; - public IdenticalTo(Object value) { + IdenticalTo(Object value) { this.value = Objects.requireNonNull(value, "value"); } @@ -301,7 +301,7 @@ private static class InPackage extends AbstractMatcher { private final transient Package targetPackage; private final String packageName; - public InPackage(Package targetPackage) { + InPackage(Package targetPackage) { this.targetPackage = Objects.requireNonNull(targetPackage, "package"); this.packageName = targetPackage.getName(); } @@ -345,7 +345,7 @@ public static Matcher inSubpackage(final String targetPackageName) { private static class InSubpackage extends AbstractMatcher { private final String targetPackageName; - public InSubpackage(String targetPackageName) { + InSubpackage(String targetPackageName) { this.targetPackageName = targetPackageName; } @@ -384,7 +384,7 @@ public static Matcher returns( private static class Returns extends AbstractMatcher { private final Matcher> returnType; - public Returns(Matcher> returnType) { + Returns(Matcher> returnType) { this.returnType = Objects.requireNonNull(returnType, "return type matcher"); } diff --git a/core/src/main/java/org/elasticsearch/common/inject/name/NamedImpl.java b/core/src/main/java/org/elasticsearch/common/inject/name/NamedImpl.java index e4cc088e30af7..eb3fb00a51ad1 100644 --- a/core/src/main/java/org/elasticsearch/common/inject/name/NamedImpl.java +++ b/core/src/main/java/org/elasticsearch/common/inject/name/NamedImpl.java @@ -23,7 +23,7 @@ class NamedImpl implements Named { private final String value; - public NamedImpl(String value) { + NamedImpl(String value) { this.value = Objects.requireNonNull(value, "name"); } diff --git a/core/src/main/java/org/elasticsearch/common/io/ReleasableBytesStream.java b/core/src/main/java/org/elasticsearch/common/io/ReleasableBytesStream.java deleted file mode 100644 index e31f206bcad9c..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/io/ReleasableBytesStream.java +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. 
Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.common.io; - -import org.elasticsearch.common.bytes.ReleasablePagedBytesReference; - -/** - * A bytes stream that requires its bytes to be released once no longer used. - */ -public interface ReleasableBytesStream extends BytesStream { - - @Override - ReleasablePagedBytesReference bytes(); - -} diff --git a/core/src/main/java/org/elasticsearch/common/io/Streams.java b/core/src/main/java/org/elasticsearch/common/io/Streams.java index f922fde3e753e..b6b9061ad7e49 100644 --- a/core/src/main/java/org/elasticsearch/common/io/Streams.java +++ b/core/src/main/java/org/elasticsearch/common/io/Streams.java @@ -20,7 +20,9 @@ package org.elasticsearch.common.io; import org.apache.lucene.util.IOUtils; -import org.elasticsearch.common.util.Callback; +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.io.stream.BytesStream; +import org.elasticsearch.common.io.stream.StreamOutput; import java.io.BufferedReader; import java.io.IOException; @@ -34,6 +36,7 @@ import java.util.ArrayList; import java.util.List; import java.util.Objects; +import java.util.function.Consumer; /** * Simple utility methods for file and stream copying. @@ -219,21 +222,68 @@ public static int readFully(InputStream reader, byte[] dest, int offset, int len public static List readAllLines(InputStream input) throws IOException { final List lines = new ArrayList<>(); - readAllLines(input, new Callback() { - @Override - public void handle(String line) { - lines.add(line); - } - }); + readAllLines(input, lines::add); return lines; } - public static void readAllLines(InputStream input, Callback callback) throws IOException { + public static void readAllLines(InputStream input, Consumer consumer) throws IOException { try (BufferedReader reader = new BufferedReader(new InputStreamReader(input, StandardCharsets.UTF_8))) { String line; while ((line = reader.readLine()) != null) { - callback.handle(line); + consumer.accept(line); } } } + + /** + * Wraps the given {@link BytesStream} in a {@link StreamOutput} that simply flushes when + * close is called. + */ + public static BytesStream flushOnCloseStream(BytesStream os) { + return new FlushOnCloseOutputStream(os); + } + + /** + * A wrapper around a {@link BytesStream} that makes the close operation a flush. This is + * needed as sometimes a stream will be closed but the bytes that the stream holds still need + * to be used and the stream cannot be closed until the bytes have been consumed. 
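A rough usage sketch of the wrapper described above; `bigArrays` and `someWriteable` are assumed stand-ins, not names from this PR:

```java
ReleasableBytesStreamOutput pooled = new ReleasableBytesStreamOutput(bigArrays);
try (BytesStream wrapped = Streams.flushOnCloseStream(pooled)) {
    someWriteable.writeTo(wrapped);     // the serializer is free to close this stream early
}
BytesReference bytes = pooled.bytes();  // the underlying bytes remain usable after the wrapper is closed
pooled.close();                         // pages are released once the owner is done with the bytes
```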
+ */ + private static class FlushOnCloseOutputStream extends BytesStream { + + private final BytesStream delegate; + + private FlushOnCloseOutputStream(BytesStream bytesStreamOutput) { + this.delegate = bytesStreamOutput; + } + + @Override + public void writeByte(byte b) throws IOException { + delegate.writeByte(b); + } + + @Override + public void writeBytes(byte[] b, int offset, int length) throws IOException { + delegate.writeBytes(b, offset, length); + } + + @Override + public void flush() throws IOException { + delegate.flush(); + } + + @Override + public void close() throws IOException { + flush(); + } + + @Override + public void reset() throws IOException { + delegate.reset(); + } + + @Override + public BytesReference bytes() { + return delegate.bytes(); + } + } } diff --git a/core/src/main/java/org/elasticsearch/common/io/stream/ByteBufferStreamInput.java b/core/src/main/java/org/elasticsearch/common/io/stream/ByteBufferStreamInput.java index d13f539a67006..ce43d74e02314 100644 --- a/core/src/main/java/org/elasticsearch/common/io/stream/ByteBufferStreamInput.java +++ b/core/src/main/java/org/elasticsearch/common/io/stream/ByteBufferStreamInput.java @@ -88,6 +88,13 @@ public int available() throws IOException { return buffer.remaining(); } + @Override + protected void ensureCanReadBytes(int length) throws EOFException { + if (buffer.remaining() < length) { + throw new EOFException("tried to read: " + length + " bytes but only " + buffer.remaining() + " remaining"); + } + } + @Override public void mark(int readlimit) { buffer.mark(); diff --git a/core/src/main/java/org/elasticsearch/common/io/BytesStream.java b/core/src/main/java/org/elasticsearch/common/io/stream/BytesStream.java similarity index 85% rename from core/src/main/java/org/elasticsearch/common/io/BytesStream.java rename to core/src/main/java/org/elasticsearch/common/io/stream/BytesStream.java index 903c1dcb79962..c20dcf62c9bbd 100644 --- a/core/src/main/java/org/elasticsearch/common/io/BytesStream.java +++ b/core/src/main/java/org/elasticsearch/common/io/stream/BytesStream.java @@ -17,11 +17,11 @@ * under the License. */ -package org.elasticsearch.common.io; +package org.elasticsearch.common.io.stream; import org.elasticsearch.common.bytes.BytesReference; -public interface BytesStream { +public abstract class BytesStream extends StreamOutput { - BytesReference bytes(); -} \ No newline at end of file + public abstract BytesReference bytes(); +} diff --git a/core/src/main/java/org/elasticsearch/common/io/stream/BytesStreamOutput.java b/core/src/main/java/org/elasticsearch/common/io/stream/BytesStreamOutput.java index 3de5c757ae1bc..ab9a1896ef7cf 100644 --- a/core/src/main/java/org/elasticsearch/common/io/stream/BytesStreamOutput.java +++ b/core/src/main/java/org/elasticsearch/common/io/stream/BytesStreamOutput.java @@ -21,7 +21,6 @@ import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.bytes.PagedBytesReference; -import org.elasticsearch.common.io.BytesStream; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.util.ByteArray; @@ -31,7 +30,7 @@ * A @link {@link StreamOutput} that uses {@link BigArrays} to acquire pages of * bytes, which avoids frequent reallocation & copying of the internal data. 
*/ -public class BytesStreamOutput extends StreamOutput implements BytesStream { +public class BytesStreamOutput extends BytesStream { protected final BigArrays bigArrays; @@ -50,7 +49,7 @@ public BytesStreamOutput() { /** * Create a non recycling {@link BytesStreamOutput} with enough initial pages acquired * to satisfy the capacity given by expected size. - * + * * @param expectedSize the expected maximum size of the stream in bytes. */ public BytesStreamOutput(int expectedSize) { @@ -69,13 +68,13 @@ public long position() throws IOException { @Override public void writeByte(byte b) throws IOException { - ensureCapacity(count+1); + ensureCapacity(count + 1L); bytes.set(count, b); count++; } @Override - public void writeBytes(byte[] b, int offset, int length) throws IOException { + public void writeBytes(byte[] b, int offset, int length) { // nothing to copy if (length == 0) { return; @@ -87,7 +86,7 @@ public void writeBytes(byte[] b, int offset, int length) throws IOException { } // get enough pages for new size - ensureCapacity(count+length); + ensureCapacity(((long) count) + length); // bulk copy bytes.set(count, b, offset, length); @@ -113,28 +112,23 @@ public void flush() throws IOException { } @Override - public void seek(long position) throws IOException { - if (position > Integer.MAX_VALUE) { - throw new IllegalArgumentException("position " + position + " > Integer.MAX_VALUE"); - } - - count = (int)position; - ensureCapacity(count); + public void seek(long position) { + ensureCapacity(position); + count = (int) position; } public void skip(int length) { - count += length; - ensureCapacity(count); + seek(((long) count) + length); } @Override - public void close() throws IOException { + public void close() { // empty for now. } /** * Returns the current size of the buffer. - * + * * @return the value of the count field, which is the number of valid * bytes in this output stream. 
* @see java.io.ByteArrayOutputStream#count @@ -156,7 +150,10 @@ public long ramBytesUsed() { return bytes.ramBytesUsed(); } - private void ensureCapacity(int offset) { + void ensureCapacity(long offset) { + if (offset > Integer.MAX_VALUE) { + throw new IllegalArgumentException(getClass().getSimpleName() + " cannot hold more than 2GB of data"); + } bytes = bigArrays.grow(bytes, offset); } diff --git a/core/src/main/java/org/elasticsearch/common/io/stream/FilterStreamInput.java b/core/src/main/java/org/elasticsearch/common/io/stream/FilterStreamInput.java index b8132b4e87033..1a3f9fe601da4 100644 --- a/core/src/main/java/org/elasticsearch/common/io/stream/FilterStreamInput.java +++ b/core/src/main/java/org/elasticsearch/common/io/stream/FilterStreamInput.java @@ -21,6 +21,7 @@ import org.elasticsearch.Version; +import java.io.EOFException; import java.io.IOException; /** @@ -28,7 +29,7 @@ */ public abstract class FilterStreamInput extends StreamInput { - private final StreamInput delegate; + protected final StreamInput delegate; protected FilterStreamInput(StreamInput delegate) { this.delegate = delegate; @@ -73,4 +74,9 @@ public Version getVersion() { public void setVersion(Version version) { delegate.setVersion(version); } + + @Override + protected void ensureCanReadBytes(int length) throws EOFException { + delegate.ensureCanReadBytes(length); + } } diff --git a/core/src/main/java/org/elasticsearch/common/io/stream/InputStreamStreamInput.java b/core/src/main/java/org/elasticsearch/common/io/stream/InputStreamStreamInput.java index d786041af4908..f9611f3f78852 100644 --- a/core/src/main/java/org/elasticsearch/common/io/stream/InputStreamStreamInput.java +++ b/core/src/main/java/org/elasticsearch/common/io/stream/InputStreamStreamInput.java @@ -98,4 +98,9 @@ public int read(byte[] b, int off, int len) throws IOException { public long skip(long n) throws IOException { return is.skip(n); } + + @Override + protected void ensureCanReadBytes(int length) throws EOFException { + // TODO what can we do here? 
+ } } diff --git a/core/src/main/java/org/elasticsearch/common/io/stream/NamedWriteableAwareStreamInput.java b/core/src/main/java/org/elasticsearch/common/io/stream/NamedWriteableAwareStreamInput.java index 55da169e6207f..5db80a711eeb9 100644 --- a/core/src/main/java/org/elasticsearch/common/io/stream/NamedWriteableAwareStreamInput.java +++ b/core/src/main/java/org/elasticsearch/common/io/stream/NamedWriteableAwareStreamInput.java @@ -36,14 +36,20 @@ public NamedWriteableAwareStreamInput(StreamInput delegate, NamedWriteableRegist @Override public C readNamedWriteable(Class categoryClass) throws IOException { String name = readString(); + return readNamedWriteable(categoryClass, name); + } + + @Override + public C readNamedWriteable(@SuppressWarnings("unused") Class categoryClass, + @SuppressWarnings("unused") String name) throws IOException { Writeable.Reader reader = namedWriteableRegistry.getReader(categoryClass, name); C c = reader.read(this); if (c == null) { throw new IOException( - "Writeable.Reader [" + reader + "] returned null which is not allowed and probably means it screwed up the stream."); + "Writeable.Reader [" + reader + "] returned null which is not allowed and probably means it screwed up the stream."); } assert name.equals(c.getWriteableName()) : c + " claims to have a different name [" + c.getWriteableName() - + "] than it was read from [" + name + "]."; + + "] than it was read from [" + name + "]."; return c; } } diff --git a/core/src/main/java/org/elasticsearch/common/io/stream/NamedWriteableRegistry.java b/core/src/main/java/org/elasticsearch/common/io/stream/NamedWriteableRegistry.java index 1a3f57052de45..8fde972e8e4e2 100644 --- a/core/src/main/java/org/elasticsearch/common/io/stream/NamedWriteableRegistry.java +++ b/core/src/main/java/org/elasticsearch/common/io/stream/NamedWriteableRegistry.java @@ -25,10 +25,6 @@ import java.util.List; import java.util.Map; import java.util.Objects; -import java.util.stream.Collectors; -import java.util.stream.Stream; - -import org.elasticsearch.plugins.Plugin; /** * A registry for {@link org.elasticsearch.common.io.stream.Writeable.Reader} readers of {@link NamedWriteable}. @@ -47,7 +43,7 @@ public static class Entry { /** A name for the writeable which is unique to the {@link #categoryClass}. */ public final String name; - /** A reader captability of reading*/ + /** A reader capability of reading*/ public final Writeable.Reader reader; /** Creates a new entry which can be stored by the registry. 
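For context on how these entries are consumed, a hedged sketch of registering a reader and resolving it during deserialization; `QueryBuilder`/`TermQueryBuilder` serve only as an assumed example category and implementation, and `bytes` is some previously written `BytesReference`:

```java
NamedWriteableRegistry registry = new NamedWriteableRegistry(Collections.singletonList(
        new NamedWriteableRegistry.Entry(QueryBuilder.class, TermQueryBuilder.NAME, TermQueryBuilder::new)));

try (StreamInput in = new NamedWriteableAwareStreamInput(bytes.streamInput(), registry)) {
    QueryBuilder query = in.readNamedWriteable(QueryBuilder.class);   // looks up the reader by category and name
}
```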
*/ diff --git a/core/src/main/java/org/elasticsearch/common/io/stream/NotSerializableExceptionWrapper.java b/core/src/main/java/org/elasticsearch/common/io/stream/NotSerializableExceptionWrapper.java index b25d4bc9b72ff..fd4a215eabf2b 100644 --- a/core/src/main/java/org/elasticsearch/common/io/stream/NotSerializableExceptionWrapper.java +++ b/core/src/main/java/org/elasticsearch/common/io/stream/NotSerializableExceptionWrapper.java @@ -38,8 +38,7 @@ public final class NotSerializableExceptionWrapper extends ElasticsearchExceptio private final RestStatus status; public NotSerializableExceptionWrapper(Throwable other) { - super(ElasticsearchException.getExceptionName(other) + - ": " + other.getMessage(), other.getCause()); + super(ElasticsearchException.getExceptionName(other) + ": " + other.getMessage(), other.getCause()); this.name = ElasticsearchException.getExceptionName(other); this.status = ExceptionsHelper.status(other); setStackTrace(other.getStackTrace()); @@ -51,6 +50,9 @@ public NotSerializableExceptionWrapper(Throwable other) { for (String key : ex.getHeaderKeys()) { this.addHeader(key, ex.getHeader(key)); } + for (String key : ex.getMetadataKeys()) { + this.addMetadata(key, ex.getMetadata(key)); + } } } diff --git a/core/src/main/java/org/elasticsearch/common/io/stream/ReleasableBytesStreamOutput.java b/core/src/main/java/org/elasticsearch/common/io/stream/ReleasableBytesStreamOutput.java index 674ff18f0fc15..0bfe15d6dc2f5 100644 --- a/core/src/main/java/org/elasticsearch/common/io/stream/ReleasableBytesStreamOutput.java +++ b/core/src/main/java/org/elasticsearch/common/io/stream/ReleasableBytesStreamOutput.java @@ -20,29 +20,66 @@ package org.elasticsearch.common.io.stream; import org.elasticsearch.common.bytes.ReleasablePagedBytesReference; -import org.elasticsearch.common.io.ReleasableBytesStream; +import org.elasticsearch.common.lease.Releasable; +import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.util.BigArrays; +import org.elasticsearch.common.util.ByteArray; /** * An bytes stream output that allows providing a {@link BigArrays} instance * expecting it to require releasing its content ({@link #bytes()}) once done. *

    - * Please note, its is the responsibility of the caller to make sure the bytes - * reference do not "escape" and are released only once. + * Please note, closing this stream will release the bytes that are in use by any + * {@link ReleasablePagedBytesReference} returned from {@link #bytes()}, so this + * stream should only be closed after the bytes have been output or copied + * elsewhere. */ -public class ReleasableBytesStreamOutput extends BytesStreamOutput implements ReleasableBytesStream { +public class ReleasableBytesStreamOutput extends BytesStreamOutput + implements Releasable { + + private Releasable releasable; public ReleasableBytesStreamOutput(BigArrays bigarrays) { - super(BigArrays.PAGE_SIZE_IN_BYTES, bigarrays); + this(BigArrays.PAGE_SIZE_IN_BYTES, bigarrays); } public ReleasableBytesStreamOutput(int expectedSize, BigArrays bigArrays) { super(expectedSize, bigArrays); + this.releasable = Releasables.releaseOnce(this.bytes); } + /** + * Returns a {@link Releasable} implementation of a + * {@link org.elasticsearch.common.bytes.BytesReference} that represents the current state of + * the bytes in the stream. + */ @Override public ReleasablePagedBytesReference bytes() { - return new ReleasablePagedBytesReference(bigArrays, bytes, count); + return new ReleasablePagedBytesReference(bigArrays, bytes, count, releasable); } + @Override + public void close() { + Releasables.close(releasable); + } + + @Override + void ensureCapacity(long offset) { + final ByteArray prevBytes = this.bytes; + super.ensureCapacity(offset); + if (prevBytes != this.bytes) { + // re-create the releasable with the new reference + releasable = Releasables.releaseOnce(this.bytes); + } + } + + @Override + public void reset() { + final ByteArray prevBytes = this.bytes; + super.reset(); + if (prevBytes != this.bytes) { + // re-create the releasable with the new reference + releasable = Releasables.releaseOnce(this.bytes); + } + } } diff --git a/core/src/main/java/org/elasticsearch/common/io/stream/StreamInput.java b/core/src/main/java/org/elasticsearch/common/io/stream/StreamInput.java index 794ed6f36fac7..4681af3392ec5 100644 --- a/core/src/main/java/org/elasticsearch/common/io/stream/StreamInput.java +++ b/core/src/main/java/org/elasticsearch/common/io/stream/StreamInput.java @@ -24,9 +24,10 @@ import org.apache.lucene.index.IndexFormatTooOldException; import org.apache.lucene.store.AlreadyClosedException; import org.apache.lucene.store.LockObtainFailedException; +import org.apache.lucene.util.ArrayUtil; import org.apache.lucene.util.BitUtil; import org.apache.lucene.util.BytesRef; -import org.apache.lucene.util.CharsRefBuilder; +import org.apache.lucene.util.CharsRef; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.Version; import org.elasticsearch.common.Nullable; @@ -58,6 +59,7 @@ import java.util.HashMap; import java.util.LinkedHashMap; import java.util.List; +import java.util.Locale; import java.util.Map; import java.util.function.IntFunction; import java.util.function.Supplier; @@ -111,7 +113,7 @@ public void setVersion(Version version) { * bytes of the stream. 
*/ public BytesReference readBytesReference() throws IOException { - int length = readVInt(); + int length = readArraySize(); return readBytesReference(length); } @@ -143,7 +145,7 @@ public BytesReference readBytesReference(int length) throws IOException { } public BytesRef readBytesRef() throws IOException { - int length = readVInt(); + int length = readArraySize(); return readBytesRef(length); } @@ -200,7 +202,9 @@ public int readVInt() throws IOException { return i; } b = readByte(); - assert (b & 0x80) == 0; + if ((b & 0x80) != 0) { + throw new IOException("Invalid vInt ((" + Integer.toHexString(b) + " & 0x7f) << 28) | " + Integer.toHexString(i)); + } return i | ((b & 0x7F) << 28); } @@ -212,9 +216,8 @@ public long readLong() throws IOException { } /** - * Reads a long stored in variable-length format. Reads between one and - * nine bytes. Smaller values take fewer bytes. Negative numbers are not - * supported. + * Reads a long stored in variable-length format. Reads between one and ten bytes. Smaller values take fewer bytes. Negative numbers + * are encoded in ten bytes so prefer {@link #readLong()} or {@link #readZLong()} for negative numbers. */ public long readVLong() throws IOException { byte b = readByte(); @@ -258,8 +261,16 @@ public long readVLong() throws IOException { return i; } b = readByte(); - assert (b & 0x80) == 0; - return i | ((b & 0x7FL) << 56); + i |= ((b & 0x7FL) << 56); + if ((b & 0x80) == 0) { + return i; + } + b = readByte(); + if (b != 0 && b != 1) { + throw new IOException("Invalid vlong (" + Integer.toHexString(b) + " << 63) | " + Long.toHexString(i)); + } + i |= ((long) b) << 63; + return i; } public long readZLong() throws IOException { @@ -323,15 +334,22 @@ public Integer readOptionalVInt() throws IOException { return null; } - private final CharsRefBuilder spare = new CharsRefBuilder(); + // we don't use a CharsRefBuilder since we exactly know the size of the character array up front + // this prevents calling grow for every character since we don't need this + private final CharsRef spare = new CharsRef(); public String readString() throws IOException { - final int charCount = readVInt(); - spare.clear(); - spare.grow(charCount); - int c; - while (spare.length() < charCount) { - c = readByte() & 0xff; + // TODO it would be nice to not call readByte() for every character but we don't know how much to read up-front + // we can make the loop much more complicated but that won't buy us much compared to the bounds checks in readByte() + final int charCount = readArraySize(); + if (spare.chars.length < charCount) { + // we don't use ArrayUtils.grow since there is no need to copy the array + spare.chars = new char[ArrayUtil.oversize(charCount, Character.BYTES)]; + } + spare.length = charCount; + final char[] buffer = spare.chars; + for (int i = 0; i < charCount; i++) { + final int c = readByte() & 0xff; switch (c >> 4) { case 0: case 1: @@ -341,15 +359,17 @@ public String readString() throws IOException { case 5: case 6: case 7: - spare.append((char) c); + buffer[i] = (char) c; break; case 12: case 13: - spare.append((char) ((c & 0x1F) << 6 | readByte() & 0x3F)); + buffer[i] = ((char) ((c & 0x1F) << 6 | readByte() & 0x3F)); break; case 14: - spare.append((char) ((c & 0x0F) << 12 | (readByte() & 0x3F) << 6 | (readByte() & 0x3F) << 0)); + buffer[i] = ((char) ((c & 0x0F) << 12 | (readByte() & 0x3F) << 6 | (readByte() & 0x3F) << 0)); break; + default: + throw new IOException("Invalid string; unexpected character: " + c + " hex: " + Integer.toHexString(c)); } } 
return spare.toString(); @@ -376,19 +396,28 @@ public final Double readOptionalDouble() throws IOException { * Reads a boolean. */ public final boolean readBoolean() throws IOException { - return readByte() != 0; + return readBoolean(readByte()); + } + + private boolean readBoolean(final byte value) { + if (value == 0) { + return false; + } else if (value == 1) { + return true; + } else { + final String message = String.format(Locale.ROOT, "unexpected byte [0x%02x]", value); + throw new IllegalStateException(message); + } } @Nullable public final Boolean readOptionalBoolean() throws IOException { - byte val = readByte(); - if (val == 2) { + final byte value = readByte(); + if (value == 2) { return null; + } else { + return readBoolean(value); } - if (val == 1) { - return true; - } - return false; } /** @@ -401,7 +430,7 @@ public final Boolean readOptionalBoolean() throws IOException { public abstract int available() throws IOException; public String[] readStringArray() throws IOException { - int size = readVInt(); + int size = readArraySize(); if (size == 0) { return Strings.EMPTY_ARRAY; } @@ -421,7 +450,7 @@ public String[] readOptionalStringArray() throws IOException { } public Map readMap(Writeable.Reader keyReader, Writeable.Reader valueReader) throws IOException { - int size = readVInt(); + int size = readArraySize(); Map map = new HashMap<>(size); for (int i = 0; i < size; i++) { K key = keyReader.read(this); @@ -443,7 +472,7 @@ public Map readMap(Writeable.Reader keyReader, Writeable.Reader< */ public Map> readMapOfLists(final Writeable.Reader keyReader, final Writeable.Reader valueReader) throws IOException { - final int size = readVInt(); + final int size = readArraySize(); if (size == 0) { return Collections.emptyMap(); } @@ -520,7 +549,7 @@ public Object readGenericValue() throws IOException { @SuppressWarnings("unchecked") private List readArrayList() throws IOException { - int size = readVInt(); + int size = readArraySize(); List list = new ArrayList(size); for (int i = 0; i < size; i++) { list.add(readGenericValue()); @@ -534,7 +563,7 @@ private DateTime readDateTime() throws IOException { } private Object[] readArray() throws IOException { - int size8 = readVInt(); + int size8 = readArraySize(); Object[] list8 = new Object[size8]; for (int i = 0; i < size8; i++) { list8[i] = readGenericValue(); @@ -543,7 +572,7 @@ private Object[] readArray() throws IOException { } private Map readLinkedHashMap() throws IOException { - int size9 = readVInt(); + int size9 = readArraySize(); Map map9 = new LinkedHashMap(size9); for (int i = 0; i < size9; i++) { map9.put(readString(), readGenericValue()); @@ -552,7 +581,7 @@ private Map readLinkedHashMap() throws IOException { } private Map readHashMap() throws IOException { - int size10 = readVInt(); + int size10 = readArraySize(); Map map10 = new HashMap(size10); for (int i = 0; i < size10; i++) { map10.put(readString(), readGenericValue()); @@ -589,7 +618,7 @@ public DateTimeZone readOptionalTimeZone() throws IOException { } public int[] readIntArray() throws IOException { - int length = readVInt(); + int length = readArraySize(); int[] values = new int[length]; for (int i = 0; i < length; i++) { values[i] = readInt(); @@ -598,7 +627,7 @@ public int[] readIntArray() throws IOException { } public int[] readVIntArray() throws IOException { - int length = readVInt(); + int length = readArraySize(); int[] values = new int[length]; for (int i = 0; i < length; i++) { values[i] = readVInt(); @@ -607,7 +636,7 @@ public int[] readVIntArray() throws 
IOException { } public long[] readLongArray() throws IOException { - int length = readVInt(); + int length = readArraySize(); long[] values = new long[length]; for (int i = 0; i < length; i++) { values[i] = readLong(); @@ -616,7 +645,7 @@ public long[] readLongArray() throws IOException { } public long[] readVLongArray() throws IOException { - int length = readVInt(); + int length = readArraySize(); long[] values = new long[length]; for (int i = 0; i < length; i++) { values[i] = readVLong(); @@ -625,7 +654,7 @@ public long[] readVLongArray() throws IOException { } public float[] readFloatArray() throws IOException { - int length = readVInt(); + int length = readArraySize(); float[] values = new float[length]; for (int i = 0; i < length; i++) { values[i] = readFloat(); @@ -634,7 +663,7 @@ public float[] readFloatArray() throws IOException { } public double[] readDoubleArray() throws IOException { - int length = readVInt(); + int length = readArraySize(); double[] values = new double[length]; for (int i = 0; i < length; i++) { values[i] = readDouble(); @@ -643,14 +672,14 @@ public double[] readDoubleArray() throws IOException { } public byte[] readByteArray() throws IOException { - final int length = readVInt(); + final int length = readArraySize(); final byte[] bytes = new byte[length]; readBytes(bytes, 0, bytes.length); return bytes; } public T[] readArray(Writeable.Reader reader, IntFunction arraySupplier) throws IOException { - int length = readVInt(); + int length = readArraySize(); T[] values = arraySupplier.apply(length); for (int i = 0; i < length; i++) { values[i] = reader.read(this); @@ -781,7 +810,7 @@ public T readException() throws IOException { case 17: return (T) readStackTrace(new IOException(readOptionalString(), readException()), this); default: - assert false : "no such exception for id: " + key; + throw new IOException("no such exception for id: " + key); } } return null; @@ -798,6 +827,22 @@ public C readNamedWriteable(@SuppressWarnings("unused throw new UnsupportedOperationException("can't read named writeable from StreamInput"); } + /** + * Reads a {@link NamedWriteable} from the current stream with the given name. It is assumed that the caller obtained the name + * from other source, so it's not read from the stream. The name is used for looking for + * the corresponding entry in the registry by name, so that the proper object can be read and returned. + * Default implementation throws {@link UnsupportedOperationException} as StreamInput doesn't hold a registry. + * Use {@link FilterInputStream} instead which wraps a stream and supports a {@link NamedWriteableRegistry} too. + * + * Prefer {@link StreamInput#readNamedWriteable(Class)} and {@link StreamOutput#writeNamedWriteable(NamedWriteable)} unless you + * have a compelling reason to use this method instead. + */ + @Nullable + public C readNamedWriteable(@SuppressWarnings("unused") Class categoryClass, + @SuppressWarnings("unused") String name) throws IOException { + throw new UnsupportedOperationException("can't read named writeable from StreamInput"); + } + /** * Reads an optional {@link NamedWriteable}. 
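As a usage note for the readers above, the generic `readArray` pairs a per-element `Writeable.Reader` with an array constructor reference. A short, hedged sketch of the calling convention (the element types and the `in` variable are illustrative, not taken from this diff):

```java
// Sketch of the calling convention for the generic readers above.
String[] names = in.readArray(StreamInput::readString, String[]::new);
int[] counts = in.readIntArray();
Map<String, Long> totals = in.readMap(StreamInput::readString, StreamInput::readLong);
```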
*/ @@ -822,7 +867,7 @@ public C readOptionalNamedWriteable(Class category * @throws IOException if any step fails */ public List readStreamableList(Supplier constructor) throws IOException { - int count = readVInt(); + int count = readArraySize(); List builder = new ArrayList<>(count); for (int i=0; i List readStreamableList(Supplier constructor * Reads a list of objects */ public List readList(Writeable.Reader reader) throws IOException { - int count = readVInt(); + int count = readArraySize(); List builder = new ArrayList<>(count); for (int i=0; i List readList(Writeable.Reader reader) throws IOException { * Reads a list of {@link NamedWriteable}s. */ public List readNamedWriteableList(Class categoryClass) throws IOException { - int count = readVInt(); + int count = readArraySize(); List builder = new ArrayList<>(count); for (int i=0; i List readNamedWriteableList(Class catego return builder; } + /** + * Reads an enum with type E that was serialized based on the value of it's ordinal + */ + public > E readEnum(Class enumClass) throws IOException { + int ordinal = readVInt(); + E[] values = enumClass.getEnumConstants(); + if (ordinal < 0 || ordinal >= values.length) { + throw new IOException("Unknown " + enumClass.getSimpleName() + " ordinal [" + ordinal + "]"); + } + return values[ordinal]; + } + public static StreamInput wrap(byte[] bytes) { return wrap(bytes, 0, bytes.length); } @@ -864,4 +921,28 @@ public static StreamInput wrap(byte[] bytes, int offset, int length) { return new InputStreamStreamInput(new ByteArrayInputStream(bytes, offset, length)); } + /** + * Reads a vint via {@link #readVInt()} and applies basic checks to ensure the read array size is sane. + * This method uses {@link #ensureCanReadBytes(int)} to ensure this stream has enough bytes to read for the read array size. + */ + private int readArraySize() throws IOException { + final int arraySize = readVInt(); + if (arraySize > ArrayUtil.MAX_ARRAY_LENGTH) { + throw new IllegalStateException("array length must be <= to " + ArrayUtil.MAX_ARRAY_LENGTH + " but was: " + arraySize); + } + if (arraySize < 0) { + throw new NegativeArraySizeException("array size must be positive but was: " + arraySize); + } + // lets do a sanity check that if we are reading an array size that is bigger that the remaining bytes we can safely + // throw an exception instead of allocating the array based on the size. A simple corrutpted byte can make a node go OOM + // if the size is large and for perf reasons we allocate arrays ahead of time + ensureCanReadBytes(arraySize); + return arraySize; + } + + /** + * This method throws an {@link EOFException} if the given number of bytes can not be read from the this stream. This method might + * be a no-op depending on the underlying implementation if the information of the remaining bytes is not present. 
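The new `readEnum` (its counterpart `writeEnum` is added in the StreamOutput hunk further down) serializes enums by ordinal, so the constant order becomes part of the wire format. A hedged round-trip sketch, assuming the usual BytesStreamOutput pairing:

```java
// Sketch: enum constants are serialized by ordinal, so reordering or removing constants
// breaks wire compatibility; only append new constants at the end.
enum Colour { RED, GREEN, BLUE }

static void enumRoundTrip() throws IOException {
    BytesStreamOutput out = new BytesStreamOutput();
    out.writeEnum(Colour.GREEN);                    // writes the vInt ordinal 1
    try (StreamInput in = out.bytes().streamInput()) {
        Colour colour = in.readEnum(Colour.class);  // an ordinal outside [0, values().length) -> IOException
        assert colour == Colour.GREEN;
    }
}
```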
+ */ + protected abstract void ensureCanReadBytes(int length) throws EOFException; } diff --git a/core/src/main/java/org/elasticsearch/common/io/stream/StreamOutput.java b/core/src/main/java/org/elasticsearch/common/io/stream/StreamOutput.java index c1932fd4215ce..7b4df4b2d533a 100644 --- a/core/src/main/java/org/elasticsearch/common/io/stream/StreamOutput.java +++ b/core/src/main/java/org/elasticsearch/common/io/stream/StreamOutput.java @@ -24,6 +24,7 @@ import org.apache.lucene.index.IndexFormatTooOldException; import org.apache.lucene.store.AlreadyClosedException; import org.apache.lucene.store.LockObtainFailedException; +import org.apache.lucene.util.ArrayUtil; import org.apache.lucene.util.BitUtil; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.BytesRefBuilder; @@ -209,12 +210,22 @@ public void writeLong(long i) throws IOException { } /** - * Writes a non-negative long in a variable-length format. - * Writes between one and nine bytes. Smaller values take fewer bytes. - * Negative numbers are not supported. + * Writes a non-negative long in a variable-length format. Writes between one and ten bytes. Smaller values take fewer bytes. Negative + * numbers use ten bytes and trip assertions (if running in tests) so prefer {@link #writeLong(long)} or {@link #writeZLong(long)} for + * negative numbers. */ public void writeVLong(long i) throws IOException { - assert i >= 0; + if (i < 0) { + throw new IllegalStateException("Negative longs unsupported, use writeLong or writeZLong for negative numbers [" + i + "]"); + } + writeVLongNoCheck(i); + } + + /** + * Writes a long in a variable-length format without first checking if it is negative. Package private for testing. Use + * {@link #writeVLong(long)} instead. + */ + void writeVLongNoCheck(long i) throws IOException { while ((i & ~0x7F) != 0) { writeByte((byte) ((i & 0x7f) | 0x80)); i >>>= 7; @@ -298,23 +309,41 @@ public void writeText(Text text) throws IOException { } } + // we use a small buffer to convert strings to bytes since we want to prevent calling writeByte + // for every byte in the string (see #21660 for details). + // This buffer will never be the oversized limit of 1024 bytes and will not be shared across streams + private byte[] convertStringBuffer = BytesRef.EMPTY_BYTES; // TODO should we reduce it to 0 bytes once the stream is closed? 
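The writeString change that follows encodes each char into at most three UTF-8 bytes in the reused scratch buffer above and flushes whenever fewer than three bytes of space remain. A standalone sketch of that per-character encoding (chars are written individually, so surrogate pairs are not special-cased, matching the existing format):

```java
// Standalone sketch of the 1/2/3-byte encoding used by writeString (and decoded by readString).
static int encodeChar(char c, byte[] buffer, int offset) {
    if (c <= 0x007F) {                                   // ASCII: one byte
        buffer[offset++] = (byte) c;
    } else if (c > 0x07FF) {                             // three bytes
        buffer[offset++] = (byte) (0xE0 | c >> 12 & 0x0F);
        buffer[offset++] = (byte) (0x80 | c >> 6 & 0x3F);
        buffer[offset++] = (byte) (0x80 | c & 0x3F);
    } else {                                             // two bytes
        buffer[offset++] = (byte) (0xC0 | c >> 6 & 0x1F);
        buffer[offset++] = (byte) (0x80 | c & 0x3F);
    }
    return offset; // caller flushes the buffer once fewer than three bytes of space remain
}
```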
+ public void writeString(String str) throws IOException { - int charCount = str.length(); + final int charCount = str.length(); + final int bufferSize = Math.min(3 * charCount, 1024); // at most 3 bytes per character is needed here + if (convertStringBuffer.length < bufferSize) { // we don't use ArrayUtils.grow since copying the bytes is unnecessary + convertStringBuffer = new byte[ArrayUtil.oversize(bufferSize, Byte.BYTES)]; + } + byte[] buffer = convertStringBuffer; + int offset = 0; writeVInt(charCount); - int c; for (int i = 0; i < charCount; i++) { - c = str.charAt(i); + final int c = str.charAt(i); if (c <= 0x007F) { - writeByte((byte) c); + buffer[offset++] = ((byte) c); } else if (c > 0x07FF) { - writeByte((byte) (0xE0 | c >> 12 & 0x0F)); - writeByte((byte) (0x80 | c >> 6 & 0x3F)); - writeByte((byte) (0x80 | c >> 0 & 0x3F)); + buffer[offset++] = ((byte) (0xE0 | c >> 12 & 0x0F)); + buffer[offset++] = ((byte) (0x80 | c >> 6 & 0x3F)); + buffer[offset++] = ((byte) (0x80 | c >> 0 & 0x3F)); } else { - writeByte((byte) (0xC0 | c >> 6 & 0x1F)); - writeByte((byte) (0x80 | c >> 0 & 0x3F)); + buffer[offset++] = ((byte) (0xC0 | c >> 6 & 0x1F)); + buffer[offset++] = ((byte) (0x80 | c >> 0 & 0x3F)); + } + // make sure any possible char can fit into the buffer in any possible iteration + // we need at most 3 bytes so we flush the buffer once we have less than 3 bytes + // left before we start another iteration + if (offset > buffer.length-3) { + writeBytes(buffer, offset); + offset = 0; } } + writeBytes(buffer, offset); } public void writeFloat(float v) throws IOException { @@ -349,7 +378,7 @@ public void writeOptionalBoolean(@Nullable Boolean b) throws IOException { if (b == null) { writeByte(TWO); } else { - writeByte(b ? ONE : ZERO); + writeBoolean(b); } } @@ -448,16 +477,32 @@ public void writeMapWithConsistentOrder(@Nullable Map * @param keyWriter The key writer * @param valueWriter The value writer */ - public void writeMapOfLists(final Map> map, final Writer keyWriter, final Writer valueWriter) + public final void writeMapOfLists(final Map> map, final Writer keyWriter, final Writer valueWriter) throws IOException { - writeVInt(map.size()); - - for (final Map.Entry> entry : map.entrySet()) { - keyWriter.write(this, entry.getKey()); - writeVInt(entry.getValue().size()); - for (final V value : entry.getValue()) { + writeMap(map, keyWriter, (stream, list) -> { + writeVInt(list.size()); + for (final V value : list) { valueWriter.write(this, value); } + }); + } + + /** + * Write a {@link Map} of {@code K}-type keys to {@code V}-type. + *
+     * <pre><code>
+     * Map<String, String> map = ...;
+     * out.writeMap(map, StreamOutput::writeString, StreamOutput::writeString);
+     * </code></pre>
    + * + * @param keyWriter The key writer + * @param valueWriter The value writer + */ + public final void writeMap(final Map map, final Writer keyWriter, final Writer valueWriter) + throws IOException { + writeVInt(map.size()); + for (final Map.Entry entry : map.entrySet()) { + keyWriter.write(this, entry.getKey()); + valueWriter.write(this, entry.getValue()); } } @@ -783,7 +828,7 @@ public void writeException(Throwable throwable) throws IOException { writeVInt(17); } else { ElasticsearchException ex; - if (throwable instanceof ElasticsearchException && ElasticsearchException.isRegistered(throwable.getClass())) { + if (throwable instanceof ElasticsearchException && ElasticsearchException.isRegistered(throwable.getClass(), version)) { ex = (ElasticsearchException) throwable; } else { ex = new NotSerializableExceptionWrapper(throwable); @@ -871,6 +916,16 @@ public void writeList(List list) throws IOException { } } + /** + * Writes a list of strings + */ + public void writeStringList(List list) throws IOException { + writeVInt(list.size()); + for (String string: list) { + this.writeString(string); + } + } + /** * Writes a list of {@link NamedWriteable} objects. */ @@ -880,4 +935,12 @@ public void writeNamedWriteableList(List list) throws writeNamedWriteable(obj); } } + + /** + * Writes an enum with type E that by serialized it based on it's ordinal value + */ + public > void writeEnum(E enumValue) throws IOException { + writeVInt(enumValue.ordinal()); + } + } diff --git a/core/src/main/java/org/elasticsearch/common/io/stream/Writeable.java b/core/src/main/java/org/elasticsearch/common/io/stream/Writeable.java index 30607f3375909..9d645038d6528 100644 --- a/core/src/main/java/org/elasticsearch/common/io/stream/Writeable.java +++ b/core/src/main/java/org/elasticsearch/common/io/stream/Writeable.java @@ -34,7 +34,7 @@ public interface Writeable { /** * Write this into the {@linkplain StreamOutput}. */ - void writeTo(final StreamOutput out) throws IOException; + void writeTo(StreamOutput out) throws IOException; /** * Reference to a method that can write some object to a {@link StreamOutput}. @@ -60,7 +60,7 @@ interface Writer { * @param out Output to write the {@code value} too * @param value The value to add */ - void write(final StreamOutput out, final V value) throws IOException; + void write(StreamOutput out, V value) throws IOException; } @@ -86,7 +86,7 @@ interface Reader { * * @param in Input to read the value from */ - V read(final StreamInput in) throws IOException; + V read(StreamInput in) throws IOException; } diff --git a/core/src/main/java/org/elasticsearch/common/joda/DateMathParser.java b/core/src/main/java/org/elasticsearch/common/joda/DateMathParser.java index 65ec7e7c2b445..ba5531c813c12 100644 --- a/core/src/main/java/org/elasticsearch/common/joda/DateMathParser.java +++ b/core/src/main/java/org/elasticsearch/common/joda/DateMathParser.java @@ -25,11 +25,11 @@ import org.joda.time.format.DateTimeFormatter; import java.util.Objects; -import java.util.concurrent.Callable; +import java.util.function.LongSupplier; /** * A parser for date/time formatted text with optional date math. - * + * * The format of the datetime is configurable, and unix timestamps can also be used. Datemath * is appended to a datetime with the following syntax: * ||[+-/](\d+)?[yMwdhHms]. 
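The DateMathParser hunks below switch the `now` argument from a Callable to a LongSupplier and make round-up behaviour explicit. A hedged usage sketch; `Joda.forPattern` and the four-argument `parse` signature are taken from this diff, and the pattern name is simply the stock default date format:

```java
// Sketch: parse "now-1d/d" relative to a caller-supplied clock.
FormatDateTimeFormatter formatter = Joda.forPattern("strict_date_optional_time||epoch_millis");
DateMathParser parser = new DateMathParser(formatter);
long nowMillis = System.currentTimeMillis();
long startOfYesterday = parser.parse("now-1d/d", () -> nowMillis, false, DateTimeZone.UTC);
long endOfYesterday   = parser.parse("now-1d/d", () -> nowMillis, true,  DateTimeZone.UTC);
```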
@@ -43,19 +43,19 @@ public DateMathParser(FormatDateTimeFormatter dateTimeFormatter) { this.dateTimeFormatter = dateTimeFormatter; } - public long parse(String text, Callable now) { + public long parse(String text, LongSupplier now) { return parse(text, now, false, null); } // Note: we take a callable here for the timestamp in order to be able to figure out // if it has been used. For instance, the request cache does not cache requests that make // use of `now`. - public long parse(String text, Callable now, boolean roundUp, DateTimeZone timeZone) { + public long parse(String text, LongSupplier now, boolean roundUp, DateTimeZone timeZone) { long time; String mathString; if (text.startsWith("now")) { try { - time = now.call(); + time = now.getAsLong(); } catch (Exception e) { throw new ElasticsearchParseException("could not read the current timestamp", e); } @@ -63,13 +63,10 @@ public long parse(String text, Callable now, boolean roundUp, DateTimeZone } else { int index = text.indexOf("||"); if (index == -1) { - return parseDateTime(text, timeZone); + return parseDateTime(text, timeZone, roundUp); } - time = parseDateTime(text.substring(0, index), timeZone); + time = parseDateTime(text.substring(0, index), timeZone, false); mathString = text.substring(index + 2); - if (mathString.isEmpty()) { - return time; - } } return parseMath(mathString, time, roundUp, timeZone); @@ -97,7 +94,7 @@ private long parseMath(String mathString, long time, boolean roundUp, DateTimeZo throw new ElasticsearchParseException("operator not supported for date math [{}]", mathString); } } - + if (i >= mathString.length()) { throw new ElasticsearchParseException("truncated date math [{}]", mathString); } @@ -190,15 +187,29 @@ private long parseMath(String mathString, long time, boolean roundUp, DateTimeZo return dateTime.getMillis(); } - private long parseDateTime(String value, DateTimeZone timeZone) { + private long parseDateTime(String value, DateTimeZone timeZone, boolean roundUpIfNoTime) { DateTimeFormatter parser = dateTimeFormatter.parser(); if (timeZone != null) { parser = parser.withZone(timeZone); } try { - return parser.parseMillis(value); + MutableDateTime date; + // We use 01/01/1970 as a base date so that things keep working with date + // fields that are filled with times without dates + if (roundUpIfNoTime) { + date = new MutableDateTime(1970, 1, 1, 23, 59, 59, 999, DateTimeZone.UTC); + } else { + date = new MutableDateTime(1970, 1, 1, 0, 0, 0, 0, DateTimeZone.UTC); + } + final int end = parser.parseInto(date, value, 0); + if (end < 0) { + int position = ~end; + throw new IllegalArgumentException("Parse failure at index [" + position + "] of [" + value + "]"); + } else if (end != value.length()) { + throw new IllegalArgumentException("Unrecognized chars at the end of [" + value + "]: [" + value.substring(end) + "]"); + } + return date.getMillis(); } catch (IllegalArgumentException e) { - throw new ElasticsearchParseException("failed to parse date field [{}] with format [{}]", e, value, dateTimeFormatter.format()); } } diff --git a/core/src/main/java/org/elasticsearch/common/joda/FormatDateTimeFormatter.java b/core/src/main/java/org/elasticsearch/common/joda/FormatDateTimeFormatter.java index 428828bc0feb7..72a60e8678ca2 100644 --- a/core/src/main/java/org/elasticsearch/common/joda/FormatDateTimeFormatter.java +++ b/core/src/main/java/org/elasticsearch/common/joda/FormatDateTimeFormatter.java @@ -22,6 +22,7 @@ import org.joda.time.format.DateTimeFormatter; import java.util.Locale; +import 
java.util.Objects; /** * A simple wrapper around {@link DateTimeFormatter} that retains the @@ -43,9 +44,9 @@ public FormatDateTimeFormatter(String format, DateTimeFormatter parser, Locale l public FormatDateTimeFormatter(String format, DateTimeFormatter parser, DateTimeFormatter printer, Locale locale) { this.format = format; - this.locale = locale; - this.printer = locale == null ? printer.withDefaultYear(1970) : printer.withLocale(locale).withDefaultYear(1970); - this.parser = locale == null ? parser.withDefaultYear(1970) : parser.withLocale(locale).withDefaultYear(1970); + this.locale = Objects.requireNonNull(locale, "A locale is required as JODA otherwise uses the default locale"); + this.printer = printer.withLocale(locale).withDefaultYear(1970); + this.parser = parser.withLocale(locale).withDefaultYear(1970); } public String format() { diff --git a/core/src/main/java/org/elasticsearch/common/joda/Joda.java b/core/src/main/java/org/elasticsearch/common/joda/Joda.java index 34c882d0d8096..2da0e7d2b6865 100644 --- a/core/src/main/java/org/elasticsearch/common/joda/Joda.java +++ b/core/src/main/java/org/elasticsearch/common/joda/Joda.java @@ -336,9 +336,10 @@ public int parseInto(DateTimeParserBucket bucket, String text, int position) { boolean isPositive = text.startsWith("-") == false; boolean isTooLong = text.length() > estimateParsedLength(); - if ((isPositive && isTooLong) || - // timestamps have to have UTC timezone - bucket.getZone() != DateTimeZone.UTC) { + if (bucket.getZone() != DateTimeZone.UTC) { + String format = hasMilliSecondPrecision ? "epoch_millis" : "epoch_second"; + throw new IllegalArgumentException("time_zone must be UTC for format [" + format + "]"); + } else if (isPositive && isTooLong) { return -1; } diff --git a/core/src/main/java/org/elasticsearch/common/lease/Releasables.java b/core/src/main/java/org/elasticsearch/common/lease/Releasables.java index bfabd20976d36..bd7b2a6e772a4 100644 --- a/core/src/main/java/org/elasticsearch/common/lease/Releasables.java +++ b/core/src/main/java/org/elasticsearch/common/lease/Releasables.java @@ -24,6 +24,7 @@ import java.io.IOException; import java.io.UncheckedIOException; import java.util.Arrays; +import java.util.concurrent.atomic.AtomicBoolean; /** Utility methods to work with {@link Releasable}s. */ public enum Releasables { @@ -93,4 +94,16 @@ public static Releasable wrap(final Iterable releasables) { public static Releasable wrap(final Releasable... releasables) { return () -> close(releasables); } + + /** + * Equivalent to {@link #wrap(Releasable...)} but can be called multiple times without double releasing. + */ + public static Releasable releaseOnce(final Releasable... 
releasables) { + final AtomicBoolean released = new AtomicBoolean(false); + return () -> { + if (released.compareAndSet(false, true)) { + close(releasables); + } + }; + } } diff --git a/core/src/main/java/org/elasticsearch/common/logging/DeprecationLogger.java b/core/src/main/java/org/elasticsearch/common/logging/DeprecationLogger.java index d9b811585de0d..3ed1d9d30ac1a 100644 --- a/core/src/main/java/org/elasticsearch/common/logging/DeprecationLogger.java +++ b/core/src/main/java/org/elasticsearch/common/logging/DeprecationLogger.java @@ -21,13 +21,35 @@ import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.Logger; -import org.apache.logging.log4j.message.ParameterizedMessage; +import org.elasticsearch.Build; +import org.elasticsearch.Version; import org.elasticsearch.common.SuppressLoggerChecks; import org.elasticsearch.common.util.concurrent.ThreadContext; +import java.time.ZoneId; +import java.time.ZonedDateTime; +import java.time.format.DateTimeFormatter; +import java.time.format.DateTimeFormatterBuilder; +import java.time.format.SignStyle; +import java.util.Collections; +import java.util.HashMap; import java.util.Iterator; +import java.util.LinkedHashMap; +import java.util.Locale; +import java.util.Map; +import java.util.Objects; import java.util.Set; import java.util.concurrent.CopyOnWriteArraySet; +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +import static java.time.temporal.ChronoField.DAY_OF_MONTH; +import static java.time.temporal.ChronoField.DAY_OF_WEEK; +import static java.time.temporal.ChronoField.HOUR_OF_DAY; +import static java.time.temporal.ChronoField.MINUTE_OF_HOUR; +import static java.time.temporal.ChronoField.MONTH_OF_YEAR; +import static java.time.temporal.ChronoField.SECOND_OF_MINUTE; +import static java.time.temporal.ChronoField.YEAR; /** * A logger that logs deprecation notices. @@ -36,14 +58,6 @@ public class DeprecationLogger { private final Logger logger; - /** - * The "Warning" Header comes from RFC-7234. As the RFC describes, it's generally used for caching purposes, but it can be - * used for any warning. - * - * https://tools.ietf.org/html/rfc7234#section-5.5 - */ - public static final String DEPRECATION_HEADER = "Warning"; - /** * This is set once by the {@code Node} constructor, but it uses {@link CopyOnWriteArraySet} to ensure that tests can run in parallel. *
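The new `Releasables.releaseOnce` wrapper above is what lets ReleasableBytesStreamOutput hand the same underlying ByteArray to multiple ReleasablePagedBytesReference instances without double-releasing. A small sketch of its contract (the counter is illustrative):

```java
// Sketch: the wrapped resources are closed on the first call only; later calls are no-ops.
AtomicInteger closes = new AtomicInteger();
Releasable underlying = closes::incrementAndGet;
Releasable once = Releasables.releaseOnce(underlying);

once.close();
once.close();            // ignored: the AtomicBoolean guard has already flipped
assert closes.get() == 1;
```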
    @@ -64,7 +78,7 @@ public class DeprecationLogger { * @throws IllegalStateException if this {@code threadContext} has already been set */ public static void setThreadContext(ThreadContext threadContext) { - assert threadContext != null; + Objects.requireNonNull(threadContext, "Cannot register a null ThreadContext"); // add returning false means it _did_ have it already if (THREAD_CONTEXT.add(threadContext) == false) { @@ -102,16 +116,175 @@ public DeprecationLogger(Logger parentLogger) { } else { name = "deprecation." + name; } - this.logger = LogManager.getLogger(name, parentLogger.getMessageFactory()); + this.logger = LogManager.getLogger(name); } /** - * Logs a deprecated message. + * Logs a deprecation message, adding a formatted warning message as a response header on the thread context. */ public void deprecated(String msg, Object... params) { deprecated(THREAD_CONTEXT, msg, params); } + // LRU set of keys used to determine if a deprecation message should be emitted to the deprecation logs + private Set keys = Collections.newSetFromMap(Collections.synchronizedMap(new LinkedHashMap() { + @Override + protected boolean removeEldestEntry(final Map.Entry eldest) { + return size() > 128; + } + })); + + /** + * Adds a formatted warning message as a response header on the thread context, and logs a deprecation message if the associated key has + * not recently been seen. + * + * @param key the key used to determine if this deprecation should be logged + * @param msg the message to log + * @param params parameters to the message + */ + public void deprecatedAndMaybeLog(final String key, final String msg, final Object... params) { + deprecated(THREAD_CONTEXT, msg, keys.add(key), params); + } + + /* + * RFC7234 specifies the warning format as warn-code warn-agent "warn-text" [ "warn-date"]. Here, warn-code is a + * three-digit number with various standard warn codes specified. The warn code 299 is apt for our purposes as it represents a + * miscellaneous persistent warning (can be presented to a human, or logged, and must not be removed by a cache). The warn-agent is an + * arbitrary token; here we use the Elasticsearch version and build hash. The warn text must be quoted. The warn-date is an optional + * quoted field that can be in a variety of specified date formats; here we use RFC 1123 format. + */ + private static final String WARNING_FORMAT = + String.format( + Locale.ROOT, + "299 Elasticsearch-%s%s-%s ", + Version.CURRENT.toString(), + Build.CURRENT.isSnapshot() ? "-SNAPSHOT" : "", + Build.CURRENT.shortHash()) + + "\"%s\" \"%s\""; + + /* + * RFC 7234 section 5.5 specifies that the warn-date is a quoted HTTP-date. HTTP-date is defined in RFC 7234 Appendix B as being from + * RFC 7231 section 7.1.1.1. RFC 7231 specifies an HTTP-date as an IMF-fixdate (or an obs-date referring to obsolete formats). The + * grammar for IMF-fixdate is specified as 'day-name "," SP date1 SP time-of-day SP GMT'. Here, day-name is + * (Mon|Tue|Wed|Thu|Fri|Sat|Sun). Then, date1 is 'day SP month SP year' where day is 2DIGIT, month is + * (Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec), and year is 4DIGIT. Lastly, time-of-day is 'hour ":" minute ":" second' where + * hour is 2DIGIT, minute is 2DIGIT, and second is 2DIGIT. Finally, 2DIGIT and 4DIGIT have the obvious definitions. 
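To make the warn-code / warn-agent / warn-text / warn-date layout described above concrete, here is a hedged sketch of a produced header and of the quote-position bookkeeping used further down; the version, hash, and message are placeholders, not values from this patch:

```java
// Shape of a produced header (all values illustrative; the real warn-agent is Version.CURRENT plus the build hash).
String header = "299 Elasticsearch-6.0.0-alpha1-3a8e1f2 "
        + "\"this setting is deprecated\" "
        + "\"Sat, 25 Feb 2017 10:27:43 GMT\"";
// Quote positions used by extractWarningValueFromWarningHeader() below:
int firstQuote = header.indexOf('"');                          // opens the warning value
int lastQuote = header.lastIndexOf('"');                       // closes the date
int penultimateQuote = header.lastIndexOf('"', lastQuote - 1); // opens the date
String warningValue = header.substring(firstQuote + 1, penultimateQuote - 2);
assert "this setting is deprecated".equals(warningValue);
```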
+ */ + private static final DateTimeFormatter RFC_7231_DATE_TIME; + + static { + final Map dow = new HashMap<>(); + dow.put(1L, "Mon"); + dow.put(2L, "Tue"); + dow.put(3L, "Wed"); + dow.put(4L, "Thu"); + dow.put(5L, "Fri"); + dow.put(6L, "Sat"); + dow.put(7L, "Sun"); + final Map moy = new HashMap<>(); + moy.put(1L, "Jan"); + moy.put(2L, "Feb"); + moy.put(3L, "Mar"); + moy.put(4L, "Apr"); + moy.put(5L, "May"); + moy.put(6L, "Jun"); + moy.put(7L, "Jul"); + moy.put(8L, "Aug"); + moy.put(9L, "Sep"); + moy.put(10L, "Oct"); + moy.put(11L, "Nov"); + moy.put(12L, "Dec"); + RFC_7231_DATE_TIME = new DateTimeFormatterBuilder() + .parseCaseInsensitive() + .parseLenient() + .optionalStart() + .appendText(DAY_OF_WEEK, dow) + .appendLiteral(", ") + .optionalEnd() + .appendValue(DAY_OF_MONTH, 2, 2, SignStyle.NOT_NEGATIVE) + .appendLiteral(' ') + .appendText(MONTH_OF_YEAR, moy) + .appendLiteral(' ') + .appendValue(YEAR, 4) + .appendLiteral(' ') + .appendValue(HOUR_OF_DAY, 2) + .appendLiteral(':') + .appendValue(MINUTE_OF_HOUR, 2) + .optionalStart() + .appendLiteral(':') + .appendValue(SECOND_OF_MINUTE, 2) + .optionalEnd() + .appendLiteral(' ') + .appendOffset("+HHMM", "GMT") + .toFormatter(Locale.getDefault(Locale.Category.FORMAT)); + } + + private static final ZoneId GMT = ZoneId.of("GMT"); + + /** + * Regular expression to test if a string matches the RFC7234 specification for warning headers. This pattern assumes that the warn code + * is always 299. Further, this pattern assumes that the warn agent represents a version of Elasticsearch including the build hash. + */ + public static Pattern WARNING_HEADER_PATTERN = Pattern.compile( + "299 " + // warn code + "Elasticsearch-\\d+\\.\\d+\\.\\d+(?:-(?:alpha|beta|rc)\\d+)?(?:-SNAPSHOT)?-(?:[a-f0-9]{7}|Unknown) " + // warn agent + "\"((?:\t| |!|[\\x23-\\x5b]|[\\x5d-\\x7e]|[\\x80-\\xff]|\\\\|\\\\\")*)\" " + // quoted warning value, captured + // quoted RFC 1123 date format + "\"" + // opening quote + "(?:Mon|Tue|Wed|Thu|Fri|Sat|Sun), " + // weekday + "\\d{2} " + // 2-digit day + "(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) " + // month + "\\d{4} " + // 4-digit year + "\\d{2}:\\d{2}:\\d{2} " + // (two-digit hour):(two-digit minute):(two-digit second) + "GMT" + // GMT + "\""); // closing quote + + /** + * Extracts the warning value from the value of a warning header that is formatted according to RFC 7234. That is, given a string + * {@code 299 Elasticsearch-6.0.0 "warning value" "Sat, 25 Feb 2017 10:27:43 GMT"}, the return value of this method would be {@code + * warning value}. + * + * @param s the value of a warning header formatted according to RFC 7234. + * @return the extracted warning value + */ + public static String extractWarningValueFromWarningHeader(final String s) { + /* + * We know the exact format of the warning header, so to extract the warning value we can skip forward from the front to the first + * quote, and skip backwards from the end to the penultimate quote: + * + * 299 Elasticsearch-6.0.0 "warning value" "Sat, 25, Feb 2017 10:27:43 GMT" + * ^ ^ ^ + * firstQuote penultimateQuote lastQuote + * + * We do it this way rather than seeking forward after the first quote because there could be escaped quotes in the warning value + * but since there are none in the warning date, we can skip backwards to find the quote that closes the quoted warning value. + * + * We parse this manually rather than using the capturing regular expression because the regular expression involves a lot of + * backtracking and carries a performance penalty. 
However, when assertions are enabled, we still use the regular expression to + * verify that we are maintaining the warning header format. + */ + final int firstQuote = s.indexOf('\"'); + final int lastQuote = s.lastIndexOf('\"'); + final int penultimateQuote = s.lastIndexOf('\"', lastQuote - 1); + final String warningValue = s.substring(firstQuote + 1, penultimateQuote - 2); + assert assertWarningValue(s, warningValue); + return warningValue; + } + + /** + * Assert that the specified string has the warning value equal to the provided warning value. + * + * @param s the string representing a full warning header + * @param warningValue the expected warning header + * @return {@code true} if the specified string has the expected warning value + */ + private static boolean assertWarningValue(final String s, final String warningValue) { + final Matcher matcher = WARNING_HEADER_PATTERN.matcher(s); + final boolean matches = matcher.matches(); + assert matches; + return matcher.group(1).equals(warningValue); + } + /** * Logs a deprecated message to the deprecation log, as well as to the local {@link ThreadContext}. * @@ -119,24 +292,53 @@ public void deprecated(String msg, Object... params) { * @param message The deprecation message. * @param params The parameters used to fill in the message, if any exist. */ + void deprecated(final Set threadContexts, final String message, final Object... params) { + deprecated(threadContexts, message, true, params); + } + @SuppressLoggerChecks(reason = "safely delegates to logger") - void deprecated(Set threadContexts, String message, Object... params) { - Iterator iterator = threadContexts.iterator(); + void deprecated(final Set threadContexts, final String message, final boolean log, final Object... params) { + final Iterator iterator = threadContexts.iterator(); if (iterator.hasNext()) { final String formattedMessage = LoggerMessageFormat.format(message, params); - + final String warningHeaderValue = formatWarning(formattedMessage); + assert WARNING_HEADER_PATTERN.matcher(warningHeaderValue).matches(); + assert extractWarningValueFromWarningHeader(warningHeaderValue).equals(escape(formattedMessage)); while (iterator.hasNext()) { try { - iterator.next().addResponseHeader(DEPRECATION_HEADER, formattedMessage); - } catch (IllegalStateException e) { + final ThreadContext next = iterator.next(); + next.addResponseHeader("Warning", warningHeaderValue, DeprecationLogger::extractWarningValueFromWarningHeader); + } catch (final IllegalStateException e) { // ignored; it should be removed shortly } } - logger.warn(formattedMessage); - } else { + } + + if (log) { logger.warn(message, params); } } + /** + * Format a warning string in the proper warning format by prepending a warn code, warn agent, wrapping the warning string in quotes, + * and appending the RFC 7231 date. + * + * @param s the warning string to format + * @return a warning value formatted according to RFC 7234 + */ + public static String formatWarning(final String s) { + return String.format(Locale.ROOT, WARNING_FORMAT, escape(s), RFC_7231_DATE_TIME.format(ZonedDateTime.now(GMT))); + } + + /** + * Escape backslashes and quotes in the specified string. 
+ * + * @param s the string to escape + * @return the escaped string + */ + public static String escape(String s) { + return s.replaceAll("([\"\\\\])", "\\\\$1"); + } + } diff --git a/core/src/main/java/org/elasticsearch/common/logging/ESLoggerFactory.java b/core/src/main/java/org/elasticsearch/common/logging/ESLoggerFactory.java index 853df3d31add0..1d6fdf9cd2df2 100644 --- a/core/src/main/java/org/elasticsearch/common/logging/ESLoggerFactory.java +++ b/core/src/main/java/org/elasticsearch/common/logging/ESLoggerFactory.java @@ -22,59 +22,52 @@ import org.apache.logging.log4j.Level; import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.Logger; -import org.apache.logging.log4j.message.MessageFactory; +import org.apache.logging.log4j.spi.ExtendedLogger; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; -import java.util.Locale; -import java.util.function.Function; - /** * Factory to get {@link Logger}s */ -public abstract class ESLoggerFactory { +public final class ESLoggerFactory { + + private ESLoggerFactory() { + + } public static final Setting LOG_DEFAULT_LEVEL_SETTING = new Setting<>("logger.level", Level.INFO.name(), Level::valueOf, Property.NodeScope); public static final Setting LOG_LEVEL_SETTING = - Setting.prefixKeySetting("logger.", Level.INFO.name(), Level::valueOf, - Property.Dynamic, Property.NodeScope); + Setting.prefixKeySetting("logger.", (key) -> new Setting<>(key, Level.INFO.name(), Level::valueOf, Property.Dynamic, + Property.NodeScope)); public static Logger getLogger(String prefix, String name) { - name = name.intern(); - final Logger logger = getLogger(new PrefixMessageFactory(), name); - final MessageFactory factory = logger.getMessageFactory(); - // in some cases, we initialize the logger before we are ready to set the prefix - // we can not re-initialize the logger, so the above getLogger might return an existing - // instance without the prefix set; thus, we hack around this by resetting the prefix - if (prefix != null && factory instanceof PrefixMessageFactory) { - ((PrefixMessageFactory) factory).setPrefix(prefix.intern()); - } - return logger; + return getLogger(prefix, LogManager.getLogger(name)); } - public static Logger getLogger(MessageFactory messageFactory, String name) { - return LogManager.getLogger(name, messageFactory); + public static Logger getLogger(String prefix, Class clazz) { + /* + * Do not use LogManager#getLogger(Class) as this now uses Class#getCanonicalName under the hood; as this returns null for local and + * anonymous classes, any place we create, for example, an abstract component defined as an anonymous class (e.g., in tests) will + * result in a logger with a null name which will blow up in a lookup inside of Log4j. 
+ */ + return getLogger(prefix, LogManager.getLogger(clazz.getName())); } - public static Logger getLogger(String name) { - return getLogger((String)null, name); + public static Logger getLogger(String prefix, Logger logger) { + return new PrefixLogger((ExtendedLogger)logger, logger.getName(), prefix); } - public static DeprecationLogger getDeprecationLogger(String name) { - return new DeprecationLogger(getLogger(name)); + public static Logger getLogger(Class clazz) { + return getLogger(null, clazz); } - public static DeprecationLogger getDeprecationLogger(String prefix, String name) { - return new DeprecationLogger(getLogger(prefix, name)); + public static Logger getLogger(String name) { + return getLogger(null, name); } public static Logger getRootLogger() { return LogManager.getRootLogger(); } - private ESLoggerFactory() { - // Utility class can't be built. - } - } diff --git a/core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java b/core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java index 68e2886cf372e..4c12206859674 100644 --- a/core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java +++ b/core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java @@ -30,11 +30,18 @@ import org.apache.logging.log4j.core.config.composite.CompositeConfiguration; import org.apache.logging.log4j.core.config.properties.PropertiesConfiguration; import org.apache.logging.log4j.core.config.properties.PropertiesConfigurationFactory; +import org.apache.logging.log4j.status.StatusConsoleListener; +import org.apache.logging.log4j.status.StatusData; +import org.apache.logging.log4j.status.StatusListener; +import org.apache.logging.log4j.status.StatusLogger; import org.elasticsearch.Version; +import org.elasticsearch.cli.ExitCodes; +import org.elasticsearch.cli.UserException; import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; +import org.elasticsearch.node.Node; import java.io.IOException; import java.nio.file.FileVisitOption; @@ -48,56 +55,153 @@ import java.util.EnumSet; import java.util.List; import java.util.Map; +import java.util.Objects; import java.util.Set; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.stream.StreamSupport; public class LogConfigurator { - public static void configure(final Environment environment, final boolean resolveConfig) throws IOException { - final Settings settings = environment.settings(); + /* + * We want to detect situations where we touch logging before the configuration is loaded. If we do this, Log4j will status log an error + * message at the error level. With this error listener, we can capture if this happens. More broadly, we can detect any error-level + * status log message which likely indicates that something is broken. The listener is installed immediately on startup, and then when + * we get around to configuring logging we check that no error-level log messages have been logged by the status logger. If they have we + * fail startup and any such messages can be seen on the console. 
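For orientation, a hedged sketch of how the two entry points added to LogConfigurator are driven by settings; the keys follow the `logger.*` convention consumed by `configureLoggerLevels` below, and the values are illustrative:

```java
// Sketch: configure logging purely from settings (no log4j2.properties), e.g. for a CLI tool.
Settings settings = Settings.builder()
        .put("logger.level", "info")                         // default/root level
        .put("logger.org.elasticsearch.transport", "trace")  // per-logger level
        .build();
LogConfigurator.registerErrorListener();
LogConfigurator.configureWithoutConfig(settings);
```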
+ */ + private static final AtomicBoolean error = new AtomicBoolean(); + private static final StatusListener ERROR_LISTENER = new StatusConsoleListener(Level.ERROR) { + @Override + public void log(StatusData data) { + error.set(true); + super.log(data); + } + }; - setLogConfigurationSystemProperty(environment, settings); + /** + * Registers a listener for status logger errors. This listener should be registered as early as possible to ensure that no errors are + * logged by the status logger before logging is configured. + */ + public static void registerErrorListener() { + error.set(false); + StatusLogger.getLogger().registerListener(ERROR_LISTENER); + } + /** + * Configure logging without reading a log4j2.properties file, effectively configuring the + * status logger and all loggers to the console. + * + * @param settings for configuring logger.level and individual loggers + */ + public static void configureWithoutConfig(final Settings settings) { + Objects.requireNonNull(settings); // we initialize the status logger immediately otherwise Log4j will complain when we try to get the context - final ConfigurationBuilder builder = ConfigurationBuilderFactory.newConfigurationBuilder(); - builder.setStatusLevel(Level.ERROR); - Configurator.initialize(builder.build()); + configureStatusLogger(); + configureLoggerLevels(settings); + } + + /** + * Configure logging reading from any log4j2.properties found in the config directory and its + * subdirectories from the specified environment. Will also configure logging to point the logs + * directory from the specified environment. + * + * @param environment the environment for reading configs and the logs path + * @throws IOException if there is an issue readings any log4j2.properties in the config + * directory + * @throws UserException if there are no log4j2.properties in the specified configs path + */ + public static void configure(final Environment environment) throws IOException, UserException { + Objects.requireNonNull(environment); + try { + // we are about to configure logging, check that the status logger did not log any error-level messages + checkErrorListener(); + } finally { + // whether or not the error listener check failed we can remove the listener now + StatusLogger.getLogger().removeListener(ERROR_LISTENER); + } + configure(environment.settings(), environment.configFile(), environment.logsFile()); + } + + private static void checkErrorListener() { + assert errorListenerIsRegistered() : "expected error listener to be registered"; + if (error.get()) { + throw new IllegalStateException("status logger logged an error before logging was configured"); + } + } + + private static boolean errorListenerIsRegistered() { + return StreamSupport.stream(StatusLogger.getLogger().getListeners().spliterator(), false).anyMatch(l -> l == ERROR_LISTENER); + } + + private static void configure(final Settings settings, final Path configsPath, final Path logsPath) throws IOException, UserException { + Objects.requireNonNull(settings); + Objects.requireNonNull(configsPath); + Objects.requireNonNull(logsPath); + + setLogConfigurationSystemProperty(logsPath, settings); + // we initialize the status logger immediately otherwise Log4j will complain when we try to get the context + configureStatusLogger(); final LoggerContext context = (LoggerContext) LogManager.getContext(false); - if (resolveConfig) { - final List configurations = new ArrayList<>(); - final PropertiesConfigurationFactory factory = new PropertiesConfigurationFactory(); - final Set options = 
EnumSet.of(FileVisitOption.FOLLOW_LINKS); - Files.walkFileTree(environment.configFile(), options, Integer.MAX_VALUE, new SimpleFileVisitor() { - @Override - public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException { - if (file.getFileName().toString().equals("log4j2.properties")) { - configurations.add((PropertiesConfiguration) factory.getConfiguration(file.toString(), file.toUri())); - } - return FileVisitResult.CONTINUE; + final List configurations = new ArrayList<>(); + final PropertiesConfigurationFactory factory = new PropertiesConfigurationFactory(); + final Set options = EnumSet.of(FileVisitOption.FOLLOW_LINKS); + Files.walkFileTree(configsPath, options, Integer.MAX_VALUE, new SimpleFileVisitor() { + @Override + public FileVisitResult visitFile(final Path file, final BasicFileAttributes attrs) throws IOException { + if (file.getFileName().toString().equals("log4j2.properties")) { + configurations.add((PropertiesConfiguration) factory.getConfiguration(context, file.toString(), file.toUri())); } - }); - context.start(new CompositeConfiguration(configurations)); - warnIfOldConfigurationFilePresent(environment); + return FileVisitResult.CONTINUE; + } + }); + + if (configurations.isEmpty()) { + throw new UserException( + ExitCodes.CONFIG, + "no log4j2.properties found; tried [" + configsPath + "] and its subdirectories"); } + context.start(new CompositeConfiguration(configurations)); + warnIfOldConfigurationFilePresent(configsPath); + + configureLoggerLevels(settings); + } + + private static void configureStatusLogger() { + final ConfigurationBuilder builder = ConfigurationBuilderFactory.newConfigurationBuilder(); + builder.setStatusLevel(Level.ERROR); + Configurator.initialize(builder.build()); + } + + /** + * Configures the logging levels for loggers configured in the specified settings. 
+ * + * @param settings the settings from which logger levels will be extracted + */ + private static void configureLoggerLevels(final Settings settings) { if (ESLoggerFactory.LOG_DEFAULT_LEVEL_SETTING.exists(settings)) { - Loggers.setLevel(ESLoggerFactory.getRootLogger(), ESLoggerFactory.LOG_DEFAULT_LEVEL_SETTING.get(settings)); + final Level level = ESLoggerFactory.LOG_DEFAULT_LEVEL_SETTING.get(settings); + Loggers.setLevel(ESLoggerFactory.getRootLogger(), level); } final Map levels = settings.filter(ESLoggerFactory.LOG_LEVEL_SETTING::match).getAsMap(); - for (String key : levels.keySet()) { - final Level level = ESLoggerFactory.LOG_LEVEL_SETTING.getConcreteSetting(key).get(settings); - Loggers.setLevel(Loggers.getLogger(key.substring("logger.".length())), level); + for (final String key : levels.keySet()) { + // do not set a log level for a logger named level (from the default log setting) + if (!key.equals(ESLoggerFactory.LOG_DEFAULT_LEVEL_SETTING.getKey())) { + final Level level = ESLoggerFactory.LOG_LEVEL_SETTING.getConcreteSetting(key).get(settings); + Loggers.setLevel(ESLoggerFactory.getLogger(key.substring("logger.".length())), level); + } } } - private static void warnIfOldConfigurationFilePresent(final Environment environment) throws IOException { + private static void warnIfOldConfigurationFilePresent(final Path configsPath) throws IOException { // TODO: the warning for unsupported logging configurations can be removed in 6.0.0 assert Version.CURRENT.major < 6; final List suffixes = Arrays.asList(".yml", ".yaml", ".json", ".properties"); final Set options = EnumSet.of(FileVisitOption.FOLLOW_LINKS); - Files.walkFileTree(environment.configFile(), options, Integer.MAX_VALUE, new SimpleFileVisitor() { + Files.walkFileTree(configsPath, options, Integer.MAX_VALUE, new SimpleFileVisitor() { @Override public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException { final String fileName = file.getFileName().toString(); @@ -116,9 +220,34 @@ public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IO }); } + /** + * Set system properties that can be used in configuration files to specify paths and file patterns for log files. We expose three + * properties here: + *
+     * <ul>
+     * <li>
+     * {@code es.logs.base_path} the base path containing the log files
+     * </li>
+     * <li>
+     * {@code es.logs.cluster_name} the cluster name, used as the prefix of log filenames in the default configuration
+     * </li>
+     * <li>
+     * {@code es.logs.node_name} the node name, can be used as part of log filenames (only exposed if {@link Node#NODE_NAME_SETTING} is
+     * explicitly set)
+     * </li>
+     * </ul>
    + * + * @param logsPath the path to the log files + * @param settings the settings to extract the cluster and node names + */ @SuppressForbidden(reason = "sets system property for logging configuration") - private static void setLogConfigurationSystemProperty(final Environment environment, final Settings settings) { - System.setProperty("es.logs", environment.logsFile().resolve(ClusterName.CLUSTER_NAME_SETTING.get(settings).value()).toString()); + private static void setLogConfigurationSystemProperty(final Path logsPath, final Settings settings) { + // for backwards compatibility + System.setProperty("es.logs", logsPath.resolve(ClusterName.CLUSTER_NAME_SETTING.get(settings).value()).toString()); + System.setProperty("es.logs.base_path", logsPath.toString()); + System.setProperty("es.logs.cluster_name", ClusterName.CLUSTER_NAME_SETTING.get(settings).value()); + if (Node.NODE_NAME_SETTING.exists(settings)) { + System.setProperty("es.logs.node_name", Node.NODE_NAME_SETTING.get(settings)); + } } } diff --git a/core/src/main/java/org/elasticsearch/common/logging/Loggers.java b/core/src/main/java/org/elasticsearch/common/logging/Loggers.java index 647ada099ba1e..812a0b70f2877 100644 --- a/core/src/main/java/org/elasticsearch/common/logging/Loggers.java +++ b/core/src/main/java/org/elasticsearch/common/logging/Loggers.java @@ -20,7 +20,9 @@ package org.elasticsearch.common.logging; import org.apache.logging.log4j.Level; +import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.core.Appender; import org.apache.logging.log4j.core.LoggerContext; import org.apache.logging.log4j.core.config.Configuration; import org.apache.logging.log4j.core.config.Configurator; @@ -33,9 +35,12 @@ import org.elasticsearch.node.Node; import java.util.ArrayList; +import java.util.Collection; import java.util.List; +import java.util.Map; import static java.util.Arrays.asList; +import static javax.security.auth.login.Configuration.getConfiguration; import static org.elasticsearch.common.util.CollectionUtils.asArrayList; /** @@ -43,24 +48,8 @@ */ public class Loggers { - static final String commonPrefix = System.getProperty("es.logger.prefix", "org.elasticsearch."); - public static final String SPACE = " "; - private static boolean consoleLoggingEnabled = true; - - public static void disableConsoleLogging() { - consoleLoggingEnabled = false; - } - - public static void enableConsoleLogging() { - consoleLoggingEnabled = true; - } - - public static boolean consoleLoggingEnabled() { - return consoleLoggingEnabled; - } - public static Logger getLogger(Class clazz, Settings settings, ShardId shardId, String... prefixes) { return getLogger(clazz, settings, shardId.getIndex(), asArrayList(Integer.toString(shardId.id()), prefixes).toArray(new String[0])); } @@ -79,10 +68,16 @@ public static Logger getLogger(Class clazz, Settings settings, Index index, S } public static Logger getLogger(Class clazz, Settings settings, String... prefixes) { - return getLogger(buildClassLoggerName(clazz), settings, prefixes); + final List prefixesList = prefixesList(settings, prefixes); + return getLogger(clazz, prefixesList.toArray(new String[prefixesList.size()])); } public static Logger getLogger(String loggerName, Settings settings, String... prefixes) { + final List prefixesList = prefixesList(settings, prefixes); + return getLogger(loggerName, prefixesList.toArray(new String[prefixesList.size()])); + } + + private static List prefixesList(Settings settings, String... 
prefixes) { List prefixesList = new ArrayList<>(); if (Node.NODE_NAME_SETTING.exists(settings)) { prefixesList.add(Node.NODE_NAME_SETTING.get(settings)); @@ -90,26 +85,31 @@ public static Logger getLogger(String loggerName, Settings settings, String... p if (prefixes != null && prefixes.length > 0) { prefixesList.addAll(asList(prefixes)); } - return getLogger(getLoggerName(loggerName), prefixesList.toArray(new String[prefixesList.size()])); + return prefixesList; } public static Logger getLogger(Logger parentLogger, String s) { - return ESLoggerFactory.getLogger(parentLogger.getMessageFactory(), getLoggerName(parentLogger.getName() + s)); + assert parentLogger instanceof PrefixLogger; + return ESLoggerFactory.getLogger(((PrefixLogger)parentLogger).prefix(), parentLogger.getName() + s); } public static Logger getLogger(String s) { - return ESLoggerFactory.getLogger(getLoggerName(s)); + return ESLoggerFactory.getLogger(s); } public static Logger getLogger(Class clazz) { - return ESLoggerFactory.getLogger(getLoggerName(buildClassLoggerName(clazz))); + return ESLoggerFactory.getLogger(clazz); } public static Logger getLogger(Class clazz, String... prefixes) { - return getLogger(buildClassLoggerName(clazz), prefixes); + return ESLoggerFactory.getLogger(formatPrefix(prefixes), clazz); } public static Logger getLogger(String name, String... prefixes) { + return ESLoggerFactory.getLogger(formatPrefix(prefixes), name); + } + + private static String formatPrefix(String... prefixes) { String prefix = null; if (prefixes != null && prefixes.length > 0) { StringBuilder sb = new StringBuilder(); @@ -127,7 +127,7 @@ public static Logger getLogger(String name, String... prefixes) { prefix = sb.toString(); } } - return ESLoggerFactory.getLogger(prefix, getLoggerName(name)); + return prefix; } /** @@ -145,30 +145,60 @@ public static void setLevel(Logger logger, String level) { } public static void setLevel(Logger logger, Level level) { - if (!"".equals(logger.getName())) { + if (!LogManager.ROOT_LOGGER_NAME.equals(logger.getName())) { Configurator.setLevel(logger.getName(), level); } else { - LoggerContext ctx = LoggerContext.getContext(false); - Configuration config = ctx.getConfiguration(); - LoggerConfig loggerConfig = config.getLoggerConfig(logger.getName()); + final LoggerContext ctx = LoggerContext.getContext(false); + final Configuration config = ctx.getConfiguration(); + final LoggerConfig loggerConfig = config.getLoggerConfig(logger.getName()); loggerConfig.setLevel(level); ctx.updateLoggers(); } + + // we have to descend the hierarchy + final LoggerContext ctx = LoggerContext.getContext(false); + for (final LoggerConfig loggerConfig : ctx.getConfiguration().getLoggers().values()) { + if (LogManager.ROOT_LOGGER_NAME.equals(logger.getName()) || loggerConfig.getName().startsWith(logger.getName() + ".")) { + Configurator.setLevel(loggerConfig.getName(), level); + } + } } - private static String buildClassLoggerName(Class clazz) { - String name = clazz.getName(); - if (name.startsWith("org.elasticsearch.")) { - name = Classes.getPackageName(clazz); + public static void addAppender(final Logger logger, final Appender appender) { + final LoggerContext ctx = (LoggerContext) LogManager.getContext(false); + final Configuration config = ctx.getConfiguration(); + config.addAppender(appender); + LoggerConfig loggerConfig = config.getLoggerConfig(logger.getName()); + if (!logger.getName().equals(loggerConfig.getName())) { + loggerConfig = new LoggerConfig(logger.getName(), logger.getLevel(), true); + 
config.addLogger(logger.getName(), loggerConfig); } - return name; + loggerConfig.addAppender(appender, null, null); + ctx.updateLoggers(); } - private static String getLoggerName(String name) { - if (name.startsWith("org.elasticsearch.")) { - name = name.substring("org.elasticsearch.".length()); + public static void removeAppender(final Logger logger, final Appender appender) { + final LoggerContext ctx = (LoggerContext) LogManager.getContext(false); + final Configuration config = ctx.getConfiguration(); + LoggerConfig loggerConfig = config.getLoggerConfig(logger.getName()); + if (!logger.getName().equals(loggerConfig.getName())) { + loggerConfig = new LoggerConfig(logger.getName(), logger.getLevel(), true); + config.addLogger(logger.getName(), loggerConfig); + } + loggerConfig.removeAppender(appender.getName()); + ctx.updateLoggers(); + } + + public static Appender findAppender(final Logger logger, final Class clazz) { + final LoggerContext ctx = (LoggerContext) LogManager.getContext(false); + final Configuration config = ctx.getConfiguration(); + final LoggerConfig loggerConfig = config.getLoggerConfig(logger.getName()); + for (final Map.Entry entry : loggerConfig.getAppenders().entrySet()) { + if (entry.getValue().getClass().equals(clazz)) { + return entry.getValue(); + } } - return commonPrefix + name; + return null; } } diff --git a/core/src/main/java/org/elasticsearch/common/logging/PrefixLogger.java b/core/src/main/java/org/elasticsearch/common/logging/PrefixLogger.java new file mode 100644 index 0000000000000..a78330c3e8564 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/logging/PrefixLogger.java @@ -0,0 +1,105 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.logging; + +import org.apache.logging.log4j.Level; +import org.apache.logging.log4j.Marker; +import org.apache.logging.log4j.MarkerManager; +import org.apache.logging.log4j.message.Message; +import org.apache.logging.log4j.spi.ExtendedLogger; +import org.apache.logging.log4j.spi.ExtendedLoggerWrapper; + +import java.util.WeakHashMap; + +/** + * A logger that prefixes all messages with a fixed prefix specified during construction. The prefix mechanism uses the marker construct, so + * for the prefixes to appear, the logging layout pattern must include the marker in its pattern. + */ +class PrefixLogger extends ExtendedLoggerWrapper { + + /* + * We can not use the built-in Marker tracking (MarkerManager) because the MarkerManager holds a permanent reference to the marker; + * however, we have transient markers from index-level and shard-level components so this would effectively be a memory leak. 
Since we + * can not tie into the lifecycle of these components, we have to use a mechanism that enables garbage collection of such markers when + * they are no longer in use. + */ + private static final WeakHashMap markers = new WeakHashMap<>(); + + /** + * Return the size of the cached markers. This size can vary as markers are cached but collected during GC activity when a given prefix + * is no longer in use. + * + * @return the size of the cached markers + */ + static int markersSize() { + return markers.size(); + } + + /** + * The marker for this prefix logger. + */ + private final Marker marker; + + /** + * Obtain the prefix for this prefix logger. This can be used to create a logger with the same prefix as this one. + * + * @return the prefix + */ + public String prefix() { + return marker.getName(); + } + + /** + * Construct a prefix logger with the specified name and prefix. + * + * @param logger the extended logger to wrap + * @param name the name of this prefix logger + * @param prefix the prefix for this prefix logger + */ + PrefixLogger(final ExtendedLogger logger, final String name, final String prefix) { + super(logger, name, null); + + final String actualPrefix = (prefix == null ? "" : prefix).intern(); + final Marker actualMarker; + // markers is not thread-safe, so we synchronize access + synchronized (markers) { + final Marker maybeMarker = markers.get(actualPrefix); + if (maybeMarker == null) { + actualMarker = new MarkerManager.Log4jMarker(actualPrefix); + /* + * We must create a new instance here as otherwise the marker will hold a reference to the key in the weak hash map; as + * those references are held strongly, this would give a strong reference back to the key preventing them from ever being + * collected. This also guarantees that no other strong reference can be held to the prefix anywhere. + */ + markers.put(new String(actualPrefix), actualMarker); + } else { + actualMarker = maybeMarker; + } + } + this.marker = actualMarker; + } + + @Override + public void logMessage(final String fqcn, final Level level, final Marker marker, final Message message, final Throwable t) { + assert marker == null; + super.logMessage(fqcn, level, this.marker, message, t); + } + +} diff --git a/core/src/main/java/org/elasticsearch/common/logging/PrefixMessageFactory.java b/core/src/main/java/org/elasticsearch/common/logging/PrefixMessageFactory.java deleted file mode 100644 index a141ceb75aae6..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/logging/PrefixMessageFactory.java +++ /dev/null @@ -1,221 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
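The weak-map caching described in the PrefixLogger comments above reduces to a small pattern: look markers up by prefix through weakly held keys, and make sure a cached value never pins its own key, or the entry can never be evicted. A minimal stand-alone sketch of that pattern (the class and value type here are illustrative stand-ins, not part of the patch):

import java.util.WeakHashMap;

final class PrefixMarkers {

    // Stand-in for a Log4j marker: like the real one, it keeps a reference to its name.
    static final class Marker {
        final String name;
        Marker(String name) { this.name = name; }
    }

    // Weak keys: an entry becomes collectible once no live component references the prefix.
    private static final WeakHashMap<String, Marker> MARKERS = new WeakHashMap<>();

    static Marker markerFor(String prefix) {
        synchronized (MARKERS) { // WeakHashMap is not thread-safe
            Marker existing = MARKERS.get(prefix);
            if (existing != null) {
                return existing;
            }
            Marker created = new Marker(prefix);
            // Store under a fresh String instance: the marker holds the caller's string, and if
            // that same instance were also the weak key, the value would strongly reference its
            // own key through the map and the entry could never be garbage collected.
            MARKERS.put(new String(prefix), created);
            return created;
        }
    }
}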
- */ - -package org.elasticsearch.common.logging; - -import org.apache.logging.log4j.message.Message; -import org.apache.logging.log4j.message.MessageFactory2; -import org.apache.logging.log4j.message.ObjectMessage; -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.message.SimpleMessage; - -public class PrefixMessageFactory implements MessageFactory2 { - - private String prefix = ""; - - public String getPrefix() { - return prefix; - } - - public void setPrefix(String prefix) { - this.prefix = prefix; - } - - @Override - public Message newMessage(Object message) { - return new PrefixObjectMessage(prefix, message); - } - - private static class PrefixObjectMessage extends ObjectMessage { - - private final String prefix; - private final Object object; - private String prefixObjectString; - - private PrefixObjectMessage(String prefix, Object object) { - super(object); - this.prefix = prefix; - this.object = object; - } - - @Override - public String getFormattedMessage() { - if (prefixObjectString == null) { - prefixObjectString = prefix + super.getFormattedMessage(); - } - return prefixObjectString; - } - - @Override - public void formatTo(StringBuilder buffer) { - buffer.append(prefix); - super.formatTo(buffer); - } - - @Override - public Object[] getParameters() { - return new Object[]{prefix, object}; - } - - } - - @Override - public Message newMessage(String message) { - return new PrefixSimpleMessage(prefix, message); - } - - private static class PrefixSimpleMessage extends SimpleMessage { - - private final String prefix; - private String prefixMessage; - - PrefixSimpleMessage(String prefix, String message) { - super(message); - this.prefix = prefix; - } - - PrefixSimpleMessage(String prefix, CharSequence charSequence) { - super(charSequence); - this.prefix = prefix; - } - - @Override - public String getFormattedMessage() { - if (prefixMessage == null) { - prefixMessage = prefix + super.getFormattedMessage(); - } - return prefixMessage; - } - - @Override - public void formatTo(StringBuilder buffer) { - buffer.append(prefix); - super.formatTo(buffer); - } - - @Override - public int length() { - return prefixMessage.length(); - } - - @Override - public char charAt(int index) { - return prefixMessage.charAt(index); - } - - @Override - public CharSequence subSequence(int start, int end) { - return prefixMessage.subSequence(start, end); - } - - } - - @Override - public Message newMessage(String message, Object... params) { - return new PrefixParameterizedMessage(prefix, message, params); - } - - private static class PrefixParameterizedMessage extends ParameterizedMessage { - - private static ThreadLocal threadLocalStringBuilder = ThreadLocal.withInitial(StringBuilder::new); - - private final String prefix; - private String formattedMessage; - - private PrefixParameterizedMessage(String prefix, String messagePattern, Object... 
arguments) { - super(messagePattern, arguments); - this.prefix = prefix; - } - - @Override - public String getFormattedMessage() { - if (formattedMessage == null) { - final StringBuilder buffer = threadLocalStringBuilder.get(); - buffer.setLength(0); - formatTo(buffer); - formattedMessage = buffer.toString(); - } - return formattedMessage; - } - - @Override - public void formatTo(StringBuilder buffer) { - buffer.append(prefix); - super.formatTo(buffer); - } - - } - - @Override - public Message newMessage(CharSequence charSequence) { - return new PrefixSimpleMessage(prefix, charSequence); - } - - @Override - public Message newMessage(String message, Object p0) { - return new PrefixParameterizedMessage(prefix, message, p0); - } - - @Override - public Message newMessage(String message, Object p0, Object p1) { - return new PrefixParameterizedMessage(prefix, message, p0, p1); - } - - @Override - public Message newMessage(String message, Object p0, Object p1, Object p2) { - return new PrefixParameterizedMessage(prefix, message, p0, p1, p2); - } - - @Override - public Message newMessage(String message, Object p0, Object p1, Object p2, Object p3) { - return new PrefixParameterizedMessage(prefix, message, p0, p1, p2, p3); - } - - @Override - public Message newMessage(String message, Object p0, Object p1, Object p2, Object p3, Object p4) { - return new PrefixParameterizedMessage(prefix, message, p0, p1, p2, p3, p4); - } - - @Override - public Message newMessage(String message, Object p0, Object p1, Object p2, Object p3, Object p4, Object p5) { - return new PrefixParameterizedMessage(prefix, message, p0, p1, p2, p3, p4, p5); - } - - @Override - public Message newMessage(String message, Object p0, Object p1, Object p2, Object p3, Object p4, Object p5, Object p6) { - return new PrefixParameterizedMessage(prefix, message, p0, p1, p2, p3, p4, p5, p6); - } - - @Override - public Message newMessage(String message, Object p0, Object p1, Object p2, Object p3, Object p4, Object p5, Object p6, Object p7) { - return new PrefixParameterizedMessage(prefix, message, p0, p1, p2, p3, p4, p5, p6, p7); - } - - @Override - public Message newMessage( - String message, Object p0, Object p1, Object p2, Object p3, Object p4, Object p5, Object p6, Object p7, Object p8) { - return new PrefixParameterizedMessage(prefix, message, p0, p1, p2, p3, p4, p5, p6, p7, p8); - } - - @Override - public Message newMessage( - String message, Object p0, Object p1, Object p2, Object p3, Object p4, Object p5, Object p6, Object p7, Object p8, Object p9) { - return new PrefixParameterizedMessage(prefix, message, p0, p1, p2, p3, p4, p5, p6, p7, p8, p9); - } -} diff --git a/core/src/main/java/org/elasticsearch/common/lucene/LoggerInfoStream.java b/core/src/main/java/org/elasticsearch/common/lucene/LoggerInfoStream.java index c4ef2ef8c703f..680760444b2a6 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/LoggerInfoStream.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/LoggerInfoStream.java @@ -23,21 +23,19 @@ import org.apache.lucene.util.InfoStream; import org.elasticsearch.common.logging.Loggers; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; + /** An InfoStream (for Lucene's IndexWriter) that redirects * messages to "lucene.iw.ifd" and "lucene.iw" Logger.trace. 
*/ - public final class LoggerInfoStream extends InfoStream { - /** Used for component-specific logging: */ - /** Logger for everything */ - private final Logger logger; + private final Logger parentLogger; - /** Logger for IndexFileDeleter */ - private final Logger ifdLogger; + private final Map loggers = new ConcurrentHashMap<>(); - public LoggerInfoStream(Logger parentLogger) { - logger = Loggers.getLogger(parentLogger, ".lucene.iw"); - ifdLogger = Loggers.getLogger(parentLogger, ".lucene.iw.ifd"); + public LoggerInfoStream(final Logger parentLogger) { + this.parentLogger = parentLogger; } @Override @@ -53,14 +51,12 @@ public boolean isEnabled(String component) { } private Logger getLogger(String component) { - if (component.equals("IFD")) { - return ifdLogger; - } else { - return logger; - } + return loggers.computeIfAbsent(component, c -> Loggers.getLogger(parentLogger, "." + c)); } @Override public void close() { + } + } diff --git a/core/src/main/java/org/elasticsearch/common/lucene/Lucene.java b/core/src/main/java/org/elasticsearch/common/lucene/Lucene.java index 94e1f05e46b86..ed4fedf64ae81 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/Lucene.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/Lucene.java @@ -27,6 +27,7 @@ import org.apache.lucene.codecs.CodecUtil; import org.apache.lucene.codecs.DocValuesFormat; import org.apache.lucene.codecs.PostingsFormat; +import org.apache.lucene.document.LatLonDocValuesField; import org.apache.lucene.index.CorruptIndexException; import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.index.IndexCommit; @@ -48,6 +49,7 @@ import org.apache.lucene.search.Query; import org.apache.lucene.search.ScoreDoc; import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.ScorerSupplier; import org.apache.lucene.search.SimpleCollector; import org.apache.lucene.search.SortField; import org.apache.lucene.search.TimeLimitingCollector; @@ -55,6 +57,9 @@ import org.apache.lucene.search.TopFieldDocs; import org.apache.lucene.search.TwoPhaseIterator; import org.apache.lucene.search.Weight; +import org.apache.lucene.search.grouping.CollapseTopFieldDocs; +import org.apache.lucene.search.SortedNumericSortField; +import org.apache.lucene.search.SortedSetSortField; import org.apache.lucene.store.Directory; import org.apache.lucene.store.IOContext; import org.apache.lucene.store.IndexInput; @@ -282,23 +287,23 @@ public static boolean exists(IndexSearcher searcher, Query query) throws IOExcep } public static TopDocs readTopDocs(StreamInput in) throws IOException { - if (in.readBoolean()) { + byte type = in.readByte(); + if (type == 0) { + int totalHits = in.readVInt(); + float maxScore = in.readFloat(); + + ScoreDoc[] scoreDocs = new ScoreDoc[in.readVInt()]; + for (int i = 0; i < scoreDocs.length; i++) { + scoreDocs[i] = new ScoreDoc(in.readVInt(), in.readFloat()); + } + return new TopDocs(totalHits, scoreDocs, maxScore); + } else if (type == 1) { int totalHits = in.readVInt(); float maxScore = in.readFloat(); SortField[] fields = new SortField[in.readVInt()]; for (int i = 0; i < fields.length; i++) { - String field = null; - if (in.readBoolean()) { - field = in.readString(); - } - SortField.Type sortType = readSortType(in); - Object missingValue = readMissingValue(in); - boolean reverse = in.readBoolean(); - fields[i] = new SortField(field, sortType, reverse); - if (missingValue != null) { - fields[i].setMissingValue(missingValue); - } + fields[i] = readSortField(in); } FieldDoc[] fieldDocs = new 
FieldDoc[in.readVInt()]; @@ -306,15 +311,25 @@ public static TopDocs readTopDocs(StreamInput in) throws IOException { fieldDocs[i] = readFieldDoc(in); } return new TopFieldDocs(totalHits, fieldDocs, fields, maxScore); - } else { + } else if (type == 2) { int totalHits = in.readVInt(); float maxScore = in.readFloat(); - ScoreDoc[] scoreDocs = new ScoreDoc[in.readVInt()]; - for (int i = 0; i < scoreDocs.length; i++) { - scoreDocs[i] = new ScoreDoc(in.readVInt(), in.readFloat()); + String field = in.readString(); + SortField[] fields = new SortField[in.readVInt()]; + for (int i = 0; i < fields.length; i++) { + fields[i] = readSortField(in); } - return new TopDocs(totalHits, scoreDocs, maxScore); + int size = in.readVInt(); + Object[] collapseValues = new Object[size]; + FieldDoc[] fieldDocs = new FieldDoc[size]; + for (int i = 0; i < fieldDocs.length; i++) { + fieldDocs[i] = readFieldDoc(in); + collapseValues[i] = readSortValue(in); + } + return new CollapseTopFieldDocs(field, totalHits, fieldDocs, fields, collapseValues, maxScore); + } else { + throw new IllegalStateException("Unknown type " + type); } } @@ -349,13 +364,62 @@ public static FieldDoc readFieldDoc(StreamInput in) throws IOException { return new FieldDoc(in.readVInt(), in.readFloat(), cFields); } + private static Comparable readSortValue(StreamInput in) throws IOException { + byte type = in.readByte(); + if (type == 0) { + return null; + } else if (type == 1) { + return in.readString(); + } else if (type == 2) { + return in.readInt(); + } else if (type == 3) { + return in.readLong(); + } else if (type == 4) { + return in.readFloat(); + } else if (type == 5) { + return in.readDouble(); + } else if (type == 6) { + return in.readByte(); + } else if (type == 7) { + return in.readShort(); + } else if (type == 8) { + return in.readBoolean(); + } else if (type == 9) { + return in.readBytesRef(); + } else { + throw new IOException("Can't match type [" + type + "]"); + } + } + public static ScoreDoc readScoreDoc(StreamInput in) throws IOException { return new ScoreDoc(in.readVInt(), in.readFloat()); } + private static final Class GEO_DISTANCE_SORT_TYPE_CLASS = LatLonDocValuesField.newDistanceSort("some_geo_field", 0, 0).getClass(); + public static void writeTopDocs(StreamOutput out, TopDocs topDocs) throws IOException { - if (topDocs instanceof TopFieldDocs) { - out.writeBoolean(true); + if (topDocs instanceof CollapseTopFieldDocs) { + out.writeByte((byte) 2); + CollapseTopFieldDocs collapseDocs = (CollapseTopFieldDocs) topDocs; + + out.writeVInt(topDocs.totalHits); + out.writeFloat(topDocs.getMaxScore()); + + out.writeString(collapseDocs.field); + + out.writeVInt(collapseDocs.fields.length); + for (SortField sortField : collapseDocs.fields) { + writeSortField(out, sortField); + } + + out.writeVInt(topDocs.scoreDocs.length); + for (int i = 0; i < topDocs.scoreDocs.length; i++) { + ScoreDoc doc = collapseDocs.scoreDocs[i]; + writeFieldDoc(out, (FieldDoc) doc); + writeSortValue(out, collapseDocs.collapseValues[i]); + } + } else if (topDocs instanceof TopFieldDocs) { + out.writeByte((byte) 1); TopFieldDocs topFieldDocs = (TopFieldDocs) topDocs; out.writeVInt(topDocs.totalHits); @@ -363,21 +427,7 @@ public static void writeTopDocs(StreamOutput out, TopDocs topDocs) throws IOExce out.writeVInt(topFieldDocs.fields.length); for (SortField sortField : topFieldDocs.fields) { - if (sortField.getField() == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - out.writeString(sortField.getField()); - } - if 
(sortField.getComparatorSource() != null) { - IndexFieldData.XFieldComparatorSource comparatorSource = (IndexFieldData.XFieldComparatorSource) sortField.getComparatorSource(); - writeSortType(out, comparatorSource.reducedType()); - writeMissingValue(out, comparatorSource.missingValue(sortField.getReverse())); - } else { - writeSortType(out, sortField.getType()); - writeMissingValue(out, sortField.getMissingValue()); - } - out.writeBoolean(sortField.getReverse()); + writeSortField(out, sortField); } out.writeVInt(topDocs.scoreDocs.length); @@ -385,7 +435,7 @@ public static void writeTopDocs(StreamOutput out, TopDocs topDocs) throws IOExce writeFieldDoc(out, (FieldDoc) doc); } } else { - out.writeBoolean(false); + out.writeByte((byte) 0); out.writeVInt(topDocs.totalHits); out.writeFloat(topDocs.getMaxScore()); @@ -421,44 +471,49 @@ private static Object readMissingValue(StreamInput in) throws IOException { } } + + private static void writeSortValue(StreamOutput out, Object field) throws IOException { + if (field == null) { + out.writeByte((byte) 0); + } else { + Class type = field.getClass(); + if (type == String.class) { + out.writeByte((byte) 1); + out.writeString((String) field); + } else if (type == Integer.class) { + out.writeByte((byte) 2); + out.writeInt((Integer) field); + } else if (type == Long.class) { + out.writeByte((byte) 3); + out.writeLong((Long) field); + } else if (type == Float.class) { + out.writeByte((byte) 4); + out.writeFloat((Float) field); + } else if (type == Double.class) { + out.writeByte((byte) 5); + out.writeDouble((Double) field); + } else if (type == Byte.class) { + out.writeByte((byte) 6); + out.writeByte((Byte) field); + } else if (type == Short.class) { + out.writeByte((byte) 7); + out.writeShort((Short) field); + } else if (type == Boolean.class) { + out.writeByte((byte) 8); + out.writeBoolean((Boolean) field); + } else if (type == BytesRef.class) { + out.writeByte((byte) 9); + out.writeBytesRef((BytesRef) field); + } else { + throw new IOException("Can't handle sort field value of type [" + type + "]"); + } + } + } + public static void writeFieldDoc(StreamOutput out, FieldDoc fieldDoc) throws IOException { out.writeVInt(fieldDoc.fields.length); for (Object field : fieldDoc.fields) { - if (field == null) { - out.writeByte((byte) 0); - } else { - Class type = field.getClass(); - if (type == String.class) { - out.writeByte((byte) 1); - out.writeString((String) field); - } else if (type == Integer.class) { - out.writeByte((byte) 2); - out.writeInt((Integer) field); - } else if (type == Long.class) { - out.writeByte((byte) 3); - out.writeLong((Long) field); - } else if (type == Float.class) { - out.writeByte((byte) 4); - out.writeFloat((Float) field); - } else if (type == Double.class) { - out.writeByte((byte) 5); - out.writeDouble((Double) field); - } else if (type == Byte.class) { - out.writeByte((byte) 6); - out.writeByte((Byte) field); - } else if (type == Short.class) { - out.writeByte((byte) 7); - out.writeShort((Short) field); - } else if (type == Boolean.class) { - out.writeByte((byte) 8); - out.writeBoolean((Boolean) field); - } else if (type == BytesRef.class) { - out.writeByte((byte) 9); - out.writeBytesRef((BytesRef) field); - } else { - throw new IOException("Can't handle sort field value of type [" + type + "]"); - } - } + writeSortValue(out, field); } out.writeVInt(fieldDoc.doc); out.writeFloat(fieldDoc.score); @@ -477,10 +532,68 @@ public static SortField.Type readSortType(StreamInput in) throws IOException { return 
SortField.Type.values()[in.readVInt()]; } + public static SortField readSortField(StreamInput in) throws IOException { + String field = null; + if (in.readBoolean()) { + field = in.readString(); + } + SortField.Type sortType = readSortType(in); + Object missingValue = readMissingValue(in); + boolean reverse = in.readBoolean(); + SortField sortField = new SortField(field, sortType, reverse); + if (missingValue != null) { + sortField.setMissingValue(missingValue); + } + return sortField; + } + public static void writeSortType(StreamOutput out, SortField.Type sortType) throws IOException { out.writeVInt(sortType.ordinal()); } + public static void writeSortField(StreamOutput out, SortField sortField) throws IOException { + if (sortField.getClass() == GEO_DISTANCE_SORT_TYPE_CLASS) { + // for geo sorting, we replace the SortField with a SortField that assumes a double field. + // this works since the SortField is only used for merging top docs + SortField newSortField = new SortField(sortField.getField(), SortField.Type.DOUBLE); + newSortField.setMissingValue(sortField.getMissingValue()); + sortField = newSortField; + } else if (sortField.getClass() == SortedSetSortField.class) { + // for multi-valued sort field, we replace the SortedSetSortField with a simple SortField. + // It works because the sort field is only used to merge results from different shards. + SortField newSortField = new SortField(sortField.getField(), SortField.Type.STRING, sortField.getReverse()); + newSortField.setMissingValue(sortField.getMissingValue()); + sortField = newSortField; + } else if (sortField.getClass() == SortedNumericSortField.class) { + // for multi-valued sort field, we replace the SortedSetSortField with a simple SortField. + // It works because the sort field is only used to merge results from different shards. + SortField newSortField = new SortField(sortField.getField(), + ((SortedNumericSortField) sortField).getNumericType(), + sortField.getReverse()); + newSortField.setMissingValue(sortField.getMissingValue()); + sortField = newSortField; + } + + if (sortField.getClass() != SortField.class) { + throw new IllegalArgumentException("Cannot serialize SortField impl [" + sortField + "]"); + } + if (sortField.getField() == null) { + out.writeBoolean(false); + } else { + out.writeBoolean(true); + out.writeString(sortField.getField()); + } + if (sortField.getComparatorSource() != null) { + IndexFieldData.XFieldComparatorSource comparatorSource = (IndexFieldData.XFieldComparatorSource) sortField.getComparatorSource(); + writeSortType(out, comparatorSource.reducedType()); + writeMissingValue(out, comparatorSource.missingValue(sortField.getReverse())); + } else { + writeSortType(out, sortField.getType()); + writeMissingValue(out, sortField.getMissingValue()); + } + out.writeBoolean(sortField.getReverse()); + } + public static Explanation readExplanation(StreamInput in) throws IOException { boolean match = in.readBoolean(); String description = in.readString(); @@ -729,14 +842,16 @@ public void delete() { } /** - * Given a {@link Scorer}, return a {@link Bits} instance that will match + * Given a {@link ScorerSupplier}, return a {@link Bits} instance that will match * all documents contained in the set. Note that the returned {@link Bits} * instance MUST be consumed in order. 
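The readers and writers above all share one wire convention: a leading type byte selects the payload layout (0, 1 and 2 for the three TopDocs variants, 0 through 9 for individual sort values), and an unknown tag fails fast. A minimal sketch of that tagging scheme using plain java.io streams instead of Elasticsearch's StreamInput/StreamOutput, covering only a few types for brevity:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

final class TaggedValues {

    // Write a value preceded by a type byte, mirroring the tag-then-payload layout above.
    static void write(DataOutput out, Object value) throws IOException {
        if (value == null) {
            out.writeByte(0);
        } else if (value instanceof String) {
            out.writeByte(1);
            out.writeUTF((String) value);
        } else if (value instanceof Integer) {
            out.writeByte(2);
            out.writeInt((Integer) value);
        } else if (value instanceof Long) {
            out.writeByte(3);
            out.writeLong((Long) value);
        } else {
            throw new IOException("Can't handle value of type [" + value.getClass() + "]");
        }
    }

    // Read the type byte first, then dispatch on it; an unknown tag is an error.
    static Object read(DataInput in) throws IOException {
        byte type = in.readByte();
        switch (type) {
            case 0: return null;
            case 1: return in.readUTF();
            case 2: return in.readInt();
            case 3: return in.readLong();
            default: throw new IOException("Can't match type [" + type + "]");
        }
    }
}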
*/ - public static Bits asSequentialAccessBits(final int maxDoc, @Nullable Scorer scorer) throws IOException { - if (scorer == null) { + public static Bits asSequentialAccessBits(final int maxDoc, @Nullable ScorerSupplier scorerSupplier) throws IOException { + if (scorerSupplier == null) { return new Bits.MatchNoBits(maxDoc); } + // Since we want bits, we need random-access + final Scorer scorer = scorerSupplier.get(true); // this never returns null final TwoPhaseIterator twoPhase = scorer.twoPhaseIterator(); final DocIdSetIterator iterator; if (twoPhase == null) { diff --git a/core/src/main/java/org/elasticsearch/common/lucene/ShardCoreKeyMap.java b/core/src/main/java/org/elasticsearch/common/lucene/ShardCoreKeyMap.java index 146fb7ba05ec4..b237a746bf043 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/ShardCoreKeyMap.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/ShardCoreKeyMap.java @@ -21,6 +21,7 @@ import org.apache.lucene.index.LeafReader; import org.apache.lucene.index.LeafReader.CoreClosedListener; +import org.elasticsearch.Assertions; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.shard.ShardUtils; @@ -29,9 +30,9 @@ import java.util.Collections; import java.util.HashMap; import java.util.HashSet; -import java.util.IdentityHashMap; import java.util.Map; import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; /** * A map between segment core cache keys and the shard that these segments @@ -50,7 +51,7 @@ public final class ShardCoreKeyMap { private final Map> indexToCoreKey; public ShardCoreKeyMap() { - coreKeyToShard = new IdentityHashMap<>(); + coreKeyToShard = new ConcurrentHashMap<>(); indexToCoreKey = new HashMap<>(); } @@ -64,9 +65,17 @@ public void add(LeafReader reader) { throw new IllegalArgumentException("Could not extract shard id from " + reader); } final Object coreKey = reader.getCoreCacheKey(); + + if (coreKeyToShard.containsKey(coreKey)) { + // Do this check before entering the synchronized block in order to + // avoid taking the mutex if possible (which should happen most of + // the time). + return; + } + final String index = shardId.getIndexName(); synchronized (this) { - if (coreKeyToShard.put(coreKey, shardId) == null) { + if (coreKeyToShard.containsKey(coreKey) == false) { Set objects = indexToCoreKey.get(index); if (objects == null) { objects = new HashSet<>(); @@ -90,6 +99,14 @@ public void add(LeafReader reader) { try { reader.addCoreClosedListener(listener); addedListener = true; + + // Only add the core key to the map as a last operation so that + // if another thread sees that the core key is already in the + // map (like the check just before this synchronized block), + // then it means that the closed listener has already been + // registered. 
+ ShardId previous = coreKeyToShard.put(coreKey, shardId); + assert previous == null; } finally { if (false == addedListener) { try { @@ -132,10 +149,7 @@ public synchronized int size() { } private synchronized boolean assertSize() { - // this is heavy and should only used in assertions - boolean assertionsEnabled = false; - assert assertionsEnabled = true; - if (assertionsEnabled == false) { + if (!Assertions.ENABLED) { throw new AssertionError("only run this if assertions are enabled"); } Collection> values = indexToCoreKey.values(); diff --git a/core/src/main/java/org/elasticsearch/common/lucene/all/AllTermQuery.java b/core/src/main/java/org/elasticsearch/common/lucene/all/AllTermQuery.java index 500fa206c9602..5307a417e107d 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/all/AllTermQuery.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/all/AllTermQuery.java @@ -32,7 +32,6 @@ import org.apache.lucene.search.DocIdSetIterator; import org.apache.lucene.search.Explanation; import org.apache.lucene.search.IndexSearcher; -import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.Scorer; import org.apache.lucene.search.TermQuery; @@ -64,6 +63,10 @@ public AllTermQuery(Term term) { this.term = term; } + public Term getTerm() { + return term; + } + @Override public boolean equals(Object obj) { if (sameClassAs(obj) == false) { @@ -83,21 +86,18 @@ public Query rewrite(IndexReader reader) throws IOException { if (rewritten != this) { return rewritten; } - boolean fieldExists = false; boolean hasPayloads = false; for (LeafReaderContext context : reader.leaves()) { final Terms terms = context.reader().terms(term.field()); if (terms != null) { - fieldExists = true; if (terms.hasPayloads()) { hasPayloads = true; break; } } } - if (fieldExists == false) { - return new MatchNoDocsQuery(); - } + // if the terms does not exist we could return a MatchNoDocsQuery but this would break the unified highlighter + // which rewrites query with an empty reader. 
if (hasPayloads == false) { return new TermQuery(term); } diff --git a/core/src/main/java/org/elasticsearch/common/lucene/search/FilteredCollector.java b/core/src/main/java/org/elasticsearch/common/lucene/search/FilteredCollector.java index 5bb9223504488..4c95d19f14367 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/search/FilteredCollector.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/search/FilteredCollector.java @@ -22,7 +22,7 @@ import org.apache.lucene.search.Collector; import org.apache.lucene.search.FilterLeafCollector; import org.apache.lucene.search.LeafCollector; -import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.ScorerSupplier; import org.apache.lucene.search.Weight; import org.apache.lucene.util.Bits; import org.elasticsearch.common.lucene.Lucene; @@ -44,9 +44,9 @@ public FilteredCollector(Collector collector, Weight filter) { @Override public LeafCollector getLeafCollector(LeafReaderContext context) throws IOException { - final Scorer filterScorer = filter.scorer(context); + final ScorerSupplier filterScorerSupplier = filter.scorerSupplier(context); final LeafCollector in = collector.getLeafCollector(context); - final Bits bits = Lucene.asSequentialAccessBits(context.reader().maxDoc(), filterScorer); + final Bits bits = Lucene.asSequentialAccessBits(context.reader().maxDoc(), filterScorerSupplier); return new FilterLeafCollector(in) { @Override diff --git a/core/src/main/java/org/elasticsearch/common/lucene/search/MatchNoDocsQuery.java b/core/src/main/java/org/elasticsearch/common/lucene/search/MatchNoDocsQuery.java deleted file mode 100644 index 9caf350926c56..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/lucene/search/MatchNoDocsQuery.java +++ /dev/null @@ -1,79 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.common.lucene.search; - -import org.apache.lucene.index.LeafReaderContext; -import org.apache.lucene.index.Term; -import org.apache.lucene.search.Query; -import org.apache.lucene.search.Weight; -import org.apache.lucene.search.ConstantScoreWeight; -import org.apache.lucene.search.IndexSearcher; -import org.apache.lucene.search.Explanation; -import org.apache.lucene.search.Scorer; - -import java.io.IOException; -import java.util.Set; - -/** - * A query that matches no documents and prints the reason why in the toString method. - */ -public class MatchNoDocsQuery extends Query { - /** - * The reason why the query does not match any document. 
- */ - private final String reason; - - public MatchNoDocsQuery(String reason) { - this.reason = reason; - } - - public Weight createWeight(IndexSearcher searcher, boolean needsScores) throws IOException { - return new ConstantScoreWeight(this) { - @Override - public void extractTerms(Set terms) { - } - - @Override - public Explanation explain(LeafReaderContext context, int doc) throws IOException { - return Explanation.noMatch(reason); - } - - @Override - public Scorer scorer(LeafReaderContext context) throws IOException { - return null; - } - }; - } - - @Override - public String toString(String field) { - return "MatchNoDocsQuery[\"" + reason + "\"]"; - } - - @Override - public boolean equals(Object obj) { - return sameClassAs(obj); - } - - @Override - public int hashCode() { - return classHash(); - } -} diff --git a/core/src/main/java/org/elasticsearch/common/lucene/search/MoreLikeThisQuery.java b/core/src/main/java/org/elasticsearch/common/lucene/search/MoreLikeThisQuery.java index 06ab2b4a53012..73a290463574c 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/search/MoreLikeThisQuery.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/search/MoreLikeThisQuery.java @@ -227,10 +227,6 @@ public String[] getLikeTexts() { return likeText; } - public void setLikeText(String likeText) { - setLikeText(new String[]{likeText}); - } - public void setLikeText(String... likeText) { this.likeText = likeText; } @@ -239,7 +235,7 @@ public Fields[] getLikeFields() { return likeFields; } - public void setLikeText(Fields... likeFields) { + public void setLikeFields(Fields... likeFields) { this.likeFields = likeFields; } @@ -247,7 +243,7 @@ public void setLikeText(List likeText) { setLikeText(likeText.toArray(Strings.EMPTY_ARRAY)); } - public void setUnlikeText(Fields... unlikeFields) { + public void setUnlikeFields(Fields... unlikeFields) { this.unlikeFields = unlikeFields; } diff --git a/core/src/main/java/org/elasticsearch/common/lucene/search/MultiPhrasePrefixQuery.java b/core/src/main/java/org/elasticsearch/common/lucene/search/MultiPhrasePrefixQuery.java index 87bfdacb1c760..b8e1039b2df1d 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/search/MultiPhrasePrefixQuery.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/search/MultiPhrasePrefixQuery.java @@ -25,6 +25,8 @@ import org.apache.lucene.index.Term; import org.apache.lucene.index.Terms; import org.apache.lucene.index.TermsEnum; +import org.apache.lucene.search.BooleanClause; +import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.MultiPhraseQuery; import org.apache.lucene.search.Query; @@ -115,6 +117,18 @@ public void add(Term[] terms, int position) { positions.add(position); } + /** + * Returns the terms for each position in this phrase + */ + public Term[][] getTerms() { + Term[][] terms = new Term[termArrays.size()][]; + for (int i = 0; i < termArrays.size(); i++) { + terms[i] = new Term[termArrays.get(i).length]; + System.arraycopy(termArrays.get(i), 0, terms[i], 0, termArrays.get(i).length); + } + return terms; + } + /** * Returns the relative positions of terms in this phrase. 
*/ @@ -150,7 +164,17 @@ public Query rewrite(IndexReader reader) throws IOException { } } if (terms.isEmpty()) { - return Queries.newMatchNoDocsQuery("No terms supplied for " + MultiPhrasePrefixQuery.class.getName()); + if (sizeMinus1 == 0) { + // no prefix and the phrase query is empty + return Queries.newMatchNoDocsQuery("No terms supplied for " + MultiPhrasePrefixQuery.class.getName()); + } + + // if the terms does not exist we could return a MatchNoDocsQuery but this would break the unified highlighter + // which rewrites query with an empty reader. + return new BooleanQuery.Builder() + .add(query.build(), BooleanClause.Occur.MUST) + .add(Queries.newMatchNoDocsQuery("No terms supplied for " + MultiPhrasePrefixQuery.class.getName()), + BooleanClause.Occur.MUST).build(); } query.add(terms.toArray(Term.class), position); return query.build(); diff --git a/core/src/main/java/org/elasticsearch/common/lucene/search/Queries.java b/core/src/main/java/org/elasticsearch/common/lucene/search/Queries.java index 2a3fd94e914d3..11d181076aeca 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/search/Queries.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/search/Queries.java @@ -20,11 +20,13 @@ package org.elasticsearch.common.lucene.search; import org.apache.lucene.index.Term; +import org.apache.lucene.queries.ExtendedCommonTermsQuery; import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.BooleanClause.Occur; import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.MatchAllDocsQuery; +import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.PrefixQuery; import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; @@ -53,7 +55,11 @@ public static Query newNestedFilter() { } public static Query newNonNestedFilter() { - return not(newNestedFilter()); + // TODO: this is slow, make it a positive query + return new BooleanQuery.Builder() + .add(new MatchAllDocsQuery(), Occur.FILTER) + .add(newNestedFilter(), Occur.MUST_NOT) + .build(); } public static BooleanQuery filtered(@Nullable Query query, @Nullable Query filter) { @@ -137,6 +143,22 @@ public static Query applyMinimumShouldMatch(BooleanQuery query, @Nullable String } } + /** + * Potentially apply minimum should match value if we have a query that it can be applied to, + * otherwise return the original query. + */ + public static Query maybeApplyMinimumShouldMatch(Query query, @Nullable String minimumShouldMatch) { + // If the coordination factor is disabled on a boolean query we don't apply the minimum should match. + // This is done to make sure that the minimum_should_match doesn't get applied when there is only one word + // and multiple variations of the same word in the query (synonyms for instance). 
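maybeApplyMinimumShouldMatch above hands eligible boolean queries to applyMinimumShouldMatch; the underlying Lucene knob is BooleanQuery.Builder#setMinimumNumberShouldMatch. A small illustration with made-up field and terms, requiring two of three optional clauses to match:

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

final class MinimumShouldMatchExample {

    static Query twoOfThree() {
        return new BooleanQuery.Builder()
            .add(new TermQuery(new Term("body", "quick")), Occur.SHOULD)
            .add(new TermQuery(new Term("body", "brown")), Occur.SHOULD)
            .add(new TermQuery(new Term("body", "fox")), Occur.SHOULD)
            .setMinimumNumberShouldMatch(2) // at least two SHOULD clauses must match
            .build();
    }
}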
+ if (query instanceof BooleanQuery && !((BooleanQuery) query).isCoordDisabled()) { + return applyMinimumShouldMatch((BooleanQuery) query, minimumShouldMatch); + } else if (query instanceof ExtendedCommonTermsQuery) { + ((ExtendedCommonTermsQuery)query).setLowFreqMinimumNumberShouldMatch(minimumShouldMatch); + } + return query; + } + private static Pattern spaceAroundLessThanPattern = Pattern.compile("(\\s+<\\s*)|(\\s*<\\s+)"); private static Pattern spacePattern = Pattern.compile(" "); private static Pattern lessThanPattern = Pattern.compile("<"); diff --git a/core/src/main/java/org/elasticsearch/common/lucene/search/function/CombineFunction.java b/core/src/main/java/org/elasticsearch/common/lucene/search/function/CombineFunction.java index e8bd288799810..399f3d7a2e613 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/search/function/CombineFunction.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/search/function/CombineFunction.java @@ -31,7 +31,7 @@ public enum CombineFunction implements Writeable { MULTIPLY { @Override public float combine(double queryScore, double funcScore, double maxBoost) { - return toFloat(queryScore * Math.min(funcScore, maxBoost)); + return (float) (queryScore * Math.min(funcScore, maxBoost)); } @Override @@ -48,7 +48,7 @@ public Explanation explain(Explanation queryExpl, Explanation funcExpl, float ma REPLACE { @Override public float combine(double queryScore, double funcScore, double maxBoost) { - return toFloat(Math.min(funcScore, maxBoost)); + return (float) (Math.min(funcScore, maxBoost)); } @Override @@ -64,7 +64,7 @@ public Explanation explain(Explanation queryExpl, Explanation funcExpl, float ma SUM { @Override public float combine(double queryScore, double funcScore, double maxBoost) { - return toFloat(queryScore + Math.min(funcScore, maxBoost)); + return (float) (queryScore + Math.min(funcScore, maxBoost)); } @Override @@ -79,7 +79,7 @@ public Explanation explain(Explanation queryExpl, Explanation funcExpl, float ma AVG { @Override public float combine(double queryScore, double funcScore, double maxBoost) { - return toFloat((Math.min(funcScore, maxBoost) + queryScore) / 2.0); + return (float) ((Math.min(funcScore, maxBoost) + queryScore) / 2.0); } @Override @@ -87,7 +87,7 @@ public Explanation explain(Explanation queryExpl, Explanation funcExpl, float ma Explanation minExpl = Explanation.match(Math.min(funcExpl.getValue(), maxBoost), "min of:", funcExpl, Explanation.match(maxBoost, "maxBoost")); return Explanation.match( - toFloat((Math.min(funcExpl.getValue(), maxBoost) + queryExpl.getValue()) / 2.0), "avg of", + (float) ((Math.min(funcExpl.getValue(), maxBoost) + queryExpl.getValue()) / 2.0), "avg of", queryExpl, minExpl); } @@ -95,7 +95,7 @@ public Explanation explain(Explanation queryExpl, Explanation funcExpl, float ma MIN { @Override public float combine(double queryScore, double funcScore, double maxBoost) { - return toFloat(Math.min(queryScore, Math.min(funcScore, maxBoost))); + return (float) (Math.min(queryScore, Math.min(funcScore, maxBoost))); } @Override @@ -112,7 +112,7 @@ public Explanation explain(Explanation queryExpl, Explanation funcExpl, float ma MAX { @Override public float combine(double queryScore, double funcScore, double maxBoost) { - return toFloat(Math.max(queryScore, Math.min(funcScore, maxBoost))); + return (float) (Math.max(queryScore, Math.min(funcScore, maxBoost))); } @Override @@ -129,29 +129,15 @@ public Explanation explain(Explanation queryExpl, Explanation funcExpl, float ma public abstract 
float combine(double queryScore, double funcScore, double maxBoost); - public static float toFloat(double input) { - assert deviation(input) <= 0.001 : "input " + input + " out of float scope for function score deviation: " + deviation(input); - return (float) input; - } - - private static double deviation(double input) { // only with assert! - float floatVersion = (float) input; - return Double.compare(floatVersion, input) == 0 || input == 0.0d ? 0 : 1.d - (floatVersion) / input; - } - public abstract Explanation explain(Explanation queryExpl, Explanation funcExpl, float maxBoost); @Override public void writeTo(StreamOutput out) throws IOException { - out.writeVInt(this.ordinal()); + out.writeEnum(this); } public static CombineFunction readFromStream(StreamInput in) throws IOException { - int ordinal = in.readVInt(); - if (ordinal < 0 || ordinal >= values().length) { - throw new IOException("Unknown CombineFunction ordinal [" + ordinal + "]"); - } - return values()[ordinal]; + return in.readEnum(CombineFunction.class); } public static CombineFunction fromString(String combineFunction) { diff --git a/core/src/main/java/org/elasticsearch/common/lucene/search/function/FieldValueFactorFunction.java b/core/src/main/java/org/elasticsearch/common/lucene/search/function/FieldValueFactorFunction.java index 3bc5542c2aa13..42047d664f3b6 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/search/function/FieldValueFactorFunction.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/search/function/FieldValueFactorFunction.java @@ -96,7 +96,7 @@ public Explanation explainScore(int docId, Explanation subQueryScore) { String defaultStr = missing != null ? "?:" + missing : ""; double score = score(docId, subQueryScore.getValue()); return Explanation.match( - CombineFunction.toFloat(score), + (float) score, String.format(Locale.ROOT, "field value function: %s(doc['%s'].value%s * factor=%s)", modifierStr, field, defaultStr, boostFactor)); } @@ -191,15 +191,11 @@ public double apply(double n) { @Override public void writeTo(StreamOutput out) throws IOException { - out.writeVInt(this.ordinal()); + out.writeEnum(this); } public static Modifier readFromStream(StreamInput in) throws IOException { - int ordinal = in.readVInt(); - if (ordinal < 0 || ordinal >= values().length) { - throw new IOException("Unknown Modifier ordinal [" + ordinal + "]"); - } - return values()[ordinal]; + return in.readEnum(Modifier.class); } @Override diff --git a/core/src/main/java/org/elasticsearch/common/lucene/search/function/FiltersFunctionScoreQuery.java b/core/src/main/java/org/elasticsearch/common/lucene/search/function/FiltersFunctionScoreQuery.java index fd7c8f6c49df1..cf12d574fd625 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/search/function/FiltersFunctionScoreQuery.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/search/function/FiltersFunctionScoreQuery.java @@ -27,6 +27,7 @@ import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.Query; import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.ScorerSupplier; import org.apache.lucene.search.Weight; import org.apache.lucene.util.Bits; import org.elasticsearch.common.io.stream.StreamInput; @@ -37,7 +38,6 @@ import java.io.IOException; import java.util.ArrayList; import java.util.Arrays; -import java.util.Collection; import java.util.Collections; import java.util.List; import java.util.Locale; @@ -82,15 +82,11 @@ public enum ScoreMode implements Writeable { @Override public void 
writeTo(StreamOutput out) throws IOException { - out.writeVInt(this.ordinal()); + out.writeEnum(this); } public static ScoreMode readFromStream(StreamInput in) throws IOException { - int ordinal = in.readVInt(); - if (ordinal < 0 || ordinal >= values().length) { - throw new IOException("Unknown ScoreMode ordinal [" + ordinal + "]"); - } - return values()[ordinal]; + return in.readEnum(ScoreMode.class); } public static ScoreMode fromString(String scoreMode) { @@ -157,7 +153,7 @@ class CustomBoostFactorWeight extends Weight { final Weight[] filterWeights; final boolean needsScores; - public CustomBoostFactorWeight(Query parent, Weight subQueryWeight, Weight[] filterWeights, boolean needsScores) throws IOException { + CustomBoostFactorWeight(Query parent, Weight subQueryWeight, Weight[] filterWeights, boolean needsScores) throws IOException { super(parent); this.subQueryWeight = subQueryWeight; this.filterWeights = filterWeights; @@ -189,8 +185,8 @@ private FiltersFunctionFactorScorer functionScorer(LeafReaderContext context) th for (int i = 0; i < filterFunctions.length; i++) { FilterFunction filterFunction = filterFunctions[i]; functions[i] = filterFunction.function.getLeafScoreFunction(context); - Scorer filterScorer = filterWeights[i].scorer(context); - docSets[i] = Lucene.asSequentialAccessBits(context.reader().maxDoc(), filterScorer); + ScorerSupplier filterScorerSupplier = filterWeights[i].scorerSupplier(context); + docSets[i] = Lucene.asSequentialAccessBits(context.reader().maxDoc(), filterScorerSupplier); } return new FiltersFunctionFactorScorer(this, subQueryScorer, scoreMode, filterFunctions, maxBoost, functions, docSets, combineFunction, needsScores); } @@ -215,12 +211,12 @@ public Explanation explain(LeafReaderContext context, int doc) throws IOExceptio List filterExplanations = new ArrayList<>(); for (int i = 0; i < filterFunctions.length; ++i) { Bits docSet = Lucene.asSequentialAccessBits(context.reader().maxDoc(), - filterWeights[i].scorer(context)); + filterWeights[i].scorerSupplier(context)); if (docSet.get(doc)) { FilterFunction filterFunction = filterFunctions[i]; Explanation functionExplanation = filterFunction.function.getLeafScoreFunction(context).explainScore(doc, expl); double factor = functionExplanation.getValue(); - float sc = CombineFunction.toFloat(factor); + float sc = (float) factor; Explanation filterExplanation = Explanation.match(sc, "function score, product of:", Explanation.match(1.0f, "match filter: " + filterFunction.filter.toString()), functionExplanation); filterExplanations.add(filterExplanation); @@ -233,7 +229,7 @@ public Explanation explain(LeafReaderContext context, int doc) throws IOExceptio Explanation factorExplanation; if (filterExplanations.size() > 0) { factorExplanation = Explanation.match( - CombineFunction.toFloat(score), + (float) score, "function score, score mode [" + scoreMode.toString().toLowerCase(Locale.ROOT) + "]", filterExplanations); diff --git a/core/src/main/java/org/elasticsearch/common/lucene/search/function/FunctionScoreQuery.java b/core/src/main/java/org/elasticsearch/common/lucene/search/function/FunctionScoreQuery.java index 5e94e82021f81..61de1ab303f24 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/search/function/FunctionScoreQuery.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/search/function/FunctionScoreQuery.java @@ -109,7 +109,7 @@ class CustomBoostFactorWeight extends Weight { final Weight subQueryWeight; final boolean needsScores; - public CustomBoostFactorWeight(Query parent, 
Weight subQueryWeight, boolean needsScores) throws IOException { + CustomBoostFactorWeight(Query parent, Weight subQueryWeight, boolean needsScores) throws IOException { super(parent); this.subQueryWeight = subQueryWeight; this.needsScores = needsScores; diff --git a/core/src/main/java/org/elasticsearch/common/lucene/search/function/RandomScoreFunction.java b/core/src/main/java/org/elasticsearch/common/lucene/search/function/RandomScoreFunction.java index 3810c16bc0ed4..5ad7232d6ad1b 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/search/function/RandomScoreFunction.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/search/function/RandomScoreFunction.java @@ -77,7 +77,7 @@ public double score(int docId, float subQueryScore) { @Override public Explanation explainScore(int docId, Explanation subQueryScore) { return Explanation.match( - CombineFunction.toFloat(score(docId, subQueryScore.getValue())), + (float) score(docId, subQueryScore.getValue()), "random score function (seed: " + originalSeed + ")"); } }; diff --git a/core/src/main/java/org/elasticsearch/common/lucene/search/function/ScriptScoreFunction.java b/core/src/main/java/org/elasticsearch/common/lucene/search/function/ScriptScoreFunction.java index 47b8525073551..5f1901c368714 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/search/function/ScriptScoreFunction.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/search/function/ScriptScoreFunction.java @@ -38,7 +38,7 @@ static final class CannedScorer extends Scorer { protected int docid; protected float score; - public CannedScorer() { + CannedScorer() { super(null); } @@ -110,7 +110,7 @@ public Explanation explainScore(int docId, Explanation subQueryScore) throws IOE subQueryScore.getValue(), "_score: ", subQueryScore); return Explanation.match( - CombineFunction.toFloat(score), explanation, + (float) score, explanation, scoreExp); } return exp; diff --git a/core/src/main/java/org/elasticsearch/common/lucene/search/function/WeightFactorFunction.java b/core/src/main/java/org/elasticsearch/common/lucene/search/function/WeightFactorFunction.java index cadb3f55df493..2ed20c9598703 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/search/function/WeightFactorFunction.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/search/function/WeightFactorFunction.java @@ -71,7 +71,7 @@ public Explanation explainScore(int docId, Explanation subQueryScore) throws IOE @Override public boolean needsScores() { - return false; + return scoreFunction.needsScores(); } public Explanation explainWeight() { diff --git a/core/src/main/java/org/elasticsearch/common/lucene/uid/PerThreadIDAndVersionLookup.java b/core/src/main/java/org/elasticsearch/common/lucene/uid/PerThreadIDAndVersionLookup.java index 67f06c4f8d05b..fd61829f15311 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/uid/PerThreadIDAndVersionLookup.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/uid/PerThreadIDAndVersionLookup.java @@ -29,8 +29,7 @@ import org.apache.lucene.search.DocIdSetIterator; import org.apache.lucene.util.Bits; import org.apache.lucene.util.BytesRef; -import org.elasticsearch.common.lucene.uid.Versions.DocIdAndVersion; -import org.elasticsearch.index.mapper.UidFieldMapper; +import org.elasticsearch.common.lucene.uid.VersionsResolver.DocIdAndVersion; import org.elasticsearch.index.mapper.VersionFieldMapper; import java.io.IOException; @@ -48,52 +47,71 @@ final class PerThreadIDAndVersionLookup { // we keep it around for 
now, to reduce the amount of e.g. hash lookups by field and stuff /** terms enum for uid field */ + final String uidField; private final TermsEnum termsEnum; /** _version data */ private final NumericDocValues versions; + /** Reused for iteration (when the term exists) */ private PostingsEnum docsEnum; + /** used for assertions to make sure class usage meets assumptions */ + private final Object readerKey; + /** * Initialize lookup for the provided segment */ - public PerThreadIDAndVersionLookup(LeafReader reader) throws IOException { - TermsEnum termsEnum = null; - NumericDocValues versions = null; - + PerThreadIDAndVersionLookup(LeafReader reader, String uidField) throws IOException { + this.uidField = uidField; Fields fields = reader.fields(); - if (fields != null) { - Terms terms = fields.terms(UidFieldMapper.NAME); - if (terms != null) { - termsEnum = terms.iterator(); - assert termsEnum != null; - versions = reader.getNumericDocValues(VersionFieldMapper.NAME); - assert versions != null; - } + Terms terms = fields.terms(uidField); + if (terms == null) { + throw new IllegalArgumentException("reader misses the [" + uidField + + "] field"); } - - this.versions = versions; - this.termsEnum = termsEnum; + termsEnum = terms.iterator(); + versions = reader.getNumericDocValues(VersionFieldMapper.NAME); + if (versions == null) { + throw new IllegalArgumentException("reader misses the [" + VersionFieldMapper.NAME + + "] field"); + } + Object readerKey = null; + assert (readerKey = reader.getCoreCacheKey()) != null; + this.readerKey = readerKey; } /** Return null if id is not found. */ - public DocIdAndVersion lookup(BytesRef id, Bits liveDocs, LeafReaderContext context) throws IOException { + public DocIdAndVersion lookupVersion(BytesRef id, Bits liveDocs, LeafReaderContext context) + throws IOException { + assert context.reader().getCoreCacheKey().equals(readerKey) : + "context's reader is not the same as the reader class was initialized on."; + int docID = getDocID(id, liveDocs); + + if (docID != DocIdSetIterator.NO_MORE_DOCS) { + return new DocIdAndVersion(docID, versions.get(docID), context); + } else { + return null; + } + } + + /** + * returns the internal lucene doc id for the given id bytes. 
+ * {@link DocIdSetIterator#NO_MORE_DOCS} is returned if not found + * */ + private int getDocID(BytesRef id, Bits liveDocs) throws IOException { if (termsEnum.seekExact(id)) { + int docID = DocIdSetIterator.NO_MORE_DOCS; // there may be more than one matching docID, in the case of nested docs, so we want the last one: docsEnum = termsEnum.postings(docsEnum, 0); - int docID = DocIdSetIterator.NO_MORE_DOCS; for (int d = docsEnum.nextDoc(); d != DocIdSetIterator.NO_MORE_DOCS; d = docsEnum.nextDoc()) { if (liveDocs != null && liveDocs.get(d) == false) { continue; } docID = d; } - - if (docID != DocIdSetIterator.NO_MORE_DOCS) { - return new DocIdAndVersion(docID, versions.get(docID), context); - } + return docID; + } else { + return DocIdSetIterator.NO_MORE_DOCS; } - - return null; } } diff --git a/core/src/main/java/org/elasticsearch/common/lucene/uid/Versions.java b/core/src/main/java/org/elasticsearch/common/lucene/uid/Versions.java index 72dc9c8937317..e0997954d7932 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/uid/Versions.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/uid/Versions.java @@ -19,21 +19,8 @@ package org.elasticsearch.common.lucene.uid; -import org.apache.lucene.index.IndexReader; -import org.apache.lucene.index.LeafReader; -import org.apache.lucene.index.LeafReader.CoreClosedListener; -import org.apache.lucene.index.LeafReaderContext; -import org.apache.lucene.index.Term; -import org.apache.lucene.util.CloseableThreadLocal; -import org.elasticsearch.common.util.concurrent.ConcurrentCollections; -import org.elasticsearch.index.mapper.UidFieldMapper; - -import java.io.IOException; -import java.util.List; -import java.util.concurrent.ConcurrentMap; - /** Utility class to resolve the Lucene doc ID and version for a given uid. */ -public class Versions { +public final class Versions { /** used to indicate the write operation should succeed regardless of current version **/ public static final long MATCH_ANY = -3L; @@ -49,98 +36,4 @@ public class Versions { */ public static final long MATCH_DELETED = -4L; - // TODO: is there somewhere else we can store these? - static final ConcurrentMap> lookupStates = ConcurrentCollections.newConcurrentMapWithAggressiveConcurrency(); - - // Evict this reader from lookupStates once it's closed: - private static final CoreClosedListener removeLookupState = new CoreClosedListener() { - @Override - public void onClose(Object key) { - CloseableThreadLocal ctl = lookupStates.remove(key); - if (ctl != null) { - ctl.close(); - } - } - }; - - private static PerThreadIDAndVersionLookup getLookupState(LeafReader reader) throws IOException { - Object key = reader.getCoreCacheKey(); - CloseableThreadLocal ctl = lookupStates.get(key); - if (ctl == null) { - // First time we are seeing this reader's core; make a - // new CTL: - ctl = new CloseableThreadLocal<>(); - CloseableThreadLocal other = lookupStates.putIfAbsent(key, ctl); - if (other == null) { - // Our CTL won, we must remove it when the - // core is closed: - reader.addCoreClosedListener(removeLookupState); - } else { - // Another thread beat us to it: just use - // their CTL: - ctl = other; - } - } - - PerThreadIDAndVersionLookup lookupState = ctl.get(); - if (lookupState == null) { - lookupState = new PerThreadIDAndVersionLookup(reader); - ctl.set(lookupState); - } - - return lookupState; - } - - private Versions() { - } - - /** Wraps an {@link LeafReaderContext}, a doc ID relative to the context doc base and a version. 
*/ - public static class DocIdAndVersion { - public final int docId; - public final long version; - public final LeafReaderContext context; - - public DocIdAndVersion(int docId, long version, LeafReaderContext context) { - this.docId = docId; - this.version = version; - this.context = context; - } - } - - /** - * Load the internal doc ID and version for the uid from the reader, returning
<ul> - * <li>null if the uid wasn't found, - * <li>a doc ID and a version otherwise - * </ul>
    - */ - public static DocIdAndVersion loadDocIdAndVersion(IndexReader reader, Term term) throws IOException { - assert term.field().equals(UidFieldMapper.NAME); - List leaves = reader.leaves(); - if (leaves.isEmpty()) { - return null; - } - // iterate backwards to optimize for the frequently updated documents - // which are likely to be in the last segments - for (int i = leaves.size() - 1; i >= 0; i--) { - LeafReaderContext context = leaves.get(i); - LeafReader leaf = context.reader(); - PerThreadIDAndVersionLookup lookup = getLookupState(leaf); - DocIdAndVersion result = lookup.lookup(term.bytes(), leaf.getLiveDocs(), context); - if (result != null) { - return result; - } - } - return null; - } - - /** - * Load the version for the uid from the reader, returning
<ul> - * <li>{@link #NOT_FOUND} if no matching doc exists, - * <li>the version associated with the provided uid otherwise - * </ul>
    - */ - public static long loadVersion(IndexReader reader, Term term) throws IOException { - final DocIdAndVersion docIdAndVersion = loadDocIdAndVersion(reader, term); - return docIdAndVersion == null ? NOT_FOUND : docIdAndVersion.version; - } } diff --git a/core/src/main/java/org/elasticsearch/common/lucene/uid/VersionsResolver.java b/core/src/main/java/org/elasticsearch/common/lucene/uid/VersionsResolver.java new file mode 100644 index 0000000000000..bb82881ecd0e0 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/lucene/uid/VersionsResolver.java @@ -0,0 +1,140 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.lucene.uid; + +import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.LeafReader.CoreClosedListener; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.Term; +import org.apache.lucene.util.CloseableThreadLocal; +import org.elasticsearch.common.util.concurrent.ConcurrentCollections; + +import java.io.IOException; +import java.util.List; +import java.util.Objects; +import java.util.concurrent.ConcurrentMap; + +import static org.elasticsearch.common.lucene.uid.Versions.NOT_FOUND; + +/** Utility class to resolve the Lucene doc ID and version for a given uid. 
*/ +public class VersionsResolver { + + static final ConcurrentMap> + lookupStates = ConcurrentCollections.newConcurrentMapWithAggressiveConcurrency(); + + // Evict this reader from lookupStates once it's closed: + private static final CoreClosedListener removeLookupState = key -> { + CloseableThreadLocal ctl = lookupStates.remove(key); + if (ctl != null) { + ctl.close(); + } + }; + + private static PerThreadIDAndVersionLookup getLookupState(LeafReader reader, String uidField) + throws IOException { + Object key = reader.getCoreCacheKey(); + CloseableThreadLocal ctl = lookupStates.get(key); + if (ctl == null) { + // First time we are seeing this reader's core; make a + // new CTL: + ctl = new CloseableThreadLocal<>(); + CloseableThreadLocal other = + lookupStates.putIfAbsent(key, ctl); + if (other == null) { + // Our CTL won, we must remove it when the + // core is closed: + reader.addCoreClosedListener(removeLookupState); + } else { + // Another thread beat us to it: just use + // their CTL: + ctl = other; + } + } + + PerThreadIDAndVersionLookup lookupState = ctl.get(); + if (lookupState == null) { + lookupState = new PerThreadIDAndVersionLookup(reader, uidField); + ctl.set(lookupState); + } else if (Objects.equals(lookupState.uidField, uidField) == false) { + throw new AssertionError("Index does not consistently use the same uid field: [" + + uidField + "] != [" + lookupState.uidField + "]"); + } + + return lookupState; + } + + private VersionsResolver() { + } + + /** + * Wraps an {@link LeafReaderContext}, a doc ID relative to the context doc base and + * a version. + **/ + public static class DocIdAndVersion { + public final int docId; + public final long version; + public final LeafReaderContext context; + + public DocIdAndVersion(int docId, long version, LeafReaderContext context) { + this.docId = docId; + this.version = version; + this.context = context; + } + } + + /** + * Load the internal doc ID and version for the uid from the reader, returning
<ul> + * <li>null if the uid wasn't found, + * <li>a doc ID and a version otherwise + * </ul>
    + */ + public static DocIdAndVersion loadDocIdAndVersion(IndexReader reader, Term term) + throws IOException { + List leaves = reader.leaves(); + if (leaves.isEmpty()) { + return null; + } + // iterate backwards to optimize for the frequently updated documents + // which are likely to be in the last segments + for (int i = leaves.size() - 1; i >= 0; i--) { + LeafReaderContext context = leaves.get(i); + LeafReader leaf = context.reader(); + PerThreadIDAndVersionLookup lookup = getLookupState(leaf, term.field()); + DocIdAndVersion result = + lookup.lookupVersion(term.bytes(), leaf.getLiveDocs(), context); + if (result != null) { + return result; + } + } + return null; + } + + /** + * Load the version for the uid from the reader, returning
<ul> + * <li>{@link Versions#NOT_FOUND} if no matching doc exists, + * <li>the version associated with the provided uid otherwise + * </ul>
    + */ + public static long loadVersion(IndexReader reader, Term term) throws IOException { + final DocIdAndVersion docIdAndVersion = loadDocIdAndVersion(reader, term); + return docIdAndVersion == null ? NOT_FOUND : docIdAndVersion.version; + } +} diff --git a/core/src/main/java/org/elasticsearch/common/network/NetworkModule.java b/core/src/main/java/org/elasticsearch/common/network/NetworkModule.java index bb4a4bd3b3093..1c7e862e76dea 100644 --- a/core/src/main/java/org/elasticsearch/common/network/NetworkModule.java +++ b/core/src/main/java/org/elasticsearch/common/network/NetworkModule.java @@ -24,37 +24,46 @@ import org.elasticsearch.cluster.routing.allocation.command.AllocateReplicaAllocationCommand; import org.elasticsearch.cluster.routing.allocation.command.AllocateStalePrimaryAllocationCommand; import org.elasticsearch.cluster.routing.allocation.command.AllocationCommand; -import org.elasticsearch.cluster.routing.allocation.command.AllocationCommandRegistry; import org.elasticsearch.cluster.routing.allocation.command.CancelAllocationCommand; import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand; +import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.inject.AbstractModule; -import org.elasticsearch.common.inject.util.Providers; import org.elasticsearch.common.io.stream.NamedWriteableRegistry; -import org.elasticsearch.common.io.stream.NamedWriteableRegistry.Entry; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.util.ExtensionPoint; -import org.elasticsearch.http.HttpServer; +import org.elasticsearch.common.util.BigArrays; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.http.HttpServerTransport; +import org.elasticsearch.indices.breaker.CircuitBreakerService; +import org.elasticsearch.plugins.NetworkPlugin; import org.elasticsearch.tasks.RawTaskStatus; import org.elasticsearch.tasks.Task; +import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.Transport; -import org.elasticsearch.transport.TransportService; +import org.elasticsearch.transport.TransportInterceptor; +import org.elasticsearch.transport.TransportRequest; +import org.elasticsearch.transport.TransportRequestHandler; import org.elasticsearch.transport.local.LocalTransport; +import java.io.IOException; import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashMap; import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.function.Supplier; /** * A module to handle registering and binding all network related classes. 
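
For orientation, a minimal sketch of how a caller might use the relocated entry points on VersionsResolver. The directory handling, the "_uid" field name, and the wrapper class below are illustrative assumptions, not part of this change:

```java
import java.io.IOException;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;
import org.elasticsearch.common.lucene.uid.Versions;
import org.elasticsearch.common.lucene.uid.VersionsResolver;
import org.elasticsearch.common.lucene.uid.VersionsResolver.DocIdAndVersion;

// Illustrative helper (not part of the patch): resolve the stored _version for a uid.
class VersionLookupExample {
    static long versionFor(Directory dir, String uid) throws IOException {
        try (IndexReader reader = DirectoryReader.open(dir)) {
            // The resolver derives the uid field from the term, so the index is expected
            // to contain that field plus the _version doc values used by the lookup.
            DocIdAndVersion result = VersionsResolver.loadDocIdAndVersion(reader, new Term("_uid", uid));
            return result == null ? Versions.NOT_FOUND : result.version;
        }
    }
}
```
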
*/ -public class NetworkModule extends AbstractModule { +public final class NetworkModule { public static final String TRANSPORT_TYPE_KEY = "transport.type"; - public static final String TRANSPORT_SERVICE_TYPE_KEY = "transport.service.type"; public static final String HTTP_TYPE_KEY = "http.type"; public static final String LOCAL_TRANSPORT = "local"; public static final String HTTP_TYPE_DEFAULT_KEY = "http.type.default"; @@ -65,58 +74,89 @@ public class NetworkModule extends AbstractModule { public static final Setting HTTP_DEFAULT_TYPE_SETTING = Setting.simpleString(HTTP_TYPE_DEFAULT_KEY, Property.NodeScope); public static final Setting HTTP_TYPE_SETTING = Setting.simpleString(HTTP_TYPE_KEY, Property.NodeScope); public static final Setting HTTP_ENABLED = Setting.boolSetting("http.enabled", true, Property.NodeScope); - public static final Setting TRANSPORT_SERVICE_TYPE_SETTING = - Setting.simpleString(TRANSPORT_SERVICE_TYPE_KEY, Property.NodeScope); public static final Setting TRANSPORT_TYPE_SETTING = Setting.simpleString(TRANSPORT_TYPE_KEY, Property.NodeScope); - private final NetworkService networkService; private final Settings settings; private final boolean transportClient; - private final AllocationCommandRegistry allocationCommandRegistry = new AllocationCommandRegistry(); - private final ExtensionPoint.SelectedType transportServiceTypes = new ExtensionPoint.SelectedType<>("transport_service", TransportService.class); - private final ExtensionPoint.SelectedType transportTypes = new ExtensionPoint.SelectedType<>("transport", Transport.class); - private final ExtensionPoint.SelectedType httpTransportTypes = new ExtensionPoint.SelectedType<>("http_transport", HttpServerTransport.class); - private final List namedWriteables = new ArrayList<>(); + private static final List namedWriteables = new ArrayList<>(); + private static final List namedXContents = new ArrayList<>(); + + static { + registerAllocationCommand(CancelAllocationCommand::new, CancelAllocationCommand::fromXContent, + CancelAllocationCommand.COMMAND_NAME_FIELD); + registerAllocationCommand(MoveAllocationCommand::new, MoveAllocationCommand::fromXContent, + MoveAllocationCommand.COMMAND_NAME_FIELD); + registerAllocationCommand(AllocateReplicaAllocationCommand::new, AllocateReplicaAllocationCommand::fromXContent, + AllocateReplicaAllocationCommand.COMMAND_NAME_FIELD); + registerAllocationCommand(AllocateEmptyPrimaryAllocationCommand::new, AllocateEmptyPrimaryAllocationCommand::fromXContent, + AllocateEmptyPrimaryAllocationCommand.COMMAND_NAME_FIELD); + registerAllocationCommand(AllocateStalePrimaryAllocationCommand::new, AllocateStalePrimaryAllocationCommand::fromXContent, + AllocateStalePrimaryAllocationCommand.COMMAND_NAME_FIELD); + namedWriteables.add( + new NamedWriteableRegistry.Entry(Task.Status.class, ReplicationTask.Status.NAME, ReplicationTask.Status::new)); + namedWriteables.add( + new NamedWriteableRegistry.Entry(Task.Status.class, RawTaskStatus.NAME, RawTaskStatus::new)); + } + + private final Map> transportFactories = new HashMap<>(); + private final Map> transportHttpFactories = new HashMap<>(); + private final List transportIntercetors = new ArrayList<>(); /** * Creates a network module that custom networking classes can be plugged into. - * @param networkService A constructed network service object to bind. * @param settings The settings for the node * @param transportClient True if only transport classes should be allowed to be registered, false otherwise. 
*/ - public NetworkModule(NetworkService networkService, Settings settings, boolean transportClient) { - this.networkService = networkService; + public NetworkModule(Settings settings, boolean transportClient, List plugins, ThreadPool threadPool, + BigArrays bigArrays, + CircuitBreakerService circuitBreakerService, + NamedWriteableRegistry namedWriteableRegistry, + NamedXContentRegistry xContentRegistry, + NetworkService networkService, HttpServerTransport.Dispatcher dispatcher) { this.settings = settings; this.transportClient = transportClient; - registerTransportService("default", TransportService.class); - registerTransport(LOCAL_TRANSPORT, LocalTransport.class); - namedWriteables.add(new NamedWriteableRegistry.Entry(Task.Status.class, ReplicationTask.Status.NAME, ReplicationTask.Status::new)); - namedWriteables.add(new NamedWriteableRegistry.Entry(Task.Status.class, RawTaskStatus.NAME, RawTaskStatus::new)); - registerBuiltinAllocationCommands(); + registerTransport(LOCAL_TRANSPORT, () -> new LocalTransport(settings, threadPool, namedWriteableRegistry, circuitBreakerService)); + for (NetworkPlugin plugin : plugins) { + if (transportClient == false && HTTP_ENABLED.get(settings)) { + Map> httpTransportFactory = plugin.getHttpTransports(settings, threadPool, bigArrays, + circuitBreakerService, namedWriteableRegistry, xContentRegistry, networkService, dispatcher); + for (Map.Entry> entry : httpTransportFactory.entrySet()) { + registerHttpTransport(entry.getKey(), entry.getValue()); + } + } + Map> httpTransportFactory = plugin.getTransports(settings, threadPool, bigArrays, + circuitBreakerService, namedWriteableRegistry, networkService); + for (Map.Entry> entry : httpTransportFactory.entrySet()) { + registerTransport(entry.getKey(), entry.getValue()); + } + List transportInterceptors = plugin.getTransportInterceptors(threadPool.getThreadContext()); + for (TransportInterceptor interceptor : transportInterceptors) { + registerTransportInterceptor(interceptor); + } + } } public boolean isTransportClient() { return transportClient; } - /** Adds a transport service implementation that can be selected by setting {@link #TRANSPORT_SERVICE_TYPE_KEY}. */ - public void registerTransportService(String name, Class clazz) { - transportServiceTypes.registerExtension(name, clazz); - } - /** Adds a transport implementation that can be selected by setting {@link #TRANSPORT_TYPE_KEY}. */ - public void registerTransport(String name, Class clazz) { - transportTypes.registerExtension(name, clazz); + private void registerTransport(String key, Supplier factory) { + if (transportFactories.putIfAbsent(key, factory) != null) { + throw new IllegalArgumentException("transport for name: " + key + " is already registered"); + } } /** Adds an http transport implementation that can be selected by setting {@link #HTTP_TYPE_KEY}. */ // TODO: we need another name than "http transport"....so confusing with transportClient... 
- public void registerHttpTransport(String name, Class clazz) { + private void registerHttpTransport(String key, Supplier factory) { if (transportClient) { - throw new IllegalArgumentException("Cannot register http transport " + clazz.getName() + " for transport client"); + throw new IllegalArgumentException("Cannot register http transport " + key + " for transport client"); + } + if (transportHttpFactories.putIfAbsent(key, factory) != null) { + throw new IllegalArgumentException("transport for name: " + key + " is already registered"); } - httpTransportTypes.registerExtension(name, clazz); } /** @@ -129,56 +169,91 @@ public void registerHttpTransport(String name, Class void registerAllocationCommand(Writeable.Reader reader, AllocationCommand.Parser parser, - ParseField commandName) { - allocationCommandRegistry.register(parser, commandName); - namedWriteables.add(new Entry(AllocationCommand.class, commandName.getPreferredName(), reader)); + private static void registerAllocationCommand(Writeable.Reader reader, + CheckedFunction parser, ParseField commandName) { + namedXContents.add(new NamedXContentRegistry.Entry(AllocationCommand.class, commandName, parser)); + namedWriteables.add(new NamedWriteableRegistry.Entry(AllocationCommand.class, commandName.getPreferredName(), reader)); } - /** - * The registry of allocation command parsers. - */ - public AllocationCommandRegistry getAllocationCommandRegistry() { - return allocationCommandRegistry; + public static List getNamedWriteables() { + return Collections.unmodifiableList(namedWriteables); } - public List getNamedWriteables() { - return namedWriteables; + public static List getNamedXContents() { + return Collections.unmodifiableList(namedXContents); } - @Override - protected void configure() { - bind(NetworkService.class).toInstance(networkService); - transportServiceTypes.bindType(binder(), settings, TRANSPORT_SERVICE_TYPE_KEY, "default"); - transportTypes.bindType(binder(), settings, TRANSPORT_TYPE_KEY, TRANSPORT_DEFAULT_TYPE_SETTING.get(settings)); - - if (transportClient == false) { - if (HTTP_ENABLED.get(settings)) { - bind(HttpServer.class).asEagerSingleton(); - httpTransportTypes.bindType(binder(), settings, HTTP_TYPE_SETTING.getKey(), HTTP_DEFAULT_TYPE_SETTING.get(settings)); - } else { - bind(HttpServer.class).toProvider(Providers.of(null)); - } - // Bind the AllocationCommandRegistry so RestClusterRerouteAction can get it. 
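
As a side note, the reworked NetworkModule replaces Guice-bound extension points with plain name-to-factory maps: duplicate registrations are rejected, and the active implementation is resolved from an explicit setting with a fallback to a configured default. A small, generic sketch of that pattern (the class name and the String payload are placeholders, not Elasticsearch APIs):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Generic sketch of the register-by-name / resolve-by-setting pattern used above.
public class FactoryRegistryExample {
    private final Map<String, Supplier<String>> factories = new HashMap<>();

    void register(String key, Supplier<String> factory) {
        // reject duplicate registrations, mirroring registerTransport/registerHttpTransport
        if (factories.putIfAbsent(key, factory) != null) {
            throw new IllegalArgumentException("transport for name: " + key + " is already registered");
        }
    }

    Supplier<String> resolve(String explicitType, String defaultType) {
        // prefer the explicitly configured type, otherwise fall back to the default type
        String name = explicitType != null ? explicitType : defaultType;
        Supplier<String> factory = factories.get(name);
        if (factory == null) {
            throw new IllegalStateException("Unsupported transport.type [" + name + "]");
        }
        return factory;
    }

    public static void main(String[] args) {
        FactoryRegistryExample registry = new FactoryRegistryExample();
        registry.register("local", () -> "local transport");
        registry.register("netty4", () -> "netty4 transport");
        System.out.println(registry.resolve(null, "local").get());      // local transport
        System.out.println(registry.resolve("netty4", "local").get());  // netty4 transport
    }
}
```
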
- bind(AllocationCommandRegistry.class).toInstance(allocationCommandRegistry); + public Supplier getHttpServerTransportSupplier() { + final String name; + if (HTTP_TYPE_SETTING.exists(settings)) { + name = HTTP_TYPE_SETTING.get(settings); + } else { + name = HTTP_DEFAULT_TYPE_SETTING.get(settings); + } + final Supplier factory = transportHttpFactories.get(name); + if (factory == null) { + throw new IllegalStateException("Unsupported http.type [" + name + "]"); } + return factory; } - private void registerBuiltinAllocationCommands() { - registerAllocationCommand(CancelAllocationCommand::new, CancelAllocationCommand::fromXContent, - CancelAllocationCommand.COMMAND_NAME_FIELD); - registerAllocationCommand(MoveAllocationCommand::new, MoveAllocationCommand::fromXContent, - MoveAllocationCommand.COMMAND_NAME_FIELD); - registerAllocationCommand(AllocateReplicaAllocationCommand::new, AllocateReplicaAllocationCommand::fromXContent, - AllocateReplicaAllocationCommand.COMMAND_NAME_FIELD); - registerAllocationCommand(AllocateEmptyPrimaryAllocationCommand::new, AllocateEmptyPrimaryAllocationCommand::fromXContent, - AllocateEmptyPrimaryAllocationCommand.COMMAND_NAME_FIELD); - registerAllocationCommand(AllocateStalePrimaryAllocationCommand::new, AllocateStalePrimaryAllocationCommand::fromXContent, - AllocateStalePrimaryAllocationCommand.COMMAND_NAME_FIELD); + public boolean isHttpEnabled() { + return transportClient == false && HTTP_ENABLED.get(settings); + } + + public Supplier getTransportSupplier() { + final String name; + if (TRANSPORT_TYPE_SETTING.exists(settings)) { + name = TRANSPORT_TYPE_SETTING.get(settings); + } else { + name = TRANSPORT_DEFAULT_TYPE_SETTING.get(settings); + } + final Supplier factory = transportFactories.get(name); + if (factory == null) { + throw new IllegalStateException("Unsupported transport.type [" + name + "]"); + } + return factory; + } + + /** + * Registers a new {@link TransportInterceptor} + */ + private void registerTransportInterceptor(TransportInterceptor interceptor) { + this.transportIntercetors.add(Objects.requireNonNull(interceptor, "interceptor must not be null")); + } + /** + * Returns a composite {@link TransportInterceptor} containing all registered interceptors + * @see #registerTransportInterceptor(TransportInterceptor) + */ + public TransportInterceptor getTransportInterceptor() { + return new CompositeTransportInterceptor(this.transportIntercetors); } - public boolean canRegisterHttpExtensions() { - return transportClient == false; + static final class CompositeTransportInterceptor implements TransportInterceptor { + final List transportInterceptors; + + private CompositeTransportInterceptor(List transportInterceptors) { + this.transportInterceptors = new ArrayList<>(transportInterceptors); + } + + @Override + public TransportRequestHandler interceptHandler(String action, String executor, + boolean forceExecution, + TransportRequestHandler actualHandler) { + for (TransportInterceptor interceptor : this.transportInterceptors) { + actualHandler = interceptor.interceptHandler(action, executor, forceExecution, actualHandler); + } + return actualHandler; + } + + @Override + public AsyncSender interceptSender(AsyncSender sender) { + for (TransportInterceptor interceptor : this.transportInterceptors) { + sender = interceptor.interceptSender(sender); + } + return sender; + } } + } diff --git a/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java b/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java index 
8652d4c5c0521..9e06c39b83e03 100644 --- a/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java +++ b/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java @@ -32,6 +32,7 @@ import java.util.Collections; import java.util.Comparator; import java.util.List; +import java.util.Optional; /** * Utilities for network interfaces / addresses binding and publishing. @@ -227,14 +228,15 @@ static InetAddress[] getAllAddresses() throws SocketException { /** Returns addresses for the given interface (it must be marked up) */ static InetAddress[] getAddressesForInterface(String name) throws SocketException { - NetworkInterface intf = NetworkInterface.getByName(name); - if (intf == null) { + Optional networkInterface = getInterfaces().stream().filter((netIf) -> name.equals(netIf.getName())).findFirst(); + + if (networkInterface.isPresent() == false) { throw new IllegalArgumentException("No interface named '" + name + "' found, got " + getInterfaces()); } - if (!intf.isUp()) { + if (!networkInterface.get().isUp()) { throw new IllegalArgumentException("Interface '" + name + "' is not up and running"); } - List list = Collections.list(intf.getInetAddresses()); + List list = Collections.list(networkInterface.get().getInetAddresses()); if (list.isEmpty()) { throw new IllegalArgumentException("Interface '" + name + "' has no internet addresses"); } diff --git a/core/src/main/java/org/elasticsearch/common/path/PathTrie.java b/core/src/main/java/org/elasticsearch/common/path/PathTrie.java index c711bfa2a61b1..44accbf190aa2 100644 --- a/core/src/main/java/org/elasticsearch/common/path/PathTrie.java +++ b/core/src/main/java/org/elasticsearch/common/path/PathTrie.java @@ -111,7 +111,10 @@ public synchronized void insert(String[] path, int index, T value) { // in case the target(last) node already exist but without a value // than the value should be updated. 
if (index == (path.length - 1)) { - assert (node.value == null || node.value == value); + if (node.value != null) { + throw new IllegalArgumentException("Path [" + String.join("/", path)+ "] already has a value [" + + node.value + "]"); + } if (node.value == null) { node.value = value; } @@ -190,6 +193,9 @@ public String toString() { public void insert(String path, T value) { String[] strings = path.split(SEPARATOR); if (strings.length == 0) { + if (rootValue != null) { + throw new IllegalArgumentException("Path [/] already has a value [" + rootValue + "]"); + } rootValue = value; return; } diff --git a/core/src/main/java/org/elasticsearch/common/recycler/Recyclers.java b/core/src/main/java/org/elasticsearch/common/recycler/Recyclers.java index 5bac8f7bcfd2b..f84441fbce436 100644 --- a/core/src/main/java/org/elasticsearch/common/recycler/Recyclers.java +++ b/core/src/main/java/org/elasticsearch/common/recycler/Recyclers.java @@ -170,7 +170,7 @@ public static Recycler concurrent(final Recycler.Factory factory, fina } } - final int slot() { + int slot() { final long id = Thread.currentThread().getId(); // don't trust Thread.hashCode to have equiprobable low bits int slot = (int) BitMixer.mix64(id); diff --git a/core/src/main/java/org/elasticsearch/common/regex/Regex.java b/core/src/main/java/org/elasticsearch/common/regex/Regex.java index 061ad6c26c0e0..f1f945288e43d 100644 --- a/core/src/main/java/org/elasticsearch/common/regex/Regex.java +++ b/core/src/main/java/org/elasticsearch/common/regex/Regex.java @@ -19,8 +19,13 @@ package org.elasticsearch.common.regex; +import org.apache.lucene.util.automaton.Automata; +import org.apache.lucene.util.automaton.Automaton; +import org.apache.lucene.util.automaton.Operations; import org.elasticsearch.common.Strings; +import java.util.ArrayList; +import java.util.List; import java.util.Locale; import java.util.regex.Pattern; @@ -46,6 +51,33 @@ public static boolean isMatchAllPattern(String str) { return str.equals("*"); } + /** Return an {@link Automaton} that matches the given pattern. */ + public static Automaton simpleMatchToAutomaton(String pattern) { + List automata = new ArrayList<>(); + int previous = 0; + for (int i = pattern.indexOf('*'); i != -1; i = pattern.indexOf('*', i + 1)) { + automata.add(Automata.makeString(pattern.substring(previous, i))); + automata.add(Automata.makeAnyString()); + previous = i + 1; + } + automata.add(Automata.makeString(pattern.substring(previous))); + return Operations.concatenate(automata); + } + + /** + * Return an Automaton that matches the union of the provided patterns. + */ + public static Automaton simpleMatchToAutomaton(String... 
patterns) { + if (patterns.length < 1) { + throw new IllegalArgumentException("There must be at least one pattern, zero given"); + } + List automata = new ArrayList<>(); + for (String pattern : patterns) { + automata.add(simpleMatchToAutomaton(pattern)); + } + return Operations.union(automata); + } + /** * Match a String against the given pattern, supporting the following simple * pattern styles: "xxx*", "*xxx", "*xxx*" and "xxx*yyy" matches (with an diff --git a/core/src/main/java/org/elasticsearch/common/rounding/DateTimeUnit.java b/core/src/main/java/org/elasticsearch/common/rounding/DateTimeUnit.java index 375c1a27212a3..07997c8af3138 100644 --- a/core/src/main/java/org/elasticsearch/common/rounding/DateTimeUnit.java +++ b/core/src/main/java/org/elasticsearch/common/rounding/DateTimeUnit.java @@ -43,7 +43,7 @@ public enum DateTimeUnit { private final byte id; private final Function fieldFunction; - private DateTimeUnit(byte id, Function fieldFunction) { + DateTimeUnit(byte id, Function fieldFunction) { this.id = id; this.fieldFunction = fieldFunction; } diff --git a/core/src/main/java/org/elasticsearch/common/rounding/Rounding.java b/core/src/main/java/org/elasticsearch/common/rounding/Rounding.java index ad9f926e881f7..50461d6184ae7 100644 --- a/core/src/main/java/org/elasticsearch/common/rounding/Rounding.java +++ b/core/src/main/java/org/elasticsearch/common/rounding/Rounding.java @@ -128,15 +128,38 @@ public byte id() { @Override public long round(long utcMillis) { long rounded = field.roundFloor(utcMillis); - if (timeZone.isFixed() == false && timeZone.getOffset(utcMillis) != timeZone.getOffset(rounded)) { - // in this case, we crossed a time zone transition. In some edge - // cases this will - // result in a value that is not a rounded value itself. We need - // to round again - // to make sure. This will have no affect in cases where - // 'rounded' was already a proper - // rounded value - rounded = field.roundFloor(rounded); + if (timeZone.isFixed() == false) { + // special cases for non-fixed time zones with dst transitions + if (timeZone.getOffset(utcMillis) != timeZone.getOffset(rounded)) { + /* + * the offset change indicates a dst transition. In some + * edge cases this will result in a value that is not a + * rounded value before the transition. We round again to + * make sure we really return a rounded value. This will + * have no effect in cases where we already had a valid + * rounded value + */ + rounded = field.roundFloor(rounded); + } else { + /* + * check if the current time instant is at a start of a DST + * overlap by comparing the offset of the instant and the + * previous millisecond. We want to detect negative offset + * changes that result in an overlap + */ + if (timeZone.getOffset(rounded) < timeZone.getOffset(rounded - 1)) { + /* + * we are rounding a date just after a DST overlap. 
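
The new Regex#simpleMatchToAutomaton helpers above build Lucene automata out of simple wildcard patterns by concatenating literal chunks with any-string automata, and take the union across multiple patterns. A self-contained sketch of the same idea, using a CharacterRunAutomaton only to demonstrate matching (the example class and patterns are assumptions for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.lucene.util.automaton.Automata;
import org.apache.lucene.util.automaton.Automaton;
import org.apache.lucene.util.automaton.CharacterRunAutomaton;
import org.apache.lucene.util.automaton.Operations;

public class WildcardAutomatonExample {

    /** Mirrors the single-pattern conversion: literal chunks joined by any-string automata. */
    static Automaton toAutomaton(String pattern) {
        List<Automaton> automata = new ArrayList<>();
        int previous = 0;
        for (int i = pattern.indexOf('*'); i != -1; i = pattern.indexOf('*', i + 1)) {
            automata.add(Automata.makeString(pattern.substring(previous, i)));
            automata.add(Automata.makeAnyString());
            previous = i + 1;
        }
        automata.add(Automata.makeString(pattern.substring(previous)));
        return Operations.concatenate(automata);
    }

    public static void main(String[] args) {
        // Union of several patterns, as simpleMatchToAutomaton(String...) does.
        Automaton union = Operations.union(Arrays.asList(toAutomaton("logs-*"), toAutomaton("metrics-*-2017")));
        CharacterRunAutomaton matcher = new CharacterRunAutomaton(union);
        System.out.println(matcher.run("logs-2017-01-01"));     // true
        System.out.println(matcher.run("metrics-node1-2017"));  // true
        System.out.println(matcher.run("traces-2017"));         // false
    }
}
```
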
if + * the overlap is smaller than the time unit we are + * rounding to, we want to add the overlapping part to + * the following rounding interval + */ + long previousRounded = field.roundFloor(rounded - 1); + if (rounded - previousRounded < field.getDurationField().getUnitMillis()) { + rounded = previousRounded; + } + } + } } assert rounded == field.roundFloor(rounded); return rounded; diff --git a/core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java b/core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java index 787fa950bea63..05b7d96c8f6db 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java +++ b/core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java @@ -35,11 +35,10 @@ import java.util.List; import java.util.Map; import java.util.Set; -import java.util.SortedMap; -import java.util.TreeMap; import java.util.concurrent.CopyOnWriteArrayList; import java.util.function.BiConsumer; import java.util.function.Consumer; +import java.util.function.Predicate; import java.util.regex.Pattern; import java.util.stream.Collectors; @@ -56,6 +55,7 @@ public abstract class AbstractScopedSettings extends AbstractComponent { private final Setting.Property scope; private static final Pattern KEY_PATTERN = Pattern.compile("^(?:[-\\w]+[.])*[-\\w]+$"); private static final Pattern GROUP_KEY_PATTERN = Pattern.compile("^(?:[-\\w]+[.])+$"); + private static final Pattern AFFIX_KEY_PATTERN = Pattern.compile("^(?:[-\\w]+[.])+[*](?:[.][-\\w]+)+$"); protected AbstractScopedSettings(Settings settings, Set> settingsSet, Setting.Property scope) { super(settings); @@ -85,7 +85,8 @@ protected AbstractScopedSettings(Settings settings, Set> settingsSet, } protected void validateSettingKey(Setting setting) { - if (isValidKey(setting.getKey()) == false && (setting.isGroupSetting() && isValidGroupKey(setting.getKey())) == false) { + if (isValidKey(setting.getKey()) == false && (setting.isGroupSetting() && isValidGroupKey(setting.getKey()) + || isValidAffixKey(setting.getKey())) == false) { throw new IllegalArgumentException("illegal settings key: [" + setting.getKey() + "]"); } } @@ -110,6 +111,11 @@ private static boolean isValidGroupKey(String key) { return GROUP_KEY_PATTERN.matcher(key).matches(); } + // pkg private for tests + static boolean isValidAffixKey(String key) { + return AFFIX_KEY_PATTERN.matcher(key).matches(); + } + public Setting.Property getScope() { return this.scope; } @@ -188,6 +194,19 @@ public synchronized void addSettingsUpdateConsumer(Setting setting, Consu addSettingsUpdater(setting.newUpdater(consumer, logger, validator)); } + /** + * Adds a settings consumer for affix settings. Affix settings have a namespace associated to it that needs to be available to the + * consumer in order to be processed correctly. 
+ */ + public synchronized void addAffixUpdateConsumer(Setting.AffixSetting setting, BiConsumer consumer, + BiConsumer validator) { + final Setting registeredSetting = this.complexMatchers.get(setting.getKey()); + if (setting != registeredSetting) { + throw new IllegalArgumentException("Setting is not registered for key [" + setting.getKey() + "]"); + } + addSettingsUpdater(setting.newAffixUpdater(consumer, logger, validator)); + } + synchronized void addSettingsUpdater(SettingUpdater updater) { this.settingUpdaters.add(updater); } @@ -232,11 +251,9 @@ public final void validate(Settings.Builder settingsBuilder) { */ public final void validate(Settings settings) { List exceptions = new ArrayList<>(); - // we want them sorted for deterministic error messages - SortedMap sortedSettings = new TreeMap<>(settings.getAsMap()); - for (Map.Entry entry : sortedSettings.entrySet()) { + for (String key : settings.keySet()) { // settings iterate in deterministic fashion try { - validate(entry.getKey(), settings); + validate(key, settings); } catch (RuntimeException ex) { exceptions.add(ex); } @@ -260,7 +277,12 @@ public final void validate(String key, Settings settings) { } } CollectionUtil.timSort(scoredKeys, (a,b) -> b.v1().compareTo(a.v1())); - String msg = "unknown setting [" + key + "]"; + String msgPrefix = "unknown setting"; + SecureSettings secureSettings = settings.getSecureSettings(); + if (secureSettings != null && settings.getSecureSettings().getSettingNames().contains(key)) { + msgPrefix = "unknown secure setting"; + } + String msg = msgPrefix + " [" + key + "]"; List keys = scoredKeys.stream().map((a) -> a.v2()).collect(Collectors.toList()); if (keys.isEmpty() == false) { msg += " did you mean " + (keys.size() == 1 ? "[" + keys.get(0) + "]": "any of " + keys.toString()) + "?"; @@ -358,11 +380,19 @@ private boolean assertMatcher(String key, int numComplexMatchers) { /** * Returns true if the setting for the given key is dynamically updateable. Otherwise false. */ - public boolean hasDynamicSetting(String key) { + public boolean isDynamicSetting(String key) { final Setting setting = get(key); return setting != null && setting.isDynamic(); } + /** + * Returns true if the setting for the given key is final. Otherwise false. + */ + public boolean isFinalSetting(String key) { + final Setting setting = get(key); + return setting != null && setting.isFinal(); + } + /** * Returns a settings object that contains all settings that are not * already set in the given source. 
The diff contains either the default value for each @@ -371,9 +401,10 @@ public boolean hasDynamicSetting(String key) { public Settings diff(Settings source, Settings defaultSettings) { Settings.Builder builder = Settings.builder(); for (Setting setting : keySettings.values()) { - if (setting.exists(source) == false) { - builder.put(setting.getKey(), setting.getRaw(defaultSettings)); - } + setting.diff(builder, source, defaultSettings); + } + for (Setting setting : complexMatchers.values()) { + setting.diff(builder, source, defaultSettings); } return builder.build(); } @@ -440,31 +471,48 @@ private boolean updateSettings(Settings toApply, Settings.Builder target, Settin boolean changed = false; final Set toRemove = new HashSet<>(); Settings.Builder settingsBuilder = Settings.builder(); + final Predicate canUpdate = (key) -> ( + isFinalSetting(key) == false && // it's not a final setting + ((onlyDynamic == false && get(key) != null) || isDynamicSetting(key))); + final Predicate canRemove = (key) ->(// we can delete if + isFinalSetting(key) == false && // it's not a final setting + (onlyDynamic && isDynamicSetting(key) // it's a dynamicSetting and we only do dynamic settings + || get(key) == null && key.startsWith(ARCHIVED_SETTINGS_PREFIX) // the setting is not registered AND it's been archived + || (onlyDynamic == false && get(key) != null))); // if it's not dynamic AND we have a key for (Map.Entry entry : toApply.getAsMap().entrySet()) { - if (entry.getValue() == null) { + if (entry.getValue() == null && (canRemove.test(entry.getKey()) || entry.getKey().endsWith("*"))) { + // this either accepts null values that suffice the canUpdate test OR wildcard expressions (key ends with *) + // we don't validate if there is any dynamic setting with that prefix yet we could do in the future toRemove.add(entry.getKey()); - } else if ((onlyDynamic == false && get(entry.getKey()) != null) || hasDynamicSetting(entry.getKey())) { + // we don't set changed here it's set after we apply deletes below if something actually changed + } else if (entry.getValue() != null && canUpdate.test(entry.getKey())) { validate(entry.getKey(), toApply); settingsBuilder.put(entry.getKey(), entry.getValue()); updates.put(entry.getKey(), entry.getValue()); changed = true; } else { - throw new IllegalArgumentException(type + " setting [" + entry.getKey() + "], not dynamically updateable"); + if (isFinalSetting(entry.getKey())) { + throw new IllegalArgumentException("final " + type + " setting [" + entry.getKey() + "], not updateable"); + } else { + throw new IllegalArgumentException(type + " setting [" + entry.getKey() + "], not dynamically updateable"); + } } - } - changed |= applyDeletes(toRemove, target); + changed |= applyDeletes(toRemove, target, canRemove); target.put(settingsBuilder.build()); return changed; } - private static boolean applyDeletes(Set deletes, Settings.Builder builder) { + private static boolean applyDeletes(Set deletes, Settings.Builder builder, Predicate canRemove) { boolean changed = false; for (String entry : deletes) { Set keysToRemove = new HashSet<>(); Set keySet = builder.internalMap().keySet(); for (String key : keySet) { - if (Regex.simpleMatch(entry, key)) { + if (Regex.simpleMatch(entry, key) && canRemove.test(key)) { + // we have to re-check with canRemove here since we might have a wildcard expression foo.* that matches + // dynamic as well as static settings if that is the case we might remove static settings since we resolve the + // wildcards late keysToRemove.add(key); } } @@ -493,11 
+541,21 @@ private static Setting findOverlappingSetting(Setting newSetting, Map> unknownConsumer, + final BiConsumer, IllegalArgumentException> invalidConsumer) { Settings.Builder builder = Settings.builder(); boolean changed = false; for (Map.Entry entry : settings.getAsMap().entrySet()) { @@ -511,10 +569,10 @@ public Settings archiveUnknownOrBrokenSettings(Settings settings) { builder.put(entry.getKey(), entry.getValue()); } else { changed = true; - logger.warn("found unknown setting: {} value: {} - archiving", entry.getKey(), entry.getValue()); + unknownConsumer.accept(entry); /* - * We put them back in here such that tools can check from the outside if there are any indices with broken - * settings. The setting can remain there but we want users to be aware that some of their setting are broken and + * We put them back in here such that tools can check from the outside if there are any indices with invalid + * settings. The setting can remain there but we want users to be aware that some of their setting are invalid and * they can research why and what they need to do to replace them. */ builder.put(ARCHIVED_SETTINGS_PREFIX + entry.getKey(), entry.getValue()); @@ -522,12 +580,10 @@ public Settings archiveUnknownOrBrokenSettings(Settings settings) { } } catch (IllegalArgumentException ex) { changed = true; - logger.warn( - (Supplier) () -> new ParameterizedMessage( - "found invalid setting: {} value: {} - archiving", entry.getKey(), entry.getValue()), ex); + invalidConsumer.accept(entry, ex); /* - * We put them back in here such that tools can check from the outside if there are any indices with broken settings. The - * setting can remain there but we want users to be aware that some of their setting are broken and they can research why + * We put them back in here such that tools can check from the outside if there are any indices with invalid settings. The + * setting can remain there but we want users to be aware that some of their setting are invalid and they can research why * and what they need to do to replace them. */ builder.put(ARCHIVED_SETTINGS_PREFIX + entry.getKey(), entry.getValue()); diff --git a/core/src/main/java/org/elasticsearch/common/settings/AddFileKeyStoreCommand.java b/core/src/main/java/org/elasticsearch/common/settings/AddFileKeyStoreCommand.java new file mode 100644 index 0000000000000..5ccac9a2ac3fa --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/settings/AddFileKeyStoreCommand.java @@ -0,0 +1,100 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.common.settings; + +import java.io.BufferedReader; +import java.io.File; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.nio.charset.StandardCharsets; +import java.nio.file.Files; +import java.nio.file.Path; +import java.nio.file.Paths; +import java.util.Arrays; +import java.util.List; + +import joptsimple.OptionSet; +import joptsimple.OptionSpec; +import org.elasticsearch.cli.EnvironmentAwareCommand; +import org.elasticsearch.cli.ExitCodes; +import org.elasticsearch.cli.Terminal; +import org.elasticsearch.cli.UserException; +import org.elasticsearch.common.SuppressForbidden; +import org.elasticsearch.common.io.PathUtils; +import org.elasticsearch.env.Environment; + +/** + * A subcommand for the keystore cli which adds a file setting. + */ +class AddFileKeyStoreCommand extends EnvironmentAwareCommand { + + private final OptionSpec forceOption; + private final OptionSpec arguments; + + AddFileKeyStoreCommand() { + super("Add a file setting to the keystore"); + this.forceOption = parser.acceptsAll(Arrays.asList("f", "force"), "Overwrite existing setting without prompting"); + // jopt simple has issue with multiple non options, so we just get one set of them here + // and convert to File when necessary + // see https://github.com/jopt-simple/jopt-simple/issues/103 + this.arguments = parser.nonOptions("setting [filepath]"); + } + + @Override + protected void execute(Terminal terminal, OptionSet options, Environment env) throws Exception { + KeyStoreWrapper keystore = KeyStoreWrapper.load(env.configFile()); + if (keystore == null) { + throw new UserException(ExitCodes.DATA_ERROR, "Elasticsearch keystore not found. Use 'create' command to create one."); + } + + keystore.decrypt(new char[0] /* TODO: prompt for password when they are supported */); + + List argumentValues = arguments.values(options); + if (argumentValues.size() == 0) { + throw new UserException(ExitCodes.USAGE, "Missing setting name"); + } + String setting = argumentValues.get(0); + if (keystore.getSettingNames().contains(setting) && options.has(forceOption) == false) { + if (terminal.promptYesNo("Setting " + setting + " already exists. Overwrite?", false) == false) { + terminal.println("Exiting without modifying keystore."); + return; + } + } + + if (argumentValues.size() == 1) { + throw new UserException(ExitCodes.USAGE, "Missing file name"); + } + Path file = getPath(argumentValues.get(1)); + if (Files.exists(file) == false) { + throw new UserException(ExitCodes.IO_ERROR, "File [" + file.toString() + "] does not exist"); + } + if (argumentValues.size() > 2) { + throw new UserException(ExitCodes.USAGE, "Unrecognized extra arguments [" + + String.join(", ", argumentValues.subList(2, argumentValues.size())) + "] after filepath"); + } + keystore.setFile(setting, Files.readAllBytes(file)); + keystore.save(env.configFile()); + } + + @SuppressForbidden(reason="file arg for cli") + private Path getPath(String file) { + return PathUtils.get(file); + } +} diff --git a/core/src/main/java/org/elasticsearch/common/settings/AddStringKeyStoreCommand.java b/core/src/main/java/org/elasticsearch/common/settings/AddStringKeyStoreCommand.java new file mode 100644 index 0000000000000..599fac8c376f0 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/settings/AddStringKeyStoreCommand.java @@ -0,0 +1,92 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.settings; + +import java.io.BufferedReader; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.nio.charset.StandardCharsets; +import java.util.Arrays; + +import joptsimple.OptionSet; +import joptsimple.OptionSpec; +import org.elasticsearch.cli.EnvironmentAwareCommand; +import org.elasticsearch.cli.ExitCodes; +import org.elasticsearch.cli.Terminal; +import org.elasticsearch.cli.UserException; +import org.elasticsearch.env.Environment; + +/** + * A subcommand for the keystore cli which adds a string setting. + */ +class AddStringKeyStoreCommand extends EnvironmentAwareCommand { + + private final OptionSpec stdinOption; + private final OptionSpec forceOption; + private final OptionSpec arguments; + + AddStringKeyStoreCommand() { + super("Add a string setting to the keystore"); + this.stdinOption = parser.acceptsAll(Arrays.asList("x", "stdin"), "Read setting value from stdin"); + this.forceOption = parser.acceptsAll(Arrays.asList("f", "force"), "Overwrite existing setting without prompting"); + this.arguments = parser.nonOptions("setting name"); + } + + // pkg private so tests can manipulate + InputStream getStdin() { + return System.in; + } + + @Override + protected void execute(Terminal terminal, OptionSet options, Environment env) throws Exception { + KeyStoreWrapper keystore = KeyStoreWrapper.load(env.configFile()); + if (keystore == null) { + throw new UserException(ExitCodes.DATA_ERROR, "Elasticsearch keystore not found. Use 'create' command to create one."); + } + + keystore.decrypt(new char[0] /* TODO: prompt for password when they are supported */); + + String setting = arguments.value(options); + if (setting == null) { + throw new UserException(ExitCodes.USAGE, "The setting name can not be null"); + } + if (keystore.getSettingNames().contains(setting) && options.has(forceOption) == false) { + if (terminal.promptYesNo("Setting " + setting + " already exists. 
Overwrite?", false) == false) { + terminal.println("Exiting without modifying keystore."); + return; + } + } + + final char[] value; + if (options.has(stdinOption)) { + BufferedReader stdinReader = new BufferedReader(new InputStreamReader(getStdin(), StandardCharsets.UTF_8)); + value = stdinReader.readLine().toCharArray(); + } else { + value = terminal.readSecret("Enter value for " + setting + ": "); + } + + try { + keystore.setString(setting, value); + } catch (IllegalArgumentException e) { + throw new UserException(ExitCodes.DATA_ERROR, "String value must contain only ASCII"); + } + keystore.save(env.configFile()); + } +} diff --git a/core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java b/core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java index 1ce156b853682..7c5f236b636e3 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java +++ b/core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java @@ -19,18 +19,22 @@ package org.elasticsearch.common.settings; import org.elasticsearch.action.admin.indices.close.TransportCloseIndexAction; +import org.elasticsearch.script.ScriptModes; +import org.elasticsearch.transport.RemoteClusterService; +import org.elasticsearch.transport.RemoteClusterAware; import org.elasticsearch.action.search.TransportSearchAction; import org.elasticsearch.action.support.AutoCreateIndex; import org.elasticsearch.action.support.DestructiveOperations; import org.elasticsearch.action.support.master.TransportMasterNodeReadAction; import org.elasticsearch.bootstrap.BootstrapSettings; import org.elasticsearch.client.Client; -import org.elasticsearch.client.transport.TransportClientNodesService; +import org.elasticsearch.client.transport.TransportClient; import org.elasticsearch.cluster.ClusterModule; import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.cluster.InternalClusterInfoService; import org.elasticsearch.cluster.NodeConnectionsService; import org.elasticsearch.cluster.action.index.MappingUpdatedAction; +import org.elasticsearch.cluster.metadata.IndexGraveyard; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.routing.allocation.DiskThresholdSettings; import org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator; @@ -54,10 +58,10 @@ import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.discovery.DiscoveryModule; import org.elasticsearch.discovery.DiscoverySettings; +import org.elasticsearch.discovery.zen.ElectMasterService; +import org.elasticsearch.discovery.zen.FaultDetection; +import org.elasticsearch.discovery.zen.UnicastZenPing; import org.elasticsearch.discovery.zen.ZenDiscovery; -import org.elasticsearch.discovery.zen.elect.ElectMasterService; -import org.elasticsearch.discovery.zen.fd.FaultDetection; -import org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing; import org.elasticsearch.env.Environment; import org.elasticsearch.env.NodeEnvironment; import org.elasticsearch.gateway.GatewayService; @@ -75,6 +79,7 @@ import org.elasticsearch.indices.recovery.RecoverySettings; import org.elasticsearch.indices.store.IndicesStore; import org.elasticsearch.indices.ttl.IndicesTTLService; +import org.elasticsearch.ingest.IngestService; import org.elasticsearch.monitor.fs.FsService; import org.elasticsearch.monitor.jvm.JvmGcMonitorService; import org.elasticsearch.monitor.jvm.JvmService; @@ -88,6 +93,7 @@ import org.elasticsearch.script.ScriptService; import 
org.elasticsearch.search.SearchModule; import org.elasticsearch.search.SearchService; +import org.elasticsearch.search.fetch.subphase.highlight.FastVectorHighlighter; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TcpTransport; import org.elasticsearch.transport.Transport; @@ -164,11 +170,11 @@ public void apply(Settings value, Settings current, Settings previous) { public static Set> BUILT_IN_CLUSTER_SETTINGS = Collections.unmodifiableSet(new HashSet<>( Arrays.asList(AwarenessAllocationDecider.CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING, - TransportClientNodesService.CLIENT_TRANSPORT_NODES_SAMPLER_INTERVAL, // TODO these transport client settings are kind + TransportClient.CLIENT_TRANSPORT_NODES_SAMPLER_INTERVAL, // TODO these transport client settings are kind // of odd here and should only be valid if we are a transport client - TransportClientNodesService.CLIENT_TRANSPORT_PING_TIMEOUT, - TransportClientNodesService.CLIENT_TRANSPORT_IGNORE_CLUSTER_NAME, - TransportClientNodesService.CLIENT_TRANSPORT_SNIFF, + TransportClient.CLIENT_TRANSPORT_PING_TIMEOUT, + TransportClient.CLIENT_TRANSPORT_IGNORE_CLUSTER_NAME, + TransportClient.CLIENT_TRANSPORT_SNIFF, AwarenessAllocationDecider.CLUSTER_ROUTING_ALLOCATION_AWARENESS_FORCE_GROUP_SETTING, BalancedShardsAllocator.INDEX_BALANCE_FACTOR_SETTING, BalancedShardsAllocator.SHARD_BALANCE_FACTOR_SETTING, @@ -191,6 +197,7 @@ public void apply(Settings value, Settings current, Settings previous) { IndicesTTLService.INDICES_TTL_INTERVAL_SETTING, MappingUpdatedAction.INDICES_MAPPING_DYNAMIC_TIMEOUT_SETTING, MetaData.SETTING_READ_ONLY_SETTING, + MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING, RecoverySettings.INDICES_RECOVERY_MAX_BYTES_PER_SEC_SETTING, RecoverySettings.INDICES_RECOVERY_RETRY_DELAY_STATE_SYNC_SETTING, RecoverySettings.INDICES_RECOVERY_RETRY_DELAY_NETWORK_SETTING, @@ -226,7 +233,6 @@ public void apply(Settings value, Settings current, Settings previous) { NetworkModule.HTTP_DEFAULT_TYPE_SETTING, NetworkModule.TRANSPORT_DEFAULT_TYPE_SETTING, NetworkModule.HTTP_TYPE_SETTING, - NetworkModule.TRANSPORT_SERVICE_TYPE_SETTING, NetworkModule.TRANSPORT_TYPE_SETTING, HttpTransportSettings.SETTING_CORS_ALLOW_CREDENTIALS, HttpTransportSettings.SETTING_CORS_ENABLED, @@ -245,6 +251,7 @@ public void apply(Settings value, Settings current, Settings previous) { HttpTransportSettings.SETTING_CORS_ALLOW_METHODS, HttpTransportSettings.SETTING_CORS_ALLOW_HEADERS, HttpTransportSettings.SETTING_HTTP_DETAILED_ERRORS_ENABLED, + HttpTransportSettings.SETTING_HTTP_CONTENT_TYPE_REQUIRED, HttpTransportSettings.SETTING_HTTP_MAX_CONTENT_LENGTH, HttpTransportSettings.SETTING_HTTP_MAX_CHUNK_SIZE, HttpTransportSettings.SETTING_HTTP_MAX_HEADER_SIZE, @@ -261,6 +268,11 @@ public void apply(Settings value, Settings current, Settings previous) { SearchService.DEFAULT_SEARCH_TIMEOUT_SETTING, ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING, TransportSearchAction.SHARD_COUNT_LIMIT_SETTING, + RemoteClusterAware.REMOTE_CLUSTERS_SEEDS, + RemoteClusterService.REMOTE_CONNECTIONS_PER_CLUSTER, + RemoteClusterService.REMOTE_INITIAL_CONNECTION_TIMEOUT_SETTING, + RemoteClusterService.REMOTE_NODE_ATTRIBUTE, + RemoteClusterService.ENABLE_REMOTE_CLUSTERS, TransportService.TRACE_LOG_EXCLUDE_SETTING, TransportService.TRACE_LOG_INCLUDE_SETTING, TransportCloseIndexAction.CLUSTER_INDICES_CLOSE_ENABLE_SETTING, @@ -305,6 +317,8 @@ public void apply(Settings value, Settings current, Settings previous) { 
IndexSettings.QUERY_STRING_ANALYZE_WILDCARD, IndexSettings.QUERY_STRING_ALLOW_LEADING_WILDCARD, PrimaryShardAllocator.NODE_INITIAL_SHARDS_SETTING, + ScriptModes.TYPES_ALLOWED_SETTING, + ScriptModes.CONTEXTS_ALLOWED_SETTING, ScriptService.SCRIPT_CACHE_SIZE_SETTING, ScriptService.SCRIPT_CACHE_EXPIRE_SETTING, ScriptService.SCRIPT_AUTO_RELOAD_ENABLED_SETTING, @@ -318,9 +332,12 @@ public void apply(Settings value, Settings current, Settings previous) { HunspellService.HUNSPELL_IGNORE_CASE, HunspellService.HUNSPELL_DICTIONARY_OPTIONS, IndicesStore.INDICES_STORE_DELETE_SHARD_TIMEOUT, + Environment.DEFAULT_PATH_CONF_SETTING, Environment.PATH_CONF_SETTING, + Environment.DEFAULT_PATH_DATA_SETTING, Environment.PATH_DATA_SETTING, Environment.PATH_HOME_SETTING, + Environment.DEFAULT_PATH_LOGS_SETTING, Environment.PATH_LOGS_SETTING, Environment.PATH_REPO_SETTING, Environment.PATH_SCRIPTS_SETTING, @@ -329,7 +346,7 @@ public void apply(Settings value, Settings current, Settings previous) { NodeEnvironment.NODE_ID_SEED_SETTING, DiscoverySettings.INITIAL_STATE_TIMEOUT_SETTING, DiscoveryModule.DISCOVERY_TYPE_SETTING, - DiscoveryModule.ZEN_MASTER_SERVICE_TYPE_SETTING, + DiscoveryModule.DISCOVERY_HOSTS_PROVIDER_SETTING, FaultDetection.PING_RETRIES_SETTING, FaultDetection.PING_TIMEOUT_SETTING, FaultDetection.REGISTER_CONNECTION_LISTENER_SETTING, @@ -345,9 +362,11 @@ public void apply(Settings value, Settings current, Settings previous) { ZenDiscovery.MASTER_ELECTION_IGNORE_NON_MASTER_PINGS_SETTING, UnicastZenPing.DISCOVERY_ZEN_PING_UNICAST_HOSTS_SETTING, UnicastZenPing.DISCOVERY_ZEN_PING_UNICAST_CONCURRENT_CONNECTS_SETTING, + UnicastZenPing.DISCOVERY_ZEN_PING_UNICAST_HOSTS_RESOLVE_TIMEOUT, SearchService.DEFAULT_KEEPALIVE_SETTING, SearchService.KEEPALIVE_INTERVAL_SETTING, - Node.WRITE_PORTS_FIELD_SETTING, + SearchService.LOW_LEVEL_CANCELLATION_SETTING, + Node.WRITE_PORTS_FILE_SETTING, Node.NODE_NAME_SETTING, Node.NODE_DATA_SETTING, Node.NODE_MASTER_SETTING, @@ -396,9 +415,10 @@ public void apply(Settings value, Settings current, Settings previous) { PluginsService.MANDATORY_SETTING, BootstrapSettings.SECURITY_FILTER_BAD_DEFAULTS_SETTING, BootstrapSettings.MEMORY_LOCK_SETTING, + BootstrapSettings.SYSTEM_CALL_FILTER_SETTING, + // TODO: remove in 6.0.0 BootstrapSettings.SECCOMP_SETTING, BootstrapSettings.CTRLHANDLER_SETTING, - BootstrapSettings.IGNORE_SYSTEM_BOOTSTRAP_CHECKS, IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, IndexingMemoryController.MIN_INDEX_BUFFER_SIZE_SETTING, IndexingMemoryController.MAX_INDEX_BUFFER_SIZE_SETTING, @@ -410,6 +430,9 @@ public void apply(Settings value, Settings current, Settings previous) { ResourceWatcherService.RELOAD_INTERVAL_LOW, SearchModule.INDICES_MAX_CLAUSE_COUNT_SETTING, ThreadPool.ESTIMATED_TIME_INTERVAL_SETTING, - Node.BREAKER_TYPE_KEY + FastVectorHighlighter.SETTING_TV_HIGHLIGHT_MULTI_VALUE, + Node.BREAKER_TYPE_KEY, + IngestService.NEW_INGEST_DATE_FORMAT, + IndexGraveyard.SETTING_MAX_TOMBSTONES ))); } diff --git a/core/src/main/java/org/elasticsearch/common/settings/CreateKeyStoreCommand.java b/core/src/main/java/org/elasticsearch/common/settings/CreateKeyStoreCommand.java new file mode 100644 index 0000000000000..08860cb5ea992 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/settings/CreateKeyStoreCommand.java @@ -0,0 +1,61 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.settings; + +import java.nio.file.Files; +import java.nio.file.Path; + +import joptsimple.OptionSet; +import org.elasticsearch.cli.EnvironmentAwareCommand; +import org.elasticsearch.cli.Terminal; +import org.elasticsearch.env.Environment; + +/** + * A subcommand for the keystore cli to create a new keystore. + */ +class CreateKeyStoreCommand extends EnvironmentAwareCommand { + + CreateKeyStoreCommand() { + super("Creates a new elasticsearch keystore"); + } + + @Override + protected void execute(Terminal terminal, OptionSet options, Environment env) throws Exception { + Path keystoreFile = KeyStoreWrapper.keystorePath(env.configFile()); + if (Files.exists(keystoreFile)) { + if (terminal.promptYesNo("An elasticsearch keystore already exists. Overwrite?", false) == false) { + terminal.println("Exiting without creating keystore."); + return; + } + } + + + char[] password = new char[0];// terminal.readSecret("Enter passphrase (empty for no passphrase): "); + /* TODO: uncomment when entering passwords on startup is supported + char[] passwordRepeat = terminal.readSecret("Enter same passphrase again: "); + if (Arrays.equals(password, passwordRepeat) == false) { + throw new UserException(ExitCodes.DATA_ERROR, "Passphrases are not equal, exiting."); + }*/ + + KeyStoreWrapper keystore = KeyStoreWrapper.create(password); + keystore.save(env.configFile()); + terminal.println("Created elasticsearch keystore in " + env.configFile()); + } +} diff --git a/core/src/main/java/org/elasticsearch/common/settings/IndexScopedSettings.java b/core/src/main/java/org/elasticsearch/common/settings/IndexScopedSettings.java index fc0d5f4df9fbc..8762d4cb8e8c2 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/IndexScopedSettings.java +++ b/core/src/main/java/org/elasticsearch/common/settings/IndexScopedSettings.java @@ -71,6 +71,7 @@ public final class IndexScopedSettings extends AbstractScopedSettings { IndexMetaData.INDEX_AUTO_EXPAND_REPLICAS_SETTING, IndexMetaData.INDEX_NUMBER_OF_REPLICAS_SETTING, IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING, + IndexMetaData.INDEX_ROUTING_PARTITION_SIZE_SETTING, IndexMetaData.INDEX_SHADOW_REPLICAS_SETTING, IndexMetaData.INDEX_SHARED_FILESYSTEM_SETTING, IndexMetaData.INDEX_READ_ONLY_SETTING, @@ -78,8 +79,10 @@ public final class IndexScopedSettings extends AbstractScopedSettings { IndexMetaData.INDEX_BLOCKS_WRITE_SETTING, IndexMetaData.INDEX_BLOCKS_METADATA_SETTING, IndexMetaData.INDEX_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE_SETTING, + IndexMetaData.INDEX_BLOCKS_READ_ONLY_ALLOW_DELETE_SETTING, IndexMetaData.INDEX_PRIORITY_SETTING, IndexMetaData.INDEX_DATA_PATH_SETTING, + IndexMetaData.INDEX_FORMAT_SETTING, SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_DEBUG_SETTING, SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_WARN_SETTING, SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_INFO_SETTING, @@ -89,7 +92,6 @@ public final class 
IndexScopedSettings extends AbstractScopedSettings { SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_INFO_SETTING, SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_TRACE_SETTING, SearchSlowLog.INDEX_SEARCH_SLOWLOG_LEVEL, - SearchSlowLog.INDEX_SEARCH_SLOWLOG_REFORMAT, IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_WARN_SETTING, IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_DEBUG_SETTING, IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_INFO_SETTING, @@ -110,6 +112,7 @@ public final class IndexScopedSettings extends AbstractScopedSettings { IndexSettings.INDEX_REFRESH_INTERVAL_SETTING, IndexSettings.MAX_RESULT_WINDOW_SETTING, IndexSettings.MAX_RESCORE_WINDOW_SETTING, + IndexSettings.MAX_ADJACENCY_MATRIX_FILTERS_SETTING, IndexSettings.INDEX_TRANSLOG_SYNC_INTERVAL_SETTING, IndexSettings.DEFAULT_FIELD_SETTING, IndexSettings.QUERY_STRING_LENIENT_SETTING, @@ -132,11 +135,13 @@ public final class IndexScopedSettings extends AbstractScopedSettings { MapperService.INDEX_MAPPING_NESTED_FIELDS_LIMIT_SETTING, MapperService.INDEX_MAPPING_TOTAL_FIELDS_LIMIT_SETTING, MapperService.INDEX_MAPPING_DEPTH_LIMIT_SETTING, + MapperService.INDEX_MAPPING_SINGLE_TYPE_SETTING, BitsetFilterCache.INDEX_LOAD_RANDOM_ACCESS_FILTERS_EAGERLY_SETTING, IndexModule.INDEX_STORE_TYPE_SETTING, IndexModule.INDEX_STORE_PRE_LOAD_SETTING, IndexModule.INDEX_QUERY_CACHE_ENABLED_SETTING, IndexModule.INDEX_QUERY_CACHE_EVERYTHING_SETTING, + IndexModule.INDEX_QUERY_CACHE_TERM_QUERIES_SETTING, PrimaryShardAllocator.INDEX_RECOVERY_INITIAL_SHARDS_SETTING, FsDirectoryService.INDEX_LOCK_FACTOR_SETTING, EngineConfig.INDEX_CODEC_SETTING, @@ -186,10 +191,13 @@ protected boolean isPrivateSetting(String key) { case IndexMetaData.SETTING_INDEX_UUID: case IndexMetaData.SETTING_VERSION_CREATED: case IndexMetaData.SETTING_VERSION_UPGRADED: + case IndexMetaData.SETTING_INDEX_PROVIDED_NAME: case MergePolicyConfig.INDEX_MERGE_ENABLED: + case IndexMetaData.INDEX_SHRINK_SOURCE_UUID_KEY: + case IndexMetaData.INDEX_SHRINK_SOURCE_NAME_KEY: return true; default: - return false; + return IndexMetaData.INDEX_ROUTING_INITIAL_RECOVERY_GROUP_SETTING.getRawKey().match(key); } } } diff --git a/core/src/main/java/org/elasticsearch/common/settings/KeyStoreCli.java b/core/src/main/java/org/elasticsearch/common/settings/KeyStoreCli.java new file mode 100644 index 0000000000000..16818341cbd0c --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/settings/KeyStoreCli.java @@ -0,0 +1,42 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.settings; + +import org.elasticsearch.cli.MultiCommand; +import org.elasticsearch.cli.Terminal; + +/** + * A cli tool for managing secrets in the elasticsearch keystore. 
+ */ +public class KeyStoreCli extends MultiCommand { + + private KeyStoreCli() { + super("A tool for managing settings stored in the elasticsearch keystore"); + subcommands.put("create", new CreateKeyStoreCommand()); + subcommands.put("list", new ListKeyStoreCommand()); + subcommands.put("add", new AddStringKeyStoreCommand()); + subcommands.put("add-file", new AddFileKeyStoreCommand()); + subcommands.put("remove", new RemoveSettingKeyStoreCommand()); + } + + public static void main(String[] args) throws Exception { + exit(new KeyStoreCli().main(args, Terminal.DEFAULT)); + } +} diff --git a/core/src/main/java/org/elasticsearch/common/settings/KeyStoreWrapper.java b/core/src/main/java/org/elasticsearch/common/settings/KeyStoreWrapper.java new file mode 100644 index 0000000000000..a96f1cf841951 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/settings/KeyStoreWrapper.java @@ -0,0 +1,385 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.settings; + +import javax.crypto.SecretKey; +import javax.crypto.SecretKeyFactory; +import javax.crypto.spec.PBEKeySpec; +import javax.security.auth.DestroyFailedException; +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.nio.CharBuffer; +import java.nio.charset.CharsetEncoder; +import java.nio.charset.StandardCharsets; +import java.nio.file.Files; +import java.nio.file.Path; +import java.nio.file.StandardCopyOption; +import java.nio.file.attribute.PosixFileAttributeView; +import java.nio.file.attribute.PosixFilePermissions; +import java.security.GeneralSecurityException; +import java.security.KeyStore; +import java.security.KeyStoreException; +import java.security.NoSuchAlgorithmException; +import java.util.Arrays; +import java.util.Base64; +import java.util.Enumeration; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Locale; +import java.util.Map; +import java.util.Set; +import java.util.stream.Collectors; + +import org.apache.lucene.codecs.CodecUtil; +import org.apache.lucene.store.BufferedChecksumIndexInput; +import org.apache.lucene.store.ChecksumIndexInput; +import org.apache.lucene.store.IOContext; +import org.apache.lucene.store.IndexInput; +import org.apache.lucene.store.IndexOutput; +import org.apache.lucene.store.SimpleFSDirectory; +import org.apache.lucene.util.SetOnce; + +/** + * A wrapper around a Java KeyStore which provides supplements the keystore with extra metadata. + * + * Loading a keystore has 2 phases. First, call {@link #load(Path)}. Then call + * {@link #decrypt(char[])} with the keystore password, or an empty char array if + * {@link #hasPassword()} is {@code false}. 
Loading and decrypting should happen + * in a single thread. Once decrypted, keys may be read with the wrapper in + * multiple threads. + */ +public class KeyStoreWrapper implements SecureSettings { + + /** An identifier for the type of data that may be stored in a keystore entry. */ + private enum KeyType { + STRING, + FILE + } + + /** The name of the keystore file to read and write. */ + private static final String KEYSTORE_FILENAME = "elasticsearch.keystore"; + + /** The version of the metadata written before the keystore data. */ + private static final int FORMAT_VERSION = 2; + + /** The oldest metadata format version that can be read. */ + private static final int MIN_FORMAT_VERSION = 1; + + /** The keystore type for a newly created keystore. */ + private static final String NEW_KEYSTORE_TYPE = "PKCS12"; + + /** The algorithm used to store string setting contents. */ + private static final String NEW_KEYSTORE_STRING_KEY_ALGO = "PBE"; + + /** The algorithm used to store file setting contents. */ + private static final String NEW_KEYSTORE_FILE_KEY_ALGO = "PBE"; + + /** An encoder to check whether string values are ascii. */ + private static final CharsetEncoder ASCII_ENCODER = StandardCharsets.US_ASCII.newEncoder(); + + /** The metadata format version used to read the current keystore wrapper. */ + private final int formatVersion; + + /** True iff the keystore has a password needed to read. */ + private final boolean hasPassword; + + /** The type of the keystore, as passed to {@link java.security.KeyStore#getInstance(String)} */ + private final String type; + + /** A factory necessary for constructing instances of string secrets in a {@link KeyStore}. */ + private final SecretKeyFactory stringFactory; + + /** A factory necessary for constructing instances of file secrets in a {@link KeyStore}. */ + private final SecretKeyFactory fileFactory; + + /** + * The settings that exist in the keystore, mapped to their type of data. + */ + private final Map settingTypes; + + /** The raw bytes of the encrypted keystore. */ + private final byte[] keystoreBytes; + + /** The loaded keystore. See {@link #decrypt(char[])}. */ + private final SetOnce keystore = new SetOnce<>(); + + /** The password for the keystore. See {@link #decrypt(char[])}. */ + private final SetOnce keystorePassword = new SetOnce<>(); + + private KeyStoreWrapper(int formatVersion, boolean hasPassword, String type, + String stringKeyAlgo, String fileKeyAlgo, + Map settingTypes, byte[] keystoreBytes) { + this.formatVersion = formatVersion; + this.hasPassword = hasPassword; + this.type = type; + try { + stringFactory = SecretKeyFactory.getInstance(stringKeyAlgo); + fileFactory = SecretKeyFactory.getInstance(fileKeyAlgo); + } catch (NoSuchAlgorithmException e) { + throw new RuntimeException(e); + } + this.settingTypes = settingTypes; + this.keystoreBytes = keystoreBytes; + } + + /** Returns a path representing the ES keystore in the given config dir. */ + static Path keystorePath(Path configDir) { + return configDir.resolve(KEYSTORE_FILENAME); + } + + /** Constructs a new keystore with the given password. 
*/ + static KeyStoreWrapper create(char[] password) throws Exception { + KeyStoreWrapper wrapper = new KeyStoreWrapper(FORMAT_VERSION, password.length != 0, NEW_KEYSTORE_TYPE, + NEW_KEYSTORE_STRING_KEY_ALGO, NEW_KEYSTORE_FILE_KEY_ALGO, new HashMap<>(), null); + KeyStore keyStore = KeyStore.getInstance(NEW_KEYSTORE_TYPE); + keyStore.load(null, null); + wrapper.keystore.set(keyStore); + wrapper.keystorePassword.set(new KeyStore.PasswordProtection(password)); + return wrapper; + } + + /** + * Loads information about the Elasticsearch keystore from the provided config directory. + * + * {@link #decrypt(char[])} must be called before reading or writing any entries. + * Returns {@code null} if no keystore exists. + */ + public static KeyStoreWrapper load(Path configDir) throws IOException { + Path keystoreFile = keystorePath(configDir); + if (Files.exists(keystoreFile) == false) { + return null; + } + + SimpleFSDirectory directory = new SimpleFSDirectory(configDir); + try (IndexInput indexInput = directory.openInput(KEYSTORE_FILENAME, IOContext.READONCE)) { + ChecksumIndexInput input = new BufferedChecksumIndexInput(indexInput); + int formatVersion = CodecUtil.checkHeader(input, KEYSTORE_FILENAME, MIN_FORMAT_VERSION, FORMAT_VERSION); + byte hasPasswordByte = input.readByte(); + boolean hasPassword = hasPasswordByte == 1; + if (hasPassword == false && hasPasswordByte != 0) { + throw new IllegalStateException("hasPassword boolean is corrupt: " + + String.format(Locale.ROOT, "%02x", hasPasswordByte)); + } + String type = input.readString(); + String stringKeyAlgo = input.readString(); + final String fileKeyAlgo; + if (formatVersion >= 2) { + fileKeyAlgo = input.readString(); + } else { + fileKeyAlgo = NEW_KEYSTORE_FILE_KEY_ALGO; + } + final Map settingTypes; + if (formatVersion >= 2) { + settingTypes = input.readMapOfStrings().entrySet().stream().collect(Collectors.toMap( + Map.Entry::getKey, + e -> KeyType.valueOf(e.getValue()))); + } else { + settingTypes = new HashMap<>(); + } + byte[] keystoreBytes = new byte[input.readInt()]; + input.readBytes(keystoreBytes, 0, keystoreBytes.length); + CodecUtil.checkFooter(input); + return new KeyStoreWrapper(formatVersion, hasPassword, type, stringKeyAlgo, fileKeyAlgo, settingTypes, keystoreBytes); + } + } + + @Override + public boolean isLoaded() { + return keystore.get() != null; + } + + /** Return true iff calling {@link #decrypt(char[])} requires a non-empty password. */ + public boolean hasPassword() { + return hasPassword; + } + + /** + * Decrypts the underlying java keystore. + * + * This may only be called once. The provided password will be zeroed out. 
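As an editorial aside (not part of this patch): the two-phase contract described above — load first, then decrypt — can be shown with a minimal sketch. The config directory and setting name below are hypothetical; the method names are the ones introduced in KeyStoreWrapper.

```java
// Illustrative sketch only, not part of this patch; setting name is hypothetical.
import java.nio.file.Path;

import org.elasticsearch.common.settings.KeyStoreWrapper;
import org.elasticsearch.common.settings.SecureString;

class KeystoreReadSketch {
    static void readSecret(Path configDir) throws Exception {
        KeyStoreWrapper keystore = KeyStoreWrapper.load(configDir);  // phase 1: read header and encrypted bytes
        if (keystore == null) {
            return;                                                  // no keystore file in this config dir
        }
        keystore.decrypt(new char[0]);                               // phase 2: decrypt (empty password until passwords are supported)
        try (SecureString secret = keystore.getString("some.secure.setting")) {
            // use the secret here; closing it zeroes the underlying char[]
        }
    }
}
```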
+ */ + public void decrypt(char[] password) throws GeneralSecurityException, IOException { + if (keystore.get() != null) { + throw new IllegalStateException("Keystore has already been decrypted"); + } + keystore.set(KeyStore.getInstance(type)); + try (InputStream in = new ByteArrayInputStream(keystoreBytes)) { + keystore.get().load(in, password); + } finally { + Arrays.fill(keystoreBytes, (byte)0); + } + + keystorePassword.set(new KeyStore.PasswordProtection(password)); + Arrays.fill(password, '\0'); + + + Enumeration aliases = keystore.get().aliases(); + if (formatVersion == 1) { + while (aliases.hasMoreElements()) { + settingTypes.put(aliases.nextElement(), KeyType.STRING); + } + } else { + // verify integrity: keys in keystore match what the metadata thinks exist + Set expectedSettings = new HashSet<>(settingTypes.keySet()); + while (aliases.hasMoreElements()) { + String settingName = aliases.nextElement(); + if (expectedSettings.remove(settingName) == false) { + throw new SecurityException("Keystore has been corrupted or tampered with"); + } + } + if (expectedSettings.isEmpty() == false) { + throw new SecurityException("Keystore has been corrupted or tampered with"); + } + } + } + + /** Write the keystore to the given config directory. */ + void save(Path configDir) throws Exception { + char[] password = this.keystorePassword.get().getPassword(); + + SimpleFSDirectory directory = new SimpleFSDirectory(configDir); + // write to tmp file first, then overwrite + String tmpFile = KEYSTORE_FILENAME + ".tmp"; + try (IndexOutput output = directory.createOutput(tmpFile, IOContext.DEFAULT)) { + CodecUtil.writeHeader(output, KEYSTORE_FILENAME, FORMAT_VERSION); + output.writeByte(password.length == 0 ? (byte)0 : (byte)1); + output.writeString(NEW_KEYSTORE_TYPE); + output.writeString(NEW_KEYSTORE_STRING_KEY_ALGO); + output.writeString(NEW_KEYSTORE_FILE_KEY_ALGO); + output.writeMapOfStrings(settingTypes.entrySet().stream().collect(Collectors.toMap( + Map.Entry::getKey, + e -> e.getValue().name()))); + + // TODO: in the future if we ever change any algorithms used above, we need + // to create a new KeyStore here instead of using the existing one, so that + // the encoded material inside the keystore is updated + assert type.equals(NEW_KEYSTORE_TYPE) : "keystore type changed"; + assert stringFactory.getAlgorithm().equals(NEW_KEYSTORE_STRING_KEY_ALGO) : "string pbe algo changed"; + assert fileFactory.getAlgorithm().equals(NEW_KEYSTORE_FILE_KEY_ALGO) : "file pbe algo changed"; + + ByteArrayOutputStream keystoreBytesStream = new ByteArrayOutputStream(); + keystore.get().store(keystoreBytesStream, password); + byte[] keystoreBytes = keystoreBytesStream.toByteArray(); + output.writeInt(keystoreBytes.length); + output.writeBytes(keystoreBytes, keystoreBytes.length); + CodecUtil.writeFooter(output); + } + + Path keystoreFile = keystorePath(configDir); + Files.move(configDir.resolve(tmpFile), keystoreFile, StandardCopyOption.REPLACE_EXISTING, StandardCopyOption.ATOMIC_MOVE); + PosixFileAttributeView attrs = Files.getFileAttributeView(keystoreFile, PosixFileAttributeView.class); + if (attrs != null) { + // don't rely on umask: ensure the keystore has minimal permissions + attrs.setPermissions(PosixFilePermissions.fromString("rw-rw----")); + } + } + + @Override + public Set getSettingNames() { + return settingTypes.keySet(); + } + + // TODO: make settings accessible only to code that registered the setting + @Override + public SecureString getString(String setting) throws GeneralSecurityException { + 
KeyStore.Entry entry = keystore.get().getEntry(setting, keystorePassword.get()); + if (settingTypes.get(setting) != KeyType.STRING || + entry instanceof KeyStore.SecretKeyEntry == false) { + throw new IllegalStateException("Secret setting " + setting + " is not a string"); + } + // TODO: only allow getting a setting once? + KeyStore.SecretKeyEntry secretKeyEntry = (KeyStore.SecretKeyEntry) entry; + PBEKeySpec keySpec = (PBEKeySpec) stringFactory.getKeySpec(secretKeyEntry.getSecretKey(), PBEKeySpec.class); + SecureString value = new SecureString(keySpec.getPassword()); + keySpec.clearPassword(); + return value; + } + + @Override + public InputStream getFile(String setting) throws GeneralSecurityException { + KeyStore.Entry entry = keystore.get().getEntry(setting, keystorePassword.get()); + if (settingTypes.get(setting) != KeyType.FILE || + entry instanceof KeyStore.SecretKeyEntry == false) { + throw new IllegalStateException("Secret setting " + setting + " is not a file"); + } + KeyStore.SecretKeyEntry secretKeyEntry = (KeyStore.SecretKeyEntry) entry; + PBEKeySpec keySpec = (PBEKeySpec) fileFactory.getKeySpec(secretKeyEntry.getSecretKey(), PBEKeySpec.class); + // The PBE keyspec gives us chars, we first convert to bytes, then decode base64 inline. + char[] chars = keySpec.getPassword(); + byte[] bytes = new byte[chars.length]; + for (int i = 0; i < bytes.length; ++i) { + bytes[i] = (byte)chars[i]; // PBE only stores the lower 8 bits, so this narrowing is ok + } + keySpec.clearPassword(); // wipe the original copy + InputStream bytesStream = new ByteArrayInputStream(bytes) { + @Override + public void close() throws IOException { + super.close(); + Arrays.fill(bytes, (byte)0); // wipe our second copy when the stream is exhausted + } + }; + return Base64.getDecoder().wrap(bytesStream); + } + + /** + * Set a string setting. + * + * @throws IllegalArgumentException if the value is not ASCII + */ + void setString(String setting, char[] value) throws GeneralSecurityException { + if (ASCII_ENCODER.canEncode(CharBuffer.wrap(value)) == false) { + throw new IllegalArgumentException("Value must be ascii"); + } + SecretKey secretKey = stringFactory.generateSecret(new PBEKeySpec(value)); + keystore.get().setEntry(setting, new KeyStore.SecretKeyEntry(secretKey), keystorePassword.get()); + settingTypes.put(setting, KeyType.STRING); + } + + /** Set a file setting. */ + void setFile(String setting, byte[] bytes) throws GeneralSecurityException { + bytes = Base64.getEncoder().encode(bytes); + char[] chars = new char[bytes.length]; + for (int i = 0; i < chars.length; ++i) { + chars[i] = (char)bytes[i]; // PBE only stores the lower 8 bits, so this narrowing is ok + } + SecretKey secretKey = stringFactory.generateSecret(new PBEKeySpec(chars)); + keystore.get().setEntry(setting, new KeyStore.SecretKeyEntry(secretKey), keystorePassword.get()); + settingTypes.put(setting, KeyType.FILE); + } + + /** Remove the given setting from the keystore. 
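For the writing side, the keystore CLI subcommands above drive the same methods. A minimal sketch follows, assuming it lives in the org.elasticsearch.common.settings package (create, setString and save are package-private) and using a hypothetical setting name and value:

```java
// Illustrative sketch only, not part of this patch.
// Must sit in org.elasticsearch.common.settings: create(), setString() and save() are package-private.
package org.elasticsearch.common.settings;

import java.nio.file.Path;

class KeystoreWriteSketch {
    static void addSecret(Path configDir) throws Exception {
        KeyStoreWrapper keystore = KeyStoreWrapper.create(new char[0]);     // new keystore with no password
        keystore.setString("some.secure.setting", "s3cr3t".toCharArray());  // only ASCII values are accepted
        keystore.save(configDir);                                           // written atomically into the config directory
    }
}
```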
*/ + void remove(String setting) throws KeyStoreException { + keystore.get().deleteEntry(setting); + settingTypes.remove(setting); + } + + @Override + public void close() throws IOException { + try { + if (keystorePassword.get() != null) { + keystorePassword.get().destroy(); + } + } catch (DestroyFailedException e) { + throw new IOException(e); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/common/settings/ListKeyStoreCommand.java b/core/src/main/java/org/elasticsearch/common/settings/ListKeyStoreCommand.java new file mode 100644 index 0000000000000..8eef02f213189 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/settings/ListKeyStoreCommand.java @@ -0,0 +1,58 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.settings; + + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; + +import joptsimple.OptionSet; +import org.elasticsearch.cli.EnvironmentAwareCommand; +import org.elasticsearch.cli.ExitCodes; +import org.elasticsearch.cli.Terminal; +import org.elasticsearch.cli.UserException; +import org.elasticsearch.env.Environment; + +/** + * A subcommand for the keystore cli to list all settings in the keystore. + */ +class ListKeyStoreCommand extends EnvironmentAwareCommand { + + ListKeyStoreCommand() { + super("List entries in the keystore"); + } + + @Override + protected void execute(Terminal terminal, OptionSet options, Environment env) throws Exception { + KeyStoreWrapper keystore = KeyStoreWrapper.load(env.configFile()); + if (keystore == null) { + throw new UserException(ExitCodes.DATA_ERROR, "Elasticsearch keystore not found. Use 'create' command to create one."); + } + + keystore.decrypt(new char[0] /* TODO: prompt for password when they are supported */); + + List sortedEntries = new ArrayList<>(keystore.getSettingNames()); + Collections.sort(sortedEntries); + for (String entry : sortedEntries) { + terminal.println(entry); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/common/settings/RemoveSettingKeyStoreCommand.java b/core/src/main/java/org/elasticsearch/common/settings/RemoveSettingKeyStoreCommand.java new file mode 100644 index 0000000000000..c766dd769e22b --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/settings/RemoveSettingKeyStoreCommand.java @@ -0,0 +1,66 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.settings; + +import java.util.List; + +import joptsimple.OptionSet; +import joptsimple.OptionSpec; +import org.elasticsearch.cli.EnvironmentAwareCommand; +import org.elasticsearch.cli.ExitCodes; +import org.elasticsearch.cli.Terminal; +import org.elasticsearch.cli.UserException; +import org.elasticsearch.env.Environment; + +/** + * A subcommand for the keystore cli to remove a setting. + */ +class RemoveSettingKeyStoreCommand extends EnvironmentAwareCommand { + + private final OptionSpec arguments; + + RemoveSettingKeyStoreCommand() { + super("Remove a setting from the keystore"); + arguments = parser.nonOptions("setting names"); + } + + @Override + protected void execute(Terminal terminal, OptionSet options, Environment env) throws Exception { + List settings = arguments.values(options); + if (settings.isEmpty()) { + throw new UserException(ExitCodes.USAGE, "Must supply at least one setting to remove"); + } + + KeyStoreWrapper keystore = KeyStoreWrapper.load(env.configFile()); + if (keystore == null) { + throw new UserException(ExitCodes.DATA_ERROR, "Elasticsearch keystore not found. Use 'create' command to create one."); + } + + keystore.decrypt(new char[0] /* TODO: prompt for password when they are supported */); + + for (String setting : arguments.values(options)) { + if (keystore.getSettingNames().contains(setting) == false) { + throw new UserException(ExitCodes.CONFIG, "Setting [" + setting + "] does not exist in the keystore."); + } + keystore.remove(setting); + } + keystore.save(env.configFile()); + } +} diff --git a/core/src/main/java/org/elasticsearch/common/settings/SecureSetting.java b/core/src/main/java/org/elasticsearch/common/settings/SecureSetting.java new file mode 100644 index 0000000000000..07ff3e03f4de8 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/settings/SecureSetting.java @@ -0,0 +1,177 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.settings; + +import java.io.InputStream; +import java.security.GeneralSecurityException; +import java.util.EnumSet; +import java.util.Set; + +import org.elasticsearch.common.util.ArrayUtils; + + +/** + * A secure setting. + * + * This class allows access to settings from the Elasticsearch keystore. 
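To show how a consumer of this new API might declare and read such a setting, here is a minimal sketch (not part of this patch); the setting key is hypothetical, and the names follow the factory methods added below:

```java
// Illustrative sketch only, not part of this patch; the setting key is hypothetical.
import org.elasticsearch.common.settings.SecureSetting;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;

class SecureSettingSketch {
    // no fallback setting: the value may only come from the keystore, never from elasticsearch.yml
    static final Setting<SecureString> PASSWORD =
            SecureSetting.secureString("my.plugin.password", null);

    static void use(Settings nodeSettings) {
        try (SecureString password = PASSWORD.get(nodeSettings)) {
            // read the chars while the SecureString is open; it is zeroed on close
        }
    }
}
```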
+ */ +public abstract class SecureSetting extends Setting { + private static final Set ALLOWED_PROPERTIES = EnumSet.of(Property.Deprecated, Property.Shared); + + private static final Property[] FIXED_PROPERTIES = { + Property.NodeScope + }; + + private static final Property[] LEGACY_PROPERTIES = { + Property.NodeScope, Property.Deprecated, Property.Filtered + }; + + private SecureSetting(String key, Property... properties) { + super(key, (String)null, null, ArrayUtils.concat(properties, FIXED_PROPERTIES, Property.class)); + assert assertAllowedProperties(properties); + } + + private boolean assertAllowedProperties(Setting.Property... properties) { + for (Setting.Property property : properties) { + if (ALLOWED_PROPERTIES.contains(property) == false) { + return false; + } + } + return true; + } + + @Override + public String getDefaultRaw(Settings settings) { + throw new UnsupportedOperationException("secure settings are not strings"); + } + + @Override + public T getDefault(Settings settings) { + throw new UnsupportedOperationException("secure settings are not strings"); + } + + @Override + public String getRaw(Settings settings) { + throw new UnsupportedOperationException("secure settings are not strings"); + } + + @Override + public boolean exists(Settings settings) { + final SecureSettings secureSettings = settings.getSecureSettings(); + return secureSettings != null && secureSettings.getSettingNames().contains(getKey()); + } + + @Override + public T get(Settings settings) { + checkDeprecation(settings); + final SecureSettings secureSettings = settings.getSecureSettings(); + if (secureSettings == null || secureSettings.getSettingNames().contains(getKey()) == false) { + if (super.exists(settings)) { + throw new IllegalArgumentException("Setting [" + getKey() + "] is a secure setting" + + " and must be stored inside the Elasticsearch keystore, but was found inside elasticsearch.yml"); + } + return getFallback(settings); + } + try { + return getSecret(secureSettings); + } catch (GeneralSecurityException e) { + throw new RuntimeException("failed to read secure setting " + getKey(), e); + } + } + + /** Returns the secret setting from the keyStoreReader store. */ + abstract T getSecret(SecureSettings secureSettings) throws GeneralSecurityException; + + /** Returns the value from a fallback setting. Returns null if no fallback exists. */ + abstract T getFallback(Settings settings); + + // TODO: override toXContent + + /** + * Overrides the diff operation to make this a no-op for secure settings as they shouldn't be returned in a diff + */ + @Override + public void diff(Settings.Builder builder, Settings source, Settings defaultSettings) { + } + + /** + * A setting which contains a sensitive string. + * + * This may be any sensitive string, e.g. a username, a password, an auth token, etc. + */ + public static Setting secureString(String name, Setting fallback, + Property... properties) { + return new SecureStringSetting(name, fallback, properties); + } + + /** + * A setting which contains a file. Reading the setting opens an input stream to the file. + * + * This may be any sensitive file, e.g. a set of credentials normally in plaintext. + */ + public static Setting secureFile(String name, Setting fallback, + Property... properties) { + return new SecureFileSetting(name, fallback, properties); + } + + private static class SecureStringSetting extends SecureSetting { + private final Setting fallback; + + private SecureStringSetting(String name, Setting fallback, Property... 
properties) { + super(name, properties); + this.fallback = fallback; + } + + @Override + protected SecureString getSecret(SecureSettings secureSettings) throws GeneralSecurityException { + return secureSettings.getString(getKey()); + } + + @Override + SecureString getFallback(Settings settings) { + if (fallback != null) { + return fallback.get(settings); + } + return new SecureString(new char[0]); // this means "setting does not exist" + } + } + + private static class SecureFileSetting extends SecureSetting { + private final Setting fallback; + + private SecureFileSetting(String name, Setting fallback, Property... properties) { + super(name, properties); + this.fallback = fallback; + } + + @Override + protected InputStream getSecret(SecureSettings secureSettings) throws GeneralSecurityException { + return secureSettings.getFile(getKey()); + } + + @Override + InputStream getFallback(Settings settings) { + if (fallback != null) { + return fallback.get(settings); + } + return null; + } + } +} diff --git a/core/src/main/java/org/elasticsearch/common/settings/SecureSettings.java b/core/src/main/java/org/elasticsearch/common/settings/SecureSettings.java new file mode 100644 index 0000000000000..98f980c1ec6c8 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/settings/SecureSettings.java @@ -0,0 +1,47 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.settings; + +import java.io.Closeable; +import java.io.IOException; +import java.io.InputStream; +import java.security.GeneralSecurityException; +import java.util.Set; + +/** + * An accessor for settings which are securely stored. See {@link SecureSetting}. + */ +public interface SecureSettings extends Closeable { + + /** Returns true iff the settings are loaded and retrievable. */ + boolean isLoaded(); + + /** Returns the names of all secure settings available. */ + Set getSettingNames(); + + /** Return a string setting. The {@link SecureString} should be closed once it is used. */ + SecureString getString(String setting) throws GeneralSecurityException; + + /** Return a file setting. The {@link InputStream} should be closed once it is used. */ + InputStream getFile(String setting) throws GeneralSecurityException; + + @Override + void close() throws IOException; +} diff --git a/core/src/main/java/org/elasticsearch/common/settings/SecureString.java b/core/src/main/java/org/elasticsearch/common/settings/SecureString.java new file mode 100644 index 0000000000000..85c4c566db149 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/settings/SecureString.java @@ -0,0 +1,150 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.settings; + +import java.io.Closeable; +import java.util.Arrays; +import java.util.Objects; + +/** + * A String implementations which allows clearing the underlying char array. + */ +public final class SecureString implements CharSequence, Closeable { + + private char[] chars; + + /** + * Constructs a new SecureString which controls the passed in char array. + * + * Note: When this instance is closed, the array will be zeroed out. + */ + public SecureString(char[] chars) { + this.chars = Objects.requireNonNull(chars); + } + + /** + * Constructs a new SecureString from an existing String. + * + * NOTE: This is not actually secure, since the provided String cannot be deallocated, but + * this constructor allows for easy compatibility between new and old apis. + * + * @deprecated Only use for compatibility between deprecated string settings and new secure strings + */ + @Deprecated + public SecureString(String s) { + this(s.toCharArray()); + } + + /** Constant time equality to avoid potential timing attacks. */ + @Override + public synchronized boolean equals(Object o) { + ensureNotClosed(); + if (this == o) return true; + if (o == null || o instanceof CharSequence == false) return false; + CharSequence that = (CharSequence) o; + if (chars.length != that.length()) { + return false; + } + + int equals = 0; + for (int i = 0; i < chars.length; i++) { + equals |= chars[i] ^ that.charAt(i); + } + + return equals == 0; + } + + @Override + public synchronized int hashCode() { + return Arrays.hashCode(chars); + } + + @Override + public synchronized int length() { + ensureNotClosed(); + return chars.length; + } + + @Override + public synchronized char charAt(int index) { + ensureNotClosed(); + return chars[index]; + } + + @Override + public SecureString subSequence(int start, int end) { + throw new UnsupportedOperationException("Cannot get subsequence of SecureString"); + } + + /** + * Convert to a {@link String}. This should only be used with APIs that do not take {@link CharSequence}. + */ + @Override + public synchronized String toString() { + return new String(chars); + } + + /** + * Closes the string by clearing the underlying char array. + */ + @Override + public synchronized void close() { + if (chars != null) { + Arrays.fill(chars, '\0'); + chars = null; + } + } + + /** + * Returns a new copy of this object that is backed by its own char array. Closing the new instance has no effect on the instance it + * was created from. This is useful for APIs which accept a char array and you want to be safe about the API potentially modifying the + * char array. For example: + * + *
    +     *     try (SecureString copy = secureString.clone()) {
    +     *         // pass the char[] to an external API
    +     *         PasswordAuthentication auth = new PasswordAuthentication(username, copy.getChars());
    +     *         ...
    +     *     }
    +     * 
    + */ + @Override + public synchronized SecureString clone() { + ensureNotClosed(); + return new SecureString(Arrays.copyOf(chars, chars.length)); + } + + /** + * Returns the underlying char[]. This is a dangerous operation as the array may be modified while it is being used by other threads + * or a consumer may modify the values in the array. For safety, it is preferable to use {@link #clone()} and pass its chars to the + * consumer when the chars are needed multiple times. + */ + public synchronized char[] getChars() { + ensureNotClosed(); + return chars; + } + + /** Throw an exception if this string has been closed, indicating something is trying to access the data after being closed. */ + private void ensureNotClosed() { + if (chars == null) { + throw new IllegalStateException("SecureString has already been closed"); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/common/settings/Setting.java b/core/src/main/java/org/elasticsearch/common/settings/Setting.java index c22e12b3ce899..89a312092eb09 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/Setting.java +++ b/core/src/main/java/org/elasticsearch/common/settings/Setting.java @@ -32,6 +32,7 @@ import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.MemorySizeValue; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; @@ -41,14 +42,17 @@ import java.util.ArrayList; import java.util.Arrays; import java.util.EnumSet; +import java.util.IdentityHashMap; import java.util.List; import java.util.Map; import java.util.Objects; import java.util.function.BiConsumer; import java.util.function.Consumer; import java.util.function.Function; +import java.util.regex.Matcher; import java.util.regex.Pattern; import java.util.stream.Collectors; +import java.util.stream.Stream; /** * A setting. Encapsulates typical stuff like default value, parsing, and scope. @@ -91,6 +95,12 @@ public enum Property { */ Dynamic, + /** + * mark this setting as final, not updateable even when the context is not dynamic + * ie. Setting this property on an index scoped setting will fail update when the index is closed + */ + Final, + /** * mark this setting as deprecated */ @@ -118,7 +128,8 @@ public enum Property { private Setting(Key key, @Nullable Setting fallbackSetting, Function defaultValue, Function parser, Property... properties) { - assert parser.apply(defaultValue.apply(Settings.EMPTY)) != null || this.isGroupSetting(): "parser returned null"; + assert this instanceof SecureSetting || this.isGroupSetting() || parser.apply(defaultValue.apply(Settings.EMPTY)) != null + : "parser returned null"; this.key = key; this.fallbackSetting = fallbackSetting; this.defaultValue = defaultValue; @@ -130,6 +141,9 @@ private Setting(Key key, @Nullable Setting fallbackSetting, Functiontrue if this setting is final, otherwise false + */ + public final boolean isFinal() { + return properties.contains(Property.Final); + } + /** * Returns the setting properties * @see Property @@ -269,11 +290,18 @@ boolean hasComplexMatcher() { return isGroupSetting(); } + /** + * Returns true iff this setting is a list setting. + */ + boolean isListSetting() { + return false; + } + /** * Returns the default value string representation for this setting. 
* @param settings a settings object for settings that has a default value depending on another setting if available */ - public final String getDefaultRaw(Settings settings) { + public String getDefaultRaw(Settings settings) { return defaultValue.apply(settings); } @@ -281,7 +309,7 @@ public final String getDefaultRaw(Settings settings) { * Returns the default value for this setting. * @param settings a settings object for settings that has a default value depending on another setting if available */ - public final T getDefault(Settings settings) { + public T getDefault(Settings settings) { return parser.apply(getDefaultRaw(settings)); } @@ -289,7 +317,7 @@ public final T getDefault(Settings settings) { * Returns true iff this setting is present in the given settings object. Otherwise false */ public boolean exists(Settings settings) { - return settings.get(getKey()) != null; + return settings.getAsMap().containsKey(getKey()); } /** @@ -311,19 +339,40 @@ public T get(Settings settings) { } } + /** + * Add this setting to the builder if it doesn't exists in the source settings. + * The value added to the builder is taken from the given default settings object. + * @param builder the settings builder to fill the diff into + * @param source the source settings object to diff + * @param defaultSettings the default settings object to diff against + */ + public void diff(Settings.Builder builder, Settings source, Settings defaultSettings) { + if (exists(source) == false) { + builder.put(getKey(), getRaw(defaultSettings)); + } + } + /** * Returns the raw (string) settings value. If the setting is not present in the given settings object the default value is returned * instead. This is useful if the value can't be parsed due to an invalid value to access the actual value. */ public String getRaw(Settings settings) { + checkDeprecation(settings); + return settings.get(getKey(), defaultValue.apply(settings)); + } + + /** Logs a deprecation warning if the setting is deprecated and used. */ + void checkDeprecation(Settings settings) { // They're using the setting, so we need to tell them to stop if (this.isDeprecated() && this.exists(settings)) { // It would be convenient to show its replacement key, but replacement is often not so simple - final DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(getClass())); - deprecationLogger.deprecated("[{}] setting was deprecated in Elasticsearch and it will be removed in a future release! " + - "See the breaking changes lists in the documentation for details", getKey()); + final String key = getKey(); + Settings.DeprecationLoggerHolder.deprecationLogger.deprecatedAndMaybeLog( + key, + "[{}] setting was deprecated in Elasticsearch and will be removed in a future release! " + + "See the breaking changes documentation for the next major version.", + key); } - return settings.get(getKey(), defaultValue.apply(settings)); } /** @@ -353,12 +402,12 @@ public final T get(Settings primary, Settings secondary) { if (exists(primary)) { return get(primary); } - if (fallbackSetting == null) { - return get(secondary); - } if (exists(secondary)) { return get(secondary); } + if (fallbackSetting == null) { + return get(primary); + } if (fallbackSetting.exists(primary)) { return fallbackSetting.get(primary); } @@ -391,8 +440,8 @@ AbstractScopedSettings.SettingUpdater newUpdater(Consumer consumer, Logger } /** - * Updates settings that depend on eachother. 
See {@link AbstractScopedSettings#addSettingsUpdateConsumer(Setting, Setting, BiConsumer)} - * and its usage for details. + * Updates settings that depend on each other. + * See {@link AbstractScopedSettings#addSettingsUpdateConsumer(Setting, Setting, BiConsumer)} and its usage for details. */ static AbstractScopedSettings.SettingUpdater> compoundUpdater(final BiConsumer consumer, final Setting aSetting, final Setting bSetting, Logger logger) { @@ -411,6 +460,12 @@ public Tuple getValue(Settings current, Settings previous) { @Override public void apply(Tuple value, Settings current, Settings previous) { + if (aSettingUpdater.hasChanged(current, previous)) { + logger.info("updating [{}] from [{}] to [{}]", aSetting.key, aSetting.getRaw(previous), aSetting.getRaw(current)); + } + if (bSettingUpdater.hasChanged(current, previous)) { + logger.info("updating [{}] from [{}] to [{}]", bSetting.key, bSetting.getRaw(previous), bSetting.getRaw(current)); + } consumer.accept(value.v1(), value.v2()); } @@ -421,13 +476,266 @@ public String toString() { }; } + public static class AffixSetting extends Setting { + private final AffixKey key; + private final Function> delegateFactory; + + public AffixSetting(AffixKey key, Setting delegate, Function> delegateFactory) { + super(key, delegate.defaultValue, delegate.parser, delegate.properties.toArray(new Property[0])); + this.key = key; + this.delegateFactory = delegateFactory; + } + + boolean isGroupSetting() { + return true; + } + + private Stream matchStream(Settings settings) { + return settings.getAsMap().keySet().stream().filter((key) -> match(key)).map(settingKey -> key.getConcreteString(settingKey)); + } + + AbstractScopedSettings.SettingUpdater, T>> newAffixUpdater( + BiConsumer consumer, Logger logger, BiConsumer validator) { + return new AbstractScopedSettings.SettingUpdater, T>>() { + + @Override + public boolean hasChanged(Settings current, Settings previous) { + return Stream.concat(matchStream(current), matchStream(previous)).findAny().isPresent(); + } + + @Override + public Map, T> getValue(Settings current, Settings previous) { + // we collect all concrete keys and then delegate to the actual setting for validation and settings extraction + final Map, T> result = new IdentityHashMap<>(); + Stream.concat(matchStream(current), matchStream(previous)).distinct().forEach(aKey -> { + String namespace = key.getNamespace(aKey); + AbstractScopedSettings.SettingUpdater updater = + getConcreteSetting(aKey).newUpdater((v) -> consumer.accept(namespace, v), logger, + (v) -> validator.accept(namespace, v)); + if (updater.hasChanged(current, previous)) { + // only the ones that have changed otherwise we might get too many updates + // the hasChanged above checks only if there are any changes + T value = updater.getValue(current, previous); + result.put(updater, value); + } + }); + return result; + } + + @Override + public void apply(Map, T> value, Settings current, Settings previous) { + for (Map.Entry, T> entry : value.entrySet()) { + entry.getKey().apply(entry.getValue(), current, previous); + } + } + }; + } + + @Override + public T get(Settings settings) { + throw new UnsupportedOperationException("affix settings can't return values" + + " use #getConcreteSetting to obtain a concrete setting"); + } + + @Override + public String getRaw(Settings settings) { + throw new UnsupportedOperationException("affix settings can't return values" + + " use #getConcreteSetting to obtain a concrete setting"); + } + + @Override + public Setting getConcreteSetting(String 
key) { + if (match(key)) { + return delegateFactory.apply(key); + } else { + throw new IllegalArgumentException("key [" + key + "] must match [" + getKey() + "] but didn't."); + } + } + + /** + * Get a setting with the given namespace filled in for prefix and suffix. + */ + public Setting getConcreteSettingForNamespace(String namespace) { + String fullKey = key.toConcreteKey(namespace).toString(); + return getConcreteSetting(fullKey); + } + + @Override + public void diff(Settings.Builder builder, Settings source, Settings defaultSettings) { + matchStream(defaultSettings).forEach((key) -> getConcreteSetting(key).diff(builder, source, defaultSettings)); + } + + /** + * Returns the namespace for a concrete settting. Ie. an affix setting with prefix: search. and suffix: username + * will return remote as a namespace for the setting search.remote.username + */ + public String getNamespace(Setting concreteSetting) { + return key.getNamespace(concreteSetting.getKey()); + } + + /** + * Returns a stream of all concrete setting instances for the given settings. AffixSetting is only a specification, concrete + * settings depend on an actual set of setting keys. + */ + public Stream> getAllConcreteSettings(Settings settings) { + return matchStream(settings).distinct().map(this::getConcreteSetting); + } + } + + + private static class GroupSetting extends Setting { + private final String key; + private final Consumer validator; + + private GroupSetting(String key, Consumer validator, Property... properties) { + super(new GroupKey(key), (s) -> "", (s) -> null, properties); + this.key = key; + this.validator = validator; + } + + @Override + public boolean isGroupSetting() { + return true; + } + + @Override + public String getRaw(Settings settings) { + Settings subSettings = get(settings); + try { + XContentBuilder builder = XContentFactory.jsonBuilder(); + builder.startObject(); + subSettings.toXContent(builder, EMPTY_PARAMS); + builder.endObject(); + return builder.string(); + } catch (IOException e) { + throw new RuntimeException(e); + } + } + + @Override + public Settings get(Settings settings) { + Settings byPrefix = settings.getByPrefix(getKey()); + validator.accept(byPrefix); + return byPrefix; + } + + @Override + public boolean exists(Settings settings) { + for (Map.Entry entry : settings.getAsMap().entrySet()) { + if (entry.getKey().startsWith(key)) { + return true; + } + } + return false; + } + + @Override + public void diff(Settings.Builder builder, Settings source, Settings defaultSettings) { + Map leftGroup = get(source).getAsMap(); + Settings defaultGroup = get(defaultSettings); + for (Map.Entry entry : defaultGroup.getAsMap().entrySet()) { + if (leftGroup.containsKey(entry.getKey()) == false) { + builder.put(getKey() + entry.getKey(), entry.getValue()); + } + } + } + + @Override + public AbstractScopedSettings.SettingUpdater newUpdater(Consumer consumer, Logger logger, + Consumer validator) { + if (isDynamic() == false) { + throw new IllegalStateException("setting [" + getKey() + "] is not dynamic"); + } + final Setting setting = this; + return new AbstractScopedSettings.SettingUpdater() { + + @Override + public boolean hasChanged(Settings current, Settings previous) { + Settings currentSettings = get(current); + Settings previousSettings = get(previous); + return currentSettings.equals(previousSettings) == false; + } + + @Override + public Settings getValue(Settings current, Settings previous) { + Settings currentSettings = get(current); + Settings previousSettings = get(previous); + try { 
+ validator.accept(currentSettings); + } catch (Exception | AssertionError e) { + throw new IllegalArgumentException("illegal value can't update [" + key + "] from [" + + previousSettings.getAsMap() + "] to [" + currentSettings.getAsMap() + "]", e); + } + return currentSettings; + } + + @Override + public void apply(Settings value, Settings current, Settings previous) { + if (logger.isInfoEnabled()) { // getRaw can create quite some objects + logger.info("updating [{}] from [{}] to [{}]", key, getRaw(previous), getRaw(current)); + } + consumer.accept(value); + } + + @Override + public String toString() { + return "Updater for: " + setting.toString(); + } + }; + } + } + + private static class ListSetting extends Setting> { + private final Function> defaultStringValue; + + private ListSetting(String key, Function> defaultStringValue, Function> parser, + Property... properties) { + super(new ListKey(key), (s) -> Setting.arrayToParsableString(defaultStringValue.apply(s).toArray(Strings.EMPTY_ARRAY)), parser, + properties); + this.defaultStringValue = defaultStringValue; + } + + @Override + public String getRaw(Settings settings) { + String[] array = settings.getAsArray(getKey(), null); + return array == null ? defaultValue.apply(settings) : arrayToParsableString(array); + } + + @Override + boolean hasComplexMatcher() { + return true; + } + + @Override + boolean isListSetting() { + return true; + } + + @Override + public boolean exists(Settings settings) { + boolean exists = super.exists(settings); + return exists || settings.get(getKey() + ".0") != null; + } + + @Override + public void diff(Settings.Builder builder, Settings source, Settings defaultSettings) { + if (exists(source) == false) { + String[] asArray = defaultSettings.getAsArray(getKey(), null); + if (asArray == null) { + builder.putArray(getKey(), defaultStringValue.apply(defaultSettings)); + } else { + builder.putArray(getKey(), asArray); + } + } + } + } private final class Updater implements AbstractScopedSettings.SettingUpdater { private final Consumer consumer; private final Logger logger; private final Consumer accept; - public Updater(Consumer consumer, Logger logger, Consumer accept) { + Updater(Consumer consumer, Logger logger, Consumer accept) { this.consumer = consumer; this.logger = logger; this.accept = accept; @@ -540,15 +848,25 @@ public static Setting intSetting(String key, int defaultValue, Property } public static Setting boolSetting(String key, boolean defaultValue, Property... properties) { - return new Setting<>(key, (s) -> Boolean.toString(defaultValue), Booleans::parseBooleanExact, properties); + return new Setting<>(key, (s) -> Boolean.toString(defaultValue), (value) -> parseBoolean(key, value), properties); } public static Setting boolSetting(String key, Setting fallbackSetting, Property... properties) { - return new Setting<>(key, fallbackSetting, Booleans::parseBooleanExact, properties); + return new Setting<>(key, fallbackSetting, (value) -> parseBoolean(key, value), properties); } public static Setting boolSetting(String key, Function defaultValueFn, Property... 
properties) { - return new Setting<>(key, defaultValueFn, Booleans::parseBooleanExact, properties); + return new Setting<>(key, defaultValueFn, (value) -> parseBoolean(key, value), properties); + } + + private static Boolean parseBoolean(String key, String value) { + // let the parser handle all cases for non-proper booleans without a deprecation warning by throwing IAE + boolean booleanValue = Booleans.parseBooleanExact(value); + if (Booleans.isStrictlyBoolean(value) == false) { + DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(Setting.class)); + deprecationLogger.deprecated("Expected a boolean [true/false] for setting [{}] but got [{}]", key, value); + } + return booleanValue; } public static Setting byteSizeSetting(String key, ByteSizeValue value, Property... properties) { @@ -578,10 +896,10 @@ public static Setting byteSizeSetting(String key, Function= " + minValue); } - if (value.bytes() > maxValue.bytes()) { + if (value.getBytes() > maxValue.getBytes()) { throw new IllegalArgumentException("Failed to parse value [" + s + "] for setting [" + key + "] must be <= " + maxValue); } return value; @@ -591,9 +909,9 @@ public static ByteSizeValue parseByteSize(String s, ByteSizeValue minValue, Byte * Creates a setting which specifies a memory size. This can either be * specified as an absolute bytes value or as a percentage of the heap * memory. - * + * * @param key the key for the setting - * @param defaultValue the default value for this setting + * @param defaultValue the default value for this setting * @param properties properties properties for this setting like scope, filtering... * @return the setting object */ @@ -606,9 +924,9 @@ public static Setting memorySizeSetting(String key, ByteSizeValue * Creates a setting which specifies a memory size. This can either be * specified as an absolute bytes value or as a percentage of the heap * memory. - * + * * @param key the key for the setting - * @param defaultValue a function that supplies the default value for this setting + * @param defaultValue a function that supplies the default value for this setting * @param properties properties properties for this setting like scope, filtering... * @return the setting object */ @@ -620,7 +938,7 @@ public static Setting memorySizeSetting(String key, Function memorySizeSetting(String key, String defaul return new Setting<>(key, (s) -> defaultPercentage, (s) -> MemorySizeValue.parseBytesSizeValueOrHeapRatio(s, key), properties); } - public static Setting positiveTimeSetting(String key, TimeValue defaultValue, Property... properties) { - return timeSetting(key, defaultValue, TimeValue.timeValueMillis(0), properties); - } - public static Setting> listSetting(String key, List defaultStringValue, Function singleValueParser, Property... properties) { return listSetting(key, (s) -> defaultStringValue, singleValueParser, properties); @@ -647,32 +961,18 @@ public static Setting> listSetting(String key, Setting> fall public static Setting> listSetting(String key, Function> defaultStringValue, Function singleValueParser, Property... 
properties) { + if (defaultStringValue.apply(Settings.EMPTY) == null) { + throw new IllegalArgumentException("default value function must not return null"); + } Function> parser = (s) -> parseableStringToList(s).stream().map(singleValueParser).collect(Collectors.toList()); - return new Setting>(new ListKey(key), - (s) -> arrayToParsableString(defaultStringValue.apply(s).toArray(Strings.EMPTY_ARRAY)), parser, properties) { - @Override - public String getRaw(Settings settings) { - String[] array = settings.getAsArray(getKey(), null); - return array == null ? defaultValue.apply(settings) : arrayToParsableString(array); - } - - @Override - boolean hasComplexMatcher() { - return true; - } - - @Override - public boolean exists(Settings settings) { - boolean exists = super.exists(settings); - return exists || settings.get(getKey() + ".0") != null; - } - }; + return new ListSetting<>(key, defaultStringValue, parser, properties); } private static List parseableStringToList(String parsableString) { - try (XContentParser xContentParser = XContentType.JSON.xContent().createParser(parsableString)) { + // EMPTY is safe here because we never call namedObject + try (XContentParser xContentParser = XContentType.JSON.xContent().createParser(NamedXContentRegistry.EMPTY, parsableString)) { XContentParser.Token token = xContentParser.nextToken(); if (token != XContentParser.Token.START_ARRAY) { throw new IllegalArgumentException("expected START_ARRAY but got " + token); @@ -690,7 +990,6 @@ private static List parseableStringToList(String parsableString) { } } - private static String arrayToParsableString(String[] array) { try { XContentBuilder builder = XContentBuilder.builder(XContentType.JSON.xContent()); @@ -704,94 +1003,18 @@ private static String arrayToParsableString(String[] array) { throw new ElasticsearchException(ex); } } + public static Setting groupSetting(String key, Property... properties) { return groupSetting(key, (s) -> {}, properties); } - public static Setting groupSetting(String key, Consumer validator, Property... 
properties) { - return new Setting(new GroupKey(key), (s) -> "", (s) -> null, properties) { - @Override - public boolean isGroupSetting() { - return true; - } - - @Override - public String getRaw(Settings settings) { - Settings subSettings = get(settings); - try { - XContentBuilder builder = XContentFactory.jsonBuilder(); - builder.startObject(); - subSettings.toXContent(builder, EMPTY_PARAMS); - builder.endObject(); - return builder.string(); - } catch (IOException e) { - throw new RuntimeException(e); - } - } - - @Override - public Settings get(Settings settings) { - Settings byPrefix = settings.getByPrefix(getKey()); - validator.accept(byPrefix); - return byPrefix; - } - - @Override - public boolean exists(Settings settings) { - for (Map.Entry entry : settings.getAsMap().entrySet()) { - if (entry.getKey().startsWith(key)) { - return true; - } - } - return false; - } - - @Override - public AbstractScopedSettings.SettingUpdater newUpdater(Consumer consumer, Logger logger, - Consumer validator) { - if (isDynamic() == false) { - throw new IllegalStateException("setting [" + getKey() + "] is not dynamic"); - } - final Setting setting = this; - return new AbstractScopedSettings.SettingUpdater() { - - @Override - public boolean hasChanged(Settings current, Settings previous) { - Settings currentSettings = get(current); - Settings previousSettings = get(previous); - return currentSettings.equals(previousSettings) == false; - } - - @Override - public Settings getValue(Settings current, Settings previous) { - Settings currentSettings = get(current); - Settings previousSettings = get(previous); - try { - validator.accept(currentSettings); - } catch (Exception | AssertionError e) { - throw new IllegalArgumentException("illegal value can't update [" + key + "] from [" - + previousSettings.getAsMap() + "] to [" + currentSettings.getAsMap() + "]", e); - } - return currentSettings; - } - @Override - public void apply(Settings value, Settings current, Settings previous) { - logger.info("updating [{}] from [{}] to [{}]", key, getRaw(previous), getRaw(current)); - consumer.accept(value); - } - - @Override - public String toString() { - return "Updater for: " + setting.toString(); - } - }; - } - }; + public static Setting groupSetting(String key, Consumer validator, Property... properties) { + return new GroupSetting(key, validator, properties); } - public static Setting timeSetting(String key, Function defaultValue, TimeValue minValue, + public static Setting timeSetting(String key, Function defaultValue, TimeValue minValue, Property... properties) { - return new Setting<>(key, defaultValue, (s) -> { + return new Setting<>(key, (s) -> defaultValue.apply(s).getStringRep(), (s) -> { TimeValue timeValue = TimeValue.parseTimeValue(s, null, key); if (timeValue.millis() < minValue.millis()) { throw new IllegalArgumentException("Failed to parse value [" + s + "] for setting [" + key + "] must be >= " + minValue); @@ -801,17 +1024,21 @@ public static Setting timeSetting(String key, Function timeSetting(String key, TimeValue defaultValue, TimeValue minValue, Property... properties) { - return timeSetting(key, (s) -> defaultValue.getStringRep(), minValue, properties); + return timeSetting(key, (s) -> defaultValue, minValue, properties); } public static Setting timeSetting(String key, TimeValue defaultValue, Property... 
properties) { - return new Setting<>(key, (s) -> defaultValue.toString(), (s) -> TimeValue.parseTimeValue(s, key), properties); + return new Setting<>(key, (s) -> defaultValue.getStringRep(), (s) -> TimeValue.parseTimeValue(s, key), properties); } public static Setting timeSetting(String key, Setting fallbackSetting, Property... properties) { return new Setting<>(key, fallbackSetting, (s) -> TimeValue.parseTimeValue(s, key), properties); } + public static Setting positiveTimeSetting(String key, TimeValue defaultValue, Property... properties) { + return timeSetting(key, defaultValue, TimeValue.timeValueMillis(0), properties); + } + public static Setting doubleSetting(String key, double defaultValue, double minValue, Property... properties) { return new Setting<>(key, (s) -> Double.toString(defaultValue), (s) -> { final double d = Double.parseDouble(s); @@ -840,50 +1067,24 @@ public int hashCode() { * can easily be added with this setting. Yet, prefix key settings don't support updaters out of the box unless * {@link #getConcreteSetting(String)} is used to pull the updater. */ - public static Setting prefixKeySetting(String prefix, String defaultValue, Function parser, - Property... properties) { - return affixKeySetting(AffixKey.withPrefix(prefix), (s) -> defaultValue, parser, properties); + public static AffixSetting prefixKeySetting(String prefix, Function> delegateFactory) { + return affixKeySetting(new AffixKey(prefix), delegateFactory); } /** * This setting type allows to validate settings that have the same type and a common prefix and suffix. For instance - * storage.${backend}.enable=[true|false] can easily be added with this setting. Yet, adfix key settings don't support updaters + * storage.${backend}.enable=[true|false] can easily be added with this setting. Yet, affix key settings don't support updaters * out of the box unless {@link #getConcreteSetting(String)} is used to pull the updater. */ - public static Setting adfixKeySetting(String prefix, String suffix, Function defaultValue, - Function parser, Property... properties) { - return affixKeySetting(AffixKey.withAdfix(prefix, suffix), defaultValue, parser, properties); - } - - public static Setting adfixKeySetting(String prefix, String suffix, String defaultValue, Function parser, - Property... properties) { - return adfixKeySetting(prefix, suffix, (s) -> defaultValue, parser, properties); + public static AffixSetting affixKeySetting(String prefix, String suffix, Function> delegateFactory) { + return affixKeySetting(new AffixKey(prefix, suffix), delegateFactory); } - public static Setting affixKeySetting(AffixKey key, Function defaultValue, Function parser, - Property... properties) { - return new Setting(key, defaultValue, parser, properties) { + private static AffixSetting affixKeySetting(AffixKey key, Function> delegateFactory) { + Setting delegate = delegateFactory.apply("_na_"); + return new AffixSetting<>(key, delegate, delegateFactory); + }; - @Override - boolean isGroupSetting() { - return true; - } - - @Override - AbstractScopedSettings.SettingUpdater newUpdater(Consumer consumer, Logger logger, Consumer validator) { - throw new UnsupportedOperationException("Affix settings can't be updated. 
Use #getConcreteSetting for updating."); - } - - @Override - public Setting getConcreteSetting(String key) { - if (match(key)) { - return new Setting<>(key, defaultValue, parser, properties); - } else { - throw new IllegalArgumentException("key [" + key + "] must match [" + getKey() + "] but didn't."); - } - } - }; - } public interface Key { @@ -949,34 +1150,60 @@ public boolean match(String toTest) { } } + /** + * A key that allows for static pre and suffix. This is used for settings + * that have dynamic namespaces like for different accounts etc. + */ public static final class AffixKey implements Key { - public static AffixKey withPrefix(String prefix) { - return new AffixKey(prefix, null); - } - - public static AffixKey withAdfix(String prefix, String suffix) { - return new AffixKey(prefix, suffix); - } - + private final Pattern pattern; private final String prefix; private final String suffix; - public AffixKey(String prefix, String suffix) { + AffixKey(String prefix) { + this(prefix, null); + } + + AffixKey(String prefix, String suffix) { assert prefix != null || suffix != null: "Either prefix or suffix must be non-null"; + this.prefix = prefix; + if (prefix.endsWith(".") == false) { + throw new IllegalArgumentException("prefix must end with a '.'"); + } this.suffix = suffix; + if (suffix == null) { + pattern = Pattern.compile("(" + Pattern.quote(prefix) + "((?:[-\\w]+[.])*[-\\w]+$))"); + } else { + // the last part of this regexp is for lists since they are represented as x.${namespace}.y.1, x.${namespace}.y.2 + pattern = Pattern.compile("(" + Pattern.quote(prefix) + "([-\\w]+)\\." + Pattern.quote(suffix) + ")(?:\\.\\d+)?"); + } } @Override public boolean match(String key) { - boolean match = true; - if (prefix != null) { - match = key.startsWith(prefix); + return pattern.matcher(key).matches(); + } + + /** + * Returns a string representation of the concrete setting key + */ + String getConcreteString(String key) { + Matcher matcher = pattern.matcher(key); + if (matcher.matches() == false) { + throw new IllegalStateException("can't get concrete string for key " + key + " key doesn't match"); } - if (suffix != null) { - match = match && key.endsWith(suffix); + return matcher.group(1); + } + + /** + * Returns a string representation of the concrete setting key + */ + String getNamespace(String key) { + Matcher matcher = pattern.matcher(key); + if (matcher.matches() == false) { + throw new IllegalStateException("can't get concrete string for key " + key + " key doesn't match"); } - return match; + return matcher.group(2); } public SimpleKey toConcreteKey(String missingPart) { @@ -999,9 +1226,9 @@ public String toString() { sb.append(prefix); } if (suffix != null) { - sb.append("*"); + sb.append('*'); + sb.append('.'); sb.append(suffix); - sb.append("."); } return sb.toString(); } diff --git a/core/src/main/java/org/elasticsearch/common/settings/Settings.java b/core/src/main/java/org/elasticsearch/common/settings/Settings.java index 9e5dd0efbe28e..908d5d11fbbd4 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/Settings.java +++ b/core/src/main/java/org/elasticsearch/common/settings/Settings.java @@ -19,12 +19,16 @@ package org.elasticsearch.common.settings; +import org.apache.lucene.util.SetOnce; import org.elasticsearch.Version; import org.elasticsearch.common.Booleans; import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.Streams; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import 
org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.LogConfigurator; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.loader.SettingsLoader; import org.elasticsearch.common.settings.loader.SettingsLoaderFactory; import org.elasticsearch.common.unit.ByteSizeUnit; @@ -35,6 +39,7 @@ import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentType; import java.io.IOException; import java.io.InputStream; @@ -42,6 +47,9 @@ import java.nio.charset.StandardCharsets; import java.nio.file.Files; import java.nio.file.Path; +import java.security.GeneralSecurityException; +import java.util.AbstractMap; +import java.util.AbstractSet; import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; @@ -49,19 +57,21 @@ import java.util.HashMap; import java.util.HashSet; import java.util.Iterator; -import java.util.LinkedHashMap; import java.util.List; +import java.util.Locale; import java.util.Map; +import java.util.NoSuchElementException; import java.util.Objects; import java.util.Set; -import java.util.SortedMap; import java.util.TreeMap; import java.util.concurrent.TimeUnit; import java.util.function.Function; import java.util.function.Predicate; +import java.util.function.UnaryOperator; import java.util.regex.Matcher; import java.util.regex.Pattern; import java.util.stream.Collectors; +import java.util.stream.Stream; import static org.elasticsearch.common.unit.ByteSizeValue.parseBytesSizeValue; import static org.elasticsearch.common.unit.SizeValue.parseSizeValue; @@ -71,15 +81,36 @@ * An immutable settings implementation. */ public final class Settings implements ToXContent { - public static final Settings EMPTY = new Builder().build(); private static final Pattern ARRAY_PATTERN = Pattern.compile("(.*)\\.\\d+$"); - private SortedMap settings; + /** The raw settings from the full key to raw string value. */ + private final Map settings; + + /** The secure settings storage associated with these settings. */ + private final SecureSettings secureSettings; - Settings(Map settings) { + /** The first level of setting names. This is constructed lazily in {@link #names()}. */ + private final SetOnce> firstLevelNames = new SetOnce<>(); + + /** + * Setting names found in this Settings for both string and secure settings. + * This is constructed lazily in {@link #keySet()}. + */ + private final SetOnce> keys = new SetOnce<>(); + + Settings(Map settings, SecureSettings secureSettings) { // we use a sorted map for consistent serialization when using getAsMap() this.settings = Collections.unmodifiableSortedMap(new TreeMap<>(settings)); + this.secureSettings = secureSettings; + } + + /** + * Retrieve the secure settings in these settings. + */ + SecureSettings getSecureSettings() { + // pkg private so it can only be accessed by local subclasses of SecureSetting + return secureSettings; } /** @@ -87,7 +118,8 @@ public final class Settings implements ToXContent { * @return an unmodifiable map of settings */ public Map getAsMap() { - return Collections.unmodifiableMap(this.settings); + // settings is always unmodifiable + return this.settings; } /** @@ -186,30 +218,16 @@ private Object convertMapsToArrays(Map map) { * A settings that are filtered (and key is removed) with the specified prefix. 
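Earlier in this hunk `Settings` gains set-once caches for the first-level names and the full key set, so those derived views are computed at most once and then reused. A standalone sketch of that lazy, compute-once derivation, using double-checked locking in place of Lucene's `SetOnce`; the class and field names below are made up for illustration:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class LazyNames {
    private final Map<String, String> settings;
    // computed at most once and then reused, mirroring the set-once caches in Settings
    private volatile Set<String> firstLevelNames;

    public LazyNames(Map<String, String> settings) {
        this.settings = new TreeMap<>(settings);
    }

    public Set<String> names() {
        Set<String> names = firstLevelNames;
        if (names == null) {
            synchronized (this) {
                if (firstLevelNames == null) {
                    // "cluster.name" and "cluster.routing.allocation.enable" both contribute "cluster"
                    firstLevelNames = Collections.unmodifiableSet(
                        settings.keySet().stream()
                            .map(k -> k.indexOf('.') < 0 ? k : k.substring(0, k.indexOf('.')))
                            .collect(Collectors.toSet()));
                }
                names = firstLevelNames;
            }
        }
        return names;
    }

    public static void main(String[] args) {
        Map<String, String> flat = new TreeMap<>();
        flat.put("cluster.name", "test");
        flat.put("cluster.routing.allocation.enable", "all");
        flat.put("path.home", "/tmp");
        System.out.println(new LazyNames(flat).names()); // e.g. [cluster, path]
    }
}
```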
*/ public Settings getByPrefix(String prefix) { - Builder builder = new Builder(); - for (Map.Entry entry : getAsMap().entrySet()) { - if (entry.getKey().startsWith(prefix)) { - if (entry.getKey().length() < prefix.length()) { - // ignore this. one - continue; - } - builder.put(entry.getKey().substring(prefix.length()), entry.getValue()); - } - } - return builder.build(); + return new Settings(new FilteredMap(this.settings, (k) -> k.startsWith(prefix), prefix), secureSettings == null ? null : + new PrefixedSecureSettings(secureSettings, prefix, s -> s.startsWith(prefix))); } /** * Returns a new settings object that contains all setting of the current one filtered by the given settings key predicate. */ public Settings filter(Predicate predicate) { - Builder builder = new Builder(); - for (Map.Entry entry : getAsMap().entrySet()) { - if (predicate.test(entry.getKey())) { - builder.put(entry.getKey(), entry.getValue()); - } - } - return builder.build(); + return new Settings(new FilteredMap(this.settings, predicate, null), secureSettings == null ? null : + new PrefixedSecureSettings(secureSettings, "", predicate)); } /** @@ -302,12 +320,27 @@ public Long getAsLong(String setting, Long defaultValue) { } } + /** + * We have to lazy initialize the deprecation logger as otherwise a static logger here would be constructed before logging is configured + * leading to a runtime failure (see {@link LogConfigurator#checkErrorListener()} ). The premature construction would come from any + * {@link Setting} object constructed in, for example, {@link org.elasticsearch.env.Environment}. + */ + static class DeprecationLoggerHolder { + static DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(Settings.class)); + } + /** * Returns the setting value (as boolean) associated with the setting key. If it does not exists, * returns the default value provided. 
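The boolean accessors keep accepting the old lenient spellings but now log a deprecation warning unless the value is strictly `true` or `false`. A rough standalone sketch of that behaviour; the exact set of lenient values is an assumption here, and `java.util.logging` stands in for the real `DeprecationLogger`:

```java
import java.util.logging.Logger;

public class LenientBooleans {
    private static final Logger LOGGER = Logger.getLogger(LenientBooleans.class.getName());

    /** true only for the strict spellings the new behaviour wants callers to use. */
    static boolean isStrictlyBoolean(String value) {
        return "true".equals(value) || "false".equals(value);
    }

    static boolean parseSettingBoolean(String key, String value) {
        final boolean parsed;
        // assumed lenient legacy spellings; anything unrecognised is rejected outright
        if ("true".equals(value) || "on".equals(value) || "yes".equals(value) || "1".equals(value)) {
            parsed = true;
        } else if ("false".equals(value) || "off".equals(value) || "no".equals(value) || "0".equals(value)) {
            parsed = false;
        } else {
            throw new IllegalArgumentException("Failed to parse value [" + value + "] for setting [" + key + "]");
        }
        if (isStrictlyBoolean(value) == false) {
            LOGGER.warning("Expected a boolean [true/false] for setting [" + key + "] but got [" + value + "]");
        }
        return parsed;
    }

    public static void main(String[] args) {
        System.out.println(parseSettingBoolean("action.destructive_requires_name", "false")); // silent, false
        System.out.println(parseSettingBoolean("some.legacy.flag", "yes"));                   // warns, true
    }
}
```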
*/ public Boolean getAsBoolean(String setting, Boolean defaultValue) { - return Booleans.parseBoolean(get(setting), defaultValue); + String rawValue = get(setting); + Boolean booleanValue = Booleans.parseBooleanExact(rawValue, defaultValue); + if (rawValue != null && Booleans.isStrictlyBoolean(rawValue) == false) { + final DeprecationLogger deprecationLogger = DeprecationLoggerHolder.deprecationLogger; + deprecationLogger.deprecated("Expected a boolean [true/false] for setting [{}] but got [{}]", setting, rawValue); + } + return booleanValue; } /** @@ -395,6 +428,20 @@ public String[] getAsArray(String settingPrefix, String[] defaultArray) throws S public String[] getAsArray(String settingPrefix, String[] defaultArray, Boolean commaDelimited) throws SettingsException { List result = new ArrayList<>(); + final String valueFromPrefix = get(settingPrefix); + final String valueFromPreifx0 = get(settingPrefix + ".0"); + + if (valueFromPrefix != null && valueFromPreifx0 != null) { + final String message = String.format( + Locale.ROOT, + "settings object contains values for [%s=%s] and [%s=%s]", + settingPrefix, + valueFromPrefix, + settingPrefix + ".0", + valueFromPreifx0); + throw new IllegalStateException(message); + } + if (get(settingPrefix) != null) { if (commaDelimited) { String[] strings = Strings.splitStringByCommaToArray(get(settingPrefix)); @@ -443,36 +490,23 @@ public Map getGroups(String settingPrefix, boolean ignoreNonGr } return getGroupsInternal(settingPrefix, ignoreNonGrouped); } + private Map getGroupsInternal(String settingPrefix, boolean ignoreNonGrouped) throws SettingsException { - // we don't really care that it might happen twice - Map> map = new LinkedHashMap<>(); - for (Object o : settings.keySet()) { - String setting = (String) o; - if (setting.startsWith(settingPrefix)) { - String nameValue = setting.substring(settingPrefix.length()); - int dotIndex = nameValue.indexOf('.'); - if (dotIndex == -1) { - if (ignoreNonGrouped) { - continue; - } - throw new SettingsException("Failed to get setting group for [" + settingPrefix + "] setting prefix and setting [" - + setting + "] because of a missing '.'"); - } - String name = nameValue.substring(0, dotIndex); - String value = nameValue.substring(dotIndex + 1); - Map groupSettings = map.get(name); - if (groupSettings == null) { - groupSettings = new LinkedHashMap<>(); - map.put(name, groupSettings); + Settings prefixSettings = getByPrefix(settingPrefix); + Map groups = new HashMap<>(); + for (String groupName : prefixSettings.names()) { + Settings groupSettings = prefixSettings.getByPrefix(groupName + "."); + if (groupSettings.isEmpty()) { + if (ignoreNonGrouped) { + continue; } - groupSettings.put(value, get(setting)); + throw new SettingsException("Failed to get setting group for [" + settingPrefix + "] setting prefix and setting [" + + settingPrefix + groupName + "] because of a missing '.'"); } + groups.put(groupName, groupSettings); } - Map retVal = new LinkedHashMap<>(); - for (Map.Entry> entry : map.entrySet()) { - retVal.put(entry.getKey(), new Settings(Collections.unmodifiableMap(entry.getValue()))); - } - return Collections.unmodifiableMap(retVal); + + return Collections.unmodifiableMap(groups); } /** * Returns group settings for the given setting prefix. 
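The rewritten `getGroupsInternal` above derives groups from the sub-settings under the prefix instead of slicing raw keys by hand. A simplified standalone version of the same decomposition, working on a plain map rather than `Settings`:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class SettingGroups {
    /**
     * Splits keys under {@code prefix} into groups keyed by the first path element after the prefix,
     * e.g. "thread_pool.search.size" under prefix "thread_pool." lands in group "search" as "size".
     */
    static Map<String, Map<String, String>> groups(Map<String, String> flat, String prefix) {
        Map<String, Map<String, String>> groups = new HashMap<>();
        for (Map.Entry<String, String> e : flat.entrySet()) {
            if (e.getKey().startsWith(prefix) == false) {
                continue;
            }
            String rest = e.getKey().substring(prefix.length());
            int dot = rest.indexOf('.');
            if (dot < 0) {
                // a non-grouped key; the real code either skips it (ignoreNonGrouped) or throws a SettingsException
                continue;
            }
            groups.computeIfAbsent(rest.substring(0, dot), k -> new TreeMap<>())
                  .put(rest.substring(dot + 1), e.getValue());
        }
        return groups;
    }

    public static void main(String[] args) {
        Map<String, String> flat = new TreeMap<>();
        flat.put("thread_pool.search.size", "13");
        flat.put("thread_pool.search.queue_size", "1000");
        flat.put("thread_pool.index.size", "8");
        System.out.println(groups(flat, "thread_pool.")); // {index={size=8}, search={queue_size=1000, size=13}}
    }
}
```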
@@ -504,16 +538,24 @@ public Version getAsVersion(String setting, Version defaultVersion) throws Setti * @return The direct keys of this settings */ public Set names() { - Set names = new HashSet<>(); - for (String key : settings.keySet()) { - int i = key.indexOf("."); - if (i < 0) { - names.add(key); - } else { - names.add(key.substring(0, i)); + synchronized (firstLevelNames) { + if (firstLevelNames.get() == null) { + Stream stream = settings.keySet().stream(); + if (secureSettings != null) { + stream = Stream.concat(stream, secureSettings.getSettingNames().stream()); + } + Set names = stream.map(k -> { + int i = k.indexOf('.'); + if (i < 0) { + return k; + } else { + return k.substring(0, i); + } + }).collect(Collectors.toSet()); + firstLevelNames.set(Collections.unmodifiableSet(names)); } } - return names; + return firstLevelNames.get(); } /** @@ -553,8 +595,10 @@ public static Settings readSettingsFromStream(StreamInput in) throws IOException } public static void writeSettingsToStream(Settings settings, StreamOutput out) throws IOException { - out.writeVInt(settings.getAsMap().size()); - for (Map.Entry entry : settings.getAsMap().entrySet()) { + // pull getAsMap() to exclude secure settings in size() + Set> entries = settings.getAsMap().entrySet(); + out.writeVInt(entries.size()); + for (Map.Entry entry : entries) { out.writeString(entry.getKey()); out.writeOptionalString(entry.getValue()); } @@ -582,12 +626,36 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws return builder; } + public static final Set FORMAT_PARAMS = + Collections.unmodifiableSet(new HashSet<>(Arrays.asList("settings_filter", "flat_settings"))); + /** * Returns true if this settings object contains no settings * @return true if this settings object contains no settings */ public boolean isEmpty() { - return this.settings.isEmpty(); + return this.settings.isEmpty() && (secureSettings == null || secureSettings.getSettingNames().isEmpty()); + } + + /** Returns the number of settings in this settings object. */ + public int size() { + return keySet().size(); + } + + /** Returns the fully qualified setting names contained in this settings object. */ + public Set keySet() { + synchronized (keys) { + if (keys.get() == null) { + if (secureSettings == null) { + keys.set(settings.keySet()); + } else { + Stream stream = Stream.concat(settings.keySet().stream(), secureSettings.getSettingNames().stream()); + // uniquify, since for legacy reasons the same setting name may exist in both + keys.set(Collections.unmodifiableSet(stream.collect(Collectors.toSet()))); + } + } + } + return keys.get(); } /** @@ -599,7 +667,10 @@ public static class Builder { public static final Settings EMPTY_SETTINGS = new Builder().build(); - private final Map map = new LinkedHashMap<>(); + // we use a sorted map for consistent serialization when using getAsMap() + private final Map map = new TreeMap<>(); + + private SetOnce secureSettings = new SetOnce<>(); private Builder() { @@ -623,6 +694,23 @@ public String get(String key) { return map.get(key); } + /** Return the current secure settings, or {@code null} if none have been set. */ + public SecureSettings getSecureSettings() { + return secureSettings.get(); + } + + public Builder setSecureSettings(SecureSettings secureSettings) { + if (secureSettings.isLoaded() == false) { + throw new IllegalStateException("Secure settings must already be loaded"); + } + if (this.secureSettings.get() != null) { + throw new IllegalArgumentException("Secure settings already set. 
Existing settings: " + + this.secureSettings.get().getSettingNames() + ", new settings: " + secureSettings.getSettingNames()); + } + this.secureSettings.set(secureSettings); + return this; + } + /** * Puts tuples of key value pairs of settings. Simplified version instead of repeating calling * put for each one. @@ -847,6 +935,9 @@ public Builder put(String settingPrefix, String groupName, String[] settings, St public Builder put(Settings settings) { removeNonArraysFieldsIfNewSettingsContainsFieldAsArray(settings.getAsMap()); map.putAll(settings.getAsMap()); + if (settings.getSecureSettings() != null) { + setSecureSettings(settings.getSecureSettings()); + } return this; } @@ -903,7 +994,9 @@ public Builder put(Dictionary properties) { /** * Loads settings from the actual string content that represents them using the * {@link SettingsLoaderFactory#loaderFromSource(String)}. + * @deprecated use {@link #loadFromSource(String, XContentType)} to avoid content type detection */ + @Deprecated public Builder loadFromSource(String source) { SettingsLoader settingsLoader = SettingsLoaderFactory.loaderFromSource(source); try { @@ -915,9 +1008,24 @@ public Builder loadFromSource(String source) { return this; } + /** + * Loads settings from the actual string content that represents them using the + * {@link SettingsLoaderFactory#loaderFromXContentType(XContentType)} method to obtain a loader + */ + public Builder loadFromSource(String source, XContentType xContentType) { + SettingsLoader settingsLoader = SettingsLoaderFactory.loaderFromXContentType(xContentType); + try { + Map loadedSettings = settingsLoader.load(source); + put(loadedSettings); + } catch (Exception e) { + throw new SettingsException("Failed to load settings from [" + source + "]", e); + } + return this; + } + /** * Loads settings from a url that represents them using the - * {@link SettingsLoaderFactory#loaderFromSource(String)}. + * {@link SettingsLoaderFactory#loaderFromResource(String)}. */ public Builder loadFromPath(Path path) throws IOException { // NOTE: loadFromStream will close the input stream @@ -926,7 +1034,7 @@ public Builder loadFromPath(Path path) throws IOException { /** * Loads settings from a stream that represents them using the - * {@link SettingsLoaderFactory#loaderFromSource(String)}. + * {@link SettingsLoaderFactory#loaderFromResource(String)}. */ public Builder loadFromStream(String resourceName, InputStream is) throws IOException { SettingsLoader settingsLoader = SettingsLoaderFactory.loaderFromResource(resourceName); @@ -937,12 +1045,10 @@ public Builder loadFromStream(String resourceName, InputStream is) throws IOExce return this; } - public Builder putProperties(Map esSettings, Predicate keyPredicate, Function keyFunction) { + public Builder putProperties(final Map esSettings, final Function keyFunction) { for (final Map.Entry esSetting : esSettings.entrySet()) { final String key = esSetting.getKey(); - if (keyPredicate.test(key)) { - map.put(keyFunction.apply(key), esSetting.getValue()); - } + map.put(keyFunction.apply(key), esSetting.getValue()); } return this; } @@ -1029,7 +1135,171 @@ public Builder normalizePrefix(String prefix) { * set on this builder. 
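The builder changes in this hunk deprecate the content-sniffing `loadFromSource(String)` in favour of an overload that takes the `XContentType` explicitly. A usage sketch of how a caller might build settings after this change; it assumes the Elasticsearch artifact on the classpath, the `Settings.builder()` entry point, and the usual flattening of nested JSON objects into dotted keys:

```java
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentType;

public class BuildSettingsExample {
    public static void main(String[] args) {
        // illustrative JSON, not taken from any real configuration
        String json = "{\"cluster\": {\"name\": \"demo\"}, \"node\": {\"attr\": {\"rack\": \"r1\"}}}";

        Settings settings = Settings.builder()
            .loadFromSource(json, XContentType.JSON) // explicit content type, no sniffing
            .put("path.home", "/tmp/es-demo")        // plain key/value pairs still work
            .build();

        System.out.println(settings.get("cluster.name"));   // demo
        System.out.println(settings.get("node.attr.rack")); // r1
    }
}
```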
*/ public Settings build() { - return new Settings(Collections.unmodifiableMap(map)); + return new Settings(map, secureSettings.get()); + } + } + + // TODO We could use an FST internally to make things even faster and more compact + private static final class FilteredMap extends AbstractMap { + private final Map delegate; + private final Predicate filter; + private final String prefix; + // we cache that size since we have to iterate the entire set + // this is safe to do since this map is only used with unmodifiable maps + private int size = -1; + @Override + public Set> entrySet() { + Set> delegateSet = delegate.entrySet(); + AbstractSet> filterSet = new AbstractSet>() { + + @Override + public Iterator> iterator() { + Iterator> iter = delegateSet.iterator(); + + return new Iterator>() { + private int numIterated; + private Entry currentElement; + @Override + public boolean hasNext() { + if (currentElement != null) { + return true; // protect against calling hasNext twice + } else { + if (numIterated == size) { // early terminate + assert size != -1 : "size was never set: " + numIterated + " vs. " + size; + return false; + } + while (iter.hasNext()) { + if (filter.test((currentElement = iter.next()).getKey())) { + numIterated++; + return true; + } + } + // we didn't find anything + currentElement = null; + return false; + } + } + + @Override + public Entry next() { + if (currentElement == null && hasNext() == false) { // protect against no #hasNext call or not respecting it + + throw new NoSuchElementException("make sure to call hasNext first"); + } + final Entry current = this.currentElement; + this.currentElement = null; + if (prefix == null) { + return current; + } + return new Entry() { + @Override + public String getKey() { + return current.getKey().substring(prefix.length()); + } + + @Override + public String getValue() { + return current.getValue(); + } + + @Override + public String setValue(String value) { + throw new UnsupportedOperationException(); + } + }; + } + }; + } + + @Override + public int size() { + return FilteredMap.this.size(); + } + }; + return filterSet; + } + + private FilteredMap(Map delegate, Predicate filter, String prefix) { + this.delegate = delegate; + this.filter = filter; + this.prefix = prefix; + } + + @Override + public String get(Object key) { + if (key instanceof String) { + final String theKey = prefix == null ? (String)key : prefix + key; + if (filter.test(theKey)) { + return delegate.get(theKey); + } + } + return null; + } + + @Override + public boolean containsKey(Object key) { + if (key instanceof String) { + final String theKey = prefix == null ? 
(String) key : prefix + key; + if (filter.test(theKey)) { + return delegate.containsKey(theKey); + } + } + return false; + } + + @Override + public int size() { + if (size == -1) { + size = Math.toIntExact(delegate.keySet().stream().filter((e) -> filter.test(e)).count()); + } + return size; + } + } + + private static class PrefixedSecureSettings implements SecureSettings { + private final SecureSettings delegate; + private final UnaryOperator addPrefix; + private final UnaryOperator removePrefix; + private final Predicate keyPredicate; + private final SetOnce> settingNames = new SetOnce<>(); + + PrefixedSecureSettings(SecureSettings delegate, String prefix, Predicate keyPredicate) { + this.delegate = delegate; + this.addPrefix = s -> prefix + s; + this.removePrefix = s -> s.substring(prefix.length()); + this.keyPredicate = keyPredicate; + } + + @Override + public boolean isLoaded() { + return delegate.isLoaded(); + } + + @Override + public Set getSettingNames() { + synchronized (settingNames) { + if (settingNames.get() == null) { + Set names = delegate.getSettingNames().stream() + .filter(keyPredicate).map(removePrefix).collect(Collectors.toSet()); + settingNames.set(Collections.unmodifiableSet(names)); + } + } + return settingNames.get(); + } + + @Override + public SecureString getString(String setting) throws GeneralSecurityException{ + return delegate.getString(addPrefix.apply(setting)); + } + + @Override + public InputStream getFile(String setting) throws GeneralSecurityException{ + return delegate.getFile(addPrefix.apply(setting)); + } + + @Override + public void close() throws IOException { + delegate.close(); } } } diff --git a/core/src/main/java/org/elasticsearch/common/settings/SettingsModule.java b/core/src/main/java/org/elasticsearch/common/settings/SettingsModule.java index 60276ce14f7a1..acad70275a6f5 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/SettingsModule.java +++ b/core/src/main/java/org/elasticsearch/common/settings/SettingsModule.java @@ -54,6 +54,7 @@ public class SettingsModule implements Module { private final Logger logger; private final IndexScopedSettings indexScopedSettings; private final ClusterSettings clusterSettings; + private final SettingsFilter settingsFilter; public SettingsModule(Settings settings, Setting... 
additionalSettings) { this(settings, Arrays.asList(additionalSettings), Collections.emptyList()); @@ -137,12 +138,13 @@ public SettingsModule(Settings settings, List> additionalSettings, Li final Predicate acceptOnlyClusterSettings = TRIBE_CLIENT_NODE_SETTINGS_PREDICATE.negate(); clusterSettings.validate(settings.filter(acceptOnlyClusterSettings)); validateTribeSettings(settings, clusterSettings); + this.settingsFilter = new SettingsFilter(settings, settingsFilterPattern); } @Override public void configure(Binder binder) { binder.bind(Settings.class).toInstance(settings); - binder.bind(SettingsFilter.class).toInstance(new SettingsFilter(settings, settingsFilterPattern)); + binder.bind(SettingsFilter.class).toInstance(settingsFilter); binder.bind(ClusterSettings.class).toInstance(clusterSettings); binder.bind(IndexScopedSettings.class).toInstance(indexScopedSettings); } @@ -155,8 +157,12 @@ public void configure(Binder binder) { */ private void registerSetting(Setting setting) { if (setting.isFiltered()) { - if (settingsFilterPattern.contains(setting.getKey()) == false) { - registerSettingsFilter(setting.getKey()); + final String key = setting.getKey(); + if (settingsFilterPattern.contains(key) == false) { + registerSettingsFilter(key); + if (setting.isListSetting()) { + registerSettingsFilter(key + ".*"); + } } } if (setting.hasNodeScope() || setting.hasIndexScope()) { @@ -218,4 +224,8 @@ public IndexScopedSettings getIndexScopedSettings() { public ClusterSettings getClusterSettings() { return clusterSettings; } + + public SettingsFilter getSettingsFilter() { + return settingsFilter; + } } diff --git a/core/src/main/java/org/elasticsearch/common/settings/loader/SettingsLoaderFactory.java b/core/src/main/java/org/elasticsearch/common/settings/loader/SettingsLoaderFactory.java index 5f2da22c5f2dd..5d8cb4918b2ea 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/loader/SettingsLoaderFactory.java +++ b/core/src/main/java/org/elasticsearch/common/settings/loader/SettingsLoaderFactory.java @@ -19,6 +19,8 @@ package org.elasticsearch.common.settings.loader; +import org.elasticsearch.common.xcontent.XContentType; + /** * A class holding factory methods for settings loaders that attempts * to infer the type of the underlying settings content. @@ -33,9 +35,7 @@ private SettingsLoaderFactory() { * name. This factory method assumes that if the resource name ends * with ".json" then the content should be parsed as JSON, else if * the resource name ends with ".yml" or ".yaml" then the content - * should be parsed as YAML, else if the resource name ends with - * ".properties" then the content should be parsed as properties, - * otherwise default to attempting to parse as JSON. Note that the + * should be parsed as YAML, otherwise throws an exception. Note that the * parsers returned by this method will not accept null-valued * keys. * @@ -59,13 +59,15 @@ public static SettingsLoader loaderFromResource(String resourceName) { * contains an opening and closing brace ('{' and '}') then the * content should be parsed as JSON, else if the underlying content * fails this condition but contains a ':' then the content should - * be parsed as YAML, and otherwise should be parsed as properties. + * be parsed as YAML, and otherwise throws an exception. * Note that the JSON and YAML parsers returned by this method will * accept null-valued keys. * * @param source The underlying settings content. * @return A settings loader. 
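The heuristic documented here is what makes the deprecated `loaderFromSource` fragile: the format is guessed from the raw characters. A standalone sketch of that detection logic, where the `Format` enum is a stand-in rather than the real `XContentType`, showing how ambiguous content can be misdetected, which is the motivation for the explicit-content-type API that follows:

```java
public class FormatSniffing {
    enum Format { JSON, YAML }

    /**
     * Mirrors the deprecated detection: a pair of braces means JSON, otherwise a ':' means YAML,
     * and anything else is rejected. YAML that happens to contain braces can be misdetected,
     * which is why the replacement API asks callers to pass the content type explicitly.
     */
    static Format sniff(String source) {
        if (source.indexOf('{') != -1 && source.indexOf('}') != -1) {
            return Format.JSON;
        } else if (source.indexOf(':') != -1) {
            return Format.YAML;
        } else {
            throw new IllegalArgumentException("unable to detect content type from [" + source + "]");
        }
    }

    public static void main(String[] args) {
        System.out.println(sniff("{\"a\": 1}"));               // JSON
        System.out.println(sniff("a: 1"));                     // YAML
        System.out.println(sniff("description: \"{oops}\""));  // reported as JSON even though it is YAML
    }
}
```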
+ * @deprecated use {@link #loaderFromXContentType(XContentType)} instead */ + @Deprecated public static SettingsLoader loaderFromSource(String source) { if (source.indexOf('{') != -1 && source.indexOf('}') != -1) { return new JsonSettingsLoader(true); @@ -76,4 +78,20 @@ public static SettingsLoader loaderFromSource(String source) { } } + /** + * Returns a {@link SettingsLoader} based on the {@link XContentType}. Note only {@link XContentType#JSON} and + * {@link XContentType#YAML} are supported + * + * @param xContentType The content type + * @return A settings loader. + */ + public static SettingsLoader loaderFromXContentType(XContentType xContentType) { + if (xContentType == XContentType.JSON) { + return new JsonSettingsLoader(true); + } else if (xContentType == XContentType.YAML) { + return new YamlSettingsLoader(true); + } else { + throw new IllegalArgumentException("unsupported content type [" + xContentType + "]"); + } + } } diff --git a/core/src/main/java/org/elasticsearch/common/settings/loader/XContentSettingsLoader.java b/core/src/main/java/org/elasticsearch/common/settings/loader/XContentSettingsLoader.java index 30c62b91c79bd..d7eaa627a2868 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/loader/XContentSettingsLoader.java +++ b/core/src/main/java/org/elasticsearch/common/settings/loader/XContentSettingsLoader.java @@ -20,6 +20,7 @@ package org.elasticsearch.common.settings.loader; import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentType; @@ -46,14 +47,16 @@ public abstract class XContentSettingsLoader implements SettingsLoader { @Override public Map load(String source) throws IOException { - try (XContentParser parser = XContentFactory.xContent(contentType()).createParser(source)) { + // It is safe to use EMPTY here because this never uses namedObject + try (XContentParser parser = XContentFactory.xContent(contentType()).createParser(NamedXContentRegistry.EMPTY, source)) { return load(parser); } } @Override public Map load(byte[] source) throws IOException { - try (XContentParser parser = XContentFactory.xContent(contentType()).createParser(source)) { + // It is safe to use EMPTY here because this never uses namedObject + try (XContentParser parser = XContentFactory.xContent(contentType()).createParser(NamedXContentRegistry.EMPTY, source)) { return load(parser); } } diff --git a/core/src/main/java/org/elasticsearch/common/text/Text.java b/core/src/main/java/org/elasticsearch/common/text/Text.java index 39eb817fe3c3d..d895b7c11b02d 100644 --- a/core/src/main/java/org/elasticsearch/common/text/Text.java +++ b/core/src/main/java/org/elasticsearch/common/text/Text.java @@ -100,7 +100,10 @@ public int hashCode() { @Override public boolean equals(Object obj) { - if (obj == null) { + if (this == obj) { + return true; + } + if (obj == null || getClass() != obj.getClass()) { return false; } return bytes().equals(((Text) obj).bytes()); diff --git a/core/src/main/java/org/elasticsearch/common/transport/InetSocketTransportAddress.java b/core/src/main/java/org/elasticsearch/common/transport/InetSocketTransportAddress.java index 94c1a2390ac2d..027d7344a29aa 100644 --- a/core/src/main/java/org/elasticsearch/common/transport/InetSocketTransportAddress.java +++ b/core/src/main/java/org/elasticsearch/common/transport/InetSocketTransportAddress.java 
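The `Text#equals` fix just above adds the two guards that were missing, a reference-equality fast path and a class check before comparing the byte representations. The same pattern on a small self-contained value class:

```java
import java.util.Objects;

public final class Label {
    private final String value;

    public Label(String value) {
        this.value = Objects.requireNonNull(value);
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;               // fast path: same instance
        }
        if (obj == null || getClass() != obj.getClass()) {
            return false;              // null or a different type can never be equal
        }
        return value.equals(((Label) obj).value);
    }

    @Override
    public int hashCode() {
        return value.hashCode();       // consistent with equals
    }

    public static void main(String[] args) {
        Label a = new Label("x");
        System.out.println(a.equals(a));              // true via the identity check
        System.out.println(a.equals(new Label("x"))); // true via value comparison
        System.out.println(a.equals("x"));            // false: different class
    }
}
```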
@@ -19,6 +19,7 @@ package org.elasticsearch.common.transport; +import org.elasticsearch.Version; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.network.NetworkAddress; @@ -53,19 +54,38 @@ public InetSocketTransportAddress(InetSocketAddress address) { * Read from a stream. */ public InetSocketTransportAddress(StreamInput in) throws IOException { + this(in, null); + } + + /** + * Read from a stream and use the {@code hostString} when creating the InetAddress if the input comes from a version on or prior + * {@link Version#V_5_0_2} as the hostString was not serialized + */ + public InetSocketTransportAddress(StreamInput in, String hostString) throws IOException { final int len = in.readByte(); final byte[] a = new byte[len]; // 4 bytes (IPv4) or 16 bytes (IPv6) in.readFully(a); - InetAddress inetAddress = InetAddress.getByAddress(a); + final InetAddress inetAddress; + if (in.getVersion().after(Version.V_5_0_2)) { + String host = in.readString(); + inetAddress = InetAddress.getByAddress(host, a); // the host string was serialized so we can ignore the passed in value + } else { + // prior to this version, we did not serialize the host string so we used the passed in value + inetAddress = InetAddress.getByAddress(hostString, a); + } int port = in.readInt(); this.address = new InetSocketAddress(inetAddress, port); } + @Override public void writeTo(StreamOutput out) throws IOException { byte[] bytes = address().getAddress().getAddress(); // 4 bytes (IPv4) or 16 bytes (IPv6) out.writeByte((byte) bytes.length); // 1 byte out.write(bytes, 0, bytes.length); + if (out.getVersion().after(Version.V_5_0_2)) { + out.writeString(address.getHostString()); + } // don't serialize scope ids over the network!!!! // these only make sense with respect to the local machine, and will only formulate // the address incorrectly remotely. 
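The constructor and `writeTo` changes above gate the host string on the wire version, so nodes on or before `V_5_0_2` keep the old format while newer nodes carry the host string. A simplified sketch of that version-gated layout, using plain data streams and an integer version number instead of `StreamInput`/`StreamOutput`; the version constants are stand-ins:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class VersionedAddress {
    static final int V_5_0_2 = 50002; // stand-in version ids
    static final int V_5_1_0 = 50100;

    // writes: address length, raw address bytes, then (only on newer wire versions) the host string
    static byte[] write(byte[] addressBytes, String host, int wireVersion) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeByte(addressBytes.length);
        out.write(addressBytes);
        if (wireVersion > V_5_0_2) {
            out.writeUTF(host); // newer wire format carries the host string
        }
        return bos.toByteArray();
    }

    // reads the same layout; on old wire versions the host string is absent and the fallback is used
    static String read(byte[] wire, int wireVersion, String fallbackHost) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(wire));
        byte[] addressBytes = new byte[in.readByte()];
        in.readFully(addressBytes);
        return wireVersion > V_5_0_2 ? in.readUTF() : fallbackHost;
    }

    public static void main(String[] args) throws IOException {
        byte[] v4 = {127, 0, 0, 1};
        System.out.println(read(write(v4, "node-1", V_5_1_0), V_5_1_0, "fallback")); // node-1
        System.out.println(read(write(v4, "node-1", V_5_0_2), V_5_0_2, "fallback")); // fallback
    }
}
```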
@@ -90,7 +110,7 @@ public boolean isLoopbackOrLinkLocalAddress() { @Override public String getHost() { - return getAddress(); // just delegate no resolving + return address.getHostString(); // just delegate no resolving done by getHostString } @Override diff --git a/core/src/main/java/org/elasticsearch/common/transport/LocalTransportAddress.java b/core/src/main/java/org/elasticsearch/common/transport/LocalTransportAddress.java index 48becc832daab..a6af5ed1bfcf3 100644 --- a/core/src/main/java/org/elasticsearch/common/transport/LocalTransportAddress.java +++ b/core/src/main/java/org/elasticsearch/common/transport/LocalTransportAddress.java @@ -53,6 +53,13 @@ public LocalTransportAddress(StreamInput in) throws IOException { id = in.readString(); } + /** + * Same as {@link LocalTransportAddress(StreamInput)} but accepts the second argument + */ + public LocalTransportAddress(StreamInput in, String hostString) throws IOException { + this(in); + } + @Override public void writeTo(StreamOutput out) throws IOException { out.writeString(id); diff --git a/core/src/main/java/org/elasticsearch/common/transport/PortsRange.java b/core/src/main/java/org/elasticsearch/common/transport/PortsRange.java index f88f1de8fe0d5..ffc87dab783fe 100644 --- a/core/src/main/java/org/elasticsearch/common/transport/PortsRange.java +++ b/core/src/main/java/org/elasticsearch/common/transport/PortsRange.java @@ -83,4 +83,11 @@ public boolean iterate(PortCallback callback) throws NumberFormatException { public interface PortCallback { boolean onPortNumber(int portNumber); } + + @Override + public String toString() { + return "PortsRange{" + + "portRange='" + portRange + '\'' + + '}'; + } } diff --git a/core/src/main/java/org/elasticsearch/common/transport/TransportAddress.java b/core/src/main/java/org/elasticsearch/common/transport/TransportAddress.java index 08e8af2bffe57..1467caf19eb67 100644 --- a/core/src/main/java/org/elasticsearch/common/transport/TransportAddress.java +++ b/core/src/main/java/org/elasticsearch/common/transport/TransportAddress.java @@ -21,7 +21,6 @@ import org.elasticsearch.common.io.stream.Writeable; - /** * */ @@ -49,4 +48,5 @@ public interface TransportAddress extends Writeable { boolean isLoopbackOrLinkLocalAddress(); String toString(); + } diff --git a/core/src/main/java/org/elasticsearch/common/transport/TransportAddressSerializers.java b/core/src/main/java/org/elasticsearch/common/transport/TransportAddressSerializers.java index 784bee52d63e3..5d5cf62720b8a 100644 --- a/core/src/main/java/org/elasticsearch/common/transport/TransportAddressSerializers.java +++ b/core/src/main/java/org/elasticsearch/common/transport/TransportAddressSerializers.java @@ -21,7 +21,6 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.io.stream.Writeable; import java.io.IOException; import java.util.HashMap; @@ -33,17 +32,17 @@ * A global registry of all supported types of {@link TransportAddress}s. This registry is not open for modification by plugins. 
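The serializer registry that follows maps a short type id to a reader, now typed as a two-argument function so the optional host string can be passed through. A minimal standalone sketch of an id-keyed reader registry in that style, with string payloads standing in for the real stream types:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class ReaderRegistry {
    /** Two-argument function that may throw IOException, analogous to the CheckedBiFunction in the diff. */
    @FunctionalInterface
    interface CheckedBiFunction<T, U, R> {
        R apply(T t, U u) throws IOException;
    }

    private static final Map<Short, CheckedBiFunction<String, String, String>> REGISTRY = new HashMap<>();

    static void register(short id, CheckedBiFunction<String, String, String> reader) {
        if (REGISTRY.containsKey(id)) {
            throw new IllegalStateException("Reader [" + id + "] already bound");
        }
        REGISTRY.put(id, reader);
    }

    static String read(short id, String payload, String hostString) throws IOException {
        CheckedBiFunction<String, String, String> reader = REGISTRY.get(id);
        if (reader == null) {
            throw new IOException("No reader mapped to [" + id + "]");
        }
        return reader.apply(payload, hostString);
    }

    public static void main(String[] args) throws IOException {
        register((short) 1, (payload, host) -> "inet[" + payload + "/" + host + "]");
        register((short) 2, (payload, host) -> "local[" + payload + "]"); // ignores the host string
        System.out.println(read((short) 1, "127.0.0.1:9300", "node-1"));
        System.out.println(read((short) 2, "txn-local", null));
    }
}
```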
*/ public abstract class TransportAddressSerializers { - private static final Map> ADDRESS_REGISTRY; + private static final Map> ADDRESS_REGISTRY; static { - Map> registry = new HashMap<>(); + Map> registry = new HashMap<>(); addAddressType(registry, InetSocketTransportAddress.TYPE_ID, InetSocketTransportAddress::new); addAddressType(registry, LocalTransportAddress.TYPE_ID, LocalTransportAddress::new); ADDRESS_REGISTRY = unmodifiableMap(registry); } - private static void addAddressType(Map> registry, short uniqueAddressTypeId, - Writeable.Reader address) { + private static void addAddressType(Map> registry, + short uniqueAddressTypeId, CheckedBiFunction address) { if (registry.containsKey(uniqueAddressTypeId)) { throw new IllegalStateException("Address [" + uniqueAddressTypeId + "] already bound"); } @@ -51,17 +50,35 @@ private static void addAddressType(Map } public static TransportAddress addressFromStream(StreamInput input) throws IOException { + return addressFromStream(input, null); + } + + public static TransportAddress addressFromStream(StreamInput input, String hostString) throws IOException { // TODO why don't we just use named writeables here? short addressUniqueId = input.readShort(); - Writeable.Reader addressType = ADDRESS_REGISTRY.get(addressUniqueId); + CheckedBiFunction addressType = ADDRESS_REGISTRY.get(addressUniqueId); if (addressType == null) { throw new IOException("No transport address mapped to [" + addressUniqueId + "]"); } - return addressType.read(input); + return addressType.apply(input, hostString); } public static void addressToStream(StreamOutput out, TransportAddress address) throws IOException { out.writeShort(address.uniqueAddressTypeId()); address.writeTo(out); } + + /** A BiFuntion that can throw an IOException */ + @FunctionalInterface + interface CheckedBiFunction { + + /** + * Applies this function to the given arguments. + * + * @param t the first function argument + * @param u the second function argument + * @return the function result + */ + R apply(T t, U u) throws IOException; + } } diff --git a/core/src/main/java/org/elasticsearch/common/unit/ByteSizeUnit.java b/core/src/main/java/org/elasticsearch/common/unit/ByteSizeUnit.java index 4a159957d5e36..e7e43b6d78ae5 100644 --- a/core/src/main/java/org/elasticsearch/common/unit/ByteSizeUnit.java +++ b/core/src/main/java/org/elasticsearch/common/unit/ByteSizeUnit.java @@ -19,16 +19,20 @@ package org.elasticsearch.common.unit; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; + +import java.io.IOException; + /** * A SizeUnit represents size at a given unit of * granularity and provides utility methods to convert across units. * A SizeUnit does not maintain size information, but only * helps organize and use size representations that may be maintained * separately across various contexts. - * - * */ -public enum ByteSizeUnit { +public enum ByteSizeUnit implements Writeable { BYTES { @Override public long toBytes(long size) { @@ -225,6 +229,13 @@ public long toPB(long size) { static final long MAX = Long.MAX_VALUE; + public static ByteSizeUnit fromId(int id) { + if (id < 0 || id >= values().length) { + throw new IllegalArgumentException("No byte size unit found for id [" + id + "]"); + } + return values()[id]; + } + /** * Scale d by m, checking for overflow. * This has a short name to make above code more readable. 
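`ByteSizeUnit` now goes over the wire as its ordinal and comes back through a bounds-checked `fromId`, so an unknown id fails with a clear error instead of an array index exception. The same pattern on a throwaway enum, with a plain `int` where the real code writes a variable-length int:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class EnumWireFormat {
    enum Unit { BYTES, KB, MB, GB }

    static Unit fromId(int id) {
        if (id < 0 || id >= Unit.values().length) {
            // reject unknown ids explicitly rather than letting values()[id] blow up
            throw new IllegalArgumentException("No unit found for id [" + id + "]");
        }
        return Unit.values()[id];
    }

    static byte[] write(Unit unit) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bos)) {
            out.writeInt(unit.ordinal());
        }
        return bos.toByteArray();
    }

    static Unit read(byte[] wire) throws IOException {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(wire))) {
            return fromId(in.readInt());
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(read(write(Unit.GB))); // GB
        try {
            fromId(42);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());   // No unit found for id [42]
        }
    }
}
```

Writing ordinals stays safe only as long as new constants are appended rather than reordered, which is the usual caveat with this representation.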
@@ -235,7 +246,6 @@ static long x(long d, long m, long over) { return d * m; } - public abstract long toBytes(long size); public abstract long toKB(long size); @@ -247,4 +257,16 @@ static long x(long d, long m, long over) { public abstract long toTB(long size); public abstract long toPB(long size); -} \ No newline at end of file + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeVInt(this.ordinal()); + } + + /** + * Reads a {@link ByteSizeUnit} from a given {@link StreamInput} + */ + public static ByteSizeUnit readFrom(StreamInput in) throws IOException { + return ByteSizeUnit.fromId(in.readVInt()); + } +} diff --git a/core/src/main/java/org/elasticsearch/common/unit/ByteSizeValue.java b/core/src/main/java/org/elasticsearch/common/unit/ByteSizeValue.java index 5944a8e06e6f6..e0782e32cae4e 100644 --- a/core/src/main/java/org/elasticsearch/common/unit/ByteSizeValue.java +++ b/core/src/main/java/org/elasticsearch/common/unit/ByteSizeValue.java @@ -29,145 +29,101 @@ import java.util.Locale; import java.util.Objects; -public class ByteSizeValue implements Writeable { +public class ByteSizeValue implements Writeable, Comparable { private final long size; - private final ByteSizeUnit sizeUnit; + private final ByteSizeUnit unit; public ByteSizeValue(StreamInput in) throws IOException { size = in.readVLong(); - sizeUnit = ByteSizeUnit.BYTES; + unit = ByteSizeUnit.BYTES; } @Override public void writeTo(StreamOutput out) throws IOException { - out.writeVLong(bytes()); + out.writeVLong(getBytes()); } public ByteSizeValue(long bytes) { this(bytes, ByteSizeUnit.BYTES); } - public ByteSizeValue(long size, ByteSizeUnit sizeUnit) { + public ByteSizeValue(long size, ByteSizeUnit unit) { this.size = size; - this.sizeUnit = sizeUnit; + this.unit = unit; } public int bytesAsInt() { - long bytes = bytes(); + long bytes = getBytes(); if (bytes > Integer.MAX_VALUE) { throw new IllegalArgumentException("size [" + toString() + "] is bigger than max int"); } return (int) bytes; } - public long bytes() { - return sizeUnit.toBytes(size); - } - public long getBytes() { - return bytes(); - } - - public long kb() { - return sizeUnit.toKB(size); + return unit.toBytes(size); } public long getKb() { - return kb(); - } - - public long mb() { - return sizeUnit.toMB(size); + return unit.toKB(size); } public long getMb() { - return mb(); - } - - public long gb() { - return sizeUnit.toGB(size); + return unit.toMB(size); } public long getGb() { - return gb(); - } - - public long tb() { - return sizeUnit.toTB(size); + return unit.toGB(size); } public long getTb() { - return tb(); - } - - public long pb() { - return sizeUnit.toPB(size); + return unit.toTB(size); } public long getPb() { - return pb(); - } - - public double kbFrac() { - return ((double) bytes()) / ByteSizeUnit.C1; + return unit.toPB(size); } public double getKbFrac() { - return kbFrac(); - } - - public double mbFrac() { - return ((double) bytes()) / ByteSizeUnit.C2; + return ((double) getBytes()) / ByteSizeUnit.C1; } public double getMbFrac() { - return mbFrac(); - } - - public double gbFrac() { - return ((double) bytes()) / ByteSizeUnit.C3; + return ((double) getBytes()) / ByteSizeUnit.C2; } public double getGbFrac() { - return gbFrac(); - } - - public double tbFrac() { - return ((double) bytes()) / ByteSizeUnit.C4; + return ((double) getBytes()) / ByteSizeUnit.C3; } public double getTbFrac() { - return tbFrac(); - } - - public double pbFrac() { - return ((double) bytes()) / ByteSizeUnit.C5; + return ((double) getBytes()) / 
ByteSizeUnit.C4; } public double getPbFrac() { - return pbFrac(); + return ((double) getBytes()) / ByteSizeUnit.C5; } @Override public String toString() { - long bytes = bytes(); + long bytes = getBytes(); double value = bytes; String suffix = "b"; if (bytes >= ByteSizeUnit.C5) { - value = pbFrac(); + value = getPbFrac(); suffix = "pb"; } else if (bytes >= ByteSizeUnit.C4) { - value = tbFrac(); + value = getTbFrac(); suffix = "tb"; } else if (bytes >= ByteSizeUnit.C3) { - value = gbFrac(); + value = getGbFrac(); suffix = "gb"; } else if (bytes >= ByteSizeUnit.C2) { - value = mbFrac(); + value = getMbFrac(); suffix = "mb"; } else if (bytes >= ByteSizeUnit.C1) { - value = kbFrac(); + value = getKbFrac(); suffix = "kb"; } return Strings.format1Decimals(value, suffix); @@ -235,15 +191,18 @@ public boolean equals(Object o) { return false; } - ByteSizeValue sizeValue = (ByteSizeValue) o; - - return bytes() == sizeValue.bytes(); + return compareTo((ByteSizeValue) o) == 0; } @Override public int hashCode() { - int result = Long.hashCode(size); - result = 31 * result + (sizeUnit != null ? sizeUnit.hashCode() : 0); - return result; + return Double.hashCode(((double) size) * unit.toBytes(1)); + } + + @Override + public int compareTo(ByteSizeValue other) { + double thisValue = ((double) size) * unit.toBytes(1); + double otherValue = ((double) other.size) * other.unit.toBytes(1); + return Double.compare(thisValue, otherValue); } } diff --git a/core/src/main/java/org/elasticsearch/common/unit/MemorySizeValue.java b/core/src/main/java/org/elasticsearch/common/unit/MemorySizeValue.java index 6a99e06ac01ae..2830d8318a343 100644 --- a/core/src/main/java/org/elasticsearch/common/unit/MemorySizeValue.java +++ b/core/src/main/java/org/elasticsearch/common/unit/MemorySizeValue.java @@ -42,7 +42,7 @@ public static ByteSizeValue parseBytesSizeValueOrHeapRatio(String sValue, String if (percent < 0 || percent > 100) { throw new ElasticsearchParseException("percentage should be in [0-100], got [{}]", percentAsString); } - return new ByteSizeValue((long) ((percent / 100) * JvmInfo.jvmInfo().getMem().getHeapMax().bytes()), ByteSizeUnit.BYTES); + return new ByteSizeValue((long) ((percent / 100) * JvmInfo.jvmInfo().getMem().getHeapMax().getBytes()), ByteSizeUnit.BYTES); } catch (NumberFormatException e) { throw new ElasticsearchParseException("failed to parse [{}] as a double", e, percentAsString); } diff --git a/core/src/main/java/org/elasticsearch/common/unit/SizeValue.java b/core/src/main/java/org/elasticsearch/common/unit/SizeValue.java index cba51f29eeb4a..0f90582007bfd 100644 --- a/core/src/main/java/org/elasticsearch/common/unit/SizeValue.java +++ b/core/src/main/java/org/elasticsearch/common/unit/SizeValue.java @@ -27,7 +27,7 @@ import java.io.IOException; -public class SizeValue implements Writeable { +public class SizeValue implements Writeable, Comparable { private final long size; private final SizeUnit sizeUnit; @@ -201,18 +201,18 @@ public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; - SizeValue sizeValue = (SizeValue) o; - - if (size != sizeValue.size) return false; - if (sizeUnit != sizeValue.sizeUnit) return false; - - return true; + return compareTo((SizeValue) o) == 0; } @Override public int hashCode() { - int result = Long.hashCode(size); - result = 31 * result + (sizeUnit != null ? 
sizeUnit.hashCode() : 0); - return result; + return Double.hashCode(((double) size) * sizeUnit.toSingles(1)); + } + + @Override + public int compareTo(SizeValue other) { + double thisValue = ((double) size) * sizeUnit.toSingles(1); + double otherValue = ((double) other.size) * other.sizeUnit.toSingles(1); + return Double.compare(thisValue, otherValue); } } diff --git a/core/src/main/java/org/elasticsearch/common/unit/TimeValue.java b/core/src/main/java/org/elasticsearch/common/unit/TimeValue.java index ed67019c10399..0f6eabed1e3de 100644 --- a/core/src/main/java/org/elasticsearch/common/unit/TimeValue.java +++ b/core/src/main/java/org/elasticsearch/common/unit/TimeValue.java @@ -31,6 +31,7 @@ import java.io.IOException; import java.util.Collections; +import java.util.EnumMap; import java.util.HashMap; import java.util.HashSet; import java.util.Locale; @@ -39,7 +40,7 @@ import java.util.Set; import java.util.concurrent.TimeUnit; -public class TimeValue implements Writeable { +public class TimeValue implements Writeable, Comparable { /** How many nano-seconds in one milli-second */ public static final long NSEC_PER_MSEC = TimeUnit.NANOSECONDS.convert(1, TimeUnit.MILLISECONDS); @@ -48,7 +49,7 @@ public class TimeValue implements Writeable { private static Map BYTE_TIME_UNIT_MAP; static { - final Map timeUnitByteMap = new HashMap<>(); + final Map timeUnitByteMap = new EnumMap<>(TimeUnit.class); timeUnitByteMap.put(TimeUnit.NANOSECONDS, (byte)0); timeUnitByteMap.put(TimeUnit.MICROSECONDS, (byte)1); timeUnitByteMap.put(TimeUnit.MILLISECONDS, (byte)2); @@ -249,6 +250,12 @@ public String format(PeriodType type) { return PeriodFormat.getDefault().withParseType(type).print(period); } + /** + * Returns a {@link String} representation of the current {@link TimeValue}. + * + * Note that this method might produce fractional time values (ex 1.6m) which cannot be + * parsed by method like {@link TimeValue#parse(String, String, String)}. + */ @Override public String toString() { if (duration < 0) { @@ -319,22 +326,20 @@ public static TimeValue parseTimeValue(String sValue, TimeValue defaultValue, St } final String normalized = sValue.toLowerCase(Locale.ROOT).trim(); if (normalized.endsWith("nanos")) { - return new TimeValue(parse(sValue, normalized, 5), TimeUnit.NANOSECONDS); + return new TimeValue(parse(sValue, normalized, "nanos"), TimeUnit.NANOSECONDS); } else if (normalized.endsWith("micros")) { - return new TimeValue(parse(sValue, normalized, 6), TimeUnit.MICROSECONDS); + return new TimeValue(parse(sValue, normalized, "micros"), TimeUnit.MICROSECONDS); } else if (normalized.endsWith("ms")) { - return new TimeValue(parse(sValue, normalized, 2), TimeUnit.MILLISECONDS); + return new TimeValue(parse(sValue, normalized, "ms"), TimeUnit.MILLISECONDS); } else if (normalized.endsWith("s")) { - return new TimeValue(parse(sValue, normalized, 1), TimeUnit.SECONDS); + return new TimeValue(parse(sValue, normalized, "s"), TimeUnit.SECONDS); } else if (sValue.endsWith("m")) { - // parsing minutes should be case sensitive as `M` is generally - // accepted to mean months not minutes. This is the only case where - // the upper and lower case forms indicate different time units - return new TimeValue(parse(sValue, normalized, 1), TimeUnit.MINUTES); + // parsing minutes should be case-sensitive as 'M' means "months", not "minutes"; this is the only special case. 
+ return new TimeValue(parse(sValue, normalized, "m"), TimeUnit.MINUTES); } else if (normalized.endsWith("h")) { - return new TimeValue(parse(sValue, normalized, 1), TimeUnit.HOURS); + return new TimeValue(parse(sValue, normalized, "h"), TimeUnit.HOURS); } else if (normalized.endsWith("d")) { - return new TimeValue(parse(sValue, normalized, 1), TimeUnit.DAYS); + return new TimeValue(parse(sValue, normalized, "d"), TimeUnit.DAYS); } else if (normalized.matches("-0*1")) { return TimeValue.MINUS_ONE; } else if (normalized.matches("0+")) { @@ -348,8 +353,8 @@ public static TimeValue parseTimeValue(String sValue, TimeValue defaultValue, St } } - private static long parse(final String initialInput, final String normalized, final int suffixLength) { - final String s = normalized.substring(0, normalized.length() - suffixLength).trim(); + private static long parse(final String initialInput, final String normalized, final String suffix) { + final String s = normalized.substring(0, normalized.length() - suffix.length()).trim(); try { return Long.parseLong(s); } catch (final NumberFormatException e) { @@ -375,17 +380,22 @@ public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; - TimeValue timeValue = (TimeValue) o; - return timeUnit.toNanos(duration) == timeValue.timeUnit.toNanos(timeValue.duration); + return this.compareTo(((TimeValue) o)) == 0; } @Override public int hashCode() { - long normalized = timeUnit.toNanos(duration); - return Long.hashCode(normalized); + return Double.hashCode(((double) duration) * timeUnit.toNanos(1)); } public static long nsecToMSec(long ns) { return ns / NSEC_PER_MSEC; } + + @Override + public int compareTo(TimeValue timeValue) { + double thisValue = ((double) duration) * timeUnit.toNanos(1); + double otherValue = ((double) timeValue.duration) * timeValue.timeUnit.toNanos(1); + return Double.compare(thisValue, otherValue); + } } diff --git a/core/src/main/java/org/elasticsearch/common/util/AbstractArray.java b/core/src/main/java/org/elasticsearch/common/util/AbstractArray.java index 1187dfef7e5a0..6a4895c7950ad 100644 --- a/core/src/main/java/org/elasticsearch/common/util/AbstractArray.java +++ b/core/src/main/java/org/elasticsearch/common/util/AbstractArray.java @@ -23,13 +23,14 @@ import java.util.Collection; import java.util.Collections; +import java.util.concurrent.atomic.AtomicBoolean; abstract class AbstractArray implements BigArray { private final BigArrays bigArrays; public final boolean clearOnResize; - private boolean released = false; + private final AtomicBoolean closed = new AtomicBoolean(false); AbstractArray(BigArrays bigArrays, boolean clearOnResize) { this.bigArrays = bigArrays; @@ -38,10 +39,13 @@ abstract class AbstractArray implements BigArray { @Override public final void close() { - bigArrays.adjustBreaker(-ramBytesUsed()); - assert !released : "double release"; - released = true; - doClose(); + if (closed.compareAndSet(false, true)) { + try { + bigArrays.adjustBreaker(-ramBytesUsed(), true); + } finally { + doClose(); + } + } } protected abstract void doClose(); diff --git a/core/src/main/java/org/elasticsearch/common/util/AbstractBigArray.java b/core/src/main/java/org/elasticsearch/common/util/AbstractBigArray.java index f26dad1fdb5af..73a05f7f2cfd6 100644 --- a/core/src/main/java/org/elasticsearch/common/util/AbstractBigArray.java +++ b/core/src/main/java/org/elasticsearch/common/util/AbstractBigArray.java @@ -87,6 +87,11 @@ public final long size() { @Override public final long 
ramBytesUsed() { + return ramBytesEstimated(size); + } + + /** Given the size of the array, estimate the number of bytes it will use. */ + public final long ramBytesEstimated(final long size) { // rough approximate, we only take into account the size of the values, not the overhead of the array objects return ((long) pageIndex(size - 1) + 1) * pageSize() * numBytesPerElement(); } diff --git a/core/src/main/java/org/elasticsearch/common/util/ArrayUtils.java b/core/src/main/java/org/elasticsearch/common/util/ArrayUtils.java index bb8442efa08c0..de1d665f7c6d8 100644 --- a/core/src/main/java/org/elasticsearch/common/util/ArrayUtils.java +++ b/core/src/main/java/org/elasticsearch/common/util/ArrayUtils.java @@ -87,5 +87,4 @@ public static T[] concat(T[] one, T[] other, Class clazz) { System.arraycopy(other, 0, target, one.length, other.length); return target; } - } diff --git a/core/src/main/java/org/elasticsearch/common/util/BigArrays.java b/core/src/main/java/org/elasticsearch/common/util/BigArrays.java index 6a15a3d90009b..5c539a791cf6b 100644 --- a/core/src/main/java/org/elasticsearch/common/util/BigArrays.java +++ b/core/src/main/java/org/elasticsearch/common/util/BigArrays.java @@ -25,7 +25,6 @@ import org.elasticsearch.common.Nullable; import org.elasticsearch.common.breaker.CircuitBreaker; import org.elasticsearch.common.breaker.CircuitBreakingException; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.recycler.Recycler; @@ -91,7 +90,7 @@ public void close() { private abstract static class AbstractArrayWrapper extends AbstractArray implements BigArray { - protected static final long SHALLOW_SIZE = RamUsageEstimator.shallowSizeOfInstance(ByteArrayWrapper.class); + static final long SHALLOW_SIZE = RamUsageEstimator.shallowSizeOfInstance(ByteArrayWrapper.class); private final Releasable releasable; private final long size; @@ -377,6 +376,7 @@ public BigArrays(Settings settings, @Nullable final CircuitBreakerService breake // Checking the breaker is disabled if not specified this(new PageCacheRecycler(settings), breakerService, false); } + // public for tests public BigArrays(PageCacheRecycler recycler, @Nullable final CircuitBreakerService breakerService, boolean checkBreaker) { this.checkBreaker = checkBreaker; @@ -392,21 +392,26 @@ public BigArrays(PageCacheRecycler recycler, @Nullable final CircuitBreakerServi /** * Adjust the circuit breaker with the given delta, if the delta is * negative, or checkBreaker is false, the breaker will be adjusted - * without tripping + * without tripping. If the data was already created before calling + * this method, and the breaker trips, we add the delta without breaking + * to account for the created data. If the data has not been created yet, + * we do not add the delta to the breaker if it trips. 
*/ - void adjustBreaker(long delta) { + void adjustBreaker(final long delta, final boolean isDataAlreadyCreated) { if (this.breakerService != null) { CircuitBreaker breaker = this.breakerService.getBreaker(CircuitBreaker.REQUEST); - if (this.checkBreaker == true) { + if (this.checkBreaker) { // checking breaker means potentially tripping, but it doesn't // have to if the delta is negative if (delta > 0) { try { breaker.addEstimateBytesAndMaybeBreak(delta, ""); } catch (CircuitBreakingException e) { - // since we've already created the data, we need to - // add it so closing the stream re-adjusts properly - breaker.addWithoutBreaking(delta); + if (isDataAlreadyCreated) { + // since we've already created the data, we need to + // add it so closing the stream re-adjusts properly + breaker.addWithoutBreaking(delta); + } // re-throw the original exception throw e; } @@ -435,15 +440,20 @@ public CircuitBreakerService breakerService() { private T resizeInPlace(T array, long newSize) { final long oldMemSize = array.ramBytesUsed(); + final long oldSize = array.size(); + assert oldMemSize == array.ramBytesEstimated(oldSize) : + "ram bytes used should equal that which was previously estimated: ramBytesUsed=" + + oldMemSize + ", ramBytesEstimated=" + array.ramBytesEstimated(oldSize); + final long estimatedIncreaseInBytes = array.ramBytesEstimated(newSize) - oldMemSize; + adjustBreaker(estimatedIncreaseInBytes, false); array.resize(newSize); - adjustBreaker(array.ramBytesUsed() - oldMemSize); return array; } private T validate(T array) { boolean success = false; try { - adjustBreaker(array.ramBytesUsed()); + adjustBreaker(array.ramBytesUsed(), true); success = true; } finally { if (!success) { @@ -459,16 +469,17 @@ private T validate(T array) { * @param clearOnResize whether values should be set to 0 on initialization and resize */ public ByteArray newByteArray(long size, boolean clearOnResize) { - final ByteArray array; if (size > BYTE_PAGE_SIZE) { - array = new BigByteArray(size, this, clearOnResize); + // when allocating big arrays, we want to first ensure we have the capacity by + // checking with the circuit breaker before attempting to allocate + adjustBreaker(BigByteArray.estimateRamBytes(size), false); + return new BigByteArray(size, this, clearOnResize); } else if (size >= BYTE_PAGE_SIZE / 2 && recycler != null) { final Recycler.V page = recycler.bytePage(clearOnResize); - array = new ByteArrayWrapper(this, page.v(), size, page, clearOnResize); + return validate(new ByteArrayWrapper(this, page.v(), size, page, clearOnResize)); } else { - array = new ByteArrayWrapper(this, new byte[(int) size], size, null, clearOnResize); + return validate(new ByteArrayWrapper(this, new byte[(int) size], size, null, clearOnResize)); } - return validate(array); } /** @@ -541,16 +552,17 @@ public boolean equals(ByteArray array, ByteArray other) { * @param clearOnResize whether values should be set to 0 on initialization and resize */ public IntArray newIntArray(long size, boolean clearOnResize) { - final IntArray array; if (size > INT_PAGE_SIZE) { - array = new BigIntArray(size, this, clearOnResize); + // when allocating big arrays, we want to first ensure we have the capacity by + // checking with the circuit breaker before attempting to allocate + adjustBreaker(BigIntArray.estimateRamBytes(size), false); + return new BigIntArray(size, this, clearOnResize); } else if (size >= INT_PAGE_SIZE / 2 && recycler != null) { final Recycler.V page = recycler.intPage(clearOnResize); - array = new IntArrayWrapper(this, 
page.v(), size, page, clearOnResize); + return validate(new IntArrayWrapper(this, page.v(), size, page, clearOnResize)); } else { - array = new IntArrayWrapper(this, new int[(int) size], size, null, clearOnResize); + return validate(new IntArrayWrapper(this, new int[(int) size], size, null, clearOnResize)); } - return validate(array); } /** @@ -591,16 +603,17 @@ public IntArray grow(IntArray array, long minSize) { * @param clearOnResize whether values should be set to 0 on initialization and resize */ public LongArray newLongArray(long size, boolean clearOnResize) { - final LongArray array; if (size > LONG_PAGE_SIZE) { - array = new BigLongArray(size, this, clearOnResize); + // when allocating big arrays, we want to first ensure we have the capacity by + // checking with the circuit breaker before attempting to allocate + adjustBreaker(BigLongArray.estimateRamBytes(size), false); + return new BigLongArray(size, this, clearOnResize); } else if (size >= LONG_PAGE_SIZE / 2 && recycler != null) { final Recycler.V page = recycler.longPage(clearOnResize); - array = new LongArrayWrapper(this, page.v(), size, page, clearOnResize); + return validate(new LongArrayWrapper(this, page.v(), size, page, clearOnResize)); } else { - array = new LongArrayWrapper(this, new long[(int) size], size, null, clearOnResize); + return validate(new LongArrayWrapper(this, new long[(int) size], size, null, clearOnResize)); } - return validate(array); } /** @@ -641,16 +654,17 @@ public LongArray grow(LongArray array, long minSize) { * @param clearOnResize whether values should be set to 0 on initialization and resize */ public DoubleArray newDoubleArray(long size, boolean clearOnResize) { - final DoubleArray arr; if (size > LONG_PAGE_SIZE) { - arr = new BigDoubleArray(size, this, clearOnResize); + // when allocating big arrays, we want to first ensure we have the capacity by + // checking with the circuit breaker before attempting to allocate + adjustBreaker(BigDoubleArray.estimateRamBytes(size), false); + return new BigDoubleArray(size, this, clearOnResize); } else if (size >= LONG_PAGE_SIZE / 2 && recycler != null) { final Recycler.V page = recycler.longPage(clearOnResize); - arr = new DoubleArrayWrapper(this, page.v(), size, page, clearOnResize); + return validate(new DoubleArrayWrapper(this, page.v(), size, page, clearOnResize)); } else { - arr = new DoubleArrayWrapper(this, new long[(int) size], size, null, clearOnResize); + return validate(new DoubleArrayWrapper(this, new long[(int) size], size, null, clearOnResize)); } - return validate(arr); } /** Allocate a new {@link DoubleArray} of the given capacity. 
*/ @@ -688,16 +702,17 @@ public DoubleArray grow(DoubleArray array, long minSize) { * @param clearOnResize whether values should be set to 0 on initialization and resize */ public FloatArray newFloatArray(long size, boolean clearOnResize) { - final FloatArray array; if (size > INT_PAGE_SIZE) { - array = new BigFloatArray(size, this, clearOnResize); + // when allocating big arrays, we want to first ensure we have the capacity by + // checking with the circuit breaker before attempting to allocate + adjustBreaker(BigFloatArray.estimateRamBytes(size), false); + return new BigFloatArray(size, this, clearOnResize); } else if (size >= INT_PAGE_SIZE / 2 && recycler != null) { final Recycler.V page = recycler.intPage(clearOnResize); - array = new FloatArrayWrapper(this, page.v(), size, page, clearOnResize); + return validate(new FloatArrayWrapper(this, page.v(), size, page, clearOnResize)); } else { - array = new FloatArrayWrapper(this, new int[(int) size], size, null, clearOnResize); + return validate(new FloatArrayWrapper(this, new int[(int) size], size, null, clearOnResize)); } - return validate(array); } /** Allocate a new {@link FloatArray} of the given capacity. */ @@ -736,14 +751,16 @@ public FloatArray grow(FloatArray array, long minSize) { public ObjectArray newObjectArray(long size) { final ObjectArray array; if (size > OBJECT_PAGE_SIZE) { - array = new BigObjectArray<>(size, this); + // when allocating big arrays, we want to first ensure we have the capacity by + // checking with the circuit breaker before attempting to allocate + adjustBreaker(BigObjectArray.estimateRamBytes(size), false); + return new BigObjectArray<>(size, this); } else if (size >= OBJECT_PAGE_SIZE / 2 && recycler != null) { final Recycler.V page = recycler.objectPage(); - array = new ObjectArrayWrapper<>(this, page.v(), size, page); + return validate(new ObjectArrayWrapper<>(this, page.v(), size, page)); } else { - array = new ObjectArrayWrapper<>(this, new Object[(int) size], size, null); + return validate(new ObjectArrayWrapper<>(this, new Object[(int) size], size, null)); } - return validate(array); } /** Resize the array to the exact provided size. */ diff --git a/core/src/main/java/org/elasticsearch/common/util/BigByteArray.java b/core/src/main/java/org/elasticsearch/common/util/BigByteArray.java index cac3132385f6c..789e6dc6bbaf6 100644 --- a/core/src/main/java/org/elasticsearch/common/util/BigByteArray.java +++ b/core/src/main/java/org/elasticsearch/common/util/BigByteArray.java @@ -33,10 +33,12 @@ */ final class BigByteArray extends AbstractBigArray implements ByteArray { + private static final BigByteArray ESTIMATOR = new BigByteArray(0, BigArrays.NON_RECYCLING_INSTANCE, false); + private byte[][] pages; /** Constructor. */ - public BigByteArray(long size, BigArrays bigArrays, boolean clearOnResize) { + BigByteArray(long size, BigArrays bigArrays, boolean clearOnResize) { super(BYTE_PAGE_SIZE, bigArrays, clearOnResize); this.size = size; pages = new byte[numPages(size)][]; @@ -44,7 +46,7 @@ public BigByteArray(long size, BigArrays bigArrays, boolean clearOnResize) { pages[i] = newBytePage(i); } } - + @Override public byte get(long index) { final int pageIndex = pageIndex(index); @@ -147,4 +149,9 @@ public void resize(long newSize) { this.size = newSize; } + /** Estimates the number of bytes that would be consumed by an array of the given size. 
*/ + public static long estimateRamBytes(final long size) { + return ESTIMATOR.ramBytesEstimated(size); + } + } diff --git a/core/src/main/java/org/elasticsearch/common/util/BigDoubleArray.java b/core/src/main/java/org/elasticsearch/common/util/BigDoubleArray.java index 4aab593affe6d..a2c770ee9958d 100644 --- a/core/src/main/java/org/elasticsearch/common/util/BigDoubleArray.java +++ b/core/src/main/java/org/elasticsearch/common/util/BigDoubleArray.java @@ -32,10 +32,12 @@ */ final class BigDoubleArray extends AbstractBigArray implements DoubleArray { + private static final BigDoubleArray ESTIMATOR = new BigDoubleArray(0, BigArrays.NON_RECYCLING_INSTANCE, false); + private long[][] pages; /** Constructor. */ - public BigDoubleArray(long size, BigArrays bigArrays, boolean clearOnResize) { + BigDoubleArray(long size, BigArrays bigArrays, boolean clearOnResize) { super(LONG_PAGE_SIZE, bigArrays, clearOnResize); this.size = size; pages = new long[numPages(size)][]; @@ -110,4 +112,9 @@ public void fill(long fromIndex, long toIndex, double value) { } } + /** Estimates the number of bytes that would be consumed by an array of the given size. */ + public static long estimateRamBytes(final long size) { + return ESTIMATOR.ramBytesEstimated(size); + } + } diff --git a/core/src/main/java/org/elasticsearch/common/util/BigFloatArray.java b/core/src/main/java/org/elasticsearch/common/util/BigFloatArray.java index 1fa79a9f3dbe0..b67db2e84de31 100644 --- a/core/src/main/java/org/elasticsearch/common/util/BigFloatArray.java +++ b/core/src/main/java/org/elasticsearch/common/util/BigFloatArray.java @@ -32,10 +32,12 @@ */ final class BigFloatArray extends AbstractBigArray implements FloatArray { + private static final BigFloatArray ESTIMATOR = new BigFloatArray(0, BigArrays.NON_RECYCLING_INSTANCE, false); + private int[][] pages; /** Constructor. */ - public BigFloatArray(long size, BigArrays bigArrays, boolean clearOnResize) { + BigFloatArray(long size, BigArrays bigArrays, boolean clearOnResize) { super(INT_PAGE_SIZE, bigArrays, clearOnResize); this.size = size; pages = new int[numPages(size)][]; @@ -110,4 +112,9 @@ public void fill(long fromIndex, long toIndex, float value) { } } + /** Estimates the number of bytes that would be consumed by an array of the given size. */ + public static long estimateRamBytes(final long size) { + return ESTIMATOR.ramBytesEstimated(size); + } + } diff --git a/core/src/main/java/org/elasticsearch/common/util/BigIntArray.java b/core/src/main/java/org/elasticsearch/common/util/BigIntArray.java index 4ce5fc7aceeaf..d2a1ca3f49c61 100644 --- a/core/src/main/java/org/elasticsearch/common/util/BigIntArray.java +++ b/core/src/main/java/org/elasticsearch/common/util/BigIntArray.java @@ -32,10 +32,12 @@ */ final class BigIntArray extends AbstractBigArray implements IntArray { + private static final BigIntArray ESTIMATOR = new BigIntArray(0, BigArrays.NON_RECYCLING_INSTANCE, false); + private int[][] pages; /** Constructor. */ - public BigIntArray(long size, BigArrays bigArrays, boolean clearOnResize) { + BigIntArray(long size, BigArrays bigArrays, boolean clearOnResize) { super(INT_PAGE_SIZE, bigArrays, clearOnResize); this.size = size; pages = new int[numPages(size)][]; @@ -108,4 +110,9 @@ public void resize(long newSize) { this.size = newSize; } + /** Estimates the number of bytes that would be consumed by an array of the given size. 
*/ + public static long estimateRamBytes(final long size) { + return ESTIMATOR.ramBytesEstimated(size); + } + } diff --git a/core/src/main/java/org/elasticsearch/common/util/BigLongArray.java b/core/src/main/java/org/elasticsearch/common/util/BigLongArray.java index 2e3248143b4e8..69f919382f8e4 100644 --- a/core/src/main/java/org/elasticsearch/common/util/BigLongArray.java +++ b/core/src/main/java/org/elasticsearch/common/util/BigLongArray.java @@ -32,10 +32,12 @@ */ final class BigLongArray extends AbstractBigArray implements LongArray { + private static final BigLongArray ESTIMATOR = new BigLongArray(0, BigArrays.NON_RECYCLING_INSTANCE, false); + private long[][] pages; /** Constructor. */ - public BigLongArray(long size, BigArrays bigArrays, boolean clearOnResize) { + BigLongArray(long size, BigArrays bigArrays, boolean clearOnResize) { super(LONG_PAGE_SIZE, bigArrays, clearOnResize); this.size = size; pages = new long[numPages(size)][]; @@ -111,4 +113,9 @@ public void fill(long fromIndex, long toIndex, long value) { } } + /** Estimates the number of bytes that would be consumed by an array of the given size. */ + public static long estimateRamBytes(final long size) { + return ESTIMATOR.ramBytesEstimated(size); + } + } diff --git a/core/src/main/java/org/elasticsearch/common/util/BigObjectArray.java b/core/src/main/java/org/elasticsearch/common/util/BigObjectArray.java index 19a41d3096da9..1ed012e2bb393 100644 --- a/core/src/main/java/org/elasticsearch/common/util/BigObjectArray.java +++ b/core/src/main/java/org/elasticsearch/common/util/BigObjectArray.java @@ -32,10 +32,12 @@ */ final class BigObjectArray extends AbstractBigArray implements ObjectArray { + private static final BigObjectArray ESTIMATOR = new BigObjectArray(0, BigArrays.NON_RECYCLING_INSTANCE); + private Object[][] pages; /** Constructor. */ - public BigObjectArray(long size, BigArrays bigArrays) { + BigObjectArray(long size, BigArrays bigArrays) { super(OBJECT_PAGE_SIZE, bigArrays, true); this.size = size; pages = new Object[numPages(size)][]; @@ -85,4 +87,9 @@ public void resize(long newSize) { this.size = newSize; } -} \ No newline at end of file + /** Estimates the number of bytes that would be consumed by an array of the given size. */ + public static long estimateRamBytes(final long size) { + return ESTIMATOR.ramBytesEstimated(size); + } + +} diff --git a/core/src/main/java/org/elasticsearch/common/util/Callback.java b/core/src/main/java/org/elasticsearch/common/util/Callback.java deleted file mode 100644 index d4e3c94f70048..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/util/Callback.java +++ /dev/null @@ -1,29 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.common.util; - -/** - * - */ -public interface Callback { - - void handle(T t); - -} diff --git a/core/src/main/java/org/elasticsearch/common/util/CollectionUtils.java b/core/src/main/java/org/elasticsearch/common/util/CollectionUtils.java index 5171be0ee91a3..54a49f7e4f254 100644 --- a/core/src/main/java/org/elasticsearch/common/util/CollectionUtils.java +++ b/core/src/main/java/org/elasticsearch/common/util/CollectionUtils.java @@ -226,7 +226,7 @@ private static class RotatedList extends AbstractList implements RandomAcc private final List in; private final int distance; - public RotatedList(List list, int distance) { + RotatedList(List list, int distance) { if (distance < 0 || distance >= list.size()) { throw new IllegalArgumentException(); } diff --git a/core/src/main/java/org/elasticsearch/common/util/ExtensionPoint.java b/core/src/main/java/org/elasticsearch/common/util/ExtensionPoint.java deleted file mode 100644 index fcdfaafb1d5b9..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/util/ExtensionPoint.java +++ /dev/null @@ -1,246 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.common.util; - -import org.elasticsearch.common.inject.Binder; -import org.elasticsearch.common.inject.multibindings.MapBinder; -import org.elasticsearch.common.inject.multibindings.Multibinder; -import org.elasticsearch.common.settings.Settings; - -import java.util.Collections; -import java.util.HashMap; -import java.util.HashSet; -import java.util.Map; -import java.util.Set; - -/** - * This class defines an official elasticsearch extension point. It registers - * all extensions by a single name and ensures that extensions are not registered - * more than once. - */ -public abstract class ExtensionPoint { - protected final String name; - protected final Class[] singletons; - - /** - * Creates a new extension point - * - * @param name the human readable underscore case name of the extension point. This is used in error messages etc. - * @param singletons a list of singletons to bind with this extension point - these are bound in {@link #bind(Binder)} - */ - public ExtensionPoint(String name, Class... singletons) { - this.name = name; - this.singletons = singletons; - } - - /** - * Binds the extension as well as the singletons to the given guice binder. - * - * @param binder the binder to use - */ - public final void bind(Binder binder) { - for (Class c : singletons) { - binder.bind(c).asEagerSingleton(); - } - bindExtensions(binder); - } - - /** - * Subclasses can bind their type, map or set extensions here. - */ - protected abstract void bindExtensions(Binder binder); - - /** - * A map based extension point which allows to register keyed implementations ie. 
parsers or some kind of strategies. - */ - public static class ClassMap extends ExtensionPoint { - protected final Class extensionClass; - protected final Map> extensions = new HashMap<>(); - private final Set reservedKeys; - - /** - * Creates a new {@link ClassMap} - * - * @param name the human readable underscore case name of the extension point. This is used in error messages etc. - * @param extensionClass the base class that should be extended - * @param singletons a list of singletons to bind with this extension point - these are bound in {@link #bind(Binder)} - * @param reservedKeys a set of reserved keys by internal implementations - */ - public ClassMap(String name, Class extensionClass, Set reservedKeys, Class... singletons) { - super(name, singletons); - this.extensionClass = extensionClass; - this.reservedKeys = reservedKeys; - } - - /** - * Returns the extension for the given key or null - */ - public Class getExtension(String type) { - return extensions.get(type); - } - - /** - * Registers an extension class for a given key. This method will thr - * - * @param key the extensions key - * @param extension the extension - * @throws IllegalArgumentException iff the key is already registered or if the key is a reserved key for an internal implementation - */ - public final void registerExtension(String key, Class extension) { - if (extensions.containsKey(key) || reservedKeys.contains(key)) { - throw new IllegalArgumentException("Can't register the same [" + this.name + "] more than once for [" + key + "]"); - } - extensions.put(key, extension); - } - - @Override - protected final void bindExtensions(Binder binder) { - MapBinder parserMapBinder = MapBinder.newMapBinder(binder, String.class, extensionClass); - for (Map.Entry> clazz : extensions.entrySet()) { - parserMapBinder.addBinding(clazz.getKey()).to(clazz.getValue()); - } - } - } - - /** - * A Type extension point which basically allows to registered keyed extensions like {@link ClassMap} - * but doesn't instantiate and bind all the registered key value pairs but instead replace a singleton based on a given setting via {@link #bindType(Binder, Settings, String, String)} - * Note: {@link #bind(Binder)} is not supported by this class - */ - public static final class SelectedType extends ClassMap { - - public SelectedType(String name, Class extensionClass) { - super(name, extensionClass, Collections.emptySet()); - } - - /** - * Binds the extension class to the class that is registered for the give configured for the settings key in - * the settings object. 
- * - * @param binder the binder to use - * @param settings the settings to look up the key to find the implementation to bind - * @param settingsKey the key to use with the settings - * @param defaultValue the default value if the settings do not contain the key, or null if there is no default - * @return the actual bound type key - */ - public String bindType(Binder binder, Settings settings, String settingsKey, String defaultValue) { - final String type = settings.get(settingsKey, defaultValue); - if (type == null) { - throw new IllegalArgumentException("Missing setting [" + settingsKey + "]"); - } - final Class instance = getExtension(type); - if (instance == null) { - throw new IllegalArgumentException("Unknown [" + this.name + "] type [" + type + "] possible values: " - + extensions.keySet()); - } - if (extensionClass == instance) { - binder.bind(extensionClass).asEagerSingleton(); - } else { - binder.bind(extensionClass).to(instance).asEagerSingleton(); - } - return type; - } - - } - - /** - * A set based extension point which allows to register extended classes that might be used to chain additional functionality etc. - */ - public static final class ClassSet extends ExtensionPoint { - protected final Class extensionClass; - private final Set> extensions = new HashSet<>(); - - /** - * Creates a new {@link ClassSet} - * - * @param name the human readable underscore case name of the extension point. This is used in error messages etc. - * @param extensionClass the base class that should be extended - * @param singletons a list of singletons to bind with this extension point - these are bound in {@link #bind(Binder)} - */ - public ClassSet(String name, Class extensionClass, Class... singletons) { - super(name, singletons); - this.extensionClass = extensionClass; - } - - /** - * Registers a new extension - * - * @param extension the extension to register - * @throws IllegalArgumentException iff the class is already registered - */ - public void registerExtension(Class extension) { - if (extensions.contains(extension)) { - throw new IllegalArgumentException("Can't register the same [" + this.name + "] more than once for [" + extension.getName() + "]"); - } - extensions.add(extension); - } - - @Override - protected void bindExtensions(Binder binder) { - Multibinder allocationMultibinder = Multibinder.newSetBinder(binder, extensionClass); - for (Class clazz : extensions) { - binder.bind(clazz).asEagerSingleton(); - allocationMultibinder.addBinding().to(clazz); - } - } - } - - /** - * A an instance of a map, mapping one instance value to another. Both key and value are instances, not classes - * like with other extension points. - */ - public static final class InstanceMap extends ExtensionPoint { - private final Map map = new HashMap<>(); - private final Class keyType; - private final Class valueType; - - /** - * Creates a new {@link ClassSet} - * - * @param name the human readable underscore case name of the extension point. This is used in error messages. - * @param singletons a list of singletons to bind with this extension point - these are bound in {@link #bind(Binder)} - */ - public InstanceMap(String name, Class keyType, Class valueType, Class... 
singletons) { - super(name, singletons); - this.keyType = keyType; - this.valueType = valueType; - } - - /** - * Registers a mapping from {@code key} to {@code value} - * - * @throws IllegalArgumentException iff the key is already registered - */ - public void registerExtension(K key, V value) { - V old = map.put(key, value); - if (old != null) { - throw new IllegalArgumentException("Cannot register [" + this.name + "] with key [" + key + "] to [" + value + "], already registered to [" + old + "]"); - } - } - - @Override - protected void bindExtensions(Binder binder) { - MapBinder mapBinder = MapBinder.newMapBinder(binder, keyType, valueType); - for (Map.Entry entry : map.entrySet()) { - mapBinder.addBinding(entry.getKey()).toInstance(entry.getValue()); - } - } - } -} diff --git a/core/src/main/java/org/elasticsearch/common/util/IndexFolderUpgrader.java b/core/src/main/java/org/elasticsearch/common/util/IndexFolderUpgrader.java index 7f550bc1c26ea..528982385ac54 100644 --- a/core/src/main/java/org/elasticsearch/common/util/IndexFolderUpgrader.java +++ b/core/src/main/java/org/elasticsearch/common/util/IndexFolderUpgrader.java @@ -20,10 +20,13 @@ package org.elasticsearch.common.util; import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.logging.log4j.util.Supplier; import org.apache.lucene.util.IOUtils; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.env.NodeEnvironment; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexSettings; @@ -64,8 +67,8 @@ void upgrade(final Index index, final Path source, final Path target) throws IOE } catch (NoSuchFileException | FileNotFoundException exception) { // thrown when the source is non-existent because the folder was renamed // by another node (shared FS) after we checked if the target exists - logger.error("multiple nodes trying to upgrade [{}] in parallel, retry upgrading with single node", - exception, target); + logger.error((Supplier) () -> new ParameterizedMessage("multiple nodes trying to upgrade [{}] in parallel, retry " + + "upgrading with single node", target), exception); throw exception; } finally { if (success) { @@ -84,7 +87,7 @@ void upgrade(final Index index, final Path source, final Path target) throws IOE void upgrade(final String indexFolderName) throws IOException { for (NodeEnvironment.NodePath nodePath : nodeEnv.nodePaths()) { final Path indexFolderPath = nodePath.indicesPath.resolve(indexFolderName); - final IndexMetaData indexMetaData = IndexMetaData.FORMAT.loadLatestState(logger, indexFolderPath); + final IndexMetaData indexMetaData = IndexMetaData.FORMAT.loadLatestState(logger, NamedXContentRegistry.EMPTY, indexFolderPath); if (indexMetaData != null) { final Index index = indexMetaData.getIndex(); if (needsUpgrade(index, indexFolderName)) { diff --git a/core/src/main/java/org/elasticsearch/common/util/LongObjectPagedHashMap.java b/core/src/main/java/org/elasticsearch/common/util/LongObjectPagedHashMap.java index 4095f5d7014a5..a79e8d88be68a 100644 --- a/core/src/main/java/org/elasticsearch/common/util/LongObjectPagedHashMap.java +++ b/core/src/main/java/org/elasticsearch/common/util/LongObjectPagedHashMap.java @@ -161,7 +161,7 @@ public Cursor next() { } @Override - public final void remove() { + public void remove() { throw new 
UnsupportedOperationException(); } diff --git a/core/src/main/java/org/elasticsearch/common/util/PageCacheRecycler.java b/core/src/main/java/org/elasticsearch/common/util/PageCacheRecycler.java index 6a68dda0272a1..947aad487374e 100644 --- a/core/src/main/java/org/elasticsearch/common/util/PageCacheRecycler.java +++ b/core/src/main/java/org/elasticsearch/common/util/PageCacheRecycler.java @@ -20,7 +20,6 @@ package org.elasticsearch.common.util; import org.elasticsearch.common.component.AbstractComponent; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.recycler.AbstractRecyclerC; @@ -29,7 +28,6 @@ import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeValue; -import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.util.concurrent.EsExecutors; import java.util.Arrays; @@ -70,7 +68,7 @@ public void close() { protected PageCacheRecycler(Settings settings) { super(settings); final Type type = TYPE_SETTING .get(settings); - final long limit = LIMIT_HEAP_SETTING .get(settings).bytes(); + final long limit = LIMIT_HEAP_SETTING .get(settings).getBytes(); final int availableProcessors = EsExecutors.boundedNumberOfProcessors(settings); // We have a global amount of memory that we need to divide across data types. diff --git a/core/src/main/java/org/elasticsearch/common/util/concurrent/AtomicArray.java b/core/src/main/java/org/elasticsearch/common/util/concurrent/AtomicArray.java index 2278220d9dd3d..fa82aa0ac634a 100644 --- a/core/src/main/java/org/elasticsearch/common/util/concurrent/AtomicArray.java +++ b/core/src/main/java/org/elasticsearch/common/util/concurrent/AtomicArray.java @@ -31,16 +31,8 @@ * to get the concrete values as a list using {@link #asList()}. */ public class AtomicArray { - - private static final AtomicArray EMPTY = new AtomicArray(0); - - @SuppressWarnings("unchecked") - public static E empty() { - return (E) EMPTY; - } - private final AtomicReferenceArray array; - private volatile List> nonNullList; + private volatile List nonNullList; public AtomicArray(int size) { array = new AtomicReferenceArray<>(size); @@ -53,7 +45,6 @@ public int length() { return array.length(); } - /** * Sets the element at position {@code i} to the given value. * @@ -87,19 +78,18 @@ public E get(int i) { } /** - * Returns the it as a non null list, with an Entry wrapping each value allowing to - * retain its index. + * Returns the it as a non null list. */ - public List> asList() { + public List asList() { if (nonNullList == null) { if (array == null || array.length() == 0) { nonNullList = Collections.emptyList(); } else { - List> list = new ArrayList<>(array.length()); + List list = new ArrayList<>(array.length()); for (int i = 0; i < array.length(); i++) { E e = array.get(i); if (e != null) { - list.add(new Entry<>(i, e)); + list.add(e); } } nonNullList = list; @@ -120,23 +110,4 @@ public E[] toArray(E[] a) { } return a; } - - /** - * An entry within the array. - */ - public static class Entry { - /** - * The original index of the value within the array. - */ - public final int index; - /** - * The value. 
- */ - public final E value; - - public Entry(int index, E value) { - this.index = index; - this.value = value; - } - } } diff --git a/core/src/main/java/org/elasticsearch/common/util/concurrent/BaseFuture.java b/core/src/main/java/org/elasticsearch/common/util/concurrent/BaseFuture.java index c3e60ec5be3f7..ee9aea9ed70c3 100644 --- a/core/src/main/java/org/elasticsearch/common/util/concurrent/BaseFuture.java +++ b/core/src/main/java/org/elasticsearch/common/util/concurrent/BaseFuture.java @@ -19,6 +19,7 @@ package org.elasticsearch.common.util.concurrent; +import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Nullable; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.Transports; @@ -60,7 +61,9 @@ public abstract class BaseFuture implements Future { public V get(long timeout, TimeUnit unit) throws InterruptedException, TimeoutException, ExecutionException { assert timeout <= 0 || - (Transports.assertNotTransportThread(BLOCKING_OP_REASON) && ThreadPool.assertNotScheduleThread(BLOCKING_OP_REASON)); + (Transports.assertNotTransportThread(BLOCKING_OP_REASON) && + ThreadPool.assertNotScheduleThread(BLOCKING_OP_REASON) && + ClusterService.assertNotClusterStateUpdateThread(BLOCKING_OP_REASON)); return sync.get(unit.toNanos(timeout)); } @@ -82,7 +85,9 @@ public V get(long timeout, TimeUnit unit) throws InterruptedException, */ @Override public V get() throws InterruptedException, ExecutionException { - assert Transports.assertNotTransportThread(BLOCKING_OP_REASON) && ThreadPool.assertNotScheduleThread(BLOCKING_OP_REASON); + assert Transports.assertNotTransportThread(BLOCKING_OP_REASON) && + ThreadPool.assertNotScheduleThread(BLOCKING_OP_REASON) && + ClusterService.assertNotClusterStateUpdateThread(BLOCKING_OP_REASON); return sync.get(); } diff --git a/core/src/main/java/org/elasticsearch/common/util/concurrent/EsExecutors.java b/core/src/main/java/org/elasticsearch/common/util/concurrent/EsExecutors.java index 2d682648ca40d..7ec587cf72718 100644 --- a/core/src/main/java/org/elasticsearch/common/util/concurrent/EsExecutors.java +++ b/core/src/main/java/org/elasticsearch/common/util/concurrent/EsExecutors.java @@ -25,8 +25,12 @@ import org.elasticsearch.node.Node; import java.util.Arrays; +import java.util.List; +import java.util.concurrent.AbstractExecutorService; import java.util.concurrent.BlockingQueue; +import java.util.concurrent.ExecutorService; import java.util.concurrent.LinkedTransferQueue; +import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.ThreadFactory; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; @@ -53,8 +57,8 @@ public static int boundedNumberOfProcessors(Settings settings) { return PROCESSORS_SETTING.get(settings); } - public static PrioritizedEsThreadPoolExecutor newSinglePrioritizing(String name, ThreadFactory threadFactory, ThreadContext contextHolder) { - return new PrioritizedEsThreadPoolExecutor(name, 1, 1, 0L, TimeUnit.MILLISECONDS, threadFactory, contextHolder); + public static PrioritizedEsThreadPoolExecutor newSinglePrioritizing(String name, ThreadFactory threadFactory, ThreadContext contextHolder, ScheduledExecutorService timer) { + return new PrioritizedEsThreadPoolExecutor(name, 1, 1, 0L, TimeUnit.MILLISECONDS, threadFactory, contextHolder, timer); } public static EsThreadPoolExecutor newScaling(String name, int min, int max, long keepAliveTime, TimeUnit unit, ThreadFactory threadFactory, ThreadContext contextHolder) { @@ -74,6 
+78,50 @@ public static EsThreadPoolExecutor newFixed(String name, int size, int queueCapa return new EsThreadPoolExecutor(name, size, size, 0, TimeUnit.MILLISECONDS, queue, threadFactory, new EsAbortPolicy(), contextHolder); } + private static final ExecutorService DIRECT_EXECUTOR_SERVICE = new AbstractExecutorService() { + + @Override + public void shutdown() { + throw new UnsupportedOperationException(); + } + + @Override + public List shutdownNow() { + throw new UnsupportedOperationException(); + } + + @Override + public boolean isShutdown() { + return false; + } + + @Override + public boolean isTerminated() { + return false; + } + + @Override + public boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException { + throw new UnsupportedOperationException(); + } + + @Override + public void execute(Runnable command) { + command.run(); + } + + }; + + /** + * Returns an {@link ExecutorService} that executes submitted tasks on the current thread. This executor service does not support being + * shutdown. + * + * @return an {@link ExecutorService} that executes submitted tasks on the current thread + */ + public static ExecutorService newDirectExecutorService() { + return DIRECT_EXECUTOR_SERVICE; + } + public static String threadName(Settings settings, String ... names) { String namePrefix = Arrays @@ -113,7 +161,7 @@ static class EsThreadFactory implements ThreadFactory { final AtomicInteger threadNumber = new AtomicInteger(1); final String namePrefix; - public EsThreadFactory(String namePrefix) { + EsThreadFactory(String namePrefix) { this.namePrefix = namePrefix; SecurityManager s = System.getSecurityManager(); group = (s != null) ? s.getThreadGroup() : @@ -141,7 +189,7 @@ static class ExecutorScalingQueue extends LinkedTransferQueue { ThreadPoolExecutor executor; - public ExecutorScalingQueue() { + ExecutorScalingQueue() { } @Override diff --git a/core/src/main/java/org/elasticsearch/common/util/concurrent/EsThreadPoolExecutor.java b/core/src/main/java/org/elasticsearch/common/util/concurrent/EsThreadPoolExecutor.java index 2f664679bb444..9662292cf69f6 100644 --- a/core/src/main/java/org/elasticsearch/common/util/concurrent/EsThreadPoolExecutor.java +++ b/core/src/main/java/org/elasticsearch/common/util/concurrent/EsThreadPoolExecutor.java @@ -19,7 +19,6 @@ package org.elasticsearch.common.util.concurrent; - import java.util.concurrent.BlockingQueue; import java.util.concurrent.ThreadFactory; import java.util.concurrent.ThreadPoolExecutor; @@ -109,6 +108,27 @@ protected void doExecute(final Runnable command) { } } + @Override + protected void afterExecute(Runnable r, Throwable t) { + super.afterExecute(r, t); + assert assertDefaultContext(r); + } + + private boolean assertDefaultContext(Runnable r) { + try { + assert contextHolder.isDefaultContext() : "the thread context is not the default context and the thread [" + + Thread.currentThread().getName() + "] is being returned to the pool after executing [" + r + "]"; + } catch (IllegalStateException ex) { + // sometimes we execute on a closed context and isDefaultContext doen't bypass the ensureOpen checks + // this must not trigger an exception here since we only assert if the default is restored and + // we don't really care if we are closed + if (contextHolder.isClosed() == false) { + throw ex; + } + } + return true; + } + /** * Returns a stream of all pending tasks. This is similar to {@link #getQueue()} but will expose the originally submitted * {@link Runnable} instances rather than potentially wrapped ones. 
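The `EsExecutors` hunk above introduces `newDirectExecutorService()`, an `ExecutorService` that runs every submitted task inline on the calling thread and refuses lifecycle operations such as `shutdown()`. A minimal, self-contained sketch of the same pattern using only JDK types (the demo class and the printed check are illustrative, not part of the change):

```java
import java.util.List;
import java.util.concurrent.AbstractExecutorService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public final class DirectExecutorDemo {

    /** Runs each task on the submitting thread; shutdown is intentionally unsupported. */
    static final ExecutorService DIRECT = new AbstractExecutorService() {
        @Override public void execute(Runnable command) { command.run(); }
        @Override public void shutdown() { throw new UnsupportedOperationException(); }
        @Override public List<Runnable> shutdownNow() { throw new UnsupportedOperationException(); }
        @Override public boolean isShutdown() { return false; }
        @Override public boolean isTerminated() { return false; }
        @Override public boolean awaitTermination(long timeout, TimeUnit unit) {
            throw new UnsupportedOperationException();
        }
    };

    public static void main(String[] args) {
        final String caller = Thread.currentThread().getName();
        // prints "true": the task executed synchronously on the caller's thread
        DIRECT.execute(() -> System.out.println(caller.equals(Thread.currentThread().getName())));
    }
}
```

Because the task runs before `execute` returns, callers that need an `ExecutorService`-shaped API without introducing asynchrony (for example in tests) can use it without any thread-pool bookkeeping.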
diff --git a/core/src/main/java/org/elasticsearch/common/util/concurrent/PrioritizedEsThreadPoolExecutor.java b/core/src/main/java/org/elasticsearch/common/util/concurrent/PrioritizedEsThreadPoolExecutor.java index f55c84e943aad..ee38637b04c0c 100644 --- a/core/src/main/java/org/elasticsearch/common/util/concurrent/PrioritizedEsThreadPoolExecutor.java +++ b/core/src/main/java/org/elasticsearch/common/util/concurrent/PrioritizedEsThreadPoolExecutor.java @@ -44,11 +44,14 @@ public class PrioritizedEsThreadPoolExecutor extends EsThreadPoolExecutor { private static final TimeValue NO_WAIT_TIME_VALUE = TimeValue.timeValueMillis(0); - private AtomicLong insertionOrder = new AtomicLong(); - private Queue current = ConcurrentCollections.newQueue(); + private final AtomicLong insertionOrder = new AtomicLong(); + private final Queue current = ConcurrentCollections.newQueue(); + private final ScheduledExecutorService timer; - PrioritizedEsThreadPoolExecutor(String name, int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, ThreadFactory threadFactory, ThreadContext contextHolder) { + PrioritizedEsThreadPoolExecutor(String name, int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, + ThreadFactory threadFactory, ThreadContext contextHolder, ScheduledExecutorService timer) { super(name, corePoolSize, maximumPoolSize, keepAliveTime, unit, new PriorityBlockingQueue<>(), threadFactory, contextHolder); + this.timer = timer; } public Pending[] getPending() { @@ -88,7 +91,13 @@ private void addPending(List runnables, List pending, boolean for (Runnable runnable : runnables) { if (runnable instanceof TieBreakingPrioritizedRunnable) { TieBreakingPrioritizedRunnable t = (TieBreakingPrioritizedRunnable) runnable; - pending.add(new Pending(unwrap(t.runnable), t.priority(), t.insertionOrder, executing)); + Runnable innerRunnable = t.runnable; + if (innerRunnable != null) { + /** innerRunnable can be null if task is finished but not removed from executor yet, + * see {@link TieBreakingPrioritizedRunnable#run} and {@link TieBreakingPrioritizedRunnable#runAndClean} + */ + pending.add(new Pending(unwrap(innerRunnable), t.priority(), t.insertionOrder, executing)); + } } else if (runnable instanceof PrioritizedFutureTask) { PrioritizedFutureTask t = (PrioritizedFutureTask) runnable; Object task = t.task; @@ -107,10 +116,11 @@ protected void beforeExecute(Thread t, Runnable r) { @Override protected void afterExecute(Runnable r, Throwable t) { + super.afterExecute(r, t); current.remove(r); } - public void execute(Runnable command, final ScheduledExecutorService timer, final TimeValue timeout, final Runnable timeoutCallback) { + public void execute(Runnable command, final TimeValue timeout, final Runnable timeoutCallback) { command = wrapRunnable(command); doExecute(command); if (timeout.nanos() >= 0) { @@ -249,14 +259,14 @@ private final class PrioritizedFutureTask extends FutureTask implements Co final Priority priority; final long insertionOrder; - public PrioritizedFutureTask(Runnable runnable, Priority priority, T value, long insertionOrder) { + PrioritizedFutureTask(Runnable runnable, Priority priority, T value, long insertionOrder) { super(runnable, value); this.task = runnable; this.priority = priority; this.insertionOrder = insertionOrder; } - public PrioritizedFutureTask(PrioritizedCallable callable, long insertionOrder) { + PrioritizedFutureTask(PrioritizedCallable callable, long insertionOrder) { super(callable); this.task = callable; this.priority = callable.priority(); 
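In the `PrioritizedEsThreadPoolExecutor` hunk above, the `ScheduledExecutorService` timer becomes a constructor argument, so `execute(command, timeout, timeoutCallback)` no longer needs the timer passed on every call. A rough JDK-only sketch of the timeout-callback idea, under the assumption that the callback should fire unless the command completes within the timeout (the real class arms and cancels its timeouts differently; the names here are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public final class TimeoutCallbackDemo {

    public static void main(String[] args) throws InterruptedException {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

        Runnable command = () -> sleepQuietly(500);   // the actual work, takes ~500ms
        Runnable timeoutCallback = () -> System.out.println("command timed out");

        // arm the timeout first, then run the command; cancel the timeout once the command is done
        ScheduledFuture<?> pendingTimeout = timer.schedule(timeoutCallback, 100, TimeUnit.MILLISECONDS);
        worker.execute(() -> {
            try {
                command.run();
            } finally {
                pendingTimeout.cancel(false);
            }
        });

        Thread.sleep(1000);
        worker.shutdown();
        timer.shutdownNow();
    }

    private static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```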
diff --git a/core/src/main/java/org/elasticsearch/common/util/concurrent/ReleasableLock.java b/core/src/main/java/org/elasticsearch/common/util/concurrent/ReleasableLock.java index 1a90c6992fc43..46d6abff17632 100644 --- a/core/src/main/java/org/elasticsearch/common/util/concurrent/ReleasableLock.java +++ b/core/src/main/java/org/elasticsearch/common/util/concurrent/ReleasableLock.java @@ -19,6 +19,7 @@ package org.elasticsearch.common.util.concurrent; +import org.elasticsearch.Assertions; import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.index.engine.EngineException; @@ -35,9 +36,7 @@ public class ReleasableLock implements Releasable { public ReleasableLock(Lock lock) { this.lock = lock; - boolean useHoldingThreads = false; - assert (useHoldingThreads = true); - if (useHoldingThreads) { + if (Assertions.ENABLED) { holdingThreads = new ThreadLocal<>(); } else { holdingThreads = null; diff --git a/core/src/main/java/org/elasticsearch/common/util/concurrent/ThreadContext.java b/core/src/main/java/org/elasticsearch/common/util/concurrent/ThreadContext.java index 8c04c24ec5b6e..1ce119636f734 100644 --- a/core/src/main/java/org/elasticsearch/common/util/concurrent/ThreadContext.java +++ b/core/src/main/java/org/elasticsearch/common/util/concurrent/ThreadContext.java @@ -33,7 +33,12 @@ import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.Set; import java.util.concurrent.atomic.AtomicBoolean; +import java.util.function.Function; +import java.util.function.Supplier; +import java.util.stream.Collectors; +import java.util.stream.Stream; /** * A ThreadContext is a map of string headers and a transient map of keyed objects that are associated with @@ -69,6 +74,7 @@ public final class ThreadContext implements Closeable, Writeable { private static final ThreadContextStruct DEFAULT_CONTEXT = new ThreadContextStruct(); private final Map defaultHeader; private final ContextThreadLocal threadLocal; + private boolean isSystemContext; /** * Creates a new ThreadContext instance @@ -115,12 +121,57 @@ public StoredContext stashAndMergeHeaders(Map headers) { return () -> threadLocal.set(context); } + /** * Just like {@link #stashContext()} but no default context is set. + * @param preserveResponseHeaders if set to true the response headers of the restore thread will be preserved. */ - public StoredContext newStoredContext() { + public StoredContext newStoredContext(boolean preserveResponseHeaders) { final ThreadContextStruct context = threadLocal.get(); - return () -> threadLocal.set(context); + return () -> { + if (preserveResponseHeaders && threadLocal.get() != context) { + threadLocal.set(context.putResponseHeaders(threadLocal.get().responseHeaders)); + } else { + threadLocal.set(context); + } + }; + } + + /** + * Returns a supplier that gathers a {@link #newStoredContext(boolean)} and restores it once the + * returned supplier is invoked. The context returned from the supplier is a stored version of the + * suppliers callers context that should be restored once the originally gathered context is not needed anymore. + * For instance this method should be used like this: + * + *
    +     *     Supplier<ThreadContext.StoredContext> restorable = context.newRestorableContext(true);
    +     *     new Thread() {
    +     *         public void run() {
    +     *             try (ThreadContext.StoredContext ctx = restorable.get()) {
    +     *                 // execute with the parent's context and restore the thread's context afterwards
    +     *             }
    +     *         }
    +     *
    +     *     }.start();
    +     * 
    + * + * @param preserveResponseHeaders if set to true the response headers of the restore thread will be preserved. + * @return a restorable context supplier + */ + public Supplier newRestorableContext(boolean preserveResponseHeaders) { + return wrapRestorable(newStoredContext(preserveResponseHeaders)); + } + + /** + * Same as {@link #newRestorableContext(boolean)} but wraps an existing context to restore. + * @param storedContext the context to restore + */ + public Supplier wrapRestorable(StoredContext storedContext) { + return () -> { + StoredContext context = newStoredContext(false); + storedContext.restore(); + return context; + }; } @Override @@ -208,12 +259,25 @@ public T getTransient(String key) { } /** - * Add the unique response {@code value} for the specified {@code key}. - *

    - * Any duplicate {@code value} is ignored. + * Add the {@code value} for the specified {@code key} Any duplicate {@code value} is ignored. + * + * @param key the header name + * @param value the header value */ - public void addResponseHeader(String key, String value) { - threadLocal.set(threadLocal.get().putResponse(key, value)); + public void addResponseHeader(final String key, final String value) { + addResponseHeader(key, value, v -> v); + } + + /** + * Add the {@code value} for the specified {@code key} with the specified {@code uniqueValue} used for de-duplication. Any duplicate + * {@code value} after applying {@code uniqueValue} is ignored. + * + * @param key the header name + * @param value the header value + * @param uniqueValue the function that produces de-duplication values + */ + public void addResponseHeader(final String key, final String value, final Function uniqueValue) { + threadLocal.set(threadLocal.get().putResponse(key, value, uniqueValue)); } /** @@ -246,6 +310,35 @@ public Runnable unwrap(Runnable command) { return command; } + /** + * Returns true if the current context is the default context. + */ + boolean isDefaultContext() { + return threadLocal.get() == DEFAULT_CONTEXT; + } + + /** + * Marks this thread context as an internal system context. This signals that actions in this context are issued + * by the system itself rather than by a user action. + */ + public void markAsSystemContext() { + threadLocal.set(threadLocal.get().setSystemContext()); + } + + /** + * Returns true iff this context is a system context + */ + public boolean isSystemContext() { + return threadLocal.get().isSystemContext; + } + + /** + * Returns true if the context is closed, otherwise true + */ + boolean isClosed() { + return threadLocal.closed.get(); + } + @FunctionalInterface public interface StoredContext extends AutoCloseable { @Override @@ -260,6 +353,7 @@ private static final class ThreadContextStruct { private final Map requestHeaders; private final Map transientHeaders; private final Map> responseHeaders; + private final boolean isSystemContext; private ThreadContextStruct(StreamInput in) throws IOException { final int numRequest = in.readVInt(); @@ -271,27 +365,36 @@ private ThreadContextStruct(StreamInput in) throws IOException { this.requestHeaders = requestHeaders; this.responseHeaders = in.readMapOfLists(StreamInput::readString, StreamInput::readString); this.transientHeaders = Collections.emptyMap(); + isSystemContext = false; // we never serialize this it's a transient flag + } + + private ThreadContextStruct setSystemContext() { + if (isSystemContext) { + return this; + } + return new ThreadContextStruct(requestHeaders, responseHeaders, transientHeaders, true); } private ThreadContextStruct(Map requestHeaders, Map> responseHeaders, - Map transientHeaders) { + Map transientHeaders, boolean isSystemContext) { this.requestHeaders = requestHeaders; this.responseHeaders = responseHeaders; this.transientHeaders = transientHeaders; + this.isSystemContext = isSystemContext; } /** * This represents the default context and it should only ever be called by {@link #DEFAULT_CONTEXT}. 
*/ private ThreadContextStruct() { - this(Collections.emptyMap(), Collections.emptyMap(), Collections.emptyMap()); + this(Collections.emptyMap(), Collections.emptyMap(), Collections.emptyMap(), false); } private ThreadContextStruct putRequest(String key, String value) { Map newRequestHeaders = new HashMap<>(this.requestHeaders); putSingleHeader(key, value, newRequestHeaders); - return new ThreadContextStruct(newRequestHeaders, responseHeaders, transientHeaders); + return new ThreadContextStruct(newRequestHeaders, responseHeaders, transientHeaders, isSystemContext); } private void putSingleHeader(String key, String value, Map newHeaders) { @@ -309,18 +412,40 @@ private ThreadContextStruct putHeaders(Map headers) { putSingleHeader(entry.getKey(), entry.getValue(), newHeaders); } newHeaders.putAll(this.requestHeaders); - return new ThreadContextStruct(newHeaders, responseHeaders, transientHeaders); + return new ThreadContextStruct(newHeaders, responseHeaders, transientHeaders, isSystemContext); } } - private ThreadContextStruct putResponse(String key, String value) { + private ThreadContextStruct putResponseHeaders(Map> headers) { + assert headers != null; + if (headers.isEmpty()) { + return this; + } + final Map> newResponseHeaders = new HashMap<>(this.responseHeaders); + for (Map.Entry> entry : headers.entrySet()) { + String key = entry.getKey(); + final List existingValues = newResponseHeaders.get(key); + if (existingValues != null) { + List newValues = Stream.concat(entry.getValue().stream(), + existingValues.stream()).distinct().collect(Collectors.toList()); + newResponseHeaders.put(key, Collections.unmodifiableList(newValues)); + } else { + newResponseHeaders.put(key, entry.getValue()); + } + } + return new ThreadContextStruct(requestHeaders, newResponseHeaders, transientHeaders, isSystemContext); + } + + private ThreadContextStruct putResponse(final String key, final String value, final Function uniqueValue) { assert value != null; final Map> newResponseHeaders = new HashMap<>(this.responseHeaders); final List existingValues = newResponseHeaders.get(key); if (existingValues != null) { - if (existingValues.contains(value)) { + final Set existingUniqueValues = existingValues.stream().map(uniqueValue).collect(Collectors.toSet()); + assert existingValues.size() == existingUniqueValues.size(); + if (existingUniqueValues.contains(uniqueValue.apply(value))) { return this; } @@ -332,7 +457,7 @@ private ThreadContextStruct putResponse(String key, String value) { newResponseHeaders.put(key, Collections.singletonList(value)); } - return new ThreadContextStruct(requestHeaders, newResponseHeaders, transientHeaders); + return new ThreadContextStruct(requestHeaders, newResponseHeaders, transientHeaders, isSystemContext); } private ThreadContextStruct putTransient(String key, Object value) { @@ -340,7 +465,7 @@ private ThreadContextStruct putTransient(String key, Object value) { if (newTransient.putIfAbsent(key, value) != null) { throw new IllegalArgumentException("value for key [" + key + "] already present"); } - return new ThreadContextStruct(requestHeaders, responseHeaders, newTransient); + return new ThreadContextStruct(requestHeaders, responseHeaders, newTransient, isSystemContext); } boolean isEmpty() { @@ -431,7 +556,7 @@ private class ContextPreservingRunnable implements Runnable { private final ThreadContext.StoredContext ctx; private ContextPreservingRunnable(Runnable in) { - ctx = newStoredContext(); + ctx = newStoredContext(false); this.in = in; } @@ -468,10 +593,12 @@ public Runnable 
unwrap() { */ private class ContextPreservingAbstractRunnable extends AbstractRunnable { private final AbstractRunnable in; - private final ThreadContext.StoredContext ctx; + private final ThreadContext.StoredContext creatorsContext; + + private ThreadContext.StoredContext threadsOriginalContext = null; private ContextPreservingAbstractRunnable(AbstractRunnable in) { - ctx = newStoredContext(); + creatorsContext = newStoredContext(false); this.in = in; } @@ -482,7 +609,13 @@ public boolean isForceExecution() { @Override public void onAfter() { - in.onAfter(); + try { + in.onAfter(); + } finally { + if (threadsOriginalContext != null) { + threadsOriginalContext.restore(); + } + } } @Override @@ -498,8 +631,9 @@ public void onRejection(Exception e) { @Override protected void doRun() throws Exception { boolean whileRunning = false; - try (ThreadContext.StoredContext ignore = stashContext()){ - ctx.restore(); + threadsOriginalContext = stashContext(); + try { + creatorsContext.restore(); whileRunning = true; in.doRun(); whileRunning = false; diff --git a/core/src/main/java/org/elasticsearch/common/util/set/Sets.java b/core/src/main/java/org/elasticsearch/common/util/set/Sets.java index 4b323c42a371a..0f1fe22c02010 100644 --- a/core/src/main/java/org/elasticsearch/common/util/set/Sets.java +++ b/core/src/main/java/org/elasticsearch/common/util/set/Sets.java @@ -21,11 +21,19 @@ import java.util.Collection; import java.util.Collections; +import java.util.EnumSet; import java.util.HashSet; import java.util.Iterator; import java.util.Objects; import java.util.Set; +import java.util.SortedSet; +import java.util.TreeSet; import java.util.concurrent.ConcurrentHashMap; +import java.util.function.BiConsumer; +import java.util.function.BinaryOperator; +import java.util.function.Function; +import java.util.function.Supplier; +import java.util.stream.Collector; import java.util.stream.Collectors; public final class Sets { @@ -63,12 +71,72 @@ public static boolean haveEmptyIntersection(Set left, Set right) { return !left.stream().anyMatch(k -> right.contains(k)); } + /** + * The relative complement, or difference, of the specified left and right set. Namely, the resulting set contains all the elements that + * are in the left set but not in the right set. Neither input is mutated by this operation, an entirely new set is returned. + * + * @param left the left set + * @param right the right set + * @param the type of the elements of the sets + * @return the relative complement of the left set with respect to the right set + */ public static Set difference(Set left, Set right) { Objects.requireNonNull(left); Objects.requireNonNull(right); return left.stream().filter(k -> !right.contains(k)).collect(Collectors.toSet()); } + /** + * The relative complement, or difference, of the specified left and right set, returned as a sorted set. Namely, the resulting set + * contains all the elements that are in the left set but not in the right set, and the set is sorted using the natural ordering of + * element type. Neither input is mutated by this operation, an entirely new set is returned. 
+ * + * @param left the left set + * @param right the right set + * @param the type of the elements of the sets + * @return the sorted relative complement of the left set with respect to the right set + */ + public static SortedSet sortedDifference(Set left, Set right) { + Objects.requireNonNull(left); + Objects.requireNonNull(right); + return left.stream().filter(k -> !right.contains(k)).collect(new SortedSetCollector<>()); + } + + private static class SortedSetCollector implements Collector, SortedSet> { + + @Override + public Supplier> supplier() { + return TreeSet::new; + } + + @Override + public BiConsumer, T> accumulator() { + return (s, e) -> s.add(e); + } + + @Override + public BinaryOperator> combiner() { + return (s, t) -> { + s.addAll(t); + return s; + }; + } + + @Override + public Function, SortedSet> finisher() { + return Function.identity(); + } + + static final Set CHARACTERISTICS = + Collections.unmodifiableSet(EnumSet.of(Collector.Characteristics.IDENTITY_FINISH)); + + @Override + public Set characteristics() { + return CHARACTERISTICS; + } + + } + public static Set union(Set left, Set right) { Objects.requireNonNull(left); Objects.requireNonNull(right); diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/AbstractObjectParser.java b/core/src/main/java/org/elasticsearch/common/xcontent/AbstractObjectParser.java index 6f8a606d9a644..91acb267056b0 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/AbstractObjectParser.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/AbstractObjectParser.java @@ -19,8 +19,8 @@ package org.elasticsearch.common.xcontent; +import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.xcontent.ObjectParser.ValueType; import org.elasticsearch.common.xcontent.json.JsonXContent; @@ -34,37 +34,26 @@ /** * Superclass for {@link ObjectParser} and {@link ConstructingObjectParser}. Defines most of the "declare" methods so they can be shared. */ -public abstract class AbstractObjectParser - implements BiFunction { - /** - * Reads an object from a parser using some context. - */ - @FunctionalInterface - public interface ContextParser { - T parse(XContentParser p, Context c) throws IOException; - } - - /** - * Reads an object right from the parser without any context. - */ - @FunctionalInterface - public interface NoContextParser { - T parse(XContentParser p) throws IOException; - } +public abstract class AbstractObjectParser + implements BiFunction, ContextParser { /** * Declare some field. Usually it is easier to use {@link #declareString(BiConsumer, ParseField)} or - * {@link #declareObject(BiConsumer, BiFunction, ParseField)} rather than call this directly. + * {@link #declareObject(BiConsumer, ContextParser, ParseField)} rather than call this directly. 
*/ public abstract void declareField(BiConsumer consumer, ContextParser parser, ParseField parseField, ValueType type); - public void declareField(BiConsumer consumer, NoContextParser parser, ParseField parseField, ValueType type) { - declareField(consumer, (p, c) -> parser.parse(p), parseField, type); + public void declareField(BiConsumer consumer, CheckedFunction parser, + ParseField parseField, ValueType type) { + if (parser == null) { + throw new IllegalArgumentException("[parser] is required"); + } + declareField(consumer, (p, c) -> parser.apply(p), parseField, type); } - public void declareObject(BiConsumer consumer, BiFunction objectParser, ParseField field) { - declareField(consumer, (p, c) -> objectParser.apply(p, c), field, ValueType.OBJECT); + public void declareObject(BiConsumer consumer, ContextParser objectParser, ParseField field) { + declareField(consumer, (p, c) -> objectParser.parse(p, c), field, ValueType.OBJECT); } public void declareFloat(BiConsumer consumer, ParseField field) { @@ -100,9 +89,9 @@ public void declareBoolean(BiConsumer consumer, ParseField field declareField(consumer, XContentParser::booleanValue, field, ValueType.BOOLEAN); } - public void declareObjectArray(BiConsumer> consumer, BiFunction objectParser, + public void declareObjectArray(BiConsumer> consumer, ContextParser objectParser, ParseField field) { - declareField(consumer, (p, c) -> parseArray(p, () -> objectParser.apply(p, c)), field, ValueType.OBJECT_ARRAY); + declareField(consumer, (p, c) -> parseArray(p, () -> objectParser.parse(p, c)), field, ValueType.OBJECT_ARRAY); } public void declareStringArray(BiConsumer> consumer, ParseField field) { @@ -126,7 +115,7 @@ public void declareIntArray(BiConsumer> consumer, ParseFiel } public void declareRawObject(BiConsumer consumer, ParseField field) { - NoContextParser bytesParser = p -> { + CheckedFunction bytesParser = p -> { try (XContentBuilder builder = JsonXContent.contentBuilder()) { builder.prettyPrint(); builder.copyCurrentStructure(p); @@ -141,7 +130,7 @@ private interface IOSupplier { } private static List parseArray(XContentParser parser, IOSupplier supplier) throws IOException { List list = new ArrayList<>(); - if (parser.currentToken().isValue()) { + if (parser.currentToken().isValue() || parser.currentToken() == XContentParser.Token.START_OBJECT) { list.add(supplier.get()); // single value } else { while (parser.nextToken() != XContentParser.Token.END_ARRAY) { diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/ConstructingObjectParser.java b/core/src/main/java/org/elasticsearch/common/xcontent/ConstructingObjectParser.java index e1400463a7226..02ccf0d67ba94 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/ConstructingObjectParser.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/ConstructingObjectParser.java @@ -20,7 +20,6 @@ package org.elasticsearch.common.xcontent; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.xcontent.ObjectParser.ValueType; @@ -74,7 +73,7 @@ * Note: if optional constructor arguments aren't specified then the number of allocations is always the worst case. *

    */ -public final class ConstructingObjectParser extends AbstractObjectParser { +public final class ConstructingObjectParser extends AbstractObjectParser { /** * Consumer that marks a field as a required constructor argument instead of a real object field. */ @@ -94,7 +93,7 @@ public final class ConstructingObjectParser constructorArgInfos = new ArrayList<>(); private final ObjectParser objectParser; - private final Function builder; + private final BiFunction builder; /** * The number of fields on the targetObject. This doesn't include any constructor arguments and is the size used for the array backing * the field queue. @@ -103,7 +102,7 @@ public final class ConstructingObjectParser builder) { - objectParser = new ObjectParser<>(name); + this(name, false, builder); + } + + /** + * Build the parser. + * + * @param name The name given to the delegate ObjectParser for error identification. Use what you'd use if the object worked with + * ObjectParser. + * @param ignoreUnknownFields Should this parser ignore unknown fields? This should generally be set to true only when parsing responses + * from external systems, never when parsing requests from users. + * @param builder A function that builds the object from an array of Objects. Declare this inline with the parser, casting the elements + * of the array to the arguments so they work with your favorite constructor. The objects in the array will be in the same order + * that you declared the {{@link #constructorArg()}s and none will be null. If any of the constructor arguments aren't defined in + * the XContent then parsing will throw an error. We use an array here rather than a {@code Map} to save on + * allocations. + */ + public ConstructingObjectParser(String name, boolean ignoreUnknownFields, Function builder) { + this(name, ignoreUnknownFields, (args, context) -> builder.apply(args)); + } + + /** + * Build the parser. + * + * @param name The name given to the delegate ObjectParser for error identification. Use what you'd use if the object worked with + * ObjectParser. + * @param ignoreUnknownFields Should this parser ignore unknown fields? This should generally be set to true only when parsing responses + * from external systems, never when parsing requests from users. + * @param builder A binary function that builds the object from an array of Objects and the parser context. Declare this inline with + * the parser, casting the elements of the array to the arguments so they work with your favorite constructor. The objects in + * the array will be in the same order that you declared the {{@link #constructorArg()}s and none will be null. The second + * argument is the value of the context provided to the {@link #parse(XContentParser, Object) parse function}. If any of the + * constructor arguments aren't defined in the XContent then parsing will throw an error. We use an array here rather than a + * {@code Map} to save on allocations. 
+ */ + public ConstructingObjectParser(String name, boolean ignoreUnknownFields, BiFunction builder) { + objectParser = new ObjectParser<>(name, ignoreUnknownFields, null); this.builder = builder; + } /** @@ -123,12 +158,17 @@ public ConstructingObjectParser(String name, Function builder) @Override public Value apply(XContentParser parser, Context context) { try { - return objectParser.parse(parser, new Target(parser), context).finish(); + return parse(parser, context); } catch (IOException e) { throw new ParsingException(parser.getTokenLocation(), "[" + objectParser.getName() + "] failed to parse object", e); } } + @Override + public Value parse(XContentParser parser, Context context) throws IOException { + return objectParser.parse(parser, new Target(parser, context), context).finish(); + } + /** * Pass the {@linkplain BiConsumer} this returns the declare methods to declare a required constructor argument. See this class's * javadoc for an example. The order in which these are declared matters: it is the order that they come in the array passed to @@ -153,6 +193,19 @@ public static BiConsumer optionalConstructorArg() @Override public void declareField(BiConsumer consumer, ContextParser parser, ParseField parseField, ValueType type) { + if (consumer == null) { + throw new IllegalArgumentException("[consumer] is required"); + } + if (parser == null) { + throw new IllegalArgumentException("[parser] is required"); + } + if (parseField == null) { + throw new IllegalArgumentException("[parseField] is required"); + } + if (type == null) { + throw new IllegalArgumentException("[type] is required"); + } + if (consumer == REQUIRED_CONSTRUCTOR_ARG_MARKER || consumer == OPTIONAL_CONSTRUCTOR_ARG_MARKER) { /* * Constructor arguments are detected by this "marker" consumer. It keeps the API looking clean even if it is a bit sleezy. We @@ -201,7 +254,7 @@ private BiConsumer queueingConsumer(BiConsumer consumer /** * The target of the {@linkplain ConstructingObjectParser}. One of these is built every time you call - * {@linkplain ConstructingObjectParser#apply(XContentParser, ParseFieldMatcherSupplier)} Note that it is not static so it inherits + * {@linkplain ConstructingObjectParser#apply(XContentParser, Object)} Note that it is not static so it inherits * {@linkplain ConstructingObjectParser}'s type parameters. */ private class Target { @@ -214,6 +267,12 @@ private class Target { * location of each field so that we can give a useful error message when replaying the queue. */ private final XContentParser parser; + + /** + * The parse context that is used for this invocation. Stored here so that it can be passed to the {@link #builder}. + */ + private Context context; + /** * How many of the constructor parameters have we collected? We keep track of this so we don't have to count the * {@link #constructorArgs} array looking for nulls when we receive another constructor parameter. When this is equal to the size of @@ -233,8 +292,9 @@ private class Target { */ private Value targetObject; - public Target(XContentParser parser) { + Target(XContentParser parser, Context context) { this.parser = parser; + this.context = context; } /** @@ -267,7 +327,7 @@ private void queue(Consumer queueMe) { } /** - * Finish parsing the object. + * Finish parsing the object. 
*/ private Value finish() { if (targetObject != null) { @@ -308,7 +368,7 @@ private Value finish() { private void buildTarget() { try { - targetObject = builder.apply(constructorArgs); + targetObject = builder.apply(constructorArgs, context); while (queuedFieldsCount > 0) { queuedFieldsCount -= 1; queuedFields[queuedFieldsCount].accept(targetObject); @@ -326,7 +386,7 @@ private static class ConstructorArgInfo { final ParseField field; final boolean required; - public ConstructorArgInfo(ParseField field, boolean required) { + ConstructorArgInfo(ParseField field, boolean required) { this.field = field; this.required = required; } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/ContextParser.java b/core/src/main/java/org/elasticsearch/common/xcontent/ContextParser.java new file mode 100644 index 0000000000000..5862ea76aef80 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/xcontent/ContextParser.java @@ -0,0 +1,30 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.xcontent; + +import java.io.IOException; + +/** + * Reads an object from a parser using some context. + */ +@FunctionalInterface +public interface ContextParser { + T parse(XContentParser p, Context c) throws IOException; +} diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/FromXContentBuilder.java b/core/src/main/java/org/elasticsearch/common/xcontent/FromXContentBuilder.java deleted file mode 100644 index 0b0370b490e69..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/xcontent/FromXContentBuilder.java +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.common.xcontent; - -import org.elasticsearch.common.ParseFieldMatcher; - -import java.io.IOException; - -/** - * Indicates that the class supports XContent deserialization. 
- */ -public interface FromXContentBuilder { - /** - * Parses an object with the type T from parser - */ - T fromXContent(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException; -} diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/NamedXContentRegistry.java b/core/src/main/java/org/elasticsearch/common/xcontent/NamedXContentRegistry.java new file mode 100644 index 0000000000000..b991c90b51b1d --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/xcontent/NamedXContentRegistry.java @@ -0,0 +1,192 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.xcontent; + +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.common.CheckedFunction; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; + +import static java.util.Collections.emptyList; +import static java.util.Collections.emptyMap; +import static java.util.Collections.unmodifiableMap; +import static java.util.Objects.requireNonNull; + +public class NamedXContentRegistry { + /** + * The empty {@link NamedXContentRegistry} for use when you are sure that you aren't going to call + * {@link XContentParser#namedObject(Class, String, Object)}. Be *very* careful with this singleton because a parser using it will fail + * every call to {@linkplain XContentParser#namedObject(Class, String, Object)}. Every non-test usage really should be checked thorowly + * and marked with a comment about how it was checked. That way anyone that sees code that uses it knows that it is potentially + * dangerous. + */ + public static final NamedXContentRegistry EMPTY = new NamedXContentRegistry(emptyList()); + + /** + * An entry in the {@linkplain NamedXContentRegistry} containing the name of the object and the parser that can parse it. + */ + public static class Entry { + /** The class that this entry can read. */ + public final Class categoryClass; + + /** A name for the entry which is unique within the {@link #categoryClass}. */ + public final ParseField name; + + /** A parser capability of parser the entry's class. */ + private final ContextParser parser; + + /** Creates a new entry which can be stored by the registry. 
*/ + public Entry(Class categoryClass, ParseField name, CheckedFunction parser) { + this.categoryClass = Objects.requireNonNull(categoryClass); + this.name = Objects.requireNonNull(name); + this.parser = Objects.requireNonNull((p, c) -> parser.apply(p)); + } + /** + * Creates a new entry which can be stored by the registry. + * Prefer {@link Entry#Entry(Class, ParseField, CheckedFunction)} unless you need a context to carry around while parsing. + */ + public Entry(Class categoryClass, ParseField name, ContextParser parser) { + this.categoryClass = Objects.requireNonNull(categoryClass); + this.name = Objects.requireNonNull(name); + this.parser = Objects.requireNonNull(parser); + } + } + + private final Map, Map> registry; + + public NamedXContentRegistry(List entries) { + if (entries.isEmpty()) { + registry = emptyMap(); + return; + } + entries = new ArrayList<>(entries); + entries.sort((e1, e2) -> e1.categoryClass.getName().compareTo(e2.categoryClass.getName())); + + Map, Map> registry = new HashMap<>(); + Map parsers = null; + Class currentCategory = null; + for (Entry entry : entries) { + if (currentCategory != entry.categoryClass) { + if (currentCategory != null) { + // we've seen the last of this category, put it into the big map + registry.put(currentCategory, unmodifiableMap(parsers)); + } + parsers = new HashMap<>(); + currentCategory = entry.categoryClass; + } + + for (String name : entry.name.getAllNamesIncludedDeprecated()) { + Object old = parsers.put(name, entry); + if (old != null) { + throw new IllegalArgumentException("NamedXContent [" + currentCategory.getName() + "][" + entry.name + "]" + + " is already registered for [" + old.getClass().getName() + "]," + + " cannot register [" + entry.parser.getClass().getName() + "]"); + } + } + } + // handle the last category + registry.put(currentCategory, unmodifiableMap(parsers)); + + this.registry = unmodifiableMap(registry); + } + + /** + * Parse a named object, throwing an exception if the parser isn't found. Throws an {@link ElasticsearchException} if the + * {@code categoryClass} isn't registered because this is almost always a bug. Throws a {@link UnknownNamedObjectException} if the + * {@code categoryClass} is registered but the {@code name} isn't. + */ + public T parseNamedObject(Class categoryClass, String name, XContentParser parser, C context) throws IOException { + Map parsers = registry.get(categoryClass); + if (parsers == null) { + if (registry.isEmpty()) { + // The "empty" registry will never work so we throw a better exception as a hint. + throw new ElasticsearchException("namedObject is not supported for this parser"); + } + throw new ElasticsearchException("Unknown namedObject category [" + categoryClass.getName() + "]"); + } + Entry entry = parsers.get(name); + if (entry == null) { + throw new UnknownNamedObjectException(parser.getTokenLocation(), categoryClass, name); + } + if (false == entry.name.match(name)) { + /* Note that this shouldn't happen because we already looked up the entry using the names but we need to call `match` anyway + * because it is responsible for logging deprecation warnings. */ + throw new ParsingException(parser.getTokenLocation(), + "Unknown " + categoryClass.getSimpleName() + " [" + name + "]: Parser didn't match"); + } + return categoryClass.cast(entry.parser.parse(parser, context)); + } + + /** + * Thrown when {@link NamedXContentRegistry#parseNamedObject(Class, String, XContentParser, Object)} is called with an unregistered + * name. 
When this bubbles up to the rest layer it is converted into a response with {@code 400 BAD REQUEST} status. + */ + public static class UnknownNamedObjectException extends ParsingException { + private final String categoryClass; + private final String name; + + public UnknownNamedObjectException(XContentLocation contentLocation, Class categoryClass, + String name) { + super(contentLocation, "Unknown " + categoryClass.getSimpleName() + " [" + name + "]"); + this.categoryClass = requireNonNull(categoryClass, "categoryClass is required").getName(); + this.name = requireNonNull(name, "name is required"); + } + + /** + * Read from a stream. + */ + public UnknownNamedObjectException(StreamInput in) throws IOException { + super(in); + categoryClass = in.readString(); + name = in.readString(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeString(categoryClass); + out.writeString(name); + } + + /** + * Category class that was missing a parser. This is a String instead of a class because the class might not be on the classpath + * of all nodes or it might be exclusive to a plugin or something. + */ + public String getCategoryClass() { + return categoryClass; + } + + /** + * Name of the missing parser. + */ + public String getName() { + return name; + } + } +} diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/ObjectParser.java b/core/src/main/java/org/elasticsearch/common/xcontent/ObjectParser.java index 44d9e6e1993b1..2e9c62abe3bdd 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/ObjectParser.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/ObjectParser.java @@ -18,9 +18,8 @@ */ package org.elasticsearch.common.xcontent; +import org.elasticsearch.common.Nullable; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.ParsingException; import java.io.IOException; @@ -46,8 +45,8 @@ /** * A declarative, stateless parser that turns XContent into setter calls. A single parser should be defined for each object being parsed, - * nested elements can be added via {@link #declareObject(BiConsumer, BiFunction, ParseField)} which should be satisfied where possible by - * passing another instance of {@link ObjectParser}, this one customized for that Object. + * nested elements can be added via {@link #declareObject(BiConsumer, ContextParser, ParseField)} which should be satisfied where possible + * by passing another instance of {@link ObjectParser}, this one customized for that Object. *

    * This class works well for object that do have a constructor argument or that can be built using information available from earlier in the * XContent. For objects that have constructors with required arguments that are specified on the same level as other fields see @@ -67,7 +66,7 @@ * It's highly recommended to use the high level declare methods like {@link #declareString(BiConsumer, ParseField)} instead of * {@link #declareField} which can be used to implement exceptional parsing operations not covered by the high level methods. */ -public final class ObjectParser extends AbstractObjectParser { +public final class ObjectParser extends AbstractObjectParser { /** * Adapts an array (or varags) setter into a list setter. */ @@ -83,6 +82,11 @@ public static BiConsumer> fromLi private final Map fieldParserMap = new HashMap<>(); private final String name; private final Supplier valueSupplier; + /** + * Should this parser ignore unknown fields? This should generally be set to true only when parsing responses from external systems, + * never when parsing requests from users. + */ + private final boolean ignoreUnknownFields; /** * Creates a new ObjectParser instance with a name. This name is used to reference the parser in exceptions and messages. @@ -96,18 +100,31 @@ public ObjectParser(String name) { * @param name the parsers name, used to reference the parser in exceptions and messages. * @param valueSupplier a supplier that creates a new Value instance used when the parser is used as an inner object parser. */ - public ObjectParser(String name, Supplier valueSupplier) { + public ObjectParser(String name, @Nullable Supplier valueSupplier) { + this(name, false, valueSupplier); + } + + /** + * Creates a new ObjectParser instance which a name. + * @param name the parsers name, used to reference the parser in exceptions and messages. + * @param ignoreUnknownFields Should this parser ignore unknown fields? This should generally be set to true only when parsing + * responses from external systems, never when parsing requests from users. + * @param valueSupplier a supplier that creates a new Value instance used when the parser is used as an inner object parser. + */ + public ObjectParser(String name, boolean ignoreUnknownFields, @Nullable Supplier valueSupplier) { this.name = name; this.valueSupplier = valueSupplier; + this.ignoreUnknownFields = ignoreUnknownFields; } /** * Parses a Value from the given {@link XContentParser} * @param parser the parser to build a value from - * @param context must at least provide a {@link ParseFieldMatcher} + * @param context context needed for parsing * @return a new value instance drawn from the provided value supplier on {@link #ObjectParser(String, Supplier)} * @throws IOException if an IOException occurs. 
*/ + @Override public Value parse(XContentParser parser, Context context) throws IOException { if (valueSupplier == null) { throw new NullPointerException("valueSupplier is not set"); @@ -134,7 +151,7 @@ public Value parse(XContentParser parser, Value value, Context context) throws I } } - FieldParser fieldParser = null; + FieldParser fieldParser = null; String currentFieldName = null; while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { @@ -144,9 +161,13 @@ public Value parse(XContentParser parser, Value value, Context context) throws I if (currentFieldName == null) { throw new IllegalStateException("[" + name + "] no field found"); } - assert fieldParser != null; - fieldParser.assertSupports(name, token, currentFieldName, context.getParseFieldMatcher()); - parseSub(parser, fieldParser, currentFieldName, value, context); + if (fieldParser == null) { + assert ignoreUnknownFields : "this should only be possible if configured to ignore known fields"; + parser.skipChildren(); // noop if parser points to a value, skips children if parser is start object or start array + } else { + fieldParser.assertSupports(name, token, currentFieldName); + parseSub(parser, fieldParser, currentFieldName, value, context); + } fieldParser = null; } } @@ -169,6 +190,12 @@ public interface Parser { void parse(XContentParser parser, Value value, Context context) throws IOException; } public void declareField(Parser p, ParseField parseField, ValueType type) { + if (parseField == null) { + throw new IllegalArgumentException("[parseField] is required"); + } + if (type == null) { + throw new IllegalArgumentException("[type] is required"); + } FieldParser fieldParser = new FieldParser(p, type.supportedTokens(), parseField, type); for (String fieldValue : parseField.getAllNamesIncludedDeprecated()) { fieldParserMap.putIfAbsent(fieldValue, fieldParser); @@ -178,6 +205,12 @@ public void declareField(Parser p, ParseField parseField, ValueT @Override public void declareField(BiConsumer consumer, ContextParser parser, ParseField parseField, ValueType type) { + if (consumer == null) { + throw new IllegalArgumentException("[consumer] is required"); + } + if (parser == null) { + throw new IllegalArgumentException("[parser] is required"); + } declareField((p, v, c) -> consumer.accept(v, parser.parse(p, c)), parseField, type); } @@ -322,13 +355,13 @@ public String getName() { return name; } - private void parseArray(XContentParser parser, FieldParser fieldParser, String currentFieldName, Value value, Context context) + private void parseArray(XContentParser parser, FieldParser fieldParser, String currentFieldName, Value value, Context context) throws IOException { assert parser.currentToken() == XContentParser.Token.START_ARRAY : "Token was: " + parser.currentToken(); parseValue(parser, fieldParser, currentFieldName, value, context); } - private void parseValue(XContentParser parser, FieldParser fieldParser, String currentFieldName, Value value, Context context) + private void parseValue(XContentParser parser, FieldParser fieldParser, String currentFieldName, Value value, Context context) throws IOException { try { fieldParser.parser.parse(parser, value, context); @@ -337,7 +370,7 @@ private void parseValue(XContentParser parser, FieldParser fieldParser, S } } - private void parseSub(XContentParser parser, FieldParser fieldParser, String currentFieldName, Value value, Context context) + private void parseSub(XContentParser parser, FieldParser fieldParser, String 
currentFieldName, Value value, Context context) throws IOException { final XContentParser.Token token = parser.currentToken(); switch (token) { @@ -361,28 +394,28 @@ private void parseSub(XContentParser parser, FieldParser fieldParser, Str } private FieldParser getParser(String fieldName) { - FieldParser parser = fieldParserMap.get(fieldName); - if (parser == null) { + FieldParser parser = fieldParserMap.get(fieldName); + if (parser == null && false == ignoreUnknownFields) { throw new IllegalArgumentException("[" + name + "] unknown field [" + fieldName + "], parser not found"); } return parser; } - public static class FieldParser { - private final Parser parser; + private class FieldParser { + private final Parser parser; private final EnumSet supportedTokens; private final ParseField parseField; private final ValueType type; - public FieldParser(Parser parser, EnumSet supportedTokens, ParseField parseField, ValueType type) { + FieldParser(Parser parser, EnumSet supportedTokens, ParseField parseField, ValueType type) { this.parser = parser; this.supportedTokens = supportedTokens; this.parseField = parseField; this.type = type; } - public void assertSupports(String parserName, XContentParser.Token token, String currentFieldName, ParseFieldMatcher matcher) { - if (matcher.match(currentFieldName, parseField) == false) { + void assertSupports(String parserName, XContentParser.Token token, String currentFieldName) { + if (parseField.match(currentFieldName) == false) { throw new IllegalStateException("[" + parserName + "] parsefield doesn't accept: " + currentFieldName); } if (supportedTokens.contains(token) == false) { @@ -415,10 +448,11 @@ public enum ValueType { FLOAT(VALUE_NUMBER, VALUE_STRING), FLOAT_OR_NULL(VALUE_NUMBER, VALUE_STRING, VALUE_NULL), DOUBLE(VALUE_NUMBER, VALUE_STRING), + DOUBLE_OR_NULL(VALUE_NUMBER, VALUE_STRING, VALUE_NULL), LONG(VALUE_NUMBER, VALUE_STRING), LONG_OR_NULL(VALUE_NUMBER, VALUE_STRING, VALUE_NULL), INT(VALUE_NUMBER, VALUE_STRING), - BOOLEAN(VALUE_BOOLEAN), + BOOLEAN(VALUE_BOOLEAN, VALUE_STRING), STRING_ARRAY(START_ARRAY, VALUE_STRING), FLOAT_ARRAY(START_ARRAY, VALUE_NUMBER, VALUE_STRING), DOUBLE_ARRAY(START_ARRAY, VALUE_NUMBER, VALUE_STRING), @@ -429,7 +463,10 @@ public enum ValueType { OBJECT_ARRAY(START_OBJECT, START_ARRAY), OBJECT_OR_BOOLEAN(START_OBJECT, VALUE_BOOLEAN), OBJECT_OR_STRING(START_OBJECT, VALUE_STRING), - VALUE(VALUE_BOOLEAN, VALUE_NULL, VALUE_EMBEDDED_OBJECT, VALUE_NUMBER, VALUE_STRING); + OBJECT_ARRAY_BOOLEAN_OR_STRING(START_OBJECT, START_ARRAY, VALUE_BOOLEAN, VALUE_STRING), + OBJECT_ARRAY_OR_STRING(START_OBJECT, START_ARRAY, VALUE_STRING), + VALUE(VALUE_BOOLEAN, VALUE_NULL, VALUE_EMBEDDED_OBJECT, VALUE_NUMBER, VALUE_STRING), + VALUE_OBJECT_ARRAY(VALUE_BOOLEAN, VALUE_NULL, VALUE_EMBEDDED_OBJECT, VALUE_NUMBER, VALUE_STRING, START_OBJECT, START_ARRAY); private final EnumSet tokens; diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/ParseFieldRegistry.java b/core/src/main/java/org/elasticsearch/common/xcontent/ParseFieldRegistry.java index 81f5b995c1815..0282fba764621 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/ParseFieldRegistry.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/ParseFieldRegistry.java @@ -20,7 +20,6 @@ package org.elasticsearch.common.xcontent; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.collect.Tuple; @@ -75,12 +74,11 @@ public void register(T 
value, ParseField parseField) { * Lookup a value from the registry by name while checking that the name matches the ParseField. * * @param name The name of the thing to look up. - * @param parseFieldMatcher to build nice error messages. * @return The value being looked up. Never null. * @throws ParsingException if the named thing isn't in the registry or the name was deprecated and deprecated names aren't supported. */ - public T lookup(String name, ParseFieldMatcher parseFieldMatcher, XContentLocation xContentLocation) { - T value = lookupReturningNullIfNotFound(name, parseFieldMatcher); + public T lookup(String name, XContentLocation xContentLocation) { + T value = lookupReturningNullIfNotFound(name); if (value == null) { throw new ParsingException(xContentLocation, "no [" + registryName + "] registered for [" + name + "]"); } @@ -91,19 +89,17 @@ public T lookup(String name, ParseFieldMatcher parseFieldMatcher, XContentLocati * Lookup a value from the registry by name while checking that the name matches the ParseField. * * @param name The name of the thing to look up. - * @param parseFieldMatcher The parseFieldMatcher. This is used to resolve the {@link ParseFieldMatcher} and to build nice - * error messages. * @return The value being looked up or null if it wasn't found. * @throws ParsingException if the named thing isn't in the registry or the name was deprecated and deprecated names aren't supported. */ - public T lookupReturningNullIfNotFound(String name, ParseFieldMatcher parseFieldMatcher) { + public T lookupReturningNullIfNotFound(String name) { Tuple parseFieldAndValue = registry.get(name); if (parseFieldAndValue == null) { return null; } ParseField parseField = parseFieldAndValue.v1(); T value = parseFieldAndValue.v2(); - boolean match = parseFieldMatcher.match(name, parseField); + boolean match = parseField.match(name); //this is always expected to match, ParseField is useful for deprecation warnings etc. here assert match : "ParseField did not match registered name [" + name + "][" + registryName + "]"; return value; diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/StatusToXContent.java b/core/src/main/java/org/elasticsearch/common/xcontent/StatusToXContent.java deleted file mode 100644 index f22aa39613fad..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/xcontent/StatusToXContent.java +++ /dev/null @@ -1,33 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.common.xcontent; - -import org.elasticsearch.rest.RestStatus; - -/** - * Objects that can both render themselves in as json/yaml/etc and can provide a {@link RestStatus} for their response. Usually should be - * implemented by top level responses sent back to users from REST endpoints. 
- */ -public interface StatusToXContent extends ToXContent { - - /** - * Returns the REST status to make sure it is returned correctly - */ - RestStatus status(); -} diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/StatusToXContentObject.java b/core/src/main/java/org/elasticsearch/common/xcontent/StatusToXContentObject.java new file mode 100644 index 0000000000000..ba6ccdfffad08 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/xcontent/StatusToXContentObject.java @@ -0,0 +1,33 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.common.xcontent; + +import org.elasticsearch.rest.RestStatus; + +/** + * Objects that can both render themselves in as json/yaml/etc and can provide a {@link RestStatus} for their response. Usually should be + * implemented by top level responses sent back to users from REST endpoints. + */ +public interface StatusToXContentObject extends ToXContentObject { + + /** + * Returns the REST status to make sure it is returned correctly + */ + RestStatus status(); +} diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/ToXContent.java b/core/src/main/java/org/elasticsearch/common/xcontent/ToXContent.java index 01111fa940a48..92fc675807b85 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/ToXContent.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/ToXContent.java @@ -20,15 +20,18 @@ package org.elasticsearch.common.xcontent; import org.elasticsearch.common.Booleans; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import java.io.IOException; import java.util.Map; /** * An interface allowing to transfer an object to "XContent" using an {@link XContentBuilder}. + * The output may or may not be a value object. Objects implementing {@link ToXContentObject} output a valid value + * but those that don't may or may not require emitting a startObject and an endObject. 
*/ public interface ToXContent { - interface Params { String param(String key); @@ -63,6 +66,7 @@ public Boolean paramAsBoolean(String key, Boolean defaultValue) { }; class MapParams implements Params { + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(MapParams.class)); private final Map params; @@ -86,12 +90,16 @@ public String param(String key, String defaultValue) { @Override public boolean paramAsBoolean(String key, boolean defaultValue) { - return Booleans.parseBoolean(param(key), defaultValue); + return paramAsBoolean(key, (Boolean) defaultValue); } @Override public Boolean paramAsBoolean(String key, Boolean defaultValue) { - return Booleans.parseBoolean(param(key), defaultValue); + String rawParam = param(key); + if (rawParam != null && Booleans.isStrictlyBoolean(rawParam) == false) { + DEPRECATION_LOGGER.deprecated("Expected a boolean [true/false] for [{}] but got [{}]", key, rawParam); + } + return Booleans.parseBoolean(rawParam, defaultValue); } } @@ -126,4 +134,8 @@ public Boolean paramAsBoolean(String key, Boolean defaultValue) { } XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException; + + default boolean isFragment() { + return true; + } } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/ToXContentObject.java b/core/src/main/java/org/elasticsearch/common/xcontent/ToXContentObject.java new file mode 100644 index 0000000000000..ed9aa3047193a --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/xcontent/ToXContentObject.java @@ -0,0 +1,34 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.xcontent; + +/** + * An interface allowing to transfer an object to "XContent" using an {@link XContentBuilder}. + * The difference between {@link ToXContent} and {@link ToXContentObject} is that the former may output a fragment that + * requires to start and end a new anonymous object externally, while the latter guarantees that what gets printed + * out is fully valid syntax without any external addition. 
+ */ +public interface ToXContentObject extends ToXContent { + + @Override + default boolean isFragment() { + return false; + } +} diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/XContent.java b/core/src/main/java/org/elasticsearch/common/xcontent/XContent.java index 83facb00f004d..8dad910d1fc84 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/XContent.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/XContent.java @@ -19,7 +19,6 @@ package org.elasticsearch.common.xcontent; -import org.elasticsearch.common.Strings; import org.elasticsearch.common.bytes.BytesReference; import java.io.IOException; @@ -62,31 +61,31 @@ default XContentGenerator createGenerator(OutputStream os) throws IOException { /** * Creates a parser over the provided string content. */ - XContentParser createParser(String content) throws IOException; + XContentParser createParser(NamedXContentRegistry xContentRegistry, String content) throws IOException; /** * Creates a parser over the provided input stream. */ - XContentParser createParser(InputStream is) throws IOException; + XContentParser createParser(NamedXContentRegistry xContentRegistry, InputStream is) throws IOException; /** * Creates a parser over the provided bytes. */ - XContentParser createParser(byte[] data) throws IOException; + XContentParser createParser(NamedXContentRegistry xContentRegistry, byte[] data) throws IOException; /** * Creates a parser over the provided bytes. */ - XContentParser createParser(byte[] data, int offset, int length) throws IOException; + XContentParser createParser(NamedXContentRegistry xContentRegistry, byte[] data, int offset, int length) throws IOException; /** * Creates a parser over the provided bytes. */ - XContentParser createParser(BytesReference bytes) throws IOException; + XContentParser createParser(NamedXContentRegistry xContentRegistry, BytesReference bytes) throws IOException; /** * Creates a parser over the provided reader. 
*/ - XContentParser createParser(Reader reader) throws IOException; + XContentParser createParser(NamedXContentRegistry xContentRegistry, Reader reader) throws IOException; } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/XContentBuilder.java b/core/src/main/java/org/elasticsearch/common/xcontent/XContentBuilder.java index 5274773b99457..f0427ce246669 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/XContentBuilder.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/XContentBuilder.java @@ -20,10 +20,9 @@ package org.elasticsearch.common.xcontent; import org.apache.lucene.util.BytesRef; -import org.elasticsearch.common.Strings; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.geo.GeoPoint; -import org.elasticsearch.common.io.BytesStream; +import org.elasticsearch.common.io.stream.BytesStream; import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.common.text.Text; @@ -34,44 +33,109 @@ import org.joda.time.format.DateTimeFormatter; import org.joda.time.format.ISODateTimeFormat; +import java.io.Flushable; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; -import java.math.BigDecimal; -import java.math.RoundingMode; import java.nio.file.Path; +import java.util.Arrays; import java.util.Calendar; import java.util.Collections; import java.util.Date; import java.util.HashMap; +import java.util.IdentityHashMap; import java.util.Locale; import java.util.Map; +import java.util.Objects; import java.util.Set; import java.util.concurrent.TimeUnit; /** * A utility to build XContent (ie json). */ -public final class XContentBuilder implements BytesStream, Releasable { - - public static final DateTimeFormatter defaultDatePrinter = ISODateTimeFormat.dateTime().withZone(DateTimeZone.UTC); +public final class XContentBuilder implements Releasable, Flushable { + /** + * Create a new {@link XContentBuilder} using the given {@link XContent} content. + *
    + * The builder uses an internal {@link BytesStreamOutput} output stream to build the content. + *
    + * + * @param xContent the {@link XContent} + * @return a new {@link XContentBuilder} + * @throws IOException if an {@link IOException} occurs while building the content + */ public static XContentBuilder builder(XContent xContent) throws IOException { return new XContentBuilder(xContent, new BytesStreamOutput()); } + /** + * Create a new {@link XContentBuilder} using the given {@link XContent} content and some inclusive and/or exclusive filters. + *
    + * The builder uses an internal {@link BytesStreamOutput} output stream to build the content. When both exclusive and + * inclusive filters are provided, the underlying builder will first use exclusion filters to remove fields and then will check the + * remaining fields against the inclusive filters. + *
    + * + * @param xContent the {@link XContent} + * @param includes the inclusive filters: only fields and objects that match the inclusive filters will be written to the output. + * @param excludes the exclusive filters: only fields and objects that don't match the exclusive filters will be written to the output. + * @throws IOException if an {@link IOException} occurs while building the content + */ public static XContentBuilder builder(XContent xContent, Set includes, Set excludes) throws IOException { return new XContentBuilder(xContent, new BytesStreamOutput(), includes, excludes); } - private XContentGenerator generator; + public static final DateTimeFormatter DEFAULT_DATE_PRINTER = ISODateTimeFormat.dateTime().withZone(DateTimeZone.UTC); + + private static final Map, Writer> WRITERS; + static { + Map, Writer> writers = new HashMap<>(); + writers.put(Boolean.class, (b, v) -> b.value((Boolean) v)); + writers.put(Byte.class, (b, v) -> b.value((Byte) v)); + writers.put(byte[].class, (b, v) -> b.value((byte[]) v)); + writers.put(BytesRef.class, (b, v) -> b.binaryValue((BytesRef) v)); + writers.put(Date.class, (b, v) -> b.value((Date) v)); + writers.put(Double.class, (b, v) -> b.value((Double) v)); + writers.put(double[].class, (b, v) -> b.values((double[]) v)); + writers.put(Float.class, (b, v) -> b.value((Float) v)); + writers.put(float[].class, (b, v) -> b.values((float[]) v)); + writers.put(GeoPoint.class, (b, v) -> b.value((GeoPoint) v)); + writers.put(Integer.class, (b, v) -> b.value((Integer) v)); + writers.put(int[].class, (b, v) -> b.values((int[]) v)); + writers.put(Long.class, (b, v) -> b.value((Long) v)); + writers.put(long[].class, (b, v) -> b.values((long[]) v)); + writers.put(Short.class, (b, v) -> b.value((Short) v)); + writers.put(short[].class, (b, v) -> b.values((short[]) v)); + writers.put(String.class, (b, v) -> b.value((String) v)); + writers.put(String[].class, (b, v) -> b.values((String[]) v)); + writers.put(Text.class, (b, v) -> b.value((Text) v)); + + WRITERS = Collections.unmodifiableMap(writers); + } + + @FunctionalInterface + private interface Writer { + void write(XContentBuilder builder, Object value) throws IOException; + } + + /** + * XContentGenerator used to build the XContent object + */ + private final XContentGenerator generator; + /** + * Output stream to which the built object is written + */ private final OutputStream bos; + /** + * When this flag is set to true, some types of values are written in a format easier to read for a human. + */ private boolean humanReadable = false; /** - * Constructs a new builder using the provided xcontent and an OutputStream. Make sure + * Constructs a new builder using the provided XContent and an OutputStream. Make sure * to call {@link #close()} when the builder is done with. */ public XContentBuilder(XContent xContent, OutputStream bos) throws IOException { @@ -79,7 +143,7 @@ public XContentBuilder(XContent xContent, OutputStream bos) throws IOException { } /** - * Constructs a new builder using the provided xcontent, an OutputStream and + * Constructs a new builder using the provided XContent, an OutputStream and * some filters. If filters are specified, only those values matching a * filter will be written to the output stream. Make sure to call * {@link #close()} when the builder is done with. @@ -117,41 +181,45 @@ public boolean isPrettyPrint() { return generator.isPrettyPrint(); } + /** + * Indicate that the current {@link XContentBuilder} must write a line feed ("\n") + * at the end of the built object. 
+ *
    + * This only applies for JSON XContent type. It has no effect for other types. + */ public XContentBuilder lfAtEnd() { generator.usePrintLineFeedAtEnd(); return this; } + /** + * Set the "human readable" flag. Once set, some types of values are written in a + * format easier to read for a human. + */ public XContentBuilder humanReadable(boolean humanReadable) { this.humanReadable = humanReadable; return this; } + /** + * @return the value of the "human readable" flag. When the value is equal to true, + * some types of values are written in a format easier to read for a human. + */ public boolean humanReadable() { return this.humanReadable; } - public XContentBuilder field(String name, ToXContent xContent) throws IOException { - field(name); - xContent.toXContent(this, ToXContent.EMPTY_PARAMS); - return this; - } + //////////////////////////////////////////////////////////////////////////// + // Structure (object, array, field, null values...) + ////////////////////////////////// - public XContentBuilder field(String name, ToXContent xContent, ToXContent.Params params) throws IOException { - field(name); - xContent.toXContent(this, params); + public XContentBuilder startObject() throws IOException { + generator.writeStartObject(); return this; } public XContentBuilder startObject(String name) throws IOException { - field(name); - startObject(); - return this; - } - - public XContentBuilder startObject() throws IOException { - generator.writeStartObject(); - return this; + return field(name).startObject(); } public XContentBuilder endObject() throws IOException { @@ -159,33 +227,13 @@ public XContentBuilder endObject() throws IOException { return this; } - public XContentBuilder array(String name, String... values) throws IOException { - startArray(name); - for (String value : values) { - value(value); - } - endArray(); - return this; - } - - public XContentBuilder array(String name, Object... 
values) throws IOException { - startArray(name); - for (Object value : values) { - value(value); - } - endArray(); + public XContentBuilder startArray() throws IOException { + generator.writeStartArray(); return this; } public XContentBuilder startArray(String name) throws IOException { - field(name); - startArray(); - return this; - } - - public XContentBuilder startArray() throws IOException { - generator.writeStartArray(); - return this; + return field(name).startArray(); } public XContentBuilder endArray() throws IOException { @@ -194,563 +242,780 @@ public XContentBuilder endArray() throws IOException { } public XContentBuilder field(String name) throws IOException { - if (name == null) { - throw new IllegalArgumentException("field name cannot be null"); - } + ensureNameNotNull(name); generator.writeFieldName(name); return this; } - public XContentBuilder field(String name, char[] value, int offset, int length) throws IOException { - field(name); - if (value == null) { - generator.writeNull(); - } else { - generator.writeString(value, offset, length); - } + public XContentBuilder nullField(String name) throws IOException { + ensureNameNotNull(name); + generator.writeNullField(name); return this; } - public XContentBuilder field(String name, String value) throws IOException { - field(name); - if (value == null) { - generator.writeNull(); - } else { - generator.writeString(value); - } + public XContentBuilder nullValue() throws IOException { + generator.writeNull(); return this; } - public XContentBuilder field(String name, Integer value) throws IOException { - field(name); - if (value == null) { - generator.writeNull(); - } else { - generator.writeNumber(value.intValue()); - } - return this; + //////////////////////////////////////////////////////////////////////////// + // Boolean + ////////////////////////////////// + + public XContentBuilder field(String name, Boolean value) throws IOException { + return (value == null) ? nullField(name) : field(name, value.booleanValue()); } - public XContentBuilder field(String name, int value) throws IOException { - field(name); - generator.writeNumber(value); + public XContentBuilder field(String name, boolean value) throws IOException { + ensureNameNotNull(name); + generator.writeBooleanField(name, value); return this; } - public XContentBuilder field(String name, Long value) throws IOException { - field(name); - if (value == null) { - generator.writeNull(); - } else { - generator.writeNumber(value.longValue()); + public XContentBuilder array(String name, boolean[] values) throws IOException { + return field(name).values(values); + } + + private XContentBuilder values(boolean[] values) throws IOException { + if (values == null) { + return nullValue(); + } + startArray(); + for (boolean b : values) { + value(b); } + endArray(); return this; } - public XContentBuilder field(String name, long value) throws IOException { - field(name); - generator.writeNumber(value); - return this; + public XContentBuilder value(Boolean value) throws IOException { + return (value == null) ? 
nullValue() : value(value.booleanValue()); } - public XContentBuilder field(String name, Float value) throws IOException { - field(name); - if (value == null) { - generator.writeNull(); - } else { - generator.writeNumber(value.floatValue()); - } + public XContentBuilder value(boolean value) throws IOException { + generator.writeBoolean(value); return this; } - public XContentBuilder field(String name, float value) throws IOException { - field(name); + //////////////////////////////////////////////////////////////////////////// + // Byte + ////////////////////////////////// + + public XContentBuilder field(String name, Byte value) throws IOException { + return (value == null) ? nullField(name) : field(name, value.byteValue()); + } + + public XContentBuilder field(String name, byte value) throws IOException { + return field(name).value(value); + } + + public XContentBuilder value(Byte value) throws IOException { + return (value == null) ? nullValue() : value(value.byteValue()); + } + + public XContentBuilder value(byte value) throws IOException { generator.writeNumber(value); return this; } + //////////////////////////////////////////////////////////////////////////// + // Double + ////////////////////////////////// + public XContentBuilder field(String name, Double value) throws IOException { - field(name); - if (value == null) { - generator.writeNull(); - } else { - generator.writeNumber(value); - } - return this; + return (value == null) ? nullField(name) : field(name, value.doubleValue()); } public XContentBuilder field(String name, double value) throws IOException { - field(name); - generator.writeNumber(value); + ensureNameNotNull(name); + generator.writeNumberField(name, value); return this; } - public XContentBuilder field(String name, BigDecimal value) throws IOException { - return field(name, value, value.scale(), RoundingMode.HALF_UP, true); + public XContentBuilder array(String name, double[] values) throws IOException { + return field(name).values(values); } - public XContentBuilder field(String name, BigDecimal value, int scale, RoundingMode rounding, boolean toDouble) throws IOException { - field(name); - if (value == null) { - generator.writeNull(); - } else { - if (toDouble) { - try { - generator.writeNumber(value.setScale(scale, rounding).doubleValue()); - } catch (ArithmeticException e) { - generator.writeString(value.toEngineeringString()); - } - } else { - generator.writeString(value.toEngineeringString()); - } + private XContentBuilder values(double[] values) throws IOException { + if (values == null) { + return nullValue(); + } + startArray(); + for (double b : values) { + value(b); } + endArray(); return this; } - /** - * Writes the binary content of the given BytesRef - * Use {@link org.elasticsearch.common.xcontent.XContentParser#binaryValue()} to read the value back - */ - public XContentBuilder field(String name, BytesRef value) throws IOException { - field(name); - generator.writeBinary(value.bytes, value.offset, value.length); - return this; + public XContentBuilder value(Double value) throws IOException { + return (value == null) ? 
nullValue() : value(value.doubleValue()); } - /** - * Writes the binary content of the given BytesReference - * Use {@link org.elasticsearch.common.xcontent.XContentParser#binaryValue()} to read the value back - */ - public XContentBuilder field(String name, BytesReference value) throws IOException { - field(name); - final BytesRef ref = value.toBytesRef(); - generator.writeBinary(ref.bytes, ref.offset, ref.length); + public XContentBuilder value(double value) throws IOException { + generator.writeNumber(value); return this; } - /** - * Writes the binary content of the given BytesRef as UTF-8 bytes - * Use {@link XContentParser#utf8Bytes()} to read the value back - */ - public XContentBuilder utf8Field(String name, BytesRef value) throws IOException { - field(name); - generator.writeUTF8String(value.bytes, value.offset, value.length); - return this; - } + //////////////////////////////////////////////////////////////////////////// + // Float + ////////////////////////////////// - public XContentBuilder field(String name, Text value) throws IOException { - field(name); - if (value.hasString()) { - generator.writeString(value.string()); - } else { - // TODO: TextBytesOptimization we can use a buffer here to convert it? maybe add a request to jackson to support InputStream as well? - final BytesRef ref = value.bytes().toBytesRef(); - generator.writeUTF8String(ref.bytes, ref.offset, ref.length); - } - return this; + public XContentBuilder field(String name, Float value) throws IOException { + return (value == null) ? nullField(name) : field(name, value.floatValue()); } - public XContentBuilder field(String name, byte[] value, int offset, int length) throws IOException { - field(name); - generator.writeBinary(value, offset, length); + public XContentBuilder field(String name, float value) throws IOException { + ensureNameNotNull(name); + generator.writeNumberField(name, value); return this; } - public XContentBuilder field(String name, Map value) throws IOException { - field(name); - value(value); - return this; + public XContentBuilder array(String name, float[] values) throws IOException { + return field(name).values(values); } - public XContentBuilder field(String name, Iterable value) throws IOException { - if (value instanceof Path) { - //treat Paths as single value - field(name); - value(value); - } else { - startArray(name); - for (Object o : value) { - value(o); - } - endArray(); + private XContentBuilder values(float[] values) throws IOException { + if (values == null) { + return nullValue(); } - return this; - } - - public XContentBuilder field(String name, boolean... value) throws IOException { - startArray(name); - for (boolean o : value) { - value(o); + startArray(); + for (float f : values) { + value(f); } endArray(); return this; } - public XContentBuilder field(String name, String... value) throws IOException { - startArray(name); - for (String o : value) { - value(o); - } - endArray(); - return this; + public XContentBuilder value(Float value) throws IOException { + return (value == null) ? nullValue() : value(value.floatValue()); } - public XContentBuilder field(String name, Object... value) throws IOException { - startArray(name); - for (Object o : value) { - value(o); - } - endArray(); + public XContentBuilder value(float value) throws IOException { + generator.writeNumber(value); return this; } - public XContentBuilder field(String name, int... 
value) throws IOException { - startArray(name); - for (Object o : value) { - value(o); - } - endArray(); - return this; + //////////////////////////////////////////////////////////////////////////// + // Integer + ////////////////////////////////// + + public XContentBuilder field(String name, Integer value) throws IOException { + return (value == null) ? nullField(name) : field(name, value.intValue()); } - public XContentBuilder field(String name, long... value) throws IOException { - startArray(name); - for (Object o : value) { - value(o); - } - endArray(); + public XContentBuilder field(String name, int value) throws IOException { + ensureNameNotNull(name); + generator.writeNumberField(name, value); return this; } - public XContentBuilder field(String name, float... value) throws IOException { - startArray(name); - for (Object o : value) { - value(o); - } - endArray(); - return this; + public XContentBuilder array(String name, int[] values) throws IOException { + return field(name).values(values); } - public XContentBuilder field(String name, double... value) throws IOException { - startArray(name); - for (Object o : value) { - value(o); + private XContentBuilder values(int[] values) throws IOException { + if (values == null) { + return nullValue(); + } + startArray(); + for (int i : values) { + value(i); } endArray(); return this; } - public XContentBuilder field(String name, Object value) throws IOException { - field(name); - writeValue(value); - return this; + public XContentBuilder value(Integer value) throws IOException { + return (value == null) ? nullValue() : value(value.intValue()); } - public XContentBuilder value(Object value) throws IOException { - writeValue(value); + public XContentBuilder value(int value) throws IOException { + generator.writeNumber(value); return this; } - public XContentBuilder field(String name, boolean value) throws IOException { - field(name); - generator.writeBoolean(value); + //////////////////////////////////////////////////////////////////////////// + // Long + ////////////////////////////////// + + public XContentBuilder field(String name, Long value) throws IOException { + return (value == null) ? nullField(name) : field(name, value.longValue()); + } + + public XContentBuilder field(String name, long value) throws IOException { + ensureNameNotNull(name); + generator.writeNumberField(name, value); return this; } - public XContentBuilder field(String name, byte[] value) throws IOException { - field(name); - if (value == null) { - generator.writeNull(); - } else { - generator.writeBinary(value); + public XContentBuilder array(String name, long[] values) throws IOException { + return field(name).values(values); + } + + private XContentBuilder values(long[] values) throws IOException { + if (values == null) { + return nullValue(); + } + startArray(); + for (long l : values) { + value(l); } + endArray(); return this; } - public XContentBuilder field(String name, ReadableInstant date) throws IOException { - field(name); - return value(date); + public XContentBuilder value(Long value) throws IOException { + return (value == null) ? 
nullValue() : value(value.longValue()); } - public XContentBuilder field(String name, ReadableInstant date, DateTimeFormatter formatter) throws IOException { - field(name); - return value(date, formatter); + public XContentBuilder value(long value) throws IOException { + generator.writeNumber(value); + return this; } - public XContentBuilder field(String name, Date date) throws IOException { - field(name); - return value(date); - } + //////////////////////////////////////////////////////////////////////////// + // Short + ////////////////////////////////// - public XContentBuilder field(String name, Date date, DateTimeFormatter formatter) throws IOException { - field(name); - return value(date, formatter); + public XContentBuilder field(String name, Short value) throws IOException { + return (value == null) ? nullField(name) : field(name, value.shortValue()); } - public XContentBuilder nullField(String name) throws IOException { - generator.writeNullField(name); - return this; + public XContentBuilder field(String name, short value) throws IOException { + return field(name).value(value); } - public XContentBuilder nullValue() throws IOException { - generator.writeNull(); - return this; + public XContentBuilder array(String name, short[] values) throws IOException { + return field(name).values(values); } - public XContentBuilder rawField(String fieldName, InputStream content) throws IOException { - generator.writeRawField(fieldName, content); + private XContentBuilder values(short[] values) throws IOException { + if (values == null) { + return nullValue(); + } + startArray(); + for (short s : values) { + value(s); + } + endArray(); return this; } - public XContentBuilder rawField(String fieldName, BytesReference content) throws IOException { - generator.writeRawField(fieldName, content); - return this; + public XContentBuilder value(Short value) throws IOException { + return (value == null) ? nullValue() : value(value.shortValue()); } - public XContentBuilder rawValue(BytesReference content) throws IOException { - generator.writeRawValue(content); + public XContentBuilder value(short value) throws IOException { + generator.writeNumber(value); return this; } - public XContentBuilder timeValueField(String rawFieldName, String readableFieldName, TimeValue timeValue) throws IOException { - if (humanReadable) { - field(readableFieldName, timeValue.toString()); + //////////////////////////////////////////////////////////////////////////// + // String + ////////////////////////////////// + + public XContentBuilder field(String name, String value) throws IOException { + if (value == null) { + return nullField(name); } - field(rawFieldName, timeValue.millis()); + ensureNameNotNull(name); + generator.writeStringField(name, value); return this; } - public XContentBuilder timeValueField(String rawFieldName, String readableFieldName, long rawTime) throws IOException { - if (humanReadable) { - field(readableFieldName, new TimeValue(rawTime).toString()); + public XContentBuilder array(String name, String... 
values) throws IOException { + return field(name).values(values); + } + + private XContentBuilder values(String[] values) throws IOException { + if (values == null) { + return nullValue(); } - field(rawFieldName, rawTime); + startArray(); + for (String s : values) { + value(s); + } + endArray(); return this; } - public XContentBuilder timeValueField(String rawFieldName, String readableFieldName, long rawTime, TimeUnit timeUnit) throws - IOException { - if (humanReadable) { - field(readableFieldName, new TimeValue(rawTime, timeUnit).toString()); + public XContentBuilder value(String value) throws IOException { + if (value == null) { + return nullValue(); } - field(rawFieldName, rawTime); + generator.writeString(value); return this; } - public XContentBuilder dateValueField(String rawFieldName, String readableFieldName, long rawTimestamp) throws IOException { - if (humanReadable) { - field(readableFieldName, defaultDatePrinter.print(rawTimestamp)); + //////////////////////////////////////////////////////////////////////////// + // Binary + ////////////////////////////////// + + public XContentBuilder field(String name, byte[] value) throws IOException { + if (value == null) { + return nullField(name); } - field(rawFieldName, rawTimestamp); + ensureNameNotNull(name); + generator.writeBinaryField(name, value); return this; } - public XContentBuilder byteSizeField(String rawFieldName, String readableFieldName, ByteSizeValue byteSizeValue) throws IOException { - if (humanReadable) { - field(readableFieldName, byteSizeValue.toString()); + public XContentBuilder value(byte[] value) throws IOException { + if (value == null) { + return nullValue(); } - field(rawFieldName, byteSizeValue.bytes()); + generator.writeBinary(value); return this; } - public XContentBuilder byteSizeField(String rawFieldName, String readableFieldName, long rawSize) throws IOException { - if (humanReadable) { - field(readableFieldName, new ByteSizeValue(rawSize).toString()); + public XContentBuilder field(String name, byte[] value, int offset, int length) throws IOException { + return field(name).value(value, offset, length); + } + + public XContentBuilder value(byte[] value, int offset, int length) throws IOException { + if (value == null) { + return nullValue(); } - field(rawFieldName, rawSize); + generator.writeBinary(value, offset, length); return this; } - public XContentBuilder percentageField(String rawFieldName, String readableFieldName, double percentage) throws IOException { - if (humanReadable) { - field(readableFieldName, String.format(Locale.ROOT, "%1.1f%%", percentage)); + /** + * Writes the binary content of the given {@link BytesRef}. + * + * Use {@link org.elasticsearch.common.xcontent.XContentParser#binaryValue()} to read the value back + */ + public XContentBuilder field(String name, BytesRef value) throws IOException { + return field(name).binaryValue(value); + } + + /** + * Writes the binary content of the given {@link BytesRef} as UTF-8 bytes. + * + * Use {@link XContentParser#utf8Bytes()} to read the value back + */ + public XContentBuilder utf8Field(String name, BytesRef value) throws IOException { + return field(name).utf8Value(value); + } + + /** + * Writes the binary content of the given {@link BytesRef}. 
+ * + * Use {@link org.elasticsearch.common.xcontent.XContentParser#binaryValue()} to read the value back + */ + public XContentBuilder binaryValue(BytesRef value) throws IOException { + if (value == null) { + return nullValue(); } - field(rawFieldName, percentage); + value(value.bytes, value.offset, value.length); return this; } - public XContentBuilder value(Boolean value) throws IOException { + /** + * Writes the binary content of the given {@link BytesRef} as UTF-8 bytes. + * + * Use {@link XContentParser#utf8Bytes()} to read the value back + */ + public XContentBuilder utf8Value(BytesRef value) throws IOException { if (value == null) { return nullValue(); } - return value(value.booleanValue()); + generator.writeUTF8String(value.bytes, value.offset, value.length); + return this; } - public XContentBuilder value(boolean value) throws IOException { - generator.writeBoolean(value); - return this; + /** + * Writes the binary content of the given {@link BytesReference}. + * + * Use {@link org.elasticsearch.common.xcontent.XContentParser#binaryValue()} to read the value back + */ + public XContentBuilder field(String name, BytesReference value) throws IOException { + return field(name).value(value); } - public XContentBuilder value(ReadableInstant date) throws IOException { - return value(date, defaultDatePrinter); + /** + * Writes the binary content of the given {@link BytesReference}. + * + * Use {@link org.elasticsearch.common.xcontent.XContentParser#binaryValue()} to read the value back + */ + public XContentBuilder value(BytesReference value) throws IOException { + return (value == null) ? nullValue() : binaryValue(value.toBytesRef()); } - public XContentBuilder value(ReadableInstant date, DateTimeFormatter dateTimeFormatter) throws IOException { - if (date == null) { + //////////////////////////////////////////////////////////////////////////// + // Text + ////////////////////////////////// + + public XContentBuilder field(String name, Text value) throws IOException { + return field(name).value(value); + } + + public XContentBuilder value(Text value) throws IOException { + if (value == null) { return nullValue(); + } else if (value.hasString()) { + return value(value.string()); + } else { + // TODO: TextBytesOptimization we can use a buffer here to convert it? maybe add a + // request to jackson to support InputStream as well? 
+ return utf8Value(value.bytes().toBytesRef()); } - return value(dateTimeFormatter.print(date)); } - public XContentBuilder value(Date date) throws IOException { - return value(date, defaultDatePrinter); + //////////////////////////////////////////////////////////////////////////// + // Date + ////////////////////////////////// + + public XContentBuilder field(String name, ReadableInstant value) throws IOException { + return field(name).value(value); } - public XContentBuilder value(Date date, DateTimeFormatter dateTimeFormatter) throws IOException { - if (date == null) { - return nullValue(); - } - return value(dateTimeFormatter.print(date.getTime())); + public XContentBuilder field(String name, ReadableInstant value, DateTimeFormatter formatter) throws IOException { + return field(name).value(value, formatter); } - public XContentBuilder value(Integer value) throws IOException { + public XContentBuilder value(ReadableInstant value) throws IOException { + return value(value, DEFAULT_DATE_PRINTER); + } + + public XContentBuilder value(ReadableInstant value, DateTimeFormatter formatter) throws IOException { if (value == null) { return nullValue(); } - return value(value.intValue()); + ensureFormatterNotNull(formatter); + return value(formatter.print(value)); } - public XContentBuilder value(int value) throws IOException { - generator.writeNumber(value); - return this; + public XContentBuilder field(String name, Date value) throws IOException { + return field(name).value(value); } - public XContentBuilder value(Long value) throws IOException { + public XContentBuilder field(String name, Date value, DateTimeFormatter formatter) throws IOException { + return field(name).value(value, formatter); + } + + public XContentBuilder value(Date value) throws IOException { + return value(value, DEFAULT_DATE_PRINTER); + } + + public XContentBuilder value(Date value, DateTimeFormatter formatter) throws IOException { if (value == null) { return nullValue(); } - return value(value.longValue()); + return value(formatter, value.getTime()); } - public XContentBuilder value(long value) throws IOException { - generator.writeNumber(value); + public XContentBuilder dateField(String name, String readableName, long value) throws IOException { + if (humanReadable) { + field(readableName).value(DEFAULT_DATE_PRINTER, value); + } + field(name, value); return this; } - public XContentBuilder value(Float value) throws IOException { + XContentBuilder value(Calendar value) throws IOException { if (value == null) { return nullValue(); } - return value(value.floatValue()); + return value(DEFAULT_DATE_PRINTER, value.getTimeInMillis()); } - public XContentBuilder value(float value) throws IOException { - generator.writeNumber(value); - return this; + XContentBuilder value(DateTimeFormatter formatter, long value) throws IOException { + ensureFormatterNotNull(formatter); + return value(formatter.print(value)); } - public XContentBuilder value(Double value) throws IOException { + //////////////////////////////////////////////////////////////////////////// + // GeoPoint & LatLon + ////////////////////////////////// + + public XContentBuilder field(String name, GeoPoint value) throws IOException { + return field(name).value(value); + } + + public XContentBuilder value(GeoPoint value) throws IOException { if (value == null) { return nullValue(); } - return value(value.doubleValue()); + return latlon(value.getLat(), value.getLon()); } - public XContentBuilder value(double value) throws IOException { - generator.writeNumber(value); - 
return this; + public XContentBuilder latlon(String name, double lat, double lon) throws IOException { + return field(name).latlon(lat, lon); } - public XContentBuilder value(String value) throws IOException { + public XContentBuilder latlon(double lat, double lon) throws IOException { + return startObject().field("lat", lat).field("lon", lon).endObject(); + } + + //////////////////////////////////////////////////////////////////////////// + // Path + ////////////////////////////////// + + public XContentBuilder value(Path value) throws IOException { if (value == null) { return nullValue(); } - generator.writeString(value); - return this; + return value(value.toString()); } - public XContentBuilder value(byte[] value) throws IOException { - if (value == null) { + //////////////////////////////////////////////////////////////////////////// + // Objects + // + // These methods are used when the type of value is unknown. It tries to fallback + // on typed methods and use Object.toString() as a last resort. Always prefer using + // typed methods over this. + ////////////////////////////////// + + public XContentBuilder field(String name, Object value) throws IOException { + return field(name).value(value); + } + + public XContentBuilder array(String name, Object... values) throws IOException { + return field(name).values(values); + } + + XContentBuilder values(Object[] values) throws IOException { + if (values == null) { return nullValue(); } - generator.writeBinary(value); + + // checks that the array of object does not contain references to itself because + // iterating over entries will cause a stackoverflow error + ensureNoSelfReferences(values); + + startArray(); + for (Object o : values) { + value(o); + } + endArray(); return this; } - public XContentBuilder value(byte[] value, int offset, int length) throws IOException { + public XContentBuilder value(Object value) throws IOException { + unknownValue(value); + return this; + } + + private void unknownValue(Object value) throws IOException { if (value == null) { - return nullValue(); + nullValue(); + return; + } + Writer writer = WRITERS.get(value.getClass()); + if (writer != null) { + writer.write(this, value); + } else if (value instanceof Path) { + //Path implements Iterable and causes endless recursion and a StackOverFlow if treated as an Iterable here + value((Path) value); + } else if (value instanceof Map) { + map((Map) value); + } else if (value instanceof Iterable) { + value((Iterable) value); + } else if (value instanceof Object[]) { + values((Object[]) value); + } else if (value instanceof Calendar) { + value((Calendar) value); + } else if (value instanceof ReadableInstant) { + value((ReadableInstant) value); + } else if (value instanceof BytesReference) { + value((BytesReference) value); + } else if (value instanceof ToXContent) { + value((ToXContent) value); + } else { + // This is a "value" object (like enum, DistanceUnit, etc) just toString() it + // (yes, it can be misleading when toString a Java class, but really, jackson should be used in that case) + value(Objects.toString(value)); } - generator.writeBinary(value, offset, length); - return this; } - /** - * Writes the binary content of the given BytesRef - * Use {@link org.elasticsearch.common.xcontent.XContentParser#binaryValue()} to read the value back - */ - public XContentBuilder value(BytesRef value) throws IOException { + //////////////////////////////////////////////////////////////////////////// + // ToXContent + ////////////////////////////////// + + public 
XContentBuilder field(String name, ToXContent value) throws IOException { + return field(name).value(value); + } + + public XContentBuilder field(String name, ToXContent value, ToXContent.Params params) throws IOException { + return field(name).value(value, params); + } + + private XContentBuilder value(ToXContent value) throws IOException { + return value(value, ToXContent.EMPTY_PARAMS); + } + + private XContentBuilder value(ToXContent value, ToXContent.Params params) throws IOException { if (value == null) { return nullValue(); } - generator.writeBinary(value.bytes, value.offset, value.length); + value.toXContent(this, params); return this; } - /** - * Writes the binary content of the given BytesReference - * Use {@link org.elasticsearch.common.xcontent.XContentParser#binaryValue()} to read the value back - */ - public XContentBuilder value(BytesReference value) throws IOException { - if (value == null) { + //////////////////////////////////////////////////////////////////////////// + // Maps & Iterable + ////////////////////////////////// + + public XContentBuilder field(String name, Map values) throws IOException { + return field(name).map(values); + } + + public XContentBuilder map(Map values) throws IOException { + if (values == null) { return nullValue(); } - BytesRef ref = value.toBytesRef(); - generator.writeBinary(ref.bytes, ref.offset, ref.length); + + // checks that the map does not contain references to itself because + // iterating over map entries will cause a stackoverflow error + ensureNoSelfReferences(values); + + startObject(); + for (Map.Entry value : values.entrySet()) { + field(value.getKey()); + unknownValue(value.getValue()); + } + endObject(); return this; } - public XContentBuilder value(Text value) throws IOException { - if (value == null) { + public XContentBuilder field(String name, Iterable values) throws IOException { + return field(name).value(values); + } + + private XContentBuilder value(Iterable values) throws IOException { + if (values == null) { return nullValue(); - } else if (value.hasString()) { - generator.writeString(value.string()); + } + + if (values instanceof Path) { + //treat as single value + value((Path) values); } else { - BytesRef bytesRef = value.bytes().toBytesRef(); - generator.writeUTF8String(bytesRef.bytes, bytesRef.offset, bytesRef.length); + // checks that the iterable does not contain references to itself because + // iterating over entries will cause a stackoverflow error + ensureNoSelfReferences(values); + + startArray(); + for (Object value : values) { + unknownValue(value); + } + endArray(); } return this; } - public XContentBuilder map(Map map) throws IOException { - if (map == null) { - return nullValue(); + //////////////////////////////////////////////////////////////////////////// + // Misc. 
+ ////////////////////////////////// + + public XContentBuilder timeValueField(String rawFieldName, String readableFieldName, TimeValue timeValue) throws IOException { + if (humanReadable) { + field(readableFieldName, timeValue.toString()); } - writeMap(map); + field(rawFieldName, timeValue.millis()); return this; } - public XContentBuilder value(Map map) throws IOException { - if (map == null) { - return nullValue(); + public XContentBuilder timeValueField(String rawFieldName, String readableFieldName, long rawTime) throws IOException { + if (humanReadable) { + field(readableFieldName, new TimeValue(rawTime).toString()); } - writeMap(map); + field(rawFieldName, rawTime); return this; } - public XContentBuilder value(Iterable value) throws IOException { - if (value == null) { - return nullValue(); + public XContentBuilder timeValueField(String rawFieldName, String readableFieldName, long rawTime, TimeUnit timeUnit) throws + IOException { + if (humanReadable) { + field(readableFieldName, new TimeValue(rawTime, timeUnit).toString()); } - if (value instanceof Path) { - //treat as single value - writeValue(value); - } else { - startArray(); - for (Object o : value) { - value(o); - } - endArray(); + field(rawFieldName, rawTime); + return this; + } + + + public XContentBuilder percentageField(String rawFieldName, String readableFieldName, double percentage) throws IOException { + if (humanReadable) { + field(readableFieldName, String.format(Locale.ROOT, "%1.1f%%", percentage)); } + field(rawFieldName, percentage); return this; } - public XContentBuilder latlon(String name, double lat, double lon) throws IOException { - return startObject(name).field("lat", lat).field("lon", lon).endObject(); + public XContentBuilder byteSizeField(String rawFieldName, String readableFieldName, ByteSizeValue byteSizeValue) throws IOException { + if (humanReadable) { + field(readableFieldName, byteSizeValue.toString()); + } + field(rawFieldName, byteSizeValue.getBytes()); + return this; } - public XContentBuilder latlon(double lat, double lon) throws IOException { - return startObject().field("lat", lat).field("lon", lon).endObject(); + public XContentBuilder byteSizeField(String rawFieldName, String readableFieldName, long rawSize) throws IOException { + if (humanReadable) { + field(readableFieldName, new ByteSizeValue(rawSize).toString()); + } + field(rawFieldName, rawSize); + return this; + } + + //////////////////////////////////////////////////////////////////////////// + // Raw fields + ////////////////////////////////// + + /** + * Writes a raw field with the value taken from the bytes in the stream + * @deprecated use {@link #rawField(String, InputStream, XContentType)} to avoid content type auto-detection + */ + @Deprecated + public XContentBuilder rawField(String name, InputStream value) throws IOException { + generator.writeRawField(name, value); + return this; + } + + /** + * Writes a raw field with the value taken from the bytes in the stream + */ + public XContentBuilder rawField(String name, InputStream value, XContentType contentType) throws IOException { + generator.writeRawField(name, value, contentType); + return this; + } + + /** + * Writes a raw field with the given bytes as the value + * @deprecated use {@link #rawField(String name, BytesReference, XContentType)} to avoid content type auto-detection + */ + @Deprecated + public XContentBuilder rawField(String name, BytesReference value) throws IOException { + generator.writeRawField(name, value); + return this; + } + + /** + * Writes a raw 
field with the given bytes as the value + */ + public XContentBuilder rawField(String name, BytesReference value, XContentType contentType) throws IOException { + generator.writeRawField(name, value, contentType); + return this; + } + + /** + * Writes a value with the source coming directly from the bytes + * @deprecated use {@link #rawValue(BytesReference, XContentType)} to avoid content type auto-detection + */ + @Deprecated + public XContentBuilder rawValue(BytesReference value) throws IOException { + generator.writeRawValue(value); + return this; + } + + /** + * Writes a value with the source coming directly from the bytes + */ + public XContentBuilder rawValue(BytesReference value, XContentType contentType) throws IOException { + generator.writeRawValue(value, contentType); + return this; } public XContentBuilder copyCurrentStructure(XContentParser parser) throws IOException { @@ -758,9 +1023,9 @@ public XContentBuilder copyCurrentStructure(XContentParser parser) throws IOExce return this; } - public XContentBuilder flush() throws IOException { + @Override + public void flush() throws IOException { generator.flush(); - return this; } @Override @@ -768,7 +1033,7 @@ public void close() { try { generator.close(); } catch (IOException e) { - throw new IllegalStateException("failed to close the XContentBuilder", e); + throw new IllegalStateException("Failed to close the XContentBuilder", e); } } @@ -776,7 +1041,6 @@ public XContentGenerator generator() { return this.generator; } - @Override public BytesReference bytes() { close(); return ((BytesStream) bos).bytes(); @@ -786,156 +1050,48 @@ public BytesReference bytes() { * Returns a string representation of the builder (only applicable for text based xcontent). */ public String string() throws IOException { - close(); return bytes().utf8ToString(); } - - private void writeMap(Map map) throws IOException { - generator.writeStartObject(); - - for (Map.Entry entry : map.entrySet()) { - field(entry.getKey()); - Object value = entry.getValue(); - if (value == null) { - generator.writeNull(); - } else { - writeValue(value); - } - } - generator.writeEndObject(); - } - - @FunctionalInterface - interface Writer { - void write(XContentGenerator g, Object v) throws IOException; + static void ensureNameNotNull(String name) { + ensureNotNull(name, "Field name cannot be null"); } - private static final Map, Writer> MAP; - - static { - Map, Writer> map = new HashMap<>(); - map.put(String.class, (g, v) -> g.writeString((String) v)); - map.put(Integer.class, (g, v) -> g.writeNumber((Integer) v)); - map.put(Long.class, (g, v) -> g.writeNumber((Long) v)); - map.put(Float.class, (g, v) -> g.writeNumber((Float) v)); - map.put(Double.class, (g, v) -> g.writeNumber((Double) v)); - map.put(Byte.class, (g, v) -> g.writeNumber((Byte) v)); - map.put(Short.class, (g, v) -> g.writeNumber((Short) v)); - map.put(Boolean.class, (g, v) -> g.writeBoolean((Boolean) v)); - map.put(GeoPoint.class, (g, v) -> { - g.writeStartObject(); - g.writeNumberField("lat", ((GeoPoint) v).lat()); - g.writeNumberField("lon", ((GeoPoint) v).lon()); - g.writeEndObject(); - }); - map.put(int[].class, (g, v) -> { - g.writeStartArray(); - for (int item : (int[]) v) { - g.writeNumber(item); - } - g.writeEndArray(); - }); - map.put(long[].class, (g, v) -> { - g.writeStartArray(); - for (long item : (long[]) v) { - g.writeNumber(item); - } - g.writeEndArray(); - }); - map.put(float[].class, (g, v) -> { - g.writeStartArray(); - for (float item : (float[]) v) { - g.writeNumber(item); - } - 
g.writeEndArray(); - }); - map.put(double[].class, (g, v) -> { - g.writeStartArray(); - for (double item : (double[])v) { - g.writeNumber(item); - } - g.writeEndArray(); - }); - map.put(byte[].class, (g, v) -> g.writeBinary((byte[]) v)); - map.put(short[].class, (g, v) -> { - g.writeStartArray(); - for (short item : (short[])v) { - g.writeNumber(item); - } - g.writeEndArray(); - }); - map.put(BytesRef.class, (g, v) -> { - BytesRef bytes = (BytesRef) v; - g.writeBinary(bytes.bytes, bytes.offset, bytes.length); - }); - map.put(Text.class, (g, v) -> { - Text text = (Text) v; - if (text.hasString()) { - g.writeString(text.string()); - } else { - BytesRef ref = text.bytes().toBytesRef(); - g.writeUTF8String(ref.bytes, ref.offset, ref.length); - } - }); - MAP = Collections.unmodifiableMap(map); + static void ensureFormatterNotNull(DateTimeFormatter formatter) { + ensureNotNull(formatter, "DateTimeFormatter cannot be null"); } - private void writeValue(Object value) throws IOException { + static void ensureNotNull(Object value, String message) { if (value == null) { - generator.writeNull(); - return; - } - Class type = value.getClass(); - Writer writer = MAP.get(type); - if (writer != null) { - writer.write(generator, value); - } else if (value instanceof Map) { - writeMap((Map) value); - } else if (value instanceof Path) { - //Path implements Iterable and causes endless recursion and a StackOverFlow if treated as an Iterable here - generator.writeString(value.toString()); - } else if (value instanceof Iterable) { - writeIterable((Iterable) value); - } else if (value instanceof Object[]) { - writeObjectArray((Object[]) value); - } else if (value instanceof Date) { - generator.writeString(XContentBuilder.defaultDatePrinter.print(((Date) value).getTime())); - } else if (value instanceof Calendar) { - generator.writeString(XContentBuilder.defaultDatePrinter.print((((Calendar) value)).getTimeInMillis())); - } else if (value instanceof ReadableInstant) { - generator.writeString(XContentBuilder.defaultDatePrinter.print((((ReadableInstant) value)).getMillis())); - } else if (value instanceof BytesReference) { - writeBytesReference((BytesReference) value); - } else if (value instanceof ToXContent) { - ((ToXContent) value).toXContent(this, ToXContent.EMPTY_PARAMS); - } else { - // if this is a "value" object, like enum, DistanceUnit, ..., just toString it - // yea, it can be misleading when toString a Java class, but really, jackson should be used in that case - generator.writeString(value.toString()); - //throw new ElasticsearchIllegalArgumentException("type not supported for generic value conversion: " + type); + throw new IllegalArgumentException(message); } } - private void writeBytesReference(BytesReference value) throws IOException { - BytesRef ref = value.toBytesRef(); - generator.writeBinary(ref.bytes, ref.offset, ref.length); + static void ensureNoSelfReferences(Object value) { + ensureNoSelfReferences(value, Collections.newSetFromMap(new IdentityHashMap<>())); } - private void writeIterable(Iterable value) throws IOException { - generator.writeStartArray(); - for (Object v : value) { - writeValue(v); - } - generator.writeEndArray(); - } + private static void ensureNoSelfReferences(final Object value, final Set ancestors) { + if (value != null) { - private void writeObjectArray(Object[] value) throws IOException { - generator.writeStartArray(); - for (Object v : value) { - writeValue(v); + Iterable it; + if (value instanceof Map) { + it = ((Map) value).values(); + } else if ((value instanceof 
Iterable) && (value instanceof Path == false)) { + it = (Iterable) value; + } else if (value instanceof Object[]) { + it = Arrays.asList((Object[]) value); + } else { + return; + } + + if (ancestors.add(value) == false) { + throw new IllegalArgumentException("Object has already been built and is self-referencing itself"); + } + for (Object o : it) { + ensureNoSelfReferences(o, ancestors); + } + ancestors.remove(value); } - generator.writeEndArray(); } - } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/XContentFactory.java b/core/src/main/java/org/elasticsearch/common/xcontent/XContentFactory.java index 47dd1249d4adb..a5350e3c66212 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/XContentFactory.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/XContentFactory.java @@ -39,7 +39,7 @@ */ public class XContentFactory { - private static int GUESS_HEADER_LENGTH = 20; + private static final int GUESS_HEADER_LENGTH = 20; /** * Returns a content builder using JSON format ({@link org.elasticsearch.common.xcontent.XContentType#JSON}. diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/XContentGenerator.java b/core/src/main/java/org/elasticsearch/common/xcontent/XContentGenerator.java index a2cceae836724..60a188ca6ce58 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/XContentGenerator.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/XContentGenerator.java @@ -20,14 +20,13 @@ package org.elasticsearch.common.xcontent; import org.elasticsearch.common.bytes.BytesReference; + import java.io.Closeable; +import java.io.Flushable; import java.io.IOException; import java.io.InputStream; -/** - * - */ -public interface XContentGenerator extends Closeable { +public interface XContentGenerator extends Closeable, Flushable { XContentType contentType(); @@ -37,68 +36,97 @@ public interface XContentGenerator extends Closeable { void usePrintLineFeedAtEnd(); + void writeStartObject() throws IOException; + + void writeEndObject() throws IOException; + void writeStartArray() throws IOException; void writeEndArray() throws IOException; - void writeStartObject() throws IOException; + void writeFieldName(String name) throws IOException; - void writeEndObject() throws IOException; + void writeNull() throws IOException; - void writeFieldName(String name) throws IOException; + void writeNullField(String name) throws IOException; - void writeString(String text) throws IOException; + void writeBooleanField(String name, boolean value) throws IOException; - void writeString(char[] text, int offset, int len) throws IOException; + void writeBoolean(boolean value) throws IOException; - void writeUTF8String(byte[] text, int offset, int length) throws IOException; + void writeNumberField(String name, double value) throws IOException; - void writeBinary(byte[] data, int offset, int len) throws IOException; + void writeNumber(double value) throws IOException; - void writeBinary(byte[] data) throws IOException; + void writeNumberField(String name, float value) throws IOException; - void writeNumber(int v) throws IOException; + void writeNumber(float value) throws IOException; - void writeNumber(long v) throws IOException; + void writeNumberField(String name, int value) throws IOException; - void writeNumber(double d) throws IOException; + void writeNumber(int value) throws IOException; - void writeNumber(float f) throws IOException; + void writeNumberField(String name, long value) throws IOException; - void writeBoolean(boolean state) throws 
IOException; + void writeNumber(long value) throws IOException; - void writeNull() throws IOException; + void writeNumber(short value) throws IOException; - void writeStringField(String fieldName, String value) throws IOException; + void writeStringField(String name, String value) throws IOException; - void writeBooleanField(String fieldName, boolean value) throws IOException; + void writeString(String value) throws IOException; - void writeNullField(String fieldName) throws IOException; + void writeString(char[] text, int offset, int len) throws IOException; - void writeNumberField(String fieldName, int value) throws IOException; + void writeUTF8String(byte[] value, int offset, int length) throws IOException; - void writeNumberField(String fieldName, long value) throws IOException; + void writeBinaryField(String name, byte[] value) throws IOException; - void writeNumberField(String fieldName, double value) throws IOException; + void writeBinary(byte[] value) throws IOException; - void writeNumberField(String fieldName, float value) throws IOException; + void writeBinary(byte[] value, int offset, int length) throws IOException; - void writeBinaryField(String fieldName, byte[] data) throws IOException; + /** + * Writes a raw field with the value taken from the bytes in the stream + * @deprecated use {@link #writeRawField(String, InputStream, XContentType)} to avoid content type auto-detection + */ + @Deprecated + void writeRawField(String name, InputStream value) throws IOException; - void writeArrayFieldStart(String fieldName) throws IOException; + /** + * Writes a raw field with the value taken from the bytes in the stream + */ + void writeRawField(String name, InputStream value, XContentType xContentType) throws IOException; - void writeObjectFieldStart(String fieldName) throws IOException; + /** + * Writes a raw field with the given bytes as the value + * @deprecated use {@link #writeRawField(String, BytesReference, XContentType)} to avoid content type auto-detection + */ + @Deprecated + void writeRawField(String name, BytesReference value) throws IOException; - void writeRawField(String fieldName, InputStream content) throws IOException; + /** + * Writes a raw field with the given bytes as the value + */ + void writeRawField(String name, BytesReference value, XContentType xContentType) throws IOException; - void writeRawField(String fieldName, BytesReference content) throws IOException; + /** + * Writes a value with the source coming directly from the bytes + * @deprecated use {@link #writeRawValue(BytesReference, XContentType)} to avoid content type auto-detection + */ + @Deprecated + void writeRawValue(BytesReference value) throws IOException; - void writeRawValue(BytesReference content) throws IOException; + /** + * Writes a value with the source coming directly from the bytes + */ + void writeRawValue(BytesReference value, XContentType xContentType) throws IOException; void copyCurrentStructure(XContentParser parser) throws IOException; - void flush() throws IOException; + /** + * Returns {@code true} if this XContentGenerator has been closed. A closed generator can not do any more output. 
+ */ + boolean isClosed(); - @Override - void close() throws IOException; } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/XContentHelper.java b/core/src/main/java/org/elasticsearch/common/xcontent/XContentHelper.java index 2832527a5834c..c3c5c8829ad66 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/XContentHelper.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/XContentHelper.java @@ -44,23 +44,61 @@ @SuppressWarnings("unchecked") public class XContentHelper { - public static XContentParser createParser(BytesReference bytes) throws IOException { + /** + * Creates a parser based on the bytes provided + * @deprecated use {@link #createParser(NamedXContentRegistry, BytesReference, XContentType)} to avoid content type auto-detection + */ + @Deprecated + public static XContentParser createParser(NamedXContentRegistry xContentRegistry, BytesReference bytes) throws IOException { + Compressor compressor = CompressorFactory.compressor(bytes); + if (compressor != null) { + InputStream compressedInput = compressor.streamInput(bytes.streamInput()); + if (compressedInput.markSupported() == false) { + compressedInput = new BufferedInputStream(compressedInput); + } + final XContentType contentType = XContentFactory.xContentType(compressedInput); + return XContentFactory.xContent(contentType).createParser(xContentRegistry, compressedInput); + } else { + return XContentFactory.xContent(bytes).createParser(xContentRegistry, bytes.streamInput()); + } + } + + /** + * Creates a parser for the bytes using the supplied content-type + */ + public static XContentParser createParser(NamedXContentRegistry xContentRegistry, BytesReference bytes, + XContentType xContentType) throws IOException { + Objects.requireNonNull(xContentType); Compressor compressor = CompressorFactory.compressor(bytes); if (compressor != null) { InputStream compressedInput = compressor.streamInput(bytes.streamInput()); if (compressedInput.markSupported() == false) { compressedInput = new BufferedInputStream(compressedInput); } - XContentType contentType = XContentFactory.xContentType(compressedInput); - return XContentFactory.xContent(contentType).createParser(compressedInput); + return XContentFactory.xContent(xContentType).createParser(xContentRegistry, compressedInput); } else { - return XContentFactory.xContent(bytes).createParser(bytes.streamInput()); + return xContentType.xContent().createParser(xContentRegistry, bytes.streamInput()); } } - public static Tuple> convertToMap(BytesReference bytes, boolean ordered) throws ElasticsearchParseException { + /** + * Converts the given bytes into a map that is optionally ordered. + * @deprecated this method relies on auto-detection of content type. Use {@link #convertToMap(BytesReference, boolean, XContentType)} + * instead with the proper {@link XContentType} + */ + @Deprecated + public static Tuple> convertToMap(BytesReference bytes, boolean ordered) + throws ElasticsearchParseException { + return convertToMap(bytes, ordered, null); + } + + /** + * Converts the given bytes into a map that is optionally ordered. The provided {@link XContentType} must be non-null. 
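A short sketch of the new explicit-content-type parsing entry point in `XContentHelper`; the source bytes and the use of the empty registry are assumptions made for the example, not values from the patch:

```java
import java.io.IOException;
import java.util.Map;

import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;

public class ExplicitTypeParsing {
    static Map<String, Object> parseSource() throws IOException {
        BytesReference source = new BytesArray("{\"field\":\"value\"}");
        // The content type is passed explicitly instead of being sniffed from the bytes.
        try (XContentParser parser =
                 XContentHelper.createParser(NamedXContentRegistry.EMPTY, source, XContentType.JSON)) {
            return parser.map();
        }
    }
}
```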
+ */ + public static Tuple> convertToMap(BytesReference bytes, boolean ordered, XContentType xContentType) + throws ElasticsearchParseException { try { - XContentType contentType; + final XContentType contentType; InputStream input; Compressor compressor = CompressorFactory.compressor(bytes); if (compressor != null) { @@ -68,34 +106,68 @@ public static Tuple> convertToMap(BytesReferen if (compressedStreamInput.markSupported() == false) { compressedStreamInput = new BufferedInputStream(compressedStreamInput); } - contentType = XContentFactory.xContentType(compressedStreamInput); input = compressedStreamInput; } else { - contentType = XContentFactory.xContentType(bytes); input = bytes.streamInput(); } - try (XContentParser parser = XContentFactory.xContent(contentType).createParser(input)) { - if (ordered) { - return Tuple.tuple(contentType, parser.mapOrdered()); - } else { - return Tuple.tuple(contentType, parser.map()); - } - } + contentType = xContentType != null ? xContentType : XContentFactory.xContentType(input); + return new Tuple<>(Objects.requireNonNull(contentType), convertToMap(XContentFactory.xContent(contentType), input, ordered)); + } catch (IOException e) { + throw new ElasticsearchParseException("Failed to parse content to map", e); + } + } + + /** + * Convert a string in some {@link XContent} format to a {@link Map}. Throws an {@link ElasticsearchParseException} if there is any + * error. + */ + public static Map convertToMap(XContent xContent, String string, boolean ordered) throws ElasticsearchParseException { + // It is safe to use EMPTY here because this never uses namedObject + try (XContentParser parser = xContent.createParser(NamedXContentRegistry.EMPTY, string)) { + return ordered ? parser.mapOrdered() : parser.map(); + } catch (IOException e) { + throw new ElasticsearchParseException("Failed to parse content to map", e); + } + } + + /** + * Convert a string in some {@link XContent} format to a {@link Map}. Throws an {@link ElasticsearchParseException} if there is any + * error. Note that unlike {@link #convertToMap(BytesReference, boolean)}, this doesn't automatically uncompress the input. + */ + public static Map convertToMap(XContent xContent, InputStream input, boolean ordered) + throws ElasticsearchParseException { + // It is safe to use EMPTY here because this never uses namedObject + try (XContentParser parser = xContent.createParser(NamedXContentRegistry.EMPTY, input)) { + return ordered ? 
parser.mapOrdered() : parser.map(); } catch (IOException e) { throw new ElasticsearchParseException("Failed to parse content to map", e); } } + @Deprecated public static String convertToJson(BytesReference bytes, boolean reformatJson) throws IOException { return convertToJson(bytes, reformatJson, false); } + @Deprecated public static String convertToJson(BytesReference bytes, boolean reformatJson, boolean prettyPrint) throws IOException { - XContentType xContentType = XContentFactory.xContentType(bytes); + return convertToJson(bytes, reformatJson, prettyPrint, XContentFactory.xContentType(bytes)); + } + + public static String convertToJson(BytesReference bytes, boolean reformatJson, XContentType xContentType) throws IOException { + return convertToJson(bytes, reformatJson, false, xContentType); + } + + public static String convertToJson(BytesReference bytes, boolean reformatJson, boolean prettyPrint, XContentType xContentType) + throws IOException { + Objects.requireNonNull(xContentType); if (xContentType == XContentType.JSON && !reformatJson) { return bytes.utf8ToString(); } - try (XContentParser parser = XContentFactory.xContent(xContentType).createParser(bytes.streamInput())) { + + // It is safe to use EMPTY here because this never uses namedObject + try (XContentParser parser = XContentFactory.xContent(xContentType).createParser(NamedXContentRegistry.EMPTY, + bytes.streamInput())) { parser.nextToken(); XContentBuilder builder = XContentFactory.jsonBuilder(); if (prettyPrint) { @@ -194,7 +266,6 @@ public static boolean update(Map source, Map cha * Merges the defaults provided as the second parameter into the content of the first. Only does recursive merge * for inner maps. */ - @SuppressWarnings({"unchecked"}) public static void mergeDefaults(Map content, Map defaults) { for (Map.Entry defaultEntry : defaults.entrySet()) { if (!content.containsKey(defaultEntry.getKey())) { @@ -258,33 +329,36 @@ private static boolean allListValuesAreMapsOfOne(List list) { return true; } - public static void copyCurrentStructure(XContentGenerator generator, XContentParser parser) throws IOException { + /** + * Low level implementation detail of {@link XContentGenerator#copyCurrentStructure(XContentParser)}. 
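Since this hunk positions `copyCurrentStructure` as the low-level copy primitive, a rough transcoding sketch may help; the input JSON and the SMILE target are assumptions for illustration only:

```java
import java.io.IOException;

import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.elasticsearch.common.xcontent.smile.SmileXContent;

public class TranscodeSketch {
    static BytesReference jsonToSmile() throws IOException {
        try (XContentParser parser = JsonXContent.jsonXContent
                     .createParser(NamedXContentRegistry.EMPTY, "{\"a\":[1,2,3]}");
             XContentBuilder smile = XContentBuilder.builder(SmileXContent.smileXContent)) {
            parser.nextToken();                  // position the parser on START_OBJECT
            smile.copyCurrentStructure(parser);  // delegates to XContentGenerator#copyCurrentStructure
            return smile.bytes();
        }
    }
}
```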
+ */ + public static void copyCurrentStructure(XContentGenerator destination, XContentParser parser) throws IOException { XContentParser.Token token = parser.currentToken(); // Let's handle field-name separately first if (token == XContentParser.Token.FIELD_NAME) { - generator.writeFieldName(parser.currentName()); + destination.writeFieldName(parser.currentName()); token = parser.nextToken(); // fall-through to copy the associated value } switch (token) { case START_ARRAY: - generator.writeStartArray(); + destination.writeStartArray(); while (parser.nextToken() != XContentParser.Token.END_ARRAY) { - copyCurrentStructure(generator, parser); + copyCurrentStructure(destination, parser); } - generator.writeEndArray(); + destination.writeEndArray(); break; case START_OBJECT: - generator.writeStartObject(); + destination.writeStartObject(); while (parser.nextToken() != XContentParser.Token.END_OBJECT) { - copyCurrentStructure(generator, parser); + copyCurrentStructure(destination, parser); } - generator.writeEndObject(); + destination.writeEndObject(); break; default: // others are simple: - copyCurrentEvent(generator, parser); + copyCurrentEvent(destination, parser); } } @@ -342,7 +416,10 @@ public static void copyCurrentEvent(XContentGenerator generator, XContentParser /** * Writes a "raw" (bytes) field, handling cases where the bytes are compressed, and tries to optimize writing using * {@link XContentBuilder#rawField(String, org.elasticsearch.common.bytes.BytesReference)}. + * @deprecated use {@link #writeRawField(String, BytesReference, XContentType, XContentBuilder, Params)} to avoid content type + * auto-detection */ + @Deprecated public static void writeRawField(String field, BytesReference source, XContentBuilder builder, ToXContent.Params params) throws IOException { Compressor compressor = CompressorFactory.compressor(source); if (compressor != null) { @@ -352,4 +429,48 @@ public static void writeRawField(String field, BytesReference source, XContentBu builder.rawField(field, source); } } + + /** + * Writes a "raw" (bytes) field, handling cases where the bytes are compressed, and tries to optimize writing using + * {@link XContentBuilder#rawField(String, org.elasticsearch.common.bytes.BytesReference, XContentType)}. + */ + public static void writeRawField(String field, BytesReference source, XContentType xContentType, XContentBuilder builder, + ToXContent.Params params) throws IOException { + Objects.requireNonNull(xContentType); + Compressor compressor = CompressorFactory.compressor(source); + if (compressor != null) { + InputStream compressedStreamInput = compressor.streamInput(source.streamInput()); + builder.rawField(field, compressedStreamInput, xContentType); + } else { + builder.rawField(field, source, xContentType); + } + } + + /** + * Returns the bytes that represent the XContent output of the provided {@link ToXContent} object, using the provided + * {@link XContentType}. Wraps the output into a new anonymous object according to the value returned + * by the {@link ToXContent#isFragment()} method returns. + */ + public static BytesReference toXContent(ToXContent toXContent, XContentType xContentType, boolean humanReadable) throws IOException { + return toXContent(toXContent, xContentType, ToXContent.EMPTY_PARAMS, humanReadable); + } + + /** + * Returns the bytes that represent the XContent output of the provided {@link ToXContent} object, using the provided + * {@link XContentType}. 
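A minimal sketch of the new `toXContent` helper, assuming the default `ToXContent#isFragment()` implementation returns `true` (which is what the wrapping logic above relies on); the lambda stands in for any real `ToXContent` such as a response object:

```java
import java.io.IOException;

import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentType;

public class ToXContentSketch {
    static BytesReference render() throws IOException {
        // Stand-in fragment that only writes a single field.
        ToXContent fragment = (builder, params) -> builder.field("took", 5);
        // Assuming isFragment() defaults to true, the helper wraps the field in an
        // enclosing object, producing {"took":5}.
        return XContentHelper.toXContent(fragment, XContentType.JSON, false);
    }
}
```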
Wraps the output into a new anonymous object according to the value returned + * by the {@link ToXContent#isFragment()} method returns. + */ + public static BytesReference toXContent(ToXContent toXContent, XContentType xContentType, Params params, boolean humanReadable) throws IOException { + try (XContentBuilder builder = XContentBuilder.builder(xContentType.xContent())) { + builder.humanReadable(humanReadable); + if (toXContent.isFragment()) { + builder.startObject(); + } + toXContent.toXContent(builder, params); + if (toXContent.isFragment()) { + builder.endObject(); + } + return builder.bytes(); + } + } } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/XContentParser.java b/core/src/main/java/org/elasticsearch/common/xcontent/XContentParser.java index f8513828636d1..4947e1d5c775c 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/XContentParser.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/XContentParser.java @@ -33,7 +33,7 @@ * *
      *     XContentType xContentType = XContentType.JSON;
    - *     XContentParser parser = xContentType.xContent().createParser("{\"key\" : \"value\"}");
    + *     XContentParser parser = xContentType.xContent().createParser(NamedXContentRegistry.EMPTY, "{\"key\" : \"value\"}");
      * </pre>
    */ public interface XContentParser extends Releasable { @@ -131,6 +131,10 @@ enum NumberType { Map mapOrdered() throws IOException; + Map mapStrings() throws IOException; + + Map mapStringsOrdered() throws IOException; + List list() throws IOException; List listOrderedMap() throws IOException; @@ -245,5 +249,16 @@ enum NumberType { */ XContentLocation getTokenLocation(); + // TODO remove context entirely when it isn't needed + /** + * Parse an object by name. + */ + T namedObject(Class categoryClass, String name, Object context) throws IOException; + + /** + * The registry used to resolve {@link #namedObject(Class, String, Object)}. Use this when building a sub-parser from this parser. + */ + NamedXContentRegistry getXContentRegistry(); + boolean isClosed(); } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/XContentParserUtils.java b/core/src/main/java/org/elasticsearch/common/xcontent/XContentParserUtils.java new file mode 100644 index 0000000000000..e28b44b42c5a2 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/xcontent/XContentParserUtils.java @@ -0,0 +1,153 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.xcontent; + +import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.bytes.BytesArray; +import org.elasticsearch.common.xcontent.XContentParser.Token; + +import java.io.IOException; +import java.util.Locale; +import java.util.function.Consumer; +import java.util.function.Supplier; + +/** + * A set of static methods to get {@link Token} from {@link XContentParser} + * while checking for their types and throw {@link ParsingException} if needed. 
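To make the role of these assertion helpers concrete, here is a hypothetical `fromXContent` routine built on them; the `{"status": <number>}` shape and the class name are invented for the example:

```java
import java.io.IOException;

import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentParserUtils;

public class StatusParser {
    // Parses {"status": <number>} and fails with a ParsingException on any other shape.
    static int fromXContent(XContentParser parser) throws IOException {
        XContentParserUtils.ensureExpectedToken(
                XContentParser.Token.START_OBJECT, parser.nextToken(), parser::getTokenLocation);
        XContentParserUtils.ensureFieldName(parser, parser.nextToken(), "status");
        XContentParserUtils.ensureExpectedToken(
                XContentParser.Token.VALUE_NUMBER, parser.nextToken(), parser::getTokenLocation);
        int status = parser.intValue();
        XContentParserUtils.ensureExpectedToken(
                XContentParser.Token.END_OBJECT, parser.nextToken(), parser::getTokenLocation);
        return status;
    }
}
```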
+ */ +public final class XContentParserUtils { + + private XContentParserUtils() { + } + + /** + * Makes sure that current token is of type {@link XContentParser.Token#FIELD_NAME} and the the field name is equal to the provided one + * @throws ParsingException if the token is not of type {@link XContentParser.Token#FIELD_NAME} or is not equal to the given field name + */ + public static void ensureFieldName(XContentParser parser, Token token, String fieldName) throws IOException { + ensureExpectedToken(Token.FIELD_NAME, token, parser::getTokenLocation); + String currentName = parser.currentName(); + if (currentName.equals(fieldName) == false) { + String message = "Failed to parse object: expecting field with name [%s] but found [%s]"; + throw new ParsingException(parser.getTokenLocation(), String.format(Locale.ROOT, message, fieldName, currentName)); + } + } + + /** + * @throws ParsingException with a "unknown field found" reason + */ + public static void throwUnknownField(String field, XContentLocation location) { + String message = "Failed to parse object: unknown field [%s] found"; + throw new ParsingException(location, String.format(Locale.ROOT, message, field)); + } + + /** + * @throws ParsingException with a "unknown token found" reason + */ + public static void throwUnknownToken(XContentParser.Token token, XContentLocation location) { + String message = "Failed to parse object: unexpected token [%s] found"; + throw new ParsingException(location, String.format(Locale.ROOT, message, token)); + } + + /** + * Makes sure that provided token is of the expected type + * + * @throws ParsingException if the token is not equal to the expected type + */ + public static void ensureExpectedToken(Token expected, Token actual, Supplier location) { + if (actual != expected) { + String message = "Failed to parse object: expecting token of type [%s] but found [%s]"; + throw new ParsingException(location.get(), String.format(Locale.ROOT, message, expected, actual)); + } + } + + /** + * Parse the current token depending on its token type. The following token types will be + * parsed by the corresponding parser methods: + *
+ *     <ul>
+ *         <li>XContentParser.Token.VALUE_STRING: parser.text()</li>
+ *         <li>XContentParser.Token.VALUE_NUMBER: parser.numberValue()</li>
+ *         <li>XContentParser.Token.VALUE_BOOLEAN: parser.booleanValue()</li>
+ *         <li>XContentParser.Token.VALUE_EMBEDDED_OBJECT: parser.binaryValue()</li>
+ *     </ul>
    + * + * @throws ParsingException if the token none of the allowed values + */ + public static Object parseStoredFieldsValue(XContentParser parser) throws IOException { + XContentParser.Token token = parser.currentToken(); + Object value = null; + if (token == XContentParser.Token.VALUE_STRING) { + //binary values will be parsed back and returned as base64 strings when reading from json and yaml + value = parser.text(); + } else if (token == XContentParser.Token.VALUE_NUMBER) { + value = parser.numberValue(); + } else if (token == XContentParser.Token.VALUE_BOOLEAN) { + value = parser.booleanValue(); + } else if (token == XContentParser.Token.VALUE_EMBEDDED_OBJECT) { + //binary values will be parsed back and returned as BytesArray when reading from cbor and smile + value = new BytesArray(parser.binaryValue()); + } else { + throwUnknownToken(token, parser.getTokenLocation()); + } + return value; + } + + /** + * This method expects that the current field name is the concatenation of a type, a delimiter and a name + * (ex: terms#foo where "terms" refers to the type of a registered {@link NamedXContentRegistry.Entry}, + * "#" is the delimiter and "foo" the name of the object to parse). + * + * It also expected that following this field name is either an Object or an array xContent structure and + * the cursor points to the start token of this structure. + * + * The method splits the field's name to extract the type and name and then parses the object + * using the {@link XContentParser#namedObject(Class, String, Object)} method. + * + * @param parser the current {@link XContentParser} + * @param delimiter the delimiter to use to splits the field's name + * @param objectClass the object class of the object to parse + * @param consumer something to consume the parsed object + * @param the type of the object to parse + * @throws IOException if anything went wrong during parsing or if the type or name cannot be derived + * from the field's name + * @throws ParsingException if the parser isn't positioned on either START_OBJECT or START_ARRAY at the beginning + */ + public static void parseTypedKeysObject(XContentParser parser, String delimiter, Class objectClass, Consumer consumer) + throws IOException { + if (parser.currentToken() != XContentParser.Token.START_OBJECT && parser.currentToken() != XContentParser.Token.START_ARRAY) { + throwUnknownToken(parser.currentToken(), parser.getTokenLocation()); + } + String currentFieldName = parser.currentName(); + if (Strings.hasLength(currentFieldName)) { + int position = currentFieldName.indexOf(delimiter); + if (position > 0) { + String type = currentFieldName.substring(0, position); + String name = currentFieldName.substring(position + 1); + consumer.accept(parser.namedObject(objectClass, type, name)); + return; + } + // if we didn't find a delimiter we ignore the object or array for forward compatibility instead of throwing an error + parser.skipChildren(); + } else { + throw new ParsingException(parser.getTokenLocation(), "Failed to parse object: empty key"); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/XContentType.java b/core/src/main/java/org/elasticsearch/common/xcontent/XContentType.java index 296f9d2aedde7..40caa6e911087 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/XContentType.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/XContentType.java @@ -21,6 +21,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; 
+import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.cbor.CborXContent; import org.elasticsearch.common.xcontent.json.JsonXContent; import org.elasticsearch.common.xcontent.smile.SmileXContent; @@ -28,18 +29,19 @@ import java.io.IOException; import java.util.Locale; +import java.util.Objects; /** * The content type of {@link org.elasticsearch.common.xcontent.XContent}. */ -public enum XContentType { +public enum XContentType implements Writeable { /** * A JSON based content type. */ JSON(0) { @Override - protected String mediaTypeWithoutParameters() { + public String mediaTypeWithoutParameters() { return "application/json"; } @@ -63,7 +65,7 @@ public XContent xContent() { */ SMILE(1) { @Override - protected String mediaTypeWithoutParameters() { + public String mediaTypeWithoutParameters() { return "application/smile"; } @@ -82,7 +84,7 @@ public XContent xContent() { */ YAML(2) { @Override - protected String mediaTypeWithoutParameters() { + public String mediaTypeWithoutParameters() { return "application/yaml"; } @@ -101,7 +103,7 @@ public XContent xContent() { */ CBOR(3) { @Override - protected String mediaTypeWithoutParameters() { + public String mediaTypeWithoutParameters() { return "application/cbor"; } @@ -116,23 +118,46 @@ public XContent xContent() { } }; + /** + * Accepts either a format string, which is equivalent to {@link XContentType#shortName()} or a media type that optionally has + * parameters and attempts to match the value to an {@link XContentType}. The comparisons are done in lower case format and this method + * also supports a wildcard accept for {@code application/*}. This method can be used to parse the {@code Accept} HTTP header or a + * format query string parameter. This method will return {@code null} if no match is found + */ public static XContentType fromMediaTypeOrFormat(String mediaType) { if (mediaType == null) { return null; } for (XContentType type : values()) { - if (isSameMediaTypeAs(mediaType, type)) { + if (isSameMediaTypeOrFormatAs(mediaType, type)) { return type; } } - if(mediaType.toLowerCase(Locale.ROOT).startsWith("application/*")) { + final String lowercaseMediaType = mediaType.toLowerCase(Locale.ROOT); + if (lowercaseMediaType.startsWith("application/*")) { return JSON; } return null; } - private static boolean isSameMediaTypeAs(String stringType, XContentType type) { + /** + * Attempts to match the given media type with the known {@link XContentType} values. This match is done in a case-insensitive manner. + * The provided media type should not include any parameters. This method is suitable for parsing part of the {@code Content-Type} + * HTTP header. 
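The two lookup methods documented in this hunk differ in how forgiving they are. A few illustrative calls (the header strings are examples only, not values from the patch):

```java
import org.elasticsearch.common.xcontent.XContentType;

public class ContentTypeLookup {
    public static void main(String[] args) {
        // Accept header / ?format= style lookup: parameters and short names are tolerated.
        XContentType a = XContentType.fromMediaTypeOrFormat("application/json; charset=UTF-8"); // JSON
        XContentType b = XContentType.fromMediaTypeOrFormat("yaml");                            // YAML
        // Content-Type style lookup: exact media type, no parameters.
        XContentType c = XContentType.fromMediaType("application/smile");                       // SMILE
        XContentType d = XContentType.fromMediaType("text/plain");                              // null
        System.out.println(a + " " + b + " " + c + " " + d);
    }
}
```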
This method will return {@code null} if no match is found + */ + public static XContentType fromMediaType(String mediaType) { + final String lowercaseMediaType = Objects.requireNonNull(mediaType, "mediaType cannot be null").toLowerCase(Locale.ROOT); + for (XContentType type : values()) { + if (type.mediaTypeWithoutParameters().equals(lowercaseMediaType)) { + return type; + } + } + + return null; + } + + private static boolean isSameMediaTypeOrFormatAs(String stringType, XContentType type) { return type.mediaTypeWithoutParameters().equalsIgnoreCase(stringType) || stringType.toLowerCase(Locale.ROOT).startsWith(type.mediaTypeWithoutParameters().toLowerCase(Locale.ROOT) + ";") || type.shortName().equalsIgnoreCase(stringType); @@ -156,7 +181,7 @@ public String mediaType() { public abstract XContent xContent(); - protected abstract String mediaTypeWithoutParameters(); + public abstract String mediaTypeWithoutParameters(); public static XContentType readFrom(StreamInput in) throws IOException { int index = in.readVInt(); @@ -168,7 +193,8 @@ public static XContentType readFrom(StreamInput in) throws IOException { throw new IllegalStateException("Unknown XContentType with index [" + index + "]"); } - public static void writeTo(XContentType contentType, StreamOutput out) throws IOException { - out.writeVInt(contentType.index); + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeVInt(index); } } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContent.java b/core/src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContent.java index 4224b5328a712..e9518cf11305a 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContent.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContent.java @@ -25,6 +25,7 @@ import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.FastStringReader; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentGenerator; @@ -76,33 +77,33 @@ public XContentGenerator createGenerator(OutputStream os, Set includes, } @Override - public XContentParser createParser(String content) throws IOException { - return new CborXContentParser(cborFactory.createParser(new FastStringReader(content))); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, String content) throws IOException { + return new CborXContentParser(xContentRegistry, cborFactory.createParser(new FastStringReader(content))); } @Override - public XContentParser createParser(InputStream is) throws IOException { - return new CborXContentParser(cborFactory.createParser(is)); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, InputStream is) throws IOException { + return new CborXContentParser(xContentRegistry, cborFactory.createParser(is)); } @Override - public XContentParser createParser(byte[] data) throws IOException { - return new CborXContentParser(cborFactory.createParser(data)); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, byte[] data) throws IOException { + return new CborXContentParser(xContentRegistry, cborFactory.createParser(data)); } @Override - public XContentParser createParser(byte[] data, int offset, int length) throws IOException { - return new 
CborXContentParser(cborFactory.createParser(data, offset, length)); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, byte[] data, int offset, int length) throws IOException { + return new CborXContentParser(xContentRegistry, cborFactory.createParser(data, offset, length)); } @Override - public XContentParser createParser(BytesReference bytes) throws IOException { - return createParser(bytes.streamInput()); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, BytesReference bytes) throws IOException { + return createParser(xContentRegistry, bytes.streamInput()); } @Override - public XContentParser createParser(Reader reader) throws IOException { - return new CborXContentParser(cborFactory.createParser(reader)); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, Reader reader) throws IOException { + return new CborXContentParser(xContentRegistry, cborFactory.createParser(reader)); } } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContentGenerator.java b/core/src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContentGenerator.java index e63a928109d6e..119cb5c98c6c4 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContentGenerator.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContentGenerator.java @@ -20,20 +20,14 @@ package org.elasticsearch.common.xcontent.cbor; import com.fasterxml.jackson.core.JsonGenerator; -import org.elasticsearch.common.Strings; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.json.JsonXContentGenerator; import java.io.OutputStream; -import java.util.Collections; import java.util.Set; public class CborXContentGenerator extends JsonXContentGenerator { - public CborXContentGenerator(JsonGenerator jsonGenerator, OutputStream os) { - this(jsonGenerator, os, Collections.emptySet(), Collections.emptySet()); - } - public CborXContentGenerator(JsonGenerator jsonGenerator, OutputStream os, Set includes, Set excludes) { super(jsonGenerator, os, includes, excludes); } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContentParser.java b/core/src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContentParser.java index ed10ea47c0e87..b6610b5f3adea 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContentParser.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContentParser.java @@ -20,6 +20,8 @@ package org.elasticsearch.common.xcontent.cbor; import com.fasterxml.jackson.core.JsonParser; + +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.json.JsonXContentParser; @@ -28,8 +30,8 @@ */ public class CborXContentParser extends JsonXContentParser { - public CborXContentParser(JsonParser parser) { - super(parser); + public CborXContentParser(NamedXContentRegistry xContentRegistry, JsonParser parser) { + super(xContentRegistry, parser); } @Override diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContent.java b/core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContent.java index c8afb94f7815c..816cf5e2659eb 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContent.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContent.java @@ -25,6 +25,7 @@ import com.fasterxml.jackson.core.JsonParser; import 
org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.FastStringReader; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentGenerator; @@ -98,32 +99,32 @@ public XContentGenerator createGenerator(OutputStream os, Set includes, } @Override - public XContentParser createParser(String content) throws IOException { - return new JsonXContentParser(jsonFactory.createParser(new FastStringReader(content))); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, String content) throws IOException { + return new JsonXContentParser(xContentRegistry, jsonFactory.createParser(new FastStringReader(content))); } @Override - public XContentParser createParser(InputStream is) throws IOException { - return new JsonXContentParser(jsonFactory.createParser(is)); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, InputStream is) throws IOException { + return new JsonXContentParser(xContentRegistry, jsonFactory.createParser(is)); } @Override - public XContentParser createParser(byte[] data) throws IOException { - return new JsonXContentParser(jsonFactory.createParser(data)); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, byte[] data) throws IOException { + return new JsonXContentParser(xContentRegistry, jsonFactory.createParser(data)); } @Override - public XContentParser createParser(byte[] data, int offset, int length) throws IOException { - return new JsonXContentParser(jsonFactory.createParser(data, offset, length)); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, byte[] data, int offset, int length) throws IOException { + return new JsonXContentParser(xContentRegistry, jsonFactory.createParser(data, offset, length)); } @Override - public XContentParser createParser(BytesReference bytes) throws IOException { - return createParser(bytes.streamInput()); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, BytesReference bytes) throws IOException { + return createParser(xContentRegistry, bytes.streamInput()); } @Override - public XContentParser createParser(Reader reader) throws IOException { - return new JsonXContentParser(jsonFactory.createParser(reader)); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, Reader reader) throws IOException { + return new JsonXContentParser(xContentRegistry, jsonFactory.createParser(reader)); } } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContentGenerator.java b/core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContentGenerator.java index 4a393b9dd1040..1e09f8334f772 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContentGenerator.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContentGenerator.java @@ -31,6 +31,7 @@ import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.Streams; import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContent; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentGenerator; @@ -43,13 +44,9 @@ import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; -import java.util.Collections; 
import java.util.Objects; import java.util.Set; -/** - * - */ public class JsonXContentGenerator implements XContentGenerator { /** Generator used to write content **/ @@ -75,10 +72,6 @@ public class JsonXContentGenerator implements XContentGenerator { private static final DefaultPrettyPrinter.Indenter INDENTER = new DefaultIndenter(" ", LF.getValue()); private boolean prettyPrint = false; - public JsonXContentGenerator(JsonGenerator jsonGenerator, OutputStream os) { - this(jsonGenerator, os, Collections.emptySet(), Collections.emptySet()); - } - public JsonXContentGenerator(JsonGenerator jsonGenerator, OutputStream os, Set includes, Set excludes) { Objects.requireNonNull(includes, "Including filters must not be null"); Objects.requireNonNull(excludes, "Excluding filters must not be null"); @@ -130,16 +123,6 @@ public void usePrintLineFeedAtEnd() { writeLineFeedAtEnd = true; } - @Override - public void writeStartArray() throws IOException { - generator.writeStartArray(); - } - - @Override - public void writeEndArray() throws IOException { - generator.writeEndArray(); - } - private boolean isFiltered() { return filter != null; } @@ -184,118 +167,124 @@ public void writeEndObject() throws IOException { generator.writeEndObject(); } + @Override - public void writeFieldName(String name) throws IOException { - generator.writeFieldName(name); + public void writeStartArray() throws IOException { + generator.writeStartArray(); } @Override - public void writeString(String text) throws IOException { - generator.writeString(text); + public void writeEndArray() throws IOException { + generator.writeEndArray(); } @Override - public void writeString(char[] text, int offset, int len) throws IOException { - generator.writeString(text, offset, len); + public void writeFieldName(String name) throws IOException { + generator.writeFieldName(name); } @Override - public void writeUTF8String(byte[] text, int offset, int length) throws IOException { - generator.writeUTF8String(text, offset, length); + public void writeNull() throws IOException { + generator.writeNull(); } @Override - public void writeBinary(byte[] data, int offset, int len) throws IOException { - generator.writeBinary(data, offset, len); + public void writeNullField(String name) throws IOException { + generator.writeNullField(name); } @Override - public void writeBinary(byte[] data) throws IOException { - generator.writeBinary(data); + public void writeBooleanField(String name, boolean value) throws IOException { + generator.writeBooleanField(name, value); } @Override - public void writeNumber(int v) throws IOException { - generator.writeNumber(v); + public void writeBoolean(boolean value) throws IOException { + generator.writeBoolean(value); } @Override - public void writeNumber(long v) throws IOException { - generator.writeNumber(v); + public void writeNumberField(String name, double value) throws IOException { + generator.writeNumberField(name, value); } @Override - public void writeNumber(double d) throws IOException { - generator.writeNumber(d); + public void writeNumber(double value) throws IOException { + generator.writeNumber(value); } @Override - public void writeNumber(float f) throws IOException { - generator.writeNumber(f); + public void writeNumberField(String name, float value) throws IOException { + generator.writeNumberField(name, value); } @Override - public void writeBoolean(boolean state) throws IOException { - generator.writeBoolean(state); + public void writeNumber(float value) throws IOException { + 
generator.writeNumber(value); } @Override - public void writeNull() throws IOException { - generator.writeNull(); + public void writeNumberField(String name, int value) throws IOException { + generator.writeNumberField(name, value); } @Override - public void writeStringField(String fieldName, String value) throws IOException { - generator.writeStringField(fieldName, value); + public void writeNumber(int value) throws IOException { + generator.writeNumber(value); } @Override - public void writeBooleanField(String fieldName, boolean value) throws IOException { - generator.writeBooleanField(fieldName, value); + public void writeNumberField(String name, long value) throws IOException { + generator.writeNumberField(name, value); } @Override - public void writeNullField(String fieldName) throws IOException { - generator.writeNullField(fieldName); + public void writeNumber(long value) throws IOException { + generator.writeNumber(value); } @Override - public void writeNumberField(String fieldName, int value) throws IOException { - generator.writeNumberField(fieldName, value); + public void writeNumber(short value) throws IOException { + generator.writeNumber(value); } @Override - public void writeNumberField(String fieldName, long value) throws IOException { - generator.writeNumberField(fieldName, value); + public void writeStringField(String name, String value) throws IOException { + generator.writeStringField(name, value); } @Override - public void writeNumberField(String fieldName, double value) throws IOException { - generator.writeNumberField(fieldName, value); + public void writeString(String value) throws IOException { + generator.writeString(value); } @Override - public void writeNumberField(String fieldName, float value) throws IOException { - generator.writeNumberField(fieldName, value); + public void writeString(char[] value, int offset, int len) throws IOException { + generator.writeString(value, offset, len); } @Override - public void writeBinaryField(String fieldName, byte[] data) throws IOException { - generator.writeBinaryField(fieldName, data); + public void writeUTF8String(byte[] value, int offset, int length) throws IOException { + generator.writeUTF8String(value, offset, length); } @Override - public void writeArrayFieldStart(String fieldName) throws IOException { - generator.writeArrayFieldStart(fieldName); + public void writeBinaryField(String name, byte[] value) throws IOException { + generator.writeBinaryField(name, value); } @Override - public void writeObjectFieldStart(String fieldName) throws IOException { - generator.writeObjectFieldStart(fieldName); + public void writeBinary(byte[] value) throws IOException { + generator.writeBinary(value); } - private void writeStartRaw(String fieldName) throws IOException { - writeFieldName(fieldName); + @Override + public void writeBinary(byte[] value, int offset, int len) throws IOException { + generator.writeBinary(value, offset, len); + } + + private void writeStartRaw(String name) throws IOException { + writeFieldName(name); generator.writeRaw(':'); } @@ -309,7 +298,7 @@ public void writeEndRaw() { } @Override - public void writeRawField(String fieldName, InputStream content) throws IOException { + public void writeRawField(String name, InputStream content) throws IOException { if (content.markSupported() == false) { // needed for the XContentFactory.xContentType call content = new BufferedInputStream(content); @@ -318,14 +307,20 @@ public void writeRawField(String fieldName, InputStream content) throws IOExcept if (contentType 
== null) { throw new IllegalArgumentException("Can't write raw bytes whose xcontent-type can't be guessed"); } + writeRawField(name, content, contentType); + } + + @Override + public void writeRawField(String name, InputStream content, XContentType contentType) throws IOException { if (mayWriteRawData(contentType) == false) { - try (XContentParser parser = XContentFactory.xContent(contentType).createParser(content)) { + // EMPTY is safe here because we never call namedObject when writing raw data + try (XContentParser parser = XContentFactory.xContent(contentType).createParser(NamedXContentRegistry.EMPTY, content)) { parser.nextToken(); - writeFieldName(fieldName); + writeFieldName(name); copyCurrentStructure(parser); } } else { - writeStartRaw(fieldName); + writeStartRaw(name); flush(); Streams.copy(content, os); writeEndRaw(); @@ -333,16 +328,21 @@ public void writeRawField(String fieldName, InputStream content) throws IOExcept } @Override - public final void writeRawField(String fieldName, BytesReference content) throws IOException { + public final void writeRawField(String name, BytesReference content) throws IOException { XContentType contentType = XContentFactory.xContentType(content); if (contentType == null) { throw new IllegalArgumentException("Can't write raw bytes whose xcontent-type can't be guessed"); } + writeRawField(name, content, contentType); + } + + @Override + public final void writeRawField(String name, BytesReference content, XContentType contentType) throws IOException { if (mayWriteRawData(contentType) == false) { - writeFieldName(fieldName); + writeFieldName(name); copyRawValue(content, contentType.xContent()); } else { - writeStartRaw(fieldName); + writeStartRaw(name); flush(); content.writeTo(os); writeEndRaw(); @@ -355,6 +355,11 @@ public final void writeRawValue(BytesReference content) throws IOException { if (contentType == null) { throw new IllegalArgumentException("Can't write raw bytes whose xcontent-type can't be guessed"); } + writeRawValue(content, contentType); + } + + @Override + public final void writeRawValue(BytesReference content, XContentType contentType) throws IOException { if (mayWriteRawData(contentType) == false) { copyRawValue(content, contentType.xContent()); } else { @@ -385,8 +390,9 @@ protected boolean supportsRawWrites() { } protected void copyRawValue(BytesReference content, XContent xContent) throws IOException { + // EMPTY is safe here because we never call namedObject try (StreamInput input = content.streamInput(); - XContentParser parser = xContent.createParser(input)) { + XContentParser parser = xContent.createParser(NamedXContentRegistry.EMPTY, input)) { copyCurrentStructure(parser); } } @@ -416,7 +422,7 @@ public void close() throws IOException { } JsonStreamContext context = generator.getOutputContext(); if ((context != null) && (context.inRoot() == false)) { - throw new IOException("unclosed object or array found"); + throw new IOException("Unclosed object or array found"); } if (writeLineFeedAtEnd) { flush(); @@ -426,4 +432,8 @@ public void close() throws IOException { generator.close(); } + @Override + public boolean isClosed() { + return generator.isClosed(); + } } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContentParser.java b/core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContentParser.java index 5728e6035e6c2..0a570b7685e39 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContentParser.java +++ 
b/core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContentParser.java @@ -22,8 +22,10 @@ import com.fasterxml.jackson.core.JsonLocation; import com.fasterxml.jackson.core.JsonParser; import com.fasterxml.jackson.core.JsonToken; + import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.IOUtils; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentLocation; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.support.AbstractXContentParser; @@ -38,7 +40,8 @@ public class JsonXContentParser extends AbstractXContentParser { final JsonParser parser; - public JsonXContentParser(JsonParser parser) { + public JsonXContentParser(NamedXContentRegistry xContentRegistry, JsonParser parser) { + super(xContentRegistry); this.parser = parser; } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/smile/SmileXContent.java b/core/src/main/java/org/elasticsearch/common/xcontent/smile/SmileXContent.java index 94ac9b9435626..a11bdda931cfb 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/smile/SmileXContent.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/smile/SmileXContent.java @@ -25,6 +25,7 @@ import com.fasterxml.jackson.dataformat.smile.SmileGenerator; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.FastStringReader; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentGenerator; @@ -77,32 +78,32 @@ public XContentGenerator createGenerator(OutputStream os, Set includes, } @Override - public XContentParser createParser(String content) throws IOException { - return new SmileXContentParser(smileFactory.createParser(new FastStringReader(content))); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, String content) throws IOException { + return new SmileXContentParser(xContentRegistry, smileFactory.createParser(new FastStringReader(content))); } @Override - public XContentParser createParser(InputStream is) throws IOException { - return new SmileXContentParser(smileFactory.createParser(is)); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, InputStream is) throws IOException { + return new SmileXContentParser(xContentRegistry, smileFactory.createParser(is)); } @Override - public XContentParser createParser(byte[] data) throws IOException { - return new SmileXContentParser(smileFactory.createParser(data)); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, byte[] data) throws IOException { + return new SmileXContentParser(xContentRegistry, smileFactory.createParser(data)); } @Override - public XContentParser createParser(byte[] data, int offset, int length) throws IOException { - return new SmileXContentParser(smileFactory.createParser(data, offset, length)); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, byte[] data, int offset, int length) throws IOException { + return new SmileXContentParser(xContentRegistry, smileFactory.createParser(data, offset, length)); } @Override - public XContentParser createParser(BytesReference bytes) throws IOException { - return createParser(bytes.streamInput()); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, BytesReference bytes) throws IOException { + return 
createParser(xContentRegistry, bytes.streamInput()); } @Override - public XContentParser createParser(Reader reader) throws IOException { - return new SmileXContentParser(smileFactory.createParser(reader)); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, Reader reader) throws IOException { + return new SmileXContentParser(xContentRegistry, smileFactory.createParser(reader)); } } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/smile/SmileXContentGenerator.java b/core/src/main/java/org/elasticsearch/common/xcontent/smile/SmileXContentGenerator.java index afa420805f743..f368c0e383f31 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/smile/SmileXContentGenerator.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/smile/SmileXContentGenerator.java @@ -20,20 +20,14 @@ package org.elasticsearch.common.xcontent.smile; import com.fasterxml.jackson.core.JsonGenerator; -import org.elasticsearch.common.Strings; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.json.JsonXContentGenerator; import java.io.OutputStream; -import java.util.Collections; import java.util.Set; public class SmileXContentGenerator extends JsonXContentGenerator { - public SmileXContentGenerator(JsonGenerator jsonGenerator, OutputStream os) { - this(jsonGenerator, os, Collections.emptySet(), Collections.emptySet()); - } - public SmileXContentGenerator(JsonGenerator jsonGenerator, OutputStream os, Set includes, Set excludes) { super(jsonGenerator, os, includes, excludes); } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/smile/SmileXContentParser.java b/core/src/main/java/org/elasticsearch/common/xcontent/smile/SmileXContentParser.java index 2bbf99db27de8..0ed4a27850598 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/smile/SmileXContentParser.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/smile/SmileXContentParser.java @@ -20,6 +20,8 @@ package org.elasticsearch.common.xcontent.smile; import com.fasterxml.jackson.core.JsonParser; + +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.json.JsonXContentParser; @@ -28,8 +30,8 @@ */ public class SmileXContentParser extends JsonXContentParser { - public SmileXContentParser(JsonParser parser) { - super(parser); + public SmileXContentParser(NamedXContentRegistry xContentRegistry, JsonParser parser) { + super(xContentRegistry, parser); } @Override diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java b/core/src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java index 9f313a59b90fc..1c23ddf006de0 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java @@ -22,6 +22,11 @@ import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.Booleans; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.LogConfigurator; +import org.elasticsearch.common.logging.Loggers; +import org.elasticsearch.common.settings.Setting; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentParser; import java.io.IOException; @@ -35,7 +40,6 @@ * */ public 
abstract class AbstractXContentParser implements XContentParser { - // Currently this is not a setting that can be changed and is a policy // that relates to how parsing of things like "boost" are done across // the whole of Elasticsearch (eg if String "1.0" is a valid float). @@ -52,7 +56,21 @@ private static void checkCoerceString(boolean coerce, Class cl } } + /** + * We have to lazy initialize the deprecation logger as otherwise a static logger here would be constructed before logging is configured + * leading to a runtime failure (see {@link LogConfigurator#checkErrorListener()} ). The premature construction would come from any + * {@link Setting} object constructed in, for example, {@link org.elasticsearch.env.Environment}. + */ + private static class DeprecationLoggerHolder { + static DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(AbstractXContentParser.class)); + } + + private final NamedXContentRegistry xContentRegistry; + + public AbstractXContentParser(NamedXContentRegistry xContentRegistry) { + this.xContentRegistry = xContentRegistry; + } // The 3rd party parsers we rely on are known to silently truncate fractions: see // http://fasterxml.github.io/jackson-core/javadoc/2.3.0/com/fasterxml/jackson/core/JsonParser.html#getShortValue() @@ -87,13 +105,28 @@ public boolean isBooleanValue() throws IOException { @Override public boolean booleanValue() throws IOException { + boolean interpretedAsLenient = false; + boolean booleanValue; + String rawValue = null; + Token token = currentToken(); if (token == Token.VALUE_NUMBER) { - return intValue() != 0; + interpretedAsLenient = true; + booleanValue = intValue() != 0; + rawValue = String.valueOf(intValue()); } else if (token == Token.VALUE_STRING) { - return Booleans.parseBoolean(textCharacters(), textOffset(), textLength(), false /* irrelevant */); + rawValue = new String(textCharacters(), textOffset(), textLength()); + interpretedAsLenient = Booleans.isStrictlyBoolean(rawValue) == false; + booleanValue = Booleans.parseBoolean(rawValue, false /* irrelevant */); + } else { + booleanValue = doBooleanValue(); + } + if (interpretedAsLenient) { + final DeprecationLogger deprecationLogger = DeprecationLoggerHolder.deprecationLogger; + deprecationLogger.deprecated("Expected a boolean [true/false] for property [{}] but got [{}]", currentName(), rawValue); } - return doBooleanValue(); + return booleanValue; + } protected abstract boolean doBooleanValue() throws IOException; @@ -108,7 +141,7 @@ public short shortValue(boolean coerce) throws IOException { Token token = currentToken(); if (token == Token.VALUE_STRING) { checkCoerceString(coerce, Short.class); - return Short.parseShort(text()); + return (short) Double.parseDouble(text()); } short result = doShortValue(); ensureNumberConversion(coerce, result, Short.class); @@ -122,13 +155,12 @@ public int intValue() throws IOException { return intValue(DEFAULT_NUMBER_COERCE_POLICY); } - @Override public int intValue(boolean coerce) throws IOException { Token token = currentToken(); if (token == Token.VALUE_STRING) { checkCoerceString(coerce, Integer.class); - return Integer.parseInt(text()); + return (int) Double.parseDouble(text()); } int result = doIntValue(); ensureNumberConversion(coerce, result, Integer.class); @@ -147,7 +179,13 @@ public long longValue(boolean coerce) throws IOException { Token token = currentToken(); if (token == Token.VALUE_STRING) { checkCoerceString(coerce, Long.class); - return Long.parseLong(text()); + // longs need special handling so we 
don't lose precision while parsing + String stringValue = text(); + try { + return Long.parseLong(stringValue); + } catch (NumberFormatException e) { + return (long) Double.parseDouble(stringValue); + } } long result = doLongValue(); ensureNumberConversion(coerce, result, Long.class); @@ -218,6 +256,16 @@ public Map mapOrdered() throws IOException { return readOrderedMap(this); } + @Override + public Map mapStrings() throws IOException { + return readMapStrings(this); + } + + @Override + public Map mapStringsOrdered() throws IOException { + return readOrderedMapStrings(this); + } + @Override public List list() throws IOException { return readList(this); @@ -232,10 +280,18 @@ interface MapFactory { Map newMap(); } + interface MapStringsFactory { + Map newMap(); + } + static final MapFactory SIMPLE_MAP_FACTORY = HashMap::new; static final MapFactory ORDERED_MAP_FACTORY = LinkedHashMap::new; + static final MapStringsFactory SIMPLE_MAP_STRINGS_FACTORY = HashMap::new; + + static final MapStringsFactory ORDERED_MAP_STRINGS_FACTORY = LinkedHashMap::new; + static Map readMap(XContentParser parser) throws IOException { return readMap(parser, SIMPLE_MAP_FACTORY); } @@ -244,6 +300,14 @@ static Map readOrderedMap(XContentParser parser) throws IOExcept return readMap(parser, ORDERED_MAP_FACTORY); } + static Map readMapStrings(XContentParser parser) throws IOException { + return readMapStrings(parser, SIMPLE_MAP_STRINGS_FACTORY); + } + + static Map readOrderedMapStrings(XContentParser parser) throws IOException { + return readMapStrings(parser, ORDERED_MAP_STRINGS_FACTORY); + } + static List readList(XContentParser parser) throws IOException { return readList(parser, SIMPLE_MAP_FACTORY); } @@ -272,6 +336,26 @@ static Map readMap(XContentParser parser, MapFactory mapFactory) return map; } + static Map readMapStrings(XContentParser parser, MapStringsFactory mapStringsFactory) throws IOException { + Map map = mapStringsFactory.newMap(); + XContentParser.Token token = parser.currentToken(); + if (token == null) { + token = parser.nextToken(); + } + if (token == XContentParser.Token.START_OBJECT) { + token = parser.nextToken(); + } + for (; token == XContentParser.Token.FIELD_NAME; token = parser.nextToken()) { + // Must point to field name + String fieldName = parser.currentName(); + // And then the value... 
+ parser.nextToken(); + String value = parser.text(); + map.put(fieldName, value); + } + return map; + } + static List readList(XContentParser parser, MapFactory mapFactory) throws IOException { XContentParser.Token token = parser.currentToken(); if (token == null) { @@ -313,6 +397,16 @@ static Object readValue(XContentParser parser, MapFactory mapFactory, XContentPa return null; } + @Override + public T namedObject(Class categoryClass, String name, Object context) throws IOException { + return xContentRegistry.parseNamedObject(categoryClass, name, this, context); + } + + @Override + public NamedXContentRegistry getXContentRegistry() { + return xContentRegistry; + } + @Override public abstract boolean isClosed(); } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/support/XContentMapValues.java b/core/src/main/java/org/elasticsearch/common/xcontent/support/XContentMapValues.java index a8c120f424b5c..c8d0b8c615c48 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/support/XContentMapValues.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/support/XContentMapValues.java @@ -19,20 +19,31 @@ package org.elasticsearch.common.xcontent.support; +import org.apache.lucene.util.automaton.Automata; +import org.apache.lucene.util.automaton.Automaton; +import org.apache.lucene.util.automaton.CharacterRunAutomaton; +import org.apache.lucene.util.automaton.Operations; import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.common.Booleans; +import org.elasticsearch.common.Numbers; import org.elasticsearch.common.Strings; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.unit.TimeValue; import java.util.ArrayList; +import java.util.Arrays; import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.function.Function; /** * */ public class XContentMapValues { + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(XContentMapValues.class)); /** * Extracts raw values (string, int, and so on) based on the path provided returning all of them @@ -134,115 +145,179 @@ private static Object extractValue(String[] pathElements, int index, Object curr return null; } - public static Map filter(Map map, String[] includes, String[] excludes) { - Map result = new HashMap<>(); - filter(map, result, includes == null ? Strings.EMPTY_ARRAY : includes, excludes == null ? Strings.EMPTY_ARRAY : excludes, new StringBuilder()); - return result; + /** + * Only keep properties in {@code map} that match the {@code includes} but + * not the {@code excludes}. An empty list of includes is interpreted as a + * wildcard while an empty list of excludes does not match anything. + * + * If a property matches both an include and an exclude, then the exclude + * wins. + * + * If an object matches, then any of its sub properties are automatically + * considered as matching as well, both for includes and excludes. + * + * Dots in field names are treated as sub objects. So for instance if a + * document contains {@code a.b} as a property and {@code a} is an include, + * then {@code a.b} will be kept in the filtered map. 
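To make the include/exclude semantics described in the javadoc above concrete, here is a small usage sketch. It assumes the Elasticsearch core classes from this change are on the classpath; the document contents are invented for illustration.

```java
import org.elasticsearch.common.xcontent.support.XContentMapValues;

import java.util.HashMap;
import java.util.Map;

public class FilterExample {
    public static void main(String[] args) {
        Map<String, Object> user = new HashMap<>();
        user.put("name", "kimchy");
        user.put("password", "secret");

        Map<String, Object> doc = new HashMap<>();
        doc.put("user", user);
        doc.put("user.age", 42);     // dotted field name, treated as a sub object
        doc.put("message", "hello");

        // Keep everything under "user" but drop "user.password"; excludes win over includes.
        Map<String, Object> filtered = XContentMapValues.filter(
                doc, new String[] {"user"}, new String[] {"user.password"});

        // Expected content (iteration order may vary): {user={name=kimchy}, user.age=42}
        // "message" is dropped because it matches no include.
        System.out.println(filtered);
    }
}
```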
+ */ + public static Map filter(Map map, String[] includes, String[] excludes) { + return filter(includes, excludes).apply(map); } - private static void filter(Map map, Map into, String[] includes, String[] excludes, StringBuilder sb) { - if (includes.length == 0 && excludes.length == 0) { - into.putAll(map); - return; + /** + * Returns a function that filters a document map based on the given include and exclude rules. + * @see #filter(Map, String[], String[]) for details + */ + public static Function, Map> filter(String[] includes, String[] excludes) { + CharacterRunAutomaton matchAllAutomaton = new CharacterRunAutomaton(Automata.makeAnyString()); + + CharacterRunAutomaton include; + if (includes == null || includes.length == 0) { + include = matchAllAutomaton; + } else { + Automaton includeA = Regex.simpleMatchToAutomaton(includes); + includeA = makeMatchDotsInFieldNames(includeA); + include = new CharacterRunAutomaton(includeA); + } + + Automaton excludeA; + if (excludes == null || excludes.length == 0) { + excludeA = Automata.makeEmpty(); + } else { + excludeA = Regex.simpleMatchToAutomaton(excludes); + excludeA = makeMatchDotsInFieldNames(excludeA); } - for (Map.Entry entry : map.entrySet()) { + CharacterRunAutomaton exclude = new CharacterRunAutomaton(excludeA); + + // NOTE: We cannot use Operations.minus because of the special case that + // we want all sub properties to match as soon as an object matches + + return (map) -> filter(map, + include, 0, + exclude, 0, + matchAllAutomaton); + } + + /** Make matches on objects also match dots in field names. + * For instance, if the original simple regex is `foo`, this will translate + * it into `foo` OR `foo.*`. */ + private static Automaton makeMatchDotsInFieldNames(Automaton automaton) { + return Operations.union( + automaton, + Operations.concatenate(Arrays.asList(automaton, Automata.makeChar('.'), Automata.makeAnyString()))); + } + + private static int step(CharacterRunAutomaton automaton, String key, int state) { + for (int i = 0; state != -1 && i < key.length(); ++i) { + state = automaton.step(state, key.charAt(i)); + } + return state; + } + + private static Map filter(Map map, + CharacterRunAutomaton includeAutomaton, int initialIncludeState, + CharacterRunAutomaton excludeAutomaton, int initialExcludeState, + CharacterRunAutomaton matchAllAutomaton) { + Map filtered = new HashMap<>(); + for (Map.Entry entry : map.entrySet()) { String key = entry.getKey(); - int mark = sb.length(); - if (sb.length() > 0) { - sb.append('.'); + + int includeState = step(includeAutomaton, key, initialIncludeState); + if (includeState == -1) { + continue; } - sb.append(key); - String path = sb.toString(); - if (Regex.simpleMatch(excludes, path)) { - sb.setLength(mark); + int excludeState = step(excludeAutomaton, key, initialExcludeState); + if (excludeState != -1 && excludeAutomaton.isAccept(excludeState)) { continue; } - boolean exactIncludeMatch = false; // true if the current position was specifically mentioned - boolean pathIsPrefixOfAnInclude = false; // true if potentially a sub scope can be included - if (includes.length == 0) { - // implied match anything - exactIncludeMatch = true; - } else { - for (String include : includes) { - // check for prefix matches as well to see if we need to zero in, something like: obj1.arr1.* or *.field - // note, this does not work well with middle matches, like obj1.*.obj3 - if (include.charAt(0) == '*') { - if (Regex.simpleMatch(include, path)) { - exactIncludeMatch = true; - break; - } - 
pathIsPrefixOfAnInclude = true; - continue; - } - if (include.startsWith(path)) { - if (include.length() == path.length()) { - exactIncludeMatch = true; - break; - } else if (include.length() > path.length() && include.charAt(path.length()) == '.') { - // include might may match deeper paths. Dive deeper. - pathIsPrefixOfAnInclude = true; - continue; - } - } - if (Regex.simpleMatch(include, path)) { - exactIncludeMatch = true; - break; - } + Object value = entry.getValue(); + + CharacterRunAutomaton subIncludeAutomaton = includeAutomaton; + int subIncludeState = includeState; + if (includeAutomaton.isAccept(includeState)) { + if (excludeState == -1 || excludeAutomaton.step(excludeState, '.') == -1) { + // the exclude has no chances to match inner properties + filtered.put(key, value); + continue; + } else { + // the object matched, so consider that the include matches every inner property + // we only care about excludes now + subIncludeAutomaton = matchAllAutomaton; + subIncludeState = 0; } } - if (!(pathIsPrefixOfAnInclude || exactIncludeMatch)) { - // skip subkeys, not interesting. - sb.setLength(mark); - continue; - } + if (value instanceof Map) { + + subIncludeState = subIncludeAutomaton.step(subIncludeState, '.'); + if (subIncludeState == -1) { + continue; + } + if (excludeState != -1) { + excludeState = excludeAutomaton.step(excludeState, '.'); + } + + Map valueAsMap = (Map) value; + Map filteredValue = filter(valueAsMap, + subIncludeAutomaton, subIncludeState, excludeAutomaton, excludeState, matchAllAutomaton); + if (includeAutomaton.isAccept(includeState) || filteredValue.isEmpty() == false) { + filtered.put(key, filteredValue); + } + } else if (value instanceof Iterable) { - if (entry.getValue() instanceof Map) { - Map innerInto = new HashMap<>(); - // if we had an exact match, we want give deeper excludes their chance - filter((Map) entry.getValue(), innerInto, exactIncludeMatch ? Strings.EMPTY_ARRAY : includes, excludes, sb); - if (exactIncludeMatch || !innerInto.isEmpty()) { - into.put(entry.getKey(), innerInto); + List filteredValue = filter((Iterable) value, + subIncludeAutomaton, subIncludeState, excludeAutomaton, excludeState, matchAllAutomaton); + if (filteredValue.isEmpty() == false) { + filtered.put(key, filteredValue); } - } else if (entry.getValue() instanceof List) { - List list = (List) entry.getValue(); - List innerInto = new ArrayList<>(list.size()); - // if we had an exact match, we want give deeper excludes their chance - filter(list, innerInto, exactIncludeMatch ? 
Strings.EMPTY_ARRAY : includes, excludes, sb); - into.put(entry.getKey(), innerInto); - } else if (exactIncludeMatch) { - into.put(entry.getKey(), entry.getValue()); + + } else { + + // leaf property + if (includeAutomaton.isAccept(includeState) + && (excludeState == -1 || excludeAutomaton.isAccept(excludeState) == false)) { + filtered.put(key, value); + } + } - sb.setLength(mark); - } - } - private static void filter(List from, List to, String[] includes, String[] excludes, StringBuilder sb) { - if (includes.length == 0 && excludes.length == 0) { - to.addAll(from); - return; } + return filtered; + } - for (Object o : from) { - if (o instanceof Map) { - Map innerInto = new HashMap<>(); - filter((Map) o, innerInto, includes, excludes, sb); - if (!innerInto.isEmpty()) { - to.add(innerInto); + private static List filter(Iterable iterable, + CharacterRunAutomaton includeAutomaton, int initialIncludeState, + CharacterRunAutomaton excludeAutomaton, int initialExcludeState, + CharacterRunAutomaton matchAllAutomaton) { + List filtered = new ArrayList<>(); + boolean isInclude = includeAutomaton.isAccept(initialIncludeState); + for (Object value : iterable) { + if (value instanceof Map) { + int includeState = includeAutomaton.step(initialIncludeState, '.'); + int excludeState = initialExcludeState; + if (excludeState != -1) { + excludeState = excludeAutomaton.step(excludeState, '.'); } - } else if (o instanceof List) { - List innerInto = new ArrayList<>(); - filter((List) o, innerInto, includes, excludes, sb); - if (!innerInto.isEmpty()) { - to.add(innerInto); + Map filteredValue = filter((Map)value, + includeAutomaton, includeState, excludeAutomaton, excludeState, matchAllAutomaton); + if (filteredValue.isEmpty() == false) { + filtered.add(filteredValue); } - } else { - to.add(o); + } else if (value instanceof Iterable) { + List filteredValue = filter((Iterable) value, + includeAutomaton, initialIncludeState, excludeAutomaton, initialExcludeState, matchAllAutomaton); + if (filteredValue.isEmpty() == false) { + filtered.add(filteredValue); + } + } else if (isInclude) { + // #22557: only accept this array value if the key we are on is accepted: + filtered.add(value); } } + return filtered; } public static boolean isObject(Object node) { @@ -290,7 +365,7 @@ public static double nodeDoubleValue(Object node) { public static int nodeIntegerValue(Object node) { if (node instanceof Number) { - return ((Number) node).intValue(); + return Numbers.toIntExact((Number) node); } return Integer.parseInt(node.toString()); } @@ -299,10 +374,7 @@ public static int nodeIntegerValue(Object node, int defaultValue) { if (node == null) { return defaultValue; } - if (node instanceof Number) { - return ((Number) node).intValue(); - } - return Integer.parseInt(node.toString()); + return nodeIntegerValue(node); } public static short nodeShortValue(Object node, short defaultValue) { @@ -314,7 +386,7 @@ public static short nodeShortValue(Object node, short defaultValue) { public static short nodeShortValue(Object node) { if (node instanceof Number) { - return ((Number) node).shortValue(); + return Numbers.toShortExact((Number) node); } return Short.parseShort(node.toString()); } @@ -328,7 +400,7 @@ public static byte nodeByteValue(Object node, byte defaultValue) { public static byte nodeByteValue(Object node) { if (node instanceof Number) { - return ((Number) node).byteValue(); + return Numbers.toByteExact((Number) node); } return Byte.parseByte(node.toString()); } @@ -342,7 +414,7 @@ public static long nodeLongValue(Object 
node, long defaultValue) { public static long nodeLongValue(Object node) { if (node instanceof Number) { - return ((Number) node).longValue(); + return Numbers.toLongExact((Number) node); } return Long.parseLong(node.toString()); } @@ -350,25 +422,35 @@ public static long nodeLongValue(Object node) { /** * This method is very lenient, use {@link #nodeBooleanValue} instead. */ - public static boolean lenientNodeBooleanValue(Object node, boolean defaultValue) { + public static boolean lenientNodeBooleanValue(Object node, String name, boolean defaultValue) { if (node == null) { return defaultValue; } - return lenientNodeBooleanValue(node); + return lenientNodeBooleanValue(node, name); } /** * This method is very lenient, use {@link #nodeBooleanValue} instead. */ - public static boolean lenientNodeBooleanValue(Object node) { + public static boolean lenientNodeBooleanValue(Object node, String name) { + boolean interpretedAsLenient = false; + boolean booleanValue; + if (node instanceof Boolean) { - return (Boolean) node; + booleanValue = (Boolean) node; + } else if (node instanceof Number) { + interpretedAsLenient = true; + booleanValue = ((Number) node).intValue() != 0; + } else { + String value = node.toString(); + booleanValue = ((value.equals("false") || value.equals("0") || value.equals("off"))) == false; + interpretedAsLenient = Booleans.isStrictlyBoolean(value) == false; } - if (node instanceof Number) { - return ((Number) node).intValue() != 0; + + if (interpretedAsLenient) { + DEPRECATION_LOGGER.deprecated("Expected a boolean [true/false] for property [{}] but got [{}]", name, node.toString()); } - String value = node.toString(); - return !(value.equals("false") || value.equals("0") || value.equals("off")); + return booleanValue; } public static boolean nodeBooleanValue(Object node) { diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/yaml/YamlXContent.java b/core/src/main/java/org/elasticsearch/common/xcontent/yaml/YamlXContent.java index 54da03118d70b..91447da1d924d 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/yaml/YamlXContent.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/yaml/YamlXContent.java @@ -24,6 +24,7 @@ import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.FastStringReader; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentGenerator; @@ -72,32 +73,32 @@ public XContentGenerator createGenerator(OutputStream os, Set includes, } @Override - public XContentParser createParser(String content) throws IOException { - return new YamlXContentParser(yamlFactory.createParser(new FastStringReader(content))); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, String content) throws IOException { + return new YamlXContentParser(xContentRegistry, yamlFactory.createParser(new FastStringReader(content))); } @Override - public XContentParser createParser(InputStream is) throws IOException { - return new YamlXContentParser(yamlFactory.createParser(is)); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, InputStream is) throws IOException { + return new YamlXContentParser(xContentRegistry, yamlFactory.createParser(is)); } @Override - public XContentParser createParser(byte[] data) throws IOException { - return new 
YamlXContentParser(yamlFactory.createParser(data)); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, byte[] data) throws IOException { + return new YamlXContentParser(xContentRegistry, yamlFactory.createParser(data)); } @Override - public XContentParser createParser(byte[] data, int offset, int length) throws IOException { - return new YamlXContentParser(yamlFactory.createParser(data, offset, length)); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, byte[] data, int offset, int length) throws IOException { + return new YamlXContentParser(xContentRegistry, yamlFactory.createParser(data, offset, length)); } @Override - public XContentParser createParser(BytesReference bytes) throws IOException { - return createParser(bytes.streamInput()); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, BytesReference bytes) throws IOException { + return createParser(xContentRegistry, bytes.streamInput()); } @Override - public XContentParser createParser(Reader reader) throws IOException { - return new YamlXContentParser(yamlFactory.createParser(reader)); + public XContentParser createParser(NamedXContentRegistry xContentRegistry, Reader reader) throws IOException { + return new YamlXContentParser(xContentRegistry, yamlFactory.createParser(reader)); } } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/yaml/YamlXContentGenerator.java b/core/src/main/java/org/elasticsearch/common/xcontent/yaml/YamlXContentGenerator.java index d2c53c8a02009..0d969c21a0f9f 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/yaml/YamlXContentGenerator.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/yaml/YamlXContentGenerator.java @@ -20,20 +20,14 @@ package org.elasticsearch.common.xcontent.yaml; import com.fasterxml.jackson.core.JsonGenerator; -import org.elasticsearch.common.Strings; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.json.JsonXContentGenerator; import java.io.OutputStream; -import java.util.Collections; import java.util.Set; public class YamlXContentGenerator extends JsonXContentGenerator { - public YamlXContentGenerator(JsonGenerator jsonGenerator, OutputStream os) { - this(jsonGenerator, os, Collections.emptySet(), Collections.emptySet()); - } - public YamlXContentGenerator(JsonGenerator jsonGenerator, OutputStream os, Set includes, Set excludes) { super(jsonGenerator, os, includes, excludes); } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/yaml/YamlXContentParser.java b/core/src/main/java/org/elasticsearch/common/xcontent/yaml/YamlXContentParser.java index 3b674c054dc12..ddc2c19d993b8 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/yaml/YamlXContentParser.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/yaml/YamlXContentParser.java @@ -20,6 +20,8 @@ package org.elasticsearch.common.xcontent.yaml; import com.fasterxml.jackson.core.JsonParser; + +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.json.JsonXContentParser; @@ -28,8 +30,8 @@ */ public class YamlXContentParser extends JsonXContentParser { - public YamlXContentParser(JsonParser parser) { - super(parser); + public YamlXContentParser(NamedXContentRegistry xContentRegistry, JsonParser parser) { + super(xContentRegistry, parser); } @Override diff --git 
a/core/src/main/java/org/elasticsearch/discovery/DiscoveryModule.java b/core/src/main/java/org/elasticsearch/discovery/DiscoveryModule.java index 040066adeb6b1..2328b5a861675 100644 --- a/core/src/main/java/org/elasticsearch/discovery/DiscoveryModule.java +++ b/core/src/main/java/org/elasticsearch/discovery/DiscoveryModule.java @@ -19,120 +19,89 @@ package org.elasticsearch.discovery; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.common.inject.AbstractModule; -import org.elasticsearch.common.inject.multibindings.Multibinder; +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.io.stream.NamedWriteableRegistry; +import org.elasticsearch.common.logging.Loggers; +import org.elasticsearch.common.network.NetworkService; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.util.ExtensionPoint; -import org.elasticsearch.discovery.local.LocalDiscovery; +import org.elasticsearch.discovery.single.SingleNodeDiscovery; +import org.elasticsearch.discovery.zen.UnicastHostsProvider; import org.elasticsearch.discovery.zen.ZenDiscovery; -import org.elasticsearch.discovery.zen.elect.ElectMasterService; -import org.elasticsearch.discovery.zen.ping.ZenPing; -import org.elasticsearch.discovery.zen.ping.ZenPingService; -import org.elasticsearch.discovery.zen.ping.unicast.UnicastHostsProvider; -import org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing; +import org.elasticsearch.plugins.DiscoveryPlugin; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.TransportService; -import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.Objects; +import java.util.Optional; import java.util.function.Function; +import java.util.function.Supplier; /** * A module for loading classes for node discovery. */ -public class DiscoveryModule extends AbstractModule { +public class DiscoveryModule { public static final Setting DISCOVERY_TYPE_SETTING = - new Setting<>("discovery.type", "zen", Function.identity(), - Property.NodeScope); - public static final Setting ZEN_MASTER_SERVICE_TYPE_SETTING = - new Setting<>("discovery.zen.masterservice.type", "zen", Function.identity(), Property.NodeScope); + new Setting<>("discovery.type", "zen", Function.identity(), Property.NodeScope); + public static final Setting> DISCOVERY_HOSTS_PROVIDER_SETTING = + new Setting<>("discovery.zen.hosts_provider", (String)null, Optional::ofNullable, Property.NodeScope); - private final Settings settings; - private final Map>> unicastHostProviders = new HashMap<>(); - private final ExtensionPoint.ClassSet zenPings = new ExtensionPoint.ClassSet<>("zen_ping", ZenPing.class); - private final Map> discoveryTypes = new HashMap<>(); - private final Map> masterServiceType = new HashMap<>(); + private final Discovery discovery; - public DiscoveryModule(Settings settings) { - this.settings = settings; - addDiscoveryType("local", LocalDiscovery.class); - addDiscoveryType("zen", ZenDiscovery.class); - addElectMasterService("zen", ElectMasterService.class); - // always add the unicast hosts, or things get angry! 
- addZenPing(UnicastZenPing.class); - } + public DiscoveryModule(Settings settings, ThreadPool threadPool, TransportService transportService, + NamedWriteableRegistry namedWriteableRegistry, NetworkService networkService, ClusterService clusterService, + List plugins) { + final UnicastHostsProvider hostsProvider; - /** - * Adds a custom unicast hosts provider to build a dynamic list of unicast hosts list when doing unicast discovery. - * - * @param type discovery for which this provider is relevant - * @param unicastHostProvider the host provider - */ - public void addUnicastHostProvider(String type, Class unicastHostProvider) { - List> providerList = unicastHostProviders.get(type); - if (providerList == null) { - providerList = new ArrayList<>(); - unicastHostProviders.put(type, providerList); + Map> hostProviders = new HashMap<>(); + for (DiscoveryPlugin plugin : plugins) { + plugin.getZenHostsProviders(transportService, networkService).entrySet().forEach(entry -> { + if (hostProviders.put(entry.getKey(), entry.getValue()) != null) { + throw new IllegalArgumentException("Cannot register zen hosts provider [" + entry.getKey() + "] twice"); + } + }); } - providerList.add(unicastHostProvider); - } - - /** - * Adds a custom Discovery type. - */ - public void addDiscoveryType(String type, Class clazz) { - if (discoveryTypes.containsKey(type)) { - throw new IllegalArgumentException("discovery type [" + type + "] is already registered"); + Optional hostsProviderName = DISCOVERY_HOSTS_PROVIDER_SETTING.get(settings); + if (hostsProviderName.isPresent()) { + Supplier hostsProviderSupplier = hostProviders.get(hostsProviderName.get()); + if (hostsProviderSupplier == null) { + throw new IllegalArgumentException("Unknown zen hosts provider [" + hostsProviderName.get() + "]"); + } + hostsProvider = Objects.requireNonNull(hostsProviderSupplier.get()); + } else { + hostsProvider = Collections::emptyList; } - discoveryTypes.put(type, clazz); - } - /** - * Adds a custom zen master service type. 
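With the module rewritten above, plugins contribute hosts providers through `DiscoveryPlugin#getZenHostsProviders` and discovery implementations through `DiscoveryPlugin#getDiscoveryTypes`. The sketch below registers a hosts provider under the name `static-list`, selectable with `discovery.zen.hosts_provider: static-list`. The plugin class name is hypothetical and the generic signature is inferred from the constructor's call sites above, so treat this as a sketch rather than the definitive plugin API.

```java
import org.elasticsearch.common.network.NetworkService;
import org.elasticsearch.discovery.zen.UnicastHostsProvider;
import org.elasticsearch.plugins.DiscoveryPlugin;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.transport.TransportService;

import java.util.Collections;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical plugin: registers a provider that returns no seed nodes.
public class StaticListDiscoveryPlugin extends Plugin implements DiscoveryPlugin {

    @Override
    public Map<String, Supplier<UnicastHostsProvider>> getZenHostsProviders(
            TransportService transportService, NetworkService networkService) {
        // UnicastHostsProvider has a single no-arg method returning the seed nodes,
        // which is why the DiscoveryModule above can fall back to Collections::emptyList.
        Supplier<UnicastHostsProvider> provider = () -> Collections::emptyList;
        return Collections.singletonMap("static-list", provider);
    }
}
```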
- */ - public void addElectMasterService(String type, Class masterService) { - if (masterServiceType.containsKey(type)) { - throw new IllegalArgumentException("master service type [" + type + "] is already registered"); + Map> discoveryTypes = new HashMap<>(); + discoveryTypes.put("zen", + () -> new ZenDiscovery(settings, threadPool, transportService, namedWriteableRegistry, clusterService, hostsProvider)); + discoveryTypes.put("none", () -> new NoneDiscovery(settings, clusterService, clusterService.getClusterSettings())); + discoveryTypes.put("single-node", () -> new SingleNodeDiscovery(settings, clusterService)); + for (DiscoveryPlugin plugin : plugins) { + plugin.getDiscoveryTypes(threadPool, transportService, namedWriteableRegistry, + clusterService, hostsProvider).entrySet().forEach(entry -> { + if (discoveryTypes.put(entry.getKey(), entry.getValue()) != null) { + throw new IllegalArgumentException("Cannot register discovery type [" + entry.getKey() + "] twice"); + } + }); } - this.masterServiceType.put(type, masterService); - } - - public void addZenPing(Class clazz) { - zenPings.registerExtension(clazz); - } - - @Override - protected void configure() { String discoveryType = DISCOVERY_TYPE_SETTING.get(settings); - Class discoveryClass = discoveryTypes.get(discoveryType); - if (discoveryClass == null) { - throw new IllegalArgumentException("Unknown Discovery type [" + discoveryType + "]"); + Supplier discoverySupplier = discoveryTypes.get(discoveryType); + if (discoverySupplier == null) { + throw new IllegalArgumentException("Unknown discovery type [" + discoveryType + "]"); } + Loggers.getLogger(getClass(), settings).info("using discovery type [{}]", discoveryType); + discovery = Objects.requireNonNull(discoverySupplier.get()); + } - if (discoveryType.equals("local") == false) { - String masterServiceTypeKey = ZEN_MASTER_SERVICE_TYPE_SETTING.get(settings); - final Class masterService = masterServiceType.get(masterServiceTypeKey); - if (masterService == null) { - throw new IllegalArgumentException("Unknown master service type [" + masterServiceTypeKey + "]"); - } - if (masterService == ElectMasterService.class) { - bind(ElectMasterService.class).asEagerSingleton(); - } else { - bind(ElectMasterService.class).to(masterService).asEagerSingleton(); - } - bind(ZenPingService.class).asEagerSingleton(); - Multibinder unicastHostsProviderMultibinder = Multibinder.newSetBinder(binder(), UnicastHostsProvider.class); - for (Class unicastHostProvider : - unicastHostProviders.getOrDefault(discoveryType, Collections.emptyList())) { - unicastHostsProviderMultibinder.addBinding().to(unicastHostProvider); - } - zenPings.bind(binder()); - } - bind(Discovery.class).to(discoveryClass).asEagerSingleton(); + public Discovery getDiscovery() { + return discovery; } + } diff --git a/core/src/main/java/org/elasticsearch/discovery/DiscoverySettings.java b/core/src/main/java/org/elasticsearch/discovery/DiscoverySettings.java index 6f5a6c9a745a8..e9a83678f8a2c 100644 --- a/core/src/main/java/org/elasticsearch/discovery/DiscoverySettings.java +++ b/core/src/main/java/org/elasticsearch/discovery/DiscoverySettings.java @@ -37,8 +37,8 @@ public class DiscoverySettings extends AbstractComponent { public static final int NO_MASTER_BLOCK_ID = 2; - public static final ClusterBlock NO_MASTER_BLOCK_ALL = new ClusterBlock(NO_MASTER_BLOCK_ID, "no master", true, true, RestStatus.SERVICE_UNAVAILABLE, ClusterBlockLevel.ALL); - public static final ClusterBlock NO_MASTER_BLOCK_WRITES = new ClusterBlock(NO_MASTER_BLOCK_ID, "no 
master", true, false, RestStatus.SERVICE_UNAVAILABLE, EnumSet.of(ClusterBlockLevel.WRITE, ClusterBlockLevel.METADATA_WRITE)); + public static final ClusterBlock NO_MASTER_BLOCK_ALL = new ClusterBlock(NO_MASTER_BLOCK_ID, "no master", true, true, false, RestStatus.SERVICE_UNAVAILABLE, ClusterBlockLevel.ALL); + public static final ClusterBlock NO_MASTER_BLOCK_WRITES = new ClusterBlock(NO_MASTER_BLOCK_ID, "no master", true, false, false, RestStatus.SERVICE_UNAVAILABLE, EnumSet.of(ClusterBlockLevel.WRITE, ClusterBlockLevel.METADATA_WRITE)); /** * sets the timeout for a complete publishing cycle, including both sending and committing. the master * will continue to process the next cluster state update after this time has elapsed diff --git a/core/src/main/java/org/elasticsearch/discovery/DiscoveryStats.java b/core/src/main/java/org/elasticsearch/discovery/DiscoveryStats.java index fc419ff06a672..9542b14e5699c 100644 --- a/core/src/main/java/org/elasticsearch/discovery/DiscoveryStats.java +++ b/core/src/main/java/org/elasticsearch/discovery/DiscoveryStats.java @@ -25,7 +25,7 @@ import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.discovery.zen.publish.PendingClusterStateStats; +import org.elasticsearch.discovery.zen.PendingClusterStateStats; import java.io.IOException; diff --git a/core/src/main/java/org/elasticsearch/discovery/InitialStateDiscoveryListener.java b/core/src/main/java/org/elasticsearch/discovery/InitialStateDiscoveryListener.java deleted file mode 100644 index 1ec55c874b4c6..0000000000000 --- a/core/src/main/java/org/elasticsearch/discovery/InitialStateDiscoveryListener.java +++ /dev/null @@ -1,33 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.discovery; - -/** - * A listener that should be called by the {@link org.elasticsearch.discovery.Discovery} component - * when the first valid initial cluster state has been submitted and processed by the cluster service. - *
    - * Note, this listener should be registered with the discovery service before it has started. - * - * - */ -public interface InitialStateDiscoveryListener { - - void initialStateProcessed(); -} diff --git a/core/src/main/java/org/elasticsearch/discovery/NoneDiscovery.java b/core/src/main/java/org/elasticsearch/discovery/NoneDiscovery.java new file mode 100644 index 0000000000000..91b04ce396ba6 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/discovery/NoneDiscovery.java @@ -0,0 +1,102 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.discovery; + +import org.elasticsearch.cluster.ClusterChangedEvent; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.routing.allocation.AllocationService; +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.component.AbstractLifecycleComponent; +import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.settings.ClusterSettings; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.discovery.zen.ElectMasterService; + +/** + * A {@link Discovery} implementation that is used by {@link org.elasticsearch.tribe.TribeService}. This implementation + * doesn't support any clustering features. Most notably {@link #startInitialJoin()} does nothing and + * {@link #publish(ClusterChangedEvent, AckListener)} is not supported. 
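`NoneDiscovery` is wired in through the rewritten `DiscoveryModule` constructor, which resolves the `discovery.type` setting against the registered implementations: `zen` by default, plus `none`, `single-node`, and any plugin-provided types. A minimal sketch of selecting a type through `Settings` follows; any value that is not registered fails in the constructor with "Unknown discovery type".

```java
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.discovery.DiscoveryModule;

public class DiscoveryTypeExample {
    public static void main(String[] args) {
        // "none" is the implementation used by the tribe service; "zen" is the default.
        Settings settings = Settings.builder()
                .put("discovery.type", "none")
                .build();

        // Resolves to "none"; DiscoveryModule later looks this value up in its
        // registry of discovery suppliers and instantiates the matching implementation.
        String type = DiscoveryModule.DISCOVERY_TYPE_SETTING.get(settings);
        System.out.println(type);
    }
}
```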
+ */ +public class NoneDiscovery extends AbstractLifecycleComponent implements Discovery { + + private final ClusterService clusterService; + private final DiscoverySettings discoverySettings; + + @Inject + public NoneDiscovery(Settings settings, ClusterService clusterService, ClusterSettings clusterSettings) { + super(settings); + this.clusterService = clusterService; + this.discoverySettings = new DiscoverySettings(settings, clusterSettings); + } + + @Override + public DiscoveryNode localNode() { + return clusterService.localNode(); + } + + @Override + public String nodeDescription() { + return clusterService.getClusterName().value() + "/" + clusterService.localNode().getId(); + } + + @Override + public void setAllocationService(AllocationService allocationService) { + + } + + @Override + public void publish(ClusterChangedEvent clusterChangedEvent, AckListener ackListener) { + throw new UnsupportedOperationException(); + } + + @Override + public DiscoveryStats stats() { + return null; + } + + @Override + public DiscoverySettings getDiscoverySettings() { + return discoverySettings; + } + + @Override + public void startInitialJoin() { + + } + + @Override + public int getMinimumMasterNodes() { + return ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.get(settings); + } + + @Override + protected void doStart() { + + } + + @Override + protected void doStop() { + + } + + @Override + protected void doClose() { + + } +} diff --git a/core/src/main/java/org/elasticsearch/discovery/local/LocalDiscovery.java b/core/src/main/java/org/elasticsearch/discovery/local/LocalDiscovery.java index d84f471cffeeb..6bab1a2ab128e 100644 --- a/core/src/main/java/org/elasticsearch/discovery/local/LocalDiscovery.java +++ b/core/src/main/java/org/elasticsearch/discovery/local/LocalDiscovery.java @@ -27,16 +27,17 @@ import org.elasticsearch.cluster.ClusterStateUpdateTask; import org.elasticsearch.cluster.Diff; import org.elasticsearch.cluster.IncompatibleClusterStateVersionException; +import org.elasticsearch.cluster.LocalClusterUpdateTask; import org.elasticsearch.cluster.block.ClusterBlocks; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.routing.allocation.AllocationService; -import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.component.AbstractLifecycleComponent; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.io.stream.BytesStreamOutput; +import org.elasticsearch.common.io.stream.NamedWriteableRegistry; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; @@ -47,7 +48,7 @@ import org.elasticsearch.discovery.Discovery; import org.elasticsearch.discovery.DiscoverySettings; import org.elasticsearch.discovery.DiscoveryStats; -import org.elasticsearch.discovery.zen.publish.PendingClusterStateStats; +import org.elasticsearch.discovery.zen.PendingClusterStateStats; import java.util.HashSet; import java.util.Optional; @@ -69,6 +70,7 @@ public class LocalDiscovery extends AbstractLifecycleComponent implements Discov private final ClusterName clusterName; private final DiscoverySettings discoverySettings; + private final NamedWriteableRegistry namedWriteableRegistry; private volatile boolean master = false; @@ -77,11 
+79,12 @@ public class LocalDiscovery extends AbstractLifecycleComponent implements Discov private volatile ClusterState lastProcessedClusterState; @Inject - public LocalDiscovery(Settings settings, ClusterService clusterService, ClusterSettings clusterSettings) { + public LocalDiscovery(Settings settings, ClusterService clusterService, ClusterSettings clusterSettings, NamedWriteableRegistry namedWriteableRegistry) { super(settings); this.clusterName = clusterService.getClusterName(); this.clusterService = clusterService; this.discoverySettings = new DiscoverySettings(settings, clusterSettings); + this.namedWriteableRegistry = namedWriteableRegistry; } @Override @@ -126,23 +129,16 @@ public void startInitialJoin() { // we are the first master (and the master) master = true; final LocalDiscovery master = firstMaster; - clusterService.submitStateUpdateTask("local-disco-initial_connect(master)", new ClusterStateUpdateTask() { + clusterService.submitStateUpdateTask("local-disco-initial_connect(master)", new LocalClusterUpdateTask() { @Override - public boolean runOnlyOnMaster() { - return false; - } - - @Override - public ClusterState execute(ClusterState currentState) { + public ClusterTasksResult execute(ClusterState currentState) { DiscoveryNodes.Builder nodesBuilder = DiscoveryNodes.builder(); for (LocalDiscovery discovery : clusterGroups.get(clusterName).members()) { nodesBuilder.add(discovery.localNode()); } nodesBuilder.localNodeId(master.localNode().getId()).masterNodeId(master.localNode().getId()); - // remove the NO_MASTER block in this case - ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks()).removeGlobalBlock(discoverySettings.getNoMasterBlock()); - return ClusterState.builder(currentState).nodes(nodesBuilder).blocks(blocks).build(); + return newState(ClusterState.builder(currentState).nodes(nodesBuilder).build()); } @Override @@ -153,25 +149,16 @@ public void onFailure(String source, Exception e) { } else if (firstMaster != null) { // tell the master to send the fact that we are here final LocalDiscovery master = firstMaster; - firstMaster.clusterService.submitStateUpdateTask("local-disco-receive(from node[" + localNode() + "])", new ClusterStateUpdateTask() { + firstMaster.clusterService.submitStateUpdateTask("local-disco-receive(from node[" + localNode() + "])", new LocalClusterUpdateTask() { @Override - public boolean runOnlyOnMaster() { - return false; - } - - @Override - public ClusterState execute(ClusterState currentState) { + public ClusterTasksResult execute(ClusterState currentState) { DiscoveryNodes.Builder nodesBuilder = DiscoveryNodes.builder(); for (LocalDiscovery discovery : clusterGroups.get(clusterName).members()) { nodesBuilder.add(discovery.localNode()); } nodesBuilder.localNodeId(master.localNode().getId()).masterNodeId(master.localNode().getId()); currentState = ClusterState.builder(currentState).nodes(nodesBuilder).build(); - RoutingAllocation.Result result = master.allocationService.reroute(currentState, "node_add"); - if (result.changed()) { - currentState = ClusterState.builder(currentState).routingResult(result).build(); - } - return currentState; + return newState(master.allocationService.reroute(currentState, "node_add")); } @Override @@ -219,14 +206,9 @@ protected void doStop() { } final LocalDiscovery master = firstMaster; - master.clusterService.submitStateUpdateTask("local-disco-update", new ClusterStateUpdateTask() { + master.clusterService.submitStateUpdateTask("local-disco-update", new LocalClusterUpdateTask() { 
@Override - public boolean runOnlyOnMaster() { - return false; - } - - @Override - public ClusterState execute(ClusterState currentState) { + public ClusterTasksResult execute(ClusterState currentState) { DiscoveryNodes newNodes = currentState.nodes().removeDeadMembers(newMembers, master.localNode().getId()); DiscoveryNodes.Delta delta = newNodes.delta(currentState.nodes()); if (delta.added()) { @@ -234,9 +216,7 @@ public ClusterState execute(ClusterState currentState) { } // reroute here, so we eagerly remove dead nodes from the routing ClusterState updatedState = ClusterState.builder(currentState).nodes(newNodes).build(); - RoutingAllocation.Result routingResult = master.allocationService.deassociateDeadNodes( - ClusterState.builder(updatedState).build(), true, "node stopped"); - return ClusterState.builder(updatedState).routingResult(routingResult).build(); + return newState(master.allocationService.deassociateDeadNodes(updatedState, true, "node stopped")); } @Override @@ -329,7 +309,7 @@ private void publish(LocalDiscovery[] members, ClusterChangedEvent clusterChange clusterStateDiffBytes = BytesReference.toBytes(os.bytes()); } try { - newNodeSpecificClusterState = discovery.lastProcessedClusterState.readDiffFrom(StreamInput.wrap(clusterStateDiffBytes)).apply(discovery.lastProcessedClusterState); + newNodeSpecificClusterState = ClusterState.readDiffFrom(StreamInput.wrap(clusterStateDiffBytes), discovery.localNode()).apply(discovery.lastProcessedClusterState); logger.trace("sending diff cluster state version [{}] with size {} to [{}]", clusterState.version(), clusterStateDiffBytes.length, discovery.localNode().getName()); } catch (IncompatibleClusterStateVersionException ex) { logger.warn((Supplier) () -> new ParameterizedMessage("incompatible cluster state version [{}] - resending complete cluster state", clusterState.version()), ex); @@ -339,34 +319,28 @@ private void publish(LocalDiscovery[] members, ClusterChangedEvent clusterChange if (clusterStateBytes == null) { clusterStateBytes = Builder.toBytes(clusterState); } - newNodeSpecificClusterState = ClusterState.Builder.fromBytes(clusterStateBytes, discovery.localNode()); + newNodeSpecificClusterState = ClusterState.Builder.fromBytes(clusterStateBytes, discovery.localNode(), namedWriteableRegistry); } discovery.lastProcessedClusterState = newNodeSpecificClusterState; } final ClusterState nodeSpecificClusterState = newNodeSpecificClusterState; - nodeSpecificClusterState.status(ClusterState.ClusterStateStatus.RECEIVED); // ignore cluster state messages that do not include "me", not in the game yet... 
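The tasks above migrate from overriding `runOnlyOnMaster()` on a `ClusterStateUpdateTask` to the `LocalClusterUpdateTask` base class, whose `execute` returns either `unchanged()` or a new state wrapped in `newState(...)`. The following is a minimal structural sketch of that pattern; the task name and body are illustrative and the generic return type is inferred from the usage above.

```java
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.LocalClusterUpdateTask;
import org.elasticsearch.cluster.service.ClusterService;

class LocalTaskExample {

    static void submit(ClusterService clusterService) {
        clusterService.submitStateUpdateTask("example-local-task", new LocalClusterUpdateTask() {
            @Override
            public ClusterTasksResult<LocalClusterUpdateTask> execute(ClusterState currentState) {
                if (currentState.nodes().getMasterNode() != null) {
                    // nothing to do for this example; keep the current state as-is
                    return unchanged();
                }
                // hand back a (here unmodified) state derived from the current one
                return newState(ClusterState.builder(currentState).build());
            }

            @Override
            public void onFailure(String source, Exception e) {
                // a real task would log the failure here
            }
        });
    }
}
```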
if (nodeSpecificClusterState.nodes().getLocalNode() != null) { assert nodeSpecificClusterState.nodes().getMasterNode() != null : "received a cluster state without a master"; assert !nodeSpecificClusterState.blocks().hasGlobalBlock(discoverySettings.getNoMasterBlock()) : "received a cluster state with a master block"; - discovery.clusterService.submitStateUpdateTask("local-disco-receive(from master)", new ClusterStateUpdateTask() { - @Override - public boolean runOnlyOnMaster() { - return false; - } - + discovery.clusterService.submitStateUpdateTask("local-disco-receive(from master)", new LocalClusterUpdateTask() { @Override - public ClusterState execute(ClusterState currentState) { + public ClusterTasksResult execute(ClusterState currentState) { if (currentState.supersedes(nodeSpecificClusterState)) { - return currentState; + return unchanged(); } if (currentState.blocks().hasGlobalBlock(discoverySettings.getNoMasterBlock())) { // its a fresh update from the master as we transition from a start of not having a master to having one logger.debug("got first state from fresh master [{}]", nodeSpecificClusterState.nodes().getMasterNodeId()); - return nodeSpecificClusterState; + return newState(nodeSpecificClusterState); } ClusterState.Builder builder = ClusterState.builder(nodeSpecificClusterState); @@ -378,7 +352,7 @@ public ClusterState execute(ClusterState currentState) { builder.metaData(currentState.metaData()); } - return builder.build(); + return newState(builder.build()); } @Override diff --git a/core/src/main/java/org/elasticsearch/discovery/single/SingleNodeDiscovery.java b/core/src/main/java/org/elasticsearch/discovery/single/SingleNodeDiscovery.java new file mode 100644 index 0000000000000..f4735c8bf3a0d --- /dev/null +++ b/core/src/main/java/org/elasticsearch/discovery/single/SingleNodeDiscovery.java @@ -0,0 +1,144 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.discovery.single; + +import org.elasticsearch.cluster.ClusterChangedEvent; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.ClusterStateTaskConfig; +import org.elasticsearch.cluster.ClusterStateTaskExecutor; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.node.DiscoveryNodes; +import org.elasticsearch.cluster.routing.allocation.AllocationService; +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.Priority; +import org.elasticsearch.common.component.AbstractLifecycleComponent; +import org.elasticsearch.common.settings.ClusterSettings; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.discovery.Discovery; +import org.elasticsearch.discovery.DiscoverySettings; +import org.elasticsearch.discovery.DiscoveryStats; +import org.elasticsearch.discovery.zen.PendingClusterStateStats; +import org.elasticsearch.discovery.zen.PendingClusterStatesQueue; + +import java.io.IOException; +import java.util.Collections; +import java.util.List; +import java.util.Objects; + +/** + * A discovery implementation where the only member of the cluster is the local node. + */ +public class SingleNodeDiscovery extends AbstractLifecycleComponent implements Discovery { + + private final ClusterService clusterService; + private final DiscoverySettings discoverySettings; + + public SingleNodeDiscovery(final Settings settings, final ClusterService clusterService) { + super(Objects.requireNonNull(settings)); + this.clusterService = Objects.requireNonNull(clusterService); + final ClusterSettings clusterSettings = + Objects.requireNonNull(clusterService.getClusterSettings()); + this.discoverySettings = new DiscoverySettings(settings, clusterSettings); + } + + @Override + public DiscoveryNode localNode() { + return clusterService.localNode(); + } + + @Override + public String nodeDescription() { + return clusterService.getClusterName().value() + "/" + clusterService.localNode().getId(); + } + + @Override + public void setAllocationService(final AllocationService allocationService) { + + } + + @Override + public void publish(final ClusterChangedEvent event, final AckListener listener) { + + } + + @Override + public DiscoveryStats stats() { + return new DiscoveryStats((PendingClusterStateStats) null); + } + + @Override + public DiscoverySettings getDiscoverySettings() { + return discoverySettings; + } + + @Override + public void startInitialJoin() { + final ClusterStateTaskExecutor executor = + new ClusterStateTaskExecutor() { + + @Override + public ClusterTasksResult execute( + final ClusterState current, + final List tasks) throws Exception { + assert tasks.size() == 1; + final DiscoveryNodes.Builder nodes = + DiscoveryNodes.builder(current.nodes()); + // always set the local node as master, there will not be other nodes + nodes.masterNodeId(localNode().getId()); + final ClusterState next = + ClusterState.builder(current).nodes(nodes).build(); + final ClusterTasksResult.Builder result = + ClusterTasksResult.builder(); + return result.successes(tasks).build(next); + } + + @Override + public boolean runOnlyOnMaster() { + return false; + } + + }; + final ClusterStateTaskConfig config = ClusterStateTaskConfig.build(Priority.URGENT); + clusterService.submitStateUpdateTasks( + "single-node-start-initial-join", + Collections.singletonMap(localNode(), (s, e) -> {}), config, executor); + } + + @Override + public int getMinimumMasterNodes() { + return 1; + } + + 
@Override + protected void doStart() { + + } + + @Override + protected void doStop() { + + } + + @Override + protected void doClose() throws IOException { + + } + +} diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/DiscoveryNodesProvider.java b/core/src/main/java/org/elasticsearch/discovery/zen/DiscoveryNodesProvider.java deleted file mode 100644 index b9ce79013696d..0000000000000 --- a/core/src/main/java/org/elasticsearch/discovery/zen/DiscoveryNodesProvider.java +++ /dev/null @@ -1,31 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.discovery.zen; - -import org.elasticsearch.cluster.node.DiscoveryNodes; - -/** - * - */ -public interface DiscoveryNodesProvider { - - DiscoveryNodes nodes(); - -} diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/ElectMasterService.java b/core/src/main/java/org/elasticsearch/discovery/zen/ElectMasterService.java new file mode 100644 index 0000000000000..248b0e35e3dd2 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/discovery/zen/ElectMasterService.java @@ -0,0 +1,230 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.discovery.zen; + +import com.carrotsearch.hppc.ObjectContainer; +import org.apache.lucene.util.CollectionUtil; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.common.component.AbstractComponent; +import org.elasticsearch.common.settings.Setting; +import org.elasticsearch.common.settings.Setting.Property; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.CollectionUtils; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.Iterator; +import java.util.List; +import java.util.Objects; +import java.util.stream.Collectors; + +/** + * + */ +public class ElectMasterService extends AbstractComponent { + + public static final Setting DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING = + Setting.intSetting("discovery.zen.minimum_master_nodes", -1, Property.Dynamic, Property.NodeScope); + + private volatile int minimumMasterNodes; + + /** + * a class to encapsulate all the information about a candidate in a master election + * that is needed to decided which of the candidates should win + */ + public static class MasterCandidate { + + public static final long UNRECOVERED_CLUSTER_VERSION = -1; + + final DiscoveryNode node; + + final long clusterStateVersion; + + public MasterCandidate(DiscoveryNode node, long clusterStateVersion) { + Objects.requireNonNull(node); + assert clusterStateVersion >= -1 : "got: " + clusterStateVersion; + assert node.isMasterNode(); + this.node = node; + this.clusterStateVersion = clusterStateVersion; + } + + public DiscoveryNode getNode() { + return node; + } + + public long getClusterStateVersion() { + return clusterStateVersion; + } + + @Override + public String toString() { + return "Candidate{" + + "node=" + node + + ", clusterStateVersion=" + clusterStateVersion + + '}'; + } + + /** + * compares two candidates to indicate which the a better master. + * A higher cluster state version is better + * + * @return -1 if c1 is a batter candidate, 1 if c2. + */ + public static int compare(MasterCandidate c1, MasterCandidate c2) { + // we explicitly swap c1 and c2 here. the code expects "better" is lower in a sorted + // list, so if c2 has a higher cluster state version, it needs to come first. + int ret = Long.compare(c2.clusterStateVersion, c1.clusterStateVersion); + if (ret == 0) { + ret = compareNodes(c1.getNode(), c2.getNode()); + } + return ret; + } + } + + public ElectMasterService(Settings settings) { + super(settings); + this.minimumMasterNodes = DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.get(settings); + logger.debug("using minimum_master_nodes [{}]", minimumMasterNodes); + } + + public void minimumMasterNodes(int minimumMasterNodes) { + this.minimumMasterNodes = minimumMasterNodes; + } + + public int minimumMasterNodes() { + return minimumMasterNodes; + } + + public int countMasterNodes(Iterable nodes) { + int count = 0; + for (DiscoveryNode node : nodes) { + if (node.isMasterNode()) { + count++; + } + } + return count; + } + + public boolean hasEnoughCandidates(Collection candidates) { + if (candidates.isEmpty()) { + return false; + } + if (minimumMasterNodes < 1) { + return true; + } + assert candidates.stream().map(MasterCandidate::getNode).collect(Collectors.toSet()).size() == candidates.size() : + "duplicates ahead: " + candidates; + return candidates.size() >= minimumMasterNodes; + } + + /** + * Elects a new master out of the possible nodes, returning it. 
Returns null + * if no master has been elected. + */ + public MasterCandidate electMaster(Collection candidates) { + assert hasEnoughCandidates(candidates); + List sortedCandidates = new ArrayList<>(candidates); + sortedCandidates.sort(MasterCandidate::compare); + return sortedCandidates.get(0); + } + + /** selects the best active master to join, where multiple are discovered */ + public DiscoveryNode tieBreakActiveMasters(Collection activeMasters) { + return activeMasters.stream().min(ElectMasterService::compareNodes).get(); + } + + public boolean hasEnoughMasterNodes(Iterable nodes) { + final int count = countMasterNodes(nodes); + return count > 0 && (minimumMasterNodes < 0 || count >= minimumMasterNodes); + } + + public boolean hasTooManyMasterNodes(Iterable nodes) { + final int count = countMasterNodes(nodes); + return count > 1 && minimumMasterNodes <= count / 2; + } + + public void logMinimumMasterNodesWarningIfNecessary(ClusterState oldState, ClusterState newState) { + // check if min_master_nodes setting is too low and log warning + if (hasTooManyMasterNodes(oldState.nodes()) == false && hasTooManyMasterNodes(newState.nodes())) { + logger.warn("value for setting \"{}\" is too low. This can result in data loss! Please set it to at least a quorum of master-" + + "eligible nodes (current value: [{}], total number of master-eligible nodes used for publishing in this round: [{}])", + ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey(), minimumMasterNodes(), + newState.getNodes().getMasterNodes().size()); + } + } + + /** + * Returns the given nodes sorted by likelihood of being elected as master, most likely first. + * Non-master nodes are not removed but are rather put in the end + */ + static List sortByMasterLikelihood(Iterable nodes) { + ArrayList sortedNodes = CollectionUtils.iterableAsArrayList(nodes); + CollectionUtil.introSort(sortedNodes, ElectMasterService::compareNodes); + return sortedNodes; + } + + /** + * Returns a list of the next possible masters. 
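The `hasTooManyMasterNodes` check above is what drives the data-loss warning in `logMinimumMasterNodesWarningIfNecessary`: it trips when `minimum_master_nodes` is no more than half of the master-eligible nodes, i.e. below a quorum. A tiny self-contained sketch of the same arithmetic, using plain ints rather than Elasticsearch types:

```java
public class QuorumCheckExample {

    // Mirrors hasTooManyMasterNodes(...) above: true means the configured
    // minimum_master_nodes no longer guarantees a quorum of master-eligible nodes.
    static boolean minimumMasterNodesTooLow(int masterEligibleNodes, int minimumMasterNodes) {
        return masterEligibleNodes > 1 && minimumMasterNodes <= masterEligibleNodes / 2;
    }

    public static void main(String[] args) {
        System.out.println(minimumMasterNodesTooLow(5, 2)); // true  -> warning: quorum for 5 nodes is 3
        System.out.println(minimumMasterNodesTooLow(5, 3)); // false -> quorum respected
    }
}
```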
+ */ + public DiscoveryNode[] nextPossibleMasters(ObjectContainer nodes, int numberOfPossibleMasters) { + List sortedNodes = sortedMasterNodes(Arrays.asList(nodes.toArray(DiscoveryNode.class))); + if (sortedNodes == null) { + return new DiscoveryNode[0]; + } + List nextPossibleMasters = new ArrayList<>(numberOfPossibleMasters); + int counter = 0; + for (DiscoveryNode nextPossibleMaster : sortedNodes) { + if (++counter >= numberOfPossibleMasters) { + break; + } + nextPossibleMasters.add(nextPossibleMaster); + } + return nextPossibleMasters.toArray(new DiscoveryNode[nextPossibleMasters.size()]); + } + + private List sortedMasterNodes(Iterable nodes) { + List possibleNodes = CollectionUtils.iterableAsArrayList(nodes); + if (possibleNodes.isEmpty()) { + return null; + } + // clean non master nodes + for (Iterator it = possibleNodes.iterator(); it.hasNext(); ) { + DiscoveryNode node = it.next(); + if (!node.isMasterNode()) { + it.remove(); + } + } + CollectionUtil.introSort(possibleNodes, ElectMasterService::compareNodes); + return possibleNodes; + } + + /** master nodes go before other nodes, with a secondary sort by id **/ + private static int compareNodes(DiscoveryNode o1, DiscoveryNode o2) { + if (o1.isMasterNode() && !o2.isMasterNode()) { + return -1; + } + if (!o1.isMasterNode() && o2.isMasterNode()) { + return 1; + } + return o1.getId().compareTo(o2.getId()); + } +} diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/fd/FaultDetection.java b/core/src/main/java/org/elasticsearch/discovery/zen/FaultDetection.java similarity index 94% rename from core/src/main/java/org/elasticsearch/discovery/zen/fd/FaultDetection.java rename to core/src/main/java/org/elasticsearch/discovery/zen/FaultDetection.java index 1cfd46634a52f..715e8be03efb2 100644 --- a/core/src/main/java/org/elasticsearch/discovery/zen/fd/FaultDetection.java +++ b/core/src/main/java/org/elasticsearch/discovery/zen/FaultDetection.java @@ -16,7 +16,10 @@ * specific language governing permissions and limitations * under the License. */ -package org.elasticsearch.discovery.zen.fd; + +package org.elasticsearch.discovery.zen; + +import java.io.Closeable; import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.cluster.node.DiscoveryNode; @@ -32,10 +35,10 @@ import static org.elasticsearch.common.unit.TimeValue.timeValueSeconds; /** - * A base class for {@link org.elasticsearch.discovery.zen.fd.MasterFaultDetection} & {@link org.elasticsearch.discovery.zen.fd.NodesFaultDetection}, + * A base class for {@link MasterFaultDetection} & {@link NodesFaultDetection}, * making sure both use the same setting. 
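The hunk that follows makes this base class implement Closeable, with close() removing the transport connection listener, so both fault detection implementations can be released like any other resource (for example via try-with-resources or IOUtils.close). A rough stand-alone sketch of that register-on-construct / unregister-on-close pattern; the class name is hypothetical:

```java
import java.io.Closeable;

class ListeningComponentSketch implements Closeable {

    private final Runnable unregister;

    ListeningComponentSketch(Runnable register, Runnable unregister) {
        register.run();   // e.g. transportService.addConnectionListener(...)
        this.unregister = unregister;
    }

    @Override
    public void close() {
        unregister.run(); // e.g. transportService.removeConnectionListener(...)
    }

    public static void main(String[] args) {
        try (ListeningComponentSketch component = new ListeningComponentSketch(
                () -> System.out.println("listener registered"),
                () -> System.out.println("listener removed"))) {
            System.out.println("component in use");
        } // close() runs here, analogous to FaultDetection.close()
    }
}
```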
*/ -public abstract class FaultDetection extends AbstractComponent { +public abstract class FaultDetection extends AbstractComponent implements Closeable { public static final Setting CONNECT_ON_NETWORK_DISCONNECT_SETTING = Setting.boolSetting("discovery.zen.fd.connect_on_network_disconnect", false, Property.NodeScope); @@ -79,6 +82,7 @@ public FaultDetection(Settings settings, ThreadPool threadPool, TransportService } } + @Override public void close() { transportService.removeConnectionListener(connectionListener); } diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/fd/MasterFaultDetection.java b/core/src/main/java/org/elasticsearch/discovery/zen/MasterFaultDetection.java similarity index 96% rename from core/src/main/java/org/elasticsearch/discovery/zen/fd/MasterFaultDetection.java rename to core/src/main/java/org/elasticsearch/discovery/zen/MasterFaultDetection.java index 6dc89998046c3..81d4c19d33ee2 100644 --- a/core/src/main/java/org/elasticsearch/discovery/zen/fd/MasterFaultDetection.java +++ b/core/src/main/java/org/elasticsearch/discovery/zen/MasterFaultDetection.java @@ -17,7 +17,7 @@ * under the License. */ -package org.elasticsearch.discovery.zen.fd; +package org.elasticsearch.discovery.zen; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; @@ -111,28 +111,10 @@ public void restart(DiscoveryNode masterNode, String reason) { } } - public void start(final DiscoveryNode masterNode, String reason) { - synchronized (masterNodeMutex) { - if (logger.isDebugEnabled()) { - logger.debug("[master] starting fault detection against master [{}], reason [{}]", masterNode, reason); - } - innerStart(masterNode); - } - } - private void innerStart(final DiscoveryNode masterNode) { this.masterNode = masterNode; this.retryCount = 0; this.notifiedMasterFailure.set(false); - - // try and connect to make sure we are connected - try { - transportService.connectToNode(masterNode); - } catch (final Exception e) { - // notify master failure (which stops also) and bail.. - notifyMasterFailure(masterNode, e, "failed to perform initial connect "); - return; - } if (masterPinger != null) { masterPinger.stop(); } @@ -168,7 +150,6 @@ public void close() { super.close(); stop("closing"); this.listeners.clear(); - transportService.removeHandler(MASTER_PING_ACTION_NAME); } @Override diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/MembershipAction.java b/core/src/main/java/org/elasticsearch/discovery/zen/MembershipAction.java new file mode 100644 index 0000000000000..d83d870210410 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/discovery/zen/MembershipAction.java @@ -0,0 +1,267 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.discovery.zen; + +import org.elasticsearch.Version; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.cluster.metadata.MetaData; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.node.DiscoveryNodes; +import org.elasticsearch.common.component.AbstractComponent; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.EmptyTransportResponseHandler; +import org.elasticsearch.transport.TransportChannel; +import org.elasticsearch.transport.TransportRequest; +import org.elasticsearch.transport.TransportRequestHandler; +import org.elasticsearch.transport.TransportResponse; +import org.elasticsearch.transport.TransportService; + +import java.io.IOException; +import java.util.concurrent.TimeUnit; + +/** + * + */ +public class MembershipAction extends AbstractComponent { + + public static final String DISCOVERY_JOIN_ACTION_NAME = "internal:discovery/zen/join"; + public static final String DISCOVERY_JOIN_VALIDATE_ACTION_NAME = "internal:discovery/zen/join/validate"; + public static final String DISCOVERY_LEAVE_ACTION_NAME = "internal:discovery/zen/leave"; + + public interface JoinCallback { + void onSuccess(); + + void onFailure(Exception e); + } + + public interface MembershipListener { + void onJoin(DiscoveryNode node, JoinCallback callback); + + void onLeave(DiscoveryNode node); + } + + private final TransportService transportService; + + private final MembershipListener listener; + + public MembershipAction(Settings settings, TransportService transportService, MembershipListener listener) { + super(settings); + this.transportService = transportService; + this.listener = listener; + + + transportService.registerRequestHandler(DISCOVERY_JOIN_ACTION_NAME, JoinRequest::new, + ThreadPool.Names.GENERIC, new JoinRequestRequestHandler()); + transportService.registerRequestHandler(DISCOVERY_JOIN_VALIDATE_ACTION_NAME, + () -> new ValidateJoinRequest(), ThreadPool.Names.GENERIC, + new ValidateJoinRequestRequestHandler()); + transportService.registerRequestHandler(DISCOVERY_LEAVE_ACTION_NAME, LeaveRequest::new, + ThreadPool.Names.GENERIC, new LeaveRequestRequestHandler()); + } + + public void sendLeaveRequest(DiscoveryNode masterNode, DiscoveryNode node) { + transportService.sendRequest(node, DISCOVERY_LEAVE_ACTION_NAME, new LeaveRequest(masterNode), + EmptyTransportResponseHandler.INSTANCE_SAME); + } + + public void sendLeaveRequestBlocking(DiscoveryNode masterNode, DiscoveryNode node, TimeValue timeout) { + transportService.submitRequest(masterNode, DISCOVERY_LEAVE_ACTION_NAME, new LeaveRequest(node), + EmptyTransportResponseHandler.INSTANCE_SAME).txGet(timeout.millis(), TimeUnit.MILLISECONDS); + } + + public void sendJoinRequestBlocking(DiscoveryNode masterNode, DiscoveryNode node, TimeValue timeout) { + transportService.submitRequest(masterNode, DISCOVERY_JOIN_ACTION_NAME, new JoinRequest(node), + EmptyTransportResponseHandler.INSTANCE_SAME).txGet(timeout.millis(), TimeUnit.MILLISECONDS); + } + + /** + * Validates the join request, throwing a failure if it failed. 
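For orientation, this is roughly how a MembershipAction.MembershipListener implementation could look. Only the interfaces and DiscoveryNode come from this change; the class name and the comments about what onJoin typically does are assumptions:

```java
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.discovery.zen.MembershipAction;

class LoggingMembershipListener implements MembershipAction.MembershipListener {

    @Override
    public void onJoin(DiscoveryNode node, MembershipAction.JoinCallback callback) {
        // a real listener would typically enqueue a cluster state update that adds the node
        // and invoke the callback only once that update has been applied (or has failed);
        // the transport handler above turns onSuccess/onFailure into the join response
        try {
            // ... submit the cluster state update here ...
            callback.onSuccess();
        } catch (Exception e) {
            callback.onFailure(e);
        }
    }

    @Override
    public void onLeave(DiscoveryNode node) {
        // remove the node from the cluster state
    }
}
```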
+ */ + public void sendValidateJoinRequestBlocking(DiscoveryNode node, ClusterState state, TimeValue timeout) { + transportService.submitRequest(node, DISCOVERY_JOIN_VALIDATE_ACTION_NAME, new ValidateJoinRequest(state), + EmptyTransportResponseHandler.INSTANCE_SAME).txGet(timeout.millis(), TimeUnit.MILLISECONDS); + } + + public static class JoinRequest extends TransportRequest { + + DiscoveryNode node; + + public JoinRequest() { + } + + private JoinRequest(DiscoveryNode node) { + this.node = node; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + node = new DiscoveryNode(in); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + node.writeTo(out); + } + } + + + private class JoinRequestRequestHandler implements TransportRequestHandler { + + @Override + public void messageReceived(final JoinRequest request, final TransportChannel channel) throws Exception { + listener.onJoin(request.node, new JoinCallback() { + @Override + public void onSuccess() { + try { + channel.sendResponse(TransportResponse.Empty.INSTANCE); + } catch (Exception e) { + onFailure(e); + } + } + + @Override + public void onFailure(Exception e) { + try { + channel.sendResponse(e); + } catch (Exception inner) { + inner.addSuppressed(e); + logger.warn("failed to send back failure on join request", inner); + } + } + }); + } + } + + static class ValidateJoinRequest extends TransportRequest { + private ClusterState state; + + ValidateJoinRequest() {} + + ValidateJoinRequest(ClusterState state) { + this.state = state; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + this.state = ClusterState.readFrom(in, null); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + this.state.writeTo(out); + } + } + + static class ValidateJoinRequestRequestHandler implements TransportRequestHandler { + + @Override + public void messageReceived(ValidateJoinRequest request, TransportChannel channel) throws Exception { + ensureNodesCompatibility(Version.CURRENT, request.state.getNodes()); + ensureIndexCompatibility(Version.CURRENT, request.state.getMetaData()); + // for now, the mere fact that we can serialize the cluster state acts as validation.... + channel.sendResponse(TransportResponse.Empty.INSTANCE); + } + } + + /** + * Ensures that all indices are compatible with the given node version. This will ensure that all indices in the given metadata + * will not be created with a newer version of elasticsearch as well as that all indices are newer or equal to the minimum index + * compatibility version. + * @see Version#minimumIndexCompatibilityVersion() + * @throws IllegalStateException if any index is incompatible with the given version + */ + static void ensureIndexCompatibility(final Version nodeVersion, MetaData metaData) { + Version supportedIndexVersion = nodeVersion.minimumIndexCompatibilityVersion(); + // we ensure that all indices in the cluster we join are compatible with us no matter if they are + // closed or not we can't read mappings of these indices so we need to reject the join... 
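The loop that follows enforces two bounds on every index in the incoming metadata: it must not have been created on a version newer than the joining node, and it must not be older than the node's minimum index compatibility version. A stand-alone illustration with plain integers standing in for org.elasticsearch.Version (class and method names are mine):

```java
class IndexCompatibilitySketch {

    static void ensureCompatible(int indexCreatedVersion, int nodeVersion, int minimumIndexCompatibilityVersion) {
        if (indexCreatedVersion > nodeVersion) {
            throw new IllegalStateException("index was created on a newer version than this node");
        }
        if (indexCreatedVersion < minimumIndexCompatibilityVersion) {
            throw new IllegalStateException("index is older than the minimum compatible index version");
        }
    }

    public static void main(String[] args) {
        ensureCompatible(5_00_00, 6_00_00, 5_00_00);     // accepted: e.g. a 5.x index on a 6.x node
        try {
            ensureCompatible(2_00_00, 6_00_00, 5_00_00); // rejected: older than the minimum compatible version
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```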
+ for (IndexMetaData idxMetaData : metaData) { + if (idxMetaData.getCreationVersion().after(nodeVersion)) { + throw new IllegalStateException("index " + idxMetaData.getIndex() + " version not supported: " + + idxMetaData.getCreationVersion() + " the node version is: " + nodeVersion); + } + if (idxMetaData.getCreationVersion().before(supportedIndexVersion)) { + throw new IllegalStateException("index " + idxMetaData.getIndex() + " version not supported: " + + idxMetaData.getCreationVersion() + " minimum compatible index version is: " + supportedIndexVersion); + } + } + } + + /** ensures that the joining node has a version that's compatible with all current nodes*/ + static void ensureNodesCompatibility(final Version joiningNodeVersion, DiscoveryNodes currentNodes) { + final Version minNodeVersion = currentNodes.getMinNodeVersion(); + final Version maxNodeVersion = currentNodes.getMaxNodeVersion(); + ensureNodesCompatibility(joiningNodeVersion, minNodeVersion, maxNodeVersion); + } + + /** ensures that the joining node has a version that's compatible with a given version range */ + static void ensureNodesCompatibility(Version joiningNodeVersion, Version minClusterNodeVersion, Version maxClusterNodeVersion) { + assert minClusterNodeVersion.onOrBefore(maxClusterNodeVersion) : minClusterNodeVersion + " > " + maxClusterNodeVersion; + if (joiningNodeVersion.isCompatible(maxClusterNodeVersion) == false) { + throw new IllegalStateException("node version [" + joiningNodeVersion + "] is not supported. " + + "The cluster contains nodes with version [" + maxClusterNodeVersion + "], which is incompatible."); + } + if (joiningNodeVersion.isCompatible(minClusterNodeVersion) == false) { + throw new IllegalStateException("node version [" + joiningNodeVersion + "] is not supported." 
+ + "The cluster contains nodes with version [" + minClusterNodeVersion + "], which is incompatible."); + } + } + + public static class LeaveRequest extends TransportRequest { + + private DiscoveryNode node; + + public LeaveRequest() { + } + + private LeaveRequest(DiscoveryNode node) { + this.node = node; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + node = new DiscoveryNode(in); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + node.writeTo(out); + } + } + + private class LeaveRequestRequestHandler implements TransportRequestHandler { + + @Override + public void messageReceived(LeaveRequest request, TransportChannel channel) throws Exception { + listener.onLeave(request.node); + channel.sendResponse(TransportResponse.Empty.INSTANCE); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/NodeJoinController.java b/core/src/main/java/org/elasticsearch/discovery/zen/NodeJoinController.java index 6f0b8966d0916..7d5132b0a1fff 100644 --- a/core/src/main/java/org/elasticsearch/discovery/zen/NodeJoinController.java +++ b/core/src/main/java/org/elasticsearch/discovery/zen/NodeJoinController.java @@ -29,20 +29,15 @@ import org.elasticsearch.cluster.ClusterStateTaskExecutor; import org.elasticsearch.cluster.ClusterStateTaskListener; import org.elasticsearch.cluster.NotMasterException; -import org.elasticsearch.cluster.block.ClusterBlocks; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.routing.allocation.AllocationService; -import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Priority; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.LocalTransportAddress; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.discovery.DiscoverySettings; -import org.elasticsearch.discovery.zen.elect.ElectMasterService; -import org.elasticsearch.discovery.zen.membership.MembershipAction; import java.util.ArrayList; import java.util.Collections; @@ -62,7 +57,6 @@ public class NodeJoinController extends AbstractComponent { private final ClusterService clusterService; private final AllocationService allocationService; private final ElectMasterService electMaster; - private final DiscoverySettings discoverySettings; private final JoinTaskExecutor joinTaskExecutor = new JoinTaskExecutor(); // this is set while trying to become a master @@ -71,12 +65,11 @@ public class NodeJoinController extends AbstractComponent { public NodeJoinController(ClusterService clusterService, AllocationService allocationService, ElectMasterService electMaster, - DiscoverySettings discoverySettings, Settings settings) { + Settings settings) { super(settings); this.clusterService = clusterService; this.allocationService = allocationService; this.electMaster = electMaster; - this.discoverySettings = discoverySettings; } /** @@ -410,8 +403,8 @@ public String toString() { class JoinTaskExecutor implements ClusterStateTaskExecutor { @Override - public BatchResult execute(ClusterState currentState, List joiningNodes) throws Exception { - final BatchResult.Builder results = BatchResult.builder(); + public ClusterTasksResult execute(ClusterState currentState, List joiningNodes) throws Exception { + final 
ClusterTasksResult.Builder results = ClusterTasksResult.builder(); final DiscoveryNodes currentNodes = currentState.nodes(); boolean nodesChanged = false; @@ -437,6 +430,9 @@ public BatchResult execute(ClusterState currentState, List execute(ClusterState currentState, List joiningNodes) { - assert currentState.nodes().getMasterNodeId() == null : currentState.prettyPrint(); - DiscoveryNodes.Builder nodesBuilder = DiscoveryNodes.builder(currentState.nodes()); + assert currentState.nodes().getMasterNodeId() == null : currentState; + DiscoveryNodes currentNodes = currentState.nodes(); + DiscoveryNodes.Builder nodesBuilder = DiscoveryNodes.builder(currentNodes); nodesBuilder.masterNodeId(currentState.nodes().getLocalNodeId()); - ClusterBlocks clusterBlocks = ClusterBlocks.builder().blocks(currentState.blocks()) - .removeGlobalBlock(discoverySettings.getNoMasterBlock()).build(); for (final DiscoveryNode joiningNode : joiningNodes) { - final DiscoveryNode existingNode = nodesBuilder.get(joiningNode.getId()); - if (existingNode != null && existingNode.equals(joiningNode) == false) { - logger.debug("removing existing node [{}], which conflicts with incoming join from [{}]", existingNode, joiningNode); - nodesBuilder.remove(existingNode.getId()); + if (joiningNode == FINISH_ELECTION_TASK || joiningNode == BECOME_MASTER_TASK) { + continue; + } + final DiscoveryNode nodeWithSameId = nodesBuilder.get(joiningNode.getId()); + if (nodeWithSameId != null && nodeWithSameId.equals(joiningNode) == false) { + logger.debug("removing existing node [{}], which conflicts with incoming join from [{}]", nodeWithSameId, joiningNode); + nodesBuilder.remove(nodeWithSameId.getId()); + } + final DiscoveryNode nodeWithSameAddress = currentNodes.findByAddress(joiningNode.getAddress()); + if (nodeWithSameAddress != null && nodeWithSameAddress.equals(joiningNode) == false) { + logger.debug("removing existing node [{}], which conflicts with incoming join from [{}]", nodeWithSameAddress, + joiningNode); + nodesBuilder.remove(nodeWithSameAddress.getId()); } } + // now trim any left over dead nodes - either left there when the previous master stepped down // or removed by us above - ClusterState tmpState = ClusterState.builder(currentState).nodes(nodesBuilder).blocks(clusterBlocks).build(); - RoutingAllocation.Result result = allocationService.deassociateDeadNodes(tmpState, false, - "removed dead nodes on election"); - return ClusterState.builder(tmpState).routingResult(result); + ClusterState tmpState = ClusterState.builder(currentState).nodes(nodesBuilder).build(); + return ClusterState.builder(allocationService.deassociateDeadNodes(tmpState, false, + "removed dead nodes on election")); } @Override diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/fd/NodesFaultDetection.java b/core/src/main/java/org/elasticsearch/discovery/zen/NodesFaultDetection.java similarity index 97% rename from core/src/main/java/org/elasticsearch/discovery/zen/fd/NodesFaultDetection.java rename to core/src/main/java/org/elasticsearch/discovery/zen/NodesFaultDetection.java index 40eb36cec1f15..5cd02a52504f5 100644 --- a/core/src/main/java/org/elasticsearch/discovery/zen/fd/NodesFaultDetection.java +++ b/core/src/main/java/org/elasticsearch/discovery/zen/NodesFaultDetection.java @@ -17,7 +17,7 @@ * under the License. 
*/ -package org.elasticsearch.discovery.zen.fd; +package org.elasticsearch.discovery.zen; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; @@ -41,6 +41,8 @@ import org.elasticsearch.transport.TransportService; import java.io.IOException; +import java.util.Collections; +import java.util.Set; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.CopyOnWriteArrayList; @@ -91,6 +93,14 @@ public void removeListener(Listener listener) { listeners.remove(listener); } + /** + * Gets the current set of nodes involved in node fault detection. + * NB: For testing purposes. + */ + public Set getNodes() { + return Collections.unmodifiableSet(nodesFD.keySet()); + } + /** * make sure that nodes in clusterState are pinged. Any pinging to nodes which are not * part of the cluster will be stopped @@ -129,7 +139,6 @@ public NodesFaultDetection stop() { public void close() { super.close(); stop(); - transportService.removeHandler(PING_ACTION_NAME); } @Override diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/publish/PendingClusterStateStats.java b/core/src/main/java/org/elasticsearch/discovery/zen/PendingClusterStateStats.java similarity index 98% rename from core/src/main/java/org/elasticsearch/discovery/zen/publish/PendingClusterStateStats.java rename to core/src/main/java/org/elasticsearch/discovery/zen/PendingClusterStateStats.java index e060f68833801..8facf2f282cde 100644 --- a/core/src/main/java/org/elasticsearch/discovery/zen/publish/PendingClusterStateStats.java +++ b/core/src/main/java/org/elasticsearch/discovery/zen/PendingClusterStateStats.java @@ -17,7 +17,7 @@ * under the License. */ -package org.elasticsearch.discovery.zen.publish; +package org.elasticsearch.discovery.zen; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/publish/PendingClusterStatesQueue.java b/core/src/main/java/org/elasticsearch/discovery/zen/PendingClusterStatesQueue.java similarity index 88% rename from core/src/main/java/org/elasticsearch/discovery/zen/publish/PendingClusterStatesQueue.java rename to core/src/main/java/org/elasticsearch/discovery/zen/PendingClusterStatesQueue.java index 01fb96b71331c..018258066de8d 100644 --- a/core/src/main/java/org/elasticsearch/discovery/zen/publish/PendingClusterStatesQueue.java +++ b/core/src/main/java/org/elasticsearch/discovery/zen/PendingClusterStatesQueue.java @@ -16,7 +16,8 @@ * specific language governing permissions and limitations * under the License. */ -package org.elasticsearch.discovery.zen.publish; + +package org.elasticsearch.discovery.zen; import org.apache.logging.log4j.Logger; import org.elasticsearch.ElasticsearchException; @@ -34,10 +35,12 @@ *

    * The queue is bound by {@link #maxQueueSize}. When the queue is at capacity and a new cluster state is inserted * the oldest cluster state will be dropped. This is safe because: - * 1) Under normal operations, master will publish & commit a cluster state before processing another change (i.e., the queue length is 1) + * 1) Under normal operations, master will publish & commit a cluster state before processing + * another change (i.e., the queue length is 1) * 2) If the master fails to commit a change, it will step down, causing a master election, which will flush the queue. * 3) In general it's safe to process the incoming cluster state as a replacement to the cluster state that's dropped. - * a) If the dropped cluster is from the same master as the incoming one is, it is likely to be superseded by the incoming state (or another state in the queue). + * a) If the dropped cluster is from the same master as the incoming one is, it is likely to be superseded by the + * incoming state (or another state in the queue). * This is only not true in very extreme cases of out of order delivery. * b) If the dropping cluster state is not from the same master, it means that: * i) we are no longer following the master of the dropped cluster state but follow the incoming one @@ -70,7 +73,8 @@ public synchronized void addPending(ClusterState state) { ClusterStateContext context = pendingStates.remove(0); logger.warn("dropping pending state [{}]. more than [{}] pending states.", context, maxQueueSize); if (context.committed()) { - context.listener.onNewClusterStateFailed(new ElasticsearchException("too many pending states ([{}] pending)", maxQueueSize)); + context.listener.onNewClusterStateFailed(new ElasticsearchException("too many pending states ([{}] pending)", + maxQueueSize)); } } } @@ -82,11 +86,13 @@ public synchronized void addPending(ClusterState state) { public synchronized ClusterState markAsCommitted(String stateUUID, StateProcessedListener listener) { final ClusterStateContext context = findState(stateUUID); if (context == null) { - listener.onNewClusterStateFailed(new IllegalStateException("can't resolve cluster state with uuid [" + stateUUID + "] to commit")); + listener.onNewClusterStateFailed(new IllegalStateException("can't resolve cluster state with uuid" + + " [" + stateUUID + "] to commit")); return null; } if (context.committed()) { - listener.onNewClusterStateFailed(new IllegalStateException("cluster state with uuid [" + stateUUID + "] is already committed")); + listener.onNewClusterStateFailed(new IllegalStateException("cluster state with uuid" + + " [" + stateUUID + "] is already committed")); return null; } context.markAsCommitted(listener); @@ -94,13 +100,14 @@ public synchronized ClusterState markAsCommitted(String stateUUID, StateProcesse } /** - * mark that the processing of the given state has failed. All committed states that are {@link ClusterState#supersedes(ClusterState)}-ed - * by this failed state, will be failed as well + * mark that the processing of the given state has failed. 
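A plain-Java analogue (not the Elasticsearch class) of the bounded pending queue described above: inserting a new state while at capacity drops the oldest pending state, which is safe for the reasons listed in the javadoc:

```java
import java.util.ArrayDeque;
import java.util.Deque;

class BoundedPendingQueueSketch {

    private final Deque<String> pending = new ArrayDeque<>();
    private final int maxQueueSize;

    BoundedPendingQueueSketch(int maxQueueSize) {
        this.maxQueueSize = maxQueueSize;
    }

    void addPending(String stateUuid) {
        if (pending.size() >= maxQueueSize) {
            String dropped = pending.pollFirst(); // the oldest pending state is dropped
            System.out.println("dropping pending state " + dropped);
        }
        pending.addLast(stateUuid);
    }

    public static void main(String[] args) {
        BoundedPendingQueueSketch queue = new BoundedPendingQueueSketch(2);
        queue.addPending("state-1");
        queue.addPending("state-2");
        queue.addPending("state-3");       // drops state-1
        System.out.println(queue.pending); // [state-2, state-3]
    }
}
```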
All committed states that are + * {@link ClusterState#supersedes(ClusterState)}-ed by this failed state, will be failed as well */ public synchronized void markAsFailed(ClusterState state, Exception reason) { final ClusterStateContext failedContext = findState(state.stateUUID()); if (failedContext == null) { - throw new IllegalArgumentException("can't resolve failed cluster state with uuid [" + state.stateUUID() + "], version [" + state.version() + "]"); + throw new IllegalArgumentException("can't resolve failed cluster state with uuid [" + state.stateUUID() + + "], version [" + state.version() + "]"); } if (failedContext.committed() == false) { throw new IllegalArgumentException("failed cluster state is not committed " + state); @@ -128,15 +135,16 @@ public synchronized void markAsFailed(ClusterState state, Exception reason) { } /** - * indicates that a cluster state was successfully processed. Any committed state that is {@link ClusterState#supersedes(ClusterState)}-ed - * by the processed state will be marked as processed as well. + * indicates that a cluster state was successfully processed. Any committed state that is + * {@link ClusterState#supersedes(ClusterState)}-ed by the processed state will be marked as processed as well. *

    - * NOTE: successfully processing a state indicates we are following the master it came from. Any committed state from another master will - * be failed by this method + * NOTE: successfully processing a state indicates we are following the master it came from. Any committed state + * from another master will be failed by this method */ public synchronized void markAsProcessed(ClusterState state) { if (findState(state.stateUUID()) == null) { - throw new IllegalStateException("can't resolve processed cluster state with uuid [" + state.stateUUID() + "], version [" + state.version() + "]"); + throw new IllegalStateException("can't resolve processed cluster state with uuid [" + state.stateUUID() + + "], version [" + state.version() + "]"); } final DiscoveryNode currentMaster = state.nodes().getMasterNode(); assert currentMaster != null : "processed cluster state mast have a master. " + state; @@ -152,17 +160,16 @@ public synchronized void markAsProcessed(ClusterState state) { contextsToRemove.add(pendingContext); if (pendingContext.committed()) { // this is a committed state , warn - logger.warn("received a cluster state (uuid[{}]/v[{}]) from a different master than the current one, rejecting (received {}, current {})", - pendingState.stateUUID(), pendingState.version(), - pendingMasterNode, currentMaster); + logger.warn("received a cluster state (uuid[{}]/v[{}]) from a different master than the current one," + + " rejecting (received {}, current {})", + pendingState.stateUUID(), pendingState.version(), pendingMasterNode, currentMaster); pendingContext.listener.onNewClusterStateFailed( - new IllegalStateException("cluster state from a different master than the current one, rejecting (received " + pendingMasterNode + ", current " + currentMaster + ")") - ); + new IllegalStateException("cluster state from a different master than the current one," + + " rejecting (received " + pendingMasterNode + ", current " + currentMaster + ")")); } else { - logger.trace("removing non-committed state with uuid[{}]/v[{}] from [{}] - a state from [{}] was successfully processed", - pendingState.stateUUID(), pendingState.version(), pendingMasterNode, - currentMaster - ); + logger.trace("removing non-committed state with uuid[{}]/v[{}] from [{}] - a state from" + + " [{}] was successfully processed", + pendingState.stateUUID(), pendingState.version(), pendingMasterNode, currentMaster); } } else if (pendingState.stateUUID().equals(state.stateUUID())) { assert pendingContext.committed() : "processed cluster state is not committed " + state; diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/PingContextProvider.java b/core/src/main/java/org/elasticsearch/discovery/zen/PingContextProvider.java new file mode 100644 index 0000000000000..7567b69cfe459 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/discovery/zen/PingContextProvider.java @@ -0,0 +1,28 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.discovery.zen; + +import org.elasticsearch.cluster.ClusterState; + +public interface PingContextProvider { + + /** return the current cluster state of the node */ + ClusterState clusterState(); +} diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/publish/PublishClusterStateAction.java b/core/src/main/java/org/elasticsearch/discovery/zen/PublishClusterStateAction.java similarity index 85% rename from core/src/main/java/org/elasticsearch/discovery/zen/publish/PublishClusterStateAction.java rename to core/src/main/java/org/elasticsearch/discovery/zen/PublishClusterStateAction.java index 06c25ebf81a66..009abe46949c2 100644 --- a/core/src/main/java/org/elasticsearch/discovery/zen/publish/PublishClusterStateAction.java +++ b/core/src/main/java/org/elasticsearch/discovery/zen/PublishClusterStateAction.java @@ -17,9 +17,10 @@ * under the License. */ -package org.elasticsearch.discovery.zen.publish; +package org.elasticsearch.discovery.zen; import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.lucene.util.IOUtils; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.Version; import org.elasticsearch.cluster.ClusterChangedEvent; @@ -34,6 +35,8 @@ import org.elasticsearch.common.compress.Compressor; import org.elasticsearch.common.compress.CompressorFactory; import org.elasticsearch.common.io.stream.BytesStreamOutput; +import org.elasticsearch.common.io.stream.NamedWriteableAwareStreamInput; +import org.elasticsearch.common.io.stream.NamedWriteableRegistry; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; @@ -42,7 +45,6 @@ import org.elasticsearch.discovery.BlockingClusterStatePublishResponseHandler; import org.elasticsearch.discovery.Discovery; import org.elasticsearch.discovery.DiscoverySettings; -import org.elasticsearch.discovery.zen.ZenDiscovery; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.BytesTransportRequest; import org.elasticsearch.transport.EmptyTransportResponseHandler; @@ -83,6 +85,7 @@ public interface NewPendingClusterStateListener { } private final TransportService transportService; + private final NamedWriteableRegistry namedWriteableRegistry; private final Supplier clusterStateSupplier; private final NewPendingClusterStateListener newPendingClusterStatelistener; private final DiscoverySettings discoverySettings; @@ -92,24 +95,23 @@ public interface NewPendingClusterStateListener { public PublishClusterStateAction( Settings settings, TransportService transportService, + NamedWriteableRegistry namedWriteableRegistry, Supplier clusterStateSupplier, NewPendingClusterStateListener listener, DiscoverySettings discoverySettings, ClusterName clusterName) { super(settings); this.transportService = transportService; + this.namedWriteableRegistry = namedWriteableRegistry; this.clusterStateSupplier = clusterStateSupplier; this.newPendingClusterStatelistener = listener; this.discoverySettings = discoverySettings; this.clusterName = 
clusterName; this.pendingStatesQueue = new PendingClusterStatesQueue(logger, settings.getAsInt(SETTINGS_MAX_PENDING_CLUSTER_STATES, 25)); - transportService.registerRequestHandler(SEND_ACTION_NAME, BytesTransportRequest::new, ThreadPool.Names.SAME, new SendClusterStateRequestHandler()); - transportService.registerRequestHandler(COMMIT_ACTION_NAME, CommitClusterStateRequest::new, ThreadPool.Names.SAME, new CommitClusterStateRequestHandler()); - } - - public void close() { - transportService.removeHandler(SEND_ACTION_NAME); - transportService.removeHandler(COMMIT_ACTION_NAME); + transportService.registerRequestHandler(SEND_ACTION_NAME, BytesTransportRequest::new, ThreadPool.Names.SAME, false, false, + new SendClusterStateRequestHandler()); + transportService.registerRequestHandler(COMMIT_ACTION_NAME, CommitClusterStateRequest::new, ThreadPool.Names.SAME, false, false, + new CommitClusterStateRequestHandler()); } public PendingClusterStatesQueue pendingStatesQueue() { @@ -120,10 +122,12 @@ public PendingClusterStatesQueue pendingStatesQueue() { * publishes a cluster change event to other nodes. if at least minMasterNodes acknowledge the change it is committed and will * be processed by the master and the other nodes. *

    - * The method is guaranteed to throw a {@link org.elasticsearch.discovery.Discovery.FailedToCommitClusterStateException} if the change is not committed and should be rejected. + * The method is guaranteed to throw a {@link org.elasticsearch.discovery.Discovery.FailedToCommitClusterStateException} + * if the change is not committed and should be rejected. * Any other exception signals the something wrong happened but the change is committed. */ - public void publish(final ClusterChangedEvent clusterChangedEvent, final int minMasterNodes, final Discovery.AckListener ackListener) throws Discovery.FailedToCommitClusterStateException { + public void publish(final ClusterChangedEvent clusterChangedEvent, final int minMasterNodes, + final Discovery.AckListener ackListener) throws Discovery.FailedToCommitClusterStateException { final DiscoveryNodes nodes; final SendingController sendingController; final Set nodesToPublishTo; @@ -151,8 +155,10 @@ public void publish(final ClusterChangedEvent clusterChangedEvent, final int min buildDiffAndSerializeStates(clusterChangedEvent.state(), clusterChangedEvent.previousState(), nodesToPublishTo, sendFullVersion, serializedStates, serializedDiffs); - final BlockingClusterStatePublishResponseHandler publishResponseHandler = new AckClusterStatePublishResponseHandler(nodesToPublishTo, ackListener); - sendingController = new SendingController(clusterChangedEvent.state(), minMasterNodes, totalMasterNodes, publishResponseHandler); + final BlockingClusterStatePublishResponseHandler publishResponseHandler = + new AckClusterStatePublishResponseHandler(nodesToPublishTo, ackListener); + sendingController = new SendingController(clusterChangedEvent.state(), minMasterNodes, + totalMasterNodes, publishResponseHandler); } catch (Exception e) { throw new Discovery.FailedToCommitClusterStateException("unexpected error while preparing to publish", e); } @@ -203,7 +209,8 @@ private void innerPublish(final ClusterChangedEvent clusterChangedEvent, final S DiscoveryNode[] pendingNodes = publishResponseHandler.pendingNodes(); // everyone may have just responded if (pendingNodes.length > 0) { - logger.warn("timed out waiting for all nodes to process published state [{}] (timeout [{}], pending nodes: {})", clusterState.version(), publishTimeout, pendingNodes); + logger.warn("timed out waiting for all nodes to process published state [{}] (timeout [{}], pending nodes: {})", + clusterState.version(), publishTimeout, pendingNodes); } } } catch (InterruptedException e) { @@ -213,7 +220,8 @@ private void innerPublish(final ClusterChangedEvent clusterChangedEvent, final S } private void buildDiffAndSerializeStates(ClusterState clusterState, ClusterState previousState, Set nodesToPublishTo, - boolean sendFullVersion, Map serializedStates, Map serializedDiffs) { + boolean sendFullVersion, Map serializedStates, + Map serializedDiffs) { Diff diff = null; for (final DiscoveryNode node : nodesToPublishTo) { try { @@ -246,7 +254,8 @@ private void sendFullClusterState(ClusterState clusterState, Map) () -> new ParameterizedMessage("failed to serialize cluster_state before publishing it to node {}", node), e); + (org.apache.logging.log4j.util.Supplier) () -> + new ParameterizedMessage("failed to serialize cluster_state before publishing it to node {}", node), e); sendingController.onNodeSendFailed(node, e); return; } @@ -272,7 +281,8 @@ private void sendClusterStateToNode(final ClusterState clusterState, BytesRefere // -> no need to put a timeout on the options here, because we want the 
response to eventually be received // and not log an error if it arrives after the timeout // -> no need to compress, we already compressed the bytes - TransportRequestOptions options = TransportRequestOptions.builder().withType(TransportRequestOptions.Type.STATE).withCompress(false).build(); + TransportRequestOptions options = TransportRequestOptions.builder() + .withType(TransportRequestOptions.Type.STATE).withCompress(false).build(); transportService.sendRequest(node, SEND_ACTION_NAME, new BytesTransportRequest(bytes, node.getVersion()), options, @@ -281,7 +291,8 @@ private void sendClusterStateToNode(final ClusterState clusterState, BytesRefere @Override public void handleResponse(TransportResponse.Empty response) { if (sendingController.getPublishingTimedOut()) { - logger.debug("node {} responded for cluster state [{}] (took longer than [{}])", node, clusterState.version(), publishTimeout); + logger.debug("node {} responded for cluster state [{}] (took longer than [{}])", node, + clusterState.version(), publishTimeout); } sendingController.onNodeSendAck(node); } @@ -292,21 +303,24 @@ public void handleException(TransportException exp) { logger.debug("resending full cluster state to node {} reason {}", node, exp.getDetailedMessage()); sendFullClusterState(clusterState, serializedStates, node, publishTimeout, sendingController); } else { - logger.debug((org.apache.logging.log4j.util.Supplier) () -> new ParameterizedMessage("failed to send cluster state to {}", node), exp); + logger.debug((org.apache.logging.log4j.util.Supplier) () -> + new ParameterizedMessage("failed to send cluster state to {}", node), exp); sendingController.onNodeSendFailed(node, exp); } } }); } catch (Exception e) { logger.warn( - (org.apache.logging.log4j.util.Supplier) () -> new ParameterizedMessage("error sending cluster state to {}", node), e); + (org.apache.logging.log4j.util.Supplier) () -> + new ParameterizedMessage("error sending cluster state to {}", node), e); sendingController.onNodeSendFailed(node, e); } } private void sendCommitToNode(final DiscoveryNode node, final ClusterState clusterState, final SendingController sendingController) { try { - logger.trace("sending commit for cluster state (uuid: [{}], version [{}]) to [{}]", clusterState.stateUUID(), clusterState.version(), node); + logger.trace("sending commit for cluster state (uuid: [{}], version [{}]) to [{}]", + clusterState.stateUUID(), clusterState.version(), node); TransportRequestOptions options = TransportRequestOptions.builder().withType(TransportRequestOptions.Type.STATE).build(); // no need to put a timeout on the options here, because we want the response to eventually be received // and not log an error if it arrives after the timeout @@ -325,12 +339,16 @@ public void handleResponse(TransportResponse.Empty response) { @Override public void handleException(TransportException exp) { - logger.debug((org.apache.logging.log4j.util.Supplier) () -> new ParameterizedMessage("failed to commit cluster state (uuid [{}], version [{}]) to {}", clusterState.stateUUID(), clusterState.version(), node), exp); + logger.debug((org.apache.logging.log4j.util.Supplier) () -> + new ParameterizedMessage("failed to commit cluster state (uuid [{}], version [{}]) to {}", + clusterState.stateUUID(), clusterState.version(), node), exp); sendingController.getPublishResponseHandler().onFailure(node, exp); } }); } catch (Exception t) { - logger.warn((org.apache.logging.log4j.util.Supplier) () -> new ParameterizedMessage("error sending cluster state commit (uuid [{}], 
version [{}]) to {}", clusterState.stateUUID(), clusterState.version(), node), t); + logger.warn((org.apache.logging.log4j.util.Supplier) () -> + new ParameterizedMessage("error sending cluster state commit (uuid [{}], version [{}]) to {}", + clusterState.stateUUID(), clusterState.version(), node), t); sendingController.getPublishResponseHandler().onFailure(node, t); } } @@ -361,33 +379,37 @@ public static BytesReference serializeDiffClusterState(Diff diff, Version nodeVe protected void handleIncomingClusterStateRequest(BytesTransportRequest request, TransportChannel channel) throws IOException { Compressor compressor = CompressorFactory.compressor(request.bytes()); - StreamInput in; - if (compressor != null) { - in = compressor.streamInput(request.bytes().streamInput()); - } else { - in = request.bytes().streamInput(); - } - in.setVersion(request.version()); - synchronized (lastSeenClusterStateMutex) { - final ClusterState incomingState; - // If true we received full cluster state - otherwise diffs - if (in.readBoolean()) { - incomingState = ClusterState.Builder.readFrom(in, clusterStateSupplier.get().nodes().getLocalNode()); - logger.debug("received full cluster state version [{}] with size [{}]", incomingState.version(), request.bytes().length()); - } else if (lastSeenClusterState != null) { - Diff diff = lastSeenClusterState.readDiffFrom(in); - incomingState = diff.apply(lastSeenClusterState); - logger.debug("received diff cluster state version [{}] with uuid [{}], diff size [{}]", incomingState.version(), incomingState.stateUUID(), request.bytes().length()); - } else { - logger.debug("received diff for but don't have any local cluster state - requesting full state"); - throw new IncompatibleClusterStateVersionException("have no local cluster state"); + StreamInput in = request.bytes().streamInput(); + try { + if (compressor != null) { + in = compressor.streamInput(in); } - // sanity check incoming state - validateIncomingState(incomingState, lastSeenClusterState); + in = new NamedWriteableAwareStreamInput(in, namedWriteableRegistry); + in.setVersion(request.version()); + synchronized (lastSeenClusterStateMutex) { + final ClusterState incomingState; + // If true we received full cluster state - otherwise diffs + if (in.readBoolean()) { + incomingState = ClusterState.readFrom(in, clusterStateSupplier.get().nodes().getLocalNode()); + logger.debug("received full cluster state version [{}] with size [{}]", incomingState.version(), + request.bytes().length()); + } else if (lastSeenClusterState != null) { + Diff diff = ClusterState.readDiffFrom(in, lastSeenClusterState.nodes().getLocalNode()); + incomingState = diff.apply(lastSeenClusterState); + logger.debug("received diff cluster state version [{}] with uuid [{}], diff size [{}]", + incomingState.version(), incomingState.stateUUID(), request.bytes().length()); + } else { + logger.debug("received diff for but don't have any local cluster state - requesting full state"); + throw new IncompatibleClusterStateVersionException("have no local cluster state"); + } + // sanity check incoming state + validateIncomingState(incomingState, lastSeenClusterState); - pendingStatesQueue.addPending(incomingState); - lastSeenClusterState = incomingState; - lastSeenClusterState.status(ClusterState.ClusterStateStatus.RECEIVED); + pendingStatesQueue.addPending(incomingState); + lastSeenClusterState = incomingState; + } + } finally { + IOUtils.close(in); } channel.sendResponse(TransportResponse.Empty.INSTANCE); } @@ -400,13 +422,15 @@ protected void 
handleIncomingClusterStateRequest(BytesTransportRequest request, void validateIncomingState(ClusterState incomingState, ClusterState lastSeenClusterState) { final ClusterName incomingClusterName = incomingState.getClusterName(); if (!incomingClusterName.equals(this.clusterName)) { - logger.warn("received cluster state from [{}] which is also master but with a different cluster name [{}]", incomingState.nodes().getMasterNode(), incomingClusterName); + logger.warn("received cluster state from [{}] which is also master but with a different cluster name [{}]", + incomingState.nodes().getMasterNode(), incomingClusterName); throw new IllegalStateException("received state from a node that is not part of the cluster"); } final ClusterState clusterState = clusterStateSupplier.get(); if (clusterState.nodes().getLocalNode().equals(incomingState.nodes().getLocalNode()) == false) { - logger.warn("received a cluster state from [{}] and not part of the cluster, should not happen", incomingState.nodes().getMasterNode()); + logger.warn("received a cluster state from [{}] and not part of the cluster, should not happen", + incomingState.nodes().getMasterNode()); throw new IllegalStateException("received state with a local node that does not match the current local node"); } @@ -425,7 +449,8 @@ void validateIncomingState(ClusterState incomingState, ClusterState lastSeenClus } protected void handleCommitRequest(CommitClusterStateRequest request, final TransportChannel channel) { - final ClusterState state = pendingStatesQueue.markAsCommitted(request.stateUUID, new PendingClusterStatesQueue.StateProcessedListener() { + final ClusterState state = pendingStatesQueue.markAsCommitted(request.stateUUID, + new PendingClusterStatesQueue.StateProcessedListener() { @Override public void onNewClusterStateProcessed() { try { @@ -448,7 +473,8 @@ public void onNewClusterStateFailed(Exception e) { } }); if (state != null) { - newPendingClusterStatelistener.onNewClusterState("master " + state.nodes().getMasterNode() + " committed version [" + state.version() + "]"); + newPendingClusterStatelistener.onNewClusterState("master " + state.nodes().getMasterNode() + + " committed version [" + state.version() + "]"); } } @@ -518,13 +544,15 @@ public BlockingClusterStatePublishResponseHandler getPublishResponseHandler() { // an external marker to note that the publishing process is timed out. This is useful for proper logging. final AtomicBoolean publishingTimedOut = new AtomicBoolean(); - private SendingController(ClusterState clusterState, int minMasterNodes, int totalMasterNodes, BlockingClusterStatePublishResponseHandler publishResponseHandler) { + private SendingController(ClusterState clusterState, int minMasterNodes, int totalMasterNodes, + BlockingClusterStatePublishResponseHandler publishResponseHandler) { this.clusterState = clusterState; this.publishResponseHandler = publishResponseHandler; this.neededMastersToCommit = Math.max(0, minMasterNodes - 1); // we are one of the master nodes this.pendingMasterNodes = totalMasterNodes - 1; if (this.neededMastersToCommit > this.pendingMasterNodes) { - throw new Discovery.FailedToCommitClusterStateException("not enough masters to ack sent cluster state. [{}] needed , have [{}]", neededMastersToCommit, pendingMasterNodes); + throw new Discovery.FailedToCommitClusterStateException("not enough masters to ack sent cluster state." 
+ + "[{}] needed , have [{}]", neededMastersToCommit, pendingMasterNodes); } this.committed = neededMastersToCommit == 0; this.committedOrFailedLatch = new CountDownLatch(committed ? 0 : 1); @@ -598,7 +626,8 @@ private synchronized void decrementPendingMasterAcksAndChangeForFailure() { public synchronized void onNodeSendFailed(DiscoveryNode node, Exception e) { if (node.isMasterNode()) { - logger.trace("master node {} failed to ack cluster state version [{}]. processing ... (current pending [{}], needed [{}])", + logger.trace("master node {} failed to ack cluster state version [{}]. " + + "processing ... (current pending [{}], needed [{}])", node, clusterState.version(), pendingMasterNodes, neededMastersToCommit); decrementPendingMasterAcksAndChangeForFailure(); } @@ -629,7 +658,8 @@ private synchronized boolean markAsFailed(String details, Exception reason) { if (committedOrFailed()) { return committed == false; } - logger.trace((org.apache.logging.log4j.util.Supplier) () -> new ParameterizedMessage("failed to commit version [{}]. {}", clusterState.version(), details), reason); + logger.trace((org.apache.logging.log4j.util.Supplier) () -> new ParameterizedMessage("failed to commit version [{}]. {}", + clusterState.version(), details), reason); committed = false; committedOrFailedLatch.countDown(); return true; diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/ping/unicast/UnicastHostsProvider.java b/core/src/main/java/org/elasticsearch/discovery/zen/UnicastHostsProvider.java similarity index 95% rename from core/src/main/java/org/elasticsearch/discovery/zen/ping/unicast/UnicastHostsProvider.java rename to core/src/main/java/org/elasticsearch/discovery/zen/UnicastHostsProvider.java index dbfaed572b1e6..9ff3215cd6480 100644 --- a/core/src/main/java/org/elasticsearch/discovery/zen/ping/unicast/UnicastHostsProvider.java +++ b/core/src/main/java/org/elasticsearch/discovery/zen/UnicastHostsProvider.java @@ -17,7 +17,7 @@ * under the License. */ -package org.elasticsearch.discovery.zen.ping.unicast; +package org.elasticsearch.discovery.zen; import org.elasticsearch.cluster.node.DiscoveryNode; diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/UnicastZenPing.java b/core/src/main/java/org/elasticsearch/discovery/zen/UnicastZenPing.java new file mode 100644 index 0000000000000..3a68b2b4cd8a3 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/discovery/zen/UnicastZenPing.java @@ -0,0 +1,689 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.discovery.zen; + +import com.carrotsearch.hppc.cursors.ObjectCursor; +import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.logging.log4j.util.Supplier; +import org.apache.lucene.store.AlreadyClosedException; +import org.apache.lucene.util.IOUtils; +import org.elasticsearch.Version; +import org.elasticsearch.cluster.ClusterName; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.node.DiscoveryNodes; +import org.elasticsearch.common.component.AbstractComponent; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.lease.Releasable; +import org.elasticsearch.common.lease.Releasables; +import org.elasticsearch.common.settings.Setting; +import org.elasticsearch.common.settings.Setting.Property; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.transport.TransportAddress; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.CollectionUtils; +import org.elasticsearch.common.util.concurrent.AbstractRunnable; +import org.elasticsearch.common.util.concurrent.ConcurrentCollections; +import org.elasticsearch.common.util.concurrent.EsExecutors; +import org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor; +import org.elasticsearch.common.util.concurrent.KeyedLock; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.ConnectTransportException; +import org.elasticsearch.transport.ConnectionProfile; +import org.elasticsearch.transport.NodeNotConnectedException; +import org.elasticsearch.transport.RemoteTransportException; +import org.elasticsearch.transport.Transport.Connection; +import org.elasticsearch.transport.TransportChannel; +import org.elasticsearch.transport.TransportException; +import org.elasticsearch.transport.TransportRequest; +import org.elasticsearch.transport.TransportRequestHandler; +import org.elasticsearch.transport.TransportRequestOptions; +import org.elasticsearch.transport.TransportResponse; +import org.elasticsearch.transport.TransportResponseHandler; +import org.elasticsearch.transport.TransportService; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.Locale; +import java.util.Map; +import java.util.Objects; +import java.util.Queue; +import java.util.Set; +import java.util.concurrent.Callable; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Future; +import java.util.concurrent.ThreadFactory; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.Consumer; +import java.util.function.Function; +import java.util.stream.Collectors; +import java.util.stream.Stream; + +import static java.util.Collections.emptyList; +import static java.util.Collections.emptyMap; +import static java.util.Collections.emptySet; +import static org.elasticsearch.common.util.concurrent.ConcurrentCollections.newConcurrentMap; +import static org.elasticsearch.discovery.zen.ZenPing.PingResponse.readPingResponse; + +public class UnicastZenPing extends 
AbstractComponent implements ZenPing { + + public static final String ACTION_NAME = "internal:discovery/zen/unicast"; + public static final Setting> DISCOVERY_ZEN_PING_UNICAST_HOSTS_SETTING = + Setting.listSetting("discovery.zen.ping.unicast.hosts", emptyList(), Function.identity(), + Property.NodeScope); + public static final Setting DISCOVERY_ZEN_PING_UNICAST_CONCURRENT_CONNECTS_SETTING = + Setting.intSetting("discovery.zen.ping.unicast.concurrent_connects", 10, 0, Property.NodeScope); + public static final Setting DISCOVERY_ZEN_PING_UNICAST_HOSTS_RESOLVE_TIMEOUT = + Setting.positiveTimeSetting("discovery.zen.ping.unicast.hosts.resolve_timeout", TimeValue.timeValueSeconds(5), Property.NodeScope); + + // these limits are per-address + public static final int LIMIT_FOREIGN_PORTS_COUNT = 1; + public static final int LIMIT_LOCAL_PORTS_COUNT = 5; + + private final ThreadPool threadPool; + private final TransportService transportService; + private final ClusterName clusterName; + + private final List configuredHosts; + + private final int limitPortCounts; + + private volatile PingContextProvider contextProvider; + + private final AtomicInteger pingingRoundIdGenerator = new AtomicInteger(); + + // used as a node id prefix for configured unicast host nodes/address + private static final String UNICAST_NODE_PREFIX = "#zen_unicast_"; + + private final Map activePingingRounds = newConcurrentMap(); + + // a list of temporal responses a node will return for a request (holds responses from other nodes) + private final Queue temporalResponses = ConcurrentCollections.newQueue(); + + private final UnicastHostsProvider hostsProvider; + + protected final EsThreadPoolExecutor unicastZenPingExecutorService; + + private final TimeValue resolveTimeout; + + private volatile boolean closed = false; + + public UnicastZenPing(Settings settings, ThreadPool threadPool, TransportService transportService, + UnicastHostsProvider unicastHostsProvider) { + super(settings); + this.threadPool = threadPool; + this.transportService = transportService; + this.clusterName = ClusterName.CLUSTER_NAME_SETTING.get(settings); + this.hostsProvider = unicastHostsProvider; + + final int concurrentConnects = DISCOVERY_ZEN_PING_UNICAST_CONCURRENT_CONNECTS_SETTING.get(settings); + if (DISCOVERY_ZEN_PING_UNICAST_HOSTS_SETTING.exists(settings)) { + configuredHosts = DISCOVERY_ZEN_PING_UNICAST_HOSTS_SETTING.get(settings); + // we only limit to 1 addresses, makes no sense to ping 100 ports + limitPortCounts = LIMIT_FOREIGN_PORTS_COUNT; + } else { + // if unicast hosts are not specified, fill with simple defaults on the local machine + configuredHosts = transportService.getLocalAddresses(); + limitPortCounts = LIMIT_LOCAL_PORTS_COUNT; + } + resolveTimeout = DISCOVERY_ZEN_PING_UNICAST_HOSTS_RESOLVE_TIMEOUT.get(settings); + logger.debug( + "using initial hosts {}, with concurrent_connects [{}], resolve_timeout [{}]", + configuredHosts, + concurrentConnects, + resolveTimeout); + + transportService.registerRequestHandler(ACTION_NAME, UnicastPingRequest::new, ThreadPool.Names.SAME, + new UnicastPingRequestHandler()); + + final ThreadFactory threadFactory = EsExecutors.daemonThreadFactory(settings, "[unicast_connect]"); + unicastZenPingExecutorService = EsExecutors.newScaling( + "unicast_connect", + 0, concurrentConnects, + 60, + TimeUnit.SECONDS, + threadFactory, + threadPool.getThreadContext()); + } + + /** + * Resolves a list of hosts to a list of discovery nodes. 
Each host is resolved into a transport address (or a collection of addresses + * if the number of ports is greater than one) and the transport addresses are used to created discovery nodes. Host lookups are done + * in parallel using specified executor service up to the specified resolve timeout. + * + * @param executorService the executor service used to parallelize hostname lookups + * @param logger logger used for logging messages regarding hostname lookups + * @param hosts the hosts to resolve + * @param limitPortCounts the number of ports to resolve (should be 1 for non-local transport) + * @param transportService the transport service + * @param nodeId_prefix a prefix to use for node ids + * @param resolveTimeout the timeout before returning from hostname lookups + * @return a list of discovery nodes with resolved transport addresses + */ + public static List resolveHostsLists( + final ExecutorService executorService, + final Logger logger, + final List hosts, + final int limitPortCounts, + final TransportService transportService, + final String nodeId_prefix, + final TimeValue resolveTimeout) throws InterruptedException { + Objects.requireNonNull(executorService); + Objects.requireNonNull(logger); + Objects.requireNonNull(hosts); + Objects.requireNonNull(transportService); + Objects.requireNonNull(nodeId_prefix); + Objects.requireNonNull(resolveTimeout); + if (resolveTimeout.nanos() < 0) { + throw new IllegalArgumentException("resolve timeout must be non-negative but was [" + resolveTimeout + "]"); + } + // create tasks to submit to the executor service; we will wait up to resolveTimeout for these tasks to complete + final List> callables = + hosts + .stream() + .map(hn -> (Callable) () -> transportService.addressesFromString(hn, limitPortCounts)) + .collect(Collectors.toList()); + final List> futures = + executorService.invokeAll(callables, resolveTimeout.nanos(), TimeUnit.NANOSECONDS); + final List discoveryNodes = new ArrayList<>(); + final Set localAddresses = new HashSet<>(); + localAddresses.add(transportService.boundAddress().publishAddress()); + localAddresses.addAll(Arrays.asList(transportService.boundAddress().boundAddresses())); + // ExecutorService#invokeAll guarantees that the futures are returned in the iteration order of the tasks so we can associate the + // hostname with the corresponding task by iterating together + final Iterator it = hosts.iterator(); + for (final Future future : futures) { + final String hostname = it.next(); + if (!future.isCancelled()) { + assert future.isDone(); + try { + final TransportAddress[] addresses = future.get(); + logger.trace("resolved host [{}] to {}", hostname, addresses); + for (int addressId = 0; addressId < addresses.length; addressId++) { + final TransportAddress address = addresses[addressId]; + // no point in pinging ourselves + if (localAddresses.contains(address) == false) { + discoveryNodes.add( + new DiscoveryNode( + nodeId_prefix + hostname + "_" + addressId + "#", + address, + emptyMap(), + emptySet(), + Version.CURRENT.minimumCompatibilityVersion())); + } + } + } catch (final ExecutionException e) { + assert e.getCause() != null; + final String message = "failed to resolve host [" + hostname + "]"; + logger.warn(message, e.getCause()); + } + } else { + logger.warn("timed out after [{}] resolving host [{}]", resolveTimeout, hostname); + } + } + return discoveryNodes; + } + + @Override + public void close() { + ThreadPool.terminate(unicastZenPingExecutorService, 10, TimeUnit.SECONDS); + 
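
resolveHostsLists parallelizes the host lookups with ExecutorService#invokeAll and leans on two of its documented guarantees: the returned futures are in task order, so each future can be matched back to its hostname, and tasks that miss the timeout come back cancelled. A standalone sketch of the same pattern against plain JDK name resolution (hypothetical class, not the TransportService-based code above):

```java
import java.net.InetAddress;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

public class ParallelResolveSketch {
    public static void main(String[] args) throws InterruptedException {
        List<String> hosts = Arrays.asList("localhost", "host.invalid");
        ExecutorService executor = Executors.newFixedThreadPool(2);
        try {
            // one resolution task per configured host
            List<Callable<InetAddress[]>> tasks = hosts.stream()
                .map(host -> (Callable<InetAddress[]>) () -> InetAddress.getAllByName(host))
                .collect(Collectors.toList());
            // wait at most 5 seconds for all lookups; tasks that miss the deadline are cancelled
            List<Future<InetAddress[]>> futures = executor.invokeAll(tasks, 5, TimeUnit.SECONDS);
            // invokeAll preserves task order, so iterate hosts alongside the futures
            List<InetAddress> resolved = new ArrayList<>();
            Iterator<String> it = hosts.iterator();
            for (Future<InetAddress[]> future : futures) {
                String hostname = it.next();
                if (future.isCancelled()) {
                    System.out.println("timed out resolving [" + hostname + "]");
                    continue;
                }
                try {
                    resolved.addAll(Arrays.asList(future.get()));
                } catch (ExecutionException e) {
                    System.out.println("failed to resolve [" + hostname + "]: " + e.getCause());
                }
            }
            System.out.println("resolved addresses: " + resolved);
        } finally {
            executor.shutdownNow();
        }
    }
}
```
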
Releasables.close(activePingingRounds.values()); + closed = true; + } + + @Override + public void start(PingContextProvider contextProvider) { + this.contextProvider = contextProvider; + } + + /** + * Clears the list of cached ping responses. + */ + public void clearTemporalResponses() { + temporalResponses.clear(); + } + + /** + * Sends three rounds of pings notifying the specified {@link Consumer} when pinging is complete. Pings are sent after resolving + * configured unicast hosts to their IP address (subject to DNS caching within the JVM). A batch of pings is sent, then another batch + * of pings is sent at half the specified {@link TimeValue}, and then another batch of pings is sent at the specified {@link TimeValue}. + * The pings that are sent carry a timeout of 1.25 times the specified {@link TimeValue}. When pinging each node, a connection and + * handshake is performed, with a connection timeout of the specified {@link TimeValue}. + * + * @param resultsConsumer the callback when pinging is complete + * @param duration the timeout for various components of the pings + */ + @Override + public void ping(final Consumer resultsConsumer, final TimeValue duration) { + ping(resultsConsumer, duration, duration); + } + + /** + * a variant of {@link #ping(Consumer, TimeValue)}, but allows separating the scheduling duration + * from the duration used for request level time outs. This is useful for testing + */ + protected void ping(final Consumer resultsConsumer, + final TimeValue scheduleDuration, + final TimeValue requestDuration) { + final List seedNodes; + try { + seedNodes = resolveHostsLists( + unicastZenPingExecutorService, + logger, + configuredHosts, + limitPortCounts, + transportService, + UNICAST_NODE_PREFIX, + resolveTimeout); + } catch (InterruptedException e) { + throw new RuntimeException(e); + } + seedNodes.addAll(hostsProvider.buildDynamicNodes()); + final DiscoveryNodes nodes = contextProvider.clusterState().nodes(); + // add all possible master nodes that were active in the last known cluster configuration + for (ObjectCursor masterNode : nodes.getMasterNodes().values()) { + seedNodes.add(masterNode.value); + } + + final ConnectionProfile connectionProfile = + ConnectionProfile.buildSingleChannelProfile(TransportRequestOptions.Type.REG, requestDuration, requestDuration); + final PingingRound pingingRound = new PingingRound(pingingRoundIdGenerator.incrementAndGet(), seedNodes, resultsConsumer, + nodes.getLocalNode(), connectionProfile); + activePingingRounds.put(pingingRound.id(), pingingRound); + final AbstractRunnable pingSender = new AbstractRunnable() { + @Override + public void onFailure(Exception e) { + if (e instanceof AlreadyClosedException == false) { + logger.warn("unexpected error while pinging", e); + } + } + + @Override + protected void doRun() throws Exception { + sendPings(requestDuration, pingingRound); + } + }; + threadPool.generic().execute(pingSender); + threadPool.schedule(TimeValue.timeValueMillis(scheduleDuration.millis() / 3), ThreadPool.Names.GENERIC, pingSender); + threadPool.schedule(TimeValue.timeValueMillis(scheduleDuration.millis() / 3 * 2), ThreadPool.Names.GENERIC, pingSender); + threadPool.schedule(scheduleDuration, ThreadPool.Names.GENERIC, new AbstractRunnable() { + @Override + protected void doRun() throws Exception { + finishPingingRound(pingingRound); + } + + @Override + public void onFailure(Exception e) { + logger.warn("unexpected error while finishing pinging round", e); + } + }); + } + + // for testing + protected void 
finishPingingRound(PingingRound pingingRound) { + pingingRound.close(); + } + + protected class PingingRound implements Releasable { + private final int id; + private final Map tempConnections = new HashMap<>(); + private final KeyedLock connectionLock = new KeyedLock<>(true); + private final PingCollection pingCollection; + private final List seedNodes; + private final Consumer pingListener; + private final DiscoveryNode localNode; + private final ConnectionProfile connectionProfile; + + private AtomicBoolean closed = new AtomicBoolean(false); + + PingingRound(int id, List seedNodes, Consumer resultsConsumer, DiscoveryNode localNode, + ConnectionProfile connectionProfile) { + this.id = id; + this.seedNodes = Collections.unmodifiableList(new ArrayList<>(seedNodes)); + this.pingListener = resultsConsumer; + this.localNode = localNode; + this.connectionProfile = connectionProfile; + this.pingCollection = new PingCollection(); + } + + public int id() { + return this.id; + } + + public boolean isClosed() { + return this.closed.get(); + } + + public List getSeedNodes() { + ensureOpen(); + return seedNodes; + } + + public Connection getOrConnect(DiscoveryNode node) throws IOException { + Connection result; + try (Releasable ignore = connectionLock.acquire(node.getAddress())) { + result = tempConnections.get(node.getAddress()); + if (result == null) { + ensureOpen(); + boolean success = false; + logger.trace("[{}] opening connection to [{}]", id(), node); + result = transportService.openConnection(node, connectionProfile); + try { + transportService.handshake(result, connectionProfile.getHandshakeTimeout().millis()); + synchronized (this) { + // acquire lock and check if closed, to prevent leaving an open connection after closing + ensureOpen(); + Connection existing = tempConnections.put(node.getAddress(), result); + assert existing == null; + success = true; + } + } finally { + if (success == false) { + logger.trace("[{}] closing connection to [{}] due to failure", id(), node); + IOUtils.closeWhileHandlingException(result); + } + } + } + } + return result; + } + + private void ensureOpen() { + if (isClosed()) { + throw new AlreadyClosedException("pinging round [" + id + "] is finished"); + } + } + + public void addPingResponseToCollection(PingResponse pingResponse) { + if (localNode.equals(pingResponse.node()) == false) { + pingCollection.addPing(pingResponse); + } + } + + @Override + public void close() { + List toClose = null; + synchronized (this) { + if (closed.compareAndSet(false, true)) { + activePingingRounds.remove(id); + toClose = new ArrayList<>(tempConnections.values()); + tempConnections.clear(); + } + } + if (toClose != null) { + // we actually closed + try { + pingListener.accept(pingCollection); + } finally { + IOUtils.closeWhileHandlingException(toClose); + } + } + } + + public ConnectionProfile getConnectionProfile() { + return connectionProfile; + } + } + + + protected void sendPings(final TimeValue timeout, final PingingRound pingingRound) { + final UnicastPingRequest pingRequest = new UnicastPingRequest(); + pingRequest.id = pingingRound.id(); + pingRequest.timeout = timeout; + ClusterState lastState = contextProvider.clusterState(); + + pingRequest.pingResponse = createPingResponse(lastState); + + Set nodesFromResponses = temporalResponses.stream().map(pingResponse -> { + assert clusterName.equals(pingResponse.clusterName()) : + "got a ping request from a different cluster. 
expected " + clusterName + " got " + pingResponse.clusterName(); + return pingResponse.node(); + }).collect(Collectors.toSet()); + + // dedup by address + final Map uniqueNodesByAddress = + Stream.concat(pingingRound.getSeedNodes().stream(), nodesFromResponses.stream()) + .collect(Collectors.toMap(DiscoveryNode::getAddress, Function.identity(), (n1, n2) -> n1)); + + + // resolve what we can via the latest cluster state + final Set nodesToPing = uniqueNodesByAddress.values().stream() + .map(node -> { + DiscoveryNode foundNode = lastState.nodes().findByAddress(node.getAddress()); + if (foundNode == null) { + return node; + } else { + return foundNode; + } + }).collect(Collectors.toSet()); + + nodesToPing.forEach(node -> sendPingRequestToNode(node, timeout, pingingRound, pingRequest)); + } + + private void sendPingRequestToNode(final DiscoveryNode node, TimeValue timeout, final PingingRound pingingRound, + final UnicastPingRequest pingRequest) { + submitToExecutor(new AbstractRunnable() { + @Override + protected void doRun() throws Exception { + Connection connection = null; + if (transportService.nodeConnected(node)) { + try { + // concurrency can still cause disconnects + connection = transportService.getConnection(node); + } catch (NodeNotConnectedException e) { + logger.trace("[{}] node [{}] just disconnected, will create a temp connection", pingingRound.id(), node); + } + } + + if (connection == null) { + connection = pingingRound.getOrConnect(node); + } + + logger.trace("[{}] sending to {}", pingingRound.id(), node); + transportService.sendRequest(connection, ACTION_NAME, pingRequest, + TransportRequestOptions.builder().withTimeout((long) (timeout.millis() * 1.25)).build(), + getPingResponseHandler(pingingRound, node)); + } + + @Override + public void onFailure(Exception e) { + if (e instanceof ConnectTransportException || e instanceof AlreadyClosedException) { + // can't connect to the node - this is more common path! + logger.trace( + (Supplier) () -> new ParameterizedMessage( + "[{}] failed to ping {}", pingingRound.id(), node), e); + } else if (e instanceof RemoteTransportException) { + // something went wrong on the other side + logger.debug( + (Supplier) () -> new ParameterizedMessage( + "[{}] received a remote error as a response to ping {}", pingingRound.id(), node), e); + } else { + logger.warn( + (Supplier) () -> new ParameterizedMessage( + "[{}] failed send ping to {}", pingingRound.id(), node), e); + } + } + + @Override + public void onRejection(Exception e) { + // The RejectedExecutionException can come from the fact unicastZenPingExecutorService is at its max down in sendPings + // But don't bail here, we can retry later on after the send ping has been scheduled. 
+ logger.debug("Ping execution rejected", e); + } + }); + } + + // for testing + protected void submitToExecutor(AbstractRunnable abstractRunnable) { + unicastZenPingExecutorService.execute(abstractRunnable); + } + + // for testing + protected TransportResponseHandler getPingResponseHandler(final PingingRound pingingRound, + final DiscoveryNode node) { + return new TransportResponseHandler() { + + @Override + public UnicastPingResponse newInstance() { + return new UnicastPingResponse(); + } + + @Override + public String executor() { + return ThreadPool.Names.SAME; + } + + @Override + public void handleResponse(UnicastPingResponse response) { + logger.trace("[{}] received response from {}: {}", pingingRound.id(), node, Arrays.toString(response.pingResponses)); + if (pingingRound.isClosed()) { + if (logger.isTraceEnabled()) { + logger.trace("[{}] skipping received response from {}. already closed", pingingRound.id(), node); + } + } else { + Stream.of(response.pingResponses).forEach(pingingRound::addPingResponseToCollection); + } + } + + @Override + public void handleException(TransportException exp) { + if (exp instanceof ConnectTransportException || exp.getCause() instanceof ConnectTransportException) { + // ok, not connected... + logger.trace((Supplier) () -> new ParameterizedMessage("failed to connect to {}", node), exp); + } else if (closed == false) { + logger.warn((Supplier) () -> new ParameterizedMessage("failed to send ping to [{}]", node), exp); + } + } + }; + } + + private UnicastPingResponse handlePingRequest(final UnicastPingRequest request) { + assert clusterName.equals(request.pingResponse.clusterName()) : + "got a ping request from a different cluster. expected " + clusterName + " got " + request.pingResponse.clusterName(); + temporalResponses.add(request.pingResponse); + // add to any ongoing pinging + activePingingRounds.values().forEach(p -> p.addPingResponseToCollection(request.pingResponse)); + threadPool.schedule(TimeValue.timeValueMillis(request.timeout.millis() * 2), ThreadPool.Names.SAME, + () -> temporalResponses.remove(request.pingResponse)); + + List pingResponses = CollectionUtils.iterableAsArrayList(temporalResponses); + pingResponses.add(createPingResponse(contextProvider.clusterState())); + + UnicastPingResponse unicastPingResponse = new UnicastPingResponse(); + unicastPingResponse.id = request.id; + unicastPingResponse.pingResponses = pingResponses.toArray(new PingResponse[pingResponses.size()]); + + return unicastPingResponse; + } + + class UnicastPingRequestHandler implements TransportRequestHandler { + + @Override + public void messageReceived(UnicastPingRequest request, TransportChannel channel) throws Exception { + if (request.pingResponse.clusterName().equals(clusterName)) { + channel.sendResponse(handlePingRequest(request)); + } else { + throw new IllegalStateException( + String.format( + Locale.ROOT, + "mismatched cluster names; request: [%s], local: [%s]", + request.pingResponse.clusterName().value(), + clusterName.value())); + } + } + + } + + public static class UnicastPingRequest extends TransportRequest { + + int id; + TimeValue timeout; + PingResponse pingResponse; + + public UnicastPingRequest() { + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + id = in.readInt(); + timeout = new TimeValue(in); + pingResponse = readPingResponse(in); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeInt(id); + timeout.writeTo(out); + 
pingResponse.writeTo(out); + } + } + + private PingResponse createPingResponse(ClusterState clusterState) { + DiscoveryNodes discoNodes = clusterState.nodes(); + return new PingResponse(discoNodes.getLocalNode(), discoNodes.getMasterNode(), clusterState); + } + + static class UnicastPingResponse extends TransportResponse { + + int id; + + PingResponse[] pingResponses; + + UnicastPingResponse() { + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + id = in.readInt(); + pingResponses = new PingResponse[in.readVInt()]; + for (int i = 0; i < pingResponses.length; i++) { + pingResponses[i] = readPingResponse(in); + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeInt(id); + out.writeVInt(pingResponses.length); + for (PingResponse pingResponse : pingResponses) { + pingResponse.writeTo(out); + } + } + } + + protected Version getVersion() { + return Version.CURRENT; // for tests + } +} diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java b/core/src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java index c4fc4f15f4067..8ea0c2e42c629 100644 --- a/core/src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java +++ b/core/src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java @@ -22,49 +22,40 @@ import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; +import org.apache.lucene.util.IOUtils; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ExceptionsHelper; -import org.elasticsearch.Version; import org.elasticsearch.cluster.ClusterChangedEvent; import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.ClusterStateTaskConfig; import org.elasticsearch.cluster.ClusterStateTaskExecutor; import org.elasticsearch.cluster.ClusterStateTaskListener; -import org.elasticsearch.cluster.ClusterStateUpdateTask; +import org.elasticsearch.cluster.LocalClusterUpdateTask; import org.elasticsearch.cluster.NotMasterException; -import org.elasticsearch.cluster.block.ClusterBlocks; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.routing.allocation.AllocationService; -import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Priority; import org.elasticsearch.common.component.AbstractLifecycleComponent; import org.elasticsearch.common.component.Lifecycle; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.inject.internal.Nullable; +import org.elasticsearch.common.io.stream.NamedWriteableRegistry; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.settings.ClusterSettings; +import org.elasticsearch.common.lease.Releasables; +import org.elasticsearch.common.logging.LoggerMessageFormat; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.AbstractRunnable; import 
org.elasticsearch.discovery.Discovery; import org.elasticsearch.discovery.DiscoverySettings; import org.elasticsearch.discovery.DiscoveryStats; -import org.elasticsearch.discovery.zen.elect.ElectMasterService; -import org.elasticsearch.discovery.zen.fd.MasterFaultDetection; -import org.elasticsearch.discovery.zen.fd.NodesFaultDetection; -import org.elasticsearch.discovery.zen.membership.MembershipAction; -import org.elasticsearch.discovery.zen.ping.PingContextProvider; -import org.elasticsearch.discovery.zen.ping.ZenPing; -import org.elasticsearch.discovery.zen.ping.ZenPingService; -import org.elasticsearch.discovery.zen.publish.PendingClusterStateStats; -import org.elasticsearch.discovery.zen.publish.PublishClusterStateAction; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.EmptyTransportResponseHandler; import org.elasticsearch.transport.TransportChannel; @@ -76,15 +67,14 @@ import java.io.IOException; import java.util.ArrayList; -import java.util.Arrays; -import java.util.HashSet; import java.util.List; import java.util.Set; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ExecutionException; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; -import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.atomic.AtomicReference; -import java.util.function.BiFunction; +import java.util.function.Consumer; import java.util.stream.Collectors; import static org.elasticsearch.common.unit.TimeValue.timeValueSeconds; @@ -95,7 +85,7 @@ public class ZenDiscovery extends AbstractLifecycleComponent implements Discover Setting.positiveTimeSetting("discovery.zen.ping_timeout", timeValueSeconds(3), Property.NodeScope); public static final Setting JOIN_TIMEOUT_SETTING = Setting.timeSetting("discovery.zen.join_timeout", - settings -> TimeValue.timeValueMillis(PING_TIMEOUT_SETTING.get(settings).millis() * 20).toString(), + settings -> TimeValue.timeValueMillis(PING_TIMEOUT_SETTING.get(settings).millis() * 20), TimeValue.timeValueMillis(0), Property.NodeScope); public static final Setting JOIN_RETRY_ATTEMPTS_SETTING = Setting.intSetting("discovery.zen.join_retry_attempts", 3, 1, Property.NodeScope); @@ -107,7 +97,7 @@ public class ZenDiscovery extends AbstractLifecycleComponent implements Discover Setting.boolSetting("discovery.zen.send_leave_request", true, Property.NodeScope); public static final Setting MASTER_ELECTION_WAIT_FOR_JOINS_TIMEOUT_SETTING = Setting.timeSetting("discovery.zen.master_election.wait_for_joins_timeout", - settings -> TimeValue.timeValueMillis(JOIN_TIMEOUT_SETTING.get(settings).millis() / 2).toString(), TimeValue.timeValueMillis(0), + settings -> TimeValue.timeValueMillis(JOIN_TIMEOUT_SETTING.get(settings).millis() / 2), TimeValue.timeValueMillis(0), Property.NodeScope); public static final Setting MASTER_ELECTION_IGNORE_NON_MASTER_PINGS_SETTING = Setting.boolSetting("discovery.zen.master_election.ignore_non_master_pings", false, Property.NodeScope); @@ -115,15 +105,17 @@ public class ZenDiscovery extends AbstractLifecycleComponent implements Discover public static final String DISCOVERY_REJOIN_ACTION_NAME = "internal:discovery/zen/rejoin"; private final TransportService transportService; + private final NamedWriteableRegistry namedWriteableRegistry; private final ClusterService clusterService; private AllocationService allocationService; private final ClusterName clusterName; private final DiscoverySettings discoverySettings; - private final ZenPingService 
pingService; + protected final ZenPing zenPing; // protected to allow tests access private final MasterFaultDetection masterFD; private final NodesFaultDetection nodesFD; private final PublishClusterStateAction publishClusterState; private final MembershipAction membership; + private final ThreadPool threadPool; private final TimeValue pingTimeout; private final TimeValue joinTimeout; @@ -146,31 +138,28 @@ public class ZenDiscovery extends AbstractLifecycleComponent implements Discover private final JoinThreadControl joinThreadControl; - /** counts the time this node has joined the cluster or have elected it self as master */ - private final AtomicLong clusterJoinsCounter = new AtomicLong(); - // must initialized in doStart(), when we have the allocationService set private volatile NodeJoinController nodeJoinController; private volatile NodeRemovalClusterStateTaskExecutor nodeRemovalExecutor; - @Inject - public ZenDiscovery(Settings settings, ThreadPool threadPool, - TransportService transportService, final ClusterService clusterService, ClusterSettings clusterSettings, - ZenPingService pingService, ElectMasterService electMasterService) { + public ZenDiscovery(Settings settings, ThreadPool threadPool, TransportService transportService, + NamedWriteableRegistry namedWriteableRegistry, + ClusterService clusterService, UnicastHostsProvider hostsProvider) { super(settings); this.clusterService = clusterService; this.clusterName = clusterService.getClusterName(); this.transportService = transportService; - this.discoverySettings = new DiscoverySettings(settings, clusterSettings); - this.pingService = pingService; - this.electMaster = electMasterService; + this.namedWriteableRegistry = namedWriteableRegistry; + this.discoverySettings = new DiscoverySettings(settings, clusterService.getClusterSettings()); + this.zenPing = newZenPing(settings, threadPool, transportService, hostsProvider); + this.electMaster = new ElectMasterService(settings); this.pingTimeout = PING_TIMEOUT_SETTING.get(settings); - this.joinTimeout = JOIN_TIMEOUT_SETTING.get(settings); this.joinRetryAttempts = JOIN_RETRY_ATTEMPTS_SETTING.get(settings); this.joinRetryDelay = JOIN_RETRY_DELAY_SETTING.get(settings); this.maxPingsFromAnotherMaster = MAX_PINGS_FROM_ANOTHER_MASTER_SETTING.get(settings); this.sendLeaveRequest = SEND_LEAVE_REQUEST_SETTING.get(settings); + this.threadPool = threadPool; this.masterElectionIgnoreNonMasters = MASTER_ELECTION_IGNORE_NON_MASTER_PINGS_SETTING.get(settings); this.masterElectionWaitForJoinsTimeout = MASTER_ELECTION_WAIT_FOR_JOINS_TIMEOUT_SETTING.get(settings); @@ -178,17 +167,29 @@ public ZenDiscovery(Settings settings, ThreadPool threadPool, logger.debug("using ping_timeout [{}], join.timeout [{}], master_election.ignore_non_master [{}]", this.pingTimeout, joinTimeout, masterElectionIgnoreNonMasters); - clusterSettings.addSettingsUpdateConsumer(ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING, this::handleMinimumMasterNodesChanged, (value) -> { - final ClusterState clusterState = clusterService.state(); - int masterNodes = clusterState.nodes().getMasterNodes().size(); - if (value > masterNodes) { - throw new IllegalArgumentException("cannot set " + ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey() + " to more than the current master nodes count [" + masterNodes + "]"); - } + clusterService.getClusterSettings().addSettingsUpdateConsumer(ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING, + this::handleMinimumMasterNodesChanged, (value) -> { + final 
ClusterState clusterState = clusterService.state(); + int masterNodes = clusterState.nodes().getMasterNodes().size(); + // the purpose of this validation is to make sure that the master doesn't step down + // due to a change in master nodes, which also means that there is no way to revert + // an accidental change. Since we validate using the current cluster state (and + // not the one from which the settings come from) we have to be careful and only + // validate if the local node is already a master. Doing so all the time causes + // subtle issues. For example, a node that joins a cluster has no nodes in its + // current cluster state. When it receives a cluster state from the master with + // a dynamic minimum master nodes setting int it, we must make sure we don't reject + // it. + + if (clusterState.nodes().isLocalNodeElectedMaster() && value > masterNodes) { + throw new IllegalArgumentException("cannot set " + + ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey() + " to more than the current" + + " master nodes count [" + masterNodes + "]"); + } }); this.masterFD = new MasterFaultDetection(settings, threadPool, transportService, clusterService); this.masterFD.addListener(new MasterNodeFailureListener()); - this.nodesFD = new NodesFaultDetection(settings, threadPool, transportService, clusterService.getClusterName()); this.nodesFD.addListener(new NodeFaultDetectionListener()); @@ -196,19 +197,24 @@ public ZenDiscovery(Settings settings, ThreadPool threadPool, new PublishClusterStateAction( settings, transportService, + namedWriteableRegistry, clusterService::state, new NewPendingClusterStateListener(), discoverySettings, clusterService.getClusterName()); - this.pingService.setPingContextProvider(this); - this.membership = new MembershipAction(settings, transportService, this, new MembershipListener()); - - this.joinThreadControl = new JoinThreadControl(threadPool); + this.membership = new MembershipAction(settings, transportService, new MembershipListener()); + this.joinThreadControl = new JoinThreadControl(); transportService.registerRequestHandler( DISCOVERY_REJOIN_ACTION_NAME, RejoinClusterRequest::new, ThreadPool.Names.SAME, new RejoinClusterRequestHandler()); } + // protected to allow overriding in tests + protected ZenPing newZenPing(Settings settings, ThreadPool threadPool, TransportService transportService, + UnicastHostsProvider hostsProvider) { + return new UnicastZenPing(settings, threadPool, transportService, hostsProvider); + } + @Override public void setAllocationService(AllocationService allocationService) { this.allocationService = allocationService; @@ -218,26 +224,21 @@ public void setAllocationService(AllocationService allocationService) { protected void doStart() { nodesFD.setLocalNode(clusterService.localNode()); joinThreadControl.start(); - pingService.start(); - this.nodeJoinController = new NodeJoinController(clusterService, allocationService, electMaster, discoverySettings, settings); - this.nodeRemovalExecutor = new NodeRemovalClusterStateTaskExecutor(allocationService, electMaster, this::rejoin, logger); + zenPing.start(this); + this.nodeJoinController = new NodeJoinController(clusterService, allocationService, electMaster, settings); + this.nodeRemovalExecutor = new NodeRemovalClusterStateTaskExecutor(allocationService, electMaster, this::submitRejoin, logger); } @Override public void startInitialJoin() { // start the join thread from a cluster state update. See {@link JoinThreadControl} for details. 
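
The settings-update validator quoted above only rejects a new minimum_master_nodes value when the local node is the elected master, precisely because a node that is still joining has an empty node list and would otherwise reject any dynamic value arriving from the master. A hedged standalone sketch of the shape of that check (illustrative names and signature, not the Elasticsearch API):

```java
public class MinimumMasterNodesValidationSketch {
    /**
     * Only an elected master, which actually knows the current master-eligible node
     * count, rejects values it cannot satisfy; a joining node accepts the dynamic value.
     */
    static void validateMinimumMasterNodes(boolean localNodeIsElectedMaster,
                                           int currentMasterNodeCount,
                                           int newValue) {
        if (localNodeIsElectedMaster && newValue > currentMasterNodeCount) {
            throw new IllegalArgumentException("cannot set discovery.zen.minimum_master_nodes"
                + " to more than the current master nodes count [" + currentMasterNodeCount + "]");
        }
    }

    public static void main(String[] args) {
        validateMinimumMasterNodes(false, 0, 3); // joining node with an empty node list: accepted
        validateMinimumMasterNodes(true, 3, 2);  // elected master, satisfiable value: accepted
        try {
            validateMinimumMasterNodes(true, 3, 5); // elected master, unsatisfiable value: rejected
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```
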
- clusterService.submitStateUpdateTask("initial_join", new ClusterStateUpdateTask() { - - @Override - public boolean runOnlyOnMaster() { - return false; - } + clusterService.submitStateUpdateTask("initial_join", new LocalClusterUpdateTask() { @Override - public ClusterState execute(ClusterState currentState) throws Exception { + public ClusterTasksResult execute(ClusterState currentState) throws Exception { // do the join on a different thread, the DiscoveryService waits for 30s anyhow till it is discovered joinThreadControl.startNewThreadIfNotRunning(); - return currentState; + return unchanged(); } @Override @@ -250,10 +251,10 @@ public void onFailure(String source, @org.elasticsearch.common.Nullable Exceptio @Override protected void doStop() { joinThreadControl.stop(); - pingService.stop(); masterFD.stop("zen disco stop"); nodesFD.stop(); - DiscoveryNodes nodes = nodes(); + Releasables.close(zenPing); // stop any ongoing pinging + DiscoveryNodes nodes = clusterState().nodes(); if (sendLeaveRequest) { if (nodes.getMasterNode() == null) { // if we don't know who the master is, nothing to do here @@ -281,12 +282,8 @@ protected void doStop() { } @Override - protected void doClose() { - masterFD.close(); - nodesFD.close(); - publishClusterState.close(); - membership.close(); - pingService.close(); + protected void doClose() throws IOException { + IOUtils.close(masterFD, nodesFD); } @Override @@ -299,45 +296,40 @@ public String nodeDescription() { return clusterName.value() + "/" + clusterService.localNode().getId(); } - /** start of {@link org.elasticsearch.discovery.zen.ping.PingContextProvider } implementation */ - @Override - public DiscoveryNodes nodes() { - return clusterService.state().nodes(); - } - @Override - public boolean nodeHasJoinedClusterOnce() { - return clusterJoinsCounter.get() > 0; + public ClusterState clusterState() { + return clusterService.state(); } - /** end of {@link org.elasticsearch.discovery.zen.ping.PingContextProvider } implementation */ - - @Override public void publish(ClusterChangedEvent clusterChangedEvent, AckListener ackListener) { if (!clusterChangedEvent.state().getNodes().isLocalNodeElectedMaster()) { throw new IllegalStateException("Shouldn't publish state when not master"); } - nodesFD.updateNodesAndPing(clusterChangedEvent.state()); try { publishClusterState.publish(clusterChangedEvent, electMaster.minimumMasterNodes(), ackListener); } catch (FailedToCommitClusterStateException t) { // cluster service logs a WARN message logger.debug("failed to publish cluster state version [{}] (not enough nodes acknowledged, min master nodes [{}])", clusterChangedEvent.state().version(), electMaster.minimumMasterNodes()); - clusterService.submitStateUpdateTask("zen-disco-failed-to-publish", new ClusterStateUpdateTask(Priority.IMMEDIATE) { - @Override - public ClusterState execute(ClusterState currentState) { - return rejoin(currentState, "failed to publish to min_master_nodes"); - } - - @Override - public void onFailure(String source, Exception e) { - logger.error((Supplier) () -> new ParameterizedMessage("unexpected failure during [{}]", source), e); - } - - }); + submitRejoin("zen-disco-failed-to-publish"); throw t; } + + // update the set of nodes to ping after the new cluster state has been published + nodesFD.updateNodesAndPing(clusterChangedEvent.state()); + + // clean the pending cluster queue - we are currently master, so any pending cluster state should be failed + // note that we also clean the queue on master failure (see handleMasterGone) but a delayed 
cluster state publish + // from a stale master can still make it in the queue during the election (but not be committed) + publishClusterState.pendingStatesQueue().failAllStatesAndClear(new ElasticsearchException("elected as master")); + } + + /** + * Gets the current set of nodes involved in the node fault detection. + * NB: for testing purposes + */ + public Set getFaultDetectionNodes() { + return nodesFD.getNodes(); } @Override @@ -364,12 +356,15 @@ public boolean joiningCluster() { return joinThreadControl.joinThreadActive(); } - // used for testing public ClusterState[] pendingClusterStates() { return publishClusterState.pendingStatesQueue().pendingClusterStates(); } + PendingClusterStatesQueue pendingClusterStatesQueue() { + return publishClusterState.pendingStatesQueue(); + } + /** * the main function of a join thread. This function is guaranteed to join the cluster * or spawn a new join thread upon failure to do so. @@ -397,8 +392,6 @@ public void onElectedAsMaster(ClusterState state) { joinThreadControl.markThreadAsDone(currentThread); // we only starts nodesFD if we are master (it may be that we received a cluster state while pinging) nodesFD.updateNodesAndPing(state); // start the nodes FD - long count = clusterJoinsCounter.incrementAndGet(); - logger.trace("cluster joins counter set to [{}] (elected as master)", count); } @Override @@ -418,18 +411,13 @@ public void onFailure(Throwable t) { // finalize join through the cluster state update thread final DiscoveryNode finalMasterNode = masterNode; - clusterService.submitStateUpdateTask("finalize_join (" + masterNode + ")", new ClusterStateUpdateTask() { + clusterService.submitStateUpdateTask("finalize_join (" + masterNode + ")", new LocalClusterUpdateTask() { @Override - public boolean runOnlyOnMaster() { - return false; - } - - @Override - public ClusterState execute(ClusterState currentState) throws Exception { + public ClusterTasksResult execute(ClusterState currentState) throws Exception { if (!success) { // failed to join. Try again... joinThreadControl.markThreadAsDoneAndStartNew(currentThread); - return currentState; + return unchanged(); } if (currentState.getNodes().getMasterNode() == null) { @@ -437,7 +425,7 @@ public ClusterState execute(ClusterState currentState) throws Exception { // a valid master. logger.debug("no master node is set, despite of join request completing. retrying pings."); joinThreadControl.markThreadAsDoneAndStartNew(currentThread); - return currentState; + return unchanged(); } if (!currentState.getNodes().getMasterNode().equals(finalMasterNode)) { @@ -447,7 +435,7 @@ public ClusterState execute(ClusterState currentState) throws Exception { // Note: we do not have to start master fault detection here because it's set at {@link #processNextPendingClusterState } // when the first cluster state arrives. 
joinThreadControl.markThreadAsDone(currentThread); - return currentState; + return unchanged(); } @Override @@ -505,12 +493,27 @@ private boolean joinElectedMaster(DiscoveryNode masterNode) { } } + private void submitRejoin(String source) { + clusterService.submitStateUpdateTask(source, new LocalClusterUpdateTask(Priority.IMMEDIATE) { + @Override + public ClusterTasksResult execute(ClusterState currentState) { + return rejoin(currentState, source); + } + + @Override + public void onFailure(String source, Exception e) { + logger.error((Supplier) () -> new ParameterizedMessage("unexpected failure during [{}]", source), e); + } + + }); + } + // visible for testing static class NodeRemovalClusterStateTaskExecutor implements ClusterStateTaskExecutor, ClusterStateTaskListener { private final AllocationService allocationService; private final ElectMasterService electMasterService; - private final BiFunction rejoin; + private final Consumer rejoin; private final Logger logger; static class Task { @@ -518,7 +521,7 @@ static class Task { private final DiscoveryNode node; private final String reason; - public Task(final DiscoveryNode node, final String reason) { + Task(final DiscoveryNode node, final String reason) { this.node = node; this.reason = reason; } @@ -540,7 +543,7 @@ public String toString() { NodeRemovalClusterStateTaskExecutor( final AllocationService allocationService, final ElectMasterService electMasterService, - final BiFunction rejoin, + final Consumer rejoin, final Logger logger) { this.allocationService = allocationService; this.electMasterService = electMasterService; @@ -549,7 +552,7 @@ public String toString() { } @Override - public BatchResult execute(final ClusterState currentState, final List tasks) throws Exception { + public ClusterTasksResult execute(final ClusterState currentState, final List tasks) throws Exception { final DiscoveryNodes.Builder remainingNodesBuilder = DiscoveryNodes.builder(currentState.nodes()); boolean removed = false; for (final Task task : tasks) { @@ -563,18 +566,19 @@ public BatchResult execute(final ClusterState currentState, final Listbuilder().successes(tasks).build(currentState); + return ClusterTasksResult.builder().successes(tasks).build(currentState); } final ClusterState remainingNodesClusterState = remainingNodesClusterState(currentState, remainingNodesBuilder); - final BatchResult.Builder resultBuilder = BatchResult.builder().successes(tasks); - if (!electMasterService.hasEnoughMasterNodes(remainingNodesClusterState.nodes())) { - return resultBuilder.build(rejoin.apply(remainingNodesClusterState, "not enough master nodes")); + final ClusterTasksResult.Builder resultBuilder = ClusterTasksResult.builder().successes(tasks); + if (electMasterService.hasEnoughMasterNodes(remainingNodesClusterState.nodes()) == false) { + final int masterNodes = electMasterService.countMasterNodes(remainingNodesClusterState.nodes()); + rejoin.accept(LoggerMessageFormat.format("not enough master nodes (has [{}], but needed [{}])", + masterNodes, electMasterService.minimumMasterNodes())); + return resultBuilder.build(currentState); } else { - final RoutingAllocation.Result routingResult = - allocationService.deassociateDeadNodes(remainingNodesClusterState, true, describeTasks(tasks)); - return resultBuilder.build(ClusterState.builder(remainingNodesClusterState).routingResult(routingResult).build()); + return resultBuilder.build(allocationService.deassociateDeadNodes(remainingNodesClusterState, true, describeTasks(tasks))); } } @@ -613,7 +617,7 @@ private void 
handleLeaveRequest(final DiscoveryNode node) { } if (localNodeMaster()) { removeNode(node, "zen-disco-node-left", "left"); - } else if (node.equals(nodes().getMasterNode())) { + } else if (node.equals(clusterState().nodes().getMasterNode())) { handleMasterGone(node, null, "shut_down"); } } @@ -641,14 +645,14 @@ private void handleMinimumMasterNodesChanged(final int minimumMasterNodes) { // We only set the new value. If the master doesn't see enough nodes it will revoke it's mastership. return; } - clusterService.submitStateUpdateTask("zen-disco-mini-master-nodes-changed", new ClusterStateUpdateTask(Priority.IMMEDIATE) { + clusterService.submitStateUpdateTask("zen-disco-min-master-nodes-changed", new LocalClusterUpdateTask(Priority.IMMEDIATE) { @Override - public ClusterState execute(ClusterState currentState) { + public ClusterTasksResult execute(ClusterState currentState) { // check if we have enough master nodes, if not, we need to move into joining the cluster again if (!electMaster.hasEnoughMasterNodes(currentState.nodes())) { return rejoin(currentState, "not enough master nodes on change of minimum_master_nodes from [" + prevMinimumMasterNode + "] to [" + minimumMasterNodes + "]"); } - return currentState; + return unchanged(); } @@ -681,29 +685,19 @@ private void handleMasterGone(final DiscoveryNode masterNode, final Throwable ca logger.info((Supplier) () -> new ParameterizedMessage("master_left [{}], reason [{}]", masterNode, reason), cause); - clusterService.submitStateUpdateTask("master_failed (" + masterNode + ")", new ClusterStateUpdateTask(Priority.IMMEDIATE) { + clusterService.submitStateUpdateTask("master_failed (" + masterNode + ")", new LocalClusterUpdateTask(Priority.IMMEDIATE) { @Override - public boolean runOnlyOnMaster() { - return false; - } - - @Override - public ClusterState execute(ClusterState currentState) { + public ClusterTasksResult execute(ClusterState currentState) { if (!masterNode.equals(currentState.nodes().getMasterNode())) { // master got switched on us, no need to send anything - return currentState; + return unchanged(); } - DiscoveryNodes discoveryNodes = DiscoveryNodes.builder(currentState.nodes()) - // make sure the old master node, which has failed, is not part of the nodes we publish - .remove(masterNode) - .masterNodeId(null).build(); - // flush any pending cluster states from old master, so it will not be set as master again publishClusterState.pendingStatesQueue().failAllStatesAndClear(new ElasticsearchException("master left [{}]", reason)); - return rejoin(ClusterState.builder(currentState).nodes(discoveryNodes).build(), "master left (reason = " + reason + ")"); + return rejoin(currentState, "master left (reason = " + reason + ")"); } @Override @@ -711,29 +705,20 @@ public void onFailure(String source, Exception e) { logger.error((Supplier) () -> new ParameterizedMessage("unexpected failure during [{}]", source), e); } - @Override - public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) { - } - }); } void processNextPendingClusterState(String reason) { - clusterService.submitStateUpdateTask("zen-disco-receive(from master [" + reason + "])", new ClusterStateUpdateTask(Priority.URGENT) { - @Override - public boolean runOnlyOnMaster() { - return false; - } - + clusterService.submitStateUpdateTask("zen-disco-receive(from master [" + reason + "])", new LocalClusterUpdateTask(Priority.URGENT) { ClusterState newClusterState = null; @Override - public ClusterState execute(ClusterState currentState) { + public 
ClusterTasksResult execute(ClusterState currentState) { newClusterState = publishClusterState.pendingStatesQueue().getNextClusterStateToProcess(); // all pending states have been processed if (newClusterState == null) { - return currentState; + return unchanged(); } assert newClusterState.nodes().getMasterNode() != null : "received a cluster state without a master"; @@ -744,21 +729,12 @@ public ClusterState execute(ClusterState currentState) { } if (shouldIgnoreOrRejectNewClusterState(logger, currentState, newClusterState)) { - return currentState; - } - - // check to see that we monitor the correct master of the cluster - if (masterFD.masterNode() == null || !masterFD.masterNode().equals(newClusterState.nodes().getMasterNode())) { - masterFD.restart(newClusterState.nodes().getMasterNode(), "new cluster state received and we are monitoring the wrong master [" + masterFD.masterNode() + "]"); + return unchanged(); } - if (currentState.blocks().hasGlobalBlock(discoverySettings.getNoMasterBlock())) { // its a fresh update from the master as we transition from a start of not having a master to having one logger.debug("got first state from fresh master [{}]", newClusterState.nodes().getMasterNodeId()); - long count = clusterJoinsCounter.incrementAndGet(); - logger.trace("updated cluster join cluster to [{}]", count); - - return newClusterState; + return newState(newClusterState); } @@ -788,7 +764,7 @@ public ClusterState execute(ClusterState currentState) { builder.metaData(metaDataBuilder); } - return builder.build(); + return newState(builder.build()); } @Override @@ -808,6 +784,10 @@ public void onFailure(String source, Exception e) { public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) { try { if (newClusterState != null) { + // check to see that we monitor the correct master of the cluster + if (masterFD.masterNode() == null || !masterFD.masterNode().equals(newClusterState.nodes().getMasterNode())) { + masterFD.restart(newClusterState.nodes().getMasterNode(), "new cluster state received and we are monitoring the wrong master [" + masterFD.masterNode() + "]"); + } publishClusterState.pendingStatesQueue().markAsProcessed(newClusterState); } } catch (Exception e) { @@ -864,16 +844,10 @@ void handleJoinRequest(final DiscoveryNode node, final ClusterState state, final } else if (nodeJoinController == null) { throw new IllegalStateException("discovery module is not yet started"); } else { - // The minimum supported version for a node joining a master: - Version minimumNodeJoinVersion = localNode().getVersion().minimumCompatibilityVersion(); - // Sanity check: maybe we don't end up here, because serialization may have failed. - if (node.getVersion().before(minimumNodeJoinVersion)) { - callback.onFailure( - new IllegalStateException("Can't handle join request from a node with a version [" + node.getVersion() + "] that is lower than the minimum compatible version [" + minimumNodeJoinVersion.minimumCompatibilityVersion() + "]") - ); - return; - } - + // we do this in a couple of places including the cluster update thread. This one here is really just best effort + // to ensure we fail as fast as possible. + MembershipAction.ensureNodesCompatibility(node.getVersion(), state.getNodes()); + MembershipAction.ensureIndexCompatibility(node.getVersion(), state.getMetaData()); // try and connect to the node, if it fails, we can raise an exception back to the client... 
transportService.connectToNode(node); @@ -892,14 +866,14 @@ void handleJoinRequest(final DiscoveryNode node, final ClusterState state, final private DiscoveryNode findMaster() { logger.trace("starting to ping"); - ZenPing.PingResponse[] fullPingResponses = pingService.pingAndWait(pingTimeout); + List fullPingResponses = pingAndWait(pingTimeout).toList(); if (fullPingResponses == null) { logger.trace("No full ping responses"); return null; } if (logger.isTraceEnabled()) { StringBuilder sb = new StringBuilder(); - if (fullPingResponses.length == 0) { + if (fullPingResponses.size() == 0) { sb.append(" {none}"); } else { for (ZenPing.PingResponse pingResponse : fullPingResponses) { @@ -909,69 +883,58 @@ private DiscoveryNode findMaster() { logger.trace("full ping responses:{}", sb); } + final DiscoveryNode localNode = clusterService.localNode(); + + // add our selves + assert fullPingResponses.stream().map(ZenPing.PingResponse::node) + .filter(n -> n.equals(localNode)).findAny().isPresent() == false; + + fullPingResponses.add(new ZenPing.PingResponse(localNode, null, clusterService.state())); + // filter responses final List pingResponses = filterPingResponses(fullPingResponses, masterElectionIgnoreNonMasters, logger); - final DiscoveryNode localNode = clusterService.localNode(); - List pingMasters = new ArrayList<>(); + List activeMasters = new ArrayList<>(); for (ZenPing.PingResponse pingResponse : pingResponses) { - if (pingResponse.master() != null) { - // We can't include the local node in pingMasters list, otherwise we may up electing ourselves without - // any check / verifications from other nodes in ZenDiscover#innerJoinCluster() - if (!localNode.equals(pingResponse.master())) { - pingMasters.add(pingResponse.master()); - } + // We can't include the local node in pingMasters list, otherwise we may up electing ourselves without + // any check / verifications from other nodes in ZenDiscover#innerJoinCluster() + if (pingResponse.master() != null && !localNode.equals(pingResponse.master())) { + activeMasters.add(pingResponse.master()); } } // nodes discovered during pinging - Set activeNodes = new HashSet<>(); - // nodes discovered who has previously been part of the cluster and do not ping for the very first time - Set joinedOnceActiveNodes = new HashSet<>(); - if (localNode.isMasterNode()) { - activeNodes.add(localNode); - long joinsCounter = clusterJoinsCounter.get(); - if (joinsCounter > 0) { - logger.trace("adding local node to the list of active nodes that have previously joined the cluster (joins counter is [{}])", joinsCounter); - joinedOnceActiveNodes.add(localNode); - } - } + List masterCandidates = new ArrayList<>(); for (ZenPing.PingResponse pingResponse : pingResponses) { - activeNodes.add(pingResponse.node()); - if (pingResponse.hasJoinedOnce()) { - joinedOnceActiveNodes.add(pingResponse.node()); + if (pingResponse.node().isMasterNode()) { + masterCandidates.add(new ElectMasterService.MasterCandidate(pingResponse.node(), pingResponse.getClusterStateVersion())); } } - if (pingMasters.isEmpty()) { - if (electMaster.hasEnoughMasterNodes(activeNodes)) { - // we give preference to nodes who have previously already joined the cluster. 
Those will - // have a cluster state in memory, including an up to date routing table (which is not persistent to disk - // by the gateway) - DiscoveryNode master = electMaster.electMaster(joinedOnceActiveNodes); - if (master != null) { - return master; - } - return electMaster.electMaster(activeNodes); + if (activeMasters.isEmpty()) { + if (electMaster.hasEnoughCandidates(masterCandidates)) { + final ElectMasterService.MasterCandidate winner = electMaster.electMaster(masterCandidates); + logger.trace("candidate {} won election", winner); + return winner.getNode(); } else { // if we don't have enough master nodes, we bail, because there are not enough master to elect from - logger.trace("not enough master nodes [{}]", activeNodes); + logger.warn("not enough master nodes discovered during pinging (found [{}], but needed [{}]), pinging again", + masterCandidates, electMaster.minimumMasterNodes()); return null; } } else { - - assert !pingMasters.contains(localNode) : "local node should never be elected as master when other nodes indicate an active master"; + assert !activeMasters.contains(localNode) : "local node should never be elected as master when other nodes indicate an active master"; // lets tie break between discovered nodes - return electMaster.electMaster(pingMasters); + return electMaster.tieBreakActiveMasters(activeMasters); } } - static List filterPingResponses(ZenPing.PingResponse[] fullPingResponses, boolean masterElectionIgnoreNonMasters, Logger logger) { + static List filterPingResponses(List fullPingResponses, boolean masterElectionIgnoreNonMasters, Logger logger) { List pingResponses; if (masterElectionIgnoreNonMasters) { - pingResponses = Arrays.stream(fullPingResponses).filter(ping -> ping.node().isMasterNode()).collect(Collectors.toList()); + pingResponses = fullPingResponses.stream().filter(ping -> ping.node().isMasterNode()).collect(Collectors.toList()); } else { - pingResponses = Arrays.asList(fullPingResponses); + pingResponses = fullPingResponses; } if (logger.isDebugEnabled()) { @@ -988,7 +951,7 @@ static List filterPingResponses(ZenPing.PingResponse[] ful return pingResponses; } - protected ClusterState rejoin(ClusterState clusterState, String reason) { + protected ClusterStateTaskExecutor.ClusterTasksResult rejoin(ClusterState clusterState, String reason) { // *** called from within an cluster state update task *** // assert Thread.currentThread().getName().contains(ClusterService.UPDATE_THREAD_NAME); @@ -997,29 +960,17 @@ protected ClusterState rejoin(ClusterState clusterState, String reason) { nodesFD.stop(); masterFD.stop(reason); - - ClusterBlocks clusterBlocks = ClusterBlocks.builder().blocks(clusterState.blocks()) - .addGlobalBlock(discoverySettings.getNoMasterBlock()) - .build(); - - // clean the nodes, we are now not connected to anybody, since we try and reform the cluster - DiscoveryNodes discoveryNodes = new DiscoveryNodes.Builder(clusterState.nodes()).masterNodeId(null).build(); - // TODO: do we want to force a new thread if we actively removed the master? this is to give a full pinging cycle // before a decision is made. 
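
With the joins counter gone, findMaster now elects from MasterCandidate objects that carry each master-eligible node's last known cluster state version. The comparison itself lives in ElectMasterService and is not part of this hunk; the sketch below assumes a preference for the freshest cluster state with node id as a tie breaker, purely as an illustration:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class MasterCandidateSketch {
    // hypothetical stand-in for ElectMasterService.MasterCandidate
    static final class Candidate {
        final String nodeId;
        final long clusterStateVersion;
        Candidate(String nodeId, long clusterStateVersion) {
            this.nodeId = nodeId;
            this.clusterStateVersion = clusterStateVersion;
        }
        @Override public String toString() { return nodeId + " (cluster state v" + clusterStateVersion + ")"; }
    }

    public static void main(String[] args) {
        List<Candidate> candidates = Arrays.asList(
            new Candidate("node_c", 12), new Candidate("node_b", 17), new Candidate("node_a", 17));
        // assumed preference: freshest cluster state first, node id breaks ties
        Candidate winner = candidates.stream()
            .min(Comparator.comparingLong((Candidate c) -> c.clusterStateVersion).reversed()
                .thenComparing(c -> c.nodeId))
            .orElseThrow(IllegalStateException::new);
        System.out.println("candidate " + winner + " won election"); // node_a (cluster state v17)
    }
}
```
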
joinThreadControl.startNewThreadIfNotRunning(); - - return ClusterState.builder(clusterState) - .blocks(clusterBlocks) - .nodes(discoveryNodes) - .build(); + return LocalClusterUpdateTask.noMaster(); } private boolean localNodeMaster() { - return nodes().isLocalNodeElectedMaster(); + return clusterState().nodes().isLocalNodeElectedMaster(); } - private ClusterState handleAnotherMaster(ClusterState localClusterState, final DiscoveryNode otherMaster, long otherClusterStateVersion, String reason) { + private ClusterStateTaskExecutor.ClusterTasksResult handleAnotherMaster(ClusterState localClusterState, final DiscoveryNode otherMaster, long otherClusterStateVersion, String reason) { assert localClusterState.nodes().isLocalNodeElectedMaster() : "handleAnotherMaster called but current node is not a master"; assert Thread.currentThread().getName().contains(ClusterService.UPDATE_THREAD_NAME) : "not called from the cluster state update thread"; @@ -1027,22 +978,49 @@ private ClusterState handleAnotherMaster(ClusterState localClusterState, final D return rejoin(localClusterState, "zen-disco-discovered another master with a new cluster_state [" + otherMaster + "][" + reason + "]"); } else { logger.warn("discovered [{}] which is also master but with an older cluster_state, telling [{}] to rejoin the cluster ([{}])", otherMaster, otherMaster, reason); - try { - // make sure we're connected to this node (connect to node does nothing if we're already connected) - // since the network connections are asymmetric, it may be that we received a state but have disconnected from the node - // in the past (after a master failure, for example) - transportService.connectToNode(otherMaster); - transportService.sendRequest(otherMaster, DISCOVERY_REJOIN_ACTION_NAME, new RejoinClusterRequest(localClusterState.nodes().getLocalNodeId()), new EmptyTransportResponseHandler(ThreadPool.Names.SAME) { - - @Override - public void handleException(TransportException exp) { - logger.warn((Supplier) () -> new ParameterizedMessage("failed to send rejoin request to [{}]", otherMaster), exp); - } - }); - } catch (Exception e) { - logger.warn((Supplier) () -> new ParameterizedMessage("failed to send rejoin request to [{}]", otherMaster), e); - } - return localClusterState; + // spawn to a background thread to not do blocking operations on the cluster state thread + threadPool.generic().execute(new AbstractRunnable() { + @Override + public void onFailure(Exception e) { + logger.warn((Supplier) () -> new ParameterizedMessage("failed to send rejoin request to [{}]", otherMaster), e); + } + + @Override + protected void doRun() throws Exception { + // make sure we're connected to this node (connect to node does nothing if we're already connected) + // since the network connections are asymmetric, it may be that we received a state but have disconnected from the node + // in the past (after a master failure, for example) + transportService.connectToNode(otherMaster); + transportService.sendRequest(otherMaster, DISCOVERY_REJOIN_ACTION_NAME, new RejoinClusterRequest(localNode().getId()), new EmptyTransportResponseHandler(ThreadPool.Names.SAME) { + + @Override + public void handleException(TransportException exp) { + logger.warn((Supplier) () -> new ParameterizedMessage("failed to send rejoin request to [{}]", otherMaster), exp); + } + }); + } + }); + return LocalClusterUpdateTask.unchanged(); + } + } + + private ZenPing.PingCollection pingAndWait(TimeValue timeout) { + final CompletableFuture response = new CompletableFuture<>(); + try { + 
zenPing.ping(response::complete, timeout); + } catch (Exception ex) { + // logged later + response.completeExceptionally(ex); + } + + try { + return response.get(); + } catch (InterruptedException e) { + logger.trace("pingAndWait interrupted"); + return new ZenPing.PingCollection(); + } catch (ExecutionException e) { + logger.warn("Ping execution failed", e); + return new ZenPing.PingCollection(); } } @@ -1089,12 +1067,16 @@ public void onPingReceived(final NodesFaultDetection.PingRequest pingRequest) { return; } logger.debug("got a ping from another master {}. resolving who should rejoin. current ping count: [{}]", pingRequest.masterNode(), pingsWhileMaster.get()); - clusterService.submitStateUpdateTask("ping from another master", new ClusterStateUpdateTask(Priority.IMMEDIATE) { + clusterService.submitStateUpdateTask("ping from another master", new LocalClusterUpdateTask(Priority.IMMEDIATE) { @Override - public ClusterState execute(ClusterState currentState) throws Exception { - pingsWhileMaster.set(0); - return handleAnotherMaster(currentState, pingRequest.masterNode(), pingRequest.clusterStateVersion(), "node fd ping"); + public ClusterTasksResult execute(ClusterState currentState) throws Exception { + if (currentState.nodes().isLocalNodeElectedMaster()) { + pingsWhileMaster.set(0); + return handleAnotherMaster(currentState, pingRequest.masterNode(), pingRequest.clusterStateVersion(), "node fd ping"); + } else { + return unchanged(); + } } @Override @@ -1140,15 +1122,10 @@ public void writeTo(StreamOutput out) throws IOException { class RejoinClusterRequestHandler implements TransportRequestHandler { @Override public void messageReceived(final RejoinClusterRequest request, final TransportChannel channel) throws Exception { - clusterService.submitStateUpdateTask("received a request to rejoin the cluster from [" + request.fromNodeId + "]", new ClusterStateUpdateTask(Priority.IMMEDIATE) { + clusterService.submitStateUpdateTask("received a request to rejoin the cluster from [" + request.fromNodeId + "]", new LocalClusterUpdateTask(Priority.IMMEDIATE) { @Override - public boolean runOnlyOnMaster() { - return false; - } - - @Override - public ClusterState execute(ClusterState currentState) { + public ClusterTasksResult execute(ClusterState currentState) { try { channel.sendResponse(TransportResponse.Empty.INSTANCE); } catch (Exception e) { @@ -1172,14 +1149,9 @@ public void onFailure(String source, Exception e) { */ private class JoinThreadControl { - private final ThreadPool threadPool; private final AtomicBoolean running = new AtomicBoolean(false); private final AtomicReference currentJoinThread = new AtomicReference<>(); - public JoinThreadControl(ThreadPool threadPool) { - this.threadPool = threadPool; - } - /** returns true if join thread control is started and there is currently an active join thread */ public boolean joinThreadActive() { Thread currentThread = currentJoinThread.get(); @@ -1192,7 +1164,7 @@ public boolean joinThreadActive(Thread joinThread) { } /** cleans any running joining thread and calls {@link #rejoin} */ - public ClusterState stopRunningThreadAndRejoin(ClusterState clusterState, String reason) { + public ClusterStateTaskExecutor.ClusterTasksResult stopRunningThreadAndRejoin(ClusterState clusterState, String reason) { ClusterService.assertClusterStateThread(); currentJoinThread.set(null); return rejoin(clusterState, reason); diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/ZenPing.java b/core/src/main/java/org/elasticsearch/discovery/zen/ZenPing.java 
new file mode 100644 index 0000000000000..016d2a5423ce1 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/discovery/zen/ZenPing.java @@ -0,0 +1,194 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.discovery.zen; + +import org.elasticsearch.cluster.ClusterName; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Streamable; +import org.elasticsearch.common.lease.Releasable; +import org.elasticsearch.common.unit.TimeValue; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.concurrent.atomic.AtomicLong; +import java.util.function.Consumer; + +import static org.elasticsearch.gateway.GatewayService.STATE_NOT_RECOVERED_BLOCK; + +public interface ZenPing extends Releasable { + + void start(PingContextProvider contextProvider); + + void ping(Consumer resultsConsumer, TimeValue timeout); + + class PingResponse implements Streamable { + + public static final PingResponse[] EMPTY = new PingResponse[0]; + + private static final AtomicLong idGenerator = new AtomicLong(); + + // an always increasing unique identifier for this ping response. + // lower values means older pings. + private long id; + + private ClusterName clusterName; + + private DiscoveryNode node; + + private DiscoveryNode master; + + private long clusterStateVersion; + + private PingResponse() { + } + + /** + * @param node the node which this ping describes + * @param master the current master of the node + * @param clusterName the cluster name of the node + * @param clusterStateVersion the current cluster state version of that node + * ({@link ElectMasterService.MasterCandidate#UNRECOVERED_CLUSTER_VERSION} for not recovered) + */ + PingResponse(DiscoveryNode node, DiscoveryNode master, ClusterName clusterName, long clusterStateVersion) { + this.id = idGenerator.incrementAndGet(); + this.node = node; + this.master = master; + this.clusterName = clusterName; + this.clusterStateVersion = clusterStateVersion; + } + + public PingResponse(DiscoveryNode node, DiscoveryNode master, ClusterState state) { + this(node, master, state.getClusterName(), + state.blocks().hasGlobalBlock(STATE_NOT_RECOVERED_BLOCK) ? + ElectMasterService.MasterCandidate.UNRECOVERED_CLUSTER_VERSION : state.version()); + } + + /** + * an always increasing unique identifier for this ping response. + * lower values means older pings. 
+ */ + public long id() { + return this.id; + } + + public ClusterName clusterName() { + return this.clusterName; + } + + /** the node which this ping describes */ + public DiscoveryNode node() { + return node; + } + + /** the current master of the node */ + public DiscoveryNode master() { + return master; + } + + /** + * the current cluster state version of that node ({@link ElectMasterService.MasterCandidate#UNRECOVERED_CLUSTER_VERSION} + * for not recovered) */ + public long getClusterStateVersion() { + return clusterStateVersion; + } + + public static PingResponse readPingResponse(StreamInput in) throws IOException { + PingResponse response = new PingResponse(); + response.readFrom(in); + return response; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + clusterName = new ClusterName(in); + node = new DiscoveryNode(in); + if (in.readBoolean()) { + master = new DiscoveryNode(in); + } + this.clusterStateVersion = in.readLong(); + this.id = in.readLong(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + clusterName.writeTo(out); + node.writeTo(out); + if (master == null) { + out.writeBoolean(false); + } else { + out.writeBoolean(true); + master.writeTo(out); + } + out.writeLong(clusterStateVersion); + out.writeLong(id); + } + + @Override + public String toString() { + return "ping_response{node [" + node + "], id[" + id + "], master [" + master + "]," + + "cluster_state_version [" + clusterStateVersion + "], cluster_name[" + clusterName.value() + "]}"; + } + } + + + /** + * a utility collection of pings where only the most recent ping is stored per node + */ + class PingCollection { + + Map pings; + + public PingCollection() { + pings = new HashMap<>(); + } + + /** + * adds a ping if newer than previous pings from the same node + * + * @return true if added, false o.w. + */ + public synchronized boolean addPing(PingResponse ping) { + PingResponse existingResponse = pings.get(ping.node()); + // in case both existing and new ping have the same id (probably because they come + // from nodes from version <1.4.0) we prefer to use the last added one. + if (existingResponse == null || existingResponse.id() <= ping.id()) { + pings.put(ping.node(), ping); + return true; + } + return false; + } + + /** serialize current pings to a list. It is guaranteed that the list contains one ping response per node */ + public synchronized List toList() { + return new ArrayList<>(pings.values()); + } + + /** the number of nodes for which there are known pings */ + public synchronized int size() { + return pings.size(); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/elect/ElectMasterService.java b/core/src/main/java/org/elasticsearch/discovery/zen/elect/ElectMasterService.java deleted file mode 100644 index 3ef9138f933b9..0000000000000 --- a/core/src/main/java/org/elasticsearch/discovery/zen/elect/ElectMasterService.java +++ /dev/null @@ -1,181 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.discovery.zen.elect; - -import com.carrotsearch.hppc.ObjectContainer; -import org.apache.lucene.util.CollectionUtil; -import org.elasticsearch.Version; -import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.common.component.AbstractComponent; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.settings.Setting; -import org.elasticsearch.common.settings.Setting.Property; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.util.CollectionUtils; - -import java.util.ArrayList; -import java.util.Arrays; -import java.util.Comparator; -import java.util.Iterator; -import java.util.List; - -/** - * - */ -public class ElectMasterService extends AbstractComponent { - - public static final Setting DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING = - Setting.intSetting("discovery.zen.minimum_master_nodes", -1, Property.Dynamic, Property.NodeScope); - - // This is the minimum version a master needs to be on, otherwise it gets ignored - // This is based on the minimum compatible version of the current version this node is on - private final Version minMasterVersion; - private final NodeComparator nodeComparator = new NodeComparator(); - - private volatile int minimumMasterNodes; - - @Inject - public ElectMasterService(Settings settings) { - super(settings); - this.minMasterVersion = Version.CURRENT.minimumCompatibilityVersion(); - this.minimumMasterNodes = DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.get(settings); - logger.debug("using minimum_master_nodes [{}]", minimumMasterNodes); - } - - public void minimumMasterNodes(int minimumMasterNodes) { - this.minimumMasterNodes = minimumMasterNodes; - } - - public int minimumMasterNodes() { - return minimumMasterNodes; - } - - public boolean hasEnoughMasterNodes(Iterable nodes) { - if (minimumMasterNodes < 1) { - return true; - } - int count = 0; - for (DiscoveryNode node : nodes) { - if (node.isMasterNode()) { - count++; - } - } - return count >= minimumMasterNodes; - } - - public boolean hasTooManyMasterNodes(Iterable nodes) { - int count = 0; - for (DiscoveryNode node : nodes) { - if (node.isMasterNode()) { - count++; - } - } - return count > 1 && minimumMasterNodes <= count / 2; - } - - public void logMinimumMasterNodesWarningIfNecessary(ClusterState oldState, ClusterState newState) { - // check if min_master_nodes setting is too low and log warning - if (hasTooManyMasterNodes(oldState.nodes()) == false && hasTooManyMasterNodes(newState.nodes())) { - logger.warn("value for setting \"{}\" is too low. This can result in data loss! Please set it to at least a quorum of master-" + - "eligible nodes (current value: [{}], total number of master-eligible nodes used for publishing in this round: [{}])", - ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey(), minimumMasterNodes(), - newState.getNodes().getMasterNodes().size()); - } - } - - /** - * Returns the given nodes sorted by likelihood of being elected as master, most likely first. 
- * Non-master nodes are not removed but are rather put in the end - */ - public List sortByMasterLikelihood(Iterable nodes) { - ArrayList sortedNodes = CollectionUtils.iterableAsArrayList(nodes); - CollectionUtil.introSort(sortedNodes, nodeComparator); - return sortedNodes; - } - - /** - * Returns a list of the next possible masters. - */ - public DiscoveryNode[] nextPossibleMasters(ObjectContainer nodes, int numberOfPossibleMasters) { - List sortedNodes = sortedMasterNodes(Arrays.asList(nodes.toArray(DiscoveryNode.class))); - if (sortedNodes == null) { - return new DiscoveryNode[0]; - } - List nextPossibleMasters = new ArrayList<>(numberOfPossibleMasters); - int counter = 0; - for (DiscoveryNode nextPossibleMaster : sortedNodes) { - if (++counter >= numberOfPossibleMasters) { - break; - } - nextPossibleMasters.add(nextPossibleMaster); - } - return nextPossibleMasters.toArray(new DiscoveryNode[nextPossibleMasters.size()]); - } - - /** - * Elects a new master out of the possible nodes, returning it. Returns null - * if no master has been elected. - */ - public DiscoveryNode electMaster(Iterable nodes) { - List sortedNodes = sortedMasterNodes(nodes); - if (sortedNodes == null || sortedNodes.isEmpty()) { - return null; - } - DiscoveryNode masterNode = sortedNodes.get(0); - // Sanity check: maybe we don't end up here, because serialization may have failed. - if (masterNode.getVersion().before(minMasterVersion)) { - logger.warn("ignoring master [{}], because the version [{}] is lower than the minimum compatible version [{}]", masterNode, masterNode.getVersion(), minMasterVersion); - return null; - } else { - return masterNode; - } - } - - private List sortedMasterNodes(Iterable nodes) { - List possibleNodes = CollectionUtils.iterableAsArrayList(nodes); - if (possibleNodes.isEmpty()) { - return null; - } - // clean non master nodes - for (Iterator it = possibleNodes.iterator(); it.hasNext(); ) { - DiscoveryNode node = it.next(); - if (!node.isMasterNode()) { - it.remove(); - } - } - CollectionUtil.introSort(possibleNodes, nodeComparator); - return possibleNodes; - } - - private static class NodeComparator implements Comparator { - - @Override - public int compare(DiscoveryNode o1, DiscoveryNode o2) { - if (o1.isMasterNode() && !o2.isMasterNode()) { - return -1; - } - if (!o1.isMasterNode() && o2.isMasterNode()) { - return 1; - } - return o1.getId().compareTo(o2.getId()); - } - } -} diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/membership/MembershipAction.java b/core/src/main/java/org/elasticsearch/discovery/zen/membership/MembershipAction.java deleted file mode 100644 index 961b8d79728ea..0000000000000 --- a/core/src/main/java/org/elasticsearch/discovery/zen/membership/MembershipAction.java +++ /dev/null @@ -1,222 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. 
See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.discovery.zen.membership; - -import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.common.component.AbstractComponent; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.discovery.zen.DiscoveryNodesProvider; -import org.elasticsearch.threadpool.ThreadPool; -import org.elasticsearch.transport.EmptyTransportResponseHandler; -import org.elasticsearch.transport.TransportChannel; -import org.elasticsearch.transport.TransportRequest; -import org.elasticsearch.transport.TransportRequestHandler; -import org.elasticsearch.transport.TransportResponse; -import org.elasticsearch.transport.TransportService; - -import java.io.IOException; -import java.util.concurrent.TimeUnit; - -/** - * - */ -public class MembershipAction extends AbstractComponent { - - public static final String DISCOVERY_JOIN_ACTION_NAME = "internal:discovery/zen/join"; - public static final String DISCOVERY_JOIN_VALIDATE_ACTION_NAME = "internal:discovery/zen/join/validate"; - public static final String DISCOVERY_LEAVE_ACTION_NAME = "internal:discovery/zen/leave"; - - public interface JoinCallback { - void onSuccess(); - - void onFailure(Exception e); - } - - public interface MembershipListener { - void onJoin(DiscoveryNode node, JoinCallback callback); - - void onLeave(DiscoveryNode node); - } - - private final TransportService transportService; - - private final DiscoveryNodesProvider nodesProvider; - - private final MembershipListener listener; - - public MembershipAction(Settings settings, TransportService transportService, DiscoveryNodesProvider nodesProvider, MembershipListener listener) { - super(settings); - this.transportService = transportService; - this.nodesProvider = nodesProvider; - this.listener = listener; - - transportService.registerRequestHandler(DISCOVERY_JOIN_ACTION_NAME, JoinRequest::new, ThreadPool.Names.GENERIC, new JoinRequestRequestHandler()); - transportService.registerRequestHandler(DISCOVERY_JOIN_VALIDATE_ACTION_NAME, ValidateJoinRequest::new, ThreadPool.Names.GENERIC, new ValidateJoinRequestRequestHandler()); - transportService.registerRequestHandler(DISCOVERY_LEAVE_ACTION_NAME, LeaveRequest::new, ThreadPool.Names.GENERIC, new LeaveRequestRequestHandler()); - } - - public void close() { - transportService.removeHandler(DISCOVERY_JOIN_ACTION_NAME); - transportService.removeHandler(DISCOVERY_JOIN_VALIDATE_ACTION_NAME); - transportService.removeHandler(DISCOVERY_LEAVE_ACTION_NAME); - } - - public void sendLeaveRequest(DiscoveryNode masterNode, DiscoveryNode node) { - transportService.sendRequest(node, DISCOVERY_LEAVE_ACTION_NAME, new LeaveRequest(masterNode), EmptyTransportResponseHandler.INSTANCE_SAME); - } - - public void sendLeaveRequestBlocking(DiscoveryNode masterNode, DiscoveryNode node, TimeValue timeout) { - transportService.submitRequest(masterNode, DISCOVERY_LEAVE_ACTION_NAME, new LeaveRequest(node), EmptyTransportResponseHandler.INSTANCE_SAME).txGet(timeout.millis(), TimeUnit.MILLISECONDS); - } - - public void sendJoinRequestBlocking(DiscoveryNode masterNode, DiscoveryNode node, TimeValue timeout) { - transportService.submitRequest(masterNode, DISCOVERY_JOIN_ACTION_NAME, new JoinRequest(node), 
EmptyTransportResponseHandler.INSTANCE_SAME) - .txGet(timeout.millis(), TimeUnit.MILLISECONDS); - } - - /** - * Validates the join request, throwing a failure if it failed. - */ - public void sendValidateJoinRequestBlocking(DiscoveryNode node, ClusterState state, TimeValue timeout) { - transportService.submitRequest(node, DISCOVERY_JOIN_VALIDATE_ACTION_NAME, new ValidateJoinRequest(state), EmptyTransportResponseHandler.INSTANCE_SAME) - .txGet(timeout.millis(), TimeUnit.MILLISECONDS); - } - - public static class JoinRequest extends TransportRequest { - - DiscoveryNode node; - - public JoinRequest() { - } - - private JoinRequest(DiscoveryNode node) { - this.node = node; - } - - @Override - public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - node = new DiscoveryNode(in); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - node.writeTo(out); - } - } - - - private class JoinRequestRequestHandler implements TransportRequestHandler { - - @Override - public void messageReceived(final JoinRequest request, final TransportChannel channel) throws Exception { - listener.onJoin(request.node, new JoinCallback() { - @Override - public void onSuccess() { - try { - channel.sendResponse(TransportResponse.Empty.INSTANCE); - } catch (Exception e) { - onFailure(e); - } - } - - @Override - public void onFailure(Exception e) { - try { - channel.sendResponse(e); - } catch (Exception inner) { - inner.addSuppressed(e); - logger.warn("failed to send back failure on join request", inner); - } - } - }); - } - } - - class ValidateJoinRequest extends TransportRequest { - private ClusterState state; - - ValidateJoinRequest() { - } - - ValidateJoinRequest(ClusterState state) { - this.state = state; - } - - @Override - public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - this.state = ClusterState.Builder.readFrom(in, nodesProvider.nodes().getLocalNode()); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - this.state.writeTo(out); - } - } - - class ValidateJoinRequestRequestHandler implements TransportRequestHandler { - - @Override - public void messageReceived(ValidateJoinRequest request, TransportChannel channel) throws Exception { - // for now, the mere fact that we can serialize the cluster state acts as validation.... 
- channel.sendResponse(TransportResponse.Empty.INSTANCE); - } - } - - public static class LeaveRequest extends TransportRequest { - - private DiscoveryNode node; - - public LeaveRequest() { - } - - private LeaveRequest(DiscoveryNode node) { - this.node = node; - } - - @Override - public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - node = new DiscoveryNode(in); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - node.writeTo(out); - } - } - - private class LeaveRequestRequestHandler implements TransportRequestHandler { - - @Override - public void messageReceived(LeaveRequest request, TransportChannel channel) throws Exception { - listener.onLeave(request.node); - channel.sendResponse(TransportResponse.Empty.INSTANCE); - } - } -} diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/ping/PingContextProvider.java b/core/src/main/java/org/elasticsearch/discovery/zen/ping/PingContextProvider.java deleted file mode 100644 index 568bc3ec16d75..0000000000000 --- a/core/src/main/java/org/elasticsearch/discovery/zen/ping/PingContextProvider.java +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.discovery.zen.ping; - -import org.elasticsearch.discovery.zen.DiscoveryNodesProvider; - -/** - * - */ -public interface PingContextProvider extends DiscoveryNodesProvider { - - /** return true if this node has previously joined the cluster at least once. False if this is first join */ - boolean nodeHasJoinedClusterOnce(); - -} diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/ping/ZenPing.java b/core/src/main/java/org/elasticsearch/discovery/zen/ping/ZenPing.java deleted file mode 100644 index 5a9f5f463e236..0000000000000 --- a/core/src/main/java/org/elasticsearch/discovery/zen/ping/ZenPing.java +++ /dev/null @@ -1,190 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.discovery.zen.ping; - -import org.elasticsearch.cluster.ClusterName; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.common.component.LifecycleComponent; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.unit.TimeValue; - -import java.io.IOException; -import java.util.HashMap; -import java.util.Map; -import java.util.concurrent.atomic.AtomicLong; - -public interface ZenPing extends LifecycleComponent { - - void setPingContextProvider(PingContextProvider contextProvider); - - void ping(PingListener listener, TimeValue timeout); - - public interface PingListener { - - void onPing(PingResponse[] pings); - } - - public static class PingResponse implements Streamable { - - public static final PingResponse[] EMPTY = new PingResponse[0]; - - private static final AtomicLong idGenerator = new AtomicLong(); - - // an always increasing unique identifier for this ping response. - // lower values means older pings. - private long id; - - private ClusterName clusterName; - - private DiscoveryNode node; - - private DiscoveryNode master; - - private boolean hasJoinedOnce; - - private PingResponse() { - } - - /** - * @param node the node which this ping describes - * @param master the current master of the node - * @param clusterName the cluster name of the node - * @param hasJoinedOnce true if the joined has successfully joined the cluster before - */ - public PingResponse(DiscoveryNode node, DiscoveryNode master, ClusterName clusterName, boolean hasJoinedOnce) { - this.id = idGenerator.incrementAndGet(); - this.node = node; - this.master = master; - this.clusterName = clusterName; - this.hasJoinedOnce = hasJoinedOnce; - } - - /** - * an always increasing unique identifier for this ping response. - * lower values means older pings. 
- */ - public long id() { - return this.id; - } - - public ClusterName clusterName() { - return this.clusterName; - } - - /** the node which this ping describes */ - public DiscoveryNode node() { - return node; - } - - /** the current master of the node */ - public DiscoveryNode master() { - return master; - } - - /** true if the joined has successfully joined the cluster before */ - public boolean hasJoinedOnce() { - return hasJoinedOnce; - } - - public static PingResponse readPingResponse(StreamInput in) throws IOException { - PingResponse response = new PingResponse(); - response.readFrom(in); - return response; - } - - @Override - public void readFrom(StreamInput in) throws IOException { - clusterName = new ClusterName(in); - node = new DiscoveryNode(in); - if (in.readBoolean()) { - master = new DiscoveryNode(in); - } - this.hasJoinedOnce = in.readBoolean(); - this.id = in.readLong(); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - clusterName.writeTo(out); - node.writeTo(out); - if (master == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - master.writeTo(out); - } - out.writeBoolean(hasJoinedOnce); - out.writeLong(id); - } - - @Override - public String toString() { - return "ping_response{node [" + node + "], id[" + id + "], master [" + master + "], hasJoinedOnce [" + hasJoinedOnce + "], cluster_name[" + clusterName.value() + "]}"; - } - } - - - /** - * a utility collection of pings where only the most recent ping is stored per node - */ - public static class PingCollection { - - Map pings; - - public PingCollection() { - pings = new HashMap<>(); - } - - /** - * adds a ping if newer than previous pings from the same node - * - * @return true if added, false o.w. - */ - public synchronized boolean addPing(PingResponse ping) { - PingResponse existingResponse = pings.get(ping.node()); - // in case both existing and new ping have the same id (probably because they come - // from nodes from version <1.4.0) we prefer to use the last added one. - if (existingResponse == null || existingResponse.id() <= ping.id()) { - pings.put(ping.node(), ping); - return true; - } - return false; - } - - /** adds multiple pings if newer than previous pings from the same node */ - public synchronized void addPings(PingResponse[] pings) { - for (PingResponse ping : pings) { - addPing(ping); - } - } - - /** serialize current pings to an array */ - public synchronized PingResponse[] toArray() { - return pings.values().toArray(new PingResponse[pings.size()]); - } - - /** the number of nodes for which there are known pings */ - public synchronized int size() { - return pings.size(); - } - } -} diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/ping/ZenPingService.java b/core/src/main/java/org/elasticsearch/discovery/zen/ping/ZenPingService.java deleted file mode 100644 index bd5855666aca2..0000000000000 --- a/core/src/main/java/org/elasticsearch/discovery/zen/ping/ZenPingService.java +++ /dev/null @@ -1,137 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.discovery.zen.ping; - -import org.elasticsearch.common.component.AbstractLifecycleComponent; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException; - -import java.util.ArrayList; -import java.util.Collections; -import java.util.List; -import java.util.Set; -import java.util.concurrent.CountDownLatch; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.concurrent.atomic.AtomicReference; - -public class ZenPingService extends AbstractLifecycleComponent implements ZenPing { - - private List zenPings = Collections.emptyList(); - - @Inject - public ZenPingService(Settings settings, Set zenPings) { - super(settings); - this.zenPings = Collections.unmodifiableList(new ArrayList<>(zenPings)); - } - - public List zenPings() { - return this.zenPings; - } - - @Override - public void setPingContextProvider(PingContextProvider contextProvider) { - if (lifecycle.started()) { - throw new IllegalStateException("Can't set nodes provider when started"); - } - for (ZenPing zenPing : zenPings) { - zenPing.setPingContextProvider(contextProvider); - } - } - - @Override - protected void doStart() { - for (ZenPing zenPing : zenPings) { - zenPing.start(); - } - } - - @Override - protected void doStop() { - for (ZenPing zenPing : zenPings) { - zenPing.stop(); - } - } - - @Override - protected void doClose() { - for (ZenPing zenPing : zenPings) { - zenPing.close(); - } - } - - public PingResponse[] pingAndWait(TimeValue timeout) { - final AtomicReference response = new AtomicReference<>(); - final CountDownLatch latch = new CountDownLatch(1); - ping(new PingListener() { - @Override - public void onPing(PingResponse[] pings) { - response.set(pings); - latch.countDown(); - } - }, timeout); - try { - latch.await(); - return response.get(); - } catch (InterruptedException e) { - logger.trace("pingAndWait interrupted"); - return null; - } - } - - @Override - public void ping(PingListener listener, TimeValue timeout) { - List zenPings = this.zenPings; - CompoundPingListener compoundPingListener = new CompoundPingListener(listener, zenPings); - for (ZenPing zenPing : zenPings) { - try { - zenPing.ping(compoundPingListener, timeout); - } catch (EsRejectedExecutionException ex) { - logger.debug("Ping execution rejected", ex); - compoundPingListener.onPing(null); - } - } - } - - private static class CompoundPingListener implements PingListener { - - private final PingListener listener; - - private final AtomicInteger counter; - - private PingCollection responses = new PingCollection(); - - private CompoundPingListener(PingListener listener, List zenPings) { - this.listener = listener; - this.counter = new AtomicInteger(zenPings.size()); - } - - @Override - public void onPing(PingResponse[] pings) { - if (pings != null) { - responses.addPings(pings); - } - if (counter.decrementAndGet() == 0) { - listener.onPing(responses.toArray()); - } - } - } -} diff --git 
a/core/src/main/java/org/elasticsearch/discovery/zen/ping/unicast/UnicastZenPing.java b/core/src/main/java/org/elasticsearch/discovery/zen/ping/unicast/UnicastZenPing.java deleted file mode 100644 index 176ac5763e353..0000000000000 --- a/core/src/main/java/org/elasticsearch/discovery/zen/ping/unicast/UnicastZenPing.java +++ /dev/null @@ -1,599 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.discovery.zen.ping.unicast; - -import com.carrotsearch.hppc.cursors.ObjectCursor; -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; -import org.apache.lucene.util.IOUtils; -import org.elasticsearch.ElasticsearchException; -import org.elasticsearch.Version; -import org.elasticsearch.cluster.ClusterName; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.node.DiscoveryNodes; -import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.UUIDs; -import org.elasticsearch.common.component.AbstractLifecycleComponent; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.settings.Setting; -import org.elasticsearch.common.settings.Setting.Property; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.transport.TransportAddress; -import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.util.CollectionUtils; -import org.elasticsearch.common.util.concurrent.AbstractRunnable; -import org.elasticsearch.common.util.concurrent.ConcurrentCollections; -import org.elasticsearch.common.util.concurrent.EsExecutors; -import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException; -import org.elasticsearch.discovery.zen.elect.ElectMasterService; -import org.elasticsearch.discovery.zen.ping.PingContextProvider; -import org.elasticsearch.discovery.zen.ping.ZenPing; -import org.elasticsearch.threadpool.ThreadPool; -import org.elasticsearch.transport.ConnectTransportException; -import org.elasticsearch.transport.RemoteTransportException; -import org.elasticsearch.transport.TransportChannel; -import org.elasticsearch.transport.TransportException; -import org.elasticsearch.transport.TransportRequest; -import org.elasticsearch.transport.TransportRequestHandler; -import org.elasticsearch.transport.TransportRequestOptions; -import org.elasticsearch.transport.TransportResponse; -import org.elasticsearch.transport.TransportResponseHandler; -import org.elasticsearch.transport.TransportService; - -import java.io.Closeable; -import java.io.IOException; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.HashSet; -import 
java.util.List; -import java.util.Map; -import java.util.Queue; -import java.util.Set; -import java.util.concurrent.CopyOnWriteArrayList; -import java.util.concurrent.CountDownLatch; -import java.util.concurrent.ExecutorService; -import java.util.concurrent.RejectedExecutionException; -import java.util.concurrent.ThreadFactory; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicBoolean; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.concurrent.atomic.AtomicReference; -import java.util.function.Function; - -import static java.util.Collections.emptyList; -import static java.util.Collections.emptyMap; -import static java.util.Collections.emptySet; -import static org.elasticsearch.common.util.concurrent.ConcurrentCollections.newConcurrentMap; -import static org.elasticsearch.discovery.zen.ping.ZenPing.PingResponse.readPingResponse; - -/** - * - */ -public class UnicastZenPing extends AbstractLifecycleComponent implements ZenPing { - - public static final String ACTION_NAME = "internal:discovery/zen/unicast"; - public static final Setting> DISCOVERY_ZEN_PING_UNICAST_HOSTS_SETTING = - Setting.listSetting("discovery.zen.ping.unicast.hosts", emptyList(), Function.identity(), - Property.NodeScope); - public static final Setting DISCOVERY_ZEN_PING_UNICAST_CONCURRENT_CONNECTS_SETTING = - Setting.intSetting("discovery.zen.ping.unicast.concurrent_connects", 10, 0, Property.NodeScope); - - // these limits are per-address - public static final int LIMIT_FOREIGN_PORTS_COUNT = 1; - public static final int LIMIT_LOCAL_PORTS_COUNT = 5; - - - private final ThreadPool threadPool; - private final TransportService transportService; - private final ClusterName clusterName; - private final ElectMasterService electMasterService; - - private final int concurrentConnects; - - private final DiscoveryNode[] configuredTargetNodes; - - private volatile PingContextProvider contextProvider; - - private final AtomicInteger pingHandlerIdGenerator = new AtomicInteger(); - - // used to generate unique ids for nodes/address we temporarily connect to - private final AtomicInteger unicastNodeIdGenerator = new AtomicInteger(); - - // used as a node id prefix for nodes/address we temporarily connect to - private static final String UNICAST_NODE_PREFIX = "#zen_unicast_"; - - private final Map receivedResponses = newConcurrentMap(); - - // a list of temporal responses a node will return for a request (holds requests from other configuredTargetNodes) - private final Queue temporalResponses = ConcurrentCollections.newQueue(); - - private final CopyOnWriteArrayList hostsProviders = new CopyOnWriteArrayList<>(); - - private final ExecutorService unicastConnectExecutor; - - private volatile boolean closed = false; - - @Inject - public UnicastZenPing(Settings settings, ThreadPool threadPool, TransportService transportService, - ElectMasterService electMasterService, @Nullable Set unicastHostsProviders) { - super(settings); - this.threadPool = threadPool; - this.transportService = transportService; - this.clusterName = ClusterName.CLUSTER_NAME_SETTING.get(settings); - this.electMasterService = electMasterService; - - if (unicastHostsProviders != null) { - for (UnicastHostsProvider unicastHostsProvider : unicastHostsProviders) { - addHostsProvider(unicastHostsProvider); - } - } - - this.concurrentConnects = DISCOVERY_ZEN_PING_UNICAST_CONCURRENT_CONNECTS_SETTING.get(settings); - List hosts = DISCOVERY_ZEN_PING_UNICAST_HOSTS_SETTING.get(settings); - final int limitPortCounts; - if 
(hosts.isEmpty()) { - // if unicast hosts are not specified, fill with simple defaults on the local machine - limitPortCounts = LIMIT_LOCAL_PORTS_COUNT; - hosts.addAll(transportService.getLocalAddresses()); - } else { - // we only limit to 1 addresses, makes no sense to ping 100 ports - limitPortCounts = LIMIT_FOREIGN_PORTS_COUNT; - } - - logger.debug("using initial hosts {}, with concurrent_connects [{}]", hosts, concurrentConnects); - - List configuredTargetNodes = new ArrayList<>(); - for (String host : hosts) { - try { - TransportAddress[] addresses = transportService.addressesFromString(host, limitPortCounts); - for (TransportAddress address : addresses) { - configuredTargetNodes.add(new DiscoveryNode(UNICAST_NODE_PREFIX + unicastNodeIdGenerator.incrementAndGet() + "#", - address, emptyMap(), emptySet(), getVersion().minimumCompatibilityVersion())); - } - } catch (Exception e) { - throw new IllegalArgumentException("Failed to resolve address for [" + host + "]", e); - } - } - this.configuredTargetNodes = configuredTargetNodes.toArray(new DiscoveryNode[configuredTargetNodes.size()]); - - transportService.registerRequestHandler(ACTION_NAME, UnicastPingRequest::new, ThreadPool.Names.SAME, - new UnicastPingRequestHandler()); - - ThreadFactory threadFactory = EsExecutors.daemonThreadFactory(settings, "[unicast_connect]"); - unicastConnectExecutor = EsExecutors.newScaling("unicast_connect", 0, concurrentConnects, 60, TimeUnit.SECONDS, - threadFactory, threadPool.getThreadContext()); - } - - @Override - protected void doStart() { - } - - @Override - protected void doStop() { - } - - @Override - protected void doClose() { - transportService.removeHandler(ACTION_NAME); - ThreadPool.terminate(unicastConnectExecutor, 0, TimeUnit.SECONDS); - try { - IOUtils.close(receivedResponses.values()); - } catch (IOException e) { - throw new ElasticsearchException("Error wile closing send ping handlers", e); - } - closed = true; - } - - public void addHostsProvider(UnicastHostsProvider provider) { - hostsProviders.add(provider); - } - - @Override - public void setPingContextProvider(PingContextProvider contextProvider) { - this.contextProvider = contextProvider; - } - - /** - * Clears the list of cached ping responses. - */ - public void clearTemporalResponses() { - temporalResponses.clear(); - } - - public PingResponse[] pingAndWait(TimeValue duration) { - final AtomicReference response = new AtomicReference<>(); - final CountDownLatch latch = new CountDownLatch(1); - ping(pings -> { - response.set(pings); - latch.countDown(); - }, duration); - try { - latch.await(); - return response.get(); - } catch (InterruptedException e) { - return null; - } - } - - @Override - public void ping(final PingListener listener, final TimeValue duration) { - final SendPingsHandler sendPingsHandler = new SendPingsHandler(pingHandlerIdGenerator.incrementAndGet()); - try { - receivedResponses.put(sendPingsHandler.id(), sendPingsHandler); - try { - sendPings(duration, null, sendPingsHandler); - } catch (RejectedExecutionException e) { - logger.debug("Ping execution rejected", e); - // The RejectedExecutionException can come from the fact unicastConnectExecutor is at its max down in sendPings - // But don't bail here, we can retry later on after the send ping has been scheduled. 
- } - - threadPool.schedule(TimeValue.timeValueMillis(duration.millis() / 2), ThreadPool.Names.GENERIC, new AbstractRunnable() { - @Override - protected void doRun() { - sendPings(duration, null, sendPingsHandler); - threadPool.schedule(TimeValue.timeValueMillis(duration.millis() / 2), ThreadPool.Names.GENERIC, new AbstractRunnable() { - @Override - protected void doRun() throws Exception { - sendPings(duration, TimeValue.timeValueMillis(duration.millis() / 2), sendPingsHandler); - sendPingsHandler.close(); - listener.onPing(sendPingsHandler.pingCollection().toArray()); - for (DiscoveryNode node : sendPingsHandler.nodeToDisconnect) { - logger.trace("[{}] disconnecting from {}", sendPingsHandler.id(), node); - transportService.disconnectFromNode(node); - } - } - - @Override - public void onFailure(Exception e) { - logger.debug("Ping execution failed", e); - sendPingsHandler.close(); - } - }); - } - - @Override - public void onFailure(Exception e) { - logger.debug("Ping execution failed", e); - sendPingsHandler.close(); - } - }); - } catch (EsRejectedExecutionException ex) { // TODO: remove this once ScheduledExecutor has support for AbstractRunnable - sendPingsHandler.close(); - // we are shutting down - } catch (Exception e) { - sendPingsHandler.close(); - throw new ElasticsearchException("Ping execution failed", e); - } - } - - class SendPingsHandler implements Closeable { - private final int id; - private final Set nodeToDisconnect = ConcurrentCollections.newConcurrentSet(); - private final PingCollection pingCollection; - - private AtomicBoolean closed = new AtomicBoolean(false); - - SendPingsHandler(int id) { - this.id = id; - this.pingCollection = new PingCollection(); - } - - public int id() { - return this.id; - } - - public boolean isClosed() { - return this.closed.get(); - } - - public PingCollection pingCollection() { - return pingCollection; - } - - @Override - public void close() { - if (closed.compareAndSet(false, true)) { - receivedResponses.remove(id); - } - } - } - - - void sendPings(final TimeValue timeout, @Nullable TimeValue waitTime, final SendPingsHandler sendPingsHandler) { - final UnicastPingRequest pingRequest = new UnicastPingRequest(); - pingRequest.id = sendPingsHandler.id(); - pingRequest.timeout = timeout; - DiscoveryNodes discoNodes = contextProvider.nodes(); - - pingRequest.pingResponse = createPingResponse(discoNodes); - - HashSet nodesToPingSet = new HashSet<>(); - for (PingResponse temporalResponse : temporalResponses) { - // Only send pings to nodes that have the same cluster name. 
- if (clusterName.equals(temporalResponse.clusterName())) { - nodesToPingSet.add(temporalResponse.node()); - } - } - - for (UnicastHostsProvider provider : hostsProviders) { - nodesToPingSet.addAll(provider.buildDynamicNodes()); - } - - // add all possible master nodes that were active in the last known cluster configuration - for (ObjectCursor masterNode : discoNodes.getMasterNodes().values()) { - nodesToPingSet.add(masterNode.value); - } - - // sort the nodes by likelihood of being an active master - List sortedNodesToPing = electMasterService.sortByMasterLikelihood(nodesToPingSet); - - // new add the unicast targets first - List nodesToPing = CollectionUtils.arrayAsArrayList(configuredTargetNodes); - nodesToPing.addAll(sortedNodesToPing); - - final CountDownLatch latch = new CountDownLatch(nodesToPing.size()); - for (final DiscoveryNode node : nodesToPing) { - // make sure we are connected - final boolean nodeFoundByAddress; - DiscoveryNode nodeToSend = discoNodes.findByAddress(node.getAddress()); - if (nodeToSend != null) { - nodeFoundByAddress = true; - } else { - nodeToSend = node; - nodeFoundByAddress = false; - } - - if (!transportService.nodeConnected(nodeToSend)) { - if (sendPingsHandler.isClosed()) { - return; - } - // if we find on the disco nodes a matching node by address, we are going to restore the connection - // anyhow down the line if its not connected... - // if we can't resolve the node, we don't know and we have to clean up after pinging. We do have - // to make sure we don't disconnect a true node which was temporarily removed from the DiscoveryNodes - // but will be added again during the pinging. We therefore create a new temporary node - if (!nodeFoundByAddress) { - if (!nodeToSend.getId().startsWith(UNICAST_NODE_PREFIX)) { - DiscoveryNode tempNode = new DiscoveryNode("", - UNICAST_NODE_PREFIX + unicastNodeIdGenerator.incrementAndGet() + "_" + nodeToSend.getId() + "#", - UUIDs.randomBase64UUID(), nodeToSend.getHostName(), nodeToSend.getHostAddress(), nodeToSend.getAddress(), - nodeToSend.getAttributes(), nodeToSend.getRoles(), nodeToSend.getVersion()); - - logger.trace("replacing {} with temp node {}", nodeToSend, tempNode); - nodeToSend = tempNode; - } - sendPingsHandler.nodeToDisconnect.add(nodeToSend); - } - // fork the connection to another thread - final DiscoveryNode finalNodeToSend = nodeToSend; - unicastConnectExecutor.execute(new Runnable() { - @Override - public void run() { - if (sendPingsHandler.isClosed()) { - return; - } - boolean success = false; - try { - // connect to the node, see if we manage to do it, if not, bail - if (!nodeFoundByAddress) { - logger.trace("[{}] connecting (light) to {}", sendPingsHandler.id(), finalNodeToSend); - transportService.connectToNodeLightAndHandshake(finalNodeToSend, timeout.getMillis()); - } else { - logger.trace("[{}] connecting to {}", sendPingsHandler.id(), finalNodeToSend); - transportService.connectToNode(finalNodeToSend); - } - logger.trace("[{}] connected to {}", sendPingsHandler.id(), node); - if (receivedResponses.containsKey(sendPingsHandler.id())) { - // we are connected and still in progress, send the ping request - sendPingRequestToNode(sendPingsHandler.id(), timeout, pingRequest, latch, node, finalNodeToSend); - } else { - // connect took too long, just log it and bail - latch.countDown(); - logger.trace("[{}] connect to {} was too long outside of ping window, bailing", - sendPingsHandler.id(), node); - } - success = true; - } catch (ConnectTransportException e) { - // can't connect to the node - this 
is a more common path! - logger.trace( - (Supplier) () -> new ParameterizedMessage( - "[{}] failed to connect to {}", sendPingsHandler.id(), finalNodeToSend), e); - } catch (RemoteTransportException e) { - // something went wrong on the other side - logger.debug( - (Supplier) () -> new ParameterizedMessage( - "[{}] received a remote error as a response to ping {}", sendPingsHandler.id(), finalNodeToSend), e); - } catch (Exception e) { - logger.warn( - (Supplier) () -> new ParameterizedMessage( - "[{}] failed send ping to {}", sendPingsHandler.id(), finalNodeToSend), e); - } finally { - if (!success) { - latch.countDown(); - } - } - } - }); - } else { - sendPingRequestToNode(sendPingsHandler.id(), timeout, pingRequest, latch, node, nodeToSend); - } - } - if (waitTime != null) { - try { - latch.await(waitTime.millis(), TimeUnit.MILLISECONDS); - } catch (InterruptedException e) { - // ignore - } - } - } - - private void sendPingRequestToNode(final int id, final TimeValue timeout, final UnicastPingRequest pingRequest, - final CountDownLatch latch, final DiscoveryNode node, final DiscoveryNode nodeToSend) { - logger.trace("[{}] sending to {}", id, nodeToSend); - transportService.sendRequest(nodeToSend, ACTION_NAME, pingRequest, TransportRequestOptions.builder() - .withTimeout((long) (timeout.millis() * 1.25)).build(), new TransportResponseHandler() { - - @Override - public UnicastPingResponse newInstance() { - return new UnicastPingResponse(); - } - - @Override - public String executor() { - return ThreadPool.Names.SAME; - } - - @Override - public void handleResponse(UnicastPingResponse response) { - logger.trace("[{}] received response from {}: {}", id, nodeToSend, Arrays.toString(response.pingResponses)); - try { - DiscoveryNodes discoveryNodes = contextProvider.nodes(); - for (PingResponse pingResponse : response.pingResponses) { - if (pingResponse.node().equals(discoveryNodes.getLocalNode())) { - // that's us, ignore - continue; - } - SendPingsHandler sendPingsHandler = receivedResponses.get(response.id); - if (sendPingsHandler == null) { - if (!closed) { - // Only log when we're not closing the node. Having no send ping handler is then expected - logger.warn("received ping response {} with no matching handler id [{}]", pingResponse, response.id); - } - } else { - sendPingsHandler.pingCollection().addPing(pingResponse); - } - } - } finally { - latch.countDown(); - } - } - - @Override - public void handleException(TransportException exp) { - latch.countDown(); - if (exp instanceof ConnectTransportException) { - // ok, not connected... 
- logger.trace((Supplier) () -> new ParameterizedMessage("failed to connect to {}", nodeToSend), exp); - } else { - logger.warn((Supplier) () -> new ParameterizedMessage("failed to send ping to [{}]", node), exp); - } - } - }); - } - - private UnicastPingResponse handlePingRequest(final UnicastPingRequest request) { - if (!lifecycle.started()) { - throw new IllegalStateException("received ping request while not started"); - } - temporalResponses.add(request.pingResponse); - threadPool.schedule(TimeValue.timeValueMillis(request.timeout.millis() * 2), ThreadPool.Names.SAME, new Runnable() { - @Override - public void run() { - temporalResponses.remove(request.pingResponse); - } - }); - - List pingResponses = CollectionUtils.iterableAsArrayList(temporalResponses); - pingResponses.add(createPingResponse(contextProvider.nodes())); - - - UnicastPingResponse unicastPingResponse = new UnicastPingResponse(); - unicastPingResponse.id = request.id; - unicastPingResponse.pingResponses = pingResponses.toArray(new PingResponse[pingResponses.size()]); - - return unicastPingResponse; - } - - class UnicastPingRequestHandler implements TransportRequestHandler { - - @Override - public void messageReceived(UnicastPingRequest request, TransportChannel channel) throws Exception { - channel.sendResponse(handlePingRequest(request)); - } - } - - public static class UnicastPingRequest extends TransportRequest { - - int id; - TimeValue timeout; - PingResponse pingResponse; - - public UnicastPingRequest() { - } - - @Override - public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - id = in.readInt(); - timeout = new TimeValue(in); - pingResponse = readPingResponse(in); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeInt(id); - timeout.writeTo(out); - pingResponse.writeTo(out); - } - } - - private PingResponse createPingResponse(DiscoveryNodes discoNodes) { - return new PingResponse(discoNodes.getLocalNode(), discoNodes.getMasterNode(), clusterName, - contextProvider.nodeHasJoinedClusterOnce()); - } - - static class UnicastPingResponse extends TransportResponse { - - int id; - - PingResponse[] pingResponses; - - UnicastPingResponse() { - } - - @Override - public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - id = in.readInt(); - pingResponses = new PingResponse[in.readVInt()]; - for (int i = 0; i < pingResponses.length; i++) { - pingResponses[i] = readPingResponse(in); - } - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeInt(id); - out.writeVInt(pingResponses.length); - for (PingResponse pingResponse : pingResponses) { - pingResponse.writeTo(out); - } - } - } - - protected Version getVersion() { - return Version.CURRENT; // for tests - } -} diff --git a/core/src/main/java/org/elasticsearch/env/Environment.java b/core/src/main/java/org/elasticsearch/env/Environment.java index 4b544aa38820c..8b039f79da4a7 100644 --- a/core/src/main/java/org/elasticsearch/env/Environment.java +++ b/core/src/main/java/org/elasticsearch/env/Environment.java @@ -49,11 +49,18 @@ // public+forbidden api! 
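The Environment hunk that follows replaces the plain `path.conf` / `path.data` / `path.logs` settings with variants that fall back to `default.path.*` counterparts, and its inline comments call the behaviour "trappy": `Setting#get(Settings)` resolves through the fallback while `Setting#exists(Settings)` still reports false for the concrete key. A minimal sketch of that interaction, assuming only the Setting API shown in the hunk (the class name and the log path are invented for illustration):

```java
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;

public class DefaultPathFallbackSketch {
    public static void main(String[] args) {
        // Only the "default." variant is set, as a packaging script might do;
        // the concrete path is made up for illustration.
        Settings settings = Settings.builder()
                .put("default.path.logs", "/var/log/elasticsearch")
                .build();

        // get(...) follows the fallback chain, so the value resolves to the default...
        String logs = Environment.PATH_LOGS_SETTING.get(settings);

        // ...but exists(...) only sees the concrete "path.logs" key, which is why the hunk
        // checks PATH_LOGS_SETTING.exists(settings) || DEFAULT_PATH_LOGS_SETTING.exists(settings)
        // before trusting the resolved value.
        System.out.println(logs
                + " explicit=" + Environment.PATH_LOGS_SETTING.exists(settings)
                + " viaDefault=" + Environment.DEFAULT_PATH_LOGS_SETTING.exists(settings));
    }
}
```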
public class Environment { public static final Setting PATH_HOME_SETTING = Setting.simpleString("path.home", Property.NodeScope); - public static final Setting PATH_CONF_SETTING = Setting.simpleString("path.conf", Property.NodeScope); - public static final Setting PATH_SCRIPTS_SETTING = Setting.simpleString("path.scripts", Property.NodeScope); + public static final Setting DEFAULT_PATH_CONF_SETTING = Setting.simpleString("default.path.conf", Property.NodeScope); + public static final Setting PATH_CONF_SETTING = + new Setting<>("path.conf", DEFAULT_PATH_CONF_SETTING, Function.identity(), Property.NodeScope); + public static final Setting PATH_SCRIPTS_SETTING = + Setting.simpleString("path.scripts", Property.NodeScope, Property.Deprecated); + public static final Setting> DEFAULT_PATH_DATA_SETTING = + Setting.listSetting("default.path.data", Collections.emptyList(), Function.identity(), Property.NodeScope); public static final Setting> PATH_DATA_SETTING = - Setting.listSetting("path.data", Collections.emptyList(), Function.identity(), Property.NodeScope); - public static final Setting PATH_LOGS_SETTING = Setting.simpleString("path.logs", Property.NodeScope); + Setting.listSetting("path.data", DEFAULT_PATH_DATA_SETTING, Function.identity(), Property.NodeScope); + public static final Setting DEFAULT_PATH_LOGS_SETTING = Setting.simpleString("default.path.logs", Property.NodeScope); + public static final Setting PATH_LOGS_SETTING = + new Setting<>("path.logs", DEFAULT_PATH_LOGS_SETTING, Function.identity(), Property.NodeScope); public static final Setting> PATH_REPO_SETTING = Setting.listSetting("path.repo", Collections.emptyList(), Function.identity(), Property.NodeScope); public static final Setting PATH_SHARED_DATA_SETTING = Setting.simpleString("path.shared_data", Property.NodeScope); @@ -61,6 +68,8 @@ public class Environment { private final Settings settings; + private final String configExtension; + private final Path[] dataFiles; private final Path[] dataWithClusterFiles; @@ -69,8 +78,6 @@ public class Environment { private final Path configFile; - private final Path scriptsFile; - private final Path pluginsFile; private final Path modulesFile; @@ -108,6 +115,12 @@ public class Environment { } public Environment(Settings settings) { + this(settings, null); + } + + // Note: Do not use this ctor, it is for correct deprecation logging in 5.5 and will be removed + public Environment(Settings settings, String configExtension) { + this.configExtension = configExtension; final Path homeFile; if (PATH_HOME_SETTING.exists(settings)) { homeFile = PathUtils.get(cleanPath(PATH_HOME_SETTING.get(settings))); @@ -115,18 +128,13 @@ public Environment(Settings settings) { throw new IllegalStateException(PATH_HOME_SETTING.getKey() + " is not configured"); } - if (PATH_CONF_SETTING.exists(settings)) { + // this is trappy, Setting#get(Settings) will get a fallback setting yet return false for Settings#exists(Settings) + if (PATH_CONF_SETTING.exists(settings) || DEFAULT_PATH_CONF_SETTING.exists(settings)) { configFile = PathUtils.get(cleanPath(PATH_CONF_SETTING.get(settings))); } else { configFile = homeFile.resolve("config"); } - if (PATH_SCRIPTS_SETTING.exists(settings)) { - scriptsFile = PathUtils.get(cleanPath(PATH_SCRIPTS_SETTING.get(settings))); - } else { - scriptsFile = configFile.resolve("scripts"); - } - pluginsFile = homeFile.resolve("plugins"); List dataPaths = PATH_DATA_SETTING.get(settings); @@ -156,7 +164,9 @@ public Environment(Settings settings) { } else { repoFiles = new Path[0]; } - if 
(PATH_LOGS_SETTING.exists(settings)) { + + // this is trappy, Setting#get(Settings) will get a fallback setting yet return false for Settings#exists(Settings) + if (PATH_LOGS_SETTING.exists(settings) || DEFAULT_PATH_LOGS_SETTING.exists(settings)) { logsFile = PathUtils.get(cleanPath(PATH_LOGS_SETTING.get(settings))); } else { logsFile = homeFile.resolve("logs"); @@ -174,7 +184,9 @@ public Environment(Settings settings) { Settings.Builder finalSettings = Settings.builder().put(settings); finalSettings.put(PATH_HOME_SETTING.getKey(), homeFile); - finalSettings.putArray(PATH_DATA_SETTING.getKey(), dataPaths); + if (PATH_DATA_SETTING.exists(settings)) { + finalSettings.putArray(PATH_DATA_SETTING.getKey(), dataPaths); + } finalSettings.put(PATH_LOGS_SETTING.getKey(), logsFile); this.settings = finalSettings.build(); @@ -274,8 +286,14 @@ public URL resolveRepoURL(URL url) { } } + /** Return then extension of the config file that was loaded, or*/ + public String configExtension() { + return configExtension; + } + + // TODO: rename all these "file" methods to "dir" /** - * The config location. + * The config directory. */ public Path configFile() { return configFile; @@ -285,7 +303,11 @@ public Path configFile() { * Location of on-disk scripts */ public Path scriptsFile() { - return scriptsFile; + if (PATH_SCRIPTS_SETTING.exists(settings)) { + return PathUtils.get(cleanPath(PATH_SCRIPTS_SETTING.get(settings))); + } else { + return configFile.resolve("scripts"); + } } public Path pluginsFile() { diff --git a/core/src/main/java/org/elasticsearch/env/NodeEnvironment.java b/core/src/main/java/org/elasticsearch/env/NodeEnvironment.java index df9514cdf8830..469cd99cb72a4 100644 --- a/core/src/main/java/org/elasticsearch/env/NodeEnvironment.java +++ b/core/src/main/java/org/elasticsearch/env/NodeEnvironment.java @@ -45,6 +45,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.gateway.MetaDataStateFormat; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexSettings; @@ -216,7 +217,7 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce "Elasticsearch 6.0 will not allow the cluster name as a folder within the data path", dataDir); dataDir = dataDirWithClusterName; } - Path dir = dataDir.resolve(NODES_FOLDER).resolve(Integer.toString(possibleLockId)); + Path dir = resolveNodePath(dataDir, possibleLockId); Files.createDirectories(dir); try (Directory luceneDir = FSDirectory.open(dir, NativeFSLockFactory.INSTANCE)) { @@ -226,7 +227,8 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce nodePaths[dirIndex] = new NodePath(dir); nodeLockId = possibleLockId; } catch (LockObtainFailedException ex) { - startupTraceLogger.trace("failed to obtain node lock on {}", dir.toAbsolutePath()); + startupTraceLogger.trace( + new ParameterizedMessage("failed to obtain node lock on {}", dir.toAbsolutePath()), ex); // release all the ones that were obtained up until now releaseAndNullLocks(locks); break; @@ -282,6 +284,17 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce } } + /** + * Resolve a specific nodes/{node.id} path for the specified path and node lock id. 
+ * + * @param path the path + * @param nodeLockId the node lock id + * @return the resolved path + */ + public static Path resolveNodePath(final Path path, final int nodeLockId) { + return path.resolve(NODES_FOLDER).resolve(Integer.toString(nodeLockId)); + } + /** Returns true if the directory is empty */ private static boolean dirEmpty(final Path path) throws IOException { try (DirectoryStream stream = Files.newDirectoryStream(path)) { @@ -398,7 +411,7 @@ private void maybeLogHeapDetails() { private static NodeMetaData loadOrCreateNodeMetaData(Settings settings, Logger logger, NodePath... nodePaths) throws IOException { final Path[] paths = Arrays.stream(nodePaths).map(np -> np.path).toArray(Path[]::new); - NodeMetaData metaData = NodeMetaData.FORMAT.loadLatestState(logger, paths); + NodeMetaData metaData = NodeMetaData.FORMAT.loadLatestState(logger, NamedXContentRegistry.EMPTY, paths); if (metaData == null) { metaData = new NodeMetaData(generateNodeId(settings)); } @@ -757,6 +770,14 @@ public NodePath[] nodePaths() { return nodePaths; } + public int getNodeLockId() { + assertEnvIsLocked(); + if (nodePaths == null || locks == null) { + throw new IllegalStateException("node is not configured to store local location"); + } + return nodeLockId; + } + /** * Returns all index paths. */ @@ -769,6 +790,8 @@ public Path[] indexPaths(Index index) { return indexPaths; } + + /** * Returns all shard paths excluding custom shard path. Note: Shards are only allocated on one of the * returned paths. The returned array may contain paths to non-existing directories. @@ -797,19 +820,36 @@ public Set availableIndexFolders() throws IOException { assertEnvIsLocked(); Set indexFolders = new HashSet<>(); for (NodePath nodePath : nodePaths) { - Path indicesLocation = nodePath.indicesPath; - if (Files.isDirectory(indicesLocation)) { - try (DirectoryStream stream = Files.newDirectoryStream(indicesLocation)) { - for (Path index : stream) { - if (Files.isDirectory(index)) { - indexFolders.add(index.getFileName().toString()); - } + indexFolders.addAll(availableIndexFoldersForPath(nodePath)); + } + return indexFolders; + + } + + /** + * Return all directory names in the nodes/{node.id}/indices directory for the given node path. + * + * @param nodePath the path + * @return all directories that could be indices for the given node path. 
+ * @throws IOException if an I/O exception occurs traversing the filesystem + */ + public Set availableIndexFoldersForPath(final NodePath nodePath) throws IOException { + if (nodePaths == null || locks == null) { + throw new IllegalStateException("node is not configured to store local location"); + } + assertEnvIsLocked(); + final Set indexFolders = new HashSet<>(); + Path indicesLocation = nodePath.indicesPath; + if (Files.isDirectory(indicesLocation)) { + try (DirectoryStream stream = Files.newDirectoryStream(indicesLocation)) { + for (Path index : stream) { + if (Files.isDirectory(index)) { + indexFolders.add(index.getFileName().toString()); } } } } return indexFolders; - } /** @@ -917,11 +957,12 @@ public void ensureAtomicMoveSupported() throws IOException { final NodePath[] nodePaths = nodePaths(); for (NodePath nodePath : nodePaths) { assert Files.isDirectory(nodePath.path) : nodePath.path + " is not a directory"; - final Path src = nodePath.path.resolve("__es__.tmp"); - final Path target = nodePath.path.resolve("__es__.final"); + final Path src = nodePath.path.resolve(TEMP_FILE_NAME + ".tmp"); + final Path target = nodePath.path.resolve(TEMP_FILE_NAME + ".final"); try { + Files.deleteIfExists(src); Files.createFile(src); - Files.move(src, target, StandardCopyOption.ATOMIC_MOVE); + Files.move(src, target, StandardCopyOption.ATOMIC_MOVE, StandardCopyOption.REPLACE_EXISTING); } catch (AtomicMoveNotSupportedException ex) { throw new IllegalStateException("atomic_move is not supported by the filesystem on path [" + nodePath.path @@ -1025,14 +1066,19 @@ private void assertCanWrite() throws IOException { } } + // package private for testing + static final String TEMP_FILE_NAME = ".es_temp_file"; + private static void tryWriteTempFile(Path path) throws IOException { if (Files.exists(path)) { - Path resolve = path.resolve(".es_temp_file"); + Path resolve = path.resolve(TEMP_FILE_NAME); try { - Files.createFile(resolve); + // delete any lingering file from a previous failure Files.deleteIfExists(resolve); + Files.createFile(resolve); + Files.delete(resolve); } catch (IOException ex) { - throw new IOException("failed to write in data directory [" + path + "] write permission is required", ex); + throw new IOException("failed to test writes in data directory [" + path + "] write permission is required", ex); } } } diff --git a/core/src/main/java/org/elasticsearch/env/NodeMetaData.java b/core/src/main/java/org/elasticsearch/env/NodeMetaData.java index 60625b1852dd6..38a4fce9cdc3d 100644 --- a/core/src/main/java/org/elasticsearch/env/NodeMetaData.java +++ b/core/src/main/java/org/elasticsearch/env/NodeMetaData.java @@ -20,8 +20,6 @@ package org.elasticsearch.env; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; @@ -70,8 +68,7 @@ public String toString() { return "node_id [" + nodeId + "]"; } - private static ObjectParser PARSER = new ObjectParser<>("node_meta_data", - Builder::new); + private static ObjectParser PARSER = new ObjectParser<>("node_meta_data", Builder::new); static { PARSER.declareString(Builder::setNodeId, new ParseField(NODE_ID_KEY)); @@ -110,7 +107,7 @@ public void toXContent(XContentBuilder builder, NodeMetaData nodeMetaData) throw @Override public NodeMetaData fromXContent(XContentParser parser) throws 
IOException { - return PARSER.apply(parser, () -> ParseFieldMatcher.STRICT).build(); + return PARSER.apply(parser, null).build(); } }; } diff --git a/core/src/main/java/org/elasticsearch/env/ShardLockObtainFailedException.java b/core/src/main/java/org/elasticsearch/env/ShardLockObtainFailedException.java index d1a8ce3b6d480..21bba81adbb1f 100644 --- a/core/src/main/java/org/elasticsearch/env/ShardLockObtainFailedException.java +++ b/core/src/main/java/org/elasticsearch/env/ShardLockObtainFailedException.java @@ -19,30 +19,36 @@ package org.elasticsearch.env; +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.index.shard.ShardId; +import java.io.IOException; + /** * Exception used when the in-memory lock for a shard cannot be obtained */ -public class ShardLockObtainFailedException extends Exception { - private final ShardId shardId; +public class ShardLockObtainFailedException extends ElasticsearchException { public ShardLockObtainFailedException(ShardId shardId, String message) { - super(message); - this.shardId = shardId; + super(buildMessage(shardId, message)); + this.setShard(shardId); } public ShardLockObtainFailedException(ShardId shardId, String message, Throwable cause) { - super(message, cause); - this.shardId = shardId; + super(buildMessage(shardId, message), cause); + this.setShard(shardId); + } + + public ShardLockObtainFailedException(StreamInput in) throws IOException { + super(in); } - @Override - public String getMessage() { + private static String buildMessage(ShardId shardId, String message) { StringBuilder sb = new StringBuilder(); sb.append(shardId.toString()); sb.append(": "); - sb.append(super.getMessage()); + sb.append(message); return sb.toString(); } } diff --git a/core/src/main/java/org/elasticsearch/gateway/AsyncShardFetch.java b/core/src/main/java/org/elasticsearch/gateway/AsyncShardFetch.java index 42c40034b1056..e2bbae775e5d7 100644 --- a/core/src/main/java/org/elasticsearch/gateway/AsyncShardFetch.java +++ b/core/src/main/java/org/elasticsearch/gateway/AsyncShardFetch.java @@ -44,6 +44,7 @@ import java.util.List; import java.util.Map; import java.util.Set; +import java.util.concurrent.atomic.AtomicLong; import static java.util.Collections.emptySet; import static java.util.Collections.unmodifiableSet; @@ -67,10 +68,11 @@ public interface Lister, N protected final Logger logger; protected final String type; - private final ShardId shardId; + protected final ShardId shardId; private final Lister, T> action; private final Map> cache = new HashMap<>(); private final Set nodesToIgnore = new HashSet<>(); + private final AtomicLong round = new AtomicLong(); private boolean closed; @SuppressWarnings("unchecked") @@ -112,20 +114,22 @@ public synchronized FetchResult fetchData(DiscoveryNodes nodes, Set i } nodesToIgnore.addAll(ignoreNodes); fillShardCacheWithDataNodes(cache, nodes); - Set> nodesToFetch = findNodesToFetch(cache); + List> nodesToFetch = findNodesToFetch(cache); if (nodesToFetch.isEmpty() == false) { // mark all node as fetching and go ahead and async fetch them + // use a unique round id to detect stale responses in processAsyncFetch + final long fetchingRound = round.incrementAndGet(); for (NodeEntry nodeEntry : nodesToFetch) { - nodeEntry.markAsFetching(); + nodeEntry.markAsFetching(fetchingRound); } DiscoveryNode[] discoNodesToFetch = nodesToFetch.stream().map(NodeEntry::getNodeId).map(nodes::get) .toArray(DiscoveryNode[]::new); - asyncFetch(shardId, discoNodesToFetch); + 
asyncFetch(discoNodesToFetch, fetchingRound); } // if we are still fetching, return null to indicate it - if (hasAnyNodeFetching(cache) == true) { - return new FetchResult<>(shardId, null, emptySet(), emptySet()); + if (hasAnyNodeFetching(cache)) { + return new FetchResult<>(shardId, null, emptySet()); } else { // nothing to fetch, yay, build the return value Map fetchData = new HashMap<>(); @@ -137,7 +141,7 @@ public synchronized FetchResult fetchData(DiscoveryNodes nodes, Set i DiscoveryNode node = nodes.get(nodeId); if (node != null) { - if (nodeEntry.isFailed() == true) { + if (nodeEntry.isFailed()) { // if its failed, remove it from the list of nodes, so if this run doesn't work // we try again next round to fetch it again it.remove(); @@ -158,7 +162,7 @@ public synchronized FetchResult fetchData(DiscoveryNodes nodes, Set i if (failedNodes.isEmpty() == false || allIgnoreNodes.isEmpty() == false) { reroute(shardId, "nodes failed [" + failedNodes.size() + "], ignored [" + allIgnoreNodes.size() + "]"); } - return new FetchResult<>(shardId, fetchData, failedNodes, allIgnoreNodes); + return new FetchResult<>(shardId, fetchData, allIgnoreNodes); } } @@ -168,7 +172,7 @@ public synchronized FetchResult fetchData(DiscoveryNodes nodes, Set i * the shard (response + failures), issuing a reroute at the end of it to make sure there will be another round * of allocations taking this new data into account. */ - protected synchronized void processAsyncFetch(ShardId shardId, List responses, List failures) { + protected synchronized void processAsyncFetch(List responses, List failures, long fetchingRound) { if (closed) { // we are closed, no need to process this async fetch at all logger.trace("{} ignoring fetched [{}] results, already closed", shardId, type); @@ -179,15 +183,19 @@ protected synchronized void processAsyncFetch(ShardId shardId, List responses if (responses != null) { for (T response : responses) { NodeEntry nodeEntry = cache.get(response.getNode().getId()); - // if the entry is there, and not marked as failed already, process it - if (nodeEntry == null) { - continue; - } - if (nodeEntry.isFailed()) { - logger.trace("{} node {} has failed for [{}] (failure [{}])", shardId, nodeEntry.getNodeId(), type, nodeEntry.getFailure()); - } else { - logger.trace("{} marking {} as done for [{}], result is [{}]", shardId, nodeEntry.getNodeId(), type, response); - nodeEntry.doneFetching(response); + if (nodeEntry != null) { + if (nodeEntry.getFetchingRound() != fetchingRound) { + assert nodeEntry.getFetchingRound() > fetchingRound : "node entries only replaced by newer rounds"; + logger.trace("{} received response for [{}] from node {} for an older fetching round (expected: {} but was: {})", + shardId, nodeEntry.getNodeId(), type, nodeEntry.getFetchingRound(), fetchingRound); + } else if (nodeEntry.isFailed()) { + logger.trace("{} node {} has failed for [{}] (failure [{}])", shardId, nodeEntry.getNodeId(), type, + nodeEntry.getFailure()); + } else { + // if the entry is there, for the right fetching round and not marked as failed already, process it + logger.trace("{} marking {} as done for [{}], result is [{}]", shardId, nodeEntry.getNodeId(), type, response); + nodeEntry.doneFetching(response); + } } } } @@ -195,15 +203,24 @@ protected synchronized void processAsyncFetch(ShardId shardId, List responses for (FailedNodeException failure : failures) { logger.trace("{} processing failure {} for [{}]", shardId, failure, type); NodeEntry nodeEntry = cache.get(failure.nodeId()); - // if the entry is there, 
and not marked as failed already, process it - if (nodeEntry != null && nodeEntry.isFailed() == false) { - Throwable unwrappedCause = ExceptionsHelper.unwrapCause(failure.getCause()); - // if the request got rejected or timed out, we need to try it again next time... - if (unwrappedCause instanceof EsRejectedExecutionException || unwrappedCause instanceof ReceiveTimeoutTransportException || unwrappedCause instanceof ElasticsearchTimeoutException) { - nodeEntry.restartFetching(); - } else { - logger.warn((Supplier) () -> new ParameterizedMessage("{}: failed to list shard for {} on node [{}]", shardId, type, failure.nodeId()), failure); - nodeEntry.doneFetching(failure.getCause()); + if (nodeEntry != null) { + if (nodeEntry.getFetchingRound() != fetchingRound) { + assert nodeEntry.getFetchingRound() > fetchingRound : "node entries only replaced by newer rounds"; + logger.trace("{} received failure for [{}] from node {} for an older fetching round (expected: {} but was: {})", + shardId, nodeEntry.getNodeId(), type, nodeEntry.getFetchingRound(), fetchingRound); + } else if (nodeEntry.isFailed() == false) { + // if the entry is there, for the right fetching round and not marked as failed already, process it + Throwable unwrappedCause = ExceptionsHelper.unwrapCause(failure.getCause()); + // if the request got rejected or timed out, we need to try it again next time... + if (unwrappedCause instanceof EsRejectedExecutionException || + unwrappedCause instanceof ReceiveTimeoutTransportException || + unwrappedCause instanceof ElasticsearchTimeoutException) { + nodeEntry.restartFetching(); + } else { + logger.warn((Supplier) () -> new ParameterizedMessage("{}: failed to list shard for {} on node [{}]", + shardId, type, failure.nodeId()), failure); + nodeEntry.doneFetching(failure.getCause()); + } } } } @@ -241,8 +258,8 @@ private void fillShardCacheWithDataNodes(Map> shardCache, D * Finds all the nodes that need to be fetched. Those are nodes that have no * data, and are not in fetch mode. */ - private Set> findNodesToFetch(Map> shardCache) { - Set> nodesToFetch = new HashSet<>(); + private List> findNodesToFetch(Map> shardCache) { + List> nodesToFetch = new ArrayList<>(); for (NodeEntry nodeEntry : shardCache.values()) { if (nodeEntry.hasData() == false && nodeEntry.isFetching() == false) { nodesToFetch.add(nodeEntry); @@ -267,12 +284,12 @@ private boolean hasAnyNodeFetching(Map> shardCache) { * Async fetches data for the provided shard with the set of nodes that need to be fetched from. 
*/ // visible for testing - void asyncFetch(final ShardId shardId, final DiscoveryNode[] nodes) { + void asyncFetch(final DiscoveryNode[] nodes, long fetchingRound) { logger.trace("{} fetching [{}] from {}", shardId, type, nodes); action.list(shardId, nodes, new ActionListener>() { @Override public void onResponse(BaseNodesResponse response) { - processAsyncFetch(shardId, response.getNodes(), response.failures()); + processAsyncFetch(response.getNodes(), response.failures(), fetchingRound); } @Override @@ -281,7 +298,7 @@ public void onFailure(Exception e) { for (final DiscoveryNode node: nodes) { failures.add(new FailedNodeException(node.getId(), "total failure in fetching", e)); } - processAsyncFetch(shardId, null, failures); + processAsyncFetch(null, failures, fetchingRound); } }); } @@ -294,13 +311,11 @@ public static class FetchResult { private final ShardId shardId; private final Map data; - private final Set failedNodes; private final Set ignoreNodes; - public FetchResult(ShardId shardId, Map data, Set failedNodes, Set ignoreNodes) { + public FetchResult(ShardId shardId, Map data, Set ignoreNodes) { this.shardId = shardId; this.data = data; - this.failedNodes = failedNodes; this.ignoreNodes = ignoreNodes; } @@ -342,8 +357,9 @@ static class NodeEntry { private T value; private boolean valueSet; private Throwable failure; + private long fetchingRound; - public NodeEntry(String nodeId) { + NodeEntry(String nodeId) { this.nodeId = nodeId; } @@ -355,13 +371,14 @@ boolean isFetching() { return fetching; } - void markAsFetching() { + void markAsFetching(long fetchingRound) { assert fetching == false : "double marking a node as fetching"; - fetching = true; + this.fetching = true; + this.fetchingRound = fetchingRound; } void doneFetching(T value) { - assert fetching == true : "setting value but not in fetching mode"; + assert fetching : "setting value but not in fetching mode"; assert failure == null : "setting value when failure already set"; this.valueSet = true; this.value = value; @@ -369,7 +386,7 @@ void doneFetching(T value) { } void doneFetching(Throwable failure) { - assert fetching == true : "setting value but not in fetching mode"; + assert fetching : "setting value but not in fetching mode"; assert valueSet == false : "setting failure when already set value"; assert failure != null : "setting failure can't be null"; this.failure = failure; @@ -377,7 +394,7 @@ void doneFetching(Throwable failure) { } void restartFetching() { - assert fetching == true : "restarting fetching, but not in fetching mode"; + assert fetching : "restarting fetching, but not in fetching mode"; assert valueSet == false : "value can't be set when restarting fetching"; assert failure == null : "failure can't be set when restarting fetching"; this.fetching = false; @@ -388,7 +405,7 @@ boolean isFailed() { } boolean hasData() { - return valueSet == true || failure != null; + return valueSet || failure != null; } Throwable getFailure() { @@ -399,8 +416,12 @@ Throwable getFailure() { @Nullable T getValue() { assert failure == null : "trying to fetch value, but its marked as failed, check isFailed"; - assert valueSet == true : "value is not set, hasn't been fetched yet"; + assert valueSet : "value is not set, hasn't been fetched yet"; return value; } + + long getFetchingRound() { + return fetchingRound; + } } } diff --git a/core/src/main/java/org/elasticsearch/gateway/BaseGatewayShardAllocator.java b/core/src/main/java/org/elasticsearch/gateway/BaseGatewayShardAllocator.java new file mode 100644 index 
0000000000000..275311cf6a8f5 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/gateway/BaseGatewayShardAllocator.java @@ -0,0 +1,107 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.gateway; + +import org.apache.logging.log4j.Logger; +import org.elasticsearch.cluster.routing.RoutingNode; +import org.elasticsearch.cluster.routing.RoutingNodes; +import org.elasticsearch.cluster.routing.ShardRouting; +import org.elasticsearch.cluster.routing.allocation.AllocateUnassignedDecision; +import org.elasticsearch.cluster.routing.allocation.AllocationDecision; +import org.elasticsearch.cluster.routing.allocation.NodeAllocationResult; +import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; +import org.elasticsearch.cluster.routing.allocation.decider.Decision; +import org.elasticsearch.common.component.AbstractComponent; +import org.elasticsearch.common.settings.Settings; + +import java.util.ArrayList; +import java.util.List; + +/** + * An abstract class that implements basic functionality for allocating + * shards to nodes based on shard copies that already exist in the cluster. + * + * Individual implementations of this class are responsible for providing + * the logic to determine to which nodes (if any) those shards are allocated. + */ +public abstract class BaseGatewayShardAllocator extends AbstractComponent { + + public BaseGatewayShardAllocator(Settings settings) { + super(settings); + } + + /** + * Allocate unassigned shards to nodes (if any) where valid copies of the shard already exist. + * It is up to the individual implementations of {@link #makeAllocationDecision(ShardRouting, RoutingAllocation, Logger)} + * to make decisions on assigning shards to nodes. + * + * @param allocation the allocation state container object + */ + public void allocateUnassigned(RoutingAllocation allocation) { + final RoutingNodes routingNodes = allocation.routingNodes(); + final RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator = routingNodes.unassigned().iterator(); + while (unassignedIterator.hasNext()) { + final ShardRouting shard = unassignedIterator.next(); + final AllocateUnassignedDecision allocateUnassignedDecision = makeAllocationDecision(shard, allocation, logger); + + if (allocateUnassignedDecision.isDecisionTaken() == false) { + // no decision was taken by this allocator + continue; + } + + if (allocateUnassignedDecision.getAllocationDecision() == AllocationDecision.YES) { + unassignedIterator.initialize(allocateUnassignedDecision.getTargetNode().getId(), + allocateUnassignedDecision.getAllocationId(), + shard.primary() ? 
ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE : + allocation.clusterInfo().getShardSize(shard, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE), + allocation.changes()); + } else { + unassignedIterator.removeAndIgnore(allocateUnassignedDecision.getAllocationStatus(), allocation.changes()); + } + } + } + + /** + * Make a decision on the allocation of an unassigned shard. This method is used by + * {@link #allocateUnassigned(RoutingAllocation)} to make decisions about whether or not + * the shard can be allocated by this allocator and if so, to which node it will be allocated. + * + * @param unassignedShard the unassigned shard to allocate + * @param allocation the current routing state + * @param logger the logger + * @return an {@link AllocateUnassignedDecision} with the final decision of whether to allocate and details of the decision + */ + public abstract AllocateUnassignedDecision makeAllocationDecision(ShardRouting unassignedShard, + RoutingAllocation allocation, + Logger logger); + + /** + * Builds decisions for all nodes in the cluster, so that the explain API can provide information on + * allocation decisions for each node, while still waiting to allocate the shard (e.g. due to fetching shard data). + */ + protected List buildDecisionsForAllNodes(ShardRouting shard, RoutingAllocation allocation) { + List results = new ArrayList<>(); + for (RoutingNode node : allocation.routingNodes()) { + Decision decision = allocation.deciders().canAllocate(shard, node, allocation); + results.add(new NodeAllocationResult(node.node(), null, decision)); + } + return results; + } +} diff --git a/core/src/main/java/org/elasticsearch/gateway/DanglingIndicesState.java b/core/src/main/java/org/elasticsearch/gateway/DanglingIndicesState.java index 370778898fc2c..acfcadb2f51b5 100644 --- a/core/src/main/java/org/elasticsearch/gateway/DanglingIndicesState.java +++ b/core/src/main/java/org/elasticsearch/gateway/DanglingIndicesState.java @@ -20,9 +20,12 @@ package org.elasticsearch.gateway; import com.carrotsearch.hppc.cursors.ObjectCursor; +import org.elasticsearch.cluster.ClusterChangedEvent; +import org.elasticsearch.cluster.ClusterStateListener; import org.elasticsearch.cluster.metadata.IndexGraveyard; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.MetaData; +import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; @@ -48,7 +51,7 @@ * their state written on disk, but don't exists in the metadata of the cluster), and importing * them into the cluster. */ -public class DanglingIndicesState extends AbstractComponent { +public class DanglingIndicesState extends AbstractComponent implements ClusterStateListener { private final NodeEnvironment nodeEnv; private final MetaStateService metaStateService; @@ -58,11 +61,12 @@ public class DanglingIndicesState extends AbstractComponent { @Inject public DanglingIndicesState(Settings settings, NodeEnvironment nodeEnv, MetaStateService metaStateService, - LocalAllocateDangledIndices allocateDangledIndices) { + LocalAllocateDangledIndices allocateDangledIndices, ClusterService clusterService) { super(settings); this.nodeEnv = nodeEnv; this.metaStateService = metaStateService; this.allocateDangledIndices = allocateDangledIndices; + clusterService.addListener(this); } /** @@ -153,7 +157,7 @@ Map findNewDanglingIndices(final MetaData metaData) { * for allocation. 
*/ private void allocateDanglingIndices() { - if (danglingIndices.isEmpty() == true) { + if (danglingIndices.isEmpty()) { return; } try { @@ -174,4 +178,11 @@ public void onFailure(Throwable e) { logger.warn("failed to send allocate dangled", e); } } + + @Override + public void clusterChanged(ClusterChangedEvent event) { + if (event.state().blocks().disableStatePersistence() == false) { + processDanglingIndices(event.state().metaData()); + } + } } diff --git a/core/src/main/java/org/elasticsearch/gateway/Gateway.java b/core/src/main/java/org/elasticsearch/gateway/Gateway.java index 3030632a769c4..94284f66e223d 100644 --- a/core/src/main/java/org/elasticsearch/gateway/Gateway.java +++ b/core/src/main/java/org/elasticsearch/gateway/Gateway.java @@ -21,11 +21,12 @@ import com.carrotsearch.hppc.ObjectFloatHashMap; import com.carrotsearch.hppc.cursors.ObjectCursor; + import org.apache.logging.log4j.message.ParameterizedMessage; import org.elasticsearch.action.FailedNodeException; import org.elasticsearch.cluster.ClusterChangedEvent; import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.ClusterStateListener; +import org.elasticsearch.cluster.ClusterStateApplier; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.service.ClusterService; @@ -34,13 +35,13 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.discovery.Discovery; import org.elasticsearch.index.Index; -import org.elasticsearch.index.NodeServicesProvider; import org.elasticsearch.indices.IndicesService; import java.util.Arrays; +import java.util.Map; import java.util.function.Supplier; -public class Gateway extends AbstractComponent implements ClusterStateListener { +public class Gateway extends AbstractComponent implements ClusterStateApplier { private final ClusterService clusterService; @@ -50,19 +51,17 @@ public class Gateway extends AbstractComponent implements ClusterStateListener { private final Supplier minimumMasterNodesProvider; private final IndicesService indicesService; - private final NodeServicesProvider nodeServicesProvider; public Gateway(Settings settings, ClusterService clusterService, GatewayMetaState metaState, TransportNodesListGatewayMetaState listGatewayMetaState, Discovery discovery, - NodeServicesProvider nodeServicesProvider, IndicesService indicesService) { + IndicesService indicesService) { super(settings); - this.nodeServicesProvider = nodeServicesProvider; this.indicesService = indicesService; this.clusterService = clusterService; this.metaState = metaState; this.listGatewayMetaState = listGatewayMetaState; this.minimumMasterNodesProvider = discovery::getMinimumMasterNodes; - clusterService.addLast(this); + clusterService.addLowPriorityApplier(this); } public void performStateRecovery(final GatewayStateRecoveredListener listener) throws GatewayException { @@ -133,7 +132,7 @@ public void performStateRecovery(final GatewayStateRecoveredListener listener) t try { if (electedIndexMetaData.getState() == IndexMetaData.State.OPEN) { // verify that we can actually create this index - if not we recover it as closed with lots of warn logs - indicesService.verifyIndexMetadata(nodeServicesProvider, electedIndexMetaData, electedIndexMetaData); + indicesService.verifyIndexMetadata(electedIndexMetaData, electedIndexMetaData); } } catch (Exception e) { final Index electedIndex = electedIndexMetaData.getIndex(); @@ -148,18 +147,40 @@ public void performStateRecovery(final 
GatewayStateRecoveredListener listener) t } } final ClusterSettings clusterSettings = clusterService.getClusterSettings(); - metaDataBuilder.persistentSettings(clusterSettings.archiveUnknownOrBrokenSettings(metaDataBuilder.persistentSettings())); - metaDataBuilder.transientSettings(clusterSettings.archiveUnknownOrBrokenSettings(metaDataBuilder.transientSettings())); + metaDataBuilder.persistentSettings( + clusterSettings.archiveUnknownOrInvalidSettings( + metaDataBuilder.persistentSettings(), + e -> logUnknownSetting("persistent", e), + (e, ex) -> logInvalidSetting("persistent", e, ex))); + metaDataBuilder.transientSettings( + clusterSettings.archiveUnknownOrInvalidSettings( + metaDataBuilder.transientSettings(), + e -> logUnknownSetting("transient", e), + (e, ex) -> logInvalidSetting("transient", e, ex))); ClusterState.Builder builder = ClusterState.builder(clusterService.getClusterName()); builder.metaData(metaDataBuilder); listener.onSuccess(builder.build()); } + private void logUnknownSetting(String settingType, Map.Entry e) { + logger.warn("ignoring unknown {} setting: [{}] with value [{}]; archiving", settingType, e.getKey(), e.getValue()); + } + + private void logInvalidSetting(String settingType, Map.Entry e, IllegalArgumentException ex) { + logger.warn( + (org.apache.logging.log4j.util.Supplier) + () -> new ParameterizedMessage("ignoring invalid {} setting: [{}] with value [{}]; archiving", + settingType, + e.getKey(), + e.getValue()), + ex); + } + @Override - public void clusterChanged(final ClusterChangedEvent event) { + public void applyClusterState(final ClusterChangedEvent event) { // order is important, first metaState, and then shardsState // so dangling indices will be recorded - metaState.clusterChanged(event); + metaState.applyClusterState(event); } public interface GatewayStateRecoveredListener { diff --git a/core/src/main/java/org/elasticsearch/gateway/GatewayAllocator.java b/core/src/main/java/org/elasticsearch/gateway/GatewayAllocator.java index c84a9c3378ae2..371753f9085b4 100644 --- a/core/src/main/java/org/elasticsearch/gateway/GatewayAllocator.java +++ b/core/src/main/java/org/elasticsearch/gateway/GatewayAllocator.java @@ -22,15 +22,13 @@ import org.apache.logging.log4j.Logger; import org.elasticsearch.action.support.nodes.BaseNodeResponse; import org.elasticsearch.action.support.nodes.BaseNodesResponse; -import org.elasticsearch.cluster.ClusterChangedEvent; -import org.elasticsearch.cluster.ClusterStateListener; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.RoutingNodes; import org.elasticsearch.cluster.routing.RoutingService; import org.elasticsearch.cluster.routing.ShardRouting; -import org.elasticsearch.cluster.routing.allocation.FailedRerouteAllocation; +import org.elasticsearch.cluster.routing.allocation.AllocateUnassignedDecision; +import org.elasticsearch.cluster.routing.allocation.FailedShard; import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; -import org.elasticsearch.cluster.routing.allocation.StartedRerouteAllocation; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.inject.Inject; @@ -40,6 +38,7 @@ import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData; +import java.util.List; import java.util.concurrent.ConcurrentMap; /** @@ -62,44 +61,23 @@ public GatewayAllocator(Settings settings, final 
TransportNodesListGatewayStarte this.replicaShardAllocator = new InternalReplicaShardAllocator(settings, storeAction); } - /** - * Returns true if the given shard has an async fetch pending - */ - public boolean hasFetchPending(ShardId shardId, boolean primary) { - if (primary) { - AsyncShardFetch fetch = asyncFetchStarted.get(shardId); - if (fetch != null) { - return fetch.getNumberOfInFlightFetches() > 0; - } - } else { - AsyncShardFetch fetch = asyncFetchStore.get(shardId); - if (fetch != null) { - return fetch.getNumberOfInFlightFetches() > 0; - } - } - return false; - } - public void setReallocation(final ClusterService clusterService, final RoutingService routingService) { this.routingService = routingService; - clusterService.add(new ClusterStateListener() { - @Override - public void clusterChanged(ClusterChangedEvent event) { - boolean cleanCache = false; - DiscoveryNode localNode = event.state().nodes().getLocalNode(); - if (localNode != null) { - if (localNode.isMasterNode() == true && event.localNodeMaster() == false) { - cleanCache = true; - } - } else { + clusterService.addStateApplier(event -> { + boolean cleanCache = false; + DiscoveryNode localNode = event.state().nodes().getLocalNode(); + if (localNode != null) { + if (localNode.isMasterNode() && event.localNodeMaster() == false) { cleanCache = true; } - if (cleanCache) { - Releasables.close(asyncFetchStarted.values()); - asyncFetchStarted.clear(); - Releasables.close(asyncFetchStore.values()); - asyncFetchStore.clear(); - } + } else { + cleanCache = true; + } + if (cleanCache) { + Releasables.close(asyncFetchStarted.values()); + asyncFetchStarted.clear(); + Releasables.close(asyncFetchStore.values()); + asyncFetchStore.clear(); } }); } @@ -115,21 +93,28 @@ public int getNumberOfInFlightFetch() { return count; } - public void applyStartedShards(StartedRerouteAllocation allocation) { - for (ShardRouting shard : allocation.startedShards()) { - Releasables.close(asyncFetchStarted.remove(shard.shardId())); - Releasables.close(asyncFetchStore.remove(shard.shardId())); + public void applyStartedShards(final RoutingAllocation allocation, final List startedShards) { + for (ShardRouting startedShard : startedShards) { + Releasables.close(asyncFetchStarted.remove(startedShard.shardId())); + Releasables.close(asyncFetchStore.remove(startedShard.shardId())); } } - public void applyFailedShards(FailedRerouteAllocation allocation) { - for (FailedRerouteAllocation.FailedShard shard : allocation.failedShards()) { - Releasables.close(asyncFetchStarted.remove(shard.routingEntry.shardId())); - Releasables.close(asyncFetchStore.remove(shard.routingEntry.shardId())); + public void applyFailedShards(final RoutingAllocation allocation, final List failedShards) { + for (FailedShard failedShard : failedShards) { + Releasables.close(asyncFetchStarted.remove(failedShard.getRoutingEntry().shardId())); + Releasables.close(asyncFetchStore.remove(failedShard.getRoutingEntry().shardId())); } } public void allocateUnassigned(final RoutingAllocation allocation) { + innerAllocatedUnassigned(allocation, primaryShardAllocator, replicaShardAllocator); + } + + // allow for testing infra to change shard allocators implementation + protected static void innerAllocatedUnassigned(RoutingAllocation allocation, + PrimaryShardAllocator primaryShardAllocator, + ReplicaShardAllocator replicaShardAllocator) { RoutingNodes.UnassignedShards unassigned = allocation.routingNodes().unassigned(); unassigned.sort(PriorityComparator.getAllocationComparator(allocation)); // sort 
for priority ordering @@ -138,9 +123,21 @@ public void allocateUnassigned(final RoutingAllocation allocation) { replicaShardAllocator.allocateUnassigned(allocation); } + /** + * Computes and returns the design for allocating a single unassigned shard. If called on an assigned shard, + * {@link AllocateUnassignedDecision#NOT_TAKEN} is returned. + */ + public AllocateUnassignedDecision decideUnassignedShardAllocation(ShardRouting unassignedShard, RoutingAllocation routingAllocation) { + if (unassignedShard.primary()) { + return primaryShardAllocator.makeAllocationDecision(unassignedShard, routingAllocation, logger); + } else { + return replicaShardAllocator.makeAllocationDecision(unassignedShard, routingAllocation, logger); + } + } + class InternalAsyncFetch extends AsyncShardFetch { - public InternalAsyncFetch(Logger logger, String type, ShardId shardId, Lister, T> action) { + InternalAsyncFetch(Logger logger, String type, ShardId shardId, Lister, T> action) { super(logger, type, shardId, action); } @@ -155,22 +152,19 @@ class InternalPrimaryShardAllocator extends PrimaryShardAllocator { private final TransportNodesListGatewayStartedShards startedAction; - public InternalPrimaryShardAllocator(Settings settings, TransportNodesListGatewayStartedShards startedAction) { + InternalPrimaryShardAllocator(Settings settings, TransportNodesListGatewayStartedShards startedAction) { super(settings); this.startedAction = startedAction; } @Override protected AsyncShardFetch.FetchResult fetchData(ShardRouting shard, RoutingAllocation allocation) { - AsyncShardFetch fetch = asyncFetchStarted.get(shard.shardId()); - if (fetch == null) { - fetch = new InternalAsyncFetch<>(logger, "shard_started", shard.shardId(), startedAction); - asyncFetchStarted.put(shard.shardId(), fetch); - } + AsyncShardFetch fetch = + asyncFetchStarted.computeIfAbsent(shard.shardId(), shardId -> new InternalAsyncFetch<>(logger, "shard_started", shardId, startedAction)); AsyncShardFetch.FetchResult shardState = fetch.fetchData(allocation.nodes(), allocation.getIgnoreNodes(shard.shardId())); - if (shardState.hasData() == true) { + if (shardState.hasData()) { shardState.processAllocation(allocation); } return shardState; @@ -181,24 +175,26 @@ class InternalReplicaShardAllocator extends ReplicaShardAllocator { private final TransportNodesListShardStoreMetaData storeAction; - public InternalReplicaShardAllocator(Settings settings, TransportNodesListShardStoreMetaData storeAction) { + InternalReplicaShardAllocator(Settings settings, TransportNodesListShardStoreMetaData storeAction) { super(settings); this.storeAction = storeAction; } @Override protected AsyncShardFetch.FetchResult fetchData(ShardRouting shard, RoutingAllocation allocation) { - AsyncShardFetch fetch = asyncFetchStore.get(shard.shardId()); - if (fetch == null) { - fetch = new InternalAsyncFetch<>(logger, "shard_store", shard.shardId(), storeAction); - asyncFetchStore.put(shard.shardId(), fetch); - } + AsyncShardFetch fetch = + asyncFetchStore.computeIfAbsent(shard.shardId(), shardId -> new InternalAsyncFetch<>(logger, "shard_store", shard.shardId(), storeAction)); AsyncShardFetch.FetchResult shardStores = fetch.fetchData(allocation.nodes(), allocation.getIgnoreNodes(shard.shardId())); - if (shardStores.hasData() == true) { + if (shardStores.hasData()) { shardStores.processAllocation(allocation); } return shardStores; } + + @Override + protected boolean hasInitiatedFetching(ShardRouting shard) { + return asyncFetchStore.get(shard.shardId()) != null; + } } } diff --git 
a/core/src/main/java/org/elasticsearch/gateway/GatewayMetaState.java b/core/src/main/java/org/elasticsearch/gateway/GatewayMetaState.java index a05e85299a816..d798f8b7745b1 100644 --- a/core/src/main/java/org/elasticsearch/gateway/GatewayMetaState.java +++ b/core/src/main/java/org/elasticsearch/gateway/GatewayMetaState.java @@ -23,7 +23,7 @@ import org.elasticsearch.Version; import org.elasticsearch.cluster.ClusterChangedEvent; import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.ClusterStateListener; +import org.elasticsearch.cluster.ClusterStateApplier; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService; @@ -31,6 +31,7 @@ import org.elasticsearch.cluster.routing.RoutingNode; import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; @@ -50,15 +51,17 @@ import java.util.List; import java.util.Map; import java.util.Set; +import java.util.function.BiConsumer; +import java.util.function.Consumer; +import java.util.function.UnaryOperator; import static java.util.Collections.emptySet; import static java.util.Collections.unmodifiableSet; -public class GatewayMetaState extends AbstractComponent implements ClusterStateListener { +public class GatewayMetaState extends AbstractComponent implements ClusterStateApplier { private final NodeEnvironment nodeEnv; private final MetaStateService metaStateService; - private final DanglingIndicesState danglingIndicesState; @Nullable private volatile MetaData previousMetaData; @@ -67,13 +70,12 @@ public class GatewayMetaState extends AbstractComponent implements ClusterStateL @Inject public GatewayMetaState(Settings settings, NodeEnvironment nodeEnv, MetaStateService metaStateService, - DanglingIndicesState danglingIndicesState, TransportNodesListGatewayMetaState nodesListGatewayMetaState, + TransportNodesListGatewayMetaState nodesListGatewayMetaState, MetaDataIndexUpgradeService metaDataIndexUpgradeService, MetaDataUpgrader metaDataUpgrader) throws Exception { super(settings); this.nodeEnv = nodeEnv; this.metaStateService = metaStateService; - this.danglingIndicesState = danglingIndicesState; nodesListGatewayMetaState.init(this); if (DiscoveryNode.isDataNode(settings)) { @@ -117,7 +119,7 @@ public MetaData loadMetaState() throws Exception { } @Override - public void clusterChanged(ClusterChangedEvent event) { + public void applyClusterState(ClusterChangedEvent event) { final ClusterState state = event.state(); if (state.blocks().disableStatePersistence()) { @@ -181,7 +183,6 @@ public void clusterChanged(ClusterChangedEvent event) { } } - danglingIndicesState.processDanglingIndices(newMetaData); if (success) { previousMetaData = newMetaData; previouslyWrittenIndices = unmodifiableSet(relevantIndices); @@ -192,7 +193,7 @@ public static Set getRelevantIndices(ClusterState state, ClusterState pre Set relevantIndices; if (isDataOnlyNode(state)) { relevantIndices = getRelevantIndicesOnDataOnlyNode(state, previousState, previouslyWrittenIndices); - } else if (state.nodes().getLocalNode().isMasterNode() == true) { + } else if (state.nodes().getLocalNode().isMasterNode()) { relevantIndices = getRelevantIndicesForMasterEligibleNode(state); } else { relevantIndices 
= Collections.emptySet(); @@ -222,8 +223,8 @@ private void ensureNoPre019State() throws Exception { final String name = stateFile.getFileName().toString(); if (name.startsWith("metadata-")) { throw new IllegalStateException("Detected pre 0.19 metadata file please upgrade to a version before " - + Version.CURRENT.minimumCompatibilityVersion() - + " first to upgrade state structures - metadata found: [" + stateFile.getParent().toAbsolutePath()); + + Version.CURRENT.minimumCompatibilityVersion() + + " first to upgrade state structures - metadata found: [" + stateFile.getParent().toAbsolutePath()); } } } @@ -245,27 +246,46 @@ static MetaData upgradeMetaData(MetaData metaData, boolean changed = false; final MetaData.Builder upgradedMetaData = MetaData.builder(metaData); for (IndexMetaData indexMetaData : metaData) { - IndexMetaData newMetaData = metaDataIndexUpgradeService.upgradeIndexMetaData(indexMetaData); + IndexMetaData newMetaData = metaDataIndexUpgradeService.upgradeIndexMetaData(indexMetaData, + Version.CURRENT.minimumIndexCompatibilityVersion()); changed |= indexMetaData != newMetaData; upgradedMetaData.put(newMetaData, false); } - // collect current customs - Map existingCustoms = new HashMap<>(); - for (ObjectObjectCursor customCursor : metaData.customs()) { - existingCustoms.put(customCursor.key, customCursor.value); - } // upgrade global custom meta data - Map upgradedCustoms = metaDataUpgrader.customMetaDataUpgraders.apply(existingCustoms); - if (upgradedCustoms.equals(existingCustoms) == false) { - existingCustoms.keySet().forEach(upgradedMetaData::removeCustom); - for (Map.Entry upgradedCustomEntry : upgradedCustoms.entrySet()) { - upgradedMetaData.putCustom(upgradedCustomEntry.getKey(), upgradedCustomEntry.getValue()); - } + if (applyPluginUpgraders(metaData.getCustoms(), metaDataUpgrader.customMetaDataUpgraders, + upgradedMetaData::removeCustom,upgradedMetaData::putCustom)) { + changed = true; + } + // upgrade current templates + if (applyPluginUpgraders(metaData.getTemplates(), metaDataUpgrader.indexTemplateMetaDataUpgraders, + upgradedMetaData::removeTemplate, (s, indexTemplateMetaData) -> upgradedMetaData.put(indexTemplateMetaData))) { changed = true; } return changed ? 
upgradedMetaData.build() : metaData; } + private static boolean applyPluginUpgraders(ImmutableOpenMap existingData, + UnaryOperator> upgrader, + Consumer removeData, + BiConsumer putData) { + // collect current data + Map existingMap = new HashMap<>(); + for (ObjectObjectCursor customCursor : existingData) { + existingMap.put(customCursor.key, customCursor.value); + } + // upgrade global custom meta data + Map upgradedCustoms = upgrader.apply(existingMap); + if (upgradedCustoms.equals(existingMap) == false) { + // remove all data first so a plugin can remove custom metadata or templates if needed + existingMap.keySet().forEach(removeData); + for (Map.Entry upgradedCustomEntry : upgradedCustoms.entrySet()) { + putData.accept(upgradedCustomEntry.getKey(), upgradedCustomEntry.getValue()); + } + return true; + } + return false; + } + // shard state BWC private void ensureNoPre019ShardState(NodeEnvironment nodeEnv) throws Exception { for (Path dataLocation : nodeEnv.nodeDataPaths()) { diff --git a/core/src/main/java/org/elasticsearch/gateway/GatewayService.java b/core/src/main/java/org/elasticsearch/gateway/GatewayService.java index b953173c6899a..a771f6cc265b2 100644 --- a/core/src/main/java/org/elasticsearch/gateway/GatewayService.java +++ b/core/src/main/java/org/elasticsearch/gateway/GatewayService.java @@ -34,7 +34,6 @@ import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.routing.RoutingTable; import org.elasticsearch.cluster.routing.allocation.AllocationService; -import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.component.AbstractLifecycleComponent; import org.elasticsearch.common.inject.Inject; @@ -44,7 +43,6 @@ import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.discovery.Discovery; -import org.elasticsearch.index.NodeServicesProvider; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.threadpool.ThreadPool; @@ -71,7 +69,7 @@ public class GatewayService extends AbstractLifecycleComponent implements Cluste public static final Setting RECOVER_AFTER_MASTER_NODES_SETTING = Setting.intSetting("gateway.recover_after_master_nodes", 0, 0, Property.NodeScope); - public static final ClusterBlock STATE_NOT_RECOVERED_BLOCK = new ClusterBlock(1, "state not recovered / initialized", true, true, RestStatus.SERVICE_UNAVAILABLE, ClusterBlockLevel.ALL); + public static final ClusterBlock STATE_NOT_RECOVERED_BLOCK = new ClusterBlock(1, "state not recovered / initialized", true, true, false, RestStatus.SERVICE_UNAVAILABLE, ClusterBlockLevel.ALL); public static final TimeValue DEFAULT_RECOVER_AFTER_TIME_IF_EXPECTED_NODES_IS_SET = TimeValue.timeValueMinutes(5); @@ -99,10 +97,10 @@ public class GatewayService extends AbstractLifecycleComponent implements Cluste public GatewayService(Settings settings, AllocationService allocationService, ClusterService clusterService, ThreadPool threadPool, GatewayMetaState metaState, TransportNodesListGatewayMetaState listGatewayMetaState, Discovery discovery, - NodeServicesProvider nodeServicesProvider, IndicesService indicesService) { + IndicesService indicesService) { super(settings); this.gateway = new Gateway(settings, clusterService, metaState, listGatewayMetaState, discovery, - nodeServicesProvider, indicesService); + indicesService); this.allocationService = allocationService; 
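The applyPluginUpgraders helper introduced just above treats each plugin-supplied upgrader as a function over the whole map of customs or templates: the existing entries are copied into a plain map, the upgrader is applied, and the metadata is rewritten only when the returned map differs from the input (removing everything first so an upgrader can also delete entries). A hedged sketch of an upgrader with that shape for index templates; the class name and template key are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

import org.elasticsearch.cluster.metadata.IndexTemplateMetaData;

/**
 * Hypothetical template upgrader of the shape consumed by applyPluginUpgraders above:
 * it receives the current templates, may drop or rewrite entries, and returning a map
 * equal to the input signals that nothing needs to be rewritten.
 */
public class DropLegacyTemplateUpgrader implements UnaryOperator<Map<String, IndexTemplateMetaData>> {
    @Override
    public Map<String, IndexTemplateMetaData> apply(Map<String, IndexTemplateMetaData> templates) {
        Map<String, IndexTemplateMetaData> upgraded = new HashMap<>(templates);
        upgraded.remove("legacy-template"); // illustrative key; a real plugin would target its own template
        return upgraded;                    // equal to the input when the template was not present
    }
}
```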
this.clusterService = clusterService; this.threadPool = threadPool; @@ -134,12 +132,13 @@ public GatewayService(Settings settings, AllocationService allocationService, Cl @Override protected void doStart() { - clusterService.addLast(this); + // use post applied so that the state will be visible to the background recovery thread we spawn in performStateRecovery + clusterService.addListener(this); } @Override protected void doStop() { - clusterService.remove(this); + clusterService.removeListener(this); } @Override @@ -258,9 +257,14 @@ public ClusterState execute(ClusterState currentState) { // automatically generate a UID for the metadata if we need to metaDataBuilder.generateClusterUuidIfNeeded(); - if (MetaData.SETTING_READ_ONLY_SETTING.get(recoveredState.metaData().settings()) || MetaData.SETTING_READ_ONLY_SETTING.get(currentState.metaData().settings())) { + if (MetaData.SETTING_READ_ONLY_SETTING.get(recoveredState.metaData().settings()) + || MetaData.SETTING_READ_ONLY_SETTING.get(currentState.metaData().settings())) { blocks.addGlobalBlock(MetaData.CLUSTER_READ_ONLY_BLOCK); } + if (MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.get(recoveredState.metaData().settings()) + || MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.get(currentState.metaData().settings())) { + blocks.addGlobalBlock(MetaData.CLUSTER_READ_ONLY_ALLOW_DELETE_BLOCK); + } for (IndexMetaData indexMetaData : recoveredState.metaData()) { metaDataBuilder.put(indexMetaData, false); @@ -282,11 +286,8 @@ public ClusterState execute(ClusterState currentState) { routingTableBuilder.version(0); // now, reroute - RoutingAllocation.Result routingResult = allocationService.reroute( - ClusterState.builder(updatedState).routingTable(routingTableBuilder.build()).build(), - "state recovered"); - - return ClusterState.builder(updatedState).routingResult(routingResult).build(); + updatedState = ClusterState.builder(updatedState).routingTable(routingTableBuilder.build()).build(); + return allocationService.reroute(updatedState, "state recovered"); } @Override diff --git a/core/src/main/java/org/elasticsearch/gateway/LocalAllocateDangledIndices.java b/core/src/main/java/org/elasticsearch/gateway/LocalAllocateDangledIndices.java index 707cc89704f81..5a2f635a8cdd3 100644 --- a/core/src/main/java/org/elasticsearch/gateway/LocalAllocateDangledIndices.java +++ b/core/src/main/java/org/elasticsearch/gateway/LocalAllocateDangledIndices.java @@ -21,6 +21,7 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; +import org.elasticsearch.Version; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.ClusterStateUpdateTask; import org.elasticsearch.cluster.block.ClusterBlocks; @@ -28,9 +29,9 @@ import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService; import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.routing.RoutingTable; import org.elasticsearch.cluster.routing.allocation.AllocationService; -import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.inject.Inject; @@ -129,10 +130,18 @@ public ClusterState execute(ClusterState currentState) { MetaData.Builder metaData = MetaData.builder(currentState.metaData()); ClusterBlocks.Builder blocks = 
ClusterBlocks.builder().blocks(currentState.blocks()); RoutingTable.Builder routingTableBuilder = RoutingTable.builder(currentState.routingTable()); - + final Version minIndexCompatibilityVersion = currentState.getNodes().getMaxNodeVersion() + .minimumIndexCompatibilityVersion(); boolean importNeeded = false; StringBuilder sb = new StringBuilder(); for (IndexMetaData indexMetaData : request.indices) { + if (indexMetaData.getCreationVersion().before(minIndexCompatibilityVersion)) { + logger.warn("ignoring dangled index [{}] on node [{}]" + + " since it's created version [{}] is not supported by at least one node in the cluster minVersion [{}]", + indexMetaData.getIndex(), request.fromNode, indexMetaData.getCreationVersion(), + minIndexCompatibilityVersion); + continue; + } if (currentState.metaData().hasIndex(indexMetaData.getIndex().getName())) { continue; } @@ -147,7 +156,8 @@ public ClusterState execute(ClusterState currentState) { try { // The dangled index might be from an older version, we need to make sure it's compatible // with the current version and upgrade it if needed. - upgradedIndexMetaData = metaDataIndexUpgradeService.upgradeIndexMetaData(indexMetaData); + upgradedIndexMetaData = metaDataIndexUpgradeService.upgradeIndexMetaData(indexMetaData, + minIndexCompatibilityVersion); } catch (Exception ex) { // upgrade failed - adding index as closed logger.warn((Supplier) () -> new ParameterizedMessage("found dangled index [{}] on node [{}]. This index cannot be upgraded to the latest version, adding as closed", indexMetaData.getIndex(), request.fromNode), ex); @@ -169,10 +179,8 @@ public ClusterState execute(ClusterState currentState) { ClusterState updatedState = ClusterState.builder(currentState).metaData(metaData).blocks(blocks).routingTable(routingTable).build(); // now, reroute - RoutingAllocation.Result routingResult = allocationService.reroute( + return allocationService.reroute( ClusterState.builder(updatedState).routingTable(routingTable).build(), "dangling indices allocated"); - - return ClusterState.builder(updatedState).routingResult(routingResult).build(); } @Override @@ -217,7 +225,7 @@ public void readFrom(StreamInput in) throws IOException { fromNode = new DiscoveryNode(in); indices = new IndexMetaData[in.readVInt()]; for (int i = 0; i < indices.length; i++) { - indices[i] = IndexMetaData.Builder.readFrom(in); + indices[i] = IndexMetaData.readFrom(in); } } diff --git a/core/src/main/java/org/elasticsearch/gateway/MetaDataStateFormat.java b/core/src/main/java/org/elasticsearch/gateway/MetaDataStateFormat.java index 71c3190e2ee1a..fb48405b72538 100644 --- a/core/src/main/java/org/elasticsearch/gateway/MetaDataStateFormat.java +++ b/core/src/main/java/org/elasticsearch/gateway/MetaDataStateFormat.java @@ -35,6 +35,7 @@ import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.lucene.store.IndexOutputOutputStream; import org.elasticsearch.common.lucene.store.InputStreamIndexInput; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentHelper; @@ -181,9 +182,9 @@ protected XContentBuilder newXContentBuilder(XContentType type, OutputStream str * Reads the state from a given file and compares the expected version against the actual version of * the state. 
*/ - public final T read(Path file) throws IOException { + public final T read(NamedXContentRegistry namedXContentRegistry, Path file) throws IOException { try (Directory dir = newDirectory(file.getParent())) { - try (final IndexInput indexInput = dir.openInput(file.getFileName().toString(), IOContext.DEFAULT)) { + try (IndexInput indexInput = dir.openInput(file.getFileName().toString(), IOContext.DEFAULT)) { // We checksum the entire file before we even go and parse it. If it's corrupted we barf right here. CodecUtil.checksumEntireFile(indexInput); final int fileVersion = CodecUtil.checkHeader(indexInput, STATE_FILE_CODEC, MIN_COMPATIBLE_STATE_FILE_VERSION, @@ -196,8 +197,8 @@ public final T read(Path file) throws IOException { long filePointer = indexInput.getFilePointer(); long contentSize = indexInput.length() - CodecUtil.footerLength() - filePointer; try (IndexInput slice = indexInput.slice("state_xcontent", filePointer, contentSize)) { - try (XContentParser parser = XContentFactory.xContent(xContentType).createParser(new InputStreamIndexInput(slice, - contentSize))) { + try (XContentParser parser = XContentFactory.xContent(xContentType).createParser(namedXContentRegistry, + new InputStreamIndexInput(slice, contentSize))) { return fromXContent(parser); } } @@ -260,7 +261,7 @@ long findMaxStateId(final String prefix, Path... locations) throws IOException { * @param dataLocations the data-locations to try. * @return the latest state or null if no state was found. */ - public T loadLatestState(Logger logger, Path... dataLocations) throws IOException { + public T loadLatestState(Logger logger, NamedXContentRegistry namedXContentRegistry, Path... dataLocations) throws IOException { List files = new ArrayList<>(); long maxStateId = -1; boolean maxStateIdIsLegacy = true; @@ -311,14 +312,14 @@ public T loadLatestState(Logger logger, Path... 
dataLocations) throws IOExcepti logger.debug("{}: no data for [{}], ignoring...", prefix, stateFile.toAbsolutePath()); continue; } - try (final XContentParser parser = XContentHelper.createParser(new BytesArray(data))) { + try (XContentParser parser = XContentHelper.createParser(namedXContentRegistry, new BytesArray(data))) { state = fromXContent(parser); } if (state == null) { logger.debug("{}: no data for [{}], ignoring...", prefix, stateFile.toAbsolutePath()); } } else { - state = read(stateFile); + state = read(namedXContentRegistry, stateFile); logger.trace("state id [{}] read from [{}]", id, stateFile.getFileName()); } return state; diff --git a/core/src/main/java/org/elasticsearch/gateway/MetaStateService.java b/core/src/main/java/org/elasticsearch/gateway/MetaStateService.java index e58a48d41b4f1..344a50ffa4472 100644 --- a/core/src/main/java/org/elasticsearch/gateway/MetaStateService.java +++ b/core/src/main/java/org/elasticsearch/gateway/MetaStateService.java @@ -26,6 +26,7 @@ import org.elasticsearch.common.Nullable; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.env.NodeEnvironment; import org.elasticsearch.index.Index; @@ -40,10 +41,12 @@ public class MetaStateService extends AbstractComponent { private final NodeEnvironment nodeEnv; + private final NamedXContentRegistry namedXContentRegistry; - public MetaStateService(Settings settings, NodeEnvironment nodeEnv) { + public MetaStateService(Settings settings, NodeEnvironment nodeEnv, NamedXContentRegistry namedXContentRegistry) { super(settings); this.nodeEnv = nodeEnv; + this.namedXContentRegistry = namedXContentRegistry; } /** @@ -59,7 +62,8 @@ MetaData loadFullState() throws Exception { metaDataBuilder = MetaData.builder(); } for (String indexFolderName : nodeEnv.availableIndexFolders()) { - IndexMetaData indexMetaData = IndexMetaData.FORMAT.loadLatestState(logger, nodeEnv.resolveIndexFolder(indexFolderName)); + IndexMetaData indexMetaData = IndexMetaData.FORMAT.loadLatestState(logger, namedXContentRegistry, + nodeEnv.resolveIndexFolder(indexFolderName)); if (indexMetaData != null) { metaDataBuilder.put(indexMetaData, false); } else { @@ -74,7 +78,7 @@ MetaData loadFullState() throws Exception { */ @Nullable public IndexMetaData loadIndexState(Index index) throws IOException { - return IndexMetaData.FORMAT.loadLatestState(logger, nodeEnv.indexPaths(index)); + return IndexMetaData.FORMAT.loadLatestState(logger, namedXContentRegistry, nodeEnv.indexPaths(index)); } /** @@ -86,7 +90,7 @@ List loadIndicesStates(Predicate excludeIndexPathIdsPredi if (excludeIndexPathIdsPredicate.test(indexFolderName)) { continue; } - IndexMetaData indexMetaData = IndexMetaData.FORMAT.loadLatestState(logger, + IndexMetaData indexMetaData = IndexMetaData.FORMAT.loadLatestState(logger, namedXContentRegistry, nodeEnv.resolveIndexFolder(indexFolderName)); if (indexMetaData != null) { final String indexPathId = indexMetaData.getIndex().getUUID(); @@ -106,7 +110,7 @@ List loadIndicesStates(Predicate excludeIndexPathIdsPredi * Loads the global state, *without* index state, see {@link #loadFullState()} for that. 
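// A short sketch of the new calling convention introduced in this patch: the caller supplies a
// NamedXContentRegistry that is threaded down to the XContentParser, so plugin-registered named
// objects can be resolved while reading on-disk state. NamedXContentRegistry.EMPTY is used here
// purely for illustration; a node passes the registry built from its installed plugins, and
// logger/nodeEnv/index are assumed to come from the enclosing service.
NamedXContentRegistry registry = NamedXContentRegistry.EMPTY;
MetaData globalState = MetaData.FORMAT.loadLatestState(logger, registry, nodeEnv.nodeDataPaths());
IndexMetaData indexState = IndexMetaData.FORMAT.loadLatestState(logger, registry, nodeEnv.indexPaths(index));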
*/ MetaData loadGlobalState() throws IOException { - MetaData globalState = MetaData.FORMAT.loadLatestState(logger, nodeEnv.nodeDataPaths()); + MetaData globalState = MetaData.FORMAT.loadLatestState(logger, namedXContentRegistry, nodeEnv.nodeDataPaths()); // ES 2.0 now requires units for all time and byte-sized settings, so we add the default unit if it's missing // TODO: can we somehow only do this for pre-2.0 cluster state? if (globalState != null) { diff --git a/core/src/main/java/org/elasticsearch/gateway/PrimaryShardAllocator.java b/core/src/main/java/org/elasticsearch/gateway/PrimaryShardAllocator.java index a11300ac496ef..3be22b3f69d83 100644 --- a/core/src/main/java/org/elasticsearch/gateway/PrimaryShardAllocator.java +++ b/core/src/main/java/org/elasticsearch/gateway/PrimaryShardAllocator.java @@ -19,35 +19,42 @@ package org.elasticsearch.gateway; +import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; import org.apache.lucene.util.CollectionUtil; import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; -import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.RecoverySource; import org.elasticsearch.cluster.routing.RoutingNode; import org.elasticsearch.cluster.routing.RoutingNodes; import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.routing.UnassignedInfo.AllocationStatus; +import org.elasticsearch.cluster.routing.allocation.AllocateUnassignedDecision; +import org.elasticsearch.cluster.routing.allocation.NodeAllocationResult; +import org.elasticsearch.cluster.routing.allocation.NodeAllocationResult.ShardStoreInfo; import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.routing.allocation.decider.Decision; -import org.elasticsearch.common.component.AbstractComponent; +import org.elasticsearch.cluster.routing.allocation.decider.Decision.Type; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.env.ShardLockObtainFailedException; +import org.elasticsearch.gateway.AsyncShardFetch.FetchResult; import org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.NodeGatewayStartedShards; import org.elasticsearch.index.shard.ShardStateMetaData; import java.util.ArrayList; +import java.util.Collection; import java.util.Collections; import java.util.Comparator; -import java.util.LinkedList; +import java.util.HashSet; import java.util.List; import java.util.Set; import java.util.function.Function; import java.util.stream.Collectors; +import java.util.stream.Stream; /** * The primary shard allocator allocates unassigned primary shards to nodes that hold @@ -62,7 +69,7 @@ * nor does it allocate primaries when a primary shard failed and there is a valid replica * copy that can immediately be promoted to primary, as this takes place in {@link RoutingNodes#failShard}. 
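// The two gateway allocators below now only *decide*; applying a decision to the routing table is
// left to the shared BaseGatewayShardAllocator base class, which is not shown in this excerpt.
// A rough sketch of the contract as it can be inferred from the overrides in this patch; the
// signature matches the overrides below, everything else about the base class is an assumption.
public abstract class ShardAllocatorSketch {
    // each concrete allocator answers "where should this unassigned shard go, and why"
    public abstract AllocateUnassignedDecision makeAllocationDecision(ShardRouting unassignedShard,
                                                                      RoutingAllocation allocation,
                                                                      Logger logger);
}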
*/ -public abstract class PrimaryShardAllocator extends AbstractComponent { +public abstract class PrimaryShardAllocator extends BaseGatewayShardAllocator { private static final Function INITIAL_SHARDS_PARSER = (value) -> { switch (value) { @@ -94,121 +101,203 @@ public PrimaryShardAllocator(Settings settings) { logger.debug("using initial_shards [{}]", NODE_INITIAL_SHARDS_SETTING.get(settings)); } - public void allocateUnassigned(RoutingAllocation allocation) { - final RoutingNodes routingNodes = allocation.routingNodes(); - final MetaData metaData = allocation.metaData(); - - final RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator = routingNodes.unassigned().iterator(); - while (unassignedIterator.hasNext()) { - final ShardRouting shard = unassignedIterator.next(); - - if (shard.primary() == false) { - continue; - } + /** + * Is the allocator responsible for allocating the given {@link ShardRouting}? + */ + private static boolean isResponsibleFor(final ShardRouting shard) { + return shard.primary() // must be primary + && shard.unassigned() // must be unassigned + // only handle either an existing store or a snapshot recovery + && (shard.recoverySource().getType() == RecoverySource.Type.EXISTING_STORE + || shard.recoverySource().getType() == RecoverySource.Type.SNAPSHOT); + } - if (shard.recoverySource().getType() != RecoverySource.Type.EXISTING_STORE && - shard.recoverySource().getType() != RecoverySource.Type.SNAPSHOT) { - continue; - } + @Override + public AllocateUnassignedDecision makeAllocationDecision(final ShardRouting unassignedShard, + final RoutingAllocation allocation, + final Logger logger) { + if (isResponsibleFor(unassignedShard) == false) { + // this allocator is not responsible for allocating this shard + return AllocateUnassignedDecision.NOT_TAKEN; + } - final AsyncShardFetch.FetchResult shardState = fetchData(shard, allocation); - if (shardState.hasData() == false) { - logger.trace("{}: ignoring allocation, still fetching shard started state", shard); - allocation.setHasPendingAsyncFetch(); - unassignedIterator.removeAndIgnore(AllocationStatus.FETCHING_SHARD_DATA, allocation.changes()); - continue; + final boolean explain = allocation.debugDecision(); + final FetchResult shardState = fetchData(unassignedShard, allocation); + if (shardState.hasData() == false) { + allocation.setHasPendingAsyncFetch(); + List nodeDecisions = null; + if (explain) { + nodeDecisions = buildDecisionsForAllNodes(unassignedShard, allocation); } + return AllocateUnassignedDecision.no(AllocationStatus.FETCHING_SHARD_DATA, nodeDecisions); + } - // don't create a new IndexSetting object for every shard as this could cause a lot of garbage - // on cluster restart if we allocate a boat load of shards - final IndexMetaData indexMetaData = metaData.getIndexSafe(shard.index()); - final Set inSyncAllocationIds = indexMetaData.inSyncAllocationIds(shard.id()); - final boolean snapshotRestore = shard.recoverySource().getType() == RecoverySource.Type.SNAPSHOT; - final boolean recoverOnAnyNode = recoverOnAnyNode(indexMetaData); - - final NodeShardsResult nodeShardsResult; - final boolean enoughAllocationsFound; - - if (inSyncAllocationIds.isEmpty()) { - assert Version.indexCreated(indexMetaData.getSettings()).before(Version.V_5_0_0_alpha1) : "trying to allocated a primary with an empty allocation id set, but index is new"; - // when we load an old index (after upgrading cluster) or restore a snapshot of an old index - // fall back to old version-based allocation mode - // Note that once the 
shard has been active, lastActiveAllocationIds will be non-empty - nodeShardsResult = buildVersionBasedNodeShardsResult(shard, snapshotRestore || recoverOnAnyNode, allocation.getIgnoreNodes(shard.shardId()), shardState); - if (snapshotRestore || recoverOnAnyNode) { - enoughAllocationsFound = nodeShardsResult.allocationsFound > 0; - } else { - enoughAllocationsFound = isEnoughVersionBasedAllocationsFound(indexMetaData, nodeShardsResult); - } - logger.debug("[{}][{}]: version-based allocation for pre-{} index found {} allocations of {}", shard.index(), shard.id(), Version.V_5_0_0_alpha1, nodeShardsResult.allocationsFound, shard); + // don't create a new IndexSetting object for every shard as this could cause a lot of garbage + // on cluster restart if we allocate a boat load of shards + final IndexMetaData indexMetaData = allocation.metaData().getIndexSafe(unassignedShard.index()); + final Set inSyncAllocationIds = indexMetaData.inSyncAllocationIds(unassignedShard.id()); + final boolean snapshotRestore = unassignedShard.recoverySource().getType() == RecoverySource.Type.SNAPSHOT; + final boolean recoverOnAnyNode = recoverOnAnyNode(indexMetaData); + + final NodeShardsResult nodeShardsResult; + final boolean enoughAllocationsFound; + + if (inSyncAllocationIds.isEmpty()) { + assert Version.indexCreated(indexMetaData.getSettings()).before(Version.V_5_0_0_alpha1) : + "trying to allocate a primary with an empty in sync allocation id set, but index is new. index: " + + indexMetaData.getIndex(); + // when we load an old index (after upgrading cluster) or restore a snapshot of an old index + // fall back to old version-based allocation mode + // Note that once the shard has been active, lastActiveAllocationIds will be non-empty + nodeShardsResult = buildVersionBasedNodeShardsResult(unassignedShard, snapshotRestore || recoverOnAnyNode, + allocation.getIgnoreNodes(unassignedShard.shardId()), shardState, logger); + if (snapshotRestore || recoverOnAnyNode) { + enoughAllocationsFound = nodeShardsResult.allocationsFound > 0; } else { - assert inSyncAllocationIds.isEmpty() == false; - // use allocation ids to select nodes - nodeShardsResult = buildAllocationIdBasedNodeShardsResult(shard, snapshotRestore || recoverOnAnyNode, - allocation.getIgnoreNodes(shard.shardId()), inSyncAllocationIds, shardState); - enoughAllocationsFound = nodeShardsResult.orderedAllocationCandidates.size() > 0; - logger.debug("[{}][{}]: found {} allocation candidates of {} based on allocation ids: [{}]", shard.index(), shard.id(), nodeShardsResult.orderedAllocationCandidates.size(), shard, inSyncAllocationIds); + enoughAllocationsFound = isEnoughVersionBasedAllocationsFound(indexMetaData, nodeShardsResult); } + logger.debug("[{}][{}]: version-based allocation for pre-{} index found {} allocations of {}", unassignedShard.index(), + unassignedShard.id(), Version.V_5_0_0_alpha1, nodeShardsResult.allocationsFound, unassignedShard); + } else { + assert inSyncAllocationIds.isEmpty() == false; + // use allocation ids to select nodes + nodeShardsResult = buildAllocationIdBasedNodeShardsResult(unassignedShard, snapshotRestore || recoverOnAnyNode, + allocation.getIgnoreNodes(unassignedShard.shardId()), inSyncAllocationIds, shardState, logger); + enoughAllocationsFound = nodeShardsResult.orderedAllocationCandidates.size() > 0; + logger.debug("[{}][{}]: found {} allocation candidates of {} based on allocation ids: [{}]", unassignedShard.index(), + unassignedShard.id(), nodeShardsResult.orderedAllocationCandidates.size(), unassignedShard, 
inSyncAllocationIds); + } - if (enoughAllocationsFound == false){ - if (snapshotRestore) { - // let BalancedShardsAllocator take care of allocating this shard - logger.debug("[{}][{}]: missing local data, will restore from [{}]", shard.index(), shard.id(), shard.recoverySource()); - } else if (recoverOnAnyNode) { - // let BalancedShardsAllocator take care of allocating this shard - logger.debug("[{}][{}]: missing local data, recover from any node", shard.index(), shard.id()); - } else { - // we can't really allocate, so ignore it and continue - unassignedIterator.removeAndIgnore(AllocationStatus.NO_VALID_SHARD_COPY, allocation.changes()); - logger.debug("[{}][{}]: not allocating, number_of_allocated_shards_found [{}]", shard.index(), shard.id(), nodeShardsResult.allocationsFound); - } - continue; + if (enoughAllocationsFound == false) { + if (snapshotRestore) { + // let BalancedShardsAllocator take care of allocating this shard + logger.debug("[{}][{}]: missing local data, will restore from [{}]", + unassignedShard.index(), unassignedShard.id(), unassignedShard.recoverySource()); + return AllocateUnassignedDecision.NOT_TAKEN; + } else if (recoverOnAnyNode) { + // let BalancedShardsAllocator take care of allocating this shard + logger.debug("[{}][{}]: missing local data, recover from any node", unassignedShard.index(), unassignedShard.id()); + return AllocateUnassignedDecision.NOT_TAKEN; + } else { + // We have a shard that was previously allocated, but we could not find a valid shard copy to allocate the primary. + // We could just be waiting for the node that holds the primary to start back up, in which case the allocation for + // this shard will be picked up when the node joins and we do another allocation reroute + logger.debug("[{}][{}]: not allocating, number_of_allocated_shards_found [{}]", + unassignedShard.index(), unassignedShard.id(), nodeShardsResult.allocationsFound); + return AllocateUnassignedDecision.no(AllocationStatus.NO_VALID_SHARD_COPY, + explain ? buildNodeDecisions(null, shardState, inSyncAllocationIds) : null); } + } - final NodesToAllocate nodesToAllocate = buildNodesToAllocate( - allocation, nodeShardsResult.orderedAllocationCandidates, shard, false - ); + NodesToAllocate nodesToAllocate = buildNodesToAllocate( + allocation, nodeShardsResult.orderedAllocationCandidates, unassignedShard, false + ); + DiscoveryNode node = null; + String allocationId = null; + boolean throttled = false; + if (nodesToAllocate.yesNodeShards.isEmpty() == false) { + DecidedNode decidedNode = nodesToAllocate.yesNodeShards.get(0); + logger.debug("[{}][{}]: allocating [{}] to [{}] on primary allocation", + unassignedShard.index(), unassignedShard.id(), unassignedShard, decidedNode.nodeShardState.getNode()); + node = decidedNode.nodeShardState.getNode(); + allocationId = decidedNode.nodeShardState.allocationId(); + } else if (nodesToAllocate.throttleNodeShards.isEmpty() && !nodesToAllocate.noNodeShards.isEmpty()) { + // The deciders returned a NO decision for all nodes with shard copies, so we check if primary shard + // can be force-allocated to one of the nodes. 
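// Sketch of the force-allocation fallback that follows: the first pass asks the regular deciders;
// only if every node holding a copy answered NO is a second pass made with forceAllocate=true,
// which lets deciders relax their veto for a primary that would otherwise stay unassigned.
// Variable names mirror the surrounding code and are simplified.
NodesToAllocate candidates = buildNodesToAllocate(allocation, shardCopies, shard, false);
if (candidates.yesNodeShards.isEmpty()
        && candidates.throttleNodeShards.isEmpty()
        && candidates.noNodeShards.isEmpty() == false) {
    candidates = buildNodesToAllocate(allocation, shardCopies, shard, true); // force-allocation pass
}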
+ nodesToAllocate = buildNodesToAllocate(allocation, nodeShardsResult.orderedAllocationCandidates, unassignedShard, true); if (nodesToAllocate.yesNodeShards.isEmpty() == false) { - NodeGatewayStartedShards nodeShardState = nodesToAllocate.yesNodeShards.get(0); - logger.debug("[{}][{}]: allocating [{}] to [{}] on primary allocation", shard.index(), shard.id(), shard, nodeShardState.getNode()); - unassignedIterator.initialize(nodeShardState.getNode().getId(), nodeShardState.allocationId(), ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE, allocation.changes()); - } else if (nodesToAllocate.throttleNodeShards.isEmpty() == true && nodesToAllocate.noNodeShards.isEmpty() == false) { - // The deciders returned a NO decision for all nodes with shard copies, so we check if primary shard - // can be force-allocated to one of the nodes. - final NodesToAllocate nodesToForceAllocate = buildNodesToAllocate( - allocation, nodeShardsResult.orderedAllocationCandidates, shard, true - ); - if (nodesToForceAllocate.yesNodeShards.isEmpty() == false) { - NodeGatewayStartedShards nodeShardState = nodesToForceAllocate.yesNodeShards.get(0); - logger.debug("[{}][{}]: allocating [{}] to [{}] on forced primary allocation", - shard.index(), shard.id(), shard, nodeShardState.getNode()); - unassignedIterator.initialize(nodeShardState.getNode().getId(), nodeShardState.allocationId(), - ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE, allocation.changes()); - } else if (nodesToForceAllocate.throttleNodeShards.isEmpty() == false) { - logger.debug("[{}][{}]: throttling allocation [{}] to [{}] on forced primary allocation", - shard.index(), shard.id(), shard, nodesToForceAllocate.throttleNodeShards); - unassignedIterator.removeAndIgnore(AllocationStatus.DECIDERS_THROTTLED, allocation.changes()); - } else { - logger.debug("[{}][{}]: forced primary allocation denied [{}]", shard.index(), shard.id(), shard); - unassignedIterator.removeAndIgnore(AllocationStatus.DECIDERS_NO, allocation.changes()); - } + final DecidedNode decidedNode = nodesToAllocate.yesNodeShards.get(0); + final NodeGatewayStartedShards nodeShardState = decidedNode.nodeShardState; + logger.debug("[{}][{}]: allocating [{}] to [{}] on forced primary allocation", + unassignedShard.index(), unassignedShard.id(), unassignedShard, nodeShardState.getNode()); + node = nodeShardState.getNode(); + allocationId = nodeShardState.allocationId(); + } else if (nodesToAllocate.throttleNodeShards.isEmpty() == false) { + logger.debug("[{}][{}]: throttling allocation [{}] to [{}] on forced primary allocation", + unassignedShard.index(), unassignedShard.id(), unassignedShard, nodesToAllocate.throttleNodeShards); + throttled = true; } else { - // we are throttling this, but we have enough to allocate to this node, ignore it for now - logger.debug("[{}][{}]: throttling allocation [{}] to [{}] on primary allocation", shard.index(), shard.id(), shard, nodesToAllocate.throttleNodeShards); - unassignedIterator.removeAndIgnore(AllocationStatus.DECIDERS_THROTTLED, allocation.changes()); + logger.debug("[{}][{}]: forced primary allocation denied [{}]", + unassignedShard.index(), unassignedShard.id(), unassignedShard); } + } else { + // we are throttling this, since we are allowed to allocate to this node but there are enough allocations + // taking place on the node currently, ignore it for now + logger.debug("[{}][{}]: throttling allocation [{}] to [{}] on primary allocation", + unassignedShard.index(), unassignedShard.id(), unassignedShard, nodesToAllocate.throttleNodeShards); + throttled = 
true; + } + + List nodeResults = null; + if (explain) { + nodeResults = buildNodeDecisions(nodesToAllocate, shardState, inSyncAllocationIds); + } + if (allocation.hasPendingAsyncFetch()) { + return AllocateUnassignedDecision.no(AllocationStatus.FETCHING_SHARD_DATA, nodeResults); + } else if (node != null) { + return AllocateUnassignedDecision.yes(node, allocationId, nodeResults, false); + } else if (throttled) { + return AllocateUnassignedDecision.throttle(nodeResults); + } else { + return AllocateUnassignedDecision.no(AllocationStatus.DECIDERS_NO, nodeResults, true); + } + } + + /** + * Builds a map of nodes to the corresponding allocation decisions for those nodes. + */ + private static List buildNodeDecisions(NodesToAllocate nodesToAllocate, + FetchResult fetchedShardData, + Set inSyncAllocationIds) { + List nodeResults = new ArrayList<>(); + Collection ineligibleShards; + if (nodesToAllocate != null) { + final Set discoNodes = new HashSet<>(); + nodeResults.addAll(Stream.of(nodesToAllocate.yesNodeShards, nodesToAllocate.throttleNodeShards, nodesToAllocate.noNodeShards) + .flatMap(Collection::stream) + .map(dnode -> { + discoNodes.add(dnode.nodeShardState.getNode()); + return new NodeAllocationResult(dnode.nodeShardState.getNode(), + shardStoreInfo(dnode.nodeShardState, inSyncAllocationIds), + dnode.decision); + }).collect(Collectors.toList())); + ineligibleShards = fetchedShardData.getData().values().stream().filter(shardData -> + discoNodes.contains(shardData.getNode()) == false + ).collect(Collectors.toList()); + } else { + // there were no shard copies that were eligible for being assigned the allocation, + // so all fetched shard data are ineligible shards + ineligibleShards = fetchedShardData.getData().values(); } + + nodeResults.addAll(ineligibleShards.stream().map(shardData -> + new NodeAllocationResult(shardData.getNode(), shardStoreInfo(shardData, inSyncAllocationIds), null) + ).collect(Collectors.toList())); + + return nodeResults; + } + + private static ShardStoreInfo shardStoreInfo(NodeGatewayStartedShards nodeShardState, Set inSyncAllocationIds) { + final Exception storeErr = nodeShardState.storeException(); + final boolean inSync = nodeShardState.allocationId() != null && inSyncAllocationIds.contains(nodeShardState.allocationId()); + return new ShardStoreInfo(nodeShardState.allocationId(), inSync, storeErr); } + private static final Comparator NO_STORE_EXCEPTION_FIRST_COMPARATOR = + Comparator.comparing((NodeGatewayStartedShards state) -> state.storeException() == null).reversed(); + private static final Comparator PRIMARY_FIRST_COMPARATOR = + Comparator.comparing(NodeGatewayStartedShards::primary).reversed(); + /** * Builds a list of nodes. If matchAnyShard is set to false, only nodes that have an allocation id matching - * lastActiveAllocationIds are added to the list. Otherwise, any node that has a shard is added to the list, but + * inSyncAllocationIds are added to the list. Otherwise, any node that has a shard is added to the list, but * entries with matching allocation id are always at the front of the list. 
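// The ordering promised by the javadoc above is built from the two comparators defined earlier in
// this hunk; a compact sketch, with matchAnyShard and inSyncAllocationIds as in the surrounding code:
Comparator<NodeGatewayStartedShards> noStoreExceptionFirst =
        Comparator.comparing((NodeGatewayStartedShards s) -> s.storeException() == null).reversed();
Comparator<NodeGatewayStartedShards> primaryFirst =
        Comparator.comparing(NodeGatewayStartedShards::primary).reversed();
Comparator<NodeGatewayStartedShards> order = matchAnyShard
        ? Comparator.comparing((NodeGatewayStartedShards s) -> inSyncAllocationIds.contains(s.allocationId()))
                .reversed()                          // in-sync copies first
                .thenComparing(noStoreExceptionFirst)
                .thenComparing(primaryFirst)
        : noStoreExceptionFirst.thenComparing(primaryFirst);
nodeShardStates.sort(order);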
*/ - protected NodeShardsResult buildAllocationIdBasedNodeShardsResult(ShardRouting shard, boolean matchAnyShard, Set ignoreNodes, - Set lastActiveAllocationIds, AsyncShardFetch.FetchResult shardState) { - LinkedList matchingNodeShardStates = new LinkedList<>(); - LinkedList nonMatchingNodeShardStates = new LinkedList<>(); + protected static NodeShardsResult buildAllocationIdBasedNodeShardsResult(ShardRouting shard, boolean matchAnyShard, + Set ignoreNodes, Set inSyncAllocationIds, + FetchResult shardState, + Logger logger) { + List nodeShardStates = new ArrayList<>(); int numberOfAllocationsFound = 0; for (NodeGatewayStartedShards nodeShardState : shardState.getData().values()) { DiscoveryNode node = nodeShardState.getNode(); @@ -229,31 +318,36 @@ protected NodeShardsResult buildAllocationIdBasedNodeShardsResult(ShardRouting s } } else { final String finalAllocationId = allocationId; - logger.trace((Supplier) () -> new ParameterizedMessage("[{}] on node [{}] has allocation id [{}] but the store can not be opened, treating as no allocation id", shard, nodeShardState.getNode(), finalAllocationId), nodeShardState.storeException()); - allocationId = null; + if (nodeShardState.storeException() instanceof ShardLockObtainFailedException) { + logger.trace((Supplier) () -> new ParameterizedMessage("[{}] on node [{}] has allocation id [{}] but the store can not be opened as it's locked, treating as valid shard", shard, nodeShardState.getNode(), finalAllocationId), nodeShardState.storeException()); + } else { + logger.trace((Supplier) () -> new ParameterizedMessage("[{}] on node [{}] has allocation id [{}] but the store can not be opened, treating as no allocation id", shard, nodeShardState.getNode(), finalAllocationId), nodeShardState.storeException()); + allocationId = null; + } } if (allocationId != null) { + assert nodeShardState.storeException() == null || + nodeShardState.storeException() instanceof ShardLockObtainFailedException : + "only allow store that can be opened or that throws a ShardLockObtainFailedException while being opened but got a store throwing " + nodeShardState.storeException(); numberOfAllocationsFound++; - if (lastActiveAllocationIds.contains(allocationId)) { - if (nodeShardState.primary()) { - matchingNodeShardStates.addFirst(nodeShardState); - } else { - matchingNodeShardStates.addLast(nodeShardState); - } - } else if (matchAnyShard) { - if (nodeShardState.primary()) { - nonMatchingNodeShardStates.addFirst(nodeShardState); - } else { - nonMatchingNodeShardStates.addLast(nodeShardState); - } + if (matchAnyShard || inSyncAllocationIds.contains(nodeShardState.allocationId())) { + nodeShardStates.add(nodeShardState); } } } - List nodeShardStates = new ArrayList<>(); - nodeShardStates.addAll(matchingNodeShardStates); - nodeShardStates.addAll(nonMatchingNodeShardStates); + final Comparator comparator; // allocation preference + if (matchAnyShard) { + // prefer shards with matching allocation ids + Comparator matchingAllocationsFirst = Comparator.comparing( + (NodeGatewayStartedShards state) -> inSyncAllocationIds.contains(state.allocationId())).reversed(); + comparator = matchingAllocationsFirst.thenComparing(NO_STORE_EXCEPTION_FIRST_COMPARATOR).thenComparing(PRIMARY_FIRST_COMPARATOR); + } else { + comparator = NO_STORE_EXCEPTION_FIRST_COMPARATOR.thenComparing(PRIMARY_FIRST_COMPARATOR); + } + + nodeShardStates.sort(comparator); if (logger.isTraceEnabled()) { logger.trace("{} candidates for allocation: {}", shard, nodeShardStates.stream().map(s -> 
s.getNode().getName()).collect(Collectors.joining(", "))); @@ -299,9 +393,9 @@ private NodesToAllocate buildNodesToAllocate(RoutingAllocation allocation, List nodeShardStates, ShardRouting shardRouting, boolean forceAllocate) { - List yesNodeShards = new ArrayList<>(); - List throttledNodeShards = new ArrayList<>(); - List noNodeShards = new ArrayList<>(); + List yesNodeShards = new ArrayList<>(); + List throttledNodeShards = new ArrayList<>(); + List noNodeShards = new ArrayList<>(); for (NodeGatewayStartedShards nodeShardState : nodeShardStates) { RoutingNode node = allocation.routingNodes().node(nodeShardState.getNode().getId()); if (node == null) { @@ -310,23 +404,25 @@ private NodesToAllocate buildNodesToAllocate(RoutingAllocation allocation, Decision decision = forceAllocate ? allocation.deciders().canForceAllocatePrimary(shardRouting, node, allocation) : allocation.deciders().canAllocate(shardRouting, node, allocation); - if (decision.type() == Decision.Type.THROTTLE) { - throttledNodeShards.add(nodeShardState); - } else if (decision.type() == Decision.Type.NO) { - noNodeShards.add(nodeShardState); + DecidedNode decidedNode = new DecidedNode(nodeShardState, decision); + if (decision.type() == Type.THROTTLE) { + throttledNodeShards.add(decidedNode); + } else if (decision.type() == Type.NO) { + noNodeShards.add(decidedNode); } else { - yesNodeShards.add(nodeShardState); + yesNodeShards.add(decidedNode); } } - return new NodesToAllocate(Collections.unmodifiableList(yesNodeShards), Collections.unmodifiableList(throttledNodeShards), Collections.unmodifiableList(noNodeShards)); + return new NodesToAllocate(Collections.unmodifiableList(yesNodeShards), Collections.unmodifiableList(throttledNodeShards), + Collections.unmodifiableList(noNodeShards)); } /** * Builds a list of previously started shards. If matchAnyShard is set to false, only shards with the highest shard version are added to * the list. Otherwise, any existing shard is added to the list, but entries with highest version are always at the front of the list. 
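// Compact sketch of the locked-store handling added in the version-based (pre-5.0) path below:
// a copy whose store is merely locked is still considered, but its reported version is replaced
// so that a copy already chosen as primary by a 5.x master is preferred again, while a locked
// copy without an allocation id is ranked last. Variable names are simplified stand-ins.
long effectiveVersion;
if (storeException == null) {
    effectiveVersion = reportedVersion;                       // store opened fine, trust the version
} else if (storeException instanceof ShardLockObtainFailedException) {
    effectiveVersion = allocationId != null ? Long.MAX_VALUE  // was a primary, prefer it again
                                            : 0L;             // valid but least preferred
} else {
    effectiveVersion = ShardStateMetaData.NO_VERSION;         // any other failure: treat as missing
}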
*/ - NodeShardsResult buildVersionBasedNodeShardsResult(ShardRouting shard, boolean matchAnyShard, Set ignoreNodes, - AsyncShardFetch.FetchResult shardState) { + static NodeShardsResult buildVersionBasedNodeShardsResult(ShardRouting shard, boolean matchAnyShard, Set ignoreNodes, + FetchResult shardState, Logger logger) { final List allocationCandidates = new ArrayList<>(); int numberOfAllocationsFound = 0; long highestVersion = ShardStateMetaData.NO_VERSION; @@ -353,10 +449,19 @@ NodeShardsResult buildVersionBasedNodeShardsResult(ShardRouting shard, boolean m logger.trace("[{}] on node [{}] has allocation id [{}]", shard, nodeShardState.getNode(), nodeShardState.allocationId()); } } else { - final long finalVerison = version; - // when there is an store exception, we disregard the reported version and assign it as no version (same as shard does not exist) - logger.trace((Supplier) () -> new ParameterizedMessage("[{}] on node [{}] has version [{}] but the store can not be opened, treating no version", shard, nodeShardState.getNode(), finalVerison), nodeShardState.storeException()); - version = ShardStateMetaData.NO_VERSION; + final long finalVersion = version; + if (nodeShardState.storeException() instanceof ShardLockObtainFailedException) { + logger.trace((Supplier) () -> new ParameterizedMessage("[{}] on node [{}] has version [{}] but the store can not be opened as it's locked, treating as valid shard", shard, nodeShardState.getNode(), finalVersion), nodeShardState.storeException()); + if (nodeShardState.allocationId() != null) { + version = Long.MAX_VALUE; // shard was already selected in a 5.x cluster as primary, prefer this shard copy again. + } else { + version = 0L; // treat as lowest version so that this shard is the least likely to be selected as primary + } + } else { + // disregard the reported version and assign it as no version (same as shard does not exist) + logger.trace((Supplier) () -> new ParameterizedMessage("[{}] on node [{}] has version [{}] but the store can not be opened, treating no version", shard, nodeShardState.getNode(), finalVersion), nodeShardState.storeException()); + version = ShardStateMetaData.NO_VERSION; + } } if (version != ShardStateMetaData.NO_VERSION) { @@ -396,33 +501,47 @@ NodeShardsResult buildVersionBasedNodeShardsResult(ShardRouting shard, boolean m * recovered on any node */ private boolean recoverOnAnyNode(IndexMetaData metaData) { + // don't use the setting directly, not to trigger verbose deprecation logging return (IndexMetaData.isOnSharedFilesystem(metaData.getSettings()) || IndexMetaData.isOnSharedFilesystem(this.settings)) - && IndexMetaData.INDEX_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE_SETTING.get(metaData.getSettings(), this.settings); + && (metaData.getSettings().getAsBoolean(IndexMetaData.SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false) || + this.settings.getAsBoolean(IndexMetaData.SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false)); } - protected abstract AsyncShardFetch.FetchResult fetchData(ShardRouting shard, RoutingAllocation allocation); + protected abstract FetchResult fetchData(ShardRouting shard, RoutingAllocation allocation); - static class NodeShardsResult { - public final List orderedAllocationCandidates; - public final int allocationsFound; + private static class NodeShardsResult { + final List orderedAllocationCandidates; + final int allocationsFound; - public NodeShardsResult(List orderedAllocationCandidates, int allocationsFound) { + NodeShardsResult(List orderedAllocationCandidates, int allocationsFound) { 
this.orderedAllocationCandidates = orderedAllocationCandidates; this.allocationsFound = allocationsFound; } } static class NodesToAllocate { - final List yesNodeShards; - final List throttleNodeShards; - final List noNodeShards; + final List yesNodeShards; + final List throttleNodeShards; + final List noNodeShards; - public NodesToAllocate(List yesNodeShards, - List throttleNodeShards, - List noNodeShards) { + NodesToAllocate(List yesNodeShards, List throttleNodeShards, List noNodeShards) { this.yesNodeShards = yesNodeShards; this.throttleNodeShards = throttleNodeShards; this.noNodeShards = noNodeShards; } } + + /** + * This class encapsulates the shard state retrieved from a node and the decision that was made + * by the allocator for allocating to the node that holds the shard copy. + */ + private static class DecidedNode { + final NodeGatewayStartedShards nodeShardState; + final Decision decision; + + private DecidedNode(NodeGatewayStartedShards nodeShardState, Decision decision) { + this.nodeShardState = nodeShardState; + this.decision = decision; + } + } } diff --git a/core/src/main/java/org/elasticsearch/gateway/ReplicaShardAllocator.java b/core/src/main/java/org/elasticsearch/gateway/ReplicaShardAllocator.java index d2fbeee577675..0ea705717e7f5 100644 --- a/core/src/main/java/org/elasticsearch/gateway/ReplicaShardAllocator.java +++ b/core/src/main/java/org/elasticsearch/gateway/ReplicaShardAllocator.java @@ -23,6 +23,7 @@ import com.carrotsearch.hppc.ObjectLongMap; import com.carrotsearch.hppc.cursors.ObjectCursor; import com.carrotsearch.hppc.cursors.ObjectLongCursor; +import org.apache.logging.log4j.Logger; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.node.DiscoveryNode; @@ -31,24 +32,31 @@ import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.routing.UnassignedInfo; import org.elasticsearch.cluster.routing.UnassignedInfo.AllocationStatus; -import org.elasticsearch.cluster.routing.RoutingChangesObserver; +import org.elasticsearch.cluster.routing.allocation.AllocateUnassignedDecision; +import org.elasticsearch.cluster.routing.allocation.NodeAllocationResult; +import org.elasticsearch.cluster.routing.allocation.NodeAllocationResult.ShardStoreInfo; import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.routing.allocation.decider.Decision; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.component.AbstractComponent; +import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeValue; +import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.index.store.StoreFileMetaData; import org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData; +import org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.NodeStoreFilesMetaData; import java.util.ArrayList; +import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Objects; +import static org.elasticsearch.cluster.routing.UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING; + /** */ -public abstract class ReplicaShardAllocator extends AbstractComponent { +public abstract class ReplicaShardAllocator extends BaseGatewayShardAllocator { public ReplicaShardAllocator(Settings settings) { super(settings); @@ -65,7 +73,7 @@ public void processExistingRecoveries(RoutingAllocation allocation) { 
List shardCancellationActions = new ArrayList<>(); for (RoutingNode routingNode : routingNodes) { for (ShardRouting shard : routingNode) { - if (shard.primary() == true) { + if (shard.primary()) { continue; } if (shard.initializing() == false) { @@ -80,7 +88,7 @@ public void processExistingRecoveries(RoutingAllocation allocation) { continue; } - AsyncShardFetch.FetchResult shardStores = fetchData(shard, allocation); + AsyncShardFetch.FetchResult shardStores = fetchData(shard, allocation); if (shardStores.hasData() == false) { logger.trace("{}: fetching new stores for initializing shard", shard); continue; // still fetching @@ -96,7 +104,7 @@ public void processExistingRecoveries(RoutingAllocation allocation) { continue; } - MatchingNodes matchingNodes = findMatchingNodes(shard, allocation, primaryStore, shardStores); + MatchingNodes matchingNodes = findMatchingNodes(shard, allocation, primaryStore, shardStores, false); if (matchingNodes.getNodeWithHighestMatch() != null) { DiscoveryNode currentNode = allocation.nodes().get(shard.currentNodeId()); DiscoveryNode nodeWithHighestMatch = matchingNodes.getNodeWithHighestMatch(); @@ -109,7 +117,7 @@ public void processExistingRecoveries(RoutingAllocation allocation) { } if (currentNode.equals(nodeWithHighestMatch) == false && Objects.equals(currentSyncId, primaryStore.syncId()) == false - && matchingNodes.isNodeMatchBySyncID(nodeWithHighestMatch) == true) { + && matchingNodes.isNodeMatchBySyncID(nodeWithHighestMatch)) { // we found a better match that has a full sync id match, the existing allocation is not fully synced // so we found a better one, cancel this one logger.debug("cancelling allocation of replica on [{}], sync id match found on node [{}]", @@ -128,86 +136,108 @@ public void processExistingRecoveries(RoutingAllocation allocation) { } } - public void allocateUnassigned(RoutingAllocation allocation) { - final RoutingNodes routingNodes = allocation.routingNodes(); - final RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator = routingNodes.unassigned().iterator(); - while (unassignedIterator.hasNext()) { - ShardRouting shard = unassignedIterator.next(); - if (shard.primary()) { - continue; - } + /** + * Is the allocator responsible for allocating the given {@link ShardRouting}? + */ + private static boolean isResponsibleFor(final ShardRouting shard) { + return shard.primary() == false // must be a replica + && shard.unassigned() // must be unassigned + // if we are allocating a replica because of index creation, no need to go and find a copy, there isn't one... + && shard.unassignedInfo().getReason() != UnassignedInfo.Reason.INDEX_CREATED; + } - // if we are allocating a replica because of index creation, no need to go and find a copy, there isn't one... 
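// The cancellation check above boils down to the following condition (sketch with simplified
// locals): an initializing replica is cancelled only when a different node holds a copy whose
// sync id matches the primary while the copy currently being recovered does not.
boolean cancelCurrentRecovery =
        currentNode.equals(nodeWithHighestMatch) == false
        && Objects.equals(currentSyncId, primaryStore.syncId()) == false
        && matchingNodes.isNodeMatchBySyncID(nodeWithHighestMatch);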
- if (shard.unassignedInfo().getReason() == UnassignedInfo.Reason.INDEX_CREATED) { - continue; - } + @Override + public AllocateUnassignedDecision makeAllocationDecision(final ShardRouting unassignedShard, + final RoutingAllocation allocation, + final Logger logger) { + if (isResponsibleFor(unassignedShard) == false) { + // this allocator is not responsible for deciding on this shard + return AllocateUnassignedDecision.NOT_TAKEN; + } - // pre-check if it can be allocated to any node that currently exists, so we won't list the store for it for nothing - Decision decision = canBeAllocatedToAtLeastOneNode(shard, allocation); - if (decision.type() != Decision.Type.YES) { - logger.trace("{}: ignoring allocation, can't be allocated on any node", shard); - unassignedIterator.removeAndIgnore(UnassignedInfo.AllocationStatus.fromDecision(decision), allocation.changes()); - continue; - } + final RoutingNodes routingNodes = allocation.routingNodes(); + final boolean explain = allocation.debugDecision(); + // pre-check if it can be allocated to any node that currently exists, so we won't list the store for it for nothing + Tuple> result = canBeAllocatedToAtLeastOneNode(unassignedShard, allocation); + Decision allocateDecision = result.v1(); + if (allocateDecision.type() != Decision.Type.YES + && (explain == false || hasInitiatedFetching(unassignedShard) == false)) { + // only return early if we are not in explain mode, or we are in explain mode but we have not + // yet attempted to fetch any shard data + logger.trace("{}: ignoring allocation, can't be allocated on any node", unassignedShard); + return AllocateUnassignedDecision.no(UnassignedInfo.AllocationStatus.fromDecision(allocateDecision.type()), + result.v2() != null ? new ArrayList<>(result.v2().values()) : null); + } - AsyncShardFetch.FetchResult shardStores = fetchData(shard, allocation); - if (shardStores.hasData() == false) { - logger.trace("{}: ignoring allocation, still fetching shard stores", shard); - allocation.setHasPendingAsyncFetch(); - unassignedIterator.removeAndIgnore(AllocationStatus.FETCHING_SHARD_DATA, allocation.changes()); - continue; // still fetching + AsyncShardFetch.FetchResult shardStores = fetchData(unassignedShard, allocation); + if (shardStores.hasData() == false) { + logger.trace("{}: ignoring allocation, still fetching shard stores", unassignedShard); + allocation.setHasPendingAsyncFetch(); + List nodeDecisions = null; + if (explain) { + nodeDecisions = buildDecisionsForAllNodes(unassignedShard, allocation); } + return AllocateUnassignedDecision.no(AllocationStatus.FETCHING_SHARD_DATA, nodeDecisions); + } - ShardRouting primaryShard = routingNodes.activePrimary(shard.shardId()); - assert primaryShard != null : "the replica shard can be allocated on at least one node, so there must be an active primary"; - TransportNodesListShardStoreMetaData.StoreFilesMetaData primaryStore = findStore(primaryShard, allocation, shardStores); - if (primaryStore == null) { - // if we can't find the primary data, it is probably because the primary shard is corrupted (and listing failed) - // we want to let the replica be allocated in order to expose the actual problem with the primary that the replica - // will try and recover from - // Note, this is the existing behavior, as exposed in running CorruptFileTest#testNoPrimaryData - logger.trace("{}: no primary shard store found or allocated, letting actual allocation figure it out", shard); - continue; - } + ShardRouting primaryShard = routingNodes.activePrimary(unassignedShard.shardId()); 
+ if (primaryShard == null) { + assert explain : "primary should only be null here if we are in explain mode, so we didn't " + + "exit early when canBeAllocatedToAtLeastOneNode didn't return a YES decision"; + return AllocateUnassignedDecision.no(UnassignedInfo.AllocationStatus.fromDecision(allocateDecision.type()), + new ArrayList<>(result.v2().values())); + } - MatchingNodes matchingNodes = findMatchingNodes(shard, allocation, primaryStore, shardStores); + TransportNodesListShardStoreMetaData.StoreFilesMetaData primaryStore = findStore(primaryShard, allocation, shardStores); + if (primaryStore == null) { + // if we can't find the primary data, it is probably because the primary shard is corrupted (and listing failed) + // we want to let the replica be allocated in order to expose the actual problem with the primary that the replica + // will try and recover from + // Note, this is the existing behavior, as exposed in running CorruptFileTest#testNoPrimaryData + logger.trace("{}: no primary shard store found or allocated, letting actual allocation figure it out", unassignedShard); + return AllocateUnassignedDecision.NOT_TAKEN; + } - if (matchingNodes.getNodeWithHighestMatch() != null) { - RoutingNode nodeWithHighestMatch = allocation.routingNodes().node(matchingNodes.getNodeWithHighestMatch().getId()); - // we only check on THROTTLE since we checked before before on NO - decision = allocation.deciders().canAllocate(shard, nodeWithHighestMatch, allocation); - if (decision.type() == Decision.Type.THROTTLE) { - logger.debug("[{}][{}]: throttling allocation [{}] to [{}] in order to reuse its unallocated persistent store", shard.index(), shard.id(), shard, nodeWithHighestMatch.node()); - // we are throttling this, but we have enough to allocate to this node, ignore it for now - unassignedIterator.removeAndIgnore(UnassignedInfo.AllocationStatus.fromDecision(decision), allocation.changes()); - } else { - logger.debug("[{}][{}]: allocating [{}] to [{}] in order to reuse its unallocated persistent store", shard.index(), shard.id(), shard, nodeWithHighestMatch.node()); - // we found a match - unassignedIterator.initialize(nodeWithHighestMatch.nodeId(), null, allocation.clusterInfo().getShardSize(shard, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE), allocation.changes()); - } - } else if (matchingNodes.hasAnyData() == false) { - // if we didn't manage to find *any* data (regardless of matching sizes), check if the allocation of the replica shard needs to be delayed - ignoreUnassignedIfDelayed(unassignedIterator, shard, allocation.changes()); + MatchingNodes matchingNodes = findMatchingNodes(unassignedShard, allocation, primaryStore, shardStores, explain); + assert explain == false || matchingNodes.nodeDecisions != null : "in explain mode, we must have individual node decisions"; + + List nodeDecisions = augmentExplanationsWithStoreInfo(result.v2(), matchingNodes.nodeDecisions); + if (allocateDecision.type() != Decision.Type.YES) { + return AllocateUnassignedDecision.no(UnassignedInfo.AllocationStatus.fromDecision(allocateDecision.type()), nodeDecisions); + } else if (matchingNodes.getNodeWithHighestMatch() != null) { + RoutingNode nodeWithHighestMatch = allocation.routingNodes().node(matchingNodes.getNodeWithHighestMatch().getId()); + // we only check on THROTTLE since we checked before before on NO + Decision decision = allocation.deciders().canAllocate(unassignedShard, nodeWithHighestMatch, allocation); + if (decision.type() == Decision.Type.THROTTLE) { + logger.debug("[{}][{}]: throttling 
allocation [{}] to [{}] in order to reuse its unallocated persistent store", + unassignedShard.index(), unassignedShard.id(), unassignedShard, nodeWithHighestMatch.node()); + // we are throttling this, as we have enough other shards to allocate to this node, so ignore it for now + return AllocateUnassignedDecision.throttle(nodeDecisions); + } else { + logger.debug("[{}][{}]: allocating [{}] to [{}] in order to reuse its unallocated persistent store", + unassignedShard.index(), unassignedShard.id(), unassignedShard, nodeWithHighestMatch.node()); + // we found a match + return AllocateUnassignedDecision.yes(nodeWithHighestMatch.node(), null, nodeDecisions, true); } + } else if (matchingNodes.hasAnyData() == false && unassignedShard.unassignedInfo().isDelayed()) { + // if we didn't manage to find *any* data (regardless of matching sizes), and the replica is + // unassigned due to a node leaving, so we delay allocation of this replica to see if the + // node with the shard copy will rejoin so we can re-use the copy it has + logger.debug("{}: allocation of [{}] is delayed", unassignedShard.shardId(), unassignedShard); + long remainingDelayMillis = 0L; + long totalDelayMillis = 0L; + if (explain) { + UnassignedInfo unassignedInfo = unassignedShard.unassignedInfo(); + MetaData metadata = allocation.metaData(); + IndexMetaData indexMetaData = metadata.index(unassignedShard.index()); + totalDelayMillis = INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.get(indexMetaData.getSettings()).getMillis(); + long remainingDelayNanos = unassignedInfo.getRemainingDelay(System.nanoTime(), indexMetaData.getSettings()); + remainingDelayMillis = TimeValue.timeValueNanos(remainingDelayNanos).millis(); + } + return AllocateUnassignedDecision.delayed(remainingDelayMillis, totalDelayMillis, nodeDecisions); } - } - /** - * Check if the allocation of the replica is to be delayed. Compute the delay and if it is delayed, add it to the ignore unassigned list - * Note: we only care about replica in delayed allocation, since if we have an unassigned primary it - * will anyhow wait to find an existing copy of the shard to be allocated - * Note: the other side of the equation is scheduling a reroute in a timely manner, which happens in the RoutingService - * - * PUBLIC FOR TESTS! - * - * @param unassignedIterator iterator over unassigned shards - * @param shard the shard which might be delayed - */ - public void ignoreUnassignedIfDelayed(RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator, ShardRouting shard, RoutingChangesObserver changes) { - if (shard.unassignedInfo().isDelayed()) { - logger.debug("{}: allocation of [{}] is delayed", shard.shardId(), shard); - unassignedIterator.removeAndIgnore(AllocationStatus.DELAYED_ALLOCATION, changes); - } + return AllocateUnassignedDecision.NOT_TAKEN; } /** @@ -215,10 +245,14 @@ public void ignoreUnassignedIfDelayed(RoutingNodes.UnassignedShards.UnassignedIt * * Returns the best allocation decision for allocating the shard on any node (i.e. YES if at least one * node decided YES, THROTTLE if at least one node decided THROTTLE, and NO if none of the nodes decided - * YES or THROTTLE. + * YES or THROTTLE). If in explain mode, also returns the node-level explanations as the second element + * in the returned tuple. 
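// The aggregation described above, as a sketch: the strongest answer across all data nodes wins
// (YES beats THROTTLE beats NO), and only explain mode keeps iterating after the first YES so
// that every node's explanation can be collected. dataNodes and explain are assumed locals.
Decision best = Decision.NO;
for (RoutingNode node : dataNodes) {
    Decision decision = allocation.deciders().canAllocate(shard, node, allocation);
    if (decision.type() == Decision.Type.YES && best.type() != Decision.Type.YES) {
        best = decision;
        if (explain == false) {
            break;                       // a single YES is enough when no explanation is requested
        }
    } else if (best.type() == Decision.Type.NO && decision.type() == Decision.Type.THROTTLE) {
        best = decision;
    }
}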
*/ - private Decision canBeAllocatedToAtLeastOneNode(ShardRouting shard, RoutingAllocation allocation) { + private Tuple> canBeAllocatedToAtLeastOneNode(ShardRouting shard, + RoutingAllocation allocation) { Decision madeDecision = Decision.NO; + final boolean explain = allocation.debugDecision(); + Map nodeDecisions = explain ? new HashMap<>() : null; for (ObjectCursor cursor : allocation.nodes().getDataNodes().values()) { RoutingNode node = allocation.routingNodes().node(cursor.value.getId()); if (node == null) { @@ -227,25 +261,52 @@ private Decision canBeAllocatedToAtLeastOneNode(ShardRouting shard, RoutingAlloc // if we can't allocate it on a node, ignore it, for example, this handles // cases for only allocating a replica after a primary Decision decision = allocation.deciders().canAllocate(shard, node, allocation); - if (decision.type() == Decision.Type.YES) { - return decision; + if (decision.type() == Decision.Type.YES && madeDecision.type() != Decision.Type.YES) { + if (explain) { + madeDecision = decision; + } else { + return Tuple.tuple(decision, nodeDecisions); + } } else if (madeDecision.type() == Decision.Type.NO && decision.type() == Decision.Type.THROTTLE) { madeDecision = decision; } + if (explain) { + nodeDecisions.put(node.nodeId(), new NodeAllocationResult(node.node(), null, decision)); + } + } + return Tuple.tuple(madeDecision, nodeDecisions); + } + + /** + * Takes the store info for nodes that have a shard store and adds them to the node decisions, + * leaving the node explanations untouched for those nodes that do not have any store information. + */ + private List augmentExplanationsWithStoreInfo(Map nodeDecisions, + Map withShardStores) { + if (nodeDecisions == null || withShardStores == null) { + return null; + } + List augmented = new ArrayList<>(); + for (Map.Entry entry : nodeDecisions.entrySet()) { + if (withShardStores.containsKey(entry.getKey())) { + augmented.add(withShardStores.get(entry.getKey())); + } else { + augmented.add(entry.getValue()); + } } - return madeDecision; + return augmented; } /** * Finds the store for the assigned shard in the fetched data, returns null if none is found. */ - private TransportNodesListShardStoreMetaData.StoreFilesMetaData findStore(ShardRouting shard, RoutingAllocation allocation, AsyncShardFetch.FetchResult data) { + private TransportNodesListShardStoreMetaData.StoreFilesMetaData findStore(ShardRouting shard, RoutingAllocation allocation, AsyncShardFetch.FetchResult data) { assert shard.currentNodeId() != null; DiscoveryNode primaryNode = allocation.nodes().get(shard.currentNodeId()); if (primaryNode == null) { return null; } - TransportNodesListShardStoreMetaData.NodeStoreFilesMetaData primaryNodeFilesStore = data.getData().get(primaryNode); + NodeStoreFilesMetaData primaryNodeFilesStore = data.getData().get(primaryNode); if (primaryNodeFilesStore == null) { return null; } @@ -254,9 +315,11 @@ private TransportNodesListShardStoreMetaData.StoreFilesMetaData findStore(ShardR private MatchingNodes findMatchingNodes(ShardRouting shard, RoutingAllocation allocation, TransportNodesListShardStoreMetaData.StoreFilesMetaData primaryStore, - AsyncShardFetch.FetchResult data) { + AsyncShardFetch.FetchResult data, + boolean explain) { ObjectLongMap nodesToSize = new ObjectLongHashMap<>(); - for (Map.Entry nodeStoreEntry : data.getData().entrySet()) { + Map nodeDecisions = explain ? 
new HashMap<>() : null; + for (Map.Entry nodeStoreEntry : data.getData().entrySet()) { DiscoveryNode discoNode = nodeStoreEntry.getKey(); TransportNodesListShardStoreMetaData.StoreFilesMetaData storeFilesMetaData = nodeStoreEntry.getValue().storeFilesMetaData(); // we don't have any files at all, it is an empty index @@ -273,41 +336,70 @@ private MatchingNodes findMatchingNodes(ShardRouting shard, RoutingAllocation al // we only check for NO, since if this node is THROTTLING and it has enough "same data" // then we will try and assign it next time Decision decision = allocation.deciders().canAllocate(shard, node, allocation); + + long matchingBytes = -1; + if (explain) { + matchingBytes = computeMatchingBytes(primaryStore, storeFilesMetaData); + ShardStoreInfo shardStoreInfo = new ShardStoreInfo(matchingBytes); + nodeDecisions.put(node.nodeId(), new NodeAllocationResult(discoNode, shardStoreInfo, decision)); + } + if (decision.type() == Decision.Type.NO) { continue; } - String primarySyncId = primaryStore.syncId(); - String replicaSyncId = storeFilesMetaData.syncId(); - // see if we have a sync id we can make use of - if (replicaSyncId != null && replicaSyncId.equals(primarySyncId)) { - logger.trace("{}: node [{}] has same sync id {} as primary", shard, discoNode.getName(), replicaSyncId); - nodesToSize.put(discoNode, Long.MAX_VALUE); - } else { - long sizeMatched = 0; - for (StoreFileMetaData storeFileMetaData : storeFilesMetaData) { - String metaDataFileName = storeFileMetaData.name(); - if (primaryStore.fileExists(metaDataFileName) && primaryStore.file(metaDataFileName).isSame(storeFileMetaData)) { - sizeMatched += storeFileMetaData.length(); - } + if (matchingBytes < 0) { + matchingBytes = computeMatchingBytes(primaryStore, storeFilesMetaData); + } + nodesToSize.put(discoNode, matchingBytes); + if (logger.isTraceEnabled()) { + if (matchingBytes == Long.MAX_VALUE) { + logger.trace("{}: node [{}] has same sync id {} as primary", shard, discoNode.getName(), storeFilesMetaData.syncId()); + } else { + logger.trace("{}: node [{}] has [{}/{}] bytes of re-usable data", + shard, discoNode.getName(), new ByteSizeValue(matchingBytes), matchingBytes); } - logger.trace("{}: node [{}] has [{}/{}] bytes of re-usable data", - shard, discoNode.getName(), new ByteSizeValue(sizeMatched), sizeMatched); - nodesToSize.put(discoNode, sizeMatched); } } - return new MatchingNodes(nodesToSize); + return new MatchingNodes(nodesToSize, nodeDecisions); + } + + private static long computeMatchingBytes(TransportNodesListShardStoreMetaData.StoreFilesMetaData primaryStore, + TransportNodesListShardStoreMetaData.StoreFilesMetaData storeFilesMetaData) { + String primarySyncId = primaryStore.syncId(); + String replicaSyncId = storeFilesMetaData.syncId(); + // see if we have a sync id we can make use of + if (replicaSyncId != null && replicaSyncId.equals(primarySyncId)) { + return Long.MAX_VALUE; + } else { + long sizeMatched = 0; + for (StoreFileMetaData storeFileMetaData : storeFilesMetaData) { + String metaDataFileName = storeFileMetaData.name(); + if (primaryStore.fileExists(metaDataFileName) && primaryStore.file(metaDataFileName).isSame(storeFileMetaData)) { + sizeMatched += storeFileMetaData.length(); + } + } + return sizeMatched; + } } - protected abstract AsyncShardFetch.FetchResult fetchData(ShardRouting shard, RoutingAllocation allocation); + protected abstract AsyncShardFetch.FetchResult fetchData(ShardRouting shard, RoutingAllocation allocation); + + /** + * Returns a boolean indicating whether fetching shard 
data has been triggered at any point for the given shard. + */ + protected abstract boolean hasInitiatedFetching(ShardRouting shard); static class MatchingNodes { private final ObjectLongMap nodesToSize; private final DiscoveryNode nodeWithHighestMatch; + @Nullable + private final Map nodeDecisions; - public MatchingNodes(ObjectLongMap nodesToSize) { + MatchingNodes(ObjectLongMap nodesToSize, @Nullable Map nodeDecisions) { this.nodesToSize = nodesToSize; + this.nodeDecisions = nodeDecisions; long highestMatchSize = 0; DiscoveryNode highestMatchNode = null; diff --git a/core/src/main/java/org/elasticsearch/gateway/TransportNodesListGatewayMetaState.java b/core/src/main/java/org/elasticsearch/gateway/TransportNodesListGatewayMetaState.java index 24886dc72d39c..a67750e7b6a12 100644 --- a/core/src/main/java/org/elasticsearch/gateway/TransportNodesListGatewayMetaState.java +++ b/core/src/main/java/org/elasticsearch/gateway/TransportNodesListGatewayMetaState.java @@ -103,11 +103,6 @@ protected NodeGatewayMetaState nodeOperation(NodeRequest request) { } } - @Override - protected boolean accumulateExceptions() { - return true; - } - public static class Request extends BaseNodesRequest { public Request() { @@ -188,7 +183,7 @@ public MetaData metaData() { public void readFrom(StreamInput in) throws IOException { super.readFrom(in); if (in.readBoolean()) { - metaData = MetaData.Builder.readFrom(in); + metaData = MetaData.readFrom(in); } } diff --git a/core/src/main/java/org/elasticsearch/gateway/TransportNodesListGatewayStartedShards.java b/core/src/main/java/org/elasticsearch/gateway/TransportNodesListGatewayStartedShards.java index 31fc290c10cb8..f9f98296a26f7 100644 --- a/core/src/main/java/org/elasticsearch/gateway/TransportNodesListGatewayStartedShards.java +++ b/core/src/main/java/org/elasticsearch/gateway/TransportNodesListGatewayStartedShards.java @@ -39,6 +39,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.env.NodeEnvironment; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.shard.ShardId; @@ -115,7 +116,7 @@ protected NodeGatewayStartedShards nodeOperation(NodeRequest request) { try { final ShardId shardId = request.getShardId(); logger.trace("{} loading local shard state info", shardId); - ShardStateMetaData shardStateMetaData = ShardStateMetaData.FORMAT.loadLatestState(logger, + ShardStateMetaData shardStateMetaData = ShardStateMetaData.FORMAT.loadLatestState(logger, NamedXContentRegistry.EMPTY, nodeEnv.availableShardPaths(request.shardId)); if (shardStateMetaData != null) { IndexMetaData metaData = clusterService.state().metaData().index(shardId.getIndex()); @@ -123,7 +124,8 @@ protected NodeGatewayStartedShards nodeOperation(NodeRequest request) { // we may send this requests while processing the cluster state that recovered the index // sometimes the request comes in before the local node processed that cluster state // in such cases we can load it from disk - metaData = IndexMetaData.FORMAT.loadLatestState(logger, nodeEnv.indexPaths(shardId.getIndex())); + metaData = IndexMetaData.FORMAT.loadLatestState(logger, NamedXContentRegistry.EMPTY, + nodeEnv.indexPaths(shardId.getIndex())); } if (metaData == null) { ElasticsearchException e = new ElasticsearchException("failed to find local IndexMetaData"); @@ -170,11 +172,6 @@ protected 
NodeGatewayStartedShards nodeOperation(NodeRequest request) { } } - @Override - protected boolean accumulateExceptions() { - return true; - } - public static class Request extends BaseNodesRequest { private ShardId shardId; diff --git a/core/src/main/java/org/elasticsearch/http/HttpServer.java b/core/src/main/java/org/elasticsearch/http/HttpServer.java deleted file mode 100644 index ccf2d764c052a..0000000000000 --- a/core/src/main/java/org/elasticsearch/http/HttpServer.java +++ /dev/null @@ -1,201 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.http; - -import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.breaker.CircuitBreaker; -import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.component.AbstractLifecycleComponent; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.io.Streams; -import org.elasticsearch.common.io.stream.BytesStreamOutput; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.util.concurrent.ThreadContext; -import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.indices.breaker.CircuitBreakerService; -import org.elasticsearch.node.service.NodeService; -import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; -import org.elasticsearch.rest.RestController; -import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.RestResponse; -import org.elasticsearch.rest.RestStatus; - -import java.io.ByteArrayOutputStream; -import java.io.IOException; -import java.io.InputStream; -import java.util.concurrent.atomic.AtomicBoolean; - -import static org.elasticsearch.rest.RestStatus.FORBIDDEN; -import static org.elasticsearch.rest.RestStatus.INTERNAL_SERVER_ERROR; - -/** - * A component to serve http requests, backed by rest handlers. 
- */ -public class HttpServer extends AbstractLifecycleComponent implements HttpServerAdapter { - private final HttpServerTransport transport; - - private final RestController restController; - - private final NodeClient client; - - private final CircuitBreakerService circuitBreakerService; - - @Inject - public HttpServer(Settings settings, HttpServerTransport transport, RestController restController, - NodeClient client, CircuitBreakerService circuitBreakerService) { - super(settings); - this.transport = transport; - this.restController = restController; - this.client = client; - this.circuitBreakerService = circuitBreakerService; - transport.httpServerAdapter(this); - } - - - @Override - protected void doStart() { - transport.start(); - if (logger.isInfoEnabled()) { - logger.info("{}", transport.boundAddress()); - } - } - - @Override - protected void doStop() { - transport.stop(); - } - - @Override - protected void doClose() { - transport.close(); - } - - public HttpInfo info() { - return transport.info(); - } - - public HttpStats stats() { - return transport.stats(); - } - - public void dispatchRequest(RestRequest request, RestChannel channel, ThreadContext threadContext) { - if (request.rawPath().equals("/favicon.ico")) { - handleFavicon(request, channel); - return; - } - RestChannel responseChannel = channel; - try { - int contentLength = request.content().length(); - if (restController.canTripCircuitBreaker(request)) { - inFlightRequestsBreaker(circuitBreakerService).addEstimateBytesAndMaybeBreak(contentLength, ""); - } else { - inFlightRequestsBreaker(circuitBreakerService).addWithoutBreaking(contentLength); - } - // iff we could reserve bytes for the request we need to send the response also over this channel - responseChannel = new ResourceHandlingHttpChannel(channel, circuitBreakerService, contentLength); - restController.dispatchRequest(request, responseChannel, client, threadContext); - } catch (Exception e) { - restController.sendErrorResponse(request, responseChannel, e); - } - } - - void handleFavicon(RestRequest request, RestChannel channel) { - if (request.method() == RestRequest.Method.GET) { - try { - try (InputStream stream = getClass().getResourceAsStream("/config/favicon.ico")) { - ByteArrayOutputStream out = new ByteArrayOutputStream(); - Streams.copy(stream, out); - BytesRestResponse restResponse = new BytesRestResponse(RestStatus.OK, "image/x-icon", out.toByteArray()); - channel.sendResponse(restResponse); - } - } catch (IOException e) { - channel.sendResponse(new BytesRestResponse(INTERNAL_SERVER_ERROR, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)); - } - } else { - channel.sendResponse(new BytesRestResponse(FORBIDDEN, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)); - } - } - - private static final class ResourceHandlingHttpChannel implements RestChannel { - private final RestChannel delegate; - private final CircuitBreakerService circuitBreakerService; - private final int contentLength; - private final AtomicBoolean closed = new AtomicBoolean(); - - public ResourceHandlingHttpChannel(RestChannel delegate, CircuitBreakerService circuitBreakerService, int contentLength) { - this.delegate = delegate; - this.circuitBreakerService = circuitBreakerService; - this.contentLength = contentLength; - } - - @Override - public XContentBuilder newBuilder() throws IOException { - return delegate.newBuilder(); - } - - @Override - public XContentBuilder newErrorBuilder() throws IOException { - return delegate.newErrorBuilder(); - } - - @Override - public 
XContentBuilder newBuilder(@Nullable BytesReference autoDetectSource, boolean useFiltering) throws IOException { - return delegate.newBuilder(autoDetectSource, useFiltering); - } - - @Override - public BytesStreamOutput bytesOutput() { - return delegate.bytesOutput(); - } - - @Override - public RestRequest request() { - return delegate.request(); - } - - @Override - public boolean detailedErrorsEnabled() { - return delegate.detailedErrorsEnabled(); - } - - @Override - public void sendResponse(RestResponse response) { - close(); - delegate.sendResponse(response); - } - - private void close() { - // attempt to close once atomically - if (closed.compareAndSet(false, true) == false) { - throw new IllegalStateException("Channel is already closed"); - } - inFlightRequestsBreaker(circuitBreakerService).addWithoutBreaking(-contentLength); - } - - } - - private static CircuitBreaker inFlightRequestsBreaker(CircuitBreakerService circuitBreakerService) { - // We always obtain a fresh breaker to reflect changes to the breaker configuration. - return circuitBreakerService.getBreaker(CircuitBreaker.IN_FLIGHT_REQUESTS); - } -} diff --git a/core/src/main/java/org/elasticsearch/http/HttpServerAdapter.java b/core/src/main/java/org/elasticsearch/http/HttpServerAdapter.java deleted file mode 100644 index 16b0bd0044306..0000000000000 --- a/core/src/main/java/org/elasticsearch/http/HttpServerAdapter.java +++ /dev/null @@ -1,33 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.http; - -import org.elasticsearch.common.util.concurrent.ThreadContext; -import org.elasticsearch.rest.RestChannel; -import org.elasticsearch.rest.RestRequest; - -/** - * - */ -public interface HttpServerAdapter { - - void dispatchRequest(RestRequest request, RestChannel channel, ThreadContext context); - -} diff --git a/core/src/main/java/org/elasticsearch/http/HttpServerTransport.java b/core/src/main/java/org/elasticsearch/http/HttpServerTransport.java index 4dc4a888d8af4..40f08f3998c3d 100644 --- a/core/src/main/java/org/elasticsearch/http/HttpServerTransport.java +++ b/core/src/main/java/org/elasticsearch/http/HttpServerTransport.java @@ -21,6 +21,9 @@ import org.elasticsearch.common.component.LifecycleComponent; import org.elasticsearch.common.transport.BoundTransportAddress; +import org.elasticsearch.common.util.concurrent.ThreadContext; +import org.elasticsearch.rest.RestChannel; +import org.elasticsearch.rest.RestRequest; public interface HttpServerTransport extends LifecycleComponent { @@ -33,6 +36,32 @@ public interface HttpServerTransport extends LifecycleComponent { HttpStats stats(); - void httpServerAdapter(HttpServerAdapter httpServerAdapter); + /** + * Dispatches HTTP requests. 
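The removed ResourceHandlingHttpChannel above pairs a reservation of the request's content length against the in-flight requests breaker with a single, atomically guarded release when the response is sent. A stripped-down sketch of that reserve-then-release pattern, where the breaker is just a counter rather than the real CircuitBreaker API:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

public class InFlightAccountingExample {
    // Stand-in for the in-flight requests circuit breaker: only tracks reserved bytes.
    static final AtomicLong IN_FLIGHT_BYTES = new AtomicLong();

    static class AccountedChannel {
        private final long contentLength;
        private final AtomicBoolean closed = new AtomicBoolean();

        AccountedChannel(long contentLength) {
            this.contentLength = contentLength;
            IN_FLIGHT_BYTES.addAndGet(contentLength); // reserve when the request is dispatched
        }

        void sendResponse() {
            // release exactly once, mirroring the compareAndSet guard in the removed class
            if (closed.compareAndSet(false, true) == false) {
                throw new IllegalStateException("Channel is already closed");
            }
            IN_FLIGHT_BYTES.addAndGet(-contentLength);
        }
    }

    public static void main(String[] args) {
        AccountedChannel channel = new AccountedChannel(1024);
        channel.sendResponse();
        System.out.println(IN_FLIGHT_BYTES.get()); // 0
    }
}
```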
+ */ + interface Dispatcher { + + /** + * Dispatches the {@link RestRequest} to the relevant request handler or responds to the given rest channel directly if + * the request can't be handled by any request handler. + * + * @param request the request to dispatch + * @param channel the response channel of this request + * @param threadContext the thread context + */ + void dispatchRequest(RestRequest request, RestChannel channel, ThreadContext threadContext); + + /** + * Dispatches a bad request. For example, if a request is malformed it will be dispatched via this method with the cause of the bad + * request. + * + * @param request the request to dispatch + * @param channel the response channel of this request + * @param threadContext the thread context + * @param cause the cause of the bad request + */ + void dispatchBadRequest(RestRequest request, RestChannel channel, ThreadContext threadContext, Throwable cause); + + } } diff --git a/core/src/main/java/org/elasticsearch/http/HttpTransportSettings.java b/core/src/main/java/org/elasticsearch/http/HttpTransportSettings.java index 60bc3449d0be5..b5e254aa4c2ef 100644 --- a/core/src/main/java/org/elasticsearch/http/HttpTransportSettings.java +++ b/core/src/main/java/org/elasticsearch/http/HttpTransportSettings.java @@ -68,6 +68,8 @@ public final class HttpTransportSettings { Setting.intSetting("http.publish_port", -1, -1, Property.NodeScope); public static final Setting SETTING_HTTP_DETAILED_ERRORS_ENABLED = Setting.boolSetting("http.detailed_errors.enabled", true, Property.NodeScope); + public static final Setting SETTING_HTTP_CONTENT_TYPE_REQUIRED = + Setting.boolSetting("http.content_type.required", false, Property.NodeScope); public static final Setting SETTING_HTTP_MAX_CONTENT_LENGTH = Setting.byteSizeSetting("http.max_content_length", new ByteSizeValue(100, ByteSizeUnit.MB), Property.NodeScope); public static final Setting SETTING_HTTP_MAX_CHUNK_SIZE = diff --git a/core/src/main/java/org/elasticsearch/index/CompositeIndexEventListener.java b/core/src/main/java/org/elasticsearch/index/CompositeIndexEventListener.java index 3b2cf5cbd0749..90d8a205e8b57 100644 --- a/core/src/main/java/org/elasticsearch/index/CompositeIndexEventListener.java +++ b/core/src/main/java/org/elasticsearch/index/CompositeIndexEventListener.java @@ -30,6 +30,7 @@ import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.IndexShardState; import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.indices.cluster.IndicesClusterStateService.AllocatedIndices.IndexRemovalReason; import java.util.ArrayList; import java.util.Collection; @@ -176,48 +177,24 @@ public void beforeIndexShardCreated(ShardId shardId, Settings indexSettings) { } @Override - public void beforeIndexClosed(IndexService indexService) { + public void beforeIndexRemoved(IndexService indexService, IndexRemovalReason reason) { for (IndexEventListener listener : listeners) { try { - listener.beforeIndexClosed(indexService); + listener.beforeIndexRemoved(indexService, reason); } catch (Exception e) { - logger.warn("failed to invoke before index closed callback", e); + logger.warn("failed to invoke before index removed callback", e); throw e; } } } @Override - public void beforeIndexDeleted(IndexService indexService) { + public void afterIndexRemoved(Index index, IndexSettings indexSettings, IndexRemovalReason reason) { for (IndexEventListener listener : listeners) { try { - listener.beforeIndexDeleted(indexService); + listener.afterIndexRemoved(index, indexSettings, 
reason); } catch (Exception e) { - logger.warn("failed to invoke before index deleted callback", e); - throw e; - } - } - } - - @Override - public void afterIndexDeleted(Index index, Settings indexSettings) { - for (IndexEventListener listener : listeners) { - try { - listener.afterIndexDeleted(index, indexSettings); - } catch (Exception e) { - logger.warn("failed to invoke after index deleted callback", e); - throw e; - } - } - } - - @Override - public void afterIndexClosed(Index index, Settings indexSettings) { - for (IndexEventListener listener : listeners) { - try { - listener.afterIndexClosed(index, indexSettings); - } catch (Exception e) { - logger.warn("failed to invoke after index closed callback", e); + logger.warn("failed to invoke after index removed callback", e); throw e; } } diff --git a/core/src/main/java/org/elasticsearch/index/Index.java b/core/src/main/java/org/elasticsearch/index/Index.java index 25b293ad3879e..da94ad2ec7250 100644 --- a/core/src/main/java/org/elasticsearch/index/Index.java +++ b/core/src/main/java/org/elasticsearch/index/Index.java @@ -21,8 +21,6 @@ import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; @@ -42,7 +40,7 @@ public class Index implements Writeable, ToXContent { public static final Index[] EMPTY_ARRAY = new Index[0]; private static final String INDEX_UUID_KEY = "index_uuid"; private static final String INDEX_NAME_KEY = "index_name"; - private static final ObjectParser INDEX_PARSER = new ObjectParser<>("index", Builder::new); + private static final ObjectParser INDEX_PARSER = new ObjectParser<>("index", Builder::new); static { INDEX_PARSER.declareString(Builder::name, new ParseField(INDEX_NAME_KEY)); INDEX_PARSER.declareString(Builder::uuid, new ParseField(INDEX_UUID_KEY)); @@ -118,11 +116,7 @@ public XContentBuilder toXContent(final XContentBuilder builder, final Params pa } public static Index fromXContent(final XContentParser parser) throws IOException { - return INDEX_PARSER.parse(parser, () -> ParseFieldMatcher.STRICT).build(); - } - - public static final Index parseIndex(final XContentParser parser, final ParseFieldMatcherSupplier supplier) { - return INDEX_PARSER.apply(parser, supplier).build(); + return INDEX_PARSER.parse(parser, null).build(); } /** diff --git a/core/src/main/java/org/elasticsearch/index/IndexModule.java b/core/src/main/java/org/elasticsearch/index/IndexModule.java index f6227ca3276fc..22c58f308eea9 100644 --- a/core/src/main/java/org/elasticsearch/index/IndexModule.java +++ b/core/src/main/java/org/elasticsearch/index/IndexModule.java @@ -20,16 +20,21 @@ package org.elasticsearch.index; import org.apache.lucene.util.SetOnce; +import org.elasticsearch.client.Client; +import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Strings; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.BigArrays; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.env.NodeEnvironment; import org.elasticsearch.index.analysis.AnalysisRegistry; -import org.elasticsearch.index.cache.query.QueryCache; -import 
org.elasticsearch.index.cache.query.IndexQueryCache; import org.elasticsearch.index.cache.query.DisabledQueryCache; +import org.elasticsearch.index.cache.query.IndexQueryCache; +import org.elasticsearch.index.cache.query.QueryCache; import org.elasticsearch.index.engine.EngineFactory; +import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.shard.IndexEventListener; import org.elasticsearch.index.shard.IndexSearcherWrapper; import org.elasticsearch.index.shard.IndexingOperationListener; @@ -40,8 +45,11 @@ import org.elasticsearch.index.store.IndexStore; import org.elasticsearch.index.store.IndexStoreConfig; import org.elasticsearch.indices.IndicesQueryCache; +import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache; import org.elasticsearch.indices.mapper.MapperRegistry; +import org.elasticsearch.script.ScriptService; +import org.elasticsearch.threadpool.ThreadPool; import java.io.IOException; import java.util.ArrayList; @@ -93,6 +101,11 @@ public final class IndexModule { public static final Setting INDEX_QUERY_CACHE_EVERYTHING_SETTING = Setting.boolSetting("index.queries.cache.everything", false, Property.IndexScope); + // This setting is an escape hatch in case not caching term queries would slow some users down + // Do not document. + public static final Setting INDEX_QUERY_CACHE_TERM_QUERIES_SETTING = + Setting.boolSetting("index.queries.cache.term_queries", false, Property.IndexScope); + private final IndexSettings indexSettings; private final IndexStoreConfig indexStoreConfig; private final AnalysisRegistry analysisRegistry; @@ -306,12 +319,14 @@ public interface IndexSearcherWrapperFactory { /** * Returns a new IndexSearcherWrapper. This method is called once per index per node */ - IndexSearcherWrapper newWrapper(final IndexService indexService); + IndexSearcherWrapper newWrapper(IndexService indexService); } - public IndexService newIndexService(NodeEnvironment environment, IndexService.ShardStoreDeleter shardStoreDeleter, - NodeServicesProvider servicesProvider, IndicesQueryCache indicesQueryCache, - MapperRegistry mapperRegistry, IndicesFieldDataCache indicesFieldDataCache) throws IOException { + public IndexService newIndexService(NodeEnvironment environment, NamedXContentRegistry xContentRegistry, + IndexService.ShardStoreDeleter shardStoreDeleter, CircuitBreakerService circuitBreakerService, BigArrays bigArrays, + ThreadPool threadPool, ScriptService scriptService, + ClusterService clusterService, Client client, IndicesQueryCache indicesQueryCache, MapperRegistry mapperRegistry, + IndicesFieldDataCache indicesFieldDataCache) throws IOException { final IndexEventListener eventListener = freeze(); IndexSearcherWrapperFactory searcherWrapperFactory = indexSearcherWrapper.get() == null ? 
(shard) -> null : indexSearcherWrapper.get(); @@ -344,9 +359,20 @@ public IndexService newIndexService(NodeEnvironment environment, IndexService.Sh } else { queryCache = new DisabledQueryCache(indexSettings); } - return new IndexService(indexSettings, environment, new SimilarityService(indexSettings, similarities), shardStoreDeleter, - analysisRegistry, engineFactory.get(), servicesProvider, queryCache, store, eventListener, searcherWrapperFactory, - mapperRegistry, indicesFieldDataCache, searchOperationListeners, indexOperationListeners); + return new IndexService(indexSettings, environment, xContentRegistry, new SimilarityService(indexSettings, similarities), + shardStoreDeleter, analysisRegistry, engineFactory.get(), circuitBreakerService, bigArrays, threadPool, scriptService, + clusterService, client, queryCache, store, eventListener, searcherWrapperFactory, mapperRegistry, + indicesFieldDataCache, searchOperationListeners, indexOperationListeners); + } + + /** + * creates a new mapper service to do administrative work like mapping updates. This *should not* be used for document parsing. + * doing so will result in an exception. + */ + public MapperService newIndexMapperService(NamedXContentRegistry xContentRegistry, MapperRegistry mapperRegistry) throws IOException { + return new MapperService(indexSettings, analysisRegistry.build(indexSettings), xContentRegistry, + new SimilarityService(indexSettings, similarities), mapperRegistry, + () -> { throw new UnsupportedOperationException("no index query shard context available"); }); } /** diff --git a/core/src/main/java/org/elasticsearch/index/IndexNotFoundException.java b/core/src/main/java/org/elasticsearch/index/IndexNotFoundException.java index 035b90dd25e15..4442ee276c9cf 100644 --- a/core/src/main/java/org/elasticsearch/index/IndexNotFoundException.java +++ b/core/src/main/java/org/elasticsearch/index/IndexNotFoundException.java @@ -24,9 +24,16 @@ import java.io.IOException; public final class IndexNotFoundException extends ResourceNotFoundException { + /** + * Construct with a custom message. 
+ */ + public IndexNotFoundException(String message, String index) { + super(message); + setIndex(index); + } public IndexNotFoundException(String index) { - this(index, null); + this(index, (Throwable) null); } public IndexNotFoundException(String index, Throwable cause) { diff --git a/core/src/main/java/org/elasticsearch/index/IndexService.java b/core/src/main/java/org/elasticsearch/index/IndexService.java index e662e46c79de6..a4b16efe1f641 100644 --- a/core/src/main/java/org/elasticsearch/index/IndexService.java +++ b/core/src/main/java/org/elasticsearch/index/IndexService.java @@ -22,42 +22,37 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; import org.apache.lucene.index.IndexReader; -import org.apache.lucene.search.BooleanClause; -import org.apache.lucene.search.BooleanQuery; -import org.apache.lucene.search.Query; import org.apache.lucene.store.AlreadyClosedException; import org.apache.lucene.util.Accountable; import org.apache.lucene.util.IOUtils; -import org.elasticsearch.cluster.metadata.AliasMetaData; +import org.elasticsearch.client.Client; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.routing.ShardRouting; +import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.util.concurrent.FutureUtils; -import org.elasticsearch.common.xcontent.XContentFactory; -import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.env.NodeEnvironment; import org.elasticsearch.env.ShardLock; import org.elasticsearch.env.ShardLockObtainFailedException; import org.elasticsearch.index.analysis.AnalysisRegistry; -import org.elasticsearch.index.analysis.AnalysisService; +import org.elasticsearch.index.analysis.IndexAnalyzers; import org.elasticsearch.index.cache.IndexCache; import org.elasticsearch.index.cache.bitset.BitsetFilterCache; import org.elasticsearch.index.cache.query.QueryCache; import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.index.engine.EngineClosedException; import org.elasticsearch.index.engine.EngineFactory; import org.elasticsearch.index.fielddata.IndexFieldDataCache; import org.elasticsearch.index.fielddata.IndexFieldDataService; import org.elasticsearch.index.mapper.MapperService; -import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.shard.IndexEventListener; import org.elasticsearch.index.shard.IndexSearcherWrapper; import org.elasticsearch.index.shard.IndexShard; +import org.elasticsearch.index.shard.IndexShardClosedException; import org.elasticsearch.index.shard.IndexingOperationListener; import org.elasticsearch.index.shard.SearchOperationListener; import org.elasticsearch.index.shard.ShadowIndexShard; @@ -68,11 +63,11 @@ import org.elasticsearch.index.store.IndexStore; import org.elasticsearch.index.store.Store; import org.elasticsearch.index.translog.Translog; -import org.elasticsearch.indices.AliasFilterParsingException; -import org.elasticsearch.indices.InvalidAliasNameException; +import org.elasticsearch.indices.breaker.CircuitBreakerService; import 
org.elasticsearch.indices.cluster.IndicesClusterStateService; import org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache; import org.elasticsearch.indices.mapper.MapperRegistry; +import org.elasticsearch.script.ScriptService; import org.elasticsearch.threadpool.ThreadPool; import java.io.Closeable; @@ -84,11 +79,11 @@ import java.util.List; import java.util.Map; import java.util.Objects; -import java.util.Optional; import java.util.Set; import java.util.concurrent.ScheduledFuture; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; +import java.util.function.LongSupplier; import static java.util.Collections.emptyMap; import static java.util.Collections.unmodifiableMap; @@ -97,16 +92,15 @@ public class IndexService extends AbstractIndexComponent implements IndicesClusterStateService.AllocatedIndex { private final IndexEventListener eventListener; - private final AnalysisService analysisService; private final IndexFieldDataService indexFieldData; private final BitsetFilterCache bitsetFilterCache; private final NodeEnvironment nodeEnv; private final ShardStoreDeleter shardStoreDeleter; - private final NodeServicesProvider nodeServicesProvider; private final IndexStore indexStore; private final IndexSearcherWrapper searcherWrapper; private final IndexCache indexCache; private final MapperService mapperService; + private final NamedXContentRegistry xContentRegistry; private final SimilarityService similarityService; private final EngineFactory engineFactory; private final IndexWarmer warmer; @@ -120,13 +114,22 @@ public class IndexService extends AbstractIndexComponent implements IndicesClust private volatile AsyncTranslogFSync fsyncTask; private final ThreadPool threadPool; private final BigArrays bigArrays; + private final ScriptService scriptService; + private final ClusterService clusterService; + private final Client client; public IndexService(IndexSettings indexSettings, NodeEnvironment nodeEnv, + NamedXContentRegistry xContentRegistry, SimilarityService similarityService, ShardStoreDeleter shardStoreDeleter, AnalysisRegistry registry, @Nullable EngineFactory engineFactory, - NodeServicesProvider nodeServicesProvider, + CircuitBreakerService circuitBreakerService, + BigArrays bigArrays, + ThreadPool threadPool, + ScriptService scriptService, + ClusterService clusterService, + Client client, QueryCache queryCache, IndexStore indexStore, IndexEventListener eventListener, @@ -137,18 +140,21 @@ public IndexService(IndexSettings indexSettings, NodeEnvironment nodeEnv, List indexingOperationListeners) throws IOException { super(indexSettings); this.indexSettings = indexSettings; - this.analysisService = registry.build(indexSettings); + this.xContentRegistry = xContentRegistry; this.similarityService = similarityService; - this.mapperService = new MapperService(indexSettings, analysisService, similarityService, mapperRegistry, - IndexService.this::newQueryShardContext); - this.indexFieldData = new IndexFieldDataService(indexSettings, indicesFieldDataCache, - nodeServicesProvider.getCircuitBreakerService(), mapperService); + this.mapperService = new MapperService(indexSettings, registry.build(indexSettings), xContentRegistry, similarityService, + mapperRegistry, + // we parse all percolator queries as they would be parsed on shard 0 + () -> newQueryShardContext(0, null, System::currentTimeMillis)); + this.indexFieldData = new IndexFieldDataService(indexSettings, indicesFieldDataCache, circuitBreakerService, mapperService); this.shardStoreDeleter = 
shardStoreDeleter; - this.bigArrays = nodeServicesProvider.getBigArrays(); - this.threadPool = nodeServicesProvider.getThreadPool(); + this.bigArrays = bigArrays; + this.threadPool = threadPool; + this.scriptService = scriptService; + this.clusterService = clusterService; + this.client = client; this.eventListener = eventListener; this.nodeEnv = nodeEnv; - this.nodeServicesProvider = nodeServicesProvider; this.indexStore = indexStore; indexFieldData.setListener(new FieldDataCacheListener(this)); this.bitsetFilterCache = new BitsetFilterCache(indexSettings, new BitsetCacheListener(this)); @@ -214,14 +220,18 @@ public IndexFieldDataService fieldData() { return indexFieldData; } - public AnalysisService analysisService() { - return this.analysisService; + public IndexAnalyzers getIndexAnalyzers() { + return this.mapperService.getIndexAnalyzers(); } public MapperService mapperService() { return mapperService; } + public NamedXContentRegistry xContentRegistry() { + return xContentRegistry; + } + public SimilarityService similarityService() { return similarityService; } @@ -239,7 +249,7 @@ public synchronized void close(final String reason, boolean delete) throws IOExc } } } finally { - IOUtils.close(bitsetFilterCache, indexCache, indexFieldData, analysisService, refreshTask, fsyncTask); + IOUtils.close(bitsetFilterCache, indexCache, indexFieldData, mapperService, refreshTask, fsyncTask); } } } @@ -320,13 +330,13 @@ public synchronized IndexShard createShard(ShardRouting routing) throws IOExcept } if (shards.containsKey(shardId.id())) { - throw new IndexShardAlreadyExistsException(shardId + " already exists"); + throw new IllegalStateException(shardId + " already exists"); } logger.debug("creating shard_id {}", shardId); // if we are on a shared FS we only own the shard (ie. we can safely delete it) if we are the primary. 
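Note how the IndexService constructor above hands MapperService a deferred supplier for the query shard context instead of an eagerly built instance, and how newIndexMapperService passes a supplier that simply throws. A small sketch of that wiring with placeholder types (none of these names are real Elasticsearch classes):

```java
import java.util.function.Supplier;

public class DeferredContextExample {
    // Placeholder for QueryShardContext.
    static class ParsingContext {}

    static class Mappings {
        private final Supplier<ParsingContext> contextSupplier;

        Mappings(Supplier<ParsingContext> contextSupplier) {
            // only invoked when a query actually needs to be parsed
            this.contextSupplier = contextSupplier;
        }

        ParsingContext parsingContext() {
            return contextSupplier.get();
        }
    }

    public static void main(String[] args) {
        // full service: builds a context on demand, analogous to "parse as if on shard 0"
        Mappings full = new Mappings(ParsingContext::new);
        System.out.println(full.parsingContext() != null);

        // administrative-only instance: parsing documents is unsupported by design
        Mappings adminOnly = new Mappings(() -> {
            throw new UnsupportedOperationException("no index query shard context available");
        });
        try {
            adminOnly.parsingContext();
        } catch (UnsupportedOperationException expected) {
            System.out.println("admin-only instance cannot parse queries");
        }
    }
}
```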
- final boolean canDeleteShardContent = IndexMetaData.isOnSharedFilesystem(indexSettings) == false || - (primary && IndexMetaData.isOnSharedFilesystem(indexSettings)); + final boolean canDeleteShardContent = this.indexSettings.isOnSharedFilesystem() == false || + (primary && this.indexSettings.isOnSharedFilesystem()); final Engine.Warmer engineWarmer = (searcher) -> { IndexShard shard = getShardOrNull(shardId.getId()); if (shard != null) { @@ -335,7 +345,7 @@ public synchronized IndexShard createShard(ShardRouting routing) throws IOExcept }; store = new Store(shardId, this.indexSettings, indexStore.newDirectoryService(path), lock, new StoreCloseListener(shardId, canDeleteShardContent, () -> eventListener.onStoreClosed(shardId))); - if (useShadowEngine(primary, indexSettings)) { + if (useShadowEngine(primary, this.indexSettings)) { indexShard = new ShadowIndexShard(routing, this.indexSettings, path, store, indexCache, mapperService, similarityService, indexFieldData, engineFactory, eventListener, searcherWrapper, threadPool, bigArrays, engineWarmer, searchOperationListeners); @@ -362,8 +372,8 @@ public synchronized IndexShard createShard(ShardRouting routing) throws IOExcept } } - static boolean useShadowEngine(boolean primary, Settings indexSettings) { - return primary == false && IndexMetaData.isIndexUsingShadowReplicas(indexSettings); + static boolean useShadowEngine(boolean primary, IndexSettings indexSettings) { + return primary == false && indexSettings.isShadowReplicaIndex(); } @Override @@ -405,7 +415,11 @@ private void closeShard(String reason, ShardId sId, IndexShard indexShard, Store } } finally { try { - store.close(); + if (store != null) { + store.close(); + } else { + logger.trace("[{}] store not initialized prior to closing shard, nothing to close", shardId); + } } catch (Exception e) { logger.warn( (Supplier) () -> new ParameterizedMessage( @@ -435,10 +449,6 @@ private void onShardClose(ShardLock lock, boolean ownsShard) { } } - public NodeServicesProvider getIndexServices() { - return nodeServicesProvider; - } - @Override public IndexSettings getIndexSettings() { return indexSettings; @@ -446,34 +456,40 @@ public IndexSettings getIndexSettings() { /** * Creates a new QueryShardContext. The context has not types set yet, if types are required set them via - * {@link QueryShardContext#setTypes(String...)} + * {@link QueryShardContext#setTypes(String...)}. + * + * Passing a {@code null} {@link IndexReader} will return a valid context, however it won't be able to make + * {@link IndexReader}-specific optimizations, such as rewriting containing range queries. */ - public QueryShardContext newQueryShardContext(IndexReader indexReader) { + public QueryShardContext newQueryShardContext(int shardId, IndexReader indexReader, LongSupplier nowInMillis) { return new QueryShardContext( - indexSettings, indexCache.bitsetFilterCache(), indexFieldData, mapperService(), - similarityService(), nodeServicesProvider.getScriptService(), nodeServicesProvider.getIndicesQueriesRegistry(), - nodeServicesProvider.getClient(), indexReader, - nodeServicesProvider.getClusterService().state() - ); + shardId, indexSettings, indexCache.bitsetFilterCache(), indexFieldData, mapperService(), + similarityService(), scriptService, xContentRegistry, + client, indexReader, + nowInMillis); } /** - * Creates a new QueryShardContext. The context has not types set yet, if types are required set them via - * {@link QueryShardContext#setTypes(String...)}. 
This context may be used for query parsing but cannot be - * used for rewriting since it does not know about the current {@link IndexReader}. + * The {@link ThreadPool} to use for this index. */ - public QueryShardContext newQueryShardContext() { - return newQueryShardContext(null); - } - public ThreadPool getThreadPool() { return threadPool; } + /** + * The {@link BigArrays} to use for this index. + */ public BigArrays getBigArrays() { return bigArrays; } + /** + * The {@link ScriptService} to use for this index. + */ + public ScriptService getScriptService() { + return scriptService; + } + List getIndexOperationListeners() { // pkg private for testing return indexingOperationListeners; } @@ -492,14 +508,14 @@ private class StoreCloseListener implements Store.OnClose { private final boolean ownsShard; private final Closeable[] toClose; - public StoreCloseListener(ShardId shardId, boolean ownsShard, Closeable... toClose) { + StoreCloseListener(ShardId shardId, boolean ownsShard, Closeable... toClose) { this.shardId = shardId; this.ownsShard = ownsShard; this.toClose = toClose; } @Override - public void handle(ShardLock lock) { + public void accept(ShardLock lock) { try { assert lock.getShardId().equals(shardId) : "shard id mismatch, expected: " + shardId + " but got: " + lock.getShardId(); onShardClose(lock, ownsShard); @@ -547,7 +563,7 @@ public void onRemoval(ShardId shardId, Accountable accountable) { private final class FieldDataCacheListener implements IndexFieldDataCache.Listener { final IndexService indexService; - public FieldDataCacheListener(IndexService indexService) { + FieldDataCacheListener(IndexService indexService) { this.indexService = indexService; } @@ -572,64 +588,6 @@ public void onRemoval(ShardId shardId, String fieldName, boolean wasEvicted, lon } } - /** - * Returns the filter associated with listed filtering aliases. - *
    - * The list of filtering aliases should be obtained by calling MetaData.filteringAliases. - * Returns null if no filtering is required.
    - */ - public Query aliasFilter(QueryShardContext context, String... aliasNames) { - if (aliasNames == null || aliasNames.length == 0) { - return null; - } - final ImmutableOpenMap aliases = indexSettings.getIndexMetaData().getAliases(); - if (aliasNames.length == 1) { - AliasMetaData alias = aliases.get(aliasNames[0]); - if (alias == null) { - // This shouldn't happen unless alias disappeared after filteringAliases was called. - throw new InvalidAliasNameException(index(), aliasNames[0], "Unknown alias name was passed to alias Filter"); - } - return parse(alias, context); - } else { - // we need to bench here a bit, to see maybe it makes sense to use OrFilter - BooleanQuery.Builder combined = new BooleanQuery.Builder(); - for (String aliasName : aliasNames) { - AliasMetaData alias = aliases.get(aliasName); - if (alias == null) { - // This shouldn't happen unless alias disappeared after filteringAliases was called. - throw new InvalidAliasNameException(indexSettings.getIndex(), aliasNames[0], - "Unknown alias name was passed to alias Filter"); - } - Query parsedFilter = parse(alias, context); - if (parsedFilter != null) { - combined.add(parsedFilter, BooleanClause.Occur.SHOULD); - } else { - // The filter might be null only if filter was removed after filteringAliases was called - return null; - } - } - return combined.build(); - } - } - - private Query parse(AliasMetaData alias, QueryShardContext shardContext) { - if (alias.filter() == null) { - return null; - } - try { - byte[] filterSource = alias.filter().uncompressed(); - try (XContentParser parser = XContentFactory.xContent(filterSource).createParser(filterSource)) { - Optional innerQueryBuilder = shardContext.newParseContext(parser).parseInnerQueryBuilder(); - if (innerQueryBuilder.isPresent()) { - return shardContext.toFilter(innerQueryBuilder.get()).query(); - } - return null; - } - } catch (IOException ex) { - throw new AliasFilterParsingException(shardContext.index(), alias.getAlias(), "Invalid alias filter", ex); - } - } - public IndexMetaData getMetaData() { return indexSettings.getIndexMetaData(); } @@ -707,7 +665,7 @@ private void maybeFSyncTranslogs() { if (translog.syncNeeded()) { translog.sync(); } - } catch (EngineClosedException | AlreadyClosedException ex) { + } catch (AlreadyClosedException ex) { // fine - continue; } catch (IOException e) { logger.warn("failed to sync translog", e); @@ -731,7 +689,7 @@ private void maybeRefreshEngine() { if (shard.isRefreshNeeded()) { shard.refresh("schedule"); } - } catch (EngineClosedException | AlreadyClosedException ex) { + } catch (IndexShardClosedException | AlreadyClosedException ex) { // fine - continue; } continue; diff --git a/core/src/main/java/org/elasticsearch/index/IndexSettings.java b/core/src/main/java/org/elasticsearch/index/IndexSettings.java index 5666fb416f0c5..ea484dff3ae65 100644 --- a/core/src/main/java/org/elasticsearch/index/IndexSettings.java +++ b/core/src/main/java/org/elasticsearch/index/IndexSettings.java @@ -22,7 +22,6 @@ import org.apache.lucene.index.MergePolicy; import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.settings.IndexScopedSettings; @@ -33,12 +32,12 @@ import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.index.mapper.AllFieldMapper; +import 
org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.translog.Translog; import org.elasticsearch.node.Node; import java.util.Locale; import java.util.concurrent.TimeUnit; -import java.util.function.Consumer; import java.util.function.Function; import java.util.function.Predicate; @@ -99,6 +98,13 @@ public final class IndexSettings { */ public static final Setting MAX_RESCORE_WINDOW_SETTING = Setting.intSetting("index.max_rescore_window", MAX_RESULT_WINDOW_SETTING, 1, Property.Dynamic, Property.IndexScope); + /** + * Index setting describing the maximum number of filters clauses that can be used + * in an adjacency_matrix aggregation. The max number of buckets produced by + * N filters is (N*N)/2 so a limit of 100 filters is imposed by default. + */ + public static final Setting MAX_ADJACENCY_MATRIX_FILTERS_SETTING = + Setting.intSetting("index.max_adjacency_matrix_filters", 100, 2, Property.Dynamic, Property.IndexScope); public static final TimeValue DEFAULT_REFRESH_INTERVAL = new TimeValue(1, TimeUnit.SECONDS); public static final Setting INDEX_REFRESH_INTERVAL_SETTING = Setting.timeSetting("index.refresh_interval", DEFAULT_REFRESH_INTERVAL, new TimeValue(-1, TimeUnit.MILLISECONDS), @@ -135,7 +141,6 @@ public final class IndexSettings { private final Settings nodeSettings; private final int numberOfShards; private final boolean isShadowReplicaIndex; - private final ParseFieldMatcher parseFieldMatcher; // volatile fields are updated via #updateIndexMetaData(IndexMetaData) under lock private volatile Settings settings; private volatile IndexMetaData indexMetaData; @@ -155,6 +160,7 @@ public final class IndexSettings { private long gcDeletesInMillis = DEFAULT_GC_DELETES.millis(); private volatile boolean warmerEnabled; private volatile int maxResultWindow; + private volatile int maxAdjacencyMatrixFilters; private volatile int maxRescoreWindow; private volatile boolean TTLPurgeDisabled; /** @@ -165,7 +171,10 @@ public final class IndexSettings { * The maximum number of slices allowed in a scroll request. */ private volatile int maxSlicesPerScroll; - + /** + * Whether the index is required to have at most one type. + */ + private final boolean singleType; /** * Returns the default search field for this index. 
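The new index.max_adjacency_matrix_filters setting above caps the filter count because, as its documentation notes, N filters produce on the order of (N*N)/2 buckets. A quick illustration of why a limit of 100 is the default:

```java
public class AdjacencyMatrixLimitExample {
    // Rough bucket count for an adjacency_matrix aggregation with N filters,
    // using the (N * N) / 2 estimate from the setting's documentation above.
    static long approxBuckets(int filters) {
        return ((long) filters * filters) / 2;
    }

    public static void main(String[] args) {
        System.out.println(approxBuckets(10));   // 50
        System.out.println(approxBuckets(100));  // 5000, roughly what the default limit allows
        System.out.println(approxBuckets(1000)); // 500000, why an unbounded filter list is dangerous
    }
}
```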
@@ -237,7 +246,6 @@ public IndexSettings(final IndexMetaData indexMetaData, final Settings nodeSetti this.queryStringLenient = QUERY_STRING_LENIENT_SETTING.get(settings); this.queryStringAnalyzeWildcard = QUERY_STRING_ANALYZE_WILDCARD.get(nodeSettings); this.queryStringAllowLeadingWildcard = QUERY_STRING_ALLOW_LEADING_WILDCARD.get(nodeSettings); - this.parseFieldMatcher = new ParseFieldMatcher(settings); this.defaultAllowUnmappedFields = scopedSettings.get(ALLOW_UNMAPPED); this.indexNameMatcher = indexNameMatcher; this.durability = scopedSettings.get(INDEX_TRANSLOG_DURABILITY_SETTING); @@ -248,12 +256,13 @@ public IndexSettings(final IndexMetaData indexMetaData, final Settings nodeSetti gcDeletesInMillis = scopedSettings.get(INDEX_GC_DELETES_SETTING).getMillis(); warmerEnabled = scopedSettings.get(INDEX_WARMER_ENABLED_SETTING); maxResultWindow = scopedSettings.get(MAX_RESULT_WINDOW_SETTING); + maxAdjacencyMatrixFilters = scopedSettings.get(MAX_ADJACENCY_MATRIX_FILTERS_SETTING); maxRescoreWindow = scopedSettings.get(MAX_RESCORE_WINDOW_SETTING); TTLPurgeDisabled = scopedSettings.get(INDEX_TTL_DISABLE_PURGE_SETTING); maxRefreshListeners = scopedSettings.get(MAX_REFRESH_LISTENERS_PER_SHARD); maxSlicesPerScroll = scopedSettings.get(MAX_SLICES_PER_SCROLL); this.mergePolicyConfig = new MergePolicyConfig(logger, this); - assert indexNameMatcher.test(indexMetaData.getIndex().getName()); + singleType = scopedSettings.get(MapperService.INDEX_MAPPING_SINGLE_TYPE_SETTING); scopedSettings.addSettingsUpdateConsumer(MergePolicyConfig.INDEX_COMPOUND_FORMAT_SETTING, mergePolicyConfig::setNoCFSRatio); scopedSettings.addSettingsUpdateConsumer(MergePolicyConfig.INDEX_MERGE_POLICY_EXPUNGE_DELETES_ALLOWED_SETTING, mergePolicyConfig::setExpungeDeletesAllowed); @@ -263,12 +272,14 @@ public IndexSettings(final IndexMetaData indexMetaData, final Settings nodeSetti scopedSettings.addSettingsUpdateConsumer(MergePolicyConfig.INDEX_MERGE_POLICY_MAX_MERGED_SEGMENT_SETTING, mergePolicyConfig::setMaxMergedSegment); scopedSettings.addSettingsUpdateConsumer(MergePolicyConfig.INDEX_MERGE_POLICY_SEGMENTS_PER_TIER_SETTING, mergePolicyConfig::setSegmentsPerTier); scopedSettings.addSettingsUpdateConsumer(MergePolicyConfig.INDEX_MERGE_POLICY_RECLAIM_DELETES_WEIGHT_SETTING, mergePolicyConfig::setReclaimDeletesWeight); - scopedSettings.addSettingsUpdateConsumer(MergeSchedulerConfig.MAX_THREAD_COUNT_SETTING, mergeSchedulerConfig::setMaxThreadCount); - scopedSettings.addSettingsUpdateConsumer(MergeSchedulerConfig.MAX_MERGE_COUNT_SETTING, mergeSchedulerConfig::setMaxMergeCount); + + scopedSettings.addSettingsUpdateConsumer(MergeSchedulerConfig.MAX_THREAD_COUNT_SETTING, MergeSchedulerConfig.MAX_MERGE_COUNT_SETTING, + mergeSchedulerConfig::setMaxThreadAndMergeCount); scopedSettings.addSettingsUpdateConsumer(MergeSchedulerConfig.AUTO_THROTTLE_SETTING, mergeSchedulerConfig::setAutoThrottle); scopedSettings.addSettingsUpdateConsumer(INDEX_TRANSLOG_DURABILITY_SETTING, this::setTranslogDurability); scopedSettings.addSettingsUpdateConsumer(INDEX_TTL_DISABLE_PURGE_SETTING, this::setTTLPurgeDisabled); scopedSettings.addSettingsUpdateConsumer(MAX_RESULT_WINDOW_SETTING, this::setMaxResultWindow); + scopedSettings.addSettingsUpdateConsumer(MAX_ADJACENCY_MATRIX_FILTERS_SETTING, this::setMaxAdjacencyMatrixFilters); scopedSettings.addSettingsUpdateConsumer(MAX_RESCORE_WINDOW_SETTING, this::setMaxRescoreWindow); scopedSettings.addSettingsUpdateConsumer(INDEX_WARMER_ENABLED_SETTING, this::setEnableWarmer); 
scopedSettings.addSettingsUpdateConsumer(INDEX_GC_DELETES_SETTING, this::setGCDeletes); @@ -334,15 +345,6 @@ public boolean isOnSharedFilesystem() { return IndexMetaData.isOnSharedFilesystem(getSettings()); } - /** - * Returns true iff the given settings indicate that the index associated - * with these settings uses shadow replicas. Otherwise false. The default - * setting for this is false. - */ - public boolean isIndexUsingShadowReplicas() { - return IndexMetaData.isOnSharedFilesystem(getSettings()); - } - /** * Returns the version the index was created on. * @see Version#indexCreated(Settings) @@ -381,6 +383,11 @@ public IndexMetaData getIndexMetaData() { */ public boolean isShadowReplicaIndex() { return isShadowReplicaIndex; } + /** + * Returns whether the index enforces at most one type. + */ + public boolean isSingleType() { return singleType; } + /** * Returns the node settings. The settings returned from {@link #getSettings()} are a merged version of the * index settings and the node settings where node settings are overwritten by index settings. @@ -389,11 +396,6 @@ public Settings getNodeSettings() { return nodeSettings; } - /** - * Returns a {@link ParseFieldMatcher} for this index. - */ - public ParseFieldMatcher getParseFieldMatcher() { return parseFieldMatcher; } - /** * Returns true if the given expression matches the index name or one of it's aliases */ @@ -483,6 +485,17 @@ public int getMaxResultWindow() { private void setMaxResultWindow(int maxResultWindow) { this.maxResultWindow = maxResultWindow; } + + /** + * Returns the max number of filters in adjacency_matrix aggregation search requests + */ + public int getMaxAdjacencyMatrixFilters() { + return this.maxAdjacencyMatrixFilters; + } + + private void setMaxAdjacencyMatrixFilters(int maxAdjacencyFilters) { + this.maxAdjacencyMatrixFilters = maxAdjacencyFilters; + } /** * Returns the maximum rescore window for search requests. 
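The dynamic values touched in IndexSettings above all follow the same shape: a volatile field, a private setter, and a consumer registered with the scoped settings so updates take effect without recreating the object. A compact sketch of that pattern, with hypothetical names standing in for IndexScopedSettings and a real setting:

```java
import java.util.function.IntConsumer;

public class DynamicSettingExample {
    // Stand-in for IndexScopedSettings; the real class validates the value before calling the consumer.
    static class ScopedSettings {
        void addSettingsUpdateConsumer(String key, IntConsumer consumer) { /* register the consumer */ }
    }

    // volatile so readers on other threads see updates applied by the settings consumer
    private volatile int maxExampleWindow;

    DynamicSettingExample(ScopedSettings scopedSettings, int initialValue) {
        this.maxExampleWindow = initialValue;
        scopedSettings.addSettingsUpdateConsumer("index.max_example_window", this::setMaxExampleWindow);
    }

    public int getMaxExampleWindow() {
        return maxExampleWindow;
    }

    private void setMaxExampleWindow(int maxExampleWindow) {
        this.maxExampleWindow = maxExampleWindow;
    }

    public static void main(String[] args) {
        DynamicSettingExample example = new DynamicSettingExample(new ScopedSettings(), 10_000);
        System.out.println(example.getMaxExampleWindow()); // 10000
    }
}
```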
diff --git a/core/src/main/java/org/elasticsearch/index/IndexShardAlreadyExistsException.java b/core/src/main/java/org/elasticsearch/index/IndexShardAlreadyExistsException.java index 564988d05946f..d83b56f8b32a9 100644 --- a/core/src/main/java/org/elasticsearch/index/IndexShardAlreadyExistsException.java +++ b/core/src/main/java/org/elasticsearch/index/IndexShardAlreadyExistsException.java @@ -26,7 +26,9 @@ /** * + * @deprecated use {@link IllegalStateException} instead */ +@Deprecated public class IndexShardAlreadyExistsException extends ElasticsearchException { public IndexShardAlreadyExistsException(String message) { @@ -36,4 +38,4 @@ public IndexShardAlreadyExistsException(String message) { public IndexShardAlreadyExistsException(StreamInput in) throws IOException { super(in); } -} \ No newline at end of file +} diff --git a/core/src/main/java/org/elasticsearch/index/IndexWarmer.java b/core/src/main/java/org/elasticsearch/index/IndexWarmer.java index 439acb239a359..a2e57e83ed5af 100644 --- a/core/src/main/java/org/elasticsearch/index/IndexWarmer.java +++ b/core/src/main/java/org/elasticsearch/index/IndexWarmer.java @@ -56,9 +56,8 @@ public final class IndexWarmer extends AbstractComponent { ArrayList list = new ArrayList<>(); final Executor executor = threadPool.executor(ThreadPool.Names.WARMER); list.add(new FieldDataWarmer(executor)); - for (Listener listener : listeners) { - list.add(listener); - } + + Collections.addAll(list, listeners); this.listeners = Collections.unmodifiableList(list); } @@ -113,7 +112,7 @@ public interface Listener { private static class FieldDataWarmer implements IndexWarmer.Listener { private final Executor executor; - public FieldDataWarmer(Executor executor) { + FieldDataWarmer(Executor executor) { this.executor = executor; } diff --git a/core/src/main/java/org/elasticsearch/index/IndexingSlowLog.java b/core/src/main/java/org/elasticsearch/index/IndexingSlowLog.java index 513e87878d6b8..f7f7334f6b946 100644 --- a/core/src/main/java/org/elasticsearch/index/IndexingSlowLog.java +++ b/core/src/main/java/org/elasticsearch/index/IndexingSlowLog.java @@ -22,6 +22,7 @@ import org.apache.logging.log4j.Logger; import org.elasticsearch.common.Booleans; import org.elasticsearch.common.Strings; +import org.elasticsearch.common.logging.DeprecationLogger; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; @@ -30,11 +31,14 @@ import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.mapper.ParsedDocument; import org.elasticsearch.index.shard.IndexingOperationListener; +import org.elasticsearch.index.shard.ShardId; import java.io.IOException; import java.util.concurrent.TimeUnit; public final class IndexingSlowLog implements IndexingOperationListener { + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(IndexingSlowLog.class)); + private final Index index; private boolean reformat; private long indexWarnThreshold; @@ -76,13 +80,19 @@ public final class IndexingSlowLog implements IndexingOperationListener { * and everything else is interpreted as Elasticsearch interprets booleans * which is then converted to 0 for false and Integer.MAX_VALUE for true. 
*/ - public static final Setting INDEX_INDEXING_SLOWLOG_MAX_SOURCE_CHARS_TO_LOG_SETTING = new Setting<>(INDEX_INDEXING_SLOWLOG_PREFIX + ".source", "1000", (value) -> { - try { - return Integer.parseInt(value, 10); - } catch (NumberFormatException e) { - return Booleans.parseBoolean(value, true) ? Integer.MAX_VALUE : 0; - } - }, Property.Dynamic, Property.IndexScope); + public static final Setting INDEX_INDEXING_SLOWLOG_MAX_SOURCE_CHARS_TO_LOG_SETTING = + new Setting<>(INDEX_INDEXING_SLOWLOG_PREFIX + ".source", "1000", (value) -> { + try { + return Integer.parseInt(value, 10); + } catch (NumberFormatException e) { + boolean booleanValue = Booleans.parseBoolean(value, true); + if (value != null && Booleans.isStrictlyBoolean(value) == false) { + DEPRECATION_LOGGER.deprecated("Expected a boolean [true/false] or a number for setting [{}] but got [{}]", + INDEX_INDEXING_SLOWLOG_PREFIX + ".source", value); + } + return booleanValue ? Integer.MAX_VALUE : 0; + } + }, Property.Dynamic, Property.IndexScope); IndexingSlowLog(IndexSettings indexSettings) { this.indexLogger = Loggers.getLogger(INDEX_INDEXING_SLOWLOG_PREFIX + ".index", indexSettings.getSettings()); @@ -90,17 +100,22 @@ public final class IndexingSlowLog implements IndexingOperationListener { indexSettings.getScopedSettings().addSettingsUpdateConsumer(INDEX_INDEXING_SLOWLOG_REFORMAT_SETTING, this::setReformat); this.reformat = indexSettings.getValue(INDEX_INDEXING_SLOWLOG_REFORMAT_SETTING); - indexSettings.getScopedSettings().addSettingsUpdateConsumer(INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_WARN_SETTING, this::setWarnThreshold); + indexSettings.getScopedSettings() + .addSettingsUpdateConsumer(INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_WARN_SETTING, this::setWarnThreshold); this.indexWarnThreshold = indexSettings.getValue(INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_WARN_SETTING).nanos(); - indexSettings.getScopedSettings().addSettingsUpdateConsumer(INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_INFO_SETTING, this::setInfoThreshold); + indexSettings.getScopedSettings() + .addSettingsUpdateConsumer(INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_INFO_SETTING, this::setInfoThreshold); this.indexInfoThreshold = indexSettings.getValue(INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_INFO_SETTING).nanos(); - indexSettings.getScopedSettings().addSettingsUpdateConsumer(INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_DEBUG_SETTING, this::setDebugThreshold); + indexSettings.getScopedSettings() + .addSettingsUpdateConsumer(INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_DEBUG_SETTING, this::setDebugThreshold); this.indexDebugThreshold = indexSettings.getValue(INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_DEBUG_SETTING).nanos(); - indexSettings.getScopedSettings().addSettingsUpdateConsumer(INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_TRACE_SETTING, this::setTraceThreshold); + indexSettings.getScopedSettings() + .addSettingsUpdateConsumer(INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_TRACE_SETTING, this::setTraceThreshold); this.indexTraceThreshold = indexSettings.getValue(INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_TRACE_SETTING).nanos(); indexSettings.getScopedSettings().addSettingsUpdateConsumer(INDEX_INDEXING_SLOWLOG_LEVEL_SETTING, this::setLevel); setLevel(indexSettings.getValue(INDEX_INDEXING_SLOWLOG_LEVEL_SETTING)); - indexSettings.getScopedSettings().addSettingsUpdateConsumer(INDEX_INDEXING_SLOWLOG_MAX_SOURCE_CHARS_TO_LOG_SETTING, this::setMaxSourceCharsToLog); + indexSettings.getScopedSettings().addSettingsUpdateConsumer(INDEX_INDEXING_SLOWLOG_MAX_SOURCE_CHARS_TO_LOG_SETTING, + this::setMaxSourceCharsToLog); 
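The IndexingSlowLog hunk above keeps the lenient parsing of `index.indexing.slowlog.source` (a number of characters, or a boolean meaning log everything / log nothing) but now emits a deprecation warning when the value is neither a number nor a strict `true`/`false`. A rough standalone sketch of that rule follows; it is deliberately simplified, special-casing only the literal strings, whereas the real code delegates to Elasticsearch's `Booleans` helpers and `DeprecationLogger`:

```java
// Illustrative sketch only -- a simplified stand-in for the Setting parser in
// the hunk above. Numbers are taken literally; anything else falls back to a
// boolean, with a warning when the value is not a strict true/false.
final class SlowlogSourceSetting {
    static int parseMaxSourceCharsToLog(String value) {
        try {
            return Integer.parseInt(value, 10);                 // e.g. "1000" -> 1000
        } catch (NumberFormatException e) {
            boolean strict = "true".equals(value) || "false".equals(value);
            if (!strict) {
                // the real code routes this through a DeprecationLogger
                System.err.println("deprecation: expected a boolean [true/false] or a number"
                        + " for [index.indexing.slowlog.source] but got [" + value + "]");
            }
            boolean logWholeSource = !"false".equals(value);    // lenient fallback (simplified)
            return logWholeSource ? Integer.MAX_VALUE : 0;      // true -> everything, false -> nothing
        }
    }

    public static void main(String[] args) {
        System.out.println(parseMaxSourceCharsToLog("2000"));   // 2000
        System.out.println(parseMaxSourceCharsToLog("false"));  // 0
        System.out.println(parseMaxSourceCharsToLog("yes"));    // warns, then 2147483647
    }
}
```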
this.maxSourceCharsToLog = indexSettings.getValue(INDEX_INDEXING_SLOWLOG_MAX_SOURCE_CHARS_TO_LOG_SETTING); } @@ -133,23 +148,20 @@ private void setReformat(boolean reformat) { this.reformat = reformat; } - @Override - public void postIndex(Engine.Index index, boolean created) { - final long took = index.endTime() - index.startTime(); - postIndexing(index.parsedDoc(), took); - } - - - private void postIndexing(ParsedDocument doc, long tookInNanos) { - if (indexWarnThreshold >= 0 && tookInNanos > indexWarnThreshold) { - indexLogger.warn("{}", new SlowLogParsedDocumentPrinter(index, doc, tookInNanos, reformat, maxSourceCharsToLog)); - } else if (indexInfoThreshold >= 0 && tookInNanos > indexInfoThreshold) { - indexLogger.info("{}", new SlowLogParsedDocumentPrinter(index, doc, tookInNanos, reformat, maxSourceCharsToLog)); - } else if (indexDebugThreshold >= 0 && tookInNanos > indexDebugThreshold) { - indexLogger.debug("{}", new SlowLogParsedDocumentPrinter(index, doc, tookInNanos, reformat, maxSourceCharsToLog)); - } else if (indexTraceThreshold >= 0 && tookInNanos > indexTraceThreshold) { - indexLogger.trace("{}", new SlowLogParsedDocumentPrinter(index, doc, tookInNanos, reformat, maxSourceCharsToLog)); + public void postIndex(ShardId shardId, Engine.Index indexOperation, Engine.IndexResult result) { + if (result.hasFailure() == false) { + final ParsedDocument doc = indexOperation.parsedDoc(); + final long tookInNanos = result.getTook(); + if (indexWarnThreshold >= 0 && tookInNanos > indexWarnThreshold) { + indexLogger.warn("{}", new SlowLogParsedDocumentPrinter(index, doc, tookInNanos, reformat, maxSourceCharsToLog)); + } else if (indexInfoThreshold >= 0 && tookInNanos > indexInfoThreshold) { + indexLogger.info("{}", new SlowLogParsedDocumentPrinter(index, doc, tookInNanos, reformat, maxSourceCharsToLog)); + } else if (indexDebugThreshold >= 0 && tookInNanos > indexDebugThreshold) { + indexLogger.debug("{}", new SlowLogParsedDocumentPrinter(index, doc, tookInNanos, reformat, maxSourceCharsToLog)); + } else if (indexTraceThreshold >= 0 && tookInNanos > indexTraceThreshold) { + indexLogger.trace("{}", new SlowLogParsedDocumentPrinter(index, doc, tookInNanos, reformat, maxSourceCharsToLog)); + } } } @@ -172,7 +184,8 @@ static final class SlowLogParsedDocumentPrinter { public String toString() { StringBuilder sb = new StringBuilder(); sb.append(index).append(" "); - sb.append("took[").append(TimeValue.timeValueNanos(tookInNanos)).append("], took_millis[").append(TimeUnit.NANOSECONDS.toMillis(tookInNanos)).append("], "); + sb.append("took[").append(TimeValue.timeValueNanos(tookInNanos)).append("], "); + sb.append("took_millis[").append(TimeUnit.NANOSECONDS.toMillis(tookInNanos)).append("], "); sb.append("type[").append(doc.type()).append("], "); sb.append("id[").append(doc.id()).append("], "); if (doc.routing() == null) { @@ -185,7 +198,7 @@ public String toString() { return sb.toString(); } try { - String source = XContentHelper.convertToJson(doc.source(), reformat); + String source = XContentHelper.convertToJson(doc.source(), reformat, doc.getXContentType()); sb.append(", source[").append(Strings.cleanTruncate(source, maxSourceCharsToLog)).append("]"); } catch (IOException e) { sb.append(", source[_failed_to_convert_]"); diff --git a/core/src/main/java/org/elasticsearch/index/MergePolicyConfig.java b/core/src/main/java/org/elasticsearch/index/MergePolicyConfig.java index 52b98e2bd01fa..bbc2f9d099ee4 100644 --- a/core/src/main/java/org/elasticsearch/index/MergePolicyConfig.java +++ 
b/core/src/main/java/org/elasticsearch/index/MergePolicyConfig.java @@ -171,10 +171,10 @@ public final class MergePolicyConfig { maxMergeAtOnce = adjustMaxMergeAtOnceIfNeeded(maxMergeAtOnce, segmentsPerTier); mergePolicy.setNoCFSRatio(indexSettings.getValue(INDEX_COMPOUND_FORMAT_SETTING)); mergePolicy.setForceMergeDeletesPctAllowed(forceMergeDeletesPctAllowed); - mergePolicy.setFloorSegmentMB(floorSegment.mbFrac()); + mergePolicy.setFloorSegmentMB(floorSegment.getMbFrac()); mergePolicy.setMaxMergeAtOnce(maxMergeAtOnce); mergePolicy.setMaxMergeAtOnceExplicit(maxMergeAtOnceExplicit); - mergePolicy.setMaxMergedSegmentMB(maxMergedSegment.mbFrac()); + mergePolicy.setMaxMergedSegmentMB(maxMergedSegment.getMbFrac()); mergePolicy.setSegmentsPerTier(segmentsPerTier); mergePolicy.setReclaimDeletesWeight(reclaimDeletesWeight); if (logger.isTraceEnabled()) { @@ -192,7 +192,7 @@ void setSegmentsPerTier(Double segmentsPerTier) { } void setMaxMergedSegment(ByteSizeValue maxMergedSegment) { - mergePolicy.setMaxMergedSegmentMB(maxMergedSegment.mbFrac()); + mergePolicy.setMaxMergedSegmentMB(maxMergedSegment.getMbFrac()); } void setMaxMergesAtOnceExplicit(Integer maxMergeAtOnceExplicit) { @@ -204,7 +204,7 @@ void setMaxMergesAtOnce(Integer maxMergeAtOnce) { } void setFloorSegmentSetting(ByteSizeValue floorSegementSetting) { - mergePolicy.setFloorSegmentMB(floorSegementSetting.mbFrac()); + mergePolicy.setFloorSegmentMB(floorSegementSetting.getMbFrac()); } void setExpungeDeletesAllowed(Double value) { @@ -247,7 +247,7 @@ private static double parseNoCFSRatio(String noCFSRatio) { } return value; } catch (NumberFormatException ex) { - throw new IllegalArgumentException("Expected a boolean or a value in the interval [0..1] but was: [" + noCFSRatio + "]", ex); + throw new IllegalArgumentException("Expected a boolean [true/false] or a value in the interval [0..1] but was: [" + noCFSRatio + "]", ex); } } } diff --git a/core/src/main/java/org/elasticsearch/index/MergeSchedulerConfig.java b/core/src/main/java/org/elasticsearch/index/MergeSchedulerConfig.java index 2eb43a50ee47b..0487a802fd932 100644 --- a/core/src/main/java/org/elasticsearch/index/MergeSchedulerConfig.java +++ b/core/src/main/java/org/elasticsearch/index/MergeSchedulerConfig.java @@ -69,13 +69,15 @@ public final class MergeSchedulerConfig { private volatile int maxMergeCount; MergeSchedulerConfig(IndexSettings indexSettings) { - maxThreadCount = indexSettings.getValue(MAX_THREAD_COUNT_SETTING); - maxMergeCount = indexSettings.getValue(MAX_MERGE_COUNT_SETTING); + int maxThread = indexSettings.getValue(MAX_THREAD_COUNT_SETTING); + int maxMerge = indexSettings.getValue(MAX_MERGE_COUNT_SETTING); + setMaxThreadAndMergeCount(maxThread, maxMerge); this.autoThrottle = indexSettings.getValue(AUTO_THROTTLE_SETTING); } /** * Returns true iff auto throttle is enabled. + * * @see ConcurrentMergeScheduler#enableAutoIOThrottle() */ public boolean isAutoThrottle() { @@ -100,8 +102,19 @@ public int getMaxThreadCount() { * Expert: directly set the maximum number of merge threads and * simultaneous merges allowed. 
*/ - void setMaxThreadCount(int maxThreadCount) { + void setMaxThreadAndMergeCount(int maxThreadCount, int maxMergeCount) { + if (maxThreadCount < 1) { + throw new IllegalArgumentException("maxThreadCount should be at least 1"); + } + if (maxMergeCount < 1) { + throw new IllegalArgumentException("maxMergeCount should be at least 1"); + } + if (maxThreadCount > maxMergeCount) { + throw new IllegalArgumentException("maxThreadCount (= " + maxThreadCount + + ") should be <= maxMergeCount (= " + maxMergeCount + ")"); + } this.maxThreadCount = maxThreadCount; + this.maxMergeCount = maxMergeCount; } /** @@ -110,12 +123,4 @@ void setMaxThreadCount(int maxThreadCount) { public int getMaxMergeCount() { return maxMergeCount; } - - /** - * - * Expert: set the maximum number of simultaneous merges allowed. - */ - void setMaxMergeCount(int maxMergeCount) { - this.maxMergeCount = maxMergeCount; - } } diff --git a/core/src/main/java/org/elasticsearch/index/NodeServicesProvider.java b/core/src/main/java/org/elasticsearch/index/NodeServicesProvider.java deleted file mode 100644 index 866c938c0f5c1..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/NodeServicesProvider.java +++ /dev/null @@ -1,84 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.index; - -import org.elasticsearch.client.Client; -import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.util.BigArrays; -import org.elasticsearch.indices.breaker.CircuitBreakerService; -import org.elasticsearch.indices.query.IndicesQueriesRegistry; -import org.elasticsearch.script.ScriptService; -import org.elasticsearch.threadpool.ThreadPool; - -/** - * Simple provider class that holds the Index and Node level services used by - * a shard. - * This is just a temporary solution until we cleaned up index creation and removed injectors on that level as well. 
- */ -public final class NodeServicesProvider { - - private final ThreadPool threadPool; - private final BigArrays bigArrays; - private final Client client; - private final IndicesQueriesRegistry indicesQueriesRegistry; - private final ScriptService scriptService; - private final CircuitBreakerService circuitBreakerService; - private final ClusterService clusterService; - - @Inject - public NodeServicesProvider(ThreadPool threadPool, BigArrays bigArrays, Client client, ScriptService scriptService, - IndicesQueriesRegistry indicesQueriesRegistry, CircuitBreakerService circuitBreakerService, - ClusterService clusterService) { - this.threadPool = threadPool; - this.bigArrays = bigArrays; - this.client = client; - this.indicesQueriesRegistry = indicesQueriesRegistry; - this.scriptService = scriptService; - this.circuitBreakerService = circuitBreakerService; - this.clusterService = clusterService; - } - - public ThreadPool getThreadPool() { - return threadPool; - } - - public BigArrays getBigArrays() { return bigArrays; } - - public Client getClient() { - return client; - } - - public IndicesQueriesRegistry getIndicesQueriesRegistry() { - return indicesQueriesRegistry; - } - - public ScriptService getScriptService() { - return scriptService; - } - - public CircuitBreakerService getCircuitBreakerService() { - return circuitBreakerService; - } - - public ClusterService getClusterService() { - return clusterService; - } -} diff --git a/core/src/main/java/org/elasticsearch/index/SearchSlowLog.java b/core/src/main/java/org/elasticsearch/index/SearchSlowLog.java index 19086416b805e..a48e3d7bd72c5 100644 --- a/core/src/main/java/org/elasticsearch/index/SearchSlowLog.java +++ b/core/src/main/java/org/elasticsearch/index/SearchSlowLog.java @@ -25,14 +25,14 @@ import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.index.shard.SearchOperationListener; import org.elasticsearch.search.internal.SearchContext; +import java.util.Collections; import java.util.concurrent.TimeUnit; public final class SearchSlowLog implements SearchOperationListener { - private boolean reformat; - private long queryWarnThreshold; private long queryInfoThreshold; private long queryDebugThreshold; @@ -73,20 +73,17 @@ public final class SearchSlowLog implements SearchOperationListener { public static final Setting INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_TRACE_SETTING = Setting.timeSetting(INDEX_SEARCH_SLOWLOG_PREFIX + ".threshold.fetch.trace", TimeValue.timeValueNanos(-1), TimeValue.timeValueMillis(-1), Property.Dynamic, Property.IndexScope); - public static final Setting INDEX_SEARCH_SLOWLOG_REFORMAT = - Setting.boolSetting(INDEX_SEARCH_SLOWLOG_PREFIX + ".reformat", true, Property.Dynamic, Property.IndexScope); public static final Setting INDEX_SEARCH_SLOWLOG_LEVEL = new Setting<>(INDEX_SEARCH_SLOWLOG_PREFIX + ".level", SlowLogLevel.TRACE.name(), SlowLogLevel::parse, Property.Dynamic, Property.IndexScope); + private static final ToXContent.Params FORMAT_PARAMS = new ToXContent.MapParams(Collections.singletonMap("pretty", "false")); + public SearchSlowLog(IndexSettings indexSettings) { this.queryLogger = Loggers.getLogger(INDEX_SEARCH_SLOWLOG_PREFIX + ".query", indexSettings.getSettings()); this.fetchLogger = Loggers.getLogger(INDEX_SEARCH_SLOWLOG_PREFIX + ".fetch", indexSettings.getSettings()); - 
indexSettings.getScopedSettings().addSettingsUpdateConsumer(INDEX_SEARCH_SLOWLOG_REFORMAT, this::setReformat); - this.reformat = indexSettings.getValue(INDEX_SEARCH_SLOWLOG_REFORMAT); - indexSettings.getScopedSettings().addSettingsUpdateConsumer(INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_WARN_SETTING, this::setQueryWarnThreshold); this.queryWarnThreshold = indexSettings.getValue(INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_WARN_SETTING).nanos(); indexSettings.getScopedSettings().addSettingsUpdateConsumer(INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_INFO_SETTING, this::setQueryInfoThreshold); @@ -117,38 +114,36 @@ private void setLevel(SlowLogLevel level) { @Override public void onQueryPhase(SearchContext context, long tookInNanos) { if (queryWarnThreshold >= 0 && tookInNanos > queryWarnThreshold) { - queryLogger.warn("{}", new SlowLogSearchContextPrinter(context, tookInNanos, reformat)); + queryLogger.warn("{}", new SlowLogSearchContextPrinter(context, tookInNanos)); } else if (queryInfoThreshold >= 0 && tookInNanos > queryInfoThreshold) { - queryLogger.info("{}", new SlowLogSearchContextPrinter(context, tookInNanos, reformat)); + queryLogger.info("{}", new SlowLogSearchContextPrinter(context, tookInNanos)); } else if (queryDebugThreshold >= 0 && tookInNanos > queryDebugThreshold) { - queryLogger.debug("{}", new SlowLogSearchContextPrinter(context, tookInNanos, reformat)); + queryLogger.debug("{}", new SlowLogSearchContextPrinter(context, tookInNanos)); } else if (queryTraceThreshold >= 0 && tookInNanos > queryTraceThreshold) { - queryLogger.trace("{}", new SlowLogSearchContextPrinter(context, tookInNanos, reformat)); + queryLogger.trace("{}", new SlowLogSearchContextPrinter(context, tookInNanos)); } } @Override public void onFetchPhase(SearchContext context, long tookInNanos) { if (fetchWarnThreshold >= 0 && tookInNanos > fetchWarnThreshold) { - fetchLogger.warn("{}", new SlowLogSearchContextPrinter(context, tookInNanos, reformat)); + fetchLogger.warn("{}", new SlowLogSearchContextPrinter(context, tookInNanos)); } else if (fetchInfoThreshold >= 0 && tookInNanos > fetchInfoThreshold) { - fetchLogger.info("{}", new SlowLogSearchContextPrinter(context, tookInNanos, reformat)); + fetchLogger.info("{}", new SlowLogSearchContextPrinter(context, tookInNanos)); } else if (fetchDebugThreshold >= 0 && tookInNanos > fetchDebugThreshold) { - fetchLogger.debug("{}", new SlowLogSearchContextPrinter(context, tookInNanos, reformat)); + fetchLogger.debug("{}", new SlowLogSearchContextPrinter(context, tookInNanos)); } else if (fetchTraceThreshold >= 0 && tookInNanos > fetchTraceThreshold) { - fetchLogger.trace("{}", new SlowLogSearchContextPrinter(context, tookInNanos, reformat)); + fetchLogger.trace("{}", new SlowLogSearchContextPrinter(context, tookInNanos)); } } static final class SlowLogSearchContextPrinter { private final SearchContext context; private final long tookInNanos; - private final boolean reformat; - public SlowLogSearchContextPrinter(SearchContext context, long tookInNanos, boolean reformat) { + SlowLogSearchContextPrinter(SearchContext context, long tookInNanos) { this.context = context; this.tookInNanos = tookInNanos; - this.reformat = reformat; } @Override @@ -172,7 +167,7 @@ public String toString() { } sb.append("search_type[").append(context.searchType()).append("], total_shards[").append(context.numberOfShards()).append("], "); if (context.request().source() != null) { - sb.append("source[").append(context.request().source()).append("], "); + 
sb.append("source[").append(context.request().source().toString(FORMAT_PARAMS)).append("], "); } else { sb.append("source[], "); } @@ -180,10 +175,6 @@ public String toString() { } } - private void setReformat(boolean reformat) { - this.reformat = reformat; - } - private void setQueryWarnThreshold(TimeValue warnThreshold) { this.queryWarnThreshold = warnThreshold.nanos(); } @@ -216,10 +207,6 @@ private void setFetchTraceThreshold(TimeValue traceThreshold) { this.fetchTraceThreshold = traceThreshold.nanos(); } - boolean isReformat() { - return reformat; - } - long getQueryWarnThreshold() { return queryWarnThreshold; } diff --git a/core/src/main/java/org/elasticsearch/index/VersionType.java b/core/src/main/java/org/elasticsearch/index/VersionType.java index 062fbce10dea2..352056499f61a 100644 --- a/core/src/main/java/org/elasticsearch/index/VersionType.java +++ b/core/src/main/java/org/elasticsearch/index/VersionType.java @@ -198,6 +198,55 @@ public boolean validateVersionForReads(long version) { return version >= 0L || version == Versions.MATCH_ANY; } + }, + /** + * Warning: this version type should be used with care. Concurrent indexing may result in loss of data on replicas + * + * @deprecated this version type will be removed in the next major version + */ + @Deprecated + FORCE((byte) 3) { + @Override + public boolean isVersionConflictForWrites(long currentVersion, long expectedVersion, boolean deleted) { + if (currentVersion == Versions.NOT_FOUND) { + return false; + } + if (expectedVersion == Versions.MATCH_ANY) { + throw new IllegalStateException("you must specify a version when use VersionType.FORCE"); + } + return false; + } + + @Override + public String explainConflictForWrites(long currentVersion, long expectedVersion, boolean deleted) { + throw new AssertionError("VersionType.FORCE should never result in a write conflict"); + } + + @Override + public boolean isVersionConflictForReads(long currentVersion, long expectedVersion) { + return false; + } + + @Override + public String explainConflictForReads(long currentVersion, long expectedVersion) { + throw new AssertionError("VersionType.FORCE should never result in a read conflict"); + } + + @Override + public long updateVersion(long currentVersion, long expectedVersion) { + return expectedVersion; + } + + @Override + public boolean validateVersionForWrites(long version) { + return version >= 0L; + } + + @Override + public boolean validateVersionForReads(long version) { + return version >= 0L || version == Versions.MATCH_ANY; + } + }; private final byte value; @@ -291,6 +340,8 @@ public static VersionType fromString(String versionType) { return EXTERNAL; } else if ("external_gte".equals(versionType)) { return EXTERNAL_GTE; + } else if ("force".equals(versionType)) { + return FORCE; } throw new IllegalArgumentException("No version type match [" + versionType + "]"); } @@ -309,18 +360,18 @@ public static VersionType fromValue(byte value) { return EXTERNAL; } else if (value == 2) { return EXTERNAL_GTE; + } else if (value == 3) { + return FORCE; } throw new IllegalArgumentException("No version type match [" + value + "]"); } public static VersionType readFromStream(StreamInput in) throws IOException { - int ordinal = in.readVInt(); - assert (ordinal == 0 || ordinal == 1 || ordinal == 2 || ordinal == 3); - return VersionType.values()[ordinal]; + return in.readEnum(VersionType.class); } @Override public void writeTo(StreamOutput out) throws IOException { - out.writeVInt(ordinal()); + out.writeEnum(this); } } diff --git 
a/core/src/main/java/org/elasticsearch/index/analysis/ASCIIFoldingTokenFilterFactory.java b/core/src/main/java/org/elasticsearch/index/analysis/ASCIIFoldingTokenFilterFactory.java index b7417b2637481..4318ef273dca9 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/ASCIIFoldingTokenFilterFactory.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/ASCIIFoldingTokenFilterFactory.java @@ -47,6 +47,20 @@ public TokenStream create(TokenStream tokenStream) { @Override public Object getMultiTermComponent() { - return this; + if (preserveOriginal == false) { + return this; + } else { + // See https://issues.apache.org/jira/browse/LUCENE-7536 for the reasoning + return new TokenFilterFactory() { + @Override + public String name() { + return ASCIIFoldingTokenFilterFactory.this.name(); + } + @Override + public TokenStream create(TokenStream tokenStream) { + return new ASCIIFoldingFilter(tokenStream, false); + } + }; + } } } diff --git a/core/src/main/java/org/elasticsearch/index/analysis/AnalysisRegistry.java b/core/src/main/java/org/elasticsearch/index/analysis/AnalysisRegistry.java index 119e0c16ea0dc..36357afe678b5 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/AnalysisRegistry.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/AnalysisRegistry.java @@ -18,14 +18,20 @@ */ package org.elasticsearch.index.analysis; +import org.apache.logging.log4j.Logger; import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.util.IOUtils; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.env.Environment; +import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexSettings; +import org.elasticsearch.index.mapper.TextFieldMapper; import org.elasticsearch.indices.analysis.AnalysisModule; import org.elasticsearch.indices.analysis.AnalysisModule.AnalysisProvider; import org.elasticsearch.indices.analysis.PreBuiltAnalyzers; @@ -39,6 +45,7 @@ import java.util.HashMap; import java.util.Locale; import java.util.Map; +import java.util.Set; import java.util.concurrent.ConcurrentHashMap; import java.util.stream.Collectors; @@ -46,7 +53,7 @@ /** * An internal registry for tokenizer, token filter, char filter and analyzer. 
- * This class exists per node and allows to create per-index {@link AnalysisService} via {@link #build(IndexSettings)} + * This class exists per node and allows to create per-index {@link IndexAnalyzers} via {@link #build(IndexSettings)} */ public final class AnalysisRegistry implements Closeable { public static final String INDEX_ANALYSIS_CHAR_FILTER = "index.analysis.char_filter"; @@ -60,17 +67,20 @@ public final class AnalysisRegistry implements Closeable { private final Map> tokenFilters; private final Map> tokenizers; private final Map>> analyzers; + private final Map>> normalizers; public AnalysisRegistry(Environment environment, Map> charFilters, Map> tokenFilters, Map> tokenizers, - Map>> analyzers) { + Map>> analyzers, + Map>> normalizers) { this.environment = environment; this.charFilters = unmodifiableMap(charFilters); this.tokenFilters = unmodifiableMap(tokenFilters); this.tokenizers = unmodifiableMap(tokenizers); this.analyzers = unmodifiableMap(analyzers); + this.normalizers = unmodifiableMap(normalizers); } /** @@ -136,28 +146,50 @@ public void close() throws IOException { } /** - * Creates an index-level {@link AnalysisService} from this registry using the given index settings + * Creates an index-level {@link IndexAnalyzers} from this registry using the given index settings */ - public AnalysisService build(IndexSettings indexSettings) throws IOException { - final Map charFiltersSettings = indexSettings.getSettings().getGroups(INDEX_ANALYSIS_CHAR_FILTER); - final Map tokenFiltersSettings = indexSettings.getSettings().getGroups(INDEX_ANALYSIS_FILTER); - final Map tokenizersSettings = indexSettings.getSettings().getGroups(INDEX_ANALYSIS_TOKENIZER); - final Map analyzersSettings = indexSettings.getSettings().getGroups("index.analysis.analyzer"); - - final Map charFilterFactories = buildMapping(false, "charfilter", indexSettings, charFiltersSettings, charFilters, prebuiltAnalysis.charFilterFactories); - final Map tokenizerFactories = buildMapping(false, "tokenizer", indexSettings, tokenizersSettings, tokenizers, prebuiltAnalysis.tokenizerFactories); + public IndexAnalyzers build(IndexSettings indexSettings) throws IOException { + + final Map charFilterFactories = buildCharFilterFactories(indexSettings); + final Map tokenizerFactories = buildTokenizerFactories(indexSettings); + final Map tokenFilterFactories = buildTokenFilterFactories(indexSettings); + final Map> analyzierFactories = buildAnalyzerFactories(indexSettings); + final Map> normalizerFactories = buildNormalizerFactories(indexSettings); + return build(indexSettings, analyzierFactories, normalizerFactories, tokenizerFactories, charFilterFactories, tokenFilterFactories); + } + public Map buildTokenFilterFactories(IndexSettings indexSettings) throws IOException { + final Map tokenFiltersSettings = indexSettings.getSettings().getGroups(INDEX_ANALYSIS_FILTER); Map> tokenFilters = new HashMap<>(this.tokenFilters); /* - * synonym is different than everything else since it needs access to the tokenizer factories for this index. + * synonym and synonym_graph are different than everything else since they need access to the tokenizer factories for the index. * instead of building the infrastructure for plugins we rather make it a real exception to not pollute the general interface and * hide internal data-structures as much as possible. 
*/ - tokenFilters.put("synonym", requriesAnalysisSettings((is, env, name, settings) -> new SynonymTokenFilterFactory(is, env, this, name, settings))); - final Map tokenFilterFactories = buildMapping(false, "tokenfilter", indexSettings, tokenFiltersSettings, Collections.unmodifiableMap(tokenFilters), prebuiltAnalysis.tokenFilterFactories); - final Map> analyzierFactories = buildMapping(true, "analyzer", indexSettings, analyzersSettings, - analyzers, prebuiltAnalysis.analyzerProviderFactories); - return new AnalysisService(indexSettings, analyzierFactories, tokenizerFactories, charFilterFactories, tokenFilterFactories); + tokenFilters.put("synonym", requiresAnalysisSettings((is, env, name, settings) -> new SynonymTokenFilterFactory(is, env, this, name, settings))); + tokenFilters.put("synonym_graph", requiresAnalysisSettings((is, env, name, settings) -> new SynonymGraphTokenFilterFactory(is, env, this, name, settings))); + return buildMapping(Component.FILTER, indexSettings, tokenFiltersSettings, Collections.unmodifiableMap(tokenFilters), prebuiltAnalysis.tokenFilterFactories); + } + + public Map buildTokenizerFactories(IndexSettings indexSettings) throws IOException { + final Map tokenizersSettings = indexSettings.getSettings().getGroups(INDEX_ANALYSIS_TOKENIZER); + return buildMapping(Component.TOKENIZER, indexSettings, tokenizersSettings, tokenizers, prebuiltAnalysis.tokenizerFactories); + } + + public Map buildCharFilterFactories(IndexSettings indexSettings) throws IOException { + final Map charFiltersSettings = indexSettings.getSettings().getGroups(INDEX_ANALYSIS_CHAR_FILTER); + return buildMapping(Component.CHAR_FILTER, indexSettings, charFiltersSettings, charFilters, prebuiltAnalysis.charFilterFactories); + } + + public Map> buildAnalyzerFactories(IndexSettings indexSettings) throws IOException { + final Map analyzersSettings = indexSettings.getSettings().getGroups("index.analysis.analyzer"); + return buildMapping(Component.ANALYZER, indexSettings, analyzersSettings, analyzers, prebuiltAnalysis.analyzerProviderFactories); + } + + public Map> buildNormalizerFactories(IndexSettings indexSettings) throws IOException { + final Map noralizersSettings = indexSettings.getSettings().getGroups("index.analysis.normalizer"); + // TODO: Have pre-built normalizers + return buildMapping(Component.NORMALIZER, indexSettings, noralizersSettings, normalizers, Collections.emptyMap()); } /** @@ -172,9 +204,9 @@ public AnalysisProvider getTokenizerProvider(String tokenizer, final Map tokenizerSettings = indexSettings.getSettings().getGroups("index.analysis.tokenizer"); if (tokenizerSettings.containsKey(tokenizer)) { Settings currentSettings = tokenizerSettings.get(tokenizer); - return getAnalysisProvider("tokenizer", tokenizers, tokenizer, currentSettings.get("type")); + return getAnalysisProvider(Component.TOKENIZER, tokenizers, tokenizer, currentSettings.get("type")); } else { - return prebuiltAnalysis.tokenizerFactories.get(tokenizer); + return getTokenizerProvider(tokenizer); } } @@ -192,17 +224,19 @@ public AnalysisProvider getTokenFilterProvider(String tokenF Settings currentSettings = tokenFilterSettings.get(tokenFilter); String typeName = currentSettings.get("type"); /* - * synonym is different than everything else since it needs access to the tokenizer factories for this index. + * synonym and synonym_graph are different than everything else since they need access to the tokenizer factories for the index. 
* instead of building the infrastructure for plugins we rather make it a real exception to not pollute the general interface and * hide internal data-structures as much as possible. */ if ("synonym".equals(typeName)) { - return requriesAnalysisSettings((is, env, name, settings) -> new SynonymTokenFilterFactory(is, env, this, name, settings)); + return requiresAnalysisSettings((is, env, name, settings) -> new SynonymTokenFilterFactory(is, env, this, name, settings)); + } else if ("synonym_graph".equals(typeName)) { + return requiresAnalysisSettings((is, env, name, settings) -> new SynonymGraphTokenFilterFactory(is, env, this, name, settings)); } else { - return getAnalysisProvider("tokenfilter", tokenFilters, tokenFilter, typeName); + return getAnalysisProvider(Component.FILTER, tokenFilters, tokenFilter, typeName); } } else { - return prebuiltAnalysis.tokenFilterFactories.get(tokenFilter); + return getTokenFilterProvider(tokenFilter); } } @@ -218,13 +252,13 @@ public AnalysisProvider getCharFilterProvider(String charFilt final Map tokenFilterSettings = indexSettings.getSettings().getGroups("index.analysis.char_filter"); if (tokenFilterSettings.containsKey(charFilter)) { Settings currentSettings = tokenFilterSettings.get(charFilter); - return getAnalysisProvider("charfilter", charFilters, charFilter, currentSettings.get("type")); + return getAnalysisProvider(Component.CHAR_FILTER, charFilters, charFilter, currentSettings.get("type")); } else { - return prebuiltAnalysis.charFilterFactories.get(charFilter); + return getCharFilterProvider(charFilter); } } - private static AnalysisModule.AnalysisProvider requriesAnalysisSettings(AnalysisModule.AnalysisProvider provider) { + private static AnalysisModule.AnalysisProvider requiresAnalysisSettings(AnalysisModule.AnalysisProvider provider) { return new AnalysisModule.AnalysisProvider() { @Override public T get(IndexSettings indexSettings, Environment environment, String name, Settings settings) throws IOException { @@ -237,7 +271,40 @@ public boolean requiresAnalysisSettings() { }; } - private Map buildMapping(boolean analyzer, String toBuild, IndexSettings settings, Map settingsMap, + enum Component { + ANALYZER { + @Override + public String toString() { + return "analyzer"; + } + }, + NORMALIZER { + @Override + public String toString() { + return "normalizer"; + } + }, + CHAR_FILTER { + @Override + public String toString() { + return "char_filter"; + } + }, + TOKENIZER { + @Override + public String toString() { + return "tokenizer"; + } + }, + FILTER { + @Override + public String toString() { + return "filter"; + } + }; + } + + private Map buildMapping(Component component, IndexSettings settings, Map settingsMap, Map> providerMap, Map> defaultInstance) throws IOException { Settings defaultSettings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, settings.getIndexVersionCreated()).build(); @@ -246,29 +313,34 @@ private Map buildMapping(boolean analyzer, String toBuild, IndexS String name = entry.getKey(); Settings currentSettings = entry.getValue(); String typeName = currentSettings.get("type"); - if (analyzer) { - T factory; + if (component == Component.ANALYZER) { + T factory = null; if (typeName == null) { if (currentSettings.get("tokenizer") != null) { factory = (T) new CustomAnalyzerProvider(settings, name, currentSettings); } else { - throw new IllegalArgumentException(toBuild + " [" + name + "] must specify either an analyzer type, or a tokenizer"); + throw new IllegalArgumentException(component + " [" + name + "] must 
specify either an analyzer type, or a tokenizer"); } } else if (typeName.equals("custom")) { factory = (T) new CustomAnalyzerProvider(settings, name, currentSettings); - } else { - AnalysisModule.AnalysisProvider type = providerMap.get(typeName); - if (type == null) { - throw new IllegalArgumentException("Unknown " + toBuild + " type [" + typeName + "] for [" + name + "]"); - } - factory = type.get(settings, environment, name, currentSettings); } - factories.put(name, factory); - } else { - AnalysisProvider type = getAnalysisProvider(toBuild, providerMap, name, typeName); - final T factory = type.get(settings, environment, name, currentSettings); - factories.put(name, factory); + if (factory != null) { + factories.put(name, factory); + continue; + } + } else if (component == Component.NORMALIZER) { + if (typeName == null || typeName.equals("custom")) { + T factory = (T) new CustomNormalizerProvider(settings, name, currentSettings); + factories.put(name, factory); + continue; + } + } + AnalysisProvider type = getAnalysisProvider(component, providerMap, name, typeName); + if (type == null) { + throw new IllegalArgumentException("Unknown " + component + " type [" + typeName + "] for [" + name + "]"); } + final T factory = type.get(settings, environment, name, currentSettings); + factories.put(name, factory); } // go over the char filters in the bindings and register the ones that are not configured @@ -306,13 +378,13 @@ private Map buildMapping(boolean analyzer, String toBuild, IndexS return factories; } - private AnalysisProvider getAnalysisProvider(String toBuild, Map> providerMap, String name, String typeName) { + private AnalysisProvider getAnalysisProvider(Component component, Map> providerMap, String name, String typeName) { if (typeName == null) { - throw new IllegalArgumentException(toBuild + " [" + name + "] must specify either an analyzer type, or a tokenizer"); + throw new IllegalArgumentException(component + " [" + name + "] must specify either an analyzer type, or a tokenizer"); } AnalysisProvider type = providerMap.get(typeName); if (type == null) { - throw new IllegalArgumentException("Unknown " + toBuild + " type [" + typeName + "] for [" + name + "]"); + throw new IllegalArgumentException("Unknown " + component + " type [" + typeName + "] for [" + name + "]"); } return type; } @@ -399,4 +471,159 @@ public void close() throws IOException { IOUtils.close(analyzerProviderFactories.values().stream().map((a) -> ((PreBuiltAnalyzerProviderFactory)a).analyzer()).collect(Collectors.toList())); } } + + public IndexAnalyzers build(IndexSettings indexSettings, + Map> analyzerProviders, + Map> normalizerProviders, + Map tokenizerFactoryFactories, + Map charFilterFactoryFactories, + Map tokenFilterFactoryFactories) { + + Index index = indexSettings.getIndex(); + analyzerProviders = new HashMap<>(analyzerProviders); + Logger logger = Loggers.getLogger(getClass(), indexSettings.getSettings()); + DeprecationLogger deprecationLogger = new DeprecationLogger(logger); + Map analyzerAliases = new HashMap<>(); + Map analyzers = new HashMap<>(); + Map normalizers = new HashMap<>(); + for (Map.Entry> entry : analyzerProviders.entrySet()) { + processAnalyzerFactory(deprecationLogger, indexSettings, entry.getKey(), entry.getValue(), analyzerAliases, analyzers, + tokenFilterFactoryFactories, charFilterFactoryFactories, tokenizerFactoryFactories); + } + for (Map.Entry> entry : normalizerProviders.entrySet()) { + processNormalizerFactory(deprecationLogger, indexSettings, entry.getKey(), entry.getValue(), 
normalizers, + tokenFilterFactoryFactories, charFilterFactoryFactories); + } + for (Map.Entry entry : analyzerAliases.entrySet()) { + String key = entry.getKey(); + if (analyzers.containsKey(key) && + ("default".equals(key) || "default_search".equals(key) || "default_search_quoted".equals(key)) == false) { + throw new IllegalStateException("already registered analyzer with name: " + key); + } else { + NamedAnalyzer configured = entry.getValue(); + analyzers.put(key, configured); + } + } + + if (!analyzers.containsKey("default")) { + processAnalyzerFactory(deprecationLogger, indexSettings, "default", new StandardAnalyzerProvider(indexSettings, null, "default", Settings.Builder.EMPTY_SETTINGS), + analyzerAliases, analyzers, tokenFilterFactoryFactories, charFilterFactoryFactories, tokenizerFactoryFactories); + } + if (!analyzers.containsKey("default_search")) { + analyzers.put("default_search", analyzers.get("default")); + } + if (!analyzers.containsKey("default_search_quoted")) { + analyzers.put("default_search_quoted", analyzers.get("default_search")); + } + + + NamedAnalyzer defaultAnalyzer = analyzers.get("default"); + if (defaultAnalyzer == null) { + throw new IllegalArgumentException("no default analyzer configured"); + } + if (analyzers.containsKey("default_index")) { + final Version createdVersion = indexSettings.getIndexVersionCreated(); + if (createdVersion.onOrAfter(Version.V_5_0_0_alpha1)) { + throw new IllegalArgumentException("setting [index.analysis.analyzer.default_index] is not supported anymore, use [index.analysis.analyzer.default] instead for index [" + index.getName() + "]"); + } else { + deprecationLogger.deprecated("setting [index.analysis.analyzer.default_index] is deprecated, use [index.analysis.analyzer.default] instead for index [{}]", index.getName()); + } + } + NamedAnalyzer defaultIndexAnalyzer = analyzers.containsKey("default_index") ? analyzers.get("default_index") : defaultAnalyzer; + NamedAnalyzer defaultSearchAnalyzer = analyzers.containsKey("default_search") ? analyzers.get("default_search") : defaultAnalyzer; + NamedAnalyzer defaultSearchQuoteAnalyzer = analyzers.containsKey("default_search_quote") ? analyzers.get("default_search_quote") : defaultSearchAnalyzer; + + for (Map.Entry analyzer : analyzers.entrySet()) { + if (analyzer.getKey().startsWith("_")) { + throw new IllegalArgumentException("analyzer name must not start with '_'. got \"" + analyzer.getKey() + "\""); + } + } + return new IndexAnalyzers(indexSettings, defaultIndexAnalyzer, defaultSearchAnalyzer, defaultSearchQuoteAnalyzer, + unmodifiableMap(analyzers), unmodifiableMap(normalizers)); + } + + private void processAnalyzerFactory(DeprecationLogger deprecationLogger, + IndexSettings indexSettings, + String name, + AnalyzerProvider analyzerFactory, + Map analyzerAliases, + Map analyzers, Map tokenFilters, + Map charFilters, Map tokenizers) { + /* + * Lucene defaults positionIncrementGap to 0 in all analyzers but + * Elasticsearch defaults them to 0 only before version 2.0 + * and 100 afterwards so we override the positionIncrementGap if it + * doesn't match here. 
+ */ + int overridePositionIncrementGap = TextFieldMapper.Defaults.POSITION_INCREMENT_GAP; + if (analyzerFactory instanceof CustomAnalyzerProvider) { + ((CustomAnalyzerProvider) analyzerFactory).build(tokenizers, charFilters, tokenFilters); + /* + * Custom analyzers already default to the correct, version + * dependent positionIncrementGap and the user is be able to + * configure the positionIncrementGap directly on the analyzer so + * we disable overriding the positionIncrementGap to preserve the + * user's setting. + */ + overridePositionIncrementGap = Integer.MIN_VALUE; + } + Analyzer analyzerF = analyzerFactory.get(); + if (analyzerF == null) { + throw new IllegalArgumentException("analyzer [" + analyzerFactory.name() + "] created null analyzer"); + } + NamedAnalyzer analyzer; + if (analyzerF instanceof NamedAnalyzer) { + // if we got a named analyzer back, use it... + analyzer = (NamedAnalyzer) analyzerF; + if (overridePositionIncrementGap >= 0 && analyzer.getPositionIncrementGap(analyzer.name()) != overridePositionIncrementGap) { + // unless the positionIncrementGap needs to be overridden + analyzer = new NamedAnalyzer(analyzer, overridePositionIncrementGap); + } + } else { + analyzer = new NamedAnalyzer(name, analyzerFactory.scope(), analyzerF, overridePositionIncrementGap); + } + if (analyzers.containsKey(name)) { + throw new IllegalStateException("already registered analyzer with name: " + name); + } + analyzers.put(name, analyzer); + // TODO: remove alias support completely when we no longer support pre 5.0 indices + final String analyzerAliasKey = "index.analysis.analyzer." + analyzerFactory.name() + ".alias"; + if (indexSettings.getSettings().get(analyzerAliasKey) != null) { + if (indexSettings.getIndexVersionCreated().onOrAfter(Version.V_5_0_0_beta1)) { + // do not allow alias creation if the index was created on or after v5.0 alpha6 + throw new IllegalArgumentException("setting [" + analyzerAliasKey + "] is not supported"); + } + + // the setting is now removed but we only support it for loading indices created before v5.0 + deprecationLogger.deprecated("setting [{}] is only allowed on index [{}] because it was created before 5.x; " + + "analyzer aliases can no longer be created on new indices.", analyzerAliasKey, indexSettings.getIndex().getName()); + Set aliases = Sets.newHashSet(indexSettings.getSettings().getAsArray(analyzerAliasKey)); + for (String alias : aliases) { + if (analyzerAliases.putIfAbsent(alias, analyzer) != null) { + throw new IllegalStateException("alias [" + alias + "] is already used by [" + analyzerAliases.get(alias).name() + "]"); + } + } + } + } + + private void processNormalizerFactory(DeprecationLogger deprecationLogger, + IndexSettings indexSettings, + String name, + AnalyzerProvider normalizerFactory, + Map normalizers, + Map tokenFilters, + Map charFilters) { + if (normalizerFactory instanceof CustomNormalizerProvider) { + ((CustomNormalizerProvider) normalizerFactory).build(charFilters, tokenFilters); + } + Analyzer normalizerF = normalizerFactory.get(); + if (normalizerF == null) { + throw new IllegalArgumentException("normalizer [" + normalizerFactory.name() + "] created null normalizer"); + } + NamedAnalyzer normalizer = new NamedAnalyzer(name, normalizerFactory.scope(), normalizerF); + if (normalizers.containsKey(name)) { + throw new IllegalStateException("already registered analyzer with name: " + name); + } + normalizers.put(name, normalizer); + } } diff --git a/core/src/main/java/org/elasticsearch/index/analysis/AnalysisService.java 
b/core/src/main/java/org/elasticsearch/index/analysis/AnalysisService.java deleted file mode 100644 index cb84e6c6d0a34..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/analysis/AnalysisService.java +++ /dev/null @@ -1,218 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.index.analysis; - -import org.apache.lucene.analysis.Analyzer; -import org.elasticsearch.Version; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.util.set.Sets; -import org.elasticsearch.index.AbstractIndexComponent; -import org.elasticsearch.index.IndexSettings; -import org.elasticsearch.index.mapper.TextFieldMapper; - -import java.io.Closeable; -import java.util.HashMap; -import java.util.Map; -import java.util.Set; - -import static java.util.Collections.unmodifiableMap; - -/** - * - */ -public class AnalysisService extends AbstractIndexComponent implements Closeable { - - private final Map analyzers; - private final Map tokenizers; - private final Map charFilters; - private final Map tokenFilters; - - private final NamedAnalyzer defaultIndexAnalyzer; - private final NamedAnalyzer defaultSearchAnalyzer; - private final NamedAnalyzer defaultSearchQuoteAnalyzer; - - public AnalysisService(IndexSettings indexSettings, - Map> analyzerProviders, - Map tokenizerFactoryFactories, - Map charFilterFactoryFactories, - Map tokenFilterFactoryFactories) { - super(indexSettings); - this.tokenizers = unmodifiableMap(tokenizerFactoryFactories); - this.charFilters = unmodifiableMap(charFilterFactoryFactories); - this.tokenFilters = unmodifiableMap(tokenFilterFactoryFactories); - analyzerProviders = new HashMap<>(analyzerProviders); - - Map analyzerAliases = new HashMap<>(); - Map analyzers = new HashMap<>(); - for (Map.Entry> entry : analyzerProviders.entrySet()) { - processAnalyzerFactory(entry.getKey(), entry.getValue(), analyzerAliases, analyzers); - } - for (Map.Entry entry : analyzerAliases.entrySet()) { - String key = entry.getKey(); - if (analyzers.containsKey(key) && - ("default".equals(key) || "default_search".equals(key) || "default_search_quoted".equals(key)) == false) { - throw new IllegalStateException("already registered analyzer with name: " + key); - } else { - NamedAnalyzer configured = entry.getValue(); - analyzers.put(key, configured); - } - } - - if (!analyzers.containsKey("default")) { - processAnalyzerFactory("default", new StandardAnalyzerProvider(indexSettings, null, "default", Settings.Builder.EMPTY_SETTINGS), - analyzerAliases, analyzers); - } - if (!analyzers.containsKey("default_search")) { - analyzers.put("default_search", analyzers.get("default")); - } - if (!analyzers.containsKey("default_search_quoted")) { - analyzers.put("default_search_quoted", analyzers.get("default_search")); 
- } - - - NamedAnalyzer defaultAnalyzer = analyzers.get("default"); - if (defaultAnalyzer == null) { - throw new IllegalArgumentException("no default analyzer configured"); - } - if (analyzers.containsKey("default_index")) { - final Version createdVersion = indexSettings.getIndexVersionCreated(); - if (createdVersion.onOrAfter(Version.V_5_0_0_alpha1)) { - throw new IllegalArgumentException("setting [index.analysis.analyzer.default_index] is not supported anymore, use [index.analysis.analyzer.default] instead for index [" + index().getName() + "]"); - } else { - deprecationLogger.deprecated("setting [index.analysis.analyzer.default_index] is deprecated, use [index.analysis.analyzer.default] instead for index [{}]", index().getName()); - } - } - defaultIndexAnalyzer = analyzers.containsKey("default_index") ? analyzers.get("default_index") : defaultAnalyzer; - defaultSearchAnalyzer = analyzers.containsKey("default_search") ? analyzers.get("default_search") : defaultAnalyzer; - defaultSearchQuoteAnalyzer = analyzers.containsKey("default_search_quote") ? analyzers.get("default_search_quote") : defaultSearchAnalyzer; - - for (Map.Entry analyzer : analyzers.entrySet()) { - if (analyzer.getKey().startsWith("_")) { - throw new IllegalArgumentException("analyzer name must not start with '_'. got \"" + analyzer.getKey() + "\""); - } - } - this.analyzers = unmodifiableMap(analyzers); - } - - private void processAnalyzerFactory(String name, AnalyzerProvider analyzerFactory, Map analyzerAliases, Map analyzers) { - /* - * Lucene defaults positionIncrementGap to 0 in all analyzers but - * Elasticsearch defaults them to 0 only before version 2.0 - * and 100 afterwards so we override the positionIncrementGap if it - * doesn't match here. - */ - int overridePositionIncrementGap = TextFieldMapper.Defaults.POSITION_INCREMENT_GAP; - if (analyzerFactory instanceof CustomAnalyzerProvider) { - ((CustomAnalyzerProvider) analyzerFactory).build(this); - /* - * Custom analyzers already default to the correct, version - * dependent positionIncrementGap and the user is be able to - * configure the positionIncrementGap directly on the analyzer so - * we disable overriding the positionIncrementGap to preserve the - * user's setting. - */ - overridePositionIncrementGap = Integer.MIN_VALUE; - } - Analyzer analyzerF = analyzerFactory.get(); - if (analyzerF == null) { - throw new IllegalArgumentException("analyzer [" + analyzerFactory.name() + "] created null analyzer"); - } - NamedAnalyzer analyzer; - if (analyzerF instanceof NamedAnalyzer) { - // if we got a named analyzer back, use it... - analyzer = (NamedAnalyzer) analyzerF; - if (overridePositionIncrementGap >= 0 && analyzer.getPositionIncrementGap(analyzer.name()) != overridePositionIncrementGap) { - // unless the positionIncrementGap needs to be overridden - analyzer = new NamedAnalyzer(analyzer, overridePositionIncrementGap); - } - } else { - analyzer = new NamedAnalyzer(name, analyzerFactory.scope(), analyzerF, overridePositionIncrementGap); - } - if (analyzers.containsKey(name)) { - throw new IllegalStateException("already registered analyzer with name: " + name); - } - analyzers.put(name, analyzer); - // TODO: remove alias support completely when we no longer support pre 5.0 indices - final String analyzerAliasKey = "index.analysis.analyzer." 
+ analyzerFactory.name() + ".alias"; - if (indexSettings.getSettings().get(analyzerAliasKey) != null) { - if (indexSettings.getIndexVersionCreated().onOrAfter(Version.V_5_0_0_alpha6)) { - // do not allow alias creation if the index was created on or after v5.0 alpha6 - throw new IllegalArgumentException("setting [" + analyzerAliasKey + "] is not supported"); - } - - // the setting is now removed but we only support it for loading indices created before v5.0 - deprecationLogger.deprecated("setting [{}] is only allowed on index [{}] because it was created before 5.x; " + - "analyzer aliases can no longer be created on new indices.", analyzerAliasKey, index().getName()); - Set aliases = Sets.newHashSet(indexSettings.getSettings().getAsArray(analyzerAliasKey)); - for (String alias : aliases) { - if (analyzerAliases.putIfAbsent(alias, analyzer) != null) { - throw new IllegalStateException("alias [" + alias + "] is already used by [" + analyzerAliases.get(alias).name() + "]"); - } - } - } - } - - @Override - public void close() { - for (NamedAnalyzer analyzer : analyzers.values()) { - if (analyzer.scope() == AnalyzerScope.INDEX) { - try { - analyzer.close(); - } catch (NullPointerException e) { - // because analyzers are aliased, they might be closed several times - // an NPE is thrown in this case, so ignore.... - // TODO: Analyzer's can no longer have aliases in indices created in 5.x and beyond, - // so we only allow the aliases for analyzers on indices created pre 5.x for backwards - // compatibility. Once pre 5.0 indices are no longer supported, this check should be removed. - } catch (Exception e) { - logger.debug("failed to close analyzer {}", analyzer); - } - } - } - } - - public NamedAnalyzer analyzer(String name) { - return analyzers.get(name); - } - - public NamedAnalyzer defaultIndexAnalyzer() { - return defaultIndexAnalyzer; - } - - public NamedAnalyzer defaultSearchAnalyzer() { - return defaultSearchAnalyzer; - } - - public NamedAnalyzer defaultSearchQuoteAnalyzer() { - return defaultSearchQuoteAnalyzer; - } - - public TokenizerFactory tokenizer(String name) { - return tokenizers.get(name); - } - - public CharFilterFactory charFilter(String name) { - return charFilters.get(name); - } - - public TokenFilterFactory tokenFilter(String name) { - return tokenFilters.get(name); - } -} diff --git a/core/src/main/java/org/elasticsearch/index/analysis/CJKBigramFilterFactory.java b/core/src/main/java/org/elasticsearch/index/analysis/CJKBigramFilterFactory.java index d7d5139bed5cd..4c709b14730d3 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/CJKBigramFilterFactory.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/CJKBigramFilterFactory.java @@ -21,6 +21,7 @@ import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.cjk.CJKBigramFilter; +import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; import org.elasticsearch.index.IndexSettings; @@ -73,7 +74,17 @@ public CJKBigramFilterFactory(IndexSettings indexSettings, Environment environme @Override public TokenStream create(TokenStream tokenStream) { - return new CJKBigramFilter(tokenStream, flags, outputUnigrams); + CJKBigramFilter filter = new CJKBigramFilter(tokenStream, flags, outputUnigrams); + if (outputUnigrams) { + /** + * We disable the graph analysis on this token stream + * because it produces bigrams AND unigrams. 
+ * Graph analysis on such token stream is useless and dangerous as it may create too many paths + * since shingles of different size are not aligned in terms of positions. + */ + filter.addAttribute(DisableGraphAttribute.class); + } + return filter; } } diff --git a/core/src/main/java/org/elasticsearch/index/analysis/CharMatcher.java b/core/src/main/java/org/elasticsearch/index/analysis/CharMatcher.java index 238fde5e6b3f5..b9e70d05bb77b 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/CharMatcher.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/CharMatcher.java @@ -45,7 +45,7 @@ public boolean isTokenChar(int c) { } } - public enum Basic implements CharMatcher { + enum Basic implements CharMatcher { LETTER { @Override public boolean isTokenChar(int c) { @@ -97,7 +97,7 @@ public boolean isTokenChar(int c) { } } - public final class Builder { + final class Builder { private final Set matchers; Builder() { matchers = new HashSet<>(); diff --git a/core/src/main/java/org/elasticsearch/index/analysis/CustomAnalyzer.java b/core/src/main/java/org/elasticsearch/index/analysis/CustomAnalyzer.java index b68a321359ef2..d15e4e2e2fd61 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/CustomAnalyzer.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/CustomAnalyzer.java @@ -97,4 +97,27 @@ protected Reader initReader(String fieldName, Reader reader) { } return reader; } + + @Override + protected Reader initReaderForNormalization(String fieldName, Reader reader) { + for (CharFilterFactory charFilter : charFilters) { + if (charFilter instanceof MultiTermAwareComponent) { + charFilter = (CharFilterFactory) ((MultiTermAwareComponent) charFilter).getMultiTermComponent(); + reader = charFilter.create(reader); + } + } + return reader; + } + + @Override + protected TokenStream normalize(String fieldName, TokenStream in) { + TokenStream result = in; + for (TokenFilterFactory filter : tokenFilters) { + if (filter instanceof MultiTermAwareComponent) { + filter = (TokenFilterFactory) ((MultiTermAwareComponent) filter).getMultiTermComponent(); + result = filter.create(result); + } + } + return result; + } } diff --git a/core/src/main/java/org/elasticsearch/index/analysis/CustomAnalyzerProvider.java b/core/src/main/java/org/elasticsearch/index/analysis/CustomAnalyzerProvider.java index 144cbe817430b..5aa38584bbae3 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/CustomAnalyzerProvider.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/CustomAnalyzerProvider.java @@ -26,6 +26,7 @@ import java.util.ArrayList; import java.util.List; +import java.util.Map; /** * A custom analyzer that is built out of a single {@link org.apache.lucene.analysis.Tokenizer} and a list @@ -43,35 +44,36 @@ public CustomAnalyzerProvider(IndexSettings indexSettings, this.analyzerSettings = settings; } - public void build(AnalysisService analysisService) { + public void build(final Map tokenizers, final Map charFilters, + final Map tokenFilters) { String tokenizerName = analyzerSettings.get("tokenizer"); if (tokenizerName == null) { throw new IllegalArgumentException("Custom Analyzer [" + name() + "] must be configured with a tokenizer"); } - TokenizerFactory tokenizer = analysisService.tokenizer(tokenizerName); + TokenizerFactory tokenizer = tokenizers.get(tokenizerName); if (tokenizer == null) { throw new IllegalArgumentException("Custom Analyzer [" + name() + "] failed to find tokenizer under name [" + tokenizerName + "]"); } - List charFilters = new 
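The hunks above attach Lucene's `DisableGraphAttribute` whenever a filter emits tokens of mixed sizes (bigrams plus unigrams here, and shingles of varying length in the `ShingleTokenFilterFactory` change later in this patch). As a rough illustration only, and not part of the patch, the sketch below applies the same opt-out in a hypothetical helper built on Lucene's `ShingleFilter`; the class and method names are invented:

```java
// Illustrative sketch, not part of this patch: a hypothetical helper that emits
// tokens of mixed sizes and therefore opts the stream out of graph analysis,
// mirroring what CJKBigramFilterFactory does when outputUnigrams is enabled.
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute;
import org.apache.lucene.analysis.shingle.ShingleFilter;

class MixedSizeTokenStreamSketch {
    static TokenStream wrap(TokenStream in) {
        ShingleFilter filter = new ShingleFilter(in, 2, 3); // shingles of different lengths
        filter.setOutputUnigrams(true);                     // plus the original unigrams
        // Marking the stream tells query-time graph analysis to skip it, since the
        // unaligned positions would otherwise explode into far too many paths.
        filter.addAttribute(DisableGraphAttribute.class);
        return filter;
    }
}
```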
ArrayList<>(); String[] charFilterNames = analyzerSettings.getAsArray("char_filter"); + List charFiltersList = new ArrayList<>(charFilterNames.length); for (String charFilterName : charFilterNames) { - CharFilterFactory charFilter = analysisService.charFilter(charFilterName); + CharFilterFactory charFilter = charFilters.get(charFilterName); if (charFilter == null) { throw new IllegalArgumentException("Custom Analyzer [" + name() + "] failed to find char_filter under name [" + charFilterName + "]"); } - charFilters.add(charFilter); + charFiltersList.add(charFilter); } - List tokenFilters = new ArrayList<>(); String[] tokenFilterNames = analyzerSettings.getAsArray("filter"); + List tokenFilterList = new ArrayList<>(tokenFilterNames.length); for (String tokenFilterName : tokenFilterNames) { - TokenFilterFactory tokenFilter = analysisService.tokenFilter(tokenFilterName); + TokenFilterFactory tokenFilter = tokenFilters.get(tokenFilterName); if (tokenFilter == null) { throw new IllegalArgumentException("Custom Analyzer [" + name() + "] failed to find filter under name [" + tokenFilterName + "]"); } - tokenFilters.add(tokenFilter); + tokenFilterList.add(tokenFilter); } int positionIncrementGap = TextFieldMapper.Defaults.POSITION_INCREMENT_GAP; @@ -93,8 +95,8 @@ public void build(AnalysisService analysisService) { int offsetGap = analyzerSettings.getAsInt("offset_gap", -1);; this.customAnalyzer = new CustomAnalyzer(tokenizer, - charFilters.toArray(new CharFilterFactory[charFilters.size()]), - tokenFilters.toArray(new TokenFilterFactory[tokenFilters.size()]), + charFiltersList.toArray(new CharFilterFactory[charFiltersList.size()]), + tokenFilterList.toArray(new TokenFilterFactory[tokenFilterList.size()]), positionIncrementGap, offsetGap ); diff --git a/core/src/main/java/org/elasticsearch/index/analysis/CustomNormalizerProvider.java b/core/src/main/java/org/elasticsearch/index/analysis/CustomNormalizerProvider.java new file mode 100644 index 0000000000000..2fcc987df6aa3 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/analysis/CustomNormalizerProvider.java @@ -0,0 +1,95 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.analysis; + +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.IndexSettings; +import org.elasticsearch.indices.analysis.PreBuiltTokenizers; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; + +/** + * A custom normalizer that is built out of a char and token filters. On the + * contrary to analyzers, it does not support tokenizers and only supports a + * subset of char and token filters. 
+ */ +public final class CustomNormalizerProvider extends AbstractIndexAnalyzerProvider { + + private final Settings analyzerSettings; + + private CustomAnalyzer customAnalyzer; + + public CustomNormalizerProvider(IndexSettings indexSettings, + String name, Settings settings) { + super(indexSettings, name, settings); + this.analyzerSettings = settings; + } + + public void build(final Map charFilters, final Map tokenFilters) { + String tokenizerName = analyzerSettings.get("tokenizer"); + if (tokenizerName != null) { + throw new IllegalArgumentException("Custom normalizer [" + name() + "] cannot configure a tokenizer"); + } + + String[] charFilterNames = analyzerSettings.getAsArray("char_filter"); + List charFiltersList = new ArrayList<>(charFilterNames.length); + for (String charFilterName : charFilterNames) { + CharFilterFactory charFilter = charFilters.get(charFilterName); + if (charFilter == null) { + throw new IllegalArgumentException("Custom normalizer [" + name() + "] failed to find char_filter under name [" + + charFilterName + "]"); + } + if (charFilter instanceof MultiTermAwareComponent == false) { + throw new IllegalArgumentException("Custom normalizer [" + name() + "] may not use char filter [" + + charFilterName + "]"); + } + charFilter = (CharFilterFactory) ((MultiTermAwareComponent) charFilter).getMultiTermComponent(); + charFiltersList.add(charFilter); + } + + String[] tokenFilterNames = analyzerSettings.getAsArray("filter"); + List tokenFilterList = new ArrayList<>(tokenFilterNames.length); + for (String tokenFilterName : tokenFilterNames) { + TokenFilterFactory tokenFilter = tokenFilters.get(tokenFilterName); + if (tokenFilter == null) { + throw new IllegalArgumentException("Custom Analyzer [" + name() + "] failed to find filter under name [" + + tokenFilterName + "]"); + } + if (tokenFilter instanceof MultiTermAwareComponent == false) { + throw new IllegalArgumentException("Custom normalizer [" + name() + "] may not use filter [" + tokenFilterName + "]"); + } + tokenFilter = (TokenFilterFactory) ((MultiTermAwareComponent) tokenFilter).getMultiTermComponent(); + tokenFilterList.add(tokenFilter); + } + + this.customAnalyzer = new CustomAnalyzer( + PreBuiltTokenizers.KEYWORD.getTokenizerFactory(indexSettings.getIndexVersionCreated()), + charFiltersList.toArray(new CharFilterFactory[charFiltersList.size()]), + tokenFilterList.toArray(new TokenFilterFactory[tokenFilterList.size()]) + ); + } + + @Override + public CustomAnalyzer get() { + return this.customAnalyzer; + } +} diff --git a/core/src/main/java/org/elasticsearch/index/analysis/FlattenGraphTokenFilterFactory.java b/core/src/main/java/org/elasticsearch/index/analysis/FlattenGraphTokenFilterFactory.java new file mode 100644 index 0000000000000..6c9487a2cb32b --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/analysis/FlattenGraphTokenFilterFactory.java @@ -0,0 +1,38 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
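`CustomNormalizerProvider` above builds what is effectively a keyword-tokenized analyzer whose char and token filters must all be `MultiTermAwareComponent`s. To make that concrete, here is a minimal standalone sketch of what such a normalizer amounts to at the Lucene level, assuming Lucene 6.x package locations (`org.apache.lucene.analysis.core`); the class name is hypothetical and this is not code from the patch:

```java
// Illustrative sketch only: a "normalizer" is a keyword tokenizer (the whole value
// becomes one token) followed by multi-term-aware filters such as lowercasing.
import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.KeywordTokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

class NormalizerSketch {
    static Analyzer lowercaseNormalizer() {
        return new Analyzer() {
            @Override
            protected TokenStreamComponents createComponents(String fieldName) {
                Tokenizer tokenizer = new KeywordTokenizer();
                return new TokenStreamComponents(tokenizer, new LowerCaseFilter(tokenizer));
            }
        };
    }

    public static void main(String[] args) throws IOException {
        try (Analyzer normalizer = lowercaseNormalizer();
             TokenStream ts = normalizer.tokenStream("field", "Foo BAR")) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                System.out.println(term); // prints the single normalized token "foo bar"
            }
            ts.end();
        }
    }
}
```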
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.analysis; + +import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.analysis.core.FlattenGraphFilter; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.env.Environment; +import org.elasticsearch.index.IndexSettings; + +public class FlattenGraphTokenFilterFactory extends AbstractTokenFilterFactory { + + public FlattenGraphTokenFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) { + super(indexSettings, name, settings); + } + + @Override + public TokenStream create(TokenStream tokenStream) { + return new FlattenGraphFilter(tokenStream); + } +} diff --git a/core/src/main/java/org/elasticsearch/index/analysis/IndexAnalyzers.java b/core/src/main/java/org/elasticsearch/index/analysis/IndexAnalyzers.java new file mode 100644 index 0000000000000..f3200d606fb45 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/analysis/IndexAnalyzers.java @@ -0,0 +1,106 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.index.analysis; + +import org.apache.lucene.util.IOUtils; +import org.elasticsearch.index.AbstractIndexComponent; +import org.elasticsearch.index.IndexSettings; + +import java.io.Closeable; +import java.io.IOException; +import java.util.Map; +import java.util.stream.Stream; + +/** + * IndexAnalyzers contains a name to analyzer mapping for a specific index. + * This class only holds analyzers that are explicitly configured for an index and doesn't allow + * access to individual tokenizers, char or token filter. 
+ * + * @see AnalysisRegistry + */ +public final class IndexAnalyzers extends AbstractIndexComponent implements Closeable { + private final NamedAnalyzer defaultIndexAnalyzer; + private final NamedAnalyzer defaultSearchAnalyzer; + private final NamedAnalyzer defaultSearchQuoteAnalyzer; + private final Map analyzers; + private final Map normalizers; + private final IndexSettings indexSettings; + + public IndexAnalyzers(IndexSettings indexSettings, NamedAnalyzer defaultIndexAnalyzer, NamedAnalyzer defaultSearchAnalyzer, + NamedAnalyzer defaultSearchQuoteAnalyzer, Map analyzers, + Map normalizers) { + super(indexSettings); + this.defaultIndexAnalyzer = defaultIndexAnalyzer; + this.defaultSearchAnalyzer = defaultSearchAnalyzer; + this.defaultSearchQuoteAnalyzer = defaultSearchQuoteAnalyzer; + this.analyzers = analyzers; + this.normalizers = normalizers; + this.indexSettings = indexSettings; + } + + /** + * Returns an analyzer mapped to the given name or null if not present + */ + public NamedAnalyzer get(String name) { + return analyzers.get(name); + } + + /** + * Returns a normalizer mapped to the given name or null if not present + */ + public NamedAnalyzer getNormalizer(String name) { + return normalizers.get(name); + } + + /** + * Returns the default index analyzer for this index + */ + public NamedAnalyzer getDefaultIndexAnalyzer() { + return defaultIndexAnalyzer; + } + + /** + * Returns the default search analyzer for this index + */ + public NamedAnalyzer getDefaultSearchAnalyzer() { + return defaultSearchAnalyzer; + } + + /** + * Returns the default search quote analyzer for this index + */ + public NamedAnalyzer getDefaultSearchQuoteAnalyzer() { + return defaultSearchQuoteAnalyzer; + } + + @Override + public void close() throws IOException { + IOUtils.close(() -> Stream.concat(analyzers.values().stream(), normalizers.values().stream()) + .filter(a -> a.scope() == AnalyzerScope.INDEX) + .iterator()); + } + + /** + * Returns the indices settings + */ + public IndexSettings getIndexSettings() { + return indexSettings; + } + +} diff --git a/core/src/main/java/org/elasticsearch/index/analysis/KeywordMarkerTokenFilterFactory.java b/core/src/main/java/org/elasticsearch/index/analysis/KeywordMarkerTokenFilterFactory.java index 0805a3bdf8609..7869e3495812d 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/KeywordMarkerTokenFilterFactory.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/KeywordMarkerTokenFilterFactory.java @@ -21,30 +21,68 @@ import org.apache.lucene.analysis.CharArraySet; import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.analysis.miscellaneous.PatternKeywordMarkerFilter; import org.apache.lucene.analysis.miscellaneous.SetKeywordMarkerFilter; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; import org.elasticsearch.index.IndexSettings; import java.util.Set; +import java.util.regex.Pattern; +/** + * A factory for creating keyword marker token filters that prevent tokens from + * being modified by stemmers. Two types of keyword marker filters are available: + * the {@link SetKeywordMarkerFilter} and the {@link PatternKeywordMarkerFilter}. + * + * The {@link SetKeywordMarkerFilter} uses a set of keywords to denote which tokens + * should be excluded from stemming. This filter is created if the settings include + * {@code keywords}, which contains the list of keywords, or {@code `keywords_path`}, + * which contains a path to a file in the config directory with the keywords. 
+ * + * The {@link PatternKeywordMarkerFilter} uses a regular expression pattern to match + * against tokens that should be excluded from stemming. This filter is created if + * the settings include {@code keywords_pattern}, which contains the regular expression + * to match against. + */ public class KeywordMarkerTokenFilterFactory extends AbstractTokenFilterFactory { private final CharArraySet keywordLookup; + private final Pattern keywordPattern; public KeywordMarkerTokenFilterFactory(IndexSettings indexSettings, Environment env, String name, Settings settings) { super(indexSettings, name, settings); boolean ignoreCase = settings.getAsBoolean("ignore_case", false); - Set rules = Analysis.getWordSet(env, settings, "keywords"); - if (rules == null) { - throw new IllegalArgumentException("keyword filter requires either `keywords` or `keywords_path` to be configured"); + String patternString = settings.get("keywords_pattern"); + if (patternString != null) { + // a pattern for matching keywords is specified, as opposed to a + // set of keyword strings to match against + if (settings.get("keywords") != null || settings.get("keywords_path") != null) { + throw new IllegalArgumentException( + "cannot specify both `keywords_pattern` and `keywords` or `keywords_path`"); + } + keywordPattern = Pattern.compile(patternString); + keywordLookup = null; + } else { + Set rules = Analysis.getWordSet(env, settings, "keywords"); + if (rules == null) { + throw new IllegalArgumentException( + "keyword filter requires either `keywords`, `keywords_path`, " + + "or `keywords_pattern` to be configured"); + } + // a set of keywords (or a path to them) is specified + keywordLookup = new CharArraySet(rules, ignoreCase); + keywordPattern = null; } - keywordLookup = new CharArraySet(rules, ignoreCase); } @Override public TokenStream create(TokenStream tokenStream) { - return new SetKeywordMarkerFilter(tokenStream, keywordLookup); + if (keywordPattern != null) { + return new PatternKeywordMarkerFilter(tokenStream, keywordPattern); + } else { + return new SetKeywordMarkerFilter(tokenStream, keywordLookup); + } } } diff --git a/core/src/main/java/org/elasticsearch/index/analysis/NamedAnalyzer.java b/core/src/main/java/org/elasticsearch/index/analysis/NamedAnalyzer.java index 1dd562c4bb144..416967e94f5f0 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/NamedAnalyzer.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/NamedAnalyzer.java @@ -39,10 +39,6 @@ public NamedAnalyzer(NamedAnalyzer analyzer, int positionIncrementGap) { this(analyzer.name(), analyzer.scope(), analyzer.analyzer(), positionIncrementGap); } - public NamedAnalyzer(String name, Analyzer analyzer) { - this(name, AnalyzerScope.INDEX, analyzer); - } - public NamedAnalyzer(String name, AnalyzerScope scope, Analyzer analyzer) { this(name, scope, analyzer, Integer.MIN_VALUE); } @@ -119,4 +115,12 @@ public boolean equals(Object o) { public int hashCode() { return Objects.hash(name); } + + @Override + public void close() { + super.close(); + if (scope == AnalyzerScope.INDEX) { + analyzer.close(); + } + } } diff --git a/core/src/main/java/org/elasticsearch/index/analysis/PatternAnalyzer.java b/core/src/main/java/org/elasticsearch/index/analysis/PatternAnalyzer.java index 7554f459bfaf4..5d4d9f2df3f4f 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/PatternAnalyzer.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/PatternAnalyzer.java @@ -53,4 +53,13 @@ protected TokenStreamComponents createComponents(String 
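The reworked `KeywordMarkerTokenFilterFactory` above chooses between two Lucene filters depending on whether `keywords_pattern` or `keywords`/`keywords_path` is configured, and rejects combining them. A minimal sketch of those two underlying filters follows; the example terms and pattern are made up and this is not part of the patch:

```java
// Illustrative sketch only: the two Lucene filters the factory selects between.
// Tokens marked as keywords are left untouched by stemmers later in the chain.
import java.util.Arrays;
import java.util.regex.Pattern;

import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.miscellaneous.PatternKeywordMarkerFilter;
import org.apache.lucene.analysis.miscellaneous.SetKeywordMarkerFilter;

class KeywordMarkerSketch {
    // rough equivalent of `keywords_pattern`: protect anything ending in "ing"
    static TokenStream byPattern(TokenStream in) {
        return new PatternKeywordMarkerFilter(in, Pattern.compile(".*ing"));
    }

    // rough equivalent of `keywords` / `keywords_path`: protect an explicit set of terms
    static TokenStream bySet(TokenStream in) {
        CharArraySet keywords = new CharArraySet(Arrays.asList("running", "jumping"), true);
        return new SetKeywordMarkerFilter(in, keywords);
    }
}
```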
s) { } return new TokenStreamComponents(tokenizer, stream); } + + @Override + protected TokenStream normalize(String fieldName, TokenStream in) { + TokenStream stream = in; + if (lowercase) { + stream = new LowerCaseFilter(stream); + } + return stream; + } } diff --git a/core/src/main/java/org/elasticsearch/index/analysis/PatternReplaceCharFilterFactory.java b/core/src/main/java/org/elasticsearch/index/analysis/PatternReplaceCharFilterFactory.java index db428db153d49..2562f20373bef 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/PatternReplaceCharFilterFactory.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/PatternReplaceCharFilterFactory.java @@ -18,6 +18,9 @@ */ package org.elasticsearch.index.analysis; +import java.io.Reader; +import java.util.regex.Pattern; + import org.apache.lucene.analysis.pattern.PatternReplaceCharFilter; import org.elasticsearch.common.Strings; import org.elasticsearch.common.regex.Regex; @@ -25,10 +28,7 @@ import org.elasticsearch.env.Environment; import org.elasticsearch.index.IndexSettings; -import java.io.Reader; -import java.util.regex.Pattern; - -public class PatternReplaceCharFilterFactory extends AbstractCharFilterFactory { +public class PatternReplaceCharFilterFactory extends AbstractCharFilterFactory implements MultiTermAwareComponent { private final Pattern pattern; private final String replacement; @@ -56,4 +56,9 @@ public String getReplacement() { public Reader create(Reader tokenStream) { return new PatternReplaceCharFilter(pattern, replacement, tokenStream); } + + @Override + public Object getMultiTermComponent() { + return this; + } } diff --git a/core/src/main/java/org/elasticsearch/index/analysis/ShingleTokenFilterFactory.java b/core/src/main/java/org/elasticsearch/index/analysis/ShingleTokenFilterFactory.java index 0c6ae7b965212..1f7fc301cbac7 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/ShingleTokenFilterFactory.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/ShingleTokenFilterFactory.java @@ -20,6 +20,7 @@ package org.elasticsearch.index.analysis; import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute; import org.apache.lucene.analysis.shingle.ShingleFilter; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; @@ -89,6 +90,15 @@ public TokenStream create(TokenStream tokenStream) { filter.setOutputUnigramsIfNoShingles(outputUnigramsIfNoShingles); filter.setTokenSeparator(tokenSeparator); filter.setFillerToken(fillerToken); + if (outputUnigrams || (minShingleSize != maxShingleSize)) { + /** + * We disable the graph analysis on this token stream + * because it produces shingles of different size. + * Graph analysis on such token stream is useless and dangerous as it may create too many paths + * since shingles of different size are not aligned in terms of positions. + */ + filter.addAttribute(DisableGraphAttribute.class); + } return filter; } diff --git a/core/src/main/java/org/elasticsearch/index/analysis/SynonymGraphTokenFilterFactory.java b/core/src/main/java/org/elasticsearch/index/analysis/SynonymGraphTokenFilterFactory.java new file mode 100644 index 0000000000000..cfb37f0b075c7 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/analysis/SynonymGraphTokenFilterFactory.java @@ -0,0 +1,41 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.analysis; + +import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.analysis.synonym.SynonymGraphFilter; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.env.Environment; +import org.elasticsearch.index.IndexSettings; + +import java.io.IOException; + +public class SynonymGraphTokenFilterFactory extends SynonymTokenFilterFactory { + public SynonymGraphTokenFilterFactory(IndexSettings indexSettings, Environment env, AnalysisRegistry analysisRegistry, + String name, Settings settings) throws IOException { + super(indexSettings, env, analysisRegistry, name, settings); + } + + @Override + public TokenStream create(TokenStream tokenStream) { + // fst is null means no synonyms + return synonymMap.fst == null ? tokenStream : new SynonymGraphFilter(tokenStream, synonymMap, ignoreCase); + } +} diff --git a/core/src/main/java/org/elasticsearch/index/analysis/SynonymTokenFilterFactory.java b/core/src/main/java/org/elasticsearch/index/analysis/SynonymTokenFilterFactory.java index 11f1303328c7a..d32c66e0dfe8e 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/SynonymTokenFilterFactory.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/SynonymTokenFilterFactory.java @@ -40,8 +40,8 @@ public class SynonymTokenFilterFactory extends AbstractTokenFilterFactory { - private final SynonymMap synonymMap; - private final boolean ignoreCase; + protected final SynonymMap synonymMap; + protected final boolean ignoreCase; public SynonymTokenFilterFactory(IndexSettings indexSettings, Environment env, AnalysisRegistry analysisRegistry, String name, Settings settings) throws IOException { diff --git a/core/src/main/java/org/elasticsearch/index/analysis/WordDelimiterGraphTokenFilterFactory.java b/core/src/main/java/org/elasticsearch/index/analysis/WordDelimiterGraphTokenFilterFactory.java new file mode 100644 index 0000000000000..aa88a1647a78e --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/analysis/WordDelimiterGraphTokenFilterFactory.java @@ -0,0 +1,101 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. 
See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.analysis; + +import org.apache.lucene.analysis.CharArraySet; +import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.analysis.miscellaneous.WordDelimiterGraphFilter; +import org.apache.lucene.analysis.miscellaneous.WordDelimiterIterator; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.env.Environment; +import org.elasticsearch.index.IndexSettings; + +import java.util.List; +import java.util.Set; + +import static org.apache.lucene.analysis.miscellaneous.WordDelimiterFilter.CATENATE_ALL; +import static org.apache.lucene.analysis.miscellaneous.WordDelimiterFilter.CATENATE_NUMBERS; +import static org.apache.lucene.analysis.miscellaneous.WordDelimiterFilter.CATENATE_WORDS; +import static org.apache.lucene.analysis.miscellaneous.WordDelimiterFilter.GENERATE_NUMBER_PARTS; +import static org.apache.lucene.analysis.miscellaneous.WordDelimiterFilter.GENERATE_WORD_PARTS; +import static org.apache.lucene.analysis.miscellaneous.WordDelimiterFilter.PRESERVE_ORIGINAL; +import static org.apache.lucene.analysis.miscellaneous.WordDelimiterFilter.SPLIT_ON_CASE_CHANGE; +import static org.apache.lucene.analysis.miscellaneous.WordDelimiterFilter.SPLIT_ON_NUMERICS; +import static org.apache.lucene.analysis.miscellaneous.WordDelimiterFilter.STEM_ENGLISH_POSSESSIVE; +import static org.elasticsearch.index.analysis.WordDelimiterTokenFilterFactory.parseTypes; + +public class WordDelimiterGraphTokenFilterFactory extends AbstractTokenFilterFactory { + + private final byte[] charTypeTable; + private final int flags; + private final CharArraySet protoWords; + + public WordDelimiterGraphTokenFilterFactory(IndexSettings indexSettings, Environment env, String name, Settings settings) { + super(indexSettings, name, settings); + + // Sample Format for the type table: + // $ => DIGIT + // % => DIGIT + // . 
=> DIGIT + // \u002C => DIGIT + // \u200D => ALPHANUM + List charTypeTableValues = Analysis.getWordList(env, settings, "type_table"); + if (charTypeTableValues == null) { + this.charTypeTable = WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE; + } else { + this.charTypeTable = parseTypes(charTypeTableValues); + } + int flags = 0; + // If set, causes parts of words to be generated: "PowerShot" => "Power" "Shot" + flags |= getFlag(GENERATE_WORD_PARTS, settings, "generate_word_parts", true); + // If set, causes number subwords to be generated: "500-42" => "500" "42" + flags |= getFlag(GENERATE_NUMBER_PARTS, settings, "generate_number_parts", true); + // 1, causes maximum runs of word parts to be catenated: "wi-fi" => "wifi" + flags |= getFlag(CATENATE_WORDS, settings, "catenate_words", false); + // If set, causes maximum runs of number parts to be catenated: "500-42" => "50042" + flags |= getFlag(CATENATE_NUMBERS, settings, "catenate_numbers", false); + // If set, causes all subword parts to be catenated: "wi-fi-4000" => "wifi4000" + flags |= getFlag(CATENATE_ALL, settings, "catenate_all", false); + // 1, causes "PowerShot" to be two tokens; ("Power-Shot" remains two parts regards) + flags |= getFlag(SPLIT_ON_CASE_CHANGE, settings, "split_on_case_change", true); + // If set, includes original words in subwords: "500-42" => "500" "42" "500-42" + flags |= getFlag(PRESERVE_ORIGINAL, settings, "preserve_original", false); + // 1, causes "j2se" to be three tokens; "j" "2" "se" + flags |= getFlag(SPLIT_ON_NUMERICS, settings, "split_on_numerics", true); + // If set, causes trailing "'s" to be removed for each subword: "O'Neil's" => "O", "Neil" + flags |= getFlag(STEM_ENGLISH_POSSESSIVE, settings, "stem_english_possessive", true); + // If not null is the set of tokens to protect from being delimited + Set protectedWords = Analysis.getWordSet(env, settings, "protected_words"); + this.protoWords = protectedWords == null ? 
null : CharArraySet.copy(protectedWords); + this.flags = flags; + } + + @Override + public TokenStream create(TokenStream tokenStream) { + return new WordDelimiterGraphFilter(tokenStream, charTypeTable, flags, protoWords); + } + + private int getFlag(int flag, Settings settings, String key, boolean defaultValue) { + if (settings.getAsBoolean(key, defaultValue)) { + return flag; + } + return 0; + } +} diff --git a/core/src/main/java/org/elasticsearch/index/analysis/WordDelimiterTokenFilterFactory.java b/core/src/main/java/org/elasticsearch/index/analysis/WordDelimiterTokenFilterFactory.java index ccc60f4ce7027..74d10553e4afe 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/WordDelimiterTokenFilterFactory.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/WordDelimiterTokenFilterFactory.java @@ -113,7 +113,7 @@ public int getFlag(int flag, Settings settings, String key, boolean defaultValue /** * parses a list of MappingCharFilter style rules into a custom byte[] type table */ - private byte[] parseTypes(Collection rules) { + static byte[] parseTypes(Collection rules) { SortedMap typeMap = new TreeMap<>(); for (String rule : rules) { Matcher m = typePattern.matcher(rule); @@ -137,7 +137,7 @@ private byte[] parseTypes(Collection rules) { return types; } - private Byte parseType(String s) { + private static Byte parseType(String s) { if (s.equals("LOWER")) return WordDelimiterFilter.LOWER; else if (s.equals("UPPER")) @@ -154,9 +154,8 @@ else if (s.equals("SUBWORD_DELIM")) return null; } - char[] out = new char[256]; - - private String parseString(String s) { + private static String parseString(String s) { + char[] out = new char[256]; int readPos = 0; int len = s.length(); int writePos = 0; diff --git a/core/src/main/java/org/elasticsearch/index/cache/bitset/BitsetFilterCache.java b/core/src/main/java/org/elasticsearch/index/cache/bitset/BitsetFilterCache.java index 0e4c54e7a7d34..fe11a30fad6f8 100644 --- a/core/src/main/java/org/elasticsearch/index/cache/bitset/BitsetFilterCache.java +++ b/core/src/main/java/org/elasticsearch/index/cache/bitset/BitsetFilterCache.java @@ -117,8 +117,7 @@ public void clear(String reason) { private BitSet getAndLoadIfNotPresent(final Query query, final LeafReaderContext context) throws IOException, ExecutionException { final Object coreCacheReader = context.reader().getCoreCacheKey(); final ShardId shardId = ShardUtils.extractShardId(context.reader()); - if (shardId != null // can't require it because of the percolator - && indexSettings.getIndex().equals(shardId.getIndex()) == false) { + if (indexSettings.getIndex().equals(shardId.getIndex()) == false) { // insanity throw new IllegalStateException("Trying to load bit set for index " + shardId.getIndex() + " with cache of index " + indexSettings.getIndex()); @@ -236,7 +235,7 @@ public IndexWarmer.TerminationHandle warmReader(final IndexShard indexShard, fin hasNested = true; for (ObjectMapper objectMapper : docMapper.objectMappers().values()) { if (objectMapper.nested().isNested()) { - ObjectMapper parentObjectMapper = docMapper.findParentObjectMapper(objectMapper); + ObjectMapper parentObjectMapper = objectMapper.getParentObjectMapper(mapperService); if (parentObjectMapper != null && parentObjectMapper.nested().isNested()) { warmUp.add(parentObjectMapper.nestedTypeFilter()); } diff --git a/core/src/main/java/org/elasticsearch/index/cache/query/QueryCache.java b/core/src/main/java/org/elasticsearch/index/cache/query/QueryCache.java index bee947b54f0d1..d1747fc30ee6f 100644 --- 
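`WordDelimiterGraphTokenFilterFactory` above folds each boolean setting into a single bit-flag word via `getFlag` and passes it to Lucene's `WordDelimiterGraphFilter`. The sketch below shows that flag composition in isolation, assuming the flag constants exposed by `WordDelimiterGraphFilter` and the default delimiter table; the helper class is hypothetical:

```java
// Illustrative sketch only: the boolean settings become OR'd bits in one flag word.
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.miscellaneous.WordDelimiterGraphFilter;
import org.apache.lucene.analysis.miscellaneous.WordDelimiterIterator;

import static org.apache.lucene.analysis.miscellaneous.WordDelimiterGraphFilter.GENERATE_NUMBER_PARTS;
import static org.apache.lucene.analysis.miscellaneous.WordDelimiterGraphFilter.GENERATE_WORD_PARTS;
import static org.apache.lucene.analysis.miscellaneous.WordDelimiterGraphFilter.SPLIT_ON_CASE_CHANGE;

class WordDelimiterGraphSketch {
    static TokenStream split(TokenStream in) {
        // with these flags, "PowerShot2000" yields "Power", "Shot", "2000"
        int flags = GENERATE_WORD_PARTS | GENERATE_NUMBER_PARTS | SPLIT_ON_CASE_CHANGE;
        return new WordDelimiterGraphFilter(in,
            WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE, flags, null /* no protected words */);
    }
}
```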
a/core/src/main/java/org/elasticsearch/index/cache/query/QueryCache.java +++ b/core/src/main/java/org/elasticsearch/index/cache/query/QueryCache.java @@ -28,7 +28,7 @@ */ public interface QueryCache extends IndexComponent, Closeable, org.apache.lucene.search.QueryCache { - static class EntriesStats { + class EntriesStats { public final long sizeInBytes; public final long count; diff --git a/core/src/main/java/org/elasticsearch/index/cache/query/QueryCacheStats.java b/core/src/main/java/org/elasticsearch/index/cache/query/QueryCacheStats.java index fcbb619004886..99673afac6f35 100644 --- a/core/src/main/java/org/elasticsearch/index/cache/query/QueryCacheStats.java +++ b/core/src/main/java/org/elasticsearch/index/cache/query/QueryCacheStats.java @@ -108,13 +108,6 @@ public long getEvictions() { return cacheCount - cacheSize; } - public static QueryCacheStats readQueryCacheStats(StreamInput in) throws IOException { - QueryCacheStats stats = new QueryCacheStats(); - stats.readFrom(in); - return stats; - } - - @Override public void readFrom(StreamInput in) throws IOException { ramBytesUsed = in.readLong(); diff --git a/core/src/main/java/org/elasticsearch/index/engine/CommitStats.java b/core/src/main/java/org/elasticsearch/index/engine/CommitStats.java index 48fb8a80eeb84..eb2e35a5a23fb 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/CommitStats.java +++ b/core/src/main/java/org/elasticsearch/index/engine/CommitStats.java @@ -49,13 +49,6 @@ public CommitStats(SegmentInfos segmentInfos) { } private CommitStats() { - - } - - public static CommitStats readCommitStatsFrom(StreamInput in) throws IOException { - CommitStats commitStats = new CommitStats(); - commitStats.readFrom(in); - return commitStats; } public static CommitStats readOptionalCommitStatsFrom(StreamInput in) throws IOException { diff --git a/core/src/main/java/org/elasticsearch/index/engine/DeleteFailedEngineException.java b/core/src/main/java/org/elasticsearch/index/engine/DeleteFailedEngineException.java index 068df25e2fdc6..16716ccdcb03c 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/DeleteFailedEngineException.java +++ b/core/src/main/java/org/elasticsearch/index/engine/DeleteFailedEngineException.java @@ -25,15 +25,13 @@ import java.io.IOException; /** - * + * Deprecated as not used in 6.0, should be removed in 7.0 + * Still exists for bwc in serializing/deserializing from + * 5.x nodes */ +@Deprecated public class DeleteFailedEngineException extends EngineException { - - public DeleteFailedEngineException(ShardId shardId, Engine.Delete delete, Throwable cause) { - super(shardId, "Delete failed for [" + delete.uid().text() + "]", cause); - } - public DeleteFailedEngineException(StreamInput in) throws IOException{ super(in); } -} \ No newline at end of file +} diff --git a/core/src/main/java/org/elasticsearch/index/engine/DeleteVersionValue.java b/core/src/main/java/org/elasticsearch/index/engine/DeleteVersionValue.java index baacc4b240d6b..45b28d96ba283 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/DeleteVersionValue.java +++ b/core/src/main/java/org/elasticsearch/index/engine/DeleteVersionValue.java @@ -29,18 +29,18 @@ class DeleteVersionValue extends VersionValue { private final long time; - public DeleteVersionValue(long version, long time) { + DeleteVersionValue(long version, long time) { super(version); this.time = time; } @Override - public long time() { + public long getTime() { return this.time; } @Override - public boolean delete() { + public boolean isDelete() { return 
true; } @@ -48,4 +48,12 @@ public boolean delete() { public long ramBytesUsed() { return BASE_RAM_BYTES_USED; } + + @Override + public String toString() { + return "DeleteVersionValue{" + + "version=" + getVersion() + + ",time=" + time + + '}'; + } } diff --git a/core/src/main/java/org/elasticsearch/index/engine/ElasticsearchConcurrentMergeScheduler.java b/core/src/main/java/org/elasticsearch/index/engine/ElasticsearchConcurrentMergeScheduler.java index 466da06dec4fa..f4876149cac13 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/ElasticsearchConcurrentMergeScheduler.java +++ b/core/src/main/java/org/elasticsearch/index/engine/ElasticsearchConcurrentMergeScheduler.java @@ -67,7 +67,7 @@ class ElasticsearchConcurrentMergeScheduler extends ConcurrentMergeScheduler { private final Set readOnlyOnGoingMerges = Collections.unmodifiableSet(onGoingMerges); private final MergeSchedulerConfig config; - public ElasticsearchConcurrentMergeScheduler(ShardId shardId, IndexSettings indexSettings) { + ElasticsearchConcurrentMergeScheduler(ShardId shardId, IndexSettings indexSettings) { this.config = indexSettings.getMergeSchedulerConfig(); this.shardId = shardId; this.indexSettings = indexSettings.getSettings(); @@ -110,10 +110,15 @@ protected void doMerge(IndexWriter writer, MergePolicy.OneMerge merge) throws IO totalMergesNumDocs.inc(totalNumDocs); totalMergesSizeInBytes.inc(totalSizeInBytes); totalMerges.inc(tookMS); - - long stoppedMS = TimeValue.nsecToMSec(merge.rateLimiter.getTotalStoppedNS()); - long throttledMS = TimeValue.nsecToMSec(merge.rateLimiter.getTotalPausedNS()); - + long stoppedMS = TimeValue.nsecToMSec( + merge.getMergeProgress().getPauseTimes().get(MergePolicy.OneMergeProgress.PauseReason.STOPPED) + ); + long throttledMS = TimeValue.nsecToMSec( + merge.getMergeProgress().getPauseTimes().get(MergePolicy.OneMergeProgress.PauseReason.PAUSED) + ); + final Thread thread = Thread.currentThread(); + long totalBytesWritten = OneMergeHelper.getTotalBytesWritten(thread, merge); + double mbPerSec = OneMergeHelper.getMbPerSec(thread, merge); totalMergeStoppedTime.inc(stoppedMS); totalMergeThrottledTime.inc(throttledMS); @@ -125,8 +130,8 @@ protected void doMerge(IndexWriter writer, MergePolicy.OneMerge merge) throws IO totalNumDocs, TimeValue.timeValueMillis(stoppedMS), TimeValue.timeValueMillis(throttledMS), - merge.rateLimiter.getTotalBytesWritten()/1024f/1024f, - merge.rateLimiter.getMBPerSec()); + totalBytesWritten/1024f/1024f, + mbPerSec); if (tookMS > 20000) { // if more than 20 seconds, DEBUG log it logger.debug("{}", message); diff --git a/core/src/main/java/org/elasticsearch/index/engine/Engine.java b/core/src/main/java/org/elasticsearch/index/engine/Engine.java index 9df03beb1abce..e1ccf5efcc629 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/Engine.java +++ b/core/src/main/java/org/elasticsearch/index/engine/Engine.java @@ -42,6 +42,7 @@ import org.apache.lucene.store.IOContext; import org.apache.lucene.util.Accountable; import org.apache.lucene.util.Accountables; +import org.apache.lucene.util.SetOnce; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.bytes.BytesReference; @@ -54,15 +55,15 @@ import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.lucene.uid.Versions; +import org.elasticsearch.common.lucene.uid.VersionsResolver; +import org.elasticsearch.common.lucene.uid.VersionsResolver.DocIdAndVersion; import 
org.elasticsearch.common.metrics.CounterMetric; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.concurrent.ReleasableLock; import org.elasticsearch.index.VersionType; import org.elasticsearch.index.mapper.ParseContext.Document; import org.elasticsearch.index.mapper.ParsedDocument; -import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.merge.MergeStats; -import org.elasticsearch.index.shard.DocsStats; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.store.Store; import org.elasticsearch.index.translog.Translog; @@ -76,8 +77,10 @@ import java.util.Comparator; import java.util.HashMap; import java.util.List; +import java.util.Locale; import java.util.Map; import java.util.Objects; +import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.locks.Condition; @@ -95,13 +98,14 @@ public abstract class Engine implements Closeable { protected final EngineConfig engineConfig; protected final Store store; protected final AtomicBoolean isClosed = new AtomicBoolean(false); + private final CountDownLatch closedLatch = new CountDownLatch(1); protected final EventListener eventListener; protected final SnapshotDeletionPolicy deletionPolicy; protected final ReentrantLock failEngineLock = new ReentrantLock(); protected final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock(); protected final ReleasableLock readLock = new ReleasableLock(rwl.readLock()); protected final ReleasableLock writeLock = new ReleasableLock(rwl.writeLock()); - protected volatile Exception failedEngine = null; + protected final SetOnce failedEngine = new SetOnce<>(); /* * on lastWriteNanos we use System.nanoTime() to initialize this since: * - we use the value for figuring out if the shard / engine is active so if we startup and no write has happened yet we still consider it active @@ -233,17 +237,13 @@ boolean isThrottled() { /** * Returns the number of milliseconds this engine was under index throttling. */ - public long getIndexThrottleTimeInMillis() { - return 0; - } + public abstract long getIndexThrottleTimeInMillis(); /** * Returns the true iff this engine is currently under index throttling. * @see #getIndexThrottleTimeInMillis() */ - public boolean isThrottled() { - return false; - } + public abstract boolean isThrottled(); /** A Lock implementation that always allows the lock to be acquired */ protected static final class NoOpLock implements Lock { @@ -276,9 +276,135 @@ public Condition newCondition() { } } - public abstract void index(Index operation) throws EngineException; + /** + * Perform document index operation on the engine + * @param index operation to perform + * @return {@link IndexResult} containing updated translog location, version and + * document specific failures + * + * Note: engine level failures (i.e. persistent engine failures) are thrown + */ + public abstract IndexResult index(Index index) throws IOException; + + /** + * Perform document delete operation on the engine + * @param delete operation to perform + * @return {@link DeleteResult} containing updated translog location, version and + * document specific failures + * + * Note: engine level failures (i.e. persistent engine failures) are thrown + */ + public abstract DeleteResult delete(Delete delete) throws IOException; + + /** + * Base class for index and delete operation results + * Holds result meta data (e.g. 
translog location, updated version) + * for an executed write {@link Operation} + **/ + public abstract static class Result { + private final Operation.TYPE operationType; + private final long version; + private final Exception failure; + private final SetOnce freeze = new SetOnce<>(); + private Translog.Location translogLocation; + private long took; + + protected Result(Operation.TYPE operationType, Exception failure, long version) { + this.operationType = operationType; + this.failure = failure; + this.version = version; + } + + protected Result(Operation.TYPE operationType, long version) { + this(operationType, null, version); + } + + /** whether the operation had failure */ + public boolean hasFailure() { + return failure != null; + } + + /** get the updated document version */ + public long getVersion() { + return version; + } + + /** get the translog location after executing the operation */ + public Translog.Location getTranslogLocation() { + return translogLocation; + } + + /** get document failure while executing the operation {@code null} in case of no failure */ + public Exception getFailure() { + return failure; + } + + /** get total time in nanoseconds */ + public long getTook() { + return took; + } + + public Operation.TYPE getOperationType() { + return operationType; + } + + void setTranslogLocation(Translog.Location translogLocation) { + if (freeze.get() == null) { + assert failure == null : "failure has to be null to set translog location"; + this.translogLocation = translogLocation; + } else { + throw new IllegalStateException("result is already frozen"); + } + } + + void setTook(long took) { + if (freeze.get() == null) { + this.took = took; + } else { + throw new IllegalStateException("result is already frozen"); + } + } + + void freeze() { + freeze.set(true); + } + } + + public static class IndexResult extends Result { + private final boolean created; + + public IndexResult(long version, boolean created) { + super(Operation.TYPE.INDEX, version); + this.created = created; + } + + public IndexResult(Exception failure, long version) { + super(Operation.TYPE.INDEX, failure, version); + this.created = false; + } + + public boolean isCreated() { + return created; + } + } + + public static class DeleteResult extends Result { + private final boolean found; - public abstract void delete(Delete delete) throws EngineException; + public DeleteResult(long version, boolean found) { + super(Operation.TYPE.DELETE, version); + this.found = found; + } + + public DeleteResult(Exception failure, long version, boolean found) { + super(Operation.TYPE.DELETE, failure, version); + this.found = found; + } + + public boolean isFound() { + return found; + } + } /** * Attempts to do a special commit where the given syncID is put into the commit data. 
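With the `Engine.Result` hierarchy introduced above, `index` and `delete` now return a result object carrying the assigned version, translog location, and any document-level failure instead of mutating the `Operation`. A sketch of how a caller might consume an `IndexResult` under the new API follows; the wrapper class and method are hypothetical and not from the patch:

```java
// Illustrative sketch only: consuming the result object rather than reading mutated
// state back off the Operation, as the old void-returning API required.
import java.io.IOException;

import org.elasticsearch.index.engine.Engine;
import org.elasticsearch.index.translog.Translog;

class EngineResultSketch {
    static void indexAndReport(Engine engine, Engine.Index operation) throws IOException {
        Engine.IndexResult result = engine.index(operation);
        if (result.hasFailure()) {
            // a document-level failure: the engine itself is still usable
            System.err.println("indexing failed: " + result.getFailure());
        } else {
            long version = result.getVersion();                        // version assigned by the engine
            Translog.Location location = result.getTranslogLocation(); // where the op was written
            System.out.println("indexed version=" + version
                + " created=" + result.isCreated() + " translog=" + location);
        }
    }
}
```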
The attempt @@ -298,9 +424,9 @@ public enum SyncedFlushResult { protected final GetResult getFromSearcher(Get get, Function searcherFactory) throws EngineException { final Searcher searcher = searcherFactory.apply("get"); - final Versions.DocIdAndVersion docIdAndVersion; + final DocIdAndVersion docIdAndVersion; try { - docIdAndVersion = Versions.loadDocIdAndVersion(searcher.reader(), get.uid()); + docIdAndVersion = VersionsResolver.loadDocIdAndVersion(searcher.reader(), get.uid()); } catch (Exception e) { Releasables.closeWhileHandlingException(searcher); //TODO: A better exception goes here @@ -310,8 +436,7 @@ protected final GetResult getFromSearcher(Get get, Function se if (docIdAndVersion != null) { if (get.versionType().isVersionConflictForReads(docIdAndVersion.version, get.version())) { Releasables.close(searcher); - Uid uid = Uid.createUid(get.uid().text()); - throw new VersionConflictEngineException(shardId, uid.type(), uid.id(), + throw new VersionConflictEngineException(shardId, get.type(), get.id(), get.versionType().explainConflictForReads(docIdAndVersion.version, get.version())); } } @@ -326,10 +451,6 @@ protected final GetResult getFromSearcher(Get get, Function se } } - public final GetResult get(Get get) throws EngineException { - return get(get, this::acquireSearcher); - } - public abstract GetResult get(Get get, Function searcherFactory) throws EngineException; /** @@ -359,7 +480,7 @@ public final Searcher acquireSearcher(String source) throws EngineException { manager.release(searcher); } } - } catch (EngineClosedException ex) { + } catch (AlreadyClosedException ex) { throw ex; } catch (Exception ex) { ensureOpen(); // throw EngineCloseException here if we are already closed @@ -377,7 +498,7 @@ public final Searcher acquireSearcher(String source) throws EngineException { protected void ensureOpen() { if (isClosed.get()) { - throw new EngineClosedException(shardId, failedEngine); + throw new AlreadyClosedException(shardId + " engine is closed", failedEngine.get()); } } @@ -412,7 +533,7 @@ protected static SegmentInfos readLastCommittedSegmentInfos(final SearcherManage */ public final SegmentsStats segmentsStats(boolean includeSegmentFileSizes) { ensureOpen(); - try (final Searcher searcher = acquireSearcher("segments_stats")) { + try (Searcher searcher = acquireSearcher("segments_stats")) { SegmentsStats stats = new SegmentsStats(); for (LeafReaderContext reader : searcher.reader().leaves()) { final SegmentReader segmentReader = segmentReader(reader.reader()); @@ -478,7 +599,9 @@ private ImmutableOpenMap getSegmentFileSizes(SegmentReader segment try { length = directory.fileLength(file); } catch (NoSuchFileException | FileNotFoundException e) { - logger.warn("Tried to query fileLength but file is gone [{}] [{}]", e, directory, file); + final Directory finalDirectory = directory; + logger.warn((Supplier) + () -> new ParameterizedMessage("Tried to query fileLength but file is gone [{}] [{}]", finalDirectory, file), e); } catch (IOException e) { final Directory finalDirectory = directory; logger.warn( @@ -670,17 +793,19 @@ public void failEngine(String reason, @Nullable Exception failure) { if (failEngineLock.tryLock()) { store.incRef(); try { + if (failedEngine.get() != null) { + logger.warn((Supplier) () -> new ParameterizedMessage("tried to fail engine but engine is already failed. ignoring. 
[{}]", reason), failure); + return; + } + // this must happen before we close IW or Translog such that we can check this state to opt out of failing the engine + // again on any caught AlreadyClosedException + failedEngine.set((failure != null) ? failure : new IllegalStateException(reason)); try { // we just go and close this engine - no way to recover - closeNoLock("engine failed on: [" + reason + "]"); + closeNoLock("engine failed on: [" + reason + "]", closedLatch); } finally { - if (failedEngine != null) { - logger.debug((Supplier) () -> new ParameterizedMessage("tried to fail engine but engine is already failed. ignoring. [{}]", reason), failure); - return; - } logger.warn((Supplier) () -> new ParameterizedMessage("failed engine [{}]", reason), failure); // we must set a failure exception, generate one if not supplied - failedEngine = (failure != null) ? failure : new IllegalStateException(reason); // we first mark the store as corrupted before we notify any listeners // this must happen first otherwise we might try to reallocate so quickly // on the same node that we don't see the corrupted marker file when @@ -762,13 +887,27 @@ public void close() { } public abstract static class Operation { + + /** type of operation (index, delete), subclasses use static types */ + public enum TYPE { + INDEX, DELETE; + + private final String lowercase; + + TYPE() { + this.lowercase = this.toString().toLowerCase(Locale.ROOT); + } + + public String getLowercase() { + return lowercase; + } + } + private final Term uid; - private long version; + private final long version; private final VersionType versionType; private final Origin origin; - private Translog.Location location; private final long startTime; - private long endTime; public Operation(Term uid, long version, VersionType versionType, Origin origin, long startTime) { this.uid = uid; @@ -801,27 +940,7 @@ public long version() { return this.version; } - public void updateVersion(long version) { - this.version = version; - } - - public void setTranslogLocation(Translog.Location location) { - this.location = location; - } - - public Translog.Location getTranslogLocation() { - return this.location; - } - - public int sizeInBytes() { - if (location != null) { - return location.size; - } else { - return estimatedSizeInBytes(); - } - } - - protected abstract int estimatedSizeInBytes(); + public abstract int estimatedSizeInBytes(); public VersionType versionType() { return this.versionType; @@ -834,20 +953,11 @@ public long startTime() { return this.startTime; } - public void endTime(long endTime) { - this.endTime = endTime; - } - - /** - * Returns operation end time in nanoseconds. 
- */ - public long endTime() { - return this.endTime; - } - - abstract String type(); + public abstract String type(); abstract String id(); + + abstract TYPE operationType(); } public static class Index extends Operation { @@ -855,7 +965,6 @@ public static class Index extends Operation { private final ParsedDocument doc; private final long autoGeneratedIdTimestamp; private final boolean isRetry; - private boolean created; public Index(Term uid, ParsedDocument doc, long version, VersionType versionType, Origin origin, long startTime, long autoGeneratedIdTimestamp, boolean isRetry) { @@ -887,6 +996,11 @@ public String id() { return this.doc.id(); } + @Override + TYPE operationType() { + return TYPE.INDEX; + } + public String routing() { return this.doc.routing(); } @@ -899,12 +1013,6 @@ public long ttl() { return this.doc.ttl(); } - @Override - public void updateVersion(long version) { - super.updateVersion(version); - this.doc.version().setLongValue(version); - } - public String parent() { return this.doc.parent(); } @@ -917,16 +1025,8 @@ public BytesReference source() { return this.doc.source(); } - public boolean isCreated() { - return created; - } - - public void setCreated(boolean created) { - this.created = created; - } - @Override - protected int estimatedSizeInBytes() { + public int estimatedSizeInBytes() { return (id().length() + type().length()) * 2 + source().length() + 12; } @@ -953,21 +1053,19 @@ public static class Delete extends Operation { private final String type; private final String id; - private boolean found; - public Delete(String type, String id, Term uid, long version, VersionType versionType, Origin origin, long startTime, boolean found) { + public Delete(String type, String id, Term uid, long version, VersionType versionType, Origin origin, long startTime) { super(uid, version, versionType, origin, startTime); this.type = type; this.id = id; - this.found = found; } public Delete(String type, String id, Term uid) { - this(type, id, uid, Versions.MATCH_ANY, VersionType.INTERNAL, Origin.PRIMARY, System.nanoTime(), false); + this(type, id, uid, Versions.MATCH_ANY, VersionType.INTERNAL, Origin.PRIMARY, System.nanoTime()); } public Delete(Delete template, VersionType versionType) { - this(template.type(), template.id(), template.uid(), template.version(), versionType, template.origin(), template.startTime(), template.found()); + this(template.type(), template.id(), template.uid(), template.version(), versionType, template.origin(), template.startTime()); } @Override @@ -980,30 +1078,28 @@ public String id() { return this.id; } - public void updateVersion(long version, boolean found) { - updateVersion(version); - this.found = found; - } - - public boolean found() { - return this.found; + @Override + TYPE operationType() { + return TYPE.DELETE; } @Override - protected int estimatedSizeInBytes() { + public int estimatedSizeInBytes() { return (uid().field().length() + uid().text().length()) * 2 + 20; } - } public static class Get { private final boolean realtime; private final Term uid; + private final String type, id; private long version = Versions.MATCH_ANY; private VersionType versionType = VersionType.INTERNAL; - public Get(boolean realtime, Term uid) { + public Get(boolean realtime, String type, String id, Term uid) { this.realtime = realtime; + this.type = type; + this.id = id; this.uid = uid; } @@ -1011,6 +1107,14 @@ public boolean realtime() { return this.realtime; } + public String type() { + return type; + } + + public String id() { + return id; + } + public Term 
uid() { return uid; } @@ -1037,12 +1141,12 @@ public Get versionType(VersionType versionType) { public static class GetResult implements Releasable { private final boolean exists; private final long version; - private final Versions.DocIdAndVersion docIdAndVersion; + private final DocIdAndVersion docIdAndVersion; private final Searcher searcher; public static final GetResult NOT_EXISTS = new GetResult(false, Versions.NOT_FOUND, null, null); - private GetResult(boolean exists, long version, Versions.DocIdAndVersion docIdAndVersion, Searcher searcher) { + private GetResult(boolean exists, long version, DocIdAndVersion docIdAndVersion, Searcher searcher) { this.exists = exists; this.version = version; this.docIdAndVersion = docIdAndVersion; @@ -1052,7 +1156,7 @@ private GetResult(boolean exists, long version, Versions.DocIdAndVersion docIdAn /** * Build a non-realtime get result from the searcher. */ - public GetResult(Searcher searcher, Versions.DocIdAndVersion docIdAndVersion) { + public GetResult(Searcher searcher, DocIdAndVersion docIdAndVersion) { this(true, docIdAndVersion.version, docIdAndVersion, searcher); } @@ -1068,7 +1172,7 @@ public Searcher searcher() { return this.searcher; } - public Versions.DocIdAndVersion docIdAndVersion() { + public DocIdAndVersion docIdAndVersion() { return docIdAndVersion; } @@ -1086,8 +1190,10 @@ public void release() { /** * Method to close the engine while the write lock is held. + * Must decrement the supplied when closing work is done and resources are + * freed. */ - protected abstract void closeNoLock(String reason); + protected abstract void closeNoLock(String reason, CountDownLatch closedLatch); /** * Flush the engine (committing segments to disk and truncating the @@ -1102,9 +1208,7 @@ public void flushAndClose() throws IOException { logger.debug("flushing shard on close - this might take some time to sync files to disk"); try { flush(); // TODO we might force a flush in the future since we have the write lock already even though recoveries are running. - } catch (FlushNotAllowedEngineException ex) { - logger.debug("flush not allowed during flushAndClose - skipping"); - } catch (EngineClosedException ex) { + } catch (AlreadyClosedException ex) { logger.debug("engine already closed - skipping flushAndClose"); } } finally { @@ -1112,6 +1216,7 @@ public void flushAndClose() throws IOException { } } } + awaitPendingClose(); } @Override @@ -1120,9 +1225,18 @@ public void close() throws IOException { logger.debug("close now acquiring writeLock"); try (ReleasableLock lock = writeLock.acquire()) { logger.debug("close acquired writeLock"); - closeNoLock("api"); + closeNoLock("api", closedLatch); } } + awaitPendingClose(); + } + + private void awaitPendingClose() { + try { + closedLatch.await(); + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + } } public static class CommitId implements Writeable { @@ -1194,16 +1308,6 @@ public long getLastWriteNanos() { return this.lastWriteNanos; } - /** - * Returns the engines current document statistics - */ - public DocsStats getDocStats() { - try (Engine.Searcher searcher = acquireSearcher("doc_stats")) { - IndexReader reader = searcher.reader(); - return new DocsStats(reader.numDocs(), reader.numDeletedDocs()); - } - } - /** * Called for each new opened engine searcher to warm new segments * @see EngineConfig#getWarmer() @@ -1230,4 +1334,11 @@ public interface Warmer { * This operation will close the engine if the recovery fails. 
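To illustrate the new `closeNoLock(String, CountDownLatch)` contract above, here is a minimal, hypothetical sketch (class and field names are illustrative, not the engine's) of the latch hand-off: the closing code counts the latch down once resources are freed, and `close()` blocks on it so callers only return after cleanup has actually finished.

```java
import java.util.concurrent.CountDownLatch;

// Hypothetical skeleton showing the close/latch hand-off; not the engine's actual code.
abstract class LatchedCloseSketch {
    private final CountDownLatch closedLatch = new CountDownLatch(1);

    /** Implementations must count the latch down once closing work is done and resources are freed. */
    protected abstract void closeNoLock(String reason, CountDownLatch closedLatch);

    public void close() {
        closeNoLock("api", closedLatch); // may finish asynchronously
        awaitPendingClose();             // only return once cleanup has completed
    }

    private void awaitPendingClose() {
        try {
            closedLatch.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve the interrupt status
        }
    }
}
```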
*/ public abstract Engine recoverFromTranslog() throws IOException; + + /** + * Returns true iff this engine is currently recovering from translog. + */ + public boolean isRecovering() { + return false; + } } diff --git a/core/src/main/java/org/elasticsearch/index/engine/EngineClosedException.java b/core/src/main/java/org/elasticsearch/index/engine/EngineClosedException.java index 3ecae762a2adb..cc3ffcac6b0b9 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/EngineClosedException.java +++ b/core/src/main/java/org/elasticsearch/index/engine/EngineClosedException.java @@ -33,6 +33,7 @@ * * */ +@Deprecated public class EngineClosedException extends IndexShardClosedException { public EngineClosedException(ShardId shardId) { diff --git a/core/src/main/java/org/elasticsearch/index/engine/EngineConfig.java b/core/src/main/java/org/elasticsearch/index/engine/EngineConfig.java index 9f9d2186a835c..02967514ae55e 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/EngineConfig.java +++ b/core/src/main/java/org/elasticsearch/index/engine/EngineConfig.java @@ -24,9 +24,9 @@ import org.apache.lucene.index.SnapshotDeletionPolicy; import org.apache.lucene.search.QueryCache; import org.apache.lucene.search.QueryCachingPolicy; +import org.apache.lucene.search.ReferenceManager; import org.apache.lucene.search.similarities.Similarity; import org.elasticsearch.action.index.IndexRequest; -import org.elasticsearch.common.Nullable; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.unit.ByteSizeUnit; @@ -34,7 +34,6 @@ import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.codec.CodecService; -import org.elasticsearch.index.shard.RefreshListeners; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.shard.TranslogRecoveryPerformer; import org.elasticsearch.index.store.Store; @@ -42,6 +41,8 @@ import org.elasticsearch.indices.IndexingMemoryController; import org.elasticsearch.threadpool.ThreadPool; +import java.util.List; + /* * Holds all the configuration that is used to create an {@link Engine}. * Once {@link Engine} has been created with this object, changes to this @@ -67,8 +68,7 @@ public final class EngineConfig { private final QueryCache queryCache; private final QueryCachingPolicy queryCachingPolicy; private final long maxUnsafeAutoIdTimestamp; - @Nullable - private final RefreshListeners refreshListeners; + private final List refreshListeners; /** * Index setting to change the low level lucene codec used for writing new segments. 
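Since `EngineConfig` now carries a plain `List` of Lucene `ReferenceManager.RefreshListener` instances rather than a single `RefreshListeners` object, a sketch of how such listeners are typically attached to a `SearcherManager` may help; the listener body and writer here are placeholders, not what the patch installs.

```java
import java.io.IOException;
import java.util.Collections;
import java.util.List;

import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.ReferenceManager;
import org.apache.lucene.search.SearcherFactory;
import org.apache.lucene.search.SearcherManager;

class RefreshListenerSketch {
    static SearcherManager attach(IndexWriter writer) throws IOException {
        // Placeholder listener; a real one might record translog locations or fire callbacks.
        List<ReferenceManager.RefreshListener> listeners = Collections.singletonList(
            new ReferenceManager.RefreshListener() {
                @Override
                public void beforeRefresh() { /* e.g. capture state before the refresh */ }

                @Override
                public void afterRefresh(boolean didRefresh) { /* e.g. notify waiters */ }
            });

        SearcherManager manager = new SearcherManager(writer, new SearcherFactory());
        for (ReferenceManager.RefreshListener listener : listeners) {
            manager.addListener(listener); // each listener is registered individually
        }
        return manager;
    }
}
```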
@@ -112,7 +112,7 @@ public EngineConfig(OpenMode openMode, ShardId shardId, ThreadPool threadPool, MergePolicy mergePolicy, Analyzer analyzer, Similarity similarity, CodecService codecService, Engine.EventListener eventListener, TranslogRecoveryPerformer translogRecoveryPerformer, QueryCache queryCache, QueryCachingPolicy queryCachingPolicy, - TranslogConfig translogConfig, TimeValue flushMergesAfter, RefreshListeners refreshListeners, + TranslogConfig translogConfig, TimeValue flushMergesAfter, List refreshListeners, long maxUnsafeAutoIdTimestamp) { if (openMode == null) { throw new IllegalArgumentException("openMode must not be null"); @@ -188,7 +188,7 @@ public Codec getCodec() { /** * Returns a thread-pool mainly used to get estimated time stamps from - * {@link org.elasticsearch.threadpool.ThreadPool#estimatedTimeInMillis()} and to schedule + * {@link org.elasticsearch.threadpool.ThreadPool#relativeTimeInMillis()} and to schedule * async force merge calls on the {@link org.elasticsearch.threadpool.ThreadPool.Names#FORCE_MERGE} thread-pool */ public ThreadPool getThreadPool() { @@ -322,9 +322,9 @@ public enum OpenMode { } /** - * {@linkplain RefreshListeners} instance to configure. + * The refresh listeners to add to Lucene */ - public RefreshListeners getRefreshListeners() { + public List getRefreshListeners() { return refreshListeners; } diff --git a/core/src/main/java/org/elasticsearch/index/engine/FlushNotAllowedEngineException.java b/core/src/main/java/org/elasticsearch/index/engine/FlushNotAllowedEngineException.java deleted file mode 100644 index d9371707e3bd1..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/engine/FlushNotAllowedEngineException.java +++ /dev/null @@ -1,45 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.index.engine; - -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.index.shard.ShardId; -import org.elasticsearch.rest.RestStatus; - -import java.io.IOException; - -/** - * - */ -public class FlushNotAllowedEngineException extends EngineException { - - public FlushNotAllowedEngineException(ShardId shardId, String msg) { - super(shardId, msg); - } - - public FlushNotAllowedEngineException(StreamInput in) throws IOException{ - super(in); - } - - @Override - public RestStatus status() { - return RestStatus.SERVICE_UNAVAILABLE; - } -} diff --git a/core/src/main/java/org/elasticsearch/index/engine/IndexFailedEngineException.java b/core/src/main/java/org/elasticsearch/index/engine/IndexFailedEngineException.java index 4728b7f899a5a..2eab6bbd17d27 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/IndexFailedEngineException.java +++ b/core/src/main/java/org/elasticsearch/index/engine/IndexFailedEngineException.java @@ -27,22 +27,17 @@ import java.util.Objects; /** - * + * Deprecated as not used in 6.0, should be removed in 7.0 + * Still exists for bwc in serializing/deserializing from + * 5.x nodes */ +@Deprecated public class IndexFailedEngineException extends EngineException { private final String type; private final String id; - public IndexFailedEngineException(ShardId shardId, String type, String id, Throwable cause) { - super(shardId, "Index failed for [" + type + "#" + id + "]", cause); - Objects.requireNonNull(type, "type must not be null"); - Objects.requireNonNull(id, "id must not be null"); - this.type = type; - this.id = id; - } - public IndexFailedEngineException(StreamInput in) throws IOException{ super(in); type = in.readString(); @@ -63,4 +58,4 @@ public String type() { public String id() { return this.id; } -} \ No newline at end of file +} diff --git a/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java b/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java index 774465fb71a4c..8523c7b872ad6 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java +++ b/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java @@ -33,8 +33,10 @@ import org.apache.lucene.index.SegmentInfos; import org.apache.lucene.index.Term; import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.ReferenceManager; import org.apache.lucene.search.SearcherFactory; import org.apache.lucene.search.SearcherManager; +import org.apache.lucene.search.TermQuery; import org.apache.lucene.store.AlreadyClosedException; import org.apache.lucene.store.Directory; import org.apache.lucene.store.LockObtainFailedException; @@ -50,6 +52,7 @@ import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.lucene.index.ElasticsearchDirectoryReader; import org.elasticsearch.common.lucene.uid.Versions; +import org.elasticsearch.common.lucene.uid.VersionsResolver; import org.elasticsearch.common.metrics.CounterMetric; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.util.concurrent.AbstractRunnable; @@ -57,10 +60,11 @@ import org.elasticsearch.common.util.concurrent.ReleasableLock; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.VersionType; -import org.elasticsearch.index.mapper.Uid; +import org.elasticsearch.index.mapper.IdFieldMapper; +import org.elasticsearch.index.mapper.ParseContext; +import org.elasticsearch.index.mapper.UidFieldMapper; import 
org.elasticsearch.index.merge.MergeStats; import org.elasticsearch.index.merge.OnGoingMerge; -import org.elasticsearch.index.shard.DocsStats; import org.elasticsearch.index.shard.ElasticsearchMergePolicy; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.shard.TranslogRecoveryPerformer; @@ -74,7 +78,10 @@ import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.Objects; +import java.util.Optional; import java.util.Set; +import java.util.concurrent.CountDownLatch; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; @@ -82,9 +89,6 @@ import java.util.concurrent.locks.ReentrantLock; import java.util.function.Function; -/** - * - */ public class InternalEngine extends Engine { /** * When we last pruned expired tombstones from versionMap.deletes: @@ -114,12 +118,15 @@ public class InternalEngine extends Engine { private final IndexThrottle throttle; + private final String uidField; + // How many callers are currently requesting index throttling. Currently there are only two situations where we do this: when merges // are falling behind and when writing indexing buffer to disk is too slow. When this is 0, there is no throttling, else we throttling // incoming indexing ops to a single thread: private final AtomicInteger throttleRequestCount = new AtomicInteger(); private final EngineConfig.OpenMode openMode; - private final AtomicBoolean allowCommits = new AtomicBoolean(true); + private final AtomicBoolean pendingTranslogRecovery = new AtomicBoolean(false); + private static final String MAX_UNSAFE_AUTO_ID_TIMESTAMP_COMMIT_ID = "max_unsafe_auto_id_timestamp"; private final AtomicLong maxUnsafeAutoIdTimestamp = new AtomicLong(-1); private final CounterMetric numVersionLookups = new CounterMetric(); private final CounterMetric numIndexVersionsLookups = new CounterMetric(); @@ -127,12 +134,13 @@ public class InternalEngine extends Engine { public InternalEngine(EngineConfig engineConfig) throws EngineException { super(engineConfig); openMode = engineConfig.getOpenMode(); - if (engineConfig.getIndexSettings().getIndexVersionCreated().before(Version.V_5_0_0_alpha6)) { + if (engineConfig.getIndexSettings().getIndexVersionCreated().before(Version.V_5_0_0_beta1)) { // no optimization for pre 5.0.0.alpha6 since translog might not have all information needed maxUnsafeAutoIdTimestamp.set(Long.MAX_VALUE); } else { maxUnsafeAutoIdTimestamp.set(engineConfig.getMaxUnsafeAutoIdTimestamp()); } + this.uidField = engineConfig.getIndexSettings().isSingleType() ? 
IdFieldMapper.NAME : UidFieldMapper.NAME; this.versionMap = new LiveVersionMap(); store.incRef(); IndexWriter writer = null; @@ -141,12 +149,13 @@ public InternalEngine(EngineConfig engineConfig) throws EngineException { EngineMergeScheduler scheduler = null; boolean success = false; try { - this.lastDeleteVersionPruneTimeMSec = engineConfig.getThreadPool().estimatedTimeInMillis(); + this.lastDeleteVersionPruneTimeMSec = engineConfig.getThreadPool().relativeTimeInMillis(); mergeScheduler = scheduler = new EngineMergeScheduler(engineConfig.getShardId(), engineConfig.getIndexSettings()); throttle = new IndexThrottle(); this.searcherFactory = new SearchFactory(logger, isClosed, engineConfig); try { writer = createWriter(openMode == EngineConfig.OpenMode.CREATE_INDEX_AND_TRANSLOG); + updateMaxUnsafeAutoIdTimestampFromWriter(writer); indexWriter = writer; translog = openTranslog(engineConfig, writer); assert translog.getGeneration() != null; @@ -166,11 +175,11 @@ public InternalEngine(EngineConfig engineConfig) throws EngineException { manager = createSearcherManager(); this.searcherManager = manager; this.versionMap.setManager(searcherManager); + assert pendingTranslogRecovery.get() == false : "translog recovery can't be pending before we set it"; // don't allow commits until we are done with recovering - allowCommits.compareAndSet(true, openMode != EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG); - if (engineConfig.getRefreshListeners() != null) { - searcherManager.addListener(engineConfig.getRefreshListeners()); - engineConfig.getRefreshListeners().setTranslog(translog); + pendingTranslogRecovery.set(openMode == EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG); + for (ReferenceManager.RefreshListener listener: engineConfig.getRefreshListeners()) { + searcherManager.addListener(listener); } success = true; } finally { @@ -186,6 +195,17 @@ public InternalEngine(EngineConfig engineConfig) throws EngineException { logger.trace("created new InternalEngine"); } + private void updateMaxUnsafeAutoIdTimestampFromWriter(IndexWriter writer) { + long commitMaxUnsafeAutoIdTimestamp = Long.MIN_VALUE; + for (Map.Entry entry : writer.getLiveCommitData()) { + if (entry.getKey().equals(MAX_UNSAFE_AUTO_ID_TIMESTAMP_COMMIT_ID)) { + commitMaxUnsafeAutoIdTimestamp = Long.parseLong(entry.getValue()); + break; + } + } + maxUnsafeAutoIdTimestamp.set(Math.max(maxUnsafeAutoIdTimestamp.get(), commitMaxUnsafeAutoIdTimestamp)); + } + @Override public InternalEngine recoverFromTranslog() throws IOException { flushLock.lock(); @@ -194,14 +214,14 @@ public InternalEngine recoverFromTranslog() throws IOException { if (openMode != EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG) { throw new IllegalStateException("Can't recover from translog with open mode: " + openMode); } - if (allowCommits.get()) { + if (pendingTranslogRecovery.get() == false) { throw new IllegalStateException("Engine has already been recovered"); } try { recoverFromTranslog(engineConfig.getTranslogRecoveryPerformer()); } catch (Exception e) { try { - allowCommits.set(false); // just play safe and never allow commits on this + pendingTranslogRecovery.set(true); // just play safe and never allow commits on this see #ensureCanFlush failEngine("failed to recover from translog", e); } catch (Exception inner) { e.addSuppressed(inner); @@ -225,8 +245,8 @@ private void recoverFromTranslog(TranslogRecoveryPerformer handler) throws IOExc } // flush if we recovered something or if we have references to older translogs // note: if opsRecovered == 0 and we have older 
translogs it means they are corrupted or 0 length. - assert allowCommits.get() == false : "commits are allowed but shouldn't"; - allowCommits.set(true); // we are good - now we can commit + assert pendingTranslogRecovery.get() : "translogRecovery is not pending but should be"; + pendingTranslogRecovery.set(false); // we are good - now we can commit if (opsRecovered > 0) { logger.trace("flushing post recovery from translog. ops recovered [{}]. committed translog id [{}]. current id [{}]", opsRecovered, translogGeneration == null ? null : translogGeneration.translogFileGeneration, translog.currentFileGeneration()); @@ -329,18 +349,18 @@ private SearcherManager createSearcherManager() throws EngineException { @Override public GetResult get(Get get, Function searcherFactory) throws EngineException { + assert Objects.equals(get.uid().field(), uidField) : get.uid().field(); try (ReleasableLock lock = readLock.acquire()) { ensureOpen(); if (get.realtime()) { VersionValue versionValue = versionMap.getUnderLock(get.uid()); if (versionValue != null) { - if (versionValue.delete()) { + if (versionValue.isDelete()) { return GetResult.NOT_EXISTS; } - if (get.versionType().isVersionConflictForReads(versionValue.version(), get.version())) { - Uid uid = Uid.createUid(get.uid().text()); - throw new VersionConflictEngineException(shardId, uid.type(), uid.id(), - get.versionType().explainConflictForReads(versionValue.version(), get.version())); + if (get.versionType().isVersionConflictForReads(versionValue.getVersion(), get.version())) { + throw new VersionConflictEngineException(shardId, get.type(), get.id(), + get.versionType().explainConflictForReads(versionValue.getVersion(), get.version())); } refresh("realtime_get"); } @@ -351,73 +371,46 @@ public GetResult get(Get get, Function searcherFactory) throws } } - private boolean checkVersionConflict( - final Operation op, - final long currentVersion, - final long expectedVersion, - final boolean deleted) { - if (op.versionType().isVersionConflictForWrites(currentVersion, expectedVersion, deleted)) { - if (op.origin().isRecovery()) { - // version conflict, but okay - return true; - } else { - // fatal version conflict - throw new VersionConflictEngineException(shardId, op.type(), op.id(), - op.versionType().explainConflictForWrites(currentVersion, expectedVersion, deleted)); - } - } - return false; - } - - private long checkDeletedAndGCed(VersionValue versionValue) { - long currentVersion; - if (engineConfig.isEnableGcDeletes() && versionValue.delete() && (engineConfig.getThreadPool().estimatedTimeInMillis() - versionValue.time()) > getGcDeletesInMillis()) { - currentVersion = Versions.NOT_FOUND; // deleted, and GC - } else { - currentVersion = versionValue.version(); - } - return currentVersion; - } - - private static VersionValueSupplier NEW_VERSION_VALUE = (u, t) -> new VersionValue(u); - - @FunctionalInterface - private interface VersionValueSupplier { - VersionValue apply(long updatedVersion, long time); + /** + * the status of the current doc version in lucene, compared to the version in an incoming + * operation + */ + enum OpVsLuceneDocStatus { + /** the op is more recent than the one that last modified the doc found in lucene */ + OP_NEWER, + /** the op is older or the same as the one that last modified the doc found in lucene */ + OP_STALE_OR_EQUAL, + /** no doc was found in lucene */ + LUCENE_DOC_NOT_FOUND } - private void maybeAddToTranslog( - final T op, - final long updatedVersion, - final Function toTranslogOp, - final VersionValueSupplier 
toVersionValue) throws IOException { - if (op.origin() != Operation.Origin.LOCAL_TRANSLOG_RECOVERY) { - final Translog.Location translogLocation = translog.add(toTranslogOp.apply(op)); - op.setTranslogLocation(translogLocation); + /** resolves the current version of the document, returning null if not found */ + private VersionValue resolveDocVersion(final Operation op) throws IOException { + assert incrementVersionLookup(); // used for asserting in tests + VersionValue versionValue = versionMap.getUnderLock(op.uid()); + if (versionValue == null) { + assert incrementIndexVersionLookup(); // used for asserting in tests + final long currentVersion = loadCurrentVersionFromIndex(op.uid()); + if (currentVersion != Versions.NOT_FOUND) { + versionValue = new VersionValue(currentVersion); + } + } else if (engineConfig.isEnableGcDeletes() && versionValue.isDelete() && + (engineConfig.getThreadPool().relativeTimeInMillis() - versionValue.getTime()) > + getGcDeletesInMillis()) { + versionValue = null; } - versionMap.putUnderLock(op.uid().bytes(), toVersionValue.apply(updatedVersion, engineConfig.getThreadPool().estimatedTimeInMillis())); - + return versionValue; } - @Override - public void index(Index index) { - try (ReleasableLock lock = readLock.acquire()) { - ensureOpen(); - if (index.origin().isRecovery()) { - // Don't throttle recovery operations - innerIndex(index); - } else { - try (Releasable r = throttle.acquireThrottle()) { - innerIndex(index); - } - } - } catch (IllegalStateException | IOException e) { - try { - maybeFailEngine("index", e); - } catch (Exception inner) { - e.addSuppressed(inner); - } - throw new IndexFailedEngineException(shardId, index.type(), index.id(), e); + private OpVsLuceneDocStatus compareOpToLuceneDocBasedOnVersions(final Operation op) + throws IOException { + assert op.version() >= 0 : "versions should be non-negative. got " + op.version(); + final VersionValue versionValue = resolveDocVersion(op); + if (versionValue == null) { + return OpVsLuceneDocStatus.LUCENE_DOC_NOT_FOUND; + } else { + return op.versionType().isVersionConflictForWrites(versionValue.getVersion(), op.version(), versionValue.isDelete()) ? + OpVsLuceneDocStatus.OP_STALE_OR_EQUAL : OpVsLuceneDocStatus.OP_NEWER; } } @@ -433,11 +426,11 @@ private boolean canOptimizeAddDocument(Index index) { case PEER_RECOVERY: case REPLICA: assert index.version() == 1 && index.versionType() == VersionType.EXTERNAL - : "version: " + index.version() + " type: " + index.versionType(); + : "version: " + index.version() + " type: " + index.versionType(); return true; case LOCAL_TRANSLOG_RECOVERY: assert index.isRetry(); - return false; // even if retry is set we never optimize local recovery + return true; // allow to optimize in order to update the max safe time stamp default: throw new IllegalArgumentException("unknown origin " + index.origin()); } @@ -445,176 +438,466 @@ private boolean canOptimizeAddDocument(Index index) { return false; } - private void innerIndex(Index index) throws IOException { - try (Releasable ignored = acquireLock(index.uid())) { - lastWriteNanos = index.startTime(); - /* if we have an autoGeneratedID that comes into the engine we can potentially optimize - * and just use addDocument instead of updateDocument and skip the entire version and index lookup across the board. - * Yet, we have to deal with multiple document delivery, for this we use a property of the document that is added - * to detect if it has potentially been added before. 
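The `OpVsLuceneDocStatus` resolution introduced above can be pictured with a small stand-alone sketch; it assumes external-style versioning where a replayed operation must carry a strictly higher version than whatever is currently visible, and the names are illustrative rather than the engine's.

```java
// Hypothetical illustration of the OP_NEWER / OP_STALE_OR_EQUAL / LUCENE_DOC_NOT_FOUND decision.
class OpVsLuceneSketch {
    enum Status { OP_NEWER, OP_STALE_OR_EQUAL, LUCENE_DOC_NOT_FOUND }

    /** currentVersion is null when neither the version map nor Lucene has a doc for the uid. */
    static Status compare(Long currentVersion, long opVersion) {
        if (currentVersion == null) {
            return Status.LUCENE_DOC_NOT_FOUND;
        }
        // Under external-style versioning an op that does not strictly advance the
        // version is considered stale and is dropped on replicas/recovery.
        return opVersion <= currentVersion ? Status.OP_STALE_OR_EQUAL : Status.OP_NEWER;
    }
}
```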
We use the documents timestamp for this since it's something - * that: - * - doesn't change per document - * - is preserved in the transaction log - * - and is assigned before we start to index / replicate - * NOTE: it's not important for this timestamp to be consistent across nodes etc. it's just a number that is in the common - * case increasing and can be used in the failure case when we retry and resent documents to establish a happens before relationship. - * for instance: - * - doc A has autoGeneratedIdTimestamp = 10, isRetry = false - * - doc B has autoGeneratedIdTimestamp = 9, isRetry = false - * - * while both docs are in in flight, we disconnect on one node, reconnect and send doc A again - * - now doc A' has autoGeneratedIdTimestamp = 10, isRetry = true - * - * if A' arrives on the shard first we update maxUnsafeAutoIdTimestamp to 10 and use update document. All subsequent - * documents that arrive (A and B) will also use updateDocument since their timestamps are less than maxUnsafeAutoIdTimestamp. - * While this is not strictly needed for doc B it is just much simpler to implement since it will just de-optimize some doc in the worst case. - * - * if A arrives on the shard first we use addDocument since maxUnsafeAutoIdTimestamp is < 10. A` will then just be skipped or calls - * updateDocument. - */ - long currentVersion; - final boolean deleted; - // if anything is fishy here ie. there is a retry we go and force updateDocument below so we are updating the document in the - // lucene index without checking the version map but we still do the version check - final boolean forceUpdateDocument; - if (canOptimizeAddDocument(index)) { - long deOptimizeTimestamp = maxUnsafeAutoIdTimestamp.get(); - if (index.isRetry()) { - forceUpdateDocument = true; - do { - deOptimizeTimestamp = maxUnsafeAutoIdTimestamp.get(); - if (deOptimizeTimestamp >= index.getAutoGeneratedIdTimestamp()) { - break; - } - } while(maxUnsafeAutoIdTimestamp.compareAndSet(deOptimizeTimestamp, - index.getAutoGeneratedIdTimestamp()) == false); - assert maxUnsafeAutoIdTimestamp.get() >= index.getAutoGeneratedIdTimestamp(); + private boolean assertVersionType(final Engine.Operation operation) { + if (operation.origin() == Operation.Origin.REPLICA || + operation.origin() == Operation.Origin.PEER_RECOVERY || + operation.origin() == Operation.Origin.LOCAL_TRANSLOG_RECOVERY) { + // ensure that replica operation has expected version type for replication + // ensure that versionTypeForReplicationAndRecovery is idempotent + assert operation.versionType() == operation.versionType().versionTypeForReplicationAndRecovery() + : "unexpected version type in request from [" + operation.origin().name() + "] " + + "found [" + operation.versionType().name() + "] " + + "expected [" + operation.versionType().versionTypeForReplicationAndRecovery().name() + "]"; + } + return true; + } + + @Override + public IndexResult index(Index index) throws IOException { + assert Objects.equals(index.uid().field(), uidField) : index.uid().field(); + final boolean doThrottle = index.origin().isRecovery() == false; + try (ReleasableLock releasableLock = readLock.acquire()) { + ensureOpen(); + assert assertVersionType(index); + try (Releasable ignored = acquireLock(index.uid()); + Releasable indexThrottle = doThrottle ? 
() -> {} : throttle.acquireThrottle()) { + lastWriteNanos = index.startTime(); + /* A NOTE ABOUT APPEND ONLY OPTIMIZATIONS: + * if we have an autoGeneratedID that comes into the engine we can potentially optimize + * and just use addDocument instead of updateDocument and skip the entire version and index lookupVersion across the board. + * Yet, we have to deal with multiple document delivery, for this we use a property of the document that is added + * to detect if it has potentially been added before. We use the documents timestamp for this since it's something + * that: + * - doesn't change per document + * - is preserved in the transaction log + * - and is assigned before we start to index / replicate + * NOTE: it's not important for this timestamp to be consistent across nodes etc. it's just a number that is in the common + * case increasing and can be used in the failure case when we retry and resent documents to establish a happens before relationship. + * for instance: + * - doc A has autoGeneratedIdTimestamp = 10, isRetry = false + * - doc B has autoGeneratedIdTimestamp = 9, isRetry = false + * + * while both docs are in in flight, we disconnect on one node, reconnect and send doc A again + * - now doc A' has autoGeneratedIdTimestamp = 10, isRetry = true + * + * if A' arrives on the shard first we update maxUnsafeAutoIdTimestamp to 10 and use update document. All subsequent + * documents that arrive (A and B) will also use updateDocument since their timestamps are less than maxUnsafeAutoIdTimestamp. + * While this is not strictly needed for doc B it is just much simpler to implement since it will just de-optimize some doc in the worst case. + * + * if A arrives on the shard first we use addDocument since maxUnsafeAutoIdTimestamp is < 10. A` will then just be skipped or calls + * updateDocument. 
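The de-optimization rule that the comment above walks through can be condensed into a hypothetical helper: retries raise a shared "max unsafe auto-ID timestamp" watermark with a CAS loop, and any auto-generated-ID document at or below that watermark falls back to `updateDocument` instead of the cheaper `addDocument`. Field and method names here are illustrative.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical helper mirroring the "may have been indexed before" check described above.
class AutoIdWatermarkSketch {
    private final AtomicLong maxUnsafeAutoIdTimestamp = new AtomicLong(-1);

    /** Returns true when the optimized append-only path must be skipped. */
    boolean mayHaveBeenIndexedBefore(long autoGeneratedIdTimestamp, boolean isRetry) {
        if (isRetry) {
            // A retry may race an earlier delivery: raise the watermark so later
            // deliveries with smaller timestamps are de-optimized as well.
            long current = maxUnsafeAutoIdTimestamp.get();
            while (current < autoGeneratedIdTimestamp
                    && maxUnsafeAutoIdTimestamp.compareAndSet(current, autoGeneratedIdTimestamp) == false) {
                current = maxUnsafeAutoIdTimestamp.get();
            }
            return true;
        }
        // First delivery: unsafe only if a retry already pushed the watermark past us.
        return maxUnsafeAutoIdTimestamp.get() >= autoGeneratedIdTimestamp;
    }
}
```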
+ */ + final IndexingStrategy plan; + + if (index.origin() == Operation.Origin.PRIMARY) { + plan = planIndexingAsPrimary(index); } else { - // in this case we force - forceUpdateDocument = deOptimizeTimestamp >= index.getAutoGeneratedIdTimestamp(); + // non-primary mode (i.e., replica or recovery) + plan = planIndexingAsNonPrimary(index); } - currentVersion = Versions.NOT_FOUND; - deleted = true; - } else { - // update the document - forceUpdateDocument = false; // we don't force it - it depends on the version - final VersionValue versionValue = versionMap.getUnderLock(index.uid()); - assert incrementVersionLookup(); - if (versionValue == null) { - currentVersion = loadCurrentVersionFromIndex(index.uid()); - deleted = currentVersion == Versions.NOT_FOUND; + + final IndexResult indexResult; + if (plan.earlyResultOnPreFlightError.isPresent()) { + indexResult = plan.earlyResultOnPreFlightError.get(); + assert indexResult.hasFailure(); + } else if (plan.indexIntoLucene) { + indexResult = indexIntoLucene(index, plan); } else { - currentVersion = checkDeletedAndGCed(versionValue); - deleted = versionValue.delete(); + assert index.origin() != Operation.Origin.PRIMARY; + indexResult = new IndexResult(plan.versionForIndexing, plan.currentNotFoundOrDeleted); } + if (indexResult.hasFailure() == false && + plan.indexIntoLucene && // if we didn't store it in lucene, there is no need to store it in the translog + index.origin() != Operation.Origin.LOCAL_TRANSLOG_RECOVERY) { + Translog.Location location = + translog.add(new Translog.Index(index, indexResult)); + indexResult.setTranslogLocation(location); + } + indexResult.setTook(System.nanoTime() - index.startTime()); + indexResult.freeze(); + return indexResult; + } + } catch (RuntimeException | IOException e) { + try { + maybeFailEngine("index", e); + } catch (Exception inner) { + e.addSuppressed(inner); + } + throw e; + } + } + + private IndexingStrategy planIndexingAsNonPrimary(Index index) throws IOException { + final IndexingStrategy plan; + if (canOptimizeAddDocument(index) && mayHaveBeenIndexedBefore(index) == false) { + // no need to deal with out of order delivery - we never saw this one + assert index.version() == 1L : + "can optimize on replicas but incoming version is [" + index.version() + "]"; + plan = IndexingStrategy.optimizedAppendOnly(); + } else { + // drop out of order operations + assert index.versionType().versionTypeForReplicationAndRecovery() == index.versionType() : + "resolving out of order delivery based on versioning but version type isn't fit for it. got [" + + index.versionType() + "]"; + // unlike the primary, replicas don't really care to about creation status of documents + // this allows to ignore the case where a document was found in the live version maps in + // a delete state and return false for the created flag in favor of code simplicity + final OpVsLuceneDocStatus opVsLucene = compareOpToLuceneDocBasedOnVersions(index); + if (opVsLucene == OpVsLuceneDocStatus.OP_STALE_OR_EQUAL) { + plan = IndexingStrategy.skipAsStale(false, index.version()); + } else { + plan = IndexingStrategy.processNormally( + opVsLucene == OpVsLuceneDocStatus.LUCENE_DOC_NOT_FOUND, index.version() + ); + } + } + return plan; + } + + private IndexingStrategy planIndexingAsPrimary(Index index) throws IOException { + assert index.origin() == Operation.Origin.PRIMARY : + "planing as primary but origin isn't. 
got " + index.origin(); + final IndexingStrategy plan; + // resolve an external operation into an internal one which is safe to replay + if (canOptimizeAddDocument(index)) { + if (mayHaveBeenIndexedBefore(index)) { + plan = IndexingStrategy.overrideExistingAsIfNotThere(1L); + } else { + plan = IndexingStrategy.optimizedAppendOnly(); } - final long expectedVersion = index.version(); - if (checkVersionConflict(index, currentVersion, expectedVersion, deleted)) { - index.setCreated(false); - return; + } else { + // resolves incoming version + final VersionValue versionValue = resolveDocVersion(index); + final long currentVersion; + final boolean currentNotFoundOrDeleted; + if (versionValue == null) { + currentVersion = Versions.NOT_FOUND; + currentNotFoundOrDeleted = true; + } else { + currentVersion = versionValue.getVersion(); + currentNotFoundOrDeleted = versionValue.isDelete(); } - final long updatedVersion = updateVersion(index, currentVersion, expectedVersion); - index.setCreated(deleted); - if (currentVersion == Versions.NOT_FOUND && forceUpdateDocument == false) { - // document does not exists, we can optimize for create - index(index, indexWriter); + if (index.versionType().isVersionConflictForWrites( + currentVersion, index.version(), currentNotFoundOrDeleted)) { + plan = IndexingStrategy.skipDueToVersionConflict( + new VersionConflictEngineException(shardId, index, currentVersion, + currentNotFoundOrDeleted), + currentNotFoundOrDeleted, currentVersion); } else { - update(index, indexWriter); + plan = IndexingStrategy.processNormally(currentNotFoundOrDeleted, + index.versionType().updateVersion(currentVersion, index.version()) + ); } - maybeAddToTranslog(index, updatedVersion, Translog.Index::new, NEW_VERSION_VALUE); } + return plan; } - private long updateVersion(Engine.Operation op, long currentVersion, long expectedVersion) { - final long updatedVersion = op.versionType().updateVersion(currentVersion, expectedVersion); - op.updateVersion(updatedVersion); - return updatedVersion; + private IndexResult indexIntoLucene(Index index, IndexingStrategy plan) + throws IOException { + assert plan.versionForIndexing >= 0 : "version must be set. got " + plan.versionForIndexing; + assert plan.indexIntoLucene; + index.parsedDoc().version().setLongValue(plan.versionForIndexing); + try { + if (plan.useLuceneUpdateDocument) { + update(index.uid(), index.docs(), indexWriter); + } else { + // document does not exists, we can optimize for create, but double check if assertions are running + assert assertDocDoesNotExist(index, canOptimizeAddDocument(index) == false); + index(index.docs(), indexWriter); + } + versionMap.putUnderLock(index.uid().bytes(), new VersionValue(plan.versionForIndexing)); + return new IndexResult(plan.versionForIndexing, plan.currentNotFoundOrDeleted); + } catch (Exception ex) { + if (indexWriter.getTragicException() == null) { + /* There is no tragic event recorded so this must be a document failure. + * + * The handling inside IW doesn't guarantee that an tragic / aborting exception + * will be used as THE tragicEventException since if there are multiple exceptions causing an abort in IW + * only one wins. Yet, only the one that wins will also close the IW and in turn fail the engine such that + * we can potentially handle the exception before the engine is failed. 
+ * Bottom line is that we can only rely on the fact that if it's a document failure then + * `indexWriter.getTragicException()` will be null otherwise we have to rethrow and treat it as fatal or rather + * non-document failure + * + * we return a `MATCH_ANY` version to indicate no document was index. The value is + * not used anyway + */ + return new IndexResult(ex, Versions.MATCH_ANY); + } else { + throw ex; + } + } + } + + /** + * returns true if the indexing operation may have already be processed by this engine. + * Note that it is OK to rarely return true even if this is not the case. However a `false` + * return value must always be correct. + */ + private boolean mayHaveBeenIndexedBefore(Index index) { + assert canOptimizeAddDocument(index); + boolean mayHaveBeenIndexBefore; + long deOptimizeTimestamp = maxUnsafeAutoIdTimestamp.get(); + if (index.isRetry()) { + mayHaveBeenIndexBefore = true; + do { + deOptimizeTimestamp = maxUnsafeAutoIdTimestamp.get(); + if (deOptimizeTimestamp >= index.getAutoGeneratedIdTimestamp()) { + break; + } + } while (maxUnsafeAutoIdTimestamp.compareAndSet(deOptimizeTimestamp, + index.getAutoGeneratedIdTimestamp()) == false); + assert maxUnsafeAutoIdTimestamp.get() >= index.getAutoGeneratedIdTimestamp(); + } else { + // in this case we force + mayHaveBeenIndexBefore = deOptimizeTimestamp >= index.getAutoGeneratedIdTimestamp(); + } + return mayHaveBeenIndexBefore; + } + + private static void index(final List docs, final IndexWriter indexWriter) throws IOException { + if (docs.size() > 1) { + indexWriter.addDocuments(docs); + } else { + indexWriter.addDocument(docs.get(0)); + } + } + + private static final class IndexingStrategy { + final boolean currentNotFoundOrDeleted; + final boolean useLuceneUpdateDocument; + final long versionForIndexing; + final boolean indexIntoLucene; + final Optional earlyResultOnPreFlightError; + + private IndexingStrategy(boolean currentNotFoundOrDeleted, boolean useLuceneUpdateDocument, + boolean indexIntoLucene, long versionForIndexing, + IndexResult earlyResultOnPreFlightError) { + assert useLuceneUpdateDocument == false || indexIntoLucene : + "use lucene update is set to true, but we're not indexing into lucene"; + assert (indexIntoLucene && earlyResultOnPreFlightError != null) == false : + "can only index into lucene or have a preflight result but not both." + + "indexIntoLucene: " + indexIntoLucene + + " earlyResultOnPreFlightError:" + earlyResultOnPreFlightError; + this.currentNotFoundOrDeleted = currentNotFoundOrDeleted; + this.useLuceneUpdateDocument = useLuceneUpdateDocument; + this.versionForIndexing = versionForIndexing; + this.indexIntoLucene = indexIntoLucene; + this.earlyResultOnPreFlightError = + earlyResultOnPreFlightError == null ? 
Optional.empty() : + Optional.of(earlyResultOnPreFlightError); + } + + static IndexingStrategy optimizedAppendOnly() { + return new IndexingStrategy(true, false, true, 1L, null); + } + + static IndexingStrategy skipDueToVersionConflict(VersionConflictEngineException e, + boolean currentNotFoundOrDeleted, + long currentVersion) { + return new IndexingStrategy(currentNotFoundOrDeleted, false, false, Versions.NOT_FOUND, + new IndexResult(e, currentVersion)); + } + + static IndexingStrategy processNormally(boolean currentNotFoundOrDeleted, long versionForIndexing) { + return new IndexingStrategy(currentNotFoundOrDeleted, currentNotFoundOrDeleted == false, true, versionForIndexing, null); + } + + static IndexingStrategy overrideExistingAsIfNotThere(long versionForIndexing) { + return new IndexingStrategy(true, true, true, versionForIndexing, null); + } + + static IndexingStrategy skipAsStale(boolean currentNotFoundOrDeleted, long versionForIndexing) { + return new IndexingStrategy(currentNotFoundOrDeleted, false, false, versionForIndexing, null); + } } - private static void index(final Index index, final IndexWriter indexWriter) throws IOException { - if (index.docs().size() > 1) { - indexWriter.addDocuments(index.docs()); + /** + * Asserts that the doc in the index operation really doesn't exist + */ + private boolean assertDocDoesNotExist(final Index index, final boolean allowDeleted) throws IOException { + final VersionValue versionValue = versionMap.getUnderLock(index.uid()); + if (versionValue != null) { + if (versionValue.isDelete() == false || allowDeleted == false) { + throw new AssertionError("doc [" + index.type() + "][" + index.id() + "] exists in version map (version " + versionValue + ")"); + } } else { - indexWriter.addDocument(index.docs().get(0)); + try (Searcher searcher = acquireSearcher("assert doc doesn't exist")) { + final long docsWithId = searcher.searcher().count(new TermQuery(index.uid())); + if (docsWithId > 0) { + throw new AssertionError("doc [" + index.type() + "][" + index.id() + "] exists [" + docsWithId + "] times in index"); + } + } } + return true; } - private static void update(final Index index, final IndexWriter indexWriter) throws IOException { - if (index.docs().size() > 1) { - indexWriter.updateDocuments(index.uid(), index.docs()); + private static void update(final Term uid, final List docs, final IndexWriter indexWriter) throws IOException { + if (docs.size() > 1) { + indexWriter.updateDocuments(uid, docs); } else { - indexWriter.updateDocument(index.uid(), index.docs().get(0)); + indexWriter.updateDocument(uid, docs.get(0)); } } @Override - public void delete(Delete delete) throws EngineException { - try (ReleasableLock lock = readLock.acquire()) { + public DeleteResult delete(Delete delete) throws IOException { + assert Objects.equals(delete.uid().field(), uidField) : delete.uid().field(); + assert assertVersionType(delete); + final DeleteResult deleteResult; + // NOTE: we don't throttle this when merges fall behind because delete-by-id does not create new segments: + try (ReleasableLock ignored = readLock.acquire(); Releasable ignored2 = acquireLock(delete.uid())) { ensureOpen(); - // NOTE: we don't throttle this when merges fall behind because delete-by-id does not create new segments: - innerDelete(delete); - } catch (IllegalStateException | IOException e) { + lastWriteNanos = delete.startTime(); + final DeletionStrategy plan; + if (delete.origin() == Operation.Origin.PRIMARY) { + plan = planDeletionAsPrimary(delete); + } else { + plan = 
planDeletionAsNonPrimary(delete); + } + + if (plan.earlyResultOnPreflightError.isPresent()) { + deleteResult = plan.earlyResultOnPreflightError.get(); + } else if (plan.deleteFromLucene) { + deleteResult = deleteInLucene(delete, plan); + } else { + assert delete.origin() != Operation.Origin.PRIMARY; + deleteResult = new DeleteResult(plan.versionOfDeletion, plan.currentlyDeleted == false); + } + if (!deleteResult.hasFailure() && + plan.deleteFromLucene && // if it wasn't applied to lucene, we don't store it in the translog + delete.origin() != Operation.Origin.LOCAL_TRANSLOG_RECOVERY) { + Translog.Location location = + translog.add(new Translog.Delete(delete, deleteResult)); + deleteResult.setTranslogLocation(location); + } + deleteResult.setTook(System.nanoTime() - delete.startTime()); + deleteResult.freeze(); + } catch (RuntimeException | IOException e) { try { - maybeFailEngine("delete", e); + maybeFailEngine("index", e); } catch (Exception inner) { e.addSuppressed(inner); } - throw new DeleteFailedEngineException(shardId, delete, e); + throw e; } - maybePruneDeletedTombstones(); + return deleteResult; } - private void maybePruneDeletedTombstones() { - // It's expensive to prune because we walk the deletes map acquiring dirtyLock for each uid so we only do it - // every 1/4 of gcDeletesInMillis: - if (engineConfig.isEnableGcDeletes() && engineConfig.getThreadPool().estimatedTimeInMillis() - lastDeleteVersionPruneTimeMSec > getGcDeletesInMillis() * 0.25) { - pruneDeletedTombstones(); + private DeletionStrategy planDeletionAsNonPrimary(Delete delete) throws IOException { + assert delete.origin() != Operation.Origin.PRIMARY : "planing as primary but got " + + delete.origin(); + // drop out of order operations + assert delete.versionType().versionTypeForReplicationAndRecovery() == delete.versionType() : + "resolving out of order delivery based on versioning but version type isn't fit for it. 
got [" + + delete.versionType() + "]"; + // unlike the primary, replicas don't really care to about found status of documents + // this allows to ignore the case where a document was found in the live version maps in + // a delete state and return true for the found flag in favor of code simplicity + final OpVsLuceneDocStatus opVsLucene = compareOpToLuceneDocBasedOnVersions(delete); + + final DeletionStrategy plan; + if (opVsLucene == OpVsLuceneDocStatus.OP_STALE_OR_EQUAL) { + plan = DeletionStrategy.processButSkipLucene(false, delete.version()); + } else { + plan = DeletionStrategy.processNormally( + opVsLucene == OpVsLuceneDocStatus.LUCENE_DOC_NOT_FOUND, delete.version()); } + return plan; } - private void innerDelete(Delete delete) throws IOException { - try (Releasable ignored = acquireLock(delete.uid())) { - lastWriteNanos = delete.startTime(); - final long currentVersion; - final boolean deleted; - final VersionValue versionValue = versionMap.getUnderLock(delete.uid()); - assert incrementVersionLookup(); - if (versionValue == null) { - currentVersion = loadCurrentVersionFromIndex(delete.uid()); - deleted = currentVersion == Versions.NOT_FOUND; + private DeletionStrategy planDeletionAsPrimary(Delete delete) throws IOException { + assert delete.origin() == Operation.Origin.PRIMARY : "planing as primary but got " + + delete.origin(); + // resolve operation from external to internal + final VersionValue versionValue = resolveDocVersion(delete); + assert incrementVersionLookup(); + final long currentVersion; + final boolean currentlyDeleted; + if (versionValue == null) { + currentVersion = Versions.NOT_FOUND; + currentlyDeleted = true; + } else { + currentVersion = versionValue.getVersion(); + currentlyDeleted = versionValue.isDelete(); + } + final DeletionStrategy plan; + if (delete.versionType().isVersionConflictForWrites(currentVersion, delete.version(), currentlyDeleted)) { + plan = DeletionStrategy.skipDueToVersionConflict( + new VersionConflictEngineException(shardId, delete, currentVersion, currentlyDeleted), + currentVersion, currentlyDeleted); + } else { + plan = DeletionStrategy.processNormally(currentlyDeleted, delete.versionType().updateVersion(currentVersion, delete.version())); + } + return plan; + } + + private DeleteResult deleteInLucene(Delete delete, DeletionStrategy plan) + throws IOException { + try { + if (plan.currentlyDeleted == false) { + // any exception that comes from this is a either an ACE or a fatal exception there + // can't be any document failures coming from this + indexWriter.deleteDocuments(delete.uid()); + } + versionMap.putUnderLock(delete.uid().bytes(), + new DeleteVersionValue(plan.versionOfDeletion, + engineConfig.getThreadPool().relativeTimeInMillis())); + return new DeleteResult(plan.versionOfDeletion, plan.currentlyDeleted == false); + } catch (Exception ex) { + if (indexWriter.getTragicException() == null) { + // there is no tragic event and such it must be a document level failure + return new DeleteResult(ex, plan.versionOfDeletion, plan.currentlyDeleted == false); } else { - currentVersion = checkDeletedAndGCed(versionValue); - deleted = versionValue.delete(); + throw ex; } + } + } - final long expectedVersion = delete.version(); - if (checkVersionConflict(delete, currentVersion, expectedVersion, deleted)) return; + private static final class DeletionStrategy { + // of a rare double delete + final boolean deleteFromLucene; + final boolean currentlyDeleted; + final long versionOfDeletion; + final Optional earlyResultOnPreflightError; + + 
private DeletionStrategy(boolean deleteFromLucene, boolean currentlyDeleted, + long versionOfDeletion, DeleteResult earlyResultOnPreflightError) { + assert (deleteFromLucene && earlyResultOnPreflightError != null) == false : + "can only delete from lucene or have a preflight result but not both." + + "deleteFromLucene: " + deleteFromLucene + + " earlyResultOnPreFlightError:" + earlyResultOnPreflightError; + this.deleteFromLucene = deleteFromLucene; + this.currentlyDeleted = currentlyDeleted; + this.versionOfDeletion = versionOfDeletion; + this.earlyResultOnPreflightError = earlyResultOnPreflightError == null ? + Optional.empty() : Optional.of(earlyResultOnPreflightError); + } - final long updatedVersion = updateVersion(delete, currentVersion, expectedVersion); + static DeletionStrategy skipDueToVersionConflict(VersionConflictEngineException e, + long currentVersion, boolean currentlyDeleted) { + return new DeletionStrategy(false, currentlyDeleted, Versions.NOT_FOUND, + new DeleteResult(e, currentVersion, currentlyDeleted == false)); + } - final boolean found = deleteIfFound(delete, currentVersion, deleted, versionValue); + static DeletionStrategy processNormally(boolean currentlyDeleted, long versionOfDeletion) { + return new DeletionStrategy(true, currentlyDeleted, versionOfDeletion, + null); - delete.updateVersion(updatedVersion, found); + } - maybeAddToTranslog(delete, updatedVersion, Translog.Delete::new, DeleteVersionValue::new); + public static DeletionStrategy processButSkipLucene(boolean currentlyDeleted, long versionOfDeletion) { + return new DeletionStrategy(false, currentlyDeleted, versionOfDeletion, null); } } - private boolean deleteIfFound(Delete delete, long currentVersion, boolean deleted, VersionValue versionValue) throws IOException { - final boolean found; - if (currentVersion == Versions.NOT_FOUND) { - // doc does not exist and no prior deletes - found = false; - } else if (versionValue != null && deleted) { - // a "delete on delete", in this case, we still increment the version, log it, and return that version - found = false; - } else { - // we deleted a currently existing document - indexWriter.deleteDocuments(delete.uid()); - found = true; + private void maybePruneDeletedTombstones() { + // It's expensive to prune because we walk the deletes map acquiring dirtyLock for each uid so we only do it + // every 1/4 of gcDeletesInMillis: + if (engineConfig.isEnableGcDeletes() && engineConfig.getThreadPool().relativeTimeInMillis() - lastDeleteVersionPruneTimeMSec > getGcDeletesInMillis() * 0.25) { + pruneDeletedTombstones(); } - return found; } @Override @@ -627,8 +910,6 @@ public void refresh(String source) throws EngineException { } catch (AlreadyClosedException e) { failOnTragicEvent(e); throw e; - } catch (EngineClosedException e) { - throw e; } catch (Exception e) { try { failEngine("refresh failed", e); @@ -660,23 +941,21 @@ public void writeIndexingBuffer() throws EngineException { final long versionMapBytes = versionMap.ramBytesUsedForRefresh(); final long indexingBufferBytes = indexWriter.ramBytesUsed(); - final boolean useRefresh = versionMapRefreshPending.get() || (indexingBufferBytes/4 < versionMapBytes); + final boolean useRefresh = versionMapRefreshPending.get() || (indexingBufferBytes / 4 < versionMapBytes); if (useRefresh) { // The version map is using > 25% of the indexing buffer, so we do a refresh so the version map also clears logger.debug("use refresh to write indexing buffer (heap size=[{}]), to also clear version map (heap size=[{}])", - new 
ByteSizeValue(indexingBufferBytes), new ByteSizeValue(versionMapBytes)); + new ByteSizeValue(indexingBufferBytes), new ByteSizeValue(versionMapBytes)); refresh("write indexing buffer"); } else { // Most of our heap is used by the indexing buffer, so we do a cheaper (just writes segments, doesn't open a new searcher) IW.flush: logger.debug("use IndexWriter.flush to write indexing buffer (heap size=[{}]) since version map is small (heap size=[{}])", - new ByteSizeValue(indexingBufferBytes), new ByteSizeValue(versionMapBytes)); + new ByteSizeValue(indexingBufferBytes), new ByteSizeValue(versionMapBytes)); indexWriter.flush(); } } catch (AlreadyClosedException e) { failOnTragicEvent(e); throw e; - } catch (EngineClosedException e) { - throw e; } catch (Exception e) { try { failEngine("writeIndexingBuffer failed", e); @@ -769,7 +1048,7 @@ public CommitId flush(boolean force, boolean waitIfOngoing) throws EngineExcepti flushLock.lock(); logger.trace("acquired flush lock after blocking"); } else { - throw new FlushNotAllowedEngineException(shardId, "already flushing..."); + return new CommitId(lastCommittedSegmentInfos.getId()); } } else { logger.trace("acquired flush lock immediately"); @@ -789,30 +1068,30 @@ public CommitId flush(boolean force, boolean waitIfOngoing) throws EngineExcepti } catch (Exception e) { throw new FlushFailedEngineException(shardId, e); } - } - /* - * we have to inc-ref the store here since if the engine is closed by a tragic event - * we don't acquire the write lock and wait until we have exclusive access. This might also - * dec the store reference which can essentially close the store and unless we can inc the reference - * we can't use it. - */ - store.incRef(); - try { - // reread the last committed segment infos - lastCommittedSegmentInfos = store.readLastCommittedSegmentsInfo(); - } catch (Exception e) { - if (isClosed.get() == false) { - try { - logger.warn("failed to read latest segment infos on flush", e); - } catch (Exception inner) { - e.addSuppressed(inner); - } - if (Lucene.isCorruptionException(e)) { - throw new FlushFailedEngineException(shardId, e); + /* + * we have to inc-ref the store here since if the engine is closed by a tragic event + * we don't acquire the write lock and wait until we have exclusive access. This might also + * dec the store reference which can essentially close the store and unless we can inc the reference + * we can't use it. + */ + store.incRef(); + try { + // reread the last committed segment infos + lastCommittedSegmentInfos = store.readLastCommittedSegmentsInfo(); + } catch (Exception e) { + if (isClosed.get() == false) { + try { + logger.warn("failed to read latest segment infos on flush", e); + } catch (Exception inner) { + e.addSuppressed(inner); + } + if (Lucene.isCorruptionException(e)) { + throw new FlushFailedEngineException(shardId, e); + } } + } finally { + store.decRef(); } - } finally { - store.decRef(); } newCommitId = lastCommittedSegmentInfos.getId(); } catch (FlushFailedEngineException ex) { @@ -831,7 +1110,7 @@ public CommitId flush(boolean force, boolean waitIfOngoing) throws EngineExcepti } private void pruneDeletedTombstones() { - long timeMSec = engineConfig.getThreadPool().estimatedTimeInMillis(); + long timeMSec = engineConfig.getThreadPool().relativeTimeInMillis(); // TODO: not good that we reach into LiveVersionMap here; can we move this inside VersionMap instead? problem is the dirtyLock... 
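The pruning throttle mentioned a little earlier (walking the deletes map is expensive, so tombstones are only swept once at least a quarter of the GC-deletes window has elapsed) can be sketched as follows; the clock supplier and field names are illustrative.

```java
import java.util.function.LongSupplier;

// Hypothetical illustration of the "prune at most every gcDeletes/4" throttle.
class TombstonePruneSketch {
    private final LongSupplier relativeTimeInMillis; // e.g. a cached thread-pool clock
    private final long gcDeletesInMillis;
    private long lastPruneTimeMillis;

    TombstonePruneSketch(LongSupplier clock, long gcDeletesInMillis) {
        this.relativeTimeInMillis = clock;
        this.gcDeletesInMillis = gcDeletesInMillis;
        this.lastPruneTimeMillis = clock.getAsLong();
    }

    void maybePrune(Runnable pruneDeletedTombstones) {
        long now = relativeTimeInMillis.getAsLong();
        // Each prune walks the whole deletes map taking a lock per uid, so rate-limit it.
        if (now - lastPruneTimeMillis > gcDeletesInMillis * 0.25) {
            pruneDeletedTombstones.run();
            lastPruneTimeMillis = now;
        }
    }
}
```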
@@ -843,7 +1122,7 @@ private void pruneDeletedTombstones() { // Must re-get it here, vs using entry.getValue(), in case the uid was indexed/deleted since we pulled the iterator: VersionValue versionValue = versionMap.getTombstoneUnderLock(uid); if (versionValue != null) { - if (timeMSec - versionValue.time() > getGcDeletesInMillis()) { + if (timeMSec - versionValue.getTime() > getGcDeletesInMillis()) { versionMap.removeTombstoneUnderLock(uid); } } @@ -853,9 +1132,14 @@ private void pruneDeletedTombstones() { lastDeleteVersionPruneTimeMSec = timeMSec; } + // testing + void clearDeletedTombstones() { + versionMap.clearTombstones(); + } + @Override public void forceMerge(final boolean flush, int maxNumSegments, boolean onlyExpungeDeletes, - final boolean upgrade, final boolean upgradeOnlyAncientSegments) throws EngineException, EngineClosedException, IOException { + final boolean upgrade, final boolean upgradeOnlyAncientSegments) throws EngineException, IOException { /* * We do NOT acquire the readlock here since we are waiting on the merges to finish * that's fine since the IW.rollback should stop all the threads and trigger an IOException @@ -940,22 +1224,34 @@ public IndexCommit acquireIndexCommit(final boolean flushFirst) throws EngineExc } } - private void failOnTragicEvent(AlreadyClosedException ex) { + @SuppressWarnings("finally") + private boolean failOnTragicEvent(AlreadyClosedException ex) { + final boolean engineFailed; // if we are already closed due to some tragic exception // we need to fail the engine. it might have already been failed before // but we are double-checking it's failed and closed if (indexWriter.isOpen() == false && indexWriter.getTragicException() != null) { - final Exception tragedy = indexWriter.getTragicException() instanceof Exception ? - (Exception) indexWriter.getTragicException() : - new Exception(indexWriter.getTragicException()); - failEngine("already closed by tragic event on the index writer", tragedy); + if (indexWriter.getTragicException() instanceof Error) { + try { + logger.error("tragic event in index writer", ex); + } finally { + throw (Error) indexWriter.getTragicException(); + } + } else { + failEngine("already closed by tragic event on the index writer", (Exception) indexWriter.getTragicException()); + engineFailed = true; + } } else if (translog.isOpen() == false && translog.getTragicException() != null) { failEngine("already closed by tragic event on the translog", translog.getTragicException()); - } else { + engineFailed = true; + } else if (failedEngine.get() == null && isClosed.get() == false) { // we are closed but the engine is not failed yet? // this smells like a bug - we only expect ACE if we are in a fatal case ie. either translog or IW is closed by // a tragic event or has closed itself. if that is not the case we are in a buggy state and raise an assertion error throw new AssertionError("Unexpected AlreadyClosedException", ex); + } else { + engineFailed = false; } + return engineFailed; } @Override @@ -968,8 +1264,7 @@ protected boolean maybeFailEngine(String source, Exception e) { // exception that should only be thrown in a tragic event. we pass on the checks to failOnTragicEvent which will // throw and AssertionError if the tragic event condition is not met. 
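The rewritten tragic-event handling above treats `Error` and `Exception` differently; as a rough, hedged sketch (logging and the fail-engine hook are placeholders), a tragic `Error` is logged and rethrown, which is why the `throw` sits in a `finally` block guarded by `@SuppressWarnings("finally")`, whereas a tragic `Exception` fails the engine instead.

```java
import java.util.function.Consumer;

import org.apache.lucene.index.IndexWriter;

class TragicEventSketch {
    @SuppressWarnings("finally")
    static boolean failOnTragicEvent(IndexWriter writer, Consumer<Exception> failEngine) {
        Throwable tragedy = writer.getTragicException();
        if (writer.isOpen() == false && tragedy != null) {
            if (tragedy instanceof Error) {
                try {
                    System.err.println("tragic event in index writer"); // placeholder for logger.error(...)
                } finally {
                    throw (Error) tragedy; // Errors must never be swallowed
                }
            }
            failEngine.accept((Exception) tragedy); // tragic Exceptions fail the engine
            return true;
        }
        return false; // nothing tragic recorded on the writer
    }
}
```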
if (e instanceof AlreadyClosedException) { - failOnTragicEvent((AlreadyClosedException)e); - return true; + return failOnTragicEvent((AlreadyClosedException) e); } else if (e != null && ((indexWriter.isOpen() == false && indexWriter.getTragicException() == e) || (translog.isOpen() == false && translog.getTragicException() == e))) { @@ -1026,7 +1321,7 @@ public List segments(boolean verbose) { * is failed. */ @Override - protected final void closeNoLock(String reason) { + protected final void closeNoLock(String reason, CountDownLatch closedLatch) { if (isClosed.compareAndSet(false, true)) { assert rwl.isWriteLockedByCurrentThread() || failEngineLock.isHeldByCurrentThread() : "Either the write lock must be held or the engine must be currently be failing itself"; try { @@ -1053,8 +1348,12 @@ protected final void closeNoLock(String reason) { } catch (Exception e) { logger.warn("failed to rollback writer on close", e); } finally { - store.decRef(); - logger.debug("engine closed [{}]", reason); + try { + store.decRef(); + logger.debug("engine closed [{}]", reason); + } finally { + closedLatch.countDown(); + } } } } @@ -1074,12 +1373,13 @@ private Releasable acquireLock(Term uid) { private long loadCurrentVersionFromIndex(Term uid) throws IOException { assert incrementIndexVersionLookup(); - try (final Searcher searcher = acquireSearcher("load_version")) { - return Versions.loadVersion(searcher.reader(), uid); + try (Searcher searcher = acquireSearcher("load_version")) { + return VersionsResolver.loadVersion(searcher.reader(), uid); } } - private IndexWriter createWriter(boolean create) throws IOException { + // pkg-private for testing + IndexWriter createWriter(boolean create) throws IOException { try { final IndexWriterConfig iwc = new IndexWriterConfig(engineConfig.getAnalyzer()); iwc.setCommitOnClose(false); // we by default don't commit on close @@ -1099,7 +1399,7 @@ private IndexWriter createWriter(boolean create) throws IOException { mergePolicy = new ElasticsearchMergePolicy(mergePolicy); iwc.setMergePolicy(mergePolicy); iwc.setSimilarity(engineConfig.getSimilarity()); - iwc.setRAMBufferSizeMB(engineConfig.getIndexingBufferSize().mbFrac()); + iwc.setRAMBufferSizeMB(engineConfig.getIndexingBufferSize().getMbFrac()); iwc.setCodec(engineConfig.getCodec()); iwc.setUseCompoundFile(true); // always use compound on flush - reduces # of file-handles on refresh return new IndexWriter(store.directory(), iwc); @@ -1125,7 +1425,7 @@ static final class SearchFactory extends EngineSearcherFactory { @Override public IndexSearcher newSearcher(IndexReader reader, IndexReader previousReader) throws IOException { IndexSearcher searcher = super.newSearcher(reader, previousReader); - if (reader instanceof LeafReader && isMergedSegment((LeafReader)reader)) { + if (reader instanceof LeafReader && isMergedSegment((LeafReader) reader)) { // we call newSearcher from the IndexReaderWarmer which warms segments during merging // in that case the reader is a LeafReader and all we need to do is to build a new Searcher // and return it since it does it's own warming for that particular reader. 
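`closeNoLock` now receives a `CountDownLatch` and counts it down in the innermost `finally`, so a caller can block until the engine has actually released its resources rather than merely flipped its closed flag. A self-contained sketch of the pattern (hypothetical names, not the engine's code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: close() releases its resources and, no matter what fails on the way,
// counts the latch down last so a waiter can block until the close fully completed.
final class ClosableResource {
    private final AtomicBoolean closed = new AtomicBoolean();

    void close(CountDownLatch closedLatch) {
        if (closed.compareAndSet(false, true)) {
            try {
                releaseResources();
            } finally {
                closedLatch.countDown(); // always signal, even if release threw
            }
        }
    }

    private void releaseResources() {
        // decrement store references, close writers, etc.
    }

    public static void main(String[] args) throws InterruptedException {
        ClosableResource resource = new ClosableResource();
        CountDownLatch latch = new CountDownLatch(1);
        new Thread(() -> resource.close(latch)).start();
        latch.await(); // returns only after close() finished its finally block
        System.out.println("resource fully closed");
    }
}
```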
@@ -1148,7 +1448,7 @@ public IndexSearcher newSearcher(IndexReader reader, IndexReader previousReader) @Override public void activateThrottling() { int count = throttleRequestCount.incrementAndGet(); - assert count >= 1: "invalid post-increment throttleRequestCount=" + count; + assert count >= 1 : "invalid post-increment throttleRequestCount=" + count; if (count == 1) { throttle.activate(); } @@ -1157,12 +1457,18 @@ public void activateThrottling() { @Override public void deactivateThrottling() { int count = throttleRequestCount.decrementAndGet(); - assert count >= 0: "invalid post-decrement throttleRequestCount=" + count; + assert count >= 0 : "invalid post-decrement throttleRequestCount=" + count; if (count == 0) { throttle.deactivate(); } } + @Override + public boolean isThrottled() { + return throttle.isThrottled(); + } + + @Override public long getIndexThrottleTimeInMillis() { return throttle.getThrottleTimeInMillis(); } @@ -1252,15 +1558,21 @@ protected void doRun() throws Exception { private void commitIndexWriter(IndexWriter writer, Translog translog, String syncId) throws IOException { ensureCanFlush(); try { - Translog.TranslogGeneration translogGeneration = translog.getGeneration(); - logger.trace("committing writer with translog id [{}] and sync id [{}] ", translogGeneration.translogFileGeneration, syncId); - Map commitData = new HashMap<>(2); - commitData.put(Translog.TRANSLOG_GENERATION_KEY, Long.toString(translogGeneration.translogFileGeneration)); - commitData.put(Translog.TRANSLOG_UUID_KEY, translogGeneration.translogUUID); - if (syncId != null) { - commitData.put(Engine.SYNC_COMMIT_ID, syncId); - } - indexWriter.setCommitData(commitData); + final Translog.TranslogGeneration translogGeneration = translog.getGeneration(); + final String translogFileGeneration = Long.toString(translogGeneration.translogFileGeneration); + final String translogUUID = translogGeneration.translogUUID; + + writer.setLiveCommitData(() -> { + final Map commitData = new HashMap<>(4); + commitData.put(Translog.TRANSLOG_GENERATION_KEY, translogFileGeneration); + commitData.put(Translog.TRANSLOG_UUID_KEY, translogUUID); + if (syncId != null) { + commitData.put(Engine.SYNC_COMMIT_ID, syncId); + } + commitData.put(MAX_UNSAFE_AUTO_ID_TIMESTAMP_COMMIT_ID, Long.toString(maxUnsafeAutoIdTimestamp.get())); + logger.trace("committing writer with commit data [{}]", commitData); + return commitData.entrySet().iterator(); + }); writer.commit(); } catch (Exception ex) { try { @@ -1291,8 +1603,8 @@ private void ensureCanFlush() { // if we are in this stage we have to prevent flushes from this // engine otherwise we might loose documents if the flush succeeds // and the translog recover fails we we "commit" the translog on flush. - if (allowCommits.get() == false) { - throw new FlushNotAllowedEngineException(shardId, "flushes are disabled - pending translog recovery"); + if (pendingTranslogRecovery.get()) { + throw new IllegalStateException(shardId.toString() + " flushes are disabled - pending translog recovery"); } } @@ -1312,14 +1624,6 @@ public MergeStats getMergeStats() { return mergeScheduler.stats(); } - @Override - public DocsStats getDocStats() { - final int numDocs = indexWriter.numDocs(); - final int maxDoc = indexWriter.maxDoc(); - return new DocsStats(numDocs, maxDoc-numDocs); - } - - /** * Returns the number of times a version was looked up either from the index. 
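`commitIndexWriter` switches from a pre-built `setCommitData(Map)` call to `setLiveCommitData` with a lambda: `Iterable` has a single abstract `iterator()` method, so the commit metadata (translog generation, translog UUID, optional sync id, max unsafe auto-id timestamp) can be supplied lazily at commit time. A small sketch of the same idiom in plain Java, independent of Lucene; the key strings below are illustrative only:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: build commit metadata, then expose it as a lazy Iterable of entries --
// the same Iterable-of-entries shape the hunk above hands to setLiveCommitData.
public class LiveCommitDataExample {
    public static void main(String[] args) {
        String translogGeneration = "42";            // illustrative values only
        String translogUuid = "example-uuid";
        String syncId = null;

        Iterable<Map.Entry<String, String>> liveCommitData = () -> {
            Map<String, String> commitData = new HashMap<>(4);
            commitData.put("translog_generation", translogGeneration);
            commitData.put("translog_uuid", translogUuid);
            if (syncId != null) {
                commitData.put("sync_id", syncId);
            }
            return commitData.entrySet().iterator();
        };

        // A consumer (e.g. the index writer at commit time) iterates whenever it needs the data.
        for (Map.Entry<String, String> entry : liveCommitData) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }
    }
}
```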
* Note this is only available if assertions are enabled @@ -1353,4 +1657,9 @@ private boolean incrementIndexVersionLookup() { boolean indexWriterHasDeletions() { return indexWriter.hasDeletions(); } + + @Override + public boolean isRecovering() { + return pendingTranslogRecovery.get(); + } } diff --git a/core/src/main/java/org/elasticsearch/index/engine/LiveVersionMap.java b/core/src/main/java/org/elasticsearch/index/engine/LiveVersionMap.java index b489cec5768bb..7233420309c72 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/LiveVersionMap.java +++ b/core/src/main/java/org/elasticsearch/index/engine/LiveVersionMap.java @@ -43,12 +43,12 @@ private static class Maps { // Used while refresh is running, and to hold adds/deletes until refresh finishes. We read from both current and old on lookup: final Map old; - public Maps(Map current, Map old) { + Maps(Map current, Map old) { this.current = current; this.old = old; } - public Maps() { + Maps() { this(ConcurrentCollections.newConcurrentMapWithAggressiveConcurrency(), ConcurrentCollections.newConcurrentMapWithAggressiveConcurrency()); } @@ -164,7 +164,7 @@ void putUnderLock(BytesRef uid, VersionValue version) { if (prev != null) { // Deduct RAM for the version we just replaced: long prevBytes = BASE_BYTES_PER_CHM_ENTRY; - if (prev.delete() == false) { + if (prev.isDelete() == false) { prevBytes += prev.ramBytesUsed() + uidRAMBytesUsed; } ramBytesUsedCurrent.addAndGet(-prevBytes); @@ -172,13 +172,13 @@ void putUnderLock(BytesRef uid, VersionValue version) { // Add RAM for the new version: long newBytes = BASE_BYTES_PER_CHM_ENTRY; - if (version.delete() == false) { + if (version.isDelete() == false) { newBytes += version.ramBytesUsed() + uidRAMBytesUsed; } ramBytesUsedCurrent.addAndGet(newBytes); final VersionValue prevTombstone; - if (version.delete()) { + if (version.isDelete()) { // Also enroll the delete into tombstones, and account for its RAM too: prevTombstone = tombstones.put(uid, version); @@ -187,7 +187,7 @@ void putUnderLock(BytesRef uid, VersionValue version) { // the accounting to current: ramBytesUsedTombstones.addAndGet(BASE_BYTES_PER_CHM_ENTRY + version.ramBytesUsed() + uidRAMBytesUsed); - if (prevTombstone == null && prev != null && prev.delete()) { + if (prevTombstone == null && prev != null && prev.isDelete()) { // If prev was a delete that had already been removed from tombstones, then current was already accounting for the // BytesRef/VersionValue RAM, so we now deduct that as well: ramBytesUsedCurrent.addAndGet(-(prev.ramBytesUsed() + uidRAMBytesUsed)); @@ -211,12 +211,12 @@ void removeTombstoneUnderLock(BytesRef uid) { final VersionValue prev = tombstones.remove(uid); if (prev != null) { - assert prev.delete(); + assert prev.isDelete(); long v = ramBytesUsedTombstones.addAndGet(-(BASE_BYTES_PER_CHM_ENTRY + prev.ramBytesUsed() + uidRAMBytesUsed)); assert v >= 0: "bytes=" + v; } final VersionValue curVersion = maps.current.get(uid); - if (curVersion != null && curVersion.delete()) { + if (curVersion != null && curVersion.isDelete()) { // We now shift accounting of the BytesRef from tombstones to current, because a refresh would clear this RAM. 
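In the `LiveVersionMap` hunk above, `putUnderLock` keeps the RAM counter honest: it deducts what the replaced entry was charged for (its payload only when it was not a delete, since delete payloads are accounted under tombstones) and then charges the new entry the same way. A stripped-down sketch of that bookkeeping, with illustrative sizes rather than Lucene's `RamUsageEstimator` numbers and without the tombstone side:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of putUnderLock's accounting: replacing an entry removes the previous
// value's charge and adds the new value's charge to a running byte counter.
final class RamAccountedVersionMap {
    private static final long BASE_BYTES_PER_ENTRY = 64; // assumed per-entry overhead

    static final class VersionValue {
        final long version;
        final boolean delete;
        final long ramBytesUsed;

        VersionValue(long version, boolean delete, long ramBytesUsed) {
            this.version = version;
            this.delete = delete;
            this.ramBytesUsed = ramBytesUsed;
        }

        boolean isDelete() {
            return delete;
        }
    }

    private final Map<String, VersionValue> current = new HashMap<>();
    private final AtomicLong ramBytesUsedCurrent = new AtomicLong();

    void putUnderLock(String uid, VersionValue version) {
        long uidBytes = 2L * uid.length();
        VersionValue prev = current.put(uid, version);
        if (prev != null) {
            long prevBytes = BASE_BYTES_PER_ENTRY;
            if (prev.isDelete() == false) {
                prevBytes += prev.ramBytesUsed + uidBytes;
            }
            ramBytesUsedCurrent.addAndGet(-prevBytes);
        }
        long newBytes = BASE_BYTES_PER_ENTRY;
        if (version.isDelete() == false) {
            newBytes += version.ramBytesUsed + uidBytes;
        }
        ramBytesUsedCurrent.addAndGet(newBytes);
    }

    long ramBytesUsed() {
        return ramBytesUsedCurrent.get();
    }

    public static void main(String[] args) {
        RamAccountedVersionMap map = new RamAccountedVersionMap();
        map.putUnderLock("doc-1", new VersionValue(1, false, 16));
        map.putUnderLock("doc-1", new VersionValue(2, false, 32)); // old charge removed, new added
        System.out.println(map.ramBytesUsed());
    }
}
```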
This should be // uncommon, because with the default refresh=1s and gc_deletes=60s, deletes should be cleared from current long before we drop // them from tombstones: @@ -234,6 +234,11 @@ Iterable> getAllTombstones() { return tombstones.entrySet(); } + /** clears all tombstones ops */ + void clearTombstones() { + tombstones.clear(); + } + /** Called when this index is closed. */ synchronized void clear() { maps = new Maps(); diff --git a/core/src/main/java/org/elasticsearch/index/engine/SegmentsStats.java b/core/src/main/java/org/elasticsearch/index/engine/SegmentsStats.java index 637beebfec8ed..ed8e150cd6c18 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/SegmentsStats.java +++ b/core/src/main/java/org/elasticsearch/index/engine/SegmentsStats.java @@ -286,12 +286,6 @@ public long getMaxUnsafeAutoIdTimestamp() { return maxUnsafeAutoIdTimestamp; } - public static SegmentsStats readSegmentsStats(StreamInput in) throws IOException { - SegmentsStats stats = new SegmentsStats(); - stats.readFrom(in); - return stats; - } - @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(Fields.SEGMENTS); @@ -391,10 +385,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeLong(maxUnsafeAutoIdTimestamp); out.writeVInt(fileSizes.size()); - for (Iterator> it = fileSizes.iterator(); it.hasNext();) { - ObjectObjectCursor entry = it.next(); + for (ObjectObjectCursor entry : fileSizes) { out.writeString(entry.key); - out.writeLong(entry.value); + out.writeLong(entry.value.longValue()); } } } diff --git a/core/src/main/java/org/elasticsearch/index/engine/ShadowEngine.java b/core/src/main/java/org/elasticsearch/index/engine/ShadowEngine.java index 3aafcaff74859..fbd42b4031726 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/ShadowEngine.java +++ b/core/src/main/java/org/elasticsearch/index/engine/ShadowEngine.java @@ -35,6 +35,7 @@ import java.io.IOException; import java.util.Arrays; import java.util.List; +import java.util.concurrent.CountDownLatch; import java.util.function.Function; /** @@ -52,8 +53,8 @@ * regular primary (which uses {@link org.elasticsearch.index.engine.InternalEngine}) * * Notice that since this Engine does not deal with the translog, any - * {@link #get(Get get)} request goes directly to the searcher, meaning it is - * non-realtime. + * {@link #get(Get, Function)} request goes directly to the searcher, + * meaning it is non-realtime. 
*/ public class ShadowEngine extends Engine { @@ -67,9 +68,6 @@ public class ShadowEngine extends Engine { public ShadowEngine(EngineConfig engineConfig) { super(engineConfig); - if (engineConfig.getRefreshListeners() != null) { - throw new IllegalArgumentException("ShadowEngine doesn't support RefreshListeners"); - } SearcherFactory searcherFactory = new EngineSearcherFactory(engineConfig); final long nonexistentRetryTime = engineConfig.getIndexSettings().getSettings() .getAsTime(NONEXISTENT_INDEX_RETRY_WAIT, DEFAULT_NONEXISTENT_INDEX_RETRY_WAIT) @@ -106,12 +104,12 @@ public ShadowEngine(EngineConfig engineConfig) { @Override - public void index(Index index) throws EngineException { + public IndexResult index(Index index) { throw new UnsupportedOperationException(shardId + " index operation not allowed on shadow engine"); } @Override - public void delete(Delete delete) throws EngineException { + public DeleteResult delete(Delete delete) { throw new UnsupportedOperationException(shardId + " delete operation not allowed on shadow engine"); } @@ -162,6 +160,7 @@ public void forceMerge(boolean flush, int maxNumSegments, boolean onlyExpungeDel @Override public GetResult get(Get get, Function searcherFacotry) throws EngineException { // There is no translog, so we can get it directly from the searcher + // Since we never refresh we just drop the onRefresh parameter on the floor return getFromSearcher(get, searcherFacotry); } @@ -191,9 +190,6 @@ public void refresh(String source) throws EngineException { ensureOpen(); searcherManager.maybeRefreshBlocking(); } catch (AlreadyClosedException e) { - // This means there's a bug somewhere: don't suppress it - throw new AssertionError(e); - } catch (EngineClosedException e) { throw e; } catch (Exception e) { try { @@ -216,7 +212,7 @@ protected SearcherManager getSearcherManager() { } @Override - protected void closeNoLock(String reason) { + protected void closeNoLock(String reason, CountDownLatch closedLatch) { if (isClosed.compareAndSet(false, true)) { try { logger.debug("shadow replica close searcher manager refCount: {}", store.refCount()); @@ -224,7 +220,11 @@ protected void closeNoLock(String reason) { } catch (Exception e) { logger.warn("shadow replica failed to close searcher manager", e); } finally { - store.decRef(); + try { + store.decRef(); + } finally { + closedLatch.countDown(); + } } } } @@ -256,6 +256,16 @@ public void deactivateThrottling() { throw new UnsupportedOperationException("ShadowEngine has no IndexWriter"); } + @Override + public boolean isThrottled() { + return false; + } + + @Override + public long getIndexThrottleTimeInMillis() { + return 0L; + } + @Override public Engine recoverFromTranslog() throws IOException { throw new UnsupportedOperationException("can't recover on a shadow engine"); diff --git a/core/src/main/java/org/elasticsearch/index/engine/VersionConflictEngineException.java b/core/src/main/java/org/elasticsearch/index/engine/VersionConflictEngineException.java index 9b038c6e77c56..56c19827faba3 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/VersionConflictEngineException.java +++ b/core/src/main/java/org/elasticsearch/index/engine/VersionConflictEngineException.java @@ -29,6 +29,10 @@ */ public class VersionConflictEngineException extends EngineException { + public VersionConflictEngineException(ShardId shardId, Engine.Operation op, long currentVersion, boolean deleted) { + this(shardId, op.type(), op.id(), op.versionType().explainConflictForWrites(currentVersion, op.version(), deleted)); + } + 
public VersionConflictEngineException(ShardId shardId, String type, String id, String explanation) { this(shardId, null, type, id, explanation); } diff --git a/core/src/main/java/org/elasticsearch/index/engine/VersionValue.java b/core/src/main/java/org/elasticsearch/index/engine/VersionValue.java index 662c88df5d9d3..53550578cc3a0 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/VersionValue.java +++ b/core/src/main/java/org/elasticsearch/index/engine/VersionValue.java @@ -29,25 +29,25 @@ class VersionValue implements Accountable { private static final long BASE_RAM_BYTES_USED = RamUsageEstimator.shallowSizeOfInstance(VersionValue.class); + /** the version of the document. used for versioned indexed operations and as a BWC layer, where no seq# are set yet */ private final long version; - public VersionValue(long version) { + VersionValue(long version) { this.version = version; } - public long time() { + public long getTime() { throw new UnsupportedOperationException(); } - public long version() { + public long getVersion() { return version; } - public boolean delete() { + public boolean isDelete() { return false; } - @Override public long ramBytesUsed() { return BASE_RAM_BYTES_USED; @@ -57,4 +57,10 @@ public long ramBytesUsed() { public Collection getChildResources() { return Collections.emptyList(); } + + @Override + public String toString() { + return "VersionValue{" + + "version=" + version + "}"; + } } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/AtomicParentChildFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/AtomicParentChildFieldData.java deleted file mode 100644 index f88d7c58774f1..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/fielddata/AtomicParentChildFieldData.java +++ /dev/null @@ -1,43 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.index.fielddata; - -import org.apache.lucene.index.SortedDocValues; - -import java.util.Set; - -/** - * Specialization of {@link AtomicFieldData} for parent/child mappings. - */ -public interface AtomicParentChildFieldData extends AtomicFieldData { - - /** - * Return the set of types there is a mapping for. - */ - Set types(); - - /** - * Return the mapping for the given type. The returned - * {@link SortedDocValues} will map doc IDs to the identifier of their - * parent. 
- */ - SortedDocValues getOrdinalsValues(String type); - -} diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/FieldDataStats.java b/core/src/main/java/org/elasticsearch/index/fielddata/FieldDataStats.java index d8548f72476a0..70a41b96f7c46 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/FieldDataStats.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/FieldDataStats.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.fielddata; -import com.carrotsearch.hppc.ObjectLongHashMap; +import org.elasticsearch.common.FieldMemoryStats; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -29,21 +29,27 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; +import java.util.Objects; /** */ public class FieldDataStats implements Streamable, ToXContent { + private static final String FIELDDATA = "fielddata"; + private static final String MEMORY_SIZE = "memory_size"; + private static final String MEMORY_SIZE_IN_BYTES = "memory_size_in_bytes"; + private static final String EVICTIONS = "evictions"; + private static final String FIELDS = "fields"; long memorySize; long evictions; @Nullable - ObjectLongHashMap fields; + FieldMemoryStats fields; public FieldDataStats() { } - public FieldDataStats(long memorySize, long evictions, @Nullable ObjectLongHashMap fields) { + public FieldDataStats(long memorySize, long evictions, @Nullable FieldMemoryStats fields) { this.memorySize = memorySize; this.evictions = evictions; this.fields = fields; @@ -54,16 +60,9 @@ public void add(FieldDataStats stats) { this.evictions += stats.evictions; if (stats.fields != null) { if (fields == null) { - fields = stats.fields.clone(); + fields = stats.fields.copy(); } else { - assert !stats.fields.containsKey(null); - final Object[] keys = stats.fields.keys; - final long[] values = stats.fields.values; - for (int i = 0; i < keys.length; i++) { - if (keys[i] != null) { - fields.addTo((String) keys[i], values[i]); - } - } + fields.add(stats.fields); } } } @@ -81,78 +80,48 @@ public long getEvictions() { } @Nullable - public ObjectLongHashMap getFields() { + public FieldMemoryStats getFields() { return fields; } - public static FieldDataStats readFieldDataStats(StreamInput in) throws IOException { - FieldDataStats stats = new FieldDataStats(); - stats.readFrom(in); - return stats; - } - @Override public void readFrom(StreamInput in) throws IOException { memorySize = in.readVLong(); evictions = in.readVLong(); - if (in.readBoolean()) { - int size = in.readVInt(); - fields = new ObjectLongHashMap<>(size); - for (int i = 0; i < size; i++) { - fields.put(in.readString(), in.readVLong()); - } - } + fields = in.readOptionalWriteable(FieldMemoryStats::new); } @Override public void writeTo(StreamOutput out) throws IOException { out.writeVLong(memorySize); out.writeVLong(evictions); - if (fields == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - out.writeVInt(fields.size()); - assert !fields.containsKey(null); - final Object[] keys = fields.keys; - final long[] values = fields.values; - for (int i = 0; i < keys.length; i++) { - if (keys[i] != null) { - out.writeString((String) keys[i]); - out.writeVLong(values[i]); - } - } - } + out.writeOptionalWriteable(fields); } @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(Fields.FIELDDATA); - 
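The replaced `readFrom`/`writeTo` in `FieldDataStats` wrote an explicit boolean presence flag followed by the per-field entries; the new code delegates that to `readOptionalWriteable`/`writeOptionalWriteable`. A generic sketch of the presence-flag idiom using plain `java.io` streams (the Elasticsearch `StreamInput`/`StreamOutput` helpers are not used here):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Sketch: serialize an optional Map<String, Long> by writing a presence flag first,
// the same idea the removed hand-rolled code implemented inline.
public class OptionalFieldSerialization {
    static void writeOptionalFields(DataOutputStream out, Map<String, Long> fields) throws IOException {
        out.writeBoolean(fields != null);
        if (fields != null) {
            out.writeInt(fields.size());
            for (Map.Entry<String, Long> entry : fields.entrySet()) {
                out.writeUTF(entry.getKey());
                out.writeLong(entry.getValue());
            }
        }
    }

    static Map<String, Long> readOptionalFields(DataInputStream in) throws IOException {
        if (in.readBoolean() == false) {
            return null;
        }
        int size = in.readInt();
        Map<String, Long> fields = new HashMap<>(size);
        for (int i = 0; i < size; i++) {
            fields.put(in.readUTF(), in.readLong());
        }
        return fields;
    }

    public static void main(String[] args) throws IOException {
        Map<String, Long> fields = new HashMap<>();
        fields.put("title", 1024L);

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        writeOptionalFields(new DataOutputStream(bytes), fields);
        Map<String, Long> roundTripped =
            readOptionalFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        System.out.println(roundTripped); // {title=1024}
    }
}
```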
builder.byteSizeField(Fields.MEMORY_SIZE_IN_BYTES, Fields.MEMORY_SIZE, memorySize); - builder.field(Fields.EVICTIONS, getEvictions()); + builder.startObject(FIELDDATA); + builder.byteSizeField(MEMORY_SIZE_IN_BYTES, MEMORY_SIZE, memorySize); + builder.field(EVICTIONS, getEvictions()); if (fields != null) { - builder.startObject(Fields.FIELDS); - assert !fields.containsKey(null); - final Object[] keys = fields.keys; - final long[] values = fields.values; - for (int i = 0; i < keys.length; i++) { - if (keys[i] != null) { - builder.startObject((String) keys[i]); - builder.byteSizeField(Fields.MEMORY_SIZE_IN_BYTES, Fields.MEMORY_SIZE, values[i]); - builder.endObject(); - } - } - builder.endObject(); + fields.toXContent(builder, FIELDS, MEMORY_SIZE_IN_BYTES, MEMORY_SIZE); } builder.endObject(); return builder; } - static final class Fields { - static final String FIELDDATA = "fielddata"; - static final String MEMORY_SIZE = "memory_size"; - static final String MEMORY_SIZE_IN_BYTES = "memory_size_in_bytes"; - static final String EVICTIONS = "evictions"; - static final String FIELDS = "fields"; + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + FieldDataStats that = (FieldDataStats) o; + return memorySize == that.memorySize && + evictions == that.evictions && + Objects.equals(fields, that.fields); + } + + @Override + public int hashCode() { + return Objects.hash(memorySize, evictions, fields); } } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/IndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/IndexFieldData.java index feacfe5999695..0b63dfb8df80a 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/IndexFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/IndexFieldData.java @@ -52,7 +52,7 @@ */ public interface IndexFieldData extends IndexComponent { - public static class CommonSettings { + class CommonSettings { public static final String SETTING_MEMORY_STORAGE_HINT = "memory_storage_hint"; public enum MemoryStorageFormat { @@ -85,9 +85,9 @@ public static MemoryStorageFormat fromString(String string) { FD loadDirect(LeafReaderContext context) throws Exception; /** - * Comparator used for sorting. + * Returns the {@link SortField} to used for sorting. */ - XFieldComparatorSource comparatorSource(@Nullable Object missingValue, MultiValueMode sortMode, Nested nested); + SortField sortField(@Nullable Object missingValue, MultiValueMode sortMode, Nested nested, boolean reverse); /** * Clears any resources associated with this field data. @@ -136,17 +136,17 @@ public DocIdSetIterator innerDocs(LeafReaderContext ctx) throws IOException { } /** Whether missing values should be sorted first. */ - protected final boolean sortMissingFirst(Object missingValue) { + public final boolean sortMissingFirst(Object missingValue) { return "_first".equals(missingValue); } /** Whether missing values should be sorted last, this is the default. */ - protected final boolean sortMissingLast(Object missingValue) { + public final boolean sortMissingLast(Object missingValue) { return missingValue == null || "_last".equals(missingValue); } /** Return the missing object value according to the reduced type of the comparator. 
*/ - protected final Object missingObject(Object missingValue, boolean reversed) { + public final Object missingObject(Object missingValue, boolean reversed) { if (sortMissingFirst(missingValue) || sortMissingLast(missingValue)) { final boolean min = sortMissingFirst(missingValue) ^ reversed; switch (reducedType()) { diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/IndexFieldDataCache.java b/core/src/main/java/org/elasticsearch/index/fielddata/IndexFieldDataCache.java index 948b19a8afbf5..5238f06a7909c 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/IndexFieldDataCache.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/IndexFieldDataCache.java @@ -31,7 +31,7 @@ public interface IndexFieldDataCache { > FD load(LeafReaderContext context, IFD indexFieldData) throws Exception; - > IFD load(final DirectoryReader indexReader, final IFD indexFieldData) throws Exception; + > IFD load(DirectoryReader indexReader, IFD indexFieldData) throws Exception; /** * Clears all the field data stored cached in on this index. diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/IndexNumericFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/IndexNumericFieldData.java index abfe0d8e96af0..08a89eb82f3d8 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/IndexNumericFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/IndexNumericFieldData.java @@ -23,7 +23,7 @@ */ public interface IndexNumericFieldData extends IndexFieldData { - public static enum NumericType { + enum NumericType { BOOLEAN(false), BYTE(false), SHORT(false), @@ -35,7 +35,7 @@ public static enum NumericType { private final boolean floatingPoint; - private NumericType(boolean floatingPoint) { + NumericType(boolean floatingPoint) { this.floatingPoint = floatingPoint; } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/IndexOrdinalsFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/IndexOrdinalsFieldData.java index cb1471179c2d7..2e714fc80a12b 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/IndexOrdinalsFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/IndexOrdinalsFieldData.java @@ -21,7 +21,7 @@ import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.index.IndexReader; - +import org.apache.lucene.index.MultiDocValues; /** @@ -42,4 +42,9 @@ public interface IndexOrdinalsFieldData extends IndexFieldData.Global { - - /** - * Load a global view of the ordinals for the given {@link IndexReader}, - * potentially from a cache. - */ - @Override - IndexParentChildFieldData loadGlobal(DirectoryReader indexReader); - - /** - * Load a global view of the ordinals for the given {@link IndexReader}. 
- */ - @Override - IndexParentChildFieldData localGlobalDirect(DirectoryReader indexReader) throws Exception; - -} diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/ScriptDocValues.java b/core/src/main/java/org/elasticsearch/index/fielddata/ScriptDocValues.java index 3991a37a8bf2c..a1e62979cae6e 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/ScriptDocValues.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/ScriptDocValues.java @@ -25,33 +25,62 @@ import org.elasticsearch.common.geo.GeoHashUtils; import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.common.geo.GeoUtils; -import org.elasticsearch.common.unit.DistanceUnit; +import org.joda.time.DateTime; import org.joda.time.DateTimeZone; import org.joda.time.MutableDateTime; import org.joda.time.ReadableDateTime; import java.util.AbstractList; -import java.util.Collections; +import java.util.Comparator; import java.util.List; +import java.util.function.UnaryOperator; /** * Script level doc values, the assumption is that any implementation will implement a getValue * and a getValues that return the relevant type that then can be used in scripts. */ -public interface ScriptDocValues extends List { +public abstract class ScriptDocValues extends AbstractList { /** * Set the current doc ID. */ - void setNextDocId(int docId); + public abstract void setNextDocId(int docId); /** * Return a copy of the list of the values for the current document. */ - List getValues(); + public final List getValues() { + return this; + } + + // Throw meaningful exceptions if someone tries to modify the ScriptDocValues. + @Override + public final void add(int index, T element) { + throw new UnsupportedOperationException("doc values are unmodifiable"); + } + + @Override + public final boolean remove(Object o) { + throw new UnsupportedOperationException("doc values are unmodifiable"); + } + + @Override + public final void replaceAll(UnaryOperator operator) { + throw new UnsupportedOperationException("doc values are unmodifiable"); + } - public static final class Strings extends AbstractList implements ScriptDocValues { + @Override + public final T set(int index, T element) { + throw new UnsupportedOperationException("doc values are unmodifiable"); + } + + @Override + public final void sort(Comparator c) { + throw new UnsupportedOperationException("doc values are unmodifiable"); + } + + public static final class Strings extends ScriptDocValues { private final SortedBinaryDocValues values; @@ -85,11 +114,6 @@ public String getValue() { } } - @Override - public List getValues() { - return Collections.unmodifiableList(this); - } - @Override public String get(int index) { return values.valueAt(index).utf8ToString(); @@ -102,10 +126,10 @@ public int size() { } - public static class Longs extends AbstractList implements ScriptDocValues { + public static final class Longs extends ScriptDocValues { private final SortedNumericDocValues values; - private final MutableDateTime date = new MutableDateTime(0, DateTimeZone.UTC); + private Dates dates; public Longs(SortedNumericDocValues values) { this.values = values; @@ -114,6 +138,9 @@ public Longs(SortedNumericDocValues values) { @Override public void setNextDocId(int docId) { values.setDocument(docId); + if (dates != null) { + dates.refreshArray(); + } } public SortedNumericDocValues getInternalValues() { @@ -128,14 +155,20 @@ public long getValue() { return values.valueAt(0); } - @Override - public List getValues() { - return Collections.unmodifiableList(this); + 
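`ScriptDocValues` changes from an interface to an abstract class extending `AbstractList` whose mutating methods throw, so scripts receive a read-only `List` view and `getValues()` can simply return `this`. A self-contained sketch of the same pattern over a plain backing array:

```java
import java.util.AbstractList;
import java.util.List;

// Sketch: a read-only List view over an existing backing array, in the spirit of the
// ScriptDocValues change above. AbstractList already rejects mutation by default;
// overriding the methods explicitly yields a clearer error message.
final class ReadOnlyLongs extends AbstractList<Long> {
    private final long[] backing;

    ReadOnlyLongs(long[] backing) {
        this.backing = backing;
    }

    @Override
    public Long get(int index) {
        return backing[index];
    }

    @Override
    public int size() {
        return backing.length;
    }

    @Override
    public Long set(int index, Long element) {
        throw new UnsupportedOperationException("doc values are unmodifiable");
    }

    @Override
    public void add(int index, Long element) {
        throw new UnsupportedOperationException("doc values are unmodifiable");
    }

    public static void main(String[] args) {
        List<Long> values = new ReadOnlyLongs(new long[] {1, 2, 3});
        System.out.println(values);            // [1, 2, 3]
        try {
            values.set(0, 9L);
        } catch (UnsupportedOperationException e) {
            System.out.println(e.getMessage()); // doc values are unmodifiable
        }
    }
}
```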
public ReadableDateTime getDate() { + if (dates == null) { + dates = new Dates(values); + dates.refreshArray(); + } + return dates.getValue(); } - public ReadableDateTime getDate() { - date.setMillis(getValue()); - return date; + public List getDates() { + if (dates == null) { + dates = new Dates(values); + dates.refreshArray(); + } + return dates; } @Override @@ -147,10 +180,87 @@ public Long get(int index) { public int size() { return values.count(); } + } + + public static final class Dates extends ScriptDocValues { + private static final ReadableDateTime EPOCH = new DateTime(0, DateTimeZone.UTC); + + private final SortedNumericDocValues values; + /** + * Values wrapped in {@link MutableDateTime}. Null by default an allocated on first usage so we allocate a reasonably size. We keep + * this array so we don't have allocate new {@link MutableDateTime}s on every usage. Instead we reuse them for every document. + */ + private MutableDateTime[] dates; + + public Dates(SortedNumericDocValues values) { + this.values = values; + } + /** + * Fetch the first field value or 0 millis after epoch if there are no values. + */ + public ReadableDateTime getValue() { + if (values.count() == 0) { + return EPOCH; + } + return get(0); + } + + @Override + public ReadableDateTime get(int index) { + if (index >= values.count()) { + throw new IndexOutOfBoundsException( + "attempted to fetch the [" + index + "] date when there are only [" + values.count() + "] dates."); + } + return dates[index]; + } + + @Override + public int size() { + return values.count(); + } + + @Override + public void setNextDocId(int docId) { + values.setDocument(docId); + refreshArray(); + } + + /** + * Refresh the backing array. Package private so it can be called when {@link Longs} loads dates. + */ + void refreshArray() { + if (values.count() == 0) { + return; + } + if (dates == null) { + // Happens for the document. We delay allocating dates so we can allocate it with a reasonable size. + dates = new MutableDateTime[values.count()]; + for (int i = 0; i < dates.length; i++) { + dates[i] = new MutableDateTime(values.valueAt(i), DateTimeZone.UTC); + } + return; + } + if (values.count() > dates.length) { + // Happens when we move to a new document and it has more dates than any documents before it. 
+ MutableDateTime[] backup = dates; + dates = new MutableDateTime[values.count()]; + System.arraycopy(backup, 0, dates, 0, backup.length); + for (int i = 0; i < backup.length; i++) { + dates[i].setMillis(values.valueAt(i)); + } + for (int i = backup.length; i < dates.length; i++) { + dates[i] = new MutableDateTime(values.valueAt(i), DateTimeZone.UTC); + } + return; + } + for (int i = 0; i < values.count(); i++) { + dates[i].setMillis(values.valueAt(i)); + } + } } - public static class Doubles extends AbstractList implements ScriptDocValues { + public static final class Doubles extends ScriptDocValues { private final SortedNumericDoubleValues values; @@ -175,11 +285,6 @@ public double getValue() { return values.valueAt(0); } - @Override - public List getValues() { - return Collections.unmodifiableList(this); - } - @Override public Double get(int index) { return values.valueAt(index); @@ -191,7 +296,7 @@ public int size() { } } - class GeoPoints extends AbstractList implements ScriptDocValues { + public static final class GeoPoints extends ScriptDocValues { private final MultiGeoPointValues values; @@ -238,11 +343,6 @@ public double getLon() { return getValue().lon(); } - @Override - public List getValues() { - return Collections.unmodifiableList(this); - } - @Override public GeoPoint get(int index) { final GeoPoint point = values.valueAt(index); @@ -291,4 +391,69 @@ public double geohashDistanceWithDefault(String geohash, double defaultValue) { return geohashDistance(geohash); } } + + public static final class Booleans extends ScriptDocValues { + + private final SortedNumericDocValues values; + + public Booleans(SortedNumericDocValues values) { + this.values = values; + } + + @Override + public void setNextDocId(int docId) { + values.setDocument(docId); + } + + public boolean getValue() { + return values.count() != 0 && values.valueAt(0) == 1; + } + + @Override + public Boolean get(int index) { + return values.valueAt(index) == 1; + } + + @Override + public int size() { + return values.count(); + } + + } + + public static final class BytesRefs extends ScriptDocValues { + + private final SortedBinaryDocValues values; + + public BytesRefs(SortedBinaryDocValues values) { + this.values = values; + } + + @Override + public void setNextDocId(int docId) { + values.setDocument(docId); + } + + public SortedBinaryDocValues getInternalValues() { + return this.values; + } + + public BytesRef getValue() { + int numValues = values.count(); + if (numValues == 0) { + return new BytesRef(); + } + return values.valueAt(0); + } + + @Override + public BytesRef get(int index) { + return values.valueAt(index); + } + + @Override + public int size() { + return values.count(); + } + } } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/ShardFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/ShardFieldData.java index 9e21562e8c722..73c6cff81701a 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/ShardFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/ShardFieldData.java @@ -21,6 +21,7 @@ import com.carrotsearch.hppc.ObjectLongHashMap; import org.apache.lucene.util.Accountable; +import org.elasticsearch.common.FieldMemoryStats; import org.elasticsearch.common.metrics.CounterMetric; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.util.concurrent.ConcurrentCollections; @@ -47,7 +48,8 @@ public FieldDataStats stats(String... 
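`Dates.refreshArray()` above allocates its `MutableDateTime[]` lazily, grows it only when a document carries more values than any previous one, and otherwise reuses the already-allocated mutable instances. A dependency-free sketch of that reuse-and-grow strategy, with a simple mutable holder standing in for joda-time's `MutableDateTime`:

```java
import java.util.Arrays;

// Sketch of refreshArray()'s allocation strategy: allocate lazily, reuse existing
// mutable holders for each new document, and only grow the array when a document
// carries more values than any seen before.
final class ReusableTimestamps {
    static final class MutableMillis {
        long millis;
        MutableMillis(long millis) { this.millis = millis; }
        @Override public String toString() { return Long.toString(millis); }
    }

    private MutableMillis[] holders;
    private int count;

    void setNextDocValues(long[] docValues) {
        count = docValues.length;
        if (count == 0) {
            return;
        }
        if (holders == null) {
            holders = new MutableMillis[count];
            for (int i = 0; i < count; i++) {
                holders[i] = new MutableMillis(docValues[i]);
            }
            return;
        }
        if (count > holders.length) {
            int oldLength = holders.length;
            holders = Arrays.copyOf(holders, count);
            for (int i = oldLength; i < count; i++) {
                holders[i] = new MutableMillis(0);
            }
        }
        for (int i = 0; i < count; i++) {
            holders[i].millis = docValues[i]; // reuse, no new allocation
        }
    }

    public static void main(String[] args) {
        ReusableTimestamps timestamps = new ReusableTimestamps();
        timestamps.setNextDocValues(new long[] {1000});
        timestamps.setNextDocValues(new long[] {2000, 3000}); // grows once, reuses holders[0]
        System.out.println(Arrays.toString(Arrays.copyOf(timestamps.holders, timestamps.count)));
    }
}
```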
fields) { } } } - return new FieldDataStats(totalMetric.count(), evictionsMetric.count(), fieldTotals); + return new FieldDataStats(totalMetric.count(), evictionsMetric.count(), fieldTotals == null ? null : + new FieldMemoryStats(fieldTotals)); } @Override diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/SingletonSortedNumericDoubleValues.java b/core/src/main/java/org/elasticsearch/index/fielddata/SingletonSortedNumericDoubleValues.java index 6c3c5a4f1610c..4207ac73a1a0c 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/SingletonSortedNumericDoubleValues.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/SingletonSortedNumericDoubleValues.java @@ -34,7 +34,7 @@ final class SingletonSortedNumericDoubleValues extends SortedNumericDoubleValues private double value; private int count; - public SingletonSortedNumericDoubleValues(NumericDoubleValues in, Bits docsWithField) { + SingletonSortedNumericDoubleValues(NumericDoubleValues in, Bits docsWithField) { this.in = in; this.docsWithField = docsWithField instanceof MatchAllBits ? null : docsWithField; } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/UidIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/UidIndexFieldData.java new file mode 100644 index 0000000000000..8673e9b21640e --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/fielddata/UidIndexFieldData.java @@ -0,0 +1,175 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.fielddata; + +import org.apache.lucene.index.DirectoryReader; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.RandomAccessOrds; +import org.apache.lucene.index.MultiDocValues; +import org.apache.lucene.search.SortField; +import org.apache.lucene.util.BytesRef; +import org.apache.lucene.util.BytesRefBuilder; +import org.elasticsearch.index.Index; +import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; +import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource; +import org.elasticsearch.index.fielddata.plain.AbstractAtomicOrdinalsFieldData; +import org.elasticsearch.index.mapper.UidFieldMapper; +import org.elasticsearch.search.MultiValueMode; + +/** Fielddata view of the _uid field on indices that do not index _uid but _id. + * It gets fielddata on the {@code _id field}, which is in-memory since the _id + * field does not have doc values, and prepends {@code ${type}#} to all values. + * Note that it does not add memory compared to what fielddata on the _id is + * already using: this is just a view. + * TODO: Remove fielddata access on _uid and _id, or add doc values to _id. 
+ */ +public final class UidIndexFieldData implements IndexOrdinalsFieldData { + + private final Index index; + private final String type; + private final BytesRef prefix; + private final IndexOrdinalsFieldData idFieldData; + + public UidIndexFieldData(Index index, String type, IndexOrdinalsFieldData idFieldData) { + this.index = index; + this.type = type; + BytesRefBuilder prefix = new BytesRefBuilder(); + prefix.append(new BytesRef(type)); + prefix.append((byte) '#'); + this.prefix = prefix.toBytesRef(); + this.idFieldData = idFieldData; + } + + @Override + public Index index() { + return index; + } + + @Override + public String getFieldName() { + return UidFieldMapper.NAME; + } + + @Override + public SortField sortField(Object missingValue, MultiValueMode sortMode, Nested nested, boolean reverse) { + XFieldComparatorSource source = new BytesRefFieldComparatorSource(this, missingValue, sortMode, nested); + return new SortField(getFieldName(), source, reverse); + } + + @Override + public AtomicOrdinalsFieldData load(LeafReaderContext context) { + return new UidAtomicFieldData(prefix, idFieldData.load(context)); + } + + @Override + public AtomicOrdinalsFieldData loadDirect(LeafReaderContext context) throws Exception { + return new UidAtomicFieldData(prefix, idFieldData.loadDirect(context)); + } + + @Override + public void clear() { + idFieldData.clear(); + } + + @Override + public IndexOrdinalsFieldData loadGlobal(DirectoryReader indexReader) { + return new UidIndexFieldData(index, type, idFieldData.loadGlobal(indexReader)); + } + + @Override + public IndexOrdinalsFieldData localGlobalDirect(DirectoryReader indexReader) throws Exception { + return new UidIndexFieldData(index, type, idFieldData.localGlobalDirect(indexReader)); + } + + @Override + public MultiDocValues.OrdinalMap getOrdinalMap() { + return idFieldData.getOrdinalMap(); + } + + static final class UidAtomicFieldData implements AtomicOrdinalsFieldData { + + private final BytesRef prefix; + private final AtomicOrdinalsFieldData idFieldData; + + UidAtomicFieldData(BytesRef prefix, AtomicOrdinalsFieldData idFieldData) { + this.prefix = prefix; + this.idFieldData = idFieldData; + } + + @Override + public ScriptDocValues getScriptValues() { + return AbstractAtomicOrdinalsFieldData.DEFAULT_SCRIPT_FUNCTION.apply(getOrdinalsValues()); + } + + @Override + public SortedBinaryDocValues getBytesValues() { + return FieldData.toString(getOrdinalsValues()); + } + + @Override + public long ramBytesUsed() { + return 0; // simple wrapper + } + + @Override + public void close() { + idFieldData.close(); + } + + @Override + public RandomAccessOrds getOrdinalsValues() { + RandomAccessOrds idValues = idFieldData.getOrdinalsValues(); + return new AbstractRandomAccessOrds() { + + private final BytesRefBuilder scratch = new BytesRefBuilder(); + + @Override + public BytesRef lookupOrd(long ord) { + scratch.setLength(0); + scratch.append(prefix); + scratch.append(idValues.lookupOrd(ord)); + return scratch.get(); + } + + @Override + public long getValueCount() { + return idValues.getValueCount(); + } + + @Override + protected void doSetDocument(int docID) { + idValues.setDocument(docID); + } + + @Override + public long ordAt(int index) { + return idValues.ordAt(index); + } + + @Override + public int cardinality() { + return idValues.cardinality(); + } + }; + } + + } + +} diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/BytesRefFieldComparatorSource.java 
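`UidIndexFieldData` is a view over the `_id` fielddata that prepends `{type}#` to every value instead of materializing a second copy. A much-simplified sketch of such a prefixing view over plain strings (none of the Lucene ordinals machinery is reproduced here):

```java
import java.util.AbstractList;
import java.util.Arrays;
import java.util.List;

// Sketch: a zero-copy view that prepends "type#" to values from a delegate list,
// mirroring how UidIndexFieldData builds _uid values on top of _id fielddata.
final class PrefixedView extends AbstractList<String> {
    private final String prefix;
    private final List<String> ids;
    private final StringBuilder scratch = new StringBuilder(); // reused per lookup (single-threaded sketch)

    PrefixedView(String type, List<String> ids) {
        this.prefix = type + "#";
        this.ids = ids;
    }

    @Override
    public String get(int index) {
        scratch.setLength(0);
        return scratch.append(prefix).append(ids.get(index)).toString();
    }

    @Override
    public int size() {
        return ids.size();
    }

    public static void main(String[] args) {
        List<String> uids = new PrefixedView("tweet", Arrays.asList("1", "2"));
        System.out.println(uids); // [tweet#1, tweet#2]
    }
}
```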
b/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/BytesRefFieldComparatorSource.java index 51f8f2b42bd1f..48b6a1127a5e0 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/BytesRefFieldComparatorSource.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/BytesRefFieldComparatorSource.java @@ -79,7 +79,7 @@ protected SortedBinaryDocValues getValues(LeafReaderContext context) throws IOEx protected void setScorer(Scorer scorer) {} @Override - public FieldComparator newComparator(String fieldname, int numHits, int sortPos, boolean reversed) throws IOException { + public FieldComparator newComparator(String fieldname, int numHits, int sortPos, boolean reversed) { assert indexFieldData == null || fieldname.equals(indexFieldData.getFieldName()); final boolean sortMissingLast = sortMissingLast(missingValue) ^ reversed; diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/DoubleValuesComparatorSource.java b/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/DoubleValuesComparatorSource.java index 4684399a23d68..390a5493e273d 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/DoubleValuesComparatorSource.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/DoubleValuesComparatorSource.java @@ -64,7 +64,7 @@ protected SortedNumericDoubleValues getValues(LeafReaderContext context) throws protected void setScorer(Scorer scorer) {} @Override - public FieldComparator newComparator(String fieldname, int numHits, int sortPos, boolean reversed) throws IOException { + public FieldComparator newComparator(String fieldname, int numHits, int sortPos, boolean reversed) { assert indexFieldData == null || fieldname.equals(indexFieldData.getFieldName()); final double dMissingValue = (Double) missingObject(missingValue, reversed); diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/FloatValuesComparatorSource.java b/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/FloatValuesComparatorSource.java index ba9b031cede3e..0546a5e5e8ea7 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/FloatValuesComparatorSource.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/FloatValuesComparatorSource.java @@ -56,7 +56,7 @@ public SortField.Type reducedType() { } @Override - public FieldComparator newComparator(String fieldname, int numHits, int sortPos, boolean reversed) throws IOException { + public FieldComparator newComparator(String fieldname, int numHits, int sortPos, boolean reversed) { assert indexFieldData == null || fieldname.equals(indexFieldData.getFieldName()); final float dMissingValue = (Float) missingObject(missingValue, reversed); diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/LongValuesComparatorSource.java b/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/LongValuesComparatorSource.java index b2fd25e54457b..d652673308594 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/LongValuesComparatorSource.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/LongValuesComparatorSource.java @@ -55,7 +55,7 @@ public SortField.Type reducedType() { } @Override - public FieldComparator newComparator(String fieldname, int numHits, int sortPos, boolean reversed) throws IOException { + public FieldComparator newComparator(String 
fieldname, int numHits, int sortPos, boolean reversed) { assert indexFieldData == null || fieldname.equals(indexFieldData.getFieldName()); final Long dMissingValue = (Long) missingObject(missingValue, reversed); diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsBuilder.java b/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsBuilder.java index aaecf2fa89609..f4b79105aadb8 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsBuilder.java @@ -31,6 +31,7 @@ import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.fielddata.AtomicOrdinalsFieldData; import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData; +import org.elasticsearch.index.fielddata.ScriptDocValues; import org.elasticsearch.index.fielddata.plain.AbstractAtomicOrdinalsFieldData; import org.elasticsearch.indices.breaker.CircuitBreakerService; @@ -38,6 +39,7 @@ import java.util.Collection; import java.util.Collections; import java.util.concurrent.TimeUnit; +import java.util.function.Function; /** * Utility class to build global ordinals. @@ -48,7 +50,9 @@ public enum GlobalOrdinalsBuilder { /** * Build global ordinals for the provided {@link IndexReader}. */ - public static IndexOrdinalsFieldData build(final IndexReader indexReader, IndexOrdinalsFieldData indexFieldData, IndexSettings indexSettings, CircuitBreakerService breakerService, Logger logger) throws IOException { + public static IndexOrdinalsFieldData build(final IndexReader indexReader, IndexOrdinalsFieldData indexFieldData, + IndexSettings indexSettings, CircuitBreakerService breakerService, Logger logger, + Function> scriptFunction) throws IOException { assert indexReader.leaves().size() > 1; long startTimeNS = System.nanoTime(); @@ -70,8 +74,8 @@ public static IndexOrdinalsFieldData build(final IndexReader indexReader, IndexO new TimeValue(System.nanoTime() - startTimeNS, TimeUnit.NANOSECONDS) ); } - return new InternalGlobalOrdinalsIndexFieldData(indexSettings, indexFieldData.getFieldName(), - atomicFD, ordinalMap, memorySizeInBytes + return new GlobalOrdinalsIndexFieldData(indexSettings, indexFieldData.getFieldName(), + atomicFD, ordinalMap, memorySizeInBytes, scriptFunction ); } @@ -81,7 +85,7 @@ public static IndexOrdinalsFieldData buildEmpty(IndexSettings indexSettings, fin final AtomicOrdinalsFieldData[] atomicFD = new AtomicOrdinalsFieldData[indexReader.leaves().size()]; final RandomAccessOrds[] subs = new RandomAccessOrds[indexReader.leaves().size()]; for (int i = 0; i < indexReader.leaves().size(); ++i) { - atomicFD[i] = new AbstractAtomicOrdinalsFieldData() { + atomicFD[i] = new AbstractAtomicOrdinalsFieldData(AbstractAtomicOrdinalsFieldData.DEFAULT_SCRIPT_FUNCTION) { @Override public RandomAccessOrds getOrdinalsValues() { return DocValues.emptySortedSet(); @@ -104,8 +108,8 @@ public void close() { subs[i] = atomicFD[i].getOrdinalsValues(); } final OrdinalMap ordinalMap = OrdinalMap.build(null, subs, PackedInts.DEFAULT); - return new InternalGlobalOrdinalsIndexFieldData(indexSettings, indexFieldData.getFieldName(), - atomicFD, ordinalMap, 0 + return new GlobalOrdinalsIndexFieldData(indexSettings, indexFieldData.getFieldName(), + atomicFD, ordinalMap, 0, AbstractAtomicOrdinalsFieldData.DEFAULT_SCRIPT_FUNCTION ); } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsIndexFieldData.java 
b/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsIndexFieldData.java index 3e75620400247..b29ae47a14215 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsIndexFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsIndexFieldData.java @@ -20,6 +20,9 @@ import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.MultiDocValues; +import org.apache.lucene.index.RandomAccessOrds; +import org.apache.lucene.search.SortField; import org.apache.lucene.util.Accountable; import org.elasticsearch.common.Nullable; import org.elasticsearch.index.AbstractIndexComponent; @@ -28,23 +31,39 @@ import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData; +import org.elasticsearch.index.fielddata.ScriptDocValues; +import org.elasticsearch.index.fielddata.plain.AbstractAtomicOrdinalsFieldData; import org.elasticsearch.search.MultiValueMode; import java.util.Collection; import java.util.Collections; +import java.util.function.Function; /** * {@link IndexFieldData} base class for concrete global ordinals implementations. */ -public abstract class GlobalOrdinalsIndexFieldData extends AbstractIndexComponent implements IndexOrdinalsFieldData, Accountable { +public class GlobalOrdinalsIndexFieldData extends AbstractIndexComponent implements IndexOrdinalsFieldData, Accountable { private final String fieldName; private final long memorySizeInBytes; - protected GlobalOrdinalsIndexFieldData(IndexSettings indexSettings, String fieldName, long memorySizeInBytes) { + private final MultiDocValues.OrdinalMap ordinalMap; + private final Atomic[] atomicReaders; + private final Function> scriptFunction; + + + protected GlobalOrdinalsIndexFieldData(IndexSettings indexSettings, String fieldName, AtomicOrdinalsFieldData[] segmentAfd, + MultiDocValues.OrdinalMap ordinalMap, long memorySizeInBytes, Function> scriptFunction) { super(indexSettings); this.fieldName = fieldName; this.memorySizeInBytes = memorySizeInBytes; + this.ordinalMap = ordinalMap; + this.atomicReaders = new Atomic[segmentAfd.length]; + for (int i = 0; i < segmentAfd.length; i++) { + atomicReaders[i] = new Atomic(segmentAfd[i], ordinalMap, i); + } + this.scriptFunction = scriptFunction; } @Override @@ -68,7 +87,7 @@ public String getFieldName() { } @Override - public XFieldComparatorSource comparatorSource(@Nullable Object missingValue, MultiValueMode sortMode, Nested nested) { + public SortField sortField(@Nullable Object missingValue, MultiValueMode sortMode, Nested nested, boolean reverse) { throw new UnsupportedOperationException("no global ordinals sorting yet"); } @@ -87,4 +106,57 @@ public Collection getChildResources() { // TODO: break down ram usage? 
return Collections.emptyList(); } + + @Override + public AtomicOrdinalsFieldData load(LeafReaderContext context) { + return atomicReaders[context.ord]; + } + + @Override + public MultiDocValues.OrdinalMap getOrdinalMap() { + return ordinalMap; + } + + private final class Atomic extends AbstractAtomicOrdinalsFieldData { + + private final AtomicOrdinalsFieldData afd; + private final MultiDocValues.OrdinalMap ordinalMap; + private final int segmentIndex; + + private Atomic(AtomicOrdinalsFieldData afd, MultiDocValues.OrdinalMap ordinalMap, int segmentIndex) { + super(scriptFunction); + this.afd = afd; + this.ordinalMap = ordinalMap; + this.segmentIndex = segmentIndex; + } + + @Override + public RandomAccessOrds getOrdinalsValues() { + final RandomAccessOrds values = afd.getOrdinalsValues(); + if (values.getValueCount() == ordinalMap.getValueCount()) { + // segment ordinals match global ordinals + return values; + } + final RandomAccessOrds[] bytesValues = new RandomAccessOrds[atomicReaders.length]; + for (int i = 0; i < bytesValues.length; i++) { + bytesValues[i] = atomicReaders[i].afd.getOrdinalsValues(); + } + return new GlobalOrdinalMapping(ordinalMap, bytesValues, segmentIndex); + } + + @Override + public long ramBytesUsed() { + return afd.ramBytesUsed(); + } + + @Override + public Collection getChildResources() { + return afd.getChildResources(); + } + + @Override + public void close() { + } + + } } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/InternalGlobalOrdinalsIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/InternalGlobalOrdinalsIndexFieldData.java deleted file mode 100644 index 5b8ef83b10e03..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/InternalGlobalOrdinalsIndexFieldData.java +++ /dev/null @@ -1,93 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.index.fielddata.ordinals; - -import org.apache.lucene.index.LeafReaderContext; -import org.apache.lucene.index.MultiDocValues.OrdinalMap; -import org.apache.lucene.index.RandomAccessOrds; -import org.apache.lucene.util.Accountable; -import org.elasticsearch.index.IndexSettings; -import org.elasticsearch.index.fielddata.AtomicOrdinalsFieldData; -import org.elasticsearch.index.fielddata.plain.AbstractAtomicOrdinalsFieldData; - -import java.util.Collection; - -/** - * {@link org.elasticsearch.index.fielddata.IndexFieldData} impl based on global ordinals. 
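The `Atomic` wrapper above returns the segment's own ordinals when they already match the global ones and otherwise remaps them through the `OrdinalMap`. A self-contained sketch of what that mapping amounts to: merge per-segment sorted term dictionaries into one global term space and build, per segment, a local-ordinal-to-global-ordinal table (plain Java, illustrative only):

```java
import java.util.Arrays;
import java.util.List;
import java.util.TreeSet;

// Sketch: per-segment term dictionaries are merged into one global sorted term space,
// and each segment gets a table mapping its local ordinal to the global ordinal --
// the role MultiDocValues.OrdinalMap plays for the Atomic wrapper above.
public class GlobalOrdinalsSketch {
    public static void main(String[] args) {
        List<String[]> segments = Arrays.asList(
            new String[] {"apple", "kiwi"},    // segment 0, sorted terms
            new String[] {"banana", "kiwi"});  // segment 1, sorted terms

        TreeSet<String> globalTerms = new TreeSet<>();
        for (String[] segment : segments) {
            globalTerms.addAll(Arrays.asList(segment));
        }
        String[] global = globalTerms.toArray(new String[0]);

        // segmentOrdToGlobalOrd[segment][localOrd] -> global ordinal
        long[][] segmentOrdToGlobalOrd = new long[segments.size()][];
        for (int s = 0; s < segments.size(); s++) {
            String[] terms = segments.get(s);
            segmentOrdToGlobalOrd[s] = new long[terms.length];
            for (int localOrd = 0; localOrd < terms.length; localOrd++) {
                segmentOrdToGlobalOrd[s][localOrd] = Arrays.binarySearch(global, terms[localOrd]);
            }
        }

        System.out.println("global terms: " + Arrays.toString(global));
        System.out.println("segment 1 mapping: " + Arrays.toString(segmentOrdToGlobalOrd[1]));
        // segment 1's "kiwi" resolves to the same global ordinal as segment 0's "kiwi"
    }
}
```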
- */ -final class InternalGlobalOrdinalsIndexFieldData extends GlobalOrdinalsIndexFieldData { - - private final Atomic[] atomicReaders; - - InternalGlobalOrdinalsIndexFieldData(IndexSettings indexSettings, String fieldName, AtomicOrdinalsFieldData[] segmentAfd, OrdinalMap ordinalMap, long memorySizeInBytes) { - super(indexSettings, fieldName, memorySizeInBytes); - this.atomicReaders = new Atomic[segmentAfd.length]; - for (int i = 0; i < segmentAfd.length; i++) { - atomicReaders[i] = new Atomic(segmentAfd[i], ordinalMap, i); - } - } - - @Override - public AtomicOrdinalsFieldData load(LeafReaderContext context) { - return atomicReaders[context.ord]; - } - - private final class Atomic extends AbstractAtomicOrdinalsFieldData { - - private final AtomicOrdinalsFieldData afd; - private final OrdinalMap ordinalMap; - private final int segmentIndex; - - private Atomic(AtomicOrdinalsFieldData afd, OrdinalMap ordinalMap, int segmentIndex) { - this.afd = afd; - this.ordinalMap = ordinalMap; - this.segmentIndex = segmentIndex; - } - - @Override - public RandomAccessOrds getOrdinalsValues() { - final RandomAccessOrds values = afd.getOrdinalsValues(); - if (values.getValueCount() == ordinalMap.getValueCount()) { - // segment ordinals match global ordinals - return values; - } - final RandomAccessOrds[] bytesValues = new RandomAccessOrds[atomicReaders.length]; - for (int i = 0; i < bytesValues.length; i++) { - bytesValues[i] = atomicReaders[i].afd.getOrdinalsValues(); - } - return new GlobalOrdinalMapping(ordinalMap, bytesValues, segmentIndex); - } - - @Override - public long ramBytesUsed() { - return afd.ramBytesUsed(); - } - - @Override - public Collection getChildResources() { - return afd.getChildResources(); - } - - @Override - public void close() { - } - - } - -} diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/SinglePackedOrdinals.java b/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/SinglePackedOrdinals.java index aa09bac4dcf82..747a33900b18d 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/SinglePackedOrdinals.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/SinglePackedOrdinals.java @@ -69,7 +69,7 @@ private static class Docs extends SortedDocValues { private final PackedInts.Reader reader; private final ValuesHolder values; - public Docs(SinglePackedOrdinals parent, ValuesHolder values) { + Docs(SinglePackedOrdinals parent, ValuesHolder values) { this.maxOrd = parent.valueCount; this.reader = parent.reader; this.values = values; diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicOrdinalsFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicOrdinalsFieldData.java index 51de4c1be53dc..1a6e5d0886cfe 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicOrdinalsFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicOrdinalsFieldData.java @@ -29,15 +29,26 @@ import java.util.Collection; import java.util.Collections; +import java.util.function.Function; /** */ public abstract class AbstractAtomicOrdinalsFieldData implements AtomicOrdinalsFieldData { + public static final Function> DEFAULT_SCRIPT_FUNCTION = + ((Function) FieldData::toString) + .andThen(ScriptDocValues.Strings::new); + + private final Function> scriptFunction; + + protected AbstractAtomicOrdinalsFieldData(Function> scriptFunction) { + this.scriptFunction = scriptFunction; + } + @Override - public final 
ScriptDocValues getScriptValues() { - return new ScriptDocValues.Strings(getBytesValues()); + public final ScriptDocValues getScriptValues() { + return scriptFunction.apply(getOrdinalsValues()); } @Override @@ -46,7 +57,7 @@ public final SortedBinaryDocValues getBytesValues() { } public static AtomicOrdinalsFieldData empty() { - return new AbstractAtomicOrdinalsFieldData() { + return new AbstractAtomicOrdinalsFieldData(DEFAULT_SCRIPT_FUNCTION) { @Override public long ramBytesUsed() { diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicParentChildFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicParentChildFieldData.java deleted file mode 100644 index 1a801d75411fb..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicParentChildFieldData.java +++ /dev/null @@ -1,116 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.index.fielddata.plain; - -import org.apache.lucene.index.DocValues; -import org.apache.lucene.index.SortedDocValues; -import org.apache.lucene.util.Accountable; -import org.apache.lucene.util.ArrayUtil; -import org.apache.lucene.util.BytesRef; -import org.elasticsearch.index.fielddata.AtomicParentChildFieldData; -import org.elasticsearch.index.fielddata.ScriptDocValues; -import org.elasticsearch.index.fielddata.SortedBinaryDocValues; - -import java.util.Collection; -import java.util.Collections; -import java.util.Set; - -import static java.util.Collections.emptySet; - - -/** - */ -abstract class AbstractAtomicParentChildFieldData implements AtomicParentChildFieldData { - - @Override - public final ScriptDocValues getScriptValues() { - return new ScriptDocValues.Strings(getBytesValues()); - } - - @Override - public final SortedBinaryDocValues getBytesValues() { - return new SortedBinaryDocValues() { - - private final BytesRef[] terms = new BytesRef[2]; - private int count; - - @Override - public void setDocument(int docId) { - count = 0; - for (String type : types()) { - final SortedDocValues values = getOrdinalsValues(type); - final int ord = values.getOrd(docId); - if (ord >= 0) { - terms[count++] = values.lookupOrd(ord); - } - } - assert count <= 2 : "A single doc can potentially be both parent and child, so the maximum allowed values is 2"; - if (count > 1) { - int cmp = terms[0].compareTo(terms[1]); - if (cmp > 0) { - ArrayUtil.swap(terms, 0, 1); - } else if (cmp == 0) { - // If the id is the same between types the only omit one. For example: a doc has parent#1 in _uid field and has grand_parent#1 in _parent field. 
- count = 1; - } - } - } - - @Override - public int count() { - return count; - } - - @Override - public BytesRef valueAt(int index) { - return terms[index]; - } - }; - } - - public static AtomicParentChildFieldData empty() { - return new AbstractAtomicParentChildFieldData() { - - @Override - public long ramBytesUsed() { - return 0; - } - - @Override - public Collection getChildResources() { - return Collections.emptyList(); - } - - @Override - public void close() { - } - - @Override - public SortedDocValues getOrdinalsValues(String type) { - return DocValues.emptySorted(); - } - - @Override - public Set types() { - return emptySet(); - } - }; - } -} diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractGeoPointDVIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractGeoPointDVIndexFieldData.java index 23e770121a729..a9c7e7c79b4ea 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractGeoPointDVIndexFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractGeoPointDVIndexFieldData.java @@ -22,6 +22,7 @@ import org.apache.lucene.index.DocValues; import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.Version; +import org.apache.lucene.search.SortField; import org.elasticsearch.common.Nullable; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexSettings; @@ -44,7 +45,7 @@ public abstract class AbstractGeoPointDVIndexFieldData extends DocValuesIndexFie } @Override - public final XFieldComparatorSource comparatorSource(@Nullable Object missingValue, MultiValueMode sortMode, Nested nested) { + public SortField sortField(@Nullable Object missingValue, MultiValueMode sortMode, Nested nested, boolean reverse) { throw new IllegalArgumentException("can't sort on geo_point field without using specific sorting feature, like geo_distance"); } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexGeoPointFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexGeoPointFieldData.java index 90554bd130846..bdf1bbac33235 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexGeoPointFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexGeoPointFieldData.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.fielddata.plain; +import org.apache.lucene.search.SortField; import org.apache.lucene.spatial.geopoint.document.GeoPointField; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.BytesRefIterator; @@ -28,6 +29,7 @@ import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.fielddata.AtomicGeoPointFieldData; +import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; import org.elasticsearch.index.fielddata.IndexFieldDataCache; import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; @@ -99,12 +101,12 @@ public GeoPoint next() throws IOException { } } - public AbstractIndexGeoPointFieldData(IndexSettings indexSettings, String fieldName, IndexFieldDataCache cache) { + AbstractIndexGeoPointFieldData(IndexSettings indexSettings, String fieldName, IndexFieldDataCache cache) { super(indexSettings, fieldName, cache); } @Override - public final XFieldComparatorSource comparatorSource(@Nullable Object missingValue, MultiValueMode sortMode, Nested nested) { + 
public SortField sortField(@Nullable Object missingValue, MultiValueMode sortMode, Nested nested, boolean reverse) { throw new IllegalArgumentException("can't sort on geo_point field without using specific sorting feature, like geo_distance"); } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexOrdinalsFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexOrdinalsFieldData.java index 0223c1a0b63b3..1dbd082f93bc8 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexOrdinalsFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexOrdinalsFieldData.java @@ -22,20 +22,17 @@ import org.apache.lucene.index.FilteredTermsEnum; import org.apache.lucene.index.LeafReader; import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.MultiDocValues; import org.apache.lucene.index.Terms; import org.apache.lucene.index.TermsEnum; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchException; -import org.elasticsearch.common.Nullable; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.fielddata.AtomicOrdinalsFieldData; -import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; import org.elasticsearch.index.fielddata.IndexFieldDataCache; import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData; -import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource; import org.elasticsearch.index.fielddata.ordinals.GlobalOrdinalsBuilder; import org.elasticsearch.indices.breaker.CircuitBreakerService; -import org.elasticsearch.search.MultiValueMode; import java.io.IOException; @@ -56,8 +53,8 @@ protected AbstractIndexOrdinalsFieldData(IndexSettings indexSettings, String fie } @Override - public XFieldComparatorSource comparatorSource(@Nullable Object missingValue, MultiValueMode sortMode, Nested nested) { - return new BytesRefFieldComparatorSource(this, missingValue, sortMode, nested); + public MultiDocValues.OrdinalMap getOrdinalMap() { + return null; } @Override @@ -97,7 +94,8 @@ public IndexOrdinalsFieldData loadGlobal(DirectoryReader indexReader) { @Override public IndexOrdinalsFieldData localGlobalDirect(DirectoryReader indexReader) throws Exception { - return GlobalOrdinalsBuilder.build(indexReader, this, indexSettings, breakerService, logger); + return GlobalOrdinalsBuilder.build(indexReader, this, indexSettings, breakerService, logger, + AbstractAtomicOrdinalsFieldData.DEFAULT_SCRIPT_FUNCTION); } @Override @@ -131,7 +129,7 @@ private static final class FrequencyFilter extends FilteredTermsEnum { private int minFreq; private int maxFreq; - public FrequencyFilter(TermsEnum delegate, int minFreq, int maxFreq) { + FrequencyFilter(TermsEnum delegate, int minFreq, int maxFreq) { super(delegate, false); this.minFreq = minFreq; this.maxFreq = maxFreq; diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractLatLonPointDVIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractLatLonPointDVIndexFieldData.java new file mode 100644 index 0000000000000..3b4ac58e0e81e --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractLatLonPointDVIndexFieldData.java @@ -0,0 +1,98 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.index.fielddata.plain; + +import org.apache.lucene.document.LatLonDocValuesField; +import org.apache.lucene.index.DocValues; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.SortField; +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.index.Index; +import org.elasticsearch.index.IndexSettings; +import org.elasticsearch.index.fielddata.AtomicGeoPointFieldData; +import org.elasticsearch.index.fielddata.IndexFieldData; +import org.elasticsearch.index.fielddata.IndexFieldDataCache; +import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; +import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.indices.breaker.CircuitBreakerService; +import org.elasticsearch.search.MultiValueMode; + +import java.io.IOException; + +public abstract class AbstractLatLonPointDVIndexFieldData extends DocValuesIndexFieldData + implements IndexGeoPointFieldData { + AbstractLatLonPointDVIndexFieldData(Index index, String fieldName) { + super(index, fieldName); + } + + @Override + public SortField sortField(@Nullable Object missingValue, MultiValueMode sortMode, XFieldComparatorSource.Nested nested, boolean reverse) { + throw new IllegalArgumentException("can't sort on geo_point field without using specific sorting feature, like geo_distance"); + } + + public static class LatLonPointDVIndexFieldData extends AbstractLatLonPointDVIndexFieldData { + public LatLonPointDVIndexFieldData(Index index, String fieldName) { + super(index, fieldName); + } + + @Override + public AtomicGeoPointFieldData load(LeafReaderContext context) { + try { + LeafReader reader = context.reader(); + FieldInfo info = reader.getFieldInfos().fieldInfo(fieldName); + if (info != null) { + checkCompatible(info); + } + return new LatLonPointDVAtomicFieldData(DocValues.getSortedNumeric(reader, fieldName)); + } catch (IOException e) { + throw new IllegalStateException("Cannot load doc values", e); + } + } + + @Override + public AtomicGeoPointFieldData loadDirect(LeafReaderContext context) throws Exception { + return load(context); + } + + /** helper: checks a fieldinfo and throws exception if its definitely not a LatLonDocValuesField */ + static void checkCompatible(FieldInfo fieldInfo) { + // dv properties could be "unset", if you e.g. used only StoredField with this same name in the segment. 
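The `LatLonPointDVIndexFieldData` added here reads points back from doc values in which each point is packed into a single long: latitude in the upper 32 bits, longitude in the lower 32 bits (see `LatLonPointDVAtomicFieldData` later in this patch). Below is a hypothetical round-trip sketch of that packing; the helper class is invented, and the `GeoEncodingUtils` encode methods are assumed from the Lucene 6.x geo API.

```java
// Hypothetical round-trip of the packed lat/lon representation decoded by the
// new atomic field data: latitude in the upper 32 bits, longitude in the lower 32.
import org.apache.lucene.geo.GeoEncodingUtils;

final class LatLonPackingExample {

    /** Packs a point the way LatLonDocValuesField stores it in a single long. */
    static long pack(double lat, double lon) {
        long encodedLat = GeoEncodingUtils.encodeLatitude(lat);
        long encodedLon = GeoEncodingUtils.encodeLongitude(lon) & 0xFFFFFFFFL;
        return (encodedLat << 32) | encodedLon;
    }

    /** Unpacks to {latitude, longitude}, mirroring the decode in LatLonPointDVAtomicFieldData. */
    static double[] unpack(long encoded) {
        double lat = GeoEncodingUtils.decodeLatitude((int) (encoded >>> 32));
        double lon = GeoEncodingUtils.decodeLongitude((int) encoded);
        return new double[] { lat, lon };
    }
}
```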
+ if (fieldInfo.getDocValuesType() != DocValuesType.NONE + && fieldInfo.getDocValuesType() != LatLonDocValuesField.TYPE.docValuesType()) { + throw new IllegalArgumentException("field=\"" + fieldInfo.name + "\" was indexed with docValuesType=" + + fieldInfo.getDocValuesType() + " but this type has docValuesType=" + + LatLonDocValuesField.TYPE.docValuesType() + ", is the field really a LatLonDocValuesField?"); + } + } + } + + public static class Builder implements IndexFieldData.Builder { + @Override + public IndexFieldData build(IndexSettings indexSettings, MappedFieldType fieldType, IndexFieldDataCache cache, + CircuitBreakerService breakerService, MapperService mapperService) { + // ignore breaker + return new LatLonPointDVIndexFieldData(indexSettings.getIndex(), fieldType.name()); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/AtomicLongFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/AtomicLongFieldData.java index b3b0604e9e21c..c52ccb90bed8a 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/AtomicLongFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/AtomicLongFieldData.java @@ -19,28 +19,24 @@ package org.elasticsearch.index.fielddata.plain; -import org.apache.lucene.index.DocValues; -import org.apache.lucene.index.SortedNumericDocValues; -import org.apache.lucene.util.Accountable; import org.elasticsearch.index.fielddata.AtomicNumericFieldData; import org.elasticsearch.index.fielddata.FieldData; import org.elasticsearch.index.fielddata.ScriptDocValues; import org.elasticsearch.index.fielddata.SortedBinaryDocValues; import org.elasticsearch.index.fielddata.SortedNumericDoubleValues; -import java.util.Collection; -import java.util.Collections; - - /** * Specialization of {@link AtomicNumericFieldData} for integers. */ abstract class AtomicLongFieldData implements AtomicNumericFieldData { private final long ramBytesUsed; + /** True if this numeric data is for a boolean field, and so only has values 0 and 1. 
*/ + private final boolean isBoolean; - AtomicLongFieldData(long ramBytesUsed) { + AtomicLongFieldData(long ramBytesUsed, boolean isBoolean) { this.ramBytesUsed = ramBytesUsed; + this.isBoolean = isBoolean; } @Override @@ -50,7 +46,11 @@ public long ramBytesUsed() { @Override public final ScriptDocValues getScriptValues() { - return new ScriptDocValues.Longs(getLongValues()); + if (isBoolean) { + return new ScriptDocValues.Booleans(getLongValues()); + } else { + return new ScriptDocValues.Longs(getLongValues()); + } } @Override @@ -63,24 +63,6 @@ public final SortedNumericDoubleValues getDoubleValues() { return FieldData.castToDouble(getLongValues()); } - public static AtomicNumericFieldData empty(final int maxDoc) { - return new AtomicLongFieldData(0) { - - @Override - public SortedNumericDocValues getLongValues() { - return DocValues.emptySortedNumeric(maxDoc); - } - - @Override - public Collection getChildResources() { - return Collections.emptyList(); - } - - }; - } - @Override - public void close() { - } - + public void close() {} } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVIndexFieldData.java index 586ad1f0d4869..a7e1981766704 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVIndexFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVIndexFieldData.java @@ -20,9 +20,12 @@ package org.elasticsearch.index.fielddata.plain; import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.SortField; +import org.apache.lucene.search.SortedSetSortField; +import org.apache.lucene.search.SortedSetSelector; +import org.elasticsearch.common.Nullable; import org.elasticsearch.index.Index; import org.elasticsearch.index.fielddata.IndexFieldData; -import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource; import org.elasticsearch.search.MultiValueMode; @@ -43,7 +46,21 @@ public BinaryDVAtomicFieldData loadDirect(LeafReaderContext context) throws Exce } @Override - public org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource comparatorSource(Object missingValue, MultiValueMode sortMode, Nested nested) { - return new BytesRefFieldComparatorSource(this, missingValue, sortMode, nested); + public SortField sortField(@Nullable Object missingValue, MultiValueMode sortMode, XFieldComparatorSource.Nested nested, boolean reverse) { + XFieldComparatorSource source = new BytesRefFieldComparatorSource(this, missingValue, sortMode, nested); + /** + * Check if we can use a simple {@link SortedSetSortField} compatible with index sorting and + * returns a custom sort field otherwise. + */ + if (nested != null || + (sortMode != MultiValueMode.MAX && sortMode != MultiValueMode.MIN) || + (source.sortMissingFirst(missingValue) == false && source.sortMissingLast(missingValue) == false)) { + return new SortField(getFieldName(), source, reverse); + } + SortField sortField = new SortedSetSortField(fieldName, reverse, + sortMode == MultiValueMode.MAX ? SortedSetSelector.Type.MAX : SortedSetSelector.Type.MIN); + sortField.setMissingValue(source.sortMissingLast(missingValue) ^ reverse ? 
+ SortedSetSortField.STRING_LAST : SortedSetSortField.STRING_FIRST); + return sortField; } } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVAtomicFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVAtomicFieldData.java index 9963f7d51a85e..8d43241ba75e0 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVAtomicFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVAtomicFieldData.java @@ -101,7 +101,7 @@ public BytesRef valueAt(int index) { @Override public ScriptDocValues getScriptValues() { - throw new UnsupportedOperationException(); + return new ScriptDocValues.BytesRefs(getBytesValues()); } @Override diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVIndexFieldData.java index bd3cdd71184c5..398093c034b79 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVIndexFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVIndexFieldData.java @@ -21,6 +21,7 @@ import org.apache.lucene.index.DocValues; import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.SortField; import org.elasticsearch.common.Nullable; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexSettings; @@ -41,7 +42,7 @@ public BytesBinaryDVIndexFieldData(Index index, String fieldName) { } @Override - public final XFieldComparatorSource comparatorSource(@Nullable Object missingValue, MultiValueMode sortMode, Nested nested) { + public SortField sortField(@Nullable Object missingValue, MultiValueMode sortMode, Nested nested, boolean reverse) { throw new IllegalArgumentException("can't sort on binary field"); } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/ConstantIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/ConstantIndexFieldData.java new file mode 100644 index 0000000000000..19367d06679ff --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/ConstantIndexFieldData.java @@ -0,0 +1,155 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.index.fielddata.plain; + +import org.apache.lucene.index.DirectoryReader; +import org.apache.lucene.index.DocValues; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.RandomAccessOrds; +import org.apache.lucene.index.SortedDocValues; +import org.apache.lucene.search.SortField; +import org.apache.lucene.util.Accountable; +import org.apache.lucene.util.BytesRef; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.index.IndexSettings; +import org.elasticsearch.index.fielddata.AtomicOrdinalsFieldData; +import org.elasticsearch.index.fielddata.IndexFieldData; +import org.elasticsearch.index.fielddata.IndexFieldDataCache; +import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData; +import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource; +import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.mapper.TextFieldMapper; +import org.elasticsearch.indices.breaker.CircuitBreakerService; +import org.elasticsearch.search.MultiValueMode; + +import java.util.Collection; +import java.util.Collections; +import java.util.function.Function; + +public class ConstantIndexFieldData extends AbstractIndexOrdinalsFieldData { + + public static class Builder implements IndexFieldData.Builder { + + private final Function valueFunction; + + public Builder(Function valueFunction) { + this.valueFunction = valueFunction; + } + + @Override + public IndexFieldData build(IndexSettings indexSettings, MappedFieldType fieldType, IndexFieldDataCache cache, + CircuitBreakerService breakerService, MapperService mapperService) { + return new ConstantIndexFieldData(indexSettings, fieldType.name(), valueFunction.apply(mapperService)); + } + + } + + private static class ConstantAtomicFieldData extends AbstractAtomicOrdinalsFieldData { + + private final String value; + + ConstantAtomicFieldData(String value) { + super(DEFAULT_SCRIPT_FUNCTION); + this.value = value; + } + + @Override + public long ramBytesUsed() { + return 0; + } + + @Override + public Collection getChildResources() { + return Collections.emptyList(); + } + + @Override + public RandomAccessOrds getOrdinalsValues() { + final BytesRef term = new BytesRef(value); + final SortedDocValues sortedValues = new SortedDocValues() { + + @Override + public BytesRef lookupOrd(int ord) { + return term; + } + + @Override + public int getValueCount() { + return 1; + } + + @Override + public int getOrd(int docID) { + return 0; + } + }; + return (RandomAccessOrds) DocValues.singleton(sortedValues); + } + + @Override + public void close() { + } + + } + + private final AtomicOrdinalsFieldData atomicFieldData; + + private ConstantIndexFieldData(IndexSettings indexSettings, String name, String value) { + super(indexSettings, name, null, null, + TextFieldMapper.Defaults.FIELDDATA_MIN_FREQUENCY, + TextFieldMapper.Defaults.FIELDDATA_MAX_FREQUENCY, + TextFieldMapper.Defaults.FIELDDATA_MIN_SEGMENT_SIZE); + atomicFieldData = new ConstantAtomicFieldData(value); + } + + @Override + public void clear() { + } + + @Override + public final AtomicOrdinalsFieldData load(LeafReaderContext context) { + return atomicFieldData; + } + + @Override + public AtomicOrdinalsFieldData loadDirect(LeafReaderContext context) + throws Exception { + return atomicFieldData; + } + + @Override + public SortField sortField(@Nullable Object missingValue, MultiValueMode sortMode, XFieldComparatorSource.Nested nested, + 
boolean reverse) { + final XFieldComparatorSource source = new BytesRefFieldComparatorSource(this, missingValue, sortMode, nested); + return new SortField(getFieldName(), source, reverse); + } + + @Override + public IndexOrdinalsFieldData loadGlobal(DirectoryReader indexReader) { + return this; + } + + @Override + public IndexOrdinalsFieldData localGlobalDirect(DirectoryReader indexReader) throws Exception { + return loadGlobal(indexReader); + } + +} diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/DocValuesIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/DocValuesIndexFieldData.java index 4621876399663..c77ceb57457ea 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/DocValuesIndexFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/DocValuesIndexFieldData.java @@ -21,12 +21,14 @@ import org.apache.logging.log4j.Logger; import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.RandomAccessOrds; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexFieldDataCache; import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType; +import org.elasticsearch.index.fielddata.ScriptDocValues; import org.elasticsearch.index.mapper.IdFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.MapperService; @@ -34,6 +36,7 @@ import org.elasticsearch.indices.breaker.CircuitBreakerService; import java.util.Set; +import java.util.function.Function; import static java.util.Collections.unmodifiableSet; import static org.elasticsearch.common.util.set.Sets.newHashSet; @@ -72,12 +75,18 @@ public static class Builder implements IndexFieldData.Builder { private static final Set BINARY_INDEX_FIELD_NAMES = unmodifiableSet(newHashSet(UidFieldMapper.NAME, IdFieldMapper.NAME)); private NumericType numericType; + private Function> scriptFunction = AbstractAtomicOrdinalsFieldData.DEFAULT_SCRIPT_FUNCTION; public Builder numericType(NumericType type) { this.numericType = type; return this; } + public Builder scriptFunction(Function> scriptFunction) { + this.scriptFunction = scriptFunction; + return this; + } + @Override public IndexFieldData build(IndexSettings indexSettings, MappedFieldType fieldType, IndexFieldDataCache cache, CircuitBreakerService breakerService, MapperService mapperService) { @@ -89,7 +98,7 @@ public IndexFieldData build(IndexSettings indexSettings, MappedFieldType fiel } else if (numericType != null) { return new SortedNumericDVIndexFieldData(indexSettings.getIndex(), fieldName, numericType); } else { - return new SortedSetDVOrdinalsIndexFieldData(indexSettings, cache, fieldName, breakerService); + return new SortedSetDVOrdinalsIndexFieldData(indexSettings, cache, fieldName, breakerService, scriptFunction); } } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointArrayAtomicFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointArrayAtomicFieldData.java index b1a97a878ee45..0ce02713be444 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointArrayAtomicFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointArrayAtomicFieldData.java @@ -49,7 +49,7 @@ static class WithOrdinals extends GeoPointArrayAtomicFieldData { private final 
Ordinals ordinals; private final int maxDoc; - public WithOrdinals(LongArray indexedPoints, Ordinals ordinals, int maxDoc) { + WithOrdinals(LongArray indexedPoints, Ordinals ordinals, int maxDoc) { super(); this.indexedPoints = indexedPoints; this.ordinals = ordinals; diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointArrayIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointArrayIndexFieldData.java index d484c503c2b54..18313f3274517 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointArrayIndexFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointArrayIndexFieldData.java @@ -68,7 +68,7 @@ public AtomicGeoPointFieldData loadDirect(LeafReaderContext context) throws Exce estimator.afterLoad(null, data.ramBytesUsed()); return data; } - return (indexSettings.getIndexVersionCreated().before(Version.V_2_2_0) == true) ? + return (indexSettings.getIndexVersionCreated().before(Version.V_2_2_0)) ? loadLegacyFieldData(reader, estimator, terms, data) : loadFieldData22(reader, estimator, terms, data); } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointArrayLegacyAtomicFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointArrayLegacyAtomicFieldData.java index 8d0953fb6e6b6..ec357633464b3 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointArrayLegacyAtomicFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointArrayLegacyAtomicFieldData.java @@ -50,7 +50,7 @@ static class WithOrdinals extends GeoPointArrayLegacyAtomicFieldData { private final Ordinals ordinals; private final int maxDoc; - public WithOrdinals(DoubleArray lon, DoubleArray lat, Ordinals ordinals, int maxDoc) { + WithOrdinals(DoubleArray lon, DoubleArray lat, Ordinals ordinals, int maxDoc) { super(); this.lon = lon; this.lat = lat; diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/IndexIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/IndexIndexFieldData.java deleted file mode 100644 index d57c023371e58..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/IndexIndexFieldData.java +++ /dev/null @@ -1,136 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.index.fielddata.plain; - -import org.apache.lucene.index.DirectoryReader; -import org.apache.lucene.index.DocValues; -import org.apache.lucene.index.LeafReaderContext; -import org.apache.lucene.index.RandomAccessOrds; -import org.apache.lucene.index.SortedDocValues; -import org.apache.lucene.util.Accountable; -import org.apache.lucene.util.BytesRef; -import org.elasticsearch.index.IndexSettings; -import org.elasticsearch.index.fielddata.AtomicOrdinalsFieldData; -import org.elasticsearch.index.fielddata.IndexFieldData; -import org.elasticsearch.index.fielddata.IndexFieldDataCache; -import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData; -import org.elasticsearch.index.mapper.MappedFieldType; -import org.elasticsearch.index.mapper.MapperService; -import org.elasticsearch.index.mapper.TextFieldMapper; -import org.elasticsearch.indices.breaker.CircuitBreakerService; - -import java.util.Collection; -import java.util.Collections; - -public class IndexIndexFieldData extends AbstractIndexOrdinalsFieldData { - - public static class Builder implements IndexFieldData.Builder { - - @Override - public IndexFieldData build(IndexSettings indexSettings, MappedFieldType fieldType, IndexFieldDataCache cache, - CircuitBreakerService breakerService, MapperService mapperService) { - return new IndexIndexFieldData(indexSettings, fieldType.name()); - } - - } - - private static class IndexAtomicFieldData extends AbstractAtomicOrdinalsFieldData { - - private final String index; - - IndexAtomicFieldData(String index) { - this.index = index; - } - - @Override - public long ramBytesUsed() { - return 0; - } - - @Override - public Collection getChildResources() { - return Collections.emptyList(); - } - - @Override - public RandomAccessOrds getOrdinalsValues() { - final BytesRef term = new BytesRef(index); - final SortedDocValues sortedValues = new SortedDocValues() { - - @Override - public BytesRef lookupOrd(int ord) { - return term; - } - - @Override - public int getValueCount() { - return 1; - } - - @Override - public int getOrd(int docID) { - return 0; - } - }; - return (RandomAccessOrds) DocValues.singleton(sortedValues); - } - - @Override - public void close() { - } - - } - - private final AtomicOrdinalsFieldData atomicFieldData; - - private IndexIndexFieldData(IndexSettings indexSettings, String name) { - super(indexSettings, name, null, null, - TextFieldMapper.Defaults.FIELDDATA_MIN_FREQUENCY, - TextFieldMapper.Defaults.FIELDDATA_MAX_FREQUENCY, - TextFieldMapper.Defaults.FIELDDATA_MIN_SEGMENT_SIZE); - atomicFieldData = new IndexAtomicFieldData(index().getName()); - } - - @Override - public void clear() { - } - - @Override - public final AtomicOrdinalsFieldData load(LeafReaderContext context) { - return atomicFieldData; - } - - @Override - public AtomicOrdinalsFieldData loadDirect(LeafReaderContext context) - throws Exception { - return atomicFieldData; - } - - @Override - public IndexOrdinalsFieldData loadGlobal(DirectoryReader indexReader) { - return this; - } - - @Override - public IndexOrdinalsFieldData localGlobalDirect(DirectoryReader indexReader) throws Exception { - return loadGlobal(indexReader); - } - -} diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/LatLonPointDVAtomicFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/LatLonPointDVAtomicFieldData.java new file mode 100644 index 0000000000000..d11a79c255654 --- /dev/null +++ 
b/core/src/main/java/org/elasticsearch/index/fielddata/plain/LatLonPointDVAtomicFieldData.java @@ -0,0 +1,91 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.index.fielddata.plain; + +import org.apache.lucene.geo.GeoEncodingUtils; +import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.util.Accountable; +import org.apache.lucene.util.ArrayUtil; +import org.apache.lucene.util.RamUsageEstimator; +import org.elasticsearch.common.geo.GeoPoint; +import org.elasticsearch.index.fielddata.MultiGeoPointValues; + +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; + +final class LatLonPointDVAtomicFieldData extends AbstractAtomicGeoPointFieldData { + private final SortedNumericDocValues values; + + LatLonPointDVAtomicFieldData(SortedNumericDocValues values) { + super(); + this.values = values; + } + + @Override + public long ramBytesUsed() { + return 0; // not exposed by lucene + } + + @Override + public Collection getChildResources() { + return Collections.emptyList(); + } + + @Override + public void close() { + // noop + } + + @Override + public MultiGeoPointValues getGeoPointValues() { + return new MultiGeoPointValues() { + GeoPoint[] points = new GeoPoint[0]; + private int count = 0; + + @Override + public void setDocument(int docId) { + values.setDocument(docId); + count = values.count(); + if (count > points.length) { + final int previousLength = points.length; + points = Arrays.copyOf(points, ArrayUtil.oversize(count, RamUsageEstimator.NUM_BYTES_OBJECT_REF)); + for (int i = previousLength; i < points.length; ++i) { + points[i] = new GeoPoint(Double.NaN, Double.NaN); + } + } + long encoded; + for (int i=0; i>> 32)), GeoEncodingUtils.decodeLongitude((int)encoded)); + } + } + + @Override + public int count() { + return count; + } + + @Override + public GeoPoint valueAt(int index) { + return points[index]; + } + }; + } +} diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/PagedBytesAtomicFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/PagedBytesAtomicFieldData.java index 93a981cab2de5..f94fde6955378 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/PagedBytesAtomicFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/PagedBytesAtomicFieldData.java @@ -40,6 +40,7 @@ public class PagedBytesAtomicFieldData extends AbstractAtomicOrdinalsFieldData { protected final Ordinals ordinals; public PagedBytesAtomicFieldData(PagedBytes.Reader bytes, PackedLongValues termOrdToBytesOffset, Ordinals ordinals) { + super(DEFAULT_SCRIPT_FUNCTION); this.bytes = bytes; this.termOrdToBytesOffset = termOrdToBytesOffset; this.ordinals = ordinals; diff --git 
a/core/src/main/java/org/elasticsearch/index/fielddata/plain/PagedBytesIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/PagedBytesIndexFieldData.java index 5695f7ef15a9a..a56b0f1b8c18d 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/PagedBytesIndexFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/PagedBytesIndexFieldData.java @@ -27,10 +27,12 @@ import org.apache.lucene.index.Terms; import org.apache.lucene.index.TermsEnum; import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.search.SortField; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.PagedBytes; import org.apache.lucene.util.packed.PackedInts; import org.apache.lucene.util.packed.PackedLongValues; +import org.elasticsearch.common.Nullable; import org.elasticsearch.common.breaker.CircuitBreaker; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.fielddata.AtomicOrdinalsFieldData; @@ -38,11 +40,13 @@ import org.elasticsearch.index.fielddata.IndexFieldDataCache; import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData; import org.elasticsearch.index.fielddata.RamAccountingTermsEnum; +import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource; import org.elasticsearch.index.fielddata.ordinals.Ordinals; import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.indices.breaker.CircuitBreakerService; +import org.elasticsearch.search.MultiValueMode; import java.io.IOException; @@ -76,6 +80,12 @@ public PagedBytesIndexFieldData(IndexSettings indexSettings, String fieldName, super(indexSettings, fieldName, cache, breakerService, minFrequency, maxFrequency, minSegmentSize); } + @Override + public SortField sortField(@Nullable Object missingValue, MultiValueMode sortMode, XFieldComparatorSource.Nested nested, boolean reverse) { + XFieldComparatorSource source = new BytesRefFieldComparatorSource(this, missingValue, sortMode, nested); + return new SortField(getFieldName(), source, reverse); + } + @Override public AtomicOrdinalsFieldData loadDirect(LeafReaderContext context) throws Exception { LeafReader reader = context.reader(); diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/ParentChildIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/ParentChildIndexFieldData.java deleted file mode 100644 index d483baa777e88..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/ParentChildIndexFieldData.java +++ /dev/null @@ -1,393 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.index.fielddata.plain; - -import org.apache.lucene.index.DirectoryReader; -import org.apache.lucene.index.DocValues; -import org.apache.lucene.index.IndexReader; -import org.apache.lucene.index.LeafReader; -import org.apache.lucene.index.LeafReaderContext; -import org.apache.lucene.index.MultiDocValues; -import org.apache.lucene.index.MultiDocValues.OrdinalMap; -import org.apache.lucene.index.SortedDocValues; -import org.apache.lucene.util.Accountable; -import org.apache.lucene.util.BytesRef; -import org.apache.lucene.util.LongValues; -import org.apache.lucene.util.packed.PackedInts; -import org.elasticsearch.ElasticsearchException; -import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.breaker.CircuitBreaker; -import org.elasticsearch.common.lease.Releasable; -import org.elasticsearch.common.lease.Releasables; -import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.index.Index; -import org.elasticsearch.index.IndexSettings; -import org.elasticsearch.index.fielddata.AtomicParentChildFieldData; -import org.elasticsearch.index.fielddata.IndexFieldData; -import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; -import org.elasticsearch.index.fielddata.IndexFieldDataCache; -import org.elasticsearch.index.fielddata.IndexParentChildFieldData; -import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource; -import org.elasticsearch.index.mapper.DocumentMapper; -import org.elasticsearch.index.mapper.MappedFieldType; -import org.elasticsearch.index.mapper.MapperService; -import org.elasticsearch.index.mapper.ParentFieldMapper; -import org.elasticsearch.indices.breaker.CircuitBreakerService; -import org.elasticsearch.search.MultiValueMode; - -import java.io.IOException; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.Collection; -import java.util.Collections; -import java.util.HashMap; -import java.util.HashSet; -import java.util.List; -import java.util.Map; -import java.util.Set; -import java.util.concurrent.TimeUnit; - -/** - * ParentChildIndexFieldData is responsible for loading the id cache mapping - * needed for has_child and has_parent queries into memory. 
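Before the removed implementation below: per segment, this class resolved a document's parent id from a per-parent-type join field read as sorted doc values, then built one `OrdinalMap` per parent type across segments. A minimal, hypothetical sketch of that per-segment lookup follows; the helper class and method are invented, while the Lucene and Elasticsearch calls are the ones used by the removed code.

```java
// Hypothetical sketch: reading a document's parent id for one parent type
// from the per-type join field, as the removed field data did per segment.
import java.io.IOException;

import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.SortedDocValues;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.index.mapper.ParentFieldMapper;

final class ParentIdExample {

    /** Returns the parent id of {@code docId} for {@code parentType}, or null if the doc has none. */
    static BytesRef parentId(LeafReader reader, String parentType, int docId) throws IOException {
        SortedDocValues joinValues = DocValues.getSorted(reader, ParentFieldMapper.joinField(parentType));
        int ord = joinValues.getOrd(docId);
        return ord < 0 ? null : joinValues.lookupOrd(ord);
    }
}
```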
- */ -public class ParentChildIndexFieldData extends AbstractIndexFieldData implements IndexParentChildFieldData { - - private final Set parentTypes; - private final CircuitBreakerService breakerService; - - public ParentChildIndexFieldData(IndexSettings indexSettings, String fieldName, - IndexFieldDataCache cache, MapperService mapperService, - CircuitBreakerService breakerService) { - super(indexSettings, fieldName, cache); - this.breakerService = breakerService; - Set parentTypes = new HashSet<>(); - for (DocumentMapper mapper : mapperService.docMappers(false)) { - ParentFieldMapper parentFieldMapper = mapper.parentFieldMapper(); - if (parentFieldMapper.active()) { - parentTypes.add(parentFieldMapper.type()); - } - } - this.parentTypes = parentTypes; - } - - @Override - public XFieldComparatorSource comparatorSource(@Nullable Object missingValue, MultiValueMode sortMode, Nested nested) { - return new BytesRefFieldComparatorSource(this, missingValue, sortMode, nested); - } - - @Override - public AtomicParentChildFieldData load(LeafReaderContext context) { - final LeafReader reader = context.reader(); - return new AbstractAtomicParentChildFieldData() { - - public Set types() { - return parentTypes; - } - - @Override - public SortedDocValues getOrdinalsValues(String type) { - try { - return DocValues.getSorted(reader, ParentFieldMapper.joinField(type)); - } catch (IOException e) { - throw new IllegalStateException("cannot load join doc values field for type [" + type + "]", e); - } - } - - @Override - public long ramBytesUsed() { - // unknown - return 0; - } - - @Override - public Collection getChildResources() { - return Collections.emptyList(); - } - - @Override - public void close() throws ElasticsearchException { - } - }; - } - - @Override - public AbstractAtomicParentChildFieldData loadDirect(LeafReaderContext context) throws Exception { - throw new UnsupportedOperationException(); - } - - @Override - protected AtomicParentChildFieldData empty(int maxDoc) { - return AbstractAtomicParentChildFieldData.empty(); - } - - public static class Builder implements IndexFieldData.Builder { - - @Override - public IndexFieldData build(IndexSettings indexSettings, - MappedFieldType fieldType, - IndexFieldDataCache cache, CircuitBreakerService breakerService, - MapperService mapperService) { - return new ParentChildIndexFieldData(indexSettings, fieldType.name(), cache, - mapperService, breakerService); - } - } - - @Override - public IndexParentChildFieldData loadGlobal(DirectoryReader indexReader) { - if (indexReader.leaves().size() <= 1) { - // ordinals are already global - return this; - } - try { - return cache.load(indexReader, this); - } catch (Exception e) { - if (e instanceof ElasticsearchException) { - throw (ElasticsearchException) e; - } else { - throw new ElasticsearchException(e); - } - } - } - - private static OrdinalMap buildOrdinalMap(AtomicParentChildFieldData[] atomicFD, String parentType) throws IOException { - final SortedDocValues[] ordinals = new SortedDocValues[atomicFD.length]; - for (int i = 0; i < ordinals.length; ++i) { - ordinals[i] = atomicFD[i].getOrdinalsValues(parentType); - } - return OrdinalMap.build(null, ordinals, PackedInts.DEFAULT); - } - - private static class OrdinalMapAndAtomicFieldData { - final OrdinalMap ordMap; - final AtomicParentChildFieldData[] fieldData; - - public OrdinalMapAndAtomicFieldData(OrdinalMap ordMap, AtomicParentChildFieldData[] fieldData) { - this.ordMap = ordMap; - this.fieldData = fieldData; - } - } - - @Override - public 
IndexParentChildFieldData localGlobalDirect(DirectoryReader indexReader) throws Exception { - final long startTime = System.nanoTime(); - - long ramBytesUsed = 0; - final Map perType = new HashMap<>(); - for (String type : parentTypes) { - final AtomicParentChildFieldData[] fieldData = new AtomicParentChildFieldData[indexReader.leaves().size()]; - for (LeafReaderContext context : indexReader.leaves()) { - fieldData[context.ord] = load(context); - } - final OrdinalMap ordMap = buildOrdinalMap(fieldData, type); - ramBytesUsed += ordMap.ramBytesUsed(); - perType.put(type, new OrdinalMapAndAtomicFieldData(ordMap, fieldData)); - } - - final AtomicParentChildFieldData[] fielddata = new AtomicParentChildFieldData[indexReader.leaves().size()]; - for (int i = 0; i < fielddata.length; ++i) { - fielddata[i] = new GlobalAtomicFieldData(parentTypes, perType, i); - } - - breakerService.getBreaker(CircuitBreaker.FIELDDATA).addWithoutBreaking(ramBytesUsed); - if (logger.isDebugEnabled()) { - logger.debug( - "global-ordinals [_parent] took [{}]", - new TimeValue(System.nanoTime() - startTime, TimeUnit.NANOSECONDS) - ); - } - - return new GlobalFieldData(indexReader, fielddata, ramBytesUsed, perType); - } - - private static class GlobalAtomicFieldData extends AbstractAtomicParentChildFieldData { - - private final Set types; - private final Map atomicFD; - private final int segmentIndex; - - public GlobalAtomicFieldData(Set types, Map atomicFD, int segmentIndex) { - this.types = types; - this.atomicFD = atomicFD; - this.segmentIndex = segmentIndex; - } - - @Override - public Set types() { - return types; - } - - @Override - public SortedDocValues getOrdinalsValues(String type) { - final OrdinalMapAndAtomicFieldData atomicFD = this.atomicFD.get(type); - if (atomicFD == null) { - return DocValues.emptySorted(); - } - - final OrdinalMap ordMap = atomicFD.ordMap; - final SortedDocValues[] allSegmentValues = new SortedDocValues[atomicFD.fieldData.length]; - for (int i = 0; i < allSegmentValues.length; ++i) { - allSegmentValues[i] = atomicFD.fieldData[i].getOrdinalsValues(type); - } - final SortedDocValues segmentValues = allSegmentValues[segmentIndex]; - if (segmentValues.getValueCount() == ordMap.getValueCount()) { - // ords are already global - return segmentValues; - } - final LongValues globalOrds = ordMap.getGlobalOrds(segmentIndex); - return new SortedDocValues() { - - @Override - public BytesRef lookupOrd(int ord) { - final int segmentIndex = ordMap.getFirstSegmentNumber(ord); - final int segmentOrd = (int) ordMap.getFirstSegmentOrd(ord); - return allSegmentValues[segmentIndex].lookupOrd(segmentOrd); - } - - @Override - public int getValueCount() { - return (int) ordMap.getValueCount(); - } - - @Override - public int getOrd(int docID) { - final int segmentOrd = segmentValues.getOrd(docID); - // TODO: is there a way we can get rid of this branch? 
- if (segmentOrd >= 0) { - return (int) globalOrds.get(segmentOrd); - } else { - return segmentOrd; - } - } - }; - } - - @Override - public long ramBytesUsed() { - // this class does not take memory on its own, the index-level field data does - // it through the use of ordinal maps - return 0; - } - - @Override - public Collection getChildResources() { - return Collections.emptyList(); - } - - @Override - public void close() { - List closeables = new ArrayList<>(); - for (OrdinalMapAndAtomicFieldData fds : atomicFD.values()) { - closeables.addAll(Arrays.asList(fds.fieldData)); - } - Releasables.close(closeables); - } - - } - - public class GlobalFieldData implements IndexParentChildFieldData, Accountable { - - private final Object coreCacheKey; - private final List leaves; - private final AtomicParentChildFieldData[] fielddata; - private final long ramBytesUsed; - private final Map ordinalMapPerType; - - GlobalFieldData(IndexReader reader, AtomicParentChildFieldData[] fielddata, long ramBytesUsed, Map ordinalMapPerType) { - this.coreCacheKey = reader.getCoreCacheKey(); - this.leaves = reader.leaves(); - this.ramBytesUsed = ramBytesUsed; - this.fielddata = fielddata; - this.ordinalMapPerType = ordinalMapPerType; - } - - @Override - public String getFieldName() { - return ParentChildIndexFieldData.this.getFieldName(); - } - - @Override - public AtomicParentChildFieldData load(LeafReaderContext context) { - assert context.reader().getCoreCacheKey() == leaves.get(context.ord).reader().getCoreCacheKey(); - return fielddata[context.ord]; - } - - @Override - public AtomicParentChildFieldData loadDirect(LeafReaderContext context) throws Exception { - return load(context); - } - - @Override - public XFieldComparatorSource comparatorSource(Object missingValue, MultiValueMode sortMode, Nested nested) { - throw new UnsupportedOperationException("No sorting on global ords"); - } - - @Override - public void clear() { - ParentChildIndexFieldData.this.clear(); - } - - @Override - public Index index() { - return ParentChildIndexFieldData.this.index(); - } - - @Override - public long ramBytesUsed() { - return ramBytesUsed; - } - - @Override - public Collection getChildResources() { - return Collections.emptyList(); - } - - @Override - public IndexParentChildFieldData loadGlobal(DirectoryReader indexReader) { - if (indexReader.getCoreCacheKey() == coreCacheKey) { - return this; - } - throw new IllegalStateException(); - } - - @Override - public IndexParentChildFieldData localGlobalDirect(DirectoryReader indexReader) throws Exception { - return loadGlobal(indexReader); - } - - } - - /** - * Returns the global ordinal map for the specified type - */ - // TODO: OrdinalMap isn't expose in the field data framework, because it is an implementation detail. - // However the JoinUtil works directly with OrdinalMap, so this is a hack to get access to OrdinalMap - // I don't think we should expose OrdinalMap in IndexFieldData, because only parent/child relies on it and for the - // rest of the code OrdinalMap is an implementation detail, but maybe we can expose it in IndexParentChildFieldData interface? 
- public static MultiDocValues.OrdinalMap getOrdinalMap(IndexParentChildFieldData indexParentChildFieldData, String type) { - if (indexParentChildFieldData instanceof ParentChildIndexFieldData.GlobalFieldData) { - return ((GlobalFieldData) indexParentChildFieldData).ordinalMapPerType.get(type).ordMap; - } else { - // one segment, local ordinals are global - return null; - } - } - -} diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/SortedNumericDVIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/SortedNumericDVIndexFieldData.java index be877b9c68af3..9e8e6dba0b6d0 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/SortedNumericDVIndexFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/SortedNumericDVIndexFieldData.java @@ -26,6 +26,9 @@ import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.NumericDocValues; import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.search.SortField; +import org.apache.lucene.search.SortedNumericSelector; +import org.apache.lucene.search.SortedNumericSortField; import org.apache.lucene.util.Accountable; import org.apache.lucene.util.NumericUtils; import org.elasticsearch.index.Index; @@ -60,17 +63,53 @@ public SortedNumericDVIndexFieldData(Index index, String fieldNames, NumericType } @Override - public org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource comparatorSource(Object missingValue, MultiValueMode sortMode, Nested nested) { + public SortField sortField(Object missingValue, MultiValueMode sortMode, Nested nested, boolean reverse) { + final XFieldComparatorSource source; switch (numericType) { case HALF_FLOAT: case FLOAT: - return new FloatValuesComparatorSource(this, missingValue, sortMode, nested); + source = new FloatValuesComparatorSource(this, missingValue, sortMode, nested); + break; + + case DOUBLE: + source = new DoubleValuesComparatorSource(this, missingValue, sortMode, nested); + break; + + default: + assert !numericType.isFloatingPoint(); + source = new LongValuesComparatorSource(this, missingValue, sortMode, nested); + break; + } + + /** + * Check if we can use a simple {@link SortedNumericSortField} compatible with index sorting and + * returns a custom sort field otherwise. + */ + if (nested != null + || (sortMode != MultiValueMode.MAX && sortMode != MultiValueMode.MIN) + || numericType == NumericType.HALF_FLOAT) { + return new SortField(fieldName, source, reverse); + } + + final SortField sortField; + final SortedNumericSelector.Type selectorType = sortMode == MultiValueMode.MAX ? 
+ SortedNumericSelector.Type.MAX : SortedNumericSelector.Type.MIN; + switch (numericType) { + case FLOAT: + sortField = new SortedNumericSortField(fieldName, SortField.Type.FLOAT, reverse, selectorType); + break; + case DOUBLE: - return new DoubleValuesComparatorSource(this, missingValue, sortMode, nested); + sortField = new SortedNumericSortField(fieldName, SortField.Type.DOUBLE, reverse, selectorType); + break; + default: assert !numericType.isFloatingPoint(); - return new LongValuesComparatorSource(this, missingValue, sortMode, nested); + sortField = new SortedNumericSortField(fieldName, SortField.Type.LONG, reverse, selectorType); + break; } + sortField.setMissingValue(source.missingObject(missingValue, reverse)); + return sortField; } @Override @@ -96,7 +135,7 @@ public AtomicNumericFieldData load(LeafReaderContext context) { case DOUBLE: return new SortedNumericDoubleFieldData(reader, field); default: - return new SortedNumericLongFieldData(reader, field); + return new SortedNumericLongFieldData(reader, field, numericType == NumericType.BOOLEAN); } } @@ -117,8 +156,8 @@ static final class SortedNumericLongFieldData extends AtomicLongFieldData { final LeafReader reader; final String field; - SortedNumericLongFieldData(LeafReader reader, String field) { - super(0L); + SortedNumericLongFieldData(LeafReader reader, String field, boolean isBoolean) { + super(0L, isBoolean); this.reader = reader; this.field = field; } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVBytesAtomicFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVBytesAtomicFieldData.java index 0bcb8251b98c7..114031cd3553b 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVBytesAtomicFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVBytesAtomicFieldData.java @@ -25,10 +25,12 @@ import org.apache.lucene.util.Accountable; import org.elasticsearch.index.fielddata.AtomicFieldData; import org.elasticsearch.index.fielddata.FieldData; +import org.elasticsearch.index.fielddata.ScriptDocValues; import java.io.IOException; import java.util.Collection; import java.util.Collections; +import java.util.function.Function; /** * An {@link AtomicFieldData} implementation that uses Lucene {@link org.apache.lucene.index.SortedSetDocValues}. 
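Illustrative sketch (not part of the patch): the new numeric sortField implementation above switches to Lucene's SortedNumericSortField with a MIN/MAX selector whenever nothing (nested sorting, a sort mode other than MIN/MAX, or half_float) forces the custom comparator source. The self-contained example below shows that plain Lucene 6.x API in isolation; the "price"/"prices" fields and their values are invented.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.SortedNumericSelector;
import org.apache.lucene.search.SortedNumericSortField;
import org.apache.lucene.store.RAMDirectory;

public class SortedNumericSortExample {
    public static void main(String[] args) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            for (long[] values : new long[][] { {3, 7}, {5}, {1, 9} }) {
                Document doc = new Document();
                for (long v : values) {
                    // multi-valued numeric doc values; the selector below picks one value per document
                    doc.add(new SortedNumericDocValuesField("price", v));
                }
                doc.add(new StoredField("prices", java.util.Arrays.toString(values)));
                writer.addDocument(doc);
            }
        }
        // sort descending by the largest value of the multi-valued field,
        // mirroring MultiValueMode.MAX -> SortedNumericSelector.Type.MAX in the patch
        SortField sortField = new SortedNumericSortField("price", SortField.Type.LONG, true, SortedNumericSelector.Type.MAX);
        sortField.setMissingValue(Long.MIN_VALUE); // documents without a value sort last in descending order
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            for (ScoreDoc hit : searcher.search(new MatchAllDocsQuery(), 10, new Sort(sortField)).scoreDocs) {
                System.out.println(searcher.doc(hit.doc).get("prices"));
            }
        }
    }
}
```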
@@ -38,7 +40,9 @@ public final class SortedSetDVBytesAtomicFieldData extends AbstractAtomicOrdinal private final LeafReader reader; private final String field; - SortedSetDVBytesAtomicFieldData(LeafReader reader, String field) { + SortedSetDVBytesAtomicFieldData(LeafReader reader, String field, Function> scriptFunction) { + super(scriptFunction); this.reader = reader; this.field = field; } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVOrdinalsIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVOrdinalsIndexFieldData.java index f0c8aa3d0767f..5e07faae8d878 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVOrdinalsIndexFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVOrdinalsIndexFieldData.java @@ -21,41 +21,65 @@ import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.RandomAccessOrds; +import org.apache.lucene.index.MultiDocValues; +import org.apache.lucene.search.SortField; +import org.apache.lucene.search.SortedSetSortField; +import org.apache.lucene.search.SortedSetSelector; import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.common.Nullable; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.fielddata.AtomicOrdinalsFieldData; -import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; import org.elasticsearch.index.fielddata.IndexFieldDataCache; import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData; +import org.elasticsearch.index.fielddata.ScriptDocValues; import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource; import org.elasticsearch.index.fielddata.ordinals.GlobalOrdinalsBuilder; import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.search.MultiValueMode; import java.io.IOException; +import java.util.function.Function; public class SortedSetDVOrdinalsIndexFieldData extends DocValuesIndexFieldData implements IndexOrdinalsFieldData { private final IndexSettings indexSettings; private final IndexFieldDataCache cache; private final CircuitBreakerService breakerService; + private final Function> scriptFunction; - public SortedSetDVOrdinalsIndexFieldData(IndexSettings indexSettings, IndexFieldDataCache cache, String fieldName, CircuitBreakerService breakerService) { + public SortedSetDVOrdinalsIndexFieldData(IndexSettings indexSettings, IndexFieldDataCache cache, String fieldName, + CircuitBreakerService breakerService, Function> scriptFunction) { super(indexSettings.getIndex(), fieldName); this.indexSettings = indexSettings; this.cache = cache; this.breakerService = breakerService; + this.scriptFunction = scriptFunction; } @Override - public org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource comparatorSource(Object missingValue, MultiValueMode sortMode, Nested nested) { - return new BytesRefFieldComparatorSource((IndexFieldData) this, missingValue, sortMode, nested); + public SortField sortField(@Nullable Object missingValue, MultiValueMode sortMode, Nested nested, boolean reverse) { + XFieldComparatorSource source = new BytesRefFieldComparatorSource(this, missingValue, sortMode, nested); + /** + * Check if we can use a simple {@link SortedSetSortField} compatible with index sorting and + * returns a custom sort field otherwise. 
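Illustrative sketch (not part of the patch): the string doc-values sortField change that continues just below applies the same pattern, using an index-sorting-friendly SortedSetSortField only for plain MIN/MAX sorts whose missing-value policy is a simple missing-first/missing-last, and keeping the comparator source otherwise. The hypothetical helper below condenses that decision; the field name and flags are made up, the real logic lives in SortedSetDVOrdinalsIndexFieldData#sortField.

```java
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.SortedSetSelector;
import org.apache.lucene.search.SortedSetSortField;

public class KeywordSortFieldSketch {

    // Condensed restatement of the SortedSetSortField fast path from the patch.
    static SortField keywordSortField(String field, boolean reverse, boolean useMax, boolean missingLast) {
        SortField sortField = new SortedSetSortField(field, reverse,
            useMax ? SortedSetSelector.Type.MAX : SortedSetSelector.Type.MIN);
        // same XOR as in the patch: "missing last" flips when the sort order is reversed
        sortField.setMissingValue(missingLast ^ reverse ? SortedSetSortField.STRING_LAST : SortedSetSortField.STRING_FIRST);
        return sortField;
    }

    public static void main(String[] args) {
        System.out.println(keywordSortField("city", true, false, true));
    }
}
```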
+ */ + if (nested != null || + (sortMode != MultiValueMode.MAX && sortMode != MultiValueMode.MIN) || + (source.sortMissingLast(missingValue) == false && source.sortMissingFirst(missingValue) == false)) { + return new SortField(getFieldName(), source, reverse); + } + SortField sortField = new SortedSetSortField(fieldName, reverse, + sortMode == MultiValueMode.MAX ? SortedSetSelector.Type.MAX : SortedSetSelector.Type.MIN); + sortField.setMissingValue(source.sortMissingLast(missingValue) ^ reverse ? + SortedSetSortField.STRING_LAST : SortedSetSortField.STRING_FIRST); + return sortField; } @Override public AtomicOrdinalsFieldData load(LeafReaderContext context) { - return new SortedSetDVBytesAtomicFieldData(context.reader(), fieldName); + return new SortedSetDVBytesAtomicFieldData(context.reader(), fieldName, scriptFunction); } @Override @@ -100,6 +124,11 @@ public IndexOrdinalsFieldData loadGlobal(DirectoryReader indexReader) { @Override public IndexOrdinalsFieldData localGlobalDirect(DirectoryReader indexReader) throws Exception { - return GlobalOrdinalsBuilder.build(indexReader, this, indexSettings, breakerService, logger); + return GlobalOrdinalsBuilder.build(indexReader, this, indexSettings, breakerService, logger, scriptFunction); + } + + @Override + public MultiDocValues.OrdinalMap getOrdinalMap() { + return null; } } diff --git a/core/src/main/java/org/elasticsearch/index/fieldvisitor/FieldsVisitor.java b/core/src/main/java/org/elasticsearch/index/fieldvisitor/FieldsVisitor.java index ee9634f690c4a..415080de13498 100644 --- a/core/src/main/java/org/elasticsearch/index/fieldvisitor/FieldsVisitor.java +++ b/core/src/main/java/org/elasticsearch/index/fieldvisitor/FieldsVisitor.java @@ -23,6 +23,7 @@ import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.index.mapper.IdFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.ParentFieldMapper; @@ -36,6 +37,7 @@ import java.io.IOException; import java.nio.charset.StandardCharsets; import java.util.ArrayList; +import java.util.Collection; import java.util.HashMap; import java.util.HashSet; import java.util.List; @@ -54,13 +56,14 @@ public class FieldsVisitor extends StoredFieldVisitor { UidFieldMapper.NAME, TimestampFieldMapper.NAME, TTLFieldMapper.NAME, + IdFieldMapper.NAME, RoutingFieldMapper.NAME, ParentFieldMapper.NAME)); private final boolean loadSource; private final Set requiredFields; protected BytesReference source; - protected Uid uid; + protected String type, id; protected Map> fieldsValues; public FieldsVisitor(boolean loadSource) { @@ -82,6 +85,13 @@ public Status needsField(FieldInfo fieldInfo) throws IOException { } public void postProcess(MapperService mapperService) { + if (mapperService.getIndexSettings().isSingleType()) { + final Collection types = mapperService.types(); + assert types.size() <= 1 : types; + if (types.isEmpty() == false) { + type = types.iterator().next(); + } + } for (Map.Entry> entry : fields().entrySet()) { MappedFieldType fieldType = mapperService.fullName(entry.getKey()); if (fieldType == null) { @@ -90,7 +100,7 @@ public void postProcess(MapperService mapperService) { } List fieldValues = entry.getValue(); for (int i = 0; i < fieldValues.size(); i++) { - fieldValues.set(i, fieldType.valueForSearch(fieldValues.get(i))); + fieldValues.set(i, fieldType.valueForDisplay(fieldValues.get(i))); 
} } } @@ -108,7 +118,11 @@ public void binaryField(FieldInfo fieldInfo, byte[] value) throws IOException { public void stringField(FieldInfo fieldInfo, byte[] bytes) throws IOException { final String value = new String(bytes, StandardCharsets.UTF_8); if (UidFieldMapper.NAME.equals(fieldInfo.name)) { - uid = Uid.createUid(value); + Uid uid = Uid.createUid(value); + type = uid.type(); + id = uid.id(); + } else if (IdFieldMapper.NAME.equals(fieldInfo.name)) { + id = value; } else { addValue(fieldInfo.name, value); } @@ -139,7 +153,12 @@ public BytesReference source() { } public Uid uid() { - return uid; + if (id == null) { + return null; + } else if (type == null) { + throw new IllegalStateException("Call postProcess before getting the uid"); + } + return new Uid(type, id); } public String routing() { @@ -161,7 +180,8 @@ public Map> fields() { public void reset() { if (fieldsValues != null) fieldsValues.clear(); source = null; - uid = null; + type = null; + id = null; requiredFields.addAll(BASE_REQUIRED_FIELDS); if (loadSource) { diff --git a/core/src/main/java/org/elasticsearch/index/fieldvisitor/JustUidFieldsVisitor.java b/core/src/main/java/org/elasticsearch/index/fieldvisitor/JustUidFieldsVisitor.java deleted file mode 100644 index 2a6c362274a4f..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/fieldvisitor/JustUidFieldsVisitor.java +++ /dev/null @@ -1,41 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.index.fieldvisitor; - -import org.apache.lucene.index.FieldInfo; -import org.elasticsearch.index.mapper.UidFieldMapper; - -import java.io.IOException; - -/** - */ -public class JustUidFieldsVisitor extends FieldsVisitor { - - public JustUidFieldsVisitor() { - super(false); - } - - @Override - public Status needsField(FieldInfo fieldInfo) throws IOException { - if (UidFieldMapper.NAME.equals(fieldInfo.name)) { - return Status.YES; - } - return uid != null ? 
Status.STOP : Status.NO; - } -} diff --git a/core/src/main/java/org/elasticsearch/index/fieldvisitor/SingleFieldsVisitor.java b/core/src/main/java/org/elasticsearch/index/fieldvisitor/SingleFieldsVisitor.java index 2503286f710d9..0a72a573deb14 100644 --- a/core/src/main/java/org/elasticsearch/index/fieldvisitor/SingleFieldsVisitor.java +++ b/core/src/main/java/org/elasticsearch/index/fieldvisitor/SingleFieldsVisitor.java @@ -20,12 +20,12 @@ import org.apache.lucene.index.FieldInfo; import org.elasticsearch.index.mapper.IdFieldMapper; -import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.TypeFieldMapper; +import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.mapper.UidFieldMapper; import java.io.IOException; -import java.util.List; /** */ @@ -56,30 +56,17 @@ public void reset(String field) { super.reset(); } - public void postProcess(MappedFieldType fieldType) { - if (uid != null) { - switch (field) { - case UidFieldMapper.NAME: - addValue(field, uid.toString()); - break; - case IdFieldMapper.NAME: - addValue(field, uid.id()); - break; - case TypeFieldMapper.NAME: - addValue(field, uid.type()); - break; - } - } - - if (fieldsValues == null) { - return; + @Override + public void postProcess(MapperService mapperService) { + super.postProcess(mapperService); + if (id != null) { + addValue(IdFieldMapper.NAME, id); } - List fieldValues = fieldsValues.get(fieldType.name()); - if (fieldValues == null) { - return; + if (type != null) { + addValue(TypeFieldMapper.NAME, type); } - for (int i = 0; i < fieldValues.size(); i++) { - fieldValues.set(i, fieldType.valueForSearch(fieldValues.get(i))); + if (type != null && id != null) { + addValue(UidFieldMapper.NAME, Uid.createUid(type, id)); } } } diff --git a/core/src/main/java/org/elasticsearch/index/flush/FlushStats.java b/core/src/main/java/org/elasticsearch/index/flush/FlushStats.java index 600651ad30662..ac9a4a5c9a10e 100644 --- a/core/src/main/java/org/elasticsearch/index/flush/FlushStats.java +++ b/core/src/main/java/org/elasticsearch/index/flush/FlushStats.java @@ -81,12 +81,6 @@ public TimeValue getTotalTime() { return new TimeValue(totalTimeInMillis); } - public static FlushStats readFlushStats(StreamInput in) throws IOException { - FlushStats flushStats = new FlushStats(); - flushStats.readFrom(in); - return flushStats; - } - @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(Fields.FLUSH); diff --git a/core/src/main/java/org/elasticsearch/index/get/GetField.java b/core/src/main/java/org/elasticsearch/index/get/GetField.java index 0ebbf1f9ac178..9b6808517aa5d 100644 --- a/core/src/main/java/org/elasticsearch/index/get/GetField.java +++ b/core/src/main/java/org/elasticsearch/index/get/GetField.java @@ -22,17 +22,22 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.mapper.MapperService; import java.io.IOException; import java.util.ArrayList; import java.util.Iterator; import java.util.List; +import java.util.Objects; + +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; +import static 
org.elasticsearch.common.xcontent.XContentParserUtils.parseStoredFieldsValue; -/** - * - */ -public class GetField implements Streamable, Iterable { + +public class GetField implements Streamable, ToXContent, Iterable { private String name; private List values; @@ -41,8 +46,8 @@ private GetField() { } public GetField(String name, List values) { - this.name = name; - this.values = values; + this.name = Objects.requireNonNull(name, "name must not be null"); + this.values = Objects.requireNonNull(values, "values must not be null"); } public String getName() { @@ -93,4 +98,55 @@ public void writeTo(StreamOutput out) throws IOException { out.writeGenericValue(obj); } } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startArray(name); + for (Object value : values) { + //this call doesn't really need to support writing any kind of object. + //Stored fields values are converted using MappedFieldType#valueForDisplay. + //As a result they can either be Strings, Numbers, Booleans, or BytesReference, that's all. + builder.value(value); + } + builder.endArray(); + return builder; + } + + public static GetField fromXContent(XContentParser parser) throws IOException { + ensureExpectedToken(XContentParser.Token.FIELD_NAME, parser.currentToken(), parser::getTokenLocation); + String fieldName = parser.currentName(); + XContentParser.Token token = parser.nextToken(); + ensureExpectedToken(XContentParser.Token.START_ARRAY, token, parser::getTokenLocation); + List values = new ArrayList<>(); + while((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + values.add(parseStoredFieldsValue(parser)); + } + return new GetField(fieldName, values); + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + GetField objects = (GetField) o; + return Objects.equals(name, objects.name) && + Objects.equals(values, objects.values); + } + + @Override + public int hashCode() { + return Objects.hash(name, values); + } + + @Override + public String toString() { + return "GetField{" + + "name='" + name + '\'' + + ", values=" + values + + '}'; + } } diff --git a/core/src/main/java/org/elasticsearch/index/get/GetResult.java b/core/src/main/java/org/elasticsearch/index/get/GetResult.java index 0fa843adc47d4..6569902ceffc8 100644 --- a/core/src/main/java/org/elasticsearch/index/get/GetResult.java +++ b/core/src/main/java/org/elasticsearch/index/get/GetResult.java @@ -20,14 +20,17 @@ package org.elasticsearch.index.get; import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.compress.CompressorFactory; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentHelper; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.mapper.SourceFieldMapper; import org.elasticsearch.search.lookup.SourceLookup; import java.io.IOException; @@ -37,13 +40,20 @@ import java.util.Iterator; import java.util.List; import java.util.Map; +import java.util.Objects; import static 
java.util.Collections.emptyMap; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; import static org.elasticsearch.index.get.GetField.readGetField; -/** - */ -public class GetResult implements Streamable, Iterable, ToXContent { +public class GetResult implements Streamable, Iterable, ToXContentObject { + + public static final String _INDEX = "_index"; + public static final String _TYPE = "_type"; + public static final String _ID = "_id"; + private static final String _VERSION = "_version"; + private static final String FOUND = "found"; + private static final String FIELDS = "fields"; private String index; private String type; @@ -58,7 +68,8 @@ public class GetResult implements Streamable, Iterable, ToXContent { GetResult() { } - public GetResult(String index, String type, String id, long version, boolean exists, BytesReference source, Map fields) { + public GetResult(String index, String type, String id, long version, boolean exists, BytesReference source, + Map fields) { this.index = index; this.type = type; this.id = id; @@ -124,6 +135,10 @@ public byte[] source() { * Returns bytes reference, also un compress the source if needed. */ public BytesReference sourceRef() { + if (source == null) { + return null; + } + try { this.source = CompressorFactory.uncompressIfNeeded(this.source); return this.source; @@ -197,15 +212,6 @@ public Iterator iterator() { return fields.values().iterator(); } - static final class Fields { - static final String _INDEX = "_index"; - static final String _TYPE = "_type"; - static final String _ID = "_id"; - static final String _VERSION = "_version"; - static final String FOUND = "found"; - static final String FIELDS = "fields"; - } - public XContentBuilder toXContentEmbedded(XContentBuilder builder, Params params) throws IOException { List metaFields = new ArrayList<>(); List otherFields = new ArrayList<>(); @@ -226,20 +232,16 @@ public XContentBuilder toXContentEmbedded(XContentBuilder builder, Params params builder.field(field.getName(), field.getValue()); } - builder.field(Fields.FOUND, exists); + builder.field(FOUND, exists); if (source != null) { - XContentHelper.writeRawField("_source", source, builder, params); + XContentHelper.writeRawField(SourceFieldMapper.NAME, source, builder, params); } if (!otherFields.isEmpty()) { - builder.startObject(Fields.FIELDS); + builder.startObject(FIELDS); for (GetField field : otherFields) { - builder.startArray(field.getName()); - for (Object value : field.getValues()) { - builder.value(value); - } - builder.endArray(); + field.toXContent(builder, params); } builder.endObject(); } @@ -248,23 +250,79 @@ public XContentBuilder toXContentEmbedded(XContentBuilder builder, Params params @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - if (!isExists()) { - builder.field(Fields._INDEX, index); - builder.field(Fields._TYPE, type); - builder.field(Fields._ID, id); - builder.field(Fields.FOUND, false); - } else { - builder.field(Fields._INDEX, index); - builder.field(Fields._TYPE, type); - builder.field(Fields._ID, id); + builder.startObject(); + builder.field(_INDEX, index); + builder.field(_TYPE, type); + builder.field(_ID, id); + if (isExists()) { if (version != -1) { - builder.field(Fields._VERSION, version); + builder.field(_VERSION, version); } toXContentEmbedded(builder, params); + } else { + builder.field(FOUND, false); } + builder.endObject(); return builder; } + public static GetResult fromXContentEmbedded(XContentParser 
parser) throws IOException { + XContentParser.Token token = parser.nextToken(); + ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, parser::getTokenLocation); + + String currentFieldName = parser.currentName(); + String index = null, type = null, id = null; + long version = -1; + Boolean found = null; + BytesReference source = null; + Map fields = new HashMap<>(); + while((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (_INDEX.equals(currentFieldName)) { + index = parser.text(); + } else if (_TYPE.equals(currentFieldName)) { + type = parser.text(); + } else if (_ID.equals(currentFieldName)) { + id = parser.text(); + } else if (_VERSION.equals(currentFieldName)) { + version = parser.longValue(); + } else if (FOUND.equals(currentFieldName)) { + found = parser.booleanValue(); + } else { + fields.put(currentFieldName, new GetField(currentFieldName, Collections.singletonList(parser.objectText()))); + } + } else if (token == XContentParser.Token.START_OBJECT) { + if (SourceFieldMapper.NAME.equals(currentFieldName)) { + try (XContentBuilder builder = XContentBuilder.builder(parser.contentType().xContent())) { + //the original document gets slightly modified: whitespaces or pretty printing are not preserved, + //it all depends on the current builder settings + builder.copyCurrentStructure(parser); + source = builder.bytes(); + } + } else if (FIELDS.equals(currentFieldName)) { + while(parser.nextToken() != XContentParser.Token.END_OBJECT) { + GetField getField = GetField.fromXContent(parser); + fields.put(getField.getName(), getField); + } + } else { + parser.skipChildren(); // skip potential inner objects for forward compatibility + } + } else if (token == XContentParser.Token.START_ARRAY) { + parser.skipChildren(); // skip potential inner arrays for forward compatibility + } + } + return new GetResult(index, type, id, version, found, source, fields); + } + + public static GetResult fromXContent(XContentParser parser) throws IOException { + XContentParser.Token token = parser.nextToken(); + ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation); + + return fromXContentEmbedded(parser); + } + public static GetResult readGetResult(StreamInput in) throws IOException { GetResult result = new GetResult(); result.readFrom(in); @@ -315,5 +373,28 @@ public void writeTo(StreamOutput out) throws IOException { } } } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + GetResult getResult = (GetResult) o; + return version == getResult.version && + exists == getResult.exists && + Objects.equals(index, getResult.index) && + Objects.equals(type, getResult.type) && + Objects.equals(id, getResult.id) && + Objects.equals(fields, getResult.fields) && + Objects.equals(sourceAsMap(), getResult.sourceAsMap()); + } + + @Override + public int hashCode() { + return Objects.hash(index, type, id, version, exists, fields, sourceAsMap()); + } } diff --git a/core/src/main/java/org/elasticsearch/index/get/GetStats.java b/core/src/main/java/org/elasticsearch/index/get/GetStats.java index 10b4f64c19ead..ad5c534264e8d 100644 --- a/core/src/main/java/org/elasticsearch/index/get/GetStats.java +++ b/core/src/main/java/org/elasticsearch/index/get/GetStats.java @@ -136,12 +136,6 @@ static final class Fields { static final String CURRENT = 
"current"; } - public static GetStats readGetStats(StreamInput in) throws IOException { - GetStats stats = new GetStats(); - stats.readFrom(in); - return stats; - } - @Override public void readFrom(StreamInput in) throws IOException { existsCount = in.readVLong(); diff --git a/core/src/main/java/org/elasticsearch/index/get/ShardGetService.java b/core/src/main/java/org/elasticsearch/index/get/ShardGetService.java index fb3fb5aa56d98..9324bf29d3915 100644 --- a/core/src/main/java/org/elasticsearch/index/get/ShardGetService.java +++ b/core/src/main/java/org/elasticsearch/index/get/ShardGetService.java @@ -24,7 +24,7 @@ import org.elasticsearch.common.Nullable; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.collect.Tuple; -import org.elasticsearch.common.lucene.uid.Versions; +import org.elasticsearch.common.lucene.uid.VersionsResolver.DocIdAndVersion; import org.elasticsearch.common.metrics.CounterMetric; import org.elasticsearch.common.metrics.MeanMetric; import org.elasticsearch.common.util.set.Sets; @@ -42,14 +42,13 @@ import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.ParentFieldMapper; import org.elasticsearch.index.mapper.SourceFieldMapper; -import org.elasticsearch.index.mapper.Uid; -import org.elasticsearch.index.mapper.UidFieldMapper; import org.elasticsearch.index.shard.AbstractIndexShardComponent; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; import org.elasticsearch.search.fetch.subphase.ParentFieldSubFetchPhase; import java.io.IOException; +import java.util.Collection; import java.util.Collections; import java.util.HashMap; import java.util.List; @@ -76,11 +75,11 @@ public GetStats stats() { return new GetStats(existsMetric.count(), TimeUnit.NANOSECONDS.toMillis(existsMetric.sum()), missingMetric.count(), TimeUnit.NANOSECONDS.toMillis(missingMetric.sum()), currentMetric.count()); } - public GetResult get(String type, String id, String[] gFields, boolean realtime, long version, VersionType versionType, FetchSourceContext fetchSourceContext, boolean ignoreErrorsOnGeneratedFields) { + public GetResult get(String type, String id, String[] gFields, boolean realtime, long version, VersionType versionType, FetchSourceContext fetchSourceContext) { currentMetric.inc(); try { long now = System.nanoTime(); - GetResult getResult = innerGet(type, id, gFields, realtime, version, versionType, fetchSourceContext, ignoreErrorsOnGeneratedFields); + GetResult getResult = innerGet(type, id, gFields, realtime, version, versionType, fetchSourceContext); if (getResult.isExists()) { existsMetric.inc(System.nanoTime() - now); @@ -139,13 +138,20 @@ private FetchSourceContext normalizeFetchSourceContent(@Nullable FetchSourceCont return FetchSourceContext.DO_NOT_FETCH_SOURCE; } - private GetResult innerGet(String type, String id, String[] gFields, boolean realtime, long version, VersionType versionType, FetchSourceContext fetchSourceContext, boolean ignoreErrorsOnGeneratedFields) { + private GetResult innerGet(String type, String id, String[] gFields, boolean realtime, long version, VersionType versionType, FetchSourceContext fetchSourceContext) { fetchSourceContext = normalizeFetchSourceContent(fetchSourceContext, gFields); + final Collection types; + if (type == null || type.equals("_all")) { + types = mapperService.types(); + } else { + types = Collections.singleton(type); + } Engine.GetResult get = null; - if (type == null || type.equals("_all")) { - for 
(String typeX : mapperService.types()) { - get = indexShard.get(new Engine.Get(realtime, new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(typeX, id))) + for (String typeX : types) { + Term uidTerm = mapperService.createUidTerm(typeX, id); + if (uidTerm != null) { + get = indexShard.get(new Engine.Get(realtime, typeX, id, uidTerm) .version(version).versionType(versionType)); if (get.exists()) { type = typeX; @@ -154,20 +160,10 @@ private GetResult innerGet(String type, String id, String[] gFields, boolean rea get.release(); } } - if (get == null) { - return new GetResult(shardId.getIndexName(), type, id, -1, false, null, null); - } - if (!get.exists()) { - // no need to release here as well..., we release in the for loop for non exists - return new GetResult(shardId.getIndexName(), type, id, -1, false, null, null); - } - } else { - get = indexShard.get(new Engine.Get(realtime, new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(type, id))) - .version(version).versionType(versionType)); - if (!get.exists()) { - get.release(); - return new GetResult(shardId.getIndexName(), type, id, -1, false, null, null); - } + } + + if (get == null || get.exists() == false) { + return new GetResult(shardId.getIndexName(), type, id, -1, false, null, null); } try { @@ -181,7 +177,7 @@ private GetResult innerGet(String type, String id, String[] gFields, boolean rea private GetResult innerGetLoadFromStoredFields(String type, String id, String[] gFields, FetchSourceContext fetchSourceContext, Engine.GetResult get, MapperService mapperService) { Map fields = null; BytesReference source = null; - Versions.DocIdAndVersion docIdAndVersion = get.docIdAndVersion(); + DocIdAndVersion docIdAndVersion = get.docIdAndVersion(); FieldsVisitor fieldVisitor = buildFieldsVisitors(gFields, fetchSourceContext); if (fieldVisitor != null) { try { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/AllFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/AllFieldMapper.java index c418dd5e6e20f..2967ddce8fd8a 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/AllFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/AllFieldMapper.java @@ -19,8 +19,8 @@ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.Term; import org.apache.lucene.search.Query; import org.elasticsearch.common.io.stream.BytesStreamOutput; @@ -34,11 +34,11 @@ import org.elasticsearch.index.similarity.SimilarityService; import java.io.IOException; +import java.util.Collections; import java.util.Iterator; import java.util.List; import java.util.Map; -import static org.elasticsearch.common.xcontent.support.XContentMapValues.lenientNodeBooleanValue; import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeMapValue; import static org.elasticsearch.index.mapper.TypeParsers.parseTextField; @@ -103,18 +103,18 @@ public AllFieldMapper build(BuilderContext context) { public static class TypeParser implements MetadataFieldMapper.TypeParser { @Override - public MetadataFieldMapper.Builder parse(String name, Map node, + public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { Builder builder = new Builder(parserContext.mapperService().fullName(NAME)); - builder.fieldType().setIndexAnalyzer(parserContext.analysisService().defaultIndexAnalyzer()); - 
builder.fieldType().setSearchAnalyzer(parserContext.analysisService().defaultSearchAnalyzer()); - builder.fieldType().setSearchQuoteAnalyzer(parserContext.analysisService().defaultSearchQuoteAnalyzer()); + builder.fieldType().setIndexAnalyzer(parserContext.getIndexAnalyzers().getDefaultIndexAnalyzer()); + builder.fieldType().setSearchAnalyzer(parserContext.getIndexAnalyzers().getDefaultSearchAnalyzer()); + builder.fieldType().setSearchQuoteAnalyzer(parserContext.getIndexAnalyzers().getDefaultSearchQuoteAnalyzer()); // parseField below will happily parse the doc_values setting, but it is then never passed to // the AllFieldMapper ctor in the builder since it is not valid. Here we validate // the doc values settings (old and new) are rejected Object docValues = node.get("doc_values"); - if (docValues != null && lenientNodeBooleanValue(docValues)) { + if (docValues != null && TypeParsers.nodeBooleanValue(name, "doc_values", docValues)) { throw new MapperParsingException("Field [" + name + "] is always tokenized and cannot have doc values"); } @@ -135,8 +135,8 @@ public MetadataFieldMapper.Builder parse(String name, Map node, String fieldName = entry.getKey(); Object fieldNode = entry.getValue(); if (fieldName.equals("enabled")) { - builder.enabled(lenientNodeBooleanValue(fieldNode) ? EnabledAttributeMapper.ENABLED : - EnabledAttributeMapper.DISABLED); + boolean enabled = TypeParsers.nodeBooleanValue(name, "enabled", fieldNode); + builder.enabled(enabled ? EnabledAttributeMapper.ENABLED : EnabledAttributeMapper.DISABLED); iterator.remove(); } } @@ -144,14 +144,20 @@ public MetadataFieldMapper.Builder parse(String name, Map node, } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { - return new AllFieldMapper(indexSettings, fieldType); + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); + if (fieldType != null) { + return new AllFieldMapper(indexSettings, fieldType); + } else { + return parse(NAME, Collections.emptyMap(), context) + .build(new BuilderContext(indexSettings, new ContentPath(1))); + } } } static final class AllFieldType extends StringFieldType { - public AllFieldType() { + AllFieldType() { } protected AllFieldType(AllFieldType ref) { @@ -182,7 +188,7 @@ public Query termQuery(Object value, QueryShardContext context) { private EnabledAttributeMapper enabledState; private AllFieldMapper(Settings indexSettings, MappedFieldType existing) { - this(existing == null ? 
Defaults.FIELD_TYPE.clone() : existing.clone(), Defaults.ENABLED, indexSettings); + this(existing.clone(), Defaults.ENABLED, indexSettings); } private AllFieldMapper(MappedFieldType fieldType, EnabledAttributeMapper enabled, Settings indexSettings) { @@ -211,7 +217,7 @@ public Mapper parse(ParseContext context) throws IOException { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { if (!enabledState.enabled) { return; } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/BaseGeoPointFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/BaseGeoPointFieldMapper.java index 2a24b6a94c6dc..738db9ccc381b 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/BaseGeoPointFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/BaseGeoPointFieldMapper.java @@ -19,16 +19,22 @@ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; +import org.apache.lucene.index.FieldInfo; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.Terms; import org.apache.lucene.search.Query; -import org.elasticsearch.common.geo.GeoHashUtils; +import org.apache.lucene.spatial.util.MortonEncoder; +import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.LegacyNumericUtils; -import org.elasticsearch.Version; import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.Version; +import org.elasticsearch.action.fieldstats.FieldStats; import org.elasticsearch.common.Explicit; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.collect.Iterators; +import org.elasticsearch.common.geo.GeoHashUtils; import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.common.geo.GeoUtils; import org.elasticsearch.common.logging.DeprecationLogger; @@ -89,7 +95,7 @@ public abstract static class Builder ignoreMalformed, CopyTo copyTo); public Y build(Mapper.BuilderContext context) { - GeoPointFieldType geoPointFieldType = (GeoPointFieldType)fieldType; + // version 5.0 cuts over to LatLonPoint and no longer indexes geohash, or lat/lon separately + if (context.indexCreatedVersion().before(LatLonPointFieldMapper.LAT_LON_FIELD_VERSION)) { + return buildLegacy(context); + } + return build(context, name, fieldType, defaultFieldType, context.indexSettings(), + null, null, null, multiFieldsBuilder.build(this, context), ignoreMalformed(context), copyTo); + } + + private Y buildLegacy(Mapper.BuilderContext context) { + LegacyGeoPointFieldType geoPointFieldType = (LegacyGeoPointFieldType)fieldType; FieldMapper latMapper = null; FieldMapper lonMapper = null; @@ -161,9 +176,9 @@ public Y build(Mapper.BuilderContext context) { lonMapper = (LegacyDoubleFieldMapper) lonMapperBuilder.includeInAll(false).store(fieldType.stored()).docValues(false).build(context); } else { latMapper = new NumberFieldMapper.Builder(Names.LAT, NumberFieldMapper.NumberType.DOUBLE) - .includeInAll(false).store(fieldType.stored()).docValues(false).build(context); + .includeInAll(false).store(fieldType.stored()).docValues(false).build(context); lonMapper = new NumberFieldMapper.Builder(Names.LON, NumberFieldMapper.NumberType.DOUBLE) - .includeInAll(false).store(fieldType.stored()).docValues(false).build(context); + 
.includeInAll(false).store(fieldType.stored()).docValues(false).build(context); } geoPointFieldType.setLatLonEnabled(latMapper.fieldType(), lonMapper.fieldType()); } @@ -183,7 +198,7 @@ public Y build(Mapper.BuilderContext context) { context.path().remove(); return build(context, name, fieldType, defaultFieldType, context.indexSettings(), - latMapper, lonMapper, geoHashMapper, multiFieldsBuilder.build(this, context), ignoreMalformed(context), copyTo); + latMapper, lonMapper, geoHashMapper, multiFieldsBuilder.build(this, context), ignoreMalformed(context), copyTo); } } @@ -191,8 +206,11 @@ public abstract static class TypeParser implements Mapper.TypeParser { @Override public Mapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { Builder builder; - if (parserContext.indexVersionCreated().before(Version.V_2_2_0)) { + Version indexVersionCreated = parserContext.indexVersionCreated(); + if (indexVersionCreated.before(Version.V_2_2_0)) { builder = new LegacyGeoPointFieldMapper.Builder(name); + } else if (indexVersionCreated.onOrAfter(LatLonPointFieldMapper.LAT_LON_FIELD_VERSION)) { + builder = new LatLonPointFieldMapper.Builder(name); } else { builder = new GeoPointFieldMapper.Builder(name); } @@ -202,82 +220,119 @@ public abstract static class TypeParser implements Mapper.TypeParser { Map.Entry entry = iterator.next(); String propName = entry.getKey(); Object propNode = entry.getValue(); - if (propName.equals("lat_lon")) { - deprecationLogger.deprecated(CONTENT_TYPE + " lat_lon parameter is deprecated and will be removed " - + "in the next major release"); - builder.enableLatLon(XContentMapValues.lenientNodeBooleanValue(propNode)); - iterator.remove(); - } else if (propName.equals("precision_step")) { - deprecationLogger.deprecated(CONTENT_TYPE + " precision_step parameter is deprecated and will be removed " - + "in the next major release"); - builder.precisionStep(XContentMapValues.nodeIntegerValue(propNode)); - iterator.remove(); - } else if (propName.equals("geohash")) { - deprecationLogger.deprecated(CONTENT_TYPE + " geohash parameter is deprecated and will be removed " - + "in the next major release"); - builder.enableGeoHash(XContentMapValues.lenientNodeBooleanValue(propNode)); - iterator.remove(); - } else if (propName.equals("geohash_prefix")) { - deprecationLogger.deprecated(CONTENT_TYPE + " geohash_prefix parameter is deprecated and will be removed " - + "in the next major release"); - builder.geoHashPrefix(XContentMapValues.lenientNodeBooleanValue(propNode)); - if (XContentMapValues.lenientNodeBooleanValue(propNode)) { - builder.enableGeoHash(true); - } - iterator.remove(); - } else if (propName.equals("geohash_precision")) { - deprecationLogger.deprecated(CONTENT_TYPE + " geohash_precision parameter is deprecated and will be removed " - + "in the next major release"); - if (propNode instanceof Integer) { - builder.geoHashPrecision(XContentMapValues.nodeIntegerValue(propNode)); - } else { - builder.geoHashPrecision(GeoUtils.geoHashLevelsForPrecision(propNode.toString())); + if (indexVersionCreated.before(LatLonPointFieldMapper.LAT_LON_FIELD_VERSION)) { + if (propName.equals("lat_lon")) { + deprecationLogger.deprecated(CONTENT_TYPE + " lat_lon parameter is deprecated and will be removed " + + "in the next major release"); + builder.enableLatLon(XContentMapValues.lenientNodeBooleanValue(propNode, propName)); + iterator.remove(); + } else if (propName.equals("precision_step")) { + deprecationLogger.deprecated(CONTENT_TYPE + " 
precision_step parameter is deprecated and will be removed " + + "in the next major release"); + builder.precisionStep(XContentMapValues.nodeIntegerValue(propNode)); + iterator.remove(); + } else if (propName.equals("geohash")) { + deprecationLogger.deprecated(CONTENT_TYPE + " geohash parameter is deprecated and will be removed " + + "in the next major release"); + builder.enableGeoHash(XContentMapValues.lenientNodeBooleanValue(propNode, propName)); + iterator.remove(); + } else if (propName.equals("geohash_prefix")) { + deprecationLogger.deprecated(CONTENT_TYPE + " geohash_prefix parameter is deprecated and will be removed " + + "in the next major release"); + builder.geoHashPrefix(XContentMapValues.lenientNodeBooleanValue(propNode, propName)); + if (XContentMapValues.lenientNodeBooleanValue(propNode, propName)) { + builder.enableGeoHash(true); + } + iterator.remove(); + } else if (propName.equals("geohash_precision")) { + deprecationLogger.deprecated(CONTENT_TYPE + " geohash_precision parameter is deprecated and will be removed " + + "in the next major release"); + if (propNode instanceof Integer) { + builder.geoHashPrecision(XContentMapValues.nodeIntegerValue(propNode)); + } else { + builder.geoHashPrecision(GeoUtils.geoHashLevelsForPrecision(propNode.toString())); + } + iterator.remove(); } - iterator.remove(); - } else if (propName.equals(Names.IGNORE_MALFORMED)) { - builder.ignoreMalformed(XContentMapValues.lenientNodeBooleanValue(propNode)); + } + + if (propName.equals(Names.IGNORE_MALFORMED)) { + builder.ignoreMalformed(TypeParsers.nodeBooleanValue(name, Names.IGNORE_MALFORMED, propNode)); iterator.remove(); } } if (builder instanceof LegacyGeoPointFieldMapper.Builder) { return LegacyGeoPointFieldMapper.parse((LegacyGeoPointFieldMapper.Builder) builder, node, parserContext); + } else if (builder instanceof LatLonPointFieldMapper.Builder) { + return (LatLonPointFieldMapper.Builder) builder; } return (GeoPointFieldMapper.Builder) builder; } } - public static class GeoPointFieldType extends MappedFieldType { + public abstract static class GeoPointFieldType extends MappedFieldType { + GeoPointFieldType() { + } + + GeoPointFieldType(GeoPointFieldType ref) { + super(ref); + } + + @Override + public String typeName() { + return CONTENT_TYPE; + } + + @Override + public FieldStats stats(IndexReader reader) throws IOException { + int maxDoc = reader.maxDoc(); + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(name()); + if (fi == null) { + return null; + } + /** + * we don't have a specific type for geo_point so we use an empty {@link FieldStats.Text}. 
+ * TODO: we should maybe support a new type that knows how to (de)encode the min/max information + */ + return new FieldStats.Text(maxDoc, -1, -1, -1, isSearchable(), isAggregatable()); + } + } + + public static class LegacyGeoPointFieldType extends GeoPointFieldType { protected MappedFieldType geoHashFieldType; protected int geoHashPrecision; protected boolean geoHashPrefixEnabled; protected MappedFieldType latFieldType; protected MappedFieldType lonFieldType; + protected boolean numericEncoded; - GeoPointFieldType() {} + LegacyGeoPointFieldType() {} - GeoPointFieldType(GeoPointFieldType ref) { + LegacyGeoPointFieldType(LegacyGeoPointFieldType ref) { super(ref); this.geoHashFieldType = ref.geoHashFieldType; // copying ref is ok, this can never be modified this.geoHashPrecision = ref.geoHashPrecision; this.geoHashPrefixEnabled = ref.geoHashPrefixEnabled; this.latFieldType = ref.latFieldType; // copying ref is ok, this can never be modified this.lonFieldType = ref.lonFieldType; // copying ref is ok, this can never be modified + this.numericEncoded = ref.numericEncoded; } @Override public MappedFieldType clone() { - return new GeoPointFieldType(this); + return new LegacyGeoPointFieldType(this); } @Override public boolean equals(Object o) { if (!super.equals(o)) return false; - GeoPointFieldType that = (GeoPointFieldType) o; + LegacyGeoPointFieldType that = (LegacyGeoPointFieldType) o; return geoHashPrecision == that.geoHashPrecision && geoHashPrefixEnabled == that.geoHashPrefixEnabled && + numericEncoded == that.numericEncoded && java.util.Objects.equals(geoHashFieldType, that.geoHashFieldType) && java.util.Objects.equals(latFieldType, that.latFieldType) && java.util.Objects.equals(lonFieldType, that.lonFieldType); @@ -285,19 +340,14 @@ public boolean equals(Object o) { @Override public int hashCode() { - return java.util.Objects.hash(super.hashCode(), geoHashFieldType, geoHashPrecision, geoHashPrefixEnabled, latFieldType, - lonFieldType); - } - - @Override - public String typeName() { - return CONTENT_TYPE; + return java.util.Objects.hash(super.hashCode(), geoHashFieldType, geoHashPrecision, geoHashPrefixEnabled, + numericEncoded, latFieldType, lonFieldType); } @Override public void checkCompatibility(MappedFieldType fieldType, List conflicts, boolean strict) { super.checkCompatibility(fieldType, conflicts, strict); - GeoPointFieldType other = (GeoPointFieldType)fieldType; + LegacyGeoPointFieldType other = (LegacyGeoPointFieldType)fieldType; if (isLatLonEnabled() != other.isLatLonEnabled()) { conflicts.add("mapper [" + name() + "] has different [lat_lon]"); } @@ -378,6 +428,23 @@ public DocValueFormat docValueFormat(@Nullable String format, DateTimeZone timeZ public Query termQuery(Object value, QueryShardContext context) { throw new QueryShardException(context, "Geo fields do not support exact searching, use dedicated geo queries instead: [" + name() + "]"); } + + @Override + public FieldStats.GeoPoint stats(IndexReader reader) throws IOException { + String field = name(); + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(field); + if (fi == null) { + return null; + } + + Terms terms = org.apache.lucene.index.MultiFields.getTerms(reader, field); + if (terms == null) { + return new FieldStats.GeoPoint(reader.maxDoc(), 0L, -1L, -1L, isSearchable(), isAggregatable()); + } + return new FieldStats.GeoPoint(reader.maxDoc(), terms.getDocCount(), -1L, terms.getSumTotalTermFreq(), isSearchable(), + isAggregatable(), prefixCodedToGeoPoint(terms.getMin(), 
numericEncoded), + prefixCodedToGeoPoint(terms.getMax(), numericEncoded)); + } } protected FieldMapper latMapper; @@ -398,9 +465,10 @@ protected BaseGeoPointFieldMapper(String simpleName, MappedFieldType fieldType, this.ignoreMalformed = ignoreMalformed; } - @Override - public GeoPointFieldType fieldType() { - return (GeoPointFieldType) super.fieldType(); + + + public LegacyGeoPointFieldType legacyFieldType() { + return (LegacyGeoPointFieldType) super.fieldType(); } @Override @@ -414,15 +482,22 @@ protected void doMerge(Mapper mergeWith, boolean updateAllTypes) { @Override public Iterator iterator() { + if (this instanceof LatLonPointFieldMapper == false) { + return Iterators.concat(super.iterator(), legacyIterator()); + } + return super.iterator(); + } + + public Iterator legacyIterator() { List extras = new ArrayList<>(); - if (fieldType().isGeoHashEnabled()) { + if (legacyFieldType().isGeoHashEnabled()) { extras.add(geoHashMapper); } - if (fieldType().isLatLonEnabled()) { + if (legacyFieldType().isLatLonEnabled()) { extras.add(latMapper); extras.add(lonMapper); } - return Iterators.concat(super.iterator(), extras.iterator()); + return extras.iterator(); } @Override @@ -431,18 +506,18 @@ protected String contentType() { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { throw new UnsupportedOperationException("Parsing is implemented in parse(), this method should NEVER be called"); } protected void parse(ParseContext context, GeoPoint point, String geoHash) throws IOException { - if (fieldType().isGeoHashEnabled()) { + if (legacyFieldType().isGeoHashEnabled()) { if (geoHash == null) { geoHash = GeoHashUtils.stringEncode(point.lon(), point.lat()); } addGeoHashField(context, geoHash); } - if (fieldType().isLatLonEnabled()) { + if (legacyFieldType().isLatLonEnabled()) { latMapper.parse(context.createExternalValueContext(point.lat())); lonMapper.parse(context.createExternalValueContext(point.lon())); } @@ -517,8 +592,9 @@ public Mapper parse(ParseContext context) throws IOException { } private void addGeoHashField(ParseContext context, String geoHash) throws IOException { - int len = Math.min(fieldType().geoHashPrecision(), geoHash.length()); - int min = fieldType().isGeoHashPrefixEnabled() ? 1 : len; + LegacyGeoPointFieldType ft = (LegacyGeoPointFieldType)fieldType; + int len = Math.min(ft.geoHashPrecision(), geoHash.length()); + int min = ft.isGeoHashPrefixEnabled() ? 
1 : len; for (int i = len; i >= min; i--) { // side effect of this call is adding the field @@ -537,23 +613,30 @@ private void parsePointFromString(ParseContext context, GeoPoint sparse, String @Override protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, Params params) throws IOException { super.doXContentBody(builder, includeDefaults, params); - if (includeDefaults || fieldType().isLatLonEnabled() != GeoPointFieldMapper.Defaults.ENABLE_LATLON) { - builder.field("lat_lon", fieldType().isLatLonEnabled()); + if (this instanceof LatLonPointFieldMapper == false) { + legacyDoXContentBody(builder, includeDefaults, params); } - if (fieldType().isLatLonEnabled() && (includeDefaults || fieldType().latFieldType().numericPrecisionStep() != LegacyNumericUtils.PRECISION_STEP_DEFAULT)) { - builder.field("precision_step", fieldType().latFieldType().numericPrecisionStep()); + if (includeDefaults || ignoreMalformed.explicit()) { + builder.field(Names.IGNORE_MALFORMED, ignoreMalformed.value()); } - if (includeDefaults || fieldType().isGeoHashEnabled() != Defaults.ENABLE_GEOHASH) { - builder.field("geohash", fieldType().isGeoHashEnabled()); + } + + protected void legacyDoXContentBody(XContentBuilder builder, boolean includeDefaults, Params params) throws IOException { + LegacyGeoPointFieldType ft = (LegacyGeoPointFieldType) fieldType; + if (includeDefaults || ft.isLatLonEnabled() != GeoPointFieldMapper.Defaults.ENABLE_LATLON) { + builder.field("lat_lon", ft.isLatLonEnabled()); } - if (includeDefaults || fieldType().isGeoHashPrefixEnabled() != Defaults.ENABLE_GEOHASH_PREFIX) { - builder.field("geohash_prefix", fieldType().isGeoHashPrefixEnabled()); + if (ft.isLatLonEnabled() && (includeDefaults || ft.latFieldType().numericPrecisionStep() != LegacyNumericUtils.PRECISION_STEP_DEFAULT)) { + builder.field("precision_step", ft.latFieldType().numericPrecisionStep()); } - if (fieldType().isGeoHashEnabled() && (includeDefaults || fieldType().geoHashPrecision() != Defaults.GEO_HASH_PRECISION)) { - builder.field("geohash_precision", fieldType().geoHashPrecision()); + if (includeDefaults || ft.isGeoHashEnabled() != Defaults.ENABLE_GEOHASH) { + builder.field("geohash", ft.isGeoHashEnabled()); } - if (includeDefaults || ignoreMalformed.explicit()) { - builder.field(Names.IGNORE_MALFORMED, ignoreMalformed.value()); + if (includeDefaults || ft.isGeoHashPrefixEnabled() != Defaults.ENABLE_GEOHASH_PREFIX) { + builder.field("geohash_prefix", ft.isGeoHashPrefixEnabled()); + } + if (ft.isGeoHashEnabled() && (includeDefaults || ft.geoHashPrecision() != Defaults.GEO_HASH_PRECISION)) { + builder.field("geohash_precision", ft.geoHashPrecision()); } } @@ -577,4 +660,19 @@ public FieldMapper updateFieldType(Map fullNameToFieldT updated.lonMapper = lonUpdated; return updated; } + + private static GeoPoint prefixCodedToGeoPoint(BytesRef val, boolean numericEncoded) { + final long encoded = numericEncoded ? 
LegacyNumericUtils.prefixCodedToLong(val) : prefixCodedToGeoCoded(val); + return new GeoPoint(MortonEncoder.decodeLatitude(encoded), MortonEncoder.decodeLongitude(encoded)); + } + + private static long prefixCodedToGeoCoded(BytesRef val) { + long result = fromBytes((byte)0, (byte)0, (byte)0, (byte)0, val.bytes[val.offset + 0], val.bytes[val.offset + 1], + val.bytes[val.offset + 2], val.bytes[val.offset + 3]); + return result << 32; + } + + private static long fromBytes(byte b1, byte b2, byte b3, byte b4, byte b5, byte b6, byte b7, byte b8) { + return ((long)b1 & 255L) << 56 | ((long)b2 & 255L) << 48 | ((long)b3 & 255L) << 40 | ((long)b4 & 255L) << 32 | ((long)b5 & 255L) << 24 | ((long)b6 & 255L) << 16 | ((long)b7 & 255L) << 8 | (long)b8 & 255L; + } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/BinaryFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/BinaryFieldMapper.java index cb6fae8b59d50..fa50e13e19ae5 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/BinaryFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/BinaryFieldMapper.java @@ -22,6 +22,7 @@ import com.carrotsearch.hppc.ObjectArrayList; import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.search.Query; import org.apache.lucene.store.ByteArrayDataOutput; import org.apache.lucene.util.BytesRef; @@ -85,7 +86,7 @@ public Mapper.Builder parse(String name, Map node, ParserContext static final class BinaryFieldType extends MappedFieldType { - public BinaryFieldType() {} + BinaryFieldType() {} protected BinaryFieldType(BinaryFieldType ref) { super(ref); @@ -104,7 +105,7 @@ public String typeName() { @Override - public BytesReference valueForSearch(Object value) { + public BytesReference valueForDisplay(Object value) { if (value == null) { return null; } @@ -140,7 +141,7 @@ protected BinaryFieldMapper(String simpleName, MappedFieldType fieldType, Mapped } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { if (!fieldType().stored() && !fieldType().hasDocValues()) { return; } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/BooleanFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/BooleanFieldMapper.java index b27f564f2d7e8..03d1dc7d57187 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/BooleanFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/BooleanFieldMapper.java @@ -22,11 +22,14 @@ import org.apache.lucene.document.Field; import org.apache.lucene.document.SortedNumericDocValuesField; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.search.Query; import org.apache.lucene.search.TermRangeQuery; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.Booleans; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -34,6 +37,7 @@ import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType; import 
org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData; +import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.search.DocValueFormat; import org.joda.time.DateTimeZone; @@ -42,7 +46,6 @@ import java.util.List; import java.util.Map; -import static org.elasticsearch.common.xcontent.support.XContentMapValues.lenientNodeBooleanValue; import static org.elasticsearch.index.mapper.TypeParsers.parseField; /** @@ -52,6 +55,8 @@ public class BooleanFieldMapper extends FieldMapper { public static final String CONTENT_TYPE = "boolean"; + private static final DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(BooleanFieldMapper.class)); + public static class Defaults { public static final MappedFieldType FIELD_TYPE = new BooleanFieldType(); @@ -106,7 +111,7 @@ public Mapper.Builder parse(String name, Map node, ParserContext if (propNode == null) { throw new MapperParsingException("Property [null_value] cannot be null."); } - builder.nullValue(lenientNodeBooleanValue(propNode)); + builder.nullValue(TypeParsers.nodeBooleanValue(name, "null_value", propNode)); iterator.remove(); } } @@ -151,20 +156,29 @@ public BytesRef indexedValueForSearch(Object value) { } else { sValue = value.toString(); } - if (sValue.length() == 0) { - return Values.FALSE; - } - if (sValue.length() == 1 && sValue.charAt(0) == 'F') { - return Values.FALSE; - } - if (Booleans.parseBoolean(sValue, false)) { - return Values.TRUE; + switch (sValue) { + case "true": + return Values.TRUE; + case "false": + return Values.FALSE; + default: + deprecationLogger.deprecated("searching using boolean value [" + sValue + + "] is deprecated, please use [true] or [false]"); + if (sValue.length() == 0) { + return Values.FALSE; + } + if (sValue.length() == 1 && sValue.charAt(0) == 'F') { + return Values.FALSE; + } + if (Booleans.parseBoolean(sValue, false)) { + return Values.TRUE; + } + return Values.FALSE; } - return Values.FALSE; } @Override - public Boolean valueForSearch(Object value) { + public Boolean valueForDisplay(Object value) { if (value == null) { return null; } @@ -197,7 +211,7 @@ public DocValueFormat docValueFormat(@Nullable String format, DateTimeZone timeZ } @Override - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) { + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) { failIfNotIndexed(); return new TermRangeQuery(name(), lowerTerm == null ? 
null : indexedValueForSearch(lowerTerm), @@ -217,7 +231,7 @@ public BooleanFieldType fieldType() { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { if (fieldType().indexOptions() == IndexOptions.NONE && !fieldType().stored() && !fieldType().hasDocValues()) { return; } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/CompletionFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/CompletionFieldMapper.java index 13bb7d255a873..dfdb726ec57cc 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/CompletionFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/CompletionFieldMapper.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.mapper; import org.apache.lucene.codecs.PostingsFormat; -import org.apache.lucene.document.Field; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.Term; import org.apache.lucene.search.suggest.document.Completion50PostingsFormat; import org.apache.lucene.search.suggest.document.CompletionAnalyzer; @@ -38,6 +38,7 @@ import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentParser.NumberType; import org.elasticsearch.common.xcontent.XContentParser.Token; +import org.elasticsearch.index.analysis.AnalyzerScope; import org.elasticsearch.index.analysis.NamedAnalyzer; import org.elasticsearch.search.suggest.completion.CompletionSuggester; import org.elasticsearch.search.suggest.completion.context.ContextMapping; @@ -126,22 +127,22 @@ public static class TypeParser implements Mapper.TypeParser { if (fieldName.equals("type")) { continue; } - if (parserContext.parseFieldMatcher().match(fieldName, Fields.ANALYZER)) { + if (Fields.ANALYZER.match(fieldName)) { indexAnalyzer = getNamedAnalyzer(parserContext, fieldNode.toString()); iterator.remove(); - } else if (parserContext.parseFieldMatcher().match(fieldName, Fields.SEARCH_ANALYZER)) { + } else if (Fields.SEARCH_ANALYZER.match(fieldName)) { searchAnalyzer = getNamedAnalyzer(parserContext, fieldNode.toString()); iterator.remove(); - } else if (parserContext.parseFieldMatcher().match(fieldName, Fields.PRESERVE_SEPARATORS)) { + } else if (Fields.PRESERVE_SEPARATORS.match(fieldName)) { builder.preserveSeparators(Boolean.parseBoolean(fieldNode.toString())); iterator.remove(); - } else if (parserContext.parseFieldMatcher().match(fieldName, Fields.PRESERVE_POSITION_INCREMENTS)) { + } else if (Fields.PRESERVE_POSITION_INCREMENTS.match(fieldName)) { builder.preservePositionIncrements(Boolean.parseBoolean(fieldNode.toString())); iterator.remove(); - } else if (parserContext.parseFieldMatcher().match(fieldName, Fields.MAX_INPUT_LENGTH)) { + } else if (Fields.MAX_INPUT_LENGTH.match(fieldName)) { builder.maxInputLength(Integer.parseInt(fieldNode.toString())); iterator.remove(); - } else if (parserContext.parseFieldMatcher().match(fieldName, Fields.CONTEXTS)) { + } else if (Fields.CONTEXTS.match(fieldName)) { builder.contextMappings(ContextMappings.load(fieldNode, parserContext.indexVersionCreated())); iterator.remove(); } else if (parseMultiField(builder, name, parserContext, fieldName, fieldNode)) { @@ -153,7 +154,7 @@ public static class TypeParser implements Mapper.TypeParser { if (searchAnalyzer != null) { throw new MapperParsingException("analyzer on completion field [" + name + "] must be set when search_analyzer is set"); } - indexAnalyzer = searchAnalyzer 
= parserContext.analysisService().analyzer("simple"); + indexAnalyzer = searchAnalyzer = parserContext.getIndexAnalyzers().get("simple"); } else if (searchAnalyzer == null) { searchAnalyzer = indexAnalyzer; } @@ -164,7 +165,7 @@ public static class TypeParser implements Mapper.TypeParser { } private NamedAnalyzer getNamedAnalyzer(ParserContext parserContext, String name) { - NamedAnalyzer analyzer = parserContext.analysisService().analyzer(name); + NamedAnalyzer analyzer = parserContext.getIndexAnalyzers().get(name); if (analyzer == null) { throw new IllegalArgumentException("Can't find default or mapped analyzer with name [" + name + "]"); } @@ -209,7 +210,7 @@ public void setContextMappings(ContextMappings contextMappings) { public NamedAnalyzer indexAnalyzer() { final NamedAnalyzer indexAnalyzer = super.indexAnalyzer(); if (indexAnalyzer != null && !(indexAnalyzer.analyzer() instanceof CompletionAnalyzer)) { - return new NamedAnalyzer(indexAnalyzer.name(), + return new NamedAnalyzer(indexAnalyzer.name(), AnalyzerScope.INDEX, new CompletionAnalyzer(indexAnalyzer, preserveSep, preservePositionIncrements)); } @@ -220,7 +221,7 @@ public NamedAnalyzer indexAnalyzer() { public NamedAnalyzer searchAnalyzer() { final NamedAnalyzer searchAnalyzer = super.searchAnalyzer(); if (searchAnalyzer != null && !(searchAnalyzer.analyzer() instanceof CompletionAnalyzer)) { - return new NamedAnalyzer(searchAnalyzer.name(), + return new NamedAnalyzer(searchAnalyzer.name(), AnalyzerScope.INDEX, new CompletionAnalyzer(searchAnalyzer, preserveSep, preservePositionIncrements)); } return searchAnalyzer; @@ -530,14 +531,10 @@ private void parse(ParseContext parseContext, Token token, XContentParser parser if (currentToken == XContentParser.Token.FIELD_NAME) { fieldName = parser.currentName(); contextMapping = contextMappings.get(fieldName); - } else if (currentToken == XContentParser.Token.VALUE_STRING - || currentToken == XContentParser.Token.START_ARRAY - || currentToken == XContentParser.Token.START_OBJECT) { + } else { assert fieldName != null; assert !contextsMap.containsKey(fieldName); contextsMap.put(fieldName, contextMapping.parseContext(parseContext, parser)); - } else { - throw new IllegalArgumentException("contexts must be an object or an array , but was [" + currentToken + "]"); } } } else { @@ -589,7 +586,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { // no-op } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/CompletionFieldMapper2x.java b/core/src/main/java/org/elasticsearch/index/mapper/CompletionFieldMapper2x.java index 655af43710ffb..f1a9f1990cefd 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/CompletionFieldMapper2x.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/CompletionFieldMapper2x.java @@ -22,6 +22,7 @@ import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.codecs.PostingsFormat; import org.apache.lucene.document.Field; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.search.suggest.analyzing.XAnalyzingSuggester; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchParseException; @@ -178,19 +179,19 @@ public static class TypeParser implements Mapper.TypeParser { indexAnalyzer = getNamedAnalyzer(parserContext, fieldNode.toString()); iterator.remove(); - } 
else if (parserContext.parseFieldMatcher().match(fieldName, Fields.SEARCH_ANALYZER)) { + } else if (Fields.SEARCH_ANALYZER.match(fieldName)) { searchAnalyzer = getNamedAnalyzer(parserContext, fieldNode.toString()); iterator.remove(); } else if (fieldName.equals(Fields.PAYLOADS)) { builder.payloads(Boolean.parseBoolean(fieldNode.toString())); iterator.remove(); - } else if (parserContext.parseFieldMatcher().match(fieldName, Fields.PRESERVE_SEPARATORS)) { + } else if (Fields.PRESERVE_SEPARATORS.match(fieldName)) { builder.preserveSeparators(Boolean.parseBoolean(fieldNode.toString())); iterator.remove(); - } else if (parserContext.parseFieldMatcher().match(fieldName, Fields.PRESERVE_POSITION_INCREMENTS)) { + } else if (Fields.PRESERVE_POSITION_INCREMENTS.match(fieldName)) { builder.preservePositionIncrements(Boolean.parseBoolean(fieldNode.toString())); iterator.remove(); - } else if (parserContext.parseFieldMatcher().match(fieldName, Fields.MAX_INPUT_LENGTH)) { + } else if (Fields.MAX_INPUT_LENGTH.match(fieldName)) { builder.maxInputLength(Integer.parseInt(fieldNode.toString())); iterator.remove(); } else if (parseMultiField(builder, name, parserContext, fieldName, fieldNode)) { @@ -206,7 +207,7 @@ public static class TypeParser implements Mapper.TypeParser { throw new MapperParsingException( "analyzer on completion field [" + name + "] must be set when search_analyzer is set"); } - indexAnalyzer = searchAnalyzer = parserContext.analysisService().analyzer("simple"); + indexAnalyzer = searchAnalyzer = parserContext.getIndexAnalyzers().get("simple"); } else if (searchAnalyzer == null) { searchAnalyzer = indexAnalyzer; } @@ -217,7 +218,7 @@ public static class TypeParser implements Mapper.TypeParser { } private NamedAnalyzer getNamedAnalyzer(ParserContext parserContext, String name) { - NamedAnalyzer analyzer = parserContext.analysisService().analyzer(name); + NamedAnalyzer analyzer = parserContext.getIndexAnalyzers().get(name); if (analyzer == null) { throw new IllegalArgumentException("Can't find default or mapped analyzer with name [" + name + "]"); } @@ -521,7 +522,7 @@ private static final class SuggestField extends Field { private final CompletionTokenStream.ToFiniteStrings toFiniteStrings; private final ContextMapping.Context ctx; - public SuggestField(String name, ContextMapping.Context ctx, + SuggestField(String name, ContextMapping.Context ctx, String value, MappedFieldType type, BytesRef payload, CompletionTokenStream.ToFiniteStrings toFiniteStrings) { super(name, value, type); @@ -566,7 +567,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { } @Override diff --git a/core/src/main/java/org/elasticsearch/index/mapper/CustomDocValuesField.java b/core/src/main/java/org/elasticsearch/index/mapper/CustomDocValuesField.java index a8b27d112453b..8e6cee222d4cd 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/CustomDocValuesField.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/CustomDocValuesField.java @@ -39,7 +39,7 @@ abstract class CustomDocValuesField implements IndexableField { private final String name; - public CustomDocValuesField(String name) { + CustomDocValuesField(String name) { this.name = name; } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/DateFieldMapper.java 
b/core/src/main/java/org/elasticsearch/index/mapper/DateFieldMapper.java index 717f036155254..f3ba596251f62 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/DateFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/DateFieldMapper.java @@ -19,14 +19,16 @@ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; import org.apache.lucene.document.StoredField; import org.apache.lucene.document.SortedNumericDocValuesField; import org.apache.lucene.document.LongPoint; -import org.apache.lucene.index.XPointValues; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.PointValues; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.search.BoostQuery; +import org.apache.lucene.search.IndexOrDocValuesQuery; import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; import org.elasticsearch.Version; @@ -43,9 +45,9 @@ import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType; import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData; import org.elasticsearch.index.mapper.LegacyNumberFieldMapper.Defaults; +import org.elasticsearch.index.query.QueryRewriteContext; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.search.DocValueFormat; -import org.elasticsearch.search.internal.SearchContext; import org.joda.time.DateTimeZone; import java.io.IOException; @@ -54,8 +56,7 @@ import java.util.Locale; import java.util.Map; import java.util.Objects; -import java.util.concurrent.Callable; - +import java.util.ArrayList; import static org.elasticsearch.index.mapper.TypeParsers.parseDateTimeFormatter; /** A {@link FieldMapper} for ip addresses. */ @@ -69,6 +70,7 @@ public static class Builder extends FieldMapper.Builder ignoreMalformed(BuilderContext context) { return Defaults.IGNORE_MALFORMED; } + /** Whether an explicit format for this date field has been set already. 
*/ + public boolean isDateTimeFormatterSet() { + return dateTimeFormatterSet; + } + public Builder dateTimeFormatter(FormatDateTimeFormatter dateTimeFormatter) { fieldType().setDateTimeFormatter(dateTimeFormatter); + dateTimeFormatterSet = true; return this; } @@ -146,7 +154,7 @@ public Mapper.Builder parse(String name, Map node, ParserCo builder.nullValue(propNode.toString()); iterator.remove(); } else if (propName.equals("ignore_malformed")) { - builder.ignoreMalformed(TypeParsers.nodeBooleanValue("ignore_malformed", propNode, parserContext)); + builder.ignoreMalformed(TypeParsers.nodeBooleanValue(name, "ignore_malformed", propNode)); iterator.remove(); } else if (propName.equals("locale")) { builder.locale(LocaleUtils.parse(propNode.toString())); @@ -163,69 +171,6 @@ public Mapper.Builder parse(String name, Map node, ParserCo } public static final class DateFieldType extends MappedFieldType { - - final class LateParsingQuery extends Query { - - final Object lowerTerm; - final Object upperTerm; - final boolean includeLower; - final boolean includeUpper; - final DateTimeZone timeZone; - final DateMathParser forcedDateParser; - - public LateParsingQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, - DateTimeZone timeZone, DateMathParser forcedDateParser) { - this.lowerTerm = lowerTerm; - this.upperTerm = upperTerm; - this.includeLower = includeLower; - this.includeUpper = includeUpper; - this.timeZone = timeZone; - this.forcedDateParser = forcedDateParser; - } - - @Override - public Query rewrite(IndexReader reader) throws IOException { - Query rewritten = super.rewrite(reader); - if (rewritten != this) { - return rewritten; - } - return innerRangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser); - } - - // Even though we only cache rewritten queries it is good to let all queries implement hashCode() and equals(): - @Override - public boolean equals(Object o) { - if (this == o) return true; - if (sameClassAs(o) == false) return false; - - LateParsingQuery that = (LateParsingQuery) o; - if (includeLower != that.includeLower) return false; - if (includeUpper != that.includeUpper) return false; - if (lowerTerm != null ? !lowerTerm.equals(that.lowerTerm) : that.lowerTerm != null) return false; - if (upperTerm != null ? !upperTerm.equals(that.upperTerm) : that.upperTerm != null) return false; - if (timeZone != null ? !timeZone.equals(that.timeZone) : that.timeZone != null) return false; - - return true; - } - - @Override - public int hashCode() { - return Objects.hash(classHash(), lowerTerm, upperTerm, includeLower, includeUpper, timeZone); - } - - @Override - public String toString(String s) { - final StringBuilder sb = new StringBuilder(); - return sb.append(name()).append(':') - .append(includeLower ? '[' : '{') - .append((lowerTerm == null) ? "*" : lowerTerm.toString()) - .append(" TO ") - .append((upperTerm == null) ? "*" : upperTerm.toString()) - .append(includeUpper ? 
']' : '}') - .toString(); - } - } - protected FormatDateTimeFormatter dateTimeFormatter; protected DateMathParser dateMathParser; @@ -268,16 +213,12 @@ public String typeName() { @Override public void checkCompatibility(MappedFieldType fieldType, List conflicts, boolean strict) { super.checkCompatibility(fieldType, conflicts, strict); - if (strict) { - DateFieldType other = (DateFieldType)fieldType; - if (Objects.equals(dateTimeFormatter().format(), other.dateTimeFormatter().format()) == false) { - conflicts.add("mapper [" + name() - + "] is used by multiple types. Set update_all_types to true to update [format] across all types."); - } - if (Objects.equals(dateTimeFormatter().locale(), other.dateTimeFormatter().locale()) == false) { - conflicts.add("mapper [" + name() - + "] is used by multiple types. Set update_all_types to true to update [locale] across all types."); - } + DateFieldType other = (DateFieldType) fieldType; + if (Objects.equals(dateTimeFormatter().format(), other.dateTimeFormatter().format()) == false) { + conflicts.add("mapper [" + name() + "] has different [format] values"); + } + if (Objects.equals(dateTimeFormatter().locale(), other.dateTimeFormatter().locale()) == false) { + conflicts.add("mapper [" + name() + "] has different [locale] values"); } } @@ -301,7 +242,7 @@ long parse(String value) { @Override public Query termQuery(Object value, @Nullable QueryShardContext context) { - Query query = innerRangeQuery(value, value, true, true, null, null); + Query query = innerRangeQuery(value, value, true, true, null, null, context); if (boost() != 1f) { query = new BoostQuery(query, boost()); } @@ -309,19 +250,19 @@ public Query termQuery(Object value, @Nullable QueryShardContext context) { } @Override - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) { + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) { failIfNotIndexed(); - return rangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, null, null); + return rangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, null, null, context); } public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, - @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser) { + @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser, QueryShardContext context) { failIfNotIndexed(); - return new LateParsingQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser); + return innerRangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser, context); } Query innerRangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, - @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser) { + @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser, QueryShardContext context) { failIfNotIndexed(); DateMathParser parser = forcedDateParser == null ? 
dateMathParser @@ -330,7 +271,7 @@ Query innerRangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, if (lowerTerm == null) { l = Long.MIN_VALUE; } else { - l = parseToMilliseconds(lowerTerm, !includeLower, timeZone, parser); + l = parseToMilliseconds(lowerTerm, !includeLower, timeZone, parser, context); if (includeLower == false) { ++l; } @@ -338,16 +279,21 @@ Query innerRangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, if (upperTerm == null) { u = Long.MAX_VALUE; } else { - u = parseToMilliseconds(upperTerm, includeUpper, timeZone, parser); + u = parseToMilliseconds(upperTerm, includeUpper, timeZone, parser, context); if (includeUpper == false) { --u; } } - return LongPoint.newRangeQuery(name(), l, u); + Query query = LongPoint.newRangeQuery(name(), l, u); + if (hasDocValues()) { + Query dvQuery = SortedNumericDocValuesField.newRangeQuery(name(), l, u); + query = new IndexOrDocValuesQuery(query, dvQuery); + } + return query; } public long parseToMilliseconds(Object value, boolean roundUp, - @Nullable DateTimeZone zone, @Nullable DateMathParser forcedDateParser) { + @Nullable DateTimeZone zone, @Nullable DateMathParser forcedDateParser, QueryRewriteContext context) { DateMathParser dateParser = dateMathParser(); if (forcedDateParser != null) { dateParser = forcedDateParser; @@ -359,28 +305,23 @@ public long parseToMilliseconds(Object value, boolean roundUp, } else { strValue = value.toString(); } - return dateParser.parse(strValue, now(), roundUp, zone); - } - - private static Callable now() { - return () -> { - final SearchContext context = SearchContext.current(); - return context != null - ? context.nowInMillis() - : System.currentTimeMillis(); - }; + return dateParser.parse(strValue, context::nowInMillis, roundUp, zone); } @Override public FieldStats.Date stats(IndexReader reader) throws IOException { String field = name(); - long size = XPointValues.size(reader, field); - if (size == 0) { + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(name()); + if (fi == null) { return null; } - int docCount = XPointValues.getDocCount(reader, field); - byte[] min = XPointValues.getMinPackedValue(reader, field); - byte[] max = XPointValues.getMaxPackedValue(reader, field); + long size = PointValues.size(reader, field); + if (size == 0) { + return new FieldStats.Date(reader.maxDoc(), 0, -1, -1, isSearchable(), isAggregatable()); + } + int docCount = PointValues.getDocCount(reader, field); + byte[] min = PointValues.getMinPackedValue(reader, field); + byte[] max = PointValues.getMaxPackedValue(reader, field); return new FieldStats.Date(reader.maxDoc(),docCount, -1L, size, isSearchable(), isAggregatable(), dateTimeFormatter(), LongPoint.decodeDimension(min, 0), LongPoint.decodeDimension(max, 0)); @@ -388,24 +329,15 @@ public FieldStats.Date stats(IndexReader reader) throws IOException { @Override public Relation isFieldWithinQuery(IndexReader reader, - Object from, Object to, - boolean includeLower, boolean includeUpper, - DateTimeZone timeZone, DateMathParser dateParser) throws IOException { + Object from, Object to, boolean includeLower, boolean includeUpper, + DateTimeZone timeZone, DateMathParser dateParser, QueryRewriteContext context) throws IOException { if (dateParser == null) { dateParser = this.dateMathParser; } - if (XPointValues.size(reader, name()) == 0) { - // no points, so nothing matches - return Relation.DISJOINT; - } - - long minValue = LongPoint.decodeDimension(XPointValues.getMinPackedValue(reader, name()), 
0); - long maxValue = LongPoint.decodeDimension(XPointValues.getMaxPackedValue(reader, name()), 0); - long fromInclusive = Long.MIN_VALUE; if (from != null) { - fromInclusive = parseToMilliseconds(from, !includeLower, timeZone, dateParser); + fromInclusive = parseToMilliseconds(from, !includeLower, timeZone, dateParser, context); if (includeLower == false) { if (fromInclusive == Long.MAX_VALUE) { return Relation.DISJOINT; @@ -416,7 +348,7 @@ public Relation isFieldWithinQuery(IndexReader reader, long toInclusive = Long.MAX_VALUE; if (to != null) { - toInclusive = parseToMilliseconds(to, includeUpper, timeZone, dateParser); + toInclusive = parseToMilliseconds(to, includeUpper, timeZone, dateParser, context); if (includeUpper == false) { if (toInclusive == Long.MIN_VALUE) { return Relation.DISJOINT; @@ -425,6 +357,17 @@ public Relation isFieldWithinQuery(IndexReader reader, } } + // This check needs to be done after fromInclusive and toInclusive + // are resolved so we can throw an exception if they are invalid + // even if there are no points in the shard + if (PointValues.size(reader, name()) == 0) { + // no points, so nothing matches + return Relation.DISJOINT; + } + + long minValue = LongPoint.decodeDimension(PointValues.getMinPackedValue(reader, name()), 0); + long maxValue = LongPoint.decodeDimension(PointValues.getMaxPackedValue(reader, name()), 0); + if (minValue >= fromInclusive && maxValue <= toInclusive) { return Relation.WITHIN; } else if (maxValue < fromInclusive || minValue > toInclusive) { @@ -441,7 +384,7 @@ public IndexFieldData.Builder fielddataBuilder() { } @Override - public Object valueForSearch(Object value) { + public Object valueForDisplay(Object value) { Long val = (Long) value; if (val == null) { return null; @@ -496,7 +439,7 @@ protected DateFieldMapper clone() { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { String dateAsString; if (context.externalValueSet()) { Object dateAsObject = context.externalValue(); @@ -545,8 +488,8 @@ protected void parseCreateField(ParseContext context, List fields) throws @Override protected void doMerge(Mapper mergeWith, boolean updateAllTypes) { + final DateFieldMapper other = (DateFieldMapper) mergeWith; super.doMerge(mergeWith, updateAllTypes); - DateFieldMapper other = (DateFieldMapper) mergeWith; this.includeInAll = other.includeInAll; if (other.ignoreMalformed.explicit()) { this.ignoreMalformed = other.ignoreMalformed; diff --git a/core/src/main/java/org/elasticsearch/index/mapper/DocumentFieldMappers.java b/core/src/main/java/org/elasticsearch/index/mapper/DocumentFieldMappers.java index 57f2ff40530e5..a3acaa9534e1b 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/DocumentFieldMappers.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/DocumentFieldMappers.java @@ -78,8 +78,6 @@ public Collection simpleMatchToFullName(String pattern) { for (FieldMapper fieldMapper : this) { if (Regex.simpleMatch(pattern, fieldMapper.fieldType().name())) { fields.add(fieldMapper.fieldType().name()); - } else if (Regex.simpleMatch(pattern, fieldMapper.fieldType().name())) { - fields.add(fieldMapper.fieldType().name()); } } return fields; diff --git a/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java index a4d1a0c5e4b96..6d2a15762c88f 100644 --- 
a/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java @@ -32,8 +32,9 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.IndexSettings; -import org.elasticsearch.index.analysis.AnalysisService; +import org.elasticsearch.index.analysis.IndexAnalyzers; import org.elasticsearch.index.mapper.MetadataFieldMapper.TypeParser; +import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; @@ -77,7 +78,8 @@ public Builder(RootObjectMapper.Builder builder, MapperService mapperService) { final MetadataFieldMapper metadataMapper; if (existingMetadataMapper == null) { final TypeParser parser = entry.getValue(); - metadataMapper = parser.getDefault(indexSettings, mapperService.fullName(name), builder.name()); + metadataMapper = parser.getDefault(mapperService.fullName(name), + mapperService.documentMapperParser().parserContext(builder.name())); } else { metadataMapper = existingMetadataMapper; } @@ -147,11 +149,11 @@ public DocumentMapper(MapperService mapperService, Mapping mapping) { } MapperUtils.collect(this.mapping.root, newObjectMappers, newFieldMappers); - final AnalysisService analysisService = mapperService.analysisService(); + final IndexAnalyzers indexAnalyzers = mapperService.getIndexAnalyzers(); this.fieldMappers = new DocumentFieldMappers(newFieldMappers, - analysisService.defaultIndexAnalyzer(), - analysisService.defaultSearchAnalyzer(), - analysisService.defaultSearchQuoteAnalyzer()); + indexAnalyzers.getDefaultIndexAnalyzer(), + indexAnalyzers.getDefaultSearchAnalyzer(), + indexAnalyzers.getDefaultSearchQuoteAnalyzer()); Map builder = new HashMap<>(); for (ObjectMapper objectMapper : newObjectMappers) { @@ -250,8 +252,8 @@ public IndexFieldMapper IndexFieldMapper() { return metadataMapper(IndexFieldMapper.class); } - public Query typeFilter() { - return typeMapper().fieldType().termQuery(type, null); + public Query typeFilter(QueryShardContext context) { + return typeMapper().fieldType().termQuery(type, context); } public boolean hasNestedObjects() { @@ -266,8 +268,9 @@ public Map objectMappers() { return this.objectMappers; } + // TODO this method looks like it is only used in tests... public ParsedDocument parse(String index, String type, String id, BytesReference source) throws MapperParsingException { - return parse(SourceToParse.source(index, type, id, source)); + return parse(SourceToParse.source(index, type, id, source, XContentType.JSON)); } public ParsedDocument parse(SourceToParse source) throws MapperParsingException { @@ -309,21 +312,6 @@ public ObjectMapper findNestedObjectMapper(int nestedDocId, SearchContext sc, Le return nestedObjectMapper; } - /** - * Returns the parent {@link ObjectMapper} instance of the specified object mapper or null if there - * isn't any. 
- */ - // TODO: We should add: ObjectMapper#getParentObjectMapper() - public ObjectMapper findParentObjectMapper(ObjectMapper objectMapper) { - int indexOfLastDot = objectMapper.fullPath().lastIndexOf('.'); - if (indexOfLastDot != -1) { - String parentNestObjectPath = objectMapper.fullPath().substring(0, indexOfLastDot); - return objectMappers().get(parentNestObjectPath); - } else { - return null; - } - } - public boolean isParent(String type) { return mapperService.getParentTypes().contains(type); } @@ -338,6 +326,11 @@ public DocumentMapper merge(Mapping mapping, boolean updateAllTypes) { */ public DocumentMapper updateFieldType(Map fullNameToFieldType) { Mapping updated = this.mapping.updateFieldType(fullNameToFieldType); + if (updated == this.mapping) { + // no change + return this; + } + assert updated == updated.updateFieldType(fullNameToFieldType) : "updateFieldType operation is not idempotent"; return new DocumentMapper(mapperService, updated); } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapperParser.java b/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapperParser.java index f336fbb01ac30..be25775c1353e 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapperParser.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapperParser.java @@ -21,15 +21,14 @@ import org.elasticsearch.Version; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.Strings; import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.compress.CompressedXContent; -import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.IndexSettings; -import org.elasticsearch.index.analysis.AnalysisService; +import org.elasticsearch.index.analysis.IndexAnalyzers; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.similarity.SimilarityService; import org.elasticsearch.indices.mapper.MapperRegistry; @@ -44,24 +43,24 @@ public class DocumentMapperParser { final MapperService mapperService; - final AnalysisService analysisService; + final IndexAnalyzers indexAnalyzers; + private final NamedXContentRegistry xContentRegistry; private final SimilarityService similarityService; private final Supplier queryShardContextSupplier; private final RootObjectMapper.TypeParser rootObjectTypeParser = new RootObjectMapper.TypeParser(); private final Version indexVersionCreated; - private final ParseFieldMatcher parseFieldMatcher; private final Map typeParsers; private final Map rootTypeParsers; - public DocumentMapperParser(IndexSettings indexSettings, MapperService mapperService, AnalysisService analysisService, - SimilarityService similarityService, MapperRegistry mapperRegistry, + public DocumentMapperParser(IndexSettings indexSettings, MapperService mapperService, IndexAnalyzers indexAnalyzers, + NamedXContentRegistry xContentRegistry, SimilarityService similarityService, MapperRegistry mapperRegistry, Supplier queryShardContextSupplier) { - this.parseFieldMatcher = new ParseFieldMatcher(indexSettings.getSettings()); this.mapperService = mapperService; - this.analysisService = analysisService; + this.indexAnalyzers = indexAnalyzers; + this.xContentRegistry = 
xContentRegistry; this.similarityService = similarityService; this.queryShardContextSupplier = queryShardContextSupplier; this.typeParsers = mapperRegistry.getMapperParsers(); @@ -70,7 +69,8 @@ public DocumentMapperParser(IndexSettings indexSettings, MapperService mapperSer } public Mapper.TypeParser.ParserContext parserContext(String type) { - return new Mapper.TypeParser.ParserContext(type, analysisService, similarityService::getSimilarity, mapperService, typeParsers::get, indexVersionCreated, parseFieldMatcher, queryShardContextSupplier.get()); + return new Mapper.TypeParser.ParserContext(type, indexAnalyzers, similarityService::getSimilarity, mapperService, + typeParsers::get, indexVersionCreated, queryShardContextSupplier); } public DocumentMapper parse(@Nullable String type, CompressedXContent source) throws MapperParsingException { @@ -80,7 +80,7 @@ public DocumentMapper parse(@Nullable String type, CompressedXContent source) th public DocumentMapper parse(@Nullable String type, CompressedXContent source, String defaultSource) throws MapperParsingException { Map mapping = null; if (source != null) { - Map root = XContentHelper.convertToMap(source.compressedReference(), true).v2(); + Map root = XContentHelper.convertToMap(source.compressedReference(), true, XContentType.JSON).v2(); Tuple> t = extractMapping(type, root); type = t.v1(); mapping = t.v2(); @@ -107,7 +107,8 @@ private DocumentMapper parse(String type, Map mapping, String de Mapper.TypeParser.ParserContext parserContext = parserContext(type); // parse RootObjectMapper - DocumentMapper.Builder docBuilder = new DocumentMapper.Builder((RootObjectMapper.Builder) rootObjectTypeParser.parse(type, mapping, parserContext), mapperService); + DocumentMapper.Builder docBuilder = new DocumentMapper.Builder( + (RootObjectMapper.Builder) rootObjectTypeParser.parse(type, mapping, parserContext), mapperService); Iterator> iterator = mapping.entrySet().iterator(); // parse DocumentMapper while(iterator.hasNext()) { @@ -118,6 +119,9 @@ private DocumentMapper parse(String type, Map mapping, String de MetadataFieldMapper.TypeParser typeParser = rootTypeParsers.get(fieldName); if (typeParser != null) { iterator.remove(); + if (false == fieldNode instanceof Map) { + throw new IllegalArgumentException("[_parent] must be an object containing [type]"); + } Map fieldNodeMap = (Map) fieldNode; docBuilder.put(typeParser.parse(fieldName, fieldNodeMap, parserContext)); fieldNodeMap.remove("type"); @@ -138,7 +142,8 @@ private DocumentMapper parse(String type, Map mapping, String de } public static void checkNoRemainingFields(String fieldName, Map fieldNodeMap, Version indexVersionCreated) { - checkNoRemainingFields(fieldNodeMap, indexVersionCreated, "Mapping definition for [" + fieldName + "] has unsupported parameters: "); + checkNoRemainingFields(fieldNodeMap, indexVersionCreated, + "Mapping definition for [" + fieldName + "] has unsupported parameters: "); } public static void checkNoRemainingFields(Map fieldNodeMap, Version indexVersionCreated, String message) { @@ -157,7 +162,7 @@ private static String getRemainingFields(Map map) { private Tuple> extractMapping(String type, String source) throws MapperParsingException { Map root; - try (XContentParser parser = XContentFactory.xContent(source).createParser(source)) { + try (XContentParser parser = XContentType.JSON.xContent().createParser(xContentRegistry, source)) { root = parser.mapOrdered(); } catch (Exception e) { throw new MapperParsingException("failed to parse mapping definition", e); @@ 
-180,4 +185,8 @@ private Tuple> extractMapping(String type, Map dynamicMappers) { if (dynamicMappers.isEmpty()) { @@ -187,7 +199,7 @@ static Mapping createDynamicUpdate(Mapping mapping, DocumentMapper docMapper, Li Iterator dynamicMapperItr = dynamicMappers.iterator(); List parentMappers = new ArrayList<>(); Mapper firstUpdate = dynamicMapperItr.next(); - parentMappers.add(createUpdate(mapping.root(), firstUpdate.name().split("\\."), 0, firstUpdate)); + parentMappers.add(createUpdate(mapping.root(), splitAndValidatePath(firstUpdate.name()), 0, firstUpdate)); Mapper previousMapper = null; while (dynamicMapperItr.hasNext()) { Mapper newMapper = dynamicMapperItr.next(); @@ -199,7 +211,7 @@ static Mapping createDynamicUpdate(Mapping mapping, DocumentMapper docMapper, Li continue; } previousMapper = newMapper; - String[] nameParts = newMapper.name().split("\\."); + String[] nameParts = splitAndValidatePath(newMapper.name()); // We first need the stack to only contain mappers in common with the previously processed mapper // For example, if the first mapper processed was a.b.c, and we now have a.d, the stack will contain @@ -414,15 +426,33 @@ private static ParseContext nestedContext(ParseContext context, ObjectMapper map context = context.createNestedContext(mapper.fullPath()); ParseContext.Document nestedDoc = context.doc(); ParseContext.Document parentDoc = nestedDoc.getParent(); - // pre add the uid field if possible (id was already provided) - IndexableField uidField = parentDoc.getField(UidFieldMapper.NAME); - if (uidField != null) { - // we don't need to add it as a full uid field in nested docs, since we don't need versioning - // we also rely on this for UidField#loadVersion - // this is a deeply nested field - nestedDoc.add(new Field(UidFieldMapper.NAME, uidField.stringValue(), UidFieldMapper.Defaults.NESTED_FIELD_TYPE)); + // We need to add the uid or id to this nested Lucene document too, + // If we do not do this then when a document gets deleted only the root Lucene document gets deleted and + // not the nested Lucene documents! Besides the fact that we would have zombie Lucene documents, the ordering of + // documents inside the Lucene index (document blocks) will be incorrect, as nested documents of different root + // documents are then aligned with other root documents. This will lead to the nested query, sorting, aggregations + // and inner hits to fail or yield incorrect results. + if (context.mapperService().getIndexSettings().isSingleType()) { + IndexableField idField = parentDoc.getField(IdFieldMapper.NAME); + if (idField != null) { + // We just need to store the id as indexed field, so that IndexWriter#deleteDocuments(term) can then + // delete it when the root document is deleted too. + nestedDoc.add(new Field(IdFieldMapper.NAME, idField.stringValue(), IdFieldMapper.Defaults.NESTED_FIELD_TYPE)); + } else { + throw new IllegalStateException("The root document of a nested document should have an id field"); + } + } else { + IndexableField uidField = parentDoc.getField(UidFieldMapper.NAME); + if (uidField != null) { + // We just need to store the uid as indexed field, so that IndexWriter#deleteDocuments(term) can then + // delete it when the root document is deleted too.
+ nestedDoc.add(new Field(UidFieldMapper.NAME, uidField.stringValue(), UidFieldMapper.Defaults.NESTED_FIELD_TYPE)); + } else { + throw new IllegalStateException("The root document of a nested document should have an uid field"); + } } + // the type of the nested doc starts with __, so we can identify that its a nested one in filters // note, we don't prefix it with the type of the doc since it allows us to execute a nested query // across types (for example, with similar nested objects) @@ -445,10 +475,9 @@ private static void parseObjectOrField(ParseContext context, Mapper mapper) thro } } - private static ObjectMapper parseObject(final ParseContext context, ObjectMapper mapper, String currentFieldName) throws IOException { + private static void parseObject(final ParseContext context, ObjectMapper mapper, String currentFieldName) throws IOException { assert currentFieldName != null; - ObjectMapper update = null; Mapper objectMapper = getMapper(mapper, currentFieldName); if (objectMapper != null) { context.path().add(currentFieldName); @@ -456,7 +485,7 @@ private static ObjectMapper parseObject(final ParseContext context, ObjectMapper context.path().remove(); } else { - final String[] paths = currentFieldName.split("\\."); + final String[] paths = splitAndValidatePath(currentFieldName); currentFieldName = paths[paths.length - 1]; Tuple parentMapperTuple = getDynamicParentMapper(context, paths, mapper); ObjectMapper parentMapper = parentMapperTuple.v2(); @@ -482,8 +511,6 @@ private static ObjectMapper parseObject(final ParseContext context, ObjectMapper context.path().remove(); } } - - return update; } private static void parseArray(ParseContext context, ObjectMapper parentMapper, String lastFieldName) throws IOException { @@ -500,7 +527,7 @@ private static void parseArray(ParseContext context, ObjectMapper parentMapper, } } else { - final String[] paths = arrayFieldName.split("\\."); + final String[] paths = splitAndValidatePath(arrayFieldName); arrayFieldName = paths[paths.length - 1]; lastFieldName = arrayFieldName; Tuple parentMapperTuple = getDynamicParentMapper(context, paths, parentMapper); @@ -564,7 +591,7 @@ private static void parseValue(final ParseContext context, ObjectMapper parentMa parseObjectOrField(context, mapper); } else { - final String[] paths = currentFieldName.split("\\."); + final String[] paths = splitAndValidatePath(currentFieldName); currentFieldName = paths[paths.length - 1]; Tuple parentMapperTuple = getDynamicParentMapper(context, paths, parentMapper); parentMapper = parentMapperTuple.v2(); @@ -679,6 +706,12 @@ private static Mapper.Builder createBuilderFromDynamicValue(final ParseCont if (builder == null) { builder = newDateBuilder(currentFieldName, dateTimeFormatter, Version.indexCreated(context.indexSettings())); } + if (builder instanceof DateFieldMapper.Builder) { + DateFieldMapper.Builder dateBuilder = (DateFieldMapper.Builder) builder; + if (dateBuilder.isDateTimeFormatterSet() == false) { + dateBuilder.dateTimeFormatter(dateTimeFormatter); + } + } return builder; } catch (Exception e) { // failure to parse this, continue @@ -818,7 +851,7 @@ private static void parseCopy(String field, ParseContext context) throws IOExcep // The path of the dest field might be completely different from the current one so we need to reset it context = context.overridePath(new ContentPath(0)); - final String[] paths = field.split("\\."); + final String[] paths = splitAndValidatePath(field); final String fieldName = paths[paths.length-1]; Tuple parentMapperTuple = 
getDynamicParentMapper(context, paths, null); ObjectMapper mapper = parentMapperTuple.v2(); @@ -858,7 +891,8 @@ private static Tuple getDynamicParentMapper(ParseContext Mapper.BuilderContext builderContext = new Mapper.BuilderContext(context.indexSettings(), context.path()); mapper = (ObjectMapper) builder.build(builderContext); if (mapper.nested() != ObjectMapper.Nested.NO) { - throw new MapperParsingException("It is forbidden to create dynamic nested objects ([" + context.path().pathAsText(paths[i]) + "]) through `copy_to`"); + throw new MapperParsingException("It is forbidden to create dynamic nested objects ([" + context.path().pathAsText(paths[i]) + + "]) through `copy_to` or dots in field names"); } context.addDynamicMapper(mapper); break; @@ -901,13 +935,18 @@ private static ObjectMapper.Dynamic dynamicOrDefault(ObjectMapper parentMapper, // looks up a child mapper, but takes into account field names that expand to objects static Mapper getMapper(ObjectMapper objectMapper, String fieldName) { - String[] subfields = fieldName.split("\\."); + String[] subfields = splitAndValidatePath(fieldName); for (int i = 0; i < subfields.length - 1; ++i) { Mapper mapper = objectMapper.getMapper(subfields[i]); if (mapper == null || (mapper instanceof ObjectMapper) == false) { return null; } objectMapper = (ObjectMapper)mapper; + if (objectMapper.nested().isNested()) { + throw new MapperParsingException("Cannot add a value for field [" + + fieldName + "] since one of the intermediate objects is mapped as a nested object: [" + + mapper.name() + "]"); + } } return objectMapper.getMapper(subfields[subfields.length - 1]); } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/DynamicTemplate.java b/core/src/main/java/org/elasticsearch/index/mapper/DynamicTemplate.java index 08620ed8c4535..bf7193848a070 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/DynamicTemplate.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/DynamicTemplate.java @@ -41,7 +41,7 @@ public class DynamicTemplate implements ToXContent { private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(DynamicTemplate.class)); - public static enum MatchType { + public enum MatchType { SIMPLE { @Override public boolean matches(String pattern, String value) { @@ -155,7 +155,7 @@ public static XContentFieldType fromString(String value) { return v; } } - throw new IllegalArgumentException("No xcontent type matched on [" + value + "], possible values are " + throw new IllegalArgumentException("No field type matched on [" + value + "], possible values are " + Arrays.toString(values())); } @@ -208,12 +208,8 @@ public static DynamicTemplate parse(String name, Map conf, try { xcontentFieldType = XContentFieldType.fromString(matchMappingType); } catch (IllegalArgumentException e) { - // TODO: do this in 6.0 - /*if (indexVersionCreated.onOrAfter(Version.V_6_0_0)) { - throw e; - }*/ - - DEPRECATION_LOGGER.deprecated("Ignoring unrecognized match_mapping_type: [" + matchMappingType + "]"); + DEPRECATION_LOGGER.deprecated("match_mapping_type [" + matchMappingType + "] is invalid and will be ignored: " + + e.getMessage()); // this template is on an unknown type so it will never match anything // null indicates that the template should be ignored return null; @@ -321,7 +317,7 @@ private Map processMap(Map map, String name, Str } private List processList(List list, String name, String dynamicType) { - List processedList = new ArrayList(); + List processedList = new 
ArrayList(list.size()); for (Object value : list) { if (value instanceof Map) { value = processMap((Map) value, name, dynamicType); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/FieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/FieldMapper.java index 4773d5da46847..0ed093b3a884f 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/FieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/FieldMapper.java @@ -24,6 +24,7 @@ import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.elasticsearch.Version; import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.lucene.Lucene; @@ -236,7 +237,7 @@ protected void setupFieldType(BuilderContext context) { } } - private final Version indexCreatedVersion; + protected final Version indexCreatedVersion; protected MappedFieldType fieldType; protected final MappedFieldType defaultFieldType; protected MultiFields multiFields; @@ -246,7 +247,7 @@ protected FieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldT super(simpleName); assert indexSettings != null; this.indexCreatedVersion = Version.indexCreated(indexSettings); - if (indexCreatedVersion.onOrAfter(Version.V_5_0_0_alpha6)) { + if (indexCreatedVersion.onOrAfter(Version.V_5_0_0_beta1)) { if (simpleName.isEmpty()) { throw new IllegalArgumentException("name cannot be empty string"); } @@ -281,15 +282,15 @@ public CopyTo copyTo() { * mappings were not modified. */ public Mapper parse(ParseContext context) throws IOException { - final List fields = new ArrayList<>(2); + final List fields = new ArrayList<>(2); try { parseCreateField(context, fields); - for (Field field : fields) { + for (IndexableField field : fields) { if (!customBoost() // don't set boosts eg. on dv fields && field.fieldType().indexOptions() != IndexOptions.NONE && indexCreatedVersion.before(Version.V_5_0_0_alpha1)) { - field.setBoost(fieldType().boost()); + ((Field)(field)).setBoost(fieldType().boost()); } context.doc().add(field); } @@ -303,7 +304,7 @@ public Mapper parse(ParseContext context) throws IOException { /** * Parse the field value and populate fields. */ - protected abstract void parseCreateField(ParseContext context, List fields) throws IOException; + protected abstract void parseCreateField(ParseContext context, List fields) throws IOException; /** * Derived classes can override it to specify that boost value is set by derived classes. diff --git a/core/src/main/java/org/elasticsearch/index/mapper/FieldNamesFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/FieldNamesFieldMapper.java index 7343963f0996a..42c177b34d23a 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/FieldNamesFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/FieldNamesFieldMapper.java @@ -30,13 +30,12 @@ import java.io.IOException; import java.util.ArrayList; +import java.util.Collections; import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.Objects; -import static org.elasticsearch.common.xcontent.support.XContentMapValues.lenientNodeBooleanValue; - /** * A mapper that indexes the field names of a document under _field_names. This mapper is typically useful in order * to have fast exists and missing queries/filters. 
@@ -98,7 +97,7 @@ public FieldNamesFieldMapper build(BuilderContext context) { public static class TypeParser implements MetadataFieldMapper.TypeParser { @Override - public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { + public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { Builder builder = new Builder(parserContext.mapperService().fullName(NAME)); for (Iterator> iterator = node.entrySet().iterator(); iterator.hasNext();) { @@ -106,7 +105,7 @@ public MetadataFieldMapper.Builder parse(String name, Map node, String fieldName = entry.getKey(); Object fieldNode = entry.getValue(); if (fieldName.equals("enabled")) { - builder.enabled(lenientNodeBooleanValue(fieldNode)); + builder.enabled(TypeParsers.nodeBooleanValue(name, "enabled", fieldNode)); iterator.remove(); } } @@ -114,8 +113,14 @@ public MetadataFieldMapper.Builder parse(String name, Map node, } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { - return new FieldNamesFieldMapper(indexSettings, fieldType); + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); + if (fieldType != null) { + return new FieldNamesFieldMapper(indexSettings, fieldType); + } else { + return parse(NAME, Collections.emptyMap(), context) + .build(new BuilderContext(indexSettings, new ContentPath(1))); + } } } @@ -183,7 +188,7 @@ public Query termQuery(Object value, QueryShardContext context) { } private FieldNamesFieldMapper(Settings indexSettings, MappedFieldType existing) { - this(existing == null ? Defaults.FIELD_TYPE.clone() : existing.clone(), indexSettings); + this(existing.clone(), indexSettings); } private FieldNamesFieldMapper(MappedFieldType fieldType, Settings indexSettings) { @@ -238,7 +243,7 @@ public String next() { } @Override - public final void remove() { + public void remove() { throw new UnsupportedOperationException(); } @@ -248,7 +253,7 @@ public final void remove() { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { if (fieldType().isEnabled() == false) { return; } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/FieldTypeLookup.java b/core/src/main/java/org/elasticsearch/index/mapper/FieldTypeLookup.java index 5f6fddf09ef68..fee41e43f2a3c 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/FieldTypeLookup.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/FieldTypeLookup.java @@ -43,7 +43,7 @@ class FieldTypeLookup implements Iterable { final CopyOnWriteHashMap> fullNameToTypes; /** Create a new empty instance. */ - public FieldTypeLookup() { + FieldTypeLookup() { fullNameToFieldType = new CopyOnWriteHashMap<>(); fullNameToTypes = new CopyOnWriteHashMap<>(); } @@ -93,7 +93,7 @@ public FieldTypeLookup copyAndAddAll(String type, Collection fieldM // is the update even legal? 
checkCompatibility(type, fieldMapper, updateAllTypes); - if (fieldType != fullNameFieldType) { + if (fieldType.equals(fullNameFieldType) == false) { fullName = fullName.copyAndPut(fieldType.name(), fieldMapper.fieldType()); } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/GeoPointFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/GeoPointFieldMapper.java index c27ddc1811b64..8111e5e02a68f 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/GeoPointFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/GeoPointFieldMapper.java @@ -48,7 +48,7 @@ public class GeoPointFieldMapper extends BaseGeoPointFieldMapper { public static class Defaults extends BaseGeoPointFieldMapper.Defaults { - public static final GeoPointFieldType FIELD_TYPE = new GeoPointFieldType(); + public static final GeoPointFieldType FIELD_TYPE = new LegacyGeoPointFieldType(); static { FIELD_TYPE.setIndexOptions(IndexOptions.DOCS); @@ -79,6 +79,7 @@ public GeoPointFieldMapper build(BuilderContext context, String simpleName, Mapp if (context.indexCreatedVersion().before(Version.V_2_3_0)) { fieldType.setNumericPrecisionStep(GeoPointField.PRECISION_STEP); fieldType.setNumericType(FieldType.LegacyNumericType.LONG); + ((LegacyGeoPointFieldType)fieldType).numericEncoded = true; } setupFieldType(context); return new GeoPointFieldMapper(simpleName, fieldType, defaultFieldType, indexSettings, latMapper, lonMapper, @@ -127,4 +128,9 @@ protected void parse(ParseContext context, GeoPoint point, String geoHash) throw } super.parse(context, point, geoHash); } + + @Override + public LegacyGeoPointFieldType fieldType() { + return (LegacyGeoPointFieldType) super.fieldType(); + } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/GeoShapeFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/GeoShapeFieldMapper.java index 9c90dd44dbc1e..355c515bdd0e2 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/GeoShapeFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/GeoShapeFieldMapper.java @@ -18,11 +18,11 @@ */ package org.elasticsearch.index.mapper; -import org.locationtech.spatial4j.shape.Point; -import org.locationtech.spatial4j.shape.Shape; -import org.locationtech.spatial4j.shape.jts.JtsGeometry; import org.apache.lucene.document.Field; +import org.apache.lucene.index.FieldInfo; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.search.Query; import org.apache.lucene.spatial.prefix.PrefixTreeStrategy; import org.apache.lucene.spatial.prefix.RecursivePrefixTreeStrategy; @@ -32,8 +32,8 @@ import org.apache.lucene.spatial.prefix.tree.QuadPrefixTree; import org.apache.lucene.spatial.prefix.tree.SpatialPrefixTree; import org.elasticsearch.Version; +import org.elasticsearch.action.fieldstats.FieldStats; import org.elasticsearch.common.Explicit; -import org.elasticsearch.common.Strings; import org.elasticsearch.common.geo.GeoUtils; import org.elasticsearch.common.geo.SpatialStrategy; import org.elasticsearch.common.geo.builders.ShapeBuilder; @@ -41,9 +41,11 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.DistanceUnit; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.support.XContentMapValues; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.query.QueryShardException; +import 
org.locationtech.spatial4j.shape.Point; +import org.locationtech.spatial4j.shape.Shape; +import org.locationtech.spatial4j.shape.jts.JtsGeometry; import java.io.IOException; import java.util.Iterator; @@ -51,9 +53,6 @@ import java.util.Map; import java.util.Objects; -import static org.elasticsearch.common.xcontent.support.XContentMapValues.lenientNodeBooleanValue; - - /** * FieldMapper for indexing {@link org.locationtech.spatial4j.shape.Shape}s. *
    @@ -182,11 +181,12 @@ public Mapper.Builder parse(String name, Map node, ParserContext builder.fieldType().setStrategyName(fieldNode.toString()); iterator.remove(); } else if (Names.COERCE.equals(fieldName)) { - builder.coerce(lenientNodeBooleanValue(fieldNode)); + builder.coerce(TypeParsers.nodeBooleanValue(fieldName, Names.COERCE, fieldNode)); iterator.remove(); } else if (Names.STRATEGY_POINTS_ONLY.equals(fieldName) && builder.fieldType().strategyName.equals(SpatialStrategy.TERM.getStrategyName()) == false) { - builder.fieldType().setPointsOnly(XContentMapValues.lenientNodeBooleanValue(fieldNode)); + boolean pointsOnly = TypeParsers.nodeBooleanValue(fieldName, Names.STRATEGY_POINTS_ONLY, fieldNode); + builder.fieldType().setPointsOnly(pointsOnly); iterator.remove(); } } @@ -414,6 +414,20 @@ public PrefixTreeStrategy resolveStrategy(String strategyName) { public Query termQuery(Object value, QueryShardContext context) { throw new QueryShardException(context, "Geo fields do not support exact searching, use dedicated geo queries instead"); } + + @Override + public FieldStats stats(IndexReader reader) throws IOException { + int maxDoc = reader.maxDoc(); + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(name()); + if (fi == null) { + return null; + } + /** + * we don't have a specific type for geo_shape so we use an empty {@link FieldStats.Text}. + * TODO: we should maybe support a new type that knows how to (de)encode the min/max information + */ + return new FieldStats.Text(maxDoc, -1, -1, -1, isSearchable(), isAggregatable()); + } } protected Explicit coerce; @@ -462,7 +476,7 @@ public Mapper parse(ParseContext context) throws IOException { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { } @Override diff --git a/core/src/main/java/org/elasticsearch/index/mapper/IdFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/IdFieldMapper.java index d70a50eede99e..813a546aaed36 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/IdFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/IdFieldMapper.java @@ -21,25 +21,19 @@ import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexOptions; -import org.apache.lucene.index.Term; -import org.apache.lucene.queries.TermsQuery; -import org.apache.lucene.search.BooleanClause; -import org.apache.lucene.search.BooleanQuery; -import org.apache.lucene.search.MultiTermQuery; -import org.apache.lucene.search.PrefixQuery; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.search.Query; -import org.apache.lucene.search.RegexpQuery; -import org.apache.lucene.util.BytesRef; +import org.apache.lucene.search.TermInSetQuery; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.lucene.BytesRefs; import org.elasticsearch.common.lucene.Lucene; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.util.iterable.Iterables; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.IndexSettings; +import org.elasticsearch.index.fielddata.IndexFieldData; +import org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData; import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; -import java.util.Collection; +import java.util.Arrays; import java.util.List; import java.util.Map; @@ -58,16 
+52,21 @@ public static class Defaults { public static final String NAME = IdFieldMapper.NAME; public static final MappedFieldType FIELD_TYPE = new IdFieldType(); + public static final MappedFieldType NESTED_FIELD_TYPE; static { FIELD_TYPE.setTokenized(false); - FIELD_TYPE.setIndexOptions(IndexOptions.NONE); - FIELD_TYPE.setStored(false); + FIELD_TYPE.setIndexOptions(IndexOptions.DOCS); + FIELD_TYPE.setStored(true); FIELD_TYPE.setOmitNorms(true); FIELD_TYPE.setIndexAnalyzer(Lucene.KEYWORD_ANALYZER); FIELD_TYPE.setSearchAnalyzer(Lucene.KEYWORD_ANALYZER); FIELD_TYPE.setName(NAME); FIELD_TYPE.freeze(); + + NESTED_FIELD_TYPE = FIELD_TYPE.clone(); + NESTED_FIELD_TYPE.setStored(false); + NESTED_FIELD_TYPE.freeze(); } } @@ -78,14 +77,15 @@ public MetadataFieldMapper.Builder parse(String name, Map node, } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final IndexSettings indexSettings = context.mapperService().getIndexSettings(); return new IdFieldMapper(indexSettings, fieldType); } } static final class IdFieldType extends TermBasedFieldType { - public IdFieldType() { + IdFieldType() { } protected IdFieldType(IdFieldType ref) { @@ -110,32 +110,66 @@ public boolean isSearchable() { @Override public Query termQuery(Object value, @Nullable QueryShardContext context) { - final BytesRef[] uids = Uid.createUidsForTypesAndId(context.queryTypes(), value); - return new TermsQuery(UidFieldMapper.NAME, uids); + return termsQuery(Arrays.asList(value), context); } @Override - public Query termsQuery(List values, @Nullable QueryShardContext context) { - return new TermsQuery(UidFieldMapper.NAME, Uid.createUidsForTypesAndIds(context.queryTypes(), values)); + public Query termsQuery(List values, @Nullable QueryShardContext context) { + if (indexOptions() != IndexOptions.NONE) { + // 6.x index, _id is indexed + return super.termsQuery(values, context); + } + // 5.x index, _uid is indexed + return new TermInSetQuery(UidFieldMapper.NAME, Uid.createUidsForTypesAndIds(context.queryTypes(), values)); + } + + @Override + public IndexFieldData.Builder fielddataBuilder() { + if (indexOptions() == IndexOptions.NONE) { + throw new IllegalArgumentException("Fielddata access on the _uid field is disallowed"); + } + return new PagedBytesIndexFieldData.Builder( + TextFieldMapper.Defaults.FIELDDATA_MIN_FREQUENCY, + TextFieldMapper.Defaults.FIELDDATA_MAX_FREQUENCY, + TextFieldMapper.Defaults.FIELDDATA_MIN_SEGMENT_SIZE); + } + } + + static MappedFieldType defaultFieldType(IndexSettings indexSettings) { + MappedFieldType defaultFieldType = Defaults.FIELD_TYPE.clone(); + if (indexSettings.isSingleType()) { + defaultFieldType.setIndexOptions(IndexOptions.DOCS); + defaultFieldType.setStored(true); + } else { + defaultFieldType.setIndexOptions(IndexOptions.NONE); + defaultFieldType.setStored(false); } + return defaultFieldType; } - private IdFieldMapper(Settings indexSettings, MappedFieldType existing) { - this(existing != null ? existing : Defaults.FIELD_TYPE, indexSettings); + private IdFieldMapper(IndexSettings indexSettings, MappedFieldType existing) { + this(existing == null ? 
defaultFieldType(indexSettings) : existing, indexSettings); } - private IdFieldMapper(MappedFieldType fieldType, Settings indexSettings) { - super(NAME, fieldType, Defaults.FIELD_TYPE, indexSettings); + private IdFieldMapper(MappedFieldType fieldType, IndexSettings indexSettings) { + super(NAME, fieldType, defaultFieldType(indexSettings), indexSettings.getSettings()); } @Override - public void preParse(ParseContext context) throws IOException {} + public void preParse(ParseContext context) throws IOException { + super.parse(context); + } @Override public void postParse(ParseContext context) throws IOException {} @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException {} + protected void parseCreateField(ParseContext context, List fields) throws IOException { + if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored()) { + Field id = new Field(NAME, context.sourceToParse().id(), fieldType); + fields.add(id); + } + } @Override protected String contentType() { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/IndexFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/IndexFieldMapper.java index e1615add19ec9..51acef299104a 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/IndexFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/IndexFieldMapper.java @@ -19,8 +19,8 @@ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; import org.elasticsearch.Version; @@ -30,7 +30,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.fielddata.IndexFieldData; -import org.elasticsearch.index.fielddata.plain.IndexIndexFieldData; +import org.elasticsearch.index.fielddata.plain.ConstantIndexFieldData; import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; @@ -87,14 +87,15 @@ public MetadataFieldMapper.Builder parse(String name, Map n } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); return new IndexFieldMapper(indexSettings, fieldType); } } static final class IndexFieldType extends MappedFieldType { - public IndexFieldType() {} + IndexFieldType() {} protected IndexFieldType(IndexFieldType ref) { super(ref); @@ -159,7 +160,7 @@ private boolean isSameIndex(Object value, String indexName) { @Override public IndexFieldData.Builder fielddataBuilder() { - return new IndexIndexFieldData.Builder(); + return new ConstantIndexFieldData.Builder(mapperService -> mapperService.index().getName()); } } @@ -178,7 +179,7 @@ public void preParse(ParseContext context) throws IOException {} public void postParse(ParseContext context) throws IOException {} @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException {} + protected void parseCreateField(ParseContext context, List fields) throws IOException {} @Override protected String contentType() { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/IpFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/IpFieldMapper.java index 
69a8e06f859d7..19dcdf3fb6045 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/IpFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/IpFieldMapper.java @@ -19,13 +19,15 @@ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; import org.apache.lucene.document.InetAddressPoint; import org.apache.lucene.document.SortedSetDocValuesField; import org.apache.lucene.document.StoredField; +import org.apache.lucene.index.FieldInfo; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexReader; -import org.apache.lucene.index.XPointValues; +import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.PointValues; +import org.apache.lucene.index.RandomAccessOrds; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; @@ -37,6 +39,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.fielddata.IndexFieldData; +import org.elasticsearch.index.fielddata.ScriptDocValues; import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData; import org.elasticsearch.index.mapper.LegacyNumberFieldMapper.Defaults; import org.elasticsearch.index.query.QueryShardContext; @@ -45,6 +48,7 @@ import java.io.IOException; import java.net.InetAddress; +import java.util.Arrays; import java.util.Iterator; import java.util.List; import java.util.Map; @@ -109,7 +113,7 @@ public Mapper.Builder parse(String name, Map node, ParserCo builder.nullValue(InetAddresses.forString(propNode.toString())); iterator.remove(); } else if (propName.equals("ignore_malformed")) { - builder.ignoreMalformed(TypeParsers.nodeBooleanValue("ignore_malformed", propNode, parserContext)); + builder.ignoreMalformed(TypeParsers.nodeBooleanValue(name, "ignore_malformed", propNode)); iterator.remove(); } else if (TypeParsers.parseMultiField(builder, name, parserContext, propName, propNode)) { iterator.remove(); @@ -178,7 +182,31 @@ public Query termQuery(Object value, @Nullable QueryShardContext context) { } @Override - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) { + public Query termsQuery(List values, QueryShardContext context) { + InetAddress[] addresses = new InetAddress[values.size()]; + int i = 0; + for (Object value : values) { + InetAddress address; + if (value instanceof InetAddress) { + address = (InetAddress) value; + } else { + if (value instanceof BytesRef) { + value = ((BytesRef) value).utf8ToString(); + } + if (value.toString().contains("/")) { + // the `terms` query contains some prefix queries, so we cannot create a set query + // and need to fall back to a disjunction of `term` queries + return super.termsQuery(values, context); + } + address = InetAddresses.forString(value.toString()); + } + addresses[i++] = address; + } + return InetAddressPoint.newSetQuery(name(), addresses); + } + + @Override + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) { failIfNotIndexed(); InetAddress lower; if (lowerTerm == null) { @@ -212,26 +240,65 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower @Override public FieldStats.Ip stats(IndexReader reader) throws IOException { String field = name(); - long size = XPointValues.size(reader, field); - if (size == 0) { + FieldInfo fi = 
org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(name()); + if (fi == null) { return null; } - int docCount = XPointValues.getDocCount(reader, field); - byte[] min = XPointValues.getMinPackedValue(reader, field); - byte[] max = XPointValues.getMaxPackedValue(reader, field); + long size = PointValues.size(reader, field); + if (size == 0) { + return new FieldStats.Ip(reader.maxDoc(), 0, -1, -1, isSearchable(), isAggregatable()); + } + int docCount = PointValues.getDocCount(reader, field); + byte[] min = PointValues.getMinPackedValue(reader, field); + byte[] max = PointValues.getMaxPackedValue(reader, field); return new FieldStats.Ip(reader.maxDoc(), docCount, -1L, size, isSearchable(), isAggregatable(), InetAddressPoint.decode(min), InetAddressPoint.decode(max)); } + public static final class IpScriptDocValues extends ScriptDocValues { + + private final RandomAccessOrds values; + + public IpScriptDocValues(RandomAccessOrds values) { + this.values = values; + } + + @Override + public void setNextDocId(int docId) { + values.setDocument(docId); + } + + public String getValue() { + if (isEmpty()) { + return null; + } else { + return get(0); + } + } + + @Override + public String get(int index) { + BytesRef encoded = values.lookupOrd(values.ordAt(0)); + InetAddress address = InetAddressPoint.decode( + Arrays.copyOfRange(encoded.bytes, encoded.offset, encoded.offset + encoded.length)); + return InetAddresses.toAddrString(address); + } + + @Override + public int size() { + return values.cardinality(); + } + } + @Override public IndexFieldData.Builder fielddataBuilder() { failIfNoDocValues(); - return new DocValuesIndexFieldData.Builder(); + return new DocValuesIndexFieldData.Builder().scriptFunction(IpScriptDocValues::new); } @Override - public Object valueForSearch(Object value) { + public Object valueForDisplay(Object value) { if (value == null) { return null; } @@ -285,7 +352,7 @@ protected IpFieldMapper clone() { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { Object addressAsObject; if (context.externalValueSet()) { addressAsObject = context.externalValue(); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/KeywordFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/KeywordFieldMapper.java index 204e61aabe6b3..a80327a98864a 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/KeywordFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/KeywordFieldMapper.java @@ -19,16 +19,21 @@ package org.elasticsearch.index.mapper; +import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.analysis.tokenattributes.CharTermAttribute; import org.apache.lucene.document.Field; import org.apache.lucene.document.SortedSetDocValuesField; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; import org.elasticsearch.Version; +import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.support.XContentMapValues; +import org.elasticsearch.index.analysis.NamedAnalyzer; import org.elasticsearch.index.fielddata.IndexFieldData; import 
org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData; @@ -38,6 +43,7 @@ import java.util.Iterator; import java.util.List; import java.util.Map; +import java.util.Objects; import java.util.Set; import static java.util.Collections.unmodifiableList; @@ -80,6 +86,11 @@ public Builder(String name) { builder = this; } + @Override + public KeywordFieldType fieldType() { + return (KeywordFieldType) super.fieldType(); + } + public Builder ignoreAbove(int ignoreAbove) { if (ignoreAbove < 0) { throw new IllegalArgumentException("[ignore_above] must be positive, got " + ignoreAbove); @@ -102,6 +113,12 @@ public Builder eagerGlobalOrdinals(boolean eagerGlobalOrdinals) { return builder; } + public Builder normalizer(NamedAnalyzer normalizer) { + fieldType().setNormalizer(normalizer); + fieldType().setSearchAnalyzer(normalizer); + return builder; + } + @Override public KeywordFieldMapper build(BuilderContext context) { setupFieldType(context); @@ -113,7 +130,7 @@ public KeywordFieldMapper build(BuilderContext context) { public static class TypeParser implements Mapper.TypeParser { @Override - public Mapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { + public Mapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { if (parserContext.indexVersionCreated().before(Version.V_5_0_0_alpha1)) { // Downgrade "keyword" to "string" in indexes created in 2.x so you can use modern syntax against old indexes Set unsupportedParameters = new HashSet<>(node.keySet()); @@ -134,7 +151,7 @@ public Mapper.Builder parse(String name, Map node, ParserContext } node.put("index", index); } - + return new StringFieldMapper.TypeParser().parse(name, node, parserContext); } KeywordFieldMapper.Builder builder = new KeywordFieldMapper.Builder(name); @@ -158,6 +175,15 @@ public Mapper.Builder parse(String name, Map node, ParserContext } else if (propName.equals("eager_global_ordinals")) { builder.eagerGlobalOrdinals(XContentMapValues.nodeBooleanValue(propNode)); iterator.remove(); + } else if (propName.equals("normalizer")) { + if (propNode != null) { + NamedAnalyzer normalizer = parserContext.getIndexAnalyzers().getNormalizer(propNode.toString()); + if (normalizer == null) { + throw new MapperParsingException("normalizer [" + propNode.toString() + "] not found for field [" + name + "]"); + } + builder.normalizer(normalizer); + } + iterator.remove(); } } return builder; @@ -166,21 +192,58 @@ public Mapper.Builder parse(String name, Map node, ParserContext public static final class KeywordFieldType extends StringFieldType { - public KeywordFieldType() {} + private NamedAnalyzer normalizer = null; + + public KeywordFieldType() { + setIndexAnalyzer(Lucene.KEYWORD_ANALYZER); + setSearchAnalyzer(Lucene.KEYWORD_ANALYZER); + } protected KeywordFieldType(KeywordFieldType ref) { super(ref); + this.normalizer = ref.normalizer; } public KeywordFieldType clone() { return new KeywordFieldType(this); } + @Override + public boolean equals(Object o) { + if (super.equals(o) == false) { + return false; + } + return Objects.equals(normalizer, ((KeywordFieldType) o).normalizer); + } + + @Override + public void checkCompatibility(MappedFieldType otherFT, List conflicts, boolean strict) { + super.checkCompatibility(otherFT, conflicts, strict); + KeywordFieldType other = (KeywordFieldType) otherFT; + if (Objects.equals(normalizer, other.normalizer) == false) { + conflicts.add("mapper [" + name() + "] has different [normalizer]"); + } + } + + 
@Override + public int hashCode() { + return 31 * super.hashCode() + Objects.hashCode(normalizer); + } + @Override public String typeName() { return CONTENT_TYPE; } + public NamedAnalyzer normalizer() { + return normalizer; + } + + public void setNormalizer(NamedAnalyzer normalizer) { + checkIfFrozen(); + this.normalizer = normalizer; + } + @Override public Query nullValueQuery() { if (nullValue() == null) { @@ -196,7 +259,7 @@ public IndexFieldData.Builder fielddataBuilder() { } @Override - public Object valueForSearch(Object value) { + public Object valueForDisplay(Object value) { if (value == null) { return null; } @@ -204,13 +267,34 @@ public Object valueForSearch(Object value) { BytesRef binaryValue = (BytesRef) value; return binaryValue.utf8ToString(); } + + @Override + protected BytesRef indexedValueForSearch(Object value) { + if (searchAnalyzer() == Lucene.KEYWORD_ANALYZER) { + // keyword analyzer with the default attribute source which encodes terms using UTF8 + // in that case we skip normalization, which may be slow if there many terms need to + // parse (eg. large terms query) since Analyzer.normalize involves things like creating + // attributes through reflection + // This if statement will be used whenever a normalizer is NOT configured + return super.indexedValueForSearch(value); + } + + if (value == null) { + return null; + } + if (value instanceof BytesRef) { + value = ((BytesRef) value).utf8ToString(); + } + return searchAnalyzer().normalize(name(), value.toString()); + } } private Boolean includeInAll; private int ignoreAbove; protected KeywordFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType, - int ignoreAbove, Boolean includeInAll, Settings indexSettings, MultiFields multiFields, CopyTo copyTo) { + int ignoreAbove, Boolean includeInAll, + Settings indexSettings, MultiFields multiFields, CopyTo copyTo) { super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, copyTo); assert fieldType.indexOptions().compareTo(IndexOptions.DOCS_AND_FREQS) <= 0; this.ignoreAbove = ignoreAbove; @@ -229,14 +313,19 @@ protected KeywordFieldMapper clone() { return (KeywordFieldMapper) super.clone(); } + @Override + public KeywordFieldType fieldType() { + return (KeywordFieldType) super.fieldType(); + } + // pkg-private for testing Boolean includeInAll() { return includeInAll; } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { - final String value; + protected void parseCreateField(ParseContext context, List fields) throws IOException { + String value; if (context.externalValueSet()) { value = context.externalValue().toString(); } else { @@ -252,6 +341,27 @@ protected void parseCreateField(ParseContext context, List fields) throws return; } + final NamedAnalyzer normalizer = fieldType().normalizer(); + if (normalizer != null) { + try (TokenStream ts = normalizer.tokenStream(name(), value)) { + final CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class); + ts.reset(); + if (ts.incrementToken() == false) { + throw new IllegalStateException("The normalization token stream is " + + "expected to produce exactly 1 token, but got 0 for analyzer " + + normalizer + " and input \"" + value + "\""); + } + final String newValue = termAtt.toString(); + if (ts.incrementToken()) { + throw new IllegalStateException("The normalization token stream is " + + "expected to produce exactly 1 token, but got 2+ for analyzer " + + normalizer + " and input \"" + value + "\""); + } + ts.end(); + 
value = newValue; + } + } + if (context.includeInAll(includeInAll, this)) { context.allEntries().addText(fieldType().name(), value, fieldType().boost()); } @@ -296,5 +406,11 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, if (includeDefaults || ignoreAbove != Defaults.IGNORE_ABOVE) { builder.field("ignore_above", ignoreAbove); } + + if (fieldType().normalizer() != null) { + builder.field("normalizer", fieldType().normalizer().name()); + } else if (includeDefaults) { + builder.nullField("normalizer"); + } } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/LatLonPointFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/LatLonPointFieldMapper.java new file mode 100644 index 0000000000000..df63992ba8886 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/mapper/LatLonPointFieldMapper.java @@ -0,0 +1,182 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.index.mapper; + +import org.apache.lucene.document.LatLonDocValuesField; +import org.apache.lucene.document.LatLonPoint; +import org.apache.lucene.document.StoredField; +import org.apache.lucene.geo.GeoEncodingUtils; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.PointValues; +import org.apache.lucene.search.Query; +import org.elasticsearch.Version; +import org.elasticsearch.action.fieldstats.FieldStats; +import org.elasticsearch.common.Explicit; +import org.elasticsearch.common.geo.GeoPoint; +import org.elasticsearch.common.geo.GeoUtils; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.fielddata.IndexFieldData; +import org.elasticsearch.index.fielddata.plain.AbstractLatLonPointDVIndexFieldData; +import org.elasticsearch.index.query.QueryShardContext; +import org.elasticsearch.index.query.QueryShardException; + +import java.io.IOException; +import java.util.Iterator; +import java.util.Map; + +/** + * Field Mapper for geo_point types. 
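
The keyword `normalizer` change above does two things: at index time the configured normalizer is run over the value (and must yield exactly one token), and the same normalizer is registered as the search analyzer so query terms are normalized identically. Here is a toy, Lucene-free sketch of why both sides must agree, using a hypothetical lowercasing normalizer rather than a real configurable analyzer.

```java
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class NormalizerSketch {
    // Hypothetical normalizer: the patch uses a configurable Lucene analyzer;
    // plain lowercasing stands in for the typical use case.
    static String normalize(String value) {
        return value.toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        Set<String> index = new HashSet<>();

        // Index time: store the normalized form of the keyword value.
        index.add(normalize("FooBar"));

        // Search time: normalizing the query term with the same function
        // makes the exact-match lookup succeed ...
        System.out.println(index.contains(normalize("FOOBAR"))); // true

        // ... whereas skipping normalization on one side silently misses.
        System.out.println(index.contains("FOOBAR")); // false
    }
}
```
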
+ * + * Uses lucene 6 LatLonPoint encoding + */ +public class LatLonPointFieldMapper extends BaseGeoPointFieldMapper { + public static final String CONTENT_TYPE = "geo_point"; + // TODO replace this once 5.0.0-beta1 is released + public static final Version LAT_LON_FIELD_VERSION = Version.fromString("5.0.0-beta1"); + + public static class Defaults extends BaseGeoPointFieldMapper.Defaults { + public static final LatLonPointFieldType FIELD_TYPE = new LatLonPointFieldType(); + + static { + FIELD_TYPE.setTokenized(false); + FIELD_TYPE.setHasDocValues(true); + FIELD_TYPE.setDimensions(2, Integer.BYTES); + FIELD_TYPE.freeze(); + } + } + + public static class Builder extends BaseGeoPointFieldMapper.Builder { + public Builder(String name) { + super(name, Defaults.FIELD_TYPE); + } + + @Override + public LatLonPointFieldMapper build(BuilderContext context, String simpleName, MappedFieldType fieldType, + MappedFieldType defaultFieldType, Settings indexSettings, + FieldMapper latMapper, FieldMapper lonMapper, FieldMapper geoHashMapper, + MultiFields multiFields, Explicit ignoreMalformed, + CopyTo copyTo) { + setupFieldType(context); + return new LatLonPointFieldMapper(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, + ignoreMalformed, copyTo); + } + + @Override + public LatLonPointFieldMapper build(BuilderContext context) { + return super.build(context); + } + } + + public static class TypeParser extends BaseGeoPointFieldMapper.TypeParser { + @Override + public Mapper.Builder parse(String name, Map node, ParserContext parserContext) + throws MapperParsingException { + return super.parse(name, node, parserContext); + } + } + + public LatLonPointFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType, + Settings indexSettings, MultiFields multiFields, Explicit ignoreMalformed, + CopyTo copyTo) { + super(simpleName, fieldType, defaultFieldType, indexSettings, null, null, null, multiFields, ignoreMalformed, copyTo); + } + + public static class LatLonPointFieldType extends GeoPointFieldType { + LatLonPointFieldType() { + } + + LatLonPointFieldType(LatLonPointFieldType ref) { + super(ref); + } + + @Override + public String typeName() { + return CONTENT_TYPE; + } + + @Override + public MappedFieldType clone() { + return new LatLonPointFieldType(this); + } + + @Override + public IndexFieldData.Builder fielddataBuilder() { + failIfNoDocValues(); + return new AbstractLatLonPointDVIndexFieldData.Builder(); + } + + @Override + public Query termQuery(Object value, QueryShardContext context) { + throw new QueryShardException(context, "Geo fields do not support exact searching, use dedicated geo queries instead: [" + + name() + "]"); + } + + @Override + public FieldStats.GeoPoint stats(IndexReader reader) throws IOException { + String field = name(); + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(name()); + if (fi == null) { + return null; + } + final long size = PointValues.size(reader, field); + if (size == 0) { + return new FieldStats.GeoPoint(reader.maxDoc(), -1L, -1L, -1L, isSearchable(), isAggregatable()); + } + final int docCount = PointValues.getDocCount(reader, field); + byte[] min = PointValues.getMinPackedValue(reader, field); + byte[] max = PointValues.getMaxPackedValue(reader, field); + GeoPoint minPt = new GeoPoint(GeoEncodingUtils.decodeLatitude(min, 0), GeoEncodingUtils.decodeLongitude(min, Integer.BYTES)); + GeoPoint maxPt = new GeoPoint(GeoEncodingUtils.decodeLatitude(max, 0), 
GeoEncodingUtils.decodeLongitude(max, Integer.BYTES)); + return new FieldStats.GeoPoint(reader.maxDoc(), docCount, -1L, size, isSearchable(), isAggregatable(), + minPt, maxPt); + } + } + + @Override + protected void parse(ParseContext originalContext, GeoPoint point, String geoHash) throws IOException { + // Geopoint fields, by default, will not be included in _all + final ParseContext context = originalContext.setIncludeInAllDefault(false); + + if (ignoreMalformed.value() == false) { + if (point.lat() > 90.0 || point.lat() < -90.0) { + throw new IllegalArgumentException("illegal latitude value [" + point.lat() + "] for " + name()); + } + if (point.lon() > 180.0 || point.lon() < -180) { + throw new IllegalArgumentException("illegal longitude value [" + point.lon() + "] for " + name()); + } + } else { + GeoUtils.normalizePoint(point); + } + if (fieldType().indexOptions() != IndexOptions.NONE) { + context.doc().add(new LatLonPoint(fieldType().name(), point.lat(), point.lon())); + } + if (fieldType().stored()) { + context.doc().add(new StoredField(fieldType().name(), point.toString())); + } + if (fieldType.hasDocValues()) { + context.doc().add(new LatLonDocValuesField(fieldType().name(), point.lat(), point.lon())); + } + // if the mapping contains multifields then use the geohash string + if (multiFields.iterator().hasNext()) { + multiFields.parse(this, context.createExternalValueContext(point.geohash())); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/index/mapper/LegacyByteFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/LegacyByteFieldMapper.java index 2c63806ebbe28..d69ae8a51a367 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/LegacyByteFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/LegacyByteFieldMapper.java @@ -20,9 +20,10 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; -import org.apache.lucene.document.Field; +import org.apache.lucene.index.FieldInfo; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.Terms; import org.apache.lucene.search.LegacyNumericRangeQuery; import org.apache.lucene.search.Query; @@ -38,6 +39,7 @@ import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType; import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData; +import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; import java.util.Iterator; @@ -107,7 +109,7 @@ public Mapper.Builder parse(String name, Map node, ParserContext } static final class ByteFieldType extends NumberFieldType { - public ByteFieldType() { + ByteFieldType() { super(LegacyNumericType.INT); } @@ -131,7 +133,7 @@ public Byte nullValue() { } @Override - public Byte valueForSearch(Object value) { + public Byte valueForDisplay(Object value) { if (value == null) { return null; } @@ -146,7 +148,7 @@ public BytesRef indexedValueForSearch(Object value) { } @Override - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) { + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) { return LegacyNumericRangeQuery.newIntRange(name(), numericPrecisionStep(), lowerTerm == null ? null : (int)parseValue(lowerTerm), upperTerm == null ? 
null : (int)parseValue(upperTerm), @@ -156,9 +158,13 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower @Override public FieldStats.Long stats(IndexReader reader) throws IOException { int maxDoc = reader.maxDoc(); + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(name()); + if (fi == null) { + return null; + } Terms terms = org.apache.lucene.index.MultiFields.getTerms(reader, name()); if (terms == null) { - return null; + return new FieldStats.Long(maxDoc, 0, -1, -1, isSearchable(), isAggregatable()); } long minValue = LegacyNumericUtils.getMinInt(terms); long maxValue = LegacyNumericUtils.getMaxInt(terms); @@ -201,7 +207,7 @@ protected boolean customBoost() { } @Override - protected void innerParseCreateField(ParseContext context, List fields) throws IOException { + protected void innerParseCreateField(ParseContext context, List fields) throws IOException { byte value; float boost = fieldType().boost(); if (context.externalValueSet()) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/LegacyDateFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/LegacyDateFieldMapper.java index 29689d06dff9a..42e588d963dc3 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/LegacyDateFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/LegacyDateFieldMapper.java @@ -19,9 +19,10 @@ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; +import org.apache.lucene.index.FieldInfo; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.Terms; import org.apache.lucene.search.LegacyNumericRangeQuery; import org.apache.lucene.search.Query; @@ -43,8 +44,9 @@ import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType; import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData; import org.elasticsearch.index.mapper.LegacyLongFieldMapper.CustomLongNumericField; +import org.elasticsearch.index.query.QueryRewriteContext; +import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.search.DocValueFormat; -import org.elasticsearch.search.internal.SearchContext; import org.joda.time.DateTimeZone; import java.io.IOException; @@ -53,7 +55,6 @@ import java.util.Locale; import java.util.Map; import java.util.Objects; -import java.util.concurrent.Callable; import java.util.concurrent.TimeUnit; import static org.elasticsearch.index.mapper.TypeParsers.parseDateTimeFormatter; @@ -176,67 +177,6 @@ public static class TypeParser implements Mapper.TypeParser { public static class DateFieldType extends NumberFieldType { - final class LateParsingQuery extends Query { - - final Object lowerTerm; - final Object upperTerm; - final boolean includeLower; - final boolean includeUpper; - final DateTimeZone timeZone; - final DateMathParser forcedDateParser; - - public LateParsingQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, DateTimeZone timeZone, DateMathParser forcedDateParser) { - this.lowerTerm = lowerTerm; - this.upperTerm = upperTerm; - this.includeLower = includeLower; - this.includeUpper = includeUpper; - this.timeZone = timeZone; - this.forcedDateParser = forcedDateParser; - } - - @Override - public Query rewrite(IndexReader reader) throws IOException { - Query rewritten = super.rewrite(reader); - if (rewritten != this) { - return rewritten; - } - return 
innerRangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser); - } - - // Even though we only cache rewritten queries it is good to let all queries implement hashCode() and equals(): - @Override - public boolean equals(Object o) { - if (this == o) return true; - if (sameClassAs(o) == false) return false; - - LateParsingQuery that = (LateParsingQuery) o; - if (includeLower != that.includeLower) return false; - if (includeUpper != that.includeUpper) return false; - if (lowerTerm != null ? !lowerTerm.equals(that.lowerTerm) : that.lowerTerm != null) return false; - if (upperTerm != null ? !upperTerm.equals(that.upperTerm) : that.upperTerm != null) return false; - if (timeZone != null ? !timeZone.equals(that.timeZone) : that.timeZone != null) return false; - - return true; - } - - @Override - public int hashCode() { - return Objects.hash(classHash(), lowerTerm, upperTerm, includeLower, includeUpper, timeZone); - } - - @Override - public String toString(String s) { - final StringBuilder sb = new StringBuilder(); - return sb.append(name()).append(':') - .append(includeLower ? '[' : '{') - .append((lowerTerm == null) ? "*" : lowerTerm.toString()) - .append(" TO ") - .append((upperTerm == null) ? "*" : upperTerm.toString()) - .append(includeUpper ? ']' : '}') - .toString(); - } - } - protected FormatDateTimeFormatter dateTimeFormatter = Defaults.DATE_TIME_FORMATTER; protected TimeUnit timeUnit = Defaults.TIME_UNIT; protected DateMathParser dateMathParser = new DateMathParser(dateTimeFormatter); @@ -339,7 +279,7 @@ public BytesRef indexedValueForSearch(Object value) { } @Override - public Object valueForSearch(Object value) { + public Object valueForDisplay(Object value) { Long val = (Long) value; if (val == null) { return null; @@ -348,16 +288,20 @@ public Object valueForSearch(Object value) { } @Override - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) { - return rangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, null, null); + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) { + return rangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, null, null, context); } @Override public FieldStats.Date stats(IndexReader reader) throws IOException { int maxDoc = reader.maxDoc(); + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(name()); + if (fi == null) { + return null; + } Terms terms = org.apache.lucene.index.MultiFields.getTerms(reader, name()); if (terms == null) { - return null; + return new FieldStats.Date(maxDoc, 0, -1, -1, isSearchable(), isAggregatable()); } long minValue = LegacyNumericUtils.getMinLong(terms); long maxValue = LegacyNumericUtils.getMaxLong(terms); @@ -366,14 +310,20 @@ public FieldStats.Date stats(IndexReader reader) throws IOException { dateTimeFormatter(), minValue, maxValue); } - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser) { - return new LateParsingQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser); + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, + @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser, QueryShardContext context) { + return innerRangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, 
timeZone, forcedDateParser, context); } - private Query innerRangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser) { + private Query innerRangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, + @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser, QueryRewriteContext context) { return LegacyNumericRangeQuery.newLongRange(name(), numericPrecisionStep(), - lowerTerm == null ? null : parseToMilliseconds(lowerTerm, !includeLower, timeZone, forcedDateParser == null ? dateMathParser : forcedDateParser), - upperTerm == null ? null : parseToMilliseconds(upperTerm, includeUpper, timeZone, forcedDateParser == null ? dateMathParser : forcedDateParser), + lowerTerm == null ? null + : parseToMilliseconds(lowerTerm, !includeLower, timeZone, + forcedDateParser == null ? dateMathParser : forcedDateParser, context), + upperTerm == null ? null + : parseToMilliseconds(upperTerm, includeUpper, timeZone, + forcedDateParser == null ? dateMathParser : forcedDateParser, context), includeLower, includeUpper); } @@ -381,7 +331,7 @@ private Query innerRangeQuery(Object lowerTerm, Object upperTerm, boolean includ public Relation isFieldWithinQuery(IndexReader reader, Object from, Object to, boolean includeLower, boolean includeUpper, - DateTimeZone timeZone, DateMathParser dateParser) throws IOException { + DateTimeZone timeZone, DateMathParser dateParser, QueryRewriteContext context) throws IOException { if (dateParser == null) { dateParser = this.dateMathParser; } @@ -397,7 +347,7 @@ public Relation isFieldWithinQuery(IndexReader reader, long fromInclusive = Long.MIN_VALUE; if (from != null) { - fromInclusive = parseToMilliseconds(from, !includeLower, timeZone, dateParser); + fromInclusive = parseToMilliseconds(from, !includeLower, timeZone, dateParser, context); if (includeLower == false) { if (fromInclusive == Long.MAX_VALUE) { return Relation.DISJOINT; @@ -408,7 +358,7 @@ public Relation isFieldWithinQuery(IndexReader reader, long toInclusive = Long.MAX_VALUE; if (to != null) { - toInclusive = parseToMilliseconds(to, includeUpper, timeZone, dateParser); + toInclusive = parseToMilliseconds(to, includeUpper, timeZone, dateParser, context); if (includeUpper == false) { if (toInclusive == Long.MIN_VALUE) { return Relation.DISJOINT; @@ -426,7 +376,8 @@ public Relation isFieldWithinQuery(IndexReader reader, } } - public long parseToMilliseconds(Object value, boolean inclusive, @Nullable DateTimeZone zone, @Nullable DateMathParser forcedDateParser) { + public long parseToMilliseconds(Object value, boolean inclusive, @Nullable DateTimeZone zone, + @Nullable DateMathParser forcedDateParser, QueryRewriteContext context) { if (value instanceof Long) { return ((Long) value).longValue(); } @@ -442,7 +393,7 @@ public long parseToMilliseconds(Object value, boolean inclusive, @Nullable DateT } else { strValue = value.toString(); } - return dateParser.parse(strValue, now(), inclusive, zone); + return dateParser.parse(strValue, context::nowInMillis, inclusive, zone); } @Override @@ -474,25 +425,13 @@ public DateFieldType fieldType() { return (DateFieldType) super.fieldType(); } - private static Callable now() { - return new Callable() { - @Override - public Long call() { - final SearchContext context = SearchContext.current(); - return context != null - ? 
context.nowInMillis() - : System.currentTimeMillis(); - } - }; - } - @Override protected boolean customBoost() { return true; } @Override - protected void innerParseCreateField(ParseContext context, List fields) throws IOException { + protected void innerParseCreateField(ParseContext context, List fields) throws IOException { String dateAsString = null; float boost = fieldType().boost(); if (context.externalValueSet()) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/LegacyDoubleFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/LegacyDoubleFieldMapper.java index 07e459e8ea976..9895155d88ff2 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/LegacyDoubleFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/LegacyDoubleFieldMapper.java @@ -21,9 +21,10 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; -import org.apache.lucene.document.Field; +import org.apache.lucene.index.FieldInfo; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.Terms; import org.apache.lucene.search.LegacyNumericRangeQuery; import org.apache.lucene.search.Query; @@ -41,6 +42,7 @@ import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType; import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData; +import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; import java.util.Iterator; @@ -135,7 +137,7 @@ public java.lang.Double nullValue() { } @Override - public java.lang.Double valueForSearch(Object value) { + public java.lang.Double valueForDisplay(Object value) { if (value == null) { return null; } @@ -157,7 +159,7 @@ public BytesRef indexedValueForSearch(Object value) { } @Override - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) { + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) { return LegacyNumericRangeQuery.newDoubleRange(name(), numericPrecisionStep(), lowerTerm == null ? null : parseDoubleValue(lowerTerm), upperTerm == null ? 
null : parseDoubleValue(upperTerm), @@ -167,9 +169,13 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower @Override public FieldStats.Double stats(IndexReader reader) throws IOException { int maxDoc = reader.maxDoc(); + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(name()); + if (fi == null) { + return null; + } Terms terms = org.apache.lucene.index.MultiFields.getTerms(reader, name()); if (terms == null) { - return null; + return new FieldStats.Double(maxDoc, 0, -1, -1, isSearchable(), isAggregatable()); } double minValue = NumericUtils.sortableLongToDouble(LegacyNumericUtils.getMinLong(terms)); double maxValue = NumericUtils.sortableLongToDouble(LegacyNumericUtils.getMaxLong(terms)); @@ -201,7 +207,7 @@ protected boolean customBoost() { } @Override - protected void innerParseCreateField(ParseContext context, List fields) throws IOException { + protected void innerParseCreateField(ParseContext context, List fields) throws IOException { double value; float boost = fieldType().boost(); if (context.externalValueSet()) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/LegacyFloatFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/LegacyFloatFieldMapper.java index 3fbc639ea67f8..0e473be04a455 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/LegacyFloatFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/LegacyFloatFieldMapper.java @@ -21,9 +21,10 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; -import org.apache.lucene.document.Field; +import org.apache.lucene.index.FieldInfo; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.Terms; import org.apache.lucene.search.LegacyNumericRangeQuery; import org.apache.lucene.search.Query; @@ -40,6 +41,7 @@ import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType; import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData; +import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; import java.util.Iterator; @@ -110,7 +112,7 @@ public Mapper.Builder parse(String name, Map node, ParserContext static final class FloatFieldType extends NumberFieldType { - public FloatFieldType() { + FloatFieldType() { super(LegacyNumericType.FLOAT); } @@ -142,7 +144,7 @@ public BytesRef indexedValueForSearch(Object value) { } @Override - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) { + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) { return LegacyNumericRangeQuery.newFloatRange(name(), numericPrecisionStep(), lowerTerm == null ? null : parseValue(lowerTerm), upperTerm == null ? 
null : parseValue(upperTerm), @@ -152,9 +154,13 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower @Override public FieldStats.Double stats(IndexReader reader) throws IOException { int maxDoc = reader.maxDoc(); + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(name()); + if (fi == null) { + return null; + } Terms terms = org.apache.lucene.index.MultiFields.getTerms(reader, name()); if (terms == null) { - return null; + return new FieldStats.Double(maxDoc, 0, -1, -1, isSearchable(), isAggregatable()); } float minValue = NumericUtils.sortableIntToFloat(LegacyNumericUtils.getMinInt(terms)); float maxValue = NumericUtils.sortableIntToFloat(LegacyNumericUtils.getMaxInt(terms)); @@ -196,7 +202,7 @@ protected boolean customBoost() { } @Override - protected void innerParseCreateField(ParseContext context, List fields) throws IOException { + protected void innerParseCreateField(ParseContext context, List fields) throws IOException { float value; float boost = fieldType().boost(); if (context.externalValueSet()) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/LegacyGeoPointFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/LegacyGeoPointFieldMapper.java index 3fe195c5d917b..619a0a824ef18 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/LegacyGeoPointFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/LegacyGeoPointFieldMapper.java @@ -60,7 +60,7 @@ public static class Names extends BaseGeoPointFieldMapper.Names { public static class Defaults extends BaseGeoPointFieldMapper.Defaults{ public static final Explicit COERCE = new Explicit<>(false, false); - public static final GeoPointFieldType FIELD_TYPE = new GeoPointFieldType(); + public static final GeoPointFieldType FIELD_TYPE = new LegacyGeoPointFieldType(); static { FIELD_TYPE.setIndexOptions(IndexOptions.DOCS); @@ -123,7 +123,7 @@ public static Builder parse(Builder builder, Map node, Mapper.Ty String propName = entry.getKey(); Object propNode = entry.getValue(); if (propName.equals(Names.COERCE)) { - builder.coerce = XContentMapValues.lenientNodeBooleanValue(propNode); + builder.coerce = XContentMapValues.lenientNodeBooleanValue(propNode, propName); iterator.remove(); } } @@ -297,7 +297,7 @@ protected void parse(ParseContext context, GeoPoint point, String geoHash) throw validPoint = true; } - if (coerce.value() == true && validPoint == false) { + if (coerce.value() && validPoint == false) { // by setting coerce to false we are assuming all geopoints are already in a valid coordinate system // thus this extra step can be skipped GeoUtils.normalizePoint(point, true, true); @@ -331,6 +331,11 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, } } + @Override + public LegacyGeoPointFieldType fieldType() { + return (LegacyGeoPointFieldType) super.fieldType(); + } + public static class CustomGeoPointDocValuesField extends CustomDocValuesField { private final ObjectHashSet points; diff --git a/core/src/main/java/org/elasticsearch/index/mapper/LegacyIntegerFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/LegacyIntegerFieldMapper.java index 65b9b65eaf913..987f932e8cad3 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/LegacyIntegerFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/LegacyIntegerFieldMapper.java @@ -21,9 +21,10 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; -import 
org.apache.lucene.document.Field; +import org.apache.lucene.index.FieldInfo; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.Terms; import org.apache.lucene.search.LegacyNumericRangeQuery; import org.apache.lucene.search.Query; @@ -39,6 +40,7 @@ import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType; import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData; +import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; import java.util.Iterator; @@ -145,7 +147,7 @@ public BytesRef indexedValueForSearch(Object value) { } @Override - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) { + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) { return LegacyNumericRangeQuery.newIntRange(name(), numericPrecisionStep(), lowerTerm == null ? null : parseValue(lowerTerm), upperTerm == null ? null : parseValue(upperTerm), @@ -155,9 +157,13 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower @Override public FieldStats.Long stats(IndexReader reader) throws IOException { int maxDoc = reader.maxDoc(); + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(name()); + if (fi == null) { + return null; + } Terms terms = org.apache.lucene.index.MultiFields.getTerms(reader, name()); if (terms == null) { - return null; + return new FieldStats.Long(maxDoc, 0, -1, -1, isSearchable(), isAggregatable()); } long minValue = LegacyNumericUtils.getMinInt(terms); long maxValue = LegacyNumericUtils.getMaxInt(terms); @@ -200,7 +206,7 @@ protected boolean customBoost() { } @Override - protected void innerParseCreateField(ParseContext context, List fields) throws IOException { + protected void innerParseCreateField(ParseContext context, List fields) throws IOException { int value; float boost = fieldType().boost(); if (context.externalValueSet()) { @@ -272,7 +278,7 @@ protected void innerParseCreateField(ParseContext context, List fields) t addIntegerFields(context, fields, value, boost); } - protected void addIntegerFields(ParseContext context, List fields, int value, float boost) { + protected void addIntegerFields(ParseContext context, List fields, int value, float boost) { if (fieldType().indexOptions() != IndexOptions.NONE || fieldType().stored()) { CustomIntegerNumericField field = new CustomIntegerNumericField(value, fieldType()); if (boost != 1f && Version.indexCreated(context.indexSettings()).before(Version.V_5_0_0_alpha1)) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/LegacyIpFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/LegacyIpFieldMapper.java index 699124a4c05c2..c3704d34fb5c2 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/LegacyIpFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/LegacyIpFieldMapper.java @@ -19,9 +19,10 @@ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; +import org.apache.lucene.index.FieldInfo; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.Terms; import org.apache.lucene.search.LegacyNumericRangeQuery; import 
org.apache.lucene.search.Query; @@ -80,7 +81,9 @@ public static long ipToLong(String ip) { } String[] octets = pattern.split(ip); if (octets.length != 4) { - throw new IllegalArgumentException("failed to parse ip [" + ip + "], not a valid ipv4 address (4 dots)"); + // this must be ipv6, since isInetAddress returned true above + throw new IllegalArgumentException("ip [" + ip + "] is an IPv6 address, but this ip field is for an " + + "index created before 5.0. Reindex into a new index to get IPv6 support."); } return (Long.parseLong(octets[0]) << 24) + (Integer.parseInt(octets[1]) << 16) + (Integer.parseInt(octets[2]) << 8) + Integer.parseInt(octets[3]); @@ -171,7 +174,7 @@ public String typeName() { * IPs should return as a string. */ @Override - public Object valueForSearch(Object value) { + public Object valueForDisplay(Object value) { Long val = (Long) value; if (val == null) { return null; @@ -210,14 +213,14 @@ public Query termQuery(Object value, @Nullable QueryShardContext context) { } if (fromTo != null) { return rangeQuery(fromTo[0] == 0 ? null : fromTo[0], - fromTo[1] == MAX_IP ? null : fromTo[1], true, false); + fromTo[1] == MAX_IP ? null : fromTo[1], true, false, context); } } return super.termQuery(value, context); } @Override - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) { + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) { return LegacyNumericRangeQuery.newLongRange(name(), numericPrecisionStep(), lowerTerm == null ? null : parseValue(lowerTerm), upperTerm == null ? null : parseValue(upperTerm), @@ -227,9 +230,13 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower @Override public FieldStats stats(IndexReader reader) throws IOException { int maxDoc = reader.maxDoc(); + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(name()); + if (fi == null) { + return null; + } Terms terms = org.apache.lucene.index.MultiFields.getTerms(reader, name()); if (terms == null) { - return null; + return new FieldStats.Ip(maxDoc, 0, -1, -1, isSearchable(), isAggregatable()); } long minValue = LegacyNumericUtils.getMinLong(terms); long maxValue = LegacyNumericUtils.getMaxLong(terms); @@ -282,7 +289,7 @@ private static long parseValue(Object value) { } @Override - protected void innerParseCreateField(ParseContext context, List fields) throws IOException { + protected void innerParseCreateField(ParseContext context, List fields) throws IOException { String ipAsString; if (context.externalValueSet()) { ipAsString = (String) context.externalValue(); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/LegacyIpIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/mapper/LegacyIpIndexFieldData.java index feb3328227d0b..20326c2584167 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/LegacyIpIndexFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/LegacyIpIndexFieldData.java @@ -24,7 +24,9 @@ import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.search.SortField; import org.apache.lucene.util.BytesRef; +import org.elasticsearch.common.Nullable; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.index.Index; import org.elasticsearch.index.fielddata.AtomicFieldData; @@ -46,7 +48,7 @@ final class 
LegacyIpIndexFieldData implements IndexFieldData { protected final String fieldName; protected final Logger logger; - public LegacyIpIndexFieldData(Index index, String fieldName) { + LegacyIpIndexFieldData(Index index, String fieldName) { this.index = index; this.fieldName = fieldName; this.logger = Loggers.getLogger(getClass()); @@ -71,22 +73,22 @@ public Index index() { @Override public AtomicFieldData load(LeafReaderContext context) { return new AtomicFieldData() { - + @Override public void close() { // no-op } - + @Override public long ramBytesUsed() { return 0; } - + @Override public ScriptDocValues getScriptValues() { throw new UnsupportedOperationException("Cannot run scripts on ip fields"); } - + @Override public SortedBinaryDocValues getBytesValues() { SortedNumericDocValues values; @@ -115,12 +117,12 @@ public BytesRef valueAt(int index) { byte[] encoded = InetAddressPoint.encode(inet); return new BytesRef(encoded); } - + @Override public void setDocument(int docId) { values.setDocument(docId); } - + @Override public int count() { return values.count(); @@ -137,9 +139,11 @@ public AtomicFieldData loadDirect(LeafReaderContext context) } @Override - public IndexFieldData.XFieldComparatorSource comparatorSource( - Object missingValue, MultiValueMode sortMode, Nested nested) { - return new BytesRefFieldComparatorSource(this, missingValue, sortMode, nested); + public SortField sortField(@Nullable Object missingValue, MultiValueMode sortMode, + Nested nested, boolean reverse) { + final XFieldComparatorSource source = + new BytesRefFieldComparatorSource(this, missingValue, + sortMode, nested); + return new SortField(getFieldName(), source, reverse); } - } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/LegacyLongFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/LegacyLongFieldMapper.java index 4661d1cd365aa..6d76401281997 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/LegacyLongFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/LegacyLongFieldMapper.java @@ -21,9 +21,10 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; -import org.apache.lucene.document.Field; +import org.apache.lucene.index.FieldInfo; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.Terms; import org.apache.lucene.search.LegacyNumericRangeQuery; import org.apache.lucene.search.Query; @@ -39,6 +40,7 @@ import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType; import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData; +import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; import java.util.Iterator; @@ -146,7 +148,7 @@ public BytesRef indexedValueForSearch(Object value) { } @Override - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) { + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) { return LegacyNumericRangeQuery.newLongRange(name(), numericPrecisionStep(), lowerTerm == null ? null : parseLongValue(lowerTerm), upperTerm == null ? 
null : parseLongValue(upperTerm), @@ -156,9 +158,13 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower @Override public FieldStats stats(IndexReader reader) throws IOException { int maxDoc = reader.maxDoc(); + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(name()); + if (fi == null) { + return null; + } Terms terms = org.apache.lucene.index.MultiFields.getTerms(reader, name()); if (terms == null) { - return null; + return new FieldStats.Long(maxDoc, 0, -1, -1, isSearchable(), isAggregatable()); } long minValue = LegacyNumericUtils.getMinLong(terms); long maxValue = LegacyNumericUtils.getMaxLong(terms); @@ -191,7 +197,7 @@ protected boolean customBoost() { } @Override - protected void innerParseCreateField(ParseContext context, List fields) throws IOException { + protected void innerParseCreateField(ParseContext context, List fields) throws IOException { long value; float boost = fieldType().boost(); if (context.externalValueSet()) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/LegacyNumberFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/LegacyNumberFieldMapper.java index f377883aa24e5..99974c349423d 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/LegacyNumberFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/LegacyNumberFieldMapper.java @@ -27,6 +27,7 @@ import org.apache.lucene.document.Field; import org.apache.lucene.document.SortedNumericDocValuesField; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.Explicit; import org.elasticsearch.common.Nullable; @@ -173,7 +174,7 @@ protected LegacyNumberFieldMapper clone() { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { RuntimeException e = null; try { innerParseCreateField(context, fields); @@ -188,9 +189,9 @@ protected void parseCreateField(ParseContext context, List fields) throws } } - protected abstract void innerParseCreateField(ParseContext context, List fields) throws IOException; + protected abstract void innerParseCreateField(ParseContext context, List fields) throws IOException; - protected final void addDocValue(ParseContext context, List fields, long value) { + protected final void addDocValue(ParseContext context, List fields, long value) { fields.add(new SortedNumericDocValuesField(fieldType().name(), value)); } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/LegacyShortFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/LegacyShortFieldMapper.java index b42ec620aea43..d625b0a7a3aae 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/LegacyShortFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/LegacyShortFieldMapper.java @@ -21,9 +21,10 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; -import org.apache.lucene.document.Field; +import org.apache.lucene.index.FieldInfo; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.Terms; import org.apache.lucene.search.LegacyNumericRangeQuery; import org.apache.lucene.search.Query; @@ -39,6 +40,7 @@ import org.elasticsearch.index.fielddata.IndexFieldData; import 
org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType; import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData; +import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; import java.util.Iterator; @@ -111,7 +113,7 @@ public Mapper.Builder parse(String name, Map node, ParserContext static final class ShortFieldType extends NumberFieldType { - public ShortFieldType() { + ShortFieldType() { super(LegacyNumericType.INT); } @@ -135,7 +137,7 @@ public Short nullValue() { } @Override - public Short valueForSearch(Object value) { + public Short valueForDisplay(Object value) { if (value == null) { return null; } @@ -150,7 +152,7 @@ public BytesRef indexedValueForSearch(Object value) { } @Override - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) { + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) { return LegacyNumericRangeQuery.newIntRange(name(), numericPrecisionStep(), lowerTerm == null ? null : (int)parseValue(lowerTerm), upperTerm == null ? null : (int)parseValue(upperTerm), @@ -160,9 +162,13 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower @Override public FieldStats.Long stats(IndexReader reader) throws IOException { int maxDoc = reader.maxDoc(); + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(name()); + if (fi == null) { + return null; + } Terms terms = org.apache.lucene.index.MultiFields.getTerms(reader, name()); if (terms == null) { - return null; + return new FieldStats.Long(maxDoc, 0, -1, -1, isSearchable(), isAggregatable()); } long minValue = LegacyNumericUtils.getMinInt(terms); long maxValue = LegacyNumericUtils.getMaxInt(terms); @@ -205,7 +211,7 @@ protected boolean customBoost() { } @Override - protected void innerParseCreateField(ParseContext context, List fields) throws IOException { + protected void innerParseCreateField(ParseContext context, List fields) throws IOException { short value; float boost = fieldType().boost(); if (context.externalValueSet()) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/LegacyTokenCountFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/LegacyTokenCountFieldMapper.java index 2ed1b544a0292..5dfb93d4836f1 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/LegacyTokenCountFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/LegacyTokenCountFieldMapper.java @@ -23,6 +23,7 @@ import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute; import org.apache.lucene.document.Field; +import org.apache.lucene.index.IndexableField; import org.elasticsearch.Version; import org.elasticsearch.common.Explicit; import org.elasticsearch.common.settings.Settings; @@ -97,7 +98,7 @@ public Mapper.Builder parse(String name, Map node, ParserContext builder.nullValue(nodeIntegerValue(propNode)); iterator.remove(); } else if (propName.equals("analyzer")) { - NamedAnalyzer analyzer = parserContext.analysisService().analyzer(propNode.toString()); + NamedAnalyzer analyzer = parserContext.getIndexAnalyzers().get(propNode.toString()); if (analyzer == null) { throw new MapperParsingException("Analyzer [" + propNode.toString() + "] not found for field [" + name + "]"); } @@ -122,7 +123,7 @@ protected LegacyTokenCountFieldMapper(String simpleName, MappedFieldType fieldTy } @Override - 
protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { ValueAndBoost valueAndBoost = StringFieldMapper.parseCreateFieldForString(context, null /* Out null value is an int so we convert*/, fieldType().boost()); if (valueAndBoost.value() == null && fieldType().nullValue() == null) { return; diff --git a/core/src/main/java/org/elasticsearch/index/mapper/MappedFieldType.java b/core/src/main/java/org/elasticsearch/index/mapper/MappedFieldType.java index 9a434cc8a368b..9e8b2d9018f25 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/MappedFieldType.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/MappedFieldType.java @@ -20,24 +20,31 @@ package org.elasticsearch.index.mapper; import org.apache.lucene.document.FieldType; +import org.apache.lucene.index.FieldInfo; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.MultiFields; +import org.apache.lucene.index.PrefixCodedTerms; import org.apache.lucene.index.Term; import org.apache.lucene.index.Terms; +import org.apache.lucene.index.PrefixCodedTerms.TermIterator; +import org.apache.lucene.search.BooleanClause.Occur; +import org.apache.lucene.search.BooleanQuery; +import org.apache.lucene.search.BoostQuery; import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.MultiTermQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermInSetQuery; import org.apache.lucene.search.TermQuery; -import org.apache.lucene.search.BooleanClause.Occur; -import org.apache.lucene.search.BooleanQuery; -import org.apache.lucene.search.BoostQuery; +import org.apache.lucene.util.BytesRef; import org.elasticsearch.action.fieldstats.FieldStats; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.joda.DateMathParser; +import org.elasticsearch.common.lucene.all.AllTermQuery; import org.elasticsearch.common.unit.Fuzziness; import org.elasticsearch.index.analysis.NamedAnalyzer; import org.elasticsearch.index.fielddata.IndexFieldData; +import org.elasticsearch.index.query.QueryRewriteContext; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.query.QueryShardException; import org.elasticsearch.index.similarity.SimilarityProvider; @@ -133,7 +140,7 @@ public int hashCode() { eagerGlobalOrdinals, similarity == null ? null : similarity.name(), nullValue, nullValueAsString); } - // norelease: we need to override freeze() and add safety checks that all settings are actually set + // TODO: we need to override freeze() and add safety checks that all settings are actually set /** Returns the name of this type, as would be specified in mapping properties */ public abstract String typeName(); @@ -303,21 +310,21 @@ public void setNullValue(Object nullValue) { /** Given a value that comes from the stored fields API, convert it to the * expected type. For instance a date field would store dates as longs and * format it back to a string in this method. */ - public Object valueForSearch(Object value) { + public Object valueForDisplay(Object value) { return value; } /** Returns true if the field is searchable. * */ - protected boolean isSearchable() { + public boolean isSearchable() { return indexOptions() != IndexOptions.NONE; } /** Returns true if the field is aggregatable. 
* */ - protected boolean isAggregatable() { + public boolean isAggregatable() { try { fielddataBuilder(); return true; @@ -343,7 +350,7 @@ public Query termsQuery(List values, @Nullable QueryShardContext context) { return new ConstantScoreQuery(builder.build()); } - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) { + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) { throw new IllegalArgumentException("Field [" + name + "] of type [" + typeName() + "] does not support range queries"); } @@ -373,14 +380,16 @@ public Query nullValueQuery() { */ public FieldStats stats(IndexReader reader) throws IOException { int maxDoc = reader.maxDoc(); + FieldInfo fi = MultiFields.getMergedFieldInfos(reader).fieldInfo(name()); + if (fi == null) { + return null; + } Terms terms = MultiFields.getTerms(reader, name()); if (terms == null) { - return null; + return new FieldStats.Text(maxDoc, 0, -1, -1, isSearchable(), isAggregatable()); } FieldStats stats = new FieldStats.Text(maxDoc, terms.getDocCount(), - terms.getSumDocFreq(), terms.getSumTotalTermFreq(), - isSearchable(), isAggregatable(), - terms.getMin(), terms.getMax()); + terms.getSumDocFreq(), terms.getSumTotalTermFreq(), isSearchable(), isAggregatable(), terms.getMin(), terms.getMax()); return stats; } @@ -388,7 +397,7 @@ public FieldStats stats(IndexReader reader) throws IOException { * An enum used to describe the relation between the range of terms in a * shard when compared with a query range */ - public static enum Relation { + public enum Relation { WITHIN, INTERSECTS, DISJOINT; @@ -399,10 +408,10 @@ public static enum Relation { * {@link Relation#INTERSECTS}, which is always fine to return when there is * no way to check whether values are actually within bounds. 
*/ public Relation isFieldWithinQuery( - IndexReader reader, - Object from, Object to, - boolean includeLower, boolean includeUpper, - DateTimeZone timeZone, DateMathParser dateMathParser) throws IOException { + IndexReader reader, + Object from, Object to, + boolean includeLower, boolean includeUpper, + DateTimeZone timeZone, DateMathParser dateMathParser, QueryRewriteContext context) throws IOException { return Relation.INTERSECTS; } @@ -462,6 +471,21 @@ public static Term extractTerm(Query termQuery) { while (termQuery instanceof BoostQuery) { termQuery = ((BoostQuery) termQuery).getQuery(); } + if (termQuery instanceof AllTermQuery) { + return ((AllTermQuery) termQuery).getTerm(); + } else if (termQuery instanceof TypeFieldMapper.TypesQuery) { + assert ((TypeFieldMapper.TypesQuery) termQuery).getTerms().length == 1; + return new Term(TypeFieldMapper.NAME, ((TypeFieldMapper.TypesQuery) termQuery).getTerms()[0]); + } + if (termQuery instanceof TermInSetQuery) { + TermInSetQuery tisQuery = (TermInSetQuery) termQuery; + PrefixCodedTerms terms = tisQuery.getTermData(); + if (terms.size() == 1) { + TermIterator it = terms.iterator(); + BytesRef term = it.next(); + return new Term(it.field(), term); + } + } if (termQuery instanceof TermQuery == false) { throw new IllegalArgumentException("Cannot extract a term from a query of type " + termQuery.getClass() + ": " + termQuery); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/Mapper.java b/core/src/main/java/org/elasticsearch/index/mapper/Mapper.java index d1341c8f7d46d..b7082ba81c4f1 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/Mapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/Mapper.java @@ -21,16 +21,16 @@ import org.elasticsearch.Version; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.ToXContent; -import org.elasticsearch.index.analysis.AnalysisService; +import org.elasticsearch.index.analysis.IndexAnalyzers; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.similarity.SimilarityProvider; import java.util.Map; import java.util.Objects; import java.util.function.Function; +import java.util.function.Supplier; public abstract class Mapper implements ToXContent, Iterable { @@ -85,7 +85,7 @@ class ParserContext { private final String type; - private final AnalysisService analysisService; + private final IndexAnalyzers indexAnalyzers; private final Function similarityLookupService; @@ -95,29 +95,26 @@ class ParserContext { private final Version indexVersionCreated; - private final ParseFieldMatcher parseFieldMatcher; + private final Supplier queryShardContextSupplier; - private final QueryShardContext queryShardContext; - - public ParserContext(String type, AnalysisService analysisService, Function similarityLookupService, + public ParserContext(String type, IndexAnalyzers indexAnalyzers, Function similarityLookupService, MapperService mapperService, Function typeParsers, - Version indexVersionCreated, ParseFieldMatcher parseFieldMatcher, QueryShardContext queryShardContext) { + Version indexVersionCreated, Supplier queryShardContextSupplier) { this.type = type; - this.analysisService = analysisService; + this.indexAnalyzers = indexAnalyzers; this.similarityLookupService = similarityLookupService; this.mapperService = mapperService; this.typeParsers = typeParsers; this.indexVersionCreated = indexVersionCreated; - 
this.parseFieldMatcher = parseFieldMatcher; - this.queryShardContext = queryShardContext; + this.queryShardContextSupplier = queryShardContextSupplier; } public String type() { return type; } - public AnalysisService analysisService() { - return analysisService; + public IndexAnalyzers getIndexAnalyzers() { + return indexAnalyzers; } public SimilarityProvider getSimilarity(String name) { @@ -136,12 +133,8 @@ public Version indexVersionCreated() { return indexVersionCreated; } - public ParseFieldMatcher parseFieldMatcher() { - return parseFieldMatcher; - } - - public QueryShardContext queryShardContext() { - return queryShardContext; + public Supplier queryShardContextSupplier() { + return queryShardContextSupplier; } public boolean isWithinMultiField() { return false; } @@ -159,7 +152,8 @@ public ParserContext createMultiFieldContext(ParserContext in) { static class MultiFieldParserContext extends ParserContext { MultiFieldParserContext(ParserContext in) { - super(in.type(), in.analysisService, in.similarityLookupService(), in.mapperService(), in.typeParsers(), in.indexVersionCreated(), in.parseFieldMatcher(), in.queryShardContext()); + super(in.type(), in.indexAnalyzers, in.similarityLookupService(), in.mapperService(), in.typeParsers(), + in.indexVersionCreated(), in.queryShardContextSupplier()); } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java b/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java index 148fde7b64826..2b66d88a2d833 100755 --- a/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java @@ -21,22 +21,27 @@ import com.carrotsearch.hppc.ObjectHashSet; import com.carrotsearch.hppc.cursors.ObjectCursor; + import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.DelegatingAnalyzerWrapper; +import org.apache.lucene.index.Term; import org.elasticsearch.ElasticsearchGenerationException; import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.MappingMetaData; +import org.elasticsearch.common.Nullable; import org.elasticsearch.common.compress.CompressedXContent; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.AbstractIndexComponent; import org.elasticsearch.index.IndexSettings; -import org.elasticsearch.index.analysis.AnalysisService; +import org.elasticsearch.index.analysis.IndexAnalyzers; import org.elasticsearch.index.mapper.Mapper.BuilderContext; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.similarity.SimilarityService; @@ -44,12 +49,14 @@ import org.elasticsearch.indices.TypeMissingException; import org.elasticsearch.indices.mapper.MapperRegistry; +import java.io.Closeable; import java.io.IOException; import java.util.ArrayList; import java.util.Collection; import java.util.Collections; import java.util.HashMap; import java.util.HashSet; +import java.util.LinkedHashMap; import java.util.List; import java.util.Map; import java.util.Set; @@ -60,12 +67,8 @@ import static 
java.util.Collections.emptyMap; import static java.util.Collections.emptySet; import static java.util.Collections.unmodifiableMap; -import static org.elasticsearch.common.collect.MapBuilder.newMapBuilder; -/** - * - */ -public class MapperService extends AbstractIndexComponent { +public class MapperService extends AbstractIndexComponent implements Closeable { /** * The reason why a mapping is being merged. @@ -93,6 +96,8 @@ public enum MergeReason { public static final boolean INDEX_MAPPER_DYNAMIC_DEFAULT = true; public static final Setting INDEX_MAPPER_DYNAMIC_SETTING = Setting.boolSetting("index.mapper.dynamic", INDEX_MAPPER_DYNAMIC_DEFAULT, Property.Dynamic, Property.IndexScope); + public static final Setting INDEX_MAPPING_SINGLE_TYPE_SETTING = + Setting.boolSetting("index.mapping.single_type", false, Property.IndexScope, Property.Final); private static ObjectHashSet META_FIELDS = ObjectHashSet.from( "_uid", "_id", "_type", "_all", "_parent", "_routing", "_index", "_size", "_timestamp", "_ttl" @@ -100,7 +105,7 @@ public enum MergeReason { @Deprecated public static final String PERCOLATOR_LEGACY_TYPE_NAME = ".percolator"; - private final AnalysisService analysisService; + private final IndexAnalyzers indexAnalyzers; /** * Will create types automatically if they do not exists in the mapping definition yet @@ -112,8 +117,9 @@ public enum MergeReason { private volatile Map mappers = emptyMap(); private volatile FieldTypeLookup fieldTypes; - private volatile Map fullPathObjectMappers = new HashMap<>(); + private volatile Map fullPathObjectMappers = emptyMap(); private boolean hasNested = false; // updated dynamically to true when a nested object is added + private boolean allEnabled = false; // updated dynamically to true when _all is enabled private final DocumentMapperParser documentParser; @@ -127,16 +133,17 @@ public enum MergeReason { final MapperRegistry mapperRegistry; - public MapperService(IndexSettings indexSettings, AnalysisService analysisService, + public MapperService(IndexSettings indexSettings, IndexAnalyzers indexAnalyzers, NamedXContentRegistry xContentRegistry, SimilarityService similarityService, MapperRegistry mapperRegistry, Supplier queryShardContextSupplier) { super(indexSettings); - this.analysisService = analysisService; + this.indexAnalyzers = indexAnalyzers; this.fieldTypes = new FieldTypeLookup(); - this.documentParser = new DocumentMapperParser(indexSettings, this, analysisService, similarityService, mapperRegistry, queryShardContextSupplier); - this.indexAnalyzer = new MapperAnalyzerWrapper(analysisService.defaultIndexAnalyzer(), p -> p.indexAnalyzer()); - this.searchAnalyzer = new MapperAnalyzerWrapper(analysisService.defaultSearchAnalyzer(), p -> p.searchAnalyzer()); - this.searchQuoteAnalyzer = new MapperAnalyzerWrapper(analysisService.defaultSearchQuoteAnalyzer(), p -> p.searchQuoteAnalyzer()); + this.documentParser = new DocumentMapperParser(indexSettings, this, indexAnalyzers, xContentRegistry, similarityService, + mapperRegistry, queryShardContextSupplier); + this.indexAnalyzer = new MapperAnalyzerWrapper(indexAnalyzers.getDefaultIndexAnalyzer(), p -> p.indexAnalyzer()); + this.searchAnalyzer = new MapperAnalyzerWrapper(indexAnalyzers.getDefaultSearchAnalyzer(), p -> p.searchAnalyzer()); + this.searchQuoteAnalyzer = new MapperAnalyzerWrapper(indexAnalyzers.getDefaultSearchQuoteAnalyzer(), p -> p.searchQuoteAnalyzer()); this.mapperRegistry = mapperRegistry; this.dynamic = this.indexSettings.getValue(INDEX_MAPPER_DYNAMIC_SETTING); @@ -153,6 +160,13 @@ 
public boolean hasNested() { return this.hasNested; } + /** + * Returns true if the "_all" field is enabled on any type. + */ + public boolean allEnabled() { + return this.allEnabled; + } + /** * returns an immutable iterator over current document mappers. * @@ -171,167 +185,256 @@ public Iterable docMappers(final boolean includingDefaultMapping }; } - public AnalysisService analysisService() { - return this.analysisService; + public IndexAnalyzers getIndexAnalyzers() { + return this.indexAnalyzers; } public DocumentMapperParser documentMapperParser() { return this.documentParser; } - public static Map parseMapping(String mappingSource) throws Exception { - try (XContentParser parser = XContentFactory.xContent(mappingSource).createParser(mappingSource)) { + /** + * Parses the mappings (formatted as JSON) into a map + */ + public static Map parseMapping(NamedXContentRegistry xContentRegistry, String mappingSource) throws Exception { + try (XContentParser parser = XContentType.JSON.xContent().createParser(xContentRegistry, mappingSource)) { return parser.map(); } } + /** + * Update mapping by only merging the metadata that is different between received and stored entries + */ public boolean updateMapping(IndexMetaData indexMetaData) throws IOException { assert indexMetaData.getIndex().equals(index()) : "index mismatch: expected " + index() + " but was " + indexMetaData.getIndex(); // go over and add the relevant mappings (or update them) + final Set existingMappers = new HashSet<>(mappers.keySet()); + final Map updatedEntries; + try { + // only update entries if needed + updatedEntries = internalMerge(indexMetaData, MergeReason.MAPPING_RECOVERY, true, true); + } catch (Exception e) { + logger.warn((org.apache.logging.log4j.util.Supplier) () -> new ParameterizedMessage("[{}] failed to apply mappings", index()), e); + throw e; + } + boolean requireRefresh = false; - for (ObjectCursor cursor : indexMetaData.getMappings().values()) { - MappingMetaData mappingMd = cursor.value; - String mappingType = mappingMd.type(); - CompressedXContent mappingSource = mappingMd.source(); + + for (DocumentMapper documentMapper : updatedEntries.values()) { + String mappingType = documentMapper.type(); + CompressedXContent incomingMappingSource = indexMetaData.mapping(mappingType).source(); + + String op = existingMappers.contains(mappingType) ? "updated" : "added"; + if (logger.isDebugEnabled() && incomingMappingSource.compressed().length < 512) { + logger.debug("[{}] {} mapping [{}], source [{}]", index(), op, mappingType, incomingMappingSource.string()); + } else if (logger.isTraceEnabled()) { + logger.trace("[{}] {} mapping [{}], source [{}]", index(), op, mappingType, incomingMappingSource.string()); + } else { + logger.debug("[{}] {} mapping [{}] (source suppressed due to length, use TRACE level if needed)", index(), op, mappingType); + } + // refresh mapping can happen when the parsing/merging of the mapping from the metadata doesn't result in the same // mapping, in this case, we send to the master to refresh its own version of the mappings (to conform with the // merge version of it, which it does when refreshing the mappings), and warn log it. - try { - DocumentMapper existingMapper = documentMapper(mappingType); - - if (existingMapper == null || mappingSource.equals(existingMapper.mappingSource()) == false) { - String op = existingMapper == null ? 
"adding" : "updating"; - if (logger.isDebugEnabled() && mappingSource.compressed().length < 512) { - logger.debug("[{}] {} mapping [{}], source [{}]", index(), op, mappingType, mappingSource.string()); - } else if (logger.isTraceEnabled()) { - logger.trace("[{}] {} mapping [{}], source [{}]", index(), op, mappingType, mappingSource.string()); - } else { - logger.debug("[{}] {} mapping [{}] (source suppressed due to length, use TRACE level if needed)", index(), op, - mappingType); - } - merge(mappingType, mappingSource, MergeReason.MAPPING_RECOVERY, true); - if (!documentMapper(mappingType).mappingSource().equals(mappingSource)) { - logger.debug("[{}] parsed mapping [{}], and got different sources\noriginal:\n{}\nparsed:\n{}", index(), - mappingType, mappingSource, documentMapper(mappingType).mappingSource()); - requireRefresh = true; - } - } - } catch (Exception e) { - logger.warn( - (org.apache.logging.log4j.util.Supplier) - () -> new ParameterizedMessage("[{}] failed to add mapping [{}], source [{}]", index(), mappingType, mappingSource), - e); - throw e; + if (documentMapper(mappingType).mappingSource().equals(incomingMappingSource) == false) { + logger.debug("[{}] parsed mapping [{}], and got different sources\noriginal:\n{}\nparsed:\n{}", index(), mappingType, + incomingMappingSource, documentMapper(mappingType).mappingSource()); + + requireRefresh = true; } } + return requireRefresh; } - //TODO: make this atomic - public void merge(Map> mappings, boolean updateAllTypes) throws MapperParsingException { - // first, add the default mapping - if (mappings.containsKey(DEFAULT_MAPPING)) { - try { - this.merge(DEFAULT_MAPPING, new CompressedXContent(XContentFactory.jsonBuilder().map(mappings.get(DEFAULT_MAPPING)).string()), MergeReason.MAPPING_UPDATE, updateAllTypes); - } catch (Exception e) { - throw new MapperParsingException("Failed to parse mapping [{}]: {}", e, DEFAULT_MAPPING, e.getMessage()); - } - } + public void merge(Map> mappings, MergeReason reason, boolean updateAllTypes) { + Map mappingSourcesCompressed = new LinkedHashMap<>(mappings.size()); for (Map.Entry> entry : mappings.entrySet()) { - if (entry.getKey().equals(DEFAULT_MAPPING)) { - continue; - } try { - // apply the default here, its the first time we parse it - this.merge(entry.getKey(), new CompressedXContent(XContentFactory.jsonBuilder().map(entry.getValue()).string()), MergeReason.MAPPING_UPDATE, updateAllTypes); + mappingSourcesCompressed.put(entry.getKey(), new CompressedXContent(XContentFactory.jsonBuilder().map(entry.getValue()).string())); } catch (Exception e) { throw new MapperParsingException("Failed to parse mapping [{}]: {}", e, entry.getKey(), e.getMessage()); } } + + internalMerge(mappingSourcesCompressed, reason, updateAllTypes); + } + + public void merge(IndexMetaData indexMetaData, MergeReason reason, boolean updateAllTypes) { + internalMerge(indexMetaData, reason, updateAllTypes, false); } public DocumentMapper merge(String type, CompressedXContent mappingSource, MergeReason reason, boolean updateAllTypes) { - if (DEFAULT_MAPPING.equals(type)) { + return internalMerge(Collections.singletonMap(type, mappingSource), reason, updateAllTypes).get(type); + } + + private synchronized Map internalMerge(IndexMetaData indexMetaData, MergeReason reason, boolean updateAllTypes, + boolean onlyUpdateIfNeeded) { + Map map = new LinkedHashMap<>(); + for (ObjectCursor cursor : indexMetaData.getMappings().values()) { + MappingMetaData mappingMetaData = cursor.value; + if (onlyUpdateIfNeeded) { + DocumentMapper 
existingMapper = documentMapper(mappingMetaData.type()); + if (existingMapper == null || mappingMetaData.source().equals(existingMapper.mappingSource()) == false) { + map.put(mappingMetaData.type(), mappingMetaData.source()); + } + } else { + map.put(mappingMetaData.type(), mappingMetaData.source()); + } + } + return internalMerge(map, reason, updateAllTypes); + } + + private synchronized Map internalMerge(Map mappings, MergeReason reason, boolean updateAllTypes) { + DocumentMapper defaultMapper = null; + String defaultMappingSource = null; + + if (mappings.containsKey(DEFAULT_MAPPING)) { // verify we can parse it // NOTE: never apply the default here - DocumentMapper mapper = documentParser.parse(type, mappingSource); - // still add it as a document mapper so we have it registered and, for example, persisted back into - // the cluster meta data if needed, or checked for existence - synchronized (this) { - mappers = newMapBuilder(mappers).put(type, mapper).map(); + try { + defaultMapper = documentParser.parse(DEFAULT_MAPPING, mappings.get(DEFAULT_MAPPING)); + } catch (Exception e) { + throw new MapperParsingException("Failed to parse mapping [{}]: {}", e, DEFAULT_MAPPING, e.getMessage()); } try { - defaultMappingSource = mappingSource.string(); + defaultMappingSource = mappings.get(DEFAULT_MAPPING).string(); } catch (IOException e) { throw new ElasticsearchGenerationException("failed to un-compress", e); } - return mapper; + } + + final String defaultMappingSourceOrLastStored; + if (defaultMappingSource != null) { + defaultMappingSourceOrLastStored = defaultMappingSource; } else { - synchronized (this) { - final boolean applyDefault = - // the default was already applied if we are recovering - reason != MergeReason.MAPPING_RECOVERY - // only apply the default mapping if we don't have the type yet - && mappers.containsKey(type) == false; - DocumentMapper mergeWith = parse(type, mappingSource, applyDefault); - return merge(mergeWith, reason, updateAllTypes); + defaultMappingSourceOrLastStored = this.defaultMappingSource; + } + + List documentMappers = new ArrayList<>(); + for (Map.Entry entry : mappings.entrySet()) { + String type = entry.getKey(); + if (type.equals(DEFAULT_MAPPING)) { + continue; + } + + final boolean applyDefault = + // the default was already applied if we are recovering + reason != MergeReason.MAPPING_RECOVERY + // only apply the default mapping if we don't have the type yet + && mappers.containsKey(type) == false; + + try { + DocumentMapper documentMapper = + documentParser.parse(type, entry.getValue(), applyDefault ? 
defaultMappingSourceOrLastStored : null); + documentMappers.add(documentMapper); + } catch (Exception e) { + throw new MapperParsingException("Failed to parse mapping [{}]: {}", e, entry.getKey(), e.getMessage()); } } + + return internalMerge(defaultMapper, defaultMappingSource, documentMappers, reason, updateAllTypes); } - private synchronized DocumentMapper merge(DocumentMapper mapper, MergeReason reason, boolean updateAllTypes) { - if (mapper.type().length() == 0) { - throw new InvalidTypeNameException("mapping type name is empty"); - } - if (mapper.type().length() > 255) { - throw new InvalidTypeNameException("mapping type name [" + mapper.type() + "] is too long; limit is length 255 but was [" + mapper.type().length() + "]"); - } - if (mapper.type().charAt(0) == '_') { - throw new InvalidTypeNameException("mapping type name [" + mapper.type() + "] can't start with '_'"); - } - if (mapper.type().contains("#")) { - throw new InvalidTypeNameException("mapping type name [" + mapper.type() + "] should not include '#' in it"); - } - if (mapper.type().contains(",")) { - throw new InvalidTypeNameException("mapping type name [" + mapper.type() + "] should not include ',' in it"); - } - if (mapper.type().equals(mapper.parentFieldMapper().type())) { - throw new IllegalArgumentException("The [_parent.type] option can't point to the same type"); - } - if (typeNameStartsWithIllegalDot(mapper)) { - throw new IllegalArgumentException("mapping type name [" + mapper.type() + "] must not start with a '.'"); - } + private synchronized Map internalMerge(@Nullable DocumentMapper defaultMapper, @Nullable String defaultMappingSource, + List documentMappers, MergeReason reason, boolean updateAllTypes) { + boolean hasNested = this.hasNested; + boolean allEnabled = this.allEnabled; + Map fullPathObjectMappers = this.fullPathObjectMappers; + FieldTypeLookup fieldTypes = this.fieldTypes; + Set parentTypes = this.parentTypes; + Map mappers = new HashMap<>(this.mappers); - // 1. compute the merged DocumentMapper - DocumentMapper oldMapper = mappers.get(mapper.type()); - DocumentMapper newMapper; - if (oldMapper != null) { - newMapper = oldMapper.merge(mapper.mapping(), updateAllTypes); - } else { - newMapper = mapper; + Map results = new LinkedHashMap<>(documentMappers.size() + 1); + + if (defaultMapper != null) { + assert defaultMapper.type().equals(DEFAULT_MAPPING); + mappers.put(DEFAULT_MAPPING, defaultMapper); + results.put(DEFAULT_MAPPING, defaultMapper); } - // 2. 
check basic sanity of the new mapping - List objectMappers = new ArrayList<>(); - List fieldMappers = new ArrayList<>(); - Collections.addAll(fieldMappers, newMapper.mapping().metadataMappers); - MapperUtils.collect(newMapper.mapping().root(), objectMappers, fieldMappers); - checkFieldUniqueness(newMapper.type(), objectMappers, fieldMappers); - checkObjectsCompatibility(objectMappers, updateAllTypes); + for (DocumentMapper mapper : documentMappers) { + // check naming + if (mapper.type().length() == 0) { + throw new InvalidTypeNameException("mapping type name is empty"); + } + if (mapper.type().length() > 255) { + throw new InvalidTypeNameException("mapping type name [" + mapper.type() + "] is too long; limit is length 255 but was [" + mapper.type().length() + "]"); + } + if (mapper.type().charAt(0) == '_') { + throw new InvalidTypeNameException("mapping type name [" + mapper.type() + "] can't start with '_'"); + } + if (mapper.type().contains("#")) { + throw new InvalidTypeNameException("mapping type name [" + mapper.type() + "] should not include '#' in it"); + } + if (mapper.type().contains(",")) { + throw new InvalidTypeNameException("mapping type name [" + mapper.type() + "] should not include ',' in it"); + } + if (mapper.type().equals(mapper.parentFieldMapper().type())) { + throw new IllegalArgumentException("The [_parent.type] option can't point to the same type"); + } + if (typeNameStartsWithIllegalDot(mapper)) { + throw new IllegalArgumentException("mapping type name [" + mapper.type() + "] must not start with a '.'"); + } + + // compute the merged DocumentMapper + DocumentMapper oldMapper = mappers.get(mapper.type()); + DocumentMapper newMapper; + if (oldMapper != null) { + newMapper = oldMapper.merge(mapper.mapping(), updateAllTypes); + } else { + newMapper = mapper; + } - // 3. 
update lookup data-structures - // this will in particular make sure that the merged fields are compatible with other types - FieldTypeLookup fieldTypes = this.fieldTypes.copyAndAddAll(newMapper.type(), fieldMappers, updateAllTypes); + // check basic sanity of the new mapping + List objectMappers = new ArrayList<>(); + List fieldMappers = new ArrayList<>(); + Collections.addAll(fieldMappers, newMapper.mapping().metadataMappers); + MapperUtils.collect(newMapper.mapping().root(), objectMappers, fieldMappers); + checkFieldUniqueness(newMapper.type(), objectMappers, fieldMappers, fullPathObjectMappers, fieldTypes); + checkObjectsCompatibility(objectMappers, updateAllTypes, fullPathObjectMappers); + checkPartitionedIndexConstraints(newMapper); + + // update lookup data-structures + // this will in particular make sure that the merged fields are compatible with other types + fieldTypes = fieldTypes.copyAndAddAll(newMapper.type(), fieldMappers, updateAllTypes); + + for (ObjectMapper objectMapper : objectMappers) { + if (fullPathObjectMappers == this.fullPathObjectMappers) { + // first time through the loops + fullPathObjectMappers = new HashMap<>(this.fullPathObjectMappers); + } + fullPathObjectMappers.put(objectMapper.fullPath(), objectMapper); - boolean hasNested = this.hasNested; - Map fullPathObjectMappers = new HashMap<>(this.fullPathObjectMappers); - for (ObjectMapper objectMapper : objectMappers) { - fullPathObjectMappers.put(objectMapper.fullPath(), objectMapper); - if (objectMapper.nested().isNested()) { - hasNested = true; + if (objectMapper.nested().isNested()) { + hasNested = true; + } + } + + if (reason == MergeReason.MAPPING_UPDATE) { + // this check will only be performed on the master node when there is + // a call to the update mapping API. For all other cases like + // the master node restoring mappings from disk or data nodes + // deserializing cluster state that was sent by the master node, + // this check will be skipped. + checkTotalFieldsLimit(objectMappers.size() + fieldMappers.size()); } + + if (oldMapper == null && newMapper.parentFieldMapper().active()) { + if (parentTypes == this.parentTypes) { + // first time through the loop + parentTypes = new HashSet<>(this.parentTypes); + } + parentTypes.add(mapper.parentFieldMapper().type()); + } + + // this is only correct because types cannot be removed and we do not + // allow to disable an existing _all field + allEnabled |= mapper.allFieldMapper().enabled(); + + results.put(newMapper.type(), newMapper); + mappers.put(newMapper.type(), newMapper); } - fullPathObjectMappers = Collections.unmodifiableMap(fullPathObjectMappers); if (reason == MergeReason.MAPPING_UPDATE) { // this check will only be performed on the master node when there is @@ -340,42 +443,62 @@ private synchronized DocumentMapper merge(DocumentMapper mapper, MergeReason rea // deserializing cluster state that was sent by the master node, // this check will be skipped. 
checkNestedFieldsLimit(fullPathObjectMappers); - checkTotalFieldsLimit(objectMappers.size() + fieldMappers.size()); checkDepthLimit(fullPathObjectMappers.keySet()); } - Set parentTypes = this.parentTypes; - if (oldMapper == null && newMapper.parentFieldMapper().active()) { - parentTypes = new HashSet<>(parentTypes.size() + 1); - parentTypes.addAll(this.parentTypes); - parentTypes.add(mapper.parentFieldMapper().type()); - parentTypes = Collections.unmodifiableSet(parentTypes); - } - - Map mappers = new HashMap<>(this.mappers); - mappers.put(newMapper.type(), newMapper); for (Map.Entry entry : mappers.entrySet()) { if (entry.getKey().equals(DEFAULT_MAPPING)) { continue; } - DocumentMapper m = entry.getValue(); + DocumentMapper documentMapper = entry.getValue(); // apply changes to the field types back - m = m.updateFieldType(fieldTypes.fullNameToFieldType); - entry.setValue(m); + DocumentMapper updatedDocumentMapper = documentMapper.updateFieldType(fieldTypes.fullNameToFieldType); + if (updatedDocumentMapper != documentMapper) { + // update both mappers and result + entry.setValue(updatedDocumentMapper); + if (results.containsKey(updatedDocumentMapper.type())) { + results.put(updatedDocumentMapper.type(), updatedDocumentMapper); + } + } } + + if (indexSettings.isSingleType()) { + Set actualTypes = new HashSet<>(mappers.keySet()); + actualTypes.remove(DEFAULT_MAPPING); + if (actualTypes.size() > 1) { + throw new IllegalArgumentException( + "Rejecting mapping update to [" + index().getName() + "] as the final mapping would have more than 1 type: " + actualTypes); + } + } + + // make structures immutable mappers = Collections.unmodifiableMap(mappers); + results = Collections.unmodifiableMap(results); + + // only need to immutably rewrap these if the previous reference was changed. + // if not then they are already implicitly immutable. + if (fullPathObjectMappers != this.fullPathObjectMappers) { + fullPathObjectMappers = Collections.unmodifiableMap(fullPathObjectMappers); + } + if (parentTypes != this.parentTypes) { + parentTypes = Collections.unmodifiableSet(parentTypes); + } - // 4. 
commit the change + // commit the change + if (defaultMappingSource != null) { + this.defaultMappingSource = defaultMappingSource; + } this.mappers = mappers; this.fieldTypes = fieldTypes; this.hasNested = hasNested; this.fullPathObjectMappers = fullPathObjectMappers; this.parentTypes = parentTypes; + this.allEnabled = allEnabled; - assert assertSerialization(newMapper); assert assertMappersShareSameFieldType(); + assert results.values().stream().allMatch(this::assertSerialization); - return newMapper; + return results; } private boolean assertMappersShareSameFieldType() { @@ -412,8 +535,8 @@ private boolean assertSerialization(DocumentMapper mapper) { return true; } - private void checkFieldUniqueness(String type, Collection objectMappers, Collection fieldMappers) { - assert Thread.holdsLock(this); + private static void checkFieldUniqueness(String type, Collection objectMappers, Collection fieldMappers, + Map fullPathObjectMappers, FieldTypeLookup fieldTypes) { // first check within mapping final Set objectFullNames = new HashSet<>(); @@ -450,9 +573,8 @@ private void checkFieldUniqueness(String type, Collection objectMa } } - private void checkObjectsCompatibility(Collection objectMappers, boolean updateAllTypes) { - assert Thread.holdsLock(this); - + private static void checkObjectsCompatibility(Collection objectMappers, boolean updateAllTypes, + Map fullPathObjectMappers) { for (ObjectMapper newObjectMapper : objectMappers) { ObjectMapper existingObjectMapper = fullPathObjectMappers.get(newObjectMapper.fullPath()); if (existingObjectMapper != null) { @@ -504,6 +626,20 @@ private void checkDepthLimit(String objectPath, long maxDepth) { } } + private void checkPartitionedIndexConstraints(DocumentMapper newMapper) { + if (indexSettings.getIndexMetaData().isRoutingPartitionedIndex()) { + if (newMapper.parentFieldMapper().active()) { + throw new IllegalArgumentException("mapping type name [" + newMapper.type() + "] cannot have a " + + "_parent field for the partitioned index [" + indexSettings.getIndex().getName() + "]"); + } + + if (!newMapper.routingFieldMapper().required()) { + throw new IllegalArgumentException("mapping type [" + newMapper.type() + "] must have routing " + + "required for partitioned index [" + indexSettings.getIndex().getName() + "]"); + } + } + } + public DocumentMapper parse(String mappingType, CompressedXContent mappingSource, boolean applyDefault) throws MapperParsingException { return documentParser.parse(mappingType, mappingSource, applyDefault ? defaultMappingSource : null); } @@ -588,14 +724,13 @@ public MappedFieldType unmappedFieldType(String type) { if (typeParser == null) { throw new IllegalArgumentException("No mapper found for type [" + type + "]"); } - final Mapper.Builder builder = typeParser.parse("__anonymous_" + type, emptyMap(), parserContext); + final Mapper.Builder builder = typeParser.parse("__anonymous_" + type, new HashMap<>(), parserContext); final BuilderContext builderContext = new BuilderContext(indexSettings.getSettings(), new ContentPath(1)); fieldType = ((FieldMapper)builder.build(builderContext)).fieldType(); // There is no need to synchronize writes here. 
In the case of concurrent access, we could just // compute some mappers several times, which is not a big deal - Map newUnmappedFieldTypes = new HashMap<>(); - newUnmappedFieldTypes.putAll(unmappedFieldTypes); + Map newUnmappedFieldTypes = new HashMap<>(unmappedFieldTypes); newUnmappedFieldTypes.put(type, fieldType); unmappedFieldTypes = unmodifiableMap(newUnmappedFieldTypes); } @@ -618,6 +753,11 @@ public Set getParentTypes() { return parentTypes; } + @Override + public void close() throws IOException { + indexAnalyzers.close(); + } + /** * @return Whether a field is a metadata field. */ @@ -653,4 +793,16 @@ protected Analyzer getWrappedAnalyzer(String fieldName) { return defaultAnalyzer; } } + + /** Return a term that uniquely identifies the document, or {@code null} if the type is not allowed. */ + public Term createUidTerm(String type, String id) { + if (hasMapping(type) == false) { + return null; + } + if (indexSettings.isSingleType()) { + return new Term(IdFieldMapper.NAME, id); + } else { + return new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(type, id)); + } + } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/Mapping.java b/core/src/main/java/org/elasticsearch/index/mapper/Mapping.java index 0b92dbe45173a..9cb3ea7a57e48 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/Mapping.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/Mapping.java @@ -104,12 +104,22 @@ public Mapping merge(Mapping mergeWith, boolean updateAllTypes) { * Recursively update sub field types. */ public Mapping updateFieldType(Map fullNameToFieldType) { - final MetadataFieldMapper[] updatedMeta = Arrays.copyOf(metadataMappers, metadataMappers.length); - for (int i = 0; i < updatedMeta.length; ++i) { - updatedMeta[i] = (MetadataFieldMapper) updatedMeta[i].updateFieldType(fullNameToFieldType); + MetadataFieldMapper[] updatedMeta = null; + for (int i = 0; i < metadataMappers.length; ++i) { + MetadataFieldMapper currentFieldMapper = metadataMappers[i]; + MetadataFieldMapper updatedFieldMapper = (MetadataFieldMapper) currentFieldMapper.updateFieldType(fullNameToFieldType); + if (updatedFieldMapper != currentFieldMapper) { + if (updatedMeta == null) { + updatedMeta = Arrays.copyOf(metadataMappers, metadataMappers.length); + } + updatedMeta[i] = updatedFieldMapper; + } } RootObjectMapper updatedRoot = root.updateFieldType(fullNameToFieldType); - return new Mapping(indexCreated, updatedRoot, updatedMeta, meta); + if (updatedMeta == null && updatedRoot == root) { + return this; + } + return new Mapping(indexCreated, updatedRoot, updatedMeta == null ? metadataMappers : updatedMeta, meta); } @Override diff --git a/core/src/main/java/org/elasticsearch/index/mapper/MetadataFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/MetadataFieldMapper.java index 07a4b3b9a5142..ec84631e04161 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/MetadataFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/MetadataFieldMapper.java @@ -39,14 +39,13 @@ public interface TypeParser extends Mapper.TypeParser { * Get the default {@link MetadataFieldMapper} to use, if nothing had to be parsed. 
* @param fieldType null if this is the first root mapper on this index, the existing * fieldType for this index otherwise - * @param indexSettings the index-level settings * @param fieldType the existing field type for this meta mapper on the current index * or null if this is the first type being introduced - * @param typeName the name of the type that this mapper will be used on + * @param parserContext context that may be useful to build the field like analyzers */ // TODO: remove the fieldType parameter which is only used for bw compat with pre-2.0 // since settings could be modified - MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName); + MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext parserContext); } public abstract static class Builder extends FieldMapper.Builder { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/NumberFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/NumberFieldMapper.java index 886b93fcf0e3a..fc17508fcd24d 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/NumberFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/NumberFieldMapper.java @@ -27,10 +27,13 @@ import org.apache.lucene.document.LongPoint; import org.apache.lucene.document.SortedNumericDocValuesField; import org.apache.lucene.document.StoredField; +import org.apache.lucene.index.FieldInfo; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexReader; -import org.apache.lucene.index.XPointValues; +import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.PointValues; import org.apache.lucene.search.BoostQuery; +import org.apache.lucene.search.IndexOrDocValuesQuery; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; @@ -38,6 +41,7 @@ import org.elasticsearch.Version; import org.elasticsearch.action.fieldstats.FieldStats; import org.elasticsearch.common.Explicit; +import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; @@ -54,6 +58,7 @@ import java.io.IOException; import java.util.ArrayList; +import java.util.Arrays; import java.util.Iterator; import java.util.List; import java.util.Map; @@ -158,13 +163,13 @@ public Mapper.Builder parse(String name, Map node, if (propNode == null) { throw new MapperParsingException("Property [null_value] cannot be null."); } - builder.nullValue(type.parse(propNode)); + builder.nullValue(type.parse(propNode, false)); iterator.remove(); } else if (propName.equals("ignore_malformed")) { - builder.ignoreMalformed(TypeParsers.nodeBooleanValue("ignore_malformed", propNode, parserContext)); + builder.ignoreMalformed(TypeParsers.nodeBooleanValue(name,"ignore_malformed", propNode)); iterator.remove(); } else if (propName.equals("coerce")) { - builder.coerce(TypeParsers.nodeBooleanValue("coerce", propNode, parserContext)); + builder.coerce(TypeParsers.nodeBooleanValue(name, "coerce", propNode)); iterator.remove(); } } @@ -175,8 +180,8 @@ public Mapper.Builder parse(String name, Map node, public enum NumberType { HALF_FLOAT("half_float", NumericType.HALF_FLOAT) { @Override - Float parse(Object value) { - return (Float) FLOAT.parse(value); + Float parse(Object value, boolean coerce) { + return (Float) FLOAT.parse(value, false); } @Override @@ -186,7 +191,7 @@ Float 
parse(XContentParser parser, boolean coerce) throws IOException { @Override Query termQuery(String field, Object value) { - float v = parse(value); + float v = parse(value, false); return HalfFloatPoint.newExactQuery(field, v); } @@ -194,31 +199,39 @@ Query termQuery(String field, Object value) { Query termsQuery(String field, List values) { float[] v = new float[values.size()]; for (int i = 0; i < values.size(); ++i) { - v[i] = parse(values.get(i)); + v[i] = parse(values.get(i), false); } return HalfFloatPoint.newSetQuery(field, v); } @Override Query rangeQuery(String field, Object lowerTerm, Object upperTerm, - boolean includeLower, boolean includeUpper) { + boolean includeLower, boolean includeUpper, + boolean hasDocValues) { float l = Float.NEGATIVE_INFINITY; float u = Float.POSITIVE_INFINITY; if (lowerTerm != null) { - l = parse(lowerTerm); + l = parse(lowerTerm, false); if (includeLower) { - l = Math.nextDown(l); + l = HalfFloatPoint.nextDown(l); } l = HalfFloatPoint.nextUp(l); } if (upperTerm != null) { - u = parse(upperTerm); + u = parse(upperTerm, false); if (includeUpper) { - u = Math.nextUp(u); + u = HalfFloatPoint.nextUp(u); } u = HalfFloatPoint.nextDown(u); } - return HalfFloatPoint.newRangeQuery(field, l, u); + Query query = HalfFloatPoint.newRangeQuery(field, l, u); + if (hasDocValues) { + Query dvQuery = SortedNumericDocValuesField.newRangeQuery(field, + HalfFloatPoint.halfFloatToSortableShort(l), + HalfFloatPoint.halfFloatToSortableShort(u)); + query = new IndexOrDocValuesQuery(query, dvQuery); + } + return query; } @Override @@ -241,21 +254,25 @@ public List createFields(String name, Number value, @Override FieldStats.Double stats(IndexReader reader, String fieldName, boolean isSearchable, boolean isAggregatable) throws IOException { - long size = XPointValues.size(reader, fieldName); - if (size == 0) { + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(fieldName); + if (fi == null) { return null; } - int docCount = XPointValues.getDocCount(reader, fieldName); - byte[] min = XPointValues.getMinPackedValue(reader, fieldName); - byte[] max = XPointValues.getMaxPackedValue(reader, fieldName); - return new FieldStats.Double(reader.maxDoc(),docCount, -1L, size, + long size = PointValues.size(reader, fieldName); + if (size == 0) { + return new FieldStats.Double(reader.maxDoc(), 0, -1, -1, isSearchable, isAggregatable); + } + int docCount = PointValues.getDocCount(reader, fieldName); + byte[] min = PointValues.getMinPackedValue(reader, fieldName); + byte[] max = PointValues.getMaxPackedValue(reader, fieldName); + return new FieldStats.Double(reader.maxDoc(), docCount, -1L, size, isSearchable, isAggregatable, HalfFloatPoint.decodeDimension(min, 0), HalfFloatPoint.decodeDimension(max, 0)); } }, FLOAT("float", NumericType.FLOAT) { @Override - Float parse(Object value) { + Float parse(Object value, boolean coerce) { if (value instanceof Number) { return ((Number) value).floatValue(); } @@ -272,7 +289,7 @@ Float parse(XContentParser parser, boolean coerce) throws IOException { @Override Query termQuery(String field, Object value) { - float v = parse(value); + float v = parse(value, false); return FloatPoint.newExactQuery(field, v); } @@ -280,29 +297,37 @@ Query termQuery(String field, Object value) { Query termsQuery(String field, List values) { float[] v = new float[values.size()]; for (int i = 0; i < values.size(); ++i) { - v[i] = parse(values.get(i)); + v[i] = parse(values.get(i), false); } return FloatPoint.newSetQuery(field, v); } 
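The rangeQuery implementations in the hunks that follow all apply the same pattern: build the points-based range query and, when the field also carries doc values, pair it with an equivalent doc-values range query through Lucene's IndexOrDocValuesQuery so that the cheaper of the two executions can be chosen per segment. A minimal, self-contained sketch of that pattern for a float field is below; it is not part of the patch, the field name and bounds are illustrative, and the calls match the Lucene 6.x API this patch uses.

// Illustrative sketch only, not part of the patch.
import org.apache.lucene.document.FloatPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.search.IndexOrDocValuesQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.NumericUtils;

class FloatRangeQuerySketch {
    /** Builds [lower, upper] as a point query, optionally backed by a doc-values query. */
    static Query rangeQuery(String field, float lower, float upper, boolean hasDocValues) {
        // Points-based query: efficient when the range is the main driver of the search.
        Query pointQuery = FloatPoint.newRangeQuery(field, lower, upper);
        if (hasDocValues == false) {
            return pointQuery;
        }
        // Doc values store floats as sortable ints, so the bounds are encoded the same way.
        Query dvQuery = SortedNumericDocValuesField.newRangeQuery(field,
                NumericUtils.floatToSortableInt(lower),
                NumericUtils.floatToSortableInt(upper));
        // Lucene decides per segment whether the point tree or doc values are cheaper to use.
        return new IndexOrDocValuesQuery(pointQuery, dvQuery);
    }
}

For example, rangeQuery("price", 1.5f, 9.99f, true) matches exactly the same documents as the plain point query, but can fall back to doc values when only a few candidate documents need to be verified.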
@Override Query rangeQuery(String field, Object lowerTerm, Object upperTerm, - boolean includeLower, boolean includeUpper) { + boolean includeLower, boolean includeUpper, + boolean hasDocValues) { float l = Float.NEGATIVE_INFINITY; float u = Float.POSITIVE_INFINITY; if (lowerTerm != null) { - l = parse(lowerTerm); + l = parse(lowerTerm, false); if (includeLower == false) { - l = Math.nextUp(l); + l = FloatPoint.nextUp(l); } } if (upperTerm != null) { - u = parse(upperTerm); + u = parse(upperTerm, false); if (includeUpper == false) { - u = Math.nextDown(u); + u = FloatPoint.nextDown(u); } } - return FloatPoint.newRangeQuery(field, l, u); + Query query = FloatPoint.newRangeQuery(field, l, u); + if (hasDocValues) { + Query dvQuery = SortedNumericDocValuesField.newRangeQuery(field, + NumericUtils.floatToSortableInt(l), + NumericUtils.floatToSortableInt(u)); + query = new IndexOrDocValuesQuery(query, dvQuery); + } + return query; } @Override @@ -325,13 +350,17 @@ public List createFields(String name, Number value, @Override FieldStats.Double stats(IndexReader reader, String fieldName, boolean isSearchable, boolean isAggregatable) throws IOException { - long size = XPointValues.size(reader, fieldName); - if (size == 0) { + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(fieldName); + if (fi == null) { return null; } - int docCount = XPointValues.getDocCount(reader, fieldName); - byte[] min = XPointValues.getMinPackedValue(reader, fieldName); - byte[] max = XPointValues.getMaxPackedValue(reader, fieldName); + long size = PointValues.size(reader, fieldName); + if (size == 0) { + return new FieldStats.Double(reader.maxDoc(), 0, -1, -1, isSearchable, isAggregatable); + } + int docCount = PointValues.getDocCount(reader, fieldName); + byte[] min = PointValues.getMinPackedValue(reader, fieldName); + byte[] max = PointValues.getMaxPackedValue(reader, fieldName); return new FieldStats.Double(reader.maxDoc(),docCount, -1L, size, isSearchable, isAggregatable, FloatPoint.decodeDimension(min, 0), FloatPoint.decodeDimension(max, 0)); @@ -339,14 +368,8 @@ FieldStats.Double stats(IndexReader reader, String fieldName, }, DOUBLE("double", NumericType.DOUBLE) { @Override - Double parse(Object value) { - if (value instanceof Number) { - return ((Number) value).doubleValue(); - } - if (value instanceof BytesRef) { - value = ((BytesRef) value).utf8ToString(); - } - return Double.parseDouble(value.toString()); + Double parse(Object value, boolean coerce) { + return objectToDouble(value); } @Override @@ -356,7 +379,7 @@ Double parse(XContentParser parser, boolean coerce) throws IOException { @Override Query termQuery(String field, Object value) { - double v = parse(value); + double v = parse(value, false); return DoublePoint.newExactQuery(field, v); } @@ -364,29 +387,37 @@ Query termQuery(String field, Object value) { Query termsQuery(String field, List values) { double[] v = new double[values.size()]; for (int i = 0; i < values.size(); ++i) { - v[i] = parse(values.get(i)); + v[i] = parse(values.get(i), false); } return DoublePoint.newSetQuery(field, v); } @Override Query rangeQuery(String field, Object lowerTerm, Object upperTerm, - boolean includeLower, boolean includeUpper) { + boolean includeLower, boolean includeUpper, + boolean hasDocValues) { double l = Double.NEGATIVE_INFINITY; double u = Double.POSITIVE_INFINITY; if (lowerTerm != null) { - l = parse(lowerTerm); + l = parse(lowerTerm, false); if (includeLower == false) { - l = Math.nextUp(l); + l = DoublePoint.nextUp(l); 
} } if (upperTerm != null) { - u = parse(upperTerm); + u = parse(upperTerm, false); if (includeUpper == false) { - u = Math.nextDown(u); + u = DoublePoint.nextDown(u); } } - return DoublePoint.newRangeQuery(field, l, u); + Query query = DoublePoint.newRangeQuery(field, l, u); + if (hasDocValues) { + Query dvQuery = SortedNumericDocValuesField.newRangeQuery(field, + NumericUtils.doubleToSortableLong(l), + NumericUtils.doubleToSortableLong(u)); + query = new IndexOrDocValuesQuery(query, dvQuery); + } + return query; } @Override @@ -409,13 +440,17 @@ public List createFields(String name, Number value, @Override FieldStats.Double stats(IndexReader reader, String fieldName, boolean isSearchable, boolean isAggregatable) throws IOException { - long size = XPointValues.size(reader, fieldName); - if (size == 0) { + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(fieldName); + if (fi == null) { return null; } - int docCount = XPointValues.getDocCount(reader, fieldName); - byte[] min = XPointValues.getMinPackedValue(reader, fieldName); - byte[] max = XPointValues.getMaxPackedValue(reader, fieldName); + long size = PointValues.size(reader, fieldName); + if (size == 0) { + return new FieldStats.Double(reader.maxDoc(),0, -1, -1, isSearchable, isAggregatable); + } + int docCount = PointValues.getDocCount(reader, fieldName); + byte[] min = PointValues.getMinPackedValue(reader, fieldName); + byte[] max = PointValues.getMaxPackedValue(reader, fieldName); return new FieldStats.Double(reader.maxDoc(),docCount, -1L, size, isSearchable, isAggregatable, DoublePoint.decodeDimension(min, 0), DoublePoint.decodeDimension(max, 0)); @@ -423,21 +458,21 @@ FieldStats.Double stats(IndexReader reader, String fieldName, }, BYTE("byte", NumericType.BYTE) { @Override - Byte parse(Object value) { + Byte parse(Object value, boolean coerce) { + double doubleValue = objectToDouble(value); + + if (doubleValue < Byte.MIN_VALUE || doubleValue > Byte.MAX_VALUE) { + throw new IllegalArgumentException("Value [" + value + "] is out of range for a byte"); + } + if (!coerce && doubleValue % 1 != 0) { + throw new IllegalArgumentException("Value [" + value + "] has a decimal part"); + } + if (value instanceof Number) { - double doubleValue = ((Number) value).doubleValue(); - if (doubleValue < Byte.MIN_VALUE || doubleValue > Byte.MAX_VALUE) { - throw new IllegalArgumentException("Value [" + value + "] is out of range for a byte"); - } - if (doubleValue % 1 != 0) { - throw new IllegalArgumentException("Value [" + value + "] has a decimal part"); - } return ((Number) value).byteValue(); } - if (value instanceof BytesRef) { - value = ((BytesRef) value).utf8ToString(); - } - return Byte.parseByte(value.toString()); + + return (byte) doubleValue; } @Override @@ -461,8 +496,9 @@ Query termsQuery(String field, List values) { @Override Query rangeQuery(String field, Object lowerTerm, Object upperTerm, - boolean includeLower, boolean includeUpper) { - return INTEGER.rangeQuery(field, lowerTerm, upperTerm, includeLower, includeUpper); + boolean includeLower, boolean includeUpper, + boolean hasDocValues) { + return INTEGER.rangeQuery(field, lowerTerm, upperTerm, includeLower, includeUpper, hasDocValues); } @Override @@ -484,30 +520,26 @@ Number valueForSearch(Number value) { }, SHORT("short", NumericType.SHORT) { @Override - Short parse(Object value) { + Short parse(Object value, boolean coerce) { + double doubleValue = objectToDouble(value); + + if (doubleValue < Short.MIN_VALUE || doubleValue > 
Short.MAX_VALUE) { + throw new IllegalArgumentException("Value [" + value + "] is out of range for a short"); + } + if (!coerce && doubleValue % 1 != 0) { + throw new IllegalArgumentException("Value [" + value + "] has a decimal part"); + } + if (value instanceof Number) { - double doubleValue = ((Number) value).doubleValue(); - if (doubleValue < Short.MIN_VALUE || doubleValue > Short.MAX_VALUE) { - throw new IllegalArgumentException("Value [" + value + "] is out of range for a short"); - } - if (doubleValue % 1 != 0) { - throw new IllegalArgumentException("Value [" + value + "] has a decimal part"); - } return ((Number) value).shortValue(); } - if (value instanceof BytesRef) { - value = ((BytesRef) value).utf8ToString(); - } - return Short.parseShort(value.toString()); + + return (short) doubleValue; } @Override Short parse(XContentParser parser, boolean coerce) throws IOException { - int value = parser.intValue(coerce); - if (value < Short.MIN_VALUE || value > Short.MAX_VALUE) { - throw new IllegalArgumentException("Value [" + value + "] is out of range for a short"); - } - return (short) value; + return parser.shortValue(coerce); } @Override @@ -522,8 +554,9 @@ Query termsQuery(String field, List values) { @Override Query rangeQuery(String field, Object lowerTerm, Object upperTerm, - boolean includeLower, boolean includeUpper) { - return INTEGER.rangeQuery(field, lowerTerm, upperTerm, includeLower, includeUpper); + boolean includeLower, boolean includeUpper, + boolean hasDocValues) { + return INTEGER.rangeQuery(field, lowerTerm, upperTerm, includeLower, includeUpper, hasDocValues); } @Override @@ -545,21 +578,21 @@ Number valueForSearch(Number value) { }, INTEGER("integer", NumericType.INT) { @Override - Integer parse(Object value) { + Integer parse(Object value, boolean coerce) { + double doubleValue = objectToDouble(value); + + if (doubleValue < Integer.MIN_VALUE || doubleValue > Integer.MAX_VALUE) { + throw new IllegalArgumentException("Value [" + value + "] is out of range for an integer"); + } + if (!coerce && doubleValue % 1 != 0) { + throw new IllegalArgumentException("Value [" + value + "] has a decimal part"); + } + if (value instanceof Number) { - double doubleValue = ((Number) value).doubleValue(); - if (doubleValue < Integer.MIN_VALUE || doubleValue > Integer.MAX_VALUE) { - throw new IllegalArgumentException("Value [" + value + "] is out of range for an integer"); - } - if (doubleValue % 1 != 0) { - throw new IllegalArgumentException("Value [" + value + "] has a decimal part"); - } return ((Number) value).intValue(); } - if (value instanceof BytesRef) { - value = ((BytesRef) value).utf8ToString(); - } - return Integer.parseInt(value.toString()); + + return (int) doubleValue; } @Override @@ -569,27 +602,50 @@ Integer parse(XContentParser parser, boolean coerce) throws IOException { @Override Query termQuery(String field, Object value) { - int v = parse(value); + if (hasDecimalPart(value)) { + return Queries.newMatchNoDocsQuery("Value [" + value + "] has a decimal part"); + } + int v = parse(value, true); return IntPoint.newExactQuery(field, v); } @Override Query termsQuery(String field, List values) { int[] v = new int[values.size()]; - for (int i = 0; i < values.size(); ++i) { - v[i] = parse(values.get(i)); + int upTo = 0; + + for (int i = 0; i < values.size(); i++) { + Object value = values.get(i); + if (!hasDecimalPart(value)) { + v[upTo++] = parse(value, true); + } + } + + if (upTo == 0) { + return Queries.newMatchNoDocsQuery("All values have a decimal part"); + } + if 
(upTo != v.length) { + v = Arrays.copyOf(v, upTo); } return IntPoint.newSetQuery(field, v); } @Override Query rangeQuery(String field, Object lowerTerm, Object upperTerm, - boolean includeLower, boolean includeUpper) { + boolean includeLower, boolean includeUpper, + boolean hasDocValues) { int l = Integer.MIN_VALUE; int u = Integer.MAX_VALUE; if (lowerTerm != null) { - l = parse(lowerTerm); - if (includeLower == false) { + l = parse(lowerTerm, true); + // if the lower bound is decimal: + // - if the bound is positive then we increment it: + // if lowerTerm=1.5 then the (inclusive) bound becomes 2 + // - if the bound is negative then we leave it as is: + // if lowerTerm=-1.5 then the (inclusive) bound becomes -1 due to the call to longValue + boolean lowerTermHasDecimalPart = hasDecimalPart(lowerTerm); + if ((lowerTermHasDecimalPart == false && includeLower == false) || + (lowerTermHasDecimalPart && signum(lowerTerm) > 0)) { if (l == Integer.MAX_VALUE) { return new MatchNoDocsQuery(); } @@ -597,15 +653,22 @@ Query rangeQuery(String field, Object lowerTerm, Object upperTerm, } } if (upperTerm != null) { - u = parse(upperTerm); - if (includeUpper == false) { + u = parse(upperTerm, true); + boolean upperTermHasDecimalPart = hasDecimalPart(upperTerm); + if ((upperTermHasDecimalPart == false && includeUpper == false) || + (upperTermHasDecimalPart && signum(upperTerm) < 0)) { if (u == Integer.MIN_VALUE) { return new MatchNoDocsQuery(); } --u; } } - return IntPoint.newRangeQuery(field, l, u); + Query query = IntPoint.newRangeQuery(field, l, u); + if (hasDocValues) { + Query dvQuery = SortedNumericDocValuesField.newRangeQuery(field, l, u); + query = new IndexOrDocValuesQuery(query, dvQuery); + } + return query; } @Override @@ -627,13 +690,17 @@ public List createFields(String name, Number value, @Override FieldStats.Long stats(IndexReader reader, String fieldName, boolean isSearchable, boolean isAggregatable) throws IOException { - long size = XPointValues.size(reader, fieldName); - if (size == 0) { + FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(fieldName); + if (fi == null) { return null; } - int docCount = XPointValues.getDocCount(reader, fieldName); - byte[] min = XPointValues.getMinPackedValue(reader, fieldName); - byte[] max = XPointValues.getMaxPackedValue(reader, fieldName); + long size = PointValues.size(reader, fieldName); + if (size == 0) { + return new FieldStats.Long(reader.maxDoc(), 0, -1, -1, isSearchable, isAggregatable); + } + int docCount = PointValues.getDocCount(reader, fieldName); + byte[] min = PointValues.getMinPackedValue(reader, fieldName); + byte[] max = PointValues.getMaxPackedValue(reader, fieldName); return new FieldStats.Long(reader.maxDoc(),docCount, -1L, size, isSearchable, isAggregatable, IntPoint.decodeDimension(min, 0), IntPoint.decodeDimension(max, 0)); @@ -641,21 +708,28 @@ FieldStats.Long stats(IndexReader reader, String fieldName, }, LONG("long", NumericType.LONG) { @Override - Long parse(Object value) { + Long parse(Object value, boolean coerce) { + double doubleValue = objectToDouble(value); + + if (doubleValue < Long.MIN_VALUE || doubleValue > Long.MAX_VALUE) { + throw new IllegalArgumentException("Value [" + value + "] is out of range for a long"); + } + if (!coerce && doubleValue % 1 != 0) { + throw new IllegalArgumentException("Value [" + value + "] has a decimal part"); + } + if (value instanceof Number) { - double doubleValue = ((Number) value).doubleValue(); - if (doubleValue < Long.MIN_VALUE || doubleValue > 
Long.MAX_VALUE) { - throw new IllegalArgumentException("Value [" + value + "] is out of range for a long"); - } - if (doubleValue % 1 != 0) { - throw new IllegalArgumentException("Value [" + value + "] has a decimal part"); - } return ((Number) value).longValue(); } - if (value instanceof BytesRef) { - value = ((BytesRef) value).utf8ToString(); + + // longs need special handling so we don't lose precision while parsing + String stringValue = (value instanceof BytesRef) ? ((BytesRef) value).utf8ToString() : value.toString(); + + try { + return Long.parseLong(stringValue); + } catch (NumberFormatException e) { + return (long) Double.parseDouble(stringValue); } - return Long.parseLong(value.toString()); } @Override @@ -665,27 +739,50 @@ Long parse(XContentParser parser, boolean coerce) throws IOException { @Override Query termQuery(String field, Object value) { - long v = parse(value); + if (hasDecimalPart(value)) { + return Queries.newMatchNoDocsQuery("Value [" + value + "] has a decimal part"); + } + long v = parse(value, true); return LongPoint.newExactQuery(field, v); } @Override Query termsQuery(String field, List values) { long[] v = new long[values.size()]; - for (int i = 0; i < values.size(); ++i) { - v[i] = parse(values.get(i)); + int upTo = 0; + + for (int i = 0; i < values.size(); i++) { + Object value = values.get(i); + if (!hasDecimalPart(value)) { + v[upTo++] = parse(value, true); + } + } + + if (upTo == 0) { + return Queries.newMatchNoDocsQuery("All values have a decimal part"); + } + if (upTo != v.length) { + v = Arrays.copyOf(v, upTo); } return LongPoint.newSetQuery(field, v); } @Override Query rangeQuery(String field, Object lowerTerm, Object upperTerm, - boolean includeLower, boolean includeUpper) { + boolean includeLower, boolean includeUpper, + boolean hasDocValues) { long l = Long.MIN_VALUE; long u = Long.MAX_VALUE; if (lowerTerm != null) { - l = parse(lowerTerm); - if (includeLower == false) { + l = parse(lowerTerm, true); + // if the lower bound is decimal: + // - if the bound is positive then we increment it: + // if lowerTerm=1.5 then the (inclusive) bound becomes 2 + // - if the bound is negative then we leave it as is: + // if lowerTerm=-1.5 then the (inclusive) bound becomes -1 due to the call to longValue + boolean lowerTermHasDecimalPart = hasDecimalPart(lowerTerm); + if ((lowerTermHasDecimalPart == false && includeLower == false) || + (lowerTermHasDecimalPart && signum(lowerTerm) > 0)) { if (l == Long.MAX_VALUE) { return new MatchNoDocsQuery(); } @@ -693,15 +790,22 @@ Query rangeQuery(String field, Object lowerTerm, Object upperTerm, } } if (upperTerm != null) { - u = parse(upperTerm); - if (includeUpper == false) { + u = parse(upperTerm, true); + boolean upperTermHasDecimalPart = hasDecimalPart(upperTerm); + if ((upperTermHasDecimalPart == false && includeUpper == false) || + (upperTermHasDecimalPart && signum(upperTerm) < 0)) { if (u == Long.MIN_VALUE) { return new MatchNoDocsQuery(); } --u; } } - return LongPoint.newRangeQuery(field, l, u); + Query query = LongPoint.newRangeQuery(field, l, u); + if (hasDocValues) { + Query dvQuery = SortedNumericDocValuesField.newRangeQuery(field, l, u); + query = new IndexOrDocValuesQuery(query, dvQuery); + } + return query; } @Override @@ -723,13 +827,17 @@ public List createFields(String name, Number value, @Override FieldStats.Long stats(IndexReader reader, String fieldName, boolean isSearchable, boolean isAggregatable) throws IOException { - long size = XPointValues.size(reader, fieldName); - if (size == 0) { + 
FieldInfo fi = org.apache.lucene.index.MultiFields.getMergedFieldInfos(reader).fieldInfo(fieldName); + if (fi == null) { return null; } - int docCount = XPointValues.getDocCount(reader, fieldName); - byte[] min = XPointValues.getMinPackedValue(reader, fieldName); - byte[] max = XPointValues.getMaxPackedValue(reader, fieldName); + long size = PointValues.size(reader, fieldName); + if (size == 0) { + return new FieldStats.Long(reader.maxDoc(), 0, -1, -1, isSearchable, isAggregatable); + } + int docCount = PointValues.getDocCount(reader, fieldName); + byte[] min = PointValues.getMinPackedValue(reader, fieldName); + byte[] max = PointValues.getMaxPackedValue(reader, fieldName); return new FieldStats.Long(reader.maxDoc(),docCount, -1L, size, isSearchable, isAggregatable, LongPoint.decodeDimension(min, 0), LongPoint.decodeDimension(max, 0)); @@ -748,16 +856,17 @@ FieldStats.Long stats(IndexReader reader, String fieldName, public final String typeName() { return name; } - /** Get the associated numerit type */ + /** Get the associated numeric type */ final NumericType numericType() { return numericType; } abstract Query termQuery(String field, Object value); abstract Query termsQuery(String field, List values); abstract Query rangeQuery(String field, Object lowerTerm, Object upperTerm, - boolean includeLower, boolean includeUpper); + boolean includeLower, boolean includeUpper, + boolean hasDocValues); abstract Number parse(XContentParser parser, boolean coerce) throws IOException; - abstract Number parse(Object value); + abstract Number parse(Object value, boolean coerce); public abstract List createFields(String name, Number value, boolean indexed, boolean docValued, boolean stored); abstract FieldStats stats(IndexReader reader, String fieldName, @@ -765,6 +874,55 @@ abstract FieldStats stats(IndexReader reader, String fieldName Number valueForSearch(Number value) { return value; } + + /** + * Returns true if the object is a number and has a decimal part + */ + boolean hasDecimalPart(Object number) { + if (number instanceof Number) { + double doubleValue = ((Number) number).doubleValue(); + return doubleValue % 1 != 0; + } + if (number instanceof BytesRef) { + number = ((BytesRef) number).utf8ToString(); + } + if (number instanceof String) { + return Double.parseDouble((String) number) % 1 != 0; + } + return false; + } + + /** + * Returns -1, 0, or 1 if the value is lower than, equal to, or greater than 0 + */ + double signum(Object value) { + if (value instanceof Number) { + double doubleValue = ((Number) value).doubleValue(); + return Math.signum(doubleValue); + } + if (value instanceof BytesRef) { + value = ((BytesRef) value).utf8ToString(); + } + return Math.signum(Double.parseDouble(value.toString())); + } + + /** + * Converts an Object to a double by checking it against known types first + */ + private static double objectToDouble(Object value) { + double doubleValue; + + if (value instanceof Number) { + doubleValue = ((Number) value).doubleValue(); + } else if (value instanceof BytesRef) { + doubleValue = Double.parseDouble(((BytesRef) value).utf8ToString()); + } else { + doubleValue = Double.parseDouble(value.toString()); + } + + return doubleValue; + } + } public static final class NumberFieldType extends MappedFieldType { @@ -815,9 +973,9 @@ public Query termsQuery(List values, QueryShardContext context) { } @Override - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) { + public Query rangeQuery(Object lowerTerm, Object upperTerm, 
boolean includeLower, boolean includeUpper, QueryShardContext context) { failIfNotIndexed(); - Query query = type.rangeQuery(name(), lowerTerm, upperTerm, includeLower, includeUpper); + Query query = type.rangeQuery(name(), lowerTerm, upperTerm, includeLower, includeUpper, hasDocValues()); if (boost() != 1f) { query = new BoostQuery(query, boost()); } @@ -836,7 +994,7 @@ public IndexFieldData.Builder fielddataBuilder() { } @Override - public Object valueForSearch(Object value) { + public Object valueForDisplay(Object value) { if (value == null) { return null; } @@ -895,7 +1053,7 @@ protected NumberFieldMapper clone() { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { final boolean includeInAll = context.includeInAll(this.includeInAll, this); XContentParser parser = context.parser(); @@ -935,7 +1093,7 @@ protected void parseCreateField(ParseContext context, List fields) throws } if (numericValue == null) { - numericValue = fieldType().type.parse(value); + numericValue = fieldType().type.parse(value, coerce.value()); } if (includeInAll) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/ObjectMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/ObjectMapper.java index e063629bd6107..4072732b28565 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/ObjectMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/ObjectMapper.java @@ -42,11 +42,6 @@ import java.util.Locale; import java.util.Map; -import static org.elasticsearch.common.xcontent.support.XContentMapValues.lenientNodeBooleanValue; - -/** - * - */ public class ObjectMapper extends Mapper implements Cloneable { public static final String CONTENT_TYPE = "object"; @@ -58,7 +53,7 @@ public static class Defaults { public static final Dynamic DYNAMIC = null; // not set, inherited from root } - public static enum Dynamic { + public enum Dynamic { TRUE, FALSE, STRICT @@ -188,11 +183,12 @@ protected static boolean parseObjectOrDocumentTypeProperties(String fieldName, O if (value.equalsIgnoreCase("strict")) { builder.dynamic(Dynamic.STRICT); } else { - builder.dynamic(lenientNodeBooleanValue(fieldNode) ? Dynamic.TRUE : Dynamic.FALSE); + boolean dynamic = TypeParsers.nodeBooleanValue(fieldName, "dynamic", fieldNode); + builder.dynamic(dynamic ? 
Dynamic.TRUE : Dynamic.FALSE); } return true; } else if (fieldName.equals("enabled")) { - builder.enabled(lenientNodeBooleanValue(fieldNode)); + builder.enabled(TypeParsers.nodeBooleanValue(fieldName, "enabled", fieldNode)); return true; } else if (fieldName.equals("properties")) { if (fieldNode instanceof Collection && ((Collection) fieldNode).isEmpty()) { @@ -204,7 +200,7 @@ protected static boolean parseObjectOrDocumentTypeProperties(String fieldName, O } return true; } else if (fieldName.equals("include_in_all")) { - builder.includeInAll(lenientNodeBooleanValue(fieldNode)); + builder.includeInAll(TypeParsers.nodeBooleanValue(fieldName, "include_in_all", fieldNode)); return true; } return false; @@ -227,12 +223,12 @@ protected static void parseNested(String name, Map node, ObjectM } fieldNode = node.get("include_in_parent"); if (fieldNode != null) { - nestedIncludeInParent = lenientNodeBooleanValue(fieldNode); + nestedIncludeInParent = TypeParsers.nodeBooleanValue(name, "include_in_parent", fieldNode); node.remove("include_in_parent"); } fieldNode = node.get("include_in_root"); if (fieldNode != null) { - nestedIncludeInRoot = lenientNodeBooleanValue(fieldNode); + nestedIncludeInRoot = TypeParsers.nodeBooleanValue(name, "include_in_root", fieldNode); node.remove("include_in_root"); } if (nested) { @@ -325,7 +321,7 @@ protected static void parseProperties(ObjectMapper.Builder objBuilder, Mapnull if there + * isn't any. + */ + public ObjectMapper getParentObjectMapper(MapperService mapperService) { + int indexOfLastDot = fullPath().lastIndexOf('.'); + if (indexOfLastDot != -1) { + String parentNestObjectPath = fullPath().substring(0, indexOfLastDot); + return mapperService.getObjectMapper(parentNestObjectPath); + } else { + return null; + } + } + + /** + * Returns whether all parent objects fields are nested too. 
+ */ + public boolean parentObjectMapperAreNested(MapperService mapperService) { + for (ObjectMapper parent = getParentObjectMapper(mapperService); + parent != null; + parent = parent.getParentObjectMapper(mapperService)) { + + if (parent.nested().isNested() == false) { + return false; + } + } + return true; + } + @Override public ObjectMapper merge(Mapper mergeWith, boolean updateAllTypes) { if (!(mergeWith instanceof ObjectMapper)) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java index 9caef2c7740d4..00508e0f2be69 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java @@ -18,10 +18,10 @@ */ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; import org.apache.lucene.document.SortedDocValuesField; import org.apache.lucene.index.DocValuesType; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.Term; import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.BooleanQuery; @@ -37,7 +37,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.support.XContentMapValues; import org.elasticsearch.index.fielddata.IndexFieldData; -import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData; +import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData; import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; @@ -65,6 +65,7 @@ public static class Defaults { FIELD_TYPE.setIndexOptions(IndexOptions.NONE); FIELD_TYPE.setHasDocValues(true); FIELD_TYPE.setDocValuesType(DocValuesType.SORTED); + FIELD_TYPE.setEagerGlobalOrdinals(false); FIELD_TYPE.freeze(); } } @@ -77,6 +78,8 @@ public static class Builder extends MetadataFieldMapper.Builder node, if (fieldName.equals("type")) { builder.type(fieldNode.toString()); iterator.remove(); - } else if (parserContext.parseFieldMatcher().match(fieldName, FIELDDATA)) { + } else if (FIELDDATA.match(fieldName)) { // for bw compat only Map fieldDataSettings = SettingsLoader.Helper.loadNestedFromMap(nodeMapValue(fieldNode, "fielddata")); if (fieldDataSettings.containsKey("loading")) { @@ -130,7 +133,9 @@ public MetadataFieldMapper.Builder parse(String name, Map node, } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); + final String typeName = context.type(); KeywordFieldMapper parentJoinField = createParentJoinFieldMapper(typeName, new BuilderContext(indexSettings, new ContentPath(0))); MappedFieldType childJoinFieldType = new ParentFieldType(Defaults.FIELD_TYPE, typeName); childJoinFieldType.setName(ParentFieldMapper.NAME); @@ -150,7 +155,7 @@ static final class ParentFieldType extends MappedFieldType { final String documentType; - public ParentFieldType() { + ParentFieldType() { documentType = null; setEagerGlobalOrdinals(true); } @@ -194,7 +199,7 @@ public Query termsQuery(List values, @Nullable QueryShardContext context) { @Override public IndexFieldData.Builder fielddataBuilder() { - return new ParentChildIndexFieldData.Builder(); + return new DocValuesIndexFieldData.Builder(); } 
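The ObjectMapper helpers added just above walk the dotted object path upwards one segment at a time (a.b.c, then a.b, then a) and stop at the first ancestor that is not nested. A standalone sketch of that traversal is below; it uses a plain map in place of MapperService#getObjectMapper, and the class, method names, and map are illustrative rather than part of the patch.

// Illustrative sketch only, not part of the patch.
import java.util.Map;

class NestedAncestorsSketch {
    /**
     * Mirrors parentObjectMapperAreNested: true only if every ancestor object of the
     * given path is itself nested. The map stands in for the mapper lookups and maps
     * full object paths to an "is nested" flag.
     */
    static boolean allAncestorsNested(String fullPath, Map<String, Boolean> nestedByPath) {
        for (String parent = parentPath(fullPath); parent != null; parent = parentPath(parent)) {
            Boolean nested = nestedByPath.get(parent);
            if (nested == null) {
                break;          // no mapper registered for this ancestor; nothing more to check
            }
            if (nested == false) {
                return false;   // a non-nested ancestor breaks the chain
            }
        }
        return true;
    }

    /** Returns the parent object path, or null when the path has no dot, as in getParentObjectMapper. */
    static String parentPath(String path) {
        int lastDot = path.lastIndexOf('.');
        return lastDot == -1 ? null : path.substring(0, lastDot);
    }
}

For example, with a map containing {"a" -> true, "a.b" -> true}, allAncestorsNested("a.b.c", map) returns true, and it returns false as soon as any ancestor along the path is marked not nested.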
} @@ -227,7 +232,7 @@ public void postParse(ParseContext context) throws IOException { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { boolean parent = context.docMapper().isParent(context.sourceToParse().type()); if (parent) { fields.add(new SortedDocValuesField(parentJoinField.fieldType().name(), new BytesRef(context.sourceToParse().id()))); @@ -284,7 +289,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.startObject(CONTENT_TYPE); builder.field("type", parentType); - if (includeDefaults || fieldType().eagerGlobalOrdinals() != defaultFieldType.eagerGlobalOrdinals()) { + if (includeDefaults || fieldType().eagerGlobalOrdinals() == false) { builder.field("eager_global_ordinals", fieldType().eagerGlobalOrdinals()); } builder.endObject(); @@ -293,20 +298,15 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws @Override protected void doMerge(Mapper mergeWith, boolean updateAllTypes) { - super.doMerge(mergeWith, updateAllTypes); ParentFieldMapper fieldMergeWith = (ParentFieldMapper) mergeWith; - if (Objects.equals(parentType, fieldMergeWith.parentType) == false) { + ParentFieldType currentFieldType = (ParentFieldType) fieldType.clone(); + super.doMerge(mergeWith, updateAllTypes); + if (fieldMergeWith.parentType != null && Objects.equals(parentType, fieldMergeWith.parentType) == false) { throw new IllegalArgumentException("The _parent field's type option can't be changed: [" + parentType + "]->[" + fieldMergeWith.parentType + "]"); } - List conflicts = new ArrayList<>(); - fieldType().checkCompatibility(fieldMergeWith.fieldType, conflicts, true); - if (conflicts.isEmpty() == false) { - throw new IllegalArgumentException("Merge conflicts: " + conflicts); - } - if (active()) { - fieldType = fieldMergeWith.fieldType.clone(); + fieldType = currentFieldType; } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/ParseContext.java b/core/src/main/java/org/elasticsearch/index/mapper/ParseContext.java index 8a2aca97e6843..7ff5c4a37fe5d 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/ParseContext.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/ParseContext.java @@ -29,15 +29,11 @@ import org.elasticsearch.common.lucene.all.AllEntries; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.analysis.AnalysisService; import java.util.ArrayList; import java.util.Iterator; import java.util.List; -/** - * - */ public abstract class ParseContext { /** Fork of {@link org.apache.lucene.document.Document} with additional functionality. 
*/ @@ -242,11 +238,6 @@ public DocumentMapper docMapper() { return in.docMapper(); } - @Override - public AnalysisService analysisService() { - return in.analysisService(); - } - @Override public MapperService mapperService() { return in.mapperService(); @@ -385,11 +376,6 @@ public DocumentMapper docMapper() { return this.docMapper; } - @Override - public AnalysisService analysisService() { - return docMapperParser.analysisService; - } - @Override public MapperService mapperService() { return docMapperParser.mapperService; @@ -525,8 +511,6 @@ public boolean isWithinMultiFields() { public abstract DocumentMapper docMapper(); - public abstract AnalysisService analysisService(); - public abstract MapperService mapperService(); public abstract Field version(); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/ParsedDocument.java b/core/src/main/java/org/elasticsearch/index/mapper/ParsedDocument.java index 85367e624d8a6..bf8c1e097236e 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/ParsedDocument.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/ParsedDocument.java @@ -22,6 +22,7 @@ import org.apache.lucene.document.Field; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.mapper.ParseContext.Document; import java.util.List; @@ -45,13 +46,14 @@ public class ParsedDocument { private final List documents; private BytesReference source; + private XContentType xContentType; private Mapping dynamicMappingsUpdate; private String parent; public ParsedDocument(Field version, String id, String type, String routing, long timestamp, long ttl, List documents, - BytesReference source, Mapping dynamicMappingsUpdate) { + BytesReference source, XContentType xContentType, Mapping dynamicMappingsUpdate) { this.version = version; this.id = id; this.type = type; @@ -62,15 +64,12 @@ public ParsedDocument(Field version, String id, String type, String routing, lon this.documents = documents; this.source = source; this.dynamicMappingsUpdate = dynamicMappingsUpdate; + this.xContentType = xContentType; } public Field version() { return version; } - public BytesRef uid() { - return uid; - } - public String id() { return this.id; } @@ -103,8 +102,13 @@ public BytesReference source() { return this.source; } - public void setSource(BytesReference source) { + public XContentType getXContentType() { + return this.xContentType; + } + + public void setSource(BytesReference source, XContentType xContentType) { this.source = source; + this.xContentType = xContentType; } public ParsedDocument parent(String parent) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/RangeFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/RangeFieldMapper.java new file mode 100644 index 0000000000000..9caa73e075382 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/mapper/RangeFieldMapper.java @@ -0,0 +1,815 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANYDa + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.index.mapper; + +import org.apache.lucene.document.Field; +import org.apache.lucene.document.DoubleRange; +import org.apache.lucene.document.FloatRange; +import org.apache.lucene.document.IntRange; +import org.apache.lucene.document.InetAddressPoint; +import org.apache.lucene.document.InetAddressRange; +import org.apache.lucene.document.LongRange; +import org.apache.lucene.document.StoredField; +import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; +import org.apache.lucene.search.BoostQuery; +import org.apache.lucene.search.Query; +import org.apache.lucene.util.BytesRef; +import org.elasticsearch.common.Explicit; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.geo.ShapeRelation; +import org.elasticsearch.common.joda.DateMathParser; +import org.elasticsearch.common.joda.FormatDateTimeFormatter; +import org.elasticsearch.common.network.InetAddresses; +import org.elasticsearch.common.settings.Setting; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.LocaleUtils; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.mapper.NumberFieldMapper.NumberType; +import org.elasticsearch.index.query.QueryShardContext; +import org.joda.time.DateTimeZone; + +import java.io.IOException; +import java.net.InetAddress; +import java.util.ArrayList; +import java.util.Iterator; +import java.util.List; +import java.util.Locale; +import java.util.Map; +import java.util.Objects; + +import static org.elasticsearch.index.mapper.TypeParsers.parseDateTimeFormatter; +import static org.elasticsearch.index.query.RangeQueryBuilder.GT_FIELD; +import static org.elasticsearch.index.query.RangeQueryBuilder.GTE_FIELD; +import static org.elasticsearch.index.query.RangeQueryBuilder.LT_FIELD; +import static org.elasticsearch.index.query.RangeQueryBuilder.LTE_FIELD; + +/** A {@link FieldMapper} for indexing numeric and date ranges, and creating queries */ +public class RangeFieldMapper extends FieldMapper { + public static final boolean DEFAULT_INCLUDE_UPPER = true; + public static final boolean DEFAULT_INCLUDE_LOWER = true; + + public static class Defaults { + public static final Explicit COERCE = new Explicit<>(true, false); + } + + // this is private since it has a different default + static final Setting COERCE_SETTING = + Setting.boolSetting("index.mapping.coerce", true, Setting.Property.IndexScope); + + public static class Builder extends FieldMapper.Builder { + private Boolean coerce; + private Locale locale; + + public Builder(String name, RangeType type) { + super(name, new RangeFieldType(type), new RangeFieldType(type)); + builder = this; + locale = Locale.ROOT; + } + + @Override + public RangeFieldType fieldType() { + return (RangeFieldType)fieldType; + } + + @Override + public Builder docValues(boolean docValues) { + if (docValues == true) { + throw new IllegalArgumentException("field [" + name + "] does not currently support " + TypeParsers.DOC_VALUES); + } + 
return super.docValues(docValues); + } + + public Builder coerce(boolean coerce) { + this.coerce = coerce; + return builder; + } + + protected Explicit coerce(BuilderContext context) { + if (coerce != null) { + return new Explicit<>(coerce, true); + } + if (context.indexSettings() != null) { + return new Explicit<>(COERCE_SETTING.get(context.indexSettings()), false); + } + return Defaults.COERCE; + } + + public Builder dateTimeFormatter(FormatDateTimeFormatter dateTimeFormatter) { + fieldType().setDateTimeFormatter(dateTimeFormatter); + return this; + } + + @Override + public Builder nullValue(Object nullValue) { + throw new IllegalArgumentException("Field [" + name() + "] does not support null value."); + } + + public void locale(Locale locale) { + this.locale = locale; + } + + @Override + protected void setupFieldType(BuilderContext context) { + super.setupFieldType(context); + FormatDateTimeFormatter dateTimeFormatter = fieldType().dateTimeFormatter; + if (fieldType().rangeType == RangeType.DATE) { + if (!locale.equals(dateTimeFormatter.locale())) { + fieldType().setDateTimeFormatter(new FormatDateTimeFormatter(dateTimeFormatter.format(), + dateTimeFormatter.parser(), dateTimeFormatter.printer(), locale)); + } + } else if (dateTimeFormatter != null) { + throw new IllegalArgumentException("field [" + name() + "] of type [" + fieldType().rangeType + + "] should not define a dateTimeFormatter unless it is a " + RangeType.DATE + " type"); + } + } + + @Override + public RangeFieldMapper build(BuilderContext context) { + setupFieldType(context); + return new RangeFieldMapper(name, fieldType, defaultFieldType, coerce(context), includeInAll, + context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo); + } + } + + public static class TypeParser implements Mapper.TypeParser { + final RangeType type; + + public TypeParser(RangeType type) { + this.type = type; + } + + @Override + public Mapper.Builder parse(String name, Map node, + ParserContext parserContext) throws MapperParsingException { + Builder builder = new Builder(name, type); + TypeParsers.parseField(builder, name, node, parserContext); + for (Iterator> iterator = node.entrySet().iterator(); iterator.hasNext();) { + Map.Entry entry = iterator.next(); + String propName = entry.getKey(); + Object propNode = entry.getValue(); + if (propName.equals("null_value")) { + throw new MapperParsingException("Property [null_value] is not supported for [" + this.type.name + + "] field types."); + } else if (propName.equals("coerce")) { + builder.coerce(TypeParsers.nodeBooleanValue(name, "coerce", propNode)); + iterator.remove(); + } else if (propName.equals("locale")) { + builder.locale(LocaleUtils.parse(propNode.toString())); + iterator.remove(); + } else if (propName.equals("format")) { + builder.dateTimeFormatter(parseDateTimeFormatter(propNode)); + iterator.remove(); + } else if (TypeParsers.parseMultiField(builder, name, parserContext, propName, propNode)) { + iterator.remove(); + } + } + return builder; + } + } + + public static final class RangeFieldType extends MappedFieldType { + protected RangeType rangeType; + protected FormatDateTimeFormatter dateTimeFormatter; + protected DateMathParser dateMathParser; + + public RangeFieldType(RangeType type) { + super(); + this.rangeType = Objects.requireNonNull(type); + setTokenized(false); + setHasDocValues(false); + setOmitNorms(true); + if (rangeType == RangeType.DATE) { + setDateTimeFormatter(DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER); + } + } + + public 
RangeFieldType(RangeFieldType other) { + super(other); + this.rangeType = other.rangeType; + if (other.dateTimeFormatter() != null) { + setDateTimeFormatter(other.dateTimeFormatter); + } + } + + @Override + public MappedFieldType clone() { + return new RangeFieldType(this); + } + + @Override + public boolean equals(Object o) { + if (!super.equals(o)) return false; + RangeFieldType that = (RangeFieldType) o; + return Objects.equals(rangeType, that.rangeType) && + (rangeType == RangeType.DATE) ? + Objects.equals(dateTimeFormatter.format(), that.dateTimeFormatter.format()) + && Objects.equals(dateTimeFormatter.locale(), that.dateTimeFormatter.locale()) + : dateTimeFormatter == null && that.dateTimeFormatter == null; + } + + @Override + public int hashCode() { + return (dateTimeFormatter == null) ? Objects.hash(super.hashCode(), rangeType) + : Objects.hash(super.hashCode(), rangeType, dateTimeFormatter.format(), dateTimeFormatter.locale()); + } + + @Override + public String typeName() { + return rangeType.name; + } + + @Override + public void checkCompatibility(MappedFieldType fieldType, List conflicts, boolean strict) { + super.checkCompatibility(fieldType, conflicts, strict); + if (strict) { + RangeFieldType other = (RangeFieldType)fieldType; + if (this.rangeType != other.rangeType) { + conflicts.add("mapper [" + name() + + "] is attempting to update from type [" + rangeType.name + + "] to incompatible type [" + other.rangeType.name + "]."); + } + if (this.rangeType == RangeType.DATE) { + if (Objects.equals(dateTimeFormatter().format(), other.dateTimeFormatter().format()) == false) { + conflicts.add("mapper [" + name() + + "] is used by multiple types. Set update_all_types to true to update [format] across all types."); + } + if (Objects.equals(dateTimeFormatter().locale(), other.dateTimeFormatter().locale()) == false) { + conflicts.add("mapper [" + name() + + "] is used by multiple types. 
Set update_all_types to true to update [locale] across all types."); + } + } + } + } + + public FormatDateTimeFormatter dateTimeFormatter() { + return dateTimeFormatter; + } + + public void setDateTimeFormatter(FormatDateTimeFormatter dateTimeFormatter) { + checkIfFrozen(); + this.dateTimeFormatter = dateTimeFormatter; + this.dateMathParser = new DateMathParser(dateTimeFormatter); + } + + protected DateMathParser dateMathParser() { + return dateMathParser; + } + + @Override + public Query termQuery(Object value, QueryShardContext context) { + Query query = rangeQuery(value, value, true, true, ShapeRelation.INTERSECTS, context); + if (boost() != 1f) { + query = new BoostQuery(query, boost()); + } + return query; + } + + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, + ShapeRelation relation, QueryShardContext context) { + failIfNotIndexed(); + return rangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, relation, null, dateMathParser, context); + } + + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, + ShapeRelation relation, DateTimeZone timeZone, DateMathParser parser, QueryShardContext context) { + return rangeType.rangeQuery(name(), lowerTerm, upperTerm, includeLower, includeUpper, relation, timeZone, parser, context); + } + } + + private Boolean includeInAll; + private Explicit coerce; + + private RangeFieldMapper( + String simpleName, + MappedFieldType fieldType, + MappedFieldType defaultFieldType, + Explicit coerce, + Boolean includeInAll, + Settings indexSettings, + MultiFields multiFields, + CopyTo copyTo) { + super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, copyTo); + this.coerce = coerce; + this.includeInAll = includeInAll; + } + + @Override + public RangeFieldType fieldType() { + return (RangeFieldType) super.fieldType(); + } + + @Override + protected String contentType() { + return fieldType.typeName(); + } + + @Override + protected RangeFieldMapper clone() { + return (RangeFieldMapper) super.clone(); + } + + @Override + protected void parseCreateField(ParseContext context, List fields) throws IOException { + final boolean includeInAll = context.includeInAll(this.includeInAll, this); + Range range; + if (context.externalValueSet()) { + range = context.parseExternalValue(Range.class); + } else { + XContentParser parser = context.parser(); + if (parser.currentToken() == XContentParser.Token.START_OBJECT) { + RangeFieldType fieldType = fieldType(); + RangeType rangeType = fieldType.rangeType; + String fieldName = null; + Object from = rangeType.minValue(); + Object to = rangeType.maxValue(); + boolean includeFrom = DEFAULT_INCLUDE_LOWER; + boolean includeTo = DEFAULT_INCLUDE_UPPER; + XContentParser.Token token; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + fieldName = parser.currentName(); + } else { + if (fieldName.equals(GT_FIELD.getPreferredName())) { + includeFrom = false; + if (parser.currentToken() != XContentParser.Token.VALUE_NULL) { + from = rangeType.parseFrom(fieldType, parser, coerce.value(), includeFrom); + } + } else if (fieldName.equals(GTE_FIELD.getPreferredName())) { + includeFrom = true; + if (parser.currentToken() != XContentParser.Token.VALUE_NULL) { + from = rangeType.parseFrom(fieldType, parser, coerce.value(), includeFrom); + } + } else if (fieldName.equals(LT_FIELD.getPreferredName())) { + includeTo = false; + if 
(parser.currentToken() != XContentParser.Token.VALUE_NULL) { + to = rangeType.parseTo(fieldType, parser, coerce.value(), includeTo); + } + } else if (fieldName.equals(LTE_FIELD.getPreferredName())) { + includeTo = true; + if (parser.currentToken() != XContentParser.Token.VALUE_NULL) { + to = rangeType.parseTo(fieldType, parser, coerce.value(), includeTo); + } + } else { + throw new MapperParsingException("error parsing field [" + + name() + "], with unknown parameter [" + fieldName + "]"); + } + } + } + range = new Range(rangeType, from, to, includeFrom, includeTo); + } else { + throw new MapperParsingException("error parsing field [" + + name() + "], expected an object but got " + parser.currentName()); + } + } + if (includeInAll) { + context.allEntries().addText(fieldType.name(), range.toString(), fieldType.boost()); + } + boolean indexed = fieldType.indexOptions() != IndexOptions.NONE; + boolean docValued = fieldType.hasDocValues(); + boolean stored = fieldType.stored(); + fields.addAll(fieldType().rangeType.createFields(name(), range, indexed, docValued, stored)); + } + + @Override + protected void doMerge(Mapper mergeWith, boolean updateAllTypes) { + super.doMerge(mergeWith, updateAllTypes); + RangeFieldMapper other = (RangeFieldMapper) mergeWith; + this.includeInAll = other.includeInAll; + if (other.coerce.explicit()) { + this.coerce = other.coerce; + } + } + + @Override + protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, Params params) throws IOException { + super.doXContentBody(builder, includeDefaults, params); + + if (fieldType().rangeType == RangeType.DATE + && (includeDefaults || (fieldType().dateTimeFormatter() != null + && fieldType().dateTimeFormatter().format().equals(DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER.format()) == false))) { + builder.field("format", fieldType().dateTimeFormatter().format()); + } + if (fieldType().rangeType == RangeType.DATE + && (includeDefaults || (fieldType().dateTimeFormatter() != null + && fieldType().dateTimeFormatter().locale() != Locale.ROOT))) { + builder.field("locale", fieldType().dateTimeFormatter().locale()); + } + if (includeDefaults || coerce.explicit()) { + builder.field("coerce", coerce.value()); + } + if (includeInAll != null) { + builder.field("include_in_all", includeInAll); + } else if (includeDefaults) { + builder.field("include_in_all", false); + } + } + + /** Enum defining the type of range */ + public enum RangeType { + IP("ip_range") { + @Override + public Field getRangeField(String name, Range r) { + return new InetAddressRange(name, (InetAddress)r.from, (InetAddress)r.to); + } + @Override + public InetAddress parseFrom(RangeFieldType fieldType, XContentParser parser, boolean coerce, boolean included) + throws IOException { + InetAddress address = InetAddresses.forString(parser.text()); + return included ? address : nextUp(address); + } + @Override + public InetAddress parseTo(RangeFieldType fieldType, XContentParser parser, boolean coerce, boolean included) + throws IOException { + InetAddress address = InetAddresses.forString(parser.text()); + return included ? 
address : nextDown(address); + } + @Override + public InetAddress parse(Object value, boolean coerce) { + if (value instanceof InetAddress) { + return (InetAddress) value; + } else { + if (value instanceof BytesRef) { + value = ((BytesRef) value).utf8ToString(); + } + return InetAddresses.forString(value.toString()); + } + } + @Override + public InetAddress minValue() { + return InetAddressPoint.MIN_VALUE; + } + @Override + public InetAddress maxValue() { + return InetAddressPoint.MAX_VALUE; + } + @Override + public InetAddress nextUp(Object value) { + return InetAddressPoint.nextUp((InetAddress)value); + } + @Override + public InetAddress nextDown(Object value) { + return InetAddressPoint.nextDown((InetAddress)value); + } + @Override + public Query withinQuery(String field, Object from, Object to, boolean includeLower, boolean includeUpper) { + InetAddress lower = (InetAddress)from; + InetAddress upper = (InetAddress)to; + return InetAddressRange.newWithinQuery(field, + includeLower ? lower : nextUp(lower), includeUpper ? upper : nextDown(upper)); + } + @Override + public Query containsQuery(String field, Object from, Object to, boolean includeLower, boolean includeUpper) { + InetAddress lower = (InetAddress)from; + InetAddress upper = (InetAddress)to; + return InetAddressRange.newContainsQuery(field, + includeLower ? lower : nextUp(lower), includeUpper ? upper : nextDown(upper)); + } + @Override + public Query intersectsQuery(String field, Object from, Object to, boolean includeLower, boolean includeUpper) { + InetAddress lower = (InetAddress)from; + InetAddress upper = (InetAddress)to; + return InetAddressRange.newIntersectsQuery(field, + includeLower ? lower : nextUp(lower), includeUpper ? upper : nextDown(upper)); + } + public String toString(InetAddress address) { + return InetAddresses.toAddrString(address); + } + }, + DATE("date_range", NumberType.LONG) { + @Override + public Field getRangeField(String name, Range r) { + return new LongRange(name, new long[] {((Number)r.from).longValue()}, new long[] {((Number)r.to).longValue()}); + } + private Number parse(DateMathParser dateMathParser, String dateStr) { + return dateMathParser.parse(dateStr, () -> {throw new IllegalArgumentException("now is not used at indexing time");}); + } + @Override + public Number parseFrom(RangeFieldType fieldType, XContentParser parser, boolean coerce, boolean included) + throws IOException { + Number value = parse(fieldType.dateMathParser, parser.text()); + return included ? value : nextUp(value); + } + @Override + public Number parseTo(RangeFieldType fieldType, XContentParser parser, boolean coerce, boolean included) + throws IOException{ + Number value = parse(fieldType.dateMathParser, parser.text()); + return included ? value : nextDown(value); + } + @Override + public Long minValue() { + return Long.MIN_VALUE; + } + @Override + public Long maxValue() { + return Long.MAX_VALUE; + } + @Override + public Long nextUp(Object value) { + return (long) LONG.nextUp(value); + } + @Override + public Long nextDown(Object value) { + return (long) LONG.nextDown(value); + } + @Override + public Query rangeQuery(String field, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, + ShapeRelation relation, @Nullable DateTimeZone timeZone, @Nullable DateMathParser parser, + QueryShardContext context) { + DateTimeZone zone = (timeZone == null) ? DateTimeZone.UTC : timeZone; + DateMathParser dateMathParser = (parser == null) ? 
+ new DateMathParser(DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER) : parser; + Long low = lowerTerm == null ? Long.MIN_VALUE : + dateMathParser.parse(lowerTerm instanceof BytesRef ? ((BytesRef) lowerTerm).utf8ToString() : lowerTerm.toString(), + context::nowInMillis, false, zone); + Long high = upperTerm == null ? Long.MAX_VALUE : + dateMathParser.parse(upperTerm instanceof BytesRef ? ((BytesRef) upperTerm).utf8ToString() : upperTerm.toString(), + context::nowInMillis, false, zone); + + return super.rangeQuery(field, low, high, includeLower, includeUpper, relation, zone, dateMathParser, context); + } + @Override + public Query withinQuery(String field, Object from, Object to, boolean includeLower, boolean includeUpper) { + return LONG.withinQuery(field, from, to, includeLower, includeUpper); + } + @Override + public Query containsQuery(String field, Object from, Object to, boolean includeLower, boolean includeUpper) { + return LONG.containsQuery(field, from, to, includeLower, includeUpper); + } + @Override + public Query intersectsQuery(String field, Object from, Object to, boolean includeLower, boolean includeUpper) { + return LONG.intersectsQuery(field, from, to, includeLower, includeUpper); + } + }, + // todo support half_float + FLOAT("float_range", NumberType.FLOAT) { + @Override + public Float minValue() { + return Float.NEGATIVE_INFINITY; + } + @Override + public Float maxValue() { + return Float.POSITIVE_INFINITY; + } + @Override + public Float nextUp(Object value) { + return Math.nextUp(((Number)value).floatValue()); + } + @Override + public Float nextDown(Object value) { + return Math.nextDown(((Number)value).floatValue()); + } + @Override + public Field getRangeField(String name, Range r) { + return new FloatRange(name, new float[] {((Number)r.from).floatValue()}, new float[] {((Number)r.to).floatValue()}); + } + @Override + public Query withinQuery(String field, Object from, Object to, boolean includeFrom, boolean includeTo) { + return FloatRange.newWithinQuery(field, + new float[] {includeFrom ? (Float)from : Math.nextUp((Float)from)}, + new float[] {includeTo ? (Float)to : Math.nextDown((Float)to)}); + } + @Override + public Query containsQuery(String field, Object from, Object to, boolean includeFrom, boolean includeTo) { + return FloatRange.newContainsQuery(field, + new float[] {includeFrom ? (Float)from : Math.nextUp((Float)from)}, + new float[] {includeTo ? (Float)to : Math.nextDown((Float)to)}); + } + @Override + public Query intersectsQuery(String field, Object from, Object to, boolean includeFrom, boolean includeTo) { + return FloatRange.newIntersectsQuery(field, + new float[] {includeFrom ? (Float)from : Math.nextUp((Float)from)}, + new float[] {includeTo ? 
(Float)to : Math.nextDown((Float)to)}); + } + }, + DOUBLE("double_range", NumberType.DOUBLE) { + @Override + public Double minValue() { + return Double.NEGATIVE_INFINITY; + } + @Override + public Double maxValue() { + return Double.POSITIVE_INFINITY; + } + @Override + public Double nextUp(Object value) { + return Math.nextUp(((Number)value).doubleValue()); + } + @Override + public Double nextDown(Object value) { + return Math.nextDown(((Number)value).doubleValue()); + } + @Override + public Field getRangeField(String name, Range r) { + return new DoubleRange(name, new double[] {((Number)r.from).doubleValue()}, new double[] {((Number)r.to).doubleValue()}); + } + @Override + public Query withinQuery(String field, Object from, Object to, boolean includeFrom, boolean includeTo) { + return DoubleRange.newWithinQuery(field, + new double[] {includeFrom ? (Double)from : Math.nextUp((Double)from)}, + new double[] {includeTo ? (Double)to : Math.nextDown((Double)to)}); + } + @Override + public Query containsQuery(String field, Object from, Object to, boolean includeFrom, boolean includeTo) { + return DoubleRange.newContainsQuery(field, + new double[] {includeFrom ? (Double)from : Math.nextUp((Double)from)}, + new double[] {includeTo ? (Double)to : Math.nextDown((Double)to)}); + } + @Override + public Query intersectsQuery(String field, Object from, Object to, boolean includeFrom, boolean includeTo) { + return DoubleRange.newIntersectsQuery(field, + new double[] {includeFrom ? (Double)from : Math.nextUp((Double)from)}, + new double[] {includeTo ? (Double)to : Math.nextDown((Double)to)}); + } + }, + // todo add BYTE support + // todo add SHORT support + INTEGER("integer_range", NumberType.INTEGER) { + @Override + public Integer minValue() { + return Integer.MIN_VALUE; + } + @Override + public Integer maxValue() { + return Integer.MAX_VALUE; + } + @Override + public Integer nextUp(Object value) { + return ((Number)value).intValue() + 1; + } + @Override + public Integer nextDown(Object value) { + return ((Number)value).intValue() - 1; + } + @Override + public Field getRangeField(String name, Range r) { + return new IntRange(name, new int[] {((Number)r.from).intValue()}, new int[] {((Number)r.to).intValue()}); + } + @Override + public Query withinQuery(String field, Object from, Object to, boolean includeFrom, boolean includeTo) { + return IntRange.newWithinQuery(field, new int[] {(Integer)from + (includeFrom ? 0 : 1)}, + new int[] {(Integer)to - (includeTo ? 0 : 1)}); + } + @Override + public Query containsQuery(String field, Object from, Object to, boolean includeFrom, boolean includeTo) { + return IntRange.newContainsQuery(field, new int[] {(Integer)from + (includeFrom ? 0 : 1)}, + new int[] {(Integer)to - (includeTo ? 0 : 1)}); + } + @Override + public Query intersectsQuery(String field, Object from, Object to, boolean includeFrom, boolean includeTo) { + return IntRange.newIntersectsQuery(field, new int[] {(Integer)from + (includeFrom ? 0 : 1)}, + new int[] {(Integer)to - (includeTo ? 
0 : 1)}); + } + }, + LONG("long_range", NumberType.LONG) { + @Override + public Long minValue() { + return Long.MIN_VALUE; + } + @Override + public Long maxValue() { + return Long.MAX_VALUE; + } + @Override + public Long nextUp(Object value) { + return ((Number)value).longValue() + 1; + } + @Override + public Long nextDown(Object value) { + return ((Number)value).longValue() - 1; + } + @Override + public Field getRangeField(String name, Range r) { + return new LongRange(name, new long[] {((Number)r.from).longValue()}, + new long[] {((Number)r.to).longValue()}); + } + @Override + public Query withinQuery(String field, Object from, Object to, boolean includeFrom, boolean includeTo) { + return LongRange.newWithinQuery(field, new long[] {(Long)from + (includeFrom ? 0 : 1)}, + new long[] {(Long)to - (includeTo ? 0 : 1)}); + } + @Override + public Query containsQuery(String field, Object from, Object to, boolean includeFrom, boolean includeTo) { + return LongRange.newContainsQuery(field, new long[] {(Long)from + (includeFrom ? 0 : 1)}, + new long[] {(Long)to - (includeTo ? 0 : 1)}); + } + @Override + public Query intersectsQuery(String field, Object from, Object to, boolean includeFrom, boolean includeTo) { + return LongRange.newIntersectsQuery(field, new long[] {(Long)from + (includeFrom ? 0 : 1)}, + new long[] {(Long)to - (includeTo ? 0 : 1)}); + } + }; + + RangeType(String name) { + this.name = name; + this.numberType = null; + } + + RangeType(String name, NumberType type) { + this.name = name; + this.numberType = type; + } + + /** Get the associated type name. */ + public final String typeName() { + return name; + } + + public abstract Field getRangeField(String name, Range range); + public List createFields(String name, Range range, boolean indexed, boolean docValued, boolean stored) { + assert range != null : "range cannot be null when creating fields"; + List fields = new ArrayList<>(); + if (indexed) { + fields.add(getRangeField(name, range)); + } + // todo add docValues ranges once aggregations are supported + if (stored) { + fields.add(new StoredField(name, range.toString())); + } + return fields; + } + /** parses from value. rounds according to included flag */ + public Object parseFrom(RangeFieldType fieldType, XContentParser parser, boolean coerce, boolean included) throws IOException { + Number value = numberType.parse(parser, coerce); + return included ? value : (Number)nextUp(value); + } + /** parses to value. rounds according to included flag */ + public Object parseTo(RangeFieldType fieldType, XContentParser parser, boolean coerce, boolean included) throws IOException { + Number value = numberType.parse(parser, coerce); + return included ? 
value : (Number)nextDown(value); + } + + public abstract Object minValue(); + public abstract Object maxValue(); + public abstract Object nextUp(Object value); + public abstract Object nextDown(Object value); + public abstract Query withinQuery(String field, Object from, Object to, boolean includeFrom, boolean includeTo); + public abstract Query containsQuery(String field, Object from, Object to, boolean includeFrom, boolean includeTo); + public abstract Query intersectsQuery(String field, Object from, Object to, boolean includeFrom, boolean includeTo); + public Object parse(Object value, boolean coerce) { + return numberType.parse(value, coerce); + } + public Query rangeQuery(String field, Object from, Object to, boolean includeFrom, boolean includeTo, + ShapeRelation relation, @Nullable DateTimeZone timeZone, @Nullable DateMathParser dateMathParser, + QueryShardContext context) { + Object lower = from == null ? minValue() : parse(from, false); + Object upper = to == null ? maxValue() : parse(to, false); + if (relation == ShapeRelation.WITHIN) { + return withinQuery(field, lower, upper, includeFrom, includeTo); + } else if (relation == ShapeRelation.CONTAINS) { + return containsQuery(field, lower, upper, includeFrom, includeTo); + } + return intersectsQuery(field, lower, upper, includeFrom, includeTo); + } + + public final String name; + private final NumberType numberType; + } + + /** Class defining a range */ + public static class Range { + RangeType type; + private Object from; + private Object to; + private boolean includeFrom; + private boolean includeTo; + + public Range(RangeType type, Object from, Object to, boolean includeFrom, boolean includeTo) { + this.type = type; + this.from = from; + this.to = to; + this.includeFrom = includeFrom; + this.includeTo = includeTo; + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append(includeFrom ? '[' : '('); + Object f = includeFrom || from.equals(type.minValue()) ? from : type.nextDown(from); + Object t = includeTo || to.equals(type.maxValue()) ? to : type.nextUp(to); + sb.append(type == RangeType.IP ? InetAddresses.toAddrString((InetAddress)f) : f.toString()); + sb.append(" : "); + sb.append(type == RangeType.IP ? InetAddresses.toAddrString((InetAddress)t) : t.toString()); + sb.append(includeTo ? 
']' : ')'); + return sb.toString(); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/index/mapper/RoutingFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/RoutingFieldMapper.java index aa3e78b8ee056..0eab67e57b267 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/RoutingFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/RoutingFieldMapper.java @@ -21,20 +21,17 @@ import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; +import java.util.Collections; import java.util.Iterator; import java.util.List; import java.util.Map; -import static org.elasticsearch.common.xcontent.support.XContentMapValues.lenientNodeBooleanValue; - -/** - * - */ public class RoutingFieldMapper extends MetadataFieldMapper { public static final String NAME = "_routing"; @@ -80,14 +77,14 @@ public RoutingFieldMapper build(BuilderContext context) { public static class TypeParser implements MetadataFieldMapper.TypeParser { @Override - public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { + public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { Builder builder = new Builder(parserContext.mapperService().fullName(NAME)); for (Iterator> iterator = node.entrySet().iterator(); iterator.hasNext();) { Map.Entry entry = iterator.next(); String fieldName = entry.getKey(); Object fieldNode = entry.getValue(); if (fieldName.equals("required")) { - builder.required(lenientNodeBooleanValue(fieldNode)); + builder.required(TypeParsers.nodeBooleanValue(name, "required", fieldNode)); iterator.remove(); } } @@ -95,14 +92,20 @@ public MetadataFieldMapper.Builder parse(String name, Map node, } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { - return new RoutingFieldMapper(indexSettings, fieldType); + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); + if (fieldType != null) { + return new RoutingFieldMapper(indexSettings, fieldType); + } else { + return parse(NAME, Collections.emptyMap(), context) + .build(new BuilderContext(indexSettings, new ContentPath(1))); + } } } static final class RoutingFieldType extends TermBasedFieldType { - public RoutingFieldType() { + RoutingFieldType() { } protected RoutingFieldType(RoutingFieldType ref) { @@ -123,7 +126,7 @@ public String typeName() { private boolean required; private RoutingFieldMapper(Settings indexSettings, MappedFieldType existing) { - this(existing == null ? 
Defaults.FIELD_TYPE.clone() : existing.clone(), Defaults.REQUIRED, indexSettings); + this(existing.clone(), Defaults.REQUIRED, indexSettings); } private RoutingFieldMapper(MappedFieldType fieldType, boolean required, Settings indexSettings) { @@ -157,7 +160,7 @@ public Mapper parse(ParseContext context) throws IOException { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { String routing = context.sourceToParse().routing(); if (routing != null) { if (fieldType().indexOptions() != IndexOptions.NONE || fieldType().stored()) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapper.java index 3608da30f7653..7ac5b8bf63b47 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapper.java @@ -19,8 +19,8 @@ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; import org.apache.lucene.index.DocValues; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.LeafReaderContext; @@ -28,8 +28,10 @@ import org.apache.lucene.index.SortedNumericDocValues; import org.apache.lucene.search.BoostQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.SortField; import org.elasticsearch.action.fieldstats.FieldStats; import org.elasticsearch.common.Explicit; +import org.elasticsearch.common.Nullable; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -144,16 +146,16 @@ public Mapper.Builder parse(String name, Map node, if (propNode == null) { throw new MapperParsingException("Property [null_value] cannot be null."); } - builder.nullValue(NumberFieldMapper.NumberType.DOUBLE.parse(propNode)); + builder.nullValue(NumberFieldMapper.NumberType.DOUBLE.parse(propNode, false)); iterator.remove(); } else if (propName.equals("ignore_malformed")) { - builder.ignoreMalformed(TypeParsers.nodeBooleanValue("ignore_malformed", propNode, parserContext)); + builder.ignoreMalformed(TypeParsers.nodeBooleanValue(name, "ignore_malformed", propNode)); iterator.remove(); } else if (propName.equals("coerce")) { - builder.coerce(TypeParsers.nodeBooleanValue("coerce", propNode, parserContext)); + builder.coerce(TypeParsers.nodeBooleanValue(name, "coerce", propNode)); iterator.remove(); } else if (propName.equals("scaling_factor")) { - builder.scalingFactor(NumberFieldMapper.NumberType.DOUBLE.parse(propNode).doubleValue()); + builder.scalingFactor(NumberFieldMapper.NumberType.DOUBLE.parse(propNode, false).doubleValue()); iterator.remove(); } } @@ -207,7 +209,7 @@ public void checkCompatibility(MappedFieldType other, List conflicts, bo @Override public Query termQuery(Object value, QueryShardContext context) { failIfNotIndexed(); - double queryValue = NumberFieldMapper.NumberType.DOUBLE.parse(value).doubleValue(); + double queryValue = NumberFieldMapper.NumberType.DOUBLE.parse(value, false).doubleValue(); long scaledValue = Math.round(queryValue * scalingFactor); Query query = NumberFieldMapper.NumberType.LONG.termQuery(name(), scaledValue); if (boost() != 1f) { @@ -221,7 +223,7 @@ public Query termsQuery(List 
values, QueryShardContext context) { failIfNotIndexed(); List scaledValues = new ArrayList<>(values.size()); for (Object value : values) { - double queryValue = NumberFieldMapper.NumberType.DOUBLE.parse(value).doubleValue(); + double queryValue = NumberFieldMapper.NumberType.DOUBLE.parse(value, false).doubleValue(); long scaledValue = Math.round(queryValue * scalingFactor); scaledValues.add(scaledValue); } @@ -233,11 +235,11 @@ public Query termsQuery(List values, QueryShardContext context) { } @Override - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) { + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) { failIfNotIndexed(); Long lo = null; if (lowerTerm != null) { - double dValue = NumberFieldMapper.NumberType.DOUBLE.parse(lowerTerm).doubleValue(); + double dValue = NumberFieldMapper.NumberType.DOUBLE.parse(lowerTerm, false).doubleValue(); if (includeLower == false) { dValue = Math.nextUp(dValue); } @@ -245,13 +247,13 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower } Long hi = null; if (upperTerm != null) { - double dValue = NumberFieldMapper.NumberType.DOUBLE.parse(upperTerm).doubleValue(); + double dValue = NumberFieldMapper.NumberType.DOUBLE.parse(upperTerm, false).doubleValue(); if (includeUpper == false) { dValue = Math.nextDown(dValue); } hi = Math.round(Math.floor(dValue * scalingFactor)); } - Query query = NumberFieldMapper.NumberType.LONG.rangeQuery(name(), lo, hi, true, true); + Query query = NumberFieldMapper.NumberType.LONG.rangeQuery(name(), lo, hi, true, true, hasDocValues()); if (boost() != 1f) { query = new BoostQuery(query, boost()); } @@ -265,11 +267,16 @@ public FieldStats stats(IndexReader reader) throws IOException { if (stats == null) { return null; } - return new FieldStats.Double(stats.getMaxDoc(), stats.getDocCount(), + if (stats.hasMinMax()) { + return new FieldStats.Double(stats.getMaxDoc(), stats.getDocCount(), stats.getSumDocFreq(), stats.getSumTotalTermFreq(), stats.isSearchable(), stats.isAggregatable(), - stats.getMinValue() == null ? null : stats.getMinValue() / scalingFactor, - stats.getMaxValue() == null ? 
null : stats.getMaxValue() / scalingFactor); + stats.getMinValue() / scalingFactor, + stats.getMaxValue() / scalingFactor); + } + return new FieldStats.Double(stats.getMaxDoc(), stats.getDocCount(), + stats.getSumDocFreq(), stats.getSumTotalTermFreq(), + stats.isSearchable(), stats.isAggregatable()); } @Override @@ -288,7 +295,7 @@ public IndexFieldData build(IndexSettings indexSettings, MappedFieldType fiel } @Override - public Object valueForSearch(Object value) { + public Object valueForDisplay(Object value) { if (value == null) { return null; } @@ -364,7 +371,7 @@ protected ScaledFloatFieldMapper clone() { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { final boolean includeInAll = context.includeInAll(this.includeInAll, this); XContentParser parser = context.parser(); @@ -404,7 +411,7 @@ protected void parseCreateField(ParseContext context, List fields) throws } if (numericValue == null) { - numericValue = NumberFieldMapper.NumberType.DOUBLE.parse(value); + numericValue = NumberFieldMapper.NumberType.DOUBLE.parse(value, false); } if (includeInAll) { @@ -413,8 +420,12 @@ protected void parseCreateField(ParseContext context, List fields) throws double doubleValue = numericValue.doubleValue(); if (Double.isFinite(doubleValue) == false) { - // since we encode to a long, we have no way to carry NaNs and infinities - throw new IllegalArgumentException("[scaled_float] only supports finite values, but got [" + doubleValue + "]"); + if (ignoreMalformed.value()) { + return; + } else { + // since we encode to a long, we have no way to carry NaNs and infinities + throw new IllegalArgumentException("[scaled_float] only supports finite values, but got [" + doubleValue + "]"); + } } long scaledValue = Math.round(doubleValue * fieldType().getScalingFactor()); @@ -487,9 +498,9 @@ public AtomicNumericFieldData loadDirect(LeafReaderContext context) throws Excep } @Override - public org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource comparatorSource(Object missingValue, - MultiValueMode sortMode, Nested nested) { - return new DoubleValuesComparatorSource(this, missingValue, sortMode, nested); + public SortField sortField(@Nullable Object missingValue, MultiValueMode sortMode, Nested nested, boolean reverse) { + final XFieldComparatorSource source = new DoubleValuesComparatorSource(this, missingValue, sortMode, nested); + return new SortField(getFieldName(), source, reverse); } @Override @@ -504,7 +515,10 @@ public Index index() { @Override public NumericType getNumericType() { - return scaledFieldData.getNumericType(); + /** + * {@link ScaledFloatLeafFieldData#getDoubleValues()} transforms the raw long values in `scaled` floats. 
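+ * Consumers of this field data therefore see decoded double values, so the numeric type reported here must be DOUBLE rather than the raw encoded LONG.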
+ */ + return NumericType.DOUBLE; } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/SourceFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/SourceFieldMapper.java index 4854eb5775259..2673bede0a8f9 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/SourceFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/SourceFieldMapper.java @@ -19,9 +19,9 @@ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; import org.apache.lucene.document.StoredField; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; import org.elasticsearch.Version; @@ -45,17 +45,14 @@ import java.util.Iterator; import java.util.List; import java.util.Map; +import java.util.function.Function; -import static org.elasticsearch.common.xcontent.support.XContentMapValues.lenientNodeBooleanValue; - -/** - * - */ public class SourceFieldMapper extends MetadataFieldMapper { public static final String NAME = "_source"; public static final String CONTENT_TYPE = "_source"; + private final Function, Map> filter; public static class Defaults { public static final String NAME = SourceFieldMapper.NAME; @@ -109,7 +106,7 @@ public SourceFieldMapper build(BuilderContext context) { public static class TypeParser implements MetadataFieldMapper.TypeParser { @Override - public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { + public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { Builder builder = new Builder(); for (Iterator> iterator = node.entrySet().iterator(); iterator.hasNext();) { @@ -117,7 +114,7 @@ public MetadataFieldMapper.Builder parse(String name, Map node, String fieldName = entry.getKey(); Object fieldNode = entry.getValue(); if (fieldName.equals("enabled")) { - builder.enabled(lenientNodeBooleanValue(fieldNode)); + builder.enabled(TypeParsers.nodeBooleanValue(name, "enabled", fieldNode)); iterator.remove(); } else if ("format".equals(fieldName) && parserContext.indexVersionCreated().before(Version.V_5_0_0_alpha1)) { // ignore on old indices, reject on and after 5.0 @@ -144,14 +141,15 @@ public MetadataFieldMapper.Builder parse(String name, Map node, } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); return new SourceFieldMapper(indexSettings); } } static final class SourceFieldType extends MappedFieldType { - public SourceFieldType() {} + SourceFieldType() {} protected SourceFieldType(SourceFieldType ref) { super(ref); @@ -190,6 +188,8 @@ private SourceFieldMapper(boolean enabled, String[] includes, String[] excludes, this.enabled = enabled; this.includes = includes; this.excludes = excludes; + final boolean filtered = (includes != null && includes.length > 0) || (excludes != null && excludes.length > 0); + this.filter = enabled && filtered && fieldType().stored() ? 
XContentMapValues.filter(includes, excludes) : null; this.complete = enabled && includes == null && excludes == null; } @@ -226,7 +226,7 @@ public Mapper parse(ParseContext context) throws IOException { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { if (!enabled) { return; } @@ -239,12 +239,11 @@ protected void parseCreateField(ParseContext context, List fields) throws return; } - boolean filtered = (includes != null && includes.length > 0) || (excludes != null && excludes.length > 0); - if (filtered) { + if (filter != null) { // we don't update the context source if we filter, we want to keep it as is... - - Tuple> mapTuple = XContentHelper.convertToMap(source, true); - Map filteredSource = XContentMapValues.filter(mapTuple.v2(), includes, excludes); + Tuple> mapTuple = + XContentHelper.convertToMap(source, true, context.sourceToParse().getXContentType()); + Map filteredSource = filter.apply(mapTuple.v2()); BytesStreamOutput bStream = new BytesStreamOutput(); XContentType contentType = mapTuple.v1(); XContentBuilder builder = XContentFactory.contentBuilder(contentType, bStream).map(filteredSource); @@ -275,15 +274,15 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws } if (includes != null) { - builder.field("includes", includes); + builder.array("includes", includes); } else if (includeDefaults) { - builder.field("includes", Strings.EMPTY_ARRAY); + builder.array("includes", Strings.EMPTY_ARRAY); } if (excludes != null) { - builder.field("excludes", excludes); + builder.array("excludes", excludes); } else if (includeDefaults) { - builder.field("excludes", Strings.EMPTY_ARRAY); + builder.array("excludes", Strings.EMPTY_ARRAY); } builder.endObject(); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/SourceToParse.java b/core/src/main/java/org/elasticsearch/index/mapper/SourceToParse.java index 074ce8829e3a0..fc34e4a0094be 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/SourceToParse.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/SourceToParse.java @@ -24,18 +24,20 @@ import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.XContentType; /** * */ public class SourceToParse { - public static SourceToParse source(String index, String type, String id, BytesReference source) { - return source(Origin.PRIMARY, index, type, id, source); + public static SourceToParse source(String index, String type, String id, BytesReference source, XContentType contentType) { + return source(Origin.PRIMARY, index, type, id, source, contentType); } - public static SourceToParse source(Origin origin, String index, String type, String id, BytesReference source) { - return new SourceToParse(origin, index, type, id, source); + public static SourceToParse source(Origin origin, String index, String type, String id, BytesReference source, + XContentType contentType) { + return new SourceToParse(origin, index, type, id, source, contentType); } private final Origin origin; @@ -56,14 +58,17 @@ public static SourceToParse source(Origin origin, String index, String type, Str private long ttl; - private SourceToParse(Origin origin, String index, String type, String id, BytesReference source) { + private XContentType xContentType; + + private SourceToParse(Origin origin, String 
index, String type, String id, BytesReference source, XContentType xContentType) { this.origin = Objects.requireNonNull(origin); this.index = Objects.requireNonNull(index); this.type = Objects.requireNonNull(type); this.id = Objects.requireNonNull(id); // we always convert back to byte array, since we store it and Field only supports bytes.. // so, we might as well do it here, and improve the performance of working with direct byte arrays - this.source = new BytesArray(source.toBytesRef()); + this.source = new BytesArray(Objects.requireNonNull(source).toBytesRef()); + this.xContentType = Objects.requireNonNull(xContentType); } public Origin origin() { @@ -99,6 +104,10 @@ public String routing() { return this.routing; } + public XContentType getXContentType() { + return this.xContentType; + } + public SourceToParse routing(String routing) { this.routing = routing; return this; diff --git a/core/src/main/java/org/elasticsearch/index/mapper/StringFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/StringFieldMapper.java index ec7a90148ad35..da11f79729563 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/StringFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/StringFieldMapper.java @@ -23,6 +23,7 @@ import org.apache.lucene.document.Field; import org.apache.lucene.document.SortedSetDocValuesField; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; import org.elasticsearch.Version; @@ -88,6 +89,8 @@ public static class Defaults { public static class Builder extends FieldMapper.Builder { + private final DeprecationLogger deprecationLogger; + protected String nullValue = Defaults.NULL_VALUE; /** @@ -102,6 +105,8 @@ public static class Builder extends FieldMapper.Builder node, ParserCo norms = ((Map) norms).get("enabled"); } if (norms != null) { - node.put("norms", TypeParsers.nodeBooleanValue("norms", norms, parserContext)); + node.put("norms", TypeParsers.nodeBooleanValue(fieldName,"norms", norms)); } Object omitNorms = node.remove("omit_norms"); if (omitNorms != null) { - node.put("norms", TypeParsers.nodeBooleanValue("omit_norms", omitNorms, parserContext) == false); + node.put("norms", TypeParsers.nodeBooleanValue(fieldName, "omit_norms", omitNorms) == false); } } { @@ -318,13 +329,13 @@ public Mapper.Builder parse(String fieldName, Map node, ParserCo // we need to update to actual analyzers if they are not set in this case... // so we can inject the position increment gap... 
if (builder.fieldType().indexAnalyzer() == null) { - builder.fieldType().setIndexAnalyzer(parserContext.analysisService().defaultIndexAnalyzer()); + builder.fieldType().setIndexAnalyzer(parserContext.getIndexAnalyzers().getDefaultIndexAnalyzer()); } if (builder.fieldType().searchAnalyzer() == null) { - builder.fieldType().setSearchAnalyzer(parserContext.analysisService().defaultSearchAnalyzer()); + builder.fieldType().setSearchAnalyzer(parserContext.getIndexAnalyzers().getDefaultSearchAnalyzer()); } if (builder.fieldType().searchQuoteAnalyzer() == null) { - builder.fieldType().setSearchQuoteAnalyzer(parserContext.analysisService().defaultSearchQuoteAnalyzer()); + builder.fieldType().setSearchQuoteAnalyzer(parserContext.getIndexAnalyzers().getDefaultSearchQuoteAnalyzer()); } iterator.remove(); } else if (propName.equals("ignore_above")) { @@ -527,7 +538,7 @@ public int getIgnoreAbove() { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { ValueAndBoost valueAndBoost = parseCreateFieldForString(context, fieldType().nullValueAsString(), fieldType().boost()); if (valueAndBoost.value() == null) { return; @@ -640,7 +651,10 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, if (includeDefaults || ignoreAbove != Defaults.IGNORE_ABOVE) { builder.field("ignore_above", ignoreAbove); } - if (includeDefaults || fieldType().fielddata() != ((StringFieldType) defaultFieldType).fielddata()) { + + if (includeDefaults || (fieldType.indexOptions() != IndexOptions.NONE + && fieldType().hasDocValues() == false + && fieldType().fielddata() == false)) { builder.field("fielddata", fieldType().fielddata()); } if (fieldType().fielddata()) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/StringFieldType.java b/core/src/main/java/org/elasticsearch/index/mapper/StringFieldType.java index 9450c2a5b9749..37834b93a1e0f 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/StringFieldType.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/StringFieldType.java @@ -22,7 +22,7 @@ import java.util.List; import org.apache.lucene.index.Term; -import org.apache.lucene.queries.TermsQuery; +import org.apache.lucene.search.TermInSetQuery; import org.apache.lucene.search.FuzzyQuery; import org.apache.lucene.search.MultiTermQuery; import org.apache.lucene.search.PrefixQuery; @@ -46,17 +46,18 @@ protected StringFieldType(MappedFieldType ref) { super(ref); } + @Override public Query termsQuery(List values, QueryShardContext context) { failIfNotIndexed(); BytesRef[] bytesRefs = new BytesRef[values.size()]; for (int i = 0; i < bytesRefs.length; i++) { bytesRefs[i] = indexedValueForSearch(values.get(i)); } - return new TermsQuery(name(), bytesRefs); + return new TermInSetQuery(name(), bytesRefs); } @Override - public final Query fuzzyQuery(Object value, Fuzziness fuzziness, int prefixLength, int maxExpansions, + public Query fuzzyQuery(Object value, Fuzziness fuzziness, int prefixLength, int maxExpansions, boolean transpositions) { failIfNotIndexed(); return new FuzzyQuery(new Term(name(), indexedValueForSearch(value)), @@ -64,7 +65,7 @@ public final Query fuzzyQuery(Object value, Fuzziness fuzziness, int prefixLengt } @Override - public final Query prefixQuery(String value, MultiTermQuery.RewriteMethod method, QueryShardContext context) { + public Query prefixQuery(String value, MultiTermQuery.RewriteMethod method, QueryShardContext 
context) { failIfNotIndexed(); PrefixQuery query = new PrefixQuery(new Term(name(), indexedValueForSearch(value))); if (method != null) { @@ -74,7 +75,7 @@ public final Query prefixQuery(String value, MultiTermQuery.RewriteMethod method } @Override - public final Query regexpQuery(String value, int flags, int maxDeterminizedStates, + public Query regexpQuery(String value, int flags, int maxDeterminizedStates, MultiTermQuery.RewriteMethod method, QueryShardContext context) { failIfNotIndexed(); RegexpQuery query = new RegexpQuery(new Term(name(), indexedValueForSearch(value)), flags, maxDeterminizedStates); @@ -85,7 +86,7 @@ public final Query regexpQuery(String value, int flags, int maxDeterminizedState } @Override - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) { + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) { failIfNotIndexed(); return new TermRangeQuery(name(), lowerTerm == null ? null : indexedValueForSearch(lowerTerm), diff --git a/core/src/main/java/org/elasticsearch/index/mapper/TTLFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/TTLFieldMapper.java index f95f42156e1ca..8c3ccf3c765d1 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/TTLFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/TTLFieldMapper.java @@ -19,8 +19,8 @@ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.elasticsearch.Version; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.settings.Settings; @@ -28,7 +28,6 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.AlreadyExpiredException; -import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Date; @@ -103,7 +102,7 @@ public MetadataFieldMapper.Builder parse(String name, Map node, String fieldName = entry.getKey(); Object fieldNode = entry.getValue(); if (fieldName.equals("enabled")) { - EnabledAttributeMapper enabledState = lenientNodeBooleanValue(fieldNode) ? EnabledAttributeMapper.ENABLED : EnabledAttributeMapper.DISABLED; + EnabledAttributeMapper enabledState = lenientNodeBooleanValue(fieldNode, fieldName) ? 
EnabledAttributeMapper.ENABLED : EnabledAttributeMapper.DISABLED; builder.enabled(enabledState); iterator.remove(); } else if (fieldName.equals("default")) { @@ -118,7 +117,8 @@ public MetadataFieldMapper.Builder parse(String name, Map node, } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); return new TTLFieldMapper(indexSettings); } } @@ -139,15 +139,9 @@ public TTLFieldType clone() { // Overrides valueForSearch to display live value of remaining ttl @Override - public Object valueForSearch(Object value) { - long now; - SearchContext searchContext = SearchContext.current(); - if (searchContext != null) { - now = searchContext.nowInMillis(); - } else { - now = System.currentTimeMillis(); - } - Long val = (Long) super.valueForSearch(value); + public Object valueForDisplay(Object value) { + final long now = System.currentTimeMillis(); + Long val = (Long) super.valueForDisplay(value); return val - now; } } @@ -177,11 +171,6 @@ public long defaultTTL() { return this.defaultTTL; } - // Other implementation for realtime get display - public Object valueForSearch(long expirationTime) { - return expirationTime - System.currentTimeMillis(); - } - @Override public void preParse(ParseContext context) throws IOException { } @@ -209,7 +198,7 @@ public Mapper parse(ParseContext context) throws IOException, MapperParsingExcep } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException, AlreadyExpiredException { + protected void parseCreateField(ParseContext context, List fields) throws IOException, AlreadyExpiredException { if (enabledState.enabled) { long ttl = context.sourceToParse().ttl(); if (ttl <= 0 && defaultTTL > 0) { // no ttl provided so we use the default value @@ -273,4 +262,5 @@ protected void doMerge(Mapper mergeWith, boolean updateAllTypes) { } } } + } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/TermBasedFieldType.java b/core/src/main/java/org/elasticsearch/index/mapper/TermBasedFieldType.java index 71d07aa385f51..89b09cc068e63 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/TermBasedFieldType.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/TermBasedFieldType.java @@ -22,9 +22,9 @@ import java.util.List; import org.apache.lucene.index.Term; -import org.apache.lucene.queries.TermsQuery; import org.apache.lucene.search.BoostQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermInSetQuery; import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.BytesRef; import org.elasticsearch.Version; @@ -35,7 +35,7 @@ * with the inverted index. 
*/ abstract class TermBasedFieldType extends MappedFieldType { - public TermBasedFieldType() {} + TermBasedFieldType() {} protected TermBasedFieldType(MappedFieldType ref) { super(ref); @@ -66,7 +66,7 @@ public Query termsQuery(List values, QueryShardContext context) { for (int i = 0; i < bytesRefs.length; i++) { bytesRefs[i] = indexedValueForSearch(values.get(i)); } - return new TermsQuery(name(), bytesRefs); + return new TermInSetQuery(name(), bytesRefs); } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/TextFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/TextFieldMapper.java index 53b02717cc1da..f4bf643b1673e 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/TextFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/TextFieldMapper.java @@ -21,6 +21,7 @@ import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.search.Query; import org.elasticsearch.Version; import org.elasticsearch.common.settings.Settings; @@ -174,13 +175,13 @@ public Mapper.Builder parse(String fieldName, Map node, ParserCo } node.put("fielddata", fielddata); } - + return new StringFieldMapper.TypeParser().parse(fieldName, node, parserContext); } TextFieldMapper.Builder builder = new TextFieldMapper.Builder(fieldName); - builder.fieldType().setIndexAnalyzer(parserContext.analysisService().defaultIndexAnalyzer()); - builder.fieldType().setSearchAnalyzer(parserContext.analysisService().defaultSearchAnalyzer()); - builder.fieldType().setSearchQuoteAnalyzer(parserContext.analysisService().defaultSearchQuoteAnalyzer()); + builder.fieldType().setIndexAnalyzer(parserContext.getIndexAnalyzers().getDefaultIndexAnalyzer()); + builder.fieldType().setSearchAnalyzer(parserContext.getIndexAnalyzers().getDefaultSearchAnalyzer()); + builder.fieldType().setSearchQuoteAnalyzer(parserContext.getIndexAnalyzers().getDefaultSearchQuoteAnalyzer()); parseTextField(builder, fieldName, node, parserContext); for (Iterator> iterator = node.entrySet().iterator(); iterator.hasNext();) { Map.Entry entry = iterator.next(); @@ -334,7 +335,7 @@ public IndexFieldData.Builder fielddataBuilder() { if (fielddata == false) { throw new IllegalArgumentException("Fielddata is disabled on text fields by default. Set fielddata=true on [" + name() + "] in order to load fielddata in memory by uninverting the inverted index. Note that this can however " - + "use significant memory."); + + "use significant memory. 
Alternatively use a keyword field instead."); } return new PagedBytesIndexFieldData.Builder(fielddataMinFrequency, fielddataMaxFrequency, fielddataMinSegmentSize); } @@ -371,7 +372,7 @@ public int getPositionIncrementGap() { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { final String value; if (context.externalValueSet()) { value = context.externalValue().toString(); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/TimestampFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/TimestampFieldMapper.java index d57d2f89c6fd6..be315b4af8c87 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/TimestampFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/TimestampFieldMapper.java @@ -19,9 +19,9 @@ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; import org.apache.lucene.document.NumericDocValuesField; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.elasticsearch.Version; import org.elasticsearch.action.TimestampParsingException; import org.elasticsearch.common.joda.FormatDateTimeFormatter; @@ -130,7 +130,7 @@ public MetadataFieldMapper.Builder parse(String name, Map node, String fieldName = entry.getKey(); Object fieldNode = entry.getValue(); if (fieldName.equals("enabled")) { - EnabledAttributeMapper enabledState = lenientNodeBooleanValue(fieldNode) ? EnabledAttributeMapper.ENABLED : EnabledAttributeMapper.DISABLED; + EnabledAttributeMapper enabledState = lenientNodeBooleanValue(fieldNode, fieldName) ? EnabledAttributeMapper.ENABLED : EnabledAttributeMapper.DISABLED; builder.enabled(enabledState); iterator.remove(); } else if (fieldName.equals("format")) { @@ -145,7 +145,7 @@ public MetadataFieldMapper.Builder parse(String name, Map node, } iterator.remove(); } else if (fieldName.equals("ignore_missing")) { - ignoreMissing = lenientNodeBooleanValue(fieldNode); + ignoreMissing = lenientNodeBooleanValue(fieldNode, fieldName); builder.ignoreMissing(ignoreMissing); iterator.remove(); } @@ -160,7 +160,8 @@ public MetadataFieldMapper.Builder parse(String name, Map node, } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); return new TimestampFieldMapper(indexSettings, fieldType); } } @@ -179,7 +180,7 @@ public TimestampFieldType clone() { } @Override - public Object valueForSearch(Object value) { + public Object valueForDisplay(Object value) { return value; } } @@ -237,7 +238,7 @@ public Mapper parse(ParseContext context) throws IOException { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { if (enabledState.enabled) { long timestamp = context.sourceToParse().timestamp(); if (fieldType().indexOptions() != IndexOptions.NONE || fieldType().stored()) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/TokenCountFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/TokenCountFieldMapper.java index 9eeaf4012fad0..d4572837e0648 100644 --- 
a/core/src/main/java/org/elasticsearch/index/mapper/TokenCountFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/TokenCountFieldMapper.java @@ -22,8 +22,8 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute; -import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.elasticsearch.Version; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -35,6 +35,7 @@ import java.util.Map; import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeIntegerValue; +import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeBooleanValue; import static org.elasticsearch.index.mapper.TypeParsers.parseField; /** @@ -46,10 +47,12 @@ public class TokenCountFieldMapper extends FieldMapper { public static class Defaults { public static final MappedFieldType FIELD_TYPE = new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.INTEGER); + public static final boolean DEFAULT_POSITION_INCREMENTS = true; } public static class Builder extends FieldMapper.Builder { private NamedAnalyzer analyzer; + private boolean enablePositionIncrements = Defaults.DEFAULT_POSITION_INCREMENTS; public Builder(String name) { super(name, Defaults.FIELD_TYPE, Defaults.FIELD_TYPE); @@ -65,18 +68,26 @@ public NamedAnalyzer analyzer() { return analyzer; } + public Builder enablePositionIncrements(boolean enablePositionIncrements) { + this.enablePositionIncrements = enablePositionIncrements; + return this; + } + + public boolean enablePositionIncrements() { + return enablePositionIncrements; + } + @Override public TokenCountFieldMapper build(BuilderContext context) { setupFieldType(context); return new TokenCountFieldMapper(name, fieldType, defaultFieldType, - context.indexSettings(), analyzer, multiFieldsBuilder.build(this, context), copyTo); + context.indexSettings(), analyzer, enablePositionIncrements, multiFieldsBuilder.build(this, context), copyTo); } } public static class TypeParser implements Mapper.TypeParser { @Override - @SuppressWarnings("unchecked") - public Mapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { + public Mapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { if (parserContext.indexVersionCreated().before(Version.V_5_0_0_alpha2)) { return new LegacyTokenCountFieldMapper.TypeParser().parse(name, node, parserContext); } @@ -89,12 +100,15 @@ public Mapper.Builder parse(String name, Map node, ParserContext builder.nullValue(nodeIntegerValue(propNode)); iterator.remove(); } else if (propName.equals("analyzer")) { - NamedAnalyzer analyzer = parserContext.analysisService().analyzer(propNode.toString()); + NamedAnalyzer analyzer = parserContext.getIndexAnalyzers().get(propNode.toString()); if (analyzer == null) { throw new MapperParsingException("Analyzer [" + propNode.toString() + "] not found for field [" + name + "]"); } builder.analyzer(analyzer); iterator.remove(); + } else if (propName.equals("enable_position_increments")) { + builder.enablePositionIncrements(nodeBooleanValue(propNode)); + iterator.remove(); } } parseField(builder, name, node, parserContext); @@ -106,15 +120,17 @@ public Mapper.Builder parse(String name, Map node, ParserContext } private NamedAnalyzer analyzer; + private 
boolean enablePositionIncrements; protected TokenCountFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType, - Settings indexSettings, NamedAnalyzer analyzer, MultiFields multiFields, CopyTo copyTo) { + Settings indexSettings, NamedAnalyzer analyzer, boolean enablePositionIncrements, MultiFields multiFields, CopyTo copyTo) { super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, copyTo); this.analyzer = analyzer; + this.enablePositionIncrements = enablePositionIncrements; } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { final String value; if (context.externalValueSet()) { value = context.externalValue().toString(); @@ -122,11 +138,15 @@ protected void parseCreateField(ParseContext context, List fields) throws value = context.parser().textOrNull(); } + if (value == null && fieldType().nullValue() == null) { + return; + } + final int tokenCount; if (value == null) { tokenCount = (Integer) fieldType().nullValue(); } else { - tokenCount = countPositions(analyzer, name(), value); + tokenCount = countPositions(analyzer, name(), value, enablePositionIncrements); } boolean indexed = fieldType().indexOptions() != IndexOptions.NONE; @@ -140,19 +160,26 @@ protected void parseCreateField(ParseContext context, List fields) throws * @param analyzer analyzer to create token stream * @param fieldName field name to pass to analyzer * @param fieldValue field value to pass to analyzer + * @param enablePositionIncrements should we count position increments ? * @return number of position increments in a token stream * @throws IOException if tokenStream throws it */ - static int countPositions(Analyzer analyzer, String fieldName, String fieldValue) throws IOException { + static int countPositions(Analyzer analyzer, String fieldName, String fieldValue, boolean enablePositionIncrements) throws IOException { try (TokenStream tokenStream = analyzer.tokenStream(fieldName, fieldValue)) { int count = 0; PositionIncrementAttribute position = tokenStream.addAttribute(PositionIncrementAttribute.class); tokenStream.reset(); while (tokenStream.incrementToken()) { - count += position.getPositionIncrement(); + if (enablePositionIncrements) { + count += position.getPositionIncrement(); + } else { + count += Math.min(1, position.getPositionIncrement()); + } } tokenStream.end(); - count += position.getPositionIncrement(); + if (enablePositionIncrements) { + count += position.getPositionIncrement(); + } return count; } } @@ -165,6 +192,14 @@ public String analyzer() { return analyzer.name(); } + /** + * Indicates if position increments are counted. 
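+ * When disabled, position gaps (for example from removed stopwords) are ignored and each token contributes at most one to the count.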
+ * @return true if position increments are counted + */ + public boolean enablePositionIncrements() { + return enablePositionIncrements; + } + @Override protected String contentType() { return CONTENT_TYPE; @@ -174,12 +209,16 @@ protected String contentType() { protected void doMerge(Mapper mergeWith, boolean updateAllTypes) { super.doMerge(mergeWith, updateAllTypes); this.analyzer = ((TokenCountFieldMapper) mergeWith).analyzer; + this.enablePositionIncrements = ((TokenCountFieldMapper) mergeWith).enablePositionIncrements; } @Override protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, Params params) throws IOException { super.doXContentBody(builder, includeDefaults, params); builder.field("analyzer", analyzer()); + if (includeDefaults || enablePositionIncrements() != Defaults.DEFAULT_POSITION_INCREMENTS) { + builder.field("enable_position_increments", enablePositionIncrements()); + } } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/TypeFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/TypeFieldMapper.java index 0889fab663668..68d2ac9bcdf53 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/TypeFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/TypeFieldMapper.java @@ -21,29 +21,41 @@ import org.apache.lucene.document.Field; import org.apache.lucene.document.SortedSetDocValuesField; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.Term; import org.apache.lucene.index.TermContext; +import org.apache.lucene.search.BooleanClause; +import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.MatchAllDocsQuery; +import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.TermQuery; +import org.apache.lucene.search.TermInSetQuery; import org.apache.lucene.util.BytesRef; import org.elasticsearch.Version; -import org.elasticsearch.common.Nullable; +import org.elasticsearch.action.fieldstats.FieldStats; import org.elasticsearch.common.lucene.Lucene; +import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData; import org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData; +import org.elasticsearch.index.fielddata.plain.ConstantIndexFieldData; import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; +import java.util.Arrays; +import java.util.Collection; +import java.util.HashSet; import java.util.List; import java.util.Map; import java.util.Objects; +import java.util.Set; +import java.util.function.Function; /** * @@ -73,12 +85,13 @@ public static class Defaults { public static class TypeParser implements MetadataFieldMapper.TypeParser { @Override - public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { + public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { throw new MapperParsingException(NAME + " is not configurable"); } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public 
MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); return new TypeFieldMapper(indexSettings, fieldType); } } @@ -86,7 +99,7 @@ public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fi static final class TypeFieldType extends StringFieldType { private boolean fielddata; - public TypeFieldType() { + TypeFieldType() { this.fielddata = false; } @@ -127,63 +140,148 @@ public void setFielddata(boolean fielddata) { checkIfFrozen(); this.fielddata = fielddata; } - + @Override public IndexFieldData.Builder fielddataBuilder() { if (hasDocValues()) { return new DocValuesIndexFieldData.Builder(); - } - assert indexOptions() != IndexOptions.NONE; - if (fielddata) { + } else if (fielddata) { return new PagedBytesIndexFieldData.Builder(TextFieldMapper.Defaults.FIELDDATA_MIN_FREQUENCY, - TextFieldMapper.Defaults.FIELDDATA_MAX_FREQUENCY, - TextFieldMapper.Defaults.FIELDDATA_MIN_SEGMENT_SIZE); + TextFieldMapper.Defaults.FIELDDATA_MAX_FREQUENCY, + TextFieldMapper.Defaults.FIELDDATA_MIN_SEGMENT_SIZE); + } else { + // means the index has a single type and the type field is implicit + Function typeFunction = mapperService -> { + Collection types = mapperService.types(); + if (types.size() > 1) { + throw new AssertionError(); + } + // If we reach here, there is necessarily one type since we were able to find a `_type` field + String type = types.iterator().next(); + return type; + }; + return new ConstantIndexFieldData.Builder(typeFunction); + } + } + + @Override + public FieldStats stats(IndexReader reader) throws IOException { + if (reader.maxDoc() == 0) { + return null; } - return super.fielddataBuilder(); + return new FieldStats.Text(reader.maxDoc(), reader.numDocs(), reader.maxDoc(), reader.maxDoc(), + isSearchable(), isAggregatable()); } @Override - public Query termQuery(Object value, @Nullable QueryShardContext context) { - if (indexOptions() == IndexOptions.NONE) { - throw new AssertionError(); + public boolean isSearchable() { + return true; + } + + @Override + public Query termQuery(Object value, QueryShardContext context) { + return termsQuery(Arrays.asList(value), context); + } + + @Override + public Query termsQuery(List values, QueryShardContext context) { + if (context.getIndexSettings().isSingleType()) { + Collection indexTypes = context.getMapperService().types(); + if (indexTypes.isEmpty()) { + return new MatchNoDocsQuery("No types"); + } + assert indexTypes.size() == 1; + BytesRef indexType = indexedValueForSearch(indexTypes.iterator().next()); + if (values.stream() + .map(this::indexedValueForSearch) + .anyMatch(indexType::equals)) { + if (context.getMapperService().hasNested()) { + // type filters are expected not to match nested docs + return Queries.newNonNestedFilter(); + } else { + return new MatchAllDocsQuery(); + } + } else { + return new MatchNoDocsQuery("Type list does not contain the index type"); + } + } else { + if (indexOptions() == IndexOptions.NONE) { + throw new AssertionError(); + } + final BytesRef[] types = values.stream() + .map(this::indexedValueForSearch) + .toArray(size -> new BytesRef[size]); + return new TypesQuery(types); } - return new TypeQuery(indexedValueForSearch(value)); } @Override public void checkCompatibility(MappedFieldType other, - List conflicts, boolean strict) { + List conflicts, boolean strict) { super.checkCompatibility(other, conflicts, strict); TypeFieldType otherType = (TypeFieldType) other; if 
(strict) { if (fielddata() != otherType.fielddata()) { conflicts.add("mapper [" + name() + "] is used by multiple types. Set update_all_types to true to update [fielddata] " - + "across all types."); + + "across all types."); } } } } - public static class TypeQuery extends Query { + /** + * Specialization for a disjunction over many _type + */ + public static class TypesQuery extends Query { + // Same threshold as TermInSetQuery + private static final int BOOLEAN_REWRITE_TERM_COUNT_THRESHOLD = 16; + + private final BytesRef[] types; - private final BytesRef type; + public TypesQuery(BytesRef... types) { + if (types == null) { + throw new NullPointerException("types cannot be null."); + } + if (types.length == 0) { + throw new IllegalArgumentException("types must contains at least one value."); + } + this.types = types; + } - public TypeQuery(BytesRef type) { - this.type = Objects.requireNonNull(type); + public BytesRef[] getTerms() { + return types; } @Override public Query rewrite(IndexReader reader) throws IOException { - Term term = new Term(CONTENT_TYPE, type); - TermContext context = TermContext.build(reader.getContext(), term); - if (context.docFreq() == reader.maxDoc()) { - // All docs have the same type. - // Using a match_all query will help Lucene perform some optimizations - // For instance, match_all queries as filter clauses are automatically removed - return new MatchAllDocsQuery(); - } else { - return new ConstantScoreQuery(new TermQuery(term, context)); + final int threshold = Math.min(BOOLEAN_REWRITE_TERM_COUNT_THRESHOLD, BooleanQuery.getMaxClauseCount()); + if (types.length <= threshold) { + Set uniqueTypes = new HashSet<>(); + BooleanQuery.Builder bq = new BooleanQuery.Builder(); + int totalDocFreq = 0; + for (BytesRef type : types) { + if (uniqueTypes.add(type)) { + Term term = new Term(CONTENT_TYPE, type); + TermContext context = TermContext.build(reader.getContext(), term); + if (context.docFreq() == 0) { + // this _type is not present in the reader + continue; + } + totalDocFreq += context.docFreq(); + // strict equality should be enough ? 
+ if (totalDocFreq >= reader.maxDoc()) { + assert totalDocFreq == reader.maxDoc(); + // Matches all docs since _type is a single value field + // Using a match_all query will help Lucene perform some optimizations + // For instance, match_all queries as filter clauses are automatically removed + return new MatchAllDocsQuery(); + } + bq.add(new TermQuery(term, context), BooleanClause.Occur.SHOULD); + } + } + return new ConstantScoreQuery(bq.build()); } + return new TermInSetQuery(CONTENT_TYPE, types); } @Override @@ -191,20 +289,26 @@ public boolean equals(Object obj) { if (sameClassAs(obj) == false) { return false; } - TypeQuery that = (TypeQuery) obj; - return type.equals(that.type); + TypesQuery that = (TypesQuery) obj; + return Arrays.equals(types, that.types); } @Override public int hashCode() { - return 31 * classHash() + type.hashCode(); + return 31 * classHash() + Arrays.hashCode(types); } @Override public String toString(String field) { - return "_type:" + type; + StringBuilder builder = new StringBuilder(); + for (BytesRef type : types) { + if (builder.length() > 0) { + builder.append(' '); + } + builder.append(new Term(CONTENT_TYPE, type).toString()); + } + return builder.toString(); } - } private TypeFieldMapper(Settings indexSettings, MappedFieldType existing) { @@ -217,12 +321,18 @@ private TypeFieldMapper(MappedFieldType fieldType, Settings indexSettings) { } private static MappedFieldType defaultFieldType(Settings indexSettings) { - MappedFieldType defaultFieldType = Defaults.FIELD_TYPE.clone(); Version indexCreated = Version.indexCreated(indexSettings); - if (indexCreated.before(Version.V_2_1_0)) { + MappedFieldType defaultFieldType = Defaults.FIELD_TYPE.clone(); + if (MapperService.INDEX_MAPPING_SINGLE_TYPE_SETTING.get(indexSettings)) { + defaultFieldType.setIndexOptions(IndexOptions.NONE); + defaultFieldType.setHasDocValues(false); + } else if (indexCreated.before(Version.V_2_1_0)) { // enables fielddata loading, doc values was disabled on _type between 2.0 and 2.1. 
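The `TypesQuery` introduced above is a disjunction over `_type` values: for a handful of types it rewrites to a constant-score boolean OR of term queries (short-circuiting to a match-all when the terms cover every document in the reader), and beyond a threshold it falls back to a `TermInSetQuery`. The sketch below shows the same shape with plain Lucene, leaving out the reader-based doc-frequency shortcut; the `typesFilter` helper name is invented for the example.

```java
import java.util.LinkedHashSet;
import java.util.Set;

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermInSetQuery;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.util.BytesRef;

public class TypeDisjunctionSketch {

    private static final int BOOLEAN_REWRITE_TERM_COUNT_THRESHOLD = 16;

    // Match documents whose "_type" term is any of the given values: a constant-score
    // boolean OR of term queries for a few types, a TermInSetQuery once the number of
    // distinct types passes the threshold.
    static Query typesFilter(BytesRef... types) {
        Set<BytesRef> unique = new LinkedHashSet<>();
        for (BytesRef type : types) {
            unique.add(type);
        }
        if (unique.size() <= BOOLEAN_REWRITE_TERM_COUNT_THRESHOLD) {
            BooleanQuery.Builder bq = new BooleanQuery.Builder();
            for (BytesRef type : unique) {
                bq.add(new TermQuery(new Term("_type", type)), BooleanClause.Occur.SHOULD);
            }
            return new ConstantScoreQuery(bq.build());
        }
        return new TermInSetQuery("_type", unique);
    }

    public static void main(String[] args) {
        Query q = typesFilter(new BytesRef("tweet"), new BytesRef("user"));
        System.out.println(q); // roughly: ConstantScore(_type:tweet _type:user)
    }
}
```

Wrapping the clauses in a constant score reflects that `_type` filtering is a yes/no decision, so no scoring work is needed on top of the disjunction.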
((TypeFieldType) defaultFieldType).setFielddata(true); + defaultFieldType.setIndexOptions(IndexOptions.DOCS); + defaultFieldType.setHasDocValues(false); } else { + defaultFieldType.setIndexOptions(IndexOptions.DOCS); defaultFieldType.setHasDocValues(true); } return defaultFieldType; @@ -244,7 +354,7 @@ public Mapper parse(ParseContext context) throws IOException { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) throws IOException { if (fieldType().indexOptions() == IndexOptions.NONE && !fieldType().stored()) { return; } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/TypeParsers.java b/core/src/main/java/org/elasticsearch/index/mapper/TypeParsers.java index eaa97ac5100e0..957f05aced3a7 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/TypeParsers.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/TypeParsers.java @@ -31,17 +31,13 @@ import org.elasticsearch.index.analysis.NamedAnalyzer; import org.elasticsearch.index.similarity.SimilarityProvider; -import java.util.Arrays; import java.util.Collections; -import java.util.HashSet; import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.Map.Entry; -import java.util.Set; import static org.elasticsearch.common.xcontent.support.XContentMapValues.isArray; -import static org.elasticsearch.common.xcontent.support.XContentMapValues.lenientNodeBooleanValue; import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeFloatValue; import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeIntegerValue; import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeMapValue; @@ -52,6 +48,8 @@ */ public class TypeParsers { + protected static final DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(TypeParsers.class)); + public static final String DOC_VALUES = "doc_values"; public static final String INDEX_OPTIONS_DOCS = "docs"; public static final String INDEX_OPTIONS_FREQS = "freqs"; @@ -59,19 +57,9 @@ public class TypeParsers { public static final String INDEX_OPTIONS_OFFSETS = "offsets"; private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(TypeParsers.class)); - private static final Set BOOLEAN_STRINGS = new HashSet<>(Arrays.asList("true", "false")); - public static boolean nodeBooleanValue(String name, Object node, Mapper.TypeParser.ParserContext parserContext) { - // Hook onto ParseFieldMatcher so that parsing becomes strict when setting index.query.parse.strict - if (parserContext.parseFieldMatcher().isStrict()) { - return XContentMapValues.nodeBooleanValue(node); - } else { - // TODO: remove this leniency in 6.0 - if (BOOLEAN_STRINGS.contains(node.toString()) == false) { - DEPRECATION_LOGGER.deprecated("Expected a boolean for property [{}] but got [{}]", name, node); - } - return XContentMapValues.lenientNodeBooleanValue(node); - } + public static boolean nodeBooleanValue(String fieldName, String propertyName, Object node) { + return XContentMapValues.lenientNodeBooleanValue(node, fieldName + "." 
+ propertyName); } @Deprecated // for legacy ints only @@ -85,10 +73,10 @@ public static void parseNumberField(LegacyNumberFieldMapper.Builder builder, Str builder.precisionStep(nodeIntegerValue(propNode)); iterator.remove(); } else if (propName.equals("ignore_malformed")) { - builder.ignoreMalformed(nodeBooleanValue("ignore_malformed", propNode, parserContext)); + builder.ignoreMalformed(nodeBooleanValue(name, "ignore_malformed", propNode)); iterator.remove(); } else if (propName.equals("coerce")) { - builder.coerce(nodeBooleanValue("coerce", propNode, parserContext)); + builder.coerce(nodeBooleanValue(name, "coerce", propNode)); iterator.remove(); } else if (propName.equals("similarity")) { SimilarityProvider similarityProvider = resolveSimilarity(parserContext, name, propNode.toString()); @@ -100,7 +88,8 @@ public static void parseNumberField(LegacyNumberFieldMapper.Builder builder, Str } } - private static void parseAnalyzersAndTermVectors(FieldMapper.Builder builder, String name, Map fieldNode, Mapper.TypeParser.ParserContext parserContext) { + private static void parseAnalyzersAndTermVectors(FieldMapper.Builder builder, String name, Map fieldNode, + Mapper.TypeParser.ParserContext parserContext) { NamedAnalyzer indexAnalyzer = null; NamedAnalyzer searchAnalyzer = null; NamedAnalyzer searchQuoteAnalyzer = null; @@ -113,33 +102,34 @@ private static void parseAnalyzersAndTermVectors(FieldMapper.Builder builder, St parseTermVector(name, propNode.toString(), builder); iterator.remove(); } else if (propName.equals("store_term_vectors")) { - builder.storeTermVectors(nodeBooleanValue("store_term_vectors", propNode, parserContext)); + builder.storeTermVectors(nodeBooleanValue(name, "store_term_vectors", propNode)); iterator.remove(); } else if (propName.equals("store_term_vector_offsets")) { - builder.storeTermVectorOffsets(nodeBooleanValue("store_term_vector_offsets", propNode, parserContext)); + builder.storeTermVectorOffsets(nodeBooleanValue(name, "store_term_vector_offsets", propNode)); iterator.remove(); } else if (propName.equals("store_term_vector_positions")) { - builder.storeTermVectorPositions(nodeBooleanValue("store_term_vector_positions", propNode, parserContext)); + builder.storeTermVectorPositions( + nodeBooleanValue(name, "store_term_vector_positions", propNode)); iterator.remove(); } else if (propName.equals("store_term_vector_payloads")) { - builder.storeTermVectorPayloads(nodeBooleanValue("store_term_vector_payloads", propNode, parserContext)); + builder.storeTermVectorPayloads(nodeBooleanValue(name,"store_term_vector_payloads", propNode)); iterator.remove(); } else if (propName.equals("analyzer")) { - NamedAnalyzer analyzer = parserContext.analysisService().analyzer(propNode.toString()); + NamedAnalyzer analyzer = parserContext.getIndexAnalyzers().get(propNode.toString()); if (analyzer == null) { throw new MapperParsingException("analyzer [" + propNode.toString() + "] not found for field [" + name + "]"); } indexAnalyzer = analyzer; iterator.remove(); } else if (propName.equals("search_analyzer")) { - NamedAnalyzer analyzer = parserContext.analysisService().analyzer(propNode.toString()); + NamedAnalyzer analyzer = parserContext.getIndexAnalyzers().get(propNode.toString()); if (analyzer == null) { throw new MapperParsingException("analyzer [" + propNode.toString() + "] not found for field [" + name + "]"); } searchAnalyzer = analyzer; iterator.remove(); } else if (propName.equals("search_quote_analyzer")) { - NamedAnalyzer analyzer = 
parserContext.analysisService().analyzer(propNode.toString()); + NamedAnalyzer analyzer = parserContext.getIndexAnalyzers().get(propNode.toString()); if (analyzer == null) { throw new MapperParsingException("analyzer [" + propNode.toString() + "] not found for field [" + name + "]"); } @@ -153,7 +143,8 @@ private static void parseAnalyzersAndTermVectors(FieldMapper.Builder builder, St } if (searchAnalyzer == null && searchQuoteAnalyzer != null) { - throw new MapperParsingException("analyzer and search_analyzer on field [" + name + "] must be set when search_quote_analyzer is set"); + throw new MapperParsingException("analyzer and search_analyzer on field [" + name + + "] must be set when search_quote_analyzer is set"); } if (searchAnalyzer == null) { @@ -175,16 +166,17 @@ private static void parseAnalyzersAndTermVectors(FieldMapper.Builder builder, St } } - public static boolean parseNorms(FieldMapper.Builder builder, String propName, Object propNode, Mapper.TypeParser.ParserContext parserContext) { + public static boolean parseNorms(FieldMapper.Builder builder, String fieldName, String propName, Object propNode, + Mapper.TypeParser.ParserContext parserContext) { if (propName.equals("norms")) { if (propNode instanceof Map) { final Map properties = nodeMapValue(propNode, "norms"); - for (Iterator> propsIterator = properties.entrySet().iterator(); propsIterator.hasNext();) { + for (Iterator> propsIterator = properties.entrySet().iterator(); propsIterator.hasNext(); ) { Entry entry2 = propsIterator.next(); final String propName2 = entry2.getKey(); final Object propNode2 = entry2.getValue(); if (propName2.equals("enabled")) { - builder.omitNorms(!lenientNodeBooleanValue(propNode2)); + builder.omitNorms(nodeBooleanValue(fieldName, "enabled", propNode2) == false); propsIterator.remove(); } else if (propName2.equals("loading")) { // ignore for bw compat @@ -192,13 +184,14 @@ public static boolean parseNorms(FieldMapper.Builder builder, String propName, O } } DocumentMapperParser.checkNoRemainingFields(propName, properties, parserContext.indexVersionCreated()); - DEPRECATION_LOGGER.deprecated("The [norms{enabled:true/false}] way of specifying norms is deprecated, please use [norms:true/false] instead"); + DEPRECATION_LOGGER.deprecated("The [norms{enabled:true/false}] way of specifying norms is deprecated, please use " + + "[norms:true/false] instead"); } else { - builder.omitNorms(nodeBooleanValue("norms", propNode, parserContext) == false); + builder.omitNorms(nodeBooleanValue(fieldName,"norms", propNode) == false); } return true; } else if (propName.equals("omit_norms")) { - builder.omitNorms(nodeBooleanValue("norms", propNode, parserContext)); + builder.omitNorms(nodeBooleanValue(fieldName,"norms", propNode)); DEPRECATION_LOGGER.deprecated("[omit_norms] is deprecated, please use [norms] instead with the opposite boolean value"); return true; } else { @@ -210,14 +203,15 @@ public static boolean parseNorms(FieldMapper.Builder builder, String propName, O * Parse text field attributes. In addition to {@link #parseField common attributes} * this will parse analysis and term-vectors related settings. 
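The reworked `nodeBooleanValue` above keeps lenient boolean handling for mapping properties such as `store_term_vectors` and `norms`, but now reports the offending field and property together when a non-canonical value is seen. The snippet below is only an illustration of that kind of leniency in plain Java, not the actual `XContentMapValues` helper; the exact set of "false-ish" strings it accepts is an assumption made for the example.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class LenientBooleanSketch {

    private static final Set<String> CANONICAL = new HashSet<>(Arrays.asList("true", "false"));

    // Coerce a mapping property to a boolean, warning when the value is not a real boolean.
    // The list of values treated as false below is an assumption for illustration only.
    static boolean lenientBoolean(String fieldName, String propertyName, Object node) {
        String value = String.valueOf(node);
        if (CANONICAL.contains(value) == false) {
            System.err.println("[deprecation] Expected a boolean for [" + fieldName + "." + propertyName
                    + "] but got [" + value + "]");
        }
        switch (value) {
            case "false":
            case "off":
            case "no":
            case "0":
            case "":
                return false;
            default:
                return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(lenientBoolean("title", "store_term_vectors", "yes")); // true, with a warning
        System.out.println(lenientBoolean("title", "norms", false));              // false, no warning
    }
}
```

The point is that only `true`/`false` parse silently; anything else still works but is flagged, so it can be removed outright in a later major version.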
*/ - public static void parseTextField(FieldMapper.Builder builder, String name, Map fieldNode, Mapper.TypeParser.ParserContext parserContext) { + public static void parseTextField(FieldMapper.Builder builder, String name, Map fieldNode, + Mapper.TypeParser.ParserContext parserContext) { parseField(builder, name, fieldNode, parserContext); parseAnalyzersAndTermVectors(builder, name, fieldNode, parserContext); - for (Iterator> iterator = fieldNode.entrySet().iterator(); iterator.hasNext();) { + for (Iterator> iterator = fieldNode.entrySet().iterator(); iterator.hasNext(); ) { Map.Entry entry = iterator.next(); final String propName = entry.getKey(); final Object propNode = entry.getValue(); - if (parseNorms(builder, propName, propNode, parserContext)) { + if (parseNorms(builder, name, propName, propNode, parserContext)) { iterator.remove(); } } @@ -240,25 +234,33 @@ public static void parseField(FieldMapper.Builder builder, String name, Map) propNode; } else { throw new MapperParsingException("expected map for property [fields] on field [" + propNode + "] or " + - "[" + propName + "] but got a " + propNode.getClass()); + "[" + propName + "] but got a " + propNode.getClass()); } for (Map.Entry multiFieldEntry : multiFieldsPropNodes.entrySet()) { String multiFieldName = multiFieldEntry.getKey(); if (multiFieldName.contains(".")) { - throw new MapperParsingException("Field name [" + multiFieldName + "] which is a multi field of [" + name + "] cannot contain '.'"); + throw new MapperParsingException("Field name [" + multiFieldName + "] which is a multi field of [" + name + "] cannot" + + " contain '.'"); } if (!(multiFieldEntry.getValue() instanceof Map)) { throw new MapperParsingException("illegal field [" + multiFieldName + "], only fields can be specified inside fields"); @@ -385,7 +389,7 @@ public static void parseTermVector(String fieldName, String termVector, FieldMap } } - public static boolean parseIndex(String fieldName, String index, Mapper.TypeParser.ParserContext parserContext) throws MapperParsingException { + private static boolean parseIndex(String fieldName, String index) throws MapperParsingException { switch (index) { case "true": return true; @@ -394,39 +398,18 @@ public static boolean parseIndex(String fieldName, String index, Mapper.TypePars case "not_analyzed": case "analyzed": case "no": - if (parserContext.parseFieldMatcher().isStrict() == false) { - DEPRECATION_LOGGER.deprecated("Expected a boolean for property [index] but got [{}]", index); - return "no".equals(index) == false; - } else { - throw new IllegalArgumentException("Can't parse [index] value [" + index + "] for field [" + fieldName + "], expected [true] or [false]"); - } + DEPRECATION_LOGGER.deprecated("Expected a boolean [true/false] for property [index] but got [{}]", index); + return "no".equals(index) == false; default: throw new IllegalArgumentException("Can't parse [index] value [" + index + "] for field [" + fieldName + "], expected [true] or [false]"); } } - public static boolean parseStore(String fieldName, String store, Mapper.TypeParser.ParserContext parserContext) throws MapperParsingException { - if (parserContext.parseFieldMatcher().isStrict()) { - return XContentMapValues.nodeBooleanValue(store); - } else { - if (BOOLEAN_STRINGS.contains(store) == false) { - DEPRECATION_LOGGER.deprecated("Expected a boolean for property [store] but got [{}]", store); - } - if ("no".equals(store)) { - return false; - } else if ("yes".equals(store)) { - return true; - } else { - return 
lenientNodeBooleanValue(store); - } - } - } - @SuppressWarnings("unchecked") public static void parseCopyFields(Object propNode, FieldMapper.Builder builder) { FieldMapper.CopyTo.Builder copyToBuilder = new FieldMapper.CopyTo.Builder(); if (isArray(propNode)) { - for(Object node : (List) propNode) { + for (Object node : (List) propNode) { copyToBuilder.add(nodeStringValue(node, null)); } } else { @@ -436,8 +419,11 @@ public static void parseCopyFields(Object propNode, FieldMapper.Builder builder) } private static SimilarityProvider resolveSimilarity(Mapper.TypeParser.ParserContext parserContext, String name, String value) { - if (parserContext.indexVersionCreated().before(Version.V_5_0_0_alpha1) && "default".equals(value)) { - // "default" similarity has been renamed into "classic" in 3.x. + if (parserContext.indexVersionCreated().before(Version.V_5_0_0_alpha1) && + "default".equals(value) && + // check if "default" similarity is overridden + parserContext.getSimilarity("default") == null) { + // "default" similarity has been renamed into "classic" in 5.x. value = "classic"; } SimilarityProvider similarityProvider = parserContext.getSimilarity(value); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/Uid.java b/core/src/main/java/org/elasticsearch/index/mapper/Uid.java index 2a8938b4ab7ff..344c8dc0cc091 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/Uid.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/Uid.java @@ -21,12 +21,10 @@ import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.BytesRefBuilder; -import org.elasticsearch.action.DocumentRequest; import org.elasticsearch.common.lucene.BytesRefs; import java.util.Collection; import java.util.Collections; -import java.util.List; /** * diff --git a/core/src/main/java/org/elasticsearch/index/mapper/UidFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/UidFieldMapper.java index f27fa30b91cf1..1e41b25ee42bb 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/UidFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/UidFieldMapper.java @@ -19,17 +19,33 @@ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.BinaryDocValuesField; import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.MatchNoDocsQuery; +import org.apache.lucene.search.TermInSetQuery; import org.apache.lucene.util.BytesRef; +import org.apache.lucene.util.BytesRefBuilder; +import org.apache.lucene.util.StringHelper; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.lucene.Lucene; -import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.fielddata.IndexFieldData; +import org.elasticsearch.index.fielddata.IndexFieldDataCache; +import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData; +import org.elasticsearch.index.fielddata.UidIndexFieldData; import org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData; +import org.elasticsearch.index.query.QueryShardContext; +import org.elasticsearch.indices.breaker.CircuitBreakerService; import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import 
java.util.Collection; import java.util.List; import java.util.Map; @@ -64,6 +80,8 @@ public static class Defaults { } } + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(UidFieldMapper.class)); + public static class TypeParser implements MetadataFieldMapper.TypeParser { @Override public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { @@ -71,14 +89,15 @@ public static class TypeParser implements MetadataFieldMapper.TypeParser { } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final IndexSettings indexSettings = context.mapperService().getIndexSettings(); return new UidFieldMapper(indexSettings, fieldType); } } static final class UidFieldType extends TermBasedFieldType { - public UidFieldType() { + UidFieldType() { } protected UidFieldType(UidFieldType ref) { @@ -97,20 +116,81 @@ public String typeName() { @Override public IndexFieldData.Builder fielddataBuilder() { - // TODO: add doc values support? - return new PagedBytesIndexFieldData.Builder( - TextFieldMapper.Defaults.FIELDDATA_MIN_FREQUENCY, - TextFieldMapper.Defaults.FIELDDATA_MAX_FREQUENCY, - TextFieldMapper.Defaults.FIELDDATA_MIN_SEGMENT_SIZE); + if (indexOptions() == IndexOptions.NONE) { + DEPRECATION_LOGGER.deprecated("Fielddata access on the _uid field is deprecated, use _id instead"); + return new IndexFieldData.Builder() { + @Override + public IndexFieldData build(IndexSettings indexSettings, MappedFieldType fieldType, IndexFieldDataCache cache, + CircuitBreakerService breakerService, MapperService mapperService) { + MappedFieldType idFieldType = mapperService.fullName(IdFieldMapper.NAME); + IndexOrdinalsFieldData idFieldData = (IndexOrdinalsFieldData) idFieldType.fielddataBuilder() + .build(indexSettings, idFieldType, cache, breakerService, mapperService); + final String type = mapperService.types().iterator().next(); + return new UidIndexFieldData(indexSettings.getIndex(), type, idFieldData); + } + }; + } else { + // Old index, _uid was indexed + return new PagedBytesIndexFieldData.Builder( + TextFieldMapper.Defaults.FIELDDATA_MIN_FREQUENCY, + TextFieldMapper.Defaults.FIELDDATA_MAX_FREQUENCY, + TextFieldMapper.Defaults.FIELDDATA_MIN_SEGMENT_SIZE); + } + } + + @Override + public Query termQuery(Object value, @Nullable QueryShardContext context) { + return termsQuery(Arrays.asList(value), context); + } + + @Override + public Query termsQuery(List values, @Nullable QueryShardContext context) { + if (indexOptions() != IndexOptions.NONE) { + return super.termsQuery(values, context); + } + Collection indexTypes = context.getMapperService().types(); + if (indexTypes.isEmpty()) { + return new MatchNoDocsQuery("No types"); + } + assert indexTypes.size() == 1; + BytesRef indexType = indexedValueForSearch(indexTypes.iterator().next()); + BytesRefBuilder prefixBuilder = new BytesRefBuilder(); + prefixBuilder.append(indexType); + prefixBuilder.append((byte) '#'); + BytesRef expectedPrefix = prefixBuilder.get(); + List ids = new ArrayList<>(); + for (Object uid : values) { + BytesRef uidBytes = indexedValueForSearch(uid); + if (StringHelper.startsWith(uidBytes, expectedPrefix)) { + BytesRef id = new BytesRef(); + id.bytes = uidBytes.bytes; + id.offset = uidBytes.offset + expectedPrefix.length; + id.length = uidBytes.length - expectedPrefix.length; + 
ids.add(id); + } + } + return new TermInSetQuery(IdFieldMapper.NAME, ids); + } + } + + static MappedFieldType defaultFieldType(IndexSettings indexSettings) { + MappedFieldType defaultFieldType = Defaults.FIELD_TYPE.clone(); + if (indexSettings.isSingleType()) { + defaultFieldType.setIndexOptions(IndexOptions.NONE); + defaultFieldType.setStored(false); + } else { + defaultFieldType.setIndexOptions(IndexOptions.DOCS); + defaultFieldType.setStored(true); } + return defaultFieldType; } - private UidFieldMapper(Settings indexSettings, MappedFieldType existing) { - this(existing == null ? Defaults.FIELD_TYPE.clone() : existing, Defaults.FIELD_TYPE, indexSettings); + private UidFieldMapper(IndexSettings indexSettings, MappedFieldType existing) { + this(existing == null ? defaultFieldType(indexSettings) : existing, indexSettings); } - private UidFieldMapper(MappedFieldType fieldType, MappedFieldType defaultFieldType, Settings indexSettings) { - super(NAME, fieldType, defaultFieldType, indexSettings); + private UidFieldMapper(MappedFieldType fieldType, IndexSettings indexSettings) { + super(NAME, fieldType, defaultFieldType(indexSettings), indexSettings.getSettings()); } @Override @@ -128,11 +208,10 @@ public Mapper parse(ParseContext context) throws IOException { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { - Field uid = new Field(NAME, Uid.createUid(context.sourceToParse().type(), context.sourceToParse().id()), Defaults.FIELD_TYPE); - fields.add(uid); - if (fieldType().hasDocValues()) { - fields.add(new BinaryDocValuesField(NAME, new BytesRef(uid.stringValue()))); + protected void parseCreateField(ParseContext context, List fields) throws IOException { + if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored()) { + Field uid = new Field(NAME, Uid.createUid(context.sourceToParse().type(), context.sourceToParse().id()), fieldType); + fields.add(uid); } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/VersionFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/VersionFieldMapper.java index eb46f7a21d662..1d2e997acba1d 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/VersionFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/VersionFieldMapper.java @@ -23,6 +23,7 @@ import org.apache.lucene.document.NumericDocValuesField; import org.apache.lucene.index.DocValuesType; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.apache.lucene.search.Query; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -61,14 +62,15 @@ public static class TypeParser implements MetadataFieldMapper.TypeParser { } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); return new VersionFieldMapper(indexSettings); } } static final class VersionFieldType extends MappedFieldType { - public VersionFieldType() { + VersionFieldType() { } protected VersionFieldType(VersionFieldType ref) { @@ -101,7 +103,7 @@ public void preParse(ParseContext context) throws IOException { } @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { + protected void parseCreateField(ParseContext context, List fields) 
throws IOException { // see InternalEngine.updateVersion to see where the real version value is set final Field version = new NumericDocValuesField(NAME, -1L); context.version(version); diff --git a/core/src/main/java/org/elasticsearch/index/merge/MergeStats.java b/core/src/main/java/org/elasticsearch/index/merge/MergeStats.java index ee8c08f00d077..0274a7e233005 100644 --- a/core/src/main/java/org/elasticsearch/index/merge/MergeStats.java +++ b/core/src/main/java/org/elasticsearch/index/merge/MergeStats.java @@ -185,12 +185,6 @@ public ByteSizeValue getCurrentSize() { return new ByteSizeValue(currentSizeInBytes); } - public static MergeStats readMergeStats(StreamInput in) throws IOException { - MergeStats stats = new MergeStats(); - stats.readFrom(in); - return stats; - } - @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(Fields.MERGES); diff --git a/core/src/main/java/org/elasticsearch/index/query/AbstractQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/AbstractQueryBuilder.java index 3ed3b80fb84ec..72af541476384 100644 --- a/core/src/main/java/org/elasticsearch/index/query/AbstractQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/AbstractQueryBuilder.java @@ -30,6 +30,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.lucene.BytesRefs; +import org.elasticsearch.common.xcontent.AbstractObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentLocation; import org.elasticsearch.common.xcontent.XContentType; @@ -249,8 +250,8 @@ protected static final void writeQueries(StreamOutput out, List readQueries(StreamInput in) throws IOException { - List queries = new ArrayList<>(); int size = in.readVInt(); + List queries = new ArrayList<>(size); for (int i = 0; i < size; i++) { queries.add(in.readNamedWriteable(QueryBuilder.class)); } @@ -282,7 +283,7 @@ protected QueryBuilder doRewrite(QueryRewriteContext queryShardContext) throws I * Extracts the inner hits from the query tree. * While it extracts inner hits, child inner hits are inlined into the inner hit builder they belong to. */ - protected void extractInnerHitBuilders(Map innerHits) { + protected void extractInnerHitBuilders(Map innerHits) { } // Like Objects.requireNotNull(...) but instead throws a IllegalArgumentException @@ -300,4 +301,15 @@ protected static void throwParsingExceptionOnMultipleFields(String queryName, XC + processedFieldName + "] and [" + currentFieldName + "]"); } } + + /** + * Adds {@code boost} and {@code query_name} parsing to the + * {@link AbstractObjectParser} passed in. All query builders except + * {@link MatchAllQueryBuilder} and {@link MatchNoneQueryBuilder} support these fields so they + * should use this method. 
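The `declareStandardFields` helper described here (its body follows just below) exists so that every query builder driven by an object parser registers `boost` and `_name` in the same way instead of repeating the two declarations. The sketch that follows illustrates that pattern with a tiny hand-rolled parser; `MiniParser` and `FakeQueryBuilder` are stand-ins invented for the example and are not the Elasticsearch `AbstractObjectParser` API.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BiConsumer;

public class SharedDeclarationsSketch {

    /** Stand-in for a query builder that carries the common boost/_name fields. */
    static class FakeQueryBuilder {
        float boost = 1.0f;
        String queryName;
    }

    /** Very small stand-in for an object parser: field name -> setter. */
    static class MiniParser<T> {
        private final Map<String, BiConsumer<T, String>> setters = new LinkedHashMap<>();

        void declare(String field, BiConsumer<T, String> setter) {
            setters.put(field, setter);
        }

        void apply(T target, Map<String, String> source) {
            source.forEach((field, value) -> {
                BiConsumer<T, String> setter = setters.get(field);
                if (setter == null) {
                    throw new IllegalArgumentException("unknown field [" + field + "]");
                }
                setter.accept(target, value);
            });
        }
    }

    /** The pattern the helper follows: register the shared fields exactly once. */
    static void declareStandardFields(MiniParser<FakeQueryBuilder> parser) {
        parser.declare("boost", (builder, value) -> builder.boost = Float.parseFloat(value));
        parser.declare("_name", (builder, value) -> builder.queryName = value);
    }

    public static void main(String[] args) {
        MiniParser<FakeQueryBuilder> parser = new MiniParser<>();
        declareStandardFields(parser);           // shared fields
        parser.declare("field", (b, v) -> { });  // a query-specific field would go here

        FakeQueryBuilder builder = new FakeQueryBuilder();
        Map<String, String> json = new LinkedHashMap<>();
        json.put("boost", "2.0");
        json.put("_name", "my_query");
        parser.apply(builder, json);
        System.out.println(builder.boost + " " + builder.queryName); // 2.0 my_query
    }
}
```

Query-specific fields are still declared by each builder; only the shared pair is centralized.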
+ */ + protected static void declareStandardFields(AbstractObjectParser parser) { + parser.declareFloat((builder, value) -> builder.boost(value), AbstractQueryBuilder.BOOST_FIELD); + parser.declareString((builder, value) -> builder.queryName(value), AbstractQueryBuilder.NAME_FIELD); + } } diff --git a/core/src/main/java/org/elasticsearch/index/query/BoolQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/BoolQueryBuilder.java index 7b375f125c9b1..8f887e5704b6a 100644 --- a/core/src/main/java/org/elasticsearch/index/query/BoolQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/BoolQueryBuilder.java @@ -39,6 +39,8 @@ import java.util.Objects; import java.util.Optional; import java.util.function.Consumer; +import java.util.stream.Stream; +import java.util.stream.StreamSupport; import static org.elasticsearch.common.lucene.search.Queries.fixNegativeQueryIfNeeded; @@ -57,8 +59,7 @@ public class BoolQueryBuilder extends AbstractQueryBuilder { private static final String SHOULD = "should"; private static final String MUST = "must"; private static final ParseField DISABLE_COORD_FIELD = new ParseField("disable_coord"); - private static final ParseField MINIMUM_SHOULD_MATCH = new ParseField("minimum_should_match"); - private static final ParseField MINIMUM_NUMBER_SHOULD_MATCH = new ParseField("minimum_number_should_match"); + private static final ParseField MINIMUM_SHOULD_MATCH = new ParseField("minimum_should_match", "minimum_number_should_match"); private static final ParseField ADJUST_PURE_NEGATIVE = new ParseField("adjust_pure_negative"); private final List mustClauses = new ArrayList<>(); @@ -168,7 +169,7 @@ public List mustNot() { * MUST clauses one or more SHOULD clauses must match a document * for the BooleanQuery to match. No null value allowed. * - * @see #minimumNumberShouldMatch(int) + * @see #minimumShouldMatch(int) */ public BoolQueryBuilder should(QueryBuilder queryBuilder) { if (queryBuilder == null) { @@ -182,7 +183,7 @@ public BoolQueryBuilder should(QueryBuilder queryBuilder) { * Gets the list of clauses that should be matched by the returned documents. * * @see #should(QueryBuilder) - * @see #minimumNumberShouldMatch(int) + * @see #minimumShouldMatch(int) */ public List should() { return this.shouldClauses; @@ -204,18 +205,9 @@ public boolean disableCoord() { } /** - * Specifies a minimum number of the optional (should) boolean clauses which must be satisfied. - *

    - * By default no optional clauses are necessary for a match - * (unless there are no required clauses). If this method is used, - * then the specified number of clauses is required. - *
    - * Use of this method is totally independent of specifying that - * any specific clauses are required (or prohibited). This number will - * only be compared against the number of matching optional clauses. - * - * @param minimumNumberShouldMatch the number of optional clauses that must match + * @deprecated use {@link BoolQueryBuilder#minimumShouldMatch(int)} instead */ + @Deprecated public BoolQueryBuilder minimumNumberShouldMatch(int minimumNumberShouldMatch) { this.minimumShouldMatch = Integer.toString(minimumNumberShouldMatch); return this; @@ -223,9 +215,9 @@ public BoolQueryBuilder minimumNumberShouldMatch(int minimumNumberShouldMatch) { /** - * Specifies a minimum number of the optional (should) boolean clauses which must be satisfied. - * @see BoolQueryBuilder#minimumNumberShouldMatch(int) + * @deprecated use {@link BoolQueryBuilder#minimumShouldMatch(String)} instead */ + @Deprecated public BoolQueryBuilder minimumNumberShouldMatch(String minimumNumberShouldMatch) { this.minimumShouldMatch = minimumNumberShouldMatch; return this; @@ -239,13 +231,32 @@ public String minimumShouldMatch() { } /** - * Sets the minimum should match using the special syntax (for example, supporting percentage). + * Sets the minimum should match parameter using the special syntax (for example, supporting percentage). + * @see BoolQueryBuilder#minimumShouldMatch(int) */ public BoolQueryBuilder minimumShouldMatch(String minimumShouldMatch) { this.minimumShouldMatch = minimumShouldMatch; return this; } + /** + * Specifies a minimum number of the optional (should) boolean clauses which must be satisfied. + *
    + * By default no optional clauses are necessary for a match + * (unless there are no required clauses). If this method is used, + * then the specified number of clauses is required. + *
    + * Use of this method is totally independent of specifying that + * any specific clauses are required (or prohibited). This number will + * only be compared against the number of matching optional clauses. + * + * @param minimumShouldMatch the number of optional clauses that must match + */ + public BoolQueryBuilder minimumShouldMatch(int minimumShouldMatch) { + this.minimumShouldMatch = Integer.toString(minimumShouldMatch); + return this; + } + /** * Returns true iff this query builder has at least one should, must, must not or filter clause. * Otherwise false. @@ -338,10 +349,6 @@ public static Optional fromXContent(QueryParseContext parseCon default: throw new ParsingException(parser.getTokenLocation(), "[bool] query does not support [" + currentFieldName + "]"); } - if (parser.currentToken() != XContentParser.Token.END_OBJECT) { - throw new ParsingException(parser.getTokenLocation(), - "expected [END_OBJECT] but got [{}], possibly too many query clauses", parser.currentToken()); - } } else if (token == XContentParser.Token.START_ARRAY) { while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { switch (currentFieldName) { @@ -363,17 +370,15 @@ public static Optional fromXContent(QueryParseContext parseCon } } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, DISABLE_COORD_FIELD)) { + if (DISABLE_COORD_FIELD.match(currentFieldName)) { disableCoord = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MINIMUM_SHOULD_MATCH)) { + } else if (MINIMUM_SHOULD_MATCH.match(currentFieldName)) { minimumShouldMatch = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MINIMUM_NUMBER_SHOULD_MATCH)) { - minimumShouldMatch = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, ADJUST_PURE_NEGATIVE)) { + } else if (ADJUST_PURE_NEGATIVE.match(currentFieldName)) { adjustPureNegative = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), "[bool] query does not support [" + currentFieldName + "]"); @@ -396,7 +401,7 @@ public static Optional fromXContent(QueryParseContext parseCon boolQuery.boost(boost); boolQuery.disableCoord(disableCoord); boolQuery.adjustPureNegative(adjustPureNegative); - boolQuery.minimumNumberShouldMatch(minimumShouldMatch); + boolQuery.minimumShouldMatch(minimumShouldMatch); boolQuery.queryName(queryName); return Optional.of(boolQuery); } @@ -476,7 +481,12 @@ protected QueryBuilder doRewrite(QueryRewriteContext queryRewriteContext) throws changed |= rewriteClauses(queryRewriteContext, mustNotClauses, newBuilder::mustNot); changed |= rewriteClauses(queryRewriteContext, filterClauses, newBuilder::filter); changed |= rewriteClauses(queryRewriteContext, shouldClauses, newBuilder::should); - + // lets do some early termination and prevent any kind of rewriting if we have a mandatory query that is a MatchNoneQueryBuilder + Optional any = Stream.concat(newBuilder.mustClauses.stream(), newBuilder.filterClauses.stream()) + .filter(b -> b instanceof 
MatchNoneQueryBuilder).findAny(); + if (any.isPresent()) { + return any.get(); + } if (changed) { newBuilder.adjustPureNegative = adjustPureNegative; newBuilder.disableCoord = disableCoord; @@ -489,13 +499,13 @@ protected QueryBuilder doRewrite(QueryRewriteContext queryRewriteContext) throws } @Override - protected void extractInnerHitBuilders(Map innerHits) { + protected void extractInnerHitBuilders(Map innerHits) { List clauses = new ArrayList<>(filter()); clauses.addAll(must()); clauses.addAll(should()); // no need to include must_not (since there will be no hits for it) for (QueryBuilder clause : clauses) { - InnerHitBuilder.extractInnerHits(clause, innerHits); + InnerHitContextBuilder.extractInnerHits(clause, innerHits); } } diff --git a/core/src/main/java/org/elasticsearch/index/query/BoostingQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/BoostingQueryBuilder.java index 4c86fce4a57cd..2523861e8b4d3 100644 --- a/core/src/main/java/org/elasticsearch/index/query/BoostingQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/BoostingQueryBuilder.java @@ -154,21 +154,21 @@ public static Optional fromXContent(QueryParseContext pars if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, POSITIVE_FIELD)) { + if (POSITIVE_FIELD.match(currentFieldName)) { positiveQuery = parseContext.parseInnerQueryBuilder(); positiveQueryFound = true; - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, NEGATIVE_FIELD)) { + } else if (NEGATIVE_FIELD.match(currentFieldName)) { negativeQuery = parseContext.parseInnerQueryBuilder(); negativeQueryFound = true; } else { throw new ParsingException(parser.getTokenLocation(), "[boosting] query does not support [" + currentFieldName + "]"); } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, NEGATIVE_BOOST_FIELD)) { + if (NEGATIVE_BOOST_FIELD.match(currentFieldName)) { negativeBoost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); } else { throw new ParsingException(parser.getTokenLocation(), "[boosting] query does not support [" + currentFieldName + "]"); @@ -234,8 +234,8 @@ protected QueryBuilder doRewrite(QueryRewriteContext queryRewriteContext) throws } @Override - protected void extractInnerHitBuilders(Map innerHits) { - InnerHitBuilder.extractInnerHits(positiveQuery, innerHits); - InnerHitBuilder.extractInnerHits(negativeQuery, innerHits); + protected void extractInnerHitBuilders(Map innerHits) { + InnerHitContextBuilder.extractInnerHits(positiveQuery, innerHits); + InnerHitContextBuilder.extractInnerHits(negativeQuery, innerHits); } } diff --git a/core/src/main/java/org/elasticsearch/index/query/CommonTermsQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/CommonTermsQueryBuilder.java index 21328ff8fc4b7..97b499fee206a 100644 --- a/core/src/main/java/org/elasticsearch/index/query/CommonTermsQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/CommonTermsQueryBuilder.java @@ 
-291,15 +291,15 @@ public static Optional fromXContent(QueryParseContext p if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, MINIMUM_SHOULD_MATCH_FIELD)) { + if (MINIMUM_SHOULD_MATCH_FIELD.match(currentFieldName)) { String innerFieldName = null; while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { innerFieldName = parser.currentName(); } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(innerFieldName, LOW_FREQ_FIELD)) { + if (LOW_FREQ_FIELD.match(innerFieldName)) { lowFreqMinimumShouldMatch = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(innerFieldName, HIGH_FREQ_FIELD)) { + } else if (HIGH_FREQ_FIELD.match(innerFieldName)) { highFreqMinimumShouldMatch = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), "[" + CommonTermsQueryBuilder.NAME + @@ -317,23 +317,23 @@ public static Optional fromXContent(QueryParseContext p "] query does not support [" + currentFieldName + "]"); } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, QUERY_FIELD)) { + if (QUERY_FIELD.match(currentFieldName)) { text = parser.objectText(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, ANALYZER_FIELD)) { + } else if (ANALYZER_FIELD.match(currentFieldName)) { analyzer = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, DISABLE_COORD_FIELD)) { + } else if (DISABLE_COORD_FIELD.match(currentFieldName)) { disableCoord = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, HIGH_FREQ_OPERATOR_FIELD)) { + } else if (HIGH_FREQ_OPERATOR_FIELD.match(currentFieldName)) { highFreqOperator = Operator.fromString(parser.text()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, LOW_FREQ_OPERATOR_FIELD)) { + } else if (LOW_FREQ_OPERATOR_FIELD.match(currentFieldName)) { lowFreqOperator = Operator.fromString(parser.text()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MINIMUM_SHOULD_MATCH_FIELD)) { + } else if (MINIMUM_SHOULD_MATCH_FIELD.match(currentFieldName)) { lowFreqMinimumShouldMatch = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, CUTOFF_FREQUENCY_FIELD)) { + } else if (CUTOFF_FREQUENCY_FIELD.match(currentFieldName)) { cutoffFrequency = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), "[" + CommonTermsQueryBuilder.NAME + @@ -383,7 +383,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { analyzerObj = context.getMapperService().searchAnalyzer(); } } else { - analyzerObj = context.getMapperService().analysisService().analyzer(analyzer); + analyzerObj = context.getMapperService().getIndexAnalyzers().get(analyzer); if (analyzerObj == null) { throw new QueryShardException(context, "[common] analyzer [" + analyzer + "] 
not found"); } diff --git a/core/src/main/java/org/elasticsearch/index/query/ConstantScoreQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/ConstantScoreQueryBuilder.java index 99b6ba731d23b..e9c54fee3273d 100644 --- a/core/src/main/java/org/elasticsearch/index/query/ConstantScoreQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/ConstantScoreQueryBuilder.java @@ -102,7 +102,7 @@ public static Optional fromXContent(QueryParseContext } else if (parseContext.isDeprecatedSetting(currentFieldName)) { // skip } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, INNER_QUERY_FIELD)) { + if (INNER_QUERY_FIELD.match(currentFieldName)) { if (queryFound) { throw new ParsingException(parser.getTokenLocation(), "[" + ConstantScoreQueryBuilder.NAME + "]" + " accepts only one 'filter' element."); @@ -114,9 +114,9 @@ public static Optional fromXContent(QueryParseContext "[constant_score] query does not support [" + currentFieldName + "]"); } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); } else { throw new ParsingException(parser.getTokenLocation(), @@ -165,6 +165,9 @@ protected boolean doEquals(ConstantScoreQueryBuilder other) { @Override protected QueryBuilder doRewrite(QueryRewriteContext queryRewriteContext) throws IOException { QueryBuilder rewrite = filterBuilder.rewrite(queryRewriteContext); + if (rewrite instanceof MatchNoneQueryBuilder) { + return rewrite; // we won't match anyway + } if (rewrite != filterBuilder) { return new ConstantScoreQueryBuilder(rewrite); } @@ -172,7 +175,7 @@ protected QueryBuilder doRewrite(QueryRewriteContext queryRewriteContext) throws } @Override - protected void extractInnerHitBuilders(Map innerHits) { - InnerHitBuilder.extractInnerHits(filterBuilder, innerHits); + protected void extractInnerHitBuilders(Map innerHits) { + InnerHitContextBuilder.extractInnerHits(filterBuilder, innerHits); } } diff --git a/core/src/main/java/org/elasticsearch/index/query/DisMaxQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/DisMaxQueryBuilder.java index beeb5a05f45bf..3cb95007a0d84 100644 --- a/core/src/main/java/org/elasticsearch/index/query/DisMaxQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/DisMaxQueryBuilder.java @@ -33,6 +33,7 @@ import java.util.ArrayList; import java.util.Collection; import java.util.List; +import java.util.Map; import java.util.Objects; import java.util.Optional; @@ -138,14 +139,14 @@ public static Optional fromXContent(QueryParseContext parseC if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, QUERIES_FIELD)) { + if (QUERIES_FIELD.match(currentFieldName)) { queriesFound = true; parseContext.parseInnerQueryBuilder().ifPresent(queries::add); } else { throw new ParsingException(parser.getTokenLocation(), "[dis_max] query does not support [" + currentFieldName + "]"); } } else if (token == XContentParser.Token.START_ARRAY) { - if 
(parseContext.getParseFieldMatcher().match(currentFieldName, QUERIES_FIELD)) { + if (QUERIES_FIELD.match(currentFieldName)) { queriesFound = true; while (token != XContentParser.Token.END_ARRAY) { parseContext.parseInnerQueryBuilder().ifPresent(queries::add); @@ -155,11 +156,11 @@ public static Optional fromXContent(QueryParseContext parseC throw new ParsingException(parser.getTokenLocation(), "[dis_max] query does not support [" + currentFieldName + "]"); } } else { - if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, TIE_BREAKER_FIELD)) { + } else if (TIE_BREAKER_FIELD.match(currentFieldName)) { tieBreaker = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), "[dis_max] query does not support [" + currentFieldName + "]"); @@ -207,4 +208,11 @@ protected boolean doEquals(DisMaxQueryBuilder other) { public String getWriteableName() { return NAME; } + + @Override + protected void extractInnerHitBuilders(Map innerHits) { + for (QueryBuilder query : queries) { + InnerHitContextBuilder.extractInnerHits(query, innerHits); + } + } } diff --git a/core/src/main/java/org/elasticsearch/index/query/ExistsQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/ExistsQueryBuilder.java index 93e491d626929..964edc7349246 100644 --- a/core/src/main/java/org/elasticsearch/index/query/ExistsQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/ExistsQueryBuilder.java @@ -97,11 +97,11 @@ public static Optional fromXContent(QueryParseContext parseC if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, FIELD_FIELD)) { + if (FIELD_FIELD.match(currentFieldName)) { fieldPattern = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); } else { throw new ParsingException(parser.getTokenLocation(), "[" + ExistsQueryBuilder.NAME + @@ -130,7 +130,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { public static Query newFilter(QueryShardContext context, String fieldPattern) { final FieldNamesFieldMapper.FieldNamesFieldType fieldNamesFieldType = - (FieldNamesFieldMapper.FieldNamesFieldType)context.getMapperService().fullName(FieldNamesFieldMapper.NAME); + (FieldNamesFieldMapper.FieldNamesFieldType) context.getMapperService().fullName(FieldNamesFieldMapper.NAME); if (fieldNamesFieldType == null) { // can only happen when no types exist, so no docs exist either return Queries.newMatchNoDocsQuery("Missing types in \"" + NAME + "\" query."); @@ -145,6 +145,11 @@ public static Query newFilter(QueryShardContext context, String fieldPattern) { fields = context.simpleMatchToIndexNames(fieldPattern); } + if 
(fields.size() == 1) { + Query filter = fieldNamesFieldType.termQuery(fields.iterator().next(), context); + return new ConstantScoreQuery(filter); + } + BooleanQuery.Builder boolFilterBuilder = new BooleanQuery.Builder(); for (String field : fields) { Query filter = fieldNamesFieldType.termQuery(field, context); diff --git a/core/src/main/java/org/elasticsearch/index/query/FieldMaskingSpanQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/FieldMaskingSpanQueryBuilder.java index dc8bc786aed8b..cc3ad2e191f2e 100644 --- a/core/src/main/java/org/elasticsearch/index/query/FieldMaskingSpanQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/FieldMaskingSpanQueryBuilder.java @@ -116,7 +116,7 @@ public static Optional fromXContent(QueryParseCont if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, QUERY_FIELD)) { + if (QUERY_FIELD.match(currentFieldName)) { Optional query = parseContext.parseInnerQueryBuilder(); if (query.isPresent() == false || query.get() instanceof SpanQueryBuilder == false) { throw new ParsingException(parser.getTokenLocation(), "[field_masking_span] query must be of type span query"); @@ -127,11 +127,11 @@ public static Optional fromXContent(QueryParseCont + currentFieldName + "]"); } } else { - if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, FIELD_FIELD)) { + } else if (FIELD_FIELD.match(currentFieldName)) { field = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), diff --git a/core/src/main/java/org/elasticsearch/index/query/FuzzyQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/FuzzyQueryBuilder.java index c75ba1fda99f4..92289f373c07f 100644 --- a/core/src/main/java/org/elasticsearch/index/query/FuzzyQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/FuzzyQueryBuilder.java @@ -41,11 +41,7 @@ /** * A Query that does fuzzy matching for a specific value. - * - * @deprecated Fuzzy queries are not useful enough. This class will be removed with Elasticsearch 4.0. In most cases you may want to use - * a match query with the fuzziness parameter for strings or range queries for numeric and date fields. 
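With the class-level deprecation note removed here (the annotations move onto the individual constructors just below), `fuzzy` remains usable, and `doToQuery` still boils down to constructing a Lucene `FuzzyQuery` from the field, value, and fuzziness settings. Here is a minimal standalone sketch of that construction; the `autoMaxEdits` helper mirrors the commonly documented AUTO fuzziness buckets (0 edits up to 2 characters, 1 up to 5, 2 beyond), and the parameter values are illustrative rather than the builder's defaults.

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.FuzzyQuery;

public class FuzzyQuerySketch {

    // Rough equivalent of AUTO fuzziness: short terms must match exactly,
    // medium terms allow one edit, longer terms allow two.
    static int autoMaxEdits(String term) {
        if (term.length() < 3) {
            return 0;
        } else if (term.length() < 6) {
            return 1;
        } else {
            return 2;
        }
    }

    public static void main(String[] args) {
        String field = "user";
        String value = "kimchy";

        int maxEdits = autoMaxEdits(value); // 2 for a six-letter term
        int prefixLength = 0;               // no common prefix required
        int maxExpansions = 50;             // cap on the number of expanded terms
        boolean transpositions = false;     // classic Levenshtein, no adjacent swaps

        FuzzyQuery query = new FuzzyQuery(new Term(field, value), maxEdits, prefixLength, maxExpansions, transpositions);
        System.out.println(query); // user:kimchy~2
    }
}
```

The printed form `user:kimchy~2` is how Lucene renders a fuzzy term with a maximum edit distance of 2.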
*/ -@Deprecated public class FuzzyQueryBuilder extends AbstractQueryBuilder implements MultiTermQueryBuilder { public static final String NAME = "fuzzy"; @@ -90,6 +86,7 @@ public class FuzzyQueryBuilder extends AbstractQueryBuilder i * @param fieldName The name of the field * @param value The value of the text */ + @Deprecated public FuzzyQueryBuilder(String fieldName, String value) { this(fieldName, (Object) value); } @@ -100,6 +97,7 @@ public FuzzyQueryBuilder(String fieldName, String value) { * @param fieldName The name of the field * @param value The value of the text */ + @Deprecated public FuzzyQueryBuilder(String fieldName, int value) { this(fieldName, (Object) value); } @@ -110,6 +108,7 @@ public FuzzyQueryBuilder(String fieldName, int value) { * @param fieldName The name of the field * @param value The value of the text */ + @Deprecated public FuzzyQueryBuilder(String fieldName, long value) { this(fieldName, (Object) value); } @@ -120,6 +119,7 @@ public FuzzyQueryBuilder(String fieldName, long value) { * @param fieldName The name of the field * @param value The value of the text */ + @Deprecated public FuzzyQueryBuilder(String fieldName, float value) { this(fieldName, (Object) value); } @@ -130,6 +130,7 @@ public FuzzyQueryBuilder(String fieldName, float value) { * @param fieldName The name of the field * @param value The value of the text */ + @Deprecated public FuzzyQueryBuilder(String fieldName, double value) { this(fieldName, (Object) value); } @@ -140,6 +141,7 @@ public FuzzyQueryBuilder(String fieldName, double value) { * @param fieldName The name of the field * @param value The value of the text */ + @Deprecated public FuzzyQueryBuilder(String fieldName, boolean value) { this(fieldName, (Object) value); } @@ -150,6 +152,7 @@ public FuzzyQueryBuilder(String fieldName, boolean value) { * @param fieldName The name of the field * @param value The value of the term */ + @Deprecated public FuzzyQueryBuilder(String fieldName, Object value) { if (Strings.isEmpty(fieldName)) { throw new IllegalArgumentException("field name cannot be null or empty"); @@ -280,29 +283,32 @@ public static Optional fromXContent(QueryParseContext parseCo while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); - } else { - if (parseContext.getParseFieldMatcher().match(currentFieldName, TERM_FIELD)) { + } else if (token.isValue()) { + if (TERM_FIELD.match(currentFieldName)) { value = parser.objectBytes(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, VALUE_FIELD)) { + } else if (VALUE_FIELD.match(currentFieldName)) { value = parser.objectBytes(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Fuzziness.FIELD)) { + } else if (Fuzziness.FIELD.match(currentFieldName)) { fuzziness = Fuzziness.parse(parser); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, PREFIX_LENGTH_FIELD)) { + } else if (PREFIX_LENGTH_FIELD.match(currentFieldName)) { prefixLength = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MAX_EXPANSIONS_FIELD)) { + } else if (MAX_EXPANSIONS_FIELD.match(currentFieldName)) { maxExpansions = parser.intValue(); - } else if 
(parseContext.getParseFieldMatcher().match(currentFieldName, TRANSPOSITIONS_FIELD)) { + } else if (TRANSPOSITIONS_FIELD.match(currentFieldName)) { transpositions = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, REWRITE_FIELD)) { + } else if (REWRITE_FIELD.match(currentFieldName)) { rewrite = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), "[fuzzy] query does not support [" + currentFieldName + "]"); } + } else { + throw new ParsingException(parser.getTokenLocation(), + "[" + NAME + "] unexpected token [" + token + "] after [" + currentFieldName + "]"); } } } else { @@ -342,7 +348,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { query = new FuzzyQuery(new Term(fieldName, BytesRefs.toBytesRef(value)), maxEdits, prefixLength, maxExpansions, transpositions); } if (query instanceof MultiTermQuery) { - MultiTermQuery.RewriteMethod rewriteMethod = QueryParsers.parseRewriteMethod(context.getParseFieldMatcher(), rewrite, null); + MultiTermQuery.RewriteMethod rewriteMethod = QueryParsers.parseRewriteMethod(rewrite, null); QueryParsers.setRewriteMethod((MultiTermQuery) query, rewriteMethod); } return query; diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoBoundingBoxQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/GeoBoundingBoxQueryBuilder.java index ee7220111f378..a8819d5e76e38 100644 --- a/core/src/main/java/org/elasticsearch/index/query/GeoBoundingBoxQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/GeoBoundingBoxQueryBuilder.java @@ -19,7 +19,10 @@ package org.elasticsearch.index.query; +import org.apache.lucene.document.LatLonDocValuesField; +import org.apache.lucene.document.LatLonPoint; import org.apache.lucene.geo.Rectangle; +import org.apache.lucene.search.IndexOrDocValuesQuery; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.spatial.geopoint.document.GeoPointField; @@ -38,7 +41,8 @@ import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; import org.elasticsearch.index.mapper.BaseGeoPointFieldMapper; -import org.elasticsearch.index.mapper.LegacyGeoPointFieldMapper; +import org.elasticsearch.index.mapper.BaseGeoPointFieldMapper.LegacyGeoPointFieldType; +import org.elasticsearch.index.mapper.LatLonPointFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.search.geo.LegacyInMemoryGeoBoundingBoxQuery; import org.elasticsearch.index.search.geo.LegacyIndexedGeoBoundingBoxQuery; @@ -69,9 +73,9 @@ public class GeoBoundingBoxQueryBuilder extends AbstractQueryBuilder use prefix encoded postings format final GeoPointField.TermEncoding encoding = (indexVersionCreated.before(Version.V_2_3_0)) ? 
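// Editor's sketch, not part of the patch: the new imports above (LatLonPoint,
// LatLonDocValuesField, IndexOrDocValuesQuery) suggest that for indexes mapped with
// LatLonPointFieldMapper the bounding box is now resolved against the points index
// and doc values together, roughly as below. The field name and corner variables are
// placeholders taken from the surrounding method; the doc-values factory is
// newBoxQuery in Lucene 6.x and was later renamed newSlowBoxQuery.
Query pointsQuery = LatLonPoint.newBoxQuery(fieldType.name(),
        luceneBottomRight.getLat(), luceneTopLeft.getLat(),   // min/max latitude
        luceneTopLeft.getLon(), luceneBottomRight.getLon());  // min/max longitude
Query dvQuery = LatLonDocValuesField.newBoxQuery(fieldType.name(),
        luceneBottomRight.getLat(), luceneTopLeft.getLat(),
        luceneTopLeft.getLon(), luceneBottomRight.getLon());
// IndexOrDocValuesQuery lets Lucene pick the cheaper of the two per segment.
Query boxQuery = new IndexOrDocValuesQuery(pointsQuery, dvQuery);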
@@ -371,8 +385,8 @@ public Query doToQuery(QueryShardContext context) { Query query; switch(type) { case INDEXED: - LegacyGeoPointFieldMapper.GeoPointFieldType geoFieldType = ((LegacyGeoPointFieldMapper.GeoPointFieldType) fieldType); - query = LegacyIndexedGeoBoundingBoxQuery.create(luceneTopLeft, luceneBottomRight, geoFieldType); + LegacyGeoPointFieldType geoFieldType = ((LegacyGeoPointFieldType) fieldType); + query = LegacyIndexedGeoBoundingBoxQuery.create(luceneTopLeft, luceneBottomRight, geoFieldType, context); break; case MEMORY: IndexGeoPointFieldData indexFieldData = context.getForField(fieldType); @@ -438,30 +452,30 @@ public static Optional fromXContent(QueryParseContex token = parser.nextToken(); if (parseContext.isDeprecatedSetting(currentFieldName)) { // skip - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, FIELD_FIELD)) { + } else if (FIELD_FIELD.match(currentFieldName)) { fieldName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, TOP_FIELD)) { + } else if (TOP_FIELD.match(currentFieldName)) { top = parser.doubleValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, BOTTOM_FIELD)) { + } else if (BOTTOM_FIELD.match(currentFieldName)) { bottom = parser.doubleValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, LEFT_FIELD)) { + } else if (LEFT_FIELD.match(currentFieldName)) { left = parser.doubleValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, RIGHT_FIELD)) { + } else if (RIGHT_FIELD.match(currentFieldName)) { right = parser.doubleValue(); } else { - if (parseContext.getParseFieldMatcher().match(currentFieldName, TOP_LEFT_FIELD)) { + if (TOP_LEFT_FIELD.match(currentFieldName)) { GeoUtils.parseGeoPoint(parser, sparse); top = sparse.getLat(); left = sparse.getLon(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, BOTTOM_RIGHT_FIELD)) { + } else if (BOTTOM_RIGHT_FIELD.match(currentFieldName)) { GeoUtils.parseGeoPoint(parser, sparse); bottom = sparse.getLat(); right = sparse.getLon(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, TOP_RIGHT_FIELD)) { + } else if (TOP_RIGHT_FIELD.match(currentFieldName)) { GeoUtils.parseGeoPoint(parser, sparse); top = sparse.getLat(); right = sparse.getLon(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, BOTTOM_LEFT_FIELD)) { + } else if (BOTTOM_LEFT_FIELD.match(currentFieldName)) { GeoUtils.parseGeoPoint(parser, sparse); bottom = sparse.getLat(); left = sparse.getLon(); @@ -476,22 +490,22 @@ public static Optional fromXContent(QueryParseContex } } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, COERCE_FIELD)) { + } else if (COERCE_FIELD.match(currentFieldName)) { coerce = parser.booleanValue(); if (coerce) { ignoreMalformed = true; } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, VALIDATION_METHOD_FIELD)) { + } else if (VALIDATION_METHOD_FIELD.match(currentFieldName)) { validationMethod = GeoValidationMethod.fromString(parser.text()); - } else 
if (parseContext.getParseFieldMatcher().match(currentFieldName, IGNORE_UNMAPPED_FIELD)) { + } else if (IGNORE_UNMAPPED_FIELD.match(currentFieldName)) { ignoreUnmapped = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, TYPE_FIELD)) { + } else if (TYPE_FIELD.match(currentFieldName)) { type = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, IGNORE_MALFORMED_FIELD)) { + } else if (IGNORE_MALFORMED_FIELD.match(currentFieldName)) { ignoreMalformed = parser.booleanValue(); } else { throw new ParsingException(parser.getTokenLocation(), "failed to parse [{}] query. unexpected field [{}]", diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryBuilder.java index a0226f44f76ff..ea9c14f56ed31 100644 --- a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryBuilder.java @@ -19,6 +19,9 @@ package org.elasticsearch.index.query; +import org.apache.lucene.document.LatLonDocValuesField; +import org.apache.lucene.document.LatLonPoint; +import org.apache.lucene.search.IndexOrDocValuesQuery; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.spatial.geopoint.document.GeoPointField; @@ -38,9 +41,10 @@ import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; import org.elasticsearch.index.mapper.BaseGeoPointFieldMapper; import org.elasticsearch.index.mapper.GeoPointFieldMapper; +import org.elasticsearch.index.mapper.LatLonPointFieldMapper; import org.elasticsearch.index.mapper.LegacyGeoPointFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; -import org.elasticsearch.index.search.geo.GeoDistanceRangeQuery; +import org.elasticsearch.index.search.geo.LegacyGeoDistanceRangeQuery; import java.io.IOException; import java.util.Locale; @@ -61,7 +65,7 @@ public class GeoDistanceQueryBuilder extends AbstractQueryBuilder fromXContent(QueryParseContext p } } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, DISTANCE_FIELD)) { + if (DISTANCE_FIELD.match(currentFieldName)) { if (token == XContentParser.Token.VALUE_STRING) { vDistance = parser.text(); // a String } else { vDistance = parser.numberValue(); // a Number } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, UNIT_FIELD)) { + } else if (UNIT_FIELD.match(currentFieldName)) { unit = DistanceUnit.fromString(parser.text()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, DISTANCE_TYPE_FIELD)) { + } else if (DISTANCE_TYPE_FIELD.match(currentFieldName)) { geoDistance = GeoDistance.fromString(parser.text()); } else if (currentFieldName.endsWith(GeoPointFieldMapper.Names.LAT_SUFFIX)) { point.resetLat(parser.doubleValue()); @@ -396,22 +404,22 @@ public static Optional fromXContent(QueryParseContext p } else if (currentFieldName.endsWith(GeoPointFieldMapper.Names.LON_SUFFIX)) { point.resetLon(parser.doubleValue()); fieldName = currentFieldName.substring(0, currentFieldName.length() - GeoPointFieldMapper.Names.LON_SUFFIX.length()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, 
AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, OPTIMIZE_BBOX_FIELD)) { + } else if (OPTIMIZE_BBOX_FIELD.match(currentFieldName)) { optimizeBbox = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, COERCE_FIELD)) { + } else if (COERCE_FIELD.match(currentFieldName)) { coerce = parser.booleanValue(); - if (coerce == true) { + if (coerce) { ignoreMalformed = true; } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, IGNORE_MALFORMED_FIELD)) { + } else if (IGNORE_MALFORMED_FIELD.match(currentFieldName)) { ignoreMalformed = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, IGNORE_UNMAPPED_FIELD)) { + } else if (IGNORE_UNMAPPED_FIELD.match(currentFieldName)) { ignoreUnmapped = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, VALIDATION_METHOD_FIELD)) { + } else if (VALIDATION_METHOD_FIELD.match(currentFieldName)) { validationMethod = GeoValidationMethod.fromString(parser.text()); } else { if (fieldName == null) { diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryBuilder.java index 0b06e07e02e23..1f950a4bc0db6 100644 --- a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryBuilder.java @@ -32,15 +32,18 @@ import org.elasticsearch.common.geo.GeoUtils; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.unit.DistanceUnit; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; import org.elasticsearch.index.mapper.BaseGeoPointFieldMapper; +import org.elasticsearch.index.mapper.BaseGeoPointFieldMapper.LegacyGeoPointFieldType; import org.elasticsearch.index.mapper.GeoPointFieldMapper; -import org.elasticsearch.index.mapper.LegacyGeoPointFieldMapper; +import org.elasticsearch.index.mapper.LatLonPointFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; -import org.elasticsearch.index.search.geo.GeoDistanceRangeQuery; +import org.elasticsearch.index.search.geo.LegacyGeoDistanceRangeQuery; import java.io.IOException; import java.util.Locale; @@ -50,9 +53,11 @@ public class GeoDistanceRangeQueryBuilder extends AbstractQueryBuilder { public static final String NAME = "geo_distance_range"; + private static final DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(GeoDistanceRangeQueryBuilder.class)); + public static final boolean DEFAULT_INCLUDE_LOWER = true; public static final boolean DEFAULT_INCLUDE_UPPER = true; - public static final GeoDistance DEFAULT_GEO_DISTANCE = GeoDistance.DEFAULT; + public static final GeoDistance DEFAULT_GEO_DISTANCE = GeoDistance.ARC; public static final DistanceUnit DEFAULT_UNIT = DistanceUnit.DEFAULT; @Deprecated public static final String DEFAULT_OPTIMIZE_BBOX = "memory"; @@ -326,9 +331,6 @@ protected Query doToQuery(QueryShardContext context) throws IOException { } else { 
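// Editor's note, not part of the patch: when "from" is supplied as a string such as
// "12km", DistanceUnit.parse resolves the unit suffix and converts the value into the
// requested target unit; with the default target (metres) "12km" becomes 12000.0.
// The "to" bound a few lines below is handled the same way.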
fromValue = DistanceUnit.parse((String) from, unit, DistanceUnit.DEFAULT); } - if (indexCreatedBeforeV2_2 == true) { - fromValue = geoDistance.normalize(fromValue, DistanceUnit.DEFAULT); - } } else { fromValue = 0.0; } @@ -339,20 +341,22 @@ protected Query doToQuery(QueryShardContext context) throws IOException { } else { toValue = DistanceUnit.parse((String) to, unit, DistanceUnit.DEFAULT); } - if (indexCreatedBeforeV2_2 == true) { - toValue = geoDistance.normalize(toValue, DistanceUnit.DEFAULT); - } } else { toValue = GeoUtils.maxRadialDistanceMeters(point.lat(), point.lon()); } final Version indexVersionCreated = context.indexVersionCreated(); + if (indexVersionCreated.onOrAfter(LatLonPointFieldMapper.LAT_LON_FIELD_VERSION)) { + throw new QueryShardException(context, "[{}] queries are no longer supported for geo_point field types. " + + "Use geo_distance sort or aggregations", NAME); + } + + deprecationLogger.deprecated("geo_distance_range search is deprecated. Use geo_distance aggregation or sort instead."); + if (indexVersionCreated.before(Version.V_2_2_0)) { - LegacyGeoPointFieldMapper.GeoPointFieldType geoFieldType = ((LegacyGeoPointFieldMapper.GeoPointFieldType) fieldType); IndexGeoPointFieldData indexFieldData = context.getForField(fieldType); - String bboxOptimization = Strings.isEmpty(optimizeBbox) ? DEFAULT_OPTIMIZE_BBOX : optimizeBbox; - return new GeoDistanceRangeQuery(point, fromValue, toValue, includeLower, includeUpper, geoDistance, geoFieldType, - indexFieldData, bboxOptimization); + return new LegacyGeoDistanceRangeQuery(point, fromValue, toValue, includeLower, includeUpper, geoDistance, + ((LegacyGeoPointFieldType) fieldType), indexFieldData, optimizeBbox, context); } // if index created V_2_2 use (soon to be legacy) numeric encoding postings format @@ -435,27 +439,27 @@ public static Optional fromXContent(QueryParseCont "] field name already set to [" + fieldName + "] but found [" + currentFieldName + "]"); } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, FROM_FIELD)) { + if (FROM_FIELD.match(currentFieldName)) { if (token == XContentParser.Token.VALUE_NULL) { } else if (token == XContentParser.Token.VALUE_STRING) { vFrom = parser.text(); // a String } else { vFrom = parser.numberValue(); // a Number } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, TO_FIELD)) { + } else if (TO_FIELD.match(currentFieldName)) { if (token == XContentParser.Token.VALUE_NULL) { } else if (token == XContentParser.Token.VALUE_STRING) { vTo = parser.text(); // a String } else { vTo = parser.numberValue(); // a Number } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, INCLUDE_LOWER_FIELD)) { + } else if (INCLUDE_LOWER_FIELD.match(currentFieldName)) { includeLower = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, INCLUDE_UPPER_FIELD)) { + } else if (INCLUDE_UPPER_FIELD.match(currentFieldName)) { includeUpper = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, IGNORE_UNMAPPED_FIELD)) { + } else if (IGNORE_UNMAPPED_FIELD.match(currentFieldName)) { ignoreUnmapped = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, GT_FIELD)) { + } else if (GT_FIELD.match(currentFieldName)) { if (token == XContentParser.Token.VALUE_NULL) { } else if (token == XContentParser.Token.VALUE_STRING) { vFrom = parser.text(); // a String @@ -463,7 +467,7 @@ public static Optional 
fromXContent(QueryParseCont vFrom = parser.numberValue(); // a Number } includeLower = false; - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, GTE_FIELD)) { + } else if (GTE_FIELD.match(currentFieldName)) { if (token == XContentParser.Token.VALUE_NULL) { } else if (token == XContentParser.Token.VALUE_STRING) { vFrom = parser.text(); // a String @@ -471,7 +475,7 @@ public static Optional fromXContent(QueryParseCont vFrom = parser.numberValue(); // a Number } includeLower = true; - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, LT_FIELD)) { + } else if (LT_FIELD.match(currentFieldName)) { if (token == XContentParser.Token.VALUE_NULL) { } else if (token == XContentParser.Token.VALUE_STRING) { vTo = parser.text(); // a String @@ -479,7 +483,7 @@ public static Optional fromXContent(QueryParseCont vTo = parser.numberValue(); // a Number } includeUpper = false; - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, LTE_FIELD)) { + } else if (LTE_FIELD.match(currentFieldName)) { if (token == XContentParser.Token.VALUE_NULL) { } else if (token == XContentParser.Token.VALUE_STRING) { vTo = parser.text(); // a String @@ -487,9 +491,9 @@ public static Optional fromXContent(QueryParseCont vTo = parser.numberValue(); // a Number } includeUpper = true; - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, UNIT_FIELD)) { + } else if (UNIT_FIELD.match(currentFieldName)) { unit = DistanceUnit.fromString(parser.text()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, DISTANCE_TYPE_FIELD)) { + } else if (DISTANCE_TYPE_FIELD.match(currentFieldName)) { geoDistance = GeoDistance.fromString(parser.text()); } else if (currentFieldName.endsWith(GeoPointFieldMapper.Names.LAT_SUFFIX)) { String maybeFieldName = currentFieldName.substring(0, @@ -517,17 +521,17 @@ public static Optional fromXContent(QueryParseCont point = new GeoPoint(); } point.resetLon(parser.doubleValue()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, NAME_FIELD)) { + } else if (NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, BOOST_FIELD)) { + } else if (BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, OPTIMIZE_BBOX_FIELD)) { + } else if (OPTIMIZE_BBOX_FIELD.match(currentFieldName)) { optimizeBbox = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, COERCE_FIELD)) { + } else if (COERCE_FIELD.match(currentFieldName)) { coerce = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, IGNORE_MALFORMED_FIELD)) { + } else if (IGNORE_MALFORMED_FIELD.match(currentFieldName)) { ignoreMalformed = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, VALIDATION_METHOD)) { + } else if (VALIDATION_METHOD.match(currentFieldName)) { validationMethod = GeoValidationMethod.fromString(parser.text()); } else { if (fieldName == null) { diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryBuilder.java index 65ce33c1c988d..68b90340c563e 100644 --- a/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryBuilder.java @@ -19,6 +19,8 @@ package 
org.elasticsearch.index.query; +import org.apache.lucene.document.LatLonPoint; +import org.apache.lucene.geo.Polygon; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.spatial.geopoint.document.GeoPointField; @@ -36,6 +38,7 @@ import org.elasticsearch.common.xcontent.XContentParser.Token; import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; import org.elasticsearch.index.mapper.BaseGeoPointFieldMapper; +import org.elasticsearch.index.mapper.LatLonPointFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.search.geo.GeoPolygonQuery; @@ -53,10 +56,8 @@ public class GeoPolygonQueryBuilder extends AbstractQueryBuilder shell = new ArrayList(); + List shell = new ArrayList<>(this.shell.size()); for (GeoPoint geoPoint : this.shell) { shell.add(new GeoPoint(geoPoint)); } @@ -210,10 +211,14 @@ protected Query doToQuery(QueryShardContext context) throws IOException { double[] lons = new double[shellSize]; GeoPoint p; for (int i=0; i use prefix encoded postings format final GeoPointField.TermEncoding encoding = (indexVersionCreated.before(Version.V_2_3_0)) ? @@ -268,7 +273,7 @@ public static Optional fromXContent(QueryParseContext pa if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_ARRAY) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, POINTS_FIELD)) { + if (POINTS_FIELD.match(currentFieldName)) { shell = new ArrayList(); while ((token = parser.nextToken()) != Token.END_ARRAY) { shell.add(GeoUtils.parseGeoPoint(parser)); @@ -287,16 +292,16 @@ public static Optional fromXContent(QueryParseContext pa queryName = parser.text(); } else if ("boost".equals(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, COERCE_FIELD)) { + } else if (COERCE_FIELD.match(currentFieldName)) { coerce = parser.booleanValue(); if (coerce) { ignoreMalformed = true; } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, IGNORE_UNMAPPED_FIELD)) { + } else if (IGNORE_UNMAPPED_FIELD.match(currentFieldName)) { ignoreUnmapped = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, IGNORE_MALFORMED_FIELD)) { + } else if (IGNORE_MALFORMED_FIELD.match(currentFieldName)) { ignoreMalformed = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, VALIDATION_METHOD)) { + } else if (VALIDATION_METHOD.match(currentFieldName)) { validationMethod = GeoValidationMethod.fromString(parser.text()); } else { throw new ParsingException(parser.getTokenLocation(), diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoShapeQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/GeoShapeQueryBuilder.java index 76e9bc0f9be4a..7833ba4597fe3 100644 --- a/core/src/main/java/org/elasticsearch/index/query/GeoShapeQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/GeoShapeQueryBuilder.java @@ -39,6 +39,7 @@ import org.elasticsearch.common.geo.builders.ShapeBuilder; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; @@ -384,7 
+385,8 @@ private ShapeBuilder fetch(Client client, GetRequest getRequest, String path) th String[] pathElements = path.split("\\."); int currentPathSlot = 0; - try (XContentParser parser = XContentHelper.createParser(response.getSourceAsBytesRef())) { + // It is safe to use EMPTY here because this never uses namedObject + try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, response.getSourceAsBytesRef())) { XContentParser.Token currentToken; while ((currentToken = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (currentToken == XContentParser.Token.FIELD_NAME) { @@ -488,31 +490,31 @@ public static Optional fromXContent(QueryParseContext pars if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); token = parser.nextToken(); - if (parseContext.getParseFieldMatcher().match(currentFieldName, SHAPE_FIELD)) { + if (SHAPE_FIELD.match(currentFieldName)) { shape = ShapeBuilder.parse(parser); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, STRATEGY_FIELD)) { + } else if (STRATEGY_FIELD.match(currentFieldName)) { String strategyName = parser.text(); strategy = SpatialStrategy.fromString(strategyName); if (strategy == null) { throw new ParsingException(parser.getTokenLocation(), "Unknown strategy [" + strategyName + " ]"); } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, RELATION_FIELD)) { + } else if (RELATION_FIELD.match(currentFieldName)) { shapeRelation = ShapeRelation.getRelationByName(parser.text()); if (shapeRelation == null) { throw new ParsingException(parser.getTokenLocation(), "Unknown shape operation [" + parser.text() + " ]"); } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, INDEXED_SHAPE_FIELD)) { + } else if (INDEXED_SHAPE_FIELD.match(currentFieldName)) { while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, SHAPE_ID_FIELD)) { + if (SHAPE_ID_FIELD.match(currentFieldName)) { id = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, SHAPE_TYPE_FIELD)) { + } else if (SHAPE_TYPE_FIELD.match(currentFieldName)) { type = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, SHAPE_INDEX_FIELD)) { + } else if (SHAPE_INDEX_FIELD.match(currentFieldName)) { index = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, SHAPE_PATH_FIELD)) { + } else if (SHAPE_PATH_FIELD.match(currentFieldName)) { shapePath = parser.text(); } } else { @@ -527,11 +529,11 @@ public static Optional fromXContent(QueryParseContext pars } } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, IGNORE_UNMAPPED_FIELD)) { + } else if (IGNORE_UNMAPPED_FIELD.match(currentFieldName)) { ignoreUnmapped = parser.booleanValue(); } else { throw new ParsingException(parser.getTokenLocation(), "[" + GeoShapeQueryBuilder.NAME + diff 
--git a/core/src/main/java/org/elasticsearch/index/query/GeohashCellQuery.java b/core/src/main/java/org/elasticsearch/index/query/GeohashCellQuery.java index 57a189b72f446..d306a97736ff5 100644 --- a/core/src/main/java/org/elasticsearch/index/query/GeohashCellQuery.java +++ b/core/src/main/java/org/elasticsearch/index/query/GeohashCellQuery.java @@ -36,6 +36,7 @@ import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentParser.Token; import org.elasticsearch.index.mapper.BaseGeoPointFieldMapper; +import org.elasticsearch.index.mapper.LatLonPointFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import java.io.IOException; @@ -83,7 +84,7 @@ public class GeohashCellQuery { * @param geohashes optional array of additional geohashes * @return a new GeoBoundinboxfilter */ - public static Query create(QueryShardContext context, BaseGeoPointFieldMapper.GeoPointFieldType fieldType, + public static Query create(QueryShardContext context, BaseGeoPointFieldMapper.LegacyGeoPointFieldType fieldType, String geohash, @Nullable List geohashes) { MappedFieldType geoHashMapper = fieldType.geoHashFieldType(); if (geoHashMapper == null) { @@ -241,11 +242,14 @@ protected Query doToQuery(QueryShardContext context) throws IOException { } } - if (!(fieldType instanceof BaseGeoPointFieldMapper.GeoPointFieldType)) { + if (fieldType instanceof LatLonPointFieldMapper.LatLonPointFieldType) { + throw new QueryShardException(context, "failed to parse [{}] query. " + + "geo_point field no longer supports geohash_cell queries", NAME); + } else if (!(fieldType instanceof BaseGeoPointFieldMapper.LegacyGeoPointFieldType)) { throw new QueryShardException(context, "failed to parse [{}] query. field [{}] is not a geo_point field", NAME, fieldName); } - BaseGeoPointFieldMapper.GeoPointFieldType geoFieldType = ((BaseGeoPointFieldMapper.GeoPointFieldType) fieldType); + BaseGeoPointFieldMapper.LegacyGeoPointFieldType geoFieldType = ((BaseGeoPointFieldMapper.LegacyGeoPointFieldType) fieldType); if (!geoFieldType.isGeoHashPrefixEnabled()) { throw new QueryShardException(context, "failed to parse [{}] query. 
[geohash_prefix] is not enabled for field [{}]", NAME, fieldName); @@ -301,7 +305,7 @@ public static Optional fromXContent(QueryParseContext parseContext) thr if (parseContext.isDeprecatedSetting(field)) { // skip - } else if (parseContext.getParseFieldMatcher().match(field, PRECISION_FIELD)) { + } else if (PRECISION_FIELD.match(field)) { token = parser.nextToken(); if (token == Token.VALUE_NUMBER) { levels = parser.intValue(); @@ -309,16 +313,16 @@ public static Optional fromXContent(QueryParseContext parseContext) thr double meters = DistanceUnit.parse(parser.text(), DistanceUnit.DEFAULT, DistanceUnit.METERS); levels = GeoUtils.geoHashLevelsForPrecision(meters); } - } else if (parseContext.getParseFieldMatcher().match(field, NEIGHBORS_FIELD)) { + } else if (NEIGHBORS_FIELD.match(field)) { parser.nextToken(); neighbors = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(field, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(field)) { parser.nextToken(); queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(field, IGNORE_UNMAPPED_FIELD)) { + } else if (IGNORE_UNMAPPED_FIELD.match(field)) { parser.nextToken(); ignoreUnmapped = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(field, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(field)) { parser.nextToken(); boost = parser.floatValue(); } else { diff --git a/core/src/main/java/org/elasticsearch/index/query/HasChildQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/HasChildQueryBuilder.java deleted file mode 100644 index 81fe106833d11..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/query/HasChildQueryBuilder.java +++ /dev/null @@ -1,494 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ -package org.elasticsearch.index.query; - -import org.apache.lucene.index.DirectoryReader; -import org.apache.lucene.index.IndexReader; -import org.apache.lucene.index.MultiDocValues; -import org.apache.lucene.search.IndexSearcher; -import org.apache.lucene.search.MatchNoDocsQuery; -import org.apache.lucene.search.Query; -import org.apache.lucene.search.join.JoinUtil; -import org.apache.lucene.search.join.ScoreMode; -import org.apache.lucene.search.similarities.Similarity; -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParsingException; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.lucene.search.Queries; -import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.fielddata.IndexParentChildFieldData; -import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData; -import org.elasticsearch.index.mapper.DocumentMapper; -import org.elasticsearch.index.mapper.ParentFieldMapper; - -import java.io.IOException; -import java.util.Locale; -import java.util.Map; -import java.util.Objects; -import java.util.Optional; - -/** - * A query builder for has_child query. - */ -public class HasChildQueryBuilder extends AbstractQueryBuilder { - public static final String NAME = "has_child"; - - /** - * The default maximum number of children that are required to match for the parent to be considered a match. - */ - public static final int DEFAULT_MAX_CHILDREN = Integer.MAX_VALUE; - /** - * The default minimum number of children that are required to match for the parent to be considered a match. - */ - public static final int DEFAULT_MIN_CHILDREN = 0; - - /** - * The default value for ignore_unmapped. - */ - public static final boolean DEFAULT_IGNORE_UNMAPPED = false; - - private static final ParseField QUERY_FIELD = new ParseField("query", "filter"); - private static final ParseField TYPE_FIELD = new ParseField("type", "child_type"); - private static final ParseField MAX_CHILDREN_FIELD = new ParseField("max_children"); - private static final ParseField MIN_CHILDREN_FIELD = new ParseField("min_children"); - private static final ParseField SCORE_MODE_FIELD = new ParseField("score_mode"); - private static final ParseField INNER_HITS_FIELD = new ParseField("inner_hits"); - private static final ParseField IGNORE_UNMAPPED_FIELD = new ParseField("ignore_unmapped"); - - private final QueryBuilder query; - private final String type; - private final ScoreMode scoreMode; - private InnerHitBuilder innerHitBuilder; - private int minChildren = DEFAULT_MIN_CHILDREN; - private int maxChildren = DEFAULT_MAX_CHILDREN; - private boolean ignoreUnmapped = false; - - public HasChildQueryBuilder(String type, QueryBuilder query, ScoreMode scoreMode) { - this(type, query, DEFAULT_MIN_CHILDREN, DEFAULT_MAX_CHILDREN, scoreMode, null); - } - - private HasChildQueryBuilder(String type, QueryBuilder query, int minChildren, int maxChildren, ScoreMode scoreMode, - InnerHitBuilder innerHitBuilder) { - this.type = requireValue(type, "[" + NAME + "] requires 'type' field"); - this.query = requireValue(query, "[" + NAME + "] requires 'query' field"); - this.scoreMode = requireValue(scoreMode, "[" + NAME + "] requires 'score_mode' field"); - this.innerHitBuilder = innerHitBuilder; - this.minChildren = minChildren; - this.maxChildren = maxChildren; - } - - /** - * Read from a stream. 
- */ - public HasChildQueryBuilder(StreamInput in) throws IOException { - super(in); - type = in.readString(); - minChildren = in.readInt(); - maxChildren = in.readInt(); - scoreMode = ScoreMode.values()[in.readVInt()]; - query = in.readNamedWriteable(QueryBuilder.class); - innerHitBuilder = in.readOptionalWriteable(InnerHitBuilder::new); - ignoreUnmapped = in.readBoolean(); - } - - @Override - protected void doWriteTo(StreamOutput out) throws IOException { - out.writeString(type); - out.writeInt(minChildren); - out.writeInt(maxChildren); - out.writeVInt(scoreMode.ordinal()); - out.writeNamedWriteable(query); - out.writeOptionalWriteable(innerHitBuilder); - out.writeBoolean(ignoreUnmapped); - } - - /** - * Defines the minimum number of children that are required to match for the parent to be considered a match and - * the maximum number of children that are required to match for the parent to be considered a match. - */ - public HasChildQueryBuilder minMaxChildren(int minChildren, int maxChildren) { - if (minChildren < 0) { - throw new IllegalArgumentException("[" + NAME + "] requires non-negative 'min_children' field"); - } - if (maxChildren < 0) { - throw new IllegalArgumentException("[" + NAME + "] requires non-negative 'max_children' field"); - } - if (maxChildren < minChildren) { - throw new IllegalArgumentException("[" + NAME + "] 'max_children' is less than 'min_children'"); - } - this.minChildren = minChildren; - this.maxChildren = maxChildren; - return this; - } - - /** - * Returns inner hit definition in the scope of this query and reusing the defined type and query. - */ - public InnerHitBuilder innerHit() { - return innerHitBuilder; - } - - public HasChildQueryBuilder innerHit(InnerHitBuilder innerHit) { - this.innerHitBuilder = new InnerHitBuilder(Objects.requireNonNull(innerHit), query, type); - return this; - } - - /** - * Returns the children query to execute. - */ - public QueryBuilder query() { - return query; - } - - /** - * Returns the child type - */ - public String childType() { - return type; - } - - /** - * Returns how the scores from the matching child documents are mapped into the parent document. - */ - public ScoreMode scoreMode() { - return scoreMode; - } - - /** - * Returns the minimum number of children that are required to match for the parent to be considered a match. - * The default is {@value #DEFAULT_MAX_CHILDREN} - */ - public int minChildren() { - return minChildren; - } - - /** - * Returns the maximum number of children that are required to match for the parent to be considered a match. - * The default is {@value #DEFAULT_MIN_CHILDREN} - */ - public int maxChildren() { return maxChildren; } - - /** - * Sets whether the query builder should ignore unmapped types (and run a - * {@link MatchNoDocsQuery} in place of this query) or throw an exception if - * the type is unmapped. - */ - public HasChildQueryBuilder ignoreUnmapped(boolean ignoreUnmapped) { - this.ignoreUnmapped = ignoreUnmapped; - return this; - } - - /** - * Gets whether the query builder will ignore unmapped types (and run a - * {@link MatchNoDocsQuery} in place of this query) or throw an exception if - * the type is unmapped. 
- */ - public boolean ignoreUnmapped() { - return ignoreUnmapped; - } - - @Override - protected void doXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(NAME); - builder.field(QUERY_FIELD.getPreferredName()); - query.toXContent(builder, params); - builder.field(TYPE_FIELD.getPreferredName(), type); - builder.field(SCORE_MODE_FIELD.getPreferredName(), scoreModeAsString(scoreMode)); - builder.field(MIN_CHILDREN_FIELD.getPreferredName(), minChildren); - builder.field(MAX_CHILDREN_FIELD.getPreferredName(), maxChildren); - builder.field(IGNORE_UNMAPPED_FIELD.getPreferredName(), ignoreUnmapped); - printBoostAndQueryName(builder); - if (innerHitBuilder != null) { - builder.field(INNER_HITS_FIELD.getPreferredName(), innerHitBuilder, params); - } - builder.endObject(); - } - - public static Optional fromXContent(QueryParseContext parseContext) throws IOException { - XContentParser parser = parseContext.parser(); - float boost = AbstractQueryBuilder.DEFAULT_BOOST; - String childType = null; - ScoreMode scoreMode = ScoreMode.None; - int minChildren = HasChildQueryBuilder.DEFAULT_MIN_CHILDREN; - int maxChildren = HasChildQueryBuilder.DEFAULT_MAX_CHILDREN; - boolean ignoreUnmapped = DEFAULT_IGNORE_UNMAPPED; - String queryName = null; - InnerHitBuilder innerHitBuilder = null; - String currentFieldName = null; - XContentParser.Token token; - Optional iqb = Optional.empty(); - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (parseContext.isDeprecatedSetting(currentFieldName)) { - // skip - } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, QUERY_FIELD)) { - iqb = parseContext.parseInnerQueryBuilder(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, INNER_HITS_FIELD)) { - innerHitBuilder = InnerHitBuilder.fromXContent(parseContext); - } else { - throw new ParsingException(parser.getTokenLocation(), "[has_child] query does not support [" + currentFieldName + "]"); - } - } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, TYPE_FIELD)) { - childType = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, SCORE_MODE_FIELD)) { - scoreMode = parseScoreMode(parser.text()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { - boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MIN_CHILDREN_FIELD)) { - minChildren = parser.intValue(true); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MAX_CHILDREN_FIELD)) { - maxChildren = parser.intValue(true); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, IGNORE_UNMAPPED_FIELD)) { - ignoreUnmapped = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { - queryName = parser.text(); - } else { - throw new ParsingException(parser.getTokenLocation(), "[has_child] query does not support [" + currentFieldName + "]"); - } - } - } - - if (iqb.isPresent() == false) { - // if inner query is empty, bubble this up to caller so they can decide how to deal with it - return Optional.empty(); - } - - HasChildQueryBuilder hasChildQueryBuilder = new HasChildQueryBuilder(childType, iqb.get(), scoreMode); - if 
(innerHitBuilder != null) { - hasChildQueryBuilder.innerHit(innerHitBuilder); - } - hasChildQueryBuilder.minMaxChildren(minChildren, maxChildren); - hasChildQueryBuilder.queryName(queryName); - hasChildQueryBuilder.boost(boost); - hasChildQueryBuilder.ignoreUnmapped(ignoreUnmapped); - return Optional.of(hasChildQueryBuilder); - } - - public static ScoreMode parseScoreMode(String scoreModeString) { - if ("none".equals(scoreModeString)) { - return ScoreMode.None; - } else if ("min".equals(scoreModeString)) { - return ScoreMode.Min; - } else if ("max".equals(scoreModeString)) { - return ScoreMode.Max; - } else if ("avg".equals(scoreModeString)) { - return ScoreMode.Avg; - } else if ("sum".equals(scoreModeString)) { - return ScoreMode.Total; - } - throw new IllegalArgumentException("No score mode for child query [" + scoreModeString + "] found"); - } - - public static String scoreModeAsString(ScoreMode scoreMode) { - if (scoreMode == ScoreMode.Total) { - // Lucene uses 'total' but 'sum' is more consistent with other elasticsearch APIs - return "sum"; - } else { - return scoreMode.name().toLowerCase(Locale.ROOT); - } - } - - @Override - public String getWriteableName() { - return NAME; - } - - @Override - protected Query doToQuery(QueryShardContext context) throws IOException { - Query innerQuery; - final String[] previousTypes = context.getTypes(); - context.setTypes(type); - try { - innerQuery = query.toQuery(context); - } finally { - context.setTypes(previousTypes); - } - - DocumentMapper childDocMapper = context.getMapperService().documentMapper(type); - if (childDocMapper == null) { - if (ignoreUnmapped) { - return new MatchNoDocsQuery(); - } else { - throw new QueryShardException(context, "[" + NAME + "] no mapping found for type [" + type + "]"); - } - } - ParentFieldMapper parentFieldMapper = childDocMapper.parentFieldMapper(); - if (parentFieldMapper.active() == false) { - throw new QueryShardException(context, "[" + NAME + "] _parent field has no parent type configured"); - } - String parentType = parentFieldMapper.type(); - DocumentMapper parentDocMapper = context.getMapperService().documentMapper(parentType); - if (parentDocMapper == null) { - throw new QueryShardException(context, - "[" + NAME + "] Type [" + type + "] points to a non existent parent type [" + parentType + "]"); - } - - // wrap the query with type query - innerQuery = Queries.filtered(innerQuery, childDocMapper.typeFilter()); - - final ParentChildIndexFieldData parentChildIndexFieldData = context.getForField(parentFieldMapper.fieldType()); - return new LateParsingQuery(parentDocMapper.typeFilter(), innerQuery, minChildren(), maxChildren(), - parentType, scoreMode, parentChildIndexFieldData, context.getSearchSimilarity()); - } - - /** - * A query that rewrites into another query using - * {@link JoinUtil#createJoinQuery(String, Query, Query, IndexSearcher, ScoreMode, MultiDocValues.OrdinalMap, int, int)} - * that executes the actual join. - * - * This query is exclusively used by the {@link HasChildQueryBuilder} and {@link HasParentQueryBuilder} to get access - * to the {@link DirectoryReader} used by the current search in order to retrieve the {@link MultiDocValues.OrdinalMap}. - * The {@link MultiDocValues.OrdinalMap} is required by {@link JoinUtil} to execute the join. - */ - // TODO: Find a way to remove this query and let doToQuery(...) just return the query from JoinUtil.createJoinQuery(...) 
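// Editor's sketch, not part of the patch: LateParsingQuery (below, removed from core
// together with the rest of the has_child/has_parent builders in this diff) defers the
// actual parent/child join until rewrite(), because JoinUtil needs the global ordinal
// map of the DirectoryReader, which only exists once a searcher is available. The
// rewrite() shown further down effectively reduces to the following call, using the
// fields of this class:
String joinField = ParentFieldMapper.joinField(parentType);
MultiDocValues.OrdinalMap ordinalMap =
        ParentChildIndexFieldData.getOrdinalMap(indexParentChildFieldData, parentType);
Query joined = JoinUtil.createJoinQuery(joinField, innerQuery, toQuery, indexSearcher,
        scoreMode, ordinalMap, minChildren, maxChildren);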
- public static final class LateParsingQuery extends Query { - - private final Query toQuery; - private final Query innerQuery; - private final int minChildren; - private final int maxChildren; - private final String parentType; - private final ScoreMode scoreMode; - private final ParentChildIndexFieldData parentChildIndexFieldData; - private final Similarity similarity; - - LateParsingQuery(Query toQuery, Query innerQuery, int minChildren, int maxChildren, - String parentType, ScoreMode scoreMode, ParentChildIndexFieldData parentChildIndexFieldData, - Similarity similarity) { - this.toQuery = toQuery; - this.innerQuery = innerQuery; - this.minChildren = minChildren; - this.maxChildren = maxChildren; - this.parentType = parentType; - this.scoreMode = scoreMode; - this.parentChildIndexFieldData = parentChildIndexFieldData; - this.similarity = similarity; - } - - @Override - public Query rewrite(IndexReader reader) throws IOException { - Query rewritten = super.rewrite(reader); - if (rewritten != this) { - return rewritten; - } - if (reader instanceof DirectoryReader) { - String joinField = ParentFieldMapper.joinField(parentType); - IndexSearcher indexSearcher = new IndexSearcher(reader); - indexSearcher.setQueryCache(null); - indexSearcher.setSimilarity(similarity); - IndexParentChildFieldData indexParentChildFieldData = parentChildIndexFieldData.loadGlobal((DirectoryReader) reader); - MultiDocValues.OrdinalMap ordinalMap = ParentChildIndexFieldData.getOrdinalMap(indexParentChildFieldData, parentType); - return JoinUtil.createJoinQuery(joinField, innerQuery, toQuery, indexSearcher, scoreMode, - ordinalMap, minChildren, maxChildren); - } else { - if (reader.leaves().isEmpty() && reader.numDocs() == 0) { - // asserting reader passes down a MultiReader during rewrite which makes this - // blow up since for this query to work we have to have a DirectoryReader otherwise - // we can't load global ordinals - for this to work we simply check if the reader has no leaves - // and rewrite to match nothing - return new MatchNoDocsQuery(); - } - throw new IllegalStateException("can't load global ordinals for reader of type: " + - reader.getClass() + " must be a DirectoryReader"); - } - } - - @Override - public boolean equals(Object o) { - if (sameClassAs(o) == false) return false; - - LateParsingQuery that = (LateParsingQuery) o; - - if (minChildren != that.minChildren) return false; - if (maxChildren != that.maxChildren) return false; - if (!toQuery.equals(that.toQuery)) return false; - if (!innerQuery.equals(that.innerQuery)) return false; - if (!parentType.equals(that.parentType)) return false; - return scoreMode == that.scoreMode; - } - - @Override - public int hashCode() { - return Objects.hash(classHash(), toQuery, innerQuery, minChildren, maxChildren, parentType, scoreMode); - } - - @Override - public String toString(String s) { - return "LateParsingQuery {parentType=" + parentType + "}"; - } - - public int getMinChildren() { - return minChildren; - } - - public int getMaxChildren() { - return maxChildren; - } - - public ScoreMode getScoreMode() { - return scoreMode; - } - - public Query getInnerQuery() { - return innerQuery; - } - - public Similarity getSimilarity() { - return similarity; - } - } - - @Override - protected boolean doEquals(HasChildQueryBuilder that) { - return Objects.equals(query, that.query) - && Objects.equals(type, that.type) - && Objects.equals(scoreMode, that.scoreMode) - && Objects.equals(minChildren, that.minChildren) - && Objects.equals(maxChildren, that.maxChildren) - 
&& Objects.equals(innerHitBuilder, that.innerHitBuilder) - && Objects.equals(ignoreUnmapped, that.ignoreUnmapped); - } - - @Override - protected int doHashCode() { - return Objects.hash(query, type, scoreMode, minChildren, maxChildren, innerHitBuilder, ignoreUnmapped); - } - - @Override - protected QueryBuilder doRewrite(QueryRewriteContext queryRewriteContext) throws IOException { - QueryBuilder rewrittenQuery = query.rewrite(queryRewriteContext); - if (rewrittenQuery != query) { - InnerHitBuilder rewrittenInnerHit = InnerHitBuilder.rewrite(innerHitBuilder, rewrittenQuery); - return new HasChildQueryBuilder(type, rewrittenQuery, minChildren, maxChildren, scoreMode, rewrittenInnerHit); - } - return this; - } - - @Override - protected void extractInnerHitBuilders(Map innerHits) { - if (innerHitBuilder != null) { - innerHitBuilder.inlineInnerHits(innerHits); - } - } -} diff --git a/core/src/main/java/org/elasticsearch/index/query/HasParentQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/HasParentQueryBuilder.java deleted file mode 100644 index 5b89262ecc731..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/query/HasParentQueryBuilder.java +++ /dev/null @@ -1,324 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.index.query; - -import org.apache.lucene.search.BooleanClause; -import org.apache.lucene.search.BooleanQuery; -import org.apache.lucene.search.MatchNoDocsQuery; -import org.apache.lucene.search.Query; -import org.apache.lucene.search.join.ScoreMode; -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParsingException; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.lucene.search.Queries; -import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData; -import org.elasticsearch.index.mapper.DocumentMapper; -import org.elasticsearch.index.mapper.ParentFieldMapper; - -import java.io.IOException; -import java.util.HashSet; -import java.util.Map; -import java.util.Objects; -import java.util.Optional; -import java.util.Set; - -/** - * Builder for the 'has_parent' query. - */ -public class HasParentQueryBuilder extends AbstractQueryBuilder { - public static final String NAME = "has_parent"; - - /** - * The default value for ignore_unmapped. 
- */ - public static final boolean DEFAULT_IGNORE_UNMAPPED = false; - - private static final ParseField QUERY_FIELD = new ParseField("query", "filter"); - private static final ParseField SCORE_MODE_FIELD = new ParseField("score_mode").withAllDeprecated("score"); - private static final ParseField TYPE_FIELD = new ParseField("parent_type", "type"); - private static final ParseField SCORE_FIELD = new ParseField("score"); - private static final ParseField INNER_HITS_FIELD = new ParseField("inner_hits"); - private static final ParseField IGNORE_UNMAPPED_FIELD = new ParseField("ignore_unmapped"); - - private final QueryBuilder query; - private final String type; - private final boolean score; - private InnerHitBuilder innerHit; - private boolean ignoreUnmapped = false; - - public HasParentQueryBuilder(String type, QueryBuilder query, boolean score) { - this(type, query, score, null); - } - - private HasParentQueryBuilder(String type, QueryBuilder query, boolean score, InnerHitBuilder innerHit) { - this.type = requireValue(type, "[" + NAME + "] requires 'type' field"); - this.query = requireValue(query, "[" + NAME + "] requires 'query' field"); - this.score = score; - this.innerHit = innerHit; - } - - /** - * Read from a stream. - */ - public HasParentQueryBuilder(StreamInput in) throws IOException { - super(in); - type = in.readString(); - score = in.readBoolean(); - query = in.readNamedWriteable(QueryBuilder.class); - innerHit = in.readOptionalWriteable(InnerHitBuilder::new); - ignoreUnmapped = in.readBoolean(); - } - - @Override - protected void doWriteTo(StreamOutput out) throws IOException { - out.writeString(type); - out.writeBoolean(score); - out.writeNamedWriteable(query); - out.writeOptionalWriteable(innerHit); - out.writeBoolean(ignoreUnmapped); - } - - /** - * Returns the query to execute. - */ - public QueryBuilder query() { - return query; - } - - /** - * Returns true if the parent score is mapped into the child documents - */ - public boolean score() { - return score; - } - - /** - * Returns the parents type name - */ - public String type() { - return type; - } - - /** - * Returns inner hit definition in the scope of this query and reusing the defined type and query. - */ - public InnerHitBuilder innerHit() { - return innerHit; - } - - public HasParentQueryBuilder innerHit(InnerHitBuilder innerHit) { - this.innerHit = new InnerHitBuilder(innerHit, query, type); - return this; - } - - /** - * Sets whether the query builder should ignore unmapped types (and run a - * {@link MatchNoDocsQuery} in place of this query) or throw an exception if - * the type is unmapped. - */ - public HasParentQueryBuilder ignoreUnmapped(boolean ignoreUnmapped) { - this.ignoreUnmapped = ignoreUnmapped; - return this; - } - - /** - * Gets whether the query builder will ignore unmapped types (and run a - * {@link MatchNoDocsQuery} in place of this query) or throw an exception if - * the type is unmapped. 
- */ - public boolean ignoreUnmapped() { - return ignoreUnmapped; - } - - @Override - protected Query doToQuery(QueryShardContext context) throws IOException { - Query innerQuery; - String[] previousTypes = context.getTypes(); - context.setTypes(type); - try { - innerQuery = query.toQuery(context); - } finally { - context.setTypes(previousTypes); - } - - DocumentMapper parentDocMapper = context.getMapperService().documentMapper(type); - if (parentDocMapper == null) { - if (ignoreUnmapped) { - return new MatchNoDocsQuery(); - } else { - throw new QueryShardException(context, "[" + NAME + "] query configured 'parent_type' [" + type + "] is not a valid type"); - } - } - - Set childTypes = new HashSet<>(); - ParentChildIndexFieldData parentChildIndexFieldData = null; - for (DocumentMapper documentMapper : context.getMapperService().docMappers(false)) { - ParentFieldMapper parentFieldMapper = documentMapper.parentFieldMapper(); - if (parentFieldMapper.active() && type.equals(parentFieldMapper.type())) { - childTypes.add(documentMapper.type()); - parentChildIndexFieldData = context.getForField(parentFieldMapper.fieldType()); - } - } - - if (childTypes.isEmpty()) { - throw new QueryShardException(context, "[" + NAME + "] no child types found for type [" + type + "]"); - } - - Query childrenQuery; - if (childTypes.size() == 1) { - DocumentMapper documentMapper = context.getMapperService().documentMapper(childTypes.iterator().next()); - childrenQuery = documentMapper.typeFilter(); - } else { - BooleanQuery.Builder childrenFilter = new BooleanQuery.Builder(); - for (String childrenTypeStr : childTypes) { - DocumentMapper documentMapper = context.getMapperService().documentMapper(childrenTypeStr); - childrenFilter.add(documentMapper.typeFilter(), BooleanClause.Occur.SHOULD); - } - childrenQuery = childrenFilter.build(); - } - - // wrap the query with type query - innerQuery = Queries.filtered(innerQuery, parentDocMapper.typeFilter()); - return new HasChildQueryBuilder.LateParsingQuery(childrenQuery, - innerQuery, - HasChildQueryBuilder.DEFAULT_MIN_CHILDREN, - HasChildQueryBuilder.DEFAULT_MAX_CHILDREN, - type, - score ? 
ScoreMode.Max : ScoreMode.None, - parentChildIndexFieldData, - context.getSearchSimilarity()); - } - - @Override - protected void doXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(NAME); - builder.field(QUERY_FIELD.getPreferredName()); - query.toXContent(builder, params); - builder.field(TYPE_FIELD.getPreferredName(), type); - builder.field(SCORE_FIELD.getPreferredName(), score); - builder.field(IGNORE_UNMAPPED_FIELD.getPreferredName(), ignoreUnmapped); - printBoostAndQueryName(builder); - if (innerHit != null) { - builder.field(INNER_HITS_FIELD.getPreferredName(), innerHit, params); - } - builder.endObject(); - } - - public static Optional fromXContent(QueryParseContext parseContext) throws IOException { - XContentParser parser = parseContext.parser(); - float boost = AbstractQueryBuilder.DEFAULT_BOOST; - String parentType = null; - boolean score = false; - String queryName = null; - InnerHitBuilder innerHits = null; - boolean ignoreUnmapped = DEFAULT_IGNORE_UNMAPPED; - - String currentFieldName = null; - XContentParser.Token token; - Optional iqb = Optional.empty(); - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, QUERY_FIELD)) { - iqb = parseContext.parseInnerQueryBuilder(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, INNER_HITS_FIELD)) { - innerHits = InnerHitBuilder.fromXContent(parseContext); - } else { - throw new ParsingException(parser.getTokenLocation(), "[has_parent] query does not support [" + currentFieldName + "]"); - } - } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, TYPE_FIELD)) { - parentType = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, SCORE_MODE_FIELD)) { - String scoreModeValue = parser.text(); - if ("score".equals(scoreModeValue)) { - score = true; - } else if ("none".equals(scoreModeValue)) { - score = false; - } else { - throw new ParsingException(parser.getTokenLocation(), "[has_parent] query does not support [" + - scoreModeValue + "] as an option for score_mode"); - } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, SCORE_FIELD)) { - score = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, IGNORE_UNMAPPED_FIELD)) { - ignoreUnmapped = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { - boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { - queryName = parser.text(); - } else { - throw new ParsingException(parser.getTokenLocation(), "[has_parent] query does not support [" + currentFieldName + "]"); - } - } - } - if (iqb.isPresent() == false) { - // if inner query is empty, bubble this up to caller so they can decide how to deal with it - return Optional.empty(); - } - HasParentQueryBuilder queryBuilder = new HasParentQueryBuilder(parentType, iqb.get(), score) - .ignoreUnmapped(ignoreUnmapped) - .queryName(queryName) - .boost(boost); - if (innerHits != null) { - queryBuilder.innerHit(innerHits); - } - return Optional.of(queryBuilder); - } - - @Override - public String getWriteableName() { - return NAME; - } - - @Override - 
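For orientation while reading the deleted builder above: a minimal usage sketch of the API this change removes, using only the constructor and setters visible in the deleted code. The "blog" parent type, the match-all inner query, and the class and method names of the sketch itself are illustrative assumptions, not taken from this patch.

```java
import org.elasticsearch.index.query.HasParentQueryBuilder;
import org.elasticsearch.index.query.MatchAllQueryBuilder;
import org.elasticsearch.index.query.QueryBuilder;

public class HasParentUsageSketch {
    static QueryBuilder parentFilter() {
        // score = false: parent scores are not mapped into the child documents.
        return new HasParentQueryBuilder("blog", new MatchAllQueryBuilder(), false)
                .ignoreUnmapped(true)   // run a MatchNoDocsQuery instead of failing on an unmapped type
                .queryName("by-parent") // standard AbstractQueryBuilder naming, as parsed by fromXContent above
                .boost(2.0f);
    }
}
```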
protected boolean doEquals(HasParentQueryBuilder that) { - return Objects.equals(query, that.query) - && Objects.equals(type, that.type) - && Objects.equals(score, that.score) - && Objects.equals(innerHit, that.innerHit) - && Objects.equals(ignoreUnmapped, that.ignoreUnmapped); - } - - @Override - protected int doHashCode() { - return Objects.hash(query, type, score, innerHit, ignoreUnmapped); - } - - @Override - protected QueryBuilder doRewrite(QueryRewriteContext queryShardContext) throws IOException { - QueryBuilder rewrittenQuery = query.rewrite(queryShardContext); - if (rewrittenQuery != query) { - InnerHitBuilder rewrittenInnerHit = InnerHitBuilder.rewrite(innerHit, rewrittenQuery); - return new HasParentQueryBuilder(type, rewrittenQuery, score, rewrittenInnerHit); - } - return this; - } - - @Override - protected void extractInnerHitBuilders(Map innerHits) { - if (innerHit!= null) { - innerHit.inlineInnerHits(innerHits); - } - } -} diff --git a/core/src/main/java/org/elasticsearch/index/query/IdsQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/IdsQueryBuilder.java index c8f9f55f96e6f..81afd7c334edf 100644 --- a/core/src/main/java/org/elasticsearch/index/query/IdsQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/IdsQueryBuilder.java @@ -19,30 +19,32 @@ package org.elasticsearch.index.query; -import org.apache.lucene.queries.TermsQuery; +import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.lucene.search.Queries; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.mapper.UidFieldMapper; import java.io.IOException; -import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; import java.util.Collections; import java.util.HashSet; -import java.util.List; import java.util.Objects; import java.util.Optional; import java.util.Set; +import static org.elasticsearch.common.xcontent.ObjectParser.fromList; + /** * A query that will return only documents matching specific ids (and a type). */ @@ -54,23 +56,22 @@ public class IdsQueryBuilder extends AbstractQueryBuilder { private final Set ids = new HashSet<>(); - private final String[] types; + private String[] types = Strings.EMPTY_ARRAY; /** - * Creates a new IdsQueryBuilder without providing the types of the documents to look for + * Creates a new IdsQueryBuilder with no types specified upfront */ public IdsQueryBuilder() { - this.types = new String[0]; + // nothing to do } /** * Creates a new IdsQueryBuilder by providing the types of the documents to look for + * @deprecated Replaced by {@link #types(String...)} */ + @Deprecated public IdsQueryBuilder(String... 
types) { - if (types == null) { - throw new IllegalArgumentException("[ids] types cannot be null"); - } - this.types = types; + types(types); } /** @@ -88,6 +89,17 @@ protected void doWriteTo(StreamOutput out) throws IOException { out.writeStringArray(ids.toArray(new String[ids.size()])); } + /** + * Add types to query + */ + public IdsQueryBuilder types(String... types) { + if (types == null) { + throw new IllegalArgumentException("[" + NAME + "] types cannot be null"); + } + this.types = types; + return this; + } + /** * Returns the types used in this query */ @@ -100,7 +112,7 @@ public String[] types() { */ public IdsQueryBuilder addIds(String... ids) { if (ids == null) { - throw new IllegalArgumentException("[ids] ids cannot be null"); + throw new IllegalArgumentException("[" + NAME + "] ids cannot be null"); } Collections.addAll(this.ids, ids); return this; @@ -126,71 +138,21 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep builder.endObject(); } - public static Optional fromXContent(QueryParseContext parseContext) throws IOException { - XContentParser parser = parseContext.parser(); - List ids = new ArrayList<>(); - List types = new ArrayList<>(); - float boost = AbstractQueryBuilder.DEFAULT_BOOST; - String queryName = null; - - String currentFieldName = null; - XContentParser.Token token; - boolean idsProvided = false; - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (token == XContentParser.Token.START_ARRAY) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, VALUES_FIELD)) { - idsProvided = true; - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - if ((token == XContentParser.Token.VALUE_STRING) || - (token == XContentParser.Token.VALUE_NUMBER)) { - String id = parser.textOrNull(); - if (id == null) { - throw new ParsingException(parser.getTokenLocation(), "No value specified for term filter"); - } - ids.add(id); - } else { - throw new ParsingException(parser.getTokenLocation(), - "Illegal value for id, expecting a string or number, got: " + token); - } - } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, TYPE_FIELD)) { - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - String value = parser.textOrNull(); - if (value == null) { - throw new ParsingException(parser.getTokenLocation(), "No type specified for term filter"); - } - types.add(value); - } - } else { - throw new ParsingException(parser.getTokenLocation(), "[" + IdsQueryBuilder.NAME + - "] query does not support [" + currentFieldName + "]"); - } - } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, TYPE_FIELD)) { - types = Collections.singletonList(parser.text()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { - boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { - queryName = parser.text(); - } else { - throw new ParsingException(parser.getTokenLocation(), "[" + IdsQueryBuilder.NAME + - "] query does not support [" + currentFieldName + "]"); - } - } else { - throw new ParsingException(parser.getTokenLocation(), "[" + IdsQueryBuilder.NAME + - "] unknown token [" + token + "] after [" + currentFieldName + "]"); - } - } - if (!idsProvided) { - throw new 
ParsingException(parser.getTokenLocation(), "[" + IdsQueryBuilder.NAME + "] query, no ids values provided"); - } + private static ObjectParser PARSER = new ObjectParser<>(NAME, + () -> new IdsQueryBuilder()); - IdsQueryBuilder query = new IdsQueryBuilder(types.toArray(new String[types.size()])); - query.addIds(ids.toArray(new String[ids.size()])); - query.boost(boost).queryName(queryName); - return Optional.of(query); + static { + PARSER.declareStringArray(fromList(String.class, IdsQueryBuilder::types), IdsQueryBuilder.TYPE_FIELD); + PARSER.declareStringArray(fromList(String.class, IdsQueryBuilder::addIds), IdsQueryBuilder.VALUES_FIELD); + declareStandardFields(PARSER); + } + + public static Optional fromXContent(QueryParseContext context) { + try { + return Optional.of(PARSER.apply(context.parser(), context)); + } catch (IllegalArgumentException e) { + throw new ParsingException(context.parser().getTokenLocation(), e.getMessage(), e); + } } @@ -202,6 +164,10 @@ public String getWriteableName() { @Override protected Query doToQuery(QueryShardContext context) throws IOException { Query query; + MappedFieldType uidField = context.fieldMapper(UidFieldMapper.NAME); + if (uidField == null) { + return new MatchNoDocsQuery("No mappings"); + } if (this.ids.isEmpty()) { query = Queries.newMatchNoDocsQuery("Missing ids in \"" + this.getName() + "\" query."); } else { @@ -215,7 +181,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { Collections.addAll(typesForQuery, types); } - query = new TermsQuery(UidFieldMapper.NAME, Uid.createUidsForTypesAndIds(typesForQuery, ids)); + query = uidField.termsQuery(Arrays.asList(Uid.createUidsForTypesAndIds(typesForQuery, ids)), context); } return query; } diff --git a/core/src/main/java/org/elasticsearch/index/query/IndicesQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/IndicesQueryBuilder.java index 736387a0d24c7..f5ad441946985 100644 --- a/core/src/main/java/org/elasticsearch/index/query/IndicesQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/IndicesQueryBuilder.java @@ -33,6 +33,7 @@ import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; +import java.util.Map; import java.util.Objects; import java.util.Optional; @@ -132,7 +133,7 @@ private static QueryBuilder defaultNoMatchQuery() { @Override protected void doXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(NAME); - builder.field(INDICES_FIELD.getPreferredName(), indices); + builder.array(INDICES_FIELD.getPreferredName(), indices); builder.field(QUERY_FIELD.getPreferredName()); innerQuery.toXContent(builder, params); builder.field(NO_MATCH_QUERY.getPreferredName()); @@ -157,16 +158,16 @@ public static Optional fromXContent(QueryParseContext parse if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, QUERY_FIELD)) { + if (QUERY_FIELD.match(currentFieldName)) { // the 2.0 behaviour when encountering "query" : {} is to return no docs for matching indices innerQuery = parseContext.parseInnerQueryBuilder().orElse(new MatchNoneQueryBuilder()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, NO_MATCH_QUERY)) { + } else if (NO_MATCH_QUERY.match(currentFieldName)) { noMatchQuery = parseContext.parseInnerQueryBuilder().orElse(defaultNoMatchQuery()); } else { throw new 
ParsingException(parser.getTokenLocation(), "[indices] query does not support [" + currentFieldName + "]"); } } else if (token == XContentParser.Token.START_ARRAY) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, INDICES_FIELD)) { + if (INDICES_FIELD.match(currentFieldName)) { if (indices.isEmpty() == false) { throw new ParsingException(parser.getTokenLocation(), "[indices] indices or index already specified"); } @@ -181,16 +182,16 @@ public static Optional fromXContent(QueryParseContext parse throw new ParsingException(parser.getTokenLocation(), "[indices] query does not support [" + currentFieldName + "]"); } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, INDEX_FIELD)) { + if (INDEX_FIELD.match(currentFieldName)) { if (indices.isEmpty() == false) { throw new ParsingException(parser.getTokenLocation(), "[indices] indices or index already specified"); } indices.add(parser.text()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, NO_MATCH_QUERY)) { + } else if (NO_MATCH_QUERY.match(currentFieldName)) { noMatchQuery = parseNoMatchQuery(parser.text()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); } else { throw new ParsingException(parser.getTokenLocation(), "[indices] query does not support [" + currentFieldName + "]"); @@ -232,6 +233,12 @@ protected Query doToQuery(QueryShardContext context) throws IOException { return noMatchQuery.toQuery(context); } + @Override + protected void extractInnerHitBuilders(Map innerHits) { + InnerHitContextBuilder.extractInnerHits(innerQuery, innerHits); + InnerHitContextBuilder.extractInnerHits(noMatchQuery, innerHits); + } + @Override public int doHashCode() { return Objects.hash(innerQuery, noMatchQuery, Arrays.hashCode(indices)); @@ -246,10 +253,10 @@ protected boolean doEquals(IndicesQueryBuilder other) { @Override protected QueryBuilder doRewrite(QueryRewriteContext queryShardContext) throws IOException { - QueryBuilder newInnnerQuery = innerQuery.rewrite(queryShardContext); + QueryBuilder newInnerQuery = innerQuery.rewrite(queryShardContext); QueryBuilder newNoMatchQuery = noMatchQuery.rewrite(queryShardContext); - if (newInnnerQuery != innerQuery || newNoMatchQuery != noMatchQuery) { - return new IndicesQueryBuilder(innerQuery, indices).noMatchQuery(noMatchQuery); + if (newInnerQuery != innerQuery || newNoMatchQuery != noMatchQuery) { + return new IndicesQueryBuilder(newInnerQuery, indices).noMatchQuery(newNoMatchQuery); } return this; } diff --git a/core/src/main/java/org/elasticsearch/index/query/InnerHitBuilder.java b/core/src/main/java/org/elasticsearch/index/query/InnerHitBuilder.java index 27b0138e1e5a0..7b331dd487b51 100644 --- a/core/src/main/java/org/elasticsearch/index/query/InnerHitBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/InnerHitBuilder.java @@ -18,6 +18,7 @@ */ package org.elasticsearch.index.query; +import org.elasticsearch.Version; import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; @@ -27,32 +28,21 @@ import 
org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.mapper.DocumentMapper; -import org.elasticsearch.index.mapper.ObjectMapper; import org.elasticsearch.script.Script; -import org.elasticsearch.script.ScriptContext; -import org.elasticsearch.script.SearchScript; import org.elasticsearch.search.builder.SearchSourceBuilder; import org.elasticsearch.search.builder.SearchSourceBuilder.ScriptField; import org.elasticsearch.search.fetch.StoredFieldsContext; -import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext; -import org.elasticsearch.search.fetch.subphase.DocValueFieldsFetchSubPhase; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; -import org.elasticsearch.search.fetch.subphase.InnerHitsContext; import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; -import org.elasticsearch.search.internal.SearchContext; -import org.elasticsearch.search.sort.SortAndFormats; import org.elasticsearch.search.sort.SortBuilder; import java.io.IOException; import java.util.ArrayList; -import java.util.Collections; -import java.util.HashMap; +import java.util.Comparator; import java.util.HashSet; +import java.util.Iterator; import java.util.List; -import java.util.Map; import java.util.Objects; -import java.util.Optional; import java.util.Set; import static org.elasticsearch.common.xcontent.XContentParser.Token.END_OBJECT; @@ -60,13 +50,14 @@ public final class InnerHitBuilder extends ToXContentToBytes implements Writeable { public static final ParseField NAME_FIELD = new ParseField("name"); - public static final ParseField INNER_HITS_FIELD = new ParseField("inner_hits"); + public static final ParseField IGNORE_UNMAPPED = new ParseField("ignore_unmapped"); public static final QueryBuilder DEFAULT_INNER_HIT_QUERY = new MatchAllQueryBuilder(); private static final ObjectParser PARSER = new ObjectParser<>("inner_hits", InnerHitBuilder::new); static { PARSER.declareString(InnerHitBuilder::setName, NAME_FIELD); + PARSER.declareBoolean((innerHitBuilder, value) -> innerHitBuilder.ignoreUnmapped = value, IGNORE_UNMAPPED); PARSER.declareInt(InnerHitBuilder::setFrom, SearchSourceBuilder.FROM_FIELD); PARSER.declareInt(InnerHitBuilder::setSize, SearchSourceBuilder.SIZE_FIELD); PARSER.declareBoolean(InnerHitBuilder::setExplain, SearchSourceBuilder.EXPLAIN_FIELD); @@ -95,42 +86,17 @@ public final class InnerHitBuilder extends ToXContentToBytes implements Writeabl ObjectParser.ValueType.OBJECT_ARRAY); PARSER.declareField((p, i, c) -> { try { - i.setFetchSourceContext(FetchSourceContext.parse(c)); + i.setFetchSourceContext(FetchSourceContext.fromXContent(c.parser())); } catch (IOException e) { throw new ParsingException(p.getTokenLocation(), "Could not parse inner _source definition", e); } - }, SearchSourceBuilder._SOURCE_FIELD, ObjectParser.ValueType.OBJECT_OR_BOOLEAN); + }, SearchSourceBuilder._SOURCE_FIELD, ObjectParser.ValueType.OBJECT_ARRAY_BOOLEAN_OR_STRING); PARSER.declareObject(InnerHitBuilder::setHighlightBuilder, (p, c) -> HighlightBuilder.fromXContent(c), SearchSourceBuilder.HIGHLIGHT_FIELD); - PARSER.declareObject(InnerHitBuilder::setChildInnerHits, (p, c) -> { - try { - Map innerHitBuilders = new HashMap<>(); - String innerHitName = null; - for (XContentParser.Token token = p.nextToken(); token != XContentParser.Token.END_OBJECT; token = p.nextToken()) { - switch (token) { - case START_OBJECT: - InnerHitBuilder 
innerHitBuilder = InnerHitBuilder.fromXContent(c); - innerHitBuilder.setName(innerHitName); - innerHitBuilders.put(innerHitName, innerHitBuilder); - break; - case FIELD_NAME: - innerHitName = p.currentName(); - break; - default: - throw new ParsingException(p.getTokenLocation(), "Expected [" + XContentParser.Token.START_OBJECT + "] in [" - + p.currentName() + "] but found [" + token + "]", p.getTokenLocation()); - } - } - return innerHitBuilders; - } catch (IOException e) { - throw new ParsingException(p.getTokenLocation(), "Could not parse inner query definition", e); - } - }, INNER_HITS_FIELD); } private String name; - private String nestedPath; - private String parentChildType; + private boolean ignoreUnmapped; private int from; private int size = 3; @@ -145,67 +111,28 @@ public final class InnerHitBuilder extends ToXContentToBytes implements Writeabl private Set scriptFields; private HighlightBuilder highlightBuilder; private FetchSourceContext fetchSourceContext; - private Map childInnerHits; public InnerHitBuilder() { + this.name = null; } - private InnerHitBuilder(InnerHitBuilder other) { - name = other.name; - from = other.from; - size = other.size; - explain = other.explain; - version = other.version; - trackScores = other.trackScores; - if (other.storedFieldsContext != null) { - storedFieldsContext = new StoredFieldsContext(other.storedFieldsContext); - } - if (other.docValueFields != null) { - docValueFields = new ArrayList<> (other.docValueFields); - } - if (other.scriptFields != null) { - scriptFields = new HashSet<> (other.scriptFields); - } - if (other.fetchSourceContext != null) { - fetchSourceContext = new FetchSourceContext( - other.fetchSourceContext.fetchSource(), other.fetchSourceContext.includes(), other.fetchSourceContext.excludes() - ); - } - if (other.sorts != null) { - sorts = new ArrayList<>(other.sorts); - } - highlightBuilder = other.highlightBuilder; - if (other.childInnerHits != null) { - childInnerHits = new HashMap<>(other.childInnerHits); - } - } - - - InnerHitBuilder(InnerHitBuilder other, String nestedPath, QueryBuilder query) { - this(other); - this.query = query; - this.nestedPath = nestedPath; - if (name == null) { - this.name = nestedPath; - } + public InnerHitBuilder(String name) { + this.name = name; } - InnerHitBuilder(InnerHitBuilder other, QueryBuilder query, String parentChildType) { - this(other); - this.query = query; - this.parentChildType = parentChildType; - if (name == null) { - this.name = parentChildType; - } - } /** * Read from a stream. 
*/ public InnerHitBuilder(StreamInput in) throws IOException { name = in.readOptionalString(); - nestedPath = in.readOptionalString(); - parentChildType = in.readOptionalString(); + if (in.getVersion().before(Version.V_5_5_0)) { + in.readOptionalString(); + in.readOptionalString(); + } + if (in.getVersion().onOrAfter(Version.V_5_2_0)) { + ignoreUnmapped = in.readBoolean(); + } from = in.readVInt(); size = in.readVInt(); explain = in.readBoolean(); @@ -220,7 +147,7 @@ public InnerHitBuilder(StreamInput in) throws IOException { scriptFields.add(new ScriptField(in)); } } - fetchSourceContext = in.readOptionalStreamable(FetchSourceContext::new); + fetchSourceContext = in.readOptionalWriteable(FetchSourceContext::new); if (in.readBoolean()) { int size = in.readVInt(); sorts = new ArrayList<>(size); @@ -229,21 +156,23 @@ public InnerHitBuilder(StreamInput in) throws IOException { } } highlightBuilder = in.readOptionalWriteable(HighlightBuilder::new); - query = in.readNamedWriteable(QueryBuilder.class); - if (in.readBoolean()) { - int size = in.readVInt(); - childInnerHits = new HashMap<>(size); - for (int i = 0; i < size; i++) { - childInnerHits.put(in.readString(), new InnerHitBuilder(in)); - } + if (in.getVersion().before(Version.V_5_5_0)) { + /** + * this is needed for BWC with nodes pre 5.5 + */ + in.readNamedWriteable(QueryBuilder.class); + boolean hasChildren = in.readBoolean(); + assert hasChildren == false; } } @Override public void writeTo(StreamOutput out) throws IOException { + if (out.getVersion().before(Version.V_5_5_0)) { + throw new IOException("Invalid output version, must >= " + Version.V_5_5_0.toString()); + } out.writeOptionalString(name); - out.writeOptionalString(nestedPath); - out.writeOptionalString(parentChildType); + out.writeBoolean(ignoreUnmapped); out.writeVInt(from); out.writeVInt(size); out.writeBoolean(explain); @@ -255,11 +184,13 @@ public void writeTo(StreamOutput out) throws IOException { out.writeBoolean(hasScriptFields); if (hasScriptFields) { out.writeVInt(scriptFields.size()); - for (ScriptField scriptField : scriptFields) { - scriptField.writeTo(out); + Iterator iterator = scriptFields.stream() + .sorted(Comparator.comparing(ScriptField::fieldName)).iterator(); + while (iterator.hasNext()) { + iterator.next().writeTo(out); } } - out.writeOptionalStreamable(fetchSourceContext); + out.writeOptionalWriteable(fetchSourceContext); boolean hasSorts = sorts != null; out.writeBoolean(hasSorts); if (hasSorts) { @@ -269,16 +200,82 @@ public void writeTo(StreamOutput out) throws IOException { } } out.writeOptionalWriteable(highlightBuilder); - out.writeNamedWriteable(query); - boolean hasChildInnerHits = childInnerHits != null; - out.writeBoolean(hasChildInnerHits); - if (hasChildInnerHits) { - out.writeVInt(childInnerHits.size()); - for (Map.Entry entry : childInnerHits.entrySet()) { - out.writeString(entry.getKey()); - entry.getValue().writeTo(out); + } + + /** + * BWC serialization for nested {@link InnerHitBuilder}. + * Should only be used to send nested inner hits to nodes pre 5.5. + */ + protected void writeToNestedBWC(StreamOutput out, QueryBuilder query, String nestedPath) throws IOException { + assert out.getVersion().before(Version.V_5_5_0) : + "invalid output version, must be < " + Version.V_5_5_0.toString(); + writeToBWC(out, query, nestedPath, null); + } + + /** + * BWC serialization for collapsing {@link InnerHitBuilder}. + * Should only be used to send collapsing inner hits to nodes pre 5.5. 
+ */ + public void writeToCollapseBWC(StreamOutput out) throws IOException { + assert out.getVersion().before(Version.V_5_5_0) : + "invalid output version, must be < " + Version.V_5_5_0.toString(); + writeToBWC(out, new MatchAllQueryBuilder(), null, null); + } + + /** + * BWC serialization for parent/child {@link InnerHitBuilder}. + * Should only be used to send hasParent or hasChild inner hits to nodes pre 5.5. + */ + public void writeToParentChildBWC(StreamOutput out, QueryBuilder query, String parentChildPath) throws IOException { + assert(out.getVersion().before(Version.V_5_5_0)) : + "invalid output version, must be < " + Version.V_5_5_0.toString(); + writeToBWC(out, query, null, parentChildPath); + } + + private void writeToBWC(StreamOutput out, + QueryBuilder query, + String nestedPath, + String parentChildPath) throws IOException { + out.writeOptionalString(name); + if (nestedPath != null) { + out.writeOptionalString(nestedPath); + out.writeOptionalString(null); + } else { + out.writeOptionalString(null); + out.writeOptionalString(parentChildPath); + } + if (out.getVersion().onOrAfter(Version.V_5_2_0)) { + out.writeBoolean(ignoreUnmapped); + } + out.writeVInt(from); + out.writeVInt(size); + out.writeBoolean(explain); + out.writeBoolean(version); + out.writeBoolean(trackScores); + out.writeOptionalWriteable(storedFieldsContext); + out.writeGenericValue(docValueFields); + boolean hasScriptFields = scriptFields != null; + out.writeBoolean(hasScriptFields); + if (hasScriptFields) { + out.writeVInt(scriptFields.size()); + Iterator iterator = scriptFields.stream() + .sorted(Comparator.comparing(ScriptField::fieldName)).iterator(); + while (iterator.hasNext()) { + iterator.next().writeTo(out); + } + } + out.writeOptionalWriteable(fetchSourceContext); + boolean hasSorts = sorts != null; + out.writeBoolean(hasSorts); + if (hasSorts) { + out.writeVInt(sorts.size()); + for (SortBuilder sort : sorts) { + out.writeNamedWriteable(sort); } } + out.writeOptionalWriteable(highlightBuilder); + out.writeNamedWriteable(query); + out.writeBoolean(false); } public String getName() { @@ -290,6 +287,18 @@ public InnerHitBuilder setName(String name) { return this; } + public InnerHitBuilder setIgnoreUnmapped(boolean value) { + this.ignoreUnmapped = value; + return this; + } + + /** + * Whether to include inner hits in the search response hits if required mappings is missing + */ + public boolean isIgnoreUnmapped() { + return ignoreUnmapped; + } + public int getFrom() { return from; } @@ -500,129 +509,13 @@ QueryBuilder getQuery() { return query; } - void setChildInnerHits(Map childInnerHits) { - this.childInnerHits = childInnerHits; - } - - String getParentChildType() { - return parentChildType; - } - - String getNestedPath() { - return nestedPath; - } - - void addChildInnerHit(InnerHitBuilder innerHitBuilder) { - if (childInnerHits == null) { - childInnerHits = new HashMap<>(); - } - this.childInnerHits.put(innerHitBuilder.getName(), innerHitBuilder); - } - - public InnerHitsContext.BaseInnerHits build(SearchContext parentSearchContext, - InnerHitsContext innerHitsContext) throws IOException { - QueryShardContext queryShardContext = parentSearchContext.getQueryShardContext(); - if (nestedPath != null) { - ObjectMapper nestedObjectMapper = queryShardContext.getObjectMapper(nestedPath); - ObjectMapper parentObjectMapper = queryShardContext.nestedScope().nextLevel(nestedObjectMapper); - InnerHitsContext.NestedInnerHits nestedInnerHits = new InnerHitsContext.NestedInnerHits( - name, parentSearchContext, 
parentObjectMapper, nestedObjectMapper - ); - setupInnerHitsContext(queryShardContext, nestedInnerHits); - if (childInnerHits != null) { - buildChildInnerHits(parentSearchContext, nestedInnerHits); - } - queryShardContext.nestedScope().previousLevel(); - innerHitsContext.addInnerHitDefinition(nestedInnerHits); - return nestedInnerHits; - } else if (parentChildType != null) { - DocumentMapper documentMapper = queryShardContext.getMapperService().documentMapper(parentChildType); - InnerHitsContext.ParentChildInnerHits parentChildInnerHits = new InnerHitsContext.ParentChildInnerHits( - name, parentSearchContext, queryShardContext.getMapperService(), documentMapper - ); - setupInnerHitsContext(queryShardContext, parentChildInnerHits); - if (childInnerHits != null) { - buildChildInnerHits(parentSearchContext, parentChildInnerHits); - } - innerHitsContext.addInnerHitDefinition( parentChildInnerHits); - return parentChildInnerHits; - } else { - throw new IllegalStateException("Neither a nested or parent/child inner hit"); - } - } - - private void buildChildInnerHits(SearchContext parentSearchContext, InnerHitsContext.BaseInnerHits innerHits) throws IOException { - Map childInnerHits = new HashMap<>(); - for (Map.Entry entry : this.childInnerHits.entrySet()) { - InnerHitsContext.BaseInnerHits childInnerHit = entry.getValue().build( - parentSearchContext, new InnerHitsContext() - ); - childInnerHits.put(entry.getKey(), childInnerHit); - } - innerHits.setChildInnerHits(childInnerHits); - } - - private void setupInnerHitsContext(QueryShardContext context, InnerHitsContext.BaseInnerHits innerHitsContext) throws IOException { - innerHitsContext.from(from); - innerHitsContext.size(size); - innerHitsContext.explain(explain); - innerHitsContext.version(version); - innerHitsContext.trackScores(trackScores); - if (storedFieldsContext != null) { - innerHitsContext.storedFieldsContext(storedFieldsContext); - } - if (docValueFields != null) { - DocValueFieldsContext docValueFieldsContext = innerHitsContext - .getFetchSubPhaseContext(DocValueFieldsFetchSubPhase.CONTEXT_FACTORY); - for (String field : docValueFields) { - docValueFieldsContext.add(new DocValueFieldsContext.DocValueField(field)); - } - docValueFieldsContext.setHitExecutionNeeded(true); - } - if (scriptFields != null) { - for (ScriptField field : scriptFields) { - SearchScript searchScript = innerHitsContext.scriptService().search(innerHitsContext.lookup(), field.script(), - ScriptContext.Standard.SEARCH, Collections.emptyMap()); - innerHitsContext.scriptFields().add(new org.elasticsearch.search.fetch.subphase.ScriptFieldsContext.ScriptField( - field.fieldName(), searchScript, field.ignoreFailure())); - } - } - if (fetchSourceContext != null) { - innerHitsContext.fetchSourceContext(fetchSourceContext); - } - if (sorts != null) { - Optional optionalSort = SortBuilder.buildSort(sorts, context); - if (optionalSort.isPresent()) { - innerHitsContext.sort(optionalSort.get()); - } - } - if (highlightBuilder != null) { - innerHitsContext.highlight(highlightBuilder.build(context)); - } - ParsedQuery parsedQuery = new ParsedQuery(query.toQuery(context), context.copyNamedQueries()); - innerHitsContext.parsedQuery(parsedQuery); - } - - public void inlineInnerHits(Map innerHits) { - InnerHitBuilder copy = new InnerHitBuilder(this); - copy.parentChildType = this.parentChildType; - copy.nestedPath = this.nestedPath; - copy.query = this.query; - innerHits.put(copy.getName(), copy); - - Map childInnerHits = new HashMap<>(); - extractInnerHits(query, 
childInnerHits); - if (childInnerHits.size() > 0) { - copy.setChildInnerHits(childInnerHits); - } - } - @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(); if (name != null) { builder.field(NAME_FIELD.getPreferredName(), name); } + builder.field(IGNORE_UNMAPPED.getPreferredName(), ignoreUnmapped); builder.field(SearchSourceBuilder.FROM_FIELD.getPreferredName(), from); builder.field(SearchSourceBuilder.SIZE_FIELD.getPreferredName(), size); builder.field(SearchSourceBuilder.VERSION_FIELD.getPreferredName(), version); @@ -658,13 +551,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws if (highlightBuilder != null) { builder.field(SearchSourceBuilder.HIGHLIGHT_FIELD.getPreferredName(), highlightBuilder, params); } - if (childInnerHits != null) { - builder.startObject(INNER_HITS_FIELD.getPreferredName()); - for (Map.Entry entry : childInnerHits.entrySet()) { - builder.field(entry.getKey(), entry.getValue(), params); - } - builder.endObject(); - } builder.endObject(); return builder; } @@ -676,8 +562,7 @@ public boolean equals(Object o) { InnerHitBuilder that = (InnerHitBuilder) o; return Objects.equals(name, that.name) && - Objects.equals(nestedPath, that.nestedPath) && - Objects.equals(parentChildType, that.parentChildType) && + Objects.equals(ignoreUnmapped, that.ignoreUnmapped) && Objects.equals(from, that.from) && Objects.equals(size, that.size) && Objects.equals(explain, that.explain) && @@ -688,40 +573,16 @@ public boolean equals(Object o) { Objects.equals(scriptFields, that.scriptFields) && Objects.equals(fetchSourceContext, that.fetchSourceContext) && Objects.equals(sorts, that.sorts) && - Objects.equals(highlightBuilder, that.highlightBuilder) && - Objects.equals(query, that.query) && - Objects.equals(childInnerHits, that.childInnerHits); + Objects.equals(highlightBuilder, that.highlightBuilder); } @Override public int hashCode() { - return Objects.hash(name, nestedPath, parentChildType, from, size, explain, version, trackScores, storedFieldsContext, - docValueFields, scriptFields, fetchSourceContext, sorts, highlightBuilder, query, childInnerHits); + return Objects.hash(name, ignoreUnmapped, from, size, explain, version, trackScores, + storedFieldsContext, docValueFields, scriptFields, fetchSourceContext, sorts, highlightBuilder); } public static InnerHitBuilder fromXContent(QueryParseContext context) throws IOException { return PARSER.parse(context.parser(), new InnerHitBuilder(), context); } - - public static void extractInnerHits(QueryBuilder query, Map innerHitBuilders) { - if (query instanceof AbstractQueryBuilder) { - ((AbstractQueryBuilder) query).extractInnerHitBuilders(innerHitBuilders); - } else { - throw new IllegalStateException("provided query builder [" + query.getClass() + - "] class should inherit from AbstractQueryBuilder, but it doesn't"); - } - } - - static InnerHitBuilder rewrite(InnerHitBuilder original, QueryBuilder rewrittenQuery) { - if (original == null) { - return null; - } - - InnerHitBuilder copy = new InnerHitBuilder(original); - copy.query = rewrittenQuery; - copy.parentChildType = original.parentChildType; - copy.nestedPath = original.nestedPath; - return copy; - } - } diff --git a/core/src/main/java/org/elasticsearch/index/query/InnerHitContextBuilder.java b/core/src/main/java/org/elasticsearch/index/query/InnerHitContextBuilder.java new file mode 100644 index 0000000000000..e68c5631844ea --- /dev/null +++ 
b/core/src/main/java/org/elasticsearch/index/query/InnerHitContextBuilder.java @@ -0,0 +1,116 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.query; + +import org.elasticsearch.script.ScriptContext; +import org.elasticsearch.script.SearchScript; +import org.elasticsearch.search.builder.SearchSourceBuilder; +import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext; +import org.elasticsearch.search.fetch.subphase.InnerHitsContext; +import org.elasticsearch.search.internal.SearchContext; +import org.elasticsearch.search.sort.SortAndFormats; +import org.elasticsearch.search.sort.SortBuilder; + +import java.io.IOException; +import java.util.HashMap; +import java.util.Map; +import java.util.Optional; + +/** + * A builder for {@link InnerHitsContext.InnerHitSubContext} + */ +public abstract class InnerHitContextBuilder { + protected final QueryBuilder query; + protected final InnerHitBuilder innerHitBuilder; + protected final Map children; + + protected InnerHitContextBuilder(QueryBuilder query, InnerHitBuilder innerHitBuilder, Map children) { + this.innerHitBuilder = innerHitBuilder; + this.children = children; + this.query = query; + } + + public abstract void build(SearchContext parentSearchContext, + InnerHitsContext innerHitsContext) throws IOException; + + public static void extractInnerHits(QueryBuilder query, Map innerHitBuilders) { + if (query instanceof AbstractQueryBuilder) { + ((AbstractQueryBuilder) query).extractInnerHitBuilders(innerHitBuilders); + } else { + throw new IllegalStateException("provided query builder [" + query.getClass() + + "] class should inherit from AbstractQueryBuilder, but it doesn't"); + } + } + + protected void setupInnerHitsContext(QueryShardContext queryShardContext, + InnerHitsContext.InnerHitSubContext innerHitsContext) throws IOException { + innerHitsContext.from(innerHitBuilder.getFrom()); + innerHitsContext.size(innerHitBuilder.getSize()); + innerHitsContext.explain(innerHitBuilder.isExplain()); + innerHitsContext.version(innerHitBuilder.isVersion()); + innerHitsContext.trackScores(innerHitBuilder.isTrackScores()); + if (innerHitBuilder.getStoredFieldsContext() != null) { + innerHitsContext.storedFieldsContext(innerHitBuilder.getStoredFieldsContext()); + } + if (innerHitBuilder.getDocValueFields() != null) { + innerHitsContext.docValueFieldsContext(new DocValueFieldsContext(innerHitBuilder.getDocValueFields())); + } + if (innerHitBuilder.getScriptFields() != null) { + for (SearchSourceBuilder.ScriptField field : innerHitBuilder.getScriptFields()) { + SearchScript searchScript = innerHitsContext.getQueryShardContext().getSearchScript(field.script(), + ScriptContext.Standard.SEARCH); + innerHitsContext.scriptFields().add(new 
org.elasticsearch.search.fetch.subphase.ScriptFieldsContext.ScriptField( + field.fieldName(), searchScript, field.ignoreFailure())); + } + } + if (innerHitBuilder.getFetchSourceContext() != null) { + innerHitsContext.fetchSourceContext(innerHitBuilder.getFetchSourceContext() ); + } + if (innerHitBuilder.getSorts() != null) { + Optional optionalSort = SortBuilder.buildSort(innerHitBuilder.getSorts(), queryShardContext); + if (optionalSort.isPresent()) { + innerHitsContext.sort(optionalSort.get()); + } + } + if (innerHitBuilder.getHighlightBuilder() != null) { + innerHitsContext.highlight(innerHitBuilder.getHighlightBuilder().build(queryShardContext)); + } + ParsedQuery parsedQuery = new ParsedQuery(query.toQuery(queryShardContext), queryShardContext.copyNamedQueries()); + innerHitsContext.parsedQuery(parsedQuery); + Map baseChildren = + buildChildInnerHits(innerHitsContext.parentSearchContext(), children); + innerHitsContext.setChildInnerHits(baseChildren); + } + + private static Map buildChildInnerHits(SearchContext parentSearchContext, + Map children) throws IOException { + + Map childrenInnerHits = new HashMap<>(); + for (Map.Entry entry : children.entrySet()) { + InnerHitsContext childInnerHitsContext = new InnerHitsContext(); + entry.getValue().build( + parentSearchContext, childInnerHitsContext); + if (childInnerHitsContext.getInnerHits() != null) { + childrenInnerHits.putAll(childInnerHitsContext.getInnerHits()); + } + } + return childrenInnerHits; + } +} diff --git a/core/src/main/java/org/elasticsearch/index/query/MatchAllQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/MatchAllQueryBuilder.java index ba704809a4fd0..4e2deab6b93bf 100644 --- a/core/src/main/java/org/elasticsearch/index/query/MatchAllQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/MatchAllQueryBuilder.java @@ -20,13 +20,12 @@ package org.elasticsearch.index.query; import org.apache.lucene.search.Query; -import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.lucene.search.Queries; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentParser; import java.io.IOException; import java.util.Optional; @@ -48,7 +47,7 @@ public MatchAllQueryBuilder(StreamInput in) throws IOException { } @Override - protected void doWriteTo(StreamOutput out) throws IOException { + protected void doWriteTo(StreamOutput out) { // only superclass has state } @@ -59,38 +58,22 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep builder.endObject(); } - public static Optional fromXContent(QueryParseContext parseContext) throws IOException { - XContentParser parser = parseContext.parser(); + private static final ObjectParser PARSER = new ObjectParser<>(NAME, MatchAllQueryBuilder::new); - String currentFieldName = null; - XContentParser.Token token; - String queryName = null; - float boost = AbstractQueryBuilder.DEFAULT_BOOST; - while (((token = parser.nextToken()) != XContentParser.Token.END_OBJECT && token != XContentParser.Token.END_ARRAY)) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { - queryName = 
parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { - boost = parser.floatValue(); - } else { - throw new ParsingException(parser.getTokenLocation(), "[" + MatchAllQueryBuilder.NAME + - "] query does not support [" + currentFieldName + "]"); - } - } else { - throw new ParsingException(parser.getTokenLocation(), "[" + MatchAllQueryBuilder.NAME + - "] unknown token [" + token + "] after [" + currentFieldName + "]"); - } + static { + declareStandardFields(PARSER); + } + + public static Optional fromXContent(QueryParseContext context) { + try { + return Optional.of(PARSER.apply(context.parser(), context)); + } catch (IllegalArgumentException e) { + throw new ParsingException(context.parser().getTokenLocation(), e.getMessage(), e); } - MatchAllQueryBuilder queryBuilder = new MatchAllQueryBuilder(); - queryBuilder.boost(boost); - queryBuilder.queryName(queryName); - return Optional.of(queryBuilder); } @Override - protected Query doToQuery(QueryShardContext context) throws IOException { + protected Query doToQuery(QueryShardContext context) { return Queries.newMatchAllQuery(); } diff --git a/core/src/main/java/org/elasticsearch/index/query/MatchNoneQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/MatchNoneQueryBuilder.java index 55a6e544f495a..73ca411be32c6 100644 --- a/core/src/main/java/org/elasticsearch/index/query/MatchNoneQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/MatchNoneQueryBuilder.java @@ -70,9 +70,9 @@ public static Optional fromXContent(QueryParseContext par if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); } else { throw new ParsingException(parser.getTokenLocation(), "["+MatchNoneQueryBuilder.NAME + diff --git a/core/src/main/java/org/elasticsearch/index/query/MatchPhrasePrefixQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/MatchPhrasePrefixQueryBuilder.java index bff28d0f5be0c..e4af7c8c98ebd 100644 --- a/core/src/main/java/org/elasticsearch/index/query/MatchPhrasePrefixQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/MatchPhrasePrefixQueryBuilder.java @@ -164,7 +164,7 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep @Override protected Query doToQuery(QueryShardContext context) throws IOException { // validate context specific fields - if (analyzer != null && context.getAnalysisService().analyzer(analyzer) == null) { + if (analyzer != null && context.getIndexAnalyzers().get(analyzer) == null) { throw new QueryShardException(context, "[" + NAME + "] analyzer [" + analyzer + "] not found"); } @@ -213,17 +213,17 @@ public static Optional fromXContent(QueryParseCon if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, MatchQueryBuilder.QUERY_FIELD)) { + if (MatchQueryBuilder.QUERY_FIELD.match(currentFieldName)) { value = parser.objectText(); - } else if 
(parseContext.getParseFieldMatcher().match(currentFieldName, MatchQueryBuilder.ANALYZER_FIELD)) { + } else if (MatchQueryBuilder.ANALYZER_FIELD.match(currentFieldName)) { analyzer = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MatchPhraseQueryBuilder.SLOP_FIELD)) { + } else if (MatchPhraseQueryBuilder.SLOP_FIELD.match(currentFieldName)) { slop = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MAX_EXPANSIONS_FIELD)) { + } else if (MAX_EXPANSIONS_FIELD.match(currentFieldName)) { maxExpansion = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), diff --git a/core/src/main/java/org/elasticsearch/index/query/MatchPhraseQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/MatchPhraseQueryBuilder.java index 6fd6922d9cef5..6cf6d00678449 100644 --- a/core/src/main/java/org/elasticsearch/index/query/MatchPhraseQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/MatchPhraseQueryBuilder.java @@ -140,7 +140,7 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep @Override protected Query doToQuery(QueryShardContext context) throws IOException { // validate context specific fields - if (analyzer != null && context.getAnalysisService().analyzer(analyzer) == null) { + if (analyzer != null && context.getIndexAnalyzers().get(analyzer) == null) { throw new QueryShardException(context, "[" + NAME + "] analyzer [" + analyzer + "] not found"); } @@ -184,15 +184,15 @@ public static Optional fromXContent(QueryParseContext p if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, MatchQueryBuilder.QUERY_FIELD)) { + if (MatchQueryBuilder.QUERY_FIELD.match(currentFieldName)) { value = parser.objectText(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MatchQueryBuilder.ANALYZER_FIELD)) { + } else if (MatchQueryBuilder.ANALYZER_FIELD.match(currentFieldName)) { analyzer = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, SLOP_FIELD)) { + } else if (SLOP_FIELD.match(currentFieldName)) { slop = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), diff --git a/core/src/main/java/org/elasticsearch/index/query/MatchQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/MatchQueryBuilder.java index dc6ac994084a9..824829d505ba4 100644 --- a/core/src/main/java/org/elasticsearch/index/query/MatchQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/MatchQueryBuilder.java 
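The hunks below collapse MatchQueryBuilder's inline minimum_should_match handling into a single Queries.maybeApplyMinimumShouldMatch call. The helper's body is not part of this patch, so the following is only a paraphrase of the deleted inline logic, restated as a standalone sketch (class and method names are illustrative):

```java
import org.apache.lucene.queries.ExtendedCommonTermsQuery;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.elasticsearch.common.lucene.search.Queries;

final class MinimumShouldMatchSketch {
    // Restates the block removed from doToQuery: only boolean queries with the
    // coord factor enabled get minimum_should_match applied directly; common-terms
    // queries route it to their low-frequency clause instead.
    static Query apply(Query query, String minimumShouldMatch) {
        if (query instanceof BooleanQuery && !((BooleanQuery) query).isCoordDisabled()) {
            return Queries.applyMinimumShouldMatch((BooleanQuery) query, minimumShouldMatch);
        } else if (query instanceof ExtendedCommonTermsQuery) {
            ((ExtendedCommonTermsQuery) query).setLowFreqMinimumNumberShouldMatch(minimumShouldMatch);
        }
        return query;
    }
}
```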
@@ -19,8 +19,6 @@ package org.elasticsearch.index.query; -import org.apache.lucene.queries.ExtendedCommonTermsQuery; -import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.FuzzyQuery; import org.apache.lucene.search.Query; import org.elasticsearch.common.ParseField; @@ -444,7 +442,7 @@ public void doXContent(XContentBuilder builder, Params params) throws IOExceptio @Override protected Query doToQuery(QueryShardContext context) throws IOException { // validate context specific fields - if (analyzer != null && context.getAnalysisService().analyzer(analyzer) == null) { + if (analyzer != null && context.getIndexAnalyzers().get(analyzer) == null) { throw new QueryShardException(context, "[" + NAME + "] analyzer [" + analyzer + "] not found"); } @@ -456,25 +454,13 @@ protected Query doToQuery(QueryShardContext context) throws IOException { matchQuery.setFuzzyPrefixLength(prefixLength); matchQuery.setMaxExpansions(maxExpansions); matchQuery.setTranspositions(fuzzyTranspositions); - matchQuery.setFuzzyRewriteMethod(QueryParsers.parseRewriteMethod(context.getParseFieldMatcher(), fuzzyRewrite, null)); + matchQuery.setFuzzyRewriteMethod(QueryParsers.parseRewriteMethod(fuzzyRewrite, null)); matchQuery.setLenient(lenient); matchQuery.setCommonTermsCutoff(cutoffFrequency); matchQuery.setZeroTermsQuery(zeroTermsQuery); Query query = matchQuery.parse(type, fieldName, value); - if (query == null) { - return null; - } - - // If the coordination factor is disabled on a boolean query we don't apply the minimum should match. - // This is done to make sure that the minimum_should_match doesn't get applied when there is only one word - // and multiple variations of the same word in the query (synonyms for instance). - if (query instanceof BooleanQuery && !((BooleanQuery) query).isCoordDisabled()) { - query = Queries.applyMinimumShouldMatch((BooleanQuery) query, minimumShouldMatch); - } else if (query instanceof ExtendedCommonTermsQuery) { - ((ExtendedCommonTermsQuery)query).setLowFreqMinimumNumberShouldMatch(minimumShouldMatch); - } - return query; + return Queries.maybeApplyMinimumShouldMatch(query, minimumShouldMatch); } @Override @@ -541,9 +527,9 @@ public static Optional fromXContent(QueryParseContext parseCo if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, QUERY_FIELD)) { + if (QUERY_FIELD.match(currentFieldName)) { value = parser.objectText(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, TYPE_FIELD)) { + } else if (TYPE_FIELD.match(currentFieldName)) { String tStr = parser.text(); if ("boolean".equals(tStr)) { type = MatchQuery.Type.BOOLEAN; @@ -554,31 +540,31 @@ public static Optional fromXContent(QueryParseContext parseCo } else { throw new ParsingException(parser.getTokenLocation(), "[" + NAME + "] query does not support type " + tStr); } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, ANALYZER_FIELD)) { + } else if (ANALYZER_FIELD.match(currentFieldName)) { analyzer = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, SLOP_FIELD)) { + } else if (SLOP_FIELD.match(currentFieldName)) { slop = parser.intValue(); - } else if 
(parseContext.getParseFieldMatcher().match(currentFieldName, Fuzziness.FIELD)) { + } else if (Fuzziness.FIELD.match(currentFieldName)) { fuzziness = Fuzziness.parse(parser); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, PREFIX_LENGTH_FIELD)) { + } else if (PREFIX_LENGTH_FIELD.match(currentFieldName)) { prefixLength = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MAX_EXPANSIONS_FIELD)) { + } else if (MAX_EXPANSIONS_FIELD.match(currentFieldName)) { maxExpansion = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, OPERATOR_FIELD)) { + } else if (OPERATOR_FIELD.match(currentFieldName)) { operator = Operator.fromString(parser.text()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MINIMUM_SHOULD_MATCH_FIELD)) { + } else if (MINIMUM_SHOULD_MATCH_FIELD.match(currentFieldName)) { minimumShouldMatch = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, FUZZY_REWRITE_FIELD)) { + } else if (FUZZY_REWRITE_FIELD.match(currentFieldName)) { fuzzyRewrite = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, FUZZY_TRANSPOSITIONS_FIELD)) { + } else if (FUZZY_TRANSPOSITIONS_FIELD.match(currentFieldName)) { fuzzyTranspositions = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, LENIENT_FIELD)) { + } else if (LENIENT_FIELD.match(currentFieldName)) { lenient = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, CUTOFF_FREQUENCY_FIELD)) { + } else if (CUTOFF_FREQUENCY_FIELD.match(currentFieldName)) { cutOffFrequency = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, ZERO_TERMS_QUERY_FIELD)) { + } else if (ZERO_TERMS_QUERY_FIELD.match(currentFieldName)) { String zeroTermsDocs = parser.text(); if ("none".equalsIgnoreCase(zeroTermsDocs)) { zeroTermsQuery = MatchQuery.ZeroTermsQuery.NONE; @@ -588,7 +574,7 @@ public static Optional fromXContent(QueryParseContext parseCo throw new ParsingException(parser.getTokenLocation(), "Unsupported zero_terms_docs value [" + zeroTermsDocs + "]"); } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), diff --git a/core/src/main/java/org/elasticsearch/index/query/MoreLikeThisQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/MoreLikeThisQueryBuilder.java index 9fb1343b11899..e1ab8da5a31ae 100644 --- a/core/src/main/java/org/elasticsearch/index/query/MoreLikeThisQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/MoreLikeThisQueryBuilder.java @@ -21,13 +21,13 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.index.Fields; -import org.apache.lucene.queries.TermsQuery; import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.ExceptionsHelper; +import org.elasticsearch.Version; import org.elasticsearch.action.termvectors.MultiTermVectorsItemResponse; import org.elasticsearch.action.termvectors.MultiTermVectorsRequest; import 
org.elasticsearch.action.termvectors.MultiTermVectorsResponse; @@ -36,7 +36,6 @@ import org.elasticsearch.client.Client; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.Strings; import org.elasticsearch.common.bytes.BytesReference; @@ -52,11 +51,11 @@ import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.VersionType; -import org.elasticsearch.index.mapper.MappedFieldType; -import org.elasticsearch.index.mapper.UidFieldMapper; import org.elasticsearch.index.mapper.KeywordFieldMapper.KeywordFieldType; +import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.StringFieldMapper.StringFieldType; import org.elasticsearch.index.mapper.TextFieldMapper.TextFieldType; +import org.elasticsearch.index.mapper.UidFieldMapper; import java.io.IOException; import java.util.ArrayList; @@ -147,7 +146,7 @@ private interface Field { */ public static final class Item implements ToXContent, Writeable { public static final Item[] EMPTY_ARRAY = new Item[0]; - + public interface Field { ParseField INDEX = new ParseField("_index"); ParseField TYPE = new ParseField("_type"); @@ -164,6 +163,7 @@ public interface Field { private String type; private String id; private BytesReference doc; + private XContentType xContentType; private String[] fields; private Map perFieldAnalyzer; private String routing; @@ -180,7 +180,9 @@ public Item() { this.index = copy.index; this.type = copy.type; this.id = copy.id; + this.routing = copy.routing; this.doc = copy.doc; + this.xContentType = copy.xContentType; this.fields = copy.fields; this.perFieldAnalyzer = copy.perFieldAnalyzer; this.version = copy.version; @@ -217,6 +219,7 @@ public Item(@Nullable String index, @Nullable String type, XContentBuilder doc) this.index = index; this.type = type; this.doc = doc.bytes(); + this.xContentType = doc.contentType(); } /** @@ -227,6 +230,11 @@ public Item(@Nullable String index, @Nullable String type, XContentBuilder doc) type = in.readOptionalString(); if (in.readBoolean()) { doc = (BytesReference) in.readGenericValue(); + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType = XContentType.readFrom(in); + } else { + xContentType = XContentFactory.xContentType(doc); + } } else { id = in.readString(); } @@ -244,6 +252,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeBoolean(doc != null); if (doc != null) { out.writeGenericValue(doc); + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType.writeTo(out); + } } else { out.writeString(id); } @@ -328,10 +339,14 @@ public Item versionType(VersionType versionType) { return this; } + XContentType xContentType() { + return xContentType; + } + /** * Convert this to a {@link TermVectorsRequest} for fetching the terms of the document. 
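For the new `Item.xContentType` field, the stream handling above follows the usual version-gated pattern: only nodes on or after `V_5_3_0` send the content type, and payloads from older senders fall back to sniffing the type from the raw document bytes. The following is a condensed restatement of the read and write halves shown in the hunks above, not new behaviour:

```java
// read side (constructor taking StreamInput)
if (in.getVersion().onOrAfter(Version.V_5_3_0)) {
    xContentType = XContentType.readFrom(in);
} else {
    xContentType = XContentFactory.xContentType(doc);   // pre-5.3 sender: sniff from the bytes
}

// write side (writeTo)
if (out.getVersion().onOrAfter(Version.V_5_3_0)) {
    xContentType.writeTo(out);                           // older receivers never expect it
}
```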
*/ - public TermVectorsRequest toTermVectorsRequest() { + TermVectorsRequest toTermVectorsRequest() { TermVectorsRequest termVectorsRequest = new TermVectorsRequest(index, type, id) .selectedFields(fields) .routing(routing) @@ -345,7 +360,7 @@ public TermVectorsRequest toTermVectorsRequest() { .termStatistics(false); // for artificial docs to make sure that the id has changed in the item too if (doc != null) { - termVectorsRequest.doc(doc, true); + termVectorsRequest.doc(doc, true, xContentType); this.id = termVectorsRequest.id(); } return termVectorsRequest; @@ -354,22 +369,23 @@ public TermVectorsRequest toTermVectorsRequest() { /** * Parses and returns the given item. */ - public static Item parse(XContentParser parser, ParseFieldMatcher parseFieldMatcher, Item item) throws IOException { + public static Item parse(XContentParser parser, Item item) throws IOException { XContentParser.Token token; String currentFieldName = null; while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (currentFieldName != null) { - if (parseFieldMatcher.match(currentFieldName, Field.INDEX)) { + if (Field.INDEX.match(currentFieldName)) { item.index = parser.text(); - } else if (parseFieldMatcher.match(currentFieldName, Field.TYPE)) { + } else if (Field.TYPE.match(currentFieldName)) { item.type = parser.text(); - } else if (parseFieldMatcher.match(currentFieldName, Field.ID)) { + } else if (Field.ID.match(currentFieldName)) { item.id = parser.text(); - } else if (parseFieldMatcher.match(currentFieldName, Field.DOC)) { + } else if (Field.DOC.match(currentFieldName)) { item.doc = jsonBuilder().copyCurrentStructure(parser).bytes(); - } else if (parseFieldMatcher.match(currentFieldName, Field.FIELDS)) { + item.xContentType = XContentType.JSON; + } else if (Field.FIELDS.match(currentFieldName)) { if (token == XContentParser.Token.START_ARRAY) { List fields = new ArrayList<>(); while (parser.nextToken() != XContentParser.Token.END_ARRAY) { @@ -380,7 +396,7 @@ public static Item parse(XContentParser parser, ParseFieldMatcher parseFieldMatc throw new ElasticsearchParseException( "failed to parse More Like This item. field [fields] must be an array"); } - } else if (parseFieldMatcher.match(currentFieldName, Field.PER_FIELD_ANALYZER)) { + } else if (Field.PER_FIELD_ANALYZER.match(currentFieldName)) { item.perFieldAnalyzer(TermVectorsRequest.readPerFieldAnalyzer(parser.map())); } else if ("_routing".equals(currentFieldName) || "routing".equals(currentFieldName)) { item.routing = parser.text(); @@ -419,16 +435,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(Field.ID.getPreferredName(), this.id); } if (this.doc != null) { - XContentType contentType = XContentFactory.xContentType(this.doc); - if (contentType == builder.contentType()) { - builder.rawField(Field.DOC.getPreferredName(), this.doc); - } else { - try (XContentParser parser = XContentFactory.xContent(contentType).createParser(this.doc)) { - parser.nextToken(); - builder.field(Field.DOC.getPreferredName()); - builder.copyCurrentStructure(parser); - } - } + builder.rawField(Field.DOC.getPreferredName(), this.doc, xContentType); } if (this.fields != null) { builder.array(Field.FIELDS.getPreferredName(), this.fields); @@ -780,7 +787,7 @@ public static Item[] ids(String... 
ids) { protected void doXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(NAME); if (fields != null) { - builder.field(Field.FIELDS.getPreferredName(), fields); + builder.array(Field.FIELDS.getPreferredName(), fields); } buildLikeField(builder, Field.LIKE.getPreferredName(), likeTexts, likeItems); buildLikeField(builder, Field.UNLIKE.getPreferredName(), unlikeTexts, unlikeItems); @@ -791,7 +798,7 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep builder.field(Field.MIN_WORD_LENGTH.getPreferredName(), minWordLength); builder.field(Field.MAX_WORD_LENGTH.getPreferredName(), maxWordLength); if (stopWords != null) { - builder.field(Field.STOP_WORDS.getPreferredName(), stopWords); + builder.array(Field.STOP_WORDS.getPreferredName(), stopWords); } if (analyzer != null) { builder.field(Field.ANALYZER.getPreferredName(), analyzer); @@ -840,33 +847,33 @@ public static Optional fromXContent(QueryParseContext if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.LIKE)) { + if (Field.LIKE.match(currentFieldName)) { parseLikeField(parseContext, likeTexts, likeItems); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.UNLIKE)) { + } else if (Field.UNLIKE.match(currentFieldName)) { parseLikeField(parseContext, unlikeTexts, unlikeItems); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.LIKE_TEXT)) { + } else if (Field.LIKE_TEXT.match(currentFieldName)) { likeTexts.add(parser.text()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.MAX_QUERY_TERMS)) { + } else if (Field.MAX_QUERY_TERMS.match(currentFieldName)) { maxQueryTerms = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.MIN_TERM_FREQ)) { + } else if (Field.MIN_TERM_FREQ.match(currentFieldName)) { minTermFreq =parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.MIN_DOC_FREQ)) { + } else if (Field.MIN_DOC_FREQ.match(currentFieldName)) { minDocFreq = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.MAX_DOC_FREQ)) { + } else if (Field.MAX_DOC_FREQ.match(currentFieldName)) { maxDocFreq = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.MIN_WORD_LENGTH)) { + } else if (Field.MIN_WORD_LENGTH.match(currentFieldName)) { minWordLength = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.MAX_WORD_LENGTH)) { + } else if (Field.MAX_WORD_LENGTH.match(currentFieldName)) { maxWordLength = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.ANALYZER)) { + } else if (Field.ANALYZER.match(currentFieldName)) { analyzer = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.MINIMUM_SHOULD_MATCH)) { + } else if (Field.MINIMUM_SHOULD_MATCH.match(currentFieldName)) { minimumShouldMatch = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.BOOST_TERMS)) { + } else if (Field.BOOST_TERMS.match(currentFieldName)) { boostTerms = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.INCLUDE)) { + } else if (Field.INCLUDE.match(currentFieldName)) { include = parser.booleanValue(); 
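Because each `Item` now remembers the `XContentType` it was built with, rendering an artificial document back out no longer needs the old detect-and-maybe-reparse path; `toXContent` hands the stored type straight to `rawField`, as the hunk above shows. A small hypothetical usage sketch (the index, type, and field values are placeholders, not taken from this change):

```java
// Build an artificial document; the Item constructor stores both doc.bytes() and doc.contentType().
XContentBuilder doc = XContentFactory.jsonBuilder()
        .startObject()
        .field("message", "hello world")
        .endObject();
Item item = new Item("my-index", "my-type", doc);

// On rendering, the stored type is reused instead of XContentFactory.xContentType(bytes):
//     builder.rawField(Field.DOC.getPreferredName(), this.doc, xContentType);
```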
- } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.FAIL_ON_UNSUPPORTED_FIELD)) { + } else if (Field.FAIL_ON_UNSUPPORTED_FIELD.match(currentFieldName)) { failOnUnsupportedField = parser.booleanValue(); } else if ("boost".equals(currentFieldName)) { boost = parser.floatValue(); @@ -876,34 +883,34 @@ public static Optional fromXContent(QueryParseContext throw new ParsingException(parser.getTokenLocation(), "[mlt] query does not support [" + currentFieldName + "]"); } } else if (token == XContentParser.Token.START_ARRAY) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.FIELDS)) { + if (Field.FIELDS.match(currentFieldName)) { fields = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { fields.add(parser.text()); } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.LIKE)) { + } else if (Field.LIKE.match(currentFieldName)) { while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { parseLikeField(parseContext, likeTexts, likeItems); } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.UNLIKE)) { + } else if (Field.UNLIKE.match(currentFieldName)) { while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { parseLikeField(parseContext, unlikeTexts, unlikeItems); } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.IDS)) { + } else if (Field.IDS.match(currentFieldName)) { while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { if (!token.isValue()) { throw new IllegalArgumentException("ids array element should only contain ids"); } likeItems.add(new Item(null, null, parser.text())); } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.DOCS)) { + } else if (Field.DOCS.match(currentFieldName)) { while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { if (token != XContentParser.Token.START_OBJECT) { throw new IllegalArgumentException("docs array element should include an object"); } - likeItems.add(Item.parse(parser, parseContext.getParseFieldMatcher(), new Item())); + likeItems.add(Item.parse(parser, new Item())); } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.STOP_WORDS)) { + } else if (Field.STOP_WORDS.match(currentFieldName)) { stopWords = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { stopWords.add(parser.text()); @@ -912,9 +919,9 @@ public static Optional fromXContent(QueryParseContext throw new ParsingException(parser.getTokenLocation(), "[mlt] query does not support [" + currentFieldName + "]"); } } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.LIKE)) { + if (Field.LIKE.match(currentFieldName)) { parseLikeField(parseContext, likeTexts, likeItems); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Field.UNLIKE)) { + } else if (Field.UNLIKE.match(currentFieldName)) { parseLikeField(parseContext, unlikeTexts, unlikeItems); } else { throw new ParsingException(parser.getTokenLocation(), "[mlt] query does not support [" + currentFieldName + "]"); @@ -962,7 +969,7 @@ private static void parseLikeField(QueryParseContext parseContext, List if (parser.currentToken().isValue()) { texts.add(parser.text()); } else if (parser.currentToken() == XContentParser.Token.START_OBJECT) { - items.add(Item.parse(parser, parseContext.getParseFieldMatcher(), new Item())); + 
items.add(Item.parse(parser, new Item())); } else { throw new IllegalArgumentException("Content of 'like' parameter should either be a string or an object"); } @@ -1021,7 +1028,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { } // set analyzer - Analyzer analyzerObj = context.getAnalysisService().analyzer(analyzer); + Analyzer analyzerObj = context.getIndexAnalyzers().get(analyzer); if (analyzerObj == null) { analyzerObj = context.getMapperService().searchAnalyzer(); } @@ -1081,14 +1088,14 @@ private Query handleItems(QueryShardContext context, MoreLikeThisQuery mltQuery, // fetching the items with multi-termvectors API MultiTermVectorsResponse likeItemsResponse = fetchResponse(context.getClient(), likeItems); // getting the Fields for liked items - mltQuery.setLikeText(getFieldsFor(likeItemsResponse)); + mltQuery.setLikeFields(getFieldsFor(likeItemsResponse)); // getting the Fields for unliked items if (unlikeItems.length > 0) { MultiTermVectorsResponse unlikeItemsResponse = fetchResponse(context.getClient(), unlikeItems); org.apache.lucene.index.Fields[] unlikeFields = getFieldsFor(unlikeItemsResponse); if (unlikeFields.length > 0) { - mltQuery.setUnlikeText(unlikeFields); + mltQuery.setUnlikeFields(unlikeFields); } } @@ -1097,7 +1104,7 @@ private Query handleItems(QueryShardContext context, MoreLikeThisQuery mltQuery, // exclude the items from the search if (!include) { - handleExclude(boolQuery, likeItems); + handleExclude(boolQuery, likeItems, context); } return boolQuery.build(); } @@ -1150,7 +1157,12 @@ private static Fields[] getFieldsFor(MultiTermVectorsResponse responses) throws return likeFields.toArray(Fields.EMPTY_ARRAY); } - private static void handleExclude(BooleanQuery.Builder boolQuery, Item[] likeItems) { + private static void handleExclude(BooleanQuery.Builder boolQuery, Item[] likeItems, QueryShardContext context) { + MappedFieldType uidField = context.fieldMapper(UidFieldMapper.NAME); + if (uidField == null) { + // no mappings, nothing to exclude + return; + } // artificial docs get assigned a random id and should be disregarded List uids = new ArrayList<>(); for (Item item : likeItems) { @@ -1160,7 +1172,7 @@ private static void handleExclude(BooleanQuery.Builder boolQuery, Item[] likeIte uids.add(createUidAsBytes(item.type(), item.id())); } if (!uids.isEmpty()) { - TermsQuery query = new TermsQuery(UidFieldMapper.NAME, uids.toArray(new BytesRef[uids.size()])); + Query query = uidField.termsQuery(uids, context); boolQuery.add(query, BooleanClause.Occur.MUST_NOT); } } diff --git a/core/src/main/java/org/elasticsearch/index/query/MultiMatchQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/MultiMatchQueryBuilder.java index 6ffcb19ea3a9b..2729e8bcb2454 100644 --- a/core/src/main/java/org/elasticsearch/index/query/MultiMatchQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/MultiMatchQueryBuilder.java @@ -23,7 +23,6 @@ import org.apache.lucene.search.Query; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; @@ -148,11 +147,11 @@ public ParseField parseField() { return parseField; } - public static Type parse(String value, ParseFieldMatcher parseFieldMatcher) { + public static Type parse(String value) { MultiMatchQueryBuilder.Type[] values = 
MultiMatchQueryBuilder.Type.values(); Type type = null; for (MultiMatchQueryBuilder.Type t : values) { - if (parseFieldMatcher.match(value, t.parseField())) { + if (t.parseField().match(value)) { type = t; break; } @@ -304,7 +303,7 @@ public MultiMatchQueryBuilder type(Object type) { if (type == null) { throw new IllegalArgumentException("[" + NAME + "] requires type to be non-null"); } - this.type = Type.parse(type.toString().toLowerCase(Locale.ROOT), ParseFieldMatcher.EMPTY); + this.type = Type.parse(type.toString().toLowerCase(Locale.ROOT)); return this; } @@ -584,7 +583,7 @@ public static Optional fromXContent(QueryParseContext pa while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, FIELDS_FIELD)) { + } else if (FIELDS_FIELD.match(currentFieldName)) { if (token == XContentParser.Token.START_ARRAY) { while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { parseFieldAndBoost(parser, fieldsBoosts); @@ -596,37 +595,37 @@ public static Optional fromXContent(QueryParseContext pa "[" + NAME + "] query does not support [" + currentFieldName + "]"); } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, QUERY_FIELD)) { + if (QUERY_FIELD.match(currentFieldName)) { value = parser.objectText(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, TYPE_FIELD)) { - type = MultiMatchQueryBuilder.Type.parse(parser.text(), parseContext.getParseFieldMatcher()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, ANALYZER_FIELD)) { + } else if (TYPE_FIELD.match(currentFieldName)) { + type = MultiMatchQueryBuilder.Type.parse(parser.text()); + } else if (ANALYZER_FIELD.match(currentFieldName)) { analyzer = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, SLOP_FIELD)) { + } else if (SLOP_FIELD.match(currentFieldName)) { slop = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, Fuzziness.FIELD)) { + } else if (Fuzziness.FIELD.match(currentFieldName)) { fuzziness = Fuzziness.parse(parser); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, PREFIX_LENGTH_FIELD)) { + } else if (PREFIX_LENGTH_FIELD.match(currentFieldName)) { prefixLength = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MAX_EXPANSIONS_FIELD)) { + } else if (MAX_EXPANSIONS_FIELD.match(currentFieldName)) { maxExpansions = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, OPERATOR_FIELD)) { + } else if (OPERATOR_FIELD.match(currentFieldName)) { operator = Operator.fromString(parser.text()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MINIMUM_SHOULD_MATCH_FIELD)) { + } else if (MINIMUM_SHOULD_MATCH_FIELD.match(currentFieldName)) { minimumShouldMatch = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, FUZZY_REWRITE_FIELD)) { + } else if (FUZZY_REWRITE_FIELD.match(currentFieldName)) { fuzzyRewrite = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, 
USE_DIS_MAX_FIELD)) { + } else if (USE_DIS_MAX_FIELD.match(currentFieldName)) { useDisMax = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, TIE_BREAKER_FIELD)) { + } else if (TIE_BREAKER_FIELD.match(currentFieldName)) { tieBreaker = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, CUTOFF_FREQUENCY_FIELD)) { + } else if (CUTOFF_FREQUENCY_FIELD.match(currentFieldName)) { cutoffFrequency = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, LENIENT_FIELD)) { + } else if (LENIENT_FIELD.match(currentFieldName)) { lenient = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, ZERO_TERMS_QUERY_FIELD)) { + } else if (ZERO_TERMS_QUERY_FIELD.match(currentFieldName)) { String zeroTermsDocs = parser.text(); if ("none".equalsIgnoreCase(zeroTermsDocs)) { zeroTermsQuery = MatchQuery.ZeroTermsQuery.NONE; @@ -635,7 +634,7 @@ public static Optional fromXContent(QueryParseContext pa } else { throw new ParsingException(parser.getTokenLocation(), "Unsupported zero_terms_docs value [" + zeroTermsDocs + "]"); } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), @@ -708,7 +707,7 @@ public String getWriteableName() { protected Query doToQuery(QueryShardContext context) throws IOException { MultiMatchQuery multiMatchQuery = new MultiMatchQuery(context); if (analyzer != null) { - if (context.getAnalysisService().analyzer(analyzer) == null) { + if (context.getIndexAnalyzers().get(analyzer) == null) { throw new QueryShardException(context, "[" + NAME + "] analyzer [" + analyzer + "] not found"); } multiMatchQuery.setAnalyzer(analyzer); @@ -721,7 +720,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { multiMatchQuery.setMaxExpansions(maxExpansions); multiMatchQuery.setOccur(operator.toBooleanClauseOccur()); if (fuzzyRewrite != null) { - multiMatchQuery.setFuzzyRewriteMethod(QueryParsers.parseRewriteMethod(context.getParseFieldMatcher(), fuzzyRewrite, null)); + multiMatchQuery.setFuzzyRewriteMethod(QueryParsers.parseRewriteMethod(fuzzyRewrite, null)); } if (tieBreaker != null) { multiMatchQuery.setTieBreaker(tieBreaker); diff --git a/core/src/main/java/org/elasticsearch/index/query/NestedQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/NestedQueryBuilder.java index f3b0b7379dd8b..f91378c07233a 100644 --- a/core/src/main/java/org/elasticsearch/index/query/NestedQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/NestedQueryBuilder.java @@ -19,25 +19,44 @@ package org.elasticsearch.index.query; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.ReaderUtil; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TopDocs; +import org.apache.lucene.search.TopDocsCollector; +import org.apache.lucene.search.TopFieldCollector; +import org.apache.lucene.search.TopScoreDocCollector; +import org.apache.lucene.search.TotalHitCountCollector; +import org.apache.lucene.search.Weight; import org.apache.lucene.search.join.BitSetProducer; +import org.apache.lucene.search.join.ParentChildrenBlockJoinQuery; import org.apache.lucene.search.join.ScoreMode; -import 
org.apache.lucene.search.join.ToParentBlockJoinQuery; +import org.elasticsearch.Version; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.mapper.ObjectMapper; +import org.elasticsearch.index.search.ESToParentBlockJoinQuery; +import org.elasticsearch.index.search.NestedHelper; +import org.elasticsearch.search.SearchHit; +import org.elasticsearch.search.fetch.subphase.InnerHitsContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; +import java.util.HashMap; +import java.util.Locale; import java.util.Map; import java.util.Objects; import java.util.Optional; +import static org.elasticsearch.search.fetch.subphase.InnerHitsContext.intersect; + public class NestedQueryBuilder extends AbstractQueryBuilder { public static final String NAME = "nested"; /** @@ -85,7 +104,15 @@ protected void doWriteTo(StreamOutput out) throws IOException { out.writeString(path); out.writeVInt(scoreMode.ordinal()); out.writeNamedWriteable(query); - out.writeOptionalWriteable(innerHitBuilder); + if (out.getVersion().before(Version.V_5_5_0)) { + final boolean hasInnerHit = innerHitBuilder != null; + out.writeBoolean(hasInnerHit); + if (hasInnerHit) { + innerHitBuilder.writeToNestedBWC(out, query, path); + } + } else { + out.writeOptionalWriteable(innerHitBuilder); + } out.writeBoolean(ignoreUnmapped); } @@ -104,8 +131,8 @@ public InnerHitBuilder innerHit() { return innerHitBuilder; } - public NestedQueryBuilder innerHit(InnerHitBuilder innerHit) { - this.innerHitBuilder = new InnerHitBuilder(innerHit, path, query); + public NestedQueryBuilder innerHit(InnerHitBuilder innerHitBuilder) { + this.innerHitBuilder = innerHitBuilder; return this; } @@ -143,7 +170,7 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep builder.field(PATH_FIELD.getPreferredName(), path); builder.field(IGNORE_UNMAPPED_FIELD.getPreferredName(), ignoreUnmapped); if (scoreMode != null) { - builder.field(SCORE_MODE_FIELD.getPreferredName(), HasChildQueryBuilder.scoreModeAsString(scoreMode)); + builder.field(SCORE_MODE_FIELD.getPreferredName(), scoreModeAsString(scoreMode)); } printBoostAndQueryName(builder); if (innerHitBuilder != null) { @@ -167,44 +194,64 @@ public static Optional fromXContent(QueryParseContext parseC if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, QUERY_FIELD)) { + if (QUERY_FIELD.match(currentFieldName)) { query = parseContext.parseInnerQueryBuilder(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, INNER_HITS_FIELD)) { + } else if (INNER_HITS_FIELD.match(currentFieldName)) { innerHitBuilder = InnerHitBuilder.fromXContent(parseContext); } else { throw new ParsingException(parser.getTokenLocation(), "[nested] query does not support [" + currentFieldName + "]"); } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, PATH_FIELD)) { + if (PATH_FIELD.match(currentFieldName)) { path = parser.text(); - } else if 
(parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, IGNORE_UNMAPPED_FIELD)) { + } else if (IGNORE_UNMAPPED_FIELD.match(currentFieldName)) { ignoreUnmapped = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, SCORE_MODE_FIELD)) { - scoreMode = HasChildQueryBuilder.parseScoreMode(parser.text()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (SCORE_MODE_FIELD.match(currentFieldName)) { + scoreMode = parseScoreMode(parser.text()); + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), "[nested] query does not support [" + currentFieldName + "]"); } } } - if (query.isPresent() == false) { // if inner query is empty, bubble this up to caller so they can decide how to deal with it return Optional.empty(); } - NestedQueryBuilder queryBuilder = new NestedQueryBuilder(path, query.get(), scoreMode) - .ignoreUnmapped(ignoreUnmapped) - .queryName(queryName) - .boost(boost); - if (innerHitBuilder != null) { - queryBuilder.innerHit(innerHitBuilder); - } + NestedQueryBuilder queryBuilder = new NestedQueryBuilder(path, query.get(), scoreMode, innerHitBuilder) + .ignoreUnmapped(ignoreUnmapped) + .queryName(queryName) + .boost(boost); return Optional.of(queryBuilder); } + public static ScoreMode parseScoreMode(String scoreModeString) { + if ("none".equals(scoreModeString)) { + return ScoreMode.None; + } else if ("min".equals(scoreModeString)) { + return ScoreMode.Min; + } else if ("max".equals(scoreModeString)) { + return ScoreMode.Max; + } else if ("avg".equals(scoreModeString)) { + return ScoreMode.Avg; + } else if ("sum".equals(scoreModeString)) { + return ScoreMode.Total; + } + throw new IllegalArgumentException("No score mode for child query [" + scoreModeString + "] found"); + } + + public static String scoreModeAsString(ScoreMode scoreMode) { + if (scoreMode == ScoreMode.Total) { + // Lucene uses 'total' but 'sum' is more consistent with other elasticsearch APIs + return "sum"; + } else { + return scoreMode.name().toLowerCase(Locale.ROOT); + } + } + @Override public final String getWriteableName() { return NAME; @@ -238,38 +285,139 @@ protected Query doToQuery(QueryShardContext context) throws IOException { throw new IllegalStateException("[" + NAME + "] nested object under path [" + path + "] is not of nested type"); } final BitSetProducer parentFilter; - final Query childFilter; - final Query innerQuery; + Query innerQuery; ObjectMapper objectMapper = context.nestedScope().getObjectMapper(); if (objectMapper == null) { parentFilter = context.bitsetFilter(Queries.newNonNestedFilter()); } else { parentFilter = context.bitsetFilter(objectMapper.nestedTypeFilter()); } - childFilter = nestedObjectMapper.nestedTypeFilter(); + try { context.nestedScope().nextLevel(nestedObjectMapper); innerQuery = this.query.toQuery(context); } finally { context.nestedScope().previousLevel(); } - return new ToParentBlockJoinQuery(Queries.filtered(innerQuery, childFilter), parentFilter, scoreMode); + + // ToParentBlockJoinQuery requires that the inner query only matches documents + // in its child space + if (new 
NestedHelper(context.getMapperService()).mightMatchNonNestedDocs(innerQuery, path)) { + innerQuery = Queries.filtered(innerQuery, nestedObjectMapper.nestedTypeFilter()); + } + + return new ESToParentBlockJoinQuery(innerQuery, parentFilter, scoreMode, + objectMapper == null ? null : objectMapper.fullPath()); } @Override protected QueryBuilder doRewrite(QueryRewriteContext queryRewriteContext) throws IOException { QueryBuilder rewrittenQuery = query.rewrite(queryRewriteContext); if (rewrittenQuery != query) { - InnerHitBuilder rewrittenInnerHit = InnerHitBuilder.rewrite(innerHitBuilder, rewrittenQuery); - return new NestedQueryBuilder(path, rewrittenQuery, scoreMode, rewrittenInnerHit); + NestedQueryBuilder nestedQuery = new NestedQueryBuilder(path, rewrittenQuery, scoreMode, innerHitBuilder); + nestedQuery.ignoreUnmapped(ignoreUnmapped); + return nestedQuery; } return this; } @Override - protected void extractInnerHitBuilders(Map innerHits) { + public void extractInnerHitBuilders(Map innerHits) { if (innerHitBuilder != null) { - innerHitBuilder.inlineInnerHits(innerHits); + Map children = new HashMap<>(); + InnerHitContextBuilder.extractInnerHits(query, children); + InnerHitContextBuilder innerHitContextBuilder = new NestedInnerHitContextBuilder(path, query, innerHitBuilder, children); + String name = innerHitBuilder.getName() != null ? innerHitBuilder.getName() : path; + innerHits.put(name, innerHitContextBuilder); + } + } + + static class NestedInnerHitContextBuilder extends InnerHitContextBuilder { + private final String path; + + NestedInnerHitContextBuilder(String path, QueryBuilder query, InnerHitBuilder innerHitBuilder, + Map children) { + super(query, innerHitBuilder, children); + this.path = path; + } + + @Override + public void build(SearchContext parentSearchContext, + InnerHitsContext innerHitsContext) throws IOException { + QueryShardContext queryShardContext = parentSearchContext.getQueryShardContext(); + ObjectMapper nestedObjectMapper = queryShardContext.getObjectMapper(path); + if (nestedObjectMapper == null) { + if (innerHitBuilder.isIgnoreUnmapped() == false) { + throw new IllegalStateException("[" + query.getName() + "] no mapping found for type [" + path + "]"); + } else { + return; + } + } + String name = innerHitBuilder.getName() != null ? 
innerHitBuilder.getName() : nestedObjectMapper.fullPath(); + ObjectMapper parentObjectMapper = queryShardContext.nestedScope().nextLevel(nestedObjectMapper); + NestedInnerHitSubContext nestedInnerHits = new NestedInnerHitSubContext( + name, parentSearchContext, parentObjectMapper, nestedObjectMapper + ); + setupInnerHitsContext(queryShardContext, nestedInnerHits); + queryShardContext.nestedScope().previousLevel(); + innerHitsContext.addInnerHitDefinition(nestedInnerHits); + } + } + + static final class NestedInnerHitSubContext extends InnerHitsContext.InnerHitSubContext { + + private final ObjectMapper parentObjectMapper; + private final ObjectMapper childObjectMapper; + + NestedInnerHitSubContext(String name, SearchContext context, ObjectMapper parentObjectMapper, ObjectMapper childObjectMapper) { + super(name, context); + this.parentObjectMapper = parentObjectMapper; + this.childObjectMapper = childObjectMapper; + } + + @Override + public TopDocs[] topDocs(SearchHit[] hits) throws IOException { + Weight innerHitQueryWeight = createInnerHitQueryWeight(); + TopDocs[] result = new TopDocs[hits.length]; + for (int i = 0; i < hits.length; i++) { + SearchHit hit = hits[i]; + Query rawParentFilter; + if (parentObjectMapper == null) { + rawParentFilter = Queries.newNonNestedFilter(); + } else { + rawParentFilter = parentObjectMapper.nestedTypeFilter(); + } + + int parentDocId = hit.docId(); + final int readerIndex = ReaderUtil.subIndex(parentDocId, searcher().getIndexReader().leaves()); + // With nested inner hits the nested docs are always in the same segement, so need to use the other segments + LeafReaderContext ctx = searcher().getIndexReader().leaves().get(readerIndex); + + Query childFilter = childObjectMapper.nestedTypeFilter(); + BitSetProducer parentFilter = context.bitsetFilterCache().getBitSetProducer(rawParentFilter); + Query q = new ParentChildrenBlockJoinQuery(parentFilter, childFilter, parentDocId); + Weight weight = context.searcher().createNormalizedWeight(q, false); + if (size() == 0) { + TotalHitCountCollector totalHitCountCollector = new TotalHitCountCollector(); + intersect(weight, innerHitQueryWeight, totalHitCountCollector, ctx); + result[i] = new TopDocs(totalHitCountCollector.getTotalHits(), Lucene.EMPTY_SCORE_DOCS, 0); + } else { + int topN = Math.min(from() + size(), context.searcher().getIndexReader().maxDoc()); + TopDocsCollector topDocsCollector; + if (sort() != null) { + topDocsCollector = TopFieldCollector.create(sort().sort, topN, true, trackScores(), trackScores()); + } else { + topDocsCollector = TopScoreDocCollector.create(topN); + } + try { + intersect(weight, innerHitQueryWeight, topDocsCollector, ctx); + } finally { + clearReleasables(Lifetime.COLLECTION); + } + result[i] = topDocsCollector.topDocs(from(), size()); + } + } + return result; } } } diff --git a/core/src/main/java/org/elasticsearch/index/query/Operator.java b/core/src/main/java/org/elasticsearch/index/query/Operator.java index 7972dbb49ad81..de88abebad359 100644 --- a/core/src/main/java/org/elasticsearch/index/query/Operator.java +++ b/core/src/main/java/org/elasticsearch/index/query/Operator.java @@ -54,16 +54,12 @@ public QueryParser.Operator toQueryParserOperator() { } public static Operator readFromStream(StreamInput in) throws IOException { - int ordinal = in.readVInt(); - if (ordinal < 0 || ordinal >= values().length) { - throw new IOException("Unknown Operator ordinal [" + ordinal + "]"); - } - return values()[ordinal]; + return in.readEnum(Operator.class); } @Override public void 
writeTo(StreamOutput out) throws IOException { - out.writeVInt(this.ordinal()); + out.writeEnum(this); } public static Operator fromString(String op) { diff --git a/core/src/main/java/org/elasticsearch/index/query/ParentIdQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/ParentIdQueryBuilder.java deleted file mode 100644 index 1b1a9508bc47d..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/query/ParentIdQueryBuilder.java +++ /dev/null @@ -1,196 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.index.query; - -import org.apache.lucene.index.Term; -import org.apache.lucene.search.BooleanClause; -import org.apache.lucene.search.BooleanQuery; -import org.apache.lucene.search.DocValuesTermsQuery; -import org.apache.lucene.search.MatchNoDocsQuery; -import org.apache.lucene.search.Query; -import org.apache.lucene.search.TermQuery; -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParsingException; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.mapper.DocumentMapper; -import org.elasticsearch.index.mapper.ParentFieldMapper; -import org.elasticsearch.index.mapper.TypeFieldMapper; - -import java.io.IOException; -import java.util.Objects; -import java.util.Optional; - -public final class ParentIdQueryBuilder extends AbstractQueryBuilder { - public static final String NAME = "parent_id"; - - /** - * The default value for ignore_unmapped. - */ - public static final boolean DEFAULT_IGNORE_UNMAPPED = false; - - private static final ParseField ID_FIELD = new ParseField("id"); - private static final ParseField TYPE_FIELD = new ParseField("type", "child_type"); - private static final ParseField IGNORE_UNMAPPED_FIELD = new ParseField("ignore_unmapped"); - - private final String type; - private final String id; - - private boolean ignoreUnmapped = false; - - public ParentIdQueryBuilder(String type, String id) { - this.type = type; - this.id = id; - } - - /** - * Read from a stream. 
- */ - public ParentIdQueryBuilder(StreamInput in) throws IOException { - super(in); - type = in.readString(); - id = in.readString(); - ignoreUnmapped = in.readBoolean(); - } - - @Override - protected void doWriteTo(StreamOutput out) throws IOException { - out.writeString(type); - out.writeString(id); - out.writeBoolean(ignoreUnmapped); - } - - public String getType() { - return type; - } - - public String getId() { - return id; - } - - /** - * Sets whether the query builder should ignore unmapped types (and run a - * {@link MatchNoDocsQuery} in place of this query) or throw an exception if - * the type is unmapped. - */ - public ParentIdQueryBuilder ignoreUnmapped(boolean ignoreUnmapped) { - this.ignoreUnmapped = ignoreUnmapped; - return this; - } - - /** - * Gets whether the query builder will ignore unmapped types (and run a - * {@link MatchNoDocsQuery} in place of this query) or throw an exception if - * the type is unmapped. - */ - public boolean ignoreUnmapped() { - return ignoreUnmapped; - } - - @Override - protected void doXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(NAME); - builder.field(TYPE_FIELD.getPreferredName(), type); - builder.field(ID_FIELD.getPreferredName(), id); - builder.field(IGNORE_UNMAPPED_FIELD.getPreferredName(), ignoreUnmapped); - printBoostAndQueryName(builder); - builder.endObject(); - } - - public static Optional fromXContent(QueryParseContext parseContext) throws IOException { - XContentParser parser = parseContext.parser(); - float boost = AbstractQueryBuilder.DEFAULT_BOOST; - String type = null; - String id = null; - String queryName = null; - String currentFieldName = null; - boolean ignoreUnmapped = DEFAULT_IGNORE_UNMAPPED; - XContentParser.Token token; - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, TYPE_FIELD)) { - type = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, ID_FIELD)) { - id = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, IGNORE_UNMAPPED_FIELD)) { - ignoreUnmapped = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { - boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { - queryName = parser.text(); - } else { - throw new ParsingException(parser.getTokenLocation(), "[parent_id] query does not support [" + currentFieldName + "]"); - } - } else { - throw new ParsingException(parser.getTokenLocation(), "[parent_id] query does not support [" + currentFieldName + "]"); - } - } - ParentIdQueryBuilder queryBuilder = new ParentIdQueryBuilder(type, id); - queryBuilder.queryName(queryName); - queryBuilder.boost(boost); - queryBuilder.ignoreUnmapped(ignoreUnmapped); - return Optional.of(queryBuilder); - } - - - @Override - protected Query doToQuery(QueryShardContext context) throws IOException { - DocumentMapper childDocMapper = context.getMapperService().documentMapper(type); - if (childDocMapper == null) { - if (ignoreUnmapped) { - return new MatchNoDocsQuery(); - } else { - throw new QueryShardException(context, "[" + NAME + "] no mapping found for type [" + type + "]"); - } - } - ParentFieldMapper parentFieldMapper = 
childDocMapper.parentFieldMapper(); - if (parentFieldMapper.active() == false) { - throw new QueryShardException(context, "[" + NAME + "] _parent field has no parent type configured"); - } - String fieldName = ParentFieldMapper.joinField(parentFieldMapper.type()); - - BooleanQuery.Builder query = new BooleanQuery.Builder(); - query.add(new DocValuesTermsQuery(fieldName, id), BooleanClause.Occur.MUST); - // Need to take child type into account, otherwise a child doc of different type with the same id could match - query.add(new TermQuery(new Term(TypeFieldMapper.NAME, type)), BooleanClause.Occur.FILTER); - return query.build(); - } - - @Override - protected boolean doEquals(ParentIdQueryBuilder that) { - return Objects.equals(type, that.type) - && Objects.equals(id, that.id) - && Objects.equals(ignoreUnmapped, that.ignoreUnmapped); - } - - @Override - protected int doHashCode() { - return Objects.hash(type, id, ignoreUnmapped); - } - - @Override - public String getWriteableName() { - return NAME; - } -} diff --git a/core/src/main/java/org/elasticsearch/index/query/PrefixQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/PrefixQueryBuilder.java index 13cd9eca9d31c..7d76c1ebd7630 100644 --- a/core/src/main/java/org/elasticsearch/index/query/PrefixQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/PrefixQueryBuilder.java @@ -140,13 +140,13 @@ public static Optional fromXContent(QueryParseContext parseC if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else { - if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, PREFIX_FIELD)) { + } else if (PREFIX_FIELD.match(currentFieldName)) { value = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, REWRITE_FIELD)) { + } else if (REWRITE_FIELD.match(currentFieldName)) { rewrite = parser.textOrNull(); } else { throw new ParsingException(parser.getTokenLocation(), @@ -174,7 +174,7 @@ public String getWriteableName() { @Override protected Query doToQuery(QueryShardContext context) throws IOException { - MultiTermQuery.RewriteMethod method = QueryParsers.parseRewriteMethod(context.getParseFieldMatcher(), rewrite, null); + MultiTermQuery.RewriteMethod method = QueryParsers.parseRewriteMethod(rewrite, null); Query query = null; MappedFieldType fieldType = context.fieldMapper(fieldName); diff --git a/core/src/main/java/org/elasticsearch/index/query/QueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/QueryBuilder.java index 197af655d54a2..7c6b332f4aab8 100644 --- a/core/src/main/java/org/elasticsearch/index/query/QueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/QueryBuilder.java @@ -21,11 +21,11 @@ import org.apache.lucene.search.Query; import org.elasticsearch.common.io.stream.NamedWriteable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import java.io.IOException; -public interface QueryBuilder extends NamedWriteable, ToXContent { +public interface QueryBuilder extends NamedWriteable, ToXContentObject { /** * 
Converts this QueryBuilder to a lucene {@link Query}. diff --git a/core/src/main/java/org/elasticsearch/index/query/QueryBuilders.java b/core/src/main/java/org/elasticsearch/index/query/QueryBuilders.java index d63696ec4fbaa..e7eacfb995c52 100644 --- a/core/src/main/java/org/elasticsearch/index/query/QueryBuilders.java +++ b/core/src/main/java/org/elasticsearch/index/query/QueryBuilders.java @@ -29,12 +29,10 @@ import org.elasticsearch.index.query.functionscore.ScoreFunctionBuilder; import org.elasticsearch.indices.TermsLookup; import org.elasticsearch.script.Script; -import org.elasticsearch.script.ScriptService; import java.io.IOException; import java.util.Collection; import java.util.List; -import java.util.Map; /** * A static factory for simple "import static" usage. @@ -120,7 +118,7 @@ public static IdsQueryBuilder idsQuery() { * @param types The mapping/doc type */ public static IdsQueryBuilder idsQuery(String... types) { - return new IdsQueryBuilder(types); + return new IdsQueryBuilder().types(types); } /** @@ -199,13 +197,9 @@ public static TermQueryBuilder termQuery(String name, Object value) { * @param name The name of the field * @param value The value of the term * - * @deprecated Fuzzy queries are not useful enough and will be removed with Elasticsearch 4.0. In most cases you may want to use - * a match query with the fuzziness parameter for strings or range queries for numeric and date fields. - * * @see #matchQuery(String, Object) * @see #rangeQuery(String) */ - @Deprecated public static FuzzyQueryBuilder fuzzyQuery(String name, String value) { return new FuzzyQueryBuilder(name, value); } @@ -216,13 +210,9 @@ public static FuzzyQueryBuilder fuzzyQuery(String name, String value) { * @param name The name of the field * @param value The value of the term * - * @deprecated Fuzzy queries are not useful enough and will be removed with Elasticsearch 4.0. In most cases you may want to use - * a match query with the fuzziness parameter for strings or range queries for numeric and date fields. - * * @see #matchQuery(String, Object) * @see #rangeQuery(String) */ - @Deprecated public static FuzzyQueryBuilder fuzzyQuery(String name, Object value) { return new FuzzyQueryBuilder(name, value); } @@ -481,38 +471,6 @@ public static MoreLikeThisQueryBuilder moreLikeThisQuery(Item[] likeItems) { return moreLikeThisQuery(null, null, likeItems); } - /** - * Constructs a new has_child query, with the child type and the query to run on the child documents. The - * results of this query are the parent docs that those child docs matched. - * - * @param type The child type. - * @param query The query. - * @param scoreMode How the scores from the children hits should be aggregated into the parent hit. - */ - public static HasChildQueryBuilder hasChildQuery(String type, QueryBuilder query, ScoreMode scoreMode) { - return new HasChildQueryBuilder(type, query, scoreMode); - } - - /** - * Constructs a new parent query, with the parent type and the query to run on the parent documents. The - * results of this query are the children docs that those parent docs matched. - * - * @param type The parent type. - * @param query The query. 
- * @param score Whether the score from the parent hit should propogate to the child hit - */ - public static HasParentQueryBuilder hasParentQuery(String type, QueryBuilder query, boolean score) { - return new HasParentQueryBuilder(type, query, score); - } - - /** - * Constructs a new parent id query that returns all child documents of the specified type that - * point to the specified id. - */ - public static ParentIdQueryBuilder parentId(String type, String id) { - return new ParentIdQueryBuilder(type, id); - } - public static NestedQueryBuilder nestedQuery(String path, QueryBuilder query, ScoreMode scoreMode) { return new NestedQueryBuilder(path, query, scoreMode); } diff --git a/core/src/main/java/org/elasticsearch/index/query/QueryParseContext.java b/core/src/main/java/org/elasticsearch/index/query/QueryParseContext.java index 9ed374db212a8..6dde6ec3b9202 100644 --- a/core/src/main/java/org/elasticsearch/index/query/QueryParseContext.java +++ b/core/src/main/java/org/elasticsearch/index/query/QueryParseContext.java @@ -20,22 +20,19 @@ package org.elasticsearch.index.query; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.logging.DeprecationLogger; import org.elasticsearch.common.logging.Loggers; -import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.NamedXContentRegistry.UnknownNamedObjectException; +import org.elasticsearch.common.xcontent.XContentLocation; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.indices.query.IndicesQueriesRegistry; import org.elasticsearch.script.Script; -import org.elasticsearch.script.ScriptSettings; import java.io.IOException; import java.util.Objects; import java.util.Optional; -public class QueryParseContext implements ParseFieldMatcherSupplier { +public class QueryParseContext { private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(QueryParseContext.class)); @@ -43,19 +40,15 @@ public class QueryParseContext implements ParseFieldMatcherSupplier { private static final ParseField CACHE_KEY = new ParseField("_cache_key").withAllDeprecated("Filters are always used as cache keys"); private final XContentParser parser; - private final IndicesQueriesRegistry indicesQueriesRegistry; - private final ParseFieldMatcher parseFieldMatcher; private final String defaultScriptLanguage; - public QueryParseContext(IndicesQueriesRegistry registry, XContentParser parser, ParseFieldMatcher parseFieldMatcher) { - this(Script.DEFAULT_SCRIPT_LANG, registry, parser, parseFieldMatcher); + public QueryParseContext(XContentParser parser) { + this(Script.DEFAULT_SCRIPT_LANG, parser); } - public QueryParseContext(String defaultScriptLanguage, IndicesQueriesRegistry registry, XContentParser parser, - ParseFieldMatcher parseFieldMatcher) { - this.indicesQueriesRegistry = Objects.requireNonNull(registry, "indices queries registry cannot be null"); + //TODO this constructor can be removed from master branch + public QueryParseContext(String defaultScriptLanguage, XContentParser parser) { this.parser = Objects.requireNonNull(parser, "parser cannot be null"); - this.parseFieldMatcher = Objects.requireNonNull(parseFieldMatcher, "parse field matcher cannot be null"); this.defaultScriptLanguage = defaultScriptLanguage; } @@ -64,7 +57,7 @@ public XContentParser parser() { } public 
boolean isDeprecatedSetting(String setting) { - return this.parseFieldMatcher.match(setting, CACHE) || this.parseFieldMatcher.match(setting, CACHE_KEY); + return CACHE.match(setting) || CACHE_KEY.match(setting); } /** @@ -73,6 +66,15 @@ public boolean isDeprecatedSetting(String setting) { public QueryBuilder parseTopLevelQueryBuilder() { try { QueryBuilder queryBuilder = null; + XContentParser.Token first = parser.nextToken(); + if (first == null) { + return null; + } else if (first != XContentParser.Token.START_OBJECT) { + throw new ParsingException( + parser.getTokenLocation(), "Expected [" + XContentParser.Token.START_OBJECT + + "] but found [" + first + "]", parser.getTokenLocation() + ); + } for (XContentParser.Token token = parser.nextToken(); token != XContentParser.Token.END_OBJECT; token = parser.nextToken()) { if (token == XContentParser.Token.FIELD_NAME) { String fieldName = parser.currentName(); @@ -95,49 +97,49 @@ public QueryBuilder parseTopLevelQueryBuilder() { * Parses a query excluding the query element that wraps it */ public Optional parseInnerQueryBuilder() throws IOException { - // move to START object - XContentParser.Token token; if (parser.currentToken() != XContentParser.Token.START_OBJECT) { - token = parser.nextToken(); - if (token != XContentParser.Token.START_OBJECT) { + if (parser.nextToken() != XContentParser.Token.START_OBJECT) { throw new ParsingException(parser.getTokenLocation(), "[_na] query malformed, must start with start_object"); } } - token = parser.nextToken(); - if (token == XContentParser.Token.END_OBJECT) { + if (parser.nextToken() == XContentParser.Token.END_OBJECT) { // we encountered '{}' for a query clause String msg = "query malformed, empty clause found at [" + parser.getTokenLocation() +"]"; DEPRECATION_LOGGER.deprecated(msg); - if (parseFieldMatcher.isStrict()) { - throw new IllegalArgumentException(msg); - } return Optional.empty(); } - if (token != XContentParser.Token.FIELD_NAME) { + if (parser.currentToken() != XContentParser.Token.FIELD_NAME) { throw new ParsingException(parser.getTokenLocation(), "[_na] query malformed, no field after start_object"); } String queryName = parser.currentName(); // move to the next START_OBJECT - token = parser.nextToken(); - if (token != XContentParser.Token.START_OBJECT) { + if (parser.nextToken() != XContentParser.Token.START_OBJECT) { throw new ParsingException(parser.getTokenLocation(), "[" + queryName + "] query malformed, no start_object after query name"); } - @SuppressWarnings("unchecked") - Optional result = (Optional) indicesQueriesRegistry.lookup(queryName, parseFieldMatcher, - parser.getTokenLocation()).fromXContent(this); + Optional result; + try { + @SuppressWarnings("unchecked") + Optional resultCast = (Optional) parser.namedObject(Optional.class, queryName, this); + result = resultCast; + } catch (UnknownNamedObjectException e) { + // Preserve the error message from 5.0 until we have a compellingly better message so we don't break BWC. + // This intentionally doesn't include the causing exception because that'd change the "root_cause" of any unknown query errors + throw new ParsingException(new XContentLocation(e.getLineNumber(), e.getColumnNumber()), + "no [query] registered for [" + e.getName() + "]"); + } + //end_object of the specific query (e.g. match, multi_match etc.) 
element if (parser.currentToken() != XContentParser.Token.END_OBJECT) { throw new ParsingException(parser.getTokenLocation(), "[" + queryName + "] malformed query, expected [END_OBJECT] but found [" + parser.currentToken() + "]"); } - parser.nextToken(); + //end_object of the query object + if (parser.nextToken() != XContentParser.Token.END_OBJECT) { + throw new ParsingException(parser.getTokenLocation(), + "[" + queryName + "] malformed query, expected [END_OBJECT] but found [" + parser.currentToken() + "]"); + } return result; } - @Override - public ParseFieldMatcher getParseFieldMatcher() { - return parseFieldMatcher; - } - /** * Returns the default scripting language, that should be used if scripts don't specify the script language * explicitly. diff --git a/core/src/main/java/org/elasticsearch/index/query/QueryRewriteContext.java b/core/src/main/java/org/elasticsearch/index/query/QueryRewriteContext.java index f12605088e6c5..b0dcd9da7acc5 100644 --- a/core/src/main/java/org/elasticsearch/index/query/QueryRewriteContext.java +++ b/core/src/main/java/org/elasticsearch/index/query/QueryRewriteContext.java @@ -20,44 +20,49 @@ import org.apache.lucene.index.IndexReader; import org.elasticsearch.client.Client; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; -import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.mapper.MapperService; -import org.elasticsearch.indices.query.IndicesQueriesRegistry; +import org.elasticsearch.script.CompiledScript; +import org.elasticsearch.script.ExecutableScript; +import org.elasticsearch.script.Script; +import org.elasticsearch.script.ScriptContext; import org.elasticsearch.script.ScriptService; import org.elasticsearch.script.ScriptSettings; +import org.elasticsearch.template.CompiledTemplate; + +import java.util.function.LongSupplier; /** * Context object used to rewrite {@link QueryBuilder} instances into simplified version. */ -public class QueryRewriteContext implements ParseFieldMatcherSupplier { +public class QueryRewriteContext { protected final MapperService mapperService; protected final ScriptService scriptService; protected final IndexSettings indexSettings; - protected final IndicesQueriesRegistry indicesQueriesRegistry; + private final NamedXContentRegistry xContentRegistry; protected final Client client; protected final IndexReader reader; - protected final ClusterState clusterState; + protected final LongSupplier nowInMillis; public QueryRewriteContext(IndexSettings indexSettings, MapperService mapperService, ScriptService scriptService, - IndicesQueriesRegistry indicesQueriesRegistry, Client client, IndexReader reader, - ClusterState clusterState) { + NamedXContentRegistry xContentRegistry, Client client, IndexReader reader, + LongSupplier nowInMillis) { this.mapperService = mapperService; this.scriptService = scriptService; this.indexSettings = indexSettings; - this.indicesQueriesRegistry = indicesQueriesRegistry; + this.xContentRegistry = xContentRegistry; this.client = client; this.reader = reader; - this.clusterState = clusterState; + this.nowInMillis = nowInMillis; } /** * Returns a clients to fetch resources from local or remove nodes. 
*/ - public final Client getClient() { + public Client getClient() { return client; } @@ -65,47 +70,36 @@ public final Client getClient() { * Returns the index settings for this context. This might return null if the * context has not index scope. */ - public final IndexSettings getIndexSettings() { + public IndexSettings getIndexSettings() { return indexSettings; } - /** - * Returns a script service to fetch scripts. - */ - public final ScriptService getScriptService() { - return scriptService; - } - /** * Return the MapperService. */ - public final MapperService getMapperService() { + public MapperService getMapperService() { return mapperService; } - /** Return the current {@link IndexReader}, or {@code null} if we are on the coordinating node. */ + /** Return the current {@link IndexReader}, or {@code null} if no index reader is available, for + * instance if we are on the coordinating node or if this rewrite context is used to index + * queries (percolation). */ public IndexReader getIndexReader() { return reader; } - @Override - public ParseFieldMatcher getParseFieldMatcher() { - return this.indexSettings.getParseFieldMatcher(); - } - /** - * Returns the cluster state as is when the operation started. + * The registry used to build new {@link XContentParser}s. Contains registered named parsers needed to parse the query. */ - public ClusterState getClusterState() { - return clusterState; + public NamedXContentRegistry getXContentRegistry() { + return xContentRegistry; } /** - * Returns a new {@link QueryParseContext} that wraps the provided parser, using the ParseFieldMatcher settings that - * are configured in the index settings. The default script language will always default to Painless. + * Returns a new {@link QueryParseContext} that wraps the provided parser. 
*/ public QueryParseContext newParseContext(XContentParser parser) { - return new QueryParseContext(indicesQueriesRegistry, parser, indexSettings.getParseFieldMatcher()); + return new QueryParseContext(parser); } /** @@ -114,6 +108,15 @@ public QueryParseContext newParseContext(XContentParser parser) { */ public QueryParseContext newParseContextWithLegacyScriptLanguage(XContentParser parser) { String defaultScriptLanguage = ScriptSettings.getLegacyDefaultLang(indexSettings.getNodeSettings()); - return new QueryParseContext(defaultScriptLanguage, indicesQueriesRegistry, parser, indexSettings.getParseFieldMatcher()); + return new QueryParseContext(defaultScriptLanguage, parser); + } + + public long nowInMillis() { + return nowInMillis.getAsLong(); + } + + public BytesReference getTemplateBytes(Script template) { + CompiledTemplate compiledTemplate = scriptService.compileTemplate(template, ScriptContext.Standard.SEARCH); + return compiledTemplate.run(template.getParams()); } } diff --git a/core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java b/core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java index 78869f5374a74..26d2e7ac39864 100644 --- a/core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java +++ b/core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java @@ -26,6 +26,9 @@ import java.util.Collection; import java.util.HashMap; import java.util.Map; +import java.util.function.Function; +import java.util.function.LongSupplier; + import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.index.IndexReader; import org.apache.lucene.queryparser.classic.MapperQueryParser; @@ -33,19 +36,23 @@ import org.apache.lucene.search.Query; import org.apache.lucene.search.join.BitSetProducer; import org.apache.lucene.search.similarities.Similarity; +import org.apache.lucene.util.SetOnce; import org.elasticsearch.Version; import org.elasticsearch.client.Client; -import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.Strings; +import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.lucene.search.Queries; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexSettings; -import org.elasticsearch.index.analysis.AnalysisService; +import org.elasticsearch.index.analysis.IndexAnalyzers; import org.elasticsearch.index.cache.bitset.BitsetFilterCache; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexFieldDataService; import org.elasticsearch.index.mapper.ContentPath; +import org.elasticsearch.index.mapper.DocumentMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.Mapper; import org.elasticsearch.index.mapper.MapperService; @@ -53,9 +60,12 @@ import org.elasticsearch.index.mapper.TextFieldMapper; import org.elasticsearch.index.query.support.NestedScope; import org.elasticsearch.index.similarity.SimilarityService; -import org.elasticsearch.indices.query.IndicesQueriesRegistry; +import org.elasticsearch.script.CompiledScript; +import org.elasticsearch.script.ExecutableScript; +import org.elasticsearch.script.Script; +import org.elasticsearch.script.ScriptContext; import org.elasticsearch.script.ScriptService; -import org.elasticsearch.search.internal.SearchContext; +import 
org.elasticsearch.script.SearchScript; import org.elasticsearch.search.lookup.SearchLookup; /** @@ -68,7 +78,10 @@ public class QueryShardContext extends QueryRewriteContext { private final BitsetFilterCache bitsetFilterCache; private final IndexFieldDataService indexFieldDataService; private final IndexSettings indexSettings; + private final int shardId; private String[] types = Strings.EMPTY_ARRAY; + private boolean cachable = true; + private final SetOnce frozen = new SetOnce<>(); public void setTypes(String... types) { this.types = types; @@ -80,31 +93,31 @@ public String[] getTypes() { private final Map namedQueries = new HashMap<>(); private final MapperQueryParser queryParser = new MapperQueryParser(this); - private final IndicesQueriesRegistry indicesQueriesRegistry; private boolean allowUnmappedFields; private boolean mapUnmappedFieldAsString; private NestedScope nestedScope; private boolean isFilter; - public QueryShardContext(IndexSettings indexSettings, BitsetFilterCache bitsetFilterCache, IndexFieldDataService indexFieldDataService, - MapperService mapperService, SimilarityService similarityService, ScriptService scriptService, - final IndicesQueriesRegistry indicesQueriesRegistry, Client client, - IndexReader reader, ClusterState clusterState) { - super(indexSettings, mapperService, scriptService, indicesQueriesRegistry, client, reader, clusterState); + public QueryShardContext(int shardId, IndexSettings indexSettings, BitsetFilterCache bitsetFilterCache, + IndexFieldDataService indexFieldDataService, MapperService mapperService, SimilarityService similarityService, + ScriptService scriptService, NamedXContentRegistry xContentRegistry, + Client client, IndexReader reader, LongSupplier nowInMillis) { + super(indexSettings, mapperService, scriptService, xContentRegistry, client, reader, nowInMillis); + this.shardId = shardId; this.indexSettings = indexSettings; this.similarityService = similarityService; this.mapperService = mapperService; this.bitsetFilterCache = bitsetFilterCache; this.indexFieldDataService = indexFieldDataService; this.allowUnmappedFields = indexSettings.isDefaultAllowUnmappedFields(); - this.indicesQueriesRegistry = indicesQueriesRegistry; this.nestedScope = new NestedScope(); + } public QueryShardContext(QueryShardContext source) { - this(source.indexSettings, source.bitsetFilterCache, source.indexFieldDataService, source.mapperService, - source.similarityService, source.scriptService, source.indicesQueriesRegistry, source.client, - source.reader, source.clusterState); + this(source.shardId, source.indexSettings, source.bitsetFilterCache, source.indexFieldDataService, source.mapperService, + source.similarityService, source.scriptService, source.getXContentRegistry(), source.client, + source.reader, source.nowInMillis); this.types = source.getTypes(); } @@ -116,8 +129,8 @@ private void reset() { this.isFilter = false; } - public AnalysisService getAnalysisService() { - return mapperService.analysisService(); + public IndexAnalyzers getIndexAnalyzers() { + return mapperService.getIndexAnalyzers(); } public Similarity getSearchSimilarity() { @@ -180,6 +193,10 @@ public void setIsFilter(boolean isFilter) { this.isFilter = isFilter; } + /** + * Returns all the fields that match a given pattern. If prefixed with a + * type then the fields will be returned with a type prefix. 
+ */ public Collection simpleMatchToIndexNames(String pattern) { return mapperService.simpleMatchToIndexNames(pattern); } @@ -192,6 +209,14 @@ public ObjectMapper getObjectMapper(String name) { return mapperService.getObjectMapper(name); } + /** + * Returns a {@link DocumentMapper} instance for the given type. + * Delegates to {@link MapperService#documentMapper(String)} + */ + public DocumentMapper documentMapper(String type) { + return mapperService.documentMapper(type); + } + /** * Gets the search analyzer for the given field, or the default if there is none present for the field * TODO: remove this by moving defaults into mappers themselves @@ -250,24 +275,12 @@ public Collection queryTypes() { private SearchLookup lookup = null; public SearchLookup lookup() { - SearchContext current = SearchContext.current(); - if (current != null) { - return current.lookup(); - } if (lookup == null) { - lookup = new SearchLookup(getMapperService(), indexFieldDataService, null); + lookup = new SearchLookup(getMapperService(), indexFieldDataService, types); } return lookup; } - public long nowInMillis() { - SearchContext current = SearchContext.current(); - if (current != null) { - return current.nowInMillis(); - } - return System.currentTimeMillis(); - } - public NestedScope nestedScope() { return nestedScope; } @@ -305,12 +318,7 @@ public ParsedQuery toQuery(QueryBuilder queryBuilder) { }); } - @FunctionalInterface - private interface CheckedFunction { - R apply(T t) throws IOException; - } - - private ParsedQuery toQuery(QueryBuilder queryBuilder, CheckedFunction filterOrQuery) { + private ParsedQuery toQuery(QueryBuilder queryBuilder, CheckedFunction filterOrQuery) { reset(); try { QueryBuilder rewriteQuery = QueryBuilder.rewriteQuery(queryBuilder, this); @@ -327,4 +335,100 @@ private ParsedQuery toQuery(QueryBuilder queryBuilder, CheckedFunction, SearchScript> getLazySearchScript(Script script, ScriptContext context) { + failIfFrozen(); + CompiledScript compile = scriptService.compile(script, context); + return (p) -> scriptService.search(lookup(), compile, p); + } + + /** + * Compiles (or retrieves from cache) and binds the parameters to the + * provided script + */ + public final ExecutableScript getExecutableScript(Script script, ScriptContext context) { + failIfFrozen(); + CompiledScript compiledScript = scriptService.compile(script, context); + return scriptService.executable(compiledScript, script.getParams()); + } + + /** + * Returns a lazily created {@link ExecutableScript} that is compiled immediately but can be pulled later once all + * parameters are available. + */ + public final Function, ExecutableScript> getLazyExecutableScript(Script script, ScriptContext context) { + failIfFrozen(); + CompiledScript executable = scriptService.compile(script, context); + return (p) -> scriptService.executable(executable, p); + } + + /** + * If this method is called, the query context will throw an exception if methods are accessed + * that could yield different results across executions, like {@link #getTemplateBytes(Script)} + */ + public final void freezeContext() { + this.frozen.set(Boolean.TRUE); + } + + /** + * This method fails if {@link #freezeContext()} has been called on this + * context before. This is used to seal the context. + * + * This method and all methods that call it should be final to ensure that + * setting the request as not cacheable and the freezing behaviour of this + * class cannot be bypassed. This is important so we can trust when this + * class says a request can be cached.
+ */ + protected final void failIfFrozen() { + this.cachable = false; + if (frozen.get() == Boolean.TRUE) { + throw new IllegalArgumentException("features that prevent cachability are disabled on this context"); + } else { + assert frozen.get() == null : frozen.get(); + } + } + + @Override + public final BytesReference getTemplateBytes(Script template) { + failIfFrozen(); + return super.getTemplateBytes(template); + } + + /** + * Returns true iff the result of the processed search request is cachable, otherwise false. + */ + public final boolean isCachable() { + return cachable; + } + + /** + * Returns the shard ID this context was created for. + */ + public int getShardId() { + return shardId; + } + + @Override + public final long nowInMillis() { + failIfFrozen(); + return super.nowInMillis(); + } + + @Override + public Client getClient() { + failIfFrozen(); // if somebody uses for instance a terms filter with lookup, the result can't be cached... + return super.getClient(); + } } diff --git a/core/src/main/java/org/elasticsearch/index/query/QueryShardException.java b/core/src/main/java/org/elasticsearch/index/query/QueryShardException.java index 1e31c7c50e160..9b6ce3a6e4b51 100644 --- a/core/src/main/java/org/elasticsearch/index/query/QueryShardException.java +++ b/core/src/main/java/org/elasticsearch/index/query/QueryShardException.java @@ -22,7 +22,6 @@ import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.Index; import org.elasticsearch.rest.RestStatus; @@ -60,11 +59,6 @@ public RestStatus status() { return RestStatus.BAD_REQUEST; } - @Override - protected void innerToXContent(XContentBuilder builder, Params params) throws IOException { - super.innerToXContent(builder, params); - } - @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); diff --git a/core/src/main/java/org/elasticsearch/index/query/QueryStringQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/QueryStringQueryBuilder.java index be6a170cc2285..6b37da0dd1821 100644 --- a/core/src/main/java/org/elasticsearch/index/query/QueryStringQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/QueryStringQueryBuilder.java @@ -21,11 +21,12 @@ import org.apache.lucene.queryparser.classic.MapperQueryParser; import org.apache.lucene.queryparser.classic.QueryParserSettings; -import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.BoostQuery; import org.apache.lucene.search.FuzzyQuery; +import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.util.automaton.Operations; +import org.elasticsearch.Version; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.io.stream.StreamInput; @@ -36,17 +37,30 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.analysis.NamedAnalyzer; +import org.elasticsearch.index.mapper.DateFieldMapper; +import org.elasticsearch.index.mapper.IpFieldMapper; +import org.elasticsearch.index.mapper.KeywordFieldMapper; +import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.mapper.NumberFieldMapper; +import
org.elasticsearch.index.mapper.ScaledFloatFieldMapper; +import org.elasticsearch.index.mapper.StringFieldMapper; +import org.elasticsearch.index.mapper.TextFieldMapper; +import org.elasticsearch.index.mapper.TimestampFieldMapper; import org.elasticsearch.index.query.support.QueryParsers; import org.joda.time.DateTimeZone; import java.io.IOException; import java.util.ArrayList; +import java.util.Collection; import java.util.HashMap; +import java.util.HashSet; import java.util.List; import java.util.Locale; import java.util.Map; import java.util.Objects; import java.util.Optional; +import java.util.Set; import java.util.TreeMap; /** @@ -57,11 +71,11 @@ * them either using DisMax or a plain boolean query (see {@link #useDisMax(boolean)}). */ public class QueryStringQueryBuilder extends AbstractQueryBuilder { + public static final String NAME = "query_string"; public static final boolean DEFAULT_AUTO_GENERATE_PHRASE_QUERIES = false; public static final int DEFAULT_MAX_DETERMINED_STATES = Operations.DEFAULT_MAX_DETERMINIZED_STATES; - public static final boolean DEFAULT_LOWERCASE_EXPANDED_TERMS = true; public static final boolean DEFAULT_ENABLE_POSITION_INCREMENTS = true; public static final boolean DEFAULT_ESCAPE = false; public static final boolean DEFAULT_USE_DIS_MAX = true; @@ -71,7 +85,7 @@ public class QueryStringQueryBuilder extends AbstractQueryBuilder ALLOWED_QUERY_MAPPER_TYPES; + + static { + ALLOWED_QUERY_MAPPER_TYPES = new HashSet<>(); + ALLOWED_QUERY_MAPPER_TYPES.add(DateFieldMapper.CONTENT_TYPE); + ALLOWED_QUERY_MAPPER_TYPES.add(IpFieldMapper.CONTENT_TYPE); + ALLOWED_QUERY_MAPPER_TYPES.add(KeywordFieldMapper.CONTENT_TYPE); + for (NumberFieldMapper.NumberType nt : NumberFieldMapper.NumberType.values()) { + ALLOWED_QUERY_MAPPER_TYPES.add(nt.typeName()); + } + ALLOWED_QUERY_MAPPER_TYPES.add(ScaledFloatFieldMapper.CONTENT_TYPE); + ALLOWED_QUERY_MAPPER_TYPES.add(StringFieldMapper.CONTENT_TYPE); + ALLOWED_QUERY_MAPPER_TYPES.add(TextFieldMapper.CONTENT_TYPE); + ALLOWED_QUERY_MAPPER_TYPES.add(TimestampFieldMapper.CONTENT_TYPE); + } private final String queryString; @@ -126,12 +161,8 @@ public class QueryStringQueryBuilder extends AbstractQueryBuildertrue. - */ - public QueryStringQueryBuilder lowercaseExpandedTerms(boolean lowercaseExpandedTerms) { - this.lowercaseExpandedTerms = lowercaseExpandedTerms; - return this; - } - - public boolean lowercaseExpandedTerms() { - return this.lowercaseExpandedTerms; - } - /** * Set to true to enable position increments in result query. Defaults to * true. @@ -473,6 +527,11 @@ public int phraseSlop() { return phraseSlop; } + public QueryStringQueryBuilder rewrite(String rewrite) { + this.rewrite = rewrite; + return this; + } + /** * Set to true to enable analysis on wildcard and prefix queries. */ @@ -485,11 +544,6 @@ public Boolean analyzeWildcard() { return this.analyzeWildcard; } - public QueryStringQueryBuilder rewrite(String rewrite) { - this.rewrite = rewrite; - return this; - } - public String rewrite() { return this.rewrite; } @@ -528,15 +582,6 @@ public Boolean lenient() { return this.lenient; } - public QueryStringQueryBuilder locale(Locale locale) { - this.locale = locale == null ? DEFAULT_LOCALE : locale; - return this; - } - - public Locale locale() { - return this.locale; - } - /** * In case of date field, we can adjust the from/to fields using a timezone */ @@ -570,6 +615,19 @@ public boolean escape() { return this.escape; } + /** + * Whether query text should be split on whitespace prior to analysis. 
+ * Default is {@value #DEFAULT_SPLIT_ON_WHITESPACE}. + */ + public QueryStringQueryBuilder splitOnWhitespace(boolean value) { + this.splitOnWhitespace = value; + return this; + } + + public boolean splitOnWhitespace() { + return splitOnWhitespace; + } + @Override protected void doXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(NAME); @@ -593,11 +651,10 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep builder.field(QUOTE_ANALYZER_FIELD.getPreferredName(), this.quoteAnalyzer); } builder.field(AUTO_GENERATE_PHRASE_QUERIES_FIELD.getPreferredName(), this.autoGeneratePhraseQueries); - builder.field(MAX_DETERMINED_STATES_FIELD.getPreferredName(), this.maxDeterminizedStates); + builder.field(MAX_DETERMINIZED_STATES_FIELD.getPreferredName(), this.maxDeterminizedStates); if (this.allowLeadingWildcard != null) { builder.field(ALLOW_LEADING_WILDCARD_FIELD.getPreferredName(), this.allowLeadingWildcard); } - builder.field(LOWERCASE_EXPANDED_TERMS_FIELD.getPreferredName(), this.lowercaseExpandedTerms); builder.field(ENABLE_POSITION_INCREMENTS_FIELD.getPreferredName(), this.enablePositionIncrements); this.fuzziness.toXContent(builder, params); builder.field(FUZZY_PREFIX_LENGTH_FIELD.getPreferredName(), this.fuzzyPrefixLength); @@ -621,11 +678,14 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep if (this.lenient != null) { builder.field(LENIENT_FIELD.getPreferredName(), this.lenient); } - builder.field(LOCALE_FIELD.getPreferredName(), this.locale.toLanguageTag()); if (this.timeZone != null) { builder.field(TIME_ZONE_FIELD.getPreferredName(), this.timeZone.getID()); } builder.field(ESCAPE_FIELD.getPreferredName(), this.escape); + builder.field(SPLIT_ON_WHITESPACE.getPreferredName(), this.splitOnWhitespace); + if (this.useAllFields != null) { + builder.field(ALL_FIELDS_FIELD.getPreferredName(), this.useAllFields); + } printBoostAndQueryName(builder); builder.endObject(); } @@ -642,7 +702,6 @@ public static Optional fromXContent(QueryParseContext p float boost = AbstractQueryBuilder.DEFAULT_BOOST; boolean autoGeneratePhraseQueries = QueryStringQueryBuilder.DEFAULT_AUTO_GENERATE_PHRASE_QUERIES; int maxDeterminizedStates = QueryStringQueryBuilder.DEFAULT_MAX_DETERMINED_STATES; - boolean lowercaseExpandedTerms = QueryStringQueryBuilder.DEFAULT_LOWERCASE_EXPANDED_TERMS; boolean enablePositionIncrements = QueryStringQueryBuilder.DEFAULT_ENABLE_POSITION_INCREMENTS; boolean escape = QueryStringQueryBuilder.DEFAULT_ESCAPE; boolean useDisMax = QueryStringQueryBuilder.DEFAULT_USE_DIS_MAX; @@ -657,16 +716,17 @@ public static Optional fromXContent(QueryParseContext p Boolean lenient = null; Operator defaultOperator = QueryStringQueryBuilder.DEFAULT_OPERATOR; String timeZone = null; - Locale locale = QueryStringQueryBuilder.DEFAULT_LOCALE; Fuzziness fuzziness = QueryStringQueryBuilder.DEFAULT_FUZZINESS; String fuzzyRewrite = null; String rewrite = null; + boolean splitOnWhitespace = DEFAULT_SPLIT_ON_WHITESPACE; + Boolean useAllFields = null; Map fieldsAndWeights = new HashMap<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_ARRAY) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, FIELDS_FIELD)) { + if (FIELDS_FIELD.match(currentFieldName)) { while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { String fField = 
null; float fBoost = AbstractQueryBuilder.DEFAULT_BOOST; @@ -690,66 +750,69 @@ public static Optional fromXContent(QueryParseContext p "] query does not support [" + currentFieldName + "]"); } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, QUERY_FIELD)) { + if (QUERY_FIELD.match(currentFieldName)) { queryString = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, DEFAULT_FIELD_FIELD)) { + } else if (DEFAULT_FIELD_FIELD.match(currentFieldName)) { defaultField = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, DEFAULT_OPERATOR_FIELD)) { + } else if (DEFAULT_OPERATOR_FIELD.match(currentFieldName)) { defaultOperator = Operator.fromString(parser.text()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, ANALYZER_FIELD)) { + } else if (ANALYZER_FIELD.match(currentFieldName)) { analyzer = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, QUOTE_ANALYZER_FIELD)) { + } else if (QUOTE_ANALYZER_FIELD.match(currentFieldName)) { quoteAnalyzer = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, ALLOW_LEADING_WILDCARD_FIELD)) { + } else if (ALLOW_LEADING_WILDCARD_FIELD.match(currentFieldName)) { allowLeadingWildcard = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AUTO_GENERATE_PHRASE_QUERIES_FIELD)) { + } else if (AUTO_GENERATE_PHRASE_QUERIES_FIELD.match(currentFieldName)) { autoGeneratePhraseQueries = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MAX_DETERMINED_STATES_FIELD)) { + } else if (MAX_DETERMINIZED_STATES_FIELD.match(currentFieldName)) { maxDeterminizedStates = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, LOWERCASE_EXPANDED_TERMS_FIELD)) { - lowercaseExpandedTerms = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, ENABLE_POSITION_INCREMENTS_FIELD)) { + } else if (LOWERCASE_EXPANDED_TERMS_FIELD.match(currentFieldName)) { + // ignore, deprecated setting + } else if (ENABLE_POSITION_INCREMENTS_FIELD.match(currentFieldName)) { enablePositionIncrements = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, ESCAPE_FIELD)) { + } else if (ESCAPE_FIELD.match(currentFieldName)) { escape = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, USE_DIS_MAX_FIELD)) { + } else if (USE_DIS_MAX_FIELD.match(currentFieldName)) { useDisMax = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, FUZZY_PREFIX_LENGTH_FIELD)) { + } else if (FUZZY_PREFIX_LENGTH_FIELD.match(currentFieldName)) { fuzzyPrefixLength = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, FUZZY_MAX_EXPANSIONS_FIELD)) { + } else if (FUZZY_MAX_EXPANSIONS_FIELD.match(currentFieldName)) { fuzzyMaxExpansions = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, FUZZY_REWRITE_FIELD)) { + } else if (FUZZY_REWRITE_FIELD.match(currentFieldName)) { fuzzyRewrite = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, PHRASE_SLOP_FIELD)) { + } else if (PHRASE_SLOP_FIELD.match(currentFieldName)) { phraseSlop = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, 
Fuzziness.FIELD)) { + } else if (Fuzziness.FIELD.match(currentFieldName)) { fuzziness = Fuzziness.parse(parser); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, TIE_BREAKER_FIELD)) { + } else if (TIE_BREAKER_FIELD.match(currentFieldName)) { tieBreaker = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, ANALYZE_WILDCARD_FIELD)) { + } else if (ANALYZE_WILDCARD_FIELD.match(currentFieldName)) { analyzeWildcard = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, REWRITE_FIELD)) { + } else if (REWRITE_FIELD.match(currentFieldName)) { rewrite = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MINIMUM_SHOULD_MATCH_FIELD)) { + } else if (MINIMUM_SHOULD_MATCH_FIELD.match(currentFieldName)) { minimumShouldMatch = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, QUOTE_FIELD_SUFFIX_FIELD)) { + } else if (QUOTE_FIELD_SUFFIX_FIELD.match(currentFieldName)) { quoteFieldSuffix = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, LENIENT_FIELD)) { + } else if (LENIENT_FIELD.match(currentFieldName)) { lenient = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, LOCALE_FIELD)) { - String localeStr = parser.text(); - locale = Locale.forLanguageTag(localeStr); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, TIME_ZONE_FIELD)) { + } else if (LOCALE_FIELD.match(currentFieldName)) { + // ignore, deprecated setting + } else if (ALL_FIELDS_FIELD.match(currentFieldName)) { + useAllFields = parser.booleanValue(); + } else if (TIME_ZONE_FIELD.match(currentFieldName)) { try { timeZone = parser.text(); } catch (IllegalArgumentException e) { throw new ParsingException(parser.getTokenLocation(), "[" + QueryStringQueryBuilder.NAME + "] time_zone [" + parser.text() + "] is unknown"); } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); + } else if (SPLIT_ON_WHITESPACE.match(currentFieldName)) { + splitOnWhitespace = parser.booleanValue(); } else { throw new ParsingException(parser.getTokenLocation(), "[" + QueryStringQueryBuilder.NAME + "] query does not support [" + currentFieldName + "]"); @@ -763,6 +826,12 @@ public static Optional fromXContent(QueryParseContext p throw new ParsingException(parser.getTokenLocation(), "[" + QueryStringQueryBuilder.NAME + "] must be provided with a [query]"); } + if ((useAllFields != null && useAllFields) && + (defaultField != null || fieldsAndWeights.size() != 0)) { + throw new ParsingException(parser.getTokenLocation(), + "cannot use [all_fields] parameter in conjunction with [default_field] or [fields]"); + } + QueryStringQueryBuilder queryStringQuery = new QueryStringQueryBuilder(queryString); queryStringQuery.fields(fieldsAndWeights); queryStringQuery.defaultField(defaultField); @@ -772,7 +841,6 @@ public static Optional fromXContent(QueryParseContext p queryStringQuery.allowLeadingWildcard(allowLeadingWildcard); queryStringQuery.autoGeneratePhraseQueries(autoGeneratePhraseQueries); 
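// A minimal usage sketch, assuming the setter names added in this diff (exact signatures may differ):
// with lowercase_expanded_terms and locale removed, the new options this parser understands are
// split_on_whitespace and all_fields, and the same query could be built programmatically roughly like this:
//
//     QueryStringQueryBuilder qsb = QueryBuilders.queryStringQuery("quick brown fox")
//             .useAllFields(true)        // expand over all queryable fields instead of _all
//             .splitOnWhitespace(false)  // hand the whole input to the analyzer
//             .lenient(true);
//
// As enforced a few lines above, all_fields cannot be combined with default_field or fields.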
queryStringQuery.maxDeterminizedStates(maxDeterminizedStates); - queryStringQuery.lowercaseExpandedTerms(lowercaseExpandedTerms); queryStringQuery.enablePositionIncrements(enablePositionIncrements); queryStringQuery.escape(escape); queryStringQuery.useDisMax(useDisMax); @@ -788,9 +856,10 @@ public static Optional fromXContent(QueryParseContext p queryStringQuery.quoteFieldSuffix(quoteFieldSuffix); queryStringQuery.lenient(lenient); queryStringQuery.timeZone(timeZone); - queryStringQuery.locale(locale); queryStringQuery.boost(boost); queryStringQuery.queryName(queryName); + queryStringQuery.splitOnWhitespace(splitOnWhitespace); + queryStringQuery.useAllFields(useAllFields); return Optional.of(queryStringQuery); } @@ -810,10 +879,8 @@ protected boolean doEquals(QueryStringQueryBuilder other) { Objects.equals(quoteFieldSuffix, other.quoteFieldSuffix) && Objects.equals(autoGeneratePhraseQueries, other.autoGeneratePhraseQueries) && Objects.equals(allowLeadingWildcard, other.allowLeadingWildcard) && - Objects.equals(lowercaseExpandedTerms, other.lowercaseExpandedTerms) && Objects.equals(enablePositionIncrements, other.enablePositionIncrements) && Objects.equals(analyzeWildcard, other.analyzeWildcard) && - Objects.equals(locale.toLanguageTag(), other.locale.toLanguageTag()) && Objects.equals(fuzziness, other.fuzziness) && Objects.equals(fuzzyPrefixLength, other.fuzzyPrefixLength) && Objects.equals(fuzzyMaxExpansions, other.fuzzyMaxExpansions) && @@ -827,55 +894,119 @@ protected boolean doEquals(QueryStringQueryBuilder other) { timeZone == null ? other.timeZone == null : other.timeZone != null && Objects.equals(timeZone.getID(), other.timeZone.getID()) && Objects.equals(escape, other.escape) && - Objects.equals(maxDeterminizedStates, other.maxDeterminizedStates); + Objects.equals(maxDeterminizedStates, other.maxDeterminizedStates) && + Objects.equals(splitOnWhitespace, other.splitOnWhitespace) && + Objects.equals(useAllFields, other.useAllFields); } @Override protected int doHashCode() { return Objects.hash(queryString, defaultField, fieldsAndWeights, defaultOperator, analyzer, quoteAnalyzer, - quoteFieldSuffix, autoGeneratePhraseQueries, allowLeadingWildcard, lowercaseExpandedTerms, - enablePositionIncrements, analyzeWildcard, locale.toLanguageTag(), fuzziness, fuzzyPrefixLength, + quoteFieldSuffix, autoGeneratePhraseQueries, allowLeadingWildcard, analyzeWildcard, + enablePositionIncrements, fuzziness, fuzzyPrefixLength, fuzzyMaxExpansions, fuzzyRewrite, phraseSlop, useDisMax, tieBreaker, rewrite, minimumShouldMatch, lenient, - timeZone == null ? 0 : timeZone.getID(), escape, maxDeterminizedStates); + timeZone == null ? 0 : timeZone.getID(), escape, maxDeterminizedStates, splitOnWhitespace, useAllFields); + } + + /** + * Given a shard context, return a map of all fields in the mappings that + * can be queried. The map will be field name to a float of 1.0f. + */ + public static Map allQueryableDefaultFields(QueryShardContext context) { + Collection allFields = context.simpleMatchToIndexNames("*"); + Map fields = new HashMap<>(); + for (String fieldName : allFields) { + if (MapperService.isMetadataField(fieldName)) { + // Ignore our metadata fields + continue; + } + MappedFieldType mft = context.fieldMapper(fieldName); + assert mft != null : "should never have a null mapper for an existing field"; + + // Ignore fields that are not in the allowed mapper types. Some + // types do not support term queries, and thus we cannot generate + // a special query for them. 
+ String mappingType = mft.typeName(); + if (ALLOWED_QUERY_MAPPER_TYPES.contains(mappingType)) { + fields.put(fieldName, 1.0f); + } + } + return fields; } @Override protected Query doToQuery(QueryShardContext context) throws IOException { //TODO would be nice to have all the settings in one place: some change though at query execution time //e.g. field names get expanded to concrete names, defaults get resolved sometimes to settings values etc. + if (splitOnWhitespace == false && autoGeneratePhraseQueries) { + throw new IllegalArgumentException("it is disallowed to disable [split_on_whitespace] " + + "if [auto_generate_phrase_queries] is activated"); + } QueryParserSettings qpSettings; if (this.escape) { qpSettings = new QueryParserSettings(org.apache.lucene.queryparser.classic.QueryParser.escape(this.queryString)); } else { qpSettings = new QueryParserSettings(this.queryString); } - qpSettings.defaultField(this.defaultField == null ? context.defaultField() : this.defaultField); + Map resolvedFields = new TreeMap<>(); - for (Map.Entry fieldsEntry : fieldsAndWeights.entrySet()) { - String fieldName = fieldsEntry.getKey(); - Float weight = fieldsEntry.getValue(); - if (Regex.isSimpleMatchPattern(fieldName)) { - for (String resolvedFieldName : context.getMapperService().simpleMatchToIndexNames(fieldName)) { - resolvedFields.put(resolvedFieldName, weight); + + if ((useAllFields != null && useAllFields) && (fieldsAndWeights.size() != 0 || this.defaultField != null)) { + throw addValidationError("cannot use [all_fields] parameter in conjunction with [default_field] or [fields]", null); + } + + // If explicitly required to use all fields, use all fields, OR: + // Automatically determine the fields (to replace the _all field) if all of the following are true: + // - The _all field is disabled, + // - and the default_field has not been changed in the settings + // - and default_field is not specified in the request + // - and no fields are specified in the request + if ((this.useAllFields != null && this.useAllFields) || + (context.getMapperService().allEnabled() == false && + "_all".equals(context.defaultField()) && + this.defaultField == null && + this.fieldsAndWeights.size() == 0)) { + // Use the automatically determined expansion of all queryable fields + resolvedFields = allQueryableDefaultFields(context); + // Automatically set leniency to "true" so mismatched fields don't cause exceptions + qpSettings.lenient(true); + if ("*".equals(queryString)) { + return new MatchAllDocsQuery(); + } + } else { + qpSettings.defaultField(this.defaultField == null ? context.defaultField() : this.defaultField); + + for (Map.Entry fieldsEntry : fieldsAndWeights.entrySet()) { + String fieldName = fieldsEntry.getKey(); + Float weight = fieldsEntry.getValue(); + if (Regex.isSimpleMatchPattern(fieldName)) { + for (String resolvedFieldName : context.getMapperService().simpleMatchToIndexNames(fieldName)) { + resolvedFields.put(resolvedFieldName, weight); + } + } else { + resolvedFields.put(fieldName, weight); } - } else { - resolvedFields.put(fieldName, weight); } + qpSettings.lenient(lenient == null ? context.queryStringLenient() : lenient); + } + if (fieldsAndWeights.isEmpty() == false || resolvedFields.isEmpty() == false) { + // We set the fields and weight only if we have explicit fields to query + // Otherwise we set it to null and fallback to the default field. 
+ qpSettings.fieldsAndWeights(resolvedFields); } - qpSettings.fieldsAndWeights(resolvedFields); qpSettings.defaultOperator(defaultOperator.toQueryParserOperator()); if (analyzer == null) { qpSettings.defaultAnalyzer(context.getMapperService().searchAnalyzer()); } else { - NamedAnalyzer namedAnalyzer = context.getAnalysisService().analyzer(analyzer); + NamedAnalyzer namedAnalyzer = context.getIndexAnalyzers().get(analyzer); if (namedAnalyzer == null) { throw new QueryShardException(context, "[query_string] analyzer [" + analyzer + "] not found"); } qpSettings.forceAnalyzer(namedAnalyzer); } if (quoteAnalyzer != null) { - NamedAnalyzer namedAnalyzer = context.getAnalysisService().analyzer(quoteAnalyzer); + NamedAnalyzer namedAnalyzer = context.getIndexAnalyzers().get(quoteAnalyzer); if (namedAnalyzer == null) { throw new QueryShardException(context, "[query_string] quote_analyzer [" + quoteAnalyzer + "] not found"); } @@ -890,20 +1021,18 @@ protected Query doToQuery(QueryShardContext context) throws IOException { qpSettings.autoGeneratePhraseQueries(autoGeneratePhraseQueries); qpSettings.allowLeadingWildcard(allowLeadingWildcard == null ? context.queryStringAllowLeadingWildcard() : allowLeadingWildcard); qpSettings.analyzeWildcard(analyzeWildcard == null ? context.queryStringAnalyzeWildcard() : analyzeWildcard); - qpSettings.lowercaseExpandedTerms(lowercaseExpandedTerms); qpSettings.enablePositionIncrements(enablePositionIncrements); - qpSettings.locale(locale); qpSettings.fuzziness(fuzziness); qpSettings.fuzzyPrefixLength(fuzzyPrefixLength); qpSettings.fuzzyMaxExpansions(fuzzyMaxExpansions); - qpSettings.fuzzyRewriteMethod(QueryParsers.parseRewriteMethod(context.getParseFieldMatcher(), this.fuzzyRewrite)); + qpSettings.fuzzyRewriteMethod(QueryParsers.parseRewriteMethod(this.fuzzyRewrite)); qpSettings.phraseSlop(phraseSlop); qpSettings.useDisMax(useDisMax); qpSettings.tieBreaker(tieBreaker); - qpSettings.rewriteMethod(QueryParsers.parseRewriteMethod(context.getParseFieldMatcher(), this.rewrite)); - qpSettings.lenient(lenient == null ? context.queryStringLenient() : lenient); + qpSettings.rewriteMethod(QueryParsers.parseRewriteMethod(this.rewrite)); qpSettings.timeZone(timeZone); qpSettings.maxDeterminizedStates(maxDeterminizedStates); + qpSettings.splitOnWhitespace(splitOnWhitespace); MapperQueryParser queryParser = context.queryParser(qpSettings); Query query; @@ -926,12 +1055,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { } query = Queries.fixNegativeQueryIfNeeded(query); - // If the coordination factor is disabled on a boolean query we don't apply the minimum should match. - // This is done to make sure that the minimum_should_match doesn't get applied when there is only one word - // and multiple variations of the same word in the query (synonyms for instance). 
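// A minimal sketch of the minimum_should_match change just below, assuming a hypothetical parsed
// query q for the single term "fast" that synonym expansion turned into two should clauses:
//
//     q = Queries.maybeApplyMinimumShouldMatch(q, "2");
//
// The helper, rather than this builder, now appears to decide whether minimum_should_match may be
// applied, so a minimum_should_match of 2 is not blindly forced onto what is really one word.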
- if (query instanceof BooleanQuery && !((BooleanQuery) query).isCoordDisabled()) { - query = Queries.applyMinimumShouldMatch((BooleanQuery) query, this.minimumShouldMatch()); - } + query = Queries.maybeApplyMinimumShouldMatch(query, this.minimumShouldMatch); //restore the previous BoostQuery wrapping for (int i = boosts.size() - 1; i >= 0; i--) { @@ -940,4 +1064,5 @@ protected Query doToQuery(QueryShardContext context) throws IOException { return query; } + } diff --git a/core/src/main/java/org/elasticsearch/index/query/RangeQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/RangeQueryBuilder.java index d19441e8cf7bf..23d37a2743520 100644 --- a/core/src/main/java/org/elasticsearch/index/query/RangeQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/RangeQueryBuilder.java @@ -23,9 +23,11 @@ import org.apache.lucene.search.Query; import org.apache.lucene.search.TermRangeQuery; import org.apache.lucene.util.BytesRef; +import org.elasticsearch.Version; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.Strings; +import org.elasticsearch.common.geo.ShapeRelation; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.joda.DateMathParser; @@ -38,6 +40,7 @@ import org.elasticsearch.index.mapper.LegacyDateFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.mapper.RangeFieldMapper; import org.joda.time.DateTimeZone; import java.io.IOException; @@ -55,17 +58,18 @@ public class RangeQueryBuilder extends AbstractQueryBuilder i private static final ParseField FIELDDATA_FIELD = new ParseField("fielddata").withAllDeprecated("[no replacement]"); private static final ParseField NAME_FIELD = new ParseField("_name") - .withAllDeprecated("query name is not supported in short version of range query"); - private static final ParseField LTE_FIELD = new ParseField("lte", "le"); - private static final ParseField GTE_FIELD = new ParseField("gte", "ge"); - private static final ParseField FROM_FIELD = new ParseField("from"); - private static final ParseField TO_FIELD = new ParseField("to"); + .withAllDeprecated("query name is not supported in short version of range query"); + public static final ParseField LTE_FIELD = new ParseField("lte", "le"); + public static final ParseField GTE_FIELD = new ParseField("gte", "ge"); + public static final ParseField FROM_FIELD = new ParseField("from"); + public static final ParseField TO_FIELD = new ParseField("to"); private static final ParseField INCLUDE_LOWER_FIELD = new ParseField("include_lower"); private static final ParseField INCLUDE_UPPER_FIELD = new ParseField("include_upper"); - private static final ParseField GT_FIELD = new ParseField("gt"); - private static final ParseField LT_FIELD = new ParseField("lt"); + public static final ParseField GT_FIELD = new ParseField("gt"); + public static final ParseField LT_FIELD = new ParseField("lt"); private static final ParseField TIME_ZONE_FIELD = new ParseField("time_zone"); private static final ParseField FORMAT_FIELD = new ParseField("format"); + private static final ParseField RELATION_FIELD = new ParseField("relation"); private final String fieldName; @@ -81,6 +85,8 @@ public class RangeQueryBuilder extends AbstractQueryBuilder i private FormatDateTimeFormatter format; + private ShapeRelation relation; + /** * A Query that 
matches documents within an range of terms. * @@ -108,6 +114,12 @@ public RangeQueryBuilder(StreamInput in) throws IOException { if (formatString != null) { format = Joda.forPattern(formatString); } + if (in.getVersion().onOrAfter(Version.V_5_2_0)) { + String relationString = in.readOptionalString(); + if (relationString != null) { + relation = ShapeRelation.getRelationByName(relationString); + } + } } @Override @@ -123,6 +135,13 @@ protected void doWriteTo(StreamOutput out) throws IOException { formatString = this.format.format(); } out.writeOptionalString(formatString); + if (out.getVersion().onOrAfter(Version.V_5_2_0)) { + String relationString = null; + if (this.relation != null) { + relationString = this.relation.getRelationName(); + } + out.writeOptionalString(relationString); + } } /** @@ -260,6 +279,10 @@ public String timeZone() { return this.timeZone == null ? null : this.timeZone.getID(); } + DateTimeZone getDateTimeZone() { // for testing + return timeZone; + } + /** * In case of format field, we can parse the from/to fields using this time format */ @@ -278,6 +301,28 @@ public String format() { return this.format == null ? null : this.format.format(); } + DateMathParser getForceDateParser() { // pkg private for testing + if (this.format != null) { + return new DateMathParser(this.format); + } + return null; + } + + public ShapeRelation relation() { + return this.relation; + } + + public RangeQueryBuilder relation(String relation) { + if (relation == null) { + throw new IllegalArgumentException("relation cannot be null"); + } + this.relation = ShapeRelation.getRelationByName(relation); + if (this.relation == null) { + throw new IllegalArgumentException(relation + " is not a valid relation"); + } + return this; + } + @Override protected void doXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(NAME); @@ -292,6 +337,9 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep if (format != null) { builder.field(FORMAT_FIELD.getPreferredName(), format.format()); } + if (relation != null) { + builder.field(RELATION_FIELD.getPreferredName(), relation.getRelationName()); + } printBoostAndQueryName(builder); builder.endObject(); builder.endObject(); @@ -309,6 +357,7 @@ public static Optional fromXContent(QueryParseContext parseCo float boost = AbstractQueryBuilder.DEFAULT_BOOST; String queryName = null; String format = null; + String relation = null; String currentFieldName = null; XContentParser.Token token; @@ -324,33 +373,35 @@ public static Optional fromXContent(QueryParseContext parseCo if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else { - if (parseContext.getParseFieldMatcher().match(currentFieldName, FROM_FIELD)) { + if (FROM_FIELD.match(currentFieldName)) { from = parser.objectBytes(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, TO_FIELD)) { + } else if (TO_FIELD.match(currentFieldName)) { to = parser.objectBytes(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, INCLUDE_LOWER_FIELD)) { + } else if (INCLUDE_LOWER_FIELD.match(currentFieldName)) { includeLower = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, INCLUDE_UPPER_FIELD)) { + } else if (INCLUDE_UPPER_FIELD.match(currentFieldName)) { includeUpper = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if 
(AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, GT_FIELD)) { + } else if (GT_FIELD.match(currentFieldName)) { from = parser.objectBytes(); includeLower = false; - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, GTE_FIELD)) { + } else if (GTE_FIELD.match(currentFieldName)) { from = parser.objectBytes(); includeLower = true; - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, LT_FIELD)) { + } else if (LT_FIELD.match(currentFieldName)) { to = parser.objectBytes(); includeUpper = false; - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, LTE_FIELD)) { + } else if (LTE_FIELD.match(currentFieldName)) { to = parser.objectBytes(); includeUpper = true; - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, TIME_ZONE_FIELD)) { + } else if (TIME_ZONE_FIELD.match(currentFieldName)) { timeZone = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, FORMAT_FIELD)) { + } else if (FORMAT_FIELD.match(currentFieldName)) { format = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (RELATION_FIELD.match(currentFieldName)) { + relation = parser.text(); + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), @@ -359,9 +410,9 @@ public static Optional fromXContent(QueryParseContext parseCo } } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, NAME_FIELD)) { + if (NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, FIELDDATA_FIELD)) { + } else if (FIELDDATA_FIELD.match(currentFieldName)) { // ignore } else { throw new ParsingException(parser.getTokenLocation(), "[range] query does not support [" + currentFieldName + "]"); @@ -382,6 +433,9 @@ public static Optional fromXContent(QueryParseContext parseCo if (format != null) { rangeQuery.format(format); } + if (relation != null) { + rangeQuery.relation(relation); + } return Optional.of(rangeQuery); } @@ -406,7 +460,7 @@ protected MappedFieldType.Relation getRelation(QueryRewriteContext queryRewriteC } else { DateMathParser dateMathParser = format == null ? 
null : new DateMathParser(format); return fieldType.isFieldWithinQuery(queryRewriteContext.getIndexReader(), from, to, includeLower, - includeUpper, timeZone, dateMathParser); + includeUpper, timeZone, dateMathParser, queryRewriteContext); } } @@ -417,12 +471,12 @@ protected QueryBuilder doRewrite(QueryRewriteContext queryRewriteContext) throws case DISJOINT: return new MatchNoneQueryBuilder(); case WITHIN: - if (from != null || to != null) { + if (from != null || to != null || format != null || timeZone != null) { RangeQueryBuilder newRangeQuery = new RangeQueryBuilder(fieldName); newRangeQuery.from(null); newRangeQuery.to(null); - newRangeQuery.format = format; - newRangeQuery.timeZone = timeZone; + newRangeQuery.format = null; + newRangeQuery.timeZone = null; return newRangeQuery; } else { return this; @@ -440,26 +494,27 @@ protected Query doToQuery(QueryShardContext context) throws IOException { MappedFieldType mapper = context.fieldMapper(this.fieldName); if (mapper != null) { if (mapper instanceof LegacyDateFieldMapper.DateFieldType) { - DateMathParser forcedDateParser = null; - if (this.format != null) { - forcedDateParser = new DateMathParser(this.format); - } + query = ((LegacyDateFieldMapper.DateFieldType) mapper).rangeQuery(from, to, includeLower, includeUpper, - timeZone, forcedDateParser); + timeZone, getForceDateParser(), context); } else if (mapper instanceof DateFieldMapper.DateFieldType) { + + query = ((DateFieldMapper.DateFieldType) mapper).rangeQuery(from, to, includeLower, includeUpper, + timeZone, getForceDateParser(), context); + } else if (mapper instanceof RangeFieldMapper.RangeFieldType) { DateMathParser forcedDateParser = null; - if (this.format != null) { + if (mapper.typeName() == RangeFieldMapper.RangeType.DATE.name && this.format != null) { forcedDateParser = new DateMathParser(this.format); } - query = ((DateFieldMapper.DateFieldType) mapper).rangeQuery(from, to, includeLower, includeUpper, - timeZone, forcedDateParser); - } else { + query = ((RangeFieldMapper.RangeFieldType) mapper).rangeQuery(from, to, includeLower, includeUpper, + relation, timeZone, forcedDateParser, context); + } else { if (timeZone != null) { throw new QueryShardException(context, "[range] time_zone can not be applied to non date field [" + fieldName + "]"); } //LUCENE 4 UPGRADE Mapper#rangeQuery should use bytesref as well? 
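// A minimal usage sketch for the relation option added above, assuming a hypothetical field "period"
// mapped as a range type (setter names as introduced in this diff, exact signatures may differ):
//
//     RangeQueryBuilder range = QueryBuilders.rangeQuery("period")
//             .gte("2017-01-01")
//             .lte("2017-12-31")
//             .format("yyyy-MM-dd")
//             .relation("within");   // validated via ShapeRelation.getRelationByName
//
// As the dispatch above shows, the relation is only passed on for RangeFieldMapper.RangeFieldType
// fields, and doWriteTo only serializes it to nodes on version 5.2.0 or later.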
- query = mapper.rangeQuery(from, to, includeLower, includeUpper); + query = mapper.rangeQuery(from, to, includeLower, includeUpper, context); } } else { if (timeZone != null) { diff --git a/core/src/main/java/org/elasticsearch/index/query/RegexpFlag.java b/core/src/main/java/org/elasticsearch/index/query/RegexpFlag.java index c0fc8b80928e6..e00c19b68b587 100644 --- a/core/src/main/java/org/elasticsearch/index/query/RegexpFlag.java +++ b/core/src/main/java/org/elasticsearch/index/query/RegexpFlag.java @@ -78,7 +78,7 @@ public enum RegexpFlag { final int value; - private RegexpFlag(int value) { + RegexpFlag(int value) { this.value = value; } diff --git a/core/src/main/java/org/elasticsearch/index/query/RegexpQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/RegexpQueryBuilder.java index d8481d1529c8f..df2f63665b3fa 100644 --- a/core/src/main/java/org/elasticsearch/index/query/RegexpQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/RegexpQueryBuilder.java @@ -201,20 +201,20 @@ public static Optional fromXContent(QueryParseContext parseC if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else { - if (parseContext.getParseFieldMatcher().match(currentFieldName, VALUE_FIELD)) { + if (VALUE_FIELD.match(currentFieldName)) { value = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, REWRITE_FIELD)) { + } else if (REWRITE_FIELD.match(currentFieldName)) { rewrite = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, FLAGS_FIELD)) { + } else if (FLAGS_FIELD.match(currentFieldName)) { String flags = parser.textOrNull(); flagsValue = RegexpFlag.resolveValue(flags); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MAX_DETERMINIZED_STATES_FIELD)) { + } else if (MAX_DETERMINIZED_STATES_FIELD.match(currentFieldName)) { maxDeterminizedStates = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, FLAGS_VALUE_FIELD)) { + } else if (FLAGS_VALUE_FIELD.match(currentFieldName)) { flagsValue = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), @@ -223,7 +223,7 @@ public static Optional fromXContent(QueryParseContext parseC } } } else { - if (parseContext.getParseFieldMatcher().match(currentFieldName, NAME_FIELD)) { + if (NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throwParsingExceptionOnMultipleFields(NAME, parser.getTokenLocation(), fieldName, parser.currentName()); @@ -248,7 +248,7 @@ public String getWriteableName() { @Override protected Query doToQuery(QueryShardContext context) throws QueryShardException, IOException { - MultiTermQuery.RewriteMethod method = QueryParsers.parseRewriteMethod(context.getParseFieldMatcher(), rewrite, null); + MultiTermQuery.RewriteMethod method = QueryParsers.parseRewriteMethod(rewrite, null); Query query = null; MappedFieldType fieldType = context.fieldMapper(fieldName); diff --git a/core/src/main/java/org/elasticsearch/index/query/ScriptQueryBuilder.java 
b/core/src/main/java/org/elasticsearch/index/query/ScriptQueryBuilder.java index 3ff924b28db11..652d0d05a81b0 100644 --- a/core/src/main/java/org/elasticsearch/index/query/ScriptQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/ScriptQueryBuilder.java @@ -33,15 +33,10 @@ import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.script.LeafSearchScript; import org.elasticsearch.script.Script; -import org.elasticsearch.script.Script.ScriptField; import org.elasticsearch.script.ScriptContext; -import org.elasticsearch.script.ScriptService; import org.elasticsearch.script.SearchScript; -import org.elasticsearch.search.lookup.SearchLookup; import java.io.IOException; -import java.util.Collections; -import java.util.Map; import java.util.Objects; import java.util.Optional; @@ -84,7 +79,7 @@ public String getWriteableName() { @Override protected void doXContent(XContentBuilder builder, Params builderParams) throws IOException { builder.startObject(NAME); - builder.field(ScriptField.SCRIPT.getPreferredName(), script); + builder.field(Script.SCRIPT_PARSE_FIELD.getPreferredName(), script); printBoostAndQueryName(builder); builder.endObject(); } @@ -105,18 +100,18 @@ public static Optional fromXContent(QueryParseContext parseC } else if (parseContext.isDeprecatedSetting(currentFieldName)) { // skip } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, ScriptField.SCRIPT)) { - script = Script.parse(parser, parseContext.getParseFieldMatcher(), parseContext.getDefaultScriptLanguage()); + if (Script.SCRIPT_PARSE_FIELD.match(currentFieldName)) { + script = Script.parse(parser, parseContext.getDefaultScriptLanguage()); } else { throw new ParsingException(parser.getTokenLocation(), "[script] query does not support [" + currentFieldName + "]"); } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, ScriptField.SCRIPT)) { - script = Script.parse(parser, parseContext.getParseFieldMatcher(), parseContext.getDefaultScriptLanguage()); + } else if (Script.SCRIPT_PARSE_FIELD.match(currentFieldName)) { + script = Script.parse(parser, parseContext.getDefaultScriptLanguage()); } else { throw new ParsingException(parser.getTokenLocation(), "[script] query does not support [" + currentFieldName + "]"); } @@ -134,18 +129,17 @@ public static Optional fromXContent(QueryParseContext parseC @Override protected Query doToQuery(QueryShardContext context) throws IOException { - return new ScriptQuery(script, context.getScriptService(), context.lookup()); + return new ScriptQuery(script, context.getSearchScript(script, ScriptContext.Standard.SEARCH)); } static class ScriptQuery extends Query { - private final Script script; + final Script script; + final SearchScript searchScript; - private final SearchScript searchScript; - - public ScriptQuery(Script script, ScriptService scriptService, SearchLookup searchLookup) { + ScriptQuery(Script script, SearchScript searchScript) { this.script = script; - this.searchScript = scriptService.search(searchLookup, script, 
ScriptContext.Standard.SEARCH, Collections.emptyMap()); + this.searchScript = searchScript; } @Override @@ -159,17 +153,23 @@ public String toString(String field) { @Override public boolean equals(Object obj) { - if (this == obj) + // TODO: Do this if/when we can assume scripts are pure functions + // and they have a reliable equals impl + /*if (this == obj) return true; if (sameClassAs(obj) == false) return false; ScriptQuery other = (ScriptQuery) obj; - return Objects.equals(script, other.script); + return Objects.equals(script, other.script);*/ + return this == obj; } @Override public int hashCode() { - return Objects.hash(classHash(), script); + // TODO: Do this if/when we can assume scripts are pure functions + // and they have a reliable equals impl + // return Objects.hash(classHash(), script); + return System.identityHashCode(this); } @Override @@ -216,4 +216,6 @@ protected int doHashCode() { protected boolean doEquals(ScriptQueryBuilder other) { return Objects.equals(script, other.script); } + + } diff --git a/core/src/main/java/org/elasticsearch/index/query/SimpleQueryParser.java b/core/src/main/java/org/elasticsearch/index/query/SimpleQueryParser.java index 151e924ad163a..a147e496045e2 100644 --- a/core/src/main/java/org/elasticsearch/index/query/SimpleQueryParser.java +++ b/core/src/main/java/org/elasticsearch/index/query/SimpleQueryParser.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.query; import org.apache.lucene.analysis.Analyzer; +import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.tokenattributes.CharTermAttribute; import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute; @@ -30,10 +31,11 @@ import org.apache.lucene.search.PrefixQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.SynonymQuery; +import org.apache.lucene.util.BytesRef; +import org.elasticsearch.index.analysis.ShingleTokenFilterFactory; import org.elasticsearch.index.mapper.MappedFieldType; import java.io.IOException; -import java.util.Locale; import java.util.Map; import java.util.Objects; import java.util.List; @@ -98,14 +100,13 @@ public Query newDefaultQuery(String text) { */ @Override public Query newFuzzyQuery(String text, int fuzziness) { - if (settings.lowercaseExpandedTerms()) { - text = text.toLowerCase(settings.locale()); - } BooleanQuery.Builder bq = new BooleanQuery.Builder(); bq.setDisableCoord(true); for (Map.Entry entry : weights.entrySet()) { + final String fieldName = entry.getKey(); try { - Query query = new FuzzyQuery(new Term(entry.getKey(), text), fuzziness); + final BytesRef term = getAnalyzer().normalize(fieldName, text); + Query query = new FuzzyQuery(new Term(fieldName, term), fuzziness); bq.add(wrapWithBoost(query, entry.getValue()), BooleanClause.Occur.SHOULD); } catch (RuntimeException e) { rethrowUnlessLenient(e); @@ -120,9 +121,18 @@ public Query newPhraseQuery(String text, int slop) { bq.setDisableCoord(true); for (Map.Entry entry : weights.entrySet()) { try { - Query q = createPhraseQuery(entry.getKey(), text, slop); + String field = entry.getKey(); + if (settings.quoteFieldSuffix() != null) { + String quoteField = field + settings.quoteFieldSuffix(); + MappedFieldType quotedFieldType = context.fieldMapper(quoteField); + if (quotedFieldType != null) { + field = quoteField; + } + } + Float boost = entry.getValue(); + Query q = createPhraseQuery(field, text, slop); if (q != null) { - bq.add(wrapWithBoost(q, entry.getValue()), 
BooleanClause.Occur.SHOULD); + bq.add(wrapWithBoost(q, boost), BooleanClause.Occur.SHOULD); } } catch (RuntimeException e) { rethrowUnlessLenient(e); @@ -137,20 +147,19 @@ public Query newPhraseQuery(String text, int slop) { */ @Override public Query newPrefixQuery(String text) { - if (settings.lowercaseExpandedTerms()) { - text = text.toLowerCase(settings.locale()); - } BooleanQuery.Builder bq = new BooleanQuery.Builder(); bq.setDisableCoord(true); for (Map.Entry entry : weights.entrySet()) { + final String fieldName = entry.getKey(); try { if (settings.analyzeWildcard()) { - Query analyzedQuery = newPossiblyAnalyzedQuery(entry.getKey(), text); + Query analyzedQuery = newPossiblyAnalyzedQuery(fieldName, text); if (analyzedQuery != null) { bq.add(wrapWithBoost(analyzedQuery, entry.getValue()), BooleanClause.Occur.SHOULD); } } else { - Query query = new PrefixQuery(new Term(entry.getKey(), text)); + Term term = new Term(fieldName, getAnalyzer().normalize(fieldName, text)); + Query query = new PrefixQuery(term); bq.add(wrapWithBoost(query, entry.getValue()), BooleanClause.Occur.SHOULD); } } catch (RuntimeException e) { @@ -160,6 +169,32 @@ public Query newPrefixQuery(String text) { return super.simplify(bq.build()); } + /** + * Checks if graph analysis should be enabled for the field depending + * on the provided {@link Analyzer} + */ + protected Query createFieldQuery(Analyzer analyzer, BooleanClause.Occur operator, String field, + String queryText, boolean quoted, int phraseSlop) { + assert operator == BooleanClause.Occur.SHOULD || operator == BooleanClause.Occur.MUST; + + // Use the analyzer to get all the tokens, and then build an appropriate + // query based on the analysis chain. + try (TokenStream source = analyzer.tokenStream(field, queryText)) { + if (source.hasAttribute(DisableGraphAttribute.class)) { + /** + * A {@link TokenFilter} in this {@link TokenStream} disabled the graph analysis to avoid + * paths explosion. See {@link ShingleTokenFilterFactory} for details. 
+ */ + setEnableGraphQueries(false); + } + Query query = super.createFieldQuery(source, operator, field, quoted, phraseSlop); + setEnableGraphQueries(true); + return query; + } catch (IOException e) { + throw new RuntimeException("Error analyzing query text", e); + } + } + private static Query wrapWithBoost(Query query, float boost) { if (boost != AbstractQueryBuilder.DEFAULT_BOOST) { return new BoostQuery(query, boost); @@ -173,11 +208,11 @@ private static Query wrapWithBoost(Query query, float boost) { * of {@code TermQuery}s and {@code PrefixQuery}s */ private Query newPossiblyAnalyzedQuery(String field, String termStr) { - List> tlist = new ArrayList<> (); + List> tlist = new ArrayList<> (); // get Analyzer from superclass and tokenize the term try (TokenStream source = getAnalyzer().tokenStream(field, termStr)) { source.reset(); - List currentPos = new ArrayList<>(); + List currentPos = new ArrayList<>(); CharTermAttribute termAtt = source.addAttribute(CharTermAttribute.class); PositionIncrementAttribute posAtt = source.addAttribute(PositionIncrementAttribute.class); @@ -188,7 +223,8 @@ private Query newPossiblyAnalyzedQuery(String field, String termStr) { tlist.add(currentPos); currentPos = new ArrayList<>(); } - currentPos.add(termAtt.toString()); + final BytesRef term = getAnalyzer().normalize(field, termAtt.toString()); + currentPos.add(term); hasMoreTokens = source.incrementToken(); } if (currentPos.isEmpty() == false) { @@ -214,7 +250,7 @@ private Query newPossiblyAnalyzedQuery(String field, String termStr) { // build a boolean query with prefix on the last position only. BooleanQuery.Builder builder = new BooleanQuery.Builder(); for (int pos = 0; pos < tlist.size(); pos++) { - List plist = tlist.get(pos); + List plist = tlist.get(pos); boolean isLastPos = (pos == tlist.size()-1); Query posQuery; if (plist.size() == 1) { @@ -232,7 +268,7 @@ private Query newPossiblyAnalyzedQuery(String field, String termStr) { posQuery = new SynonymQuery(terms); } else { BooleanQuery.Builder innerBuilder = new BooleanQuery.Builder(); - for (String token : plist) { + for (BytesRef token : plist) { innerBuilder.add(new BooleanClause(new PrefixQuery(new Term(field, token)), BooleanClause.Occur.SHOULD)); } @@ -248,50 +284,24 @@ private Query newPossiblyAnalyzedQuery(String field, String termStr) { * their default values */ static class Settings { - /** Locale to use for parsing. */ - private Locale locale = SimpleQueryStringBuilder.DEFAULT_LOCALE; - /** Specifies whether parsed terms should be lowercased. */ - private boolean lowercaseExpandedTerms = SimpleQueryStringBuilder.DEFAULT_LOWERCASE_EXPANDED_TERMS; /** Specifies whether lenient query parsing should be used. */ private boolean lenient = SimpleQueryStringBuilder.DEFAULT_LENIENT; /** Specifies whether wildcards should be analyzed. */ private boolean analyzeWildcard = SimpleQueryStringBuilder.DEFAULT_ANALYZE_WILDCARD; + /** Specifies a suffix, if any, to apply to field names for phrase matching. */ + private String quoteFieldSuffix = null; /** * Generates default {@link Settings} object (uses ROOT locale, does * lowercase terms, no lenient parsing, no wildcard analysis). * */ - public Settings() { - } - - public Settings(Locale locale, Boolean lowercaseExpandedTerms, Boolean lenient, Boolean analyzeWildcard) { - this.locale = locale; - this.lowercaseExpandedTerms = lowercaseExpandedTerms; - this.lenient = lenient; - this.analyzeWildcard = analyzeWildcard; + Settings() { } - /** Specifies the locale to use for parsing, Locale.ROOT by default. 
*/ - public void locale(Locale locale) { - this.locale = (locale != null) ? locale : SimpleQueryStringBuilder.DEFAULT_LOCALE; - } - - /** Returns the locale to use for parsing. */ - public Locale locale() { - return this.locale; - } - - /** - * Specifies whether to lowercase parse terms, defaults to true if - * unset. - */ - public void lowercaseExpandedTerms(boolean lowercaseExpandedTerms) { - this.lowercaseExpandedTerms = lowercaseExpandedTerms; - } - - /** Returns whether to lowercase parse terms. */ - public boolean lowercaseExpandedTerms() { - return this.lowercaseExpandedTerms; + Settings(Settings other) { + this.lenient = other.lenient; + this.analyzeWildcard = other.analyzeWildcard; + this.quoteFieldSuffix = other.quoteFieldSuffix; } /** Specifies whether to use lenient parsing, defaults to false. */ @@ -314,12 +324,24 @@ public boolean analyzeWildcard() { return analyzeWildcard; } + /** + * Set the suffix to append to field names for phrase matching. + */ + public void quoteFieldSuffix(String suffix) { + this.quoteFieldSuffix = suffix; + } + + /** + * Return the suffix to append for phrase matching, or {@code null} if + * no suffix should be appended. + */ + public String quoteFieldSuffix() { + return quoteFieldSuffix; + } + @Override public int hashCode() { - // checking the return value of toLanguageTag() for locales only. - // For further reasoning see - // https://issues.apache.org/jira/browse/LUCENE-4021 - return Objects.hash(locale.toLanguageTag(), lowercaseExpandedTerms, lenient, analyzeWildcard); + return Objects.hash(lenient, analyzeWildcard, quoteFieldSuffix); } @Override @@ -331,14 +353,8 @@ public boolean equals(Object obj) { return false; } Settings other = (Settings) obj; - - // checking the return value of toLanguageTag() for locales only. - // For further reasoning see - // https://issues.apache.org/jira/browse/LUCENE-4021 - return (Objects.equals(locale.toLanguageTag(), other.locale.toLanguageTag()) - && Objects.equals(lowercaseExpandedTerms, other.lowercaseExpandedTerms) - && Objects.equals(lenient, other.lenient) - && Objects.equals(analyzeWildcard, other.analyzeWildcard)); + return Objects.equals(lenient, other.lenient) && Objects.equals(analyzeWildcard, other.analyzeWildcard) + && Objects.equals(quoteFieldSuffix, other.quoteFieldSuffix); } } } diff --git a/core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringBuilder.java b/core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringBuilder.java index f408c0f147317..52cf7ee971eeb 100644 --- a/core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringBuilder.java @@ -20,8 +20,8 @@ package org.elasticsearch.index.query; import org.apache.lucene.analysis.Analyzer; -import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.Query; +import org.elasticsearch.Version; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.Strings; @@ -78,10 +78,7 @@ * > online documentation. 
*/ public class SimpleQueryStringBuilder extends AbstractQueryBuilder { - /** Default locale used for parsing.*/ - public static final Locale DEFAULT_LOCALE = Locale.ROOT; - /** Default for lowercasing parsed terms.*/ - public static final boolean DEFAULT_LOWERCASE_EXPANDED_TERMS = true; + /** Default for using lenient query parsing.*/ public static final boolean DEFAULT_LENIENT = false; /** Default for wildcard analysis.*/ @@ -97,13 +94,17 @@ public class SimpleQueryStringBuilder extends AbstractQueryBuilder resolvedFieldsAndWeights = new TreeMap<>(); - // Use the default field if no fields specified - if (fieldsAndWeights.isEmpty()) { - resolvedFieldsAndWeights.put(resolveIndexName(context.defaultField(), context), AbstractQueryBuilder.DEFAULT_BOOST); + + if ((useAllFields != null && useAllFields) && (fieldsAndWeights.size() != 0)) { + throw addValidationError("cannot use [all_fields] parameter in conjunction with [fields]", null); + } + + // If explicitly required to use all fields, use all fields, OR: + // Automatically determine the fields (to replace the _all field) if all of the following are true: + // - The _all field is disabled, + // - and the default_field has not been changed in the settings + // - and no fields are specified in the request + Settings newSettings = new Settings(settings); + if ((this.useAllFields != null && this.useAllFields) || + (context.getMapperService().allEnabled() == false && + "_all".equals(context.defaultField()) && + this.fieldsAndWeights.isEmpty())) { + resolvedFieldsAndWeights = QueryStringQueryBuilder.allQueryableDefaultFields(context); + // Need to use lenient mode when using "all-mode" so exceptions aren't thrown due to mismatched types + newSettings.lenient(true); } else { - for (Map.Entry fieldEntry : fieldsAndWeights.entrySet()) { - if (Regex.isSimpleMatchPattern(fieldEntry.getKey())) { - for (String fieldName : context.getMapperService().simpleMatchToIndexNames(fieldEntry.getKey())) { - resolvedFieldsAndWeights.put(fieldName, fieldEntry.getValue()); + // Use the default field if no fields specified + if (fieldsAndWeights.isEmpty()) { + resolvedFieldsAndWeights.put(resolveIndexName(context.defaultField(), context), AbstractQueryBuilder.DEFAULT_BOOST); + } else { + for (Map.Entry fieldEntry : fieldsAndWeights.entrySet()) { + if (Regex.isSimpleMatchPattern(fieldEntry.getKey())) { + for (String fieldName : context.getMapperService().simpleMatchToIndexNames(fieldEntry.getKey())) { + resolvedFieldsAndWeights.put(fieldName, fieldEntry.getValue()); + } + } else { + resolvedFieldsAndWeights.put(resolveIndexName(fieldEntry.getKey(), context), fieldEntry.getValue()); } - } else { - resolvedFieldsAndWeights.put(resolveIndexName(fieldEntry.getKey(), context), fieldEntry.getValue()); } } } @@ -355,7 +393,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { if (analyzer == null) { luceneAnalyzer = context.getMapperService().searchAnalyzer(); } else { - luceneAnalyzer = context.getAnalysisService().analyzer(analyzer); + luceneAnalyzer = context.getIndexAnalyzers().get(analyzer); if (luceneAnalyzer == null) { throw new QueryShardException(context, "[" + SimpleQueryStringBuilder.NAME + "] analyzer [" + analyzer + "] not found"); @@ -363,17 +401,10 @@ protected Query doToQuery(QueryShardContext context) throws IOException { } - SimpleQueryParser sqp = new SimpleQueryParser(luceneAnalyzer, resolvedFieldsAndWeights, flags, settings, context); + SimpleQueryParser sqp = new SimpleQueryParser(luceneAnalyzer, resolvedFieldsAndWeights, 
flags, newSettings, context); sqp.setDefaultOperator(defaultOperator.toBooleanClauseOccur()); - Query query = sqp.parse(queryText); - // If the coordination factor is disabled on a boolean query we don't apply the minimum should match. - // This is done to make sure that the minimum_should_match doesn't get applied when there is only one word - // and multiple variations of the same word in the query (synonyms for instance). - if (minimumShouldMatch != null && query instanceof BooleanQuery && !((BooleanQuery) query).isCoordDisabled()) { - query = Queries.applyMinimumShouldMatch((BooleanQuery) query, minimumShouldMatch); - } - return query; + return Queries.maybeApplyMinimumShouldMatch(query, minimumShouldMatch); } private static String resolveIndexName(String fieldName, QueryShardContext context) { @@ -404,14 +435,18 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep builder.field(FLAGS_FIELD.getPreferredName(), flags); builder.field(DEFAULT_OPERATOR_FIELD.getPreferredName(), defaultOperator.name().toLowerCase(Locale.ROOT)); - builder.field(LOWERCASE_EXPANDED_TERMS_FIELD.getPreferredName(), settings.lowercaseExpandedTerms()); builder.field(LENIENT_FIELD.getPreferredName(), settings.lenient()); builder.field(ANALYZE_WILDCARD_FIELD.getPreferredName(), settings.analyzeWildcard()); - builder.field(LOCALE_FIELD.getPreferredName(), (settings.locale().toLanguageTag())); + if (settings.quoteFieldSuffix() != null) { + builder.field(QUOTE_FIELD_SUFFIX_FIELD.getPreferredName(), settings.quoteFieldSuffix()); + } if (minimumShouldMatch != null) { builder.field(MINIMUM_SHOULD_MATCH_FIELD.getPreferredName(), minimumShouldMatch); } + if (useAllFields != null) { + builder.field(ALL_FIELDS_FIELD.getPreferredName(), useAllFields); + } printBoostAndQueryName(builder); builder.endObject(); @@ -430,16 +465,16 @@ public static Optional fromXContent(QueryParseContext String analyzerName = null; int flags = SimpleQueryStringFlag.ALL.value(); boolean lenient = SimpleQueryStringBuilder.DEFAULT_LENIENT; - boolean lowercaseExpandedTerms = SimpleQueryStringBuilder.DEFAULT_LOWERCASE_EXPANDED_TERMS; boolean analyzeWildcard = SimpleQueryStringBuilder.DEFAULT_ANALYZE_WILDCARD; - Locale locale = null; + String quoteFieldSuffix = null; + Boolean useAllFields = null; XContentParser.Token token; while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_ARRAY) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, FIELDS_FIELD)) { + if (FIELDS_FIELD.match(currentFieldName)) { while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { String fField = null; float fBoost = 1; @@ -463,15 +498,15 @@ public static Optional fromXContent(QueryParseContext "] query does not support [" + currentFieldName + "]"); } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, QUERY_FIELD)) { + if (QUERY_FIELD.match(currentFieldName)) { queryBody = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, ANALYZER_FIELD)) { + } else if (ANALYZER_FIELD.match(currentFieldName)) { analyzerName = parser.text(); - } else if 
(parseContext.getParseFieldMatcher().match(currentFieldName, DEFAULT_OPERATOR_FIELD)) { + } else if (DEFAULT_OPERATOR_FIELD.match(currentFieldName)) { defaultOperator = Operator.fromString(parser.text()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, FLAGS_FIELD)) { + } else if (FLAGS_FIELD.match(currentFieldName)) { if (parser.currentToken() != XContentParser.Token.VALUE_NUMBER) { // Possible options are: // ALL, NONE, AND, OR, PREFIX, PHRASE, PRECEDENCE, ESCAPE, WHITESPACE, FUZZY, NEAR, SLOP @@ -482,19 +517,22 @@ public static Optional fromXContent(QueryParseContext flags = SimpleQueryStringFlag.ALL.value(); } } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, LOCALE_FIELD)) { - String localeStr = parser.text(); - locale = Locale.forLanguageTag(localeStr); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, LOWERCASE_EXPANDED_TERMS_FIELD)) { - lowercaseExpandedTerms = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, LENIENT_FIELD)) { + } else if (LOCALE_FIELD.match(currentFieldName)) { + // ignore, deprecated setting + } else if (LOWERCASE_EXPANDED_TERMS_FIELD.match(currentFieldName)) { + // ignore, deprecated setting + } else if (LENIENT_FIELD.match(currentFieldName)) { lenient = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, ANALYZE_WILDCARD_FIELD)) { + } else if (ANALYZE_WILDCARD_FIELD.match(currentFieldName)) { analyzeWildcard = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MINIMUM_SHOULD_MATCH_FIELD)) { + } else if (MINIMUM_SHOULD_MATCH_FIELD.match(currentFieldName)) { minimumShouldMatch = parser.textOrNull(); + } else if (QUOTE_FIELD_SUFFIX_FIELD.match(currentFieldName)) { + quoteFieldSuffix = parser.textOrNull(); + } else if (ALL_FIELDS_FIELD.match(currentFieldName)) { + useAllFields = parser.booleanValue(); } else { throw new ParsingException(parser.getTokenLocation(), "[" + SimpleQueryStringBuilder.NAME + "] unsupported field [" + parser.currentName() + "]"); @@ -510,10 +548,16 @@ public static Optional fromXContent(QueryParseContext throw new ParsingException(parser.getTokenLocation(), "[" + SimpleQueryStringBuilder.NAME + "] query text missing"); } + if ((useAllFields != null && useAllFields) && (fieldsAndWeights.size() != 0)) { + throw new ParsingException(parser.getTokenLocation(), + "cannot use [all_fields] parameter in conjunction with [fields]"); + } + SimpleQueryStringBuilder qb = new SimpleQueryStringBuilder(queryBody); qb.boost(boost).fields(fieldsAndWeights).analyzer(analyzerName).queryName(queryName).minimumShouldMatch(minimumShouldMatch); - qb.flags(flags).defaultOperator(defaultOperator).locale(locale).lowercaseExpandedTerms(lowercaseExpandedTerms); - qb.lenient(lenient).analyzeWildcard(analyzeWildcard).boost(boost); + qb.flags(flags).defaultOperator(defaultOperator); + qb.lenient(lenient).analyzeWildcard(analyzeWildcard).boost(boost).quoteFieldSuffix(quoteFieldSuffix); + qb.useAllFields(useAllFields); return Optional.of(qb); } @@ -524,7 +568,7 @@ public String getWriteableName() { @Override protected int doHashCode() { - return Objects.hash(fieldsAndWeights, analyzer, defaultOperator, queryText, minimumShouldMatch, settings, flags); + return 
Objects.hash(fieldsAndWeights, analyzer, defaultOperator, queryText, minimumShouldMatch, settings, flags, useAllFields); } @Override @@ -532,7 +576,9 @@ protected boolean doEquals(SimpleQueryStringBuilder other) { return Objects.equals(fieldsAndWeights, other.fieldsAndWeights) && Objects.equals(analyzer, other.analyzer) && Objects.equals(defaultOperator, other.defaultOperator) && Objects.equals(queryText, other.queryText) && Objects.equals(minimumShouldMatch, other.minimumShouldMatch) - && Objects.equals(settings, other.settings) && (flags == other.flags); + && Objects.equals(settings, other.settings) + && (flags == other.flags) + && (useAllFields == other.useAllFields); } -} +} diff --git a/core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringFlag.java b/core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringFlag.java index 68d19db7cc69b..e8cbe035c90cd 100644 --- a/core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringFlag.java +++ b/core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringFlag.java @@ -43,7 +43,7 @@ public enum SimpleQueryStringFlag { final int value; - private SimpleQueryStringFlag(int value) { + SimpleQueryStringFlag(int value) { this.value = value; } diff --git a/core/src/main/java/org/elasticsearch/index/query/SpanContainingQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/SpanContainingQueryBuilder.java index 42c64d05633a8..34311098ee640 100644 --- a/core/src/main/java/org/elasticsearch/index/query/SpanContainingQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/SpanContainingQueryBuilder.java @@ -113,13 +113,13 @@ public static Optional fromXContent(QueryParseContex if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, BIG_FIELD)) { + if (BIG_FIELD.match(currentFieldName)) { Optional query = parseContext.parseInnerQueryBuilder(); if (query.isPresent() == false || query.get() instanceof SpanQueryBuilder == false) { throw new ParsingException(parser.getTokenLocation(), "span_containing [big] must be of type span query"); } big = (SpanQueryBuilder) query.get(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, LITTLE_FIELD)) { + } else if (LITTLE_FIELD.match(currentFieldName)) { Optional query = parseContext.parseInnerQueryBuilder(); if (query.isPresent() == false || query.get() instanceof SpanQueryBuilder == false) { throw new ParsingException(parser.getTokenLocation(), "span_containing [little] must be of type span query"); @@ -129,9 +129,9 @@ public static Optional fromXContent(QueryParseContex throw new ParsingException(parser.getTokenLocation(), "[span_containing] query does not support [" + currentFieldName + "]"); } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), diff --git a/core/src/main/java/org/elasticsearch/index/query/SpanFirstQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/SpanFirstQueryBuilder.java index 5cf5adc66a896..afac564238b9c 100644 --- 
a/core/src/main/java/org/elasticsearch/index/query/SpanFirstQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/SpanFirstQueryBuilder.java @@ -115,7 +115,7 @@ public static Optional fromXContent(QueryParseContext par if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, MATCH_FIELD)) { + if (MATCH_FIELD.match(currentFieldName)) { Optional query = parseContext.parseInnerQueryBuilder(); if (query.isPresent() == false || query.get() instanceof SpanQueryBuilder == false) { throw new ParsingException(parser.getTokenLocation(), "spanFirst [match] must be of type span query"); @@ -125,11 +125,11 @@ public static Optional fromXContent(QueryParseContext par throw new ParsingException(parser.getTokenLocation(), "[span_first] query does not support [" + currentFieldName + "]"); } } else { - if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, END_FIELD)) { + } else if (END_FIELD.match(currentFieldName)) { end = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), "[span_first] query does not support [" + currentFieldName + "]"); diff --git a/core/src/main/java/org/elasticsearch/index/query/SpanMultiTermQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/SpanMultiTermQueryBuilder.java index b58244f3ce1f0..1e1f44df86882 100644 --- a/core/src/main/java/org/elasticsearch/index/query/SpanMultiTermQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/SpanMultiTermQueryBuilder.java @@ -93,7 +93,7 @@ public static Optional fromXContent(QueryParseContext if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, MATCH_FIELD)) { + if (MATCH_FIELD.match(currentFieldName)) { Optional query = parseContext.parseInnerQueryBuilder(); if (query.isPresent() == false || query.get() instanceof MultiTermQueryBuilder == false) { throw new ParsingException(parser.getTokenLocation(), @@ -104,9 +104,9 @@ public static Optional fromXContent(QueryParseContext throw new ParsingException(parser.getTokenLocation(), "[span_multi] query does not support [" + currentFieldName + "]"); } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); } else { throw new ParsingException(parser.getTokenLocation(), "[span_multi] query does not support [" + currentFieldName + "]"); diff --git a/core/src/main/java/org/elasticsearch/index/query/SpanNearQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/SpanNearQueryBuilder.java index 
c23597dbf1805..996a161e625a1 100644 --- a/core/src/main/java/org/elasticsearch/index/query/SpanNearQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/SpanNearQueryBuilder.java @@ -46,6 +46,8 @@ public class SpanNearQueryBuilder extends AbstractQueryBuilder fromXContent(QueryParseContext pars XContentParser parser = parseContext.parser(); float boost = AbstractQueryBuilder.DEFAULT_BOOST; - Integer slop = null; - boolean inOrder = SpanNearQueryBuilder.DEFAULT_IN_ORDER; + int slop = DEFAULT_SLOP; + boolean inOrder = DEFAULT_IN_ORDER; String queryName = null; List clauses = new ArrayList<>(); @@ -161,7 +163,7 @@ public static Optional fromXContent(QueryParseContext pars if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_ARRAY) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, CLAUSES_FIELD)) { + if (CLAUSES_FIELD.match(currentFieldName)) { while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { Optional query = parseContext.parseInnerQueryBuilder(); if (query.isPresent() == false || query.get() instanceof SpanQueryBuilder == false) { @@ -173,15 +175,15 @@ public static Optional fromXContent(QueryParseContext pars throw new ParsingException(parser.getTokenLocation(), "[span_near] query does not support [" + currentFieldName + "]"); } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, IN_ORDER_FIELD)) { + if (IN_ORDER_FIELD.match(currentFieldName)) { inOrder = parser.booleanValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, COLLECT_PAYLOADS_FIELD)) { - // Deprecated in 3.0.0 - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, SLOP_FIELD)) { + } else if (COLLECT_PAYLOADS_FIELD.match(currentFieldName)) { + // Deprecated in 5.0.0 + } else if (SLOP_FIELD.match(currentFieldName)) { slop = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), "[span_near] query does not support [" + currentFieldName + "]"); @@ -195,10 +197,6 @@ public static Optional fromXContent(QueryParseContext pars throw new ParsingException(parser.getTokenLocation(), "span_near must include [clauses]"); } - if (slop == null) { - throw new ParsingException(parser.getTokenLocation(), "span_near must include [slop]"); - } - SpanNearQueryBuilder queryBuilder = new SpanNearQueryBuilder(clauses.get(0), slop); for (int i = 1; i < clauses.size(); i++) { queryBuilder.addClause(clauses.get(i)); @@ -211,6 +209,11 @@ public static Optional fromXContent(QueryParseContext pars @Override protected Query doToQuery(QueryShardContext context) throws IOException { + if (clauses.size() == 1) { + Query query = clauses.get(0).toQuery(context); + assert query instanceof SpanQuery; + return query; + } SpanQuery[] spanQueries = new SpanQuery[clauses.size()]; for (int i = 0; i < clauses.size(); i++) { Query query = clauses.get(i).toQuery(context); diff --git a/core/src/main/java/org/elasticsearch/index/query/SpanNotQueryBuilder.java 
b/core/src/main/java/org/elasticsearch/index/query/SpanNotQueryBuilder.java index ccf325c253f1d..d7dd6d6534b2d 100644 --- a/core/src/main/java/org/elasticsearch/index/query/SpanNotQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/SpanNotQueryBuilder.java @@ -181,13 +181,13 @@ public static Optional fromXContent(QueryParseContext parse if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, INCLUDE_FIELD)) { + if (INCLUDE_FIELD.match(currentFieldName)) { Optional query = parseContext.parseInnerQueryBuilder(); if (query.isPresent() == false || query.get() instanceof SpanQueryBuilder == false) { throw new ParsingException(parser.getTokenLocation(), "spanNot [include] must be of type span query"); } include = (SpanQueryBuilder) query.get(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, EXCLUDE_FIELD)) { + } else if (EXCLUDE_FIELD.match(currentFieldName)) { Optional query = parseContext.parseInnerQueryBuilder(); if (query.isPresent() == false || query.get() instanceof SpanQueryBuilder == false) { throw new ParsingException(parser.getTokenLocation(), "spanNot [exclude] must be of type span query"); @@ -197,15 +197,15 @@ public static Optional fromXContent(QueryParseContext parse throw new ParsingException(parser.getTokenLocation(), "[span_not] query does not support [" + currentFieldName + "]"); } } else { - if (parseContext.getParseFieldMatcher().match(currentFieldName, DIST_FIELD)) { + if (DIST_FIELD.match(currentFieldName)) { dist = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, PRE_FIELD)) { + } else if (PRE_FIELD.match(currentFieldName)) { pre = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, POST_FIELD)) { + } else if (POST_FIELD.match(currentFieldName)) { post = parser.intValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), "[span_not] query does not support [" + currentFieldName + "]"); diff --git a/core/src/main/java/org/elasticsearch/index/query/SpanOrQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/SpanOrQueryBuilder.java index 336dd8bb0fe19..6676d168ee55d 100644 --- a/core/src/main/java/org/elasticsearch/index/query/SpanOrQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/SpanOrQueryBuilder.java @@ -112,7 +112,7 @@ public static Optional fromXContent(QueryParseContext parseC if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_ARRAY) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, CLAUSES_FIELD)) { + if (CLAUSES_FIELD.match(currentFieldName)) { while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { Optional query = parseContext.parseInnerQueryBuilder(); if (query.isPresent() == false || query.get() instanceof SpanQueryBuilder == false) { @@ -124,9 +124,9 @@ public static Optional 
fromXContent(QueryParseContext parseC throw new ParsingException(parser.getTokenLocation(), "[span_or] query does not support [" + currentFieldName + "]"); } } else { - if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), "[span_or] query does not support [" + currentFieldName + "]"); diff --git a/core/src/main/java/org/elasticsearch/index/query/SpanTermQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/SpanTermQueryBuilder.java index b57206c33b55c..90073e3fec43d 100644 --- a/core/src/main/java/org/elasticsearch/index/query/SpanTermQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/SpanTermQueryBuilder.java @@ -110,13 +110,13 @@ public static Optional fromXContent(QueryParseContext pars if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else { - if (parseContext.getParseFieldMatcher().match(currentFieldName, TERM_FIELD)) { + if (TERM_FIELD.match(currentFieldName)) { value = parser.objectBytes(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, BaseTermQueryBuilder.VALUE_FIELD)) { + } else if (BaseTermQueryBuilder.VALUE_FIELD.match(currentFieldName)) { value = parser.objectBytes(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), diff --git a/core/src/main/java/org/elasticsearch/index/query/SpanWithinQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/SpanWithinQueryBuilder.java index 2c5cf1cda6917..eebc8b0eed0c8 100644 --- a/core/src/main/java/org/elasticsearch/index/query/SpanWithinQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/SpanWithinQueryBuilder.java @@ -119,13 +119,13 @@ public static Optional fromXContent(QueryParseContext pa if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, BIG_FIELD)) { + if (BIG_FIELD.match(currentFieldName)) { Optional query = parseContext.parseInnerQueryBuilder(); if (query.isPresent() == false || query.get() instanceof SpanQueryBuilder == false) { throw new ParsingException(parser.getTokenLocation(), "span_within [big] must be of type span query"); } big = (SpanQueryBuilder) query.get(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, LITTLE_FIELD)) { + } else if (LITTLE_FIELD.match(currentFieldName)) { Optional query = parseContext.parseInnerQueryBuilder(); if (query.isPresent() == false || query.get() instanceof SpanQueryBuilder == false) { throw new ParsingException(parser.getTokenLocation(), "span_within [little] must be of type span query"); @@ -135,9 +135,9 @@ public static Optional 
fromXContent(QueryParseContext pa throw new ParsingException(parser.getTokenLocation(), "[span_within] query does not support [" + currentFieldName + "]"); } - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), "[span_within] query does not support [" + currentFieldName + "]"); diff --git a/core/src/main/java/org/elasticsearch/index/query/TermQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/TermQueryBuilder.java index 92ec60f402f4f..dd8019e5a1445 100644 --- a/core/src/main/java/org/elasticsearch/index/query/TermQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/TermQueryBuilder.java @@ -104,13 +104,13 @@ public static Optional fromXContent(QueryParseContext parseCon if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else { - if (parseContext.getParseFieldMatcher().match(currentFieldName, TERM_FIELD)) { + if (TERM_FIELD.match(currentFieldName)) { value = parser.objectBytes(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, VALUE_FIELD)) { + } else if (VALUE_FIELD.match(currentFieldName)) { value = parser.objectBytes(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); } else { throw new ParsingException(parser.getTokenLocation(), diff --git a/core/src/main/java/org/elasticsearch/index/query/TermsQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/TermsQueryBuilder.java index 56e6f2a7a45a9..94435d29212d4 100644 --- a/core/src/main/java/org/elasticsearch/index/query/TermsQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/TermsQueryBuilder.java @@ -20,20 +20,25 @@ package org.elasticsearch.index.query; import org.apache.lucene.index.Term; -import org.apache.lucene.queries.TermsQuery; import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermInSetQuery; import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.BytesRef; +import org.apache.lucene.util.BytesRefBuilder; import org.elasticsearch.action.get.GetRequest; import org.elasticsearch.action.get.GetResponse; import org.elasticsearch.client.Client; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.Strings; +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.lucene.BytesRefs; import 
org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -43,11 +48,15 @@ import org.elasticsearch.indices.TermsLookup; import java.io.IOException; +import java.util.AbstractList; import java.util.ArrayList; import java.util.Arrays; +import java.util.Collections; +import java.util.HashSet; import java.util.List; import java.util.Objects; import java.util.Optional; +import java.util.Set; import java.util.stream.Collectors; import java.util.stream.IntStream; @@ -57,6 +66,7 @@ public class TermsQueryBuilder extends AbstractQueryBuilder { public static final String NAME = "terms"; public static final ParseField QUERY_NAME_FIELD = new ParseField(NAME, "in"); + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(TermsQueryBuilder.class)); private final String fieldName; private final List values; @@ -80,7 +90,7 @@ public TermsQueryBuilder(String fieldName, TermsLookup termsLookup) { throw new IllegalArgumentException("Both values and termsLookup specified for terms query"); } this.fieldName = fieldName; - this.values = values; + this.values = values == null ? null : convert(values); this.termsLookup = termsLookup; } @@ -159,7 +169,7 @@ public TermsQueryBuilder(String fieldName, Iterable values) { throw new IllegalArgumentException("No value specified for terms query"); } this.fieldName = fieldName; - this.values = convertToBytesRefListIfStringList(values); + this.values = convert(values); this.termsLookup = null; } @@ -185,43 +195,125 @@ public String fieldName() { } public List values() { - return convertToStringListIfBytesRefList(this.values); + return convertBack(this.values); } public TermsLookup termsLookup() { return this.termsLookup; } + private static final Set> INTEGER_TYPES = new HashSet<>( + Arrays.asList(Byte.class, Short.class, Integer.class, Long.class)); + private static final Set> STRING_TYPES = new HashSet<>( + Arrays.asList(BytesRef.class, String.class)); + /** - * Same as {@link #convertToBytesRefIfString} but on Iterable. - * @param objs the Iterable of input object - * @return the same input or a list of {@link BytesRef} representation if input was a list of type string + * Same as {@link #convert(List)} but on an {@link Iterable}. */ - private static List convertToBytesRefListIfStringList(Iterable objs) { - if (objs == null) { - return null; - } - List newObjs = new ArrayList<>(); - for (Object obj : objs) { - newObjs.add(convertToBytesRefIfString(obj)); + private static List convert(Iterable values) { + List list; + if (values instanceof List) { + list = (List) values; + } else { + ArrayList arrayList = new ArrayList(); + for (Object o : values) { + arrayList.add(o); + } + list = arrayList; } - return newObjs; + return convert(list); } /** - * Same as {@link #convertToStringIfBytesRef} but on Iterable. - * @param objs the Iterable of input object - * @return the same input or a list of utf8 string if input was a list of type {@link BytesRef} + * Convert the list in a way that optimizes storage in the case that all + * elements are either integers or {@link String}s/{@link BytesRef}s. This + * is useful to help garbage collections for use-cases that involve sending + * very large terms queries to Elasticsearch. If the list does not only + * contain integers or {@link String}s, then a list is returned where all + * {@link String}s have been replaced with {@link BytesRef}s. 
*/ - private static List convertToStringListIfBytesRefList(Iterable objs) { - if (objs == null) { - return null; + static List convert(List list) { + if (list.isEmpty()) { + return Collections.emptyList(); } - List newObjs = new ArrayList<>(); - for (Object obj : objs) { - newObjs.add(convertToStringIfBytesRef(obj)); + + final boolean allNumbers = list.stream().allMatch(o -> o != null && INTEGER_TYPES.contains(o.getClass())); + if (allNumbers) { + final long[] elements = list.stream().mapToLong(o -> ((Number) o).longValue()).toArray(); + return new AbstractList() { + @Override + public Object get(int index) { + return elements[index]; + } + @Override + public int size() { + return elements.length; + } + }; } - return newObjs; + + final boolean allStrings = list.stream().allMatch(o -> o != null && STRING_TYPES.contains(o.getClass())); + if (allStrings) { + final BytesRefBuilder builder = new BytesRefBuilder(); + try (BytesStreamOutput bytesOut = new BytesStreamOutput()) { + final int[] endOffsets = new int[list.size()]; + int i = 0; + for (Object o : list) { + BytesRef b; + if (o instanceof BytesRef) { + b = (BytesRef) o; + } else { + builder.copyChars(o.toString()); + b = builder.get(); + } + bytesOut.writeBytes(b.bytes, b.offset, b.length); + if (i == 0) { + endOffsets[0] = b.length; + } else { + endOffsets[i] = Math.addExact(endOffsets[i-1], b.length); + } + ++i; + } + final BytesReference bytes = bytesOut.bytes(); + return new AbstractList() { + public Object get(int i) { + final int startOffset = i == 0 ? 0 : endOffsets[i-1]; + final int endOffset = endOffsets[i]; + return bytes.slice(startOffset, endOffset - startOffset).toBytesRef(); + } + public int size() { + return endOffsets.length; + } + }; + } + } + + return list.stream().map(o -> o instanceof String ? new BytesRef(o.toString()) : o).collect(Collectors.toList()); + } + + /** + * Convert the internal {@link List} of values back to a user-friendly list. + * Integers are kept as-is since the terms query does not make any difference + * between {@link Integer}s and {@link Long}s, but {@link BytesRef}s are + * converted back to {@link String}s. 
+ */ + static List convertBack(List list) { + return new AbstractList() { + @Override + public int size() { + return list.size(); + } + @Override + public Object get(int index) { + Object o = list.get(index); + if (o instanceof BytesRef) { + o = ((BytesRef) o).utf8ToString(); + } + // we do not convert longs, all integer types are equivalent + // as far as this query is concerned + return o; + } + }; } @Override @@ -232,7 +324,7 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep termsLookup.toXContent(builder, params); builder.endObject(); } else { - builder.field(fieldName, convertToStringListIfBytesRefList(values)); + builder.field(fieldName, convertBack(values)); } printBoostAndQueryName(builder); builder.endObject(); @@ -271,9 +363,9 @@ public static Optional fromXContent(QueryParseContext parseCo fieldName = currentFieldName; termsLookup = TermsLookup.parseTermsLookup(parser); } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), @@ -352,7 +444,7 @@ private static Query handleTermsQuery(List terms, String fieldName, QueryShar for (int i = 0; i < filterValues.length; i++) { filterValues[i] = BytesRefs.toBytesRef(terms.get(i)); } - query = new TermsQuery(indexFieldName, filterValues); + query = new TermInSetQuery(indexFieldName, filterValues); } } else { BooleanQuery.Builder bq = new BooleanQuery.Builder(); @@ -385,6 +477,7 @@ protected QueryBuilder doRewrite(QueryRewriteContext queryRewriteContext) throws if (this.termsLookup != null) { TermsLookup termsLookup = new TermsLookup(this.termsLookup); if (termsLookup.index() == null) { // TODO this should go away? 
+ DEPRECATION_LOGGER.deprecated("Omitting the index in terms lookup is deprecated"); if (queryRewriteContext.getIndexSettings() != null) { termsLookup.index(queryRewriteContext.getIndexSettings().getIndex().getName()); } else { diff --git a/core/src/main/java/org/elasticsearch/index/query/TypeQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/TypeQueryBuilder.java index c5b071aa64efe..e3a9c5bebbdd2 100644 --- a/core/src/main/java/org/elasticsearch/index/query/TypeQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/TypeQueryBuilder.java @@ -94,11 +94,11 @@ public static Optional fromXContent(QueryParseContext parseCon if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, VALUE_FIELD)) { + } else if (VALUE_FIELD.match(currentFieldName)) { type = parser.utf8Bytes(); } else { throw new ParsingException(parser.getTokenLocation(), @@ -133,7 +133,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { // no type means no documents return new MatchNoDocsQuery(); } else { - return documentMapper.typeFilter(); + return documentMapper.typeFilter(context); } } diff --git a/core/src/main/java/org/elasticsearch/index/query/WildcardQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/WildcardQueryBuilder.java index 0cef251be07ed..728ade482c2ab 100644 --- a/core/src/main/java/org/elasticsearch/index/query/WildcardQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/WildcardQueryBuilder.java @@ -154,15 +154,15 @@ public static Optional fromXContent(QueryParseContext pars if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else { - if (parseContext.getParseFieldMatcher().match(currentFieldName, WILDCARD_FIELD)) { + if (WILDCARD_FIELD.match(currentFieldName)) { value = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, VALUE_FIELD)) { + } else if (VALUE_FIELD.match(currentFieldName)) { value = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, REWRITE_FIELD)) { + } else if (REWRITE_FIELD.match(currentFieldName)) { rewrite = parser.textOrNull(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), @@ -195,7 +195,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { } WildcardQuery query = new WildcardQuery(term); - MultiTermQuery.RewriteMethod rewriteMethod = QueryParsers.parseRewriteMethod(context.getParseFieldMatcher(), rewrite, null); + MultiTermQuery.RewriteMethod rewriteMethod = 
QueryParsers.parseRewriteMethod(rewrite, null); QueryParsers.setRewriteMethod(query, rewriteMethod); return query; } diff --git a/core/src/main/java/org/elasticsearch/index/query/WrapperQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/WrapperQueryBuilder.java index 922038ba43624..26c84eeb88b2c 100644 --- a/core/src/main/java/org/elasticsearch/index/query/WrapperQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/WrapperQueryBuilder.java @@ -124,7 +124,7 @@ public static Optional fromXContent(QueryParseContext parse throw new ParsingException(parser.getTokenLocation(), "[wrapper] query malformed"); } String fieldName = parser.currentName(); - if (! parseContext.getParseFieldMatcher().match(fieldName, QUERY_FIELD)) { + if (! QUERY_FIELD.match(fieldName)) { throw new ParsingException(parser.getTokenLocation(), "[wrapper] query malformed, expected `query` but was " + fieldName); } parser.nextToken(); @@ -161,11 +161,11 @@ protected boolean doEquals(WrapperQueryBuilder other) { @Override protected QueryBuilder doRewrite(QueryRewriteContext context) throws IOException { - try (XContentParser qSourceParser = XContentFactory.xContent(source).createParser(source)) { + try (XContentParser qSourceParser = XContentFactory.xContent(source).createParser(context.getXContentRegistry(), source)) { QueryParseContext parseContext = context.newParseContext(qSourceParser); final QueryBuilder queryBuilder = parseContext.parseInnerQueryBuilder().orElseThrow( - () -> new ParsingException(qSourceParser.getTokenLocation(), "inner query cannot be empty")); + () -> new ParsingException(qSourceParser.getTokenLocation(), "inner query cannot be empty")).rewrite(context); if (boost() != DEFAULT_BOOST || queryName() != null) { final BoolQueryBuilder boolQueryBuilder = new BoolQueryBuilder(); boolQueryBuilder.must(queryBuilder); diff --git a/core/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionBuilder.java b/core/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionBuilder.java index c2d20587cf0c8..6a0ed7d3cac67 100644 --- a/core/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionBuilder.java @@ -34,6 +34,7 @@ import org.elasticsearch.common.lucene.search.function.ScoreFunction; import org.elasticsearch.common.unit.DistanceUnit; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; @@ -48,6 +49,7 @@ import org.elasticsearch.index.mapper.LegacyNumberFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.NumberFieldMapper; +import org.elasticsearch.index.query.QueryRewriteContext; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.search.MultiValueMode; @@ -147,10 +149,7 @@ public BytesReference getFunctionBytes() { @Override public void doXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(getName()); - builder.field(fieldName); - try (XContentParser parser = XContentFactory.xContent(functionBytes).createParser(functionBytes)) { - builder.copyCurrentStructure(parser); - } + builder.rawField(fieldName, functionBytes); builder.field(DecayFunctionParser.MULTI_VALUE_MODE.getPreferredName(), 
multiValueMode.name()); builder.endObject(); } @@ -183,7 +182,8 @@ protected int doHashCode() { @Override protected ScoreFunction doToFunction(QueryShardContext context) throws IOException { AbstractDistanceScoreFunction scoreFunction; - try (XContentParser parser = XContentFactory.xContent(functionBytes).createParser(functionBytes)) { + // EMPTY is safe because parseVariable doesn't use namedObject + try (XContentParser parser = XContentFactory.xContent(functionBytes).createParser(NamedXContentRegistry.EMPTY, functionBytes)) { scoreFunction = parseVariable(fieldName, parser, context, multiValueMode); } return scoreFunction; @@ -315,9 +315,10 @@ private AbstractDistanceScoreFunction parseDateVariable(XContentParser parser, Q origin = context.nowInMillis(); } else { if (dateFieldType instanceof LegacyDateFieldMapper.DateFieldType) { - origin = ((LegacyDateFieldMapper.DateFieldType) dateFieldType).parseToMilliseconds(originString, false, null, null); + origin = ((LegacyDateFieldMapper.DateFieldType) dateFieldType).parseToMilliseconds(originString, false, null, null, + context); } else { - origin = ((DateFieldMapper.DateFieldType) dateFieldType).parseToMilliseconds(originString, false, null, null); + origin = ((DateFieldMapper.DateFieldType) dateFieldType).parseToMilliseconds(originString, false, null, null, context); } } @@ -338,9 +339,9 @@ static class GeoFieldDataScoreFunction extends AbstractDistanceScoreFunction { private final GeoPoint origin; private final IndexGeoPointFieldData fieldData; - private static final GeoDistance distFunction = GeoDistance.DEFAULT; + private static final GeoDistance distFunction = GeoDistance.ARC; - public GeoFieldDataScoreFunction(GeoPoint origin, double scale, double decay, double offset, DecayFunction func, + GeoFieldDataScoreFunction(GeoPoint origin, double scale, double decay, double offset, DecayFunction func, IndexGeoPointFieldData fieldData, MultiValueMode mode) { super(scale, decay, offset, func, mode); this.origin = origin; @@ -422,7 +423,7 @@ static class NumericFieldDataScoreFunction extends AbstractDistanceScoreFunction private final IndexNumericFieldData fieldData; private final double origin; - public NumericFieldDataScoreFunction(double origin, double scale, double decay, double offset, DecayFunction func, + NumericFieldDataScoreFunction(double origin, double scale, double decay, double offset, DecayFunction func, IndexNumericFieldData fieldData, MultiValueMode mode) { super(scale, decay, offset, func, mode); this.fieldData = fieldData; @@ -547,7 +548,7 @@ public double score(int docId, float subQueryScore) { @Override public Explanation explainScore(int docId, Explanation subQueryScore) throws IOException { return Explanation.match( - CombineFunction.toFloat(score(docId, subQueryScore.getValue())), + (float) score(docId, subQueryScore.getValue()), "Function for field " + getFieldName() + ":", func.explainFunction(getDistanceString(ctx, docId), distance.get(docId), scale)); } diff --git a/core/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionParser.java b/core/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionParser.java index 2c8b9af28d6a8..5a767ecc3a0ba 100644 --- a/core/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionParser.java +++ b/core/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionParser.java @@ -112,7 +112,7 @@ public DFB fromXContent(QueryParseContext context) throws IOException, ParsingEx XContentBuilder builder = 
XContentFactory.jsonBuilder(); builder.copyCurrentStructure(parser); functionBytes = builder.bytes(); - } else if (context.getParseFieldMatcher().match(currentFieldName, MULTI_VALUE_MODE)) { + } else if (MULTI_VALUE_MODE.match(currentFieldName)) { multiValueMode = MultiValueMode.fromString(parser.text()); } else { throw new ParsingException(parser.getTokenLocation(), "malformed score function score parameters."); diff --git a/core/src/main/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryBuilder.java index c9bb00270d416..d86e9fcc347e3 100644 --- a/core/src/main/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryBuilder.java @@ -31,13 +31,12 @@ import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery.FilterFunction; import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; import org.elasticsearch.common.lucene.search.function.ScoreFunction; -import org.elasticsearch.common.xcontent.ParseFieldRegistry; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentLocation; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.AbstractQueryBuilder; -import org.elasticsearch.index.query.InnerHitBuilder; +import org.elasticsearch.index.query.InnerHitContextBuilder; import org.elasticsearch.index.query.MatchAllQueryBuilder; import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryBuilders; @@ -427,18 +426,18 @@ protected QueryBuilder doRewrite(QueryRewriteContext queryRewriteContext) throws newQueryBuilder.scoreMode = scoreMode; newQueryBuilder.minScore = minScore; newQueryBuilder.maxBoost = maxBoost; + newQueryBuilder.boostMode = boostMode; return newQueryBuilder; } return this; } @Override - protected void extractInnerHitBuilders(Map innerHits) { - InnerHitBuilder.extractInnerHits(query(), innerHits); + protected void extractInnerHitBuilders(Map innerHits) { + InnerHitContextBuilder.extractInnerHits(query(), innerHits); } - public static Optional fromXContent(ParseFieldRegistry> scoreFunctionsRegistry, - QueryParseContext parseContext) throws IOException { + public static Optional fromXContent(QueryParseContext parseContext) throws IOException { XContentParser parser = parseContext.parser(); QueryBuilder query = null; @@ -462,7 +461,7 @@ public static Optional fromXContent(ParseFieldRegistr if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_OBJECT) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, QUERY_FIELD)) { + if (QUERY_FIELD.match(currentFieldName)) { if (query != null) { throw new ParsingException(parser.getTokenLocation(), "failed to parse [{}] query. [query] is already defined.", NAME); @@ -482,38 +481,35 @@ public static Optional fromXContent(ParseFieldRegistr singleFunctionFound = true; singleFunctionName = currentFieldName; - // we try to parse a score function. If there is no score function for the current field name, - // getScoreFunction will throw. 
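A pattern repeated across these parser hunks is that field-name matching moves off the removed ParseFieldMatcher and onto the ParseField constants themselves, e.g. `AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)`. The following is a rough, self-contained sketch of what such a match helper does, under the assumption that it accepts the preferred name plus any deprecated aliases; SimpleParseField is illustrative and not the real org.elasticsearch.common.ParseField, which also handles deprecation logging.

```java
import java.util.Arrays;
import java.util.Locale;

final class SimpleParseField {
    private final String preferredName;
    private final String[] deprecatedNames;

    SimpleParseField(String preferredName, String... deprecatedNames) {
        this.preferredName = preferredName;
        this.deprecatedNames = deprecatedNames;
    }

    /** True when the parsed field name is the preferred name or one of its aliases. */
    boolean match(String fieldName) {
        String name = fieldName.toLowerCase(Locale.ROOT);
        return preferredName.equals(name) || Arrays.asList(deprecatedNames).contains(name);
    }

    public static void main(String[] args) {
        SimpleParseField boostField = new SimpleParseField("boost");
        // Parsers above now branch on field.match(currentFieldName) instead of
        // parseContext.getParseFieldMatcher().match(currentFieldName, field).
        System.out.println(boostField.match("boost")); // true
        System.out.println(boostField.match("_name")); // false
    }
}
```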
- ScoreFunctionBuilder scoreFunction = scoreFunctionsRegistry - .lookup(currentFieldName, parseContext.getParseFieldMatcher(), parser.getTokenLocation()) - .fromXContent(parseContext); + ScoreFunctionBuilder scoreFunction = parser.namedObject(ScoreFunctionBuilder.class, currentFieldName, + parseContext); filterFunctionBuilders.add(new FunctionScoreQueryBuilder.FilterFunctionBuilder(scoreFunction)); } } else if (token == XContentParser.Token.START_ARRAY) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, FUNCTIONS_FIELD)) { + if (FUNCTIONS_FIELD.match(currentFieldName)) { if (singleFunctionFound) { String errorString = "already found [" + singleFunctionName + "], now encountering [functions]."; handleMisplacedFunctionsDeclaration(parser.getTokenLocation(), errorString); } functionArrayFound = true; - currentFieldName = parseFiltersAndFunctions(scoreFunctionsRegistry, parseContext, filterFunctionBuilders); + currentFieldName = parseFiltersAndFunctions(parseContext, filterFunctionBuilders); } else { throw new ParsingException(parser.getTokenLocation(), "failed to parse [{}] query. array [{}] is not supported", NAME, currentFieldName); } } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(currentFieldName, SCORE_MODE_FIELD)) { + if (SCORE_MODE_FIELD.match(currentFieldName)) { scoreMode = FiltersFunctionScoreQuery.ScoreMode.fromString(parser.text()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, BOOST_MODE_FIELD)) { + } else if (BOOST_MODE_FIELD.match(currentFieldName)) { combineFunction = CombineFunction.fromString(parser.text()); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MAX_BOOST_FIELD)) { + } else if (MAX_BOOST_FIELD.match(currentFieldName)) { maxBoost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) { + } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) { + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { queryName = parser.text(); - } else if (parseContext.getParseFieldMatcher().match(currentFieldName, MIN_SCORE_FIELD)) { + } else if (MIN_SCORE_FIELD.match(currentFieldName)) { minScore = parser.floatValue(); } else { if (singleFunctionFound) { @@ -526,7 +522,7 @@ public static Optional fromXContent(ParseFieldRegistr String errorString = "already found [functions] array, now encountering [" + currentFieldName + "]."; handleMisplacedFunctionsDeclaration(parser.getTokenLocation(), errorString); } - if (parseContext.getParseFieldMatcher().match(currentFieldName, WEIGHT_FIELD)) { + if (WEIGHT_FIELD.match(currentFieldName)) { filterFunctionBuilders.add( new FunctionScoreQueryBuilder.FilterFunctionBuilder(new WeightBuilder().setWeight(parser.floatValue()))); singleFunctionFound = true; @@ -563,9 +559,8 @@ private static void handleMisplacedFunctionsDeclaration(XContentLocation content MISPLACED_FUNCTION_MESSAGE_PREFIX + errorString); } - private static String parseFiltersAndFunctions(ParseFieldRegistry> scoreFunctionsRegistry, - QueryParseContext parseContext, List filterFunctionBuilders) - throws IOException { + private static String parseFiltersAndFunctions(QueryParseContext parseContext, + List filterFunctionBuilders) throws IOException { String currentFieldName = null; XContentParser.Token token; XContentParser parser = 
parseContext.parser(); @@ -582,7 +577,7 @@ private static String parseFiltersAndFunctions(ParseFieldRegistry uidFieldData = context.getForField(fieldType); return new RandomScoreFunction(this.seed == null ? hash(context.nowInMillis()) : seed, salt, uidFieldData); } - /** - * Get the current shard's id for the seed. Protected because this method doesn't work during certain unit tests and needs to be - * replaced. - */ - int getCurrentShardId() { - return SearchContext.current().indexShard().shardId().id(); - } - private static int hash(long value) { return Long.hashCode(value); } diff --git a/core/src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionBuilders.java b/core/src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionBuilders.java index e6fb632f5aa2a..100ff29dfeb69 100644 --- a/core/src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionBuilders.java +++ b/core/src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionBuilders.java @@ -20,6 +20,9 @@ package org.elasticsearch.index.query.functionscore; import org.elasticsearch.script.Script; +import org.elasticsearch.script.ScriptType; + +import static java.util.Collections.emptyMap; /** * Static method aliases for constructors of known {@link ScoreFunctionBuilder}s. @@ -69,7 +72,7 @@ public static ScriptScoreFunctionBuilder scriptFunction(Script script) { } public static ScriptScoreFunctionBuilder scriptFunction(String script) { - return (new ScriptScoreFunctionBuilder(new Script(script))); + return (new ScriptScoreFunctionBuilder(new Script(ScriptType.INLINE, Script.DEFAULT_SCRIPT_LANG, script, emptyMap()))); } public static RandomScoreFunctionBuilder randomFunction(int seed) { diff --git a/core/src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionParser.java b/core/src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionParser.java index 3c01c2d92f369..1a2fad90c464f 100644 --- a/core/src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionParser.java +++ b/core/src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionParser.java @@ -19,7 +19,6 @@ package org.elasticsearch.index.query.functionscore; -import org.elasticsearch.common.ParsingException; import org.elasticsearch.index.query.QueryParseContext; import java.io.IOException; @@ -29,5 +28,5 @@ */ @FunctionalInterface public interface ScoreFunctionParser> { - FB fromXContent(QueryParseContext context) throws IOException, ParsingException; + FB fromXContent(QueryParseContext context) throws IOException; } diff --git a/core/src/main/java/org/elasticsearch/index/query/functionscore/ScriptScoreFunctionBuilder.java b/core/src/main/java/org/elasticsearch/index/query/functionscore/ScriptScoreFunctionBuilder.java index e2fbc5955d7fb..68913cd9e2162 100644 --- a/core/src/main/java/org/elasticsearch/index/query/functionscore/ScriptScoreFunctionBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/functionscore/ScriptScoreFunctionBuilder.java @@ -30,12 +30,10 @@ import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.query.QueryShardException; import org.elasticsearch.script.Script; -import org.elasticsearch.script.Script.ScriptField; import org.elasticsearch.script.ScriptContext; import org.elasticsearch.script.SearchScript; import java.io.IOException; -import java.util.Collections; import java.util.Objects; /** @@ -74,7 +72,7 @@ public Script getScript() { @Override public void doXContent(XContentBuilder builder, Params 
params) throws IOException { builder.startObject(getName()); - builder.field(ScriptField.SCRIPT.getPreferredName(), script); + builder.field(Script.SCRIPT_PARSE_FIELD.getPreferredName(), script); builder.endObject(); } @@ -96,8 +94,7 @@ protected int doHashCode() { @Override protected ScoreFunction doToFunction(QueryShardContext context) { try { - SearchScript searchScript = context.getScriptService().search(context.lookup(), script, ScriptContext.Standard.SEARCH, - Collections.emptyMap()); + SearchScript searchScript = context.getSearchScript(script, ScriptContext.Standard.SEARCH); return new ScriptScoreFunction(script, searchScript); } catch (Exception e) { throw new QueryShardException(context, "script_score: the script could not be loaded", e); @@ -114,8 +111,8 @@ public static ScriptScoreFunctionBuilder fromXContent(QueryParseContext parseCon if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else { - if (parseContext.getParseFieldMatcher().match(currentFieldName, ScriptField.SCRIPT)) { - script = Script.parse(parser, parseContext.getParseFieldMatcher(), parseContext.getDefaultScriptLanguage()); + if (Script.SCRIPT_PARSE_FIELD.match(currentFieldName)) { + script = Script.parse(parser, parseContext.getDefaultScriptLanguage()); } else { throw new ParsingException(parser.getTokenLocation(), NAME + " query does not support [" + currentFieldName + "]"); } diff --git a/core/src/main/java/org/elasticsearch/index/query/support/QueryParsers.java b/core/src/main/java/org/elasticsearch/index/query/support/QueryParsers.java index a500393c160da..ec92f174bb47f 100644 --- a/core/src/main/java/org/elasticsearch/index/query/support/QueryParsers.java +++ b/core/src/main/java/org/elasticsearch/index/query/support/QueryParsers.java @@ -22,7 +22,6 @@ import org.apache.lucene.search.MultiTermQuery; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; /** * @@ -47,28 +46,22 @@ public static void setRewriteMethod(MultiTermQuery query, @Nullable MultiTermQue query.setRewriteMethod(rewriteMethod); } - public static void setRewriteMethod(MultiTermQuery query, ParseFieldMatcher matcher, @Nullable String rewriteMethod) { - if (rewriteMethod == null) { - return; - } - query.setRewriteMethod(parseRewriteMethod(matcher, rewriteMethod)); + public static MultiTermQuery.RewriteMethod parseRewriteMethod(@Nullable String rewriteMethod) { + return parseRewriteMethod(rewriteMethod, MultiTermQuery.CONSTANT_SCORE_REWRITE); } - public static MultiTermQuery.RewriteMethod parseRewriteMethod(ParseFieldMatcher matcher, @Nullable String rewriteMethod) { - return parseRewriteMethod(matcher, rewriteMethod, MultiTermQuery.CONSTANT_SCORE_REWRITE); - } - - public static MultiTermQuery.RewriteMethod parseRewriteMethod(ParseFieldMatcher matcher, @Nullable String rewriteMethod, @Nullable MultiTermQuery.RewriteMethod defaultRewriteMethod) { + public static MultiTermQuery.RewriteMethod parseRewriteMethod(@Nullable String rewriteMethod, + @Nullable MultiTermQuery.RewriteMethod defaultRewriteMethod) { if (rewriteMethod == null) { return defaultRewriteMethod; } - if (matcher.match(rewriteMethod, CONSTANT_SCORE)) { + if (CONSTANT_SCORE.match(rewriteMethod)) { return MultiTermQuery.CONSTANT_SCORE_REWRITE; } - if (matcher.match(rewriteMethod, SCORING_BOOLEAN)) { + if (SCORING_BOOLEAN.match(rewriteMethod)) { return MultiTermQuery.SCORING_BOOLEAN_REWRITE; } - if (matcher.match(rewriteMethod, CONSTANT_SCORE_BOOLEAN)) { + if 
(CONSTANT_SCORE_BOOLEAN.match(rewriteMethod)) { return MultiTermQuery.CONSTANT_SCORE_BOOLEAN_REWRITE; } @@ -84,18 +77,17 @@ public static MultiTermQuery.RewriteMethod parseRewriteMethod(ParseFieldMatcher final int size = Integer.parseInt(rewriteMethod.substring(firstDigit)); String rewriteMethodName = rewriteMethod.substring(0, firstDigit); - if (matcher.match(rewriteMethodName, TOP_TERMS)) { + if (TOP_TERMS.match(rewriteMethodName)) { return new MultiTermQuery.TopTermsScoringBooleanQueryRewrite(size); } - if (matcher.match(rewriteMethodName, TOP_TERMS_BOOST)) { + if (TOP_TERMS_BOOST.match(rewriteMethodName)) { return new MultiTermQuery.TopTermsBoostOnlyBooleanQueryRewrite(size); } - if (matcher.match(rewriteMethodName, TOP_TERMS_BLENDED_FREQS)) { + if (TOP_TERMS_BLENDED_FREQS.match(rewriteMethodName)) { return new MultiTermQuery.TopTermsBlendedFreqScoringRewrite(size); } } throw new IllegalArgumentException("Failed to parse rewrite_method [" + rewriteMethod + "]"); } - } diff --git a/core/src/main/java/org/elasticsearch/index/refresh/RefreshStats.java b/core/src/main/java/org/elasticsearch/index/refresh/RefreshStats.java index 9b9b4673accd1..a8a8a0be41c42 100644 --- a/core/src/main/java/org/elasticsearch/index/refresh/RefreshStats.java +++ b/core/src/main/java/org/elasticsearch/index/refresh/RefreshStats.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.refresh; +import org.elasticsearch.Version; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; @@ -27,6 +28,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; +import java.util.Objects; public class RefreshStats implements Streamable, ToXContent { @@ -34,18 +36,19 @@ public class RefreshStats implements Streamable, ToXContent { private long totalTimeInMillis; + /** + * Number of waiting refresh listeners. + */ + private int listeners; + public RefreshStats() { } - public RefreshStats(long total, long totalTimeInMillis) { + public RefreshStats(long total, long totalTimeInMillis, int listeners) { this.total = total; this.totalTimeInMillis = totalTimeInMillis; - } - - public void add(long total, long totalTimeInMillis) { - this.total += total; - this.totalTimeInMillis += totalTimeInMillis; + this.listeners = listeners; } public void add(RefreshStats refreshStats) { @@ -58,6 +61,7 @@ public void addTotals(RefreshStats refreshStats) { } this.total += refreshStats.total; this.totalTimeInMillis += refreshStats.totalTimeInMillis; + this.listeners += refreshStats.listeners; } /** @@ -81,37 +85,56 @@ public TimeValue getTotalTime() { return new TimeValue(totalTimeInMillis); } - public static RefreshStats readRefreshStats(StreamInput in) throws IOException { - RefreshStats refreshStats = new RefreshStats(); - refreshStats.readFrom(in); - return refreshStats; + /** + * The number of waiting refresh listeners. 
+ */ + public int getListeners() { + return listeners; } @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(Fields.REFRESH); - builder.field(Fields.TOTAL, total); - builder.timeValueField(Fields.TOTAL_TIME_IN_MILLIS, Fields.TOTAL_TIME, totalTimeInMillis); + builder.startObject("refresh"); + builder.field("total", total); + builder.timeValueField("total_time_in_millis", "total_time", totalTimeInMillis); + builder.field("listeners", listeners); builder.endObject(); return builder; } - static final class Fields { - static final String REFRESH = "refresh"; - static final String TOTAL = "total"; - static final String TOTAL_TIME = "total_time"; - static final String TOTAL_TIME_IN_MILLIS = "total_time_in_millis"; - } - @Override public void readFrom(StreamInput in) throws IOException { total = in.readVLong(); totalTimeInMillis = in.readVLong(); + if (in.getVersion().onOrAfter(Version.V_5_2_0)) { + listeners = in.readVInt(); + } else { + listeners = 0; + } } @Override public void writeTo(StreamOutput out) throws IOException { out.writeVLong(total); out.writeVLong(totalTimeInMillis); + if (out.getVersion().onOrAfter(Version.V_5_2_0)) { + out.writeVInt(listeners); + } + } + + @Override + public boolean equals(Object obj) { + if (obj == null || obj.getClass() != RefreshStats.class) { + return false; + } + RefreshStats rhs = (RefreshStats) obj; + return total == rhs.total + && totalTimeInMillis == rhs.totalTimeInMillis + && listeners == rhs.listeners; + } + + @Override + public int hashCode() { + return Objects.hash(total, totalTimeInMillis, listeners); } } diff --git a/core/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByScrollRequest.java b/core/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByScrollRequest.java new file mode 100644 index 0000000000000..5bfae5fde924a --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByScrollRequest.java @@ -0,0 +1,453 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
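RefreshStats above reads and writes the new listeners count only when the wire version is at least V_5_2_0, falling back to 0 against older nodes so mixed-version clusters keep working. Here is a simplified sketch of that version-gated serialization pattern using plain java.io streams; the version constant and stream types are stand-ins, not Elasticsearch's StreamInput/StreamOutput.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class VersionGatedStats {
    static final int V_5_2_0 = 5_02_00; // illustrative version id

    long total;
    int listeners;

    void writeTo(DataOutputStream out, int remoteVersion) throws IOException {
        out.writeLong(total);
        if (remoteVersion >= V_5_2_0) {
            out.writeInt(listeners); // only newer nodes understand the extra field
        }
    }

    void readFrom(DataInputStream in, int remoteVersion) throws IOException {
        total = in.readLong();
        listeners = remoteVersion >= V_5_2_0 ? in.readInt() : 0; // older senders never wrote it
    }

    public static void main(String[] args) throws IOException {
        VersionGatedStats stats = new VersionGatedStats();
        stats.total = 7;
        stats.listeners = 2;
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        stats.writeTo(new DataOutputStream(bytes), V_5_2_0);
        VersionGatedStats copy = new VersionGatedStats();
        copy.readFrom(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())), V_5_2_0);
        System.out.println(copy.listeners); // 2
    }
}
```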
+ */ + +package org.elasticsearch.index.reindex; + +import org.elasticsearch.Version; +import org.elasticsearch.action.ActionRequest; +import org.elasticsearch.action.ActionRequestValidationException; +import org.elasticsearch.action.search.SearchRequest; +import org.elasticsearch.action.support.ActiveShardCount; +import org.elasticsearch.action.support.replication.ReplicationRequest; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.search.builder.SearchSourceBuilder; +import org.elasticsearch.tasks.Task; +import org.elasticsearch.tasks.TaskId; + +import java.io.IOException; +import java.util.Arrays; + +import static org.elasticsearch.action.ValidateActions.addValidationError; +import static org.elasticsearch.common.unit.TimeValue.timeValueMillis; +import static org.elasticsearch.common.unit.TimeValue.timeValueMinutes; + +public abstract class AbstractBulkByScrollRequest> extends ActionRequest { + + public static final int SIZE_ALL_MATCHES = -1; + private static final TimeValue DEFAULT_SCROLL_TIMEOUT = timeValueMinutes(5); + private static final int DEFAULT_SCROLL_SIZE = 1000; + + /** + * The search to be executed. + */ + private SearchRequest searchRequest; + + /** + * Maximum number of processed documents. Defaults to -1 meaning process all + * documents. + */ + private int size = SIZE_ALL_MATCHES; + + /** + * Should version conflicts cause aborts? Defaults to true. + */ + private boolean abortOnVersionConflict = true; + + /** + * Call refresh on the indexes we've written to after the request ends? + */ + private boolean refresh = false; + + /** + * Timeout to wait for the shards on to be available for each bulk request? + */ + private TimeValue timeout = ReplicationRequest.DEFAULT_TIMEOUT; + + /** + * The number of shard copies that must be active before proceeding with the write. + */ + private ActiveShardCount activeShardCount = ActiveShardCount.DEFAULT; + + /** + * Initial delay after a rejection before retrying a bulk request. With the default maxRetries the total backoff for retrying rejections + * is about one minute per bulk request. Once the entire bulk request is successful the retry counter resets. + */ + private TimeValue retryBackoffInitialTime = timeValueMillis(500); + + /** + * Total number of retries attempted for rejections. There is no way to ask for unlimited retries. + */ + private int maxRetries = 11; + + /** + * The throttle for this request in sub-requests per second. {@link Float#POSITIVE_INFINITY} means set no throttle and that is the + * default. Throttling is done between batches, as we start the next scroll requests. That way we can increase the scroll's timeout to + * make sure that it contains any time that we might wait. + */ + private float requestsPerSecond = Float.POSITIVE_INFINITY; + + /** + * Should this task store its result? + */ + private boolean shouldStoreResult; + + /** + * The number of slices this task should be divided into. Defaults to 1 meaning the task isn't sliced into subtasks. + */ + private int slices = 1; + + /** + * Constructor for deserialization. + */ + public AbstractBulkByScrollRequest() { + } + + /** + * Constructor for actual use. + * + * @param searchRequest the search request to execute to get the documents to process + * @param setDefaults should this request set the defaults on the search request? 
Usually set to true but leave it false to support + * request slicing + */ + public AbstractBulkByScrollRequest(SearchRequest searchRequest, boolean setDefaults) { + this.searchRequest = searchRequest; + + // Set the defaults which differ from SearchRequest's defaults. + if (setDefaults) { + searchRequest.scroll(DEFAULT_SCROLL_TIMEOUT); + searchRequest.source(new SearchSourceBuilder()); + searchRequest.source().size(DEFAULT_SCROLL_SIZE); + } + } + + /** + * `this` cast to Self. Used for building fluent methods without cast + * warnings. + */ + protected abstract Self self(); + + @Override + public ActionRequestValidationException validate() { + ActionRequestValidationException e = searchRequest.validate(); + if (searchRequest.source().from() != -1) { + e = addValidationError("from is not supported in this context", e); + } + if (searchRequest.source().storedFields() != null) { + e = addValidationError("stored_fields is not supported in this context", e); + } + if (maxRetries < 0) { + e = addValidationError("retries cannot be negative", e); + } + if (false == (size == -1 || size > 0)) { + e = addValidationError( + "size should be greater than 0 if the request is limited to some number of documents or -1 if it isn't but it was [" + + size + "]", + e); + } + if (searchRequest.source().slice() != null && slices != 1) { + e = addValidationError("can't specify both slice and workers", e); + } + return e; + } + + /** + * Maximum number of processed documents. Defaults to -1 meaning process all + * documents. + */ + public int getSize() { + return size; + } + + /** + * Maximum number of processed documents. Defaults to -1 meaning process all + * documents. + */ + public Self setSize(int size) { + this.size = size; + return self(); + } + + /** + * Should version conflicts cause aborts? Defaults to true. + */ + public boolean isAbortOnVersionConflict() { + return abortOnVersionConflict; + } + + /** + * Should version conflicts cause aborts? Defaults to true. + */ + public Self setAbortOnVersionConflict(boolean abortOnVersionConflict) { + this.abortOnVersionConflict = abortOnVersionConflict; + return self(); + } + + /** + * Sets abortOnVersionConflict based on REST-friendly names. + */ + public void setConflicts(String conflicts) { + switch (conflicts) { + case "proceed": + setAbortOnVersionConflict(false); + return; + case "abort": + setAbortOnVersionConflict(true); + return; + default: + throw new IllegalArgumentException("conflicts may only be \"proceed\" or \"abort\" but was [" + conflicts + "]"); + } + } + + /** + * The search request that matches the documents to process. + */ + public SearchRequest getSearchRequest() { + return searchRequest; + } + + /** + * Call refresh on the indexes we've written to after the request ends? + */ + public boolean isRefresh() { + return refresh; + } + + /** + * Call refresh on the indexes we've written to after the request ends? + */ + public Self setRefresh(boolean refresh) { + this.refresh = refresh; + return self(); + } + + /** + * Timeout to wait for the shards on to be available for each bulk request? + */ + public TimeValue getTimeout() { + return timeout; + } + + /** + * Timeout to wait for the shards on to be available for each bulk request? + */ + public Self setTimeout(TimeValue timeout) { + this.timeout = timeout; + return self(); + } + + /** + * The number of shard copies that must be active before proceeding with the write.
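validate() above enforces a few request invariants: from and stored_fields are rejected, retries must be non-negative, size must be either -1 (all documents) or strictly positive, and a manual slice cannot be combined with workers, while setConflicts maps the REST-friendly strings onto the abort flag. A compact standalone restatement of the size rule and the conflicts mapping; the class and method names are illustrative, not the Elasticsearch API.

```java
public class BulkByScrollRequestRules {
    /** size is valid when it is -1 (meaning "all matching documents") or strictly positive. */
    static boolean isValidSize(int size) {
        return size == -1 || size > 0;
    }

    /** Maps the REST-friendly conflicts parameter onto the abort-on-version-conflict flag. */
    static boolean abortOnVersionConflict(String conflicts) {
        switch (conflicts) {
            case "proceed":
                return false;
            case "abort":
                return true;
            default:
                throw new IllegalArgumentException(
                    "conflicts may only be \"proceed\" or \"abort\" but was [" + conflicts + "]");
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidSize(-1));                    // true: process everything
        System.out.println(isValidSize(0));                     // false: zero documents is rejected
        System.out.println(abortOnVersionConflict("proceed"));  // false: conflicts are counted, not fatal
    }
}
```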
+ */ + public ActiveShardCount getWaitForActiveShards() { + return activeShardCount; + } + + /** + * Sets the number of shard copies that must be active before proceeding with the write. + * See {@link ReplicationRequest#waitForActiveShards(ActiveShardCount)} for details. + */ + public Self setWaitForActiveShards(ActiveShardCount activeShardCount) { + this.activeShardCount = activeShardCount; + return self(); + } + + /** + * A shortcut for {@link #setWaitForActiveShards(ActiveShardCount)} where the numerical + * shard count is passed in, instead of having to first call {@link ActiveShardCount#from(int)} + * to get the ActiveShardCount. + */ + public Self setWaitForActiveShards(final int waitForActiveShards) { + return setWaitForActiveShards(ActiveShardCount.from(waitForActiveShards)); + } + + /** + * Initial delay after a rejection before retrying request. + */ + public TimeValue getRetryBackoffInitialTime() { + return retryBackoffInitialTime; + } + + /** + * Set the initial delay after a rejection before retrying request. + */ + public Self setRetryBackoffInitialTime(TimeValue retryBackoffInitialTime) { + this.retryBackoffInitialTime = retryBackoffInitialTime; + return self(); + } + + /** + * Total number of retries attempted for rejections. + */ + public int getMaxRetries() { + return maxRetries; + } + + /** + * Set the total number of retries attempted for rejections. There is no way to ask for unlimited retries. + */ + public Self setMaxRetries(int maxRetries) { + this.maxRetries = maxRetries; + return self(); + } + + /** + * The throttle for this request in sub-requests per second. {@link Float#POSITIVE_INFINITY} means set no throttle and that is the + * default. Throttling is done between batches, as we start the next scroll requests. That way we can increase the scroll's timeout to + * make sure that it contains any time that we might wait. + */ + public float getRequestsPerSecond() { + return requestsPerSecond; + } + + /** + * Set the throttle for this request in sub-requests per second. {@link Float#POSITIVE_INFINITY} means set no throttle and that is the + * default. Throttling is done between batches, as we start the next scroll requests. That way we can increase the scroll's timeout to + * make sure that it contains any time that we might wait. + */ + public Self setRequestsPerSecond(float requestsPerSecond) { + if (requestsPerSecond <= 0) { + throw new IllegalArgumentException( + "[requests_per_second] must be greater than 0. Use Float.POSITIVE_INFINITY to disable throttling."); + } + this.requestsPerSecond = requestsPerSecond; + return self(); + } + + /** + * Should this task store its result after it has finished? + */ + public Self setShouldStoreResult(boolean shouldStoreResult) { + this.shouldStoreResult = shouldStoreResult; + return self(); + } + + @Override + public boolean getShouldStoreResult() { + return shouldStoreResult; + } + + /** + * The number of slices this task should be divided into. Defaults to 1 meaning the task isn't sliced into subtasks. + */ + public Self setSlices(int slices) { + if (slices < 1) { + throw new IllegalArgumentException("[slices] must be at least 1"); + } + this.slices = slices; + return self(); + } + + /** + * The number of slices this task should be divided into. Defaults to 1 meaning the task isn't sliced into subtasks. + */ + public int getSlices() { + return slices; + } + + /** + * Build a new request for a slice of the parent request. 
+ */ + public abstract Self forSlice(TaskId slicingTask, SearchRequest slice); + + /** + * Setup a clone of this request with the information needed to process a slice of it. + */ + protected Self doForSlice(Self request, TaskId slicingTask) { + request.setAbortOnVersionConflict(abortOnVersionConflict).setRefresh(refresh).setTimeout(timeout) + .setWaitForActiveShards(activeShardCount).setRetryBackoffInitialTime(retryBackoffInitialTime).setMaxRetries(maxRetries) + // Parent task will store result + .setShouldStoreResult(false) + // Split requests per second between all slices + .setRequestsPerSecond(requestsPerSecond / slices) + // Size is split between workers. This means the size might round down! + .setSize(size == SIZE_ALL_MATCHES ? SIZE_ALL_MATCHES : size / slices) + // Sub requests don't have workers + .setSlices(1); + // Set the parent task so this task is cancelled if we cancel the parent + request.setParentTask(slicingTask); + // TODO It'd be nice not to refresh on every slice. Instead we should refresh after the sub requests finish. + return request; + } + + @Override + public Task createTask(long id, String type, String action, TaskId parentTaskId) { + if (slices > 1) { + return new ParentBulkByScrollTask(id, type, action, getDescription(), parentTaskId, slices); + } + /* Extract the slice from the search request so it'll be available in the status. This is potentially useful for users that manually + * slice their search requests so they can keep track of it and **absolutely** useful for automatically sliced reindex requests so + * they can properly track the responses. */ + Integer sliceId = searchRequest.source().slice() == null ? null : searchRequest.source().slice().getId(); + return new WorkingBulkByScrollTask(id, type, action, getDescription(), parentTaskId, sliceId, requestsPerSecond); + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + searchRequest = new SearchRequest(); + searchRequest.readFrom(in); + abortOnVersionConflict = in.readBoolean(); + size = in.readVInt(); + refresh = in.readBoolean(); + timeout = new TimeValue(in); + activeShardCount = ActiveShardCount.readFrom(in); + retryBackoffInitialTime = new TimeValue(in); + maxRetries = in.readVInt(); + requestsPerSecond = in.readFloat(); + if (in.getVersion().onOrAfter(Version.V_5_1_1)) { + slices = in.readVInt(); + } else { + slices = 1; + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + searchRequest.writeTo(out); + out.writeBoolean(abortOnVersionConflict); + out.writeVInt(size); + out.writeBoolean(refresh); + timeout.writeTo(out); + activeShardCount.writeTo(out); + retryBackoffInitialTime.writeTo(out); + out.writeVInt(maxRetries); + out.writeFloat(requestsPerSecond); + if (out.getVersion().onOrAfter(Version.V_5_1_1)) { + out.writeVInt(slices); + } else { + if (slices > 1) { + throw new IllegalArgumentException("Attempting to send sliced reindex-style request to a node that doesn't support " + + "it. Version is [" + out.getVersion() + "] but must be [" + Version.V_5_1_1 + "]"); + } + } + } + + /** + * Append a short description of the search request to a StringBuilder. Used + * to make toString. 
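doForSlice above divides the parent request's budget evenly among its workers: each slice receives requestsPerSecond / slices, size is split as size / slices (so the total can round down), result storage stays on the parent, and every sub-request runs with slices = 1. A small worked example of that arithmetic using plain values rather than the actual request objects:

```java
public class SliceBudget {
    public static void main(String[] args) {
        int slices = 5;
        float requestsPerSecond = 1000f;
        int size = 12;

        // The throttle budget is split evenly between workers.
        float perSliceThrottle = requestsPerSecond / slices; // 200.0 per slice

        // Size is split with integer division, so the total processed may round down.
        int perSliceSize = size == -1 ? -1 : size / slices;   // 2 per slice -> 10 docs total, not 12

        System.out.println(perSliceThrottle);
        System.out.println(perSliceSize);
    }
}
```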
+ */ + protected void searchToString(StringBuilder b) { + if (searchRequest.indices() != null && searchRequest.indices().length != 0) { + b.append(Arrays.toString(searchRequest.indices())); + } else { + b.append("[all indices]"); + } + if (searchRequest.types() != null && searchRequest.types().length != 0) { + b.append(Arrays.toString(searchRequest.types())); + } + } + + @Override + public String getDescription() { + return this.toString(); + } +} diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByScrollRequestBuilder.java b/core/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByScrollRequestBuilder.java similarity index 91% rename from modules/reindex/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByScrollRequestBuilder.java rename to core/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByScrollRequestBuilder.java index 4d9c779f4875c..de3f22f0943a8 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByScrollRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByScrollRequestBuilder.java @@ -31,11 +31,11 @@ public abstract class AbstractBulkByScrollRequestBuilder< Request extends AbstractBulkByScrollRequest, Self extends AbstractBulkByScrollRequestBuilder> - extends ActionRequestBuilder { + extends ActionRequestBuilder { private final SearchRequestBuilder source; protected AbstractBulkByScrollRequestBuilder(ElasticsearchClient client, - Action action, SearchRequestBuilder source, Request request) { + Action action, SearchRequestBuilder source, Request request) { super(client, action, request); this.source = source; } @@ -141,4 +141,12 @@ public Self setShouldStoreResult(boolean shouldStoreResult) { request.setShouldStoreResult(shouldStoreResult); return self(); } + + /** + * The number of slices this task should be divided into. Defaults to 1 meaning the task isn't sliced into subtasks. 
+ */ + public Self setSlices(int workers) { + request.setSlices(workers); + return self(); + } } diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/AbstractBulkIndexByScrollRequest.java b/core/src/main/java/org/elasticsearch/index/reindex/AbstractBulkIndexByScrollRequest.java similarity index 76% rename from modules/reindex/src/main/java/org/elasticsearch/index/reindex/AbstractBulkIndexByScrollRequest.java rename to core/src/main/java/org/elasticsearch/index/reindex/AbstractBulkIndexByScrollRequest.java index 10fb0bc676e11..62c2635b301c5 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/AbstractBulkIndexByScrollRequest.java +++ b/core/src/main/java/org/elasticsearch/index/reindex/AbstractBulkIndexByScrollRequest.java @@ -24,6 +24,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.script.Script; +import org.elasticsearch.tasks.TaskId; import java.io.IOException; @@ -34,11 +35,21 @@ public abstract class AbstractBulkIndexByScrollRequest { protected AbstractBulkIndexByScrollRequestBuilder(ElasticsearchClient client, - Action action, SearchRequestBuilder search, Request request) { + Action action, SearchRequestBuilder search, Request request) { super(client, action, search, request); } diff --git a/core/src/main/java/org/elasticsearch/index/reindex/BulkByScrollResponse.java b/core/src/main/java/org/elasticsearch/index/reindex/BulkByScrollResponse.java new file mode 100644 index 0000000000000..400baf7b9e2c6 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/reindex/BulkByScrollResponse.java @@ -0,0 +1,201 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.reindex; + +import org.elasticsearch.action.ActionResponse; +import org.elasticsearch.action.bulk.BulkItemResponse.Failure; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; + +import static java.lang.Math.max; +import static java.lang.Math.min; +import static java.util.Objects.requireNonNull; +import static org.elasticsearch.common.unit.TimeValue.timeValueNanos; + +/** + * Response used for actions that index many documents using a scroll request. 
+ */ +public class BulkByScrollResponse extends ActionResponse implements ToXContent { + private TimeValue took; + private BulkByScrollTask.Status status; + private List bulkFailures; + private List searchFailures; + private boolean timedOut; + + public BulkByScrollResponse() { + } + + public BulkByScrollResponse(TimeValue took, BulkByScrollTask.Status status, List bulkFailures, + List searchFailures, boolean timedOut) { + this.took = took; + this.status = requireNonNull(status, "Null status not supported"); + this.bulkFailures = bulkFailures; + this.searchFailures = searchFailures; + this.timedOut = timedOut; + } + + public BulkByScrollResponse(Iterable toMerge, @Nullable String reasonCancelled) { + long mergedTook = 0; + List statuses = new ArrayList<>(); + bulkFailures = new ArrayList<>(); + searchFailures = new ArrayList<>(); + for (BulkByScrollResponse response : toMerge) { + mergedTook = max(mergedTook, response.getTook().nanos()); + statuses.add(new BulkByScrollTask.StatusOrException(response.status)); + bulkFailures.addAll(response.getBulkFailures()); + searchFailures.addAll(response.getSearchFailures()); + timedOut |= response.isTimedOut(); + } + took = timeValueNanos(mergedTook); + status = new BulkByScrollTask.Status(statuses, reasonCancelled); + } + + public TimeValue getTook() { + return took; + } + + public BulkByScrollTask.Status getStatus() { + return status; + } + + public long getCreated() { + return status.getCreated(); + } + + public long getDeleted() { + return status.getDeleted(); + } + + public long getUpdated() { + return status.getUpdated(); + } + + public int getBatches() { + return status.getBatches(); + } + + public long getVersionConflicts() { + return status.getVersionConflicts(); + } + + public long getNoops() { + return status.getNoops(); + } + + /** + * The reason that the request was canceled or null if it hasn't been. + */ + public String getReasonCancelled() { + return status.getReasonCancelled(); + } + + /** + * The number of times that the request had retry bulk actions. + */ + public long getBulkRetries() { + return status.getBulkRetries(); + } + + /** + * The number of times that the request had retry search actions. + */ + public long getSearchRetries() { + return status.getSearchRetries(); + } + + /** + * All of the bulk failures. Version conflicts are only included if the request sets abortOnVersionConflict to true (the default). + */ + public List getBulkFailures() { + return bulkFailures; + } + + /** + * All search failures. + */ + public List getSearchFailures() { + return searchFailures; + } + + /** + * Did any of the sub-requests that were part of this request timeout? 
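The merging constructor above combines per-slice responses: the overall took is the maximum across slices, timed_out is true if any slice timed out, and bulk and search failures are concatenated. Below is a reduced sketch of that merge using plain longs and booleans in place of the response objects; the names are illustrative only.

```java
import java.util.Arrays;
import java.util.List;

public class SliceMerge {
    /** The merged took is the largest slice took, mirroring mergedTook = max(...) above. */
    static long mergeTookNanos(List<Long> sliceTookNanos) {
        long merged = 0;
        for (long took : sliceTookNanos) {
            merged = Math.max(merged, took);
        }
        return merged;
    }

    /** Any timed out slice marks the whole request as timed out, mirroring timedOut |= ... above. */
    static boolean mergeTimedOut(List<Boolean> sliceTimedOut) {
        boolean merged = false;
        for (boolean timedOut : sliceTimedOut) {
            merged |= timedOut;
        }
        return merged;
    }

    public static void main(String[] args) {
        System.out.println(mergeTookNanos(Arrays.asList(1_000_000L, 5_000_000L, 2_000_000L))); // 5000000
        System.out.println(mergeTimedOut(Arrays.asList(false, true, false)));                  // true
    }
}
```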
+ */ + public boolean isTimedOut() { + return timedOut; + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + took.writeTo(out); + status.writeTo(out); + out.writeList(bulkFailures); + out.writeList(searchFailures); + out.writeBoolean(timedOut); + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + took = new TimeValue(in); + status = new BulkByScrollTask.Status(in); + bulkFailures = in.readList(Failure::new); + searchFailures = in.readList(ScrollableHitSource.SearchFailure::new); + timedOut = in.readBoolean(); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.field("took", took.millis()); + builder.field("timed_out", timedOut); + status.innerXContent(builder, params); + builder.startArray("failures"); + for (Failure failure: bulkFailures) { + builder.startObject(); + failure.toXContent(builder, params); + builder.endObject(); + } + for (ScrollableHitSource.SearchFailure failure: searchFailures) { + failure.toXContent(builder, params); + } + builder.endArray(); + return builder; + } + + @Override + public String toString() { + StringBuilder builder = new StringBuilder(); + builder.append("BulkIndexByScrollResponse["); + builder.append("took=").append(took).append(','); + builder.append("timed_out=").append(timedOut).append(','); + status.innerToString(builder); + builder.append(",bulk_failures=").append(getBulkFailures().subList(0, min(3, getBulkFailures().size()))); + builder.append(",search_failures=").append(getSearchFailures().subList(0, min(3, getSearchFailures().size()))); + return builder.append(']').toString(); + } +} diff --git a/core/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java b/core/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java new file mode 100644 index 0000000000000..284fea7a38bfc --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java @@ -0,0 +1,518 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.index.reindex; + +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.Version; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.tasks.CancellableTask; +import org.elasticsearch.tasks.Task; +import org.elasticsearch.tasks.TaskId; +import org.elasticsearch.tasks.TaskInfo; + +import java.io.IOException; +import java.util.List; +import java.util.Objects; + +import static java.lang.Math.min; +import static java.util.Collections.emptyList; +import static org.elasticsearch.common.unit.TimeValue.timeValueNanos; + +/** + * Task storing information about a currently running BulkByScroll request. + */ +public abstract class BulkByScrollTask extends CancellableTask { + public BulkByScrollTask(long id, String type, String action, String description, TaskId parentTaskId) { + super(id, type, action, description, parentTaskId); + } + + /** + * The number of sub-slices that are still running. {@link WorkingBulkByScrollTask} will always have 0 and + * {@link ParentBulkByScrollTask} will return the number of waiting tasks. Used to decide how to perform rethrottling. + */ + public abstract int runningSliceSubTasks(); + + /** + * Apply the {@code newRequestsPerSecond}. + */ + public abstract void rethrottle(float newRequestsPerSecond); + + /* + * Overridden to force children to return compatible status. + */ + public abstract BulkByScrollTask.Status getStatus(); + + /** + * Build the status for this task given a snapshot of the information of running slices. + */ + public abstract TaskInfo getInfoGivenSliceInfo(String localNodeId, List sliceInfo); + + @Override + public boolean shouldCancelChildrenOnCancellation() { + return true; + } + + public static class Status implements Task.Status, SuccessfullyProcessed { + public static final String NAME = "bulk-by-scroll"; + + /** + * XContent param name to indicate if "created" count must be included + * in the response. + */ + public static final String INCLUDE_CREATED = "include_created"; + + /** + * XContent param name to indicate if "updated" count must be included + * in the response. + */ + public static final String INCLUDE_UPDATED = "include_updated"; + + private final Integer sliceId; + private final long total; + private final long updated; + private final long created; + private final long deleted; + private final int batches; + private final long versionConflicts; + private final long noops; + private final long bulkRetries; + private final long searchRetries; + private final TimeValue throttled; + private final float requestsPerSecond; + private final String reasonCancelled; + private final TimeValue throttledUntil; + private final List sliceStatuses; + + public Status(Integer sliceId, long total, long updated, long created, long deleted, int batches, long versionConflicts, long noops, + long bulkRetries, long searchRetries, TimeValue throttled, float requestsPerSecond, @Nullable String reasonCancelled, + TimeValue throttledUntil) { + this.sliceId = sliceId == null ? 
null : checkPositive(sliceId, "sliceId"); + this.total = checkPositive(total, "total"); + this.updated = checkPositive(updated, "updated"); + this.created = checkPositive(created, "created"); + this.deleted = checkPositive(deleted, "deleted"); + this.batches = checkPositive(batches, "batches"); + this.versionConflicts = checkPositive(versionConflicts, "versionConflicts"); + this.noops = checkPositive(noops, "noops"); + this.bulkRetries = checkPositive(bulkRetries, "bulkRetries"); + this.searchRetries = checkPositive(searchRetries, "searchRetries"); + this.throttled = throttled; + this.requestsPerSecond = requestsPerSecond; + this.reasonCancelled = reasonCancelled; + this.throttledUntil = throttledUntil; + this.sliceStatuses = emptyList(); + } + + /** + * Constructor merging many statuses. + * + * @param sliceStatuses Statuses of sub requests that this task was sliced into. + * @param reasonCancelled Reason that this *this* task was cancelled. Note that each entry in {@code sliceStatuses} can be cancelled + * independently of this task but if this task is cancelled then the workers *should* be cancelled. + */ + public Status(List sliceStatuses, @Nullable String reasonCancelled) { + sliceId = null; + this.reasonCancelled = reasonCancelled; + + long mergedTotal = 0; + long mergedUpdated = 0; + long mergedCreated = 0; + long mergedDeleted = 0; + int mergedBatches = 0; + long mergedVersionConflicts = 0; + long mergedNoops = 0; + long mergedBulkRetries = 0; + long mergedSearchRetries = 0; + long mergedThrottled = 0; + float mergedRequestsPerSecond = 0; + long mergedThrottledUntil = Long.MAX_VALUE; + + for (StatusOrException slice : sliceStatuses) { + if (slice == null) { + // Hasn't returned yet. + continue; + } + if (slice.status == null) { + // This slice failed catastrophically so it doesn't count towards the status + continue; + } + mergedTotal += slice.status.getTotal(); + mergedUpdated += slice.status.getUpdated(); + mergedCreated += slice.status.getCreated(); + mergedDeleted += slice.status.getDeleted(); + mergedBatches += slice.status.getBatches(); + mergedVersionConflicts += slice.status.getVersionConflicts(); + mergedNoops += slice.status.getNoops(); + mergedBulkRetries += slice.status.getBulkRetries(); + mergedSearchRetries += slice.status.getSearchRetries(); + mergedThrottled += slice.status.getThrottled().nanos(); + mergedRequestsPerSecond += slice.status.getRequestsPerSecond(); + mergedThrottledUntil = min(mergedThrottledUntil, slice.status.getThrottledUntil().nanos()); + } + + total = mergedTotal; + updated = mergedUpdated; + created = mergedCreated; + deleted = mergedDeleted; + batches = mergedBatches; + versionConflicts = mergedVersionConflicts; + noops = mergedNoops; + bulkRetries = mergedBulkRetries; + searchRetries = mergedSearchRetries; + throttled = timeValueNanos(mergedThrottled); + requestsPerSecond = mergedRequestsPerSecond; + throttledUntil = timeValueNanos(mergedThrottledUntil == Long.MAX_VALUE ? 
0 : mergedThrottledUntil); + this.sliceStatuses = sliceStatuses; + } + + public Status(StreamInput in) throws IOException { + if (in.getVersion().onOrAfter(Version.V_5_1_1)) { + sliceId = in.readOptionalVInt(); + } else { + sliceId = null; + } + total = in.readVLong(); + updated = in.readVLong(); + created = in.readVLong(); + deleted = in.readVLong(); + batches = in.readVInt(); + versionConflicts = in.readVLong(); + noops = in.readVLong(); + bulkRetries = in.readVLong(); + searchRetries = in.readVLong(); + throttled = new TimeValue(in); + requestsPerSecond = in.readFloat(); + reasonCancelled = in.readOptionalString(); + throttledUntil = new TimeValue(in); + if (in.getVersion().onOrAfter(Version.V_5_1_1)) { + sliceStatuses = in.readList(stream -> stream.readOptionalWriteable(StatusOrException::new)); + } else { + sliceStatuses = emptyList(); + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + if (out.getVersion().onOrAfter(Version.V_5_1_1)) { + out.writeOptionalVInt(sliceId); + } + out.writeVLong(total); + out.writeVLong(updated); + out.writeVLong(created); + out.writeVLong(deleted); + out.writeVInt(batches); + out.writeVLong(versionConflicts); + out.writeVLong(noops); + out.writeVLong(bulkRetries); + out.writeVLong(searchRetries); + throttled.writeTo(out); + out.writeFloat(requestsPerSecond); + out.writeOptionalString(reasonCancelled); + throttledUntil.writeTo(out); + if (out.getVersion().onOrAfter(Version.V_5_1_1)) { + out.writeVInt(sliceStatuses.size()); + for (StatusOrException sliceStatus : sliceStatuses) { + out.writeOptionalWriteable(sliceStatus); + } + } + } + + @Override + public String getWriteableName() { + return NAME; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + innerXContent(builder, params); + return builder.endObject(); + } + + public XContentBuilder innerXContent(XContentBuilder builder, Params params) + throws IOException { + if (sliceId != null) { + builder.field("slice_id", sliceId); + } + builder.field("total", total); + if (params.paramAsBoolean(INCLUDE_UPDATED, true)) { + builder.field("updated", updated); + } + if (params.paramAsBoolean(INCLUDE_CREATED, true)) { + builder.field("created", created); + } + builder.field("deleted", deleted); + builder.field("batches", batches); + builder.field("version_conflicts", versionConflicts); + builder.field("noops", noops); + builder.startObject("retries"); { + builder.field("bulk", bulkRetries); + builder.field("search", searchRetries); + } + builder.endObject(); + builder.timeValueField("throttled_millis", "throttled", throttled); + builder.field("requests_per_second", requestsPerSecond == Float.POSITIVE_INFINITY ? 
-1 : requestsPerSecond); + if (reasonCancelled != null) { + builder.field("canceled", reasonCancelled); + } + builder.timeValueField("throttled_until_millis", "throttled_until", throttledUntil); + if (false == sliceStatuses.isEmpty()) { + builder.startArray("slices"); + for (StatusOrException slice : sliceStatuses) { + if (slice == null) { + builder.nullValue(); + } else { + slice.toXContent(builder, params); + } + } + builder.endArray(); + } + return builder; + } + + @Override + public String toString() { + StringBuilder builder = new StringBuilder(); + builder.append("BulkIndexByScrollResponse["); + innerToString(builder); + return builder.append(']').toString(); + } + + public void innerToString(StringBuilder builder) { + builder.append("sliceId=").append(sliceId); + builder.append(",updated=").append(updated); + builder.append(",created=").append(created); + builder.append(",deleted=").append(deleted); + builder.append(",batches=").append(batches); + builder.append(",versionConflicts=").append(versionConflicts); + builder.append(",noops=").append(noops); + builder.append(",retries=").append(bulkRetries); + if (reasonCancelled != null) { + builder.append(",canceled=").append(reasonCancelled); + } + builder.append(",throttledUntil=").append(throttledUntil); + if (false == sliceStatuses.isEmpty()) { + builder.append(",workers=").append(sliceStatuses); + } + } + + /** + * The id of the slice that this status is reporting or {@code null} if this isn't the status of a sub-slice. + */ + Integer getSliceId() { + return sliceId; + } + + /** + * The total number of documents this request will process. 0 means we don't yet know or, possibly, there are actually 0 documents + * to process. Its ok that these have the same meaning because any request with 0 actual documents should be quite short lived. + */ + public long getTotal() { + return total; + } + + @Override + public long getUpdated() { + return updated; + } + + @Override + public long getCreated() { + return created; + } + + @Override + public long getDeleted() { + return deleted; + } + + /** + * Number of scan responses this request has processed. + */ + public int getBatches() { + return batches; + } + + /** + * Number of version conflicts this request has hit. + */ + public long getVersionConflicts() { + return versionConflicts; + } + + /** + * Number of noops (skipped bulk items) as part of this request. + */ + public long getNoops() { + return noops; + } + + /** + * Number of retries that had to be attempted due to bulk actions being rejected. + */ + public long getBulkRetries() { + return bulkRetries; + } + + /** + * Number of retries that had to be attempted due to search actions being rejected. + */ + public long getSearchRetries() { + return searchRetries; + } + + /** + * The total time this request has throttled itself not including the current throttle time if it is currently sleeping. + */ + public TimeValue getThrottled() { + return throttled; + } + + /** + * The number of requests per second to which to throttle the request. Float.POSITIVE_INFINITY means unlimited. + */ + public float getRequestsPerSecond() { + return requestsPerSecond; + } + + /** + * The reason that the request was canceled or null if it hasn't been. + */ + public String getReasonCancelled() { + return reasonCancelled; + } + + /** + * Remaining delay of any current throttle sleep or 0 if not sleeping. + */ + public TimeValue getThrottledUntil() { + return throttledUntil; + } + + /** + * Statuses of the sub requests into which this sub-request was sliced. 
Empty if this request wasn't sliced into sub-requests. + */ + public List getSliceStatuses() { + return sliceStatuses; + } + + private int checkPositive(int value, String name) { + if (value < 0) { + throw new IllegalArgumentException(name + " must be greater than 0 but was [" + value + "]"); + } + return value; + } + + private long checkPositive(long value, String name) { + if (value < 0) { + throw new IllegalArgumentException(name + " must be greater than 0 but was [" + value + "]"); + } + return value; + } + } + + /** + * The status of a slice of the request. Successful requests store the {@link StatusOrException#status} while failing requests store a + * {@link StatusOrException#exception}. + */ + public static class StatusOrException implements Writeable, ToXContent { + private final Status status; + private final Exception exception; + + public StatusOrException(Status status) { + this.status = status; + exception = null; + } + + public StatusOrException(Exception exception) { + status = null; + this.exception = exception; + } + + /** + * Read from a stream. + */ + public StatusOrException(StreamInput in) throws IOException { + if (in.readBoolean()) { + status = new Status(in); + exception = null; + } else { + status = null; + exception = in.readException(); + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + if (exception == null) { + out.writeBoolean(true); + status.writeTo(out); + } else { + out.writeBoolean(false); + out.writeException(exception); + } + } + + public Status getStatus() { + return status; + } + + public Exception getException() { + return exception; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + if (exception == null) { + status.toXContent(builder, params); + } else { + builder.startObject(); + ElasticsearchException.generateThrowableXContent(builder, params, exception); + builder.endObject(); + } + return builder; + } + + @Override + public boolean equals(Object obj) { + if (obj == null) { + return false; + } + if (obj.getClass() != BulkByScrollTask.StatusOrException.class) { + return false; + } + BulkByScrollTask.StatusOrException other = (StatusOrException) obj; + return Objects.equals(status, other.status) + && Objects.equals(exception, other.exception); + } + + @Override + public int hashCode() { + return Objects.hash(status, exception); + } + } + +} diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/ClientScrollableHitSource.java b/core/src/main/java/org/elasticsearch/index/reindex/ClientScrollableHitSource.java similarity index 85% rename from modules/reindex/src/main/java/org/elasticsearch/index/reindex/ClientScrollableHitSource.java rename to core/src/main/java/org/elasticsearch/index/reindex/ClientScrollableHitSource.java index 4d5f762340014..53899757484a3 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/ClientScrollableHitSource.java +++ b/core/src/main/java/org/elasticsearch/index/reindex/ClientScrollableHitSource.java @@ -37,6 +37,8 @@ import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.mapper.ParentFieldMapper; import org.elasticsearch.index.mapper.RoutingFieldMapper; import org.elasticsearch.index.mapper.TTLFieldMapper; 
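As an aside on the `BulkByScrollTask.Status` class above, the list constructor merges per-slice statuses by summing the counters and combining the throttle information. The following is a minimal, hypothetical sketch (the slice ids, counts, and times are invented) of that behaviour using the public constructors introduced in this change:

```java
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.reindex.BulkByScrollTask.Status;
import org.elasticsearch.index.reindex.BulkByScrollTask.StatusOrException;

import java.util.Arrays;

public class SliceStatusMergeExample {
    public static void main(String[] args) {
        // Two hypothetical slice statuses: slice 0 updated 10 documents, slice 1 created 5.
        Status slice0 = new Status(0, 10, 10, 0, 0, 1, 0, 0, 0, 0,
                TimeValue.timeValueMillis(100), Float.POSITIVE_INFINITY, null, TimeValue.timeValueMillis(0));
        Status slice1 = new Status(1, 5, 0, 5, 0, 1, 0, 0, 0, 0,
                TimeValue.timeValueMillis(50), Float.POSITIVE_INFINITY, null, TimeValue.timeValueMillis(0));

        // The merging constructor sums the counters and adds up the throttled time.
        Status merged = new Status(
                Arrays.asList(new StatusOrException(slice0), new StatusOrException(slice1)), null);

        System.out.println(merged.getTotal());     // 15
        System.out.println(merged.getUpdated());   // 10
        System.out.println(merged.getCreated());   // 5
        System.out.println(merged.getThrottled()); // 150ms of throttling summed across both slices
    }
}
```

Note that the merged `throttled` time is a sum while `throttled_until` is the minimum across slices, as the loop in the merging constructor shows.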
@@ -81,18 +83,16 @@ public void doStart(Consumer onResponse) { @Override protected void doStartNextScroll(String scrollId, TimeValue extraKeepAlive, Consumer onResponse) { - SearchScrollRequest request = new SearchScrollRequest(); - // Add the wait time into the scroll timeout so it won't timeout while we wait for throttling - request.scrollId(scrollId).scroll(timeValueNanos(firstSearchRequest.scroll().keepAlive().nanos() + extraKeepAlive.nanos())); - searchWithRetry(listener -> client.searchScroll(request, listener), r -> consume(r, onResponse)); + searchWithRetry(listener -> { + SearchScrollRequest request = new SearchScrollRequest(); + // Add the wait time into the scroll timeout so it won't timeout while we wait for throttling + request.scrollId(scrollId).scroll(timeValueNanos(firstSearchRequest.scroll().keepAlive().nanos() + extraKeepAlive.nanos())); + client.searchScroll(request, listener); + }, r -> consume(r, onResponse)); } @Override - public void clearScroll(String scrollId) { - /* - * Fire off the clear scroll but don't wait for it it return before - * we send the use their response. - */ + public void clearScroll(String scrollId, Runnable onCompletion) { ClearScrollRequest clearScrollRequest = new ClearScrollRequest(); clearScrollRequest.addScrollId(scrollId); /* @@ -103,15 +103,22 @@ public void clearScroll(String scrollId) { @Override public void onResponse(ClearScrollResponse response) { logger.debug("Freed [{}] contexts", response.getNumFreed()); + onCompletion.run(); } @Override public void onFailure(Exception e) { logger.warn((Supplier) () -> new ParameterizedMessage("Failed to clear scroll [{}]", scrollId), e); + onCompletion.run(); } }); } + @Override + protected void cleanup(Runnable onCompletion) { + onCompletion.run(); + } + /** * Run a search action and call onResponse when a the response comes in, retrying if the action fails with an exception caused by * rejected execution. @@ -128,6 +135,10 @@ private void searchWithRetry(Consumer> action, Co */ class RetryHelper extends AbstractRunnable implements ActionListener { private final Iterator retries = backoffPolicy.iterator(); + /** + * The runnable to run that retries in the same context as the original call. + */ + private Runnable retryWithContext; private volatile int retryCount = 0; @Override @@ -148,7 +159,7 @@ public void onFailure(Exception e) { TimeValue delay = retries.next(); logger.trace((Supplier) () -> new ParameterizedMessage("retrying rejected search after [{}]", delay), e); countSearchRetry.run(); - threadPool.schedule(delay, ThreadPool.Names.SAME, this); + threadPool.schedule(delay, ThreadPool.Names.SAME, retryWithContext); } else { logger.warn( (Supplier) () -> new ParameterizedMessage( @@ -161,7 +172,10 @@ public void onFailure(Exception e) { } } } - new RetryHelper().run(); + RetryHelper helper = new RetryHelper(); + // Wrap the helper in a runnable that preserves the current context so we keep it on retry. + helper.retryWithContext = threadPool.getThreadContext().preserveContext(helper); + helper.run(); } private void consume(SearchResponse response, Consumer onResponse) { @@ -175,7 +189,7 @@ private Response wrap(SearchResponse response) { } else { failures = new ArrayList<>(response.getShardFailures().length); for (ShardSearchFailure failure: response.getShardFailures()) { - String nodeId = failure.shard() == null ? null : failure.shard().nodeId(); + String nodeId = failure.shard() == null ? 
null : failure.shard().getNodeId(); failures.add(new SearchFailure(failure.getCause(), failure.index(), failure.shardId(), nodeId)); } } @@ -197,9 +211,9 @@ private static class ClientHit implements Hit { private final SearchHit delegate; private final BytesReference source; - public ClientHit(SearchHit delegate) { + ClientHit(SearchHit delegate) { this.delegate = delegate; - source = delegate.hasSource() ? null : delegate.getSourceRef(); + source = delegate.hasSource() ? delegate.getSourceRef() : null; } @Override @@ -222,6 +236,10 @@ public BytesReference getSource() { return source; } + @Override + public XContentType getXContentType() { + return XContentFactory.xContentType(source); + } @Override public long getVersion() { return delegate.getVersion(); @@ -248,8 +266,8 @@ public Long getTTL() { } private T fieldValue(String fieldName) { - SearchHitField field = delegate.field(fieldName); - return field == null ? null : field.value(); + SearchHitField field = delegate.getField(fieldName); + return field == null ? null : field.getValue(); } } } diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryAction.java b/core/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryAction.java similarity index 89% rename from modules/reindex/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryAction.java rename to core/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryAction.java index c789e9c77b4a6..c1abb16ca3977 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryAction.java +++ b/core/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryAction.java @@ -22,7 +22,7 @@ import org.elasticsearch.action.Action; import org.elasticsearch.client.ElasticsearchClient; -public class DeleteByQueryAction extends Action { +public class DeleteByQueryAction extends Action { public static final DeleteByQueryAction INSTANCE = new DeleteByQueryAction(); public static final String NAME = "indices:data/write/delete/byquery"; @@ -37,7 +37,7 @@ public DeleteByQueryRequestBuilder newRequestBuilder(ElasticsearchClient client) } @Override - public BulkIndexByScrollResponse newResponse() { - return new BulkIndexByScrollResponse(); + public BulkByScrollResponse newResponse() { + return new BulkByScrollResponse(); } } diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryRequest.java b/core/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryRequest.java similarity index 77% rename from modules/reindex/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryRequest.java rename to core/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryRequest.java index df1f4d387abd7..cc010fd77bb44 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryRequest.java +++ b/core/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryRequest.java @@ -23,6 +23,9 @@ import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.action.support.IndicesOptions; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; +import org.elasticsearch.tasks.TaskId; import static org.elasticsearch.action.ValidateActions.addValidationError; @@ -45,13 +48,21 @@ */ public class DeleteByQueryRequest extends AbstractBulkByScrollRequest implements IndicesRequest.Replaceable { + private static final DeprecationLogger DEPRECATION_LOGGER = new 
DeprecationLogger(Loggers.getLogger(DeleteByQueryRequest.class)); + public DeleteByQueryRequest() { } public DeleteByQueryRequest(SearchRequest search) { - super(search); + this(search, true); + } + + private DeleteByQueryRequest(SearchRequest search, boolean setDefaults) { + super(search, setDefaults); // Delete-By-Query does not require the source - search.source().fetchSource(false); + if (setDefaults) { + search.source().fetchSource(false); + } } @Override @@ -67,10 +78,18 @@ public ActionRequestValidationException validate() { } if (getSearchRequest() == null || getSearchRequest().source() == null) { e = addValidationError("source is missing", e); + } else if (getSearchRequest().source().query() == null) { + DEPRECATION_LOGGER.deprecated("A request to _delete_by_query without an explicit " + + "query is deprecated. Specify a query explicitly instead."); } return e; } + @Override + public DeleteByQueryRequest forSlice(TaskId slicingTask, SearchRequest slice) { + return doForSlice(new DeleteByQueryRequest(slice, false), slicingTask); + } + @Override public String toString() { StringBuilder b = new StringBuilder(); @@ -99,4 +118,16 @@ public IndicesOptions indicesOptions() { assert getSearchRequest() != null; return getSearchRequest().indicesOptions(); } + + public String[] types() { + assert getSearchRequest() != null; + return getSearchRequest().types(); + } + + public DeleteByQueryRequest types(String... types) { + assert getSearchRequest() != null; + getSearchRequest().types(types); + return this; + } + } diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryRequestBuilder.java b/core/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryRequestBuilder.java similarity index 93% rename from modules/reindex/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryRequestBuilder.java rename to core/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryRequestBuilder.java index eb528793236ba..e94d1308a74be 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/reindex/DeleteByQueryRequestBuilder.java @@ -28,12 +28,12 @@ public class DeleteByQueryRequestBuilder extends AbstractBulkByScrollRequestBuilder { public DeleteByQueryRequestBuilder(ElasticsearchClient client, - Action action) { + Action action) { this(client, action, new SearchRequestBuilder(client, SearchAction.INSTANCE)); } private DeleteByQueryRequestBuilder(ElasticsearchClient client, - Action action, + Action action, SearchRequestBuilder search) { super(client, action, search, new DeleteByQueryRequest(search.request())); } diff --git a/core/src/main/java/org/elasticsearch/index/reindex/ParentBulkByScrollTask.java b/core/src/main/java/org/elasticsearch/index/reindex/ParentBulkByScrollTask.java new file mode 100644 index 0000000000000..bea9bc203653d --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/reindex/ParentBulkByScrollTask.java @@ -0,0 +1,156 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
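As a usage note for the deprecation added to `DeleteByQueryRequest#validate()` above: only requests without an explicit query trip the warning. A hypothetical sketch (the index and field names are invented) of building a request that supplies the query explicitly:

```java
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.reindex.DeleteByQueryRequest;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class DeleteByQueryRequestExample {
    public static void main(String[] args) {
        SearchRequest search = new SearchRequest("tweets");
        // Setting a query explicitly means validate() will not log the deprecation warning.
        search.source(new SearchSourceBuilder().query(QueryBuilders.termQuery("user", "kimchy")));
        DeleteByQueryRequest request = new DeleteByQueryRequest(search);
        System.out.println(request);
    }
}
```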
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.reindex; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.common.collect.Tuple; +import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.tasks.TaskId; +import org.elasticsearch.tasks.TaskInfo; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; +import java.util.concurrent.atomic.AtomicInteger; + +import static java.util.Collections.unmodifiableList; + +/** + * Task for parent bulk by scroll requests that have sub-workers. + */ +public class ParentBulkByScrollTask extends BulkByScrollTask { + /** + * Holds the responses as they come back. This uses {@link Tuple} as an "Either" style holder where only the response or the exception + * is set. + */ + private final AtomicArray results; + private final AtomicInteger counter; + + public ParentBulkByScrollTask(long id, String type, String action, String description, TaskId parentTaskId, int slices) { + super(id, type, action, description, parentTaskId); + this.results = new AtomicArray<>(slices); + this.counter = new AtomicInteger(slices); + } + + @Override + public void rethrottle(float newRequestsPerSecond) { + // Nothing to do because all rethrottling is done on slice sub tasks. + } + + @Override + public Status getStatus() { + // We only have access to the statuses of requests that have finished so we return them + List statuses = Arrays.asList(new StatusOrException[results.length()]); + addResultsToList(statuses); + return new Status(unmodifiableList(statuses), getReasonCancelled()); + } + + @Override + public int runningSliceSubTasks() { + return counter.get(); + } + + @Override + public TaskInfo getInfoGivenSliceInfo(String localNodeId, List sliceInfo) { + /* Merge the list of finished sub requests with the provided info. If a slice is both finished and in the list then we prefer the + * finished status because we don't expect them to change after the task is finished. */ + List sliceStatuses = Arrays.asList(new StatusOrException[results.length()]); + for (TaskInfo t : sliceInfo) { + Status status = (Status) t.getStatus(); + sliceStatuses.set(status.getSliceId(), new StatusOrException(status)); + } + addResultsToList(sliceStatuses); + Status status = new Status(sliceStatuses, getReasonCancelled()); + return taskInfo(localNodeId, getDescription(), status); + } + + private void addResultsToList(List sliceStatuses) { + for (Result t : results.asList()) { + if (t.response != null) { + sliceStatuses.set(t.sliceId, new StatusOrException(t.response.getStatus())); + } else { + sliceStatuses.set(t.sliceId, new StatusOrException(t.failure)); + } + } + } + + /** + * Record a response from a slice and respond to the listener if the request is finished. + */ + public void onSliceResponse(ActionListener listener, int sliceId, BulkByScrollResponse response) { + results.setOnce(sliceId, new Result(sliceId, response)); + /* If the request isn't finished we could automatically rethrottle the sub-requests here but we would only want to do that if we + * were fairly sure they had a while left to go. 
*/ + recordSliceCompletionAndRespondIfAllDone(listener); + } + + /** + * Record a failure from a slice and respond to the listener if the request is finished. + */ + public void onSliceFailure(ActionListener listener, int sliceId, Exception e) { + results.setOnce(sliceId, new Result(sliceId, e)); + recordSliceCompletionAndRespondIfAllDone(listener); + // TODO cancel when a slice fails? + } + + private void recordSliceCompletionAndRespondIfAllDone(ActionListener listener) { + if (counter.decrementAndGet() != 0) { + return; + } + List responses = new ArrayList<>(results.length()); + Exception exception = null; + for (Result t : results.asList()) { + if (t.response == null) { + assert t.failure != null : "exception shouldn't be null if value is null"; + if (exception == null) { + exception = t.failure; + } else { + exception.addSuppressed(t.failure); + } + } else { + assert t.failure == null : "exception should be null if response is not null"; + responses.add(t.response); + } + } + if (exception == null) { + listener.onResponse(new BulkByScrollResponse(responses, getReasonCancelled())); + } else { + listener.onFailure(exception); + } + } + + private static final class Result { + final BulkByScrollResponse response; + final int sliceId; + final Exception failure; + + private Result(int sliceId, BulkByScrollResponse response) { + this.sliceId = sliceId; + this.response = response; + failure = null; + } + + private Result(int sliceId, Exception failure) { + this.sliceId = sliceId; + this.failure = failure; + response = null; + } + } +} diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/ReindexAction.java b/core/src/main/java/org/elasticsearch/index/reindex/ReindexAction.java similarity index 86% rename from modules/reindex/src/main/java/org/elasticsearch/index/reindex/ReindexAction.java rename to core/src/main/java/org/elasticsearch/index/reindex/ReindexAction.java index aa16a859c1deb..1c53a925f0d71 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/ReindexAction.java +++ b/core/src/main/java/org/elasticsearch/index/reindex/ReindexAction.java @@ -22,7 +22,7 @@ import org.elasticsearch.action.Action; import org.elasticsearch.client.ElasticsearchClient; -public class ReindexAction extends Action { +public class ReindexAction extends Action { public static final ReindexAction INSTANCE = new ReindexAction(); public static final String NAME = "indices:data/write/reindex"; @@ -36,7 +36,7 @@ public ReindexRequestBuilder newRequestBuilder(ElasticsearchClient client) { } @Override - public BulkIndexByScrollResponse newResponse() { - return new BulkIndexByScrollResponse(); + public BulkByScrollResponse newResponse() { + return new BulkByScrollResponse(); } } diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/ReindexRequest.java b/core/src/main/java/org/elasticsearch/index/reindex/ReindexRequest.java similarity index 77% rename from modules/reindex/src/main/java/org/elasticsearch/index/reindex/ReindexRequest.java rename to core/src/main/java/org/elasticsearch/index/reindex/ReindexRequest.java index 8c11cd3430ff2..9fed5500d1664 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/ReindexRequest.java +++ b/core/src/main/java/org/elasticsearch/index/reindex/ReindexRequest.java @@ -21,20 +21,15 @@ import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.CompositeIndicesRequest; -import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.index.IndexRequest; 
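The `ParentBulkByScrollTask` above coordinates its slices by storing each result in a fixed-size array and counting down an `AtomicInteger`; whichever slice finishes last assembles the combined response. A plain-JDK sketch of that pattern, using no Elasticsearch types and invented names, for illustration only:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;

public class SliceCountdownExample {
    public static void main(String[] args) {
        int slices = 3;
        AtomicReferenceArray<String> results = new AtomicReferenceArray<>(slices);
        AtomicInteger remaining = new AtomicInteger(slices);

        for (int sliceId = 0; sliceId < slices; sliceId++) {
            final int id = sliceId;
            new Thread(() -> {
                results.set(id, "result-" + id);          // record this slice's response
                if (remaining.decrementAndGet() == 0) {    // the last slice to finish responds
                    List<String> merged = new ArrayList<>(slices);
                    for (int i = 0; i < slices; i++) {
                        merged.add(results.get(i));
                    }
                    System.out.println("merged: " + merged);
                }
            }).start();
        }
    }
}
```

In the real task the array slots hold a `Result` carrying either a response or an exception, so failures can be folded into a single exception with suppressed causes, as `recordSliceCompletionAndRespondIfAllDone` does above.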
import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.lucene.uid.Versions; -import org.elasticsearch.index.reindex.remote.RemoteInfo; +import org.elasticsearch.tasks.TaskId; import java.io.IOException; -import java.util.Arrays; -import java.util.List; -import static java.util.Collections.singletonList; -import static java.util.Collections.unmodifiableList; import static org.elasticsearch.action.ValidateActions.addValidationError; import static org.elasticsearch.index.VersionType.INTERNAL; @@ -56,7 +51,11 @@ public ReindexRequest() { } public ReindexRequest(SearchRequest search, IndexRequest destination) { - super(search); + this(search, destination, true); + } + + private ReindexRequest(SearchRequest search, IndexRequest destination, boolean setDefaults) { + super(search, setDefaults); this.destination = destination; } @@ -71,6 +70,9 @@ public ActionRequestValidationException validate() { if (getSearchRequest().indices() == null || getSearchRequest().indices().length == 0) { e = addValidationError("use _all if you really want to copy from all existing indexes", e); } + if (getSearchRequest().source().fetchSource() != null && getSearchRequest().source().fetchSource().fetchSource() == false) { + e = addValidationError("_source:false is not supported in this context", e); + } /* * Note that we don't call index's validator - it won't work because * we'll be filling in portions of it as we receive the docs. But we can @@ -94,8 +96,13 @@ public ActionRequestValidationException validate() { if (destination.timestamp() != null) { e = addValidationError("setting timestamp on destination isn't supported. use scripts instead.", e); } - if (getRemoteInfo() != null && getSearchRequest().source().query() != null) { - e = addValidationError("reindex from remote sources should use RemoteInfo's query instead of source's query", e); + if (getRemoteInfo() != null) { + if (getSearchRequest().source().query() != null) { + e = addValidationError("reindex from remote sources should use RemoteInfo's query instead of source's query", e); + } + if (getSlices() != 1) { + e = addValidationError("reindex from remote sources doesn't support workers > 1 but was [" + getSlices() + "]", e); + } } return e; } @@ -125,6 +132,13 @@ public RemoteInfo getRemoteInfo() { return remoteInfo; } + @Override + public ReindexRequest forSlice(TaskId slicingTask, SearchRequest slice) { + ReindexRequest sliced = doForSlice(new ReindexRequest(slice, destination, false), slicingTask); + sliced.setRemoteInfo(remoteInfo); + return sliced; + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); @@ -154,23 +168,4 @@ public String toString() { } return b.toString(); } - - // CompositeIndicesRequest implementation so plugins can reason about the request. This is really just a best effort thing. - /** - * Accessor to get the underlying {@link IndicesRequest}s that this request wraps. Note that this method is not - * accurate since it returns a prototype {@link IndexRequest} and not the actual requests that will be issued as part of the - * execution of this request. Additionally, scripts can modify the underlying {@link IndexRequest} and change values such as the index, - * type, {@link org.elasticsearch.action.support.IndicesOptions}. In short - only use this for very course reasoning about the request. 
- * - * @return a list comprising of the {@link SearchRequest} and the prototype {@link IndexRequest} - */ - @Override - public List subRequests() { - assert getSearchRequest() != null; - assert getDestination() != null; - if (remoteInfo != null) { - return singletonList(getDestination()); - } - return unmodifiableList(Arrays.asList(getSearchRequest(), getDestination())); - } } diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/ReindexRequestBuilder.java b/core/src/main/java/org/elasticsearch/index/reindex/ReindexRequestBuilder.java similarity index 91% rename from modules/reindex/src/main/java/org/elasticsearch/index/reindex/ReindexRequestBuilder.java rename to core/src/main/java/org/elasticsearch/index/reindex/ReindexRequestBuilder.java index 1eadf2c15bc73..68bd3f4661828 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/ReindexRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/reindex/ReindexRequestBuilder.java @@ -25,20 +25,19 @@ import org.elasticsearch.action.search.SearchAction; import org.elasticsearch.action.search.SearchRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; -import org.elasticsearch.index.reindex.remote.RemoteInfo; public class ReindexRequestBuilder extends AbstractBulkIndexByScrollRequestBuilder { private final IndexRequestBuilder destination; public ReindexRequestBuilder(ElasticsearchClient client, - Action action) { + Action action) { this(client, action, new SearchRequestBuilder(client, SearchAction.INSTANCE), new IndexRequestBuilder(client, IndexAction.INSTANCE)); } private ReindexRequestBuilder(ElasticsearchClient client, - Action action, + Action action, SearchRequestBuilder search, IndexRequestBuilder destination) { super(client, action, search, new ReindexRequest(search.request(), destination.request())); this.destination = destination; diff --git a/core/src/main/java/org/elasticsearch/index/reindex/RemoteInfo.java b/core/src/main/java/org/elasticsearch/index/reindex/RemoteInfo.java new file mode 100644 index 0000000000000..105afcc95bc38 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/reindex/RemoteInfo.java @@ -0,0 +1,181 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.index.reindex; + +import org.elasticsearch.Version; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.unit.TimeValue; + +import java.io.IOException; +import java.util.HashMap; +import java.util.Map; + +import static java.util.Collections.unmodifiableMap; +import static java.util.Objects.requireNonNull; +import static org.elasticsearch.common.unit.TimeValue.timeValueSeconds; + +public class RemoteInfo implements Writeable { + /** + * Default {@link #socketTimeout} for requests that don't have one set. + */ + public static final TimeValue DEFAULT_SOCKET_TIMEOUT = timeValueSeconds(30); + /** + * Default {@link #connectTimeout} for requests that don't have one set. + */ + public static final TimeValue DEFAULT_CONNECT_TIMEOUT = timeValueSeconds(30); + + private final String scheme; + private final String host; + private final int port; + private final BytesReference query; + private final String username; + private final String password; + private final Map headers; + /** + * Time to wait for a response from each request. + */ + private final TimeValue socketTimeout; + /** + * Time to wait for a connecting to the remote cluster. + */ + private final TimeValue connectTimeout; + + public RemoteInfo(String scheme, String host, int port, BytesReference query, String username, String password, + Map headers, TimeValue socketTimeout, TimeValue connectTimeout) { + this.scheme = requireNonNull(scheme, "[scheme] must be specified to reindex from a remote cluster"); + this.host = requireNonNull(host, "[host] must be specified to reindex from a remote cluster"); + this.port = port; + this.query = requireNonNull(query, "[query] must be specified to reindex from a remote cluster"); + this.username = username; + this.password = password; + this.headers = unmodifiableMap(requireNonNull(headers, "[headers] is required")); + this.socketTimeout = requireNonNull(socketTimeout, "[socketTimeout] must be specified"); + this.connectTimeout = requireNonNull(connectTimeout, "[connectTimeout] must be specified"); + } + + /** + * Read from a stream. 
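For reference, a minimal, hypothetical sketch of constructing the `RemoteInfo` above for a reindex-from-remote request using the default timeouts; the host, credentials, and query are placeholders:

```java
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.index.reindex.RemoteInfo;

import static java.util.Collections.emptyMap;

public class RemoteInfoExample {
    public static void main(String[] args) {
        // The query is sent to the remote cluster as part of the search request.
        RemoteInfo remote = new RemoteInfo(
                "http", "otherhost.example.com", 9200,
                new BytesArray("{\"match_all\":{}}"),
                "user", "password",
                emptyMap(),
                RemoteInfo.DEFAULT_SOCKET_TIMEOUT,
                RemoteInfo.DEFAULT_CONNECT_TIMEOUT);
        System.out.println(remote);
    }
}
```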
+ */ + public RemoteInfo(StreamInput in) throws IOException { + scheme = in.readString(); + host = in.readString(); + port = in.readVInt(); + query = in.readBytesReference(); + username = in.readOptionalString(); + password = in.readOptionalString(); + int headersLength = in.readVInt(); + Map headers = new HashMap<>(headersLength); + for (int i = 0; i < headersLength; i++) { + headers.put(in.readString(), in.readString()); + } + this.headers = unmodifiableMap(headers); + if (in.getVersion().onOrAfter(Version.V_5_2_0)) { + socketTimeout = new TimeValue(in); + connectTimeout = new TimeValue(in); + } else { + socketTimeout = DEFAULT_SOCKET_TIMEOUT; + connectTimeout = DEFAULT_CONNECT_TIMEOUT; + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeString(scheme); + out.writeString(host); + out.writeVInt(port); + out.writeBytesReference(query); + out.writeOptionalString(username); + out.writeOptionalString(password); + out.writeVInt(headers.size()); + for (Map.Entry header : headers.entrySet()) { + out.writeString(header.getKey()); + out.writeString(header.getValue()); + } + if (out.getVersion().onOrAfter(Version.V_5_2_0)) { + socketTimeout.writeTo(out); + connectTimeout.writeTo(out); + } + } + + public String getScheme() { + return scheme; + } + + public String getHost() { + return host; + } + + public int getPort() { + return port; + } + + public BytesReference getQuery() { + return query; + } + + @Nullable + public String getUsername() { + return username; + } + + @Nullable + public String getPassword() { + return password; + } + + public Map getHeaders() { + return headers; + } + + /** + * Time to wait for a response from each request. + */ + public TimeValue getSocketTimeout() { + return socketTimeout; + } + + /** + * Time to wait to connect to the external cluster. + */ + public TimeValue getConnectTimeout() { + return connectTimeout; + } + + @Override + public String toString() { + StringBuilder b = new StringBuilder(); + if (false == "http".equals(scheme)) { + // http is the default so it isn't worth taking up space if it is the scheme + b.append("scheme=").append(scheme).append(' '); + } + b.append("host=").append(host).append(" port=").append(port).append(" query=").append(query.utf8ToString()); + if (username != null) { + b.append(" username=").append(username); + } + if (password != null) { + b.append(" password=<<>>"); + } + return b.toString(); + } +} diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/ScrollableHitSource.java b/core/src/main/java/org/elasticsearch/index/reindex/ScrollableHitSource.java similarity index 87% rename from modules/reindex/src/main/java/org/elasticsearch/index/reindex/ScrollableHitSource.java rename to core/src/main/java/org/elasticsearch/index/reindex/ScrollableHitSource.java index 0b4b66222bc4b..4dd32f56a1cba 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/ScrollableHitSource.java +++ b/core/src/main/java/org/elasticsearch/index/reindex/ScrollableHitSource.java @@ -32,7 +32,7 @@ import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.index.reindex.remote.RemoteScrollableHitSource; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.search.SearchHit; import org.elasticsearch.threadpool.ThreadPool; @@ -47,7 +47,7 @@ /** * A scrollable source of results. 
*/ -public abstract class ScrollableHitSource implements Closeable { +public abstract class ScrollableHitSource { private final AtomicReference scrollId = new AtomicReference<>(); protected final Logger logger; @@ -81,20 +81,37 @@ public final void startNextScroll(TimeValue extraKeepAlive, Consumer o }); } protected abstract void doStartNextScroll(String scrollId, TimeValue extraKeepAlive, Consumer onResponse); - - @Override - public void close() { + + public final void close(Runnable onCompletion) { String scrollId = this.scrollId.get(); if (Strings.hasLength(scrollId)) { - clearScroll(scrollId); + clearScroll(scrollId, () -> cleanup(onCompletion)); + } else { + cleanup(onCompletion); } } - protected abstract void clearScroll(String scrollId); + + /** + * Called to clear a scroll id. + * + * @param scrollId the id to clear + * @param onCompletion implementers must call this after completing the clear whether they are + * successful or not + */ + protected abstract void clearScroll(String scrollId, Runnable onCompletion); + /** + * Called after the process has been totally finished to clean up any resources the process + * needed like remote connections. + * + * @param onCompletion implementers must call this after completing the cleanup whether they are + * successful or not + */ + protected abstract void cleanup(Runnable onCompletion); /** * Set the id of the last scroll. Used for debugging. */ - final void setScroll(String scrollId) { + public final void setScroll(String scrollId) { this.scrollId.set(scrollId); } @@ -179,6 +196,10 @@ public interface Hit { * all. */ @Nullable BytesReference getSource(); + /** + * The content type of the hit source. Returns null if the source didn't come back from the search. + */ + @Nullable XContentType getXContentType(); /** * The document id of the parent of the hit if there is a parent or null if there isn't. */ @@ -198,8 +219,7 @@ public interface Hit { } /** - * An implementation of {@linkplain Hit} that uses getters and setters. Primarily used for testing and {@link RemoteScrollableHitSource} - * . + * An implementation of {@linkplain Hit} that uses getters and setters. 
*/ public static class BasicHit implements Hit { private final String index; @@ -208,6 +228,7 @@ public static class BasicHit implements Hit { private final long version; private BytesReference source; + private XContentType xContentType; private String parent; private String routing; private Long timestamp; @@ -245,8 +266,14 @@ public BytesReference getSource() { return source; } - public BasicHit setSource(BytesReference source) { + @Override + public XContentType getXContentType() { + return xContentType; + } + + public BasicHit setSource(BytesReference source, XContentType xContentType) { this.source = source; + this.xContentType = xContentType; return this; } @@ -367,7 +394,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field("reason"); { builder.startObject(); - ElasticsearchException.toXContent(builder, params, reason); + ElasticsearchException.generateThrowableXContent(builder, params, reason); builder.endObject(); } builder.endObject(); diff --git a/core/src/main/java/org/elasticsearch/index/reindex/SuccessfullyProcessed.java b/core/src/main/java/org/elasticsearch/index/reindex/SuccessfullyProcessed.java new file mode 100644 index 0000000000000..6547984900e4b --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/reindex/SuccessfullyProcessed.java @@ -0,0 +1,46 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.reindex; + +/** + * Implemented by {@link BulkByScrollTask} and {@link BulkByScrollTask.Status} to consistently implement + * {@link #getSuccessfullyProcessed()}. + */ +public interface SuccessfullyProcessed { + /** + * Total number of successfully processed documents. + */ + default long getSuccessfullyProcessed() { + return getUpdated() + getCreated() + getDeleted(); + } + + /** + * Count of documents updated. + */ + long getUpdated(); + /** + * Count of documents created. + */ + long getCreated(); + /** + * Count of successful delete operations. 
+ */ + long getDeleted(); +} diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryAction.java b/core/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryAction.java similarity index 87% rename from modules/reindex/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryAction.java rename to core/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryAction.java index 0ff1b18bde057..1058f7f13078a 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryAction.java +++ b/core/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryAction.java @@ -23,7 +23,7 @@ import org.elasticsearch.client.ElasticsearchClient; public class UpdateByQueryAction extends - Action { + Action { public static final UpdateByQueryAction INSTANCE = new UpdateByQueryAction(); public static final String NAME = "indices:data/write/update/byquery"; @@ -37,7 +37,7 @@ public UpdateByQueryRequestBuilder newRequestBuilder(ElasticsearchClient client) } @Override - public BulkIndexByScrollResponse newResponse() { - return new BulkIndexByScrollResponse(); + public BulkByScrollResponse newResponse() { + return new BulkByScrollResponse(); } } diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryRequest.java b/core/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryRequest.java similarity index 88% rename from modules/reindex/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryRequest.java rename to core/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryRequest.java index 3401ce4582b13..ad0123d76cedf 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryRequest.java +++ b/core/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryRequest.java @@ -24,6 +24,7 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.tasks.TaskId; import java.io.IOException; @@ -43,7 +44,11 @@ public UpdateByQueryRequest() { } public UpdateByQueryRequest(SearchRequest search) { - super(search); + this(search, true); + } + + private UpdateByQueryRequest(SearchRequest search, boolean setDefaults) { + super(search, setDefaults); } /** @@ -65,6 +70,13 @@ protected UpdateByQueryRequest self() { return this; } + @Override + public UpdateByQueryRequest forSlice(TaskId slicingTask, SearchRequest slice) { + UpdateByQueryRequest request = doForSlice(new UpdateByQueryRequest(slice, false), slicingTask); + request.setPipeline(pipeline); + return request; + } + @Override public String toString() { StringBuilder b = new StringBuilder(); diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryRequestBuilder.java b/core/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryRequestBuilder.java similarity index 90% rename from modules/reindex/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryRequestBuilder.java rename to core/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryRequestBuilder.java index 311e138fb1f78..06e0426864193 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/reindex/UpdateByQueryRequestBuilder.java @@ -28,12 +28,12 @@ public class UpdateByQueryRequestBuilder extends AbstractBulkIndexByScrollRequestBuilder { public UpdateByQueryRequestBuilder(ElasticsearchClient client, - 
Action action) { + Action action) { this(client, action, new SearchRequestBuilder(client, SearchAction.INSTANCE)); } private UpdateByQueryRequestBuilder(ElasticsearchClient client, - Action action, + Action action, SearchRequestBuilder search) { super(client, action, search, new UpdateByQueryRequest(search.request())); } diff --git a/core/src/main/java/org/elasticsearch/index/reindex/WorkingBulkByScrollTask.java b/core/src/main/java/org/elasticsearch/index/reindex/WorkingBulkByScrollTask.java new file mode 100644 index 0000000000000..4e11b3c9595e0 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/reindex/WorkingBulkByScrollTask.java @@ -0,0 +1,320 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.reindex; + +import org.apache.logging.log4j.Logger; +import org.elasticsearch.common.logging.ESLoggerFactory; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.AbstractRunnable; +import org.elasticsearch.common.util.concurrent.FutureUtils; +import org.elasticsearch.tasks.TaskId; +import org.elasticsearch.tasks.TaskInfo; +import org.elasticsearch.threadpool.ThreadPool; + +import java.util.List; +import java.util.concurrent.ScheduledFuture; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicLong; +import java.util.concurrent.atomic.AtomicReference; + +import static java.lang.Math.max; +import static java.lang.Math.round; +import static org.elasticsearch.common.unit.TimeValue.timeValueNanos; + +/** + * {@link BulkByScrollTask} subclass for tasks that actually perform the work. Compare to {@link ParentBulkByScrollTask}. + */ +public class WorkingBulkByScrollTask extends BulkByScrollTask implements SuccessfullyProcessed { + private static final Logger logger = ESLoggerFactory.getLogger(BulkByScrollTask.class.getPackage().getName()); + + /** + * The id of the slice that this task is processing or {@code null} if this task isn't for a sliced request. + */ + private final Integer sliceId; + /** + * The total number of documents this request will process. 0 means we don't yet know or, possibly, there are actually 0 documents + * to process. Its ok that these have the same meaning because any request with 0 actual documents should be quite short lived. 
+ */ + private final AtomicLong total = new AtomicLong(0); + private final AtomicLong updated = new AtomicLong(0); + private final AtomicLong created = new AtomicLong(0); + private final AtomicLong deleted = new AtomicLong(0); + private final AtomicLong noops = new AtomicLong(0); + private final AtomicInteger batch = new AtomicInteger(0); + private final AtomicLong versionConflicts = new AtomicLong(0); + private final AtomicLong bulkRetries = new AtomicLong(0); + private final AtomicLong searchRetries = new AtomicLong(0); + private final AtomicLong throttledNanos = new AtomicLong(); + /** + * The number of requests per second to which to throttle the request that this task represents. The other variables are all AtomicXXX + * style variables but there isn't an AtomicFloat so we just use a volatile. + */ + private volatile float requestsPerSecond; + /** + * Reference to any the last delayed prepareBulkRequest call. Used during rethrottling and canceling to reschedule the request. + */ + private final AtomicReference delayedPrepareBulkRequestReference = new AtomicReference<>(); + + public WorkingBulkByScrollTask(long id, String type, String action, String description, TaskId parentTask, Integer sliceId, + float requestsPerSecond) { + super(id, type, action, description, parentTask); + this.sliceId = sliceId; + setRequestsPerSecond(requestsPerSecond); + } + + @Override + public Status getStatus() { + return new Status(sliceId, total.get(), updated.get(), created.get(), deleted.get(), batch.get(), versionConflicts.get(), + noops.get(), bulkRetries.get(), searchRetries.get(), timeValueNanos(throttledNanos.get()), getRequestsPerSecond(), + getReasonCancelled(), throttledUntil()); + } + + @Override + protected void onCancelled() { + /* Drop the throttle to 0, immediately rescheduling any throttled + * operation so it will wake up and cancel itself. */ + rethrottle(Float.POSITIVE_INFINITY); + } + + @Override + public int runningSliceSubTasks() { + return 0; + } + + @Override + public TaskInfo getInfoGivenSliceInfo(String localNodeId, List sliceInfo) { + throw new UnsupportedOperationException("This is only supported by " + ParentBulkByScrollTask.class.getName() + "."); + } + + TimeValue throttledUntil() { + DelayedPrepareBulkRequest delayed = delayedPrepareBulkRequestReference.get(); + if (delayed == null) { + return timeValueNanos(0); + } + if (delayed.future == null) { + return timeValueNanos(0); + } + return timeValueNanos(max(0, delayed.future.getDelay(TimeUnit.NANOSECONDS))); + } + + public void setTotal(long totalHits) { + total.set(totalHits); + } + + public void countBatch() { + batch.incrementAndGet(); + } + + public void countNoop() { + noops.incrementAndGet(); + } + + @Override + public long getCreated() { + return created.get(); + } + + public void countCreated() { + created.incrementAndGet(); + } + + @Override + public long getUpdated() { + return updated.get(); + } + + public void countUpdated() { + updated.incrementAndGet(); + } + + @Override + public long getDeleted() { + return deleted.get(); + } + + public void countDeleted() { + deleted.incrementAndGet(); + } + + public void countVersionConflict() { + versionConflicts.incrementAndGet(); + } + + public void countBulkRetry() { + bulkRetries.incrementAndGet(); + } + + public void countSearchRetry() { + searchRetries.incrementAndGet(); + } + + float getRequestsPerSecond() { + return requestsPerSecond; + } + + /** + * Schedule prepareBulkRequestRunnable to run after some delay. 
This is where throttling plugs into reindexing so the request can be + * rescheduled over and over again. + */ + public void delayPrepareBulkRequest(ThreadPool threadPool, TimeValue lastBatchStartTime, int lastBatchSize, + AbstractRunnable prepareBulkRequestRunnable) { + // Synchronize so we are less likely to schedule the same request twice. + synchronized (delayedPrepareBulkRequestReference) { + TimeValue delay = throttleWaitTime(lastBatchStartTime, timeValueNanos(System.nanoTime()), lastBatchSize); + logger.debug("[{}]: preparing bulk request for [{}]", getId(), delay); + delayedPrepareBulkRequestReference.set(new DelayedPrepareBulkRequest(threadPool, getRequestsPerSecond(), + delay, new RunOnce(prepareBulkRequestRunnable))); + } + } + + public TimeValue throttleWaitTime(TimeValue lastBatchStartTime, TimeValue now, int lastBatchSize) { + long earliestNextBatchStartTime = now.nanos() + (long) perfectlyThrottledBatchTime(lastBatchSize); + return timeValueNanos(max(0, earliestNextBatchStartTime - System.nanoTime())); + } + + /** + * How many nanoseconds should a batch of lastBatchSize have taken if it were perfectly throttled? Package private for testing. + */ + float perfectlyThrottledBatchTime(int lastBatchSize) { + if (requestsPerSecond == Float.POSITIVE_INFINITY) { + return 0; + } + // requests + // ------------------- == seconds + // request per seconds + float targetBatchTimeInSeconds = lastBatchSize / requestsPerSecond; + // nanoseconds per seconds * seconds == nanoseconds + return TimeUnit.SECONDS.toNanos(1) * targetBatchTimeInSeconds; + } + + private void setRequestsPerSecond(float requestsPerSecond) { + if (requestsPerSecond <= 0) { + throw new IllegalArgumentException("requests per second must be more than 0 but was [" + requestsPerSecond + "]"); + } + this.requestsPerSecond = requestsPerSecond; + } + + @Override + public void rethrottle(float newRequestsPerSecond) { + synchronized (delayedPrepareBulkRequestReference) { + logger.debug("[{}]: rethrottling to [{}] requests per second", getId(), newRequestsPerSecond); + setRequestsPerSecond(newRequestsPerSecond); + + DelayedPrepareBulkRequest delayedPrepareBulkRequest = this.delayedPrepareBulkRequestReference.get(); + if (delayedPrepareBulkRequest == null) { + // No request has been queued so nothing to reschedule. + logger.debug("[{}]: skipping rescheduling because there is no scheduled task", getId()); + return; + } + + this.delayedPrepareBulkRequestReference.set(delayedPrepareBulkRequest.rethrottle(newRequestsPerSecond)); + } + } + + class DelayedPrepareBulkRequest { + private final ThreadPool threadPool; + private final AbstractRunnable command; + private final float requestsPerSecond; + private final ScheduledFuture future; + + DelayedPrepareBulkRequest(ThreadPool threadPool, float requestsPerSecond, TimeValue delay, AbstractRunnable command) { + this.threadPool = threadPool; + this.requestsPerSecond = requestsPerSecond; + this.command = command; + this.future = threadPool.schedule(delay, ThreadPool.Names.GENERIC, new AbstractRunnable() { + @Override + protected void doRun() throws Exception { + throttledNanos.addAndGet(delay.nanos()); + command.run(); + } + + @Override + public void onFailure(Exception e) { + command.onFailure(e); + } + }); + } + + DelayedPrepareBulkRequest rethrottle(float newRequestsPerSecond) { + if (newRequestsPerSecond < requestsPerSecond) { + /* The user is attempting to slow the request down. We'll let the + * change in throttle take effect the next time we delay + * prepareBulkRequest. 
We can't just reschedule the request further + * out in the future because the bulk context might time out. */ + logger.debug("[{}]: skipping rescheduling because the new throttle [{}] is slower than the old one [{}]", getId(), + newRequestsPerSecond, requestsPerSecond); + return this; + } + + long remainingDelay = future.getDelay(TimeUnit.NANOSECONDS); + // Actually reschedule the task + if (false == FutureUtils.cancel(future)) { + // Couldn't cancel, probably because the task has finished or been scheduled. Either way we have nothing to do here. + logger.debug("[{}]: skipping rescheduling because we couldn't cancel the task", getId()); + return this; + } + + /* Strangely enough getting here doesn't mean that you actually + * cancelled the request, just that you probably did. If you stress + * test it you'll find that requests sneak through. So each request + * is given a runOnce boolean to prevent that. */ + TimeValue newDelay = newDelay(remainingDelay, newRequestsPerSecond); + logger.debug("[{}]: rescheduling for [{}] in the future", getId(), newDelay); + return new DelayedPrepareBulkRequest(threadPool, requestsPerSecond, newDelay, command); + } + + /** + * Scale back remaining delay to fit the new delay. + */ + TimeValue newDelay(long remainingDelay, float newRequestsPerSecond) { + if (remainingDelay < 0) { + return timeValueNanos(0); + } + return timeValueNanos(round(remainingDelay * requestsPerSecond / newRequestsPerSecond)); + } + } + + /** + * Runnable that can only be run one time. This is paranoia to prevent furiously rethrottling from running the command multiple times. + * Without it the command would be run multiple times. + */ + private static class RunOnce extends AbstractRunnable { + private final AtomicBoolean hasRun = new AtomicBoolean(false); + private final AbstractRunnable delegate; + + RunOnce(AbstractRunnable delegate) { + this.delegate = delegate; + } + + @Override + protected void doRun() throws Exception { + if (hasRun.compareAndSet(false, true)) { + delegate.run(); + } + } + + @Override + public void onFailure(Exception e) { + delegate.onFailure(e); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/index/reindex/package-info.java b/core/src/main/java/org/elasticsearch/index/reindex/package-info.java new file mode 100644 index 0000000000000..00cb5106770d1 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/reindex/package-info.java @@ -0,0 +1,24 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/** + * Infrastructure for actions that modify documents based on the results of a scrolling query + * like reindex, update by query or delete by query. 
+ */ +package org.elasticsearch.index.reindex; diff --git a/core/src/main/java/org/elasticsearch/index/search/ESToParentBlockJoinQuery.java b/core/src/main/java/org/elasticsearch/index/search/ESToParentBlockJoinQuery.java new file mode 100644 index 0000000000000..1ee427599cab2 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/search/ESToParentBlockJoinQuery.java @@ -0,0 +1,101 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.search; + +import org.apache.lucene.index.IndexReader; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.Weight; +import org.apache.lucene.search.join.BitSetProducer; +import org.apache.lucene.search.join.ScoreMode; +import org.apache.lucene.search.join.ToParentBlockJoinQuery; + +import java.io.IOException; +import java.util.Objects; + +/** A {@link ToParentBlockJoinQuery} that allows to retrieve its nested path. */ +public final class ESToParentBlockJoinQuery extends Query { + + private final ToParentBlockJoinQuery query; + private final String path; + + public ESToParentBlockJoinQuery(Query childQuery, BitSetProducer parentsFilter, ScoreMode scoreMode, String path) { + this(new ToParentBlockJoinQuery(childQuery, parentsFilter, scoreMode), path); + } + + private ESToParentBlockJoinQuery(ToParentBlockJoinQuery query, String path) { + this.query = query; + this.path = path; + } + + /** Return the child query. */ + public Query getChildQuery() { + return query.getChildQuery(); + } + + /** Return the path of results of this query, or {@code null} if matches are at the root level. */ + public String getPath() { + return path; + } + + @Override + public Query rewrite(IndexReader reader) throws IOException { + Query innerRewrite = query.rewrite(reader); + if (innerRewrite != query) { + // Right now ToParentBlockJoinQuery always rewrites to a ToParentBlockJoinQuery + // so the else block will never be used. It is useful in the case that + // ToParentBlockJoinQuery one day starts to rewrite to a different query, eg. + // a MatchNoDocsQuery if it realizes that it cannot match any docs and rewrites + // to a MatchNoDocsQuery. In that case it would be fine to lose information + // about the nested path. 
+ if (innerRewrite instanceof ToParentBlockJoinQuery) { + return new ESToParentBlockJoinQuery((ToParentBlockJoinQuery) innerRewrite, path); + } else { + return innerRewrite; + } + } + return super.rewrite(reader); + } + + @Override + public Weight createWeight(IndexSearcher searcher, boolean needsScores) throws IOException { + return query.createWeight(searcher, needsScores); + } + + @Override + public boolean equals(Object obj) { + if (sameClassAs(obj) == false) { + return false; + } + ESToParentBlockJoinQuery that = (ESToParentBlockJoinQuery) obj; + return query.equals(that.query) && Objects.equals(path, that.path); + } + + @Override + public int hashCode() { + return Objects.hash(getClass(), query, path); + } + + @Override + public String toString(String field) { + return query.toString(field); + } + +} diff --git a/core/src/main/java/org/elasticsearch/index/search/MatchQuery.java b/core/src/main/java/org/elasticsearch/index/search/MatchQuery.java index 835ec8e143f5d..f3ea9447db74a 100644 --- a/core/src/main/java/org/elasticsearch/index/search/MatchQuery.java +++ b/core/src/main/java/org/elasticsearch/index/search/MatchQuery.java @@ -20,26 +20,38 @@ package org.elasticsearch.index.search; import org.apache.lucene.analysis.Analyzer; +import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute; +import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.index.Term; import org.apache.lucene.queries.ExtendedCommonTermsQuery; import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.BooleanClause.Occur; import org.apache.lucene.search.BooleanQuery; +import org.apache.lucene.search.BoostQuery; import org.apache.lucene.search.FuzzyQuery; import org.apache.lucene.search.MultiPhraseQuery; import org.apache.lucene.search.MultiTermQuery; import org.apache.lucene.search.PhraseQuery; +import org.apache.lucene.search.PrefixQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.SynonymQuery; import org.apache.lucene.search.TermQuery; +import org.apache.lucene.search.spans.SpanMultiTermQueryWrapper; +import org.apache.lucene.search.spans.SpanNearQuery; +import org.apache.lucene.search.spans.SpanOrQuery; +import org.apache.lucene.search.spans.SpanQuery; +import org.apache.lucene.search.spans.SpanTermQuery; import org.apache.lucene.util.QueryBuilder; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.lucene.all.AllTermQuery; import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery; import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.common.unit.Fuzziness; +import org.elasticsearch.index.analysis.ShingleTokenFilterFactory; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.query.support.QueryParsers; @@ -48,7 +60,7 @@ public class MatchQuery { - public static enum Type implements Writeable { + public enum Type implements Writeable { /** * The text is analyzed and terms are added to a boolean query. 
*/ @@ -64,7 +76,7 @@ public static enum Type implements Writeable { private final int ordinal; - private Type(int ordinal) { + Type(int ordinal) { this.ordinal = ordinal; } @@ -84,13 +96,13 @@ public void writeTo(StreamOutput out) throws IOException { } } - public static enum ZeroTermsQuery implements Writeable { + public enum ZeroTermsQuery implements Writeable { NONE(0), ALL(1); private final int ordinal; - private ZeroTermsQuery(int ordinal) { + ZeroTermsQuery(int ordinal) { this.ordinal = ordinal; } @@ -110,13 +122,19 @@ public void writeTo(StreamOutput out) throws IOException { } } - /** the default phrase slop */ + /** + * the default phrase slop + */ public static final int DEFAULT_PHRASE_SLOP = 0; - /** the default leniency setting */ + /** + * the default leniency setting + */ public static final boolean DEFAULT_LENIENCY = false; - /** the default zero terms query */ + /** + * the default zero terms query + */ public static final ZeroTermsQuery DEFAULT_ZERO_TERMS_QUERY = ZeroTermsQuery.NONE; protected final QueryShardContext context; @@ -204,7 +222,7 @@ protected Analyzer getAnalyzer(MappedFieldType fieldType) { } return context.getMapperService().searchAnalyzer(); } else { - Analyzer analyzer = context.getMapperService().analysisService().analyzer(this.analyzer); + Analyzer analyzer = context.getMapperService().getIndexAnalyzers().get(this.analyzer); if (analyzer == null) { throw new IllegalArgumentException("No analyzer found for [" + this.analyzer + "]"); } @@ -290,7 +308,7 @@ private class MatchQueryBuilder extends QueryBuilder { /** * Creates a new QueryBuilder using the given analyzer. */ - public MatchQueryBuilder(Analyzer analyzer, @Nullable MappedFieldType mapper) { + MatchQueryBuilder(Analyzer analyzer, @Nullable MappedFieldType mapper) { super(analyzer); this.mapper = mapper; } @@ -300,52 +318,137 @@ protected Query newTermQuery(Term term) { return blendTermQuery(term, mapper); } + @Override + protected Query newSynonymQuery(Term[] terms) { + return blendTermsQuery(terms, mapper); + } + + /** + * Checks if graph analysis should be enabled for the field depending + * on the provided {@link Analyzer} + */ + protected Query createFieldQuery(Analyzer analyzer, BooleanClause.Occur operator, String field, + String queryText, boolean quoted, int phraseSlop) { + assert operator == BooleanClause.Occur.SHOULD || operator == BooleanClause.Occur.MUST; + + // Use the analyzer to get all the tokens, and then build an appropriate + // query based on the analysis chain. + try (TokenStream source = analyzer.tokenStream(field, queryText)) { + if (source.hasAttribute(DisableGraphAttribute.class)) { + /** + * A {@link TokenFilter} in this {@link TokenStream} disabled the graph analysis to avoid + * paths explosion. See {@link ShingleTokenFilterFactory} for details. 
+ */ + setEnableGraphQueries(false); + } + Query query = super.createFieldQuery(source, operator, field, quoted, phraseSlop); + setEnableGraphQueries(true); + return query; + } catch (IOException e) { + throw new RuntimeException("Error analyzing query text", e); + } + } + public Query createPhrasePrefixQuery(String field, String queryText, int phraseSlop, int maxExpansions) { final Query query = createFieldQuery(getAnalyzer(), Occur.MUST, field, queryText, true, phraseSlop); + return toMultiPhrasePrefix(query, phraseSlop, maxExpansions); + } + + private Query toMultiPhrasePrefix(final Query query, int phraseSlop, int maxExpansions) { + float boost = 1; + Query innerQuery = query; + while (innerQuery instanceof BoostQuery) { + BoostQuery bq = (BoostQuery) innerQuery; + boost *= bq.getBoost(); + innerQuery = bq.getQuery(); + } + if (query instanceof SpanQuery) { + return toSpanQueryPrefix((SpanQuery) query, boost); + } final MultiPhrasePrefixQuery prefixQuery = new MultiPhrasePrefixQuery(); prefixQuery.setMaxExpansions(maxExpansions); prefixQuery.setSlop(phraseSlop); - if (query instanceof PhraseQuery) { - PhraseQuery pq = (PhraseQuery)query; + if (innerQuery instanceof PhraseQuery) { + PhraseQuery pq = (PhraseQuery) innerQuery; Term[] terms = pq.getTerms(); int[] positions = pq.getPositions(); for (int i = 0; i < terms.length; i++) { - prefixQuery.add(new Term[] {terms[i]}, positions[i]); + prefixQuery.add(new Term[]{terms[i]}, positions[i]); } - return prefixQuery; - } else if (query instanceof MultiPhraseQuery) { - MultiPhraseQuery pq = (MultiPhraseQuery)query; + return boost == 1 ? prefixQuery : new BoostQuery(prefixQuery, boost); + } else if (innerQuery instanceof MultiPhraseQuery) { + MultiPhraseQuery pq = (MultiPhraseQuery) innerQuery; Term[][] terms = pq.getTermArrays(); int[] positions = pq.getPositions(); for (int i = 0; i < terms.length; i++) { prefixQuery.add(terms[i], positions[i]); } - return prefixQuery; - } else if (query instanceof TermQuery) { - prefixQuery.add(((TermQuery) query).getTerm()); - return prefixQuery; + return boost == 1 ? prefixQuery : new BoostQuery(prefixQuery, boost); + } else if (innerQuery instanceof TermQuery) { + prefixQuery.add(((TermQuery) innerQuery).getTerm()); + return boost == 1 ? prefixQuery : new BoostQuery(prefixQuery, boost); + } else if (innerQuery instanceof AllTermQuery) { + prefixQuery.add(((AllTermQuery) innerQuery).getTerm()); + return boost == 1 ? prefixQuery : new BoostQuery(prefixQuery, boost); } return query; } - public Query createCommonTermsQuery(String field, String queryText, Occur highFreqOccur, Occur lowFreqOccur, float maxTermFrequency, MappedFieldType fieldType) { + private Query toSpanQueryPrefix(SpanQuery query, float boost) { + if (query instanceof SpanTermQuery) { + SpanMultiTermQueryWrapper ret = + new SpanMultiTermQueryWrapper<>(new PrefixQuery(((SpanTermQuery) query).getTerm())); + return boost == 1 ? ret : new BoostQuery(ret, boost); + } else if (query instanceof SpanNearQuery) { + SpanNearQuery spanNearQuery = (SpanNearQuery) query; + SpanQuery[] clauses = spanNearQuery.getClauses(); + if (clauses[clauses.length-1] instanceof SpanTermQuery) { + clauses[clauses.length-1] = new SpanMultiTermQueryWrapper<>( + new PrefixQuery(((SpanTermQuery) clauses[clauses.length-1]).getTerm()) + ); + } + SpanNearQuery newQuery = new SpanNearQuery(clauses, spanNearQuery.getSlop(), spanNearQuery.isInOrder()); + return boost == 1 ? 
newQuery : new BoostQuery(newQuery, boost); + } else if (query instanceof SpanOrQuery) { + SpanOrQuery orQuery = (SpanOrQuery) query; + SpanQuery[] clauses = new SpanQuery[orQuery.getClauses().length]; + for (int i = 0; i < clauses.length; i++) { + clauses[i] = (SpanQuery) toSpanQueryPrefix(orQuery.getClauses()[i], 1); + } + return boost == 1 ? new SpanOrQuery(clauses) : new BoostQuery(new SpanOrQuery(clauses), boost); + } else { + return query; + } + } + + public Query createCommonTermsQuery(String field, String queryText, Occur highFreqOccur, Occur lowFreqOccur, float + maxTermFrequency, MappedFieldType fieldType) { Query booleanQuery = createBooleanQuery(field, queryText, lowFreqOccur); if (booleanQuery != null && booleanQuery instanceof BooleanQuery) { BooleanQuery bq = (BooleanQuery) booleanQuery; - ExtendedCommonTermsQuery query = new ExtendedCommonTermsQuery(highFreqOccur, lowFreqOccur, maxTermFrequency, ((BooleanQuery)booleanQuery).isCoordDisabled(), fieldType); - for (BooleanClause clause : bq.clauses()) { - if (!(clause.getQuery() instanceof TermQuery)) { - return booleanQuery; - } - query.add(((TermQuery) clause.getQuery()).getTerm()); - } - return query; + return boolToExtendedCommonTermsQuery(bq, highFreqOccur, lowFreqOccur, maxTermFrequency, fieldType); } return booleanQuery; + } + private Query boolToExtendedCommonTermsQuery(BooleanQuery bq, Occur highFreqOccur, Occur lowFreqOccur, float + maxTermFrequency, MappedFieldType fieldType) { + ExtendedCommonTermsQuery query = new ExtendedCommonTermsQuery(highFreqOccur, lowFreqOccur, maxTermFrequency, + bq.isCoordDisabled(), fieldType); + for (BooleanClause clause : bq.clauses()) { + if (!(clause.getQuery() instanceof TermQuery)) { + return bq; + } + query.add(((TermQuery) clause.getQuery()).getTerm()); + } + return query; } } + protected Query blendTermsQuery(Term[] terms, MappedFieldType fieldType) { + return new SynonymQuery(terms); + } + protected Query blendTermQuery(Term term, MappedFieldType fieldType) { if (fuzziness != null) { if (fieldType != null) { diff --git a/core/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java b/core/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java index 1a5b3c02e1057..4203cb76c548e 100644 --- a/core/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java +++ b/core/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java @@ -28,10 +28,11 @@ import org.apache.lucene.search.BoostQuery; import org.apache.lucene.search.DisjunctionMaxQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.BytesRef; +import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.collect.Tuple; -import org.elasticsearch.common.lucene.search.MatchNoDocsQuery; import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.query.AbstractQueryBuilder; @@ -60,12 +61,7 @@ public MultiMatchQuery(QueryShardContext context) { private Query parseAndApply(Type type, String fieldName, Object value, String minimumShouldMatch, Float boostValue) throws IOException { Query query = parse(type, fieldName, value); - // If the coordination factor is disabled on a boolean query we don't apply the minimum should match. 
- // This is done to make sure that the minimum_should_match doesn't get applied when there is only one word - // and multiple variations of the same word in the query (synonyms for instance). - if (query instanceof BooleanQuery && !((BooleanQuery) query).isCoordDisabled()) { - query = Queries.applyMinimumShouldMatch((BooleanQuery) query, minimumShouldMatch); - } + query = Queries.maybeApplyMinimumShouldMatch(query, minimumShouldMatch); if (query != null && boostValue != null && boostValue != AbstractQueryBuilder.DEFAULT_BOOST) { query = new BoostQuery(query, boostValue); } @@ -157,6 +153,10 @@ public Query blendTerm(Term term, MappedFieldType fieldType) { return MultiMatchQuery.super.blendTermQuery(term, fieldType); } + public Query blendTerms(Term[] terms, MappedFieldType fieldType) { + return MultiMatchQuery.super.blendTermsQuery(terms, fieldType); + } + public Query termQuery(MappedFieldType fieldType, Object value) { return MultiMatchQuery.this.termQuery(fieldType, value, lenient); } @@ -165,7 +165,7 @@ public Query termQuery(MappedFieldType fieldType, Object value) { final class CrossFieldsQueryBuilder extends QueryBuilder { private FieldAndFieldType[] blendedFields; - public CrossFieldsQueryBuilder(float tieBreaker) { + CrossFieldsQueryBuilder(float tieBreaker) { super(false, tieBreaker); } @@ -222,12 +222,24 @@ public List buildGroupedQueries(MultiMatchQueryBuilder.Type type, Map queries = new ArrayList<>(); - Term[] terms = new Term[blendedFields.length]; - float[] blendedBoost = new float[blendedFields.length]; + Term[] terms = new Term[blendedFields.length * values.length]; + float[] blendedBoost = new float[blendedFields.length * values.length]; int i = 0; for (FieldAndFieldType ft : blendedFields) { - Query query; - try { - query = ft.fieldType.termQuery(value, null); - } catch (IllegalArgumentException e) { - // the query expects a certain class of values such as numbers - // of ip addresses and the value can't be parsed, so ignore this - // field - continue; - } - float boost = ft.boost; - while (query instanceof BoostQuery) { - BoostQuery bq = (BoostQuery) query; - query = bq.getQuery(); - boost *= bq.getBoost(); - } - if (query.getClass() == TermQuery.class) { - terms[i] = ((TermQuery) query).getTerm(); - blendedBoost[i] = boost; - i++; - } else { - if (boost != 1f) { - query = new BoostQuery(query, boost); + for (BytesRef term : values) { + Query query; + try { + query = ft.fieldType.termQuery(term, context); + } catch (IllegalArgumentException e) { + // the query expects a certain class of values such as numbers + // of ip addresses and the value can't be parsed, so ignore this + // field + continue; + } catch (ElasticsearchParseException parseException) { + // date fields throw an ElasticsearchParseException with the + // underlying IAE as the cause, ignore this field if that is + // the case + if (parseException.getCause() instanceof IllegalArgumentException) { + continue; + } + throw parseException; + } + float boost = ft.boost; + while (query instanceof BoostQuery) { + BoostQuery bq = (BoostQuery) query; + query = bq.getQuery(); + boost *= bq.getBoost(); + } + if (query.getClass() == TermQuery.class) { + terms[i] = ((TermQuery) query).getTerm(); + blendedBoost[i] = boost; + i++; + } else { + if (boost != 1f) { + query = new BoostQuery(query, boost); + } + queries.add(query); } - queries.add(query); } } if (i > 0) { @@ -307,6 +335,14 @@ protected Query blendTermQuery(Term term, MappedFieldType fieldType) { return queryBuilder.blendTerm(term, fieldType); } + @Override 
+ protected Query blendTermsQuery(Term[] terms, MappedFieldType fieldType) { + if (queryBuilder == null) { + return super.blendTermsQuery(terms, fieldType); + } + return queryBuilder.blendTerms(terms, fieldType); + } + static final class FieldAndFieldType { final MappedFieldType fieldType; final float boost; diff --git a/core/src/main/java/org/elasticsearch/index/search/NestedHelper.java b/core/src/main/java/org/elasticsearch/index/search/NestedHelper.java new file mode 100644 index 0000000000000..f5a5c8143bcb0 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/search/NestedHelper.java @@ -0,0 +1,183 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.search; + +import org.apache.lucene.search.BooleanClause; +import org.apache.lucene.search.BooleanQuery; +import org.apache.lucene.search.BoostQuery; +import org.apache.lucene.search.ConstantScoreQuery; +import org.apache.lucene.search.IndexOrDocValuesQuery; +import org.apache.lucene.search.MatchAllDocsQuery; +import org.apache.lucene.search.MatchNoDocsQuery; +import org.apache.lucene.search.PointRangeQuery; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; +import org.apache.lucene.search.BooleanClause.Occur; +import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.mapper.ObjectMapper; + +/** Utility class to filter parent and children clauses when building nested + * queries. */ +public final class NestedHelper { + + private final MapperService mapperService; + + public NestedHelper(MapperService mapperService) { + this.mapperService = mapperService; + } + + /** Returns true if the given query might match nested documents. 
*/ + public boolean mightMatchNestedDocs(Query query) { + if (query instanceof ConstantScoreQuery) { + return mightMatchNestedDocs(((ConstantScoreQuery) query).getQuery()); + } else if (query instanceof BoostQuery) { + return mightMatchNestedDocs(((BoostQuery) query).getQuery()); + } else if (query instanceof MatchAllDocsQuery) { + return true; + } else if (query instanceof MatchNoDocsQuery) { + return false; + } else if (query instanceof TermQuery) { + // We only handle term queries and range queries, which should already + // cover a high majority of use-cases + return mightMatchNestedDocs(((TermQuery) query).getTerm().field()); + } else if (query instanceof PointRangeQuery) { + return mightMatchNestedDocs(((PointRangeQuery) query).getField()); + } else if (query instanceof IndexOrDocValuesQuery) { + return mightMatchNestedDocs(((IndexOrDocValuesQuery) query).getIndexQuery()); + } else if (query instanceof BooleanQuery) { + final BooleanQuery bq = (BooleanQuery) query; + final boolean hasRequiredClauses = bq.clauses().stream().anyMatch(BooleanClause::isRequired); + if (hasRequiredClauses) { + return bq.clauses().stream() + .filter(BooleanClause::isRequired) + .map(BooleanClause::getQuery) + .allMatch(this::mightMatchNestedDocs); + } else { + return bq.clauses().stream() + .filter(c -> c.getOccur() == Occur.SHOULD) + .map(BooleanClause::getQuery) + .anyMatch(this::mightMatchNestedDocs); + } + } else if (query instanceof ESToParentBlockJoinQuery) { + return ((ESToParentBlockJoinQuery) query).getPath() != null; + } else { + return true; + } + } + + /** Returns true if a query on the given field might match nested documents. */ + boolean mightMatchNestedDocs(String field) { + if (field.startsWith("_")) { + // meta field. Every meta field behaves differently, eg. nested + // documents have the same _uid as their parent, put their path in + // the _type field but do not have _field_names. So we just ignore + // meta fields and return true, which is always safe, it just means + // we might add a nested filter when it is nor required. + return true; + } + if (mapperService.fullName(field) == null) { + // field does not exist + return false; + } + for (String parent = parentObject(field); parent != null; parent = parentObject(parent)) { + ObjectMapper mapper = mapperService.getObjectMapper(parent); + if (mapper != null && mapper.nested().isNested()) { + return true; + } + } + return false; + } + + /** Returns true if the given query might match parent documents or documents + * that are nested under a different path. 
*/ + public boolean mightMatchNonNestedDocs(Query query, String nestedPath) { + if (query instanceof ConstantScoreQuery) { + return mightMatchNonNestedDocs(((ConstantScoreQuery) query).getQuery(), nestedPath); + } else if (query instanceof BoostQuery) { + return mightMatchNonNestedDocs(((BoostQuery) query).getQuery(), nestedPath); + } else if (query instanceof MatchAllDocsQuery) { + return true; + } else if (query instanceof MatchNoDocsQuery) { + return false; + } else if (query instanceof TermQuery) { + return mightMatchNonNestedDocs(((TermQuery) query).getTerm().field(), nestedPath); + } else if (query instanceof PointRangeQuery) { + return mightMatchNonNestedDocs(((PointRangeQuery) query).getField(), nestedPath); + } else if (query instanceof IndexOrDocValuesQuery) { + return mightMatchNonNestedDocs(((IndexOrDocValuesQuery) query).getIndexQuery(), nestedPath); + } else if (query instanceof BooleanQuery) { + final BooleanQuery bq = (BooleanQuery) query; + final boolean hasRequiredClauses = bq.clauses().stream().anyMatch(BooleanClause::isRequired); + if (hasRequiredClauses) { + return bq.clauses().stream() + .filter(BooleanClause::isRequired) + .map(BooleanClause::getQuery) + .allMatch(q -> mightMatchNonNestedDocs(q, nestedPath)); + } else { + return bq.clauses().stream() + .filter(c -> c.getOccur() == Occur.SHOULD) + .map(BooleanClause::getQuery) + .anyMatch(q -> mightMatchNonNestedDocs(q, nestedPath)); + } + } else { + return true; + } + } + + /** Returns true if a query on the given field might match parent documents + * or documents that are nested under a different path. */ + boolean mightMatchNonNestedDocs(String field, String nestedPath) { + if (field.startsWith("_")) { + // meta field. Every meta field behaves differently, eg. nested + // documents have the same _uid as their parent, put their path in + // the _type field but do not have _field_names. So we just ignore + // meta fields and return true, which is always safe, it just means + // we might add a nested filter when it is nor required. + return true; + } + if (mapperService.fullName(field) == null) { + return false; + } + for (String parent = parentObject(field); parent != null; parent = parentObject(parent)) { + ObjectMapper mapper = mapperService.getObjectMapper(parent); + if (mapper!= null && mapper.nested().isNested()) { + if (mapper.fullPath().equals(nestedPath)) { + // If the mapper does not include in its parent or in the root object then + // the query might only match nested documents with the given path + return mapper.nested().isIncludeInParent() || mapper.nested().isIncludeInRoot(); + } else { + // the first parent nested mapper does not have the expected path + // It might be misconfiguration or a sub nested mapper + return true; + } + } + } + return true; // the field is not a sub field of the nested path + } + + private static String parentObject(String field) { + int lastDot = field.lastIndexOf('.'); + if (lastDot == -1) { + return null; + } + return field.substring(0, lastDot); + } + +} diff --git a/core/src/main/java/org/elasticsearch/index/search/geo/GeoDistanceRangeQuery.java b/core/src/main/java/org/elasticsearch/index/search/geo/GeoDistanceRangeQuery.java deleted file mode 100644 index 53cdf5861a318..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/search/geo/GeoDistanceRangeQuery.java +++ /dev/null @@ -1,234 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. 
See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.index.search.geo; - -import org.apache.lucene.index.IndexReader; -import org.apache.lucene.index.LeafReaderContext; -import org.apache.lucene.search.ConstantScoreScorer; -import org.apache.lucene.search.ConstantScoreWeight; -import org.apache.lucene.search.DocIdSetIterator; -import org.apache.lucene.search.IndexSearcher; -import org.apache.lucene.search.Query; -import org.apache.lucene.search.Scorer; -import org.apache.lucene.search.TwoPhaseIterator; -import org.apache.lucene.search.Weight; -import org.apache.lucene.util.NumericUtils; -import org.elasticsearch.common.geo.GeoDistance; -import org.elasticsearch.common.geo.GeoPoint; -import org.elasticsearch.common.unit.DistanceUnit; -import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; -import org.elasticsearch.index.fielddata.MultiGeoPointValues; -import org.elasticsearch.index.mapper.LegacyGeoPointFieldMapper; - -import java.io.IOException; - -/** - * Query geo_point fields by distance ranges. Used for indexes created prior to 2.2 - * @deprecated - */ -@Deprecated -public class GeoDistanceRangeQuery extends Query { - - private final double lat; - private final double lon; - - private final double inclusiveLowerPoint; // in meters - private final double inclusiveUpperPoint; // in meters - - private final GeoDistance geoDistance; - private final GeoDistance.FixedSourceDistance fixedSourceDistance; - private GeoDistance.DistanceBoundingCheck distanceBoundingCheck; - private final Query boundingBoxFilter; - - private final IndexGeoPointFieldData indexFieldData; - - public GeoDistanceRangeQuery(GeoPoint point, Double lowerVal, Double upperVal, boolean includeLower, - boolean includeUpper, GeoDistance geoDistance, LegacyGeoPointFieldMapper.GeoPointFieldType fieldType, - IndexGeoPointFieldData indexFieldData, String optimizeBbox) { - this.lat = point.lat(); - this.lon = point.lon(); - this.geoDistance = geoDistance; - this.indexFieldData = indexFieldData; - - this.fixedSourceDistance = geoDistance.fixedSourceDistance(lat, lon, DistanceUnit.DEFAULT); - - if (lowerVal != null) { - double f = lowerVal.doubleValue(); - long i = NumericUtils.doubleToSortableLong(f); - inclusiveLowerPoint = NumericUtils.sortableLongToDouble(includeLower ? i : (i + 1L)); - } else { - inclusiveLowerPoint = Double.NEGATIVE_INFINITY; - } - if (upperVal != null) { - double f = upperVal.doubleValue(); - long i = NumericUtils.doubleToSortableLong(f); - inclusiveUpperPoint = NumericUtils.sortableLongToDouble(includeUpper ? 
i : (i - 1L)); - } else { - inclusiveUpperPoint = Double.POSITIVE_INFINITY; - // we disable bounding box in this case, since the upper point is all and we create bounding box up to the - // upper point it will effectively include all - // TODO we can create a bounding box up to from and "not" it - optimizeBbox = null; - } - - if (optimizeBbox != null && !"none".equals(optimizeBbox)) { - distanceBoundingCheck = GeoDistance.distanceBoundingCheck(lat, lon, inclusiveUpperPoint, DistanceUnit.DEFAULT); - if ("memory".equals(optimizeBbox)) { - boundingBoxFilter = null; - } else if ("indexed".equals(optimizeBbox)) { - boundingBoxFilter = LegacyIndexedGeoBoundingBoxQuery.create(distanceBoundingCheck.topLeft(), - distanceBoundingCheck.bottomRight(), fieldType); - distanceBoundingCheck = GeoDistance.ALWAYS_INSTANCE; // fine, we do the bounding box check using the filter - } else { - throw new IllegalArgumentException("type [" + optimizeBbox + "] for bounding box optimization not supported"); - } - } else { - distanceBoundingCheck = GeoDistance.ALWAYS_INSTANCE; - boundingBoxFilter = null; - } - } - - public double lat() { - return lat; - } - - public double lon() { - return lon; - } - - public GeoDistance geoDistance() { - return geoDistance; - } - - public double minInclusiveDistance() { - return inclusiveLowerPoint; - } - - public double maxInclusiveDistance() { - return inclusiveUpperPoint; - } - - public String fieldName() { - return indexFieldData.getFieldName(); - } - - @Override - public Query rewrite(IndexReader reader) throws IOException { - return super.rewrite(reader); - } - - @Override - public Weight createWeight(IndexSearcher searcher, boolean needsScores) throws IOException { - final Weight boundingBoxWeight; - if (boundingBoxFilter != null) { - boundingBoxWeight = searcher.createNormalizedWeight(boundingBoxFilter, false); - } else { - boundingBoxWeight = null; - } - return new ConstantScoreWeight(this) { - @Override - public Scorer scorer(LeafReaderContext context) throws IOException { - final DocIdSetIterator approximation; - if (boundingBoxWeight != null) { - Scorer s = boundingBoxWeight.scorer(context); - if (s == null) { - // if the approximation does not match anything, we're done - return null; - } - approximation = s.iterator(); - } else { - approximation = DocIdSetIterator.all(context.reader().maxDoc()); - } - final MultiGeoPointValues values = indexFieldData.load(context).getGeoPointValues(); - final TwoPhaseIterator twoPhaseIterator = new TwoPhaseIterator(approximation) { - @Override - public boolean matches() throws IOException { - final int doc = approximation.docID(); - values.setDocument(doc); - final int length = values.count(); - for (int i = 0; i < length; i++) { - GeoPoint point = values.valueAt(i); - if (distanceBoundingCheck.isWithin(point.lat(), point.lon())) { - double d = fixedSourceDistance.calculate(point.lat(), point.lon()); - if (d >= inclusiveLowerPoint && d <= inclusiveUpperPoint) { - return true; - } - } - } - return false; - } - - @Override - public float matchCost() { - if (distanceBoundingCheck == GeoDistance.ALWAYS_INSTANCE) { - return 0.0f; - } else { - // TODO: is this right (up to 4 comparisons from GeoDistance.SimpleDistanceBoundingCheck)? 
- return 4.0f; - } - } - }; - return new ConstantScoreScorer(this, score(), twoPhaseIterator); - } - }; - } - - @Override - public boolean equals(Object o) { - if (this == o) return true; - if (sameClassAs(o) == false) return false; - - GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) o; - - if (Double.compare(filter.inclusiveLowerPoint, inclusiveLowerPoint) != 0) return false; - if (Double.compare(filter.inclusiveUpperPoint, inclusiveUpperPoint) != 0) return false; - if (Double.compare(filter.lat, lat) != 0) return false; - if (Double.compare(filter.lon, lon) != 0) return false; - if (!indexFieldData.getFieldName().equals(filter.indexFieldData.getFieldName())) - return false; - if (geoDistance != filter.geoDistance) return false; - - return true; - } - - @Override - public String toString(String field) { - return "GeoDistanceRangeQuery(" + indexFieldData.getFieldName() + ", " + geoDistance + ", [" - + inclusiveLowerPoint + " - " + inclusiveUpperPoint + "], " + lat + ", " + lon + ")"; - } - - @Override - public int hashCode() { - int result = classHash(); - long temp; - temp = lat != +0.0d ? Double.doubleToLongBits(lat) : 0L; - result = 31 * result + Long.hashCode(temp); - temp = lon != +0.0d ? Double.doubleToLongBits(lon) : 0L; - result = 31 * result + Long.hashCode(temp); - temp = inclusiveLowerPoint != +0.0d ? Double.doubleToLongBits(inclusiveLowerPoint) : 0L; - result = 31 * result + Long.hashCode(temp); - temp = inclusiveUpperPoint != +0.0d ? Double.doubleToLongBits(inclusiveUpperPoint) : 0L; - result = 31 * result + Long.hashCode(temp); - result = 31 * result + (geoDistance != null ? geoDistance.hashCode() : 0); - result = 31 * result + indexFieldData.getFieldName().hashCode(); - return result; - } - -} diff --git a/core/src/main/java/org/elasticsearch/index/search/geo/LegacyGeoDistanceRangeQuery.java b/core/src/main/java/org/elasticsearch/index/search/geo/LegacyGeoDistanceRangeQuery.java new file mode 100644 index 0000000000000..716f67da6f8d4 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/search/geo/LegacyGeoDistanceRangeQuery.java @@ -0,0 +1,241 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.index.search.geo; + +import org.apache.lucene.geo.Rectangle; +import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.ConstantScoreScorer; +import org.apache.lucene.search.ConstantScoreWeight; +import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.TwoPhaseIterator; +import org.apache.lucene.search.Weight; +import org.apache.lucene.util.NumericUtils; +import org.elasticsearch.common.geo.GeoDistance; +import org.elasticsearch.common.geo.GeoPoint; +import org.elasticsearch.common.geo.GeoUtils; +import org.elasticsearch.common.unit.DistanceUnit; +import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; +import org.elasticsearch.index.fielddata.MultiGeoPointValues; +import org.elasticsearch.index.mapper.LegacyGeoPointFieldMapper; +import org.elasticsearch.index.query.QueryShardContext; + +import java.io.IOException; +import java.util.Objects; + +/** + * Query geo_point fields by distance ranges. Used for indexes created prior to 2.2 + * @deprecated this is deprecated in favor of lucene's new GeoPointField queries + */ +@Deprecated +public class LegacyGeoDistanceRangeQuery extends Query { + private final double lat; + private final double lon; + + private final double inclusiveLowerPoint; // in meters + private final double inclusiveUpperPoint; // in meters + + private final GeoDistance geoDistance; + private final Query boundingBoxFilter; + private final Rectangle bbox; + + private final IndexGeoPointFieldData indexFieldData; + + public LegacyGeoDistanceRangeQuery(GeoPoint point, Double lowerVal, Double upperVal, boolean includeLower, + boolean includeUpper, GeoDistance geoDistance, + LegacyGeoPointFieldMapper.LegacyGeoPointFieldType fieldType, + IndexGeoPointFieldData indexFieldData, String optimizeBbox, + QueryShardContext context) { + this.lat = point.lat(); + this.lon = point.lon(); + this.geoDistance = geoDistance; + this.indexFieldData = indexFieldData; + + if (lowerVal != null) { + double f = lowerVal.doubleValue(); + long i = NumericUtils.doubleToSortableLong(f); + inclusiveLowerPoint = NumericUtils.sortableLongToDouble(includeLower ? i : (i + 1L)); + } else { + inclusiveLowerPoint = Double.NEGATIVE_INFINITY; + } + if (upperVal != null) { + double f = upperVal.doubleValue(); + long i = NumericUtils.doubleToSortableLong(f); + inclusiveUpperPoint = NumericUtils.sortableLongToDouble(includeUpper ? 
i : (i - 1L)); + } else { + inclusiveUpperPoint = Double.POSITIVE_INFINITY; + // we disable bounding box in this case, since the upper point is all and we create bounding box up to the + // upper point it will effectively include all + // TODO we can create a bounding box up to from and "not" it + optimizeBbox = null; + } + + if (optimizeBbox != null && !"none".equals(optimizeBbox)) { + bbox = Rectangle.fromPointDistance(lat, lon, inclusiveUpperPoint); + if ("memory".equals(optimizeBbox)) { + boundingBoxFilter = null; + } else if ("indexed".equals(optimizeBbox)) { + boundingBoxFilter = LegacyIndexedGeoBoundingBoxQuery.create(bbox, fieldType, context); + } else { + throw new IllegalArgumentException("type [" + optimizeBbox + "] for bounding box optimization not supported"); + } + } else { + bbox = null; + boundingBoxFilter = null; + } + } + + public double lat() { + return lat; + } + + public double lon() { + return lon; + } + + public GeoDistance geoDistance() { + return geoDistance; + } + + public double minInclusiveDistance() { + return inclusiveLowerPoint; + } + + public double maxInclusiveDistance() { + return inclusiveUpperPoint; + } + + public String fieldName() { + return indexFieldData.getFieldName(); + } + + @Override + public Query rewrite(IndexReader reader) throws IOException { + return super.rewrite(reader); + } + + @Override + public Weight createWeight(IndexSearcher searcher, boolean needsScores) throws IOException { + final Weight boundingBoxWeight; + if (boundingBoxFilter != null) { + boundingBoxWeight = searcher.createNormalizedWeight(boundingBoxFilter, false); + } else { + boundingBoxWeight = null; + } + return new ConstantScoreWeight(this) { + @Override + public Scorer scorer(LeafReaderContext context) throws IOException { + final DocIdSetIterator approximation; + if (boundingBoxWeight != null) { + Scorer s = boundingBoxWeight.scorer(context); + if (s == null) { + // if the approximation does not match anything, we're done + return null; + } + approximation = s.iterator(); + } else { + approximation = DocIdSetIterator.all(context.reader().maxDoc()); + } + final MultiGeoPointValues values = indexFieldData.load(context).getGeoPointValues(); + final TwoPhaseIterator twoPhaseIterator = new TwoPhaseIterator(approximation) { + @Override + public boolean matches() throws IOException { + final int doc = approximation.docID(); + values.setDocument(doc); + final int length = values.count(); + for (int i = 0; i < length; i++) { + GeoPoint point = values.valueAt(i); + if (bbox == null + || GeoUtils.rectangleContainsPoint(bbox, point.lat(), point.lon())) { + double d = geoDistance.calculate(lat, lon, point.lat(), point.lon(), DistanceUnit.DEFAULT); + if (d >= inclusiveLowerPoint && d <= inclusiveUpperPoint) { + return true; + } + } + } + return false; + } + + @Override + public float matchCost() { + if (bbox != null) { + // always within bounds so we're going to compute distance for every point + return values.count(); + } else { + // TODO: come up with better estimate of boundary points + return 4.0f; + } + } + }; + return new ConstantScoreScorer(this, score(), twoPhaseIterator); + } + }; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (sameClassAs(o) == false) return false; + + LegacyGeoDistanceRangeQuery filter = (LegacyGeoDistanceRangeQuery) o; + + if (Double.compare(filter.inclusiveLowerPoint, inclusiveLowerPoint) != 0) return false; + if (Double.compare(filter.inclusiveUpperPoint, inclusiveUpperPoint) != 0) return false; + if 
(Double.compare(filter.lat, lat) != 0) return false; + if (Double.compare(filter.lon, lon) != 0) return false; + if (!indexFieldData.getFieldName().equals(filter.indexFieldData.getFieldName())) + return false; + if (geoDistance != filter.geoDistance) return false; + if (boundingBoxFilter == null && filter.boundingBoxFilter != null) return false; + else if (boundingBoxFilter != null && boundingBoxFilter.equals(filter.boundingBoxFilter) == false) return false; + if (bbox == null && filter.bbox != null) return false; + else if (bbox != null && bbox.equals(filter.bbox) == false) return false; + + return true; + } + + @Override + public String toString(String field) { + return "GeoDistanceRangeQuery(" + indexFieldData.getFieldName() + ", " + geoDistance + ", [" + + inclusiveLowerPoint + " - " + inclusiveUpperPoint + "], " + lat + ", " + lon + ")"; + } + + @Override + public int hashCode() { + int result = classHash(); + long temp; + temp = lat != +0.0d ? Double.doubleToLongBits(lat) : 0L; + result = 31 * result + Long.hashCode(temp); + temp = lon != +0.0d ? Double.doubleToLongBits(lon) : 0L; + result = 31 * result + Long.hashCode(temp); + temp = inclusiveLowerPoint != +0.0d ? Double.doubleToLongBits(inclusiveLowerPoint) : 0L; + result = 31 * result + Long.hashCode(temp); + temp = inclusiveUpperPoint != +0.0d ? Double.doubleToLongBits(inclusiveUpperPoint) : 0L; + result = 31 * result + Long.hashCode(temp); + result = 31 * result + (geoDistance != null ? geoDistance.hashCode() : 0); + result = 31 * result + indexFieldData.getFieldName().hashCode(); + result = 31 * result + Objects.hash(boundingBoxFilter, bbox); + return result; + } + +} diff --git a/core/src/main/java/org/elasticsearch/index/search/geo/LegacyInMemoryGeoBoundingBoxQuery.java b/core/src/main/java/org/elasticsearch/index/search/geo/LegacyInMemoryGeoBoundingBoxQuery.java index 2d8ea7af49d05..78b80246a0438 100644 --- a/core/src/main/java/org/elasticsearch/index/search/geo/LegacyInMemoryGeoBoundingBoxQuery.java +++ b/core/src/main/java/org/elasticsearch/index/search/geo/LegacyInMemoryGeoBoundingBoxQuery.java @@ -106,7 +106,7 @@ private static class Meridian180GeoBoundingBoxBits implements Bits { private final GeoPoint topLeft; private final GeoPoint bottomRight; - public Meridian180GeoBoundingBoxBits(int maxDoc, MultiGeoPointValues values, GeoPoint topLeft, GeoPoint bottomRight) { + Meridian180GeoBoundingBoxBits(int maxDoc, MultiGeoPointValues values, GeoPoint topLeft, GeoPoint bottomRight) { this.maxDoc = maxDoc; this.values = values; this.topLeft = topLeft; @@ -139,7 +139,7 @@ private static class GeoBoundingBoxBits implements Bits { private final GeoPoint topLeft; private final GeoPoint bottomRight; - public GeoBoundingBoxBits(int maxDoc, MultiGeoPointValues values, GeoPoint topLeft, GeoPoint bottomRight) { + GeoBoundingBoxBits(int maxDoc, MultiGeoPointValues values, GeoPoint topLeft, GeoPoint bottomRight) { this.maxDoc = maxDoc; this.values = values; this.topLeft = topLeft; diff --git a/core/src/main/java/org/elasticsearch/index/search/geo/LegacyIndexedGeoBoundingBoxQuery.java b/core/src/main/java/org/elasticsearch/index/search/geo/LegacyIndexedGeoBoundingBoxQuery.java index f60a12f023dd7..f04fe7c9767d0 100644 --- a/core/src/main/java/org/elasticsearch/index/search/geo/LegacyIndexedGeoBoundingBoxQuery.java +++ b/core/src/main/java/org/elasticsearch/index/search/geo/LegacyIndexedGeoBoundingBoxQuery.java @@ -19,48 +19,59 @@ package org.elasticsearch.index.search.geo; +import org.apache.lucene.geo.Rectangle; import 
org.apache.lucene.search.BooleanClause.Occur; import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.Query; import org.elasticsearch.common.geo.GeoPoint; -import org.elasticsearch.index.mapper.LegacyGeoPointFieldMapper; +import org.elasticsearch.index.mapper.BaseGeoPointFieldMapper.LegacyGeoPointFieldType; +import org.elasticsearch.index.query.QueryShardContext; /** + * Bounding Box filter used for indexes created before 2.2 * - * @deprecated This query is no longer used for geo_point indexes created after version 2.1 + * @deprecated this class is deprecated in favor of lucene's GeoPointField */ @Deprecated public class LegacyIndexedGeoBoundingBoxQuery { + public static Query create(final Rectangle bbox, final LegacyGeoPointFieldType fieldType, QueryShardContext context) { + return create(bbox.minLat, bbox.minLon, bbox.maxLat, bbox.maxLon, fieldType, context); + } + + public static Query create(final GeoPoint topLeft, final GeoPoint bottomRight, final LegacyGeoPointFieldType fieldType, + QueryShardContext context) { + return create(bottomRight.getLat(), topLeft.getLon(), topLeft.getLat(), bottomRight.getLon(), fieldType, context); + } - public static Query create(GeoPoint topLeft, GeoPoint bottomRight, LegacyGeoPointFieldMapper.GeoPointFieldType fieldType) { + private static Query create(double minLat, double minLon, double maxLat, double maxLon, + LegacyGeoPointFieldType fieldType, QueryShardContext context) { if (!fieldType.isLatLonEnabled()) { throw new IllegalArgumentException("lat/lon is not enabled (indexed) for field [" + fieldType.name() + "], can't use indexed filter on it"); } //checks to see if bounding box crosses 180 degrees - if (topLeft.lon() > bottomRight.lon()) { - return westGeoBoundingBoxFilter(topLeft, bottomRight, fieldType); - } else { - return eastGeoBoundingBoxFilter(topLeft, bottomRight, fieldType); + if (minLon > maxLon) { + return westGeoBoundingBoxFilter(minLat, minLon, maxLat, maxLon, fieldType, context); } + return eastGeoBoundingBoxFilter(minLat, minLon, maxLat, maxLon, fieldType, context); } - private static Query westGeoBoundingBoxFilter(GeoPoint topLeft, GeoPoint bottomRight, - LegacyGeoPointFieldMapper.GeoPointFieldType fieldType) { + private static Query westGeoBoundingBoxFilter(double minLat, double minLon, double maxLat, double maxLon, + LegacyGeoPointFieldType fieldType, QueryShardContext context) { BooleanQuery.Builder filter = new BooleanQuery.Builder(); filter.setMinimumNumberShouldMatch(1); - filter.add(fieldType.lonFieldType().rangeQuery(null, bottomRight.lon(), true, true), Occur.SHOULD); - filter.add(fieldType.lonFieldType().rangeQuery(topLeft.lon(), null, true, true), Occur.SHOULD); - filter.add(fieldType.latFieldType().rangeQuery(bottomRight.lat(), topLeft.lat(), true, true), Occur.MUST); + filter.add(fieldType.lonFieldType().rangeQuery(null, maxLon, true, true, context), Occur.SHOULD); + filter.add(fieldType.lonFieldType().rangeQuery(minLon, null, true, true, context), Occur.SHOULD); + filter.add(fieldType.latFieldType().rangeQuery(minLat, maxLat, true, true, context), Occur.MUST); return new ConstantScoreQuery(filter.build()); } - private static Query eastGeoBoundingBoxFilter(GeoPoint topLeft, GeoPoint bottomRight, - LegacyGeoPointFieldMapper.GeoPointFieldType fieldType) { + private static Query eastGeoBoundingBoxFilter(double minLat, double minLon, double maxLat, double maxLon, + LegacyGeoPointFieldType fieldType, QueryShardContext context) { BooleanQuery.Builder 
filter = new BooleanQuery.Builder(); - filter.add(fieldType.lonFieldType().rangeQuery(topLeft.lon(), bottomRight.lon(), true, true), Occur.MUST); - filter.add(fieldType.latFieldType().rangeQuery(bottomRight.lat(), topLeft.lat(), true, true), Occur.MUST); + filter.add(fieldType.lonFieldType().rangeQuery(minLon, maxLon, true, true, context), Occur.MUST); + filter.add(fieldType.latFieldType().rangeQuery(minLat, maxLat, true, true, context), Occur.MUST); return new ConstantScoreQuery(filter.build()); } } diff --git a/core/src/main/java/org/elasticsearch/index/search/stats/SearchStats.java b/core/src/main/java/org/elasticsearch/index/search/stats/SearchStats.java index d514571d0e200..824ca598ae2bb 100644 --- a/core/src/main/java/org/elasticsearch/index/search/stats/SearchStats.java +++ b/core/src/main/java/org/elasticsearch/index/search/stats/SearchStats.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.search.stats; +import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -32,9 +33,7 @@ import java.util.HashMap; import java.util.Map; -/** - */ -public class SearchStats implements Streamable, ToXContent { +public class SearchStats extends ToXContentToBytes implements Streamable { public static class Stats implements Streamable, ToXContent { @@ -340,22 +339,12 @@ static final class Fields { static final String SUGGEST_CURRENT = "suggest_current"; } - public static SearchStats readSearchStats(StreamInput in) throws IOException { - SearchStats searchStats = new SearchStats(); - searchStats.readFrom(in); - return searchStats; - } - @Override public void readFrom(StreamInput in) throws IOException { totalStats = Stats.readStats(in); openContexts = in.readVLong(); if (in.readBoolean()) { - int size = in.readVInt(); - groupStats = new HashMap<>(size); - for (int i = 0; i < size; i++) { - groupStats.put(in.readString(), Stats.readStats(in)); - } + groupStats = in.readMap(StreamInput::readString, Stats::readStats); } } @@ -367,24 +356,7 @@ public void writeTo(StreamOutput out) throws IOException { out.writeBoolean(false); } else { out.writeBoolean(true); - out.writeVInt(groupStats.size()); - for (Map.Entry entry : groupStats.entrySet()) { - out.writeString(entry.getKey()); - entry.getValue().writeTo(out); - } - } - } - - @Override - public String toString() { - try { - XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint(); - builder.startObject(); - toXContent(builder, EMPTY_PARAMS); - builder.endObject(); - return builder.string(); - } catch (IOException e) { - return "{ \"error\" : \"" + e.getMessage() + "\"}"; + out.writeMap(groupStats, StreamOutput::writeString, (stream, stats) -> stats.writeTo(stream)); } } } diff --git a/core/src/main/java/org/elasticsearch/index/search/stats/ShardSearchStats.java b/core/src/main/java/org/elasticsearch/index/search/stats/ShardSearchStats.java index 68a2a30a2e974..8a6353e4cab32 100644 --- a/core/src/main/java/org/elasticsearch/index/search/stats/ShardSearchStats.java +++ b/core/src/main/java/org/elasticsearch/index/search/stats/ShardSearchStats.java @@ -82,8 +82,10 @@ public void onFailedQueryPhase(SearchContext searchContext) { computeStats(searchContext, statsHolder -> { if (searchContext.hasOnlySuggest()) { statsHolder.suggestCurrent.dec(); + assert statsHolder.suggestCurrent.count() >= 0; } else { statsHolder.queryCurrent.dec(); + assert statsHolder.queryCurrent.count() >= 0; } }); } 
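Aside: the SearchStats hunk above replaces the hand-rolled size-plus-loop map serialization with StreamOutput#writeMap and StreamInput#readMap. A minimal, hedged sketch of that round-trip pattern follows; the class name and payload are illustrative only and not part of this patch, and it assumes Elasticsearch's BytesStreamOutput and the map helpers used in the hunk are on the classpath.

```java
import java.util.HashMap;
import java.util.Map;

import org.elasticsearch.common.io.stream.BytesStreamOutput;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

public class WriteMapSketch {
    public static void main(String[] args) throws Exception {
        Map<String, Long> groupCounts = new HashMap<>(); // hypothetical payload
        groupCounts.put("slow", 3L);
        groupCounts.put("fast", 42L);

        try (BytesStreamOutput out = new BytesStreamOutput()) {
            // One call replaces the manual writeVInt(size) + per-entry loop.
            out.writeMap(groupCounts, StreamOutput::writeString, StreamOutput::writeVLong);

            try (StreamInput in = out.bytes().streamInput()) {
                // Symmetric read; reconstructs the map without explicit size bookkeeping.
                Map<String, Long> read = in.readMap(StreamInput::readString, StreamInput::readVLong);
                assert read.equals(groupCounts);
            }
        }
    }
}
```

The symmetry between writeMap and readMap is what lets the hunk above drop the explicit size prefix and per-entry loops on both the read and write paths.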
@@ -94,9 +96,11 @@ public void onQueryPhase(SearchContext searchContext, long tookInNanos) { if (searchContext.hasOnlySuggest()) { statsHolder.suggestMetric.inc(tookInNanos); statsHolder.suggestCurrent.dec(); + assert statsHolder.suggestCurrent.count() >= 0; } else { statsHolder.queryMetric.inc(tookInNanos); statsHolder.queryCurrent.dec(); + assert statsHolder.queryCurrent.count() >= 0; } }); } @@ -116,6 +120,7 @@ public void onFetchPhase(SearchContext searchContext, long tookInNanos) { computeStats(searchContext, statsHolder -> { statsHolder.fetchMetric.inc(tookInNanos); statsHolder.fetchCurrent.dec(); + assert statsHolder.fetchCurrent.count() >= 0; }); } @@ -176,6 +181,7 @@ public void onNewScrollContext(SearchContext context) { @Override public void onFreeScrollContext(SearchContext context) { totalStats.scrollCurrent.dec(); + assert totalStats.scrollCurrent.count() >= 0; totalStats.scrollMetric.inc(System.nanoTime() - context.getOriginNanoTime()); } diff --git a/core/src/main/java/org/elasticsearch/index/shard/CommitPoint.java b/core/src/main/java/org/elasticsearch/index/shard/CommitPoint.java index 9082fc072dadb..acd2f9f84559f 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/CommitPoint.java +++ b/core/src/main/java/org/elasticsearch/index/shard/CommitPoint.java @@ -64,7 +64,7 @@ public String checksum() { } } - public static enum Type { + public enum Type { GENERATED, SAVED } diff --git a/core/src/main/java/org/elasticsearch/index/shard/CommitPoints.java b/core/src/main/java/org/elasticsearch/index/shard/CommitPoints.java index 2a5b350602889..dde6a82ea79c9 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/CommitPoints.java +++ b/core/src/main/java/org/elasticsearch/index/shard/CommitPoints.java @@ -21,6 +21,7 @@ import org.apache.lucene.util.CollectionUtil; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; @@ -121,7 +122,8 @@ public static byte[] toXContent(CommitPoint commitPoint) throws Exception { } public static CommitPoint fromXContent(byte[] data) throws Exception { - try (XContentParser parser = XContentFactory.xContent(XContentType.JSON).createParser(data)) { + // EMPTY is safe here because we never call namedObject + try (XContentParser parser = XContentFactory.xContent(XContentType.JSON).createParser(NamedXContentRegistry.EMPTY, data)) { String currentFieldName = null; XContentParser.Token token = parser.nextToken(); if (token == null) { diff --git a/core/src/main/java/org/elasticsearch/index/shard/DocsStats.java b/core/src/main/java/org/elasticsearch/index/shard/DocsStats.java index bfb95b426b110..7fb2d09436fe2 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/DocsStats.java +++ b/core/src/main/java/org/elasticsearch/index/shard/DocsStats.java @@ -59,12 +59,6 @@ public long getDeleted() { return this.deleted; } - public static DocsStats readDocStats(StreamInput in) throws IOException { - DocsStats docsStats = new DocsStats(); - docsStats.readFrom(in); - return docsStats; - } - @Override public void readFrom(StreamInput in) throws IOException { count = in.readVLong(); diff --git a/core/src/main/java/org/elasticsearch/index/shard/ElasticsearchQueryCachingPolicy.java b/core/src/main/java/org/elasticsearch/index/shard/ElasticsearchQueryCachingPolicy.java new file mode 100644 index 
0000000000000..3ea3955a1f416 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/shard/ElasticsearchQueryCachingPolicy.java @@ -0,0 +1,56 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.shard; + +import org.apache.lucene.search.Query; +import org.apache.lucene.search.QueryCachingPolicy; +import org.apache.lucene.search.TermQuery; + +import java.io.IOException; + +/** + * A {@link QueryCachingPolicy} that does not cache {@link TermQuery}s. + */ +final class ElasticsearchQueryCachingPolicy implements QueryCachingPolicy { + + private final QueryCachingPolicy in; + + ElasticsearchQueryCachingPolicy(QueryCachingPolicy in) { + this.in = in; + } + + @Override + public void onUse(Query query) { + if (query.getClass() != TermQuery.class) { + // Do not waste space in the history for term queries. The assumption + // is that these queries are very fast so not worth caching + in.onUse(query); + } + } + + @Override + public boolean shouldCache(Query query) throws IOException { + if (query.getClass() == TermQuery.class) { + return false; + } + return in.shouldCache(query); + } + +} diff --git a/core/src/main/java/org/elasticsearch/index/shard/IndexEventListener.java b/core/src/main/java/org/elasticsearch/index/shard/IndexEventListener.java index f5c6dca7d2fc7..24ed98e9affe3 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/IndexEventListener.java +++ b/core/src/main/java/org/elasticsearch/index/shard/IndexEventListener.java @@ -23,6 +23,8 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexService; +import org.elasticsearch.index.IndexSettings; +import org.elasticsearch.indices.cluster.IndicesClusterStateService.AllocatedIndices.IndexRemovalReason; /** * An index event listener is the primary extension point for plugins and build-in services @@ -103,31 +105,32 @@ default void afterIndexCreated(IndexService indexService) { } - /** - * Called before the index shard gets created. - */ - default void beforeIndexShardCreated(ShardId shardId, Settings indexSettings) { - } - - /** * Called before the index get closed. * * @param indexService The index service + * @param reason the reason for index removal */ - default void beforeIndexClosed(IndexService indexService) { + default void beforeIndexRemoved(IndexService indexService, IndexRemovalReason reason) { } /** - * Called after the index has been closed. + * Called after the index has been removed. 
* * @param index The index + * @param reason the reason for index removal */ - default void afterIndexClosed(Index index, Settings indexSettings) { + default void afterIndexRemoved(Index index, IndexSettings indexSettings, IndexRemovalReason reason) { } + /** + * Called before the index shard gets created. + */ + default void beforeIndexShardCreated(ShardId shardId, Settings indexSettings) { + } + /** * Called before the index shard gets deleted from disk * Note: this method is only executed on the first attempt of deleting the shard. Retries are will not invoke @@ -149,28 +152,6 @@ default void beforeIndexShardDeleted(ShardId shardId, Settings indexSettings) { default void afterIndexShardDeleted(ShardId shardId, Settings indexSettings) { } - /** - * Called after the index has been deleted. - * This listener method is invoked after {@link #afterIndexClosed(org.elasticsearch.index.Index, org.elasticsearch.common.settings.Settings)} - * when an index is deleted - * - * @param index The index - */ - default void afterIndexDeleted(Index index, Settings indexSettings) { - - } - - /** - * Called before the index gets deleted. - * This listener method is invoked after - * {@link #beforeIndexClosed(org.elasticsearch.index.IndexService)} when an index is deleted - * - * @param indexService The index service - */ - default void beforeIndexDeleted(IndexService indexService) { - - } - /** * Called on the Master node only before the {@link IndexService} instances is created to simulate an index creation. * This happens right before the index and it's metadata is registered in the cluster state diff --git a/core/src/main/java/org/elasticsearch/index/shard/IndexShard.java b/core/src/main/java/org/elasticsearch/index/shard/IndexShard.java index a38585cf46849..bee40f801e741 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/IndexShard.java +++ b/core/src/main/java/org/elasticsearch/index/shard/IndexShard.java @@ -22,21 +22,18 @@ import org.apache.logging.log4j.Logger; import org.apache.lucene.codecs.PostingsFormat; import org.apache.lucene.index.CheckIndex; -import org.apache.lucene.index.CorruptIndexException; import org.apache.lucene.index.IndexCommit; -import org.apache.lucene.index.IndexFormatTooNewException; -import org.apache.lucene.index.IndexFormatTooOldException; -import org.apache.lucene.index.IndexWriter; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy; import org.apache.lucene.index.SnapshotDeletionPolicy; import org.apache.lucene.index.Term; -import org.apache.lucene.search.Query; import org.apache.lucene.search.QueryCachingPolicy; +import org.apache.lucene.search.ReferenceManager; import org.apache.lucene.search.UsageTrackingQueryCachingPolicy; import org.apache.lucene.store.AlreadyClosedException; -import org.apache.lucene.store.Lock; import org.apache.lucene.util.IOUtils; import org.apache.lucene.util.ThreadInterruptedException; +import org.elasticsearch.Assertions; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.admin.indices.flush.FlushRequest; @@ -48,7 +45,6 @@ import org.elasticsearch.cluster.routing.RecoverySource; import org.elasticsearch.cluster.routing.RecoverySource.SnapshotRecoverySource; import org.elasticsearch.cluster.routing.ShardRouting; -import org.elasticsearch.cluster.routing.ShardRoutingState; import org.elasticsearch.common.Booleans; import org.elasticsearch.common.Nullable; import 
org.elasticsearch.common.collect.Tuple; @@ -62,7 +58,6 @@ import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.BigArrays; -import org.elasticsearch.common.util.Callback; import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.common.util.concurrent.AsyncIOProcessor; import org.elasticsearch.index.Index; @@ -77,7 +72,6 @@ import org.elasticsearch.index.codec.CodecService; import org.elasticsearch.index.engine.CommitStats; import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.index.engine.EngineClosedException; import org.elasticsearch.index.engine.EngineConfig; import org.elasticsearch.index.engine.EngineException; import org.elasticsearch.index.engine.EngineFactory; @@ -91,13 +85,13 @@ import org.elasticsearch.index.flush.FlushStats; import org.elasticsearch.index.get.GetStats; import org.elasticsearch.index.get.ShardGetService; -import org.elasticsearch.index.mapper.DocumentMapper; import org.elasticsearch.index.mapper.DocumentMapperForType; -import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.index.mapper.IdFieldMapper; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.ParsedDocument; import org.elasticsearch.index.mapper.SourceToParse; import org.elasticsearch.index.mapper.Uid; +import org.elasticsearch.index.mapper.UidFieldMapper; import org.elasticsearch.index.merge.MergeStats; import org.elasticsearch.index.recovery.RecoveryStats; import org.elasticsearch.index.refresh.RefreshStats; @@ -116,9 +110,9 @@ import org.elasticsearch.indices.IndexingMemoryController; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.indices.cluster.IndicesClusterStateService; +import org.elasticsearch.indices.recovery.PeerRecoveryTargetService; import org.elasticsearch.indices.recovery.RecoveryFailedException; import org.elasticsearch.indices.recovery.RecoveryState; -import org.elasticsearch.indices.recovery.PeerRecoveryTargetService; import org.elasticsearch.repositories.RepositoriesService; import org.elasticsearch.repositories.Repository; import org.elasticsearch.search.suggest.completion.CompletionFieldStats; @@ -126,16 +120,13 @@ import org.elasticsearch.search.suggest.completion2x.Completion090PostingsFormat; import org.elasticsearch.threadpool.ThreadPool; -import java.io.FileNotFoundException; import java.io.IOException; import java.io.PrintStream; import java.nio.channels.ClosedByInterruptException; import java.nio.charset.StandardCharsets; -import java.nio.file.NoSuchFileException; import java.util.ArrayList; import java.util.EnumSet; import java.util.List; -import java.util.Locale; import java.util.Map; import java.util.Objects; import java.util.Set; @@ -272,7 +263,11 @@ public IndexShard(ShardRouting shardRouting, IndexSettings indexSettings, ShardP if (IndexModule.INDEX_QUERY_CACHE_EVERYTHING_SETTING.get(settings)) { cachingPolicy = QueryCachingPolicy.ALWAYS_CACHE; } else { - cachingPolicy = new UsageTrackingQueryCachingPolicy(); + QueryCachingPolicy cachingPolicy = new UsageTrackingQueryCachingPolicy(); + if (IndexModule.INDEX_QUERY_CACHE_TERM_QUERIES_SETTING.get(settings) == false) { + cachingPolicy = new ElasticsearchQueryCachingPolicy(cachingPolicy); + } + this.cachingPolicy = cachingPolicy; } indexShardOperationsLock = new IndexShardOperationsLock(shardId, logger, threadPool); searcherWrapper = indexSearcherWrapper; @@ -338,8 +333,21 @@ public long 
getPrimaryTerm() { public void updatePrimaryTerm(final long newTerm) { synchronized (mutex) { if (newTerm != primaryTerm) { - assert shardRouting.primary() == false : "a primary shard should never update it's term. shard: " + shardRouting - + " current term [" + primaryTerm + "] new term [" + newTerm + "]"; + // Note that due to cluster state batching an initializing primary shard term can be failed and re-assigned + // in one state causing its term to be incremented. Note that if both current shard state and new + // shard state are initializing, we could replace the current shard and reinitialize it. It is however + // possible that this shard is being started. This can happen if: + // 1) Shard is post recovery and sends shard started to the master + // 2) Node gets disconnected and rejoins + // 3) Master assigns the shard back to the node + // 4) Master processes the shard started and starts the shard + // 5) The node processes the cluster state where the shard is both started and primary term is incremented. + // + // We could fail the shard in that case, but this will cause it to be removed from the insync allocations list + // potentially preventing re-allocation. + assert shardRouting.primary() == false || shardRouting.initializing() == false : + "a started primary shard should never update its term. shard: " + shardRouting + + " current term [" + primaryTerm + "] new term [" + newTerm + "]"; assert newTerm > primaryTerm : "primary terms can only go up. current [" + primaryTerm + "], new [" + newTerm + "]"; primaryTerm = newTerm; } @@ -367,61 +375,50 @@ public QueryCachingPolicy getQueryCachingPolicy() { * @throws IndexShardRelocatedException if shard is marked as relocated and relocation aborted * @throws IOException if shard state could not be persisted */ - public void updateRoutingEntry(final ShardRouting newRouting) throws IOException { - final ShardRouting currentRouting = this.shardRouting; - if (!newRouting.shardId().equals(shardId())) { - throw new IllegalArgumentException("Trying to set a routing entry with shardId " + newRouting.shardId() + " on a shard with shardId " + shardId() + ""); - } - if ((currentRouting == null || newRouting.isSameAllocation(currentRouting)) == false) { - throw new IllegalArgumentException("Trying to set a routing entry with a different allocation. Current " + currentRouting + ", new " + newRouting); - } - if (currentRouting != null) { - if (!newRouting.primary() && currentRouting.primary()) { - logger.warn("suspect illegal state: trying to move shard from primary mode to replica mode"); + public void updateRoutingEntry(ShardRouting newRouting) throws IOException { + final ShardRouting currentRouting; + synchronized (mutex) { + currentRouting = this.shardRouting; + + if (!newRouting.shardId().equals(shardId())) { + throw new IllegalArgumentException("Trying to set a routing entry with shardId " + newRouting.shardId() + " on a shard with shardId " + shardId()); } - // if its the same routing, return - if (currentRouting.equals(newRouting)) { - return; + if ((currentRouting == null || newRouting.isSameAllocation(currentRouting)) == false) { + throw new IllegalArgumentException("Trying to set a routing entry with a different allocation. Current " + currentRouting + ", new " + newRouting); + } + if (currentRouting != null && currentRouting.primary() && newRouting.primary() == false) { + throw new IllegalArgumentException("illegal state: trying to move shard from primary mode to replica mode. 
Current " + + currentRouting + ", new " + newRouting); } - } - if (state == IndexShardState.POST_RECOVERY) { - // if the state is started or relocating (cause it might move right away from started to relocating) - // then move to STARTED - if (newRouting.state() == ShardRoutingState.STARTED || newRouting.state() == ShardRoutingState.RELOCATING) { + if (state == IndexShardState.POST_RECOVERY && newRouting.active()) { + assert currentRouting.active() == false : "we are in POST_RECOVERY, but our shard routing is active " + currentRouting; // we want to refresh *before* we move to internal STARTED state try { getEngine().refresh("cluster_state_started"); } catch (Exception e) { logger.debug("failed to refresh due to move to cluster wide started", e); } - - boolean movedToStarted = false; - synchronized (mutex) { - // do the check under a mutex, so we make sure to only change to STARTED if in POST_RECOVERY - if (state == IndexShardState.POST_RECOVERY) { - changeState(IndexShardState.STARTED, "global state is [" + newRouting.state() + "]"); - movedToStarted = true; - } else { - logger.debug("state [{}] not changed, not in POST_RECOVERY, global state is [{}]", state, newRouting.state()); - } - } - if (movedToStarted) { - indexEventListener.afterIndexShardStarted(this); - } + changeState(IndexShardState.STARTED, "global state is [" + newRouting.state() + "]"); + } else if (state == IndexShardState.RELOCATED && + (newRouting.relocating() == false || newRouting.equalsIgnoringMetaData(currentRouting) == false)) { + // if the shard is marked as RELOCATED we have to fail when any changes in shard routing occur (e.g. due to recovery + // failure / cancellation). The reason is that at the moment we cannot safely move back to STARTED without risking two + // active primaries. + throw new IndexShardRelocatedException(shardId(), "Shard is marked as relocated, cannot safely move to state " + newRouting.state()); } + assert newRouting.active() == false || state == IndexShardState.STARTED || state == IndexShardState.RELOCATED || + state == IndexShardState.CLOSED : + "routing is active, but local shard state isn't. routing: " + newRouting + ", local state: " + state; + this.shardRouting = newRouting; + persistMetadata(newRouting, currentRouting); } - - if (state == IndexShardState.RELOCATED && - (newRouting.relocating() == false || newRouting.equalsIgnoringMetaData(currentRouting) == false)) { - // if the shard is marked as RELOCATED we have to fail when any changes in shard routing occur (e.g. due to recovery - // failure / cancellation). The reason is that at the moment we cannot safely move back to STARTED without risking two - // active primaries. 
- throw new IndexShardRelocatedException(shardId(), "Shard is marked as relocated, cannot safely move to state " + newRouting.state()); + if (currentRouting != null && currentRouting.active() == false && newRouting.active()) { + indexEventListener.afterIndexShardStarted(this); + } + if (newRouting.equals(currentRouting) == false) { + indexEventListener.shardRoutingChanged(this, currentRouting, newRouting); } - this.shardRouting = newRouting; - indexEventListener.shardRoutingChanged(this, currentRouting, newRouting); - persistMetadata(newRouting, currentRouting); } /** @@ -451,6 +448,7 @@ public IndexShardState markAsRecovering(String reason, RecoveryState recoverySta } public void relocated(String reason) throws IllegalIndexShardStateException, InterruptedException { + assert shardRouting.primary() : "only primaries can be marked as relocated: " + shardRouting; try { indexShardOperationsLock.blockOperations(30, TimeUnit.MINUTES, () -> { // no shard operation locks are being held here, move state from started to relocated @@ -460,6 +458,15 @@ public void relocated(String reason) throws IllegalIndexShardStateException, Int if (state != IndexShardState.STARTED) { throw new IndexShardNotStartedException(shardId, state); } + // if the master cancelled the recovery, the target will be removed + // and the recovery will be stopped. + // However, it is still possible that we concurrently end up here + // and therefore have to make sure we don't mark the shard as relocated when + // its shard routing says otherwise. + if (shardRouting.relocating() == false) { + throw new IllegalIndexShardStateException(shardId, IndexShardState.STARTED, + ": shard is no longer relocating " + shardRouting); + } changeState(IndexShardState.RELOCATED, reason); } }); @@ -485,6 +492,7 @@ public IndexShardState state() { * @return the previous shard state */ private IndexShardState changeState(IndexShardState newState, String reason) { + assert Thread.holdsLock(mutex); logger.debug("state: [{}]->[{}], reason [{}]", state, newState, reason); IndexShardState previousState = state; state = newState; @@ -523,86 +531,93 @@ static Engine.Index prepareIndex(DocumentMapperForType docMapper, SourceToParse if (docMapper.getMapping() != null) { doc.addDynamicMappingsUpdate(docMapper.getMapping()); } - MappedFieldType uidFieldType = docMapper.getDocumentMapper().uidMapper().fieldType(); - Query uidQuery = uidFieldType.termQuery(doc.uid(), null); - Term uid = MappedFieldType.extractTerm(uidQuery); + Term uid; + if (docMapper.getDocumentMapper().idFieldMapper().fieldType().indexOptions() != IndexOptions.NONE) { + uid = new Term(IdFieldMapper.NAME, doc.id()); + } else { + uid = new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(doc.type(), doc.id())); + } return new Engine.Index(uid, doc, version, versionType, origin, startTime, autoGeneratedIdTimestamp, isRetry); } - public void index(Engine.Index index) { + public Engine.IndexResult index(Engine.Index index) throws IOException { ensureWriteAllowed(index); Engine engine = getEngine(); - index(engine, index); + return index(engine, index); } - private void index(Engine engine, Engine.Index index) { + private Engine.IndexResult index(Engine engine, Engine.Index index) throws IOException { active.set(true); - index = indexingOperationListeners.preIndex(index); + final Engine.IndexResult result; + index = indexingOperationListeners.preIndex(shardId, index); try { if (logger.isTraceEnabled()) { - logger.trace("index [{}][{}]{}", index.type(), index.id(), index.docs()); + logger.trace("index 
[{}][{}] (v# [{}])", index.type(), index.id(), index.version()); } - engine.index(index); - index.endTime(System.nanoTime()); + result = engine.index(index); } catch (Exception e) { - indexingOperationListeners.postIndex(index, e); + indexingOperationListeners.postIndex(shardId, index, e); throw e; } - indexingOperationListeners.postIndex(index, index.isCreated()); + indexingOperationListeners.postIndex(shardId, index, result); + return result; } public Engine.Delete prepareDeleteOnPrimary(String type, String id, long version, VersionType versionType) { verifyPrimary(); - final DocumentMapper documentMapper = docMapper(type).getDocumentMapper(); - final MappedFieldType uidFieldType = documentMapper.uidMapper().fieldType(); - final Query uidQuery = uidFieldType.termQuery(Uid.createUid(type, id), null); - final Term uid = MappedFieldType.extractTerm(uidQuery); + final Term uid = extractUidForDelete(type, id); return prepareDelete(type, id, uid, version, versionType, Engine.Operation.Origin.PRIMARY); } public Engine.Delete prepareDeleteOnReplica(String type, String id, long version, VersionType versionType) { - final DocumentMapper documentMapper = docMapper(type).getDocumentMapper(); - final MappedFieldType uidFieldType = documentMapper.uidMapper().fieldType(); - final Query uidQuery = uidFieldType.termQuery(Uid.createUid(type, id), null); - final Term uid = MappedFieldType.extractTerm(uidQuery); + final Term uid = extractUidForDelete(type, id); return prepareDelete(type, id, uid, version, versionType, Engine.Operation.Origin.REPLICA); } static Engine.Delete prepareDelete(String type, String id, Term uid, long version, VersionType versionType, Engine.Operation.Origin origin) { long startTime = System.nanoTime(); - return new Engine.Delete(type, id, uid, version, versionType, origin, startTime, false); + return new Engine.Delete(type, id, uid, version, versionType, origin, startTime); } - public void delete(Engine.Delete delete) { + public Engine.DeleteResult delete(Engine.Delete delete) throws IOException { ensureWriteAllowed(delete); Engine engine = getEngine(); - delete(engine, delete); + return delete(engine, delete); + } + + private Term extractUidForDelete(String type, String id) { + if (indexSettings.isSingleType()) { + // This is only correct because we create types dynamically on delete operations + // otherwise this could match the same _id from a different type + return new Term(IdFieldMapper.NAME, id); + } else { + return new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(type, id)); + } } - private void delete(Engine engine, Engine.Delete delete) { + private Engine.DeleteResult delete(Engine engine, Engine.Delete delete) throws IOException { active.set(true); - delete = indexingOperationListeners.preDelete(delete); + final Engine.DeleteResult result; + delete = indexingOperationListeners.preDelete(shardId, delete); try { if (logger.isTraceEnabled()) { logger.trace("delete [{}]", delete.uid().text()); } - engine.delete(delete); - delete.endTime(System.nanoTime()); + result = engine.delete(delete); } catch (Exception e) { - indexingOperationListeners.postDelete(delete, e); + indexingOperationListeners.postDelete(shardId, delete, e); throw e; } - - indexingOperationListeners.postDelete(delete); + indexingOperationListeners.postDelete(shardId, delete, result); + return result; } - public Engine.GetResult get(Engine.Get get) { readAllowed(); return getEngine().get(get, this::acquireSearcher); } /** - * Writes all indexing changes to disk and opens a new searcher reflecting all 
changes. This can throw {@link EngineClosedException}. + * Writes all indexing changes to disk and opens a new searcher reflecting all changes. This can throw {@link AlreadyClosedException}. */ public void refresh(String source) { verifyNotClosed(); @@ -614,9 +629,7 @@ public void refresh(String source) { if (logger.isTraceEnabled()) { logger.trace("refresh with source [{}] indexBufferRAMBytesUsed [{}]", source, new ByteSizeValue(bytes)); } - long time = System.nanoTime(); getEngine().refresh(source); - refreshMetric.inc(System.nanoTime() - time); } finally { if (logger.isTraceEnabled()) { logger.trace("remove [{}] writing bytes for shard [{}]", new ByteSizeValue(bytes), shardId()); @@ -627,9 +640,7 @@ public void refresh(String source) { if (logger.isTraceEnabled()) { logger.trace("refresh with source [{}]", source); } - long time = System.nanoTime(); getEngine().refresh(source); - refreshMetric.inc(System.nanoTime() - time); } } @@ -641,7 +652,9 @@ public long getWritingBytes() { } public RefreshStats refreshStats() { - return new RefreshStats(refreshMetric.count(), TimeUnit.NANOSECONDS.toMillis(refreshMetric.sum())); + // Null refreshListeners means this shard doesn't support them so there can't be any. + int listeners = refreshListeners == null ? 0 : refreshListeners.pendingCount(); + return new RefreshStats(refreshMetric.count(), TimeUnit.NANOSECONDS.toMillis(refreshMetric.sum()), listeners); } public FlushStats flushStats() { @@ -649,9 +662,9 @@ public FlushStats flushStats() { } public DocsStats docStats() { - readAllowed(); - final Engine engine = getEngine(); - return engine.getDocStats(); + try (Engine.Searcher searcher = acquireSearcher("doc_stats")) { + return new DocsStats(searcher.reader().numDocs(), searcher.reader().numDeletedDocs()); + } } /** @@ -723,7 +736,7 @@ public TranslogStats translogStats() { public CompletionStats completionStats(String... fields) { CompletionStats completionStats = new CompletionStats(); - try (final Engine.Searcher currentSearcher = acquireSearcher("completion_stats")) { + try (Engine.Searcher currentSearcher = acquireSearcher("completion_stats")) { completionStats.add(CompletionFieldStats.completionStats(currentSearcher.reader(), fields)); // Necessary for 2.x shards: Completion090PostingsFormat postingsFormat = ((Completion090PostingsFormat) @@ -734,9 +747,14 @@ public CompletionStats completionStats(String... fields) { } public Engine.SyncedFlushResult syncFlush(String syncId, Engine.CommitId expectedCommitId) { - verifyStartedOrRecovering(); + verifyNotClosed(); logger.trace("trying to sync flush. sync id [{}]. expected commit id [{}]]", syncId, expectedCommitId); - return getEngine().syncFlush(syncId, expectedCommitId); + Engine engine = getEngine(); + if (engine.isRecovering()) { + throw new IllegalIndexShardStateException(shardId(), state, "syncFlush is only allowed if the engine is not recovering" + + " from translog"); + } + return engine.syncFlush(syncId, expectedCommitId); } public Engine.CommitId flush(FlushRequest request) throws ElasticsearchException { @@ -747,18 +765,23 @@ public Engine.CommitId flush(FlushRequest request) throws ElasticsearchException } // we allows flush while recovering, since we allow for operations to happen // while recovering, and we want to keep the translog at bay (up to deletes, which - // we don't gc). - verifyStartedOrRecovering(); - + // we don't gc). Yet, we don't use flush internally to clear deletes and flush the indexwriter since + // we use #writeIndexingBuffer for this now. 
+ verifyNotClosed(); + Engine engine = getEngine(); + if (engine.isRecovering()) { + throw new IllegalIndexShardStateException(shardId(), state, "flush is only allowed if the engine is not recovering" + + " from translog"); + } long time = System.nanoTime(); - Engine.CommitId commitId = getEngine().flush(force, waitIfOngoing); + Engine.CommitId commitId = engine.flush(force, waitIfOngoing); flushMetric.inc(System.nanoTime() - time); return commitId; } public void forceMerge(ForceMergeRequest forceMerge) throws IOException { - verifyStarted(); + verifyActive(); if (logger.isTraceEnabled()) { logger.trace("force merge with {}", forceMerge); } @@ -770,7 +793,7 @@ public void forceMerge(ForceMergeRequest forceMerge) throws IOException { * Upgrades the shard to the current version of Lucene and returns the minimum segment version */ public org.apache.lucene.util.Version upgrade(UpgradeRequest upgrade) throws IOException { - verifyStarted(); + verifyActive(); if (logger.isTraceEnabled()) { logger.trace("upgrade with {}", upgrade); } @@ -827,12 +850,13 @@ public void releaseIndexCommit(IndexCommit snapshot) throws IOException { * without having to worry about the current state of the engine and concurrent flushes. * * @throws org.apache.lucene.index.IndexNotFoundException if no index is found in the current directory - * @throws CorruptIndexException if the lucene index is corrupted. This can be caused by a checksum mismatch or an - * unexpected exception when opening the index reading the segments file. - * @throws IndexFormatTooOldException if the lucene index is too old to be opened. - * @throws IndexFormatTooNewException if the lucene index is too new to be opened. - * @throws FileNotFoundException if one or more files referenced by a commit are not present. - * @throws NoSuchFileException if one or more files referenced by a commit are not present. + * @throws org.apache.lucene.index.CorruptIndexException if the lucene index is corrupted. This can be caused by a checksum + * mismatch or an unexpected exception when opening the index reading the + * segments file. + * @throws org.apache.lucene.index.IndexFormatTooOldException if the lucene index is too old to be opened. + * @throws org.apache.lucene.index.IndexFormatTooNewException if the lucene index is too new to be opened. + * @throws java.io.FileNotFoundException if one or more files referenced by a commit are not present. + * @throws java.nio.file.NoSuchFileException if one or more files referenced by a commit are not present. */ public Store.MetadataSnapshot snapshotStoreMetadata() throws IOException { IndexCommit indexCommit = null; @@ -844,9 +868,7 @@ public Store.MetadataSnapshot snapshotStoreMetadata() throws IOException { // That can be done out of mutex, since the engine can be closed half way. 
Engine engine = getEngineOrNull(); if (engine == null) { - try (Lock ignored = store.directory().obtainLock(IndexWriter.WRITE_LOCK_NAME)) { - return store.getMetadata(null); - } + return store.getMetadata(null, true); } } indexCommit = deletionPolicy.snapshot(); @@ -897,8 +919,10 @@ public void close(String reason, boolean flushEngine) throws IOException { if (engine != null && flushEngine) { engine.flushAndClose(); } - } finally { // playing safe here and close the engine even if the above succeeds - close can be called multiple times - IOUtils.close(engine); + } finally { + // playing safe here and close the engine even if the above succeeds - close can be called multiple times + // Also closing refreshListeners to prevent us from accumulating any more listeners + IOUtils.close(engine, refreshListeners); indexShardOperationsLock.close(); } } @@ -971,6 +995,9 @@ private void internalPerformTranslogRecovery(boolean skipTranslogRecovery, boole recoveryState.setStage(RecoveryState.Stage.VERIFY_INDEX); // also check here, before we apply the translog if (Booleans.parseBoolean(checkIndexOnStartup, false)) { + if (Booleans.isBoolean(checkIndexOnStartup) && Booleans.isStrictlyBoolean(checkIndexOnStartup) == false) { + deprecationLogger.deprecated("Expected [true/false/fix/checksum] for setting [{}] but got [{}]", IndexSettings.INDEX_CHECK_ON_STARTUP.getKey(), checkIndexOnStartup); + } try { checkIndex(); } catch (IOException ex) { @@ -1005,7 +1032,10 @@ private void internalPerformTranslogRecovery(boolean skipTranslogRecovery, boole active.set(true); newEngine.recoverFromTranslog(); } + } + protected void onNewEngine(Engine newEngine) { + refreshListeners.setTranslog(newEngine.getTranslog()); } /** @@ -1110,13 +1140,6 @@ private void verifyReplicationTarget() { } } - protected final void verifyStartedOrRecovering() throws IllegalIndexShardStateException { - IndexShardState state = this.state; // one time volatile read - if (state != IndexShardState.STARTED && state != IndexShardState.RECOVERING && state != IndexShardState.POST_RECOVERY) { - throw new IllegalIndexShardStateException(shardId, state, "operation only allowed when started/recovering"); - } - } - private void verifyNotClosed() throws IllegalIndexShardStateException { verifyNotClosed(null); } @@ -1132,10 +1155,10 @@ private void verifyNotClosed(Exception suppressed) throws IllegalIndexShardState } } - protected final void verifyStarted() throws IllegalIndexShardStateException { + protected final void verifyActive() throws IllegalIndexShardStateException { IndexShardState state = this.state; // one time volatile read - if (state != IndexShardState.STARTED) { - throw new IndexShardNotStartedException(shardId, state); + if (state != IndexShardState.STARTED && state != IndexShardState.RELOCATED) { + throw new IllegalIndexShardStateException(shardId, state, "operation only allowed when shard is active"); } } @@ -1154,7 +1177,7 @@ public long getIndexBufferRAMBytesUsed() { } } - public void addShardFailureCallback(Callback onShardFailure) { + public void addShardFailureCallback(Consumer onShardFailure) { this.shardEventListener.delegates.add(onShardFailure); } @@ -1168,7 +1191,11 @@ public void checkIdle(long inactiveTimeNS) { boolean wasActive = active.getAndSet(false); if (wasActive) { logger.debug("shard is now inactive"); - indexEventListener.onShardInactive(this); + try { + indexEventListener.onShardInactive(this); + } catch (Exception e) { + logger.warn("failed to notify index event listener", e); + } } } } @@ -1225,8 +1252,8 @@ 
boolean shouldFlush() { if (engine != null) { try { Translog translog = engine.getTranslog(); - return translog.sizeInBytes() > indexSettings.getFlushThresholdSize().bytes(); - } catch (AlreadyClosedException | EngineClosedException ex) { + return translog.sizeInBytes() > indexSettings.getFlushThresholdSize().getBytes(); + } catch (AlreadyClosedException ex) { // that's fine we are already close - no need to flush } } @@ -1265,7 +1292,7 @@ public IndexEventListener getIndexEventListener() { public void activateThrottling() { try { getEngine().activateThrottling(); - } catch (EngineClosedException ex) { + } catch (AlreadyClosedException ex) { // ignore } } @@ -1273,13 +1300,13 @@ public void activateThrottling() { public void deactivateThrottling() { try { getEngine().deactivateThrottling(); - } catch (EngineClosedException ex) { + } catch (AlreadyClosedException ex) { // ignore } } private void handleRefreshException(Exception e) { - if (e instanceof EngineClosedException) { + if (e instanceof AlreadyClosedException) { // ignore } else if (e instanceof RefreshFailedEngineException) { RefreshFailedEngineException rfee = (RefreshFailedEngineException) e; @@ -1414,7 +1441,7 @@ private void doCheckIndex() throws IOException { Engine getEngine() { Engine engine = getEngineOrNull(); if (engine == null) { - throw new EngineClosedException(shardId); + throw new AlreadyClosedException("engine is closed"); } return engine; } @@ -1514,7 +1541,7 @@ public void startRecovery(RecoveryState recoveryState, PeerRecoveryTargetService } }); } else { - final Exception e; + final RuntimeException e; if (numShards == -1) { e = new IndexNotFoundException(mergeSourceIndex); } else { @@ -1522,7 +1549,7 @@ public void startRecovery(RecoveryState recoveryState, PeerRecoveryTargetService + " are started yet, expected " + numShards + " found " + startedShards.size() + " can't recover shard " + shardId()); } - recoveryListener.onRecoveryFailure(recoveryState, new RecoveryFailedException(recoveryState, null, e), true); + throw e; } break; default: @@ -1531,15 +1558,15 @@ public void startRecovery(RecoveryState recoveryState, PeerRecoveryTargetService } class ShardEventListener implements Engine.EventListener { - private final CopyOnWriteArrayList> delegates = new CopyOnWriteArrayList<>(); + private final CopyOnWriteArrayList> delegates = new CopyOnWriteArrayList<>(); // called by the current engine @Override public void onFailedEngine(String reason, @Nullable Exception failure) { final ShardFailure shardFailure = new ShardFailure(shardRouting, reason, failure); - for (Callback listener : delegates) { + for (Consumer listener : delegates) { try { - listener.handle(shardFailure); + listener.accept(shardFailure); } catch (Exception inner) { inner.addSuppressed(failure); logger.warn("exception while notifying engine failure", inner); @@ -1551,11 +1578,13 @@ public void onFailedEngine(String reason, @Nullable Exception failure) { private Engine createNewEngine(EngineConfig config) { synchronized (mutex) { if (state == IndexShardState.CLOSED) { - throw new EngineClosedException(shardId); + throw new AlreadyClosedException(shardId + " can't create engine - shard is closed"); } assert this.currentEngineReference.get() == null; - this.currentEngineReference.set(newEngine(config)); - + Engine engine = newEngine(config); + onNewEngine(engine); // call this before we pass the memory barrier otherwise actions that happen + // inside the callback are not visible. 
This one enforces happens-before + this.currentEngineReference.set(engine); } // time elapses after the engine is created above (pulling the config settings) until we set the engine reference, during which @@ -1606,11 +1635,16 @@ private DocumentMapperForType docMapper(String type) { private EngineConfig newEngineConfig(EngineConfig.OpenMode openMode, long maxUnsafeAutoIdTimestamp) { final IndexShardRecoveryPerformer translogRecoveryPerformer = new IndexShardRecoveryPerformer(shardId, mapperService, logger); + final List refreshListenerList = new ArrayList<>(); + refreshListenerList.add(new RefreshMetricUpdater(refreshMetric)); + if (refreshListeners != null) { + refreshListenerList.add(refreshListeners); + } return new EngineConfig(openMode, shardId, threadPool, indexSettings, warmer, store, deletionPolicy, indexSettings.getMergePolicy(), mapperService.indexAnalyzer(), similarityService.similarity(mapperService), codecService, shardEventListener, translogRecoveryPerformer, indexCache.query(), cachingPolicy, translogConfig, - IndexingMemoryController.SHARD_INACTIVE_TIME_SETTING.get(indexSettings.getSettings()), refreshListeners, - maxUnsafeAutoIdTimestamp); + IndexingMemoryController.SHARD_INACTIVE_TIME_SETTING.get(indexSettings.getSettings()), + refreshListenerList, maxUnsafeAutoIdTimestamp); } /** @@ -1651,7 +1685,7 @@ protected void write(List>> candida try { final Engine engine = getEngine(); engine.getTranslog().ensureSynced(candidates.stream().map(Tuple::v1)); - } catch (EngineClosedException ex) { + } catch (AlreadyClosedException ex) { // that's fine since we already synced everything on engine close - this also is conform with the methods // documentation } catch (IOException ex) { // if this fails we are in deep shit - fail the request @@ -1742,7 +1776,7 @@ protected RefreshListeners buildRefreshListeners() { /** * Simple struct encapsulating a shard failure * - * @see IndexShard#addShardFailureCallback(Callback) + * @see IndexShard#addShardFailureCallback(Consumer) */ public static final class ShardFailure { public final ShardRouting routing; @@ -1766,8 +1800,7 @@ EngineFactory getEngineFactory() { * refresh listeners. * Otherwise false. 
* - * @throws EngineClosedException if the engine is already closed - * @throws AlreadyClosedException if the internal indexwriter in the engine is already closed + * @throws AlreadyClosedException if the engine or internal indexwriter in the engine is already closed */ public boolean isRefreshNeeded() { return getEngine().refreshNeeded() || (refreshListeners != null && refreshListeners.refreshNeeded()); @@ -1806,13 +1839,45 @@ public int recoveryFromSnapshot(Engine engine, Translog.Snapshot snapshot) throw } @Override - protected void index(Engine engine, Engine.Index engineIndex) { + protected void index(Engine engine, Engine.Index engineIndex) throws IOException { IndexShard.this.index(engine, engineIndex); } @Override - protected void delete(Engine engine, Engine.Delete engineDelete) { + protected void delete(Engine engine, Engine.Delete engineDelete) throws IOException { IndexShard.this.delete(engine, engineDelete); } } + + private static class RefreshMetricUpdater implements ReferenceManager.RefreshListener { + + private final MeanMetric refreshMetric; + private long currentRefreshStartTime; + private Thread callingThread = null; + + private RefreshMetricUpdater(MeanMetric refreshMetric) { + this.refreshMetric = refreshMetric; + } + + @Override + public void beforeRefresh() throws IOException { + if (Assertions.ENABLED) { + assert callingThread == null : "beforeRefresh was called by " + callingThread.getName() + + " without a corresponding call to afterRefresh"; + callingThread = Thread.currentThread(); + } + currentRefreshStartTime = System.nanoTime(); + } + + @Override + public void afterRefresh(boolean didRefresh) throws IOException { + if (Assertions.ENABLED) { + assert callingThread != null : "afterRefresh called but not beforeRefresh"; + assert callingThread == Thread.currentThread() : "beforeRefreshed called by a different thread. 
current [" + + Thread.currentThread().getName() + "], thread that called beforeRefresh [" + callingThread.getName() + "]"; + callingThread = null; + } + refreshMetric.inc(System.nanoTime() - currentRefreshStartTime); + } + } } diff --git a/core/src/main/java/org/elasticsearch/index/shard/IndexShardOperationsLock.java b/core/src/main/java/org/elasticsearch/index/shard/IndexShardOperationsLock.java index cde14dec17307..3f48b60d45a3b 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/IndexShardOperationsLock.java +++ b/core/src/main/java/org/elasticsearch/index/shard/IndexShardOperationsLock.java @@ -19,10 +19,14 @@ package org.elasticsearch.index.shard; import org.apache.logging.log4j.Logger; +import org.apache.lucene.util.IOUtils; import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.support.ContextPreservingActionListener; import org.elasticsearch.action.support.ThreadedActionListener; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.lease.Releasable; +import org.elasticsearch.common.util.concurrent.AbstractRunnable; +import org.elasticsearch.common.util.concurrent.ThreadContext.StoredContext; import org.elasticsearch.threadpool.ThreadPool; import java.io.Closeable; @@ -32,6 +36,7 @@ import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; import java.util.concurrent.atomic.AtomicBoolean; +import java.util.function.Supplier; public class IndexShardOperationsLock implements Closeable { private final ShardId shardId; @@ -103,14 +108,17 @@ public void blockOperations(long timeout, TimeUnit timeUnit, Runnable onBlocked) } /** - * Acquires a lock whenever lock acquisition is not blocked. If the lock is directly available, the provided - * ActionListener will be called on the calling thread. During calls of {@link #blockOperations(long, TimeUnit, Runnable)}, lock - * acquisition can be delayed. The provided ActionListener will then be called using the provided executor once blockOperations - * terminates. + * Acquires a permit whenever permit acquisition is not blocked. If the permit is directly available, the provided + * {@link ActionListener} will be called on the calling thread. During calls of + * {@link #blockOperations(long, TimeUnit, Runnable)}, permit acquisition can be delayed. + * The {@link ActionListener#onResponse(Object)} method will then be called using the provided executor once operations are no + * longer blocked. Note that the executor will not be used for {@link ActionListener#onFailure(Exception)} calls. Those will run + * directly on the calling thread, which in case of delays, will be a generic thread. Callers should thus make sure + * that the {@link ActionListener#onFailure(Exception)} method provided here only contains lightweight operations. 
* - * @param onAcquired ActionListener that is invoked once acquisition is successful or failed - * @param executorOnDelay executor to use for delayed call - * @param forceExecution whether the runnable should force its execution in case it gets rejected + * @param onAcquired {@link ActionListener} that is invoked once acquisition is successful or failed + * @param executorOnDelay executor to use for the possibly delayed {@link ActionListener#onResponse(Object)} call + * @param forceExecution whether the runnable should force its execution in case it gets rejected */ public void acquire(ActionListener onAcquired, String executorOnDelay, boolean forceExecution) { if (closed) { @@ -126,11 +134,12 @@ public void acquire(ActionListener onAcquired, String executorOnDela if (delayedOperations == null) { delayedOperations = new ArrayList<>(); } + final Supplier contextSupplier = threadPool.getThreadContext().newRestorableContext(false); if (executorOnDelay != null) { - delayedOperations.add( - new ThreadedActionListener<>(logger, threadPool, executorOnDelay, onAcquired, forceExecution)); + delayedOperations.add(new PermitAwareThreadedActionListener(threadPool, executorOnDelay, + new ContextPreservingActionListener<>(contextSupplier, onAcquired), forceExecution)); } else { - delayedOperations.add(onAcquired); + delayedOperations.add(new ContextPreservingActionListener<>(contextSupplier, onAcquired)); } return; } @@ -163,4 +172,56 @@ public int getActiveOperationsCount() { return TOTAL_PERMITS - availablePermits; } } + + /** + * A permit-aware action listener wrapper that spawns onResponse listener invocations off on a configurable thread-pool. + * Being permit-aware, it also releases the permit when hitting thread-pool rejections and falls back to the + * invoker's thread to communicate failures. + */ + private static class PermitAwareThreadedActionListener implements ActionListener { + + private final ThreadPool threadPool; + private final String executor; + private final ActionListener listener; + private final boolean forceExecution; + + private PermitAwareThreadedActionListener(ThreadPool threadPool, String executor, ActionListener listener, + boolean forceExecution) { + this.threadPool = threadPool; + this.executor = executor; + this.listener = listener; + this.forceExecution = forceExecution; + } + + @Override + public void onResponse(final Releasable releasable) { + threadPool.executor(executor).execute(new AbstractRunnable() { + @Override + public boolean isForceExecution() { + return forceExecution; + } + + @Override + protected void doRun() throws Exception { + listener.onResponse(releasable); + } + + @Override + public void onRejection(Exception e) { + IOUtils.closeWhileHandlingException(releasable); + super.onRejection(e); + } + + @Override + public void onFailure(Exception e) { + listener.onFailure(e); // will possibly execute on the caller thread + } + }); + } + + @Override + public void onFailure(final Exception e) { + listener.onFailure(e); + } + } } diff --git a/core/src/main/java/org/elasticsearch/index/shard/IndexingOperationListener.java b/core/src/main/java/org/elasticsearch/index/shard/IndexingOperationListener.java index 042ddec924e3c..335196fe68198 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/IndexingOperationListener.java +++ b/core/src/main/java/org/elasticsearch/index/shard/IndexingOperationListener.java @@ -33,37 +33,47 @@ public interface IndexingOperationListener { /** * Called before the indexing occurs. 
*/ - default Engine.Index preIndex(Engine.Index operation) { + default Engine.Index preIndex(ShardId shardId, Engine.Index operation) { return operation; } /** - * Called after the indexing operation occurred. + * Called after the indexing operation occurred. Note that this is + * also called when indexing a document did not succeed due to document + * related failures. See {@link #postIndex(ShardId, Engine.Index, Exception)} + * for engine level failures */ - default void postIndex(Engine.Index index, boolean created) {} + default void postIndex(ShardId shardId, Engine.Index index, Engine.IndexResult result) {} /** - * Called after the indexing operation occurred with exception. + * Called after the indexing operation occurred with engine level exception. + * See {@link #postIndex(ShardId, Engine.Index, Engine.IndexResult)} for document + * related failures */ - default void postIndex(Engine.Index index, Exception ex) {} + default void postIndex(ShardId shardId, Engine.Index index, Exception ex) {} /** * Called before the delete occurs. */ - default Engine.Delete preDelete(Engine.Delete delete) { + default Engine.Delete preDelete(ShardId shardId, Engine.Delete delete) { return delete; } /** - * Called after the delete operation occurred. + * Called after the delete operation occurred. Note that this is + * also called when deleting a document did not succeed due to document + * related failures. See {@link #postDelete(ShardId, Engine.Delete, Exception)} + * for engine level failures */ - default void postDelete(Engine.Delete delete) {} + default void postDelete(ShardId shardId, Engine.Delete delete, Engine.DeleteResult result) {} /** - * Called after the delete operation occurred with exception. + * Called after the delete operation occurred with engine level exception. + * See {@link #postDelete(ShardId, Engine.Delete, Engine.DeleteResult)} for document + * related failures */ - default void postDelete(Engine.Delete delete, Exception ex) {} + default void postDelete(ShardId shardId, Engine.Delete delete, Exception ex) {} /** * A Composite listener that multiplexes calls to each of the listeners methods. 
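Aside: with the reworked IndexingOperationListener callbacks above, listeners now receive the ShardId plus an Engine.IndexResult (or Engine.DeleteResult), so document-level failures surface through the result while the Exception overload is reserved for engine-level failures. A minimal, hedged sketch of a listener using the new signatures follows; the class name is hypothetical and this code is not part of the patch.

```java
import java.util.concurrent.atomic.AtomicLong;

import org.elasticsearch.index.engine.Engine;
import org.elasticsearch.index.shard.IndexingOperationListener;
import org.elasticsearch.index.shard.ShardId;

// Hypothetical listener: counts per-document indexing failures reported via Engine.IndexResult,
// while the Exception overload still captures engine-level failures separately.
public class FailureCountingListener implements IndexingOperationListener {

    private final AtomicLong docFailures = new AtomicLong();
    private final AtomicLong engineFailures = new AtomicLong();

    @Override
    public void postIndex(ShardId shardId, Engine.Index index, Engine.IndexResult result) {
        if (result.hasFailure()) {
            docFailures.incrementAndGet();   // document-level failure, engine kept running
        }
    }

    @Override
    public void postIndex(ShardId shardId, Engine.Index index, Exception ex) {
        engineFailures.incrementAndGet();    // engine-level failure
    }

    public long docFailures() {
        return docFailures.get();
    }

    public long engineFailures() {
        return engineFailures.get();
    }
}
```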
@@ -78,11 +88,11 @@ public CompositeListener(List listeners, Logger logge } @Override - public Engine.Index preIndex(Engine.Index operation) { + public Engine.Index preIndex(ShardId shardId, Engine.Index operation) { assert operation != null; for (IndexingOperationListener listener : listeners) { try { - listener.preIndex(operation); + listener.preIndex(shardId, operation); } catch (Exception e) { logger.warn((Supplier) () -> new ParameterizedMessage("preIndex listener [{}] failed", listener), e); } @@ -91,11 +101,11 @@ public Engine.Index preIndex(Engine.Index operation) { } @Override - public void postIndex(Engine.Index index, boolean created) { + public void postIndex(ShardId shardId, Engine.Index index, Engine.IndexResult result) { assert index != null; for (IndexingOperationListener listener : listeners) { try { - listener.postIndex(index, created); + listener.postIndex(shardId, index, result); } catch (Exception e) { logger.warn((Supplier) () -> new ParameterizedMessage("postIndex listener [{}] failed", listener), e); } @@ -103,11 +113,11 @@ public void postIndex(Engine.Index index, boolean created) { } @Override - public void postIndex(Engine.Index index, Exception ex) { + public void postIndex(ShardId shardId, Engine.Index index, Exception ex) { assert index != null && ex != null; for (IndexingOperationListener listener : listeners) { try { - listener.postIndex(index, ex); + listener.postIndex(shardId, index, ex); } catch (Exception inner) { inner.addSuppressed(ex); logger.warn((Supplier) () -> new ParameterizedMessage("postIndex listener [{}] failed", listener), inner); @@ -116,11 +126,11 @@ public void postIndex(Engine.Index index, Exception ex) { } @Override - public Engine.Delete preDelete(Engine.Delete delete) { + public Engine.Delete preDelete(ShardId shardId, Engine.Delete delete) { assert delete != null; for (IndexingOperationListener listener : listeners) { try { - listener.preDelete(delete); + listener.preDelete(shardId, delete); } catch (Exception e) { logger.warn((Supplier) () -> new ParameterizedMessage("preDelete listener [{}] failed", listener), e); } @@ -129,11 +139,11 @@ public Engine.Delete preDelete(Engine.Delete delete) { } @Override - public void postDelete(Engine.Delete delete) { + public void postDelete(ShardId shardId, Engine.Delete delete, Engine.DeleteResult result) { assert delete != null; for (IndexingOperationListener listener : listeners) { try { - listener.postDelete(delete); + listener.postDelete(shardId, delete, result); } catch (Exception e) { logger.warn((Supplier) () -> new ParameterizedMessage("postDelete listener [{}] failed", listener), e); } @@ -141,11 +151,11 @@ public void postDelete(Engine.Delete delete) { } @Override - public void postDelete(Engine.Delete delete, Exception ex) { + public void postDelete(ShardId shardId, Engine.Delete delete, Exception ex) { assert delete != null && ex != null; for (IndexingOperationListener listener : listeners) { try { - listener.postDelete(delete, ex); + listener.postDelete(shardId, delete, ex); } catch (Exception inner) { inner.addSuppressed(ex); logger.warn((Supplier) () -> new ParameterizedMessage("postDelete listener [{}] failed", listener), inner); diff --git a/core/src/main/java/org/elasticsearch/index/shard/IndexingStats.java b/core/src/main/java/org/elasticsearch/index/shard/IndexingStats.java index 97f9dd2b92fe9..cc657746f10d5 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/IndexingStats.java +++ b/core/src/main/java/org/elasticsearch/index/shard/IndexingStats.java @@ -145,11 
+145,7 @@ public void readFrom(StreamInput in) throws IOException { indexCount = in.readVLong(); indexTimeInMillis = in.readVLong(); indexCurrent = in.readVLong(); - - if(in.getVersion().onOrAfter(Version.V_2_1_0)){ - indexFailedCount = in.readVLong(); - } - + indexFailedCount = in.readVLong(); deleteCount = in.readVLong(); deleteTimeInMillis = in.readVLong(); deleteCurrent = in.readVLong(); @@ -163,11 +159,7 @@ public void writeTo(StreamOutput out) throws IOException { out.writeVLong(indexCount); out.writeVLong(indexTimeInMillis); out.writeVLong(indexCurrent); - - if(out.getVersion().onOrAfter(Version.V_2_1_0)) { - out.writeVLong(indexFailedCount); - } - + out.writeVLong(indexFailedCount); out.writeVLong(deleteCount); out.writeVLong(deleteTimeInMillis); out.writeVLong(deleteCurrent); @@ -285,21 +277,11 @@ static final class Fields { static final String THROTTLED_TIME = "throttle_time"; } - public static IndexingStats readIndexingStats(StreamInput in) throws IOException { - IndexingStats indexingStats = new IndexingStats(); - indexingStats.readFrom(in); - return indexingStats; - } - @Override public void readFrom(StreamInput in) throws IOException { totalStats = Stats.readStats(in); if (in.readBoolean()) { - int size = in.readVInt(); - typeStats = new HashMap<>(size); - for (int i = 0; i < size; i++) { - typeStats.put(in.readString(), Stats.readStats(in)); - } + typeStats = in.readMap(StreamInput::readString, Stats::readStats); } } @@ -310,11 +292,7 @@ public void writeTo(StreamOutput out) throws IOException { out.writeBoolean(false); } else { out.writeBoolean(true); - out.writeVInt(typeStats.size()); - for (Map.Entry entry : typeStats.entrySet()) { - out.writeString(entry.getKey()); - entry.getValue().writeTo(out); - } + out.writeMap(typeStats, StreamOutput::writeString, (stream, stats) -> stats.writeTo(stream)); } } } diff --git a/core/src/main/java/org/elasticsearch/index/shard/InternalIndexingStats.java b/core/src/main/java/org/elasticsearch/index/shard/InternalIndexingStats.java index f62b8f7fe3c25..ada869a1d9c0b 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/InternalIndexingStats.java +++ b/core/src/main/java/org/elasticsearch/index/shard/InternalIndexingStats.java @@ -65,7 +65,7 @@ IndexingStats stats(boolean isThrottled, long currentThrottleInMillis, String... 
} @Override - public Engine.Index preIndex(Engine.Index operation) { + public Engine.Index preIndex(ShardId shardId, Engine.Index operation) { if (!operation.origin().isRecovery()) { totalStats.indexCurrent.inc(); typeStats(operation.type()).indexCurrent.inc(); @@ -74,19 +74,23 @@ public Engine.Index preIndex(Engine.Index operation) { } @Override - public void postIndex(Engine.Index index, boolean created) { - if (!index.origin().isRecovery()) { - long took = index.endTime() - index.startTime(); - totalStats.indexMetric.inc(took); - totalStats.indexCurrent.dec(); - StatsHolder typeStats = typeStats(index.type()); - typeStats.indexMetric.inc(took); - typeStats.indexCurrent.dec(); + public void postIndex(ShardId shardId, Engine.Index index, Engine.IndexResult result) { + if (result.hasFailure() == false) { + if (!index.origin().isRecovery()) { + long took = result.getTook(); + totalStats.indexMetric.inc(took); + totalStats.indexCurrent.dec(); + StatsHolder typeStats = typeStats(index.type()); + typeStats.indexMetric.inc(took); + typeStats.indexCurrent.dec(); + } + } else { + postIndex(shardId, index, result.getFailure()); } } @Override - public void postIndex(Engine.Index index, Exception ex) { + public void postIndex(ShardId shardId, Engine.Index index, Exception ex) { if (!index.origin().isRecovery()) { totalStats.indexCurrent.dec(); typeStats(index.type()).indexCurrent.dec(); @@ -96,7 +100,7 @@ public void postIndex(Engine.Index index, Exception ex) { } @Override - public Engine.Delete preDelete(Engine.Delete delete) { + public Engine.Delete preDelete(ShardId shardId, Engine.Delete delete) { if (!delete.origin().isRecovery()) { totalStats.deleteCurrent.inc(); typeStats(delete.type()).deleteCurrent.inc(); @@ -106,19 +110,23 @@ public Engine.Delete preDelete(Engine.Delete delete) { } @Override - public void postDelete(Engine.Delete delete) { - if (!delete.origin().isRecovery()) { - long took = delete.endTime() - delete.startTime(); - totalStats.deleteMetric.inc(took); - totalStats.deleteCurrent.dec(); - StatsHolder typeStats = typeStats(delete.type()); - typeStats.deleteMetric.inc(took); - typeStats.deleteCurrent.dec(); + public void postDelete(ShardId shardId, Engine.Delete delete, Engine.DeleteResult result) { + if (result.hasFailure() == false) { + if (!delete.origin().isRecovery()) { + long took = result.getTook(); + totalStats.deleteMetric.inc(took); + totalStats.deleteCurrent.dec(); + StatsHolder typeStats = typeStats(delete.type()); + typeStats.deleteMetric.inc(took); + typeStats.deleteCurrent.dec(); + } + } else { + postDelete(shardId, delete, result.getFailure()); } } @Override - public void postDelete(Engine.Delete delete, Exception ex) { + public void postDelete(ShardId shardId, Engine.Delete delete, Exception ex) { if (!delete.origin().isRecovery()) { totalStats.deleteCurrent.dec(); typeStats(delete.type()).deleteCurrent.dec(); diff --git a/core/src/main/java/org/elasticsearch/index/shard/LocalShardSnapshot.java b/core/src/main/java/org/elasticsearch/index/shard/LocalShardSnapshot.java index 7b79f785ff338..9fd5113b2ad15 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/LocalShardSnapshot.java +++ b/core/src/main/java/org/elasticsearch/index/shard/LocalShardSnapshot.java @@ -26,8 +26,7 @@ import org.apache.lucene.store.IndexOutput; import org.apache.lucene.store.Lock; import org.apache.lucene.store.NoLockFactory; -import org.elasticsearch.cluster.metadata.MappingMetaData; -import org.elasticsearch.common.collect.ImmutableOpenMap; +import 
org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.index.Index; import org.elasticsearch.index.store.Store; @@ -44,7 +43,7 @@ final class LocalShardSnapshot implements Closeable { private final IndexCommit indexCommit; private final AtomicBoolean closed = new AtomicBoolean(false); - public LocalShardSnapshot(IndexShard shard) { + LocalShardSnapshot(IndexShard shard) { this.shard = shard; store = shard.store(); store.incRef(); @@ -125,8 +124,8 @@ public void close() throws IOException { } } - ImmutableOpenMap getMappings() { - return shard.indexSettings.getIndexMetaData().getMappings(); + IndexMetaData getIndexMetaData() { + return shard.indexSettings.getIndexMetaData(); } @Override diff --git a/core/src/main/java/org/elasticsearch/index/shard/RefreshListeners.java b/core/src/main/java/org/elasticsearch/index/shard/RefreshListeners.java index ca94f1ea96131..f0df6e12b8cec 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/RefreshListeners.java +++ b/core/src/main/java/org/elasticsearch/index/shard/RefreshListeners.java @@ -24,6 +24,7 @@ import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.index.translog.Translog; +import java.io.Closeable; import java.io.IOException; import java.util.ArrayList; import java.util.List; @@ -35,18 +36,26 @@ /** * Allows for the registration of listeners that are called when a change becomes visible for search. This functionality is exposed from - * {@link IndexShard} but kept here so it can be tested without standing up the entire thing. + * {@link IndexShard} but kept here so it can be tested without standing up the entire thing. + * + * When {@link Closeable#close()}d it will no longer accept listeners and flush any existing listeners. */ -public final class RefreshListeners implements ReferenceManager.RefreshListener { +public final class RefreshListeners implements ReferenceManager.RefreshListener, Closeable { private final IntSupplier getMaxRefreshListeners; private final Runnable forceRefresh; private final Executor listenerExecutor; private final Logger logger; + /** + * Is this closed? If true then we won't add more listeners and have flushed all pending listeners. + */ + private volatile boolean closed = false; /** * List of refresh listeners. Defaults to null and built on demand because most refresh cycles won't need it. Entries are never removed * from it, rather, it is nulled and rebuilt when needed again. The (hopefully) rare entries that didn't make the current refresh cycle * are just added back to the new list. Both the reference and the contents are always modified while synchronized on {@code this}. + * + * We never set this to non-null while closed it {@code true}. 
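As the new class javadoc describes, RefreshListeners is now Closeable: once closed it rejects new registrations and flushes whatever is still pending. A rough sketch of that contract, assuming the constructor simply takes the four collaborators listed as fields (max-listener supplier, force-refresh hook, listener executor, logger) and that Translog.Location exposes a (generation, position, size) constructor; neither is shown in this hunk:

```java
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.logging.log4j.LogManager;
import org.elasticsearch.index.shard.RefreshListeners;
import org.elasticsearch.index.translog.Translog;

public class RefreshListenersCloseSketch {
    public static void main(String[] args) throws Exception {
        RefreshListeners listeners = new RefreshListeners(
                () -> 1000,                    // getMaxRefreshListeners
                () -> {},                      // forceRefresh (no-op for the sketch)
                Runnable::run,                 // run listeners on the calling thread
                LogManager.getLogger("sketch"));

        AtomicBoolean forcedRefresh = new AtomicBoolean();
        // returns false when the listener is parked until the location becomes visible
        boolean firedInline = listeners.addOrNotify(
                new Translog.Location(0, 0, 0), forcedRefresh::set);

        System.out.println("pending=" + listeners.pendingCount() + " inline=" + firedInline);

        // close() fires anything still pending; after this, addOrNotify throws
        // IllegalStateException("can't wait for refresh on a closed index").
        listeners.close();
    }
}
```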
*/ private volatile List>> refreshListeners = null; /** @@ -80,12 +89,17 @@ public boolean addOrNotify(Translog.Location location, Consumer listene return true; } synchronized (this) { - if (refreshListeners == null) { - refreshListeners = new ArrayList<>(); + List>> listeners = refreshListeners; + if (listeners == null) { + if (closed) { + throw new IllegalStateException("can't wait for refresh on a closed index"); + } + listeners = new ArrayList<>(); + refreshListeners = listeners; } - if (refreshListeners.size() < getMaxRefreshListeners.getAsInt()) { + if (listeners.size() < getMaxRefreshListeners.getAsInt()) { // We have a free slot so register the listener - refreshListeners.add(new Tuple<>(location, listener)); + listeners.add(new Tuple<>(location, listener)); return false; } } @@ -95,12 +109,34 @@ public boolean addOrNotify(Translog.Location location, Consumer listene return true; } + @Override + public void close() throws IOException { + List>> oldListeners; + synchronized (this) { + oldListeners = refreshListeners; + refreshListeners = null; + closed = true; + } + // Fire any listeners we might have had + fireListeners(oldListeners); + } + /** * Returns true if there are pending listeners. */ public boolean refreshNeeded() { + // A null list doesn't need a refresh. If we're closed we don't need a refresh either. + return refreshListeners != null && false == closed; + } + + /** + * The number of pending listeners. + */ + public int pendingCount() { // No need to synchronize here because we're doing a single volatile read - return refreshListeners != null; + List>> listeners = refreshListeners; + // A null list means we haven't accumulated any listeners. Otherwise we need the size. + return listeners == null ? 0 : listeners.size(); } /** @@ -125,33 +161,25 @@ public void beforeRefresh() throws IOException { @Override public void afterRefresh(boolean didRefresh) throws IOException { - /* - * We intentionally ignore didRefresh here because our timing is a little off. It'd be a useful flag if we knew everything that made + /* We intentionally ignore didRefresh here because our timing is a little off. It'd be a useful flag if we knew everything that made * it into the refresh, but the way we snapshot the translog position before the refresh, things can sneak into the refresh that we - * don't know about. - */ + * don't know about. */ if (null == currentRefreshLocation) { - /* - * The translog had an empty last write location at the start of the refresh so we can't alert anyone to anything. This - * usually happens during recovery. The next refresh cycle out to pick up this refresh. - */ + /* The translog had an empty last write location at the start of the refresh so we can't alert anyone to anything. This + * usually happens during recovery. The next refresh cycle out to pick up this refresh. */ return; } - /* - * Set the lastRefreshedLocation so listeners that come in for locations before that will just execute inline without messing + /* Set the lastRefreshedLocation so listeners that come in for locations before that will just execute inline without messing * around with refreshListeners or synchronizing at all. Note that it is not safe for us to abort early if we haven't advanced the * position here because we set and read lastRefreshedLocation outside of a synchronized block. 
We do that so that waiting for a * refresh that has already passed is just a volatile read but the cost is that any check whether or not we've advanced the * position will introduce a race between adding the listener and the position check. We could work around this by moving this * assignment into the synchronized block below and double checking lastRefreshedLocation in addOrNotify's synchronized block but - * that doesn't seem worth it given that we already skip this process early if there aren't any listeners to iterate. - */ + * that doesn't seem worth it given that we already skip this process early if there aren't any listeners to iterate. */ lastRefreshedLocation = currentRefreshLocation; - /* - * Grab the current refresh listeners and replace them with null while synchronized. Any listeners that come in after this won't be + /* Grab the current refresh listeners and replace them with null while synchronized. Any listeners that come in after this won't be * in the list we iterate over and very likely won't be candidates for refresh anyway because we've already moved the - * lastRefreshedLocation. - */ + * lastRefreshedLocation. */ List>> candidates; synchronized (this) { candidates = refreshListeners; @@ -162,16 +190,15 @@ public void afterRefresh(boolean didRefresh) throws IOException { refreshListeners = null; } // Iterate the list of listeners, copying the listeners to fire to one list and those to preserve to another list. - List> listenersToFire = null; + List>> listenersToFire = null; List>> preservedListeners = null; for (Tuple> tuple : candidates) { Translog.Location location = tuple.v1(); - Consumer listener = tuple.v2(); if (location.compareTo(currentRefreshLocation) <= 0) { if (listenersToFire == null) { listenersToFire = new ArrayList<>(); } - listenersToFire.add(listener); + listenersToFire.add(tuple); } else { if (preservedListeners == null) { preservedListeners = new ArrayList<>(); @@ -179,27 +206,36 @@ public void afterRefresh(boolean didRefresh) throws IOException { preservedListeners.add(tuple); } } - /* - * Now add any preserved listeners back to the running list of refresh listeners while under lock. We'll try them next time. While - * we were iterating the list of listeners new listeners could have come in. That means that adding all of our preserved listeners - * might push our list of listeners above the maximum number of slots allowed. This seems unlikely because we expect few listeners - * to be preserved. And the next listener while we're full will trigger a refresh anyway. - */ + /* Now deal with the listeners that it isn't time yet to fire. We need to do this under lock so we don't miss a concurrent close or + * newly registered listener. If we're not closed we just add the listeners to the list of listeners we check next time. If we are + * closed we fire the listeners even though it isn't time for them. */ if (preservedListeners != null) { synchronized (this) { if (refreshListeners == null) { - refreshListeners = new ArrayList<>(); + if (closed) { + listenersToFire.addAll(preservedListeners); + } else { + refreshListeners = preservedListeners; + } + } else { + assert closed == false : "Can't be closed and have non-null refreshListeners"; + refreshListeners.addAll(preservedListeners); } - refreshListeners.addAll(preservedListeners); } } // Lastly, fire the listeners that are ready on the listener thread pool + fireListeners(listenersToFire); + } + + /** + * Fire some listeners. Does nothing if the list of listeners is null. 
+ */ + private void fireListeners(List>> listenersToFire) { if (listenersToFire != null) { - final List> finalListenersToFire = listenersToFire; listenerExecutor.execute(() -> { - for (Consumer listener : finalListenersToFire) { + for (Tuple> listener : listenersToFire) { try { - listener.accept(false); + listener.v2().accept(false); } catch (Exception e) { logger.warn("Error firing refresh listener", e); } diff --git a/core/src/main/java/org/elasticsearch/index/shard/SearchOperationListener.java b/core/src/main/java/org/elasticsearch/index/shard/SearchOperationListener.java index 11723c3d50a01..153a985ab0892 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/SearchOperationListener.java +++ b/core/src/main/java/org/elasticsearch/index/shard/SearchOperationListener.java @@ -21,7 +21,9 @@ import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; +import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.search.internal.SearchContext; +import org.elasticsearch.transport.TransportRequest; import java.util.List; @@ -104,6 +106,15 @@ public interface SearchOperationListener { */ default void onFreeScrollContext(SearchContext context) {}; + /** + * Executed prior to using a {@link SearchContext} that has been retrieved + * from the active contexts. If the context is deemed invalid a runtime + * exception can be thrown, which will prevent the context from being used. + * @param context the context retrieved from the active contexts + * @param transportRequest the request that is going to use the search context + */ + default void validateSearchContext(SearchContext context, TransportRequest transportRequest) {} + /** * A Composite listener that multiplexes calls to each of the listeners methods. 
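The new validateSearchContext hook gives listeners a veto right before a context retrieved from the active set is used; throwing a runtime exception prevents its use. As a sketch only (the revocation bookkeeping is invented, and context.id() is assumed from the existing SearchContext API):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.elasticsearch.index.shard.SearchOperationListener;
import org.elasticsearch.search.internal.SearchContext;
import org.elasticsearch.transport.TransportRequest;

public class RevokedContextListener implements SearchOperationListener {

    private final Set<Long> revokedContextIds = ConcurrentHashMap.newKeySet();

    public void revoke(long contextId) {
        revokedContextIds.add(contextId);
    }

    @Override
    public void validateSearchContext(SearchContext context, TransportRequest transportRequest) {
        // Throwing here prevents the retrieved context from being used for this request.
        if (revokedContextIds.contains(context.id())) {
            throw new IllegalStateException("search context [" + context.id() + "] has been revoked");
        }
    }
}
```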
*/ @@ -225,5 +236,18 @@ public void onFreeScrollContext(SearchContext context) { } } } + + @Override + public void validateSearchContext(SearchContext context, TransportRequest request) { + Exception exception = null; + for (SearchOperationListener listener : listeners) { + try { + listener.validateSearchContext(context, request); + } catch (Exception e) { + exception = ExceptionsHelper.useOrSuppress(exception, e); + } + } + ExceptionsHelper.reThrowIfNotNull(exception); + } } } diff --git a/core/src/main/java/org/elasticsearch/index/shard/ShadowIndexShard.java b/core/src/main/java/org/elasticsearch/index/shard/ShadowIndexShard.java index 45a471e1aa9fb..fcab0cf3fc752 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/ShadowIndexShard.java +++ b/core/src/main/java/org/elasticsearch/index/shard/ShadowIndexShard.java @@ -65,7 +65,7 @@ public ShadowIndexShard(ShardRouting shardRouting, IndexSettings indexSettings, */ @Override public void updateRoutingEntry(ShardRouting newRouting) throws IOException { - if (newRouting.primary() == true) {// becoming a primary + if (newRouting.primary()) {// becoming a primary throw new IllegalStateException("can't promote shard to primary"); } super.updateRoutingEntry(newRouting); @@ -114,4 +114,9 @@ public void addRefreshListener(Translog.Location location, Consumer lis public Store.MetadataSnapshot snapshotStoreMetadata() throws IOException { throw new UnsupportedOperationException("can't snapshot the directory as the primary may change it underneath us"); } + + @Override + protected void onNewEngine(Engine newEngine) { + // nothing to do here - the superclass sets the translog on some listeners but we don't have such a thing + } } diff --git a/core/src/main/java/org/elasticsearch/index/shard/ShardId.java b/core/src/main/java/org/elasticsearch/index/shard/ShardId.java index a9bc63ae44f63..a806c414e9aea 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/ShardId.java +++ b/core/src/main/java/org/elasticsearch/index/shard/ShardId.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.shard; +import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; @@ -71,6 +72,22 @@ public String toString() { return "[" + index.getName() + "][" + shardId + "]"; } + /** + * Parse the string representation of this shardId back to an object. + * We lose index uuid information here, but since we use toString in + * rest responses, this is the best we can do to reconstruct the object + * on the client side. 
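A small round-trip illustration of the string form described in the javadoc above; as it notes, the index UUID is not recoverable and comes back as INDEX_UUID_NA_VALUE (the ShardId/Index constructors and the getIndexName()/getId() accessors are assumed from the existing classes, not shown in this hunk):

```java
import org.elasticsearch.index.Index;
import org.elasticsearch.index.shard.ShardId;

public class ShardIdRoundTrip {
    public static void main(String[] args) {
        ShardId original = new ShardId(new Index("twitter", "someRealUuid"), 3);
        String rendered = original.toString();         // "[twitter][3]" - the uuid is not included
        ShardId parsed = ShardId.fromString(rendered);  // uuid comes back as INDEX_UUID_NA_VALUE
        System.out.println(parsed.getIndexName() + "/" + parsed.getId());
    }
}
```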
+ */ + public static ShardId fromString(String shardIdString) { + int splitPosition = shardIdString.indexOf("]["); + if (splitPosition <= 0 || shardIdString.charAt(0) != '[' || shardIdString.charAt(shardIdString.length() - 1) != ']') { + throw new IllegalArgumentException("Unexpected shardId string format, expected [indexName][shardId] but got " + shardIdString); + } + String indexName = shardIdString.substring(1, splitPosition); + int shardId = Integer.parseInt(shardIdString.substring(splitPosition + 2, shardIdString.length() - 1)); + return new ShardId(new Index(indexName, IndexMetaData.INDEX_UUID_NA_VALUE), shardId); + } + @Override public boolean equals(Object o) { if (this == o) return true; diff --git a/core/src/main/java/org/elasticsearch/index/shard/ShardPath.java b/core/src/main/java/org/elasticsearch/index/shard/ShardPath.java index 23b17c290f12e..eb2d530fd99e0 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/ShardPath.java +++ b/core/src/main/java/org/elasticsearch/index/shard/ShardPath.java @@ -21,6 +21,7 @@ import org.apache.logging.log4j.Logger; import org.apache.lucene.util.IOUtils; import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.env.NodeEnvironment; import org.elasticsearch.env.ShardLock; import org.elasticsearch.index.IndexSettings; @@ -113,7 +114,8 @@ public static ShardPath loadShardPath(Logger logger, NodeEnvironment env, ShardI final Path[] paths = env.availableShardPaths(shardId); Path loadedPath = null; for (Path path : paths) { - ShardStateMetaData load = ShardStateMetaData.FORMAT.loadLatestState(logger, path); + // EMPTY is safe here because we never call namedObject + ShardStateMetaData load = ShardStateMetaData.FORMAT.loadLatestState(logger, NamedXContentRegistry.EMPTY, path); if (load != null) { if (load.indexUUID.equals(indexUUID) == false && IndexMetaData.INDEX_UUID_NA_VALUE.equals(load.indexUUID) == false) { logger.warn("{} found shard on path: [{}] with a different index UUID - this shard seems to be leftover from a different index with the same name. 
Remove the leftover shard in order to reuse the path with the current index", shardId, path); @@ -150,7 +152,8 @@ public static void deleteLeftoverShardDirectory(Logger logger, NodeEnvironment e final String indexUUID = indexSettings.getUUID(); final Path[] paths = env.availableShardPaths(lock.getShardId()); for (Path path : paths) { - ShardStateMetaData load = ShardStateMetaData.FORMAT.loadLatestState(logger, path); + // EMPTY is safe here because we never call namedObject + ShardStateMetaData load = ShardStateMetaData.FORMAT.loadLatestState(logger, NamedXContentRegistry.EMPTY, path); if (load != null) { if (load.indexUUID.equals(indexUUID) == false && IndexMetaData.INDEX_UUID_NA_VALUE.equals(load.indexUUID) == false) { logger.warn("{} deleting leftover shard on path: [{}] with a different index UUID", lock.getShardId(), path); diff --git a/core/src/main/java/org/elasticsearch/index/shard/SnapshotStatus.java b/core/src/main/java/org/elasticsearch/index/shard/SnapshotStatus.java index a72cfc48b6506..2014f81ec56fa 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/SnapshotStatus.java +++ b/core/src/main/java/org/elasticsearch/index/shard/SnapshotStatus.java @@ -24,7 +24,7 @@ */ public class SnapshotStatus { - public static enum Stage { + public enum Stage { NONE, INDEX, TRANSLOG, diff --git a/core/src/main/java/org/elasticsearch/index/shard/StoreRecovery.java b/core/src/main/java/org/elasticsearch/index/shard/StoreRecovery.java index 44b4ed933f7cd..04c2113dea34b 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/StoreRecovery.java +++ b/core/src/main/java/org/elasticsearch/index/shard/StoreRecovery.java @@ -104,12 +104,11 @@ boolean recoverFromLocalShards(BiConsumer mappingUpdate if (indices.size() > 1) { throw new IllegalArgumentException("can't add shards from more than one index"); } - for (ObjectObjectCursor mapping : shards.get(0).getMappings()) { + IndexMetaData indexMetaData = shards.get(0).getIndexMetaData(); + for (ObjectObjectCursor mapping : indexMetaData.getMappings()) { mappingUpdateConsumer.accept(mapping.key, mapping.value); } - for (ObjectObjectCursor mapping : shards.get(0).getMappings()) { - indexShard.mapperService().merge(mapping.key,mapping.value.source(), MapperService.MergeReason.MAPPING_RECOVERY, true); - } + indexShard.mapperService().merge(indexMetaData, MapperService.MergeReason.MAPPING_RECOVERY, true); return executeRecovery(indexShard, () -> { logger.debug("starting recovery from local shards {}", shards); try { @@ -363,7 +362,7 @@ private void internalRecoverFromStore(IndexShard indexShard) throws IndexShardRe indexShard.finalizeRecovery(); indexShard.postRecovery("post recovery from shard_store"); } catch (EngineException | IOException e) { - throw new IndexShardRecoveryException(shardId, "failed to recovery from gateway", e); + throw new IndexShardRecoveryException(shardId, "failed to recover from gateway", e); } finally { store.decRef(); } diff --git a/core/src/main/java/org/elasticsearch/index/shard/TranslogRecoveryPerformer.java b/core/src/main/java/org/elasticsearch/index/shard/TranslogRecoveryPerformer.java index 64ae0c7700752..6aaff1b5cd93b 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/TranslogRecoveryPerformer.java +++ b/core/src/main/java/org/elasticsearch/index/shard/TranslogRecoveryPerformer.java @@ -22,13 +22,13 @@ import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import 
org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.engine.IgnoreOnRecoveryEngineException; import org.elasticsearch.index.mapper.DocumentMapperForType; import org.elasticsearch.index.mapper.MapperException; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.Mapping; -import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.translog.Translog; import org.elasticsearch.rest.RestStatus; @@ -146,7 +146,7 @@ private void maybeAddMappingUpdate(String type, Mapping update, String docId, bo * cause a {@link MapperException} to be thrown if an update * is encountered. */ - private void performRecoveryOperation(Engine engine, Translog.Operation operation, boolean allowMappingUpdates, Engine.Operation.Origin origin) { + private void performRecoveryOperation(Engine engine, Translog.Operation operation, boolean allowMappingUpdates, Engine.Operation.Origin origin) throws IOException { try { switch (operation.opType()) { @@ -154,7 +154,8 @@ private void performRecoveryOperation(Engine engine, Translog.Operation operatio Translog.Index index = (Translog.Index) operation; // we set canHaveDuplicates to true all the time such that we de-optimze the translog case and ensure that all // autoGeneratedID docs that are coming from the primary are updated correctly. - Engine.Index engineIndex = IndexShard.prepareIndex(docMapper(index.type()), source(shardId.getIndexName(), index.type(), index.id(), index.source()) + Engine.Index engineIndex = IndexShard.prepareIndex(docMapper(index.type()), + source(shardId.getIndexName(), index.type(), index.id(), index.source(), XContentFactory.xContentType(index.source())) .routing(index.routing()).parent(index.parent()).timestamp(index.timestamp()).ttl(index.ttl()), index.version(), index.versionType().versionTypeForReplicationAndRecovery(), origin, index.getAutoGeneratedIdTimestamp(), true); maybeAddMappingUpdate(engineIndex.type(), engineIndex.parsedDoc().dynamicMappingsUpdate(), engineIndex.id(), allowMappingUpdates); @@ -165,12 +166,11 @@ private void performRecoveryOperation(Engine engine, Translog.Operation operatio break; case DELETE: Translog.Delete delete = (Translog.Delete) operation; - Uid uid = Uid.createUid(delete.uid().text()); if (logger.isTraceEnabled()) { - logger.trace("[translog] recover [delete] op of [{}][{}]", uid.type(), uid.id()); + logger.trace("[translog] recover [delete] op of [{}][{}]", delete.type(), delete.id()); } - final Engine.Delete engineDelete = new Engine.Delete(uid.type(), uid.id(), delete.uid(), delete.version(), - delete.versionType().versionTypeForReplicationAndRecovery(), origin, System.nanoTime(), false); + final Engine.Delete engineDelete = new Engine.Delete(delete.type(), delete.id(), delete.uid(), delete.version(), + delete.versionType().versionTypeForReplicationAndRecovery(), origin, System.nanoTime()); delete(engine, engineDelete); break; default: @@ -197,11 +197,11 @@ private void performRecoveryOperation(Engine engine, Translog.Operation operatio operationProcessed(); } - protected void index(Engine engine, Engine.Index engineIndex) { + protected void index(Engine engine, Engine.Index engineIndex) throws IOException { engine.index(engineIndex); } - protected void delete(Engine engine, Engine.Delete engineDelete) { + protected void delete(Engine engine, Engine.Delete engineDelete) throws IOException { engine.delete(engineDelete); } diff --git 
a/core/src/main/java/org/elasticsearch/index/similarity/BooleanSimilarityProvider.java b/core/src/main/java/org/elasticsearch/index/similarity/BooleanSimilarityProvider.java new file mode 100644 index 0000000000000..c4903eda96162 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/similarity/BooleanSimilarityProvider.java @@ -0,0 +1,48 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.similarity; + +import org.apache.lucene.search.similarities.BooleanSimilarity; +import org.elasticsearch.common.settings.Settings; + +/** + * {@link SimilarityProvider} for the {@link BooleanSimilarity}, + * which is a simple similarity that gives terms a score equal + * to their query boost only. This is useful in situations where + * a field does not need to be scored by a full-text ranking + * algorithm, but rather all that matters is whether the query + * terms matched or not. + */ +public class BooleanSimilarityProvider extends AbstractSimilarityProvider { + + private final BooleanSimilarity similarity = new BooleanSimilarity(); + + public BooleanSimilarityProvider(String name, Settings settings) { + super(name); + } + + /** + * {@inheritDoc} + */ + @Override + public BooleanSimilarity get() { + return similarity; + } +} diff --git a/core/src/main/java/org/elasticsearch/index/similarity/SimilarityService.java b/core/src/main/java/org/elasticsearch/index/similarity/SimilarityService.java index a8b7fafb9804e..838f9764fa9ea 100644 --- a/core/src/main/java/org/elasticsearch/index/similarity/SimilarityService.java +++ b/core/src/main/java/org/elasticsearch/index/similarity/SimilarityService.java @@ -44,18 +44,19 @@ public final class SimilarityService extends AbstractIndexComponent { public static final Map> BUILT_IN; static { Map> defaults = new HashMap<>(); - Map> buildIn = new HashMap<>(); defaults.put("classic", ClassicSimilarityProvider::new); defaults.put("BM25", BM25SimilarityProvider::new); - buildIn.put("classic", ClassicSimilarityProvider::new); - buildIn.put("BM25", BM25SimilarityProvider::new); - buildIn.put("DFR", DFRSimilarityProvider::new); - buildIn.put("IB", IBSimilarityProvider::new); - buildIn.put("LMDirichlet", LMDirichletSimilarityProvider::new); - buildIn.put("LMJelinekMercer", LMJelinekMercerSimilarityProvider::new); - buildIn.put("DFI", DFISimilarityProvider::new); + defaults.put("boolean", BooleanSimilarityProvider::new); + + Map> builtIn = new HashMap<>(defaults); + builtIn.put("DFR", DFRSimilarityProvider::new); + builtIn.put("IB", IBSimilarityProvider::new); + builtIn.put("LMDirichlet", LMDirichletSimilarityProvider::new); + builtIn.put("LMJelinekMercer", LMJelinekMercerSimilarityProvider::new); + builtIn.put("DFI", DFISimilarityProvider::new); + DEFAULTS = Collections.unmodifiableMap(defaults); 
- BUILT_IN = Collections.unmodifiableMap(buildIn); + BUILT_IN = Collections.unmodifiableMap(builtIn); } public SimilarityService(IndexSettings indexSettings, Map> similarities) { diff --git a/core/src/main/java/org/elasticsearch/index/snapshots/IndexShardSnapshotStatus.java b/core/src/main/java/org/elasticsearch/index/snapshots/IndexShardSnapshotStatus.java index 51c8bcf5d7e27..644caa7520be5 100644 --- a/core/src/main/java/org/elasticsearch/index/snapshots/IndexShardSnapshotStatus.java +++ b/core/src/main/java/org/elasticsearch/index/snapshots/IndexShardSnapshotStatus.java @@ -27,7 +27,7 @@ public class IndexShardSnapshotStatus { /** * Snapshot stage */ - public static enum Stage { + public enum Stage { /** * Snapshot hasn't started yet */ @@ -66,7 +66,7 @@ public static enum Stage { private long indexVersion; - private boolean aborted; + private volatile boolean aborted; private String failure; diff --git a/core/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshot.java b/core/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshot.java index 5bb0f728bc14e..37b728d43d622 100644 --- a/core/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshot.java +++ b/core/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshot.java @@ -23,11 +23,9 @@ import org.apache.lucene.util.Version; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.Strings; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.unit.ByteSizeValue; -import org.elasticsearch.common.xcontent.FromXContentBuilder; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; @@ -41,9 +39,7 @@ /** * Shard snapshot metadata */ -public class BlobStoreIndexShardSnapshot implements ToXContent, FromXContentBuilder { - - public static final BlobStoreIndexShardSnapshot PROTO = new BlobStoreIndexShardSnapshot(); +public class BlobStoreIndexShardSnapshot implements ToXContent { /** * Information about snapshotted file @@ -70,7 +66,7 @@ public FileInfo(String name, StoreFileMetaData metaData, ByteSizeValue partSize) long partBytes = Long.MAX_VALUE; if (partSize != null) { - partBytes = partSize.bytes(); + partBytes = partSize.getBytes(); } long totalLength = metaData.length(); @@ -261,7 +257,7 @@ public static void toXContent(FileInfo file, XContentBuilder builder, ToXContent builder.field(CHECKSUM, file.metadata.checksum()); } if (file.partSize != null) { - builder.field(PART_SIZE, file.partSize.bytes()); + builder.field(PART_SIZE, file.partSize.getBytes()); } if (file.metadata.writtenBy() != null) { @@ -507,8 +503,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws * @param parser parser * @return shard snapshot metadata */ - public BlobStoreIndexShardSnapshot fromXContent(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException { - + public static BlobStoreIndexShardSnapshot fromXContent(XContentParser parser) throws IOException { String snapshot = null; long indexVersion = -1; long startTime = 0; @@ -527,24 +522,24 @@ public BlobStoreIndexShardSnapshot fromXContent(XContentParser parser, ParseFiel String currentFieldName = parser.currentName(); token = parser.nextToken(); if (token.isValue()) { - if 
(parseFieldMatcher.match(currentFieldName, PARSE_NAME)) { + if (PARSE_NAME.match(currentFieldName)) { snapshot = parser.text(); - } else if (parseFieldMatcher.match(currentFieldName, PARSE_INDEX_VERSION)) { + } else if (PARSE_INDEX_VERSION.match(currentFieldName)) { // The index-version is needed for backward compatibility with v 1.0 indexVersion = parser.longValue(); - } else if (parseFieldMatcher.match(currentFieldName, PARSE_START_TIME)) { + } else if (PARSE_START_TIME.match(currentFieldName)) { startTime = parser.longValue(); - } else if (parseFieldMatcher.match(currentFieldName, PARSE_TIME)) { + } else if (PARSE_TIME.match(currentFieldName)) { time = parser.longValue(); - } else if (parseFieldMatcher.match(currentFieldName, PARSE_NUMBER_OF_FILES)) { + } else if (PARSE_NUMBER_OF_FILES.match(currentFieldName)) { numberOfFiles = parser.intValue(); - } else if (parseFieldMatcher.match(currentFieldName, PARSE_TOTAL_SIZE)) { + } else if (PARSE_TOTAL_SIZE.match(currentFieldName)) { totalSize = parser.longValue(); } else { throw new ElasticsearchParseException("unknown parameter [{}]", currentFieldName); } } else if (token == XContentParser.Token.START_ARRAY) { - if (parseFieldMatcher.match(currentFieldName, PARSE_FILES)) { + if (PARSE_FILES.match(currentFieldName)) { while ((parser.nextToken()) != XContentParser.Token.END_ARRAY) { indexFiles.add(FileInfo.fromXContent(parser)); } @@ -562,5 +557,4 @@ public BlobStoreIndexShardSnapshot fromXContent(XContentParser parser, ParseFiel return new BlobStoreIndexShardSnapshot(snapshot, indexVersion, Collections.unmodifiableList(indexFiles), startTime, time, numberOfFiles, totalSize); } - } diff --git a/core/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshots.java b/core/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshots.java index 5b66d9b6f6ff8..56c343a5ae9b5 100644 --- a/core/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshots.java +++ b/core/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshots.java @@ -21,8 +21,6 @@ import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.xcontent.FromXContentBuilder; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; @@ -44,9 +42,7 @@ * This class is used to find files that were already snapshotted and clear out files that no longer referenced by any * snapshots */ -public class BlobStoreIndexShardSnapshots implements Iterable, ToXContent, FromXContentBuilder { - - public static final BlobStoreIndexShardSnapshots PROTO = new BlobStoreIndexShardSnapshots(); +public class BlobStoreIndexShardSnapshots implements Iterable, ToXContent { private final List shardSnapshots; private final Map files; @@ -232,8 +228,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws return builder; } - @Override - public BlobStoreIndexShardSnapshots fromXContent(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException { + public static BlobStoreIndexShardSnapshots fromXContent(XContentParser parser) throws IOException { XContentParser.Token token = parser.currentToken(); if (token == null) { // New parser token = parser.nextToken(); @@ -248,7 +243,7 @@ public BlobStoreIndexShardSnapshots 
fromXContent(XContentParser parser, ParseFie String currentFieldName = parser.currentName(); token = parser.nextToken(); if (token == XContentParser.Token.START_ARRAY) { - if (parseFieldMatcher.match(currentFieldName, ParseFields.FILES) == false) { + if (ParseFields.FILES.match(currentFieldName) == false) { throw new ElasticsearchParseException("unknown array [{}]", currentFieldName); } while (parser.nextToken() != XContentParser.Token.END_ARRAY) { @@ -256,7 +251,7 @@ public BlobStoreIndexShardSnapshots fromXContent(XContentParser parser, ParseFie files.put(fileInfo.name(), fileInfo); } } else if (token == XContentParser.Token.START_OBJECT) { - if (parseFieldMatcher.match(currentFieldName, ParseFields.SNAPSHOTS) == false) { + if (ParseFields.SNAPSHOTS.match(currentFieldName) == false) { throw new ElasticsearchParseException("unknown object [{}]", currentFieldName); } while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { @@ -271,7 +266,7 @@ public BlobStoreIndexShardSnapshots fromXContent(XContentParser parser, ParseFie if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); if (parser.nextToken() == XContentParser.Token.START_ARRAY) { - if (parseFieldMatcher.match(currentFieldName, ParseFields.FILES) == false) { + if (ParseFields.FILES.match(currentFieldName) == false) { throw new ElasticsearchParseException("unknown array [{}]", currentFieldName); } List fileNames = new ArrayList<>(); @@ -289,7 +284,7 @@ public BlobStoreIndexShardSnapshots fromXContent(XContentParser parser, ParseFie } } - List snapshots = new ArrayList<>(); + List snapshots = new ArrayList<>(snapshotsMap.size()); for (Map.Entry> entry : snapshotsMap.entrySet()) { List fileInfosBuilder = new ArrayList<>(); for (String file : entry.getValue()) { diff --git a/core/src/main/java/org/elasticsearch/index/store/FsDirectoryService.java b/core/src/main/java/org/elasticsearch/index/store/FsDirectoryService.java index 69eedd7ef190c..c758d43778352 100644 --- a/core/src/main/java/org/elasticsearch/index/store/FsDirectoryService.java +++ b/core/src/main/java/org/elasticsearch/index/store/FsDirectoryService.java @@ -91,7 +91,7 @@ public Directory newDirectory() throws IOException { Set preLoadExtensions = new HashSet<>( indexSettings.getValue(IndexModule.INDEX_STORE_PRE_LOAD_SETTING)); wrapped = setPreload(wrapped, location, lockFactory, preLoadExtensions); - if (IndexMetaData.isOnSharedFilesystem(indexSettings.getSettings())) { + if (indexSettings.isOnSharedFilesystem()) { wrapped = new SleepingLockWrapper(wrapped, 5000); } return new RateLimitedFSDirectory(wrapped, this, this) ; @@ -105,7 +105,10 @@ public void onPause(long nanos) { protected Directory newFSDirectory(Path location, LockFactory lockFactory) throws IOException { final String storeType = indexSettings.getSettings().get(IndexModule.INDEX_STORE_TYPE_SETTING.getKey(), IndexModule.Type.FS.getSettingsKey()); - if (IndexModule.Type.FS.match(storeType) || IndexModule.Type.DEFAULT.match(storeType)) { + if (IndexModule.Type.FS.match(storeType)) { + return FSDirectory.open(location, lockFactory); // use lucene defaults + } else if (IndexModule.Type.DEFAULT.match(storeType)) { + deprecationLogger.deprecated("[default] store type is deprecated, use [fs] instead"); return FSDirectory.open(location, lockFactory); // use lucene defaults } else if (IndexModule.Type.SIMPLEFS.match(storeType)) { return new SimpleFSDirectory(location, lockFactory); diff --git a/core/src/main/java/org/elasticsearch/index/store/Store.java 
b/core/src/main/java/org/elasticsearch/index/store/Store.java index abcb4a44c8086..88b1ba3fb8696 100644 --- a/core/src/main/java/org/elasticsearch/index/store/Store.java +++ b/core/src/main/java/org/elasticsearch/index/store/Store.java @@ -65,7 +65,6 @@ import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.util.Callback; import org.elasticsearch.common.util.SingleObjectCache; import org.elasticsearch.common.util.concurrent.AbstractRefCounted; import org.elasticsearch.common.util.concurrent.RefCounted; @@ -84,6 +83,7 @@ import java.io.FileNotFoundException; import java.io.IOException; import java.io.InputStream; +import java.nio.file.AccessDeniedException; import java.nio.file.NoSuchFileException; import java.nio.file.Path; import java.util.ArrayList; @@ -96,6 +96,7 @@ import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.locks.ReentrantReadWriteLock; +import java.util.function.Consumer; import java.util.zip.CRC32; import java.util.zip.Checksum; @@ -222,7 +223,8 @@ final void ensureOpen() { * {@link #readMetadataSnapshot(Path, ShardId, NodeEnvironment.ShardLocker, Logger)} to read a meta data while locking * {@link IndexShard#snapshotStoreMetadata()} to safely read from an existing shard * {@link IndexShard#acquireIndexCommit(boolean)} to get an {@link IndexCommit} which is safe to use but has to be freed - * + * @param commit the index commit to read the snapshot from or null if the latest snapshot should be read from the + * directory * @throws CorruptIndexException if the lucene index is corrupted. This can be caused by a checksum mismatch or an * unexpected exception when opening the index reading the segments file. * @throws IndexFormatTooOldException if the lucene index is too old to be opened. @@ -232,20 +234,50 @@ final void ensureOpen() { * @throws IndexNotFoundException if the commit point can't be found in this store */ public MetadataSnapshot getMetadata(IndexCommit commit) throws IOException { + return getMetadata(commit, false); + } + + /** + * Returns a new MetadataSnapshot for the given commit. If the given commit is null + * the latest commit point is used. + * + * Note that this method requires the caller verify it has the right to access the store and + * no concurrent file changes are happening. If in doubt, you probably want to use one of the following: + * + * {@link #readMetadataSnapshot(Path, ShardId, NodeEnvironment.ShardLocker, Logger)} to read a meta data while locking + * {@link IndexShard#snapshotStoreMetadata()} to safely read from an existing shard + * {@link IndexShard#acquireIndexCommit(boolean)} to get an {@link IndexCommit} which is safe to use but has to be freed + * + * @param commit the index commit to read the snapshot from or null if the latest snapshot should be read from the + * directory + * @param lockDirectory if true the index writer lock will be obtained before reading the snapshot. This should + * only be used if there is no started shard using this store. + * @throws CorruptIndexException if the lucene index is corrupted. This can be caused by a checksum mismatch or an + * unexpected exception when opening the index reading the segments file. + * @throws IndexFormatTooOldException if the lucene index is too old to be opened. + * @throws IndexFormatTooNewException if the lucene index is too new to be opened. 
+ * @throws FileNotFoundException if one or more files referenced by a commit are not present. + * @throws NoSuchFileException if one or more files referenced by a commit are not present. + * @throws IndexNotFoundException if the commit point can't be found in this store + */ + public MetadataSnapshot getMetadata(IndexCommit commit, boolean lockDirectory) throws IOException { ensureOpen(); failIfCorrupted(); - metadataLock.readLock().lock(); - try { + assert lockDirectory ? commit == null : true : "IW lock should not be obtained if there is a commit point available"; + // if we lock the directory we also acquire the write lock since that makes sure that nobody else tries to lock the IW + // on this store at the same time. + java.util.concurrent.locks.Lock lock = lockDirectory ? metadataLock.writeLock() : metadataLock.readLock(); + lock.lock(); + try (Closeable ignored = lockDirectory ? directory.obtainLock(IndexWriter.WRITE_LOCK_NAME) : () -> {} ) { return new MetadataSnapshot(commit, directory, logger); } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException ex) { markStoreCorrupted(ex); throw ex; } finally { - metadataLock.readLock().unlock(); + lock.unlock(); } } - /** * Renames all the given files from the key of the map to the * value of the map. All successfully renamed files are removed from the map in-place. @@ -362,7 +394,7 @@ private void closeInternal() { try { directory.innerClose(); // this closes the distributorDirectory as well } finally { - onClose.handle(shardLock); + onClose.accept(shardLock); } } catch (IOException e) { logger.debug("failed to close directory", e); @@ -371,7 +403,6 @@ private void closeInternal() { } } - /** * Reads a MetadataSnapshot from the given index locations or returns an empty snapshot if it can't be read. * @@ -388,7 +419,7 @@ public static MetadataSnapshot readMetadataSnapshot(Path indexLocation, ShardId } catch (FileNotFoundException | NoSuchFileException ex) { logger.info("Failed to open / find files while reading metadata snapshot"); } catch (ShardLockObtainFailedException ex) { - logger.info("{}: failed to obtain shard lock", ex, shardId); + logger.info((Supplier) () -> new ParameterizedMessage("{}: failed to obtain shard lock", shardId), ex); } return MetadataSnapshot.EMPTY; } @@ -413,15 +444,12 @@ public static boolean canOpenIndex(Logger logger, Path indexLocation, ShardId sh * segment infos and possible corruption markers. If the index can not * be opened, an exception is thrown */ - public static void tryOpenIndex(Path indexLocation, ShardId shardId, NodeEnvironment.ShardLocker shardLocker, Logger logger) throws IOException { + public static void tryOpenIndex(Path indexLocation, ShardId shardId, NodeEnvironment.ShardLocker shardLocker, Logger logger) throws IOException, ShardLockObtainFailedException { try (ShardLock lock = shardLocker.lock(shardId, TimeUnit.SECONDS.toMillis(5)); Directory dir = new SimpleFSDirectory(indexLocation)) { failIfCorrupted(dir, shardId); SegmentInfos segInfo = Lucene.readSegmentInfos(dir); logger.trace("{} loaded segment info [{}]", shardId, segInfo); - } catch (ShardLockObtainFailedException ex) { - logger.error("{} unable to acquire shard lock", ex, shardId); - throw new IOException(ex); } } @@ -582,7 +610,7 @@ private static void failIfCorrupted(Directory directory, ShardId shardId) throws /** * This method deletes every file in this store that is not contained in the given source meta data or is a * legacy checksum file. 
After the delete it pulls the latest metadata snapshot from the store and compares it - * to the given snapshot. If the snapshots are inconsistent an illegal state exception is thrown + * to the given snapshot. If the snapshots are inconsistent an illegal state exception is thrown. * * @param reason the reason for this cleanup operation logged for each deleted file * @param sourceMetaData the metadata used for cleanup. all files in this metadata should be kept around. @@ -626,9 +654,9 @@ final void verifyAfterCleanup(MetadataSnapshot sourceMetaData, MetadataSnapshot for (StoreFileMetaData meta : recoveryDiff.different) { StoreFileMetaData local = targetMetaData.get(meta.name()); StoreFileMetaData remote = sourceMetaData.get(meta.name()); - // if we have different files the they must have no checksums otherwise something went wrong during recovery. - // we have that problem when we have an empty index is only a segments_1 file then we can't tell if it's a Lucene 4.8 file - // and therefore no checksum. That isn't much of a problem since we simply copy it over anyway but those files come out as + // if we have different files then they must have no checksums; otherwise something went wrong during recovery. + // we have that problem when we have an empty index is only a segments_1 file so we can't tell if it's a Lucene 4.8 file + // and therefore no checksum is included. That isn't a problem since we simply copy it over anyway but those files come out as // different in the diff. That's why we have to double check here again if the rest of it matches. // all is fine this file is just part of a commit or a segment that is different @@ -661,7 +689,6 @@ static final class StoreDirectory extends FilterDirectory { this.deletesLogger = deletesLogger; } - @Override public void close() throws IOException { assert false : "Nobody should close this directory except of the Store itself"; @@ -843,7 +870,7 @@ private static void checksumFromLuceneFile(Directory directory, String file, Map Logger logger, Version version, boolean readFileAsHash) throws IOException { final String checksum; final BytesRefBuilder fileHash = new BytesRefBuilder(); - try (final IndexInput in = directory.openInput(file, IOContext.READONCE)) { + try (IndexInput in = directory.openInput(file, IOContext.READONCE)) { final long length; try { length = in.length(); @@ -1179,11 +1206,11 @@ static class VerifyingIndexInput extends ChecksumIndexInput { private final byte[] checksum = new byte[8]; private long verifiedPosition = 0; - public VerifyingIndexInput(IndexInput input) { + VerifyingIndexInput(IndexInput input) { this(input, new BufferedChecksum(new CRC32())); } - public VerifyingIndexInput(IndexInput input, Checksum digest) { + VerifyingIndexInput(IndexInput input, Checksum digest) { super("VerifyingIndexInput(" + input + ")"); this.input = input; this.digest = digest; @@ -1336,14 +1363,14 @@ public void markStoreCorrupted(IOException exception) throws IOException { /** * A listener that is executed once the store is closed and all references to it are released */ - public interface OnClose extends Callback { + public interface OnClose extends Consumer { OnClose EMPTY = new OnClose() { /** * This method is called while the provided {@link org.elasticsearch.env.ShardLock} is held. * This method is only called once after all resources for a store are released. 
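Since OnClose now extends Consumer&lt;ShardLock&gt;, the close hook can be supplied as a lambda rather than an anonymous Callback implementation. A minimal sketch (what a real caller does with the lock is up to it; this simply mirrors the EMPTY no-op):

```java
import org.elasticsearch.env.ShardLock;
import org.elasticsearch.index.store.Store;

public class OnCloseSketch {
    // OnClose is now a functional interface over Consumer<ShardLock>, so a lambda works.
    // It is invoked exactly once, while the shard lock is still held.
    static final Store.OnClose NOOP = (ShardLock lock) -> {
        // a real caller would release per-shard resources here
    };
}
```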
*/ @Override - public void handle(ShardLock Lock) { + public void accept(ShardLock Lock) { } }; } @@ -1352,7 +1379,7 @@ private static class StoreStatsCache extends SingleObjectCache { private final Directory directory; private final DirectoryService directoryService; - public StoreStatsCache(TimeValue refreshInterval, Directory directory, DirectoryService directoryService) throws IOException { + StoreStatsCache(TimeValue refreshInterval, Directory directory, DirectoryService directoryService) throws IOException { super(refreshInterval, new StoreStats(estimateSize(directory), directoryService.throttleTimeInNanos())); this.directory = directory; this.directoryService = directoryService; @@ -1373,8 +1400,9 @@ private static long estimateSize(Directory directory) throws IOException { for (String file : files) { try { estimatedSize += directory.fileLength(file); - } catch (NoSuchFileException | FileNotFoundException e) { - // ignore, the file is not there no more + } catch (NoSuchFileException | FileNotFoundException | AccessDeniedException e) { + // ignore, the file is not there no more; on Windows, if one thread concurrently deletes a file while + // calling Files.size, you can also sometimes hit AccessDeniedException } } return estimatedSize; diff --git a/core/src/main/java/org/elasticsearch/index/store/StoreStats.java b/core/src/main/java/org/elasticsearch/index/store/StoreStats.java index d777d7b7830c2..d3406c44891fa 100644 --- a/core/src/main/java/org/elasticsearch/index/store/StoreStats.java +++ b/core/src/main/java/org/elasticsearch/index/store/StoreStats.java @@ -79,12 +79,6 @@ public TimeValue getThrottleTime() { return throttleTime(); } - public static StoreStats readStoreStats(StreamInput in) throws IOException { - StoreStats store = new StoreStats(); - store.readFrom(in); - return store; - } - @Override public void readFrom(StreamInput in) throws IOException { sizeInBytes = in.readVLong(); diff --git a/core/src/main/java/org/elasticsearch/index/termvectors/TermVectorsService.java b/core/src/main/java/org/elasticsearch/index/termvectors/TermVectorsService.java index 50583a148d757..b3fa7b29991b0 100644 --- a/core/src/main/java/org/elasticsearch/index/termvectors/TermVectorsService.java +++ b/core/src/main/java/org/elasticsearch/index/termvectors/TermVectorsService.java @@ -34,8 +34,9 @@ import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Strings; import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.lucene.uid.Versions; +import org.elasticsearch.common.lucene.uid.VersionsResolver.DocIdAndVersion; import org.elasticsearch.common.xcontent.XContentHelper; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.support.XContentMapValues; import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.get.GetField; @@ -49,8 +50,6 @@ import org.elasticsearch.index.mapper.SourceFieldMapper; import org.elasticsearch.index.mapper.StringFieldMapper; import org.elasticsearch.index.mapper.TextFieldMapper; -import org.elasticsearch.index.mapper.Uid; -import org.elasticsearch.index.mapper.UidFieldMapper; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.search.dfs.AggregatedDfs; @@ -82,9 +81,13 @@ public static TermVectorsResponse getTermVectors(IndexShard indexShard, TermVect static TermVectorsResponse getTermVectors(IndexShard indexShard, TermVectorsRequest request, LongSupplier nanoTimeSupplier) { final long startTime = nanoTimeSupplier.getAsLong(); final 
TermVectorsResponse termVectorsResponse = new TermVectorsResponse(indexShard.shardId().getIndex().getName(), request.type(), request.id()); - final Term uidTerm = new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(request.type(), request.id())); - - Engine.GetResult get = indexShard.get(new Engine.Get(request.realtime(), uidTerm).version(request.version()).versionType(request.versionType())); + final Term uidTerm = indexShard.mapperService().createUidTerm(request.type(), request.id()); + if (uidTerm == null) { + termVectorsResponse.setExists(false); + return termVectorsResponse; + } + Engine.GetResult get = indexShard.get(new Engine.Get(request.realtime(), request.type(), request.id(), uidTerm) + .version(request.version()).versionType(request.versionType())); Fields termVectorsByField = null; AggregatedDfs dfs = null; @@ -98,7 +101,7 @@ static TermVectorsResponse getTermVectors(IndexShard indexShard, TermVectorsRequ final Engine.Searcher searcher = indexShard.acquireSearcher("term_vector"); try { Fields topLevelFields = MultiFields.getFields(get.searcher() != null ? get.searcher().reader() : searcher.reader()); - Versions.DocIdAndVersion docIdAndVersion = get.docIdAndVersion(); + DocIdAndVersion docIdAndVersion = get.docIdAndVersion(); /* from an artificial document */ if (request.doc() != null) { termVectorsByField = generateTermVectorsFromDoc(indexShard, request); @@ -214,12 +217,12 @@ private static Analyzer getAnalyzerAtField(IndexShard indexShard, String field, MapperService mapperService = indexShard.mapperService(); Analyzer analyzer; if (perFieldAnalyzer != null && perFieldAnalyzer.containsKey(field)) { - analyzer = mapperService.analysisService().analyzer(perFieldAnalyzer.get(field).toString()); + analyzer = mapperService.getIndexAnalyzers().get(perFieldAnalyzer.get(field).toString()); } else { analyzer = mapperService.fullName(field).indexAnalyzer(); } if (analyzer == null) { - analyzer = mapperService.analysisService().defaultIndexAnalyzer(); + analyzer = mapperService.getIndexAnalyzers().getDefaultIndexAnalyzer(); } return analyzer; } @@ -258,8 +261,12 @@ private static Fields generateTermVectors(IndexShard indexShard, Map> entry : values.entrySet()) { String field = entry.getKey(); Analyzer analyzer = getAnalyzerAtField(indexShard, field, perFieldAnalyzer); - for (Object text : entry.getValue()) { - index.addField(field, text.toString(), analyzer); + if (entry.getValue() instanceof List) { + for (Object text : entry.getValue()) { + index.addField(field, text.toString(), analyzer); + } + } else { + index.addField(field, entry.getValue().toString(), analyzer); } } /* and read vectors from it */ @@ -268,7 +275,8 @@ private static Fields generateTermVectors(IndexShard indexShard, Map= 4 : "reusable buffer must have capacity >=4 when reading opSize. got [" + reusableBuffer.capacity() + "]"; - try { - reusableBuffer.clear(); - reusableBuffer.limit(4); - readBytes(reusableBuffer, position); - reusableBuffer.flip(); - // Add an extra 4 to account for the operation size integer itself - final int size = reusableBuffer.getInt() + 4; - final long maxSize = sizeInBytes() - position; - if (size < 0 || size > maxSize) { - throw new TranslogCorruptedException("operation size is corrupted must be [0.." 
+ maxSize + "] but was: " + size); - } - - return size; - } catch (IOException e) { - throw new ElasticsearchException("unexpected exception reading from translog snapshot of " + this.path, e); + reusableBuffer.clear(); + reusableBuffer.limit(4); + readBytes(reusableBuffer, position); + reusableBuffer.flip(); + // Add an extra 4 to account for the operation size integer itself + final int size = reusableBuffer.getInt() + 4; + final long maxSize = sizeInBytes() - position; + if (size < 0 || size > maxSize) { + throw new TranslogCorruptedException("operation size is corrupted must be [0.." + maxSize + "] but was: " + size); } + return size; } public Translog.Snapshot newSnapshot() { diff --git a/core/src/main/java/org/elasticsearch/index/translog/BufferedChecksumStreamInput.java b/core/src/main/java/org/elasticsearch/index/translog/BufferedChecksumStreamInput.java index 58aa60a23c857..ba6da4ba522bb 100644 --- a/core/src/main/java/org/elasticsearch/index/translog/BufferedChecksumStreamInput.java +++ b/core/src/main/java/org/elasticsearch/index/translog/BufferedChecksumStreamInput.java @@ -20,8 +20,10 @@ package org.elasticsearch.index.translog; import org.apache.lucene.store.BufferedChecksum; +import org.elasticsearch.common.io.stream.FilterStreamInput; import org.elasticsearch.common.io.stream.StreamInput; +import java.io.EOFException; import java.io.IOException; import java.util.zip.CRC32; import java.util.zip.Checksum; @@ -30,19 +32,18 @@ * Similar to Lucene's BufferedChecksumIndexInput, however this wraps a * {@link StreamInput} so anything read will update the checksum */ -public final class BufferedChecksumStreamInput extends StreamInput { +public final class BufferedChecksumStreamInput extends FilterStreamInput { private static final int SKIP_BUFFER_SIZE = 1024; private byte[] skipBuffer; - private final StreamInput in; private final Checksum digest; public BufferedChecksumStreamInput(StreamInput in) { - this.in = in; + super(in); this.digest = new BufferedChecksum(new CRC32()); } public BufferedChecksumStreamInput(StreamInput in, BufferedChecksumStreamInput reuse) { - this.in = in; + super(in); if (reuse == null ) { this.digest = new BufferedChecksum(new CRC32()); } else { @@ -58,20 +59,20 @@ public long getChecksum() { @Override public byte readByte() throws IOException { - final byte b = in.readByte(); + final byte b = delegate.readByte(); digest.update(b); return b; } @Override public void readBytes(byte[] b, int offset, int len) throws IOException { - in.readBytes(b, offset, len); + delegate.readBytes(b, offset, len); digest.update(b, offset, len); } @Override public void reset() throws IOException { - in.reset(); + delegate.reset(); digest.reset(); } @@ -80,14 +81,9 @@ public int read() throws IOException { return readByte() & 0xFF; } - @Override - public void close() throws IOException { - in.close(); - } - @Override public boolean markSupported() { - return in.markSupported(); + return delegate.markSupported(); } @@ -109,17 +105,14 @@ public long skip(long numBytes) throws IOException { return skipped; } - @Override - public int available() throws IOException { - return in.available(); - } @Override public synchronized void mark(int readlimit) { - in.mark(readlimit); + delegate.mark(readlimit); } public void resetDigest() { digest.reset(); } + } diff --git a/core/src/main/java/org/elasticsearch/index/translog/Checkpoint.java b/core/src/main/java/org/elasticsearch/index/translog/Checkpoint.java index d4eb980782718..1e2b25b1d1a84 100644 --- 
a/core/src/main/java/org/elasticsearch/index/translog/Checkpoint.java +++ b/core/src/main/java/org/elasticsearch/index/translog/Checkpoint.java @@ -89,7 +89,7 @@ public String toString() { public static Checkpoint read(Path path) throws IOException { try (Directory dir = new SimpleFSDirectory(path.getParent())) { - try (final IndexInput indexInput = dir.openInput(path.getFileName().toString(), IOContext.DEFAULT)) { + try (IndexInput indexInput = dir.openInput(path.getFileName().toString(), IOContext.DEFAULT)) { if (indexInput.length() == LEGACY_NON_CHECKSUMMED_FILE_LENGTH) { // OLD unchecksummed file that was written < ES 5.0.0 return Checkpoint.readNonChecksummed(indexInput); @@ -111,7 +111,7 @@ public synchronized byte[] toByteArray() { } }; final String resourceDesc = "checkpoint(path=\"" + checkpointFile + "\", gen=" + checkpoint + ")"; - try (final OutputStreamIndexOutput indexOutput = + try (OutputStreamIndexOutput indexOutput = new OutputStreamIndexOutput(resourceDesc, checkpointFile.toString(), byteOutputStream, FILE_SIZE)) { CodecUtil.writeHeader(indexOutput, CHECKPOINT_CODEC, INITIAL_VERSION); checkpoint.write(indexOutput); diff --git a/core/src/main/java/org/elasticsearch/index/translog/Translog.java b/core/src/main/java/org/elasticsearch/index/translog/Translog.java index 0082b7a033685..ef0c0ecb654b7 100644 --- a/core/src/main/java/org/elasticsearch/index/translog/Translog.java +++ b/core/src/main/java/org/elasticsearch/index/translog/Translog.java @@ -25,7 +25,6 @@ import org.apache.lucene.index.TwoPhaseCommit; import org.apache.lucene.store.AlreadyClosedException; import org.apache.lucene.util.IOUtils; -import org.elasticsearch.ElasticsearchException; import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.bytes.BytesArray; @@ -39,17 +38,16 @@ import org.elasticsearch.common.lucene.uid.Versions; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.util.concurrent.ConcurrentCollections; -import org.elasticsearch.common.util.concurrent.FutureUtils; import org.elasticsearch.common.util.concurrent.ReleasableLock; import org.elasticsearch.index.VersionType; import org.elasticsearch.index.engine.Engine; +import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.shard.AbstractIndexShardComponent; import org.elasticsearch.index.shard.IndexShardComponent; import java.io.Closeable; import java.io.EOFException; import java.io.IOException; -import java.nio.ByteBuffer; import java.nio.channels.FileChannel; import java.nio.file.Files; import java.nio.file.Path; @@ -57,9 +55,9 @@ import java.nio.file.StandardOpenOption; import java.util.ArrayList; import java.util.List; +import java.util.Objects; import java.util.Optional; import java.util.Set; -import java.util.concurrent.ScheduledFuture; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.locks.ReadWriteLock; import java.util.concurrent.locks.ReentrantReadWriteLock; @@ -112,7 +110,6 @@ public class Translog extends AbstractIndexShardComponent implements IndexShardC // the list of translog readers is guaranteed to be in order of translog generation private final List readers = new ArrayList<>(); - private volatile ScheduledFuture syncScheduler; // this is a concurrent set and is not protected by any of the locks. 
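The checksumming changes above all follow one pattern: BufferedChecksumStreamInput now delegates plain reads to FilterStreamInput while every byte that passes through also updates a running CRC32, so corruption can be detected by comparing the accumulated digest with the value stored at write time. A minimal, self-contained sketch of that pattern using only JDK classes (the class and variable names are illustrative, not the Elasticsearch types):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;
import java.util.zip.CheckedInputStream;

public class ChecksummedReadSketch {
    public static void main(String[] args) throws IOException {
        // Pretend this is a serialized operation followed by a CRC32 value computed at write time.
        byte[] payload = "some translog operation".getBytes(StandardCharsets.UTF_8);
        CRC32 writtenChecksum = new CRC32();
        writtenChecksum.update(payload, 0, payload.length);

        // The wrapper delegates every read to the underlying stream and updates the digest as a side effect.
        CheckedInputStream in = new CheckedInputStream(new ByteArrayInputStream(payload), new CRC32());
        new DataInputStream(in).readFully(new byte[payload.length]);

        // After the payload has been consumed, the running digest must match the stored checksum.
        if (in.getChecksum().getValue() != writtenChecksum.getValue()) {
            throw new IOException("checksum mismatch: stream is corrupted");
        }
    }
}
```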
The main reason // is that is being accessed by two separate classes (additions & reading are done by Translog, remove by View when closed) private final Set outstandingViews = ConcurrentCollections.newConcurrentSet(); @@ -312,7 +309,6 @@ public void close() throws IOException { closeFilesIfNoPendingViews(); } } finally { - FutureUtils.cancel(syncScheduler); logger.debug("translog closed"); } } @@ -387,31 +383,6 @@ TranslogWriter createWriter(long fileGeneration) throws IOException { return newFile; } - - /** - * Read the Operation object from the given location. This method will try to read the given location from - * the current or from the currently committing translog file. If the location is in a file that has already - * been closed or even removed the method will return null instead. - */ - Translog.Operation read(Location location) { // TODO this is only here for testing - we can remove it? - try (ReleasableLock lock = readLock.acquire()) { - final BaseTranslogReader reader; - final long currentGeneration = current.getGeneration(); - if (currentGeneration == location.generation) { - reader = current; - } else if (readers.isEmpty() == false && readers.get(readers.size() - 1).getGeneration() == location.generation) { - reader = readers.get(readers.size() - 1); - } else if (currentGeneration < location.generation) { - throw new IllegalStateException("location generation [" + location.generation + "] is greater than the current generation [" + currentGeneration + "]"); - } else { - return null; - } - return reader.read(location); - } catch (IOException e) { - throw new ElasticsearchException("failed to read source from translog location " + location, e); - } - } - /** * Adds a delete / index operations to the transaction log. * @@ -435,7 +406,6 @@ public Location add(Operation operation) throws IOException { try (ReleasableLock lock = readLock.acquire()) { ensureOpen(); Location location = current.add(bytes); - assert assertBytesAtLocation(location, bytes); return location; } } catch (AlreadyClosedException | IOException ex) { @@ -453,7 +423,7 @@ public Location add(Operation operation) throws IOException { } throw new TranslogException(shardId, "Failed to write operation [" + operation + "]", e); } finally { - Releasables.close(out.bytes()); + Releasables.close(out); } } @@ -472,13 +442,6 @@ public Location getLastWriteLocation() { } } - boolean assertBytesAtLocation(Translog.Location location, BytesReference expectedBytes) throws IOException { - // tests can override this - ByteBuffer buffer = ByteBuffer.allocate(location.size); - current.readBytes(buffer, location.translogLocation); - return new BytesArray(buffer.array()).equals(expectedBytes); - } - /** * Snapshots the current transaction log allowing to safely iterate over the snapshot. * Snapshots are fixed in time and will not be updated with future operations. 
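Each operation the translog writes is a length-prefixed record, and the readSize method earlier in this patch validates the four-byte prefix (plus the size of the prefix itself) against the bytes remaining before anything else is read. A rough, self-contained sketch of that framing check against a plain in-memory buffer (illustrative only, not the actual Translog reader):

```java
import java.nio.ByteBuffer;

public class SizePrefixedRecordSketch {

    /**
     * Reads the length prefix of the record starting at {@code position} and validates it against
     * the bytes that remain, the way the translog reader bounds-checks an operation before reading it.
     */
    static int readRecordSize(ByteBuffer file, long position) {
        int payloadSize = file.getInt((int) position);
        int totalSize = payloadSize + Integer.BYTES;   // account for the 4-byte size prefix itself
        long maxSize = file.limit() - position;        // bytes available from this position
        if (totalSize < 0 || totalSize > maxSize) {
            throw new IllegalStateException("operation size is corrupted, must be [0.." + maxSize + "] but was: " + totalSize);
        }
        return totalSize;
    }

    public static void main(String[] args) {
        ByteBuffer file = ByteBuffer.allocate(16);
        file.putInt(8);     // size prefix: 8 bytes of payload follow
        file.putLong(42L);  // the payload
        file.flip();
        System.out.println(readRecordSize(file, 0));   // prints 12: 4-byte prefix + 8-byte payload
    }
}
```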
@@ -865,13 +828,13 @@ public Index(StreamInput in) throws IOException { } } - public Index(Engine.Index index) { + public Index(Engine.Index index, Engine.IndexResult indexResult) { this.id = index.id(); this.type = index.type(); this.source = index.source(); this.routing = index.routing(); this.parent = index.parent(); - this.version = index.version(); + this.version = indexResult.getVersion(); this.timestamp = index.timestamp(); this.ttl = index.ttl(); this.versionType = index.versionType(); @@ -1014,32 +977,45 @@ public long getAutoGeneratedIdTimestamp() { } public static class Delete implements Operation { - public static final int SERIALIZATION_FORMAT = 2; // since 2.0-beta1 and 1.1 + public static final int FORMAT_5_0 = 2; // 5.0 - 5.5 + private static final int FORMAT_SINGLE_TYPE = FORMAT_5_0 + 1; // 5.5 - 6.0 + private final String type, id; private final Term uid; private final long version; private final VersionType versionType; public Delete(StreamInput in) throws IOException { final int format = in.readVInt();// SERIALIZATION_FORMAT - assert format == SERIALIZATION_FORMAT : "format was: " + format; - uid = new Term(in.readString(), in.readString()); + assert format >= FORMAT_5_0 && format <= FORMAT_SINGLE_TYPE : "format was: " + format; + if (format >= FORMAT_SINGLE_TYPE) { + type = in.readString(); + id = in.readString(); + uid = new Term(in.readString(), in.readString()); + } else { + uid = new Term(in.readString(), in.readString()); + // the uid was constructed from the type and id so we can + // extract them back + Uid uidObject = Uid.createUid(uid.text()); + type = uidObject.type(); + id = uidObject.id(); + } this.version = in.readLong(); this.versionType = VersionType.fromValue(in.readByte()); assert versionType.validateVersionForWrites(this.version); } - public Delete(Engine.Delete delete) { - this.uid = delete.uid(); - this.version = delete.version(); - this.versionType = delete.versionType(); + public Delete(Engine.Delete delete, Engine.DeleteResult deleteResult) { + this(delete.type(), delete.id(), delete.uid(), deleteResult.getVersion(), delete.versionType()); } - public Delete(Term uid) { - this(uid, Versions.MATCH_ANY, VersionType.INTERNAL); + public Delete(String type, String id, Term uid) { + this(type, id, uid, Versions.MATCH_ANY, VersionType.INTERNAL); } - public Delete(Term uid, long version, VersionType versionType) { + public Delete(String type, String id, Term uid, long version, VersionType versionType) { + this.type = Objects.requireNonNull(type); + this.id = Objects.requireNonNull(id); this.uid = uid; this.version = version; this.versionType = versionType; @@ -1055,6 +1031,14 @@ public long estimateSize() { return ((uid.field().length() + uid.text().length()) * 2) + 20; } + public String type() { + return type; + } + + public String id() { + return id; + } + public Term uid() { return this.uid; } @@ -1074,7 +1058,9 @@ public Source getSource() { @Override public void writeTo(StreamOutput out) throws IOException { - out.writeVInt(SERIALIZATION_FORMAT); + out.writeVInt(FORMAT_SINGLE_TYPE); + out.writeString(type); + out.writeString(id); out.writeString(uid.field()); out.writeString(uid.text()); out.writeLong(version); @@ -1200,7 +1186,7 @@ public static void writeOperations(StreamOutput outStream, List toWri bytes.writeTo(outStream); } } finally { - Releasables.close(out.bytes()); + Releasables.close(out); } } diff --git a/core/src/main/java/org/elasticsearch/index/translog/TranslogSnapshot.java 
b/core/src/main/java/org/elasticsearch/index/translog/TranslogSnapshot.java index f33ec1bd60702..ffbe1002eb146 100644 --- a/core/src/main/java/org/elasticsearch/index/translog/TranslogSnapshot.java +++ b/core/src/main/java/org/elasticsearch/index/translog/TranslogSnapshot.java @@ -26,7 +26,7 @@ import java.nio.channels.FileChannel; import java.nio.file.Path; -public class TranslogSnapshot extends BaseTranslogReader implements Translog.Snapshot { +final class TranslogSnapshot extends BaseTranslogReader implements Translog.Snapshot { private final int totalOperations; protected final long length; @@ -40,7 +40,7 @@ public class TranslogSnapshot extends BaseTranslogReader implements Translog.Sna * Create a snapshot of translog file channel. The length parameter should be consistent with totalOperations and point * at the end of the last operation in this snapshot. */ - public TranslogSnapshot(long generation, FileChannel channel, Path path, long firstOperationOffset, long length, int totalOperations) { + TranslogSnapshot(long generation, FileChannel channel, Path path, long firstOperationOffset, long length, int totalOperations) { super(generation, channel, path, firstOperationOffset); this.length = length; this.totalOperations = totalOperations; @@ -51,7 +51,7 @@ public TranslogSnapshot(long generation, FileChannel channel, Path path, long fi } @Override - public final int totalOperations() { + public int totalOperations() { return totalOperations; } @@ -64,7 +64,7 @@ public Translog.Operation next() throws IOException { } } - protected final Translog.Operation readOperation() throws IOException { + protected Translog.Operation readOperation() throws IOException { final int opSize = readSize(reusableBuffer, position); reuse = checksummedStream(reusableBuffer, position, opSize, reuse); Translog.Operation op = read(reuse); diff --git a/core/src/main/java/org/elasticsearch/index/translog/TranslogToolCli.java b/core/src/main/java/org/elasticsearch/index/translog/TranslogToolCli.java index d565074a50c13..944296d6813e7 100644 --- a/core/src/main/java/org/elasticsearch/index/translog/TranslogToolCli.java +++ b/core/src/main/java/org/elasticsearch/index/translog/TranslogToolCli.java @@ -21,10 +21,6 @@ import org.elasticsearch.cli.MultiCommand; import org.elasticsearch.cli.Terminal; -import org.elasticsearch.common.logging.LogConfigurator; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.env.Environment; -import org.elasticsearch.node.internal.InternalSettingsPreparer; /** * Class encapsulating and dispatching commands from the {@code elasticsearch-translog} command line tool @@ -37,17 +33,7 @@ private TranslogToolCli() { } public static void main(String[] args) throws Exception { - // initialize default for es.logger.level because we will not read the log4j2.properties - String loggerLevel = System.getProperty("es.logger.level", "INFO"); - String pathHome = System.getProperty("es.path.home"); - // Set the appender for all potential log files to terminal so that other components that use the logger print out the - // same terminal. 
- Environment loggingEnvironment = InternalSettingsPreparer.prepareEnvironment(Settings.builder() - .put("path.home", pathHome) - .put("logger.level", loggerLevel) - .build(), Terminal.DEFAULT); - LogConfigurator.configure(loggingEnvironment, false); - exit(new TranslogToolCli().main(args, Terminal.DEFAULT)); } + } diff --git a/core/src/main/java/org/elasticsearch/index/translog/TranslogWriter.java b/core/src/main/java/org/elasticsearch/index/translog/TranslogWriter.java index bb4a84651c561..241729c4acb35 100644 --- a/core/src/main/java/org/elasticsearch/index/translog/TranslogWriter.java +++ b/core/src/main/java/org/elasticsearch/index/translog/TranslogWriter.java @@ -332,7 +332,7 @@ protected final boolean isClosed() { private final class BufferedChannelOutputStream extends BufferedOutputStream { - public BufferedChannelOutputStream(OutputStream out, int size) throws IOException { + BufferedChannelOutputStream(OutputStream out, int size) throws IOException { super(out, size); } diff --git a/core/src/main/java/org/elasticsearch/index/translog/TruncateTranslogCommand.java b/core/src/main/java/org/elasticsearch/index/translog/TruncateTranslogCommand.java index 6514cd42709d2..ef339b31972e8 100644 --- a/core/src/main/java/org/elasticsearch/index/translog/TruncateTranslogCommand.java +++ b/core/src/main/java/org/elasticsearch/index/translog/TruncateTranslogCommand.java @@ -34,10 +34,11 @@ import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.IOUtils; import org.elasticsearch.ElasticsearchException; -import org.elasticsearch.cli.SettingCommand; +import org.elasticsearch.cli.EnvironmentAwareCommand; import org.elasticsearch.cli.Terminal; import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.io.PathUtils; +import org.elasticsearch.env.Environment; import org.elasticsearch.index.IndexNotFoundException; import java.io.IOException; @@ -54,7 +55,7 @@ import java.util.Map; import java.util.Set; -public class TruncateTranslogCommand extends SettingCommand { +public class TruncateTranslogCommand extends EnvironmentAwareCommand { private final OptionSpec translogFolder; private final OptionSpec batchMode; @@ -86,7 +87,7 @@ private Path getTranslogPath(OptionSet options) { } @Override - protected void execute(Terminal terminal, OptionSet options, Map settings) throws Exception { + protected void execute(Terminal terminal, OptionSet options, Environment env) throws Exception { boolean batch = options.has(batchMode); Path translogPath = getTranslogPath(options); diff --git a/core/src/main/java/org/elasticsearch/index/warmer/WarmerStats.java b/core/src/main/java/org/elasticsearch/index/warmer/WarmerStats.java index 233dbf4f5fe8a..21dec0f62a043 100644 --- a/core/src/main/java/org/elasticsearch/index/warmer/WarmerStats.java +++ b/core/src/main/java/org/elasticsearch/index/warmer/WarmerStats.java @@ -86,12 +86,6 @@ public TimeValue totalTime() { return new TimeValue(totalTimeInMillis); } - public static WarmerStats readWarmerStats(StreamInput in) throws IOException { - WarmerStats refreshStats = new WarmerStats(); - refreshStats.readFrom(in); - return refreshStats; - } - @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(Fields.WARMER); diff --git a/core/src/main/java/org/elasticsearch/indices/AbstractIndexShardCacheEntity.java b/core/src/main/java/org/elasticsearch/indices/AbstractIndexShardCacheEntity.java index c0d929d82f5eb..98afd8781b4f8 100644 --- 
a/core/src/main/java/org/elasticsearch/indices/AbstractIndexShardCacheEntity.java +++ b/core/src/main/java/org/elasticsearch/indices/AbstractIndexShardCacheEntity.java @@ -19,40 +19,15 @@ package org.elasticsearch.indices; -import org.apache.lucene.index.DirectoryReader; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.cache.RemovalNotification; -import org.elasticsearch.common.io.stream.BytesStreamOutput; -import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.index.cache.request.ShardRequestCache; import org.elasticsearch.index.shard.IndexShard; -import java.io.IOException; - /** * Abstract base class for the an {@link IndexShard} level {@linkplain IndicesRequestCache.CacheEntity}. */ abstract class AbstractIndexShardCacheEntity implements IndicesRequestCache.CacheEntity { - @FunctionalInterface - public interface Loader { - void load(StreamOutput out) throws IOException; - } - - private final Loader loader; - private boolean loadedFromCache = true; - - protected AbstractIndexShardCacheEntity(Loader loader) { - this.loader = loader; - } - - /** - * When called after passing this through - * {@link IndicesRequestCache#getOrCompute(IndicesRequestCache.CacheEntity, DirectoryReader, BytesReference)} this will return whether - * or not the result was loaded from the cache. - */ - public final boolean loadedFromCache() { - return loadedFromCache; - } /** * Get the {@linkplain ShardRequestCache} used to track cache statistics. @@ -60,27 +35,7 @@ public final boolean loadedFromCache() { protected abstract ShardRequestCache stats(); @Override - public final IndicesRequestCache.Value loadValue() throws IOException { - /* BytesStreamOutput allows to pass the expected size but by default uses - * BigArrays.PAGE_SIZE_IN_BYTES which is 16k. A common cached result ie. - * a date histogram with 3 buckets is ~100byte so 16k might be very wasteful - * since we don't shrink to the actual size once we are done serializing. 
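The removed comment above captures a sizing trade-off: the paged stream defaults to a 16 KB page while a typical cached result is on the order of a hundred bytes, so starting from a much smaller expected size and letting the buffer grow avoids wasting most of a page per entry. A toy illustration of that idea, with a plain JDK stream standing in for the paged output and the figures taken as assumptions from the comment:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class ExpectedSizeBufferSketch {
    public static void main(String[] args) throws IOException {
        // Assumed figures from the comment above: a ~16 KB default page versus a ~100 byte cached result.
        final int defaultPageSizeInBytes = 16 * 1024;
        final int expectedResultSizeInBytes = 512;

        // Start small and let the buffer grow; a tiny result then costs ~512 bytes instead of a full 16 KB page.
        try (ByteArrayOutputStream out = new ByteArrayOutputStream(expectedResultSizeInBytes)) {
            out.write(new byte[100]); // serialize a small response
            System.out.println("wrote " + out.size() + " bytes; initial capacity was "
                + expectedResultSizeInBytes + " rather than " + defaultPageSizeInBytes);
        }
    }
}
```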
- * By passing 512 as the expected size we will resize the byte array in the stream - * slowly until we hit the page size and don't waste too much memory for small query - * results.*/ - final int expectedSizeInBytes = 512; - try (BytesStreamOutput out = new BytesStreamOutput(expectedSizeInBytes)) { - loader.load(out); - // for now, keep the paged data structure, which might have unused bytes to fill a page, but better to keep - // the memory properly paged instead of having varied sized bytes - final BytesReference reference = out.bytes(); - loadedFromCache = false; - return new IndicesRequestCache.Value(reference, out.ramBytesUsed()); - } - } - - @Override - public final void onCached(IndicesRequestCache.Key key, IndicesRequestCache.Value value) { + public final void onCached(IndicesRequestCache.Key key, BytesReference value) { stats().onCached(key, value); } @@ -95,7 +50,7 @@ public final void onMiss() { } @Override - public final void onRemoval(RemovalNotification notification) { + public final void onRemoval(RemovalNotification notification) { stats().onRemoval(notification.getKey(), notification.getValue(), notification.getRemovalReason() == RemovalNotification.RemovalReason.EVICTED); } diff --git a/core/src/main/java/org/elasticsearch/indices/IndexAlreadyExistsException.java b/core/src/main/java/org/elasticsearch/indices/IndexAlreadyExistsException.java deleted file mode 100644 index f64addfd2f1c1..0000000000000 --- a/core/src/main/java/org/elasticsearch/indices/IndexAlreadyExistsException.java +++ /dev/null @@ -1,51 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.indices; - -import org.elasticsearch.ElasticsearchException; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.index.Index; -import org.elasticsearch.rest.RestStatus; - -import java.io.IOException; - -/** - * - */ -public class IndexAlreadyExistsException extends ElasticsearchException { - - public IndexAlreadyExistsException(Index index) { - this(index, "index " + index.toString() + " already exists"); - } - - public IndexAlreadyExistsException(Index index, String message) { - super(message); - setIndex(index); - } - - public IndexAlreadyExistsException(StreamInput in) throws IOException{ - super(in); - } - - @Override - public RestStatus status() { - return RestStatus.BAD_REQUEST; - } -} diff --git a/core/src/main/java/org/elasticsearch/indices/IndexTemplateAlreadyExistsException.java b/core/src/main/java/org/elasticsearch/indices/IndexTemplateAlreadyExistsException.java index 3b665e051e049..947e3eab90b74 100644 --- a/core/src/main/java/org/elasticsearch/indices/IndexTemplateAlreadyExistsException.java +++ b/core/src/main/java/org/elasticsearch/indices/IndexTemplateAlreadyExistsException.java @@ -26,8 +26,9 @@ import java.io.IOException; /** - * + * @deprecated use {@link IllegalArgumentException} instead */ +@Deprecated public class IndexTemplateAlreadyExistsException extends ElasticsearchException { private final String name; diff --git a/core/src/main/java/org/elasticsearch/indices/IndexingMemoryController.java b/core/src/main/java/org/elasticsearch/indices/IndexingMemoryController.java index 75c3f06062fbb..1b960bb15992b 100644 --- a/core/src/main/java/org/elasticsearch/indices/IndexingMemoryController.java +++ b/core/src/main/java/org/elasticsearch/indices/IndexingMemoryController.java @@ -21,6 +21,7 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; +import org.apache.lucene.store.AlreadyClosedException; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; @@ -30,11 +31,10 @@ import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.index.engine.EngineClosedException; -import org.elasticsearch.index.engine.FlushNotAllowedEngineException; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.IndexShardState; import org.elasticsearch.index.shard.IndexingOperationListener; +import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.threadpool.ThreadPool.Cancellable; import org.elasticsearch.threadpool.ThreadPool.Names; @@ -52,7 +52,7 @@ public class IndexingMemoryController extends AbstractComponent implements IndexingOperationListener, Closeable { /** How much heap (% or bytes) we will share across all actively indexing shards on this node (default: 10%). */ - public static final Setting INDEX_BUFFER_SIZE_SETTING = + public static final Setting INDEX_BUFFER_SIZE_SETTING = Setting.memorySizeSetting("indices.memory.index_buffer_size", "10%", Property.NodeScope); /** Only applies when indices.memory.index_buffer_size is a %, to set a floor on the actual size in bytes (default: 48 MB). 
*/ @@ -106,10 +106,10 @@ public class IndexingMemoryController extends AbstractComponent implements Index // We only apply the min/max when % value was used for the index buffer: ByteSizeValue minIndexingBuffer = MIN_INDEX_BUFFER_SIZE_SETTING.get(this.settings); ByteSizeValue maxIndexingBuffer = MAX_INDEX_BUFFER_SIZE_SETTING.get(this.settings); - if (indexingBuffer.bytes() < minIndexingBuffer.bytes()) { + if (indexingBuffer.getBytes() < minIndexingBuffer.getBytes()) { indexingBuffer = minIndexingBuffer; } - if (maxIndexingBuffer.bytes() != -1 && indexingBuffer.bytes() > maxIndexingBuffer.bytes()) { + if (maxIndexingBuffer.getBytes() != -1 && indexingBuffer.getBytes() > maxIndexingBuffer.getBytes()) { indexingBuffer = maxIndexingBuffer; } } @@ -190,11 +190,6 @@ void forceCheck() { statusChecker.run(); } - /** called by IndexShard to record that this many bytes were written to translog */ - public void bytesWritten(int bytes) { - statusChecker.bytesWritten(bytes); - } - /** Asks this shard to throttle indexing to one thread */ protected void activateThrottling(IndexShard shard) { shard.activateThrottling(); @@ -206,24 +201,27 @@ protected void deactivateThrottling(IndexShard shard) { } @Override - public void postIndex(Engine.Index index, boolean created) { - recordOperationBytes(index); + public void postIndex(ShardId shardId, Engine.Index index, Engine.IndexResult result) { + recordOperationBytes(index, result); } @Override - public void postDelete(Engine.Delete delete) { - recordOperationBytes(delete); + public void postDelete(ShardId shardId, Engine.Delete delete, Engine.DeleteResult result) { + recordOperationBytes(delete, result); } - private void recordOperationBytes(Engine.Operation op) { - bytesWritten(op.sizeInBytes()); + /** called by IndexShard to record estimated bytes written to translog for the operation */ + private void recordOperationBytes(Engine.Operation operation, Engine.Result result) { + if (result.hasFailure() == false) { + statusChecker.bytesWritten(operation.estimatedSizeInBytes()); + } } private static final class ShardAndBytesUsed implements Comparable { final long bytesUsed; final IndexShard shard; - public ShardAndBytesUsed(long bytesUsed, IndexShard shard) { + ShardAndBytesUsed(long bytesUsed, IndexShard shard) { this.bytesUsed = bytesUsed; this.shard = shard; } @@ -245,13 +243,13 @@ final class ShardsIndicesStatusChecker implements Runnable { public void bytesWritten(int bytes) { long totalBytes = bytesWrittenSinceCheck.addAndGet(bytes); assert totalBytes >= 0; - while (totalBytes > indexingBuffer.bytes()/30) { + while (totalBytes > indexingBuffer.getBytes()/30) { if (runLock.tryLock()) { try { // Must pull this again because it may have changed since we first checked: totalBytes = bytesWrittenSinceCheck.get(); - if (totalBytes > indexingBuffer.bytes()/30) { + if (totalBytes > indexingBuffer.getBytes()/30) { bytesWrittenSinceCheck.addAndGet(-totalBytes); // NOTE: this is only an approximate check, because bytes written is to the translog, vs indexing memory buffer which is // typically smaller but can be larger in extreme cases (many unique terms). 
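To make the arithmetic in the status checker concrete: the shared budget comes from indices.memory.index_buffer_size, the checker only re-runs after roughly budget/30 bytes of translog writes, shards are asked to write their indexing buffers once the budget is exceeded, and indexing is throttled once used-plus-writing bytes pass 1.5x the budget. A small sketch with an assumed 100 MB budget (the heap figure is only an example, not a recommendation):

```java
public class IndexingBufferThresholdsSketch {
    public static void main(String[] args) {
        // Assume the shared budget (indices.memory.index_buffer_size) resolved to 100 MB, e.g. 10% of a 1 GB heap.
        final long indexingBufferBytes = 100L * 1024 * 1024;

        // The checker only re-runs after roughly budget/30 bytes have been written to the translog.
        final long recheckAfterBytes = indexingBufferBytes / 30;             // ~3.4 MB

        // Shards are asked to write their indexing buffers once total heap use exceeds the budget itself.
        final long writeBuffersAboveBytes = indexingBufferBytes;             // 100 MB

        // Indexing gets throttled once "in use" plus "still being moved to disk" exceeds 150% of the budget.
        final long throttleAboveBytes = (long) (1.5 * indexingBufferBytes);  // 150 MB

        System.out.println("recheck after ~" + recheckAfterBytes / 1024 + " KB of translog writes");
        System.out.println("write buffers above " + writeBuffersAboveBytes / (1024 * 1024) + " MB");
        System.out.println("throttle above " + throttleAboveBytes / (1024 * 1024) + " MB");
    }
}
```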
This logic is here only as a safety against @@ -320,9 +318,9 @@ private void runUnlocked() { // If we are using more than 50% of our budget across both indexing buffer and bytes we are still moving to disk, then we now // throttle the top shards to send back-pressure to ongoing indexing: - boolean doThrottle = (totalBytesWriting + totalBytesUsed) > 1.5 * indexingBuffer.bytes(); + boolean doThrottle = (totalBytesWriting + totalBytesUsed) > 1.5 * indexingBuffer.getBytes(); - if (totalBytesUsed > indexingBuffer.bytes()) { + if (totalBytesUsed > indexingBuffer.getBytes()) { // OK we are now over-budget; fill the priority queue and ask largest shard(s) to refresh: PriorityQueue queue = new PriorityQueue<>(); @@ -357,7 +355,7 @@ private void runUnlocked() { logger.debug("now write some indexing buffers: total indexing heap bytes used [{}] vs {} [{}], currently writing bytes [{}], [{}] shards with non-zero indexing buffer", new ByteSizeValue(totalBytesUsed), INDEX_BUFFER_SIZE_SETTING.getKey(), indexingBuffer, new ByteSizeValue(totalBytesWriting), queue.size()); - while (totalBytesUsed > indexingBuffer.bytes() && queue.isEmpty() == false) { + while (totalBytesUsed > indexingBuffer.getBytes() && queue.isEmpty() == false) { ShardAndBytesUsed largest = queue.poll(); logger.debug("write indexing buffer to disk for shard [{}] to free up its [{}] indexing buffer", largest.shard.shardId(), new ByteSizeValue(largest.bytesUsed)); writeIndexingBufferAsync(largest.shard); @@ -386,8 +384,8 @@ private void runUnlocked() { protected void checkIdle(IndexShard shard, long inactiveTimeNS) { try { shard.checkIdle(inactiveTimeNS); - } catch (EngineClosedException | FlushNotAllowedEngineException e) { - logger.trace("ignore exception while checking if shard {} is inactive", e, shard.shardId()); + } catch (AlreadyClosedException e) { + logger.trace((Supplier) () -> new ParameterizedMessage("ignore exception while checking if shard {} is inactive", shard.shardId()), e); } } } diff --git a/core/src/main/java/org/elasticsearch/indices/IndicesModule.java b/core/src/main/java/org/elasticsearch/indices/IndicesModule.java index fed710244fbb7..82a511ea3d1be 100644 --- a/core/src/main/java/org/elasticsearch/indices/IndicesModule.java +++ b/core/src/main/java/org/elasticsearch/indices/IndicesModule.java @@ -22,12 +22,9 @@ import org.elasticsearch.action.admin.indices.rollover.Condition; import org.elasticsearch.action.admin.indices.rollover.MaxAgeCondition; import org.elasticsearch.action.admin.indices.rollover.MaxDocsCondition; -import org.elasticsearch.action.update.UpdateHelper; -import org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService; import org.elasticsearch.common.geo.ShapesAvailability; import org.elasticsearch.common.inject.AbstractModule; import org.elasticsearch.common.io.stream.NamedWriteableRegistry.Entry; -import org.elasticsearch.index.NodeServicesProvider; import org.elasticsearch.index.mapper.AllFieldMapper; import org.elasticsearch.index.mapper.BinaryFieldMapper; import org.elasticsearch.index.mapper.BooleanFieldMapper; @@ -40,11 +37,13 @@ import org.elasticsearch.index.mapper.IndexFieldMapper; import org.elasticsearch.index.mapper.IpFieldMapper; import org.elasticsearch.index.mapper.KeywordFieldMapper; +import org.elasticsearch.index.mapper.LatLonPointFieldMapper; import org.elasticsearch.index.mapper.Mapper; import org.elasticsearch.index.mapper.MetadataFieldMapper; import org.elasticsearch.index.mapper.NumberFieldMapper; import org.elasticsearch.index.mapper.ObjectMapper; import 
org.elasticsearch.index.mapper.ParentFieldMapper; +import org.elasticsearch.index.mapper.RangeFieldMapper; import org.elasticsearch.index.mapper.RoutingFieldMapper; import org.elasticsearch.index.mapper.ScaledFloatFieldMapper; import org.elasticsearch.index.mapper.SourceFieldMapper; @@ -59,9 +58,6 @@ import org.elasticsearch.indices.cluster.IndicesClusterStateService; import org.elasticsearch.indices.flush.SyncedFlushService; import org.elasticsearch.indices.mapper.MapperRegistry; -import org.elasticsearch.indices.recovery.RecoverySettings; -import org.elasticsearch.indices.recovery.PeerRecoverySourceService; -import org.elasticsearch.indices.recovery.PeerRecoveryTargetService; import org.elasticsearch.indices.store.IndicesStore; import org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData; import org.elasticsearch.indices.ttl.IndicesTTLService; @@ -77,16 +73,11 @@ * Configures classes and services that are shared by indices on each node. */ public class IndicesModule extends AbstractModule { - - private final Map mapperParsers; - private final Map metadataMapperParsers; - private final MapperRegistry mapperRegistry; private final List namedWritables = new ArrayList<>(); + private final MapperRegistry mapperRegistry; public IndicesModule(List mapperPlugins) { - this.mapperParsers = getMappers(mapperPlugins); - this.metadataMapperParsers = getMetadataMappers(mapperPlugins); - this.mapperRegistry = new MapperRegistry(mapperParsers, metadataMapperParsers); + this.mapperRegistry = new MapperRegistry(getMappers(mapperPlugins), getMetadataMappers(mapperPlugins)); registerBuiltinWritables(); } @@ -106,6 +97,9 @@ private Map getMappers(List mapperPlugi for (NumberFieldMapper.NumberType type : NumberFieldMapper.NumberType.values()) { mappers.put(type.typeName(), new NumberFieldMapper.TypeParser(type)); } + for (RangeFieldMapper.RangeType type : RangeFieldMapper.RangeType.values()) { + mappers.put(type.typeName(), new RangeFieldMapper.TypeParser(type)); + } mappers.put(BooleanFieldMapper.CONTENT_TYPE, new BooleanFieldMapper.TypeParser()); mappers.put(BinaryFieldMapper.CONTENT_TYPE, new BinaryFieldMapper.TypeParser()); mappers.put(DateFieldMapper.CONTENT_TYPE, new DateFieldMapper.TypeParser()); @@ -119,6 +113,7 @@ private Map getMappers(List mapperPlugi mappers.put(ObjectMapper.NESTED_CONTENT_TYPE, new ObjectMapper.TypeParser()); mappers.put(CompletionFieldMapper.CONTENT_TYPE, new CompletionFieldMapper.TypeParser()); mappers.put(GeoPointFieldMapper.CONTENT_TYPE, new GeoPointFieldMapper.TypeParser()); + mappers.put(LatLonPointFieldMapper.CONTENT_TYPE, new LatLonPointFieldMapper.TypeParser()); if (ShapesAvailability.JTS_AVAILABLE && ShapesAvailability.SPATIAL4J_AVAILABLE) { mappers.put(GeoShapeFieldMapper.CONTENT_TYPE, new GeoShapeFieldMapper.TypeParser()); } @@ -170,27 +165,17 @@ private Map getMetadataMappers(List INDICES_CACHE_QUERY_SIZE_SETTING = + public static final Setting INDICES_CACHE_QUERY_SIZE_SETTING = Setting.memorySizeSetting("indices.queries.cache.size", "10%", Property.NodeScope); - public static final Setting INDICES_CACHE_QUERY_COUNT_SETTING = + public static final Setting INDICES_CACHE_QUERY_COUNT_SETTING = Setting.intSetting("indices.queries.cache.count", 10000, 1, Property.NodeScope); // enables caching on all segments instead of only the larger ones, for testing only - public static final Setting INDICES_QUERIES_CACHE_ALL_SEGMENTS_SETTING = + public static final Setting INDICES_QUERIES_CACHE_ALL_SEGMENTS_SETTING = 
Setting.boolSetting("indices.queries.cache.all_segments", false, Property.NodeScope); private final LRUQueryCache cache; @@ -74,9 +75,9 @@ public IndicesQueryCache(Settings settings) { logger.debug("using [node] query cache with size [{}] max filter count [{}]", size, count); if (INDICES_QUERIES_CACHE_ALL_SEGMENTS_SETTING.get(settings)) { - cache = new ElasticsearchLRUQueryCache(count, size.bytes(), context -> true); + cache = new ElasticsearchLRUQueryCache(count, size.getBytes(), context -> true); } else { - cache = new ElasticsearchLRUQueryCache(count, size.bytes()); + cache = new ElasticsearchLRUQueryCache(count, size.getBytes()); } sharedRamBytesUsed = 0; } @@ -102,7 +103,7 @@ public QueryCacheStats getStats(ShardId shard) { } final double weight = totalSize == 0 ? 1d / stats.size() - : shardStats.getCacheSize() / totalSize; + : ((double) shardStats.getCacheSize()) / totalSize; final long additionalRamBytesUsed = Math.round(weight * sharedRamBytesUsed); shardStats.add(new QueryCacheStats(additionalRamBytesUsed, 0, 0, 0, 0)); return shardStats; @@ -155,6 +156,12 @@ public Scorer scorer(LeafReaderContext context) throws IOException { return in.scorer(context); } + @Override + public ScorerSupplier scorerSupplier(LeafReaderContext context) throws IOException { + shardKeyMap.add(context.reader()); + return in.scorerSupplier(context); + } + @Override public BulkScorer bulkScorer(LeafReaderContext context) throws IOException { shardKeyMap.add(context.reader()); diff --git a/core/src/main/java/org/elasticsearch/indices/IndicesRequestCache.java b/core/src/main/java/org/elasticsearch/indices/IndicesRequestCache.java index f78ccb22c9df1..e7bd76fb34d09 100644 --- a/core/src/main/java/org/elasticsearch/indices/IndicesRequestCache.java +++ b/core/src/main/java/org/elasticsearch/indices/IndicesRequestCache.java @@ -41,13 +41,12 @@ import org.elasticsearch.common.util.concurrent.ConcurrentCollections; import java.io.Closeable; -import java.io.IOException; import java.util.Collection; import java.util.Collections; import java.util.Iterator; import java.util.Set; import java.util.concurrent.ConcurrentMap; -import java.util.concurrent.TimeUnit; +import java.util.function.Supplier; /** * The indices request cache allows to cache a shard level request stage responses, helping with improving @@ -63,7 +62,7 @@ * is functional. */ public final class IndicesRequestCache extends AbstractComponent implements RemovalListener, Closeable { + BytesReference>, Closeable { /** * A setting to enable or disable request caching on an index level. Its dynamic by default @@ -80,17 +79,17 @@ public final class IndicesRequestCache extends AbstractComponent implements Remo private final Set keysToClean = ConcurrentCollections.newConcurrentSet(); private final ByteSizeValue size; private final TimeValue expire; - private final Cache cache; + private final Cache cache; IndicesRequestCache(Settings settings) { super(settings); this.size = INDICES_CACHE_QUERY_SIZE.get(settings); this.expire = INDICES_CACHE_QUERY_EXPIRE.exists(settings) ? 
INDICES_CACHE_QUERY_EXPIRE.get(settings) : null; - long sizeInBytes = size.bytes(); - CacheBuilder cacheBuilder = CacheBuilder.builder() + long sizeInBytes = size.getBytes(); + CacheBuilder cacheBuilder = CacheBuilder.builder() .setMaximumWeight(sizeInBytes).weigher((k, v) -> k.ramBytesUsed() + v.ramBytesUsed()).removalListener(this); if (expire != null) { - cacheBuilder.setExpireAfterAccess(TimeUnit.MILLISECONDS.toNanos(expire.millis())); + cacheBuilder.setExpireAfterAccess(expire); } cache = cacheBuilder.build(); } @@ -106,15 +105,16 @@ void clear(CacheEntity entity) { } @Override - public void onRemoval(RemovalNotification notification) { + public void onRemoval(RemovalNotification notification) { notification.getKey().entity.onRemoval(notification); } - BytesReference getOrCompute(CacheEntity cacheEntity, DirectoryReader reader, BytesReference cacheKey) throws Exception { + BytesReference getOrCompute(CacheEntity cacheEntity, Supplier loader, + DirectoryReader reader, BytesReference cacheKey) throws Exception { final Key key = new Key(cacheEntity, reader.getVersion(), cacheKey); - Loader loader = new Loader(cacheEntity); - Value value = cache.computeIfAbsent(key, loader); - if (loader.isLoaded()) { + Loader cacheLoader = new Loader(cacheEntity, loader); + BytesReference value = cache.computeIfAbsent(key, cacheLoader); + if (cacheLoader.isLoaded()) { key.entity.onMiss(); // see if its the first time we see this reader, and make sure to register a cleanup key CleanupKey cleanupKey = new CleanupKey(cacheEntity, reader.getVersion()); @@ -127,16 +127,28 @@ BytesReference getOrCompute(CacheEntity cacheEntity, DirectoryReader reader, Byt } else { key.entity.onHit(); } - return value.reference; + return value; } - private static class Loader implements CacheLoader { + /** + * Invalidates the given the cache entry for the given key and it's context + * @param cacheEntity the cache entity to invalidate for + * @param reader the reader to invalidate the cache entry for + * @param cacheKey the cache key to invalidate + */ + void invalidate(CacheEntity cacheEntity, DirectoryReader reader, BytesReference cacheKey) { + cache.invalidate(new Key(cacheEntity, reader.getVersion(), cacheKey)); + } + + private static class Loader implements CacheLoader { private final CacheEntity entity; + private final Supplier loader; private boolean loaded; - Loader(CacheEntity entity) { + Loader(CacheEntity entity, Supplier loader) { this.entity = entity; + this.loader = loader; } public boolean isLoaded() { @@ -144,8 +156,8 @@ public boolean isLoaded() { } @Override - public Value load(Key key) throws Exception { - Value value = entity.loadValue(); + public BytesReference load(Key key) throws Exception { + BytesReference value = loader.get(); entity.onCached(key, value); loaded = true; return value; @@ -155,16 +167,12 @@ public Value load(Key key) throws Exception { /** * Basic interface to make this cache testable. */ - interface CacheEntity { - /** - * Loads the actual cache value. this is the heavy lifting part. - */ - Value loadValue() throws IOException; + interface CacheEntity extends Accountable { /** - * Called after the value was loaded via {@link #loadValue()} + * Called after the value was loaded. */ - void onCached(Key key, Value value); + void onCached(Key key, BytesReference value); /** * Returns true iff the resource behind this entity is still open ie. 
@@ -191,32 +199,12 @@ interface CacheEntity { /** * Called when this entity instance is removed */ - void onRemoval(RemovalNotification notification); - } - - - - static class Value implements Accountable { - final BytesReference reference; - final long ramBytesUsed; - - Value(BytesReference reference, long ramBytesUsed) { - this.reference = reference; - this.ramBytesUsed = ramBytesUsed; - } - - @Override - public long ramBytesUsed() { - return ramBytesUsed; - } - - @Override - public Collection getChildResources() { - return Collections.emptyList(); - } + void onRemoval(RemovalNotification notification); } static class Key implements Accountable { + private static final long BASE_RAM_BYTES_USED = RamUsageEstimator.shallowSizeOfInstance(Key.class); + public final CacheEntity entity; // use as identity equality public final long readerVersion; // use the reader version to now keep a reference to a "short" lived reader until its reaped public final BytesReference value; @@ -229,7 +217,7 @@ static class Key implements Accountable { @Override public long ramBytesUsed() { - return RamUsageEstimator.NUM_BYTES_OBJECT_REF + Long.BYTES + value.length(); + return BASE_RAM_BYTES_USED + entity.ramBytesUsed() + value.length(); } @Override diff --git a/core/src/main/java/org/elasticsearch/indices/IndicesService.java b/core/src/main/java/org/elasticsearch/indices/IndicesService.java index abc9873efaf88..9468723ef93d6 100644 --- a/core/src/main/java/org/elasticsearch/indices/IndicesService.java +++ b/core/src/main/java/org/elasticsearch/indices/IndicesService.java @@ -19,15 +19,16 @@ package org.elasticsearch.indices; -import com.carrotsearch.hppc.cursors.ObjectCursor; import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; import org.apache.lucene.index.DirectoryReader; +import org.apache.lucene.store.AlreadyClosedException; import org.apache.lucene.store.LockObtainFailedException; import org.apache.lucene.util.CollectionUtil; import org.apache.lucene.util.IOUtils; +import org.apache.lucene.util.RamUsageEstimator; import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.ResourceAlreadyExistsException; import org.elasticsearch.action.admin.indices.stats.CommonStats; import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags; import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags.Flag; @@ -35,22 +36,25 @@ import org.elasticsearch.action.admin.indices.stats.ShardStats; import org.elasticsearch.action.fieldstats.FieldStats; import org.elasticsearch.action.search.SearchType; +import org.elasticsearch.client.Client; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.metadata.MappingMetaData; import org.elasticsearch.cluster.routing.RecoverySource; import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.breaker.CircuitBreaker; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.component.AbstractLifecycleComponent; import org.elasticsearch.common.io.FileSystemUtils; +import org.elasticsearch.common.io.stream.BytesStreamOutput; import 
org.elasticsearch.common.io.stream.NamedWriteableAwareStreamInput; import org.elasticsearch.common.io.stream.NamedWriteableRegistry; import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.IndexScopedSettings; @@ -59,9 +63,12 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.util.Callback; +import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.util.concurrent.EsExecutors; import org.elasticsearch.common.util.iterable.Iterables; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.env.NodeEnvironment; import org.elasticsearch.env.ShardLock; import org.elasticsearch.env.ShardLockObtainFailedException; @@ -72,7 +79,6 @@ import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.index.IndexService; import org.elasticsearch.index.IndexSettings; -import org.elasticsearch.index.NodeServicesProvider; import org.elasticsearch.index.analysis.AnalysisRegistry; import org.elasticsearch.index.cache.request.ShardRequestCache; import org.elasticsearch.index.engine.Engine; @@ -82,6 +88,8 @@ import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.merge.MergeStats; +import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.index.recovery.RecoveryStats; import org.elasticsearch.index.refresh.RefreshStats; import org.elasticsearch.index.search.stats.SearchStats; @@ -93,16 +101,16 @@ import org.elasticsearch.index.shard.IndexingStats; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.store.IndexStoreConfig; -import org.elasticsearch.indices.AbstractIndexShardCacheEntity.Loader; import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.indices.cluster.IndicesClusterStateService; import org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache; import org.elasticsearch.indices.mapper.MapperRegistry; -import org.elasticsearch.indices.query.IndicesQueriesRegistry; import org.elasticsearch.indices.recovery.PeerRecoveryTargetService; import org.elasticsearch.indices.recovery.RecoveryState; import org.elasticsearch.plugins.PluginsService; import org.elasticsearch.repositories.RepositoriesService; +import org.elasticsearch.script.ScriptService; +import org.elasticsearch.search.internal.AliasFilter; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.internal.ShardSearchRequest; import org.elasticsearch.search.query.QueryPhase; @@ -113,12 +121,11 @@ import java.io.IOException; import java.nio.file.Files; import java.util.ArrayList; -import java.util.Collections; -import java.util.EnumSet; import java.util.HashMap; import java.util.Iterator; import java.util.List; import java.util.Map; +import java.util.Optional; import java.util.Set; import java.util.concurrent.CountDownLatch; import java.util.concurrent.ExecutorService; @@ -126,9 +133,12 @@ import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; 
import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.Consumer; import java.util.function.Predicate; +import java.util.function.Supplier; import java.util.stream.Collectors; +import static java.util.Collections.emptyList; import static java.util.Collections.emptyMap; import static java.util.Collections.unmodifiableMap; import static org.elasticsearch.common.collect.MapBuilder.newMapBuilder; @@ -142,15 +152,19 @@ public class IndicesService extends AbstractLifecycleComponent Setting.positiveTimeSetting("indices.cache.cleanup_interval", TimeValue.timeValueMinutes(1), Property.NodeScope); private final PluginsService pluginsService; private final NodeEnvironment nodeEnv; + private final NamedXContentRegistry xContentRegistry; private final TimeValue shardsClosedTimeout; private final AnalysisRegistry analysisRegistry; - private final IndicesQueriesRegistry indicesQueriesRegistry; private final IndexNameExpressionResolver indexNameExpressionResolver; private final IndexScopedSettings indexScopeSetting; private final IndicesFieldDataCache indicesFieldDataCache; private final CacheCleaner cacheCleaner; private final ThreadPool threadPool; private final CircuitBreakerService circuitBreakerService; + private final BigArrays bigArrays; + private final ScriptService scriptService; + private final ClusterService clusterService; + private final Client client; private volatile Map indices = emptyMap(); private final Map> pendingDeletes = new HashMap<>(); private final AtomicInteger numUncompletedDeletes = new AtomicInteger(); @@ -170,20 +184,21 @@ protected void doStart() { threadPool.schedule(this.cleanInterval, ThreadPool.Names.SAME, this.cacheCleaner); } - public IndicesService(Settings settings, PluginsService pluginsService, NodeEnvironment nodeEnv, + public IndicesService(Settings settings, PluginsService pluginsService, NodeEnvironment nodeEnv, NamedXContentRegistry xContentRegistry, ClusterSettings clusterSettings, AnalysisRegistry analysisRegistry, - IndicesQueriesRegistry indicesQueriesRegistry, IndexNameExpressionResolver indexNameExpressionResolver, + IndexNameExpressionResolver indexNameExpressionResolver, MapperRegistry mapperRegistry, NamedWriteableRegistry namedWriteableRegistry, ThreadPool threadPool, IndexScopedSettings indexScopedSettings, CircuitBreakerService circuitBreakerService, + BigArrays bigArrays, ScriptService scriptService, ClusterService clusterService, Client client, MetaStateService metaStateService) { super(settings); this.threadPool = threadPool; this.pluginsService = pluginsService; this.nodeEnv = nodeEnv; + this.xContentRegistry = xContentRegistry; this.shardsClosedTimeout = settings.getAsTime(INDICES_SHARDS_CLOSED_TIMEOUT, new TimeValue(1, TimeUnit.DAYS)); this.indexStoreConfig = new IndexStoreConfig(settings); this.analysisRegistry = analysisRegistry; - this.indicesQueriesRegistry = indicesQueriesRegistry; this.indexNameExpressionResolver = indexNameExpressionResolver; this.indicesRequestCache = new IndicesRequestCache(settings); this.indicesQueryCache = new IndicesQueryCache(settings); @@ -196,6 +211,10 @@ public IndicesService(Settings settings, PluginsService pluginsService, NodeEnvi () -> Iterables.flatten(this).iterator()); this.indexScopeSetting = indexScopedSettings; this.circuitBreakerService = circuitBreakerService; + this.bigArrays = bigArrays; + this.scriptService = scriptService; + this.clusterService = clusterService; + this.client = client; this.indicesFieldDataCache = new IndicesFieldDataCache(settings, new 
IndexFieldDataCache.Listener() { @Override public void onRemoval(ShardId shardId, String fieldName, boolean wasEvicted, long sizeInBytes) { @@ -218,9 +237,7 @@ protected void doStop() { for (final Index index : indices) { indicesStopExecutor.execute(() -> { try { - removeIndex(index, "shutdown", false); - } catch (Exception e) { - logger.warn((Supplier) () -> new ParameterizedMessage("failed to remove index on stop [{}]", index), e); + removeIndex(index, IndexRemovalReason.NO_LONGER_ASSIGNED, "shutdown"); } finally { latch.countDown(); } @@ -283,26 +300,48 @@ public NodeIndicesStats stats(boolean includePrevious, CommonStatsFlags flags) { } } - Map> statsByShard = new HashMap<>(); - for (IndexService indexService : this) { - for (IndexShard indexShard : indexService) { + return new NodeIndicesStats(oldStats, statsByShard(this, flags)); + } + + Map> statsByShard(final IndicesService indicesService, final CommonStatsFlags flags) { + final Map> statsByShard = new HashMap<>(); + + for (final IndexService indexService : indicesService) { + for (final IndexShard indexShard : indexService) { try { - if (indexShard.routingEntry() == null) { + final IndexShardStats indexShardStats = indicesService.indexShardStats(indicesService, indexShard, flags); + + if (indexShardStats == null) { continue; } - IndexShardStats indexShardStats = new IndexShardStats(indexShard.shardId(), new ShardStats[] { new ShardStats(indexShard.routingEntry(), indexShard.shardPath(), new CommonStats(indicesQueryCache, indexShard, flags), indexShard.commitStats()) }); - if (!statsByShard.containsKey(indexService.index())) { + + if (statsByShard.containsKey(indexService.index()) == false) { statsByShard.put(indexService.index(), arrayAsArrayList(indexShardStats)); } else { statsByShard.get(indexService.index()).add(indexShardStats); } - } catch (IllegalIndexShardStateException e) { + } catch (IllegalIndexShardStateException | AlreadyClosedException e) { // we can safely ignore illegal state on ones that are closing for example logger.trace((Supplier) () -> new ParameterizedMessage("{} ignoring shard stats", indexShard.shardId()), e); } } } - return new NodeIndicesStats(oldStats, statsByShard); + + return statsByShard; + } + + IndexShardStats indexShardStats(final IndicesService indicesService, final IndexShard indexShard, final CommonStatsFlags flags) { + if (indexShard.routingEntry() == null) { + return null; + } + + return new IndexShardStats(indexShard.shardId(), + new ShardStats[] { + new ShardStats(indexShard.routingEntry(), + indexShard.shardPath(), + new CommonStats(indicesService.getIndicesQueryCache(), indexShard, flags), + indexShard.commitStats()) + }); } /** @@ -350,17 +389,17 @@ public IndexService indexServiceSafe(Index index) { * Creates a new {@link IndexService} for the given metadata. * @param indexMetaData the index metadata to create the index for * @param builtInListeners a list of built-in lifecycle {@link IndexEventListener} that should should be used along side with the per-index listeners - * @throws IndexAlreadyExistsException if the index already exists. + * @throws ResourceAlreadyExistsException if the index already exists. 
*/ @Override - public synchronized IndexService createIndex(final NodeServicesProvider nodeServicesProvider, IndexMetaData indexMetaData, List builtInListeners) throws IOException { + public synchronized IndexService createIndex(IndexMetaData indexMetaData, List builtInListeners) throws IOException { ensureChangesAllowed(); if (indexMetaData.getIndexUUID().equals(IndexMetaData.INDEX_UUID_NA_VALUE)) { throw new IllegalArgumentException("index must have a real UUID found value: [" + indexMetaData.getIndexUUID() + "]"); } final Index index = indexMetaData.getIndex(); if (hasIndex(index)) { - throw new IndexAlreadyExistsException(index); + throw new ResourceAlreadyExistsException(index); } List finalListeners = new ArrayList<>(builtInListeners); final IndexEventListener onStoreClose = new IndexEventListener() { @@ -371,7 +410,7 @@ public void onStoreClosed(ShardId shardId) { }; finalListeners.add(onStoreClose); finalListeners.add(oldShardsStats); - final IndexService indexService = createIndexService("create index", nodeServicesProvider, indexMetaData, indicesQueryCache, indicesFieldDataCache, finalListeners, indexingMemoryController); + final IndexService indexService = createIndexService("create index", indexMetaData, indicesQueryCache, indicesFieldDataCache, finalListeners, indexingMemoryController); boolean success = false; try { indexService.getIndexEventListener().afterIndexCreated(indexService); @@ -388,9 +427,8 @@ public void onStoreClosed(ShardId shardId) { /** * This creates a new IndexService without registering it */ - private synchronized IndexService createIndexService(final String reason, final NodeServicesProvider nodeServicesProvider, IndexMetaData indexMetaData, IndicesQueryCache indicesQueryCache, IndicesFieldDataCache indicesFieldDataCache, List builtInListeners, IndexingOperationListener... indexingOperationListeners) throws IOException { + private synchronized IndexService createIndexService(final String reason, IndexMetaData indexMetaData, IndicesQueryCache indicesQueryCache, IndicesFieldDataCache indicesFieldDataCache, List builtInListeners, IndexingOperationListener... indexingOperationListeners) throws IOException { final Index index = indexMetaData.getIndex(); - final ClusterService clusterService = nodeServicesProvider.getClusterService(); final Predicate indexNameMatcher = (indexExpression) -> indexNameExpressionResolver.matchesIndex(index.getName(), indexExpression, clusterService.state()); final IndexSettings idxSettings = new IndexSettings(indexMetaData, this.settings, indexNameMatcher, indexScopeSetting); logger.debug("creating Index [{}], shards [{}]/[{}{}] - reason [{}]", @@ -407,7 +445,23 @@ private synchronized IndexService createIndexService(final String reason, final for (IndexEventListener listener : builtInListeners) { indexModule.addIndexEventListener(listener); } - return indexModule.newIndexService(nodeEnv, this, nodeServicesProvider, indicesQueryCache, mapperRegistry, indicesFieldDataCache); + return indexModule.newIndexService(nodeEnv, xContentRegistry, this, circuitBreakerService, bigArrays, threadPool, scriptService, + clusterService, client, indicesQueryCache, mapperRegistry, indicesFieldDataCache); + } + + /** + * creates a new mapper service for the given index, in order to do administrative work like mapping updates. + * This *should not* be used for document parsing. Doing so will result in an exception. + * + * Note: the returned {@link MapperService} should be closed when unneeded. 
+ */ + public synchronized MapperService createIndexMapperService(IndexMetaData indexMetaData) throws IOException { + final Index index = indexMetaData.getIndex(); + final Predicate indexNameMatcher = (indexExpression) -> indexNameExpressionResolver.matchesIndex(index.getName(), indexExpression, clusterService.state()); + final IndexSettings idxSettings = new IndexSettings(indexMetaData, this.settings, indexNameMatcher, indexScopeSetting); + final IndexModule indexModule = new IndexModule(idxSettings, indexStoreConfig, analysisRegistry); + pluginsService.onIndexModule(indexModule); + return indexModule.newIndexMapperService(xContentRegistry, mapperRegistry); } /** @@ -416,7 +470,7 @@ private synchronized IndexService createIndexService(final String reason, final * This method will throw an exception if the creation or the update fails. * The created {@link IndexService} will not be registered and will be closed immediately. */ - public synchronized void verifyIndexMetadata(final NodeServicesProvider nodeServicesProvider, IndexMetaData metaData, IndexMetaData metaDataUpdate) throws IOException { + public synchronized void verifyIndexMetadata(IndexMetaData metaData, IndexMetaData metaDataUpdate) throws IOException { final List closeables = new ArrayList<>(); try { IndicesFieldDataCache indicesFieldDataCache = new IndicesFieldDataCache(settings, new IndexFieldDataCache.Listener() {}); @@ -424,14 +478,10 @@ public synchronized void verifyIndexMetadata(final NodeServicesProvider nodeServ IndicesQueryCache indicesQueryCache = new IndicesQueryCache(settings); closeables.add(indicesQueryCache); // this will also fail if some plugin fails etc. which is nice since we can verify that early - final IndexService service = createIndexService("metadata verification", nodeServicesProvider, - metaData, indicesQueryCache, indicesFieldDataCache, Collections.emptyList()); + final IndexService service = createIndexService("metadata verification", metaData, indicesQueryCache, indicesFieldDataCache, + emptyList()); closeables.add(() -> service.close("metadata verification", false)); - for (ObjectCursor typeMapping : metaData.getMappings().values()) { - // don't apply the default mapping, it has been applied when the mapping was created - service.mapperService().merge(typeMapping.value.type(), typeMapping.value.source(), - MapperService.MergeReason.MAPPING_RECOVERY, true); - } + service.mapperService().merge(metaData, MapperService.MergeReason.MAPPING_RECOVERY, true); if (metaData.equals(metaDataUpdate) == false) { service.updateMetaData(metaDataUpdate); } @@ -443,7 +493,7 @@ public synchronized void verifyIndexMetadata(final NodeServicesProvider nodeServ @Override public IndexShard createShard(ShardRouting shardRouting, RecoveryState recoveryState, PeerRecoveryTargetService recoveryTargetService, PeerRecoveryTargetService.RecoveryListener recoveryListener, RepositoriesService repositoriesService, - NodeServicesProvider nodeServicesProvider, Callback onShardFailure) throws IOException { + Consumer onShardFailure) throws IOException { ensureChangesAllowed(); IndexService indexService = indexService(shardRouting.index()); IndexShard indexShard = indexService.createShard(shardRouting); @@ -453,7 +503,7 @@ public IndexShard createShard(ShardRouting shardRouting, RecoveryState recoveryS assert recoveryState.getRecoverySource().getType() == RecoverySource.Type.LOCAL_SHARDS: "mapping update consumer only required by local shards recovery"; try { - nodeServicesProvider.getClient().admin().indices().preparePutMapping() + 
client.admin().indices().preparePutMapping() .setConcreteIndex(shardRouting.index()) // concrete index - no name clash, it uses uuid .setType(type) .setSource(mapping.source().string()) @@ -465,22 +515,8 @@ public IndexShard createShard(ShardRouting shardRouting, RecoveryState recoveryS return indexShard; } - /** - * Removes the given index from this service and releases all associated resources. Persistent parts of the index - * like the shards files, state and transaction logs are kept around in the case of a disaster recovery. - * @param index the index to remove - * @param reason the high level reason causing this removal - */ @Override - public void removeIndex(Index index, String reason) { - try { - removeIndex(index, reason, false); - } catch (Exception e) { - logger.warn((Supplier) () -> new ParameterizedMessage("failed to remove index ({})", reason), e); - } - } - - private void removeIndex(Index index, String reason, boolean delete) { + public void removeIndex(final Index index, final IndexRemovalReason reason, final String extraInfo) { final String indexName = index.getName(); try { final IndexService indexService; @@ -498,22 +534,18 @@ private void removeIndex(Index index, String reason, boolean delete) { listener = indexService.getIndexEventListener(); } - listener.beforeIndexClosed(indexService); - if (delete) { - listener.beforeIndexDeleted(indexService); - } - logger.debug("{} closing index service (reason [{}])", index, reason); - indexService.close(reason, delete); - logger.debug("{} closed... (reason [{}])", index, reason); - listener.afterIndexClosed(indexService.index(), indexService.getIndexSettings().getSettings()); - if (delete) { - final IndexSettings indexSettings = indexService.getIndexSettings(); - listener.afterIndexDeleted(indexService.index(), indexSettings.getSettings()); + listener.beforeIndexRemoved(indexService, reason); + logger.debug("{} closing index service (reason [{}][{}])", index, reason, extraInfo); + indexService.close(extraInfo, reason == IndexRemovalReason.DELETED); + logger.debug("{} closed... (reason [{}][{}])", index, reason, extraInfo); + final IndexSettings indexSettings = indexService.getIndexSettings(); + listener.afterIndexRemoved(indexService.index(), indexSettings, reason); + if (reason == IndexRemovalReason.DELETED) { // now we are done - try to wipe data on disk if possible - deleteIndexStore(reason, indexService.index(), indexSettings); + deleteIndexStore(extraInfo, indexService.index(), indexSettings); } - } catch (IOException ex) { - throw new ElasticsearchException("failed to remove index " + index, ex); + } catch (Exception e) { + logger.warn((Supplier) () -> new ParameterizedMessage("failed to remove index {} ([{}][{}])", index, reason, extraInfo), e); } } @@ -553,27 +585,9 @@ public synchronized void beforeIndexShardClosed(ShardId shardId, @Nullable Index } } - /** - * Deletes the given index. Persistent parts of the index - * like the shards files, state and transaction logs are removed once all resources are released. - * - * Equivalent to {@link #removeIndex(Index, String)} but fires - * different lifecycle events to ensure pending resources of this index are immediately removed. 
- * @param index the index to delete - * @param reason the high level reason causing this delete - */ - @Override - public void deleteIndex(Index index, String reason) { - try { - removeIndex(index, reason, true); - } catch (Exception e) { - logger.warn((Supplier) () -> new ParameterizedMessage("failed to delete index ({})", reason), e); - } - } - /** * Deletes an index that is not assigned to this node. This method cleans up all disk folders relating to the index - * but does not deal with in-memory structures. For those call {@link #deleteIndex(Index, String)} + * but does not deal with in-memory structures. For those call {@link #removeIndex(Index, IndexRemovalReason, String)} */ @Override public void deleteUnassignedIndex(String reason, IndexMetaData metaData, ClusterState clusterState) { @@ -586,7 +600,7 @@ public void deleteUnassignedIndex(String reason, IndexMetaData metaData, Cluster "the cluster state [" + index.getIndexUUID() + "] [" + metaData.getIndexUUID() + "]"); } deleteIndexStore(reason, metaData, clusterState); - } catch (IOException e) { + } catch (Exception e) { logger.warn((Supplier) () -> new ParameterizedMessage("[{}] failed to delete unassigned index (reason [{}])", metaData.getIndex(), reason), e); } } @@ -683,8 +697,9 @@ public void deleteShardStore(String reason, ShardId shardId, ClusterState cluste final IndexMetaData metaData = clusterState.getMetaData().indices().get(shardId.getIndexName()); final IndexSettings indexSettings = buildIndexSettings(metaData); - if (canDeleteShardContent(shardId, indexSettings) == false) { - throw new IllegalStateException("Can't delete shard " + shardId); + ShardDeletionCheckResult shardDeletionCheckResult = canDeleteShardContent(shardId, indexSettings); + if (shardDeletionCheckResult != ShardDeletionCheckResult.FOLDER_FOUND_CAN_DELETE) { + throw new IllegalStateException("Can't delete shard " + shardId + " (cause: " + shardDeletionCheckResult + ")"); } nodeEnv.deleteShardDirectorySafe(shardId, indexSettings); logger.debug("{} deleted shard reason [{}]", shardId, reason); @@ -747,14 +762,14 @@ public IndexMetaData verifyIndexIsDeleted(final Index index, final ClusterState final IndexMetaData metaData; try { metaData = metaStateService.loadIndexState(index); - } catch (IOException e) { + } catch (Exception e) { logger.warn((Supplier) () -> new ParameterizedMessage("[{}] failed to load state file from a stale deleted index, folders will be left on disk", index), e); return null; } final IndexSettings indexSettings = buildIndexSettings(metaData); try { deleteIndexStoreIfDeletionAllowed("stale deleted index", index, indexSettings, ALWAYS_TRUE); - } catch (IOException e) { + } catch (Exception e) { // we just warn about the exception here because if deleteIndexStoreIfDeletionAllowed // throws an exception, it gets added to the list of pending deletes to be tried again logger.warn((Supplier) () -> new ParameterizedMessage("[{}] failed to delete index on disk", metaData.getIndex()), e); @@ -765,39 +780,50 @@ public IndexMetaData verifyIndexIsDeleted(final Index index, final ClusterState } /** - * Returns true iff the shards content for the given shard can be deleted. - * This method will return false if: - *
- * <ul>
- *     <li>if the shard is still allocated / active on this node</li>
- *     <li>if for instance if the shard is located on shared and should not be deleted</li>
- *     <li>if the shards data locations do not exists</li>
- * </ul>
    + * result type returned by {@link #canDeleteShardContent signaling different reasons why a shard can / cannot be deleted} + */ + public enum ShardDeletionCheckResult { + FOLDER_FOUND_CAN_DELETE, // shard data exists and can be deleted + STILL_ALLOCATED, // the shard is still allocated / active on this node + NO_FOLDER_FOUND, // the shards data locations do not exist + SHARED_FILE_SYSTEM, // the shard is located on shared and should not be deleted + NO_LOCAL_STORAGE // node does not have local storage (see DiscoveryNode.nodeRequiresLocalStorage) + } + + /** + * Returns ShardDeletionCheckResult signaling whether the shards content for the given shard can be deleted. * * @param shardId the shard to delete. * @param indexSettings the shards's relevant {@link IndexSettings}. This is required to access the indexes settings etc. */ - public boolean canDeleteShardContent(ShardId shardId, IndexSettings indexSettings) { + public ShardDeletionCheckResult canDeleteShardContent(ShardId shardId, IndexSettings indexSettings) { assert shardId.getIndex().equals(indexSettings.getIndex()); final IndexService indexService = indexService(shardId.getIndex()); if (indexSettings.isOnSharedFilesystem() == false) { if (nodeEnv.hasNodeFile()) { final boolean isAllocated = indexService != null && indexService.hasShard(shardId.id()); if (isAllocated) { - return false; // we are allocated - can't delete the shard + return ShardDeletionCheckResult.STILL_ALLOCATED; // we are allocated - can't delete the shard } else if (indexSettings.hasCustomDataPath()) { // lets see if it's on a custom path (return false if the shared doesn't exist) // we don't need to delete anything that is not there - return Files.exists(nodeEnv.resolveCustomLocation(indexSettings, shardId)); + return Files.exists(nodeEnv.resolveCustomLocation(indexSettings, shardId)) ? + ShardDeletionCheckResult.FOLDER_FOUND_CAN_DELETE : + ShardDeletionCheckResult.NO_FOLDER_FOUND; } else { // lets see if it's path is available (return false if the shared doesn't exist) // we don't need to delete anything that is not there - return FileSystemUtils.exists(nodeEnv.availableShardPaths(shardId)); + return FileSystemUtils.exists(nodeEnv.availableShardPaths(shardId)) ? 
+ ShardDeletionCheckResult.FOLDER_FOUND_CAN_DELETE : + ShardDeletionCheckResult.NO_FOLDER_FOUND; } - } + } else { + return ShardDeletionCheckResult.NO_LOCAL_STORAGE; + } } else { logger.trace("{} skipping shard directory deletion due to shadow replicas", shardId); + return ShardDeletionCheckResult.SHARED_FILE_SYSTEM; } - return false; } private IndexSettings buildIndexSettings(IndexMetaData metaData) { @@ -851,7 +877,7 @@ private static final class PendingDelete implements Comparable { /** * Creates a new pending delete of an index */ - public PendingDelete(ShardId shardId, IndexSettings settings) { + PendingDelete(ShardId shardId, IndexSettings settings) { this.index = shardId.getIndex(); this.shardId = shardId.getId(); this.settings = settings; @@ -861,7 +887,7 @@ public PendingDelete(ShardId shardId, IndexSettings settings) { /** * Creates a new pending delete of a shard */ - public PendingDelete(Index index, IndexSettings settings) { + PendingDelete(Index index, IndexSettings settings) { this.index = index; this.shardId = -1; this.settings = settings; @@ -982,13 +1008,6 @@ public boolean hasUncompletedPendingDeletes() { return numUncompletedDeletes.get() > 0; } - /** - * Returns this nodes {@link IndicesQueriesRegistry} - */ - public IndicesQueriesRegistry getIndicesQueryRegistry() { - return indicesQueriesRegistry; - } - public AnalysisRegistry getAnalysis() { return analysisRegistry; } @@ -1008,7 +1027,7 @@ private static final class CacheCleaner implements Runnable, Releasable { private final AtomicBoolean closed = new AtomicBoolean(false); private final IndicesRequestCache requestCache; - public CacheCleaner(IndicesFieldDataCache cache, IndicesRequestCache requestCache, Logger logger, ThreadPool threadPool, TimeValue interval) { + CacheCleaner(IndicesFieldDataCache cache, IndicesRequestCache requestCache, Logger logger, ThreadPool threadPool, TimeValue interval) { this.cache = cache; this.requestCache = requestCache; this.logger = logger; @@ -1049,8 +1068,6 @@ public void close() { } - private static final Set CACHEABLE_SEARCH_TYPES = EnumSet.of(SearchType.QUERY_THEN_FETCH, SearchType.QUERY_AND_FETCH); - /** * Can the shard request be cached at all? */ @@ -1060,7 +1077,7 @@ public boolean canCache(ShardSearchRequest request, SearchContext context) { // on the overridden statistics. 
So if you ran two queries on the same index with different stats // (because an other shard was updated) you would get wrong results because of the scores // (think about top_hits aggs or scripts using the score) - if (!CACHEABLE_SEARCH_TYPES.contains(context.searchType())) { + if (SearchType.QUERY_THEN_FETCH != context.searchType()) { return false; } IndexSettings settings = context.indexShard().indexSettings(); @@ -1082,7 +1099,7 @@ public boolean canCache(ShardSearchRequest request, SearchContext context) { } // if now in millis is used (or in the future, a more generic "isDeterministic" flag // then we can't cache based on "now" key within the search request, as it is not deterministic - if (context.nowInMillisUsed()) { + if (context.getQueryShardContext().isCachable() == false) { return false; } return true; @@ -1093,7 +1110,7 @@ public void clearRequestCache(IndexShard shard) { if (shard == null) { return; } - indicesRequestCache.clear(new IndexShardCacheEntity(shard, null)); + indicesRequestCache.clear(new IndexShardCacheEntity(shard)); logger.trace("{} explicit cache clear", shard.shardId()); } @@ -1105,18 +1122,35 @@ public void clearRequestCache(IndexShard shard) { */ public void loadIntoContext(ShardSearchRequest request, SearchContext context, QueryPhase queryPhase) throws Exception { assert canCache(request, context); - final IndexShardCacheEntity entity = new IndexShardCacheEntity(context.indexShard(), out -> { + final DirectoryReader directoryReader = context.searcher().getDirectoryReader(); + + boolean[] loadedFromCache = new boolean[] { true }; + BytesReference bytesReference = cacheShardLevelResult(context.indexShard(), directoryReader, request.cacheKey(), out -> { queryPhase.execute(context); - context.queryResult().writeToNoId(out); + try { + context.queryResult().writeToNoId(out); + + } catch (IOException e) { + throw new AssertionError("Could not serialize response", e); + } + loadedFromCache[0] = false; }); - final DirectoryReader directoryReader = context.searcher().getDirectoryReader(); - final BytesReference bytesReference = indicesRequestCache.getOrCompute(entity, directoryReader, request.cacheKey()); - if (entity.loadedFromCache()) { + + if (loadedFromCache[0]) { // restore the cached query result into the context final QuerySearchResult result = context.queryResult(); StreamInput in = new NamedWriteableAwareStreamInput(bytesReference.streamInput(), namedWriteableRegistry); result.readFromWithId(context.id(), in); - result.shardTarget(context.shardTarget()); + result.setSearchShardTarget(context.shardTarget()); + } else if (context.queryResult().searchTimedOut()) { + // we have to invalidate the cache entry if we cached a query result form a request that timed out. + // we can't really throw exceptions in the loading part to signal a timed out search to the outside world since if there are + // multiple requests that wait for the cache entry to be calculated they'd fail all with the same exception. + // instead we all caching such a result for the time being, return the timed out result for all other searches with that cache + // key invalidate the result in the thread that caused the timeout. This will end up to be simpler and eventually correct since + // running a search that times out concurrently will likely timeout again if it's run while we have this `stale` result in the + // cache. One other option is to not cache requests with a timeout at all... 
+ indicesRequestCache.invalidate(new IndexShardCacheEntity(context.indexShard()), directoryReader, request.cacheKey()); } } @@ -1137,7 +1171,11 @@ public FieldStats getFieldStats(IndexShard shard, Engine.Searcher searcher, S } BytesReference cacheKey = new BytesArray("fieldstats:" + field); BytesReference statsRef = cacheShardLevelResult(shard, searcher.getDirectoryReader(), cacheKey, out -> { - out.writeOptionalWriteable(fieldType.stats(searcher.reader())); + try { + out.writeOptionalWriteable(fieldType.stats(searcher.reader())); + } catch (IOException e) { + throw new IllegalStateException("Failed to write field stats output", e); + } }); try (StreamInput in = statsRef.streamInput()) { return in.readOptionalWriteable(FieldStats::readFrom); @@ -1156,17 +1194,33 @@ public ByteSizeValue getTotalIndexingBufferBytes() { * @param loader loads the data into the cache if needed * @return the contents of the cache or the result of calling the loader */ - private BytesReference cacheShardLevelResult(IndexShard shard, DirectoryReader reader, BytesReference cacheKey, Loader loader) + private BytesReference cacheShardLevelResult(IndexShard shard, DirectoryReader reader, BytesReference cacheKey, Consumer loader) throws Exception { - IndexShardCacheEntity cacheEntity = new IndexShardCacheEntity(shard, loader); - return indicesRequestCache.getOrCompute(cacheEntity, reader, cacheKey); + IndexShardCacheEntity cacheEntity = new IndexShardCacheEntity(shard); + Supplier supplier = () -> { + /* BytesStreamOutput allows to pass the expected size but by default uses + * BigArrays.PAGE_SIZE_IN_BYTES which is 16k. A common cached result ie. + * a date histogram with 3 buckets is ~100byte so 16k might be very wasteful + * since we don't shrink to the actual size once we are done serializing. + * By passing 512 as the expected size we will resize the byte array in the stream + * slowly until we hit the page size and don't waste too much memory for small query + * results.*/ + final int expectedSizeInBytes = 512; + try (BytesStreamOutput out = new BytesStreamOutput(expectedSizeInBytes)) { + loader.accept(out); + // for now, keep the paged data structure, which might have unused bytes to fill a page, but better to keep + // the memory properly paged instead of having varied sized bytes + return out.bytes(); + } + }; + return indicesRequestCache.getOrCompute(cacheEntity, supplier, reader, cacheKey); } static final class IndexShardCacheEntity extends AbstractIndexShardCacheEntity { + private static final long BASE_RAM_BYTES_USED = RamUsageEstimator.shallowSizeOfInstance(IndexShardCacheEntity.class); private final IndexShard indexShard; - protected IndexShardCacheEntity(IndexShard indexShard, Loader loader) { - super(loader); + protected IndexShardCacheEntity(IndexShard indexShard) { this.indexShard = indexShard; } @@ -1184,6 +1238,13 @@ public boolean isOpen() { public Object getCacheIdentity() { return indexShard; } + + @Override + public long ramBytesUsed() { + // No need to take the IndexShard into account since it is shared + // across many entities + return BASE_RAM_BYTES_USED; + } } @FunctionalInterface @@ -1195,4 +1256,17 @@ interface IndexDeletionAllowedPredicate { (Index index, IndexSettings indexSettings) -> canDeleteIndexContents(index, indexSettings); private final IndexDeletionAllowedPredicate ALWAYS_TRUE = (Index index, IndexSettings indexSettings) -> true; + public AliasFilter buildAliasFilter(ClusterState state, String index, String... 
expressions) { + /* Being static, parseAliasFilter doesn't have access to whatever guts it needs to parse a query. Instead of passing in a bunch + * of dependencies we pass in a function that can perform the parsing. */ + CheckedFunction, IOException> filterParser = bytes -> { + try (XContentParser parser = XContentFactory.xContent(bytes).createParser(xContentRegistry, bytes)) { + return new QueryParseContext(parser).parseInnerQueryBuilder(); + } + }; + String[] aliases = indexNameExpressionResolver.filteringAliases(state, index, expressions); + IndexMetaData indexMetaData = state.metaData().index(index); + return new AliasFilter(ShardSearchRequest.parseAliasFilter(filterParser, indexMetaData, aliases), aliases); + } + } diff --git a/core/src/main/java/org/elasticsearch/indices/InvalidAliasNameException.java b/core/src/main/java/org/elasticsearch/indices/InvalidAliasNameException.java index 4e2c443ff4a4e..84215d45f6895 100644 --- a/core/src/main/java/org/elasticsearch/indices/InvalidAliasNameException.java +++ b/core/src/main/java/org/elasticsearch/indices/InvalidAliasNameException.java @@ -36,6 +36,10 @@ public InvalidAliasNameException(Index index, String name, String desc) { setIndex(index); } + public InvalidAliasNameException(String name, String description) { + super("Invalid alias name [{}]: {}", name, description); + } + public InvalidAliasNameException(StreamInput in) throws IOException{ super(in); } diff --git a/core/src/main/java/org/elasticsearch/indices/NodeIndicesStats.java b/core/src/main/java/org/elasticsearch/indices/NodeIndicesStats.java index 6c251d3bf1ce0..9133ca81e2837 100644 --- a/core/src/main/java/org/elasticsearch/indices/NodeIndicesStats.java +++ b/core/src/main/java/org/elasticsearch/indices/NodeIndicesStats.java @@ -188,10 +188,11 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - String level = params.param("level", "node"); - boolean isLevelValid = "node".equalsIgnoreCase(level) || "indices".equalsIgnoreCase(level) || "shards".equalsIgnoreCase(level); + final String level = params.param("level", "node"); + final boolean isLevelValid = + "indices".equalsIgnoreCase(level) || "node".equalsIgnoreCase(level) || "shards".equalsIgnoreCase(level); if (!isLevelValid) { - return builder; + throw new IllegalArgumentException("level parameter must be one of [indices] or [node] or [shards] but was [" + level + "]"); } // "node" level diff --git a/core/src/main/java/org/elasticsearch/indices/analysis/AnalysisModule.java b/core/src/main/java/org/elasticsearch/indices/analysis/AnalysisModule.java index 5dd0203d61720..61950942e6076 100644 --- a/core/src/main/java/org/elasticsearch/indices/analysis/AnalysisModule.java +++ b/core/src/main/java/org/elasticsearch/indices/analysis/AnalysisModule.java @@ -60,6 +60,7 @@ import org.elasticsearch.index.analysis.FingerprintAnalyzerProvider; import org.elasticsearch.index.analysis.FingerprintTokenFilterFactory; import org.elasticsearch.index.analysis.FinnishAnalyzerProvider; +import org.elasticsearch.index.analysis.FlattenGraphTokenFilterFactory; import org.elasticsearch.index.analysis.FrenchAnalyzerProvider; import org.elasticsearch.index.analysis.FrenchStemTokenFilterFactory; import org.elasticsearch.index.analysis.GalicianAnalyzerProvider; @@ -139,6 +140,7 @@ import org.elasticsearch.index.analysis.UpperCaseTokenFilterFactory; import org.elasticsearch.index.analysis.WhitespaceAnalyzerProvider; import 
org.elasticsearch.index.analysis.WhitespaceTokenizerFactory; +import org.elasticsearch.index.analysis.WordDelimiterGraphTokenFilterFactory; import org.elasticsearch.index.analysis.WordDelimiterTokenFilterFactory; import org.elasticsearch.index.analysis.compound.DictionaryCompoundWordTokenFilterFactory; import org.elasticsearch.index.analysis.compound.HyphenationCompoundWordTokenFilterFactory; @@ -152,13 +154,12 @@ */ public final class AnalysisModule { static { - Settings build = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) - .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1) - .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1) - .build(); + Settings build = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).put(IndexMetaData + .SETTING_NUMBER_OF_REPLICAS, 1).put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1).build(); IndexMetaData metaData = IndexMetaData.builder("_na_").settings(build).build(); NA_INDEX_SETTINGS = new IndexSettings(metaData, Settings.EMPTY); } + private static final IndexSettings NA_INDEX_SETTINGS; private final HunspellService hunspellService; @@ -171,8 +172,9 @@ public AnalysisModule(Environment environment, List plugins) thr NamedRegistry> tokenFilters = setupTokenFilters(plugins, hunspellService); NamedRegistry> tokenizers = setupTokenizers(plugins); NamedRegistry>> analyzers = setupAnalyzers(plugins); - analysisRegistry = new AnalysisRegistry(environment, charFilters.getRegistry(), tokenFilters.getRegistry(), - tokenizers.getRegistry(), analyzers.getRegistry()); + NamedRegistry>> normalizers = setupNormalizers(plugins); + analysisRegistry = new AnalysisRegistry(environment, charFilters.getRegistry(), tokenFilters.getRegistry(), tokenizers + .getRegistry(), analyzers.getRegistry(), normalizers.getRegistry()); } HunspellService getHunspellService() { @@ -198,8 +200,8 @@ public NamedRegistry setupHunspe return hunspellDictionaries; } - private NamedRegistry> setupTokenFilters(List plugins, - HunspellService hunspellService) { + private NamedRegistry> setupTokenFilters(List plugins, HunspellService + hunspellService) { NamedRegistry> tokenFilters = new NamedRegistry<>("token_filter"); tokenFilters.register("stop", StopTokenFilterFactory::new); tokenFilters.register("reverse", ReverseTokenFilterFactory::new); @@ -224,8 +226,10 @@ private NamedRegistry> setupTokenFilters(Li tokenFilters.register("snowball", SnowballTokenFilterFactory::new); tokenFilters.register("stemmer", StemmerTokenFilterFactory::new); tokenFilters.register("word_delimiter", WordDelimiterTokenFilterFactory::new); + tokenFilters.register("word_delimiter_graph", WordDelimiterGraphTokenFilterFactory::new); tokenFilters.register("delimited_payload_filter", DelimitedPayloadTokenFilterFactory::new); tokenFilters.register("elision", ElisionTokenFilterFactory::new); + tokenFilters.register("flatten_graph", FlattenGraphTokenFilterFactory::new); tokenFilters.register("keep", requriesAnalysisSettings(KeepWordFilterFactory::new)); tokenFilters.register("keep_types", requriesAnalysisSettings(KeepTypesFilterFactory::new)); tokenFilters.register("pattern_capture", requriesAnalysisSettings(PatternCaptureGroupTokenFilterFactory::new)); @@ -251,8 +255,8 @@ private NamedRegistry> setupTokenFilters(Li tokenFilters.register("scandinavian_folding", ScandinavianFoldingFilterFactory::new); tokenFilters.register("serbian_normalization", SerbianNormalizationFilterFactory::new); - tokenFilters.register("hunspell", requriesAnalysisSettings( - (indexSettings, env, name, 
settings) -> new HunspellTokenFilterFactory(indexSettings, name, settings, hunspellService))); + tokenFilters.register("hunspell", requriesAnalysisSettings((indexSettings, env, name, settings) -> new HunspellTokenFilterFactory + (indexSettings, name, settings, hunspellService))); tokenFilters.register("cjk_bigram", CJKBigramFilterFactory::new); tokenFilters.register("cjk_width", CJKWidthFilterFactory::new); @@ -335,12 +339,20 @@ private NamedRegistry>> setupAnalyzers(List return analyzers; } + private NamedRegistry>> setupNormalizers(List plugins) { + NamedRegistry>> normalizers = new NamedRegistry<>("normalizer"); + // TODO: provide built-in normalizer providers? + // TODO: pluggability? + return normalizers; + } + private static AnalysisModule.AnalysisProvider requriesAnalysisSettings(AnalysisModule.AnalysisProvider provider) { return new AnalysisModule.AnalysisProvider() { @Override public T get(IndexSettings indexSettings, Environment environment, String name, Settings settings) throws IOException { return provider.get(indexSettings, environment, name, settings); } + @Override public boolean requiresAnalysisSettings() { return true; @@ -355,10 +367,11 @@ public interface AnalysisProvider { /** * Creates a new analysis provider. + * * @param indexSettings the index settings for the index this provider is created for - * @param environment the nodes environment to load resources from persistent storage - * @param name the name of the analysis component - * @param settings the component specific settings without context prefixes + * @param environment the nodes environment to load resources from persistent storage + * @param name the name of the analysis component + * @param settings the component specific settings without context prefixes * @return a new provider instance * @throws IOException if an {@link IOException} occurs */ @@ -369,11 +382,11 @@ public interface AnalysisProvider { * This can be used to get a default instance of an analysis factory without binding to an index. * * @param environment the nodes environment to load resources from persistent storage - * @param name the name of the analysis component + * @param name the name of the analysis component * @return a new provider instance - * @throws IOException if an {@link IOException} occurs + * @throws IOException if an {@link IOException} occurs * @throws IllegalArgumentException if the provider requires analysis settings ie. 
if {@link #requiresAnalysisSettings()} returns - * true + * true */ default T get(Environment environment, String name) throws IOException { if (requiresAnalysisSettings()) { diff --git a/core/src/main/java/org/elasticsearch/indices/analysis/PreBuiltTokenFilters.java b/core/src/main/java/org/elasticsearch/indices/analysis/PreBuiltTokenFilters.java index a31f60fc5bdba..cf048f61baacc 100644 --- a/core/src/main/java/org/elasticsearch/indices/analysis/PreBuiltTokenFilters.java +++ b/core/src/main/java/org/elasticsearch/indices/analysis/PreBuiltTokenFilters.java @@ -42,6 +42,7 @@ import org.apache.lucene.analysis.hi.HindiNormalizationFilter; import org.apache.lucene.analysis.in.IndicNormalizationFilter; import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter; +import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute; import org.apache.lucene.analysis.miscellaneous.KeywordRepeatFilter; import org.apache.lucene.analysis.miscellaneous.LengthFilter; import org.apache.lucene.analysis.miscellaneous.LimitTokenCountFilter; @@ -51,6 +52,7 @@ import org.apache.lucene.analysis.miscellaneous.TruncateTokenFilter; import org.apache.lucene.analysis.miscellaneous.UniqueTokenFilter; import org.apache.lucene.analysis.miscellaneous.WordDelimiterFilter; +import org.apache.lucene.analysis.miscellaneous.WordDelimiterGraphFilter; import org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter; import org.apache.lucene.analysis.ngram.NGramTokenFilter; import org.apache.lucene.analysis.payloads.DelimitedPayloadTokenFilter; @@ -65,6 +67,7 @@ import org.elasticsearch.Version; import org.elasticsearch.index.analysis.DelimitedPayloadTokenFilterFactory; import org.elasticsearch.index.analysis.LimitTokenCountFilterFactory; +import org.elasticsearch.index.analysis.MultiTermAwareComponent; import org.elasticsearch.index.analysis.TokenFilterFactory; import org.elasticsearch.indices.analysis.PreBuiltCacheFactory.CachingStrategy; import org.tartarus.snowball.ext.DutchStemmer; @@ -89,6 +92,18 @@ public TokenStream create(TokenStream tokenStream, Version version) { } }, + WORD_DELIMITER_GRAPH(CachingStrategy.ONE) { + @Override + public TokenStream create(TokenStream tokenStream, Version version) { + return new WordDelimiterGraphFilter(tokenStream, + WordDelimiterGraphFilter.GENERATE_WORD_PARTS | + WordDelimiterGraphFilter.GENERATE_NUMBER_PARTS | + WordDelimiterGraphFilter.SPLIT_ON_CASE_CHANGE | + WordDelimiterGraphFilter.SPLIT_ON_NUMERICS | + WordDelimiterGraphFilter.STEM_ENGLISH_POSSESSIVE, null); + } + }, + STOP(CachingStrategy.LUCENE) { @Override public TokenStream create(TokenStream tokenStream, Version version) { @@ -115,6 +130,10 @@ public TokenStream create(TokenStream tokenStream, Version version) { public TokenStream create(TokenStream tokenStream, Version version) { return new ASCIIFoldingFilter(tokenStream); } + @Override + protected boolean isMultiTermAware() { + return true; + } }, LENGTH(CachingStrategy.LUCENE) { @@ -136,6 +155,10 @@ public TokenStream create(TokenStream tokenStream, Version version) { public TokenStream create(TokenStream tokenStream, Version version) { return new LowerCaseFilter(tokenStream); } + @Override + protected boolean isMultiTermAware() { + return true; + } }, UPPERCASE(CachingStrategy.LUCENE) { @@ -143,6 +166,10 @@ public TokenStream create(TokenStream tokenStream, Version version) { public TokenStream create(TokenStream tokenStream, Version version) { return new UpperCaseFilter(tokenStream); } + @Override + protected boolean isMultiTermAware() { + return true; + } }, 
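The isMultiTermAware() overrides added to these pre-built filters pair with the getTokenFilterFactory() change further down in this file: each enum constant overrides a protected capability flag, and the factory wraps the constant in a marker interface only when the flag is set. Below is a minimal, self-contained sketch of that pattern; the types and names (Filter, MultiTermAware, PreBuiltFilter) are illustrative stand-ins, not the actual Elasticsearch classes.

```java
import java.util.Locale;

interface Filter {
    String name();
}

interface MultiTermAware {
    Object getMultiTermComponent();
}

/** Marker interface, analogous to the MultiTermAwareTokenFilterFactory introduced in this change. */
interface MultiTermAwareFilter extends Filter, MultiTermAware {}

enum PreBuiltFilter {
    LOWERCASE {
        @Override
        protected boolean isMultiTermAware() {
            return true; // safe to apply to prefix/wildcard query terms
        }
    },
    SHINGLE; // keeps the default: not multi-term aware

    /** Default capability flag; individual constants may override it. */
    protected boolean isMultiTermAware() {
        return false;
    }

    /** Builds a factory whose runtime type advertises the capability. */
    public Filter getFactory() {
        final String finalName = name().toLowerCase(Locale.ROOT);
        if (isMultiTermAware()) {
            return new MultiTermAwareFilter() {
                @Override
                public String name() {
                    return finalName;
                }

                @Override
                public Object getMultiTermComponent() {
                    return this;
                }
            };
        }
        return () -> finalName;
    }

    public static void main(String[] args) {
        System.out.println(LOWERCASE.getFactory() instanceof MultiTermAware); // true
        System.out.println(SHINGLE.getFactory() instanceof MultiTermAware);   // false
    }
}
```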
KSTEM(CachingStrategy.ONE) { @@ -221,6 +248,10 @@ public TokenStream create(TokenStream tokenStream, Version version) { public TokenStream create(TokenStream tokenStream, Version version) { return new ElisionFilter(tokenStream, FrenchAnalyzer.DEFAULT_ARTICLES); } + @Override + protected boolean isMultiTermAware() { + return true; + } }, ARABIC_STEM(CachingStrategy.ONE) { @@ -284,6 +315,10 @@ public TokenStream create(TokenStream tokenStream, Version version) { public TokenStream create(TokenStream tokenStream, Version version) { return new ArabicNormalizationFilter(tokenStream); } + @Override + protected boolean isMultiTermAware() { + return true; + } }, PERSIAN_NORMALIZATION(CachingStrategy.ONE) { @@ -291,6 +326,10 @@ public TokenStream create(TokenStream tokenStream, Version version) { public TokenStream create(TokenStream tokenStream, Version version) { return new PersianNormalizationFilter(tokenStream); } + @Override + protected boolean isMultiTermAware() { + return true; + } }, TYPE_AS_PAYLOAD(CachingStrategy.ONE) { @@ -303,7 +342,15 @@ public TokenStream create(TokenStream tokenStream, Version version) { SHINGLE(CachingStrategy.ONE) { @Override public TokenStream create(TokenStream tokenStream, Version version) { - return new ShingleFilter(tokenStream); + final TokenStream filter = new ShingleFilter(tokenStream); + /** + * We disable the graph analysis on this token stream + * because it produces shingles of different size. + * Graph analysis on such token stream is useless and dangerous as it may create too many paths + * since shingles of different size are not aligned in terms of positions. + */ + filter.addAttribute(DisableGraphAttribute.class); + return filter; } }, @@ -312,6 +359,10 @@ public TokenStream create(TokenStream tokenStream, Version version) { public TokenStream create(TokenStream tokenStream, Version version) { return new GermanNormalizationFilter(tokenStream); } + @Override + protected boolean isMultiTermAware() { + return true; + } }, HINDI_NORMALIZATION(CachingStrategy.ONE) { @@ -319,6 +370,10 @@ public TokenStream create(TokenStream tokenStream, Version version) { public TokenStream create(TokenStream tokenStream, Version version) { return new HindiNormalizationFilter(tokenStream); } + @Override + protected boolean isMultiTermAware() { + return true; + } }, INDIC_NORMALIZATION(CachingStrategy.ONE) { @@ -326,6 +381,10 @@ public TokenStream create(TokenStream tokenStream, Version version) { public TokenStream create(TokenStream tokenStream, Version version) { return new IndicNormalizationFilter(tokenStream); } + @Override + protected boolean isMultiTermAware() { + return true; + } }, SORANI_NORMALIZATION(CachingStrategy.ONE) { @@ -333,6 +392,10 @@ public TokenStream create(TokenStream tokenStream, Version version) { public TokenStream create(TokenStream tokenStream, Version version) { return new SoraniNormalizationFilter(tokenStream); } + @Override + protected boolean isMultiTermAware() { + return true; + } }, SCANDINAVIAN_NORMALIZATION(CachingStrategy.ONE) { @@ -340,6 +403,10 @@ public TokenStream create(TokenStream tokenStream, Version version) { public TokenStream create(TokenStream tokenStream, Version version) { return new ScandinavianNormalizationFilter(tokenStream); } + @Override + protected boolean isMultiTermAware() { + return true; + } }, SCANDINAVIAN_FOLDING(CachingStrategy.ONE) { @@ -347,6 +414,10 @@ public TokenStream create(TokenStream tokenStream, Version version) { public TokenStream create(TokenStream tokenStream, Version version) { return new 
ScandinavianFoldingFilter(tokenStream); } + @Override + protected boolean isMultiTermAware() { + return true; + } }, APOSTROPHE(CachingStrategy.ONE) { @@ -361,6 +432,10 @@ public TokenStream create(TokenStream tokenStream, Version version) { public TokenStream create(TokenStream tokenStream, Version version) { return new CJKWidthFilter(tokenStream); } + @Override + protected boolean isMultiTermAware() { + return true; + } }, DECIMAL_DIGIT(CachingStrategy.ONE) { @@ -368,6 +443,10 @@ public TokenStream create(TokenStream tokenStream, Version version) { public TokenStream create(TokenStream tokenStream, Version version) { return new DecimalDigitFilter(tokenStream); } + @Override + protected boolean isMultiTermAware() { + return true; + } }, CJK_BIGRAM(CachingStrategy.ONE) { @@ -389,11 +468,15 @@ public TokenStream create(TokenStream tokenStream, Version version) { public TokenStream create(TokenStream tokenStream, Version version) { return new LimitTokenCountFilter(tokenStream, LimitTokenCountFilterFactory.DEFAULT_MAX_TOKEN_COUNT, LimitTokenCountFilterFactory.DEFAULT_CONSUME_ALL_TOKENS); } - } + }, ; - public abstract TokenStream create(TokenStream tokenStream, Version version); + protected boolean isMultiTermAware() { + return false; + } + + public abstract TokenStream create(TokenStream tokenStream, Version version); protected final PreBuiltCacheFactory.PreBuiltCache cache; @@ -402,21 +485,42 @@ public TokenStream create(TokenStream tokenStream, Version version) { cache = PreBuiltCacheFactory.getCache(cachingStrategy); } + private interface MultiTermAwareTokenFilterFactory extends TokenFilterFactory, MultiTermAwareComponent {} + public synchronized TokenFilterFactory getTokenFilterFactory(final Version version) { TokenFilterFactory factory = cache.get(version); if (factory == null) { - final String finalName = name(); - factory = new TokenFilterFactory() { - @Override - public String name() { - return finalName.toLowerCase(Locale.ROOT); - } - - @Override - public TokenStream create(TokenStream tokenStream) { - return valueOf(finalName).create(tokenStream, version); - } - }; + final String finalName = name().toLowerCase(Locale.ROOT); + if (isMultiTermAware()) { + factory = new MultiTermAwareTokenFilterFactory() { + @Override + public String name() { + return finalName; + } + + @Override + public TokenStream create(TokenStream tokenStream) { + return PreBuiltTokenFilters.this.create(tokenStream, version); + } + + @Override + public Object getMultiTermComponent() { + return this; + } + }; + } else { + factory = new TokenFilterFactory() { + @Override + public String name() { + return finalName; + } + + @Override + public TokenStream create(TokenStream tokenStream) { + return PreBuiltTokenFilters.this.create(tokenStream, version); + } + }; + } cache.put(version, factory); } diff --git a/core/src/main/java/org/elasticsearch/indices/analysis/PreBuiltTokenizers.java b/core/src/main/java/org/elasticsearch/indices/analysis/PreBuiltTokenizers.java index 424b5e4534f29..a979e8f1eb8a6 100644 --- a/core/src/main/java/org/elasticsearch/indices/analysis/PreBuiltTokenizers.java +++ b/core/src/main/java/org/elasticsearch/indices/analysis/PreBuiltTokenizers.java @@ -33,6 +33,8 @@ import org.apache.lucene.analysis.th.ThaiTokenizer; import org.elasticsearch.Version; import org.elasticsearch.common.regex.Regex; +import org.elasticsearch.index.analysis.MultiTermAwareComponent; +import org.elasticsearch.index.analysis.TokenFilterFactory; import org.elasticsearch.index.analysis.TokenizerFactory; import 
org.elasticsearch.indices.analysis.PreBuiltCacheFactory.CachingStrategy; @@ -90,6 +92,10 @@ protected Tokenizer create(Version version) { protected Tokenizer create(Version version) { return new LowerCaseTokenizer(); } + @Override + protected TokenFilterFactory getMultiTermComponent(Version version) { + return PreBuiltTokenFilters.LOWERCASE.getTokenFilterFactory(version); + } }, WHITESPACE(CachingStrategy.LUCENE) { @@ -131,6 +137,10 @@ protected Tokenizer create(Version version) { protected abstract Tokenizer create(Version version); + protected TokenFilterFactory getMultiTermComponent(Version version) { + return null; + } + protected final PreBuiltCacheFactory.PreBuiltCache cache; @@ -138,22 +148,42 @@ protected Tokenizer create(Version version) { cache = PreBuiltCacheFactory.getCache(cachingStrategy); } + private interface MultiTermAwareTokenizerFactory extends TokenizerFactory, MultiTermAwareComponent {} + public synchronized TokenizerFactory getTokenizerFactory(final Version version) { TokenizerFactory tokenizerFactory = cache.get(version); if (tokenizerFactory == null) { - final String finalName = name(); - - tokenizerFactory = new TokenizerFactory() { - @Override - public String name() { - return finalName.toLowerCase(Locale.ROOT); - } - - @Override - public Tokenizer create() { - return valueOf(finalName).create(version); - } - }; + final String finalName = name().toLowerCase(Locale.ROOT); + if (getMultiTermComponent(version) != null) { + tokenizerFactory = new MultiTermAwareTokenizerFactory() { + @Override + public String name() { + return finalName; + } + + @Override + public Tokenizer create() { + return PreBuiltTokenizers.this.create(version); + } + + @Override + public Object getMultiTermComponent() { + return PreBuiltTokenizers.this.getMultiTermComponent(version); + } + }; + } else { + tokenizerFactory = new TokenizerFactory() { + @Override + public String name() { + return finalName; + } + + @Override + public Tokenizer create() { + return PreBuiltTokenizers.this.create(version); + } + }; + } cache.put(version, tokenizerFactory); } diff --git a/core/src/main/java/org/elasticsearch/indices/breaker/HierarchyCircuitBreakerService.java b/core/src/main/java/org/elasticsearch/indices/breaker/HierarchyCircuitBreakerService.java index 715cf47a6efb4..b8ec92ba15374 100644 --- a/core/src/main/java/org/elasticsearch/indices/breaker/HierarchyCircuitBreakerService.java +++ b/core/src/main/java/org/elasticsearch/indices/breaker/HierarchyCircuitBreakerService.java @@ -81,25 +81,25 @@ public class HierarchyCircuitBreakerService extends CircuitBreakerService { public HierarchyCircuitBreakerService(Settings settings, ClusterSettings clusterSettings) { super(settings); this.fielddataSettings = new BreakerSettings(CircuitBreaker.FIELDDATA, - FIELDDATA_CIRCUIT_BREAKER_LIMIT_SETTING.get(settings).bytes(), + FIELDDATA_CIRCUIT_BREAKER_LIMIT_SETTING.get(settings).getBytes(), FIELDDATA_CIRCUIT_BREAKER_OVERHEAD_SETTING.get(settings), FIELDDATA_CIRCUIT_BREAKER_TYPE_SETTING.get(settings) ); this.inFlightRequestsSettings = new BreakerSettings(CircuitBreaker.IN_FLIGHT_REQUESTS, - IN_FLIGHT_REQUESTS_CIRCUIT_BREAKER_LIMIT_SETTING.get(settings).bytes(), + IN_FLIGHT_REQUESTS_CIRCUIT_BREAKER_LIMIT_SETTING.get(settings).getBytes(), IN_FLIGHT_REQUESTS_CIRCUIT_BREAKER_OVERHEAD_SETTING.get(settings), IN_FLIGHT_REQUESTS_CIRCUIT_BREAKER_TYPE_SETTING.get(settings) ); this.requestSettings = new BreakerSettings(CircuitBreaker.REQUEST, - REQUEST_CIRCUIT_BREAKER_LIMIT_SETTING.get(settings).bytes(), + 
REQUEST_CIRCUIT_BREAKER_LIMIT_SETTING.get(settings).getBytes(), REQUEST_CIRCUIT_BREAKER_OVERHEAD_SETTING.get(settings), REQUEST_CIRCUIT_BREAKER_TYPE_SETTING.get(settings) ); this.parentSettings = new BreakerSettings(CircuitBreaker.PARENT, - TOTAL_CIRCUIT_BREAKER_LIMIT_SETTING.get(settings).bytes(), 1.0, + TOTAL_CIRCUIT_BREAKER_LIMIT_SETTING.get(settings).getBytes(), 1.0, CircuitBreaker.Type.PARENT); if (logger.isTraceEnabled()) { @@ -117,7 +117,7 @@ public HierarchyCircuitBreakerService(Settings settings, ClusterSettings cluster } private void setRequestBreakerLimit(ByteSizeValue newRequestMax, Double newRequestOverhead) { - BreakerSettings newRequestSettings = new BreakerSettings(CircuitBreaker.REQUEST, newRequestMax.bytes(), newRequestOverhead, + BreakerSettings newRequestSettings = new BreakerSettings(CircuitBreaker.REQUEST, newRequestMax.getBytes(), newRequestOverhead, HierarchyCircuitBreakerService.this.requestSettings.getType()); registerBreaker(newRequestSettings); HierarchyCircuitBreakerService.this.requestSettings = newRequestSettings; @@ -125,7 +125,7 @@ private void setRequestBreakerLimit(ByteSizeValue newRequestMax, Double newReque } private void setInFlightRequestsBreakerLimit(ByteSizeValue newInFlightRequestsMax, Double newInFlightRequestsOverhead) { - BreakerSettings newInFlightRequestsSettings = new BreakerSettings(CircuitBreaker.IN_FLIGHT_REQUESTS, newInFlightRequestsMax.bytes(), + BreakerSettings newInFlightRequestsSettings = new BreakerSettings(CircuitBreaker.IN_FLIGHT_REQUESTS, newInFlightRequestsMax.getBytes(), newInFlightRequestsOverhead, HierarchyCircuitBreakerService.this.inFlightRequestsSettings.getType()); registerBreaker(newInFlightRequestsSettings); HierarchyCircuitBreakerService.this.inFlightRequestsSettings = newInFlightRequestsSettings; @@ -133,7 +133,7 @@ private void setInFlightRequestsBreakerLimit(ByteSizeValue newInFlightRequestsMa } private void setFieldDataBreakerLimit(ByteSizeValue newFielddataMax, Double newFielddataOverhead) { - long newFielddataLimitBytes = newFielddataMax == null ? HierarchyCircuitBreakerService.this.fielddataSettings.getLimit() : newFielddataMax.bytes(); + long newFielddataLimitBytes = newFielddataMax == null ? HierarchyCircuitBreakerService.this.fielddataSettings.getLimit() : newFielddataMax.getBytes(); newFielddataOverhead = newFielddataOverhead == null ? 
HierarchyCircuitBreakerService.this.fielddataSettings.getOverhead() : newFielddataOverhead; BreakerSettings newFielddataSettings = new BreakerSettings(CircuitBreaker.FIELDDATA, newFielddataLimitBytes, newFielddataOverhead, HierarchyCircuitBreakerService.this.fielddataSettings.getType()); @@ -143,13 +143,13 @@ private void setFieldDataBreakerLimit(ByteSizeValue newFielddataMax, Double newF } private boolean validateTotalCircuitBreakerLimit(ByteSizeValue byteSizeValue) { - BreakerSettings newParentSettings = new BreakerSettings(CircuitBreaker.PARENT, byteSizeValue.bytes(), 1.0, CircuitBreaker.Type.PARENT); + BreakerSettings newParentSettings = new BreakerSettings(CircuitBreaker.PARENT, byteSizeValue.getBytes(), 1.0, CircuitBreaker.Type.PARENT); validateSettings(new BreakerSettings[]{newParentSettings}); return true; } private void setTotalCircuitBreakerLimit(ByteSizeValue byteSizeValue) { - BreakerSettings newParentSettings = new BreakerSettings(CircuitBreaker.PARENT, byteSizeValue.bytes(), 1.0, CircuitBreaker.Type.PARENT); + BreakerSettings newParentSettings = new BreakerSettings(CircuitBreaker.PARENT, byteSizeValue.getBytes(), 1.0, CircuitBreaker.Type.PARENT); this.parentSettings = newParentSettings; } @@ -177,7 +177,7 @@ public CircuitBreaker getBreaker(String name) { @Override public AllCircuitBreakerStats stats() { long parentEstimated = 0; - List allStats = new ArrayList<>(); + List allStats = new ArrayList<>(this.breakers.size()); // Gather the "estimated" count for the parent breaker by adding the // estimations for each individual breaker for (CircuitBreaker breaker : this.breakers.values()) { @@ -208,10 +208,11 @@ public void checkParentLimit(String label) throws CircuitBreakingException { long parentLimit = this.parentSettings.getLimit(); if (totalUsed > parentLimit) { this.parentTripCount.incrementAndGet(); - throw new CircuitBreakingException("[parent] Data too large, data for [" + - label + "] would be larger than limit of [" + - parentLimit + "/" + new ByteSizeValue(parentLimit) + "]", - totalUsed, parentLimit); + final String message = "[parent] Data too large, data for [" + label + "]" + + " would be [" + totalUsed + "/" + new ByteSizeValue(totalUsed) + "]" + + ", which is larger than the limit of [" + + parentLimit + "/" + new ByteSizeValue(parentLimit) + "]"; + throw new CircuitBreakingException(message, totalUsed, parentLimit); } } diff --git a/core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java b/core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java index e766f6ecefb64..76aff15a34e64 100644 --- a/core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java +++ b/core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java @@ -23,9 +23,10 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; import org.apache.lucene.store.LockObtainFailedException; +import org.elasticsearch.ResourceAlreadyExistsException; import org.elasticsearch.cluster.ClusterChangedEvent; import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.ClusterStateListener; +import org.elasticsearch.cluster.ClusterStateApplier; import org.elasticsearch.cluster.action.index.NodeMappingRefreshAction; import org.elasticsearch.cluster.action.shard.ShardStateAction; import org.elasticsearch.cluster.metadata.IndexMetaData; @@ -43,7 +44,6 @@ import org.elasticsearch.common.lucene.Lucene; import 
org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.util.Callback; import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.common.util.concurrent.ConcurrentCollections; import org.elasticsearch.env.ShardLockObtainFailedException; @@ -52,15 +52,12 @@ import org.elasticsearch.index.IndexComponent; import org.elasticsearch.index.IndexService; import org.elasticsearch.index.IndexSettings; -import org.elasticsearch.index.IndexShardAlreadyExistsException; -import org.elasticsearch.index.NodeServicesProvider; import org.elasticsearch.index.shard.IndexEventListener; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.IndexShardRelocatedException; import org.elasticsearch.index.shard.IndexShardState; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.shard.ShardNotFoundException; -import org.elasticsearch.indices.IndexAlreadyExistsException; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.indices.flush.SyncedFlushService; import org.elasticsearch.indices.recovery.PeerRecoverySourceService; @@ -70,6 +67,7 @@ import org.elasticsearch.repositories.RepositoriesService; import org.elasticsearch.search.SearchService; import org.elasticsearch.snapshots.RestoreService; +import org.elasticsearch.snapshots.SnapshotShardsService; import org.elasticsearch.threadpool.ThreadPool; import java.io.IOException; @@ -83,8 +81,14 @@ import java.util.Set; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.TimeUnit; +import java.util.function.Consumer; -public class IndicesClusterStateService extends AbstractLifecycleComponent implements ClusterStateListener { +import static org.elasticsearch.indices.cluster.IndicesClusterStateService.AllocatedIndices.IndexRemovalReason.CLOSED; +import static org.elasticsearch.indices.cluster.IndicesClusterStateService.AllocatedIndices.IndexRemovalReason.DELETED; +import static org.elasticsearch.indices.cluster.IndicesClusterStateService.AllocatedIndices.IndexRemovalReason.FAILURE; +import static org.elasticsearch.indices.cluster.IndicesClusterStateService.AllocatedIndices.IndexRemovalReason.NO_LONGER_ASSIGNED; + +public class IndicesClusterStateService extends AbstractLifecycleComponent implements ClusterStateApplier { final AllocatedIndices> indicesService; private final ClusterService clusterService; @@ -92,7 +96,6 @@ public class IndicesClusterStateService extends AbstractLifecycleComponent imple private final PeerRecoveryTargetService recoveryTargetService; private final ShardStateAction shardStateAction; private final NodeMappingRefreshAction nodeMappingRefreshAction; - private final NodeServicesProvider nodeServicesProvider; private static final ShardStateAction.Listener SHARD_STATE_ACTION_LISTENER = new ShardStateAction.Listener() { }; @@ -115,11 +118,11 @@ public IndicesClusterStateService(Settings settings, IndicesService indicesServi NodeMappingRefreshAction nodeMappingRefreshAction, RepositoriesService repositoriesService, RestoreService restoreService, SearchService searchService, SyncedFlushService syncedFlushService, - PeerRecoverySourceService peerRecoverySourceService, NodeServicesProvider nodeServicesProvider) { + PeerRecoverySourceService peerRecoverySourceService, SnapshotShardsService snapshotShardsService) { this(settings, (AllocatedIndices>) indicesService, clusterService, threadPool, recoveryTargetService, shardStateAction, nodeMappingRefreshAction, 
repositoriesService, restoreService, searchService, syncedFlushService, peerRecoverySourceService, - nodeServicesProvider); + snapshotShardsService); } // for tests @@ -131,9 +134,10 @@ public IndicesClusterStateService(Settings settings, IndicesService indicesServi NodeMappingRefreshAction nodeMappingRefreshAction, RepositoriesService repositoriesService, RestoreService restoreService, SearchService searchService, SyncedFlushService syncedFlushService, - PeerRecoverySourceService peerRecoverySourceService, NodeServicesProvider nodeServicesProvider) { + PeerRecoverySourceService peerRecoverySourceService, SnapshotShardsService snapshotShardsService) { super(settings); - this.buildInIndexListener = Arrays.asList(peerRecoverySourceService, recoveryTargetService, searchService, syncedFlushService); + this.buildInIndexListener = Arrays.asList(peerRecoverySourceService, recoveryTargetService, searchService, syncedFlushService, + snapshotShardsService); this.indicesService = indicesService; this.clusterService = clusterService; this.threadPool = threadPool; @@ -143,17 +147,21 @@ public IndicesClusterStateService(Settings settings, IndicesService indicesServi this.restoreService = restoreService; this.repositoriesService = repositoriesService; this.sendRefreshMapping = this.settings.getAsBoolean("indices.cluster.send_refresh_mapping", true); - this.nodeServicesProvider = nodeServicesProvider; } @Override protected void doStart() { - clusterService.addFirst(this); + // Doesn't make sense to manage shards on non-master and non-data nodes + if (DiscoveryNode.isDataNode(settings) || DiscoveryNode.isMasterNode(settings)) { + clusterService.addHighPriorityApplier(this); + } } @Override protected void doStop() { - clusterService.remove(this); + if (DiscoveryNode.isDataNode(settings) || DiscoveryNode.isMasterNode(settings)) { + clusterService.removeApplier(this); + } } @Override @@ -161,7 +169,7 @@ protected void doClose() { } @Override - public synchronized void clusterChanged(final ClusterChangedEvent event) { + public synchronized void applyClusterState(final ClusterChangedEvent event) { if (!lifecycle.started()) { return; } @@ -173,7 +181,8 @@ public synchronized void clusterChanged(final ClusterChangedEvent event) { // TODO: feels hacky, a block disables state persistence, and then we clean the allocated shards, maybe another flag in blocks? if (state.blocks().disableStatePersistence()) { for (AllocatedIndex indexService : indicesService) { - indicesService.removeIndex(indexService.index(), "cleaning index (disabled block persistence)"); // also cleans shards + indicesService.removeIndex(indexService.index(), NO_LONGER_ASSIGNED, + "cleaning index (disabled block persistence)"); // also cleans shards } return; } @@ -186,7 +195,7 @@ public synchronized void clusterChanged(final ClusterChangedEvent event) { failMissingShards(state); - removeShards(state); + removeShards(state); // removes any local shards that doesn't match what the master expects updateIndices(event); // can also fail shards, but these are then guaranteed to be in failedShardsCache @@ -221,7 +230,7 @@ private void updateFailedShardsCache(final ClusterState state) { if (masterNode != null) { // TODO: can we remove this? Is resending shard failures the responsibility of shardStateAction? String message = "master " + masterNode + " has not removed previously failed shard. 
resending shard failure"; logger.trace("[{}] re-sending failed shard [{}], reason [{}]", matchedRouting.shardId(), matchedRouting, message); - shardStateAction.localShardFailed(matchedRouting, message, null, SHARD_STATE_ACTION_LISTENER); + shardStateAction.localShardFailed(matchedRouting, message, null, SHARD_STATE_ACTION_LISTENER, state); } } } @@ -246,7 +255,7 @@ private void deleteIndices(final ClusterChangedEvent event) { final IndexSettings indexSettings; if (indexService != null) { indexSettings = indexService.getIndexSettings(); - indicesService.deleteIndex(index, "index no longer part of the metadata"); + indicesService.removeIndex(index, DELETED, "index no longer part of the metadata"); } else if (previousState.metaData().hasIndex(index.getName())) { // The deleted index was part of the previous cluster state, but not loaded on the local node final IndexMetaData metaData = previousState.metaData().index(index); @@ -320,11 +329,14 @@ private void removeUnallocatedIndices(final ClusterChangedEvent event) { // to remove the in-memory structures for the index and not delete the // contents on disk because the index will later be re-imported as a // dangling index - assert state.metaData().index(index) != null || event.isNewCluster() : + final IndexMetaData indexMetaData = state.metaData().index(index); + assert indexMetaData != null || event.isNewCluster() : "index " + index + " does not exist in the cluster state, it should either " + - "have been deleted or the cluster must be new"; - logger.debug("{} removing index, no shards allocated", index); - indicesService.removeIndex(index, "removing index (no shards allocated)"); + "have been deleted or the cluster must be new"; + final AllocatedIndices.IndexRemovalReason reason = + indexMetaData != null && indexMetaData.getState() == IndexMetaData.State.CLOSE ? CLOSED : NO_LONGER_ASSIGNED; + logger.debug("{} removing index, [{}]", index, reason); + indicesService.removeIndex(index, reason, "removing index (no shards allocated)"); } } } @@ -345,7 +357,8 @@ private void failMissingShards(final ClusterState state) { failedShardsCache.containsKey(shardId) == false && indicesService.getShardOrNull(shardId) == null) { // the master thinks we are active, but we don't have this shard at all, mark it as failed - sendFailShard(shardRouting, "master marked shard as active, but shard has not been created, mark shard as failed", null); + sendFailShard(shardRouting, "master marked shard as active, but shard has not been created, mark shard as failed", null, + state); } } } @@ -370,11 +383,21 @@ private void removeShards(final ClusterState state) { ShardRouting currentRoutingEntry = shard.routingEntry(); ShardId shardId = currentRoutingEntry.shardId(); ShardRouting newShardRouting = localRoutingNode == null ? 
null : localRoutingNode.getByShardId(shardId); - if (newShardRouting == null || newShardRouting.isSameAllocation(currentRoutingEntry) == false) { + if (newShardRouting == null) { // we can just remove the shard without cleaning it locally, since we will clean it in IndicesStore // once all shards are allocated logger.debug("{} removing shard (not allocated)", shardId); indexService.removeShard(shardId.id(), "removing shard (not allocated)"); + } else if (newShardRouting.isSameAllocation(currentRoutingEntry) == false) { + logger.debug("{} removing shard (stale allocation id, stale {}, new {})", shardId, + currentRoutingEntry, newShardRouting); + indexService.removeShard(shardId.id(), "removing shard (stale copy)"); + } else if (newShardRouting.initializing() && currentRoutingEntry.active()) { + // this can happen if the node was isolated/gc-ed, rejoins the cluster and a new shard with the same allocation id + // is assigned to it. Batch cluster state processing or if shard fetching completes before the node gets a new cluster + // state may result in a new shard being initialized while having the same allocation id as the currently started shard. + logger.debug("{} removing shard (not active, current {}, new {})", shardId, currentRoutingEntry, newShardRouting); + indexService.removeShard(shardId.id(), "removing shard (stale copy)"); } else { // remove shards where recovery source has changed. This re-initializes shards later in createOrUpdateShards if (newShardRouting.recoverySource() != null && newShardRouting.recoverySource().getType() == Type.PEER) { @@ -418,7 +441,7 @@ private void createIndices(final ClusterState state) { AllocatedIndex indexService = null; try { - indexService = indicesService.createIndex(nodeServicesProvider, indexMetaData, buildInIndexListener); + indexService = indicesService.createIndex(indexMetaData, buildInIndexListener); if (indexService.updateMapping(indexMetaData) && sendRefreshMapping) { nodeMappingRefreshAction.nodeMappingRefresh(state.nodes().getMasterNode(), new NodeMappingRefreshAction.NodeMappingRefreshRequest(indexMetaData.getIndex().getName(), @@ -431,10 +454,10 @@ private void createIndices(final ClusterState state) { failShardReason = "failed to create index"; } else { failShardReason = "failed to update mapping for index"; - indicesService.removeIndex(index, "removing index (mapping update failed)"); + indicesService.removeIndex(index, FAILURE, "removing index (mapping update failed)"); } for (ShardRouting shardRouting : entry.getValue()) { - sendFailShard(shardRouting, failShardReason, e); + sendFailShard(shardRouting, failShardReason, e, state); } } } @@ -460,14 +483,14 @@ private void updateIndices(ClusterChangedEvent event) { ); } } catch (Exception e) { - indicesService.removeIndex(indexService.index(), "removing index (mapping update failed)"); + indicesService.removeIndex(indexService.index(), FAILURE, "removing index (mapping update failed)"); // fail shards that would be created or updated by createOrUpdateShards RoutingNode localRoutingNode = state.getRoutingNodes().node(state.nodes().getLocalNodeId()); if (localRoutingNode != null) { for (final ShardRouting shardRouting : localRoutingNode) { if (shardRouting.index().equals(index) && failedShardsCache.containsKey(shardRouting.shardId()) == false) { - sendFailShard(shardRouting, "failed to update mapping for index", e); + sendFailShard(shardRouting, "failed to update mapping for index", e, state); } } } @@ -493,15 +516,15 @@ private void createOrUpdateShards(final ClusterState state) { 
Shard shard = indexService.getShardOrNull(shardId.id()); if (shard == null) { assert shardRouting.initializing() : shardRouting + " should have been removed by failMissingShards"; - createShard(nodes, routingTable, shardRouting); + createShard(nodes, routingTable, shardRouting, state); } else { - updateShard(nodes, shardRouting, shard); + updateShard(nodes, shardRouting, shard, routingTable, state); } } } } - private void createShard(DiscoveryNodes nodes, RoutingTable routingTable, ShardRouting shardRouting) { + private void createShard(DiscoveryNodes nodes, RoutingTable routingTable, ShardRouting shardRouting, ClusterState state) { assert shardRouting.initializing() : "only allow shard creation for initializing shard but was " + shardRouting; DiscoveryNode sourceNode = null; @@ -517,17 +540,14 @@ private void createShard(DiscoveryNodes nodes, RoutingTable routingTable, ShardR logger.debug("{} creating shard", shardRouting.shardId()); RecoveryState recoveryState = new RecoveryState(shardRouting, nodes.getLocalNode(), sourceNode); indicesService.createShard(shardRouting, recoveryState, recoveryTargetService, new RecoveryListener(shardRouting), - repositoriesService, nodeServicesProvider, failedShardHandler); - } catch (IndexShardAlreadyExistsException e) { - // ignore this, the method call can happen several times - logger.debug("Trying to create shard that already exists", e); - assert false; + repositoriesService, failedShardHandler); } catch (Exception e) { - failAndRemoveShard(shardRouting, true, "failed to create shard", e); + failAndRemoveShard(shardRouting, true, "failed to create shard", e, state); } } - private void updateShard(DiscoveryNodes nodes, ShardRouting shardRouting, Shard shard) { + private void updateShard(DiscoveryNodes nodes, ShardRouting shardRouting, Shard shard, RoutingTable routingTable, + ClusterState clusterState) { final ShardRouting currentRoutingEntry = shard.routingEntry(); assert currentRoutingEntry.isSameAllocation(shardRouting) : "local shard has a different allocation id but wasn't cleaning by removeShards. " @@ -536,7 +556,7 @@ private void updateShard(DiscoveryNodes nodes, ShardRouting shardRouting, Shard try { shard.updateRoutingEntry(shardRouting); } catch (Exception e) { - failAndRemoveShard(shardRouting, true, "failed updating shard routing entry", e); + failAndRemoveShard(shardRouting, true, "failed updating shard routing entry", e, clusterState); return; } @@ -551,16 +571,15 @@ private void updateShard(DiscoveryNodes nodes, ShardRouting shardRouting, Shard } if (nodes.getMasterNode() != null) { shardStateAction.shardStarted(shardRouting, "master " + nodes.getMasterNode() + - " marked shard as initializing, but shard state is [" + state + "], mark shard as started", - SHARD_STATE_ACTION_LISTENER); + " marked shard as initializing, but shard state is [" + state + "], mark shard as started", + SHARD_STATE_ACTION_LISTENER, clusterState); } } } /** * Finds the routing source node for peer recovery, return null if its not found. Note, this method expects the shard - * routing to *require* peer recovery, use {@link ShardRouting#recoverySource()} to - * check if its needed or not. + * routing to *require* peer recovery, use {@link ShardRouting#recoverySource()} to check if its needed or not. 
*/ private static DiscoveryNode findSourceNodeForPeerRecovery(Logger logger, RoutingTable routingTable, DiscoveryNodes nodes, ShardRouting shardRouting) { @@ -626,10 +645,11 @@ public void onRecoveryFailure(RecoveryState state, RecoveryFailedException e, bo } private synchronized void handleRecoveryFailure(ShardRouting shardRouting, boolean sendShardFailure, Exception failure) { - failAndRemoveShard(shardRouting, sendShardFailure, "failed recovery", failure); + failAndRemoveShard(shardRouting, sendShardFailure, "failed recovery", failure, clusterService.state()); } - private void failAndRemoveShard(ShardRouting shardRouting, boolean sendShardFailure, String message, @Nullable Exception failure) { + private void failAndRemoveShard(ShardRouting shardRouting, boolean sendShardFailure, String message, @Nullable Exception failure, + ClusterState state) { try { AllocatedIndex indexService = indicesService.indexService(shardRouting.shardId().getIndex()); if (indexService != null) { @@ -648,17 +668,17 @@ private void failAndRemoveShard(ShardRouting shardRouting, boolean sendShardFail inner); } if (sendShardFailure) { - sendFailShard(shardRouting, message, failure); + sendFailShard(shardRouting, message, failure, state); } } - private void sendFailShard(ShardRouting shardRouting, String message, @Nullable Exception failure) { + private void sendFailShard(ShardRouting shardRouting, String message, @Nullable Exception failure, ClusterState state) { try { logger.warn( (Supplier) () -> new ParameterizedMessage( "[{}] marking and sending shard failed due to [{}]", shardRouting.shardId(), message), failure); failedShardsCache.put(shardRouting.shardId(), shardRouting); - shardStateAction.localShardFailed(shardRouting, message, failure, SHARD_STATE_ACTION_LISTENER); + shardStateAction.localShardFailed(shardRouting, message, failure, SHARD_STATE_ACTION_LISTENER, state); } catch (Exception inner) { if (failure != null) inner.addSuppressed(failure); logger.warn( @@ -671,13 +691,14 @@ private void sendFailShard(ShardRouting shardRouting, String message, @Nullable } } - private class FailedShardHandler implements Callback { + private class FailedShardHandler implements Consumer { @Override - public void handle(final IndexShard.ShardFailure shardFailure) { + public void accept(final IndexShard.ShardFailure shardFailure) { final ShardRouting shardRouting = shardFailure.routing; threadPool.generic().execute(() -> { synchronized (IndicesClusterStateService.this) { - failAndRemoveShard(shardRouting, true, "shard failure, reason [" + shardFailure.reason + "]", shardFailure.cause); + failAndRemoveShard(shardRouting, true, "shard failure, reason [" + shardFailure.reason + "]", shardFailure.cause, + clusterService.state()); } }); } @@ -750,10 +771,9 @@ public interface AllocatedIndices> * @param indexMetaData the index metadata to create the index for * @param builtInIndexListener a list of built-in lifecycle {@link IndexEventListener} that should should be used along side with * the per-index listeners - * @throws IndexAlreadyExistsException if the index already exists. + * @throws ResourceAlreadyExistsException if the index already exists. */ - U createIndex(NodeServicesProvider nodeServicesProvider, IndexMetaData indexMetaData, - List builtInIndexListener) throws IOException; + U createIndex(IndexMetaData indexMetaData, List builtInIndexListener) throws IOException; /** * Verify that the contents on disk for the given index is deleted; if not, delete the contents. 
@@ -765,20 +785,10 @@ U createIndex(NodeServicesProvider nodeServicesProvider, IndexMetaData indexMeta */ IndexMetaData verifyIndexIsDeleted(Index index, ClusterState clusterState); - /** - * Deletes the given index. Persistent parts of the index - * like the shards files, state and transaction logs are removed once all resources are released. - * - * Equivalent to {@link #removeIndex(Index, String)} but fires - * different lifecycle events to ensure pending resources of this index are immediately removed. - * @param index the index to delete - * @param reason the high level reason causing this delete - */ - void deleteIndex(Index index, String reason); /** * Deletes an index that is not assigned to this node. This method cleans up all disk folders relating to the index - * but does not deal with in-memory structures. For those call {@link #deleteIndex(Index, String)} + * but does not deal with in-memory structures. For those call {@link #removeIndex(Index, IndexRemovalReason, String)} */ void deleteUnassignedIndex(String reason, IndexMetaData metaData, ClusterState clusterState); @@ -786,9 +796,10 @@ U createIndex(NodeServicesProvider nodeServicesProvider, IndexMetaData indexMeta * Removes the given index from this service and releases all associated resources. Persistent parts of the index * like the shards files, state and transaction logs are kept around in the case of a disaster recovery. * @param index the index to remove - * @param reason the high level reason causing this removal + * @param reason the reason to remove the index + * @param extraInfo extra information that will be used for logging and reporting */ - void removeIndex(Index index, String reason); + void removeIndex(Index index, IndexRemovalReason reason, String extraInfo); /** * Returns an IndexService for the specified index if exists otherwise returns null. @@ -800,7 +811,7 @@ U createIndex(NodeServicesProvider nodeServicesProvider, IndexMetaData indexMeta */ T createShard(ShardRouting shardRouting, RecoveryState recoveryState, PeerRecoveryTargetService recoveryTargetService, PeerRecoveryTargetService.RecoveryListener recoveryListener, RepositoriesService repositoriesService, - NodeServicesProvider nodeServicesProvider, Callback onShardFailure) throws IOException; + Consumer onShardFailure) throws IOException; /** * Returns shard for the specified id if it exists otherwise returns null. @@ -814,6 +825,33 @@ default T getShardOrNull(ShardId shardId) { } void processPendingDeletes(Index index, IndexSettings indexSettings, TimeValue timeValue) - throws IOException, InterruptedException, ShardLockObtainFailedException; + throws IOException, InterruptedException, ShardLockObtainFailedException; + + enum IndexRemovalReason { + /** + * Shard of this index were previously assigned to this node but all shards have been relocated. + * The index should be removed and all associated resources released. Persistent parts of the index + * like the shards files, state and transaction logs are kept around in the case of a disaster recovery. + */ + NO_LONGER_ASSIGNED, + /** + * The index is deleted. Persistent parts of the index like the shards files, state and transaction logs are removed once + * all resources are released. + */ + DELETED, + + /** + * The index have been closed. The index should be removed and all associated resources released. Persistent parts of the index + * like the shards files, state and transaction logs are kept around in the case of a disaster recovery. 
+ */ + CLOSED, + + /** + * Something around index management has failed and the index should be removed. + * Persistent parts of the index like the shards files, state and transaction logs are kept around in the + * case of a disaster recovery. + */ + FAILURE + } } } diff --git a/core/src/main/java/org/elasticsearch/indices/fielddata/cache/IndicesFieldDataCache.java b/core/src/main/java/org/elasticsearch/indices/fielddata/cache/IndicesFieldDataCache.java index c5a8be9b11441..3d4676f39cea0 100644 --- a/core/src/main/java/org/elasticsearch/indices/fielddata/cache/IndicesFieldDataCache.java +++ b/core/src/main/java/org/elasticsearch/indices/fielddata/cache/IndicesFieldDataCache.java @@ -19,6 +19,7 @@ package org.elasticsearch.indices.fielddata.cache; +import java.util.Collections; import org.apache.logging.log4j.Logger; import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.index.IndexReader; @@ -58,7 +59,7 @@ public class IndicesFieldDataCache extends AbstractComponent implements RemovalL public IndicesFieldDataCache(Settings settings, IndexFieldDataCache.Listener indicesFieldDataCacheListener) { super(settings); this.indicesFieldDataCacheListener = indicesFieldDataCacheListener; - final long sizeInBytes = INDICES_FIELDDATA_CACHE_SIZE_KEY.get(settings).bytes(); + final long sizeInBytes = INDICES_FIELDDATA_CACHE_SIZE_KEY.get(settings).getBytes(); CacheBuilder cacheBuilder = CacheBuilder.builder() .removalListener(this); if (sizeInBytes > 0) { @@ -129,9 +130,8 @@ public > FD load(fina //noinspection unchecked final Accountable accountable = cache.computeIfAbsent(key, k -> { context.reader().addCoreClosedListener(IndexFieldCache.this); - for (Listener listener : this.listeners) { - k.listeners.add(listener); - } + Collections.addAll(k.listeners, this.listeners); + final AtomicFieldData fieldData = indexFieldData.loadDirect(context); for (Listener listener : k.listeners) { try { @@ -153,9 +153,7 @@ public > IFD l //noinspection unchecked final Accountable accountable = cache.computeIfAbsent(key, k -> { ElasticsearchDirectoryReader.addReaderCloseListener(indexReader, IndexFieldCache.this); - for (Listener listener : this.listeners) { - k.listeners.add(listener); - } + Collections.addAll(k.listeners, this.listeners); final Accountable ifd = (Accountable) indexFieldData.localGlobalDirect(indexReader); for (Listener listener : k.listeners) { try { diff --git a/core/src/main/java/org/elasticsearch/indices/flush/SyncedFlushService.java b/core/src/main/java/org/elasticsearch/indices/flush/SyncedFlushService.java index 273682db3125b..3dce0ecdfd493 100644 --- a/core/src/main/java/org/elasticsearch/indices/flush/SyncedFlushService.java +++ b/core/src/main/java/org/elasticsearch/indices/flush/SyncedFlushService.java @@ -658,10 +658,10 @@ static final class InFlightOpsResponse extends TransportResponse { int opCount; - public InFlightOpsResponse() { + InFlightOpsResponse() { } - public InFlightOpsResponse(int opCount) { + InFlightOpsResponse(int opCount) { this.opCount = opCount; } diff --git a/core/src/main/java/org/elasticsearch/indices/query/IndicesQueriesRegistry.java b/core/src/main/java/org/elasticsearch/indices/query/IndicesQueriesRegistry.java deleted file mode 100644 index f1d45b55495fb..0000000000000 --- a/core/src/main/java/org/elasticsearch/indices/query/IndicesQueriesRegistry.java +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. 
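Per the IndexRemovalReason javadoc above, only DELETED is meant to remove the persistent parts of an index (shard files, state, translog); NO_LONGER_ASSIGNED, CLOSED and FAILURE keep them around for disaster recovery or re-import. A small sketch of how a caller might branch on the reason (hypothetical helper, not part of the diff):

```java
// Hypothetical helper mirroring the documented semantics of IndexRemovalReason.
enum IndexRemovalReason { NO_LONGER_ASSIGNED, DELETED, CLOSED, FAILURE }

final class RemovalPolicy {
    static boolean deletesPersistentState(IndexRemovalReason reason) {
        // Only a true deletion wipes shard files, state and transaction logs;
        // every other reason keeps them on disk for disaster recovery.
        return reason == IndexRemovalReason.DELETED;
    }

    public static void main(String[] args) {
        for (IndexRemovalReason reason : IndexRemovalReason.values()) {
            System.out.println(reason + " -> wipe on-disk state: " + deletesPersistentState(reason));
        }
    }
}
```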
See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.indices.query; - -import org.elasticsearch.common.xcontent.ParseFieldRegistry; -import org.elasticsearch.index.query.QueryParser; - -/** - * Extensions to ParseFieldRegistry to make Guice happy. - */ -public class IndicesQueriesRegistry extends ParseFieldRegistry> { - public IndicesQueriesRegistry() { - super("query"); - } -} diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/PeerRecoverySourceService.java b/core/src/main/java/org/elasticsearch/indices/recovery/PeerRecoverySourceService.java index 3fd12a6993e30..dc40c76236fe2 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/PeerRecoverySourceService.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/PeerRecoverySourceService.java @@ -84,7 +84,7 @@ public void beforeIndexShardClosed(ShardId shardId, @Nullable IndexShard indexSh } } - private RecoveryResponse recover(final StartRecoveryRequest request) throws IOException { + private RecoveryResponse recover(StartRecoveryRequest request) throws IOException { final IndexService indexService = indicesService.indexServiceSafe(request.shardId().getIndex()); final IndexShard shard = indexService.getShard(request.shardId().id()); @@ -113,6 +113,19 @@ private RecoveryResponse recover(final StartRecoveryRequest request) throws IOEx throw new DelayRecoveryException("source node has the state of the target shard to be [" + targetShardRouting.state() + "], expecting to be [initializing]"); } + if (request.targetAllocationId() == null) { + // ES versions < 5.4.0 do not send targetAllocationId as part of recovery request, just assume that we have the correct id + request = new StartRecoveryRequest(request.shardId(), targetShardRouting.allocationId().getId(), request.sourceNode(), + request.targetNode(), request.metadataSnapshot(), request.isPrimaryRelocation(), request.recoveryId()); + } + + if (request.targetAllocationId().equals(targetShardRouting.allocationId().getId()) == false) { + logger.debug("delaying recovery of {} due to target allocation id mismatch (expected: [{}], but was: [{}])", + request.shardId(), request.targetAllocationId(), targetShardRouting.allocationId().getId()); + throw new DelayRecoveryException("source node has the state of the target shard to have allocation id [" + + targetShardRouting.allocationId().getId() + "], expecting to be [" + request.targetAllocationId() + "]"); + } + RecoverySourceHandler handler = ongoingRecoveries.addNewRecovery(request, shard); logger.trace("[{}][{}] starting recovery to {}", request.shardId().getIndex().getName(), request.shardId().id(), request.targetNode()); try { diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/PeerRecoveryTargetService.java b/core/src/main/java/org/elasticsearch/indices/recovery/PeerRecoveryTargetService.java index 
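The recover() hunk just above adds two guards: requests from pre-5.4.0 nodes carry no targetAllocationId, so one is filled in from the target shard's routing entry, while a mismatching id delays the recovery instead of serving a stale target. A simplified, standalone sketch of that validation (hypothetical types, not the Elasticsearch request classes):

```java
// Hypothetical stand-ins: a minimal recovery request carrying an optional target allocation id.
final class StartRecoverySketch {
    static final class Request {
        final String targetAllocationId; // may be null for requests from pre-5.4.0 nodes
        Request(String targetAllocationId) { this.targetAllocationId = targetAllocationId; }
    }

    static final class DelayRecoveryException extends RuntimeException {
        DelayRecoveryException(String msg) { super(msg); }
    }

    // Mirrors the shape of the new checks: fill in a missing id, reject a mismatching one.
    static Request validate(Request request, String currentAllocationId) {
        if (request.targetAllocationId == null) {
            // older nodes did not send the id; assume the current allocation is the intended target
            return new Request(currentAllocationId);
        }
        if (request.targetAllocationId.equals(currentAllocationId) == false) {
            throw new DelayRecoveryException("expected allocation id [" + currentAllocationId
                    + "] but request targets [" + request.targetAllocationId + "]");
        }
        return request;
    }
}
```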
f26d0787f4178..edc627e6f5243 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/PeerRecoveryTargetService.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/PeerRecoveryTargetService.java @@ -33,7 +33,6 @@ import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.component.AbstractComponent; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; @@ -49,6 +48,7 @@ import org.elasticsearch.index.shard.ShardNotFoundException; import org.elasticsearch.index.shard.TranslogRecoveryPerformer; import org.elasticsearch.index.store.Store; +import org.elasticsearch.indices.recovery.RecoveriesCollection.RecoveryRef; import org.elasticsearch.node.NodeClosedException; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.ConnectTransportException; @@ -91,7 +91,6 @@ public static class Actions { private final RecoveriesCollection onGoingRecoveries; - @Inject public PeerRecoveryTargetService(Settings settings, ThreadPool threadPool, TransportService transportService, RecoverySettings recoverySettings, ClusterService clusterService) { super(settings); @@ -141,65 +140,86 @@ public void startRecovery(final IndexShard indexShard, final DiscoveryNode sourc threadPool.generic().execute(new RecoveryRunner(recoveryId)); } - protected void retryRecovery(final RecoveryTarget recoveryTarget, final Throwable reason, TimeValue retryAfter, final - StartRecoveryRequest currentRequest) { + protected void retryRecovery(final long recoveryId, final Throwable reason, TimeValue retryAfter, TimeValue activityTimeout) { logger.trace( (Supplier) () -> new ParameterizedMessage( - "will retry recovery with id [{}] in [{}]", recoveryTarget.recoveryId(), retryAfter), reason); - retryRecovery(recoveryTarget, retryAfter, currentRequest); + "will retry recovery with id [{}] in [{}]", recoveryId, retryAfter), reason); + retryRecovery(recoveryId, retryAfter, activityTimeout); } - protected void retryRecovery(final RecoveryTarget recoveryTarget, final String reason, TimeValue retryAfter, final - StartRecoveryRequest currentRequest) { - logger.trace("will retry recovery with id [{}] in [{}] (reason [{}])", recoveryTarget.recoveryId(), retryAfter, reason); - retryRecovery(recoveryTarget, retryAfter, currentRequest); + protected void retryRecovery(final long recoveryId, final String reason, TimeValue retryAfter, TimeValue activityTimeout) { + logger.trace("will retry recovery with id [{}] in [{}] (reason [{}])", recoveryId, retryAfter, reason); + retryRecovery(recoveryId, retryAfter, activityTimeout); } - private void retryRecovery(final RecoveryTarget recoveryTarget, TimeValue retryAfter, final StartRecoveryRequest currentRequest) { - try { - onGoingRecoveries.resetRecovery(recoveryTarget.recoveryId(), recoveryTarget.shardId()); - } catch (Exception e) { - onGoingRecoveries.failRecovery(recoveryTarget.recoveryId(), new RecoveryFailedException(currentRequest, e), true); + private void retryRecovery(final long recoveryId, TimeValue retryAfter, TimeValue activityTimeout) { + RecoveryTarget newTarget = onGoingRecoveries.resetRecovery(recoveryId, activityTimeout); + if (newTarget != null) { + threadPool.schedule(retryAfter, ThreadPool.Names.GENERIC, new RecoveryRunner(newTarget.recoveryId())); } - threadPool.schedule(retryAfter, ThreadPool.Names.GENERIC, new 
RecoveryRunner(recoveryTarget.recoveryId())); } - private void doRecovery(final RecoveryTarget recoveryTarget) { - assert recoveryTarget.sourceNode() != null : "can't do a recovery without a source node"; + private void doRecovery(final long recoveryId) { + final StartRecoveryRequest request; + final CancellableThreads cancellableThreads; + final RecoveryState.Timer timer; - logger.trace("collecting local files for {}", recoveryTarget); - Store.MetadataSnapshot metadataSnapshot = null; - try { - if (recoveryTarget.indexShard().indexSettings().isOnSharedFilesystem()) { - // we are not going to copy any files, so don't bother listing files, potentially running - // into concurrency issues with the primary changing files underneath us. - metadataSnapshot = Store.MetadataSnapshot.EMPTY; - } else { - metadataSnapshot = recoveryTarget.indexShard().snapshotStoreMetadata(); + try (RecoveryRef recoveryRef = onGoingRecoveries.getRecovery(recoveryId)) { + if (recoveryRef == null) { + logger.trace("not running recovery with id [{}] - can't find it (probably finished)", recoveryId); + return; } - } catch (org.apache.lucene.index.IndexNotFoundException e) { - // happens on an empty folder. no need to log - metadataSnapshot = Store.MetadataSnapshot.EMPTY; - } catch (IOException e) { - logger.warn("error while listing local files, recover as if there are none", e); - metadataSnapshot = Store.MetadataSnapshot.EMPTY; - } catch (Exception e) { - // this will be logged as warning later on... - logger.trace("unexpected error while listing local files, failing recovery", e); - onGoingRecoveries.failRecovery(recoveryTarget.recoveryId(), + RecoveryTarget recoveryTarget = recoveryRef.target(); + assert recoveryTarget.sourceNode() != null : "can't do a recovery without a source node"; + + logger.trace("collecting local files for {}", recoveryTarget.sourceNode()); + Store.MetadataSnapshot metadataSnapshot; + try { + if (recoveryTarget.indexShard().indexSettings().isOnSharedFilesystem()) { + // we are not going to copy any files, so don't bother listing files, potentially running + // into concurrency issues with the primary changing files underneath us. + metadataSnapshot = Store.MetadataSnapshot.EMPTY; + } else { + metadataSnapshot = recoveryTarget.indexShard().snapshotStoreMetadata(); + } + logger.trace("{} local file count: [{}]", recoveryTarget, metadataSnapshot.size()); + } catch (org.apache.lucene.index.IndexNotFoundException e) { + // happens on an empty folder. no need to log + logger.trace("{} shard folder empty, recover all files", recoveryTarget); + metadataSnapshot = Store.MetadataSnapshot.EMPTY; + } catch (IOException e) { + logger.warn("error while listing local files, recover as if there are none", e); + metadataSnapshot = Store.MetadataSnapshot.EMPTY; + } catch (Exception e) { + // this will be logged as warning later on... 
+ logger.trace("unexpected error while listing local files, failing recovery", e); + onGoingRecoveries.failRecovery(recoveryTarget.recoveryId(), new RecoveryFailedException(recoveryTarget.state(), "failed to list local files", e), true); - return; + return; + } + + try { + logger.trace("{} preparing shard for peer recovery", recoveryTarget.shardId()); + recoveryTarget.indexShard().prepareForIndexRecovery(); + + request = new StartRecoveryRequest(recoveryTarget.shardId(), + recoveryTarget.indexShard().routingEntry().allocationId().getId(), recoveryTarget.sourceNode(), + clusterService.localNode(), metadataSnapshot, recoveryTarget.state().getPrimary(), recoveryTarget.recoveryId()); + cancellableThreads = recoveryTarget.CancellableThreads(); + timer = recoveryTarget.state().getTimer(); + } catch (Exception e) { + // this will be logged as warning later on... + logger.trace("unexpected error while preparing shard for peer recovery, failing recovery", e); + onGoingRecoveries.failRecovery(recoveryTarget.recoveryId(), + new RecoveryFailedException(recoveryTarget.state(), "failed to prepare shard for recovery", e), true); + return; + } } - logger.trace("{} local file count: [{}]", recoveryTarget, metadataSnapshot.size()); - final StartRecoveryRequest request = new StartRecoveryRequest(recoveryTarget.shardId(), recoveryTarget.sourceNode(), - clusterService.localNode(), metadataSnapshot, recoveryTarget.state().getPrimary(), recoveryTarget.recoveryId()); - final AtomicReference responseHolder = new AtomicReference<>(); try { - logger.trace("[{}][{}] starting recovery from {}", request.shardId().getIndex().getName(), request.shardId().id(), request - .sourceNode()); - recoveryTarget.indexShard().prepareForIndexRecovery(); - recoveryTarget.CancellableThreads().execute(() -> responseHolder.set( + logger.trace("{} starting recovery from {}", request.shardId(), request.sourceNode()); + final AtomicReference responseHolder = new AtomicReference<>(); + cancellableThreads.execute(() -> responseHolder.set( transportService.submitRequest(request.sourceNode(), PeerRecoverySourceService.Actions.START_RECOVERY, request, new FutureTransportResponseHandler() { @Override @@ -209,9 +229,9 @@ public RecoveryResponse newInstance() { }).txGet())); final RecoveryResponse recoveryResponse = responseHolder.get(); assert responseHolder != null; - final TimeValue recoveryTime = new TimeValue(recoveryTarget.state().getTimer().time()); + final TimeValue recoveryTime = new TimeValue(timer.time()); // do this through ongoing recoveries to remove it from the collection - onGoingRecoveries.markRecoveryAsDone(recoveryTarget.recoveryId()); + onGoingRecoveries.markRecoveryAsDone(recoveryId); if (logger.isTraceEnabled()) { StringBuilder sb = new StringBuilder(); sb.append('[').append(request.shardId().getIndex().getName()).append(']').append('[').append(request.shardId().id()) @@ -231,7 +251,7 @@ public RecoveryResponse newInstance() { .append("\n"); logger.trace("{}", sb); } else { - logger.debug("{} recovery done from [{}], took [{}]", request.shardId(), recoveryTarget.sourceNode(), recoveryTime); + logger.debug("{} recovery done from [{}], took [{}]", request.shardId(), request.sourceNode(), recoveryTime); } } catch (CancellableThreads.ExecutionCancelledException e) { logger.trace("recovery cancelled", e); @@ -247,8 +267,8 @@ public RecoveryResponse newInstance() { Throwable cause = ExceptionsHelper.unwrapCause(e); if (cause instanceof CancellableThreads.ExecutionCancelledException) { // this can also come from the source wrapped 
in a RemoteTransportException - onGoingRecoveries.failRecovery(recoveryTarget.recoveryId(), new RecoveryFailedException(request, "source has canceled the" + - " recovery", cause), false); + onGoingRecoveries.failRecovery(recoveryId, new RecoveryFailedException(request, + "source has canceled the recovery", cause), false); return; } if (cause instanceof RecoveryEngineException) { @@ -264,31 +284,34 @@ public RecoveryResponse newInstance() { // here, we would add checks against exception that need to be retried (and not removeAndClean in this case) - if (cause instanceof IllegalIndexShardStateException || cause instanceof IndexNotFoundException || cause instanceof - ShardNotFoundException) { + if (cause instanceof IllegalIndexShardStateException || cause instanceof IndexNotFoundException || + cause instanceof ShardNotFoundException) { // if the target is not ready yet, retry - retryRecovery(recoveryTarget, "remote shard not ready", recoverySettings.retryDelayStateSync(), request); + retryRecovery(recoveryId, "remote shard not ready", recoverySettings.retryDelayStateSync(), + recoverySettings.activityTimeout()); return; } if (cause instanceof DelayRecoveryException) { - retryRecovery(recoveryTarget, cause, recoverySettings.retryDelayStateSync(), request); + retryRecovery(recoveryId, cause, recoverySettings.retryDelayStateSync(), + recoverySettings.activityTimeout()); return; } if (cause instanceof ConnectTransportException) { - logger.debug("delaying recovery of {} for [{}] due to networking error [{}]", recoveryTarget.shardId(), recoverySettings - .retryDelayNetwork(), cause.getMessage()); - retryRecovery(recoveryTarget, cause.getMessage(), recoverySettings.retryDelayNetwork(), request); + logger.debug("delaying recovery of {} for [{}] due to networking error [{}]", request.shardId(), + recoverySettings.retryDelayNetwork(), cause.getMessage()); + retryRecovery(recoveryId, cause.getMessage(), recoverySettings.retryDelayNetwork(), + recoverySettings.activityTimeout()); return; } if (cause instanceof AlreadyClosedException) { - onGoingRecoveries.failRecovery(recoveryTarget.recoveryId(), new RecoveryFailedException(request, "source shard is " + - "closed", cause), false); + onGoingRecoveries.failRecovery(recoveryId, + new RecoveryFailedException(request, "source shard is closed", cause), false); return; } - onGoingRecoveries.failRecovery(recoveryTarget.recoveryId(), new RecoveryFailedException(request, e), true); + onGoingRecoveries.failRecovery(recoveryId, new RecoveryFailedException(request, e), true); } } @@ -302,9 +325,9 @@ class PrepareForTranslogOperationsRequestHandler implements TransportRequestHand @Override public void messageReceived(RecoveryPrepareForTranslogOperationsRequest request, TransportChannel channel) throws Exception { - try (RecoveriesCollection.RecoveryRef recoveryRef = onGoingRecoveries.getRecoverySafe(request.recoveryId(), request.shardId() + try (RecoveryRef recoveryRef = onGoingRecoveries.getRecoverySafe(request.recoveryId(), request.shardId() )) { - recoveryRef.status().prepareForTranslogOperations(request.totalTranslogOps(), request.getMaxUnsafeAutoIdTimestamp()); + recoveryRef.target().prepareForTranslogOperations(request.totalTranslogOps(), request.getMaxUnsafeAutoIdTimestamp()); } channel.sendResponse(TransportResponse.Empty.INSTANCE); } @@ -314,9 +337,9 @@ class FinalizeRecoveryRequestHandler implements TransportRequestHandler= clusterStateVersion) { logger.trace("node has cluster state with version higher than {} (current: {})", clusterStateVersion, 
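With this refactor doRecovery and the transport handlers no longer hold a RecoveryTarget directly; they look it up by id and work through a closeable RecoveryRef (getRecovery returns null when the recovery has already finished, getRecoverySafe throws). A simplified sketch of that "look up by id, hold a closeable ref while working" pattern (hypothetical classes; the real RecoveryTarget is ref-counted via AbstractRefCounted):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

final class RecoveryRegistrySketch {
    static final class Target {
        final long id;
        private final AtomicInteger refCount = new AtomicInteger(1);
        Target(long id) { this.id = id; }
        boolean tryIncRef() {
            int current;
            do {
                current = refCount.get();
                if (current == 0) return false; // already closed/finished
            } while (refCount.compareAndSet(current, current + 1) == false);
            return true;
        }
        void decRef() { refCount.decrementAndGet(); }
    }

    static final class Ref implements AutoCloseable {
        private final Target target;
        Ref(Target target) { this.target = target; }
        Target target() { return target; }
        @Override public void close() { target.decRef(); }
    }

    private final ConcurrentMap<Long, Target> onGoing = new ConcurrentHashMap<>();

    // Returns null when the recovery is gone (probably finished), like getRecovery(recoveryId).
    Ref getRecovery(long id) {
        Target target = onGoing.get(id);
        if (target == null || target.tryIncRef() == false) {
            return null;
        }
        return new Ref(target);
    }
}
```

Callers then follow the shape the new doRecovery takes: `try (Ref ref = registry.getRecovery(id)) { if (ref == null) return; /* work with ref.target() */ }`.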
clusterState.getVersion()); return; } else { logger.trace("waiting for cluster state version {} (current: {})", clusterStateVersion, clusterState.getVersion()); - final PlainActionFuture future = new PlainActionFuture<>(); + final PlainActionFuture future = new PlainActionFuture<>(); observer.waitForNextChange(new ClusterStateObserver.Listener() { @Override public void onNewClusterState(ClusterState state) { - future.onResponse(null); + future.onResponse(state.getVersion()); } @Override @@ -425,23 +448,16 @@ public void onClusterServiceClose() { public void onTimeout(TimeValue timeout) { future.onFailure(new IllegalStateException("cluster state never updated to version " + clusterStateVersion)); } - }, new ClusterStateObserver.ValidationPredicate() { - - @Override - protected boolean validate(ClusterState newState) { - return newState.getVersion() >= clusterStateVersion; - } - }); + }, newState -> newState.getVersion() >= clusterStateVersion); try { - future.get(); - logger.trace("successfully waited for cluster state with version {} (current: {})", clusterStateVersion, - observer.observedState().getVersion()); + long currentVersion = future.get(); + logger.trace("successfully waited for cluster state with version {} (current: {})", clusterStateVersion, currentVersion); } catch (Exception e) { logger.debug( (Supplier) () -> new ParameterizedMessage( "failed waiting for cluster state with version {} (current: {})", clusterStateVersion, - observer.observedState()), + clusterService.state().getVersion()), e); throw ExceptionsHelper.convertToRuntime(e); } @@ -452,9 +468,9 @@ class FilesInfoRequestHandler implements TransportRequestHandler) () -> new ParameterizedMessage( "unexpected error during recovery [{}], failing shard", recoveryId), e); onGoingRecoveries.failRecovery(recoveryId, - new RecoveryFailedException(recoveryRef.status().state(), "unexpected error", e), + new RecoveryFailedException(recoveryRef.target().state(), "unexpected error", e), true // be safe ); } else { @@ -537,16 +553,7 @@ public void onFailure(Exception e) { @Override public void doRun() { - RecoveriesCollection.RecoveryRef recoveryRef = onGoingRecoveries.getRecovery(recoveryId); - if (recoveryRef == null) { - logger.trace("not running recovery with id [{}] - can't find it (probably finished)", recoveryId); - return; - } - try { - doRecovery(recoveryRef.status()); - } finally { - recoveryRef.close(); - } + doRecovery(recoveryId); } } diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveriesCollection.java b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveriesCollection.java index 65a48b18e22a3..b5d9c1d34bd3f 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveriesCollection.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveriesCollection.java @@ -25,7 +25,6 @@ import org.elasticsearch.ElasticsearchTimeoutException; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.util.Callback; import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.common.util.concurrent.ConcurrentCollections; import org.elasticsearch.index.shard.IndexShard; @@ -33,9 +32,12 @@ import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.threadpool.ThreadPool; -import java.io.IOException; +import java.util.ArrayList; +import java.util.Iterator; +import java.util.List; import java.util.concurrent.ConcurrentMap; import 
java.util.concurrent.atomic.AtomicBoolean; +import java.util.function.LongConsumer; /** * This class holds a collection of all on going recoveries on the current node (i.e., the node is the target node @@ -50,9 +52,9 @@ public class RecoveriesCollection { private final Logger logger; private final ThreadPool threadPool; - private final Callback ensureClusterStateVersionCallback; + private final LongConsumer ensureClusterStateVersionCallback; - public RecoveriesCollection(Logger logger, ThreadPool threadPool, Callback ensureClusterStateVersionCallback) { + public RecoveriesCollection(Logger logger, ThreadPool threadPool, LongConsumer ensureClusterStateVersionCallback) { this.logger = logger; this.threadPool = threadPool; this.ensureClusterStateVersionCallback = ensureClusterStateVersionCallback; @@ -65,13 +67,18 @@ public RecoveriesCollection(Logger logger, ThreadPool threadPool, Callback */ public long startRecovery(IndexShard indexShard, DiscoveryNode sourceNode, PeerRecoveryTargetService.RecoveryListener listener, TimeValue activityTimeout) { - RecoveryTarget status = new RecoveryTarget(indexShard, sourceNode, listener, ensureClusterStateVersionCallback); - RecoveryTarget existingStatus = onGoingRecoveries.putIfAbsent(status.recoveryId(), status); - assert existingStatus == null : "found two RecoveryStatus instances with the same id"; - logger.trace("{} started recovery from {}, id [{}]", indexShard.shardId(), sourceNode, status.recoveryId()); + RecoveryTarget recoveryTarget = new RecoveryTarget(indexShard, sourceNode, listener, ensureClusterStateVersionCallback); + startRecoveryInternal(recoveryTarget, activityTimeout); + return recoveryTarget.recoveryId(); + } + + private void startRecoveryInternal(RecoveryTarget recoveryTarget, TimeValue activityTimeout) { + RecoveryTarget existingTarget = onGoingRecoveries.putIfAbsent(recoveryTarget.recoveryId(), recoveryTarget); + assert existingTarget == null : "found two RecoveryStatus instances with the same id"; + logger.trace("{} started recovery from {}, id [{}]", recoveryTarget.shardId(), recoveryTarget.sourceNode(), + recoveryTarget.recoveryId()); threadPool.schedule(activityTimeout, ThreadPool.Names.GENERIC, - new RecoveryMonitor(status.recoveryId(), status.lastAccessTime(), activityTimeout)); - return status.recoveryId(); + new RecoveryMonitor(recoveryTarget.recoveryId(), recoveryTarget.lastAccessTime(), activityTimeout)); } @@ -79,22 +86,49 @@ public long startRecovery(IndexShard indexShard, DiscoveryNode sourceNode, * Resets the recovery and performs a recovery restart on the currently recovering index shard * * @see IndexShard#performRecoveryRestart() + * @return newly created RecoveryTarget */ - public void resetRecovery(long id, ShardId shardId) throws IOException { - try (RecoveryRef ref = getRecoverySafe(id, shardId)) { - // instead of adding complicated state to RecoveryTarget we just flip the - // target instance when we reset a recovery, that way we have only one cleanup - // path on the RecoveryTarget and are always within the bounds of ref-counting - // which is important since we verify files are on disk etc. after we have written them etc. 
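RecoveriesCollection drops the custom Callback<Long> in favour of java.util.function.LongConsumer, matching the earlier switch of FailedShardHandler to Consumer<IndexShard.ShardFailure>. A minimal sketch of the same migration, assuming a legacy single-method Callback interface shaped like the removed org.elasticsearch.common.util.Callback:

```java
import java.util.function.LongConsumer;

final class CallbackMigrationSketch {
    // Illustrative shape of the removed helper interface.
    interface Callback<T> {
        void handle(T value);
    }

    // Before: a custom callback type just to pass a long around (boxing included).
    static void ensureClusterStateVersionOld(Callback<Long> callback, long version) {
        callback.handle(version);
    }

    // After: the JDK functional interface does the same job without boxing.
    static void ensureClusterStateVersionNew(LongConsumer callback, long version) {
        callback.accept(version);
    }

    public static void main(String[] args) {
        ensureClusterStateVersionOld(v -> System.out.println("old: " + v), 42L);
        ensureClusterStateVersionNew(v -> System.out.println("new: " + v), 42L);
    }
}
```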
- RecoveryTarget status = ref.status(); - RecoveryTarget resetRecovery = status.resetRecovery(); - if (onGoingRecoveries.replace(id, status, resetRecovery) == false) { - resetRecovery.cancel("replace failed"); // this is important otherwise we leak a reference to the store - throw new IllegalStateException("failed to replace recovery target"); + public RecoveryTarget resetRecovery(final long recoveryId, TimeValue activityTimeout) { + RecoveryTarget oldRecoveryTarget = null; + final RecoveryTarget newRecoveryTarget; + + try { + synchronized (onGoingRecoveries) { + // swap recovery targets in a synchronized block to ensure that the newly added recovery target is picked up by + // cancelRecoveriesForShard whenever the old recovery target is picked up + oldRecoveryTarget = onGoingRecoveries.remove(recoveryId); + if (oldRecoveryTarget == null) { + return null; + } + + newRecoveryTarget = oldRecoveryTarget.retryCopy(); + startRecoveryInternal(newRecoveryTarget, activityTimeout); + } + + // Closes the current recovery target + boolean successfulReset = oldRecoveryTarget.resetRecovery(newRecoveryTarget.CancellableThreads()); + if (successfulReset) { + logger.trace("{} restarted recovery from {}, id [{}], previous id [{}]", newRecoveryTarget.shardId(), + newRecoveryTarget.sourceNode(), newRecoveryTarget.recoveryId(), oldRecoveryTarget.recoveryId()); + return newRecoveryTarget; + } else { + logger.trace("{} recovery could not be reset as it is already cancelled, recovery from {}, id [{}], previous id [{}]", + newRecoveryTarget.shardId(), newRecoveryTarget.sourceNode(), newRecoveryTarget.recoveryId(), + oldRecoveryTarget.recoveryId()); + cancelRecovery(newRecoveryTarget.recoveryId(), "recovery cancelled during reset"); + return null; } + } catch (Exception e) { + // fail shard to be safe + oldRecoveryTarget.notifyListener(new RecoveryFailedException(oldRecoveryTarget.state(), "failed to retry recovery", e), true); + return null; } } + public RecoveryTarget getRecoveryTarget(long id) { + return onGoingRecoveries.get(id); + } + /** * gets the {@link RecoveryTarget } for a given id. The RecoveryStatus returned has it's ref count already incremented * to make sure it's safe to use. However, you must call {@link RecoveryTarget#decRef()} when you are done with it, typically @@ -116,7 +150,7 @@ public RecoveryRef getRecoverySafe(long id, ShardId shardId) { if (recoveryRef == null) { throw new IndexShardClosedException(shardId); } - assert recoveryRef.status().shardId().equals(shardId); + assert recoveryRef.target().shardId().equals(shardId); return recoveryRef; } @@ -143,7 +177,8 @@ public boolean cancelRecovery(long id, String reason) { public void failRecovery(long id, RecoveryFailedException e, boolean sendShardFailure) { RecoveryTarget removed = onGoingRecoveries.remove(id); if (removed != null) { - logger.trace("{} failing recovery from {}, id [{}]. Send shard failure: [{}]", removed.shardId(), removed.sourceNode(), removed.recoveryId(), sendShardFailure); + logger.trace("{} failing recovery from {}, id [{}]. 
Send shard failure: [{}]", removed.shardId(), removed.sourceNode(), + removed.recoveryId(), sendShardFailure); removed.fail(e, sendShardFailure); } } @@ -171,11 +206,22 @@ public int size() { */ public boolean cancelRecoveriesForShard(ShardId shardId, String reason) { boolean cancelled = false; - for (RecoveryTarget status : onGoingRecoveries.values()) { - if (status.shardId().equals(shardId)) { - cancelled |= cancelRecovery(status.recoveryId(), reason); + List matchedRecoveries = new ArrayList<>(); + synchronized (onGoingRecoveries) { + for (Iterator it = onGoingRecoveries.values().iterator(); it.hasNext(); ) { + RecoveryTarget status = it.next(); + if (status.shardId().equals(shardId)) { + matchedRecoveries.add(status); + it.remove(); + } } } + for (RecoveryTarget removed : matchedRecoveries) { + logger.trace("{} canceled recovery from {}, id [{}] (reason [{}])", + removed.shardId(), removed.sourceNode(), removed.recoveryId(), reason); + removed.cancel(reason); + cancelled = true; + } return cancelled; } @@ -205,7 +251,7 @@ public void close() { } } - public RecoveryTarget status() { + public RecoveryTarget target() { return status; } } diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryFailedException.java b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryFailedException.java index 3c3d96a4f9baa..283de68f82b5b 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryFailedException.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryFailedException.java @@ -49,7 +49,8 @@ public RecoveryFailedException(ShardId shardId, DiscoveryNode sourceNode, Discov } public RecoveryFailedException(ShardId shardId, DiscoveryNode sourceNode, DiscoveryNode targetNode, @Nullable String extraInfo, Throwable cause) { - super(shardId + ": Recovery failed from " + sourceNode + " into " + targetNode + (extraInfo == null ? "" : " (" + extraInfo + ")"), cause); + super(shardId + ": Recovery failed " + (sourceNode != null ? "from " + sourceNode + " into " : "on ") + + targetNode + (extraInfo == null ? 
"" : " (" + extraInfo + ")"), cause); } public RecoveryFailedException(StreamInput in) throws IOException { diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySettings.java b/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySettings.java index 82595458479b3..e238277b698d0 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySettings.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySettings.java @@ -22,7 +22,6 @@ import org.apache.lucene.store.RateLimiter; import org.apache.lucene.store.RateLimiter.SimpleRateLimiter; import org.elasticsearch.common.component.AbstractComponent; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; @@ -61,7 +60,7 @@ public class RecoverySettings extends AbstractComponent { */ public static final Setting INDICES_RECOVERY_INTERNAL_LONG_ACTION_TIMEOUT_SETTING = Setting.timeSetting("indices.recovery.internal_action_long_timeout", - (s) -> TimeValue.timeValueMillis(INDICES_RECOVERY_INTERNAL_ACTION_TIMEOUT_SETTING.get(s).millis() * 2).toString(), + (s) -> TimeValue.timeValueMillis(INDICES_RECOVERY_INTERNAL_ACTION_TIMEOUT_SETTING.get(s).millis() * 2), TimeValue.timeValueSeconds(0), Property.Dynamic, Property.NodeScope); /** @@ -70,7 +69,7 @@ public class RecoverySettings extends AbstractComponent { */ public static final Setting INDICES_RECOVERY_ACTIVITY_TIMEOUT_SETTING = Setting.timeSetting("indices.recovery.recovery_activity_timeout", - (s) -> INDICES_RECOVERY_INTERNAL_LONG_ACTION_TIMEOUT_SETTING.getRaw(s) , TimeValue.timeValueSeconds(0), + INDICES_RECOVERY_INTERNAL_LONG_ACTION_TIMEOUT_SETTING::get, TimeValue.timeValueSeconds(0), Property.Dynamic, Property.NodeScope); public static final ByteSizeValue DEFAULT_CHUNK_SIZE = new ByteSizeValue(512, ByteSizeUnit.KB); @@ -85,7 +84,6 @@ public class RecoverySettings extends AbstractComponent { private volatile ByteSizeValue chunkSize = DEFAULT_CHUNK_SIZE; - @Inject public RecoverySettings(Settings settings, ClusterSettings clusterSettings) { super(settings); @@ -99,10 +97,10 @@ public RecoverySettings(Settings settings, ClusterSettings clusterSettings) { this.activityTimeout = INDICES_RECOVERY_ACTIVITY_TIMEOUT_SETTING.get(settings); this.maxBytesPerSec = INDICES_RECOVERY_MAX_BYTES_PER_SEC_SETTING.get(settings); - if (maxBytesPerSec.bytes() <= 0) { + if (maxBytesPerSec.getBytes() <= 0) { rateLimiter = null; } else { - rateLimiter = new SimpleRateLimiter(maxBytesPerSec.mbFrac()); + rateLimiter = new SimpleRateLimiter(maxBytesPerSec.getMbFrac()); } @@ -142,7 +140,7 @@ public TimeValue internalActionLongTimeout() { public ByteSizeValue getChunkSize() { return chunkSize; } - void setChunkSize(ByteSizeValue chunkSize) { // only settable for tests + public void setChunkSize(ByteSizeValue chunkSize) { // only settable for tests if (chunkSize.bytesAsInt() <= 0) { throw new IllegalArgumentException("chunkSize must be > 0"); } @@ -172,12 +170,12 @@ public void setInternalActionLongTimeout(TimeValue internalActionLongTimeout) { private void setMaxBytesPerSec(ByteSizeValue maxBytesPerSec) { this.maxBytesPerSec = maxBytesPerSec; - if (maxBytesPerSec.bytes() <= 0) { + if (maxBytesPerSec.getBytes() <= 0) { rateLimiter = null; } else if (rateLimiter != null) { - rateLimiter.setMBPerSec(maxBytesPerSec.mbFrac()); + rateLimiter.setMBPerSec(maxBytesPerSec.getMbFrac()); } else { - rateLimiter = new 
SimpleRateLimiter(maxBytesPerSec.mbFrac()); + rateLimiter = new SimpleRateLimiter(maxBytesPerSec.getMbFrac()); } } } diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java b/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java index 790376ba78ebd..5f66ffd4315ee 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java @@ -148,6 +148,8 @@ public RecoveryResponse recoverToTarget() throws IOException { // engine was just started at the end of phase 1 if (shard.state() == IndexShardState.RELOCATED) { + assert request.isPrimaryRelocation() == false : + "recovery target should not retry primary relocation if previous attempt made it past finalization step"; /** * The primary shard has been relocated while we copied files. This means that we can't guarantee any more that all * operations that were replicated during the file copy (when the target engine was not yet opened) will be present in the @@ -220,7 +222,7 @@ public void phase1(final IndexCommit snapshot, final Translog.View translogView) final long numDocsSource = recoverySourceMetadata.getNumDocs(); if (numDocsTarget != numDocsSource) { throw new IllegalStateException("try to recover " + request.shardId() + " from primary shard with sync id but number " + - "of docs differ: " + numDocsTarget + " (" + request.sourceNode().getName() + ", primary) vs " + numDocsSource + "of docs differ: " + numDocsSource + " (" + request.sourceNode().getName() + ", primary) vs " + numDocsTarget + "(" + request.targetNode().getName() + ")"); } // we shortcut recovery here because we have nothing to copy. but we must still start the engine on the target. @@ -327,7 +329,7 @@ public void phase1(final IndexCommit snapshot, final Translog.View translogView) } } - prepareTargetForTranslog(translogView.totalOperations()); + prepareTargetForTranslog(translogView.totalOperations(), shard.segmentStats(false).getMaxUnsafeAutoIdTimestamp()); logger.trace("[{}][{}] recovery [phase1] to {}: took [{}]", indexName, shardId, request.targetNode(), stopWatch.totalTime()); response.phase1Time = stopWatch.totalTime().millis(); @@ -339,15 +341,14 @@ public void phase1(final IndexCommit snapshot, final Translog.View translogView) } - protected void prepareTargetForTranslog(final int totalTranslogOps) throws IOException { + protected void prepareTargetForTranslog(final int totalTranslogOps, long maxUnsafeAutoIdTimestamp) throws IOException { StopWatch stopWatch = new StopWatch().start(); logger.trace("{} recovery [phase1] to {}: prepare remote engine for translog", request.shardId(), request.targetNode()); final long startEngineStart = stopWatch.totalTime().millis(); // Send a request preparing the new shard's translog to receive // operations. This ensures the shard engine is started and disables // garbage collection (not the JVM's GC!) 
of tombstone deletes - cancellableThreads.executeIO(() -> recoveryTarget.prepareForTranslogOperations(totalTranslogOps, - shard.segmentStats(false).getMaxUnsafeAutoIdTimestamp())); + cancellableThreads.executeIO(() -> recoveryTarget.prepareForTranslogOperations(totalTranslogOps, maxUnsafeAutoIdTimestamp)); stopWatch.stop(); response.startTime = stopWatch.totalTime().millis() - startEngineStart; @@ -547,7 +548,7 @@ void sendFiles(Store store, StoreFileMetaData[] files, Function Long.compare(a.length(), b.length())); // send smallest first for (int i = 0; i < files.length; i++) { final StoreFileMetaData md = files[i]; - try (final IndexInput indexInput = store.directory().openInput(md.name(), IOContext.READONCE)) { + try (IndexInput indexInput = store.directory().openInput(md.name(), IOContext.READONCE)) { // it's fine that we are only having the indexInput in the try/with block. The copy methods handles // exceptions during close correctly and doesn't hide the original exception. Streams.copy(new InputStreamIndexInput(indexInput, md.length()), outputStreamFactory.apply(md)); diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryState.java b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryState.java index ef4f5072d7c79..77d8b4d707739 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryState.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryState.java @@ -45,7 +45,7 @@ */ public class RecoveryState implements ToXContent, Streamable { - public static enum Stage { + public enum Stage { INIT((byte) 0), /** @@ -260,9 +260,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(Fields.TYPE, recoverySource.getType()); builder.field(Fields.STAGE, stage.toString()); builder.field(Fields.PRIMARY, primary); - builder.dateValueField(Fields.START_TIME_IN_MILLIS, Fields.START_TIME, timer.startTime); + builder.dateField(Fields.START_TIME_IN_MILLIS, Fields.START_TIME, timer.startTime); if (timer.stopTime > 0) { - builder.dateValueField(Fields.STOP_TIME_IN_MILLIS, Fields.STOP_TIME, timer.stopTime); + builder.dateField(Fields.STOP_TIME_IN_MILLIS, Fields.STOP_TIME, timer.stopTime); } builder.timeValueField(Fields.TOTAL_TIME_IN_MILLIS, Fields.TOTAL_TIME, timer.time()); diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java index d608dc50e2326..98e2cc85bc9ec 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java @@ -36,7 +36,6 @@ import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.lucene.Lucene; -import org.elasticsearch.common.util.Callback; import org.elasticsearch.common.util.CancellableThreads; import org.elasticsearch.common.util.concurrent.AbstractRefCounted; import org.elasticsearch.common.util.concurrent.ConcurrentCollections; @@ -55,8 +54,10 @@ import java.util.Map; import java.util.Map.Entry; import java.util.concurrent.ConcurrentMap; +import java.util.concurrent.CountDownLatch; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicLong; +import java.util.function.LongConsumer; /** * Represents a recovery where the current node is the target node of the recovery. 
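The RecoverySettings hunks above switch the recovery rate limiter between disabled, updated-in-place, and newly created depending on the configured max bytes per second. A minimal sketch of that branch, assuming Lucene's `RateLimiter.SimpleRateLimiter` is on the classpath as in the diff; the class and method names below are invented for illustration:

```java
import org.apache.lucene.store.RateLimiter.SimpleRateLimiter;

class RecoveryThrottle {
    private volatile SimpleRateLimiter rateLimiter;

    // mirrors setMaxBytesPerSec above: a value <= 0 disables throttling,
    // otherwise the existing limiter is updated in place or a new one is created
    void setMaxMegabytesPerSec(double mbPerSec) {
        if (mbPerSec <= 0) {
            rateLimiter = null;
        } else if (rateLimiter != null) {
            rateLimiter.setMBPerSec(mbPerSec);
        } else {
            rateLimiter = new SimpleRateLimiter(mbPerSec);
        }
    }

    SimpleRateLimiter rateLimiter() {
        return rateLimiter;
    }
}
```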
To track recoveries in a central place, instances of @@ -68,7 +69,7 @@ public class RecoveryTarget extends AbstractRefCounted implements RecoveryTarget private static final AtomicLong idGenerator = new AtomicLong(); - private final String RECOVERY_PREFIX = "recovery."; + private static final String RECOVERY_PREFIX = "recovery."; private final ShardId shardId; private final long recoveryId; @@ -77,7 +78,7 @@ public class RecoveryTarget extends AbstractRefCounted implements RecoveryTarget private final String tempFilePrefix; private final Store store; private final PeerRecoveryTargetService.RecoveryListener listener; - private final Callback ensureClusterStateVersionCallback; + private final LongConsumer ensureClusterStateVersionCallback; private final AtomicBoolean finished = new AtomicBoolean(); @@ -87,17 +88,11 @@ public class RecoveryTarget extends AbstractRefCounted implements RecoveryTarget // last time this status was accessed private volatile long lastAccessTime = System.nanoTime(); - private final Map tempFileNames = ConcurrentCollections.newConcurrentMap(); + // latch that can be used to blockingly wait for RecoveryTarget to be closed + private final CountDownLatch closedLatch = new CountDownLatch(1); - private RecoveryTarget(RecoveryTarget copyFrom) { // copy constructor - this(copyFrom.indexShard, copyFrom.sourceNode, copyFrom.listener, copyFrom.cancellableThreads, copyFrom.recoveryId, - copyFrom.ensureClusterStateVersionCallback); - } + private final Map tempFileNames = ConcurrentCollections.newConcurrentMap(); - public RecoveryTarget(IndexShard indexShard, DiscoveryNode sourceNode, PeerRecoveryTargetService.RecoveryListener listener, - Callback ensureClusterStateVersionCallback) { - this(indexShard, sourceNode, listener, new CancellableThreads(), idGenerator.incrementAndGet(), ensureClusterStateVersionCallback); - } /** * creates a new recovery target object that represents a recovery to the provided indexShard * @@ -108,11 +103,14 @@ public RecoveryTarget(IndexShard indexShard, DiscoveryNode sourceNode, PeerRecov * version. Necessary for primary relocation so that new primary knows about all other ongoing * replica recoveries when replicating documents (see {@link RecoverySourceHandler}). 
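The RecoveryTarget hunk above replaces the one-method `Callback<Long>` with the JDK's `java.util.function.LongConsumer` for `ensureClusterStateVersionCallback`. A minimal, self-contained sketch of that swap; the class name and methods below are invented, not the actual Elasticsearch types:

```java
import java.util.function.LongConsumer;

class ClusterStateWaiter {
    // before: a custom one-method Callback<Long>; after: the JDK's LongConsumer
    private final LongConsumer ensureClusterStateVersion;

    ClusterStateWaiter(LongConsumer ensureClusterStateVersion) {
        this.ensureClusterStateVersion = ensureClusterStateVersion;
    }

    void onRequest(long clusterStateVersion) {
        // accept(long) replaces the old handle(Long) call and avoids boxing
        ensureClusterStateVersion.accept(clusterStateVersion);
    }

    public static void main(String[] args) {
        ClusterStateWaiter waiter = new ClusterStateWaiter(
                version -> System.out.println("waiting for cluster state version " + version));
        waiter.onRequest(42L);
    }
}
```

Using the primitive-specialised functional interface removes a custom interface from the codebase and spares a boxing step for the cluster-state version.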
*/ - private RecoveryTarget(IndexShard indexShard, DiscoveryNode sourceNode, PeerRecoveryTargetService.RecoveryListener listener, - CancellableThreads cancellableThreads, long recoveryId, Callback ensureClusterStateVersionCallback) { + public RecoveryTarget( + final IndexShard indexShard, + final DiscoveryNode sourceNode, + final PeerRecoveryTargetService.RecoveryListener listener, + final LongConsumer ensureClusterStateVersionCallback) { super("recovery_status"); - this.cancellableThreads = cancellableThreads; - this.recoveryId = recoveryId; + this.cancellableThreads = new CancellableThreads(); + this.recoveryId = idGenerator.incrementAndGet(); this.listener = listener; this.logger = Loggers.getLogger(getClass(), indexShard.indexSettings().getSettings(), indexShard.shardId()); this.indexShard = indexShard; @@ -126,6 +124,13 @@ private RecoveryTarget(IndexShard indexShard, DiscoveryNode sourceNode, PeerReco indexShard.recoveryStats().incCurrentAsTarget(); } + /** + * returns a fresh RecoveryTarget to retry recovery from the same source node onto the same IndexShard and using the same listener + */ + public RecoveryTarget retryCopy() { + return new RecoveryTarget(this.indexShard, this.sourceNode, this.listener, this.ensureClusterStateVersionCallback); + } + public long recoveryId() { return recoveryId; } @@ -177,19 +182,37 @@ public void renameAllTempFiles() throws IOException { } /** - * Closes the current recovery target and returns a - * clone to reset the ongoing recovery. - * Note: the returned target must be canceled, failed or finished - * in order to release all it's reference. + * Closes the current recovery target and waits up to a certain timeout for resources to be freed. + * Returns true if resetting the recovery was successful, false if the recovery target is already cancelled / failed or marked as done. */ - RecoveryTarget resetRecovery() throws IOException { - ensureRefCount(); + boolean resetRecovery(CancellableThreads newTargetCancellableThreads) throws IOException { if (finished.compareAndSet(false, true)) { - // release the initial reference. recovery files will be cleaned as soon as ref count goes to zero, potentially now - decRef(); + try { + logger.debug("reset of recovery with shard {} and id [{}]", shardId, recoveryId); + } finally { + // release the initial reference. recovery files will be cleaned as soon as ref count goes to zero, potentially now. + decRef(); + } + try { + newTargetCancellableThreads.execute(closedLatch::await); + } catch (CancellableThreads.ExecutionCancelledException e) { + logger.trace("new recovery target cancelled for shard {} while waiting on old recovery target with id [{}] to close", + shardId, recoveryId); + return false; + } + RecoveryState.Stage stage = indexShard.recoveryState().getStage(); + if (indexShard.recoveryState().getPrimary() && (stage == RecoveryState.Stage.FINALIZE || stage == RecoveryState.Stage.DONE)) { + // once primary relocation has moved past the finalization step, the relocation source can be moved to RELOCATED state + // and start indexing as primary into the target shard (see TransportReplicationAction). Resetting the target shard in this + // state could mean that indexing is halted until the recovery retry attempt is completed and could also destroy existing + // documents indexed and acknowledged before the reset. 
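`resetRecovery` above marks the old target as finished, releases its reference, and then lets the new target wait on `closedLatch`, which `closeInternal` counts down once resources are freed. A stripped-down sketch of that close-and-wait handshake using only JDK types and invented names:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

class OldRecoveryTarget {
    private final AtomicBoolean finished = new AtomicBoolean();
    private final CountDownLatch closedLatch = new CountDownLatch(1);

    // runs when the last reference is released, possibly on another thread
    void closeInternal() {
        closedLatch.countDown();
    }

    // the new target blocks here until the old one has freed its resources
    boolean resetAndAwaitClose() throws InterruptedException {
        if (finished.compareAndSet(false, true) == false) {
            return false; // already cancelled, failed, or marked done
        }
        closeInternal();     // stand-in for the reference count reaching zero
        closedLatch.await(); // wait for resources (store, temp files) to be released
        return true;
    }
}
```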
+ assert stage != RecoveryState.Stage.DONE : "recovery should not have completed when it's being reset"; + throw new IllegalStateException("cannot reset recovery as previous attempt made it past finalization step"); + } + indexShard.performRecoveryRestart(); + return true; } - indexShard.performRecoveryRestart(); - return new RecoveryTarget(this); + return false; } /** @@ -220,7 +243,7 @@ public void cancel(String reason) { public void fail(RecoveryFailedException e, boolean sendShardFailure) { if (finished.compareAndSet(false, true)) { try { - listener.onRecoveryFailure(state(), e, sendShardFailure); + notifyListener(e, sendShardFailure); } finally { try { cancellableThreads.cancel("failed recovery [" + ExceptionsHelper.stackTrace(e) + "]"); @@ -232,6 +255,10 @@ public void fail(RecoveryFailedException e, boolean sendShardFailure) { } } + public void notifyListener(RecoveryFailedException e, boolean sendShardFailure) { + listener.onRecoveryFailure(state(), e, sendShardFailure); + } + /** mark the current recovery as done */ public void markAsDone() { if (finished.compareAndSet(false, true)) { @@ -309,6 +336,7 @@ protected void closeInternal() { // free store. increment happens in constructor store.decRef(); indexShard.recoveryStats().decCurrentAsTarget(); + closedLatch.countDown(); } } @@ -339,7 +367,7 @@ public void finalizeRecovery() { @Override public void ensureClusterStateVersion(long clusterStateVersion) { - ensureClusterStateVersionCallback.handle(clusterStateVersion); + ensureClusterStateVersionCallback.accept(clusterStateVersion); } @Override diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RemoteRecoveryTargetHandler.java b/core/src/main/java/org/elasticsearch/indices/recovery/RemoteRecoveryTargetHandler.java index 327eb3b8eca7e..19d70ef264a2c 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RemoteRecoveryTargetHandler.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RemoteRecoveryTargetHandler.java @@ -52,8 +52,6 @@ public class RemoteRecoveryTargetHandler implements RecoveryTargetHandler { public RemoteRecoveryTargetHandler(long recoveryId, ShardId shardId, TransportService transportService, DiscoveryNode targetNode, RecoverySettings recoverySettings, Consumer onSourceThrottle) { this.transportService = transportService; - - this.recoveryId = recoveryId; this.shardId = shardId; this.targetNode = targetNode; diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/SharedFSRecoverySourceHandler.java b/core/src/main/java/org/elasticsearch/indices/recovery/SharedFSRecoverySourceHandler.java index 591176f047a2f..509dd996d19a9 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/SharedFSRecoverySourceHandler.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/SharedFSRecoverySourceHandler.java @@ -50,6 +50,7 @@ public RecoveryResponse recoverToTarget() throws IOException { boolean engineClosed = false; try { logger.trace("{} recovery [phase1] to {}: skipping phase 1 for shared filesystem", request.shardId(), request.targetNode()); + long maxUnsafeAutoIdTimestamp = shard.segmentStats(false).getMaxUnsafeAutoIdTimestamp(); if (request.isPrimaryRelocation()) { logger.debug("[phase1] closing engine on primary for shared filesystem recovery"); try { @@ -62,7 +63,7 @@ public RecoveryResponse recoverToTarget() throws IOException { shard.failShard("failed to close engine (phase1)", e); } } - prepareTargetForTranslog(0); + prepareTargetForTranslog(0, maxUnsafeAutoIdTimestamp); finalizeRecovery(); return 
response; } catch (Exception e) { diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/StartRecoveryRequest.java b/core/src/main/java/org/elasticsearch/indices/recovery/StartRecoveryRequest.java index 9aa56fd8cb039..8e71491e211e8 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/StartRecoveryRequest.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/StartRecoveryRequest.java @@ -19,8 +19,9 @@ package org.elasticsearch.indices.recovery; +import org.elasticsearch.Version; import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.routing.RecoverySource; +import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.index.shard.ShardId; @@ -38,6 +39,8 @@ public class StartRecoveryRequest extends TransportRequest { private ShardId shardId; + private String targetAllocationId; + private DiscoveryNode sourceNode; private DiscoveryNode targetNode; @@ -55,9 +58,11 @@ public StartRecoveryRequest() { * @param sourceNode The node to recover from * @param targetNode The node to recover to */ - public StartRecoveryRequest(ShardId shardId, DiscoveryNode sourceNode, DiscoveryNode targetNode, Store.MetadataSnapshot metadataSnapshot, boolean primaryRelocation, long recoveryId) { + public StartRecoveryRequest(ShardId shardId, String targetAllocationId, DiscoveryNode sourceNode, DiscoveryNode targetNode, + Store.MetadataSnapshot metadataSnapshot, boolean primaryRelocation, long recoveryId) { this.recoveryId = recoveryId; this.shardId = shardId; + this.targetAllocationId = targetAllocationId; this.sourceNode = sourceNode; this.targetNode = targetNode; this.metadataSnapshot = metadataSnapshot; @@ -72,6 +77,11 @@ public ShardId shardId() { return shardId; } + @Nullable + public String targetAllocationId() { + return targetAllocationId; + } + public DiscoveryNode sourceNode() { return sourceNode; } @@ -93,6 +103,11 @@ public void readFrom(StreamInput in) throws IOException { super.readFrom(in); recoveryId = in.readLong(); shardId = ShardId.readShardId(in); + if (in.getVersion().onOrAfter(Version.V_5_4_0)) { + targetAllocationId = in.readString(); + } else { + targetAllocationId = null; + } sourceNode = new DiscoveryNode(in); targetNode = new DiscoveryNode(in); metadataSnapshot = new Store.MetadataSnapshot(in); @@ -104,6 +119,9 @@ public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); out.writeLong(recoveryId); shardId.writeTo(out); + if (out.getVersion().onOrAfter(Version.V_5_4_0)) { + out.writeString(targetAllocationId); + } sourceNode.writeTo(out); targetNode.writeTo(out); metadataSnapshot.writeTo(out); diff --git a/core/src/main/java/org/elasticsearch/indices/store/IndicesStore.java b/core/src/main/java/org/elasticsearch/indices/store/IndicesStore.java index 439806b454b4c..120deb2f25f45 100644 --- a/core/src/main/java/org/elasticsearch/indices/store/IndicesStore.java +++ b/core/src/main/java/org/elasticsearch/indices/store/IndicesStore.java @@ -26,11 +26,13 @@ import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.ClusterStateListener; import org.elasticsearch.cluster.ClusterStateObserver; -import org.elasticsearch.cluster.ClusterStateUpdateTask; +import org.elasticsearch.cluster.LocalClusterUpdateTask; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.IndexRoutingTable; 
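`StartRecoveryRequest` above writes the new `targetAllocationId` only when the remote node is on or after `V_5_4_0`, keeping the wire format compatible in mixed-version clusters. A generic sketch of that pattern with plain `java.io` streams; the version constant and class here are placeholders, not the Elasticsearch stream API:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class VersionedCodec {
    static final int V_WITH_ALLOCATION_ID = 5_400_000; // placeholder version id

    static void write(DataOutputStream out, int remoteVersion, String targetAllocationId) throws IOException {
        if (remoteVersion >= V_WITH_ALLOCATION_ID) {
            out.writeUTF(targetAllocationId); // newer nodes expect the extra field
        }
        // older nodes receive the original, shorter payload
    }

    static String read(DataInputStream in, int remoteVersion) throws IOException {
        if (remoteVersion >= V_WITH_ALLOCATION_ID) {
            return in.readUTF();
        }
        return null; // field stays null when talking to pre-upgrade nodes
    }
}
```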
import org.elasticsearch.cluster.routing.IndexShardRoutingTable; +import org.elasticsearch.cluster.routing.RoutingNode; +import org.elasticsearch.cluster.routing.RoutingTable; import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.collect.Tuple; @@ -62,7 +64,10 @@ import java.io.IOException; import java.util.ArrayList; import java.util.EnumSet; +import java.util.HashSet; +import java.util.Iterator; import java.util.List; +import java.util.Set; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; @@ -82,6 +87,9 @@ public class IndicesStore extends AbstractComponent implements ClusterStateListe private final TransportService transportService; private final ThreadPool threadPool; + // Cache successful shard deletion checks to prevent unnecessary file system lookups + private final Set folderNotFoundCache = new HashSet<>(); + private TimeValue deleteShardTimeout; @Inject @@ -94,12 +102,20 @@ public IndicesStore(Settings settings, IndicesService indicesService, this.threadPool = threadPool; transportService.registerRequestHandler(ACTION_SHARD_EXISTS, ShardActiveRequest::new, ThreadPool.Names.SAME, new ShardActiveRequestHandler()); this.deleteShardTimeout = INDICES_STORE_DELETE_SHARD_TIMEOUT.get(settings); - clusterService.addLast(this); + // Doesn't make sense to delete shards on non-data nodes + if (DiscoveryNode.isDataNode(settings)) { + // we double check nothing has changed when responses come back from other nodes. + // it's easier to do that check when the current cluster state is visible. + // also it's good in general to let things settle down + clusterService.addListener(this); + } } @Override public void close() { - clusterService.remove(this); + if (DiscoveryNode.isDataNode(settings)) { + clusterService.removeListener(this); + } } @Override @@ -112,11 +128,31 @@ public void clusterChanged(ClusterChangedEvent event) { return; } - for (IndexRoutingTable indexRoutingTable : event.state().routingTable()) { + RoutingTable routingTable = event.state().routingTable(); + + // remove entries from cache that don't exist in the routing table anymore (either closed or deleted indices) + // - removing shard data of deleted indices is handled by IndicesClusterStateService + // - closed indices don't need to be removed from the cache but we do it anyway for code simplicity + for (Iterator it = folderNotFoundCache.iterator(); it.hasNext(); ) { + ShardId shardId = it.next(); + if (routingTable.hasIndex(shardId.getIndex()) == false) { + it.remove(); + } + } + // remove entries from cache which are allocated to this node + final String localNodeId = event.state().nodes().getLocalNodeId(); + RoutingNode localRoutingNode = event.state().getRoutingNodes().node(localNodeId); + if (localRoutingNode != null) { + for (ShardRouting routing : localRoutingNode) { + folderNotFoundCache.remove(routing.shardId()); + } + } + + for (IndexRoutingTable indexRoutingTable : routingTable) { // Note, closed indices will not have any routing information, so won't be deleted for (IndexShardRoutingTable indexShardRoutingTable : indexRoutingTable) { - if (shardCanBeDeleted(event.state(), indexShardRoutingTable)) { - ShardId shardId = indexShardRoutingTable.shardId(); + ShardId shardId = indexShardRoutingTable.shardId(); + if (folderNotFoundCache.contains(shardId) == false && shardCanBeDeleted(localNodeId, indexShardRoutingTable)) { IndexService indexService = 
indicesService.indexService(indexRoutingTable.getIndex()); final IndexSettings indexSettings; if (indexService == null) { @@ -125,15 +161,33 @@ public void clusterChanged(ClusterChangedEvent event) { } else { indexSettings = indexService.getIndexSettings(); } - if (indicesService.canDeleteShardContent(shardId, indexSettings)) { - deleteShardIfExistElseWhere(event.state(), indexShardRoutingTable); + IndicesService.ShardDeletionCheckResult shardDeletionCheckResult = indicesService.canDeleteShardContent(shardId, indexSettings); + switch (shardDeletionCheckResult) { + case FOLDER_FOUND_CAN_DELETE: + deleteShardIfExistElseWhere(event.state(), indexShardRoutingTable); + break; + case NO_FOLDER_FOUND: + folderNotFoundCache.add(shardId); + break; + case NO_LOCAL_STORAGE: + assert false : "shard deletion only runs on data nodes which always have local storage"; + // nothing to do + break; + case STILL_ALLOCATED: + // nothing to do + break; + case SHARED_FILE_SYSTEM: + // nothing to do + break; + default: + assert false : "unknown shard deletion check result: " + shardDeletionCheckResult; } } } } } - boolean shardCanBeDeleted(ClusterState state, IndexShardRoutingTable indexShardRoutingTable) { + static boolean shardCanBeDeleted(String localNodeId, IndexShardRoutingTable indexShardRoutingTable) { // a shard can be deleted if all its copies are active, and its not allocated on this node if (indexShardRoutingTable.size() == 0) { // should not really happen, there should always be at least 1 (primary) shard in a @@ -143,27 +197,12 @@ boolean shardCanBeDeleted(ClusterState state, IndexShardRoutingTable indexShardR for (ShardRouting shardRouting : indexShardRoutingTable) { // be conservative here, check on started, not even active - if (!shardRouting.started()) { + if (shardRouting.started() == false) { return false; } - // if the allocated or relocation node id doesn't exists in the cluster state it may be a stale node, - // make sure we don't do anything with this until the routing table has properly been rerouted to reflect - // the fact that the node does not exists - DiscoveryNode node = state.nodes().get(shardRouting.currentNodeId()); - if (node == null) { - return false; - } - if (shardRouting.relocatingNodeId() != null) { - node = state.nodes().get(shardRouting.relocatingNodeId()); - if (node == null) { - return false; - } - } - - // check if shard is active on the current node or is getting relocated to the our node - String localNodeId = state.getNodes().getLocalNode().getId(); - if (localNodeId.equals(shardRouting.currentNodeId()) || localNodeId.equals(shardRouting.relocatingNodeId())) { + // check if shard is active on the current node + if (localNodeId.equals(shardRouting.currentNodeId())) { return false; } } @@ -176,19 +215,13 @@ private void deleteShardIfExistElseWhere(ClusterState state, IndexShardRoutingTa String indexUUID = indexShardRoutingTable.shardId().getIndex().getUUID(); ClusterName clusterName = state.getClusterName(); for (ShardRouting shardRouting : indexShardRoutingTable) { - // Node can't be null, because otherwise shardCanBeDeleted() would have returned false + assert shardRouting.started() : "expected started shard but was " + shardRouting; DiscoveryNode currentNode = state.nodes().get(shardRouting.currentNodeId()); - assert currentNode != null; - requests.add(new Tuple<>(currentNode, new ShardActiveRequest(clusterName, indexUUID, shardRouting.shardId(), deleteShardTimeout))); - if (shardRouting.relocatingNodeId() != null) { - DiscoveryNode relocatingNode = 
state.nodes().get(shardRouting.relocatingNodeId()); - assert relocatingNode != null; - requests.add(new Tuple<>(relocatingNode, new ShardActiveRequest(clusterName, indexUUID, shardRouting.shardId(), deleteShardTimeout))); - } } - ShardActiveResponseHandler responseHandler = new ShardActiveResponseHandler(indexShardRoutingTable.shardId(), state, requests.size()); + ShardActiveResponseHandler responseHandler = new ShardActiveResponseHandler(indexShardRoutingTable.shardId(), state.getVersion(), + requests.size()); for (Tuple request : requests) { logger.trace("{} sending shard active check to {}", request.v2().shardId, request.v1()); transportService.sendRequest(request.v1(), ACTION_SHARD_EXISTS, request.v2(), responseHandler); @@ -199,14 +232,14 @@ private class ShardActiveResponseHandler implements TransportResponseHandler execute(ClusterState currentState) throws Exception { + if (clusterStateVersion != currentState.getVersion()) { + logger.trace("not deleting shard {}, the update task state version[{}] is not equal to cluster state before shard active api call [{}]", shardId, currentState.getVersion(), clusterStateVersion); + return unchanged(); } try { indicesService.deleteShardStore("no longer used", shardId, currentState); } catch (Exception ex) { logger.debug((Supplier) () -> new ParameterizedMessage("{} failed to delete unallocated shard, ignoring", shardId), ex); } - return currentState; + return unchanged(); } @Override @@ -330,16 +358,13 @@ public void sendResult(boolean shardActive) { logger.error((Supplier) () -> new ParameterizedMessage("failed send response for shard active while trying to delete shard {} - shard will probably not be removed", request.shardId), e); } } - }, new ClusterStateObserver.ValidationPredicate() { - @Override - protected boolean validate(ClusterState newState) { - // the shard is not there in which case we want to send back a false (shard is not active), so the cluster state listener must be notified - // or the shard is active in which case we want to send back that the shard is active - // here we could also evaluate the cluster state and get the information from there. we - // don't do it because we would have to write another method for this that would have the same effect - IndexShard indexShard = getShard(request); - return indexShard == null || shardActive(indexShard); - } + }, newState -> { + // the shard is not there in which case we want to send back a false (shard is not active), so the cluster state listener must be notified + // or the shard is active in which case we want to send back that the shard is active + // here we could also evaluate the cluster state and get the information from there. 
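The IndicesStore changes above add a `folderNotFoundCache` of shards whose folders were already confirmed absent, pruned whenever the routing table drops an index or a shard becomes locally allocated again. A self-contained sketch of pruning such a negative-result cache with an explicit iterator (all names invented):

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;
import java.util.function.Predicate;

class NegativeResultCache {
    private final Set<String> folderNotFound = new HashSet<>();

    void markMissing(String shardId) {
        folderNotFound.add(shardId);
    }

    boolean isKnownMissing(String shardId) {
        return folderNotFound.contains(shardId);
    }

    // drop cached entries whose index is gone from the routing table,
    // and entries for shards that are now allocated locally again
    void prune(Predicate<String> stillInRoutingTable, Set<String> locallyAllocated) {
        for (Iterator<String> it = folderNotFound.iterator(); it.hasNext(); ) {
            String shardId = it.next();
            if (stillInRoutingTable.test(shardId) == false) {
                it.remove();
            }
        }
        folderNotFound.removeAll(locallyAllocated);
    }
}
```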
we + // don't do it because we would have to write another method for this that would have the same effect + IndexShard currentShard = getShard(request); + return currentShard == null || shardActive(currentShard); }); } } @@ -374,7 +399,7 @@ private static class ShardActiveRequest extends TransportRequest { private String indexUUID; private ShardId shardId; - public ShardActiveRequest() { + ShardActiveRequest() { } ShardActiveRequest(ClusterName clusterName, String indexUUID, ShardId shardId, TimeValue timeout) { diff --git a/core/src/main/java/org/elasticsearch/indices/store/TransportNodesListShardStoreMetaData.java b/core/src/main/java/org/elasticsearch/indices/store/TransportNodesListShardStoreMetaData.java index 341b0e57858b0..acfd3e1233af6 100644 --- a/core/src/main/java/org/elasticsearch/indices/store/TransportNodesListShardStoreMetaData.java +++ b/core/src/main/java/org/elasticsearch/indices/store/TransportNodesListShardStoreMetaData.java @@ -39,6 +39,7 @@ import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.env.NodeEnvironment; import org.elasticsearch.gateway.AsyncShardFetch; import org.elasticsearch.index.IndexService; @@ -133,7 +134,8 @@ private StoreFilesMetaData listStoreMetaData(ShardId shardId) throws IOException // we may send this requests while processing the cluster state that recovered the index // sometimes the request comes in before the local node processed that cluster state // in such cases we can load it from disk - metaData = IndexMetaData.FORMAT.loadLatestState(logger, nodeEnv.indexPaths(shardId.getIndex())); + metaData = IndexMetaData.FORMAT.loadLatestState(logger, NamedXContentRegistry.EMPTY, + nodeEnv.indexPaths(shardId.getIndex())); } if (metaData == null) { logger.trace("{} node doesn't have meta data for the requests index, responding with empty", shardId); @@ -160,11 +162,6 @@ private StoreFilesMetaData listStoreMetaData(ShardId shardId) throws IOException } } - @Override - protected boolean accumulateExceptions() { - return true; - } - public static class StoreFilesMetaData implements Iterable, Streamable { private ShardId shardId; Store.MetadataSnapshot metadataSnapshot; diff --git a/core/src/main/java/org/elasticsearch/indices/ttl/IndicesTTLService.java b/core/src/main/java/org/elasticsearch/indices/ttl/IndicesTTLService.java index 3eea62273ade4..d7f1ef6993b14 100644 --- a/core/src/main/java/org/elasticsearch/indices/ttl/IndicesTTLService.java +++ b/core/src/main/java/org/elasticsearch/indices/ttl/IndicesTTLService.java @@ -115,7 +115,7 @@ private class PurgerThread extends Thread { private final CountDownLatch shutdownLatch = new CountDownLatch(1); - public PurgerThread(String name, TimeValue interval) { + PurgerThread(String name, TimeValue interval) { super(name); setDaemon(true); this.notifier = new Notifier(interval); @@ -197,7 +197,8 @@ public TimeValue getInterval() { private void purgeShards(List shardsToPurge) { for (IndexShard shardToPurge : shardsToPurge) { - Query query = shardToPurge.mapperService().fullName(TTLFieldMapper.NAME).rangeQuery(null, System.currentTimeMillis(), false, true); + Query query = shardToPurge.mapperService().fullName(TTLFieldMapper.NAME).rangeQuery(null, System.currentTimeMillis(), false, + true, null); Engine.Searcher searcher = shardToPurge.acquireSearcher("indices_ttl"); try { logger.debug("[{}][{}] purging shard", 
shardToPurge.routingEntry().index(), shardToPurge.routingEntry().id()); @@ -226,7 +227,7 @@ private static class DocToPurge { public final long version; public final String routing; - public DocToPurge(String type, String id, long version, String routing) { + DocToPurge(String type, String id, long version, String routing) { this.type = type; this.id = id; this.version = version; @@ -239,7 +240,7 @@ private class ExpiredDocsCollector extends SimpleCollector { private List docsToPurge = new ArrayList<>(); private NumericDocValues versions; - public ExpiredDocsCollector() { + ExpiredDocsCollector() { } @Override @@ -319,7 +320,7 @@ private static final class Notifier { private final Condition condition = lock.newCondition(); private volatile TimeValue timeout; - public Notifier(TimeValue timeout) { + Notifier(TimeValue timeout) { assert timeout != null; this.timeout = timeout; } diff --git a/core/src/main/java/org/elasticsearch/ingest/ConfigurationUtils.java b/core/src/main/java/org/elasticsearch/ingest/ConfigurationUtils.java index 908e344698066..08669188a9fdb 100644 --- a/core/src/main/java/org/elasticsearch/ingest/ConfigurationUtils.java +++ b/core/src/main/java/org/elasticsearch/ingest/ConfigurationUtils.java @@ -224,7 +224,13 @@ public static Object readObject(String processorType, String processorTag, Map readProcessorConfigs(List>> processorConfigs, Map processorFactories) throws Exception { + Exception exception = null; List processors = new ArrayList<>(); if (processorConfigs != null) { for (Map> processorConfigWithKey : processorConfigs) { for (Map.Entry> entry : processorConfigWithKey.entrySet()) { - processors.add(readProcessor(processorFactories, entry.getKey(), entry.getValue())); + try { + processors.add(readProcessor(processorFactories, entry.getKey(), entry.getValue())); + } catch (Exception e) { + exception = ExceptionsHelper.useOrSuppress(exception, e); + } } } } + if (exception != null) { + throw exception; + } + return processors; } @@ -274,6 +289,7 @@ private static void addHeadersToException(ElasticsearchException exception, Stri public static Processor readProcessor(Map processorFactories, String type, Map config) throws Exception { + String tag = ConfigurationUtils.readOptionalStringProperty(null, null, config, TAG_KEY); Processor.Factory factory = processorFactories.get(type); if (factory != null) { boolean ignoreFailure = ConfigurationUtils.readBooleanProperty(null, null, config, "ignore_failure", false); @@ -281,7 +297,6 @@ public static Processor readProcessor(Map processorFa ConfigurationUtils.readOptionalList(null, null, config, Pipeline.ON_FAILURE_KEY); List onFailureProcessors = readProcessorConfigs(onFailureProcessorConfigs, processorFactories); - String tag = ConfigurationUtils.readOptionalStringProperty(null, null, config, TAG_KEY); if (onFailureProcessorConfigs != null && onFailureProcessors.isEmpty()) { throw newConfigurationException(type, tag, Pipeline.ON_FAILURE_KEY, @@ -303,6 +318,6 @@ public static Processor readProcessor(Map processorFa throw newConfigurationException(type, tag, null, e); } } - throw new ElasticsearchParseException("No processor type exists with name [" + type + "]"); + throw newConfigurationException(type, tag, null, "No processor type exists with name [" + type + "]"); } } diff --git a/core/src/main/java/org/elasticsearch/ingest/IngestDocument.java b/core/src/main/java/org/elasticsearch/ingest/IngestDocument.java index 8010c6e6c3f05..937a1d5e23a69 100644 --- a/core/src/main/java/org/elasticsearch/ingest/IngestDocument.java +++ 
b/core/src/main/java/org/elasticsearch/ingest/IngestDocument.java @@ -20,6 +20,7 @@ package org.elasticsearch.ingest; import org.elasticsearch.common.Strings; +import org.elasticsearch.common.settings.Setting; import org.elasticsearch.index.mapper.IdFieldMapper; import org.elasticsearch.index.mapper.IndexFieldMapper; import org.elasticsearch.index.mapper.ParentFieldMapper; @@ -29,18 +30,17 @@ import org.elasticsearch.index.mapper.TimestampFieldMapper; import org.elasticsearch.index.mapper.TypeFieldMapper; -import java.text.DateFormat; -import java.text.SimpleDateFormat; +import java.time.ZoneOffset; +import java.time.ZonedDateTime; import java.util.ArrayList; import java.util.Arrays; import java.util.Base64; import java.util.Date; +import java.util.EnumMap; import java.util.HashMap; import java.util.List; -import java.util.Locale; import java.util.Map; import java.util.Objects; -import java.util.TimeZone; /** * Represents a single document being captured before indexing and holds the source and metadata (like id, type and index). @@ -58,6 +58,11 @@ public final class IngestDocument { public IngestDocument(String index, String type, String id, String routing, String parent, String timestamp, String ttl, Map source) { + this(index, type, id, routing, parent, timestamp, ttl, source, false); + } + + public IngestDocument(String index, String type, String id, String routing, String parent, String timestamp, + String ttl, Map source, boolean newDateFormat) { this.sourceAndMetadata = new HashMap<>(); this.sourceAndMetadata.putAll(source); this.sourceAndMetadata.put(MetaData.INDEX.getFieldName(), index); @@ -77,9 +82,11 @@ public IngestDocument(String index, String type, String id, String routing, Stri } this.ingestMetadata = new HashMap<>(); - DateFormat df = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZZ", Locale.ROOT); - df.setTimeZone(TimeZone.getTimeZone("UTC")); - this.ingestMetadata.put(TIMESTAMP, df.format(new Date())); + if (newDateFormat) { + this.ingestMetadata.put(TIMESTAMP, ZonedDateTime.now(ZoneOffset.UTC)); + } else { + this.ingestMetadata.put(TIMESTAMP, new Date()); + } } /** @@ -116,6 +123,28 @@ public T getFieldValue(String path, Class clazz) { return cast(path, context, clazz); } + /** + * Returns the value contained in the document for the provided path + * + * @param path The path within the document in dot-notation + * @param clazz The expected class of the field value + * @param ignoreMissing The flag to determine whether to throw an exception when `path` is not found in the document. + * @return the value for the provided path if existing, null otherwise. + * @throws IllegalArgumentException only if ignoreMissing is false and the path is null, empty, invalid, if the field doesn't exist + * or if the field that is found at the provided path is not of the expected type. + */ + public T getFieldValue(String path, Class clazz, boolean ignoreMissing) { + try { + return getFieldValue(path, clazz); + } catch (IllegalArgumentException e) { + if (ignoreMissing && hasField(path) != true) { + return null; + } else { + throw e; + } + } + } + /** * Returns the value contained in the document with the provided templated path * @param pathTemplate The path within the document in dot-notation @@ -138,8 +167,24 @@ public T getFieldValue(TemplateService.Template pathTemplate, Class clazz * or if the field that is found at the provided path is not of the expected type. 
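The IngestDocument constructor above now stores the ingest timestamp as a `ZonedDateTime` in UTC when the new date format is enabled, instead of eagerly formatting a string with `SimpleDateFormat`. A small sketch contrasting the two, with illustrative method names:

```java
import java.text.SimpleDateFormat;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

class IngestTimestamps {
    // legacy behaviour (the removed lines): an eagerly formatted UTC string
    static String legacyTimestamp() {
        SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZZ", Locale.ROOT);
        df.setTimeZone(TimeZone.getTimeZone("UTC"));
        return df.format(new Date());
    }

    // new behaviour: keep the temporal object and format only when rendering
    static ZonedDateTime newTimestamp() {
        return ZonedDateTime.now(ZoneOffset.UTC);
    }
}
```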
*/ public byte[] getFieldValueAsBytes(String path) { - Object object = getFieldValue(path, Object.class); - if (object instanceof byte[]) { + return getFieldValueAsBytes(path, false); + } + + /** + * Returns the value contained in the document for the provided path as a byte array. + * If the path value is a string, a base64 decode operation will happen. + * If the path value is a byte array, it is just returned + * @param path The path within the document in dot-notation + * @param ignoreMissing The flag to determine whether to throw an exception when `path` is not found in the document. + * @return the byte array for the provided path if existing + * @throws IllegalArgumentException if the path is null, empty, invalid, if the field doesn't exist + * or if the field that is found at the provided path is not of the expected type. + */ + public byte[] getFieldValueAsBytes(String path, boolean ignoreMissing) { + Object object = getFieldValue(path, Object.class, ignoreMissing); + if (object == null) { + return null; + } else if (object instanceof byte[]) { return (byte[]) object; } else if (object instanceof String) { return Base64.getDecoder().decode(object.toString()); @@ -531,7 +576,7 @@ private Map createTemplateModel() { * Metadata fields that used to be accessible as ordinary top level fields will be removed as part of this call. */ public Map extractMetadata() { - Map metadataMap = new HashMap<>(); + Map metadataMap = new EnumMap<>(MetaData.class); for (MetaData metaData : MetaData.values()) { metadataMap.put(metaData, cast(metaData.getFieldName(), sourceAndMetadata.remove(metaData.getFieldName()), String.class)); } @@ -582,6 +627,11 @@ private static Object deepCopy(Object value) { value instanceof Long || value instanceof Float || value instanceof Double || value instanceof Boolean) { return value; + } else if (value instanceof Date) { + return ((Date) value).clone(); + } else if (value instanceof ZonedDateTime) { + ZonedDateTime zonedDateTime = (ZonedDateTime) value; + return ZonedDateTime.of(zonedDateTime.toLocalDate(), zonedDateTime.toLocalTime(), zonedDateTime.getZone()); } else { throw new IllegalArgumentException("unexpected value type [" + value.getClass() + "]"); } diff --git a/core/src/main/java/org/elasticsearch/ingest/IngestMetadata.java b/core/src/main/java/org/elasticsearch/ingest/IngestMetadata.java index 9ad369e22d421..ca8a5df845014 100644 --- a/core/src/main/java/org/elasticsearch/ingest/IngestMetadata.java +++ b/core/src/main/java/org/elasticsearch/ingest/IngestMetadata.java @@ -21,10 +21,9 @@ import org.elasticsearch.cluster.Diff; import org.elasticsearch.cluster.DiffableUtils; +import org.elasticsearch.cluster.NamedDiff; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ObjectParser; @@ -45,9 +44,8 @@ public final class IngestMetadata implements MetaData.Custom { public static final String TYPE = "ingest"; - public static final IngestMetadata PROTO = new IngestMetadata(); private static final ParseField PIPELINES_FIELD = new ParseField("pipeline"); - private static final ObjectParser, ParseFieldMatcherSupplier> INGEST_METADATA_PARSER = new ObjectParser<>( + private static final ObjectParser, Void> INGEST_METADATA_PARSER = new ObjectParser<>( "ingest_metadata", 
ArrayList::new); static { @@ -67,7 +65,7 @@ public IngestMetadata(Map pipelines) { } @Override - public String type() { + public String getWriteableName() { return TYPE; } @@ -75,15 +73,14 @@ public Map getPipelines() { return pipelines; } - @Override - public IngestMetadata readFrom(StreamInput in) throws IOException { + public IngestMetadata(StreamInput in) throws IOException { int size = in.readVInt(); Map pipelines = new HashMap<>(size); for (int i = 0; i < size; i++) { - PipelineConfiguration pipeline = PipelineConfiguration.readPipelineConfiguration(in); + PipelineConfiguration pipeline = PipelineConfiguration.readFrom(in); pipelines.put(pipeline.getId(), pipeline); } - return new IngestMetadata(pipelines); + this.pipelines = Collections.unmodifiableMap(pipelines); } @Override @@ -94,10 +91,9 @@ public void writeTo(StreamOutput out) throws IOException { } } - @Override - public IngestMetadata fromXContent(XContentParser parser) throws IOException { + public static IngestMetadata fromXContent(XContentParser parser) throws IOException { Map pipelines = new HashMap<>(); - List configs = INGEST_METADATA_PARSER.parse(parser, () -> ParseFieldMatcher.STRICT); + List configs = INGEST_METADATA_PARSER.parse(parser, null); for (PipelineConfiguration pipeline : configs) { pipelines.put(pipeline.getId(), pipeline); } @@ -116,7 +112,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws @Override public EnumSet context() { - return MetaData.API_AND_GATEWAY; + return MetaData.ALL_CONTEXTS; } @Override @@ -124,12 +120,11 @@ public Diff diff(MetaData.Custom before) { return new IngestMetadataDiff((IngestMetadata) before, this); } - @Override - public Diff readDiffFrom(StreamInput in) throws IOException { + public static NamedDiff readDiffFrom(StreamInput in) throws IOException { return new IngestMetadataDiff(in); } - static class IngestMetadataDiff implements Diff { + static class IngestMetadataDiff implements NamedDiff { final Diff> pipelines; @@ -137,8 +132,9 @@ static class IngestMetadataDiff implements Diff { this.pipelines = DiffableUtils.diff(before.pipelines, after.pipelines, DiffableUtils.getStringKeySerializer()); } - public IngestMetadataDiff(StreamInput in) throws IOException { - pipelines = DiffableUtils.readJdkMapDiff(in, DiffableUtils.getStringKeySerializer(), PipelineConfiguration.PROTOTYPE); + IngestMetadataDiff(StreamInput in) throws IOException { + pipelines = DiffableUtils.readJdkMapDiff(in, DiffableUtils.getStringKeySerializer(), PipelineConfiguration::readFrom, + PipelineConfiguration::readDiffFrom); } @Override @@ -150,6 +146,11 @@ public MetaData.Custom apply(MetaData.Custom part) { public void writeTo(StreamOutput out) throws IOException { pipelines.writeTo(out); } + + @Override + public String getWriteableName() { + return TYPE; + } } @Override diff --git a/core/src/main/java/org/elasticsearch/ingest/IngestService.java b/core/src/main/java/org/elasticsearch/ingest/IngestService.java index 5249ed7a7dc84..1455e37588a8c 100644 --- a/core/src/main/java/org/elasticsearch/ingest/IngestService.java +++ b/core/src/main/java/org/elasticsearch/ingest/IngestService.java @@ -25,6 +25,8 @@ import java.util.List; import java.util.Map; +import org.elasticsearch.common.settings.ClusterSettings; +import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; import org.elasticsearch.index.analysis.AnalysisRegistry; @@ -32,15 +34,19 @@ import org.elasticsearch.script.ScriptService; 
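The `deepCopy` change in the IngestDocument hunk above clones mutable `Date` values and rebuilds `ZonedDateTime` instances so processors cannot mutate state shared across copies. A minimal sketch of that branch, mirroring the added lines:

```java
import java.time.ZonedDateTime;
import java.util.Date;

final class DeepCopies {
    static Object copyTemporal(Object value) {
        if (value instanceof Date) {
            return ((Date) value).clone(); // Date is mutable, so clone it defensively
        } else if (value instanceof ZonedDateTime) {
            ZonedDateTime zdt = (ZonedDateTime) value;
            // ZonedDateTime is immutable; rebuilding it matches the hunk above
            return ZonedDateTime.of(zdt.toLocalDate(), zdt.toLocalTime(), zdt.getZone());
        }
        throw new IllegalArgumentException("unexpected value type [" + value.getClass() + "]");
    }
}
```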
import org.elasticsearch.threadpool.ThreadPool; +import static org.elasticsearch.common.settings.Setting.Property; + /** * Holder class for several ingest related services. */ public class IngestService { + public static final Setting NEW_INGEST_DATE_FORMAT = + Setting.boolSetting("ingest.new_date_format", false, Property.NodeScope, Property.Dynamic, Property.Deprecated); private final PipelineStore pipelineStore; private final PipelineExecutionService pipelineExecutionService; - public IngestService(Settings settings, ThreadPool threadPool, + public IngestService(ClusterSettings clusterSettings, Settings settings, ThreadPool threadPool, Environment env, ScriptService scriptService, AnalysisRegistry analysisRegistry, List ingestPlugins) { @@ -56,7 +62,7 @@ public IngestService(Settings settings, ThreadPool threadPool, } } } - this.pipelineStore = new PipelineStore(settings, Collections.unmodifiableMap(processorFactories)); + this.pipelineStore = new PipelineStore(clusterSettings, settings, Collections.unmodifiableMap(processorFactories)); this.pipelineExecutionService = new PipelineExecutionService(pipelineStore, threadPool); } diff --git a/core/src/main/java/org/elasticsearch/ingest/IngestStats.java b/core/src/main/java/org/elasticsearch/ingest/IngestStats.java index dee806e0230e7..add02a5da9023 100644 --- a/core/src/main/java/org/elasticsearch/ingest/IngestStats.java +++ b/core/src/main/java/org/elasticsearch/ingest/IngestStats.java @@ -54,7 +54,7 @@ public IngestStats(StreamInput in) throws IOException { @Override public void writeTo(StreamOutput out) throws IOException { totalStats.writeTo(out); - out.writeVLong(statsPerPipeline.size()); + out.writeVInt(statsPerPipeline.size()); for (Map.Entry entry : statsPerPipeline.entrySet()) { out.writeString(entry.getKey()); entry.getValue().writeTo(out); diff --git a/core/src/main/java/org/elasticsearch/ingest/InternalTemplateService.java b/core/src/main/java/org/elasticsearch/ingest/InternalTemplateService.java index 677be3b4b0cc6..b5aa2dbc51a8a 100644 --- a/core/src/main/java/org/elasticsearch/ingest/InternalTemplateService.java +++ b/core/src/main/java/org/elasticsearch/ingest/InternalTemplateService.java @@ -19,15 +19,16 @@ package org.elasticsearch.ingest; +import java.util.Collections; +import java.util.Map; + +import org.apache.logging.log4j.util.Supplier; import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.script.CompiledScript; -import org.elasticsearch.script.ExecutableScript; import org.elasticsearch.script.Script; import org.elasticsearch.script.ScriptContext; import org.elasticsearch.script.ScriptService; - -import java.util.Collections; -import java.util.Map; +import org.elasticsearch.script.ScriptType; +import org.elasticsearch.template.CompiledTemplate; public class InternalTemplateService implements TemplateService { @@ -42,20 +43,12 @@ public Template compile(String template) { int mustacheStart = template.indexOf("{{"); int mustacheEnd = template.indexOf("}}"); if (mustacheStart != -1 && mustacheEnd != -1 && mustacheStart < mustacheEnd) { - Script script = new Script(template, ScriptService.ScriptType.INLINE, "mustache", Collections.emptyMap()); - CompiledScript compiledScript = scriptService.compile( - script, - ScriptContext.Standard.INGEST, - Collections.emptyMap()); + Script script = new Script(ScriptType.INLINE, "mustache", template, Collections.emptyMap()); + CompiledTemplate compiledTemplate = scriptService.compileTemplate(script, ScriptContext.Standard.INGEST); return new Template() { 
@Override public String execute(Map model) { - ExecutableScript executableScript = scriptService.executable(compiledScript, model); - Object result = executableScript.run(); - if (result instanceof BytesReference) { - return ((BytesReference) result).utf8ToString(); - } - return String.valueOf(result); + return compiledTemplate.run(model).utf8ToString(); } @Override @@ -72,7 +65,7 @@ class StringTemplate implements Template { private final String value; - public StringTemplate(String value) { + StringTemplate(String value) { this.value = value; } diff --git a/core/src/main/java/org/elasticsearch/ingest/PipelineConfiguration.java b/core/src/main/java/org/elasticsearch/ingest/PipelineConfiguration.java index 56f4e2c6a79bc..1d7ba958f1498 100644 --- a/core/src/main/java/org/elasticsearch/ingest/PipelineConfiguration.java +++ b/core/src/main/java/org/elasticsearch/ingest/PipelineConfiguration.java @@ -19,60 +19,61 @@ package org.elasticsearch.ingest; +import org.elasticsearch.Version; import org.elasticsearch.cluster.AbstractDiffable; +import org.elasticsearch.cluster.Diff; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ContextParser; import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentHelper; -import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; import java.io.IOException; import java.util.Map; -import java.util.function.BiFunction; +import java.util.Objects; /** * Encapsulates a pipeline's id and configuration as a blob */ public final class PipelineConfiguration extends AbstractDiffable implements ToXContent { - static final PipelineConfiguration PROTOTYPE = new PipelineConfiguration(null, null); - - public static PipelineConfiguration readPipelineConfiguration(StreamInput in) throws IOException { - return PROTOTYPE.readFrom(in); - } - private static final ObjectParser PARSER = new ObjectParser<>("pipeline_config", Builder::new); + private static final ObjectParser PARSER = new ObjectParser<>("pipeline_config", Builder::new); static { PARSER.declareString(Builder::setId, new ParseField("id")); PARSER.declareField((parser, builder, aVoid) -> { XContentBuilder contentBuilder = XContentBuilder.builder(parser.contentType().xContent()); XContentHelper.copyCurrentStructure(contentBuilder.generator(), parser); - builder.setConfig(contentBuilder.bytes()); + builder.setConfig(contentBuilder.bytes(), contentBuilder.contentType()); }, new ParseField("config"), ObjectParser.ValueType.OBJECT); + } - public static BiFunction getParser() { - return (p, c) -> PARSER.apply(p ,c).build(); + public static ContextParser getParser() { + return (parser, context) -> PARSER.apply(parser, null).build(); } private static class Builder { private String id; private BytesReference config; + private XContentType xContentType; void setId(String id) { this.id = id; } - void setConfig(BytesReference config) { + void setConfig(BytesReference config, XContentType xContentType) { this.config = config; + this.xContentType = xContentType; } PipelineConfiguration build() { - return new 
PipelineConfiguration(id, config); + return new PipelineConfiguration(id, config, xContentType); } } @@ -81,10 +82,12 @@ PipelineConfiguration build() { // and the way the map of maps config is read requires a deep copy (it removes instead of gets entries to check for unused options) // also the get pipeline api just directly returns this to the caller private final BytesReference config; + private final XContentType xContentType; - public PipelineConfiguration(String id, BytesReference config) { - this.id = id; - this.config = config; + public PipelineConfiguration(String id, BytesReference config, XContentType xContentType) { + this.id = Objects.requireNonNull(id); + this.config = Objects.requireNonNull(config); + this.xContentType = Objects.requireNonNull(xContentType); } public String getId() { @@ -92,7 +95,17 @@ public String getId() { } public Map getConfigAsMap() { - return XContentHelper.convertToMap(config, true).v2(); + return XContentHelper.convertToMap(config, true, xContentType).v2(); + } + + // pkg-private for tests + XContentType getXContentType() { + return xContentType; + } + + // pkg-private for tests + BytesReference getConfig() { + return config; } @Override @@ -104,15 +117,27 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws return builder; } - @Override - public PipelineConfiguration readFrom(StreamInput in) throws IOException { - return new PipelineConfiguration(in.readString(), in.readBytesReference()); + public static PipelineConfiguration readFrom(StreamInput in) throws IOException { + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + return new PipelineConfiguration(in.readString(), in.readBytesReference(), XContentType.readFrom(in)); + } else { + final String id = in.readString(); + final BytesReference config = in.readBytesReference(); + return new PipelineConfiguration(id, config, XContentFactory.xContentType(config)); + } + } + + public static Diff readDiffFrom(StreamInput in) throws IOException { + return readDiffFrom(PipelineConfiguration::readFrom, in); } @Override public void writeTo(StreamOutput out) throws IOException { out.writeString(id); out.writeBytesReference(config); + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType.writeTo(out); + } } @Override diff --git a/core/src/main/java/org/elasticsearch/ingest/PipelineExecutionService.java b/core/src/main/java/org/elasticsearch/ingest/PipelineExecutionService.java index e714663653422..86fe0af75d2b0 100644 --- a/core/src/main/java/org/elasticsearch/ingest/PipelineExecutionService.java +++ b/core/src/main/java/org/elasticsearch/ingest/PipelineExecutionService.java @@ -19,10 +19,10 @@ package org.elasticsearch.ingest; -import org.elasticsearch.action.ActionRequest; +import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.cluster.ClusterChangedEvent; -import org.elasticsearch.cluster.ClusterStateListener; +import org.elasticsearch.cluster.ClusterStateApplier; import org.elasticsearch.common.Strings; import org.elasticsearch.common.metrics.CounterMetric; import org.elasticsearch.common.metrics.MeanMetric; @@ -38,7 +38,7 @@ import java.util.function.BiConsumer; import java.util.function.Consumer; -public class PipelineExecutionService implements ClusterStateListener { +public class PipelineExecutionService implements ClusterStateApplier { private final PipelineStore store; private final ThreadPool threadPool; @@ -68,7 +68,7 @@ protected void doRun() throws Exception { }); } - public void 
executeBulkRequest(Iterable> actionRequests, + public void executeBulkRequest(Iterable actionRequests, BiConsumer itemFailureHandler, Consumer completionHandler) { threadPool.executor(ThreadPool.Names.BULK).execute(new AbstractRunnable() { @@ -80,7 +80,7 @@ public void onFailure(Exception e) { @Override protected void doRun() throws Exception { - for (ActionRequest actionRequest : actionRequests) { + for (DocWriteRequest actionRequest : actionRequests) { if ((actionRequest instanceof IndexRequest)) { IndexRequest indexRequest = (IndexRequest) actionRequest; if (Strings.hasText(indexRequest.getPipeline())) { @@ -112,7 +112,7 @@ public IngestStats stats() { } @Override - public void clusterChanged(ClusterChangedEvent event) { + public void applyClusterState(ClusterChangedEvent event) { IngestMetadata ingestMetadata = event.state().getMetaData().custom(IngestMetadata.TYPE); if (ingestMetadata != null) { updatePipelineStats(ingestMetadata); @@ -162,7 +162,8 @@ private void innerExecute(IndexRequest indexRequest, Pipeline pipeline) throws E String timestamp = indexRequest.timestamp(); String ttl = indexRequest.ttl() == null ? null : indexRequest.ttl().toString(); Map sourceAsMap = indexRequest.sourceAsMap(); - IngestDocument ingestDocument = new IngestDocument(index, type, id, routing, parent, timestamp, ttl, sourceAsMap); + IngestDocument ingestDocument = new IngestDocument(index, type, id, routing, parent, timestamp, ttl, + sourceAsMap, store.isNewIngestDateFormat()); pipeline.execute(ingestDocument); Map metadataMap = ingestDocument.extractMetadata(); diff --git a/core/src/main/java/org/elasticsearch/ingest/PipelineStore.java b/core/src/main/java/org/elasticsearch/ingest/PipelineStore.java index 94850674e755c..327ab18907c56 100644 --- a/core/src/main/java/org/elasticsearch/ingest/PipelineStore.java +++ b/core/src/main/java/org/elasticsearch/ingest/PipelineStore.java @@ -19,13 +19,6 @@ package org.elasticsearch.ingest; -import java.util.ArrayList; -import java.util.Collections; -import java.util.HashMap; -import java.util.List; -import java.util.Map; -import java.util.Objects; - import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.ResourceNotFoundException; @@ -36,19 +29,30 @@ import org.elasticsearch.cluster.AckedClusterStateUpdateTask; import org.elasticsearch.cluster.ClusterChangedEvent; import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.ClusterStateListener; +import org.elasticsearch.cluster.ClusterStateApplier; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.regex.Regex; +import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentHelper; -public class PipelineStore extends AbstractComponent implements ClusterStateListener { +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Set; + +public class PipelineStore extends AbstractComponent implements ClusterStateApplier { private final Pipeline.Factory factory = new Pipeline.Factory(); private final Map processorFactories; + private volatile boolean newIngestDateFormat; // Ideally this should 
be in IngestMetadata class, but we don't have the processor factories around there. // We know of all the processor factories when a node with all its plugin have been initialized. Also some @@ -56,13 +60,19 @@ public class PipelineStore extends AbstractComponent implements ClusterStateList // are loaded, so in the cluster state we just save the pipeline config and here we keep the actual pipelines around. volatile Map pipelines = new HashMap<>(); - public PipelineStore(Settings settings, Map processorFactories) { + public PipelineStore(ClusterSettings clusterSettings, Settings settings, Map processorFactories) { super(settings); this.processorFactories = processorFactories; + this.newIngestDateFormat = IngestService.NEW_INGEST_DATE_FORMAT.get(settings); + clusterSettings.addSettingsUpdateConsumer(IngestService.NEW_INGEST_DATE_FORMAT, this::setNewIngestDateFormat); + } + + private void setNewIngestDateFormat(Boolean newIngestDateFormat) { + this.newIngestDateFormat = newIngestDateFormat; } @Override - public void clusterChanged(ClusterChangedEvent event) { + public void applyClusterState(ClusterChangedEvent event) { innerUpdatePipelines(event.previousState(), event.state()); } @@ -111,17 +121,26 @@ ClusterState innerDelete(DeletePipelineRequest request, ClusterState currentStat return currentState; } Map pipelines = currentIngestMetadata.getPipelines(); - if (pipelines.containsKey(request.getId()) == false) { + Set toRemove = new HashSet<>(); + for (String pipelineKey : pipelines.keySet()) { + if (Regex.simpleMatch(request.getId(), pipelineKey)) { + toRemove.add(pipelineKey); + } + } + if (toRemove.isEmpty() && Regex.isMatchAllPattern(request.getId()) == false) { throw new ResourceNotFoundException("pipeline [{}] is missing", request.getId()); - } else { - pipelines = new HashMap<>(pipelines); - pipelines.remove(request.getId()); - ClusterState.Builder newState = ClusterState.builder(currentState); - newState.metaData(MetaData.builder(currentState.getMetaData()) - .putCustom(IngestMetadata.TYPE, new IngestMetadata(pipelines)) - .build()); - return newState.build(); + } else if (toRemove.isEmpty()) { + return currentState; } + final Map pipelinesCopy = new HashMap<>(pipelines); + for (String key : toRemove) { + pipelinesCopy.remove(key); + } + ClusterState.Builder newState = ClusterState.builder(currentState); + newState.metaData(MetaData.builder(currentState.getMetaData()) + .putCustom(IngestMetadata.TYPE, new IngestMetadata(pipelinesCopy)) + .build()); + return newState.build(); } /** @@ -151,14 +170,14 @@ void validatePipeline(Map ingestInfos, PutPipelineReq throw new IllegalStateException("Ingest info is empty"); } - Map pipelineConfig = XContentHelper.convertToMap(request.getSource(), false).v2(); + Map pipelineConfig = XContentHelper.convertToMap(request.getSource(), false, request.getXContentType()).v2(); Pipeline pipeline = factory.create(request.getId(), pipelineConfig, processorFactories); - List exceptions = new ArrayList<>(); + List exceptions = new ArrayList<>(); for (Processor processor : pipeline.flattenAllProcessors()) { for (Map.Entry entry : ingestInfos.entrySet()) { if (entry.getValue().containsProcessor(processor.getType()) == false) { String message = "Processor type [" + processor.getType() + "] is not installed on node [" + entry.getKey() + "]"; - exceptions.add(new IllegalArgumentException(message)); + exceptions.add(ConfigurationUtils.newConfigurationException(processor.getType(), processor.getTag(), null, message)); } } } @@ -174,7 +193,7 @@ ClusterState 
innerPut(PutPipelineRequest request, ClusterState currentState) { pipelines = new HashMap<>(); } - pipelines.put(request.getId(), new PipelineConfiguration(request.getId(), request.getSource())); + pipelines.put(request.getId(), new PipelineConfiguration(request.getId(), request.getSource(), request.getXContentType())); ClusterState.Builder newState = ClusterState.builder(currentState); newState.metaData(MetaData.builder(currentState.getMetaData()) .putCustom(IngestMetadata.TYPE, new IngestMetadata(pipelines)) @@ -193,6 +212,10 @@ public Map getProcessorFactories() { return processorFactories; } + public boolean isNewIngestDateFormat() { + return newIngestDateFormat; + } + /** * @return pipeline configuration specified by id. If multiple ids or wildcards are specified multiple pipelines * may be returned diff --git a/core/src/main/java/org/elasticsearch/ingest/Processor.java b/core/src/main/java/org/elasticsearch/ingest/Processor.java index af4ea954dd546..228ca5f4930e4 100644 --- a/core/src/main/java/org/elasticsearch/ingest/Processor.java +++ b/core/src/main/java/org/elasticsearch/ingest/Processor.java @@ -29,6 +29,8 @@ /** * A processor implementation may modify the data belonging to a document. * Whether changes are made and what exactly is modified is up to the implementation. + * + * Processors may get called concurrently and thus need to be thread-safe. */ public interface Processor { diff --git a/core/src/main/java/org/elasticsearch/monitor/fs/FsInfo.java b/core/src/main/java/org/elasticsearch/monitor/fs/FsInfo.java index bf7dce9c0d5ec..98655b90787f7 100644 --- a/core/src/main/java/org/elasticsearch/monitor/fs/FsInfo.java +++ b/core/src/main/java/org/elasticsearch/monitor/fs/FsInfo.java @@ -135,9 +135,9 @@ private double addDouble(double current, double other) { } public void add(Path path) { - total = addLong(total, path.total); - free = addLong(free, path.free); - available = addLong(available, path.available); + total = FsProbe.adjustForHugeFilesystems(addLong(total, path.total)); + free = FsProbe.adjustForHugeFilesystems(addLong(free, path.free)); + available = FsProbe.adjustForHugeFilesystems(addLong(available, path.available)); if (path.spins != null && path.spins.booleanValue()) { // Spinning is contagious! spins = Boolean.TRUE; diff --git a/core/src/main/java/org/elasticsearch/monitor/fs/FsProbe.java b/core/src/main/java/org/elasticsearch/monitor/fs/FsProbe.java index 4cdbed367c987..d825100352670 100644 --- a/core/src/main/java/org/elasticsearch/monitor/fs/FsProbe.java +++ b/core/src/main/java/org/elasticsearch/monitor/fs/FsProbe.java @@ -126,6 +126,18 @@ List readProcDiskStats() throws IOException { return Files.readAllLines(PathUtils.get("/proc/diskstats")); } + /* See: https://bugs.openjdk.java.net/browse/JDK-8162520 */ + /** + * Take a large value intended to be positive, and if it has overflowed, + * return {@code Long.MAX_VALUE} instead of a negative number. 
+ */ + static long adjustForHugeFilesystems(long bytes) { + if (bytes < 0) { + return Long.MAX_VALUE; + } + return bytes; + } + public static FsInfo.Path getFSInfo(NodePath nodePath) throws IOException { FsInfo.Path fsPath = new FsInfo.Path(); fsPath.path = nodePath.path.toAbsolutePath().toString(); @@ -133,7 +145,7 @@ public static FsInfo.Path getFSInfo(NodePath nodePath) throws IOException { // NOTE: we use already cached (on node startup) FileStore and spins // since recomputing these once per second (default) could be costly, // and they should not change: - fsPath.total = nodePath.fileStore.getTotalSpace(); + fsPath.total = adjustForHugeFilesystems(nodePath.fileStore.getTotalSpace()); fsPath.free = nodePath.fileStore.getUnallocatedSpace(); fsPath.available = nodePath.fileStore.getUsableSpace(); fsPath.type = nodePath.fileStore.type(); diff --git a/core/src/main/java/org/elasticsearch/monitor/fs/FsService.java b/core/src/main/java/org/elasticsearch/monitor/fs/FsService.java index 96467b4d407ca..58557cdc305f1 100644 --- a/core/src/main/java/org/elasticsearch/monitor/fs/FsService.java +++ b/core/src/main/java/org/elasticsearch/monitor/fs/FsService.java @@ -68,7 +68,7 @@ private class FsInfoCache extends SingleObjectCache { private final FsInfo initialValue; - public FsInfoCache(TimeValue interval, FsInfo initialValue) { + FsInfoCache(TimeValue interval, FsInfo initialValue) { super(interval, initialValue); this.initialValue = initialValue; } diff --git a/core/src/main/java/org/elasticsearch/monitor/jvm/JvmGcMonitorService.java b/core/src/main/java/org/elasticsearch/monitor/jvm/JvmGcMonitorService.java index 3a19fe5bd0056..f260d7430e2e5 100644 --- a/core/src/main/java/org/elasticsearch/monitor/jvm/JvmGcMonitorService.java +++ b/core/src/main/java/org/elasticsearch/monitor/jvm/JvmGcMonitorService.java @@ -71,7 +71,7 @@ static class GcOverheadThreshold { final int infoThreshold; final int debugThreshold; - public GcOverheadThreshold(final int warnThreshold, final int infoThreshold, final int debugThreshold) { + GcOverheadThreshold(final int warnThreshold, final int infoThreshold, final int debugThreshold) { this.warnThreshold = warnThreshold; this.infoThreshold = infoThreshold; this.debugThreshold = debugThreshold; @@ -355,7 +355,7 @@ static class SlowGcEvent { final JvmStats currentJvmStats; final ByteSizeValue maxHeapUsed; - public SlowGcEvent( + SlowGcEvent( final GarbageCollector currentGc, final long collectionCount, final TimeValue collectionTime, @@ -380,7 +380,7 @@ public SlowGcEvent( private final Map gcThresholds; final GcOverheadThreshold gcOverheadThreshold; - public JvmMonitor(final Map gcThresholds, final GcOverheadThreshold gcOverheadThreshold) { + JvmMonitor(final Map gcThresholds, final GcOverheadThreshold gcOverheadThreshold) { this.gcThresholds = Objects.requireNonNull(gcThresholds); this.gcOverheadThreshold = Objects.requireNonNull(gcOverheadThreshold); } @@ -486,9 +486,9 @@ long now() { return System.nanoTime(); } - abstract void onSlowGc(final Threshold threshold, final long seq, final SlowGcEvent slowGcEvent); + abstract void onSlowGc(Threshold threshold, long seq, SlowGcEvent slowGcEvent); - abstract void onGcOverhead(final Threshold threshold, final long total, final long elapsed, final long seq); + abstract void onGcOverhead(Threshold threshold, long total, long elapsed, long seq); } diff --git a/core/src/main/java/org/elasticsearch/monitor/jvm/JvmInfo.java b/core/src/main/java/org/elasticsearch/monitor/jvm/JvmInfo.java index ca0bb4f3e8010..c18a1c5e3fdd9 100644 --- 
a/core/src/main/java/org/elasticsearch/monitor/jvm/JvmInfo.java +++ b/core/src/main/java/org/elasticsearch/monitor/jvm/JvmInfo.java @@ -103,6 +103,7 @@ public class JvmInfo implements Writeable, ToXContent { String onOutOfMemoryError = null; String useCompressedOops = "unknown"; String useG1GC = "unknown"; + String useSerialGC = "unknown"; long configuredInitialHeapSize = -1; long configuredMaxHeapSize = -1; try { @@ -148,6 +149,13 @@ public class JvmInfo implements Writeable, ToXContent { configuredMaxHeapSize = Long.parseLong((String) valueMethod.invoke(maxHeapSizeVmOptionObject)); } catch (Exception ignored) { } + + try { + Object useSerialGCVmOptionObject = vmOptionMethod.invoke(hotSpotDiagnosticMXBean, "UseSerialGC"); + useSerialGC = (String) valueMethod.invoke(useSerialGCVmOptionObject); + } catch (Exception ignored) { + } + } catch (Exception ignored) { } @@ -155,7 +163,7 @@ public class JvmInfo implements Writeable, ToXContent { INSTANCE = new JvmInfo(pid, System.getProperty("java.version"), runtimeMXBean.getVmName(), runtimeMXBean.getVmVersion(), runtimeMXBean.getVmVendor(), runtimeMXBean.getStartTime(), configuredInitialHeapSize, configuredMaxHeapSize, mem, inputArguments, bootClassPath, classPath, systemProperties, gcCollectors, memoryPools, onError, onOutOfMemoryError, - useCompressedOops, useG1GC); + useCompressedOops, useG1GC, useSerialGC); } public static JvmInfo jvmInfo() { @@ -186,11 +194,12 @@ public static JvmInfo jvmInfo() { private final String onOutOfMemoryError; private final String useCompressedOops; private final String useG1GC; + private final String useSerialGC; private JvmInfo(long pid, String version, String vmName, String vmVersion, String vmVendor, long startTime, long configuredInitialHeapSize, long configuredMaxHeapSize, Mem mem, String[] inputArguments, String bootClassPath, String classPath, Map systemProperties, String[] gcCollectors, String[] memoryPools, String onError, - String onOutOfMemoryError, String useCompressedOops, String useG1GC) { + String onOutOfMemoryError, String useCompressedOops, String useG1GC, String useSerialGC) { this.pid = pid; this.version = version; this.vmName = vmName; @@ -210,6 +219,7 @@ private JvmInfo(long pid, String version, String vmName, String vmVersion, Strin this.onOutOfMemoryError = onOutOfMemoryError; this.useCompressedOops = useCompressedOops; this.useG1GC = useG1GC; + this.useSerialGC = useSerialGC; } public JvmInfo(StreamInput in) throws IOException { @@ -230,12 +240,13 @@ public JvmInfo(StreamInput in) throws IOException { gcCollectors = in.readStringArray(); memoryPools = in.readStringArray(); useCompressedOops = in.readString(); - //the following members are only used locally for boostrap checks, never serialized nor printed out + //the following members are only used locally for bootstrap checks, never serialized nor printed out this.configuredMaxHeapSize = -1; this.configuredInitialHeapSize = -1; this.onError = null; this.onOutOfMemoryError = null; this.useG1GC = "unknown"; + this.useSerialGC = "unknown"; } @Override @@ -415,6 +426,10 @@ public String useG1GC() { return this.useG1GC; } + public String useSerialGC() { + return this.useSerialGC; + } + public String[] getGcCollectors() { return gcCollectors; } @@ -431,7 +446,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(Fields.VM_NAME, vmName); builder.field(Fields.VM_VERSION, vmVersion); builder.field(Fields.VM_VENDOR, vmVendor); - builder.dateValueField(Fields.START_TIME_IN_MILLIS, Fields.START_TIME, 
startTime); + builder.dateField(Fields.START_TIME_IN_MILLIS, Fields.START_TIME, startTime); builder.startObject(Fields.MEM); builder.byteSizeField(Fields.HEAP_INIT_IN_BYTES, Fields.HEAP_INIT, mem.heapInit); @@ -441,11 +456,13 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.byteSizeField(Fields.DIRECT_MAX_IN_BYTES, Fields.DIRECT_MAX, mem.directMemoryMax); builder.endObject(); - builder.field(Fields.GC_COLLECTORS, gcCollectors); - builder.field(Fields.MEMORY_POOLS, memoryPools); + builder.array(Fields.GC_COLLECTORS, gcCollectors); + builder.array(Fields.MEMORY_POOLS, memoryPools); builder.field(Fields.USING_COMPRESSED_OOPS, useCompressedOops); + builder.field(Fields.INPUT_ARGUMENTS, inputArguments); + builder.endObject(); return builder; } @@ -474,6 +491,7 @@ static final class Fields { static final String GC_COLLECTORS = "gc_collectors"; static final String MEMORY_POOLS = "memory_pools"; static final String USING_COMPRESSED_OOPS = "using_compressed_ordinary_object_pointers"; + static final String INPUT_ARGUMENTS = "input_arguments"; } public static class Mem implements Writeable { diff --git a/core/src/main/java/org/elasticsearch/monitor/os/OsProbe.java b/core/src/main/java/org/elasticsearch/monitor/os/OsProbe.java index 08abfc05f1d6c..43ef51658b726 100644 --- a/core/src/main/java/org/elasticsearch/monitor/os/OsProbe.java +++ b/core/src/main/java/org/elasticsearch/monitor/os/OsProbe.java @@ -19,17 +19,25 @@ package org.elasticsearch.monitor.os; +import org.apache.logging.log4j.Logger; import org.apache.lucene.util.Constants; import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.io.PathUtils; +import org.elasticsearch.common.logging.ESLoggerFactory; import org.elasticsearch.monitor.Probes; import java.io.IOException; import java.lang.management.ManagementFactory; import java.lang.management.OperatingSystemMXBean; +import java.lang.reflect.InvocationTargetException; import java.lang.reflect.Method; import java.nio.file.Files; +import java.nio.file.Path; +import java.util.HashMap; import java.util.List; +import java.util.Map; +import java.util.regex.Matcher; +import java.util.regex.Pattern; public class OsProbe { @@ -108,48 +116,326 @@ public long getTotalSwapSpaceSize() { } /** - * Returns the system load averages + * The system load averages as an array. + * + * On Windows, this method returns {@code null}. + * + * On Linux, this method returns the 1, 5, and 15-minute load averages. + * + * On macOS, this method should return the 1-minute load average. + * + * @return the available system load averages or {@code null} */ - public double[] getSystemLoadAverage() { - if (Constants.LINUX || Constants.FREE_BSD) { - final String procLoadAvg = Constants.LINUX ? 
"/proc/loadavg" : "/compat/linux/proc/loadavg"; - double[] loadAverage = readProcLoadavg(procLoadAvg); - if (loadAverage != null) { - return loadAverage; - } - // fallback - } + final double[] getSystemLoadAverage() { if (Constants.WINDOWS) { return null; + } else if (Constants.LINUX) { + try { + final String procLoadAvg = readProcLoadavg(); + assert procLoadAvg.matches("(\\d+\\.\\d+\\s+){3}\\d+/\\d+\\s+\\d+"); + final String[] fields = procLoadAvg.split("\\s+"); + return new double[]{Double.parseDouble(fields[0]), Double.parseDouble(fields[1]), Double.parseDouble(fields[2])}; + } catch (final IOException e) { + if (logger.isDebugEnabled()) { + logger.debug("error reading /proc/loadavg", e); + } + return null; + } + } else { + assert Constants.MAC_OS_X; + if (getSystemLoadAverage == null) { + return null; + } + try { + final double oneMinuteLoadAverage = (double) getSystemLoadAverage.invoke(osMxBean); + return new double[]{oneMinuteLoadAverage >= 0 ? oneMinuteLoadAverage : -1, -1, -1}; + } catch (IllegalAccessException | InvocationTargetException e) { + if (logger.isDebugEnabled()) { + logger.debug("error reading one minute load average from operating system", e); + } + return null; + } } - if (getSystemLoadAverage == null) { - return null; - } - try { - double oneMinuteLoadAverage = (double) getSystemLoadAverage.invoke(osMxBean); - return new double[] { oneMinuteLoadAverage >= 0 ? oneMinuteLoadAverage : -1, -1, -1 }; - } catch (Exception e) { - return null; + } + + /** + * The line from {@code /proc/loadavg}. The first three fields are the load averages averaged over 1, 5, and 15 minutes. The fourth + * field is two numbers separated by a slash, the first is the number of currently runnable scheduling entities, the second is the + * number of scheduling entities on the system. The fifth field is the PID of the most recently created process. + * + * @return the line from {@code /proc/loadavg} or {@code null} + */ + @SuppressForbidden(reason = "access /proc/loadavg") + String readProcLoadavg() throws IOException { + return readSingleLine(PathUtils.get("/proc/loadavg")); + } + + public short getSystemCpuPercent() { + return Probes.getLoadAndScaleToPercent(getSystemCpuLoad, osMxBean); + } + + /** + * Reads a file containing a single line. + * + * @param path path to the file to read + * @return the single line + * @throws IOException if an I/O exception occurs reading the file + */ + private String readSingleLine(final Path path) throws IOException { + final List lines = Files.readAllLines(path); + assert lines != null && lines.size() == 1; + return lines.get(0); + } + + // this property is to support a hack to workaround an issue with Docker containers mounting the cgroups hierarchy inconsistently with + // respect to /proc/self/cgroup; for Docker containers this should be set to "/" + private static final String CONTROL_GROUPS_HIERARCHY_OVERRIDE = System.getProperty("es.cgroups.hierarchy.override"); + + /** + * A map of the control groups to which the Elasticsearch process belongs. Note that this is a map because the control groups can vary + * from subsystem to subsystem. Additionally, this map can not be cached because a running process can be reclassified. + * + * @return a map from subsystems to the control group for the Elasticsearch process. 
+ * @throws IOException if an I/O exception occurs reading {@code /proc/self/cgroup} + */ + private Map getControlGroups() throws IOException { + final List lines = readProcSelfCgroup(); + final Map controllerMap = new HashMap<>(); + for (final String line : lines) { + /* + * The virtual file /proc/self/cgroup lists the control groups that the Elasticsearch process is a member of. Each line contains + * three colon-separated fields of the form hierarchy-ID:subsystem-list:cgroup-path. For cgroups version 1 hierarchies, the + * subsystem-list is a comma-separated list of subsystems. The subsystem-list can be empty if the hierarchy represents a cgroups + * version 2 hierarchy. For cgroups version 1 + */ + final String[] fields = line.split(":"); + assert fields.length == 3; + final String[] controllers = fields[1].split(","); + for (final String controller : controllers) { + final String controlGroupPath; + if (CONTROL_GROUPS_HIERARCHY_OVERRIDE != null) { + /* + * Docker violates the relationship between /proc/self/cgroup and the /sys/fs/cgroup hierarchy. It's possible that this + * will be fixed in future versions of Docker with cgroup namespaces, but this requires modern kernels. Thus, we provide + * an undocumented hack for overriding the control group path. Do not rely on this hack, it will be removed. + */ + controlGroupPath = CONTROL_GROUPS_HIERARCHY_OVERRIDE; + } else { + controlGroupPath = fields[2]; + } + final String previous = controllerMap.put(controller, controlGroupPath); + assert previous == null; + } } + return controllerMap; } - @SuppressForbidden(reason = "access /proc") - private static double[] readProcLoadavg(String procLoadavg) { - try { - List lines = Files.readAllLines(PathUtils.get(procLoadavg)); - if (!lines.isEmpty()) { - String[] fields = lines.get(0).split("\\s+"); - return new double[] { Double.parseDouble(fields[0]), Double.parseDouble(fields[1]), Double.parseDouble(fields[2]) }; + /** + * The lines from {@code /proc/self/cgroup}. This file represents the control groups to which the Elasticsearch process belongs. Each + * line in this file represents a control group hierarchy of the form + *
    + *
    + * {@code \d+:([^:,]+(?:,[^:,]+)?):(/.*)} + *
    + *
    + * with the first field representing the hierarchy ID, the second field representing a comma-separated list of the subsystems bound to + * the hierarchy, and the last field representing the control group. + * + * @return the lines from {@code /proc/self/cgroup} + * @throws IOException if an I/O exception occurs reading {@code /proc/self/cgroup} + */ + @SuppressForbidden(reason = "access /proc/self/cgroup") + List readProcSelfCgroup() throws IOException { + final List lines = Files.readAllLines(PathUtils.get("/proc/self/cgroup")); + assert lines != null && !lines.isEmpty(); + return lines; + } + + /** + * The total CPU time in nanoseconds consumed by all tasks in the cgroup to which the Elasticsearch process belongs for the {@code + * cpuacct} subsystem. + * + * @param controlGroup the control group for the Elasticsearch process for the {@code cpuacct} subsystem + * @return the total CPU time in nanoseconds + * @throws IOException if an I/O exception occurs reading {@code cpuacct.usage} for the control group + */ + private long getCgroupCpuAcctUsageNanos(final String controlGroup) throws IOException { + return Long.parseLong(readSysFsCgroupCpuAcctCpuAcctUsage(controlGroup)); + } + + /** + * Returns the line from {@code cpuacct.usage} for the control group to which the Elasticsearch process belongs for the {@code cpuacct} + * subsystem. This line represents the total CPU time in nanoseconds consumed by all tasks in the same control group. + * + * @param controlGroup the control group to which the Elasticsearch process belongs for the {@code cpuacct} subsystem + * @return the line from {@code cpuacct.usage} + * @throws IOException if an I/O exception occurs reading {@code cpuacct.usage} for the control group + */ + @SuppressForbidden(reason = "access /sys/fs/cgroup/cpuacct") + String readSysFsCgroupCpuAcctCpuAcctUsage(final String controlGroup) throws IOException { + return readSingleLine(PathUtils.get("/sys/fs/cgroup/cpuacct", controlGroup, "cpuacct.usage")); + } + + /** + * The total period of time in microseconds for how frequently the Elasticsearch control group's access to CPU resources will be + * reallocated. + * + * @param controlGroup the control group for the Elasticsearch process for the {@code cpuacct} subsystem + * @return the CFS quota period in microseconds + * @throws IOException if an I/O exception occurs reading {@code cpu.cfs_period_us} for the control group + */ + private long getCgroupCpuAcctCpuCfsPeriodMicros(final String controlGroup) throws IOException { + return Long.parseLong(readSysFsCgroupCpuAcctCpuCfsPeriod(controlGroup)); + } + + /** + * Returns the line from {@code cpu.cfs_period_us} for the control group to which the Elasticsearch process belongs for the {@code cpu} + * subsystem. This line represents the period of time in microseconds for how frequently the control group's access to CPU resources + * will be reallocated. 
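To make the `hierarchy-ID:subsystem-list:cgroup-path` format above concrete, here is a standalone sketch of splitting one such `/proc/self/cgroup` line into a controller-to-control-group map, along the lines of `getControlGroups()`; the sample line is made up:

```java
import java.util.HashMap;
import java.util.Map;

// Standalone sketch of splitting a single /proc/self/cgroup line of the form
// hierarchy-ID:subsystem-list:cgroup-path into a controller -> control-group map.
public final class CgroupLineParseExample {

    static Map<String, String> parse(String line) {
        final Map<String, String> controllerMap = new HashMap<>();
        final String[] fields = line.split(":"); // [hierarchy-ID, subsystem-list, cgroup-path]
        for (final String controller : fields[1].split(",")) {
            controllerMap.put(controller, fields[2]); // e.g. "cpuacct" -> "/docker/abc123"
        }
        return controllerMap;
    }

    public static void main(String[] args) {
        System.out.println(parse("9:cpu,cpuacct:/docker/abc123"));
    }
}
```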
+ * + * @param controlGroup the control group to which the Elasticsearch process belongs for the {@code cpu} subsystem + * @return the line from {@code cpu.cfs_period_us} + * @throws IOException if an I/O exception occurs reading {@code cpu.cfs_period_us} for the control group + */ + @SuppressForbidden(reason = "access /sys/fs/cgroup/cpu") + String readSysFsCgroupCpuAcctCpuCfsPeriod(final String controlGroup) throws IOException { + return readSingleLine(PathUtils.get("/sys/fs/cgroup/cpu", controlGroup, "cpu.cfs_period_us")); + } + + /** + * The total time in microseconds that all tasks in the Elasticsearch control group can run during one period as specified by {@code + * cpu.cfs_period_us}. + * + * @param controlGroup the control group for the Elasticsearch process for the {@code cpuacct} subsystem + * @return the CFS quota in microseconds + * @throws IOException if an I/O exception occurs reading {@code cpu.cfs_quota_us} for the control group + */ + private long getCgroupCpuAcctCpuCfsQuotaMicros(final String controlGroup) throws IOException { + return Long.parseLong(readSysFsCgroupCpuAcctCpuAcctCfsQuota(controlGroup)); + } + + /** + * Returns the line from {@code cpu.cfs_quota_us} for the control group to which the Elasticsearch process belongs for the {@code cpu} + * subsystem. This line represents the total time in microseconds that all tasks in the control group can run during one period as + * specified by {@code cpu.cfs_period_us}. + * + * @param controlGroup the control group to which the Elasticsearch process belongs for the {@code cpu} subsystem + * @return the line from {@code cpu.cfs_quota_us} + * @throws IOException if an I/O exception occurs reading {@code cpu.cfs_quota_us} for the control group + */ + @SuppressForbidden(reason = "access /sys/fs/cgroup/cpu") + String readSysFsCgroupCpuAcctCpuAcctCfsQuota(final String controlGroup) throws IOException { + return readSingleLine(PathUtils.get("/sys/fs/cgroup/cpu", controlGroup, "cpu.cfs_quota_us")); + } + + /** + * The CPU time statistics for all tasks in the Elasticsearch control group. + * + * @param controlGroup the control group for the Elasticsearch process for the {@code cpuacct} subsystem + * @return the CPU time statistics + * @throws IOException if an I/O exception occurs reading {@code cpu.stat} for the control group + */ + private OsStats.Cgroup.CpuStat getCgroupCpuAcctCpuStat(final String controlGroup) throws IOException { + final List lines = readSysFsCgroupCpuAcctCpuStat(controlGroup); + long numberOfPeriods = -1; + long numberOfTimesThrottled = -1; + long timeThrottledNanos = -1; + for (final String line : lines) { + final String[] fields = line.split("\\s+"); + switch (fields[0]) { + case "nr_periods": + numberOfPeriods = Long.parseLong(fields[1]); + break; + case "nr_throttled": + numberOfTimesThrottled = Long.parseLong(fields[1]); + break; + case "throttled_time": + timeThrottledNanos = Long.parseLong(fields[1]); + break; } - } catch (IOException e) { - // do not fail Elasticsearch if something unexpected - // happens here } - return null; + assert numberOfPeriods != -1; + assert numberOfTimesThrottled != -1; + assert timeThrottledNanos != -1; + return new OsStats.Cgroup.CpuStat(numberOfPeriods, numberOfTimesThrottled, timeThrottledNanos); } - public short getSystemCpuPercent() { - return Probes.getLoadAndScaleToPercent(getSystemCpuLoad, osMxBean); + /** + * Returns the lines from {@code cpu.stat} for the control group to which the Elasticsearch process belongs for the {@code cpu} + * subsystem. 
These lines represent the CPU time statistics and have the form + *
    +     *
    +     * nr_periods \d+
    +     * nr_throttled \d+
    +     * throttled_time \d+
    +     *
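The three fields are explained just below; as an aside, here is a standalone sketch of turning those `cpu.stat` lines into numbers, mirroring the switch in `getCgroupCpuAcctCpuStat` above (sample values invented):

```java
import java.util.Arrays;
import java.util.List;

// Standalone sketch of parsing the three cpu.stat lines listed above.
public final class CpuStatParseExample {

    public static void main(String[] args) {
        final List<String> lines = Arrays.asList("nr_periods 200", "nr_throttled 3", "throttled_time 508134");
        long numberOfPeriods = -1;
        long numberOfTimesThrottled = -1;
        long timeThrottledNanos = -1;
        for (final String line : lines) {
            final String[] fields = line.split("\\s+");
            switch (fields[0]) {
                case "nr_periods":     numberOfPeriods = Long.parseLong(fields[1]); break;
                case "nr_throttled":   numberOfTimesThrottled = Long.parseLong(fields[1]); break;
                case "throttled_time": timeThrottledNanos = Long.parseLong(fields[1]); break;
            }
        }
        System.out.printf("periods=%d throttled=%d throttled_nanos=%d%n",
                numberOfPeriods, numberOfTimesThrottled, timeThrottledNanos);
    }
}
```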
    + * where {@code nr_periods} is the number of period intervals as specified by {@code cpu.cfs_period_us} that have elapsed, {@code + * nr_throttled} is the number of times tasks in the given control group have been throttled, and {@code throttled_time} is the total + * time in nanoseconds for which tasks in the given control group have been throttled. + * + * @param controlGroup the control group to which the Elasticsearch process belongs for the {@code cpu} subsystem + * @return the lines from {@code cpu.stat} + * @throws IOException if an I/O exception occurs reading {@code cpu.stat} for the control group + */ + @SuppressForbidden(reason = "access /sys/fs/cgroup/cpu") + List readSysFsCgroupCpuAcctCpuStat(final String controlGroup) throws IOException { + final List lines = Files.readAllLines(PathUtils.get("/sys/fs/cgroup/cpu", controlGroup, "cpu.stat")); + assert lines != null && lines.size() == 3; + return lines; + } + + /** + * Checks if cgroup stats are available by checking for the existence of {@code /proc/self/cgroup}, {@code /sys/fs/cgroup/cpu}, and + * {@code /sys/fs/cgroup/cpuacct}. + * + * @return {@code true} if the stats are available, otherwise {@code false} + */ + @SuppressForbidden(reason = "access /proc/self/cgroup, /sys/fs/cgroup/cpu, and /sys/fs/cgroup/cpuacct") + boolean areCgroupStatsAvailable() { + if (!Files.exists(PathUtils.get("/proc/self/cgroup"))) { + return false; + } + if (!Files.exists(PathUtils.get("/sys/fs/cgroup/cpu"))) { + return false; + } + if (!Files.exists(PathUtils.get("/sys/fs/cgroup/cpuacct"))) { + return false; + } + return true; + } + + /** + * Basic cgroup stats. + * + * @return basic cgroup stats, or {@code null} if an I/O exception occurred reading the cgroup stats + */ + private OsStats.Cgroup getCgroup() { + try { + if (!areCgroupStatsAvailable()) { + return null; + } else { + final Map controllerMap = getControlGroups(); + assert !controllerMap.isEmpty(); + + final String cpuAcctControlGroup = controllerMap.get("cpuacct"); + assert cpuAcctControlGroup != null; + final long cgroupCpuAcctUsageNanos = getCgroupCpuAcctUsageNanos(cpuAcctControlGroup); + + final String cpuControlGroup = controllerMap.get("cpu"); + assert cpuControlGroup != null; + final long cgroupCpuAcctCpuCfsPeriodMicros = getCgroupCpuAcctCpuCfsPeriodMicros(cpuControlGroup); + final long cgroupCpuAcctCpuCfsQuotaMicros = getCgroupCpuAcctCpuCfsQuotaMicros(cpuControlGroup); + final OsStats.Cgroup.CpuStat cpuStat = getCgroupCpuAcctCpuStat(cpuControlGroup); + + return new OsStats.Cgroup( + cpuAcctControlGroup, + cgroupCpuAcctUsageNanos, + cpuControlGroup, + cgroupCpuAcctCpuCfsPeriodMicros, + cgroupCpuAcctCpuCfsQuotaMicros, + cpuStat); + } + } catch (final IOException e) { + logger.debug("error reading control group stats", e); + return null; + } } private static class OsProbeHolder { @@ -160,24 +446,27 @@ public static OsProbe getInstance() { return OsProbeHolder.INSTANCE; } - private OsProbe() { + OsProbe() { + } + private final Logger logger = ESLoggerFactory.getLogger(getClass()); + public OsInfo osInfo(long refreshInterval, int allocatedProcessors) { return new OsInfo(refreshInterval, Runtime.getRuntime().availableProcessors(), allocatedProcessors, Constants.OS_NAME, Constants.OS_ARCH, Constants.OS_VERSION); } public OsStats osStats() { - OsStats.Cpu cpu = new OsStats.Cpu(getSystemCpuPercent(), getSystemLoadAverage()); - OsStats.Mem mem = new OsStats.Mem(getTotalPhysicalMemorySize(), getFreePhysicalMemorySize()); - OsStats.Swap swap = new OsStats.Swap(getTotalSwapSpaceSize(), 
getFreeSwapSpaceSize()); - return new OsStats(System.currentTimeMillis(), cpu, mem , swap); + final OsStats.Cpu cpu = new OsStats.Cpu(getSystemCpuPercent(), getSystemLoadAverage()); + final OsStats.Mem mem = new OsStats.Mem(getTotalPhysicalMemorySize(), getFreePhysicalMemorySize()); + final OsStats.Swap swap = new OsStats.Swap(getTotalSwapSpaceSize(), getFreeSwapSpaceSize()); + final OsStats.Cgroup cgroup = Constants.LINUX ? getCgroup() : null; + return new OsStats(System.currentTimeMillis(), cpu, mem, swap, cgroup); } /** - * Returns a given method of the OperatingSystemMXBean, - * or null if the method is not found or unavailable. + * Returns a given method of the OperatingSystemMXBean, or null if the method is not found or unavailable. */ private static Method getMethod(String methodName) { try { @@ -187,4 +476,5 @@ private static Method getMethod(String methodName) { return null; } } + } diff --git a/core/src/main/java/org/elasticsearch/monitor/os/OsService.java b/core/src/main/java/org/elasticsearch/monitor/os/OsService.java index cb67eef852c45..1eb0781373573 100644 --- a/core/src/main/java/org/elasticsearch/monitor/os/OsService.java +++ b/core/src/main/java/org/elasticsearch/monitor/os/OsService.java @@ -55,7 +55,7 @@ public synchronized OsStats stats() { } private class OsStatsCache extends SingleObjectCache { - public OsStatsCache(TimeValue interval, OsStats initValue) { + OsStatsCache(TimeValue interval, OsStats initValue) { super(interval, initValue); } diff --git a/core/src/main/java/org/elasticsearch/monitor/os/OsStats.java b/core/src/main/java/org/elasticsearch/monitor/os/OsStats.java index 06d864f31a480..c3783c600ea4c 100644 --- a/core/src/main/java/org/elasticsearch/monitor/os/OsStats.java +++ b/core/src/main/java/org/elasticsearch/monitor/os/OsStats.java @@ -19,6 +19,7 @@ package org.elasticsearch.monitor.os; +import org.elasticsearch.Version; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; @@ -36,12 +37,14 @@ public class OsStats implements Writeable, ToXContent { private final Cpu cpu; private final Mem mem; private final Swap swap; + private final Cgroup cgroup; - public OsStats(long timestamp, Cpu cpu, Mem mem, Swap swap) { + public OsStats(final long timestamp, final Cpu cpu, final Mem mem, final Swap swap, final Cgroup cgroup) { this.timestamp = timestamp; - this.cpu = Objects.requireNonNull(cpu, "cpu must not be null"); - this.mem = Objects.requireNonNull(mem, "mem must not be null");; - this.swap = Objects.requireNonNull(swap, "swap must not be null");; + this.cpu = Objects.requireNonNull(cpu); + this.mem = Objects.requireNonNull(mem); + this.swap = Objects.requireNonNull(swap); + this.cgroup = cgroup; } public OsStats(StreamInput in) throws IOException { @@ -49,6 +52,11 @@ public OsStats(StreamInput in) throws IOException { this.cpu = new Cpu(in); this.mem = new Mem(in); this.swap = new Swap(in); + if (in.getVersion().onOrAfter(Version.V_5_1_1)) { + this.cgroup = in.readOptionalWriteable(Cgroup::new); + } else { + this.cgroup = null; + } } @Override @@ -57,6 +65,9 @@ public void writeTo(StreamOutput out) throws IOException { cpu.writeTo(out); mem.writeTo(out); swap.writeTo(out); + if (out.getVersion().onOrAfter(Version.V_5_1_1)) { + out.writeOptionalWriteable(cgroup); + } } public long getTimestamp() { @@ -73,6 +84,10 @@ public Swap getSwap() { return swap; } + public Cgroup getCgroup() { + return cgroup; + } + static final class Fields { static final 
String OS = "os"; static final String TIMESTAMP = "timestamp"; @@ -103,6 +118,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws cpu.toXContent(builder, params); mem.toXContent(builder, params); swap.toXContent(builder, params); + if (cgroup != null) { + cgroup.toXContent(builder, params); + } builder.endObject(); return builder; } @@ -241,7 +259,7 @@ public ByteSizeValue getUsed() { } public short getUsedPercent() { - return calculatePercentage(getUsed().bytes(), total); + return calculatePercentage(getUsed().getBytes(), total); } public ByteSizeValue getFree() { @@ -265,7 +283,211 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws } } + /** + * Encapsulates basic cgroup statistics. + */ + public static class Cgroup implements Writeable, ToXContent { + + private final String cpuAcctControlGroup; + private final long cpuAcctUsageNanos; + private final String cpuControlGroup; + private final long cpuCfsPeriodMicros; + private final long cpuCfsQuotaMicros; + private final CpuStat cpuStat; + + /** + * The control group for the {@code cpuacct} subsystem. + * + * @return the control group + */ + public String getCpuAcctControlGroup() { + return cpuAcctControlGroup; + } + + /** + * The total CPU time consumed by all tasks in the + * {@code cpuacct} control group from + * {@link Cgroup#cpuAcctControlGroup}. + * + * @return the total CPU time in nanoseconds + */ + public long getCpuAcctUsageNanos() { + return cpuAcctUsageNanos; + } + + /** + * The control group for the {@code cpu} subsystem. + * + * @return the control group + */ + public String getCpuControlGroup() { + return cpuControlGroup; + } + + /** + * The period of time for how frequently the control group from + * {@link Cgroup#cpuControlGroup} has its access to CPU + * resources reallocated. + * + * @return the period of time in microseconds + */ + public long getCpuCfsPeriodMicros() { + return cpuCfsPeriodMicros; + } + + /** + * The total amount of time for which all tasks in the control + * group from {@link Cgroup#cpuControlGroup} can run in one + * period as represented by {@link Cgroup#cpuCfsPeriodMicros}. + * + * @return the total amount of time in microseconds + */ + public long getCpuCfsQuotaMicros() { + return cpuCfsQuotaMicros; + } + + /** + * The CPU time statistics. See {@link CpuStat}. + * + * @return the CPU time statistics. 
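As a usage note on the getters above: dividing the CFS quota by the CFS period gives the number of CPUs' worth of runtime the control group may consume per period, and a quota of -1 conventionally means unbounded. A hedged sketch using plain longs rather than the `OsStats.Cgroup` type, with invented numbers:

```java
// Sketch: deriving an effective CPU limit from the CFS quota and period exposed above.
public final class CfsQuotaExample {

    // Number of CPUs' worth of time per period, or +Infinity when no quota is configured (quota == -1).
    static double effectiveCpus(long cfsQuotaMicros, long cfsPeriodMicros) {
        if (cfsQuotaMicros <= 0 || cfsPeriodMicros <= 0) {
            return Double.POSITIVE_INFINITY;
        }
        return (double) cfsQuotaMicros / cfsPeriodMicros;
    }

    public static void main(String[] args) {
        System.out.println(effectiveCpus(200_000, 100_000)); // 2.0 -> two CPUs' worth of time
        System.out.println(effectiveCpus(-1, 100_000));      // Infinity -> no quota configured
    }
}
```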
+ */ + public CpuStat getCpuStat() { + return cpuStat; + } + + public Cgroup( + final String cpuAcctControlGroup, + final long cpuAcctUsageNanos, + final String cpuControlGroup, + final long cpuCfsPeriodMicros, + final long cpuCfsQuotaMicros, + final CpuStat cpuStat) { + this.cpuAcctControlGroup = Objects.requireNonNull(cpuAcctControlGroup); + this.cpuAcctUsageNanos = cpuAcctUsageNanos; + this.cpuControlGroup = Objects.requireNonNull(cpuControlGroup); + this.cpuCfsPeriodMicros = cpuCfsPeriodMicros; + this.cpuCfsQuotaMicros = cpuCfsQuotaMicros; + this.cpuStat = Objects.requireNonNull(cpuStat); + } + + Cgroup(final StreamInput in) throws IOException { + cpuAcctControlGroup = in.readString(); + cpuAcctUsageNanos = in.readLong(); + cpuControlGroup = in.readString(); + cpuCfsPeriodMicros = in.readLong(); + cpuCfsQuotaMicros = in.readLong(); + cpuStat = new CpuStat(in); + } + + @Override + public void writeTo(final StreamOutput out) throws IOException { + out.writeString(cpuAcctControlGroup); + out.writeLong(cpuAcctUsageNanos); + out.writeString(cpuControlGroup); + out.writeLong(cpuCfsPeriodMicros); + out.writeLong(cpuCfsQuotaMicros); + cpuStat.writeTo(out); + } + + @Override + public XContentBuilder toXContent(final XContentBuilder builder, final Params params) throws IOException { + builder.startObject("cgroup"); + { + builder.startObject("cpuacct"); + { + builder.field("control_group", cpuAcctControlGroup); + builder.field("usage_nanos", cpuAcctUsageNanos); + } + builder.endObject(); + builder.startObject("cpu"); + { + builder.field("control_group", cpuControlGroup); + builder.field("cfs_period_micros", cpuCfsPeriodMicros); + builder.field("cfs_quota_micros", cpuCfsQuotaMicros); + cpuStat.toXContent(builder, params); + } + builder.endObject(); + } + builder.endObject(); + return builder; + } + + /** + * Encapsulates CPU time statistics. + */ + public static class CpuStat implements Writeable, ToXContent { + + private final long numberOfElapsedPeriods; + private final long numberOfTimesThrottled; + private final long timeThrottledNanos; + + /** + * The number of elapsed periods. + * + * @return the number of elapsed periods as measured by + * {@code cpu.cfs_period_us} + */ + public long getNumberOfElapsedPeriods() { + return numberOfElapsedPeriods; + } + + /** + * The number of times tasks in the control group have been + * throttled. + * + * @return the number of times + */ + public long getNumberOfTimesThrottled() { + return numberOfTimesThrottled; + } + + /** + * The total time duration for which tasks in the control + * group have been throttled. 
+ * + * @return the total time in nanoseconds + */ + public long getTimeThrottledNanos() { + return timeThrottledNanos; + } + + public CpuStat(final long numberOfElapsedPeriods, final long numberOfTimesThrottled, final long timeThrottledNanos) { + this.numberOfElapsedPeriods = numberOfElapsedPeriods; + this.numberOfTimesThrottled = numberOfTimesThrottled; + this.timeThrottledNanos = timeThrottledNanos; + } + + CpuStat(final StreamInput in) throws IOException { + numberOfElapsedPeriods = in.readLong(); + numberOfTimesThrottled = in.readLong(); + timeThrottledNanos = in.readLong(); + } + + @Override + public void writeTo(final StreamOutput out) throws IOException { + out.writeLong(numberOfElapsedPeriods); + out.writeLong(numberOfTimesThrottled); + out.writeLong(timeThrottledNanos); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject("stat"); + { + builder.field("number_of_elapsed_periods", numberOfElapsedPeriods); + builder.field("number_of_times_throttled", numberOfTimesThrottled); + builder.field("time_throttled_nanos", timeThrottledNanos); + } + builder.endObject(); + return builder; + } + + } + + } + public static short calculatePercentage(long used, long max) { return max <= 0 ? 0 : (short) (Math.round((100d * used) / max)); } + } diff --git a/core/src/main/java/org/elasticsearch/monitor/process/ProcessService.java b/core/src/main/java/org/elasticsearch/monitor/process/ProcessService.java index 99593003b340a..1a1273af87c80 100644 --- a/core/src/main/java/org/elasticsearch/monitor/process/ProcessService.java +++ b/core/src/main/java/org/elasticsearch/monitor/process/ProcessService.java @@ -57,7 +57,7 @@ public ProcessStats stats() { } private class ProcessStatsCache extends SingleObjectCache { - public ProcessStatsCache(TimeValue interval, ProcessStats initValue) { + ProcessStatsCache(TimeValue interval, ProcessStats initValue) { super(interval, initValue); } diff --git a/core/src/main/java/org/elasticsearch/node/internal/InternalSettingsPreparer.java b/core/src/main/java/org/elasticsearch/node/InternalSettingsPreparer.java similarity index 85% rename from core/src/main/java/org/elasticsearch/node/internal/InternalSettingsPreparer.java rename to core/src/main/java/org/elasticsearch/node/InternalSettingsPreparer.java index dba7f303130c6..08fd1874fd38c 100644 --- a/core/src/main/java/org/elasticsearch/node/internal/InternalSettingsPreparer.java +++ b/core/src/main/java/org/elasticsearch/node/InternalSettingsPreparer.java @@ -17,15 +17,7 @@ * under the License. 
*/ -package org.elasticsearch.node.internal; - -import org.elasticsearch.cli.Terminal; -import org.elasticsearch.cluster.ClusterName; -import org.elasticsearch.common.Strings; -import org.elasticsearch.common.collect.Tuple; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.settings.SettingsException; -import org.elasticsearch.env.Environment; +package org.elasticsearch.node; import java.io.IOException; import java.nio.file.Files; @@ -37,7 +29,14 @@ import java.util.Map; import java.util.Set; import java.util.function.Function; -import java.util.function.Predicate; + +import org.elasticsearch.cli.Terminal; +import org.elasticsearch.cluster.ClusterName; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.collect.Tuple; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.settings.SettingsException; +import org.elasticsearch.env.Environment; import static org.elasticsearch.common.Strings.cleanPath; @@ -47,8 +46,6 @@ public class InternalSettingsPreparer { private static final String[] ALLOWED_SUFFIXES = {".yml", ".yaml", ".json"}; - static final String PROPERTY_DEFAULTS_PREFIX = "default."; - static final Predicate PROPERTY_DEFAULTS_PREDICATE = key -> key.startsWith(PROPERTY_DEFAULTS_PREFIX); public static final String SECRET_PROMPT_VALUE = "${prompt.secret}"; public static final String TEXT_PROMPT_VALUE = "${prompt.text}"; @@ -58,7 +55,7 @@ public class InternalSettingsPreparer { */ public static Settings prepareSettings(Settings input) { Settings.Builder output = Settings.builder(); - initializeSettings(output, input, true, Collections.emptyMap()); + initializeSettings(output, input, Collections.emptyMap()); finalizeSettings(output, null); return output.build(); } @@ -89,9 +86,10 @@ public static Environment prepareEnvironment(Settings input, Terminal terminal) public static Environment prepareEnvironment(Settings input, Terminal terminal, Map properties) { // just create enough settings to build the environment, to get the config dir Settings.Builder output = Settings.builder(); - initializeSettings(output, input, true, properties); + initializeSettings(output, input, properties); Environment environment = new Environment(output.build()); + output = Settings.builder(); // start with a fresh output boolean settingsFileFound = false; Set foundSuffixes = new HashSet<>(); for (String allowedSuffix : ALLOWED_SUFFIXES) { @@ -109,31 +107,33 @@ public static Environment prepareEnvironment(Settings input, Terminal terminal, } } if (foundSuffixes.size() > 1) { - throw new SettingsException("multiple settings files found with suffixes: " + Strings.collectionToDelimitedString(foundSuffixes, ",")); + throw new SettingsException("multiple settings files found with suffixes: " + + Strings.collectionToDelimitedString(foundSuffixes, ",")); } // re-initialize settings now that the config file has been loaded - // TODO: only re-initialize if a config file was actually loaded - initializeSettings(output, input, false, properties); + initializeSettings(output, input, properties); finalizeSettings(output, terminal); environment = new Environment(output.build()); // we put back the path.logs so we can use it in the logging configuration file output.put(Environment.PATH_LOGS_SETTING.getKey(), cleanPath(environment.logsFile().toAbsolutePath().toString())); - return new Environment(output.build()); + String configExtension = foundSuffixes.isEmpty() ? 
null : foundSuffixes.iterator().next(); + return new Environment(output.build(), configExtension); } /** - * Initializes the builder with the given input settings, and loads system properties settings if allowed. - * If loadDefaults is true, system property default settings are loaded. + * Initializes the builder with the given input settings, and applies settings from the specified map (these settings typically come + * from the command line). + * + * @param output the settings builder to apply the input and default settings to + * @param input the input settings + * @param esSettings a map from which to apply settings */ - private static void initializeSettings(Settings.Builder output, Settings input, boolean loadDefaults, Map esSettings) { + static void initializeSettings(final Settings.Builder output, final Settings input, final Map esSettings) { output.put(input); - if (loadDefaults) { - output.putProperties(esSettings, PROPERTY_DEFAULTS_PREDICATE, key -> key.substring(PROPERTY_DEFAULTS_PREFIX.length())); - } - output.putProperties(esSettings, PROPERTY_DEFAULTS_PREDICATE.negate(), Function.identity()); + output.putProperties(esSettings, Function.identity()); output.replacePropertyPlaceholders(); } @@ -198,7 +198,9 @@ private static void replacePromptPlaceholders(Settings.Builder settings, Termina private static String promptForValue(String key, Terminal terminal, boolean secret) { if (terminal == null) { - throw new UnsupportedOperationException("found property [" + key + "] with value [" + (secret ? SECRET_PROMPT_VALUE : TEXT_PROMPT_VALUE) +"]. prompting for property values is only supported when running elasticsearch in the foreground"); + throw new UnsupportedOperationException("found property [" + key + "] with value [" + + (secret ? SECRET_PROMPT_VALUE : TEXT_PROMPT_VALUE) + + "]. 
prompting for property values is only supported when running elasticsearch in the foreground"); } if (secret) { diff --git a/core/src/main/java/org/elasticsearch/node/Node.java b/core/src/main/java/org/elasticsearch/node/Node.java index 402064e88d299..9638f77bd0c00 100644 --- a/core/src/main/java/org/elasticsearch/node/Node.java +++ b/core/src/main/java/org/elasticsearch/node/Node.java @@ -19,39 +19,50 @@ package org.elasticsearch.node; -import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.Logger; -import org.apache.logging.log4j.core.LoggerContext; -import org.apache.logging.log4j.core.config.Configurator; import org.apache.lucene.util.Constants; import org.apache.lucene.util.IOUtils; +import org.apache.lucene.util.SetOnce; import org.elasticsearch.Build; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ElasticsearchTimeoutException; import org.elasticsearch.Version; import org.elasticsearch.action.ActionModule; import org.elasticsearch.action.GenericAction; +import org.elasticsearch.action.search.SearchPhaseController; +import org.elasticsearch.action.search.SearchTransportService; import org.elasticsearch.action.support.TransportAction; +import org.elasticsearch.action.update.UpdateHelper; +import org.elasticsearch.bootstrap.BootstrapCheck; import org.elasticsearch.client.Client; import org.elasticsearch.client.node.NodeClient; +import org.elasticsearch.cluster.ClusterInfoService; import org.elasticsearch.cluster.ClusterModule; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.ClusterStateObserver; -import org.elasticsearch.cluster.MasterNodeChangePredicate; +import org.elasticsearch.cluster.InternalClusterInfoService; import org.elasticsearch.cluster.NodeConnectionsService; import org.elasticsearch.cluster.action.index.MappingUpdatedAction; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.cluster.metadata.IndexTemplateMetaData; import org.elasticsearch.cluster.metadata.MetaData; +import org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService; +import org.elasticsearch.cluster.metadata.TemplateUpgradeService; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.RoutingService; import org.elasticsearch.cluster.routing.allocation.AllocationService; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.StopWatch; +import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.component.Lifecycle; import org.elasticsearch.common.component.LifecycleComponent; +import org.elasticsearch.common.inject.Binder; import org.elasticsearch.common.inject.Injector; import org.elasticsearch.common.inject.Key; import org.elasticsearch.common.inject.Module; import org.elasticsearch.common.inject.ModulesBuilder; +import org.elasticsearch.common.inject.util.Providers; +import org.elasticsearch.common.io.PathUtils; import org.elasticsearch.common.io.stream.NamedWriteableRegistry; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.logging.DeprecationLogger; @@ -68,6 +79,7 @@ import org.elasticsearch.common.transport.TransportAddress; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.BigArrays; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.json.JsonXContent; import org.elasticsearch.discovery.Discovery; import org.elasticsearch.discovery.DiscoveryModule; @@ -78,7 +90,6 @@ 
import org.elasticsearch.gateway.GatewayModule; import org.elasticsearch.gateway.GatewayService; import org.elasticsearch.gateway.MetaStateService; -import org.elasticsearch.http.HttpServer; import org.elasticsearch.http.HttpServerTransport; import org.elasticsearch.index.analysis.AnalysisRegistry; import org.elasticsearch.indices.IndicesModule; @@ -88,21 +99,23 @@ import org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService; import org.elasticsearch.indices.breaker.NoneCircuitBreakerService; import org.elasticsearch.indices.cluster.IndicesClusterStateService; +import org.elasticsearch.indices.recovery.PeerRecoverySourceService; +import org.elasticsearch.indices.recovery.PeerRecoveryTargetService; +import org.elasticsearch.indices.recovery.RecoverySettings; import org.elasticsearch.indices.store.IndicesStore; import org.elasticsearch.indices.ttl.IndicesTTLService; import org.elasticsearch.ingest.IngestService; import org.elasticsearch.monitor.MonitorService; import org.elasticsearch.monitor.jvm.JvmInfo; -import org.elasticsearch.node.internal.InternalSettingsPreparer; -import org.elasticsearch.node.service.NodeService; import org.elasticsearch.plugins.ActionPlugin; import org.elasticsearch.plugins.AnalysisPlugin; import org.elasticsearch.plugins.ClusterPlugin; import org.elasticsearch.plugins.DiscoveryPlugin; import org.elasticsearch.plugins.IngestPlugin; import org.elasticsearch.plugins.MapperPlugin; -import org.elasticsearch.plugins.Plugin; import org.elasticsearch.plugins.MetaDataUpgrader; +import org.elasticsearch.plugins.NetworkPlugin; +import org.elasticsearch.plugins.Plugin; import org.elasticsearch.plugins.PluginsService; import org.elasticsearch.plugins.RepositoryPlugin; import org.elasticsearch.plugins.ScriptPlugin; @@ -113,16 +126,18 @@ import org.elasticsearch.script.ScriptService; import org.elasticsearch.search.SearchModule; import org.elasticsearch.search.SearchService; +import org.elasticsearch.search.fetch.FetchPhase; import org.elasticsearch.snapshots.SnapshotShardsService; import org.elasticsearch.snapshots.SnapshotsService; import org.elasticsearch.tasks.TaskResultsService; import org.elasticsearch.threadpool.ExecutorBuilder; import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.Transport; +import org.elasticsearch.transport.TransportInterceptor; import org.elasticsearch.transport.TransportService; import org.elasticsearch.tribe.TribeService; import org.elasticsearch.watcher.ResourceWatcherService; -import javax.management.MBeanServerPermission; import java.io.BufferedWriter; import java.io.Closeable; import java.io.IOException; @@ -133,7 +148,6 @@ import java.nio.file.Files; import java.nio.file.Path; import java.nio.file.StandardCopyOption; -import java.security.AccessControlException; import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; @@ -141,13 +155,17 @@ import java.util.List; import java.util.Locale; import java.util.Map; +import java.util.Set; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; +import java.util.function.Consumer; import java.util.function.Function; import java.util.function.UnaryOperator; import java.util.stream.Collectors; import java.util.stream.Stream; +import static java.util.stream.Collectors.toList; + /** * A node represent a node within a cluster (cluster.name). The {@link #client()} can be used * in order to use a {@link Client} to perform actions/operations against the cluster. 
@@ -155,7 +173,7 @@ public class Node implements Closeable { - public static final Setting WRITE_PORTS_FIELD_SETTING = + public static final Setting WRITE_PORTS_FILE_SETTING = Setting.boolSetting("node.portsfile", false, Property.NodeScope); public static final Setting NODE_DATA_SETTING = Setting.boolSetting("node.data", true, Property.NodeScope); public static final Setting NODE_MASTER_SETTING = @@ -172,7 +190,16 @@ public class Node implements Closeable { */ public static final Setting NODE_LOCAL_STORAGE_SETTING = Setting.boolSetting("node.local_storage", true, Property.NodeScope); public static final Setting NODE_NAME_SETTING = Setting.simpleString("node.name", Property.NodeScope); - public static final Setting NODE_ATTRIBUTES = Setting.groupSetting("node.attr.", Property.NodeScope); + public static final Setting NODE_ATTRIBUTES = Setting.groupSetting("node.attr.", (settings) -> { + Map settingsMap = settings.getAsMap(); + for (Map.Entry entry : settingsMap.entrySet()) { + String value = entry.getValue(); + if (Character.isWhitespace(value.charAt(0)) || Character.isWhitespace(value.charAt(value.length() - 1))) { + throw new IllegalArgumentException("node.attr." + entry.getKey() + " cannot have leading or trailing whitespace " + + "[" + value + "]"); + } + } + }, Property.NodeScope); public static final Setting BREAKER_TYPE_KEY = new Setting<>("indices.breaker.type", "hierarchy", (s) -> { switch (s) { case "hierarchy": @@ -203,6 +230,7 @@ public static final Settings addNodeNameIfNeeded(Settings settings, final String private final PluginsService pluginsService; private final NodeClient client; private final Collection pluginLifecycleComponents; + private final LocalNodeFactory localNodeFactory; /** * Constructs a node with the given settings. @@ -237,22 +265,27 @@ protected Node(final Environment environment, Collection nodeEnvironment = new NodeEnvironment(tmpSettings, environment); resourcesToClose.add(nodeEnvironment); } catch (IOException ex) { - throw new IllegalStateException("Failed to created node environment", ex); + throw new IllegalStateException("Failed to create node environment", ex); } - final boolean hadPredefinedNodeName = NODE_NAME_SETTING.exists(tmpSettings); - tmpSettings = addNodeNameIfNeeded(tmpSettings, nodeEnvironment.nodeId()); Logger logger = Loggers.getLogger(Node.class, tmpSettings); + final String nodeId = nodeEnvironment.nodeId(); + tmpSettings = addNodeNameIfNeeded(tmpSettings, nodeId); + if (DiscoveryNode.nodeRequiresLocalStorage(tmpSettings)) { + checkForIndexDataInDefaultPathData(tmpSettings, nodeEnvironment, logger); + } + // this must be captured after the node name is possibly added to the settings + final String nodeName = NODE_NAME_SETTING.get(tmpSettings); if (hadPredefinedNodeName == false) { - logger.info("node name [{}] derived from node ID; set [{}] to override", - NODE_NAME_SETTING.get(tmpSettings), NODE_NAME_SETTING.getKey()); + logger.info("node name [{}] derived from node ID [{}]; set [{}] to override", nodeName, nodeId, NODE_NAME_SETTING.getKey()); + } else { + logger.info("node name [{}], node ID [{}]", nodeName, nodeId); } - final String displayVersion = Version.CURRENT + (Build.CURRENT.isSnapshot() ? 
"-SNAPSHOT" : ""); final JvmInfo jvmInfo = JvmInfo.jvmInfo(); logger.info( "version[{}], pid[{}], build[{}/{}], OS[{}/{}/{}], JVM[{}/{}/{}/{}]", - displayVersion, + displayVersion(Version.CURRENT, Build.CURRENT.isSnapshot()), jvmInfo.pid(), Build.CURRENT.shortHash(), Build.CURRENT.date(), @@ -263,7 +296,8 @@ protected Node(final Environment environment, Collection Constants.JVM_NAME, Constants.JAVA_VERSION, Constants.JVM_VERSION); - + logger.info("JVM arguments {}", Arrays.toString(jvmInfo.getInputArguments())); + warnIfPreRelease(Version.CURRENT, Build.CURRENT.isSnapshot(), logger); if (logger.isDebugEnabled()) { logger.debug("using config [{}], data [{}], logs [{}], plugins [{}]", @@ -278,6 +312,8 @@ protected Node(final Environment environment, Collection this.pluginsService = new PluginsService(tmpSettings, environment.modulesFile(), environment.pluginsFile(), classpathPlugins); this.settings = pluginsService.updatedSettings(); + localNodeFactory = new LocalNodeFactory(settings, nodeEnvironment.nodeId()); + // create the environment based on the finalized (processed) view of the settings // this is just to makes sure that people get the same settings, no matter where they ask them from this.environment = new Environment(this.settings); @@ -292,13 +328,12 @@ protected Node(final Environment environment, Collection DeprecationLogger.setThreadContext(threadPool.getThreadContext()); resourcesToClose.add(() -> DeprecationLogger.removeThreadContext(threadPool.getThreadContext())); - final List> additionalSettings = new ArrayList<>(); - final List additionalSettingsFilter = new ArrayList<>(); - additionalSettings.addAll(pluginsService.getPluginSettings()); - additionalSettingsFilter.addAll(pluginsService.getPluginSettingsFilter()); + final List> additionalSettings = new ArrayList<>(pluginsService.getPluginSettings()); + final List additionalSettingsFilter = new ArrayList<>(pluginsService.getPluginSettingsFilter()); for (final ExecutorBuilder builder : threadPool.builders()) { additionalSettings.addAll(builder.getRegisteredSettings()); } + client = new NodeClient(settings, threadPool); final ResourceWatcherService resourceWatcherService = new ResourceWatcherService(settings, threadPool); final ScriptModule scriptModule = ScriptModule.create(settings, this.environment, resourceWatcherService, pluginsService.filterPlugins(ScriptPlugin.class)); @@ -311,13 +346,13 @@ protected Node(final Environment environment, Collection resourcesToClose.add(resourceWatcherService); final NetworkService networkService = new NetworkService(settings, getCustomNameResolvers(pluginsService.filterPlugins(DiscoveryPlugin.class))); - final ClusterService clusterService = new ClusterService(settings, settingsModule.getClusterSettings(), threadPool); - clusterService.add(scriptModule.getScriptService()); + final ClusterService clusterService = new ClusterService(settings, settingsModule.getClusterSettings(), threadPool, + localNodeFactory::getNode); + clusterService.addListener(scriptModule.getScriptService()); resourcesToClose.add(clusterService); - final TribeService tribeService = new TribeService(settings, clusterService, nodeEnvironment.nodeId()); - resourcesToClose.add(tribeService); - final IngestService ingestService = new IngestService(settings, threadPool, this.environment, + final IngestService ingestService = new IngestService(clusterService.getClusterSettings(), settings, threadPool, this.environment, scriptModule.getScriptService(), analysisModule.getAnalysisRegistry(), 
pluginsService.filterPlugins(IngestPlugin.class)); + final ClusterInfoService clusterInfoService = newClusterInfoService(settings, clusterService, threadPool, client); ModulesBuilder modules = new ModulesBuilder(); // plugin modules must be added here, before others or we can get crazy injection errors... @@ -326,52 +361,99 @@ protected Node(final Environment environment, Collection } final MonitorService monitorService = new MonitorService(settings, nodeEnvironment, threadPool); modules.add(new NodeModule(this, monitorService)); - NetworkModule networkModule = new NetworkModule(networkService, settings, false); - modules.add(networkModule); - modules.add(new DiscoveryModule(this.settings)); ClusterModule clusterModule = new ClusterModule(settings, clusterService, pluginsService.filterPlugins(ClusterPlugin.class)); modules.add(clusterModule); IndicesModule indicesModule = new IndicesModule(pluginsService.filterPlugins(MapperPlugin.class)); modules.add(indicesModule); + SearchModule searchModule = new SearchModule(settings, false, pluginsService.filterPlugins(SearchPlugin.class)); - modules.add(searchModule); - modules.add(new ActionModule(DiscoveryNode.isIngestNode(settings), false, settings, - clusterModule.getIndexNameExpressionResolver(), settingsModule.getClusterSettings(), - pluginsService.filterPlugins(ActionPlugin.class))); - modules.add(new GatewayModule()); - modules.add(new RepositoriesModule(this.environment, pluginsService.filterPlugins(RepositoryPlugin.class))); - pluginsService.processModules(modules); CircuitBreakerService circuitBreakerService = createCircuitBreakerService(settingsModule.getSettings(), settingsModule.getClusterSettings()); resourcesToClose.add(circuitBreakerService); + ActionModule actionModule = new ActionModule(false, settings, clusterModule.getIndexNameExpressionResolver(), + settingsModule.getIndexScopedSettings(), settingsModule.getClusterSettings(), settingsModule.getSettingsFilter(), + threadPool, pluginsService.filterPlugins(ActionPlugin.class), client, circuitBreakerService); + modules.add(actionModule); + modules.add(new GatewayModule()); + + BigArrays bigArrays = createBigArrays(settings, circuitBreakerService); resourcesToClose.add(bigArrays); modules.add(settingsModule); List namedWriteables = Stream.of( - networkModule.getNamedWriteables().stream(), + NetworkModule.getNamedWriteables().stream(), indicesModule.getNamedWriteables().stream(), searchModule.getNamedWriteables().stream(), pluginsService.filterPlugins(Plugin.class).stream() - .flatMap(p -> p.getNamedWriteables().stream())) + .flatMap(p -> p.getNamedWriteables().stream()), + ClusterModule.getNamedWriteables().stream()) .flatMap(Function.identity()).collect(Collectors.toList()); final NamedWriteableRegistry namedWriteableRegistry = new NamedWriteableRegistry(namedWriteables); - final MetaStateService metaStateService = new MetaStateService(settings, nodeEnvironment); - final IndicesService indicesService = new IndicesService(settings, pluginsService, nodeEnvironment, - settingsModule.getClusterSettings(), analysisModule.getAnalysisRegistry(), searchModule.getQueryParserRegistry(), + NamedXContentRegistry xContentRegistry = new NamedXContentRegistry(Stream.of( + NetworkModule.getNamedXContents().stream(), + searchModule.getNamedXContents().stream(), + pluginsService.filterPlugins(Plugin.class).stream() + .flatMap(p -> p.getNamedXContent().stream()), + ClusterModule.getNamedXWriteables().stream()) + .flatMap(Function.identity()).collect(toList())); + final TribeService tribeService = new 
TribeService(settings, clusterService, nodeId, namedWriteableRegistry, + s -> newTribeClientNode(s, classpathPlugins)); + resourcesToClose.add(tribeService); + modules.add(new RepositoriesModule(this.environment, pluginsService.filterPlugins(RepositoryPlugin.class), xContentRegistry)); + final MetaStateService metaStateService = new MetaStateService(settings, nodeEnvironment, xContentRegistry); + final IndicesService indicesService = new IndicesService(settings, pluginsService, nodeEnvironment, xContentRegistry, + settingsModule.getClusterSettings(), analysisModule.getAnalysisRegistry(), clusterModule.getIndexNameExpressionResolver(), indicesModule.getMapperRegistry(), namedWriteableRegistry, - threadPool, settingsModule.getIndexScopedSettings(), circuitBreakerService, metaStateService); - client = new NodeClient(settings, threadPool); + threadPool, settingsModule.getIndexScopedSettings(), circuitBreakerService, bigArrays, scriptModule.getScriptService(), + clusterService, client, metaStateService); + Collection pluginComponents = pluginsService.filterPlugins(Plugin.class).stream() .flatMap(p -> p.createComponents(client, clusterService, threadPool, resourceWatcherService, - scriptModule.getScriptService(), searchModule.getSearchRequestParsers()).stream()) + scriptModule.getScriptService(), xContentRegistry).stream()) .collect(Collectors.toList()); + final RestController restController = actionModule.getRestController(); + final NetworkModule networkModule = new NetworkModule(settings, false, pluginsService.filterPlugins(NetworkPlugin.class), + threadPool, bigArrays, circuitBreakerService, namedWriteableRegistry, xContentRegistry, networkService, restController); Collection>> customMetaDataUpgraders = pluginsService.filterPlugins(Plugin.class).stream() - .map(Plugin::getCustomMetaDataUpgrader) - .collect(Collectors.toList()); - final MetaDataUpgrader metaDataUpgrader = new MetaDataUpgrader(customMetaDataUpgraders); + .map(Plugin::getCustomMetaDataUpgrader) + .collect(Collectors.toList()); + Collection>> indexTemplateMetaDataUpgraders = + pluginsService.filterPlugins(Plugin.class).stream() + .map(Plugin::getIndexTemplateMetaDataUpgrader) + .collect(Collectors.toList()); + Collection> indexMetaDataUpgraders = pluginsService.filterPlugins(Plugin.class).stream() + .map(Plugin::getIndexMetaDataUpgrader).collect(Collectors.toList()); + final MetaDataUpgrader metaDataUpgrader = new MetaDataUpgrader(customMetaDataUpgraders, indexTemplateMetaDataUpgraders); + new TemplateUpgradeService(settings, client, clusterService, threadPool, indexTemplateMetaDataUpgraders); + final Transport transport = networkModule.getTransportSupplier().get(); + final TransportService transportService = newTransportService(settings, transport, threadPool, + networkModule.getTransportInterceptor(), localNodeFactory, settingsModule.getClusterSettings()); + final SearchTransportService searchTransportService = new SearchTransportService(settings, + transportService); + final Consumer httpBind; + final HttpServerTransport httpServerTransport; + if (networkModule.isHttpEnabled()) { + httpServerTransport = networkModule.getHttpServerTransportSupplier().get(); + httpBind = b -> { + b.bind(HttpServerTransport.class).toInstance(httpServerTransport); + }; + } else { + httpBind = b -> { + b.bind(HttpServerTransport.class).toProvider(Providers.of(null)); + }; + httpServerTransport = null; + } + final DiscoveryModule discoveryModule = new DiscoveryModule(this.settings, threadPool, transportService, + namedWriteableRegistry, 
networkService, clusterService, pluginsService.filterPlugins(DiscoveryPlugin.class)); + NodeService nodeService = new NodeService(settings, threadPool, monitorService, discoveryModule.getDiscovery(), + transportService, indicesService, pluginsService, circuitBreakerService, scriptModule.getScriptService(), + httpServerTransport, ingestService, clusterService, settingsModule.getSettingsFilter()); + modules.add(b -> { + b.bind(NodeService.class).toInstance(nodeService); + b.bind(NamedXContentRegistry.class).toInstance(xContentRegistry); b.bind(PluginsService.class).toInstance(pluginsService); b.bind(Client.class).toInstance(client); b.bind(NodeClient.class).toInstance(client); @@ -389,14 +471,29 @@ protected Node(final Environment environment, Collection b.bind(MetaDataUpgrader.class).toInstance(metaDataUpgrader); b.bind(MetaStateService.class).toInstance(metaStateService); b.bind(IndicesService.class).toInstance(indicesService); - Class searchServiceImpl = pickSearchServiceImplementation(); - if (searchServiceImpl == SearchService.class) { - b.bind(SearchService.class).asEagerSingleton(); - } else { - b.bind(SearchService.class).to(searchServiceImpl).asEagerSingleton(); + b.bind(SearchService.class).toInstance(newSearchService(clusterService, indicesService, + threadPool, scriptModule.getScriptService(), bigArrays, searchModule.getFetchPhase())); + b.bind(SearchTransportService.class).toInstance(searchTransportService); + b.bind(SearchPhaseController.class).toInstance(new SearchPhaseController(settings, bigArrays, + scriptModule.getScriptService())); + b.bind(Transport.class).toInstance(transport); + b.bind(TransportService.class).toInstance(transportService); + b.bind(NetworkService.class).toInstance(networkService); + b.bind(UpdateHelper.class).toInstance(new UpdateHelper(settings, scriptModule.getScriptService())); + b.bind(MetaDataIndexUpgradeService.class).toInstance(new MetaDataIndexUpgradeService(settings, xContentRegistry, + indicesModule.getMapperRegistry(), settingsModule.getIndexScopedSettings(), indexMetaDataUpgraders)); + b.bind(ClusterInfoService.class).toInstance(clusterInfoService); + b.bind(Discovery.class).toInstance(discoveryModule.getDiscovery()); + { + RecoverySettings recoverySettings = new RecoverySettings(settings, settingsModule.getClusterSettings()); + processRecoverySettings(settingsModule.getClusterSettings(), recoverySettings); + b.bind(PeerRecoverySourceService.class).toInstance(new PeerRecoverySourceService(settings, transportService, + indicesService, recoverySettings, clusterService)); + b.bind(PeerRecoveryTargetService.class).toInstance(new PeerRecoveryTargetService(settings, threadPool, + transportService, recoverySettings, clusterService)); } + httpBind.accept(b); pluginComponents.stream().forEach(p -> b.bind((Class) p.getClass()).toInstance(p)); - } ); injector = modules.createInjector(); @@ -408,9 +505,13 @@ protected Node(final Environment environment, Collection .map(injector::getInstance).collect(Collectors.toList())); resourcesToClose.addAll(pluginLifecycleComponents); this.pluginLifecycleComponents = Collections.unmodifiableList(pluginLifecycleComponents); + client.initialize(injector.getInstance(new Key>() {}), + () -> clusterService.localNode().getId()); - client.intialize(injector.getInstance(new Key>() {})); - + if (NetworkModule.HTTP_ENABLED.get(settings)) { + logger.debug("initializing HTTP handlers ..."); + actionModule.initRestHandlers(() -> clusterService.state().nodes()); + } logger.info("initialized"); success = true; @@ -423,6 +524,97 
@@ protected Node(final Environment environment, Collection } } + /** + * Checks for path.data and default.path.data being configured, and there being index data in any of the paths in default.path.data. + * + * @param settings the settings to check for path.data and default.path.data + * @param nodeEnv the current node environment + * @param logger a logger where messages regarding the detection will be logged + * @throws IOException if an I/O exception occurs reading the directory structure + */ + static void checkForIndexDataInDefaultPathData( + final Settings settings, final NodeEnvironment nodeEnv, final Logger logger) throws IOException { + if (!Environment.PATH_DATA_SETTING.exists(settings) || !Environment.DEFAULT_PATH_DATA_SETTING.exists(settings)) { + return; + } + + boolean clean = true; + for (final String defaultPathData : Environment.DEFAULT_PATH_DATA_SETTING.get(settings)) { + final Path defaultNodeDirectory = NodeEnvironment.resolveNodePath(getPath(defaultPathData), nodeEnv.getNodeLockId()); + if (Files.exists(defaultNodeDirectory) == false) { + continue; + } + + if (isDefaultPathDataInPathData(nodeEnv, defaultNodeDirectory)) { + continue; + } + + final NodeEnvironment.NodePath nodePath = new NodeEnvironment.NodePath(defaultNodeDirectory); + final Set availableIndexFolders = nodeEnv.availableIndexFoldersForPath(nodePath); + if (availableIndexFolders.isEmpty()) { + continue; + } + + clean = false; + logger.error("detected index data in default.path.data [{}] where there should not be any", nodePath.indicesPath); + for (final String availableIndexFolder : availableIndexFolders) { + logger.info( + "index folder [{}] in default.path.data [{}] must be moved to any of {}", + availableIndexFolder, + nodePath.indicesPath, + Arrays.stream(nodeEnv.nodePaths()).map(np -> np.indicesPath).collect(Collectors.toList())); + } + } + + if (clean) { + return; + } + + final String message = String.format( + Locale.ROOT, + "detected index data in default.path.data %s where there should not be any; check the logs for details", + Environment.DEFAULT_PATH_DATA_SETTING.get(settings)); + throw new IllegalStateException(message); + } + + private static boolean isDefaultPathDataInPathData(final NodeEnvironment nodeEnv, final Path defaultNodeDirectory) throws IOException { + for (final NodeEnvironment.NodePath dataPath : nodeEnv.nodePaths()) { + if (Files.isSameFile(dataPath.path, defaultNodeDirectory)) { + return true; + } + } + return false; + } + + @SuppressForbidden(reason = "read path that is not configured in environment") + private static Path getPath(final String path) { + return PathUtils.get(path); + } + + // visible for testing + static void warnIfPreRelease(final Version version, final boolean isSnapshot, final Logger logger) { + if (!version.isRelease() || isSnapshot) { + logger.warn( + "version [{}] is a pre-release version of Elasticsearch and is not suitable for production", + displayVersion(version, isSnapshot)); + } + } + + private static String displayVersion(final Version version, final boolean isSnapshot) { + return version + (isSnapshot ? 
"-SNAPSHOT" : ""); + } + + protected TransportService newTransportService(Settings settings, Transport transport, ThreadPool threadPool, + TransportInterceptor interceptor, + Function localNodeFactory, + ClusterSettings clusterSettings) { + return new TransportService(settings, transport, threadPool, interceptor, localNodeFactory, clusterSettings); + } + + protected void processRecoverySettings(ClusterSettings clusterSettings, RecoverySettings recoverySettings) { + // Noop in production, overridden by tests + } + /** * The settings that were used to create the node. */ @@ -455,7 +647,7 @@ public NodeEnvironment getNodeEnvironment() { /** * Start the node. If the node is already started, this method is no-op. */ - public Node start() { + public Node start() throws NodeValidationException { if (!lifecycle.moveToStarted()) { return this; } @@ -475,7 +667,6 @@ public Node start() { injector.getInstance(RoutingService.class).start(); injector.getInstance(SearchService.class).start(); injector.getInstance(MonitorService.class).start(); - injector.getInstance(RestController.class).start(); final ClusterService clusterService = injector.getInstance(ClusterService.class); @@ -489,6 +680,7 @@ public Node start() { injector.getInstance(ResourceWatcherService.class).start(); injector.getInstance(GatewayService.class).start(); Discovery discovery = injector.getInstance(Discovery.class); + clusterService.setDiscoverySettings(discovery.getDiscoverySettings()); clusterService.addInitialStateBlock(discovery.getDiscoverySettings().getNoMasterBlock()); clusterService.setClusterStatePublisher(discovery::publish); @@ -500,29 +692,28 @@ public Node start() { TransportService transportService = injector.getInstance(TransportService.class); transportService.getTaskManager().setTaskResultsService(injector.getInstance(TaskResultsService.class)); transportService.start(); + validateNodeBeforeAcceptingRequests(settings, transportService.boundAddress(), pluginsService.filterPlugins(Plugin.class).stream() + .flatMap(p -> p.getBootstrapChecks().stream()).collect(Collectors.toList())); - validateNodeBeforeAcceptingRequests(settings, transportService.boundAddress()); - - DiscoveryNode localNode = DiscoveryNode.createLocal(settings, - transportService.boundAddress().publishAddress(), injector.getInstance(NodeEnvironment.class).nodeId()); - - // TODO: need to find a cleaner way to start/construct a service with some initial parameters, - // playing nice with the life cycle interfaces - clusterService.setLocalNode(localNode); - transportService.setLocalNode(localNode); - clusterService.add(transportService.getTaskManager()); - + clusterService.addStateApplier(transportService.getTaskManager()); clusterService.start(); - + assert localNodeFactory.getNode() != null; + assert transportService.getLocalNode().equals(localNodeFactory.getNode()) + : "transportService has a different local node than the factory provided"; + assert clusterService.localNode().equals(localNodeFactory.getNode()) + : "clusterService has a different local node than the factory provided"; // start after cluster service so the local disco is known discovery.start(); transportService.acceptIncomingRequests(); discovery.startInitialJoin(); - // tribe nodes don't have a master so we shouldn't register an observer - if (DiscoverySettings.INITIAL_STATE_TIMEOUT_SETTING.get(settings).millis() > 0) { + // tribe nodes don't have a master so we shouldn't register an observer s + final TimeValue initialStateTimeout = 
DiscoverySettings.INITIAL_STATE_TIMEOUT_SETTING.get(settings); + if (initialStateTimeout.millis() > 0) { final ThreadPool thread = injector.getInstance(ThreadPool.class); - ClusterStateObserver observer = new ClusterStateObserver(clusterService, null, logger, thread.getThreadContext()); - if (observer.observedState().nodes().getMasterNodeId() == null) { + ClusterState clusterState = clusterService.state(); + ClusterStateObserver observer = new ClusterStateObserver(clusterState, clusterService, null, logger, thread.getThreadContext()); + if (clusterState.nodes().getMasterNodeId() == null) { + logger.debug("waiting to join the cluster. timeout [{}]", initialStateTimeout); final CountDownLatch latch = new CountDownLatch(1); observer.waitForNextChange(new ClusterStateObserver.Listener() { @Override @@ -536,10 +727,10 @@ public void onClusterServiceClose() { @Override public void onTimeout(TimeValue timeout) { logger.warn("timed out while waiting for initial discovery state - timeout: {}", - DiscoverySettings.INITIAL_STATE_TIMEOUT_SETTING.get(settings)); + initialStateTimeout); latch.countDown(); } - }, MasterNodeChangePredicate.INSTANCE, DiscoverySettings.INITIAL_STATE_TIMEOUT_SETTING.get(settings)); + }, state -> state.nodes().getMasterNodeId() != null, initialStateTimeout); try { latch.await(); @@ -549,15 +740,12 @@ public void onTimeout(TimeValue timeout) { } } + if (NetworkModule.HTTP_ENABLED.get(settings)) { - injector.getInstance(HttpServer.class).start(); + injector.getInstance(HttpServerTransport.class).start(); } - // start nodes now, after the http server, because it may take some time - tribeService.startNodes(); - - - if (WRITE_PORTS_FIELD_SETTING.get(settings)) { + if (WRITE_PORTS_FILE_SETTING.get(settings)) { if (NetworkModule.HTTP_ENABLED.get(settings)) { HttpServerTransport http = injector.getInstance(HttpServerTransport.class); writePortsFile("http", http.boundAddress()); @@ -566,6 +754,8 @@ public void onTimeout(TimeValue timeout) { writePortsFile("transport", transport.boundAddress()); } + // start nodes now, after the http server, because it may take some time + tribeService.startNodes(); logger.info("started"); return this; @@ -581,23 +771,24 @@ private Node stop() { injector.getInstance(TribeService.class).stop(); injector.getInstance(ResourceWatcherService.class).stop(); if (NetworkModule.HTTP_ENABLED.get(settings)) { - injector.getInstance(HttpServer.class).stop(); + injector.getInstance(HttpServerTransport.class).stop(); } injector.getInstance(SnapshotsService.class).stop(); injector.getInstance(SnapshotShardsService.class).stop(); // stop any changes happening as a result of cluster state changes injector.getInstance(IndicesClusterStateService.class).stop(); + // close discovery early to not react to pings anymore. + // This can confuse other nodes and delay things - mostly if we're the master and we're running tests. 
+ injector.getInstance(Discovery.class).stop(); // we close indices first, so operations won't be allowed on it injector.getInstance(IndicesTTLService.class).stop(); injector.getInstance(RoutingService.class).stop(); injector.getInstance(ClusterService.class).stop(); - injector.getInstance(Discovery.class).stop(); injector.getInstance(NodeConnectionsService.class).stop(); injector.getInstance(MonitorService.class).stop(); injector.getInstance(GatewayService.class).stop(); injector.getInstance(SearchService.class).stop(); - injector.getInstance(RestController.class).stop(); injector.getInstance(TransportService.class).stop(); pluginLifecycleComponents.forEach(LifecycleComponent::stop); @@ -606,24 +797,6 @@ private Node stop() { injector.getInstance(IndicesService.class).stop(); logger.info("stopped"); - final String log4jShutdownEnabled = System.getProperty("es.log4j.shutdownEnabled", "true"); - final boolean shutdownEnabled; - switch (log4jShutdownEnabled) { - case "true": - shutdownEnabled = true; - break; - case "false": - shutdownEnabled = false; - break; - default: - throw new IllegalArgumentException( - "invalid value for [es.log4j.shutdownEnabled], was [" + log4jShutdownEnabled + "] but must be [true] or [false]"); - } - if (shutdownEnabled) { - LoggerContext context = (LoggerContext) LogManager.getContext(false); - Configurator.shutdown(context); - } - return this; } @@ -649,7 +822,7 @@ public synchronized void close() throws IOException { toClose.add(injector.getInstance(NodeService.class)); toClose.add(() -> stopWatch.stop().start("http")); if (NetworkModule.HTTP_ENABLED.get(settings)) { - toClose.add(injector.getInstance(HttpServer.class)); + toClose.add(injector.getInstance(HttpServerTransport.class)); } toClose.add(() -> stopWatch.stop().start("snapshot_service")); toClose.add(injector.getInstance(SnapshotsService.class)); @@ -677,8 +850,6 @@ public synchronized void close() throws IOException { toClose.add(injector.getInstance(GatewayService.class)); toClose.add(() -> stopWatch.stop().start("search")); toClose.add(injector.getInstance(SearchService.class)); - toClose.add(() -> stopWatch.stop().start("rest")); - toClose.add(injector.getInstance(RestController.class)); toClose.add(() -> stopWatch.stop().start("transport")); toClose.add(injector.getInstance(TransportService.class)); @@ -686,7 +857,7 @@ public synchronized void close() throws IOException { toClose.add(() -> stopWatch.stop().start("plugin(" + plugin.getClass().getName() + ")")); toClose.add(plugin); } - toClose.addAll(pluginsService.filterPlugins(Closeable.class)); + toClose.addAll(pluginsService.filterPlugins(Plugin.class)); toClose.add(() -> stopWatch.stop().start("script")); toClose.add(injector.getInstance(ScriptService.class)); @@ -740,7 +911,9 @@ public Injector injector() { * bound and publishing to */ @SuppressWarnings("unused") - protected void validateNodeBeforeAcceptingRequests(Settings settings, BoundTransportAddress boundTransportAddress) { + protected void validateNodeBeforeAcceptingRequests( + final Settings settings, + final BoundTransportAddress boundTransportAddress, List bootstrapChecks) throws NodeValidationException { } /** Writes a file to the logs dir containing the ports for the given transport type */ @@ -797,10 +970,12 @@ BigArrays createBigArrays(Settings settings, CircuitBreakerService circuitBreake } /** - * Select the search service implementation. Overrided by tests. + * Creates a new the SearchService. This method can be overwritten by tests to inject mock implementations. 
*/ - protected Class pickSearchServiceImplementation() { - return SearchService.class; + protected SearchService newSearchService(ClusterService clusterService, IndicesService indicesService, + ThreadPool threadPool, ScriptService scriptService, BigArrays bigArrays, + FetchPhase fetchPhase) { + return new SearchService(clusterService, indicesService, threadPool, scriptService, bigArrays, fetchPhase); } /** @@ -817,4 +992,37 @@ private List getCustomNameResolvers(List> classpathPlugins) { + return new Node(new Environment(settings), classpathPlugins); + } + + /** Constructs a ClusterInfoService which may be mocked for tests. */ + protected ClusterInfoService newClusterInfoService(Settings settings, ClusterService clusterService, + ThreadPool threadPool, NodeClient client) { + return new InternalClusterInfoService(settings, clusterService, threadPool, client); + } + + private static class LocalNodeFactory implements Function { + private final SetOnce localNode = new SetOnce<>(); + private final String persistentNodeId; + private final Settings settings; + + private LocalNodeFactory(Settings settings, String persistentNodeId) { + this.persistentNodeId = persistentNodeId; + this.settings = settings; + } + + @Override + public DiscoveryNode apply(BoundTransportAddress boundTransportAddress) { + localNode.set(DiscoveryNode.createLocal(settings, boundTransportAddress.publishAddress(), persistentNodeId)); + return localNode.get(); + } + + DiscoveryNode getNode() { + assert localNode.get() != null; + return localNode.get(); + } + } } diff --git a/core/src/main/java/org/elasticsearch/node/NodeModule.java b/core/src/main/java/org/elasticsearch/node/NodeModule.java index 6a8f8b90681bb..929e889503ea4 100644 --- a/core/src/main/java/org/elasticsearch/node/NodeModule.java +++ b/core/src/main/java/org/elasticsearch/node/NodeModule.java @@ -22,7 +22,6 @@ import org.elasticsearch.cluster.routing.allocation.DiskThresholdMonitor; import org.elasticsearch.common.inject.AbstractModule; import org.elasticsearch.monitor.MonitorService; -import org.elasticsearch.node.service.NodeService; public class NodeModule extends AbstractModule { @@ -38,7 +37,6 @@ public NodeModule(Node node, MonitorService monitorService) { protected void configure() { bind(Node.class).toInstance(node); bind(MonitorService.class).toInstance(monitorService); - bind(NodeService.class).asEagerSingleton(); bind(DiskThresholdMonitor.class).asEagerSingleton(); } } diff --git a/core/src/main/java/org/elasticsearch/node/service/NodeService.java b/core/src/main/java/org/elasticsearch/node/NodeService.java similarity index 86% rename from core/src/main/java/org/elasticsearch/node/service/NodeService.java rename to core/src/main/java/org/elasticsearch/node/NodeService.java index 39e151c886fcc..cb245487152e3 100644 --- a/core/src/main/java/org/elasticsearch/node/service/NodeService.java +++ b/core/src/main/java/org/elasticsearch/node/NodeService.java @@ -17,10 +17,7 @@ * under the License. 
*/ -package org.elasticsearch.node.service; - -import java.io.Closeable; -import java.io.IOException; +package org.elasticsearch.node; import org.elasticsearch.Build; import org.elasticsearch.Version; @@ -30,11 +27,10 @@ import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.component.AbstractComponent; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsFilter; import org.elasticsearch.discovery.Discovery; -import org.elasticsearch.http.HttpServer; +import org.elasticsearch.http.HttpServerTransport; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.ingest.IngestService; @@ -44,8 +40,9 @@ import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; -/** - */ +import java.io.Closeable; +import java.io.IOException; + public class NodeService extends AbstractComponent implements Closeable { private final ThreadPool threadPool; @@ -57,17 +54,16 @@ public class NodeService extends AbstractComponent implements Closeable { private final IngestService ingestService; private final SettingsFilter settingsFilter; private ScriptService scriptService; + private final HttpServerTransport httpServerTransport; - @Nullable - private final HttpServer httpServer; private final Discovery discovery; - @Inject - public NodeService(Settings settings, ThreadPool threadPool, MonitorService monitorService, Discovery discovery, + NodeService(Settings settings, ThreadPool threadPool, MonitorService monitorService, Discovery discovery, TransportService transportService, IndicesService indicesService, PluginsService pluginService, - CircuitBreakerService circuitBreakerService, ScriptService scriptService, @Nullable HttpServer httpServer, - IngestService ingestService, ClusterService clusterService, SettingsFilter settingsFilter) { + CircuitBreakerService circuitBreakerService, ScriptService scriptService, + @Nullable HttpServerTransport httpServerTransport, IngestService ingestService, ClusterService clusterService, + SettingsFilter settingsFilter) { super(settings); this.threadPool = threadPool; this.monitorService = monitorService; @@ -76,12 +72,12 @@ public NodeService(Settings settings, ThreadPool threadPool, MonitorService moni this.discovery = discovery; this.pluginService = pluginService; this.circuitBreakerService = circuitBreakerService; - this.httpServer = httpServer; + this.httpServerTransport = httpServerTransport; this.ingestService = ingestService; this.settingsFilter = settingsFilter; this.scriptService = scriptService; - clusterService.add(ingestService.getPipelineStore()); - clusterService.add(ingestService.getPipelineExecutionService()); + clusterService.addStateApplier(ingestService.getPipelineStore()); + clusterService.addStateApplier(ingestService.getPipelineExecutionService()); } public NodeInfo info(boolean settings, boolean os, boolean process, boolean jvm, boolean threadPool, @@ -93,7 +89,7 @@ public NodeInfo info(boolean settings, boolean os, boolean process, boolean jvm, jvm ? monitorService.jvmService().info() : null, threadPool ? this.threadPool.info() : null, transport ? transportService.info() : null, - http ? (httpServer == null ? null : httpServer.info()) : null, + http ? (httpServerTransport == null ? null : httpServerTransport.info()) : null, plugin ? (pluginService == null ? 
null : pluginService.info()) : null, ingest ? (ingestService == null ? null : ingestService.info()) : null, indices ? indicesService.getTotalIndexingBufferBytes() : null @@ -113,7 +109,7 @@ public NodeStats stats(CommonStatsFlags indices, boolean os, boolean process, bo threadPool ? this.threadPool.stats() : null, fs ? monitorService.fsService().stats() : null, transport ? transportService.stats() : null, - http ? httpServer.stats() : null, + http ? (httpServerTransport == null ? null : httpServerTransport.stats()) : null, circuitBreaker ? circuitBreakerService.stats() : null, script ? scriptService.stats() : null, discoveryStats ? discovery.stats() : null, @@ -129,4 +125,5 @@ public IngestService getIngestService() { public void close() throws IOException { indicesService.close(); } + } diff --git a/core/src/main/java/org/elasticsearch/node/NodeValidationException.java b/core/src/main/java/org/elasticsearch/node/NodeValidationException.java new file mode 100644 index 0000000000000..884fc8e8b6ab1 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/node/NodeValidationException.java @@ -0,0 +1,46 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.node; + +import org.elasticsearch.bootstrap.BootstrapCheck; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.transport.BoundTransportAddress; + +import java.util.List; + +/** + * An exception thrown during node validation. Node validation runs immediately before a node + * begins accepting network requests in + * {@link Node#validateNodeBeforeAcceptingRequests(Settings, BoundTransportAddress, List)}. This exception is a checked exception that is + * declared as thrown from this method for the purpose of bubbling up to the user. + */ +public class NodeValidationException extends Exception { + + /** + * Creates a node validation exception with the specified validation message to be displayed to + * the user. 
+ * + * @param message the message to display to the user + */ + public NodeValidationException(final String message) { + super(message); + } + +} diff --git a/core/src/main/java/org/elasticsearch/plugins/ActionPlugin.java b/core/src/main/java/org/elasticsearch/plugins/ActionPlugin.java index 3d769d27a87da..346bf491d619b 100644 --- a/core/src/main/java/org/elasticsearch/plugins/ActionPlugin.java +++ b/core/src/main/java/org/elasticsearch/plugins/ActionPlugin.java @@ -25,13 +25,24 @@ import org.elasticsearch.action.support.ActionFilter; import org.elasticsearch.action.support.TransportAction; import org.elasticsearch.action.support.TransportActions; +import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.common.Strings; +import org.elasticsearch.common.settings.ClusterSettings; +import org.elasticsearch.common.settings.IndexScopedSettings; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.settings.SettingsFilter; +import org.elasticsearch.common.util.concurrent.ThreadContext; +import org.elasticsearch.plugins.Plugin; +import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestHandler; import java.util.Collection; import java.util.Collections; import java.util.List; import java.util.Objects; +import java.util.function.Supplier; +import java.util.function.UnaryOperator; /** * An additional extension point for {@link Plugin}s that extends Elasticsearch's scripting functionality. Implement it like this: @@ -49,7 +60,7 @@ public interface ActionPlugin { /** * Actions added by this plugin. */ - default List, ? extends ActionResponse>> getActions() { + default List> getActions() { return Collections.emptyList(); } /** @@ -61,7 +72,9 @@ default List> getActionFilters() { /** * Rest handlers added by this plugin. */ - default List> getRestHandlers() { + default List getRestHandlers(Settings settings, RestController restController, ClusterSettings clusterSettings, + IndexScopedSettings indexScopedSettings, SettingsFilter settingsFilter, + IndexNameExpressionResolver indexNameExpressionResolver, Supplier nodesInCluster) { return Collections.emptyList(); } @@ -72,7 +85,16 @@ default Collection getRestHeaders() { return Collections.emptyList(); } - final class ActionHandler, Response extends ActionResponse> { + /** + * Returns a function used to wrap each rest request before handling the request. + * + * Note: Only one installed plugin may implement a rest wrapper. 
+ */ + default UnaryOperator getRestHandlerWrapper(ThreadContext threadContext) { + return null; + } + + final class ActionHandler { private final GenericAction action; private final Class> transportAction; private final Class[] supportTransportActions; diff --git a/core/src/main/java/org/elasticsearch/plugins/DiscoveryPlugin.java b/core/src/main/java/org/elasticsearch/plugins/DiscoveryPlugin.java index f6174c08d124f..61e87d83a183f 100644 --- a/core/src/main/java/org/elasticsearch/plugins/DiscoveryPlugin.java +++ b/core/src/main/java/org/elasticsearch/plugins/DiscoveryPlugin.java @@ -19,8 +19,19 @@ package org.elasticsearch.plugins; +import java.util.Collections; +import java.util.Map; +import java.util.function.Supplier; + +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.io.stream.NamedWriteableRegistry; import org.elasticsearch.common.network.NetworkService; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.discovery.Discovery; +import org.elasticsearch.discovery.zen.UnicastHostsProvider; +import org.elasticsearch.discovery.zen.ZenPing; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.TransportService; /** * An additional extension point for {@link Plugin}s that extends Elasticsearch's discovery functionality. To add an additional @@ -36,6 +47,25 @@ * } */ public interface DiscoveryPlugin { + + /** + * Returns custom discovery implementations added by this plugin. + * + * The key of the returned map is the name of the discovery implementation + * (see {@link org.elasticsearch.discovery.DiscoveryModule#DISCOVERY_TYPE_SETTING}, and + * the value is a supplier to construct the {@link Discovery}. + * + * @param threadPool Use to schedule ping actions + * @param transportService Use to communicate with other nodes + * @param clusterService Use to find current nodes in the cluster + * @param hostsProvider Use to find configured hosts which should be pinged for initial discovery + */ + default Map> getDiscoveryTypes(ThreadPool threadPool, TransportService transportService, + NamedWriteableRegistry namedWriteableRegistry, + ClusterService clusterService, UnicastHostsProvider hostsProvider) { + return Collections.emptyMap(); + } + /** * Override to add additional {@link NetworkService.CustomNameResolver}s. * This can be handy if you want to provide your own Network interface name like _mycard_ @@ -52,4 +82,20 @@ public interface DiscoveryPlugin { default NetworkService.CustomNameResolver getCustomNameResolver(Settings settings) { return null; } + + /** + * Returns providers of unicast host lists for zen discovery. + * + * The key of the returned map is the name of the host provider + * (see {@link org.elasticsearch.discovery.DiscoveryModule#DISCOVERY_HOSTS_PROVIDER_SETTING}), and + * the value is a supplier to construct the host provider when it is selected for use. 
+ * + * @param transportService Use to form the {@link org.elasticsearch.common.transport.TransportAddress} portion + * of a {@link org.elasticsearch.cluster.node.DiscoveryNode} + * @param networkService Use to find the publish host address of the current node + */ + default Map> getZenHostsProviders(TransportService transportService, + NetworkService networkService) { + return Collections.emptyMap(); + } } diff --git a/core/src/main/java/org/elasticsearch/plugins/DummyPluginInfo.java b/core/src/main/java/org/elasticsearch/plugins/DummyPluginInfo.java index 90b1d32f4ae4a..73a3811f729f7 100644 --- a/core/src/main/java/org/elasticsearch/plugins/DummyPluginInfo.java +++ b/core/src/main/java/org/elasticsearch/plugins/DummyPluginInfo.java @@ -21,9 +21,13 @@ public class DummyPluginInfo extends PluginInfo { private DummyPluginInfo(String name, String description, String version, String classname) { - super(name, description, version, classname); + super(name, description, version, classname, false); } - public static final DummyPluginInfo INSTANCE = new DummyPluginInfo( - "dummy_plugin_name", "dummy plugin description", "dummy_plugin_version", "DummyPluginName"); + public static final DummyPluginInfo INSTANCE = + new DummyPluginInfo( + "dummy_plugin_name", + "dummy plugin description", + "dummy_plugin_version", + "DummyPluginName"); } diff --git a/core/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java b/core/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java deleted file mode 100644 index 6579257b5a775..0000000000000 --- a/core/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java +++ /dev/null @@ -1,590 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.plugins; - -import joptsimple.OptionSet; -import joptsimple.OptionSpec; -import org.apache.lucene.search.spell.LevensteinDistance; -import org.apache.lucene.util.CollectionUtil; -import org.apache.lucene.util.IOUtils; -import org.elasticsearch.Version; -import org.elasticsearch.bootstrap.JarHell; -import org.elasticsearch.cli.ExitCodes; -import org.elasticsearch.cli.SettingCommand; -import org.elasticsearch.cli.Terminal; -import org.elasticsearch.cli.UserException; -import org.elasticsearch.common.collect.Tuple; -import org.elasticsearch.common.hash.MessageDigests; -import org.elasticsearch.common.io.FileSystemUtils; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.env.Environment; -import org.elasticsearch.node.internal.InternalSettingsPreparer; - -import java.io.BufferedReader; -import java.io.IOException; -import java.io.InputStream; -import java.io.InputStreamReader; -import java.io.OutputStream; -import java.net.URL; -import java.net.URLConnection; -import java.net.URLDecoder; -import java.nio.charset.StandardCharsets; -import java.nio.file.DirectoryStream; -import java.nio.file.Files; -import java.nio.file.Path; -import java.nio.file.StandardCopyOption; -import java.nio.file.attribute.PosixFileAttributeView; -import java.nio.file.attribute.PosixFileAttributes; -import java.nio.file.attribute.PosixFilePermission; -import java.nio.file.attribute.PosixFilePermissions; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.Collections; -import java.util.HashSet; -import java.util.List; -import java.util.Locale; -import java.util.Map; -import java.util.Objects; -import java.util.Set; -import java.util.TreeSet; -import java.util.stream.Collectors; -import java.util.zip.ZipEntry; -import java.util.zip.ZipInputStream; - -import static org.elasticsearch.cli.Terminal.Verbosity.VERBOSE; - -/** - * A command for the plugin cli to install a plugin into elasticsearch. - * - * The install command takes a plugin id, which may be any of the following: - *
- * <ul>
- * <li>An official elasticsearch plugin name</li>
- * <li>Maven coordinates to a plugin zip</li>
- * <li>A URL to a plugin zip</li>
- * </ul>
- *
- * Plugins are packaged as zip files. Each packaged plugin must contain a
- * plugin properties file. See {@link PluginInfo}.
- * <p>
- * The installation process first extracts the plugin files into a temporary
- * directory in order to verify the plugin satisfies the following requirements:
- * <ul>
- * <li>Jar hell does not exist, either between the plugin's own jars, or with elasticsearch</li>
- * <li>The plugin is not a module already provided with elasticsearch</li>
- * <li>If the plugin contains extra security permissions, the policy file is validated</li>
- * </ul>
- * <p>
- * A plugin may also contain an optional {@code bin} directory which contains scripts. The
- * scripts will be installed into a subdirectory of the elasticsearch bin directory, using
- * the name of the plugin, and the scripts will be marked executable.
- * <p>
    - * A plugin may also contain an optional {@code config} directory which contains configuration - * files specific to the plugin. The config files be installed into a subdirectory of the - * elasticsearch config directory, using the name of the plugin. If any files to be installed - * already exist, they will be skipped. - */ -class InstallPluginCommand extends SettingCommand { - - private static final String PROPERTY_STAGING_ID = "es.plugins.staging"; - - /** The builtin modules, which are plugins, but cannot be installed or removed. */ - static final Set MODULES; - static { - try (InputStream stream = InstallPluginCommand.class.getResourceAsStream("/modules.txt"); - BufferedReader reader = new BufferedReader(new InputStreamReader(stream, StandardCharsets.UTF_8))) { - Set modules = new HashSet<>(); - String line = reader.readLine(); - while (line != null) { - modules.add(line.trim()); - line = reader.readLine(); - } - MODULES = Collections.unmodifiableSet(modules); - } catch (IOException e) { - throw new RuntimeException(e); - } - } - - /** The official plugins that can be installed simply by name. */ - static final Set OFFICIAL_PLUGINS; - static { - try (InputStream stream = InstallPluginCommand.class.getResourceAsStream("/plugins.txt"); - BufferedReader reader = new BufferedReader(new InputStreamReader(stream, StandardCharsets.UTF_8))) { - Set plugins = new TreeSet<>(); // use tree set to get sorting for help command - String line = reader.readLine(); - while (line != null) { - plugins.add(line.trim()); - line = reader.readLine(); - } - plugins.add("x-pack"); - OFFICIAL_PLUGINS = Collections.unmodifiableSet(plugins); - } catch (IOException e) { - throw new RuntimeException(e); - } - } - - private final OptionSpec batchOption; - private final OptionSpec arguments; - - - static final Set DIR_AND_EXECUTABLE_PERMS; - static final Set FILE_PERMS; - - static { - Set dirAndExecutablePerms = new HashSet<>(7); - // Directories and executables get chmod 755 - dirAndExecutablePerms.add(PosixFilePermission.OWNER_EXECUTE); - dirAndExecutablePerms.add(PosixFilePermission.OWNER_READ); - dirAndExecutablePerms.add(PosixFilePermission.OWNER_WRITE); - dirAndExecutablePerms.add(PosixFilePermission.GROUP_EXECUTE); - dirAndExecutablePerms.add(PosixFilePermission.GROUP_READ); - dirAndExecutablePerms.add(PosixFilePermission.OTHERS_READ); - dirAndExecutablePerms.add(PosixFilePermission.OTHERS_EXECUTE); - DIR_AND_EXECUTABLE_PERMS = Collections.unmodifiableSet(dirAndExecutablePerms); - - Set filePerms = new HashSet<>(4); - // Files get chmod 644 - filePerms.add(PosixFilePermission.OWNER_READ); - filePerms.add(PosixFilePermission.OWNER_WRITE); - filePerms.add(PosixFilePermission.GROUP_READ); - filePerms.add(PosixFilePermission.OTHERS_READ); - FILE_PERMS = Collections.unmodifiableSet(filePerms); - } - - InstallPluginCommand() { - super("Install a plugin"); - this.batchOption = parser.acceptsAll(Arrays.asList("b", "batch"), - "Enable batch mode explicitly, automatic confirmation of security permission"); - this.arguments = parser.nonOptions("plugin id"); - } - - @Override - protected void printAdditionalHelp(Terminal terminal) { - terminal.println("The following official plugins may be installed by name:"); - for (String plugin : OFFICIAL_PLUGINS) { - terminal.println(" " + plugin); - } - terminal.println(""); - } - - @Override - protected void execute(Terminal terminal, OptionSet options, Map settings) throws Exception { - String pluginId = arguments.value(options); - boolean isBatch = options.has(batchOption) 
|| System.console() == null; - execute(terminal, pluginId, isBatch, settings); - } - - // pkg private for testing - void execute(Terminal terminal, String pluginId, boolean isBatch, Map settings) throws Exception { - final Environment env = InternalSettingsPreparer.prepareEnvironment(Settings.EMPTY, terminal, settings); - // TODO: remove this leniency!! is it needed anymore? - if (Files.exists(env.pluginsFile()) == false) { - terminal.println("Plugins directory [" + env.pluginsFile() + "] does not exist. Creating..."); - Files.createDirectory(env.pluginsFile()); - } - - Path pluginZip = download(terminal, pluginId, env.tmpFile()); - Path extractedZip = unzip(pluginZip, env.pluginsFile()); - install(terminal, isBatch, extractedZip, env); - } - - /** Downloads the plugin and returns the file it was downloaded to. */ - private Path download(Terminal terminal, String pluginId, Path tmpDir) throws Exception { - if (OFFICIAL_PLUGINS.contains(pluginId)) { - final String version = Version.CURRENT.toString(); - final String url; - final String stagingHash = System.getProperty(PROPERTY_STAGING_ID); - if (stagingHash != null) { - url = String.format(Locale.ROOT, - "https://staging.elastic.co/%3$s-%1$s/download/elasticsearch-plugins/%2$s/%2$s-%3$s.zip", - stagingHash, pluginId, version); - } else { - url = String.format(Locale.ROOT, - "https://artifacts.elastic.co/download/elasticsearch-plugins/%1$s/%1$s-%2$s.zip", - pluginId, version); - } - terminal.println("-> Downloading " + pluginId + " from elastic"); - return downloadZipAndChecksum(terminal, url, tmpDir); - } - - // now try as maven coordinates, a valid URL would only have a colon and slash - String[] coordinates = pluginId.split(":"); - if (coordinates.length == 3 && pluginId.contains("/") == false) { - String mavenUrl = String.format(Locale.ROOT, "https://repo1.maven.org/maven2/%1$s/%2$s/%3$s/%2$s-%3$s.zip", - coordinates[0].replace(".", "/") /* groupId */, coordinates[1] /* artifactId */, coordinates[2] /* version */); - terminal.println("-> Downloading " + pluginId + " from maven central"); - return downloadZipAndChecksum(terminal, mavenUrl, tmpDir); - } - - // fall back to plain old URL - if (pluginId.contains(":/") == false) { - // definitely not a valid url, so assume it is a plugin name - List plugins = checkMisspelledPlugin(pluginId); - String msg = "Unknown plugin " + pluginId; - if (plugins.isEmpty() == false) { - msg += ", did you mean " + (plugins.size() == 1 ? "[" + plugins.get(0) + "]": "any of " + plugins.toString()) + "?"; - } - throw new UserException(ExitCodes.USAGE, msg); - } - terminal.println("-> Downloading " + URLDecoder.decode(pluginId, "UTF-8")); - return downloadZip(terminal, pluginId, tmpDir); - } - - /** Returns all the official plugin names that look similar to pluginId. **/ - private List checkMisspelledPlugin(String pluginId) { - LevensteinDistance ld = new LevensteinDistance(); - List> scoredKeys = new ArrayList<>(); - for (String officialPlugin : OFFICIAL_PLUGINS) { - float distance = ld.getDistance(pluginId, officialPlugin); - if (distance > 0.7f) { - scoredKeys.add(new Tuple<>(distance, officialPlugin)); - } - } - CollectionUtil.timSort(scoredKeys, (a, b) -> b.v1().compareTo(a.v1())); - return scoredKeys.stream().map((a) -> a.v2()).collect(Collectors.toList()); - } - - /** Downloads a zip from the url, into a temp file under the given temp dir. 
*/ - private Path downloadZip(Terminal terminal, String urlString, Path tmpDir) throws IOException { - terminal.println(VERBOSE, "Retrieving zip from " + urlString); - URL url = new URL(urlString); - Path zip = Files.createTempFile(tmpDir, null, ".zip"); - URLConnection urlConnection = url.openConnection(); - int contentLength = urlConnection.getContentLength(); - try (InputStream in = new TerminalProgressInputStream(urlConnection.getInputStream(), contentLength, terminal)) { - // must overwrite since creating the temp file above actually created the file - Files.copy(in, zip, StandardCopyOption.REPLACE_EXISTING); - } - return zip; - } - - /** - * content length might be -1 for unknown and progress only makes sense if the content length is greater than 0 - */ - private class TerminalProgressInputStream extends ProgressInputStream { - - private final Terminal terminal; - private int width = 50; - private final boolean enabled; - - public TerminalProgressInputStream(InputStream is, int expectedTotalSize, Terminal terminal) { - super(is, expectedTotalSize); - this.terminal = terminal; - this.enabled = expectedTotalSize > 0; - } - - @Override - public void onProgress(int percent) { - if (enabled) { - int currentPosition = percent * width / 100; - StringBuilder sb = new StringBuilder("\r["); - sb.append(String.join("=", Collections.nCopies(currentPosition, ""))); - if (currentPosition > 0 && percent < 100) { - sb.append(">"); - } - sb.append(String.join(" ", Collections.nCopies(width - currentPosition, ""))); - sb.append("] %s   "); - if (percent == 100) { - sb.append("\n"); - } - terminal.print(Terminal.Verbosity.NORMAL, String.format(Locale.ROOT, sb.toString(), percent + "%")); - } - } - } - - /** Downloads a zip from the url, as well as a SHA1 checksum, and checks the checksum. */ - private Path downloadZipAndChecksum(Terminal terminal, String urlString, Path tmpDir) throws Exception { - Path zip = downloadZip(terminal, urlString, tmpDir); - - URL checksumUrl = new URL(urlString + ".sha1"); - final String expectedChecksum; - try (InputStream in = checksumUrl.openStream()) { - BufferedReader checksumReader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8)); - expectedChecksum = checksumReader.readLine(); - if (checksumReader.readLine() != null) { - throw new UserException(ExitCodes.IO_ERROR, "Invalid checksum file at " + checksumUrl); - } - } - - byte[] zipbytes = Files.readAllBytes(zip); - String gotChecksum = MessageDigests.toHexString(MessageDigests.sha1().digest(zipbytes)); - if (expectedChecksum.equals(gotChecksum) == false) { - throw new UserException(ExitCodes.IO_ERROR, "SHA1 mismatch, expected " + expectedChecksum + " but got " + gotChecksum); - } - - return zip; - } - - private Path unzip(Path zip, Path pluginsDir) throws IOException, UserException { - // unzip plugin to a staging temp dir - - final Path target = stagingDirectory(pluginsDir); - - boolean hasEsDir = false; - // TODO: we should wrap this in a try/catch and try deleting the target dir on failure? 
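For reference, the checksum handling in `downloadZipAndChecksum` above boils down to hashing the downloaded bytes and comparing the lower-case hex digest against the single line read from the `.sha1` file. A self-contained sketch of that comparison, with a hypothetical helper class name (this is not the command's own code):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

// Hypothetical standalone version of the SHA-1 comparison performed after the download.
class Sha1Check {
    static void verify(Path zip, String expectedChecksum) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(Files.readAllBytes(zip));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b)); // lower-case hex, as produced by MessageDigests.toHexString
        }
        if (expectedChecksum.equals(hex.toString()) == false) {
            throw new IllegalStateException("SHA1 mismatch, expected " + expectedChecksum + " but got " + hex);
        }
    }
}
```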
- try (ZipInputStream zipInput = new ZipInputStream(Files.newInputStream(zip))) { - ZipEntry entry; - byte[] buffer = new byte[8192]; - while ((entry = zipInput.getNextEntry()) != null) { - if (entry.getName().startsWith("elasticsearch/") == false) { - // only extract the elasticsearch directory - continue; - } - hasEsDir = true; - Path targetFile = target.resolve(entry.getName().substring("elasticsearch/".length())); - - // Using the entry name as a path can result in an entry outside of the plugin dir, either if the - // name starts with the root of the filesystem, or it is a relative entry like ../whatever. - // This check attempts to identify both cases by first normalizing the path (which removes foo/..) - // and ensuring the normalized entry is still rooted with the target plugin directory. - if (targetFile.normalize().startsWith(target) == false) { - throw new IOException("Zip contains entry name '" + entry.getName() + "' resolving outside of plugin directory"); - } - - // be on the safe side: do not rely on that directories are always extracted - // before their children (although this makes sense, but is it guaranteed?) - if (!Files.isSymbolicLink(targetFile.getParent())) { - Files.createDirectories(targetFile.getParent()); - } - if (entry.isDirectory() == false) { - try (OutputStream out = Files.newOutputStream(targetFile)) { - int len; - while ((len = zipInput.read(buffer)) >= 0) { - out.write(buffer, 0, len); - } - } - } - zipInput.closeEntry(); - } - } - Files.delete(zip); - if (hasEsDir == false) { - IOUtils.rm(target); - throw new UserException(ExitCodes.DATA_ERROR, "`elasticsearch` directory is missing in the plugin zip"); - } - return target; - } - - private Path stagingDirectory(Path pluginsDir) throws IOException { - try { - return Files.createTempDirectory(pluginsDir, ".installing-", PosixFilePermissions.asFileAttribute(DIR_AND_EXECUTABLE_PERMS)); - } catch (IllegalArgumentException e) { - // Jimfs throws an IAE where it should throw an UOE - // remove when google/jimfs#30 is integrated into Jimfs - // and the Jimfs test dependency is upgraded to include - // this pull request - final StackTraceElement[] elements = e.getStackTrace(); - if (elements.length >= 1 && - elements[0].getClassName().equals("com.google.common.jimfs.AttributeService") && - elements[0].getMethodName().equals("setAttributeInternal")) { - return stagingDirectoryWithoutPosixPermissions(pluginsDir); - } else { - throw e; - } - } catch (UnsupportedOperationException e) { - return stagingDirectoryWithoutPosixPermissions(pluginsDir); - } - } - - private Path stagingDirectoryWithoutPosixPermissions(Path pluginsDir) throws IOException { - return Files.createTempDirectory(pluginsDir, ".installing-"); - } - - /** Load information about the plugin, and verify it can be installed with no errors. */ - private PluginInfo verify(Terminal terminal, Path pluginRoot, boolean isBatch, Environment env) throws Exception { - // read and validate the plugin descriptor - PluginInfo info = PluginInfo.readFromProperties(pluginRoot); - terminal.println(VERBOSE, info.toString()); - - // don't let luser install plugin as a module... 
- // they might be unavoidably in maven central and are packaged up the same way) - if (MODULES.contains(info.getName())) { - throw new UserException( - ExitCodes.USAGE, "plugin '" + info.getName() + "' cannot be installed like this, it is a system module"); - } - - // check for jar hell before any copying - jarHellCheck(pluginRoot, env.pluginsFile()); - - // read optional security policy (extra permissions) - // if it exists, confirm or warn the user - Path policy = pluginRoot.resolve(PluginInfo.ES_PLUGIN_POLICY); - if (Files.exists(policy)) { - PluginSecurity.readPolicy(policy, terminal, env, isBatch); - } - - return info; - } - - /** check a candidate plugin for jar hell before installing it */ - void jarHellCheck(Path candidate, Path pluginsDir) throws Exception { - // create list of current jars in classpath - final List jars = new ArrayList<>(); - jars.addAll(Arrays.asList(JarHell.parseClassPath())); - - // read existing bundles. this does some checks on the installation too. - PluginsService.getPluginBundles(pluginsDir); - - // add plugin jars to the list - Path pluginJars[] = FileSystemUtils.files(candidate, "*.jar"); - for (Path jar : pluginJars) { - jars.add(jar.toUri().toURL()); - } - // TODO: no jars should be an error - // TODO: verify the classname exists in one of the jars! - - // check combined (current classpath + new jars to-be-added) - JarHell.checkJarHell(jars.toArray(new URL[jars.size()])); - } - - /** - * Installs the plugin from {@code tmpRoot} into the plugins dir. - * If the plugin has a bin dir and/or a config dir, those are copied. - */ - private void install(Terminal terminal, boolean isBatch, Path tmpRoot, Environment env) throws Exception { - List deleteOnFailure = new ArrayList<>(); - deleteOnFailure.add(tmpRoot); - - try { - PluginInfo info = verify(terminal, tmpRoot, isBatch, env); - - final Path destination = env.pluginsFile().resolve(info.getName()); - if (Files.exists(destination)) { - throw new UserException( - ExitCodes.USAGE, - "plugin directory " + destination.toAbsolutePath() + - " already exists. To update the plugin, uninstall it first using 'remove " + info.getName() + "' command"); - } - - Path tmpBinDir = tmpRoot.resolve("bin"); - if (Files.exists(tmpBinDir)) { - Path destBinDir = env.binFile().resolve(info.getName()); - deleteOnFailure.add(destBinDir); - installBin(info, tmpBinDir, destBinDir); - } - - Path tmpConfigDir = tmpRoot.resolve("config"); - if (Files.exists(tmpConfigDir)) { - // some files may already exist, and we don't remove plugin config files on plugin removal, - // so any installed config files are left on failure too - installConfig(info, tmpConfigDir, env.configFile().resolve(info.getName())); - } - - Files.move(tmpRoot, destination, StandardCopyOption.ATOMIC_MOVE); - try (DirectoryStream stream = Files.newDirectoryStream(destination)) { - for (Path pluginFile : stream) { - if (Files.isDirectory(pluginFile)) { - setFileAttributes(pluginFile, DIR_AND_EXECUTABLE_PERMS); - } else { - setFileAttributes(pluginFile, FILE_PERMS); - } - } - } - terminal.println("-> Installed " + info.getName()); - - } catch (Exception installProblem) { - try { - IOUtils.rm(deleteOnFailure.toArray(new Path[0])); - } catch (IOException exceptionWhileRemovingFiles) { - installProblem.addSuppressed(exceptionWhileRemovingFiles); - } - throw installProblem; - } - } - - /** Copies the files from {@code tmpBinDir} into {@code destBinDir}, along with permissions from dest dirs parent. 
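One detail that is easy to miss in the permission handling above: the `DIR_AND_EXECUTABLE_PERMS` and `FILE_PERMS` sets assembled element by element near the top of the class are simply the classic `chmod 755` and `chmod 644` masks that `install` applies to the extracted files. A small sketch showing the equivalent sets built from permission strings (illustrative only, not how the command itself constructs them):

```java
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Equivalent to the hand-built sets above: rwxr-xr-x is 755, rw-r--r-- is 644.
class PermissionSketch {
    static final Set<PosixFilePermission> DIRS_AND_EXECUTABLES = PosixFilePermissions.fromString("rwxr-xr-x");
    static final Set<PosixFilePermission> PLAIN_FILES = PosixFilePermissions.fromString("rw-r--r--");
}
```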
*/ - private void installBin(PluginInfo info, Path tmpBinDir, Path destBinDir) throws Exception { - if (Files.isDirectory(tmpBinDir) == false) { - throw new UserException(ExitCodes.IO_ERROR, "bin in plugin " + info.getName() + " is not a directory"); - } - Files.createDirectory(destBinDir); - setFileAttributes(destBinDir, DIR_AND_EXECUTABLE_PERMS); - - try (DirectoryStream stream = Files.newDirectoryStream(tmpBinDir)) { - for (Path srcFile : stream) { - if (Files.isDirectory(srcFile)) { - throw new UserException( - ExitCodes.DATA_ERROR, - "Directories not allowed in bin dir for plugin " + info.getName() + ", found " + srcFile.getFileName()); - } - - Path destFile = destBinDir.resolve(tmpBinDir.relativize(srcFile)); - Files.copy(srcFile, destFile); - setFileAttributes(destFile, DIR_AND_EXECUTABLE_PERMS); - } - } - IOUtils.rm(tmpBinDir); // clean up what we just copied - } - - /** - * Copies the files from {@code tmpConfigDir} into {@code destConfigDir}. - * Any files existing in both the source and destination will be skipped. - */ - private void installConfig(PluginInfo info, Path tmpConfigDir, Path destConfigDir) throws Exception { - if (Files.isDirectory(tmpConfigDir) == false) { - throw new UserException(ExitCodes.IO_ERROR, "config in plugin " + info.getName() + " is not a directory"); - } - - Files.createDirectories(destConfigDir); - setFileAttributes(destConfigDir, DIR_AND_EXECUTABLE_PERMS); - final PosixFileAttributeView destConfigDirAttributesView = - Files.getFileAttributeView(destConfigDir.getParent(), PosixFileAttributeView.class); - final PosixFileAttributes destConfigDirAttributes = - destConfigDirAttributesView != null ? destConfigDirAttributesView.readAttributes() : null; - if (destConfigDirAttributes != null) { - setOwnerGroup(destConfigDir, destConfigDirAttributes); - } - - try (DirectoryStream stream = Files.newDirectoryStream(tmpConfigDir)) { - for (Path srcFile : stream) { - if (Files.isDirectory(srcFile)) { - throw new UserException(ExitCodes.DATA_ERROR, "Directories not allowed in config dir for plugin " + info.getName()); - } - - Path destFile = destConfigDir.resolve(tmpConfigDir.relativize(srcFile)); - if (Files.exists(destFile) == false) { - Files.copy(srcFile, destFile); - setFileAttributes(destFile, FILE_PERMS); - if (destConfigDirAttributes != null) { - setOwnerGroup(destFile, destConfigDirAttributes); - } - } - } - } - IOUtils.rm(tmpConfigDir); // clean up what we just copied - } - - private static void setOwnerGroup(final Path path, final PosixFileAttributes attributes) throws IOException { - Objects.requireNonNull(attributes); - PosixFileAttributeView fileAttributeView = Files.getFileAttributeView(path, PosixFileAttributeView.class); - assert fileAttributeView != null; - fileAttributeView.setOwner(attributes.owner()); - fileAttributeView.setGroup(attributes.group()); - } - - /** - * Sets the attributes for a path iff posix attributes are supported - */ - private static void setFileAttributes(final Path path, final Set permissions) throws IOException { - PosixFileAttributeView fileAttributeView = Files.getFileAttributeView(path, PosixFileAttributeView.class); - if (fileAttributeView != null) { - Files.setPosixFilePermissions(path, permissions); - } - } -} diff --git a/core/src/main/java/org/elasticsearch/plugins/ListPluginsCommand.java b/core/src/main/java/org/elasticsearch/plugins/ListPluginsCommand.java deleted file mode 100644 index bd2f853bac022..0000000000000 --- a/core/src/main/java/org/elasticsearch/plugins/ListPluginsCommand.java +++ /dev/null @@ -1,68 
+0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.plugins; - -import joptsimple.OptionSet; -import org.elasticsearch.cli.SettingCommand; -import org.elasticsearch.cli.Terminal; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.env.Environment; -import org.elasticsearch.node.internal.InternalSettingsPreparer; - -import java.io.IOException; -import java.nio.file.DirectoryStream; -import java.nio.file.Files; -import java.nio.file.Path; -import java.util.ArrayList; -import java.util.Collections; -import java.util.List; -import java.util.Map; - -/** - * A command for the plugin cli to list plugins installed in elasticsearch. - */ -class ListPluginsCommand extends SettingCommand { - - ListPluginsCommand() { - super("Lists installed elasticsearch plugins"); - } - - @Override - protected void execute(Terminal terminal, OptionSet options, Map settings) throws Exception { - final Environment env = InternalSettingsPreparer.prepareEnvironment(Settings.EMPTY, terminal, settings); - if (Files.exists(env.pluginsFile()) == false) { - throw new IOException("Plugins directory missing: " + env.pluginsFile()); - } - - terminal.println(Terminal.Verbosity.VERBOSE, "Plugins directory: " + env.pluginsFile()); - final List plugins = new ArrayList<>(); - try (DirectoryStream paths = Files.newDirectoryStream(env.pluginsFile())) { - for (Path plugin : paths) { - plugins.add(plugin); - } - } - Collections.sort(plugins); - for (final Path plugin : plugins) { - terminal.println(plugin.getFileName().toString()); - PluginInfo info = PluginInfo.readFromProperties(env.pluginsFile().resolve(plugin.toAbsolutePath())); - terminal.println(Terminal.Verbosity.VERBOSE, info.toString()); - } - } -} diff --git a/core/src/main/java/org/elasticsearch/plugins/MetaDataUpgrader.java b/core/src/main/java/org/elasticsearch/plugins/MetaDataUpgrader.java index aaeddbec11974..49dd91901d756 100644 --- a/core/src/main/java/org/elasticsearch/plugins/MetaDataUpgrader.java +++ b/core/src/main/java/org/elasticsearch/plugins/MetaDataUpgrader.java @@ -19,6 +19,7 @@ package org.elasticsearch.plugins; +import org.elasticsearch.cluster.metadata.IndexTemplateMetaData; import org.elasticsearch.cluster.metadata.MetaData; import java.util.Collection; @@ -32,7 +33,10 @@ public class MetaDataUpgrader { public final UnaryOperator> customMetaDataUpgraders; - public MetaDataUpgrader(Collection>> customMetaDataUpgraders) { + public final UnaryOperator> indexTemplateMetaDataUpgraders; + + public MetaDataUpgrader(Collection>> customMetaDataUpgraders, + Collection>> indexTemplateMetaDataUpgraders) { this.customMetaDataUpgraders = customs -> { Map upgradedCustoms = new HashMap<>(customs); for (UnaryOperator> customMetaDataUpgrader : customMetaDataUpgraders) { @@ 
-40,5 +44,13 @@ public MetaDataUpgrader(Collection>> } return upgradedCustoms; }; + + this.indexTemplateMetaDataUpgraders = templates -> { + Map upgradedTemplates = new HashMap<>(templates); + for (UnaryOperator> upgrader : indexTemplateMetaDataUpgraders) { + upgradedTemplates = upgrader.apply(upgradedTemplates); + } + return upgradedTemplates; + }; } } diff --git a/core/src/main/java/org/elasticsearch/plugins/NetworkPlugin.java b/core/src/main/java/org/elasticsearch/plugins/NetworkPlugin.java new file mode 100644 index 0000000000000..8f035e1621fd9 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/plugins/NetworkPlugin.java @@ -0,0 +1,77 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.plugins; + +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.function.Supplier; + +import org.elasticsearch.common.io.stream.NamedWriteableRegistry; +import org.elasticsearch.common.network.NetworkService; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.BigArrays; +import org.elasticsearch.common.util.concurrent.ThreadContext; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.http.HttpServerTransport; +import org.elasticsearch.indices.breaker.CircuitBreakerService; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.Transport; +import org.elasticsearch.transport.TransportInterceptor; + +/** + * Plugin for extending network and transport related classes + */ +public interface NetworkPlugin { + + /** + * Returns a list of {@link TransportInterceptor} instances that are used to intercept incoming and outgoing + * transport (inter-node) requests. This must not return null + * + * @param threadContext a {@link ThreadContext} of the current nodes or clients {@link ThreadPool} that can be used to set additional + * headers in the interceptors + */ + default List getTransportInterceptors(ThreadContext threadContext) { + return Collections.emptyList(); + } + + /** + * Returns a map of {@link Transport} suppliers. + * See {@link org.elasticsearch.common.network.NetworkModule#TRANSPORT_TYPE_KEY} to configure a specific implementation. + */ + default Map> getTransports(Settings settings, ThreadPool threadPool, BigArrays bigArrays, + CircuitBreakerService circuitBreakerService, + NamedWriteableRegistry namedWriteableRegistry, + NetworkService networkService) { + return Collections.emptyMap(); + } + + /** + * Returns a map of {@link HttpServerTransport} suppliers. + * See {@link org.elasticsearch.common.network.NetworkModule#HTTP_TYPE_SETTING} to configure a specific implementation. 
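Since every method on the new `NetworkPlugin` interface has an empty default, a plugin only overrides the hooks it actually needs. A minimal hedged sketch of a plugin opting into this extension point; the class name is made up and a real implementation would return its own interceptor instances rather than the empty list:

```java
import java.util.Collections;
import java.util.List;

import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.plugins.NetworkPlugin;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.transport.TransportInterceptor;

// Hypothetical plugin implementing NetworkPlugin alongside Plugin.
public class ExampleNetworkPlugin extends Plugin implements NetworkPlugin {

    @Override
    public List<TransportInterceptor> getTransportInterceptors(ThreadContext threadContext) {
        // The default is already an empty list; shown here only to illustrate the hook.
        // The javadoc above requires that this never returns null.
        return Collections.emptyList();
    }
}
```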
+ */ + default Map> getHttpTransports(Settings settings, ThreadPool threadPool, BigArrays bigArrays, + CircuitBreakerService circuitBreakerService, + NamedWriteableRegistry namedWriteableRegistry, + NamedXContentRegistry xContentRegistry, + NetworkService networkService, + HttpServerTransport.Dispatcher dispatcher) { + return Collections.emptyMap(); + } +} diff --git a/core/src/main/java/org/elasticsearch/plugins/Platforms.java b/core/src/main/java/org/elasticsearch/plugins/Platforms.java new file mode 100644 index 0000000000000..91af58ebec46e --- /dev/null +++ b/core/src/main/java/org/elasticsearch/plugins/Platforms.java @@ -0,0 +1,82 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.plugins; + +import org.apache.lucene.util.Constants; + +import java.nio.file.Path; +import java.util.Locale; + +/** + * Encapsulates platform-dependent methods for handling native components of plugins. + */ +public class Platforms { + + private static final String PROGRAM_NAME = Constants.WINDOWS ? "controller.exe" : "controller"; + public static final String PLATFORM_NAME = Platforms.platformName(Constants.OS_NAME, Constants.OS_ARCH); + + private Platforms() {} + + /** + * The path to the native controller for a plugin with native components. + */ + public static Path nativeControllerPath(Path plugin) { + return plugin + .resolve("platform") + .resolve(PLATFORM_NAME) + .resolve("bin") + .resolve(PROGRAM_NAME); + } + + /** + * Return the platform name based on the OS name and + * - darwin-x86_64 + * - linux-x86-64 + * - windows-x86_64 + * For *nix platforms this is more-or-less `uname -s`-`uname -m` converted to lower case. + * However, for consistency between different operating systems on the same architecture + * "amd64" is replaced with "x86_64" and "i386" with "x86". + * For Windows it's "windows-" followed by either "x86" or "x86_64". 
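Concretely, the normalization described here (and implemented immediately below) yields values like the following; a small illustrative snippet, not part of the change itself:

```java
import org.elasticsearch.plugins.Platforms;

// Expected results for a few common os.name / os.arch combinations, per the rules above.
public class PlatformNameExamples {
    public static void main(String[] args) {
        System.out.println(Platforms.platformName("Linux", "amd64"));               // linux-x86_64
        System.out.println(Platforms.platformName("Mac OS X", "x86_64"));           // darwin-x86_64
        System.out.println(Platforms.platformName("Windows Server 2016", "amd64")); // windows-x86_64
    }
}
```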
+ */ + public static String platformName(final String osName, final String osArch) { + final String lowerCaseOs = osName.toLowerCase(Locale.ROOT); + final String normalizedOs; + if (lowerCaseOs.startsWith("windows")) { + normalizedOs = "windows"; + } else if (lowerCaseOs.equals("mac os x")) { + normalizedOs = "darwin"; + } else { + normalizedOs = lowerCaseOs; + } + + final String lowerCaseArch = osArch.toLowerCase(Locale.ROOT); + final String normalizedArch; + if (lowerCaseArch.equals("amd64")) { + normalizedArch = "x86_64"; + } else if (lowerCaseArch.equals("i386")) { + normalizedArch = "x86"; + } else { + normalizedArch = lowerCaseArch; + } + + return normalizedOs + "-" + normalizedArch; + } + +} diff --git a/core/src/main/java/org/elasticsearch/plugins/Plugin.java b/core/src/main/java/org/elasticsearch/plugins/Plugin.java index 1c79986e18f27..bf5bc49d279c0 100644 --- a/core/src/main/java/org/elasticsearch/plugins/Plugin.java +++ b/core/src/main/java/org/elasticsearch/plugins/Plugin.java @@ -19,47 +19,63 @@ package org.elasticsearch.plugins; -import java.util.Collection; -import java.util.Collections; -import java.util.List; - import org.elasticsearch.action.ActionModule; +import org.elasticsearch.bootstrap.BootstrapCheck; import org.elasticsearch.client.Client; +import org.elasticsearch.cluster.ClusterModule; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.cluster.metadata.IndexTemplateMetaData; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.component.LifecycleComponent; import org.elasticsearch.common.inject.Module; import org.elasticsearch.common.io.stream.NamedWriteable; import org.elasticsearch.common.io.stream.NamedWriteableRegistry; +import org.elasticsearch.common.network.NetworkModule; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsModule; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.discovery.DiscoveryModule; import org.elasticsearch.index.IndexModule; import org.elasticsearch.indices.analysis.AnalysisModule; +import org.elasticsearch.repositories.RepositoriesModule; import org.elasticsearch.script.ScriptModule; import org.elasticsearch.script.ScriptService; import org.elasticsearch.search.SearchModule; -import org.elasticsearch.search.SearchRequestParsers; import org.elasticsearch.threadpool.ExecutorBuilder; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.watcher.ResourceWatcherService; +import java.io.Closeable; +import java.io.IOException; +import java.util.Collection; +import java.util.Collections; +import java.util.List; import java.util.Map; import java.util.function.UnaryOperator; /** - * An extension point allowing to plug in custom functionality. - *
<p>
 - * Implement any of these interfaces to extend Elasticsearch: + * An extension point allowing to plug in custom functionality. This class has a number of extension points that are available to all + * plugins, in addition you can implement any of the following interfaces to further customize Elasticsearch: * <ul>
 * <li>{@link ActionPlugin}
 * <li>{@link AnalysisPlugin}
 + * <li>{@link ClusterPlugin}
 + * <li>{@link DiscoveryPlugin}
 + * <li>{@link IngestPlugin}
 * <li>{@link MapperPlugin}
 + * <li>{@link NetworkPlugin}
 + * <li>{@link RepositoryPlugin}
 * <li>{@link ScriptPlugin}
 * <li>{@link SearchPlugin}
 * </ul>
 + * <p>
    In addition to extension points this class also declares some {@code @Deprecated} {@code public final void onModule} methods. These + * methods should cause any extensions of {@linkplain Plugin} that used the pre-5.x style extension syntax to fail to build and point the + * plugin author at the new extension syntax. We hope that these make the process of upgrading a plugin from 2.x to 5.x only mildly painful. */ -public abstract class Plugin { +public abstract class Plugin implements Closeable { /** * Node level guice modules. @@ -88,11 +104,10 @@ public Collection> getGuiceServiceClasses() * @param threadPool A service to allow retrieving an executor to run an async action * @param resourceWatcherService A service to watch for changes to node local files * @param scriptService A service to allow running scripts on the local node - * @param searchRequestParsers Parsers for search requests which may be used to templatize search requests */ public Collection createComponents(Client client, ClusterService clusterService, ThreadPool threadPool, ResourceWatcherService resourceWatcherService, ScriptService scriptService, - SearchRequestParsers searchRequestParsers) { + NamedXContentRegistry xContentRegistry) { return Collections.emptyList(); } @@ -112,6 +127,14 @@ public List getNamedWriteables() { return Collections.emptyList(); } + /** + * Returns parsers for named objects this plugin will parse from {@link XContentParser#namedObject(Class, String, Object)}. + * @see NamedWriteableRegistry + */ + public List getNamedXContent() { + return Collections.emptyList(); + } + /** * Called before a new index is created on a node. The given module can be used to register index-level * extensions. @@ -132,16 +155,81 @@ public void onIndexModule(IndexModule indexModule) {} * Provides a function to modify global custom meta data on startup. *
<p>
    * Plugins should return the input custom map via {@link UnaryOperator#identity()} if no upgrade is required. + *
<p>
    + * The order of custom meta data upgraders calls is undefined and can change between runs so, it is expected that + * plugins will modify only data owned by them to avoid conflicts. + *
<p>
    * @return Never {@code null}. The same or upgraded {@code MetaData.Custom} map. * @throws IllegalStateException if the node should not start because at least one {@code MetaData.Custom} - * is unsupported + * is unsupported */ public UnaryOperator> getCustomMetaDataUpgrader() { return UnaryOperator.identity(); } /** - * Old-style guice index level extension point. + * Provides a function to modify index template meta data on startup. + *
<p>
    + * Plugins should return the input template map via {@link UnaryOperator#identity()} if no upgrade is required. + *
<p>
    + * The order of the template upgrader calls is undefined and can change between runs so, it is expected that + * plugins will modify only templates owned by them to avoid conflicts. + *
<p>
    + * @return Never {@code null}. The same or upgraded {@code IndexTemplateMetaData} map. + * @throws IllegalStateException if the node should not start because at least one {@code IndexTemplateMetaData} + * cannot be upgraded + */ + public UnaryOperator> getIndexTemplateMetaDataUpgrader() { + return UnaryOperator.identity(); + } + + /** + * Provides a function to modify index meta data when an index is introduced into the cluster state for the first time. + *
<p>
    + * Plugins should return the input index metadata via {@link UnaryOperator#identity()} if no upgrade is required. + *
<p>
    + * The order of the index upgrader calls for the same index is undefined and can change between runs so, it is expected that + * plugins will modify only indices owned by them to avoid conflicts. + *
<p>
    + * @return Never {@code null}. The same or upgraded {@code IndexMetaData}. + * @throws IllegalStateException if the node should not start because the index is unsupported + */ + public UnaryOperator getIndexMetaDataUpgrader() { + return UnaryOperator.identity(); + } + + /** + * Provides the list of this plugin's custom thread pools, empty if + * none. + * + * @param settings the current settings + * @return executors builders for this plugin's custom thread pools + */ + public List> getExecutorBuilders(Settings settings) { + return Collections.emptyList(); + } + + /** + * Returns a list of checks that are enforced when a node starts up once a node has the transport protocol bound to a non-loopback + * interface. In this case we assume the node is running in production and all bootstrap checks must pass. This allows plugins + * to provide a better out of the box experience by pre-configuring otherwise (in production) mandatory settings or to enforce certain + * configurations like OS settings or 3rd party resources. + */ + public List getBootstrapChecks() { return Collections.emptyList(); } + + /** + * Close the resources opened by this plugin. + * + * @throws IOException if the plugin failed to close its resources + */ + @Override + public void close() throws IOException { + + } + + /** + * Old-style guice index level extension point. {@code @Deprecated} and {@code final} to act as a signpost for plugin authors upgrading + * from 2.x. * * @deprecated use #onIndexModule instead */ @@ -150,7 +238,8 @@ public final void onModule(IndexModule indexModule) {} /** - * Old-style guice settings extension point. + * Old-style guice settings extension point. {@code @Deprecated} and {@code final} to act as a signpost for plugin authors upgrading + * from 2.x. * * @deprecated use #getSettings and #getSettingsFilter instead */ @@ -158,7 +247,8 @@ public final void onModule(IndexModule indexModule) {} public final void onModule(SettingsModule settingsModule) {} /** - * Old-style guice scripting extension point. + * Old-style guice scripting extension point. {@code @Deprecated} and {@code final} to act as a signpost for plugin authors upgrading + * from 2.x. * * @deprecated implement {@link ScriptPlugin} instead */ @@ -166,7 +256,8 @@ public final void onModule(SettingsModule settingsModule) {} public final void onModule(ScriptModule module) {} /** - * Old-style analysis extension point. + * Old-style analysis extension point. {@code @Deprecated} and {@code final} to act as a signpost for plugin authors upgrading + * from 2.x. * * @deprecated implement {@link AnalysisPlugin} instead */ @@ -174,7 +265,8 @@ public final void onModule(ScriptModule module) {} public final void onModule(AnalysisModule module) {} /** - * Old-style action extension point. + * Old-style action extension point. {@code @Deprecated} and {@code final} to act as a signpost for plugin authors upgrading + * from 2.x. * * @deprecated implement {@link ActionPlugin} instead */ @@ -182,7 +274,8 @@ public final void onModule(AnalysisModule module) {} public final void onModule(ActionModule module) {} /** - * Old-style action extension point. + * Old-style search extension point. {@code @Deprecated} and {@code final} to act as a signpost for plugin authors upgrading + * from 2.x. 
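To make the intent of these signposts concrete: instead of the removed `onModule` hooks, a 5.x plugin overrides the corresponding getter on `Plugin` (or implements one of the dedicated plugin interfaces listed earlier in this javadoc). A minimal hedged sketch with a made-up setting name; any real plugin would register its own:

```java
import java.util.Collections;
import java.util.List;

import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;
import org.elasticsearch.plugins.Plugin;

// Hypothetical plugin using the new-style extension points rather than onModule.
public class ExamplePlugin extends Plugin {

    static final Setting<Boolean> EXAMPLE_ENABLED =
            Setting.boolSetting("example.enabled", true, Property.NodeScope);

    @Override
    public List<Setting<?>> getSettings() {
        return Collections.singletonList(EXAMPLE_ENABLED);
    }
}
```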
* * @deprecated implement {@link SearchPlugin} instead */ @@ -190,13 +283,38 @@ public final void onModule(ActionModule module) {} public final void onModule(SearchModule module) {} /** - * Provides the list of this plugin's custom thread pools, empty if - * none. + * Old-style network extension point. {@code @Deprecated} and {@code final} to act as a signpost for plugin authors upgrading + * from 2.x. * - * @param settings the current settings - * @return executors builders for this plugin's custom thread pools + * @deprecated implement {@link NetworkPlugin} instead */ - public List> getExecutorBuilders(Settings settings) { - return Collections.emptyList(); - } + @Deprecated + public final void onModule(NetworkModule module) {} + + /** + * Old-style snapshot/restore extension point. {@code @Deprecated} and {@code final} to act as a signpost for plugin authors upgrading + * from 2.x. + * + * @deprecated implement {@link RepositoryPlugin} instead + */ + @Deprecated + public final void onModule(RepositoriesModule module) {} + + /** + * Old-style cluster extension point. {@code @Deprecated} and {@code final} to act as a signpost for plugin authors upgrading + * from 2.x. + * + * @deprecated implement {@link ClusterPlugin} instead + */ + @Deprecated + public final void onModule(ClusterModule module) {} + + /** + * Old-style discovery extension point. {@code @Deprecated} and {@code final} to act as a signpost for plugin authors upgrading + * from 2.x. + * + * @deprecated implement {@link DiscoveryPlugin} instead + */ + @Deprecated + public final void onModule(DiscoveryModule module) {} } diff --git a/core/src/main/java/org/elasticsearch/plugins/PluginCli.java b/core/src/main/java/org/elasticsearch/plugins/PluginCli.java deleted file mode 100644 index ba401404f2e54..0000000000000 --- a/core/src/main/java/org/elasticsearch/plugins/PluginCli.java +++ /dev/null @@ -1,60 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.plugins; - -import org.elasticsearch.cli.MultiCommand; -import org.elasticsearch.cli.Terminal; -import org.elasticsearch.common.logging.LogConfigurator; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.env.Environment; -import org.elasticsearch.node.internal.InternalSettingsPreparer; - -/** - * A cli tool for adding, removing and listing plugins for elasticsearch. 
- */ -public class PluginCli extends MultiCommand { - - private PluginCli() { - super("A tool for managing installed elasticsearch plugins"); - subcommands.put("list", new ListPluginsCommand()); - subcommands.put("install", new InstallPluginCommand()); - subcommands.put("remove", new RemovePluginCommand()); - } - - public static void main(String[] args) throws Exception { - // initialize default for es.logger.level because we will not read the log4j2.properties - String loggerLevel = System.getProperty("es.logger.level", "INFO"); - String pathHome = System.getProperty("es.path.home"); - // Set the appender for all potential log files to terminal so that other components that use the logger print out the - // same terminal. - // The reason for this is that the plugin cli cannot be configured with a file appender because when the plugin command is - // executed there is no way of knowing where the logfiles should be placed. For example, if elasticsearch - // is run as service then the logs should be at /var/log/elasticsearch but when started from the tar they should be at es.home/logs. - // Therefore we print to Terminal. - Environment loggingEnvironment = InternalSettingsPreparer.prepareEnvironment(Settings.builder() - .put("path.home", pathHome) - .put("logger.level", loggerLevel) - .build(), Terminal.DEFAULT); - LogConfigurator.configure(loggingEnvironment, false); - - exit(new PluginCli().main(args, Terminal.DEFAULT)); - } - -} diff --git a/core/src/main/java/org/elasticsearch/plugins/PluginInfo.java b/core/src/main/java/org/elasticsearch/plugins/PluginInfo.java index 3e241eadd37ba..666cc22b92655 100644 --- a/core/src/main/java/org/elasticsearch/plugins/PluginInfo.java +++ b/core/src/main/java/org/elasticsearch/plugins/PluginInfo.java @@ -16,6 +16,7 @@ * specific language governing permissions and limitations * under the License. */ + package org.elasticsearch.plugins; import org.elasticsearch.Version; @@ -30,133 +31,215 @@ import java.io.InputStream; import java.nio.file.Files; import java.nio.file.Path; +import java.util.Locale; import java.util.Properties; +/** + * An in-memory representation of the plugin descriptor. + */ public class PluginInfo implements Writeable, ToXContent { public static final String ES_PLUGIN_PROPERTIES = "plugin-descriptor.properties"; public static final String ES_PLUGIN_POLICY = "plugin-security.policy"; - static final class Fields { - static final String NAME = "name"; - static final String DESCRIPTION = "description"; - static final String URL = "url"; - static final String VERSION = "version"; - static final String CLASSNAME = "classname"; - } - private final String name; private final String description; private final String version; private final String classname; + private final boolean hasNativeController; /** - * Information about plugins + * Construct plugin info. 
* - * @param name Its name - * @param description Its description - * @param version Version number + * @param name the name of the plugin + * @param description a description of the plugin + * @param version the version of Elasticsearch the plugin is built for + * @param classname the entry point to the plugin + * @param hasNativeController whether or not the plugin has a native controller */ - public PluginInfo(String name, String description, String version, String classname) { + public PluginInfo( + final String name, + final String description, + final String version, + final String classname, + final boolean hasNativeController) { this.name = name; this.description = description; this.version = version; this.classname = classname; + this.hasNativeController = hasNativeController; } - public PluginInfo(StreamInput in) throws IOException { + /** + * Construct plugin info from a stream. + * + * @param in the stream + * @throws IOException if an I/O exception occurred reading the plugin info from the stream + */ + public PluginInfo(final StreamInput in) throws IOException { this.name = in.readString(); this.description = in.readString(); this.version = in.readString(); this.classname = in.readString(); + if (in.getVersion().onOrAfter(Version.V_5_4_0)) { + hasNativeController = in.readBoolean(); + } else { + hasNativeController = false; + } } @Override - public void writeTo(StreamOutput out) throws IOException { + public void writeTo(final StreamOutput out) throws IOException { out.writeString(name); out.writeString(description); out.writeString(version); out.writeString(classname); + if (out.getVersion().onOrAfter(Version.V_5_4_0)) { + out.writeBoolean(hasNativeController); + } } /** reads (and validates) plugin metadata descriptor file */ - public static PluginInfo readFromProperties(Path dir) throws IOException { - Path descriptor = dir.resolve(ES_PLUGIN_PROPERTIES); - Properties props = new Properties(); + + /** + * Reads and validates the plugin descriptor file. 
+ * + * @param path the path to the root directory for the plugin + * @return the plugin info + * @throws IOException if an I/O exception occurred reading the plugin descriptor + */ + public static PluginInfo readFromProperties(final Path path) throws IOException { + final Path descriptor = path.resolve(ES_PLUGIN_PROPERTIES); + final Properties props = new Properties(); try (InputStream stream = Files.newInputStream(descriptor)) { props.load(stream); } - String name = props.getProperty("name"); + final String name = props.getProperty("name"); if (name == null || name.isEmpty()) { - throw new IllegalArgumentException("Property [name] is missing in [" + descriptor + "]"); + throw new IllegalArgumentException( + "property [name] is missing in [" + descriptor + "]"); } - String description = props.getProperty("description"); + final String description = props.getProperty("description"); if (description == null) { - throw new IllegalArgumentException("Property [description] is missing for plugin [" + name + "]"); + throw new IllegalArgumentException( + "property [description] is missing for plugin [" + name + "]"); } - String version = props.getProperty("version"); + final String version = props.getProperty("version"); if (version == null) { - throw new IllegalArgumentException("Property [version] is missing for plugin [" + name + "]"); + throw new IllegalArgumentException( + "property [version] is missing for plugin [" + name + "]"); } - String esVersionString = props.getProperty("elasticsearch.version"); + final String esVersionString = props.getProperty("elasticsearch.version"); if (esVersionString == null) { - throw new IllegalArgumentException("Property [elasticsearch.version] is missing for plugin [" + name + "]"); + throw new IllegalArgumentException( + "property [elasticsearch.version] is missing for plugin [" + name + "]"); } - Version esVersion = Version.fromString(esVersionString); + final Version esVersion = Version.fromString(esVersionString); if (esVersion.equals(Version.CURRENT) == false) { - throw new IllegalArgumentException("Plugin [" + name + "] is incompatible with Elasticsearch [" + Version.CURRENT.toString() + - "]. 
Was designed for version [" + esVersionString + "]"); + final String message = String.format( + Locale.ROOT, + "plugin [%s] is incompatible with version [%s]; was designed for version [%s]", + name, + Version.CURRENT.toString(), + esVersionString); + throw new IllegalArgumentException(message); } - String javaVersionString = props.getProperty("java.version"); + final String javaVersionString = props.getProperty("java.version"); if (javaVersionString == null) { - throw new IllegalArgumentException("Property [java.version] is missing for plugin [" + name + "]"); + throw new IllegalArgumentException( + "property [java.version] is missing for plugin [" + name + "]"); } JarHell.checkVersionFormat(javaVersionString); JarHell.checkJavaVersion(name, javaVersionString); - String classname = props.getProperty("classname"); + final String classname = props.getProperty("classname"); if (classname == null) { - throw new IllegalArgumentException("Property [classname] is missing for plugin [" + name + "]"); + throw new IllegalArgumentException( + "property [classname] is missing for plugin [" + name + "]"); } - return new PluginInfo(name, description, version, classname); + final String hasNativeControllerValue = props.getProperty("has.native.controller"); + final boolean hasNativeController; + if (hasNativeControllerValue == null) { + hasNativeController = false; + } else { + switch (hasNativeControllerValue) { + case "true": + hasNativeController = true; + break; + case "false": + hasNativeController = false; + break; + default: + final String message = String.format( + Locale.ROOT, + "property [%s] must be [%s], [%s], or unspecified but was [%s]", + "has_native_controller", + "true", + "false", + hasNativeControllerValue); + throw new IllegalArgumentException(message); + } + } + + return new PluginInfo(name, description, version, classname, hasNativeController); } /** - * @return Plugin's name + * The name of the plugin. + * + * @return the plugin name */ public String getName() { return name; } /** - * @return Plugin's description if any + * The description of the plugin. + * + * @return the plugin description */ public String getDescription() { return description; } /** - * @return plugin's classname + * The entry point to the plugin. + * + * @return the entry point to the plugin */ public String getClassname() { return classname; } /** - * @return Version number for the plugin + * The version of Elasticsearch the plugin was built for. + * + * @return the version */ public String getVersion() { return version; } + /** + * Whether or not the plugin has a native controller. 
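Pulling the validation in `readFromProperties` above together, a descriptor needs at least the keys shown below. This is a hypothetical example descriptor with illustrative values only; it is loaded here with plain `java.util.Properties` just to keep the sketch self-contained:

```java
import java.io.StringReader;
import java.util.Properties;

// Hypothetical plugin-descriptor.properties content; values are illustrative only.
// "has.native.controller" may be omitted, in which case it defaults to false.
public class DescriptorExample {
    public static void main(String[] args) throws Exception {
        String descriptor =
                "name=example-plugin\n" +
                "description=An example plugin\n" +
                "version=1.0\n" +
                "elasticsearch.version=5.4.0\n" +
                "java.version=1.8\n" +
                "classname=org.example.ExamplePlugin\n" +
                "has.native.controller=false\n";
        Properties props = new Properties();
        props.load(new StringReader(descriptor));
        System.out.println(props.getProperty("classname")); // org.example.ExamplePlugin
    }
}
```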
+ * + * @return {@code true} if the plugin has a native controller + */ + public boolean hasNativeController() { + return hasNativeController; + } + @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(); - builder.field(Fields.NAME, name); - builder.field(Fields.VERSION, version); - builder.field(Fields.DESCRIPTION, description); - builder.field(Fields.CLASSNAME, classname); + { + builder.field("name", name); + builder.field("version", version); + builder.field("description", description); + builder.field("classname", classname); + builder.field("has_native_controller", hasNativeController); + } builder.endObject(); return builder; @@ -187,8 +270,9 @@ public String toString() { .append("Name: ").append(name).append("\n") .append("Description: ").append(description).append("\n") .append("Version: ").append(version).append("\n") + .append("Native Controller: ").append(hasNativeController).append("\n") .append(" * Classname: ").append(classname); - return information.toString(); } + } diff --git a/core/src/main/java/org/elasticsearch/plugins/PluginSecurity.java b/core/src/main/java/org/elasticsearch/plugins/PluginSecurity.java index f9c3d1826c992..d09acbe1f3c23 100644 --- a/core/src/main/java/org/elasticsearch/plugins/PluginSecurity.java +++ b/core/src/main/java/org/elasticsearch/plugins/PluginSecurity.java @@ -22,7 +22,6 @@ import org.apache.lucene.util.IOUtils; import org.elasticsearch.cli.Terminal; import org.elasticsearch.cli.Terminal.Verbosity; -import org.elasticsearch.env.Environment; import java.io.IOException; import java.nio.file.Files; @@ -37,60 +36,74 @@ import java.util.Collections; import java.util.Comparator; import java.util.List; +import java.util.function.Supplier; class PluginSecurity { /** * Reads plugin policy, prints/confirms exceptions */ - static void readPolicy(Path file, Terminal terminal, Environment environment, boolean batch) throws IOException { - PermissionCollection permissions = parsePermissions(terminal, file, environment.tmpFile()); + static void readPolicy(PluginInfo info, Path file, Terminal terminal, Supplier tmpFile, boolean batch) throws IOException { + PermissionCollection permissions = parsePermissions(terminal, file, tmpFile.get()); List requested = Collections.list(permissions.elements()); if (requested.isEmpty()) { terminal.println(Verbosity.VERBOSE, "plugin has a policy file with no additional permissions"); - return; - } + } else { - // sort permissions in a reasonable order - Collections.sort(requested, new Comparator() { - @Override - public int compare(Permission o1, Permission o2) { - int cmp = o1.getClass().getName().compareTo(o2.getClass().getName()); - if (cmp == 0) { - String name1 = o1.getName(); - String name2 = o2.getName(); - if (name1 == null) { - name1 = ""; - } - if (name2 == null) { - name2 = ""; - } - cmp = name1.compareTo(name2); + // sort permissions in a reasonable order + Collections.sort(requested, new Comparator() { + @Override + public int compare(Permission o1, Permission o2) { + int cmp = o1.getClass().getName().compareTo(o2.getClass().getName()); if (cmp == 0) { - String actions1 = o1.getActions(); - String actions2 = o2.getActions(); - if (actions1 == null) { - actions1 = ""; + String name1 = o1.getName(); + String name2 = o2.getName(); + if (name1 == null) { + name1 = ""; + } + if (name2 == null) { + name2 = ""; } - if (actions2 == null) { - actions2 = ""; + cmp = name1.compareTo(name2); + if (cmp == 0) { + String actions1 = o1.getActions(); + 
String actions2 = o2.getActions(); + if (actions1 == null) { + actions1 = ""; + } + if (actions2 == null) { + actions2 = ""; + } + cmp = actions1.compareTo(actions2); } - cmp = actions1.compareTo(actions2); } + return cmp; } - return cmp; + }); + + terminal.println(Verbosity.NORMAL, "@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@"); + terminal.println(Verbosity.NORMAL, "@ WARNING: plugin requires additional permissions @"); + terminal.println(Verbosity.NORMAL, "@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@"); + // print all permissions: + for (Permission permission : requested) { + terminal.println(Verbosity.NORMAL, "* " + formatPermission(permission)); } - }); - - terminal.println(Verbosity.NORMAL, "@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@"); - terminal.println(Verbosity.NORMAL, "@ WARNING: plugin requires additional permissions @"); - terminal.println(Verbosity.NORMAL, "@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@"); - // print all permissions: - for (Permission permission : requested) { - terminal.println(Verbosity.NORMAL, "* " + formatPermission(permission)); + terminal.println(Verbosity.NORMAL, "See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html"); + terminal.println(Verbosity.NORMAL, "for descriptions of what these permissions allow and the associated risks."); + prompt(terminal, batch); + } + + if (info.hasNativeController()) { + terminal.println(Verbosity.NORMAL, "@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@"); + terminal.println(Verbosity.NORMAL, "@ WARNING: plugin forks a native controller @"); + terminal.println(Verbosity.NORMAL, "@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@"); + terminal.println(Verbosity.NORMAL, "This plugin launches a native controller that is not subject to the Java"); + terminal.println(Verbosity.NORMAL, "security manager nor to system call filters."); + prompt(terminal, batch); } - terminal.println(Verbosity.NORMAL, "See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html"); - terminal.println(Verbosity.NORMAL, "for descriptions of what these permissions allow and the associated risks."); + } + + private static void prompt(final Terminal terminal, final boolean batch) { if (!batch) { terminal.println(Verbosity.NORMAL, ""); String text = terminal.readText("Continue with installation? 
[y/N]"); diff --git a/core/src/main/java/org/elasticsearch/plugins/PluginsService.java b/core/src/main/java/org/elasticsearch/plugins/PluginsService.java index 01704e9ed86e5..27aa9893fb5ad 100644 --- a/core/src/main/java/org/elasticsearch/plugins/PluginsService.java +++ b/core/src/main/java/org/elasticsearch/plugins/PluginsService.java @@ -20,8 +20,6 @@ package org.elasticsearch.plugins; import org.apache.logging.log4j.Logger; -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; import org.apache.lucene.analysis.util.CharFilterFactory; import org.apache.lucene.analysis.util.TokenFilterFactory; import org.apache.lucene.analysis.util.TokenizerFactory; @@ -36,7 +34,6 @@ import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.component.LifecycleComponent; import org.elasticsearch.common.inject.Module; -import org.elasticsearch.common.io.FileSystemUtils; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; @@ -45,24 +42,26 @@ import org.elasticsearch.threadpool.ExecutorBuilder; import java.io.IOException; -import java.lang.reflect.InvocationTargetException; -import java.lang.reflect.Method; import java.net.URL; import java.net.URLClassLoader; import java.nio.file.DirectoryStream; import java.nio.file.Files; import java.nio.file.Path; import java.util.ArrayList; -import java.util.Arrays; import java.util.Collection; import java.util.Collections; import java.util.HashMap; import java.util.HashSet; +import java.util.Iterator; +import java.util.LinkedHashSet; import java.util.List; +import java.util.Locale; import java.util.Map; +import java.util.Objects; import java.util.Set; import java.util.function.Function; import java.util.stream.Collectors; +import java.util.stream.Stream; import static org.elasticsearch.common.io.FileSystemUtils.isAccessibleDirectory; @@ -79,8 +78,6 @@ public class PluginsService extends AbstractComponent { public static final Setting> MANDATORY_SETTING = Setting.listSetting("plugin.mandatory", Collections.emptyList(), Function.identity(), Property.NodeScope); - private final Map> onModuleReferences; - public List> getPluginSettings() { return plugins.stream().flatMap(p -> p.v2().getSettings().stream()).collect(Collectors.toList()); } @@ -89,16 +86,6 @@ public List getPluginSettingsFilter() { return plugins.stream().flatMap(p -> p.v2().getSettingsFilter().stream()).collect(Collectors.toList()); } - static class OnModuleReference { - public final Class moduleClass; - public final Method onModuleMethod; - - OnModuleReference(Class moduleClass, Method onModuleMethod) { - this.moduleClass = moduleClass; - this.onModuleMethod = onModuleMethod; - } - } - /** * Constructs a new PluginService * @param settings The settings of the system @@ -114,7 +101,7 @@ public PluginsService(Settings settings, Path modulesDirectory, Path pluginsDire // first we load plugins that are on the classpath. 
this is for tests and transport clients for (Class pluginClass : classpathPlugins) { Plugin plugin = loadPlugin(pluginClass, settings); - PluginInfo pluginInfo = new PluginInfo(pluginClass.getName(), "classpath plugin", "NA", pluginClass.getName()); + PluginInfo pluginInfo = new PluginInfo(pluginClass.getName(), "classpath plugin", "NA", pluginClass.getName(), false); if (logger.isTraceEnabled()) { logger.trace("plugin loaded from classpath [{}]", pluginInfo); } @@ -122,16 +109,16 @@ public PluginsService(Settings settings, Path modulesDirectory, Path pluginsDire pluginsList.add(pluginInfo); } + Set seenBundles = new LinkedHashSet<>(); List modulesList = new ArrayList<>(); // load modules if (modulesDirectory != null) { try { - List bundles = getModuleBundles(modulesDirectory); - List> loaded = loadBundles(bundles); - pluginsLoaded.addAll(loaded); - for (Tuple module : loaded) { - modulesList.add(module.v1()); + Set modules = getModuleBundles(modulesDirectory); + for (Bundle bundle : modules) { + modulesList.add(bundle.plugin); } + seenBundles.addAll(modules); } catch (IOException ex) { throw new IllegalStateException("Unable to initialize modules", ex); } @@ -140,17 +127,19 @@ public PluginsService(Settings settings, Path modulesDirectory, Path pluginsDire // now, find all the ones that are in plugins/ if (pluginsDirectory != null) { try { - List bundles = getPluginBundles(pluginsDirectory); - List> loaded = loadBundles(bundles); - pluginsLoaded.addAll(loaded); - for (Tuple plugin : loaded) { - pluginsList.add(plugin.v1()); + Set plugins = getPluginBundles(pluginsDirectory); + for (Bundle bundle : plugins) { + pluginsList.add(bundle.plugin); } + seenBundles.addAll(plugins); } catch (IOException ex) { throw new IllegalStateException("Unable to initialize plugins", ex); } } + List> loaded = loadBundles(seenBundles); + pluginsLoaded.addAll(loaded); + this.info = new PluginsAndModules(pluginsList, modulesList); this.plugins = Collections.unmodifiableList(pluginsLoaded); @@ -178,40 +167,6 @@ public PluginsService(Settings settings, Path modulesDirectory, Path pluginsDire // but for now: just be transparent so we can debug any potential issues logPluginInfo(info.getModuleInfos(), "module", logger); logPluginInfo(info.getPluginInfos(), "plugin", logger); - - Map> onModuleReferences = new HashMap<>(); - for (Tuple pluginEntry : this.plugins) { - Plugin plugin = pluginEntry.v2(); - List list = new ArrayList<>(); - for (Method method : plugin.getClass().getMethods()) { - if (!method.getName().equals("onModule")) { - continue; - } - // this is a deprecated final method, so all Plugin subclasses have it - if (method.getParameterTypes().length == 1 && method.getParameterTypes()[0].equals(IndexModule.class)) { - continue; - } - if (method.getParameterTypes().length == 0 || method.getParameterTypes().length > 1) { - logger.warn("Plugin: {} implementing onModule with no parameters or more than one parameter", pluginEntry.v1().getName()); - continue; - } - Class moduleClass = method.getParameterTypes()[0]; - if (!Module.class.isAssignableFrom(moduleClass)) { - if (method.getDeclaringClass() == Plugin.class) { - // These are still part of the Plugin class to point the user to the new implementations - continue; - } - throw new RuntimeException( - "Plugin: [" + pluginEntry.v1().getName() + "] implements onModule taking a parameter that isn't a Module [" - + moduleClass.getSimpleName() + "]"); - } - list.add(new OnModuleReference(moduleClass, method)); - } - if (!list.isEmpty()) { - 
onModuleReferences.put(plugin, list); - } - } - this.onModuleReferences = Collections.unmodifiableMap(onModuleReferences); } private static void logPluginInfo(final List pluginInfos, final String type, final Logger logger) { @@ -225,38 +180,6 @@ private static void logPluginInfo(final List pluginInfos, final Stri } } - private List> plugins() { - return plugins; - } - - public void processModules(Iterable modules) { - for (Module module : modules) { - processModule(module); - } - } - - public void processModule(Module module) { - for (Tuple plugin : plugins()) { - // see if there are onModule references - List references = onModuleReferences.get(plugin.v2()); - if (references != null) { - for (OnModuleReference reference : references) { - if (reference.moduleClass.isAssignableFrom(module.getClass())) { - try { - reference.onModuleMethod.invoke(plugin.v2(), module); - } catch (IllegalAccessException | InvocationTargetException e) { - logger.warn("plugin {}, failed to invoke custom onModule method", e, plugin.v1().getName()); - throw new ElasticsearchException("failed to invoke onModule", e); - } catch (Exception e) { - logger.warn((Supplier) () -> new ParameterizedMessage("plugin {}, failed to invoke custom onModule method", plugin.v1().getName()), e); - throw e; - } - } - } - } - } - } - public Settings updatedSettings() { Map foundSettings = new HashMap<>(); final Settings.Builder builder = Settings.builder(); @@ -315,55 +238,93 @@ public PluginsAndModules info() { // a "bundle" is a group of plugins in a single classloader // really should be 1-1, but we are not so fortunate static class Bundle { - List plugins = new ArrayList<>(); - List urls = new ArrayList<>(); + final PluginInfo plugin; + final Set urls; + + Bundle(PluginInfo plugin, Set urls) { + this.plugin = Objects.requireNonNull(plugin); + this.urls = Objects.requireNonNull(urls); + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + Bundle bundle = (Bundle) o; + return Objects.equals(plugin, bundle.plugin); + } + + @Override + public int hashCode() { + return Objects.hash(plugin); + } } // similar in impl to getPluginBundles, but DO NOT try to make them share code. // we don't need to inherit all the leniency, and things are different enough. 
- static List getModuleBundles(Path modulesDirectory) throws IOException { + static Set getModuleBundles(Path modulesDirectory) throws IOException { // damn leniency if (Files.notExists(modulesDirectory)) { - return Collections.emptyList(); + return Collections.emptySet(); } - List bundles = new ArrayList<>(); + Set bundles = new LinkedHashSet<>(); try (DirectoryStream stream = Files.newDirectoryStream(modulesDirectory)) { for (Path module : stream) { - if (FileSystemUtils.isHidden(module)) { - continue; // skip over .DS_Store etc - } PluginInfo info = PluginInfo.readFromProperties(module); - Bundle bundle = new Bundle(); - bundle.plugins.add(info); + Set urls = new LinkedHashSet<>(); // gather urls for jar files try (DirectoryStream jarStream = Files.newDirectoryStream(module, "*.jar")) { for (Path jar : jarStream) { // normalize with toRealPath to get symlinks out of our hair - bundle.urls.add(jar.toRealPath().toUri().toURL()); + URL url = jar.toRealPath().toUri().toURL(); + if (urls.add(url) == false) { + throw new IllegalStateException("duplicate codebase: " + url); + } } } - bundles.add(bundle); + if (bundles.add(new Bundle(info, urls)) == false) { + throw new IllegalStateException("duplicate module: " + info); + } } } return bundles; } - static List getPluginBundles(Path pluginsDirectory) throws IOException { + static void checkForFailedPluginRemovals(final Path pluginsDirectory) throws IOException { + /* + * Check for the existence of a marker file that indicates any plugins are in a garbage state from a failed attempt to remove the + * plugin. + */ + try (DirectoryStream stream = Files.newDirectoryStream(pluginsDirectory, ".removing-*")) { + final Iterator iterator = stream.iterator(); + if (iterator.hasNext()) { + final Path removing = iterator.next(); + final String fileName = removing.getFileName().toString(); + final String name = fileName.substring(1 + fileName.indexOf("-")); + final String message = String.format( + Locale.ROOT, + "found file [%s] from a failed attempt to remove the plugin [%s]; execute [elasticsearch-plugin remove %2$s]", + removing, + name); + throw new IllegalStateException(message); + } + } + } + + static Set getPluginBundles(Path pluginsDirectory) throws IOException { Logger logger = Loggers.getLogger(PluginsService.class); // TODO: remove this leniency, but tests bogusly rely on it if (!isAccessibleDirectory(pluginsDirectory, logger)) { - return Collections.emptyList(); + return Collections.emptySet(); } - List bundles = new ArrayList<>(); + Set bundles = new LinkedHashSet<>(); + + checkForFailedPluginRemovals(pluginsDirectory); try (DirectoryStream stream = Files.newDirectoryStream(pluginsDirectory)) { for (Path plugin : stream) { - if (FileSystemUtils.isHidden(plugin)) { - logger.trace("--- skip hidden plugin file[{}]", plugin.toAbsolutePath()); - continue; - } logger.trace("--- adding plugin [{}]", plugin.toAbsolutePath()); final PluginInfo info; try { @@ -373,47 +334,58 @@ static List getPluginBundles(Path pluginsDirectory) throws IOException { + plugin.getFileName() + "]. 
Was the plugin built before 2.0?", e); } - List urls = new ArrayList<>(); + Set urls = new LinkedHashSet<>(); try (DirectoryStream jarStream = Files.newDirectoryStream(plugin, "*.jar")) { for (Path jar : jarStream) { // normalize with toRealPath to get symlinks out of our hair - urls.add(jar.toRealPath().toUri().toURL()); + URL url = jar.toRealPath().toUri().toURL(); + if (urls.add(url) == false) { + throw new IllegalStateException("duplicate codebase: " + url); + } } } - final Bundle bundle = new Bundle(); - bundles.add(bundle); - bundle.plugins.add(info); - bundle.urls.addAll(urls); + if (bundles.add(new Bundle(info, urls)) == false) { + throw new IllegalStateException("duplicate plugin: " + info); + } } } return bundles; } - private List> loadBundles(List bundles) { + private List> loadBundles(Set bundles) { List> plugins = new ArrayList<>(); for (Bundle bundle : bundles) { // jar-hell check the bundle against the parent classloader // pluginmanager does it, but we do it again, in case lusers mess with jar files manually try { - final List jars = new ArrayList<>(); - jars.addAll(Arrays.asList(JarHell.parseClassPath())); - jars.addAll(bundle.urls); - JarHell.checkJarHell(jars.toArray(new URL[0])); + Set classpath = JarHell.parseClassPath(); + // check we don't have conflicting codebases + Set intersection = new HashSet<>(classpath); + intersection.retainAll(bundle.urls); + if (intersection.isEmpty() == false) { + throw new IllegalStateException("jar hell! duplicate codebases between" + + " plugin and core: " + intersection); + } + // check we don't have conflicting classes + Set union = new HashSet<>(classpath); + union.addAll(bundle.urls); + JarHell.checkJarHell(union); } catch (Exception e) { - throw new IllegalStateException("failed to load bundle " + bundle.urls + " due to jar hell", e); + throw new IllegalStateException("failed to load plugin " + bundle.plugin + + " due to jar hell", e); } - // create a child to load the plugins in this bundle - ClassLoader loader = URLClassLoader.newInstance(bundle.urls.toArray(new URL[0]), getClass().getClassLoader()); - for (PluginInfo pluginInfo : bundle.plugins) { - // reload lucene SPI with any new services from the plugin - reloadLuceneSPI(loader); - final Class pluginClass = loadPluginClass(pluginInfo.getClassname(), loader); - final Plugin plugin = loadPlugin(pluginClass, settings); - plugins.add(new Tuple<>(pluginInfo, plugin)); - } + // create a child to load the plugin in this bundle + ClassLoader loader = URLClassLoader.newInstance(bundle.urls.toArray(new URL[0]), + getClass().getClassLoader()); + // reload lucene SPI with any new services from the plugin + reloadLuceneSPI(loader); + final Class pluginClass = + loadPluginClass(bundle.plugin.getClassname(), loader); + final Plugin plugin = loadPlugin(pluginClass, settings); + plugins.add(new Tuple<>(bundle.plugin, plugin)); } return Collections.unmodifiableList(plugins); diff --git a/core/src/main/java/org/elasticsearch/plugins/RemovePluginCommand.java b/core/src/main/java/org/elasticsearch/plugins/RemovePluginCommand.java deleted file mode 100644 index 54cd34d674219..0000000000000 --- a/core/src/main/java/org/elasticsearch/plugins/RemovePluginCommand.java +++ /dev/null @@ -1,101 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. 
Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.plugins; - -import java.nio.file.Files; -import java.nio.file.Path; -import java.nio.file.StandardCopyOption; -import java.util.ArrayList; -import java.util.List; -import java.util.Map; - -import joptsimple.OptionSet; -import joptsimple.OptionSpec; -import org.apache.lucene.util.IOUtils; -import org.elasticsearch.cli.ExitCodes; -import org.elasticsearch.cli.SettingCommand; -import org.elasticsearch.cli.UserException; -import org.elasticsearch.common.Strings; -import org.elasticsearch.cli.Terminal; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.env.Environment; -import org.elasticsearch.node.internal.InternalSettingsPreparer; - -import static org.elasticsearch.cli.Terminal.Verbosity.VERBOSE; - -/** - * A command for the plugin cli to remove a plugin from elasticsearch. - */ -final class RemovePluginCommand extends SettingCommand { - - private final OptionSpec arguments; - - RemovePluginCommand() { - super("Removes a plugin from elasticsearch"); - this.arguments = parser.nonOptions("plugin name"); - } - - @Override - protected void execute(Terminal terminal, OptionSet options, Map settings) throws Exception { - String arg = arguments.value(options); - execute(terminal, arg, settings); - } - - // pkg private for testing - void execute(Terminal terminal, String pluginName, Map settings) throws Exception { - final Environment env = InternalSettingsPreparer.prepareEnvironment(Settings.EMPTY, terminal, settings); - - terminal.println("-> Removing " + Strings.coalesceToEmpty(pluginName) + "..."); - - final Path pluginDir = env.pluginsFile().resolve(pluginName); - if (Files.exists(pluginDir) == false) { - throw new UserException( - ExitCodes.USAGE, - "plugin " + pluginName + " not found; run 'elasticsearch-plugin list' to get list of installed plugins"); - } - - final List pluginPaths = new ArrayList<>(); - - final Path pluginBinDir = env.binFile().resolve(pluginName); - if (Files.exists(pluginBinDir)) { - if (Files.isDirectory(pluginBinDir) == false) { - throw new UserException(ExitCodes.IO_ERROR, "Bin dir for " + pluginName + " is not a directory"); - } - pluginPaths.add(pluginBinDir); - terminal.println(VERBOSE, "Removing: " + pluginBinDir); - } - - terminal.println(VERBOSE, "Removing: " + pluginDir); - final Path tmpPluginDir = env.pluginsFile().resolve(".removing-" + pluginName); - Files.move(pluginDir, tmpPluginDir, StandardCopyOption.ATOMIC_MOVE); - pluginPaths.add(tmpPluginDir); - - IOUtils.rm(pluginPaths.toArray(new Path[pluginPaths.size()])); - - // we preserve the config files in case the user is upgrading the plugin, but we print - // a message so the user knows in case they want to remove manually - final Path pluginConfigDir = env.configFile().resolve(pluginName); - if (Files.exists(pluginConfigDir)) { - terminal.println( - "-> Preserving plugin config files [" + pluginConfigDir + "] in case of upgrade, delete manually if not needed"); 
- } - } - -} diff --git a/core/src/main/java/org/elasticsearch/plugins/RepositoryPlugin.java b/core/src/main/java/org/elasticsearch/plugins/RepositoryPlugin.java index 9306ee3707664..a3af52a9a4aca 100644 --- a/core/src/main/java/org/elasticsearch/plugins/RepositoryPlugin.java +++ b/core/src/main/java/org/elasticsearch/plugins/RepositoryPlugin.java @@ -22,6 +22,7 @@ import java.util.Collections; import java.util.Map; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.env.Environment; import org.elasticsearch.repositories.Repository; @@ -38,7 +39,7 @@ public interface RepositoryPlugin { * The key of the returned {@link Map} is the type name of the repository and * the value is a factory to construct the {@link Repository} interface. */ - default Map getRepositories(Environment env) { + default Map getRepositories(Environment env, NamedXContentRegistry namedXContentRegistry) { return Collections.emptyMap(); } } diff --git a/core/src/main/java/org/elasticsearch/plugins/SearchPlugin.java b/core/src/main/java/org/elasticsearch/plugins/SearchPlugin.java index 97ee715ef6a26..01685535a4e0e 100644 --- a/core/src/main/java/org/elasticsearch/plugins/SearchPlugin.java +++ b/core/src/main/java/org/elasticsearch/plugins/SearchPlugin.java @@ -20,16 +20,21 @@ package org.elasticsearch.plugins; import org.apache.lucene.search.Query; +import org.elasticsearch.action.search.SearchRequest; +import org.elasticsearch.action.search.SearchResponse; +import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.NamedWriteable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.lucene.search.function.ScoreFunction; import org.elasticsearch.common.xcontent.XContent; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryParser; import org.elasticsearch.index.query.functionscore.ScoreFunctionBuilder; import org.elasticsearch.index.query.functionscore.ScoreFunctionParser; +import org.elasticsearch.search.SearchExtBuilder; import org.elasticsearch.search.aggregations.Aggregation; import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.Aggregator; @@ -44,10 +49,13 @@ import org.elasticsearch.search.fetch.FetchSubPhase; import org.elasticsearch.search.fetch.subphase.highlight.Highlighter; import org.elasticsearch.search.suggest.Suggester; +import org.elasticsearch.search.suggest.SuggestionBuilder; +import java.io.IOException; import java.util.List; import java.util.Map; import java.util.TreeMap; +import java.util.function.BiConsumer; import static java.util.Collections.emptyList; import static java.util.Collections.emptyMap; @@ -82,6 +90,12 @@ default List> default List getFetchSubPhases(FetchPhaseConstructionContext context) { return emptyList(); } + /** + * The new {@link SearchExtBuilder}s defined by this plugin. + */ + default List> getSearchExts() { + return emptyList(); + } /** * Get the {@link Highlighter}s defined by this plugin. */ @@ -91,8 +105,8 @@ default Map getHighlighters() { /** * The new {@link Suggester}s defined by this plugin. */ - default Map> getSuggesters() { - return emptyMap(); + default List> getSuggesters() { + return emptyList(); } /** * The new {@link Query}s defined by this plugin. 
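As an aside, the reshaped getSuggesters() above now returns specs rather than a name-to-instance map, paired with the SuggesterSpec class introduced further down in this file. A registration against the new signature might look roughly like this (a minimal sketch; MySuggesterPlugin and MySuggestionBuilder are hypothetical names, not part of this patch):

```java
import java.util.Collections;
import java.util.List;

import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.plugins.SearchPlugin;

// Hypothetical plugin: only the shape of the extension point is taken from the patch.
public class MySuggesterPlugin extends Plugin implements SearchPlugin {
    @Override
    public List<SuggesterSpec<?>> getSuggesters() {
        // SuggesterSpec(name, reader, parser): the reader is typically a StreamInput
        // constructor reference, and the parser reads the suggestion builder from x-content.
        return Collections.singletonList(new SuggesterSpec<>(
                "my_suggester", MySuggestionBuilder::new, MySuggestionBuilder::fromXContent));
    }
}
```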
@@ -112,6 +126,15 @@ default List getAggregations() { default List getPipelineAggregations() { return emptyList(); } + /** + * The new search response listeners in the form of {@link BiConsumer}s added by this plugin. + * The listeners are invoked on the coordinating node, at the very end of the search request. + * This provides a convenient location if you wish to inspect/modify the final response (took time, etc). + * The BiConsumers are passed the original {@link SearchRequest} and the final {@link SearchResponse} + */ + default List> getSearchResponseListeners() { + return emptyList(); + } /** * Specification of custom {@link ScoreFunction}. @@ -126,6 +149,38 @@ public ScoreFunctionSpec(String name, Writeable.Reader reader, ScoreFunctionP } } + /** + * Specification for a {@link Suggester}. + */ + class SuggesterSpec> extends SearchExtensionSpec> { + /** + * Specification of custom {@link Suggester}. + * + * @param name holds the names by which this suggester might be parsed. The {@link ParseField#getPreferredName()} is special as it + * is the name by under which the reader is registered. So it is the name that the query should use as its + * {@link NamedWriteable#getWriteableName()} too. + * @param reader the reader registered for this suggester's builder. Typically a reference to a constructor that takes a + * {@link StreamInput} + * @param parser the parser the reads the query suggester from xcontent + */ + public SuggesterSpec(ParseField name, Writeable.Reader reader, CheckedFunction parser) { + super(name, reader, parser); + } + + /** + * Specification of custom {@link Suggester}. + * + * @param name the name by which this suggester might be parsed or deserialized. Make sure that the query builder returns this name + * for {@link NamedWriteable#getWriteableName()}. + * @param reader the reader registered for this suggester's builder. Typically a reference to a constructor that takes a + * {@link StreamInput} + * @param parser the parser the reads the suggester builder from xcontent + */ + public SuggesterSpec(String name, Writeable.Reader reader, CheckedFunction parser) { + super(name, reader, parser); + } + } + /** * Specification of custom {@link Query}. */ @@ -160,7 +215,7 @@ public QuerySpec(String name, Writeable.Reader reader, QueryParser parser) /** * Specification for an {@link Aggregation}. */ - public static class AggregationSpec extends SearchExtensionSpec { + class AggregationSpec extends SearchExtensionSpec { private final Map> resultReaders = new TreeMap<>(); /** @@ -217,7 +272,7 @@ public Map> getResultRea /** * Specification for a {@link PipelineAggregator}. */ - public static class PipelineAggregationSpec extends SearchExtensionSpec { + class PipelineAggregationSpec extends SearchExtensionSpec { private final Map> resultReaders = new TreeMap<>(); private final Writeable.Reader aggregatorReader; @@ -290,6 +345,20 @@ public Map> getResultRea } } + /** + * Specification for a {@link SearchExtBuilder} which represents an additional section that can be + * parsed in a search request (within the ext element). + */ + class SearchExtSpec extends SearchExtensionSpec> { + public SearchExtSpec(ParseField name, Writeable.Reader reader, + CheckedFunction parser) { + super(name, reader, parser); + } + + public SearchExtSpec(String name, Writeable.Reader reader, CheckedFunction parser) { + super(name, reader, parser); + } + } /** * Specification of search time behavior extension like a custom {@link MovAvgModel} or {@link ScoreFunction}. 
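Of the SearchPlugin additions above, the search response listener hook is the most self-contained; a plugin wiring into it could look roughly like this (illustrative sketch only; the plugin class name and log message are assumptions, not part of this patch):

```java
import java.util.Collections;
import java.util.List;
import java.util.function.BiConsumer;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.plugins.SearchPlugin;

public class SearchAuditPlugin extends Plugin implements SearchPlugin {
    private static final Logger logger = LogManager.getLogger(SearchAuditPlugin.class);

    @Override
    public List<BiConsumer<SearchRequest, SearchResponse>> getSearchResponseListeners() {
        // Runs on the coordinating node after the final SearchResponse has been built,
        // so the listener sees exactly what the client will receive (took time included).
        return Collections.singletonList((request, response) ->
                logger.debug("search against {} completed", (Object) request.indices()));
    }
}
```

Because the listener receives the fully assembled response, it is a natural place for auditing or for inspecting response-level details without touching the per-shard fetch phases.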
diff --git a/core/src/main/java/org/elasticsearch/plugins/spi/NamedXContentProvider.java b/core/src/main/java/org/elasticsearch/plugins/spi/NamedXContentProvider.java new file mode 100644 index 0000000000000..ef511fcfeae35 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/plugins/spi/NamedXContentProvider.java @@ -0,0 +1,35 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.plugins.spi; + +import org.elasticsearch.common.xcontent.NamedXContentRegistry; + +import java.util.List; + +/** + * Provides named XContent parsers. + */ +public interface NamedXContentProvider { + + /** + * @return a list of {@link NamedXContentRegistry.Entry} that this plugin provides. + */ + List getNamedXContentParsers(); +} diff --git a/core/src/main/java/org/elasticsearch/plugins/spi/package-info.java b/core/src/main/java/org/elasticsearch/plugins/spi/package-info.java new file mode 100644 index 0000000000000..7740e1424fb76 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/plugins/spi/package-info.java @@ -0,0 +1,25 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/** + * This package contains interfaces for services provided by + * Elasticsearch plugins to external applications like the + * Java High Level Rest Client. 
+ */ +package org.elasticsearch.plugins.spi; diff --git a/core/src/main/java/org/elasticsearch/repositories/RepositoriesModule.java b/core/src/main/java/org/elasticsearch/repositories/RepositoriesModule.java index 50ab90b4fe1b5..d03e2c1ac349f 100644 --- a/core/src/main/java/org/elasticsearch/repositories/RepositoriesModule.java +++ b/core/src/main/java/org/elasticsearch/repositories/RepositoriesModule.java @@ -28,6 +28,7 @@ import org.elasticsearch.common.inject.AbstractModule; import org.elasticsearch.common.inject.binder.LinkedBindingBuilder; import org.elasticsearch.common.inject.multibindings.MapBinder; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.env.Environment; import org.elasticsearch.plugins.RepositoryPlugin; import org.elasticsearch.repositories.fs.FsRepository; @@ -43,13 +44,13 @@ public class RepositoriesModule extends AbstractModule { private final Map repositoryTypes; - public RepositoriesModule(Environment env, List repoPlugins) { + public RepositoriesModule(Environment env, List repoPlugins, NamedXContentRegistry namedXContentRegistry) { Map factories = new HashMap<>(); - factories.put(FsRepository.TYPE, (metadata) -> new FsRepository(metadata, env)); - factories.put(URLRepository.TYPE, (metadata) -> new URLRepository(metadata, env)); + factories.put(FsRepository.TYPE, (metadata) -> new FsRepository(metadata, env, namedXContentRegistry)); + factories.put(URLRepository.TYPE, (metadata) -> new URLRepository(metadata, env, namedXContentRegistry)); for (RepositoryPlugin repoPlugin : repoPlugins) { - Map newRepoTypes = repoPlugin.getRepositories(env); + Map newRepoTypes = repoPlugin.getRepositories(env, namedXContentRegistry); for (Map.Entry entry : newRepoTypes.entrySet()) { if (factories.put(entry.getKey(), entry.getValue()) != null) { throw new IllegalArgumentException("Repository type [" + entry.getKey() + "] is already registered"); diff --git a/core/src/main/java/org/elasticsearch/repositories/RepositoriesService.java b/core/src/main/java/org/elasticsearch/repositories/RepositoriesService.java index e5951d48a003c..7aea5d18afb43 100644 --- a/core/src/main/java/org/elasticsearch/repositories/RepositoriesService.java +++ b/core/src/main/java/org/elasticsearch/repositories/RepositoriesService.java @@ -25,7 +25,7 @@ import org.elasticsearch.cluster.AckedClusterStateUpdateTask; import org.elasticsearch.cluster.ClusterChangedEvent; import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.ClusterStateListener; +import org.elasticsearch.cluster.ClusterStateApplier; import org.elasticsearch.cluster.ack.ClusterStateUpdateRequest; import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse; import org.elasticsearch.cluster.metadata.MetaData; @@ -53,7 +53,7 @@ /** * Service responsible for maintaining and providing access to snapshot repositories on nodes. 
*/ -public class RepositoriesService extends AbstractComponent implements ClusterStateListener { +public class RepositoriesService extends AbstractComponent implements ClusterStateApplier { private final Map typesRegistry; @@ -72,7 +72,7 @@ public RepositoriesService(Settings settings, ClusterService clusterService, Tra // Doesn't make sense to maintain repositories on non-master and non-data nodes // Nothing happens there anyway if (DiscoveryNode.isDataNode(settings) || DiscoveryNode.isMasterNode(settings)) { - clusterService.add(this); + clusterService.addStateApplier(this); } this.verifyAction = new VerifyNodeRepositoryAction(settings, transportService, clusterService, this); } @@ -253,7 +253,7 @@ public void onFailure(Exception e) { * @param event cluster changed event */ @Override - public void clusterChanged(ClusterChangedEvent event) { + public void applyClusterState(ClusterChangedEvent event) { try { RepositoriesMetaData oldMetaData = event.previousState().getMetaData().custom(RepositoriesMetaData.TYPE); RepositoriesMetaData newMetaData = event.state().getMetaData().custom(RepositoriesMetaData.TYPE); @@ -401,7 +401,7 @@ private class VerifyingRegisterRepositoryListener implements ActionListener listener; - public VerifyingRegisterRepositoryListener(String name, final ActionListener listener) { + VerifyingRegisterRepositoryListener(String name, final ActionListener listener) { this.name = name; this.listener = listener; } diff --git a/core/src/main/java/org/elasticsearch/repositories/Repository.java b/core/src/main/java/org/elasticsearch/repositories/Repository.java index b1f534bf684cd..462b7ea1dab12 100644 --- a/core/src/main/java/org/elasticsearch/repositories/Repository.java +++ b/core/src/main/java/org/elasticsearch/repositories/Repository.java @@ -115,16 +115,19 @@ interface Factory { * @param failure global failure reason or null * @param totalShards total number of shards * @param shardFailures list of shard failures + * @param repositoryStateId the unique id identifying the state of the repository when the snapshot began * @return snapshot description */ - SnapshotInfo finalizeSnapshot(SnapshotId snapshotId, List indices, long startTime, String failure, int totalShards, List shardFailures); + SnapshotInfo finalizeSnapshot(SnapshotId snapshotId, List indices, long startTime, String failure, int totalShards, + List shardFailures, long repositoryStateId); /** * Deletes snapshot * * @param snapshotId snapshot id + * @param repositoryStateId the unique id identifying the state of the repository when the snapshot deletion began */ - void deleteSnapshot(SnapshotId snapshotId); + void deleteSnapshot(SnapshotId snapshotId, long repositoryStateId); /** * Returns snapshot throttle time in nanoseconds diff --git a/core/src/main/java/org/elasticsearch/repositories/RepositoryData.java b/core/src/main/java/org/elasticsearch/repositories/RepositoryData.java index 4927e2b41b7f7..747aab8de8451 100644 --- a/core/src/main/java/org/elasticsearch/repositories/RepositoryData.java +++ b/core/src/main/java/org/elasticsearch/repositories/RepositoryData.java @@ -20,16 +20,21 @@ package org.elasticsearch.repositories; import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.ResourceNotFoundException; +import org.elasticsearch.common.Nullable; import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import 
org.elasticsearch.snapshots.SnapshotId; +import org.elasticsearch.snapshots.SnapshotState; import java.io.IOException; import java.util.ArrayList; +import java.util.Collection; import java.util.Collections; import java.util.HashMap; +import java.util.LinkedHashMap; import java.util.LinkedHashSet; import java.util.List; import java.util.Map; @@ -42,14 +47,30 @@ * A class that represents the data in a repository, as captured in the * repository's index blob. */ -public final class RepositoryData implements ToXContent { +public final class RepositoryData { - public static final RepositoryData EMPTY = new RepositoryData(Collections.emptyList(), Collections.emptyMap()); + /** + * The generation value indicating the repository has no index generational files. + */ + public static final long EMPTY_REPO_GEN = -1L; + /** + * An instance initialized for an empty repository. + */ + public static final RepositoryData EMPTY = new RepositoryData(EMPTY_REPO_GEN, + Collections.emptyMap(), Collections.emptyMap(), Collections.emptyMap(), Collections.emptyList()); + /** + * The generational id of the index file from which the repository data was read. + */ + private final long genId; /** * The ids of the snapshots in the repository. */ - private final List snapshotIds; + private final Map snapshotIds; + /** + * The states of each snapshot in the repository. + */ + private final Map snapshotStates; /** * The indices found in the repository across all snapshots, as a name to {@link IndexId} mapping */ @@ -58,24 +79,69 @@ public final class RepositoryData implements ToXContent { * The snapshots that each index belongs to. */ private final Map> indexSnapshots; + /** + * The snapshots that are no longer compatible with the current cluster ES version. + */ + private final List incompatibleSnapshotIds; - public RepositoryData(List snapshotIds, Map> indexSnapshots) { - this.snapshotIds = Collections.unmodifiableList(snapshotIds); - this.indices = Collections.unmodifiableMap(indexSnapshots.keySet() - .stream() - .collect(Collectors.toMap(IndexId::getName, Function.identity()))); + public RepositoryData(long genId, + Map snapshotIds, + Map snapshotStates, + Map> indexSnapshots, + List incompatibleSnapshotIds) { + this.genId = genId; + this.snapshotIds = Collections.unmodifiableMap(snapshotIds); + this.snapshotStates = Collections.unmodifiableMap(snapshotStates); + this.indices = Collections.unmodifiableMap(indexSnapshots.keySet().stream() + .collect(Collectors.toMap(IndexId::getName, Function.identity()))); this.indexSnapshots = Collections.unmodifiableMap(indexSnapshots); + this.incompatibleSnapshotIds = Collections.unmodifiableList(incompatibleSnapshotIds); } protected RepositoryData copy() { - return new RepositoryData(snapshotIds, indexSnapshots); + return new RepositoryData(genId, snapshotIds, snapshotStates, indexSnapshots, incompatibleSnapshotIds); } /** - * Returns an unmodifiable list of the snapshot ids. + * Gets the generational index file id from which this instance was read. */ - public List getSnapshotIds() { - return snapshotIds; + public long getGenId() { + return genId; + } + + /** + * Returns an unmodifiable collection of the snapshot ids. + */ + public Collection getSnapshotIds() { + return Collections.unmodifiableCollection(snapshotIds.values()); + } + + /** + * Returns an immutable collection of the snapshot ids in the repository that are incompatible with the + * current ES version. 
+ */ + public Collection getIncompatibleSnapshotIds() { + return incompatibleSnapshotIds; + } + + /** + * Returns an immutable collection of all the snapshot ids in the repository, both active and + * incompatible snapshots. + */ + public Collection getAllSnapshotIds() { + List allSnapshotIds = new ArrayList<>(snapshotIds.size() + incompatibleSnapshotIds.size()); + allSnapshotIds.addAll(snapshotIds.values()); + allSnapshotIds.addAll(incompatibleSnapshotIds); + return Collections.unmodifiableList(allSnapshotIds); + } + + /** + * Returns the {@link SnapshotState} for the given snapshot. Returns {@code null} if + * there is no state for the snapshot. + */ + @Nullable + public SnapshotState getSnapshotState(final SnapshotId snapshotId) { + return snapshotStates.get(snapshotId.getUUID()); } /** @@ -89,12 +155,19 @@ public Map getIndices() { * Add a snapshot and its indices to the repository; returns a new instance. If the snapshot * already exists in the repository data, this method throws an IllegalArgumentException. */ - public RepositoryData addSnapshot(final SnapshotId snapshotId, final List snapshottedIndices) { - if (snapshotIds.contains(snapshotId)) { - throw new IllegalArgumentException("[" + snapshotId + "] already exists in the repository data"); + public RepositoryData addSnapshot(final SnapshotId snapshotId, + final SnapshotState snapshotState, + final List snapshottedIndices) { + if (snapshotIds.containsKey(snapshotId.getUUID())) { + // if the snapshot id already exists in the repository data, it means an old master + // that is blocked from the cluster is trying to finalize a snapshot concurrently with + // the new master, so we make the operation idempotent + return this; } - List snapshots = new ArrayList<>(snapshotIds); - snapshots.add(snapshotId); + Map snapshots = new HashMap<>(snapshotIds); + snapshots.put(snapshotId.getUUID(), snapshotId); + Map newSnapshotStates = new HashMap<>(snapshotStates); + newSnapshotStates.put(snapshotId.getUUID(), snapshotState); Map> allIndexSnapshots = new HashMap<>(indexSnapshots); for (final IndexId indexId : snapshottedIndices) { if (allIndexSnapshots.containsKey(indexId)) { @@ -110,24 +183,21 @@ public RepositoryData addSnapshot(final SnapshotId snapshotId, final List> indexSnapshots) { - return new RepositoryData(snapshotIds, indexSnapshots); + return new RepositoryData(genId, snapshots, newSnapshotStates, allIndexSnapshots, incompatibleSnapshotIds); } /** * Remove a snapshot and remove any indices that no longer exist in the repository due to the deletion of the snapshot. 
*/ public RepositoryData removeSnapshot(final SnapshotId snapshotId) { - List newSnapshotIds = snapshotIds - .stream() - .filter(id -> snapshotId.equals(id) == false) - .collect(Collectors.toList()); + Map newSnapshotIds = snapshotIds.values().stream() + .filter(id -> !snapshotId.equals(id)) + .collect(Collectors.toMap(SnapshotId::getUUID, Function.identity())); + if (newSnapshotIds.size() == snapshotIds.size()) { + throw new ResourceNotFoundException("Attempting to remove non-existent snapshot [{}] from repository data", snapshotId); + } + Map newSnapshotStates = new HashMap<>(snapshotStates); + newSnapshotStates.remove(snapshotId.getUUID()); Map> indexSnapshots = new HashMap<>(); for (final IndexId indexId : indices.values()) { Set set; @@ -135,7 +205,8 @@ public RepositoryData removeSnapshot(final SnapshotId snapshotId) { assert snapshotIds != null; if (snapshotIds.contains(snapshotId)) { if (snapshotIds.size() == 1) { - // removing the snapshot will mean no more snapshots have this index, so just skip over it + // removing the snapshot will mean no more snapshots + // have this index, so just skip over it continue; } set = new LinkedHashSet<>(snapshotIds); @@ -146,7 +217,7 @@ public RepositoryData removeSnapshot(final SnapshotId snapshotId) { indexSnapshots.put(indexId, set); } - return new RepositoryData(newSnapshotIds, indexSnapshots); + return new RepositoryData(genId, newSnapshotIds, newSnapshotStates, indexSnapshots, incompatibleSnapshotIds); } /** @@ -155,11 +226,18 @@ public RepositoryData removeSnapshot(final SnapshotId snapshotId) { public Set getSnapshots(final IndexId indexId) { Set snapshotIds = indexSnapshots.get(indexId); if (snapshotIds == null) { - throw new IllegalArgumentException("unknown snapshot index " + indexId + ""); + throw new IllegalArgumentException("unknown snapshot index " + indexId); } return snapshotIds; } + /** + * Initializes the indices in the repository metadata; returns a new instance. + */ + public RepositoryData initIndices(final Map> indexSnapshots) { + return new RepositoryData(genId, snapshotIds, snapshotStates, indexSnapshots, incompatibleSnapshotIds); + } + @Override public boolean equals(Object obj) { if (this == obj) { @@ -170,13 +248,15 @@ public boolean equals(Object obj) { } @SuppressWarnings("unchecked") RepositoryData that = (RepositoryData) obj; return snapshotIds.equals(that.snapshotIds) + && snapshotStates.equals(that.snapshotStates) && indices.equals(that.indices) - && indexSnapshots.equals(that.indexSnapshots); + && indexSnapshots.equals(that.indexSnapshots) + && incompatibleSnapshotIds.equals(that.incompatibleSnapshotIds); } @Override public int hashCode() { - return Objects.hash(snapshotIds, indices, indexSnapshots); + return Objects.hash(snapshotIds, snapshotStates, indices, indexSnapshots, incompatibleSnapshotIds); } /** @@ -224,17 +304,45 @@ public List resolveNewIndices(final List indicesToResolve) { return snapshotIndices; } + /** + * Returns a new {@link RepositoryData} instance containing the same snapshot data as the + * invoking instance, with the given incompatible snapshots added to the new instance. 
+ */ + public RepositoryData addIncompatibleSnapshots(final List incompatibleSnapshotIds) { + List newSnapshotIds = new ArrayList<>(this.snapshotIds.values()); + List newIncompatibleSnapshotIds = new ArrayList<>(this.incompatibleSnapshotIds); + for (SnapshotId snapshotId : incompatibleSnapshotIds) { + newSnapshotIds.remove(snapshotId); + newIncompatibleSnapshotIds.add(snapshotId); + } + Map snapshotMap = newSnapshotIds.stream().collect(Collectors.toMap(SnapshotId::getUUID, Function.identity())); + return new RepositoryData(this.genId, snapshotMap, this.snapshotStates, this.indexSnapshots, newIncompatibleSnapshotIds); + } + private static final String SNAPSHOTS = "snapshots"; + private static final String INCOMPATIBLE_SNAPSHOTS = "incompatible-snapshots"; private static final String INDICES = "indices"; private static final String INDEX_ID = "id"; + private static final String NAME = "name"; + private static final String UUID = "uuid"; + private static final String STATE = "state"; - @Override - public XContentBuilder toXContent(final XContentBuilder builder, final Params params) throws IOException { + /** + * Writes the snapshots metadata and the related indices metadata to x-content, omitting the + * incompatible snapshots. + */ + public XContentBuilder snapshotsToXContent(final XContentBuilder builder, final ToXContent.Params params) throws IOException { builder.startObject(); // write the snapshots list builder.startArray(SNAPSHOTS); for (final SnapshotId snapshot : getSnapshotIds()) { - snapshot.toXContent(builder, params); + builder.startObject(); + builder.field(NAME, snapshot.getName()); + builder.field(UUID, snapshot.getUUID()); + if (snapshotStates.containsKey(snapshot.getUUID())) { + builder.field(STATE, snapshotStates.get(snapshot.getUUID()).value()); + } + builder.endObject(); } builder.endArray(); // write the indices map @@ -246,7 +354,7 @@ public XContentBuilder toXContent(final XContentBuilder builder, final Params pa Set snapshotIds = indexSnapshots.get(indexId); assert snapshotIds != null; for (final SnapshotId snapshotId : snapshotIds) { - snapshotId.toXContent(builder, params); + builder.value(snapshotId.getUUID()); } builder.endArray(); builder.endObject(); @@ -256,21 +364,51 @@ public XContentBuilder toXContent(final XContentBuilder builder, final Params pa return builder; } - public static RepositoryData fromXContent(final XContentParser parser) throws IOException { - List snapshots = new ArrayList<>(); + /** + * Reads an instance of {@link RepositoryData} from x-content, loading the snapshots and indices metadata. 
+ */ + public static RepositoryData snapshotsFromXContent(final XContentParser parser, long genId) throws IOException { + Map snapshots = new LinkedHashMap<>(); + Map snapshotStates = new HashMap<>(); Map> indexSnapshots = new HashMap<>(); if (parser.nextToken() == XContentParser.Token.START_OBJECT) { while (parser.nextToken() == XContentParser.Token.FIELD_NAME) { - String currentFieldName = parser.currentName(); - if (SNAPSHOTS.equals(currentFieldName)) { + String field = parser.currentName(); + if (SNAPSHOTS.equals(field)) { if (parser.nextToken() == XContentParser.Token.START_ARRAY) { while (parser.nextToken() != XContentParser.Token.END_ARRAY) { - snapshots.add(SnapshotId.fromXContent(parser)); + final SnapshotId snapshotId; + // the new format from 5.0 which contains the snapshot name and uuid + if (parser.currentToken() == XContentParser.Token.START_OBJECT) { + String name = null; + String uuid = null; + SnapshotState state = null; + while (parser.nextToken() != XContentParser.Token.END_OBJECT) { + String currentFieldName = parser.currentName(); + parser.nextToken(); + if (NAME.equals(currentFieldName)) { + name = parser.text(); + } else if (UUID.equals(currentFieldName)) { + uuid = parser.text(); + } else if (STATE.equals(currentFieldName)) { + state = SnapshotState.fromValue(parser.numberValue().byteValue()); + } + } + snapshotId = new SnapshotId(name, uuid); + if (state != null) { + snapshotStates.put(uuid, state); + } + } else { + // the old format pre 5.0 that only contains the snapshot name, use the name as the uuid too + final String name = parser.text(); + snapshotId = new SnapshotId(name, name); + } + snapshots.put(snapshotId.getUUID(), snapshotId); } } else { - throw new ElasticsearchParseException("expected array for [" + currentFieldName + "]"); + throw new ElasticsearchParseException("expected array for [" + field + "]"); } - } else if (INDICES.equals(currentFieldName)) { + } else if (INDICES.equals(field)) { if (parser.nextToken() != XContentParser.Token.START_OBJECT) { throw new ElasticsearchParseException("start object expected [indices]"); } @@ -291,13 +429,72 @@ public static RepositoryData fromXContent(final XContentParser parser) throws IO throw new ElasticsearchParseException("start array expected [snapshots]"); } while (parser.nextToken() != XContentParser.Token.END_ARRAY) { - snapshotIds.add(SnapshotId.fromXContent(parser)); + String uuid = null; + // the old format pre 5.4.1 which contains the snapshot name and uuid + if (parser.currentToken() == XContentParser.Token.START_OBJECT) { + while (parser.nextToken() != XContentParser.Token.END_OBJECT) { + String currentFieldName = parser.currentName(); + parser.nextToken(); + if (UUID.equals(currentFieldName)) { + uuid = parser.text(); + } + } + } else { + // the new format post 5.4.1 that only contains the snapshot uuid, + // since we already have the name/uuid combo in the snapshots array + uuid = parser.text(); + } + snapshotIds.add(snapshots.get(uuid)); } } } assert indexId != null; indexSnapshots.put(new IndexId(indexName, indexId), snapshotIds); } + } else { + throw new ElasticsearchParseException("unknown field name [" + field + "]"); + } + } + } else { + throw new ElasticsearchParseException("start object expected"); + } + return new RepositoryData(genId, snapshots, snapshotStates, indexSnapshots, Collections.emptyList()); + } + + /** + * Writes the incompatible snapshot ids to x-content. 
+ */ + public XContentBuilder incompatibleSnapshotsToXContent(final XContentBuilder builder, final ToXContent.Params params) + throws IOException { + + builder.startObject(); + // write the incompatible snapshots list + builder.startArray(INCOMPATIBLE_SNAPSHOTS); + for (final SnapshotId snapshot : getIncompatibleSnapshotIds()) { + snapshot.toXContent(builder, params); + } + builder.endArray(); + builder.endObject(); + return builder; + } + + /** + * Reads the incompatible snapshot ids from x-content, loading them into a new instance of {@link RepositoryData} + * that is created from the invoking instance, plus the incompatible snapshots that are read from x-content. + */ + public RepositoryData incompatibleSnapshotsFromXContent(final XContentParser parser) throws IOException { + List incompatibleSnapshotIds = new ArrayList<>(); + if (parser.nextToken() == XContentParser.Token.START_OBJECT) { + while (parser.nextToken() == XContentParser.Token.FIELD_NAME) { + String currentFieldName = parser.currentName(); + if (INCOMPATIBLE_SNAPSHOTS.equals(currentFieldName)) { + if (parser.nextToken() == XContentParser.Token.START_ARRAY) { + while (parser.nextToken() != XContentParser.Token.END_ARRAY) { + incompatibleSnapshotIds.add(SnapshotId.fromXContent(parser)); + } + } else { + throw new ElasticsearchParseException("expected array for [" + currentFieldName + "]"); + } } else { throw new ElasticsearchParseException("unknown field name [" + currentFieldName + "]"); } @@ -305,7 +502,7 @@ public static RepositoryData fromXContent(final XContentParser parser) throws IO } else { throw new ElasticsearchParseException("start object expected"); } - return new RepositoryData(snapshots, indexSnapshots); + return new RepositoryData(this.genId, this.snapshotIds, this.snapshotStates, this.indexSnapshots, incompatibleSnapshotIds); } } diff --git a/core/src/main/java/org/elasticsearch/repositories/VerifyNodeRepositoryAction.java b/core/src/main/java/org/elasticsearch/repositories/VerifyNodeRepositoryAction.java index 49edae3ce22ac..cc1170a4841a2 100644 --- a/core/src/main/java/org/elasticsearch/repositories/VerifyNodeRepositoryAction.java +++ b/core/src/main/java/org/elasticsearch/repositories/VerifyNodeRepositoryAction.java @@ -64,10 +64,6 @@ public VerifyNodeRepositoryAction(Settings settings, TransportService transportS transportService.registerRequestHandler(ACTION_NAME, VerifyNodeRepositoryRequest::new, ThreadPool.Names.SAME, new VerifyNodeRepositoryRequestHandler()); } - public void close() { - transportService.removeHandler(ACTION_NAME); - } - public void verify(String repository, String verificationToken, final ActionListener listener) { final DiscoveryNodes discoNodes = clusterService.state().nodes(); final DiscoveryNode localNode = discoNodes.getLocalNode(); diff --git a/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreFormat.java b/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreFormat.java index 04900705e0ab2..2f25ec3b508cc 100644 --- a/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreFormat.java +++ b/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreFormat.java @@ -19,10 +19,10 @@ package org.elasticsearch.repositories.blobstore; import org.elasticsearch.cluster.metadata.MetaData; -import org.elasticsearch.common.ParseFieldMatcher; +import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.blobstore.BlobContainer; import org.elasticsearch.common.bytes.BytesReference; -import 
org.elasticsearch.common.xcontent.FromXContentBuilder; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; @@ -40,9 +40,9 @@ public abstract class BlobStoreFormat { protected final String blobNameFormat; - protected final FromXContentBuilder reader; + protected final CheckedFunction reader; - protected final ParseFieldMatcher parseFieldMatcher; + protected final NamedXContentRegistry namedXContentRegistry; // Serialization parameters to specify correct context for metadata serialization protected static final ToXContent.Params SNAPSHOT_ONLY_FORMAT_PARAMS; @@ -60,12 +60,12 @@ public abstract class BlobStoreFormat { /** * @param blobNameFormat format of the blobname in {@link String#format(Locale, String, Object...)} format * @param reader the prototype object that can deserialize objects with type T - * @param parseFieldMatcher parse field matcher */ - protected BlobStoreFormat(String blobNameFormat, FromXContentBuilder reader, ParseFieldMatcher parseFieldMatcher) { + protected BlobStoreFormat(String blobNameFormat, CheckedFunction reader, + NamedXContentRegistry namedXContentRegistry) { this.reader = reader; this.blobNameFormat = blobNameFormat; - this.parseFieldMatcher = parseFieldMatcher; + this.namedXContentRegistry = namedXContentRegistry; } /** @@ -109,10 +109,9 @@ protected String blobName(String name) { } protected T read(BytesReference bytes) throws IOException { - try (XContentParser parser = XContentHelper.createParser(bytes)) { - T obj = reader.fromXContent(parser, parseFieldMatcher); + try (XContentParser parser = XContentHelper.createParser(namedXContentRegistry, bytes)) { + T obj = reader.apply(parser); return obj; - } } diff --git a/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java b/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java index 2969a6ee6e3c2..b0bacc553ce8f 100644 --- a/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java +++ b/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java @@ -38,13 +38,14 @@ import org.apache.lucene.util.IOUtils; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.ExceptionsHelper; +import org.elasticsearch.ResourceNotFoundException; import org.elasticsearch.Version; +import org.elasticsearch.cluster.SnapshotsInProgress; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.metadata.RepositoryMetaData; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.common.Numbers; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.Strings; import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.blobstore.BlobContainer; @@ -68,6 +69,7 @@ import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.util.iterable.Iterables; import org.elasticsearch.common.util.set.Sets; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; @@ -93,6 +95,7 @@ import org.elasticsearch.repositories.RepositoryData; import org.elasticsearch.repositories.RepositoryException; import 
org.elasticsearch.repositories.RepositoryVerificationException; +import org.elasticsearch.snapshots.InvalidSnapshotNameException; import org.elasticsearch.snapshots.SnapshotCreationException; import org.elasticsearch.snapshots.SnapshotException; import org.elasticsearch.snapshots.SnapshotId; @@ -100,11 +103,11 @@ import org.elasticsearch.snapshots.SnapshotMissingException; import org.elasticsearch.snapshots.SnapshotShardFailure; -import java.io.FileNotFoundException; import java.io.FilterInputStream; import java.io.IOException; import java.io.InputStream; import java.nio.file.DirectoryNotEmptyException; +import java.nio.file.FileAlreadyExistsException; import java.nio.file.NoSuchFileException; import java.util.ArrayList; import java.util.Collection; @@ -128,8 +131,9 @@ *

      * {@code
      *   STORE_ROOT
    - *   |- index-N           - list of all snapshot name as JSON array, N is the generation of the file
    + *   |- index-N           - list of all snapshot ids and the indices belonging to each snapshot, N is the generation of the file
      *   |- index.latest      - contains the numeric value of the latest generation of the index file (i.e. N from above)
    + *   |- incompatible-snapshots - list of all snapshot ids that are no longer compatible with the current version of the cluster
      *   |- snap-20131010 - JSON serialized Snapshot for snapshot "20131010"
      *   |- meta-20131010.dat - JSON serialized MetaData for snapshot "20131010" (includes only global metadata)
      *   |- snap-20131011 - JSON serialized Snapshot for snapshot "20131011"
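For orientation, the index.latest pointer in the layout above is what lets a reader find the current index-N generation without listing blobs. A minimal sketch of that lookup, assuming the blob stores the generation as a single big-endian long (the helper name is illustrative, not the method this class actually uses):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

import org.elasticsearch.common.blobstore.BlobContainer;

// Illustrative helper: resolve the "index-N" blob name via the "index.latest" pointer.
static String latestIndexBlobName(BlobContainer snapshotsBlobContainer) throws IOException {
    try (InputStream in = snapshotsBlobContainer.readBlob("index.latest");
         DataInputStream data = new DataInputStream(in)) {
        long generation = data.readLong(); // assumption: 8-byte big-endian generation counter
        return "index-" + generation;
    }
}
```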
    @@ -167,6 +171,8 @@ public abstract class BlobStoreRepository extends AbstractLifecycleComponent imp
     
         protected final RepositoryMetaData metadata;
     
    +    protected final NamedXContentRegistry namedXContentRegistry;
    +
         private static final int BUFFER_SIZE = 4096;
     
         private static final String LEGACY_SNAPSHOT_PREFIX = "snapshot-";
    @@ -181,6 +187,8 @@ public abstract class BlobStoreRepository extends AbstractLifecycleComponent imp
     
         private static final String INDEX_LATEST_BLOB = "index.latest";
     
    +    private static final String INCOMPATIBLE_SNAPSHOTS_BLOB = "incompatible-snapshots";
    +
         private static final String TESTS_FILE = "tests-";
     
         private static final String METADATA_NAME_FORMAT = "meta-%s.dat";
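The incompatible-snapshots blob named above is the on-disk counterpart of the getIncompatibleSnapshotIds() accessor added to RepositoryData earlier in this patch. A caller that only wants snapshots the current cluster can still read might filter roughly like this (sketch only; the helper is not part of the patch):

```java
import java.util.List;
import java.util.stream.Collectors;

import org.elasticsearch.repositories.RepositoryData;
import org.elasticsearch.snapshots.SnapshotId;

// Illustrative filter: drop snapshot ids recorded as incompatible with this cluster version.
static List<SnapshotId> readableSnapshots(RepositoryData repositoryData) {
    return repositoryData.getAllSnapshotIds().stream()
            .filter(id -> repositoryData.getIncompatibleSnapshotIds().contains(id) == false)
            .collect(Collectors.toList());
}
```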
    @@ -191,17 +199,17 @@ public abstract class BlobStoreRepository extends AbstractLifecycleComponent imp
     
         private static final String INDEX_METADATA_CODEC = "index-metadata";
     
    -    protected static final String LEGACY_SNAPSHOT_NAME_FORMAT = LEGACY_SNAPSHOT_PREFIX + "%s";
    +    private static final String LEGACY_SNAPSHOT_NAME_FORMAT = LEGACY_SNAPSHOT_PREFIX + "%s";
     
    -    protected static final String SNAPSHOT_NAME_FORMAT = SNAPSHOT_PREFIX + "%s.dat";
    +    private static final String SNAPSHOT_NAME_FORMAT = SNAPSHOT_PREFIX + "%s.dat";
     
    -    protected static final String SNAPSHOT_INDEX_PREFIX = "index-";
    +    private static final String SNAPSHOT_INDEX_PREFIX = "index-";
     
    -    protected static final String SNAPSHOT_INDEX_NAME_FORMAT = SNAPSHOT_INDEX_PREFIX + "%s";
    +    private static final String SNAPSHOT_INDEX_NAME_FORMAT = SNAPSHOT_INDEX_PREFIX + "%s";
     
    -    protected static final String SNAPSHOT_INDEX_CODEC = "snapshots";
    +    private static final String SNAPSHOT_INDEX_CODEC = "snapshots";
     
    -    protected static final String DATA_BLOB_PREFIX = "__";
    +    private static final String DATA_BLOB_PREFIX = "__";
     
         private final RateLimiter snapshotRateLimiter;
     
    @@ -225,8 +233,6 @@ public abstract class BlobStoreRepository extends AbstractLifecycleComponent imp
     
         private final boolean readOnly;
     
    -    private final ParseFieldMatcher parseFieldMatcher;
    -
         private final ChecksumBlobStoreFormat indexShardSnapshotFormat;
     
         private final LegacyBlobStoreFormat indexShardSnapshotLegacyFormat;
    @@ -239,32 +245,39 @@ public abstract class BlobStoreRepository extends AbstractLifecycleComponent imp
          * @param metadata       The metadata for this repository including name and settings
          * @param globalSettings Settings for the node this repository object is created on
          */
    -    protected BlobStoreRepository(RepositoryMetaData metadata, Settings globalSettings) {
    +    protected BlobStoreRepository(RepositoryMetaData metadata, Settings globalSettings, NamedXContentRegistry namedXContentRegistry) {
             super(globalSettings);
             this.metadata = metadata;
    -        parseFieldMatcher = new ParseFieldMatcher(settings);
    +        this.namedXContentRegistry = namedXContentRegistry;
             snapshotRateLimiter = getRateLimiter(metadata.settings(), "max_snapshot_bytes_per_sec", new ByteSizeValue(40, ByteSizeUnit.MB));
             restoreRateLimiter = getRateLimiter(metadata.settings(), "max_restore_bytes_per_sec", new ByteSizeValue(40, ByteSizeUnit.MB));
             readOnly = metadata.settings().getAsBoolean("readonly", false);
    -        indexShardSnapshotFormat = new ChecksumBlobStoreFormat<>(SNAPSHOT_CODEC, SNAPSHOT_NAME_FORMAT, BlobStoreIndexShardSnapshot.PROTO, parseFieldMatcher, isCompress());
    -        indexShardSnapshotLegacyFormat = new LegacyBlobStoreFormat<>(LEGACY_SNAPSHOT_NAME_FORMAT, BlobStoreIndexShardSnapshot.PROTO, parseFieldMatcher);
    -        indexShardSnapshotsFormat = new ChecksumBlobStoreFormat<>(SNAPSHOT_INDEX_CODEC, SNAPSHOT_INDEX_NAME_FORMAT, BlobStoreIndexShardSnapshots.PROTO, parseFieldMatcher, isCompress());
     
    +        indexShardSnapshotFormat = new ChecksumBlobStoreFormat<>(SNAPSHOT_CODEC, SNAPSHOT_NAME_FORMAT,
    +            BlobStoreIndexShardSnapshot::fromXContent, namedXContentRegistry, isCompress());
    +        indexShardSnapshotLegacyFormat = new LegacyBlobStoreFormat<>(LEGACY_SNAPSHOT_NAME_FORMAT,
    +            BlobStoreIndexShardSnapshot::fromXContent, namedXContentRegistry);
    +        indexShardSnapshotsFormat = new ChecksumBlobStoreFormat<>(SNAPSHOT_INDEX_CODEC, SNAPSHOT_INDEX_NAME_FORMAT,
    +            BlobStoreIndexShardSnapshots::fromXContent, namedXContentRegistry, isCompress());
         }
     
         @Override
         protected void doStart() {
             this.snapshotsBlobContainer = blobStore().blobContainer(basePath());
    -
    -        ParseFieldMatcher parseFieldMatcher = new ParseFieldMatcher(settings);
    -        globalMetaDataFormat = new ChecksumBlobStoreFormat<>(METADATA_CODEC, METADATA_NAME_FORMAT, MetaData.PROTO, parseFieldMatcher, isCompress());
    -        globalMetaDataLegacyFormat = new LegacyBlobStoreFormat<>(LEGACY_METADATA_NAME_FORMAT, MetaData.PROTO, parseFieldMatcher);
    -
    -        indexMetaDataFormat = new ChecksumBlobStoreFormat<>(INDEX_METADATA_CODEC, METADATA_NAME_FORMAT, IndexMetaData.PROTO, parseFieldMatcher, isCompress());
    -        indexMetaDataLegacyFormat = new LegacyBlobStoreFormat<>(LEGACY_SNAPSHOT_NAME_FORMAT, IndexMetaData.PROTO, parseFieldMatcher);
    -
    -        snapshotFormat = new ChecksumBlobStoreFormat<>(SNAPSHOT_CODEC, SNAPSHOT_NAME_FORMAT, SnapshotInfo.PROTO, parseFieldMatcher, isCompress());
    -        snapshotLegacyFormat = new LegacyBlobStoreFormat<>(LEGACY_SNAPSHOT_NAME_FORMAT, SnapshotInfo.PROTO, parseFieldMatcher);
    +        globalMetaDataFormat = new ChecksumBlobStoreFormat<>(METADATA_CODEC, METADATA_NAME_FORMAT,
    +            MetaData::fromXContent, namedXContentRegistry, isCompress());
    +        globalMetaDataLegacyFormat = new LegacyBlobStoreFormat<>(LEGACY_METADATA_NAME_FORMAT,
    +            MetaData::fromXContent, namedXContentRegistry);
    +
    +        indexMetaDataFormat = new ChecksumBlobStoreFormat<>(INDEX_METADATA_CODEC, METADATA_NAME_FORMAT,
    +            IndexMetaData::fromXContent, namedXContentRegistry, isCompress());
    +        indexMetaDataLegacyFormat = new LegacyBlobStoreFormat<>(LEGACY_SNAPSHOT_NAME_FORMAT,
    +            IndexMetaData::fromXContent, namedXContentRegistry);
    +
    +        snapshotFormat = new ChecksumBlobStoreFormat<>(SNAPSHOT_CODEC, SNAPSHOT_NAME_FORMAT,
    +            SnapshotInfo::fromXContent, namedXContentRegistry, isCompress());
    +        snapshotLegacyFormat = new LegacyBlobStoreFormat<>(LEGACY_SNAPSHOT_NAME_FORMAT,
    +            SnapshotInfo::fromXContent, namedXContentRegistry);
         }
     
         @Override
    @@ -322,12 +335,13 @@ public void initializeSnapshot(SnapshotId snapshotId, List indices, Met
             try {
                 final String snapshotName = snapshotId.getName();
                 // check if the snapshot name already exists in the repository
    -            if (getSnapshots().stream().anyMatch(s -> s.getName().equals(snapshotName))) {
    -                throw new SnapshotCreationException(metadata.name(), snapshotId, "snapshot with the same name already exists");
    +            final RepositoryData repositoryData = getRepositoryData();
    +            if (repositoryData.getAllSnapshotIds().stream().anyMatch(s -> s.getName().equals(snapshotName))) {
    +                throw new InvalidSnapshotNameException(metadata.name(), snapshotId.getName(), "snapshot with the same name already exists");
                 }
                 if (snapshotFormat.exists(snapshotsBlobContainer, snapshotId.getUUID()) ||
                         snapshotLegacyFormat.exists(snapshotsBlobContainer, snapshotName)) {
    -                throw new SnapshotCreationException(metadata.name(), snapshotId, "snapshot with such name already exists");
    +                throw new InvalidSnapshotNameException(metadata.name(), snapshotId.getName(), "snapshot with the same name already exists");
                 }
     
                 // Write Global MetaData
    @@ -348,8 +362,9 @@ public void initializeSnapshot(SnapshotId snapshotId, List indices, Met
         // Older repository index files (index-N) only contain snapshot info, not indices info,
         // so if the repository data is of the older format, populate it with the indices entries
         // so we know which indices of snapshots have blob ids in the older format.
    -    private RepositoryData upgradeRepositoryData(final RepositoryData repositoryData) throws IOException {
    +    private RepositoryData upgradeRepositoryData(RepositoryData repositoryData) throws IOException {
              final Map<IndexId, Set<SnapshotId>> indexToSnapshots = new HashMap<>();
     +        final List<SnapshotId> incompatibleSnapshots = new ArrayList<>();
             for (final SnapshotId snapshotId : repositoryData.getSnapshotIds()) {
                 final SnapshotInfo snapshotInfo;
                 try {
    @@ -358,7 +373,18 @@ private RepositoryData upgradeRepositoryData(final RepositoryData repositoryData
                     logger.warn((Supplier) () -> new ParameterizedMessage("[{}] repository is on a pre-5.0 format with an index file that contains snapshot [{}] but " +
                             "the corresponding snap-{}.dat file cannot be read. The snapshot will no longer be included in " +
                             "the repository but its data directories will remain.", getMetadata().name(), snapshotId, snapshotId.getUUID()), e);
    +                incompatibleSnapshots.add(snapshotId);
                     continue;
    +            } catch (IllegalStateException e) {
    +                if (e.getMessage().startsWith("unsupported compression")) {
    +                    logger.warn((Supplier) () -> new ParameterizedMessage("[{}] attempting to upgrade a pre-5.0 repository " +
    +                        "with compression turned on, and snapshot [{}] was taken from a 1.x instance that used a no longer supported " +
    +                        "compression format.", getMetadata().name(), snapshotId.getName()), e);
    +                    incompatibleSnapshots.add(snapshotId);
    +                    continue;
    +                } else {
    +                    throw e;
    +                }
                 }
                 for (final String indexName : snapshotInfo.indices()) {
                     final IndexId indexId = new IndexId(indexName, indexName);
    @@ -370,10 +396,12 @@ private RepositoryData upgradeRepositoryData(final RepositoryData repositoryData
                 }
             }
             try {
    -            final RepositoryData updatedRepoData = repositoryData.initIndices(indexToSnapshots);
    +            final RepositoryData updatedRepoData = repositoryData.addIncompatibleSnapshots(incompatibleSnapshots)
    +                                                       .initIndices(indexToSnapshots);
                 if (isReadOnly() == false) {
                     // write the new index gen file with the indices included
    -                writeIndexGen(updatedRepoData);
    +                writeIndexGen(updatedRepoData, updatedRepoData.getGenId());
    +                writeIncompatibleSnapshots(updatedRepoData);
                 }
                 return updatedRepoData;
             } catch (IOException e) {
    @@ -382,7 +410,7 @@ private RepositoryData upgradeRepositoryData(final RepositoryData repositoryData
         }
     
         @Override
    -    public void deleteSnapshot(SnapshotId snapshotId) {
    +    public void deleteSnapshot(SnapshotId snapshotId, long repositoryStateId) {
             if (isReadOnly()) {
                 throw new RepositoryException(metadata.name(), "cannot delete snapshot from a readonly repository");
             }
    @@ -395,7 +423,7 @@ public void deleteSnapshot(SnapshotId snapshotId) {
             } catch (SnapshotMissingException ex) {
                 throw ex;
             } catch (IllegalStateException | SnapshotException | ElasticsearchParseException ex) {
    -            logger.warn("cannot read snapshot file [{}]", ex, snapshotId);
    +            logger.warn((Supplier) () -> new ParameterizedMessage("cannot read snapshot file [{}]", snapshotId), ex);
             }
             MetaData metaData = null;
             try {
    @@ -405,12 +433,12 @@ public void deleteSnapshot(SnapshotId snapshotId) {
                     metaData = readSnapshotMetaData(snapshotId, null, repositoryData.resolveIndices(indices), true);
                 }
             } catch (IOException | SnapshotException ex) {
    -            logger.warn("cannot read metadata for snapshot [{}]", ex, snapshotId);
    +            logger.warn((Supplier) () -> new ParameterizedMessage("cannot read metadata for snapshot [{}]", snapshotId), ex);
             }
             try {
                 // Delete snapshot from the index file, since it is the maintainer of truth of active snapshots
                 final RepositoryData updatedRepositoryData = repositoryData.removeSnapshot(snapshotId);
    -            writeIndexGen(updatedRepositoryData);
    +            writeIndexGen(updatedRepositoryData, repositoryStateId);
     
                 // delete the snapshot file
                 safeSnapshotBlobDelete(snapshot, snapshotId.getUUID());
    @@ -462,8 +490,8 @@ public void deleteSnapshot(SnapshotId snapshotId) {
                                 "its index folder.", metadata.name(), indexId), ioe);
                     }
                 }
    -        } catch (IOException ex) {
    -            throw new RepositoryException(metadata.name(), "failed to update snapshot in repository", ex);
    +        } catch (IOException | ResourceNotFoundException ex) {
    +            throw new RepositoryException(metadata.name(), "failed to delete snapshot [" + snapshotId + "]", ex);
             }
         }
     
    @@ -524,29 +552,26 @@ public SnapshotInfo finalizeSnapshot(final SnapshotId snapshotId,
                                              final long startTime,
                                              final String failure,
                                              final int totalShards,
     -                                         final List<SnapshotShardFailure> shardFailures) {
     +                                         final List<SnapshotShardFailure> shardFailures,
    +                                         final long repositoryStateId) {
    +
    +        SnapshotInfo blobStoreSnapshot = new SnapshotInfo(snapshotId,
    +            indices.stream().map(IndexId::getName).collect(Collectors.toList()),
    +            startTime, failure, System.currentTimeMillis(), totalShards, shardFailures);
             try {
    -            SnapshotInfo blobStoreSnapshot = new SnapshotInfo(snapshotId,
    -                                                              indices.stream().map(IndexId::getName).collect(Collectors.toList()),
    -                                                              startTime,
    -                                                              failure,
    -                                                              System.currentTimeMillis(),
    -                                                              totalShards,
    -                                                              shardFailures);
                 snapshotFormat.write(blobStoreSnapshot, snapshotsBlobContainer, snapshotId.getUUID());
                 final RepositoryData repositoryData = getRepositoryData();
    -            List snapshotIds = repositoryData.getSnapshotIds();
    -            if (!snapshotIds.contains(snapshotId)) {
    -                writeIndexGen(repositoryData.addSnapshot(snapshotId, indices));
    -            }
    -            return blobStoreSnapshot;
    +            writeIndexGen(repositoryData.addSnapshot(snapshotId, blobStoreSnapshot.state(), indices), repositoryStateId);
    +        } catch (FileAlreadyExistsException ex) {
     +            // if another master was elected and took over finalizing the snapshot, it is possible
     +            // that both nodes try to finalize the snapshot and write to the same blobs; surface the
     +            // collision as a repository exception so the caller knows the snapshot has already been saved
    +            throw new RepositoryException(metadata.name(), "Blob already exists while " +
    +                "finalizing snapshot, assume the snapshot has already been saved", ex);
             } catch (IOException ex) {
                 throw new RepositoryException(metadata.name(), "failed to update snapshot in repository", ex);
             }
    -    }
    -
    -    public List getSnapshots() {
    -        return getRepositoryData().getSnapshotIds();
    +        return blobStoreSnapshot;
         }
     
         @Override
    @@ -558,11 +583,11 @@ public MetaData getSnapshotMetaData(SnapshotInfo snapshot, List indices
         public SnapshotInfo getSnapshotInfo(final SnapshotId snapshotId) {
             try {
                 return snapshotFormat.read(snapshotsBlobContainer, snapshotId.getUUID());
    -        } catch (FileNotFoundException | NoSuchFileException ex) {
    +        } catch (NoSuchFileException ex) {
                 // File is missing - let's try legacy format instead
                 try {
                     return snapshotLegacyFormat.read(snapshotsBlobContainer, snapshotId.getName());
    -            } catch (FileNotFoundException | NoSuchFileException ex1) {
    +            } catch (NoSuchFileException ex1) {
                     throw new SnapshotMissingException(metadata.name(), snapshotId, ex);
                 } catch (IOException | NotXContentException ex1) {
                     throw new SnapshotException(metadata.name(), snapshotId, "failed to get snapshots", ex1);
    @@ -588,7 +613,7 @@ private MetaData readSnapshotMetaData(SnapshotId snapshotId, Version snapshotVer
             }
             try {
                 metaData = globalMetaDataFormat(snapshotVersion).read(snapshotsBlobContainer, snapshotId.getUUID());
    -        } catch (FileNotFoundException | NoSuchFileException ex) {
    +        } catch (NoSuchFileException ex) {
                 throw new SnapshotMissingException(metadata.name(), snapshotId, ex);
             } catch (IOException ex) {
                 throw new SnapshotException(metadata.name(), snapshotId, "failed to get snapshots", ex);
    @@ -601,7 +626,7 @@ private MetaData readSnapshotMetaData(SnapshotId snapshotId, Version snapshotVer
                     metaDataBuilder.put(indexMetaDataFormat(snapshotVersion).read(indexMetaDataBlobContainer, snapshotId.getUUID()), false);
                 } catch (ElasticsearchParseException | IOException ex) {
                     if (ignoreIndexErrors) {
    -                    logger.warn("[{}] [{}] failed to read metadata for index", ex, snapshotId, index.getName());
    +                    logger.warn((Supplier) () -> new ParameterizedMessage("[{}] [{}] failed to read metadata for index", snapshotId, index.getName()), ex);
                     } else {
                         throw ex;
                     }
    @@ -621,10 +646,10 @@ private MetaData readSnapshotMetaData(SnapshotId snapshotId, Version snapshotVer
         private RateLimiter getRateLimiter(Settings repositorySettings, String setting, ByteSizeValue defaultRate) {
             ByteSizeValue maxSnapshotBytesPerSec = repositorySettings.getAsBytesSize(setting,
                     settings.getAsBytesSize(setting, defaultRate));
    -        if (maxSnapshotBytesPerSec.bytes() <= 0) {
    +        if (maxSnapshotBytesPerSec.getBytes() <= 0) {
                 return null;
             } else {
    -            return new RateLimiter.SimpleRateLimiter(maxSnapshotBytesPerSec.mbFrac());
    +            return new RateLimiter.SimpleRateLimiter(maxSnapshotBytesPerSec.getMbFrac());
             }
         }
     
    @@ -735,8 +760,33 @@ public RepositoryData getRepositoryData() {
                 try (InputStream blob = snapshotsBlobContainer.readBlob(snapshotsIndexBlobName)) {
                     BytesStreamOutput out = new BytesStreamOutput();
                     Streams.copy(blob, out);
    -                try (XContentParser parser = XContentHelper.createParser(out.bytes())) {
    -                    repositoryData = RepositoryData.fromXContent(parser);
     +                // EMPTY is safe here because RepositoryData#fromXContent does not call namedObject
    +                try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, out.bytes())) {
    +                    repositoryData = RepositoryData.snapshotsFromXContent(parser, indexGen);
    +                } catch (NotXContentException e) {
    +                    logger.warn("[{}] index blob is not valid x-content [{} bytes]", snapshotsIndexBlobName, out.bytes().length());
    +                    throw e;
    +                }
    +            }
    +
    +            // now load the incompatible snapshot ids, if they exist
    +            try (InputStream blob = snapshotsBlobContainer.readBlob(INCOMPATIBLE_SNAPSHOTS_BLOB)) {
    +                BytesStreamOutput out = new BytesStreamOutput();
    +                Streams.copy(blob, out);
    +                try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, out.bytes())) {
    +                    repositoryData = repositoryData.incompatibleSnapshotsFromXContent(parser);
    +                }
    +            } catch (NoSuchFileException e) {
    +                if (isReadOnly()) {
    +                    logger.debug("[{}] Incompatible snapshots blob [{}] does not exist, the likely " +
    +                                 "reason is that there are no incompatible snapshots in the repository",
    +                                 metadata.name(), INCOMPATIBLE_SNAPSHOTS_BLOB);
    +                } else {
    +                    // write an empty incompatible-snapshots blob - we do this so that there
    +                    // is a blob present, which helps speed up some cloud-based repositories
    +                    // (e.g. S3), which retry if a blob is missing with exponential backoff,
    +                    // delaying the read of repository data and sometimes causing a timeout
    +                    writeIncompatibleSnapshots(RepositoryData.EMPTY);
                     }
                 }
                 if (legacyFormat) {
    @@ -744,7 +794,7 @@ public RepositoryData getRepositoryData() {
                     repositoryData = upgradeRepositoryData(repositoryData);
                 }
                 return repositoryData;
    -        } catch (NoSuchFileException nsfe) {
    +        } catch (NoSuchFileException ex) {
              // repository doesn't have an index blob, it's a new blank repo
                 return RepositoryData.EMPTY;
             } catch (IOException ioe) {
    @@ -766,44 +816,74 @@ BlobContainer blobContainer() {
             return snapshotsBlobContainer;
         }
     
    -    protected void writeIndexGen(final RepositoryData repositoryData) throws IOException {
    +    protected void writeIndexGen(final RepositoryData repositoryData, final long repositoryStateId) throws IOException {
             assert isReadOnly() == false; // can not write to a read only repository
    +        final long currentGen = latestIndexBlobId();
    +        if (repositoryStateId != SnapshotsInProgress.UNDEFINED_REPOSITORY_STATE_ID && currentGen != repositoryStateId) {
    +            // the index file was updated by a concurrent operation, so we were operating on stale
    +            // repository data
    +            throw new RepositoryException(metadata.name(), "concurrent modification of the index-N file, expected current generation [" +
    +                                              repositoryStateId + "], actual current generation [" + currentGen +
    +                                              "] - possibly due to simultaneous snapshot deletion requests");
    +        }
    +        final long newGen = currentGen + 1;
             final BytesReference snapshotsBytes;
             try (BytesStreamOutput bStream = new BytesStreamOutput()) {
                 try (StreamOutput stream = new OutputStreamStreamOutput(bStream)) {
                     XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON, stream);
    -                repositoryData.toXContent(builder, ToXContent.EMPTY_PARAMS);
    +                repositoryData.snapshotsToXContent(builder, ToXContent.EMPTY_PARAMS);
                     builder.close();
                 }
                 snapshotsBytes = bStream.bytes();
             }
    -        final long gen = latestIndexBlobId() + 1;
             // write the index file
    -        writeAtomic(INDEX_FILE_PREFIX + Long.toString(gen), snapshotsBytes);
    +        final String indexBlob = INDEX_FILE_PREFIX + Long.toString(newGen);
    +        logger.debug("Repository [{}] writing new index generational blob [{}]", metadata.name(), indexBlob);
    +        writeAtomic(indexBlob, snapshotsBytes);
             // delete the N-2 index file if it exists, keep the previous one around as a backup
    -        if (isReadOnly() == false && gen - 2 >= 0) {
    -            final String oldSnapshotIndexFile = INDEX_FILE_PREFIX + Long.toString(gen - 2);
    +        if (isReadOnly() == false && newGen - 2 >= 0) {
    +            final String oldSnapshotIndexFile = INDEX_FILE_PREFIX + Long.toString(newGen - 2);
                 if (snapshotsBlobContainer.blobExists(oldSnapshotIndexFile)) {
                     snapshotsBlobContainer.deleteBlob(oldSnapshotIndexFile);
                 }
    -            // delete the old index file (non-generational) if it exists
    -            if (snapshotsBlobContainer.blobExists(SNAPSHOTS_FILE)) {
    -                snapshotsBlobContainer.deleteBlob(SNAPSHOTS_FILE);
    -            }
             }
     
             // write the current generation to the index-latest file
             final BytesReference genBytes;
             try (BytesStreamOutput bStream = new BytesStreamOutput()) {
    -            bStream.writeLong(gen);
    +            bStream.writeLong(newGen);
                 genBytes = bStream.bytes();
             }
             if (snapshotsBlobContainer.blobExists(INDEX_LATEST_BLOB)) {
                 snapshotsBlobContainer.deleteBlob(INDEX_LATEST_BLOB);
             }
    +        logger.debug("Repository [{}] updating index.latest with generation [{}]", metadata.name(), newGen);
             writeAtomic(INDEX_LATEST_BLOB, genBytes);
         }
     
    +    /**
    +     * Writes the incompatible snapshot ids list to the `incompatible-snapshots` blob in the repository.
    +     *
    +     * Package private for testing.
    +     */
    +    void writeIncompatibleSnapshots(RepositoryData repositoryData) throws IOException {
    +        assert isReadOnly() == false; // can not write to a read only repository
    +        final BytesReference bytes;
    +        try (BytesStreamOutput bStream = new BytesStreamOutput()) {
    +            try (StreamOutput stream = new OutputStreamStreamOutput(bStream)) {
    +                XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON, stream);
    +                repositoryData.incompatibleSnapshotsToXContent(builder, ToXContent.EMPTY_PARAMS);
    +                builder.close();
    +            }
    +            bytes = bStream.bytes();
    +        }
    +        if (snapshotsBlobContainer.blobExists(INCOMPATIBLE_SNAPSHOTS_BLOB)) {
    +            snapshotsBlobContainer.deleteBlob(INCOMPATIBLE_SNAPSHOTS_BLOB);
    +        }
    +        // write the incompatible snapshots blob
    +        writeAtomic(INCOMPATIBLE_SNAPSHOTS_BLOB, bytes);
    +    }
    +
         /**
          * Get the latest snapshot index blob id.  Snapshot index blobs are named index-N, where N is
          * the next version number from when the index blob was written.  Each individual index-N blob is
    @@ -814,24 +894,25 @@ protected void writeIndexGen(final RepositoryData repositoryData) throws IOExcep
          */
         long latestIndexBlobId() throws IOException {
             try {
    -            // first, try listing the blobs and determining which index blob is the latest
    +            // First, try listing all index-N blobs (there should only be two index-N blobs at any given
    +            // time in a repository if cleanup is happening properly) and pick the index-N blob with the
    +            // highest N value - this will be the latest index blob for the repository.  Note, we do this
    +            // instead of directly reading the index.latest blob to get the current index-N blob because
    +            // index.latest is not written atomically and is not immutable - on every index-N change,
    +            // we first delete the old index.latest and then write the new one.  If the repository is not
    +            // read-only, it is possible that we try deleting the index.latest blob while it is being read
    +            // by some other operation (such as the get snapshots operation).  In some file systems, it is
    +            // illegal to delete a file while it is being read elsewhere (e.g. Windows).  For read-only
     +            // repositories, we read index.latest, both because listing blob prefixes is often unsupported
    +            // and because the index.latest blob will never be deleted and re-written.
                 return listBlobsToGetLatestIndexId();
             } catch (UnsupportedOperationException e) {
    -            // could not list the blobs because the repository does not support the operation,
    -            // try reading from the index-latest file
    +            // If its a read-only repository, listing blobs by prefix may not be supported (e.g. a URL repository),
    +            // in this case, try reading the latest index generation from the index.latest blob
                 try {
                     return readSnapshotIndexLatestBlob();
    -            } catch (IOException ioe) {
    -                // we likely could not find the blob, this can happen in two scenarios:
    -                //  (1) its an empty repository
    -                //  (2) when writing the index-latest blob, if the blob already exists,
    -                //      we first delete it, then atomically write the new blob.  there is
    -                //      a small window in time when the blob is deleted and the new one
    -                //      written - if the node crashes during that time, we won't have an
    -                //      index-latest blob
    -                // in a read-only repository, we can't know which of the two scenarios it is,
    -                // but we will assume (1) because we can't do anything about (2) anyway
    -                return -1;
    +            } catch (NoSuchFileException nsfe) {
    +                return RepositoryData.EMPTY_REPO_GEN;
                 }
             }
         }
    @@ -847,7 +928,7 @@ long readSnapshotIndexLatestBlob() throws IOException {
     
         private long listBlobsToGetLatestIndexId() throws IOException {
             Map blobs = snapshotsBlobContainer.listBlobsByPrefix(INDEX_FILE_PREFIX);
    -        long latest = -1;
    +        long latest = RepositoryData.EMPTY_REPO_GEN;
             if (blobs.isEmpty()) {
                 // no snapshot index blobs have been written yet
                 return latest;
    @@ -867,21 +948,21 @@ private long listBlobsToGetLatestIndexId() throws IOException {
         }
     
         private void writeAtomic(final String blobName, final BytesReference bytesRef) throws IOException {
    -        final String tempBlobName = "pending-" + blobName;
    +        final String tempBlobName = "pending-" + blobName + "-" + UUIDs.randomBase64UUID();
             try (InputStream stream = bytesRef.streamInput()) {
                 snapshotsBlobContainer.writeBlob(tempBlobName, stream, bytesRef.length());
    -        }
    -        try {
                 snapshotsBlobContainer.move(tempBlobName, blobName);
             } catch (IOException ex) {
    -            // Move failed - try cleaning up
    -            snapshotsBlobContainer.deleteBlob(tempBlobName);
    +            // temporary blob creation or move failed - try cleaning up
    +            try {
    +                snapshotsBlobContainer.deleteBlob(tempBlobName);
    +            } catch (IOException e) {
    +                ex.addSuppressed(e);
    +            }
                 throw ex;
             }
         }
     
    -
    -
         @Override
         public void snapshotShard(IndexShard shard, SnapshotId snapshotId, IndexId indexId, IndexCommit snapshotIndexCommit, IndexShardSnapshotStatus snapshotStatus) {
             SnapshotContext snapshotContext = new SnapshotContext(shard, snapshotId, indexId, snapshotStatus);
    @@ -986,11 +1067,11 @@ private class Context {
     
             protected final Version version;
     
    -        public Context(SnapshotId snapshotId, Version version, IndexId indexId, ShardId shardId) {
    +        Context(SnapshotId snapshotId, Version version, IndexId indexId, ShardId shardId) {
                 this(snapshotId, version, indexId, shardId, shardId);
             }
     
    -        public Context(SnapshotId snapshotId, Version version, IndexId indexId, ShardId shardId, ShardId snapshotShardId) {
    +        Context(SnapshotId snapshotId, Version version, IndexId indexId, ShardId shardId, ShardId snapshotShardId) {
                 this.snapshotId = snapshotId;
                 this.version = version;
                 this.shardId = shardId;
    @@ -1036,7 +1117,7 @@ public BlobStoreIndexShardSnapshot loadSnapshot() {
                 try {
                     return indexShardSnapshotFormat(version).read(blobContainer, snapshotId.getUUID());
                 } catch (IOException ex) {
    -                throw new IndexShardRestoreFailedException(shardId, "failed to read shard snapshot file", ex);
    +                throw new SnapshotException(metadata.name(), snapshotId, "failed to read shard snapshot file for " + shardId, ex);
                 }
             }
     
    @@ -1135,7 +1216,8 @@ protected long findLatestFileNameGeneration(Map blobs) {
              */
             protected Tuple buildBlobStoreIndexShardSnapshots(Map blobs) {
                 int latest = -1;
    -            for (String name : blobs.keySet()) {
     +            Set<String> blobKeys = blobs.keySet();
    +            for (String name : blobKeys) {
                     if (name.startsWith(SNAPSHOT_INDEX_PREFIX)) {
                         try {
                             int gen = Integer.parseInt(name.substring(SNAPSHOT_INDEX_PREFIX.length()));
    @@ -1156,15 +1238,17 @@ protected Tuple buildBlobStoreIndexShardS
                         final String file = SNAPSHOT_INDEX_PREFIX + latest;
                         logger.warn((Supplier) () -> new ParameterizedMessage("failed to read index file [{}]", file), e);
                     }
    +            } else if (blobKeys.isEmpty() == false) {
    +                logger.debug("Could not find a readable index-N file in a non-empty shard snapshot directory [{}]", blobContainer.path());
                 }
     
                 // We couldn't load the index file - falling back to loading individual snapshots
                 List snapshots = new ArrayList<>();
    -            for (String name : blobs.keySet()) {
    +            for (String name : blobKeys) {
                     try {
                         BlobStoreIndexShardSnapshot snapshot = null;
                         if (name.startsWith(SNAPSHOT_PREFIX)) {
    -                        snapshot = indexShardSnapshotFormat.readBlob(blobContainer, snapshotId.getUUID());
    +                        snapshot = indexShardSnapshotFormat.readBlob(blobContainer, name);
                         } else if (name.startsWith(LEGACY_SNAPSHOT_PREFIX)) {
                             snapshot = indexShardSnapshotLegacyFormat.readBlob(blobContainer, name);
                         }
    @@ -1196,7 +1280,7 @@ private class SnapshotContext extends Context {
              * @param indexId        the id of the index being snapshotted
              * @param snapshotStatus snapshot status to report progress
              */
    -        public SnapshotContext(IndexShard shard, SnapshotId snapshotId, IndexId indexId, IndexShardSnapshotStatus snapshotStatus) {
    +        SnapshotContext(IndexShard shard, SnapshotId snapshotId, IndexId indexId, IndexShardSnapshotStatus snapshotStatus) {
                 super(snapshotId, Version.CURRENT, indexId, shard.shardId());
                 this.snapshotStatus = snapshotStatus;
                 this.store = shard.store();
    @@ -1399,7 +1483,7 @@ private boolean snapshotFileExistsInBlobs(BlobStoreIndexShardSnapshot.FileInfo f
             private class AbortableInputStream extends FilterInputStream {
                 private final String fileName;
     
    -            public AbortableInputStream(InputStream delegate, String fileName) {
    +            AbortableInputStream(InputStream delegate, String fileName) {
                     super(delegate);
                     this.fileName = fileName;
                 }
    @@ -1437,7 +1521,7 @@ private static void maybeRecalculateMetadataHash(final BlobContainer blobContain
                     // we have a hash - check if our repo has a hash too otherwise we have
                     // to calculate it.
                     // we might have multiple parts even though the file is small... make sure we read all of it.
    -                try (final InputStream stream = new PartSliceStream(blobContainer, fileInfo)) {
    +                try (InputStream stream = new PartSliceStream(blobContainer, fileInfo)) {
                         BytesRefBuilder builder = new BytesRefBuilder();
                         Store.MetadataSnapshot.hashFile(builder, stream, fileInfo.length());
                         BytesRef hash = fileInfo.metadata().hash(); // reset the file infos metadata hash
    @@ -1455,7 +1539,7 @@ private static final class PartSliceStream extends SlicedInputStream {
             private final BlobContainer container;
             private final BlobStoreIndexShardSnapshot.FileInfo info;
     
    -        public PartSliceStream(BlobContainer container, BlobStoreIndexShardSnapshot.FileInfo info) {
    +        PartSliceStream(BlobContainer container, BlobStoreIndexShardSnapshot.FileInfo info) {
                 super(info.numberOfParts());
                 this.info = info;
                 this.container = container;
    @@ -1485,7 +1569,7 @@ private class RestoreContext extends Context {
              * @param snapshotShardId shard in the snapshot that data should be restored from
              * @param recoveryState   recovery state to report progress
              */
    -        public RestoreContext(IndexShard shard, SnapshotId snapshotId, Version version, IndexId indexId, ShardId snapshotShardId, RecoveryState recoveryState) {
    +        RestoreContext(IndexShard shard, SnapshotId snapshotId, Version version, IndexId indexId, ShardId snapshotShardId, RecoveryState recoveryState) {
                 super(snapshotId, version, indexId, shard.shardId(), snapshotShardId);
                 this.recoveryState = recoveryState;
                 this.targetShard = shard;
    @@ -1647,7 +1731,7 @@ private void restoreFile(final BlobStoreIndexShardSnapshot.FileInfo fileInfo, fi
                         stream = new RateLimitingInputStream(partSliceStream, restoreRateLimiter, restoreRateLimitingTimeInNanos::inc);
                     }
     
    -                try (final IndexOutput indexOutput = store.createVerifyingOutput(fileInfo.physicalName(), fileInfo.metadata(), IOContext.DEFAULT)) {
    +                try (IndexOutput indexOutput = store.createVerifyingOutput(fileInfo.physicalName(), fileInfo.metadata(), IOContext.DEFAULT)) {
                         final byte[] buffer = new byte[BUFFER_SIZE];
                         int length;
                         while ((length = stream.read(buffer)) > 0) {
    diff --git a/core/src/main/java/org/elasticsearch/repositories/blobstore/ChecksumBlobStoreFormat.java b/core/src/main/java/org/elasticsearch/repositories/blobstore/ChecksumBlobStoreFormat.java
    index 17fad25e610eb..0cb38d9976d2d 100644
    --- a/core/src/main/java/org/elasticsearch/repositories/blobstore/ChecksumBlobStoreFormat.java
    +++ b/core/src/main/java/org/elasticsearch/repositories/blobstore/ChecksumBlobStoreFormat.java
    @@ -23,7 +23,7 @@
     import org.apache.lucene.index.IndexFormatTooNewException;
     import org.apache.lucene.index.IndexFormatTooOldException;
     import org.apache.lucene.store.OutputStreamIndexOutput;
    -import org.elasticsearch.common.ParseFieldMatcher;
    +import org.elasticsearch.common.CheckedFunction;
     import org.elasticsearch.common.blobstore.BlobContainer;
     import org.elasticsearch.common.bytes.BytesArray;
     import org.elasticsearch.common.bytes.BytesReference;
    @@ -33,10 +33,11 @@
     import org.elasticsearch.common.io.stream.StreamOutput;
     import org.elasticsearch.common.lucene.store.ByteArrayIndexInput;
     import org.elasticsearch.common.lucene.store.IndexOutputOutputStream;
    -import org.elasticsearch.common.xcontent.FromXContentBuilder;
    +import org.elasticsearch.common.xcontent.NamedXContentRegistry;
     import org.elasticsearch.common.xcontent.ToXContent;
     import org.elasticsearch.common.xcontent.XContentBuilder;
     import org.elasticsearch.common.xcontent.XContentFactory;
    +import org.elasticsearch.common.xcontent.XContentParser;
     import org.elasticsearch.common.xcontent.XContentType;
     import org.elasticsearch.gateway.CorruptStateException;
     
    @@ -73,8 +74,9 @@ public class ChecksumBlobStoreFormat extends BlobStoreForm
          * @param compress       true if the content should be compressed
          * @param xContentType   content type that should be used for write operations
          */
    -    public ChecksumBlobStoreFormat(String codec, String blobNameFormat, FromXContentBuilder reader, ParseFieldMatcher parseFieldMatcher, boolean compress, XContentType xContentType) {
    -        super(blobNameFormat, reader, parseFieldMatcher);
     +    public ChecksumBlobStoreFormat(String codec, String blobNameFormat, CheckedFunction<XContentParser, T, IOException> reader,
    +                                   NamedXContentRegistry namedXContentRegistry, boolean compress, XContentType xContentType) {
    +        super(blobNameFormat, reader, namedXContentRegistry);
             this.xContentType = xContentType;
             this.compress = compress;
             this.codec = codec;
    @@ -86,8 +88,9 @@ public ChecksumBlobStoreFormat(String codec, String blobNameFormat, FromXContent
          * @param reader         prototype object that can deserialize T from XContent
          * @param compress       true if the content should be compressed
          */
    -    public ChecksumBlobStoreFormat(String codec, String blobNameFormat, FromXContentBuilder reader, ParseFieldMatcher parseFieldMatcher, boolean compress) {
    -        this(codec, blobNameFormat, reader, parseFieldMatcher, compress, DEFAULT_X_CONTENT_TYPE);
     +    public ChecksumBlobStoreFormat(String codec, String blobNameFormat, CheckedFunction<XContentParser, T, IOException> reader,
    +                                   NamedXContentRegistry namedXContentRegistry, boolean compress) {
    +        this(codec, blobNameFormat, reader, namedXContentRegistry, compress, DEFAULT_X_CONTENT_TYPE);
         }
     
         /**
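
For orientation, the reworked constructor is wired up by `BlobStoreRepository` as shown earlier in this patch; the sketch below just restates that usage in one place (variable names are illustrative):

```java
// Sketch only: a CheckedFunction reader plus a NamedXContentRegistry replace the
// old FromXContentBuilder prototype and ParseFieldMatcher pair.
ChecksumBlobStoreFormat<SnapshotInfo> snapshotFormat = new ChecksumBlobStoreFormat<>(
    SNAPSHOT_CODEC, SNAPSHOT_NAME_FORMAT,
    SnapshotInfo::fromXContent,   // reader invoked with an XContentParser
    namedXContentRegistry,
    isCompress());
SnapshotInfo info = snapshotFormat.read(snapshotsBlobContainer, snapshotId.getUUID());
```
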
    diff --git a/core/src/main/java/org/elasticsearch/repositories/blobstore/LegacyBlobStoreFormat.java b/core/src/main/java/org/elasticsearch/repositories/blobstore/LegacyBlobStoreFormat.java
    index 206371261e7f3..7ea711d4df620 100644
    --- a/core/src/main/java/org/elasticsearch/repositories/blobstore/LegacyBlobStoreFormat.java
    +++ b/core/src/main/java/org/elasticsearch/repositories/blobstore/LegacyBlobStoreFormat.java
    @@ -18,12 +18,13 @@
      */
     package org.elasticsearch.repositories.blobstore;
     
    -import org.elasticsearch.common.ParseFieldMatcher;
    +import org.elasticsearch.common.CheckedFunction;
     import org.elasticsearch.common.blobstore.BlobContainer;
     import org.elasticsearch.common.io.Streams;
     import org.elasticsearch.common.io.stream.BytesStreamOutput;
    -import org.elasticsearch.common.xcontent.FromXContentBuilder;
    +import org.elasticsearch.common.xcontent.NamedXContentRegistry;
     import org.elasticsearch.common.xcontent.ToXContent;
    +import org.elasticsearch.common.xcontent.XContentParser;
     
     import java.io.IOException;
     import java.io.InputStream;
    @@ -37,8 +38,9 @@ public class LegacyBlobStoreFormat extends BlobStoreFormat
          * @param blobNameFormat format of the blobname in {@link String#format} format
          * @param reader the prototype object that can deserialize objects with type T
          */
    -    public LegacyBlobStoreFormat(String blobNameFormat, FromXContentBuilder reader, ParseFieldMatcher parseFieldMatcher) {
    -        super(blobNameFormat, reader, parseFieldMatcher);
     +    public LegacyBlobStoreFormat(String blobNameFormat, CheckedFunction<XContentParser, T, IOException> reader,
    +                                 NamedXContentRegistry namedXContentRegistry) {
    +        super(blobNameFormat, reader, namedXContentRegistry);
         }
     
         /**
    diff --git a/core/src/main/java/org/elasticsearch/repositories/fs/FsRepository.java b/core/src/main/java/org/elasticsearch/repositories/fs/FsRepository.java
    index c028913d343db..b490a2e784dc1 100644
    --- a/core/src/main/java/org/elasticsearch/repositories/fs/FsRepository.java
    +++ b/core/src/main/java/org/elasticsearch/repositories/fs/FsRepository.java
    @@ -26,6 +26,7 @@
     import org.elasticsearch.common.settings.Setting;
     import org.elasticsearch.common.settings.Setting.Property;
     import org.elasticsearch.common.unit.ByteSizeValue;
    +import org.elasticsearch.common.xcontent.NamedXContentRegistry;
     import org.elasticsearch.env.Environment;
     import org.elasticsearch.repositories.RepositoryException;
     import org.elasticsearch.repositories.blobstore.BlobStoreRepository;
    @@ -72,8 +73,9 @@ public class FsRepository extends BlobStoreRepository {
         /**
          * Constructs a shared file system repository.
          */
    -    public FsRepository(RepositoryMetaData metadata, Environment environment) throws IOException {
    -        super(metadata, environment.settings());
    +    public FsRepository(RepositoryMetaData metadata, Environment environment,
    +                        NamedXContentRegistry namedXContentRegistry) throws IOException {
    +        super(metadata, environment.settings(), namedXContentRegistry);
             String location = REPOSITORIES_LOCATION_SETTING.get(metadata.settings());
             if (location.isEmpty()) {
                 logger.warn("the repository location is missing, it should point to a shared file system location that is available on all master and data nodes");
    diff --git a/core/src/main/java/org/elasticsearch/repositories/uri/URLRepository.java b/core/src/main/java/org/elasticsearch/repositories/uri/URLRepository.java
    index 5ca335c595364..fdeb27819bf08 100644
    --- a/core/src/main/java/org/elasticsearch/repositories/uri/URLRepository.java
    +++ b/core/src/main/java/org/elasticsearch/repositories/uri/URLRepository.java
    @@ -26,6 +26,7 @@
     import org.elasticsearch.common.settings.Setting;
     import org.elasticsearch.common.settings.Setting.Property;
     import org.elasticsearch.common.util.URIPattern;
    +import org.elasticsearch.common.xcontent.NamedXContentRegistry;
     import org.elasticsearch.env.Environment;
     import org.elasticsearch.repositories.RepositoryException;
     import org.elasticsearch.repositories.blobstore.BlobStoreRepository;
    @@ -77,8 +78,9 @@ public class URLRepository extends BlobStoreRepository {
         /**
          * Constructs a read-only URL-based repository
          */
    -    public URLRepository(RepositoryMetaData metadata, Environment environment) throws IOException {
    -        super(metadata, environment.settings());
    +    public URLRepository(RepositoryMetaData metadata, Environment environment,
    +                         NamedXContentRegistry namedXContentRegistry) throws IOException {
    +        super(metadata, environment.settings(), namedXContentRegistry);
     
             if (URL_SETTING.exists(metadata.settings()) == false && REPOSITORIES_URL_SETTING.exists(settings) ==  false) {
                 throw new RepositoryException(metadata.name(), "missing url");
    diff --git a/core/src/main/java/org/elasticsearch/rest/AbstractRestChannel.java b/core/src/main/java/org/elasticsearch/rest/AbstractRestChannel.java
    index f146267c9be33..4db9aec6e93ef 100644
    --- a/core/src/main/java/org/elasticsearch/rest/AbstractRestChannel.java
    +++ b/core/src/main/java/org/elasticsearch/rest/AbstractRestChannel.java
    @@ -20,13 +20,14 @@
     
     import org.elasticsearch.common.Nullable;
     import org.elasticsearch.common.Strings;
    -import org.elasticsearch.common.bytes.BytesReference;
    +import org.elasticsearch.common.io.Streams;
     import org.elasticsearch.common.io.stream.BytesStreamOutput;
     import org.elasticsearch.common.xcontent.XContentBuilder;
     import org.elasticsearch.common.xcontent.XContentFactory;
     import org.elasticsearch.common.xcontent.XContentType;
     
     import java.io.IOException;
    +import java.io.OutputStream;
     import java.util.Collections;
     import java.util.Set;
     import java.util.function.Predicate;
    @@ -40,59 +41,79 @@ public abstract class AbstractRestChannel implements RestChannel {
     
         protected final RestRequest request;
         protected final boolean detailedErrorsEnabled;
    +    private final String format;
    +    private final String filterPath;
    +    private final boolean pretty;
    +    private final boolean human;
     
         private BytesStreamOutput bytesOut;
     
         protected AbstractRestChannel(RestRequest request, boolean detailedErrorsEnabled) {
             this.request = request;
             this.detailedErrorsEnabled = detailedErrorsEnabled;
    +        this.format = request.param("format", request.header("Accept"));
    +        this.filterPath = request.param("filter_path", null);
    +        this.pretty = request.paramAsBoolean("pretty", false);
    +        this.human = request.paramAsBoolean("human", false);
         }
     
         @Override
         public XContentBuilder newBuilder() throws IOException {
    -        return newBuilder(request.hasContent() ? request.content() : null, true);
    +        return newBuilder(request.getXContentType(), true);
         }
     
         @Override
         public XContentBuilder newErrorBuilder() throws IOException {
             // Disable filtering when building error responses
    -        return newBuilder(request.hasContent() ? request.content() : null, false);
    +        return newBuilder(request.getXContentType(), false);
         }
     
    +    /**
    +     * Creates a new {@link XContentBuilder} for a response to be sent using this channel. The builder's type is determined by the following
     +     * logic. If the request has a format parameter, that will be used to attempt to map to an {@link XContentType}. If there is no format
    +     * parameter, the HTTP Accept header is checked to see if it can be matched to a {@link XContentType}. If this first attempt to map
    +     * fails, the request content type will be used if the value is not {@code null}; if the value is {@code null} the output format falls
    +     * back to JSON.
    +     */
         @Override
    -    public XContentBuilder newBuilder(@Nullable BytesReference autoDetectSource, boolean useFiltering) throws IOException {
    -        XContentType contentType = XContentType.fromMediaTypeOrFormat(request.param("format", request.header("Accept")));
    -        if (contentType == null) {
    -            // try and guess it from the auto detect source
    -            if (autoDetectSource != null) {
    -                contentType = XContentFactory.xContentType(autoDetectSource);
    +    public XContentBuilder newBuilder(@Nullable XContentType requestContentType, boolean useFiltering) throws IOException {
    +        // try to determine the response content type from the media type or the format query string parameter, with the format parameter
    +        // taking precedence over the Accept header
    +        XContentType responseContentType = XContentType.fromMediaTypeOrFormat(format);
    +        if (responseContentType == null) {
    +            if (requestContentType != null) {
    +                // if there was a parsed content-type for the incoming request use that since no format was specified using the query
    +                // string parameter or the HTTP Accept header
    +                responseContentType = requestContentType;
    +            } else {
    +                // default to JSON output when all else fails
    +                responseContentType = XContentType.JSON;
                 }
             }
    -        if (contentType == null) {
    -            // default to JSON
    -            contentType = XContentType.JSON;
    -        }
     
             Set includes = Collections.emptySet();
             Set excludes = Collections.emptySet();
             if (useFiltering) {
    -            Set filters = Strings.splitStringByCommaToSet(request.param("filter_path", null));
     +            Set<String> filters = Strings.splitStringByCommaToSet(filterPath);
                 includes = filters.stream().filter(INCLUDE_FILTER).collect(toSet());
                 excludes = filters.stream().filter(EXCLUDE_FILTER).map(f -> f.substring(1)).collect(toSet());
             }
     
    -        XContentBuilder builder = new XContentBuilder(XContentFactory.xContent(contentType), bytesOutput(), includes, excludes);
    -        if (request.paramAsBoolean("pretty", false)) {
    +        OutputStream unclosableOutputStream = Streams.flushOnCloseStream(bytesOutput());
    +        XContentBuilder builder =
    +            new XContentBuilder(XContentFactory.xContent(responseContentType), unclosableOutputStream, includes, excludes);
    +        if (pretty) {
                 builder.prettyPrint().lfAtEnd();
             }
     
    -        builder.humanReadable(request.paramAsBoolean("human", builder.humanReadable()));
    +        builder.humanReadable(human);
             return builder;
         }
     
         /**
    -     * A channel level bytes output that can be reused. It gets reset on each call to this
    -     * method.
    +     * A channel level bytes output that can be reused. The bytes output is lazily instantiated
    +     * by a call to {@link #newBytesOutput()}. Once the stream is created, it gets reset on each
    +     * call to this method.
          */
         @Override
         public final BytesStreamOutput bytesOutput() {
    @@ -104,6 +125,14 @@ public final BytesStreamOutput bytesOutput() {
             return bytesOut;
         }
     
    +    /**
    +     * An accessor to the raw value of the channel bytes output. This method will not instantiate
    +     * a new stream if one does not exist and this method will not reset the stream.
    +     */
    +    protected final BytesStreamOutput bytesOutputOrNull() {
    +        return bytesOut;
    +    }
    +
         protected BytesStreamOutput newBytesOutput() {
             return new BytesStreamOutput();
         }
    @@ -117,5 +146,4 @@ public RestRequest request() {
         public boolean detailedErrorsEnabled() {
             return detailedErrorsEnabled;
         }
    -
     }
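
The negotiation order described in the new `newBuilder` javadoc (format parameter, then Accept header, then the request's own content type, then JSON) boils down to the following; this is a sketch of the precedence only, not the method itself:

```java
// Sketch only: "format" query parameter / Accept header first, then the parsed
// request content type, then JSON as the final fallback.
XContentType responseContentType = XContentType.fromMediaTypeOrFormat(format);
if (responseContentType == null) {
    responseContentType = requestContentType != null ? requestContentType : XContentType.JSON;
}
```
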
    diff --git a/core/src/main/java/org/elasticsearch/rest/BaseRestHandler.java b/core/src/main/java/org/elasticsearch/rest/BaseRestHandler.java
    index 31e09d6706e04..81620ec8a7cf7 100644
    --- a/core/src/main/java/org/elasticsearch/rest/BaseRestHandler.java
    +++ b/core/src/main/java/org/elasticsearch/rest/BaseRestHandler.java
    @@ -19,13 +19,28 @@
     
     package org.elasticsearch.rest;
     
    -import org.elasticsearch.common.ParseFieldMatcher;
    +import org.apache.lucene.search.spell.LevensteinDistance;
    +import org.apache.lucene.util.CollectionUtil;
    +import org.elasticsearch.client.node.NodeClient;
    +import org.elasticsearch.common.CheckedConsumer;
    +import org.elasticsearch.common.collect.Tuple;
     import org.elasticsearch.common.component.AbstractComponent;
     import org.elasticsearch.common.settings.Setting;
     import org.elasticsearch.common.settings.Setting.Property;
     import org.elasticsearch.common.settings.Settings;
     import org.elasticsearch.plugins.ActionPlugin;
     
    +import java.io.IOException;
    +import java.util.ArrayList;
    +import java.util.Collections;
    +import java.util.HashSet;
    +import java.util.List;
    +import java.util.Locale;
    +import java.util.Set;
    +import java.util.SortedSet;
    +import java.util.TreeSet;
    +import java.util.stream.Collectors;
    +
     /**
      * Base handler for REST requests.
      * 

     @@ -35,12 +50,110 @@ * {@link ActionPlugin#getRestHeaders()}.
       */
      public abstract class BaseRestHandler extends AbstractComponent implements RestHandler {
     +
          public static final Setting<Boolean> MULTI_ALLOW_EXPLICIT_INDEX = Setting.boolSetting("rest.action.multi.allow_explicit_index", true, Property.NodeScope);
      
     -    protected final ParseFieldMatcher parseFieldMatcher;
      
          protected BaseRestHandler(Settings settings) {
              super(settings);
     -        this.parseFieldMatcher = new ParseFieldMatcher(settings);
          }
     +
     +    @Override
     +    public final void handleRequest(RestRequest request, RestChannel channel, NodeClient client) throws Exception {
     +        // prepare the request for execution; has the side effect of touching the request parameters
     +        final RestChannelConsumer action = prepareRequest(request, client);
     +
     +        // validate unconsumed params, but we must exclude params used to format the response
     +        // use a sorted set so the unconsumed parameters appear in a reliable sorted order
     +        final SortedSet<String> unconsumedParams =
     +            request.unconsumedParams().stream().filter(p -> !responseParams().contains(p)).collect(Collectors.toCollection(TreeSet::new));
     +
     +        // validate the non-response params
     +        if (!unconsumedParams.isEmpty()) {
     +            final Set<String> candidateParams = new HashSet<>();
     +            candidateParams.addAll(request.consumedParams());
     +            candidateParams.addAll(responseParams());
     +            throw new IllegalArgumentException(unrecognized(request, unconsumedParams, candidateParams, "parameter"));
     +        }
     +
     +        // execute the action
     +        action.accept(channel);
     +    }
     +
     +    protected final String unrecognized(
     +        final RestRequest request,
     +        final Set<String> invalids,
     +        final Set<String> candidates,
     +        final String detail) {
     +        String message = String.format(
     +            Locale.ROOT,
     +            "request [%s] contains unrecognized %s%s: ",
     +            request.path(),
     +            detail,
     +            invalids.size() > 1 ? "s" : "");
     +        boolean first = true;
     +        for (final String invalid : invalids) {
     +            final LevensteinDistance ld = new LevensteinDistance();
     +            final List<Tuple<Float, String>> scoredParams = new ArrayList<>();
     +            for (final String candidate : candidates) {
     +                final float distance = ld.getDistance(invalid, candidate);
     +                if (distance > 0.5f) {
     +                    scoredParams.add(new Tuple<>(distance, candidate));
     +                }
     +            }
     +            CollectionUtil.timSort(scoredParams, (a, b) -> {
     +                // sort by distance in reverse order, then parameter name for equal distances
     +                int compare = a.v1().compareTo(b.v1());
     +                if (compare != 0) return -compare;
     +                else return a.v2().compareTo(b.v2());
     +            });
     +            if (first == false) {
     +                message += ", ";
     +            }
     +            message += "[" + invalid + "]";
     +            final List<String> keys = scoredParams.stream().map(Tuple::v2).collect(Collectors.toList());
     +            if (keys.isEmpty() == false) {
     +                message += " -> did you mean " + (keys.size() == 1 ? "[" + keys.get(0) + "]" : "any of " + keys.toString()) + "?";
     +            }
     +            first = false;
     +        }
     +
     +        return message;
     +    }
     +
     +    /**
     +     * REST requests are handled by preparing a channel consumer that represents the execution of
     +     * the request against a channel.
     +     */
     +    @FunctionalInterface
     +    protected interface RestChannelConsumer extends CheckedConsumer<RestChannel, Exception> {
     +    }
     +
     +    /**
     +     * Prepare the request for execution. Implementations should consume all request params before
     +     * returning the runnable for actual execution. Unconsumed params will immediately terminate
     +     * execution of the request. However, some params are only used in processing the response;
     +     * implementations can override {@link BaseRestHandler#responseParams()} to indicate such
     +     * params.
     +     *
     +     * @param request the request to execute
     +     * @param client  client for executing actions on the local node
     +     * @return the action to execute
     +     * @throws IOException if an I/O exception occurred parsing the request and preparing for
     +     *         execution
     +     */
     +    protected abstract RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException;
     +
     +    /**
     +     * Parameters used for controlling the response and thus might not be consumed during
     +     * preparation of the request execution in
     +     * {@link BaseRestHandler#prepareRequest(RestRequest, NodeClient)}.
     +     *
     +     * @return a set of parameters used to control the response and thus should not trip strict
     +     * URL parameter checks.
     +     */
     +    protected Set<String> responseParams() {
     +        return Collections.emptySet();
     +    }
     +
      }
     diff --git a/core/src/main/java/org/elasticsearch/rest/BytesRestResponse.java b/core/src/main/java/org/elasticsearch/rest/BytesRestResponse.java
     index 7af8249bf2ed0..11daaddd14720 100644
     --- a/core/src/main/java/org/elasticsearch/rest/BytesRestResponse.java
     +++ b/core/src/main/java/org/elasticsearch/rest/BytesRestResponse.java
     @@ -23,21 +23,29 @@
      import org.apache.logging.log4j.message.ParameterizedMessage;
      import org.apache.logging.log4j.util.Supplier;
      import org.elasticsearch.ElasticsearchException;
     +import org.elasticsearch.ElasticsearchStatusException;
      import org.elasticsearch.ExceptionsHelper;
      import org.elasticsearch.common.bytes.BytesArray;
      import org.elasticsearch.common.bytes.BytesReference;
      import org.elasticsearch.common.logging.ESLoggerFactory;
      import org.elasticsearch.common.xcontent.ToXContent;
      import org.elasticsearch.common.xcontent.XContentBuilder;
     +import org.elasticsearch.common.xcontent.XContentParser;
      
      import java.io.IOException;
     -import java.util.Collections;
     +
     +import static java.util.Collections.singletonMap;
     +import static org.elasticsearch.ElasticsearchException.REST_EXCEPTION_SKIP_STACK_TRACE;
     +import static org.elasticsearch.ElasticsearchException.REST_EXCEPTION_SKIP_STACK_TRACE_DEFAULT;
     +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken;
      
      public class BytesRestResponse extends RestResponse {
      
          public static final String TEXT_CONTENT_TYPE = "text/plain; charset=UTF-8";
      
     +    private static final String STATUS = "status";
     +
          private final RestStatus status;
          private final BytesReference content;
          private final String contentType;
     @@ -85,11 +93,7 @@ public BytesRestResponse(RestChannel channel, Exception e) throws IOException {
      
          public BytesRestResponse(RestChannel channel, RestStatus status, Exception e) throws IOException {
              this.status = status;
     -        if (channel.request().method() == RestRequest.Method.HEAD) {
     -            this.content = BytesArray.EMPTY;
     -            this.contentType = TEXT_CONTENT_TYPE;
     -        } else {
     -            XContentBuilder builder = convert(channel, status, e);
     +        try (XContentBuilder builder = build(channel, status, e)) {
                  this.content = builder.bytes();
                  this.contentType = builder.contentType().mediaType();
              }
     @@ -115,57 +119,68 @@ public RestStatus status() {
      
          private static final Logger SUPPRESSED_ERROR_LOGGER = ESLoggerFactory.getLogger("rest.suppressed");
      
     -    private static XContentBuilder convert(RestChannel channel, RestStatus status, Exception e) throws IOException {
     -        XContentBuilder builder = channel.newErrorBuilder().startObject();
     -        if (e == null) {
     -            builder.field("error", "unknown");
     -        } else if (channel.detailedErrorsEnabled()) {
     -            final ToXContent.Params params;
     -            if (channel.request().paramAsBoolean("error_trace", !ElasticsearchException.REST_EXCEPTION_SKIP_STACK_TRACE_DEFAULT)) {
     -                params = new ToXContent.DelegatingMapParams(Collections.singletonMap(ElasticsearchException.REST_EXCEPTION_SKIP_STACK_TRACE, "false"), channel.request());
     +    private static XContentBuilder build(RestChannel channel, RestStatus status, Exception e) throws IOException {
     +        ToXContent.Params params = channel.request();
     +        if (params.paramAsBoolean("error_trace", !REST_EXCEPTION_SKIP_STACK_TRACE_DEFAULT)) {
     +            params = new ToXContent.DelegatingMapParams(singletonMap(REST_EXCEPTION_SKIP_STACK_TRACE, "false"), params);
     +        } else if (e != null) {
     +            Supplier<?> messageSupplier = () -> new ParameterizedMessage("path: {}, params: {}",
     +                channel.request().rawPath(), channel.request().params());
     +
     +            if (status.getStatus() < 500) {
     +                SUPPRESSED_ERROR_LOGGER.debug(messageSupplier, e);
              } else {
     -            if (status.getStatus() < 500) {
     -                SUPPRESSED_ERROR_LOGGER.debug((Supplier<?>) () -> new ParameterizedMessage("path: {}, params: {}", channel.request().rawPath(), channel.request().params()), e);
     -            } else {
     -                SUPPRESSED_ERROR_LOGGER.warn((Supplier<?>) () -> new ParameterizedMessage("path: {}, params: {}", channel.request().rawPath(), channel.request().params()), e);
     -            }
     -            params = channel.request();
     -        }
     -        builder.field("error");
     -        builder.startObject();
     -        final ElasticsearchException[] rootCauses = ElasticsearchException.guessRootCauses(e);
     -        builder.field("root_cause");
     -        builder.startArray();
     -        for (ElasticsearchException rootCause : rootCauses){
     -            builder.startObject();
     -            rootCause.toXContent(builder, new ToXContent.DelegatingMapParams(Collections.singletonMap(ElasticsearchException.REST_EXCEPTION_SKIP_CAUSE, "true"), params));
     -            builder.endObject();
     +                SUPPRESSED_ERROR_LOGGER.warn(messageSupplier, e);
              }
     -        builder.endArray();
     -
     -        ElasticsearchException.toXContent(builder, params, e);
     -        builder.endObject();
     -        } else {
     -            builder.field("error", simpleMessage(e));
          }
     -        builder.field("status", status.getStatus());
     +
     +        XContentBuilder builder = channel.newErrorBuilder().startObject();
     +        ElasticsearchException.generateFailureXContent(builder, params, e, channel.detailedErrorsEnabled());
     +        builder.field(STATUS, status.getStatus());
          builder.endObject();
          return builder;
      }
      
     -    /*
     -     * Builds a simple error string from the message of the first ElasticsearchException
     -     */
     -    private static String simpleMessage(Throwable t) throws IOException {
     -        int counter = 0;
     -        Throwable next = t;
     -        while (next != null && counter++ < 10) {
     -            if (t instanceof ElasticsearchException) {
     -                return next.getClass().getSimpleName() + "[" + next.getMessage() + "]";
     +    static BytesRestResponse createSimpleErrorResponse(RestChannel channel, RestStatus status, String errorMessage) throws IOException {
     +        return new BytesRestResponse(status, channel.newErrorBuilder().startObject()
     +            .field("error", errorMessage)
     +            .field("status", status.getStatus())
     +            .endObject());
     +    }
     +
     +    public static ElasticsearchStatusException errorFromXContent(XContentParser parser) throws IOException {
     +        XContentParser.Token token = parser.nextToken();
     +        ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation);
     +
     +        ElasticsearchException exception = null;
     +        RestStatus status = null;
     +
     +        String currentFieldName = null;
     +        while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
     +            if (token == XContentParser.Token.FIELD_NAME) {
     +                currentFieldName = parser.currentName();
     +            }
     +            if (STATUS.equals(currentFieldName)) {
     +                if (token != XContentParser.Token.FIELD_NAME) {
     +
ensureExpectedToken(XContentParser.Token.VALUE_NUMBER, token, parser::getTokenLocation); + status = RestStatus.fromCode(parser.intValue()); + } + } else { + exception = ElasticsearchException.failureFromXContent(parser); } - next = next.getCause(); } - return "No ElasticsearchException found"; + if (exception == null) { + throw new IllegalStateException("Failed to parse elasticsearch status exception: no exception was found"); + } + + ElasticsearchStatusException result = new ElasticsearchStatusException(exception.getMessage(), status, exception.getCause()); + for (String header : exception.getHeaderKeys()) { + result.addHeader(header, exception.getHeader(header)); + } + for (String metadata : exception.getMetadataKeys()) { + result.addMetadata(metadata, exception.getMetadata(metadata)); + } + return result; } } diff --git a/core/src/main/java/org/elasticsearch/rest/RestChannel.java b/core/src/main/java/org/elasticsearch/rest/RestChannel.java index 2a56313fd8a79..8c8346f0ef4b2 100644 --- a/core/src/main/java/org/elasticsearch/rest/RestChannel.java +++ b/core/src/main/java/org/elasticsearch/rest/RestChannel.java @@ -20,9 +20,9 @@ package org.elasticsearch.rest; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentType; import java.io.IOException; @@ -35,7 +35,7 @@ public interface RestChannel { XContentBuilder newErrorBuilder() throws IOException; - XContentBuilder newBuilder(@Nullable BytesReference autoDetectSource, boolean useFiltering) throws IOException; + XContentBuilder newBuilder(@Nullable XContentType xContentType, boolean useFiltering) throws IOException; BytesStreamOutput bytesOutput(); @@ -47,5 +47,4 @@ public interface RestChannel { boolean detailedErrorsEnabled(); void sendResponse(RestResponse response); - } diff --git a/core/src/main/java/org/elasticsearch/rest/RestController.java b/core/src/main/java/org/elasticsearch/rest/RestController.java index e63f35884e88c..42db6e8adae03 100644 --- a/core/src/main/java/org/elasticsearch/rest/RestController.java +++ b/core/src/main/java/org/elasticsearch/rest/RestController.java @@ -20,29 +20,45 @@ package org.elasticsearch.rest; import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; +import org.elasticsearch.ElasticsearchException; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.breaker.CircuitBreaker; import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.component.AbstractLifecycleComponent; +import org.elasticsearch.common.component.AbstractComponent; +import org.elasticsearch.common.io.Streams; +import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.logging.DeprecationLogger; import org.elasticsearch.common.path.PathTrie; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.http.HttpServerTransport; +import org.elasticsearch.http.HttpTransportSettings; +import org.elasticsearch.indices.breaker.CircuitBreakerService; +import 
java.io.ByteArrayOutputStream; import java.io.IOException; -import java.util.Arrays; +import java.io.InputStream; +import java.util.List; +import java.util.Locale; +import java.util.Objects; import java.util.Set; -import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.function.Supplier; +import java.util.function.UnaryOperator; import static org.elasticsearch.rest.RestStatus.BAD_REQUEST; +import static org.elasticsearch.rest.RestStatus.FORBIDDEN; +import static org.elasticsearch.rest.RestStatus.INTERNAL_SERVER_ERROR; +import static org.elasticsearch.rest.RestStatus.NOT_ACCEPTABLE; import static org.elasticsearch.rest.RestStatus.OK; -/** - * - */ -public class RestController extends AbstractLifecycleComponent { +public class RestController extends AbstractComponent implements HttpServerTransport.Dispatcher { + private final PathTrie getHandlers = new PathTrie<>(RestUtils.REST_DECODER); private final PathTrie postHandlers = new PathTrie<>(RestUtils.REST_DECODER); private final PathTrie putHandlers = new PathTrie<>(RestUtils.REST_DECODER); @@ -50,43 +66,31 @@ public class RestController extends AbstractLifecycleComponent { private final PathTrie headHandlers = new PathTrie<>(RestUtils.REST_DECODER); private final PathTrie optionsHandlers = new PathTrie<>(RestUtils.REST_DECODER); - private final RestHandlerFilter handlerFilter = new RestHandlerFilter(); + private final UnaryOperator handlerWrapper; + + private final NodeClient client; + + private final CircuitBreakerService circuitBreakerService; /** Rest headers that are copied to internal requests made during a rest request. */ private final Set headersToCopy; - // non volatile since the assumption is that pre processors are registered on startup - private RestFilter[] filters = new RestFilter[0]; + private final boolean isContentTypeRequired; + + private final DeprecationLogger deprecationLogger; - public RestController(Settings settings, Set headersToCopy) { + public RestController(Settings settings, Set headersToCopy, UnaryOperator handlerWrapper, + NodeClient client, CircuitBreakerService circuitBreakerService) { super(settings); this.headersToCopy = headersToCopy; - } - - @Override - protected void doStart() { - } - - @Override - protected void doStop() { - } - - @Override - protected void doClose() { - for (RestFilter filter : filters) { - filter.close(); + if (handlerWrapper == null) { + handlerWrapper = h -> h; // passthrough if no wrapper set } - } - - /** - * Registers a pre processor to be executed before the rest request is actually handled. - */ - public synchronized void registerFilter(RestFilter preProcessor) { - RestFilter[] copy = new RestFilter[filters.length + 1]; - System.arraycopy(filters, 0, copy, 0, filters.length); - copy[filters.length] = preProcessor; - Arrays.sort(copy, (o1, o2) -> Integer.compare(o1.order(), o2.order())); - filters = copy; + this.handlerWrapper = handlerWrapper; + this.client = client; + this.circuitBreakerService = circuitBreakerService; + this.isContentTypeRequired = HttpTransportSettings.SETTING_HTTP_CONTENT_TYPE_REQUIRED.get(settings); + this.deprecationLogger = new DeprecationLogger(logger); } /** @@ -157,25 +161,6 @@ public void registerHandler(RestRequest.Method method, String path, RestHandler } } - /** - * Returns a filter chain (if needed) to execute. If this method returns null, simply execute - * as usual. 
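The filter-chain API being deleted here (along with `RestFilter` and `RestFilterChain` further below) is replaced by the single `handlerWrapper` `UnaryOperator` taken by the new `RestController` constructor above. A minimal, hypothetical sketch of such a wrapper, assuming a plugin that only wants to log each dispatch before delegating:

```java
import org.apache.logging.log4j.Logger;
import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.rest.RestHandler;

import java.util.function.UnaryOperator;

// Illustrative only; the class name and logging behavior are assumptions.
public final class LoggingRestWrapper {

    private static final Logger LOGGER = Loggers.getLogger(LoggingRestWrapper.class);

    /** Wraps every registered handler so each dispatch is logged before the real handler runs. */
    public static UnaryOperator<RestHandler> wrapper() {
        return handler -> (request, channel, client) -> {
            LOGGER.debug("dispatching {} {}", request.method(), request.uri());
            handler.handleRequest(request, channel, client);
        };
    }
}
```

When no wrapper is supplied, the controller falls back to the identity function (`h -> h`), so installations that do not wrap handlers pay no extra cost.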
- */ - @Nullable - public RestFilterChain filterChainOrNull(RestFilter executionFilter) { - if (filters.length == 0) { - return null; - } - return new ControllerFilterChain(executionFilter); - } - - /** - * Returns a filter chain with the final filter being the provided filter. - */ - public RestFilterChain filterChain(RestFilter executionFilter) { - return new ControllerFilterChain(executionFilter); - } - /** * @param request The current request. Must not be null. * @return true iff the circuit breaker limit must be enforced for processing this request. @@ -185,33 +170,167 @@ public boolean canTripCircuitBreaker(RestRequest request) { return (handler != null) ? handler.canTripCircuitBreaker() : true; } - public void dispatchRequest(final RestRequest request, final RestChannel channel, final NodeClient client, ThreadContext threadContext) throws Exception { - if (!checkRequestParameters(request, channel)) { + @Override + public void dispatchRequest(RestRequest request, RestChannel channel, ThreadContext threadContext) { + if (request.rawPath().equals("/favicon.ico")) { + handleFavicon(request, channel); return; } - try (ThreadContext.StoredContext ignored = threadContext.stashContext()) { + RestChannel responseChannel = channel; + try { + final int contentLength = request.hasContent() ? request.content().length() : 0; + assert contentLength >= 0 : "content length was negative, how is that possible?"; + final RestHandler handler = getHandler(request); + + if (contentLength > 0 && hasContentTypeOrCanAutoDetect(request, handler) == false) { + sendContentTypeErrorMessage(request, responseChannel); + } else if (contentLength > 0 && handler != null && handler.supportsContentStream() && + request.getXContentType() != XContentType.JSON && request.getXContentType() != XContentType.SMILE) { + responseChannel.sendResponse(BytesRestResponse.createSimpleErrorResponse(responseChannel, + RestStatus.NOT_ACCEPTABLE, "Content-Type [" + request.getXContentType() + + "] does not support stream parsing. 
Use JSON or SMILE instead")); + } else { + if (canTripCircuitBreaker(request)) { + inFlightRequestsBreaker(circuitBreakerService).addEstimateBytesAndMaybeBreak(contentLength, ""); + } else { + inFlightRequestsBreaker(circuitBreakerService).addWithoutBreaking(contentLength); + } + // iff we could reserve bytes for the request we need to send the response also over this channel + responseChannel = new ResourceHandlingHttpChannel(channel, circuitBreakerService, contentLength); + dispatchRequest(request, responseChannel, client, threadContext, handler); + } + } catch (Exception e) { + try { + responseChannel.sendResponse(new BytesRestResponse(channel, e)); + } catch (Exception inner) { + inner.addSuppressed(e); + logger.error((Supplier) () -> + new ParameterizedMessage("failed to send failure response for uri [{}]", request.uri()), inner); + } + } + } + + @Override + public void dispatchBadRequest( + final RestRequest request, + final RestChannel channel, + final ThreadContext threadContext, + final Throwable cause) { + try { + final Exception e; + if (cause == null) { + e = new ElasticsearchException("unknown cause"); + } else if (cause instanceof Exception) { + e = (Exception) cause; + } else { + e = new ElasticsearchException(cause); + } + channel.sendResponse(new BytesRestResponse(channel, BAD_REQUEST, e)); + } catch (final IOException e) { + if (cause != null) { + e.addSuppressed(cause); + } + logger.warn("failed to send bad request response", e); + channel.sendResponse(new BytesRestResponse(INTERNAL_SERVER_ERROR, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)); + } + } + + void dispatchRequest(final RestRequest request, final RestChannel channel, final NodeClient client, ThreadContext threadContext, + final RestHandler handler) throws Exception { + if (checkRequestParameters(request, channel) == false) { + channel + .sendResponse(BytesRestResponse.createSimpleErrorResponse(channel,BAD_REQUEST, "error traces in responses are disabled.")); + } else { for (String key : headersToCopy) { String httpHeader = request.header(key); if (httpHeader != null) { threadContext.putHeader(key, httpHeader); } } - if (filters.length == 0) { - executeHandler(request, channel, client); + + if (handler == null) { + if (request.method() == RestRequest.Method.OPTIONS) { + // when we routinghave OPTIONS request, simply send OK by default (with the Access Control Origin header which gets automatically added) + + channel.sendResponse(new BytesRestResponse(OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)); + } else { + final String msg = "No handler found for uri [" + request.uri() + "] and method [" + request.method() + "]"; + channel.sendResponse(new BytesRestResponse(BAD_REQUEST, msg)); + } } else { - ControllerFilterChain filterChain = new ControllerFilterChain(handlerFilter); - filterChain.continueProcessing(request, channel, client); + final RestHandler wrappedHandler = Objects.requireNonNull(handlerWrapper.apply(handler)); + wrappedHandler.handleRequest(request, channel, client); } } } - public void sendErrorResponse(RestRequest request, RestChannel channel, Exception e) { - try { - channel.sendResponse(new BytesRestResponse(channel, e)); - } catch (Exception inner) { - inner.addSuppressed(e); - logger.error((Supplier) () -> new ParameterizedMessage("failed to send failure response for uri [{}]", request.uri()), inner); + /** + * If a request contains content, this method will return {@code true} if the {@code Content-Type} header is present, matches an + * {@link XContentType} or the 
request is plain text, and content type is required. If content type is not required then this method + * returns true unless a content type could not be inferred from the body and the rest handler does not support plain text + */ + private boolean hasContentTypeOrCanAutoDetect(final RestRequest restRequest, final RestHandler restHandler) { + if (restRequest.getXContentType() == null) { + if (restHandler != null && restHandler.supportsPlainText()) { + if (isContentTypeRequired) { + // content type of null with a handler that supports plain text gets through for now. Once we remove plain text this can + // be removed! + deprecationLogger.deprecated("Plain text request bodies are deprecated. Use request parameters or body " + + "in a supported format."); + } else { + // attempt to autodetect since we do not know that it is truly plain-text + final boolean detected = autoDetectXContentType(restRequest); + if (detected == false) { + deprecationLogger.deprecated("Plain text request bodies are deprecated. Use request parameters or body " + + "in a supported format."); + } + } + } else if (restHandler != null && restHandler.supportsContentStream() && restRequest.header("Content-Type") != null) { + final String lowercaseMediaType = restRequest.header("Content-Type").toLowerCase(Locale.ROOT); + // we also support newline delimited JSON: http://specs.okfnlabs.org/ndjson/ + if (lowercaseMediaType.equals("application/x-ndjson")) { + restRequest.setXContentType(XContentType.JSON); + } else if (lowercaseMediaType.equals("application/x-ldjson")) { + restRequest.setXContentType(XContentType.JSON); + deprecationLogger.deprecated("The Content-Type [application/x-ldjson] has been superseded by [application/x-ndjson] " + + "in the specification and should be used instead."); + } else if (isContentTypeRequired) { + return false; + } else { + return autoDetectXContentType(restRequest); + } + } else if (isContentTypeRequired) { + return false; + } else { + return autoDetectXContentType(restRequest); + } } + return true; + } + + private boolean autoDetectXContentType(RestRequest restRequest) { + deprecationLogger.deprecated("Content type detection for rest requests is deprecated. 
Specify the content type using " + + "the [Content-Type] header."); + XContentType xContentType = XContentFactory.xContentType(restRequest.content()); + if (xContentType == null) { + return false; + } else { + restRequest.setXContentType(xContentType); + } + return true; + } + + private void sendContentTypeErrorMessage(RestRequest restRequest, RestChannel channel) throws IOException { + final List contentTypeHeader = restRequest.getAllHeaderValues("Content-Type"); + final String errorMessage; + if (contentTypeHeader == null) { + errorMessage = "Content-Type header is missing"; + } else { + errorMessage = "Content-Type header [" + + Strings.collectionToCommaDelimitedString(restRequest.getAllHeaderValues("Content-Type")) + "] is not supported"; + } + + channel.sendResponse(BytesRestResponse.createSimpleErrorResponse(channel, NOT_ACCEPTABLE, errorMessage)); } /** @@ -220,37 +339,14 @@ public void sendErrorResponse(RestRequest request, RestChannel channel, Exceptio */ boolean checkRequestParameters(final RestRequest request, final RestChannel channel) { // error_trace cannot be used when we disable detailed errors - if (channel.detailedErrorsEnabled() == false && request.paramAsBoolean("error_trace", false)) { - try { - XContentBuilder builder = channel.newErrorBuilder(); - builder.startObject().field("error","error traces in responses are disabled.").endObject().string(); - RestResponse response = new BytesRestResponse(BAD_REQUEST, builder); - response.addHeader("Content-Type", "application/json"); - channel.sendResponse(response); - } catch (IOException e) { - logger.warn("Failed to send response", e); - } + // we consume the error_trace parameter first to ensure that it is always consumed + if (request.paramAsBoolean("error_trace", false) && channel.detailedErrorsEnabled() == false) { return false; } return true; } - void executeHandler(RestRequest request, RestChannel channel, NodeClient client) throws Exception { - final RestHandler handler = getHandler(request); - if (handler != null) { - handler.handleRequest(request, channel, client); - } else { - if (request.method() == RestRequest.Method.OPTIONS) { - // when we have OPTIONS request, simply send OK by default (with the Access Control Origin header which gets automatically added) - channel.sendResponse(new BytesRestResponse(OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)); - } else { - final String msg = "No handler found for uri [" + request.uri() + "] and method [" + request.method() + "]"; - channel.sendResponse(new BytesRestResponse(BAD_REQUEST, msg)); - } - } - } - private RestHandler getHandler(RestRequest request) { String path = getPath(request); PathTrie handlers = getHandlersForMethod(request.method()); @@ -286,43 +382,84 @@ private String getPath(RestRequest request) { return request.rawPath(); } - class ControllerFilterChain implements RestFilterChain { + void handleFavicon(RestRequest request, RestChannel channel) { + if (request.method() == RestRequest.Method.GET) { + try { + try (InputStream stream = getClass().getResourceAsStream("/config/favicon.ico")) { + ByteArrayOutputStream out = new ByteArrayOutputStream(); + Streams.copy(stream, out); + BytesRestResponse restResponse = new BytesRestResponse(RestStatus.OK, "image/x-icon", out.toByteArray()); + channel.sendResponse(restResponse); + } + } catch (IOException e) { + channel.sendResponse(new BytesRestResponse(INTERNAL_SERVER_ERROR, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)); + } + } else { + channel.sendResponse(new BytesRestResponse(FORBIDDEN, 
BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)); + } + } + + private static final class ResourceHandlingHttpChannel implements RestChannel { + private final RestChannel delegate; + private final CircuitBreakerService circuitBreakerService; + private final int contentLength; + private final AtomicBoolean closed = new AtomicBoolean(); - private final RestFilter executionFilter; + ResourceHandlingHttpChannel(RestChannel delegate, CircuitBreakerService circuitBreakerService, int contentLength) { + this.delegate = delegate; + this.circuitBreakerService = circuitBreakerService; + this.contentLength = contentLength; + } - private final AtomicInteger index = new AtomicInteger(); + @Override + public XContentBuilder newBuilder() throws IOException { + return delegate.newBuilder(); + } - ControllerFilterChain(RestFilter executionFilter) { - this.executionFilter = executionFilter; + @Override + public XContentBuilder newErrorBuilder() throws IOException { + return delegate.newErrorBuilder(); } @Override - public void continueProcessing(RestRequest request, RestChannel channel, NodeClient client) { - try { - int loc = index.getAndIncrement(); - if (loc > filters.length) { - throw new IllegalStateException("filter continueProcessing was called more than expected"); - } else if (loc == filters.length) { - executionFilter.process(request, channel, client, this); - } else { - RestFilter preProcessor = filters[loc]; - preProcessor.process(request, channel, client, this); - } - } catch (Exception e) { - try { - channel.sendResponse(new BytesRestResponse(channel, e)); - } catch (IOException e1) { - logger.error((Supplier) () -> new ParameterizedMessage("Failed to send failure response for uri [{}]", request.uri()), e1); - } - } + public XContentBuilder newBuilder(@Nullable XContentType xContentType, boolean useFiltering) throws IOException { + return delegate.newBuilder(xContentType, useFiltering); } - } - class RestHandlerFilter extends RestFilter { + @Override + public BytesStreamOutput bytesOutput() { + return delegate.bytesOutput(); + } + + @Override + public RestRequest request() { + return delegate.request(); + } + + @Override + public boolean detailedErrorsEnabled() { + return delegate.detailedErrorsEnabled(); + } @Override - public void process(RestRequest request, RestChannel channel, NodeClient client, RestFilterChain filterChain) throws Exception { - executeHandler(request, channel, client); + public void sendResponse(RestResponse response) { + close(); + delegate.sendResponse(response); } + + private void close() { + // attempt to close once atomically + if (closed.compareAndSet(false, true) == false) { + throw new IllegalStateException("Channel is already closed"); + } + inFlightRequestsBreaker(circuitBreakerService).addWithoutBreaking(-contentLength); + } + } + + private static CircuitBreaker inFlightRequestsBreaker(CircuitBreakerService circuitBreakerService) { + // We always obtain a fresh breaker to reflect changes to the breaker configuration. + return circuitBreakerService.getBreaker(CircuitBreaker.IN_FLIGHT_REQUESTS); + } + } diff --git a/core/src/main/java/org/elasticsearch/rest/RestFilter.java b/core/src/main/java/org/elasticsearch/rest/RestFilter.java deleted file mode 100644 index 276e99fc7e510..0000000000000 --- a/core/src/main/java/org/elasticsearch/rest/RestFilter.java +++ /dev/null @@ -1,49 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. 
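One note on the `ResourceHandlingHttpChannel` introduced above before the removed filter classes below: it exists purely so that request bytes reserved on the in-flight-requests breaker at dispatch time are released exactly once when the response is sent. A simplified sketch of that reserve/release pattern; the wrapper class and method names here are assumptions, and in the change itself the release is tied to `sendResponse` rather than a `finally` block:

```java
import org.elasticsearch.common.breaker.CircuitBreaker;
import org.elasticsearch.indices.breaker.CircuitBreakerService;

final class InFlightAccounting {

    private final CircuitBreakerService circuitBreakerService;

    InFlightAccounting(CircuitBreakerService circuitBreakerService) {
        this.circuitBreakerService = circuitBreakerService;
    }

    void withReservedBytes(int contentLength, Runnable handleRequest) {
        // reserving may trip the breaker and reject the request up front
        CircuitBreaker breaker = circuitBreakerService.getBreaker(CircuitBreaker.IN_FLIGHT_REQUESTS);
        breaker.addEstimateBytesAndMaybeBreak(contentLength, "");
        try {
            handleRequest.run();
        } finally {
            // release the reservation exactly once; a negative delta returns the bytes
            breaker.addWithoutBreaking(-contentLength);
        }
    }
}
```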
See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.rest; - -import java.io.Closeable; - -import org.elasticsearch.client.node.NodeClient; - -/** - * A filter allowing to filter rest operations. - */ -public abstract class RestFilter implements Closeable { - - /** - * Optionally, the order of the filter. Execution is done from lowest value to highest. - * It is a good practice to allow to configure this for the relevant filter. - */ - public int order() { - return 0; - } - - @Override - public void close() { - // a no op - } - - /** - * Process the rest request. Using the channel to send a response, or the filter chain to continue - * processing the request. - */ - public abstract void process(RestRequest request, RestChannel channel, NodeClient client, RestFilterChain filterChain) throws Exception; -} diff --git a/core/src/main/java/org/elasticsearch/rest/RestFilterChain.java b/core/src/main/java/org/elasticsearch/rest/RestFilterChain.java deleted file mode 100644 index 239a6c6b1bbbc..0000000000000 --- a/core/src/main/java/org/elasticsearch/rest/RestFilterChain.java +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.rest; - -import org.elasticsearch.client.node.NodeClient; - -/** - * A filter chain allowing to continue and process the rest request. - */ -public interface RestFilterChain { - - /** - * Continue processing the request. Should only be called if a response has not been sent - * through the channel. 
- */ - void continueProcessing(RestRequest request, RestChannel channel, NodeClient client); -} diff --git a/core/src/main/java/org/elasticsearch/rest/RestHandler.java b/core/src/main/java/org/elasticsearch/rest/RestHandler.java index 393e425baf937..215541b40e87f 100644 --- a/core/src/main/java/org/elasticsearch/rest/RestHandler.java +++ b/core/src/main/java/org/elasticsearch/rest/RestHandler.java @@ -20,6 +20,7 @@ package org.elasticsearch.rest; import org.elasticsearch.client.node.NodeClient; +import org.elasticsearch.common.xcontent.XContent; /** * Handler for REST requests @@ -28,7 +29,6 @@ public interface RestHandler { /** * Handles a rest request. - * * @param request The request to handle * @param channel The channel to write the request response to * @param client A client to use to make internal requests on behalf of the original request @@ -38,4 +38,22 @@ public interface RestHandler { default boolean canTripCircuitBreaker() { return true; } + + /** + * Indicates if a RestHandler supports plain text bodies + * @deprecated use request parameters or bodies that can be parsed with XContent! + */ + @Deprecated + default boolean supportsPlainText() { + return false; + } + + /** + * Indicates if the RestHandler supports content as a stream. A stream would be multiple objects delineated by + * {@link XContent#streamSeparator()}. If a handler returns true this will affect the types of content that can be sent to + * this endpoint. + */ + default boolean supportsContentStream() { + return false; + } } diff --git a/core/src/main/java/org/elasticsearch/rest/RestRequest.java b/core/src/main/java/org/elasticsearch/rest/RestRequest.java index 2db917dacf0bb..c8a7efa34d2af 100644 --- a/core/src/main/java/org/elasticsearch/rest/RestRequest.java +++ b/core/src/main/java/org/elasticsearch/rest/RestRequest.java @@ -19,28 +19,61 @@ package org.elasticsearch.rest; +import org.apache.lucene.util.SetOnce; +import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.Booleans; +import org.elasticsearch.common.CheckedConsumer; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Strings; +import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.collect.Tuple; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; +import java.io.IOException; import java.net.SocketAddress; import java.util.Collections; import java.util.HashMap; +import java.util.HashSet; +import java.util.List; import java.util.Map; +import java.util.Set; +import java.util.regex.Pattern; +import java.util.stream.Collectors; import static org.elasticsearch.common.unit.ByteSizeValue.parseBytesSizeValue; import static org.elasticsearch.common.unit.TimeValue.parseTimeValue; public abstract class RestRequest implements ToXContent.Params { + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(RestRequest.class)); + // tchar pattern as defined by RFC7230 section 3.2.6 + private static final Pattern TCHAR_PATTERN = 
Pattern.compile("[a-zA-z0-9!#$%&'*+\\-.\\^_`|~]+"); + + private final NamedXContentRegistry xContentRegistry; private final Map params; + private final Map> headers; private final String rawPath; + private final Set consumedParams = new HashSet<>(); + private final SetOnce xContentType = new SetOnce<>(); - public RestRequest(String uri) { + /** + * Creates a new RestRequest + * @param xContentRegistry the xContentRegistry to use when parsing XContent + * @param uri the URI of the request that potentially contains request parameters + * @param headers a map of the headers. This map should implement a Case-Insensitive hashing for keys as HTTP header names are case + * insensitive + */ + public RestRequest(NamedXContentRegistry xContentRegistry, String uri, Map> headers) { + this.xContentRegistry = xContentRegistry; final Map params = new HashMap<>(); int pathEndPos = uri.indexOf('?'); if (pathEndPos < 0) { @@ -50,11 +83,32 @@ public RestRequest(String uri) { RestUtils.decodeQueryString(uri, pathEndPos + 1, params); } this.params = params; + this.headers = Collections.unmodifiableMap(headers); + final List contentType = getAllHeaderValues("Content-Type"); + final XContentType xContentType = parseContentType(contentType); + if (xContentType != null) { + this.xContentType.set(xContentType); + } } - public RestRequest(Map params, String path) { + /** + * Creates a new RestRequest + * @param xContentRegistry the xContentRegistry to use when parsing XContent + * @param params the parameters of the request + * @param path the path of the request. This should not contain request parameters + * @param headers a map of the headers. This map should implement a Case-Insensitive hashing for keys as HTTP header names are case + * insensitive + */ + public RestRequest(NamedXContentRegistry xContentRegistry, Map params, String path, Map> headers) { + this.xContentRegistry = xContentRegistry; this.params = params; this.rawPath = path; + this.headers = Collections.unmodifiableMap(headers); + final List contentType = getAllHeaderValues("Content-Type"); + final XContentType xContentType = parseContentType(contentType); + if (xContentType != null) { + this.xContentType.set(xContentType); + } } public enum Method { @@ -86,9 +140,63 @@ public final String path() { public abstract BytesReference content(); - public abstract String header(String name); + /** + * @return content of the request body or throw an exception if the body or content type is missing + */ + public final BytesReference requiredContent() { + if (hasContent() == false) { + throw new ElasticsearchParseException("request body is required"); + } else if (xContentType.get() == null) { + throw new IllegalStateException("unknown content type"); + } + return content(); + } + + /** + * Get the value of the header or {@code null} if not found. This method only retrieves the first header value if multiple values are + * sent. Use of {@link #getAllHeaderValues(String)} should be preferred + */ + public final String header(String name) { + List values = headers.get(name); + if (values != null && values.isEmpty() == false) { + return values.get(0); + } + return null; + } + + /** + * Get all values for the header or {@code null} if the header was not found + */ + public final List getAllHeaderValues(String name) { + List values = headers.get(name); + if (values != null) { + return Collections.unmodifiableList(values); + } + return null; + } + + /** + * Get all of the headers and values associated with the headers. Modifications of this map are not supported. 
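The new `RestRequest` constructors above only document that the headers map "should implement a Case-Insensitive hashing for keys"; they do not enforce it. A hedged sketch of what a caller (a test, or an HTTP transport implementation) might pass to satisfy that contract:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative helper; one way to satisfy the case-insensitive key requirement.
final class HeaderMapExample {

    static Map<String, List<String>> jsonHeaders() {
        Map<String, List<String>> headers = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        headers.put("Content-Type", Collections.singletonList("application/json"));
        // with such a map, header("content-type") and getAllHeaderValues("CONTENT-TYPE")
        // resolve to the same entry, and the request's XContentType is parsed as JSON
        // when the RestRequest is constructed
        return headers;
    }
}
```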
+ */ + public final Map> getHeaders() { + return headers; + } + + /** + * The {@link XContentType} that was parsed from the {@code Content-Type} header. This value will be {@code null} in the case of + * a request without a valid {@code Content-Type} header, a request without content ({@link #hasContent()}, or a plain text request + */ + @Nullable + public final XContentType getXContentType() { + return xContentType.get(); + } - public abstract Iterable> headers(); + /** + * Sets the {@link XContentType} + */ + final void setXContentType(XContentType xContentType) { + this.xContentType.set(xContentType); + } @Nullable public SocketAddress getRemoteAddress() { @@ -106,11 +214,13 @@ public final boolean hasParam(String key) { @Override public final String param(String key) { + consumedParams.add(key); return params.get(key); } @Override public final String param(String key, String defaultValue) { + consumedParams.add(key); String value = params.get(key); if (value == null) { return defaultValue; @@ -122,6 +232,30 @@ public Map params() { return params; } + /** + * Returns a list of parameters that have been consumed. This method returns a copy, callers + * are free to modify the returned list. + * + * @return the list of currently consumed parameters. + */ + List consumedParams() { + return consumedParams.stream().collect(Collectors.toList()); + } + + /** + * Returns a list of parameters that have not yet been consumed. This method returns a copy, + * callers are free to modify the returned list. + * + * @return the list of currently unconsumed parameters. + */ + List unconsumedParams() { + return params + .keySet() + .stream() + .filter(p -> !consumedParams.contains(p)) + .collect(Collectors.toList()); + } + public float paramAsFloat(String key, float defaultValue) { String sValue = param(key); if (sValue == null) { @@ -160,12 +294,21 @@ public long paramAsLong(String key, long defaultValue) { @Override public boolean paramAsBoolean(String key, boolean defaultValue) { - return Booleans.parseBoolean(param(key), defaultValue); + return paramAsBoolean(key, (Boolean) defaultValue); } @Override public Boolean paramAsBoolean(String key, Boolean defaultValue) { - return Booleans.parseBoolean(param(key), defaultValue); + String rawParam = param(key); + // Treat empty string as true because that allows the presence of the url parameter to mean "turn this on" + if (rawParam != null && rawParam.length() == 0) { + return true; + } else { + if (rawParam != null && Booleans.isStrictlyBoolean(rawParam) == false) { + DEPRECATION_LOGGER.deprecated("Expected a boolean [true/false] for request parameter [{}] but got [{}]", key, rawParam); + } + return Booleans.parseBoolean(rawParam, defaultValue); + } } public TimeValue paramAsTime(String key, TimeValue defaultValue) { @@ -192,4 +335,164 @@ public String[] paramAsStringArrayOrEmptyIfAll(String key) { return params; } + /** + * Get the {@link NamedXContentRegistry} that should be used to create parsers from this request. + */ + public NamedXContentRegistry getXContentRegistry() { + return xContentRegistry; + } + + /** + * A parser for the contents of this request if there is a body, otherwise throws an {@link ElasticsearchParseException}. Use + * {@link #applyContentParser(CheckedConsumer)} if you want to gracefully handle when the request doesn't have any contents. Use + * {@link #contentOrSourceParamParser()} for requests that support specifying the request body in the {@code source} param. 
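The reworked `paramAsBoolean` a little further up keeps accepting the old lenient boolean values but now logs a deprecation warning for anything that is not a strict `true`/`false`, and an empty value explicitly means the flag is switched on. A small, hypothetical illustration:

```java
import org.elasticsearch.rest.RestRequest;

// Assumes a request for a hypothetical endpoint: "/_example?pretty&refresh=on".
final class BooleanParamExample {

    static void readFlags(RestRequest request) {
        // bare "?pretty" has an empty value, which now means "turn this on"
        boolean pretty = request.paramAsBoolean("pretty", false);   // true

        // "on" is not a strict true/false value: a deprecation warning is logged,
        // then the value is still parsed with the lenient Booleans.parseBoolean
        boolean refresh = request.paramAsBoolean("refresh", false); // true, with a warning
    }
}
```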
+ */ + public final XContentParser contentParser() throws IOException { + BytesReference content = requiredContent(); // will throw exception if body or content type missing + return xContentType.get().xContent().createParser(xContentRegistry, content); + } + + /** + * If there is any content then call {@code applyParser} with the parser, otherwise do nothing. + */ + public final void applyContentParser(CheckedConsumer applyParser) throws IOException { + if (hasContent()) { + try (XContentParser parser = contentParser()) { + applyParser.accept(parser); + } + } + } + + /** + * Does this request have content or a {@code source} parameter? Use this instead of {@link #hasContent()} if this + * {@linkplain RestHandler} treats the {@code source} parameter like the body content. + */ + public final boolean hasContentOrSourceParam() { + return hasContent() || hasParam("source"); + } + + /** + * A parser for the contents of this request if it has contents, otherwise a parser for the {@code source} parameter if there is one, + * otherwise throws an {@link ElasticsearchParseException}. Use {@link #withContentOrSourceParamParserOrNull(CheckedConsumer)} instead + * if you need to handle the absence request content gracefully. + */ + public final XContentParser contentOrSourceParamParser() throws IOException { + Tuple tuple = contentOrSourceParam(); + return tuple.v1().xContent().createParser(xContentRegistry, tuple.v2()); + } + + /** + * Call a consumer with the parser for the contents of this request if it has contents, otherwise with a parser for the {@code source} + * parameter if there is one, otherwise with {@code null}. Use {@link #contentOrSourceParamParser()} if you should throw an exception + * back to the user when there isn't request content. + */ + public final void withContentOrSourceParamParserOrNull(CheckedConsumer withParser) throws IOException { + if (hasContentOrSourceParam()) { + Tuple tuple = contentOrSourceParam(); + BytesReference content = tuple.v2(); + XContentType xContentType = tuple.v1(); + try (XContentParser parser = xContentType.xContent().createParser(xContentRegistry, content)) { + withParser.accept(parser); + } + } else { + withParser.accept(null); + } + } + + /** + * Get the content of the request or the contents of the {@code source} param or throw an exception if both are missing. + * Prefer {@link #contentOrSourceParamParser()} or {@link #withContentOrSourceParamParserOrNull(CheckedConsumer)} if you need a parser. + */ + public final Tuple contentOrSourceParam() { + if (hasContentOrSourceParam() == false) { + throw new ElasticsearchParseException("request body or source parameter is required"); + } else if (hasContent()) { + return new Tuple<>(xContentType.get(), requiredContent()); + } + String source = param("source"); + String typeParam = param("source_content_type"); + BytesArray bytes = new BytesArray(source); + final XContentType xContentType; + if (typeParam != null) { + xContentType = parseContentType(Collections.singletonList(typeParam)); + } else { + DEPRECATION_LOGGER.deprecated("Deprecated use of the [source] parameter without the [source_content_type] parameter. 
Use " + + "the [source_content_type] parameter to specify the content type of the source such as [application/json]"); + xContentType = XContentFactory.xContentType(bytes); + } + + if (xContentType == null) { + throw new IllegalStateException("could not determine source content type"); + } + return new Tuple<>(xContentType, bytes); + } + + /** + * Call a consumer with the parser for the contents of this request if it has contents, otherwise with a parser for the {@code source} + * parameter if there is one, otherwise with {@code null}. Use {@link #contentOrSourceParamParser()} if you should throw an exception + * back to the user when there isn't request content. This version allows for plain text content + */ + @Deprecated + public final void withContentOrSourceParamParserOrNullLenient(CheckedConsumer withParser) + throws IOException { + if (hasContentOrSourceParam() == false) { + withParser.accept(null); + } else if (hasContent() && xContentType.get() == null) { + withParser.accept(null); + } else { + Tuple tuple = contentOrSourceParam(); + BytesReference content = tuple.v2(); + XContentType xContentType = tuple.v1(); + if (content.length() > 0) { + try (XContentParser parser = xContentType.xContent().createParser(xContentRegistry, content)) { + withParser.accept(parser); + } + } else { + withParser.accept(null); + } + } + } + + /** + * Get the content of the request or the contents of the {@code source} param without the xcontent type. This is useful the request can + * accept non xcontent values. + * @deprecated we should only take xcontent + */ + @Deprecated + public final BytesReference getContentOrSourceParamOnly() { + if (hasContent()) { + return content(); + } + String source = param("source"); + if (source != null) { + return new BytesArray(source); + } + return BytesArray.EMPTY; + } + + /** + * Parses the given content type string for the media type. This method currently ignores parameters. + */ + // TODO stop ignoring parameters such as charset... + private static XContentType parseContentType(List header) { + if (header == null || header.isEmpty()) { + return null; + } else if (header.size() > 1) { + throw new IllegalArgumentException("only one Content-Type header should be provided"); + } + + String rawContentType = header.get(0); + final String[] elements = rawContentType.split("[ \t]*;"); + if (elements.length > 0) { + final String[] splitMediaType = elements[0].split("/"); + if (splitMediaType.length == 2 && TCHAR_PATTERN.matcher(splitMediaType[0]).matches() + && TCHAR_PATTERN.matcher(splitMediaType[1].trim()).matches()) { + return XContentType.fromMediaType(elements[0]); + } else { + throw new IllegalArgumentException("invalid Content-Type header [" + rawContentType + "]"); + } + } + throw new IllegalArgumentException("empty Content-Type header"); + } + } diff --git a/core/src/main/java/org/elasticsearch/rest/RestStatus.java b/core/src/main/java/org/elasticsearch/rest/RestStatus.java index d72eb2d11f4c8..e7c07f21147d0 100644 --- a/core/src/main/java/org/elasticsearch/rest/RestStatus.java +++ b/core/src/main/java/org/elasticsearch/rest/RestStatus.java @@ -479,7 +479,7 @@ public enum RestStatus { * is considered to be temporary. If the request that received this status code was the result of a user action, * the request MUST NOT be repeated until it is requested by a separate user action. 
*/ - INSUFFICIENT_STORAGE(506); + INSUFFICIENT_STORAGE(507); private static final Map CODE_TO_STATUS; static { diff --git a/core/src/main/java/org/elasticsearch/rest/action/RestActions.java b/core/src/main/java/org/elasticsearch/rest/action/RestActions.java index bb72e3e2249b6..308879d47e8ee 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/RestActions.java +++ b/core/src/main/java/org/elasticsearch/rest/action/RestActions.java @@ -19,29 +19,23 @@ package org.elasticsearch.rest.action; -import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.action.FailedNodeException; import org.elasticsearch.action.ShardOperationFailedException; import org.elasticsearch.action.support.broadcast.BroadcastResponse; import org.elasticsearch.action.support.nodes.BaseNodeResponse; import org.elasticsearch.action.support.nodes.BaseNodesResponse; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.lucene.uid.Versions; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.ToXContent.Params; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.query.Operator; import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.index.query.QueryStringQueryBuilder; -import org.elasticsearch.indices.query.IndicesQueriesRegistry; import org.elasticsearch.rest.BytesRestResponse; import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestRequest; @@ -56,6 +50,13 @@ */ public class RestActions { + public static final ParseField _SHARDS_FIELD = new ParseField("_shards"); + public static final ParseField TOTAL_FIELD = new ParseField("total"); + public static final ParseField SUCCESSFUL_FIELD = new ParseField("successful"); + public static final ParseField SKIPPED_FIELD = new ParseField("skipped"); + public static final ParseField FAILED_FIELD = new ParseField("failed"); + public static final ParseField FAILURES_FIELD = new ParseField("failures"); + public static long parseVersion(RestRequest request) { if (request.hasParam("version")) { return request.paramAsLong("version", Versions.MATCH_ANY); @@ -74,19 +75,22 @@ public static long parseVersion(RestRequest request, long defaultVersion) { public static void buildBroadcastShardsHeader(XContentBuilder builder, Params params, BroadcastResponse response) throws IOException { buildBroadcastShardsHeader(builder, params, - response.getTotalShards(), response.getSuccessfulShards(), response.getFailedShards(), + response.getTotalShards(), response.getSuccessfulShards(), -1, response.getFailedShards(), response.getShardFailures()); } public static void buildBroadcastShardsHeader(XContentBuilder builder, Params params, - int total, int successful, int failed, + int total, int successful, int skipped, int failed, ShardOperationFailedException[] shardFailures) throws IOException { - builder.startObject("_shards"); - builder.field("total", total); - builder.field("successful", successful); - builder.field("failed", failed); + 
builder.startObject(_SHARDS_FIELD.getPreferredName()); + builder.field(TOTAL_FIELD.getPreferredName(), total); + builder.field(SUCCESSFUL_FIELD.getPreferredName(), successful); + if (skipped >= 0) { + builder.field(SKIPPED_FIELD.getPreferredName(), skipped); + } + builder.field(FAILED_FIELD.getPreferredName(), failed); if (shardFailures != null && shardFailures.length > 0) { - builder.startArray("failures"); + builder.startArray(FAILURES_FIELD.getPreferredName()); final boolean group = params.paramAsBoolean("group_shard_failures", true); // we group by default for (ShardOperationFailedException shardFailure : group ? ExceptionsHelper.groupBy(shardFailures) : shardFailures) { builder.startObject(); @@ -195,7 +199,6 @@ public static QueryBuilder urlParamsToQueryBuilder(RestRequest request) { queryBuilder.defaultField(request.param("df")); queryBuilder.analyzer(request.param("analyzer")); queryBuilder.analyzeWildcard(request.paramAsBoolean("analyze_wildcard", false)); - queryBuilder.lowercaseExpandedTerms(request.paramAsBoolean("lowercase_expanded_terms", true)); queryBuilder.lenient(request.paramAsBoolean("lenient", null)); String defaultOperator = request.param("default_operator"); if (defaultOperator != null) { @@ -204,53 +207,9 @@ public static QueryBuilder urlParamsToQueryBuilder(RestRequest request) { return queryBuilder; } - /** - * Get Rest content from either payload or source parameter - * @param request Rest request - * @return rest content - */ - public static BytesReference getRestContent(RestRequest request) { - assert request != null; - - BytesReference content = request.content(); - if (!request.hasContent()) { - String source = request.param("source"); - if (source != null) { - content = new BytesArray(source); - } - } - - return content; - } - - public static QueryBuilder getQueryContent(BytesReference source, IndicesQueriesRegistry indicesQueriesRegistry, - ParseFieldMatcher parseFieldMatcher) { - try (XContentParser requestParser = XContentFactory.xContent(source).createParser(source)) { - QueryParseContext context = new QueryParseContext(indicesQueriesRegistry, requestParser, parseFieldMatcher); - return context.parseTopLevelQueryBuilder(); - } catch (IOException e) { - throw new ElasticsearchException("failed to parse source", e); - } - } - - /** - * guesses the content type from either payload or source parameter - * @param request Rest request - * @return rest content type or null if not applicable. - */ - public static XContentType guessBodyContentType(final RestRequest request) { - final BytesReference restContent = RestActions.getRestContent(request); - if (restContent == null) { - return null; - } - return XContentFactory.xContentType(restContent); - } - - /** - * Returns true if either payload or source parameter is present. 
Otherwise false - */ - public static boolean hasBodyContent(final RestRequest request) { - return request.hasContent() || request.hasParam("source"); + public static QueryBuilder getQueryContent(XContentParser requestParser) { + QueryParseContext context = new QueryParseContext(requestParser); + return context.parseTopLevelQueryBuilder(); } /** diff --git a/core/src/main/java/org/elasticsearch/rest/action/RestBuilderListener.java b/core/src/main/java/org/elasticsearch/rest/action/RestBuilderListener.java index cc93e72d80d2a..c460331afaa9d 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/RestBuilderListener.java +++ b/core/src/main/java/org/elasticsearch/rest/action/RestBuilderListener.java @@ -34,11 +34,22 @@ public RestBuilderListener(RestChannel channel) { @Override public final RestResponse buildResponse(Response response) throws Exception { - return buildResponse(response, channel.newBuilder()); + try (XContentBuilder builder = channel.newBuilder()) { + final RestResponse restResponse = buildResponse(response, builder); + assert assertBuilderClosed(builder); + return restResponse; + } } /** - * Builds a response to send back over the channel. + * Builds a response to send back over the channel. Implementors should ensure that they close the provided {@link XContentBuilder} + * using the {@link XContentBuilder#close()} method. */ public abstract RestResponse buildResponse(Response response, XContentBuilder builder) throws Exception; + + // pkg private method that we can override for testing + boolean assertBuilderClosed(XContentBuilder xContentBuilder) { + assert xContentBuilder.generator().isClosed() : "callers should ensure the XContentBuilder is closed themselves"; + return true; + } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/RestFieldCapabilitiesAction.java b/core/src/main/java/org/elasticsearch/rest/action/RestFieldCapabilitiesAction.java new file mode 100644 index 0000000000000..e983bdc182a01 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/rest/action/RestFieldCapabilitiesAction.java @@ -0,0 +1,88 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */
+
+package org.elasticsearch.rest.action;
+
+import org.elasticsearch.action.fieldcaps.FieldCapabilitiesRequest;
+import org.elasticsearch.action.fieldcaps.FieldCapabilitiesResponse;
+import org.elasticsearch.action.support.IndicesOptions;
+import org.elasticsearch.client.node.NodeClient;
+import org.elasticsearch.common.Strings;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.xcontent.XContentBuilder;
+import org.elasticsearch.common.xcontent.XContentParser;
+import org.elasticsearch.rest.BaseRestHandler;
+import org.elasticsearch.rest.BytesRestResponse;
+import org.elasticsearch.rest.RestController;
+import org.elasticsearch.rest.RestRequest;
+import org.elasticsearch.rest.RestResponse;
+import org.elasticsearch.rest.RestStatus;
+
+import java.io.IOException;
+
+import static org.elasticsearch.rest.RestRequest.Method.GET;
+import static org.elasticsearch.rest.RestRequest.Method.POST;
+import static org.elasticsearch.rest.RestStatus.NOT_FOUND;
+import static org.elasticsearch.rest.RestStatus.OK;
+
+public class RestFieldCapabilitiesAction extends BaseRestHandler {
+    public RestFieldCapabilitiesAction(Settings settings, RestController controller) {
+        super(settings);
+        controller.registerHandler(GET, "/_field_caps", this);
+        controller.registerHandler(POST, "/_field_caps", this);
+        controller.registerHandler(GET, "/{index}/_field_caps", this);
+        controller.registerHandler(POST, "/{index}/_field_caps", this);
+    }
+
+    @Override
+    public RestChannelConsumer prepareRequest(final RestRequest request,
+                                              final NodeClient client) throws IOException {
+        if (request.hasContentOrSourceParam() && request.hasParam("fields")) {
+            throw new IllegalArgumentException("can't specify a request body and [fields]" +
+                " request parameter, either specify a request body or the" +
+                " [fields] request parameter");
+        }
+        final String[] indices = Strings.splitStringByCommaToArray(request.param("index"));
+        final FieldCapabilitiesRequest fieldRequest;
+        if (request.hasContentOrSourceParam()) {
+            try (XContentParser parser = request.contentOrSourceParamParser()) {
+                fieldRequest = FieldCapabilitiesRequest.parseFields(parser);
+            }
+        } else {
+            fieldRequest = new FieldCapabilitiesRequest();
+            fieldRequest.fields(Strings.splitStringByCommaToArray(request.param("fields")));
+        }
+        fieldRequest.indices(indices);
+        fieldRequest.indicesOptions(
+            IndicesOptions.fromRequest(request, fieldRequest.indicesOptions())
+        );
+        return channel -> client.fieldCaps(fieldRequest,
+            new RestBuilderListener<FieldCapabilitiesResponse>(channel) {
+                @Override
+                public RestResponse buildResponse(FieldCapabilitiesResponse response,
+                                                  XContentBuilder builder) throws Exception {
+                    RestStatus status = OK;
+                    builder.startObject();
+                    response.toXContent(builder, request);
+                    builder.endObject();
+                    return new BytesRestResponse(status, builder);
+                }
+            });
+    }
+}
diff --git a/core/src/main/java/org/elasticsearch/rest/action/RestFieldStatsAction.java b/core/src/main/java/org/elasticsearch/rest/action/RestFieldStatsAction.java
index e6ef620db10d3..34e7b636aeb09 100644
--- a/core/src/main/java/org/elasticsearch/rest/action/RestFieldStatsAction.java
+++ b/core/src/main/java/org/elasticsearch/rest/action/RestFieldStatsAction.java
@@ -25,40 +25,45 @@
 import org.elasticsearch.action.support.IndicesOptions;
 import org.elasticsearch.client.node.NodeClient;
 import org.elasticsearch.common.Strings;
-import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.settings.Settings;
 import
org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.RestStatus; +import java.io.IOException; import java.util.Map; import static org.elasticsearch.rest.RestRequest.Method.GET; import static org.elasticsearch.rest.RestRequest.Method.POST; import static org.elasticsearch.rest.action.RestActions.buildBroadcastShardsHeader; -/** - */ public class RestFieldStatsAction extends BaseRestHandler { - - @Inject public RestFieldStatsAction(Settings settings, RestController controller) { super(settings); - controller.registerHandler(GET, "/_field_stats", this); - controller.registerHandler(POST, "/_field_stats", this); - controller.registerHandler(GET, "/{index}/_field_stats", this); - controller.registerHandler(POST, "/{index}/_field_stats", this); + controller.registerAsDeprecatedHandler(GET, "/_field_stats", this, + deprecationMessage(), deprecationLogger); + controller.registerAsDeprecatedHandler(POST, "/_field_stats", this, + deprecationMessage(), deprecationLogger); + controller.registerAsDeprecatedHandler(GET, "/{index}/_field_stats", this, + deprecationMessage(), deprecationLogger); + controller.registerAsDeprecatedHandler(POST, "/{index}/_field_stats", this, + deprecationMessage(), deprecationLogger); + } + + static String deprecationMessage() { + return "[_field_stats] endpoint is deprecated! Use [_field_caps] instead or " + + "run a min/max aggregations on the desired fields."; } @Override - public void handleRequest(final RestRequest request, - final RestChannel channel, final NodeClient client) throws Exception { - if (RestActions.hasBodyContent(request) && request.hasParam("fields")) { + public RestChannelConsumer prepareRequest(final RestRequest request, + final NodeClient client) throws IOException { + if (request.hasContentOrSourceParam() && request.hasParam("fields")) { throw new IllegalArgumentException("can't specify a request body and [fields] request parameter, " + "either specify a request body or the [fields] request parameter"); } @@ -67,13 +72,15 @@ public void handleRequest(final RestRequest request, fieldStatsRequest.indices(Strings.splitStringByCommaToArray(request.param("index"))); fieldStatsRequest.indicesOptions(IndicesOptions.fromRequest(request, fieldStatsRequest.indicesOptions())); fieldStatsRequest.level(request.param("level", FieldStatsRequest.DEFAULT_LEVEL)); - if (RestActions.hasBodyContent(request)) { - fieldStatsRequest.source(RestActions.getRestContent(request)); + if (request.hasContentOrSourceParam()) { + try (XContentParser parser = request.contentOrSourceParamParser()) { + fieldStatsRequest.source(parser); + } } else { fieldStatsRequest.setFields(Strings.splitStringByCommaToArray(request.param("fields"))); } - client.fieldStats(fieldStatsRequest, new RestBuilderListener(channel) { + return channel -> client.fieldStats(fieldStatsRequest, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(FieldStatsResponse response, XContentBuilder builder) throws Exception { builder.startObject(); @@ -81,7 +88,7 @@ public RestResponse buildResponse(FieldStatsResponse response, XContentBuilder b builder.startObject("indices"); for (Map.Entry> entry1 : - 
response.getIndicesMergedFieldStats().entrySet()) { + response.getIndicesMergedFieldStats().entrySet()) { builder.startObject(entry1.getKey()); builder.startObject("fields"); for (Map.Entry entry2 : entry1.getValue().entrySet()) { diff --git a/core/src/main/java/org/elasticsearch/rest/action/RestMainAction.java b/core/src/main/java/org/elasticsearch/rest/action/RestMainAction.java index 56053f414b069..006b6d71db4f3 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/RestMainAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/RestMainAction.java @@ -23,12 +23,10 @@ import org.elasticsearch.action.main.MainRequest; import org.elasticsearch.action.main.MainResponse; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -39,12 +37,7 @@ import static org.elasticsearch.rest.RestRequest.Method.GET; import static org.elasticsearch.rest.RestRequest.Method.HEAD; -/** - * - */ public class RestMainAction extends BaseRestHandler { - - @Inject public RestMainAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/", this); @@ -52,8 +45,8 @@ public RestMainAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws Exception { - client.execute(MainAction.INSTANCE, new MainRequest(), new RestBuilderListener(channel) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { + return channel -> client.execute(MainAction.INSTANCE, new MainRequest(), new RestBuilderListener(channel) { @Override public RestResponse buildResponse(MainResponse mainResponse, XContentBuilder builder) throws Exception { return convertMainResponse(mainResponse, request, builder); @@ -63,9 +56,6 @@ public RestResponse buildResponse(MainResponse mainResponse, XContentBuilder bui static BytesRestResponse convertMainResponse(MainResponse response, RestRequest request, XContentBuilder builder) throws IOException { RestStatus status = response.isAvailable() ? 
RestStatus.OK : RestStatus.SERVICE_UNAVAILABLE; - if (request.method() == RestRequest.Method.HEAD) { - return new BytesRestResponse(status, builder); - } // Default to pretty printing, but allow ?pretty=false to disable if (request.hasParam("pretty") == false) { diff --git a/core/src/main/java/org/elasticsearch/rest/action/RestStatusToXContentListener.java b/core/src/main/java/org/elasticsearch/rest/action/RestStatusToXContentListener.java index f147d6bc00dc0..6abe61ea5edbe 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/RestStatusToXContentListener.java +++ b/core/src/main/java/org/elasticsearch/rest/action/RestStatusToXContentListener.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.rest.action; -import org.elasticsearch.common.xcontent.StatusToXContent; +import org.elasticsearch.common.xcontent.StatusToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BytesRestResponse; import org.elasticsearch.rest.RestChannel; @@ -30,7 +30,7 @@ /** * Content listener that extracts that {@link RestStatus} from the response. */ -public class RestStatusToXContentListener extends RestResponseListener { +public class RestStatusToXContentListener extends RestToXContentListener { private final Function extractLocation; /** @@ -52,17 +52,12 @@ public RestStatusToXContentListener(RestChannel channel, Function extends RestResponseListener { +public class RestToXContentListener extends RestResponseListener { public RestToXContentListener(RestChannel channel) { super(channel); @@ -41,10 +42,9 @@ public final RestResponse buildResponse(Response response) throws Exception { return buildResponse(response, channel.newBuilder()); } - public final RestResponse buildResponse(Response response, XContentBuilder builder) throws Exception { - builder.startObject(); + public RestResponse buildResponse(Response response, XContentBuilder builder) throws Exception { + assert response.isFragment() == false; //would be nice if we could make default methods final response.toXContent(builder, channel.request()); - builder.endObject(); return new BytesRestResponse(getStatus(response), builder); } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestCancelTasksAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestCancelTasksAction.java index 3c558fba937ee..28631501b7fd2 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestCancelTasksAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestCancelTasksAction.java @@ -21,48 +21,51 @@ import org.elasticsearch.action.admin.cluster.node.tasks.cancel.CancelTasksRequest; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.tasks.TaskId; +import java.io.IOException; +import java.util.function.Supplier; + import static org.elasticsearch.rest.RestRequest.Method.POST; import static org.elasticsearch.rest.action.admin.cluster.RestListTasksAction.listTasksResponseListener; public class RestCancelTasksAction extends BaseRestHandler { - private final ClusterService 
clusterService; + private final Supplier nodesInCluster; - @Inject - public RestCancelTasksAction(Settings settings, RestController controller, ClusterService clusterService) { + public RestCancelTasksAction(Settings settings, RestController controller, Supplier nodesInCluster) { super(settings); - this.clusterService = clusterService; + this.nodesInCluster = nodesInCluster; controller.registerHandler(POST, "/_tasks/_cancel", this); - controller.registerHandler(POST, "/_tasks/{taskId}/_cancel", this); + controller.registerHandler(POST, "/_tasks/{task_id}/_cancel", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { - String[] nodesIds = Strings.splitStringByCommaToArray(request.param("nodeId")); - TaskId taskId = new TaskId(request.param("taskId")); - String[] actions = Strings.splitStringByCommaToArray(request.param("actions")); - TaskId parentTaskId = new TaskId(request.param("parent_task_id")); + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { + final String[] nodesIds = Strings.splitStringByCommaToArray(request.param("nodes")); + final TaskId taskId = new TaskId(request.param("task_id")); + final String[] actions = Strings.splitStringByCommaToArray(request.param("actions")); + final TaskId parentTaskId = new TaskId(request.param("parent_task_id")); + final String groupBy = request.param("group_by", "nodes"); CancelTasksRequest cancelTasksRequest = new CancelTasksRequest(); cancelTasksRequest.setTaskId(taskId); - cancelTasksRequest.setNodesIds(nodesIds); + cancelTasksRequest.setNodes(nodesIds); cancelTasksRequest.setActions(actions); cancelTasksRequest.setParentTaskId(parentTaskId); - client.admin().cluster().cancelTasks(cancelTasksRequest, listTasksResponseListener(clusterService, channel)); + return channel -> + client.admin().cluster().cancelTasks(cancelTasksRequest, listTasksResponseListener(nodesInCluster, groupBy, channel)); } @Override public boolean canTripCircuitBreaker() { return false; } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterAllocationExplainAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterAllocationExplainAction.java index bae1d1b671491..8855e65f976aa 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterAllocationExplainAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterAllocationExplainAction.java @@ -19,36 +19,27 @@ package org.elasticsearch.rest.action.admin.cluster; -import java.io.IOException; - -import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.action.admin.cluster.allocation.ClusterAllocationExplainRequest; import org.elasticsearch.action.admin.cluster.allocation.ClusterAllocationExplainResponse; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import 
org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.RestStatus; -import org.elasticsearch.rest.action.RestActions; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; + /** * Class handling cluster allocation explanation at the REST level */ public class RestClusterAllocationExplainAction extends BaseRestHandler { - - @Inject public RestClusterAllocationExplainAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.GET, "/_cluster/allocation/explain", this); @@ -56,36 +47,26 @@ public RestClusterAllocationExplainAction(Settings settings, RestController cont } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { ClusterAllocationExplainRequest req; - if (RestActions.hasBodyContent(request) == false) { + if (request.hasContentOrSourceParam() == false) { // Empty request signals "explain the first unassigned shard you find" req = new ClusterAllocationExplainRequest(); } else { - BytesReference content = RestActions.getRestContent(request); - try (XContentParser parser = XContentFactory.xContent(content).createParser(content)) { + try (XContentParser parser = request.contentOrSourceParamParser()) { req = ClusterAllocationExplainRequest.parse(parser); - } catch (IOException e) { - logger.debug("failed to parse allocation explain request", e); - channel.sendResponse( - new BytesRestResponse(ExceptionsHelper.status(e), BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)); - return; } } - try { - req.includeYesDecisions(request.paramAsBoolean("include_yes_decisions", false)); - req.includeDiskInfo(request.paramAsBoolean("include_disk_info", false)); - client.admin().cluster().allocationExplain(req, new RestBuilderListener(channel) { + req.includeYesDecisions(request.paramAsBoolean("include_yes_decisions", false)); + req.includeDiskInfo(request.paramAsBoolean("include_disk_info", false)); + return channel -> client.admin().cluster().allocationExplain(req, + new RestBuilderListener(channel) { @Override - public RestResponse buildResponse(ClusterAllocationExplainResponse response, XContentBuilder builder) throws Exception { + public RestResponse buildResponse(ClusterAllocationExplainResponse response, XContentBuilder builder) throws IOException { response.getExplanation().toXContent(builder, ToXContent.EMPTY_PARAMS); return new BytesRestResponse(RestStatus.OK, builder); } }); - } catch (Exception e) { - logger.error("failed to explain allocation", e); - channel.sendResponse(new BytesRestResponse(ExceptionsHelper.status(e), BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)); - } } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterGetSettingsAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterGetSettingsAction.java index ca2cbaf79face..5c3be8f4347f2 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterGetSettingsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterGetSettingsAction.java @@ -24,7 +24,6 @@ import org.elasticsearch.client.Requests; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.cluster.ClusterState; -import 
org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsFilter; @@ -32,7 +31,6 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -40,13 +38,13 @@ import org.elasticsearch.rest.action.RestBuilderListener; import java.io.IOException; +import java.util.Set; public class RestClusterGetSettingsAction extends BaseRestHandler { private final ClusterSettings clusterSettings; private final SettingsFilter settingsFilter; - @Inject public RestClusterGetSettingsAction(Settings settings, RestController controller, ClusterSettings clusterSettings, SettingsFilter settingsFilter) { super(settings); @@ -56,13 +54,13 @@ public RestClusterGetSettingsAction(Settings settings, RestController controller } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { ClusterStateRequest clusterStateRequest = Requests.clusterStateRequest() .routingTable(false) .nodes(false); final boolean renderDefaults = request.paramAsBoolean("include_defaults", false); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); - client.admin().cluster().state(clusterStateRequest, new RestBuilderListener(channel) { + return channel -> client.admin().cluster().state(clusterStateRequest, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(ClusterStateResponse response, XContentBuilder builder) throws Exception { return new BytesRestResponse(RestStatus.OK, renderResponse(response.getState(), renderDefaults, builder, request)); @@ -70,6 +68,11 @@ public RestResponse buildResponse(ClusterStateResponse response, XContentBuilder }); } + @Override + protected Set responseParams() { + return Settings.FORMAT_PARAMS; + } + @Override public boolean canTripCircuitBreaker() { return false; diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterHealthAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterHealthAction.java index 5f64bcf8aa39f..d8dd34d4a2897 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterHealthAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterHealthAction.java @@ -20,27 +20,25 @@ package org.elasticsearch.rest.action.admin.cluster; import org.elasticsearch.action.admin.cluster.health.ClusterHealthRequest; -import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse; import org.elasticsearch.action.support.ActiveShardCount; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.cluster.health.ClusterHealthStatus; import org.elasticsearch.common.Priority; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import 
org.elasticsearch.rest.action.RestStatusToXContentListener; +import java.io.IOException; +import java.util.Collections; import java.util.Locale; +import java.util.Set; import static org.elasticsearch.client.Requests.clusterHealthRequest; public class RestClusterHealthAction extends BaseRestHandler { - - @Inject public RestClusterHealthAction(Settings settings, RestController controller) { super(settings); @@ -49,7 +47,7 @@ public RestClusterHealthAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { ClusterHealthRequest clusterHealthRequest = clusterHealthRequest(Strings.splitStringByCommaToArray(request.param("index"))); clusterHealthRequest.local(request.paramAsBoolean("local", clusterHealthRequest.local())); clusterHealthRequest.masterNodeTimeout(request.paramAsTime("master_timeout", clusterHealthRequest.masterNodeTimeout())); @@ -73,11 +71,19 @@ public void handleRequest(final RestRequest request, final RestChannel channel, if (request.param("wait_for_events") != null) { clusterHealthRequest.waitForEvents(Priority.valueOf(request.param("wait_for_events").toUpperCase(Locale.ROOT))); } - client.admin().cluster().health(clusterHealthRequest, new RestStatusToXContentListener(channel)); + return channel -> client.admin().cluster().health(clusterHealthRequest, new RestStatusToXContentListener<>(channel)); + } + + private static final Set RESPONSE_PARAMS = Collections.singleton("level"); + + @Override + protected Set responseParams() { + return RESPONSE_PARAMS; } @Override public boolean canTripCircuitBreaker() { return false; } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterRerouteAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterRerouteAction.java index 1a1e78b172040..c567fbb11f301 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterRerouteAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterRerouteAction.java @@ -21,40 +21,36 @@ import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteRequest; import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteResponse; -import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.client.Requests; +import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.routing.allocation.command.AllocationCommandRegistry; import org.elasticsearch.cluster.routing.allocation.command.AllocationCommands; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsFilter; import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.ObjectParser.ValueType; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentHelper; -import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.rest.BaseRestHandler; -import 
org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; import java.io.IOException; +import java.util.Collections; import java.util.EnumSet; +import java.util.HashSet; +import java.util.Set; /** */ public class RestClusterRerouteAction extends BaseRestHandler { - private static final ObjectParser PARSER = new ObjectParser<>("cluster_reroute"); + private static final ObjectParser PARSER = new ObjectParser<>("cluster_reroute"); static { - PARSER.declareField((p, v, c) -> v.commands(AllocationCommands.fromXContent(p, c.getParseFieldMatcher(), c.registry)), - new ParseField("commands"), ValueType.OBJECT_ARRAY); + PARSER.declareField((p, v, c) -> v.commands(AllocationCommands.fromXContent(p)), new ParseField("commands"), + ValueType.OBJECT_ARRAY); PARSER.declareBoolean(ClusterRerouteRequest::dryRun, new ParseField("dry_run")); } @@ -62,67 +58,61 @@ public class RestClusterRerouteAction extends BaseRestHandler { .arrayToCommaDelimitedString(EnumSet.complementOf(EnumSet.of(ClusterState.Metric.METADATA)).toArray()); private final SettingsFilter settingsFilter; - private final AllocationCommandRegistry registry; - @Inject - public RestClusterRerouteAction(Settings settings, RestController controller, SettingsFilter settingsFilter, - AllocationCommandRegistry registry) { + public RestClusterRerouteAction(Settings settings, RestController controller, SettingsFilter settingsFilter) { super(settings); this.settingsFilter = settingsFilter; - this.registry = registry; controller.registerHandler(RestRequest.Method.POST, "/_cluster/reroute", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws Exception { - ClusterRerouteRequest clusterRerouteRequest = createRequest(request, registry, parseFieldMatcher); - client.admin().cluster().reroute(clusterRerouteRequest, new AcknowledgedRestListener(channel) { - @Override - protected void addCustomFields(XContentBuilder builder, ClusterRerouteResponse response) throws IOException { - builder.startObject("state"); - // by default, return everything but metadata - if (request.param("metric") == null) { - request.params().put("metric", DEFAULT_METRICS); - } - settingsFilter.addFilterSettingParams(request); - response.getState().toXContent(builder, request); - builder.endObject(); - if (clusterRerouteRequest.explain()) { - assert response.getExplanations() != null; - response.getExplanations().toXContent(builder, ToXContent.EMPTY_PARAMS); + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { + ClusterRerouteRequest clusterRerouteRequest = createRequest(request); + + // by default, return everything but metadata + final String metric = request.param("metric"); + if (metric == null) { + request.params().put("metric", DEFAULT_METRICS); + } + + return channel -> + client.admin().cluster().reroute(clusterRerouteRequest, new AcknowledgedRestListener(channel) { + @Override + protected void addCustomFields(XContentBuilder builder, ClusterRerouteResponse response) throws IOException { + builder.startObject("state"); + settingsFilter.addFilterSettingParams(request); + response.getState().toXContent(builder, request); + builder.endObject(); + if (clusterRerouteRequest.explain()) { + assert response.getExplanations() != null; + response.getExplanations().toXContent(builder, ToXContent.EMPTY_PARAMS); + } } - } 
}); } - public static ClusterRerouteRequest createRequest(RestRequest request, AllocationCommandRegistry registry, - ParseFieldMatcher parseFieldMatcher) throws IOException { + private static final Set RESPONSE_PARAMS; + + static { + final Set responseParams = new HashSet<>(); + responseParams.add("metric"); + responseParams.addAll(Settings.FORMAT_PARAMS); + RESPONSE_PARAMS = Collections.unmodifiableSet(responseParams); + } + + @Override + protected Set responseParams() { + return RESPONSE_PARAMS; + } + + public static ClusterRerouteRequest createRequest(RestRequest request) throws IOException { ClusterRerouteRequest clusterRerouteRequest = Requests.clusterRerouteRequest(); clusterRerouteRequest.dryRun(request.paramAsBoolean("dry_run", clusterRerouteRequest.dryRun())); clusterRerouteRequest.explain(request.paramAsBoolean("explain", clusterRerouteRequest.explain())); clusterRerouteRequest.timeout(request.paramAsTime("timeout", clusterRerouteRequest.timeout())); clusterRerouteRequest.setRetryFailed(request.paramAsBoolean("retry_failed", clusterRerouteRequest.isRetryFailed())); clusterRerouteRequest.masterNodeTimeout(request.paramAsTime("master_timeout", clusterRerouteRequest.masterNodeTimeout())); - if (request.hasContent()) { - try (XContentParser parser = XContentHelper.createParser(request.content())) { - PARSER.parse(parser, clusterRerouteRequest, new ParseContext(registry, parseFieldMatcher)); - } - } + request.applyContentParser(parser -> PARSER.parse(parser, clusterRerouteRequest, null)); return clusterRerouteRequest; } - - private static class ParseContext implements ParseFieldMatcherSupplier { - private final AllocationCommandRegistry registry; - private final ParseFieldMatcher parseFieldMatcher; - - private ParseContext(AllocationCommandRegistry registry, ParseFieldMatcher parseFieldMatcher) { - this.registry = registry; - this.parseFieldMatcher = parseFieldMatcher; - } - - @Override - public ParseFieldMatcher getParseFieldMatcher() { - return parseFieldMatcher; - } - } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterSearchShardsAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterSearchShardsAction.java index 754b9b0d63342..a927cb8b4ac7a 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterSearchShardsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterSearchShardsAction.java @@ -20,27 +20,22 @@ package org.elasticsearch.rest.action.admin.cluster; import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsRequest; -import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsResponse; import org.elasticsearch.action.support.IndicesOptions; -import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.client.Requests; +import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestToXContentListener; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.GET; import static org.elasticsearch.rest.RestRequest.Method.POST; -/** - */ public class RestClusterSearchShardsAction extends BaseRestHandler { - - @Inject public 
RestClusterSearchShardsAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_search_shards", this); @@ -52,16 +47,19 @@ public RestClusterSearchShardsAction(Settings settings, RestController controlle } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { String[] indices = Strings.splitStringByCommaToArray(request.param("index")); final ClusterSearchShardsRequest clusterSearchShardsRequest = Requests.clusterSearchShardsRequest(indices); clusterSearchShardsRequest.local(request.paramAsBoolean("local", clusterSearchShardsRequest.local())); - clusterSearchShardsRequest.types(Strings.splitStringByCommaToArray(request.param("type"))); + if (request.hasParam("type")) { + String type = request.param("type"); + deprecationLogger.deprecated("type [" + type + "] doesn't have any effect in the search shards api, " + + "it should be rather omitted"); + } clusterSearchShardsRequest.routing(request.param("routing")); clusterSearchShardsRequest.preference(request.param("preference")); clusterSearchShardsRequest.indicesOptions(IndicesOptions.fromRequest(request, clusterSearchShardsRequest.indicesOptions())); - - client.admin().cluster().searchShards(clusterSearchShardsRequest, new RestToXContentListener(channel)); + return channel -> client.admin().cluster().searchShards(clusterSearchShardsRequest, new RestToXContentListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterStateAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterStateAction.java index fab2ee0062f1a..fe99d544d46e3 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterStateAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterStateAction.java @@ -22,30 +22,31 @@ import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest; import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse; import org.elasticsearch.action.support.IndicesOptions; -import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.client.Requests; +import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsFilter; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; +import java.util.Collections; import java.util.EnumSet; +import java.util.HashSet; +import java.util.Set; public class RestClusterStateAction extends BaseRestHandler { private final SettingsFilter settingsFilter; - @Inject public RestClusterStateAction(Settings settings, RestController controller, SettingsFilter settingsFilter) { super(settings); controller.registerHandler(RestRequest.Method.GET, "/_cluster/state", this); @@ -56,7 +57,7 @@ public 
RestClusterStateAction(Settings settings, RestController controller, Sett } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { final ClusterStateRequest clusterStateRequest = Requests.clusterStateRequest(); clusterStateRequest.indicesOptions(IndicesOptions.fromRequest(request, clusterStateRequest.indicesOptions())); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); @@ -84,7 +85,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel, } settingsFilter.addFilterSettingParams(request); - client.admin().cluster().state(clusterStateRequest, new RestBuilderListener(channel) { + return channel -> client.admin().cluster().state(clusterStateRequest, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(ClusterStateResponse response, XContentBuilder builder) throws Exception { builder.startObject(); @@ -96,6 +97,20 @@ public RestResponse buildResponse(ClusterStateResponse response, XContentBuilder }); } + private static final Set RESPONSE_PARAMS; + + static { + final Set responseParams = new HashSet<>(); + responseParams.add("metric"); + responseParams.addAll(Settings.FORMAT_PARAMS); + RESPONSE_PARAMS = Collections.unmodifiableSet(responseParams); + } + + @Override + protected Set responseParams() { + return RESPONSE_PARAMS; + } + @Override public boolean canTripCircuitBreaker() { return false; diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterStatsAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterStatsAction.java index 7ef05d04553a2..e28c61776d17c 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterStatsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterStatsAction.java @@ -21,20 +21,18 @@ import org.elasticsearch.action.admin.cluster.stats.ClusterStatsRequest; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestActions.NodesResponseRestListener; +import java.io.IOException; + /** * */ public class RestClusterStatsAction extends BaseRestHandler { - - @Inject public RestClusterStatsAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.GET, "/_cluster/stats", this); @@ -42,10 +40,10 @@ public RestClusterStatsAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { ClusterStatsRequest clusterStatsRequest = new ClusterStatsRequest().nodesIds(request.paramAsStringArray("nodeId", null)); clusterStatsRequest.timeout(request.param("timeout")); - client.admin().cluster().clusterStats(clusterStatsRequest, new NodesResponseRestListener<>(channel)); + return channel -> client.admin().cluster().clusterStats(clusterStatsRequest, new NodesResponseRestListener<>(channel)); } @Override diff 
--git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterUpdateSettingsAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterUpdateSettingsAction.java index 8de725dbe798c..02b62b8e4bf9e 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterUpdateSettingsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterUpdateSettingsAction.java @@ -21,38 +21,34 @@ import org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest; import org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsResponse; -import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.client.Requests; -import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; import java.io.IOException; import java.util.Map; +import java.util.Set; public class RestClusterUpdateSettingsAction extends BaseRestHandler { - - @Inject public RestClusterUpdateSettingsAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.PUT, "/_cluster/settings", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws Exception { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { final ClusterUpdateSettingsRequest clusterUpdateSettingsRequest = Requests.clusterUpdateSettingsRequest(); clusterUpdateSettingsRequest.timeout(request.paramAsTime("timeout", clusterUpdateSettingsRequest.timeout())); clusterUpdateSettingsRequest.masterNodeTimeout( request.paramAsTime("master_timeout", clusterUpdateSettingsRequest.masterNodeTimeout())); Map source; - try (XContentParser parser = XContentFactory.xContent(request.content()).createParser(request.content())) { + try (XContentParser parser = request.contentParser()) { source = parser.map(); } if (source.containsKey("transient")) { @@ -62,7 +58,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel, clusterUpdateSettingsRequest.persistentSettings((Map) source.get("persistent")); } - client.admin().cluster().updateSettings(clusterUpdateSettingsRequest, + return channel -> client.admin().cluster().updateSettings(clusterUpdateSettingsRequest, new AcknowledgedRestListener(channel) { @Override protected void addCustomFields(XContentBuilder builder, ClusterUpdateSettingsResponse response) throws IOException { @@ -77,6 +73,11 @@ protected void addCustomFields(XContentBuilder builder, ClusterUpdateSettingsRes }); } + @Override + protected Set responseParams() { + return Settings.FORMAT_PARAMS; + } + @Override public boolean canTripCircuitBreaker() { return false; diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestCreateSnapshotAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestCreateSnapshotAction.java index 96449131a61a7..bd0d7e2a9d73f 100644 --- 
a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestCreateSnapshotAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestCreateSnapshotAction.java @@ -20,16 +20,15 @@ package org.elasticsearch.rest.action.admin.cluster; import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotRequest; -import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestToXContentListener; +import java.io.IOException; + import static org.elasticsearch.client.Requests.createSnapshotRequest; import static org.elasticsearch.rest.RestRequest.Method.POST; import static org.elasticsearch.rest.RestRequest.Method.PUT; @@ -38,8 +37,6 @@ * Creates a new snapshot */ public class RestCreateSnapshotAction extends BaseRestHandler { - - @Inject public RestCreateSnapshotAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(PUT, "/_snapshot/{repository}/{snapshot}", this); @@ -47,11 +44,11 @@ public RestCreateSnapshotAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { CreateSnapshotRequest createSnapshotRequest = createSnapshotRequest(request.param("repository"), request.param("snapshot")); - createSnapshotRequest.source(request.content().utf8ToString()); + request.applyContentParser(p -> createSnapshotRequest.source(p.mapOrdered())); createSnapshotRequest.masterNodeTimeout(request.paramAsTime("master_timeout", createSnapshotRequest.masterNodeTimeout())); createSnapshotRequest.waitForCompletion(request.paramAsBoolean("wait_for_completion", false)); - client.admin().cluster().createSnapshot(createSnapshotRequest, new RestToXContentListener(channel)); + return channel -> client.admin().cluster().createSnapshot(createSnapshotRequest, new RestToXContentListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestDeleteRepositoryAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestDeleteRepositoryAction.java index 78d063bae00c4..2019e04be5597 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestDeleteRepositoryAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestDeleteRepositoryAction.java @@ -20,16 +20,15 @@ package org.elasticsearch.rest.action.admin.cluster; import org.elasticsearch.action.admin.cluster.repositories.delete.DeleteRepositoryRequest; -import org.elasticsearch.action.admin.cluster.repositories.delete.DeleteRepositoryResponse; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; +import java.io.IOException; + import static 
org.elasticsearch.client.Requests.deleteRepositoryRequest; import static org.elasticsearch.rest.RestRequest.Method.DELETE; @@ -37,19 +36,17 @@ * Unregisters a repository */ public class RestDeleteRepositoryAction extends BaseRestHandler { - - @Inject public RestDeleteRepositoryAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(DELETE, "/_snapshot/{repository}", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { DeleteRepositoryRequest deleteRepositoryRequest = deleteRepositoryRequest(request.param("repository")); deleteRepositoryRequest.masterNodeTimeout(request.paramAsTime("master_timeout", deleteRepositoryRequest.masterNodeTimeout())); deleteRepositoryRequest.timeout(request.paramAsTime("timeout", deleteRepositoryRequest.timeout())); deleteRepositoryRequest.masterNodeTimeout(request.paramAsTime("master_timeout", deleteRepositoryRequest.masterNodeTimeout())); - client.admin().cluster().deleteRepository(deleteRepositoryRequest, new AcknowledgedRestListener(channel)); + return channel -> client.admin().cluster().deleteRepository(deleteRepositoryRequest, new AcknowledgedRestListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestDeleteSnapshotAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestDeleteSnapshotAction.java index d001a1e90e5c1..a11d47278a8e8 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestDeleteSnapshotAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestDeleteSnapshotAction.java @@ -20,16 +20,15 @@ package org.elasticsearch.rest.action.admin.cluster; import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotRequest; -import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotResponse; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; +import java.io.IOException; + import static org.elasticsearch.client.Requests.deleteSnapshotRequest; import static org.elasticsearch.rest.RestRequest.Method.DELETE; @@ -37,17 +36,15 @@ * Deletes a snapshot */ public class RestDeleteSnapshotAction extends BaseRestHandler { - - @Inject public RestDeleteSnapshotAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(DELETE, "/_snapshot/{repository}/{snapshot}", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { DeleteSnapshotRequest deleteSnapshotRequest = deleteSnapshotRequest(request.param("repository"), request.param("snapshot")); deleteSnapshotRequest.masterNodeTimeout(request.paramAsTime("master_timeout", deleteSnapshotRequest.masterNodeTimeout())); - client.admin().cluster().deleteSnapshot(deleteSnapshotRequest, new AcknowledgedRestListener(channel)); + return channel -> 
client.admin().cluster().deleteSnapshot(deleteSnapshotRequest, new AcknowledgedRestListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestDeleteStoredScriptAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestDeleteStoredScriptAction.java index 212b42135e96c..a8f2567a64777 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestDeleteStoredScriptAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestDeleteStoredScriptAction.java @@ -20,38 +20,49 @@ import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptRequest; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.DELETE; public class RestDeleteStoredScriptAction extends BaseRestHandler { - @Inject public RestDeleteStoredScriptAction(Settings settings, RestController controller) { - this(settings, controller, true); - } - - protected RestDeleteStoredScriptAction(Settings settings, RestController controller, boolean registerDefaultHandlers) { super(settings); - if (registerDefaultHandlers) { - controller.registerHandler(DELETE, "/_scripts/{lang}/{id}", this); - } - } - protected String getScriptLang(RestRequest request) { - return request.param("lang"); + // Note {lang} is actually {id} in the first handler. It appears + // parameters as part of the path must be of the same ordering relative + // to name or they will not work as expected. + controller.registerHandler(DELETE, "/_scripts/{lang}", this); + controller.registerHandler(DELETE, "/_scripts/{lang}/{id}", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, NodeClient client) { - DeleteStoredScriptRequest deleteStoredScriptRequest = new DeleteStoredScriptRequest(getScriptLang(request), request.param("id")); - client.admin().cluster().deleteStoredScript(deleteStoredScriptRequest, new AcknowledgedRestListener<>(channel)); - } + public RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException { + String id = request.param("id"); + String lang = request.param("lang"); + // In the case where only {lang} is not null, we make it {id} because of + // name ordering issues in the handlers' paths. 
+ if (id == null) { + id = lang; + lang = null; + } + + if (lang != null) { + deprecationLogger.deprecated( + "specifying lang [" + lang + "] as part of the url path is deprecated"); + } + + DeleteStoredScriptRequest deleteStoredScriptRequest = new DeleteStoredScriptRequest(id, lang); + deleteStoredScriptRequest.timeout(request.paramAsTime("timeout", deleteStoredScriptRequest.timeout())); + deleteStoredScriptRequest.masterNodeTimeout(request.paramAsTime("master_timeout", deleteStoredScriptRequest.masterNodeTimeout())); + + return channel -> client.admin().cluster().deleteStoredScript(deleteStoredScriptRequest, new AcknowledgedRestListener<>(channel)); + } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestGetRepositoriesAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestGetRepositoriesAction.java index 802af3cb5b811..1af138400e310 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestGetRepositoriesAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestGetRepositoriesAction.java @@ -25,18 +25,19 @@ import org.elasticsearch.cluster.metadata.RepositoriesMetaData; import org.elasticsearch.cluster.metadata.RepositoryMetaData; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsFilter; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; +import java.util.Set; + import static org.elasticsearch.client.Requests.getRepositoryRequest; import static org.elasticsearch.rest.RestRequest.Method.GET; import static org.elasticsearch.rest.RestStatus.OK; @@ -48,7 +49,6 @@ public class RestGetRepositoriesAction extends BaseRestHandler { private final SettingsFilter settingsFilter; - @Inject public RestGetRepositoriesAction(Settings settings, RestController controller, SettingsFilter settingsFilter) { super(settings); controller.registerHandler(GET, "/_snapshot", this); @@ -57,23 +57,30 @@ public RestGetRepositoriesAction(Settings settings, RestController controller, S } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { final String[] repositories = request.paramAsStringArray("repository", Strings.EMPTY_ARRAY); GetRepositoriesRequest getRepositoriesRequest = getRepositoryRequest(repositories); getRepositoriesRequest.masterNodeTimeout(request.paramAsTime("master_timeout", getRepositoriesRequest.masterNodeTimeout())); getRepositoriesRequest.local(request.paramAsBoolean("local", getRepositoriesRequest.local())); settingsFilter.addFilterSettingParams(request); - client.admin().cluster().getRepositories(getRepositoriesRequest, new RestBuilderListener(channel) { - @Override - public RestResponse buildResponse(GetRepositoriesResponse response, XContentBuilder builder) throws Exception { - builder.startObject(); - for (RepositoryMetaData repositoryMetaData : response.repositories()) { - 
RepositoriesMetaData.toXContent(repositoryMetaData, builder, request); - } - builder.endObject(); + return channel -> + client.admin().cluster().getRepositories(getRepositoriesRequest, new RestBuilderListener(channel) { + @Override + public RestResponse buildResponse(GetRepositoriesResponse response, XContentBuilder builder) throws Exception { + builder.startObject(); + for (RepositoryMetaData repositoryMetaData : response.repositories()) { + RepositoriesMetaData.toXContent(repositoryMetaData, builder, request); + } + builder.endObject(); - return new BytesRestResponse(OK, builder); - } + return new BytesRestResponse(OK, builder); + } }); } + + @Override + protected Set responseParams() { + return Settings.FORMAT_PARAMS; + } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestGetSnapshotsAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestGetSnapshotsAction.java index 9e10a87bc037e..7348cb5896cb4 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestGetSnapshotsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestGetSnapshotsAction.java @@ -20,17 +20,16 @@ package org.elasticsearch.rest.action.admin.cluster; import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsRequest; -import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestToXContentListener; +import java.io.IOException; + import static org.elasticsearch.client.Requests.getSnapshotsRequest; import static org.elasticsearch.rest.RestRequest.Method.GET; @@ -38,23 +37,20 @@ * Returns information about snapshot */ public class RestGetSnapshotsAction extends BaseRestHandler { - - @Inject public RestGetSnapshotsAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_snapshot/{repository}/{snapshot}", this); } - @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { String repository = request.param("repository"); String[] snapshots = request.paramAsStringArray("snapshot", Strings.EMPTY_ARRAY); GetSnapshotsRequest getSnapshotsRequest = getSnapshotsRequest(repository).snapshots(snapshots); getSnapshotsRequest.ignoreUnavailable(request.paramAsBoolean("ignore_unavailable", getSnapshotsRequest.ignoreUnavailable())); - + getSnapshotsRequest.verbose(request.paramAsBoolean("verbose", getSnapshotsRequest.verbose())); getSnapshotsRequest.masterNodeTimeout(request.paramAsTime("master_timeout", getSnapshotsRequest.masterNodeTimeout())); - client.admin().cluster().getSnapshots(getSnapshotsRequest, new RestToXContentListener(channel)); + return channel -> client.admin().cluster().getSnapshots(getSnapshotsRequest, new RestToXContentListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestGetStoredScriptAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestGetStoredScriptAction.java index 
1185685c49a54..7af21a9fb45be 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestGetStoredScriptAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestGetStoredScriptAction.java @@ -21,68 +21,94 @@ import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptRequest; import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptResponse; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.rest.action.RestBuilderListener; +import org.elasticsearch.script.StoredScriptSource; + +import java.io.IOException; import static org.elasticsearch.rest.RestRequest.Method.GET; public class RestGetStoredScriptAction extends BaseRestHandler { - @Inject + public static final ParseField _ID_PARSE_FIELD = new ParseField("_id"); + + public static final ParseField FOUND_PARSE_FIELD = new ParseField("found"); + public RestGetStoredScriptAction(Settings settings, RestController controller) { - this(settings, controller, true); + super(settings); + + // Note {lang} is actually {id} in the first handler. It appears + // parameters as part of the path must be of the same ordering relative + // to name or they will not work as expected. + controller.registerHandler(GET, "/_scripts/{lang}", this); + controller.registerHandler(GET, "/_scripts/{lang}/{id}", this); } - protected RestGetStoredScriptAction(Settings settings, RestController controller, boolean registerDefaultHandlers) { - super(settings); - if (registerDefaultHandlers) { - controller.registerHandler(GET, "/_scripts/{lang}/{id}", this); + @Override + public RestChannelConsumer prepareRequest(final RestRequest request, NodeClient client) throws IOException { + String id; + String lang; + + // In the case where only {lang} is not null, we make it {id} because of + // name ordering issues in the handlers' paths. 
+ if (request.param("id") == null) { + id = request.param("lang");; + lang = null; + } else { + id = request.param("id"); + lang = request.param("lang"); } - } - protected String getScriptFieldName() { - return Fields.SCRIPT; - } + if (lang != null) { + deprecationLogger.deprecated( + "specifying lang [" + lang + "] as part of the url path is deprecated"); + } - protected String getScriptLang(RestRequest request) { - return request.param("lang"); - } + GetStoredScriptRequest getRequest = new GetStoredScriptRequest(id, lang); - @Override - public void handleRequest(final RestRequest request, final RestChannel channel, NodeClient client) { - final GetStoredScriptRequest getRequest = new GetStoredScriptRequest(getScriptLang(request), request.param("id")); - client.admin().cluster().getStoredScript(getRequest, new RestBuilderListener(channel) { + return channel -> client.admin().cluster().getStoredScript(getRequest, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(GetStoredScriptResponse response, XContentBuilder builder) throws Exception { builder.startObject(); - builder.field(Fields.LANG, getRequest.lang()); - builder.field(Fields._ID, getRequest.id()); - boolean found = response.getStoredScript() != null; - builder.field(Fields.FOUND, found); - RestStatus status = RestStatus.NOT_FOUND; + builder.field(_ID_PARSE_FIELD.getPreferredName(), id); + + if (lang != null) { + builder.field(StoredScriptSource.LANG_PARSE_FIELD.getPreferredName(), lang); + } + + StoredScriptSource source = response.getSource(); + boolean found = source != null; + builder.field(FOUND_PARSE_FIELD.getPreferredName(), found); + if (found) { - builder.field(getScriptFieldName(), response.getStoredScript()); - status = RestStatus.OK; + if (lang == null) { + builder.startObject(StoredScriptSource.SCRIPT_PARSE_FIELD.getPreferredName()); + builder.field(StoredScriptSource.LANG_PARSE_FIELD.getPreferredName(), source.getLang()); + builder.field(StoredScriptSource.SOURCE_PARSE_FIELD.getPreferredName(), source.getSource()); + + if (source.getOptions().isEmpty() == false) { + builder.field(StoredScriptSource.OPTIONS_PARSE_FIELD.getPreferredName(), source.getOptions()); + } + + builder.endObject(); + } else { + builder.field(StoredScriptSource.SCRIPT_PARSE_FIELD.getPreferredName(), source.getSource()); + } } + builder.endObject(); - return new BytesRestResponse(status, builder); + + return new BytesRestResponse(found ? 
RestStatus.OK : RestStatus.NOT_FOUND, builder); } }); } - - private static final class Fields { - private static final String SCRIPT = "script"; - private static final String LANG = "lang"; - private static final String _ID = "_id"; - private static final String FOUND = "found"; - } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestGetTaskAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestGetTaskAction.java index f1edf672010c1..e013970553fd3 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestGetTaskAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestGetTaskAction.java @@ -21,27 +21,26 @@ import org.elasticsearch.action.admin.cluster.node.tasks.get.GetTaskRequest; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestToXContentListener; import org.elasticsearch.tasks.TaskId; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.GET; public class RestGetTaskAction extends BaseRestHandler { - @Inject public RestGetTaskAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_tasks/{taskId}", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { TaskId taskId = new TaskId(request.param("taskId")); boolean waitForCompletion = request.paramAsBoolean("wait_for_completion", false); TimeValue timeout = request.paramAsTime("timeout", null); @@ -50,6 +49,6 @@ public void handleRequest(final RestRequest request, final RestChannel channel, getTaskRequest.setTaskId(taskId); getTaskRequest.setWaitForCompletion(waitForCompletion); getTaskRequest.setTimeout(timeout); - client.admin().cluster().getTask(getTaskRequest, new RestToXContentListener<>(channel)); + return channel -> client.admin().cluster().getTask(getTaskRequest, new RestToXContentListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestListTasksAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestListTasksAction.java index d5ff427e3d07d..4177386eff996 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestListTasksAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestListTasksAction.java @@ -23,9 +23,8 @@ import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksRequest; import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -40,29 +39,40 @@ import org.elasticsearch.rest.action.RestToXContentListener; import 
org.elasticsearch.tasks.TaskId; +import java.io.IOException; +import java.util.function.Supplier; + import static org.elasticsearch.rest.RestRequest.Method.GET; public class RestListTasksAction extends BaseRestHandler { - private final ClusterService clusterService; - @Inject - public RestListTasksAction(Settings settings, RestController controller, ClusterService clusterService) { + private final Supplier nodesInCluster; + + public RestListTasksAction(Settings settings, RestController controller, Supplier nodesInCluster) { super(settings); - this.clusterService = clusterService; + this.nodesInCluster = nodesInCluster; controller.registerHandler(GET, "/_tasks", this); } + @Override + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { + final ListTasksRequest listTasksRequest = generateListTasksRequest(request); + final String groupBy = request.param("group_by", "nodes"); + return channel -> client.admin().cluster().listTasks(listTasksRequest, + listTasksResponseListener(nodesInCluster, groupBy, channel)); + } + public static ListTasksRequest generateListTasksRequest(RestRequest request) { boolean detailed = request.paramAsBoolean("detailed", false); - String[] nodesIds = Strings.splitStringByCommaToArray(request.param("node_id")); + String[] nodes = Strings.splitStringByCommaToArray(request.param("nodes")); String[] actions = Strings.splitStringByCommaToArray(request.param("actions")); TaskId parentTaskId = new TaskId(request.param("parent_task_id")); boolean waitForCompletion = request.paramAsBoolean("wait_for_completion", false); TimeValue timeout = request.paramAsTime("timeout", null); ListTasksRequest listTasksRequest = new ListTasksRequest(); - listTasksRequest.setNodesIds(nodesIds); + listTasksRequest.setNodes(nodes); listTasksRequest.setDetailed(detailed); listTasksRequest.setActions(actions); listTasksRequest.setParentTaskId(parentTaskId); @@ -71,23 +81,19 @@ public static ListTasksRequest generateListTasksRequest(RestRequest request) { return listTasksRequest; } - @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { - client.admin().cluster().listTasks(generateListTasksRequest(request), listTasksResponseListener(clusterService, channel)); - } - /** * Standard listener for extensions of {@link ListTasksResponse} that supports {@code group_by=nodes}. 
*/ - public static ActionListener listTasksResponseListener(ClusterService clusterService, - RestChannel channel) { - String groupBy = channel.request().param("group_by", "nodes"); + public static ActionListener listTasksResponseListener( + Supplier nodesInCluster, + String groupBy, + final RestChannel channel) { if ("nodes".equals(groupBy)) { return new RestBuilderListener(channel) { @Override public RestResponse buildResponse(T response, XContentBuilder builder) throws Exception { builder.startObject(); - response.toXContentGroupedByNode(builder, channel.request(), clusterService.state().nodes()); + response.toXContentGroupedByNode(builder, channel.request(), nodesInCluster.get()); builder.endObject(); return new BytesRestResponse(RestStatus.OK, builder); } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestNodesHotThreadsAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestNodesHotThreadsAction.java index 87af57276c2b0..50d1856c1973d 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestNodesHotThreadsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestNodesHotThreadsAction.java @@ -24,24 +24,22 @@ import org.elasticsearch.action.admin.cluster.node.hotthreads.NodesHotThreadsResponse; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.rest.action.RestResponseListener; +import java.io.IOException; + /** */ public class RestNodesHotThreadsAction extends BaseRestHandler { - - @Inject public RestNodesHotThreadsAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.GET, "/_cluster/nodes/hotthreads", this); @@ -56,7 +54,7 @@ public RestNodesHotThreadsAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { String[] nodesIds = Strings.splitStringByCommaToArray(request.param("nodeId")); NodesHotThreadsRequest nodesHotThreadsRequest = new NodesHotThreadsRequest(nodesIds); nodesHotThreadsRequest.threads(request.paramAsInt("threads", nodesHotThreadsRequest.threads())); @@ -65,18 +63,20 @@ public void handleRequest(final RestRequest request, final RestChannel channel, nodesHotThreadsRequest.interval(TimeValue.parseTimeValue(request.param("interval"), nodesHotThreadsRequest.interval(), "interval")); nodesHotThreadsRequest.snapshots(request.paramAsInt("snapshots", nodesHotThreadsRequest.snapshots())); nodesHotThreadsRequest.timeout(request.param("timeout")); - client.admin().cluster().nodesHotThreads(nodesHotThreadsRequest, new RestResponseListener(channel) { - @Override - public RestResponse buildResponse(NodesHotThreadsResponse response) throws Exception { - StringBuilder sb = new StringBuilder(); - for (NodeHotThreads node : response.getNodes()) { - sb.append("::: 
").append(node.getNode().toString()).append("\n"); - Strings.spaceify(3, node.getHotThreads(), sb); - sb.append('\n'); - } - return new BytesRestResponse(RestStatus.OK, sb.toString()); - } - }); + return channel -> client.admin().cluster().nodesHotThreads( + nodesHotThreadsRequest, + new RestResponseListener(channel) { + @Override + public RestResponse buildResponse(NodesHotThreadsResponse response) throws Exception { + StringBuilder sb = new StringBuilder(); + for (NodeHotThreads node : response.getNodes()) { + sb.append("::: ").append(node.getNode().toString()).append("\n"); + Strings.spaceify(3, node.getHotThreads(), sb); + sb.append('\n'); + } + return new BytesRestResponse(RestStatus.OK, sb.toString()); + } + }); } @Override diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestNodesInfoAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestNodesInfoAction.java index c45709d07d5fa..dfe3b08697c97 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestNodesInfoAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestNodesInfoAction.java @@ -22,16 +22,15 @@ import org.elasticsearch.action.admin.cluster.node.info.NodesInfoRequest; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsFilter; import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestActions.NodesResponseRestListener; +import java.io.IOException; import java.util.Set; import static org.elasticsearch.rest.RestRequest.Method.GET; @@ -51,11 +50,10 @@ public class RestNodesInfoAction extends BaseRestHandler { private final SettingsFilter settingsFilter; - @Inject public RestNodesInfoAction(Settings settings, RestController controller, SettingsFilter settingsFilter) { super(settings); controller.registerHandler(GET, "/_nodes", this); - // this endpoint is used for metrics, not for nodeIds, like /_nodes/fs + // this endpoint is used for metrics, not for node IDs, like /_nodes/fs controller.registerHandler(GET, "/_nodes/{nodeId}", this); controller.registerHandler(GET, "/_nodes/{nodeId}/{metrics}", this); // added this endpoint to be aligned with stats @@ -65,7 +63,7 @@ public RestNodesInfoAction(Settings settings, RestController controller, Setting } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { String[] nodeIds; Set metrics; @@ -108,7 +106,12 @@ public void handleRequest(final RestRequest request, final RestChannel channel, settingsFilter.addFilterSettingParams(request); - client.admin().cluster().nodesInfo(nodesInfoRequest, new NodesResponseRestListener<>(channel)); + return channel -> client.admin().cluster().nodesInfo(nodesInfoRequest, new NodesResponseRestListener<>(channel)); + } + + @Override + protected Set responseParams() { + return Settings.FORMAT_PARAMS; } @Override diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestNodesStatsAction.java 
b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestNodesStatsAction.java index be6847f1b5281..2ad431f1ea5d7 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestNodesStatsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestNodesStatsAction.java @@ -24,21 +24,24 @@ import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags.Flag; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestActions.NodesResponseRestListener; +import java.io.IOException; +import java.util.Collections; +import java.util.HashMap; +import java.util.Locale; +import java.util.Map; import java.util.Set; +import java.util.TreeSet; +import java.util.function.Consumer; import static org.elasticsearch.rest.RestRequest.Method.GET; public class RestNodesStatsAction extends BaseRestHandler { - - @Inject public RestNodesStatsAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_nodes/stats", this); @@ -47,13 +50,42 @@ public RestNodesStatsAction(Settings settings, RestController controller) { controller.registerHandler(GET, "/_nodes/stats/{metric}", this); controller.registerHandler(GET, "/_nodes/{nodeId}/stats/{metric}", this); - controller.registerHandler(GET, "/_nodes/stats/{metric}/{indexMetric}", this); + controller.registerHandler(GET, "/_nodes/stats/{metric}/{index_metric}", this); + + controller.registerHandler(GET, "/_nodes/{nodeId}/stats/{metric}/{index_metric}", this); + } + + static final Map> METRICS; + + static { + final Map> metrics = new HashMap<>(); + metrics.put("os", r -> r.os(true)); + metrics.put("jvm", r -> r.jvm(true)); + metrics.put("thread_pool", r -> r.threadPool(true)); + metrics.put("fs", r -> r.fs(true)); + metrics.put("transport", r -> r.transport(true)); + metrics.put("http", r -> r.http(true)); + metrics.put("indices", r -> r.indices(true)); + metrics.put("process", r -> r.process(true)); + metrics.put("breaker", r -> r.breaker(true)); + metrics.put("script", r -> r.script(true)); + metrics.put("discovery", r -> r.discovery(true)); + metrics.put("ingest", r -> r.ingest(true)); + METRICS = Collections.unmodifiableMap(metrics); + } + + static final Map> FLAGS; - controller.registerHandler(GET, "/_nodes/{nodeId}/stats/{metric}/{indexMetric}", this); + static { + final Map> flags = new HashMap<>(); + for (final Flag flag : CommonStatsFlags.Flag.values()) { + flags.put(flag.getRestName(), f -> f.set(flag, true)); + } + FLAGS = Collections.unmodifiableMap(flags); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { String[] nodesIds = Strings.splitStringByCommaToArray(request.param("nodeId")); Set metrics = Strings.splitStringByCommaToSet(request.param("metric", "_all")); @@ -61,35 +93,78 @@ public void handleRequest(final RestRequest request, final RestChannel channel, nodesStatsRequest.timeout(request.param("timeout")); if (metrics.size() == 1 && metrics.contains("_all")) { + if (request.hasParam("index_metric")) { + throw new 
IllegalArgumentException( + String.format( + Locale.ROOT, + "request [%s] contains index metrics [%s] but all stats requested", + request.path(), + request.param("index_metric"))); + } nodesStatsRequest.all(); nodesStatsRequest.indices(CommonStatsFlags.ALL); + } else if (metrics.contains("_all")) { + throw new IllegalArgumentException( + String.format(Locale.ROOT, + "request [%s] contains _all and individual metrics [%s]", + request.path(), + request.param("metric"))); } else { nodesStatsRequest.clear(); - nodesStatsRequest.os(metrics.contains("os")); - nodesStatsRequest.jvm(metrics.contains("jvm")); - nodesStatsRequest.threadPool(metrics.contains("thread_pool")); - nodesStatsRequest.fs(metrics.contains("fs")); - nodesStatsRequest.transport(metrics.contains("transport")); - nodesStatsRequest.http(metrics.contains("http")); - nodesStatsRequest.indices(metrics.contains("indices")); - nodesStatsRequest.process(metrics.contains("process")); - nodesStatsRequest.breaker(metrics.contains("breaker")); - nodesStatsRequest.script(metrics.contains("script")); - nodesStatsRequest.discovery(metrics.contains("discovery")); - nodesStatsRequest.ingest(metrics.contains("ingest")); + + // use a sorted set so the unrecognized parameters appear in a reliable sorted order + final Set invalidMetrics = new TreeSet<>(); + for (final String metric : metrics) { + final Consumer handler = METRICS.get(metric); + if (handler != null) { + handler.accept(nodesStatsRequest); + } else { + invalidMetrics.add(metric); + } + } + + if (!invalidMetrics.isEmpty()) { + throw new IllegalArgumentException(unrecognized(request, invalidMetrics, METRICS.keySet(), "metric")); + } // check for index specific metrics if (metrics.contains("indices")) { - Set indexMetrics = Strings.splitStringByCommaToSet(request.param("indexMetric", "_all")); + Set indexMetrics = Strings.splitStringByCommaToSet(request.param("index_metric", "_all")); if (indexMetrics.size() == 1 && indexMetrics.contains("_all")) { nodesStatsRequest.indices(CommonStatsFlags.ALL); } else { CommonStatsFlags flags = new CommonStatsFlags(); - for (Flag flag : CommonStatsFlags.Flag.values()) { - flags.set(flag, indexMetrics.contains(flag.getRestName())); + flags.clear(); + // use a sorted set so the unrecognized parameters appear in a reliable sorted order + final Set invalidIndexMetrics = new TreeSet<>(); + for (final String indexMetric : indexMetrics) { + final Consumer handler = FLAGS.get(indexMetric); + if (handler != null) { + handler.accept(flags); + } else { + invalidIndexMetrics.add(indexMetric); + } + } + + if (invalidIndexMetrics.contains("percolate")) { + deprecationLogger.deprecated( + "percolate stats are no longer available and requests for percolate stats will fail starting in 6.0.0"); + invalidIndexMetrics.remove("percolate"); + } + + if (!invalidIndexMetrics.isEmpty()) { + throw new IllegalArgumentException(unrecognized(request, invalidIndexMetrics, FLAGS.keySet(), "index metric")); } + nodesStatsRequest.indices(flags); } + } else if (request.hasParam("index_metric")) { + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "request [%s] contains index metrics [%s] but indices stats not requested", + request.path(), + request.param("index_metric"))); } } @@ -107,15 +182,23 @@ public void handleRequest(final RestRequest request, final RestChannel channel, if (nodesStatsRequest.indices().isSet(Flag.Indexing) && (request.hasParam("types"))) { nodesStatsRequest.indices().types(request.paramAsStringArray("types", null)); } - if 
(nodesStatsRequest.indices().isSet(Flag.Segments) && (request.hasParam("include_segment_file_sizes"))) { - nodesStatsRequest.indices().includeSegmentFileSizes(true); + if (nodesStatsRequest.indices().isSet(Flag.Segments)) { + nodesStatsRequest.indices().includeSegmentFileSizes(request.paramAsBoolean("include_segment_file_sizes", false)); } - client.admin().cluster().nodesStats(nodesStatsRequest, new NodesResponseRestListener<>(channel)); + return channel -> client.admin().cluster().nodesStats(nodesStatsRequest, new NodesResponseRestListener<>(channel)); + } + + private final Set RESPONSE_PARAMS = Collections.singleton("level"); + + @Override + protected Set responseParams() { + return RESPONSE_PARAMS; } @Override public boolean canTripCircuitBreaker() { return false; } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPendingClusterTasksAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPendingClusterTasksAction.java index d1cb65092ce2d..29b2b72895bb9 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPendingClusterTasksAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPendingClusterTasksAction.java @@ -21,27 +21,25 @@ import org.elasticsearch.action.admin.cluster.tasks.PendingClusterTasksRequest; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestToXContentListener; -public class RestPendingClusterTasksAction extends BaseRestHandler { +import java.io.IOException; - @Inject +public class RestPendingClusterTasksAction extends BaseRestHandler { public RestPendingClusterTasksAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.GET, "/_cluster/pending_tasks", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { PendingClusterTasksRequest pendingClusterTasksRequest = new PendingClusterTasksRequest(); pendingClusterTasksRequest.masterNodeTimeout(request.paramAsTime("master_timeout", pendingClusterTasksRequest.masterNodeTimeout())); pendingClusterTasksRequest.local(request.paramAsBoolean("local", pendingClusterTasksRequest.local())); - client.admin().cluster().pendingClusterTasks(pendingClusterTasksRequest, new RestToXContentListener<>(channel)); + return channel -> client.admin().cluster().pendingClusterTasks(pendingClusterTasksRequest, new RestToXContentListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPutRepositoryAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPutRepositoryAction.java index 002e1bfdc951b..afd2fd851aead 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPutRepositoryAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPutRepositoryAction.java @@ -20,16 +20,16 @@ package org.elasticsearch.rest.action.admin.cluster; import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryRequest; -import 
org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryResponse; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; +import java.io.IOException; + import static org.elasticsearch.client.Requests.putRepositoryRequest; import static org.elasticsearch.rest.RestRequest.Method.POST; import static org.elasticsearch.rest.RestRequest.Method.PUT; @@ -38,8 +38,6 @@ * Registers repositories */ public class RestPutRepositoryAction extends BaseRestHandler { - - @Inject public RestPutRepositoryAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(PUT, "/_snapshot/{repository}", this); @@ -48,12 +46,14 @@ public RestPutRepositoryAction(Settings settings, RestController controller) { @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { PutRepositoryRequest putRepositoryRequest = putRepositoryRequest(request.param("repository")); - putRepositoryRequest.source(request.content().utf8ToString()); + try (XContentParser parser = request.contentParser()) { + putRepositoryRequest.source(parser.mapOrdered()); + } putRepositoryRequest.verify(request.paramAsBoolean("verify", true)); putRepositoryRequest.masterNodeTimeout(request.paramAsTime("master_timeout", putRepositoryRequest.masterNodeTimeout())); putRepositoryRequest.timeout(request.paramAsTime("timeout", putRepositoryRequest.timeout())); - client.admin().cluster().putRepository(putRepositoryRequest, new AcknowledgedRestListener(channel)); + return channel -> client.admin().cluster().putRepository(putRepositoryRequest, new AcknowledgedRestListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPutStoredScriptAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPutStoredScriptAction.java index c5156c4cd093b..9a9920dd4b064 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPutStoredScriptAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPutStoredScriptAction.java @@ -20,40 +20,54 @@ import org.elasticsearch.action.admin.cluster.storedscripts.PutStoredScriptRequest; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.POST; import static org.elasticsearch.rest.RestRequest.Method.PUT; public class RestPutStoredScriptAction extends BaseRestHandler { - @Inject public RestPutStoredScriptAction(Settings settings, RestController controller) { - this(settings, controller, true); - } - - protected RestPutStoredScriptAction(Settings 
settings, RestController controller, boolean registerDefaultHandlers) { super(settings); - if (registerDefaultHandlers) { - controller.registerHandler(POST, "/_scripts/{lang}/{id}", this); - controller.registerHandler(PUT, "/_scripts/{lang}/{id}", this); - } - } - protected String getScriptLang(RestRequest request) { - return request.param("lang"); + // Note {lang} is actually {id} in the first two handlers. It appears + // parameters as part of the path must be of the same ordering relative + // to name or they will not work as expected. + controller.registerHandler(POST, "/_scripts/{lang}", this); + controller.registerHandler(PUT, "/_scripts/{lang}", this); + controller.registerHandler(POST, "/_scripts/{lang}/{id}", this); + controller.registerHandler(PUT, "/_scripts/{lang}/{id}", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, NodeClient client) { - PutStoredScriptRequest putRequest = new PutStoredScriptRequest(getScriptLang(request), request.param("id")); - putRequest.script(request.content()); - client.admin().cluster().putStoredScript(putRequest, new AcknowledgedRestListener<>(channel)); + public RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException { + String id = request.param("id"); + String lang = request.param("lang"); + + // In the case where only {lang} is not null, we make it {id} because of + // name ordering issues in the handlers' paths. + if (id == null) { + id = lang; + lang = null; + } + + BytesReference content = request.requiredContent(); + + if (lang != null) { + deprecationLogger.deprecated( + "specifying lang [" + lang + "] as part of the url path is deprecated, use request content instead"); + } + + PutStoredScriptRequest putRequest = new PutStoredScriptRequest(id, lang, content, request.getXContentType()); + putRequest.masterNodeTimeout(request.paramAsTime("master_timeout", putRequest.masterNodeTimeout())); + putRequest.timeout(request.paramAsTime("timeout", putRequest.timeout())); + return channel -> client.admin().cluster().putStoredScript(putRequest, new AcknowledgedRestListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestRemoteClusterInfoAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestRemoteClusterInfoAction.java new file mode 100644 index 0000000000000..c15b2553e5de6 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestRemoteClusterInfoAction.java @@ -0,0 +1,63 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.rest.action.admin.cluster; + +import org.elasticsearch.action.admin.cluster.remote.RemoteInfoAction; +import org.elasticsearch.action.admin.cluster.remote.RemoteInfoRequest; +import org.elasticsearch.action.admin.cluster.remote.RemoteInfoResponse; +import org.elasticsearch.client.node.NodeClient; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.rest.BaseRestHandler; +import org.elasticsearch.rest.BytesRestResponse; +import org.elasticsearch.rest.RestController; +import org.elasticsearch.rest.RestRequest; +import org.elasticsearch.rest.RestResponse; +import org.elasticsearch.rest.RestStatus; +import org.elasticsearch.rest.action.RestBuilderListener; + +import java.io.IOException; + +import static org.elasticsearch.rest.RestRequest.Method.GET; + +public final class RestRemoteClusterInfoAction extends BaseRestHandler { + + public RestRemoteClusterInfoAction(Settings settings, RestController controller) { + super(settings); + controller.registerHandler(GET, "_remote/info", this); + } + + @Override + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) + throws IOException { + return channel -> client.execute(RemoteInfoAction.INSTANCE, new RemoteInfoRequest(), + new RestBuilderListener(channel) { + @Override + public RestResponse buildResponse(RemoteInfoResponse response, XContentBuilder builder) throws Exception { + response.toXContent(builder, request); + return new BytesRestResponse(RestStatus.OK, builder); + } + }); + } + @Override + public boolean canTripCircuitBreaker() { + return false; + } +} diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestRestoreSnapshotAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestRestoreSnapshotAction.java index 100866e02db8c..3948921c12da3 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestRestoreSnapshotAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestRestoreSnapshotAction.java @@ -20,16 +20,15 @@ package org.elasticsearch.rest.action.admin.cluster; import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest; -import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestToXContentListener; +import java.io.IOException; + import static org.elasticsearch.client.Requests.restoreSnapshotRequest; import static org.elasticsearch.rest.RestRequest.Method.POST; @@ -37,19 +36,17 @@ * Restores a snapshot */ public class RestRestoreSnapshotAction extends BaseRestHandler { - - @Inject public RestRestoreSnapshotAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(POST, "/_snapshot/{repository}/{snapshot}/_restore", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { RestoreSnapshotRequest restoreSnapshotRequest = 
restoreSnapshotRequest(request.param("repository"), request.param("snapshot")); restoreSnapshotRequest.masterNodeTimeout(request.paramAsTime("master_timeout", restoreSnapshotRequest.masterNodeTimeout())); restoreSnapshotRequest.waitForCompletion(request.paramAsBoolean("wait_for_completion", false)); - restoreSnapshotRequest.source(request.content().utf8ToString()); - client.admin().cluster().restoreSnapshot(restoreSnapshotRequest, new RestToXContentListener(channel)); + request.applyContentParser(p -> restoreSnapshotRequest.source(p.mapOrdered())); + return channel -> client.admin().cluster().restoreSnapshot(restoreSnapshotRequest, new RestToXContentListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestSnapshotsStatusAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestSnapshotsStatusAction.java index 4333dfc02710a..c5122641c1a15 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestSnapshotsStatusAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestSnapshotsStatusAction.java @@ -22,14 +22,14 @@ import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotsStatusRequest; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestToXContentListener; +import java.io.IOException; + import static org.elasticsearch.client.Requests.snapshotsStatusRequest; import static org.elasticsearch.rest.RestRequest.Method.GET; @@ -37,8 +37,6 @@ * Returns status of currently running snapshot */ public class RestSnapshotsStatusAction extends BaseRestHandler { - - @Inject public RestSnapshotsStatusAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_snapshot/{repository}/{snapshot}/_status", this); @@ -47,7 +45,7 @@ public RestSnapshotsStatusAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { String repository = request.param("repository", "_all"); String[] snapshots = request.paramAsStringArray("snapshot", Strings.EMPTY_ARRAY); if (snapshots.length == 1 && "_all".equalsIgnoreCase(snapshots[0])) { @@ -57,6 +55,6 @@ public void handleRequest(final RestRequest request, final RestChannel channel, snapshotsStatusRequest.ignoreUnavailable(request.paramAsBoolean("ignore_unavailable", snapshotsStatusRequest.ignoreUnavailable())); snapshotsStatusRequest.masterNodeTimeout(request.paramAsTime("master_timeout", snapshotsStatusRequest.masterNodeTimeout())); - client.admin().cluster().snapshotsStatus(snapshotsStatusRequest, new RestToXContentListener<>(channel)); + return channel -> client.admin().cluster().snapshotsStatus(snapshotsStatusRequest, new RestToXContentListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestVerifyRepositoryAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestVerifyRepositoryAction.java index 85aac84077736..a9fb44eae645b 100644 --- 
a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestVerifyRepositoryAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestVerifyRepositoryAction.java @@ -21,30 +21,28 @@ import org.elasticsearch.action.admin.cluster.repositories.verify.VerifyRepositoryRequest; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestToXContentListener; +import java.io.IOException; + import static org.elasticsearch.client.Requests.verifyRepositoryRequest; import static org.elasticsearch.rest.RestRequest.Method.POST; public class RestVerifyRepositoryAction extends BaseRestHandler { - - @Inject public RestVerifyRepositoryAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(POST, "/_snapshot/{repository}/_verify", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { VerifyRepositoryRequest verifyRepositoryRequest = verifyRepositoryRequest(request.param("repository")); verifyRepositoryRequest.masterNodeTimeout(request.paramAsTime("master_timeout", verifyRepositoryRequest.masterNodeTimeout())); verifyRepositoryRequest.timeout(request.paramAsTime("timeout", verifyRepositoryRequest.timeout())); - client.admin().cluster().verifyRepository(verifyRepositoryRequest, new RestToXContentListener<>(channel)); + return channel -> client.admin().cluster().verifyRepository(verifyRepositoryRequest, new RestToXContentListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestAliasesExistAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestAliasesExistAction.java deleted file mode 100644 index dbb8ddde9d1ab..0000000000000 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestAliasesExistAction.java +++ /dev/null @@ -1,90 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.rest.action.admin.indices; - -import org.elasticsearch.ExceptionsHelper; -import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.admin.indices.alias.exists.AliasesExistResponse; -import org.elasticsearch.action.admin.indices.alias.get.GetAliasesRequest; -import org.elasticsearch.action.support.IndicesOptions; -import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.Strings; -import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; -import org.elasticsearch.rest.RestController; -import org.elasticsearch.rest.RestRequest; - -import static org.elasticsearch.rest.RestRequest.Method.HEAD; -import static org.elasticsearch.rest.RestStatus.NOT_FOUND; -import static org.elasticsearch.rest.RestStatus.OK; - -/** - */ -public class RestAliasesExistAction extends BaseRestHandler { - - @Inject - public RestAliasesExistAction(Settings settings, RestController controller) { - super(settings); - controller.registerHandler(HEAD, "/_alias/{name}", this); - controller.registerHandler(HEAD, "/{index}/_alias/{name}", this); - controller.registerHandler(HEAD, "/{index}/_alias", this); - } - - @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { - String[] aliases = request.paramAsStringArray("name", Strings.EMPTY_ARRAY); - final String[] indices = Strings.splitStringByCommaToArray(request.param("index")); - GetAliasesRequest getAliasesRequest = new GetAliasesRequest(aliases); - getAliasesRequest.indices(indices); - getAliasesRequest.indicesOptions(IndicesOptions.fromRequest(request, getAliasesRequest.indicesOptions())); - getAliasesRequest.local(request.paramAsBoolean("local", getAliasesRequest.local())); - - client.admin().indices().aliasesExist(getAliasesRequest, new ActionListener() { - - @Override - public void onResponse(AliasesExistResponse response) { - try { - if (response.isExists()) { - channel.sendResponse(new BytesRestResponse(OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)); - } else { - channel.sendResponse(new BytesRestResponse(NOT_FOUND, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)); - } - } catch (Exception e) { - onFailure(e); - } - } - - @Override - public void onFailure(Exception e) { - try { - channel.sendResponse( - new BytesRestResponse(ExceptionsHelper.status(e), BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)); - } catch (Exception inner) { - inner.addSuppressed(e); - logger.error("Failed to send failure response", inner); - } - } - }); - } -} diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestAnalyzeAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestAnalyzeAction.java index 04d0bf57612e1..b1d9612936d1c 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestAnalyzeAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestAnalyzeAction.java @@ -16,24 +16,19 @@ * specific language governing permissions and limitations * under the License. 
*/ + package org.elasticsearch.rest.action.admin.indices; import org.elasticsearch.action.admin.indices.analyze.AnalyzeRequest; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.Strings; import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.action.RestActions; import org.elasticsearch.rest.action.RestToXContentListener; import java.io.IOException; @@ -56,7 +51,6 @@ public static class Fields { public static final ParseField ATTRIBUTES = new ParseField("attributes"); } - @Inject public RestAnalyzeAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_analyze", this); @@ -65,123 +59,161 @@ public RestAnalyzeAction(Settings settings, RestController controller) { controller.registerHandler(POST, "/{index}/_analyze", this); } + private void deprecationLog(String key, RestRequest request) { + if (request.hasParam(key)) { + deprecationLogWithoutCheck(key); + } + } + + private void deprecationLogWithoutCheck(String key) { + deprecationLogForText(key + " request parameter is deprecated and will be removed in the next major release." + + " Please use the JSON in the request body instead request param"); + } + + void deprecationLogForText(String msg) { + deprecationLogger.deprecated(msg); + } + @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { String[] texts = request.paramAsStringArrayOrEmptyIfAll("text"); + deprecationLog("text", request); AnalyzeRequest analyzeRequest = new AnalyzeRequest(request.param("index")); analyzeRequest.text(texts); analyzeRequest.analyzer(request.param("analyzer")); + deprecationLog("analyzer", request); analyzeRequest.field(request.param("field")); - if (request.hasParam("tokenizer")) { - analyzeRequest.tokenizer(request.param("tokenizer")); + deprecationLog("field", request); + final String tokenizer = request.param("tokenizer"); + if (tokenizer != null) { + analyzeRequest.tokenizer(tokenizer); + deprecationLogWithoutCheck("tokenizer"); } for (String filter : request.paramAsStringArray("filter", Strings.EMPTY_ARRAY)) { analyzeRequest.addTokenFilter(filter); + deprecationLogWithoutCheck("filter"); } for (String charFilter : request.paramAsStringArray("char_filter", Strings.EMPTY_ARRAY)) { analyzeRequest.addTokenFilter(charFilter); + deprecationLogWithoutCheck("char_filter"); } analyzeRequest.explain(request.paramAsBoolean("explain", false)); + deprecationLog("explain", request); analyzeRequest.attributes(request.paramAsStringArray("attributes", analyzeRequest.attributes())); + deprecationLog("attributes", request); - if (RestActions.hasBodyContent(request)) { - XContentType type = RestActions.guessBodyContentType(request); - if (type == null) { - if (texts == null || texts.length == 0) { - texts = new String[]{ 
RestActions.getRestContent(request).utf8ToString() }; - analyzeRequest.text(texts); + try { + handleBodyContent(request, texts, analyzeRequest); + } catch (IOException e) { + throw new IllegalArgumentException("Failed to parse request body", e); + } + + return channel -> client.admin().indices().analyze(analyzeRequest, new RestToXContentListener<>(channel)); + } + + void handleBodyContent(RestRequest request, final String[] texts, AnalyzeRequest analyzeRequest) throws IOException { + request.withContentOrSourceParamParserOrNullLenient(parser -> { + if (parser == null) { + if (request.hasContent()) { + BytesReference body = request.getContentOrSourceParamOnly(); + if (texts == null || texts.length == 0) { + final String[] localTexts = new String[]{ body.utf8ToString() }; + analyzeRequest.text(localTexts); + deprecationLogForText(" plain text bodies is deprecated and " + + "this feature will be removed in the next major release. Please use the text param in JSON"); + } } } else { - // NOTE: if rest request with xcontent body has request parameters, the parameters does not override xcontent values - buildFromContent(RestActions.getRestContent(request), analyzeRequest, parseFieldMatcher); + // NOTE: if rest request with xcontent body has request parameters, the parameters do not override xcontent values + buildFromContent(parser, analyzeRequest); } - } + }); + } - client.admin().indices().analyze(analyzeRequest, new RestToXContentListener<>(channel)); + + @Override + public boolean supportsPlainText() { + return true; } - public static void buildFromContent(BytesReference content, AnalyzeRequest analyzeRequest, ParseFieldMatcher parseFieldMatcher) { - try (XContentParser parser = XContentHelper.createParser(content)) { - if (parser.nextToken() != XContentParser.Token.START_OBJECT) { - throw new IllegalArgumentException("Malformed content, must start with an object"); - } else { - XContentParser.Token token; - String currentFieldName = null; - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (parseFieldMatcher.match(currentFieldName, Fields.TEXT) && token == XContentParser.Token.VALUE_STRING) { - analyzeRequest.text(parser.text()); - } else if (parseFieldMatcher.match(currentFieldName, Fields.TEXT) && token == XContentParser.Token.START_ARRAY) { - List texts = new ArrayList<>(); - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - if (token.isValue() == false) { - throw new IllegalArgumentException(currentFieldName + " array element should only contain text"); - } - texts.add(parser.text()); + static void buildFromContent(XContentParser parser, AnalyzeRequest analyzeRequest) throws IOException { + if (parser.nextToken() != XContentParser.Token.START_OBJECT) { + throw new IllegalArgumentException("Malformed content, must start with an object"); + } else { + XContentParser.Token token; + String currentFieldName = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (Fields.TEXT.match(currentFieldName) && token == XContentParser.Token.VALUE_STRING) { + analyzeRequest.text(parser.text()); + } else if (Fields.TEXT.match(currentFieldName) && token == XContentParser.Token.START_ARRAY) { + List texts = new ArrayList<>(); + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + if (token.isValue() == 
false) { + throw new IllegalArgumentException(currentFieldName + " array element should only contain text"); } - analyzeRequest.text(texts.toArray(new String[texts.size()])); - } else if (parseFieldMatcher.match(currentFieldName, Fields.ANALYZER) && token == XContentParser.Token.VALUE_STRING) { - analyzeRequest.analyzer(parser.text()); - } else if (parseFieldMatcher.match(currentFieldName, Fields.FIELD) && token == XContentParser.Token.VALUE_STRING) { - analyzeRequest.field(parser.text()); - } else if (parseFieldMatcher.match(currentFieldName, Fields.TOKENIZER)) { + texts.add(parser.text()); + } + analyzeRequest.text(texts.toArray(new String[texts.size()])); + } else if (Fields.ANALYZER.match(currentFieldName) && token == XContentParser.Token.VALUE_STRING) { + analyzeRequest.analyzer(parser.text()); + } else if (Fields.FIELD.match(currentFieldName) && token == XContentParser.Token.VALUE_STRING) { + analyzeRequest.field(parser.text()); + } else if (Fields.TOKENIZER.match(currentFieldName)) { + if (token == XContentParser.Token.VALUE_STRING) { + analyzeRequest.tokenizer(parser.text()); + } else if (token == XContentParser.Token.START_OBJECT) { + analyzeRequest.tokenizer(parser.map()); + } else { + throw new IllegalArgumentException(currentFieldName + " should be tokenizer's name or setting"); + } + } else if (Fields.TOKEN_FILTERS.match(currentFieldName) + && token == XContentParser.Token.START_ARRAY) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { if (token == XContentParser.Token.VALUE_STRING) { - analyzeRequest.tokenizer(parser.text()); + analyzeRequest.addTokenFilter(parser.text()); } else if (token == XContentParser.Token.START_OBJECT) { - analyzeRequest.tokenizer(parser.map()); + analyzeRequest.addTokenFilter(parser.map()); } else { - throw new IllegalArgumentException(currentFieldName + " should be tokenizer's name or setting"); - } - } else if (parseFieldMatcher.match(currentFieldName, Fields.TOKEN_FILTERS) - && token == XContentParser.Token.START_ARRAY) { - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - if (token == XContentParser.Token.VALUE_STRING) { - analyzeRequest.addTokenFilter(parser.text()); - } else if (token == XContentParser.Token.START_OBJECT) { - analyzeRequest.addTokenFilter(parser.map()); - } else { - throw new IllegalArgumentException(currentFieldName - + " array element should contain filter's name or setting"); - } + throw new IllegalArgumentException(currentFieldName + + " array element should contain filter's name or setting"); } - } else if (parseFieldMatcher.match(currentFieldName, Fields.CHAR_FILTERS) - && token == XContentParser.Token.START_ARRAY) { - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - if (token == XContentParser.Token.VALUE_STRING) { - analyzeRequest.addCharFilter(parser.text()); - } else if (token == XContentParser.Token.START_OBJECT) { - analyzeRequest.addCharFilter(parser.map()); - } else { - throw new IllegalArgumentException(currentFieldName - + " array element should contain char filter's name or setting"); - } - } - } else if (parseFieldMatcher.match(currentFieldName, Fields.EXPLAIN)) { - if (parser.isBooleanValue()) { - analyzeRequest.explain(parser.booleanValue()); + } + } else if (Fields.CHAR_FILTERS.match(currentFieldName) + && token == XContentParser.Token.START_ARRAY) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + if (token == XContentParser.Token.VALUE_STRING) { + analyzeRequest.addCharFilter(parser.text()); + } else 
if (token == XContentParser.Token.START_OBJECT) { + analyzeRequest.addCharFilter(parser.map()); } else { - throw new IllegalArgumentException(currentFieldName + " must be either 'true' or 'false'"); + throw new IllegalArgumentException(currentFieldName + + " array element should contain char filter's name or setting"); } - } else if (parseFieldMatcher.match(currentFieldName, Fields.ATTRIBUTES) && token == XContentParser.Token.START_ARRAY) { - List attributes = new ArrayList<>(); - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - if (token.isValue() == false) { - throw new IllegalArgumentException(currentFieldName + " array element should only contain attribute name"); - } - attributes.add(parser.text()); - } - analyzeRequest.attributes(attributes.toArray(new String[attributes.size()])); + } + } else if (Fields.EXPLAIN.match(currentFieldName)) { + if (parser.isBooleanValue()) { + analyzeRequest.explain(parser.booleanValue()); } else { - throw new IllegalArgumentException("Unknown parameter [" - + currentFieldName + "] in request body or parameter is of the wrong type[" + token + "] "); + throw new IllegalArgumentException(currentFieldName + " must be either 'true' or 'false'"); } + } else if (Fields.ATTRIBUTES.match(currentFieldName) && token == XContentParser.Token.START_ARRAY) { + List attributes = new ArrayList<>(); + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + if (token.isValue() == false) { + throw new IllegalArgumentException(currentFieldName + " array element should only contain attribute name"); + } + attributes.add(parser.text()); + } + analyzeRequest.attributes(attributes.toArray(new String[attributes.size()])); + } else { + throw new IllegalArgumentException("Unknown parameter [" + + currentFieldName + "] in request body or parameter is of the wrong type[" + token + "] "); } } - } catch (IOException e) { - throw new IllegalArgumentException("Failed to parse request body", e); } } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestClearIndicesCacheAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestClearIndicesCacheAction.java index 391eaa64d5045..1544a01f9f09b 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestClearIndicesCacheAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestClearIndicesCacheAction.java @@ -24,19 +24,17 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; import java.util.Map; import static org.elasticsearch.rest.RestRequest.Method.GET; @@ -45,8 +43,6 @@ import static org.elasticsearch.rest.action.RestActions.buildBroadcastShardsHeader; public class RestClearIndicesCacheAction extends BaseRestHandler { - - @Inject public RestClearIndicesCacheAction(Settings settings, RestController 
controller) { super(settings); controller.registerHandler(POST, "/_cache/clear", this); @@ -57,12 +53,13 @@ public RestClearIndicesCacheAction(Settings settings, RestController controller) } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { ClearIndicesCacheRequest clearIndicesCacheRequest = new ClearIndicesCacheRequest( Strings.splitStringByCommaToArray(request.param("index"))); clearIndicesCacheRequest.indicesOptions(IndicesOptions.fromRequest(request, clearIndicesCacheRequest.indicesOptions())); - fromRequest(request, clearIndicesCacheRequest, parseFieldMatcher); - client.admin().indices().clearCache(clearIndicesCacheRequest, new RestBuilderListener(channel) { + fromRequest(request, clearIndicesCacheRequest); + return channel -> + client.admin().indices().clearCache(clearIndicesCacheRequest, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(ClearIndicesCacheResponse response, XContentBuilder builder) throws Exception { builder.startObject(); @@ -78,20 +75,22 @@ public boolean canTripCircuitBreaker() { return false; } - public static ClearIndicesCacheRequest fromRequest(final RestRequest request, ClearIndicesCacheRequest clearIndicesCacheRequest, - ParseFieldMatcher parseFieldMatcher) { + public static ClearIndicesCacheRequest fromRequest(final RestRequest request, ClearIndicesCacheRequest clearIndicesCacheRequest) { for (Map.Entry entry : request.params().entrySet()) { - if (parseFieldMatcher.match(entry.getKey(), Fields.QUERY)) { + if (Fields.QUERY.match(entry.getKey())) { clearIndicesCacheRequest.queryCache(request.paramAsBoolean(entry.getKey(), clearIndicesCacheRequest.queryCache())); } - if (parseFieldMatcher.match(entry.getKey(), Fields.FIELD_DATA)) { + if (Fields.REQUEST.match(entry.getKey())) { + clearIndicesCacheRequest.requestCache(request.paramAsBoolean(entry.getKey(), clearIndicesCacheRequest.requestCache())); + } + if (Fields.FIELD_DATA.match(entry.getKey())) { clearIndicesCacheRequest.fieldDataCache(request.paramAsBoolean(entry.getKey(), clearIndicesCacheRequest.fieldDataCache())); } - if (parseFieldMatcher.match(entry.getKey(), Fields.RECYCLER)) { + if (Fields.RECYCLER.match(entry.getKey())) { clearIndicesCacheRequest.recycler(request.paramAsBoolean(entry.getKey(), clearIndicesCacheRequest.recycler())); } - if (parseFieldMatcher.match(entry.getKey(), Fields.FIELDS)) { + if (Fields.FIELDS.match(entry.getKey())) { clearIndicesCacheRequest.fields(request.paramAsStringArray(entry.getKey(), clearIndicesCacheRequest.fields())); } } @@ -101,6 +100,7 @@ public static ClearIndicesCacheRequest fromRequest(final RestRequest request, Cl public static class Fields { public static final ParseField QUERY = new ParseField("query", "filter", "filter_cache"); + public static final ParseField REQUEST = new ParseField("request", "request_cache"); public static final ParseField FIELD_DATA = new ParseField("field_data", "fielddata"); public static final ParseField RECYCLER = new ParseField("recycler"); public static final ParseField FIELDS = new ParseField("fields"); diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestCloseIndexAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestCloseIndexAction.java index e5baa27f4eccb..2e0a46747ca77 100644 --- 
a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestCloseIndexAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestCloseIndexAction.java @@ -20,24 +20,18 @@ package org.elasticsearch.rest.action.admin.indices; import org.elasticsearch.action.admin.indices.close.CloseIndexRequest; -import org.elasticsearch.action.admin.indices.close.CloseIndexResponse; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; -/** - * - */ -public class RestCloseIndexAction extends BaseRestHandler { +import java.io.IOException; - @Inject +public class RestCloseIndexAction extends BaseRestHandler { public RestCloseIndexAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.POST, "/_close", this); @@ -45,11 +39,12 @@ public RestCloseIndexAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { CloseIndexRequest closeIndexRequest = new CloseIndexRequest(Strings.splitStringByCommaToArray(request.param("index"))); closeIndexRequest.masterNodeTimeout(request.paramAsTime("master_timeout", closeIndexRequest.masterNodeTimeout())); closeIndexRequest.timeout(request.paramAsTime("timeout", closeIndexRequest.timeout())); closeIndexRequest.indicesOptions(IndicesOptions.fromRequest(request, closeIndexRequest.indicesOptions())); - client.admin().indices().close(closeIndexRequest, new AcknowledgedRestListener(channel)); + return channel -> client.admin().indices().close(closeIndexRequest, new AcknowledgedRestListener<>(channel)); } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestCreateIndexAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestCreateIndexAction.java index 2a7f2a629a7f2..8c314efee634a 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestCreateIndexAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestCreateIndexAction.java @@ -21,42 +21,34 @@ import org.elasticsearch.action.admin.indices.create.CreateIndexRequest; import org.elasticsearch.action.admin.indices.create.CreateIndexResponse; -import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.action.support.ActiveShardCount; -import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; import java.io.IOException; -/** - * - */ public class RestCreateIndexAction extends BaseRestHandler { - - @Inject public RestCreateIndexAction(Settings settings, RestController controller) { 
super(settings); controller.registerHandler(RestRequest.Method.PUT, "/{index}", this); } - @SuppressWarnings({"unchecked"}) @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { CreateIndexRequest createIndexRequest = new CreateIndexRequest(request.param("index")); if (request.hasContent()) { - createIndexRequest.source(request.content()); + createIndexRequest.source(request.content(), request.getXContentType()); } createIndexRequest.updateAllTypes(request.paramAsBoolean("update_all_types", false)); createIndexRequest.timeout(request.paramAsTime("timeout", createIndexRequest.timeout())); createIndexRequest.masterNodeTimeout(request.paramAsTime("master_timeout", createIndexRequest.masterNodeTimeout())); createIndexRequest.waitForActiveShards(ActiveShardCount.parseString(request.param("wait_for_active_shards"))); - client.admin().indices().create(createIndexRequest, new AcknowledgedRestListener(channel) { + return channel -> client.admin().indices().create(createIndexRequest, new AcknowledgedRestListener(channel) { @Override public void addCustomFields(XContentBuilder builder, CreateIndexResponse response) throws IOException { response.addCustomFields(builder); diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestDeleteIndexAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestDeleteIndexAction.java index d3e07effc1465..6ca806bcffe01 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestDeleteIndexAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestDeleteIndexAction.java @@ -20,24 +20,18 @@ package org.elasticsearch.rest.action.admin.indices; import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest; -import org.elasticsearch.action.admin.indices.delete.DeleteIndexResponse; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; -/** - * - */ -public class RestDeleteIndexAction extends BaseRestHandler { +import java.io.IOException; - @Inject +public class RestDeleteIndexAction extends BaseRestHandler { public RestDeleteIndexAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.DELETE, "/", this); @@ -45,11 +39,11 @@ public RestDeleteIndexAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(Strings.splitStringByCommaToArray(request.param("index"))); deleteIndexRequest.timeout(request.paramAsTime("timeout", deleteIndexRequest.timeout())); deleteIndexRequest.masterNodeTimeout(request.paramAsTime("master_timeout", deleteIndexRequest.masterNodeTimeout())); 
deleteIndexRequest.indicesOptions(IndicesOptions.fromRequest(request, deleteIndexRequest.indicesOptions())); - client.admin().indices().delete(deleteIndexRequest, new AcknowledgedRestListener(channel)); + return channel -> client.admin().indices().delete(deleteIndexRequest, new AcknowledgedRestListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestDeleteIndexTemplateAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestDeleteIndexTemplateAction.java index 425581fe92795..e9f315e8aee0f 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestDeleteIndexTemplateAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestDeleteIndexTemplateAction.java @@ -20,26 +20,24 @@ import org.elasticsearch.action.admin.indices.template.delete.DeleteIndexTemplateRequest; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; -public class RestDeleteIndexTemplateAction extends BaseRestHandler { +import java.io.IOException; - @Inject +public class RestDeleteIndexTemplateAction extends BaseRestHandler { public RestDeleteIndexTemplateAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.DELETE, "/_template/{name}", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { DeleteIndexTemplateRequest deleteIndexTemplateRequest = new DeleteIndexTemplateRequest(request.param("name")); deleteIndexTemplateRequest.masterNodeTimeout(request.paramAsTime("master_timeout", deleteIndexTemplateRequest.masterNodeTimeout())); - client.admin().indices().deleteTemplate(deleteIndexTemplateRequest, new AcknowledgedRestListener<>(channel)); + return channel -> client.admin().indices().deleteTemplate(deleteIndexTemplateRequest, new AcknowledgedRestListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestFlushAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestFlushAction.java index b963a805934e4..00a91e36c5eb2 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestFlushAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestFlushAction.java @@ -24,17 +24,17 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.GET; import static 
org.elasticsearch.rest.RestRequest.Method.POST; import static org.elasticsearch.rest.RestStatus.OK; @@ -44,8 +44,6 @@ * */ public class RestFlushAction extends BaseRestHandler { - - @Inject public RestFlushAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(POST, "/_flush", this); @@ -56,12 +54,12 @@ public RestFlushAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { FlushRequest flushRequest = new FlushRequest(Strings.splitStringByCommaToArray(request.param("index"))); flushRequest.indicesOptions(IndicesOptions.fromRequest(request, flushRequest.indicesOptions())); flushRequest.force(request.paramAsBoolean("force", flushRequest.force())); flushRequest.waitIfOngoing(request.paramAsBoolean("wait_if_ongoing", flushRequest.waitIfOngoing())); - client.admin().indices().flush(flushRequest, new RestBuilderListener(channel) { + return channel -> client.admin().indices().flush(flushRequest, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(FlushResponse response, XContentBuilder builder) throws Exception { builder.startObject(); diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestForceMergeAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestForceMergeAction.java index c376866ad1e46..e6fdfbee30a34 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestForceMergeAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestForceMergeAction.java @@ -24,17 +24,17 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.POST; import static org.elasticsearch.rest.RestStatus.OK; import static org.elasticsearch.rest.action.RestActions.buildBroadcastShardsHeader; @@ -43,8 +43,6 @@ * */ public class RestForceMergeAction extends BaseRestHandler { - - @Inject public RestForceMergeAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(POST, "/_forcemerge", this); @@ -52,13 +50,13 @@ public RestForceMergeAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { ForceMergeRequest mergeRequest = new ForceMergeRequest(Strings.splitStringByCommaToArray(request.param("index"))); mergeRequest.indicesOptions(IndicesOptions.fromRequest(request, mergeRequest.indicesOptions())); mergeRequest.maxNumSegments(request.paramAsInt("max_num_segments", mergeRequest.maxNumSegments())); 
mergeRequest.onlyExpungeDeletes(request.paramAsBoolean("only_expunge_deletes", mergeRequest.onlyExpungeDeletes())); mergeRequest.flush(request.paramAsBoolean("flush", mergeRequest.flush())); - client.admin().indices().forceMerge(mergeRequest, new RestBuilderListener(channel) { + return channel -> client.admin().indices().forceMerge(mergeRequest, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(ForceMergeResponse response, XContentBuilder builder) throws Exception { builder.startObject(); diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetAliasesAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetAliasesAction.java index 4b58fd0f16cb6..62c7fbaa30508 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetAliasesAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetAliasesAction.java @@ -19,6 +19,7 @@ package org.elasticsearch.rest.action.admin.indices; +import com.carrotsearch.hppc.cursors.ObjectCursor; import com.carrotsearch.hppc.cursors.ObjectObjectCursor; import org.elasticsearch.action.admin.indices.alias.get.GetAliasesRequest; import org.elasticsearch.action.admin.indices.alias.get.GetAliasesResponse; @@ -26,93 +27,131 @@ import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.cluster.metadata.AliasMetaData; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.collect.ImmutableOpenMap; +import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashSet; import java.util.List; import java.util.Locale; +import java.util.Set; +import java.util.SortedSet; +import java.util.stream.Collectors; import static org.elasticsearch.rest.RestRequest.Method.GET; -import static org.elasticsearch.rest.RestStatus.OK; +import static org.elasticsearch.rest.RestRequest.Method.HEAD; -/** +/* + * The REST handler for get alias and head alias APIs. 
*/ public class RestGetAliasesAction extends BaseRestHandler { - @Inject - public RestGetAliasesAction(Settings settings, RestController controller) { + public RestGetAliasesAction(final Settings settings, final RestController controller) { super(settings); controller.registerHandler(GET, "/_alias/{name}", this); + controller.registerHandler(HEAD, "/_alias/{name}", this); controller.registerHandler(GET, "/{index}/_alias/{name}", this); + controller.registerHandler(HEAD, "/{index}/_alias/{name}", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { final String[] aliases = request.paramAsStringArrayOrEmptyIfAll("name"); - final String[] indices = Strings.splitStringByCommaToArray(request.param("index")); final GetAliasesRequest getAliasesRequest = new GetAliasesRequest(aliases); + final String[] indices = Strings.splitStringByCommaToArray(request.param("index")); getAliasesRequest.indices(indices); getAliasesRequest.indicesOptions(IndicesOptions.fromRequest(request, getAliasesRequest.indicesOptions())); getAliasesRequest.local(request.paramAsBoolean("local", getAliasesRequest.local())); - client.admin().indices().getAliases(getAliasesRequest, new RestBuilderListener(channel) { + return channel -> client.admin().indices().getAliases(getAliasesRequest, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(GetAliasesResponse response, XContentBuilder builder) throws Exception { - // empty body, if indices were specified but no aliases were - if (indices.length > 0 && response.getAliases().isEmpty()) { - return new BytesRestResponse(OK, builder.startObject().endObject()); - } else if (response.getAliases().isEmpty()) { - String message = String.format(Locale.ROOT, "alias [%s] missing", toNamesString(getAliasesRequest.aliases())); - builder.startObject() - .field("error", message) - .field("status", RestStatus.NOT_FOUND.getStatus()) - .endObject(); - return new BytesRestResponse(RestStatus.NOT_FOUND, builder); + final ImmutableOpenMap> aliasMap = response.getAliases(); + + final Set aliasNames = new HashSet<>(); + for (final ObjectCursor> cursor : aliasMap.values()) { + for (final AliasMetaData aliasMetaData : cursor.value) { + aliasNames.add(aliasMetaData.alias()); + } + } + + // first remove requested aliases that are exact matches + final SortedSet difference = Sets.sortedDifference(Arrays.stream(aliases).collect(Collectors.toSet()), aliasNames); + + // now remove requested aliases that contain wildcards that are simple matches + final List matches = new ArrayList<>(); + outer: + for (final String pattern : difference) { + if (pattern.contains("*")) { + for (final String aliasName : aliasNames) { + if (Regex.simpleMatch(pattern, aliasName)) { + matches.add(pattern); + continue outer; + } + } + } } + difference.removeAll(matches); + final RestStatus status; builder.startObject(); - for (ObjectObjectCursor> entry : response.getAliases()) { - builder.startObject(entry.key); - builder.startObject(Fields.ALIASES); - for (AliasMetaData alias : entry.value) { - AliasMetaData.Builder.toXContent(alias, builder, ToXContent.EMPTY_PARAMS); + { + if (difference.isEmpty()) { + status = RestStatus.OK; + } else { + status = RestStatus.NOT_FOUND; + final String message; + if (difference.size() == 1) { + message = String.format(Locale.ROOT, "alias [%s] missing", 
toNamesString(difference.iterator().next())); + } else { + message = String.format(Locale.ROOT, "aliases [%s] missing", toNamesString(difference.toArray(new String[0]))); + } + builder.field("error", message); + builder.field("status", status.getStatus()); + } + + for (final ObjectObjectCursor> entry : response.getAliases()) { + builder.startObject(entry.key); + { + builder.startObject("aliases"); + { + for (final AliasMetaData alias : entry.value) { + AliasMetaData.Builder.toXContent(alias, builder, ToXContent.EMPTY_PARAMS); + } + } + builder.endObject(); + } + builder.endObject(); } - builder.endObject(); - builder.endObject(); } builder.endObject(); - return new BytesRestResponse(OK, builder); + return new BytesRestResponse(status, builder); } + }); } - private static String toNamesString(String... names) { + private static String toNamesString(final String... names) { if (names == null || names.length == 0) { return ""; } else if (names.length == 1) { return names[0]; } else { - StringBuilder builder = new StringBuilder(names[0]); - for (int i = 1; i < names.length; i++) { - builder.append(',').append(names[i]); - } - return builder.toString(); + return Arrays.stream(names).collect(Collectors.joining(",")); } } - static class Fields { - - static final String ALIASES = "aliases"; - - } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetFieldMappingAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetFieldMappingAction.java index fba4192d5d4e6..fe6eb9a552f32 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetFieldMappingAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetFieldMappingAction.java @@ -25,18 +25,17 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; import java.util.Map; import static org.elasticsearch.rest.RestRequest.Method.GET; @@ -44,8 +43,6 @@ import static org.elasticsearch.rest.RestStatus.OK; public class RestGetFieldMappingAction extends BaseRestHandler { - - @Inject public RestGetFieldMappingAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_mapping/field/{fields}", this); @@ -56,7 +53,7 @@ public RestGetFieldMappingAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { final String[] indices = Strings.splitStringByCommaToArray(request.param("index")); final String[] types = request.paramAsStringArrayOrEmptyIfAll("type"); final String[] fields = Strings.splitStringByCommaToArray(request.param("fields")); @@ -64,26 +61,27 @@ public void handleRequest(final RestRequest request, final RestChannel channel, 
getMappingsRequest.indices(indices).types(types).fields(fields).includeDefaults(request.paramAsBoolean("include_defaults", false)); getMappingsRequest.indicesOptions(IndicesOptions.fromRequest(request, getMappingsRequest.indicesOptions())); getMappingsRequest.local(request.paramAsBoolean("local", getMappingsRequest.local())); - client.admin().indices().getFieldMappings(getMappingsRequest, new RestBuilderListener(channel) { - @Override - public RestResponse buildResponse(GetFieldMappingsResponse response, XContentBuilder builder) throws Exception { - Map>> mappingsByIndex = response.mappings(); + return channel -> + client.admin().indices().getFieldMappings(getMappingsRequest, new RestBuilderListener(channel) { + @Override + public RestResponse buildResponse(GetFieldMappingsResponse response, XContentBuilder builder) throws Exception { + Map>> mappingsByIndex = response.mappings(); - boolean isPossibleSingleFieldRequest = indices.length == 1 && types.length == 1 && fields.length == 1; - if (isPossibleSingleFieldRequest && isFieldMappingMissingField(mappingsByIndex)) { - return new BytesRestResponse(OK, builder.startObject().endObject()); - } + boolean isPossibleSingleFieldRequest = indices.length == 1 && types.length == 1 && fields.length == 1; + if (isPossibleSingleFieldRequest && isFieldMappingMissingField(mappingsByIndex)) { + return new BytesRestResponse(OK, builder.startObject().endObject()); + } - RestStatus status = OK; - if (mappingsByIndex.isEmpty() && fields.length > 0) { - status = NOT_FOUND; - } - builder.startObject(); - response.toXContent(builder, request); - builder.endObject(); - return new BytesRestResponse(status, builder); - } - }); + RestStatus status = OK; + if (mappingsByIndex.isEmpty() && fields.length > 0) { + status = NOT_FOUND; + } + builder.startObject(); + response.toXContent(builder, request); + builder.endObject(); + return new BytesRestResponse(status, builder); + } + }); } /** diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetIndexTemplateAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetIndexTemplateAction.java index ead04532590fe..1814894636f77 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetIndexTemplateAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetIndexTemplateAction.java @@ -16,52 +16,65 @@ * specific language governing permissions and limitations * under the License. */ + package org.elasticsearch.rest.action.admin.indices; import org.elasticsearch.action.admin.indices.template.get.GetIndexTemplatesRequest; import org.elasticsearch.action.admin.indices.template.get.GetIndexTemplatesResponse; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.rest.action.RestToXContentListener; +import java.io.IOException; +import java.util.Set; + import static org.elasticsearch.rest.RestRequest.Method.GET; +import static org.elasticsearch.rest.RestRequest.Method.HEAD; import static org.elasticsearch.rest.RestStatus.NOT_FOUND; import static org.elasticsearch.rest.RestStatus.OK; +/** + * The REST handler for get template and head template APIs. 
+ */ public class RestGetIndexTemplateAction extends BaseRestHandler { - @Inject - public RestGetIndexTemplateAction(Settings settings, RestController controller) { + public RestGetIndexTemplateAction(final Settings settings, final RestController controller) { super(settings); - controller.registerHandler(GET, "/_template", this); controller.registerHandler(GET, "/_template/{name}", this); + controller.registerHandler(HEAD, "/_template/{name}", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { final String[] names = Strings.splitStringByCommaToArray(request.param("name")); - GetIndexTemplatesRequest getIndexTemplatesRequest = new GetIndexTemplatesRequest(names); + final GetIndexTemplatesRequest getIndexTemplatesRequest = new GetIndexTemplatesRequest(names); getIndexTemplatesRequest.local(request.paramAsBoolean("local", getIndexTemplatesRequest.local())); getIndexTemplatesRequest.masterNodeTimeout(request.paramAsTime("master_timeout", getIndexTemplatesRequest.masterNodeTimeout())); final boolean implicitAll = getIndexTemplatesRequest.names().length == 0; - client.admin().indices().getTemplates(getIndexTemplatesRequest, new RestToXContentListener(channel) { - @Override - protected RestStatus getStatus(GetIndexTemplatesResponse response) { - boolean templateExists = false == response.getIndexTemplates().isEmpty(); + return channel -> + client.admin() + .indices() + .getTemplates(getIndexTemplatesRequest, new RestToXContentListener(channel) { + @Override + protected RestStatus getStatus(final GetIndexTemplatesResponse response) { + final boolean templateExists = response.getIndexTemplates().isEmpty() == false; + return (templateExists || implicitAll) ? OK : NOT_FOUND; + } + }); + } - return (templateExists || implicitAll) ? OK : NOT_FOUND; - } - }); + @Override + protected Set responseParams() { + return Settings.FORMAT_PARAMS; } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetIndicesAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetIndicesAction.java index 2ad6f245cf0ab..bbca5c402be35 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetIndicesAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetIndicesAction.java @@ -16,9 +16,11 @@ * specific language governing permissions and limitations * under the License. 
*/ + package org.elasticsearch.rest.action.admin.indices; import com.carrotsearch.hppc.cursors.ObjectObjectCursor; + import org.elasticsearch.action.admin.indices.get.GetIndexRequest; import org.elasticsearch.action.admin.indices.get.GetIndexRequest.Feature; import org.elasticsearch.action.admin.indices.get.GetIndexResponse; @@ -28,7 +30,6 @@ import org.elasticsearch.cluster.metadata.MappingMetaData; import org.elasticsearch.common.Strings; import org.elasticsearch.common.collect.ImmutableOpenMap; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.IndexScopedSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsFilter; @@ -36,7 +37,6 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -44,33 +44,45 @@ import java.io.IOException; import java.util.List; +import java.util.Set; import static org.elasticsearch.rest.RestRequest.Method.GET; +import static org.elasticsearch.rest.RestRequest.Method.HEAD; import static org.elasticsearch.rest.RestStatus.OK; +/** + * The REST handler for get index and head index APIs. + */ public class RestGetIndicesAction extends BaseRestHandler { private final IndexScopedSettings indexScopedSettings; private final SettingsFilter settingsFilter; - @Inject - public RestGetIndicesAction(Settings settings, RestController controller, IndexScopedSettings indexScopedSettings, - SettingsFilter settingsFilter) { + public RestGetIndicesAction( + final Settings settings, + final RestController controller, + final IndexScopedSettings indexScopedSettings, + final SettingsFilter settingsFilter) { super(settings); this.indexScopedSettings = indexScopedSettings; controller.registerHandler(GET, "/{index}", this); + controller.registerHandler(HEAD, "/{index}", this); controller.registerHandler(GET, "/{index}/{type}", this); this.settingsFilter = settingsFilter; } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { String[] indices = Strings.splitStringByCommaToArray(request.param("index")); String[] featureParams = request.paramAsStringArray("type", null); + if (featureParams != null && featureParams.length > 1) { + deprecationLogger.deprecated("Requesting comma-separated features is deprecated and " + + "will be removed in 6.0+, retrieve all features instead."); + } // Work out if the indices is a list of features if (featureParams == null && indices.length > 0 && indices[0] != null && indices[0].startsWith("_") && !"_all".equals(indices[0])) { featureParams = indices; - indices = new String[] {"_all"}; + indices = new String[]{"_all"}; } final GetIndexRequest getIndexRequest = new GetIndexRequest(); getIndexRequest.indices(indices); @@ -81,70 +93,87 @@ public void handleRequest(final RestRequest request, final RestChannel channel, getIndexRequest.indicesOptions(IndicesOptions.fromRequest(request, getIndexRequest.indicesOptions())); getIndexRequest.local(request.paramAsBoolean("local", getIndexRequest.local())); getIndexRequest.humanReadable(request.paramAsBoolean("human", false)); - 
client.admin().indices().getIndex(getIndexRequest, new RestBuilderListener(channel) { + final boolean defaults = request.paramAsBoolean("include_defaults", false); + return channel -> client.admin().indices().getIndex(getIndexRequest, new RestBuilderListener(channel) { @Override - public RestResponse buildResponse(GetIndexResponse response, XContentBuilder builder) throws Exception { - Feature[] features = getIndexRequest.features(); - String[] indices = response.indices(); - + public RestResponse buildResponse(final GetIndexResponse response, final XContentBuilder builder) throws Exception { builder.startObject(); - for (String index : indices) { - builder.startObject(index); - for (Feature feature : features) { - switch (feature) { - case ALIASES: - writeAliases(response.aliases().get(index), builder, request); - break; - case MAPPINGS: - writeMappings(response.mappings().get(index), builder, request); - break; - case SETTINGS: - writeSettings(response.settings().get(index), builder, request); - break; - default: - throw new IllegalStateException("feature [" + feature + "] is not valid"); + { + for (final String index : response.indices()) { + builder.startObject(index); + { + for (final Feature feature : getIndexRequest.features()) { + switch (feature) { + case ALIASES: + writeAliases(response.aliases().get(index), builder, request); + break; + case MAPPINGS: + writeMappings(response.mappings().get(index), builder); + break; + case SETTINGS: + writeSettings(response.settings().get(index), builder, request, defaults); + break; + default: + throw new IllegalStateException("feature [" + feature + "] is not valid"); + } + } } - } - builder.endObject(); + builder.endObject(); + } } builder.endObject(); return new BytesRestResponse(OK, builder); } - private void writeAliases(List aliases, XContentBuilder builder, Params params) throws IOException { - builder.startObject(Fields.ALIASES); - if (aliases != null) { - for (AliasMetaData alias : aliases) { - AliasMetaData.Builder.toXContent(alias, builder, params); + private void writeAliases( + final List aliases, + final XContentBuilder builder, + final Params params) throws IOException { + builder.startObject("aliases"); + { + if (aliases != null) { + for (final AliasMetaData alias : aliases) { + AliasMetaData.Builder.toXContent(alias, builder, params); + } } } builder.endObject(); } - private void writeMappings(ImmutableOpenMap mappings, XContentBuilder builder, Params params) + private void writeMappings(final ImmutableOpenMap mappings, final XContentBuilder builder) throws IOException { - builder.startObject(Fields.MAPPINGS); - if (mappings != null) { - for (ObjectObjectCursor typeEntry : mappings) { - builder.field(typeEntry.key); - builder.map(typeEntry.value.sourceAsMap()); + builder.startObject("mappings"); + { + if (mappings != null) { + for (final ObjectObjectCursor typeEntry : mappings) { + builder.field(typeEntry.key); + builder.map(typeEntry.value.sourceAsMap()); + } } } builder.endObject(); } - private void writeSettings(Settings settings, XContentBuilder builder, Params params) throws IOException { - final boolean renderDefaults = request.paramAsBoolean("include_defaults", false); - builder.startObject(Fields.SETTINGS); - settings.toXContent(builder, params); + private void writeSettings( + final Settings settings, + final XContentBuilder builder, + final Params params, + final boolean defaults) throws IOException { + builder.startObject("settings"); + { + settings.toXContent(builder, params); + } builder.endObject(); - if 
(renderDefaults) { + if (defaults) { builder.startObject("defaults"); - settingsFilter.filter(indexScopedSettings.diff(settings, RestGetIndicesAction.this.settings)).toXContent(builder, - request); + { + settingsFilter + .filter(indexScopedSettings.diff(settings, RestGetIndicesAction.this.settings)) + .toXContent(builder, request); + } builder.endObject(); } } @@ -152,10 +181,9 @@ private void writeSettings(Settings settings, XContentBuilder builder, Params pa }); } - static class Fields { - static final String ALIASES = "aliases"; - static final String MAPPINGS = "mappings"; - static final String SETTINGS = "settings"; + @Override + protected Set responseParams() { + return Settings.FORMAT_PARAMS; } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetMappingAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetMappingAction.java index b9c8472964074..aaf78b6cd20d6 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetMappingAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetMappingAction.java @@ -19,6 +19,7 @@ package org.elasticsearch.rest.action.admin.indices; +import com.carrotsearch.hppc.cursors.ObjectCursor; import com.carrotsearch.hppc.cursors.ObjectObjectCursor; import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsRequest; @@ -28,20 +29,32 @@ import org.elasticsearch.cluster.metadata.MappingMetaData; import org.elasticsearch.common.Strings; import org.elasticsearch.common.collect.ImmutableOpenMap; -import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.indices.TypeMissingException; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; +import org.elasticsearch.rest.RestStatus; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashSet; +import java.util.List; +import java.util.Locale; +import java.util.Set; +import java.util.SortedSet; +import java.util.stream.Collectors; + import static org.elasticsearch.rest.RestRequest.Method.GET; +import static org.elasticsearch.rest.RestRequest.Method.HEAD; import static org.elasticsearch.rest.RestStatus.OK; /** @@ -49,61 +62,103 @@ */ public class RestGetMappingAction extends BaseRestHandler { - @Inject - public RestGetMappingAction(Settings settings, RestController controller) { + public RestGetMappingAction(final Settings settings, final RestController controller) { super(settings); controller.registerHandler(GET, "/{index}/{type}/_mapping", this); controller.registerHandler(GET, "/{index}/_mappings/{type}", this); controller.registerHandler(GET, "/{index}/_mapping/{type}", this); + controller.registerHandler(HEAD, "/{index}/_mapping/{type}", this); controller.registerHandler(GET, "/_mapping/{type}", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final 
NodeClient client) throws IOException { final String[] indices = Strings.splitStringByCommaToArray(request.param("index")); final String[] types = request.paramAsStringArrayOrEmptyIfAll("type"); - GetMappingsRequest getMappingsRequest = new GetMappingsRequest(); + final GetMappingsRequest getMappingsRequest = new GetMappingsRequest(); getMappingsRequest.indices(indices).types(types); getMappingsRequest.indicesOptions(IndicesOptions.fromRequest(request, getMappingsRequest.indicesOptions())); getMappingsRequest.local(request.paramAsBoolean("local", getMappingsRequest.local())); - client.admin().indices().getMappings(getMappingsRequest, new RestBuilderListener(channel) { + return channel -> client.admin().indices().getMappings(getMappingsRequest, new RestBuilderListener(channel) { @Override - public RestResponse buildResponse(GetMappingsResponse response, XContentBuilder builder) throws Exception { - builder.startObject(); - ImmutableOpenMap> mappingsByIndex = response.getMappings(); - if (mappingsByIndex.isEmpty()) { - if (indices.length != 0 && types.length != 0) { - return new BytesRestResponse(OK, builder.endObject()); - } else if (indices.length != 0) { - return new BytesRestResponse(channel, new IndexNotFoundException(indices[0])); - } else if (types.length != 0) { - return new BytesRestResponse(channel, new TypeMissingException("_all", types[0])); + public RestResponse buildResponse(final GetMappingsResponse response, final XContentBuilder builder) throws Exception { + final ImmutableOpenMap> mappingsByIndex = response.getMappings(); + if (mappingsByIndex.isEmpty() && (indices.length != 0 || types.length != 0)) { + if (indices.length != 0 && types.length == 0) { + builder.close(); + return new BytesRestResponse(channel, new IndexNotFoundException(String.join(",", indices))); } else { - return new BytesRestResponse(OK, builder.endObject()); + builder.close(); + return new BytesRestResponse(channel, new TypeMissingException("_all", String.join(",", types))); } } - for (ObjectObjectCursor> indexEntry : mappingsByIndex) { - if (indexEntry.value.isEmpty()) { - continue; + final Set typeNames = new HashSet<>(); + for (final ObjectCursor> cursor : mappingsByIndex.values()) { + for (final ObjectCursor inner : cursor.value.keys()) { + typeNames.add(inner.value); } - builder.startObject(indexEntry.key); - builder.startObject(Fields.MAPPINGS); - for (ObjectObjectCursor typeEntry : indexEntry.value) { - builder.field(typeEntry.key); - builder.map(typeEntry.value.sourceAsMap()); + } + + final SortedSet difference = Sets.sortedDifference(Arrays.stream(types).collect(Collectors.toSet()), typeNames); + + // now remove requested aliases that contain wildcards that are simple matches + final List matches = new ArrayList<>(); + outer: + for (final String pattern : difference) { + if (pattern.contains("*")) { + for (final String typeName : typeNames) { + if (Regex.simpleMatch(pattern, typeName)) { + matches.add(pattern); + continue outer; + } + } } - builder.endObject(); - builder.endObject(); } + difference.removeAll(matches); + + final RestStatus status; + builder.startObject(); + { + if (difference.isEmpty()) { + status = RestStatus.OK; + } else { + status = RestStatus.NOT_FOUND; + final String message; + if (difference.size() == 1) { + message = String.format(Locale.ROOT, "type [%s] missing", toNamesString(difference.iterator().next())); + } else { + message = String.format(Locale.ROOT, "types [%s] missing", toNamesString(difference.toArray(new String[0]))); + } + builder.field("error", message); + 
builder.field("status", status.getStatus()); + } + for (final ObjectObjectCursor> indexEntry : mappingsByIndex) { + if (indexEntry.value.isEmpty()) { + continue; + } + builder.startObject(indexEntry.key); + { + builder.startObject("mappings"); + { + for (final ObjectObjectCursor typeEntry : indexEntry.value) { + builder.field(typeEntry.key, typeEntry.value.sourceAsMap()); + } + } + builder.endObject(); + } + builder.endObject(); + } + } builder.endObject(); - return new BytesRestResponse(OK, builder); + return new BytesRestResponse(status, builder); } }); } - static class Fields { - static final String MAPPINGS = "mappings"; + private static String toNamesString(final String... names) { + return Arrays.stream(names).collect(Collectors.joining(",")); } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetSettingsAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetSettingsAction.java index 936a96e035afb..fcad131a3599a 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetSettingsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetSettingsAction.java @@ -20,24 +20,25 @@ package org.elasticsearch.rest.action.admin.indices; import com.carrotsearch.hppc.cursors.ObjectObjectCursor; + import org.elasticsearch.action.admin.indices.settings.get.GetSettingsRequest; import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.IndexScopedSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsFilter; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.GET; import static org.elasticsearch.rest.RestStatus.OK; @@ -46,7 +47,6 @@ public class RestGetSettingsAction extends BaseRestHandler { private final IndexScopedSettings indexScopedSettings; private final SettingsFilter settingsFilter; - @Inject public RestGetSettingsAction(Settings settings, RestController controller, IndexScopedSettings indexScopedSettings, final SettingsFilter settingsFilter) { super(settings); @@ -58,7 +58,7 @@ public RestGetSettingsAction(Settings settings, RestController controller, Index } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { final String[] names = request.paramAsStringArrayOrEmptyIfAll("name"); final boolean renderDefaults = request.paramAsBoolean("include_defaults", false); GetSettingsRequest getSettingsRequest = new GetSettingsRequest() @@ -68,14 +68,14 @@ public void handleRequest(final RestRequest request, final RestChannel channel, .names(names); getSettingsRequest.local(request.paramAsBoolean("local", getSettingsRequest.local())); - 
client.admin().indices().getSettings(getSettingsRequest, new RestBuilderListener(channel) { + return channel -> client.admin().indices().getSettings(getSettingsRequest, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(GetSettingsResponse getSettingsResponse, XContentBuilder builder) throws Exception { builder.startObject(); for (ObjectObjectCursor cursor : getSettingsResponse.getIndexToSettings()) { // no settings, jump over it to shorten the response data - if (cursor.value.getAsMap().isEmpty()) { + if (cursor.value.isEmpty()) { continue; } builder.startObject(cursor.key); @@ -94,4 +94,5 @@ public RestResponse buildResponse(GetSettingsResponse getSettingsResponse, XCont } }); } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestHeadIndexTemplateAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestHeadIndexTemplateAction.java deleted file mode 100644 index 3480fbb9afc00..0000000000000 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestHeadIndexTemplateAction.java +++ /dev/null @@ -1,68 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
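Both handlers above, and every handler converted below, go through the same mechanical change: the old void handleRequest(request, channel, client) is replaced by prepareRequest(request, client), which reads and validates parameters eagerly and returns a RestChannelConsumer, essentially a consumer of the channel, that performs the client call later. A toy model of that shape, with plain JDK types standing in for RestRequest, RestChannel and NodeClient (everything here is illustrative, not the Elasticsearch API):

```java
import java.util.function.Consumer;

class HandlerShape {
    // stand-in for RestChannelConsumer: the "channel" is just a StringBuilder here
    interface RestChannelConsumer extends Consumer<StringBuilder> {}

    // Old shape: parsing and responding are interleaved; a channel is needed before any validation runs.
    static void handleRequest(String sizeParam, StringBuilder channel) {
        int size = Integer.parseInt(sizeParam);
        channel.append("size=").append(size);
    }

    // New shape: parse and validate first, fail fast, and hand back a deferred "respond" step.
    static RestChannelConsumer prepareRequest(String sizeParam) {
        final int size = Integer.parseInt(sizeParam);  // throws before any channel is involved
        return channel -> channel.append("size=").append(size);
    }

    public static void main(String[] args) {
        StringBuilder oldStyle = new StringBuilder();
        handleRequest("7", oldStyle);                        // old: channel needed up front
        RestChannelConsumer consumer = prepareRequest("42"); // new: nothing written yet
        StringBuilder newStyle = new StringBuilder();
        consumer.accept(newStyle);                           // respond later, to whichever channel shows up
        System.out.println(oldStyle + " / " + newStyle);     // size=7 / size=42
    }
}
```

The practical effect is that malformed requests fail during preparation, before the handler has committed to writing anything to the channel.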
- */ -package org.elasticsearch.rest.action.admin.indices; - -import org.elasticsearch.action.admin.indices.template.get.GetIndexTemplatesRequest; -import org.elasticsearch.action.admin.indices.template.get.GetIndexTemplatesResponse; -import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; -import org.elasticsearch.rest.RestController; -import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.RestResponse; -import org.elasticsearch.rest.action.RestResponseListener; - -import static org.elasticsearch.rest.RestRequest.Method.HEAD; -import static org.elasticsearch.rest.RestStatus.NOT_FOUND; -import static org.elasticsearch.rest.RestStatus.OK; - -/** - * - */ -public class RestHeadIndexTemplateAction extends BaseRestHandler { - - @Inject - public RestHeadIndexTemplateAction(Settings settings, RestController controller) { - super(settings); - - controller.registerHandler(HEAD, "/_template/{name}", this); - } - - @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { - GetIndexTemplatesRequest getIndexTemplatesRequest = new GetIndexTemplatesRequest(request.param("name")); - getIndexTemplatesRequest.local(request.paramAsBoolean("local", getIndexTemplatesRequest.local())); - getIndexTemplatesRequest.masterNodeTimeout(request.paramAsTime("master_timeout", getIndexTemplatesRequest.masterNodeTimeout())); - client.admin().indices().getTemplates(getIndexTemplatesRequest, new RestResponseListener(channel) { - @Override - public RestResponse buildResponse(GetIndexTemplatesResponse getIndexTemplatesResponse) { - boolean templateExists = getIndexTemplatesResponse.getIndexTemplates().size() > 0; - if (templateExists) { - return new BytesRestResponse(OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY); - } else { - return new BytesRestResponse(NOT_FOUND, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY); - } - } - }); - } -} diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndexDeleteAliasesAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndexDeleteAliasesAction.java index b027aeb8d6712..d2180a53c9630 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndexDeleteAliasesAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndexDeleteAliasesAction.java @@ -19,25 +19,22 @@ package org.elasticsearch.rest.action.admin.indices; import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest; -import org.elasticsearch.action.admin.indices.alias.IndicesAliasesResponse; import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.DELETE; /** */ public class 
RestIndexDeleteAliasesAction extends BaseRestHandler { - - @Inject public RestIndexDeleteAliasesAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(DELETE, "/{index}/_alias/{name}", this); @@ -45,7 +42,7 @@ public RestIndexDeleteAliasesAction(Settings settings, RestController controller } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { final String[] indices = Strings.splitStringByCommaToArray(request.param("index")); final String[] aliases = Strings.splitStringByCommaToArray(request.param("name")); IndicesAliasesRequest indicesAliasesRequest = new IndicesAliasesRequest(); @@ -53,6 +50,6 @@ public void handleRequest(final RestRequest request, final RestChannel channel, indicesAliasesRequest.addAliasAction(AliasActions.remove().indices(indices).aliases(aliases)); indicesAliasesRequest.masterNodeTimeout(request.paramAsTime("master_timeout", indicesAliasesRequest.masterNodeTimeout())); - client.admin().indices().aliases(indicesAliasesRequest, new AcknowledgedRestListener(channel)); + return channel -> client.admin().indices().aliases(indicesAliasesRequest, new AcknowledgedRestListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndexPutAliasAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndexPutAliasAction.java index f7546bd57db05..2c68c4886103b 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndexPutAliasAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndexPutAliasAction.java @@ -22,24 +22,20 @@ import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; +import java.io.IOException; import java.util.Map; import static org.elasticsearch.rest.RestRequest.Method.POST; import static org.elasticsearch.rest.RestRequest.Method.PUT; public class RestIndexPutAliasAction extends BaseRestHandler { - - @Inject public RestIndexPutAliasAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(PUT, "/{index}/_alias/{name}", this); @@ -58,7 +54,7 @@ public RestIndexPutAliasAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws Exception { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { String[] indices = Strings.splitStringByCommaToArray(request.param("index")); String alias = request.param("name"); Map filter = null; @@ -67,7 +63,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel, String searchRouting = null; if (request.hasContent()) { - try (XContentParser parser = 
XContentFactory.xContent(request.content()).createParser(request.content())) { + try (XContentParser parser = request.contentParser()) { XContentParser.Token token = parser.nextToken(); if (token == null) { throw new IllegalArgumentException("No index alias is specified"); @@ -117,6 +113,6 @@ public void handleRequest(final RestRequest request, final RestChannel channel, aliasAction.filter(filter); } indicesAliasesRequest.addAliasAction(aliasAction); - client.admin().indices().aliases(indicesAliasesRequest, new AcknowledgedRestListener<>(channel)); + return channel -> client.admin().indices().aliases(indicesAliasesRequest, new AcknowledgedRestListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesAliasesAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesAliasesAction.java index fe8a6a1662818..58add2b4ea841 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesAliasesAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesAliasesAction.java @@ -21,26 +21,22 @@ import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest; import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions; -import org.elasticsearch.action.admin.indices.alias.IndicesAliasesResponse; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.ObjectParser; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.POST; public class RestIndicesAliasesAction extends BaseRestHandler { - static final ObjectParser PARSER = new ObjectParser<>("aliases"); + static final ObjectParser PARSER = new ObjectParser<>("aliases"); static { PARSER.declareObjectArray((request, actions) -> { for (AliasActions action: actions) { @@ -49,23 +45,22 @@ public class RestIndicesAliasesAction extends BaseRestHandler { }, AliasActions.PARSER, new ParseField("actions")); } - @Inject public RestIndicesAliasesAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(POST, "/_aliases", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws Exception { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { IndicesAliasesRequest indicesAliasesRequest = new IndicesAliasesRequest(); indicesAliasesRequest.masterNodeTimeout(request.paramAsTime("master_timeout", indicesAliasesRequest.masterNodeTimeout())); indicesAliasesRequest.timeout(request.paramAsTime("timeout", indicesAliasesRequest.timeout())); - try (XContentParser parser = XContentFactory.xContent(request.content()).createParser(request.content())) { - PARSER.parse(parser, indicesAliasesRequest, () -> ParseFieldMatcher.STRICT); + try (XContentParser parser = 
request.contentParser()) { + PARSER.parse(parser, indicesAliasesRequest, null); } if (indicesAliasesRequest.getAliasActions().isEmpty()) { throw new IllegalArgumentException("No action specified"); } - client.admin().indices().aliases(indicesAliasesRequest, new AcknowledgedRestListener(channel)); + return channel -> client.admin().indices().aliases(indicesAliasesRequest, new AcknowledgedRestListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesExistsAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesExistsAction.java deleted file mode 100644 index fa62a8443561c..0000000000000 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesExistsAction.java +++ /dev/null @@ -1,70 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.rest.action.admin.indices; - -import org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsRequest; -import org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsResponse; -import org.elasticsearch.action.support.IndicesOptions; -import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.Strings; -import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; -import org.elasticsearch.rest.RestController; -import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.RestResponse; -import org.elasticsearch.rest.action.RestResponseListener; - -import static org.elasticsearch.rest.RestRequest.Method.HEAD; -import static org.elasticsearch.rest.RestStatus.NOT_FOUND; -import static org.elasticsearch.rest.RestStatus.OK; - -/** - * - */ -public class RestIndicesExistsAction extends BaseRestHandler { - - @Inject - public RestIndicesExistsAction(Settings settings, RestController controller) { - super(settings); - controller.registerHandler(HEAD, "/{index}", this); - } - - @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { - IndicesExistsRequest indicesExistsRequest = new IndicesExistsRequest(Strings.splitStringByCommaToArray(request.param("index"))); - indicesExistsRequest.indicesOptions(IndicesOptions.fromRequest(request, indicesExistsRequest.indicesOptions())); - indicesExistsRequest.local(request.paramAsBoolean("local", indicesExistsRequest.local())); - client.admin().indices().exists(indicesExistsRequest, new RestResponseListener(channel) { - @Override - public RestResponse buildResponse(IndicesExistsResponse response) { - if 
(response.isExists()) { - return new BytesRestResponse(OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY); - } else { - return new BytesRestResponse(NOT_FOUND, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY); - } - } - - }); - } -} diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesSegmentsAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesSegmentsAction.java index db9de980c52b8..6852b8527c754 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesSegmentsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesSegmentsAction.java @@ -24,24 +24,22 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.GET; import static org.elasticsearch.rest.RestStatus.OK; import static org.elasticsearch.rest.action.RestActions.buildBroadcastShardsHeader; public class RestIndicesSegmentsAction extends BaseRestHandler { - - @Inject public RestIndicesSegmentsAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_segments", this); @@ -49,20 +47,21 @@ public RestIndicesSegmentsAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { IndicesSegmentsRequest indicesSegmentsRequest = new IndicesSegmentsRequest( Strings.splitStringByCommaToArray(request.param("index"))); indicesSegmentsRequest.verbose(request.paramAsBoolean("verbose", false)); indicesSegmentsRequest.indicesOptions(IndicesOptions.fromRequest(request, indicesSegmentsRequest.indicesOptions())); - client.admin().indices().segments(indicesSegmentsRequest, new RestBuilderListener(channel) { - @Override - public RestResponse buildResponse(IndicesSegmentResponse response, XContentBuilder builder) throws Exception { - builder.startObject(); - buildBroadcastShardsHeader(builder, request, response); - response.toXContent(builder, request); - builder.endObject(); - return new BytesRestResponse(OK, builder); - } - }); + return channel -> + client.admin().indices().segments(indicesSegmentsRequest, new RestBuilderListener(channel) { + @Override + public RestResponse buildResponse(IndicesSegmentResponse response, XContentBuilder builder) throws Exception { + builder.startObject(); + buildBroadcastShardsHeader(builder, request, response); + response.toXContent(builder, request); + builder.endObject(); + return new BytesRestResponse(OK, builder); + } + }); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesShardStoresAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesShardStoresAction.java index 
65c0dc8aa45bd..c00e6efffb032 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesShardStoresAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesShardStoresAction.java @@ -25,17 +25,17 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.GET; import static org.elasticsearch.rest.RestStatus.OK; @@ -43,8 +43,6 @@ * Rest action for {@link IndicesShardStoresAction} */ public class RestIndicesShardStoresAction extends BaseRestHandler { - - @Inject public RestIndicesShardStoresAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_shard_stores", this); @@ -52,21 +50,26 @@ public RestIndicesShardStoresAction(Settings settings, RestController controller } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { IndicesShardStoresRequest indicesShardStoresRequest = new IndicesShardStoresRequest( Strings.splitStringByCommaToArray(request.param("index"))); if (request.hasParam("status")) { indicesShardStoresRequest.shardStatuses(Strings.splitStringByCommaToArray(request.param("status"))); } indicesShardStoresRequest.indicesOptions(IndicesOptions.fromRequest(request, indicesShardStoresRequest.indicesOptions())); - client.admin().indices().shardStores(indicesShardStoresRequest, new RestBuilderListener(channel) { - @Override - public RestResponse buildResponse(IndicesShardStoresResponse response, XContentBuilder builder) throws Exception { - builder.startObject(); - response.toXContent(builder, request); - builder.endObject(); - return new BytesRestResponse(OK, builder); - } - }); + return channel -> + client.admin() + .indices() + .shardStores(indicesShardStoresRequest, new RestBuilderListener(channel) { + @Override + public RestResponse buildResponse( + IndicesShardStoresResponse response, + XContentBuilder builder) throws Exception { + builder.startObject(); + response.toXContent(builder, request); + builder.endObject(); + return new BytesRestResponse(OK, builder); + } + }); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesStatsAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesStatsAction.java index c7dd62688fad7..c04b480527fec 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesStatsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesStatsAction.java @@ -24,37 +24,63 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import 
org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; +import java.util.Collections; +import java.util.HashMap; +import java.util.Locale; +import java.util.Map; import java.util.Set; +import java.util.TreeSet; +import java.util.function.Consumer; import static org.elasticsearch.rest.RestRequest.Method.GET; import static org.elasticsearch.rest.RestStatus.OK; import static org.elasticsearch.rest.action.RestActions.buildBroadcastShardsHeader; public class RestIndicesStatsAction extends BaseRestHandler { - - @Inject public RestIndicesStatsAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_stats", this); controller.registerHandler(GET, "/_stats/{metric}", this); - controller.registerHandler(GET, "/_stats/{metric}/{indexMetric}", this); controller.registerHandler(GET, "/{index}/_stats", this); controller.registerHandler(GET, "/{index}/_stats/{metric}", this); } + static final Map> METRICS; + + static { + final Map> metrics = new HashMap<>(); + metrics.put("docs", r -> r.docs(true)); + metrics.put("store", r -> r.store(true)); + metrics.put("indexing", r -> r.indexing(true)); + metrics.put("search", r -> r.search(true)); + metrics.put("suggest", r -> r.search(true)); + metrics.put("get", r -> r.get(true)); + metrics.put("merge", r -> r.merge(true)); + metrics.put("refresh", r -> r.refresh(true)); + metrics.put("flush", r -> r.flush(true)); + metrics.put("warmer", r -> r.warmer(true)); + metrics.put("query_cache", r -> r.queryCache(true)); + metrics.put("segments", r -> r.segments(true)); + metrics.put("fielddata", r -> r.fieldData(true)); + metrics.put("completion", r -> r.completion(true)); + metrics.put("request_cache", r -> r.requestCache(true)); + metrics.put("recovery", r -> r.recovery(true)); + metrics.put("translog", r -> r.translog(true)); + METRICS = Collections.unmodifiableMap(metrics); + } + @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { IndicesStatsRequest indicesStatsRequest = new IndicesStatsRequest(); indicesStatsRequest.indicesOptions(IndicesOptions.fromRequest(request, indicesStatsRequest.indicesOptions())); indicesStatsRequest.indices(Strings.splitStringByCommaToArray(request.param("index"))); @@ -64,24 +90,34 @@ public void handleRequest(final RestRequest request, final RestChannel channel, // short cut, if no metrics have been specified in URI if (metrics.size() == 1 && metrics.contains("_all")) { indicesStatsRequest.all(); + } else if (metrics.contains("_all")) { + throw new IllegalArgumentException( + String.format(Locale.ROOT, + "request [%s] contains _all and individual metrics [%s]", + request.path(), + request.param("metric"))); } else { indicesStatsRequest.clear(); - indicesStatsRequest.docs(metrics.contains("docs")); - indicesStatsRequest.store(metrics.contains("store")); - indicesStatsRequest.indexing(metrics.contains("indexing")); - indicesStatsRequest.search(metrics.contains("search") || metrics.contains("suggest")); - 
indicesStatsRequest.get(metrics.contains("get")); - indicesStatsRequest.merge(metrics.contains("merge")); - indicesStatsRequest.refresh(metrics.contains("refresh")); - indicesStatsRequest.flush(metrics.contains("flush")); - indicesStatsRequest.warmer(metrics.contains("warmer")); - indicesStatsRequest.queryCache(metrics.contains("query_cache")); - indicesStatsRequest.segments(metrics.contains("segments")); - indicesStatsRequest.fieldData(metrics.contains("fielddata")); - indicesStatsRequest.completion(metrics.contains("completion")); - indicesStatsRequest.requestCache(metrics.contains("request_cache")); - indicesStatsRequest.recovery(metrics.contains("recovery")); - indicesStatsRequest.translog(metrics.contains("translog")); + // use a sorted set so the unrecognized parameters appear in a reliable sorted order + final Set invalidMetrics = new TreeSet<>(); + for (final String metric : metrics) { + final Consumer consumer = METRICS.get(metric); + if (consumer != null) { + consumer.accept(indicesStatsRequest); + } else { + invalidMetrics.add(metric); + } + } + + if (invalidMetrics.contains("percolate")) { + deprecationLogger.deprecated( + "percolate stats are no longer available and requests for percolate stats will fail starting in 6.0.0"); + invalidMetrics.remove("percolate"); + } + + if (!invalidMetrics.isEmpty()) { + throw new IllegalArgumentException(unrecognized(request, invalidMetrics, METRICS.keySet(), "metric")); + } } if (request.hasParam("groups")) { @@ -102,11 +138,11 @@ public void handleRequest(final RestRequest request, final RestChannel channel, request.paramAsStringArray("fielddata_fields", request.paramAsStringArray("fields", Strings.EMPTY_ARRAY))); } - if (indicesStatsRequest.segments() && request.hasParam("include_segment_file_sizes")) { - indicesStatsRequest.includeSegmentFileSizes(true); + if (indicesStatsRequest.segments()) { + indicesStatsRequest.includeSegmentFileSizes(request.paramAsBoolean("include_segment_file_sizes", false)); } - client.admin().indices().stats(indicesStatsRequest, new RestBuilderListener(channel) { + return channel -> client.admin().indices().stats(indicesStatsRequest, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(IndicesStatsResponse response, XContentBuilder builder) throws Exception { builder.startObject(); @@ -122,4 +158,12 @@ public RestResponse buildResponse(IndicesStatsResponse response, XContentBuilder public boolean canTripCircuitBreaker() { return false; } + + private static final Set RESPONSE_PARAMS = Collections.singleton("level"); + + @Override + protected Set responseParams() { + return RESPONSE_PARAMS; + } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestOpenIndexAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestOpenIndexAction.java index dd40705769fff..8b1050bbba7a1 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestOpenIndexAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestOpenIndexAction.java @@ -24,20 +24,18 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import 
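RestIndicesStatsAction above replaces the long chain of metrics.contains(...) calls with a static map from metric name to a Consumer that flips the matching flag on the request, rejects `_all` mixed with named metrics, and gathers unknown names into a TreeSet so the error message lists them in a stable order. A self-contained sketch of that lookup-and-validate pattern; `StatsRequest` and its fields are stand-ins for the real IndicesStatsRequest:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.function.Consumer;

class MetricDispatch {
    static class StatsRequest {              // stand-in for IndicesStatsRequest
        boolean all, docs, store, indexing;
    }

    static final Map<String, Consumer<StatsRequest>> METRICS;
    static {
        final Map<String, Consumer<StatsRequest>> metrics = new HashMap<>();
        metrics.put("docs", r -> r.docs = true);
        metrics.put("store", r -> r.store = true);
        metrics.put("indexing", r -> r.indexing = true);
        METRICS = Collections.unmodifiableMap(metrics);
    }

    static StatsRequest fromMetrics(Set<String> metrics) {
        final StatsRequest request = new StatsRequest();
        if (metrics.size() == 1 && metrics.contains("_all")) {
            request.all = true;                               // shortcut: everything requested
            return request;
        }
        if (metrics.contains("_all")) {                       // _all mixed with named metrics is an error
            throw new IllegalArgumentException("request contains _all and individual metrics " + metrics);
        }
        final Set<String> invalid = new TreeSet<>();          // sorted, so the message is deterministic
        for (final String metric : metrics) {
            final Consumer<StatsRequest> consumer = METRICS.get(metric);
            if (consumer != null) {
                consumer.accept(request);
            } else {
                invalid.add(metric);
            }
        }
        if (!invalid.isEmpty()) {
            throw new IllegalArgumentException(
                    "unrecognized metrics " + invalid + ", expected one of " + new TreeSet<>(METRICS.keySet()));
        }
        return request;
    }

    public static void main(String[] args) {
        System.out.println(fromMetrics(new HashSet<>(Arrays.asList("docs", "store"))).docs); // true
        try {
            fromMetrics(new HashSet<>(Arrays.asList("docs", "percolate")));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());               // unrecognized metrics [percolate], ...
        }
    }
}
```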
org.elasticsearch.rest.action.AcknowledgedRestListener; +import java.io.IOException; + /** * */ public class RestOpenIndexAction extends BaseRestHandler { - - @Inject public RestOpenIndexAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.POST, "/_open", this); @@ -45,11 +43,11 @@ public RestOpenIndexAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { OpenIndexRequest openIndexRequest = new OpenIndexRequest(Strings.splitStringByCommaToArray(request.param("index"))); openIndexRequest.timeout(request.paramAsTime("timeout", openIndexRequest.timeout())); openIndexRequest.masterNodeTimeout(request.paramAsTime("master_timeout", openIndexRequest.masterNodeTimeout())); openIndexRequest.indicesOptions(IndicesOptions.fromRequest(request, openIndexRequest.indicesOptions())); - client.admin().indices().open(openIndexRequest, new AcknowledgedRestListener(channel)); + return channel -> client.admin().indices().open(openIndexRequest, new AcknowledgedRestListener(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestPutIndexTemplateAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestPutIndexTemplateAction.java index 398eb62c1f147..282d8e2440c3e 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestPutIndexTemplateAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestPutIndexTemplateAction.java @@ -16,41 +16,36 @@ * specific language governing permissions and limitations * under the License. 
*/ + package org.elasticsearch.rest.action.admin.indices; import org.elasticsearch.action.admin.indices.template.put.PutIndexTemplateRequest; -import org.elasticsearch.action.admin.indices.template.put.PutIndexTemplateResponse; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; -/** - * - */ -public class RestPutIndexTemplateAction extends BaseRestHandler { +import java.io.IOException; - @Inject +public class RestPutIndexTemplateAction extends BaseRestHandler { public RestPutIndexTemplateAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.PUT, "/_template/{name}", this); controller.registerHandler(RestRequest.Method.POST, "/_template/{name}", this); } - @SuppressWarnings({"unchecked"}) @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { PutIndexTemplateRequest putRequest = new PutIndexTemplateRequest(request.param("name")); putRequest.template(request.param("template", putRequest.template())); putRequest.order(request.paramAsInt("order", putRequest.order())); putRequest.masterNodeTimeout(request.paramAsTime("master_timeout", putRequest.masterNodeTimeout())); putRequest.create(request.paramAsBoolean("create", false)); putRequest.cause(request.param("cause", "")); - putRequest.source(request.content()); - client.admin().indices().putTemplate(putRequest, new AcknowledgedRestListener(channel)); + putRequest.source(request.requiredContent(), request.getXContentType()); + return channel -> client.admin().indices().putTemplate(putRequest, new AcknowledgedRestListener<>(channel)); } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestPutMappingAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestPutMappingAction.java index 3a582d0b0a96b..06b48d0142b47 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestPutMappingAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestPutMappingAction.java @@ -20,18 +20,17 @@ package org.elasticsearch.rest.action.admin.indices; import org.elasticsearch.action.admin.indices.mapping.put.PutMappingRequest; -import org.elasticsearch.action.admin.indices.mapping.put.PutMappingResponse; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; +import java.io.IOException; + import static org.elasticsearch.client.Requests.putMappingRequest; import static org.elasticsearch.rest.RestRequest.Method.POST; import static org.elasticsearch.rest.RestRequest.Method.PUT; @@ -40,9 +39,6 @@ * */ public class RestPutMappingAction extends BaseRestHandler { - - - @Inject 
public RestPutMappingAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(PUT, "/{index}/_mapping/", this); @@ -68,14 +64,14 @@ public RestPutMappingAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { PutMappingRequest putMappingRequest = putMappingRequest(Strings.splitStringByCommaToArray(request.param("index"))); putMappingRequest.type(request.param("type")); - putMappingRequest.source(request.content().utf8ToString()); + putMappingRequest.source(request.requiredContent(), request.getXContentType()); putMappingRequest.updateAllTypes(request.paramAsBoolean("update_all_types", false)); putMappingRequest.timeout(request.paramAsTime("timeout", putMappingRequest.timeout())); putMappingRequest.masterNodeTimeout(request.paramAsTime("master_timeout", putMappingRequest.masterNodeTimeout())); putMappingRequest.indicesOptions(IndicesOptions.fromRequest(request, putMappingRequest.indicesOptions())); - client.admin().indices().putMapping(putMappingRequest, new AcknowledgedRestListener(channel)); + return channel -> client.admin().indices().putMapping(putMappingRequest, new AcknowledgedRestListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestRecoveryAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestRecoveryAction.java index 5dee73606fd72..ab1295e9c6015 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestRecoveryAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestRecoveryAction.java @@ -24,17 +24,17 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.GET; import static org.elasticsearch.rest.RestStatus.OK; @@ -42,8 +42,6 @@ * REST handler to report on index recoveries. 
*/ public class RestRecoveryAction extends BaseRestHandler { - - @Inject public RestRecoveryAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_recovery", this); @@ -51,14 +49,14 @@ public RestRecoveryAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { final RecoveryRequest recoveryRequest = new RecoveryRequest(Strings.splitStringByCommaToArray(request.param("index"))); recoveryRequest.detailed(request.paramAsBoolean("detailed", false)); recoveryRequest.activeOnly(request.paramAsBoolean("active_only", false)); recoveryRequest.indicesOptions(IndicesOptions.fromRequest(request, recoveryRequest.indicesOptions())); - client.admin().indices().recoveries(recoveryRequest, new RestBuilderListener(channel) { + return channel -> client.admin().indices().recoveries(recoveryRequest, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(RecoveryResponse response, XContentBuilder builder) throws Exception { response.detailed(recoveryRequest.detailed()); diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestRefreshAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestRefreshAction.java index 54088a7ddb244..c213f3be18b6b 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestRefreshAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestRefreshAction.java @@ -24,28 +24,23 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.GET; import static org.elasticsearch.rest.RestRequest.Method.POST; import static org.elasticsearch.rest.RestStatus.OK; import static org.elasticsearch.rest.action.RestActions.buildBroadcastShardsHeader; -/** - * - */ public class RestRefreshAction extends BaseRestHandler { - - @Inject public RestRefreshAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(POST, "/_refresh", this); @@ -56,10 +51,10 @@ public RestRefreshAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { RefreshRequest refreshRequest = new RefreshRequest(Strings.splitStringByCommaToArray(request.param("index"))); refreshRequest.indicesOptions(IndicesOptions.fromRequest(request, refreshRequest.indicesOptions())); - client.admin().indices().refresh(refreshRequest, new RestBuilderListener(channel) { + return channel -> 
client.admin().indices().refresh(refreshRequest, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(RefreshResponse response, XContentBuilder builder) throws Exception { builder.startObject(); diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestRolloverIndexAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestRolloverIndexAction.java index 1433bc425715d..09ef33ecaf913 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestRolloverIndexAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestRolloverIndexAction.java @@ -20,39 +20,34 @@ package org.elasticsearch.rest.action.admin.indices; import org.elasticsearch.action.admin.indices.rollover.RolloverRequest; -import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.action.support.ActiveShardCount; -import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestToXContentListener; +import java.io.IOException; + /** * */ public class RestRolloverIndexAction extends BaseRestHandler { - - @Inject public RestRolloverIndexAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.POST, "/{index}/_rollover", this); controller.registerHandler(RestRequest.Method.POST, "/{index}/_rollover/{new_index}", this); } - @SuppressWarnings({"unchecked"}) @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { RolloverRequest rolloverIndexRequest = new RolloverRequest(request.param("index"), request.param("new_index")); - if (request.hasContent()) { - rolloverIndexRequest.source(request.content()); - } + request.applyContentParser(parser -> RolloverRequest.PARSER.parse(parser, rolloverIndexRequest, null)); rolloverIndexRequest.dryRun(request.paramAsBoolean("dry_run", false)); rolloverIndexRequest.timeout(request.paramAsTime("timeout", rolloverIndexRequest.timeout())); rolloverIndexRequest.masterNodeTimeout(request.paramAsTime("master_timeout", rolloverIndexRequest.masterNodeTimeout())); rolloverIndexRequest.setWaitForActiveShards(ActiveShardCount.parseString(request.param("wait_for_active_shards"))); - client.admin().indices().rolloverIndex(rolloverIndexRequest, new RestToXContentListener<>(channel)); + return channel -> client.admin().indices().rolloverIndex(rolloverIndexRequest, new RestToXContentListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestShrinkIndexAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestShrinkIndexAction.java index f04c9760a6397..1b63eb7f6d3ab 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestShrinkIndexAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestShrinkIndexAction.java @@ -20,35 +20,27 @@ package org.elasticsearch.rest.action.admin.indices; import org.elasticsearch.action.admin.indices.shrink.ShrinkRequest; -import org.elasticsearch.client.node.NodeClient; import 
org.elasticsearch.action.admin.indices.shrink.ShrinkResponse; import org.elasticsearch.action.support.ActiveShardCount; -import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; import java.io.IOException; -/** - * - */ public class RestShrinkIndexAction extends BaseRestHandler { - - @Inject public RestShrinkIndexAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.PUT, "/{index}/_shrink/{target}", this); controller.registerHandler(RestRequest.Method.POST, "/{index}/_shrink/{target}", this); } - @SuppressWarnings({"unchecked"}) @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { if (request.param("target") == null) { throw new IllegalArgumentException("no target index"); } @@ -56,13 +48,11 @@ public void handleRequest(final RestRequest request, final RestChannel channel, throw new IllegalArgumentException("no source index"); } ShrinkRequest shrinkIndexRequest = new ShrinkRequest(request.param("target"), request.param("index")); - if (request.hasContent()) { - shrinkIndexRequest.source(request.content()); - } + request.applyContentParser(parser -> ShrinkRequest.PARSER.parse(parser, shrinkIndexRequest, null)); shrinkIndexRequest.timeout(request.paramAsTime("timeout", shrinkIndexRequest.timeout())); shrinkIndexRequest.masterNodeTimeout(request.paramAsTime("master_timeout", shrinkIndexRequest.masterNodeTimeout())); shrinkIndexRequest.setWaitForActiveShards(ActiveShardCount.parseString(request.param("wait_for_active_shards"))); - client.admin().indices().shrinkIndex(shrinkIndexRequest, new AcknowledgedRestListener(channel) { + return channel -> client.admin().indices().shrinkIndex(shrinkIndexRequest, new AcknowledgedRestListener(channel) { @Override public void addCustomFields(XContentBuilder builder, ShrinkResponse response) throws IOException { response.addCustomFields(builder); diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestSyncedFlushAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestSyncedFlushAction.java index 784a588db8927..903b0dee04f0b 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestSyncedFlushAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestSyncedFlushAction.java @@ -24,17 +24,17 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.action.RestBuilderListener; +import 
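RestRolloverIndexAction and RestShrinkIndexAction above swap the explicit `if (request.hasContent())` guard for request.applyContentParser(...), which, as used here, only invokes the supplied parsing callback when the request actually carries a body, so the request object keeps its defaults otherwise. A small model of that convention using JDK types only (`applyContentParser` below is a stand-in sketch, not the Elasticsearch method):

```java
import java.util.function.Consumer;

class OptionalBody {
    // stand-in for RestRequest.applyContentParser: run the parsing callback only when a body was sent
    static void applyContentParser(String content, Consumer<String> withParser) {
        if (content != null && !content.isEmpty()) {
            withParser.accept(content);
        }
    }

    public static void main(String[] args) {
        StringBuilder conditions = new StringBuilder("defaults");   // request defaults
        applyContentParser(null, body -> conditions.replace(0, conditions.length(), body));
        System.out.println(conditions);                             // defaults (no body, nothing parsed)
        applyContentParser("{\"conditions\":{\"max_age\":\"7d\"}}",
                body -> conditions.replace(0, conditions.length(), body));
        System.out.println(conditions);                             // body present: parsed into the request
    }
}
```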
java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.GET; import static org.elasticsearch.rest.RestRequest.Method.POST; @@ -42,8 +42,6 @@ * */ public class RestSyncedFlushAction extends BaseRestHandler { - - @Inject public RestSyncedFlushAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(POST, "/_flush/synced", this); @@ -54,11 +52,11 @@ public RestSyncedFlushAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { IndicesOptions indicesOptions = IndicesOptions.fromRequest(request, IndicesOptions.lenientExpandOpen()); SyncedFlushRequest syncedFlushRequest = new SyncedFlushRequest(Strings.splitStringByCommaToArray(request.param("index"))); syncedFlushRequest.indicesOptions(indicesOptions); - client.admin().indices().syncedFlush(syncedFlushRequest, new RestBuilderListener(channel) { + return channel -> client.admin().indices().syncedFlush(syncedFlushRequest, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(SyncedFlushResponse results, XContentBuilder builder) throws Exception { builder.startObject(); diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestTypesExistsAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestTypesExistsAction.java deleted file mode 100644 index 3877715395c34..0000000000000 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestTypesExistsAction.java +++ /dev/null @@ -1,72 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ -package org.elasticsearch.rest.action.admin.indices; - -import org.elasticsearch.action.admin.indices.exists.types.TypesExistsRequest; -import org.elasticsearch.action.admin.indices.exists.types.TypesExistsResponse; -import org.elasticsearch.action.support.IndicesOptions; -import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.Strings; -import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; -import org.elasticsearch.rest.RestController; -import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.RestResponse; -import org.elasticsearch.rest.action.RestResponseListener; - -import static org.elasticsearch.rest.RestRequest.Method.HEAD; -import static org.elasticsearch.rest.RestStatus.NOT_FOUND; -import static org.elasticsearch.rest.RestStatus.OK; - -/** - * Rest api for checking if a type exists. - */ -public class RestTypesExistsAction extends BaseRestHandler { - - @Inject - public RestTypesExistsAction(Settings settings, RestController controller) { - super(settings); - controller.registerWithDeprecatedHandler( - HEAD, "/{index}/_mapping/{type}", this, - HEAD, "/{index}/{type}", deprecationLogger); - } - - @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { - TypesExistsRequest typesExistsRequest = new TypesExistsRequest( - Strings.splitStringByCommaToArray(request.param("index")), Strings.splitStringByCommaToArray(request.param("type")) - ); - typesExistsRequest.local(request.paramAsBoolean("local", typesExistsRequest.local())); - typesExistsRequest.indicesOptions(IndicesOptions.fromRequest(request, typesExistsRequest.indicesOptions())); - client.admin().indices().typesExists(typesExistsRequest, new RestResponseListener(channel) { - @Override - public RestResponse buildResponse(TypesExistsResponse response) throws Exception { - if (response.isExists()) { - return new BytesRestResponse(OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY); - } else { - return new BytesRestResponse(NOT_FOUND, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY); - } - } - }); - } -} diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestUpdateSettingsAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestUpdateSettingsAction.java index 0c1b535901e4f..c0ee1edb10aac 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestUpdateSettingsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestUpdateSettingsAction.java @@ -23,36 +23,25 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; +import java.io.IOException; +import java.util.HashMap; import java.util.Map; import java.util.Set; -import static java.util.Collections.unmodifiableSet; import static 
org.elasticsearch.client.Requests.updateSettingsRequest; -import static org.elasticsearch.common.util.set.Sets.newHashSet; /** * */ public class RestUpdateSettingsAction extends BaseRestHandler { - private static final Set VALUES_TO_EXCLUDE = unmodifiableSet(newHashSet( - "pretty", - "timeout", - "master_timeout", - "index", - "preserve_existing", - "expand_wildcards", - "ignore_unavailable", - "allow_no_indices")); - @Inject public RestUpdateSettingsAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.PUT, "/{index}/_settings", this); @@ -60,35 +49,33 @@ public RestUpdateSettingsAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { UpdateSettingsRequest updateSettingsRequest = updateSettingsRequest(Strings.splitStringByCommaToArray(request.param("index"))); updateSettingsRequest.timeout(request.paramAsTime("timeout", updateSettingsRequest.timeout())); updateSettingsRequest.setPreserveExisting(request.paramAsBoolean("preserve_existing", updateSettingsRequest.isPreserveExisting())); updateSettingsRequest.masterNodeTimeout(request.paramAsTime("master_timeout", updateSettingsRequest.masterNodeTimeout())); updateSettingsRequest.indicesOptions(IndicesOptions.fromRequest(request, updateSettingsRequest.indicesOptions())); - Settings.Builder updateSettings = Settings.builder(); - String bodySettingsStr = request.content().utf8ToString(); - if (Strings.hasText(bodySettingsStr)) { - Settings buildSettings = Settings.builder().loadFromSource(bodySettingsStr).build(); - for (Map.Entry entry : buildSettings.getAsMap().entrySet()) { - String key = entry.getKey(); - String value = entry.getValue(); - // clean up in case the body is wrapped with "settings" : { ... } - if (key.startsWith("settings.")) { - key = key.substring("settings.".length()); - } - updateSettings.put(key, value); + Map settings = new HashMap<>(); + try (XContentParser parser = request.contentParser()) { + Map bodySettings = parser.map(); + Object innerBodySettings = bodySettings.get("settings"); + // clean up in case the body is wrapped with "settings" : { ... 
} + if (innerBodySettings instanceof Map) { + @SuppressWarnings("unchecked") + Map innerBodySettingsMap = (Map) innerBodySettings; + settings.putAll(innerBodySettingsMap); + } else { + settings.putAll(bodySettings); } } - for (Map.Entry entry : request.params().entrySet()) { - if (VALUES_TO_EXCLUDE.contains(entry.getKey())) { - continue; - } - updateSettings.put(entry.getKey(), entry.getValue()); - } - updateSettingsRequest.settings(updateSettings); + updateSettingsRequest.settings(settings); - client.admin().indices().updateSettings(updateSettingsRequest, new AcknowledgedRestListener<>(channel)); + return channel -> client.admin().indices().updateSettings(updateSettingsRequest, new AcknowledgedRestListener<>(channel)); + } + + @Override + protected Set responseParams() { + return Settings.FORMAT_PARAMS; } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestUpgradeAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestUpgradeAction.java index e0659e1cf525f..9437ad5eada93 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestUpgradeAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestUpgradeAction.java @@ -20,23 +20,24 @@ package org.elasticsearch.rest.action.admin.indices; import org.elasticsearch.Version; +import org.elasticsearch.action.admin.indices.upgrade.get.UpgradeStatusRequest; import org.elasticsearch.action.admin.indices.upgrade.get.UpgradeStatusResponse; import org.elasticsearch.action.admin.indices.upgrade.post.UpgradeRequest; import org.elasticsearch.action.admin.indices.upgrade.post.UpgradeResponse; +import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; import org.elasticsearch.common.collect.Tuple; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.action.RestBuilderListener; +import java.io.IOException; import java.util.Map; import static org.elasticsearch.rest.RestRequest.Method.GET; @@ -44,10 +45,7 @@ import static org.elasticsearch.rest.RestStatus.OK; import static org.elasticsearch.rest.action.RestActions.buildBroadcastShardsHeader; - public class RestUpgradeAction extends BaseRestHandler { - - @Inject public RestUpgradeAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(POST, "/_upgrade", this); @@ -58,31 +56,35 @@ public RestUpgradeAction(Settings settings, RestController controller) { } @Override - public void handleRequest(RestRequest request, RestChannel channel, NodeClient client) throws Exception { + public RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException { if (request.method().equals(RestRequest.Method.GET)) { - handleGet(request, channel, client); + return handleGet(request, client); } else if (request.method().equals(RestRequest.Method.POST)) { - handlePost(request, channel, client); + return handlePost(request, client); + } else { + throw new IllegalArgumentException("illegal method [" + request.method() + "] for request [" + request.path() + "]"); } } - void 
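The hunks above (RestUpdateSettingsAction, and RestUpgradeAction just below) convert blocking `handleRequest(request, channel, client)` methods into `prepareRequest(request, client)` methods that do all parameter and body parsing eagerly and return a consumer that is only later handed the response channel. A minimal, self-contained sketch of that two-phase shape follows; `FakeChannel` and `ChannelConsumer` are illustrative stand-ins, not the Elasticsearch classes.

```java
import java.util.Collections;
import java.util.Map;

public class TwoPhaseHandlerSketch {

    /** Stand-in for the channel the framework supplies when the response is written. */
    static class FakeChannel {
        void sendResponse(String response) {
            System.out.println(response);
        }
    }

    /** Stand-in for the deferred second phase (what RestChannelConsumer plays in the patch). */
    @FunctionalInterface
    interface ChannelConsumer {
        void accept(FakeChannel channel) throws Exception;
    }

    // Phase 1: everything that can fail (parameter and body parsing) happens here,
    // before any response channel exists, so malformed requests fail fast.
    static ChannelConsumer prepareRequest(Map<String, String> params) {
        String index = params.getOrDefault("index", "_all");
        boolean preserveExisting = Boolean.parseBoolean(params.getOrDefault("preserve_existing", "false"));

        // Phase 2: the lambda captures the fully parsed values and only touches the
        // channel when it is eventually invoked.
        return channel -> channel.sendResponse(
            "update settings on [" + index + "], preserve_existing=" + preserveExisting);
    }

    public static void main(String[] args) throws Exception {
        ChannelConsumer consumer = prepareRequest(Collections.singletonMap("index", "twitter"));
        consumer.accept(new FakeChannel());
    }
}
```

The design benefit is that request validation cannot leave a channel half-written: by the time a consumer exists, parsing has already succeeded or thrown.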
handleGet(final RestRequest request, RestChannel channel, NodeClient client) { - client.admin().indices().prepareUpgradeStatus(Strings.splitStringByCommaToArray(request.param("index"))) - .execute(new RestBuilderListener(channel) { - @Override - public RestResponse buildResponse(UpgradeStatusResponse response, XContentBuilder builder) throws Exception { - builder.startObject(); - response.toXContent(builder, request); - builder.endObject(); - return new BytesRestResponse(OK, builder); - } - }); + private RestChannelConsumer handleGet(final RestRequest request, NodeClient client) { + UpgradeStatusRequest statusRequest = new UpgradeStatusRequest(Strings.splitStringByCommaToArray(request.param("index"))); + statusRequest.indicesOptions(IndicesOptions.fromRequest(request, statusRequest.indicesOptions())); + return channel -> client.admin().indices().upgradeStatus(statusRequest, new RestBuilderListener(channel) { + @Override + public RestResponse buildResponse(UpgradeStatusResponse response, XContentBuilder builder) throws Exception { + builder.startObject(); + response.toXContent(builder, request); + builder.endObject(); + return new BytesRestResponse(OK, builder); + } + }); } - void handlePost(final RestRequest request, RestChannel channel, NodeClient client) { + private RestChannelConsumer handlePost(final RestRequest request, NodeClient client) { UpgradeRequest upgradeReq = new UpgradeRequest(Strings.splitStringByCommaToArray(request.param("index"))); + upgradeReq.indicesOptions(IndicesOptions.fromRequest(request, upgradeReq.indicesOptions())); upgradeReq.upgradeOnlyAncientSegments(request.paramAsBoolean("only_ancient_segments", false)); - client.admin().indices().upgrade(upgradeReq, new RestBuilderListener(channel) { + return channel -> client.admin().indices().upgrade(upgradeReq, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(UpgradeResponse response, XContentBuilder builder) throws Exception { builder.startObject(); diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestValidateQueryAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestValidateQueryAction.java index 7bf2a34ef632e..0c2374045dd9b 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestValidateQueryAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestValidateQueryAction.java @@ -26,11 +26,8 @@ import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.index.query.QueryBuilder; -import org.elasticsearch.indices.query.IndicesQueriesRegistry; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; import org.elasticsearch.rest.RestChannel; @@ -47,15 +44,8 @@ import static org.elasticsearch.rest.RestStatus.OK; import static org.elasticsearch.rest.action.RestActions.buildBroadcastShardsHeader; -/** - * - */ public class RestValidateQueryAction extends BaseRestHandler { - - private final IndicesQueriesRegistry indicesQueriesRegistry; - - @Inject - public RestValidateQueryAction(Settings settings, RestController controller, IndicesQueriesRegistry indicesQueriesRegistry) { + public RestValidateQueryAction(Settings settings, RestController controller) { super(settings); 
controller.registerHandler(GET, "/_validate/query", this); controller.registerHandler(POST, "/_validate/query", this); @@ -63,62 +53,76 @@ public RestValidateQueryAction(Settings settings, RestController controller, Ind controller.registerHandler(POST, "/{index}/_validate/query", this); controller.registerHandler(GET, "/{index}/{type}/_validate/query", this); controller.registerHandler(POST, "/{index}/{type}/_validate/query", this); - this.indicesQueriesRegistry = indicesQueriesRegistry; } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws Exception { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { ValidateQueryRequest validateQueryRequest = new ValidateQueryRequest(Strings.splitStringByCommaToArray(request.param("index"))); validateQueryRequest.indicesOptions(IndicesOptions.fromRequest(request, validateQueryRequest.indicesOptions())); validateQueryRequest.explain(request.paramAsBoolean("explain", false)); - if (RestActions.hasBodyContent(request)) { - try { - validateQueryRequest - .query(RestActions.getQueryContent(RestActions.getRestContent(request), indicesQueriesRegistry, parseFieldMatcher)); - } catch(ParsingException e) { - channel.sendResponse(buildErrorResponse(channel.newBuilder(), e.getDetailedMessage(), validateQueryRequest.explain())); - return; - } catch(Exception e) { - channel.sendResponse(buildErrorResponse(channel.newBuilder(), e.getMessage(), validateQueryRequest.explain())); - return; - } - } else { - QueryBuilder queryBuilder = RestActions.urlParamsToQueryBuilder(request); - if (queryBuilder != null) { - validateQueryRequest.query(queryBuilder); - } - } validateQueryRequest.types(Strings.splitStringByCommaToArray(request.param("type"))); validateQueryRequest.rewrite(request.paramAsBoolean("rewrite", false)); + validateQueryRequest.allShards(request.paramAsBoolean("all_shards", false)); + + Exception bodyParsingException = null; + try { + request.withContentOrSourceParamParserOrNull(parser -> { + if (parser != null) { + validateQueryRequest.query(RestActions.getQueryContent(parser)); + } else if (request.hasParam("q")) { + validateQueryRequest.query(RestActions.urlParamsToQueryBuilder(request)); + } + }); + } catch (Exception e) { + bodyParsingException = e; + } - client.admin().indices().validateQuery(validateQueryRequest, new RestBuilderListener(channel) { - @Override - public RestResponse buildResponse(ValidateQueryResponse response, XContentBuilder builder) throws Exception { - builder.startObject(); - builder.field(VALID_FIELD, response.isValid()); - buildBroadcastShardsHeader(builder, request, response); - if (response.getQueryExplanation() != null && !response.getQueryExplanation().isEmpty()) { - builder.startArray(EXPLANATIONS_FIELD); - for (QueryExplanation explanation : response.getQueryExplanation()) { + final Exception finalBodyParsingException = bodyParsingException; + return channel -> { + if (finalBodyParsingException != null) { + if (finalBodyParsingException instanceof ParsingException) { + handleException(validateQueryRequest, ((ParsingException) finalBodyParsingException).getDetailedMessage(), channel); + } else { + handleException(validateQueryRequest, finalBodyParsingException.getMessage(), channel); + } + } else { + client.admin().indices().validateQuery(validateQueryRequest, new RestBuilderListener(channel) { + @Override + public RestResponse buildResponse(ValidateQueryResponse response, XContentBuilder 
builder) throws Exception { builder.startObject(); - if (explanation.getIndex() != null) { - builder.field(INDEX_FIELD, explanation.getIndex()); - } - builder.field(VALID_FIELD, explanation.isValid()); - if (explanation.getError() != null) { - builder.field(ERROR_FIELD, explanation.getError()); - } - if (explanation.getExplanation() != null) { - builder.field(EXPLANATION_FIELD, explanation.getExplanation()); + builder.field(VALID_FIELD, response.isValid()); + buildBroadcastShardsHeader(builder, request, response); + if (response.getQueryExplanation() != null && !response.getQueryExplanation().isEmpty()) { + builder.startArray(EXPLANATIONS_FIELD); + for (QueryExplanation explanation : response.getQueryExplanation()) { + builder.startObject(); + if (explanation.getIndex() != null) { + builder.field(INDEX_FIELD, explanation.getIndex()); + } + if(explanation.getShard() >= 0) { + builder.field(SHARD_FIELD, explanation.getShard()); + } + builder.field(VALID_FIELD, explanation.isValid()); + if (explanation.getError() != null) { + builder.field(ERROR_FIELD, explanation.getError()); + } + if (explanation.getExplanation() != null) { + builder.field(EXPLANATION_FIELD, explanation.getExplanation()); + } + builder.endObject(); + } + builder.endArray(); } builder.endObject(); + return new BytesRestResponse(OK, builder); } - builder.endArray(); - } - builder.endObject(); - return new BytesRestResponse(OK, builder); + }); } - }); + }; + } + + private void handleException(final ValidateQueryRequest request, final String message, final RestChannel channel) throws IOException { + channel.sendResponse(buildErrorResponse(channel.newBuilder(), message, request.explain())); } private static BytesRestResponse buildErrorResponse(XContentBuilder builder, String error, boolean explain) throws IOException { @@ -132,6 +136,7 @@ private static BytesRestResponse buildErrorResponse(XContentBuilder builder, Str } private static final String INDEX_FIELD = "index"; + private static final String SHARD_FIELD = "shard"; private static final String VALID_FIELD = "valid"; private static final String EXPLANATIONS_FIELD = "explanations"; private static final String ERROR_FIELD = "error"; diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/AbstractCatAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/AbstractCatAction.java index 8315e34d08e23..58dc861126bfe 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/AbstractCatAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/AbstractCatAction.java @@ -20,54 +20,68 @@ import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Table; +import org.elasticsearch.common.io.Streams; import org.elasticsearch.common.io.UTF8StreamWriter; -import org.elasticsearch.common.io.stream.BytesStreamOutput; +import org.elasticsearch.common.io.stream.BytesStream; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestStatus; +import java.io.IOException; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashSet; +import java.util.Set; + import static org.elasticsearch.rest.action.cat.RestTable.buildHelpWidths; import static org.elasticsearch.rest.action.cat.RestTable.pad; -/** - * - */ public abstract class AbstractCatAction extends BaseRestHandler { public AbstractCatAction(Settings 
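The RestValidateQueryAction hunk above keeps this two-phase split even for parse failures: the exception is caught eagerly, copied into an effectively final local, and the returned consumer decides between sending an error body and issuing the real call. A small stand-alone sketch of that deferral idiom, with a number parse standing in for query parsing (names are illustrative only):

```java
import java.util.function.Consumer;

public class DeferredErrorSketch {

    static Consumer<StringBuilder> prepare(String body) {
        Exception parsingException = null;
        long parsed = 0;
        try {
            parsed = Long.parseLong(body.trim()); // stand-in for parsing the query body
        } catch (NumberFormatException e) {
            parsingException = e;
        }

        // Locals captured by the lambda must be effectively final, hence the copies.
        final Exception failure = parsingException;
        final long value = parsed;
        return out -> {
            if (failure != null) {
                out.append("error: ").append(failure.getMessage());
            } else {
                out.append("valid: ").append(value);
            }
        };
    }

    public static void main(String[] args) {
        StringBuilder ok = new StringBuilder();
        prepare("42").accept(ok);
        StringBuilder bad = new StringBuilder();
        prepare("not a number").accept(bad);
        System.out.println(ok + " | " + bad);
    }
}
```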
settings) { super(settings); } - protected abstract void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client); + protected abstract RestChannelConsumer doCatRequest(RestRequest request, NodeClient client); protected abstract void documentation(StringBuilder sb); - protected abstract Table getTableWithHeader(final RestRequest request); + protected abstract Table getTableWithHeader(RestRequest request); @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws Exception { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { boolean helpWanted = request.paramAsBoolean("help", false); if (helpWanted) { - Table table = getTableWithHeader(request); - int[] width = buildHelpWidths(table, request); - BytesStreamOutput bytesOutput = channel.bytesOutput(); - UTF8StreamWriter out = new UTF8StreamWriter().setOutput(bytesOutput); - for (Table.Cell cell : table.getHeaders()) { - // need to do left-align always, so create new cells - pad(new Table.Cell(cell.value), width[0], request, out); - out.append(" | "); - pad(new Table.Cell(cell.attr.containsKey("alias") ? cell.attr.get("alias") : ""), width[1], request, out); - out.append(" | "); - pad(new Table.Cell(cell.attr.containsKey("desc") ? cell.attr.get("desc") : "not available"), width[2], request, out); - out.append("\n"); - } - out.close(); - channel.sendResponse(new BytesRestResponse(RestStatus.OK, BytesRestResponse.TEXT_CONTENT_TYPE, bytesOutput.bytes())); + return channel -> { + Table table = getTableWithHeader(request); + int[] width = buildHelpWidths(table, request); + BytesStream bytesOutput = Streams.flushOnCloseStream(channel.bytesOutput()); + UTF8StreamWriter out = new UTF8StreamWriter().setOutput(bytesOutput); + for (Table.Cell cell : table.getHeaders()) { + // need to do left-align always, so create new cells + pad(new Table.Cell(cell.value), width[0], request, out); + out.append(" | "); + pad(new Table.Cell(cell.attr.containsKey("alias") ? cell.attr.get("alias") : ""), width[1], request, out); + out.append(" | "); + pad(new Table.Cell(cell.attr.containsKey("desc") ? 
cell.attr.get("desc") : "not available"), width[2], request, out); + out.append("\n"); + } + out.close(); + channel.sendResponse(new BytesRestResponse(RestStatus.OK, BytesRestResponse.TEXT_CONTENT_TYPE, bytesOutput.bytes())); + }; } else { - doRequest(request, channel, client); + return doCatRequest(request, client); } } + + static Set RESPONSE_PARAMS = + Collections.unmodifiableSet(new HashSet<>(Arrays.asList("format", "h", "v", "ts", "pri", "bytes", "size", "time", "s"))); + + @Override + protected Set responseParams() { + return RESPONSE_PARAMS; + } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestAliasAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestAliasAction.java index 82d59784fc950..a783a9c2a8215 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestAliasAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestAliasAction.java @@ -19,15 +19,14 @@ package org.elasticsearch.rest.action.cat; import com.carrotsearch.hppc.cursors.ObjectObjectCursor; + import org.elasticsearch.action.admin.indices.alias.get.GetAliasesRequest; import org.elasticsearch.action.admin.indices.alias.get.GetAliasesResponse; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.cluster.metadata.AliasMetaData; import org.elasticsearch.common.Strings; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -37,27 +36,21 @@ import static org.elasticsearch.rest.RestRequest.Method.GET; -/** - * - */ public class RestAliasAction extends AbstractCatAction { - - @Inject public RestAliasAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_cat/aliases", this); controller.registerHandler(GET, "/_cat/aliases/{alias}", this); } - @Override - protected void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + protected RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { final GetAliasesRequest getAliasesRequest = request.hasParam("alias") ? 
- new GetAliasesRequest(request.param("alias")) : + new GetAliasesRequest(Strings.commaDelimitedListToStringArray(request.param("alias"))) : new GetAliasesRequest(); getAliasesRequest.local(request.paramAsBoolean("local", getAliasesRequest.local())); - client.admin().indices().getAliases(getAliasesRequest, new RestResponseListener(channel) { + return channel -> client.admin().indices().getAliases(getAliasesRequest, new RestResponseListener(channel) { @Override public RestResponse buildResponse(GetAliasesResponse response) throws Exception { Table tab = buildTable(request, response); diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestAllocationAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestAllocationAction.java index 7649d59af9923..0077297cbf3c9 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestAllocationAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestAllocationAction.java @@ -20,6 +20,7 @@ package org.elasticsearch.rest.action.cat; import com.carrotsearch.hppc.ObjectIntScatterMap; + import org.elasticsearch.action.admin.cluster.node.stats.NodeStats; import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsRequest; import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsResponse; @@ -31,10 +32,8 @@ import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.common.Strings; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeValue; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -45,8 +44,6 @@ public class RestAllocationAction extends AbstractCatAction { - - @Inject public RestAllocationAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_cat/allocation", this); @@ -59,14 +56,14 @@ protected void documentation(StringBuilder sb) { } @Override - public void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { final String[] nodes = Strings.splitStringByCommaToArray(request.param("nodes", "data:true")); final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); clusterStateRequest.clear().routingTable(true); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); clusterStateRequest.masterNodeTimeout(request.paramAsTime("master_timeout", clusterStateRequest.masterNodeTimeout())); - client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { + return channel -> client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { @Override public void processResponse(final ClusterStateResponse state) { NodesStatsRequest statsRequest = new NodesStatsRequest(nodes); @@ -126,10 +123,10 @@ private Table buildTable(RestRequest request, final ClusterStateResponse state, //if we don't know how much we use (non data nodes), it means 0 long used = 0; short diskPercent = -1; - if (total.bytes() > 0) { - used = total.bytes() - avail.bytes(); - if (used >= 0 && avail.bytes() >= 0) { - diskPercent = (short) (used * 100 / (used + avail.bytes())); + if (total.getBytes() > 0) { + used = total.getBytes() - avail.getBytes(); + if (used >= 0 && 
avail.getBytes() >= 0) { + diskPercent = (short) (used * 100 / (used + avail.getBytes())); } } @@ -137,8 +134,8 @@ private Table buildTable(RestRequest request, final ClusterStateResponse state, table.addCell(shardCount); table.addCell(nodeStats.getIndices().getStore().getSize()); table.addCell(used < 0 ? null : new ByteSizeValue(used)); - table.addCell(avail.bytes() < 0 ? null : avail); - table.addCell(total.bytes() < 0 ? null : total); + table.addCell(avail.getBytes() < 0 ? null : avail); + table.addCell(total.getBytes() < 0 ? null : total); table.addCell(diskPercent < 0 ? null : diskPercent); table.addCell(node.getHostName()); table.addCell(node.getHostAddress()); diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestCatAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestCatAction.java index b9cc5011a810c..7442a7d85ee6b 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestCatAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestCatAction.java @@ -24,12 +24,12 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestStatus; -import java.util.Set; +import java.io.IOException; +import java.util.List; import static org.elasticsearch.rest.RestRequest.Method.GET; @@ -40,7 +40,7 @@ public class RestCatAction extends BaseRestHandler { private final String HELP; @Inject - public RestCatAction(Settings settings, RestController controller, Set catActions) { + public RestCatAction(Settings settings, RestController controller, List catActions) { super(settings); controller.registerHandler(GET, "/_cat", this); StringBuilder sb = new StringBuilder(); @@ -52,7 +52,8 @@ public RestCatAction(Settings settings, RestController controller, Set channel.sendResponse(new BytesRestResponse(RestStatus.OK, HELP)); } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestCountAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestCountAction.java index 4faddc3168cc5..00eb89a9c0242 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestCountAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestCountAction.java @@ -19,17 +19,14 @@ package org.elasticsearch.rest.action.cat; +import org.elasticsearch.ElasticsearchException; import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; import org.elasticsearch.common.Table; -import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.query.QueryBuilder; -import org.elasticsearch.indices.query.IndicesQueriesRegistry; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -37,18 +34,15 @@ import org.elasticsearch.rest.action.RestResponseListener; import org.elasticsearch.search.builder.SearchSourceBuilder; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.GET; public class RestCountAction extends AbstractCatAction { - - private final IndicesQueriesRegistry 
indicesQueriesRegistry; - - @Inject - public RestCountAction(Settings settings, RestController restController, RestController controller, IndicesQueriesRegistry indicesQueriesRegistry) { + public RestCountAction(Settings settings, RestController restController) { super(settings); restController.registerHandler(GET, "/_cat/count", this); restController.registerHandler(GET, "/_cat/count/{index}", this); - this.indicesQueriesRegistry = indicesQueriesRegistry; } @Override @@ -58,21 +52,26 @@ protected void documentation(StringBuilder sb) { } @Override - public void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { String[] indices = Strings.splitStringByCommaToArray(request.param("index")); SearchRequest countRequest = new SearchRequest(indices); - String source = request.param("source"); SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder().size(0); countRequest.source(searchSourceBuilder); - if (source != null) { - searchSourceBuilder.query(RestActions.getQueryContent(new BytesArray(source), indicesQueriesRegistry, parseFieldMatcher)); - } else { - QueryBuilder queryBuilder = RestActions.urlParamsToQueryBuilder(request); - if (queryBuilder != null) { - searchSourceBuilder.query(queryBuilder); - } + try { + request.withContentOrSourceParamParserOrNull(parser -> { + if (parser == null) { + QueryBuilder queryBuilder = RestActions.urlParamsToQueryBuilder(request); + if (queryBuilder != null) { + searchSourceBuilder.query(queryBuilder); + } + } else { + searchSourceBuilder.query(RestActions.getQueryContent(parser)); + } + }); + } catch (IOException e) { + throw new ElasticsearchException("Couldn't parse query", e); } - client.search(countRequest, new RestResponseListener(channel) { + return channel -> client.search(countRequest, new RestResponseListener(channel) { @Override public RestResponse buildResponse(SearchResponse countResponse) throws Exception { return RestTable.buildResponse(buildTable(request, countResponse), channel); diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestFielddataAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestFielddataAction.java index fcdad0c3f7eea..4156ea4619273 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestFielddataAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestFielddataAction.java @@ -20,15 +20,14 @@ package org.elasticsearch.rest.action.cat; import com.carrotsearch.hppc.cursors.ObjectLongCursor; + import org.elasticsearch.action.admin.cluster.node.stats.NodeStats; import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsRequest; import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsResponse; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeValue; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -40,8 +39,6 @@ * Cat API class to display information about the size of fielddata fields per node */ public class RestFielddataAction extends AbstractCatAction { - - @Inject public RestFielddataAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, 
"/_cat/fielddata", this); @@ -49,14 +46,14 @@ public RestFielddataAction(Settings settings, RestController controller) { } @Override - protected void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + protected RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { final NodesStatsRequest nodesStatsRequest = new NodesStatsRequest("data:true"); nodesStatsRequest.clear(); nodesStatsRequest.indices(true); String[] fields = request.paramAsStringArray("fields", null); nodesStatsRequest.indices().fieldDataFields(fields == null ? new String[] {"*"} : fields); - client.admin().cluster().nodesStats(nodesStatsRequest, new RestResponseListener(channel) { + return channel -> client.admin().cluster().nodesStats(nodesStatsRequest, new RestResponseListener(channel) { @Override public RestResponse buildResponse(NodesStatsResponse nodeStatses) throws Exception { return RestTable.buildResponse(buildTable(request, nodeStatses), channel); diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestHealthAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestHealthAction.java index cd226e28b562c..2bc1c11b9a1d0 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestHealthAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestHealthAction.java @@ -23,9 +23,7 @@ import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -36,8 +34,6 @@ import static org.elasticsearch.rest.RestRequest.Method.GET; public class RestHealthAction extends AbstractCatAction { - - @Inject public RestHealthAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_cat/health", this); @@ -49,10 +45,10 @@ protected void documentation(StringBuilder sb) { } @Override - public void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { ClusterHealthRequest clusterHealthRequest = new ClusterHealthRequest(); - client.admin().cluster().health(clusterHealthRequest, new RestResponseListener(channel) { + return channel -> client.admin().cluster().health(clusterHealthRequest, new RestResponseListener(channel) { @Override public RestResponse buildResponse(final ClusterHealthResponse health) throws Exception { return RestTable.buildResponse(buildTable(health, request), channel); diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java index 3c65e32c746db..26bbc865fb276 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java @@ -23,23 +23,23 @@ import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse; import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest; import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse; +import org.elasticsearch.action.admin.indices.stats.CommonStats; import 
org.elasticsearch.action.admin.indices.stats.IndexStats; import org.elasticsearch.action.admin.indices.stats.IndicesStatsRequest; import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse; import org.elasticsearch.action.support.IndicesOptions; -import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.client.Requests; +import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.health.ClusterHealthStatus; import org.elasticsearch.cluster.health.ClusterIndexHealth; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.common.Strings; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.Index; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -48,7 +48,11 @@ import org.joda.time.DateTime; import org.joda.time.DateTimeZone; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashSet; import java.util.Locale; +import java.util.Set; import static org.elasticsearch.rest.RestRequest.Method.GET; @@ -56,7 +60,6 @@ public class RestIndicesAction extends AbstractCatAction { private final IndexNameExpressionResolver indexNameExpressionResolver; - @Inject public RestIndicesAction(Settings settings, RestController controller, IndexNameExpressionResolver indexNameExpressionResolver) { super(settings); this.indexNameExpressionResolver = indexNameExpressionResolver; @@ -71,7 +74,7 @@ protected void documentation(StringBuilder sb) { } @Override - public void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { final String[] indices = Strings.splitStringByCommaToArray(request.param("index")); final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); clusterStateRequest.clear().indices(indices).metaData(true); @@ -80,7 +83,7 @@ public void doRequest(final RestRequest request, final RestChannel channel, fina final IndicesOptions strictExpandIndicesOptions = IndicesOptions.strictExpand(); clusterStateRequest.indicesOptions(strictExpandIndicesOptions); - client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { + return channel -> client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { @Override public void processResponse(final ClusterStateResponse clusterStateResponse) { final ClusterState state = clusterStateResponse.getState(); @@ -122,6 +125,19 @@ public RestResponse buildResponse(IndicesStatsResponse indicesStatsResponse) thr }); } + private static final Set RESPONSE_PARAMS; + + static { + final Set responseParams = new HashSet<>(Arrays.asList("local", "health")); + responseParams.addAll(AbstractCatAction.RESPONSE_PARAMS); + RESPONSE_PARAMS = Collections.unmodifiableSet(responseParams); + } + + @Override + protected Set responseParams() { + return RESPONSE_PARAMS; + } + @Override protected Table getTableWithHeader(final RestRequest request) { Table table = new Table(); @@ -243,6 +259,9 @@ protected Table getTableWithHeader(final RestRequest request) { table.addCell("refresh.time", 
"sibling:pri;alias:rti,refreshTime;default:false;text-align:right;desc:time spent in refreshes"); table.addCell("pri.refresh.time", "default:false;text-align:right;desc:time spent in refreshes"); + table.addCell("refresh.listeners", "sibling:pri;alias:rli,refreshListeners;default:false;text-align:right;desc:number of pending refresh listeners"); + table.addCell("pri.refresh.listeners", "default:false;text-align:right;desc:number of pending refresh listeners"); + table.addCell("search.fetch_current", "sibling:pri;alias:sfc,searchFetchCurrent;default:false;text-align:right;desc:current fetch phase ops"); table.addCell("pri.search.fetch_current", "default:false;text-align:right;desc:current fetch phase ops"); @@ -314,16 +333,35 @@ protected Table getTableWithHeader(final RestRequest request) { } // package private for testing - Table buildTable(RestRequest request, Index[] indices, ClusterHealthResponse health, IndicesStatsResponse stats, MetaData indexMetaDatas) { + Table buildTable(RestRequest request, Index[] indices, ClusterHealthResponse response, IndicesStatsResponse stats, MetaData indexMetaDatas) { + final String healthParam = request.param("health"); + final ClusterHealthStatus status; + if (healthParam != null) { + status = ClusterHealthStatus.fromString(healthParam); + } else { + status = null; + } + Table table = getTableWithHeader(request); for (final Index index : indices) { final String indexName = index.getName(); - ClusterIndexHealth indexHealth = health.getIndices().get(indexName); + ClusterIndexHealth indexHealth = response.getIndices().get(indexName); IndexStats indexStats = stats.getIndices().get(indexName); IndexMetaData indexMetaData = indexMetaDatas.getIndices().get(indexName); IndexMetaData.State state = indexMetaData.getState(); + if (status != null) { + if (state == IndexMetaData.State.CLOSE || + (indexHealth == null && !ClusterHealthStatus.RED.equals(status)) || + !indexHealth.getStatus().equals(status)) { + continue; + } + } + + final CommonStats primaryStats = indexStats == null ? new CommonStats() : indexStats.getPrimaries(); + final CommonStats totalStats = indexStats == null ? new CommonStats() : indexStats.getTotal(); + table.startRow(); table.addCell(state == IndexMetaData.State.OPEN ? (indexHealth == null ? "red*" : indexHealth.getStatus().toString().toLowerCase(Locale.ROOT)) : null); table.addCell(state.toString().toLowerCase(Locale.ROOT)); @@ -331,179 +369,183 @@ Table buildTable(RestRequest request, Index[] indices, ClusterHealthResponse hea table.addCell(index.getUUID()); table.addCell(indexHealth == null ? null : indexHealth.getNumberOfShards()); table.addCell(indexHealth == null ? null : indexHealth.getNumberOfReplicas()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getDocs().getCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getDocs().getDeleted()); + + table.addCell(primaryStats.getDocs() == null ? null : primaryStats.getDocs().getCount()); + table.addCell(primaryStats.getDocs() == null ? null : primaryStats.getDocs().getDeleted()); table.addCell(indexMetaData.getCreationDate()); table.addCell(new DateTime(indexMetaData.getCreationDate(), DateTimeZone.UTC)); - table.addCell(indexStats == null ? null : indexStats.getTotal().getStore().size()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getStore().size()); + table.addCell(totalStats.getStore() == null ? null : totalStats.getStore().size()); + table.addCell(primaryStats.getStore() == null ? 
null : primaryStats.getStore().size()); + + table.addCell(totalStats.getCompletion() == null ? null : totalStats.getCompletion().getSize()); + table.addCell(primaryStats.getCompletion() == null ? null : primaryStats.getCompletion().getSize()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getCompletion().getSize()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getCompletion().getSize()); + table.addCell(totalStats.getFieldData() == null ? null : totalStats.getFieldData().getMemorySize()); + table.addCell(primaryStats.getFieldData() == null ? null : primaryStats.getFieldData().getMemorySize()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getFieldData().getMemorySize()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getFieldData().getMemorySize()); + table.addCell(totalStats.getFieldData() == null ? null : totalStats.getFieldData().getEvictions()); + table.addCell(primaryStats.getFieldData() == null ? null : primaryStats.getFieldData().getEvictions()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getFieldData().getEvictions()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getFieldData().getEvictions()); + table.addCell(totalStats.getQueryCache() == null ? null : totalStats.getQueryCache().getMemorySize()); + table.addCell(primaryStats.getQueryCache() == null ? null : primaryStats.getQueryCache().getMemorySize()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getQueryCache().getMemorySize()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getQueryCache().getMemorySize()); + table.addCell(totalStats.getQueryCache() == null ? null : totalStats.getQueryCache().getEvictions()); + table.addCell(primaryStats.getQueryCache() == null ? null : primaryStats.getQueryCache().getEvictions()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getQueryCache().getEvictions()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getQueryCache().getEvictions()); + table.addCell(totalStats.getRequestCache() == null ? null : totalStats.getRequestCache().getMemorySize()); + table.addCell(primaryStats.getRequestCache() == null ? null : primaryStats.getRequestCache().getMemorySize()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getRequestCache().getMemorySize()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRequestCache().getMemorySize()); + table.addCell(totalStats.getRequestCache() == null ? null : totalStats.getRequestCache().getEvictions()); + table.addCell(primaryStats.getRequestCache() == null ? null : primaryStats.getRequestCache().getEvictions()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getRequestCache().getEvictions()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRequestCache().getEvictions()); + table.addCell(totalStats.getRequestCache() == null ? null : totalStats.getRequestCache().getHitCount()); + table.addCell(primaryStats.getRequestCache() == null ? null : primaryStats.getRequestCache().getHitCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getRequestCache().getHitCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRequestCache().getHitCount()); + table.addCell(totalStats.getRequestCache() == null ? null : totalStats.getRequestCache().getMissCount()); + table.addCell(primaryStats.getRequestCache() == null ? 
null : primaryStats.getRequestCache().getMissCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getRequestCache().getMissCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRequestCache().getMissCount()); + table.addCell(totalStats.getFlush() == null ? null : totalStats.getFlush().getTotal()); + table.addCell(primaryStats.getFlush() == null ? null : primaryStats.getFlush().getTotal()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getFlush().getTotal()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getFlush().getTotal()); + table.addCell(totalStats.getFlush() == null ? null : totalStats.getFlush().getTotalTime()); + table.addCell(primaryStats.getFlush() == null ? null : primaryStats.getFlush().getTotalTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getFlush().getTotalTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getFlush().getTotalTime()); + table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().current()); + table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().current()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().current()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().current()); + table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getTime()); + table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getTime()); + table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getCount()); + table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getCount()); + table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getExistsTime()); + table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getExistsTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getExistsTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getExistsTime()); + table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getExistsCount()); + table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getExistsCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getExistsCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getExistsCount()); + table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getMissingTime()); + table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getMissingTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getMissingTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getMissingTime()); + table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getMissingCount()); + table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getMissingCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getMissingCount()); - table.addCell(indexStats == null ? 
null : indexStats.getPrimaries().getGet().getMissingCount()); + table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getDeleteCurrent()); + table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getDeleteCurrent()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getDeleteCurrent()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getDeleteCurrent()); + table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getDeleteTime()); + table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getDeleteTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getDeleteTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getDeleteTime()); + table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getDeleteCount()); + table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getDeleteCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getDeleteCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getDeleteCount()); + table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getIndexCurrent()); + table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getIndexCurrent()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getIndexCurrent()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getIndexCurrent()); + table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getIndexTime()); + table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getIndexTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getIndexTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getIndexTime()); + table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getIndexCount()); + table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getIndexCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getIndexCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getIndexCount()); + table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getIndexFailedCount()); + table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getIndexFailedCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getIndexFailedCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getIndexFailedCount()); + table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getCurrent()); + table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getCurrent()); - table.addCell(indexStats == null ? 
null : indexStats.getTotal().getMerge().getCurrent()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getCurrent()); + table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getCurrentNumDocs()); + table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getCurrentNumDocs()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getCurrentNumDocs()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getCurrentNumDocs()); + table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getCurrentSize()); + table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getCurrentSize()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getCurrentSize()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getCurrentSize()); + table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getTotal()); + table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getTotal()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getTotal()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getTotal()); + table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getTotalNumDocs()); + table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getTotalNumDocs()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getTotalNumDocs()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getTotalNumDocs()); + table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getTotalSize()); + table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getTotalSize()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getTotalSize()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getTotalSize()); + table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getTotalTime()); + table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getTotalTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getTotalTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getTotalTime()); + table.addCell(totalStats.getRefresh() == null ? null : totalStats.getRefresh().getTotal()); + table.addCell(primaryStats.getRefresh() == null ? null : primaryStats.getRefresh().getTotal()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getRefresh().getTotal()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRefresh().getTotal()); + table.addCell(totalStats.getRefresh() == null ? null : totalStats.getRefresh().getTotalTime()); + table.addCell(primaryStats.getRefresh() == null ? null : primaryStats.getRefresh().getTotalTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getRefresh().getTotalTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRefresh().getTotalTime()); + table.addCell(totalStats.getRefresh() == null ? null : totalStats.getRefresh().getListeners()); + table.addCell(primaryStats.getRefresh() == null ? null : primaryStats.getRefresh().getListeners()); - table.addCell(indexStats == null ? 
null : indexStats.getTotal().getSearch().getTotal().getFetchCurrent()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getFetchCurrent()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getFetchCurrent()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getFetchCurrent()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getFetchTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getFetchTime()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getFetchTime()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getFetchTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getFetchCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getFetchCount()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getFetchCount()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getFetchCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getOpenContexts()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getOpenContexts()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getOpenContexts()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getOpenContexts()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getQueryCurrent()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getQueryCurrent()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getQueryCurrent()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getQueryCurrent()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getQueryTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getQueryTime()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getQueryTime()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getQueryTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getQueryCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getQueryCount()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getQueryCount()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getQueryCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getScrollCurrent()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getScrollCurrent()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getScrollCurrent()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getScrollCurrent()); - table.addCell(indexStats == null ? 
null : indexStats.getTotal().getSearch().getTotal().getScrollTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getScrollTime()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getScrollTime()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getScrollTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getScrollCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getScrollCount()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getScrollCount()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getScrollCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getCount()); + table.addCell(totalStats.getSegments() == null ? null : totalStats.getSegments().getCount()); + table.addCell(primaryStats.getSegments() == null ? null : primaryStats.getSegments().getCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getMemory()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getMemory()); + table.addCell(totalStats.getSegments() == null ? null : totalStats.getSegments().getMemory()); + table.addCell(primaryStats.getSegments() == null ? null : primaryStats.getSegments().getMemory()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getIndexWriterMemory()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getIndexWriterMemory()); + table.addCell(totalStats.getSegments() == null ? null : totalStats.getSegments().getIndexWriterMemory()); + table.addCell(primaryStats.getSegments() == null ? null : primaryStats.getSegments().getIndexWriterMemory()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getVersionMapMemory()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getVersionMapMemory()); + table.addCell(totalStats.getSegments() == null ? null : totalStats.getSegments().getVersionMapMemory()); + table.addCell(primaryStats.getSegments() == null ? null : primaryStats.getSegments().getVersionMapMemory()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getBitsetMemory()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getBitsetMemory()); + table.addCell(totalStats.getSegments() == null ? null : totalStats.getSegments().getBitsetMemory()); + table.addCell(primaryStats.getSegments() == null ? null : primaryStats.getSegments().getBitsetMemory()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getWarmer().current()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getWarmer().current()); + table.addCell(totalStats.getWarmer() == null ? null : totalStats.getWarmer().current()); + table.addCell(primaryStats.getWarmer() == null ? null : primaryStats.getWarmer().current()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getWarmer().total()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getWarmer().total()); + table.addCell(totalStats.getWarmer() == null ? 
null : totalStats.getWarmer().total()); + table.addCell(primaryStats.getWarmer() == null ? null : primaryStats.getWarmer().total()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getWarmer().totalTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getWarmer().totalTime()); + table.addCell(totalStats.getWarmer() == null ? null : totalStats.getWarmer().totalTime()); + table.addCell(primaryStats.getWarmer() == null ? null : primaryStats.getWarmer().totalTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getSuggestCurrent()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getSuggestCurrent()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getSuggestCurrent()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getSuggestCurrent()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getSuggestTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getSuggestTime()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getSuggestTime()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getSuggestTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getSuggestCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getSuggestCount()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getSuggestCount()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getSuggestCount()); table.addCell(indexStats == null ? null : indexStats.getTotal().getTotalMemory()); table.addCell(indexStats == null ? 
null : indexStats.getPrimaries().getTotalMemory()); diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestMasterAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestMasterAction.java index 5902ba60e5727..db5c1149cc0ad 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestMasterAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestMasterAction.java @@ -25,9 +25,7 @@ import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -36,8 +34,6 @@ import static org.elasticsearch.rest.RestRequest.Method.GET; public class RestMasterAction extends AbstractCatAction { - - @Inject public RestMasterAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_cat/master", this); @@ -49,13 +45,13 @@ protected void documentation(StringBuilder sb) { } @Override - public void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); clusterStateRequest.clear().nodes(true); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); clusterStateRequest.masterNodeTimeout(request.paramAsTime("master_timeout", clusterStateRequest.masterNodeTimeout())); - client.admin().cluster().state(clusterStateRequest, new RestResponseListener(channel) { + return channel -> client.admin().cluster().state(clusterStateRequest, new RestResponseListener(channel) { @Override public RestResponse buildResponse(final ClusterStateResponse clusterStateResponse) throws Exception { return RestTable.buildResponse(buildTable(request, clusterStateResponse), channel); diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodeAttrsAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodeAttrsAction.java index 5ab98316c7cf5..6819d2421de2a 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodeAttrsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodeAttrsAction.java @@ -29,10 +29,8 @@ import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.common.Strings; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.InetSocketTransportAddress; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -44,8 +42,6 @@ import static org.elasticsearch.rest.RestRequest.Method.GET; public class RestNodeAttrsAction extends AbstractCatAction { - - @Inject public RestNodeAttrsAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_cat/nodeattrs", this); @@ -57,13 +53,13 @@ protected void documentation(StringBuilder sb) { } @Override - public void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer 
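The indices-stats hunks above replace the blanket `indexStats == null` guard with per-section guards (`totalStats.getMerge()`, `getRefresh()`, and so on), so a missing stats section simply yields an empty cell. Below is a minimal stand-alone sketch of the same null-propagation idea; the `cell` helper and the `MergeStats` stand-in are hypothetical and not part of the Elasticsearch codebase.

```java
import java.util.function.Function;

public class NullSafeCells {
    /** Hypothetical helper: extract a value only when the stats section exists. */
    static <S, V> V cell(S section, Function<S, V> extractor) {
        return section == null ? null : extractor.apply(section);
    }

    /** Toy stand-in for a per-section stats object such as MergeStats. */
    static class MergeStats {
        final long current;
        MergeStats(long current) { this.current = current; }
        long getCurrent() { return current; }
    }

    public static void main(String[] args) {
        MergeStats totalMerge = new MergeStats(2);
        MergeStats primaryMerge = null; // section missing entirely

        // Mirrors: table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getCurrent());
        System.out.println(cell(totalMerge, MergeStats::getCurrent));   // 2
        System.out.println(cell(primaryMerge, MergeStats::getCurrent)); // null -> rendered as an empty cell
    }
}
```

The same shape repeats for every total/primaries column pair, which is why the diff touches so many nearly identical lines.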
doCatRequest(final RestRequest request, final NodeClient client) { final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); clusterStateRequest.clear().nodes(true); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); clusterStateRequest.masterNodeTimeout(request.paramAsTime("master_timeout", clusterStateRequest.masterNodeTimeout())); - client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { + return channel -> client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { @Override public void processResponse(final ClusterStateResponse clusterStateResponse) { NodesInfoRequest nodesInfoRequest = new NodesInfoRequest(); diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java index 2c1900feefa91..5f64bb188533f 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java @@ -32,11 +32,11 @@ import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.common.Strings; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.network.NetworkAddress; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.InetSocketTransportAddress; import org.elasticsearch.common.transport.TransportAddress; +import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.http.HttpInfo; import org.elasticsearch.index.cache.query.QueryCacheStats; import org.elasticsearch.index.cache.request.RequestCacheStats; @@ -54,7 +54,6 @@ import org.elasticsearch.monitor.jvm.JvmStats; import org.elasticsearch.monitor.os.OsStats; import org.elasticsearch.monitor.process.ProcessStats; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -69,8 +68,6 @@ import static org.elasticsearch.rest.RestRequest.Method.GET; public class RestNodesAction extends AbstractCatAction { - - @Inject public RestNodesAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_cat/nodes", this); @@ -82,13 +79,13 @@ protected void documentation(StringBuilder sb) { } @Override - public void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); clusterStateRequest.clear().nodes(true); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); clusterStateRequest.masterNodeTimeout(request.paramAsTime("master_timeout", clusterStateRequest.masterNodeTimeout())); - - client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { + final boolean fullId = request.paramAsBoolean("full_id", false); + return channel -> client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { @Override public void processResponse(final ClusterStateResponse clusterStateResponse) { NodesInfoRequest nodesInfoRequest = new NodesInfoRequest(); @@ -101,7 +98,8 @@ public void processResponse(final NodesInfoResponse nodesInfoResponse) { client.admin().cluster().nodesStats(nodesStatsRequest, 
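In the cat-action hunks here and below, `doRequest(request, channel, client)` becomes `doCatRequest(request, client)` returning a `RestChannelConsumer`: request parameters are read up front (note `full_id` being hoisted out of `buildTable` in `RestNodesAction`, presumably so it is consumed during request handling) and the actual client call is deferred into a `channel -> ...` lambda. A simplified sketch of that "prepare, then return a consumer" shape, using plain stand-in types rather than the real Elasticsearch interfaces:

```java
import java.util.Map;
import java.util.function.Consumer;

public class ConsumerStyleHandler {
    // Simplified stand-ins for RestChannel and RestChannelConsumer.
    interface Channel { void send(String response); }
    interface ChannelConsumer extends Consumer<Channel> {}

    // Old style: the handler both reads parameters and writes to the channel.
    static void doRequestOldStyle(Map<String, String> params, Channel channel) {
        boolean fullId = Boolean.parseBoolean(params.getOrDefault("full_id", "false"));
        channel.send("nodes table, fullId=" + fullId);
    }

    // New style: consume parameters eagerly, defer the response into a lambda invoked later.
    static ChannelConsumer doCatRequestNewStyle(Map<String, String> params) {
        final boolean fullId = Boolean.parseBoolean(params.getOrDefault("full_id", "false"));
        return channel -> channel.send("nodes table, fullId=" + fullId);
    }

    public static void main(String[] args) {
        doRequestOldStyle(Map.of(), System.out::println);
        ChannelConsumer consumer = doCatRequestNewStyle(Map.of("full_id", "true"));
        consumer.accept(System.out::println); // a real RestChannel would be passed in by the framework
    }
}
```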
new RestResponseListener(channel) { @Override public RestResponse buildResponse(NodesStatsResponse nodesStatsResponse) throws Exception { - return RestTable.buildResponse(buildTable(request, clusterStateResponse, nodesInfoResponse, nodesStatsResponse), channel); + return RestTable.buildResponse(buildTable(fullId, request, clusterStateResponse, nodesInfoResponse, + nodesStatsResponse), channel); } }); } @@ -123,7 +121,10 @@ protected Table getTableWithHeader(final RestRequest request) { table.addCell("version", "default:false;alias:v;desc:es version"); table.addCell("build", "default:false;alias:b;desc:es build hash"); table.addCell("jdk", "default:false;alias:j;desc:jdk version"); - table.addCell("disk.avail", "default:false;alias:d,disk,diskAvail;text-align:right;desc:available disk space"); + table.addCell("disk.total", "default:false;alias:dt,diskTotal;text-align:right;desc:total disk space"); + table.addCell("disk.used", "default:false;alias:du,diskUsed;text-align:right;desc:used disk space"); + table.addCell("disk.avail", "default:false;alias:d,da,disk,diskAvail;text-align:right;desc:available disk space"); + table.addCell("disk.used_percent", "default:false;alias:dup,diskUsedPercent;text-align:right;desc:used disk space percentage"); table.addCell("heap.current", "default:false;alias:hc,heapCurrent;text-align:right;desc:used heap"); table.addCell("heap.percent", "alias:hp,heapPercent;text-align:right;desc:used heap ratio"); table.addCell("heap.max", "default:false;alias:hm,heapMax;text-align:right;desc:max configured heap"); @@ -131,7 +132,8 @@ protected Table getTableWithHeader(final RestRequest request) { table.addCell("ram.percent", "alias:rp,ramPercent;text-align:right;desc:used machine memory ratio"); table.addCell("ram.max", "default:false;alias:rm,ramMax;text-align:right;desc:total machine memory"); table.addCell("file_desc.current", "default:false;alias:fdc,fileDescriptorCurrent;text-align:right;desc:used file descriptors"); - table.addCell("file_desc.percent", "default:false;alias:fdp,fileDescriptorPercent;text-align:right;desc:used file descriptor ratio"); + table.addCell("file_desc.percent", + "default:false;alias:fdp,fileDescriptorPercent;text-align:right;desc:used file descriptor ratio"); table.addCell("file_desc.max", "default:false;alias:fdm,fileDescriptorMax;text-align:right;desc:max file descriptors"); table.addCell("cpu", "alias:cpu;text-align:right;desc:recent cpu usage"); @@ -139,7 +141,8 @@ protected Table getTableWithHeader(final RestRequest request) { table.addCell("load_5m", "alias:l;text-align:right;desc:5m load avg"); table.addCell("load_15m", "alias:l;text-align:right;desc:15m load avg"); table.addCell("uptime", "default:false;alias:u;text-align:right;desc:node uptime"); - table.addCell("node.role", "alias:r,role,nodeRole;desc:m:master eligible node, d:data node, i:ingest node, -:coordinating node only"); + table.addCell("node.role", + "alias:r,role,nodeRole;desc:m:master eligible node, d:data node, i:ingest node, -:coordinating node only"); table.addCell("master", "alias:m;desc:*:current master"); table.addCell("name", "alias:n;desc:node name"); @@ -152,9 +155,12 @@ protected Table getTableWithHeader(final RestRequest request) { table.addCell("query_cache.evictions", "alias:qce,queryCacheEvictions;default:false;text-align:right;desc:query cache evictions"); table.addCell("request_cache.memory_size", "alias:rcm,requestCacheMemory;default:false;text-align:right;desc:used request cache"); - table.addCell("request_cache.evictions", 
"alias:rce,requestCacheEvictions;default:false;text-align:right;desc:request cache evictions"); - table.addCell("request_cache.hit_count", "alias:rchc,requestCacheHitCount;default:false;text-align:right;desc:request cache hit counts"); - table.addCell("request_cache.miss_count", "alias:rcmc,requestCacheMissCount;default:false;text-align:right;desc:request cache miss counts"); + table.addCell("request_cache.evictions", + "alias:rce,requestCacheEvictions;default:false;text-align:right;desc:request cache evictions"); + table.addCell("request_cache.hit_count", + "alias:rchc,requestCacheHitCount;default:false;text-align:right;desc:request cache hit counts"); + table.addCell("request_cache.miss_count", + "alias:rcmc,requestCacheMissCount;default:false;text-align:right;desc:request cache miss counts"); table.addCell("flush.total", "alias:ft,flushTotal;default:false;text-align:right;desc:number of flushes"); table.addCell("flush.total_time", "alias:ftt,flushTotalTime;default:false;text-align:right;desc:time spent in flush"); @@ -167,16 +173,20 @@ protected Table getTableWithHeader(final RestRequest request) { table.addCell("get.missing_time", "alias:gmti,getMissingTime;default:false;text-align:right;desc:time spent in failed gets"); table.addCell("get.missing_total", "alias:gmto,getMissingTotal;default:false;text-align:right;desc:number of failed gets"); - table.addCell("indexing.delete_current", "alias:idc,indexingDeleteCurrent;default:false;text-align:right;desc:number of current deletions"); + table.addCell("indexing.delete_current", + "alias:idc,indexingDeleteCurrent;default:false;text-align:right;desc:number of current deletions"); table.addCell("indexing.delete_time", "alias:idti,indexingDeleteTime;default:false;text-align:right;desc:time spent in deletions"); table.addCell("indexing.delete_total", "alias:idto,indexingDeleteTotal;default:false;text-align:right;desc:number of delete ops"); - table.addCell("indexing.index_current", "alias:iic,indexingIndexCurrent;default:false;text-align:right;desc:number of current indexing ops"); + table.addCell("indexing.index_current", + "alias:iic,indexingIndexCurrent;default:false;text-align:right;desc:number of current indexing ops"); table.addCell("indexing.index_time", "alias:iiti,indexingIndexTime;default:false;text-align:right;desc:time spent in indexing"); table.addCell("indexing.index_total", "alias:iito,indexingIndexTotal;default:false;text-align:right;desc:number of indexing ops"); - table.addCell("indexing.index_failed", "alias:iif,indexingIndexFailed;default:false;text-align:right;desc:number of failed indexing ops"); + table.addCell("indexing.index_failed", + "alias:iif,indexingIndexFailed;default:false;text-align:right;desc:number of failed indexing ops"); table.addCell("merges.current", "alias:mc,mergesCurrent;default:false;text-align:right;desc:number of current merges"); - table.addCell("merges.current_docs", "alias:mcd,mergesCurrentDocs;default:false;text-align:right;desc:number of current merging docs"); + table.addCell("merges.current_docs", + "alias:mcd,mergesCurrentDocs;default:false;text-align:right;desc:number of current merging docs"); table.addCell("merges.current_size", "alias:mcs,mergesCurrentSize;default:false;text-align:right;desc:size of current merges"); table.addCell("merges.total", "alias:mt,mergesTotal;default:false;text-align:right;desc:number of completed merge ops"); table.addCell("merges.total_docs", "alias:mtd,mergesTotalDocs;default:false;text-align:right;desc:docs merged"); @@ -185,9 +195,12 @@ protected Table 
getTableWithHeader(final RestRequest request) { table.addCell("refresh.total", "alias:rto,refreshTotal;default:false;text-align:right;desc:total refreshes"); table.addCell("refresh.time", "alias:rti,refreshTime;default:false;text-align:right;desc:time spent in refreshes"); + table.addCell("refresh.listeners", "alias:rli,refreshListeners;default:false;text-align:right;" + + "desc:number of pending refresh listeners"); table.addCell("script.compilations", "alias:scrcc,scriptCompilations;default:false;text-align:right;desc:script compilations"); - table.addCell("script.cache_evictions", "alias:scrce,scriptCacheEvictions;default:false;text-align:right;desc:script cache evictions"); + table.addCell("script.cache_evictions", + "alias:scrce,scriptCacheEvictions;default:false;text-align:right;desc:script cache evictions"); table.addCell("search.fetch_current", "alias:sfc,searchFetchCurrent;default:false;text-align:right;desc:current fetch phase ops"); table.addCell("search.fetch_time", "alias:sfti,searchFetchTime;default:false;text-align:right;desc:time spent in fetch phase"); @@ -197,14 +210,19 @@ protected Table getTableWithHeader(final RestRequest request) { table.addCell("search.query_time", "alias:sqti,searchQueryTime;default:false;text-align:right;desc:time spent in query phase"); table.addCell("search.query_total", "alias:sqto,searchQueryTotal;default:false;text-align:right;desc:total query phase ops"); table.addCell("search.scroll_current", "alias:scc,searchScrollCurrent;default:false;text-align:right;desc:open scroll contexts"); - table.addCell("search.scroll_time", "alias:scti,searchScrollTime;default:false;text-align:right;desc:time scroll contexts held open"); + table.addCell("search.scroll_time", + "alias:scti,searchScrollTime;default:false;text-align:right;desc:time scroll contexts held open"); table.addCell("search.scroll_total", "alias:scto,searchScrollTotal;default:false;text-align:right;desc:completed scroll contexts"); table.addCell("segments.count", "alias:sc,segmentsCount;default:false;text-align:right;desc:number of segments"); table.addCell("segments.memory", "alias:sm,segmentsMemory;default:false;text-align:right;desc:memory used by segments"); - table.addCell("segments.index_writer_memory", "alias:siwm,segmentsIndexWriterMemory;default:false;text-align:right;desc:memory used by index writer"); - table.addCell("segments.version_map_memory", "alias:svmm,segmentsVersionMapMemory;default:false;text-align:right;desc:memory used by version map"); - table.addCell("segments.fixed_bitset_memory", "alias:sfbm,fixedBitsetMemory;default:false;text-align:right;desc:memory used by fixed bit sets for nested object field types and type filters for types referred in _parent fields"); + table.addCell("segments.index_writer_memory", + "alias:siwm,segmentsIndexWriterMemory;default:false;text-align:right;desc:memory used by index writer"); + table.addCell("segments.version_map_memory", + "alias:svmm,segmentsVersionMapMemory;default:false;text-align:right;desc:memory used by version map"); + table.addCell("segments.fixed_bitset_memory", + "alias:sfbm,fixedBitsetMemory;default:false;text-align:right;desc:memory used by fixed bit sets for nested object field types" + + " and type filters for types referred in _parent fields"); table.addCell("suggest.current", "alias:suc,suggestCurrent;default:false;text-align:right;desc:number of current suggest ops"); table.addCell("suggest.time", "alias:suti,suggestTime;default:false;text-align:right;desc:time spend in suggest"); @@ -214,8 +232,8 @@ protected 
Table getTableWithHeader(final RestRequest request) { return table; } - private Table buildTable(RestRequest req, ClusterStateResponse state, NodesInfoResponse nodesInfo, NodesStatsResponse nodesStats) { - boolean fullId = req.paramAsBoolean("full_id", false); + private Table buildTable(boolean fullId, RestRequest req, ClusterStateResponse state, NodesInfoResponse nodesInfo, + NodesStatsResponse nodesStats) { DiscoveryNodes nodes = state.getState().nodes(); String masterId = nodes.getMasterNodeId(); @@ -257,7 +275,15 @@ private Table buildTable(RestRequest req, ClusterStateResponse state, NodesInfoR table.addCell(node.getVersion().toString()); table.addCell(info == null ? null : info.getBuild().shortHash()); table.addCell(jvmInfo == null ? null : jvmInfo.version()); + + long diskTotal = fsInfo.getTotal().getTotal().getBytes(); + long diskUsed = diskTotal - fsInfo.getTotal().getAvailable().getBytes(); + double diskUsedRatio = diskTotal == 0 ? 1.0 : (double) diskUsed / diskTotal; + table.addCell(fsInfo == null ? null : fsInfo.getTotal().getTotal()); + table.addCell(fsInfo == null ? null : new ByteSizeValue(diskUsed)); table.addCell(fsInfo == null ? null : fsInfo.getTotal().getAvailable()); + table.addCell(fsInfo == null ? null : String.format(Locale.ROOT, "%.2f", 100.0 * diskUsedRatio)); + table.addCell(jvmStats == null ? null : jvmStats.getMem().getHeapUsed()); table.addCell(jvmStats == null ? null : jvmStats.getMem().getHeapUsedPercent()); table.addCell(jvmInfo == null ? null : jvmInfo.getMem().getHeapMax()); @@ -265,14 +291,18 @@ private Table buildTable(RestRequest req, ClusterStateResponse state, NodesInfoR table.addCell(osStats == null ? null : osStats.getMem() == null ? null : osStats.getMem().getUsedPercent()); table.addCell(osStats == null ? null : osStats.getMem() == null ? null : osStats.getMem().getTotal()); table.addCell(processStats == null ? null : processStats.getOpenFileDescriptors()); - table.addCell(processStats == null ? null : calculatePercentage(processStats.getOpenFileDescriptors(), processStats.getMaxFileDescriptors())); + table.addCell(processStats == null ? null : calculatePercentage(processStats.getOpenFileDescriptors(), + processStats.getMaxFileDescriptors())); table.addCell(processStats == null ? null : processStats.getMaxFileDescriptors()); table.addCell(osStats == null ? null : Short.toString(osStats.getCpu().getPercent())); boolean hasLoadAverage = osStats != null && osStats.getCpu().getLoadAverage() != null; - table.addCell(!hasLoadAverage || osStats.getCpu().getLoadAverage()[0] == -1 ? null : String.format(Locale.ROOT, "%.2f", osStats.getCpu().getLoadAverage()[0])); - table.addCell(!hasLoadAverage || osStats.getCpu().getLoadAverage()[1] == -1 ? null : String.format(Locale.ROOT, "%.2f", osStats.getCpu().getLoadAverage()[1])); - table.addCell(!hasLoadAverage || osStats.getCpu().getLoadAverage()[2] == -1 ? null : String.format(Locale.ROOT, "%.2f", osStats.getCpu().getLoadAverage()[2])); + table.addCell(!hasLoadAverage || osStats.getCpu().getLoadAverage()[0] == -1 ? null : + String.format(Locale.ROOT, "%.2f", osStats.getCpu().getLoadAverage()[0])); + table.addCell(!hasLoadAverage || osStats.getCpu().getLoadAverage()[1] == -1 ? null : + String.format(Locale.ROOT, "%.2f", osStats.getCpu().getLoadAverage()[1])); + table.addCell(!hasLoadAverage || osStats.getCpu().getLoadAverage()[2] == -1 ? null : + String.format(Locale.ROOT, "%.2f", osStats.getCpu().getLoadAverage()[2])); table.addCell(jvmStats == null ? 
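`RestNodesAction` gains `disk.total`, `disk.used`, and `disk.used_percent` columns alongside the existing `disk.avail`; used space is derived as total minus available, and the percentage is rendered with `String.format(Locale.ROOT, "%.2f", ...)`. A self-contained sketch of that arithmetic, with plain `long` values standing in for the `FsInfo`/`ByteSizeValue` types used above:

```java
import java.util.Locale;

public class DiskUsage {
    public static void main(String[] args) {
        long diskTotal = 500L * 1024 * 1024 * 1024;     // 500 GiB reported as total
        long diskAvailable = 120L * 1024 * 1024 * 1024; // 120 GiB reported as available

        // Same derivation as the diff: used = total - available, with a guard for a zero total.
        long diskUsed = diskTotal - diskAvailable;
        double diskUsedRatio = diskTotal == 0 ? 1.0 : (double) diskUsed / diskTotal;

        System.out.println("disk.used         = " + diskUsed + " bytes");
        System.out.println("disk.used_percent = " + String.format(Locale.ROOT, "%.2f", 100.0 * diskUsedRatio));
    }
}
```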
null : jvmStats.getUptime()); final String roles; @@ -336,6 +366,7 @@ private Table buildTable(RestRequest req, ClusterStateResponse state, NodesInfoR RefreshStats refreshStats = indicesStats == null ? null : indicesStats.getRefresh(); table.addCell(refreshStats == null ? null : refreshStats.getTotal()); table.addCell(refreshStats == null ? null : refreshStats.getTotalTime()); + table.addCell(refreshStats == null ? null : refreshStats.getListeners()); ScriptStats scriptStats = stats == null ? null : stats.getScriptStats(); table.addCell(scriptStats == null ? null : scriptStats.getCompilations()); diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestPendingClusterTasksAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestPendingClusterTasksAction.java index 773c6d292b5fc..a9f044c144642 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestPendingClusterTasksAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestPendingClusterTasksAction.java @@ -24,9 +24,7 @@ import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.cluster.service.PendingClusterTask; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -35,7 +33,6 @@ import static org.elasticsearch.rest.RestRequest.Method.GET; public class RestPendingClusterTasksAction extends AbstractCatAction { - @Inject public RestPendingClusterTasksAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_cat/pending_tasks", this); @@ -47,17 +44,20 @@ protected void documentation(StringBuilder sb) { } @Override - public void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { PendingClusterTasksRequest pendingClusterTasksRequest = new PendingClusterTasksRequest(); pendingClusterTasksRequest.masterNodeTimeout(request.paramAsTime("master_timeout", pendingClusterTasksRequest.masterNodeTimeout())); pendingClusterTasksRequest.local(request.paramAsBoolean("local", pendingClusterTasksRequest.local())); - client.admin().cluster().pendingClusterTasks(pendingClusterTasksRequest, new RestResponseListener(channel) { - @Override - public RestResponse buildResponse(PendingClusterTasksResponse pendingClusterTasks) throws Exception { - Table tab = buildTable(request, pendingClusterTasks); - return RestTable.buildResponse(tab, channel); - } - }); + return channel -> + client.admin() + .cluster() + .pendingClusterTasks(pendingClusterTasksRequest, new RestResponseListener(channel) { + @Override + public RestResponse buildResponse(PendingClusterTasksResponse pendingClusterTasks) throws Exception { + Table tab = buildTable(request, pendingClusterTasks); + return RestTable.buildResponse(tab, channel); + } + }); } @Override diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestPluginsAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestPluginsAction.java index ef8385653f166..7851c15b32f76 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestPluginsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestPluginsAction.java @@ -28,10 +28,8 @@ import 
org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.plugins.PluginInfo; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -41,8 +39,6 @@ import static org.elasticsearch.rest.RestRequest.Method.GET; public class RestPluginsAction extends AbstractCatAction { - - @Inject public RestPluginsAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_cat/plugins", this); @@ -54,13 +50,13 @@ protected void documentation(StringBuilder sb) { } @Override - public void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); clusterStateRequest.clear().nodes(true); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); clusterStateRequest.masterNodeTimeout(request.paramAsTime("master_timeout", clusterStateRequest.masterNodeTimeout())); - client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { + return channel -> client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { @Override public void processResponse(final ClusterStateResponse clusterStateResponse) throws Exception { NodesInfoRequest nodesInfoRequest = new NodesInfoRequest(); diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestRecoveryAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestRecoveryAction.java index b0ab8db8b2973..e2e831f890d99 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestRecoveryAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestRecoveryAction.java @@ -28,11 +28,9 @@ import org.elasticsearch.cluster.routing.RecoverySource.SnapshotRecoverySource; import org.elasticsearch.common.Strings; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.indices.recovery.RecoveryState; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -50,9 +48,7 @@ * be specified to limit output to a particular index or indices. 
*/ public class RestRecoveryAction extends AbstractCatAction { - - @Inject - public RestRecoveryAction(Settings settings, RestController restController, RestController controller) { + public RestRecoveryAction(Settings settings, RestController restController) { super(settings); restController.registerHandler(GET, "/_cat/recovery", this); restController.registerHandler(GET, "/_cat/recovery/{index}", this); @@ -65,13 +61,13 @@ protected void documentation(StringBuilder sb) { } @Override - public void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { final RecoveryRequest recoveryRequest = new RecoveryRequest(Strings.splitStringByCommaToArray(request.param("index"))); recoveryRequest.detailed(request.paramAsBoolean("detailed", false)); recoveryRequest.activeOnly(request.paramAsBoolean("active_only", false)); recoveryRequest.indicesOptions(IndicesOptions.fromRequest(request, recoveryRequest.indicesOptions())); - client.admin().indices().recoveries(recoveryRequest, new RestResponseListener(channel) { + return channel -> client.admin().indices().recoveries(recoveryRequest, new RestResponseListener(channel) { @Override public RestResponse buildResponse(final RecoveryResponse response) throws Exception { return RestTable.buildResponse(buildRecoveryTable(request, response), channel); diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestRepositoriesAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestRepositoriesAction.java index 05130504e5091..631c03050492d 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestRepositoriesAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestRepositoriesAction.java @@ -24,9 +24,7 @@ import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.cluster.metadata.RepositoryMetaData; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -38,24 +36,26 @@ * Cat API class to display information about snapshot repositories */ public class RestRepositoriesAction extends AbstractCatAction { - @Inject public RestRepositoriesAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_cat/repositories", this); } @Override - protected void doRequest(RestRequest request, RestChannel channel, NodeClient client) { + protected RestChannelConsumer doCatRequest(RestRequest request, NodeClient client) { GetRepositoriesRequest getRepositoriesRequest = new GetRepositoriesRequest(); getRepositoriesRequest.local(request.paramAsBoolean("local", getRepositoriesRequest.local())); getRepositoriesRequest.masterNodeTimeout(request.paramAsTime("master_timeout", getRepositoriesRequest.masterNodeTimeout())); - client.admin().cluster().getRepositories(getRepositoriesRequest, new RestResponseListener(channel) { - @Override - public RestResponse buildResponse(GetRepositoriesResponse getRepositoriesResponse) throws Exception { - return RestTable.buildResponse(buildTable(request, getRepositoriesResponse), channel); - } - }); + return channel -> + client.admin() + .cluster() + .getRepositories(getRepositoriesRequest, new RestResponseListener(channel) { + @Override + public 
RestResponse buildResponse(GetRepositoriesResponse getRepositoriesResponse) throws Exception { + return RestTable.buildResponse(buildTable(request, getRepositoriesResponse), channel); + } + }); } @Override diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestSegmentsAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestSegmentsAction.java index d2b30c49ca9d6..48983ab836b7a 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestSegmentsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestSegmentsAction.java @@ -30,10 +30,8 @@ import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.common.Strings; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.engine.Segment; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -46,8 +44,6 @@ import static org.elasticsearch.rest.RestRequest.Method.GET; public class RestSegmentsAction extends AbstractCatAction { - - @Inject public RestSegmentsAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_cat/segments", this); @@ -55,7 +51,7 @@ public RestSegmentsAction(Settings settings, RestController controller) { } @Override - protected void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + protected RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { final String[] indices = Strings.splitStringByCommaToArray(request.param("index")); final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); @@ -63,7 +59,7 @@ protected void doRequest(final RestRequest request, final RestChannel channel, f clusterStateRequest.masterNodeTimeout(request.paramAsTime("master_timeout", clusterStateRequest.masterNodeTimeout())); clusterStateRequest.clear().nodes(true).routingTable(true).indices(indices); - client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { + return channel -> client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { @Override public void processResponse(final ClusterStateResponse clusterStateResponse) { final IndicesSegmentsRequest indicesSegmentsRequest = new IndicesSegmentsRequest(); diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestShardsAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestShardsAction.java index 2dd4b6a10d137..3582fe60e0040 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestShardsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestShardsAction.java @@ -31,12 +31,10 @@ import org.elasticsearch.cluster.routing.UnassignedInfo; import org.elasticsearch.common.Strings; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.index.engine.CommitStats; import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -48,8 +46,6 @@ import static org.elasticsearch.rest.RestRequest.Method.GET; public class RestShardsAction extends 
AbstractCatAction { - - @Inject public RestShardsAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_cat/shards", this); @@ -63,13 +59,13 @@ protected void documentation(StringBuilder sb) { } @Override - public void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { final String[] indices = Strings.splitStringByCommaToArray(request.param("index")); final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); clusterStateRequest.masterNodeTimeout(request.paramAsTime("master_timeout", clusterStateRequest.masterNodeTimeout())); clusterStateRequest.clear().nodes(true).metaData(true).routingTable(true).indices(indices); - client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { + return channel -> client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { @Override public void processResponse(final ClusterStateResponse clusterStateResponse) { IndicesStatsRequest indicesStatsRequest = new IndicesStatsRequest(); @@ -145,6 +141,7 @@ protected Table getTableWithHeader(final RestRequest request) { table.addCell("refresh.total", "alias:rto,refreshTotal;default:false;text-align:right;desc:total refreshes"); table.addCell("refresh.time", "alias:rti,refreshTime;default:false;text-align:right;desc:time spent in refreshes"); + table.addCell("refresh.listeners", "alias:rli,refreshListeners;default:false;text-align:right;desc:number of pending refresh listeners"); table.addCell("search.fetch_current", "alias:sfc,searchFetchCurrent;default:false;text-align:right;desc:current fetch phase ops"); table.addCell("search.fetch_time", "alias:sfti,searchFetchTime;default:false;text-align:right;desc:time spent in fetch phase"); @@ -287,6 +284,7 @@ private Table buildTable(RestRequest request, ClusterStateResponse state, Indice table.addCell(commonStats == null ? null : commonStats.getRefresh().getTotal()); table.addCell(commonStats == null ? null : commonStats.getRefresh().getTotalTime()); + table.addCell(commonStats == null ? null : commonStats.getRefresh().getListeners()); table.addCell(commonStats == null ? null : commonStats.getSearch().getTotal().getFetchCurrent()); table.addCell(commonStats == null ? 
null : commonStats.getSearch().getTotal().getFetchTime()); diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestSnapshotAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestSnapshotAction.java index 021b00be24e1c..54337e5e14369 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestSnapshotAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestSnapshotAction.java @@ -24,10 +24,8 @@ import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -45,14 +43,14 @@ * Cat API class to display information about snapshots */ public class RestSnapshotAction extends AbstractCatAction { - @Inject public RestSnapshotAction(Settings settings, RestController controller) { super(settings); + controller.registerHandler(GET, "/_cat/snapshots", this); controller.registerHandler(GET, "/_cat/snapshots/{repository}", this); } @Override - protected void doRequest(final RestRequest request, RestChannel channel, NodeClient client) { + protected RestChannelConsumer doCatRequest(final RestRequest request, NodeClient client) { GetSnapshotsRequest getSnapshotsRequest = new GetSnapshotsRequest() .repository(request.param("repository")) .snapshots(new String[]{GetSnapshotsRequest.ALL_SNAPSHOTS}); @@ -61,12 +59,13 @@ protected void doRequest(final RestRequest request, RestChannel channel, NodeCli getSnapshotsRequest.masterNodeTimeout(request.paramAsTime("master_timeout", getSnapshotsRequest.masterNodeTimeout())); - client.admin().cluster().getSnapshots(getSnapshotsRequest, new RestResponseListener(channel) { - @Override - public RestResponse buildResponse(GetSnapshotsResponse getSnapshotsResponse) throws Exception { - return RestTable.buildResponse(buildTable(request, getSnapshotsResponse), channel); - } - }); + return channel -> + client.admin().cluster().getSnapshots(getSnapshotsRequest, new RestResponseListener(channel) { + @Override + public RestResponse buildResponse(GetSnapshotsResponse getSnapshotsResponse) throws Exception { + return RestTable.buildResponse(buildTable(request, getSnapshotsResponse), channel); + } + }); } @Override diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestTable.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestTable.java index 78d30407ffc51..ac8237471cc2f 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestTable.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestTable.java @@ -22,8 +22,11 @@ import org.elasticsearch.common.Booleans; import org.elasticsearch.common.Strings; import org.elasticsearch.common.Table; +import org.elasticsearch.common.io.Streams; import org.elasticsearch.common.io.UTF8StreamWriter; -import org.elasticsearch.common.io.stream.BytesStreamOutput; +import org.elasticsearch.common.io.stream.BytesStream; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.SizeValue; @@ -38,13 +41,16 @@ import java.io.IOException; 
import java.util.ArrayList; +import java.util.Collections; +import java.util.Comparator; import java.util.LinkedHashSet; import java.util.List; +import java.util.Locale; +import java.util.Map; import java.util.Set; -/** - */ public class RestTable { + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(RestRequest.class)); public static RestResponse buildResponse(Table table, RestChannel channel) throws Exception { RestRequest request = channel.request(); @@ -61,13 +67,13 @@ public static RestResponse buildXContentBuilder(Table table, RestChannel channel List displayHeaders = buildDisplayHeaders(table, request); builder.startArray(); - for (int row = 0; row < table.getRows().size(); row++) { + List rowOrder = getRowOrder(table, request); + for (Integer row : rowOrder) { builder.startObject(); for (DisplayHeader header : displayHeaders) { builder.field(header.display, renderValue(request, table.getAsMap().get(header.name).get(row).value)); } builder.endObject(); - } builder.endArray(); return new BytesRestResponse(RestStatus.OK, builder); @@ -80,7 +86,7 @@ public static RestResponse buildTextPlainResponse(Table table, RestChannel chann List headers = buildDisplayHeaders(table, request); int[] width = buildWidths(table, request, verbose, headers); - BytesStreamOutput bytesOut = channel.bytesOutput(); + BytesStream bytesOut = Streams.flushOnCloseStream(channel.bytesOutput()); UTF8StreamWriter out = new UTF8StreamWriter().setOutput(bytesOut); int lastHeader = headers.size() - 1; if (verbose) { @@ -94,7 +100,10 @@ public static RestResponse buildTextPlainResponse(Table table, RestChannel chann } out.append("\n"); } - for (int row = 0; row < table.getRows().size(); row++) { + + List rowOrder = getRowOrder(table, request); + + for (Integer row: rowOrder) { for (int col = 0; col < headers.size(); col++) { DisplayHeader header = headers.get(col); boolean isLastColumn = col == lastHeader; @@ -109,6 +118,38 @@ public static RestResponse buildTextPlainResponse(Table table, RestChannel chann return new BytesRestResponse(RestStatus.OK, BytesRestResponse.TEXT_CONTENT_TYPE, bytesOut.bytes()); } + static List getRowOrder(Table table, RestRequest request) { + String[] columnOrdering = request.paramAsStringArray("s", null); + + List rowOrder = new ArrayList<>(); + for (int i = 0; i < table.getRows().size(); i++) { + rowOrder.add(i); + } + + if (columnOrdering != null) { + Map headerAliasMap = table.getAliasMap(); + List ordering = new ArrayList<>(); + for (int i = 0; i < columnOrdering.length; i++) { + String columnHeader = columnOrdering[i]; + boolean reverse = false; + if (columnHeader.endsWith(":desc")) { + columnHeader = columnHeader.substring(0, columnHeader.length() - ":desc".length()); + reverse = true; + } else if (columnHeader.endsWith(":asc")) { + columnHeader = columnHeader.substring(0, columnHeader.length() - ":asc".length()); + } + if (headerAliasMap.containsKey(columnHeader)) { + ordering.add(new ColumnOrderElement(headerAliasMap.get(columnHeader), reverse)); + } else { + throw new UnsupportedOperationException( + String.format(Locale.ROOT, "Unable to sort by unknown sort key `%s`", columnHeader)); + } + } + Collections.sort(rowOrder, new TableIndexComparator(table, ordering)); + } + return rowOrder; + } + static List buildDisplayHeaders(Table table, RestRequest request) { List display = new ArrayList<>(); if (request.hasParam("h")) { @@ -153,7 +194,13 @@ static List buildDisplayHeaders(Table table, RestRequest request) } else { for (Table.Cell 
cell : table.getHeaders()) { String d = cell.attr.get("default"); - if (Booleans.parseBoolean(d, true) && checkOutputTimestamp(cell.value.toString(), request)) { + boolean defaultValue = Booleans.parseBoolean(d, true); + if (d != null && Booleans.isStrictlyBoolean(d) == false) { + DEPRECATION_LOGGER.deprecated( + "Expected a boolean [true/false] for attribute [default] of table header [{}] but got [{}]", + cell.value.toString(), d); + } + if (defaultValue && checkOutputTimestamp(cell.value.toString(), request)) { display.add(new DisplayHeader(cell.value.toString(), cell.value.toString())); } } @@ -302,17 +349,17 @@ private static String renderValue(RestRequest request, Object value) { ByteSizeValue v = (ByteSizeValue) value; String resolution = request.param("bytes"); if ("b".equals(resolution)) { - return Long.toString(v.bytes()); + return Long.toString(v.getBytes()); } else if ("k".equals(resolution) || "kb".equals(resolution)) { - return Long.toString(v.kb()); + return Long.toString(v.getKb()); } else if ("m".equals(resolution) || "mb".equals(resolution)) { - return Long.toString(v.mb()); + return Long.toString(v.getMb()); } else if ("g".equals(resolution) || "gb".equals(resolution)) { - return Long.toString(v.gb()); + return Long.toString(v.getGb()); } else if ("t".equals(resolution) || "tb".equals(resolution)) { - return Long.toString(v.tb()); + return Long.toString(v.getTb()); } else if ("p".equals(resolution) || "pb".equals(resolution)) { - return Long.toString(v.pb()); + return Long.toString(v.getPb()); } else { return v.toString(); } @@ -370,4 +417,71 @@ static class DisplayHeader { this.display = display; } } + + static class TableIndexComparator implements Comparator { + private final Table table; + private final int maxIndex; + private final List ordering; + + TableIndexComparator(Table table, List ordering) { + this.table = table; + this.maxIndex = table.getRows().size(); + this.ordering = ordering; + } + + private int compareCell(Object o1, Object o2) { + if (o1 == null && o2 == null) { + return 0; + } else if (o1 == null) { + return -1; + } else if (o2 == null) { + return 1; + } else { + if (o1 instanceof Comparable && o1.getClass().equals(o2.getClass())) { + return ((Comparable) o1).compareTo(o2); + } else { + return o1.toString().compareTo(o2.toString()); + } + } + } + + @Override + public int compare(Integer rowIndex1, Integer rowIndex2) { + if (rowIndex1 < maxIndex && rowIndex1 >= 0 && rowIndex2 < maxIndex && rowIndex2 >= 0) { + Map> tableMap = table.getAsMap(); + for (ColumnOrderElement orderingElement : ordering) { + String column = orderingElement.getColumn(); + if (tableMap.containsKey(column)) { + int comparison = compareCell(tableMap.get(column).get(rowIndex1).value, + tableMap.get(column).get(rowIndex2).value); + if (comparison != 0) { + return orderingElement.isReversed() ? 
-1 * comparison : comparison; + } + } + } + return 0; + } else { + throw new AssertionError(String.format(Locale.ENGLISH, "Invalid comparison of indices (%s, %s): Table has %s rows.", + rowIndex1, rowIndex2, table.getRows().size())); + } + } + } + + static class ColumnOrderElement { + private final String column; + private final boolean reverse; + + ColumnOrderElement(String column, boolean reverse) { + this.column = column; + this.reverse = reverse; + } + + public String getColumn() { + return column; + } + + public boolean isReversed() { + return reverse; + } + } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestTasksAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestTasksAction.java index 99eee4e735ad0..a1b48b7115be3 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestTasksAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestTasksAction.java @@ -24,14 +24,11 @@ import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.node.DiscoveryNodes; -import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Strings; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.InetSocketTransportAddress; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -41,19 +38,22 @@ import org.joda.time.format.DateTimeFormatter; import java.util.ArrayList; +import java.util.Collections; +import java.util.HashSet; import java.util.List; +import java.util.Set; +import java.util.function.Supplier; import static org.elasticsearch.rest.RestRequest.Method.GET; import static org.elasticsearch.rest.action.admin.cluster.RestListTasksAction.generateListTasksRequest; public class RestTasksAction extends AbstractCatAction { - private final ClusterService clusterService; + private final Supplier nodesInCluster; - @Inject - public RestTasksAction(Settings settings, RestController controller, ClusterService clusterService) { + public RestTasksAction(Settings settings, RestController controller, Supplier nodesInCluster) { super(settings); controller.registerHandler(GET, "/_cat/tasks", this); - this.clusterService = clusterService; + this.nodesInCluster = nodesInCluster; } @Override @@ -62,8 +62,9 @@ protected void documentation(StringBuilder sb) { } @Override - public void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { - client.admin().cluster().listTasks(generateListTasksRequest(request), new RestResponseListener(channel) { + public RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { + return channel -> + client.admin().cluster().listTasks(generateListTasksRequest(request), new RestResponseListener(channel) { @Override public RestResponse buildResponse(ListTasksResponse listTasksResponse) throws Exception { return RestTable.buildResponse(buildTable(request, listTasksResponse), channel); @@ -71,6 +72,20 @@ public RestResponse buildResponse(ListTasksResponse listTasksResponse) throws Ex }); } + private static final Set RESPONSE_PARAMS; + + static { + final Set responseParams = new HashSet<>(); + responseParams.add("detailed"); + 
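The `RestTable` changes add sorting of cat output: an `s` parameter is parsed into column names with optional `:asc`/`:desc` suffixes, resolved through the table's alias map, and applied by sorting row indices with `TableIndexComparator`; unknown sort keys are rejected with an `UnsupportedOperationException`. A compact stand-alone sketch of the core idea, sorting row indices over parallel column lists (the column names and data below are illustrative only):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class RowOrderSketch {
    public static void main(String[] args) {
        // A tiny "table" held as parallel columns, the way Table.getAsMap() exposes cells.
        Map<String, List<Integer>> columns = Map.of(
            "docs.count", List.of(30, 5, 12),
            "pri",        List.of(1, 3, 2));

        // Sort row *indices*, not the rows themselves (mirrors getRowOrder + TableIndexComparator).
        List<Integer> rowOrder = new ArrayList<>(List.of(0, 1, 2));
        String sortKey = "docs.count";  // what would arrive as ?s=docs.count:desc
        boolean reverse = true;

        Comparator<Integer> byColumn = Comparator.comparing(row -> columns.get(sortKey).get(row));
        rowOrder.sort(reverse ? byColumn.reversed() : byColumn);

        for (int row : rowOrder) {
            System.out.println(columns.get("docs.count").get(row) + "\t" + columns.get("pri").get(row));
        }
    }
}
```

A request such as `GET /_cat/indices?v&s=docs.count:desc` would then drive this ordering.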
responseParams.addAll(AbstractCatAction.RESPONSE_PARAMS); + RESPONSE_PARAMS = Collections.unmodifiableSet(responseParams); + } + + @Override + protected Set responseParams() { + return RESPONSE_PARAMS; + } + @Override protected Table getTableWithHeader(final RestRequest request) { boolean detailed = request.paramAsBoolean("detailed", false); @@ -142,7 +157,7 @@ private void buildRow(Table table, boolean fullId, boolean detailed, DiscoveryNo } private void buildGroups(Table table, boolean fullId, boolean detailed, List taskGroups) { - DiscoveryNodes discoveryNodes = clusterService.state().nodes(); + DiscoveryNodes discoveryNodes = nodesInCluster.get(); List sortedGroups = new ArrayList<>(taskGroups); sortedGroups.sort((o1, o2) -> Long.compare(o1.getTaskInfo().getStartTime(), o2.getTaskInfo().getStartTime())); for (TaskGroup taskGroup : sortedGroups) { diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestTemplatesAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestTemplatesAction.java new file mode 100644 index 0000000000000..8416a346299d2 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestTemplatesAction.java @@ -0,0 +1,95 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.rest.action.cat; + +import com.carrotsearch.hppc.cursors.ObjectObjectCursor; + +import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest; +import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse; +import org.elasticsearch.client.node.NodeClient; +import org.elasticsearch.cluster.metadata.IndexTemplateMetaData; +import org.elasticsearch.cluster.metadata.MetaData; +import org.elasticsearch.common.Table; +import org.elasticsearch.common.regex.Regex; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.rest.RestController; +import org.elasticsearch.rest.RestRequest; +import org.elasticsearch.rest.RestResponse; +import org.elasticsearch.rest.action.RestResponseListener; + +import static org.elasticsearch.rest.RestRequest.Method.GET; + +public class RestTemplatesAction extends AbstractCatAction { + public RestTemplatesAction(Settings settings, RestController controller) { + super(settings); + controller.registerHandler(GET, "/_cat/templates", this); + controller.registerHandler(GET, "/_cat/templates/{name}", this); + } + + @Override + protected void documentation(StringBuilder sb) { + sb.append("/_cat/templates\n"); + } + + @Override + protected RestChannelConsumer doCatRequest(final RestRequest request, NodeClient client) { + final String matchPattern = request.hasParam("name") ? 
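`RestTasksAction` above no longer depends on the whole `ClusterService`; it takes a `Supplier<DiscoveryNodes>` and calls `nodesInCluster.get()` when building rows, presumably to narrow the dependency and make the handler easier to construct in tests. A generic sketch of that "inject a supplier, not the service" pattern; every type here is a made-up stand-in:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;

public class SupplierInjection {
    /** Stand-in for DiscoveryNodes. */
    static class Nodes {
        final List<String> names;
        Nodes(List<String> names) { this.names = names; }
    }

    /** Stand-in for the broad cluster service the handler used to take. */
    static class ClusterServiceStandIn {
        Nodes currentNodes() { return new Nodes(Arrays.asList("node-1", "node-2")); }
    }

    /** The handler only needs "the nodes right now", so it asks for exactly that. */
    static class TasksHandler {
        private final Supplier<Nodes> nodesInCluster;
        TasksHandler(Supplier<Nodes> nodesInCluster) { this.nodesInCluster = nodesInCluster; }
        void buildRows() { System.out.println("nodes: " + nodesInCluster.get().names); }
    }

    public static void main(String[] args) {
        ClusterServiceStandIn clusterService = new ClusterServiceStandIn();
        new TasksHandler(clusterService::currentNodes).buildRows();               // production-style wiring
        new TasksHandler(() -> new Nodes(Arrays.asList("test-node"))).buildRows(); // trivially faked in a test
    }
}
```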
request.param("name") : null; + final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); + clusterStateRequest.clear().metaData(true); + clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); + clusterStateRequest.masterNodeTimeout(request.paramAsTime("master_timeout", clusterStateRequest.masterNodeTimeout())); + + return channel -> client.admin().cluster().state(clusterStateRequest, new RestResponseListener(channel) { + @Override + public RestResponse buildResponse(ClusterStateResponse clusterStateResponse) throws Exception { + return RestTable.buildResponse(buildTable(request, clusterStateResponse, matchPattern), channel); + } + }); + } + + @Override + protected Table getTableWithHeader(RestRequest request) { + Table table = new Table(); + table.startHeaders(); + table.addCell("name", "alias:n;desc:template name"); + table.addCell("template", "alias:t;desc:template pattern string"); + table.addCell("order", "alias:o;desc:template application order number"); + table.addCell("version", "alias:v;desc:version"); + table.endHeaders(); + return table; + } + + private Table buildTable(RestRequest request, ClusterStateResponse clusterStateResponse, String patternString) { + Table table = getTableWithHeader(request); + MetaData metadata = clusterStateResponse.getState().metaData(); + for (ObjectObjectCursor entry : metadata.templates()) { + IndexTemplateMetaData indexData = entry.value; + if (patternString == null || Regex.simpleMatch(patternString, indexData.name())) { + table.startRow(); + table.addCell(indexData.name()); + table.addCell(indexData.getTemplate()); + table.addCell(indexData.getOrder()); + table.addCell(indexData.getVersion()); + table.endRow(); + } + } + return table; + } +} diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestThreadPoolAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestThreadPoolAction.java index 6f3c5c11ce01e..f841a3a77e8da 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestThreadPoolAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestThreadPoolAction.java @@ -31,11 +31,9 @@ import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.common.Table; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.InetSocketTransportAddress; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -54,8 +52,6 @@ import static org.elasticsearch.rest.RestRequest.Method.GET; public class RestThreadPoolAction extends AbstractCatAction { - - @Inject public RestThreadPoolAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_cat/thread_pool", this); @@ -65,17 +61,17 @@ public RestThreadPoolAction(Settings settings, RestController controller) { @Override protected void documentation(StringBuilder sb) { sb.append("/_cat/thread_pool\n"); - sb.append("/_cat/thread_pool/{thread_pools}"); + sb.append("/_cat/thread_pool/{thread_pools}\n"); } @Override - public void doRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { final ClusterStateRequest 
clusterStateRequest = new ClusterStateRequest(); clusterStateRequest.clear().nodes(true); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); clusterStateRequest.masterNodeTimeout(request.paramAsTime("master_timeout", clusterStateRequest.masterNodeTimeout())); - client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { + return channel -> client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { @Override public void processResponse(final ClusterStateResponse clusterStateResponse) { NodesInfoRequest nodesInfoRequest = new NodesInfoRequest(); @@ -97,6 +93,19 @@ public RestResponse buildResponse(NodesStatsResponse nodesStatsResponse) throws }); } + private static final Set RESPONSE_PARAMS; + + static { + final Set responseParams = new HashSet<>(AbstractCatAction.RESPONSE_PARAMS); + responseParams.add("thread_pool_patterns"); + RESPONSE_PARAMS = Collections.unmodifiableSet(responseParams); + } + + @Override + protected Set responseParams() { + return RESPONSE_PARAMS; + } + @Override protected Table getTableWithHeader(final RestRequest request) { final Table table = new Table(); diff --git a/core/src/main/java/org/elasticsearch/rest/action/document/RestBulkAction.java b/core/src/main/java/org/elasticsearch/rest/action/document/RestBulkAction.java index f5dca3f22c9f8..8745c4bfe2d98 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/document/RestBulkAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/document/RestBulkAction.java @@ -19,28 +19,25 @@ package org.elasticsearch.rest.action.document; -import org.elasticsearch.action.bulk.BulkItemResponse; import org.elasticsearch.action.bulk.BulkRequest; -import org.elasticsearch.action.bulk.BulkResponse; import org.elasticsearch.action.bulk.BulkShardRequest; import org.elasticsearch.action.support.ActiveShardCount; -import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.client.Requests; +import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.RestResponse; -import org.elasticsearch.rest.action.RestBuilderListener; +import org.elasticsearch.rest.action.RestStatusToXContentListener; +import org.elasticsearch.search.fetch.subphase.FetchSourceContext; + +import java.io.IOException; import static org.elasticsearch.rest.RestRequest.Method.POST; import static org.elasticsearch.rest.RestRequest.Method.PUT; -import static org.elasticsearch.rest.RestStatus.OK; /** *
@@ -52,10 +49,11 @@
  * </pre>
    */ public class RestBulkAction extends BaseRestHandler { + private static final DeprecationLogger DEPRECATION_LOGGER = + new DeprecationLogger(Loggers.getLogger(RestBulkAction.class)); private final boolean allowExplicitIndex; - @Inject public RestBulkAction(Settings settings, RestController controller) { super(settings); @@ -70,52 +68,32 @@ public RestBulkAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws Exception { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { BulkRequest bulkRequest = Requests.bulkRequest(); String defaultIndex = request.param("index"); String defaultType = request.param("type"); String defaultRouting = request.param("routing"); + FetchSourceContext defaultFetchSourceContext = FetchSourceContext.parseFromRestRequest(request); String fieldsParam = request.param("fields"); - String defaultPipeline = request.param("pipeline"); + if (fieldsParam != null) { + DEPRECATION_LOGGER.deprecated("Deprecated field [fields] used, expected [_source] instead"); + } String[] defaultFields = fieldsParam != null ? Strings.commaDelimitedListToStringArray(fieldsParam) : null; - + String defaultPipeline = request.param("pipeline"); String waitForActiveShards = request.param("wait_for_active_shards"); if (waitForActiveShards != null) { bulkRequest.waitForActiveShards(ActiveShardCount.parseString(waitForActiveShards)); } bulkRequest.timeout(request.paramAsTime("timeout", BulkShardRequest.DEFAULT_TIMEOUT)); bulkRequest.setRefreshPolicy(request.param("refresh")); - bulkRequest.add(request.content(), defaultIndex, defaultType, defaultRouting, defaultFields, defaultPipeline, - null, allowExplicitIndex); - - client.bulk(bulkRequest, new RestBuilderListener(channel) { - @Override - public RestResponse buildResponse(BulkResponse response, XContentBuilder builder) throws Exception { - builder.startObject(); - builder.field(Fields.TOOK, response.getTookInMillis()); - if (response.getIngestTookInMillis() != BulkResponse.NO_INGEST_TOOK) { - builder.field(Fields.INGEST_TOOK, response.getIngestTookInMillis()); - } - builder.field(Fields.ERRORS, response.hasFailures()); - builder.startArray(Fields.ITEMS); - for (BulkItemResponse itemResponse : response) { - builder.startObject(); - itemResponse.toXContent(builder, request); - builder.endObject(); - } - builder.endArray(); + bulkRequest.add(request.requiredContent(), defaultIndex, defaultType, defaultRouting, defaultFields, + defaultFetchSourceContext, defaultPipeline, null, allowExplicitIndex, request.getXContentType()); - builder.endObject(); - return new BytesRestResponse(OK, builder); - } - }); + return channel -> client.bulk(bulkRequest, new RestStatusToXContentListener<>(channel)); } - static final class Fields { - static final String ITEMS = "items"; - static final String ERRORS = "errors"; - static final String TOOK = "took"; - static final String INGEST_TOOK = "ingest_took"; + @Override + public boolean supportsContentStream() { + return true; } - } diff --git a/core/src/main/java/org/elasticsearch/rest/action/document/RestCountAction.java b/core/src/main/java/org/elasticsearch/rest/action/document/RestCountAction.java index e5ca6f2cadd02..4d469ea3e0ce6 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/document/RestCountAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/document/RestCountAction.java @@ -24,15 +24,11 @@ 
import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.query.QueryBuilder; -import org.elasticsearch.indices.query.IndicesQueriesRegistry; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -40,20 +36,15 @@ import org.elasticsearch.rest.action.RestBuilderListener; import org.elasticsearch.search.builder.SearchSourceBuilder; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.GET; import static org.elasticsearch.rest.RestRequest.Method.POST; import static org.elasticsearch.rest.action.RestActions.buildBroadcastShardsHeader; import static org.elasticsearch.search.internal.SearchContext.DEFAULT_TERMINATE_AFTER; -/** - * - */ public class RestCountAction extends BaseRestHandler { - - private final IndicesQueriesRegistry indicesQueriesRegistry; - - @Inject - public RestCountAction(Settings settings, RestController controller, IndicesQueriesRegistry indicesQueriesRegistry) { + public RestCountAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(POST, "/_count", this); controller.registerHandler(GET, "/_count", this); @@ -61,24 +52,24 @@ public RestCountAction(Settings settings, RestController controller, IndicesQuer controller.registerHandler(GET, "/{index}/_count", this); controller.registerHandler(POST, "/{index}/{type}/_count", this); controller.registerHandler(GET, "/{index}/{type}/_count", this); - this.indicesQueriesRegistry = indicesQueriesRegistry; } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { SearchRequest countRequest = new SearchRequest(Strings.splitStringByCommaToArray(request.param("index"))); countRequest.indicesOptions(IndicesOptions.fromRequest(request, countRequest.indicesOptions())); SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder().size(0); countRequest.source(searchSourceBuilder); - if (RestActions.hasBodyContent(request)) { - BytesReference restContent = RestActions.getRestContent(request); - searchSourceBuilder.query(RestActions.getQueryContent(restContent, indicesQueriesRegistry, parseFieldMatcher)); - } else { - QueryBuilder queryBuilder = RestActions.urlParamsToQueryBuilder(request); - if (queryBuilder != null) { - searchSourceBuilder.query(queryBuilder); + request.withContentOrSourceParamParserOrNull(parser -> { + if (parser == null) { + QueryBuilder queryBuilder = RestActions.urlParamsToQueryBuilder(request); + if (queryBuilder != null) { + searchSourceBuilder.query(queryBuilder); + } + } else { + searchSourceBuilder.query(RestActions.getQueryContent(parser)); } - } + }); countRequest.routing(request.param("routing")); float minScore = request.paramAsFloat("min_score", -1f); if (minScore != -1f) { @@ -93,7 +84,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel, } else if (terminateAfter > 0) { 
searchSourceBuilder.terminateAfter(terminateAfter); } - client.search(countRequest, new RestBuilderListener(channel) { + return channel -> client.search(countRequest, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(SearchResponse response, XContentBuilder builder) throws Exception { builder.startObject(); @@ -102,11 +93,12 @@ public RestResponse buildResponse(SearchResponse response, XContentBuilder build } builder.field("count", response.getHits().totalHits()); buildBroadcastShardsHeader(builder, request, response.getTotalShards(), response.getSuccessfulShards(), - response.getFailedShards(), response.getShardFailures()); + 0, response.getFailedShards(), response.getShardFailures()); builder.endObject(); return new BytesRestResponse(response.status(), builder); } }); } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/document/RestDeleteAction.java b/core/src/main/java/org/elasticsearch/rest/action/document/RestDeleteAction.java index 392cff7ffbae6..a39ec5e747b59 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/document/RestDeleteAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/document/RestDeleteAction.java @@ -22,31 +22,29 @@ import org.elasticsearch.action.delete.DeleteRequest; import org.elasticsearch.action.support.ActiveShardCount; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.VersionType; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestActions; import org.elasticsearch.rest.action.RestStatusToXContentListener; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.DELETE; /** * */ public class RestDeleteAction extends BaseRestHandler { - - @Inject public RestDeleteAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(DELETE, "/{index}/{type}/{id}", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { DeleteRequest deleteRequest = new DeleteRequest(request.param("index"), request.param("type"), request.param("id")); deleteRequest.routing(request.param("routing")); deleteRequest.parent(request.param("parent")); // order is important, set it after routing, so it will set the routing @@ -60,6 +58,6 @@ public void handleRequest(final RestRequest request, final RestChannel channel, deleteRequest.waitForActiveShards(ActiveShardCount.parseString(waitForActiveShards)); } - client.delete(deleteRequest, new RestStatusToXContentListener<>(channel)); + return channel -> client.delete(deleteRequest, new RestStatusToXContentListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/document/RestGetAction.java b/core/src/main/java/org/elasticsearch/rest/action/document/RestGetAction.java index 8c782a8d128d3..c1a7c5868094b 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/document/RestGetAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/document/RestGetAction.java @@ -23,34 +23,33 @@ import org.elasticsearch.action.get.GetResponse; import org.elasticsearch.client.node.NodeClient; import 
org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.VersionType; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.RestResponse; +import org.elasticsearch.rest.RestStatus; import org.elasticsearch.rest.action.RestActions; -import org.elasticsearch.rest.action.RestBuilderListener; +import org.elasticsearch.rest.action.RestToXContentListener; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.GET; +import static org.elasticsearch.rest.RestRequest.Method.HEAD; import static org.elasticsearch.rest.RestStatus.NOT_FOUND; import static org.elasticsearch.rest.RestStatus.OK; public class RestGetAction extends BaseRestHandler { - @Inject - public RestGetAction(Settings settings, RestController controller) { + public RestGetAction(final Settings settings, final RestController controller) { super(settings); controller.registerHandler(GET, "/{index}/{type}/{id}", this); + controller.registerHandler(HEAD, "/{index}/{type}/{id}", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { final GetRequest getRequest = new GetRequest(request.param("index"), request.param("type"), request.param("id")); getRequest.operationThreaded(true); getRequest.refresh(request.paramAsBoolean("refresh", getRequest.refresh())); @@ -58,13 +57,15 @@ public void handleRequest(final RestRequest request, final RestChannel channel, getRequest.parent(request.param("parent")); getRequest.preference(request.param("preference")); getRequest.realtime(request.paramAsBoolean("realtime", getRequest.realtime())); - getRequest.ignoreErrorsOnGeneratedFields(request.paramAsBoolean("ignore_errors_on_generated_fields", false)); - - String sField = request.param("fields"); - if (sField != null) { - String[] sFields = Strings.splitStringByCommaToArray(sField); - if (sFields != null) { - getRequest.fields(sFields); + if (request.param("fields") != null) { + throw new IllegalArgumentException("the parameter [fields] is no longer supported, " + + "please use [stored_fields] to retrieve stored fields or [_source] to load the field from _source"); + } + final String fieldsParam = request.param("stored_fields"); + if (fieldsParam != null) { + final String[] fields = Strings.splitStringByCommaToArray(fieldsParam); + if (fields != null) { + getRequest.storedFields(fields); } } @@ -73,18 +74,12 @@ public void handleRequest(final RestRequest request, final RestChannel channel, getRequest.fetchSourceContext(FetchSourceContext.parseFromRestRequest(request)); - client.get(getRequest, new RestBuilderListener(channel) { + return channel -> client.get(getRequest, new RestToXContentListener(channel) { @Override - public RestResponse buildResponse(GetResponse response, XContentBuilder builder) throws Exception { - builder.startObject(); - response.toXContent(builder, request); - builder.endObject(); - if (!response.isExists()) { - return new BytesRestResponse(NOT_FOUND, builder); - } else { - return new 
BytesRestResponse(OK, builder); - } + protected RestStatus getStatus(final GetResponse response) { + return response.isExists() ? OK : NOT_FOUND; } }); } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/document/RestGetSourceAction.java b/core/src/main/java/org/elasticsearch/rest/action/document/RestGetSourceAction.java index 1ecfe317f4eac..341c1ddc91753 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/document/RestGetSourceAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/document/RestGetSourceAction.java @@ -23,12 +23,10 @@ import org.elasticsearch.action.get.GetRequest; import org.elasticsearch.action.get.GetResponse; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -38,50 +36,54 @@ import java.io.IOException; import static org.elasticsearch.rest.RestRequest.Method.GET; +import static org.elasticsearch.rest.RestRequest.Method.HEAD; import static org.elasticsearch.rest.RestStatus.NOT_FOUND; import static org.elasticsearch.rest.RestStatus.OK; +/** + * The REST handler for get source and head source APIs. + */ public class RestGetSourceAction extends BaseRestHandler { - @Inject - public RestGetSourceAction(Settings settings, RestController controller) { + public RestGetSourceAction(final Settings settings, final RestController controller) { super(settings); controller.registerHandler(GET, "/{index}/{type}/{id}/_source", this); + controller.registerHandler(HEAD, "/{index}/{type}/{id}/_source", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { final GetRequest getRequest = new GetRequest(request.param("index"), request.param("type"), request.param("id")); getRequest.operationThreaded(true); getRequest.refresh(request.paramAsBoolean("refresh", getRequest.refresh())); - getRequest.routing(request.param("routing")); // order is important, set it after routing, so it will set the routing + getRequest.routing(request.param("routing")); getRequest.parent(request.param("parent")); getRequest.preference(request.param("preference")); getRequest.realtime(request.paramAsBoolean("realtime", getRequest.realtime())); getRequest.fetchSourceContext(FetchSourceContext.parseFromRestRequest(request)); - if (getRequest.fetchSourceContext() != null && !getRequest.fetchSourceContext().fetchSource()) { - try { - ActionRequestValidationException validationError = new ActionRequestValidationException(); + return channel -> { + if (getRequest.fetchSourceContext() != null && !getRequest.fetchSourceContext().fetchSource()) { + final ActionRequestValidationException validationError = new ActionRequestValidationException(); validationError.addValidationError("fetching source can not be disabled"); channel.sendResponse(new BytesRestResponse(channel, validationError)); - } catch (IOException e) { - logger.error("Failed to send failure response", e); - } - } - - client.get(getRequest, new RestResponseListener(channel) { - @Override - public RestResponse 
buildResponse(GetResponse response) throws Exception { - XContentBuilder builder = channel.newBuilder(response.getSourceInternal(), false); - if (response.isSourceEmpty()) { // check if doc source (or doc itself) is missing - return new BytesRestResponse(NOT_FOUND, builder); - } else { - builder.rawValue(response.getSourceInternal()); - return new BytesRestResponse(OK, builder); - } + } else { + client.get(getRequest, new RestResponseListener(channel) { + @Override + public RestResponse buildResponse(final GetResponse response) throws Exception { + final XContentBuilder builder = channel.newBuilder(request.getXContentType(), false); + // check if doc source (or doc itself) is missing + if (response.isSourceEmpty()) { + return new BytesRestResponse(NOT_FOUND, builder); + } else { + builder.rawValue(response.getSourceInternal()); + return new BytesRestResponse(OK, builder); + } + } + }); } - }); + }; } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/document/RestHeadAction.java b/core/src/main/java/org/elasticsearch/rest/action/document/RestHeadAction.java deleted file mode 100644 index 9fb706bd8e6d1..0000000000000 --- a/core/src/main/java/org/elasticsearch/rest/action/document/RestHeadAction.java +++ /dev/null @@ -1,110 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.rest.action.document; - -import org.elasticsearch.action.get.GetRequest; -import org.elasticsearch.action.get.GetResponse; -import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.Strings; -import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; -import org.elasticsearch.rest.RestController; -import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.RestResponse; -import org.elasticsearch.rest.action.RestResponseListener; - -import static org.elasticsearch.rest.RestRequest.Method.HEAD; -import static org.elasticsearch.rest.RestStatus.NOT_FOUND; -import static org.elasticsearch.rest.RestStatus.OK; - -/** - * Base class for {@code HEAD} request handlers for a single document. - */ -public abstract class RestHeadAction extends BaseRestHandler { - - /** - * Handler to check for document existence. - */ - public static class Document extends RestHeadAction { - - @Inject - public Document(Settings settings, RestController controller) { - super(settings, false); - controller.registerHandler(HEAD, "/{index}/{type}/{id}", this); - } - } - - /** - * Handler to check for document source existence (may be disabled in the mapping). 
- */ - public static class Source extends RestHeadAction { - - @Inject - public Source(Settings settings, RestController controller) { - super(settings, true); - controller.registerHandler(HEAD, "/{index}/{type}/{id}/_source", this); - } - } - - private final boolean source; - - /** - * All subclasses must be registered in {@link org.elasticsearch.common.network.NetworkModule}. - * - * @param settings injected settings - * @param source {@code false} to check for {@link GetResponse#isExists()}. - * {@code true} to also check for {@link GetResponse#isSourceEmpty()}. - */ - public RestHeadAction(Settings settings, boolean source) { - super(settings); - this.source = source; - } - - @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { - final GetRequest getRequest = new GetRequest(request.param("index"), request.param("type"), request.param("id")); - getRequest.operationThreaded(true); - getRequest.refresh(request.paramAsBoolean("refresh", getRequest.refresh())); - getRequest.routing(request.param("routing")); // order is important, set it after routing, so it will set the routing - getRequest.parent(request.param("parent")); - getRequest.preference(request.param("preference")); - getRequest.realtime(request.paramAsBoolean("realtime", getRequest.realtime())); - // don't get any fields back... - getRequest.fields(Strings.EMPTY_ARRAY); - // TODO we can also just return the document size as Content-Length - - client.get(getRequest, new RestResponseListener(channel) { - @Override - public RestResponse buildResponse(GetResponse response) { - if (!response.isExists()) { - return new BytesRestResponse(NOT_FOUND, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY); - } else if (source && response.isSourceEmpty()) { // doc exists, but source might not (disabled in the mapping) - return new BytesRestResponse(NOT_FOUND, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY); - } else { - return new BytesRestResponse(OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY); - } - } - }); - } -} diff --git a/core/src/main/java/org/elasticsearch/rest/action/document/RestIndexAction.java b/core/src/main/java/org/elasticsearch/rest/action/document/RestIndexAction.java index 6c9723b5b9346..8de694c4f2bdd 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/document/RestIndexAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/document/RestIndexAction.java @@ -22,13 +22,9 @@ import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.action.support.ActiveShardCount; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.VersionType; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestActions; @@ -38,14 +34,8 @@ import static org.elasticsearch.rest.RestRequest.Method.POST; import static org.elasticsearch.rest.RestRequest.Method.PUT; -import static org.elasticsearch.rest.RestStatus.BAD_REQUEST; -/** - * - */ public class RestIndexAction extends BaseRestHandler { - - @Inject public RestIndexAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(POST, 
"/{index}/{type}", this); // auto id creation @@ -62,46 +52,42 @@ protected CreateHandler(Settings settings) { } @Override - public void handleRequest(RestRequest request, RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(RestRequest request, final NodeClient client) throws IOException { request.params().put("op_type", "create"); - RestIndexAction.this.handleRequest(request, channel, client); + return RestIndexAction.this.prepareRequest(request, client); } } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { IndexRequest indexRequest = new IndexRequest(request.param("index"), request.param("type"), request.param("id")); indexRequest.routing(request.param("routing")); indexRequest.parent(request.param("parent")); // order is important, set it after routing, so it will set the routing + if (request.hasParam("timestamp")) { + deprecationLogger.deprecated("The [timestamp] parameter of index requests is deprecated"); + } indexRequest.timestamp(request.param("timestamp")); if (request.hasParam("ttl")) { + deprecationLogger.deprecated("The [ttl] parameter of index requests is deprecated"); indexRequest.ttl(request.param("ttl")); } indexRequest.setPipeline(request.param("pipeline")); - indexRequest.source(request.content()); + indexRequest.source(request.requiredContent(), request.getXContentType()); indexRequest.timeout(request.paramAsTime("timeout", IndexRequest.DEFAULT_TIMEOUT)); indexRequest.setRefreshPolicy(request.param("refresh")); indexRequest.version(RestActions.parseVersion(request)); indexRequest.versionType(VersionType.fromString(request.param("version_type"), indexRequest.versionType())); String sOpType = request.param("op_type"); - if (sOpType != null) { - try { - indexRequest.opType(IndexRequest.OpType.fromString(sOpType)); - } catch (IllegalArgumentException eia){ - try { - XContentBuilder builder = channel.newErrorBuilder(); - channel.sendResponse( - new BytesRestResponse(BAD_REQUEST, builder.startObject().field("error", eia.getMessage()).endObject())); - } catch (IOException e1) { - logger.warn("Failed to send response", e1); - return; - } - } - } String waitForActiveShards = request.param("wait_for_active_shards"); if (waitForActiveShards != null) { indexRequest.waitForActiveShards(ActiveShardCount.parseString(waitForActiveShards)); } - client.index(indexRequest, new RestStatusToXContentListener<>(channel, r -> r.getLocation(indexRequest.routing()))); + if (sOpType != null) { + indexRequest.opType(sOpType); + } + + return channel -> + client.index(indexRequest, new RestStatusToXContentListener<>(channel, r -> r.getLocation(indexRequest.routing()))); } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/document/RestMultiGetAction.java b/core/src/main/java/org/elasticsearch/rest/action/document/RestMultiGetAction.java index 995c43059dac7..cba8fafbda39e 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/document/RestMultiGetAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/document/RestMultiGetAction.java @@ -20,19 +20,18 @@ package org.elasticsearch.rest.action.document; import org.elasticsearch.action.get.MultiGetRequest; -import org.elasticsearch.action.get.MultiGetResponse; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; 
import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.action.RestActions; import org.elasticsearch.rest.action.RestToXContentListener; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.GET; import static org.elasticsearch.rest.RestRequest.Method.POST; @@ -40,7 +39,6 @@ public class RestMultiGetAction extends BaseRestHandler { private final boolean allowExplicitIndex; - @Inject public RestMultiGetAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_mget", this); @@ -54,23 +52,27 @@ public RestMultiGetAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws Exception { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { MultiGetRequest multiGetRequest = new MultiGetRequest(); multiGetRequest.refresh(request.paramAsBoolean("refresh", multiGetRequest.refresh())); multiGetRequest.preference(request.param("preference")); multiGetRequest.realtime(request.paramAsBoolean("realtime", multiGetRequest.realtime())); - multiGetRequest.ignoreErrorsOnGeneratedFields(request.paramAsBoolean("ignore_errors_on_generated_fields", false)); - + if (request.param("fields") != null) { + throw new IllegalArgumentException("The parameter [fields] is no longer supported, " + + "please use [stored_fields] to retrieve stored fields or _source filtering if the field is not stored"); + } String[] sFields = null; - String sField = request.param("fields"); + String sField = request.param("stored_fields"); if (sField != null) { sFields = Strings.splitStringByCommaToArray(sField); } FetchSourceContext defaultFetchSource = FetchSourceContext.parseFromRestRequest(request); - multiGetRequest.add(request.param("index"), request.param("type"), sFields, defaultFetchSource, - request.param("routing"), RestActions.getRestContent(request), allowExplicitIndex); + try (XContentParser parser = request.contentOrSourceParamParser()) { + multiGetRequest.add(request.param("index"), request.param("type"), sFields, defaultFetchSource, + request.param("routing"), parser, allowExplicitIndex); + } - client.multiGet(multiGetRequest, new RestToXContentListener(channel)); + return channel -> client.multiGet(multiGetRequest, new RestToXContentListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/document/RestMultiTermVectorsAction.java b/core/src/main/java/org/elasticsearch/rest/action/document/RestMultiTermVectorsAction.java index dab23e8df352c..605c8654f500d 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/document/RestMultiTermVectorsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/document/RestMultiTermVectorsAction.java @@ -24,21 +24,18 @@ import org.elasticsearch.action.termvectors.TermVectorsRequest; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; 
import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.action.RestActions; import org.elasticsearch.rest.action.RestToXContentListener; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.GET; import static org.elasticsearch.rest.RestRequest.Method.POST; public class RestMultiTermVectorsAction extends BaseRestHandler { - - @Inject public RestMultiTermVectorsAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/_mtermvectors", this); @@ -50,15 +47,16 @@ public RestMultiTermVectorsAction(Settings settings, RestController controller) } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws Exception { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { MultiTermVectorsRequest multiTermVectorsRequest = new MultiTermVectorsRequest(); TermVectorsRequest template = new TermVectorsRequest(); template.index(request.param("index")); template.type(request.param("type")); RestTermVectorsAction.readURIParameters(template, request); multiTermVectorsRequest.ids(Strings.commaDelimitedListToStringArray(request.param("ids"))); - multiTermVectorsRequest.add(template, RestActions.getRestContent(request)); + request.withContentOrSourceParamParserOrNull(p -> multiTermVectorsRequest.add(template, p)); - client.multiTermVectors(multiTermVectorsRequest, new RestToXContentListener(channel)); + return channel -> client.multiTermVectors(multiTermVectorsRequest, new RestToXContentListener(channel)); } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/document/RestTermVectorsAction.java b/core/src/main/java/org/elasticsearch/rest/action/document/RestTermVectorsAction.java index b64219215a93b..a649e5eff4774 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/document/RestTermVectorsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/document/RestTermVectorsAction.java @@ -20,21 +20,18 @@ package org.elasticsearch.rest.action.document; import org.elasticsearch.action.termvectors.TermVectorsRequest; -import org.elasticsearch.action.termvectors.TermVectorsResponse; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.VersionType; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestActions; import org.elasticsearch.rest.action.RestToXContentListener; +import java.io.IOException; import java.util.HashSet; import java.util.Set; @@ -46,8 +43,6 @@ * TermVectorsRequest. 
*/ public class RestTermVectorsAction extends BaseRestHandler { - - @Inject public RestTermVectorsAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(GET, "/{index}/{type}/_termvectors", this); @@ -63,17 +58,16 @@ public RestTermVectorsAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws Exception { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { TermVectorsRequest termVectorsRequest = new TermVectorsRequest(request.param("index"), request.param("type"), request.param("id")); - if (RestActions.hasBodyContent(request)) { - try (XContentParser parser = XContentFactory.xContent(RestActions.guessBodyContentType(request)) - .createParser(RestActions.getRestContent(request))){ + if (request.hasContentOrSourceParam()) { + try (XContentParser parser = request.contentOrSourceParamParser()) { TermVectorsRequest.parseRequest(termVectorsRequest, parser); } } readURIParameters(termVectorsRequest, request); - client.termVectors(termVectorsRequest, new RestToXContentListener(channel)); + return channel -> client.termVectors(termVectorsRequest, new RestToXContentListener<>(channel)); } public static void readURIParameters(TermVectorsRequest termVectorsRequest, RestRequest request) { diff --git a/core/src/main/java/org/elasticsearch/rest/action/document/RestUpdateAction.java b/core/src/main/java/org/elasticsearch/rest/action/document/RestUpdateAction.java index d0d7916adfed1..26591c52a5d5e 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/document/RestUpdateAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/document/RestUpdateAction.java @@ -24,30 +24,34 @@ import org.elasticsearch.action.update.UpdateRequest; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.VersionType; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestActions; import org.elasticsearch.rest.action.RestStatusToXContentListener; +import org.elasticsearch.search.fetch.subphase.FetchSourceContext; + +import java.io.IOException; import static org.elasticsearch.rest.RestRequest.Method.POST; /** */ public class RestUpdateAction extends BaseRestHandler { + private static final DeprecationLogger DEPRECATION_LOGGER = + new DeprecationLogger(Loggers.getLogger(RestUpdateAction.class)); - @Inject public RestUpdateAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(POST, "/{index}/{type}/{id}/_update", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws Exception { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { UpdateRequest updateRequest = new UpdateRequest(request.param("index"), request.param("type"), request.param("id")); updateRequest.routing(request.param("routing")); updateRequest.parent(request.param("parent")); @@ -58,21 +62,32 @@ public 
void handleRequest(final RestRequest request, final RestChannel channel, updateRequest.waitForActiveShards(ActiveShardCount.parseString(waitForActiveShards)); } updateRequest.docAsUpsert(request.paramAsBoolean("doc_as_upsert", updateRequest.docAsUpsert())); + FetchSourceContext fetchSourceContext = FetchSourceContext.parseFromRestRequest(request); String sField = request.param("fields"); + if (sField != null && fetchSourceContext != null) { + throw new IllegalArgumentException("[fields] and [_source] cannot be used in the same request"); + } if (sField != null) { + DEPRECATION_LOGGER.deprecated("Deprecated field [fields] used, expected [_source] instead"); String[] sFields = Strings.splitStringByCommaToArray(sField); - if (sFields != null) { - updateRequest.fields(sFields); - } + updateRequest.fields(sFields); + } else if (fetchSourceContext != null) { + updateRequest.fetchSource(fetchSourceContext); } + updateRequest.retryOnConflict(request.paramAsInt("retry_on_conflict", updateRequest.retryOnConflict())); updateRequest.version(RestActions.parseVersion(request)); updateRequest.versionType(VersionType.fromString(request.param("version_type"), updateRequest.versionType())); + if (request.hasParam("timestamp")) { + deprecationLogger.deprecated("The [timestamp] parameter of index requests is deprecated"); + } + if (request.hasParam("ttl")) { + deprecationLogger.deprecated("The [ttl] parameter of index requests is deprecated"); + } - // see if we have it in the body - if (request.hasContent()) { - updateRequest.source(request.content()); + request.applyContentParser(parser -> { + updateRequest.fromXContent(parser); IndexRequest upsertRequest = updateRequest.upsertRequest(); if (upsertRequest != null) { upsertRequest.routing(request.param("routing")); @@ -95,8 +110,10 @@ public void handleRequest(final RestRequest request, final RestChannel channel, doc.version(RestActions.parseVersion(request)); doc.versionType(VersionType.fromString(request.param("version_type"), doc.versionType())); } - } + }); - client.update(updateRequest, new RestStatusToXContentListener<>(channel, r -> r.getLocation(updateRequest.routing()))); + return channel -> + client.update(updateRequest, new RestStatusToXContentListener<>(channel, r -> r.getLocation(updateRequest.routing()))); } + } diff --git a/core/src/main/java/org/elasticsearch/rest/action/ingest/RestDeletePipelineAction.java b/core/src/main/java/org/elasticsearch/rest/action/ingest/RestDeletePipelineAction.java index 593b55b8b754f..b776d2475cecb 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/ingest/RestDeletePipelineAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/ingest/RestDeletePipelineAction.java @@ -21,27 +21,25 @@ import org.elasticsearch.action.ingest.DeletePipelineRequest; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; -public class RestDeletePipelineAction extends BaseRestHandler { +import java.io.IOException; - @Inject +public class RestDeletePipelineAction extends BaseRestHandler { public RestDeletePipelineAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.DELETE, "/_ingest/pipeline/{id}", this); } @Override - 
public void handleRequest(RestRequest restRequest, RestChannel channel, NodeClient client) throws Exception { + public RestChannelConsumer prepareRequest(RestRequest restRequest, NodeClient client) throws IOException { DeletePipelineRequest request = new DeletePipelineRequest(restRequest.param("id")); request.masterNodeTimeout(restRequest.paramAsTime("master_timeout", request.masterNodeTimeout())); request.timeout(restRequest.paramAsTime("timeout", request.timeout())); - client.admin().cluster().deletePipeline(request, new AcknowledgedRestListener<>(channel)); + return channel -> client.admin().cluster().deletePipeline(request, new AcknowledgedRestListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/ingest/RestGetPipelineAction.java b/core/src/main/java/org/elasticsearch/rest/action/ingest/RestGetPipelineAction.java index 308fb146c3020..c8facf7b4cc15 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/ingest/RestGetPipelineAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/ingest/RestGetPipelineAction.java @@ -22,17 +22,15 @@ import org.elasticsearch.action.ingest.GetPipelineRequest; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestStatusToXContentListener; -public class RestGetPipelineAction extends BaseRestHandler { +import java.io.IOException; - @Inject +public class RestGetPipelineAction extends BaseRestHandler { public RestGetPipelineAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.GET, "/_ingest/pipeline", this); @@ -40,9 +38,9 @@ public RestGetPipelineAction(Settings settings, RestController controller) { } @Override - public void handleRequest(RestRequest restRequest, RestChannel channel, NodeClient client) throws Exception { + public RestChannelConsumer prepareRequest(RestRequest restRequest, NodeClient client) throws IOException { GetPipelineRequest request = new GetPipelineRequest(Strings.splitStringByCommaToArray(restRequest.param("id"))); request.masterNodeTimeout(restRequest.paramAsTime("master_timeout", request.masterNodeTimeout())); - client.admin().cluster().getPipeline(request, new RestStatusToXContentListener<>(channel)); + return channel -> client.admin().cluster().getPipeline(request, new RestStatusToXContentListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/ingest/RestPutPipelineAction.java b/core/src/main/java/org/elasticsearch/rest/action/ingest/RestPutPipelineAction.java index b6d34a6c25475..2496c9b4a2487 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/ingest/RestPutPipelineAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/ingest/RestPutPipelineAction.java @@ -21,30 +21,31 @@ import org.elasticsearch.action.ingest.PutPipelineRequest; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.rest.BaseRestHandler; -import 
org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.AcknowledgedRestListener; -import org.elasticsearch.rest.action.RestActions; +import java.io.IOException; -public class RestPutPipelineAction extends BaseRestHandler { - @Inject +public class RestPutPipelineAction extends BaseRestHandler { public RestPutPipelineAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.PUT, "/_ingest/pipeline/{id}", this); } @Override - public void handleRequest(RestRequest restRequest, RestChannel channel, NodeClient client) throws Exception { - PutPipelineRequest request = new PutPipelineRequest(restRequest.param("id"), RestActions.getRestContent(restRequest)); + public RestChannelConsumer prepareRequest(RestRequest restRequest, NodeClient client) throws IOException { + Tuple sourceTuple = restRequest.contentOrSourceParam(); + PutPipelineRequest request = new PutPipelineRequest(restRequest.param("id"), sourceTuple.v2(), sourceTuple.v1()); request.masterNodeTimeout(restRequest.paramAsTime("master_timeout", request.masterNodeTimeout())); request.timeout(restRequest.paramAsTime("timeout", request.timeout())); - client.admin().cluster().putPipeline(request, new AcknowledgedRestListener<>(channel)); + return channel -> client.admin().cluster().putPipeline(request, new AcknowledgedRestListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/ingest/RestSimulatePipelineAction.java b/core/src/main/java/org/elasticsearch/rest/action/ingest/RestSimulatePipelineAction.java index a51bdf5fef2fd..9dbe1808a8c59 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/ingest/RestSimulatePipelineAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/ingest/RestSimulatePipelineAction.java @@ -21,19 +21,18 @@ import org.elasticsearch.action.ingest.SimulatePipelineRequest; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.action.RestActions; -import org.elasticsearch.rest.action.RestStatusToXContentListener; import org.elasticsearch.rest.action.RestToXContentListener; -public class RestSimulatePipelineAction extends BaseRestHandler { +import java.io.IOException; - @Inject +public class RestSimulatePipelineAction extends BaseRestHandler { public RestSimulatePipelineAction(Settings settings, RestController controller) { super(settings); controller.registerHandler(RestRequest.Method.POST, "/_ingest/pipeline/{id}/_simulate", this); @@ -43,10 +42,11 @@ public RestSimulatePipelineAction(Settings settings, RestController controller) } @Override - public void handleRequest(RestRequest restRequest, RestChannel channel, NodeClient client) throws Exception { - SimulatePipelineRequest request = new SimulatePipelineRequest(RestActions.getRestContent(restRequest)); + public RestChannelConsumer prepareRequest(RestRequest restRequest, NodeClient client) throws IOException { + Tuple sourceTuple = restRequest.contentOrSourceParam(); + SimulatePipelineRequest request = 
new SimulatePipelineRequest(sourceTuple.v2(), sourceTuple.v1()); request.setId(restRequest.param("id")); request.setVerbose(restRequest.paramAsBoolean("verbose", false)); - client.admin().cluster().simulatePipeline(request, new RestToXContentListener<>(channel)); + return channel -> client.admin().cluster().simulatePipeline(request, new RestToXContentListener<>(channel)); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/search/RestClearScrollAction.java b/core/src/main/java/org/elasticsearch/rest/action/search/RestClearScrollAction.java index 4c8f84a222352..066711667c5f1 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/search/RestClearScrollAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/search/RestClearScrollAction.java @@ -20,20 +20,13 @@ package org.elasticsearch.rest.action.search; import org.elasticsearch.action.search.ClearScrollRequest; -import org.elasticsearch.action.search.ClearScrollResponse; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.xcontent.XContentHelper; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.action.RestActions; import org.elasticsearch.rest.action.RestStatusToXContentListener; import java.io.IOException; @@ -42,8 +35,6 @@ import static org.elasticsearch.rest.RestRequest.Method.DELETE; public class RestClearScrollAction extends BaseRestHandler { - - @Inject public RestClearScrollAction(Settings settings, RestController controller) { super(settings); @@ -52,58 +43,40 @@ public RestClearScrollAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { String scrollIds = request.param("scroll_id"); ClearScrollRequest clearRequest = new ClearScrollRequest(); clearRequest.setScrollIds(Arrays.asList(splitScrollIds(scrollIds))); - if (RestActions.hasBodyContent(request)) { - XContentType type = RestActions.guessBodyContentType(request); - if (type == null) { - scrollIds = RestActions.getRestContent(request).utf8ToString(); - clearRequest.setScrollIds(Arrays.asList(splitScrollIds(scrollIds))); - } else { - // NOTE: if rest request with xcontent body has request parameters, these parameters does not override xcontent value - clearRequest.setScrollIds(null); - buildFromContent(RestActions.getRestContent(request), clearRequest); - } - } + request.withContentOrSourceParamParserOrNullLenient((xContentParser -> { + if (xContentParser == null) { + if (request.hasContent()) { + // TODO: why do we accept this plain text value? maybe we can just use the scroll params? 
+ BytesReference body = request.content(); + String bodyScrollIds = body.utf8ToString(); + clearRequest.setScrollIds(Arrays.asList(splitScrollIds(bodyScrollIds))); + } + } else { + // NOTE: if rest request with xcontent body has request parameters, values parsed from request body have the precedence + try { + clearRequest.fromXContent(xContentParser); + } catch (IOException e) { + throw new IllegalArgumentException("Failed to parse request body", e); + } + } + })); + + return channel -> client.clearScroll(clearRequest, new RestStatusToXContentListener<>(channel)); + } - client.clearScroll(clearRequest, new RestStatusToXContentListener(channel)); + @Override + public boolean supportsPlainText() { + return true; } - public static String[] splitScrollIds(String scrollIds) { + private static String[] splitScrollIds(String scrollIds) { if (scrollIds == null) { return Strings.EMPTY_ARRAY; } return Strings.splitStringByCommaToArray(scrollIds); } - - public static void buildFromContent(BytesReference content, ClearScrollRequest clearScrollRequest) { - try (XContentParser parser = XContentHelper.createParser(content)) { - if (parser.nextToken() != XContentParser.Token.START_OBJECT) { - throw new IllegalArgumentException("Malformed content, must start with an object"); - } else { - XContentParser.Token token; - String currentFieldName = null; - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if ("scroll_id".equals(currentFieldName) && token == XContentParser.Token.START_ARRAY) { - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - if (token.isValue() == false) { - throw new IllegalArgumentException("scroll_id array element should only contain scroll_id"); - } - clearScrollRequest.addScrollId(parser.text()); - } - } else { - throw new IllegalArgumentException("Unknown parameter [" + currentFieldName - + "] in request body or parameter is of the wrong type[" + token + "] "); - } - } - } - } catch (IOException e) { - throw new IllegalArgumentException("Failed to parse request body", e); - } - } - } diff --git a/core/src/main/java/org/elasticsearch/rest/action/search/RestExplainAction.java b/core/src/main/java/org/elasticsearch/rest/action/search/RestExplainAction.java index 7088b96c6de62..7339718c28b0b 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/search/RestExplainAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/search/RestExplainAction.java @@ -24,16 +24,12 @@ import org.elasticsearch.action.explain.ExplainResponse; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.get.GetResult; import org.elasticsearch.index.query.QueryBuilder; -import org.elasticsearch.indices.query.IndicesQueriesRegistry; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestResponse; @@ -52,43 +48,43 @@ * Rest action for computing a score explanation for specific documents. 
*/ public class RestExplainAction extends BaseRestHandler { - - private final IndicesQueriesRegistry indicesQueriesRegistry; - - @Inject - public RestExplainAction(Settings settings, RestController controller, IndicesQueriesRegistry indicesQueriesRegistry) { + public RestExplainAction(Settings settings, RestController controller) { super(settings); - this.indicesQueriesRegistry = indicesQueriesRegistry; controller.registerHandler(GET, "/{index}/{type}/{id}/_explain", this); controller.registerHandler(POST, "/{index}/{type}/{id}/_explain", this); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { final ExplainRequest explainRequest = new ExplainRequest(request.param("index"), request.param("type"), request.param("id")); explainRequest.parent(request.param("parent")); explainRequest.routing(request.param("routing")); explainRequest.preference(request.param("preference")); String queryString = request.param("q"); - if (RestActions.hasBodyContent(request)) { - BytesReference restContent = RestActions.getRestContent(request); - explainRequest.query(RestActions.getQueryContent(restContent, indicesQueriesRegistry, parseFieldMatcher)); - } else if (queryString != null) { - QueryBuilder query = RestActions.urlParamsToQueryBuilder(request); - explainRequest.query(query); - } + request.withContentOrSourceParamParserOrNull(parser -> { + if (parser != null) { + explainRequest.query(RestActions.getQueryContent(parser)); + } else if (queryString != null) { + QueryBuilder query = RestActions.urlParamsToQueryBuilder(request); + explainRequest.query(query); + } + }); - String sField = request.param("fields"); + if (request.param("fields") != null) { + throw new IllegalArgumentException("The parameter [fields] is no longer supported, " + + "please use [stored_fields] to retrieve stored fields"); + } + String sField = request.param("stored_fields"); if (sField != null) { String[] sFields = Strings.splitStringByCommaToArray(sField); if (sFields != null) { - explainRequest.fields(sFields); + explainRequest.storedFields(sFields); } } explainRequest.fetchSourceContext(FetchSourceContext.parseFromRestRequest(request)); - client.explain(explainRequest, new RestBuilderListener(channel) { + return channel -> client.explain(explainRequest, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(ExplainResponse response, XContentBuilder builder) throws Exception { builder.startObject(); diff --git a/core/src/main/java/org/elasticsearch/rest/action/search/RestMultiSearchAction.java b/core/src/main/java/org/elasticsearch/rest/action/search/RestMultiSearchAction.java index ae320bccac234..04e0d255641dd 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/search/RestMultiSearchAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/search/RestMultiSearchAction.java @@ -19,33 +19,32 @@ package org.elasticsearch.rest.action.search; -import java.io.IOException; -import java.util.Map; -import java.util.function.BiConsumer; - import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.action.search.MultiSearchRequest; import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.Strings; import 
org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContent; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; -import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.action.RestActions; +import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestToXContentListener; -import org.elasticsearch.search.SearchRequestParsers; import org.elasticsearch.search.builder.SearchSourceBuilder; +import java.io.IOException; +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.function.BiConsumer; + import static org.elasticsearch.common.xcontent.support.XContentMapValues.lenientNodeBooleanValue; import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeStringArrayValue; import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeStringValue; @@ -56,13 +55,12 @@ */ public class RestMultiSearchAction extends BaseRestHandler { + private static final Set RESPONSE_PARAMS = Collections.singleton(RestSearchAction.TYPED_KEYS_PARAM); + private final boolean allowExplicitIndex; - private final SearchRequestParsers searchRequestParsers; - @Inject - public RestMultiSearchAction(Settings settings, RestController controller, SearchRequestParsers searchRequestParsers) { + public RestMultiSearchAction(Settings settings, RestController controller) { super(settings); - this.searchRequestParsers = searchRequestParsers; controller.registerHandler(GET, "/_msearch", this); controller.registerHandler(POST, "/_msearch", this); @@ -75,52 +73,56 @@ public RestMultiSearchAction(Settings settings, RestController controller, Searc } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws Exception { - MultiSearchRequest multiSearchRequest = parseRequest(request, allowExplicitIndex, searchRequestParsers, parseFieldMatcher); - client.multiSearch(multiSearchRequest, new RestToXContentListener<>(channel)); + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { + MultiSearchRequest multiSearchRequest = parseRequest(request, allowExplicitIndex); + return channel -> client.multiSearch(multiSearchRequest, new RestToXContentListener<>(channel)); } /** * Parses a {@link RestRequest} body and returns a {@link MultiSearchRequest} */ - public static MultiSearchRequest parseRequest(RestRequest restRequest, boolean allowExplicitIndex, - SearchRequestParsers searchRequestParsers, - ParseFieldMatcher parseFieldMatcher) throws IOException { - + public static MultiSearchRequest parseRequest(RestRequest restRequest, boolean allowExplicitIndex) throws IOException { MultiSearchRequest multiRequest = new MultiSearchRequest(); if (restRequest.hasParam("max_concurrent_searches")) { multiRequest.maxConcurrentSearchRequests(restRequest.paramAsInt("max_concurrent_searches", 0)); } - parseMultiLineRequest(restRequest, multiRequest.indicesOptions(), allowExplicitIndex, (searchRequest, bytes) -> { - 
try (XContentParser requestParser = XContentFactory.xContent(bytes).createParser(bytes)) { - final QueryParseContext queryParseContext = new QueryParseContext(searchRequestParsers.queryParsers, - requestParser, parseFieldMatcher); - searchRequest.source(SearchSourceBuilder.fromXContent(queryParseContext, - searchRequestParsers.aggParsers, searchRequestParsers.suggesters)); + int preFilterShardSize = restRequest.paramAsInt("pre_filter_shard_size", SearchRequest.DEFAULT_PRE_FILTER_SHARD_SIZE); + + + parseMultiLineRequest(restRequest, multiRequest.indicesOptions(), allowExplicitIndex, (searchRequest, parser) -> { + try { + final QueryParseContext queryParseContext = new QueryParseContext(parser); + searchRequest.source(SearchSourceBuilder.fromXContent(queryParseContext)); multiRequest.add(searchRequest); } catch (IOException e) { throw new ElasticsearchParseException("Exception when parsing search request", e); } }); - + List requests = multiRequest.requests(); + preFilterShardSize = Math.max(1, preFilterShardSize / (requests.size()+1)); + for (SearchRequest request : requests) { + // preserve if it's set on the request + request.setPreFilterShardSize(Math.min(preFilterShardSize, request.getPreFilterShardSize())); + } return multiRequest; } /** - * Parses a multi-line {@link RestRequest} body, instanciating a {@link SearchRequest} for each line and applying the given consumer. + * Parses a multi-line {@link RestRequest} body, instantiating a {@link SearchRequest} for each line and applying the given consumer. */ public static void parseMultiLineRequest(RestRequest request, IndicesOptions indicesOptions, boolean allowExplicitIndex, - BiConsumer consumer) throws IOException { + BiConsumer consumer) throws IOException { String[] indices = Strings.splitStringByCommaToArray(request.param("index")); String[] types = Strings.splitStringByCommaToArray(request.param("type")); String searchType = request.param("search_type"); String routing = request.param("routing"); - final BytesReference data = RestActions.getRestContent(request); + final Tuple sourceTuple = request.contentOrSourceParam(); + final XContent xContent = sourceTuple.v1().xContent(); + final BytesReference data = sourceTuple.v2(); - XContent xContent = XContentFactory.xContent(data); int from = 0; int length = data.length(); byte marker = xContent.streamSeparator(); @@ -157,7 +159,7 @@ public static void parseMultiLineRequest(RestRequest request, IndicesOptions ind // now parse the action if (nextMarker - from > 0) { - try (XContentParser parser = xContent.createParser(data.slice(from, nextMarker - from))) { + try (XContentParser parser = xContent.createParser(request.getXContentRegistry(), data.slice(from, nextMarker - from))) { Map source = parser.map(); for (Map.Entry entry : source.entrySet()) { Object value = entry.getValue(); @@ -171,7 +173,7 @@ public static void parseMultiLineRequest(RestRequest request, IndicesOptions ind } else if ("search_type".equals(entry.getKey()) || "searchType".equals(entry.getKey())) { searchRequest.searchType(nodeStringValue(value, null)); } else if ("request_cache".equals(entry.getKey()) || "requestCache".equals(entry.getKey())) { - searchRequest.requestCache(lenientNodeBooleanValue(value)); + searchRequest.requestCache(lenientNodeBooleanValue(value, entry.getKey())); } else if ("preference".equals(entry.getKey())) { searchRequest.preference(nodeStringValue(value, null)); } else if ("routing".equals(entry.getKey())) { @@ -190,12 +192,20 @@ public static void parseMultiLineRequest(RestRequest request, 
IndicesOptions ind if (nextMarker == -1) { break; } - consumer.accept(searchRequest, data.slice(from, nextMarker - from)); + BytesReference bytes = data.slice(from, nextMarker - from); + try (XContentParser parser = xContent.createParser(request.getXContentRegistry(), bytes)) { + consumer.accept(searchRequest, parser); + } // move pointers from = nextMarker + 1; } } + @Override + public boolean supportsContentStream() { + return true; + } + private static int findNextMarker(byte marker, int from, BytesReference data, int length) { for (int i = from; i < length; i++) { if (data.get(i) == marker) { @@ -204,4 +214,9 @@ private static int findNextMarker(byte marker, int from, BytesReference data, in } return -1; } + + @Override + protected Set responseParams() { + return RESPONSE_PARAMS; + } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/search/RestSearchAction.java b/core/src/main/java/org/elasticsearch/rest/action/search/RestSearchAction.java index 215165e6bbfc1..edbc6a226abd7 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/search/RestSearchAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/search/RestSearchAction.java @@ -20,26 +20,19 @@ package org.elasticsearch.rest.action.search; import org.elasticsearch.action.search.SearchRequest; -import org.elasticsearch.action.search.SearchType; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.RestActions; import org.elasticsearch.rest.action.RestStatusToXContentListener; import org.elasticsearch.search.Scroll; -import org.elasticsearch.search.SearchRequestParsers; import org.elasticsearch.search.builder.SearchSourceBuilder; import org.elasticsearch.search.fetch.StoredFieldsContext; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; @@ -50,6 +43,8 @@ import java.io.IOException; import java.util.Arrays; +import java.util.Collections; +import java.util.Set; import static org.elasticsearch.common.unit.TimeValue.parseTimeValue; import static org.elasticsearch.rest.RestRequest.Method.GET; @@ -61,12 +56,11 @@ */ public class RestSearchAction extends BaseRestHandler { - private final SearchRequestParsers searchRequestParsers; + public static final String TYPED_KEYS_PARAM = "typed_keys"; + private static final Set RESPONSE_PARAMS = Collections.singleton(TYPED_KEYS_PARAM); - @Inject - public RestSearchAction(Settings settings, RestController controller, SearchRequestParsers searchRequestParsers) { + public RestSearchAction(Settings settings, RestController controller) { super(settings); - this.searchRequestParsers = searchRequestParsers; controller.registerHandler(GET, "/_search", this); controller.registerHandler(POST, "/_search", this); controller.registerHandler(GET, "/{index}/_search", this); @@ -76,42 +70,50 @@ public RestSearchAction(Settings settings, 
RestController controller, SearchRequ } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws IOException { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { SearchRequest searchRequest = new SearchRequest(); - BytesReference restContent = RestActions.hasBodyContent(request) ? RestActions.getRestContent(request) : null; - parseSearchRequest(searchRequest, request, searchRequestParsers, parseFieldMatcher, restContent); - client.search(searchRequest, new RestStatusToXContentListener<>(channel)); + request.withContentOrSourceParamParserOrNull(parser -> + parseSearchRequest(searchRequest, request, parser)); + + return channel -> client.search(searchRequest, new RestStatusToXContentListener<>(channel)); } /** - * Parses the rest request on top of the SearchRequest, preserving values - * that are not overridden by the rest request. + * Parses the rest request on top of the SearchRequest, preserving values that are not overridden by the rest request. * - * @param restContent - * override body content to use for the request. If null body - * content is read from the request using - * RestAction.hasBodyContent. + * @param requestContentParser body of the request to read. This method does not attempt to read the body from the {@code request} + * parameter */ - public static void parseSearchRequest(SearchRequest searchRequest, RestRequest request, SearchRequestParsers searchRequestParsers, - ParseFieldMatcher parseFieldMatcher, BytesReference restContent) throws IOException { + public static void parseSearchRequest(SearchRequest searchRequest, RestRequest request, + XContentParser requestContentParser) throws IOException { if (searchRequest.source() == null) { searchRequest.source(new SearchSourceBuilder()); } searchRequest.indices(Strings.splitStringByCommaToArray(request.param("index"))); - if (restContent != null) { - try (XContentParser parser = XContentFactory.xContent(restContent).createParser(restContent)) { - QueryParseContext context = new QueryParseContext(searchRequestParsers.queryParsers, parser, parseFieldMatcher); - searchRequest.source().parseXContent(context, searchRequestParsers.aggParsers, searchRequestParsers.suggesters); - } + if (requestContentParser != null) { + QueryParseContext context = new QueryParseContext(requestContentParser); + searchRequest.source().parseXContent(context); + } + + final int batchedReduceSize = request.paramAsInt("batched_reduce_size", searchRequest.getBatchedReduceSize()); + searchRequest.setBatchedReduceSize(batchedReduceSize); + searchRequest.setPreFilterShardSize(request.paramAsInt("pre_filter_shard_size", searchRequest.getPreFilterShardSize())); + + if (request.hasParam("max_concurrent_shard_requests")) { + // only set if we have the parameter since we auto adjust the max concurrency on the coordinator + // based on the number of nodes in the cluster + final int maxConcurrentShardRequests = request.paramAsInt("max_concurrent_shard_requests", + searchRequest.getMaxConcurrentShardRequests()); + searchRequest.setMaxConcurrentShardRequests(maxConcurrentShardRequests); } // do not allow 'query_and_fetch' or 'dfs_query_and_fetch' search types // from the REST layer. these modes are an internal optimization and should // not be specified explicitly by the user. 
String searchType = request.param("search_type"); - if (SearchType.fromString(searchType, parseFieldMatcher).equals(SearchType.QUERY_AND_FETCH) || - SearchType.fromString(searchType, parseFieldMatcher).equals(SearchType.DFS_QUERY_AND_FETCH)) { + if ("query_and_fetch".equals(searchType) || + "dfs_query_and_fetch".equals(searchType)) { throw new IllegalArgumentException("Unsupported search type [" + searchType + "]"); } else { searchRequest.searchType(searchType); @@ -237,4 +239,9 @@ private static void parseSearchSource(final SearchSourceBuilder searchSourceBuil .suggestMode(SuggestMode.resolve(suggestMode)))); } } + + @Override + protected Set responseParams() { + return RESPONSE_PARAMS; + } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/search/RestSearchScrollAction.java b/core/src/main/java/org/elasticsearch/rest/action/search/RestSearchScrollAction.java index 9b9ddd3a93da7..8e535bfad7977 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/search/RestSearchScrollAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/search/RestSearchScrollAction.java @@ -19,22 +19,13 @@ package org.elasticsearch.rest.action.search; -import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.action.search.SearchScrollRequest; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.xcontent.XContentFactory; -import org.elasticsearch.common.xcontent.XContentHelper; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.action.RestActions; import org.elasticsearch.rest.action.RestStatusToXContentListener; import org.elasticsearch.search.Scroll; @@ -45,8 +36,6 @@ import static org.elasticsearch.rest.RestRequest.Method.POST; public class RestSearchScrollAction extends BaseRestHandler { - - @Inject public RestSearchScrollAction(Settings settings, RestController controller) { super(settings); @@ -57,7 +46,7 @@ public RestSearchScrollAction(Settings settings, RestController controller) { } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { String scrollId = request.param("scroll_id"); SearchScrollRequest searchScrollRequest = new SearchScrollRequest(); searchScrollRequest.scrollId(scrollId); @@ -65,44 +54,30 @@ public void handleRequest(final RestRequest request, final RestChannel channel, if (scroll != null) { searchScrollRequest.scroll(new Scroll(parseTimeValue(scroll, null, "scroll"))); } - - if (RestActions.hasBodyContent(request)) { - XContentType type = XContentFactory.xContentType(RestActions.getRestContent(request)); - if (type == null) { - if (scrollId == null) { - scrollId = RestActions.getRestContent(request).utf8ToString(); - searchScrollRequest.scrollId(scrollId); + request.withContentOrSourceParamParserOrNullLenient(xContentParser -> { + if (xContentParser == null) { + if (request.hasContent()) { + // TODO: why do we accept this plain text value? 
maybe we can just use the scroll params? + BytesReference body = request.getContentOrSourceParamOnly(); + if (scrollId == null) { + String bodyScrollId = body.utf8ToString(); + searchScrollRequest.scrollId(bodyScrollId); + } } } else { - // NOTE: if rest request with xcontent body has request parameters, these parameters override xcontent values - buildFromContent(RestActions.getRestContent(request), searchScrollRequest); + // NOTE: if rest request with xcontent body has request parameters, values parsed from request body have the precedence + try { + searchScrollRequest.fromXContent(xContentParser); + } catch (IOException e) { + throw new IllegalArgumentException("Failed to parse request body", e); + } } - } - client.searchScroll(searchScrollRequest, new RestStatusToXContentListener(channel)); + }); + return channel -> client.searchScroll(searchScrollRequest, new RestStatusToXContentListener<>(channel)); } - public static void buildFromContent(BytesReference content, SearchScrollRequest searchScrollRequest) { - try (XContentParser parser = XContentHelper.createParser(content)) { - if (parser.nextToken() != XContentParser.Token.START_OBJECT) { - throw new IllegalArgumentException("Malformed content, must start with an object"); - } else { - XContentParser.Token token; - String currentFieldName = null; - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if ("scroll_id".equals(currentFieldName) && token == XContentParser.Token.VALUE_STRING) { - searchScrollRequest.scrollId(parser.text()); - } else if ("scroll".equals(currentFieldName) && token == XContentParser.Token.VALUE_STRING) { - searchScrollRequest.scroll(new Scroll(TimeValue.parseTimeValue(parser.text(), null, "scroll"))); - } else { - throw new IllegalArgumentException("Unknown parameter [" + currentFieldName - + "] in request body or parameter is of the wrong type[" + token + "] "); - } - } - } - } catch (IOException e) { - throw new IllegalArgumentException("Failed to parse request body", e); - } + @Override + public boolean supportsPlainText() { + return true; } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/search/RestSuggestAction.java b/core/src/main/java/org/elasticsearch/rest/action/search/RestSuggestAction.java index b55a1590f3d84..e414a67ab6440 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/search/RestSuggestAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/search/RestSuggestAction.java @@ -19,77 +19,62 @@ package org.elasticsearch.rest.action.search; -import java.io.IOException; - import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import 
org.elasticsearch.rest.RestResponse; import org.elasticsearch.rest.RestStatus; -import org.elasticsearch.rest.action.RestActions; import org.elasticsearch.rest.action.RestBuilderListener; -import org.elasticsearch.search.SearchRequestParsers; import org.elasticsearch.search.builder.SearchSourceBuilder; import org.elasticsearch.search.suggest.Suggest; import org.elasticsearch.search.suggest.SuggestBuilder; +import java.io.IOException; + import static org.elasticsearch.rest.RestRequest.Method.GET; import static org.elasticsearch.rest.RestRequest.Method.POST; import static org.elasticsearch.rest.action.RestActions.buildBroadcastShardsHeader; public class RestSuggestAction extends BaseRestHandler { - - private final SearchRequestParsers searchRequestParsers; - - @Inject - public RestSuggestAction(Settings settings, RestController controller, - SearchRequestParsers searchRequestParsers) { + public RestSuggestAction(Settings settings, RestController controller) { super(settings); - this.searchRequestParsers = searchRequestParsers; - controller.registerHandler(POST, "/_suggest", this); - controller.registerHandler(GET, "/_suggest", this); - controller.registerHandler(POST, "/{index}/_suggest", this); - controller.registerHandler(GET, "/{index}/_suggest", this); + controller.registerAsDeprecatedHandler(POST, "/_suggest", this, + "[POST /_suggest] is deprecated! Use [POST /_search] instead.", deprecationLogger); + controller.registerAsDeprecatedHandler(GET, "/_suggest", this, + "[GET /_suggest] is deprecated! Use [GET /_search] instead.", deprecationLogger); + controller.registerAsDeprecatedHandler(POST, "/{index}/_suggest", this, + "[POST /{index}/_suggest] is deprecated! Use [POST /{index}/_search] instead.", deprecationLogger); + controller.registerAsDeprecatedHandler(GET, "/{index}/_suggest", this, + "[GET /{index}/_suggest] is deprecated! 
Use [GET /{index}/_search] instead.", deprecationLogger); } @Override - public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws IOException { + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { final SearchRequest searchRequest = new SearchRequest( Strings.splitStringByCommaToArray(request.param("index")), new SearchSourceBuilder()); searchRequest.indicesOptions(IndicesOptions.fromRequest(request, searchRequest.indicesOptions())); - if (RestActions.hasBodyContent(request)) { - final BytesReference sourceBytes = RestActions.getRestContent(request); - try (XContentParser parser = XContentFactory.xContent(sourceBytes).createParser(sourceBytes)) { - final QueryParseContext context = new QueryParseContext(searchRequestParsers.queryParsers, parser, parseFieldMatcher); - searchRequest.source().suggest(SuggestBuilder.fromXContent(context, searchRequestParsers.suggesters)); - } - } else { - throw new IllegalArgumentException("no content or source provided to execute suggestion"); + try (XContentParser parser = request.contentOrSourceParamParser()) { + searchRequest.source().suggest(SuggestBuilder.fromXContent(parser)); } searchRequest.routing(request.param("routing")); searchRequest.preference(request.param("preference")); - client.search(searchRequest, new RestBuilderListener(channel) { + return channel -> client.search(searchRequest, new RestBuilderListener(channel) { @Override public RestResponse buildResponse(SearchResponse response, XContentBuilder builder) throws Exception { RestStatus restStatus = RestStatus.status(response.getSuccessfulShards(), response.getTotalShards(), response.getShardFailures()); builder.startObject(); buildBroadcastShardsHeader(builder, request, response.getTotalShards(), - response.getSuccessfulShards(), response.getFailedShards(), response.getShardFailures()); + response.getSuccessfulShards(), response.getFailedShards(), response.getSkippedShards(), response.getShardFailures()); Suggest suggest = response.getSuggest(); if (suggest != null) { suggest.toInnerXContent(builder, request); diff --git a/core/src/main/java/org/elasticsearch/script/CompiledScript.java b/core/src/main/java/org/elasticsearch/script/CompiledScript.java index ec2ad4192a2fc..818971f0f8990 100644 --- a/core/src/main/java/org/elasticsearch/script/CompiledScript.java +++ b/core/src/main/java/org/elasticsearch/script/CompiledScript.java @@ -24,7 +24,7 @@ */ public class CompiledScript { - private final ScriptService.ScriptType type; + private final ScriptType type; private final String name; private final String lang; private final Object compiled; @@ -36,7 +36,7 @@ public class CompiledScript { * @param lang The language of the script to be executed. * @param compiled The compiled script Object that is executable. */ - public CompiledScript(ScriptService.ScriptType type, String name, String lang, Object compiled) { + public CompiledScript(ScriptType type, String name, String lang, Object compiled) { this.type = type; this.name = name; this.lang = lang; @@ -47,7 +47,7 @@ public CompiledScript(ScriptService.ScriptType type, String name, String lang, O * Method to get the type of language. * @return The type of language the script was compiled in. 
*/ - public ScriptService.ScriptType type() { + public ScriptType type() { return type; } diff --git a/core/src/main/java/org/elasticsearch/script/LeafSearchScript.java b/core/src/main/java/org/elasticsearch/script/LeafSearchScript.java index 0bf9e0d50e072..762168d3c9086 100644 --- a/core/src/main/java/org/elasticsearch/script/LeafSearchScript.java +++ b/core/src/main/java/org/elasticsearch/script/LeafSearchScript.java @@ -19,18 +19,30 @@ package org.elasticsearch.script; +import org.apache.lucene.search.Scorer; import org.elasticsearch.common.lucene.ScorerAware; import java.util.Map; /** * A per-segment {@link SearchScript}. + * + * This is effectively a functional interface, requiring at least implementing {@link #runAsDouble()}. */ public interface LeafSearchScript extends ScorerAware, ExecutableScript { - void setDocument(int doc); + /** + * Set the document this script will process next. + */ + default void setDocument(int doc) {} + + @Override + default void setScorer(Scorer scorer) {} - void setSource(Map source); + /** + * Set the source for the current document. + */ + default void setSource(Map source) {} /** * Sets per-document aggregation {@code _value}. @@ -44,8 +56,23 @@ default void setNextAggregationValue(Object value) { setNextVar("_value", value); } - long runAsLong(); + @Override + default void setNextVar(String field, Object value) {} - double runAsDouble(); + /** + * Return the result as a long. This is used by aggregation scripts over long fields. + */ + default long runAsLong() { + throw new UnsupportedOperationException("runAsLong is not implemented"); + } + + @Override + default Object run() { + return runAsDouble(); + } + /** + * Return the result as a double. This is the main use case of search script, used for document scoring. + */ + double runAsDouble(); } diff --git a/core/src/main/java/org/elasticsearch/script/NativeScriptEngineService.java b/core/src/main/java/org/elasticsearch/script/NativeScriptEngineService.java index 191c2b4bcf7a4..a8031d85ac696 100644 --- a/core/src/main/java/org/elasticsearch/script/NativeScriptEngineService.java +++ b/core/src/main/java/org/elasticsearch/script/NativeScriptEngineService.java @@ -19,9 +19,12 @@ package org.elasticsearch.script; +import org.apache.logging.log4j.Logger; import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.component.AbstractComponent; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.search.lookup.SearchLookup; @@ -41,6 +44,11 @@ public class NativeScriptEngineService extends AbstractComponent implements Scri public NativeScriptEngineService(Settings settings, Map scripts) { super(settings); + if (scripts.isEmpty() == false) { + Logger logger = Loggers.getLogger(ScriptModule.class); + DeprecationLogger deprecationLogger = new DeprecationLogger(logger); + deprecationLogger.deprecated("Native scripts are deprecated. 
Use a custom ScriptEngine to write scripts in java."); + } this.scripts = unmodifiableMap(scripts); } @@ -72,10 +80,10 @@ public ExecutableScript executable(CompiledScript compiledScript, @Nullable Map< @Override public SearchScript search(CompiledScript compiledScript, final SearchLookup lookup, @Nullable final Map vars) { final NativeScriptFactory scriptFactory = (NativeScriptFactory) compiledScript.compiled(); + final AbstractSearchScript script = (AbstractSearchScript) scriptFactory.newScript(vars); return new SearchScript() { @Override public LeafSearchScript getLeafSearchScript(LeafReaderContext context) throws IOException { - AbstractSearchScript script = (AbstractSearchScript) scriptFactory.newScript(vars); script.setLookup(lookup.getLeafSearchLookup(context)); return script; } diff --git a/core/src/main/java/org/elasticsearch/script/NativeScriptFactory.java b/core/src/main/java/org/elasticsearch/script/NativeScriptFactory.java index 7fca2501903be..e889c1fe5663f 100644 --- a/core/src/main/java/org/elasticsearch/script/NativeScriptFactory.java +++ b/core/src/main/java/org/elasticsearch/script/NativeScriptFactory.java @@ -31,7 +31,9 @@ * @see AbstractSearchScript * @see AbstractLongSearchScript * @see AbstractDoubleSearchScript + * @deprecated Create a {@link ScriptEngineService} instead of using native scripts */ +@Deprecated public interface NativeScriptFactory { /** diff --git a/core/src/main/java/org/elasticsearch/script/Script.java b/core/src/main/java/org/elasticsearch/script/Script.java index 94abb43bc066c..4cce56186ed1f 100644 --- a/core/src/main/java/org/elasticsearch/script/Script.java +++ b/core/src/main/java/org/elasticsearch/script/Script.java @@ -19,273 +19,759 @@ package org.elasticsearch.script; -import org.elasticsearch.ElasticsearchParseException; -import org.elasticsearch.common.Nullable; +import org.apache.logging.log4j.Logger; +import org.elasticsearch.Version; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.ESLoggerFactory; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser.ValueType; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParser.Token; import org.elasticsearch.common.xcontent.XContentType; -import org.elasticsearch.script.ScriptService.ScriptType; import java.io.IOException; +import java.io.UncheckedIOException; +import java.util.Collections; +import java.util.HashMap; import java.util.Map; import java.util.Objects; /** - * Script holds all the parameters necessary to compile or find in cache and then execute a script. + * {@link Script} represents used-defined input that can be used to + * compile and execute a script from the {@link ScriptService} + * based on the {@link ScriptType}. + * + * There are three types of scripts specified by {@link ScriptType}. + * + * The following describes the expected parameters for each type of script: + * + *
+ * • {@link ScriptType#INLINE}
+ *     • {@link Script#lang} - specifies the language, defaults to {@link Script#DEFAULT_SCRIPT_LANG}
+ *     • {@link Script#idOrCode} - specifies the code to be compiled, must not be {@code null}
+ *     • {@link Script#options} - specifies the compiler options for this script; must not be {@code null},
+ *                                use an empty {@link Map} to specify no options
+ *     • {@link Script#params} - {@link Map} of user-defined parameters; must not be {@code null},
+ *                               use an empty {@link Map} to specify no params
+ *
+ * • {@link ScriptType#STORED}
+ *     • {@link Script#lang} - the language will be specified when storing the script, so this should
+ *                             be {@code null}; however, this can be specified to look up a stored
+ *                             script as part of the deprecated API
+ *     • {@link Script#idOrCode} - specifies the id of the stored script to be looked up, must not be {@code null}
+ *     • {@link Script#options} - compiler options will be specified when a stored script is stored,
+ *                                so they have no meaning here and must be {@code null}
+ *     • {@link Script#params} - {@link Map} of user-defined parameters; must not be {@code null},
+ *                               use an empty {@link Map} to specify no params
+ *
+ * • {@link ScriptType#FILE}
+ *     • {@link Script#lang} - specifies the language for look up, defaults to {@link Script#DEFAULT_SCRIPT_LANG}
+ *     • {@link Script#idOrCode} - specifies the id of the file script to be looked up, must not be {@code null}
+ *     • {@link Script#options} - compiler options will be specified when a file script is loaded,
+ *                                so they have no meaning here and must be {@code null}
+ *     • {@link Script#params} - {@link Map} of user-defined parameters; must not be {@code null},
+ *                               use an empty {@link Map} to specify no params
+ *
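Read together, these rules translate into constructor calls along the following lines. This is a minimal sketch rather than part of the patch: it assumes the four-argument constructor introduced later in this change, and the script source, ids, and parameter values are purely illustrative.

```java
// Minimal sketch (not part of this patch): constructing each ScriptType according to
// the parameter rules above. Script source, ids, and params are illustrative values.
import org.elasticsearch.script.Script;
import org.elasticsearch.script.ScriptType;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class ScriptConstructionSketch {
    public static void main(String[] args) {
        Map<String, Object> params = new HashMap<>();
        params.put("multiplier", 100.0);

        // INLINE: lang and code are required; this constructor supplies an empty options map.
        Script inline = new Script(ScriptType.INLINE, Script.DEFAULT_SCRIPT_LANG,
                "return Math.log(doc.popularity) * params.multiplier", params);

        // STORED: only the id is required; lang may be null and options must be null.
        Script stored = new Script(ScriptType.STORED, null, "my-stored-script", Collections.emptyMap());

        // FILE: lang and the file id are required; options must be null.
        Script file = new Script(ScriptType.FILE, Script.DEFAULT_SCRIPT_LANG, "my-file-script", Collections.emptyMap());

        System.out.println(inline + " " + stored + " " + file);
    }
}
```

The four-argument constructor picks the options map appropriate to the type (empty for inline, null otherwise), which is why no options appear in the sketch.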
    */ -public final class Script implements ToXContent, Writeable { +public final class Script implements ToXContentObject, Writeable { + + public static final Version V_5_1_0_UNRELEASED = Version.fromId(5010099); - public static final ScriptType DEFAULT_TYPE = ScriptType.INLINE; + /** + * Standard logger necessary for allocation of the deprecation logger. + */ + private static final Logger LOGGER = ESLoggerFactory.getLogger(ScriptMetaData.class); + + /** + * Deprecation logger necessary for namespace changes related to stored scripts. + */ + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(LOGGER); + + /** + * The name of the of the default scripting language. + */ public static final String DEFAULT_SCRIPT_LANG = "painless"; - private String script; - private ScriptType type; - @Nullable private String lang; - @Nullable private Map params; - @Nullable private XContentType contentType; + /** + * The name of the default template language. + */ + public static final String DEFAULT_TEMPLATE_LANG = "mustache"; /** - * Constructor for simple inline script. The script will have no lang or params set. - * - * @param script The inline script to execute. + * The default {@link ScriptType}. */ - public Script(String script) { - this(script, ScriptType.INLINE, null, null); - } + public static final ScriptType DEFAULT_SCRIPT_TYPE = ScriptType.INLINE; - public Script(String script, ScriptType type, String lang, @Nullable Map params) { - this(script, type, lang, params, null); - } + /** + * Compiler option for {@link XContentType} used for templates. + */ + public static final String CONTENT_TYPE_OPTION = "content_type"; /** - * Constructor for Script. - * - * @param script The cache key of the script to be compiled/executed. For inline scripts this is the actual - * script source code. For indexed scripts this is the id used in the request. For on file - * scripts this is the file name. - * @param type The type of script -- dynamic, stored, or file. - * @param lang The language of the script to be compiled/executed. - * @param params The map of parameters the script will be executed with. - * @param contentType The {@link XContentType} of the script. Only relevant for inline scripts that have not been - * defined as a plain string, but as json or yaml content. This class needs this information - * when serializing the script back to xcontent. + * Standard {@link ParseField} for outer level of script queries. + */ + public static final ParseField SCRIPT_PARSE_FIELD = new ParseField("script"); + + /** + * Standard {@link ParseField} for lang on the inner level. */ - @SuppressWarnings("unchecked") - public Script(String script, ScriptType type, String lang, @Nullable Map params, - @Nullable XContentType contentType) { - if (contentType != null && type != ScriptType.INLINE) { - throw new IllegalArgumentException("The parameter contentType only makes sense for inline scripts"); + public static final ParseField LANG_PARSE_FIELD = new ParseField("lang"); + + /** + * Standard {@link ParseField} for options on the inner level. + */ + public static final ParseField OPTIONS_PARSE_FIELD = new ParseField("options"); + + /** + * Standard {@link ParseField} for params on the inner level. + */ + public static final ParseField PARAMS_PARSE_FIELD = new ParseField("params"); + + /** + * Helper class used by {@link ObjectParser} to store mutable {@link Script} variables and then + * construct an immutable {@link Script} object based on parsed XContent. 
+ */ + private static final class Builder { + private ScriptType type; + private String lang; + private String idOrCode; + private Map options; + private Map params; + + private Builder() { + // This cannot default to an empty map because options are potentially added at multiple points. + this.options = new HashMap<>(); + this.params = Collections.emptyMap(); } - this.script = Objects.requireNonNull(script); - this.type = Objects.requireNonNull(type); - this.lang = lang == null ? DEFAULT_SCRIPT_LANG : lang; - this.params = (Map) params; - this.contentType = contentType; - } - public Script(StreamInput in) throws IOException { - script = in.readString(); - if (in.readBoolean()) { - type = ScriptType.readFrom(in); + /** + * Since inline scripts can accept code rather than just an id, they must also be able + * to handle template parsing, hence the need for custom parsing code. Templates can + * consist of either an {@link String} or a JSON object. If a JSON object is discovered + * then the content type option must also be saved as a compiler option. + */ + private void setInline(XContentParser parser) { + try { + if (type != null) { + throwOnlyOneOfType(); + } + + type = ScriptType.INLINE; + + if (parser.currentToken() == Token.START_OBJECT) { + //this is really for search templates, that need to be converted to json format + XContentBuilder builder = XContentFactory.jsonBuilder(); + idOrCode = builder.copyCurrentStructure(parser).string(); + options.put(CONTENT_TYPE_OPTION, XContentType.JSON.mediaType()); + } else { + idOrCode = parser.text(); + } + } catch (IOException exception) { + throw new UncheckedIOException(exception); + } } - lang = in.readOptionalString(); - params = in.readMap(); - if (in.readBoolean()) { - contentType = XContentType.readFrom(in); + + /** + * Set both the id and the type of the stored script. + */ + private void setStored(String idOrCode) { + if (type != null) { + throwOnlyOneOfType(); + } + + type = ScriptType.STORED; + this.idOrCode = idOrCode; } - } - @Override - public void writeTo(StreamOutput out) throws IOException { - out.writeString(script); - boolean hasType = type != null; - out.writeBoolean(hasType); - if (hasType) { - ScriptType.writeTo(type, out); + /** + * Set both the id and the type of the file script. + */ + private void setFile(String idOrCode) { + if (type != null) { + throwOnlyOneOfType(); + } + + type = ScriptType.FILE; + this.idOrCode = idOrCode; + } + + /** + * Helper method to throw an exception if more than one type of {@link Script} is specified. + */ + private void throwOnlyOneOfType() { + throw new IllegalArgumentException("must only use one of [" + + ScriptType.INLINE.getParseField().getPreferredName() + " + , " + + ScriptType.STORED.getParseField().getPreferredName() + " + , " + + ScriptType.FILE.getParseField().getPreferredName() + "]" + + " when specifying a script"); + } + + private void setLang(String lang) { + this.lang = lang; } - out.writeOptionalString(lang); - out.writeMap(params); - boolean hasContentType = contentType != null; - out.writeBoolean(hasContentType); - if (hasContentType) { - XContentType.writeTo(contentType, out); + + /** + * Options may have already been added if an inline template was specified. + * Appends the user-defined compiler options with the internal compiler options. + */ + private void setOptions(Map options) { + this.options.putAll(options); } + + private void setParams(Map params) { + this.params = params; + } + + /** + * Validates the parameters and creates an {@link Script}. 
+ * @param defaultLang The default lang is not a compile-time constant and must be provided + * at run-time this way in case a legacy default language is used from + * previously stored queries. + */ + private Script build(String defaultLang) { + if (type == null) { + throw new IllegalArgumentException("must specify either [source] for an inline script, [id] for a stored script, " + + "or [" + ScriptType.FILE.getParseField().getPreferredName() + "] for a file script"); + } + + if (type == ScriptType.INLINE) { + if (lang == null) { + lang = defaultLang; + } + + if (idOrCode == null) { + throw new IllegalArgumentException( + "must specify for an [" + ScriptType.INLINE.getParseField().getPreferredName() + "] script"); + } + + if (options.size() > 1 || options.size() == 1 && options.get(CONTENT_TYPE_OPTION) == null) { + options.remove(CONTENT_TYPE_OPTION); + + throw new IllegalArgumentException("illegal compiler options [" + options + "] specified"); + } + } else if (type == ScriptType.STORED) { + // Only issue this deprecation warning if we aren't using a template. Templates during + // this deprecation phase must always specify the default template language or they would + // possibly pick up a script in a different language as defined by the user under the new + // namespace unintentionally. + if (lang != null && lang.equals(DEFAULT_TEMPLATE_LANG) == false) { + DEPRECATION_LOGGER.deprecated("specifying the field [" + LANG_PARSE_FIELD.getPreferredName() + "] " + + "for executing " + ScriptType.STORED + " scripts is deprecated; use only the field " + + "[" + ScriptType.STORED.getParseField().getPreferredName() + "] to specify an "); + } + + if (idOrCode == null) { + throw new IllegalArgumentException( + "must specify for an [" + ScriptType.STORED.getParseField().getPreferredName() + "] script"); + } + + if (options.isEmpty()) { + options = null; + } else { + throw new IllegalArgumentException("field [" + OPTIONS_PARSE_FIELD.getPreferredName() + "] " + + "cannot be specified using a [" + ScriptType.STORED.getParseField().getPreferredName() + "] script"); + } + } else if (type == ScriptType.FILE) { + if (lang == null) { + lang = defaultLang; + } + + if (idOrCode == null) { + throw new IllegalArgumentException( + "must specify for an [" + ScriptType.FILE.getParseField().getPreferredName() + "] script"); + } + + if (options.isEmpty()) { + options = null; + } else { + throw new IllegalArgumentException("field [" + OPTIONS_PARSE_FIELD.getPreferredName() + "] " + + "cannot be specified using a [" + ScriptType.FILE.getParseField().getPreferredName() + "] script"); + } + } + + return new Script(type, lang, idOrCode, options, params); + } + } + + private static final ObjectParser PARSER = new ObjectParser<>("script", Builder::new); + + static { + // Defines the fields necessary to parse a Script as XContent using an ObjectParser. + PARSER.declareField(Builder::setInline, parser -> parser, ScriptType.INLINE.getParseField(), ValueType.OBJECT_OR_STRING); + PARSER.declareString(Builder::setStored, ScriptType.STORED.getParseField()); + PARSER.declareString(Builder::setFile, ScriptType.FILE.getParseField()); + PARSER.declareString(Builder::setLang, LANG_PARSE_FIELD); + PARSER.declareField(Builder::setOptions, XContentParser::mapStrings, OPTIONS_PARSE_FIELD, ValueType.OBJECT); + PARSER.declareField(Builder::setParams, XContentParser::map, PARAMS_PARSE_FIELD, ValueType.OBJECT); } /** - * Method for getting the script. - * @return The cache key of the script to be compiled/executed. 
For dynamic scripts this is the actual - * script source code. For indexed scripts this is the id used in the request. For on disk scripts - * this is the file name. + * Convenience method to call {@link Script#parse(XContentParser, String)} + * using the default scripting language. */ - public String getScript() { - return script; + public static Script parse(XContentParser parser) throws IOException { + return parse(parser, DEFAULT_SCRIPT_LANG); } /** - * Method for getting the type. + * This will parse XContent into a {@link Script}. The following formats can be parsed: + * + * The simple format defaults to an {@link ScriptType#INLINE} with no compiler options or user-defined params: + * + * Example: + * {@code + * "return Math.log(doc.popularity) * 100;" + * } + * + * The complex format where {@link ScriptType} and idOrCode are required while lang, options and params are not required. * - * @return The type of script -- inline, stored, or file. + * {@code + * { + * // Exactly one of "id" or "source" must be specified + * "id" : "", + * // OR + * "source": "", + * "lang" : "", + * "options" : { + * "option0" : "", + * "option1" : "", + * ... + * }, + * "params" : { + * "param0" : "", + * "param1" : "", + * ... + * } + * } + * } + * + * Example: + * {@code + * { + * "source" : "return Math.log(doc.popularity) * params.multiplier", + * "lang" : "painless", + * "params" : { + * "multiplier" : 100.0 + * } + * } + * } + * + * This also handles templates in a special way. If a complexly formatted query is specified as another complex + * JSON object the query is assumed to be a template, and the format will be preserved. + * + * {@code + * { + * "source" : { "query" : ... }, + * "lang" : "", + * "options" : { + * "option0" : "", + * "option1" : "", + * ... + * }, + * "params" : { + * "param0" : "", + * "param1" : "", + * ... + * } + * } + * } + * + * @param parser The {@link XContentParser} to be used. + * @param defaultLang The default language to use if no language is specified. The default language isn't necessarily + * the one defined by {@link Script#DEFAULT_SCRIPT_LANG} due to backwards compatiblity requirements + * related to stored queries using previously default languauges. + * + * @return The parsed {@link Script}. */ - public ScriptType getType() { - return type; + public static Script parse(XContentParser parser, String defaultLang) throws IOException { + Objects.requireNonNull(defaultLang); + + Token token = parser.currentToken(); + + if (token == null) { + token = parser.nextToken(); + } + + if (token == Token.VALUE_STRING) { + return new Script(ScriptType.INLINE, defaultLang, parser.text(), Collections.emptyMap()); + } + + return PARSER.apply(parser, null).build(defaultLang); } + private final ScriptType type; + private final String lang; + private final String idOrCode; + private final Map options; + private final Map params; + /** - * Method for getting language. - * - * @return The language of the script to be compiled/executed. + * Constructor for simple script using the default language and default type. + * @param idOrCode The id or code to use dependent on the default script type. */ - public String getLang() { - return lang; + public Script(String idOrCode) { + this(DEFAULT_SCRIPT_TYPE, DEFAULT_SCRIPT_LANG, idOrCode, Collections.emptyMap(), Collections.emptyMap()); } /** - * Method for getting the parameters. - * - * @return The map of parameters the script will be executed with. + * Constructor for a script that does not need to use compiler options. 
+ * @param type The {@link ScriptType}. + * @param lang The language for this {@link Script} if the {@link ScriptType} is {@link ScriptType#INLINE} or + * {@link ScriptType#FILE}. For {@link ScriptType#STORED} scripts this should be null, but can + * be specified to access scripts stored as part of the stored scripts deprecated API. + * @param idOrCode The id for this {@link Script} if the {@link ScriptType} is {@link ScriptType#FILE} or {@link ScriptType#STORED}. + * The code for this {@link Script} if the {@link ScriptType} is {@link ScriptType#INLINE}. + * @param params The user-defined params to be bound for script execution. */ - public Map getParams() { - return params; + public Script(ScriptType type, String lang, String idOrCode, Map params) { + this(type, lang, idOrCode, type == ScriptType.INLINE ? Collections.emptyMap() : null, params); + } + + /** + * Constructor for a script that requires the use of compiler options. + * @param type The {@link ScriptType}. + * @param lang The language for this {@link Script} if the {@link ScriptType} is {@link ScriptType#INLINE} or + * {@link ScriptType#FILE}. For {@link ScriptType#STORED} scripts this should be null, but can + * be specified to access scripts stored as part of the stored scripts deprecated API. + * @param idOrCode The id for this {@link Script} if the {@link ScriptType} is {@link ScriptType#FILE} or {@link ScriptType#STORED}. + * The code for this {@link Script} if the {@link ScriptType} is {@link ScriptType#INLINE}. + * @param options The map of compiler options for this {@link Script} if the {@link ScriptType} + * is {@link ScriptType#INLINE}, {@code null} otherwise. + * @param params The user-defined params to be bound for script execution. + */ + public Script(ScriptType type, String lang, String idOrCode, Map options, Map params) { + this.type = Objects.requireNonNull(type); + this.idOrCode = Objects.requireNonNull(idOrCode); + this.params = Collections.unmodifiableMap(Objects.requireNonNull(params)); + + if (type == ScriptType.INLINE) { + this.lang = Objects.requireNonNull(lang); + this.options = Collections.unmodifiableMap(Objects.requireNonNull(options)); + } else if (type == ScriptType.STORED) { + this.lang = lang; + + if (options != null) { + throw new IllegalStateException( + "options must be null for [" + ScriptType.STORED.getParseField().getPreferredName() + "] scripts"); + } + + this.options = null; + } else if (type == ScriptType.FILE) { + this.lang = Objects.requireNonNull(lang); + + if (options != null) { + throw new IllegalStateException( + "options must be null for [" + ScriptType.FILE.getParseField().getPreferredName() + "] scripts"); + } + + this.options = null; + } else { + throw new IllegalStateException("unknown script type [" + type.getName() + "]"); + } } /** - * @return The content type of the script if it is an inline script and the script has been defined as json - * or yaml content instead of a plain string. + * Creates a {@link Script} read from an input stream. */ - public XContentType getContentType() { - return contentType; + public Script(StreamInput in) throws IOException { + // Version 5.3 allows lang to be an optional parameter for stored scripts and expects + // options to be null for stored and file scripts. 
+ if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + this.type = ScriptType.readFrom(in); + this.lang = in.readOptionalString(); + this.idOrCode = in.readString(); + @SuppressWarnings("unchecked") + Map options = (Map)(Map)in.readMap(); + this.options = options; + this.params = in.readMap(); + // Version 5.1 to 5.3 (exclusive) requires all Script members to be non-null and supports the potential + // for more options than just XContentType. Reorders the read in contents to be in + // same order as the constructor. + } else if (in.getVersion().onOrAfter(Version.V_5_1_1)) { + this.type = ScriptType.readFrom(in); + this.lang = in.readString(); + + this.idOrCode = in.readString(); + @SuppressWarnings("unchecked") + Map options = (Map)(Map)in.readMap(); + + if (this.type != ScriptType.INLINE && options.isEmpty()) { + this.options = null; + } else { + this.options = options; + } + + this.params = in.readMap(); + // Prior to version 5.1 the script members are read in certain cases as optional and given + // default values when necessary. Also the only option supported is for XContentType. + } else { + this.idOrCode = in.readString(); + + if (in.readBoolean()) { + this.type = ScriptType.readFrom(in); + } else { + this.type = DEFAULT_SCRIPT_TYPE; + } + + String lang = in.readOptionalString(); + + if (lang == null) { + this.lang = DEFAULT_SCRIPT_LANG; + } else { + this.lang = lang; + } + + Map params = in.readMap(); + + if (params == null) { + this.params = new HashMap<>(); + } else { + this.params = params; + } + + if (in.readBoolean()) { + this.options = new HashMap<>(); + XContentType contentType = XContentType.readFrom(in); + this.options.put(CONTENT_TYPE_OPTION, contentType.mediaType()); + } else if (type == ScriptType.INLINE) { + options = new HashMap<>(); + } else { + this.options = null; + } + } } @Override - public XContentBuilder toXContent(XContentBuilder builder, Params builderParams) throws IOException { - if (type == null) { - return builder.value(script); + public void writeTo(StreamOutput out) throws IOException { + // Version 5.3+ allows lang to be an optional parameter for stored scripts and expects + // options to be null for stored and file scripts. + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + type.writeTo(out); + out.writeOptionalString(lang); + out.writeString(idOrCode); + @SuppressWarnings("unchecked") + Map options = (Map)(Map)this.options; + out.writeMap(options); + out.writeMap(params); + // Version 5.1 to 5.3 (exclusive) requires all Script members to be non-null and supports the potential + // for more options than just XContentType. Reorders the written out contents to be in + // same order as the constructor. + } else if (out.getVersion().onOrAfter(Version.V_5_1_1)) { + type.writeTo(out); + + if (lang == null) { + out.writeString(""); + } else { + out.writeString(lang); + } + + out.writeString(idOrCode); + @SuppressWarnings("unchecked") + Map options = (Map)(Map)this.options; + + if (options == null) { + out.writeMap(new HashMap<>()); + } else { + out.writeMap(options); + } + + out.writeMap(params); + // Prior to version 5.1 the Script members were possibly written as optional or null, though there is no case where a null + // value wasn't equivalent to it's default value when actually compiling/executing a script. Meaning, there are no + // backwards compatibility issues, and now there's enforced consistency. Also the only supported compiler + // option was XContentType. 
+ } else { + out.writeString(idOrCode); + out.writeBoolean(true); + type.writeTo(out); + out.writeOptionalString(lang); + + if (params.isEmpty()) { + out.writeMap(null); + } else { + out.writeMap(params); + } + + if (options != null && options.containsKey(CONTENT_TYPE_OPTION)) { + XContentType contentType = XContentType.fromMediaTypeOrFormat(options.get(CONTENT_TYPE_OPTION)); + out.writeBoolean(true); + contentType.writeTo(out); + } else { + out.writeBoolean(false); + } } + } + + /** + * This will build scripts into the following XContent structure: + * + * {@code + * { + * "<(id, source)>" : "", + * "lang" : "", + * "options" : { + * "option0" : "", + * "option1" : "", + * ... + * }, + * "params" : { + * "param0" : "", + * "param1" : "", + * ... + * } + * } + * } + * + * Example: + * {@code + * { + * "source" : "return Math.log(doc.popularity) * params.multiplier;", + * "lang" : "painless", + * "params" : { + * "multiplier" : 100.0 + * } + * } + * } + * + * Note that lang, options, and params will only be included if there have been any specified. + * + * This also handles templates in a special way. If the {@link Script#CONTENT_TYPE_OPTION} option + * is provided and the {@link ScriptType#INLINE} is specified then the template will be preserved as a raw field. + * + * {@code + * { + * "source" : { "query" : ... }, + * "lang" : "", + * "options" : { + * "option0" : "", + * "option1" : "", + * ... + * }, + * "params" : { + * "param0" : "", + * "param1" : "", + * ... + * } + * } + * } + */ + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params builderParams) throws IOException { builder.startObject(); - if (type == ScriptType.INLINE && contentType != null && builder.contentType() == contentType) { - builder.rawField(type.getParseField().getPreferredName(), new BytesArray(script)); + + String contentType = options == null ? null : options.get(CONTENT_TYPE_OPTION); + + if (type == ScriptType.INLINE) { + if (contentType != null && builder.contentType().mediaType().equals(contentType)) { + builder.rawField(ScriptType.INLINE.getParseField().getPreferredName(), new BytesArray(idOrCode)); + } else { + builder.field(ScriptType.INLINE.getParseField().getPreferredName(), idOrCode); + } } else { - builder.field(type.getParseField().getPreferredName(), script); + builder.field(type.getParseField().getPreferredName(), idOrCode); } + if (lang != null) { - builder.field(ScriptField.LANG.getPreferredName(), lang); + builder.field(LANG_PARSE_FIELD.getPreferredName(), lang); + } + + if (options != null && !options.isEmpty()) { + builder.field(OPTIONS_PARSE_FIELD.getPreferredName(), options); } - if (params != null) { - builder.field(ScriptField.PARAMS.getPreferredName(), params); + + if (!params.isEmpty()) { + builder.field(PARAMS_PARSE_FIELD.getPreferredName(), params); } + builder.endObject(); + return builder; } - public static Script parse(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException { - return parse(parser, parseFieldMatcher, null); + /** + * @return The {@link ScriptType} for this {@link Script}. 
+ */ + public ScriptType getType() { + return type; } - public static Script parse(XContentParser parser, ParseFieldMatcher parseFieldMatcher, @Nullable String lang) throws IOException { - XContentParser.Token token = parser.currentToken(); - // If the parser hasn't yet been pushed to the first token, do it now - if (token == null) { - token = parser.nextToken(); - } - if (token == XContentParser.Token.VALUE_STRING) { - return new Script(parser.text(), ScriptType.INLINE, lang, null); - } - if (token != XContentParser.Token.START_OBJECT) { - throw new ElasticsearchParseException("expected a string value or an object, but found [{}] instead", token); - } - String script = null; - ScriptType type = null; - Map params = null; - XContentType contentType = null; - String cfn = null; - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - cfn = parser.currentName(); - } else if (parseFieldMatcher.match(cfn, ScriptType.INLINE.getParseField())) { - type = ScriptType.INLINE; - if (parser.currentToken() == XContentParser.Token.START_OBJECT) { - contentType = parser.contentType(); - XContentBuilder builder = XContentFactory.contentBuilder(contentType); - script = builder.copyCurrentStructure(parser).bytes().utf8ToString(); - } else { - script = parser.text(); - } - } else if (parseFieldMatcher.match(cfn, ScriptType.FILE.getParseField())) { - type = ScriptType.FILE; - if (token == XContentParser.Token.VALUE_STRING) { - script = parser.text(); - } else { - throw new ElasticsearchParseException("expected a string value for field [{}], but found [{}]", cfn, token); - } - } else if (parseFieldMatcher.match(cfn, ScriptType.STORED.getParseField())) { - type = ScriptType.STORED; - if (token == XContentParser.Token.VALUE_STRING) { - script = parser.text(); - } else { - throw new ElasticsearchParseException("expected a string value for field [{}], but found [{}]", cfn, token); - } - } else if (parseFieldMatcher.match(cfn, ScriptField.LANG)) { - if (token == XContentParser.Token.VALUE_STRING) { - lang = parser.text(); - } else { - throw new ElasticsearchParseException("expected a string value for field [{}], but found [{}]", cfn, token); - } - } else if (parseFieldMatcher.match(cfn, ScriptField.PARAMS)) { - if (token == XContentParser.Token.START_OBJECT) { - params = parser.map(); - } else { - throw new ElasticsearchParseException("expected an object for field [{}], but found [{}]", cfn, token); - } - } else { - throw new ElasticsearchParseException("unexpected field [{}]", cfn); - } - } - if (script == null) { - throw new ElasticsearchParseException("expected one of [{}], [{}] or [{}] fields, but found none", - ScriptType.INLINE.getParseField() .getPreferredName(), ScriptType.FILE.getParseField().getPreferredName(), - ScriptType.STORED.getParseField() .getPreferredName()); - } - return new Script(script, type, lang, params, contentType); + /** + * @return The language for this {@link Script} if the {@link ScriptType} is {@link ScriptType#INLINE} or + * {@link ScriptType#FILE}. For {@link ScriptType#STORED} scripts this should be null, but can + * be specified to access scripts stored as part of the stored scripts deprecated API. + */ + public String getLang() { + return lang; } - @Override - public int hashCode() { - return Objects.hash(lang, params, script, type, contentType); + /** + * @return The id for this {@link Script} if the {@link ScriptType} is {@link ScriptType#FILE} or {@link ScriptType#STORED}. 
+ * The code for this {@link Script} if the {@link ScriptType} is {@link ScriptType#INLINE}. + */ + public String getIdOrCode() { + return idOrCode; } - @Override - public boolean equals(Object obj) { - if (this == obj) return true; - if (obj == null) return false; - if (getClass() != obj.getClass()) return false; - Script other = (Script) obj; - - return Objects.equals(lang, other.lang) && - Objects.equals(params, other.params) && - Objects.equals(script, other.script) && - Objects.equals(type, other.type) && - Objects.equals(contentType, other.contentType); + /** + * @return The map of compiler options for this {@link Script} if the {@link ScriptType} + * is {@link ScriptType#INLINE}, {@code null} otherwise. + */ + public Map getOptions() { + return options; + } + + /** + * @return The map of user-defined params for this {@link Script}. + */ + public Map getParams() { + return params; } @Override - public String toString() { - return "[script: " + script + ", type: " + type.getParseField().getPreferredName() + ", lang: " - + lang + ", params: " + params + "]"; + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + + Script script = (Script)o; + + if (type != script.type) return false; + if (lang != null ? !lang.equals(script.lang) : script.lang != null) return false; + if (!idOrCode.equals(script.idOrCode)) return false; + if (options != null ? !options.equals(script.options) : script.options != null) return false; + return params.equals(script.params); + } - public interface ScriptField { - ParseField SCRIPT = new ParseField("script"); - ParseField LANG = new ParseField("lang"); - ParseField PARAMS = new ParseField("params"); + @Override + public int hashCode() { + int result = type.hashCode(); + result = 31 * result + (lang != null ? lang.hashCode() : 0); + result = 31 * result + idOrCode.hashCode(); + result = 31 * result + (options != null ? 
options.hashCode() : 0); + result = 31 * result + params.hashCode(); + return result; } + @Override + public String toString() { + return "Script{" + + "type=" + type + + ", lang='" + lang + '\'' + + ", idOrCode='" + idOrCode + '\'' + + ", options=" + options + + ", params=" + params + + '}'; + } } diff --git a/core/src/main/java/org/elasticsearch/script/ScriptContextRegistry.java b/core/src/main/java/org/elasticsearch/script/ScriptContextRegistry.java index b4ed91faebaea..765be1d437818 100644 --- a/core/src/main/java/org/elasticsearch/script/ScriptContextRegistry.java +++ b/core/src/main/java/org/elasticsearch/script/ScriptContextRegistry.java @@ -19,8 +19,6 @@ package org.elasticsearch.script; -import org.elasticsearch.common.settings.Settings; - import java.util.Collection; import java.util.HashMap; import java.util.HashSet; @@ -66,8 +64,8 @@ Collection scriptContexts() { /** * @return true if the provided {@link ScriptContext} is supported, false otherwise */ - boolean isSupportedContext(ScriptContext scriptContext) { - return scriptContexts.containsKey(scriptContext.getKey()); + boolean isSupportedContext(String scriptContext) { + return scriptContexts.containsKey(scriptContext); } //script contexts can be used in fine-grained settings, we need to be careful with what we allow here @@ -81,8 +79,8 @@ private void validateScriptContext(ScriptContext.Plugin scriptContext) { } private static Set reservedScriptContexts() { - Set reserved = new HashSet<>(ScriptService.ScriptType.values().length + ScriptContext.Standard.values().length); - for (ScriptService.ScriptType scriptType : ScriptService.ScriptType.values()) { + Set reserved = new HashSet<>(ScriptType.values().length + ScriptContext.Standard.values().length); + for (ScriptType scriptType : ScriptType.values()) { reserved.add(scriptType.toString()); } for (ScriptContext.Standard scriptContext : ScriptContext.Standard.values()) { diff --git a/core/src/main/java/org/elasticsearch/script/ScriptEngineService.java b/core/src/main/java/org/elasticsearch/script/ScriptEngineService.java index 55a931d8d57c3..4dc3870fac9a2 100644 --- a/core/src/main/java/org/elasticsearch/script/ScriptEngineService.java +++ b/core/src/main/java/org/elasticsearch/script/ScriptEngineService.java @@ -32,7 +32,12 @@ public interface ScriptEngineService extends Closeable { String getType(); - String getExtension(); + /** + * The extension for file scripts in this language. + */ + default String getExtension() { + return getType(); + } /** * Compiles a script. 
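Taken together, the new constructors and `toXContent` implementation above produce the formats described in the `Script` Javadoc. The following is a minimal sketch of how they are intended to interact, assuming the 5.3-era XContent helpers used elsewhere in this patch (`XContentFactory.jsonBuilder()` and `XContentBuilder#string()`); the stored-script id and the multiplier value are invented for illustration.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.script.Script;
import org.elasticsearch.script.ScriptType;

public class ScriptXContentSketch {
    public static void main(String[] args) throws Exception {
        // An inline script: lang and code are required, options default to an empty map.
        Map<String, Object> params = new HashMap<>();
        params.put("multiplier", 100.0);
        Script inline = new Script(ScriptType.INLINE, "painless",
                "Math.log(doc.popularity) * params.multiplier", params);

        // A stored script: only the id is required; lang may be null under the new namespace.
        Script stored = new Script(ScriptType.STORED, null, "my-stored-calc", Collections.emptyMap());

        for (Script script : new Script[] { inline, stored }) {
            XContentBuilder builder = XContentFactory.jsonBuilder();
            script.toXContent(builder, ToXContent.EMPTY_PARAMS);
            // The inline script serializes to the complex source/lang/params object described in the
            // Javadoc above; the stored script carries only its id.
            System.out.println(builder.string());
        }
    }
}
```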
diff --git a/core/src/main/java/org/elasticsearch/script/ScriptException.java b/core/src/main/java/org/elasticsearch/script/ScriptException.java index 475091f9f6d7a..91e6ad401fc88 100644 --- a/core/src/main/java/org/elasticsearch/script/ScriptException.java +++ b/core/src/main/java/org/elasticsearch/script/ScriptException.java @@ -87,8 +87,7 @@ public void writeTo(StreamOutput out) throws IOException { } @Override - protected void innerToXContent(XContentBuilder builder, Params params) throws IOException { - super.innerToXContent(builder, params); + protected void metadataToXContent(XContentBuilder builder, Params params) throws IOException { builder.field("script_stack", scriptStack); builder.field("script", script); builder.field("lang", lang); diff --git a/core/src/main/java/org/elasticsearch/script/ScriptMetaData.java b/core/src/main/java/org/elasticsearch/script/ScriptMetaData.java index 979bffb4bccfc..63b5e2e46ab8d 100644 --- a/core/src/main/java/org/elasticsearch/script/ScriptMetaData.java +++ b/core/src/main/java/org/elasticsearch/script/ScriptMetaData.java @@ -18,22 +18,24 @@ */ package org.elasticsearch.script; +import org.apache.logging.log4j.Logger; import org.elasticsearch.ResourceNotFoundException; -import org.elasticsearch.cluster.AbstractDiffable; +import org.elasticsearch.Version; +import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.Diff; import org.elasticsearch.cluster.DiffableUtils; +import org.elasticsearch.cluster.NamedDiff; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.common.ParsingException; -import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.ESLoggerFactory; +import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentFactory; -import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentParser.Token; -import org.elasticsearch.common.xcontent.XContentType; import java.io.IOException; import java.util.Collections; @@ -41,255 +43,416 @@ import java.util.HashMap; import java.util.Map; -public final class ScriptMetaData implements MetaData.Custom { +/** + * {@link ScriptMetaData} is used to store user-defined scripts + * as part of the {@link ClusterState}. Currently scripts can + * be stored as part of the new namespace for a stored script where + * only an id is used or as part of the deprecated namespace where + * both a language and an id are used. + */ +public final class ScriptMetaData implements MetaData.Custom, Writeable, ToXContent { + + /** + * A builder used to modify the currently stored scripts data held within + * the {@link ClusterState}. Scripts can be added or deleted, then built + * to generate a new {@link Map} of scripts that will be used to update + * the current {@link ClusterState}. 
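+     *
+     * A minimal usage sketch (the ids and script body are illustrative, and "old-calc" is assumed to
+     * already be stored; otherwise {@code deleteScript} throws a {@code ResourceNotFoundException}):
+     * {@code
+     * StoredScriptSource source =
+     *     new StoredScriptSource("painless", "Math.log(doc.popularity) * 100", Collections.emptyMap());
+     * ScriptMetaData updated = new ScriptMetaData.Builder(previousMetaData) // previousMetaData may be null
+     *     .storeScript("calc-popularity", source)    // stored under "calc-popularity" and "painless#calc-popularity"
+     *     .deleteScript("old-calc", null)            // null lang resolves the language via the new namespace
+     *     .build();
+     * }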
+ */ + public static final class Builder { - public static final String TYPE = "stored_scripts"; - public static final ScriptMetaData PROTO = new ScriptMetaData(Collections.emptyMap()); + private final Map scripts; - private final Map scripts; + /** + * @param previous The current {@link ScriptMetaData} or {@code null} if there + * is no existing {@link ScriptMetaData}. + */ + public Builder(ScriptMetaData previous) { + this.scripts = previous == null ? new HashMap<>() :new HashMap<>(previous.scripts); + } - ScriptMetaData(Map scripts) { - this.scripts = scripts; - } + /** + * Add a new script to the existing stored scripts. The script will be added under + * both the new namespace and the deprecated namespace, so that look ups under + * the deprecated namespace will continue to work. Should a script already exist under + * the new namespace using a different language, it will be replaced and a deprecation + * warning will be issued. The replaced script will still exist under the deprecated + * namespace and can continue to be looked up this way until it is deleted. + *

    + * Take for example script 'A' with lang 'L0' and data 'D0'. If we add script 'A' to the + * empty set, the scripts {@link Map} will be ["A" -- D0, "A#L0" -- D0]. If a script + * 'A' with lang 'L1' and data 'D1' is then added, the scripts {@link Map} will be + * ["A" -- D1, "A#L1" -- D1, "A#L0" -- D0]. + * @param id The user-specified id to use for the look up. + * @param source The user-specified stored script data held in {@link StoredScriptSource}. + */ + public Builder storeScript(String id, StoredScriptSource source) { + StoredScriptSource previous = scripts.put(id, source); + scripts.put(source.getLang() + "#" + id, source); + + if (previous != null && previous.getLang().equals(source.getLang()) == false) { + DEPRECATION_LOGGER.deprecated("stored script [" + id + "] already exists using a different lang " + + "[" + previous.getLang() + "], the new namespace for stored scripts will only use (id) instead of (lang, id)"); + } - public BytesReference getScriptAsBytes(String language, String id) { - ScriptAsBytes scriptAsBytes = scripts.get(toKey(language, id)); - if (scriptAsBytes != null) { - return scriptAsBytes.script; - } else { - return null; + return this; } - } - public String getScript(String language, String id) { - BytesReference scriptAsBytes = getScriptAsBytes(language, id); - if (scriptAsBytes == null) { - return null; - } - return scriptAsBytes.utf8ToString(); - } + /** + * Delete a script from the existing stored scripts. The script will be removed from the + * new namespace if the script language matches the current script under the same id or + * if the script language is {@code null}. The script will be removed from the deprecated + * namespace on any delete, either using the specified lang parameter or the language + * found from looking up the script in the new namespace. + *

    + * Take for example a scripts {@link Map} with {"A" -- D1, "A#L1" -- D1, "A#L0" -- D0}. + * If a script is removed specified by an id 'A' and lang {@code null} then the scripts + * {@link Map} will be {"A#L0" -- D0}. To remove the final script, the deprecated + * namespace must be used, so an id 'A' and lang 'L0' would need to be specified. + * @param id The user-specified id to use for the look up. + * @param lang The user-specified language to use for the look up if using the deprecated + * namespace, otherwise {@code null}. + */ + public Builder deleteScript(String id, String lang) { + StoredScriptSource source = scripts.get(id); + + if (lang == null) { + if (source == null) { + throw new ResourceNotFoundException("stored script [" + id + "] does not exist and cannot be deleted"); + } + + lang = source.getLang(); + } - public static String parseStoredScript(BytesReference scriptAsBytes) { - // Scripts can be stored via API in several ways: - // 1) wrapped into a 'script' json object or field - // 2) wrapped into a 'template' json object or field - // 3) just as is - // In order to fetch the actual script in consistent manner this parsing logic is needed: - try (XContentParser parser = XContentHelper.createParser(scriptAsBytes); - XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON)) { - parser.nextToken(); - parser.nextToken(); - if (parser.currentToken() == Token.END_OBJECT) { - throw new IllegalArgumentException("Empty script"); + if (source != null) { + if (lang.equals(source.getLang())) { + scripts.remove(id); + } } - switch (parser.currentName()) { - case "script": - case "template": - if (parser.nextToken() == Token.VALUE_STRING) { - return parser.text(); - } else { - builder.copyCurrentStructure(parser); - } - break; - default: - // There is no enclosing 'script' or 'template' object so we just need to return the script as is... - // because the parsers current location is already beyond the beginning we need to add a START_OBJECT: - builder.startObject(); - builder.copyCurrentStructure(parser); - builder.endObject(); - break; + + source = scripts.get(lang + "#" + id); + + if (source == null) { + throw new ResourceNotFoundException( + "stored script [" + id + "] using lang [" + lang + "] does not exist and cannot be deleted"); } - return builder.string(); - } catch (IOException e) { - throw new RuntimeException(e); + + scripts.remove(lang + "#" + id); + + return this; } - } - @Override - public String type() { - return TYPE; + /** + * @return A {@link ScriptMetaData} with the updated {@link Map} of scripts. 
+ */ + public ScriptMetaData build() { + return new ScriptMetaData(scripts); + } } - @Override - public ScriptMetaData fromXContent(XContentParser parser) throws IOException { - Map scripts = new HashMap<>(); - String key = null; - for (Token token = parser.nextToken(); token != Token.END_OBJECT; token = parser.nextToken()) { - switch (token) { - case FIELD_NAME: - key = parser.currentName(); - break; - case VALUE_STRING: - scripts.put(key, new ScriptAsBytes(new BytesArray(parser.text()))); - break; - default: - throw new ParsingException(parser.getTokenLocation(), "Unexpected token [" + token + "]"); - } + static final class ScriptMetadataDiff implements NamedDiff { + + final Diff> pipelines; + + ScriptMetadataDiff(ScriptMetaData before, ScriptMetaData after) { + this.pipelines = DiffableUtils.diff(before.scripts, after.scripts, DiffableUtils.getStringKeySerializer()); } - return new ScriptMetaData(scripts); - } - @Override - public EnumSet context() { - return MetaData.API_AND_GATEWAY; - } + ScriptMetadataDiff(StreamInput in) throws IOException { + pipelines = DiffableUtils.readJdkMapDiff(in, DiffableUtils.getStringKeySerializer(), + StoredScriptSource::new, StoredScriptSource::readDiffFrom); + } - @Override - public ScriptMetaData readFrom(StreamInput in) throws IOException { - int size = in.readVInt(); - Map scripts = new HashMap<>(); - for (int i = 0; i < size; i++) { - String languageAndId = in.readString(); - BytesReference script = in.readBytesReference(); - scripts.put(languageAndId, new ScriptAsBytes(script)); + @Override + public String getWriteableName() { + return TYPE; } - return new ScriptMetaData(scripts); - } - @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - for (Map.Entry entry : scripts.entrySet()) { - builder.field(entry.getKey(), entry.getValue().script.utf8ToString()); + @Override + public MetaData.Custom apply(MetaData.Custom part) { + return new ScriptMetaData(pipelines.apply(((ScriptMetaData) part).scripts)); } - return builder; - } - @Override - public void writeTo(StreamOutput out) throws IOException { - out.writeVInt(scripts.size()); - for (Map.Entry entry : scripts.entrySet()) { - out.writeString(entry.getKey()); - entry.getValue().writeTo(out); + @Override + public void writeTo(StreamOutput out) throws IOException { + pipelines.writeTo(out); } } - @Override - public Diff diff(MetaData.Custom before) { - return new ScriptMetadataDiff((ScriptMetaData) before, this); - } + /** + * Convenience method to build and return a new + * {@link ScriptMetaData} adding the specified stored script. + */ + static ScriptMetaData putStoredScript(ScriptMetaData previous, String id, StoredScriptSource source) { + Builder builder = new Builder(previous); + builder.storeScript(id, source); - @Override - public Diff readDiffFrom(StreamInput in) throws IOException { - return new ScriptMetadataDiff(in); + return builder.build(); } - @Override - public boolean equals(Object o) { - if (this == o) return true; - if (o == null || getClass() != o.getClass()) return false; + /** + * Convenience method to build and return a new + * {@link ScriptMetaData} deleting the specified stored script. 
+ */ + static ScriptMetaData deleteStoredScript(ScriptMetaData previous, String id, String lang) { + Builder builder = new ScriptMetaData.Builder(previous); + builder.deleteScript(id, lang); - ScriptMetaData other = (ScriptMetaData) o; - return scripts.equals(other.scripts); + return builder.build(); } - @Override - public int hashCode() { - return scripts.hashCode(); - } + /** + * Standard logger necessary for allocation of the deprecation logger. + */ + private static final Logger LOGGER = ESLoggerFactory.getLogger(ScriptMetaData.class); - @Override - public String toString() { - return "ScriptMetaData{" + - "scripts=" + scripts + - '}'; - } + /** + * Deprecation logger necessary for namespace changes related to stored scripts. + */ + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(LOGGER); + + /** + * The type of {@link ClusterState} data. + */ + public static final String TYPE = "stored_scripts"; - static String toKey(String language, String id) { - if (id.contains("#")) { - throw new IllegalArgumentException("stored script id can't contain: '#'"); + /** + * This will parse XContent into {@link ScriptMetaData}. + * + * The following format will be parsed for the new namespace: + * + * {@code + * { + * "" : "<{@link StoredScriptSource#fromXContent(XContentParser)}>", + * "" : "<{@link StoredScriptSource#fromXContent(XContentParser)}>", + * ... + * } + * } + * + * The following format will be parsed for the deprecated namespace: + * + * {@code + * { + * "" : "", + * "" : "", + * ... + * } + * } + * + * Note when using the deprecated namespace, the language will be pulled from + * the id and options will be set to an empty {@link Map}. + */ + public static ScriptMetaData fromXContent(XContentParser parser) throws IOException { + Map scripts = new HashMap<>(); + String id = null; + StoredScriptSource source; + + Token token = parser.currentToken(); + + if (token == null) { + token = parser.nextToken(); } - if (language.contains("#")) { - throw new IllegalArgumentException("stored script language can't contain: '#'"); + + if (token != Token.START_OBJECT) { + throw new ParsingException(parser.getTokenLocation(), "unexpected token [" + token + "], expected [{]"); } - return language + "#" + id; - } + token = parser.nextToken(); - public static final class Builder { + while (token != Token.END_OBJECT) { + switch (token) { + case FIELD_NAME: + id = parser.currentName(); + break; + case VALUE_STRING: + if (id == null) { + throw new ParsingException(parser.getTokenLocation(), + "unexpected token [" + token + "], expected [, , {]"); + } - private Map scripts; + int split = id.indexOf('#'); - public Builder(ScriptMetaData previous) { - if (previous != null) { - this.scripts = new HashMap<>(previous.scripts); - } else { - this.scripts = new HashMap<>(); + if (split == -1) { + throw new IllegalArgumentException("illegal stored script id [" + id + "], does not contain lang"); + } else { + source = new StoredScriptSource(id.substring(0, split), parser.text(), Collections.emptyMap()); + } + scripts.put(id, source); + + id = null; + + break; + case START_OBJECT: + if (id == null) { + throw new ParsingException(parser.getTokenLocation(), + "unexpected token [" + token + "], expected [, , {]"); + } + + source = StoredScriptSource.fromXContent(parser); + scripts.put(id, source); + + break; + default: + throw new ParsingException(parser.getTokenLocation(), "unexpected token [" + token + "], expected [, , {]"); } - } - public Builder storeScript(String lang, String id, 
BytesReference script) { - BytesReference scriptBytest = new BytesArray(parseStoredScript(script)); - scripts.put(toKey(lang, id), new ScriptAsBytes(scriptBytest)); - return this; + token = parser.nextToken(); } - public Builder deleteScript(String lang, String id) { - if (scripts.remove(toKey(lang, id)) == null) { - throw new ResourceNotFoundException("Stored script with id [{}] for language [{}] does not exist", id, lang); + return new ScriptMetaData(scripts); + } + + public static NamedDiff readDiffFrom(StreamInput in) throws IOException { + return new ScriptMetadataDiff(in); + } + + private final Map scripts; + + /** + * Standard constructor to create metadata to store scripts. + * @param scripts The currently stored scripts. Must not be {@code null}, + * use and empty {@link Map} to specify there were no + * previously stored scripts. + */ + ScriptMetaData(Map scripts) { + this.scripts = Collections.unmodifiableMap(scripts); + } + + public ScriptMetaData(StreamInput in) throws IOException { + Map scripts = new HashMap<>(); + StoredScriptSource source; + int size = in.readVInt(); + + for (int i = 0; i < size; i++) { + String id = in.readString(); + + // Prior to version 5.3 all scripts were stored using the deprecated namespace. + // Split the id to find the language then use StoredScriptSource to parse the + // expected BytesReference after which a new StoredScriptSource is created + // with the appropriate language and options. + if (in.getVersion().before(Version.V_5_3_0)) { + int split = id.indexOf('#'); + + if (split == -1) { + throw new IllegalArgumentException("illegal stored script id [" + id + "], does not contain lang"); + } else { + source = new StoredScriptSource(in); + source = new StoredScriptSource(id.substring(0, split), source.getSource(), Collections.emptyMap()); + } + // Version 5.3+ can just be parsed normally using StoredScriptSource. + } else { + source = new StoredScriptSource(in); } - return this; - } - public ScriptMetaData build() { - return new ScriptMetaData(Collections.unmodifiableMap(scripts)); + scripts.put(id, source); } + + this.scripts = Collections.unmodifiableMap(scripts); } - static final class ScriptMetadataDiff implements Diff { + @Override + public void writeTo(StreamOutput out) throws IOException { + // Version 5.3+ will output the contents of the scripts' Map using + // StoredScriptSource to stored the language, code, and options. + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + out.writeVInt(scripts.size()); + + for (Map.Entry entry : scripts.entrySet()) { + out.writeString(entry.getKey()); + entry.getValue().writeTo(out); + } + // Prior to Version 5.3, stored scripts can only be read using the deprecated + // namespace. Scripts using the deprecated namespace are first isolated in a + // temporary Map, then written out. Since all scripts will be stored using the + // deprecated namespace, no scripts will be lost. 
+ } else { + Map filtered = new HashMap<>(); - final Diff> pipelines; + for (Map.Entry entry : scripts.entrySet()) { + if (entry.getKey().contains("#")) { + filtered.put(entry.getKey(), entry.getValue()); + } + } - ScriptMetadataDiff(ScriptMetaData before, ScriptMetaData after) { - this.pipelines = DiffableUtils.diff(before.scripts, after.scripts, DiffableUtils.getStringKeySerializer()); - } + out.writeVInt(filtered.size()); - public ScriptMetadataDiff(StreamInput in) throws IOException { - pipelines = DiffableUtils.readJdkMapDiff(in, DiffableUtils.getStringKeySerializer(), new ScriptAsBytes(null)); + for (Map.Entry entry : filtered.entrySet()) { + out.writeString(entry.getKey()); + entry.getValue().writeTo(out); + } } + } - @Override - public MetaData.Custom apply(MetaData.Custom part) { - return new ScriptMetaData(pipelines.apply(((ScriptMetaData) part).scripts)); - } - @Override - public void writeTo(StreamOutput out) throws IOException { - pipelines.writeTo(out); + + /** + * This will write XContent from {@link ScriptMetaData}. The following format will be written: + * + * {@code + * { + * "" : "<{@link StoredScriptSource#toXContent(XContentBuilder, Params)}>", + * "" : "<{@link StoredScriptSource#toXContent(XContentBuilder, Params)}>", + * ... + * } + * } + */ + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + for (Map.Entry entry : scripts.entrySet()) { + builder.field(entry.getKey()); + entry.getValue().toXContent(builder, params); } + + return builder; } - static final class ScriptAsBytes extends AbstractDiffable { + @Override + public Diff diff(MetaData.Custom before) { + return new ScriptMetadataDiff((ScriptMetaData)before, this); + } - public ScriptAsBytes(BytesReference script) { - this.script = script; - } + @Override + public String getWriteableName() { + return TYPE; + } - private final BytesReference script; + @Override + public EnumSet context() { + return MetaData.ALL_CONTEXTS; + } - @Override - public void writeTo(StreamOutput out) throws IOException { - out.writeBytesReference(script); + /** + * Retrieves a stored script from the new namespace if lang is {@code null}. + * Otherwise, returns a stored script from the deprecated namespace. Either + * way an id is required. 
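+     *
+     * For example, with an illustrative id: {@code getStoredScript("calc", null)} looks up the key
+     * {@code "calc"} in the new namespace, while {@code getStoredScript("calc", "painless")} looks up
+     * {@code "painless#calc"} in the deprecated namespace.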
+ */ + StoredScriptSource getStoredScript(String id, String lang) { + if (lang == null) { + return scripts.get(id); + } else { + return scripts.get(lang + "#" + id); } + } - @Override - public ScriptAsBytes readFrom(StreamInput in) throws IOException { - return new ScriptAsBytes(in.readBytesReference()); - } + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; - @Override - public boolean equals(Object o) { - if (this == o) return true; - if (o == null || getClass() != o.getClass()) return false; + ScriptMetaData that = (ScriptMetaData)o; - ScriptAsBytes that = (ScriptAsBytes) o; + return scripts.equals(that.scripts); - return script.equals(that.script); + } - } + @Override + public int hashCode() { + return scripts.hashCode(); + } - @Override - public int hashCode() { - return script.hashCode(); - } + @Override + public String toString() { + return "ScriptMetaData{" + + "scripts=" + scripts + + '}'; } } diff --git a/core/src/main/java/org/elasticsearch/script/ScriptModes.java b/core/src/main/java/org/elasticsearch/script/ScriptModes.java index 46ab2a44d21cd..f698e1a8b5427 100644 --- a/core/src/main/java/org/elasticsearch/script/ScriptModes.java +++ b/core/src/main/java/org/elasticsearch/script/ScriptModes.java @@ -21,12 +21,15 @@ import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.script.ScriptService.ScriptType; import java.util.Collections; import java.util.HashMap; +import java.util.HashSet; +import java.util.List; import java.util.Map; +import java.util.Set; import java.util.TreeMap; +import java.util.function.Function; /** * Holds the boolean indicating the enabled mode for each of the different scripting languages available, each script source and each @@ -39,12 +42,89 @@ public class ScriptModes { final Map scriptEnabled; - ScriptModes(ScriptSettings scriptSettings, Settings settings) { + private static final String NONE = "none"; + + public static final Setting> TYPES_ALLOWED_SETTING = + Setting.listSetting("script.allowed_types", Collections.emptyList(), Function.identity(), Setting.Property.NodeScope); + public static final Setting> CONTEXTS_ALLOWED_SETTING = + Setting.listSetting("script.allowed_contexts", Collections.emptyList(), Function.identity(), Setting.Property.NodeScope); + + private final Set typesAllowed; + private final Set contextsAllowed; + + ScriptModes(ScriptContextRegistry scriptContextRegistry, ScriptSettings scriptSettings, Settings settings) { HashMap scriptModes = new HashMap<>(); for (Setting scriptModeSetting : scriptSettings.getScriptLanguageSettings()) { scriptModes.put(scriptModeSetting.getKey(), scriptModeSetting.get(settings)); } this.scriptEnabled = Collections.unmodifiableMap(scriptModes); + + this.typesAllowed = TYPES_ALLOWED_SETTING.exists(settings) ? 
new HashSet<>() : null; + + if (this.typesAllowed != null) { + List typesAllowedList = TYPES_ALLOWED_SETTING.get(settings); + + if (typesAllowedList.isEmpty()) { + throw new IllegalArgumentException( + "must specify at least one script type or none for setting [" + TYPES_ALLOWED_SETTING.getKey() + "]."); + } + + for (String settingType : typesAllowedList) { + if (NONE.equals(settingType)) { + if (typesAllowedList.size() != 1) { + throw new IllegalArgumentException("cannot specify both [" + NONE + "]" + + " and other script types for setting [" + TYPES_ALLOWED_SETTING.getKey() + "]."); + } else { + break; + } + } + + boolean found = false; + + for (ScriptType scriptType : ScriptType.values()) { + if (scriptType.getName().equals(settingType)) { + found = true; + this.typesAllowed.add(settingType); + + break; + } + } + + if (found == false) { + throw new IllegalArgumentException( + "unknown script type [" + settingType + "] found in setting [" + TYPES_ALLOWED_SETTING.getKey() + "]."); + } + } + } + + this.contextsAllowed = CONTEXTS_ALLOWED_SETTING.exists(settings) ? new HashSet<>() : null; + + if (this.contextsAllowed != null) { + List contextsAllowedList = CONTEXTS_ALLOWED_SETTING.get(settings); + + if (contextsAllowedList.isEmpty()) { + throw new IllegalArgumentException( + "must specify at least one script context or none for setting [" + CONTEXTS_ALLOWED_SETTING.getKey() + "]."); + } + + for (String settingContext : contextsAllowedList) { + if (NONE.equals(settingContext)) { + if (contextsAllowedList.size() != 1) { + throw new IllegalArgumentException("cannot specify both [" + NONE + "]" + + " and other script contexts for setting [" + CONTEXTS_ALLOWED_SETTING.getKey() + "]."); + } else { + break; + } + } + + if (scriptContextRegistry.isSupportedContext(settingContext)) { + this.contextsAllowed.add(settingContext); + } else { + throw new IllegalArgumentException( + "unknown script context [" + settingContext + "] found in setting [" + CONTEXTS_ALLOWED_SETTING.getKey() + "]."); + } + } + } } /** @@ -61,6 +141,15 @@ public boolean getScriptEnabled(String lang, ScriptType scriptType, ScriptContex if (NativeScriptEngineService.NAME.equals(lang)) { return true; } + + if (typesAllowed != null && typesAllowed.contains(scriptType.getName()) == false) { + throw new IllegalArgumentException("[" + scriptType.getName() + "] scripts cannot be executed"); + } + + if (contextsAllowed != null && contextsAllowed.contains(scriptContext.getKey()) == false) { + throw new IllegalArgumentException("[" + scriptContext.getKey() + "] scripts cannot be executed"); + } + Boolean scriptMode = scriptEnabled.get(getKey(lang, scriptType, scriptContext)); if (scriptMode == null) { throw new IllegalArgumentException("script mode not found for lang [" + lang + "], script_type [" + scriptType + "], operation [" + scriptContext.getKey() + "]"); @@ -73,7 +162,7 @@ static String operationKey(ScriptContext scriptContext) { } static String sourceKey(ScriptType scriptType) { - return SCRIPT_SETTINGS_PREFIX + "." + scriptType.getScriptType(); + return SCRIPT_SETTINGS_PREFIX + "." 
+ scriptType.getName(); } static String getGlobalKey(String lang, ScriptType scriptType) { diff --git a/core/src/main/java/org/elasticsearch/script/ScriptService.java b/core/src/main/java/org/elasticsearch/script/ScriptService.java index 9e61f39378e4f..6ae0d206a9817 100644 --- a/core/src/main/java/org/elasticsearch/script/ScriptService.java +++ b/core/src/main/java/org/elasticsearch/script/ScriptService.java @@ -22,6 +22,7 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; import org.apache.lucene.util.IOUtils; +import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ResourceNotFoundException; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptRequest; @@ -35,7 +36,6 @@ import org.elasticsearch.cluster.ClusterStateListener; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.ParseField; import org.elasticsearch.common.Strings; import org.elasticsearch.common.breaker.CircuitBreakingException; import org.elasticsearch.common.bytes.BytesReference; @@ -46,17 +46,18 @@ import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.io.Streams; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.logging.LoggerMessageFormat; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.concurrent.ConcurrentCollections; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.json.JsonXContent; import org.elasticsearch.env.Environment; import org.elasticsearch.search.lookup.SearchLookup; +import org.elasticsearch.template.CompiledTemplate; import org.elasticsearch.watcher.FileChangesListener; import org.elasticsearch.watcher.FileWatcher; import org.elasticsearch.watcher.ResourceWatcherService; @@ -70,7 +71,6 @@ import java.util.Collection; import java.util.Collections; import java.util.HashMap; -import java.util.Locale; import java.util.Map; import java.util.Objects; import java.util.concurrent.ConcurrentMap; @@ -86,11 +86,11 @@ public class ScriptService extends AbstractComponent implements Closeable, Clust public static final Setting SCRIPT_CACHE_EXPIRE_SETTING = Setting.positiveTimeSetting("script.cache.expire", TimeValue.timeValueMillis(0), Property.NodeScope); public static final Setting SCRIPT_AUTO_RELOAD_ENABLED_SETTING = - Setting.boolSetting("script.auto_reload_enabled", true, Property.NodeScope); + Setting.boolSetting("script.auto_reload_enabled", true, Property.NodeScope, Property.Deprecated); public static final Setting SCRIPT_MAX_SIZE_IN_BYTES = Setting.intSetting("script.max_size_in_bytes", 65535, Property.NodeScope); public static final Setting SCRIPT_MAX_COMPILATIONS_PER_MINUTE = - Setting.intSetting("script.max_compilations_per_minute", 15, 0, Property.Dynamic, Property.NodeScope); + Setting.intSetting("script.max_compilations_per_minute", 15, 0, Property.Dynamic, Property.NodeScope, Property.Deprecated); private final Collection 
scriptEngines; private final Map scriptEnginesByLang; @@ -136,7 +136,7 @@ public ScriptService(Settings settings, Environment env, TimeValue cacheExpire = SCRIPT_CACHE_EXPIRE_SETTING.get(settings); if (cacheExpire.getNanos() != 0) { - cacheBuilder.setExpireAfterAccess(cacheExpire.nanos()); + cacheBuilder.setExpireAfterAccess(cacheExpire); } logger.debug("using script cache with max_size [{}], expire [{}]", cacheMaxSize, cacheExpire); @@ -152,7 +152,7 @@ public ScriptService(Settings settings, Environment env, this.scriptEnginesByLang = unmodifiableMap(enginesByLangBuilder); this.scriptEnginesByExt = unmodifiableMap(enginesByExtBuilder); - this.scriptModes = new ScriptModes(scriptSettings, settings); + this.scriptModes = new ScriptModes(scriptContextRegistry, scriptSettings, settings); // add file watcher for static scripts scriptsDirectory = env.scriptsFile(); @@ -162,7 +162,7 @@ public ScriptService(Settings settings, Environment env, FileWatcher fileWatcher = new FileWatcher(scriptsDirectory); fileWatcher.addListener(new ScriptChangesListener()); - if (SCRIPT_AUTO_RELOAD_ENABLED_SETTING.get(settings)) { + if (SCRIPT_AUTO_RELOAD_ENABLED_SETTING.get(settings) && resourceWatcherService != null) { // automatic reload is enabled - register scripts resourceWatcherService.add(fileWatcher); } else { @@ -209,18 +209,44 @@ void setMaxCompilationsPerMinute(Integer newMaxPerMinute) { /** * Checks if a script can be executed and compiles it if needed, or returns the previously compiled and cached script. */ - public CompiledScript compile(Script script, ScriptContext scriptContext, Map params) { - if (script == null) { - throw new IllegalArgumentException("The parameter script (Script) must not be null."); - } - if (scriptContext == null) { - throw new IllegalArgumentException("The parameter scriptContext (ScriptContext) must not be null."); - } + public CompiledScript compile(Script script, ScriptContext scriptContext) { + Objects.requireNonNull(script); + Objects.requireNonNull(scriptContext); + ScriptType type = script.getType(); String lang = script.getLang(); - ScriptEngineService scriptEngineService = getScriptEngineServiceForLang(lang); - if (canExecuteScript(lang, script.getType(), scriptContext) == false) { - throw new IllegalStateException("scripts of type [" + script.getType() + "], operation [" + scriptContext.getKey() + "] and lang [" + lang + "] are disabled"); + String idOrCode = script.getIdOrCode(); + Map options = script.getOptions(); + + String id = idOrCode; + + // lang may be null when looking up a stored script, so we must get the + // source to retrieve the lang before checking if the context is supported + if (type == ScriptType.STORED) { + // search template requests can possibly pass in the entire path instead + // of just an id for looking up a stored script, so we parse the path and + // check for appropriate errors + String[] path = id.split("/"); + + if (path.length == 3) { + if (lang != null && lang.equals(path[1]) == false) { + throw new IllegalStateException("conflicting script languages, found [" + path[1] + "] but expected [" + lang + "]"); + } + + id = path[2]; + + deprecationLogger.deprecated("use of [" + idOrCode + "] for looking up" + + " stored scripts/templates has been deprecated, use only [" + id + "] instead"); + } else if (path.length != 1) { + throw new IllegalArgumentException("illegal stored script format [" + id + "] use only "); + } + + // a stored script must be pulled from the cluster state every time in case + // the script has been updated since 
the last compilation + StoredScriptSource source = getScriptFromClusterState(id, lang); + lang = source.getLang(); + idOrCode = source.getSource(); + options = source.getOptions(); } // TODO: fix this through some API or something, that's wrong @@ -229,89 +255,32 @@ public CompiledScript compile(Script script, ScriptContext scriptContext, Map totalCompilesPerMinute) { - scriptsPerMinCounter = totalCompilesPerMinute; + " operation [" + scriptContext.getKey() + "] and lang [" + lang + "] are not supported"); } - // If there is enough tokens in the bucket, allow the request and decrease the tokens by 1 - if (scriptsPerMinCounter >= 1) { - scriptsPerMinCounter -= 1.0; - } else { - // Otherwise reject the request - throw new CircuitBreakingException("[script] Too many dynamic script compilations within one minute, max: [" + - totalCompilesPerMinute + "/min]; please use on-disk, indexed, or scripts with parameters instead; " + - "this limit can be changed by the [" + SCRIPT_MAX_COMPILATIONS_PER_MINUTE.getKey() + "] setting"); - } - } + ScriptEngineService scriptEngineService = getScriptEngineServiceForLang(lang); - /** - * Compiles a script straight-away, or returns the previously compiled and cached script, - * without checking if it can be executed based on settings. - */ - CompiledScript compileInternal(Script script, Map params) { - if (script == null) { - throw new IllegalArgumentException("The parameter script (Script) must not be null."); + if (canExecuteScript(lang, type, scriptContext) == false) { + throw new IllegalStateException("scripts of type [" + script.getType() + "]," + + " operation [" + scriptContext.getKey() + "] and lang [" + lang + "] are disabled"); } - String lang = script.getLang(); - ScriptType type = script.getType(); - //script.getScript() could return either a name or code for a script, - //but we check for a file script name first and an indexed script name second - String name = script.getScript(); - if (logger.isTraceEnabled()) { - logger.trace("Compiling lang: [{}] type: [{}] script: {}", lang, type, name); + logger.trace("compiling lang: [{}] type: [{}] script: {}", lang, type, idOrCode); } - ScriptEngineService scriptEngineService = getScriptEngineServiceForLang(lang); - if (type == ScriptType.FILE) { - CacheKey cacheKey = new CacheKey(scriptEngineService, name, null, params); - //On disk scripts will be loaded into the staticCache by the listener + CacheKey cacheKey = new CacheKey(lang, idOrCode, options); CompiledScript compiledScript = staticCache.get(cacheKey); if (compiledScript == null) { - throw new IllegalArgumentException("Unable to find on disk file script [" + name + "] using lang [" + lang + "]"); + throw new IllegalArgumentException("unable to find file script [" + idOrCode + "] using lang [" + lang + "]"); } return compiledScript; } - //script.getScript() will be code if the script type is inline - String code = script.getScript(); - - if (type == ScriptType.STORED) { - //The look up for an indexed script must be done every time in case - //the script has been updated in the index since the last look up. - final IndexedScript indexedScript = new IndexedScript(lang, name); - name = indexedScript.id; - code = getScriptFromClusterState(indexedScript.lang, indexedScript.id); - } - - CacheKey cacheKey = new CacheKey(scriptEngineService, type == ScriptType.INLINE ? 
null : name, code, params); + CacheKey cacheKey = new CacheKey(lang, idOrCode, options); CompiledScript compiledScript = cache.get(cacheKey); if (compiledScript != null) { @@ -330,18 +299,17 @@ CompiledScript compileInternal(Script script, Map params) { // but give the script engine the chance to be better, give it separate name + source code // for the inline case, then its anonymous: null. - String actualName = (type == ScriptType.INLINE) ? null : name; if (logger.isTraceEnabled()) { - logger.trace("compiling script, type: [{}], lang: [{}], params: [{}]", type, lang, params); + logger.trace("compiling script, type: [{}], lang: [{}], options: [{}]", type, lang, options); } // Check whether too many compilations have happened checkCompilationLimit(); - compiledScript = new CompiledScript(type, name, lang, scriptEngineService.compile(actualName, code, params)); + compiledScript = new CompiledScript(type, id, lang, scriptEngineService.compile(id, idOrCode, options)); } catch (ScriptException good) { // TODO: remove this try-catch completely, when all script engines have good exceptions! throw good; // its already good } catch (Exception exception) { - throw new GeneralScriptException("Failed to compile " + type + " script [" + name + "] using lang [" + lang + "]", exception); + throw new GeneralScriptException("Failed to compile " + type + " script [" + id + "] using lang [" + lang + "]", exception); } // Since the cache key is the script content itself we don't need to @@ -354,65 +322,109 @@ CompiledScript compileInternal(Script script, Map params) { } } - private String validateScriptLanguage(String scriptLang) { - Objects.requireNonNull(scriptLang); - if (scriptEnginesByLang.containsKey(scriptLang) == false) { - throw new IllegalArgumentException("script_lang not supported [" + scriptLang + "]"); + /** Compiles a template. Note this will be moved to a separate TemplateService in the future. */ + public CompiledTemplate compileTemplate(Script script, ScriptContext scriptContext) { + CompiledScript compiledScript = compile(script, scriptContext); + return params -> (BytesReference)executable(compiledScript, params).run(); + } + + /** + * Check whether there have been too many compilations within the last minute, throwing a circuit breaking exception if so. + * This is a variant of the token bucket algorithm: https://en.wikipedia.org/wiki/Token_bucket + * + * It can be thought of as a bucket with water, every time the bucket is checked, water is added proportional to the amount of time that + * elapsed since the last time it was checked. If there is enough water, some is removed and the request is allowed. If there is not + * enough water the request is denied. Just like a normal bucket, if water is added that overflows the bucket, the extra water/capacity + * is discarded - there can never be more water in the bucket than the size of the bucket. 
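+     *
+     * As a worked example with illustrative numbers: with the default limit of 15 compilations per
+     * minute the bucket holds at most 15 tokens and refills at roughly 15 / 60,000,000,000 tokens per
+     * nanosecond. If 8 seconds (8,000,000,000 ns) pass between two compilations, about
+     * 8e9 * (15 / 6e10) = 2 tokens are added back (never exceeding 15), and the new compilation then
+     * consumes one token.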
+ */ + void checkCompilationLimit() { + long now = System.nanoTime(); + long timePassed = now - lastInlineCompileTime; + lastInlineCompileTime = now; + + scriptsPerMinCounter += (timePassed) * compilesAllowedPerNano; + + // It's been over the time limit anyway, readjust the bucket to be level + if (scriptsPerMinCounter > totalCompilesPerMinute) { + scriptsPerMinCounter = totalCompilesPerMinute; } - return scriptLang; + + // If there is enough tokens in the bucket, allow the request and decrease the tokens by 1 + if (scriptsPerMinCounter >= 1) { + scriptsPerMinCounter -= 1.0; + } else { + // Otherwise reject the request + throw new CircuitBreakingException("[script] Too many dynamic script compilations within one minute, max: [" + + totalCompilesPerMinute + "/min]; please use on-disk, indexed, or scripts with parameters instead; " + + "this limit can be changed by the [" + SCRIPT_MAX_COMPILATIONS_PER_MINUTE.getKey() + "] setting"); + } + } + + public boolean isLangSupported(String lang) { + Objects.requireNonNull(lang); + + return scriptEnginesByLang.containsKey(lang); } - String getScriptFromClusterState(String scriptLang, String id) { - scriptLang = validateScriptLanguage(scriptLang); + StoredScriptSource getScriptFromClusterState(String id, String lang) { + if (lang != null && isLangSupported(lang) == false) { + throw new IllegalArgumentException("unable to get stored script with unsupported lang [" + lang + "]"); + } + ScriptMetaData scriptMetadata = clusterState.metaData().custom(ScriptMetaData.TYPE); + if (scriptMetadata == null) { - throw new ResourceNotFoundException("Unable to find script [" + scriptLang + "/" + id + "] in cluster state"); + throw new ResourceNotFoundException("unable to find script [" + id + "]" + + (lang == null ? "" : " using lang [" + lang + "]") + " in cluster state"); } - String script = scriptMetadata.getScript(scriptLang, id); - if (script == null) { - throw new ResourceNotFoundException("Unable to find script [" + scriptLang + "/" + id + "] in cluster state"); + StoredScriptSource source = scriptMetadata.getStoredScript(id, lang); + + if (source == null) { + throw new ResourceNotFoundException("unable to find script [" + id + "]" + + (lang == null ? "" : " using lang [" + lang + "]") + " in cluster state"); } - return script; + + return source; } - void validateStoredScript(String id, String scriptLang, BytesReference scriptBytes) { - validateScriptSize(id, scriptBytes.length()); - String script = ScriptMetaData.parseStoredScript(scriptBytes); - if (Strings.hasLength(scriptBytes)) { - //Just try and compile it - try { - ScriptEngineService scriptEngineService = getScriptEngineServiceForLang(scriptLang); - //we don't know yet what the script will be used for, but if all of the operations for this lang with - //indexed scripts are disabled, it makes no sense to even compile it. 
- if (isAnyScriptContextEnabled(scriptLang, ScriptType.STORED)) { - Object compiled = scriptEngineService.compile(id, script, Collections.emptyMap()); - if (compiled == null) { - throw new IllegalArgumentException("Unable to parse [" + script + "] lang [" + scriptLang + - "] (ScriptService.compile returned null)"); - } - } else { - logger.warn( - "skipping compile of script [{}], lang [{}] as all scripted operations are disabled for indexed scripts", - script, scriptLang); + public void putStoredScript(ClusterService clusterService, PutStoredScriptRequest request, + ActionListener listener) { + int max = SCRIPT_MAX_SIZE_IN_BYTES.get(settings); + + if (request.content().length() > max) { + throw new IllegalArgumentException("exceeded max allowed stored script size in bytes [" + max + "] with size [" + + request.content().length() + "] for script [" + request.id() + "]"); + } + + StoredScriptSource source = StoredScriptSource.parse(request.lang(), request.content(), request.xContentType()); + + if (isLangSupported(source.getLang()) == false) { + throw new IllegalArgumentException("unable to put stored script with unsupported lang [" + source.getLang() + "]"); + } + + try { + ScriptEngineService scriptEngineService = getScriptEngineServiceForLang(source.getLang()); + + if (isAnyScriptContextEnabled(source.getLang(), ScriptType.STORED)) { + Object compiled = scriptEngineService.compile(request.id(), source.getSource(), Collections.emptyMap()); + + if (compiled == null) { + throw new IllegalArgumentException("failed to parse/compile stored script [" + request.id() + "]" + + (source.getSource() == null ? "" : " using code [" + source.getSource() + "]")); } - } catch (ScriptException good) { - // TODO: remove this when all script engines have good exceptions! - throw good; // its already good! 
- } catch (Exception e) { - throw new IllegalArgumentException("Unable to parse [" + script + - "] lang [" + scriptLang + "]", e); + } else { + throw new IllegalArgumentException( + "cannot put stored script [" + request.id() + "], stored scripts cannot be run under any context"); } - } else { - throw new IllegalArgumentException("Unable to find script in : " + scriptBytes.utf8ToString()); + } catch (ScriptException good) { + throw good; + } catch (Exception exception) { + throw new IllegalArgumentException("failed to parse/compile stored script [" + request.id() + "]", exception); } - } - public void storeScript(ClusterService clusterService, PutStoredScriptRequest request, ActionListener listener) { - String scriptLang = validateScriptLanguage(request.scriptLang()); - //verify that the script compiles - validateStoredScript(request.id(), scriptLang, request.script()); - clusterService.submitStateUpdateTask("put-script-" + request.id(), new AckedClusterStateUpdateTask(request, listener) { + clusterService.submitStateUpdateTask("put-script-" + request.id(), + new AckedClusterStateUpdateTask(request, listener) { @Override protected PutStoredScriptResponse newResponse(boolean acknowledged) { @@ -421,23 +433,23 @@ protected PutStoredScriptResponse newResponse(boolean acknowledged) { @Override public ClusterState execute(ClusterState currentState) throws Exception { - return innerStoreScript(currentState, scriptLang, request); + ScriptMetaData smd = currentState.metaData().custom(ScriptMetaData.TYPE); + smd = ScriptMetaData.putStoredScript(smd, request.id(), source); + MetaData.Builder mdb = MetaData.builder(currentState.getMetaData()).putCustom(ScriptMetaData.TYPE, smd); + + return ClusterState.builder(currentState).metaData(mdb).build(); } }); } - static ClusterState innerStoreScript(ClusterState currentState, String validatedScriptLang, PutStoredScriptRequest request) { - ScriptMetaData scriptMetadata = currentState.metaData().custom(ScriptMetaData.TYPE); - ScriptMetaData.Builder scriptMetadataBuilder = new ScriptMetaData.Builder(scriptMetadata); - scriptMetadataBuilder.storeScript(validatedScriptLang, request.id(), request.script()); - MetaData.Builder metaDataBuilder = MetaData.builder(currentState.getMetaData()) - .putCustom(ScriptMetaData.TYPE, scriptMetadataBuilder.build()); - return ClusterState.builder(currentState).metaData(metaDataBuilder).build(); - } + public void deleteStoredScript(ClusterService clusterService, DeleteStoredScriptRequest request, + ActionListener listener) { + if (request.lang() != null && isLangSupported(request.lang()) == false) { + throw new IllegalArgumentException("unable to delete stored script with unsupported lang [" + request.lang() +"]"); + } - public void deleteStoredScript(ClusterService clusterService, DeleteStoredScriptRequest request, ActionListener listener) { - String scriptLang = validateScriptLanguage(request.scriptLang()); - clusterService.submitStateUpdateTask("delete-script-" + request.id(), new AckedClusterStateUpdateTask(request, listener) { + clusterService.submitStateUpdateTask("delete-script-" + request.id(), + new AckedClusterStateUpdateTask(request, listener) { @Override protected DeleteStoredScriptResponse newResponse(boolean acknowledged) { @@ -446,49 +458,46 @@ protected DeleteStoredScriptResponse newResponse(boolean acknowledged) { @Override public ClusterState execute(ClusterState currentState) throws Exception { - return innerDeleteScript(currentState, scriptLang, request); + ScriptMetaData smd = 
currentState.metaData().custom(ScriptMetaData.TYPE); + smd = ScriptMetaData.deleteStoredScript(smd, request.id(), request.lang()); + MetaData.Builder mdb = MetaData.builder(currentState.getMetaData()).putCustom(ScriptMetaData.TYPE, smd); + + return ClusterState.builder(currentState).metaData(mdb).build(); } }); } - static ClusterState innerDeleteScript(ClusterState currentState, String validatedLang, DeleteStoredScriptRequest request) { - ScriptMetaData scriptMetadata = currentState.metaData().custom(ScriptMetaData.TYPE); - ScriptMetaData.Builder scriptMetadataBuilder = new ScriptMetaData.Builder(scriptMetadata); - scriptMetadataBuilder.deleteScript(validatedLang, request.id()); - MetaData.Builder metaDataBuilder = MetaData.builder(currentState.getMetaData()) - .putCustom(ScriptMetaData.TYPE, scriptMetadataBuilder.build()); - return ClusterState.builder(currentState).metaData(metaDataBuilder).build(); - } - - public String getStoredScript(ClusterState state, GetStoredScriptRequest request) { + public StoredScriptSource getStoredScript(ClusterState state, GetStoredScriptRequest request) { ScriptMetaData scriptMetadata = state.metaData().custom(ScriptMetaData.TYPE); + if (scriptMetadata != null) { - return scriptMetadata.getScript(request.lang(), request.id()); + return scriptMetadata.getStoredScript(request.id(), request.lang()); } else { return null; } } /** - * Compiles (or retrieves from cache) and executes the provided script + * Executes a previously compiled script provided as an argument */ - public ExecutableScript executable(Script script, ScriptContext scriptContext, Map params) { - return executable(compile(script, scriptContext, params), script.getParams()); + public ExecutableScript executable(CompiledScript compiledScript, Map params) { + return getScriptEngineServiceForLang(compiledScript.lang()).executable(compiledScript, params); } /** - * Executes a previously compiled script provided as an argument + * Compiles (or retrieves from cache) and executes the provided search script */ - public ExecutableScript executable(CompiledScript compiledScript, Map vars) { - return getScriptEngineServiceForLang(compiledScript.lang()).executable(compiledScript, vars); + public SearchScript search(SearchLookup lookup, Script script, ScriptContext scriptContext) { + CompiledScript compiledScript = compile(script, scriptContext); + return search(lookup, compiledScript, script.getParams()); } /** - * Compiles (or retrieves from cache) and executes the provided search script + * Binds provided parameters to a compiled script returning a + * {@link SearchScript} ready for execution */ - public SearchScript search(SearchLookup lookup, Script script, ScriptContext scriptContext, Map params) { - CompiledScript compiledScript = compile(script, scriptContext, params); - return getScriptEngineServiceForLang(compiledScript.lang()).search(compiledScript, lookup, script.getParams()); + public SearchScript search(SearchLookup lookup, CompiledScript compiledScript, Map params) { + return getScriptEngineServiceForLang(compiledScript.lang()).search(compiledScript, lookup, params); } private boolean isAnyScriptContextEnabled(String lang, ScriptType scriptType) { @@ -502,7 +511,7 @@ private boolean isAnyScriptContextEnabled(String lang, ScriptType scriptType) { private boolean canExecuteScript(String lang, ScriptType scriptType, ScriptContext scriptContext) { assert lang != null; - if (scriptContextRegistry.isSupportedContext(scriptContext) == false) { + if 
(scriptContextRegistry.isSupportedContext(scriptContext.getKey()) == false) { throw new IllegalArgumentException("script context [" + scriptContext.getKey() + "] not supported"); } return scriptModes.getScriptEnabled(lang, scriptType, scriptContext); @@ -512,18 +521,6 @@ public ScriptStats stats() { return scriptMetrics.stats(); } - private void validateScriptSize(String identifier, int scriptSizeInBytes) { - int allowedScriptSizeInBytes = SCRIPT_MAX_SIZE_IN_BYTES.get(settings); - if (scriptSizeInBytes > allowedScriptSizeInBytes) { - String message = LoggerMessageFormat.format( - "Limit of script size in bytes [{}] has been exceeded for script [{}] with size [{}]", - allowedScriptSizeInBytes, - identifier, - scriptSizeInBytes); - throw new IllegalArgumentException(message); - } - } - @Override public void clusterChanged(ClusterChangedEvent event) { clusterState = event.state(); @@ -545,6 +542,7 @@ public void onRemoval(RemovalNotification notification } private class ScriptChangesListener implements FileChangesListener { + private boolean deprecationEmitted = false; private Tuple getScriptNameExt(Path file) { Path scriptPath = scriptsDirectory.relativize(file); @@ -577,6 +575,11 @@ public void onFileInit(Path file) { if (engineService == null) { logger.warn("No script engine found for [{}]", scriptNameExt.v2()); } else { + if (deprecationEmitted == false) { + deprecationLogger.deprecated("File scripts are deprecated. Use stored or inline scripts instead."); + deprecationEmitted = true; + } + try { //we don't know yet what the script will be used for, but if all of the operations for this lang // with file scripts are disabled, it makes no sense to even compile it and cache it. @@ -584,17 +587,33 @@ public void onFileInit(Path file) { logger.info("compiling script file [{}]", file.toAbsolutePath()); try (InputStreamReader reader = new InputStreamReader(Files.newInputStream(file), StandardCharsets.UTF_8)) { String script = Streams.copyToString(reader); - String name = scriptNameExt.v1(); - CacheKey cacheKey = new CacheKey(engineService, name, null, Collections.emptyMap()); + String id = scriptNameExt.v1(); + CacheKey cacheKey = new CacheKey(engineService.getType(), id, null); // pass the actual file name to the compiler (for script engines that care about this) Object executable = engineService.compile(file.getFileName().toString(), script, Collections.emptyMap()); - CompiledScript compiledScript = new CompiledScript(ScriptType.FILE, name, engineService.getType(), executable); + CompiledScript compiledScript = new CompiledScript(ScriptType.FILE, id, engineService.getType(), executable); staticCache.put(cacheKey, compiledScript); scriptMetrics.onCompilation(); } } else { logger.warn("skipping compile of script file [{}] as all scripted operations are disabled for file scripts", file.toAbsolutePath()); } + } catch (ScriptException e) { + try (XContentBuilder builder = JsonXContent.contentBuilder()) { + builder.prettyPrint(); + builder.startObject(); + ElasticsearchException.generateThrowableXContent(builder, ToXContent.EMPTY_PARAMS, e); + builder.endObject(); + logger.warn("failed to load/compile script [{}]: {}", scriptNameExt.v1(), builder.string()); + } catch (IOException ioe) { + ioe.addSuppressed(e); + logger.warn((Supplier) () -> new ParameterizedMessage( + "failed to log an appropriate warning after failing to load/compile script [{}]", scriptNameExt.v1()), ioe); + } + /* Log at the whole exception at the debug level as well just in case the stack trace is important. 
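The warn/debug split above relies on log4j2's lazy message construction so the formatted message is only built when the level is enabled. A small stand-alone sketch of that pattern; the logger and script names here are placeholders:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.ParameterizedMessage;
import org.apache.logging.log4j.util.Supplier;

public class LazyLoggingSketch {
    private static final Logger logger = LogManager.getLogger(LazyLoggingSketch.class);

    public static void main(String[] args) {
        String scriptName = "calculate-score.painless";              // placeholder
        Exception cause = new IllegalArgumentException("unexpected token");
        // The lambda is only evaluated if debug logging is enabled, so the
        // ParameterizedMessage is never built otherwise.
        logger.debug((Supplier<?>) () -> new ParameterizedMessage(
                "failed to load/compile script [{}]. full exception:", scriptName), cause);
    }
}
```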
That way you can + * turn on the stack trace if you need it. */ + logger.debug((Supplier) () -> new ParameterizedMessage("failed to load/compile script [{}]. full exception:", + scriptNameExt.v1()), e); } catch (Exception e) { logger.warn((Supplier) () -> new ParameterizedMessage("failed to load/compile script [{}]", scriptNameExt.v1()), e); } @@ -613,7 +632,7 @@ public void onFileDeleted(Path file) { ScriptEngineService engineService = getScriptEngineServiceForFileExt(scriptNameExt.v2()); assert engineService != null; logger.info("removing script file [{}]", file.toAbsolutePath()); - staticCache.remove(new CacheKey(engineService, scriptNameExt.v1(), null, Collections.emptyMap())); + staticCache.remove(new CacheKey(engineService.getType(), scriptNameExt.v1(), null)); } } @@ -624,79 +643,15 @@ public void onFileChanged(Path file) { } - /** - * The type of a script, more specifically where it gets loaded from: - * - provided dynamically at request time - * - loaded from an index - * - loaded from file - */ - public enum ScriptType { - - INLINE(0, "inline", "inline", false), - STORED(1, "id", "stored", false), - FILE(2, "file", "file", true); - - private final int val; - private final ParseField parseField; - private final String scriptType; - private final boolean defaultScriptEnabled; - - public static ScriptType readFrom(StreamInput in) throws IOException { - int scriptTypeVal = in.readVInt(); - for (ScriptType type : values()) { - if (type.val == scriptTypeVal) { - return type; - } - } - throw new IllegalArgumentException("Unexpected value read for ScriptType got [" + scriptTypeVal + "] expected one of [" - + INLINE.val + "," + FILE.val + "," + STORED.val + "]"); - } - - public static void writeTo(ScriptType scriptType, StreamOutput out) throws IOException{ - if (scriptType != null) { - out.writeVInt(scriptType.val); - } else { - out.writeVInt(INLINE.val); //Default to inline - } - } - - ScriptType(int val, String name, String scriptType, boolean defaultScriptEnabled) { - this.val = val; - this.parseField = new ParseField(name); - this.scriptType = scriptType; - this.defaultScriptEnabled = defaultScriptEnabled; - } - - public ParseField getParseField() { - return parseField; - } - - public boolean getDefaultScriptEnabled() { - return defaultScriptEnabled; - } - - public String getScriptType() { - return scriptType; - } - - @Override - public String toString() { - return name().toLowerCase(Locale.ROOT); - } - - } - private static final class CacheKey { final String lang; - final String name; - final String code; - final Map params; + final String idOrCode; + final Map options; - private CacheKey(final ScriptEngineService service, final String name, final String code, final Map params) { - this.lang = service.getType(); - this.name = name; - this.code = code; - this.params = params; + private CacheKey(String lang, String idOrCode, Map options) { + this.lang = lang; + this.idOrCode = idOrCode; + this.options = options; } @Override @@ -706,44 +661,18 @@ public boolean equals(Object o) { CacheKey cacheKey = (CacheKey)o; - if (!lang.equals(cacheKey.lang)) return false; - if (name != null ? !name.equals(cacheKey.name) : cacheKey.name != null) return false; - if (code != null ? !code.equals(cacheKey.code) : cacheKey.code != null) return false; - return params.equals(cacheKey.params); + if (lang != null ? !lang.equals(cacheKey.lang) : cacheKey.lang != null) return false; + if (!idOrCode.equals(cacheKey.idOrCode)) return false; + return options != null ? 
options.equals(cacheKey.options) : cacheKey.options == null; } @Override public int hashCode() { - int result = lang.hashCode(); - result = 31 * result + (name != null ? name.hashCode() : 0); - result = 31 * result + (code != null ? code.hashCode() : 0); - result = 31 * result + params.hashCode(); + int result = lang != null ? lang.hashCode() : 0; + result = 31 * result + idOrCode.hashCode(); + result = 31 * result + (options != null ? options.hashCode() : 0); return result; } } - - - private static class IndexedScript { - private final String lang; - private final String id; - - IndexedScript(String lang, String script) { - this.lang = lang; - final String[] parts = script.split("/"); - if (parts.length == 1) { - this.id = script; - } else { - if (parts.length != 3) { - throw new IllegalArgumentException("Illegal index script format [" + script + "]" + - " should be /lang/id"); - } else { - if (!parts[1].equals(this.lang)) { - throw new IllegalStateException("Conflicting script language, found [" + parts[1] + "] expected + ["+ this.lang + "]"); - } - this.id = parts[2]; - } - } - } - } } diff --git a/core/src/main/java/org/elasticsearch/script/ScriptSettings.java b/core/src/main/java/org/elasticsearch/script/ScriptSettings.java index 1cb2b35624544..a897fdc7be86e 100644 --- a/core/src/main/java/org/elasticsearch/script/ScriptSettings.java +++ b/core/src/main/java/org/elasticsearch/script/ScriptSettings.java @@ -25,6 +25,7 @@ import java.util.ArrayList; import java.util.Collections; +import java.util.EnumMap; import java.util.HashMap; import java.util.List; import java.util.Map; @@ -43,15 +44,16 @@ public class ScriptSettings { @Deprecated public static final String LEGACY_SCRIPT_SETTING = "script.legacy.default_lang"; - private static final Map> SCRIPT_TYPE_SETTING_MAP; + private static final Map> SCRIPT_TYPE_SETTING_MAP; static { - Map> scriptTypeSettingMap = new HashMap<>(); - for (ScriptService.ScriptType scriptType : ScriptService.ScriptType.values()) { + Map> scriptTypeSettingMap = new EnumMap<>(ScriptType.class); + for (ScriptType scriptType : ScriptType.values()) { scriptTypeSettingMap.put(scriptType, Setting.boolSetting( ScriptModes.sourceKey(scriptType), - scriptType.getDefaultScriptEnabled(), - Property.NodeScope)); + scriptType.isDefaultEnabled(), + Property.NodeScope, + Property.Deprecated)); } SCRIPT_TYPE_SETTING_MAP = Collections.unmodifiableMap(scriptTypeSettingMap); } @@ -79,12 +81,12 @@ private static Map> contextSettings(ScriptContex Map> scriptContextSettingMap = new HashMap<>(); for (ScriptContext scriptContext : scriptContextRegistry.scriptContexts()) { scriptContextSettingMap.put(scriptContext, - Setting.boolSetting(ScriptModes.operationKey(scriptContext), false, Property.NodeScope)); + Setting.boolSetting(ScriptModes.operationKey(scriptContext), false, Property.NodeScope, Property.Deprecated)); } return scriptContextSettingMap; } - private static List> languageSettings(Map> scriptTypeSettingMap, + private static List> languageSettings(Map> scriptTypeSettingMap, Map> scriptContextSettingMap, ScriptEngineRegistry scriptEngineRegistry, ScriptContextRegistry scriptContextRegistry) { @@ -96,20 +98,20 @@ private static List> languageSettings(Map defaultLangAndTypeFn = settings -> { final Setting globalTypeSetting = scriptTypeSettingMap.get(scriptType); final Setting langAndTypeSetting = Setting.boolSetting(ScriptModes.getGlobalKey(language, scriptType), - defaultIfNothingSet, Property.NodeScope); + defaultIfNothingSet, Property.NodeScope, Property.Deprecated); if 
(langAndTypeSetting.exists(settings)) { // fine-grained e.g. script.engine.groovy.inline @@ -124,7 +126,7 @@ private static List> languageSettings(Map langAndTypeSetting = Setting.boolSetting(ScriptModes.getGlobalKey(language, scriptType), - defaultLangAndTypeFn, Property.NodeScope); + defaultLangAndTypeFn, Property.NodeScope, Property.Deprecated); scriptModeSettings.add(langAndTypeSetting); for (ScriptContext scriptContext : scriptContextRegistry.scriptContexts()) { @@ -135,7 +137,7 @@ private static List> languageSettings(Map globalOpSetting = scriptContextSettingMap.get(scriptContext); final Setting globalTypeSetting = scriptTypeSettingMap.get(scriptType); final Setting langAndTypeAndContextSetting = Setting.boolSetting(langAndTypeAndContextName, - defaultIfNothingSet, Property.NodeScope); + defaultIfNothingSet, Property.NodeScope, Property.Deprecated); // fallback logic for script mode settings if (langAndTypeAndContextSetting.exists(settings)) { @@ -156,7 +158,8 @@ private static List> languageSettings(Map setting = Setting.boolSetting(langAndTypeAndContextName, defaultSettingFn, Property.NodeScope); + Setting setting = + Setting.boolSetting(langAndTypeAndContextName, defaultSettingFn, Property.NodeScope, Property.Deprecated); scriptModeSettings.add(setting); } } diff --git a/core/src/main/java/org/elasticsearch/script/ScriptType.java b/core/src/main/java/org/elasticsearch/script/ScriptType.java new file mode 100644 index 0000000000000..119aec469b1aa --- /dev/null +++ b/core/src/main/java/org/elasticsearch/script/ScriptType.java @@ -0,0 +1,140 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.script; + +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; + +import java.io.IOException; +import java.util.Locale; + +/** + * ScriptType represents the way a script is stored and retrieved from the {@link ScriptService}. + * It's also used to by {@link ScriptSettings} and {@link ScriptModes} to determine whether or not + * a {@link Script} is allowed to be executed based on both default and user-defined settings. + */ +public enum ScriptType implements Writeable { + + /** + * INLINE scripts are specified in numerous queries and compiled on-the-fly. + * They will be cached based on the lang and code of the script. + * They are turned off by default because most languages are insecure + * (Groovy and others), but can be overriden by the specific {@link ScriptEngineService} + * if the language is naturally secure (Painless, Mustache, and Expressions). 
+ */ + INLINE ( 0 , new ParseField("source", "inline") , false ), + + /** + * STORED scripts are saved as part of the {@link org.elasticsearch.cluster.ClusterState} + * based on user requests. They will be cached when they are first used in a query. + * They are turned off by default because most languages are insecure + * (Groovy and others), but can be overriden by the specific {@link ScriptEngineService} + * if the language is naturally secure (Painless, Mustache, and Expressions). + */ + STORED ( 1 , new ParseField("id", "stored") , false ), + + /** + * FILE scripts are loaded from disk either on start-up or on-the-fly depending on + * user-defined settings. They will be compiled and cached as soon as they are loaded + * from disk. They are turned on by default as they should always be safe to execute. + */ + FILE ( 2 , new ParseField("file") , true ); + + /** + * Reads an int from the input stream and converts it to a {@link ScriptType}. + * @return The ScriptType read from the stream. Throws an {@link IllegalStateException} + * if no ScriptType is found based on the id. + */ + public static ScriptType readFrom(StreamInput in) throws IOException { + int id = in.readVInt(); + + if (FILE.id == id) { + return FILE; + } else if (STORED.id == id) { + return STORED; + } else if (INLINE.id == id) { + return INLINE; + } else { + throw new IllegalStateException("Error reading ScriptType id [" + id + "] from stream, expected one of [" + + FILE.id + " [" + FILE.parseField.getPreferredName() + "], " + + STORED.id + " [" + STORED.parseField.getPreferredName() + "], " + + INLINE.id + " [" + INLINE.parseField.getPreferredName() + "]]"); + } + } + + private final int id; + private final ParseField parseField; + private final boolean defaultEnabled; + + /** + * Standard constructor. + * @param id A unique identifier for a type that can be read/written to a stream. + * @param parseField Specifies the name used to parse input from queries. + * @param defaultEnabled Whether or not a {@link ScriptType} can be run by default. + */ + ScriptType(int id, ParseField parseField, boolean defaultEnabled) { + this.id = id; + this.parseField = parseField; + this.defaultEnabled = defaultEnabled; + } + + public void writeTo(StreamOutput out) throws IOException { + out.writeVInt(id); + } + + /** + * @return The unique id for this {@link ScriptType}. + */ + public int getId() { + return id; + } + + /** + * @return The unique name for this {@link ScriptType} based on the {@link ParseField}. + */ + public String getName() { + return name().toLowerCase(Locale.ROOT); + } + + /** + * @return Specifies the name used to parse input from queries. + */ + public ParseField getParseField() { + return parseField; + } + + /** + * @return Whether or not a {@link ScriptType} can be run by default. Note + * this can be potentially overriden by any {@link ScriptEngineService}. + */ + public boolean isDefaultEnabled() { + return defaultEnabled; + } + + /** + * @return The same as calling {@link #getName()}. + */ + @Override + public String toString() { + return getName(); + } +} diff --git a/core/src/main/java/org/elasticsearch/script/StoredScriptSource.java b/core/src/main/java/org/elasticsearch/script/StoredScriptSource.java new file mode 100644 index 0000000000000..4c71b05a5082c --- /dev/null +++ b/core/src/main/java/org/elasticsearch/script/StoredScriptSource.java @@ -0,0 +1,495 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. 
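For reference, the id/name/default mapping defined by the enum above can be verified with a trivial stand-alone snippet; the `ScriptTypeDemo` class is only for illustration:

```java
import org.elasticsearch.script.ScriptType;

public class ScriptTypeDemo {
    public static void main(String[] args) {
        // Declaration order: INLINE(0), STORED(1), FILE(2)
        for (ScriptType type : ScriptType.values()) {
            System.out.println(type.getId() + " -> " + type.getName()
                    + " (enabled by default: " + type.isDefaultEnabled() + ")");
        }
        // expected output:
        // 0 -> inline (enabled by default: false)
        // 1 -> stored (enabled by default: false)
        // 2 -> file (enabled by default: true)
    }
}
```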
See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.script; + +import org.elasticsearch.Version; +import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptResponse; +import org.elasticsearch.cluster.AbstractDiffable; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.Diff; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.bytes.BytesArray; +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser.ValueType; +import org.elasticsearch.common.xcontent.ToXContentObject; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParser.Token; +import org.elasticsearch.common.xcontent.XContentType; + +import java.io.IOException; +import java.io.UncheckedIOException; +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; +import java.util.Objects; + +/** + * {@link StoredScriptSource} represents user-defined parameters for a script + * saved in the {@link ClusterState}. + */ +public class StoredScriptSource extends AbstractDiffable implements Writeable, ToXContentObject { + + /** + * Standard {@link ParseField} for outer level of stored script source. + */ + public static final ParseField SCRIPT_PARSE_FIELD = new ParseField("script"); + + /** + * Standard {@link ParseField} for outer level of stored script source. + */ + public static final ParseField TEMPLATE_PARSE_FIELD = new ParseField("template"); + + /** + * Standard {@link ParseField} for lang on the inner level. + */ + public static final ParseField LANG_PARSE_FIELD = new ParseField("lang"); + + /** + * Standard {@link ParseField} for source on the inner level. + */ + public static final ParseField SOURCE_PARSE_FIELD = new ParseField("source", "code"); + + /** + * Standard {@link ParseField} for options on the inner level. + */ + public static final ParseField OPTIONS_PARSE_FIELD = new ParseField("options"); + + /** + * Helper class used by {@link ObjectParser} to store mutable {@link StoredScriptSource} variables and then + * construct an immutable {@link StoredScriptSource} object based on parsed XContent. 
+ */ + private static final class Builder { + private String lang; + private String source; + private Map options; + + private Builder() { + // This cannot default to an empty map because options are potentially added at multiple points. + this.options = new HashMap<>(); + } + + private void setLang(String lang) { + this.lang = lang; + } + + /** + * Since stored scripts can accept templates rather than just scripts, they must also be able + * to handle template parsing, hence the need for custom parsing source. Templates can + * consist of either an {@link String} or a JSON object. If a JSON object is discovered + * then the content type option must also be saved as a compiler option. + */ + private void setSource(XContentParser parser) { + try { + if (parser.currentToken() == Token.START_OBJECT) { + //this is really for search templates, that need to be converted to json format + XContentBuilder builder = XContentFactory.jsonBuilder(); + source = builder.copyCurrentStructure(parser).string(); + options.put(Script.CONTENT_TYPE_OPTION, XContentType.JSON.mediaType()); + } else { + source = parser.text(); + } + } catch (IOException exception) { + throw new UncheckedIOException(exception); + } + } + + /** + * Options may have already been added if a template was specified. + * Appends the user-defined compiler options with the internal compiler options. + */ + private void setOptions(Map options) { + this.options.putAll(options); + } + + /** + * Validates the parameters and creates an {@link StoredScriptSource}. + */ + private StoredScriptSource build() { + if (lang == null) { + throw new IllegalArgumentException("must specify lang for stored script"); + } else if (lang.isEmpty()) { + throw new IllegalArgumentException("lang cannot be empty"); + } + + if (source == null) { + throw new IllegalArgumentException("must specify source for stored script"); + } else if (source.isEmpty()) { + throw new IllegalArgumentException("source cannot be empty"); + } + + if (options.size() > 1 || options.size() == 1 && options.get(Script.CONTENT_TYPE_OPTION) == null) { + throw new IllegalArgumentException("illegal compiler options [" + options + "] specified"); + } + + return new StoredScriptSource(lang, source, options); + } + } + + private static final ObjectParser PARSER = new ObjectParser<>("stored script source", Builder::new); + + static { + // Defines the fields necessary to parse a Script as XContent using an ObjectParser. + PARSER.declareString(Builder::setLang, LANG_PARSE_FIELD); + PARSER.declareField(Builder::setSource, parser -> parser, SOURCE_PARSE_FIELD, ValueType.OBJECT_OR_STRING); + PARSER.declareField(Builder::setOptions, XContentParser::mapStrings, OPTIONS_PARSE_FIELD, ValueType.OBJECT); + } + + /** + * This will parse XContent into a {@link StoredScriptSource}. The following formats can be parsed: + * + * The simple script format with no compiler options or user-defined params: + * + * Example: + * {@code + * {"script": "return Math.log(doc.popularity) * 100;"} + * } + * + * The above format requires the lang to be specified using the deprecated stored script namespace + * (as a url parameter during a put request). See {@link ScriptMetaData} for more information about + * the stored script namespaces. + * + * The complex script format using the new stored script namespace + * where lang and source are required but options is optional: + * + * {@code + * { + * "script" : { + * "lang" : "", + * "source" : "", + * "options" : { + * "option0" : "", + * "option1" : "", + * ... 
+ * } + * } + * } + * } + * + * Example: + * {@code + * { + * "script": { + * "lang" : "painless", + * "source" : "return Math.log(doc.popularity) * params.multiplier" + * } + * } + * } + * + * The use of "source" may also be substituted with "code" for backcompat with 5.3 to 5.5 format. For example: + * + * {@code + * { + * "script" : { + * "lang" : "", + * "code" : "", + * "options" : { + * "option0" : "", + * "option1" : "", + * ... + * } + * } + * } + * } + * + * The simple template format: + * + * {@code + * { + * "query" : ... + * } + * } + * + * The complex template format: + * + * {@code + * { + * "template": { + * "query" : ... + * } + * } + * } + * + * Note that templates can be handled as both strings and complex JSON objects. + * Also templates may be part of the 'source' parameter in a script. The Parser + * can handle this case as well. + * + * @param lang An optional parameter to allow for use of the deprecated stored + * script namespace. This will be used to specify the language + * coming in as a url parameter from a request or for stored templates. + * @param content The content from the request to be parsed as described above. + * @return The parsed {@link StoredScriptSource}. + */ + public static StoredScriptSource parse(String lang, BytesReference content, XContentType xContentType) { + try (XContentParser parser = xContentType.xContent().createParser(NamedXContentRegistry.EMPTY, content)) { + Token token = parser.nextToken(); + + if (token != Token.START_OBJECT) { + throw new ParsingException(parser.getTokenLocation(), "unexpected token [" + token + "], expected [{]"); + } + + token = parser.nextToken(); + + if (token != Token.FIELD_NAME) { + throw new ParsingException(parser.getTokenLocation(), "unexpected token [" + token + ", expected [" + + SCRIPT_PARSE_FIELD.getPreferredName() + ", " + TEMPLATE_PARSE_FIELD.getPreferredName()); + } + + String name = parser.currentName(); + + if (SCRIPT_PARSE_FIELD.getPreferredName().equals(name)) { + token = parser.nextToken(); + + if (token == Token.VALUE_STRING) { + if (lang == null) { + throw new IllegalArgumentException( + "must specify lang as a url parameter when using the deprecated stored script namespace"); + } + + return new StoredScriptSource(lang, parser.text(), Collections.emptyMap()); + } else if (token == Token.START_OBJECT) { + if (lang == null) { + return PARSER.apply(parser, null).build(); + } else { + //this is really for search templates, that need to be converted to json format + try (XContentBuilder builder = XContentFactory.jsonBuilder()) { + builder.copyCurrentStructure(parser); + return new StoredScriptSource(lang, builder.string(), Collections.emptyMap()); + } + } + + } else { + throw new ParsingException(parser.getTokenLocation(), "unexpected token [" + token + "], expected [{, ]"); + } + } else { + if (lang == null) { + throw new IllegalArgumentException("unexpected stored script format"); + } + + if (TEMPLATE_PARSE_FIELD.getPreferredName().equals(name)) { + token = parser.nextToken(); + + if (token == Token.VALUE_STRING) { + return new StoredScriptSource(lang, parser.text(), Collections.emptyMap()); + } + } + + try (XContentBuilder builder = XContentFactory.jsonBuilder()) { + if (token != Token.START_OBJECT) { + builder.startObject(); + builder.copyCurrentStructure(parser); + builder.endObject(); + } else { + builder.copyCurrentStructure(parser); + } + + return new StoredScriptSource(lang, builder.string(), Collections.emptyMap()); + } + } + } catch (IOException ioe) { + throw new 
UncheckedIOException(ioe); + } + } + + /** + * This will parse XContent into a {@link StoredScriptSource}. The following format is what will be parsed: + * + * {@code + * { + * "script" : { + * "lang" : "", + * "source" : "", + * "options" : { + * "option0" : "", + * "option1" : "", + * ... + * } + * } + * } + * } + * + * Note that the "source" parameter can also handle template parsing including from + * a complex JSON object. + */ + public static StoredScriptSource fromXContent(XContentParser parser) throws IOException { + return PARSER.apply(parser, null).build(); + } + + /** + * Required for {@link ScriptMetaData.ScriptMetadataDiff}. Uses + * the {@link StoredScriptSource#StoredScriptSource(StreamInput)} + * constructor. + */ + public static Diff readDiffFrom(StreamInput in) throws IOException { + return readDiffFrom(StoredScriptSource::new, in); + } + + private final String lang; + private final String source; + private final Map options; + + /** + * Constructor for use with {@link GetStoredScriptResponse} + * to support the deprecated stored script namespace. + */ + public StoredScriptSource(String source) { + this.lang = null; + this.source = Objects.requireNonNull(source); + this.options = null; + } + + /** + * Standard StoredScriptSource constructor. + * @param lang The language to compile the script with. Must not be {@code null}. + * @param source The source source to compile with. Must not be {@code null}. + * @param options Compiler options to be compiled with. Must not be {@code null}, + * use an empty {@link Map} to represent no options. + */ + public StoredScriptSource(String lang, String source, Map options) { + this.lang = Objects.requireNonNull(lang); + this.source = Objects.requireNonNull(source); + this.options = Collections.unmodifiableMap(Objects.requireNonNull(options)); + } + + /** + * Reads a {@link StoredScriptSource} from a stream. Version 5.3+ will read + * all of the lang, source, and options parameters. For versions prior to 5.3, + * only the source parameter will be read in as a bytes reference. + */ + public StoredScriptSource(StreamInput in) throws IOException { + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + this.lang = in.readString(); + this.source = in.readString(); + @SuppressWarnings("unchecked") + Map options = (Map)(Map)in.readMap(); + this.options = options; + } else { + this.lang = null; + this.source = in.readBytesReference().utf8ToString(); + this.options = null; + } + } + + /** + * Writes a {@link StoredScriptSource} to a stream. Version 5.3+ will write + * all of the lang, source, and options parameters. For versions prior to 5.3, + * only the source parameter will be read in as a bytes reference. + */ + @Override + public void writeTo(StreamOutput out) throws IOException { + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + out.writeString(lang); + out.writeString(source); + @SuppressWarnings("unchecked") + Map options = (Map)(Map)this.options; + out.writeMap(options); + } else { + out.writeBytesReference(new BytesArray(source)); + } + } + + /** + * This will write XContent from a {@link StoredScriptSource}. The following format will be written: + * + * {@code + * { + * "script" : { + * "lang" : "", + * "source" : "", + * "options" : { + * "option0" : "", + * "option1" : "", + * ... + * } + * } + * } + * } + * + * Note that the 'source' parameter can also handle templates written as complex JSON. 
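As a usage sketch, the "new namespace" format described above can be handed to `parse` with a `null` lang. `BytesArray` and `XContentType.JSON` are the same helpers used elsewhere in this diff; the JSON body and class name are just examples:

```java
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.script.StoredScriptSource;

public class StoredScriptSourceParseSketch {
    public static void main(String[] args) {
        String json = "{"
                + "\"script\": {"
                + "  \"lang\": \"painless\","
                + "  \"source\": \"return Math.log(doc.popularity) * params.multiplier\""
                + "}"
                + "}";

        // lang is null because the new namespace carries the lang inside the body
        StoredScriptSource source =
                StoredScriptSource.parse(null, new BytesArray(json), XContentType.JSON);

        assert "painless".equals(source.getLang());
        assert source.getOptions().isEmpty();   // no compiler options were given
        System.out.println(source.getSource()); // the script body as parsed
    }
}
```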
+ */ + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field(LANG_PARSE_FIELD.getPreferredName(), lang); + builder.field(SOURCE_PARSE_FIELD.getPreferredName(), source); + builder.field(OPTIONS_PARSE_FIELD.getPreferredName(), options); + builder.endObject(); + + return builder; + } + + /** + * @return The language used for compiling this script. + */ + public String getLang() { + return lang; + } + + /** + * @return The source used for compiling this script. + */ + public String getSource() { + return source; + } + + /** + * @return The compiler options used for this script. + */ + public Map getOptions() { + return options; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + + StoredScriptSource that = (StoredScriptSource)o; + + if (lang != null ? !lang.equals(that.lang) : that.lang != null) return false; + if (source != null ? !source.equals(that.source) : that.source != null) return false; + return options != null ? options.equals(that.options) : that.options == null; + + } + + @Override + public int hashCode() { + int result = lang != null ? lang.hashCode() : 0; + result = 31 * result + (source != null ? source.hashCode() : 0); + result = 31 * result + (options != null ? options.hashCode() : 0); + return result; + } + + @Override + public String toString() { + return "StoredScriptSource{" + + "lang='" + lang + '\'' + + ", source='" + source + '\'' + + ", options=" + options + + '}'; + } +} diff --git a/core/src/main/java/org/elasticsearch/search/internal/DefaultSearchContext.java b/core/src/main/java/org/elasticsearch/search/DefaultSearchContext.java similarity index 79% rename from core/src/main/java/org/elasticsearch/search/internal/DefaultSearchContext.java rename to core/src/main/java/org/elasticsearch/search/DefaultSearchContext.java index 780352508bfe5..45b8967545414 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/DefaultSearchContext.java +++ b/core/src/main/java/org/elasticsearch/search/DefaultSearchContext.java @@ -17,20 +17,18 @@ * under the License. 
*/ -package org.elasticsearch.search.internal; +package org.elasticsearch.search; -import org.apache.lucene.queries.TermsQuery; import org.apache.lucene.search.BooleanClause.Occur; import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.Collector; -import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.FieldDoc; import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.Counter; +import org.elasticsearch.action.search.SearchTask; import org.elasticsearch.action.search.SearchType; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; @@ -39,7 +37,6 @@ import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.index.IndexService; import org.elasticsearch.index.IndexSettings; -import org.elasticsearch.index.analysis.AnalysisService; import org.elasticsearch.index.cache.bitset.BitsetFilterCache; import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.fielddata.IndexFieldDataService; @@ -49,22 +46,25 @@ import org.elasticsearch.index.mapper.TypeFieldMapper; import org.elasticsearch.index.query.AbstractQueryBuilder; import org.elasticsearch.index.query.ParsedQuery; +import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryShardContext; -import org.elasticsearch.search.fetch.StoredFieldsContext; +import org.elasticsearch.index.search.NestedHelper; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.similarity.SimilarityService; -import org.elasticsearch.script.ScriptService; -import org.elasticsearch.search.SearchShardTarget; import org.elasticsearch.search.aggregations.SearchContextAggregations; +import org.elasticsearch.search.collapse.CollapseContext; import org.elasticsearch.search.dfs.DfsSearchResult; import org.elasticsearch.search.fetch.FetchPhase; import org.elasticsearch.search.fetch.FetchSearchResult; -import org.elasticsearch.search.fetch.FetchSubPhase; -import org.elasticsearch.search.fetch.FetchSubPhaseContext; +import org.elasticsearch.search.fetch.StoredFieldsContext; +import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; import org.elasticsearch.search.fetch.subphase.ScriptFieldsContext; import org.elasticsearch.search.fetch.subphase.highlight.SearchContextHighlight; -import org.elasticsearch.search.lookup.SearchLookup; +import org.elasticsearch.search.internal.ContextIndexSearcher; +import org.elasticsearch.search.internal.ScrollContext; +import org.elasticsearch.search.internal.SearchContext; +import org.elasticsearch.search.internal.ShardSearchRequest; import org.elasticsearch.search.profile.Profilers; import org.elasticsearch.search.query.QueryPhaseExecutionException; import org.elasticsearch.search.query.QuerySearchResult; @@ -74,16 +74,15 @@ import org.elasticsearch.search.suggest.SuggestionSearchContext; import java.io.IOException; +import java.io.UncheckedIOException; import java.util.ArrayList; +import java.util.Arrays; import java.util.Collections; import java.util.HashMap; import java.util.List; import java.util.Map; -/** - * - */ -public class DefaultSearchContext extends SearchContext { +final class DefaultSearchContext extends SearchContext { private final long id; private 
final ShardSearchRequest request; @@ -91,7 +90,6 @@ public class DefaultSearchContext extends SearchContext { private final Counter timeEstimateCounter; private SearchType searchType; private final Engine.Searcher engineSearcher; - private final ScriptService scriptService; private final BigArrays bigArrays; private final IndexShard indexShard; private final IndexService indexService; @@ -99,7 +97,7 @@ public class DefaultSearchContext extends SearchContext { private final DfsSearchResult dfsResult; private final QuerySearchResult queryResult; private final FetchSearchResult fetchResult; - private float queryBoost = 1.0f; + private final float queryBoost; private TimeValue timeout; // terminate after count private int terminateAfter = DEFAULT_TERMINATE_AFTER; @@ -110,14 +108,19 @@ public class DefaultSearchContext extends SearchContext { private StoredFieldsContext storedFields; private ScriptFieldsContext scriptFields; private FetchSourceContext fetchSourceContext; + private DocValueFieldsContext docValueFieldsContext; private int from = -1; private int size = -1; private SortAndFormats sort; private Float minimumScore; private boolean trackScores = false; // when sorting, track scores as well... private FieldDoc searchAfter; + private CollapseContext collapse; + private boolean lowLevelCancellation; // filter for sliced scroll private SliceBuilder sliceBuilder; + private SearchTask task; + /** * The original query as sent by the user without the types and aliases @@ -125,10 +128,7 @@ public class DefaultSearchContext extends SearchContext { * things like the type filter or alias filters. */ private ParsedQuery originalQuery; - /** - * Just like originalQuery but with the filters from types, aliases and slice applied. - */ - private ParsedQuery filteredQuery; + /** * The query to actually execute. 
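The filtered-query construction that replaces the removed `filteredQuery` field (see `buildFilteredQuery` further down in this hunk) follows the usual Lucene pattern of scoring the main query and attaching the type, alias and slice filters as non-scoring clauses. A minimal sketch with placeholder queries:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class FilteredQuerySketch {
    public static void main(String[] args) {
        Query mainQuery  = new TermQuery(new Term("title", "elasticsearch")); // placeholder
        Query typeFilter = new TermQuery(new Term("_type", "tweet"));          // placeholder

        // MUST contributes to scoring; FILTER does not and can be cached
        Query filtered = new BooleanQuery.Builder()
                .add(mainQuery, Occur.MUST)
                .add(typeFilter, Occur.FILTER)
                .build();

        System.out.println(filtered);   // +title:elasticsearch #_type:tweet
    }
}
```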
*/ @@ -142,29 +142,25 @@ public class DefaultSearchContext extends SearchContext { private SearchContextHighlight highlight; private SuggestionSearchContext suggest; private List rescore; - private SearchLookup searchLookup; private volatile long keepAlive; private final long originNanoTime = System.nanoTime(); private volatile long lastAccessTime = -1; private Profilers profilers; - private final Map subPhaseContexts = new HashMap<>(); + private final Map searchExtBuilders = new HashMap<>(); private final Map, Collector> queryCollectors = new HashMap<>(); private final QueryShardContext queryShardContext; private FetchPhase fetchPhase; - public DefaultSearchContext(long id, ShardSearchRequest request, SearchShardTarget shardTarget, Engine.Searcher engineSearcher, - IndexService indexService, IndexShard indexShard, ScriptService scriptService, - BigArrays bigArrays, Counter timeEstimateCounter, ParseFieldMatcher parseFieldMatcher, TimeValue timeout, - FetchPhase fetchPhase) { - super(parseFieldMatcher); + DefaultSearchContext(long id, ShardSearchRequest request, SearchShardTarget shardTarget, Engine.Searcher engineSearcher, + IndexService indexService, IndexShard indexShard, + BigArrays bigArrays, Counter timeEstimateCounter, TimeValue timeout, FetchPhase fetchPhase) { this.id = id; this.request = request; this.fetchPhase = fetchPhase; this.searchType = request.searchType(); this.shardTarget = shardTarget; this.engineSearcher = engineSearcher; - this.scriptService = scriptService; // SearchContexts use a BigArrays that can circuit break this.bigArrays = bigArrays.withCircuitBreaking(); this.dfsResult = new DfsSearchResult(id, shardTarget); @@ -175,8 +171,9 @@ public DefaultSearchContext(long id, ShardSearchRequest request, SearchShardTarg this.searcher = new ContextIndexSearcher(engineSearcher, indexService.cache().query(), indexShard.getQueryCachingPolicy()); this.timeEstimateCounter = timeEstimateCounter; this.timeout = timeout; - queryShardContext = indexService.newQueryShardContext(searcher.getIndexReader()); + queryShardContext = indexService.newQueryShardContext(request.shardId().id(), searcher.getIndexReader(), request::nowInMillis); queryShardContext.setTypes(request.types()); + queryBoost = request.indexBoost(); } @Override @@ -189,7 +186,7 @@ public void doClose() { * Should be called before executing the main query and after all other parameters have been set. */ @Override - public void preProcess() { + public void preProcess(boolean rewrite) { if (hasOnlySuggest() ) { return; } @@ -235,7 +232,12 @@ public void preProcess() { } // initialize the filtering alias based on the provided filters - aliasFilter = indexService.aliasFilter(queryShardContext, request.filteringAliases()); + try { + final QueryBuilder queryBuilder = request.filteringAliases(); + aliasFilter = queryBuilder == null ? 
null : queryBuilder.toFilter(queryShardContext); + } catch (IOException e) { + throw new UncheckedIOException(e); + } if (query() == null) { parsedQuery(ParsedQuery.parsedMatchAllQuery()); @@ -243,75 +245,61 @@ public void preProcess() { if (queryBoost() != AbstractQueryBuilder.DEFAULT_BOOST) { parsedQuery(new ParsedQuery(new FunctionScoreQuery(query(), new WeightFactorFunction(queryBoost)), parsedQuery())); } - filteredQuery(buildFilteredQuery()); - try { - this.query = searcher().rewrite(this.query); - } catch (IOException e) { - throw new QueryPhaseExecutionException(this, "Failed to rewrite main query", e); - } - } - - private ParsedQuery buildFilteredQuery() { - Query searchFilter = searchFilter(queryShardContext.getTypes()); - if (searchFilter == null) { - return originalQuery; - } - Query result; - if (Queries.isConstantMatchAllQuery(query())) { - result = new ConstantScoreQuery(searchFilter); - } else { - result = new BooleanQuery.Builder() - .add(query, Occur.MUST) - .add(searchFilter, Occur.FILTER) - .build(); + this.query = buildFilteredQuery(query); + if (rewrite) { + try { + this.query = searcher.rewrite(query); + } catch (IOException e) { + throw new QueryPhaseExecutionException(this, "Failed to rewrite main query", e); + } } - return new ParsedQuery(result, originalQuery); } @Override - @Nullable - public Query searchFilter(String[] types) { - Query typesFilter = createSearchFilter(types, aliasFilter, mapperService().hasNested()); - if (sliceBuilder == null) { - return typesFilter; + public Query buildFilteredQuery(Query query) { + List filters = new ArrayList<>(); + Query typeFilter = createTypeFilter(queryShardContext.getTypes()); + if (typeFilter != null) { + filters.add(typeFilter); } - Query sliceFilter = sliceBuilder.toFilter(queryShardContext, shardTarget().getShardId().getId(), - queryShardContext.getIndexSettings().getNumberOfShards()); - if (typesFilter == null) { - return sliceFilter; - } - return new BooleanQuery.Builder() - .add(typesFilter, Occur.FILTER) - .add(sliceFilter, Occur.FILTER) - .build(); - } - // extracted to static helper method to make writing unit tests easier: - static Query createSearchFilter(String[] types, Query aliasFilter, boolean hasNestedFields) { - Query typesFilter = null; - if (types != null && types.length >= 1) { - BytesRef[] typesBytes = new BytesRef[types.length]; - for (int i = 0; i < typesBytes.length; i++) { - typesBytes[i] = new BytesRef(types[i]); - } - typesFilter = new TermsQuery(TypeFieldMapper.NAME, typesBytes); + if (mapperService().hasNested() + && typeFilter == null // when a _type filter is set, it will automatically exclude nested docs + && new NestedHelper(mapperService()).mightMatchNestedDocs(query) + && (aliasFilter == null || new NestedHelper(mapperService()).mightMatchNestedDocs(aliasFilter))) { + filters.add(Queries.newNonNestedFilter()); } - if (typesFilter == null && aliasFilter == null && hasNestedFields == false) { - return null; + if (aliasFilter != null) { + filters.add(aliasFilter); } - BooleanQuery.Builder bq = new BooleanQuery.Builder(); - if (typesFilter != null) { - bq.add(typesFilter, Occur.FILTER); - } else if (hasNestedFields) { - bq.add(Queries.newNonNestedFilter(), Occur.FILTER); + if (sliceBuilder != null) { + filters.add(sliceBuilder.toFilter(queryShardContext, shardTarget().getShardId().getId(), + queryShardContext.getIndexSettings().getNumberOfShards())); } - if (aliasFilter != null) { - bq.add(aliasFilter, Occur.FILTER); + + if (filters.isEmpty()) { + return query; + } else { + 
BooleanQuery.Builder builder = new BooleanQuery.Builder(); + builder.add(query, Occur.MUST); + for (Query filter : filters) { + builder.add(filter, Occur.FILTER); + } + return builder.build(); } + } - return bq.build(); + private Query createTypeFilter(String[] types) { + if (types != null && types.length >= 1) { + MappedFieldType ft = mapperService().fullName(TypeFieldMapper.NAME); + if (ft != null) { + // ft might be null if no documents have been indexed yet + return ft.termsQuery(Arrays.asList(types), queryShardContext); + } + } + return null; } @Override @@ -349,22 +337,11 @@ public float queryBoost() { return queryBoost; } - @Override - public SearchContext queryBoost(float queryBoost) { - this.queryBoost = queryBoost; - return this; - } - @Override public long getOriginNanoTime() { return originNanoTime; } - @Override - protected long nowInMillisImpl() { - return request.nowInMillis(); - } - @Override public ScrollContext scrollContext() { return this.scrollContext; @@ -388,14 +365,16 @@ public SearchContext aggregations(SearchContextAggregations aggregations) { } @Override - public SubPhaseContext getFetchSubPhaseContext(FetchSubPhase.ContextFactory contextFactory) { - String subPhaseName = contextFactory.getName(); - if (subPhaseContexts.get(subPhaseName) == null) { - subPhaseContexts.put(subPhaseName, contextFactory.newContextInstance()); - } - return (SubPhaseContext) subPhaseContexts.get(subPhaseName); + public void addSearchExt(SearchExtBuilder searchExtBuilder) { + //it's ok to use the writeable name here given that we enforce it to be the same as the name of the element that gets + //parsed by the corresponding parser. There is one single name and one single way to retrieve the parsed object from the context. + searchExtBuilders.put(searchExtBuilder.getWriteableName(), searchExtBuilder); } + @Override + public SearchExtBuilder getSearchExt(String name) { + return searchExtBuilders.get(name); + } @Override public SearchContextHighlight highlight() { @@ -470,6 +449,17 @@ public SearchContext fetchSourceContext(FetchSourceContext fetchSourceContext) { return this; } + @Override + public DocValueFieldsContext docValueFieldsContext() { + return docValueFieldsContext; + } + + @Override + public SearchContext docValueFieldsContext(DocValueFieldsContext docValueFieldsContext) { + this.docValueFieldsContext = docValueFieldsContext; + return this; + } + @Override public ContextIndexSearcher searcher() { return this.searcher; @@ -485,21 +475,11 @@ public MapperService mapperService() { return indexService.mapperService(); } - @Override - public AnalysisService analysisService() { - return indexService.analysisService(); - } - @Override public SimilarityService similarityService() { return indexService.similarityService(); } - @Override - public ScriptService scriptService() { - return scriptService; - } - @Override public BigArrays bigArrays() { return bigArrays; @@ -574,11 +554,31 @@ public SearchContext searchAfter(FieldDoc searchAfter) { return this; } + @Override + public boolean lowLevelCancellation() { + return lowLevelCancellation; + } + + public void lowLevelCancellation(boolean lowLevelCancellation) { + this.lowLevelCancellation = lowLevelCancellation; + } + @Override public FieldDoc searchAfter() { return searchAfter; } + @Override + public SearchContext collapse(CollapseContext collapse) { + this.collapse = collapse; + return this; + } + + @Override + public CollapseContext collapse() { + return collapse; + } + public SearchContext sliceBuilder(SliceBuilder sliceBuilder) { 
this.sliceBuilder = sliceBuilder; return this; @@ -607,15 +607,6 @@ public SearchContext parsedQuery(ParsedQuery query) { return this; } - public ParsedQuery filteredQuery() { - return filteredQuery; - } - - private void filteredQuery(ParsedQuery filteredQuery) { - this.filteredQuery = filteredQuery; - this.query = filteredQuery.query(); - } - @Override public ParsedQuery parsedQuery() { return this.originalQuery; @@ -751,15 +742,6 @@ public void keepAlive(long keepAlive) { this.keepAlive = keepAlive; } - @Override - public SearchLookup lookup() { - // TODO: The types should take into account the parsing context in QueryParserContext... - if (searchLookup == null) { - searchLookup = new SearchLookup(mapperService(), fieldData(), request.types()); - } - return searchLookup; - } - @Override public DfsSearchResult dfsResult() { return dfsResult; @@ -813,4 +795,19 @@ public Profilers getProfilers() { public void setProfilers(Profilers profilers) { this.profilers = profilers; } + + @Override + public void setTask(SearchTask task) { + this.task = task; + } + + @Override + public SearchTask getTask() { + return task; + } + + @Override + public boolean isCancelled() { + return task.isCancelled(); + } } diff --git a/core/src/main/java/org/elasticsearch/search/DocValueFormat.java b/core/src/main/java/org/elasticsearch/search/DocValueFormat.java index 4cbb8720d775b..eb76db3be687b 100644 --- a/core/src/main/java/org/elasticsearch/search/DocValueFormat.java +++ b/core/src/main/java/org/elasticsearch/search/DocValueFormat.java @@ -41,7 +41,7 @@ import java.util.Arrays; import java.util.Locale; import java.util.Objects; -import java.util.concurrent.Callable; +import java.util.function.LongSupplier; /** A formatter for values as returned by the fielddata/doc-values APIs. */ public interface DocValueFormat extends NamedWriteable { @@ -63,11 +63,11 @@ public interface DocValueFormat extends NamedWriteable { /** Parse a value that was formatted with {@link #format(long)} back to the * original long value. */ - long parseLong(String value, boolean roundUp, Callable now); + long parseLong(String value, boolean roundUp, LongSupplier now); /** Parse a value that was formatted with {@link #format(double)} back to * the original double value. */ - double parseDouble(String value, boolean roundUp, Callable now); + double parseDouble(String value, boolean roundUp, LongSupplier now); /** Parse a value that was formatted with {@link #format(BytesRef)} back * to the original BytesRef. 
*/ @@ -100,7 +100,7 @@ public String format(BytesRef value) { } @Override - public long parseLong(String value, boolean roundUp, Callable now) { + public long parseLong(String value, boolean roundUp, LongSupplier now) { double d = Double.parseDouble(value); if (roundUp) { d = Math.ceil(d); @@ -111,7 +111,7 @@ public long parseLong(String value, boolean roundUp, Callable now) { } @Override - public double parseDouble(String value, boolean roundUp, Callable now) { + public double parseDouble(String value, boolean roundUp, LongSupplier now) { return Double.parseDouble(value); } @@ -166,12 +166,12 @@ public String format(BytesRef value) { } @Override - public long parseLong(String value, boolean roundUp, Callable now) { + public long parseLong(String value, boolean roundUp, LongSupplier now) { return parser.parse(value, now, roundUp, timeZone); } @Override - public double parseDouble(String value, boolean roundUp, Callable now) { + public double parseDouble(String value, boolean roundUp, LongSupplier now) { return parseLong(value, roundUp, now); } @@ -208,12 +208,12 @@ public String format(BytesRef value) { } @Override - public long parseLong(String value, boolean roundUp, Callable now) { + public long parseLong(String value, boolean roundUp, LongSupplier now) { throw new UnsupportedOperationException(); } @Override - public double parseDouble(String value, boolean roundUp, Callable now) { + public double parseDouble(String value, boolean roundUp, LongSupplier now) { throw new UnsupportedOperationException(); } @@ -250,7 +250,7 @@ public String format(BytesRef value) { } @Override - public long parseLong(String value, boolean roundUp, Callable now) { + public long parseLong(String value, boolean roundUp, LongSupplier now) { switch (value) { case "false": return 0; @@ -261,8 +261,8 @@ public long parseLong(String value, boolean roundUp, Callable now) { } @Override - public double parseDouble(String value, boolean roundUp, Callable now) { - throw new UnsupportedOperationException(); + public double parseDouble(String value, boolean roundUp, LongSupplier now) { + return parseLong(value, roundUp, now); } @Override @@ -300,12 +300,12 @@ public String format(BytesRef value) { } @Override - public long parseLong(String value, boolean roundUp, Callable now) { + public long parseLong(String value, boolean roundUp, LongSupplier now) { throw new UnsupportedOperationException(); } @Override - public double parseDouble(String value, boolean roundUp, Callable now) { + public double parseDouble(String value, boolean roundUp, LongSupplier now) { throw new UnsupportedOperationException(); } @@ -358,7 +358,7 @@ public String format(BytesRef value) { } @Override - public long parseLong(String value, boolean roundUp, Callable now) { + public long parseLong(String value, boolean roundUp, LongSupplier now) { Number n; try { n = format.parse(value); @@ -379,7 +379,7 @@ public long parseLong(String value, boolean roundUp, Callable now) { } @Override - public double parseDouble(String value, boolean roundUp, Callable now) { + public double parseDouble(String value, boolean roundUp, LongSupplier now) { Number n; try { n = format.parse(value); @@ -393,5 +393,22 @@ public double parseDouble(String value, boolean roundUp, Callable now) { public BytesRef parseBytesRef(String value) { throw new UnsupportedOperationException(); } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + Decimal that = (Decimal) o; + return 
Objects.equals(pattern, that.pattern); + } + + @Override + public int hashCode() { + return Objects.hash(pattern); + } } } diff --git a/core/src/main/java/org/elasticsearch/search/MultiValueMode.java b/core/src/main/java/org/elasticsearch/search/MultiValueMode.java index 90e3417ad1f11..826487d108813 100644 --- a/core/src/main/java/org/elasticsearch/search/MultiValueMode.java +++ b/core/src/main/java/org/elasticsearch/search/MultiValueMode.java @@ -935,14 +935,10 @@ public interface UnsortedNumericDoubleValues { @Override public void writeTo(StreamOutput out) throws IOException { - out.writeVInt(this.ordinal()); + out.writeEnum(this); } public static MultiValueMode readMultiValueModeFrom(StreamInput in) throws IOException { - int ordinal = in.readVInt(); - if (ordinal < 0 || ordinal >= values().length) { - throw new IOException("Unknown MultiValueMode ordinal [" + ordinal + "]"); - } - return values()[ordinal]; + return in.readEnum(MultiValueMode.class); } } diff --git a/core/src/main/java/org/elasticsearch/search/SearchExtBuilder.java b/core/src/main/java/org/elasticsearch/search/SearchExtBuilder.java new file mode 100644 index 0000000000000..bf696fcc917ad --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/SearchExtBuilder.java @@ -0,0 +1,51 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search; + +import org.elasticsearch.common.CheckedFunction; +import org.elasticsearch.common.io.stream.NamedWriteable; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.plugins.SearchPlugin; +import org.elasticsearch.plugins.SearchPlugin.SearchExtSpec; + +/** + * Intermediate serializable representation of a search ext section. To be subclassed by plugins that support + * a custom section as part of a search request, which will be provided within the ext element. + * Any state needs to be serialized as part of the {@link Writeable#writeTo(StreamOutput)} method and + * read from the incoming stream, usually done adding a constructor that takes {@link StreamInput} as + * an argument. + * + * Registration happens through {@link SearchPlugin#getSearchExts()}, which also needs a {@link CheckedFunction} that's able to parse + * the incoming request from the REST layer into the proper {@link SearchExtBuilder} subclass. + * + * {@link #getWriteableName()} must return the same name as the one used for the registration + * of the {@link SearchExtSpec}. 
+ * + * @see SearchExtSpec + */ +public abstract class SearchExtBuilder implements NamedWriteable, ToXContent { + + public abstract int hashCode(); + + public abstract boolean equals(Object obj); +} diff --git a/core/src/main/java/org/elasticsearch/search/SearchHit.java b/core/src/main/java/org/elasticsearch/search/SearchHit.java index c9ccddd05e648..71c97d734e1ad 100644 --- a/core/src/main/java/org/elasticsearch/search/SearchHit.java +++ b/core/src/main/java/org/elasticsearch/search/SearchHit.java @@ -21,210 +21,1071 @@ import org.apache.lucene.search.Explanation; import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.action.OriginalIndices; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.compress.CompressorFactory; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; +import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.text.Text; +import org.elasticsearch.common.xcontent.ConstructingObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser.ValueType; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentHelper; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParser.Token; +import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.mapper.SourceFieldMapper; +import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.search.fetch.subphase.highlight.HighlightField; +import org.elasticsearch.search.lookup.SourceLookup; +import org.elasticsearch.search.suggest.completion.CompletionSuggestion; +import org.elasticsearch.transport.RemoteClusterAware; +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.Iterator; +import java.util.List; import java.util.Map; +import java.util.Objects; + +import static java.util.Collections.emptyMap; +import static java.util.Collections.singletonMap; +import static java.util.Collections.unmodifiableMap; +import static org.elasticsearch.common.lucene.Lucene.readExplanation; +import static org.elasticsearch.common.lucene.Lucene.writeExplanation; +import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg; +import static org.elasticsearch.common.xcontent.ConstructingObjectParser.optionalConstructorArg; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureFieldName; +import static org.elasticsearch.common.xcontent.XContentParserUtils.parseStoredFieldsValue; +import static org.elasticsearch.search.fetch.subphase.highlight.HighlightField.readHighlightField; /** * A single search hit. 
* * @see SearchHits */ -public interface SearchHit extends Streamable, ToXContent, Iterable { +public class SearchHit implements Streamable, ToXContentObject, Iterable { + + private transient int docId; + + private static final float DEFAULT_SCORE = Float.NEGATIVE_INFINITY; + private float score = DEFAULT_SCORE; + + private Text id; + private Text type; + + private NestedIdentity nestedIdentity; + + private long version = -1; + + private BytesReference source; + + private Map fields = emptyMap(); + + private Map highlightFields = null; + + private SearchSortValues sortValues = SearchSortValues.EMPTY; + + private String[] matchedQueries = Strings.EMPTY_ARRAY; + + private Explanation explanation; + + @Nullable + private SearchShardTarget shard; + + private transient String index; + private transient String clusterAlias; + + private Map sourceAsMap; + private byte[] sourceAsBytes; + + private Map innerHits; + + private SearchHit() { + + } + + public SearchHit(int docId) { + this(docId, null, null, null); + } + + public SearchHit(int docId, String id, Text type, Map fields) { + this(docId, id, type, null, fields); + } + + public SearchHit(int nestedTopDocId, String id, Text type, NestedIdentity nestedIdentity, Map fields) { + this.docId = nestedTopDocId; + if (id != null) { + this.id = new Text(id); + } else { + this.id = null; + } + this.type = type; + this.nestedIdentity = nestedIdentity; + this.fields = fields; + } + + public int docId() { + return this.docId; + } + + public void score(float score) { + this.score = score; + } /** * The score. + * @deprecated use {@link #getScore()} instead */ - float score(); + @Deprecated + public float score() { + return this.score; + } /** * The score. */ - float getScore(); + public float getScore() { + return score(); + } + + public void version(long version) { + this.version = version; + } + + /** + * The version of the hit. + * @deprecated use {@link #getVersion()} instead + */ + @Deprecated + public long version() { + return this.version; + } + + /** + * The version of the hit. + */ + public long getVersion() { + return this.version; + } /** * The index of the hit. + * @deprecated use {@link #getIndex()} instead */ - String index(); + @Deprecated + public String index() { + return getIndex(); + } /** * The index of the hit. */ - String getIndex(); + public String getIndex() { + return this.index; + } /** * The id of the document. + * @deprecated use {@link #getId()} instead */ - String id(); + @Deprecated + public String id() { + return getId(); + } /** * The id of the document. */ - String getId(); + public String getId() { + return id != null ? id.string() : null; + } /** * The type of the document. + * @deprecated use {@link #getType()} instead */ - String type(); + @Deprecated + public String type() { + return getType(); + } /** * The type of the document. */ - String getType(); + public String getType() { + return type != null ? type.string() : null; + } /** * If this is a nested hit then nested reference information is returned otherwise null is returned. */ - NestedIdentity getNestedIdentity(); + public NestedIdentity getNestedIdentity() { + return nestedIdentity; + } /** - * The version of the hit. + * Returns bytes reference, also un compress the source if needed. + * @deprecated use {@link #getSourceRef()} instead */ - long version(); + @Deprecated + public BytesReference sourceRef() { + return getSourceRef(); + } /** - * The version of the hit. + * Sets representation, might be compressed.... 
*/ - long getVersion(); + public SearchHit sourceRef(BytesReference source) { + this.source = source; + this.sourceAsBytes = null; + this.sourceAsMap = null; + return this; + } /** - * Returns bytes reference, also un compress the source if needed. + * Returns bytes reference, also uncompress the source if needed. */ - BytesReference sourceRef(); + public BytesReference getSourceRef() { + if (this.source == null) { + return null; + } - /** - * Returns bytes reference, also un compress the source if needed. - */ - BytesReference getSourceRef(); + try { + this.source = CompressorFactory.uncompressIfNeeded(this.source); + return this.source; + } catch (IOException e) { + throw new ElasticsearchParseException("failed to decompress source", e); + } + } /** * The source of the document (can be null). Note, its a copy of the source * into a byte array, consider using {@link #sourceRef()} so there won't be a need to copy. */ - byte[] source(); + @Deprecated + public byte[] source() { + if (source == null) { + return null; + } + if (sourceAsBytes != null) { + return sourceAsBytes; + } + this.sourceAsBytes = BytesReference.toBytes(sourceRef()); + return this.sourceAsBytes; + } /** * Is the source available or not. A source with no fields will return true. This will return false if {@code fields} doesn't contain * {@code _source} or if source is disabled in the mapping. */ - boolean hasSource(); + public boolean hasSource() { + return source != null; + } /** * The source of the document as a map (can be null). */ - Map getSource(); + public Map getSource() { + return sourceAsMap(); + } /** * The source of the document as string (can be null). + * @deprecated use {@link #getSourceAsString()} instead */ - String sourceAsString(); + @Deprecated + public String sourceAsString() { + return getSourceAsString(); + } /** * The source of the document as string (can be null). */ - String getSourceAsString(); + public String getSourceAsString() { + if (source == null) { + return null; + } + try { + return XContentHelper.convertToJson(sourceRef(), false); + } catch (IOException e) { + throw new ElasticsearchParseException("failed to convert source to a json string"); + } + } /** * The source of the document as a map (can be null). + * @deprecated use {@link #getSourceAsMap()} instgead */ - Map sourceAsMap() throws ElasticsearchParseException; + @Deprecated + public Map sourceAsMap() throws ElasticsearchParseException { + return getSourceAsMap(); + } /** - * If enabled, the explanation of the search hit. + * The source of the document as a map (can be null). */ - Explanation explanation(); + public Map getSourceAsMap() throws ElasticsearchParseException { + if (source == null) { + return null; + } + if (sourceAsMap != null) { + return sourceAsMap; + } + + sourceAsMap = SourceLookup.sourceAsMap(source); + return sourceAsMap; + } + + @Override + public Iterator iterator() { + return fields.values().iterator(); + } /** - * If enabled, the explanation of the search hit. + * The hit field matching the given field name. + * @deprecated use {@link #getField(String)} instead */ - Explanation getExplanation(); + @Deprecated + public SearchHitField field(String fieldName) { + return getField(fieldName); + } /** * The hit field matching the given field name. */ - SearchHitField field(String fieldName); + public SearchHitField getField(String fieldName) { + return fields().get(fieldName); + } /** * A map of hit fields (from field name to hit fields) if additional fields * were required to be loaded. 
+ * @deprecated use {@link #getFields()} instead */ - Map fields(); + @Deprecated + public Map fields() { + return getFields(); + } + + // returns the fields without handling null cases + public Map fieldsOrNull() { + return fields; + } /** * A map of hit fields (from field name to hit fields) if additional fields * were required to be loaded. */ - Map getFields(); + public Map getFields() { + return fields == null ? emptyMap() : fields; + } + + public void fields(Map fields) { + this.fields = fields; + } /** * A map of highlighted fields. + * @deprecated use {@link #getHighlightFields()} instead */ - Map highlightFields(); + @Deprecated + public Map highlightFields() { + return highlightFields == null ? emptyMap() : highlightFields; + } /** * A map of highlighted fields. */ - Map getHighlightFields(); + public Map getHighlightFields() { + return highlightFields(); + } + + public void highlightFields(Map highlightFields) { + this.highlightFields = highlightFields; + } + + public void sortValues(Object[] sortValues, DocValueFormat[] sortValueFormats) { + sortValues(new SearchSortValues(sortValues, sortValueFormats)); + } + + public void sortValues(SearchSortValues sortValues) { + this.sortValues = sortValues; + } /** * An array of the sort values used. + * @deprecated use {@link #getSortValues()} instead */ - Object[] sortValues(); + @Deprecated + public Object[] sortValues() { + return sortValues.sortValues(); + } /** * An array of the sort values used. */ - Object[] getSortValues(); + public Object[] getSortValues() { + return sortValues(); + } /** - * The set of query and filter names the query matched with. Mainly makes sense for compound filters and queries. + * If enabled, the explanation of the search hit. + * @deprecated use {@link #getExplanation()} instead */ - String[] matchedQueries(); + @Deprecated + public Explanation explanation() { + return explanation; + } /** - * The set of query and filter names the query matched with. Mainly makes sense for compound filters and queries. + * If enabled, the explanation of the search hit. */ - String[] getMatchedQueries(); + public Explanation getExplanation() { + return explanation(); + } + + public void explanation(Explanation explanation) { + this.explanation = explanation; + } /** * The shard of the search hit. + * @deprecated use {@link #getShard()} instead */ - SearchShardTarget shard(); + @Deprecated + public SearchShardTarget shard() { + return shard; + } /** * The shard of the search hit. */ - SearchShardTarget getShard(); + public SearchShardTarget getShard() { + return shard(); + } + + public void shard(SearchShardTarget target) { + this.shard = target; + if (target != null) { + this.index = target.getIndex(); + this.clusterAlias = target.getClusterAlias(); + } + } + + /** + * Returns the cluster alias this hit comes from or null if it comes from a local cluster + */ + public String getClusterAlias() { + return clusterAlias; + } + + public void matchedQueries(String[] matchedQueries) { + this.matchedQueries = matchedQueries; + } + + /** + * The set of query and filter names the query matched with. Mainly makes sense for compound filters and queries. + */ + @Deprecated + public String[] matchedQueries() { + return this.matchedQueries; + } + + /** + * The set of query and filter names the query matched with. Mainly makes sense for compound filters and queries. 
+ */ + public String[] getMatchedQueries() { + return this.matchedQueries; + } /** * @return Inner hits or null if there are none */ - Map getInnerHits(); + @SuppressWarnings("unchecked") + public Map getInnerHits() { + return innerHits; + } + + public void setInnerHits(Map innerHits) { + this.innerHits = innerHits; + } + + public static class Fields { + static final String _INDEX = "_index"; + static final String _TYPE = "_type"; + static final String _ID = "_id"; + static final String _VERSION = "_version"; + static final String _SCORE = "_score"; + static final String FIELDS = "fields"; + static final String HIGHLIGHT = "highlight"; + static final String SORT = "sort"; + static final String MATCHED_QUERIES = "matched_queries"; + static final String _EXPLANATION = "_explanation"; + static final String VALUE = "value"; + static final String DESCRIPTION = "description"; + static final String DETAILS = "details"; + static final String INNER_HITS = "inner_hits"; + static final String _SHARD = "_shard"; + static final String _NODE = "_node"; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + toInnerXContent(builder, params); + builder.endObject(); + return builder; + } + + // public because we render hit as part of completion suggestion option + public XContentBuilder toInnerXContent(XContentBuilder builder, Params params) throws IOException { + List metaFields = new ArrayList<>(); + List otherFields = new ArrayList<>(); + if (fields != null && !fields.isEmpty()) { + for (SearchHitField field : fields.values()) { + if (field.values().isEmpty()) { + continue; + } + if (field.isMetadataField()) { + metaFields.add(field); + } else { + otherFields.add(field); + } + } + } + + // For inner_hit hits shard is null and that is ok, because the parent search hit has all this information. + // Even if this was included in the inner_hit hits this would be the same, so better leave it out. 
+ if (explanation() != null && shard != null) { + builder.field(Fields._SHARD, shard.getShardId()); + builder.field(Fields._NODE, shard.getNodeIdText()); + } + if (nestedIdentity != null) { + nestedIdentity.toXContent(builder, params); + } else { + if (index != null) { + builder.field(Fields._INDEX, RemoteClusterAware.buildRemoteIndexName(clusterAlias, index)); + } + if (type != null) { + builder.field(Fields._TYPE, type); + } + if (id != null) { + builder.field(Fields._ID, id); + } + } + if (version != -1) { + builder.field(Fields._VERSION, version); + } + if (Float.isNaN(score)) { + builder.nullField(Fields._SCORE); + } else { + builder.field(Fields._SCORE, score); + } + for (SearchHitField field : metaFields) { + Object value = field.value(); + builder.field(field.name(), value); + } + if (source != null) { + XContentHelper.writeRawField(SourceFieldMapper.NAME, source, builder, params); + } + if (!otherFields.isEmpty()) { + builder.startObject(Fields.FIELDS); + for (SearchHitField field : otherFields) { + builder.startArray(field.name()); + for (Object value : field.getValues()) { + builder.value(value); + } + builder.endArray(); + } + builder.endObject(); + } + if (highlightFields != null && !highlightFields.isEmpty()) { + builder.startObject(Fields.HIGHLIGHT); + for (HighlightField field : highlightFields.values()) { + field.toXContent(builder, params); + } + builder.endObject(); + } + sortValues.toXContent(builder, params); + if (matchedQueries.length > 0) { + builder.startArray(Fields.MATCHED_QUERIES); + for (String matchedFilter : matchedQueries) { + builder.value(matchedFilter); + } + builder.endArray(); + } + if (explanation() != null) { + builder.field(Fields._EXPLANATION); + buildExplanation(builder, explanation()); + } + if (innerHits != null) { + builder.startObject(Fields.INNER_HITS); + for (Map.Entry entry : innerHits.entrySet()) { + builder.startObject(entry.getKey()); + entry.getValue().toXContent(builder, params); + builder.endObject(); + } + builder.endObject(); + } + return builder; + } + + /** + * This parser outputs a temporary map of the objects needed to create the + * SearchHit instead of directly creating the SearchHit. The reason for this + * is that this way we can reuse the parser when parsing xContent from + * {@link CompletionSuggestion.Entry.Option} which unfortunately inlines the + * output of + * {@link #toInnerXContent(XContentBuilder, org.elasticsearch.common.xcontent.ToXContent.Params)} + * of the included search hit. 
The output of the map is used to create the + * actual SearchHit instance via {@link #createFromMap(Map)} + */ + private static ObjectParser, Void> MAP_PARSER = new ObjectParser<>("innerHitParser", true, HashMap::new); + + static { + declareInnerHitsParseFields(MAP_PARSER); + } + + public static SearchHit fromXContent(XContentParser parser) { + return createFromMap(MAP_PARSER.apply(parser, null)); + } + + public static void declareInnerHitsParseFields(ObjectParser, Void> parser) { + declareMetaDataFields(parser); + parser.declareString((map, value) -> map.put(Fields._TYPE, new Text(value)), new ParseField(Fields._TYPE)); + parser.declareString((map, value) -> map.put(Fields._INDEX, value), new ParseField(Fields._INDEX)); + parser.declareString((map, value) -> map.put(Fields._ID, value), new ParseField(Fields._ID)); + parser.declareString((map, value) -> map.put(Fields._NODE, value), new ParseField(Fields._NODE)); + parser.declareField((map, value) -> map.put(Fields._SCORE, value), SearchHit::parseScore, new ParseField(Fields._SCORE), + ValueType.FLOAT_OR_NULL); + parser.declareLong((map, value) -> map.put(Fields._VERSION, value), new ParseField(Fields._VERSION)); + parser.declareField((map, value) -> map.put(Fields._SHARD, value), (p, c) -> ShardId.fromString(p.text()), + new ParseField(Fields._SHARD), ValueType.STRING); + parser.declareObject((map, value) -> map.put(SourceFieldMapper.NAME, value), (p, c) -> parseSourceBytes(p), + new ParseField(SourceFieldMapper.NAME)); + parser.declareObject((map, value) -> map.put(Fields.HIGHLIGHT, value), (p, c) -> parseHighlightFields(p), + new ParseField(Fields.HIGHLIGHT)); + parser.declareObject((map, value) -> { + Map fieldMap = get(Fields.FIELDS, map, new HashMap()); + fieldMap.putAll(value); + map.put(Fields.FIELDS, fieldMap); + }, (p, c) -> parseFields(p), new ParseField(Fields.FIELDS)); + parser.declareObject((map, value) -> map.put(Fields._EXPLANATION, value), (p, c) -> parseExplanation(p), + new ParseField(Fields._EXPLANATION)); + parser.declareObject((map, value) -> map.put(NestedIdentity._NESTED, value), NestedIdentity::fromXContent, + new ParseField(NestedIdentity._NESTED)); + parser.declareObject((map, value) -> map.put(Fields.INNER_HITS, value), (p,c) -> parseInnerHits(p), + new ParseField(Fields.INNER_HITS)); + parser.declareStringArray((map, list) -> map.put(Fields.MATCHED_QUERIES, list), new ParseField(Fields.MATCHED_QUERIES)); + parser.declareField((map, list) -> map.put(Fields.SORT, list), SearchSortValues::fromXContent, new ParseField(Fields.SORT), + ValueType.OBJECT_ARRAY); + } + + public static SearchHit createFromMap(Map values) { + String id = get(Fields._ID, values, null); + Text type = get(Fields._TYPE, values, null); + NestedIdentity nestedIdentity = get(NestedIdentity._NESTED, values, null); + Map fields = get(Fields.FIELDS, values, null); + + SearchHit searchHit = new SearchHit(-1, id, type, nestedIdentity, fields); + searchHit.index = get(Fields._INDEX, values, null); + searchHit.score(get(Fields._SCORE, values, DEFAULT_SCORE)); + searchHit.version(get(Fields._VERSION, values, -1L)); + searchHit.sortValues(get(Fields.SORT, values, SearchSortValues.EMPTY)); + searchHit.highlightFields(get(Fields.HIGHLIGHT, values, null)); + searchHit.sourceRef(get(SourceFieldMapper.NAME, values, null)); + searchHit.explanation(get(Fields._EXPLANATION, values, null)); + searchHit.setInnerHits(get(Fields.INNER_HITS, values, null)); + List matchedQueries = get(Fields.MATCHED_QUERIES, values, null); + if (matchedQueries != null) { + 
searchHit.matchedQueries(matchedQueries.toArray(new String[matchedQueries.size()])); + } + ShardId shardId = get(Fields._SHARD, values, null); + String nodeId = get(Fields._NODE, values, null); + if (shardId != null && nodeId != null) { + searchHit.shard(new SearchShardTarget(nodeId, shardId, null, OriginalIndices.NONE)); + } + searchHit.fields(fields); + return searchHit; + } + + @SuppressWarnings("unchecked") + private static T get(String key, Map map, T defaultValue) { + return (T) map.getOrDefault(key, defaultValue); + } + + private static float parseScore(XContentParser parser) throws IOException { + if (parser.currentToken() == XContentParser.Token.VALUE_NUMBER || parser.currentToken() == XContentParser.Token.VALUE_STRING) { + return parser.floatValue(); + } else { + return Float.NaN; + } + } + + private static BytesReference parseSourceBytes(XContentParser parser) throws IOException { + try (XContentBuilder builder = XContentBuilder.builder(parser.contentType().xContent())) { + // the original document gets slightly modified: whitespaces or + // pretty printing are not preserved, + // it all depends on the current builder settings + builder.copyCurrentStructure(parser); + return builder.bytes(); + } + } + + /** + * we need to declare parse fields for each metadata field, except for _ID, _INDEX and _TYPE which are + * handled individually. All other fields are parsed to an entry in the fields map + */ + private static void declareMetaDataFields(ObjectParser, Void> parser) { + for (String metadatafield : MapperService.getAllMetaFields()) { + if (metadatafield.equals(Fields._ID) == false && metadatafield.equals(Fields._INDEX) == false + && metadatafield.equals(Fields._TYPE) == false) { + parser.declareField((map, field) -> { + @SuppressWarnings("unchecked") + Map fieldMap = (Map) map.computeIfAbsent(Fields.FIELDS, + v -> new HashMap()); + fieldMap.put(field.getName(), field); + }, (p, c) -> { + List values = new ArrayList<>(); + values.add(parseStoredFieldsValue(p)); + return new SearchHitField(metadatafield, values); + }, new ParseField(metadatafield), ValueType.VALUE); + } + } + } + + private static Map parseFields(XContentParser parser) throws IOException { + Map fields = new HashMap<>(); + while ((parser.nextToken()) != XContentParser.Token.END_OBJECT) { + String fieldName = parser.currentName(); + ensureExpectedToken(XContentParser.Token.START_ARRAY, parser.nextToken(), parser::getTokenLocation); + List values = new ArrayList<>(); + while ((parser.nextToken()) != XContentParser.Token.END_ARRAY) { + values.add(parseStoredFieldsValue(parser)); + } + fields.put(fieldName, new SearchHitField(fieldName, values)); + } + return fields; + } + + private static Map parseInnerHits(XContentParser parser) throws IOException { + Map innerHits = new HashMap<>(); + while ((parser.nextToken()) != XContentParser.Token.END_OBJECT) { + ensureExpectedToken(XContentParser.Token.FIELD_NAME, parser.currentToken(), parser::getTokenLocation); + String name = parser.currentName(); + ensureExpectedToken(Token.START_OBJECT, parser.nextToken(), parser::getTokenLocation); + ensureFieldName(parser, parser.nextToken(), SearchHits.Fields.HITS); + innerHits.put(name, SearchHits.fromXContent(parser)); + ensureExpectedToken(XContentParser.Token.END_OBJECT, parser.nextToken(), parser::getTokenLocation); + } + return innerHits; + } + + private static Map parseHighlightFields(XContentParser parser) throws IOException { + Map highlightFields = new HashMap<>(); + while((parser.nextToken()) != 
XContentParser.Token.END_OBJECT) { + HighlightField highlightField = HighlightField.fromXContent(parser); + highlightFields.put(highlightField.getName(), highlightField); + } + return highlightFields; + } + + private static Explanation parseExplanation(XContentParser parser) throws IOException { + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.currentToken(), parser::getTokenLocation); + XContentParser.Token token; + Float value = null; + String description = null; + List details = new ArrayList<>(); + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, () -> parser.getTokenLocation()); + String currentFieldName = parser.currentName(); + token = parser.nextToken(); + if (Fields.VALUE.equals(currentFieldName)) { + value = parser.floatValue(); + } else if (Fields.DESCRIPTION.equals(currentFieldName)) { + description = parser.textOrNull(); + } else if (Fields.DETAILS.equals(currentFieldName)) { + ensureExpectedToken(XContentParser.Token.START_ARRAY, token, () -> parser.getTokenLocation()); + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + details.add(parseExplanation(parser)); + } + } else { + parser.skipChildren(); + } + } + if (value == null) { + throw new ParsingException(parser.getTokenLocation(), "missing explanation value"); + } + if (description == null) { + throw new ParsingException(parser.getTokenLocation(), "missing explanation description"); + } + return Explanation.match(value, description, details); + } + + private void buildExplanation(XContentBuilder builder, Explanation explanation) throws IOException { + builder.startObject(); + builder.field(Fields.VALUE, explanation.getValue()); + builder.field(Fields.DESCRIPTION, explanation.getDescription()); + Explanation[] innerExps = explanation.getDetails(); + if (innerExps != null) { + builder.startArray(Fields.DETAILS); + for (Explanation exp : innerExps) { + buildExplanation(builder, exp); + } + builder.endArray(); + } + builder.endObject(); + } + + public static SearchHit readSearchHit(StreamInput in) throws IOException { + SearchHit hit = new SearchHit(); + hit.readFrom(in); + return hit; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + score = in.readFloat(); + id = in.readOptionalText(); + type = in.readOptionalText(); + nestedIdentity = in.readOptionalWriteable(NestedIdentity::new); + version = in.readLong(); + source = in.readBytesReference(); + if (source.length() == 0) { + source = null; + } + if (in.readBoolean()) { + explanation = readExplanation(in); + } + int size = in.readVInt(); + if (size == 0) { + fields = emptyMap(); + } else if (size == 1) { + SearchHitField hitField = SearchHitField.readSearchHitField(in); + fields = singletonMap(hitField.name(), hitField); + } else { + Map fields = new HashMap<>(); + for (int i = 0; i < size; i++) { + SearchHitField hitField = SearchHitField.readSearchHitField(in); + fields.put(hitField.name(), hitField); + } + this.fields = unmodifiableMap(fields); + } + + size = in.readVInt(); + if (size == 0) { + highlightFields = emptyMap(); + } else if (size == 1) { + HighlightField field = readHighlightField(in); + highlightFields = singletonMap(field.name(), field); + } else { + Map highlightFields = new HashMap<>(); + for (int i = 0; i < size; i++) { + HighlightField field = readHighlightField(in); + highlightFields.put(field.name(), field); + } + this.highlightFields = unmodifiableMap(highlightFields); + } + + sortValues = new 
SearchSortValues(in); + + size = in.readVInt(); + if (size > 0) { + matchedQueries = new String[size]; + for (int i = 0; i < size; i++) { + matchedQueries[i] = in.readString(); + } + } + // we call the setter here because that also sets the local index parameter + shard(in.readOptionalWriteable(SearchShardTarget::new)); + size = in.readVInt(); + if (size > 0) { + innerHits = new HashMap<>(size); + for (int i = 0; i < size; i++) { + String key = in.readString(); + SearchHits value = SearchHits.readSearchHits(in); + innerHits.put(key, value); + } + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeFloat(score); + out.writeOptionalText(id); + out.writeOptionalText(type); + out.writeOptionalWriteable(nestedIdentity); + out.writeLong(version); + out.writeBytesReference(source); + if (explanation == null) { + out.writeBoolean(false); + } else { + out.writeBoolean(true); + writeExplanation(out, explanation); + } + if (fields == null) { + out.writeVInt(0); + } else { + out.writeVInt(fields.size()); + for (SearchHitField hitField : fields().values()) { + hitField.writeTo(out); + } + } + if (highlightFields == null) { + out.writeVInt(0); + } else { + out.writeVInt(highlightFields.size()); + for (HighlightField highlightField : highlightFields.values()) { + highlightField.writeTo(out); + } + } + sortValues.writeTo(out); + + if (matchedQueries.length == 0) { + out.writeVInt(0); + } else { + out.writeVInt(matchedQueries.length); + for (String matchedFilter : matchedQueries) { + out.writeString(matchedFilter); + } + } + out.writeOptionalWriteable(shard); + if (innerHits == null) { + out.writeVInt(0); + } else { + out.writeVInt(innerHits.size()); + for (Map.Entry entry : innerHits.entrySet()) { + out.writeString(entry.getKey()); + entry.getValue().writeTo(out); + } + } + } /** * Encapsulates the nested identity of a hit. */ - interface NestedIdentity { + public static final class NestedIdentity implements Writeable, ToXContent { + + private static final String _NESTED = "_nested"; + private static final String FIELD = "field"; + private static final String OFFSET = "offset"; + + private Text field; + private int offset; + private NestedIdentity child; + + public NestedIdentity(String field, int offset, NestedIdentity child) { + this.field = new Text(field); + this.offset = offset; + this.child = child; + } + + NestedIdentity(StreamInput in) throws IOException { + field = in.readOptionalText(); + offset = in.readInt(); + child = in.readOptionalWriteable(NestedIdentity::new); + } /** * Returns the nested field in the source this hit originates from */ - Text getField(); + public Text getField() { + return field; + } /** * Returns the offset in the nested array of objects in the source this hit */ - int getOffset(); + public int getOffset() { + return offset; + } /** * Returns the next child nested level if there is any, otherwise null is returned. * * In the case of mappings with multiple levels of nested object fields */ - NestedIdentity getChild(); + public NestedIdentity getChild() { + return child; + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeOptionalText(field); + out.writeInt(offset); + out.writeOptionalWriteable(child); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.field(_NESTED); + return innerToXContent(builder, params); + } + + /** + * Rendering of the inner XContent object without the leading field name. 
This way the structure innerToXContent renders and + * fromXContent parses correspond to each other. + */ + XContentBuilder innerToXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + if (field != null) { + builder.field(FIELD, field); + } + if (offset != -1) { + builder.field(OFFSET, offset); + } + if (child != null) { + builder = child.toXContent(builder, params); + } + builder.endObject(); + return builder; + } + + private static final ConstructingObjectParser PARSER = new ConstructingObjectParser<>("nested_identity", true, + ctorArgs -> new NestedIdentity((String) ctorArgs[0], (int) ctorArgs[1], (NestedIdentity) ctorArgs[2])); + static { + PARSER.declareString(constructorArg(), new ParseField(FIELD)); + PARSER.declareInt(constructorArg(), new ParseField(OFFSET)); + PARSER.declareObject(optionalConstructorArg(), PARSER, new ParseField(_NESTED)); + } + + static NestedIdentity fromXContent(XContentParser parser, Void context) { + return fromXContent(parser); + } + + public static NestedIdentity fromXContent(XContentParser parser) { + return PARSER.apply(parser, null); + } + + @Override + public boolean equals(Object obj) { + if (this == obj) { + return true; + } + if (obj == null || getClass() != obj.getClass()) { + return false; + } + NestedIdentity other = (NestedIdentity) obj; + return Objects.equals(field, other.field) && + Objects.equals(offset, other.offset) && + Objects.equals(child, other.child); + } + + @Override + public int hashCode() { + return Objects.hash(field, offset, child); + } } } diff --git a/core/src/main/java/org/elasticsearch/search/SearchHitField.java b/core/src/main/java/org/elasticsearch/search/SearchHitField.java index 5747bbebef8d1..51f075fdfa398 100644 --- a/core/src/main/java/org/elasticsearch/search/SearchHitField.java +++ b/core/src/main/java/org/elasticsearch/search/SearchHitField.java @@ -19,8 +19,14 @@ package org.elasticsearch.search; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; +import org.elasticsearch.index.mapper.MapperService; +import java.io.IOException; +import java.util.ArrayList; +import java.util.Iterator; import java.util.List; /** @@ -28,40 +34,104 @@ * * @see SearchHit */ -public interface SearchHitField extends Streamable, Iterable { +public class SearchHitField implements Streamable, Iterable { + + private String name; + private List values; + + private SearchHitField() { + } + + public SearchHitField(String name, List values) { + this.name = name; + this.values = values; + } /** * The name of the field. + * @deprecated use {@link #getName()} instead */ - String name(); + @Deprecated + public String name() { + return name; + } /** * The name of the field. */ - String getName(); + public String getName() { + return name(); + } /** * The first value of the hit. + * @deprecated use {@link #getValue()} instead */ - V value(); + @Deprecated + public T value() { + return getValue(); + } /** * The first value of the hit. */ - V getValue(); + public T getValue() { + if (values == null || values.isEmpty()) { + return null; + } + return (T) values.get(0); + } /** * The field values. + * @deprecated use {@link #getValues()} instead */ - List values(); + @Deprecated + public List values() { + return values; + } /** * The field values. 
*/ - List getValues(); + public List getValues() { + return values(); + } /** * @return The field is a metadata field */ - boolean isMetadataField(); + public boolean isMetadataField() { + return MapperService.isMetadataField(name); + } + + @Override + public Iterator iterator() { + return values.iterator(); + } + + public static SearchHitField readSearchHitField(StreamInput in) throws IOException { + SearchHitField result = new SearchHitField(); + result.readFrom(in); + return result; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + name = in.readString(); + int size = in.readVInt(); + values = new ArrayList<>(size); + for (int i = 0; i < size; i++) { + values.add(in.readGenericValue()); + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeString(name); + out.writeVInt(values.size()); + for (Object value : values) { + out.writeGenericValue(value); + } + } } diff --git a/core/src/main/java/org/elasticsearch/search/SearchHits.java b/core/src/main/java/org/elasticsearch/search/SearchHits.java index 400e2ebc44fc5..58f4fc675cac0 100644 --- a/core/src/main/java/org/elasticsearch/search/SearchHits.java +++ b/core/src/main/java/org/elasticsearch/search/SearchHits.java @@ -19,48 +19,209 @@ package org.elasticsearch.search; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Iterator; +import java.util.List; + +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; /** * The hits of a search request. - * - * */ -public interface SearchHits extends Streamable, ToXContent, Iterable { +public class SearchHits implements Streamable, ToXContent, Iterable { + + public static SearchHits empty() { + // We shouldn't use static final instance, since that could directly be returned by native transport clients + return new SearchHits(EMPTY, 0, 0); + } + + public static final SearchHit[] EMPTY = new SearchHit[0]; + + private SearchHit[] hits; + + public long totalHits; + + private float maxScore; + + SearchHits() { + + } + + public SearchHits(SearchHit[] hits, long totalHits, float maxScore) { + this.hits = hits; + this.totalHits = totalHits; + this.maxScore = maxScore; + } /** * The total number of hits that matches the search request. + * @deprecated use {@link #getTotalHits()} instead */ - long totalHits(); + @Deprecated + public long totalHits() { + return totalHits; + } /** * The total number of hits that matches the search request. */ - long getTotalHits(); + public long getTotalHits() { + return totalHits(); + } /** * The maximum score of this query. + * @deprecated use {@link #getMaxScore()} instead */ - float maxScore(); + @Deprecated + public float maxScore() { + return this.maxScore; + } /** * The maximum score of this query. */ - float getMaxScore(); + public float getMaxScore() { + return maxScore(); + } /** * The hits of the search request (based on the search type, and from / size provided). + * @deprecated use {@link #getHits()} instead */ - SearchHit[] hits(); + @Deprecated + public SearchHit[] hits() { + return this.hits; + } /** * Return the hit as the provided position. 
*/ - SearchHit getAt(int position); + public SearchHit getAt(int position) { + return hits[position]; + } /** * The hits of the search request (based on the search type, and from / size provided). */ - SearchHit[] getHits(); + public SearchHit[] getHits() { + return hits(); + } + + @Override + public Iterator iterator() { + return Arrays.stream(hits()).iterator(); + } + + public SearchHit[] internalHits() { + return this.hits; + } + + public static final class Fields { + public static final String HITS = "hits"; + public static final String TOTAL = "total"; + public static final String MAX_SCORE = "max_score"; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(Fields.HITS); + builder.field(Fields.TOTAL, totalHits); + if (Float.isNaN(maxScore)) { + builder.nullField(Fields.MAX_SCORE); + } else { + builder.field(Fields.MAX_SCORE, maxScore); + } + builder.field(Fields.HITS); + builder.startArray(); + for (SearchHit hit : hits) { + hit.toXContent(builder, params); + } + builder.endArray(); + builder.endObject(); + return builder; + } + + public static SearchHits fromXContent(XContentParser parser) throws IOException { + if (parser.currentToken() != XContentParser.Token.START_OBJECT) { + parser.nextToken(); + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.currentToken(), parser::getTokenLocation); + } + XContentParser.Token token = parser.currentToken(); + String currentFieldName = null; + List hits = new ArrayList<>(); + long totalHits = 0; + float maxScore = 0f; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (Fields.TOTAL.equals(currentFieldName)) { + totalHits = parser.longValue(); + } else if (Fields.MAX_SCORE.equals(currentFieldName)) { + maxScore = parser.floatValue(); + } + } else if (token == XContentParser.Token.VALUE_NULL) { + if (Fields.MAX_SCORE.equals(currentFieldName)) { + maxScore = Float.NaN; // NaN gets rendered as null-field + } + } else if (token == XContentParser.Token.START_ARRAY) { + if (Fields.HITS.equals(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + hits.add(SearchHit.fromXContent(parser)); + } + } else { + parser.skipChildren(); + } + } else if (token == XContentParser.Token.START_OBJECT) { + parser.skipChildren(); + } + } + SearchHits searchHits = new SearchHits(hits.toArray(new SearchHit[hits.size()]), totalHits, + maxScore); + return searchHits; + } + + + public static SearchHits readSearchHits(StreamInput in) throws IOException { + SearchHits hits = new SearchHits(); + hits.readFrom(in); + return hits; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + totalHits = in.readVLong(); + maxScore = in.readFloat(); + int size = in.readVInt(); + if (size == 0) { + hits = EMPTY; + } else { + hits = new SearchHit[size]; + for (int i = 0; i < hits.length; i++) { + hits[i] = SearchHit.readSearchHit(in); + } + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeVLong(totalHits); + out.writeFloat(maxScore); + out.writeVInt(hits.length); + if (hits.length > 0) { + for (SearchHit hit : hits) { + hit.writeTo(out); + } + } + } } diff --git a/core/src/main/java/org/elasticsearch/search/SearchModule.java b/core/src/main/java/org/elasticsearch/search/SearchModule.java index ab36f03639cb0..a68a5b7fa17de 100644 --- 
a/core/src/main/java/org/elasticsearch/search/SearchModule.java +++ b/core/src/main/java/org/elasticsearch/search/SearchModule.java @@ -23,12 +23,14 @@ import org.elasticsearch.common.NamedRegistry; import org.elasticsearch.common.geo.ShapesAvailability; import org.elasticsearch.common.geo.builders.ShapeBuilders; -import org.elasticsearch.common.inject.AbstractModule; +import org.elasticsearch.common.io.stream.NamedWriteableRegistry; import org.elasticsearch.common.io.stream.NamedWriteableRegistry.Entry; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.ParseFieldRegistry; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.BoolQueryBuilder; import org.elasticsearch.index.query.BoostingQueryBuilder; import org.elasticsearch.index.query.CommonTermsQueryBuilder; @@ -43,8 +45,6 @@ import org.elasticsearch.index.query.GeoPolygonQueryBuilder; import org.elasticsearch.index.query.GeoShapeQueryBuilder; import org.elasticsearch.index.query.GeohashCellQuery; -import org.elasticsearch.index.query.HasChildQueryBuilder; -import org.elasticsearch.index.query.HasParentQueryBuilder; import org.elasticsearch.index.query.IdsQueryBuilder; import org.elasticsearch.index.query.IndicesQueryBuilder; import org.elasticsearch.index.query.MatchAllQueryBuilder; @@ -55,9 +55,9 @@ import org.elasticsearch.index.query.MoreLikeThisQueryBuilder; import org.elasticsearch.index.query.MultiMatchQueryBuilder; import org.elasticsearch.index.query.NestedQueryBuilder; -import org.elasticsearch.index.query.ParentIdQueryBuilder; import org.elasticsearch.index.query.PrefixQueryBuilder; import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.index.query.QueryStringQueryBuilder; import org.elasticsearch.index.query.RangeQueryBuilder; import org.elasticsearch.index.query.RegexpQueryBuilder; @@ -83,43 +83,38 @@ import org.elasticsearch.index.query.functionscore.LinearDecayFunctionBuilder; import org.elasticsearch.index.query.functionscore.RandomScoreFunctionBuilder; import org.elasticsearch.index.query.functionscore.ScoreFunctionBuilder; -import org.elasticsearch.index.query.functionscore.ScoreFunctionParser; import org.elasticsearch.index.query.functionscore.ScriptScoreFunctionBuilder; import org.elasticsearch.index.query.functionscore.WeightBuilder; -import org.elasticsearch.indices.query.IndicesQueriesRegistry; import org.elasticsearch.plugins.SearchPlugin; import org.elasticsearch.plugins.SearchPlugin.AggregationSpec; import org.elasticsearch.plugins.SearchPlugin.FetchPhaseConstructionContext; import org.elasticsearch.plugins.SearchPlugin.PipelineAggregationSpec; import org.elasticsearch.plugins.SearchPlugin.QuerySpec; import org.elasticsearch.plugins.SearchPlugin.ScoreFunctionSpec; +import org.elasticsearch.plugins.SearchPlugin.SearchExtSpec; import org.elasticsearch.plugins.SearchPlugin.SearchExtensionSpec; -import org.elasticsearch.search.action.SearchTransportService; +import org.elasticsearch.plugins.SearchPlugin.SuggesterSpec; import org.elasticsearch.search.aggregations.AggregationBuilder; -import org.elasticsearch.search.aggregations.Aggregator; -import org.elasticsearch.search.aggregations.AggregatorParsers; +import org.elasticsearch.search.aggregations.AggregatorFactories; +import 
org.elasticsearch.search.aggregations.BaseAggregationBuilder; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; -import org.elasticsearch.search.aggregations.bucket.children.ChildrenAggregationBuilder; -import org.elasticsearch.search.aggregations.bucket.children.InternalChildren; +import org.elasticsearch.search.aggregations.bucket.adjacency.AdjacencyMatrixAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.adjacency.InternalAdjacencyMatrix; import org.elasticsearch.search.aggregations.bucket.filter.FilterAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.filter.InternalFilter; import org.elasticsearch.search.aggregations.bucket.filters.FiltersAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.filters.InternalFilters; import org.elasticsearch.search.aggregations.bucket.geogrid.GeoGridAggregationBuilder; -import org.elasticsearch.search.aggregations.bucket.geogrid.GeoHashGridParser; import org.elasticsearch.search.aggregations.bucket.geogrid.InternalGeoHashGrid; import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.global.InternalGlobal; import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder; -import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramParser; import org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregationBuilder; -import org.elasticsearch.search.aggregations.bucket.histogram.HistogramParser; import org.elasticsearch.search.aggregations.bucket.histogram.InternalDateHistogram; import org.elasticsearch.search.aggregations.bucket.histogram.InternalHistogram; import org.elasticsearch.search.aggregations.bucket.missing.InternalMissing; import org.elasticsearch.search.aggregations.bucket.missing.MissingAggregationBuilder; -import org.elasticsearch.search.aggregations.bucket.missing.MissingParser; import org.elasticsearch.search.aggregations.bucket.nested.InternalNested; import org.elasticsearch.search.aggregations.bucket.nested.InternalReverseNested; import org.elasticsearch.search.aggregations.bucket.nested.NestedAggregationBuilder; @@ -127,24 +122,18 @@ import org.elasticsearch.search.aggregations.bucket.range.InternalBinaryRange; import org.elasticsearch.search.aggregations.bucket.range.InternalRange; import org.elasticsearch.search.aggregations.bucket.range.RangeAggregationBuilder; -import org.elasticsearch.search.aggregations.bucket.range.RangeParser; import org.elasticsearch.search.aggregations.bucket.range.date.DateRangeAggregationBuilder; -import org.elasticsearch.search.aggregations.bucket.range.date.DateRangeParser; import org.elasticsearch.search.aggregations.bucket.range.date.InternalDateRange; import org.elasticsearch.search.aggregations.bucket.range.geodistance.GeoDistanceAggregationBuilder; -import org.elasticsearch.search.aggregations.bucket.range.geodistance.GeoDistanceParser; import org.elasticsearch.search.aggregations.bucket.range.geodistance.InternalGeoDistance; import org.elasticsearch.search.aggregations.bucket.range.ip.IpRangeAggregationBuilder; -import org.elasticsearch.search.aggregations.bucket.range.ip.IpRangeParser; import org.elasticsearch.search.aggregations.bucket.sampler.DiversifiedAggregationBuilder; -import org.elasticsearch.search.aggregations.bucket.sampler.DiversifiedSamplerParser; import 
org.elasticsearch.search.aggregations.bucket.sampler.InternalSampler; import org.elasticsearch.search.aggregations.bucket.sampler.SamplerAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.sampler.UnmappedSampler; import org.elasticsearch.search.aggregations.bucket.significant.SignificantLongTerms; import org.elasticsearch.search.aggregations.bucket.significant.SignificantStringTerms; import org.elasticsearch.search.aggregations.bucket.significant.SignificantTermsAggregationBuilder; -import org.elasticsearch.search.aggregations.bucket.significant.SignificantTermsParser; import org.elasticsearch.search.aggregations.bucket.significant.UnmappedSignificantTerms; import org.elasticsearch.search.aggregations.bucket.significant.heuristics.ChiSquare; import org.elasticsearch.search.aggregations.bucket.significant.heuristics.GND; @@ -158,30 +147,21 @@ import org.elasticsearch.search.aggregations.bucket.terms.LongTerms; import org.elasticsearch.search.aggregations.bucket.terms.StringTerms; import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; -import org.elasticsearch.search.aggregations.bucket.terms.TermsParser; import org.elasticsearch.search.aggregations.bucket.terms.UnmappedTerms; import org.elasticsearch.search.aggregations.metrics.avg.AvgAggregationBuilder; -import org.elasticsearch.search.aggregations.metrics.avg.AvgParser; import org.elasticsearch.search.aggregations.metrics.avg.InternalAvg; import org.elasticsearch.search.aggregations.metrics.cardinality.CardinalityAggregationBuilder; -import org.elasticsearch.search.aggregations.metrics.cardinality.CardinalityParser; import org.elasticsearch.search.aggregations.metrics.cardinality.InternalCardinality; import org.elasticsearch.search.aggregations.metrics.geobounds.GeoBoundsAggregationBuilder; -import org.elasticsearch.search.aggregations.metrics.geobounds.GeoBoundsParser; import org.elasticsearch.search.aggregations.metrics.geobounds.InternalGeoBounds; import org.elasticsearch.search.aggregations.metrics.geocentroid.GeoCentroidAggregationBuilder; -import org.elasticsearch.search.aggregations.metrics.geocentroid.GeoCentroidParser; import org.elasticsearch.search.aggregations.metrics.geocentroid.InternalGeoCentroid; import org.elasticsearch.search.aggregations.metrics.max.InternalMax; import org.elasticsearch.search.aggregations.metrics.max.MaxAggregationBuilder; -import org.elasticsearch.search.aggregations.metrics.max.MaxParser; import org.elasticsearch.search.aggregations.metrics.min.InternalMin; import org.elasticsearch.search.aggregations.metrics.min.MinAggregationBuilder; -import org.elasticsearch.search.aggregations.metrics.min.MinParser; import org.elasticsearch.search.aggregations.metrics.percentiles.PercentileRanksAggregationBuilder; -import org.elasticsearch.search.aggregations.metrics.percentiles.PercentileRanksParser; import org.elasticsearch.search.aggregations.metrics.percentiles.PercentilesAggregationBuilder; -import org.elasticsearch.search.aggregations.metrics.percentiles.PercentilesParser; import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.InternalHDRPercentileRanks; import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.InternalHDRPercentiles; import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.InternalTDigestPercentileRanks; @@ -190,18 +170,14 @@ import org.elasticsearch.search.aggregations.metrics.scripted.ScriptedMetricAggregationBuilder; import org.elasticsearch.search.aggregations.metrics.stats.InternalStats; import 
org.elasticsearch.search.aggregations.metrics.stats.StatsAggregationBuilder; -import org.elasticsearch.search.aggregations.metrics.stats.StatsParser; import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStatsAggregationBuilder; -import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStatsParser; import org.elasticsearch.search.aggregations.metrics.stats.extended.InternalExtendedStats; import org.elasticsearch.search.aggregations.metrics.sum.InternalSum; import org.elasticsearch.search.aggregations.metrics.sum.SumAggregationBuilder; -import org.elasticsearch.search.aggregations.metrics.sum.SumParser; import org.elasticsearch.search.aggregations.metrics.tophits.InternalTopHits; import org.elasticsearch.search.aggregations.metrics.tophits.TopHitsAggregationBuilder; import org.elasticsearch.search.aggregations.metrics.valuecount.InternalValueCount; import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCountAggregationBuilder; -import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCountParser; import org.elasticsearch.search.aggregations.pipeline.InternalSimpleValue; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.InternalBucketMetricValue; @@ -242,7 +218,6 @@ import org.elasticsearch.search.aggregations.pipeline.movavg.models.SimpleModel; import org.elasticsearch.search.aggregations.pipeline.serialdiff.SerialDiffPipelineAggregationBuilder; import org.elasticsearch.search.aggregations.pipeline.serialdiff.SerialDiffPipelineAggregator; -import org.elasticsearch.search.controller.SearchPhaseController; import org.elasticsearch.search.fetch.FetchPhase; import org.elasticsearch.search.fetch.FetchSubPhase; import org.elasticsearch.search.fetch.subphase.DocValueFieldsFetchSubPhase; @@ -257,6 +232,7 @@ import org.elasticsearch.search.fetch.subphase.highlight.Highlighter; import org.elasticsearch.search.fetch.subphase.highlight.PlainHighlighter; import org.elasticsearch.search.fetch.subphase.highlight.PostingsHighlighter; +import org.elasticsearch.search.fetch.subphase.highlight.UnifiedHighlighter; import org.elasticsearch.search.rescore.QueryRescorerBuilder; import org.elasticsearch.search.rescore.RescoreBuilder; import org.elasticsearch.search.sort.FieldSortBuilder; @@ -264,20 +240,19 @@ import org.elasticsearch.search.sort.ScoreSortBuilder; import org.elasticsearch.search.sort.ScriptSortBuilder; import org.elasticsearch.search.sort.SortBuilder; -import org.elasticsearch.search.suggest.Suggester; -import org.elasticsearch.search.suggest.Suggesters; import org.elasticsearch.search.suggest.SuggestionBuilder; -import org.elasticsearch.search.suggest.completion.CompletionSuggester; +import org.elasticsearch.search.suggest.completion.CompletionSuggestionBuilder; import org.elasticsearch.search.suggest.phrase.Laplace; import org.elasticsearch.search.suggest.phrase.LinearInterpolation; -import org.elasticsearch.search.suggest.phrase.PhraseSuggester; +import org.elasticsearch.search.suggest.phrase.PhraseSuggestionBuilder; import org.elasticsearch.search.suggest.phrase.SmoothingModel; import org.elasticsearch.search.suggest.phrase.StupidBackoff; -import org.elasticsearch.search.suggest.term.TermSuggester; +import org.elasticsearch.search.suggest.term.TermSuggestionBuilder; import java.util.ArrayList; import java.util.List; import java.util.Map; +import java.util.Optional; import java.util.function.Consumer; import java.util.function.Function; @@ 
-287,19 +262,12 @@ /** * Sets up things that can be done at search time like queries, aggregations, and suggesters. */ -public class SearchModule extends AbstractModule { +public class SearchModule { public static final Setting INDICES_MAX_CLAUSE_COUNT_SETTING = Setting.intSetting("indices.query.bool.max_clause_count", 1024, 1, Integer.MAX_VALUE, Setting.Property.NodeScope); private final boolean transportClient; private final Map highlighters; - private final Map> suggesters; - private final ParseFieldRegistry> scoreFunctionParserRegistry = new ParseFieldRegistry<>("score_function"); - private final IndicesQueriesRegistry queryParserRegistry = new IndicesQueriesRegistry(); - private final ParseFieldRegistry aggregationParserRegistry = new ParseFieldRegistry<>("aggregation"); - private final ParseFieldRegistry pipelineAggregationParserRegistry = new ParseFieldRegistry<>( - "pipline_aggregation"); - private final AggregatorParsers aggregatorParsers = new AggregatorParsers(aggregationParserRegistry, pipelineAggregationParserRegistry); private final ParseFieldRegistry significanceHeuristicParserRegistry = new ParseFieldRegistry<>( "significance_heuristic"); private final ParseFieldRegistry movingAverageModelParserRegistry = new ParseFieldRegistry<>( @@ -308,13 +276,13 @@ public class SearchModule extends AbstractModule { private final List fetchSubPhases = new ArrayList<>(); private final Settings settings; - private final List namedWriteables = new ArrayList<>(); - private final SearchRequestParsers searchRequestParsers; + private final List namedWriteables = new ArrayList<>(); + private final List namedXContents = new ArrayList<>(); public SearchModule(Settings settings, boolean transportClient, List plugins) { this.settings = settings; this.transportClient = transportClient; - suggesters = setupSuggesters(plugins); + registerSuggesters(plugins); highlighters = setupHighlighters(settings, plugins); registerScoreFunctions(plugins); registerQueryParsers(plugins); @@ -326,24 +294,16 @@ public SearchModule(Settings settings, boolean transportClient, List getNamedWriteables() { + public List getNamedWriteables() { return namedWriteables; } - public Suggesters getSuggesters() { - return new Suggesters(suggesters); - } - - public IndicesQueriesRegistry getQueryParserRegistry() { - return queryParserRegistry; - } - - public SearchRequestParsers getSearchRequestParsers() { - return searchRequestParsers; + public List getNamedXContents() { + return namedXContents; } /** @@ -367,114 +327,101 @@ public ParseFieldRegistry getMovingAverageModel return movingAverageModelParserRegistry; } - /** - * Parsers for {@link AggregationBuilder}s and {@link PipelineAggregationBuilder}s. 
- */ - public AggregatorParsers getAggregatorParsers() { - return aggregatorParsers; - } - - - @Override - protected void configure() { - if (false == transportClient) { - bind(IndicesQueriesRegistry.class).toInstance(queryParserRegistry); - bind(SearchRequestParsers.class).toInstance(searchRequestParsers); - configureSearch(); - } - } - private void registerAggregations(List plugins) { - registerAggregation(new AggregationSpec(AvgAggregationBuilder.NAME, AvgAggregationBuilder::new, new AvgParser()) + registerAggregation(new AggregationSpec(AvgAggregationBuilder.NAME, AvgAggregationBuilder::new, AvgAggregationBuilder::parse) .addResultReader(InternalAvg::new)); - registerAggregation(new AggregationSpec(SumAggregationBuilder.NAME, SumAggregationBuilder::new, new SumParser()) + registerAggregation(new AggregationSpec(SumAggregationBuilder.NAME, SumAggregationBuilder::new, SumAggregationBuilder::parse) .addResultReader(InternalSum::new)); - registerAggregation(new AggregationSpec(MinAggregationBuilder.NAME, MinAggregationBuilder::new, new MinParser()) + registerAggregation(new AggregationSpec(MinAggregationBuilder.NAME, MinAggregationBuilder::new, MinAggregationBuilder::parse) .addResultReader(InternalMin::new)); - registerAggregation(new AggregationSpec(MaxAggregationBuilder.NAME, MaxAggregationBuilder::new, new MaxParser()) + registerAggregation(new AggregationSpec(MaxAggregationBuilder.NAME, MaxAggregationBuilder::new, MaxAggregationBuilder::parse) .addResultReader(InternalMax::new)); - registerAggregation(new AggregationSpec(StatsAggregationBuilder.NAME, StatsAggregationBuilder::new, new StatsParser()) + registerAggregation(new AggregationSpec(StatsAggregationBuilder.NAME, StatsAggregationBuilder::new, StatsAggregationBuilder::parse) .addResultReader(InternalStats::new)); registerAggregation(new AggregationSpec(ExtendedStatsAggregationBuilder.NAME, ExtendedStatsAggregationBuilder::new, - new ExtendedStatsParser()).addResultReader(InternalExtendedStats::new)); + ExtendedStatsAggregationBuilder::parse).addResultReader(InternalExtendedStats::new)); registerAggregation(new AggregationSpec(ValueCountAggregationBuilder.NAME, ValueCountAggregationBuilder::new, - new ValueCountParser()).addResultReader(InternalValueCount::new)); + ValueCountAggregationBuilder::parse).addResultReader(InternalValueCount::new)); registerAggregation(new AggregationSpec(PercentilesAggregationBuilder.NAME, PercentilesAggregationBuilder::new, - new PercentilesParser()) + PercentilesAggregationBuilder::parse) .addResultReader(InternalTDigestPercentiles.NAME, InternalTDigestPercentiles::new) .addResultReader(InternalHDRPercentiles.NAME, InternalHDRPercentiles::new)); registerAggregation(new AggregationSpec(PercentileRanksAggregationBuilder.NAME, PercentileRanksAggregationBuilder::new, - new PercentileRanksParser()) + PercentileRanksAggregationBuilder::parse) .addResultReader(InternalTDigestPercentileRanks.NAME, InternalTDigestPercentileRanks::new) .addResultReader(InternalHDRPercentileRanks.NAME, InternalHDRPercentileRanks::new)); registerAggregation(new AggregationSpec(CardinalityAggregationBuilder.NAME, CardinalityAggregationBuilder::new, - new CardinalityParser()).addResultReader(InternalCardinality::new)); + CardinalityAggregationBuilder::parse).addResultReader(InternalCardinality::new)); registerAggregation(new AggregationSpec(GlobalAggregationBuilder.NAME, GlobalAggregationBuilder::new, GlobalAggregationBuilder::parse).addResultReader(InternalGlobal::new)); - registerAggregation(new 
AggregationSpec(MissingAggregationBuilder.NAME, MissingAggregationBuilder::new, new MissingParser()) - .addResultReader(InternalMissing::new)); + registerAggregation(new AggregationSpec(MissingAggregationBuilder.NAME, MissingAggregationBuilder::new, + MissingAggregationBuilder::parse).addResultReader(InternalMissing::new)); registerAggregation(new AggregationSpec(FilterAggregationBuilder.NAME, FilterAggregationBuilder::new, FilterAggregationBuilder::parse).addResultReader(InternalFilter::new)); registerAggregation(new AggregationSpec(FiltersAggregationBuilder.NAME, FiltersAggregationBuilder::new, FiltersAggregationBuilder::parse).addResultReader(InternalFilters::new)); + registerAggregation(new AggregationSpec(AdjacencyMatrixAggregationBuilder.NAME, AdjacencyMatrixAggregationBuilder::new, + AdjacencyMatrixAggregationBuilder.getParser()).addResultReader(InternalAdjacencyMatrix::new)); registerAggregation(new AggregationSpec(SamplerAggregationBuilder.NAME, SamplerAggregationBuilder::new, SamplerAggregationBuilder::parse) .addResultReader(InternalSampler.NAME, InternalSampler::new) .addResultReader(UnmappedSampler.NAME, UnmappedSampler::new)); registerAggregation(new AggregationSpec(DiversifiedAggregationBuilder.NAME, DiversifiedAggregationBuilder::new, - new DiversifiedSamplerParser()) + DiversifiedAggregationBuilder::parse) /* Reuses result readers from SamplerAggregator*/); - registerAggregation(new AggregationSpec(TermsAggregationBuilder.NAME, TermsAggregationBuilder::new, new TermsParser()) + registerAggregation(new AggregationSpec(TermsAggregationBuilder.NAME, TermsAggregationBuilder::new, + TermsAggregationBuilder::parse) .addResultReader(StringTerms.NAME, StringTerms::new) .addResultReader(UnmappedTerms.NAME, UnmappedTerms::new) .addResultReader(LongTerms.NAME, LongTerms::new) .addResultReader(DoubleTerms.NAME, DoubleTerms::new)); registerAggregation(new AggregationSpec(SignificantTermsAggregationBuilder.NAME, SignificantTermsAggregationBuilder::new, - new SignificantTermsParser(significanceHeuristicParserRegistry, queryParserRegistry)) + SignificantTermsAggregationBuilder.getParser(significanceHeuristicParserRegistry)) .addResultReader(SignificantStringTerms.NAME, SignificantStringTerms::new) .addResultReader(SignificantLongTerms.NAME, SignificantLongTerms::new) .addResultReader(UnmappedSignificantTerms.NAME, UnmappedSignificantTerms::new)); registerAggregation(new AggregationSpec(RangeAggregationBuilder.NAME, RangeAggregationBuilder::new, - new RangeParser()).addResultReader(InternalRange::new)); - registerAggregation(new AggregationSpec(DateRangeAggregationBuilder.NAME, DateRangeAggregationBuilder::new, new DateRangeParser()) - .addResultReader(InternalDateRange::new)); - registerAggregation(new AggregationSpec(IpRangeAggregationBuilder.NAME, IpRangeAggregationBuilder::new, new IpRangeParser()) - .addResultReader(InternalBinaryRange::new)); - registerAggregation(new AggregationSpec(HistogramAggregationBuilder.NAME, HistogramAggregationBuilder::new, new HistogramParser()) - .addResultReader(InternalHistogram::new)); + RangeAggregationBuilder::parse).addResultReader(InternalRange::new)); + registerAggregation(new AggregationSpec(DateRangeAggregationBuilder.NAME, DateRangeAggregationBuilder::new, + DateRangeAggregationBuilder::parse).addResultReader(InternalDateRange::new)); + registerAggregation(new AggregationSpec(IpRangeAggregationBuilder.NAME, IpRangeAggregationBuilder::new, + IpRangeAggregationBuilder::parse).addResultReader(InternalBinaryRange::new)); + registerAggregation(new 
AggregationSpec(HistogramAggregationBuilder.NAME, HistogramAggregationBuilder::new, + HistogramAggregationBuilder::parse).addResultReader(InternalHistogram::new)); registerAggregation(new AggregationSpec(DateHistogramAggregationBuilder.NAME, DateHistogramAggregationBuilder::new, - new DateHistogramParser()).addResultReader(InternalDateHistogram::new)); + DateHistogramAggregationBuilder::parse).addResultReader(InternalDateHistogram::new)); registerAggregation(new AggregationSpec(GeoDistanceAggregationBuilder.NAME, GeoDistanceAggregationBuilder::new, - new GeoDistanceParser()).addResultReader(InternalGeoDistance::new)); - registerAggregation(new AggregationSpec(GeoGridAggregationBuilder.NAME, GeoGridAggregationBuilder::new, new GeoHashGridParser()) - .addResultReader(InternalGeoHashGrid::new)); + GeoDistanceAggregationBuilder::parse).addResultReader(InternalGeoDistance::new)); + registerAggregation(new AggregationSpec(GeoGridAggregationBuilder.NAME, GeoGridAggregationBuilder::new, + GeoGridAggregationBuilder::parse).addResultReader(InternalGeoHashGrid::new)); registerAggregation(new AggregationSpec(NestedAggregationBuilder.NAME, NestedAggregationBuilder::new, NestedAggregationBuilder::parse).addResultReader(InternalNested::new)); registerAggregation(new AggregationSpec(ReverseNestedAggregationBuilder.NAME, ReverseNestedAggregationBuilder::new, ReverseNestedAggregationBuilder::parse).addResultReader(InternalReverseNested::new)); registerAggregation(new AggregationSpec(TopHitsAggregationBuilder.NAME, TopHitsAggregationBuilder::new, TopHitsAggregationBuilder::parse).addResultReader(InternalTopHits::new)); - registerAggregation(new AggregationSpec(GeoBoundsAggregationBuilder.NAME, GeoBoundsAggregationBuilder::new, new GeoBoundsParser()) - .addResultReader(InternalGeoBounds::new)); + registerAggregation(new AggregationSpec(GeoBoundsAggregationBuilder.NAME, GeoBoundsAggregationBuilder::new, + GeoBoundsAggregationBuilder::parse).addResultReader(InternalGeoBounds::new)); registerAggregation(new AggregationSpec(GeoCentroidAggregationBuilder.NAME, GeoCentroidAggregationBuilder::new, - new GeoCentroidParser()).addResultReader(InternalGeoCentroid::new)); + GeoCentroidAggregationBuilder::parse).addResultReader(InternalGeoCentroid::new)); registerAggregation(new AggregationSpec(ScriptedMetricAggregationBuilder.NAME, ScriptedMetricAggregationBuilder::new, ScriptedMetricAggregationBuilder::parse).addResultReader(InternalScriptedMetric::new)); - registerAggregation(new AggregationSpec(ChildrenAggregationBuilder.NAME, ChildrenAggregationBuilder::new, - ChildrenAggregationBuilder::parse).addResultReader(InternalChildren::new)); - registerFromPlugin(plugins, SearchPlugin::getAggregations, this::registerAggregation); } private void registerAggregation(AggregationSpec spec) { if (false == transportClient) { - aggregationParserRegistry.register(spec.getParser(), spec.getName()); + namedXContents.add(new NamedXContentRegistry.Entry(BaseAggregationBuilder.class, spec.getName(), (p, c) -> { + AggregatorFactories.AggParseContext context = (AggregatorFactories.AggParseContext) c; + return spec.getParser().parse(context.name, context.queryParseContext); + })); } - namedWriteables.add(new Entry(AggregationBuilder.class, spec.getName().getPreferredName(), spec.getReader())); + namedWriteables.add( + new NamedWriteableRegistry.Entry(AggregationBuilder.class, spec.getName().getPreferredName(), spec.getReader())); for (Map.Entry> t : spec.getResultReaders().entrySet()) { String writeableName = t.getKey(); Writeable.Reader 
internalReader = t.getValue(); - namedWriteables.add(new Entry(InternalAggregation.class, writeableName, internalReader)); + namedWriteables.add(new NamedWriteableRegistry.Entry(InternalAggregation.class, writeableName, internalReader)); } } @@ -561,22 +508,21 @@ private void registerPipelineAggregations(List plugins) { private void registerPipelineAggregation(PipelineAggregationSpec spec) { if (false == transportClient) { - pipelineAggregationParserRegistry.register(spec.getParser(), spec.getName()); + namedXContents.add(new NamedXContentRegistry.Entry(BaseAggregationBuilder.class, spec.getName(), (p, c) -> { + AggregatorFactories.AggParseContext context = (AggregatorFactories.AggParseContext) c; + return spec.getParser().parse(context.name, context.queryParseContext); + })); } - namedWriteables.add(new Entry(PipelineAggregationBuilder.class, spec.getName().getPreferredName(), spec.getReader())); - namedWriteables.add(new Entry(PipelineAggregator.class, spec.getName().getPreferredName(), spec.getAggregatorReader())); + namedWriteables.add( + new NamedWriteableRegistry.Entry(PipelineAggregationBuilder.class, spec.getName().getPreferredName(), spec.getReader())); + namedWriteables.add( + new NamedWriteableRegistry.Entry(PipelineAggregator.class, spec.getName().getPreferredName(), spec.getAggregatorReader())); for (Map.Entry> resultReader : spec.getResultReaders().entrySet()) { - namedWriteables.add(new Entry(InternalAggregation.class, resultReader.getKey(), resultReader.getValue())); + namedWriteables + .add(new NamedWriteableRegistry.Entry(InternalAggregation.class, resultReader.getKey(), resultReader.getValue())); } } - protected void configureSearch() { - // configure search private classes... - bind(SearchPhaseController.class).asEagerSingleton(); - bind(FetchPhase.class).toInstance(new FetchPhase(fetchSubPhases)); - bind(SearchTransportService.class).asEagerSingleton(); - } - private void registerShapes() { if (ShapesAvailability.JTS_AVAILABLE && ShapesAvailability.SPATIAL4J_AVAILABLE) { ShapeBuilders.register(namedWriteables); @@ -584,14 +530,14 @@ private void registerShapes() { } private void registerRescorers() { - namedWriteables.add(new Entry(RescoreBuilder.class, QueryRescorerBuilder.NAME, QueryRescorerBuilder::new)); + namedWriteables.add(new NamedWriteableRegistry.Entry(RescoreBuilder.class, QueryRescorerBuilder.NAME, QueryRescorerBuilder::new)); } private void registerSorts() { - namedWriteables.add(new Entry(SortBuilder.class, GeoDistanceSortBuilder.NAME, GeoDistanceSortBuilder::new)); - namedWriteables.add(new Entry(SortBuilder.class, ScoreSortBuilder.NAME, ScoreSortBuilder::new)); - namedWriteables.add(new Entry(SortBuilder.class, ScriptSortBuilder.NAME, ScriptSortBuilder::new)); - namedWriteables.add(new Entry(SortBuilder.class, FieldSortBuilder.NAME, FieldSortBuilder::new)); + namedWriteables.add(new NamedWriteableRegistry.Entry(SortBuilder.class, GeoDistanceSortBuilder.NAME, GeoDistanceSortBuilder::new)); + namedWriteables.add(new NamedWriteableRegistry.Entry(SortBuilder.class, ScoreSortBuilder.NAME, ScoreSortBuilder::new)); + namedWriteables.add(new NamedWriteableRegistry.Entry(SortBuilder.class, ScriptSortBuilder.NAME, ScriptSortBuilder::new)); + namedWriteables.add(new NamedWriteableRegistry.Entry(SortBuilder.class, FieldSortBuilder.NAME, FieldSortBuilder::new)); } private void registerFromPlugin(List plugins, Function> producer, Consumer consumer) { @@ -603,28 +549,26 @@ private void registerFromPlugin(List plugins, Function namedWriteables) { - 
namedWriteables.add(new Entry(SmoothingModel.class, Laplace.NAME, Laplace::new)); - namedWriteables.add(new Entry(SmoothingModel.class, LinearInterpolation.NAME, LinearInterpolation::new)); - namedWriteables.add(new Entry(SmoothingModel.class, StupidBackoff.NAME, StupidBackoff::new)); + namedWriteables.add(new NamedWriteableRegistry.Entry(SmoothingModel.class, Laplace.NAME, Laplace::new)); + namedWriteables.add(new NamedWriteableRegistry.Entry(SmoothingModel.class, LinearInterpolation.NAME, LinearInterpolation::new)); + namedWriteables.add(new NamedWriteableRegistry.Entry(SmoothingModel.class, StupidBackoff.NAME, StupidBackoff::new)); } - private Map> setupSuggesters(List plugins) { + private void registerSuggesters(List plugins) { registerSmoothingModels(namedWriteables); - // Suggester is weird - it is both a Parser and a reader.... - NamedRegistry> suggesters = new NamedRegistry>("suggester") { - @Override - public void register(String name, Suggester t) { - super.register(name, t); - namedWriteables.add(new Entry(SuggestionBuilder.class, name, t)); - } - }; - suggesters.register("phrase", PhraseSuggester.INSTANCE); - suggesters.register("term", TermSuggester.INSTANCE); - suggesters.register("completion", CompletionSuggester.INSTANCE); + registerSuggester(new SuggesterSpec<>("term", TermSuggestionBuilder::new, TermSuggestionBuilder::fromXContent)); + registerSuggester(new SuggesterSpec<>("phrase", PhraseSuggestionBuilder::new, PhraseSuggestionBuilder::fromXContent)); + registerSuggester(new SuggesterSpec<>("completion", CompletionSuggestionBuilder::new, CompletionSuggestionBuilder::fromXContent)); - suggesters.extractAndRegister(plugins, SearchPlugin::getSuggesters); - return unmodifiableMap(suggesters.getRegistry()); + registerFromPlugin(plugins, SearchPlugin::getSuggesters, this::registerSuggester); + } + + private void registerSuggester(SuggesterSpec suggester) { + namedWriteables.add(new NamedWriteableRegistry.Entry( + SuggestionBuilder.class, suggester.getName().getPreferredName(), suggester.getReader())); + namedXContents.add(new NamedXContentRegistry.Entry(SuggestionBuilder.class, suggester.getName(), + suggester.getParser())); } private Map setupHighlighters(Settings settings, List plugins) { @@ -632,7 +576,7 @@ private Map setupHighlighters(Settings settings, List plugins) { //weight doesn't have its own parser, so every function supports it out of the box. //Can be a single function too when not associated to any other function, which is why it needs to be registered manually here. 
- namedWriteables.add(new Entry(ScoreFunctionBuilder.class, WeightBuilder.NAME, WeightBuilder::new)); + namedWriteables.add(new NamedWriteableRegistry.Entry(ScoreFunctionBuilder.class, WeightBuilder.NAME, WeightBuilder::new)); registerFromPlugin(plugins, SearchPlugin::getScoreFunctions, this::registerScoreFunction); } private void registerScoreFunction(ScoreFunctionSpec scoreFunction) { - scoreFunctionParserRegistry.register(scoreFunction.getParser(), scoreFunction.getName()); - namedWriteables.add(new Entry(ScoreFunctionBuilder.class, scoreFunction.getName().getPreferredName(), scoreFunction.getReader())); + namedWriteables.add(new NamedWriteableRegistry.Entry( + ScoreFunctionBuilder.class, scoreFunction.getName().getPreferredName(), scoreFunction.getReader())); + // TODO remove funky contexts + namedXContents.add(new NamedXContentRegistry.Entry( + ScoreFunctionBuilder.class, scoreFunction.getName(), + (XContentParser p, Object c) -> scoreFunction.getParser().fromXContent((QueryParseContext) c))); } private void registerValueFormats() { @@ -677,7 +625,7 @@ private void registerValueFormats() { * Register a new ValueFormat. */ private void registerValueFormat(String name, Writeable.Reader reader) { - namedWriteables.add(new Entry(DocValueFormat.class, name, reader)); + namedWriteables.add(new NamedWriteableRegistry.Entry(DocValueFormat.class, name, reader)); } private void registerSignificanceHeuristics(List plugins) { @@ -693,7 +641,8 @@ private void registerSignificanceHeuristics(List plugins) { private void registerSignificanceHeuristic(SearchExtensionSpec heuristic) { significanceHeuristicParserRegistry.register(heuristic.getParser(), heuristic.getName()); - namedWriteables.add(new Entry(SignificanceHeuristic.class, heuristic.getName().getPreferredName(), heuristic.getReader())); + namedWriteables.add(new NamedWriteableRegistry.Entry(SignificanceHeuristic.class, heuristic.getName().getPreferredName(), + heuristic.getReader())); } private void registerMovingAverageModels(List plugins) { @@ -708,7 +657,8 @@ private void registerMovingAverageModels(List plugins) { private void registerMovingAverageModel(SearchExtensionSpec movAvgModel) { movingAverageModelParserRegistry.register(movAvgModel.getParser(), movAvgModel.getName()); - namedWriteables.add(new Entry(MovAvgModel.class, movAvgModel.getName().getPreferredName(), movAvgModel.getReader())); + namedWriteables.add( + new NamedWriteableRegistry.Entry(MovAvgModel.class, movAvgModel.getName().getPreferredName(), movAvgModel.getReader())); } private void registerFetchSubPhases(List plugins) { @@ -725,6 +675,15 @@ private void registerFetchSubPhases(List plugins) { registerFromPlugin(plugins, p -> p.getFetchSubPhases(context), this::registerFetchSubPhase); } + private void registerSearchExts(List plugins) { + registerFromPlugin(plugins, SearchPlugin::getSearchExts, this::registerSearchExt); + } + + private void registerSearchExt(SearchExtSpec spec) { + namedXContents.add(new NamedXContentRegistry.Entry(SearchExtBuilder.class, spec.getName(), spec.getParser())); + namedWriteables.add(new NamedWriteableRegistry.Entry(SearchExtBuilder.class, spec.getName().getPreferredName(), spec.getReader())); + } + private void registerFetchSubPhase(FetchSubPhase subPhase) { Class subPhaseClass = subPhase.getClass(); if (fetchSubPhases.stream().anyMatch(p -> p.getClass().equals(subPhaseClass))) { @@ -740,8 +699,6 @@ private void registerQueryParsers(List plugins) { MatchPhrasePrefixQueryBuilder::fromXContent)); registerQuery(new 
QuerySpec<>(MultiMatchQueryBuilder.NAME, MultiMatchQueryBuilder::new, MultiMatchQueryBuilder::fromXContent)); registerQuery(new QuerySpec<>(NestedQueryBuilder.NAME, NestedQueryBuilder::new, NestedQueryBuilder::fromXContent)); - registerQuery(new QuerySpec<>(HasChildQueryBuilder.NAME, HasChildQueryBuilder::new, HasChildQueryBuilder::fromXContent)); - registerQuery(new QuerySpec<>(HasParentQueryBuilder.NAME, HasParentQueryBuilder::new, HasParentQueryBuilder::fromXContent)); registerQuery(new QuerySpec<>(DisMaxQueryBuilder.NAME, DisMaxQueryBuilder::new, DisMaxQueryBuilder::fromXContent)); registerQuery(new QuerySpec<>(IdsQueryBuilder.NAME, IdsQueryBuilder::new, IdsQueryBuilder::fromXContent)); registerQuery(new QuerySpec<>(MatchAllQueryBuilder.NAME, MatchAllQueryBuilder::new, MatchAllQueryBuilder::fromXContent)); @@ -777,7 +734,7 @@ private void registerQueryParsers(List plugins) { registerQuery( new QuerySpec<>(SpanMultiTermQueryBuilder.NAME, SpanMultiTermQueryBuilder::new, SpanMultiTermQueryBuilder::fromXContent)); registerQuery(new QuerySpec<>(FunctionScoreQueryBuilder.NAME, FunctionScoreQueryBuilder::new, - c -> FunctionScoreQueryBuilder.fromXContent(scoreFunctionParserRegistry, c))); + FunctionScoreQueryBuilder::fromXContent)); registerQuery( new QuerySpec<>(SimpleQueryStringBuilder.NAME, SimpleQueryStringBuilder::new, SimpleQueryStringBuilder::fromXContent)); registerQuery(new QuerySpec<>(TypeQueryBuilder.NAME, TypeQueryBuilder::new, TypeQueryBuilder::fromXContent)); @@ -791,7 +748,6 @@ private void registerQueryParsers(List plugins) { registerQuery(new QuerySpec<>(GeoPolygonQueryBuilder.NAME, GeoPolygonQueryBuilder::new, GeoPolygonQueryBuilder::fromXContent)); registerQuery(new QuerySpec<>(ExistsQueryBuilder.NAME, ExistsQueryBuilder::new, ExistsQueryBuilder::fromXContent)); registerQuery(new QuerySpec<>(MatchNoneQueryBuilder.NAME, MatchNoneQueryBuilder::new, MatchNoneQueryBuilder::fromXContent)); - registerQuery(new QuerySpec<>(ParentIdQueryBuilder.NAME, ParentIdQueryBuilder::new, ParentIdQueryBuilder::fromXContent)); if (ShapesAvailability.JTS_AVAILABLE && ShapesAvailability.SPATIAL4J_AVAILABLE) { registerQuery(new QuerySpec<>(GeoShapeQueryBuilder.NAME, GeoShapeQueryBuilder::new, GeoShapeQueryBuilder::fromXContent)); @@ -801,7 +757,13 @@ private void registerQueryParsers(List plugins) { } private void registerQuery(QuerySpec spec) { - queryParserRegistry.register(spec.getParser(), spec.getName()); - namedWriteables.add(new Entry(QueryBuilder.class, spec.getName().getPreferredName(), spec.getReader())); + namedWriteables.add(new NamedWriteableRegistry.Entry(QueryBuilder.class, spec.getName().getPreferredName(), spec.getReader())); + // Using Optional here is fairly horrible, but in master we don't have to because the builders return QueryBuilder. + namedXContents.add(new NamedXContentRegistry.Entry(Optional.class, spec.getName(), + (XContentParser p, Object c) -> (Optional) spec.getParser().fromXContent((QueryParseContext) c))); + } + + public FetchPhase getFetchPhase() { + return new FetchPhase(fetchSubPhases); } } diff --git a/core/src/main/java/org/elasticsearch/search/SearchParseElement.java b/core/src/main/java/org/elasticsearch/search/SearchParseElement.java deleted file mode 100644 index 9bf680deb5511..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/SearchParseElement.java +++ /dev/null @@ -1,31 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. 
See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.search; - -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.search.internal.SearchContext; - -/** - * - */ -public interface SearchParseElement { - - void parse(XContentParser parser, SearchContext context) throws Exception; -} diff --git a/core/src/main/java/org/elasticsearch/search/SearchParseException.java b/core/src/main/java/org/elasticsearch/search/SearchParseException.java index c0a9a3702701d..b1c5d284a6439 100644 --- a/core/src/main/java/org/elasticsearch/search/SearchParseException.java +++ b/core/src/main/java/org/elasticsearch/search/SearchParseException.java @@ -75,12 +75,11 @@ public RestStatus status() { } @Override - protected void innerToXContent(XContentBuilder builder, Params params) throws IOException { + protected void metadataToXContent(XContentBuilder builder, Params params) throws IOException { if (lineNumber != UNKNOWN_POSITION) { builder.field("line", lineNumber); builder.field("col", columnNumber); } - super.innerToXContent(builder, params); } /** diff --git a/core/src/main/java/org/elasticsearch/search/SearchPhase.java b/core/src/main/java/org/elasticsearch/search/SearchPhase.java index 48c041f12f53f..33260706b3cb2 100644 --- a/core/src/main/java/org/elasticsearch/search/SearchPhase.java +++ b/core/src/main/java/org/elasticsearch/search/SearchPhase.java @@ -21,22 +21,18 @@ import org.elasticsearch.search.internal.SearchContext; -import java.util.Collections; -import java.util.Map; - /** - * + * Represents a phase of a search request, e.g. query, fetch etc. */ public interface SearchPhase { - default Map parseElements() { - return Collections.emptyMap(); - } - /** * Performs pre processing of the search context before the execute. */ void preProcess(SearchContext context); + /** + * Executes the search phase + */ void execute(SearchContext context); } diff --git a/core/src/main/java/org/elasticsearch/search/SearchPhaseResult.java b/core/src/main/java/org/elasticsearch/search/SearchPhaseResult.java index 067761b9b71fb..ede9f525a5a14 100644 --- a/core/src/main/java/org/elasticsearch/search/SearchPhaseResult.java +++ b/core/src/main/java/org/elasticsearch/search/SearchPhaseResult.java @@ -20,15 +20,63 @@ package org.elasticsearch.search; import org.elasticsearch.common.io.stream.Streamable; +import org.elasticsearch.search.fetch.FetchSearchResult; +import org.elasticsearch.search.query.QuerySearchResult; +import org.elasticsearch.transport.TransportResponse; /** - * + * This class is a base class for all search related results. It contains the shard target it + * was executed against, a shard index used to reference the result on the coordinating node + * and a request ID that is used to reference the request context on the executing node.
The + * request ID is particularly important since it is used to reference and maintain a context + * across search phases to ensure the same point in time snapshot is used for querying and + * fetching etc. */ -public interface SearchPhaseResult extends Streamable { +public abstract class SearchPhaseResult extends TransportResponse implements Streamable { + + private SearchShardTarget searchShardTarget; + private int shardIndex = -1; + protected long requestId; + + /** + * Returns the results request ID that is used to reference the search context on the executing + * node + */ + public long getRequestId() { + return requestId; + } + + /** + * Returns the shard index in the context of the currently executing search request that is + * used for accounting on the coordinating node + */ + public int getShardIndex() { + assert shardIndex != -1 : "shardIndex is not set"; + return shardIndex; + } + + public SearchShardTarget getSearchShardTarget() { + return searchShardTarget; + } + + public void setSearchShardTarget(SearchShardTarget shardTarget) { + this.searchShardTarget = shardTarget; + } - long id(); + public void setShardIndex(int shardIndex) { + assert shardIndex >= 0 : "shardIndex must be >= 0 but was: " + shardIndex; + this.shardIndex = shardIndex; + } - SearchShardTarget shardTarget(); + /** + * Returns the query result iff it's included in this response otherwise null + */ + public QuerySearchResult queryResult() { + return null; + } - void shardTarget(SearchShardTarget shardTarget); + /** + * Returns the fetch result iff it's included in this response otherwise null + */ + public FetchSearchResult fetchResult() { return null; } } diff --git a/core/src/main/java/org/elasticsearch/search/SearchRequestParsers.java b/core/src/main/java/org/elasticsearch/search/SearchRequestParsers.java deleted file mode 100644 index 83eebd125d8e5..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/SearchRequestParsers.java +++ /dev/null @@ -1,64 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.search; - -import org.elasticsearch.index.query.QueryParseContext; -import org.elasticsearch.indices.query.IndicesQueriesRegistry; -import org.elasticsearch.search.aggregations.AggregatorParsers; -import org.elasticsearch.search.suggest.Suggesters; - -/** - * A container for all parsers used to parse - * {@link org.elasticsearch.action.search.SearchRequest} objects from a rest request. 
- */ -public class SearchRequestParsers { - // TODO: this class should be renamed to SearchRequestParser, and all the parse - // methods split across RestSearchAction and SearchSourceBuilder should be moved here - // TODO: make all members private once parsing functions are moved here - - // TODO: IndicesQueriesRegistry should be removed and just have the map of query parsers here - /** - * Query parsers that may be used in search requests. - * @see org.elasticsearch.index.query.QueryParseContext - * @see org.elasticsearch.search.builder.SearchSourceBuilder#fromXContent(QueryParseContext, AggregatorParsers, Suggesters) - */ - public final IndicesQueriesRegistry queryParsers; - - // TODO: AggregatorParsers should be removed and the underlying maps of agg - // and pipeline agg parsers should be here - /** - * Agg and pipeline agg parsers that may be used in search requests. - * @see org.elasticsearch.search.builder.SearchSourceBuilder#fromXContent(QueryParseContext, AggregatorParsers, Suggesters) - */ - public final AggregatorParsers aggParsers; - - // TODO: Suggesters should be removed and the underlying map moved here - /** - * Suggesters that may be used in search requests. - * @see org.elasticsearch.search.builder.SearchSourceBuilder#fromXContent(QueryParseContext, AggregatorParsers, Suggesters) - */ - public final Suggesters suggesters; - - public SearchRequestParsers(IndicesQueriesRegistry queryParsers, AggregatorParsers aggParsers, Suggesters suggesters) { - this.queryParsers = queryParsers; - this.aggParsers = aggParsers; - this.suggesters = suggesters; - } -} diff --git a/core/src/main/java/org/elasticsearch/search/SearchService.java b/core/src/main/java/org/elasticsearch/search/SearchService.java index ba39c55a7a239..88f522e532806 100644 --- a/core/src/main/java/org/elasticsearch/search/SearchService.java +++ b/core/src/main/java/org/elasticsearch/search/SearchService.java @@ -19,46 +19,47 @@ package org.elasticsearch.search; -import com.carrotsearch.hppc.ObjectFloatHashMap; import org.apache.lucene.search.FieldDoc; import org.apache.lucene.search.TopDocs; +import org.apache.lucene.util.IOUtils; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ExceptionsHelper; -import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.action.OriginalIndices; +import org.elasticsearch.action.search.SearchTask; +import org.elasticsearch.action.search.SearchType; +import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.component.AbstractLifecycleComponent; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.lucene.Lucene; -import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; -import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.util.concurrent.ConcurrentCollections; import org.elasticsearch.common.util.concurrent.ConcurrentMapLong; -import org.elasticsearch.common.xcontent.XContentFactory; -import org.elasticsearch.common.xcontent.XContentLocation; -import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexService; +import 
org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.index.query.InnerHitBuilder; +import org.elasticsearch.index.query.InnerHitContextBuilder; +import org.elasticsearch.index.query.MatchAllQueryBuilder; +import org.elasticsearch.index.query.MatchNoneQueryBuilder; +import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.shard.IndexEventListener; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.SearchOperationListener; import org.elasticsearch.indices.IndicesService; +import org.elasticsearch.indices.cluster.IndicesClusterStateService.AllocatedIndices.IndexRemovalReason; import org.elasticsearch.script.ScriptContext; import org.elasticsearch.script.ScriptService; import org.elasticsearch.script.SearchScript; import org.elasticsearch.search.aggregations.AggregationInitializationException; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.SearchContextAggregations; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.builder.SearchSourceBuilder; +import org.elasticsearch.search.collapse.CollapseContext; import org.elasticsearch.search.dfs.DfsPhase; import org.elasticsearch.search.dfs.DfsSearchResult; import org.elasticsearch.search.fetch.FetchPhase; @@ -67,11 +68,9 @@ import org.elasticsearch.search.fetch.ScrollQueryFetchSearchResult; import org.elasticsearch.search.fetch.ShardFetchRequest; import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext; -import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext.DocValueField; -import org.elasticsearch.search.fetch.subphase.DocValueFieldsFetchSubPhase; import org.elasticsearch.search.fetch.subphase.ScriptFieldsContext.ScriptField; import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; -import org.elasticsearch.search.internal.DefaultSearchContext; +import org.elasticsearch.search.internal.AliasFilter; import org.elasticsearch.search.internal.InternalScrollSearchRequest; import org.elasticsearch.search.internal.ScrollContext; import org.elasticsearch.search.internal.SearchContext; @@ -81,7 +80,6 @@ import org.elasticsearch.search.query.QueryPhase; import org.elasticsearch.search.query.QuerySearchRequest; import org.elasticsearch.search.query.QuerySearchResult; -import org.elasticsearch.search.query.QuerySearchResultProvider; import org.elasticsearch.search.query.ScrollQuerySearchResult; import org.elasticsearch.search.rescore.RescoreBuilder; import org.elasticsearch.search.searchafter.SearchAfterBuilder; @@ -92,6 +90,7 @@ import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.threadpool.ThreadPool.Cancellable; import org.elasticsearch.threadpool.ThreadPool.Names; +import org.elasticsearch.transport.TransportRequest; import java.io.IOException; import java.util.Collections; @@ -102,7 +101,6 @@ import java.util.concurrent.ExecutionException; import java.util.concurrent.atomic.AtomicLong; -import static java.util.Collections.unmodifiableMap; import static org.elasticsearch.common.unit.TimeValue.timeValueMillis; import static org.elasticsearch.common.unit.TimeValue.timeValueMinutes; @@ -113,6 +111,13 @@ public class SearchService extends AbstractLifecycleComponent implements IndexEv Setting.positiveTimeSetting("search.default_keep_alive", timeValueMinutes(5), Property.NodeScope); public static final Setting 
KEEPALIVE_INTERVAL_SETTING = Setting.positiveTimeSetting("search.keep_alive_interval", timeValueMinutes(1), Property.NodeScope); + /** + * Enables low-level, frequent search cancellation checks. Enabling low-level checks will make long-running searches react + * to the cancellation request faster. However, since it will produce more cancellation checks, it might slow the search performance + * down. + */ + public static final Setting LOW_LEVEL_CANCELLATION_SETTING = + Setting.boolSetting("search.low_level_cancellation", false, Property.Dynamic, Property.NodeScope); public static final TimeValue NO_TIMEOUT = timeValueMillis(-1); public static final Setting DEFAULT_SEARCH_TIMEOUT_SETTING = @@ -139,21 +144,17 @@ public class SearchService extends AbstractLifecycleComponent implements IndexEv private volatile TimeValue defaultSearchTimeout; + private volatile boolean lowLevelCancellation; + private final Cancellable keepAliveReaper; private final AtomicLong idGenerator = new AtomicLong(); private final ConcurrentMapLong activeContexts = ConcurrentCollections.newConcurrentMapLongWithAggressiveConcurrency(); - private final Map elementParsers; - - private final ParseFieldMatcher parseFieldMatcher; - - @Inject - public SearchService(Settings settings, ClusterSettings clusterSettings, ClusterService clusterService, IndicesService indicesService, + public SearchService(ClusterService clusterService, IndicesService indicesService, ThreadPool threadPool, ScriptService scriptService, BigArrays bigArrays, FetchPhase fetchPhase) { - super(settings); - this.parseFieldMatcher = new ParseFieldMatcher(settings); + super(clusterService.getSettings()); this.threadPool = threadPool; this.clusterService = clusterService; this.indicesService = indicesService; @@ -165,40 +166,34 @@ public SearchService(Settings settings, ClusterSettings clusterSettings, Cluster TimeValue keepAliveInterval = KEEPALIVE_INTERVAL_SETTING.get(settings); this.defaultKeepAlive = DEFAULT_KEEPALIVE_SETTING.get(settings).millis(); - Map elementParsers = new HashMap<>(); - elementParsers.putAll(dfsPhase.parseElements()); - elementParsers.putAll(queryPhase.parseElements()); - elementParsers.putAll(fetchPhase.parseElements()); - this.elementParsers = unmodifiableMap(elementParsers); - this.keepAliveReaper = threadPool.scheduleWithFixedDelay(new Reaper(), keepAliveInterval, Names.SAME); defaultSearchTimeout = DEFAULT_SEARCH_TIMEOUT_SETTING.get(settings); - clusterSettings.addSettingsUpdateConsumer(DEFAULT_SEARCH_TIMEOUT_SETTING, this::setDefaultSearchTimeout); + clusterService.getClusterSettings().addSettingsUpdateConsumer(DEFAULT_SEARCH_TIMEOUT_SETTING, this::setDefaultSearchTimeout); + + lowLevelCancellation = LOW_LEVEL_CANCELLATION_SETTING.get(settings); + clusterService.getClusterSettings().addSettingsUpdateConsumer(LOW_LEVEL_CANCELLATION_SETTING, this::setLowLevelCancellation); } private void setDefaultSearchTimeout(TimeValue defaultSearchTimeout) { this.defaultSearchTimeout = defaultSearchTimeout; } + private void setLowLevelCancellation(Boolean lowLevelCancellation) { + this.lowLevelCancellation = lowLevelCancellation; + } + @Override - public void afterIndexClosed(Index index, Settings indexSettings) { - // once an index is closed we can just clean up all the pending search context information + public void afterIndexRemoved(Index index, IndexSettings indexSettings, IndexRemovalReason reason) { + // once an index is removed due to deletion or closing, we can just clean up all the pending search context information + // if we then
close all the contexts we can get some search failures along the way which are not expected. + // it's fine to keep the contexts open if the index is still "alive" + // unfortunately we don't have a clear way to signal today why an index is closed. // to release memory and let references to the filesystem go etc. - IndexMetaData idxMeta = SearchService.this.clusterService.state().metaData().index(index); - if (idxMeta != null && idxMeta.getState() == IndexMetaData.State.CLOSE) { - // we need to check if it's really closed - // since sometimes due to a relocation we already closed the shard and that causes the index to be closed - // if we then close all the contexts we can get some search failures along the way which are not expected. - // it's fine to keep the contexts open if the index is still "alive" - // unfortunately we don't have a clear way to signal today why an index is closed. - afterIndexDeleted(index, indexSettings); + if (reason == IndexRemovalReason.DELETED || reason == IndexRemovalReason.CLOSED) { + freeAllContextForIndex(index); } - } - @Override - public void afterIndexDeleted(Index index, Settings indexSettings) { - freeAllContextForIndex(index); } protected void putContext(SearchContext context) { @@ -227,10 +222,11 @@ protected void doClose() { keepAliveReaper.cancel(); } - public DfsSearchResult executeDfsPhase(ShardSearchRequest request) throws IOException { + public DfsSearchResult executeDfsPhase(ShardSearchRequest request, SearchTask task) throws IOException { final SearchContext context = createAndPutContext(request); context.incRef(); try { + context.setTask(task); contextProcessing(context); dfsPhase.execute(context); contextProcessedSuccessfully(context); @@ -249,6 +245,7 @@ public DfsSearchResult executeDfsPhase(ShardSearchRequest request) throws IOExce */ private void loadOrExecuteQueryPhase(final ShardSearchRequest request, final SearchContext context) throws Exception { final boolean canCache = indicesService.canCache(request, context); + context.getQueryShardContext().freezeContext(); if (canCache) { indicesService.loadIntoContext(request, context, queryPhase); } else { @@ -256,32 +253,40 @@ private void loadOrExecuteQueryPhase(final ShardSearchRequest request, final Sea } } - public QuerySearchResultProvider executeQueryPhase(ShardSearchRequest request) throws IOException { + public SearchPhaseResult executeQueryPhase(ShardSearchRequest request, SearchTask task) throws IOException { final SearchContext context = createAndPutContext(request); final SearchOperationListener operationListener = context.indexShard().getSearchOperationListener(); context.incRef(); + boolean queryPhaseSuccess = false; try { + context.setTask(task); operationListener.onPreQueryPhase(context); long time = System.nanoTime(); contextProcessing(context); loadOrExecuteQueryPhase(request, context); - if (context.queryResult().hasHits() == false && context.scrollContext() == null) { + if (context.queryResult().hasSearchContext() == false && context.scrollContext() == null) { freeContext(context.id()); } else { contextProcessedSuccessfully(context); } - operationListener.onQueryPhase(context, System.nanoTime() - time); - + final long afterQueryTime = System.nanoTime(); + queryPhaseSuccess = true; + operationListener.onQueryPhase(context, afterQueryTime - time); + if (request.numberOfShards() == 1) { + return executeFetchPhase(context, operationListener, afterQueryTime); + } return context.queryResult(); } catch (Exception e) { // execution exception can happen while loading the cache, 
strip it if (e instanceof ExecutionException) { e = (e.getCause() == null || e.getCause() instanceof Exception) ? - (Exception) e.getCause() : new ElasticsearchException(e.getCause()); + (Exception) e.getCause() : new ElasticsearchException(e.getCause()); + } + if (!queryPhaseSuccess) { + operationListener.onFailedQueryPhase(context); } - operationListener.onFailedQueryPhase(context); logger.trace("Query phase failed", e); processFailure(context, e); throw ExceptionsHelper.convertToRuntime(e); @@ -290,11 +295,31 @@ public QuerySearchResultProvider executeQueryPhase(ShardSearchRequest request) t } } - public ScrollQuerySearchResult executeQueryPhase(InternalScrollSearchRequest request) { - final SearchContext context = findContext(request.id()); + private QueryFetchSearchResult executeFetchPhase(SearchContext context, SearchOperationListener operationListener, + long afterQueryTime) { + operationListener.onPreFetchPhase(context); + try { + shortcutDocIdsToLoad(context); + fetchPhase.execute(context); + if (fetchPhaseShouldFreeContext(context)) { + freeContext(context.id()); + } else { + contextProcessedSuccessfully(context); + } + } catch (Exception e) { + operationListener.onFailedFetchPhase(context); + throw ExceptionsHelper.convertToRuntime(e); + } + operationListener.onFetchPhase(context, System.nanoTime() - afterQueryTime); + return new QueryFetchSearchResult(context.queryResult(), context.fetchResult()); + } + + public ScrollQuerySearchResult executeQueryPhase(InternalScrollSearchRequest request, SearchTask task) { + final SearchContext context = findContext(request.id(), request); SearchOperationListener operationListener = context.indexShard().getSearchOperationListener(); context.incRef(); try { + context.setTask(task); operationListener.onPreQueryPhase(context); long time = System.nanoTime(); contextProcessing(context); @@ -313,8 +338,9 @@ public ScrollQuerySearchResult executeQueryPhase(InternalScrollSearchRequest req } } - public QuerySearchResult executeQueryPhase(QuerySearchRequest request) { - final SearchContext context = findContext(request.id()); + public QuerySearchResult executeQueryPhase(QuerySearchRequest request, SearchTask task) { + final SearchContext context = findContext(request.id(), request); + context.setTask(task); IndexShard indexShard = context.indexShard(); SearchOperationListener operationListener = indexShard.getSearchOperationListener(); context.incRef(); @@ -325,7 +351,7 @@ public QuerySearchResult executeQueryPhase(QuerySearchRequest request) { operationListener.onPreQueryPhase(context); long time = System.nanoTime(); queryPhase.execute(context); - if (context.queryResult().hasHits() == false && context.scrollContext() == null) { + if (context.queryResult().hasSearchContext() == false && context.scrollContext() == null) { // no hits, we can release the context since there will be no fetch phase freeContext(context.id()); } else { @@ -353,119 +379,28 @@ private boolean fetchPhaseShouldFreeContext(SearchContext context) { } } - public QueryFetchSearchResult executeFetchPhase(ShardSearchRequest request) throws IOException { - final SearchContext context = createAndPutContext(request); - context.incRef(); - try { - contextProcessing(context); - SearchOperationListener operationListener = context.indexShard().getSearchOperationListener(); - operationListener.onPreQueryPhase(context); - long time = System.nanoTime(); - try { - loadOrExecuteQueryPhase(request, context); - } catch (Exception e) { - operationListener.onFailedQueryPhase(context); - throw 
ExceptionsHelper.convertToRuntime(e); - } - long time2 = System.nanoTime(); - operationListener.onQueryPhase(context, time2 - time); - operationListener.onPreFetchPhase(context); - try { - shortcutDocIdsToLoad(context); - fetchPhase.execute(context); - if (fetchPhaseShouldFreeContext(context)) { - freeContext(context.id()); - } else { - contextProcessedSuccessfully(context); - } - } catch (Exception e) { - operationListener.onFailedFetchPhase(context); - throw ExceptionsHelper.convertToRuntime(e); - } - operationListener.onFetchPhase(context, System.nanoTime() - time2); - return new QueryFetchSearchResult(context.queryResult(), context.fetchResult()); - } catch (Exception e) { - logger.trace("Fetch phase failed", e); - processFailure(context, e); - throw ExceptionsHelper.convertToRuntime(e); - } finally { - cleanContext(context); - } - } - - public QueryFetchSearchResult executeFetchPhase(QuerySearchRequest request) { - final SearchContext context = findContext(request.id()); - context.incRef(); - try { - contextProcessing(context); - context.searcher().setAggregatedDfs(request.dfs()); - SearchOperationListener operationListener = context.indexShard().getSearchOperationListener(); - operationListener.onPreQueryPhase(context); - long time = System.nanoTime(); - try { - queryPhase.execute(context); - } catch (Exception e) { - operationListener.onFailedQueryPhase(context); - throw ExceptionsHelper.convertToRuntime(e); - } - long time2 = System.nanoTime(); - operationListener.onQueryPhase(context, time2 - time); - operationListener.onPreFetchPhase(context); - try { - shortcutDocIdsToLoad(context); - fetchPhase.execute(context); - if (fetchPhaseShouldFreeContext(context)) { - freeContext(request.id()); - } else { - contextProcessedSuccessfully(context); - } - } catch (Exception e) { - operationListener.onFailedFetchPhase(context); - throw ExceptionsHelper.convertToRuntime(e); - } - operationListener.onFetchPhase(context, System.nanoTime() - time2); - return new QueryFetchSearchResult(context.queryResult(), context.fetchResult()); - } catch (Exception e) { - logger.trace("Fetch phase failed", e); - processFailure(context, e); - throw ExceptionsHelper.convertToRuntime(e); - } finally { - cleanContext(context); - } - } - - public ScrollQueryFetchSearchResult executeFetchPhase(InternalScrollSearchRequest request) { - final SearchContext context = findContext(request.id()); + public ScrollQueryFetchSearchResult executeFetchPhase(InternalScrollSearchRequest request, SearchTask task) { + final SearchContext context = findContext(request.id(), request); context.incRef(); try { + context.setTask(task); contextProcessing(context); SearchOperationListener operationListener = context.indexShard().getSearchOperationListener(); processScroll(request, context); operationListener.onPreQueryPhase(context); - long time = System.nanoTime(); + final long time = System.nanoTime(); try { queryPhase.execute(context); } catch (Exception e) { operationListener.onFailedQueryPhase(context); throw ExceptionsHelper.convertToRuntime(e); } - long time2 = System.nanoTime(); - operationListener.onQueryPhase(context, time2 - time); - operationListener.onPreFetchPhase(context); - try { - shortcutDocIdsToLoad(context); - fetchPhase.execute(context); - if (fetchPhaseShouldFreeContext(context)) { - freeContext(request.id()); - } else { - contextProcessedSuccessfully(context); - } - } catch (Exception e) { - operationListener.onFailedFetchPhase(context); - throw ExceptionsHelper.convertToRuntime(e); - } - 
operationListener.onFetchPhase(context, System.nanoTime() - time2); - return new ScrollQueryFetchSearchResult(new QueryFetchSearchResult(context.queryResult(), context.fetchResult()), context.shardTarget()); + long afterQueryTime = System.nanoTime(); + operationListener.onQueryPhase(context, afterQueryTime - time); + QueryFetchSearchResult fetchSearchResult = executeFetchPhase(context, operationListener, afterQueryTime); + + return new ScrollQueryFetchSearchResult(fetchSearchResult, + context.shardTarget()); } catch (Exception e) { logger.trace("Fetch phase failed", e); processFailure(context, e); @@ -475,11 +410,12 @@ public ScrollQueryFetchSearchResult executeFetchPhase(InternalScrollSearchReques } } - public FetchSearchResult executeFetchPhase(ShardFetchRequest request) { - final SearchContext context = findContext(request.id()); + public FetchSearchResult executeFetchPhase(ShardFetchRequest request, SearchTask task) { + final SearchContext context = findContext(request.id(), request); final SearchOperationListener operationListener = context.indexShard().getSearchOperationListener(); context.incRef(); try { + context.setTask(task); contextProcessing(context); if (request.lastEmittedDoc() != null) { context.scrollContext().lastEmittedDoc = request.lastEmittedDoc(); @@ -505,13 +441,20 @@ public FetchSearchResult executeFetchPhase(ShardFetchRequest request) { } } - private SearchContext findContext(long id) throws SearchContextMissingException { + private SearchContext findContext(long id, TransportRequest request) throws SearchContextMissingException { SearchContext context = activeContexts.get(id); if (context == null) { throw new SearchContextMissingException(id); } - SearchContext.setCurrent(context); - return context; + + SearchOperationListener operationListener = context.indexShard().getSearchOperationListener(); + try { + operationListener.validateSearchContext(context, request); + return context; + } catch (Exception e) { + processFailure(context, e); + throw e; + } } final SearchContext createAndPutContext(ShardSearchRequest request) throws IOException { @@ -533,25 +476,8 @@ final SearchContext createAndPutContext(ShardSearchRequest request) throws IOExc } final SearchContext createContext(ShardSearchRequest request, @Nullable Engine.Searcher searcher) throws IOException { - IndexService indexService = indicesService.indexServiceSafe(request.shardId().getIndex()); - IndexShard indexShard = indexService.getShard(request.shardId().getId()); - SearchShardTarget shardTarget = new SearchShardTarget(clusterService.localNode().getId(), indexShard.shardId()); - - Engine.Searcher engineSearcher = searcher == null ? indexShard.acquireSearcher("search") : searcher; - - DefaultSearchContext context = new DefaultSearchContext(idGenerator.incrementAndGet(), request, shardTarget, engineSearcher, - indexService, - indexShard, scriptService, bigArrays, threadPool.estimatedTimeInMillisCounter(), parseFieldMatcher, - defaultSearchTimeout, fetchPhase); - SearchContext.setCurrent(context); + final DefaultSearchContext context = createSearchContext(request, defaultSearchTimeout, searcher); try { - request.rewrite(context.getQueryShardContext()); - // reset that we have used nowInMillis from the context since it may - // have been rewritten so its no longer in the query and the request can - // be cached. If it is still present in the request (e.g. in a range - // aggregation) it will still be caught when the aggregation is - // evaluated. 
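`findContext` now routes every lookup of an existing context through `SearchOperationListener.validateSearchContext` before handing it out, and turns a validation failure into a processed search failure. A hedged sketch of a listener using that hook; the index check, the `expectedIndex` helper, and the exception choice are illustrative assumptions, not anything core ships:

```java
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.index.shard.SearchOperationListener;
import org.elasticsearch.search.internal.SearchContext;
import org.elasticsearch.transport.TransportRequest;

// Illustrative only: rejects access to a stored context whose shard belongs to a
// different index than the one this listener expects the request to target.
public class IndexCheckingSearchListener implements SearchOperationListener {

    @Override
    public void validateSearchContext(SearchContext context, TransportRequest request) {
        String contextIndex = context.indexShard().shardId().getIndexName();
        if (contextIndex.equals(expectedIndex(request)) == false) {
            throw new ElasticsearchException("context [" + context.id() + "] does not belong to the requested index");
        }
    }

    // Hypothetical helper for this sketch; a real check would inspect the request.
    private String expectedIndex(TransportRequest request) {
        return "my_index";
    }
}
```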
- context.resetNowInMillisUsed(); if (request.scroll() != null) { context.scrollContext(new ScrollContext()); context.scrollContext().scroll = request.scroll(); @@ -577,6 +503,7 @@ final SearchContext createContext(ShardSearchRequest request, @Nullable Engine.S keepAlive = request.scroll().keepAlive().millis(); } context.keepAlive(keepAlive); + context.lowLevelCancellation(lowLevelCancellation); } catch (Exception e) { context.close(); throw ExceptionsHelper.convertToRuntime(e); @@ -585,6 +512,32 @@ final SearchContext createContext(ShardSearchRequest request, @Nullable Engine.S return context; } + public DefaultSearchContext createSearchContext(ShardSearchRequest request, TimeValue timeout, @Nullable Engine.Searcher searcher) + throws IOException { + IndexService indexService = indicesService.indexServiceSafe(request.shardId().getIndex()); + IndexShard indexShard = indexService.getShard(request.shardId().getId()); + SearchShardTarget shardTarget = new SearchShardTarget(clusterService.localNode().getId(), + indexShard.shardId(), request.getClusterAlias(), OriginalIndices.NONE); + Engine.Searcher engineSearcher = searcher == null ? indexShard.acquireSearcher("search") : searcher; + + final DefaultSearchContext searchContext = new DefaultSearchContext(idGenerator.incrementAndGet(), request, shardTarget, + engineSearcher, indexService, indexShard, bigArrays, threadPool.estimatedTimeInMillisCounter(), timeout, fetchPhase); + boolean success = false; + try { + // we clone the query shard context here just for rewriting otherwise we + // might end up with incorrect state since we are using now() or script services + // during rewrite and normalized / evaluate templates etc. + request.rewrite(new QueryShardContext(searchContext.getQueryShardContext())); + assert searchContext.getQueryShardContext().isCachable(); + success = true; + } finally { + if (success == false) { + IOUtils.closeWhileHandlingException(searchContext); + } + } + return searchContext; + } + private void freeAllContextForIndex(Index index) { assert index != null; for (SearchContext ctx : activeContexts.values()) { @@ -626,14 +579,13 @@ private void contextProcessing(SearchContext context) { } private void contextProcessedSuccessfully(SearchContext context) { - context.accessed(threadPool.estimatedTimeInMillis()); + context.accessed(threadPool.relativeTimeInMillis()); } private void cleanContext(SearchContext context) { try { - assert context == SearchContext.current(); context.clearReleasables(Lifetime.PHASE); - SearchContext.removeCurrent(); + context.setTask(null); } finally { context.decRef(); } @@ -659,24 +611,17 @@ private void parseSource(DefaultSearchContext context, SearchSourceBuilder sourc QueryShardContext queryShardContext = context.getQueryShardContext(); context.from(source.from()); context.size(source.size()); - ObjectFloatHashMap indexBoostMap = source.indexBoost(); - if (indexBoostMap != null) { - Float indexBoost = indexBoostMap.get(context.shardTarget().index()); - if (indexBoost != null) { - context.queryBoost(indexBoost); - } - } - Map innerHitBuilders = new HashMap<>(); + Map innerHitBuilders = new HashMap<>(); if (source.query() != null) { - InnerHitBuilder.extractInnerHits(source.query(), innerHitBuilders); + InnerHitContextBuilder.extractInnerHits(source.query(), innerHitBuilders); context.parsedQuery(queryShardContext.toQuery(source.query())); } if (source.postFilter() != null) { - InnerHitBuilder.extractInnerHits(source.postFilter(), innerHitBuilders); + 
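The comment in `createSearchContext` above explains why the query is rewritten against a clone of the `QueryShardContext`: rewriting may touch `now()` or scripts, and that must not taint the context that later decides request-cache eligibility (hence the `isCachable()` assertion). A standalone illustration of that design choice, using invented classes rather than Elasticsearch ones:

```java
// Standalone illustration: consulting now() taints a context as non-cacheable,
// so rewrite work is done against a throwaway copy.
class RewriteContext {
    private final long nowInMillis;
    private boolean cachable = true;

    RewriteContext(long nowInMillis) { this.nowInMillis = nowInMillis; }
    RewriteContext(RewriteContext other) { this(other.nowInMillis); } // copy starts clean

    long nowInMillis() {
        cachable = false;           // time-dependent queries must not be cached
        return nowInMillis;
    }

    boolean isCachable() { return cachable; }
}

class RewriteExample {
    public static void main(String[] args) {
        RewriteContext original = new RewriteContext(System.currentTimeMillis());
        RewriteContext copy = new RewriteContext(original);
        copy.nowInMillis();                            // rewrite taints only the copy
        System.out.println("original cachable: " + original.isCachable()); // true
    }
}
```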
InnerHitContextBuilder.extractInnerHits(source.postFilter(), innerHitBuilders); context.parsedPostFilter(queryShardContext.toQuery(source.postFilter())); } if (innerHitBuilders.size() > 0) { - for (Map.Entry entry : innerHitBuilders.entrySet()) { + for (Map.Entry entry : innerHitBuilders.entrySet()) { try { entry.getValue().build(context, context.innerHits()); } catch (IOException e) { @@ -701,12 +646,13 @@ private void parseSource(DefaultSearchContext context, SearchSourceBuilder sourc if (source.profile()) { context.setProfilers(new Profilers(context.searcher())); } - context.timeout(source.timeout()); + if (source.timeout() != null) { + context.timeout(source.timeout()); + } context.terminateAfter(source.terminateAfter()); if (source.aggregations() != null) { try { - AggregationContext aggContext = new AggregationContext(context); - AggregatorFactories factories = source.aggregations().build(aggContext, null); + AggregatorFactories factories = source.aggregations().build(context, null); factories.validate(); context.aggregations(new SearchContextAggregations(factories)); } catch (IOException e) { @@ -736,11 +682,7 @@ private void parseSource(DefaultSearchContext context, SearchSourceBuilder sourc context.fetchSourceContext(source.fetchSource()); } if (source.docValueFields() != null) { - DocValueFieldsContext docValuesFieldsContext = context.getFetchSubPhaseContext(DocValueFieldsFetchSubPhase.CONTEXT_FACTORY); - for (String field : source.docValueFields()) { - docValuesFieldsContext.add(new DocValueField(field)); - } - docValuesFieldsContext.setHitExecutionNeeded(true); + context.docValueFieldsContext(new DocValueFieldsContext(source.docValueFields())); } if (source.highlighter() != null) { HighlightBuilder highlightBuilder = source.highlighter(); @@ -752,53 +694,13 @@ private void parseSource(DefaultSearchContext context, SearchSourceBuilder sourc } if (source.scriptFields() != null) { for (org.elasticsearch.search.builder.SearchSourceBuilder.ScriptField field : source.scriptFields()) { - SearchScript searchScript = context.scriptService().search(context.lookup(), field.script(), ScriptContext.Standard.SEARCH, - Collections.emptyMap()); + SearchScript searchScript = scriptService.search(context.lookup(), field.script(), ScriptContext.Standard.SEARCH); context.scriptFields().add(new ScriptField(field.fieldName(), searchScript, field.ignoreFailure())); } } if (source.ext() != null) { - XContentParser extParser = null; - try { - extParser = XContentFactory.xContent(source.ext()).createParser(source.ext()); - if (extParser.nextToken() != XContentParser.Token.START_OBJECT) { - throw new SearchParseException(context, "expected start object, found [" + extParser.currentToken() + "] instead", - extParser.getTokenLocation()); - } - XContentParser.Token token; - String currentFieldName = null; - while ((token = extParser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = extParser.currentName(); - } else { - SearchParseElement parseElement = this.elementParsers.get(currentFieldName); - if (parseElement == null) { - if (currentFieldName != null && currentFieldName.equals("suggest")) { - throw new SearchParseException(context, - "suggest is not supported in [ext], please use SearchSourceBuilder#suggest(SuggestBuilder) instead", - extParser.getTokenLocation()); - } - throw new SearchParseException(context, "Unknown element [" + currentFieldName + "] in [ext]", - extParser.getTokenLocation()); - } else { - 
parseElement.parse(extParser, context); - } - } - } - } catch (Exception e) { - String sSource = "_na_"; - try { - sSource = source.toString(); - } catch (Exception inner) { - e.addSuppressed(inner); - // ignore - } - XContentLocation location = extParser != null ? extParser.getTokenLocation() : null; - throw new SearchParseException(context, "failed to parse ext source [" + sSource + "]", location, e); - } finally { - if (extParser != null) { - extParser.close(); - } + for (SearchExtBuilder searchExtBuilder : source.ext()) { + context.addSearchExt(searchExtBuilder); } } if (source.version() != null) { @@ -836,6 +738,11 @@ private void parseSource(DefaultSearchContext context, SearchSourceBuilder sourc } context.storedFieldsContext(source.storedFields()); } + + if (source.collapse() != null) { + final CollapseContext collapseContext = source.collapse().build(context); + context.collapse(collapseContext); + } } /** @@ -905,7 +812,7 @@ public int getActiveContexts() { class Reaper implements Runnable { @Override public void run() { - final long time = threadPool.estimatedTimeInMillis(); + final long time = threadPool.relativeTimeInMillis(); for (SearchContext context : activeContexts.values()) { // Use the same value for both checks since lastAccessTime can // be modified by another thread between checks! @@ -914,10 +821,51 @@ public void run() { continue; } if ((time - lastAccessTime > context.keepAlive())) { - logger.debug("freeing search context [{}], time [{}], lastAccessTime [{}], keepAlive [{}]", context.id(), time, lastAccessTime, context.keepAlive()); + logger.debug("freeing search context [{}], time [{}], lastAccessTime [{}], keepAlive [{}]", context.id(), time, + lastAccessTime, context.keepAlive()); freeContext(context.id()); } } } } + + public AliasFilter buildAliasFilter(ClusterState state, String index, String... expressions) { + return indicesService.buildAliasFilter(state, index, expressions); + } + + /** + * This method does a very quick rewrite of the query and returns true if the query can potentially match any documents. + * This method can have false positives while if it returns false the query won't match any documents on the current + * shard. + */ + public boolean canMatch(ShardSearchRequest request) throws IOException { + assert request.searchType() == SearchType.QUERY_THEN_FETCH : "unexpected search type: " + request.searchType(); + try (DefaultSearchContext context = createSearchContext(request, defaultSearchTimeout, null)) { + SearchSourceBuilder source = context.request().source(); + if (canRewriteToMatchNone(source)) { + QueryBuilder queryBuilder = source.query(); + return queryBuilder instanceof MatchNoneQueryBuilder == false; + } + return true; // null query means match_all + } + } + + /** + * Returns true iff the given search source builder can be early terminated by rewriting to a match none query. Or in other words + * if the execution of a the search request can be early terminated without executing it. This is for instance not possible if + * a global aggregation is part of this request or if there is a suggest builder present. 
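The `Reaper` shown above switches to the thread pool's relative clock and reads `lastAccessTime` exactly once per context, so a concurrent update cannot make the two checks disagree. A standalone sketch of that idle-expiry loop; the class and field names are invented for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for the reaper: free any context whose keep-alive window
// has elapsed, reading the last-access timestamp exactly once per context.
class ContextReaper {
    static final class Ctx {
        volatile long lastAccessTimeMillis;     // -1 means "currently in use"
        final long keepAliveMillis;
        Ctx(long lastAccess, long keepAlive) {
            this.lastAccessTimeMillis = lastAccess;
            this.keepAliveMillis = keepAlive;
        }
    }

    private final Map<Long, Ctx> active = new ConcurrentHashMap<>();

    void reap(long nowRelativeMillis) {
        for (Map.Entry<Long, Ctx> entry : active.entrySet()) {
            Ctx ctx = entry.getValue();
            long lastAccessTime = ctx.lastAccessTimeMillis;   // read once; it may change concurrently
            if (lastAccessTime == -1) {
                continue;                                     // context is being processed right now
            }
            if (nowRelativeMillis - lastAccessTime > ctx.keepAliveMillis) {
                active.remove(entry.getKey());                // freeContext(...) in the real code
            }
        }
    }
}
```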
+ */ + public static boolean canRewriteToMatchNone(SearchSourceBuilder source) { + if (source == null || source.query() == null || source.query() instanceof MatchAllQueryBuilder || source.suggest() != null) { + return false; + } else { + AggregatorFactories.Builder aggregations = source.aggregations(); + if (aggregations != null) { + if (aggregations.mustVisitAllDocs()) { + return false; + } + } + } + return true; + } } diff --git a/core/src/main/java/org/elasticsearch/search/SearchShardTarget.java b/core/src/main/java/org/elasticsearch/search/SearchShardTarget.java index 9fb8322718859..4ac742ff5d9fe 100644 --- a/core/src/main/java/org/elasticsearch/search/SearchShardTarget.java +++ b/core/src/main/java/org/elasticsearch/search/SearchShardTarget.java @@ -19,81 +19,86 @@ package org.elasticsearch.search; +import org.elasticsearch.Version; +import org.elasticsearch.action.OriginalIndices; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.text.Text; import org.elasticsearch.index.Index; import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.transport.RemoteClusterAware; import java.io.IOException; /** * The target that the search request was executed on. */ -public class SearchShardTarget implements Writeable, Comparable { +public final class SearchShardTarget implements Writeable, Comparable { - private Text nodeId; - private Text index; - private ShardId shardId; + private final Text nodeId; + private final ShardId shardId; + //original indices and cluster alias are only needed in the coordinating node throughout the search request execution. + //no need to serialize them as part of SearchShardTarget. + private final transient OriginalIndices originalIndices; + private final String clusterAlias; public SearchShardTarget(StreamInput in) throws IOException { if (in.readBoolean()) { nodeId = in.readText(); + } else { + nodeId = null; } shardId = ShardId.readShardId(in); - index = new Text(shardId.getIndexName()); + this.originalIndices = null; + if (in.getVersion().onOrAfter(Version.V_5_6_0)) { + clusterAlias = in.readOptionalString(); + } else { + clusterAlias = null; + } } - public SearchShardTarget(String nodeId, ShardId shardId) { + public SearchShardTarget(String nodeId, ShardId shardId, String clusterAlias, OriginalIndices originalIndices) { this.nodeId = nodeId == null ? 
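`canMatch` (shown just above) gives the coordinating node a cheap way to skip shards: after rewriting, a query that became `match_none` and carries no suggest section or all-document aggregations cannot produce hits, while any uncertainty falls back to executing the request. A simplified standalone decision function in the same spirit; the enum and booleans are plain stand-ins, not `SearchSourceBuilder`:

```java
// Simplified stand-in types for illustration.
enum QueryKind { MATCH_ALL, MATCH_NONE, OTHER }

class CanMatch {
    // Only claim "can rewrite to match none" when nothing else forces execution.
    static boolean canRewriteToMatchNone(QueryKind query, boolean hasSuggest, boolean aggsMustVisitAllDocs) {
        if (query == null || query == QueryKind.MATCH_ALL || hasSuggest) {
            return false;     // cannot prove anything, the shard must run the request
        }
        return aggsMustVisitAllDocs == false;
    }

    static boolean canMatch(QueryKind rewrittenQuery, boolean hasSuggest, boolean aggsMustVisitAllDocs) {
        if (canRewriteToMatchNone(rewrittenQuery, hasSuggest, aggsMustVisitAllDocs)) {
            return rewrittenQuery != QueryKind.MATCH_NONE;   // match_none -> shard can be skipped
        }
        return true;          // false positives are allowed, false negatives are not
    }
}
```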
null : new Text(nodeId); - this.index = new Text(shardId.getIndexName()); this.shardId = shardId; + this.originalIndices = originalIndices; + this.clusterAlias = clusterAlias; } - public SearchShardTarget(String nodeId, Index index, int shardId) { - this(nodeId, new ShardId(index, shardId)); - } - - @Nullable - public String nodeId() { - return nodeId.string(); + //this constructor is only used in tests + public SearchShardTarget(String nodeId, Index index, int shardId, String clusterAlias) { + this(nodeId, new ShardId(index, shardId), clusterAlias, OriginalIndices.NONE); } @Nullable public String getNodeId() { - return nodeId(); + return nodeId.string(); } - public Text nodeIdText() { + public Text getNodeIdText() { return this.nodeId; } - public String index() { - return index.string(); - } - public String getIndex() { - return index(); + return shardId.getIndexName(); } - public Text indexText() { - return this.index; + public ShardId getShardId() { + return shardId; } - public ShardId shardId() { - return shardId; + public OriginalIndices getOriginalIndices() { + return originalIndices; } - public ShardId getShardId() { - return shardId; + public String getClusterAlias() { + return clusterAlias; } @Override public int compareTo(SearchShardTarget o) { - int i = index.string().compareTo(o.index()); + int i = shardId.getIndexName().compareTo(o.getIndex()); if (i == 0) { i = shardId.getId() - o.shardId.id(); } @@ -109,6 +114,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeText(nodeId); } shardId.writeTo(out); + if (out.getVersion().onOrAfter(Version.V_5_6_0)) { + out.writeOptionalString(clusterAlias); + } } @Override @@ -118,23 +126,26 @@ public boolean equals(Object o) { SearchShardTarget that = (SearchShardTarget) o; if (shardId.equals(that.shardId) == false) return false; if (nodeId != null ? !nodeId.equals(that.nodeId) : that.nodeId != null) return false; - + if (clusterAlias != null ? !clusterAlias.equals(that.clusterAlias) : that.clusterAlias != null) return false; return true; } @Override public int hashCode() { int result = nodeId != null ? nodeId.hashCode() : 0; - result = 31 * result + (index != null ? index.hashCode() : 0); + result = 31 * result + (shardId.getIndexName() != null ? shardId.getIndexName().hashCode() : 0); result = 31 * result + shardId.hashCode(); + result = 31 * result + (clusterAlias != null ? clusterAlias.hashCode() : 0); return result; } @Override public String toString() { + String shardToString = "[" + RemoteClusterAware.buildRemoteIndexName(clusterAlias, shardId.getIndexName()) + "][" + shardId.getId() + + "]"; if (nodeId == null) { - return "[_na_]" + shardId; + return "[_na_]" + shardToString; } - return "[" + nodeId + "]" + shardId; + return "[" + nodeId + "]" + shardToString; } } diff --git a/core/src/main/java/org/elasticsearch/search/SearchSortValues.java b/core/src/main/java/org/elasticsearch/search/SearchSortValues.java new file mode 100644 index 0000000000000..d3d55ff481afd --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/SearchSortValues.java @@ -0,0 +1,165 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
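Both the read and write paths of `SearchShardTarget` above gate the new `clusterAlias` field on `Version.V_5_6_0`, so a newer node talking to an older one simply omits the field. A standalone sketch of that version-gated wire format using plain `java.io` streams instead of `StreamInput`/`StreamOutput`; the class, version constant, and encoding here are illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustration only: a field written and read only when the remote side is new enough.
class VersionedTarget {
    static final int V_5_6_0 = 50600;

    final String nodeId;
    final String clusterAlias;   // nullable, "introduced in 5.6" for this sketch

    VersionedTarget(String nodeId, String clusterAlias) {
        this.nodeId = nodeId;
        this.clusterAlias = clusterAlias;
    }

    void writeTo(DataOutputStream out, int remoteVersion) throws IOException {
        out.writeUTF(nodeId);
        if (remoteVersion >= V_5_6_0) {
            out.writeBoolean(clusterAlias != null);          // optional-string style encoding
            if (clusterAlias != null) {
                out.writeUTF(clusterAlias);
            }
        }
    }

    static VersionedTarget readFrom(DataInputStream in, int remoteVersion) throws IOException {
        String nodeId = in.readUTF();
        String alias = null;
        if (remoteVersion >= V_5_6_0) {
            alias = in.readBoolean() ? in.readUTF() : null;  // older senders never wrote the field
        }
        return new VersionedTarget(nodeId, alias);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new VersionedTarget("node_1", "remote_cluster").writeTo(new DataOutputStream(bytes), V_5_6_0);
        VersionedTarget read = readFrom(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())), V_5_6_0);
        System.out.println(read.nodeId + " / " + read.clusterAlias);
    }
}
```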
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search; + +import org.apache.lucene.util.BytesRef; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParserUtils; +import org.elasticsearch.search.SearchHit.Fields; + +import java.io.IOException; +import java.util.Arrays; +import java.util.Objects; + +public class SearchSortValues implements ToXContent, Writeable { + + static final SearchSortValues EMPTY = new SearchSortValues(new Object[0]); + private final Object[] sortValues; + + SearchSortValues(Object[] sortValues) { + this.sortValues = Objects.requireNonNull(sortValues, "sort values must not be empty"); + } + + public SearchSortValues(Object[] sortValues, DocValueFormat[] sortValueFormats) { + Objects.requireNonNull(sortValues); + Objects.requireNonNull(sortValueFormats); + this.sortValues = Arrays.copyOf(sortValues, sortValues.length); + for (int i = 0; i < sortValues.length; ++i) { + if (this.sortValues[i] instanceof BytesRef) { + this.sortValues[i] = sortValueFormats[i].format((BytesRef) sortValues[i]); + } + } + } + + public SearchSortValues(StreamInput in) throws IOException { + int size = in.readVInt(); + if (size > 0) { + sortValues = new Object[size]; + for (int i = 0; i < sortValues.length; i++) { + byte type = in.readByte(); + if (type == 0) { + sortValues[i] = null; + } else if (type == 1) { + sortValues[i] = in.readString(); + } else if (type == 2) { + sortValues[i] = in.readInt(); + } else if (type == 3) { + sortValues[i] = in.readLong(); + } else if (type == 4) { + sortValues[i] = in.readFloat(); + } else if (type == 5) { + sortValues[i] = in.readDouble(); + } else if (type == 6) { + sortValues[i] = in.readByte(); + } else if (type == 7) { + sortValues[i] = in.readShort(); + } else if (type == 8) { + sortValues[i] = in.readBoolean(); + } else { + throw new IOException("Can't match type [" + type + "]"); + } + } + } else { + sortValues = new Object[0]; + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeVInt(sortValues.length); + for (Object sortValue : sortValues) { + if (sortValue == null) { + out.writeByte((byte) 0); + } else { + Class type = sortValue.getClass(); + if (type == String.class) { + out.writeByte((byte) 1); + out.writeString((String) sortValue); + } else if (type == Integer.class) { + out.writeByte((byte) 2); + out.writeInt((Integer) sortValue); + } else if (type == Long.class) { + out.writeByte((byte) 3); + out.writeLong((Long) sortValue); + } else if (type == Float.class) { + out.writeByte((byte) 4); + out.writeFloat((Float) sortValue); + } else if (type == Double.class) { + out.writeByte((byte) 5); + out.writeDouble((Double) sortValue); + } else if (type == Byte.class) { + out.writeByte((byte) 6); + out.writeByte((Byte) sortValue); + } else if (type == Short.class) { + 
out.writeByte((byte) 7); + out.writeShort((Short) sortValue); + } else if (type == Boolean.class) { + out.writeByte((byte) 8); + out.writeBoolean((Boolean) sortValue); + } else { + throw new IOException("Can't handle sort field value of type [" + type + "]"); + } + } + } + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + if (sortValues.length > 0) { + builder.startArray(Fields.SORT); + for (Object sortValue : sortValues) { + builder.value(sortValue); + } + builder.endArray(); + } + return builder; + } + + public static SearchSortValues fromXContent(XContentParser parser) throws IOException { + XContentParserUtils.ensureExpectedToken(XContentParser.Token.START_ARRAY, parser.currentToken(), parser::getTokenLocation); + return new SearchSortValues(parser.list().toArray()); + } + + public Object[] sortValues() { + return sortValues; + } + + @Override + public boolean equals(Object obj) { + if (this == obj) { + return true; + } + if (obj == null || getClass() != obj.getClass()) { + return false; + } + SearchSortValues other = (SearchSortValues) obj; + return Arrays.equals(sortValues, other.sortValues); + } + + @Override + public int hashCode() { + return Arrays.hashCode(sortValues); + } +} diff --git a/core/src/main/java/org/elasticsearch/search/action/SearchTransportService.java b/core/src/main/java/org/elasticsearch/search/action/SearchTransportService.java deleted file mode 100644 index 8552d21b5c335..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/action/SearchTransportService.java +++ /dev/null @@ -1,367 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
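The new `SearchSortValues` serializes a heterogeneous `Object[]` by prefixing every value with a type byte (0 = null, 1 = String, 2 = Integer, and so on) and dispatching on that byte when reading. A compact standalone sketch of the same scheme for a couple of types, using plain data streams rather than the Elasticsearch stream classes:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Arrays;

// Illustration of the type-byte scheme; only null, String and Long are shown.
class TypedValues {
    static void write(DataOutputStream out, Object[] values) throws IOException {
        out.writeInt(values.length);
        for (Object v : values) {
            if (v == null) {
                out.writeByte(0);
            } else if (v instanceof String) {
                out.writeByte(1);
                out.writeUTF((String) v);
            } else if (v instanceof Long) {
                out.writeByte(3);              // same tag the diff uses for longs
                out.writeLong((Long) v);
            } else {
                throw new IOException("Can't handle sort field value of type [" + v.getClass() + "]");
            }
        }
    }

    static Object[] read(DataInputStream in) throws IOException {
        Object[] values = new Object[in.readInt()];
        for (int i = 0; i < values.length; i++) {
            byte type = in.readByte();
            switch (type) {
                case 0: values[i] = null; break;
                case 1: values[i] = in.readUTF(); break;
                case 3: values[i] = in.readLong(); break;
                default: throw new IOException("Can't match type [" + type + "]");
            }
        }
        return values;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        write(new DataOutputStream(bytes), new Object[] {"2017-01-01", 42L, null});
        Object[] back = read(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        System.out.println(Arrays.toString(back));
    }
}
```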
- */ - -package org.elasticsearch.search.action; - -import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.ActionListenerResponseHandler; -import org.elasticsearch.action.IndicesRequest; -import org.elasticsearch.action.OriginalIndices; -import org.elasticsearch.action.search.SearchRequest; -import org.elasticsearch.action.support.IndicesOptions; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.common.component.AbstractComponent; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.search.SearchService; -import org.elasticsearch.search.dfs.DfsSearchResult; -import org.elasticsearch.search.fetch.FetchSearchResult; -import org.elasticsearch.search.fetch.QueryFetchSearchResult; -import org.elasticsearch.search.fetch.ScrollQueryFetchSearchResult; -import org.elasticsearch.search.fetch.ShardFetchRequest; -import org.elasticsearch.search.fetch.ShardFetchSearchRequest; -import org.elasticsearch.search.internal.InternalScrollSearchRequest; -import org.elasticsearch.search.internal.ShardSearchTransportRequest; -import org.elasticsearch.search.query.QuerySearchRequest; -import org.elasticsearch.search.query.QuerySearchResult; -import org.elasticsearch.search.query.QuerySearchResultProvider; -import org.elasticsearch.search.query.ScrollQuerySearchResult; -import org.elasticsearch.threadpool.ThreadPool; -import org.elasticsearch.transport.TransportChannel; -import org.elasticsearch.transport.TransportRequest; -import org.elasticsearch.transport.TransportRequestHandler; -import org.elasticsearch.transport.TransportResponse; -import org.elasticsearch.transport.TransportService; - -import java.io.IOException; - -/** - * An encapsulation of {@link org.elasticsearch.search.SearchService} operations exposed through - * transport. 
- */ -public class SearchTransportService extends AbstractComponent { - - public static final String FREE_CONTEXT_SCROLL_ACTION_NAME = "indices:data/read/search[free_context/scroll]"; - public static final String FREE_CONTEXT_ACTION_NAME = "indices:data/read/search[free_context]"; - public static final String CLEAR_SCROLL_CONTEXTS_ACTION_NAME = "indices:data/read/search[clear_scroll_contexts]"; - public static final String DFS_ACTION_NAME = "indices:data/read/search[phase/dfs]"; - public static final String QUERY_ACTION_NAME = "indices:data/read/search[phase/query]"; - public static final String QUERY_ID_ACTION_NAME = "indices:data/read/search[phase/query/id]"; - public static final String QUERY_SCROLL_ACTION_NAME = "indices:data/read/search[phase/query/scroll]"; - public static final String QUERY_FETCH_ACTION_NAME = "indices:data/read/search[phase/query+fetch]"; - public static final String QUERY_QUERY_FETCH_ACTION_NAME = "indices:data/read/search[phase/query/query+fetch]"; - public static final String QUERY_FETCH_SCROLL_ACTION_NAME = "indices:data/read/search[phase/query+fetch/scroll]"; - public static final String FETCH_ID_SCROLL_ACTION_NAME = "indices:data/read/search[phase/fetch/id/scroll]"; - public static final String FETCH_ID_ACTION_NAME = "indices:data/read/search[phase/fetch/id]"; - - private final TransportService transportService; - private final SearchService searchService; - - @Inject - public SearchTransportService(Settings settings, TransportService transportService, SearchService searchService) { - super(settings); - this.transportService = transportService; - this.searchService = searchService; - transportService.registerRequestHandler(FREE_CONTEXT_SCROLL_ACTION_NAME, ScrollFreeContextRequest::new, ThreadPool.Names.SAME, - new FreeContextTransportHandler<>()); - transportService.registerRequestHandler(FREE_CONTEXT_ACTION_NAME, SearchFreeContextRequest::new, ThreadPool.Names.SAME, - new FreeContextTransportHandler<>()); - transportService.registerRequestHandler(CLEAR_SCROLL_CONTEXTS_ACTION_NAME, ClearScrollContextsRequest::new, ThreadPool.Names.SAME, - new ClearScrollContextsTransportHandler()); - transportService.registerRequestHandler(DFS_ACTION_NAME, ShardSearchTransportRequest::new, ThreadPool.Names.SEARCH, - new SearchDfsTransportHandler()); - transportService.registerRequestHandler(QUERY_ACTION_NAME, ShardSearchTransportRequest::new, ThreadPool.Names.SEARCH, - new SearchQueryTransportHandler()); - transportService.registerRequestHandler(QUERY_ID_ACTION_NAME, QuerySearchRequest::new, ThreadPool.Names.SEARCH, - new SearchQueryByIdTransportHandler()); - transportService.registerRequestHandler(QUERY_SCROLL_ACTION_NAME, InternalScrollSearchRequest::new, ThreadPool.Names.SEARCH, - new SearchQueryScrollTransportHandler()); - transportService.registerRequestHandler(QUERY_FETCH_ACTION_NAME, ShardSearchTransportRequest::new, ThreadPool.Names.SEARCH, - new SearchQueryFetchTransportHandler()); - transportService.registerRequestHandler(QUERY_QUERY_FETCH_ACTION_NAME, QuerySearchRequest::new, ThreadPool.Names.SEARCH, - new SearchQueryQueryFetchTransportHandler()); - transportService.registerRequestHandler(QUERY_FETCH_SCROLL_ACTION_NAME, InternalScrollSearchRequest::new, ThreadPool.Names.SEARCH, - new SearchQueryFetchScrollTransportHandler()); - transportService.registerRequestHandler(FETCH_ID_SCROLL_ACTION_NAME, ShardFetchRequest::new, ThreadPool.Names.SEARCH, - new FetchByIdTransportHandler<>()); - transportService.registerRequestHandler(FETCH_ID_ACTION_NAME, 
ShardFetchSearchRequest::new, ThreadPool.Names.SEARCH, - new FetchByIdTransportHandler<>()); - } - - public void sendFreeContext(DiscoveryNode node, final long contextId, SearchRequest request) { - transportService.sendRequest(node, FREE_CONTEXT_ACTION_NAME, new SearchFreeContextRequest(request, contextId), - new ActionListenerResponseHandler<>(new ActionListener() { - @Override - public void onResponse(SearchFreeContextResponse response) { - // no need to respond if it was freed or not - } - - @Override - public void onFailure(Exception e) { - - } - }, SearchFreeContextResponse::new)); - } - - public void sendFreeContext(DiscoveryNode node, long contextId, final ActionListener listener) { - transportService.sendRequest(node, FREE_CONTEXT_SCROLL_ACTION_NAME, new ScrollFreeContextRequest(contextId), - new ActionListenerResponseHandler<>(listener, SearchFreeContextResponse::new)); - } - - public void sendClearAllScrollContexts(DiscoveryNode node, final ActionListener listener) { - transportService.sendRequest(node, CLEAR_SCROLL_CONTEXTS_ACTION_NAME, new ClearScrollContextsRequest(), - new ActionListenerResponseHandler<>(listener, () -> TransportResponse.Empty.INSTANCE)); - } - - public void sendExecuteDfs(DiscoveryNode node, final ShardSearchTransportRequest request, - final ActionListener listener) { - transportService.sendRequest(node, DFS_ACTION_NAME, request, new ActionListenerResponseHandler<>(listener, DfsSearchResult::new)); - } - - public void sendExecuteQuery(DiscoveryNode node, final ShardSearchTransportRequest request, - final ActionListener listener) { - transportService.sendRequest(node, QUERY_ACTION_NAME, request, - new ActionListenerResponseHandler<>(listener, QuerySearchResult::new)); - } - - public void sendExecuteQuery(DiscoveryNode node, final QuerySearchRequest request, final ActionListener listener) { - transportService.sendRequest(node, QUERY_ID_ACTION_NAME, request, - new ActionListenerResponseHandler<>(listener, QuerySearchResult::new)); - } - - public void sendExecuteQuery(DiscoveryNode node, final InternalScrollSearchRequest request, - final ActionListener listener) { - transportService.sendRequest(node, QUERY_SCROLL_ACTION_NAME, request, - new ActionListenerResponseHandler<>(listener, ScrollQuerySearchResult::new)); - } - - public void sendExecuteFetch(DiscoveryNode node, final ShardSearchTransportRequest request, - final ActionListener listener) { - transportService.sendRequest(node, QUERY_FETCH_ACTION_NAME, request, - new ActionListenerResponseHandler<>(listener, QueryFetchSearchResult::new)); - } - - public void sendExecuteFetch(DiscoveryNode node, final QuerySearchRequest request, - final ActionListener listener) { - transportService.sendRequest(node, QUERY_QUERY_FETCH_ACTION_NAME, request, - new ActionListenerResponseHandler<>(listener, QueryFetchSearchResult::new)); - } - - public void sendExecuteFetch(DiscoveryNode node, final InternalScrollSearchRequest request, - final ActionListener listener) { - transportService.sendRequest(node, QUERY_FETCH_SCROLL_ACTION_NAME, request, - new ActionListenerResponseHandler<>(listener, ScrollQueryFetchSearchResult::new)); - } - - public void sendExecuteFetch(DiscoveryNode node, final ShardFetchSearchRequest request, - final ActionListener listener) { - sendExecuteFetch(node, FETCH_ID_ACTION_NAME, request, listener); - } - - public void sendExecuteFetchScroll(DiscoveryNode node, final ShardFetchRequest request, - final ActionListener listener) { - sendExecuteFetch(node, FETCH_ID_SCROLL_ACTION_NAME, request, listener); - } - 
- private void sendExecuteFetch(DiscoveryNode node, String action, final ShardFetchRequest request, - final ActionListener listener) { - transportService.sendRequest(node, action, request, new ActionListenerResponseHandler<>(listener, FetchSearchResult::new)); - } - - static class ScrollFreeContextRequest extends TransportRequest { - private long id; - - ScrollFreeContextRequest() { - } - - ScrollFreeContextRequest(long id) { - this.id = id; - } - - public long id() { - return this.id; - } - - @Override - public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - id = in.readLong(); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeLong(id); - } - } - - static class SearchFreeContextRequest extends ScrollFreeContextRequest implements IndicesRequest { - private OriginalIndices originalIndices; - - public SearchFreeContextRequest() { - } - - SearchFreeContextRequest(SearchRequest request, long id) { - super(id); - this.originalIndices = new OriginalIndices(request); - } - - @Override - public String[] indices() { - if (originalIndices == null) { - return null; - } - return originalIndices.indices(); - } - - @Override - public IndicesOptions indicesOptions() { - if (originalIndices == null) { - return null; - } - return originalIndices.indicesOptions(); - } - - @Override - public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - originalIndices = OriginalIndices.readOriginalIndices(in); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - OriginalIndices.writeOriginalIndices(originalIndices, out); - } - } - - public static class SearchFreeContextResponse extends TransportResponse { - - private boolean freed; - - SearchFreeContextResponse() { - } - - SearchFreeContextResponse(boolean freed) { - this.freed = freed; - } - - public boolean isFreed() { - return freed; - } - - @Override - public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - freed = in.readBoolean(); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeBoolean(freed); - } - } - - class FreeContextTransportHandler - implements TransportRequestHandler { - @Override - public void messageReceived(FreeContextRequest request, TransportChannel channel) throws Exception { - boolean freed = searchService.freeContext(request.id()); - channel.sendResponse(new SearchFreeContextResponse(freed)); - } - } - - static class ClearScrollContextsRequest extends TransportRequest { - } - - class ClearScrollContextsTransportHandler implements TransportRequestHandler { - @Override - public void messageReceived(ClearScrollContextsRequest request, TransportChannel channel) throws Exception { - searchService.freeAllScrollContexts(); - channel.sendResponse(TransportResponse.Empty.INSTANCE); - } - } - - class SearchDfsTransportHandler implements TransportRequestHandler { - @Override - public void messageReceived(ShardSearchTransportRequest request, TransportChannel channel) throws Exception { - DfsSearchResult result = searchService.executeDfsPhase(request); - channel.sendResponse(result); - } - } - - class SearchQueryTransportHandler implements TransportRequestHandler { - @Override - public void messageReceived(ShardSearchTransportRequest request, TransportChannel channel) throws Exception { - QuerySearchResultProvider result = searchService.executeQueryPhase(request); - channel.sendResponse(result); - } - } - - 
class SearchQueryByIdTransportHandler implements TransportRequestHandler { - @Override - public void messageReceived(QuerySearchRequest request, TransportChannel channel) throws Exception { - QuerySearchResult result = searchService.executeQueryPhase(request); - channel.sendResponse(result); - } - } - - class SearchQueryScrollTransportHandler implements TransportRequestHandler { - @Override - public void messageReceived(InternalScrollSearchRequest request, TransportChannel channel) throws Exception { - ScrollQuerySearchResult result = searchService.executeQueryPhase(request); - channel.sendResponse(result); - } - } - - class SearchQueryFetchTransportHandler implements TransportRequestHandler { - @Override - public void messageReceived(ShardSearchTransportRequest request, TransportChannel channel) throws Exception { - QueryFetchSearchResult result = searchService.executeFetchPhase(request); - channel.sendResponse(result); - } - } - - class SearchQueryQueryFetchTransportHandler implements TransportRequestHandler { - @Override - public void messageReceived(QuerySearchRequest request, TransportChannel channel) throws Exception { - QueryFetchSearchResult result = searchService.executeFetchPhase(request); - channel.sendResponse(result); - } - } - - class FetchByIdTransportHandler implements TransportRequestHandler { - @Override - public void messageReceived(Request request, TransportChannel channel) throws Exception { - FetchSearchResult result = searchService.executeFetchPhase(request); - channel.sendResponse(result); - } - } - - class SearchQueryFetchScrollTransportHandler implements TransportRequestHandler { - @Override - public void messageReceived(InternalScrollSearchRequest request, TransportChannel channel) throws Exception { - ScrollQueryFetchSearchResult result = searchService.executeFetchPhase(request); - channel.sendResponse(result); - } - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/AbstractAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/AbstractAggregationBuilder.java index e84d76843043c..ef82c48b4e7a2 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/AbstractAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/AbstractAggregationBuilder.java @@ -21,8 +21,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Map; @@ -40,17 +39,16 @@ public abstract class AbstractAggregationBuilder metaData) { return (AB) this; } - public String getType() { - return type.name(); + @Override + public final String getWriteableName() { + // We always use the type of the aggregation as the writeable name + return getType(); } @Override - public final AggregatorFactory build(AggregationContext context, AggregatorFactory parent) throws IOException { + public final AggregatorFactory build(SearchContext context, AggregatorFactory parent) throws IOException { AggregatorFactory factory = doBuild(context, parent, factoriesBuilder); return factory; } - protected abstract AggregatorFactory doBuild(AggregationContext context, AggregatorFactory parent, + protected abstract AggregatorFactory doBuild(SearchContext context, 
AggregatorFactory parent, AggregatorFactories.Builder subfactoriesBuilder) throws IOException; @Override @@ -137,7 +137,7 @@ public final XContentBuilder toXContent(XContentBuilder builder, Params params) if (this.metaData != null) { builder.field("meta", this.metaData); } - builder.field(type.name()); + builder.field(getType()); internalXContent(builder, params); if (factoriesBuilder != null && (factoriesBuilder.count()) > 0) { @@ -153,7 +153,7 @@ public final XContentBuilder toXContent(XContentBuilder builder, Params params) @Override public int hashCode() { - return Objects.hash(factoriesBuilder, metaData, name, type, doHashCode()); + return Objects.hash(factoriesBuilder, metaData, name, doHashCode()); } protected abstract int doHashCode(); @@ -168,8 +168,6 @@ public boolean equals(Object obj) { AbstractAggregationBuilder other = (AbstractAggregationBuilder) obj; if (!Objects.equals(name, other.name)) return false; - if (!Objects.equals(type, other.type)) - return false; if (!Objects.equals(metaData, other.metaData)) return false; if (!Objects.equals(factoriesBuilder, other.factoriesBuilder)) diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/Aggregation.java b/core/src/main/java/org/elasticsearch/search/aggregations/Aggregation.java index 83472081021f9..2d8a81b080829 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/Aggregation.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/Aggregation.java @@ -18,12 +18,21 @@ */ package org.elasticsearch.search.aggregations; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.xcontent.ToXContent; + import java.util.Map; /** - * An aggregation + * An aggregation. Extends {@link ToXContent} as it makes it easier to print out its content. */ -public interface Aggregation { +public interface Aggregation extends ToXContent { + + /** + * Delimiter used when prefixing aggregation names with their type + * using the typed_keys parameter + */ + String TYPED_KEYS_DELIMITER = "#"; /** * @return The name of this aggregation. @@ -31,16 +40,32 @@ public interface Aggregation { String getName(); /** - * Get the value of specified path in the aggregation. - * - * @param path - * the path to the property in the aggregation tree - * @return the value of the property + * @return a string representing the type of the aggregation. This type is added to + * the aggregation name in the response, so that it can later be used by clients + * to determine type of the aggregation and parse it into the proper object. 
*/ - Object getProperty(String path); + String getType(); /** * Get the optional byte array metadata that was set on the aggregation */ Map getMetaData(); + + /** + * Common xcontent fields that are shared among addAggregation + */ + final class CommonFields extends ParseField.CommonFields { + public static final ParseField META = new ParseField("meta"); + public static final ParseField BUCKETS = new ParseField("buckets"); + public static final ParseField VALUE = new ParseField("value"); + public static final ParseField VALUES = new ParseField("values"); + public static final ParseField VALUE_AS_STRING = new ParseField("value_as_string"); + public static final ParseField DOC_COUNT = new ParseField("doc_count"); + public static final ParseField KEY = new ParseField("key"); + public static final ParseField KEY_AS_STRING = new ParseField("key_as_string"); + public static final ParseField FROM = new ParseField("from"); + public static final ParseField FROM_AS_STRING = new ParseField("from_as_string"); + public static final ParseField TO = new ParseField("to"); + public static final ParseField TO_AS_STRING = new ParseField("to_as_string"); + } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/AggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/AggregationBuilder.java index 9417ec8a0e659..14875895b77e1 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/AggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/AggregationBuilder.java @@ -23,8 +23,8 @@ import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.NamedWriteable; import org.elasticsearch.common.xcontent.ToXContent; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Map; @@ -34,27 +34,21 @@ */ public abstract class AggregationBuilder extends ToXContentToBytes - implements NamedWriteable, ToXContent { + implements NamedWriteable, ToXContent, BaseAggregationBuilder { protected final String name; - protected final Type type; protected AggregatorFactories.Builder factoriesBuilder = AggregatorFactories.builder(); /** * Constructs a new aggregation builder. * * @param name The aggregation name - * @param type The aggregation type */ - protected AggregationBuilder(String name, Type type) { + protected AggregationBuilder(String name) { if (name == null) { throw new IllegalArgumentException("[name] must not be null: [" + name + "]"); } - if (type == null) { - throw new IllegalArgumentException("[type] must not be null: [" + name + "]"); - } this.name = name; - this.type = type; } /** Return this aggregation's name. */ @@ -63,9 +57,10 @@ public String getName() { } /** Internal: build an {@link AggregatorFactory} based on the configuration of this builder. */ - protected abstract AggregatorFactory build(AggregationContext context, AggregatorFactory parent) throws IOException; + protected abstract AggregatorFactory build(SearchContext context, AggregatorFactory parent) throws IOException; /** Associate metadata with this {@link AggregationBuilder}. */ + @Override public abstract AggregationBuilder setMetaData(Map metaData); /** Add a sub aggregation to this builder. */ @@ -77,13 +72,14 @@ public String getName() { /** * Internal: Registers sub-factories with this factory. 
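With the typed_keys parameter, each aggregation in a response is keyed as its type, then `TYPED_KEYS_DELIMITER` ("#"), then its name, which is what lets a client recover the concrete aggregation type while parsing. A small standalone sketch of splitting such a key; the `sterms#genres` value is made up for illustration:

```java
// Splits a typed aggregation key like "sterms#genres" into its type and name.
class TypedKey {
    static final String TYPED_KEYS_DELIMITER = "#";

    final String type;
    final String name;

    private TypedKey(String type, String name) {
        this.type = type;
        this.name = name;
    }

    static TypedKey parse(String key) {
        int i = key.indexOf(TYPED_KEYS_DELIMITER);
        if (i <= 0) {
            throw new IllegalArgumentException("Cannot parse aggregation keyed as [" + key + "]");
        }
        return new TypedKey(key.substring(0, i), key.substring(i + 1));
    }

    public static void main(String[] args) {
        TypedKey key = TypedKey.parse("sterms#genres");    // hypothetical response key
        System.out.println(key.type + " -> " + key.name);  // sterms -> genres
    }
}
```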
The sub-factory will be * responsible for the creation of sub-aggregators under the aggregator - * created by this factory. This is only for use by {@link AggregatorParsers}. + * created by this factory. This is only for use by {@link AggregatorFactories#parseAggregators(QueryParseContext)}. * * @param subFactories * The sub-factories * @return this factory (fluent interface) */ - protected abstract AggregationBuilder subAggregations(AggregatorFactories.Builder subFactories); + @Override + public abstract AggregationBuilder subAggregations(AggregatorFactories.Builder subFactories); /** Common xcontent fields shared among aggregator builders */ public static final class CommonFields extends ParseField.CommonFields { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/AggregationBuilders.java b/core/src/main/java/org/elasticsearch/search/aggregations/AggregationBuilders.java index b1818971d6bb6..8b704ee8a69a2 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/AggregationBuilders.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/AggregationBuilders.java @@ -21,8 +21,8 @@ import org.elasticsearch.common.geo.GeoDistance; import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.index.query.QueryBuilder; -import org.elasticsearch.search.aggregations.bucket.children.Children; -import org.elasticsearch.search.aggregations.bucket.children.ChildrenAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.adjacency.AdjacencyMatrix; +import org.elasticsearch.search.aggregations.bucket.adjacency.AdjacencyMatrixAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.filter.Filter; import org.elasticsearch.search.aggregations.bucket.filter.FilterAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.filters.Filters; @@ -82,6 +82,8 @@ import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCount; import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCountAggregationBuilder; +import java.util.Map; + /** * Utility class to create aggregations. */ @@ -160,6 +162,20 @@ public static FiltersAggregationBuilder filters(String name, QueryBuilder... fil return new FiltersAggregationBuilder(name, filters); } + /** + * Create a new {@link AdjacencyMatrix} aggregation with the given name. + */ + public static AdjacencyMatrixAggregationBuilder adjacencyMatrix(String name, Map filters) { + return new AdjacencyMatrixAggregationBuilder(name, filters); + } + + /** + * Create a new {@link AdjacencyMatrix} aggregation with the given name and separator + */ + public static AdjacencyMatrixAggregationBuilder adjacencyMatrix(String name, String separator, Map filters) { + return new AdjacencyMatrixAggregationBuilder(name, separator, filters); + } + /** * Create a new {@link Sampler} aggregation with the given name. */ @@ -202,13 +218,6 @@ public static ReverseNestedAggregationBuilder reverseNested(String name) { return new ReverseNestedAggregationBuilder(name); } - /** - * Create a new {@link Children} aggregation with the given name. - */ - public static ChildrenAggregationBuilder children(String name, String childType) { - return new ChildrenAggregationBuilder(name, childType); - } - /** * Create a new {@link GeoDistance} aggregation with the given name. 
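The new `adjacencyMatrix` helpers in `AggregationBuilders` take a map of named filters and build an adjacency_matrix aggregation. A hedged usage sketch; the index field, filter names, and the surrounding source builder are assumptions for illustration, not part of the change:

```java
import java.util.HashMap;
import java.util.Map;

import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.adjacency.AdjacencyMatrixAggregationBuilder;
import org.elasticsearch.search.builder.SearchSourceBuilder;

class AdjacencyMatrixExample {
    static SearchSourceBuilder buildSource() {
        // Named filters whose pairwise intersections the aggregation will count.
        Map<String, QueryBuilder> filters = new HashMap<>();
        filters.put("likes_java", QueryBuilders.termQuery("tags", "java"));
        filters.put("likes_rust", QueryBuilders.termQuery("tags", "rust"));
        AdjacencyMatrixAggregationBuilder interactions = AggregationBuilders.adjacencyMatrix("interactions", filters);
        return new SearchSourceBuilder().size(0).aggregation(interactions);
    }
}
```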
*/ diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/AggregationPhase.java b/core/src/main/java/org/elasticsearch/search/aggregations/AggregationPhase.java index 5dc29374310a3..9b4a02cd8163b 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/AggregationPhase.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/AggregationPhase.java @@ -18,8 +18,6 @@ */ package org.elasticsearch.search.aggregations; -import org.apache.lucene.search.BooleanClause.Occur; -import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.Collector; import org.apache.lucene.search.Query; import org.elasticsearch.common.inject.Inject; @@ -28,7 +26,6 @@ import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.aggregations.pipeline.SiblingPipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.profile.query.CollectorResult; import org.elasticsearch.search.profile.query.InternalProfileCollector; @@ -40,7 +37,7 @@ import java.util.List; /** - * + * Aggregation phase of a search request, used to collect aggregations */ public class AggregationPhase implements SearchPhase { @@ -51,9 +48,6 @@ public AggregationPhase() { @Override public void preProcess(SearchContext context) { if (context.aggregations() != null) { - AggregationContext aggregationContext = new AggregationContext(context); - context.aggregations().aggregationContext(aggregationContext); - List collectors = new ArrayList<>(); Aggregator[] aggregators; try { @@ -88,7 +82,7 @@ public void execute(SearchContext context) { return; } - if (context.queryResult().aggregations() != null) { + if (context.queryResult().hasAggs()) { // no need to compute the aggs twice, they should be computed on a per context basis return; } @@ -104,16 +98,8 @@ public void execute(SearchContext context) { // optimize the global collector based execution if (!globals.isEmpty()) { BucketCollector globalsCollector = BucketCollector.wrap(globals); - Query query = Queries.newMatchAllQuery(); - Query searchFilter = context.searchFilter(context.getQueryShardContext().getTypes()); + Query query = context.buildFilteredQuery(Queries.newMatchAllQuery()); - if (searchFilter != null) { - BooleanQuery filtered = new BooleanQuery.Builder() - .add(query, Occur.MUST) - .add(searchFilter, Occur.FILTER) - .build(); - query = filtered; - } try { final Collector collector; if (context.getProfilers() == null) { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/Aggregations.java b/core/src/main/java/org/elasticsearch/search/aggregations/Aggregations.java index d7a6668ed468c..ba51fc419fb0f 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/Aggregations.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/Aggregations.java @@ -18,41 +18,135 @@ */ package org.elasticsearch.search.aggregations; +import org.apache.lucene.util.SetOnce; +import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.Iterator; import java.util.List; +import java.util.Locale; 
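In `AggregationPhase` the hand-rolled `BooleanQuery` construction for global aggregators moves behind `context.buildFilteredQuery`, but the effect is the same: combine the scoring query with a non-scoring filter clause. A small Lucene-only sketch of that wrapping, assuming Lucene on the classpath; the term filter is illustrative:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

class FilteredQueryExample {
    // Wrap `query` so that `filter` restricts matches without contributing to scoring.
    static Query buildFilteredQuery(Query query, Query filter) {
        if (filter == null) {
            return query;
        }
        return new BooleanQuery.Builder()
                .add(query, Occur.MUST)
                .add(filter, Occur.FILTER)
                .build();
    }

    public static void main(String[] args) {
        Query filtered = buildFilteredQuery(new MatchAllDocsQuery(), new TermQuery(new Term("_type", "doc")));
        System.out.println(filtered);
    }
}
```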
import java.util.Map; +import java.util.Objects; + +import static java.util.Collections.unmodifiableMap; +import static org.elasticsearch.common.xcontent.XContentParserUtils.parseTypedKeysObject; /** - * Represents a set of computed addAggregation. + * Represents a set of {@link Aggregation}s */ -public interface Aggregations extends Iterable { +public class Aggregations implements Iterable, ToXContent { + + public static final String AGGREGATIONS_FIELD = "aggregations"; + + protected List aggregations = Collections.emptyList(); + protected Map aggregationsAsMap; + + protected Aggregations() { + } + + public Aggregations(List aggregations) { + this.aggregations = aggregations; + } + + /** + * Iterates over the {@link Aggregation}s. + */ + @Override + public final Iterator iterator() { + return aggregations.stream().map((p) -> (Aggregation) p).iterator(); + } /** * The list of {@link Aggregation}s. */ - List asList(); + public final List asList() { + return Collections.unmodifiableList(aggregations); + } /** * Returns the {@link Aggregation}s keyed by aggregation name. */ - Map asMap(); + public final Map asMap() { + return getAsMap(); + } /** * Returns the {@link Aggregation}s keyed by aggregation name. */ - Map getAsMap(); + public final Map getAsMap() { + if (aggregationsAsMap == null) { + Map newAggregationsAsMap = new HashMap<>(aggregations.size()); + for (Aggregation aggregation : aggregations) { + newAggregationsAsMap.put(aggregation.getName(), aggregation); + } + this.aggregationsAsMap = unmodifiableMap(newAggregationsAsMap); + } + return aggregationsAsMap; + } /** * Returns the aggregation that is associated with the specified name. */ - A get(String name); + @SuppressWarnings("unchecked") + public final A get(String name) { + return (A) asMap().get(name); + } + + @Override + public final boolean equals(Object obj) { + if (obj == null || getClass() != obj.getClass()) { + return false; + } + return aggregations.equals(((Aggregations) obj).aggregations); + } + + @Override + public final int hashCode() { + return Objects.hash(getClass(), aggregations); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + if (aggregations.isEmpty()) { + return builder; + } + builder.startObject(AGGREGATIONS_FIELD); + toXContentInternal(builder, params); + return builder.endObject(); + } /** - * Get the value of specified path in the aggregation. - * - * @param path - * the path to the property in the aggregation tree - * @return the value of the property + * Directly write all the aggregations without their bounding object. 
Used by sub-aggregations (non top level aggs) */ - Object getProperty(String path); + public XContentBuilder toXContentInternal(XContentBuilder builder, Params params) throws IOException { + for (Aggregation aggregation : aggregations) { + aggregation.toXContent(builder, params); + } + return builder; + } + public static Aggregations fromXContent(XContentParser parser) throws IOException { + final List aggregations = new ArrayList<>(); + XContentParser.Token token; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.START_OBJECT) { + SetOnce typedAgg = new SetOnce<>(); + String currentField = parser.currentName(); + parseTypedKeysObject(parser, Aggregation.TYPED_KEYS_DELIMITER, Aggregation.class, typedAgg::set); + if (typedAgg.get() != null) { + aggregations.add(typedAgg.get()); + } else { + throw new ParsingException(parser.getTokenLocation(), + String.format(Locale.ROOT, "Could not parse aggregation keyed as [%s]", currentField)); + } + } + } + return new Aggregations(aggregations); + } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/Aggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/Aggregator.java index bd81a397f040d..41a0fa6dd30a7 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/Aggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/Aggregator.java @@ -21,14 +21,13 @@ import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.search.aggregations.bucket.BucketsAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; @@ -77,9 +76,9 @@ public static boolean descendsFromBucketAggregator(Aggregator parent) { public abstract String name(); /** - * Return the {@link AggregationContext} attached with this {@link Aggregator}. + * Return the {@link SearchContext} attached with this {@link Aggregator}. */ - public abstract AggregationContext context(); + public abstract SearchContext context(); /** * Return the parent aggregator. 
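Editor's note (illustrative, not part of the diff): the new `Aggregations.fromXContent` above consumes a response section whose keys carry the aggregation type as a prefix (for example `top_hits#foo`, split on `Aggregation.TYPED_KEYS_DELIMITER`). A minimal client-side sketch, assuming `parser` is an `XContentParser` positioned inside the `"aggregations"` object and backed by a registry that can parse each named type:

```java
// Sketch only: parse a typed-keys response section into an Aggregations object.
Aggregations aggregations = Aggregations.fromXContent(parser);

// Parsed aggregations are then looked up by their plain request name, without the type prefix.
Aggregation foo = aggregations.get("foo");
```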
@@ -129,10 +128,10 @@ public ParseField parseField() { return parseField; } - public static SubAggCollectionMode parse(String value, ParseFieldMatcher parseFieldMatcher) { + public static SubAggCollectionMode parse(String value) { SubAggCollectionMode[] modes = SubAggCollectionMode.values(); for (SubAggCollectionMode mode : modes) { - if (parseFieldMatcher.match(value, mode.parseField)) { + if (mode.parseField.match(value)) { return mode; } } @@ -140,16 +139,12 @@ public static SubAggCollectionMode parse(String value, ParseFieldMatcher parseFi } public static SubAggCollectionMode readFromStream(StreamInput in) throws IOException { - int ordinal = in.readVInt(); - if (ordinal < 0 || ordinal >= values().length) { - throw new IOException("Unknown SubAggCollectionMode ordinal [" + ordinal + "]"); - } - return values()[ordinal]; + return in.readEnum(SubAggCollectionMode.class); } @Override public void writeTo(StreamOutput out) throws IOException { - out.writeVInt(ordinal()); + out.writeEnum(this); } } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorBase.java b/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorBase.java index c99da85f33130..2d2c2abf65ef9 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorBase.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorBase.java @@ -25,7 +25,7 @@ import org.elasticsearch.search.aggregations.bucket.BestBucketsDeferringCollector; import org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.internal.SearchContext.Lifetime; import org.elasticsearch.search.query.QueryPhaseExecutionException; @@ -45,7 +45,7 @@ public abstract class AggregatorBase extends Aggregator { protected final String name; protected final Aggregator parent; - protected final AggregationContext context; + protected final SearchContext context; private final Map metaData; protected final Aggregator[] subAggregators; @@ -66,7 +66,7 @@ public abstract class AggregatorBase extends Aggregator { * @param parent The parent aggregator (may be {@code null} for top level aggregators) * @param metaData The metaData associated with this aggregator */ - protected AggregatorBase(String name, AggregatorFactories factories, AggregationContext context, Aggregator parent, + protected AggregatorBase(String name, AggregatorFactories factories, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { this.name = name; this.pipelineAggregators = pipelineAggregators; @@ -76,11 +76,11 @@ protected AggregatorBase(String name, AggregatorFactories factories, Aggregation this.breakerService = context.bigArrays().breakerService(); assert factories != null : "sub-factories provided to BucketAggregator must not be null, use AggragatorFactories.EMPTY instead"; this.subAggregators = factories.createSubAggregators(this); - context.searchContext().addReleasable(this, Lifetime.PHASE); + context.addReleasable(this, Lifetime.PHASE); // Register a safeguard to highlight any invalid construction logic (call to this constructor without subsequent preCollection call) collectableSubAggregators = new BucketCollector() { void badState(){ - throw new QueryPhaseExecutionException(AggregatorBase.this.context.searchContext(), + throw new 
QueryPhaseExecutionException(AggregatorBase.this.context, "preCollection not called on new Aggregator before use", null); } @Override @@ -245,7 +245,7 @@ public Aggregator subAggregator(String aggName) { * @return The current aggregation context. */ @Override - public AggregationContext context() { + public SearchContext context() { return context; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java b/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java index 616b0091fe878..0015276f8dd7c 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java @@ -19,19 +19,25 @@ package org.elasticsearch.search.aggregations; import org.elasticsearch.action.support.ToXContentToBytes; +import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.AggregationPath; import org.elasticsearch.search.aggregations.support.AggregationPath.PathElement; +import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.profile.Profilers; import org.elasticsearch.search.profile.aggregation.ProfilingAggregator; import java.io.IOException; import java.util.ArrayList; +import java.util.Collections; import java.util.HashMap; import java.util.HashSet; import java.util.LinkedList; @@ -39,11 +45,129 @@ import java.util.Map; import java.util.Objects; import java.util.Set; +import java.util.regex.Matcher; +import java.util.regex.Pattern; /** * */ public class AggregatorFactories { + public static final Pattern VALID_AGG_NAME = Pattern.compile("[^\\[\\]>]+"); + + /** + * Parses the aggregation request recursively generating aggregator factories in turn. + * + * @param parseContext The parse context. + * + * @return The parsed aggregator factories. + * + * @throws IOException When parsing fails for unknown reasons. 
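Editor's note (illustrative, not part of the diff): the `parseAggregators` method introduced in this hunk walks a request body where each aggregation name maps to an object holding optional `meta`, optional `aggs`/`aggregations`, and exactly one aggregation type. The `genres`, `terms` and `top_hits` names below are hypothetical examples, not taken from this change:

```java
// Sketch only: a request body shape accepted by the parser described above.
String aggsDsl = "{"
    + "  \"genres\": {"
    + "    \"meta\": { \"owner\": \"analytics\" },"
    + "    \"terms\": { \"field\": \"genre\" },"
    + "    \"aggs\": { \"top_hit\": { \"top_hits\": { \"size\": 1 } } }"
    + "  }"
    + "}";
```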
+ */ + public static AggregatorFactories.Builder parseAggregators(QueryParseContext parseContext) throws IOException { + return parseAggregators(parseContext, 0); + } + + private static AggregatorFactories.Builder parseAggregators(QueryParseContext parseContext, int level) throws IOException { + Matcher validAggMatcher = VALID_AGG_NAME.matcher(""); + AggregatorFactories.Builder factories = new AggregatorFactories.Builder(); + + XContentParser.Token token = null; + XContentParser parser = parseContext.parser(); + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token != XContentParser.Token.FIELD_NAME) { + throw new ParsingException(parser.getTokenLocation(), + "Unexpected token " + token + " in [aggs]: aggregations definitions must start with the name of the aggregation."); + } + final String aggregationName = parser.currentName(); + if (!validAggMatcher.reset(aggregationName).matches()) { + throw new ParsingException(parser.getTokenLocation(), "Invalid aggregation name [" + aggregationName + + "]. Aggregation names must be alpha-numeric and can only contain '_' and '-'"); + } + + token = parser.nextToken(); + if (token != XContentParser.Token.START_OBJECT) { + throw new ParsingException(parser.getTokenLocation(), "Aggregation definition for [" + aggregationName + " starts with a [" + + token + "], expected a [" + XContentParser.Token.START_OBJECT + "]."); + } + + BaseAggregationBuilder aggBuilder = null; + AggregatorFactories.Builder subFactories = null; + + Map metaData = null; + + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token != XContentParser.Token.FIELD_NAME) { + throw new ParsingException( + parser.getTokenLocation(), "Expected [" + XContentParser.Token.FIELD_NAME + "] under a [" + + XContentParser.Token.START_OBJECT + "], but got a [" + token + "] in [" + aggregationName + "]", + parser.getTokenLocation()); + } + final String fieldName = parser.currentName(); + + token = parser.nextToken(); + if (token == XContentParser.Token.START_OBJECT) { + switch (fieldName) { + case "meta": + metaData = parser.map(); + break; + case "aggregations": + case "aggs": + if (subFactories != null) { + throw new ParsingException(parser.getTokenLocation(), + "Found two sub aggregation definitions under [" + aggregationName + "]"); + } + subFactories = parseAggregators(parseContext, level + 1); + break; + default: + if (aggBuilder != null) { + throw new ParsingException(parser.getTokenLocation(), "Found two aggregation type definitions in [" + + aggregationName + "]: [" + aggBuilder.getType() + "] and [" + fieldName + "]"); + } + + aggBuilder = parser.namedObject(BaseAggregationBuilder.class, fieldName, + new AggParseContext(aggregationName, parseContext)); + } + } else { + throw new ParsingException(parser.getTokenLocation(), "Expected [" + XContentParser.Token.START_OBJECT + "] under [" + + fieldName + "], but got a [" + token + "] in [" + aggregationName + "]"); + } + } + + if (aggBuilder == null) { + throw new ParsingException(parser.getTokenLocation(), "Missing definition for aggregation [" + aggregationName + "]", + parser.getTokenLocation()); + } else { + if (metaData != null) { + aggBuilder.setMetaData(metaData); + } + + if (subFactories != null) { + aggBuilder.subAggregations(subFactories); + } + + if (aggBuilder instanceof AggregationBuilder) { + factories.addAggregator((AggregationBuilder) aggBuilder); + } else { + factories.addPipelineAggregator((PipelineAggregationBuilder) aggBuilder); + } + } + } + + return factories; + 
} + + /** + * Context to parse and aggregation. This should eventually be removed and replaced with a String. + */ + public static final class AggParseContext { + public final String name; + public final QueryParseContext queryParseContext; + + public AggParseContext(String name, QueryParseContext queryParseContext) { + this.name = name; + this.queryParseContext = queryParseContext; + } + } public static final AggregatorFactories EMPTY = new AggregatorFactories(null, new AggregatorFactory[0], new ArrayList()); @@ -64,7 +188,7 @@ private AggregatorFactories(AggregatorFactory parent, AggregatorFactory[] } public List createPipelineAggregators() throws IOException { - List pipelineAggregators = new ArrayList<>(); + List pipelineAggregators = new ArrayList<>(this.pipelineAggregatorFactories.size()); for (PipelineAggregationBuilder factory : this.pipelineAggregatorFactories) { pipelineAggregators.add(factory.create()); } @@ -84,7 +208,7 @@ public Aggregator[] createSubAggregators(Aggregator parent) throws IOException { // aggs final boolean collectsFromSingleBucket = false; Aggregator factory = factories[i].create(parent, collectsFromSingleBucket); - Profilers profilers = factory.context().searchContext().getProfilers(); + Profilers profilers = factory.context().getProfilers(); if (profilers != null) { factory = new ProfilingAggregator(factory, profilers.getAggregationProfiler()); } @@ -100,7 +224,7 @@ public Aggregator[] createTopLevelAggregators() throws IOException { // top-level aggs only get called with bucket 0 final boolean collectsFromSingleBucket = true; Aggregator factory = factories[i].create(null, collectsFromSingleBucket); - Profilers profilers = factory.context().searchContext().getProfilers(); + Profilers profilers = factory.context().getProfilers(); if (profilers != null) { factory = new ProfilingAggregator(factory, profilers.getAggregationProfiler()); } @@ -171,10 +295,22 @@ public void writeTo(StreamOutput out) throws IOException { } } - public Builder addAggregators(AggregatorFactories factories) { - throw new UnsupportedOperationException("This needs to be removed"); + public boolean mustVisitAllDocs() { + for (AggregationBuilder builder : aggregationBuilders) { + if (builder instanceof GlobalAggregationBuilder) { + return true; + } else if (builder instanceof TermsAggregationBuilder) { + if (((TermsAggregationBuilder) builder).minDocCount() == 0) { + return true; + } + } + + } + return false; } + + public Builder addAggregator(AggregationBuilder factory) { if (!names.add(factory.name)) { throw new IllegalArgumentException("Two sibling aggregations cannot have the same name: [" + factory.name + "]"); @@ -196,7 +332,7 @@ Builder skipResolveOrder() { return this; } - public AggregatorFactories build(AggregationContext context, AggregatorFactory parent) throws IOException { + public AggregatorFactories build(SearchContext context, AggregatorFactory parent) throws IOException { if (aggregationBuilders.isEmpty() && pipelineAggregatorBuilders.isEmpty()) { return EMPTY; } @@ -258,7 +394,7 @@ private void resolvePipelineAggregatorOrder(Map aggB } else { // Check the non-pipeline sub-aggregator // factories - AggregationBuilder[] subBuilders = aggBuilder.factoriesBuilder.getAggregatorFactories(); + List subBuilders = aggBuilder.factoriesBuilder.aggregationBuilders; boolean foundSubBuilder = false; for (AggregationBuilder subBuilder : subBuilders) { if (aggName.equals(subBuilder.name)) { @@ -300,12 +436,12 @@ private void resolvePipelineAggregatorOrder(Map aggB } } - 
AggregationBuilder[] getAggregatorFactories() { - return this.aggregationBuilders.toArray(new AggregationBuilder[this.aggregationBuilders.size()]); + public List getAggregatorFactories() { + return Collections.unmodifiableList(aggregationBuilders); } - List getPipelineAggregatorFactories() { - return this.pipelineAggregatorBuilders; + public List getPipelineAggregatorFactories() { + return Collections.unmodifiableList(pipelineAggregatorBuilders); } public int count() { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactory.java index 44ecacd841773..7b6340996a928 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactory.java @@ -24,10 +24,10 @@ import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.util.ObjectArray; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.internal.SearchContext.Lifetime; + import java.io.IOException; import java.util.List; import java.util.Map; @@ -42,13 +42,13 @@ public static final class MultiBucketAggregatorWrapper extends Aggregator { ObjectArray aggregators; ObjectArray collectors; - MultiBucketAggregatorWrapper(BigArrays bigArrays, AggregationContext context, Aggregator parent, AggregatorFactory factory, + MultiBucketAggregatorWrapper(BigArrays bigArrays, SearchContext context, Aggregator parent, AggregatorFactory factory, Aggregator first) { this.bigArrays = bigArrays; this.parent = parent; this.factory = factory; this.first = first; - context.searchContext().addReleasable(this, Lifetime.PHASE); + context.addReleasable(this, Lifetime.PHASE); aggregators = bigArrays.newObjectArray(1); aggregators.set(0, first); collectors = bigArrays.newObjectArray(1); @@ -64,7 +64,7 @@ public String name() { } @Override - public AggregationContext context() { + public SearchContext context() { return first.context(); } @@ -130,7 +130,11 @@ public void collect(int doc, long bucket) throws IOException { aggregators.set(bucket, aggregator); } collector = aggregator.getLeafCollector(ctx); - collector.setScorer(scorer); + if (scorer != null) { + // Passing a null scorer can cause unexpected NPE at a later time, + // which can't not be directly linked to the fact that a null scorer has been supplied. + collector.setScorer(scorer); + } collectors.set(bucket, collector); } collector.collect(doc, 0); @@ -162,26 +166,22 @@ public void close() { } protected final String name; - protected final Type type; protected final AggregatorFactory parent; protected final AggregatorFactories factories; protected final Map metaData; - protected final AggregationContext context; + protected final SearchContext context; /** * Constructs a new aggregator factory. 
* * @param name * The aggregation name - * @param type - * The aggregation type * @throws IOException * if an error occurs creating the factory */ - public AggregatorFactory(String name, Type type, AggregationContext context, AggregatorFactory parent, + public AggregatorFactory(String name, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { this.name = name; - this.type = type; this.context = context; this.parent = parent; this.factories = subFactoriesBuilder.build(context, this); @@ -225,10 +225,6 @@ public final Aggregator create(Aggregator parent, boolean collectsFromSingleBuck return createInternal(parent, collectsFromSingleBucket, this.factories.createPipelineAggregators(), this.metaData); } - public String getType() { - return type.name(); - } - public AggregatorFactory getParent() { return parent; } @@ -238,7 +234,7 @@ public AggregatorFactory getParent() { * {@link Aggregator}s that only know how to collect bucket 0, this * returns an aggregator that can collect any bucket. */ - protected static Aggregator asMultiBucketAggregator(final AggregatorFactory factory, final AggregationContext context, + protected static Aggregator asMultiBucketAggregator(final AggregatorFactory factory, final SearchContext context, final Aggregator parent) throws IOException { final Aggregator first = factory.create(parent, true); final BigArrays bigArrays = context.bigArrays(); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorParsers.java b/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorParsers.java deleted file mode 100644 index df64d440dc0b7..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorParsers.java +++ /dev/null @@ -1,196 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ -package org.elasticsearch.search.aggregations; - -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParsingException; -import org.elasticsearch.common.xcontent.ParseFieldRegistry; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.query.QueryParseContext; -import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; - -import java.io.IOException; -import java.util.Map; -import java.util.regex.Matcher; -import java.util.regex.Pattern; - -/** - * A registry for all the aggregator parser, also servers as the main parser for the aggregations module - */ -public class AggregatorParsers { - public static final Pattern VALID_AGG_NAME = Pattern.compile("[^\\[\\]>]+"); - - private final ParseFieldRegistry aggregationParserRegistry; - private final ParseFieldRegistry pipelineAggregationParserRegistry; - - public AggregatorParsers(ParseFieldRegistry aggregationParserRegistry, - ParseFieldRegistry pipelineAggregationParserRegistry) { - this.aggregationParserRegistry = aggregationParserRegistry; - this.pipelineAggregationParserRegistry = pipelineAggregationParserRegistry; - } - - /** - * Returns the parser that is registered under the given aggregation type. - * - * @param type The aggregation type - * @param parseFieldMatcher used for making error messages. - * @return The parser associated with the given aggregation type or null if it wasn't found. - */ - public Aggregator.Parser parser(String type, ParseFieldMatcher parseFieldMatcher) { - return aggregationParserRegistry.lookupReturningNullIfNotFound(type, parseFieldMatcher); - } - - /** - * Returns the parser that is registered under the given pipeline aggregator type. - * - * @param type The pipeline aggregator type - * @param parseFieldMatcher used for making error messages. - * @return The parser associated with the given pipeline aggregator type or null if it wasn't found. - */ - public PipelineAggregator.Parser pipelineParser(String type, ParseFieldMatcher parseFieldMatcher) { - return pipelineAggregationParserRegistry.lookupReturningNullIfNotFound(type, parseFieldMatcher); - } - - /** - * Parses the aggregation request recursively generating aggregator factories in turn. - * - * @param parseContext The parse context. - * - * @return The parsed aggregator factories. - * - * @throws IOException When parsing fails for unknown reasons. - */ - public AggregatorFactories.Builder parseAggregators(QueryParseContext parseContext) throws IOException { - return parseAggregators(parseContext, 0); - } - - private AggregatorFactories.Builder parseAggregators(QueryParseContext parseContext, int level) throws IOException { - Matcher validAggMatcher = VALID_AGG_NAME.matcher(""); - AggregatorFactories.Builder factories = new AggregatorFactories.Builder(); - - XContentParser.Token token = null; - XContentParser parser = parseContext.parser(); - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token != XContentParser.Token.FIELD_NAME) { - throw new ParsingException(parser.getTokenLocation(), - "Unexpected token " + token + " in [aggs]: aggregations definitions must start with the name of the aggregation."); - } - final String aggregationName = parser.currentName(); - if (!validAggMatcher.reset(aggregationName).matches()) { - throw new ParsingException(parser.getTokenLocation(), "Invalid aggregation name [" + aggregationName - + "]. 
Aggregation names must be alpha-numeric and can only contain '_' and '-'"); - } - - token = parser.nextToken(); - if (token != XContentParser.Token.START_OBJECT) { - throw new ParsingException(parser.getTokenLocation(), "Aggregation definition for [" + aggregationName + " starts with a [" - + token + "], expected a [" + XContentParser.Token.START_OBJECT + "]."); - } - - AggregationBuilder aggFactory = null; - PipelineAggregationBuilder pipelineAggregatorFactory = null; - AggregatorFactories.Builder subFactories = null; - - Map metaData = null; - - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token != XContentParser.Token.FIELD_NAME) { - throw new ParsingException( - parser.getTokenLocation(), "Expected [" + XContentParser.Token.FIELD_NAME + "] under a [" - + XContentParser.Token.START_OBJECT + "], but got a [" + token + "] in [" + aggregationName + "]", - parser.getTokenLocation()); - } - final String fieldName = parser.currentName(); - - token = parser.nextToken(); - if (token == XContentParser.Token.START_OBJECT) { - switch (fieldName) { - case "meta": - metaData = parser.map(); - break; - case "aggregations": - case "aggs": - if (subFactories != null) { - throw new ParsingException(parser.getTokenLocation(), - "Found two sub aggregation definitions under [" + aggregationName + "]"); - } - subFactories = parseAggregators(parseContext, level + 1); - break; - default: - if (aggFactory != null) { - throw new ParsingException(parser.getTokenLocation(), "Found two aggregation type definitions in [" - + aggregationName + "]: [" + aggFactory.type + "] and [" + fieldName + "]"); - } - if (pipelineAggregatorFactory != null) { - throw new ParsingException(parser.getTokenLocation(), "Found two aggregation type definitions in [" - + aggregationName + "]: [" + pipelineAggregatorFactory + "] and [" + fieldName + "]"); - } - - Aggregator.Parser aggregatorParser = parser(fieldName, parseContext.getParseFieldMatcher()); - if (aggregatorParser == null) { - PipelineAggregator.Parser pipelineAggregatorParser = pipelineParser(fieldName, - parseContext.getParseFieldMatcher()); - if (pipelineAggregatorParser == null) { - throw new ParsingException(parser.getTokenLocation(), - "Could not find aggregator type [" + fieldName + "] in [" + aggregationName + "]"); - } else { - pipelineAggregatorFactory = pipelineAggregatorParser.parse(aggregationName, parseContext); - } - } else { - aggFactory = aggregatorParser.parse(aggregationName, parseContext); - } - } - } else { - throw new ParsingException(parser.getTokenLocation(), "Expected [" + XContentParser.Token.START_OBJECT + "] under [" - + fieldName + "], but got a [" + token + "] in [" + aggregationName + "]"); - } - } - - if (aggFactory == null && pipelineAggregatorFactory == null) { - throw new ParsingException(parser.getTokenLocation(), "Missing definition for aggregation [" + aggregationName + "]", - parser.getTokenLocation()); - } else if (aggFactory != null) { - assert pipelineAggregatorFactory == null; - if (metaData != null) { - aggFactory.setMetaData(metaData); - } - - if (subFactories != null) { - aggFactory.subAggregations(subFactories); - } - - factories.addAggregator(aggFactory); - } else { - assert pipelineAggregatorFactory != null; - if (subFactories != null) { - throw new ParsingException(parser.getTokenLocation(), - "Aggregation [" + aggregationName + "] cannot define sub-aggregations", - parser.getTokenLocation()); - } - if (metaData != null) { - pipelineAggregatorFactory.setMetaData(metaData); - } - 
factories.addPipelineAggregator(pipelineAggregatorFactory); - } - } - - return factories; - } - -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/BaseAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/BaseAggregationBuilder.java new file mode 100644 index 0000000000000..8f076cfe4567f --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/BaseAggregationBuilder.java @@ -0,0 +1,46 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations; + +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; + +import java.util.Map; + +/** + * Interface shared by {@link AggregationBuilder} and {@link PipelineAggregationBuilder} so they can conveniently share the same namespace + * for {@link XContentParser#namedObject(Class, String, Object)}. + */ +public interface BaseAggregationBuilder { + /** + * The name of the type of aggregation built by this builder. + */ + String getType(); + + /** + * Set the aggregation's metadata. Returns {@code this} for chaining. + */ + BaseAggregationBuilder setMetaData(Map metaData); + + /** + * Set the sub aggregations if this aggregation supports sub aggregations. Returns {@code this} for chaining. 
+ */ + BaseAggregationBuilder subAggregations(Builder subFactories); +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/BucketCollector.java b/core/src/main/java/org/elasticsearch/search/aggregations/BucketCollector.java index 2de6ae0cf933f..40e66bd964539 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/BucketCollector.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/BucketCollector.java @@ -70,7 +70,7 @@ public static BucketCollector wrap(Iterable collector @Override public LeafBucketCollector getLeafCollector(LeafReaderContext ctx) throws IOException { - List leafCollectors = new ArrayList<>(); + List leafCollectors = new ArrayList<>(collectors.length); for (BucketCollector c : collectors) { leafCollectors.add(c.getLeafCollector(ctx)); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/InternalAggregation.java b/core/src/main/java/org/elasticsearch/search/aggregations/InternalAggregation.java index b7635d3dc320e..b34d3daa1a0e3 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/InternalAggregation.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/InternalAggregation.java @@ -18,14 +18,13 @@ */ package org.elasticsearch.search.aggregations; -import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.NamedWriteable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.rest.action.search.RestSearchAction; import org.elasticsearch.script.ScriptService; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.aggregations.support.AggregationPath; @@ -38,43 +37,26 @@ * An internal implementation of {@link Aggregation}. Serves as a base class for all aggregation implementations. */ public abstract class InternalAggregation implements Aggregation, ToXContent, NamedWriteable { - /** - * The aggregation type that holds all the string types that are associated with an aggregation: - *
<ul>
- *     <li>name - used as the parser type</li>
- * </ul>
    - */ - public static class Type { - private final String name; - - public Type(String name) { - this.name = name; - } - - /** - * @return The name of the type of aggregation. This is the key for parsing the aggregation from XContent and is the name of the - * aggregation's builder when serialized. - */ - public String name() { - return name; - } - - @Override - public String toString() { - return name; - } - } public static class ReduceContext { private final BigArrays bigArrays; private final ScriptService scriptService; - private final ClusterState clusterState; + private final boolean isFinalReduce; - public ReduceContext(BigArrays bigArrays, ScriptService scriptService, ClusterState clusterState) { + public ReduceContext(BigArrays bigArrays, ScriptService scriptService, boolean isFinalReduce) { this.bigArrays = bigArrays; this.scriptService = scriptService; - this.clusterState = clusterState; + this.isFinalReduce = isFinalReduce; + } + + /** + * Returns true iff the current reduce phase is the final reduce phase. This indicates if operations like + * pipeline aggregations should be applied or if specific features like minDocCount should be taken into account. + * Operations that are potentially loosing information can only be applied during the final reduce phase. + */ + public boolean isFinalReduce() { + return isFinalReduce; } public BigArrays bigArrays() { @@ -84,10 +66,6 @@ public BigArrays bigArrays() { public ScriptService scriptService() { return scriptService; } - - public ClusterState clusterState() { - return clusterState; - } } protected final String name; @@ -126,7 +104,6 @@ public final void writeTo(StreamOutput out) throws IOException { protected abstract void doWriteTo(StreamOutput out) throws IOException; - @Override public String getName() { return name; @@ -140,15 +117,23 @@ public String getName() { */ public final InternalAggregation reduce(List aggregations, ReduceContext reduceContext) { InternalAggregation aggResult = doReduce(aggregations, reduceContext); - for (PipelineAggregator pipelineAggregator : pipelineAggregators) { - aggResult = pipelineAggregator.reduce(aggResult, reduceContext); + if (reduceContext.isFinalReduce()) { + for (PipelineAggregator pipelineAggregator : pipelineAggregators) { + aggResult = pipelineAggregator.reduce(aggResult, reduceContext); + } } return aggResult; } public abstract InternalAggregation doReduce(List aggregations, ReduceContext reduceContext); - @Override + /** + * Get the value of specified path in the aggregation. 
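Editor's note (illustrative, not part of the diff): the new `ReduceContext#isFinalReduce()` flag lets an aggregation defer information-losing steps until the last reduce pass, mirroring how `reduce(...)` above now only applies pipeline aggregators on the final reduce. A hedged sketch of a `doReduce` implementation; `InternalExample`, `merge` and `prune` are hypothetical names:

```java
@Override
public InternalAggregation doReduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
    // Combine the shard-level results (hypothetical helper).
    InternalExample merged = merge(aggregations);
    if (reduceContext.isFinalReduce()) {
        // Only drop buckets (e.g. below minDocCount) once no further reduce will happen.
        merged = merged.prune();
    }
    return merged;
}
```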
+ * + * @param path + * the path to the property in the aggregation tree + * @return the value of the property + */ public Object getProperty(String path) { AggregationPath aggPath = AggregationPath.parse(path); return getProperty(aggPath.getPathElementsAsStringList()); @@ -183,11 +168,21 @@ public List pipelineAggregators() { return pipelineAggregators; } + @Override + public String getType() { + return getWriteableName(); + } + @Override public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(name); + if (params.paramAsBoolean(RestSearchAction.TYPED_KEYS_PARAM, false)) { + // Concatenates the type and the name of the aggregation (ex: top_hits#foo) + builder.startObject(String.join(TYPED_KEYS_DELIMITER, getType(), getName())); + } else { + builder.startObject(getName()); + } if (this.metaData != null) { - builder.field(CommonFields.META); + builder.field(CommonFields.META.getPreferredName()); builder.map(this.metaData); } doXContentBody(builder, params); @@ -196,24 +191,4 @@ public final XContentBuilder toXContent(XContentBuilder builder, Params params) } public abstract XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException; - - /** - * Common xcontent fields that are shared among addAggregation - */ - public static final class CommonFields extends ParseField.CommonFields { - // todo convert these to ParseField - public static final String META = "meta"; - public static final String BUCKETS = "buckets"; - public static final String VALUE = "value"; - public static final String VALUES = "values"; - public static final String VALUE_AS_STRING = "value_as_string"; - public static final String DOC_COUNT = "doc_count"; - public static final String KEY = "key"; - public static final String KEY_AS_STRING = "key_as_string"; - public static final String FROM = "from"; - public static final String FROM_AS_STRING = "from_as_string"; - public static final String TO = "to"; - public static final String TO_AS_STRING = "to_as_string"; - } - } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/InternalAggregations.java b/core/src/main/java/org/elasticsearch/search/aggregations/InternalAggregations.java index 66e45156caf06..15ba641c711cb 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/InternalAggregations.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/InternalAggregations.java @@ -22,32 +22,22 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.xcontent.ToXContent; -import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.search.aggregations.InternalAggregation.ReduceContext; -import org.elasticsearch.search.aggregations.support.AggregationPath; import java.io.IOException; import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; -import java.util.Iterator; import java.util.List; import java.util.Map; -import java.util.stream.Collectors; -import static java.util.Collections.emptyMap; -import static java.util.Collections.unmodifiableMap; /** * An internal implementation of {@link Aggregations}. 
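Editor's note (illustrative, not part of the diff): the `toXContent` change above concatenates the aggregation type and name with `TYPED_KEYS_DELIMITER` (`#`) when the `typed_keys` request parameter is set, and falls back to the plain name otherwise. A tiny illustration using the `top_hits#foo` example from the code comment:

```java
// Sketch only: how the response key is built when typed keys are requested.
String type = "top_hits";                          // getType(), i.e. the writeable name
String name = "foo";                               // the aggregation's request name
String typedKey = String.join("#", type, name);    // -> "top_hits#foo"
String plainKey = name;                            // -> "foo" when typed_keys is not set
```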
*/ -public class InternalAggregations implements Aggregations, ToXContent, Streamable { +public final class InternalAggregations extends Aggregations implements ToXContent, Streamable { public static final InternalAggregations EMPTY = new InternalAggregations(); - private List aggregations = Collections.emptyList(); - - private Map aggregationsAsMap; - private InternalAggregations() { } @@ -55,73 +45,7 @@ private InternalAggregations() { * Constructs a new addAggregation. */ public InternalAggregations(List aggregations) { - this.aggregations = aggregations; - } - - /** - * Iterates over the {@link Aggregation}s. - */ - @Override - public Iterator iterator() { - return aggregations.stream().map((p) -> (Aggregation) p).iterator(); - } - - /** - * The list of {@link Aggregation}s. - */ - @Override - public List asList() { - return aggregations.stream().map((p) -> (Aggregation) p).collect(Collectors.toList()); - } - - /** - * Returns the {@link Aggregation}s keyed by map. - */ - @Override - public Map asMap() { - return getAsMap(); - } - - /** - * Returns the {@link Aggregation}s keyed by map. - */ - @Override - public Map getAsMap() { - if (aggregationsAsMap == null) { - Map newAggregationsAsMap = new HashMap<>(); - for (InternalAggregation aggregation : aggregations) { - newAggregationsAsMap.put(aggregation.getName(), aggregation); - } - this.aggregationsAsMap = unmodifiableMap(newAggregationsAsMap); - } - return aggregationsAsMap; - } - - /** - * @return the aggregation of the specified name. - */ - @SuppressWarnings("unchecked") - @Override - public
    A get(String name) { - return (A) asMap().get(name); - } - - @Override - public Object getProperty(String path) { - AggregationPath aggPath = AggregationPath.parse(path); - return getProperty(aggPath.getPathElementsAsStringList()); - } - - public Object getProperty(List path) { - if (path.isEmpty()) { - return this; - } - String aggName = path.get(0); - InternalAggregation aggregation = get(aggName); - if (aggregation == null) { - throw new IllegalArgumentException("Cannot find an aggregation named [" + aggName + "]"); - } - return aggregation.getProperty(path.subList(1, path.size())); + super(aggregations); } /** @@ -136,21 +60,16 @@ public static InternalAggregations reduce(List aggregation } // first we collect all aggregations of the same type and list them together - Map> aggByName = new HashMap<>(); for (InternalAggregations aggregations : aggregationsList) { - for (InternalAggregation aggregation : aggregations.aggregations) { - List aggs = aggByName.get(aggregation.getName()); - if (aggs == null) { - aggs = new ArrayList<>(aggregationsList.size()); - aggByName.put(aggregation.getName(), aggs); - } - aggs.add(aggregation); + for (Aggregation aggregation : aggregations.aggregations) { + List aggs = aggByName.computeIfAbsent( + aggregation.getName(), k -> new ArrayList<>(aggregationsList.size())); + aggs.add((InternalAggregation)aggregation); } } // now we can use the first aggregation of each list to handle the reduce of its list - List reducedAggregations = new ArrayList<>(); for (Map.Entry> entry : aggByName.entrySet()) { List aggregations = entry.getValue(); @@ -160,31 +79,6 @@ public static InternalAggregations reduce(List aggregation return new InternalAggregations(reducedAggregations); } - /** The fields required to write this addAggregation to xcontent */ - static class Fields { - public static final String AGGREGATIONS = "aggregations"; - } - - @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - if (aggregations.isEmpty()) { - return builder; - } - builder.startObject(Fields.AGGREGATIONS); - toXContentInternal(builder, params); - return builder.endObject(); - } - - /** - * Directly write all the addAggregation without their bounding object. 
Used by sub-addAggregation (non top level addAggregation) - */ - public XContentBuilder toXContentInternal(XContentBuilder builder, Params params) throws IOException { - for (Aggregation aggregation : aggregations) { - ((InternalAggregation) aggregation).toXContent(builder, params); - } - return builder; - } - public static InternalAggregations readAggregations(StreamInput in) throws IOException { InternalAggregations result = new InternalAggregations(); result.readFrom(in); @@ -199,13 +93,13 @@ public static InternalAggregations readOptionalAggregations(StreamInput in) thro public void readFrom(StreamInput in) throws IOException { aggregations = in.readList(stream -> in.readNamedWriteable(InternalAggregation.class)); if (aggregations.isEmpty()) { - aggregationsAsMap = emptyMap(); + aggregationsAsMap = Collections.emptyMap(); } } @Override + @SuppressWarnings("unchecked") public void writeTo(StreamOutput out) throws IOException { - out.writeNamedWriteableList(aggregations); + out.writeNamedWriteableList((List)aggregations); } - } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/InternalMultiBucketAggregation.java b/core/src/main/java/org/elasticsearch/search/aggregations/InternalMultiBucketAggregation.java index a1326aaed117c..8e8f4edcf3193 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/InternalMultiBucketAggregation.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/InternalMultiBucketAggregation.java @@ -20,6 +20,7 @@ package org.elasticsearch.search.aggregations; import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; @@ -62,6 +63,9 @@ protected InternalMultiBucketAggregation(StreamInput in) throws IOException { */ public abstract B createBucket(InternalAggregations aggregations, B prototype); + @Override + public abstract List getBuckets(); + @Override public Object getProperty(List path) { if (path.isEmpty()) { @@ -69,7 +73,7 @@ public Object getProperty(List path) { } else if (path.get(0).equals("_bucket_count")) { return getBuckets().size(); } else { - List buckets = getBuckets(); + List buckets = getBuckets(); Object[] propertyArray = new Object[buckets.size()]; for (int i = 0; i < buckets.size(); i++) { propertyArray[i] = buckets.get(i).getProperty(getName(), path); @@ -78,8 +82,8 @@ public Object getProperty(List path) { } } - public abstract static class InternalBucket implements Bucket { - @Override + public abstract static class InternalBucket implements Bucket, Writeable { + public Object getProperty(String containingAggName, List path) { if (path.isEmpty()) { return this; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/NonCollectingAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/NonCollectingAggregator.java index 7e0e3cc9f9fe4..d65afd2ba9125 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/NonCollectingAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/NonCollectingAggregator.java @@ -21,7 +21,7 @@ import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -33,12 +33,12 @@ 
*/ public abstract class NonCollectingAggregator extends AggregatorBase { - protected NonCollectingAggregator(String name, AggregationContext context, Aggregator parent, AggregatorFactories subFactories, + protected NonCollectingAggregator(String name, SearchContext context, Aggregator parent, AggregatorFactories subFactories, List pipelineAggregators, Map metaData) throws IOException { super(name, subFactories, context, parent, pipelineAggregators, metaData); } - protected NonCollectingAggregator(String name, AggregationContext context, Aggregator parent, + protected NonCollectingAggregator(String name, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { this(name, context, parent, AggregatorFactories.EMPTY, pipelineAggregators, metaData); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/ParsedAggregation.java b/core/src/main/java/org/elasticsearch/search/aggregations/ParsedAggregation.java new file mode 100644 index 0000000000000..d79baac06b097 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/ParsedAggregation.java @@ -0,0 +1,87 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParser.Token; + +import java.io.IOException; +import java.util.Collections; +import java.util.Map; + +/** + * An implementation of {@link Aggregation} that is parsed from a REST response. + * Serves as a base class for all aggregation implementations that are parsed from REST. 
+ */ +public abstract class ParsedAggregation implements Aggregation, ToXContent { + + protected static void declareAggregationFields(ObjectParser objectParser) { + objectParser.declareObject((parsedAgg, metadata) -> parsedAgg.metadata = Collections.unmodifiableMap(metadata), + (parser, context) -> parser.map(), InternalAggregation.CommonFields.META); + } + + private String name; + protected Map metadata; + + @Override + public final String getName() { + return name; + } + + protected void setName(String name) { + this.name = name; + } + + @Override + public final Map getMetaData() { + return metadata; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException { + // Concatenates the type and the name of the aggregation (ex: top_hits#foo) + builder.startObject(String.join(InternalAggregation.TYPED_KEYS_DELIMITER, getType(), name)); + if (this.metadata != null) { + builder.field(InternalAggregation.CommonFields.META.getPreferredName()); + builder.map(this.metadata); + } + doXContentBody(builder, params); + builder.endObject(); + return builder; + } + + protected abstract XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException; + + /** + * Parse a token of type XContentParser.Token.VALUE_NUMBER or XContentParser.Token.STRING to a double. + * In other cases the default value is returned instead. + */ + protected static double parseDouble(XContentParser parser, double defaultNullValue) throws IOException { + Token currentToken = parser.currentToken(); + if (currentToken == XContentParser.Token.VALUE_NUMBER || currentToken == XContentParser.Token.VALUE_STRING) { + return parser.doubleValue(); + } else { + return defaultNullValue; + } + } +} \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/ParsedMultiBucketAggregation.java b/core/src/main/java/org/elasticsearch/search/aggregations/ParsedMultiBucketAggregation.java new file mode 100644 index 0000000000000..1e601cb30fe75 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/ParsedMultiBucketAggregation.java @@ -0,0 +1,182 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations; + +import org.elasticsearch.common.CheckedBiConsumer; +import org.elasticsearch.common.CheckedFunction; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParserUtils; +import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.function.Supplier; + +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + +public abstract class ParsedMultiBucketAggregation + extends ParsedAggregation implements MultiBucketsAggregation { + + protected final List buckets = new ArrayList<>(); + protected boolean keyed = false; + + @Override + protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + if (keyed) { + builder.startObject(CommonFields.BUCKETS.getPreferredName()); + } else { + builder.startArray(CommonFields.BUCKETS.getPreferredName()); + } + for (B bucket : buckets) { + bucket.toXContent(builder, params); + } + if (keyed) { + builder.endObject(); + } else { + builder.endArray(); + } + return builder; + } + + protected static void declareMultiBucketAggregationFields(final ObjectParser objectParser, + final CheckedFunction bucketParser, + final CheckedFunction keyedBucketParser) { + declareAggregationFields(objectParser); + objectParser.declareField((parser, aggregation, context) -> { + XContentParser.Token token = parser.currentToken(); + if (token == XContentParser.Token.START_OBJECT) { + aggregation.keyed = true; + while (parser.nextToken() != XContentParser.Token.END_OBJECT) { + aggregation.buckets.add(keyedBucketParser.apply(parser)); + } + } else if (token == XContentParser.Token.START_ARRAY) { + aggregation.keyed = false; + while (parser.nextToken() != XContentParser.Token.END_ARRAY) { + aggregation.buckets.add(bucketParser.apply(parser)); + } + } + }, CommonFields.BUCKETS, ObjectParser.ValueType.OBJECT_ARRAY); + } + + public abstract static class ParsedBucket implements MultiBucketsAggregation.Bucket { + + private Aggregations aggregations; + private String keyAsString; + private long docCount; + private boolean keyed; + + protected void setKeyAsString(String keyAsString) { + this.keyAsString = keyAsString; + } + + @Override + public String getKeyAsString() { + return keyAsString; + } + + protected void setDocCount(long docCount) { + this.docCount = docCount; + } + + @Override + public long getDocCount() { + return docCount; + } + + public void setKeyed(boolean keyed) { + this.keyed = keyed; + } + + protected boolean isKeyed() { + return keyed; + } + + protected void setAggregations(Aggregations aggregations) { + this.aggregations = aggregations; + } + + @Override + public Aggregations getAggregations() { + return aggregations; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + if (keyed) { + // Subclasses can override the getKeyAsString method to handle specific cases like + // keyed bucket with RAW doc value format where the key_as_string field is not printed + // out but we still need to have a string version of the key to use as the bucket's name. 
+ builder.startObject(getKeyAsString()); + } else { + builder.startObject(); + } + if (keyAsString != null) { + builder.field(CommonFields.KEY_AS_STRING.getPreferredName(), getKeyAsString()); + } + keyToXContent(builder); + builder.field(CommonFields.DOC_COUNT.getPreferredName(), docCount); + aggregations.toXContentInternal(builder, params); + builder.endObject(); + return builder; + } + + protected XContentBuilder keyToXContent(XContentBuilder builder) throws IOException { + return builder.field(CommonFields.KEY.getPreferredName(), getKey()); + } + + protected static B parseXContent(final XContentParser parser, + final boolean keyed, + final Supplier bucketSupplier, + final CheckedBiConsumer keyConsumer) + throws IOException { + final B bucket = bucketSupplier.get(); + bucket.setKeyed(keyed); + XContentParser.Token token = parser.currentToken(); + String currentFieldName = parser.currentName(); + if (keyed) { + ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, parser::getTokenLocation); + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.nextToken(), parser::getTokenLocation); + } + + List aggregations = new ArrayList<>(); + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (CommonFields.KEY_AS_STRING.getPreferredName().equals(currentFieldName)) { + bucket.setKeyAsString(parser.text()); + } else if (CommonFields.KEY.getPreferredName().equals(currentFieldName)) { + keyConsumer.accept(parser, bucket); + } else if (CommonFields.DOC_COUNT.getPreferredName().equals(currentFieldName)) { + bucket.setDocCount(parser.longValue()); + } + } else if (token == XContentParser.Token.START_OBJECT) { + XContentParserUtils.parseTypedKeysObject(parser, Aggregation.TYPED_KEYS_DELIMITER, Aggregation.class, + aggregations::add); + } + } + bucket.setAggregations(new Aggregations(aggregations)); + return bucket; + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/PipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/PipelineAggregationBuilder.java index f946232547152..8f965d1d87eb4 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/PipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/PipelineAggregationBuilder.java @@ -20,7 +20,7 @@ import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.io.stream.NamedWriteable; -import org.elasticsearch.search.aggregations.AggregatorFactory; +import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import java.io.IOException; @@ -32,7 +32,7 @@ * specific type. */ public abstract class PipelineAggregationBuilder extends ToXContentToBytes - implements NamedWriteable { + implements NamedWriteable, BaseAggregationBuilder { protected final String name; protected final String[] bucketsPaths; @@ -79,6 +79,11 @@ protected abstract void validate(AggregatorFactory parent, AggregatorFactory< protected abstract PipelineAggregator create() throws IOException; /** Associate metadata with this {@link PipelineAggregationBuilder}. 
*/ + @Override public abstract PipelineAggregationBuilder setMetaData(Map metaData); + @Override + public PipelineAggregationBuilder subAggregations(Builder subFactories) { + throw new IllegalArgumentException("Aggregation [" + name + "] cannot define sub-aggregations"); + } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/SearchContextAggregations.java b/core/src/main/java/org/elasticsearch/search/aggregations/SearchContextAggregations.java index f003519350f87..9476af03846bf 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/SearchContextAggregations.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/SearchContextAggregations.java @@ -18,8 +18,6 @@ */ package org.elasticsearch.search.aggregations; -import org.elasticsearch.search.aggregations.support.AggregationContext; - /** * The aggregation context that is part of the search context. */ @@ -27,7 +25,6 @@ public class SearchContextAggregations { private final AggregatorFactories factories; private Aggregator[] aggregators; - private AggregationContext aggregationContext; /** * Creates a new aggregation context with the parsed aggregator factories @@ -44,14 +41,6 @@ public Aggregator[] aggregators() { return aggregators; } - public AggregationContext aggregationContext() { - return aggregationContext; - } - - public void aggregationContext(AggregationContext aggregationContext) { - this.aggregationContext = aggregationContext; - } - /** * Registers all the created aggregators (top level aggregators) for the search execution context. * diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/BestBucketsDeferringCollector.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/BestBucketsDeferringCollector.java index 7c6ebae7403d3..61d85b80ad9e7 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/BestBucketsDeferringCollector.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/BestBucketsDeferringCollector.java @@ -31,7 +31,7 @@ import org.elasticsearch.search.aggregations.BucketCollector; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.LeafBucketCollector; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.ArrayList; @@ -49,7 +49,7 @@ private static class Entry { final PackedLongValues docDeltas; final PackedLongValues buckets; - public Entry(LeafReaderContext context, PackedLongValues docDeltas, PackedLongValues buckets) { + Entry(LeafReaderContext context, PackedLongValues docDeltas, PackedLongValues buckets) { this.context = context; this.docDeltas = docDeltas; this.buckets = buckets; @@ -58,7 +58,7 @@ public Entry(LeafReaderContext context, PackedLongValues docDeltas, PackedLongVa final List entries = new ArrayList<>(); BucketCollector collector; - final AggregationContext aggContext; + final SearchContext searchContext; LeafReaderContext context; PackedLongValues.Builder docDeltas; PackedLongValues.Builder buckets; @@ -67,8 +67,8 @@ public Entry(LeafReaderContext context, PackedLongValues docDeltas, PackedLongVa LongHash selectedBuckets; /** Sole constructor. 
*/ - public BestBucketsDeferringCollector(AggregationContext context) { - this.aggContext = context; + public BestBucketsDeferringCollector(SearchContext context) { + this.searchContext = context; } @Override @@ -147,8 +147,8 @@ public void prepareSelectedBuckets(long... selectedBuckets) throws IOException { boolean needsScores = collector.needsScores(); Weight weight = null; if (needsScores) { - weight = aggContext.searchContext().searcher() - .createNormalizedWeight(aggContext.searchContext().query(), true); + weight = searchContext.searcher() + .createNormalizedWeight(searchContext.query(), true); } for (Entry entry : entries) { final LeafBucketCollector leafCollector = collector.getLeafCollector(entry.context); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/BestDocsDeferringCollector.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/BestDocsDeferringCollector.java index 90316c1a001d1..8d699b7c43852 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/BestDocsDeferringCollector.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/BestDocsDeferringCollector.java @@ -163,7 +163,7 @@ class PerParentBucketSamples { private long parentBucket; private int matchedDocs; - public PerParentBucketSamples(long parentBucket, Scorer scorer, LeafReaderContext readerContext) { + PerParentBucketSamples(long parentBucket, Scorer scorer, LeafReaderContext readerContext) { try { this.parentBucket = parentBucket; tdc = createTopDocsCollector(shardSize); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/BucketsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/BucketsAggregator.java index ab655497c4c42..665bf9b79670c 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/BucketsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/BucketsAggregator.java @@ -28,7 +28,7 @@ import org.elasticsearch.search.aggregations.InternalAggregations; import org.elasticsearch.search.aggregations.LeafBucketCollector; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Arrays; @@ -43,7 +43,7 @@ public abstract class BucketsAggregator extends AggregatorBase { private final BigArrays bigArrays; private IntArray docCounts; - public BucketsAggregator(String name, AggregatorFactories factories, AggregationContext context, Aggregator parent, + public BucketsAggregator(String name, AggregatorFactories factories, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, factories, context, parent, pipelineAggregators, metaData); bigArrays = context.bigArrays(); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/DeferringBucketCollector.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/DeferringBucketCollector.java index 7ecf7672bfd77..3c63df2c06a76 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/DeferringBucketCollector.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/DeferringBucketCollector.java @@ -24,7 +24,7 @@ import org.elasticsearch.search.aggregations.BucketCollector; import org.elasticsearch.search.aggregations.InternalAggregation; import 
org.elasticsearch.search.aggregations.LeafBucketCollector; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; @@ -82,7 +82,7 @@ public Aggregator parent() { } @Override - public AggregationContext context() { + public SearchContext context() { return in.context(); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/InternalSingleBucketAggregation.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/InternalSingleBucketAggregation.java index e8b04680064e3..74eee9d6973a6 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/InternalSingleBucketAggregation.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/InternalSingleBucketAggregation.java @@ -129,7 +129,7 @@ public Object getProperty(List path) { @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - builder.field(CommonFields.DOC_COUNT, docCount); + builder.field(CommonFields.DOC_COUNT.getPreferredName(), docCount); aggregations.toXContentInternal(builder, params); return builder; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/MultiBucketsAggregation.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/MultiBucketsAggregation.java index 2d8d26dd35c15..fc223916f72c1 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/MultiBucketsAggregation.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/MultiBucketsAggregation.java @@ -19,9 +19,6 @@ package org.elasticsearch.search.aggregations.bucket; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.util.Comparators; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.search.aggregations.Aggregation; @@ -29,7 +26,6 @@ import org.elasticsearch.search.aggregations.HasAggregations; import org.elasticsearch.search.aggregations.support.AggregationPath; -import java.io.IOException; import java.util.List; /** @@ -40,7 +36,7 @@ public interface MultiBucketsAggregation extends Aggregation { * A bucket represents a criteria to which all documents that fall in it adhere to. It is also uniquely identified * by a key, and can potentially hold sub-aggregations computed over all documents in it. 
*/ - public interface Bucket extends HasAggregations, ToXContent, Writeable { + interface Bucket extends HasAggregations, ToXContent { /** * @return The key associated with the bucket */ @@ -62,9 +58,7 @@ public interface Bucket extends HasAggregations, ToXContent, Writeable { @Override Aggregations getAggregations(); - Object getProperty(String containingAggName, List path); - - static class SubAggregationComparator implements java.util.Comparator { + class SubAggregationComparator implements java.util.Comparator { private final AggregationPath path; private final boolean asc; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/ParsedSingleBucketAggregation.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/ParsedSingleBucketAggregation.java new file mode 100644 index 0000000000000..d6f6b2fe3ed0e --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/ParsedSingleBucketAggregation.java @@ -0,0 +1,94 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.search.aggregations.bucket; + +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParserUtils; +import org.elasticsearch.search.aggregations.Aggregation; +import org.elasticsearch.search.aggregations.Aggregations; +import org.elasticsearch.search.aggregations.ParsedAggregation; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; + +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + +/** + * A base class for all the single bucket aggregations. 
+ */ +public abstract class ParsedSingleBucketAggregation extends ParsedAggregation implements SingleBucketAggregation { + + private long docCount; + protected Aggregations aggregations = new Aggregations(Collections.emptyList()); + + @Override + public long getDocCount() { + return docCount; + } + + protected void setDocCount(long docCount) { + this.docCount = docCount; + } + + @Override + public Aggregations getAggregations() { + return aggregations; + } + + @Override + public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + builder.field(CommonFields.DOC_COUNT.getPreferredName(), docCount); + aggregations.toXContentInternal(builder, params); + return builder; + } + + protected static T parseXContent(final XContentParser parser, T aggregation, String name) + throws IOException { + aggregation.setName(name); + XContentParser.Token token = parser.currentToken(); + String currentFieldName = parser.currentName(); + if (token == XContentParser.Token.FIELD_NAME) { + token = parser.nextToken(); + } + ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation); + + List aggregations = new ArrayList<>(); + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (CommonFields.DOC_COUNT.getPreferredName().equals(currentFieldName)) { + aggregation.setDocCount(parser.longValue()); + } + } else if (token == XContentParser.Token.START_OBJECT) { + if (CommonFields.META.getPreferredName().equals(currentFieldName)) { + aggregation.metadata = parser.map(); + } else { + XContentParserUtils.parseTypedKeysObject(parser, Aggregation.TYPED_KEYS_DELIMITER, Aggregation.class, + aggregations::add); + } + } + } + aggregation.aggregations = new Aggregations(aggregations); + return aggregation; + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/SingleBucketAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/SingleBucketAggregator.java index 44b1eea914662..f74df5d63b167 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/SingleBucketAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/SingleBucketAggregator.java @@ -21,7 +21,7 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -33,7 +33,7 @@ public abstract class SingleBucketAggregator extends BucketsAggregator { protected SingleBucketAggregator(String name, AggregatorFactories factories, - AggregationContext aggregationContext, Aggregator parent, + SearchContext aggregationContext, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, factories, aggregationContext, parent, pipelineAggregators, metaData); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrix.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrix.java new file mode 100644 index 0000000000000..365fbb5547b33 --- /dev/null +++ 
b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrix.java @@ -0,0 +1,48 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.bucket.adjacency; + +import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation; + +import java.util.List; + +/** + * A multi bucket aggregation where the buckets are defined by a set of filters + * (a bucket is produced per filter plus a bucket for each non-empty filter + * intersection so A, B and A&B). + */ +public interface AdjacencyMatrix extends MultiBucketsAggregation { + + /** + * A bucket associated with a specific filter or pair (identified by its + * key) + */ + interface Bucket extends MultiBucketsAggregation.Bucket { + } + + /** + * The buckets created by this aggregation. + */ + @Override + List getBuckets(); + + Bucket getBucketByKey(String key); + +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregationBuilder.java new file mode 100644 index 0000000000000..eeb60d393e5c8 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregationBuilder.java @@ -0,0 +1,237 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.bucket.adjacency; + +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.IndexSettings; +import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AbstractAggregationBuilder; +import org.elasticsearch.search.aggregations.AggregationBuilder; +import org.elasticsearch.search.aggregations.Aggregator; +import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; +import org.elasticsearch.search.aggregations.AggregatorFactory; +import org.elasticsearch.search.aggregations.bucket.adjacency.AdjacencyMatrixAggregator.KeyedFilter; +import org.elasticsearch.search.internal.SearchContext; +import org.elasticsearch.search.query.QueryPhaseExecutionException; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Map.Entry; +import java.util.Objects; + +public class AdjacencyMatrixAggregationBuilder extends AbstractAggregationBuilder { + public static final String NAME = "adjacency_matrix"; + + private static final String DEFAULT_SEPARATOR = "&"; + + private static final ParseField SEPARATOR_FIELD = new ParseField("separator"); + private static final ParseField FILTERS_FIELD = new ParseField("filters"); + private List filters; + private String separator = DEFAULT_SEPARATOR; + + public static Aggregator.Parser getParser() { + ObjectParser parser = new ObjectParser<>( + AdjacencyMatrixAggregationBuilder.NAME); + parser.declareString(AdjacencyMatrixAggregationBuilder::separator, SEPARATOR_FIELD); + parser.declareNamedObjects(AdjacencyMatrixAggregationBuilder::setFiltersAsList, KeyedFilter.PARSER, FILTERS_FIELD); + return new Aggregator.Parser() { + @Override + public AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + AdjacencyMatrixAggregationBuilder result = parser.parse(context.parser(), + new AdjacencyMatrixAggregationBuilder(aggregationName), context); + result.checkConsistency(); + return result; + } + }; + } + + protected void checkConsistency() { + if ((filters == null) || (filters.size() == 0)) { + throw new IllegalStateException("[" + name + "] is missing : " + FILTERS_FIELD.getPreferredName() + " parameter"); + } + } + + + protected void setFiltersAsMap(Map filters) { + // Convert uniquely named objects into internal KeyedFilters + this.filters = new ArrayList<>(filters.size()); + for (Entry kv : filters.entrySet()) { + this.filters.add(new KeyedFilter(kv.getKey(), kv.getValue())); + } + // internally we want to have a fixed order of filters, regardless of + // the order of the filters in the request + Collections.sort(this.filters, Comparator.comparing(KeyedFilter::key)); + } + + protected void setFiltersAsList(List filters) { + this.filters = new ArrayList<>(filters); + // internally we want to have a fixed order of filters, regardless of + // the order of the filters in the request + Collections.sort(this.filters, Comparator.comparing(KeyedFilter::key)); + } + + + /** + * @param name + * the name of this aggregation + */ + protected AdjacencyMatrixAggregationBuilder(String name) { + 
super(name); + } + + + /** + * @param name + * the name of this aggregation + * @param filters + * the filters and their keys to use with this aggregation. + */ + public AdjacencyMatrixAggregationBuilder(String name, Map filters) { + this(name, DEFAULT_SEPARATOR, filters); + } + + /** + * @param name + * the name of this aggregation + * @param separator + * the string used to separate keys in intersection buckets e.g. + * & character for keyed filters A and B would return an + * intersection bucket named A&B + * @param filters + * the filters and their keys to use with this aggregation. + */ + public AdjacencyMatrixAggregationBuilder(String name, String separator, Map filters) { + super(name); + this.separator = separator; + setFiltersAsMap(filters); + } + + /** + * Read from a stream. + */ + public AdjacencyMatrixAggregationBuilder(StreamInput in) throws IOException { + super(in); + int filtersSize = in.readVInt(); + separator = in.readString(); + filters = new ArrayList<>(filtersSize); + for (int i = 0; i < filtersSize; i++) { + filters.add(new KeyedFilter(in)); + } + } + + @Override + protected void doWriteTo(StreamOutput out) throws IOException { + out.writeVInt(filters.size()); + out.writeString(separator); + for (KeyedFilter keyedFilter : filters) { + keyedFilter.writeTo(out); + } + } + + /** + * Set the separator used to join pairs of bucket keys + */ + public AdjacencyMatrixAggregationBuilder separator(String separator) { + if (separator == null) { + throw new IllegalArgumentException("[separator] must not be null: [" + name + "]"); + } + this.separator = separator; + return this; + } + + /** + * Get the separator used to join pairs of bucket keys + */ + public String separator() { + return separator; + } + + /** + * Get the filters. This will be an unmodifiable map + */ + public Map filters() { + Map result = new HashMap<>(this.filters.size()); + for (KeyedFilter keyedFilter : this.filters) { + result.put(keyedFilter.key(), keyedFilter.filter()); + } + return result; + } + + + @Override + protected AggregatorFactory doBuild(SearchContext context, AggregatorFactory parent, Builder subFactoriesBuilder) + throws IOException { + int maxFilters = context.indexShard().indexSettings().getMaxAdjacencyMatrixFilters(); + if (filters.size() > maxFilters){ + throw new QueryPhaseExecutionException(context, + "Number of filters is too large, must be less than or equal to: [" + maxFilters + "] but was [" + + filters.size() + "]."
+ + "This limit can be set by changing the [" + IndexSettings.MAX_ADJACENCY_MATRIX_FILTERS_SETTING.getKey() + + "] index level setting."); + } + + List rewrittenFilters = new ArrayList<>(filters.size()); + for (KeyedFilter kf : filters) { + rewrittenFilters.add(new KeyedFilter(kf.key(), QueryBuilder.rewriteQuery(kf.filter(), context.getQueryShardContext()))); + } + + return new AdjacencyMatrixAggregatorFactory(name, rewrittenFilters, separator, context, parent, + subFactoriesBuilder, metaData); + } + + @Override + protected XContentBuilder internalXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field(SEPARATOR_FIELD.getPreferredName(), separator); + builder.startObject(AdjacencyMatrixAggregator.FILTERS_FIELD.getPreferredName()); + for (KeyedFilter keyedFilter : filters) { + builder.field(keyedFilter.key(), keyedFilter.filter()); + } + builder.endObject(); + builder.endObject(); + return builder; + } + + @Override + protected int doHashCode() { + return Objects.hash(filters, separator); + } + + @Override + protected boolean doEquals(Object obj) { + AdjacencyMatrixAggregationBuilder other = (AdjacencyMatrixAggregationBuilder) obj; + return Objects.equals(filters, other.filters) && Objects.equals(separator, other.separator); + } + + @Override + public String getType() { + return NAME; + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregator.java new file mode 100644 index 0000000000000..7c8e54bbfca95 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregator.java @@ -0,0 +1,243 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.bucket.adjacency; + +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.Weight; +import org.apache.lucene.util.Bits; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.lucene.Lucene; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.ObjectParser.NamedObjectParser; +import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.Aggregator; +import org.elasticsearch.search.aggregations.AggregatorFactories; +import org.elasticsearch.search.aggregations.InternalAggregation; +import org.elasticsearch.search.aggregations.LeafBucketCollector; +import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; +import org.elasticsearch.search.aggregations.bucket.BucketsAggregator; +import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; +import org.elasticsearch.search.internal.SearchContext; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.Objects; + +/** + * Aggregation for adjacency matrices. + * + * NOTE! This is an experimental class. + * + * TODO the aggregation produces a sparse response but in the + * computation it uses a non-sparse structure (an array of Bits + * objects). This could be changed to a sparse structure in future. + * + */ +public class AdjacencyMatrixAggregator extends BucketsAggregator { + + public static final ParseField FILTERS_FIELD = new ParseField("filters"); + + protected static class KeyedFilter implements Writeable, ToXContent { + private final String key; + private final QueryBuilder filter; + + public static final NamedObjectParser PARSER = + (XContentParser p, QueryParseContext c, String name) -> + new KeyedFilter(name, c.parseInnerQueryBuilder().get()); + + + public KeyedFilter(String key, QueryBuilder filter) { + if (key == null) { + throw new IllegalArgumentException("[key] must not be null"); + } + if (filter == null) { + throw new IllegalArgumentException("[filter] must not be null"); + } + this.key = key; + this.filter = filter; + } + + /** + * Read from a stream. 
+ */ + public KeyedFilter(StreamInput in) throws IOException { + key = in.readString(); + filter = in.readNamedWriteable(QueryBuilder.class); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeString(key); + out.writeNamedWriteable(filter); + } + + public String key() { + return key; + } + + public QueryBuilder filter() { + return filter; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.field(key, filter); + return builder; + } + + @Override + public int hashCode() { + return Objects.hash(key, filter); + } + + @Override + public boolean equals(Object obj) { + if (obj == null) { + return false; + } + if (getClass() != obj.getClass()) { + return false; + } + KeyedFilter other = (KeyedFilter) obj; + return Objects.equals(key, other.key) && Objects.equals(filter, other.filter); + } + } + + private final String[] keys; + private Weight[] filters; + private final int totalNumKeys; + private final int totalNumIntersections; + private final String separator; + + public AdjacencyMatrixAggregator(String name, AggregatorFactories factories, String separator, String[] keys, + Weight[] filters, SearchContext context, Aggregator parent, List pipelineAggregators, + Map metaData) throws IOException { + super(name, factories, context, parent, pipelineAggregators, metaData); + this.separator = separator; + this.keys = keys; + this.filters = filters; + this.totalNumIntersections = ((keys.length * keys.length) - keys.length) / 2; + this.totalNumKeys = keys.length + totalNumIntersections; + } + + private static class BitsIntersector implements Bits { + Bits a; + Bits b; + + BitsIntersector(Bits a, Bits b) { + super(); + this.a = a; + this.b = b; + } + + @Override + public boolean get(int index) { + return a.get(index) && b.get(index); + } + + @Override + public int length() { + return Math.min(a.length(), b.length()); + } + + } + + @Override + public LeafBucketCollector getLeafCollector(LeafReaderContext ctx, final LeafBucketCollector sub) throws IOException { + // no need to provide deleted docs to the filter + final Bits[] bits = new Bits[filters.length + totalNumIntersections]; + for (int i = 0; i < filters.length; ++i) { + bits[i] = Lucene.asSequentialAccessBits(ctx.reader().maxDoc(), filters[i].scorerSupplier(ctx)); + } + // Add extra Bits for intersections + int pos = filters.length; + for (int i = 0; i < filters.length; i++) { + for (int j = i + 1; j < filters.length; j++) { + bits[pos++] = new BitsIntersector(bits[i], bits[j]); + } + } + assert pos == bits.length; + return new LeafBucketCollectorBase(sub, null) { + @Override + public void collect(int doc, long bucket) throws IOException { + for (int i = 0; i < bits.length; i++) { + if (bits[i].get(doc)) { + collectBucket(sub, doc, bucketOrd(bucket, i)); + } + } + } + }; + } + + @Override + public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOException { + + // Buckets are ordered into groups - [keyed filters] [key1&key2 intersects] + + List buckets = new ArrayList<>(filters.length); + for (int i = 0; i < keys.length; i++) { + long bucketOrd = bucketOrd(owningBucketOrdinal, i); + int docCount = bucketDocCount(bucketOrd); + // Empty buckets are not returned because this aggregation will commonly be used under a + // a date-histogram where we will look for transactions over time and can expect many + // empty buckets. 
+ if (docCount > 0) { + InternalAdjacencyMatrix.InternalBucket bucket = new InternalAdjacencyMatrix.InternalBucket(keys[i], + docCount, bucketAggregations(bucketOrd)); + buckets.add(bucket); + } + } + int pos = keys.length; + for (int i = 0; i < keys.length; i++) { + for (int j = i + 1; j < keys.length; j++) { + long bucketOrd = bucketOrd(owningBucketOrdinal, pos); + int docCount = bucketDocCount(bucketOrd); + // Empty buckets are not returned due to potential for very sparse matrices + if (docCount > 0) { + String intersectKey = keys[i] + separator + keys[j]; + InternalAdjacencyMatrix.InternalBucket bucket = new InternalAdjacencyMatrix.InternalBucket(intersectKey, + docCount, bucketAggregations(bucketOrd)); + buckets.add(bucket); + } + pos++; + } + } + return new InternalAdjacencyMatrix(name, buckets, pipelineAggregators(), metaData()); + } + + @Override + public InternalAggregation buildEmptyAggregation() { + List buckets = new ArrayList<>(0); + return new InternalAdjacencyMatrix(name, buckets, pipelineAggregators(), metaData()); + } + + final long bucketOrd(long owningBucketOrdinal, int filterOrd) { + return owningBucketOrdinal * totalNumKeys + filterOrd; + } + +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregatorFactory.java new file mode 100644 index 0000000000000..6df88379d4eb0 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregatorFactory.java @@ -0,0 +1,65 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.bucket.adjacency; + +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.Weight; +import org.elasticsearch.search.aggregations.Aggregator; +import org.elasticsearch.search.aggregations.AggregatorFactories; +import org.elasticsearch.search.aggregations.AggregatorFactory; +import org.elasticsearch.search.aggregations.bucket.adjacency.AdjacencyMatrixAggregator.KeyedFilter; +import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; +import org.elasticsearch.search.internal.SearchContext; + +import java.io.IOException; +import java.util.List; +import java.util.Map; + +public class AdjacencyMatrixAggregatorFactory extends AggregatorFactory { + + private final String[] keys; + private final Weight[] weights; + private final String separator; + + public AdjacencyMatrixAggregatorFactory(String name, List filters, String separator, + SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactories, + Map metaData) throws IOException { + super(name, context, parent, subFactories, metaData); + IndexSearcher contextSearcher = context.searcher(); + this.separator = separator; + weights = new Weight[filters.size()]; + keys = new String[filters.size()]; + for (int i = 0; i < filters.size(); ++i) { + KeyedFilter keyedFilter = filters.get(i); + this.keys[i] = keyedFilter.key(); + Query filter = keyedFilter.filter().toFilter(context.getQueryShardContext()); + this.weights[i] = contextSearcher.createNormalizedWeight(filter, false); + } + } + + @Override + public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBucket, List pipelineAggregators, + Map metaData) throws IOException { + return new AdjacencyMatrixAggregator(name, factories, separator, keys, weights, context, parent, + pipelineAggregators, metaData); + } + +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/InternalAdjacencyMatrix.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/InternalAdjacencyMatrix.java new file mode 100644 index 0000000000000..3d0839b7fb477 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/InternalAdjacencyMatrix.java @@ -0,0 +1,218 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.bucket.adjacency; + +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.search.aggregations.Aggregations; +import org.elasticsearch.search.aggregations.InternalAggregation; +import org.elasticsearch.search.aggregations.InternalAggregations; +import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation; +import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +public class InternalAdjacencyMatrix + extends InternalMultiBucketAggregation + implements AdjacencyMatrix { + public static class InternalBucket extends InternalMultiBucketAggregation.InternalBucket implements AdjacencyMatrix.Bucket { + + private final String key; + private long docCount; + InternalAggregations aggregations; + + public InternalBucket(String key, long docCount, InternalAggregations aggregations) { + this.key = key; + this.docCount = docCount; + this.aggregations = aggregations; + } + + /** + * Read from a stream. + */ + public InternalBucket(StreamInput in) throws IOException { + key = in.readOptionalString(); + docCount = in.readVLong(); + aggregations = InternalAggregations.readAggregations(in); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeOptionalString(key); + out.writeVLong(docCount); + aggregations.writeTo(out); + } + + @Override + public String getKey() { + return key; + } + + @Override + public String getKeyAsString() { + return key; + } + + @Override + public long getDocCount() { + return docCount; + } + + @Override + public Aggregations getAggregations() { + return aggregations; + } + + InternalBucket reduce(List buckets, ReduceContext context) { + InternalBucket reduced = null; + List aggregationsList = new ArrayList<>(buckets.size()); + for (InternalBucket bucket : buckets) { + if (reduced == null) { + reduced = new InternalBucket(bucket.key, bucket.docCount, bucket.aggregations); + } else { + reduced.docCount += bucket.docCount; + } + aggregationsList.add(bucket.aggregations); + } + reduced.aggregations = InternalAggregations.reduce(aggregationsList, context); + return reduced; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field(CommonFields.KEY.getPreferredName(), key); + builder.field(CommonFields.DOC_COUNT.getPreferredName(), docCount); + aggregations.toXContentInternal(builder, params); + builder.endObject(); + return builder; + } + } + + private final List buckets; + private Map bucketMap; + + public InternalAdjacencyMatrix(String name, List buckets, + List pipelineAggregators, Map metaData) { + super(name, pipelineAggregators, metaData); + this.buckets = buckets; + } + + /** + * Read from a stream. 
+ */ + public InternalAdjacencyMatrix(StreamInput in) throws IOException { + super(in); + int size = in.readVInt(); + List buckets = new ArrayList<>(size); + for (int i = 0; i < size; i++) { + buckets.add(new InternalBucket(in)); + } + this.buckets = buckets; + this.bucketMap = null; + } + + @Override + protected void doWriteTo(StreamOutput out) throws IOException { + out.writeVInt(buckets.size()); + for (InternalBucket bucket : buckets) { + bucket.writeTo(out); + } + } + + @Override + public String getWriteableName() { + return AdjacencyMatrixAggregationBuilder.NAME; + } + + @Override + public InternalAdjacencyMatrix create(List buckets) { + return new InternalAdjacencyMatrix(this.name, buckets, this.pipelineAggregators(), this.metaData); + } + + @Override + public InternalBucket createBucket(InternalAggregations aggregations, InternalBucket prototype) { + return new InternalBucket(prototype.key, prototype.docCount, aggregations); + } + + @Override + public List getBuckets() { + return buckets; + } + + @Override + public InternalBucket getBucketByKey(String key) { + if (bucketMap == null) { + bucketMap = new HashMap<>(buckets.size()); + for (InternalBucket bucket : buckets) { + bucketMap.put(bucket.getKey(), bucket); + } + } + return bucketMap.get(key); + } + + @Override + public InternalAggregation doReduce(List aggregations, ReduceContext reduceContext) { + Map> bucketsMap = new HashMap<>(); + for (InternalAggregation aggregation : aggregations) { + InternalAdjacencyMatrix filters = (InternalAdjacencyMatrix) aggregation; + for (InternalBucket bucket : filters.buckets) { + List sameRangeList = bucketsMap.get(bucket.key); + if(sameRangeList == null){ + sameRangeList = new ArrayList<>(aggregations.size()); + bucketsMap.put(bucket.key, sameRangeList); + } + sameRangeList.add(bucket); + } + } + + ArrayList reducedBuckets = new ArrayList<>(bucketsMap.size()); + for (List sameRangeList : bucketsMap.values()) { + InternalBucket reducedBucket = sameRangeList.get(0).reduce(sameRangeList, reduceContext); + if(reducedBucket.docCount >= 1){ + reducedBuckets.add(reducedBucket); + } + } + Collections.sort(reducedBuckets, Comparator.comparing(InternalBucket::getKey)); + + InternalAdjacencyMatrix reduced = new InternalAdjacencyMatrix(name, reducedBuckets, pipelineAggregators(), + getMetaData()); + + return reduced; + } + + @Override + public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + builder.startArray(CommonFields.BUCKETS.getPreferredName()); + for (InternalBucket bucket : buckets) { + bucket.toXContent(builder, params); + } + builder.endArray(); + return builder; + } + +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/ParsedAdjacencyMatrix.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/ParsedAdjacencyMatrix.java new file mode 100644 index 0000000000000..1fb356d45c28c --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/ParsedAdjacencyMatrix.java @@ -0,0 +1,88 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.bucket.adjacency; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.ParsedMultiBucketAggregation; + +import java.io.IOException; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +public class ParsedAdjacencyMatrix extends ParsedMultiBucketAggregation implements AdjacencyMatrix { + + private Map bucketMap; + + @Override + public String getType() { + return AdjacencyMatrixAggregationBuilder.NAME; + } + + @Override + public List getBuckets() { + return buckets; + } + + @Override + public ParsedBucket getBucketByKey(String key) { + if (bucketMap == null) { + bucketMap = new HashMap<>(buckets.size()); + for (ParsedBucket bucket : buckets) { + bucketMap.put(bucket.getKey(), bucket); + } + } + return bucketMap.get(key); + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedAdjacencyMatrix.class.getSimpleName(), true, ParsedAdjacencyMatrix::new); + static { + declareMultiBucketAggregationFields(PARSER, + parser -> ParsedBucket.fromXContent(parser), + parser -> ParsedBucket.fromXContent(parser)); + } + + public static ParsedAdjacencyMatrix fromXContent(XContentParser parser, String name) throws IOException { + ParsedAdjacencyMatrix aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } + + public static class ParsedBucket extends ParsedMultiBucketAggregation.ParsedBucket implements AdjacencyMatrix.Bucket { + + private String key; + + @Override + public String getKey() { + return key; + } + + @Override + public String getKeyAsString() { + return key; + } + + static ParsedBucket fromXContent(XContentParser parser) throws IOException { + return parseXContent(parser, false, ParsedBucket::new, (p, bucket) -> bucket.key = p.text()); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/children/ChildrenAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/children/ChildrenAggregationBuilder.java deleted file mode 100644 index a05dda207e381..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/children/ChildrenAggregationBuilder.java +++ /dev/null @@ -1,169 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. 
See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.search.aggregations.bucket.children; - -import org.apache.lucene.search.Query; -import org.elasticsearch.common.ParsingException; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData; -import org.elasticsearch.index.mapper.DocumentMapper; -import org.elasticsearch.index.mapper.ParentFieldMapper; -import org.elasticsearch.index.query.QueryParseContext; -import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; -import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; -import org.elasticsearch.search.aggregations.support.FieldContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSource.Bytes.ParentChild; -import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; -import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; -import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Objects; - -public class ChildrenAggregationBuilder extends ValuesSourceAggregationBuilder { - public static final String NAME = "children"; - private static final Type TYPE = new Type(NAME); - - private String parentType; - private final String childType; - private Query parentFilter; - private Query childFilter; - - /** - * @param name - * the name of this aggregation - * @param childType - * the type of children documents - */ - public ChildrenAggregationBuilder(String name, String childType) { - super(name, TYPE, ValuesSourceType.BYTES, ValueType.STRING); - if (childType == null) { - throw new IllegalArgumentException("[childType] must not be null: [" + name + "]"); - } - this.childType = childType; - } - - /** - * Read from a stream. 
- */ - public ChildrenAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.BYTES, ValueType.STRING); - childType = in.readString(); - } - - @Override - protected void innerWriteTo(StreamOutput out) throws IOException { - out.writeString(childType); - } - - @Override - protected ValuesSourceAggregatorFactory innerBuild(AggregationContext context, - ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new ChildrenAggregatorFactory(name, type, config, parentType, childFilter, parentFilter, context, parent, - subFactoriesBuilder, metaData); - } - - @Override - protected ValuesSourceConfig resolveConfig(AggregationContext aggregationContext) { - ValuesSourceConfig config = new ValuesSourceConfig<>(ValuesSourceType.BYTES); - DocumentMapper childDocMapper = aggregationContext.searchContext().mapperService().documentMapper(childType); - - if (childDocMapper != null) { - ParentFieldMapper parentFieldMapper = childDocMapper.parentFieldMapper(); - if (!parentFieldMapper.active()) { - throw new IllegalArgumentException("[children] no [_parent] field not configured that points to a parent type"); - } - parentType = parentFieldMapper.type(); - DocumentMapper parentDocMapper = aggregationContext.searchContext().mapperService().documentMapper(parentType); - if (parentDocMapper != null) { - parentFilter = parentDocMapper.typeFilter(); - childFilter = childDocMapper.typeFilter(); - ParentChildIndexFieldData parentChildIndexFieldData = aggregationContext.searchContext().fieldData() - .getForField(parentFieldMapper.fieldType()); - config.fieldContext(new FieldContext(parentFieldMapper.fieldType().name(), parentChildIndexFieldData, - parentFieldMapper.fieldType())); - } else { - config.unmapped(true); - } - } else { - config.unmapped(true); - } - return config; - } - - @Override - protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - builder.field(ParentToChildrenAggregator.TYPE_FIELD.getPreferredName(), childType); - return builder; - } - - public static ChildrenAggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { - String childType = null; - - XContentParser.Token token; - String currentFieldName = null; - XContentParser parser = context.parser(); - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (token == XContentParser.Token.VALUE_STRING) { - if ("type".equals(currentFieldName)) { - childType = parser.text(); - } else { - throw new ParsingException(parser.getTokenLocation(), - "Unknown key for a " + token + " in [" + aggregationName + "]: [" + currentFieldName + "]."); - } - } else { - throw new ParsingException(parser.getTokenLocation(), "Unexpected token " + token + " in [" + aggregationName + "]."); - } - } - - if (childType == null) { - throw new ParsingException(parser.getTokenLocation(), - "Missing [child_type] field for children aggregation [" + aggregationName + "]"); - } - - - return new ChildrenAggregationBuilder(aggregationName, childType); - } - - @Override - protected int innerHashCode() { - return Objects.hash(childType); - } - - @Override - protected boolean innerEquals(Object obj) { - ChildrenAggregationBuilder other = (ChildrenAggregationBuilder) obj; - return Objects.equals(childType, other.childType); - } - - @Override - public String getWriteableName() { - return NAME; - 
} -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/children/ChildrenAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/children/ChildrenAggregatorFactory.java deleted file mode 100644 index cb87f1260b382..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/children/ChildrenAggregatorFactory.java +++ /dev/null @@ -1,78 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.search.aggregations.bucket.children; - -import org.apache.lucene.search.Query; -import org.elasticsearch.search.aggregations.Aggregator; -import org.elasticsearch.search.aggregations.AggregatorFactories; -import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.NonCollectingAggregator; -import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; -import org.elasticsearch.search.aggregations.support.ValuesSource; -import org.elasticsearch.search.aggregations.support.ValuesSource.Bytes.ParentChild; -import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; -import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; - -import java.io.IOException; -import java.util.List; -import java.util.Map; - -public class ChildrenAggregatorFactory - extends ValuesSourceAggregatorFactory { - - private final String parentType; - private final Query parentFilter; - private final Query childFilter; - - public ChildrenAggregatorFactory(String name, Type type, ValuesSourceConfig config, String parentType, Query childFilter, - Query parentFilter, AggregationContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, - Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); - this.parentType = parentType; - this.childFilter = childFilter; - this.parentFilter = parentFilter; - } - - @Override - protected Aggregator createUnmapped(Aggregator parent, List pipelineAggregators, Map metaData) - throws IOException { - return new NonCollectingAggregator(name, context, parent, pipelineAggregators, metaData) { - - @Override - public InternalAggregation buildEmptyAggregation() { - return new InternalChildren(name, 0, buildEmptySubAggregations(), pipelineAggregators(), metaData()); - } - - }; - } - - @Override - protected Aggregator doCreateInternal(ValuesSource.Bytes.WithOrdinals.ParentChild valuesSource, Aggregator parent, - boolean collectsFromSingleBucket, List pipelineAggregators, Map 
metaData) - throws IOException { - long maxOrd = valuesSource.globalMaxOrd(context.searchContext().searcher(), parentType); - return new ParentToChildrenAggregator(name, factories, context, parent, parentType, childFilter, parentFilter, valuesSource, maxOrd, - pipelineAggregators, metaData); - } - -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregationBuilder.java index 48be5365bb19f..bbf58384c8be2 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregationBuilder.java @@ -28,15 +28,13 @@ import org.elasticsearch.search.aggregations.AbstractAggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Objects; public class FilterAggregationBuilder extends AbstractAggregationBuilder { public static final String NAME = "filter"; - private static final Type TYPE = new Type(NAME); private final QueryBuilder filter; @@ -49,7 +47,7 @@ public class FilterAggregationBuilder extends AbstractAggregationBuilder doBuild(AggregationContext context, AggregatorFactory parent, + protected AggregatorFactory doBuild(SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder) throws IOException { - return new FilterAggregatorFactory(name, type, filter, context, parent, subFactoriesBuilder, metaData); + // TODO this sucks we need a rewrite phase for aggregations too + final QueryBuilder rewrittenFilter = QueryBuilder.rewriteQuery(filter, context.getQueryShardContext()); + return new FilterAggregatorFactory(name, rewrittenFilter, context, parent, subFactoriesBuilder, metaData); } @Override @@ -100,7 +100,7 @@ protected boolean doEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregator.java index 1a79631f93d59..46a9049711f85 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregator.java @@ -29,7 +29,7 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -45,10 +45,10 @@ public class FilterAggregator extends SingleBucketAggregator { public FilterAggregator(String name, Weight filter, AggregatorFactories factories, - AggregationContext aggregationContext, + SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { - super(name, factories, aggregationContext, parent, 
pipelineAggregators, metaData); + super(name, factories, context, parent, pipelineAggregators, metaData); this.filter = filter; } @@ -56,7 +56,7 @@ public FilterAggregator(String name, public LeafBucketCollector getLeafCollector(LeafReaderContext ctx, final LeafBucketCollector sub) throws IOException { // no need to provide deleted docs to the filter - final Bits bits = Lucene.asSequentialAccessBits(ctx.reader().maxDoc(), filter.scorer(ctx)); + final Bits bits = Lucene.asSequentialAccessBits(ctx.reader().maxDoc(), filter.scorerSupplier(ctx)); return new LeafBucketCollectorBase(sub, null) { @Override public void collect(int doc, long bucket) throws IOException { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregatorFactory.java index 212494ef48b7d..482bcb3d00951 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregatorFactory.java @@ -26,9 +26,8 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -36,13 +35,13 @@ public class FilterAggregatorFactory extends AggregatorFactory { - private final Weight weight; + final Weight weight; - public FilterAggregatorFactory(String name, Type type, QueryBuilder filterBuilder, AggregationContext context, + public FilterAggregatorFactory(String name, QueryBuilder filterBuilder, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, context, parent, subFactoriesBuilder, metaData); - IndexSearcher contextSearcher = context.searchContext().searcher(); - Query filter = filterBuilder.toQuery(context.searchContext().getQueryShardContext()); + super(name, context, parent, subFactoriesBuilder, metaData); + IndexSearcher contextSearcher = context.searcher(); + Query filter = filterBuilder.toFilter(context.getQueryShardContext()); weight = contextSearcher.createNormalizedWeight(filter, false); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/ParsedFilter.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/ParsedFilter.java new file mode 100644 index 0000000000000..5f5cf104498e8 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/ParsedFilter.java @@ -0,0 +1,36 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.search.aggregations.bucket.filter; + +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.bucket.ParsedSingleBucketAggregation; + +import java.io.IOException; + +public class ParsedFilter extends ParsedSingleBucketAggregation implements Filter { + + @Override + public String getType() { + return FilterAggregationBuilder.NAME; + } + + public static ParsedFilter fromXContent(XContentParser parser, final String name) throws IOException { + return parseXContent(parser, new ParsedFilter(), name); + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregationBuilder.java index 2cd4f508ccbea..12c977a95a7ad 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregationBuilder.java @@ -29,10 +29,9 @@ import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.search.aggregations.AbstractAggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.bucket.filters.FiltersAggregator.KeyedFilter; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.ArrayList; @@ -45,7 +44,6 @@ public class FiltersAggregationBuilder extends AbstractAggregationBuilder { public static final String NAME = "filters"; - private static final Type TYPE = new Type(NAME); private static final ParseField FILTERS_FIELD = new ParseField("filters"); private static final ParseField OTHER_BUCKET_FIELD = new ParseField("other_bucket"); @@ -67,7 +65,7 @@ public FiltersAggregationBuilder(String name, KeyedFilter... filters) { } private FiltersAggregationBuilder(String name, List filters) { - super(name, TYPE); + super(name); // internally we want to have a fixed order of filters, regardless of the order of the filters in the request this.filters = new ArrayList<>(filters); Collections.sort(this.filters, (KeyedFilter kf1, KeyedFilter kf2) -> kf1.key().compareTo(kf2.key())); @@ -81,7 +79,7 @@ private FiltersAggregationBuilder(String name, List filters) { * the filters to use with this aggregation */ public FiltersAggregationBuilder(String name, QueryBuilder... filters) { - super(name, TYPE); + super(name); List keyedFilters = new ArrayList<>(filters.length); for (int i = 0; i < filters.length; i++) { keyedFilters.add(new KeyedFilter(String.valueOf(i), filters[i])); @@ -94,7 +92,7 @@ public FiltersAggregationBuilder(String name, QueryBuilder... filters) { * Read from a stream. 
*/ public FiltersAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE); + super(in); keyed = in.readBoolean(); int filtersSize = in.readVInt(); filters = new ArrayList<>(filtersSize); @@ -171,9 +169,14 @@ public String otherBucketKey() { } @Override - protected AggregatorFactory doBuild(AggregationContext context, AggregatorFactory parent, Builder subFactoriesBuilder) + protected AggregatorFactory doBuild(SearchContext context, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new FiltersAggregatorFactory(name, type, filters, keyed, otherBucket, otherBucketKey, context, parent, + List rewrittenFilters = new ArrayList<>(filters.size()); + for(KeyedFilter kf : filters) { + rewrittenFilters.add(new KeyedFilter(kf.key(), QueryBuilder.rewriteQuery(kf.filter(), + context.getQueryShardContext()))); + } + return new FiltersAggregatorFactory(name, rewrittenFilters, keyed, otherBucket, otherBucketKey, context, parent, subFactoriesBuilder, metaData); } @@ -209,26 +212,26 @@ public static FiltersAggregationBuilder parse(String aggregationName, QueryParse XContentParser.Token token = null; String currentFieldName = null; String otherBucketKey = null; - Boolean otherBucket = false; + Boolean otherBucket = null; while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.VALUE_BOOLEAN) { - if (context.getParseFieldMatcher().match(currentFieldName, OTHER_BUCKET_FIELD)) { + if (OTHER_BUCKET_FIELD.match(currentFieldName)) { otherBucket = parser.booleanValue(); } else { throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + aggregationName + "]: [" + currentFieldName + "]."); } } else if (token == XContentParser.Token.VALUE_STRING) { - if (context.getParseFieldMatcher().match(currentFieldName, OTHER_BUCKET_KEY_FIELD)) { + if (OTHER_BUCKET_KEY_FIELD.match(currentFieldName)) { otherBucketKey = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + aggregationName + "]: [" + currentFieldName + "]."); } } else if (token == XContentParser.Token.START_OBJECT) { - if (context.getParseFieldMatcher().match(currentFieldName, FILTERS_FIELD)) { + if (FILTERS_FIELD.match(currentFieldName)) { keyedFilters = new ArrayList<>(); String key = null; while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { @@ -244,7 +247,7 @@ public static FiltersAggregationBuilder parse(String aggregationName, QueryParse "Unknown key for a " + token + " in [" + aggregationName + "]: [" + currentFieldName + "]."); } } else if (token == XContentParser.Token.START_ARRAY) { - if (context.getParseFieldMatcher().match(currentFieldName, FILTERS_FIELD)) { + if (FILTERS_FIELD.match(currentFieldName)) { nonKeyedFilters = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { QueryBuilder filter = context.parseInnerQueryBuilder().orElse(matchAllQuery()); @@ -260,8 +263,9 @@ public static FiltersAggregationBuilder parse(String aggregationName, QueryParse } } - if (otherBucket && otherBucketKey == null) { - otherBucketKey = "_other_"; + if (otherBucket == null && otherBucketKey != null) { + // automatically enable the other bucket if a key is set, as per the doc + otherBucket = true; } FiltersAggregationBuilder factory; @@ -296,7 +300,7 @@ protected boolean doEquals(Object obj) { } @Override - public 
String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregator.java index 08bbdaf3e3b53..898acd8eb77aa 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregator.java @@ -29,7 +29,6 @@ import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.index.query.MatchAllQueryBuilder; import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; @@ -39,7 +38,7 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.bucket.BucketsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.ArrayList; @@ -126,10 +125,10 @@ public boolean equals(Object obj) { private final int totalNumKeys; public FiltersAggregator(String name, AggregatorFactories factories, String[] keys, Weight[] filters, boolean keyed, String otherBucketKey, - AggregationContext aggregationContext, + SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { - super(name, factories, aggregationContext, parent, pipelineAggregators, metaData); + super(name, factories, context, parent, pipelineAggregators, metaData); this.keyed = keyed; this.keys = keys; this.filters = filters; @@ -148,7 +147,7 @@ public LeafBucketCollector getLeafCollector(LeafReaderContext ctx, // no need to provide deleted docs to the filter final Bits[] bits = new Bits[filters.length]; for (int i = 0; i < filters.length; ++i) { - bits[i] = Lucene.asSequentialAccessBits(ctx.reader().maxDoc(), filters[i].scorer(ctx)); + bits[i] = Lucene.asSequentialAccessBits(ctx.reader().maxDoc(), filters[i].scorerSupplier(ctx)); } return new LeafBucketCollectorBase(sub, null) { @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregatorFactory.java index 9b7def2395e97..ded828d7623d3 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregatorFactory.java @@ -25,10 +25,9 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.bucket.filters.FiltersAggregator.KeyedFilter; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -37,25 +36,25 @@ public 
class FiltersAggregatorFactory extends AggregatorFactory { private final String[] keys; - private final Weight[] weights; + final Weight[] weights; private final boolean keyed; private final boolean otherBucket; private final String otherBucketKey; - public FiltersAggregatorFactory(String name, Type type, List filters, boolean keyed, boolean otherBucket, - String otherBucketKey, AggregationContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactories, + public FiltersAggregatorFactory(String name, List filters, boolean keyed, boolean otherBucket, + String otherBucketKey, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactories, Map metaData) throws IOException { - super(name, type, context, parent, subFactories, metaData); + super(name, context, parent, subFactories, metaData); this.keyed = keyed; this.otherBucket = otherBucket; this.otherBucketKey = otherBucketKey; - IndexSearcher contextSearcher = context.searchContext().searcher(); + IndexSearcher contextSearcher = context.searcher(); weights = new Weight[filters.size()]; keys = new String[filters.size()]; for (int i = 0; i < filters.size(); ++i) { KeyedFilter keyedFilter = filters.get(i); this.keys[i] = keyedFilter.key(); - Query filter = keyedFilter.filter().toFilter(context.searchContext().getQueryShardContext()); + Query filter = keyedFilter.filter().toFilter(context.getQueryShardContext()); this.weights[i] = contextSearcher.createNormalizedWeight(filter, false); } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/InternalFilters.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/InternalFilters.java index bd33f1608bc8b..5153122272564 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/InternalFilters.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/InternalFilters.java @@ -108,7 +108,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws } else { builder.startObject(); } - builder.field(CommonFields.DOC_COUNT, docCount); + builder.field(CommonFields.DOC_COUNT.getPreferredName(), docCount); aggregations.toXContentInternal(builder, params); builder.endObject(); return builder; @@ -210,9 +210,9 @@ public InternalAggregation doReduce(List aggregations, Redu @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { if (keyed) { - builder.startObject(CommonFields.BUCKETS); + builder.startObject(CommonFields.BUCKETS.getPreferredName()); } else { - builder.startArray(CommonFields.BUCKETS); + builder.startArray(CommonFields.BUCKETS.getPreferredName()); } for (InternalBucket bucket : buckets) { bucket.toXContent(builder, params); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/ParsedFilters.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/ParsedFilters.java new file mode 100644 index 0000000000000..e27706d162bff --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/ParsedFilters.java @@ -0,0 +1,142 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.bucket.filters; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParserUtils; +import org.elasticsearch.search.aggregations.Aggregation; +import org.elasticsearch.search.aggregations.Aggregations; +import org.elasticsearch.search.aggregations.ParsedMultiBucketAggregation; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + +public class ParsedFilters extends ParsedMultiBucketAggregation implements Filters { + + private Map bucketMap; + + @Override + public String getType() { + return FiltersAggregationBuilder.NAME; + } + + @Override + public List getBuckets() { + return buckets; + } + + @Override + public ParsedBucket getBucketByKey(String key) { + if (bucketMap == null) { + bucketMap = new HashMap<>(buckets.size()); + for (ParsedBucket bucket : buckets) { + bucketMap.put(bucket.getKey(), bucket); + } + } + return bucketMap.get(key); + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedFilters.class.getSimpleName(), true, ParsedFilters::new); + static { + declareMultiBucketAggregationFields(PARSER, + parser -> ParsedBucket.fromXContent(parser, false), + parser -> ParsedBucket.fromXContent(parser, true)); + } + + public static ParsedFilters fromXContent(XContentParser parser, String name) throws IOException { + ParsedFilters aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + // in case this is not a keyed aggregation, we need to add numeric keys to the buckets + if (aggregation.keyed == false) { + int i = 0; + for (ParsedBucket bucket : aggregation.buckets) { + assert bucket.key == null; + bucket.key = String.valueOf(i); + i++; + } + } + return aggregation; + } + + public static class ParsedBucket extends ParsedMultiBucketAggregation.ParsedBucket implements Filters.Bucket { + + private String key; + + @Override + public String getKey() { + return key; + } + + @Override + public String getKeyAsString() { + return key; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + if (isKeyed()) { + builder.startObject(key); + } else { + builder.startObject(); + } + builder.field(CommonFields.DOC_COUNT.getPreferredName(), getDocCount()); + getAggregations().toXContentInternal(builder, params); + builder.endObject(); + return builder; + } + + + static ParsedBucket fromXContent(XContentParser parser, boolean keyed) throws IOException { + final ParsedBucket bucket = new ParsedBucket(); + bucket.setKeyed(keyed); + XContentParser.Token token = parser.currentToken(); + String currentFieldName = parser.currentName(); + if (keyed) { + ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, parser::getTokenLocation); + bucket.key = currentFieldName; + 
ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.nextToken(), parser::getTokenLocation); + } + + List aggregations = new ArrayList<>(); + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (CommonFields.DOC_COUNT.getPreferredName().equals(currentFieldName)) { + bucket.setDocCount(parser.longValue()); + } + } else if (token == XContentParser.Token.START_OBJECT) { + XContentParserUtils.parseTypedKeysObject(parser, Aggregation.TYPED_KEYS_DELIMITER, Aggregation.class, + aggregations::add); + } + } + bucket.setAggregations(new Aggregations(aggregations)); + return bucket; + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoGridAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoGridAggregationBuilder.java index 24d5999e698d5..602c3a81c66e9 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoGridAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoGridAggregationBuilder.java @@ -26,43 +26,59 @@ import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.fielddata.MultiGeoPointValues; import org.elasticsearch.index.fielddata.SortedBinaryDocValues; import org.elasticsearch.index.fielddata.SortedNumericDoubleValues; import org.elasticsearch.index.fielddata.SortingNumericDocValues; +import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.bucket.BucketUtils; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Objects; public class GeoGridAggregationBuilder extends ValuesSourceAggregationBuilder { public static final String NAME = "geohash_grid"; - private static final Type TYPE = new Type(NAME); + public static final int DEFAULT_PRECISION = 5; + public static final int DEFAULT_MAX_NUM_CELLS = 10000; + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(GeoGridAggregationBuilder.NAME); + ValuesSourceParserHelper.declareGeoFields(PARSER, false, false); + PARSER.declareInt(GeoGridAggregationBuilder::precision, GeoHashGridParams.FIELD_PRECISION); + PARSER.declareInt(GeoGridAggregationBuilder::size, GeoHashGridParams.FIELD_SIZE); + PARSER.declareInt(GeoGridAggregationBuilder::shardSize, 
GeoHashGridParams.FIELD_SHARD_SIZE); + } + + public static GeoGridAggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new GeoGridAggregationBuilder(aggregationName), context); + } - private int precision = GeoHashGridParser.DEFAULT_PRECISION; - private int requiredSize = GeoHashGridParser.DEFAULT_MAX_NUM_CELLS; + private int precision = DEFAULT_PRECISION; + private int requiredSize = DEFAULT_MAX_NUM_CELLS; private int shardSize = -1; public GeoGridAggregationBuilder(String name) { - super(name, TYPE, ValuesSourceType.GEOPOINT, ValueType.GEOPOINT); + super(name, ValuesSourceType.GEOPOINT, ValueType.GEOPOINT); } /** * Read from a stream. */ public GeoGridAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.GEOPOINT, ValueType.GEOPOINT); + super(in, ValuesSourceType.GEOPOINT, ValueType.GEOPOINT); precision = in.readVInt(); requiredSize = in.readVInt(); shardSize = in.readVInt(); @@ -111,7 +127,7 @@ public int shardSize() { } @Override - protected ValuesSourceAggregatorFactory innerBuild(AggregationContext context, + protected ValuesSourceAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { int shardSize = this.shardSize; @@ -121,7 +137,7 @@ public int shardSize() { if (shardSize < 0) { // Use default heuristic to avoid any wrong-ranking caused by // distributed counting - shardSize = BucketUtils.suggestShardSideQueueSize(requiredSize, context.searchContext().numberOfShards()); + shardSize = BucketUtils.suggestShardSideQueueSize(requiredSize, context.numberOfShards()); } if (requiredSize <= 0 || shardSize <= 0) { @@ -132,7 +148,7 @@ public int shardSize() { if (shardSize < requiredSize) { shardSize = requiredSize; } - return new GeoHashGridAggregatorFactory(name, type, config, precision, requiredSize, shardSize, context, parent, + return new GeoHashGridAggregatorFactory(name, config, precision, requiredSize, shardSize, context, parent, subFactoriesBuilder, metaData); } @@ -167,7 +183,7 @@ protected int innerHashCode() { } @Override - public String getWriteableName() { + public String getType() { return NAME; } @@ -196,7 +212,7 @@ static class CellIdSource extends ValuesSource.Numeric { private final ValuesSource.GeoPoint valuesSource; private final int precision; - public CellIdSource(ValuesSource.GeoPoint valuesSource, int precision) { + CellIdSource(ValuesSource.GeoPoint valuesSource, int precision) { this.valuesSource = valuesSource; //different GeoPoints could map to the same or different geohash cells. 
this.precision = precision; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGrid.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGrid.java index 6456fba864021..9cce698957d70 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGrid.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGrid.java @@ -38,6 +38,5 @@ interface Bucket extends MultiBucketsAggregation.Bucket { * @return The buckets of this aggregation (each bucket representing a geohash grid cell) */ @Override - List getBuckets(); - + List getBuckets(); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridAggregator.java index d3653e87422de..a24efe2df9bfc 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridAggregator.java @@ -29,7 +29,7 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.bucket.BucketsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Arrays; @@ -50,7 +50,7 @@ public class GeoHashGridAggregator extends BucketsAggregator { private final LongHash bucketOrds; public GeoHashGridAggregator(String name, AggregatorFactories factories, GeoGridAggregationBuilder.CellIdSource valuesSource, - int requiredSize, int shardSize, AggregationContext aggregationContext, Aggregator parent, List pipelineAggregators, + int requiredSize, int shardSize, SearchContext aggregationContext, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, factories, aggregationContext, parent, pipelineAggregators, metaData); this.valuesSource = valuesSource; @@ -98,7 +98,7 @@ static class OrdinalBucket extends InternalGeoHashGrid.Bucket { long bucketOrd; - public OrdinalBucket() { + OrdinalBucket() { super(0, 0, (InternalAggregations) null); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridAggregatorFactory.java index 1b2c4c263724f..4f87b82f7882f 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridAggregatorFactory.java @@ -23,15 +23,14 @@ import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.InternalAggregation; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.NonCollectingAggregator; import org.elasticsearch.search.aggregations.bucket.geogrid.GeoGridAggregationBuilder.CellIdSource; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import 
org.elasticsearch.search.aggregations.support.ValuesSource.GeoPoint; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Collections; @@ -44,10 +43,10 @@ public class GeoHashGridAggregatorFactory extends ValuesSourceAggregatorFactory< private final int requiredSize; private final int shardSize; - public GeoHashGridAggregatorFactory(String name, Type type, ValuesSourceConfig config, int precision, int requiredSize, - int shardSize, AggregationContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, + public GeoHashGridAggregatorFactory(String name, ValuesSourceConfig config, int precision, int requiredSize, + int shardSize, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); this.precision = precision; this.requiredSize = requiredSize; this.shardSize = shardSize; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridParser.java deleted file mode 100644 index e669ee8b9d9e0..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridParser.java +++ /dev/null @@ -1,85 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.search.aggregations.bucket.geogrid; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentParser.Token; -import org.elasticsearch.index.query.GeoBoundingBoxQueryBuilder; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.GeoPointValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - -/** - * Aggregates Geo information into cells determined by geohashes of a given precision. 
- * WARNING - for high-precision geohashes it may prove necessary to use a {@link GeoBoundingBoxQueryBuilder} - * aggregation to focus in on a smaller area to avoid generating too many buckets and using too much RAM - */ -public class GeoHashGridParser extends GeoPointValuesSourceParser { - - public static final int DEFAULT_PRECISION = 5; - public static final int DEFAULT_MAX_NUM_CELLS = 10000; - - public GeoHashGridParser() { - super(false, false); - } - - @Override - protected GeoGridAggregationBuilder createFactory( - String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - GeoGridAggregationBuilder factory = new GeoGridAggregationBuilder(aggregationName); - Integer precision = (Integer) otherOptions.get(GeoHashGridParams.FIELD_PRECISION); - if (precision != null) { - factory.precision(precision); - } - Integer size = (Integer) otherOptions.get(GeoHashGridParams.FIELD_SIZE); - if (size != null) { - factory.size(size); - } - Integer shardSize = (Integer) otherOptions.get(GeoHashGridParams.FIELD_SHARD_SIZE); - if (shardSize != null) { - factory.shardSize(shardSize); - } - return factory; - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, Token token, - XContentParseContext context, Map otherOptions) throws IOException { - XContentParser parser = context.getParser(); - if (token == XContentParser.Token.VALUE_NUMBER || token == XContentParser.Token.VALUE_STRING) { - if (context.matchField(currentFieldName, GeoHashGridParams.FIELD_PRECISION)) { - otherOptions.put(GeoHashGridParams.FIELD_PRECISION, parser.intValue()); - return true; - } else if (context.matchField(currentFieldName, GeoHashGridParams.FIELD_SIZE)) { - otherOptions.put(GeoHashGridParams.FIELD_SIZE, parser.intValue()); - return true; - } else if (context.matchField(currentFieldName, GeoHashGridParams.FIELD_SHARD_SIZE)) { - otherOptions.put(GeoHashGridParams.FIELD_SHARD_SIZE, parser.intValue()); - return true; - } - } - return false; - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/InternalGeoHashGrid.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/InternalGeoHashGrid.java index eaf00755f76dc..af98e4a1c249c 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/InternalGeoHashGrid.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/InternalGeoHashGrid.java @@ -52,7 +52,7 @@ static class Bucket extends InternalMultiBucketAggregation.InternalBucket implem protected long docCount; protected InternalAggregations aggregations; - public Bucket(long geohashAsLong, long docCount, InternalAggregations aggregations) { + Bucket(long geohashAsLong, long docCount, InternalAggregations aggregations) { this.docCount = docCount; this.aggregations = aggregations; this.geohashAsLong = geohashAsLong; @@ -120,8 +120,8 @@ public Bucket reduce(List buckets, ReduceContext context) { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(); - builder.field(CommonFields.KEY, getKeyAsString()); - builder.field(CommonFields.DOC_COUNT, docCount); + builder.field(CommonFields.KEY.getPreferredName(), getKeyAsString()); + builder.field(CommonFields.DOC_COUNT.getPreferredName(), docCount); aggregations.toXContentInternal(builder, params); builder.endObject(); return builder; @@ -169,7 +169,7 @@ public Bucket createBucket(InternalAggregations aggregations, Bucket 
prototype) } @Override - public List getBuckets() { + public List getBuckets() { return unmodifiableList(buckets); } @@ -192,7 +192,7 @@ public InternalGeoHashGrid doReduce(List aggregations, Redu } } - final int size = (int) Math.min(requiredSize, buckets.size()); + final int size = Math.toIntExact(reduceContext.isFinalReduce() == false ? buckets.size() : Math.min(requiredSize, buckets.size())); BucketPriorityQueue ordered = new BucketPriorityQueue(size); for (LongObjectPagedHashMap.Cursor> cursor : buckets) { List sameCellBuckets = cursor.value; @@ -208,7 +208,7 @@ public InternalGeoHashGrid doReduce(List aggregations, Redu @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - builder.startArray(CommonFields.BUCKETS); + builder.startArray(CommonFields.BUCKETS.getPreferredName()); for (Bucket bucket : buckets) { bucket.toXContent(builder, params); } @@ -218,7 +218,7 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th static class BucketPriorityQueue extends PriorityQueue { - public BucketPriorityQueue(int size) { + BucketPriorityQueue(int size) { super(size); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/ParsedGeoHashGrid.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/ParsedGeoHashGrid.java new file mode 100644 index 0000000000000..4551523e0fc8b --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/ParsedGeoHashGrid.java @@ -0,0 +1,78 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.bucket.geogrid; + +import org.elasticsearch.common.geo.GeoPoint; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.ParsedMultiBucketAggregation; + +import java.io.IOException; +import java.util.List; + +public class ParsedGeoHashGrid extends ParsedMultiBucketAggregation implements GeoHashGrid { + + @Override + public String getType() { + return GeoGridAggregationBuilder.NAME; + } + + @Override + public List getBuckets() { + return buckets; + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedGeoHashGrid.class.getSimpleName(), true, ParsedGeoHashGrid::new); + static { + declareMultiBucketAggregationFields(PARSER, ParsedBucket::fromXContent, ParsedBucket::fromXContent); + } + + public static ParsedGeoHashGrid fromXContent(XContentParser parser, String name) throws IOException { + ParsedGeoHashGrid aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } + + public static class ParsedBucket extends ParsedMultiBucketAggregation.ParsedBucket implements GeoHashGrid.Bucket { + + private String geohashAsString; + + @Override + public GeoPoint getKey() { + return GeoPoint.fromGeohash(geohashAsString); + } + + @Override + public String getKeyAsString() { + return geohashAsString; + } + + @Override + protected XContentBuilder keyToXContent(XContentBuilder builder) throws IOException { + return builder.field(CommonFields.KEY.getPreferredName(), geohashAsString); + } + + static ParsedBucket fromXContent(XContentParser parser) throws IOException { + return parseXContent(parser, false, ParsedBucket::new, (p, bucket) -> bucket.geohashAsString = p.textOrNull()); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregationBuilder.java index 94211f39dd099..2363ed498a9dd 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregationBuilder.java @@ -26,24 +26,22 @@ import org.elasticsearch.search.aggregations.AbstractAggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; public class GlobalAggregationBuilder extends AbstractAggregationBuilder { public static final String NAME = "global"; - private static final Type TYPE = new Type(NAME); public GlobalAggregationBuilder(String name) { - super(name, TYPE); + super(name); } /** * Read from a stream. 
*/ public GlobalAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE); + super(in); } @Override @@ -52,9 +50,9 @@ protected void doWriteTo(StreamOutput out) throws IOException { } @Override - protected AggregatorFactory doBuild(AggregationContext context, AggregatorFactory parent, Builder subFactoriesBuilder) + protected AggregatorFactory doBuild(SearchContext context, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new GlobalAggregatorFactory(name, type, context, parent, subFactoriesBuilder, metaData); + return new GlobalAggregatorFactory(name, context, parent, subFactoriesBuilder, metaData); } @Override @@ -80,7 +78,7 @@ protected int doHashCode() { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregator.java index 6e4980ede2715..f89144bfae32a 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregator.java @@ -25,7 +25,7 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -36,7 +36,7 @@ */ public class GlobalAggregator extends SingleBucketAggregator { - public GlobalAggregator(String name, AggregatorFactories subFactories, AggregationContext aggregationContext, List pipelineAggregators, + public GlobalAggregator(String name, AggregatorFactories subFactories, SearchContext aggregationContext, List pipelineAggregators, Map metaData) throws IOException { super(name, subFactories, aggregationContext, null, pipelineAggregators, metaData); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregatorFactory.java index f92241f534598..b11ffde6d7bc4 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregatorFactory.java @@ -23,9 +23,8 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -33,9 +32,9 @@ public class GlobalAggregatorFactory extends AggregatorFactory { - public GlobalAggregatorFactory(String name, Type type, AggregationContext context, AggregatorFactory parent, + public GlobalAggregatorFactory(String name, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactories, Map metaData) throws IOException { - super(name, type, context, parent, 
subFactories, metaData); + super(name, context, parent, subFactories, metaData); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/InternalGlobal.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/InternalGlobal.java index f278e5e72b1da..6ba3b79e96863 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/InternalGlobal.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/InternalGlobal.java @@ -32,7 +32,8 @@ * regardless the query. */ public class InternalGlobal extends InternalSingleBucketAggregation implements Global { - InternalGlobal(String name, long docCount, InternalAggregations aggregations, List pipelineAggregators, Map metaData) { + InternalGlobal(String name, long docCount, InternalAggregations aggregations, List pipelineAggregators, + Map metaData) { super(name, docCount, aggregations, pipelineAggregators, metaData); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/ParsedGlobal.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/ParsedGlobal.java new file mode 100644 index 0000000000000..062752805b1c8 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/ParsedGlobal.java @@ -0,0 +1,36 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.search.aggregations.bucket.global; + +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.bucket.ParsedSingleBucketAggregation; + +import java.io.IOException; + +public class ParsedGlobal extends ParsedSingleBucketAggregation implements Global { + + @Override + public String getType() { + return GlobalAggregationBuilder.NAME; + } + + public static ParsedGlobal fromXContent(XContentParser parser, final String name) throws IOException { + return parseXContent(parser, new ParsedGlobal(), name); + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregationBuilder.java index d3b8857ccab91..e805abd86c981 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregationBuilder.java @@ -19,30 +19,107 @@ package org.elasticsearch.search.aggregations.bucket.histogram; +import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.rounding.DateTimeUnit; +import org.elasticsearch.common.rounding.Rounding; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParser.Token; +import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; +import java.util.HashMap; +import java.util.Map; import java.util.Objects; +import static java.util.Collections.unmodifiableMap; + /** * A builder for histograms on date fields. 
*/ public class DateHistogramAggregationBuilder extends ValuesSourceAggregationBuilder { - public static final String NAME = InternalDateHistogram.TYPE.name(); + public static final String NAME = "date_histogram"; + + public static final Map DATE_FIELD_UNITS; + + static { + Map dateFieldUnits = new HashMap<>(); + dateFieldUnits.put("year", DateTimeUnit.YEAR_OF_CENTURY); + dateFieldUnits.put("1y", DateTimeUnit.YEAR_OF_CENTURY); + dateFieldUnits.put("quarter", DateTimeUnit.QUARTER); + dateFieldUnits.put("1q", DateTimeUnit.QUARTER); + dateFieldUnits.put("month", DateTimeUnit.MONTH_OF_YEAR); + dateFieldUnits.put("1M", DateTimeUnit.MONTH_OF_YEAR); + dateFieldUnits.put("week", DateTimeUnit.WEEK_OF_WEEKYEAR); + dateFieldUnits.put("1w", DateTimeUnit.WEEK_OF_WEEKYEAR); + dateFieldUnits.put("day", DateTimeUnit.DAY_OF_MONTH); + dateFieldUnits.put("1d", DateTimeUnit.DAY_OF_MONTH); + dateFieldUnits.put("hour", DateTimeUnit.HOUR_OF_DAY); + dateFieldUnits.put("1h", DateTimeUnit.HOUR_OF_DAY); + dateFieldUnits.put("minute", DateTimeUnit.MINUTES_OF_HOUR); + dateFieldUnits.put("1m", DateTimeUnit.MINUTES_OF_HOUR); + dateFieldUnits.put("second", DateTimeUnit.SECOND_OF_MINUTE); + dateFieldUnits.put("1s", DateTimeUnit.SECOND_OF_MINUTE); + DATE_FIELD_UNITS = unmodifiableMap(dateFieldUnits); + } + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(DateHistogramAggregationBuilder.NAME); + ValuesSourceParserHelper.declareNumericFields(PARSER, true, true, true); + + PARSER.declareField((histogram, interval) -> { + if (interval instanceof Long) { + histogram.interval((long) interval); + } else { + histogram.dateHistogramInterval((DateHistogramInterval) interval); + } + }, p -> { + if (p.currentToken() == XContentParser.Token.VALUE_NUMBER) { + return p.longValue(); + } else { + return new DateHistogramInterval(p.text()); + } + }, Histogram.INTERVAL_FIELD, ObjectParser.ValueType.LONG); + + PARSER.declareField(DateHistogramAggregationBuilder::offset, p -> { + if (p.currentToken() == XContentParser.Token.VALUE_NUMBER) { + return p.longValue(); + } else { + return DateHistogramAggregationBuilder.parseStringOffset(p.text()); + } + }, Histogram.OFFSET_FIELD, ObjectParser.ValueType.LONG); + + PARSER.declareBoolean(DateHistogramAggregationBuilder::keyed, Histogram.KEYED_FIELD); + + PARSER.declareLong(DateHistogramAggregationBuilder::minDocCount, Histogram.MIN_DOC_COUNT_FIELD); + + PARSER.declareField(DateHistogramAggregationBuilder::extendedBounds, parser -> ExtendedBounds.PARSER.apply(parser, null), + ExtendedBounds.EXTENDED_BOUNDS_FIELD, ObjectParser.ValueType.OBJECT); + + PARSER.declareField(DateHistogramAggregationBuilder::order, DateHistogramAggregationBuilder::parseOrder, + Histogram.ORDER_FIELD, ObjectParser.ValueType.OBJECT); + } + + public static DateHistogramAggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new DateHistogramAggregationBuilder(aggregationName), context); + } private long interval; private DateHistogramInterval dateHistogramInterval; @@ -54,12 +131,12 @@ public class DateHistogramAggregationBuilder /** Create a new builder with the given name. */ public DateHistogramAggregationBuilder(String name) { - super(name, InternalDateHistogram.TYPE, ValuesSourceType.NUMERIC, ValueType.DATE); + super(name, ValuesSourceType.NUMERIC, ValueType.DATE); } /** Read from a stream, for internal use only. 
*/ public DateHistogramAggregationBuilder(StreamInput in) throws IOException { - super(in, InternalDateHistogram.TYPE, ValuesSourceType.NUMERIC, ValueType.DATE); + super(in, ValuesSourceType.NUMERIC, ValueType.DATE); if (in.readBoolean()) { order = InternalOrder.Streams.readOrder(in); } @@ -238,15 +315,43 @@ protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) } @Override - public String getWriteableName() { + public String getType() { return NAME; } @Override - protected ValuesSourceAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, + protected ValuesSourceAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new DateHistogramAggregatorFactory(name, type, config, interval, dateHistogramInterval, offset, order, keyed, minDocCount, - extendedBounds, context, parent, subFactoriesBuilder, metaData); + Rounding rounding = createRounding(); + ExtendedBounds roundedBounds = null; + if (this.extendedBounds != null) { + // parse any string bounds to longs and round + roundedBounds = this.extendedBounds.parseAndValidate(name, context, config.format()).round(rounding); + } + return new DateHistogramAggregatorFactory(name, config, interval, dateHistogramInterval, offset, order, keyed, minDocCount, + rounding, roundedBounds, context, parent, subFactoriesBuilder, metaData); + } + + private Rounding createRounding() { + Rounding.Builder tzRoundingBuilder; + if (dateHistogramInterval != null) { + DateTimeUnit dateTimeUnit = DATE_FIELD_UNITS.get(dateHistogramInterval.toString()); + if (dateTimeUnit != null) { + tzRoundingBuilder = Rounding.builder(dateTimeUnit); + } else { + // the interval is a time value? + tzRoundingBuilder = Rounding.builder( + TimeValue.parseTimeValue(dateHistogramInterval.toString(), null, getClass().getSimpleName() + ".interval")); + } + } else { + // the interval is an integer time value in millis? + tzRoundingBuilder = Rounding.builder(TimeValue.timeValueMillis(interval)); + } + if (timeZone() != null) { + tzRoundingBuilder.timeZone(timeZone()); + } + Rounding rounding = tzRoundingBuilder.build(); + return rounding; } @Override @@ -265,4 +370,35 @@ protected boolean innerEquals(Object obj) { && Objects.equals(offset, other.offset) && Objects.equals(extendedBounds, other.extendedBounds); } + + // similar to the parsing of histogram orders, but also accepts _time as an alias for _key + private static InternalOrder parseOrder(XContentParser parser, QueryParseContext context) throws IOException { + InternalOrder order = null; + Token token; + String currentFieldName = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token == XContentParser.Token.VALUE_STRING) { + String dir = parser.text(); + boolean asc = "asc".equals(dir); + if (!asc && !"desc".equals(dir)) { + throw new ParsingException(parser.getTokenLocation(), "Unknown order direction: [" + dir + + "]. Should be either [asc] or [desc]"); + } + order = resolveOrder(currentFieldName, asc); + } + } + return order; + } + + static InternalOrder resolveOrder(String key, boolean asc) { + if ("_key".equals(key) || "_time".equals(key)) { + return (InternalOrder) (asc ? InternalOrder.KEY_ASC : InternalOrder.KEY_DESC); + } + if ("_count".equals(key)) { + return (InternalOrder) (asc ?
InternalOrder.COUNT_ASC : InternalOrder.COUNT_DESC); + } + return new InternalOrder.Aggregation(key, asc); + } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregator.java index 0ea2fba719b09..a52f2b2cfb5aa 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregator.java @@ -33,8 +33,8 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.bucket.BucketsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.ArrayList; @@ -62,10 +62,10 @@ class DateHistogramAggregator extends BucketsAggregator { private final LongHash bucketOrds; private long offset; - public DateHistogramAggregator(String name, AggregatorFactories factories, Rounding rounding, long offset, InternalOrder order, + DateHistogramAggregator(String name, AggregatorFactories factories, Rounding rounding, long offset, InternalOrder order, boolean keyed, long minDocCount, @Nullable ExtendedBounds extendedBounds, @Nullable ValuesSource.Numeric valuesSource, - DocValueFormat formatter, AggregationContext aggregationContext, + DocValueFormat formatter, SearchContext aggregationContext, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, factories, aggregationContext, parent, pipelineAggregators, metaData); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregatorFactory.java index 79f81e28374c4..44bb3e02afece 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregatorFactory.java @@ -19,54 +19,24 @@ package org.elasticsearch.search.aggregations.bucket.histogram; -import org.elasticsearch.common.rounding.DateTimeUnit; import org.elasticsearch.common.rounding.Rounding; -import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; +import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; +import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; +import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; -import java.util.HashMap; import java.util.List; import java.util.Map; -import static java.util.Collections.unmodifiableMap; - -import org.elasticsearch.search.aggregations.support.AggregationContext; 
-import org.elasticsearch.search.aggregations.support.ValuesSource; -import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; -import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; - public final class DateHistogramAggregatorFactory extends ValuesSourceAggregatorFactory { - public static final Map DATE_FIELD_UNITS; - - static { - Map dateFieldUnits = new HashMap<>(); - dateFieldUnits.put("year", DateTimeUnit.YEAR_OF_CENTURY); - dateFieldUnits.put("1y", DateTimeUnit.YEAR_OF_CENTURY); - dateFieldUnits.put("quarter", DateTimeUnit.QUARTER); - dateFieldUnits.put("1q", DateTimeUnit.QUARTER); - dateFieldUnits.put("month", DateTimeUnit.MONTH_OF_YEAR); - dateFieldUnits.put("1M", DateTimeUnit.MONTH_OF_YEAR); - dateFieldUnits.put("week", DateTimeUnit.WEEK_OF_WEEKYEAR); - dateFieldUnits.put("1w", DateTimeUnit.WEEK_OF_WEEKYEAR); - dateFieldUnits.put("day", DateTimeUnit.DAY_OF_MONTH); - dateFieldUnits.put("1d", DateTimeUnit.DAY_OF_MONTH); - dateFieldUnits.put("hour", DateTimeUnit.HOUR_OF_DAY); - dateFieldUnits.put("1h", DateTimeUnit.HOUR_OF_DAY); - dateFieldUnits.put("minute", DateTimeUnit.MINUTES_OF_HOUR); - dateFieldUnits.put("1m", DateTimeUnit.MINUTES_OF_HOUR); - dateFieldUnits.put("second", DateTimeUnit.SECOND_OF_MINUTE); - dateFieldUnits.put("1s", DateTimeUnit.SECOND_OF_MINUTE); - DATE_FIELD_UNITS = unmodifiableMap(dateFieldUnits); - } - private final DateHistogramInterval dateHistogramInterval; private final long interval; private final long offset; @@ -74,12 +44,13 @@ public final class DateHistogramAggregatorFactory private final boolean keyed; private final long minDocCount; private final ExtendedBounds extendedBounds; + private Rounding rounding; - public DateHistogramAggregatorFactory(String name, Type type, ValuesSourceConfig config, long interval, + public DateHistogramAggregatorFactory(String name, ValuesSourceConfig config, long interval, DateHistogramInterval dateHistogramInterval, long offset, InternalOrder order, boolean keyed, long minDocCount, - ExtendedBounds extendedBounds, AggregationContext context, AggregatorFactory parent, + Rounding rounding, ExtendedBounds extendedBounds, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); this.interval = interval; this.dateHistogramInterval = dateHistogramInterval; this.offset = offset; @@ -87,34 +58,13 @@ public DateHistogramAggregatorFactory(String name, Type type, ValuesSourceConfig this.keyed = keyed; this.minDocCount = minDocCount; this.extendedBounds = extendedBounds; + this.rounding = rounding; } public long minDocCount() { return minDocCount; } - private Rounding createRounding() { - Rounding.Builder tzRoundingBuilder; - if (dateHistogramInterval != null) { - DateTimeUnit dateTimeUnit = DATE_FIELD_UNITS.get(dateHistogramInterval.toString()); - if (dateTimeUnit != null) { - tzRoundingBuilder = Rounding.builder(dateTimeUnit); - } else { - // the interval is a time value? - tzRoundingBuilder = Rounding.builder( - TimeValue.parseTimeValue(dateHistogramInterval.toString(), null, getClass().getSimpleName() + ".interval")); - } - } else { - // the interval is an integer time value in millis? 
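(Editorial aside, not part of the patch.) The createRounding() body that these hunks move from DateHistogramAggregatorFactory into DateHistogramAggregationBuilder resolves the interval in two steps: a calendar unit looked up in DATE_FIELD_UNITS, otherwise a fixed-length TimeValue. A minimal sketch of that resolution, assuming DATE_FIELD_UNITS is a Map<String, DateTimeUnit> and using only the Rounding/TimeValue calls shown in this diff; the helper class and method name are illustrative:

```java
import org.elasticsearch.common.rounding.DateTimeUnit;
import org.elasticsearch.common.rounding.Rounding;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;
import org.joda.time.DateTimeZone;

// Hypothetical helper mirroring createRounding() above; not part of the patch.
class IntervalRoundingSketch {
    static Rounding roundingFor(String interval, DateTimeZone timeZone) {
        DateTimeUnit unit = DateHistogramAggregationBuilder.DATE_FIELD_UNITS.get(interval);
        Rounding.Builder builder = unit != null
                ? Rounding.builder(unit)   // calendar interval such as "month" or "1M"
                : Rounding.builder(TimeValue.parseTimeValue(interval, null, "interval")); // fixed interval such as "90s"
        if (timeZone != null) {
            builder.timeZone(timeZone);
        }
        return builder.build();
    }
}
```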
- tzRoundingBuilder = Rounding.builder(TimeValue.timeValueMillis(interval)); - } - if (timeZone() != null) { - tzRoundingBuilder.timeZone(timeZone()); - } - Rounding rounding = tzRoundingBuilder.build(); - return rounding; - } - @Override protected Aggregator doCreateInternal(ValuesSource.Numeric valuesSource, Aggregator parent, boolean collectsFromSingleBucket, List pipelineAggregators, Map metaData) throws IOException { @@ -126,18 +76,7 @@ protected Aggregator doCreateInternal(ValuesSource.Numeric valuesSource, Aggrega private Aggregator createAggregator(ValuesSource.Numeric valuesSource, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { - Rounding rounding = createRounding(); - // we need to round the bounds given by the user and we have to do it - // for every aggregator we create - // as the rounding is not necessarily an idempotent operation. - // todo we need to think of a better structure to the factory/agtor - // code so we won't need to do that - ExtendedBounds roundedBounds = null; - if (extendedBounds != null) { - // parse any string bounds to longs and round them - roundedBounds = extendedBounds.parseAndValidate(name, context.searchContext(), config.format()).round(rounding); - } - return new DateHistogramAggregator(name, factories, rounding, offset, order, keyed, minDocCount, roundedBounds, valuesSource, + return new DateHistogramAggregator(name, factories, rounding, offset, order, keyed, minDocCount, extendedBounds, valuesSource, config.format(), context, parent, pipelineAggregators, metaData); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramParser.java deleted file mode 100644 index 952a0e2568f31..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramParser.java +++ /dev/null @@ -1,156 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.search.aggregations.bucket.histogram; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParsingException; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentParser.Token; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.NumericValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - -/** - * A parser for date histograms. 
This translates json into a - * {@link DateHistogramAggregationBuilder} instance. - */ -public class DateHistogramParser extends NumericValuesSourceParser { - - public DateHistogramParser() { - super(true, true, true); - } - - @Override - protected DateHistogramAggregationBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - DateHistogramAggregationBuilder factory = new DateHistogramAggregationBuilder(aggregationName); - Object interval = otherOptions.get(Histogram.INTERVAL_FIELD); - if (interval == null) { - throw new ParsingException(null, "Missing required field [interval] for histogram aggregation [" + aggregationName + "]"); - } else if (interval instanceof Long) { - factory.interval((Long) interval); - } else if (interval instanceof DateHistogramInterval) { - factory.dateHistogramInterval((DateHistogramInterval) interval); - } else { - throw new IllegalStateException("Unexpected interval class: " + interval.getClass()); - } - Long offset = (Long) otherOptions.get(Histogram.OFFSET_FIELD); - if (offset != null) { - factory.offset(offset); - } - - ExtendedBounds extendedBounds = (ExtendedBounds) otherOptions.get(ExtendedBounds.EXTENDED_BOUNDS_FIELD); - if (extendedBounds != null) { - factory.extendedBounds(extendedBounds); - } - Boolean keyed = (Boolean) otherOptions.get(Histogram.KEYED_FIELD); - if (keyed != null) { - factory.keyed(keyed); - } - Long minDocCount = (Long) otherOptions.get(Histogram.MIN_DOC_COUNT_FIELD); - if (minDocCount != null) { - factory.minDocCount(minDocCount); - } - InternalOrder order = (InternalOrder) otherOptions.get(Histogram.ORDER_FIELD); - if (order != null) { - factory.order(order); - } - return factory; - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, Token token, - XContentParseContext context, Map otherOptions) throws IOException { - XContentParser parser = context.getParser(); - if (token.isValue()) { - if (context.matchField(currentFieldName, Histogram.INTERVAL_FIELD)) { - if (token == XContentParser.Token.VALUE_STRING) { - otherOptions.put(Histogram.INTERVAL_FIELD, new DateHistogramInterval(parser.text())); - return true; - } else { - otherOptions.put(Histogram.INTERVAL_FIELD, parser.longValue()); - return true; - } - } else if (context.matchField(currentFieldName, Histogram.MIN_DOC_COUNT_FIELD)) { - otherOptions.put(Histogram.MIN_DOC_COUNT_FIELD, parser.longValue()); - return true; - } else if (context.matchField(currentFieldName, Histogram.KEYED_FIELD)) { - otherOptions.put(Histogram.KEYED_FIELD, parser.booleanValue()); - return true; - } else if (context.matchField(currentFieldName, Histogram.OFFSET_FIELD)) { - if (token == XContentParser.Token.VALUE_STRING) { - otherOptions.put(Histogram.OFFSET_FIELD, - DateHistogramAggregationBuilder.parseStringOffset(parser.text())); - return true; - } else { - otherOptions.put(Histogram.OFFSET_FIELD, parser.longValue()); - return true; - } - } else { - return false; - } - } else if (token == XContentParser.Token.START_OBJECT) { - if (context.matchField(currentFieldName, Histogram.ORDER_FIELD)) { - InternalOrder order = null; - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (token == XContentParser.Token.VALUE_STRING) { - String dir = parser.text(); - boolean asc = "asc".equals(dir); - if (!asc && !"desc".equals(dir)) { - throw new 
ParsingException(parser.getTokenLocation(), "Unknown order direction in aggregation [" - + aggregationName + "]: [" + dir - + "]. Should be either [asc] or [desc]"); - } - order = resolveOrder(currentFieldName, asc); - } - } - otherOptions.put(Histogram.ORDER_FIELD, order); - return true; - } else if (context.matchField(currentFieldName, ExtendedBounds.EXTENDED_BOUNDS_FIELD)) { - try { - otherOptions.put(ExtendedBounds.EXTENDED_BOUNDS_FIELD, - ExtendedBounds.PARSER.apply(parser, context::getParseFieldMatcher)); - } catch (Exception e) { - throw new ParsingException(parser.getTokenLocation(), "Error parsing [{}]", e, aggregationName); - } - return true; - } else { - return false; - } - } else { - return false; - } - } - - static InternalOrder resolveOrder(String key, boolean asc) { - if ("_key".equals(key) || "_time".equals(key)) { - return (InternalOrder) (asc ? InternalOrder.KEY_ASC : InternalOrder.KEY_DESC); - } - if ("_count".equals(key)) { - return (InternalOrder) (asc ? InternalOrder.COUNT_ASC : InternalOrder.COUNT_DESC); - } - return new InternalOrder.Aggregation(key, asc); - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/ExtendedBounds.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/ExtendedBounds.java index 46fae19e49f9d..eeba2994c8229 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/ExtendedBounds.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/ExtendedBounds.java @@ -19,17 +19,17 @@ package org.elasticsearch.search.aggregations.bucket.histogram; +import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.rounding.Rounding; -import org.elasticsearch.common.xcontent.AbstractObjectParser.NoContextParser; import org.elasticsearch.common.xcontent.ConstructingObjectParser; import org.elasticsearch.common.xcontent.ObjectParser.ValueType; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentParser.Token; import org.elasticsearch.search.DocValueFormat; import org.elasticsearch.search.SearchParseException; @@ -45,7 +45,7 @@ public class ExtendedBounds implements ToXContent, Writeable { static final ParseField MIN_FIELD = new ParseField("min"); static final ParseField MAX_FIELD = new ParseField("max"); - public static final ConstructingObjectParser PARSER = new ConstructingObjectParser<>( + public static final ConstructingObjectParser PARSER = new ConstructingObjectParser<>( "extended_bounds", a -> { assert a.length == 2; Long min = null; @@ -73,7 +73,7 @@ public class ExtendedBounds implements ToXContent, Writeable { return new ExtendedBounds(min, max, minAsStr, maxAsStr); }); static { - NoContextParser longOrString = p -> { + CheckedFunction longOrString = p -> { if (p.currentToken() == Token.VALUE_NUMBER) { return p.longValue(false); } @@ -153,11 +153,11 @@ ExtendedBounds parseAndValidate(String aggName, SearchContext context, DocValueF Long max = this.max; assert format != null; if (minAsStr != null) { - min = format.parseLong(minAsStr, false, context::nowInMillis); + min = 
format.parseLong(minAsStr, false, context.getQueryShardContext()::nowInMillis); } if (maxAsStr != null) { // TODO: Should we rather pass roundUp=true? - max = format.parseLong(maxAsStr, false, context::nowInMillis); + max = format.parseLong(maxAsStr, false, context.getQueryShardContext()::nowInMillis); } if (min != null && max != null && min.compareTo(max) > 0) { throw new SearchParseException(context, "[extended_bounds.min][" + min + "] cannot be greater than " + diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/Histogram.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/Histogram.java index 9453ecef59690..3ac87de81ed23 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/Histogram.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/Histogram.java @@ -48,8 +48,7 @@ interface Bucket extends MultiBucketsAggregation.Bucket { * @return The buckets of this histogram (each bucket representing an interval in the histogram) */ @Override - List getBuckets(); - + List getBuckets(); /** * A strategy defining the order in which the buckets in this histogram are ordered. diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregationBuilder.java index 03ff1ee935a85..87c7404c088ba 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregationBuilder.java @@ -19,19 +19,26 @@ package org.elasticsearch.search.aggregations.bucket.histogram; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParser.Token; +import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Objects; @@ -41,7 +48,40 @@ */ public class HistogramAggregationBuilder extends ValuesSourceAggregationBuilder { - public static final String NAME = InternalHistogram.TYPE.name(); + public static final String NAME = "histogram"; + + private static final ObjectParser EXTENDED_BOUNDS_PARSER = new ObjectParser<>( + 
Histogram.EXTENDED_BOUNDS_FIELD.getPreferredName(), + () -> new double[]{ Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY }); + static { + EXTENDED_BOUNDS_PARSER.declareDouble((bounds, d) -> bounds[0] = d, new ParseField("min")); + EXTENDED_BOUNDS_PARSER.declareDouble((bounds, d) -> bounds[1] = d, new ParseField("max")); + } + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(HistogramAggregationBuilder.NAME); + ValuesSourceParserHelper.declareNumericFields(PARSER, true, true, false); + + PARSER.declareDouble(HistogramAggregationBuilder::interval, Histogram.INTERVAL_FIELD); + + PARSER.declareDouble(HistogramAggregationBuilder::offset, Histogram.OFFSET_FIELD); + + PARSER.declareBoolean(HistogramAggregationBuilder::keyed, Histogram.KEYED_FIELD); + + PARSER.declareLong(HistogramAggregationBuilder::minDocCount, Histogram.MIN_DOC_COUNT_FIELD); + + PARSER.declareField((histogram, extendedBounds) -> { + histogram.extendedBounds(extendedBounds[0], extendedBounds[1]); + }, parser -> EXTENDED_BOUNDS_PARSER.apply(parser, null), ExtendedBounds.EXTENDED_BOUNDS_FIELD, ObjectParser.ValueType.OBJECT); + + PARSER.declareField(HistogramAggregationBuilder::order, HistogramAggregationBuilder::parseOrder, + Histogram.ORDER_FIELD, ObjectParser.ValueType.OBJECT); + } + + public static HistogramAggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new HistogramAggregationBuilder(aggregationName), context); + } private double interval; private double offset = 0; @@ -53,12 +93,12 @@ public class HistogramAggregationBuilder /** Create a new builder with the given name. */ public HistogramAggregationBuilder(String name) { - super(name, InternalHistogram.TYPE, ValuesSourceType.NUMERIC, ValueType.DOUBLE); + super(name, ValuesSourceType.NUMERIC, ValueType.DOUBLE); } /** Read from a stream, for internal use only. 
*/ public HistogramAggregationBuilder(StreamInput in) throws IOException { - super(in, InternalHistogram.TYPE, ValuesSourceType.NUMERIC, ValueType.DOUBLE); + super(in, ValuesSourceType.NUMERIC, ValueType.DOUBLE); if (in.readBoolean()) { order = InternalOrder.Streams.readOrder(in); } @@ -219,14 +259,14 @@ protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) } @Override - public String getWriteableName() { - return InternalHistogram.TYPE.name(); + public String getType() { + return NAME; } @Override - protected ValuesSourceAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, + protected ValuesSourceAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new HistogramAggregatorFactory(name, type, config, interval, offset, order, keyed, minDocCount, minBound, maxBound, + return new HistogramAggregatorFactory(name, config, interval, offset, order, keyed, minDocCount, minBound, maxBound, context, parent, subFactoriesBuilder, metaData); } @@ -246,4 +286,34 @@ protected boolean innerEquals(Object obj) { && Objects.equals(minBound, other.minBound) && Objects.equals(maxBound, other.maxBound); } + + private static InternalOrder parseOrder(XContentParser parser, QueryParseContext context) throws IOException { + InternalOrder order = null; + Token token; + String currentFieldName = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token == XContentParser.Token.VALUE_STRING) { + String dir = parser.text(); + boolean asc = "asc".equals(dir); + if (!asc && !"desc".equals(dir)) { + throw new ParsingException(parser.getTokenLocation(), "Unknown order direction: [" + dir + + "]. Should be either [asc] or [desc]"); + } + order = resolveOrder(currentFieldName, asc); + } + } + return order; + } + + static InternalOrder resolveOrder(String key, boolean asc) { + if ("_key".equals(key)) { + return (InternalOrder) (asc ? InternalOrder.KEY_ASC : InternalOrder.KEY_DESC); + } + if ("_count".equals(key)) { + return (InternalOrder) (asc ? 
InternalOrder.COUNT_ASC : InternalOrder.COUNT_DESC); + } + return new InternalOrder.Aggregation(key, asc); + } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregator.java index 7d102578a720c..4b547989d8bce 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregator.java @@ -34,8 +34,8 @@ import org.elasticsearch.search.aggregations.bucket.BucketsAggregator; import org.elasticsearch.search.aggregations.bucket.histogram.InternalHistogram.EmptyBucketInfo; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.ArrayList; @@ -61,13 +61,13 @@ class HistogramAggregator extends BucketsAggregator { private final LongHash bucketOrds; - public HistogramAggregator(String name, AggregatorFactories factories, double interval, double offset, + HistogramAggregator(String name, AggregatorFactories factories, double interval, double offset, InternalOrder order, boolean keyed, long minDocCount, double minBound, double maxBound, @Nullable ValuesSource.Numeric valuesSource, DocValueFormat formatter, - AggregationContext aggregationContext, Aggregator parent, + SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { - super(name, factories, aggregationContext, parent, pipelineAggregators, metaData); + super(name, factories, context, parent, pipelineAggregators, metaData); if (interval <= 0) { throw new IllegalArgumentException("interval must be positive, got: " + interval); } @@ -81,7 +81,7 @@ public HistogramAggregator(String name, AggregatorFactories factories, double in this.valuesSource = valuesSource; this.formatter = formatter; - bucketOrds = new LongHash(1, aggregationContext.bigArrays()); + bucketOrds = new LongHash(1, context.bigArrays()); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregatorFactory.java index 805aab9ecf5df..939210b63a699 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregatorFactory.java @@ -22,13 +22,12 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; -import 
org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -42,11 +41,11 @@ public final class HistogramAggregatorFactory extends ValuesSourceAggregatorFact private final long minDocCount; private final double minBound, maxBound; - HistogramAggregatorFactory(String name, Type type, ValuesSourceConfig config, double interval, double offset, + HistogramAggregatorFactory(String name, ValuesSourceConfig config, double interval, double offset, InternalOrder order, boolean keyed, long minDocCount, double minBound, double maxBound, - AggregationContext context, AggregatorFactory parent, + SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); this.interval = interval; this.offset = offset; this.order = order; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramParser.java deleted file mode 100644 index f27677a1a660f..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramParser.java +++ /dev/null @@ -1,147 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.search.aggregations.bucket.histogram; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcherSupplier; -import org.elasticsearch.common.ParsingException; -import org.elasticsearch.common.xcontent.ObjectParser; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentParser.Token; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.NumericValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - -/** - * A parser for date histograms. This translates json into an - * {@link HistogramAggregationBuilder} instance. 
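(Editorial aside, not part of the patch.) Both builders above gain a package-private resolveOrder helper; the date_histogram variant also accepts "_time" as an alias for "_key", while the plain histogram variant does not. A hedged usage sketch, assuming same-package access to the package-private InternalOrder constants; the sub-aggregation name is hypothetical:

```java
package org.elasticsearch.search.aggregations.bucket.histogram;

// Illustrative only; placed in this package because resolveOrder and the
// InternalOrder constants are package-private.
class OrderResolutionSketch {
    static void demo() {
        InternalOrder byTime  = DateHistogramAggregationBuilder.resolveOrder("_time", true);  // KEY_ASC, "_time" alias
        InternalOrder byCount = HistogramAggregationBuilder.resolveOrder("_count", false);    // COUNT_DESC
        InternalOrder bySub   = HistogramAggregationBuilder.resolveOrder("my_avg", true);     // order on a sub-aggregation path
    }
}
```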
- */ -public class HistogramParser extends NumericValuesSourceParser { - - private static final ObjectParser EXTENDED_BOUNDS_PARSER = new ObjectParser<>( - Histogram.EXTENDED_BOUNDS_FIELD.getPreferredName(), - () -> new double[]{ Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY }); - static { - EXTENDED_BOUNDS_PARSER.declareDouble((bounds, d) -> bounds[0] = d, new ParseField("min")); - EXTENDED_BOUNDS_PARSER.declareDouble((bounds, d) -> bounds[1] = d, new ParseField("max")); - } - - public HistogramParser() { - super(true, true, false); - } - - @Override - protected HistogramAggregationBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - HistogramAggregationBuilder factory = new HistogramAggregationBuilder(aggregationName); - Double interval = (Double) otherOptions.get(Histogram.INTERVAL_FIELD); - if (interval == null) { - throw new ParsingException(null, "Missing required field [interval] for histogram aggregation [" + aggregationName + "]"); - } else { - factory.interval(interval); - } - Double offset = (Double) otherOptions.get(Histogram.OFFSET_FIELD); - if (offset != null) { - factory.offset(offset); - } - - double[] extendedBounds = (double[]) otherOptions.get(Histogram.EXTENDED_BOUNDS_FIELD); - if (extendedBounds != null) { - factory.extendedBounds(extendedBounds[0], extendedBounds[1]); - } - Boolean keyed = (Boolean) otherOptions.get(Histogram.KEYED_FIELD); - if (keyed != null) { - factory.keyed(keyed); - } - Long minDocCount = (Long) otherOptions.get(Histogram.MIN_DOC_COUNT_FIELD); - if (minDocCount != null) { - factory.minDocCount(minDocCount); - } - InternalOrder order = (InternalOrder) otherOptions.get(Histogram.ORDER_FIELD); - if (order != null) { - factory.order(order); - } - return factory; - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, Token token, - XContentParseContext context, Map otherOptions) throws IOException { - XContentParser parser = context.getParser(); - if (token.isValue()) { - if (context.matchField(currentFieldName, Histogram.INTERVAL_FIELD)) { - otherOptions.put(Histogram.INTERVAL_FIELD, parser.doubleValue()); - return true; - } else if (context.matchField(currentFieldName, Histogram.MIN_DOC_COUNT_FIELD)) { - otherOptions.put(Histogram.MIN_DOC_COUNT_FIELD, parser.longValue()); - return true; - } else if (context.matchField(currentFieldName, Histogram.KEYED_FIELD)) { - otherOptions.put(Histogram.KEYED_FIELD, parser.booleanValue()); - return true; - } else if (context.matchField(currentFieldName, Histogram.OFFSET_FIELD)) { - otherOptions.put(Histogram.OFFSET_FIELD, parser.doubleValue()); - return true; - } else { - return false; - } - } else if (token == XContentParser.Token.START_OBJECT) { - if (context.matchField(currentFieldName, Histogram.ORDER_FIELD)) { - InternalOrder order = null; - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (token == XContentParser.Token.VALUE_STRING) { - String dir = parser.text(); - boolean asc = "asc".equals(dir); - if (!asc && !"desc".equals(dir)) { - throw new ParsingException(parser.getTokenLocation(), "Unknown order direction in aggregation [" - + aggregationName + "]: [" + dir - + "]. 
Should be either [asc] or [desc]"); - } - order = resolveOrder(currentFieldName, asc); - } - } - otherOptions.put(Histogram.ORDER_FIELD, order); - return true; - } else if (context.matchField(currentFieldName, Histogram.EXTENDED_BOUNDS_FIELD)) { - double[] bounds = EXTENDED_BOUNDS_PARSER.apply(parser, context::getParseFieldMatcher); - otherOptions.put(Histogram.EXTENDED_BOUNDS_FIELD, bounds); - return true; - } else { - return false; - } - } else { - return false; - } - } - - static InternalOrder resolveOrder(String key, boolean asc) { - if ("_key".equals(key)) { - return (InternalOrder) (asc ? InternalOrder.KEY_ASC : InternalOrder.KEY_DESC); - } - if ("_count".equals(key)) { - return (InternalOrder) (asc ? InternalOrder.COUNT_ASC : InternalOrder.COUNT_DESC); - } - return new InternalOrder.Aggregation(key, asc); - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalDateHistogram.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalDateHistogram.java index 56d3792e0c6b3..dc876b50c97a6 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalDateHistogram.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalDateHistogram.java @@ -43,13 +43,11 @@ import java.util.Map; /** - * Imelementation of {@link Histogram}. + * Implementation of {@link Histogram}. */ public final class InternalDateHistogram extends InternalMultiBucketAggregation implements Histogram, HistogramFactory { - static final Type TYPE = new Type("date_histogram"); - public static class Bucket extends InternalMultiBucketAggregation.InternalBucket implements Histogram.Bucket { final long key; @@ -125,10 +123,10 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.startObject(); } if (format != DocValueFormat.RAW) { - builder.field(CommonFields.KEY_AS_STRING, keyAsString); + builder.field(CommonFields.KEY_AS_STRING.getPreferredName(), keyAsString); } - builder.field(CommonFields.KEY, key); - builder.field(CommonFields.DOC_COUNT, docCount); + builder.field(CommonFields.KEY.getPreferredName(), key); + builder.field(CommonFields.DOC_COUNT.getPreferredName(), docCount); aggregations.toXContentInternal(builder, params); builder.endObject(); return builder; @@ -233,7 +231,7 @@ public String getWriteableName() { } @Override - public List getBuckets() { + public List getBuckets() { return Collections.unmodifiableList(buckets); } @@ -287,7 +285,7 @@ protected boolean lessThan(IteratorAndCurrent a, IteratorAndCurrent b) { if (top.current.key != key) { // the key changes, reduce what we already buffered and reset the buffer for current buckets final Bucket reduced = currentBuckets.get(0).reduce(currentBuckets, reduceContext); - if (reduced.getDocCount() >= minDocCount) { + if (reduced.getDocCount() >= minDocCount || reduceContext.isFinalReduce() == false) { reducedBuckets.add(reduced); } currentBuckets.clear(); @@ -308,7 +306,7 @@ protected boolean lessThan(IteratorAndCurrent a, IteratorAndCurrent b) { if (currentBuckets.isEmpty() == false) { final Bucket reduced = currentBuckets.get(0).reduce(currentBuckets, reduceContext); - if (reduced.getDocCount() >= minDocCount) { + if (reduced.getDocCount() >= minDocCount || reduceContext.isFinalReduce() == false) { reducedBuckets.add(reduced); } } @@ -329,8 +327,8 @@ private void addEmptyBuckets(List list, ReduceContext reduceContext) { Bucket firstBucket = iter.hasNext() ? 
list.get(iter.nextIndex()) : null; if (firstBucket == null) { if (bounds.getMin() != null && bounds.getMax() != null) { - long key = bounds.getMin(); - long max = bounds.getMax(); + long key = bounds.getMin() + offset; + long max = bounds.getMax() + offset; while (key <= max) { iter.add(new InternalDateHistogram.Bucket(key, 0, keyed, format, reducedEmptySubAggs)); key = nextKey(key).longValue(); @@ -338,7 +336,7 @@ private void addEmptyBuckets(List list, ReduceContext reduceContext) { } } else { if (bounds.getMin() != null) { - long key = bounds.getMin(); + long key = bounds.getMin() + offset; if (key < firstBucket.key) { while (key < firstBucket.key) { iter.add(new InternalDateHistogram.Bucket(key, 0, keyed, format, reducedEmptySubAggs)); @@ -365,12 +363,12 @@ private void addEmptyBuckets(List list, ReduceContext reduceContext) { } // finally, adding the empty buckets *after* the actual data (based on the extended_bounds.max requested by the user) - if (bounds != null && lastBucket != null && bounds.getMax() != null && bounds.getMax() > lastBucket.key) { - long key = emptyBucketInfo.rounding.nextRoundingValue(lastBucket.key); - long max = bounds.getMax(); + if (bounds != null && lastBucket != null && bounds.getMax() != null && bounds.getMax() + offset > lastBucket.key) { + long key = nextKey(lastBucket.key).longValue(); + long max = bounds.getMax() + offset; while (key <= max) { iter.add(new InternalDateHistogram.Bucket(key, 0, keyed, format, reducedEmptySubAggs)); - key = emptyBucketInfo.rounding.nextRoundingValue(key); + key = nextKey(key).longValue(); } } } @@ -384,7 +382,7 @@ public InternalAggregation doReduce(List aggregations, Redu addEmptyBuckets(reducedBuckets, reduceContext); } - if (order == InternalOrder.KEY_ASC) { + if (order == InternalOrder.KEY_ASC || reduceContext.isFinalReduce() == false) { // nothing to do, data are already sorted since shards return // sorted buckets and the merge-sort performed by reduceBuckets // maintains order @@ -405,9 +403,9 @@ public InternalAggregation doReduce(List aggregations, Redu @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { if (keyed) { - builder.startObject(CommonFields.BUCKETS); + builder.startObject(CommonFields.BUCKETS.getPreferredName()); } else { - builder.startArray(CommonFields.BUCKETS); + builder.startArray(CommonFields.BUCKETS.getPreferredName()); } for (Bucket bucket : buckets) { bucket.toXContent(builder, params); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalHistogram.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalHistogram.java index 4dae51533db3b..a77217ec98035 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalHistogram.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalHistogram.java @@ -40,13 +40,10 @@ import java.util.Map; /** - * Imelementation of {@link Histogram}. + * Implementation of {@link Histogram}. 
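(Editorial aside, not part of the patch.) In the InternalDateHistogram hunk above, the empty buckets generated from extended_bounds are now shifted by the histogram's offset. The keys the aggregator emits already include the offset, so without the shift the synthetic empty buckets would land on different keys than the real ones. A small worked example with hypothetical values:

```java
// Illustrative arithmetic only; the timestamps and offset are hypothetical.
long offset = 6L * 60 * 60 * 1000;        // "offset": "+6h"
long boundsMin = 1488326400000L;          // 2017-03-01T00:00:00Z, already rounded to the day
long firstEmptyKey = boundsMin + offset;  // 2017-03-01T06:00:00Z, aligned with the real bucket keys
```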
*/ public final class InternalHistogram extends InternalMultiBucketAggregation implements Histogram, HistogramFactory { - - static final Type TYPE = new Type("histogram"); - public static class Bucket extends InternalMultiBucketAggregation.InternalBucket implements Histogram.Bucket { final double key; @@ -122,10 +119,10 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.startObject(); } if (format != DocValueFormat.RAW) { - builder.field(CommonFields.KEY_AS_STRING, keyAsString); + builder.field(CommonFields.KEY_AS_STRING.getPreferredName(), keyAsString); } - builder.field(CommonFields.KEY, key); - builder.field(CommonFields.DOC_COUNT, docCount); + builder.field(CommonFields.KEY.getPreferredName(), key); + builder.field(CommonFields.DOC_COUNT.getPreferredName(), docCount); aggregations.toXContentInternal(builder, params); builder.endObject(); return builder; @@ -222,7 +219,7 @@ public String getWriteableName() { } @Override - public List getBuckets() { + public List getBuckets() { return Collections.unmodifiableList(buckets); } @@ -272,10 +269,11 @@ protected boolean lessThan(IteratorAndCurrent a, IteratorAndCurrent b) { do { final IteratorAndCurrent top = pq.top(); - if (top.current.key != key) { - // the key changes, reduce what we already buffered and reset the buffer for current buckets + if (Double.compare(top.current.key, key) != 0) { + // The key changes, reduce what we already buffered and reset the buffer for current buckets. + // Using Double.compare instead of != to handle NaN correctly. final Bucket reduced = currentBuckets.get(0).reduce(currentBuckets, reduceContext); - if (reduced.getDocCount() >= minDocCount) { + if (reduced.getDocCount() >= minDocCount || reduceContext.isFinalReduce() == false) { reducedBuckets.add(reduced); } currentBuckets.clear(); @@ -286,7 +284,7 @@ protected boolean lessThan(IteratorAndCurrent a, IteratorAndCurrent b) { if (top.iterator.hasNext()) { final Bucket next = top.iterator.next(); - assert next.key > top.current.key : "shards must return data sorted by key"; + assert Double.compare(next.key, top.current.key) > 0 : "shards must return data sorted by key"; top.current = next; pq.updateTop(); } else { @@ -296,7 +294,7 @@ protected boolean lessThan(IteratorAndCurrent a, IteratorAndCurrent b) { if (currentBuckets.isEmpty() == false) { final Bucket reduced = currentBuckets.get(0).reduce(currentBuckets, reduceContext); - if (reduced.getDocCount() >= minDocCount) { + if (reduced.getDocCount() >= minDocCount || reduceContext.isFinalReduce() == false) { reducedBuckets.add(reduced); } } @@ -367,7 +365,7 @@ public InternalAggregation doReduce(List aggregations, Redu addEmptyBuckets(reducedBuckets, reduceContext); } - if (order == InternalOrder.KEY_ASC) { + if (order == InternalOrder.KEY_ASC || reduceContext.isFinalReduce() == false) { // nothing to do, data are already sorted since shards return // sorted buckets and the merge-sort performed by reduceBuckets // maintains order @@ -388,9 +386,9 @@ public InternalAggregation doReduce(List aggregations, Redu @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { if (keyed) { - builder.startObject(CommonFields.BUCKETS); + builder.startObject(CommonFields.BUCKETS.getPreferredName()); } else { - builder.startArray(CommonFields.BUCKETS); + builder.startArray(CommonFields.BUCKETS.getPreferredName()); } for (Bucket bucket : buckets) { bucket.toXContent(builder, params); diff --git 
a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/ParsedDateHistogram.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/ParsedDateHistogram.java new file mode 100644 index 0000000000000..ace0cb59907a8 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/ParsedDateHistogram.java @@ -0,0 +1,91 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.bucket.histogram; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.ParsedMultiBucketAggregation; +import org.joda.time.DateTime; +import org.joda.time.DateTimeZone; + +import java.io.IOException; +import java.util.List; + +public class ParsedDateHistogram extends ParsedMultiBucketAggregation implements Histogram { + + @Override + public String getType() { + return DateHistogramAggregationBuilder.NAME; + } + + @Override + public List getBuckets() { + return buckets; + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedDateHistogram.class.getSimpleName(), true, ParsedDateHistogram::new); + static { + declareMultiBucketAggregationFields(PARSER, + parser -> ParsedBucket.fromXContent(parser, false), + parser -> ParsedBucket.fromXContent(parser, true)); + } + + public static ParsedDateHistogram fromXContent(XContentParser parser, String name) throws IOException { + ParsedDateHistogram aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } + + public static class ParsedBucket extends ParsedMultiBucketAggregation.ParsedBucket implements Histogram.Bucket { + + private Long key; + + @Override + public Object getKey() { + if (key != null) { + return new DateTime(key, DateTimeZone.UTC); + } + return null; + } + + @Override + public String getKeyAsString() { + String keyAsString = super.getKeyAsString(); + if (keyAsString != null) { + return keyAsString; + } + if (key != null) { + return Long.toString(key); + } + return null; + } + + @Override + protected XContentBuilder keyToXContent(XContentBuilder builder) throws IOException { + return builder.field(CommonFields.KEY.getPreferredName(), key); + } + + static ParsedBucket fromXContent(XContentParser parser, boolean keyed) throws IOException { + return parseXContent(parser, keyed, ParsedBucket::new, (p, bucket) -> bucket.key = p.longValue()); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/ParsedHistogram.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/ParsedHistogram.java new file mode 100644 index 
0000000000000..6037c1558867a --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/ParsedHistogram.java @@ -0,0 +1,80 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.bucket.histogram; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.ParsedMultiBucketAggregation; + +import java.io.IOException; +import java.util.List; + +public class ParsedHistogram extends ParsedMultiBucketAggregation implements Histogram { + + @Override + public String getType() { + return HistogramAggregationBuilder.NAME; + } + + @Override + public List getBuckets() { + return buckets; + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedHistogram.class.getSimpleName(), true, ParsedHistogram::new); + static { + declareMultiBucketAggregationFields(PARSER, + parser -> ParsedBucket.fromXContent(parser, false), + parser -> ParsedBucket.fromXContent(parser, true)); + } + + public static ParsedHistogram fromXContent(XContentParser parser, String name) throws IOException { + ParsedHistogram aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } + + static class ParsedBucket extends ParsedMultiBucketAggregation.ParsedBucket implements Histogram.Bucket { + + private Double key; + + @Override + public Object getKey() { + return key; + } + + @Override + public String getKeyAsString() { + String keyAsString = super.getKeyAsString(); + if (keyAsString != null) { + return keyAsString; + } + if (key != null) { + return Double.toString(key); + } + return null; + } + + static ParsedBucket fromXContent(XContentParser parser, boolean keyed) throws IOException { + return parseXContent(parser, keyed, ParsedBucket::new, (p, bucket) -> bucket.key = p.doubleValue()); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregationBuilder.java index 09916acfdf8a0..9361acc8fcb4f 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregationBuilder.java @@ -21,33 +21,44 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import 
org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; public class MissingAggregationBuilder extends ValuesSourceAggregationBuilder { public static final String NAME = "missing"; - public static final Type TYPE = new Type(NAME); + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(MissingAggregationBuilder.NAME); + ValuesSourceParserHelper.declareAnyFields(PARSER, true, true); + } + + public static MissingAggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new MissingAggregationBuilder(aggregationName, null), context); + } public MissingAggregationBuilder(String name, ValueType targetValueType) { - super(name, TYPE, ValuesSourceType.ANY, targetValueType); + super(name, ValuesSourceType.ANY, targetValueType); } /** * Read from a stream. */ public MissingAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.ANY); + super(in, ValuesSourceType.ANY); } @Override @@ -61,9 +72,9 @@ protected boolean serializeTargetValueType() { } @Override - protected ValuesSourceAggregatorFactory innerBuild(AggregationContext context, + protected ValuesSourceAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new MissingAggregatorFactory(name, type, config, context, parent, subFactoriesBuilder, metaData); + return new MissingAggregatorFactory(name, config, context, parent, subFactoriesBuilder, metaData); } @Override @@ -82,7 +93,7 @@ protected boolean innerEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregator.java index e80be56f34199..c0be6b2bfb5c1 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregator.java @@ -27,8 +27,8 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -42,7 +42,7 @@ public class MissingAggregator extends SingleBucketAggregator { private final ValuesSource 
valuesSource; public MissingAggregator(String name, AggregatorFactories factories, ValuesSource valuesSource, - AggregationContext aggregationContext, Aggregator parent, List pipelineAggregators, + SearchContext aggregationContext, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, factories, aggregationContext, parent, pipelineAggregators, metaData); this.valuesSource = valuesSource; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregatorFactory.java index d7b478b78f8fd..5d05b79cef2d6 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregatorFactory.java @@ -22,12 +22,11 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -35,9 +34,9 @@ public class MissingAggregatorFactory extends ValuesSourceAggregatorFactory { - public MissingAggregatorFactory(String name, Type type, ValuesSourceConfig config, AggregationContext context, + public MissingAggregatorFactory(String name, ValuesSourceConfig config, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingParser.java deleted file mode 100644 index 5d6844ebbd25f..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingParser.java +++ /dev/null @@ -1,48 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ -package org.elasticsearch.search.aggregations.bucket.missing; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.AnyValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - -public class MissingParser extends AnyValuesSourceParser { - - public MissingParser() { - super(true, true); - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, XContentParser.Token token, - XContentParseContext context, Map otherOptions) throws IOException { - return false; - } - - @Override - protected MissingAggregationBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - return new MissingAggregationBuilder(aggregationName, targetValueType); - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/ParsedMissing.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/ParsedMissing.java new file mode 100644 index 0000000000000..2897372df8954 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/ParsedMissing.java @@ -0,0 +1,36 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.search.aggregations.bucket.missing; + +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.bucket.ParsedSingleBucketAggregation; + +import java.io.IOException; + +public class ParsedMissing extends ParsedSingleBucketAggregation implements Missing { + + @Override + public String getType() { + return MissingAggregationBuilder.NAME; + } + + public static ParsedMissing fromXContent(XContentParser parser, final String name) throws IOException { + return parseXContent(parser, new ParsedMissing(), name); + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregationBuilder.java index c2bf201e48f0e..f8f2602a474c6 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregationBuilder.java @@ -30,15 +30,13 @@ import org.elasticsearch.search.aggregations.AggregationExecutionException; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Objects; public class NestedAggregationBuilder extends AbstractAggregationBuilder { public static final String NAME = "nested"; - private static final Type TYPE = new Type(NAME); private final String path; @@ -50,7 +48,7 @@ public class NestedAggregationBuilder extends AbstractAggregationBuilder doBuild(AggregationContext context, AggregatorFactory parent, Builder subFactoriesBuilder) + protected AggregatorFactory doBuild(SearchContext context, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - ObjectMapper childObjectMapper = context.searchContext().getObjectMapper(path); + ObjectMapper childObjectMapper = context.getObjectMapper(path); if (childObjectMapper == null) { // in case the path has been unmapped: - return new NestedAggregatorFactory(name, type, null, null, context, parent, subFactoriesBuilder, metaData); + return new NestedAggregatorFactory(name, null, null, context, parent, subFactoriesBuilder, metaData); } if (childObjectMapper.nested().isNested() == false) { throw new AggregationExecutionException("[nested] nested path [" + path + "] is not nested"); } try { - ObjectMapper parentObjectMapper = context.searchContext().getQueryShardContext().nestedScope().nextLevel(childObjectMapper); - return new NestedAggregatorFactory(name, type, parentObjectMapper, childObjectMapper, context, parent, subFactoriesBuilder, + ObjectMapper parentObjectMapper = context.getQueryShardContext().nestedScope().nextLevel(childObjectMapper); + return new NestedAggregatorFactory(name, parentObjectMapper, childObjectMapper, context, parent, subFactoriesBuilder, metaData); } finally { - context.searchContext().getQueryShardContext().nestedScope().previousLevel(); + context.getQueryShardContext().nestedScope().previousLevel(); } } @@ -116,7 +114,7 @@ public static NestedAggregationBuilder parse(String aggregationName, QueryParseC if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == 
XContentParser.Token.VALUE_STRING) { - if (context.getParseFieldMatcher().match(currentFieldName, NestedAggregator.PATH_FIELD)) { + if (NestedAggregator.PATH_FIELD.match(currentFieldName)) { path = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), @@ -148,7 +146,7 @@ protected boolean doEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java index 448ea44e7ebf2..3df4b28993bbc 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java @@ -38,7 +38,7 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -52,11 +52,11 @@ public class NestedAggregator extends SingleBucketAggregator { private final Query childFilter; public NestedAggregator(String name, AggregatorFactories factories, ObjectMapper parentObjectMapper, ObjectMapper childObjectMapper, - AggregationContext aggregationContext, Aggregator parentAggregator, + SearchContext context, Aggregator parentAggregator, List pipelineAggregators, Map metaData) throws IOException { - super(name, factories, aggregationContext, parentAggregator, pipelineAggregators, metaData); + super(name, factories, context, parentAggregator, pipelineAggregators, metaData); Query parentFilter = parentObjectMapper != null ? 
parentObjectMapper.nestedTypeFilter() : Queries.newNonNestedFilter(); - this.parentFilter = context.searchContext().bitsetFilterCache().getBitSetProducer(parentFilter); + this.parentFilter = context.bitsetFilterCache().getBitSetProducer(parentFilter); this.childFilter = childObjectMapper.nestedTypeFilter(); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregatorFactory.java index b4e9fa05f708e..0ca0ef0a71e91 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregatorFactory.java @@ -20,15 +20,13 @@ package org.elasticsearch.search.aggregations.bucket.nested; import org.elasticsearch.index.mapper.ObjectMapper; -import org.elasticsearch.search.aggregations.AggregationExecutionException; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.NonCollectingAggregator; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -39,10 +37,10 @@ public class NestedAggregatorFactory extends AggregatorFactory parent, AggregatorFactories.Builder subFactories, + public NestedAggregatorFactory(String name, ObjectMapper parentObjectMapper, ObjectMapper childObjectMapper, + SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactories, Map metaData) throws IOException { - super(name, type, context, parent, subFactories, metaData); + super(name, context, parent, subFactories, metaData); this.parentObjectMapper = parentObjectMapper; this.childObjectMapper = childObjectMapper; } @@ -61,7 +59,7 @@ public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBu private static final class Unmapped extends NonCollectingAggregator { - public Unmapped(String name, AggregationContext context, Aggregator parent, List pipelineAggregators, + Unmapped(String name, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ParsedNested.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ParsedNested.java new file mode 100644 index 0000000000000..f241675678cfe --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ParsedNested.java @@ -0,0 +1,36 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.search.aggregations.bucket.nested; + +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.bucket.ParsedSingleBucketAggregation; + +import java.io.IOException; + +public class ParsedNested extends ParsedSingleBucketAggregation implements Nested { + + @Override + public String getType() { + return NestedAggregationBuilder.NAME; + } + + public static ParsedNested fromXContent(XContentParser parser, final String name) throws IOException { + return parseXContent(parser, new ParsedNested(), name); + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ParsedReverseNested.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ParsedReverseNested.java new file mode 100644 index 0000000000000..dec15c3eded10 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ParsedReverseNested.java @@ -0,0 +1,36 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.search.aggregations.bucket.nested; + +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.bucket.ParsedSingleBucketAggregation; + +import java.io.IOException; + +public class ParsedReverseNested extends ParsedSingleBucketAggregation implements Nested { + + @Override + public String getType() { + return ReverseNestedAggregationBuilder.NAME; + } + + public static ParsedReverseNested fromXContent(XContentParser parser, final String name) throws IOException { + return parseXContent(parser, new ParsedReverseNested(), name); + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregationBuilder.java index 707d46338454b..30bd72c6c6894 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregationBuilder.java @@ -32,27 +32,25 @@ import org.elasticsearch.search.aggregations.AggregationExecutionException; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Objects; public class ReverseNestedAggregationBuilder extends AbstractAggregationBuilder { public static final String NAME = "reverse_nested"; - private static final Type TYPE = new Type(NAME); private String path; public ReverseNestedAggregationBuilder(String name) { - super(name, TYPE); + super(name); } /** * Read from a stream. 
*/ public ReverseNestedAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE); + super(in); path = in.readOptionalString(); } @@ -82,28 +80,28 @@ public String path() { } @Override - protected AggregatorFactory doBuild(AggregationContext context, AggregatorFactory parent, Builder subFactoriesBuilder) + protected AggregatorFactory doBuild(SearchContext context, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { if (findNestedAggregatorFactory(parent) == null) { - throw new SearchParseException(context.searchContext(), + throw new SearchParseException(context, "Reverse nested aggregation [" + name + "] can only be used inside a [nested] aggregation", null); } ObjectMapper parentObjectMapper = null; if (path != null) { - parentObjectMapper = context.searchContext().getObjectMapper(path); + parentObjectMapper = context.getObjectMapper(path); if (parentObjectMapper == null) { - return new ReverseNestedAggregatorFactory(name, type, true, null, context, parent, subFactoriesBuilder, metaData); + return new ReverseNestedAggregatorFactory(name, true, null, context, parent, subFactoriesBuilder, metaData); } if (parentObjectMapper.nested().isNested() == false) { throw new AggregationExecutionException("[reverse_nested] nested path [" + path + "] is not nested"); } } - NestedScope nestedScope = context.searchContext().getQueryShardContext().nestedScope(); + NestedScope nestedScope = context.getQueryShardContext().nestedScope(); try { nestedScope.nextLevel(parentObjectMapper); - return new ReverseNestedAggregatorFactory(name, type, false, parentObjectMapper, context, parent, subFactoriesBuilder, + return new ReverseNestedAggregatorFactory(name, false, parentObjectMapper, context, parent, subFactoriesBuilder, metaData); } finally { nestedScope.previousLevel(); @@ -172,7 +170,7 @@ protected boolean doEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java index d45f103ed5e9c..f55dadaa8eb61 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java @@ -19,6 +19,7 @@ package org.elasticsearch.search.aggregations.bucket.nested; import com.carrotsearch.hppc.LongIntHashMap; + import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.DocIdSetIterator; import org.apache.lucene.search.Query; @@ -34,7 +35,7 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -51,15 +52,15 @@ public class ReverseNestedAggregator extends SingleBucketAggregator { private final BitSetProducer parentBitsetProducer; public ReverseNestedAggregator(String name, AggregatorFactories factories, ObjectMapper objectMapper, - AggregationContext aggregationContext, Aggregator parent, List pipelineAggregators, Map metaData) + SearchContext context, Aggregator parent, List pipelineAggregators, Map 
metaData) throws IOException { - super(name, factories, aggregationContext, parent, pipelineAggregators, metaData); + super(name, factories, context, parent, pipelineAggregators, metaData); if (objectMapper == null) { parentFilter = Queries.newNonNestedFilter(); } else { parentFilter = objectMapper.nestedTypeFilter(); } - parentBitsetProducer = context.searchContext().bitsetFilterCache().getBitSetProducer(parentFilter); + parentBitsetProducer = context.bitsetFilterCache().getBitSetProducer(parentFilter); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregatorFactory.java index b077e755bb7d3..3ed6f21d99c8d 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregatorFactory.java @@ -20,16 +20,13 @@ package org.elasticsearch.search.aggregations.bucket.nested; import org.elasticsearch.index.mapper.ObjectMapper; -import org.elasticsearch.search.SearchParseException; -import org.elasticsearch.search.aggregations.AggregationExecutionException; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.NonCollectingAggregator; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -40,11 +37,11 @@ public class ReverseNestedAggregatorFactory extends AggregatorFactory parent, + public ReverseNestedAggregatorFactory(String name, boolean unmapped, ObjectMapper parentObjectMapper, + SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactories, Map metaData) throws IOException { - super(name, type, context, parent, subFactories, metaData); + super(name, context, parent, subFactories, metaData); this.unmapped = unmapped; this.parentObjectMapper = parentObjectMapper; } @@ -61,7 +58,7 @@ public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBu private static final class Unmapped extends NonCollectingAggregator { - public Unmapped(String name, AggregationContext context, Aggregator parent, List pipelineAggregators, + Unmapped(String name, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeAggregatorFactory.java index f4103d87fbde3..156adcdc4f3ea 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeAggregatorFactory.java @@ -22,15 +22,14 @@ import org.elasticsearch.search.aggregations.Aggregator; import 
org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Range; import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Unmapped; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -40,13 +39,13 @@ public class AbstractRangeAggregatorFactory { private final InternalRange.Factory rangeFactory; - private final List ranges; + private final R[] ranges; private final boolean keyed; - public AbstractRangeAggregatorFactory(String name, Type type, ValuesSourceConfig config, List ranges, boolean keyed, - InternalRange.Factory rangeFactory, AggregationContext context, AggregatorFactory parent, + public AbstractRangeAggregatorFactory(String name, ValuesSourceConfig config, R[] ranges, boolean keyed, + InternalRange.Factory rangeFactory, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); this.ranges = ranges; this.keyed = keyed; this.rangeFactory = rangeFactory; @@ -55,7 +54,7 @@ public AbstractRangeAggregatorFactory(String name, Type type, ValuesSourceConfig @Override protected Aggregator createUnmapped(Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { - return new Unmapped(name, ranges, keyed, config.format(), context, parent, rangeFactory, pipelineAggregators, metaData); + return new Unmapped<>(name, ranges, keyed, config.format(), context, parent, rangeFactory, pipelineAggregators, metaData); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeBuilder.java index 13d10bd0a0ccf..635a0a6015c78 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeBuilder.java @@ -19,13 +19,17 @@ package org.elasticsearch.search.aggregations.bucket.range; +import org.apache.lucene.util.InPlaceMergeSorter; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Range; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; +import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import 
org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.ArrayList; @@ -40,7 +44,7 @@ public abstract class AbstractRangeBuilder rangeFactory) { - super(name, rangeFactory.type(), rangeFactory.getValueSourceType(), rangeFactory.getValueType()); + super(name, rangeFactory.getValueSourceType(), rangeFactory.getValueType()); this.rangeFactory = rangeFactory; } @@ -49,12 +53,46 @@ protected AbstractRangeBuilder(String name, InternalRange.Factory rangeFac */ protected AbstractRangeBuilder(StreamInput in, InternalRange.Factory rangeFactory, Writeable.Reader rangeReader) throws IOException { - super(in, rangeFactory.type(), rangeFactory.getValueSourceType(), rangeFactory.getValueType()); + super(in, rangeFactory.getValueSourceType(), rangeFactory.getValueType()); this.rangeFactory = rangeFactory; ranges = in.readList(rangeReader); keyed = in.readBoolean(); } + /** + * Resolve any strings in the ranges so we have a number value for the from + * and to of each range. The ranges are also sorted before being returned. + */ + protected Range[] processRanges(SearchContext context, ValuesSourceConfig config) { + Range[] ranges = new Range[this.ranges.size()]; + for (int i = 0; i < ranges.length; i++) { + ranges[i] = this.ranges.get(i).process(config.format(), context); + } + sortRanges(ranges); + return ranges; + } + + private static void sortRanges(final Range[] ranges) { + new InPlaceMergeSorter() { + + @Override + protected void swap(int i, int j) { + final Range tmp = ranges[i]; + ranges[i] = ranges[j]; + ranges[j] = tmp; + } + + @Override + protected int compare(int i, int j) { + int cmp = Double.compare(ranges[i].from, ranges[j].from); + if (cmp == 0) { + cmp = Double.compare(ranges[i].to, ranges[j].to); + } + return cmp; + } + }.sort(0, ranges.length); + } + @Override protected void innerWriteTo(StreamOutput out) throws IOException { out.writeVInt(ranges.size()); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/BinaryRangeAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/BinaryRangeAggregator.java index 77a6fc547b031..7ec3b3bb9c821 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/BinaryRangeAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/BinaryRangeAggregator.java @@ -39,8 +39,8 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.bucket.BucketsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; /** A range aggregator for values that are stored in SORTED_SET doc values. 
*/ public final class BinaryRangeAggregator extends BucketsAggregator { @@ -78,10 +78,10 @@ private static int compare(BytesRef a, BytesRef b, int m) { public BinaryRangeAggregator(String name, AggregatorFactories factories, ValuesSource.Bytes valuesSource, DocValueFormat format, - List ranges, boolean keyed, AggregationContext aggregationContext, + List ranges, boolean keyed, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { - super(name, factories, aggregationContext, parent, pipelineAggregators, metaData); + super(name, factories, context, parent, pipelineAggregators, metaData); this.valuesSource = valuesSource; this.format = format; this.keyed = keyed; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/BinaryRangeAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/BinaryRangeAggregatorFactory.java index fda822cf11bf2..a8d74a7d83615 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/BinaryRangeAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/BinaryRangeAggregatorFactory.java @@ -18,19 +18,18 @@ */ package org.elasticsearch.search.aggregations.bucket.range; -import java.io.IOException; -import java.util.List; -import java.util.Map; - import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; + +import java.io.IOException; +import java.util.List; +import java.util.Map; public class BinaryRangeAggregatorFactory extends ValuesSourceAggregatorFactory { @@ -38,13 +37,13 @@ public class BinaryRangeAggregatorFactory private final List ranges; private final boolean keyed; - public BinaryRangeAggregatorFactory(String name, Type type, + public BinaryRangeAggregatorFactory(String name, ValuesSourceConfig config, List ranges, boolean keyed, - AggregationContext context, + SearchContext context, AggregatorFactory parent, Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); this.ranges = ranges; this.keyed = keyed; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/InternalBinaryRange.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/InternalBinaryRange.java index ed46062ee0662..c8b50d5b327fc 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/InternalBinaryRange.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/InternalBinaryRange.java @@ -42,6 +42,7 @@ public final class InternalBinaryRange extends InternalMultiBucketAggregation implements Range { + public static class Bucket extends InternalMultiBucketAggregation.InternalBucket implements Range.Bucket { private final transient DocValueFormat 
format; @@ -131,16 +132,16 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws } else { builder.startObject(); if (key != null) { - builder.field(CommonFields.KEY, key); + builder.field(CommonFields.KEY.getPreferredName(), key); } } if (from != null) { - builder.field(CommonFields.FROM, getFrom()); + builder.field(CommonFields.FROM.getPreferredName(), getFrom()); } if (to != null) { - builder.field(CommonFields.TO, getTo()); + builder.field(CommonFields.TO.getPreferredName(), getTo()); } - builder.field(CommonFields.DOC_COUNT, docCount); + builder.field(CommonFields.DOC_COUNT.getPreferredName(), docCount); aggregations.toXContentInternal(builder, params); builder.endObject(); return builder; @@ -204,7 +205,7 @@ public String getWriteableName() { } @Override - public List getBuckets() { + public List getBuckets() { return unmodifiableList(buckets); } @@ -249,9 +250,9 @@ public InternalAggregation doReduce(List aggregations, Redu public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { if (keyed) { - builder.startObject(CommonFields.BUCKETS); + builder.startObject(CommonFields.BUCKETS.getPreferredName()); } else { - builder.startArray(CommonFields.BUCKETS); + builder.startArray(CommonFields.BUCKETS.getPreferredName()); } for (Bucket range : buckets) { range.toXContent(builder, params); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/InternalRange.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/InternalRange.java index 54ee27bfa96fd..db1d2d087750b 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/InternalRange.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/InternalRange.java @@ -144,21 +144,21 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.startObject(key); } else { builder.startObject(); - builder.field(CommonFields.KEY, key); + builder.field(CommonFields.KEY.getPreferredName(), key); } if (!Double.isInfinite(from)) { - builder.field(CommonFields.FROM, from); + builder.field(CommonFields.FROM.getPreferredName(), from); if (format != DocValueFormat.RAW) { - builder.field(CommonFields.FROM_AS_STRING, format.format(from)); + builder.field(CommonFields.FROM_AS_STRING.getPreferredName(), format.format(from)); } } if (!Double.isInfinite(to)) { - builder.field(CommonFields.TO, to); + builder.field(CommonFields.TO.getPreferredName(), to); if (format != DocValueFormat.RAW) { - builder.field(CommonFields.TO_AS_STRING, format.format(to)); + builder.field(CommonFields.TO_AS_STRING.getPreferredName(), format.format(to)); } } - builder.field(CommonFields.DOC_COUNT, docCount); + builder.field(CommonFields.DOC_COUNT.getPreferredName(), docCount); aggregations.toXContentInternal(builder, params); builder.endObject(); return builder; @@ -178,10 +178,6 @@ public void writeTo(StreamOutput out) throws IOException { } public static class Factory> { - public Type type() { - return RangeAggregationBuilder.TYPE; - } - public ValuesSourceType getValueSourceType() { return ValuesSourceType.NUMERIC; } @@ -309,9 +305,9 @@ public InternalAggregation doReduce(List aggregations, Redu @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { if (keyed) { - builder.startObject(CommonFields.BUCKETS); + builder.startObject(CommonFields.BUCKETS.getPreferredName()); } else { - builder.startArray(CommonFields.BUCKETS); + 
builder.startArray(CommonFields.BUCKETS.getPreferredName()); } for (B range : ranges) { range.toXContent(builder, params); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ParsedBinaryRange.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ParsedBinaryRange.java new file mode 100644 index 0000000000000..955724aa66fe1 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ParsedBinaryRange.java @@ -0,0 +1,169 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.bucket.range; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParserUtils; +import org.elasticsearch.search.aggregations.Aggregation; +import org.elasticsearch.search.aggregations.Aggregations; +import org.elasticsearch.search.aggregations.ParsedMultiBucketAggregation; +import org.elasticsearch.search.aggregations.bucket.range.ip.IpRangeAggregationBuilder; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; + +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + +public class ParsedBinaryRange extends ParsedMultiBucketAggregation implements Range { + + @Override + public String getType() { + return IpRangeAggregationBuilder.NAME; + } + + @Override + public List getBuckets() { + return buckets; + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedBinaryRange.class.getSimpleName(), true, ParsedBinaryRange::new); + static { + declareMultiBucketAggregationFields(PARSER, + parser -> ParsedBucket.fromXContent(parser, false), + parser -> ParsedBucket.fromXContent(parser, true)); + } + + public static ParsedBinaryRange fromXContent(XContentParser parser, String name) throws IOException { + ParsedBinaryRange aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } + + public static class ParsedBucket extends ParsedMultiBucketAggregation.ParsedBucket implements Range.Bucket { + + private String key; + private String from; + private String to; + + @Override + public Object getKey() { + return key; + } + + @Override + public String getKeyAsString() { + return key; + } + + @Override + public Object getFrom() { + return from; + } + + @Override + public String getFromAsString() { + return from; + } + + @Override + public Object getTo() { + return to; + } + + @Override + public String getToAsString() { + return to; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + if (isKeyed()) { + 
builder.startObject(key != null ? key : rangeKey(from, to)); + } else { + builder.startObject(); + if (key != null) { + builder.field(CommonFields.KEY.getPreferredName(), key); + } + } + if (from != null) { + builder.field(CommonFields.FROM.getPreferredName(), getFrom()); + } + if (to != null) { + builder.field(CommonFields.TO.getPreferredName(), getTo()); + } + builder.field(CommonFields.DOC_COUNT.getPreferredName(), getDocCount()); + getAggregations().toXContentInternal(builder, params); + builder.endObject(); + return builder; + } + + static ParsedBucket fromXContent(final XContentParser parser, final boolean keyed) throws IOException { + final ParsedBucket bucket = new ParsedBucket(); + bucket.setKeyed(keyed); + XContentParser.Token token = parser.currentToken(); + String currentFieldName = parser.currentName(); + + String rangeKey = null; + if (keyed) { + ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, parser::getTokenLocation); + rangeKey = currentFieldName; + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.nextToken(), parser::getTokenLocation); + } + + List aggregations = new ArrayList<>(); + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (CommonFields.KEY.getPreferredName().equals(currentFieldName)) { + bucket.key = parser.text(); + } else if (CommonFields.DOC_COUNT.getPreferredName().equals(currentFieldName)) { + bucket.setDocCount(parser.longValue()); + } else if (CommonFields.FROM.getPreferredName().equals(currentFieldName)) { + bucket.from = parser.text(); + } else if (CommonFields.TO.getPreferredName().equals(currentFieldName)) { + bucket.to = parser.text(); + } + } else if (token == XContentParser.Token.START_OBJECT) { + XContentParserUtils.parseTypedKeysObject(parser, Aggregation.TYPED_KEYS_DELIMITER, Aggregation.class, + aggregations::add); + } + } + bucket.setAggregations(new Aggregations(aggregations)); + + if (keyed) { + if (rangeKey(bucket.from, bucket.to).equals(rangeKey)) { + bucket.key = null; + } else { + bucket.key = rangeKey; + } + } + return bucket; + } + + private static String rangeKey(String from, String to) { + return (from == null ? "*" : from) + '-' + (to == null ? "*" : to); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ParsedRange.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ParsedRange.java new file mode 100644 index 0000000000000..6095348e68863 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ParsedRange.java @@ -0,0 +1,194 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.bucket.range; + +import org.elasticsearch.common.CheckedFunction; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParserUtils; +import org.elasticsearch.search.aggregations.Aggregation; +import org.elasticsearch.search.aggregations.Aggregations; +import org.elasticsearch.search.aggregations.ParsedMultiBucketAggregation; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.function.Supplier; + +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + +public class ParsedRange extends ParsedMultiBucketAggregation implements Range { + + @Override + public String getType() { + return RangeAggregationBuilder.NAME; + } + + @Override + public List getBuckets() { + return buckets; + } + + protected static void declareParsedRangeFields(final ObjectParser objectParser, + final CheckedFunction bucketParser, + final CheckedFunction keyedBucketParser) { + declareMultiBucketAggregationFields(objectParser, bucketParser::apply, keyedBucketParser::apply); + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedRange.class.getSimpleName(), true, ParsedRange::new); + static { + declareParsedRangeFields(PARSER, + parser -> ParsedBucket.fromXContent(parser, false), + parser -> ParsedBucket.fromXContent(parser, true)); + } + + public static ParsedRange fromXContent(XContentParser parser, String name) throws IOException { + ParsedRange aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } + + public static class ParsedBucket extends ParsedMultiBucketAggregation.ParsedBucket implements Range.Bucket { + + protected String key; + protected double from = Double.NEGATIVE_INFINITY; + protected String fromAsString; + protected double to = Double.POSITIVE_INFINITY; + protected String toAsString; + + @Override + public String getKey() { + return getKeyAsString(); + } + + @Override + public String getKeyAsString() { + String keyAsString = super.getKeyAsString(); + if (keyAsString != null) { + return keyAsString; + } + return key; + } + + @Override + public Object getFrom() { + return from; + } + + @Override + public String getFromAsString() { + if (fromAsString != null) { + return fromAsString; + } + return doubleAsString(from); + } + + @Override + public Object getTo() { + return to; + } + + @Override + public String getToAsString() { + if (toAsString != null) { + return toAsString; + } + return doubleAsString(to); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + if (isKeyed()) { + builder.startObject(key); + } else { + builder.startObject(); + builder.field(CommonFields.KEY.getPreferredName(), key); + } + if (Double.isInfinite(from) == false) { + builder.field(CommonFields.FROM.getPreferredName(), from); + if (fromAsString != null) { + builder.field(CommonFields.FROM_AS_STRING.getPreferredName(), fromAsString); + } + } + if (Double.isInfinite(to) == false) { + builder.field(CommonFields.TO.getPreferredName(), to); + if (toAsString != null) { + builder.field(CommonFields.TO_AS_STRING.getPreferredName(), toAsString); + } + } + builder.field(CommonFields.DOC_COUNT.getPreferredName(), getDocCount()); + getAggregations().toXContentInternal(builder, params); + builder.endObject(); + return 
builder; + } + + private static String doubleAsString(double d) { + return Double.isInfinite(d) ? null : Double.toString(d); + } + + protected static B parseRangeBucketXContent(final XContentParser parser, + final Supplier bucketSupplier, + final boolean keyed) throws IOException { + final B bucket = bucketSupplier.get(); + bucket.setKeyed(keyed); + XContentParser.Token token = parser.currentToken(); + String currentFieldName = parser.currentName(); + if (keyed) { + ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, parser::getTokenLocation); + bucket.key = currentFieldName; + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.nextToken(), parser::getTokenLocation); + } + + List aggregations = new ArrayList<>(); + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (CommonFields.KEY_AS_STRING.getPreferredName().equals(currentFieldName)) { + bucket.setKeyAsString(parser.text()); + } else if (CommonFields.KEY.getPreferredName().equals(currentFieldName)) { + bucket.key = parser.text(); + } else if (CommonFields.DOC_COUNT.getPreferredName().equals(currentFieldName)) { + bucket.setDocCount(parser.longValue()); + } else if (CommonFields.FROM.getPreferredName().equals(currentFieldName)) { + bucket.from = parser.doubleValue(); + } else if (CommonFields.FROM_AS_STRING.getPreferredName().equals(currentFieldName)) { + bucket.fromAsString = parser.text(); + } else if (CommonFields.TO.getPreferredName().equals(currentFieldName)) { + bucket.to = parser.doubleValue(); + } else if (CommonFields.TO_AS_STRING.getPreferredName().equals(currentFieldName)) { + bucket.toAsString = parser.text(); + } + } else if (token == XContentParser.Token.START_OBJECT) { + XContentParserUtils.parseTypedKeysObject(parser, Aggregation.TYPED_KEYS_DELIMITER, Aggregation.class, + aggregations::add); + } + } + bucket.setAggregations(new Aggregations(aggregations)); + return bucket; + } + + static ParsedBucket fromXContent(final XContentParser parser, final boolean keyed) throws IOException { + return parseRangeBucketXContent(parser, ParsedBucket::new, keyed); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregationBuilder.java index c815ae9d3cf1e..105dbbc545830 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregationBuilder.java @@ -20,19 +20,43 @@ package org.elasticsearch.search.aggregations.bucket.range; import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Range; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import 
org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; public class RangeAggregationBuilder extends AbstractRangeBuilder { public static final String NAME = "range"; - static final Type TYPE = new Type(NAME); + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(RangeAggregationBuilder.NAME); + ValuesSourceParserHelper.declareNumericFields(PARSER, true, true, false); + PARSER.declareBoolean(RangeAggregationBuilder::keyed, RangeAggregator.KEYED_FIELD); + + PARSER.declareObjectArray((agg, ranges) -> { + for (Range range : ranges) { + agg.addRange(range); + } + }, RangeAggregationBuilder::parseRange, RangeAggregator.RANGES_FIELD); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new RangeAggregationBuilder(aggregationName), context); + } + + private static Range parseRange(XContentParser parser, QueryParseContext context) throws IOException { + return Range.fromXContent(parser); + } public RangeAggregationBuilder(String name) { super(name, InternalRange.FACTORY); @@ -112,14 +136,19 @@ public RangeAggregationBuilder addUnboundedFrom(double from) { } @Override - protected RangeAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, + protected RangeAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new RangeAggregatorFactory(name, type, config, ranges, keyed, rangeFactory, context, parent, subFactoriesBuilder, + // We need to call processRanges here so they are parsed before we make the decision of whether to cache the request + Range[] ranges = processRanges(context, config); + if (ranges.length == 0) { + throw new IllegalArgumentException("No [ranges] specified for the [" + this.getName() + "] aggregation"); + } + return new RangeAggregatorFactory(name, config, ranges, keyed, rangeFactory, context, parent, subFactoriesBuilder, metaData); } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregator.java index c83e2d2c72141..a2348c4d19d54 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregator.java @@ -19,9 +19,7 @@ package org.elasticsearch.search.aggregations.bucket.range; import org.apache.lucene.index.LeafReaderContext; -import org.apache.lucene.util.InPlaceMergeSorter; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; @@ -39,7 +37,6 @@ import org.elasticsearch.search.aggregations.NonCollectingAggregator; import org.elasticsearch.search.aggregations.bucket.BucketsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import 
org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.internal.SearchContext; @@ -119,15 +116,15 @@ public Range process(DocValueFormat parser, SearchContext context) { Double from = this.from; Double to = this.to; if (fromAsStr != null) { - from = parser.parseDouble(fromAsStr, false, context::nowInMillis); + from = parser.parseDouble(fromAsStr, false, context.getQueryShardContext()::nowInMillis); } if (toAsStr != null) { - to = parser.parseDouble(toAsStr, false, context::nowInMillis); + to = parser.parseDouble(toAsStr, false, context.getQueryShardContext()::nowInMillis); } return new Range(key, from, fromAsStr, to, toAsStr); } - public static Range fromXContent(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException { + public static Range fromXContent(XContentParser parser) throws IOException { XContentParser.Token token; String currentFieldName = null; double from = Double.NEGATIVE_INFINITY; @@ -139,17 +136,17 @@ public static Range fromXContent(XContentParser parser, ParseFieldMatcher parseF if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.VALUE_NUMBER) { - if (parseFieldMatcher.match(currentFieldName, FROM_FIELD)) { + if (FROM_FIELD.match(currentFieldName)) { from = parser.doubleValue(); - } else if (parseFieldMatcher.match(currentFieldName, TO_FIELD)) { + } else if (TO_FIELD.match(currentFieldName)) { to = parser.doubleValue(); } } else if (token == XContentParser.Token.VALUE_STRING) { - if (parseFieldMatcher.match(currentFieldName, FROM_FIELD)) { + if (FROM_FIELD.match(currentFieldName)) { fromAsStr = parser.text(); - } else if (parseFieldMatcher.match(currentFieldName, TO_FIELD)) { + } else if (TO_FIELD.match(currentFieldName)) { toAsStr = parser.text(); - } else if (parseFieldMatcher.match(currentFieldName, KEY_FIELD)) { + } else if (KEY_FIELD.match(currentFieldName)) { key = parser.text(); } } @@ -210,21 +207,17 @@ public boolean equals(Object obj) { final double[] maxTo; public RangeAggregator(String name, AggregatorFactories factories, ValuesSource.Numeric valuesSource, DocValueFormat format, - InternalRange.Factory rangeFactory, List ranges, boolean keyed, AggregationContext aggregationContext, + InternalRange.Factory rangeFactory, Range[] ranges, boolean keyed, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { - super(name, factories, aggregationContext, parent, pipelineAggregators, metaData); + super(name, factories, context, parent, pipelineAggregators, metaData); assert valuesSource != null; this.valuesSource = valuesSource; this.format = format; this.keyed = keyed; this.rangeFactory = rangeFactory; - this.ranges = new Range[ranges.size()]; - for (int i = 0; i < this.ranges.length; i++) { - this.ranges[i] = ranges.get(i).process(format, context.searchContext()); - } - sortRanges(this.ranges); + this.ranges = ranges; maxTo = new double[this.ranges.length]; maxTo[0] = this.ranges[0].to; @@ -337,45 +330,21 @@ public InternalAggregation buildEmptyAggregation() { return rangeFactory.create(name, buckets, format, keyed, pipelineAggregators(), metaData()); } - private static void sortRanges(final Range[] ranges) { - new InPlaceMergeSorter() { - - @Override - protected void swap(int i, int j) { - final Range tmp = ranges[i]; - ranges[i] = ranges[j]; - ranges[j] = tmp; - } - - @Override - protected int compare(int i, int j) { - int cmp = Double.compare(ranges[i].from, ranges[j].from); - if 
(cmp == 0) { - cmp = Double.compare(ranges[i].to, ranges[j].to); - } - return cmp; - } - }.sort(0, ranges.length); - } - public static class Unmapped extends NonCollectingAggregator { - private final List ranges; + private final R[] ranges; private final boolean keyed; private final InternalRange.Factory factory; private final DocValueFormat format; @SuppressWarnings("unchecked") - public Unmapped(String name, List ranges, boolean keyed, DocValueFormat format, - AggregationContext context, + public Unmapped(String name, R[] ranges, boolean keyed, DocValueFormat format, + SearchContext context, Aggregator parent, InternalRange.Factory factory, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); - this.ranges = new ArrayList<>(); - for (R range : ranges) { - this.ranges.add((R) range.process(format, context.searchContext())); - } + this.ranges = ranges; this.keyed = keyed; this.format = format; this.factory = factory; @@ -384,7 +353,7 @@ public Unmapped(String name, List ranges, boolean keyed, DocValueFormat forma @Override public InternalAggregation buildEmptyAggregation() { InternalAggregations subAggs = buildEmptySubAggregations(); - List buckets = new ArrayList<>(ranges.size()); + List buckets = new ArrayList<>(ranges.length); for (RangeAggregator.Range range : ranges) { buckets.add(factory.createBucket(range.key, range.from, range.to, 0, subAggs, keyed, format)); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregatorFactory.java index 5dec4c40c4580..d1dc3e71b5c11 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregatorFactory.java @@ -21,23 +21,21 @@ import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.bucket.range.InternalRange.Factory; import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Range; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; -import java.util.List; import java.util.Map; public class RangeAggregatorFactory extends AbstractRangeAggregatorFactory { - public RangeAggregatorFactory(String name, Type type, ValuesSourceConfig config, List ranges, boolean keyed, - Factory rangeFactory, AggregationContext context, AggregatorFactory parent, + public RangeAggregatorFactory(String name, ValuesSourceConfig config, Range[] ranges, boolean keyed, + Factory rangeFactory, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, ranges, keyed, rangeFactory, context, parent, subFactoriesBuilder, metaData); + super(name, config, ranges, keyed, rangeFactory, context, parent, subFactoriesBuilder, metaData); } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeParser.java 
b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeParser.java deleted file mode 100644 index c8cb2c767157a..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeParser.java +++ /dev/null @@ -1,95 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.search.aggregations.bucket.range; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentParser.Token; -import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Range; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.NumericValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.ArrayList; -import java.util.List; -import java.util.Map; - -/** - * - */ -public class RangeParser extends NumericValuesSourceParser { - - public RangeParser() { - this(true, true, false); - } - - /** - * Used by subclasses that parse slightly different kinds of ranges. 
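For orientation, a minimal sketch of the builder-side equivalent of the range aggregation whose XContent parsing now lives on the static ObjectParser shown above. The field name and bounds are hypothetical, not taken from this patch; the helper methods (`field`, `keyed`, `addRange`, `addUnboundedTo`, `addUnboundedFrom`) are the existing public API of `RangeAggregationBuilder`.

```java
// Illustrative sketch only; "price" and the numeric bounds are hypothetical values.
import org.elasticsearch.search.aggregations.bucket.range.RangeAggregationBuilder;

class RangeAggregationExample {
    static RangeAggregationBuilder priceRanges() {
        return new RangeAggregationBuilder("price_ranges")
                .field("price")          // numeric source, declared via declareNumericFields
                .keyed(true)             // corresponds to RangeAggregator.KEYED_FIELD
                .addUnboundedTo(50)      // { "to": 50 }
                .addRange(50, 100)       // { "from": 50, "to": 100 }
                .addUnboundedFrom(100);  // { "from": 100 }
    }
}
```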
- */ - protected RangeParser(boolean scriptable, boolean formattable, boolean timezoneAware) { - super(scriptable, formattable, timezoneAware); - } - - @Override - protected AbstractRangeBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - RangeAggregationBuilder factory = new RangeAggregationBuilder(aggregationName); - @SuppressWarnings("unchecked") - List ranges = (List) otherOptions.get(RangeAggregator.RANGES_FIELD); - for (Range range : ranges) { - factory.addRange(range); - } - Boolean keyed = (Boolean) otherOptions.get(RangeAggregator.KEYED_FIELD); - if (keyed != null) { - factory.keyed(keyed); - } - return factory; - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, Token token, - XContentParseContext context, Map otherOptions) throws IOException { - XContentParser parser = context.getParser(); - if (token == XContentParser.Token.START_ARRAY) { - if (context.matchField(currentFieldName, RangeAggregator.RANGES_FIELD)) { - List ranges = new ArrayList<>(); - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - Range range = parseRange(parser, context.getParseFieldMatcher()); - ranges.add(range); - } - otherOptions.put(RangeAggregator.RANGES_FIELD, ranges); - return true; - } - } else if (token == XContentParser.Token.VALUE_BOOLEAN) { - if (context.matchField(currentFieldName, RangeAggregator.KEYED_FIELD)) { - boolean keyed = parser.booleanValue(); - otherOptions.put(RangeAggregator.KEYED_FIELD, keyed); - return true; - } - } - return false; - } - - protected Range parseRange(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException { - return Range.fromXContent(parser, parseFieldMatcher); - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/DateRangeAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/DateRangeAggregationBuilder.java index a75b071569c62..2c686fbb97768 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/DateRangeAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/DateRangeAggregationBuilder.java @@ -20,22 +20,46 @@ package org.elasticsearch.search.aggregations.bucket.range.date; import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.bucket.range.AbstractRangeBuilder; import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator; import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Range; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; +import org.elasticsearch.search.internal.SearchContext; import org.joda.time.DateTime; import java.io.IOException; public class 
DateRangeAggregationBuilder extends AbstractRangeBuilder { public static final String NAME = "date_range"; - static final Type TYPE = new Type(NAME); + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(DateRangeAggregationBuilder.NAME); + ValuesSourceParserHelper.declareNumericFields(PARSER, true, true, true); + PARSER.declareBoolean(DateRangeAggregationBuilder::keyed, RangeAggregator.KEYED_FIELD); + + PARSER.declareObjectArray((agg, ranges) -> { + for (Range range : ranges) { + agg.addRange(range); + } + }, DateRangeAggregationBuilder::parseRange, RangeAggregator.RANGES_FIELD); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new DateRangeAggregationBuilder(aggregationName), context); + } + + private static Range parseRange(XContentParser parser, QueryParseContext context) throws IOException { + return Range.fromXContent(parser); + } public DateRangeAggregationBuilder(String name) { super(name, InternalDateRange.FACTORY); @@ -49,7 +73,7 @@ public DateRangeAggregationBuilder(StreamInput in) throws IOException { } @Override - public String getWriteableName() { + public String getType() { return NAME; } @@ -257,9 +281,15 @@ public DateRangeAggregationBuilder addUnboundedFrom(DateTime from) { } @Override - protected DateRangeAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, + protected DateRangeAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new DateRangeAggregatorFactory(name, type, config, ranges, keyed, rangeFactory, context, parent, subFactoriesBuilder, + // We need to call processRanges here so they are parsed and we know whether `now` has been used before we make + // the decision of whether to cache the request + Range[] ranges = processRanges(context, config); + if (ranges.length == 0) { + throw new IllegalArgumentException("No [ranges] specified for the [" + this.getName() + "] aggregation"); + } + return new DateRangeAggregatorFactory(name, config, ranges, keyed, rangeFactory, context, parent, subFactoriesBuilder, metaData); } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/DateRangeAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/DateRangeAggregatorFactory.java index d3bb7ac6238d0..3d674c5a04006 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/DateRangeAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/DateRangeAggregatorFactory.java @@ -21,24 +21,22 @@ import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.bucket.range.AbstractRangeAggregatorFactory; import org.elasticsearch.search.aggregations.bucket.range.InternalRange.Factory; import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Range; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; -import 
java.util.List; import java.util.Map; public class DateRangeAggregatorFactory extends AbstractRangeAggregatorFactory { - public DateRangeAggregatorFactory(String name, Type type, ValuesSourceConfig config, List ranges, boolean keyed, - Factory rangeFactory, AggregationContext context, AggregatorFactory parent, + public DateRangeAggregatorFactory(String name, ValuesSourceConfig config, Range[] ranges, boolean keyed, + Factory rangeFactory, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, ranges, keyed, rangeFactory, context, parent, subFactoriesBuilder, metaData); + super(name, config, ranges, keyed, rangeFactory, context, parent, subFactoriesBuilder, metaData); } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/DateRangeParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/DateRangeParser.java deleted file mode 100644 index f8eff715abb3b..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/DateRangeParser.java +++ /dev/null @@ -1,55 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
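Similarly, a hedged sketch of the date_range builder usage that corresponds to the ObjectParser-based parsing above; the field, format, and `now-1y` bounds are illustrative only. Because a bound may contain `now`, the new innerBuild resolves the ranges via processRanges before the request-cache decision is made, as noted in the comment above.

```java
// Illustrative sketch only; field name, format, and date-math bounds are hypothetical.
import org.elasticsearch.search.aggregations.bucket.range.date.DateRangeAggregationBuilder;

class DateRangeExample {
    static DateRangeAggregationBuilder publishWindows() {
        return new DateRangeAggregationBuilder("publish_windows")
                .field("publish_date")
                .format("yyyy-MM-dd")
                .addUnboundedTo("old", "now-1y")       // documents older than one year
                .addUnboundedFrom("recent", "now-1y"); // documents from the last year
    }
}
```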
- */ -package org.elasticsearch.search.aggregations.bucket.range.date; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator; -import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Range; -import org.elasticsearch.search.aggregations.bucket.range.RangeParser; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.util.List; -import java.util.Map; - -/** - * - */ -public class DateRangeParser extends RangeParser { - - public DateRangeParser() { - super(true, true, true); - } - - @Override - protected DateRangeAggregationBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - DateRangeAggregationBuilder factory = new DateRangeAggregationBuilder(aggregationName); - @SuppressWarnings("unchecked") - List ranges = (List) otherOptions.get(RangeAggregator.RANGES_FIELD); - for (Range range : ranges) { - factory.addRange(range); - } - Boolean keyed = (Boolean) otherOptions.get(RangeAggregator.KEYED_FIELD); - if (keyed != null) { - factory.keyed(keyed); - } - return factory; - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/InternalDateRange.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/InternalDateRange.java index f0dfec2312f4a..b0cb11624ef10 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/InternalDateRange.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/InternalDateRange.java @@ -87,11 +87,6 @@ DocValueFormat format() { } public static class Factory extends InternalRange.Factory { - @Override - public Type type() { - return DateRangeAggregationBuilder.TYPE; - } - @Override public ValueType getValueType() { return ValueType.DATE; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/ParsedDateRange.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/ParsedDateRange.java new file mode 100644 index 0000000000000..b8f2f008cec65 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/ParsedDateRange.java @@ -0,0 +1,74 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
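The ParsedDateRange buckets introduced below re-interpret the numeric from/to values as epoch milliseconds and surface them as UTC DateTime objects. A tiny, self-contained illustration of that conversion (the value is hypothetical):

```java
// Self-contained illustration of the millis-to-DateTime conversion performed by
// ParsedDateRange.ParsedBucket#getFrom()/getTo(); the input value is hypothetical.
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

class DateRangeBoundExample {
    public static void main(String[] args) {
        double fromMillis = 1.4849568E12;  // as parsed from the bucket's "from" field
        DateTime from = new DateTime((long) fromMillis, DateTimeZone.UTC);
        System.out.println(from);          // 2017-01-21T00:00:00.000Z
    }
}
```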
+ */ + +package org.elasticsearch.search.aggregations.bucket.range.date; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.bucket.range.ParsedRange; +import org.joda.time.DateTime; +import org.joda.time.DateTimeZone; + +import java.io.IOException; + +public class ParsedDateRange extends ParsedRange { + + @Override + public String getType() { + return DateRangeAggregationBuilder.NAME; + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedDateRange.class.getSimpleName(), true, ParsedDateRange::new); + static { + declareParsedRangeFields(PARSER, + parser -> ParsedBucket.fromXContent(parser, false), + parser -> ParsedBucket.fromXContent(parser, true)); + } + + public static ParsedDateRange fromXContent(XContentParser parser, String name) throws IOException { + ParsedDateRange aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } + + public static class ParsedBucket extends ParsedRange.ParsedBucket { + + @Override + public Object getFrom() { + return doubleAsDateTime(from); + } + + @Override + public Object getTo() { + return doubleAsDateTime(to); + } + + private static DateTime doubleAsDateTime(Double d) { + if (d == null || Double.isInfinite(d)) { + return null; + } + return new DateTime(d.longValue(), DateTimeZone.UTC); + } + + static ParsedBucket fromXContent(final XContentParser parser, final boolean keyed) throws IOException { + return parseRangeBucketXContent(parser, ParsedBucket::new, keyed); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceAggregationBuilder.java index 4a4cab2affa03..1484fae8d44fb 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceAggregationBuilder.java @@ -19,23 +19,29 @@ package org.elasticsearch.search.aggregations.bucket.range.geodistance; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.geo.GeoDistance; import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.unit.DistanceUnit; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParser.Token; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.bucket.range.InternalRange; import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator; -import org.elasticsearch.search.aggregations.bucket.range.geodistance.GeoDistanceParser.Range; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import 
org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.ArrayList; @@ -44,24 +50,170 @@ public class GeoDistanceAggregationBuilder extends ValuesSourceAggregationBuilder { public static final String NAME = "geo_distance"; - public static final Type TYPE = new Type(NAME); + static final ParseField ORIGIN_FIELD = new ParseField("origin", "center", "point", "por"); + static final ParseField UNIT_FIELD = new ParseField("unit"); + static final ParseField DISTANCE_TYPE_FIELD = new ParseField("distance_type"); - private final GeoPoint origin; + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(GeoDistanceAggregationBuilder.NAME); + ValuesSourceParserHelper.declareGeoFields(PARSER, true, false); + + PARSER.declareBoolean(GeoDistanceAggregationBuilder::keyed, RangeAggregator.KEYED_FIELD); + + PARSER.declareObjectArray((agg, ranges) -> { + for (Range range : ranges) { + agg.addRange(range); + } + }, GeoDistanceAggregationBuilder::parseRange, RangeAggregator.RANGES_FIELD); + + PARSER.declareField(GeoDistanceAggregationBuilder::unit, p -> DistanceUnit.fromString(p.text()), + UNIT_FIELD, ObjectParser.ValueType.STRING); + + PARSER.declareField(GeoDistanceAggregationBuilder::distanceType, p -> GeoDistance.fromString(p.text()), + DISTANCE_TYPE_FIELD, ObjectParser.ValueType.STRING); + + PARSER.declareField(GeoDistanceAggregationBuilder::origin, GeoDistanceAggregationBuilder::parseGeoPoint, + ORIGIN_FIELD, ObjectParser.ValueType.OBJECT_ARRAY_OR_STRING); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + GeoDistanceAggregationBuilder builder = PARSER.parse(context.parser(), new GeoDistanceAggregationBuilder(aggregationName), context); + if (builder.origin() == null) { + throw new IllegalArgumentException("Aggregation [" + aggregationName + "] must define an [origin]."); + } + return builder; + } + + public static class Range extends RangeAggregator.Range { + public Range(String key, Double from, Double to) { + super(key(key, from, to), from == null ? 0 : from, to); + } + + /** + * Read from a stream. + */ + public Range(StreamInput in) throws IOException { + super(in.readOptionalString(), in.readDouble(), in.readDouble()); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeOptionalString(key); + out.writeDouble(from); + out.writeDouble(to); + } + + private static String key(String key, Double from, Double to) { + if (key != null) { + return key; + } + StringBuilder sb = new StringBuilder(); + sb.append((from == null || from == 0) ? "*" : from); + sb.append("-"); + sb.append((to == null || Double.isInfinite(to)) ? 
"*" : to); + return sb.toString(); + } + } + + private static GeoPoint parseGeoPoint(XContentParser parser, QueryParseContext context) throws IOException { + Token token = parser.currentToken(); + if (token == XContentParser.Token.VALUE_STRING) { + GeoPoint point = new GeoPoint(); + point.resetFromString(parser.text()); + return point; + } + if (token == XContentParser.Token.START_ARRAY) { + double lat = Double.NaN; + double lon = Double.NaN; + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + if (Double.isNaN(lon)) { + lon = parser.doubleValue(); + } else if (Double.isNaN(lat)) { + lat = parser.doubleValue(); + } else { + throw new ParsingException(parser.getTokenLocation(), "malformed [" + ORIGIN_FIELD.getPreferredName() + + "]: a geo point array must be of the form [lon, lat]"); + } + } + return new GeoPoint(lat, lon); + } + if (token == XContentParser.Token.START_OBJECT) { + String currentFieldName = null; + double lat = Double.NaN; + double lon = Double.NaN; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token == XContentParser.Token.VALUE_NUMBER) { + if ("lat".equals(currentFieldName)) { + lat = parser.doubleValue(); + } else if ("lon".equals(currentFieldName)) { + lon = parser.doubleValue(); + } + } + } + if (Double.isNaN(lat) || Double.isNaN(lon)) { + throw new ParsingException(parser.getTokenLocation(), + "malformed [" + currentFieldName + "] geo point object. either [lat] or [lon] (or both) are " + "missing"); + } + return new GeoPoint(lat, lon); + } + + // should not happen since we only parse geo points when we encounter a string, an object or an array + throw new IllegalArgumentException("Unexpected token [" + token + "] while parsing geo point"); + } + + private static Range parseRange(XContentParser parser, QueryParseContext context) throws IOException { + String fromAsStr = null; + String toAsStr = null; + double from = 0.0; + double to = Double.POSITIVE_INFINITY; + String key = null; + String toOrFromOrKey = null; + Token token; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + toOrFromOrKey = parser.currentName(); + } else if (token == XContentParser.Token.VALUE_NUMBER) { + if (Range.FROM_FIELD.match(toOrFromOrKey)) { + from = parser.doubleValue(); + } else if (Range.TO_FIELD.match(toOrFromOrKey)) { + to = parser.doubleValue(); + } + } else if (token == XContentParser.Token.VALUE_STRING) { + if (Range.KEY_FIELD.match(toOrFromOrKey)) { + key = parser.text(); + } else if (Range.FROM_FIELD.match(toOrFromOrKey)) { + fromAsStr = parser.text(); + } else if (Range.TO_FIELD.match(toOrFromOrKey)) { + toAsStr = parser.text(); + } + } + } + if (fromAsStr != null || toAsStr != null) { + return new Range(key, Double.parseDouble(fromAsStr), Double.parseDouble(toAsStr)); + } else { + return new Range(key, from, to); + } + } + + private GeoPoint origin; private List ranges = new ArrayList<>(); private DistanceUnit unit = DistanceUnit.DEFAULT; - private GeoDistance distanceType = GeoDistance.DEFAULT; + private GeoDistance distanceType = GeoDistance.ARC; private boolean keyed = false; public GeoDistanceAggregationBuilder(String name, GeoPoint origin) { this(name, origin, InternalGeoDistance.FACTORY); + if (origin == null) { + throw new IllegalArgumentException("[origin] must not be null: [" + name + "]"); + } } private GeoDistanceAggregationBuilder(String 
name, GeoPoint origin, InternalRange.Factory rangeFactory) { - super(name, rangeFactory.type(), rangeFactory.getValueSourceType(), rangeFactory.getValueType()); - if (origin == null) { - throw new IllegalArgumentException("[origin] must not be null: [" + name + "]"); - } + super(name, rangeFactory.getValueSourceType(), rangeFactory.getValueType()); this.origin = origin; } @@ -69,8 +221,7 @@ private GeoDistanceAggregationBuilder(String name, GeoPoint origin, * Read from a stream. */ public GeoDistanceAggregationBuilder(StreamInput in) throws IOException { - super(in, InternalGeoDistance.FACTORY.type(), InternalGeoDistance.FACTORY.getValueSourceType(), - InternalGeoDistance.FACTORY.getValueType()); + super(in, InternalGeoDistance.FACTORY.getValueSourceType(), InternalGeoDistance.FACTORY.getValueType()); origin = new GeoPoint(in.readDouble(), in.readDouble()); int size = in.readVInt(); ranges = new ArrayList<>(size); @@ -82,6 +233,23 @@ public GeoDistanceAggregationBuilder(StreamInput in) throws IOException { unit = DistanceUnit.readFromStream(in); } + // for parsing + GeoDistanceAggregationBuilder(String name) { + this(name, null, InternalGeoDistance.FACTORY); + } + + GeoDistanceAggregationBuilder origin(GeoPoint origin) { + this.origin = origin; + return this; + } + + /** + * Return the {@link GeoPoint} that is used for distance computations. + */ + public GeoPoint origin() { + return origin; + } + @Override protected void innerWriteTo(StreamOutput out) throws IOException { out.writeDouble(origin.lat()); @@ -174,7 +342,7 @@ public List range() { } @Override - public String getWriteableName() { + public String getType() { return NAME; } @@ -212,20 +380,24 @@ public boolean keyed() { } @Override - protected ValuesSourceAggregatorFactory innerBuild(AggregationContext context, + protected ValuesSourceAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new GeoDistanceRangeAggregatorFactory(name, type, config, origin, ranges, unit, distanceType, keyed, context, parent, + Range[] ranges = this.ranges.toArray(new Range[this.range().size()]); + if (ranges.length == 0) { + throw new IllegalArgumentException("No [ranges] specified for the [" + this.getName() + "] aggregation"); + } + return new GeoDistanceRangeAggregatorFactory(name, config, origin, ranges, unit, distanceType, keyed, context, parent, subFactoriesBuilder, metaData); } @Override protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - builder.field(GeoDistanceParser.ORIGIN_FIELD.getPreferredName(), origin); + builder.field(ORIGIN_FIELD.getPreferredName(), origin); builder.field(RangeAggregator.RANGES_FIELD.getPreferredName(), ranges); builder.field(RangeAggregator.KEYED_FIELD.getPreferredName(), keyed); - builder.field(GeoDistanceParser.UNIT_FIELD.getPreferredName(), unit); - builder.field(GeoDistanceParser.DISTANCE_TYPE_FIELD.getPreferredName(), distanceType); + builder.field(UNIT_FIELD.getPreferredName(), unit); + builder.field(DISTANCE_TYPE_FIELD.getPreferredName(), distanceType); return builder; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceParser.java deleted file mode 100644 index 677731d64ef3b..0000000000000 --- 
a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceParser.java +++ /dev/null @@ -1,175 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.search.aggregations.bucket.range.geodistance; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.geo.GeoDistance; -import org.elasticsearch.common.geo.GeoPoint; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.unit.DistanceUnit; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentParser.Token; -import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.GeoPointValuesSourceParser; -import org.elasticsearch.search.aggregations.support.GeoPointParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.ArrayList; -import java.util.List; -import java.util.Map; - -/** - * - */ -public class GeoDistanceParser extends GeoPointValuesSourceParser { - - static final ParseField ORIGIN_FIELD = new ParseField("origin", "center", "point", "por"); - static final ParseField UNIT_FIELD = new ParseField("unit"); - static final ParseField DISTANCE_TYPE_FIELD = new ParseField("distance_type"); - - private GeoPointParser geoPointParser = new GeoPointParser(GeoDistanceAggregationBuilder.TYPE, ORIGIN_FIELD); - - public GeoDistanceParser() { - super(true, false); - } - - public static class Range extends RangeAggregator.Range { - public Range(String key, Double from, Double to) { - super(key(key, from, to), from == null ? 0 : from, to); - } - - /** - * Read from a stream. - */ - public Range(StreamInput in) throws IOException { - super(in.readOptionalString(), in.readDouble(), in.readDouble()); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - out.writeOptionalString(key); - out.writeDouble(from); - out.writeDouble(to); - } - - private static String key(String key, Double from, Double to) { - if (key != null) { - return key; - } - StringBuilder sb = new StringBuilder(); - sb.append((from == null || from == 0) ? "*" : from); - sb.append("-"); - sb.append((to == null || Double.isInfinite(to)) ? 
"*" : to); - return sb.toString(); - } - } - - @Override - protected GeoDistanceAggregationBuilder createFactory( - String aggregationName, ValuesSourceType valuesSourceType, ValueType targetValueType, Map otherOptions) { - GeoPoint origin = (GeoPoint) otherOptions.get(ORIGIN_FIELD); - GeoDistanceAggregationBuilder factory = new GeoDistanceAggregationBuilder(aggregationName, origin); - @SuppressWarnings("unchecked") - List ranges = (List) otherOptions.get(RangeAggregator.RANGES_FIELD); - for (Range range : ranges) { - factory.addRange(range); - } - Boolean keyed = (Boolean) otherOptions.get(RangeAggregator.KEYED_FIELD); - if (keyed != null) { - factory.keyed(keyed); - } - DistanceUnit unit = (DistanceUnit) otherOptions.get(UNIT_FIELD); - if (unit != null) { - factory.unit(unit); - } - GeoDistance distanceType = (GeoDistance) otherOptions.get(DISTANCE_TYPE_FIELD); - if (distanceType != null) { - factory.distanceType(distanceType); - } - return factory; - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, Token token, - XContentParseContext context, Map otherOptions) throws IOException { - XContentParser parser = context.getParser(); - if (geoPointParser.token(aggregationName, currentFieldName, token, parser, context.getParseFieldMatcher(), otherOptions)) { - return true; - } else if (token == XContentParser.Token.VALUE_STRING) { - if (context.matchField(currentFieldName, UNIT_FIELD)) { - DistanceUnit unit = DistanceUnit.fromString(parser.text()); - otherOptions.put(UNIT_FIELD, unit); - return true; - } else if (context.matchField(currentFieldName, DISTANCE_TYPE_FIELD)) { - GeoDistance distanceType = GeoDistance.fromString(parser.text()); - otherOptions.put(DISTANCE_TYPE_FIELD, distanceType); - return true; - } - } else if (token == XContentParser.Token.VALUE_BOOLEAN) { - if (context.matchField(currentFieldName, RangeAggregator.KEYED_FIELD)) { - boolean keyed = parser.booleanValue(); - otherOptions.put(RangeAggregator.KEYED_FIELD, keyed); - return true; - } - } else if (token == XContentParser.Token.START_ARRAY) { - if (context.matchField(currentFieldName, RangeAggregator.RANGES_FIELD)) { - List ranges = new ArrayList<>(); - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - String fromAsStr = null; - String toAsStr = null; - double from = 0.0; - double to = Double.POSITIVE_INFINITY; - String key = null; - String toOrFromOrKey = null; - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - toOrFromOrKey = parser.currentName(); - } else if (token == XContentParser.Token.VALUE_NUMBER) { - if (context.matchField(toOrFromOrKey, Range.FROM_FIELD)) { - from = parser.doubleValue(); - } else if (context.matchField(toOrFromOrKey, Range.TO_FIELD)) { - to = parser.doubleValue(); - } - } else if (token == XContentParser.Token.VALUE_STRING) { - if (context.matchField(toOrFromOrKey, Range.KEY_FIELD)) { - key = parser.text(); - } else if (context.matchField(toOrFromOrKey, Range.FROM_FIELD)) { - fromAsStr = parser.text(); - } else if (context.matchField(toOrFromOrKey, Range.TO_FIELD)) { - toAsStr = parser.text(); - } - } - } - if (fromAsStr != null || toAsStr != null) { - ranges.add(new Range(key, Double.parseDouble(fromAsStr), Double.parseDouble(toAsStr))); - } else { - ranges.add(new Range(key, from, to)); - } - } - otherOptions.put(RangeAggregator.RANGES_FIELD, ranges); - return true; - } - } - return false; - } -} diff --git 
a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceRangeAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceRangeAggregatorFactory.java index 32c3592a8fcb4..e12853e5b17f5 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceRangeAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceRangeAggregatorFactory.java @@ -23,7 +23,7 @@ import org.apache.lucene.index.SortedNumericDocValues; import org.elasticsearch.common.geo.GeoDistance; import org.elasticsearch.common.geo.GeoPoint; -import org.elasticsearch.common.geo.GeoDistance.FixedSourceDistance; +import org.elasticsearch.common.geo.GeoUtils; import org.elasticsearch.common.unit.DistanceUnit; import org.elasticsearch.index.fielddata.MultiGeoPointValues; import org.elasticsearch.index.fielddata.SortedBinaryDocValues; @@ -31,16 +31,15 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.bucket.range.InternalRange; import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator; import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Unmapped; -import org.elasticsearch.search.aggregations.bucket.range.geodistance.GeoDistanceParser.Range; +import org.elasticsearch.search.aggregations.bucket.range.geodistance.GeoDistanceAggregationBuilder.Range; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -51,15 +50,15 @@ public class GeoDistanceRangeAggregatorFactory private final InternalRange.Factory rangeFactory = InternalGeoDistance.FACTORY; private final GeoPoint origin; - private final List ranges; + private final Range[] ranges; private final DistanceUnit unit; private final GeoDistance distanceType; private final boolean keyed; - public GeoDistanceRangeAggregatorFactory(String name, Type type, ValuesSourceConfig config, GeoPoint origin, - List ranges, DistanceUnit unit, GeoDistance distanceType, boolean keyed, AggregationContext context, + public GeoDistanceRangeAggregatorFactory(String name, ValuesSourceConfig config, GeoPoint origin, + Range[] ranges, DistanceUnit unit, GeoDistance distanceType, boolean keyed, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); this.origin = origin; this.ranges = ranges; this.unit = unit; @@ -70,7 +69,7 @@ public GeoDistanceRangeAggregatorFactory(String name, Type type, ValuesSourceCon @Override protected Aggregator createUnmapped(Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { - return new Unmapped(name, ranges, keyed, 
config.format(), context, parent, rangeFactory, pipelineAggregators, metaData); + return new Unmapped<>(name, ranges, keyed, config.format(), context, parent, rangeFactory, pipelineAggregators, metaData); } @Override @@ -86,16 +85,16 @@ private static class DistanceSource extends ValuesSource.Numeric { private final ValuesSource.GeoPoint source; private final GeoDistance distanceType; - private final DistanceUnit unit; + private final DistanceUnit units; private final org.elasticsearch.common.geo.GeoPoint origin; - public DistanceSource(ValuesSource.GeoPoint source, GeoDistance distanceType, org.elasticsearch.common.geo.GeoPoint origin, - DistanceUnit unit) { + DistanceSource(ValuesSource.GeoPoint source, GeoDistance distanceType, + org.elasticsearch.common.geo.GeoPoint origin, DistanceUnit units) { this.source = source; // even if the geo points are unique, there's no guarantee the // distances are this.distanceType = distanceType; - this.unit = unit; + this.units = units; this.origin = origin; } @@ -112,8 +111,7 @@ public SortedNumericDocValues longValues(LeafReaderContext ctx) { @Override public SortedNumericDoubleValues doubleValues(LeafReaderContext ctx) { final MultiGeoPointValues geoValues = source.geoPointValues(ctx); - final FixedSourceDistance distance = distanceType.fixedSourceDistance(origin.getLat(), origin.getLon(), unit); - return GeoDistance.distanceValues(geoValues, distance); + return GeoUtils.distanceValues(distanceType, units, geoValues, origin); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/InternalGeoDistance.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/InternalGeoDistance.java index f01e0233afd79..e636d6d23c7d4 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/InternalGeoDistance.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/InternalGeoDistance.java @@ -62,11 +62,6 @@ boolean keyed() { } public static class Factory extends InternalRange.Factory { - @Override - public Type type() { - return GeoDistanceAggregationBuilder.TYPE; - } - @Override public ValuesSourceType getValueSourceType() { return ValuesSourceType.GEOPOINT; @@ -119,4 +114,9 @@ public InternalGeoDistance(StreamInput in) throws IOException { public InternalRange.Factory getFactory() { return FACTORY; } -} \ No newline at end of file + + @Override + public String getWriteableName() { + return GeoDistanceAggregationBuilder.NAME; + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/ParsedGeoDistance.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/ParsedGeoDistance.java new file mode 100644 index 0000000000000..a926499e9249a --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/ParsedGeoDistance.java @@ -0,0 +1,55 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
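For context, a hedged sketch of building the geo_distance aggregation whose origin, unit, and distance_type options are declared on the ObjectParser earlier in this patch; the field name and distances are hypothetical examples.

```java
// Illustrative sketch only; "location" and the ring distances are hypothetical values.
import org.elasticsearch.common.geo.GeoDistance;
import org.elasticsearch.common.geo.GeoPoint;
import org.elasticsearch.common.unit.DistanceUnit;
import org.elasticsearch.search.aggregations.bucket.range.geodistance.GeoDistanceAggregationBuilder;

class GeoDistanceExample {
    static GeoDistanceAggregationBuilder ringsAroundAmsterdam() {
        return new GeoDistanceAggregationBuilder("rings", new GeoPoint(52.3760, 4.894))
                .field("location")
                .unit(DistanceUnit.KILOMETERS)
                .distanceType(GeoDistance.ARC)  // the new default in this patch
                .addUnboundedTo(100)            // { "to": 100 }
                .addRange(100, 300)             // { "from": 100, "to": 300 }
                .addUnboundedFrom(300);         // { "from": 300 }
    }
}
```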
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.bucket.range.geodistance; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.bucket.range.ParsedRange; + +import java.io.IOException; + +public class ParsedGeoDistance extends ParsedRange { + + @Override + public String getType() { + return GeoDistanceAggregationBuilder.NAME; + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedGeoDistance.class.getSimpleName(), true, ParsedGeoDistance::new); + static { + declareParsedRangeFields(PARSER, + parser -> ParsedBucket.fromXContent(parser, false), + parser -> ParsedBucket.fromXContent(parser, true)); + } + + public static ParsedGeoDistance fromXContent(XContentParser parser, String name) throws IOException { + ParsedGeoDistance aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } + + public static class ParsedBucket extends ParsedRange.ParsedBucket { + + static ParsedBucket fromXContent(final XContentParser parser, final boolean keyed) throws IOException { + return parseRangeBucketXContent(parser, ParsedBucket::new, keyed); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ip/IpRangeAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ip/IpRangeAggregationBuilder.java index bd2353b50998a..c530ecbfa9946 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ip/IpRangeAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ip/IpRangeAggregationBuilder.java @@ -20,25 +20,32 @@ import org.apache.lucene.document.InetAddressPoint; import org.apache.lucene.util.BytesRef; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.network.InetAddresses; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParser.Token; +import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.script.Script; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.bucket.range.BinaryRangeAggregator; import org.elasticsearch.search.aggregations.bucket.range.BinaryRangeAggregatorFactory; import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import 
org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.net.InetAddress; @@ -52,9 +59,61 @@ public final class IpRangeAggregationBuilder extends ValuesSourceAggregationBuilder { public static final String NAME = "ip_range"; - private static final InternalAggregation.Type TYPE = new InternalAggregation.Type(NAME); + private static final ParseField MASK_FIELD = new ParseField("mask"); + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(IpRangeAggregationBuilder.NAME); + ValuesSourceParserHelper.declareBytesFields(PARSER, false, false); + + PARSER.declareBoolean(IpRangeAggregationBuilder::keyed, RangeAggregator.KEYED_FIELD); + + PARSER.declareObjectArray((agg, ranges) -> { + for (Range range : ranges) agg.addRange(range); + }, IpRangeAggregationBuilder::parseRange, RangeAggregator.RANGES_FIELD); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new IpRangeAggregationBuilder(aggregationName), context); + } + + private static Range parseRange(XContentParser parser, QueryParseContext context) throws IOException { + String key = null; + String from = null; + String to = null; + String mask = null; + + if (parser.currentToken() != Token.START_OBJECT) { + throw new ParsingException(parser.getTokenLocation(), "[ranges] must contain objects, but hit a " + parser.currentToken()); + } + while (parser.nextToken() != Token.END_OBJECT) { + if (parser.currentToken() == Token.FIELD_NAME) { + continue; + } + if (RangeAggregator.Range.KEY_FIELD.match(parser.currentName())) { + key = parser.text(); + } else if (RangeAggregator.Range.FROM_FIELD.match(parser.currentName())) { + from = parser.textOrNull(); + } else if (RangeAggregator.Range.TO_FIELD.match(parser.currentName())) { + to = parser.textOrNull(); + } else if (MASK_FIELD.match(parser.currentName())) { + mask = parser.text(); + } else { + throw new ParsingException(parser.getTokenLocation(), "Unexpected ip range parameter: [" + parser.currentName() + "]"); + } + } + if (mask != null) { + if (key == null) { + key = mask; + } + return new Range(key, mask); + } else { + return new Range(key, from, to); + } + } public static class Range implements ToXContent { + private final String key; private final String from; private final String to; @@ -94,8 +153,18 @@ public static class Range implements ToXContent { } this.key = key; try { - this.from = InetAddresses.toAddrString(InetAddress.getByAddress(lower)); - this.to = InetAddresses.toAddrString(InetAddress.getByAddress(upper)); + InetAddress fromAddress = InetAddress.getByAddress(lower); + if (fromAddress.equals(InetAddressPoint.MIN_VALUE)) { + this.from = null; + } else { + this.from = InetAddresses.toAddrString(fromAddress); + } + InetAddress inclusiveToAddress = InetAddress.getByAddress(upper); + if (inclusiveToAddress.equals(InetAddressPoint.MAX_VALUE)) { + this.to = null; + } else { + this.to = InetAddresses.toAddrString(InetAddressPoint.nextUp(inclusiveToAddress)); + } } catch 
(UnknownHostException bogus) { throw new AssertionError(bogus); } @@ -162,11 +231,11 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws private List ranges = new ArrayList<>(); public IpRangeAggregationBuilder(String name) { - super(name, TYPE, ValuesSourceType.BYTES, ValueType.IP); + super(name, ValuesSourceType.BYTES, ValueType.IP); } @Override - public String getWriteableName() { + public String getType() { return NAME; } @@ -268,7 +337,7 @@ public IpRangeAggregationBuilder addUnboundedFrom(String from) { } public IpRangeAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.BYTES, ValueType.IP); + super(in, ValuesSourceType.BYTES, ValueType.IP); final int numRanges = in.readVInt(); for (int i = 0; i < numRanges; ++i) { addRange(new Range(in)); @@ -296,14 +365,17 @@ private static BytesRef toBytesRef(String ip) { @Override protected ValuesSourceAggregatorFactory innerBuild( - AggregationContext context, ValuesSourceConfig config, + SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { List ranges = new ArrayList<>(); + if(this.ranges.size() == 0){ + throw new IllegalArgumentException("No [ranges] specified for the [" + this.getName() + "] aggregation"); + } for (Range range : this.ranges) { ranges.add(new BinaryRangeAggregator.Range(range.key, toBytesRef(range.from), toBytesRef(range.to))); } - return new BinaryRangeAggregatorFactory(name, TYPE, config, ranges, + return new BinaryRangeAggregatorFactory(name, config, ranges, keyed, context, parent, subFactoriesBuilder, metaData); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ip/IpRangeParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ip/IpRangeParser.java deleted file mode 100644 index 5d95f0dd4944e..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ip/IpRangeParser.java +++ /dev/null @@ -1,128 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ -package org.elasticsearch.search.aggregations.bucket.range.ip; - -import java.io.IOException; -import java.util.ArrayList; -import java.util.List; -import java.util.Map; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParsingException; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentParser.Token; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.BytesValuesSourceParser; -import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator; -import org.elasticsearch.search.aggregations.bucket.range.ip.IpRangeAggregationBuilder.Range; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSource; -import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -/** - * A parser for ip range aggregations. - */ -public class IpRangeParser extends BytesValuesSourceParser { - - private static final ParseField MASK_FIELD = new ParseField("mask"); - - public IpRangeParser() { - super(false, false); - } - - @Override - protected ValuesSourceAggregationBuilder createFactory( - String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - IpRangeAggregationBuilder range = new IpRangeAggregationBuilder(aggregationName); - @SuppressWarnings("unchecked") - Iterable ranges = (Iterable) otherOptions.get(RangeAggregator.RANGES_FIELD); - if (otherOptions.containsKey(RangeAggregator.RANGES_FIELD)) { - for (Range r : ranges) { - range.addRange(r); - } - } - if (otherOptions.containsKey(RangeAggregator.KEYED_FIELD)) { - range.keyed((Boolean) otherOptions.get(RangeAggregator.KEYED_FIELD)); - } - return range; - } - - private Range parseRange(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException { - String key = null; - String from = null; - String to = null; - String mask = null; - - if (parser.currentToken() != Token.START_OBJECT) { - throw new ParsingException(parser.getTokenLocation(), "[ranges] must contain objects, but hit a " + parser.currentToken()); - } - while (parser.nextToken() != Token.END_OBJECT) { - if (parser.currentToken() == Token.FIELD_NAME) { - continue; - } - if (parseFieldMatcher.match(parser.currentName(), RangeAggregator.Range.KEY_FIELD)) { - key = parser.text(); - } else if (parseFieldMatcher.match(parser.currentName(), RangeAggregator.Range.FROM_FIELD)) { - from = parser.text(); - } else if (parseFieldMatcher.match(parser.currentName(), RangeAggregator.Range.TO_FIELD)) { - to = parser.text(); - } else if (parseFieldMatcher.match(parser.currentName(), MASK_FIELD)) { - mask = parser.text(); - } else { - throw new ParsingException(parser.getTokenLocation(), "Unexpected ip range parameter: [" + parser.currentName() + "]"); - } - } - if (mask != null) { - if (key == null) { - key = mask; - } - return new Range(key, mask); - } else { - return new Range(key, from, to); - } - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, - Token token, - XContentParseContext context, - Map otherOptions) throws IOException { - XContentParser parser = context.getParser(); - if (context.matchField(currentFieldName, RangeAggregator.RANGES_FIELD)) { - if (parser.currentToken() != 
Token.START_ARRAY) { - throw new ParsingException(parser.getTokenLocation(), "[ranges] must be passed as an array, but got a " + token); - } - List ranges = new ArrayList<>(); - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - Range range = parseRange(parser, context.getParseFieldMatcher()); - ranges.add(range); - } - otherOptions.put(RangeAggregator.RANGES_FIELD, ranges); - return true; - } else if (context.matchField(parser.currentName(), RangeAggregator.KEYED_FIELD)) { - otherOptions.put(RangeAggregator.KEYED_FIELD, parser.booleanValue()); - return true; - } - return false; - } - -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedAggregationBuilder.java index 8a83fabcab28e..78f5bd0a7afd8 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedAggregationBuilder.java @@ -21,39 +21,54 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Objects; public class DiversifiedAggregationBuilder extends ValuesSourceAggregationBuilder { public static final String NAME = "diversified_sampler"; - public static final Type TYPE = new Type(NAME); public static final int MAX_DOCS_PER_VALUE_DEFAULT = 1; + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(DiversifiedAggregationBuilder.NAME); + ValuesSourceParserHelper.declareAnyFields(PARSER, true, false); + PARSER.declareInt(DiversifiedAggregationBuilder::shardSize, SamplerAggregator.SHARD_SIZE_FIELD); + PARSER.declareInt(DiversifiedAggregationBuilder::maxDocsPerValue, SamplerAggregator.MAX_DOCS_PER_VALUE_FIELD); + PARSER.declareString(DiversifiedAggregationBuilder::executionHint, SamplerAggregator.EXECUTION_HINT_FIELD); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new DiversifiedAggregationBuilder(aggregationName), context); + } + private int shardSize = SamplerAggregationBuilder.DEFAULT_SHARD_SAMPLE_SIZE; private int maxDocsPerValue = MAX_DOCS_PER_VALUE_DEFAULT; private String executionHint = null; public DiversifiedAggregationBuilder(String name) { - 
super(name, TYPE, ValuesSourceType.ANY, null); + super(name, ValuesSourceType.ANY, null); } /** * Read from a stream. */ public DiversifiedAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.ANY, null); + super(in, ValuesSourceType.ANY, null); shardSize = in.readVInt(); maxDocsPerValue = in.readVInt(); executionHint = in.readOptionalString(); @@ -120,9 +135,9 @@ public String executionHint() { } @Override - protected ValuesSourceAggregatorFactory innerBuild(AggregationContext context, + protected ValuesSourceAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new DiversifiedAggregatorFactory(name, TYPE, config, shardSize, maxDocsPerValue, executionHint, context, parent, + return new DiversifiedAggregatorFactory(name, config, shardSize, maxDocsPerValue, executionHint, context, parent, subFactoriesBuilder, metaData); } @@ -150,7 +165,7 @@ protected boolean innerEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedAggregatorFactory.java index a39135192bc19..97a68649ca233 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedAggregatorFactory.java @@ -25,14 +25,13 @@ import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.NonCollectingAggregator; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.bucket.sampler.SamplerAggregator.ExecutionMode; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; -import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -44,10 +43,10 @@ public class DiversifiedAggregatorFactory extends ValuesSourceAggregatorFactory< private final int maxDocsPerValue; private final String executionHint; - public DiversifiedAggregatorFactory(String name, Type type, ValuesSourceConfig config, int shardSize, int maxDocsPerValue, - String executionHint, AggregationContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, + public DiversifiedAggregatorFactory(String name, ValuesSourceConfig config, int shardSize, int maxDocsPerValue, + String executionHint, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); this.shardSize = shardSize; this.maxDocsPerValue = 
maxDocsPerValue; this.executionHint = executionHint; @@ -65,7 +64,7 @@ protected Aggregator doCreateInternal(ValuesSource valuesSource, Aggregator pare if (valuesSource instanceof ValuesSource.Bytes) { ExecutionMode execution = null; if (executionHint != null) { - execution = ExecutionMode.fromString(executionHint, context.searchContext().parseFieldMatcher()); + execution = ExecutionMode.fromString(executionHint); } // In some cases using ordinals is just not supported: override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedBytesHashSamplerAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedBytesHashSamplerAggregator.java index b2f7be614dba8..f4d45a7d471b8 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedBytesHashSamplerAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedBytesHashSamplerAggregator.java @@ -32,8 +32,8 @@ import org.elasticsearch.search.aggregations.bucket.BestDocsDeferringCollector; import org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -49,10 +49,10 @@ public class DiversifiedBytesHashSamplerAggregator extends SamplerAggregator { private int maxDocsPerValue; public DiversifiedBytesHashSamplerAggregator(String name, int shardSize, AggregatorFactories factories, - AggregationContext aggregationContext, Aggregator parent, List pipelineAggregators, Map metaData, + SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData, ValuesSource valuesSource, int maxDocsPerValue) throws IOException { - super(name, shardSize, factories, aggregationContext, parent, pipelineAggregators, metaData); + super(name, shardSize, factories, context, parent, pipelineAggregators, metaData); this.valuesSource = valuesSource; this.maxDocsPerValue = maxDocsPerValue; } @@ -70,7 +70,7 @@ public DeferringBucketCollector getDeferringCollector() { */ class DiverseDocsDeferringCollector extends BestDocsDeferringCollector { - public DiverseDocsDeferringCollector() { + DiverseDocsDeferringCollector() { super(shardSize, context.bigArrays()); } @@ -86,7 +86,7 @@ class ValuesDiversifiedTopDocsCollector extends DiversifiedTopDocsCollector { private SortedBinaryDocValues values; - public ValuesDiversifiedTopDocsCollector(int numHits, int maxHitsPerValue) { + ValuesDiversifiedTopDocsCollector(int numHits, int maxHitsPerValue) { super(numHits, maxHitsPerValue); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedMapSamplerAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedMapSamplerAggregator.java index 7578296cee823..08fa1bcb7fae0 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedMapSamplerAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedMapSamplerAggregator.java @@ -34,8 +34,8 @@ import org.elasticsearch.search.aggregations.bucket.BestDocsDeferringCollector; import org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector; import 
org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -48,12 +48,12 @@ public class DiversifiedMapSamplerAggregator extends SamplerAggregator { private BytesRefHash bucketOrds; public DiversifiedMapSamplerAggregator(String name, int shardSize, AggregatorFactories factories, - AggregationContext aggregationContext, Aggregator parent, List pipelineAggregators, Map metaData, + SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData, ValuesSource valuesSource, int maxDocsPerValue) throws IOException { - super(name, shardSize, factories, aggregationContext, parent, pipelineAggregators, metaData); + super(name, shardSize, factories, context, parent, pipelineAggregators, metaData); this.valuesSource = valuesSource; this.maxDocsPerValue = maxDocsPerValue; - bucketOrds = new BytesRefHash(shardSize, aggregationContext.bigArrays()); + bucketOrds = new BytesRefHash(shardSize, context.bigArrays()); } @@ -76,7 +76,7 @@ public DeferringBucketCollector getDeferringCollector() { */ class DiverseDocsDeferringCollector extends BestDocsDeferringCollector { - public DiverseDocsDeferringCollector() { + DiverseDocsDeferringCollector() { super(shardSize, context.bigArrays()); } @@ -92,7 +92,7 @@ class ValuesDiversifiedTopDocsCollector extends DiversifiedTopDocsCollector { private SortedBinaryDocValues values; - public ValuesDiversifiedTopDocsCollector(int numHits, int maxHitsPerKey) { + ValuesDiversifiedTopDocsCollector(int numHits, int maxHitsPerKey) { super(numHits, maxHitsPerKey); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedNumericSamplerAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedNumericSamplerAggregator.java index 430288db83b83..c595fdb5c25d8 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedNumericSamplerAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedNumericSamplerAggregator.java @@ -31,8 +31,8 @@ import org.elasticsearch.search.aggregations.bucket.BestDocsDeferringCollector; import org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -44,9 +44,9 @@ public class DiversifiedNumericSamplerAggregator extends SamplerAggregator { private int maxDocsPerValue; public DiversifiedNumericSamplerAggregator(String name, int shardSize, AggregatorFactories factories, - AggregationContext aggregationContext, Aggregator parent, List pipelineAggregators, Map metaData, + SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData, ValuesSource.Numeric valuesSource, int maxDocsPerValue) throws IOException { - super(name, shardSize, factories, aggregationContext, parent, pipelineAggregators, metaData); + super(name, shardSize, factories, context, parent, pipelineAggregators, metaData); this.valuesSource = valuesSource; this.maxDocsPerValue = maxDocsPerValue; } @@ -63,7 
+63,7 @@ public DeferringBucketCollector getDeferringCollector() { * This implementation is only for use with a single bucket aggregation. */ class DiverseDocsDeferringCollector extends BestDocsDeferringCollector { - public DiverseDocsDeferringCollector() { + DiverseDocsDeferringCollector() { super(shardSize, context.bigArrays()); } @@ -78,7 +78,7 @@ class ValuesDiversifiedTopDocsCollector extends DiversifiedTopDocsCollector { private SortedNumericDocValues values; - public ValuesDiversifiedTopDocsCollector(int numHits, int maxHitsPerKey) { + ValuesDiversifiedTopDocsCollector(int numHits, int maxHitsPerKey) { super(numHits, maxHitsPerKey); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedOrdinalsSamplerAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedOrdinalsSamplerAggregator.java index 3725e10335c1b..5eb37f310add5 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedOrdinalsSamplerAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedOrdinalsSamplerAggregator.java @@ -32,8 +32,8 @@ import org.elasticsearch.search.aggregations.bucket.BestDocsDeferringCollector; import org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -45,9 +45,9 @@ public class DiversifiedOrdinalsSamplerAggregator extends SamplerAggregator { private int maxDocsPerValue; public DiversifiedOrdinalsSamplerAggregator(String name, int shardSize, AggregatorFactories factories, - AggregationContext aggregationContext, Aggregator parent, List pipelineAggregators, Map metaData, + SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData, ValuesSource.Bytes.WithOrdinals.FieldData valuesSource, int maxDocsPerValue) throws IOException { - super(name, shardSize, factories, aggregationContext, parent, pipelineAggregators, metaData); + super(name, shardSize, factories, context, parent, pipelineAggregators, metaData); this.valuesSource = valuesSource; this.maxDocsPerValue = maxDocsPerValue; } @@ -65,7 +65,7 @@ public DeferringBucketCollector getDeferringCollector() { */ class DiverseDocsDeferringCollector extends BestDocsDeferringCollector { - public DiverseDocsDeferringCollector() { + DiverseDocsDeferringCollector() { super(shardSize, context.bigArrays()); } @@ -79,7 +79,7 @@ protected TopDocsCollector createTopDocsCollector(int size) { class ValuesDiversifiedTopDocsCollector extends DiversifiedTopDocsCollector { - public ValuesDiversifiedTopDocsCollector(int numHits, int maxHitsPerKey) { + ValuesDiversifiedTopDocsCollector(int numHits, int maxHitsPerKey) { super(numHits, maxHitsPerKey); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedSamplerParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedSamplerParser.java deleted file mode 100644 index a62035d7234fc..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedSamplerParser.java +++ /dev/null @@ -1,82 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license 
agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.search.aggregations.bucket.sampler; - - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.AnyValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - -/** - * - */ -public class DiversifiedSamplerParser extends AnyValuesSourceParser { - public DiversifiedSamplerParser() { - super(true, false); - } - - @Override - protected DiversifiedAggregationBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - DiversifiedAggregationBuilder factory = new DiversifiedAggregationBuilder(aggregationName); - Integer shardSize = (Integer) otherOptions.get(SamplerAggregator.SHARD_SIZE_FIELD); - if (shardSize != null) { - factory.shardSize(shardSize); - } - Integer maxDocsPerValue = (Integer) otherOptions.get(SamplerAggregator.MAX_DOCS_PER_VALUE_FIELD); - if (maxDocsPerValue != null) { - factory.maxDocsPerValue(maxDocsPerValue); - } - String executionHint = (String) otherOptions.get(SamplerAggregator.EXECUTION_HINT_FIELD); - if (executionHint != null) { - factory.executionHint(executionHint); - } - return factory; - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, XContentParser.Token token, - XContentParseContext context, Map otherOptions) throws IOException { - XContentParser parser = context.getParser(); - if (token == XContentParser.Token.VALUE_NUMBER) { - if (context.matchField(currentFieldName, SamplerAggregator.SHARD_SIZE_FIELD)) { - int shardSize = parser.intValue(); - otherOptions.put(SamplerAggregator.SHARD_SIZE_FIELD, shardSize); - return true; - } else if (context.matchField(currentFieldName, SamplerAggregator.MAX_DOCS_PER_VALUE_FIELD)) { - int maxDocsPerValue = parser.intValue(); - otherOptions.put(SamplerAggregator.MAX_DOCS_PER_VALUE_FIELD, maxDocsPerValue); - return true; - } - } else if (token == XContentParser.Token.VALUE_STRING) { - if (context.matchField(currentFieldName, SamplerAggregator.EXECUTION_HINT_FIELD)) { - String executionHint = parser.text(); - otherOptions.put(SamplerAggregator.EXECUTION_HINT_FIELD, executionHint); - return true; - } - } - return false; - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/InternalSampler.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/InternalSampler.java index 224de07e93d1c..1a04133e82bfc 100644 --- 
a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/InternalSampler.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/InternalSampler.java @@ -29,7 +29,11 @@ public class InternalSampler extends InternalSingleBucketAggregation implements Sampler { public static final String NAME = "mapped_sampler"; - InternalSampler(String name, long docCount, InternalAggregations subAggregations, List pipelineAggregators, Map metaData) { + // InternalSampler and UnmappedSampler share the same parser name, so we use this when identifying the aggregation type + public static final String PARSER_NAME = "sampler"; + + InternalSampler(String name, long docCount, InternalAggregations subAggregations, List pipelineAggregators, + Map metaData) { super(name, docCount, subAggregations, pipelineAggregators, metaData); } @@ -45,6 +49,11 @@ public String getWriteableName() { return NAME; } + @Override + public String getType() { + return PARSER_NAME; + } + @Override protected InternalSingleBucketAggregation newAggregation(String name, long docCount, InternalAggregations subAggregations) { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/ParsedSampler.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/ParsedSampler.java new file mode 100644 index 0000000000000..3d5e946beadfc --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/ParsedSampler.java @@ -0,0 +1,36 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.search.aggregations.bucket.sampler; + +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.bucket.ParsedSingleBucketAggregation; + +import java.io.IOException; + +public class ParsedSampler extends ParsedSingleBucketAggregation implements Sampler { + + @Override + public String getType() { + return InternalSampler.PARSER_NAME; + } + + public static ParsedSampler fromXContent(XContentParser parser, final String name) throws IOException { + return parseXContent(parser, new ParsedSampler(), name); + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregationBuilder.java index 2230408b033b9..f69b66ffd1eee 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregationBuilder.java @@ -28,29 +28,27 @@ import org.elasticsearch.search.aggregations.AbstractAggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Objects; public class SamplerAggregationBuilder extends AbstractAggregationBuilder { public static final String NAME = "sampler"; - private static final Type TYPE = new Type(NAME); public static final int DEFAULT_SHARD_SAMPLE_SIZE = 100; private int shardSize = DEFAULT_SHARD_SAMPLE_SIZE; public SamplerAggregationBuilder(String name) { - super(name, TYPE); + super(name); } /** * Read from a stream. 
*/ public SamplerAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE); + super(in); shardSize = in.readVInt(); } @@ -75,9 +73,9 @@ public int shardSize() { } @Override - protected SamplerAggregatorFactory doBuild(AggregationContext context, AggregatorFactory parent, Builder subFactoriesBuilder) + protected SamplerAggregatorFactory doBuild(SearchContext context, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new SamplerAggregatorFactory(name, type, shardSize, context, parent, subFactoriesBuilder, metaData); + return new SamplerAggregatorFactory(name, shardSize, context, parent, subFactoriesBuilder, metaData); } @Override @@ -98,7 +96,7 @@ public static SamplerAggregationBuilder parse(String aggregationName, QueryParse if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.VALUE_NUMBER) { - if (context.getParseFieldMatcher().match(currentFieldName, SamplerAggregator.SHARD_SIZE_FIELD)) { + if (SamplerAggregator.SHARD_SIZE_FIELD.match(currentFieldName)) { shardSize = parser.intValue(); } else { throw new ParsingException(parser.getTokenLocation(), @@ -129,7 +127,7 @@ protected boolean doEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregator.java index cec2c4577dfdf..9e0fb17a56234 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregator.java @@ -20,7 +20,6 @@ import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.search.aggregations.AggregationExecutionException; import org.elasticsearch.search.aggregations.Aggregator; @@ -31,10 +30,11 @@ import org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; +import java.util.Arrays; import java.util.List; import java.util.Map; @@ -61,7 +61,7 @@ public enum ExecutionMode { @Override Aggregator create(String name, AggregatorFactories factories, int shardSize, int maxDocsPerValue, ValuesSource valuesSource, - AggregationContext context, Aggregator parent, List pipelineAggregators, + SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { return new DiversifiedMapSamplerAggregator(name, shardSize, factories, context, parent, pipelineAggregators, metaData, @@ -79,7 +79,7 @@ boolean needsGlobalOrdinals() { @Override Aggregator create(String name, AggregatorFactories factories, int shardSize, int maxDocsPerValue, ValuesSource valuesSource, - AggregationContext context, Aggregator parent, List pipelineAggregators, + SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { return new 
DiversifiedBytesHashSamplerAggregator(name, shardSize, factories, context, parent, pipelineAggregators, @@ -98,7 +98,7 @@ boolean needsGlobalOrdinals() { @Override Aggregator create(String name, AggregatorFactories factories, int shardSize, int maxDocsPerValue, ValuesSource valuesSource, - AggregationContext context, Aggregator parent, List pipelineAggregators, + SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { return new DiversifiedOrdinalsSamplerAggregator(name, shardSize, factories, context, parent, pipelineAggregators, metaData, (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, maxDocsPerValue); @@ -111,13 +111,13 @@ boolean needsGlobalOrdinals() { }; - public static ExecutionMode fromString(String value, ParseFieldMatcher parseFieldMatcher) { + public static ExecutionMode fromString(String value) { for (ExecutionMode mode : values()) { - if (parseFieldMatcher.match(value, mode.parseField)) { + if (mode.parseField.match(value)) { return mode; } } - throw new IllegalArgumentException("Unknown `execution_hint`: [" + value + "], expected any of " + values()); + throw new IllegalArgumentException("Unknown `execution_hint`: [" + value + "], expected any of " + Arrays.toString(values())); } private final ParseField parseField; @@ -126,8 +126,8 @@ public static ExecutionMode fromString(String value, ParseFieldMatcher parseFiel this.parseField = parseField; } - abstract Aggregator create(String name, AggregatorFactories factories, int shardSize, int maxDocsPerValue, ValuesSource valuesSource, - AggregationContext context, Aggregator parent, List pipelineAggregators, + abstract Aggregator create(String name, AggregatorFactories factories, int shardSize, int maxDocsPerValue, + ValuesSource valuesSource, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException; abstract boolean needsGlobalOrdinals(); @@ -142,9 +142,9 @@ public String toString() { protected final int shardSize; protected BestDocsDeferringCollector bdd; - public SamplerAggregator(String name, int shardSize, AggregatorFactories factories, AggregationContext aggregationContext, + public SamplerAggregator(String name, int shardSize, AggregatorFactories factories, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { - super(name, factories, aggregationContext, parent, pipelineAggregators, metaData); + super(name, factories, context, parent, pipelineAggregators, metaData); this.shardSize = shardSize; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregatorFactory.java index 09f510f80e5a4..4fb7e28f3d638 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregatorFactory.java @@ -22,9 +22,8 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import 
java.util.List; @@ -34,9 +33,9 @@ public class SamplerAggregatorFactory extends AggregatorFactory parent, + public SamplerAggregatorFactory(String name, int shardSize, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactories, Map metaData) throws IOException { - super(name, type, context, parent, subFactories, metaData); + super(name, context, parent, subFactories, metaData); this.shardSize = shardSize; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/UnmappedSampler.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/UnmappedSampler.java index 1cf77baa1f0fd..b30f1c48beec6 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/UnmappedSampler.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/UnmappedSampler.java @@ -20,6 +20,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.search.aggregations.Aggregation; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.InternalAggregations; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; @@ -59,7 +60,7 @@ public InternalAggregation doReduce(List aggregations, Redu @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - builder.field(InternalAggregation.CommonFields.DOC_COUNT, 0); + builder.field(Aggregation.CommonFields.DOC_COUNT.getPreferredName(), 0); return builder; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/GlobalOrdinalsSignificantTermsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/GlobalOrdinalsSignificantTermsAggregator.java index 1aeacf4be60f2..0c5835a54e41e 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/GlobalOrdinalsSignificantTermsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/GlobalOrdinalsSignificantTermsAggregator.java @@ -22,7 +22,6 @@ import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.lease.Releasables; -import org.elasticsearch.common.util.LongHash; import org.elasticsearch.search.DocValueFormat; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; @@ -32,9 +31,9 @@ import org.elasticsearch.search.aggregations.bucket.terms.GlobalOrdinalsStringTermsAggregator; import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.internal.ContextIndexSearcher; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Arrays; @@ -52,22 +51,28 @@ public class GlobalOrdinalsSignificantTermsAggregator extends GlobalOrdinalsStri protected final SignificantTermsAggregatorFactory termsAggFactory; private final SignificanceHeuristic significanceHeuristic; - public GlobalOrdinalsSignificantTermsAggregator(String name, AggregatorFactories factories, - ValuesSource.Bytes.WithOrdinals.FieldData valuesSource, DocValueFormat format, - 
BucketCountThresholds bucketCountThresholds, IncludeExclude.OrdinalsFilter includeExclude, - AggregationContext aggregationContext, Aggregator parent, - SignificanceHeuristic significanceHeuristic, SignificantTermsAggregatorFactory termsAggFactory, - List pipelineAggregators, Map metaData) throws IOException { - - super(name, factories, valuesSource, null, format, bucketCountThresholds, includeExclude, aggregationContext, parent, - SubAggCollectionMode.DEPTH_FIRST, false, pipelineAggregators, metaData); + public GlobalOrdinalsSignificantTermsAggregator(String name, + AggregatorFactories factories, + ValuesSource.Bytes.WithOrdinals.FieldData valuesSource, + DocValueFormat format, + BucketCountThresholds bucketCountThresholds, + IncludeExclude.OrdinalsFilter includeExclude, + SearchContext context, + Aggregator parent, + boolean forceRemapGlobalOrds, + SignificanceHeuristic significanceHeuristic, + SignificantTermsAggregatorFactory termsAggFactory, + List pipelineAggregators, + Map metaData) throws IOException { + super(name, factories, valuesSource, null, format, bucketCountThresholds, includeExclude, context, parent, + forceRemapGlobalOrds, SubAggCollectionMode.DEPTH_FIRST, false, pipelineAggregators, metaData); this.significanceHeuristic = significanceHeuristic; this.termsAggFactory = termsAggFactory; + this.numCollectedDocs = 0; } @Override - public LeafBucketCollector getLeafCollector(LeafReaderContext ctx, - final LeafBucketCollector sub) throws IOException { + public LeafBucketCollector getLeafCollector(LeafReaderContext ctx, final LeafBucketCollector sub) throws IOException { return new LeafBucketCollectorBase(super.getLeafCollector(ctx, sub), null) { @Override public void collect(int doc, long bucket) throws IOException { @@ -77,18 +82,17 @@ public void collect(int doc, long bucket) throws IOException { }; } - @Override public SignificantStringTerms buildAggregation(long owningBucketOrdinal) throws IOException { assert owningBucketOrdinal == 0; - if (globalOrds == null) { // no context in this reader + if (valueCount == 0) { // no context in this reader return buildEmptyAggregation(); } final int size; if (bucketCountThresholds.getMinDocCount() == 0) { // if minDocCount == 0 then we can end up with more buckets then maxBucketOrd() returns - size = (int) Math.min(globalOrds.getValueCount(), bucketCountThresholds.getShardSize()); + size = (int) Math.min(valueCount, bucketCountThresholds.getShardSize()); } else { size = (int) Math.min(maxBucketOrd(), bucketCountThresholds.getShardSize()); } @@ -97,7 +101,7 @@ public SignificantStringTerms buildAggregation(long owningBucketOrdinal) throws BucketSignificancePriorityQueue ordered = new BucketSignificancePriorityQueue<>(size); SignificantStringTerms.Bucket spare = null; - for (long globalTermOrd = 0; globalTermOrd < globalOrds.getValueCount(); ++globalTermOrd) { + for (long globalTermOrd = 0; globalTermOrd < valueCount; ++globalTermOrd) { if (includeExclude != null && !acceptedGlobalOrdinals.get(globalTermOrd)) { continue; } @@ -114,7 +118,7 @@ public SignificantStringTerms buildAggregation(long owningBucketOrdinal) throws spare = new SignificantStringTerms.Bucket(new BytesRef(), 0, 0, 0, 0, null, format); } spare.bucketOrd = bucketOrd; - copy(globalOrds.lookupOrd(globalTermOrd), spare.termBytes); + copy(lookupGlobalOrd.apply(globalTermOrd), spare.termBytes); spare.subsetDf = bucketDocCount; spare.subsetSize = subsetSize; spare.supersetDf = termsAggFactory.getBackgroundFrequency(spare.termBytes); @@ -143,66 +147,17 @@ public 
SignificantStringTerms buildAggregation(long owningBucketOrdinal) throws @Override public SignificantStringTerms buildEmptyAggregation() { // We need to account for the significance of a miss in our global stats - provide corpus size as context - ContextIndexSearcher searcher = context.searchContext().searcher(); + ContextIndexSearcher searcher = context.searcher(); IndexReader topReader = searcher.getIndexReader(); int supersetSize = topReader.numDocs(); return new SignificantStringTerms(name, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getMinDocCount(), - pipelineAggregators(), metaData(), format, 0, supersetSize, significanceHeuristic, emptyList()); + pipelineAggregators(), metaData(), format, numCollectedDocs, supersetSize, significanceHeuristic, emptyList()); } @Override protected void doClose() { + super.doClose(); Releasables.close(termsAggFactory); } - - public static class WithHash extends GlobalOrdinalsSignificantTermsAggregator { - - private final LongHash bucketOrds; - - public WithHash(String name, AggregatorFactories factories, ValuesSource.Bytes.WithOrdinals.FieldData valuesSource, - DocValueFormat format, BucketCountThresholds bucketCountThresholds, IncludeExclude.OrdinalsFilter includeExclude, - AggregationContext aggregationContext, Aggregator parent, SignificanceHeuristic significanceHeuristic, - SignificantTermsAggregatorFactory termsAggFactory, List pipelineAggregators, - Map metaData) throws IOException { - super(name, factories, valuesSource, format, bucketCountThresholds, includeExclude, aggregationContext, parent, significanceHeuristic, - termsAggFactory, pipelineAggregators, metaData); - bucketOrds = new LongHash(1, aggregationContext.bigArrays()); - } - - @Override - public LeafBucketCollector getLeafCollector(LeafReaderContext ctx, - final LeafBucketCollector sub) throws IOException { - return new LeafBucketCollectorBase(super.getLeafCollector(ctx, sub), null) { - @Override - public void collect(int doc, long bucket) throws IOException { - assert bucket == 0; - numCollectedDocs++; - globalOrds.setDocument(doc); - final int numOrds = globalOrds.cardinality(); - for (int i = 0; i < numOrds; i++) { - final long globalOrd = globalOrds.ordAt(i); - long bucketOrd = bucketOrds.add(globalOrd); - if (bucketOrd < 0) { - bucketOrd = -1 - bucketOrd; - collectExistingBucket(sub, doc, bucketOrd); - } else { - collectBucket(sub, doc, bucketOrd); - } - } - } - }; - } - - @Override - protected long getBucketOrd(long termOrd) { - return bucketOrds.find(termOrd); - } - - @Override - protected void doClose() { - Releasables.close(termsAggFactory, bucketOrds); - } - } - } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/InternalMappedSignificantTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/InternalMappedSignificantTerms.java index 92995d5fab464..a15be066e7bbd 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/InternalMappedSignificantTerms.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/InternalMappedSignificantTerms.java @@ -21,11 +21,14 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.search.DocValueFormat; import 
org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristic; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import java.io.IOException; +import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.function.Function; @@ -73,7 +76,12 @@ protected final void writeTermTypeInfoTo(StreamOutput out) throws IOException { } @Override - protected List getBucketsInternal() { + public Iterator iterator() { + return buckets.stream().map(bucket -> (SignificantTerms.Bucket) bucket).collect(Collectors.toList()).iterator(); + } + + @Override + public List getBuckets() { return buckets; } @@ -99,4 +107,20 @@ protected long getSupersetSize() { protected SignificanceHeuristic getSignificanceHeuristic() { return significanceHeuristic; } + + @Override + public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + builder.field(CommonFields.DOC_COUNT.getPreferredName(), subsetSize); + builder.field(BG_COUNT, supersetSize); + builder.startArray(CommonFields.BUCKETS.getPreferredName()); + for (Bucket bucket : buckets) { + //There is a condition (presumably when only one shard has a bucket?) where reduce is not called + // and I end up with buckets that contravene the user's min_doc_count criteria in my reducer + if (bucket.subsetDf >= minDocCount) { + bucket.toXContent(builder, params); + } + } + builder.endArray(); + return builder; + } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/InternalSignificantTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/InternalSignificantTerms.java index cd18386da1e48..ec91c8bbd625a 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/InternalSignificantTerms.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/InternalSignificantTerms.java @@ -21,6 +21,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.search.DocValueFormat; import org.elasticsearch.search.aggregations.Aggregations; import org.elasticsearch.search.aggregations.InternalAggregation; @@ -33,19 +34,21 @@ import java.util.ArrayList; import java.util.Arrays; import java.util.HashMap; -import java.util.Iterator; import java.util.List; import java.util.Map; -import static java.util.Collections.unmodifiableList; - /** * Result of the significant terms aggregation. */ public abstract class InternalSignificantTerms, B extends InternalSignificantTerms.Bucket> extends InternalMultiBucketAggregation implements SignificantTerms, ToXContent { + + public static final String SCORE = "score"; + public static final String BG_COUNT = "bg_count"; + @SuppressWarnings("PMD.ConstructorCallsOverridableMethod") - public abstract static class Bucket> extends SignificantTerms.Bucket { + public abstract static class Bucket> extends InternalMultiBucketAggregation.InternalBucket + implements SignificantTerms.Bucket { /** * Reads a bucket. Should be a constructor reference. 
*/ @@ -54,14 +57,21 @@ public interface Reader> { B read(StreamInput in, long subsetSize, long supersetSize, DocValueFormat format) throws IOException; } + long subsetDf; + long subsetSize; + long supersetDf; + long supersetSize; long bucketOrd; - protected InternalAggregations aggregations; double score; + protected InternalAggregations aggregations; final transient DocValueFormat format; protected Bucket(long subsetDf, long subsetSize, long supersetDf, long supersetSize, InternalAggregations aggregations, DocValueFormat format) { - super(subsetDf, subsetSize, supersetDf, supersetSize); + this.subsetSize = subsetSize; + this.supersetSize = supersetSize; + this.subsetDf = subsetDf; + this.supersetDf = supersetDf; this.aggregations = aggregations; this.format = format; } @@ -70,7 +80,8 @@ protected Bucket(long subsetDf, long subsetSize, long supersetDf, long supersetS * Read from a stream. */ protected Bucket(StreamInput in, long subsetSize, long supersetSize, DocValueFormat format) { - super(in, subsetSize, supersetSize); + this.subsetSize = subsetSize; + this.supersetSize = supersetSize; this.format = format; } @@ -94,7 +105,7 @@ public long getSubsetSize() { return subsetSize; } - public void updateScore(SignificanceHeuristic significanceHeuristic) { + void updateScore(SignificanceHeuristic significanceHeuristic) { score = significanceHeuristic.getScore(subsetDf, subsetSize, supersetDf, supersetSize); } @@ -127,6 +138,20 @@ public B reduce(List buckets, ReduceContext context) { public double getSignificanceScore() { return score; } + + @Override + public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + keyToXContent(builder); + builder.field(CommonFields.DOC_COUNT.getPreferredName(), getDocCount()); + builder.field(SCORE, score); + builder.field(BG_COUNT, supersetDf); + aggregations.toXContentInternal(builder, params); + builder.endObject(); + return builder; + } + + protected abstract XContentBuilder keyToXContent(XContentBuilder builder) throws IOException; } protected final int requiredSize; @@ -157,16 +182,7 @@ protected final void doWriteTo(StreamOutput out) throws IOException { protected abstract void writeTermTypeInfoTo(StreamOutput out) throws IOException; @Override - public Iterator iterator() { - return getBuckets().iterator(); - } - - @Override - public List getBuckets() { - return unmodifiableList(getBucketsInternal()); - } - - protected abstract List getBucketsInternal(); + public abstract List getBuckets(); @Override public InternalAggregation doReduce(List aggregations, ReduceContext reduceContext) { @@ -184,7 +200,7 @@ public InternalAggregation doReduce(List aggregations, Redu for (InternalAggregation aggregation : aggregations) { @SuppressWarnings("unchecked") InternalSignificantTerms terms = (InternalSignificantTerms) aggregation; - for (B bucket : terms.getBucketsInternal()) { + for (B bucket : terms.getBuckets()) { List existingBuckets = buckets.get(bucket.getKeyAsString()); if (existingBuckets == null) { existingBuckets = new ArrayList<>(aggregations.size()); @@ -196,15 +212,14 @@ public InternalAggregation doReduce(List aggregations, Redu bucket.aggregations)); } } - - getSignificanceHeuristic().initialize(reduceContext); - final int size = Math.min(requiredSize, buckets.size()); + SignificanceHeuristic heuristic = getSignificanceHeuristic().rewrite(reduceContext); + final int size = reduceContext.isFinalReduce() == false ? 
buckets.size() : Math.min(requiredSize, buckets.size()); BucketSignificancePriorityQueue ordered = new BucketSignificancePriorityQueue<>(size); for (Map.Entry> entry : buckets.entrySet()) { List sameTermBuckets = entry.getValue(); final B b = sameTermBuckets.get(0).reduce(sameTermBuckets, reduceContext); - b.updateScore(getSignificanceHeuristic()); - if ((b.score > 0) && (b.subsetDf >= minDocCount)) { + b.updateScore(heuristic); + if (((b.score > 0) && (b.subsetDf >= minDocCount)) || reduceContext.isFinalReduce() == false) { ordered.insertWithOverflow(b); } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/ParsedSignificantLongTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/ParsedSignificantLongTerms.java new file mode 100644 index 0000000000000..9592d80c77625 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/ParsedSignificantLongTerms.java @@ -0,0 +1,80 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.bucket.significant; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; + +import java.io.IOException; + +public class ParsedSignificantLongTerms extends ParsedSignificantTerms { + + @Override + public String getType() { + return SignificantLongTerms.NAME; + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedSignificantLongTerms.class.getSimpleName(), true, ParsedSignificantLongTerms::new); + static { + declareParsedSignificantTermsFields(PARSER, ParsedBucket::fromXContent); + } + + public static ParsedSignificantLongTerms fromXContent(XContentParser parser, String name) throws IOException { + return parseSignificantTermsXContent(() -> PARSER.parse(parser, null), name); + } + + public static class ParsedBucket extends ParsedSignificantTerms.ParsedBucket { + + private Long key; + + @Override + public Object getKey() { + return key; + } + + @Override + public String getKeyAsString() { + String keyAsString = super.getKeyAsString(); + if (keyAsString != null) { + return keyAsString; + } + return Long.toString(key); + } + + public Number getKeyAsNumber() { + return key; + } + + @Override + protected XContentBuilder keyToXContent(XContentBuilder builder) throws IOException { + builder.field(CommonFields.KEY.getPreferredName(), key); + if (super.getKeyAsString() != null) { + builder.field(CommonFields.KEY_AS_STRING.getPreferredName(), getKeyAsString()); + } + return builder; + } + + static ParsedBucket fromXContent(XContentParser parser) throws IOException { + return parseSignificantTermsBucketXContent(parser, new ParsedBucket(), (p, bucket) -> bucket.key = p.longValue()); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/ParsedSignificantStringTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/ParsedSignificantStringTerms.java new file mode 100644 index 0000000000000..008a5a28e5d39 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/ParsedSignificantStringTerms.java @@ -0,0 +1,77 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.bucket.significant; + +import org.apache.lucene.util.BytesRef; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; + +import java.io.IOException; + +public class ParsedSignificantStringTerms extends ParsedSignificantTerms { + + @Override + public String getType() { + return SignificantStringTerms.NAME; + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedSignificantStringTerms.class.getSimpleName(), true, ParsedSignificantStringTerms::new); + static { + declareParsedSignificantTermsFields(PARSER, ParsedBucket::fromXContent); + } + + public static ParsedSignificantStringTerms fromXContent(XContentParser parser, String name) throws IOException { + return parseSignificantTermsXContent(() -> PARSER.parse(parser, null), name); + } + + public static class ParsedBucket extends ParsedSignificantTerms.ParsedBucket { + + private BytesRef key; + + @Override + public Object getKey() { + return getKeyAsString(); + } + + @Override + public String getKeyAsString() { + String keyAsString = super.getKeyAsString(); + if (keyAsString != null) { + return keyAsString; + } + return key.utf8ToString(); + } + + public Number getKeyAsNumber() { + return Double.parseDouble(key.utf8ToString()); + } + + @Override + protected XContentBuilder keyToXContent(XContentBuilder builder) throws IOException { + return builder.field(CommonFields.KEY.getPreferredName(), getKey()); + } + + static ParsedBucket fromXContent(XContentParser parser) throws IOException { + return parseSignificantTermsBucketXContent(parser, new ParsedBucket(), (p, bucket) -> bucket.key = p.utf8BytesOrNull()); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/ParsedSignificantTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/ParsedSignificantTerms.java new file mode 100644 index 0000000000000..1b4739c184d5e --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/ParsedSignificantTerms.java @@ -0,0 +1,191 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.bucket.significant; + +import org.elasticsearch.common.CheckedBiConsumer; +import org.elasticsearch.common.CheckedFunction; +import org.elasticsearch.common.CheckedSupplier; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParserUtils; +import org.elasticsearch.search.aggregations.Aggregation; +import org.elasticsearch.search.aggregations.Aggregations; +import org.elasticsearch.search.aggregations.ParsedMultiBucketAggregation; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.function.Function; +import java.util.stream.Collectors; + +public abstract class ParsedSignificantTerms extends ParsedMultiBucketAggregation + implements SignificantTerms { + + private Map bucketMap; + protected long subsetSize; + protected long supersetSize; + + protected long getSubsetSize() { + return subsetSize; + } + + protected long getSupersetSize() { + return supersetSize; + } + + @Override + public List getBuckets() { + return buckets; + } + + @Override + public SignificantTerms.Bucket getBucketByKey(String term) { + if (bucketMap == null) { + bucketMap = buckets.stream().collect(Collectors.toMap(SignificantTerms.Bucket::getKeyAsString, Function.identity())); + } + return bucketMap.get(term); + } + + @Override + public Iterator iterator() { + return buckets.stream().map(bucket -> (SignificantTerms.Bucket) bucket).collect(Collectors.toList()).iterator(); + } + + @Override + protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + builder.field(CommonFields.DOC_COUNT.getPreferredName(), subsetSize); + builder.field(InternalMappedSignificantTerms.BG_COUNT, supersetSize); + builder.startArray(CommonFields.BUCKETS.getPreferredName()); + for (SignificantTerms.Bucket bucket : buckets) { + bucket.toXContent(builder, params); + } + builder.endArray(); + return builder; + } + + static T parseSignificantTermsXContent(final CheckedSupplier aggregationSupplier, + final String name) throws IOException { + T aggregation = aggregationSupplier.get(); + aggregation.setName(name); + for (ParsedBucket bucket : aggregation.buckets) { + bucket.subsetSize = aggregation.subsetSize; + bucket.supersetSize = aggregation.supersetSize; + } + return aggregation; + } + + static void declareParsedSignificantTermsFields(final ObjectParser objectParser, + final CheckedFunction bucketParser) { + declareMultiBucketAggregationFields(objectParser, bucketParser::apply, bucketParser::apply); + objectParser.declareLong((parsedTerms, value) -> parsedTerms.subsetSize = value , CommonFields.DOC_COUNT); + objectParser.declareLong((parsedTerms, value) -> parsedTerms.supersetSize = value , + new ParseField(InternalMappedSignificantTerms.BG_COUNT)); + } + + public abstract static class ParsedBucket extends ParsedMultiBucketAggregation.ParsedBucket implements SignificantTerms.Bucket { + + protected long subsetDf; + protected long subsetSize; + protected long supersetDf; + protected long supersetSize; + protected double score; + + @Override + public long getDocCount() { + return getSubsetDf(); + } + + @Override + public long getSubsetDf() { + return subsetDf; + } + + @Override + public long getSupersetDf() { + return supersetDf; + } + + 
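For orientation, the XContent shape that `ParsedSignificantTerms` and its `ParsedBucket` read back is the same one the internal buckets emit: a key, the `doc_count` (subset document frequency), the significance `score`, and the `bg_count` (superset document frequency), followed by any sub-aggregations. The snippet below is a minimal, self-contained sketch of that bucket layout using a plain POJO and hand-rolled JSON; the class and field names are illustrative only and are not Elasticsearch API.

```java
import java.util.Locale;

// Illustrative stand-in for a significant-terms bucket; not an Elasticsearch class.
final class BucketSketch {
    final String key;
    final long docCount;  // subset document frequency (doc_count)
    final long bgCount;   // superset document frequency (bg_count)
    final double score;   // significance score

    BucketSketch(String key, long docCount, long bgCount, double score) {
        this.key = key;
        this.docCount = docCount;
        this.bgCount = bgCount;
        this.score = score;
    }

    // Emits the same field layout the bucket toXContent implementations produce:
    // key, doc_count, score, bg_count (sub-aggregations omitted for brevity).
    String toJson() {
        return String.format(Locale.ROOT,
            "{\"key\":\"%s\",\"doc_count\":%d,\"score\":%s,\"bg_count\":%d}",
            key, docCount, Double.toString(score), bgCount);
    }

    public static void main(String[] args) {
        System.out.println(new BucketSketch("kibana", 42, 1200, 0.93).toJson());
    }
}
```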
@Override + public double getSignificanceScore() { + return score; + } + + @Override + public long getSupersetSize() { + return supersetSize; + } + + @Override + public long getSubsetSize() { + return subsetSize; + } + + @Override + public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + keyToXContent(builder); + builder.field(CommonFields.DOC_COUNT.getPreferredName(), getDocCount()); + builder.field(InternalSignificantTerms.SCORE, getSignificanceScore()); + builder.field(InternalSignificantTerms.BG_COUNT, getSupersetDf()); + getAggregations().toXContentInternal(builder, params); + builder.endObject(); + return builder; + } + + @Override + protected abstract XContentBuilder keyToXContent(XContentBuilder builder) throws IOException; + + static B parseSignificantTermsBucketXContent(final XContentParser parser, final B bucket, + final CheckedBiConsumer keyConsumer) throws IOException { + + final List aggregations = new ArrayList<>(); + XContentParser.Token token; + String currentFieldName = parser.currentName(); + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (CommonFields.KEY_AS_STRING.getPreferredName().equals(currentFieldName)) { + bucket.setKeyAsString(parser.text()); + } else if (CommonFields.KEY.getPreferredName().equals(currentFieldName)) { + keyConsumer.accept(parser, bucket); + } else if (CommonFields.DOC_COUNT.getPreferredName().equals(currentFieldName)) { + long value = parser.longValue(); + bucket.subsetDf = value; + bucket.setDocCount(value); + } else if (InternalSignificantTerms.SCORE.equals(currentFieldName)) { + bucket.score = parser.longValue(); + } else if (InternalSignificantTerms.BG_COUNT.equals(currentFieldName)) { + bucket.supersetDf = parser.longValue(); + } + } else if (token == XContentParser.Token.START_OBJECT) { + XContentParserUtils.parseTypedKeysObject(parser, Aggregation.TYPED_KEYS_DELIMITER, Aggregation.class, + aggregations::add); + } + } + bucket.setAggregations(new Aggregations(aggregations)); + return bucket; + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantLongTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantLongTerms.java index ffca3740184ac..d20302f32b4ae 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantLongTerms.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantLongTerms.java @@ -40,19 +40,19 @@ static class Bucket extends InternalSignificantTerms.Bucket { long term; - public Bucket(long subsetDf, long subsetSize, long supersetDf, long supersetSize, long term, InternalAggregations aggregations, + Bucket(long subsetDf, long subsetSize, long supersetDf, long supersetSize, long term, InternalAggregations aggregations, DocValueFormat format) { super(subsetDf, subsetSize, supersetDf, supersetSize, aggregations, format); this.term = term; } - public Bucket(long subsetDf, long subsetSize, long supersetDf, long supersetSize, long term, InternalAggregations aggregations, + Bucket(long subsetDf, long subsetSize, long supersetDf, long supersetSize, long term, InternalAggregations aggregations, double score) { this(subsetDf, subsetSize, supersetDf, supersetSize, term, aggregations, null); this.score = score; } - public 
Bucket(StreamInput in, long subsetSize, long supersetSize, DocValueFormat format) throws IOException { + Bucket(StreamInput in, long subsetSize, long supersetSize, DocValueFormat format) throws IOException { super(in, subsetSize, supersetSize, format); subsetDf = in.readVLong(); supersetDf = in.readVLong(); @@ -75,11 +75,6 @@ public Object getKey() { return term; } - @Override - int compareTerm(SignificantTerms.Bucket other) { - return Long.compare(term, ((Number) other.getKey()).longValue()); - } - @Override public String getKeyAsString() { return format.format(term); @@ -96,17 +91,11 @@ Bucket newBucket(long subsetDf, long subsetSize, long supersetDf, long supersetS } @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(); - builder.field(CommonFields.KEY, term); + protected XContentBuilder keyToXContent(XContentBuilder builder) throws IOException { + builder.field(CommonFields.KEY.getPreferredName(), term); if (format != DocValueFormat.RAW) { - builder.field(CommonFields.KEY_AS_STRING, format.format(term)); + builder.field(CommonFields.KEY_AS_STRING.getPreferredName(), format.format(term)); } - builder.field(CommonFields.DOC_COUNT, getDocCount()); - builder.field("score", score); - builder.field("bg_count", supersetDf); - aggregations.toXContentInternal(builder, params); - builder.endObject(); return builder; } } @@ -148,17 +137,6 @@ protected SignificantLongTerms create(long subsetSize, long supersetSize, List pipelineAggregators, Map metaData) throws IOException { - super(name, factories, valuesSource, format, null, bucketCountThresholds, aggregationContext, parent, + super(name, factories, valuesSource, format, null, bucketCountThresholds, context, parent, SubAggCollectionMode.DEPTH_FIRST, false, includeExclude, pipelineAggregators, metaData); this.significanceHeuristic = significanceHeuristic; this.termsAggFactory = termsAggFactory; @@ -119,7 +119,7 @@ public SignificantLongTerms buildAggregation(long owningBucketOrdinal) throws IO @Override public SignificantLongTerms buildEmptyAggregation() { // We need to account for the significance of a miss in our global stats - provide corpus size as context - ContextIndexSearcher searcher = context.searchContext().searcher(); + ContextIndexSearcher searcher = context.searcher(); IndexReader topReader = searcher.getIndexReader(); int supersetSize = topReader.numDocs(); return new SignificantLongTerms(name, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getMinDocCount(), diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantStringTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantStringTerms.java index e73da337a0fc0..299a2655ffc3d 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantStringTerms.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantStringTerms.java @@ -80,11 +80,6 @@ public Number getKeyAsNumber() { return Double.parseDouble(termBytes.utf8ToString()); } - @Override - int compareTerm(SignificantTerms.Bucket other) { - return termBytes.compareTo(((Bucket) other).termBytes); - } - @Override public String getKeyAsString() { return format.format(termBytes); @@ -101,15 +96,8 @@ Bucket newBucket(long subsetDf, long subsetSize, long supersetDf, long supersetS } @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws 
IOException { - builder.startObject(); - builder.field(CommonFields.KEY, getKeyAsString()); - builder.field(CommonFields.DOC_COUNT, getDocCount()); - builder.field("score", score); - builder.field("bg_count", supersetDf); - aggregations.toXContentInternal(builder, params); - builder.endObject(); - return builder; + protected XContentBuilder keyToXContent(XContentBuilder builder) throws IOException { + return builder.field(CommonFields.KEY.getPreferredName(), getKeyAsString()); } } @@ -150,21 +138,6 @@ protected SignificantStringTerms create(long subsetSize, long supersetSize, List supersetSize, significanceHeuristic, buckets); } - @Override - public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - builder.field("doc_count", subsetSize); - builder.startArray(CommonFields.BUCKETS); - for (Bucket bucket : buckets) { - //There is a condition (presumably when only one shard has a bucket?) where reduce is not called - // and I end up with buckets that contravene the user's min_doc_count criteria in my reducer - if (bucket.subsetDf >= minDocCount) { - bucket.toXContent(builder, params); - } - } - builder.endArray(); - return builder; - } - @Override protected Bucket[] createBucketsArray(int size) { return new Bucket[size]; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantStringTermsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantStringTermsAggregator.java index ad7645b1efe33..d9cd5b260d048 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantStringTermsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantStringTermsAggregator.java @@ -31,9 +31,9 @@ import org.elasticsearch.search.aggregations.bucket.terms.StringTermsAggregator; import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.internal.ContextIndexSearcher; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Arrays; @@ -52,7 +52,7 @@ public class SignificantStringTermsAggregator extends StringTermsAggregator { private final SignificanceHeuristic significanceHeuristic; public SignificantStringTermsAggregator(String name, AggregatorFactories factories, ValuesSource valuesSource, DocValueFormat format, - BucketCountThresholds bucketCountThresholds, IncludeExclude.StringFilter includeExclude, AggregationContext aggregationContext, + BucketCountThresholds bucketCountThresholds, IncludeExclude.StringFilter includeExclude, SearchContext aggregationContext, Aggregator parent, SignificanceHeuristic significanceHeuristic, SignificantTermsAggregatorFactory termsAggFactory, List pipelineAggregators, Map metaData) throws IOException { @@ -126,7 +126,7 @@ public SignificantStringTerms buildAggregation(long owningBucketOrdinal) throws @Override public SignificantStringTerms buildEmptyAggregation() { // We need to account for the significance of a miss in our global stats - provide corpus size as context - ContextIndexSearcher searcher = context.searchContext().searcher(); + ContextIndexSearcher searcher = context.searcher(); IndexReader topReader = searcher.getIndexReader(); int 
supersetSize = topReader.numDocs(); return new SignificantStringTerms(name, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getMinDocCount(), diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTerms.java index 34d8361cbf05c..61cb4a9ca0a2d 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTerms.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTerms.java @@ -18,8 +18,6 @@ */ package org.elasticsearch.search.aggregations.bucket.significant; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation; import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation; import java.util.List; @@ -28,54 +26,46 @@ * An aggregation that collects significant terms in comparison to a background set. */ public interface SignificantTerms extends MultiBucketsAggregation, Iterable { - abstract static class Bucket extends InternalMultiBucketAggregation.InternalBucket { - long subsetDf; - long subsetSize; - long supersetDf; - long supersetSize; - - Bucket(long subsetDf, long subsetSize, long supersetDf, long supersetSize) { - this.subsetSize = subsetSize; - this.supersetSize = supersetSize; - this.subsetDf = subsetDf; - this.supersetDf = supersetDf; - } + interface Bucket extends MultiBucketsAggregation.Bucket { /** - * Read from a stream. + * @return The significant score for the subset */ - protected Bucket(StreamInput in, long subsetSize, long supersetSize) { - this.subsetSize = subsetSize; - this.supersetSize = supersetSize; - } - - abstract int compareTerm(SignificantTerms.Bucket other); - - public abstract double getSignificanceScore(); - - abstract Number getKeyAsNumber(); + double getSignificanceScore(); - public long getSubsetDf() { - return subsetDf; - } + /** + * @return The number of docs in the subset containing a particular term. + * This number is equal to the document count of the bucket. + */ + long getSubsetDf(); - public long getSupersetDf() { - return supersetDf; - } + /** + * @return The numbers of docs in the subset (also known as "foreground set"). + * This number is equal to the document count of the containing aggregation. + */ + long getSubsetSize(); - public long getSupersetSize() { - return supersetSize; - } + /** + * @return The number of docs in the superset containing a particular term (also + * known as the "background count" of the bucket) + */ + long getSupersetDf(); - public long getSubsetSize() { - return subsetSize; - } + /** + * @return The numbers of docs in the superset (ordinarily the background count + * of the containing aggregation). + */ + long getSupersetSize(); + /** + * @return The key, expressed as a number + */ + Number getKeyAsNumber(); } @Override - List getBuckets(); + List getBuckets(); /** * Get the bucket for the given term, or null if there is no such bucket. 
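With the `SignificantTerms.Bucket` contract now expressed purely through the accessor interface above, callers depend only on the documented getters rather than on the internal bucket classes. A hedged consumer-side sketch, assuming the caller already holds a `SignificantTerms` instance from a search response and has Elasticsearch on the classpath (the helper name and output format are illustrative), might look like this:

```java
import org.elasticsearch.search.aggregations.bucket.significant.SignificantTerms;

// Illustrative helper: prints each significant term with its foreground/background stats.
final class SignificantTermsPrinter {
    static void print(SignificantTerms terms) {
        for (SignificantTerms.Bucket bucket : terms.getBuckets()) {
            double fgRate = (double) bucket.getSubsetDf() / bucket.getSubsetSize();
            double bgRate = (double) bucket.getSupersetDf() / bucket.getSupersetSize();
            System.out.printf("%s score=%.4f fg=%d/%d (%.4f) bg=%d/%d (%.4f)%n",
                    bucket.getKeyAsString(), bucket.getSignificanceScore(),
                    bucket.getSubsetDf(), bucket.getSubsetSize(), fgRate,
                    bucket.getSupersetDf(), bucket.getSupersetSize(), bgRate);
        }
    }
}
```

Because `getSubsetDf()` equals the bucket's document count and `getSupersetDf()` is its background count, the two rates above correspond to the foreground and background frequencies the significance heuristics compare.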
diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregationBuilder.java index 245297d72cc28..f416d2dcf1095 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregationBuilder.java @@ -21,25 +21,30 @@ import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ParseFieldRegistry; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; +import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.bucket.significant.heuristics.JLHScore; import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristic; +import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristicParser; import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregator; import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregator.BucketCountThresholds; import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Objects; @@ -49,7 +54,6 @@ */ public class SignificantTermsAggregationBuilder extends ValuesSourceAggregationBuilder { public static final String NAME = "significant_terms"; - public static final InternalAggregation.Type TYPE = new Type(NAME); static final ParseField BACKGROUND_FILTER = new ParseField("background_filter"); static final ParseField HEURISTIC = new ParseField("significance_heuristic"); @@ -58,6 +62,48 @@ public class SignificantTermsAggregationBuilder extends ValuesSourceAggregationB 3, 0, 10, -1); static final SignificanceHeuristic DEFAULT_SIGNIFICANCE_HEURISTIC = new JLHScore(); + public static Aggregator.Parser getParser(ParseFieldRegistry significanceHeuristicParserRegistry) { + ObjectParser parser = + new ObjectParser<>(SignificantTermsAggregationBuilder.NAME); + 
ValuesSourceParserHelper.declareAnyFields(parser, true, true); + + parser.declareInt(SignificantTermsAggregationBuilder::shardSize, TermsAggregationBuilder.SHARD_SIZE_FIELD_NAME); + + parser.declareLong(SignificantTermsAggregationBuilder::minDocCount, TermsAggregationBuilder.MIN_DOC_COUNT_FIELD_NAME); + + parser.declareLong(SignificantTermsAggregationBuilder::shardMinDocCount, TermsAggregationBuilder.SHARD_MIN_DOC_COUNT_FIELD_NAME); + + parser.declareInt(SignificantTermsAggregationBuilder::size, TermsAggregationBuilder.REQUIRED_SIZE_FIELD_NAME); + + parser.declareString(SignificantTermsAggregationBuilder::executionHint, TermsAggregationBuilder.EXECUTION_HINT_FIELD_NAME); + + parser.declareObject((b, v) -> { if (v.isPresent()) b.backgroundFilter(v.get()); }, + (p, context) -> context.parseInnerQueryBuilder(), + SignificantTermsAggregationBuilder.BACKGROUND_FILTER); + + parser.declareField((b, v) -> b.includeExclude(IncludeExclude.merge(v, b.includeExclude())), + IncludeExclude::parseInclude, IncludeExclude.INCLUDE_FIELD, ObjectParser.ValueType.OBJECT_ARRAY_OR_STRING); + + parser.declareField((b, v) -> b.includeExclude(IncludeExclude.merge(b.includeExclude(), v)), + IncludeExclude::parseExclude, IncludeExclude.EXCLUDE_FIELD, ObjectParser.ValueType.OBJECT_ARRAY_OR_STRING); + + for (String name : significanceHeuristicParserRegistry.getNames()) { + parser.declareObject(SignificantTermsAggregationBuilder::significanceHeuristic, + (p, context) -> { + SignificanceHeuristicParser significanceHeuristicParser = significanceHeuristicParserRegistry + .lookupReturningNullIfNotFound(name); + return significanceHeuristicParser.parse(context); + }, + new ParseField(name)); + } + return new Aggregator.Parser() { + @Override + public AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return parser.parse(context.parser(), new SignificantTermsAggregationBuilder(aggregationName, null), context); + } + }; + } + private IncludeExclude includeExclude = null; private String executionHint = null; private QueryBuilder filterBuilder = null; @@ -65,14 +111,14 @@ public class SignificantTermsAggregationBuilder extends ValuesSourceAggregationB private SignificanceHeuristic significanceHeuristic = DEFAULT_SIGNIFICANCE_HEURISTIC; public SignificantTermsAggregationBuilder(String name, ValueType valueType) { - super(name, TYPE, ValuesSourceType.ANY, valueType); + super(name, ValuesSourceType.ANY, valueType); } /** * Read from a Stream. 
*/ public SignificantTermsAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.ANY); + super(in, ValuesSourceType.ANY); bucketCountThresholds = new BucketCountThresholds(in); executionHint = in.readOptionalString(); filterBuilder = in.readOptionalNamedWriteable(QueryBuilder.class); @@ -218,10 +264,11 @@ public SignificanceHeuristic significanceHeuristic() { } @Override - protected ValuesSourceAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, + protected ValuesSourceAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new SignificantTermsAggregatorFactory(name, type, config, includeExclude, executionHint, filterBuilder, - bucketCountThresholds, significanceHeuristic, context, parent, subFactoriesBuilder, metaData); + SignificanceHeuristic executionHeuristic = this.significanceHeuristic.rewrite(context); + return new SignificantTermsAggregatorFactory(name, config, includeExclude, executionHint, filterBuilder, + bucketCountThresholds, executionHeuristic, context, parent, subFactoriesBuilder, metaData); } @Override @@ -256,7 +303,7 @@ protected boolean innerEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregatorFactory.java index ab30e1b2d4aad..65819736d5419 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregatorFactory.java @@ -19,17 +19,16 @@ package org.elasticsearch.search.aggregations.bucket.significant; -import org.apache.lucene.search.BooleanQuery; -import org.apache.lucene.search.IndexSearcher; -import org.apache.lucene.search.Query; -import org.apache.lucene.search.TermQuery; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.PostingsEnum; import org.apache.lucene.index.Term; import org.apache.lucene.search.BooleanClause.Occur; +import org.apache.lucene.search.BooleanQuery; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.common.lucene.index.FilterableTermsEnum; import org.elasticsearch.common.lucene.index.FreqTermsEnum; @@ -42,20 +41,19 @@ import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.NonCollectingAggregator; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.bucket.BucketUtils; import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristic; import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregator; import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregator.BucketCountThresholds; import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude; import 
org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; +import java.util.Arrays; import java.util.List; import java.util.Map; @@ -68,22 +66,29 @@ public class SignificantTermsAggregatorFactory extends ValuesSourceAggregatorFac private MappedFieldType fieldType; private FilterableTermsEnum termsEnum; private int numberOfAggregatorsCreated; - private final Query filter; + final Query filter; private final int supersetNumDocs; private final TermsAggregator.BucketCountThresholds bucketCountThresholds; private final SignificanceHeuristic significanceHeuristic; - public SignificantTermsAggregatorFactory(String name, Type type, ValuesSourceConfig config, IncludeExclude includeExclude, - String executionHint, QueryBuilder filterBuilder, TermsAggregator.BucketCountThresholds bucketCountThresholds, - SignificanceHeuristic significanceHeuristic, AggregationContext context, AggregatorFactory parent, - AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + public SignificantTermsAggregatorFactory(String name, + ValuesSourceConfig config, + IncludeExclude includeExclude, + String executionHint, + QueryBuilder filterBuilder, + TermsAggregator.BucketCountThresholds bucketCountThresholds, + SignificanceHeuristic significanceHeuristic, + SearchContext context, + AggregatorFactory parent, + AggregatorFactories.Builder subFactoriesBuilder, + Map metaData) throws IOException { + super(name, config, context, parent, subFactoriesBuilder, metaData); this.includeExclude = includeExclude; this.executionHint = executionHint; this.filter = filterBuilder == null ? 
null - : filterBuilder.toQuery(context.searchContext().getQueryShardContext()); - IndexSearcher searcher = context.searchContext().searcher(); + : filterBuilder.toFilter(context.getQueryShardContext()); + IndexSearcher searcher = context.searcher(); this.supersetNumDocs = filter == null // Important - need to use the doc count that includes deleted docs // or we have this issue: https://github.com/elastic/elasticsearch/issues/7951 @@ -91,15 +96,14 @@ public SignificantTermsAggregatorFactory(String name, Type type, ValuesSourceCon : searcher.count(filter); this.bucketCountThresholds = bucketCountThresholds; this.significanceHeuristic = significanceHeuristic; - this.significanceHeuristic.initialize(context.searchContext()); - setFieldInfo(); + setFieldInfo(context); } - private void setFieldInfo() { + private void setFieldInfo(SearchContext context) { if (!config.unmapped()) { this.indexedFieldName = config.fieldContext().field(); - fieldType = SearchContext.current().smartNameFieldType(indexedFieldName); + fieldType = context.smartNameFieldType(indexedFieldName); } } @@ -114,7 +118,7 @@ private FilterableTermsEnum getTermsEnum(String field) throws IOException { if (termsEnum != null) { return termsEnum; } - IndexReader reader = context.searchContext().searcher().getIndexReader(); + IndexReader reader = context.searcher().getIndexReader(); if (numberOfAggregatorsCreated > 1) { termsEnum = new FreqTermsEnum(reader, field, true, false, filter, context.bigArrays()); } else { @@ -124,7 +128,7 @@ private FilterableTermsEnum getTermsEnum(String field) throws IOException { } private long getBackgroundFrequency(String value) throws IOException { - Query query = fieldType.termQuery(value, context.searchContext().getQueryShardContext()); + Query query = fieldType.termQuery(value, context.getQueryShardContext()); if (query instanceof TermQuery) { // for types that use the inverted index, we prefer using a caching terms // enum that will do a better job at reusing index inputs @@ -143,7 +147,7 @@ private long getBackgroundFrequency(String value) throws IOException { .add(filter, Occur.FILTER) .build(); } - return context.searchContext().searcher().count(query); + return context.searcher().count(query); } public long getBackgroundFrequency(BytesRef termBytes) throws IOException { @@ -192,13 +196,13 @@ protected Aggregator doCreateInternal(ValuesSource valuesSource, Aggregator pare // such are impossible to differentiate from non-significant terms // at that early stage. bucketCountThresholds.setShardSize(2 * BucketUtils.suggestShardSideQueueSize(bucketCountThresholds.getRequiredSize(), - context.searchContext().numberOfShards())); + context.numberOfShards())); } if (valuesSource instanceof ValuesSource.Bytes) { ExecutionMode execution = null; if (executionHint != null) { - execution = ExecutionMode.fromString(executionHint, context.searchContext().parseFieldMatcher()); + execution = ExecutionMode.fromString(executionHint); } if (!(valuesSource instanceof ValuesSource.Bytes.WithOrdinals)) { execution = ExecutionMode.MAP; @@ -211,13 +215,13 @@ protected Aggregator doCreateInternal(ValuesSource valuesSource, Aggregator pare } } assert execution != null; - + DocValueFormat format = config.format(); if ((includeExclude != null) && (includeExclude.isRegexBased()) && format != DocValueFormat.RAW) { throw new AggregationExecutionException("Aggregation [" + name + "] cannot support regular expression style include/exclude " + "settings as they can only be applied to string fields. 
Use an array of values for include/exclude clauses"); } - + return execution.create(name, factories, valuesSource, format, bucketCountThresholds, includeExclude, context, parent, significanceHeuristic, this, pipelineAggregators, metaData); } @@ -250,54 +254,81 @@ public enum ExecutionMode { MAP(new ParseField("map")) { @Override - Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource, DocValueFormat format, - TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude, - AggregationContext aggregationContext, Aggregator parent, SignificanceHeuristic significanceHeuristic, - SignificantTermsAggregatorFactory termsAggregatorFactory, List pipelineAggregators, - Map metaData) throws IOException { + Aggregator create(String name, + AggregatorFactories factories, + ValuesSource valuesSource, + DocValueFormat format, + TermsAggregator.BucketCountThresholds bucketCountThresholds, + IncludeExclude includeExclude, + SearchContext aggregationContext, + Aggregator parent, + SignificanceHeuristic significanceHeuristic, + SignificantTermsAggregatorFactory termsAggregatorFactory, + List pipelineAggregators, + Map metaData) throws IOException { + final IncludeExclude.StringFilter filter = includeExclude == null ? null : includeExclude.convertToStringFilter(format); return new SignificantStringTermsAggregator(name, factories, valuesSource, format, bucketCountThresholds, filter, aggregationContext, parent, significanceHeuristic, termsAggregatorFactory, pipelineAggregators, metaData); + } }, GLOBAL_ORDINALS(new ParseField("global_ordinals")) { @Override - Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource, DocValueFormat format, - TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude, - AggregationContext aggregationContext, Aggregator parent, SignificanceHeuristic significanceHeuristic, - SignificantTermsAggregatorFactory termsAggregatorFactory, List pipelineAggregators, - Map metaData) throws IOException { + Aggregator create(String name, + AggregatorFactories factories, + ValuesSource valuesSource, + DocValueFormat format, + TermsAggregator.BucketCountThresholds bucketCountThresholds, + IncludeExclude includeExclude, + SearchContext aggregationContext, + Aggregator parent, + SignificanceHeuristic significanceHeuristic, + SignificantTermsAggregatorFactory termsAggregatorFactory, + List pipelineAggregators, + Map metaData) throws IOException { + final IncludeExclude.OrdinalsFilter filter = includeExclude == null ? 
null : includeExclude.convertToOrdinalsFilter(format); return new GlobalOrdinalsSignificantTermsAggregator(name, factories, (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, format, bucketCountThresholds, filter, - aggregationContext, parent, significanceHeuristic, termsAggregatorFactory, pipelineAggregators, metaData); + aggregationContext, parent, false, significanceHeuristic, termsAggregatorFactory, pipelineAggregators, metaData); + } }, GLOBAL_ORDINALS_HASH(new ParseField("global_ordinals_hash")) { @Override - Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource, DocValueFormat format, - TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude, - AggregationContext aggregationContext, Aggregator parent, SignificanceHeuristic significanceHeuristic, - SignificantTermsAggregatorFactory termsAggregatorFactory, List pipelineAggregators, - Map metaData) throws IOException { + Aggregator create(String name, + AggregatorFactories factories, + ValuesSource valuesSource, + DocValueFormat format, + TermsAggregator.BucketCountThresholds bucketCountThresholds, + IncludeExclude includeExclude, + SearchContext aggregationContext, + Aggregator parent, + SignificanceHeuristic significanceHeuristic, + SignificantTermsAggregatorFactory termsAggregatorFactory, + List pipelineAggregators, + Map metaData) throws IOException { + final IncludeExclude.OrdinalsFilter filter = includeExclude == null ? null : includeExclude.convertToOrdinalsFilter(format); - return new GlobalOrdinalsSignificantTermsAggregator.WithHash(name, factories, - (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, format, bucketCountThresholds, filter, - aggregationContext, parent, significanceHeuristic, termsAggregatorFactory, pipelineAggregators, metaData); + return new GlobalOrdinalsSignificantTermsAggregator(name, factories, + (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, format, bucketCountThresholds, filter, aggregationContext, parent, + true, significanceHeuristic, termsAggregatorFactory, pipelineAggregators, metaData); + } }; - public static ExecutionMode fromString(String value, ParseFieldMatcher parseFieldMatcher) { + public static ExecutionMode fromString(String value) { for (ExecutionMode mode : values()) { - if (parseFieldMatcher.match(value, mode.parseField)) { + if (mode.parseField.match(value)) { return mode; } } - throw new IllegalArgumentException("Unknown `execution_hint`: [" + value + "], expected any of " + values()); + throw new IllegalArgumentException("Unknown `execution_hint`: [" + value + "], expected any of " + Arrays.toString(values())); } private final ParseField parseField; @@ -306,11 +337,18 @@ public static ExecutionMode fromString(String value, ParseFieldMatcher parseFiel this.parseField = parseField; } - abstract Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource, DocValueFormat format, - TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude, - AggregationContext aggregationContext, Aggregator parent, SignificanceHeuristic significanceHeuristic, - SignificantTermsAggregatorFactory termsAggregatorFactory, List pipelineAggregators, - Map metaData) throws IOException; + abstract Aggregator create(String name, + AggregatorFactories factories, + ValuesSource valuesSource, + DocValueFormat format, + TermsAggregator.BucketCountThresholds bucketCountThresholds, + IncludeExclude includeExclude, + SearchContext aggregationContext, + Aggregator 
parent, + SignificanceHeuristic significanceHeuristic, + SignificantTermsAggregatorFactory termsAggregatorFactory, + List pipelineAggregators, + Map metaData) throws IOException; @Override public String toString() { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsParser.java deleted file mode 100644 index 0f08cf0a0a32b..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsParser.java +++ /dev/null @@ -1,110 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.search.aggregations.bucket.significant; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.ParseFieldRegistry; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentParser.Token; -import org.elasticsearch.index.query.QueryBuilder; -import org.elasticsearch.index.query.QueryParseContext; -import org.elasticsearch.indices.query.IndicesQueriesRegistry; -import org.elasticsearch.search.aggregations.Aggregator.SubAggCollectionMode; -import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristic; -import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristicParser; -import org.elasticsearch.search.aggregations.bucket.terms.AbstractTermsParser; -import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregator; -import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregator.BucketCountThresholds; -import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; -import java.util.Optional; - -/** - * - */ -public class SignificantTermsParser extends AbstractTermsParser { - private final ParseFieldRegistry significanceHeuristicParserRegistry; - private final IndicesQueriesRegistry queriesRegistry; - - public SignificantTermsParser(ParseFieldRegistry significanceHeuristicParserRegistry, - IndicesQueriesRegistry queriesRegistry) { - this.significanceHeuristicParserRegistry = significanceHeuristicParserRegistry; - this.queriesRegistry = queriesRegistry; - } - - @Override - protected SignificantTermsAggregationBuilder doCreateFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, BucketCountThresholds bucketCountThresholds, - SubAggCollectionMode collectMode, 
String executionHint, - IncludeExclude incExc, Map otherOptions) { - SignificantTermsAggregationBuilder factory = new SignificantTermsAggregationBuilder(aggregationName, targetValueType); - if (bucketCountThresholds != null) { - factory.bucketCountThresholds(bucketCountThresholds); - } - if (executionHint != null) { - factory.executionHint(executionHint); - } - if (incExc != null) { - factory.includeExclude(incExc); - } - QueryBuilder backgroundFilter = (QueryBuilder) otherOptions.get(SignificantTermsAggregationBuilder.BACKGROUND_FILTER); - if (backgroundFilter != null) { - factory.backgroundFilter(backgroundFilter); - } - SignificanceHeuristic significanceHeuristic = - (SignificanceHeuristic) otherOptions.get(SignificantTermsAggregationBuilder.HEURISTIC); - if (significanceHeuristic != null) { - factory.significanceHeuristic(significanceHeuristic); - } - return factory; - } - - @Override - public boolean parseSpecial(String aggregationName, XContentParseContext context, Token token, - String currentFieldName, Map otherOptions) throws IOException { - if (token == XContentParser.Token.START_OBJECT) { - SignificanceHeuristicParser significanceHeuristicParser = significanceHeuristicParserRegistry - .lookupReturningNullIfNotFound(currentFieldName, context.getParseFieldMatcher()); - if (significanceHeuristicParser != null) { - SignificanceHeuristic significanceHeuristic = significanceHeuristicParser.parse(context); - otherOptions.put(SignificantTermsAggregationBuilder.HEURISTIC, significanceHeuristic); - return true; - } else if (context.matchField(currentFieldName, SignificantTermsAggregationBuilder.BACKGROUND_FILTER)) { - QueryParseContext queryParseContext = new QueryParseContext(context.getDefaultScriptLanguage(), queriesRegistry, - context.getParser(), context.getParseFieldMatcher()); - Optional filter = queryParseContext.parseInnerQueryBuilder(); - if (filter.isPresent()) { - otherOptions.put(SignificantTermsAggregationBuilder.BACKGROUND_FILTER, filter.get()); - } - return true; - } - } - return false; - } - - @Override - protected BucketCountThresholds getDefaultBucketCountThresholds() { - return new TermsAggregator.BucketCountThresholds(SignificantTermsAggregationBuilder.DEFAULT_BUCKET_COUNT_THRESHOLDS); - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/UnmappedSignificantTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/UnmappedSignificantTerms.java index 1a1094e62b467..c2cf93b28105e 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/UnmappedSignificantTerms.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/UnmappedSignificantTerms.java @@ -31,15 +31,18 @@ import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import java.io.IOException; +import java.util.Iterator; import java.util.List; import java.util.Map; +import static java.util.Collections.emptyIterator; import static java.util.Collections.emptyList; /** * Result of the running the significant terms aggregation on an unmapped field. 
*/ public class UnmappedSignificantTerms extends InternalSignificantTerms { + public static final String NAME = "umsigterms"; /** @@ -75,6 +78,11 @@ public String getWriteableName() { return NAME; } + @Override + public String getType() { + return SignificantStringTerms.NAME; + } + @Override public UnmappedSignificantTerms create(List buckets) { return new UnmappedSignificantTerms(name, requiredSize, minDocCount, pipelineAggregators(), metaData); @@ -102,7 +110,7 @@ public InternalAggregation doReduce(List aggregations, Redu @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - builder.startArray(CommonFields.BUCKETS).endArray(); + builder.startArray(CommonFields.BUCKETS.getPreferredName()).endArray(); return builder; } @@ -112,7 +120,12 @@ protected Bucket[] createBucketsArray(int size) { } @Override - protected List getBucketsInternal() { + public Iterator iterator() { + return emptyIterator(); + } + + @Override + public List getBuckets() { return emptyList(); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/GND.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/GND.java index 3ae26639aa9c4..5968f42211e54 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/GND.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/GND.java @@ -26,8 +26,8 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.index.query.QueryShardException; -import org.elasticsearch.search.aggregations.support.XContentParseContext; import java.io.IOException; @@ -113,13 +113,13 @@ protected SignificanceHeuristic newHeuristic(boolean includeNegatives, boolean b } @Override - public SignificanceHeuristic parse(XContentParseContext context) throws IOException, QueryShardException { - XContentParser parser = context.getParser(); + public SignificanceHeuristic parse(QueryParseContext context) throws IOException, QueryShardException { + XContentParser parser = context.parser(); String givenName = parser.currentName(); boolean backgroundIsSuperset = true; XContentParser.Token token = parser.nextToken(); while (!token.equals(XContentParser.Token.END_OBJECT)) { - if (context.matchField(parser.currentName(), BACKGROUND_IS_SUPERSET)) { + if (BACKGROUND_IS_SUPERSET.match(parser.currentName())) { parser.nextToken(); backgroundIsSuperset = parser.booleanValue(); } else { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/JLHScore.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/JLHScore.java index 58f8060a10874..d8009818f7f6b 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/JLHScore.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/JLHScore.java @@ -26,8 +26,8 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.index.query.QueryShardException; -import 
org.elasticsearch.search.aggregations.support.XContentParseContext; import java.io.IOException; @@ -104,9 +104,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws return builder; } - public static SignificanceHeuristic parse(XContentParseContext context) + public static SignificanceHeuristic parse(QueryParseContext context) throws IOException, QueryShardException { - XContentParser parser = context.getParser(); + XContentParser parser = context.parser(); // move to the closing bracket if (!parser.nextToken().equals(XContentParser.Token.END_OBJECT)) { throw new ElasticsearchParseException( diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/NXYSignificanceHeuristic.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/NXYSignificanceHeuristic.java index d6064ca37fd52..5f92b5b40e632 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/NXYSignificanceHeuristic.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/NXYSignificanceHeuristic.java @@ -27,8 +27,8 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.index.query.QueryShardException; -import org.elasticsearch.search.aggregations.support.XContentParseContext; import java.io.IOException; @@ -152,18 +152,18 @@ protected void build(XContentBuilder builder) throws IOException { public abstract static class NXYParser implements SignificanceHeuristicParser { @Override - public SignificanceHeuristic parse(XContentParseContext context) + public SignificanceHeuristic parse(QueryParseContext context) throws IOException, QueryShardException { - XContentParser parser = context.getParser(); + XContentParser parser = context.parser(); String givenName = parser.currentName(); boolean includeNegatives = false; boolean backgroundIsSuperset = true; XContentParser.Token token = parser.nextToken(); while (!token.equals(XContentParser.Token.END_OBJECT)) { - if (context.matchField(parser.currentName(), INCLUDE_NEGATIVES_FIELD)) { + if (INCLUDE_NEGATIVES_FIELD.match(parser.currentName())) { parser.nextToken(); includeNegatives = parser.booleanValue(); - } else if (context.matchField(parser.currentName(), BACKGROUND_IS_SUPERSET)) { + } else if (BACKGROUND_IS_SUPERSET.match(parser.currentName())) { parser.nextToken(); backgroundIsSuperset = parser.booleanValue(); } else { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/PercentageScore.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/PercentageScore.java index c7e5c7ead6f58..f4a61fbfae0cc 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/PercentageScore.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/PercentageScore.java @@ -26,8 +26,8 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.index.query.QueryShardException; -import org.elasticsearch.search.aggregations.support.XContentParseContext; import 
java.io.IOException; @@ -56,9 +56,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws return builder; } - public static SignificanceHeuristic parse(XContentParseContext context) + public static SignificanceHeuristic parse(QueryParseContext context) throws IOException, QueryShardException { - XContentParser parser = context.getParser(); + XContentParser parser = context.parser(); // move to the closing bracket if (!parser.nextToken().equals(XContentParser.Token.END_OBJECT)) { throw new ElasticsearchParseException("failed to parse [percentage] significance heuristic. expected an empty object, but got [{}] instead", parser.currentToken()); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/ScriptHeuristic.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/ScriptHeuristic.java index c933f9ef59631..5f496ecbf8049 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/ScriptHeuristic.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/ScriptHeuristic.java @@ -24,38 +24,58 @@ import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.logging.ESLoggerFactory; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.index.query.QueryShardException; +import org.elasticsearch.script.CompiledScript; import org.elasticsearch.script.ExecutableScript; import org.elasticsearch.script.Script; -import org.elasticsearch.script.Script.ScriptField; import org.elasticsearch.script.ScriptContext; -import org.elasticsearch.script.ScriptService; import org.elasticsearch.search.aggregations.InternalAggregation; -import org.elasticsearch.search.aggregations.support.XContentParseContext; import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; -import java.util.Collections; import java.util.Objects; public class ScriptHeuristic extends SignificanceHeuristic { public static final String NAME = "script_heuristic"; - private final LongAccessor subsetSizeHolder; - private final LongAccessor supersetSizeHolder; - private final LongAccessor subsetDfHolder; - private final LongAccessor supersetDfHolder; private final Script script; - ExecutableScript searchScript = null; + + // This class holds an executable form of the script with private variables ready for execution + // on a single search thread. 
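The ExecutableScriptHeuristic added in the hunk below binds the four frequency holders to the executable script once, at construction time, so each getScore call only has to update the holder values and re-run the script. A minimal standalone sketch of that bind-once / score-many pattern follows; every name in it is hypothetical, it is not Elasticsearch API, only an illustration of the shape of the change.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.ToDoubleFunction;

// Bind-once / score-many sketch: one mutable holder per variable is registered with the
// "script" a single time, then reused for every bucket that gets scored.
public class BindOnceHeuristicSketch {

    static final class LongHolder extends Number {
        long value;
        @Override public int intValue() { return (int) value; }
        @Override public long longValue() { return value; }
        @Override public float floatValue() { return value; }
        @Override public double doubleValue() { return value; }
    }

    private final Map<String, Number> vars = new HashMap<>();
    private final LongHolder subsetFreq = new LongHolder();
    private final LongHolder subsetSize = new LongHolder();
    private final LongHolder supersetFreq = new LongHolder();
    private final LongHolder supersetSize = new LongHolder();
    private final ToDoubleFunction<Map<String, Number>> script;

    BindOnceHeuristicSketch(ToDoubleFunction<Map<String, Number>> script) {
        this.script = script;
        // Registered once, mirroring executableScript.setNextVar(...) in the hunk below.
        vars.put("_subset_freq", subsetFreq);
        vars.put("_subset_size", subsetSize);
        vars.put("_superset_freq", supersetFreq);
        vars.put("_superset_size", supersetSize);
    }

    double getScore(long sf, long ss, long tf, long ts) {
        subsetFreq.value = sf;
        subsetSize.value = ss;
        supersetFreq.value = tf;
        supersetSize.value = ts;
        return script.applyAsDouble(vars);
    }

    public static void main(String[] args) {
        // A toy "script": foreground rate divided by background rate.
        BindOnceHeuristicSketch h = new BindOnceHeuristicSketch(v ->
                (v.get("_subset_freq").doubleValue() / v.get("_subset_size").doubleValue())
                        / (v.get("_superset_freq").doubleValue() / v.get("_superset_size").doubleValue()));
        System.out.println(h.getScore(5, 100, 50, 10_000)); // prints 10.0
    }
}
```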
+ static class ExecutableScriptHeuristic extends ScriptHeuristic { + private final LongAccessor subsetSizeHolder; + private final LongAccessor supersetSizeHolder; + private final LongAccessor subsetDfHolder; + private final LongAccessor supersetDfHolder; + private final ExecutableScript executableScript; + + ExecutableScriptHeuristic(Script script, ExecutableScript executableScript){ + super(script); + subsetSizeHolder = new LongAccessor(); + supersetSizeHolder = new LongAccessor(); + subsetDfHolder = new LongAccessor(); + supersetDfHolder = new LongAccessor(); + this.executableScript = executableScript; + executableScript.setNextVar("_subset_freq", subsetDfHolder); + executableScript.setNextVar("_subset_size", subsetSizeHolder); + executableScript.setNextVar("_superset_freq", supersetDfHolder); + executableScript.setNextVar("_superset_size", supersetSizeHolder); + } + + @Override + public double getScore(long subsetFreq, long subsetSize, long supersetFreq, long supersetSize) { + subsetSizeHolder.value = subsetSize; + supersetSizeHolder.value = supersetSize; + subsetDfHolder.value = subsetFreq; + supersetDfHolder.value = supersetFreq; + return ((Number) executableScript.run()).doubleValue(); + } + } public ScriptHeuristic(Script script) { - subsetSizeHolder = new LongAccessor(); - supersetSizeHolder = new LongAccessor(); - subsetDfHolder = new LongAccessor(); - supersetDfHolder = new LongAccessor(); this.script = script; } @@ -72,21 +92,14 @@ public void writeTo(StreamOutput out) throws IOException { } @Override - public void initialize(InternalAggregation.ReduceContext context) { - initialize(context.scriptService()); + public SignificanceHeuristic rewrite(InternalAggregation.ReduceContext context) { + CompiledScript compiledScript = context.scriptService().compile(script, ScriptContext.Standard.AGGS); + return new ExecutableScriptHeuristic(script, context.scriptService().executable(compiledScript, script.getParams())); } @Override - public void initialize(SearchContext context) { - initialize(context.scriptService()); - } - - public void initialize(ScriptService scriptService) { - searchScript = scriptService.executable(script, ScriptContext.Standard.AGGS, Collections.emptyMap()); - searchScript.setNextVar("_subset_freq", subsetDfHolder); - searchScript.setNextVar("_subset_size", subsetSizeHolder); - searchScript.setNextVar("_superset_freq", supersetDfHolder); - searchScript.setNextVar("_superset_size", supersetSizeHolder); + public SignificanceHeuristic rewrite(SearchContext context) { + return new ExecutableScriptHeuristic(script, context.getQueryShardContext().getExecutableScript(script, ScriptContext.Standard.AGGS)); } /** @@ -100,19 +113,7 @@ public void initialize(ScriptService scriptService) { */ @Override public double getScore(long subsetFreq, long subsetSize, long supersetFreq, long supersetSize) { - if (searchScript == null) { - //In tests, wehn calling assertSearchResponse(..) the response is streamed one additional time with an arbitrary version, see assertVersionSerializable(..). - // Now, for version before 1.5.0 the score is computed after streaming the response but for scripts the script does not exists yet. - // assertSearchResponse() might therefore fail although there is no problem. - // This should be replaced by an exception in 2.0. 
- ESLoggerFactory.getLogger("script heuristic").warn("cannot compute score - script has not been initialized yet."); - return 0; - } - subsetSizeHolder.value = subsetSize; - supersetSizeHolder.value = supersetSize; - subsetDfHolder.value = subsetFreq; - supersetDfHolder.value = supersetFreq; - return ((Number) searchScript.run()).doubleValue(); + throw new UnsupportedOperationException("This scoring heuristic must have 'rewrite' called on it to provide a version ready for use"); } @Override @@ -123,7 +124,7 @@ public String getWriteableName() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params builderParams) throws IOException { builder.startObject(NAME); - builder.field(ScriptField.SCRIPT.getPreferredName()); + builder.field(Script.SCRIPT_PARSE_FIELD.getPreferredName()); script.toXContent(builder, builderParams); builder.endObject(); return builder; @@ -146,9 +147,9 @@ public boolean equals(Object obj) { return Objects.equals(script, other.script); } - public static SignificanceHeuristic parse(XContentParseContext context) + public static SignificanceHeuristic parse(QueryParseContext context) throws IOException, QueryShardException { - XContentParser parser = context.getParser(); + XContentParser parser = context.parser(); String heuristicName = parser.currentName(); Script script = null; XContentParser.Token token; @@ -157,8 +158,8 @@ public static SignificanceHeuristic parse(XContentParseContext context) if (token.equals(XContentParser.Token.FIELD_NAME)) { currentFieldName = parser.currentName(); } else { - if (context.matchField(currentFieldName, ScriptField.SCRIPT)) { - script = Script.parse(parser, context.getParseFieldMatcher(), context.getDefaultScriptLanguage()); + if (Script.SCRIPT_PARSE_FIELD.match(currentFieldName)) { + script = Script.parse(parser, context.getDefaultScriptLanguage()); } else { throw new ElasticsearchParseException("failed to parse [{}] significance heuristic. 
unknown object [{}]", heuristicName, currentFieldName); } @@ -171,26 +172,6 @@ public static SignificanceHeuristic parse(XContentParseContext context) return new ScriptHeuristic(script); } - public static class ScriptHeuristicBuilder implements SignificanceHeuristicBuilder { - - private Script script = null; - - public ScriptHeuristicBuilder setScript(Script script) { - this.script = script; - return this; - } - - @Override - public XContentBuilder toXContent(XContentBuilder builder, Params builderParams) throws IOException { - builder.startObject(NAME); - builder.field(ScriptField.SCRIPT.getPreferredName()); - script.toXContent(builder, builderParams); - builder.endObject(); - return builder; - } - - } - public final class LongAccessor extends Number { public long value; @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristic.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristic.java index db9711c1a8de5..7b6cf699741c6 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristic.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristic.java @@ -50,11 +50,23 @@ protected void checkFrequencyValidity(long subsetFreq, long subsetSize, long sup } } - public void initialize(InternalAggregation.ReduceContext reduceContext) { - + /** + * Provides a hook for subclasses to provide a version of the heuristic + * prepared for execution on data on the coordinating node. + * @param reduceContext the reduce context on the coordinating node + * @return a version of this heuristic suitable for execution + */ + public SignificanceHeuristic rewrite(InternalAggregation.ReduceContext reduceContext) { + return this; } - public void initialize(SearchContext context) { - + /** + * Provides a hook for subclasses to provide a version of the heuristic + * prepared for execution on data on a shard. 
+ * @param context the search context on the data node + * @return a version of this heuristic suitable for execution + */ + public SignificanceHeuristic rewrite(SearchContext context) { + return this; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristicParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristicParser.java index 26fd552a6b142..3c6f98c155eac 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristicParser.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristicParser.java @@ -22,7 +22,7 @@ import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; +import org.elasticsearch.index.query.QueryParseContext; import java.io.IOException; @@ -31,5 +31,5 @@ */ @FunctionalInterface public interface SignificanceHeuristicParser { - SignificanceHeuristic parse(XContentParseContext context) throws IOException, ParsingException; + SignificanceHeuristic parse(QueryParseContext context) throws IOException, ParsingException; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/AbstractStringTermsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/AbstractStringTermsAggregator.java index 5719fe7721b3e..9b40fcc42dcc6 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/AbstractStringTermsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/AbstractStringTermsAggregator.java @@ -24,7 +24,7 @@ import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -36,7 +36,7 @@ abstract class AbstractStringTermsAggregator extends TermsAggregator { protected final boolean showTermDocCountError; - public AbstractStringTermsAggregator(String name, AggregatorFactories factories, AggregationContext context, Aggregator parent, + AbstractStringTermsAggregator(String name, AggregatorFactories factories, SearchContext context, Aggregator parent, Terms.Order order, DocValueFormat format, BucketCountThresholds bucketCountThresholds, SubAggCollectionMode subAggCollectMode, boolean showTermDocCountError, List pipelineAggregators, Map metaData) throws IOException { super(name, factories, context, parent, bucketCountThresholds, order, format, subAggCollectMode, pipelineAggregators, metaData); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/AbstractTermsParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/AbstractTermsParser.java deleted file mode 100644 index a106cea3a1596..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/AbstractTermsParser.java +++ /dev/null @@ -1,137 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. 
Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.search.aggregations.bucket.terms; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentParser.Token; -import org.elasticsearch.search.aggregations.Aggregator.SubAggCollectionMode; -import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregator.BucketCountThresholds; -import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.AnyValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSource; -import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - -public abstract class AbstractTermsParser extends AnyValuesSourceParser { - - public static final ParseField EXECUTION_HINT_FIELD_NAME = new ParseField("execution_hint"); - public static final ParseField SHARD_SIZE_FIELD_NAME = new ParseField("shard_size"); - public static final ParseField MIN_DOC_COUNT_FIELD_NAME = new ParseField("min_doc_count"); - public static final ParseField SHARD_MIN_DOC_COUNT_FIELD_NAME = new ParseField("shard_min_doc_count"); - public static final ParseField REQUIRED_SIZE_FIELD_NAME = new ParseField("size"); - - public IncludeExclude.Parser incExcParser = new IncludeExclude.Parser(); - - protected AbstractTermsParser() { - super(true, true); - } - - @Override - protected final ValuesSourceAggregationBuilder createFactory(String aggregationName, - ValuesSourceType valuesSourceType, - ValueType targetValueType, - Map otherOptions) { - BucketCountThresholds bucketCountThresholds = getDefaultBucketCountThresholds(); - Integer requiredSize = (Integer) otherOptions.get(REQUIRED_SIZE_FIELD_NAME); - if (requiredSize != null && requiredSize != -1) { - bucketCountThresholds.setRequiredSize(requiredSize); - } - Integer shardSize = (Integer) otherOptions.get(SHARD_SIZE_FIELD_NAME); - if (shardSize != null && shardSize != -1) { - bucketCountThresholds.setShardSize(shardSize); - } - Long minDocCount = (Long) otherOptions.get(MIN_DOC_COUNT_FIELD_NAME); - if (minDocCount != null && minDocCount != -1) { - bucketCountThresholds.setMinDocCount(minDocCount); - } - Long shardMinDocCount = (Long) otherOptions.get(SHARD_MIN_DOC_COUNT_FIELD_NAME); - if (shardMinDocCount != null && shardMinDocCount != -1) { - bucketCountThresholds.setShardMinDocCount(shardMinDocCount); - } - SubAggCollectionMode collectMode = (SubAggCollectionMode) otherOptions.get(SubAggCollectionMode.KEY); - String executionHint = (String) otherOptions.get(EXECUTION_HINT_FIELD_NAME); - IncludeExclude incExc = 
incExcParser.createIncludeExclude(otherOptions); - return doCreateFactory(aggregationName, valuesSourceType, targetValueType, bucketCountThresholds, collectMode, executionHint, - incExc, - otherOptions); - } - - protected abstract ValuesSourceAggregationBuilder doCreateFactory(String aggregationName, - ValuesSourceType valuesSourceType, - ValueType targetValueType, - BucketCountThresholds bucketCountThresholds, - SubAggCollectionMode collectMode, - String executionHint, - IncludeExclude incExc, - Map otherOptions); - - @Override - protected boolean token(String aggregationName, String currentFieldName, Token token, - XContentParseContext context, Map otherOptions) throws IOException { - XContentParser parser = context.getParser(); - if (incExcParser.token(currentFieldName, token, parser, context.getParseFieldMatcher(), otherOptions)) { - return true; - } else if (token == XContentParser.Token.VALUE_STRING) { - if (context.matchField(currentFieldName, EXECUTION_HINT_FIELD_NAME)) { - otherOptions.put(EXECUTION_HINT_FIELD_NAME, parser.text()); - return true; - } else if (context.matchField(currentFieldName, SubAggCollectionMode.KEY)) { - otherOptions.put(SubAggCollectionMode.KEY, SubAggCollectionMode.parse(parser.text(), context.getParseFieldMatcher())); - return true; - } else if (context.matchField(currentFieldName, REQUIRED_SIZE_FIELD_NAME)) { - otherOptions.put(REQUIRED_SIZE_FIELD_NAME, parser.intValue()); - return true; - } else if (parseSpecial(aggregationName, context, token, currentFieldName, otherOptions)) { - return true; - } - } else if (token == XContentParser.Token.VALUE_NUMBER) { - if (context.matchField(currentFieldName, REQUIRED_SIZE_FIELD_NAME)) { - otherOptions.put(REQUIRED_SIZE_FIELD_NAME, parser.intValue()); - return true; - } else if (context.matchField(currentFieldName, SHARD_SIZE_FIELD_NAME)) { - otherOptions.put(SHARD_SIZE_FIELD_NAME, parser.intValue()); - return true; - } else if (context.matchField(currentFieldName, MIN_DOC_COUNT_FIELD_NAME)) { - otherOptions.put(MIN_DOC_COUNT_FIELD_NAME, parser.longValue()); - return true; - } else if (context.matchField(currentFieldName, SHARD_MIN_DOC_COUNT_FIELD_NAME)) { - otherOptions.put(SHARD_MIN_DOC_COUNT_FIELD_NAME, parser.longValue()); - return true; - } else if (parseSpecial(aggregationName, context, token, currentFieldName, otherOptions)) { - return true; - } - } else if (parseSpecial(aggregationName, context, token, currentFieldName, otherOptions)) { - return true; - } - return false; - } - - public abstract boolean parseSpecial(String aggregationName, XContentParseContext context, - Token token, String currentFieldName, Map otherOptions) throws IOException; - - protected abstract TermsAggregator.BucketCountThresholds getDefaultBucketCountThresholds(); - -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/DoubleTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/DoubleTerms.java index 7e3daf5034b83..830f4d359467a 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/DoubleTerms.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/DoubleTerms.java @@ -38,7 +38,7 @@ public class DoubleTerms extends InternalMappedTerms { private final double term; - public Bucket(double term, long docCount, InternalAggregations aggregations, boolean showDocCountError, long docCountError, + Bucket(double term, long docCount, InternalAggregations aggregations, boolean showDocCountError, long docCountError, DocValueFormat 
format) { super(docCount, aggregations, showDocCountError, docCountError, format); this.term = term; @@ -47,7 +47,7 @@ public Bucket(double term, long docCount, InternalAggregations aggregations, boo /** * Read from a stream. */ - public Bucket(StreamInput in, DocValueFormat format, boolean showDocCountError) throws IOException { + Bucket(StreamInput in, DocValueFormat format, boolean showDocCountError) throws IOException { super(in, format, showDocCountError); term = in.readDouble(); } @@ -73,7 +73,7 @@ public Number getKeyAsNumber() { } @Override - int compareTerm(Terms.Bucket other) { + public int compareTerm(Terms.Bucket other) { return Double.compare(term, ((Number) other.getKey()).doubleValue()); } @@ -83,18 +83,11 @@ Bucket newBucket(long docCount, InternalAggregations aggs, long docCountError) { } @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(); - builder.field(CommonFields.KEY, term); + protected final XContentBuilder keyToXContent(XContentBuilder builder) throws IOException { + builder.field(CommonFields.KEY.getPreferredName(), term); if (format != DocValueFormat.RAW) { - builder.field(CommonFields.KEY_AS_STRING, format.format(term)); + builder.field(CommonFields.KEY_AS_STRING.getPreferredName(), format.format(term)); } - builder.field(CommonFields.DOC_COUNT, getDocCount()); - if (showDocCountError) { - builder.field(InternalTerms.DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME, getDocCountError()); - } - aggregations.toXContentInternal(builder, params); - builder.endObject(); return builder; } } @@ -136,20 +129,9 @@ protected DoubleTerms create(String name, List buckets, long docCountErr shardSize, showTermDocCountError, otherDocCount, buckets, docCountError); } - @Override - public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - builder.field(InternalTerms.DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME, docCountError); - builder.field(SUM_OF_OTHER_DOC_COUNTS, otherDocCount); - builder.startArray(CommonFields.BUCKETS); - for (Bucket bucket : buckets) { - bucket.toXContent(builder, params); - } - builder.endArray(); - return builder; - } - @Override protected Bucket[] createBucketsArray(int size) { return new Bucket[size]; } + } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/DoubleTermsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/DoubleTermsAggregator.java index b2b57f9d06004..3b8fdd795f2fa 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/DoubleTermsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/DoubleTermsAggregator.java @@ -27,9 +27,9 @@ import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -42,7 +42,7 @@ public class DoubleTermsAggregator extends LongTermsAggregator { public DoubleTermsAggregator(String name, AggregatorFactories factories, ValuesSource.Numeric valuesSource, DocValueFormat format, - Terms.Order order, 
BucketCountThresholds bucketCountThresholds, AggregationContext aggregationContext, Aggregator parent, + Terms.Order order, BucketCountThresholds bucketCountThresholds, SearchContext aggregationContext, Aggregator parent, SubAggCollectionMode collectionMode, boolean showTermDocCountError, IncludeExclude.LongFilter longFilter, List pipelineAggregators, Map metaData) throws IOException { super(name, factories, valuesSource, format, order, bucketCountThresholds, aggregationContext, parent, collectionMode, diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/GlobalOrdinalsStringTermsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/GlobalOrdinalsStringTermsAggregator.java index 00a066c11ba38..62bad9313a774 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/GlobalOrdinalsStringTermsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/GlobalOrdinalsStringTermsAggregator.java @@ -20,13 +20,13 @@ package org.elasticsearch.search.aggregations.bucket.terms; import org.apache.lucene.index.DocValues; +import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.RandomAccessOrds; import org.apache.lucene.index.SortedDocValues; import org.apache.lucene.util.ArrayUtil; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.LongBitSet; -import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.util.IntArray; @@ -44,8 +44,8 @@ import org.elasticsearch.search.aggregations.bucket.terms.support.BucketPriorityQueue; import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Arrays; @@ -66,66 +66,106 @@ public class GlobalOrdinalsStringTermsAggregator extends AbstractStringTermsAggr // first defined one. // So currently for each instance of this aggregator the acceptedglobalValues will be computed, this is unnecessary // especially if this agg is on a second layer or deeper. 
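Further down in this file the constructor decides, via the new forceRemapGlobalOrds flag and the deferred sub-aggregator check, whether global ordinals are used directly as bucket ordinals or remapped onto a dense sequence through a hash, which is what replaces the old WithHash subclass. The idea of that remapping can be sketched with an ordinary HashMap standing in for Elasticsearch's LongHash; the class and method names here are illustrative assumptions only.

```java
import java.util.HashMap;
import java.util.Map;

// Dense-remap sketch: sparse global ordinals are assigned dense bucket ordinals
// in first-seen order, so sub-aggregators can size their arrays to the number of
// buckets actually collected rather than to the whole global dictionary.
public class DenseOrdRemapSketch {

    private final Map<Long, Long> bucketOrds = new HashMap<>();

    long bucketOrdFor(long globalOrd) {
        Long existing = bucketOrds.get(globalOrd);
        if (existing != null) {
            return existing;            // already collected: reuse its dense ordinal
        }
        long dense = bucketOrds.size(); // next free dense slot
        bucketOrds.put(globalOrd, dense);
        return dense;
    }

    public static void main(String[] args) {
        DenseOrdRemapSketch remap = new DenseOrdRemapSketch();
        System.out.println(remap.bucketOrdFor(7));   // 0
        System.out.println(remap.bucketOrdFor(42));  // 1
        System.out.println(remap.bucketOrdFor(7));   // 0 again
    }
}
```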
- protected LongBitSet acceptedGlobalOrdinals; + protected final LongBitSet acceptedGlobalOrdinals; + protected final long valueCount; + protected final GlobalOrdLookupFunction lookupGlobalOrd; - protected RandomAccessOrds globalOrds; + private final LongHash bucketOrds; - public GlobalOrdinalsStringTermsAggregator(String name, AggregatorFactories factories, ValuesSource.Bytes.WithOrdinals valuesSource, - Terms.Order order, DocValueFormat format, BucketCountThresholds bucketCountThresholds, - IncludeExclude.OrdinalsFilter includeExclude, AggregationContext aggregationContext, Aggregator parent, - SubAggCollectionMode collectionMode, boolean showTermDocCountError, List pipelineAggregators, - Map metaData) throws IOException { - super(name, factories, aggregationContext, parent, order, format, bucketCountThresholds, collectionMode, showTermDocCountError, + public interface GlobalOrdLookupFunction { + BytesRef apply(long ord) throws IOException; + } + + public GlobalOrdinalsStringTermsAggregator(String name, AggregatorFactories factories, + ValuesSource.Bytes.WithOrdinals valuesSource, + Terms.Order order, + DocValueFormat format, + BucketCountThresholds bucketCountThresholds, + IncludeExclude.OrdinalsFilter includeExclude, + SearchContext context, + Aggregator parent, + boolean forceRemapGlobalOrds, + SubAggCollectionMode collectionMode, + boolean showTermDocCountError, + List pipelineAggregators, + Map metaData) throws IOException { + super(name, factories, context, parent, order, format, bucketCountThresholds, collectionMode, showTermDocCountError, pipelineAggregators, metaData); this.valuesSource = valuesSource; this.includeExclude = includeExclude; + final IndexReader reader = context.searcher().getIndexReader(); + final RandomAccessOrds values = reader.leaves().size() > 0 ? + valuesSource.globalOrdinalsValues(context.searcher().getIndexReader().leaves().get(0)) : DocValues.emptySortedSet(); + this.valueCount = values.getValueCount(); + this.lookupGlobalOrd = values::lookupOrd; + this.acceptedGlobalOrdinals = includeExclude != null ? includeExclude.acceptedGlobalOrdinals(values) : null; + + /** + * Remap global ords to dense bucket ordinals if any sub-aggregator cannot be deferred. + * Sub-aggregators expect dense buckets and allocate memories based on this assumption. + * Deferred aggregators are safe because the selected ordinals are remapped when the buckets + * are replayed. + */ + boolean remapGlobalOrds = forceRemapGlobalOrds || Arrays.stream(subAggregators).anyMatch((a) -> shouldDefer(a) == false); + this.bucketOrds = remapGlobalOrds ? new LongHash(1, context.bigArrays()) : null; } - protected long getBucketOrd(long termOrd) { - return termOrd; - } - - @Override - public LeafBucketCollector getLeafCollector(LeafReaderContext ctx, - final LeafBucketCollector sub) throws IOException { - globalOrds = valuesSource.globalOrdinalsValues(ctx); + boolean remapGlobalOrds() { + return bucketOrds != null; + } - if (acceptedGlobalOrdinals == null && includeExclude != null) { - acceptedGlobalOrdinals = includeExclude.acceptedGlobalOrdinals(globalOrds); - } + protected final long getBucketOrd(long globalOrd) { + return bucketOrds == null ? 
globalOrd : bucketOrds.find(globalOrd); + } - if (acceptedGlobalOrdinals != null) { - globalOrds = new FilteredOrdinals(globalOrds, acceptedGlobalOrdinals); + private void collectGlobalOrd(int doc, long globalOrd, LeafBucketCollector sub) throws IOException { + if (bucketOrds == null) { + collectExistingBucket(sub, doc, globalOrd); + } else { + long bucketOrd = bucketOrds.add(globalOrd); + if (bucketOrd < 0) { + bucketOrd = -1 - bucketOrd; + collectExistingBucket(sub, doc, bucketOrd); + } else { + collectBucket(sub, doc, bucketOrd); + } } + } - return newCollector(globalOrds, sub); + private RandomAccessOrds getGlobalOrds(LeafReaderContext ctx) throws IOException { + return acceptedGlobalOrdinals == null ? + valuesSource.globalOrdinalsValues(ctx) : new FilteredOrdinals(valuesSource.globalOrdinalsValues(ctx), acceptedGlobalOrdinals); } - protected LeafBucketCollector newCollector(final RandomAccessOrds ords, final LeafBucketCollector sub) { - grow(ords.getValueCount()); - final SortedDocValues singleValues = DocValues.unwrapSingleton(ords); + @Override + public LeafBucketCollector getLeafCollector(LeafReaderContext ctx, final LeafBucketCollector sub) throws IOException { + final RandomAccessOrds globalOrds = getGlobalOrds(ctx); + if (bucketOrds == null) { + grow(globalOrds.getValueCount()); + } + final SortedDocValues singleValues = DocValues.unwrapSingleton(globalOrds); if (singleValues != null) { - return new LeafBucketCollectorBase(sub, ords) { + return new LeafBucketCollectorBase(sub, globalOrds) { @Override public void collect(int doc, long bucket) throws IOException { assert bucket == 0; final int ord = singleValues.getOrd(doc); if (ord >= 0) { - collectExistingBucket(sub, doc, ord); + collectGlobalOrd(doc, ord, sub); } } }; } else { - return new LeafBucketCollectorBase(sub, ords) { + return new LeafBucketCollectorBase(sub, globalOrds) { @Override public void collect(int doc, long bucket) throws IOException { assert bucket == 0; - ords.setDocument(doc); - final int numOrds = ords.cardinality(); + globalOrds.setDocument(doc); + final int numOrds = globalOrds.cardinality(); for (int i = 0; i < numOrds; i++) { - final long globalOrd = ords.ordAt(i); - collectExistingBucket(sub, doc, globalOrd); + final long globalOrd = globalOrds.ordAt(i); + collectGlobalOrd(doc, globalOrd, sub); } } }; @@ -143,21 +183,21 @@ protected static void copy(BytesRef from, BytesRef to) { @Override public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOException { - if (globalOrds == null) { // no context in this reader + if (valueCount == 0) { // no context in this reader return buildEmptyAggregation(); } final int size; if (bucketCountThresholds.getMinDocCount() == 0) { // if minDocCount == 0 then we can end up with more buckets then maxBucketOrd() returns - size = (int) Math.min(globalOrds.getValueCount(), bucketCountThresholds.getShardSize()); + size = (int) Math.min(valueCount, bucketCountThresholds.getShardSize()); } else { size = (int) Math.min(maxBucketOrd(), bucketCountThresholds.getShardSize()); } long otherDocCount = 0; BucketPriorityQueue ordered = new BucketPriorityQueue<>(size, order.comparator(this)); OrdBucket spare = new OrdBucket(-1, 0, null, showTermDocCountError, 0); - for (long globalTermOrd = 0; globalTermOrd < globalOrds.getValueCount(); ++globalTermOrd) { + for (long globalTermOrd = 0; globalTermOrd < valueCount; ++globalTermOrd) { if (includeExclude != null && !acceptedGlobalOrdinals.get(globalTermOrd)) { continue; } @@ -182,10 +222,10 @@ public 
InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOE final StringTerms.Bucket[] list = new StringTerms.Bucket[ordered.size()]; long survivingBucketOrds[] = new long[ordered.size()]; for (int i = ordered.size() - 1; i >= 0; --i) { - final OrdBucket bucket = (OrdBucket) ordered.pop(); + final OrdBucket bucket = ordered.pop(); survivingBucketOrds[i] = bucket.bucketOrd; BytesRef scratch = new BytesRef(); - copy(globalOrds.lookupOrd(bucket.globalOrd), scratch); + copy(lookupGlobalOrd.apply(bucket.globalOrd), scratch); list[i] = new StringTerms.Bucket(scratch, bucket.docCount, null, showTermDocCountError, 0, format); list[i].bucketOrd = bucket.bucketOrd; otherDocCount -= list[i].docCount; @@ -217,7 +257,7 @@ static class OrdBucket extends InternalTerms.Bucket { } @Override - int compareTerm(Terms.Bucket other) { + public int compareTerm(Terms.Bucket other) { return Long.compare(globalOrd, ((OrdBucket) other).globalOrd); } @@ -247,79 +287,14 @@ protected void writeTermTo(StreamOutput out) throws IOException { } @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + protected final XContentBuilder keyToXContent(XContentBuilder builder) throws IOException { throw new UnsupportedOperationException(); } } - /** - * Variant of {@link GlobalOrdinalsStringTermsAggregator} that rebases hashes in order to make them dense. Might be - * useful in case few hashes are visited. - */ - public static class WithHash extends GlobalOrdinalsStringTermsAggregator { - - private final LongHash bucketOrds; - - public WithHash(String name, AggregatorFactories factories, ValuesSource.Bytes.WithOrdinals valuesSource, Terms.Order order, - DocValueFormat format, BucketCountThresholds bucketCountThresholds, IncludeExclude.OrdinalsFilter includeExclude, - AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode collectionMode, - boolean showTermDocCountError, List pipelineAggregators, Map metaData) - throws IOException { - super(name, factories, valuesSource, order, format, bucketCountThresholds, includeExclude, aggregationContext, parent, collectionMode, - showTermDocCountError, pipelineAggregators, metaData); - bucketOrds = new LongHash(1, aggregationContext.bigArrays()); - } - - @Override - protected LeafBucketCollector newCollector(final RandomAccessOrds ords, final LeafBucketCollector sub) { - final SortedDocValues singleValues = DocValues.unwrapSingleton(ords); - if (singleValues != null) { - return new LeafBucketCollectorBase(sub, ords) { - @Override - public void collect(int doc, long bucket) throws IOException { - final int globalOrd = singleValues.getOrd(doc); - if (globalOrd >= 0) { - long bucketOrd = bucketOrds.add(globalOrd); - if (bucketOrd < 0) { - bucketOrd = -1 - bucketOrd; - collectExistingBucket(sub, doc, bucketOrd); - } else { - collectBucket(sub, doc, bucketOrd); - } - } - } - }; - } else { - return new LeafBucketCollectorBase(sub, ords) { - @Override - public void collect(int doc, long bucket) throws IOException { - ords.setDocument(doc); - final int numOrds = ords.cardinality(); - for (int i = 0; i < numOrds; i++) { - final long globalOrd = ords.ordAt(i); - long bucketOrd = bucketOrds.add(globalOrd); - if (bucketOrd < 0) { - bucketOrd = -1 - bucketOrd; - collectExistingBucket(sub, doc, bucketOrd); - } else { - collectBucket(sub, doc, bucketOrd); - } - } - } - }; - } - } - - @Override - protected long getBucketOrd(long termOrd) { - return bucketOrds.find(termOrd); - } - - @Override - protected void doClose() { 
- Releasables.close(bucketOrds); - } - + @Override + protected void doClose() { + Releasables.close(bucketOrds); } /** @@ -327,31 +302,43 @@ protected void doClose() { * instead of on the fly for each match.This is beneficial for low cardinality fields, because it can reduce * the amount of look-ups significantly. */ - public static class LowCardinality extends GlobalOrdinalsStringTermsAggregator { + static class LowCardinality extends GlobalOrdinalsStringTermsAggregator { private IntArray segmentDocCounts; - + private RandomAccessOrds globalOrds; private RandomAccessOrds segmentOrds; - public LowCardinality(String name, AggregatorFactories factories, ValuesSource.Bytes.WithOrdinals valuesSource, - Terms.Order order, DocValueFormat format, - BucketCountThresholds bucketCountThresholds, AggregationContext aggregationContext, Aggregator parent, - SubAggCollectionMode collectionMode, boolean showTermDocCountError, List pipelineAggregators, - Map metaData) throws IOException { - super(name, factories, valuesSource, order, format, bucketCountThresholds, null, aggregationContext, parent, collectionMode, - showTermDocCountError, pipelineAggregators, metaData); - assert factories == null || factories.countAggregators() == 0; + LowCardinality(String name, + AggregatorFactories factories, + ValuesSource.Bytes.WithOrdinals valuesSource, + Terms.Order order, + DocValueFormat format, + BucketCountThresholds bucketCountThresholds, + SearchContext context, + Aggregator parent, + boolean forceDenseMode, + SubAggCollectionMode collectionMode, + boolean showTermDocCountError, + List pipelineAggregators, + Map metaData) throws IOException { + super(name, factories, valuesSource, order, format, bucketCountThresholds, null, context, parent, forceDenseMode, + collectionMode, showTermDocCountError, pipelineAggregators, metaData); this.segmentDocCounts = context.bigArrays().newIntArray(1, true); } - // bucketOrd is ord + 1 to avoid a branch to deal with the missing ord @Override - protected LeafBucketCollector newCollector(final RandomAccessOrds ords, LeafBucketCollector sub) { - segmentDocCounts = context.bigArrays().grow(segmentDocCounts, 1 + ords.getValueCount()); + public LeafBucketCollector getLeafCollector(LeafReaderContext ctx, + final LeafBucketCollector sub) throws IOException { + if (segmentOrds != null) { + mapSegmentCountsToGlobalCounts(); + } + globalOrds = valuesSource.globalOrdinalsValues(ctx); + segmentOrds = valuesSource.ordinalsValues(ctx); + segmentDocCounts = context.bigArrays().grow(segmentDocCounts, 1 + segmentOrds.getValueCount()); assert sub == LeafBucketCollector.NO_OP_COLLECTOR; - final SortedDocValues singleValues = DocValues.unwrapSingleton(ords); + final SortedDocValues singleValues = DocValues.unwrapSingleton(segmentOrds); if (singleValues != null) { - return new LeafBucketCollectorBase(sub, ords) { + return new LeafBucketCollectorBase(sub, segmentOrds) { @Override public void collect(int doc, long bucket) throws IOException { assert bucket == 0; @@ -360,14 +347,14 @@ public void collect(int doc, long bucket) throws IOException { } }; } else { - return new LeafBucketCollectorBase(sub, ords) { + return new LeafBucketCollectorBase(sub, segmentOrds) { @Override public void collect(int doc, long bucket) throws IOException { assert bucket == 0; - ords.setDocument(doc); - final int numOrds = ords.cardinality(); + segmentOrds.setDocument(doc); + final int numOrds = segmentOrds.cardinality(); for (int i = 0; i < numOrds; i++) { - final long segmentOrd = ords.ordAt(i); + final long segmentOrd 
= segmentOrds.ordAt(i); segmentDocCounts.increment(segmentOrd + 1, 1); } } @@ -375,18 +362,6 @@ public void collect(int doc, long bucket) throws IOException { } } - @Override - public LeafBucketCollector getLeafCollector(LeafReaderContext ctx, - final LeafBucketCollector sub) throws IOException { - if (segmentOrds != null) { - mapSegmentCountsToGlobalCounts(); - } - - globalOrds = valuesSource.globalOrdinalsValues(ctx); - segmentOrds = valuesSource.ordinalsValues(ctx); - return newCollector(segmentOrds, sub); - } - @Override protected void doPostCollection() { if (segmentOrds != null) { @@ -418,7 +393,8 @@ private void mapSegmentCountsToGlobalCounts() { } final long ord = i - 1; // remember we do +1 when counting final long globalOrd = mapping == null ? ord : mapping.getGlobalOrd(ord); - incrementBucketDocCount(globalOrd, inc); + long bucketOrd = getBucketOrd(globalOrd); + incrementBucketDocCount(bucketOrd, inc); } } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalMappedTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalMappedTerms.java index e3f842a08de03..a83aad541bd70 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalMappedTerms.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalMappedTerms.java @@ -21,10 +21,12 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.search.DocValueFormat; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import java.io.IOException; +import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.function.Function; @@ -99,7 +101,7 @@ public long getSumOfOtherDocCounts() { } @Override - public List getBucketsInternal() { + public List getBuckets() { return buckets; } @@ -110,4 +112,9 @@ public B getBucketByKey(String term) { } return bucketMap.get(term); } + + @Override + public final XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + return doXContentCommon(builder, params, docCountError, otherDocCount, buckets); + } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalOrder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalOrder.java index f3f87c09dca7a..3b85fa71d99d1 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalOrder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalOrder.java @@ -244,11 +244,11 @@ static class CompoundOrder extends Terms.Order { private final List orderElements; - public CompoundOrder(List compoundOrder) { + CompoundOrder(List compoundOrder) { this(compoundOrder, true); } - public CompoundOrder(List compoundOrder, boolean absoluteOrdering) { + CompoundOrder(List compoundOrder, boolean absoluteOrdering) { this.orderElements = new LinkedList<>(compoundOrder); Terms.Order lastElement = compoundOrder.get(compoundOrder.size() - 1); if (absoluteOrdering && !(InternalOrder.TERM_ASC == lastElement || InternalOrder.TERM_DESC == lastElement)) { @@ -303,7 +303,7 @@ public static class CompoundOrderComparator implements Comparator private List compoundOrder; private Aggregator aggregator; - public CompoundOrderComparator(List compoundOrder, Aggregator aggregator) { + CompoundOrderComparator(List 
compoundOrder, Aggregator aggregator) { this.compoundOrder = compoundOrder; this.aggregator = aggregator; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalTerms.java index a8b4c44ce462d..d62cd83473a38 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalTerms.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalTerms.java @@ -18,9 +18,11 @@ */ package org.elasticsearch.search.aggregations.bucket.terms; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.search.DocValueFormat; import org.elasticsearch.search.aggregations.AggregationExecutionException; import org.elasticsearch.search.aggregations.Aggregations; @@ -37,15 +39,14 @@ import java.util.List; import java.util.Map; -import static java.util.Collections.unmodifiableList; - public abstract class InternalTerms, B extends InternalTerms.Bucket> extends InternalMultiBucketAggregation implements Terms, ToXContent { - protected static final String DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME = "doc_count_error_upper_bound"; - protected static final String SUM_OF_OTHER_DOC_COUNTS = "sum_other_doc_count"; + protected static final ParseField DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME = new ParseField("doc_count_error_upper_bound"); + protected static final ParseField SUM_OF_OTHER_DOC_COUNTS = new ParseField("sum_other_doc_count"); + + public abstract static class Bucket> extends InternalMultiBucketAggregation.InternalBucket implements Terms.Bucket { - public abstract static class Bucket> extends Terms.Bucket { /** * Reads a bucket. Should be a constructor reference. */ @@ -119,6 +120,10 @@ public Aggregations getAggregations() { public B reduce(List buckets, ReduceContext context) { long docCount = 0; + // For the per term doc count error we add up the errors from the + // shards that did not respond with the term. 
To do this we add up + // the errors from the shards that did respond with the terms and + // subtract that from the sum of the error from all shards long docCountError = 0; List aggregationsList = new ArrayList<>(buckets.size()); for (B bucket : buckets) { @@ -135,6 +140,21 @@ public B reduce(List buckets, ReduceContext context) { InternalAggregations aggs = InternalAggregations.reduce(aggregationsList, context); return newBucket(docCount, aggs, docCountError); } + + @Override + public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + keyToXContent(builder); + builder.field(CommonFields.DOC_COUNT.getPreferredName(), getDocCount()); + if (showDocCountError) { + builder.field(InternalTerms.DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME.getPreferredName(), getDocCountError()); + } + aggregations.toXContentInternal(builder, params); + builder.endObject(); + return builder; + } + + protected abstract XContentBuilder keyToXContent(XContentBuilder builder) throws IOException; } protected final Terms.Order order; @@ -170,11 +190,7 @@ protected final void doWriteTo(StreamOutput out) throws IOException { protected abstract void writeTermTypeInfoTo(StreamOutput out) throws IOException; @Override - public final List getBuckets() { - return unmodifiableList(getBucketsInternal()); - } - - protected abstract List getBucketsInternal(); + public abstract List getBuckets(); @Override public abstract B getBucketByKey(String term); @@ -202,10 +218,18 @@ public InternalAggregation doReduce(List aggregations, Redu } otherDocCount += terms.getSumOfOtherDocCounts(); final long thisAggDocCountError; - if (terms.getBucketsInternal().size() < getShardSize() || InternalOrder.isTermOrder(order)) { + if (terms.getBuckets().size() < getShardSize() || InternalOrder.isTermOrder(order)) { thisAggDocCountError = 0; } else if (InternalOrder.isCountDesc(this.order)) { - thisAggDocCountError = terms.getBucketsInternal().get(terms.getBucketsInternal().size() - 1).docCount; + if (terms.getDocCountError() > 0) { + // If there is an existing docCountError for this agg then + // use this as the error for this aggregation + thisAggDocCountError = terms.getDocCountError(); + } else { + // otherwise use the doc count of the last term in the + // aggregation + thisAggDocCountError = terms.getBuckets().get(terms.getBuckets().size() - 1).docCount; + } } else { thisAggDocCountError = -1; } @@ -217,8 +241,15 @@ public InternalAggregation doReduce(List aggregations, Redu } } setDocCountError(thisAggDocCountError); - for (B bucket : terms.getBucketsInternal()) { - bucket.docCountError = thisAggDocCountError; + for (B bucket : terms.getBuckets()) { + // If there is already a doc count error for this bucket + // subtract this aggs doc count error from it to make the + // new value for the bucket. This then means that when the + // final error for the bucket is calculated below we account + // for the existing error calculated in a previous reduce. + // Note that if the error is unbounded (-1) this will be fixed + // later in this method. + bucket.docCountError -= thisAggDocCountError; List bucketList = buckets.get(bucket.getKey()); if (bucketList == null) { bucketList = new ArrayList<>(); @@ -228,18 +259,16 @@ public InternalAggregation doReduce(List aggregations, Redu } } - final int size = Math.min(requiredSize, buckets.size()); - BucketPriorityQueue ordered = new BucketPriorityQueue<>(size, order.comparator(null)); + final int size = reduceContext.isFinalReduce() == false ? 
buckets.size() : Math.min(requiredSize, buckets.size()); + final BucketPriorityQueue ordered = new BucketPriorityQueue<>(size, order.comparator(null)); for (List sameTermBuckets : buckets.values()) { final B b = sameTermBuckets.get(0).reduce(sameTermBuckets, reduceContext); - if (b.docCountError != -1) { - if (sumDocCountError == -1) { - b.docCountError = -1; - } else { - b.docCountError = sumDocCountError - b.docCountError; - } + if (sumDocCountError == -1) { + b.docCountError = -1; + } else { + b.docCountError += sumDocCountError; } - if (b.docCount >= minDocCount) { + if (b.docCount >= minDocCount || reduceContext.isFinalReduce() == false) { B removed = ordered.insertWithOverflow(b); if (removed != null) { otherDocCount += removed.getDocCount(); @@ -269,4 +298,16 @@ public InternalAggregation doReduce(List aggregations, Redu * Create an array to hold some buckets. Used in collecting the results. */ protected abstract B[] createBucketsArray(int size); + + protected static XContentBuilder doXContentCommon(XContentBuilder builder, Params params, + long docCountError, long otherDocCount, List buckets) throws IOException { + builder.field(DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME.getPreferredName(), docCountError); + builder.field(SUM_OF_OTHER_DOC_COUNTS.getPreferredName(), otherDocCount); + builder.startArray(CommonFields.BUCKETS.getPreferredName()); + for (Bucket bucket : buckets) { + bucket.toXContent(builder, params); + } + builder.endArray(); + return builder; + } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/LongTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/LongTerms.java index 41092dba17653..1f950cc4e9824 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/LongTerms.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/LongTerms.java @@ -22,12 +22,15 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.search.DocValueFormat; +import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.InternalAggregations; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import java.io.IOException; +import java.util.ArrayList; import java.util.List; import java.util.Map; +import java.util.Objects; /** * Result of the {@link TermsAggregator} when the field is some kind of whole number like a integer, long, or a date. 
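Stepping back to the InternalTerms reduce changes above: the per-bucket doc_count_error_upper_bound is now the sum of every shard's error minus the errors of the shards that actually returned the term, i.e. only the shards whose counts for that term are unknown contribute. A tiny worked example with made-up numbers shows the arithmetic produced by the two adjustments (`docCountError -= thisAggDocCountError`, then `docCountError += sumDocCountError`):

```java
// Worked example of the error bookkeeping (hypothetical shard values, not Elasticsearch code).
public class DocCountErrorExample {
    public static void main(String[] args) {
        long[] shardErrors = {5, 7, 3};                    // per-shard doc count error upper bounds
        boolean[] returnedTerm = {true, false, true};      // which shards returned the term "foo"

        long sumDocCountError = 0;
        long bucketError = 0;                              // a freshly streamed bucket starts at 0
        for (int shard = 0; shard < shardErrors.length; shard++) {
            sumDocCountError += shardErrors[shard];
            if (returnedTerm[shard]) {
                bucketError -= shardErrors[shard];         // subtract errors of responding shards
            }
        }
        bucketError += sumDocCountError;                   // add back the total error

        // Only shard 1 (error 7) did not report "foo", so its count may be under-counted by 7.
        System.out.println(bucketError);                   // prints 7
    }
}
```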
@@ -73,7 +76,7 @@ public Number getKeyAsNumber() { } @Override - int compareTerm(Terms.Bucket other) { + public int compareTerm(Terms.Bucket other) { return Long.compare(term, ((Number) other.getKey()).longValue()); } @@ -83,20 +86,23 @@ Bucket newBucket(long docCount, InternalAggregations aggs, long docCountError) { } @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(); - builder.field(CommonFields.KEY, term); + protected final XContentBuilder keyToXContent(XContentBuilder builder) throws IOException { + builder.field(CommonFields.KEY.getPreferredName(), term); if (format != DocValueFormat.RAW) { - builder.field(CommonFields.KEY_AS_STRING, format.format(term)); + builder.field(CommonFields.KEY_AS_STRING.getPreferredName(), format.format(term)); } - builder.field(CommonFields.DOC_COUNT, getDocCount()); - if (showDocCountError) { - builder.field(InternalTerms.DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME, getDocCountError()); - } - aggregations.toXContentInternal(builder, params); - builder.endObject(); return builder; } + + @Override + public boolean equals(Object obj) { + return super.equals(obj) && Objects.equals(term, ((Bucket) obj).term); + } + + @Override + public int hashCode() { + return Objects.hash(super.hashCode(), term); + } } public LongTerms(String name, Terms.Order order, int requiredSize, long minDocCount, List pipelineAggregators, @@ -136,18 +142,6 @@ protected LongTerms create(String name, List buckets, long docCountError showTermDocCountError, otherDocCount, buckets, docCountError); } - @Override - public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - builder.field(InternalTerms.DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME, docCountError); - builder.field(SUM_OF_OTHER_DOC_COUNTS, otherDocCount); - builder.startArray(CommonFields.BUCKETS); - for (Bucket bucket : buckets) { - bucket.toXContent(builder, params); - } - builder.endArray(); - return builder; - } - @Override protected Bucket[] createBucketsArray(int size) { return new Bucket[size]; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/LongTermsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/LongTermsAggregator.java index fd7296d07d9c3..44a7f29aeb6e4 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/LongTermsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/LongTermsAggregator.java @@ -32,8 +32,8 @@ import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude; import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude.LongFilter; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Arrays; @@ -53,7 +53,7 @@ public class LongTermsAggregator extends TermsAggregator { private LongFilter longFilter; public LongTermsAggregator(String name, AggregatorFactories factories, ValuesSource.Numeric valuesSource, DocValueFormat format, - Terms.Order order, BucketCountThresholds bucketCountThresholds, AggregationContext aggregationContext, Aggregator parent, + Terms.Order order, BucketCountThresholds bucketCountThresholds, SearchContext aggregationContext, Aggregator parent, 
SubAggCollectionMode subAggCollectMode, boolean showTermDocCountError, IncludeExclude.LongFilter longFilter, List pipelineAggregators, Map metaData) throws IOException { super(name, factories, aggregationContext, parent, bucketCountThresholds, order, format, subAggCollectMode, pipelineAggregators, metaData); @@ -110,13 +110,16 @@ public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOE if (bucketCountThresholds.getMinDocCount() == 0 && (order != InternalOrder.COUNT_DESC || bucketOrds.size() < bucketCountThresholds.getRequiredSize())) { // we need to fill-in the blanks - for (LeafReaderContext ctx : context.searchContext().searcher().getTopReaderContext().leaves()) { + for (LeafReaderContext ctx : context.searcher().getTopReaderContext().leaves()) { final SortedNumericDocValues values = getValues(valuesSource, ctx); for (int docId = 0; docId < ctx.reader().maxDoc(); ++docId) { values.setDocument(docId); final int valueCount = values.count(); for (int i = 0; i < valueCount; ++i) { - bucketOrds.add(values.valueAt(i)); + long value = values.valueAt(i); + if (longFilter == null || longFilter.accept(value)) { + bucketOrds.add(value); + } } } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/ParsedDoubleTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/ParsedDoubleTerms.java new file mode 100644 index 0000000000000..d3afe5c17603f --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/ParsedDoubleTerms.java @@ -0,0 +1,85 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
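One behavioural fix earlier in this hunk is easy to miss: when min_doc_count is 0, LongTermsAggregator back-fills empty buckets, and it now only does so for values that pass the include/exclude long filter. A hedged standalone sketch of that guard, where the filter and names are illustrative rather than the Elasticsearch classes:

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.function.LongPredicate;

// Back-fill sketch: zero-count buckets are created only for values the filter accepts.
public class BackfillWithFilterSketch {
    public static void main(String[] args) {
        long[] valuesSeenInSegment = {1, 2, 3, 4, 5};
        LongPredicate includeExclude = v -> v % 2 == 0;   // pretend the request included only even terms

        Set<Long> bucketOrds = new LinkedHashSet<>();
        for (long value : valuesSeenInSegment) {
            if (includeExclude.test(value)) {             // the guard added in the diff
                bucketOrds.add(value);
            }
        }
        System.out.println(bucketOrds);                   // [2, 4]: filtered-out values create no buckets
    }
}
```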
+ */ + +package org.elasticsearch.search.aggregations.bucket.terms; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; + +import java.io.IOException; + +public class ParsedDoubleTerms extends ParsedTerms { + + @Override + public String getType() { + return DoubleTerms.NAME; + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedDoubleTerms.class.getSimpleName(), true, ParsedDoubleTerms::new); + static { + declareParsedTermsFields(PARSER, ParsedBucket::fromXContent); + } + + public static ParsedDoubleTerms fromXContent(XContentParser parser, String name) throws IOException { + ParsedDoubleTerms aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } + + public static class ParsedBucket extends ParsedTerms.ParsedBucket { + + private Double key; + + @Override + public Object getKey() { + return key; + } + + @Override + public String getKeyAsString() { + String keyAsString = super.getKeyAsString(); + if (keyAsString != null) { + return keyAsString; + } + if (key != null) { + return Double.toString(key); + } + return null; + } + + public Number getKeyAsNumber() { + return key; + } + + @Override + protected XContentBuilder keyToXContent(XContentBuilder builder) throws IOException { + builder.field(CommonFields.KEY.getPreferredName(), key); + if (super.getKeyAsString() != null) { + builder.field(CommonFields.KEY_AS_STRING.getPreferredName(), getKeyAsString()); + } + return builder; + } + + static ParsedBucket fromXContent(XContentParser parser) throws IOException { + return parseTermsBucketXContent(parser, ParsedBucket::new, (p, bucket) -> bucket.key = p.doubleValue()); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/ParsedLongTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/ParsedLongTerms.java new file mode 100644 index 0000000000000..b5869fc6ee2d8 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/ParsedLongTerms.java @@ -0,0 +1,85 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.bucket.terms; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; + +import java.io.IOException; + +public class ParsedLongTerms extends ParsedTerms { + + @Override + public String getType() { + return LongTerms.NAME; + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedLongTerms.class.getSimpleName(), true, ParsedLongTerms::new); + static { + declareParsedTermsFields(PARSER, ParsedBucket::fromXContent); + } + + public static ParsedLongTerms fromXContent(XContentParser parser, String name) throws IOException { + ParsedLongTerms aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } + + public static class ParsedBucket extends ParsedTerms.ParsedBucket { + + private Long key; + + @Override + public Object getKey() { + return key; + } + + @Override + public String getKeyAsString() { + String keyAsString = super.getKeyAsString(); + if (keyAsString != null) { + return keyAsString; + } + if (key != null) { + return Long.toString(key); + } + return null; + } + + public Number getKeyAsNumber() { + return key; + } + + @Override + protected XContentBuilder keyToXContent(XContentBuilder builder) throws IOException { + builder.field(CommonFields.KEY.getPreferredName(), key); + if (super.getKeyAsString() != null) { + builder.field(CommonFields.KEY_AS_STRING.getPreferredName(), getKeyAsString()); + } + return builder; + } + + static ParsedBucket fromXContent(XContentParser parser) throws IOException { + return parseTermsBucketXContent(parser, ParsedBucket::new, (p, bucket) -> bucket.key = p.longValue()); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/ParsedStringTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/ParsedStringTerms.java new file mode 100644 index 0000000000000..792365d6b1aa2 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/ParsedStringTerms.java @@ -0,0 +1,85 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.bucket.terms; + +import org.apache.lucene.util.BytesRef; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; + +import java.io.IOException; + +public class ParsedStringTerms extends ParsedTerms { + + @Override + public String getType() { + return StringTerms.NAME; + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedStringTerms.class.getSimpleName(), true, ParsedStringTerms::new); + static { + declareParsedTermsFields(PARSER, ParsedBucket::fromXContent); + } + + public static ParsedStringTerms fromXContent(XContentParser parser, String name) throws IOException { + ParsedStringTerms aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } + + public static class ParsedBucket extends ParsedTerms.ParsedBucket { + + private BytesRef key; + + @Override + public Object getKey() { + return getKeyAsString(); + } + + @Override + public String getKeyAsString() { + String keyAsString = super.getKeyAsString(); + if (keyAsString != null) { + return keyAsString; + } + if (key != null) { + return key.utf8ToString(); + } + return null; + } + + public Number getKeyAsNumber() { + if (key != null) { + return Double.parseDouble(key.utf8ToString()); + } + return null; + } + + @Override + protected XContentBuilder keyToXContent(XContentBuilder builder) throws IOException { + return builder.field(CommonFields.KEY.getPreferredName(), getKey()); + } + + static ParsedBucket fromXContent(XContentParser parser) throws IOException { + return parseTermsBucketXContent(parser, ParsedBucket::new, (p, bucket) -> bucket.key = p.utf8BytesOrNull()); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/ParsedTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/ParsedTerms.java new file mode 100644 index 0000000000000..1ff4598295a77 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/ParsedTerms.java @@ -0,0 +1,152 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.bucket.terms; + +import org.elasticsearch.common.CheckedBiConsumer; +import org.elasticsearch.common.CheckedFunction; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParserUtils; +import org.elasticsearch.search.aggregations.Aggregation; +import org.elasticsearch.search.aggregations.Aggregations; +import org.elasticsearch.search.aggregations.ParsedMultiBucketAggregation; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.function.Supplier; + +import static org.elasticsearch.search.aggregations.bucket.terms.InternalTerms.DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME; +import static org.elasticsearch.search.aggregations.bucket.terms.InternalTerms.SUM_OF_OTHER_DOC_COUNTS; + +public abstract class ParsedTerms extends ParsedMultiBucketAggregation implements Terms { + + protected long docCountErrorUpperBound; + protected long sumOtherDocCount; + + @Override + public long getDocCountError() { + return docCountErrorUpperBound; + } + + @Override + public long getSumOfOtherDocCounts() { + return sumOtherDocCount; + } + + @Override + public List getBuckets() { + return buckets; + } + + @Override + public Terms.Bucket getBucketByKey(String term) { + for (Terms.Bucket bucket : getBuckets()) { + if (bucket.getKeyAsString().equals(term)) { + return bucket; + } + } + return null; + } + + @Override + protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + builder.field(DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME.getPreferredName(), getDocCountError()); + builder.field(SUM_OF_OTHER_DOC_COUNTS.getPreferredName(), getSumOfOtherDocCounts()); + builder.startArray(CommonFields.BUCKETS.getPreferredName()); + for (Terms.Bucket bucket : getBuckets()) { + bucket.toXContent(builder, params); + } + builder.endArray(); + return builder; + } + + static void declareParsedTermsFields(final ObjectParser objectParser, + final CheckedFunction bucketParser) { + declareMultiBucketAggregationFields(objectParser, bucketParser::apply, bucketParser::apply); + objectParser.declareLong((parsedTerms, value) -> parsedTerms.docCountErrorUpperBound = value , + DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME); + objectParser.declareLong((parsedTerms, value) -> parsedTerms.sumOtherDocCount = value, + SUM_OF_OTHER_DOC_COUNTS); + } + + public abstract static class ParsedBucket extends ParsedMultiBucketAggregation.ParsedBucket implements Terms.Bucket { + + boolean showDocCountError = false; + protected long docCountError; + + @Override + public int compareTerm(Terms.Bucket other) { + throw new UnsupportedOperationException(); + } + + @Override + public long getDocCountError() { + return docCountError; + } + + @Override + public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + keyToXContent(builder); + builder.field(CommonFields.DOC_COUNT.getPreferredName(), getDocCount()); + if (showDocCountError) { + builder.field(DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME.getPreferredName(), getDocCountError()); + } + getAggregations().toXContentInternal(builder, params); + builder.endObject(); + return builder; + } + + + static B parseTermsBucketXContent(final XContentParser parser, final Supplier bucketSupplier, + final CheckedBiConsumer keyConsumer) + throws IOException { + + final B 
bucket = bucketSupplier.get(); + final List aggregations = new ArrayList<>(); + + XContentParser.Token token; + String currentFieldName = parser.currentName(); + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (CommonFields.KEY_AS_STRING.getPreferredName().equals(currentFieldName)) { + bucket.setKeyAsString(parser.text()); + } else if (CommonFields.KEY.getPreferredName().equals(currentFieldName)) { + keyConsumer.accept(parser, bucket); + } else if (CommonFields.DOC_COUNT.getPreferredName().equals(currentFieldName)) { + bucket.setDocCount(parser.longValue()); + } else if (DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME.getPreferredName().equals(currentFieldName)) { + bucket.docCountError = parser.longValue(); + bucket.showDocCountError = true; + } + } else if (token == XContentParser.Token.START_OBJECT) { + XContentParserUtils.parseTypedKeysObject(parser, Aggregation.TYPED_KEYS_DELIMITER, Aggregation.class, + aggregations::add); + } + } + bucket.setAggregations(new Aggregations(aggregations)); + return bucket; + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/StringTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/StringTerms.java index 4a40f77b2b24b..62ad5afd4b0e8 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/StringTerms.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/StringTerms.java @@ -74,7 +74,7 @@ public String getKeyAsString() { } @Override - int compareTerm(Terms.Bucket other) { + public int compareTerm(Terms.Bucket other) { return termBytes.compareTo(((Bucket) other).termBytes); } @@ -84,16 +84,8 @@ Bucket newBucket(long docCount, InternalAggregations aggs, long docCountError) { } @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(); - builder.field(CommonFields.KEY, getKeyAsString()); - builder.field(CommonFields.DOC_COUNT, getDocCount()); - if (showDocCountError) { - builder.field(InternalTerms.DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME, getDocCountError()); - } - aggregations.toXContentInternal(builder, params); - builder.endObject(); - return builder; + protected final XContentBuilder keyToXContent(XContentBuilder builder) throws IOException { + return builder.field(CommonFields.KEY.getPreferredName(), getKeyAsString()); } } @@ -134,18 +126,6 @@ protected StringTerms create(String name, List buckets, long docCountErr showTermDocCountError, otherDocCount, buckets, docCountError); } - @Override - public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - builder.field(InternalTerms.DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME, docCountError); - builder.field(SUM_OF_OTHER_DOC_COUNTS, otherDocCount); - builder.startArray(CommonFields.BUCKETS); - for (Bucket bucket : buckets) { - bucket.toXContent(builder, params); - } - builder.endArray(); - return builder; - } - @Override protected Bucket[] createBucketsArray(int size) { return new Bucket[size]; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/StringTermsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/StringTermsAggregator.java index 7b2b76a086154..c93fc94ff6187 100644 --- 
a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/StringTermsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/StringTermsAggregator.java @@ -33,8 +33,8 @@ import org.elasticsearch.search.aggregations.bucket.terms.support.BucketPriorityQueue; import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Arrays; @@ -52,15 +52,15 @@ public class StringTermsAggregator extends AbstractStringTermsAggregator { public StringTermsAggregator(String name, AggregatorFactories factories, ValuesSource valuesSource, Terms.Order order, DocValueFormat format, BucketCountThresholds bucketCountThresholds, - IncludeExclude.StringFilter includeExclude, AggregationContext aggregationContext, + IncludeExclude.StringFilter includeExclude, SearchContext context, Aggregator parent, SubAggCollectionMode collectionMode, boolean showTermDocCountError, List pipelineAggregators, Map metaData) throws IOException { - super(name, factories, aggregationContext, parent, order, format, bucketCountThresholds, collectionMode, showTermDocCountError, + super(name, factories, context, parent, order, format, bucketCountThresholds, collectionMode, showTermDocCountError, pipelineAggregators, metaData); this.valuesSource = valuesSource; this.includeExclude = includeExclude; - bucketOrds = new BytesRefHash(1, aggregationContext.bigArrays()); + bucketOrds = new BytesRefHash(1, context.bigArrays()); } @Override @@ -110,7 +110,7 @@ public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOE if (bucketCountThresholds.getMinDocCount() == 0 && (order != InternalOrder.COUNT_DESC || bucketOrds.size() < bucketCountThresholds.getRequiredSize())) { // we need to fill-in the blanks - for (LeafReaderContext ctx : context.searchContext().searcher().getTopReaderContext().leaves()) { + for (LeafReaderContext ctx : context.searcher().getTopReaderContext().leaves()) { final SortedBinaryDocValues values = valuesSource.bytesValues(ctx); // brute force for (int docId = 0; docId < ctx.reader().maxDoc(); ++docId) { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/Terms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/Terms.java index aa70309da95c0..166ece4e1122d 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/Terms.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/Terms.java @@ -20,7 +20,6 @@ import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.search.aggregations.Aggregator; -import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation; import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation; import java.util.Arrays; @@ -33,50 +32,23 @@ */ public interface Terms extends MultiBucketsAggregation { - static enum ValueType { - - STRING(org.elasticsearch.search.aggregations.support.ValueType.STRING), - LONG(org.elasticsearch.search.aggregations.support.ValueType.LONG), - DOUBLE(org.elasticsearch.search.aggregations.support.ValueType.DOUBLE); - - final org.elasticsearch.search.aggregations.support.ValueType scriptValueType; - - private 
ValueType(org.elasticsearch.search.aggregations.support.ValueType scriptValueType) { - this.scriptValueType = scriptValueType; - } - - static ValueType resolveType(String type) { - if ("string".equals(type)) { - return STRING; - } - if ("double".equals(type) || "float".equals(type)) { - return DOUBLE; - } - if ("long".equals(type) || "integer".equals(type) || "short".equals(type) || "byte".equals(type)) { - return LONG; - } - return null; - } - } - /** * A bucket that is associated with a single term */ - abstract static class Bucket extends InternalMultiBucketAggregation.InternalBucket { - - public abstract Number getKeyAsNumber(); + interface Bucket extends MultiBucketsAggregation.Bucket { - abstract int compareTerm(Terms.Bucket other); + Number getKeyAsNumber(); - public abstract long getDocCountError(); + int compareTerm(Terms.Bucket other); + long getDocCountError(); } /** * Return the sorted list of the buckets in this terms aggregation. */ @Override - List getBuckets(); + List getBuckets(); /** * Get the bucket for the given term, or null if there is no such bucket. @@ -97,7 +69,7 @@ abstract static class Bucket extends InternalMultiBucketAggregation.InternalBuck /** * Determines the order by which the term buckets will be sorted */ - abstract static class Order implements ToXContent { + abstract class Order implements ToXContent { /** * @return a bucket ordering strategy that sorts buckets by their document counts (ascending or descending) diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregationBuilder.java index e34c3bcbed4c5..210f45be3179c 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregationBuilder.java @@ -19,23 +19,29 @@ package org.elasticsearch.search.aggregations.bucket.terms; import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.Aggregator.SubAggCollectionMode; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; +import org.elasticsearch.search.aggregations.bucket.terms.InternalOrder.CompoundOrder; +import org.elasticsearch.search.aggregations.bucket.terms.Terms.Order; import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregator.BucketCountThresholds; import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import 
org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -43,7 +49,6 @@ public class TermsAggregationBuilder extends ValuesSourceAggregationBuilder { public static final String NAME = "terms"; - private static final InternalAggregation.Type TYPE = new Type("terms"); public static final ParseField EXECUTION_HINT_FIELD_NAME = new ParseField("execution_hint"); public static final ParseField SHARD_SIZE_FIELD_NAME = new ParseField("shard_size"); @@ -56,6 +61,42 @@ public class TermsAggregationBuilder extends ValuesSourceAggregationBuilder PARSER; + static { + PARSER = new ObjectParser<>(TermsAggregationBuilder.NAME); + ValuesSourceParserHelper.declareAnyFields(PARSER, true, true); + + PARSER.declareBoolean(TermsAggregationBuilder::showTermDocCountError, + TermsAggregationBuilder.SHOW_TERM_DOC_COUNT_ERROR); + + PARSER.declareInt(TermsAggregationBuilder::shardSize, SHARD_SIZE_FIELD_NAME); + + PARSER.declareLong(TermsAggregationBuilder::minDocCount, MIN_DOC_COUNT_FIELD_NAME); + + PARSER.declareLong(TermsAggregationBuilder::shardMinDocCount, SHARD_MIN_DOC_COUNT_FIELD_NAME); + + PARSER.declareInt(TermsAggregationBuilder::size, REQUIRED_SIZE_FIELD_NAME); + + PARSER.declareString(TermsAggregationBuilder::executionHint, EXECUTION_HINT_FIELD_NAME); + + PARSER.declareField(TermsAggregationBuilder::collectMode, + (p, c) -> SubAggCollectionMode.parse(p.text()), + SubAggCollectionMode.KEY, ObjectParser.ValueType.STRING); + + PARSER.declareObjectArray(TermsAggregationBuilder::order, TermsAggregationBuilder::parseOrderParam, + TermsAggregationBuilder.ORDER_FIELD); + + PARSER.declareField((b, v) -> b.includeExclude(IncludeExclude.merge(v, b.includeExclude())), + IncludeExclude::parseInclude, IncludeExclude.INCLUDE_FIELD, ObjectParser.ValueType.OBJECT_ARRAY_OR_STRING); + + PARSER.declareField((b, v) -> b.includeExclude(IncludeExclude.merge(b.includeExclude(), v)), + IncludeExclude::parseExclude, IncludeExclude.EXCLUDE_FIELD, ObjectParser.ValueType.OBJECT_ARRAY_OR_STRING); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new TermsAggregationBuilder(aggregationName, null), context); + } + private Terms.Order order = Terms.Order.compound(Terms.Order.count(false), Terms.Order.term(true)); private IncludeExclude includeExclude = null; private String executionHint = null; @@ -65,14 +106,14 @@ public class TermsAggregationBuilder extends ValuesSourceAggregationBuilder innerBuild(AggregationContext context, ValuesSourceConfig config, + protected ValuesSourceAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new TermsAggregatorFactory(name, type, config, order, includeExclude, executionHint, collectMode, + return new TermsAggregatorFactory(name, config, order, includeExclude, executionHint, collectMode, bucketCountThresholds, showTermDocCountError, context, parent, subFactoriesBuilder, metaData); } @@ -295,7 +335,49 @@ protected boolean innerEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } 
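For reference, the ObjectParser declarations above together with the parseOrderParam/resolveOrder helpers that follow map the JSON "order" syntax onto the same Terms.Order values callers can set on the builder directly. A minimal, illustrative sketch (the aggregation name and field below are hypothetical, not part of this change):

    // Roughly equivalent to the request body:
    //   "terms": { "field": "genre", "order": [ { "_count": "desc" }, { "_term": "asc" } ] }
    TermsAggregationBuilder genres = new TermsAggregationBuilder("genres", ValueType.STRING);
    genres.field("genre");
    genres.order(Terms.Order.compound(Terms.Order.count(false), Terms.Order.term(true)));
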
+ + private static Terms.Order parseOrderParam(XContentParser parser, QueryParseContext context) throws IOException { + XContentParser.Token token; + Terms.Order orderParam = null; + String orderKey = null; + boolean orderAsc = false; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + orderKey = parser.currentName(); + } else if (token == XContentParser.Token.VALUE_STRING) { + String dir = parser.text(); + if ("asc".equalsIgnoreCase(dir)) { + orderAsc = true; + } else if ("desc".equalsIgnoreCase(dir)) { + orderAsc = false; + } else { + throw new ParsingException(parser.getTokenLocation(), + "Unknown terms order direction [" + dir + "]"); + } + } else { + throw new ParsingException(parser.getTokenLocation(), + "Unexpected token " + token + " for [order]"); + } + } + if (orderKey == null) { + throw new ParsingException(parser.getTokenLocation(), + "Must specify at least one field for [order]"); + } else { + orderParam = resolveOrder(orderKey, orderAsc); + } + return orderParam; + } + + static Terms.Order resolveOrder(String key, boolean asc) { + if ("_term".equals(key)) { + return Order.term(asc); + } + if ("_count".equals(key)) { + return Order.count(asc); + } + return Order.aggregation(key, asc); + } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregator.java index 4b8ec77c46761..78d6cde211cde 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregator.java @@ -33,8 +33,8 @@ import org.elasticsearch.search.aggregations.bucket.terms.InternalOrder.Aggregation; import org.elasticsearch.search.aggregations.bucket.terms.InternalOrder.CompoundOrder; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.AggregationPath; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.HashSet; @@ -137,7 +137,9 @@ public void setShardSize(int shardSize) { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.field(TermsAggregationBuilder.REQUIRED_SIZE_FIELD_NAME.getPreferredName(), requiredSize); - builder.field(TermsAggregationBuilder.SHARD_SIZE_FIELD_NAME.getPreferredName(), shardSize); + if (shardSize != -1) { + builder.field(TermsAggregationBuilder.SHARD_SIZE_FIELD_NAME.getPreferredName(), shardSize); + } builder.field(TermsAggregationBuilder.MIN_DOC_COUNT_FIELD_NAME.getPreferredName(), minDocCount); builder.field(TermsAggregationBuilder.SHARD_MIN_DOC_COUNT_FIELD_NAME.getPreferredName(), shardMinDocCount); return builder; @@ -170,7 +172,7 @@ public boolean equals(Object obj) { protected final Set aggsUsedForSorting = new HashSet<>(); protected final SubAggCollectionMode collectMode; - public TermsAggregator(String name, AggregatorFactories factories, AggregationContext context, Aggregator parent, + public TermsAggregator(String name, AggregatorFactories factories, SearchContext context, Aggregator parent, BucketCountThresholds bucketCountThresholds, Terms.Order order, DocValueFormat format, SubAggCollectionMode collectMode, List pipelineAggregators, Map metaData) throws IOException { super(name, factories, 
context, parent, pipelineAggregators, metaData); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorFactory.java index c01377d976189..0ef6d0b9bf35d 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorFactory.java @@ -21,26 +21,25 @@ import org.apache.lucene.search.IndexSearcher; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.search.DocValueFormat; import org.elasticsearch.search.aggregations.AggregationExecutionException; import org.elasticsearch.search.aggregations.Aggregator; +import org.elasticsearch.search.aggregations.Aggregator.SubAggCollectionMode; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.NonCollectingAggregator; -import org.elasticsearch.search.aggregations.Aggregator.SubAggCollectionMode; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.bucket.BucketUtils; import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregator.BucketCountThresholds; import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; +import java.util.Arrays; import java.util.List; import java.util.Map; @@ -53,11 +52,19 @@ public class TermsAggregatorFactory extends ValuesSourceAggregatorFactory config, Terms.Order order, - IncludeExclude includeExclude, String executionHint, SubAggCollectionMode collectMode, - TermsAggregator.BucketCountThresholds bucketCountThresholds, boolean showTermDocCountError, AggregationContext context, - AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + public TermsAggregatorFactory(String name, + ValuesSourceConfig config, + Terms.Order order, + IncludeExclude includeExclude, + String executionHint, + SubAggCollectionMode collectMode, + TermsAggregator.BucketCountThresholds bucketCountThresholds, + boolean showTermDocCountError, + SearchContext context, + AggregatorFactory parent, + AggregatorFactories.Builder subFactoriesBuilder, + Map metaData) throws IOException { + super(name, config, context, parent, subFactoriesBuilder, metaData); this.order = order; this.includeExclude = includeExclude; this.executionHint = executionHint; @@ -98,13 +105,13 @@ protected Aggregator doCreateInternal(ValuesSource valuesSource, Aggregator pare // heuristic to avoid any wrong-ranking caused by distributed // counting bucketCountThresholds.setShardSize(BucketUtils.suggestShardSideQueueSize(bucketCountThresholds.getRequiredSize(), - 
context.searchContext().numberOfShards())); + context.numberOfShards())); } bucketCountThresholds.ensureValidity(); if (valuesSource instanceof ValuesSource.Bytes) { ExecutionMode execution = null; if (executionHint != null) { - execution = ExecutionMode.fromString(executionHint, context.searchContext().parseFieldMatcher()); + execution = ExecutionMode.fromString(executionHint); } // In some cases, using ordinals is just not supported: override it @@ -116,7 +123,7 @@ protected Aggregator doCreateInternal(ValuesSource valuesSource, Aggregator pare final double ratio; if (execution == null || execution.needsGlobalOrdinals()) { ValuesSource.Bytes.WithOrdinals valueSourceWithOrdinals = (ValuesSource.Bytes.WithOrdinals) valuesSource; - IndexSearcher indexSearcher = context.searchContext().searcher(); + IndexSearcher indexSearcher = context.searcher(); maxOrd = valueSourceWithOrdinals.globalMaxOrd(indexSearcher); ratio = maxOrd / ((double) indexSearcher.getIndexReader().numDocs()); } else { @@ -131,7 +138,10 @@ protected Aggregator doCreateInternal(ValuesSource valuesSource, Aggregator pare // to be unbounded and most instances may only aggregate few // documents, so use hashed based // global ordinals to keep the bucket ords dense. - if (Aggregator.descendsFromBucketAggregator(parent)) { + // Additionally, if using partitioned terms the regular global + // ordinals would be sparse so we opt for hash + if (Aggregator.descendsFromBucketAggregator(parent) || + (includeExclude != null && includeExclude.isPartitionBased())) { execution = ExecutionMode.GLOBAL_ORDINALS_HASH; } else { if (factories == AggregatorFactories.EMPTY) { @@ -222,14 +232,23 @@ public enum ExecutionMode { MAP(new ParseField("map")) { @Override - Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource, Terms.Order order, - DocValueFormat format, TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude, - AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode subAggCollectMode, - boolean showTermDocCountError, List pipelineAggregators, Map metaData) - throws IOException { + Aggregator create(String name, + AggregatorFactories factories, + ValuesSource valuesSource, + Terms.Order order, + DocValueFormat format, + TermsAggregator.BucketCountThresholds bucketCountThresholds, + IncludeExclude includeExclude, + SearchContext context, Aggregator parent, + SubAggCollectionMode subAggCollectMode, + boolean showTermDocCountError, + List pipelineAggregators, + Map metaData) throws IOException { + final IncludeExclude.StringFilter filter = includeExclude == null ? 
null : includeExclude.convertToStringFilter(format); return new StringTermsAggregator(name, factories, valuesSource, order, format, bucketCountThresholds, filter, - aggregationContext, parent, subAggCollectMode, showTermDocCountError, pipelineAggregators, metaData); + context, parent, subAggCollectMode, showTermDocCountError, pipelineAggregators, metaData); + } @Override @@ -241,15 +260,25 @@ boolean needsGlobalOrdinals() { GLOBAL_ORDINALS(new ParseField("global_ordinals")) { @Override - Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource, Terms.Order order, - DocValueFormat format, TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude, - AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode subAggCollectMode, - boolean showTermDocCountError, List pipelineAggregators, Map metaData) - throws IOException { + Aggregator create(String name, + AggregatorFactories factories, + ValuesSource valuesSource, + Terms.Order order, + DocValueFormat format, + TermsAggregator.BucketCountThresholds bucketCountThresholds, + IncludeExclude includeExclude, + SearchContext context, + Aggregator parent, + SubAggCollectionMode subAggCollectMode, + boolean showTermDocCountError, + List pipelineAggregators, + Map metaData) throws IOException { + final IncludeExclude.OrdinalsFilter filter = includeExclude == null ? null : includeExclude.convertToOrdinalsFilter(format); return new GlobalOrdinalsStringTermsAggregator(name, factories, (ValuesSource.Bytes.WithOrdinals) valuesSource, order, - format, bucketCountThresholds, filter, aggregationContext, parent, subAggCollectMode, showTermDocCountError, + format, bucketCountThresholds, filter, context, parent, false, subAggCollectMode, showTermDocCountError, pipelineAggregators, metaData); + } @Override @@ -261,15 +290,25 @@ boolean needsGlobalOrdinals() { GLOBAL_ORDINALS_HASH(new ParseField("global_ordinals_hash")) { @Override - Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource, Terms.Order order, - DocValueFormat format, TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude, - AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode subAggCollectMode, - boolean showTermDocCountError, List pipelineAggregators, Map metaData) - throws IOException { + Aggregator create(String name, + AggregatorFactories factories, + ValuesSource valuesSource, + Terms.Order order, + DocValueFormat format, + TermsAggregator.BucketCountThresholds bucketCountThresholds, + IncludeExclude includeExclude, + SearchContext context, + Aggregator parent, + SubAggCollectionMode subAggCollectMode, + boolean showTermDocCountError, + List pipelineAggregators, + Map metaData) throws IOException { + final IncludeExclude.OrdinalsFilter filter = includeExclude == null ? 
null : includeExclude.convertToOrdinalsFilter(format); - return new GlobalOrdinalsStringTermsAggregator.WithHash(name, factories, (ValuesSource.Bytes.WithOrdinals) valuesSource, - order, format, bucketCountThresholds, filter, aggregationContext, parent, subAggCollectMode, showTermDocCountError, - pipelineAggregators, metaData); + return new GlobalOrdinalsStringTermsAggregator(name, factories, (ValuesSource.Bytes.WithOrdinals) valuesSource, + order, format, bucketCountThresholds, filter, context, parent, true, subAggCollectMode, + showTermDocCountError, pipelineAggregators, metaData); + } @Override @@ -280,21 +319,31 @@ boolean needsGlobalOrdinals() { GLOBAL_ORDINALS_LOW_CARDINALITY(new ParseField("global_ordinals_low_cardinality")) { @Override - Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource, Terms.Order order, - DocValueFormat format, TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude, - AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode subAggCollectMode, - boolean showTermDocCountError, List pipelineAggregators, Map metaData) - throws IOException { + Aggregator create(String name, + AggregatorFactories factories, + ValuesSource valuesSource, + Terms.Order order, + DocValueFormat format, + TermsAggregator.BucketCountThresholds bucketCountThresholds, + IncludeExclude includeExclude, + SearchContext context, + Aggregator parent, + SubAggCollectionMode subAggCollectMode, + boolean showTermDocCountError, + List pipelineAggregators, + Map metaData) throws IOException { + if (includeExclude != null || factories.countAggregators() > 0 - // we need the FieldData impl to be able to extract the - // segment to global ord mapping + // we need the FieldData impl to be able to extract the + // segment to global ord mapping || valuesSource.getClass() != ValuesSource.Bytes.FieldData.class) { return GLOBAL_ORDINALS.create(name, factories, valuesSource, order, format, bucketCountThresholds, includeExclude, - aggregationContext, parent, subAggCollectMode, showTermDocCountError, pipelineAggregators, metaData); + context, parent, subAggCollectMode, showTermDocCountError, pipelineAggregators, metaData); } return new GlobalOrdinalsStringTermsAggregator.LowCardinality(name, factories, - (ValuesSource.Bytes.WithOrdinals) valuesSource, order, format, bucketCountThresholds, aggregationContext, parent, - subAggCollectMode, showTermDocCountError, pipelineAggregators, metaData); + (ValuesSource.Bytes.WithOrdinals) valuesSource, order, format, bucketCountThresholds, context, parent, + false, subAggCollectMode, showTermDocCountError, pipelineAggregators, metaData); + } @Override @@ -303,13 +352,13 @@ boolean needsGlobalOrdinals() { } }; - public static ExecutionMode fromString(String value, ParseFieldMatcher parseFieldMatcher) { + public static ExecutionMode fromString(String value) { for (ExecutionMode mode : values()) { - if (parseFieldMatcher.match(value, mode.parseField)) { + if (mode.parseField.match(value)) { return mode; } } - throw new IllegalArgumentException("Unknown `execution_hint`: [" + value + "], expected any of " + values()); + throw new IllegalArgumentException("Unknown `execution_hint`: [" + value + "], expected any of " + Arrays.toString(values())); } private final ParseField parseField; @@ -318,11 +367,19 @@ public static ExecutionMode fromString(String value, ParseFieldMatcher parseFiel this.parseField = parseField; } - abstract Aggregator create(String name, AggregatorFactories factories, 
ValuesSource valuesSource, Terms.Order order, - DocValueFormat format, TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude, - AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode subAggCollectMode, - boolean showTermDocCountError, List pipelineAggregators, Map metaData) - throws IOException; + abstract Aggregator create(String name, + AggregatorFactories factories, + ValuesSource valuesSource, + Terms.Order order, + DocValueFormat format, + TermsAggregator.BucketCountThresholds bucketCountThresholds, + IncludeExclude includeExclude, + SearchContext context, + Aggregator parent, + SubAggCollectionMode subAggCollectMode, + boolean showTermDocCountError, + List pipelineAggregators, + Map metaData) throws IOException; abstract boolean needsGlobalOrdinals(); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsParser.java deleted file mode 100644 index bf8b06ab65aaa..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsParser.java +++ /dev/null @@ -1,175 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ -package org.elasticsearch.search.aggregations.bucket.terms; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParsingException; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentParser.Token; -import org.elasticsearch.search.aggregations.Aggregator.SubAggCollectionMode; -import org.elasticsearch.search.aggregations.bucket.terms.Terms.Order; -import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregator.BucketCountThresholds; -import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.ArrayList; -import java.util.Collections; -import java.util.List; -import java.util.Map; - -/** - * - */ -public class TermsParser extends AbstractTermsParser { - @Override - protected TermsAggregationBuilder doCreateFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, BucketCountThresholds bucketCountThresholds, - SubAggCollectionMode collectMode, String executionHint, - IncludeExclude incExc, Map otherOptions) { - TermsAggregationBuilder factory = new TermsAggregationBuilder(aggregationName, targetValueType); - @SuppressWarnings("unchecked") - List orderElements = (List) otherOptions.get(TermsAggregationBuilder.ORDER_FIELD); - if (orderElements != null) { - List orders = new ArrayList<>(orderElements.size()); - for (OrderElement orderElement : orderElements) { - orders.add(resolveOrder(orderElement.key(), orderElement.asc())); - } - factory.order(orders); - } - if (bucketCountThresholds != null) { - factory.bucketCountThresholds(bucketCountThresholds); - } - if (collectMode != null) { - factory.collectMode(collectMode); - } - if (executionHint != null) { - factory.executionHint(executionHint); - } - if (incExc != null) { - factory.includeExclude(incExc); - } - Boolean showTermDocCountError = (Boolean) otherOptions.get(TermsAggregationBuilder.SHOW_TERM_DOC_COUNT_ERROR); - if (showTermDocCountError != null) { - factory.showTermDocCountError(showTermDocCountError); - } - return factory; - } - - @Override - public boolean parseSpecial(String aggregationName, XContentParseContext context, Token token, - String currentFieldName, Map otherOptions) throws IOException { - XContentParser parser = context.getParser(); - if (token == XContentParser.Token.START_OBJECT) { - if (context.matchField(currentFieldName, TermsAggregationBuilder.ORDER_FIELD)) { - otherOptions.put(TermsAggregationBuilder.ORDER_FIELD, Collections.singletonList(parseOrderParam(aggregationName, parser))); - return true; - } - } else if (token == XContentParser.Token.START_ARRAY) { - if (context.matchField(currentFieldName, TermsAggregationBuilder.ORDER_FIELD)) { - List orderElements = new ArrayList<>(); - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - if (token == XContentParser.Token.START_OBJECT) { - OrderElement orderParam = parseOrderParam(aggregationName, parser); - orderElements.add(orderParam); - } else { - throw new ParsingException(parser.getTokenLocation(), - "Order elements must be of type object in [" + aggregationName + "] found token of type [" + token + "]."); - } - } - otherOptions.put(TermsAggregationBuilder.ORDER_FIELD, orderElements); - return true; - } - } else if (token == 
XContentParser.Token.VALUE_BOOLEAN) { - if (context.matchField(currentFieldName, TermsAggregationBuilder.SHOW_TERM_DOC_COUNT_ERROR)) { - otherOptions.put(TermsAggregationBuilder.SHOW_TERM_DOC_COUNT_ERROR, parser.booleanValue()); - return true; - } - } - return false; - } - - private OrderElement parseOrderParam(String aggregationName, XContentParser parser) throws IOException { - XContentParser.Token token; - OrderElement orderParam = null; - String orderKey = null; - boolean orderAsc = false; - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - orderKey = parser.currentName(); - } else if (token == XContentParser.Token.VALUE_STRING) { - String dir = parser.text(); - if ("asc".equalsIgnoreCase(dir)) { - orderAsc = true; - } else if ("desc".equalsIgnoreCase(dir)) { - orderAsc = false; - } else { - throw new ParsingException(parser.getTokenLocation(), - "Unknown terms order direction [" + dir + "] in terms aggregation [" + aggregationName + "]"); - } - } else { - throw new ParsingException(parser.getTokenLocation(), - "Unexpected token " + token + " for [order] in [" + aggregationName + "]."); - } - } - if (orderKey == null) { - throw new ParsingException(parser.getTokenLocation(), - "Must specify at least one field for [order] in [" + aggregationName + "]."); - } else { - orderParam = new OrderElement(orderKey, orderAsc); - } - return orderParam; - } - - static class OrderElement { - private final String key; - private final boolean asc; - - public OrderElement(String key, boolean asc) { - this.key = key; - this.asc = asc; - } - - public String key() { - return key; - } - - public boolean asc() { - return asc; - } - - } - - @Override - public TermsAggregator.BucketCountThresholds getDefaultBucketCountThresholds() { - return new TermsAggregator.BucketCountThresholds(TermsAggregationBuilder.DEFAULT_BUCKET_COUNT_THRESHOLDS); - } - - static Terms.Order resolveOrder(String key, boolean asc) { - if ("_term".equals(key)) { - return Order.term(asc); - } - if ("_count".equals(key)) { - return Order.count(asc); - } - return Order.aggregation(key, asc); - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/UnmappedTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/UnmappedTerms.java index c3d1a8937c4ed..ebd5e8a62ba52 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/UnmappedTerms.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/UnmappedTerms.java @@ -27,6 +27,7 @@ import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import java.io.IOException; +import java.util.Collections; import java.util.List; import java.util.Map; @@ -71,6 +72,11 @@ public String getWriteableName() { return NAME; } + @Override + public String getType() { + return StringTerms.NAME; + } + @Override public UnmappedTerms create(List buckets) { return new UnmappedTerms(name, order, requiredSize, minDocCount, pipelineAggregators(), metaData); @@ -97,11 +103,8 @@ public InternalAggregation doReduce(List aggregations, Redu } @Override - public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - builder.field(InternalTerms.DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME, 0); - builder.field(SUM_OF_OTHER_DOC_COUNTS, 0); - builder.startArray(CommonFields.BUCKETS).endArray(); - return builder; + public final XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws 
IOException { + return doXContentCommon(builder, params, 0, 0, Collections.emptyList()); } @Override @@ -124,7 +127,7 @@ public long getSumOfOtherDocCounts() { } @Override - protected List getBucketsInternal() { + public List getBuckets() { return emptyList(); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/support/IncludeExclude.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/support/IncludeExclude.java index e751c54fb16e9..7499575f642d9 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/support/IncludeExclude.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/support/IncludeExclude.java @@ -18,9 +18,9 @@ */ package org.elasticsearch.search.aggregations.bucket.terms.support; +import com.carrotsearch.hppc.BitMixer; import com.carrotsearch.hppc.LongHashSet; import com.carrotsearch.hppc.LongSet; - import org.apache.lucene.index.RandomAccessOrds; import org.apache.lucene.index.SortedSetDocValues; import org.apache.lucene.index.Terms; @@ -28,6 +28,7 @@ import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.LongBitSet; import org.apache.lucene.util.NumericUtils; +import org.apache.lucene.util.StringHelper; import org.apache.lucene.util.automaton.Automata; import org.apache.lucene.util.automaton.Automaton; import org.apache.lucene.util.automaton.ByteRunAutomaton; @@ -35,8 +36,8 @@ import org.apache.lucene.util.automaton.Operations; import org.apache.lucene.util.automaton.RegExp; import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.Version; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; @@ -47,7 +48,6 @@ import java.io.IOException; import java.util.HashSet; -import java.util.Map; import java.util.Objects; import java.util.Set; import java.util.SortedSet; @@ -58,18 +58,143 @@ * exclusion has precedence, where the {@code include} is evaluated first and then the {@code exclude}. */ public class IncludeExclude implements Writeable, ToXContent { - private static final ParseField INCLUDE_FIELD = new ParseField("include"); - private static final ParseField EXCLUDE_FIELD = new ParseField("exclude"); - private static final ParseField PATTERN_FIELD = new ParseField("pattern"); + public static final ParseField INCLUDE_FIELD = new ParseField("include"); + public static final ParseField EXCLUDE_FIELD = new ParseField("exclude"); + public static final ParseField PATTERN_FIELD = new ParseField("pattern").withAllDeprecated("Put patterns directly under the [include] or [exclude]"); + public static final ParseField PARTITION_FIELD = new ParseField("partition"); + public static final ParseField NUM_PARTITIONS_FIELD = new ParseField("num_partitions"); + // Needed to add this seed for a deterministic term hashing policy + // otherwise tests fail to get expected results and worse, shards + // can disagree on which terms hash to the required partition. 
+ private static final int HASH_PARTITIONING_SEED = 31; + + // for parsing purposes only + // TODO: move all aggs to the same package so that this stuff could be pkg-private + public static IncludeExclude merge(IncludeExclude include, IncludeExclude exclude) { + if (include == null) { + return exclude; + } + if (exclude == null) { + return include; + } + if (include.isPartitionBased()) { + throw new IllegalArgumentException("Cannot specify any excludes when using a partition-based include"); + } + String includeMethod = include.isRegexBased() ? "regex" : "set"; + String excludeMethod = exclude.isRegexBased() ? "regex" : "set"; + if (includeMethod.equals(excludeMethod) == false) { + throw new IllegalArgumentException("Cannot mix a " + includeMethod + "-based include with a " + + excludeMethod + "-based method"); + } + if (include.isRegexBased()) { + return new IncludeExclude(include.include, exclude.exclude); + } else { + return new IncludeExclude(include.includeValues, exclude.excludeValues); + } + } + + public static IncludeExclude parseInclude(XContentParser parser) throws IOException { + XContentParser.Token token = parser.currentToken(); + if (token == XContentParser.Token.VALUE_STRING) { + return new IncludeExclude(parser.text(), null); + } else if (token == XContentParser.Token.START_ARRAY) { + return new IncludeExclude(new TreeSet<>(parseArrayToSet(parser)), null); + } else if (token == XContentParser.Token.START_OBJECT) { + String currentFieldName = null; + Integer partition = null, numPartitions = null; + String pattern = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else + // This "include":{"pattern":"foo.*"} syntax is undocumented since 2.0 + // Regexes should be "include":"foo.*" + if (PATTERN_FIELD.match(currentFieldName)) { + pattern = parser.text(); + } else if (NUM_PARTITIONS_FIELD.match(currentFieldName)) { + numPartitions = parser.intValue(); + } else if (PARTITION_FIELD.match(currentFieldName)) { + partition = parser.intValue(); + } else { + throw new ElasticsearchParseException( + "Unknown parameter in Include/Exclude clause: " + currentFieldName); + } + } + + final boolean hasPattern = pattern != null; + final boolean hasPartition = partition != null || numPartitions != null; + if (hasPattern && hasPartition) { + throw new IllegalArgumentException("Cannot mix pattern-based and partition-based includes"); + } + + if (pattern != null) { + return new IncludeExclude(pattern, null); + } + + if (partition == null) { + throw new IllegalArgumentException("Missing [" + PARTITION_FIELD.getPreferredName() + + "] parameter for partition-based include"); + } + if (numPartitions == null) { + throw new IllegalArgumentException("Missing [" + NUM_PARTITIONS_FIELD.getPreferredName() + + "] parameter for partition-based include"); + } + return new IncludeExclude(partition, numPartitions); + } else { + throw new IllegalArgumentException("Unrecognized token for an include [" + token + "]"); + } + } + + public static IncludeExclude parseExclude(XContentParser parser) throws IOException { + XContentParser.Token token = parser.currentToken(); + if (token == XContentParser.Token.VALUE_STRING) { + return new IncludeExclude(null, parser.text()); + } else if (token == XContentParser.Token.START_ARRAY) { + return new IncludeExclude(null, new TreeSet<>(parseArrayToSet(parser))); + } else if (token == XContentParser.Token.START_OBJECT) { + String currentFieldName = 
null; + String pattern = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (PATTERN_FIELD.match(currentFieldName)) { + pattern = parser.text(); + } else { + throw new IllegalArgumentException("Unrecognized field [" + parser.currentName() + "]"); + } + } + if (pattern == null) { + throw new IllegalArgumentException("Missing [pattern] element under [exclude]"); + } + return new IncludeExclude(null, pattern); + } else { + throw new IllegalArgumentException("Unrecognized token for an exclude [" + token + "]"); + } + } // The includeValue and excludeValue ByteRefs which are the result of the parsing // process are converted into a LongFilter when used on numeric fields // in the index. - public static class LongFilter { + public abstract static class LongFilter { + public abstract boolean accept(long value); + + } + + public class PartitionedLongFilter extends LongFilter { + @Override + public boolean accept(long value) { + // hash the value to keep even distributions + final long hashCode = BitMixer.mix64(value); + return Math.floorMod(hashCode, incNumPartitions) == incZeroBasedPartition; + } + } + + + public static class SetBackedLongFilter extends LongFilter { private LongSet valids; private LongSet invalids; - private LongFilter(int numValids, int numInvalids) { + private SetBackedLongFilter(int numValids, int numInvalids) { if (numValids > 0) { valids = new LongHashSet(numValids); } @@ -96,6 +221,13 @@ public abstract static class StringFilter { public abstract boolean accept(BytesRef value); } + class PartitionedStringFilter extends StringFilter { + @Override + public boolean accept(BytesRef value) { + return Math.floorMod(StringHelper.murmurhash3_x86_32(value, HASH_PARTITIONING_SEED), incNumPartitions) == incZeroBasedPartition; + } + } + static class AutomatonBackedStringFilter extends StringFilter { private final ByteRunAutomaton runAutomaton; @@ -118,7 +250,7 @@ static class TermListBackedStringFilter extends StringFilter { private final Set valids; private final Set invalids; - public TermListBackedStringFilter(Set includeValues, Set excludeValues) { + TermListBackedStringFilter(Set includeValues, Set excludeValues) { this.valids = includeValues; this.invalids = excludeValues; } @@ -135,7 +267,25 @@ public boolean accept(BytesRef value) { public abstract static class OrdinalsFilter { public abstract LongBitSet acceptedGlobalOrdinals(RandomAccessOrds globalOrdinals) throws IOException; + } + + class PartitionedOrdinalsFilter extends OrdinalsFilter { + @Override + public LongBitSet acceptedGlobalOrdinals(RandomAccessOrds globalOrdinals) throws IOException { + final long numOrds = globalOrdinals.getValueCount(); + final LongBitSet acceptedGlobalOrdinals = new LongBitSet(numOrds); + final TermsEnum termEnum = globalOrdinals.termsEnum(); + + BytesRef term = termEnum.next(); + while (term != null) { + if (Math.floorMod(StringHelper.murmurhash3_x86_32(term, HASH_PARTITIONING_SEED), incNumPartitions) == incZeroBasedPartition) { + acceptedGlobalOrdinals.set(termEnum.ord()); + } + term = termEnum.next(); + } + return acceptedGlobalOrdinals; + } } static class AutomatonBackedOrdinalsFilter extends OrdinalsFilter { @@ -151,8 +301,7 @@ private AutomatonBackedOrdinalsFilter(Automaton automaton) { * */ @Override - public LongBitSet acceptedGlobalOrdinals(RandomAccessOrds globalOrdinals) - throws IOException { + public LongBitSet 
acceptedGlobalOrdinals(RandomAccessOrds globalOrdinals) throws IOException { LongBitSet acceptedGlobalOrdinals = new LongBitSet(globalOrdinals.getValueCount()); TermsEnum globalTermsEnum; Terms globalTerms = new DocValuesTerms(globalOrdinals); @@ -171,7 +320,7 @@ static class TermListBackedOrdinalsFilter extends OrdinalsFilter { private final SortedSet includeValues; private final SortedSet excludeValues; - public TermListBackedOrdinalsFilter(SortedSet includeValues, SortedSet excludeValues) { + TermListBackedOrdinalsFilter(SortedSet includeValues, SortedSet excludeValues) { this.includeValues = includeValues; this.excludeValues = excludeValues; } @@ -205,6 +354,8 @@ public LongBitSet acceptedGlobalOrdinals(RandomAccessOrds globalOrdinals) throws private final RegExp include, exclude; private final SortedSet includeValues, excludeValues; + private final int incZeroBasedPartition; + private final int incNumPartitions; /** * @param include The regular expression pattern for the terms to be included @@ -218,6 +369,8 @@ public IncludeExclude(RegExp include, RegExp exclude) { this.exclude = exclude; this.includeValues = null; this.excludeValues = null; + this.incZeroBasedPartition = 0; + this.incNumPartitions = 0; } public IncludeExclude(String include, String exclude) { @@ -234,6 +387,8 @@ public IncludeExclude(SortedSet includeValues, SortedSet exc } this.include = null; this.exclude = null; + this.incZeroBasedPartition = 0; + this.incNumPartitions = 0; this.includeValues = includeValues; this.excludeValues = excludeValues; } @@ -250,6 +405,21 @@ public IncludeExclude(long[] includeValues, long[] excludeValues) { this(convertToBytesRefSet(includeValues), convertToBytesRefSet(excludeValues)); } + public IncludeExclude(int partition, int numPartitions) { + if (partition < 0 || partition >= numPartitions) { + throw new IllegalArgumentException("Partition must be >=0 and < numPartition which is "+numPartitions); + } + this.incZeroBasedPartition = partition; + this.incNumPartitions = numPartitions; + this.include = null; + this.exclude = null; + this.includeValues = null; + this.excludeValues = null; + + } + + + /** * Read from a stream. */ @@ -257,6 +427,8 @@ public IncludeExclude(StreamInput in) throws IOException { if (in.readBoolean()) { includeValues = null; excludeValues = null; + incZeroBasedPartition = 0; + incNumPartitions = 0; String includeString = in.readOptionalString(); include = includeString == null ? 
null : new RegExp(includeString); String excludeString = in.readOptionalString(); @@ -283,6 +455,13 @@ public IncludeExclude(StreamInput in) throws IOException { } else { excludeValues = null; } + if (in.getVersion().onOrAfter(Version.V_5_2_0)) { + incNumPartitions = in.readVInt(); + incZeroBasedPartition = in.readVInt(); + } else { + incNumPartitions = 0; + incZeroBasedPartition = 0; + } } @Override @@ -309,6 +488,10 @@ public void writeTo(StreamOutput out) throws IOException { out.writeBytesRef(value); } } + if (out.getVersion().onOrAfter(Version.V_5_2_0)) { + out.writeVInt(incNumPartitions); + out.writeVInt(incZeroBasedPartition); + } } } @@ -403,120 +586,28 @@ public boolean hasPayloads() { } - - - public static class Parser { - - public boolean token(String currentFieldName, XContentParser.Token token, XContentParser parser, - ParseFieldMatcher parseFieldMatcher, Map otherOptions) throws IOException { - - if (token == XContentParser.Token.VALUE_STRING) { - if (parseFieldMatcher.match(currentFieldName, INCLUDE_FIELD)) { - otherOptions.put(INCLUDE_FIELD, parser.text()); - } else if (parseFieldMatcher.match(currentFieldName, EXCLUDE_FIELD)) { - otherOptions.put(EXCLUDE_FIELD, parser.text()); - } else { - return false; - } - return true; - } - - if (token == XContentParser.Token.START_ARRAY) { - if (parseFieldMatcher.match(currentFieldName, INCLUDE_FIELD)) { - otherOptions.put(INCLUDE_FIELD, new TreeSet<>(parseArrayToSet(parser))); - return true; - } - if (parseFieldMatcher.match(currentFieldName, EXCLUDE_FIELD)) { - otherOptions.put(EXCLUDE_FIELD, new TreeSet<>(parseArrayToSet(parser))); - return true; - } - return false; - } - - if (token == XContentParser.Token.START_OBJECT) { - if (parseFieldMatcher.match(currentFieldName, INCLUDE_FIELD)) { - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (token == XContentParser.Token.VALUE_STRING) { - if (parseFieldMatcher.match(currentFieldName, PATTERN_FIELD)) { - otherOptions.put(INCLUDE_FIELD, parser.text()); - } - } - } - } else if (parseFieldMatcher.match(currentFieldName, EXCLUDE_FIELD)) { - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (token == XContentParser.Token.VALUE_STRING) { - if (parseFieldMatcher.match(currentFieldName, PATTERN_FIELD)) { - otherOptions.put(EXCLUDE_FIELD, parser.text()); - } - } - } - } else { - return false; - } - return true; - } - - return false; + private static Set parseArrayToSet(XContentParser parser) throws IOException { + final Set set = new HashSet<>(); + if (parser.currentToken() != XContentParser.Token.START_ARRAY) { + throw new ElasticsearchParseException("Missing start of array in include/exclude clause"); } - - private Set parseArrayToSet(XContentParser parser) throws IOException { - final Set set = new HashSet<>(); - if (parser.currentToken() != XContentParser.Token.START_ARRAY) { - throw new ElasticsearchParseException("Missing start of array in include/exclude clause"); - } - while (parser.nextToken() != XContentParser.Token.END_ARRAY) { - if (!parser.currentToken().isValue()) { - throw new ElasticsearchParseException("Array elements in include/exclude clauses should be string values"); - } - set.add(new BytesRef(parser.text())); - } - return set; - } - - public IncludeExclude createIncludeExclude(Map otherOptions) { - Object includeObject 
= otherOptions.get(INCLUDE_FIELD); - String include = null; - SortedSet includeValues = null; - if (includeObject != null) { - if (includeObject instanceof String) { - include = (String) includeObject; - } else if (includeObject instanceof SortedSet) { - includeValues = (SortedSet) includeObject; - } - } - Object excludeObject = otherOptions.get(EXCLUDE_FIELD); - String exclude = null; - SortedSet excludeValues = null; - if (excludeObject != null) { - if (excludeObject instanceof String) { - exclude = (String) excludeObject; - } else if (excludeObject instanceof SortedSet) { - excludeValues = (SortedSet) excludeObject; - } - } - RegExp includePattern = include != null ? new RegExp(include) : null; - RegExp excludePattern = exclude != null ? new RegExp(exclude) : null; - if (includePattern != null || excludePattern != null) { - if (includeValues != null || excludeValues != null) { - throw new IllegalArgumentException("Can only use regular expression include/exclude or a set of values, not both"); - } - return new IncludeExclude(includePattern, excludePattern); - } else if (includeValues != null || excludeValues != null) { - return new IncludeExclude(includeValues, excludeValues); - } else { - return null; + while (parser.nextToken() != XContentParser.Token.END_ARRAY) { + if (!parser.currentToken().isValue()) { + throw new ElasticsearchParseException("Array elements in include/exclude clauses should be string values"); } + set.add(new BytesRef(parser.text())); } + return set; } public boolean isRegexBased() { return include != null || exclude != null; } + public boolean isPartitionBased() { + return incNumPartitions > 0; + } + private Automaton toAutomaton() { Automaton a = null; if (include != null) { @@ -538,6 +629,9 @@ public StringFilter convertToStringFilter(DocValueFormat format) { if (isRegexBased()) { return new AutomatonBackedStringFilter(toAutomaton()); } + if (isPartitionBased()){ + return new PartitionedStringFilter(); + } return new TermListBackedStringFilter(parseForDocValues(includeValues, format), parseForDocValues(excludeValues, format)); } @@ -559,13 +653,22 @@ public OrdinalsFilter convertToOrdinalsFilter(DocValueFormat format) { if (isRegexBased()) { return new AutomatonBackedOrdinalsFilter(toAutomaton()); } + if (isPartitionBased()){ + return new PartitionedOrdinalsFilter(); + } + return new TermListBackedOrdinalsFilter(parseForDocValues(includeValues, format), parseForDocValues(excludeValues, format)); } public LongFilter convertToLongFilter(DocValueFormat format) { + + if(isPartitionBased()){ + return new PartitionedLongFilter(); + } + int numValids = includeValues == null ? 0 : includeValues.size(); int numInvalids = excludeValues == null ? 0 : excludeValues.size(); - LongFilter result = new LongFilter(numValids, numInvalids); + SetBackedLongFilter result = new SetBackedLongFilter(numValids, numInvalids); if (includeValues != null) { for (BytesRef val : includeValues) { result.addAccept(format.parseLong(val.utf8ToString(), false, null)); @@ -580,9 +683,13 @@ public LongFilter convertToLongFilter(DocValueFormat format) { } public LongFilter convertToDoubleFilter() { + if(isPartitionBased()){ + return new PartitionedLongFilter(); + } + int numValids = includeValues == null ? 0 : includeValues.size(); int numInvalids = excludeValues == null ? 
0 : excludeValues.size(); - LongFilter result = new LongFilter(numValids, numInvalids); + SetBackedLongFilter result = new SetBackedLongFilter(numValids, numInvalids); if (includeValues != null) { for (BytesRef val : includeValues) { double dval = Double.parseDouble(val.utf8ToString()); @@ -602,18 +709,21 @@ public LongFilter convertToDoubleFilter() { public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { if (include != null) { builder.field(INCLUDE_FIELD.getPreferredName(), include.getOriginalString()); - } - if (includeValues != null) { + } else if (includeValues != null) { builder.startArray(INCLUDE_FIELD.getPreferredName()); for (BytesRef value : includeValues) { builder.value(value.utf8ToString()); } builder.endArray(); + } else if (isPartitionBased()) { + builder.startObject(INCLUDE_FIELD.getPreferredName()); + builder.field(PARTITION_FIELD.getPreferredName(), incZeroBasedPartition); + builder.field(NUM_PARTITIONS_FIELD.getPreferredName(), incNumPartitions); + builder.endObject(); } if (exclude != null) { builder.field(EXCLUDE_FIELD.getPreferredName(), exclude.getOriginalString()); - } - if (excludeValues != null) { + } else if (excludeValues != null) { builder.startArray(EXCLUDE_FIELD.getPreferredName()); for (BytesRef value : excludeValues) { builder.value(value.utf8ToString()); @@ -625,8 +735,10 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws @Override public int hashCode() { - return Objects.hash(include == null ? null : include.getOriginalString(), exclude == null ? null : exclude.getOriginalString(), - includeValues, excludeValues); + return Objects.hash( + include == null ? null : include.getOriginalString(), + exclude == null ? null : exclude.getOriginalString(), + includeValues, excludeValues, incZeroBasedPartition, incNumPartitions); } @Override @@ -640,7 +752,9 @@ public boolean equals(Object obj) { return Objects.equals(include == null ? null : include.getOriginalString(), other.include == null ? null : other.include.getOriginalString()) && Objects.equals(exclude == null ? null : exclude.getOriginalString(), other.exclude == null ? null : other.exclude.getOriginalString()) && Objects.equals(includeValues, other.includeValues) - && Objects.equals(excludeValues, other.excludeValues); + && Objects.equals(excludeValues, other.excludeValues) + && Objects.equals(incZeroBasedPartition, other.incZeroBasedPartition) + && Objects.equals(incNumPartitions, other.incNumPartitions); } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalMetricsAggregation.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalMetricsAggregation.java deleted file mode 100644 index ded69d9f75bfb..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalMetricsAggregation.java +++ /dev/null @@ -1,42 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.search.aggregations.metrics; - -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.search.aggregations.InternalAggregation; -import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; - -import java.io.IOException; -import java.util.List; -import java.util.Map; - -public abstract class InternalMetricsAggregation extends InternalAggregation { - protected InternalMetricsAggregation(String name, List pipelineAggregators, Map metaData) { - super(name, pipelineAggregators, metaData); - } - - /** - * Read from a stream. - */ - protected InternalMetricsAggregation(StreamInput in) throws IOException { - super(in); - } - -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalNumericMetricsAggregation.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalNumericMetricsAggregation.java index 901c52a232d22..b5d1eeddae9d8 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalNumericMetricsAggregation.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalNumericMetricsAggregation.java @@ -20,16 +20,14 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.search.DocValueFormat; +import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import java.io.IOException; import java.util.List; import java.util.Map; -/** - * - */ -public abstract class InternalNumericMetricsAggregation extends InternalMetricsAggregation { +public abstract class InternalNumericMetricsAggregation extends InternalAggregation { private static final DocValueFormat DEFAULT_FORMAT = DocValueFormat.RAW; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/MetricsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/MetricsAggregator.java index 30330e617196e..0a8553e00547e 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/MetricsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/MetricsAggregator.java @@ -23,7 +23,7 @@ import org.elasticsearch.search.aggregations.AggregatorBase; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -31,7 +31,7 @@ public abstract class MetricsAggregator extends AggregatorBase { - protected MetricsAggregator(String name, AggregationContext context, Aggregator parent, List pipelineAggregators, + protected MetricsAggregator(String name, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, AggregatorFactories.EMPTY, context, parent, pipelineAggregators, metaData); } diff --git 
a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/NumericMetricsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/NumericMetricsAggregator.java index 3ffd7f797503f..c81169ae13ace 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/NumericMetricsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/NumericMetricsAggregator.java @@ -20,7 +20,7 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -31,14 +31,14 @@ */ public abstract class NumericMetricsAggregator extends MetricsAggregator { - private NumericMetricsAggregator(String name, AggregationContext context, Aggregator parent, + private NumericMetricsAggregator(String name, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); } public abstract static class SingleValue extends NumericMetricsAggregator { - protected SingleValue(String name, AggregationContext context, Aggregator parent, List pipelineAggregators, + protected SingleValue(String name, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); } @@ -48,7 +48,7 @@ protected SingleValue(String name, AggregationContext context, Aggregator parent public abstract static class MultiValue extends NumericMetricsAggregator { - protected MultiValue(String name, AggregationContext context, Aggregator parent, List pipelineAggregators, + protected MultiValue(String name, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/ParsedSingleValueNumericMetricsAggregation.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/ParsedSingleValueNumericMetricsAggregation.java new file mode 100644 index 0000000000000..3105a785e1903 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/ParsedSingleValueNumericMetricsAggregation.java @@ -0,0 +1,60 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.search.aggregations.metrics; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser.ValueType; +import org.elasticsearch.search.aggregations.ParsedAggregation; + +public abstract class ParsedSingleValueNumericMetricsAggregation extends ParsedAggregation + implements NumericMetricsAggregation.SingleValue { + + protected double value; + protected String valueAsString; + + @Override + public String getValueAsString() { + if (valueAsString != null) { + return valueAsString; + } else { + return Double.toString(value); + } + } + + @Override + public double value() { + return value; + } + + protected void setValue(double value) { + this.value = value; + } + + protected void setValueAsString(String valueAsString) { + this.valueAsString = valueAsString; + } + + protected static void declareSingleValueFields(ObjectParser objectParser, + double defaultNullValue) { + declareAggregationFields(objectParser); + objectParser.declareField(ParsedSingleValueNumericMetricsAggregation::setValue, + (parser, context) -> parseDouble(parser, defaultNullValue), CommonFields.VALUE, ValueType.DOUBLE_OR_NULL); + objectParser.declareString(ParsedSingleValueNumericMetricsAggregation::setValueAsString, CommonFields.VALUE_AS_STRING); + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregationBuilder.java index 37d6788782202..0d9bd6fc1a9b7 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregationBuilder.java @@ -21,33 +21,45 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; public class AvgAggregationBuilder extends ValuesSourceAggregationBuilder.LeafOnly { public static final String NAME = "avg"; - private static final Type TYPE = new Type(NAME); + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(AvgAggregationBuilder.NAME); + ValuesSourceParserHelper.declareNumericFields(PARSER, true, true, false); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new 
AvgAggregationBuilder(aggregationName), context); + } public AvgAggregationBuilder(String name) { - super(name, TYPE, ValuesSourceType.NUMERIC, ValueType.NUMERIC); + super(name, ValuesSourceType.NUMERIC, ValueType.NUMERIC); } /** * Read from a stream. */ public AvgAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.NUMERIC, ValueType.NUMERIC); + super(in, ValuesSourceType.NUMERIC, ValueType.NUMERIC); } @Override @@ -56,9 +68,9 @@ protected void innerWriteTo(StreamOutput out) { } @Override - protected AvgAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, + protected AvgAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new AvgAggregatorFactory(name, type, config, context, parent, subFactoriesBuilder, metaData); + return new AvgAggregatorFactory(name, config, context, parent, subFactoriesBuilder, metaData); } @Override @@ -77,7 +89,7 @@ protected boolean innerEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregator.java index eb0fc42f9c20b..c948334bc5bce 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregator.java @@ -31,8 +31,8 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.metrics.NumericMetricsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -49,7 +49,7 @@ public class AvgAggregator extends NumericMetricsAggregator.SingleValue { DoubleArray sums; DocValueFormat format; - public AvgAggregator(String name, ValuesSource.Numeric valuesSource, DocValueFormat formatter, AggregationContext context, + public AvgAggregator(String name, ValuesSource.Numeric valuesSource, DocValueFormat formatter, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); this.valuesSource = valuesSource; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregatorFactory.java index 5d425f2732cbf..f1fc12ef4e505 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregatorFactory.java @@ -22,13 +22,12 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import 
org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -36,9 +35,9 @@ public class AvgAggregatorFactory extends ValuesSourceAggregatorFactory { - public AvgAggregatorFactory(String name, Type type, ValuesSourceConfig config, AggregationContext context, + public AvgAggregatorFactory(String name, ValuesSourceConfig config, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgParser.java deleted file mode 100644 index bc6f762295cd8..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgParser.java +++ /dev/null @@ -1,51 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ -package org.elasticsearch.search.aggregations.metrics.avg; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.NumericValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - -/** - * - */ -public class AvgParser extends NumericValuesSourceParser { - - public AvgParser() { - super(true, true, false); - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, XContentParser.Token token, - XContentParseContext context, Map otherOptions) throws IOException { - return false; - } - - @Override - protected AvgAggregationBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - return new AvgAggregationBuilder(aggregationName); - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/InternalAvg.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/InternalAvg.java index 5f0d54db00329..f33bb931f9eda 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/InternalAvg.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/InternalAvg.java @@ -69,6 +69,14 @@ public double getValue() { return sum / count; } + double getSum() { + return sum; + } + + long getCount() { + return count; + } + @Override public String getWriteableName() { return AvgAggregationBuilder.NAME; @@ -87,11 +95,10 @@ public InternalAvg doReduce(List aggregations, ReduceContex @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - builder.field(CommonFields.VALUE, count != 0 ? getValue() : null); + builder.field(CommonFields.VALUE.getPreferredName(), count != 0 ? getValue() : null); if (count != 0 && format != DocValueFormat.RAW) { - builder.field(CommonFields.VALUE_AS_STRING, format.format(getValue())); + builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), format.format(getValue())); } return builder; } - } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/ParsedAvg.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/ParsedAvg.java new file mode 100644 index 0000000000000..16d91bd08f0d3 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/ParsedAvg.java @@ -0,0 +1,64 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.metrics.avg; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.metrics.ParsedSingleValueNumericMetricsAggregation; + +import java.io.IOException; + +public class ParsedAvg extends ParsedSingleValueNumericMetricsAggregation implements Avg { + + @Override + public double getValue() { + return value(); + } + + @Override + public String getType() { + return AvgAggregationBuilder.NAME; + } + + @Override + protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + // InternalAvg renders value only if the avg normalizer (count) is not 0. + // We parse back `null` as Double.POSITIVE_INFINITY so we check for that value here to get the same xContent output + boolean hasValue = value != Double.POSITIVE_INFINITY; + builder.field(CommonFields.VALUE.getPreferredName(), hasValue ? value : null); + if (hasValue && valueAsString != null) { + builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), valueAsString); + } + return builder; + } + + private static final ObjectParser PARSER = new ObjectParser<>(ParsedAvg.class.getSimpleName(), true, ParsedAvg::new); + + static { + declareSingleValueFields(PARSER, Double.POSITIVE_INFINITY); + } + + public static ParsedAvg fromXContent(XContentParser parser, final String name) { + ParsedAvg avg = PARSER.apply(parser, null); + avg.setName(name); + return avg; + } +} \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregationBuilder.java index d545cfce23ebd..1f76d8530f77f 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregationBuilder.java @@ -22,16 +22,19 @@ import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Objects; @@ -40,21 +43,33 @@ public final class CardinalityAggregationBuilder extends ValuesSourceAggregationBuilder.LeafOnly { public static final String NAME = "cardinality"; - 
private static final Type TYPE = new Type(NAME); + private static final ParseField REHASH = new ParseField("rehash").withAllDeprecated("no replacement - values will always be rehashed"); public static final ParseField PRECISION_THRESHOLD_FIELD = new ParseField("precision_threshold"); + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(CardinalityAggregationBuilder.NAME); + ValuesSourceParserHelper.declareAnyFields(PARSER, true, false); + PARSER.declareLong(CardinalityAggregationBuilder::precisionThreshold, CardinalityAggregationBuilder.PRECISION_THRESHOLD_FIELD); + PARSER.declareLong((b, v) -> {/*ignore*/}, REHASH); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new CardinalityAggregationBuilder(aggregationName, null), context); + } + private Long precisionThreshold = null; public CardinalityAggregationBuilder(String name, ValueType targetValueType) { - super(name, TYPE, ValuesSourceType.ANY, targetValueType); + super(name, ValuesSourceType.ANY, targetValueType); } /** * Read from a stream. */ public CardinalityAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.ANY); + super(in, ValuesSourceType.ANY); if (in.readBoolean()) { precisionThreshold = in.readLong(); } @@ -105,9 +120,9 @@ public void rehash(boolean rehash) { } @Override - protected CardinalityAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, + protected CardinalityAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new CardinalityAggregatorFactory(name, type, config, precisionThreshold, context, parent, subFactoriesBuilder, metaData); + return new CardinalityAggregatorFactory(name, config, precisionThreshold, context, parent, subFactoriesBuilder, metaData); } @Override @@ -130,7 +145,7 @@ protected boolean innerEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java index 99922cc6aec1d..7d5db460ae6b0 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java @@ -20,6 +20,7 @@ package org.elasticsearch.search.aggregations.metrics.cardinality; import com.carrotsearch.hppc.BitMixer; + import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.RandomAccessOrds; import org.apache.lucene.index.SortedNumericDocValues; @@ -40,8 +41,8 @@ import org.elasticsearch.search.aggregations.LeafBucketCollector; import org.elasticsearch.search.aggregations.metrics.NumericMetricsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -62,7 +63,7 @@ public class CardinalityAggregator extends NumericMetricsAggregator.SingleValue private Collector collector; public 
CardinalityAggregator(String name, ValuesSource valuesSource, int precision, - AggregationContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { + SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); this.valuesSource = valuesSource; this.precision = precision; @@ -326,7 +327,7 @@ private static class Long extends MurmurHash3Values { private final SortedNumericDocValues values; - public Long(SortedNumericDocValues values) { + Long(SortedNumericDocValues values) { this.values = values; } @@ -350,7 +351,7 @@ private static class Double extends MurmurHash3Values { private final SortedNumericDoubleValues values; - public Double(SortedNumericDoubleValues values) { + Double(SortedNumericDoubleValues values) { this.values = values; } @@ -376,7 +377,7 @@ private static class Bytes extends MurmurHash3Values { private final SortedBinaryDocValues values; - public Bytes(SortedBinaryDocValues values) { + Bytes(SortedBinaryDocValues values) { this.values = values; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregatorFactory.java index dad9ba46e6d2c..0d2d32f04697c 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregatorFactory.java @@ -22,12 +22,11 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -37,10 +36,10 @@ public class CardinalityAggregatorFactory extends ValuesSourceAggregatorFactory< private final Long precisionThreshold; - public CardinalityAggregatorFactory(String name, Type type, ValuesSourceConfig config, Long precisionThreshold, - AggregationContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, + public CardinalityAggregatorFactory(String name, ValuesSourceConfig config, Long precisionThreshold, + SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); this.precisionThreshold = precisionThreshold; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityParser.java deleted file mode 100644 index e40e0767994dc..0000000000000 --- 
a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityParser.java +++ /dev/null @@ -1,66 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.search.aggregations.metrics.cardinality; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.XContentParser.Token; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.AnyValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - - -public class CardinalityParser extends AnyValuesSourceParser { - - private static final ParseField REHASH = new ParseField("rehash").withAllDeprecated("no replacement - values will always be rehashed"); - - public CardinalityParser() { - super(true, false); - } - - @Override - protected CardinalityAggregationBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - CardinalityAggregationBuilder factory = new CardinalityAggregationBuilder(aggregationName, targetValueType); - Long precisionThreshold = (Long) otherOptions.get(CardinalityAggregationBuilder.PRECISION_THRESHOLD_FIELD); - if (precisionThreshold != null) { - factory.precisionThreshold(precisionThreshold); - } - return factory; - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, Token token, - XContentParseContext context, Map otherOptions) throws IOException { - if (token.isValue()) { - if (context.matchField(currentFieldName, CardinalityAggregationBuilder.PRECISION_THRESHOLD_FIELD)) { - otherOptions.put(CardinalityAggregationBuilder.PRECISION_THRESHOLD_FIELD, context.getParser().longValue()); - return true; - } else if (context.matchField(currentFieldName, REHASH)) { - // ignore - return true; - } - } - return false; - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/HyperLogLogPlusPlus.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/HyperLogLogPlusPlus.java index 568ecdbec5901..42b4561e07b3c 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/HyperLogLogPlusPlus.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/HyperLogLogPlusPlus.java @@ -433,7 +433,7 @@ private class Hashset { private final BytesRef readSpare; private final ByteBuffer writeSpare; - public Hashset(long initialBucketCount) { + Hashset(long initialBucketCount) { capacity = m / 4; // because ints take 4 bytes threshold = (int) 
(capacity * MAX_LOAD_FACTOR); mask = capacity - 1; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/InternalCardinality.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/InternalCardinality.java index 02953abc2daa5..de72582b16fa3 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/InternalCardinality.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/InternalCardinality.java @@ -109,8 +109,9 @@ public void merge(InternalCardinality other) { @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { final long cardinality = getValue(); - builder.field(CommonFields.VALUE, cardinality); + builder.field(CommonFields.VALUE.getPreferredName(), cardinality); return builder; } } + diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/ParsedCardinality.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/ParsedCardinality.java new file mode 100644 index 0000000000000..5a615f61a4ae6 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/ParsedCardinality.java @@ -0,0 +1,73 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.metrics.cardinality; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.ParsedAggregation; + +import java.io.IOException; + +public class ParsedCardinality extends ParsedAggregation implements Cardinality { + + private long cardinalityValue; + + @Override + public String getValueAsString() { + return Double.toString((double) cardinalityValue); + } + + @Override + public double value() { + return getValue(); + } + + @Override + public long getValue() { + return cardinalityValue; + } + + @Override + public String getType() { + return CardinalityAggregationBuilder.NAME; + } + + private static final ObjectParser PARSER = new ObjectParser<>( + ParsedCardinality.class.getSimpleName(), true, ParsedCardinality::new); + + static { + declareAggregationFields(PARSER); + PARSER.declareLong((agg, value) -> agg.cardinalityValue = value, CommonFields.VALUE); + } + + public static ParsedCardinality fromXContent(XContentParser parser, final String name) { + ParsedCardinality cardinality = PARSER.apply(parser, null); + cardinality.setName(name); + return cardinality; + } + + @Override + protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) + throws IOException { + builder.field(CommonFields.VALUE.getPreferredName(), cardinalityValue); + return builder; + } +} \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregationBuilder.java index c3565d794a835..be3ad4db802e2 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregationBuilder.java @@ -21,35 +21,48 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Objects; public class GeoBoundsAggregationBuilder extends ValuesSourceAggregationBuilder { public static final String NAME = "geo_bounds"; - private static final Type TYPE = new Type(NAME); + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(GeoBoundsAggregationBuilder.NAME); + 
ValuesSourceParserHelper.declareGeoFields(PARSER, false, false); + PARSER.declareBoolean(GeoBoundsAggregationBuilder::wrapLongitude, GeoBoundsAggregator.WRAP_LONGITUDE_FIELD); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new GeoBoundsAggregationBuilder(aggregationName), context); + } private boolean wrapLongitude = true; public GeoBoundsAggregationBuilder(String name) { - super(name, TYPE, ValuesSourceType.GEOPOINT, ValueType.GEOPOINT); + super(name, ValuesSourceType.GEOPOINT, ValueType.GEOPOINT); } /** * Read from a stream. */ public GeoBoundsAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.GEOPOINT, ValueType.GEOPOINT); + super(in, ValuesSourceType.GEOPOINT, ValueType.GEOPOINT); wrapLongitude = in.readBoolean(); } @@ -74,9 +87,9 @@ public boolean wrapLongitude() { } @Override - protected GeoBoundsAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, + protected GeoBoundsAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new GeoBoundsAggregatorFactory(name, type, config, wrapLongitude, context, parent, subFactoriesBuilder, metaData); + return new GeoBoundsAggregatorFactory(name, config, wrapLongitude, context, parent, subFactoriesBuilder, metaData); } @Override @@ -97,7 +110,7 @@ protected boolean innerEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregator.java index 57e30fd58bd78..2083ea570d30a 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregator.java @@ -32,8 +32,8 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.metrics.MetricsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -52,7 +52,7 @@ public final class GeoBoundsAggregator extends MetricsAggregator { DoubleArray negLefts; DoubleArray negRights; - protected GeoBoundsAggregator(String name, AggregationContext aggregationContext, Aggregator parent, + protected GeoBoundsAggregator(String name, SearchContext aggregationContext, Aggregator parent, ValuesSource.GeoPoint valuesSource, boolean wrapLongitude, List pipelineAggregators, Map metaData) throws IOException { super(name, aggregationContext, parent, pipelineAggregators, metaData); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregatorFactory.java index e7420a9f1f779..e67ad49115ac6 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregatorFactory.java +++ 
b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregatorFactory.java @@ -22,12 +22,11 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -37,10 +36,10 @@ public class GeoBoundsAggregatorFactory extends ValuesSourceAggregatorFactory config, boolean wrapLongitude, - AggregationContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, + public GeoBoundsAggregatorFactory(String name, ValuesSourceConfig config, boolean wrapLongitude, + SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); this.wrapLongitude = wrapLongitude; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsParser.java deleted file mode 100644 index c42de23949b11..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsParser.java +++ /dev/null @@ -1,61 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.search.aggregations.metrics.geobounds; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentParser.Token; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.GeoPointValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - -public class GeoBoundsParser extends GeoPointValuesSourceParser { - - public GeoBoundsParser() { - super(false, false); - } - - @Override - protected GeoBoundsAggregationBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - GeoBoundsAggregationBuilder factory = new GeoBoundsAggregationBuilder(aggregationName); - Boolean wrapLongitude = (Boolean) otherOptions.get(GeoBoundsAggregator.WRAP_LONGITUDE_FIELD); - if (wrapLongitude != null) { - factory.wrapLongitude(wrapLongitude); - } - return factory; - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, Token token, - XContentParseContext context, Map otherOptions) throws IOException { - if (token == XContentParser.Token.VALUE_BOOLEAN) { - if (context.matchField(currentFieldName, GeoBoundsAggregator.WRAP_LONGITUDE_FIELD)) { - otherOptions.put(GeoBoundsAggregator.WRAP_LONGITUDE_FIELD, context.getParser().booleanValue()); - return true; - } - } - return false; - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/InternalGeoBounds.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/InternalGeoBounds.java index b999936693d06..92cf381d058dc 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/InternalGeoBounds.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/InternalGeoBounds.java @@ -19,19 +19,26 @@ package org.elasticsearch.search.aggregations.metrics.geobounds; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.search.aggregations.InternalAggregation; -import org.elasticsearch.search.aggregations.metrics.InternalMetricsAggregation; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import java.io.IOException; import java.util.List; import java.util.Map; -public class InternalGeoBounds extends InternalMetricsAggregation implements GeoBounds { +public class InternalGeoBounds extends InternalAggregation implements GeoBounds { + + static final ParseField BOUNDS_FIELD = new ParseField("bounds"); + static final ParseField TOP_LEFT_FIELD = new ParseField("top_left"); + static final ParseField BOTTOM_RIGHT_FIELD = new ParseField("bottom_right"); + static final ParseField LAT_FIELD = new ParseField("lat"); + static final ParseField LON_FIELD = new ParseField("lon"); + private final double top; private final double bottom; private final double posLeft; @@ -82,7 +89,7 @@ protected void doWriteTo(StreamOutput out) throws IOException { public String getWriteableName() { return GeoBoundsAggregationBuilder.NAME; } - + @Override public 
InternalAggregation doReduce(List aggregations, ReduceContext reduceContext) { double top = Double.NEGATIVE_INFINITY; @@ -170,14 +177,14 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th GeoPoint topLeft = topLeft(); GeoPoint bottomRight = bottomRight(); if (topLeft != null) { - builder.startObject("bounds"); - builder.startObject("top_left"); - builder.field("lat", topLeft.lat()); - builder.field("lon", topLeft.lon()); + builder.startObject(BOUNDS_FIELD.getPreferredName()); + builder.startObject(TOP_LEFT_FIELD.getPreferredName()); + builder.field(LAT_FIELD.getPreferredName(), topLeft.lat()); + builder.field(LON_FIELD.getPreferredName(), topLeft.lon()); builder.endObject(); - builder.startObject("bottom_right"); - builder.field("lat", bottomRight.lat()); - builder.field("lon", bottomRight.lon()); + builder.startObject(BOTTOM_RIGHT_FIELD.getPreferredName()); + builder.field(LAT_FIELD.getPreferredName(), bottomRight.lat()); + builder.field(LON_FIELD.getPreferredName(), bottomRight.lon()); builder.endObject(); builder.endObject(); } @@ -187,21 +194,21 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th private static class BoundingBox { private final GeoPoint topLeft; private final GeoPoint bottomRight; - - public BoundingBox(GeoPoint topLeft, GeoPoint bottomRight) { + + BoundingBox(GeoPoint topLeft, GeoPoint bottomRight) { this.topLeft = topLeft; this.bottomRight = bottomRight; } - + public GeoPoint topLeft() { return topLeft; } - + public GeoPoint bottomRight() { return bottomRight; } } - + private BoundingBox resolveBoundingBox() { if (Double.isInfinite(top)) { return null; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/ParsedGeoBounds.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/ParsedGeoBounds.java new file mode 100644 index 0000000000000..04d2b2448d2e2 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/ParsedGeoBounds.java @@ -0,0 +1,105 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.metrics.geobounds; + +import org.elasticsearch.common.collect.Tuple; +import org.elasticsearch.common.geo.GeoPoint; +import org.elasticsearch.common.xcontent.ConstructingObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.ParsedAggregation; + +import java.io.IOException; + +import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg; +import static org.elasticsearch.search.aggregations.metrics.geobounds.InternalGeoBounds.BOTTOM_RIGHT_FIELD; +import static org.elasticsearch.search.aggregations.metrics.geobounds.InternalGeoBounds.BOUNDS_FIELD; +import static org.elasticsearch.search.aggregations.metrics.geobounds.InternalGeoBounds.LAT_FIELD; +import static org.elasticsearch.search.aggregations.metrics.geobounds.InternalGeoBounds.LON_FIELD; +import static org.elasticsearch.search.aggregations.metrics.geobounds.InternalGeoBounds.TOP_LEFT_FIELD; + +public class ParsedGeoBounds extends ParsedAggregation implements GeoBounds { + private GeoPoint topLeft; + private GeoPoint bottomRight; + + @Override + public String getType() { + return GeoBoundsAggregationBuilder.NAME; + } + + @Override + public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + if (topLeft != null) { + builder.startObject("bounds"); + builder.startObject("top_left"); + builder.field("lat", topLeft.getLat()); + builder.field("lon", topLeft.getLon()); + builder.endObject(); + builder.startObject("bottom_right"); + builder.field("lat", bottomRight.getLat()); + builder.field("lon", bottomRight.getLon()); + builder.endObject(); + builder.endObject(); + } + return builder; + } + + @Override + public GeoPoint topLeft() { + return topLeft; + } + + @Override + public GeoPoint bottomRight() { + return bottomRight; + } + + private static final ObjectParser PARSER = new ObjectParser<>(ParsedGeoBounds.class.getSimpleName(), true, + ParsedGeoBounds::new); + + private static final ConstructingObjectParser, Void> BOUNDS_PARSER = + new ConstructingObjectParser<>(ParsedGeoBounds.class.getSimpleName() + "_BOUNDS", true, + args -> new Tuple<>((GeoPoint) args[0], (GeoPoint) args[1])); + + private static final ObjectParser GEO_POINT_PARSER = new ObjectParser<>( + ParsedGeoBounds.class.getSimpleName() + "_POINT", true, GeoPoint::new); + + static { + declareAggregationFields(PARSER); + PARSER.declareObject((agg, bbox) -> { + agg.topLeft = bbox.v1(); + agg.bottomRight = bbox.v2(); + }, BOUNDS_PARSER, BOUNDS_FIELD); + + BOUNDS_PARSER.declareObject(constructorArg(), GEO_POINT_PARSER, TOP_LEFT_FIELD); + BOUNDS_PARSER.declareObject(constructorArg(), GEO_POINT_PARSER, BOTTOM_RIGHT_FIELD); + + GEO_POINT_PARSER.declareDouble(GeoPoint::resetLat, LAT_FIELD); + GEO_POINT_PARSER.declareDouble(GeoPoint::resetLon, LON_FIELD); + } + + public static ParsedGeoBounds fromXContent(XContentParser parser, final String name) { + ParsedGeoBounds geoBounds = PARSER.apply(parser, null); + geoBounds.setName(name); + return geoBounds; + } + +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/GeoCentroidAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/GeoCentroidAggregationBuilder.java index 3c5208f873856..8e173e65923ee 100644 --- 
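A similar sketch for ParsedGeoBounds, again assuming an appropriately positioned parser; "viewport" is a hypothetical aggregation name, and the nested "bounds"/"top_left"/"bottom_right" objects are mapped by the parsers declared above:

```java
// Sketch only: topLeft()/bottomRight() are null when the response carried no "bounds" object.
static String describeBounds(XContentParser parser) throws IOException {
    ParsedGeoBounds bounds = ParsedGeoBounds.fromXContent(parser, "viewport");
    GeoPoint topLeft = bounds.topLeft();
    GeoPoint bottomRight = bounds.bottomRight();
    if (topLeft == null || bottomRight == null) {
        return "no bounds";
    }
    return "top_left=" + topLeft.getLat() + "," + topLeft.getLon()
            + " bottom_right=" + bottomRight.getLat() + "," + bottomRight.getLon();
}
```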
a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/GeoCentroidAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/GeoCentroidAggregationBuilder.java @@ -21,33 +21,45 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; public class GeoCentroidAggregationBuilder extends ValuesSourceAggregationBuilder.LeafOnly { public static final String NAME = "geo_centroid"; - public static final Type TYPE = new Type(NAME); + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(GeoCentroidAggregationBuilder.NAME); + ValuesSourceParserHelper.declareGeoFields(PARSER, true, false); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new GeoCentroidAggregationBuilder(aggregationName), context); + } public GeoCentroidAggregationBuilder(String name) { - super(name, TYPE, ValuesSourceType.GEOPOINT, ValueType.GEOPOINT); + super(name, ValuesSourceType.GEOPOINT, ValueType.GEOPOINT); } /** * Read from a stream. 
*/ public GeoCentroidAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.GEOPOINT, ValueType.GEOPOINT); + super(in, ValuesSourceType.GEOPOINT, ValueType.GEOPOINT); } @Override @@ -56,9 +68,9 @@ protected void innerWriteTo(StreamOutput out) { } @Override - protected GeoCentroidAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, + protected GeoCentroidAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new GeoCentroidAggregatorFactory(name, type, config, context, parent, subFactoriesBuilder, metaData); + return new GeoCentroidAggregatorFactory(name, config, context, parent, subFactoriesBuilder, metaData); } @Override @@ -77,7 +89,7 @@ protected boolean innerEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/GeoCentroidAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/GeoCentroidAggregator.java index ec838e7dd41ad..9d537030fa948 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/GeoCentroidAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/GeoCentroidAggregator.java @@ -32,8 +32,8 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.metrics.MetricsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -47,10 +47,10 @@ public final class GeoCentroidAggregator extends MetricsAggregator { LongArray centroids; LongArray counts; - protected GeoCentroidAggregator(String name, AggregationContext aggregationContext, Aggregator parent, + protected GeoCentroidAggregator(String name, SearchContext context, Aggregator parent, ValuesSource.GeoPoint valuesSource, List pipelineAggregators, Map metaData) throws IOException { - super(name, aggregationContext, parent, pipelineAggregators, metaData); + super(name, context, parent, pipelineAggregators, metaData); this.valuesSource = valuesSource; if (valuesSource != null) { final BigArrays bigArrays = context.bigArrays(); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/GeoCentroidAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/GeoCentroidAggregatorFactory.java index 6fa2d28856efd..c21999d3fb48e 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/GeoCentroidAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/GeoCentroidAggregatorFactory.java @@ -22,12 +22,11 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import 
org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -35,10 +34,10 @@ public class GeoCentroidAggregatorFactory extends ValuesSourceAggregatorFactory { - public GeoCentroidAggregatorFactory(String name, Type type, ValuesSourceConfig config, - AggregationContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, + public GeoCentroidAggregatorFactory(String name, ValuesSourceConfig config, + SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/GeoCentroidParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/GeoCentroidParser.java deleted file mode 100644 index 8e88a11c6b60a..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/GeoCentroidParser.java +++ /dev/null @@ -1,52 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.search.aggregations.metrics.geocentroid; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.XContentParser.Token; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.GeoPointValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - -/** - * Parser class for {@link org.elasticsearch.search.aggregations.metrics.geocentroid.GeoCentroidAggregator} - */ -public class GeoCentroidParser extends GeoPointValuesSourceParser { - - public GeoCentroidParser() { - super(true, false); - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, Token token, - XContentParseContext context, Map otherOptions) throws IOException { - return false; - } - - @Override - protected GeoCentroidAggregationBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - return new GeoCentroidAggregationBuilder(aggregationName); - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/InternalGeoCentroid.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/InternalGeoCentroid.java index 06d9d369029af..9beabf12887f0 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/InternalGeoCentroid.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/InternalGeoCentroid.java @@ -20,12 +20,12 @@ package org.elasticsearch.search.aggregations.metrics.geocentroid; import org.apache.lucene.spatial.geopoint.document.GeoPointField; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.search.aggregations.InternalAggregation; -import org.elasticsearch.search.aggregations.metrics.InternalMetricsAggregation; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import java.io.IOException; @@ -35,9 +35,9 @@ /** * Serialization and merge logic for {@link GeoCentroidAggregator}. 
*/ -public class InternalGeoCentroid extends InternalMetricsAggregation implements GeoCentroid { - protected final GeoPoint centroid; - protected final long count; +public class InternalGeoCentroid extends InternalAggregation implements GeoCentroid { + private final GeoPoint centroid; + private final long count; public InternalGeoCentroid(String name, GeoPoint centroid, long count, List pipelineAggregators, Map metaData) { @@ -124,6 +124,8 @@ public Object getProperty(List path) { return centroid.lat(); case "lon": return centroid.lon(); + case "count": + return count; default: throw new IllegalArgumentException("Found unknown path element [" + coordinate + "] in [" + getName() + "]"); } @@ -133,14 +135,18 @@ public Object getProperty(List path) { } static class Fields { - public static final String CENTROID = "location"; + static final ParseField CENTROID = new ParseField("location"); + static final ParseField COUNT = new ParseField("count"); + static final ParseField CENTROID_LAT = new ParseField("lat"); + static final ParseField CENTROID_LON = new ParseField("lon"); } @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { if (centroid != null) { - builder.startObject(Fields.CENTROID).field("lat", centroid.lat()).field("lon", centroid.lon()).endObject(); + builder.startObject(Fields.CENTROID.getPreferredName()).field("lat", centroid.lat()).field("lon", centroid.lon()).endObject(); } + builder.field(Fields.COUNT.getPreferredName(), count); return builder; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/ParsedGeoCentroid.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/ParsedGeoCentroid.java new file mode 100644 index 0000000000000..7ce1f5d86feb3 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/ParsedGeoCentroid.java @@ -0,0 +1,87 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.metrics.geocentroid; + +import org.elasticsearch.common.geo.GeoPoint; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.ParsedAggregation; +import org.elasticsearch.search.aggregations.metrics.geocentroid.InternalGeoCentroid.Fields; + +import java.io.IOException; + +/** + * Serialization and merge logic for {@link GeoCentroidAggregator}. 
+ */ +public class ParsedGeoCentroid extends ParsedAggregation implements GeoCentroid { + private GeoPoint centroid; + private long count; + + @Override + public GeoPoint centroid() { + return centroid; + } + + @Override + public long count() { + return count; + } + + @Override + public String getType() { + return GeoCentroidAggregationBuilder.NAME; + } + + @Override + public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + if (centroid != null) { + builder.startObject(Fields.CENTROID.getPreferredName()); + { + builder.field(Fields.CENTROID_LAT.getPreferredName(), centroid.lat()); + builder.field(Fields.CENTROID_LON.getPreferredName(), centroid.lon()); + } + builder.endObject(); + } + builder.field(Fields.COUNT.getPreferredName(), count); + return builder; + } + + private static final ObjectParser PARSER = new ObjectParser<>(ParsedGeoCentroid.class.getSimpleName(), true, + ParsedGeoCentroid::new); + + private static final ObjectParser GEO_POINT_PARSER = new ObjectParser<>( + ParsedGeoCentroid.class.getSimpleName() + "_POINT", true, GeoPoint::new); + + static { + declareAggregationFields(PARSER); + PARSER.declareObject((agg, centroid) -> agg.centroid = centroid, GEO_POINT_PARSER, Fields.CENTROID); + PARSER.declareLong((agg, count) -> agg.count = count, Fields.COUNT); + + GEO_POINT_PARSER.declareDouble(GeoPoint::resetLat, Fields.CENTROID_LAT); + GEO_POINT_PARSER.declareDouble(GeoPoint::resetLon, Fields.CENTROID_LON); + } + + public static ParsedGeoCentroid fromXContent(XContentParser parser, final String name) { + ParsedGeoCentroid geoCentroid = PARSER.apply(parser, null); + geoCentroid.setName(name); + return geoCentroid; + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/InternalMax.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/InternalMax.java index c9d045a7235f1..0a51142ca056b 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/InternalMax.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/InternalMax.java @@ -82,9 +82,9 @@ public InternalMax doReduce(List aggregations, ReduceContex @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { boolean hasValue = !Double.isInfinite(max); - builder.field(CommonFields.VALUE, hasValue ? max : null); + builder.field(CommonFields.VALUE.getPreferredName(), hasValue ? 
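A minimal sketch for the ParsedGeoCentroid added above, assuming a parser positioned on the aggregation object; "centroid" is a hypothetical aggregation name, and "location" and "count" are the fields the parser declares:

```java
// Sketch only: "location" may be absent when no documents contributed to the centroid.
static String describeCentroid(XContentParser parser) throws IOException {
    ParsedGeoCentroid agg = ParsedGeoCentroid.fromXContent(parser, "centroid");
    GeoPoint location = agg.centroid(); // parsed from the "location" object, may be null
    long docCount = agg.count();        // parsed from the "count" field
    return location == null
            ? "count=" + docCount
            : "count=" + docCount + " at " + location.lat() + "," + location.lon();
}
```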
max : null); if (hasValue && format != DocValueFormat.RAW) { - builder.field(CommonFields.VALUE_AS_STRING, format.format(max)); + builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), format.format(max)); } return builder; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregationBuilder.java index 117b360992a6e..dafad9dacb59f 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregationBuilder.java @@ -21,33 +21,45 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; public class MaxAggregationBuilder extends ValuesSourceAggregationBuilder.LeafOnly { public static final String NAME = "max"; - public static final Type TYPE = new Type(NAME); + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(MaxAggregationBuilder.NAME); + ValuesSourceParserHelper.declareNumericFields(PARSER, true, true, false); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new MaxAggregationBuilder(aggregationName), context); + } public MaxAggregationBuilder(String name) { - super(name, TYPE, ValuesSourceType.NUMERIC, ValueType.NUMERIC); + super(name, ValuesSourceType.NUMERIC, ValueType.NUMERIC); } /** * Read from a stream. 
*/ public MaxAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.NUMERIC, ValueType.NUMERIC); + super(in, ValuesSourceType.NUMERIC, ValueType.NUMERIC); } @Override @@ -56,9 +68,9 @@ protected void innerWriteTo(StreamOutput out) { } @Override - protected MaxAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, + protected MaxAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new MaxAggregatorFactory(name, type, config, context, parent, subFactoriesBuilder, metaData); + return new MaxAggregatorFactory(name, config, context, parent, subFactoriesBuilder, metaData); } @Override @@ -77,7 +89,7 @@ protected boolean innerEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregator.java index 73504f9a8f462..7d4f837e00c88 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregator.java @@ -24,16 +24,16 @@ import org.elasticsearch.common.util.DoubleArray; import org.elasticsearch.index.fielddata.NumericDoubleValues; import org.elasticsearch.index.fielddata.SortedNumericDoubleValues; -import org.elasticsearch.search.MultiValueMode; import org.elasticsearch.search.DocValueFormat; +import org.elasticsearch.search.MultiValueMode; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.LeafBucketCollector; import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.metrics.NumericMetricsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -50,7 +50,7 @@ public class MaxAggregator extends NumericMetricsAggregator.SingleValue { DoubleArray maxes; public MaxAggregator(String name, ValuesSource.Numeric valuesSource, DocValueFormat formatter, - AggregationContext context, + SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregatorFactory.java index 2a9bdd9d8dc32..aedba76e0c7d1 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregatorFactory.java @@ -22,13 +22,12 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; 
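On the request side, the static ObjectParser set up in MaxAggregationBuilder above takes over from the standalone MaxParser that this change removes; a rough sketch of the entry point, assuming a QueryParseContext whose parser is positioned on the body of the aggregation (the aggregation name "max_price" and the field name "price" are hypothetical, and "field" is one of the shared values-source fields declared via ValuesSourceParserHelper.declareNumericFields):

```java
// Request body being parsed, e.g.:  { "field": "price" }
AggregationBuilder builder = MaxAggregationBuilder.parse("max_price", context);
```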
-import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -36,9 +35,9 @@ public class MaxAggregatorFactory extends ValuesSourceAggregatorFactory { - public MaxAggregatorFactory(String name, Type type, ValuesSourceConfig config, AggregationContext context, + public MaxAggregatorFactory(String name, ValuesSourceConfig config, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxParser.java deleted file mode 100644 index f0290e93fa9e1..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxParser.java +++ /dev/null @@ -1,51 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ -package org.elasticsearch.search.aggregations.metrics.max; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.NumericValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - -/** - * - */ -public class MaxParser extends NumericValuesSourceParser { - - public MaxParser() { - super(true, true, false); - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, XContentParser.Token token, - XContentParseContext context, Map otherOptions) throws IOException { - return false; - } - - @Override - protected MaxAggregationBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - return new MaxAggregationBuilder(aggregationName); - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/ParsedMax.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/ParsedMax.java new file mode 100644 index 0000000000000..f6a3190cd04d4 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/ParsedMax.java @@ -0,0 +1,62 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.metrics.max; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.metrics.ParsedSingleValueNumericMetricsAggregation; + +import java.io.IOException; + +public class ParsedMax extends ParsedSingleValueNumericMetricsAggregation implements Max { + + @Override + public double getValue() { + return value(); + } + + @Override + public String getType() { + return MaxAggregationBuilder.NAME; + } + + @Override + protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + boolean hasValue = !Double.isInfinite(value); + builder.field(CommonFields.VALUE.getPreferredName(), hasValue ? 
value : null); + if (hasValue && valueAsString != null) { + builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), valueAsString); + } + return builder; + } + + private static final ObjectParser PARSER = new ObjectParser<>(ParsedMax.class.getSimpleName(), true, ParsedMax::new); + + static { + declareSingleValueFields(PARSER, Double.NEGATIVE_INFINITY); + } + + public static ParsedMax fromXContent(XContentParser parser, final String name) { + ParsedMax max = PARSER.apply(parser, null); + max.setName(name); + return max; + } +} \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/InternalMin.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/InternalMin.java index 09f9c3915fdfd..9f9f6b3d31928 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/InternalMin.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/InternalMin.java @@ -82,9 +82,9 @@ public InternalMin doReduce(List aggregations, ReduceContex @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { boolean hasValue = !Double.isInfinite(min); - builder.field(CommonFields.VALUE, hasValue ? min : null); + builder.field(CommonFields.VALUE.getPreferredName(), hasValue ? min : null); if (hasValue && format != DocValueFormat.RAW) { - builder.field(CommonFields.VALUE_AS_STRING, format.format(min)); + builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), format.format(min)); } return builder; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregationBuilder.java index 248b83db0c936..0f85748416edc 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregationBuilder.java @@ -21,33 +21,46 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.aggregations.metrics.avg.AvgAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; public class MinAggregationBuilder extends ValuesSourceAggregationBuilder.LeafOnly { public static final String NAME = "min"; - private static final Type TYPE = new Type(NAME); + + private static 
final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(AvgAggregationBuilder.NAME); + ValuesSourceParserHelper.declareNumericFields(PARSER, true, true, false); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new MinAggregationBuilder(aggregationName), context); + } public MinAggregationBuilder(String name) { - super(name, TYPE, ValuesSourceType.NUMERIC, ValueType.NUMERIC); + super(name, ValuesSourceType.NUMERIC, ValueType.NUMERIC); } /** * Read from a stream. */ public MinAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.NUMERIC, ValueType.NUMERIC); + super(in, ValuesSourceType.NUMERIC, ValueType.NUMERIC); } @Override @@ -56,9 +69,9 @@ protected void innerWriteTo(StreamOutput out) { } @Override - protected MinAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, + protected MinAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new MinAggregatorFactory(name, type, config, context, parent, subFactoriesBuilder, metaData); + return new MinAggregatorFactory(name, config, context, parent, subFactoriesBuilder, metaData); } @Override @@ -77,7 +90,7 @@ protected boolean innerEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregator.java index a32379b7d10a7..6626b7755d8d8 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregator.java @@ -24,16 +24,16 @@ import org.elasticsearch.common.util.DoubleArray; import org.elasticsearch.index.fielddata.NumericDoubleValues; import org.elasticsearch.index.fielddata.SortedNumericDoubleValues; -import org.elasticsearch.search.MultiValueMode; import org.elasticsearch.search.DocValueFormat; +import org.elasticsearch.search.MultiValueMode; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.LeafBucketCollector; import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.metrics.NumericMetricsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -50,7 +50,7 @@ public class MinAggregator extends NumericMetricsAggregator.SingleValue { DoubleArray mins; public MinAggregator(String name, ValuesSource.Numeric valuesSource, DocValueFormat formatter, - AggregationContext context, Aggregator parent, List pipelineAggregators, + SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); this.valuesSource = valuesSource; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregatorFactory.java 
b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregatorFactory.java index 1bc01d1e9e35c..8f5538fb7a2bb 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregatorFactory.java @@ -22,13 +22,12 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -36,9 +35,9 @@ public class MinAggregatorFactory extends ValuesSourceAggregatorFactory { - public MinAggregatorFactory(String name, Type type, ValuesSourceConfig config, AggregationContext context, + public MinAggregatorFactory(String name, ValuesSourceConfig config, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinParser.java deleted file mode 100644 index 4381ca4189926..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinParser.java +++ /dev/null @@ -1,51 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ -package org.elasticsearch.search.aggregations.metrics.min; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.XContentParser.Token; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.NumericValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - -/** - * - */ -public class MinParser extends NumericValuesSourceParser { - - public MinParser() { - super(true, true, false); - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, Token token, - XContentParseContext context, Map otherOptions) throws IOException { - return false; - } - - @Override - protected MinAggregationBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - return new MinAggregationBuilder(aggregationName); - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/ParsedMin.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/ParsedMin.java new file mode 100644 index 0000000000000..9b214bb346201 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/ParsedMin.java @@ -0,0 +1,62 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.metrics.min; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.metrics.ParsedSingleValueNumericMetricsAggregation; + +import java.io.IOException; + +public class ParsedMin extends ParsedSingleValueNumericMetricsAggregation implements Min { + + @Override + public double getValue() { + return value(); + } + + @Override + public String getType() { + return MinAggregationBuilder.NAME; + } + + @Override + protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + boolean hasValue = !Double.isInfinite(value); + builder.field(CommonFields.VALUE.getPreferredName(), hasValue ? 
value : null); + if (hasValue && valueAsString != null) { + builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), valueAsString); + } + return builder; + } + + private static final ObjectParser PARSER = new ObjectParser<>(ParsedMin.class.getSimpleName(), true, ParsedMin::new); + + static { + declareSingleValueFields(PARSER, Double.POSITIVE_INFINITY); + } + + public static ParsedMin fromXContent(XContentParser parser, final String name) { + ParsedMin min = PARSER.apply(parser, null); + min.setName(name); + return min; + } +} \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/AbstractPercentilesParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/AbstractPercentilesParser.java deleted file mode 100644 index 053a415c9719e..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/AbstractPercentilesParser.java +++ /dev/null @@ -1,137 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
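ParsedMax and ParsedMin follow the same pattern as the other parsed metrics. Judging from the infinite defaults passed to declareSingleValueFields above (negative infinity for max, positive infinity for min), a response carrying "value": null appears to come back as the corresponding infinite sentinel, so callers likely need the same finiteness check the serialization code uses. A sketch under those assumptions, with "max_price" as a hypothetical aggregation name:

```java
// Sketch only: treat an infinite value as "no documents matched".
static double readMax(XContentParser parser) throws IOException {
    ParsedMax max = ParsedMax.fromXContent(parser, "max_price");
    double value = max.getValue();
    return Double.isInfinite(value) ? Double.NaN : value;
}
```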
- */ - -package org.elasticsearch.search.aggregations.metrics.percentiles; - -import com.carrotsearch.hppc.DoubleArrayList; -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentParser.Token; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.NumericValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; -import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - -public abstract class AbstractPercentilesParser extends NumericValuesSourceParser { - - public static final ParseField KEYED_FIELD = new ParseField("keyed"); - public static final ParseField METHOD_FIELD = new ParseField("method"); - public static final ParseField COMPRESSION_FIELD = new ParseField("compression"); - public static final ParseField NUMBER_SIGNIFICANT_DIGITS_FIELD = new ParseField("number_of_significant_value_digits"); - - public AbstractPercentilesParser(boolean formattable) { - super(true, formattable, false); - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, Token token, - XContentParseContext context, Map otherOptions) throws IOException { - XContentParser parser = context.getParser(); - if (token == XContentParser.Token.START_ARRAY) { - if (context.matchField(currentFieldName, keysField())) { - DoubleArrayList values = new DoubleArrayList(10); - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - double value = parser.doubleValue(); - values.add(value); - } - double[] keys = values.toArray(); - otherOptions.put(keysField(), keys); - return true; - } else { - return false; - } - } else if (token == XContentParser.Token.VALUE_BOOLEAN) { - if (context.matchField(currentFieldName, KEYED_FIELD)) { - boolean keyed = parser.booleanValue(); - otherOptions.put(KEYED_FIELD, keyed); - return true; - } else { - return false; - } - } else if (token == XContentParser.Token.START_OBJECT) { - PercentilesMethod method = PercentilesMethod.resolveFromName(currentFieldName); - if (method == null) { - return false; - } else { - otherOptions.put(METHOD_FIELD, method); - switch (method) { - case TDIGEST: - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (token == XContentParser.Token.VALUE_NUMBER) { - if (context.matchField(currentFieldName, COMPRESSION_FIELD)) { - double compression = parser.doubleValue(); - otherOptions.put(COMPRESSION_FIELD, compression); - } else { - return false; - } - } else { - return false; - } - } - break; - case HDR: - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (token == XContentParser.Token.VALUE_NUMBER) { - if (context.matchField(currentFieldName, NUMBER_SIGNIFICANT_DIGITS_FIELD)) { - int numberOfSignificantValueDigits = parser.intValue(); - otherOptions.put(NUMBER_SIGNIFICANT_DIGITS_FIELD, numberOfSignificantValueDigits); - } else { - return false; - } - } else { - return false; - } - } - break; - } - return true; - } - } 
- return false; - } - - @Override - protected ValuesSourceAggregationBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - PercentilesMethod method = (PercentilesMethod) otherOptions.getOrDefault(METHOD_FIELD, PercentilesMethod.TDIGEST); - - double[] cdfValues = (double[]) otherOptions.get(keysField()); - Double compression = (Double) otherOptions.get(COMPRESSION_FIELD); - Integer numberOfSignificantValueDigits = (Integer) otherOptions.get(NUMBER_SIGNIFICANT_DIGITS_FIELD); - Boolean keyed = (Boolean) otherOptions.get(KEYED_FIELD); - return buildFactory(aggregationName, cdfValues, method, compression, numberOfSignificantValueDigits, keyed); - } - - protected abstract ValuesSourceAggregationBuilder buildFactory(String aggregationName, double[] cdfValues, - PercentilesMethod method, - Double compression, - Integer numberOfSignificantValueDigits, Boolean keyed); - - protected abstract ParseField keysField(); - -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/InternalPercentile.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/InternalPercentile.java deleted file mode 100644 index bb8876d82fded..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/InternalPercentile.java +++ /dev/null @@ -1,41 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.search.aggregations.metrics.percentiles; - -public class InternalPercentile implements Percentile { - - private final double percent; - private final double value; - - public InternalPercentile(double percent, double value) { - this.percent = percent; - this.value = value; - } - - @Override - public double getPercent() { - return percent; - } - - @Override - public double getValue() { - return value; - } -} \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/ParsedPercentileRanks.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/ParsedPercentileRanks.java new file mode 100644 index 0000000000000..2c80d0328dd86 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/ParsedPercentileRanks.java @@ -0,0 +1,33 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.metrics.percentiles; + +public abstract class ParsedPercentileRanks extends ParsedPercentiles implements PercentileRanks { + + @Override + public double percent(double value) { + return getPercentile(value); + } + + @Override + public String percentAsString(double value) { + return getPercentileAsString(value); + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/ParsedPercentiles.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/ParsedPercentiles.java new file mode 100644 index 0000000000000..3f56b21dcd8a0 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/ParsedPercentiles.java @@ -0,0 +1,183 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
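`ParsedPercentileRanks` above only routes the `PercentileRanks` interface through the shared map kept in `ParsedPercentiles`, so callers read ranks the same way as before. A hedged usage sketch (the aggregation name `load_time_ranks` and the 200 threshold are made-up examples):

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.search.aggregations.metrics.percentiles.PercentileRanks;

// Hypothetical helper: reads the percentage of observed values at or below 200.
static double shareOfFastRequests(SearchResponse response) {
    PercentileRanks ranks = response.getAggregations().get("load_time_ranks");
    return ranks.percent(200);
}
```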
+ */ + +package org.elasticsearch.search.aggregations.metrics.percentiles; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.ParsedAggregation; + +import java.io.IOException; +import java.util.HashMap; +import java.util.Iterator; +import java.util.LinkedHashMap; +import java.util.Map; + +public abstract class ParsedPercentiles extends ParsedAggregation implements Iterable { + + protected final Map percentiles = new LinkedHashMap<>(); + protected final Map percentilesAsString = new HashMap<>(); + + private boolean keyed; + + void addPercentile(Double key, Double value) { + percentiles.put(key, value); + } + + void addPercentileAsString(Double key, String valueAsString) { + percentilesAsString.put(key, valueAsString); + } + + protected Double getPercentile(double percent) { + if (percentiles.isEmpty()) { + return Double.NaN; + } + return percentiles.get(percent); + } + + protected String getPercentileAsString(double percent) { + String valueAsString = percentilesAsString.get(percent); + if (valueAsString != null) { + return valueAsString; + } + Double value = getPercentile(percent); + if (value != null) { + return Double.toString(value); + } + return null; + } + + void setKeyed(boolean keyed) { + this.keyed = keyed; + } + + @Override + public Iterator iterator() { + return new Iterator() { + final Iterator> iterator = percentiles.entrySet().iterator(); + @Override + public boolean hasNext() { + return iterator.hasNext(); + } + + @Override + public Percentile next() { + Map.Entry next = iterator.next(); + return new Percentile(next.getKey(), next.getValue()); + } + }; + } + + @Override + protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + final boolean valuesAsString = (percentilesAsString.isEmpty() == false); + if (keyed) { + builder.startObject(CommonFields.VALUES.getPreferredName()); + for (Map.Entry percentile : percentiles.entrySet()) { + Double key = percentile.getKey(); + builder.field(String.valueOf(key), percentile.getValue()); + + if (valuesAsString) { + builder.field(key + "_as_string", getPercentileAsString(key)); + } + } + builder.endObject(); + } else { + builder.startArray(CommonFields.VALUES.getPreferredName()); + for (Map.Entry percentile : percentiles.entrySet()) { + Double key = percentile.getKey(); + builder.startObject(); + { + builder.field(CommonFields.KEY.getPreferredName(), key); + builder.field(CommonFields.VALUE.getPreferredName(), percentile.getValue()); + if (valuesAsString) { + builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), getPercentileAsString(key)); + } + } + builder.endObject(); + } + builder.endArray(); + } + return builder; + } + + protected static void declarePercentilesFields(ObjectParser objectParser) { + ParsedAggregation.declareAggregationFields(objectParser); + + objectParser.declareField((parser, aggregation, context) -> { + XContentParser.Token token = parser.currentToken(); + if (token == XContentParser.Token.START_OBJECT) { + aggregation.setKeyed(true); + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token.isValue()) { + if (token == XContentParser.Token.VALUE_NUMBER) { + aggregation.addPercentile(Double.valueOf(parser.currentName()), parser.doubleValue()); + } else if (token == XContentParser.Token.VALUE_STRING) { + int i = parser.currentName().indexOf("_as_string"); + if (i > 
0) { + double key = Double.valueOf(parser.currentName().substring(0, i)); + aggregation.addPercentileAsString(key, parser.text()); + } else { + aggregation.addPercentile(Double.valueOf(parser.currentName()), Double.valueOf(parser.text())); + } + } + } else if (token == XContentParser.Token.VALUE_NULL) { + aggregation.addPercentile(Double.valueOf(parser.currentName()), Double.NaN); + } else { + parser.skipChildren(); // skip potential inner objects and arrays for forward compatibility + } + } + } else if (token == XContentParser.Token.START_ARRAY) { + aggregation.setKeyed(false); + + String currentFieldName = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + Double key = null; + Double value = null; + String valueAsString = null; + + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (CommonFields.KEY.getPreferredName().equals(currentFieldName)) { + key = parser.doubleValue(); + } else if (CommonFields.VALUE.getPreferredName().equals(currentFieldName)) { + value = parser.doubleValue(); + } else if (CommonFields.VALUE_AS_STRING.getPreferredName().equals(currentFieldName)) { + valueAsString = parser.text(); + } + } else if (token == XContentParser.Token.VALUE_NULL) { + value = Double.NaN; + } else { + parser.skipChildren(); // skip potential inner objects and arrays for forward compatibility + } + } + if (key != null) { + aggregation.addPercentile(key, value); + if (valueAsString != null) { + aggregation.addPercentileAsString(key, valueAsString); + } + } + } + } + }, CommonFields.VALUES, ObjectParser.ValueType.OBJECT_ARRAY); + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/Percentile.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/Percentile.java index 96ad4f261a64a..ca62ca6b2007e 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/Percentile.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/Percentile.java @@ -19,10 +19,41 @@ package org.elasticsearch.search.aggregations.metrics.percentiles; -public interface Percentile { +import java.util.Objects; - double getPercent(); +public class Percentile { - double getValue(); + private final double percent; + private final double value; + public Percentile(double percent, double value) { + this.percent = percent; + this.value = value; + } + + public double getPercent() { + return percent; + } + + public double getValue() { + return value; + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + Percentile that = (Percentile) o; + return Double.compare(that.percent, percent) == 0 + && Double.compare(that.value, value) == 0; + } + + @Override + public int hashCode() { + return Objects.hash(percent, value); + } } \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentileRanksAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentileRanksAggregationBuilder.java index 8320774da0f73..db322a8e70e1c 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentileRanksAggregationBuilder.java +++ 
b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentileRanksAggregationBuilder.java @@ -19,22 +19,26 @@ package org.elasticsearch.search.aggregations.metrics.percentiles; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.HDRPercentileRanksAggregatorFactory; import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.TDigestPercentileRanksAggregatorFactory; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder.LeafOnly; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Arrays; @@ -42,7 +46,59 @@ public class PercentileRanksAggregationBuilder extends LeafOnly { public static final String NAME = PercentileRanks.TYPE_NAME; - public static final Type TYPE = new Type(NAME); + + public static final ParseField VALUES_FIELD = new ParseField("values"); + + private static class TDigestOptions { + Double compression; + } + + private static final ObjectParser TDIGEST_OPTIONS_PARSER = + new ObjectParser<>(PercentilesMethod.TDIGEST.getParseField().getPreferredName(), TDigestOptions::new); + static { + TDIGEST_OPTIONS_PARSER.declareDouble((opts, compression) -> opts.compression = compression, new ParseField("compression")); + } + + private static class HDROptions { + Integer numberOfSigDigits; + } + + private static final ObjectParser HDR_OPTIONS_PARSER = + new ObjectParser<>(PercentilesMethod.HDR.getParseField().getPreferredName(), HDROptions::new); + static { + HDR_OPTIONS_PARSER.declareInt((opts, numberOfSigDigits) -> opts.numberOfSigDigits = numberOfSigDigits, + new ParseField("number_of_significant_value_digits")); + } + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(PercentileRanksAggregationBuilder.NAME); + ValuesSourceParserHelper.declareNumericFields(PARSER, true, false, false); + + PARSER.declareDoubleArray( + (b, v) -> b.values(v.stream().mapToDouble(Double::doubleValue).toArray()), + VALUES_FIELD); + + PARSER.declareBoolean(PercentileRanksAggregationBuilder::keyed, PercentilesAggregationBuilder.KEYED_FIELD); + + PARSER.declareField((b, v) -> { + b.method(PercentilesMethod.TDIGEST); + if (v.compression != null) { + b.compression(v.compression); + } + }, TDIGEST_OPTIONS_PARSER::parse, PercentilesMethod.TDIGEST.getParseField(), 
ObjectParser.ValueType.OBJECT); + + PARSER.declareField((b, v) -> { + b.method(PercentilesMethod.HDR); + if (v.numberOfSigDigits != null) { + b.numberOfSignificantValueDigits(v.numberOfSigDigits); + } + }, HDR_OPTIONS_PARSER::parse, PercentilesMethod.HDR.getParseField(), ObjectParser.ValueType.OBJECT); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new PercentileRanksAggregationBuilder(aggregationName), context); + } private double[] values; private PercentilesMethod method = PercentilesMethod.TDIGEST; @@ -51,14 +107,14 @@ public class PercentileRanksAggregationBuilder extends LeafOnly innerBuild(AggregationContext context, ValuesSourceConfig config, + protected ValuesSourceAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { switch (method) { case TDIGEST: - return new TDigestPercentileRanksAggregatorFactory(name, type, config, values, compression, keyed, context, parent, + return new TDigestPercentileRanksAggregatorFactory(name, config, values, compression, keyed, context, parent, subFactoriesBuilder, metaData); case HDR: - return new HDRPercentileRanksAggregatorFactory(name, type, config, values, numberOfSignificantValueDigits, keyed, context, + return new HDRPercentileRanksAggregatorFactory(name, config, values, numberOfSignificantValueDigits, keyed, context, parent, subFactoriesBuilder, metaData); default: - throw new IllegalStateException("Illegal method [" + method.getName() + "]"); + throw new IllegalStateException("Illegal method [" + method + "]"); } } @Override protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - builder.field(PercentileRanksParser.VALUES_FIELD.getPreferredName(), values); - builder.field(AbstractPercentilesParser.KEYED_FIELD.getPreferredName(), keyed); - builder.startObject(method.getName()); + builder.array(VALUES_FIELD.getPreferredName(), values); + builder.field(PercentilesAggregationBuilder.KEYED_FIELD.getPreferredName(), keyed); + builder.startObject(method.toString()); if (method == PercentilesMethod.TDIGEST) { - builder.field(AbstractPercentilesParser.COMPRESSION_FIELD.getPreferredName(), compression); + builder.field(PercentilesAggregationBuilder.COMPRESSION_FIELD.getPreferredName(), compression); } else { - builder.field(AbstractPercentilesParser.NUMBER_SIGNIFICANT_DIGITS_FIELD.getPreferredName(), numberOfSignificantValueDigits); + builder.field(PercentilesAggregationBuilder.NUMBER_SIGNIFICANT_DIGITS_FIELD.getPreferredName(), numberOfSignificantValueDigits); } builder.endObject(); return builder; @@ -207,7 +263,7 @@ protected boolean innerEquals(Object obj) { equalSettings = Objects.equals(compression, other.compression); break; default: - throw new IllegalStateException("Illegal method [" + method.getName() + "]"); + throw new IllegalStateException("Illegal method [" + method + "]"); } return equalSettings && Objects.deepEquals(values, other.values) @@ -223,12 +279,12 @@ protected int innerHashCode() { case TDIGEST: return Objects.hash(Arrays.hashCode(values), keyed, compression, method); default: - throw new IllegalStateException("Illegal method [" + method.getName() + "]"); + throw new IllegalStateException("Illegal method [" + method + "]"); } } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git 
a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentileRanksParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentileRanksParser.java deleted file mode 100644 index 313b8e7b7c04a..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentileRanksParser.java +++ /dev/null @@ -1,63 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.search.aggregations.metrics.percentiles; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; -import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; - -/** - * - */ -public class PercentileRanksParser extends AbstractPercentilesParser { - - public static final ParseField VALUES_FIELD = new ParseField("values"); - - public PercentileRanksParser() { - super(false); - } - - @Override - protected ParseField keysField() { - return VALUES_FIELD; - } - - @Override - protected ValuesSourceAggregationBuilder buildFactory(String aggregationName, double[] keys, PercentilesMethod method, - Double compression, Integer numberOfSignificantValueDigits, - Boolean keyed) { - PercentileRanksAggregationBuilder factory = new PercentileRanksAggregationBuilder(aggregationName); - if (keys != null) { - factory.values(keys); - } - if (method != null) { - factory.method(method); - } - if (compression != null) { - factory.compression(compression); - } - if (numberOfSignificantValueDigits != null) { - factory.numberOfSignificantValueDigits(numberOfSignificantValueDigits); - } - if (keyed != null) { - factory.keyed(keyed); - } - return factory; - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentilesAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentilesAggregationBuilder.java index f3fd9ad744b27..767e875c13274 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentilesAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentilesAggregationBuilder.java @@ -19,22 +19,26 @@ package org.elasticsearch.search.aggregations.metrics.percentiles; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import 
org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.HDRPercentilesAggregatorFactory; import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.TDigestPercentilesAggregatorFactory; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder.LeafOnly; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Arrays; @@ -42,23 +46,81 @@ public class PercentilesAggregationBuilder extends LeafOnly { public static final String NAME = Percentiles.TYPE_NAME; - public static final Type TYPE = new Type(NAME); - private double[] percents = PercentilesParser.DEFAULT_PERCENTS; + public static final double[] DEFAULT_PERCENTS = new double[] { 1, 5, 25, 50, 75, 95, 99 }; + public static final ParseField PERCENTS_FIELD = new ParseField("percents"); + public static final ParseField KEYED_FIELD = new ParseField("keyed"); + public static final ParseField METHOD_FIELD = new ParseField("method"); + public static final ParseField COMPRESSION_FIELD = new ParseField("compression"); + public static final ParseField NUMBER_SIGNIFICANT_DIGITS_FIELD = new ParseField("number_of_significant_value_digits"); + + private static class TDigestOptions { + Double compression; + } + + private static final ObjectParser TDIGEST_OPTIONS_PARSER = + new ObjectParser<>(PercentilesMethod.TDIGEST.getParseField().getPreferredName(), TDigestOptions::new); + static { + TDIGEST_OPTIONS_PARSER.declareDouble((opts, compression) -> opts.compression = compression, COMPRESSION_FIELD); + } + + private static class HDROptions { + Integer numberOfSigDigits; + } + + private static final ObjectParser HDR_OPTIONS_PARSER = + new ObjectParser<>(PercentilesMethod.HDR.getParseField().getPreferredName(), HDROptions::new); + static { + HDR_OPTIONS_PARSER.declareInt( + (opts, numberOfSigDigits) -> opts.numberOfSigDigits = numberOfSigDigits, + NUMBER_SIGNIFICANT_DIGITS_FIELD); + } + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(PercentilesAggregationBuilder.NAME); + ValuesSourceParserHelper.declareNumericFields(PARSER, true, true, false); + + PARSER.declareDoubleArray( + (b, v) -> b.percentiles(v.stream().mapToDouble(Double::doubleValue).toArray()), + PERCENTS_FIELD); + + PARSER.declareBoolean(PercentilesAggregationBuilder::keyed, KEYED_FIELD); + + PARSER.declareField((b, v) -> { + b.method(PercentilesMethod.TDIGEST); + if (v.compression != null) { + b.compression(v.compression); + } + }, TDIGEST_OPTIONS_PARSER::parse, PercentilesMethod.TDIGEST.getParseField(), ObjectParser.ValueType.OBJECT); + + PARSER.declareField((b, v) -> { + b.method(PercentilesMethod.HDR); + if (v.numberOfSigDigits != null) { + 
b.numberOfSignificantValueDigits(v.numberOfSigDigits); + } + }, HDR_OPTIONS_PARSER::parse, PercentilesMethod.HDR.getParseField(), ObjectParser.ValueType.OBJECT); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new PercentilesAggregationBuilder(aggregationName), context); + } + + private double[] percents = DEFAULT_PERCENTS; private PercentilesMethod method = PercentilesMethod.TDIGEST; private int numberOfSignificantValueDigits = 3; private double compression = 100.0; private boolean keyed = true; public PercentilesAggregationBuilder(String name) { - super(name, TYPE, ValuesSourceType.NUMERIC, ValueType.NUMERIC); + super(name, ValuesSourceType.NUMERIC, ValueType.NUMERIC); } /** * Read from a stream. */ public PercentilesAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.NUMERIC, ValueType.NUMERIC); + super(in, ValuesSourceType.NUMERIC, ValueType.NUMERIC); percents = in.readDoubleArray(); keyed = in.readBoolean(); numberOfSignificantValueDigits = in.readVInt(); @@ -164,29 +226,29 @@ public PercentilesMethod method() { } @Override - protected ValuesSourceAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, + protected ValuesSourceAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { switch (method) { case TDIGEST: - return new TDigestPercentilesAggregatorFactory(name, type, config, percents, compression, keyed, context, parent, + return new TDigestPercentilesAggregatorFactory(name, config, percents, compression, keyed, context, parent, subFactoriesBuilder, metaData); case HDR: - return new HDRPercentilesAggregatorFactory(name, type, config, percents, numberOfSignificantValueDigits, keyed, context, parent, + return new HDRPercentilesAggregatorFactory(name, config, percents, numberOfSignificantValueDigits, keyed, context, parent, subFactoriesBuilder, metaData); default: - throw new IllegalStateException("Illegal method [" + method.getName() + "]"); + throw new IllegalStateException("Illegal method [" + method + "]"); } } @Override protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - builder.field(PercentilesParser.PERCENTS_FIELD.getPreferredName(), percents); - builder.field(AbstractPercentilesParser.KEYED_FIELD.getPreferredName(), keyed); - builder.startObject(method.getName()); + builder.array(PERCENTS_FIELD.getPreferredName(), percents); + builder.field(KEYED_FIELD.getPreferredName(), keyed); + builder.startObject(method.toString()); if (method == PercentilesMethod.TDIGEST) { - builder.field(AbstractPercentilesParser.COMPRESSION_FIELD.getPreferredName(), compression); + builder.field(COMPRESSION_FIELD.getPreferredName(), compression); } else { - builder.field(AbstractPercentilesParser.NUMBER_SIGNIFICANT_DIGITS_FIELD.getPreferredName(), numberOfSignificantValueDigits); + builder.field(NUMBER_SIGNIFICANT_DIGITS_FIELD.getPreferredName(), numberOfSignificantValueDigits); } builder.endObject(); return builder; @@ -207,7 +269,7 @@ protected boolean innerEquals(Object obj) { equalSettings = Objects.equals(compression, other.compression); break; default: - throw new IllegalStateException("Illegal method [" + method.getName() + "]"); + throw new IllegalStateException("Illegal method [" + method.toString() + "]"); } return equalSettings && Objects.deepEquals(percents, 
other.percents) @@ -223,12 +285,12 @@ protected int innerHashCode() { case TDIGEST: return Objects.hash(Arrays.hashCode(percents), keyed, compression, method); default: - throw new IllegalStateException("Illegal method [" + method.getName() + "]"); + throw new IllegalStateException("Illegal method [" + method.toString() + "]"); } } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentilesMethod.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentilesMethod.java index 97b8a727be331..3b8085793dc0a 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentilesMethod.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentilesMethod.java @@ -19,6 +19,7 @@ package org.elasticsearch.search.aggregations.metrics.percentiles; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; @@ -32,48 +33,36 @@ public enum PercentilesMethod implements Writeable { /** * The TDigest method for calculating percentiles */ - TDIGEST("tdigest"), + TDIGEST("tdigest", "TDigest", "TDIGEST"), /** * The HDRHistogram method of calculating percentiles */ - HDR("hdr"); + HDR("hdr", "HDR"); - private String name; + private final ParseField parseField; - private PercentilesMethod(String name) { - this.name = name; + PercentilesMethod(String name, String... deprecatedNames) { + this.parseField = new ParseField(name, deprecatedNames); } /** * @return the name of the method */ - public String getName() { - return name; + public ParseField getParseField() { + return parseField; } public static PercentilesMethod readFromStream(StreamInput in) throws IOException { - int ordinal = in.readVInt(); - if (ordinal < 0 || ordinal >= values().length) { - throw new IOException("Unknown PercentilesMethod ordinal [" + ordinal + "]"); - } - return values()[ordinal]; + return in.readEnum(PercentilesMethod.class); } @Override public void writeTo(StreamOutput out) throws IOException { - out.writeVInt(ordinal()); + out.writeEnum(this); } - /** - * Returns the {@link PercentilesMethod} for this method name. returns - * null if no {@link PercentilesMethod} exists for the name. - */ - public static PercentilesMethod resolveFromName(String name) { - for (PercentilesMethod method : values()) { - if (method.name.equalsIgnoreCase(name)) { - return method; - } - } - return null; + @Override + public String toString() { + return parseField.getPreferredName(); } -} \ No newline at end of file +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentilesParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentilesParser.java deleted file mode 100644 index 806fb26cd3fe8..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentilesParser.java +++ /dev/null @@ -1,65 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. 
Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.search.aggregations.metrics.percentiles; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; -import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; - -/** - * - */ -public class PercentilesParser extends AbstractPercentilesParser { - - public static final ParseField PERCENTS_FIELD = new ParseField("percents"); - - public PercentilesParser() { - super(true); - } - - public static final double[] DEFAULT_PERCENTS = new double[] { 1, 5, 25, 50, 75, 95, 99 }; - - @Override - protected ParseField keysField() { - return PERCENTS_FIELD; - } - - @Override - protected ValuesSourceAggregationBuilder buildFactory(String aggregationName, double[] keys, PercentilesMethod method, - Double compression, Integer numberOfSignificantValueDigits, - Boolean keyed) { - PercentilesAggregationBuilder factory = new PercentilesAggregationBuilder(aggregationName); - if (keys != null) { - factory.percentiles(keys); - } - if (method != null) { - factory.method(method); - } - if (compression != null) { - factory.compression(compression); - } - if (numberOfSignificantValueDigits != null) { - factory.numberOfSignificantValueDigits(numberOfSignificantValueDigits); - } - if (keyed != null) { - factory.keyed(keyed); - } - return factory; - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/AbstractHDRPercentilesAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/AbstractHDRPercentilesAggregator.java index 26ac179a5613e..bf4443c88739f 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/AbstractHDRPercentilesAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/AbstractHDRPercentilesAggregator.java @@ -32,8 +32,8 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.metrics.NumericMetricsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -52,7 +52,7 @@ private static int indexOfKey(double[] keys, double key) { protected final int numberOfSignificantValueDigits; protected final boolean keyed; - public AbstractHDRPercentilesAggregator(String name, ValuesSource.Numeric valuesSource, AggregationContext context, Aggregator parent, + public AbstractHDRPercentilesAggregator(String name, ValuesSource.Numeric valuesSource, SearchContext context, Aggregator parent, double[] keys, int numberOfSignificantValueDigits, boolean keyed, DocValueFormat formatter, List pipelineAggregators, Map 
metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/AbstractInternalHDRPercentiles.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/AbstractInternalHDRPercentiles.java index ba76f8b98632f..5f9e87e7c047a 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/AbstractInternalHDRPercentiles.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/AbstractInternalHDRPercentiles.java @@ -40,7 +40,7 @@ abstract class AbstractInternalHDRPercentiles extends InternalNumericMetricsAggr protected final DoubleHistogram state; private final boolean keyed; - public AbstractInternalHDRPercentiles(String name, double[] keys, DoubleHistogram state, boolean keyed, DocValueFormat format, + AbstractInternalHDRPercentiles(String name, double[] keys, DoubleHistogram state, boolean keyed, DocValueFormat format, List pipelineAggregators, Map metaData) { super(name, pipelineAggregators, metaData); @@ -113,7 +113,7 @@ protected abstract AbstractInternalHDRPercentiles createReduced(String name, dou @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { if (keyed) { - builder.startObject(CommonFields.VALUES); + builder.startObject(CommonFields.VALUES.getPreferredName()); for(int i = 0; i < keys.length; ++i) { String key = String.valueOf(keys[i]); double value = value(keys[i]); @@ -124,14 +124,14 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th } builder.endObject(); } else { - builder.startArray(CommonFields.VALUES); + builder.startArray(CommonFields.VALUES.getPreferredName()); for (int i = 0; i < keys.length; i++) { double value = value(keys[i]); builder.startObject(); - builder.field(CommonFields.KEY, keys[i]); - builder.field(CommonFields.VALUE, value); + builder.field(CommonFields.KEY.getPreferredName(), keys[i]); + builder.field(CommonFields.VALUE.getPreferredName(), value); if (format != DocValueFormat.RAW) { - builder.field(CommonFields.VALUE_AS_STRING, format.format(value)); + builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), format.format(value)); } builder.endObject(); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentileRanksAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentileRanksAggregator.java index faa6039f56cd9..53da1ee81856c 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentileRanksAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentileRanksAggregator.java @@ -23,8 +23,8 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -35,7 +35,7 @@ */ public class HDRPercentileRanksAggregator extends AbstractHDRPercentilesAggregator { - public HDRPercentileRanksAggregator(String name, Numeric valuesSource, AggregationContext context, 
Aggregator parent, + public HDRPercentileRanksAggregator(String name, Numeric valuesSource, SearchContext context, Aggregator parent, double[] percents, int numberOfSignificantValueDigits, boolean keyed, DocValueFormat format, List pipelineAggregators, Map metaData) throws IOException { super(name, valuesSource, context, parent, percents, numberOfSignificantValueDigits, keyed, format, pipelineAggregators, diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentileRanksAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentileRanksAggregatorFactory.java index 7f17220edad8e..d89a9a85b28a9 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentileRanksAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentileRanksAggregatorFactory.java @@ -22,13 +22,12 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -41,10 +40,10 @@ public class HDRPercentileRanksAggregatorFactory private final int numberOfSignificantValueDigits; private final boolean keyed; - public HDRPercentileRanksAggregatorFactory(String name, Type type, ValuesSourceConfig config, double[] values, - int numberOfSignificantValueDigits, boolean keyed, AggregationContext context, AggregatorFactory parent, + public HDRPercentileRanksAggregatorFactory(String name, ValuesSourceConfig config, double[] values, + int numberOfSignificantValueDigits, boolean keyed, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); this.values = values; this.numberOfSignificantValueDigits = numberOfSignificantValueDigits; this.keyed = keyed; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentilesAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentilesAggregator.java index 0b9c2c43d3438..d61b4ddade0b1 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentilesAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentilesAggregator.java @@ -23,8 +23,8 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import 
org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -35,7 +35,7 @@ */ public class HDRPercentilesAggregator extends AbstractHDRPercentilesAggregator { - public HDRPercentilesAggregator(String name, Numeric valuesSource, AggregationContext context, Aggregator parent, double[] percents, + public HDRPercentilesAggregator(String name, Numeric valuesSource, SearchContext context, Aggregator parent, double[] percents, int numberOfSignificantValueDigits, boolean keyed, DocValueFormat formatter, List pipelineAggregators, Map metaData) throws IOException { super(name, valuesSource, context, parent, percents, numberOfSignificantValueDigits, keyed, formatter, diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentilesAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentilesAggregatorFactory.java index 0e1bd1e532d31..1074b6e142db6 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentilesAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentilesAggregatorFactory.java @@ -22,13 +22,12 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -40,10 +39,10 @@ public class HDRPercentilesAggregatorFactory extends ValuesSourceAggregatorFacto private final int numberOfSignificantValueDigits; private final boolean keyed; - public HDRPercentilesAggregatorFactory(String name, Type type, ValuesSourceConfig config, double[] percents, - int numberOfSignificantValueDigits, boolean keyed, AggregationContext context, AggregatorFactory parent, + public HDRPercentilesAggregatorFactory(String name, ValuesSourceConfig config, double[] percents, + int numberOfSignificantValueDigits, boolean keyed, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); this.percents = percents; this.numberOfSignificantValueDigits = numberOfSignificantValueDigits; this.keyed = keyed; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/InternalHDRPercentileRanks.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/InternalHDRPercentileRanks.java index 35234d73accf7..cb058128c5a49 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/InternalHDRPercentileRanks.java +++ 
b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/InternalHDRPercentileRanks.java @@ -21,7 +21,6 @@ import org.HdrHistogram.DoubleHistogram; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.search.DocValueFormat; -import org.elasticsearch.search.aggregations.metrics.percentiles.InternalPercentile; import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile; import org.elasticsearch.search.aggregations.metrics.percentiles.PercentileRanks; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; @@ -109,7 +108,7 @@ public boolean hasNext() { @Override public Percentile next() { - final Percentile next = new InternalPercentile(percentileRank(state, values[i]), values[i]); + final Percentile next = new Percentile(percentileRank(state, values[i]), values[i]); ++i; return next; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/InternalHDRPercentiles.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/InternalHDRPercentiles.java index 579f25c1666de..a153e497f7bc8 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/InternalHDRPercentiles.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/InternalHDRPercentiles.java @@ -21,7 +21,6 @@ import org.HdrHistogram.DoubleHistogram; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.search.DocValueFormat; -import org.elasticsearch.search.aggregations.metrics.percentiles.InternalPercentile; import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile; import org.elasticsearch.search.aggregations.metrics.percentiles.Percentiles; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; @@ -99,7 +98,9 @@ public boolean hasNext() { @Override public Percentile next() { - final Percentile next = new InternalPercentile(percents[i], state.getValueAtPercentile(percents[i])); + double percent = percents[i]; + double value = (state.getTotalCount() == 0) ? Double.NaN : state.getValueAtPercentile(percent); + final Percentile next = new Percentile(percent, value); ++i; return next; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/ParsedHDRPercentileRanks.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/ParsedHDRPercentileRanks.java new file mode 100644 index 0000000000000..f5fd7717e04bf --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/ParsedHDRPercentileRanks.java @@ -0,0 +1,66 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
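Note the behavioural fix folded into `InternalHDRPercentiles` above: when the histogram is empty the iterator now reports `Double.NaN` instead of asking the empty `DoubleHistogram` for a value, so consumers should expect NaN when no documents contributed. A hedged sketch, assuming (as the iterators in this patch suggest) that `Percentiles` can be iterated as `Percentile` entries:

```java
import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile;
import org.elasticsearch.search.aggregations.metrics.percentiles.Percentiles;

// Hypothetical consumer that skips empty results, which now surface as Double.NaN.
static void printPercentiles(Percentiles percentiles) {
    for (Percentile p : percentiles) {
        if (Double.isNaN(p.getValue())) {
            continue; // no documents contributed to this percentile
        }
        System.out.println(p.getPercent() + " -> " + p.getValue());
    }
}
```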
+ */ + +package org.elasticsearch.search.aggregations.metrics.percentiles.hdr; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.metrics.percentiles.ParsedPercentiles; +import org.elasticsearch.search.aggregations.metrics.percentiles.ParsedPercentileRanks; +import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile; + +import java.io.IOException; +import java.util.Iterator; + +public class ParsedHDRPercentileRanks extends ParsedPercentileRanks { + + @Override + public String getType() { + return InternalHDRPercentileRanks.NAME; + } + + @Override + public Iterator iterator() { + final Iterator iterator = super.iterator(); + return new Iterator() { + @Override + public boolean hasNext() { + return iterator.hasNext(); + } + + @Override + public Percentile next() { + Percentile percentile = iterator.next(); + return new Percentile(percentile.getValue(), percentile.getPercent()); + } + }; + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedHDRPercentileRanks.class.getSimpleName(), true, ParsedHDRPercentileRanks::new); + static { + ParsedPercentiles.declarePercentilesFields(PARSER); + } + + public static ParsedHDRPercentileRanks fromXContent(XContentParser parser, String name) throws IOException { + ParsedHDRPercentileRanks aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/ParsedHDRPercentiles.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/ParsedHDRPercentiles.java new file mode 100644 index 0000000000000..1b1ba906aa087 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/ParsedHDRPercentiles.java @@ -0,0 +1,57 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
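In `ParsedHDRPercentileRanks.iterator()` above the `Percentile` arguments are deliberately swapped, because for ranks the map key is the original value and the stored value is the percent. The class is rebuilt from a response body via its `ObjectParser`; a hedged sketch of calling `fromXContent` directly (the helper and its positioning assumption are illustrative, not part of the patch):

```java
import java.io.IOException;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.ParsedHDRPercentileRanks;

// Assumes the parser is already positioned inside this aggregation's object, as the
// surrounding response-parsing code would arrange before delegating here.
static ParsedHDRPercentileRanks readRanks(XContentParser parser, String aggName) throws IOException {
    return ParsedHDRPercentileRanks.fromXContent(parser, aggName);
}
```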
+ */ + +package org.elasticsearch.search.aggregations.metrics.percentiles.hdr; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.metrics.percentiles.ParsedPercentiles; +import org.elasticsearch.search.aggregations.metrics.percentiles.Percentiles; + +import java.io.IOException; + +public class ParsedHDRPercentiles extends ParsedPercentiles implements Percentiles { + + @Override + public String getType() { + return InternalHDRPercentiles.NAME; + } + + @Override + public double percentile(double percent) { + return getPercentile(percent); + } + + @Override + public String percentileAsString(double percent) { + return getPercentileAsString(percent); + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedHDRPercentiles.class.getSimpleName(), true, ParsedHDRPercentiles::new); + static { + ParsedPercentiles.declarePercentilesFields(PARSER); + } + + public static ParsedHDRPercentiles fromXContent(XContentParser parser, String name) throws IOException { + ParsedHDRPercentiles aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/AbstractInternalTDigestPercentiles.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/AbstractInternalTDigestPercentiles.java index 43bee101f4a01..b1cf553529d01 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/AbstractInternalTDigestPercentiles.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/AbstractInternalTDigestPercentiles.java @@ -37,7 +37,7 @@ abstract class AbstractInternalTDigestPercentiles extends InternalNumericMetrics protected final TDigestState state; private final boolean keyed; - public AbstractInternalTDigestPercentiles(String name, double[] keys, TDigestState state, boolean keyed, DocValueFormat formatter, + AbstractInternalTDigestPercentiles(String name, double[] keys, TDigestState state, boolean keyed, DocValueFormat formatter, List pipelineAggregators, Map metaData) { super(name, pipelineAggregators, metaData); @@ -96,7 +96,7 @@ protected abstract AbstractInternalTDigestPercentiles createReduced(String name, @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { if (keyed) { - builder.startObject(CommonFields.VALUES); + builder.startObject(CommonFields.VALUES.getPreferredName()); for(int i = 0; i < keys.length; ++i) { String key = String.valueOf(keys[i]); double value = value(keys[i]); @@ -107,14 +107,14 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th } builder.endObject(); } else { - builder.startArray(CommonFields.VALUES); + builder.startArray(CommonFields.VALUES.getPreferredName()); for (int i = 0; i < keys.length; i++) { double value = value(keys[i]); builder.startObject(); - builder.field(CommonFields.KEY, keys[i]); - builder.field(CommonFields.VALUE, value); + builder.field(CommonFields.KEY.getPreferredName(), keys[i]); + builder.field(CommonFields.VALUE.getPreferredName(), value); if (format != DocValueFormat.RAW) { - builder.field(CommonFields.VALUE_AS_STRING, format.format(value)); + builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), format.format(value)); } builder.endObject(); } diff --git 
a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/AbstractTDigestPercentilesAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/AbstractTDigestPercentilesAggregator.java index 0676d291a1133..2c68d580e14b5 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/AbstractTDigestPercentilesAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/AbstractTDigestPercentilesAggregator.java @@ -31,8 +31,8 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.metrics.NumericMetricsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -51,7 +51,7 @@ private static int indexOfKey(double[] keys, double key) { protected final double compression; protected final boolean keyed; - public AbstractTDigestPercentilesAggregator(String name, ValuesSource.Numeric valuesSource, AggregationContext context, Aggregator parent, + public AbstractTDigestPercentilesAggregator(String name, ValuesSource.Numeric valuesSource, SearchContext context, Aggregator parent, double[] keys, double compression, boolean keyed, DocValueFormat formatter, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/InternalTDigestPercentileRanks.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/InternalTDigestPercentileRanks.java index 9e24ba5d86e0a..666993f41fda3 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/InternalTDigestPercentileRanks.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/InternalTDigestPercentileRanks.java @@ -20,7 +20,6 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.search.DocValueFormat; -import org.elasticsearch.search.aggregations.metrics.percentiles.InternalPercentile; import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile; import org.elasticsearch.search.aggregations.metrics.percentiles.PercentileRanks; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; @@ -106,7 +105,7 @@ public boolean hasNext() { @Override public Percentile next() { - final Percentile next = new InternalPercentile(percentileRank(state, values[i]), values[i]); + final Percentile next = new Percentile(percentileRank(state, values[i]), values[i]); ++i; return next; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/InternalTDigestPercentiles.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/InternalTDigestPercentiles.java index ec619219111eb..5a62f24933b40 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/InternalTDigestPercentiles.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/InternalTDigestPercentiles.java @@ -20,7 +20,6 @@ import 
org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.search.DocValueFormat; -import org.elasticsearch.search.aggregations.metrics.percentiles.InternalPercentile; import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile; import org.elasticsearch.search.aggregations.metrics.percentiles.Percentiles; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; @@ -95,7 +94,7 @@ public boolean hasNext() { @Override public Percentile next() { - final Percentile next = new InternalPercentile(percents[i], state.quantile(percents[i] / 100)); + final Percentile next = new Percentile(percents[i], state.quantile(percents[i] / 100)); ++i; return next; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/ParsedTDigestPercentileRanks.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/ParsedTDigestPercentileRanks.java new file mode 100644 index 0000000000000..01929f374d486 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/ParsedTDigestPercentileRanks.java @@ -0,0 +1,66 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.metrics.percentiles.tdigest; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.metrics.percentiles.ParsedPercentiles; +import org.elasticsearch.search.aggregations.metrics.percentiles.ParsedPercentileRanks; +import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile; + +import java.io.IOException; +import java.util.Iterator; + +public class ParsedTDigestPercentileRanks extends ParsedPercentileRanks { + + @Override + public String getType() { + return InternalTDigestPercentileRanks.NAME; + } + + @Override + public Iterator iterator() { + final Iterator iterator = super.iterator(); + return new Iterator() { + @Override + public boolean hasNext() { + return iterator.hasNext(); + } + + @Override + public Percentile next() { + Percentile percentile = iterator.next(); + return new Percentile(percentile.getValue(), percentile.getPercent()); + } + }; + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedTDigestPercentileRanks.class.getSimpleName(), true, ParsedTDigestPercentileRanks::new); + static { + ParsedPercentiles.declarePercentilesFields(PARSER); + } + + public static ParsedTDigestPercentileRanks fromXContent(XContentParser parser, String name) throws IOException { + ParsedTDigestPercentileRanks aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/ParsedTDigestPercentiles.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/ParsedTDigestPercentiles.java new file mode 100644 index 0000000000000..cbae25d61e046 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/ParsedTDigestPercentiles.java @@ -0,0 +1,57 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.metrics.percentiles.tdigest; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.metrics.percentiles.ParsedPercentiles; +import org.elasticsearch.search.aggregations.metrics.percentiles.Percentiles; + +import java.io.IOException; + +public class ParsedTDigestPercentiles extends ParsedPercentiles implements Percentiles { + + @Override + public String getType() { + return InternalTDigestPercentiles.NAME; + } + + @Override + public double percentile(double percent) { + return getPercentile(percent); + } + + @Override + public String percentileAsString(double percent) { + return getPercentileAsString(percent); + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedTDigestPercentiles.class.getSimpleName(), true, ParsedTDigestPercentiles::new); + static { + ParsedPercentiles.declarePercentilesFields(PARSER); + } + + public static ParsedTDigestPercentiles fromXContent(XContentParser parser, String name) throws IOException { + ParsedTDigestPercentiles aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentileRanksAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentileRanksAggregator.java index 1151e2272a442..c796742fbb21f 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentileRanksAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentileRanksAggregator.java @@ -22,8 +22,8 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -34,7 +34,7 @@ */ public class TDigestPercentileRanksAggregator extends AbstractTDigestPercentilesAggregator { - public TDigestPercentileRanksAggregator(String name, Numeric valuesSource, AggregationContext context, Aggregator parent, double[] percents, + public TDigestPercentileRanksAggregator(String name, Numeric valuesSource, SearchContext context, Aggregator parent, double[] percents, double compression, boolean keyed, DocValueFormat formatter, List pipelineAggregators, Map metaData) throws IOException { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentileRanksAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentileRanksAggregatorFactory.java index a2605fce23923..223d25216bca2 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentileRanksAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentileRanksAggregatorFactory.java @@ -22,13 +22,12 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import 
org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -41,10 +40,10 @@ public class TDigestPercentileRanksAggregatorFactory private final double compression; private final boolean keyed; - public TDigestPercentileRanksAggregatorFactory(String name, Type type, ValuesSourceConfig config, double[] percents, - double compression, boolean keyed, AggregationContext context, AggregatorFactory parent, + public TDigestPercentileRanksAggregatorFactory(String name, ValuesSourceConfig config, double[] percents, + double compression, boolean keyed, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); this.percents = percents; this.compression = compression; this.keyed = keyed; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentilesAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentilesAggregator.java index c0063102e070b..74a5bce0e519a 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentilesAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentilesAggregator.java @@ -22,8 +22,8 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -34,7 +34,7 @@ */ public class TDigestPercentilesAggregator extends AbstractTDigestPercentilesAggregator { - public TDigestPercentilesAggregator(String name, Numeric valuesSource, AggregationContext context, + public TDigestPercentilesAggregator(String name, Numeric valuesSource, SearchContext context, Aggregator parent, double[] percents, double compression, boolean keyed, DocValueFormat formatter, List pipelineAggregators, Map metaData) throws IOException { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentilesAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentilesAggregatorFactory.java index 4513801fbfbbe..47b17d84f3b6b 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentilesAggregatorFactory.java +++ 
b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentilesAggregatorFactory.java @@ -22,13 +22,12 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -41,10 +40,10 @@ public class TDigestPercentilesAggregatorFactory private final double compression; private final boolean keyed; - public TDigestPercentilesAggregatorFactory(String name, Type type, ValuesSourceConfig config, double[] percents, - double compression, boolean keyed, AggregationContext context, AggregatorFactory parent, + public TDigestPercentilesAggregatorFactory(String name, ValuesSourceConfig config, double[] percents, + double compression, boolean keyed, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); this.percents = percents; this.compression = compression; this.keyed = keyed; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/InternalScriptedMetric.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/InternalScriptedMetric.java index c5704d9f2b6ef..d86104a5b142a 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/InternalScriptedMetric.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/InternalScriptedMetric.java @@ -27,7 +27,6 @@ import org.elasticsearch.script.Script; import org.elasticsearch.script.ScriptContext; import org.elasticsearch.search.aggregations.InternalAggregation; -import org.elasticsearch.search.aggregations.metrics.InternalMetricsAggregation; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import java.io.IOException; @@ -37,11 +36,16 @@ import java.util.List; import java.util.Map; -public class InternalScriptedMetric extends InternalMetricsAggregation implements ScriptedMetric { +public class InternalScriptedMetric extends InternalAggregation implements ScriptedMetric { private final Script reduceScript; - private final Object aggregation; + private final List aggregation; public InternalScriptedMetric(String name, Object aggregation, Script reduceScript, List pipelineAggregators, + Map metaData) { + this(name, Collections.singletonList(aggregation), reduceScript, pipelineAggregators, metaData); + } + + private InternalScriptedMetric(String name, List aggregation, Script reduceScript, List pipelineAggregators, Map metaData) { super(name, pipelineAggregators, metaData); this.aggregation = aggregation; @@ -54,13 +58,13 @@ public InternalScriptedMetric(String name, Object aggregation, Script reduceScri public 
InternalScriptedMetric(StreamInput in) throws IOException { super(in); reduceScript = in.readOptionalWriteable(Script::new); - aggregation = in.readGenericValue(); + aggregation = Collections.singletonList(in.readGenericValue()); } @Override protected void doWriteTo(StreamOutput out) throws IOException { out.writeOptionalWriteable(reduceScript); - out.writeGenericValue(aggregation); + out.writeGenericValue(aggregation()); } @Override @@ -70,7 +74,10 @@ public String getWriteableName() { @Override public Object aggregation() { - return aggregation; + if (aggregation.size() != 1) { + throw new IllegalStateException("aggregation was not reduced"); + } + return aggregation.get(0); } @Override @@ -78,26 +85,29 @@ public InternalAggregation doReduce(List aggregations, Redu List aggregationObjects = new ArrayList<>(); for (InternalAggregation aggregation : aggregations) { InternalScriptedMetric mapReduceAggregation = (InternalScriptedMetric) aggregation; - aggregationObjects.add(mapReduceAggregation.aggregation()); + aggregationObjects.addAll(mapReduceAggregation.aggregation); } InternalScriptedMetric firstAggregation = ((InternalScriptedMetric) aggregations.get(0)); - Object aggregation; - if (firstAggregation.reduceScript != null) { + List aggregation; + if (firstAggregation.reduceScript != null && reduceContext.isFinalReduce()) { Map vars = new HashMap<>(); vars.put("_aggs", aggregationObjects); if (firstAggregation.reduceScript.getParams() != null) { vars.putAll(firstAggregation.reduceScript.getParams()); } - CompiledScript compiledScript = reduceContext.scriptService().compile(firstAggregation.reduceScript, - ScriptContext.Standard.AGGS, Collections.emptyMap()); + CompiledScript compiledScript = reduceContext.scriptService().compile( + firstAggregation.reduceScript, ScriptContext.Standard.AGGS); ExecutableScript script = reduceContext.scriptService().executable(compiledScript, vars); - aggregation = script.run(); + aggregation = Collections.singletonList(script.run()); + } else if (reduceContext.isFinalReduce()) { + aggregation = Collections.singletonList(aggregationObjects); } else { + // if we are not an final reduce we have to maintain all the aggs from all the incoming one + // until we hit the final reduce phase. aggregation = aggregationObjects; } return new InternalScriptedMetric(firstAggregation.getName(), aggregation, firstAggregation.reduceScript, pipelineAggregators(), getMetaData()); - } @Override @@ -105,7 +115,7 @@ public Object getProperty(List path) { if (path.isEmpty()) { return this; } else if (path.size() == 1 && "value".equals(path.get(0))) { - return aggregation; + return aggregation(); } else { throw new IllegalArgumentException("path not supported for [" + getName() + "]: " + path); } @@ -113,7 +123,7 @@ public Object getProperty(List path) { @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - return builder.field("value", aggregation); + return builder.field(CommonFields.VALUE.getPreferredName(), aggregation()); } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ParsedScriptedMetric.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ParsedScriptedMetric.java new file mode 100644 index 0000000000000..f2aae9f5e8aa5 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ParsedScriptedMetric.java @@ -0,0 +1,92 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.metrics.scripted; + +import org.elasticsearch.common.bytes.BytesArray; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser.ValueType; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParser.Token; +import org.elasticsearch.search.aggregations.ParsedAggregation; + +import java.io.IOException; +import java.util.Collections; +import java.util.List; + +public class ParsedScriptedMetric extends ParsedAggregation implements ScriptedMetric { + private List aggregation; + + @Override + public String getType() { + return ScriptedMetricAggregationBuilder.NAME; + } + + @Override + public Object aggregation() { + assert aggregation.size() == 1; // see InternalScriptedMetric#aggregations() for why we can assume this + return aggregation.get(0); + } + + @Override + public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + return builder.field(CommonFields.VALUE.getPreferredName(), aggregation()); + } + + private static final ObjectParser PARSER = + new ObjectParser<>(ParsedScriptedMetric.class.getSimpleName(), true, ParsedScriptedMetric::new); + + static { + declareAggregationFields(PARSER); + PARSER.declareField((agg, value) -> agg.aggregation = Collections.singletonList(value), + ParsedScriptedMetric::parseValue, CommonFields.VALUE, ValueType.VALUE_OBJECT_ARRAY); + } + + private static Object parseValue(XContentParser parser) throws IOException { + Token token = parser.currentToken(); + Object value = null; + if (token == XContentParser.Token.VALUE_NULL) { + value = null; + } else if (token.isValue()) { + if (token == XContentParser.Token.VALUE_STRING) { + //binary values will be parsed back and returned as base64 strings when reading from json and yaml + value = parser.text(); + } else if (token == XContentParser.Token.VALUE_NUMBER) { + value = parser.numberValue(); + } else if (token == XContentParser.Token.VALUE_BOOLEAN) { + value = parser.booleanValue(); + } else if (token == XContentParser.Token.VALUE_EMBEDDED_OBJECT) { + //binary values will be parsed back and returned as BytesArray when reading from cbor and smile + value = new BytesArray(parser.binaryValue()); + } + } else if (token == XContentParser.Token.START_OBJECT) { + value = parser.map(); + } else if (token == XContentParser.Token.START_ARRAY) { + value = parser.list(); + } + return value; + } + + public static ParsedScriptedMetric fromXContent(XContentParser parser, final String name) { + ParsedScriptedMetric aggregation = PARSER.apply(parser, null); + aggregation.setName(name); + return aggregation; + } +} diff --git 
a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregationBuilder.java index 244881a51557e..d7604a14d6883 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregationBuilder.java @@ -26,23 +26,25 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.index.query.QueryShardContext; +import org.elasticsearch.script.ExecutableScript; import org.elasticsearch.script.Script; +import org.elasticsearch.script.ScriptContext; +import org.elasticsearch.script.SearchScript; import org.elasticsearch.search.aggregations.AbstractAggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.HashSet; import java.util.Map; import java.util.Objects; import java.util.Set; +import java.util.function.Function; public class ScriptedMetricAggregationBuilder extends AbstractAggregationBuilder { - public static final String NAME = "scripted_metric"; - private static final Type TYPE = new Type(NAME); private static final ParseField INIT_SCRIPT_FIELD = new ParseField("init_script"); private static final ParseField MAP_SCRIPT_FIELD = new ParseField("map_script"); @@ -57,14 +59,14 @@ public class ScriptedMetricAggregationBuilder extends AbstractAggregationBuilder private Map params; public ScriptedMetricAggregationBuilder(String name) { - super(name, TYPE); + super(name); } /** * Read from a stream. 
*/ public ScriptedMetricAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE); + super(in); initScript = in.readOptionalWriteable(Script::new); mapScript = in.readOptionalWriteable(Script::new); combineScript = in.readOptionalWriteable(Script::new); @@ -180,12 +182,29 @@ public Map params() { } @Override - protected ScriptedMetricAggregatorFactory doBuild(AggregationContext context, AggregatorFactory parent, + protected ScriptedMetricAggregatorFactory doBuild(SearchContext context, AggregatorFactory parent, Builder subfactoriesBuilder) throws IOException { - return new ScriptedMetricAggregatorFactory(name, type, initScript, mapScript, combineScript, reduceScript, params, context, - parent, subfactoriesBuilder, metaData); + + QueryShardContext queryShardContext = context.getQueryShardContext(); + Function, ExecutableScript> executableInitScript; + if (initScript != null) { + executableInitScript = queryShardContext.getLazyExecutableScript(initScript, ScriptContext.Standard.AGGS); + } else { + executableInitScript = (p) -> null; + } + Function, SearchScript> searchMapScript = queryShardContext.getLazySearchScript(mapScript, + ScriptContext.Standard.AGGS); + Function, ExecutableScript> executableCombineScript; + if (combineScript != null) { + executableCombineScript = queryShardContext.getLazyExecutableScript(combineScript, ScriptContext.Standard.AGGS); + } else { + executableCombineScript = (p) -> null; + } + return new ScriptedMetricAggregatorFactory(name, searchMapScript, executableInitScript, executableCombineScript, reduceScript, + params, context, parent, subfactoriesBuilder, metaData); } + @Override protected XContentBuilder internalXContent(XContentBuilder builder, Params builderParams) throws IOException { builder.startObject(); @@ -231,16 +250,16 @@ public static ScriptedMetricAggregationBuilder parse(String aggregationName, Que if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_OBJECT || token == XContentParser.Token.VALUE_STRING) { - if (context.getParseFieldMatcher().match(currentFieldName, INIT_SCRIPT_FIELD)) { - initScript = Script.parse(parser, context.getParseFieldMatcher(), context.getDefaultScriptLanguage()); - } else if (context.getParseFieldMatcher().match(currentFieldName, MAP_SCRIPT_FIELD)) { - mapScript = Script.parse(parser, context.getParseFieldMatcher(), context.getDefaultScriptLanguage()); - } else if (context.getParseFieldMatcher().match(currentFieldName, COMBINE_SCRIPT_FIELD)) { - combineScript = Script.parse(parser, context.getParseFieldMatcher(), context.getDefaultScriptLanguage()); - } else if (context.getParseFieldMatcher().match(currentFieldName, REDUCE_SCRIPT_FIELD)) { - reduceScript = Script.parse(parser, context.getParseFieldMatcher(), context.getDefaultScriptLanguage()); + if (INIT_SCRIPT_FIELD.match(currentFieldName)) { + initScript = Script.parse(parser, context.getDefaultScriptLanguage()); + } else if (MAP_SCRIPT_FIELD.match(currentFieldName)) { + mapScript = Script.parse(parser, context.getDefaultScriptLanguage()); + } else if (COMBINE_SCRIPT_FIELD.match(currentFieldName)) { + combineScript = Script.parse(parser, context.getDefaultScriptLanguage()); + } else if (REDUCE_SCRIPT_FIELD.match(currentFieldName)) { + reduceScript = Script.parse(parser, context.getDefaultScriptLanguage()); } else if (token == XContentParser.Token.START_OBJECT && - context.getParseFieldMatcher().match(currentFieldName, PARAMS_FIELD)) { + 
PARAMS_FIELD.match(currentFieldName)) { params = parser.map(); } else { throw new ParsingException(parser.getTokenLocation(), @@ -275,7 +294,7 @@ public static ScriptedMetricAggregationBuilder parse(String aggregationName, Que } @Override - public String getWriteableName() { + public String getType() { return NAME; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregator.java index e2d3034fa1165..cee7b3402f3e4 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregator.java @@ -23,8 +23,6 @@ import org.elasticsearch.script.ExecutableScript; import org.elasticsearch.script.LeafSearchScript; import org.elasticsearch.script.Script; -import org.elasticsearch.script.ScriptContext; -import org.elasticsearch.script.ScriptService; import org.elasticsearch.script.SearchScript; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.InternalAggregation; @@ -32,10 +30,9 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.metrics.MetricsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; -import java.util.Collections; import java.util.List; import java.util.Map; @@ -46,21 +43,14 @@ public class ScriptedMetricAggregator extends MetricsAggregator { private final Script reduceScript; private Map params; - protected ScriptedMetricAggregator(String name, Script initScript, Script mapScript, Script combineScript, Script reduceScript, - Map params, AggregationContext context, Aggregator parent, List pipelineAggregators, Map metaData) + protected ScriptedMetricAggregator(String name, SearchScript mapScript, ExecutableScript combineScript, + Script reduceScript, + Map params, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); this.params = params; - ScriptService scriptService = context.searchContext().scriptService(); - if (initScript != null) { - scriptService.executable(initScript, ScriptContext.Standard.AGGS, Collections.emptyMap()).run(); - } - this.mapScript = scriptService.search(context.searchContext().lookup(), mapScript, ScriptContext.Standard.AGGS, Collections.emptyMap()); - if (combineScript != null) { - this.combineScript = scriptService.executable(combineScript, ScriptContext.Standard.AGGS, Collections.emptyMap()); - } else { - this.combineScript = null; - } + this.mapScript = mapScript; + this.combineScript = combineScript; this.reduceScript = reduceScript; } @@ -73,7 +63,7 @@ public boolean needsScores() { public LeafBucketCollector getLeafCollector(LeafReaderContext ctx, final LeafBucketCollector sub) throws IOException { final LeafSearchScript leafMapScript = mapScript.getLeafSearchScript(ctx); - return new LeafBucketCollectorBase(sub, mapScript) { + return new LeafBucketCollectorBase(sub, leafMapScript) { @Override public void collect(int doc, long bucket) throws IOException { assert bucket == 0 : bucket; diff --git 
a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregatorFactory.java index f89e99f44b31e..bac2becc8e4e9 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregatorFactory.java @@ -19,14 +19,14 @@ package org.elasticsearch.search.aggregations.metrics.scripted; +import org.elasticsearch.script.ExecutableScript; import org.elasticsearch.script.Script; +import org.elasticsearch.script.SearchScript; import org.elasticsearch.search.SearchParseException; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; @@ -34,22 +34,23 @@ import java.util.HashMap; import java.util.List; import java.util.Map; -import java.util.Map.Entry; +import java.util.function.Function; public class ScriptedMetricAggregatorFactory extends AggregatorFactory { - private final Script initScript; - private final Script mapScript; - private final Script combineScript; + private final Function, SearchScript> mapScript; + private final Function, ExecutableScript> combineScript; private final Script reduceScript; private final Map params; + private final Function, ExecutableScript> initScript; - public ScriptedMetricAggregatorFactory(String name, Type type, Script initScript, Script mapScript, Script combineScript, - Script reduceScript, Map params, AggregationContext context, AggregatorFactory parent, + public ScriptedMetricAggregatorFactory(String name, Function, SearchScript> mapScript, + Function, ExecutableScript> initScript, Function, ExecutableScript> combineScript, + Script reduceScript, Map params, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactories, Map metaData) throws IOException { - super(name, type, context, parent, subFactories, metaData); - this.initScript = initScript; + super(name, context, parent, subFactories, metaData); this.mapScript = mapScript; + this.initScript = initScript; this.combineScript = combineScript; this.reduceScript = reduceScript; this.params = params; @@ -63,21 +64,23 @@ public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBu } Map params = this.params; if (params != null) { - params = deepCopyParams(params, context.searchContext()); + params = deepCopyParams(params, context); } else { params = new HashMap<>(); params.put("_agg", new HashMap()); } - return new ScriptedMetricAggregator(name, insertParams(initScript, params), insertParams(mapScript, params), - insertParams(combineScript, params), deepCopyScript(reduceScript, context.searchContext()), params, context, parent, - pipelineAggregators, metaData); - } - private static Script insertParams(Script script, Map params) { - if (script == null) { - return null; + final ExecutableScript initScript = this.initScript.apply(params); + final SearchScript mapScript = this.mapScript.apply(params); + final ExecutableScript combineScript 
= this.combineScript.apply(params); + + final Script reduceScript = deepCopyScript(this.reduceScript, context); + if (initScript != null) { + initScript.run(); } - return new Script(script.getScript(), script.getType(), script.getLang(), params); + return new ScriptedMetricAggregator(name, mapScript, + combineScript, reduceScript, params, context, parent, + pipelineAggregators, metaData); } private static Script deepCopyScript(Script script, SearchContext context) { @@ -86,7 +89,7 @@ private static Script deepCopyScript(Script script, SearchContext context) { if (params != null) { params = deepCopyParams(params, context); } - return new Script(script.getScript(), script.getType(), script.getLang(), params); + return new Script(script.getType(), script.getLang(), script.getIdOrCode(), params); } else { return null; } @@ -98,26 +101,27 @@ private static T deepCopyParams(T original, SearchContext context) { if (original instanceof Map) { Map originalMap = (Map) original; Map clonedMap = new HashMap<>(); - for (Entry e : originalMap.entrySet()) { + for (Map.Entry e : originalMap.entrySet()) { clonedMap.put(deepCopyParams(e.getKey(), context), deepCopyParams(e.getValue(), context)); } clone = (T) clonedMap; } else if (original instanceof List) { List originalList = (List) original; - List clonedList = new ArrayList(); + List clonedList = new ArrayList<>(); for (Object o : originalList) { clonedList.add(deepCopyParams(o, context)); } clone = (T) clonedList; } else if (original instanceof String || original instanceof Integer || original instanceof Long || original instanceof Short - || original instanceof Byte || original instanceof Float || original instanceof Double || original instanceof Character - || original instanceof Boolean) { + || original instanceof Byte || original instanceof Float || original instanceof Double || original instanceof Character + || original instanceof Boolean) { clone = original; } else { throw new SearchParseException(context, - "Can only clone primitives, String, ArrayList, and HashMap. Found: " + original.getClass().getCanonicalName(), null); + "Can only clone primitives, String, ArrayList, and HashMap. Found: " + original.getClass().getCanonicalName(), null); } return clone; } + } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/InternalStats.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/InternalStats.java index e060826c24c0b..253825b75e1b2 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/InternalStats.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/InternalStats.java @@ -111,11 +111,6 @@ public double getSum() { return sum; } - @Override - public String getCountAsString() { - return valueAsString(Metrics.count.name()); - } - @Override public String getMinAsString() { return valueAsString(Metrics.min.name()); @@ -181,21 +176,28 @@ static class Fields { @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { builder.field(Fields.COUNT, count); - builder.field(Fields.MIN, count != 0 ? min : null); - builder.field(Fields.MAX, count != 0 ? max : null); - builder.field(Fields.AVG, count != 0 ? getAvg() : null); - builder.field(Fields.SUM, count != 0 ? 
sum : null); - if (count != 0 && format != DocValueFormat.RAW) { - builder.field(Fields.MIN_AS_STRING, format.format(min)); - builder.field(Fields.MAX_AS_STRING, format.format(max)); - builder.field(Fields.AVG_AS_STRING, format.format(getAvg())); - builder.field(Fields.SUM_AS_STRING, format.format(sum)); + if (count != 0) { + builder.field(Fields.MIN, min); + builder.field(Fields.MAX, max); + builder.field(Fields.AVG, getAvg()); + builder.field(Fields.SUM, sum); + if (format != DocValueFormat.RAW) { + builder.field(Fields.MIN_AS_STRING, format.format(min)); + builder.field(Fields.MAX_AS_STRING, format.format(max)); + builder.field(Fields.AVG_AS_STRING, format.format(getAvg())); + builder.field(Fields.SUM_AS_STRING, format.format(sum)); + } + } else { + builder.nullField(Fields.MIN); + builder.nullField(Fields.MAX); + builder.nullField(Fields.AVG); + builder.nullField(Fields.SUM); } - otherStatsToXCotent(builder, params); + otherStatsToXContent(builder, params); return builder; } - protected XContentBuilder otherStatsToXCotent(XContentBuilder builder, Params params) throws IOException { + protected XContentBuilder otherStatsToXContent(XContentBuilder builder, Params params) throws IOException { return builder; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/ParsedStats.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/ParsedStats.java new file mode 100644 index 0000000000000..239548ecdebc6 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/ParsedStats.java @@ -0,0 +1,155 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.metrics.stats; + +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser.ValueType; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.ParsedAggregation; +import org.elasticsearch.search.aggregations.metrics.stats.InternalStats.Fields; + +import java.io.IOException; +import java.util.HashMap; +import java.util.Map; + +public class ParsedStats extends ParsedAggregation implements Stats { + + protected long count; + protected double min; + protected double max; + protected double sum; + protected double avg; + + protected final Map valueAsString = new HashMap<>(); + + @Override + public long getCount() { + return count; + } + + @Override + public double getMin() { + return min; + } + + @Override + public double getMax() { + return max; + } + + @Override + public double getAvg() { + return avg; + } + + @Override + public double getSum() { + return sum; + } + + @Override + public String getMinAsString() { + return valueAsString.getOrDefault(Fields.MIN_AS_STRING, Double.toString(min)); + } + + @Override + public String getMaxAsString() { + return valueAsString.getOrDefault(Fields.MAX_AS_STRING, Double.toString(max)); + } + + @Override + public String getAvgAsString() { + return valueAsString.getOrDefault(Fields.AVG_AS_STRING, Double.toString(avg)); + } + + @Override + public String getSumAsString() { + return valueAsString.getOrDefault(Fields.SUM_AS_STRING, Double.toString(sum)); + } + + @Override + public String getType() { + return StatsAggregationBuilder.NAME; + } + + @Override + protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + builder.field(Fields.COUNT, count); + if (count != 0) { + builder.field(Fields.MIN, min); + builder.field(Fields.MAX, max); + builder.field(Fields.AVG, avg); + builder.field(Fields.SUM, sum); + if (valueAsString.get(Fields.MIN_AS_STRING) != null) { + builder.field(Fields.MIN_AS_STRING, getMinAsString()); + builder.field(Fields.MAX_AS_STRING, getMaxAsString()); + builder.field(Fields.AVG_AS_STRING, getAvgAsString()); + builder.field(Fields.SUM_AS_STRING, getSumAsString()); + } + } else { + builder.nullField(Fields.MIN); + builder.nullField(Fields.MAX); + builder.nullField(Fields.AVG); + builder.nullField(Fields.SUM); + } + otherStatsToXContent(builder, params); + return builder; + } + + private static final ObjectParser PARSER = new ObjectParser<>(ParsedStats.class.getSimpleName(), true, + ParsedStats::new); + + static { + declareStatsFields(PARSER); + } + + protected static void declareStatsFields(ObjectParser objectParser) { + declareAggregationFields(objectParser); + objectParser.declareLong((agg, value) -> agg.count = value, new ParseField(Fields.COUNT)); + objectParser.declareField((agg, value) -> agg.min = value, (parser, context) -> parseDouble(parser, Double.POSITIVE_INFINITY), + new ParseField(Fields.MIN), ValueType.DOUBLE_OR_NULL); + objectParser.declareField((agg, value) -> agg.max = value, (parser, context) -> parseDouble(parser, Double.NEGATIVE_INFINITY), + new ParseField(Fields.MAX), ValueType.DOUBLE_OR_NULL); + objectParser.declareField((agg, value) -> agg.avg = value, (parser, context) -> parseDouble(parser, 0), new ParseField(Fields.AVG), + ValueType.DOUBLE_OR_NULL); + objectParser.declareField((agg, value) -> agg.sum = value, (parser, 
context) -> parseDouble(parser, 0), new ParseField(Fields.SUM), + ValueType.DOUBLE_OR_NULL); + objectParser.declareString((agg, value) -> agg.valueAsString.put(Fields.MIN_AS_STRING, value), + new ParseField(Fields.MIN_AS_STRING)); + objectParser.declareString((agg, value) -> agg.valueAsString.put(Fields.MAX_AS_STRING, value), + new ParseField(Fields.MAX_AS_STRING)); + objectParser.declareString((agg, value) -> agg.valueAsString.put(Fields.AVG_AS_STRING, value), + new ParseField(Fields.AVG_AS_STRING)); + objectParser.declareString((agg, value) -> agg.valueAsString.put(Fields.SUM_AS_STRING, value), + new ParseField(Fields.SUM_AS_STRING)); + } + + public static ParsedStats fromXContent(XContentParser parser, final String name) { + ParsedStats parsedStats = PARSER.apply(parser, null); + parsedStats.setName(name); + return parsedStats; + } + + protected XContentBuilder otherStatsToXContent(XContentBuilder builder, Params params) throws IOException { + return builder; + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/Stats.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/Stats.java index 4910dc140026d..46620f51dc2fc 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/Stats.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/Stats.java @@ -50,11 +50,6 @@ public interface Stats extends NumericMetricsAggregation.MultiValue { */ double getSum(); - /** - * @return The number of values that were aggregated as a String. - */ - String getCountAsString(); - /** * @return The minimum value of all aggregated values as a String. */ diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggregationBuilder.java index e26edbcc290e8..390be44d74716 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggregationBuilder.java @@ -21,33 +21,45 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; public class StatsAggregationBuilder extends ValuesSourceAggregationBuilder.LeafOnly { public static final String NAME = "stats"; - private static final Type 
TYPE = new Type(NAME); + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(StatsAggregationBuilder.NAME); + ValuesSourceParserHelper.declareNumericFields(PARSER, true, true, false); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new StatsAggregationBuilder(aggregationName), context); + } public StatsAggregationBuilder(String name) { - super(name, TYPE, ValuesSourceType.NUMERIC, ValueType.NUMERIC); + super(name, ValuesSourceType.NUMERIC, ValueType.NUMERIC); } /** * Read from a stream. */ public StatsAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.NUMERIC, ValueType.NUMERIC); + super(in, ValuesSourceType.NUMERIC, ValueType.NUMERIC); } @Override @@ -56,9 +68,9 @@ protected void innerWriteTo(StreamOutput out) { } @Override - protected StatsAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, + protected StatsAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new StatsAggregatorFactory(name, type, config, context, parent, subFactoriesBuilder, metaData); + return new StatsAggregatorFactory(name, config, context, parent, subFactoriesBuilder, metaData); } @Override @@ -77,7 +89,7 @@ protected boolean innerEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggregator.java index 374cbcaf0e6c0..8357cae3cd344 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggregator.java @@ -31,8 +31,8 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.metrics.NumericMetricsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -53,7 +53,7 @@ public class StatsAggregator extends NumericMetricsAggregator.MultiValue { public StatsAggregator(String name, ValuesSource.Numeric valuesSource, DocValueFormat format, - AggregationContext context, + SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggregatorFactory.java index a4f78f53d0788..a6e59d7c75bf0 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggregatorFactory.java @@ -22,13 +22,12 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import 
org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -36,9 +35,9 @@ public class StatsAggregatorFactory extends ValuesSourceAggregatorFactory { - public StatsAggregatorFactory(String name, Type type, ValuesSourceConfig config, AggregationContext context, + public StatsAggregatorFactory(String name, ValuesSourceConfig config, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsParser.java deleted file mode 100644 index 60e3d2ef0aad0..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsParser.java +++ /dev/null @@ -1,51 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ -package org.elasticsearch.search.aggregations.metrics.stats; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.NumericValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - -/** - * - */ -public class StatsParser extends NumericValuesSourceParser { - - public StatsParser() { - super(true, true, false); - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, XContentParser.Token token, - XContentParseContext context, Map otherOptions) throws IOException { - return false; - } - - @Override - protected StatsAggregationBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - return new StatsAggregationBuilder(aggregationName); - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStats.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStats.java index 1b235a6cfecaa..8a198a5825a3d 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStats.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStats.java @@ -67,7 +67,7 @@ public interface ExtendedStats extends Stats { String getVarianceAsString(); - public enum Bounds { + enum Bounds { UPPER, LOWER } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregationBuilder.java index a20b8fa676498..94857c8753f08 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregationBuilder.java @@ -21,17 +21,20 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Objects; 
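Note on the pattern in the hunks above and below: the hand-written per-aggregation parsers (StatsParser here, ExtendedStatsParser and SumParser further down, all deleted by this patch) are replaced by a static, declarative ObjectParser owned by each builder, and getWriteableName() gives way to getType(). The following is a minimal sketch of the resulting shape, not a verbatim excerpt; in particular the generic type parameters are reconstructed as an assumption, since the flattened hunks above drop the angle brackets.

```java
// Inside StatsAggregationBuilder (sketch only). ObjectParser<StatsAggregationBuilder, QueryParseContext>
// is an assumed parameterization; the hunk above only shows the raw ObjectParser reference.
private static final ObjectParser<StatsAggregationBuilder, QueryParseContext> PARSER;
static {
    PARSER = new ObjectParser<>(StatsAggregationBuilder.NAME);
    // The shared helper declares the common value-source options once, replacing the
    // token() overrides of the deleted NumericValuesSourceParser subclasses.
    ValuesSourceParserHelper.declareNumericFields(PARSER, true, true, false);
}

public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException {
    // The aggregation name is the only state that cannot come from the XContent body,
    // so it is supplied through a freshly constructed builder.
    return PARSER.parse(context.parser(), new StatsAggregationBuilder(aggregationName), context);
}
```

For extended_stats the same block additionally registers the sigma option, as the next hunk shows: PARSER.declareDouble(ExtendedStatsAggregationBuilder::sigma, ExtendedStatsAggregator.SIGMA_FIELD).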
@@ -39,19 +42,29 @@ public class ExtendedStatsAggregationBuilder extends ValuesSourceAggregationBuilder.LeafOnly { public static final String NAME = "extended_stats"; - public static final Type TYPE = new Type(NAME); + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(ExtendedStatsAggregationBuilder.NAME); + ValuesSourceParserHelper.declareNumericFields(PARSER, true, true, false); + PARSER.declareDouble(ExtendedStatsAggregationBuilder::sigma, ExtendedStatsAggregator.SIGMA_FIELD); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new ExtendedStatsAggregationBuilder(aggregationName), context); + } private double sigma = 2.0; public ExtendedStatsAggregationBuilder(String name) { - super(name, TYPE, ValuesSourceType.NUMERIC, ValueType.NUMERIC); + super(name, ValuesSourceType.NUMERIC, ValueType.NUMERIC); } /** * Read from a stream. */ public ExtendedStatsAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.NUMERIC, ValueType.NUMERIC); + super(in, ValuesSourceType.NUMERIC, ValueType.NUMERIC); sigma = in.readDouble(); } @@ -73,9 +86,9 @@ public double sigma() { } @Override - protected ExtendedStatsAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, + protected ExtendedStatsAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new ExtendedStatsAggregatorFactory(name, type, config, sigma, context, parent, subFactoriesBuilder, metaData); + return new ExtendedStatsAggregatorFactory(name, config, sigma, context, parent, subFactoriesBuilder, metaData); } @Override @@ -96,7 +109,7 @@ protected boolean innerEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregator.java index 7715d4b713ec0..0cb73118632e0 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregator.java @@ -32,8 +32,8 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.metrics.NumericMetricsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -57,7 +57,7 @@ public class ExtendedStatsAggregator extends NumericMetricsAggregator.MultiValue DoubleArray sumOfSqrs; public ExtendedStatsAggregator(String name, ValuesSource.Numeric valuesSource, DocValueFormat formatter, - AggregationContext context, Aggregator parent, double sigma, List pipelineAggregators, + SearchContext context, Aggregator parent, double sigma, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); diff --git 
a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregatorFactory.java index 82b0862482e56..521ea8f68a67d 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregatorFactory.java @@ -22,13 +22,12 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -38,10 +37,10 @@ public class ExtendedStatsAggregatorFactory extends ValuesSourceAggregatorFactor private final double sigma; - public ExtendedStatsAggregatorFactory(String name, Type type, ValuesSourceConfig config, double sigma, - AggregationContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, + public ExtendedStatsAggregatorFactory(String name, ValuesSourceConfig config, double sigma, + SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); this.sigma = sigma; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsParser.java deleted file mode 100644 index 9644d26e93a6a..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsParser.java +++ /dev/null @@ -1,62 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ -package org.elasticsearch.search.aggregations.metrics.stats.extended; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.NumericValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - -/** - * - */ -public class ExtendedStatsParser extends NumericValuesSourceParser { - - public ExtendedStatsParser() { - super(true, true, false); - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, XContentParser.Token token, - XContentParseContext context, Map otherOptions) throws IOException { - if (context.matchField(currentFieldName, ExtendedStatsAggregator.SIGMA_FIELD)) { - if (token.isValue()) { - otherOptions.put(ExtendedStatsAggregator.SIGMA_FIELD, context.getParser().doubleValue()); - return true; - } - } - return false; - } - - @Override - protected ExtendedStatsAggregationBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - ExtendedStatsAggregationBuilder factory = new ExtendedStatsAggregationBuilder(aggregationName); - Double sigma = (Double) otherOptions.get(ExtendedStatsAggregator.SIGMA_FIELD); - if (sigma != null) { - factory.sigma(sigma); - } - return factory; - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/InternalExtendedStats.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/InternalExtendedStats.java index d848001171c6b..be999849c7fa7 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/InternalExtendedStats.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/InternalExtendedStats.java @@ -164,25 +164,38 @@ static class Fields { } @Override - protected XContentBuilder otherStatsToXCotent(XContentBuilder builder, Params params) throws IOException { - builder.field(Fields.SUM_OF_SQRS, count != 0 ? sumOfSqrs : null); - builder.field(Fields.VARIANCE, count != 0 ? getVariance() : null); - builder.field(Fields.STD_DEVIATION, count != 0 ? getStdDeviation() : null); - builder.startObject(Fields.STD_DEVIATION_BOUNDS) - .field(Fields.UPPER, count != 0 ? getStdDeviationBound(Bounds.UPPER) : null) - .field(Fields.LOWER, count != 0 ? 
getStdDeviationBound(Bounds.LOWER) : null) - .endObject(); - - if (count != 0 && format != DocValueFormat.RAW) { - builder.field(Fields.SUM_OF_SQRS_AS_STRING, format.format(sumOfSqrs)); - builder.field(Fields.VARIANCE_AS_STRING, format.format(getVariance())); - builder.field(Fields.STD_DEVIATION_AS_STRING, getStdDeviationAsString()); - - builder.startObject(Fields.STD_DEVIATION_BOUNDS_AS_STRING) - .field(Fields.UPPER, getStdDeviationBoundAsString(Bounds.UPPER)) - .field(Fields.LOWER, getStdDeviationBoundAsString(Bounds.LOWER)) - .endObject(); - + protected XContentBuilder otherStatsToXContent(XContentBuilder builder, Params params) throws IOException { + if (count != 0) { + builder.field(Fields.SUM_OF_SQRS, sumOfSqrs); + builder.field(Fields.VARIANCE, getVariance()); + builder.field(Fields.STD_DEVIATION, getStdDeviation()); + builder.startObject(Fields.STD_DEVIATION_BOUNDS); + { + builder.field(Fields.UPPER, getStdDeviationBound(Bounds.UPPER)); + builder.field(Fields.LOWER, getStdDeviationBound(Bounds.LOWER)); + } + builder.endObject(); + if (format != DocValueFormat.RAW) { + builder.field(Fields.SUM_OF_SQRS_AS_STRING, format.format(sumOfSqrs)); + builder.field(Fields.VARIANCE_AS_STRING, format.format(getVariance())); + builder.field(Fields.STD_DEVIATION_AS_STRING, getStdDeviationAsString()); + builder.startObject(Fields.STD_DEVIATION_BOUNDS_AS_STRING); + { + builder.field(Fields.UPPER, getStdDeviationBoundAsString(Bounds.UPPER)); + builder.field(Fields.LOWER, getStdDeviationBoundAsString(Bounds.LOWER)); + } + builder.endObject(); + } + } else { + builder.nullField(Fields.SUM_OF_SQRS); + builder.nullField(Fields.VARIANCE); + builder.nullField(Fields.STD_DEVIATION); + builder.startObject(Fields.STD_DEVIATION_BOUNDS); + { + builder.nullField(Fields.UPPER); + builder.nullField(Fields.LOWER); + } + builder.endObject(); } return builder; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ParsedExtendedStats.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ParsedExtendedStats.java new file mode 100644 index 0000000000000..59311127368f5 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ParsedExtendedStats.java @@ -0,0 +1,188 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.metrics.stats.extended; + +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.collect.Tuple; +import org.elasticsearch.common.xcontent.ConstructingObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser.ValueType; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.metrics.stats.ParsedStats; +import org.elasticsearch.search.aggregations.metrics.stats.extended.InternalExtendedStats.Fields; + +import java.io.IOException; + +import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg; + +public class ParsedExtendedStats extends ParsedStats implements ExtendedStats { + + protected double sumOfSquares; + protected double variance; + protected double stdDeviation; + protected double stdDeviationBoundUpper; + protected double stdDeviationBoundLower; + protected double sum; + protected double avg; + + @Override + public String getType() { + return ExtendedStatsAggregationBuilder.NAME; + } + + @Override + public double getSumOfSquares() { + return sumOfSquares; + } + + @Override + public double getVariance() { + return variance; + } + + @Override + public double getStdDeviation() { + return stdDeviation; + } + + private void setStdDeviationBounds(Tuple bounds) { + this.stdDeviationBoundLower = bounds.v1(); + this.stdDeviationBoundUpper = bounds.v2(); + } + + @Override + public double getStdDeviationBound(Bounds bound) { + return (bound.equals(Bounds.LOWER)) ? stdDeviationBoundLower : stdDeviationBoundUpper; + } + + @Override + public String getStdDeviationAsString() { + return valueAsString.getOrDefault(Fields.STD_DEVIATION_AS_STRING, Double.toString(stdDeviation)); + } + + private void setStdDeviationBoundsAsString(Tuple boundsAsString) { + this.valueAsString.put(Fields.STD_DEVIATION_BOUNDS_AS_STRING + "_lower", boundsAsString.v1()); + this.valueAsString.put(Fields.STD_DEVIATION_BOUNDS_AS_STRING + "_upper", boundsAsString.v2()); + } + + @Override + public String getStdDeviationBoundAsString(Bounds bound) { + if (bound.equals(Bounds.LOWER)) { + return valueAsString.getOrDefault(Fields.STD_DEVIATION_BOUNDS_AS_STRING + "_lower", Double.toString(stdDeviationBoundLower)); + } else { + return valueAsString.getOrDefault(Fields.STD_DEVIATION_BOUNDS_AS_STRING + "_upper", Double.toString(stdDeviationBoundUpper)); + } + } + + @Override + public String getSumOfSquaresAsString() { + return valueAsString.getOrDefault(Fields.SUM_OF_SQRS_AS_STRING, Double.toString(sumOfSquares)); + } + + @Override + public String getVarianceAsString() { + return valueAsString.getOrDefault(Fields.VARIANCE_AS_STRING, Double.toString(variance)); + } + + @Override + protected XContentBuilder otherStatsToXContent(XContentBuilder builder, Params params) throws IOException { + if (count != 0) { + builder.field(Fields.SUM_OF_SQRS, sumOfSquares); + builder.field(Fields.VARIANCE, getVariance()); + builder.field(Fields.STD_DEVIATION, getStdDeviation()); + builder.startObject(Fields.STD_DEVIATION_BOUNDS); + { + builder.field(Fields.UPPER, getStdDeviationBound(Bounds.UPPER)); + builder.field(Fields.LOWER, getStdDeviationBound(Bounds.LOWER)); + } + builder.endObject(); + if (valueAsString.containsKey(Fields.SUM_OF_SQRS_AS_STRING)) { + builder.field(Fields.SUM_OF_SQRS_AS_STRING, getSumOfSquaresAsString()); + builder.field(Fields.VARIANCE_AS_STRING, 
getVarianceAsString()); + builder.field(Fields.STD_DEVIATION_AS_STRING, getStdDeviationAsString()); + builder.startObject(Fields.STD_DEVIATION_BOUNDS_AS_STRING); + { + builder.field(Fields.UPPER, getStdDeviationBoundAsString(Bounds.UPPER)); + builder.field(Fields.LOWER, getStdDeviationBoundAsString(Bounds.LOWER)); + } + builder.endObject(); + } + } else { + builder.nullField(Fields.SUM_OF_SQRS); + builder.nullField(Fields.VARIANCE); + builder.nullField(Fields.STD_DEVIATION); + builder.startObject(Fields.STD_DEVIATION_BOUNDS); + { + builder.nullField(Fields.UPPER); + builder.nullField(Fields.LOWER); + } + builder.endObject(); + } + return builder; + } + + private static final ObjectParser PARSER = new ObjectParser<>(ParsedExtendedStats.class.getSimpleName(), + true, ParsedExtendedStats::new); + + private static final ConstructingObjectParser, Void> STD_BOUNDS_PARSER = new ConstructingObjectParser<>( + ParsedExtendedStats.class.getSimpleName() + "_STD_BOUNDS", true, args -> new Tuple<>((Double) args[0], (Double) args[1])); + + private static final ConstructingObjectParser, Void> STD_BOUNDS_AS_STRING_PARSER = new ConstructingObjectParser<>( + ParsedExtendedStats.class.getSimpleName() + "_STD_BOUNDS_AS_STRING", true, + args -> new Tuple<>((String) args[0], (String) args[1])); + + static { + STD_BOUNDS_PARSER.declareField(constructorArg(), (parser, context) -> parseDouble(parser, 0), + new ParseField(Fields.LOWER), ValueType.DOUBLE_OR_NULL); + STD_BOUNDS_PARSER.declareField(constructorArg(), (parser, context) -> parseDouble(parser, 0), + new ParseField(Fields.UPPER), ValueType.DOUBLE_OR_NULL); + STD_BOUNDS_AS_STRING_PARSER.declareString(constructorArg(), new ParseField(Fields.LOWER)); + STD_BOUNDS_AS_STRING_PARSER.declareString(constructorArg(), new ParseField(Fields.UPPER)); + declareExtendedStatsFields(PARSER); + } + + protected static void declareExtendedStatsFields(ObjectParser objectParser) { + declareAggregationFields(objectParser); + declareStatsFields(objectParser); + objectParser.declareField((agg, value) -> agg.sumOfSquares = value, (parser, context) -> parseDouble(parser, 0), + new ParseField(Fields.SUM_OF_SQRS), ValueType.DOUBLE_OR_NULL); + objectParser.declareField((agg, value) -> agg.variance = value, (parser, context) -> parseDouble(parser, 0), + new ParseField(Fields.VARIANCE), ValueType.DOUBLE_OR_NULL); + objectParser.declareField((agg, value) -> agg.stdDeviation = value, (parser, context) -> parseDouble(parser, 0), + new ParseField(Fields.STD_DEVIATION), ValueType.DOUBLE_OR_NULL); + objectParser.declareObject(ParsedExtendedStats::setStdDeviationBounds, STD_BOUNDS_PARSER, + new ParseField(Fields.STD_DEVIATION_BOUNDS)); + objectParser.declareString((agg, value) -> agg.valueAsString.put(Fields.SUM_OF_SQRS_AS_STRING, value), + new ParseField(Fields.SUM_OF_SQRS_AS_STRING)); + objectParser.declareString((agg, value) -> agg.valueAsString.put(Fields.VARIANCE_AS_STRING, value), + new ParseField(Fields.VARIANCE_AS_STRING)); + objectParser.declareString((agg, value) -> agg.valueAsString.put(Fields.STD_DEVIATION_AS_STRING, value), + new ParseField(Fields.STD_DEVIATION_AS_STRING)); + objectParser.declareObject(ParsedExtendedStats::setStdDeviationBoundsAsString, STD_BOUNDS_AS_STRING_PARSER, + new ParseField(Fields.STD_DEVIATION_BOUNDS_AS_STRING)); + } + + public static ParsedExtendedStats fromXContent(XContentParser parser, final String name) { + ParsedExtendedStats parsedStats = PARSER.apply(parser, null); + parsedStats.setName(name); + return parsedStats; + } +} diff --git 
a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/InternalSum.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/InternalSum.java index 70c69b65a1759..b93d97a663539 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/InternalSum.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/InternalSum.java @@ -81,9 +81,9 @@ public InternalSum doReduce(List aggregations, ReduceContex @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - builder.field(CommonFields.VALUE, sum); + builder.field(CommonFields.VALUE.getPreferredName(), sum); if (format != DocValueFormat.RAW) { - builder.field(CommonFields.VALUE_AS_STRING, format.format(sum)); + builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), format.format(sum)); } return builder; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/ParsedSum.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/ParsedSum.java new file mode 100644 index 0000000000000..a51f03d356549 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/ParsedSum.java @@ -0,0 +1,61 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.metrics.sum; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.metrics.ParsedSingleValueNumericMetricsAggregation; + +import java.io.IOException; + +public class ParsedSum extends ParsedSingleValueNumericMetricsAggregation implements Sum { + + @Override + public double getValue() { + return value(); + } + + @Override + public String getType() { + return SumAggregationBuilder.NAME; + } + + @Override + protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + builder.field(CommonFields.VALUE.getPreferredName(), value); + if (valueAsString != null) { + builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), valueAsString); + } + return builder; + } + + private static final ObjectParser PARSER = new ObjectParser<>(ParsedSum.class.getSimpleName(), true, ParsedSum::new); + + static { + declareSingleValueFields(PARSER, Double.NEGATIVE_INFINITY); + } + + public static ParsedSum fromXContent(XContentParser parser, final String name) { + ParsedSum sum = PARSER.apply(parser, null); + sum.setName(name); + return sum; + } +} \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregationBuilder.java index 2f285b146c1ae..7118b14d0cdca 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregationBuilder.java @@ -21,33 +21,45 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; public class SumAggregationBuilder extends ValuesSourceAggregationBuilder.LeafOnly { public static final String NAME = "sum"; - private static final Type TYPE = new Type(NAME); + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(SumAggregationBuilder.NAME); + ValuesSourceParserHelper.declareNumericFields(PARSER, true, true, false); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return 
PARSER.parse(context.parser(), new SumAggregationBuilder(aggregationName), context); + } public SumAggregationBuilder(String name) { - super(name, TYPE, ValuesSourceType.NUMERIC, ValueType.NUMERIC); + super(name, ValuesSourceType.NUMERIC, ValueType.NUMERIC); } /** * Read from a stream. */ public SumAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.NUMERIC, ValueType.NUMERIC); + super(in, ValuesSourceType.NUMERIC, ValueType.NUMERIC); } @Override @@ -56,9 +68,9 @@ protected void innerWriteTo(StreamOutput out) { } @Override - protected SumAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, + protected SumAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, Builder subFactoriesBuilder) throws IOException { - return new SumAggregatorFactory(name, type, config, context, parent, subFactoriesBuilder, metaData); + return new SumAggregatorFactory(name, config, context, parent, subFactoriesBuilder, metaData); } @Override @@ -77,7 +89,7 @@ protected boolean innerEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregator.java index 0d6b5de4a80b8..d47ffdb1bbae4 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregator.java @@ -30,8 +30,8 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.metrics.NumericMetricsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -47,7 +47,7 @@ public class SumAggregator extends NumericMetricsAggregator.SingleValue { DoubleArray sums; - public SumAggregator(String name, ValuesSource.Numeric valuesSource, DocValueFormat formatter, AggregationContext context, + public SumAggregator(String name, ValuesSource.Numeric valuesSource, DocValueFormat formatter, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); this.valuesSource = valuesSource; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregatorFactory.java index 07184849c5e91..8b6103214a754 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregatorFactory.java @@ -22,13 +22,12 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import 
org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -36,9 +35,9 @@ public class SumAggregatorFactory extends ValuesSourceAggregatorFactory { - public SumAggregatorFactory(String name, Type type, ValuesSourceConfig config, AggregationContext context, + public SumAggregatorFactory(String name, ValuesSourceConfig config, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumParser.java deleted file mode 100644 index ee82829b0a71d..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumParser.java +++ /dev/null @@ -1,51 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ -package org.elasticsearch.search.aggregations.metrics.sum; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.NumericValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - -/** - * - */ -public class SumParser extends NumericValuesSourceParser { - - public SumParser() { - super(true, true, false); - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, XContentParser.Token token, - XContentParseContext context, Map otherOptions) throws IOException { - return false; - } - - @Override - protected SumAggregationBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions) { - return new SumAggregationBuilder(aggregationName); - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/InternalTopHits.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/InternalTopHits.java index 8c6758b4f3567..bcef9c1388ab1 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/InternalTopHits.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/InternalTopHits.java @@ -22,17 +22,14 @@ import org.apache.lucene.search.Sort; import org.apache.lucene.search.TopDocs; import org.apache.lucene.search.TopFieldDocs; -import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.search.SearchHit; import org.elasticsearch.search.SearchHits; import org.elasticsearch.search.aggregations.InternalAggregation; -import org.elasticsearch.search.aggregations.metrics.InternalMetricsAggregation; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.internal.InternalSearchHit; -import org.elasticsearch.search.internal.InternalSearchHits; import java.io.IOException; import java.util.List; @@ -41,13 +38,13 @@ /** * Results of the {@link TopHitsAggregator}. 
*/ -public class InternalTopHits extends InternalMetricsAggregation implements TopHits { +public class InternalTopHits extends InternalAggregation implements TopHits { private int from; private int size; private TopDocs topDocs; - private InternalSearchHits searchHits; + private SearchHits searchHits; - public InternalTopHits(String name, int from, int size, TopDocs topDocs, InternalSearchHits searchHits, + public InternalTopHits(String name, int from, int size, TopDocs topDocs, SearchHits searchHits, List pipelineAggregators, Map metaData) { super(name, pipelineAggregators, metaData); this.from = from; @@ -65,7 +62,7 @@ public InternalTopHits(StreamInput in) throws IOException { size = in.readVInt(); topDocs = Lucene.readTopDocs(in); assert topDocs != null; - searchHits = InternalSearchHits.readSearchHits(in); + searchHits = SearchHits.readSearchHits(in); } @Override @@ -88,47 +85,54 @@ public SearchHits getHits() { @Override public InternalAggregation doReduce(List aggregations, ReduceContext reduceContext) { - InternalSearchHits[] shardHits = new InternalSearchHits[aggregations.size()]; + final SearchHits[] shardHits = new SearchHits[aggregations.size()]; + final int from; + final int size; + if (reduceContext.isFinalReduce()) { + from = this.from; + size = this.size; + } else { + // if we are not in the final reduce we need to ensure we maintain all possible elements during reduce + // hence for pagination we need to maintain all hits until we are in the final phase. + from = 0; + size = this.from + this.size; + } final TopDocs reducedTopDocs; final TopDocs[] shardDocs; - try { - if (topDocs instanceof TopFieldDocs) { - Sort sort = new Sort(((TopFieldDocs) topDocs).fields); - shardDocs = new TopFieldDocs[aggregations.size()]; - for (int i = 0; i < shardDocs.length; i++) { - InternalTopHits topHitsAgg = (InternalTopHits) aggregations.get(i); - shardDocs[i] = (TopFieldDocs) topHitsAgg.topDocs; - shardHits[i] = topHitsAgg.searchHits; - } - reducedTopDocs = TopDocs.merge(sort, from, size, (TopFieldDocs[]) shardDocs); - } else { - shardDocs = new TopDocs[aggregations.size()]; - for (int i = 0; i < shardDocs.length; i++) { - InternalTopHits topHitsAgg = (InternalTopHits) aggregations.get(i); - shardDocs[i] = topHitsAgg.topDocs; - shardHits[i] = topHitsAgg.searchHits; - } - reducedTopDocs = TopDocs.merge(from, size, shardDocs); + if (topDocs instanceof TopFieldDocs) { + Sort sort = new Sort(((TopFieldDocs) topDocs).fields); + shardDocs = new TopFieldDocs[aggregations.size()]; + for (int i = 0; i < shardDocs.length; i++) { + InternalTopHits topHitsAgg = (InternalTopHits) aggregations.get(i); + shardDocs[i] = topHitsAgg.topDocs; + shardHits[i] = topHitsAgg.searchHits; } - - final int[] tracker = new int[shardHits.length]; - InternalSearchHit[] hits = new InternalSearchHit[reducedTopDocs.scoreDocs.length]; - for (int i = 0; i < reducedTopDocs.scoreDocs.length; i++) { - ScoreDoc scoreDoc = reducedTopDocs.scoreDocs[i]; - int position; - do { - position = tracker[scoreDoc.shardIndex]++; - } while (shardDocs[scoreDoc.shardIndex].scoreDocs[position] != scoreDoc); - hits[i] = (InternalSearchHit) shardHits[scoreDoc.shardIndex].getAt(position); + reducedTopDocs = TopDocs.merge(sort, from, size, (TopFieldDocs[]) shardDocs, true); + } else { + shardDocs = new TopDocs[aggregations.size()]; + for (int i = 0; i < shardDocs.length; i++) { + InternalTopHits topHitsAgg = (InternalTopHits) aggregations.get(i); + shardDocs[i] = topHitsAgg.topDocs; + shardHits[i] = topHitsAgg.searchHits; } - return new 
InternalTopHits(name, from, size, reducedTopDocs, new InternalSearchHits(hits, reducedTopDocs.totalHits, - reducedTopDocs.getMaxScore()), - pipelineAggregators(), getMetaData()); - } catch (IOException e) { - throw ExceptionsHelper.convertToElastic(e); + reducedTopDocs = TopDocs.merge(from, size, shardDocs, true); + } + + final int[] tracker = new int[shardHits.length]; + SearchHit[] hits = new SearchHit[reducedTopDocs.scoreDocs.length]; + for (int i = 0; i < reducedTopDocs.scoreDocs.length; i++) { + ScoreDoc scoreDoc = reducedTopDocs.scoreDocs[i]; + int position; + do { + position = tracker[scoreDoc.shardIndex]++; + } while (shardDocs[scoreDoc.shardIndex].scoreDocs[position] != scoreDoc); + hits[i] = shardHits[scoreDoc.shardIndex].getAt(position); } + return new InternalTopHits(name, this.from, this.size, reducedTopDocs, new SearchHits(hits, reducedTopDocs.totalHits, + reducedTopDocs.getMaxScore()), + pipelineAggregators(), getMetaData()); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/ParsedTopHits.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/ParsedTopHits.java new file mode 100644 index 0000000000000..362423abca8a3 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/ParsedTopHits.java @@ -0,0 +1,63 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.metrics.tophits; + +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.SearchHits; +import org.elasticsearch.search.aggregations.ParsedAggregation; + +import java.io.IOException; + +public class ParsedTopHits extends ParsedAggregation implements TopHits { + + private SearchHits searchHits; + + @Override + public String getType() { + return TopHitsAggregationBuilder.NAME; + } + + @Override + public SearchHits getHits() { + return searchHits; + } + + @Override + protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + return searchHits.toXContent(builder, params); + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedTopHits.class.getSimpleName(), true, ParsedTopHits::new); + static { + declareAggregationFields(PARSER); + PARSER.declareObject((topHit, searchHits) -> topHit.searchHits = searchHits, (parser, context) -> SearchHits.fromXContent(parser), + new ParseField(SearchHits.Fields.HITS)); + } + + public static ParsedTopHits fromXContent(XContentParser parser, String name) throws IOException { + ParsedTopHits aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregationBuilder.java index 828d56798463a..a8ec235c56399 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregationBuilder.java @@ -28,19 +28,21 @@ import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.script.Script; +import org.elasticsearch.script.ScriptContext; +import org.elasticsearch.script.SearchScript; import org.elasticsearch.search.aggregations.AbstractAggregationBuilder; import org.elasticsearch.search.aggregations.AggregationInitializationException; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.builder.SearchSourceBuilder; import org.elasticsearch.search.builder.SearchSourceBuilder.ScriptField; import org.elasticsearch.search.fetch.StoredFieldsContext; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; +import org.elasticsearch.search.fetch.subphase.ScriptFieldsContext; import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; +import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.sort.ScoreSortBuilder; +import org.elasticsearch.search.sort.SortAndFormats; import org.elasticsearch.search.sort.SortBuilder; import org.elasticsearch.search.sort.SortBuilders; import org.elasticsearch.search.sort.SortOrder; @@ -51,11 +53,11 @@ import java.util.HashSet; import java.util.List; import java.util.Objects; +import java.util.Optional; import 
java.util.Set; public class TopHitsAggregationBuilder extends AbstractAggregationBuilder { public static final String NAME = "top_hits"; - private static final InternalAggregation.Type TYPE = new Type(NAME); private int from = 0; private int size = 3; @@ -70,16 +72,16 @@ public class TopHitsAggregationBuilder extends AbstractAggregationBuilder(size); @@ -112,7 +114,7 @@ public TopHitsAggregationBuilder(StreamInput in) throws IOException { @Override protected void doWriteTo(StreamOutput out) throws IOException { out.writeBoolean(explain); - out.writeOptionalStreamable(fetchSourceContext); + out.writeOptionalWriteable(fetchSourceContext); boolean hasFieldDataFields = fieldDataFields != null; out.writeBoolean(hasFieldDataFields); if (hasFieldDataFields) { @@ -280,11 +282,9 @@ public HighlightBuilder highlighter() { * every hit */ public TopHitsAggregationBuilder fetchSource(boolean fetch) { - if (this.fetchSourceContext == null) { - this.fetchSourceContext = new FetchSourceContext(fetch); - } else { - this.fetchSourceContext.fetchSource(fetch); - } + FetchSourceContext fetchSourceContext = this.fetchSourceContext != null ? this.fetchSourceContext + : FetchSourceContext.FETCH_SOURCE; + this.fetchSourceContext = new FetchSourceContext(fetch, fetchSourceContext.includes(), fetchSourceContext.excludes()); return this; } @@ -319,7 +319,9 @@ public TopHitsAggregationBuilder fetchSource(@Nullable String include, @Nullable * pattern to filter the returned _source */ public TopHitsAggregationBuilder fetchSource(@Nullable String[] includes, @Nullable String[] excludes) { - fetchSourceContext = new FetchSourceContext(includes, excludes); + FetchSourceContext fetchSourceContext = this.fetchSourceContext != null ? this.fetchSourceContext + : FetchSourceContext.FETCH_SOURCE; + this.fetchSourceContext = new FetchSourceContext(fetchSourceContext.fetchSource(), includes, excludes); return this; } @@ -521,15 +523,31 @@ public boolean trackScores() { @Override public TopHitsAggregationBuilder subAggregations(Builder subFactories) { - throw new AggregationInitializationException("Aggregator [" + name + "] of type [" + type + "] cannot accept sub-aggregations"); + throw new AggregationInitializationException("Aggregator [" + name + "] of type [" + + getType() + "] cannot accept sub-aggregations"); } @Override - protected TopHitsAggregatorFactory doBuild(AggregationContext context, AggregatorFactory parent, Builder subfactoriesBuilder) + protected TopHitsAggregatorFactory doBuild(SearchContext context, AggregatorFactory parent, Builder subfactoriesBuilder) throws IOException { - return new TopHitsAggregatorFactory(name, type, from, size, explain, version, trackScores, sorts, highlightBuilder, - storedFieldsContext, fieldDataFields, scriptFields, fetchSourceContext, context, - parent, subfactoriesBuilder, metaData); + List fields = new ArrayList<>(); + if (scriptFields != null) { + for (ScriptField field : scriptFields) { + SearchScript searchScript = context.getQueryShardContext().getSearchScript(field.script(), + ScriptContext.Standard.SEARCH); + fields.add(new org.elasticsearch.search.fetch.subphase.ScriptFieldsContext.ScriptField( + field.fieldName(), searchScript, field.ignoreFailure())); + } + } + + final Optional optionalSort; + if (sorts == null) { + optionalSort = Optional.empty(); + } else { + optionalSort = SortBuilder.buildSort(sorts, context.getQueryShardContext()); + } + return new TopHitsAggregatorFactory(name, from, size, explain, version, trackScores, optionalSort, highlightBuilder, + 
storedFieldsContext, fieldDataFields, fields, fetchSourceContext, context, parent, subfactoriesBuilder, metaData); } @Override @@ -585,31 +603,31 @@ public static TopHitsAggregationBuilder parse(String aggregationName, QueryParse if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token.isValue()) { - if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.FROM_FIELD)) { + if (SearchSourceBuilder.FROM_FIELD.match(currentFieldName)) { factory.from(parser.intValue()); - } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.SIZE_FIELD)) { + } else if (SearchSourceBuilder.SIZE_FIELD.match(currentFieldName)) { factory.size(parser.intValue()); - } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.VERSION_FIELD)) { + } else if (SearchSourceBuilder.VERSION_FIELD.match(currentFieldName)) { factory.version(parser.booleanValue()); - } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.EXPLAIN_FIELD)) { + } else if (SearchSourceBuilder.EXPLAIN_FIELD.match(currentFieldName)) { factory.explain(parser.booleanValue()); - } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.TRACK_SCORES_FIELD)) { + } else if (SearchSourceBuilder.TRACK_SCORES_FIELD.match(currentFieldName)) { factory.trackScores(parser.booleanValue()); - } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder._SOURCE_FIELD)) { - factory.fetchSource(FetchSourceContext.parse(context)); - } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.STORED_FIELDS_FIELD)) { + } else if (SearchSourceBuilder._SOURCE_FIELD.match(currentFieldName)) { + factory.fetchSource(FetchSourceContext.fromXContent(context.parser())); + } else if (SearchSourceBuilder.STORED_FIELDS_FIELD.match(currentFieldName)) { factory.storedFieldsContext = StoredFieldsContext.fromXContent(SearchSourceBuilder.STORED_FIELDS_FIELD.getPreferredName(), context); - } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.SORT_FIELD)) { + } else if (SearchSourceBuilder.SORT_FIELD.match(currentFieldName)) { factory.sort(parser.text()); } else { throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + currentFieldName + "].", parser.getTokenLocation()); } } else if (token == XContentParser.Token.START_OBJECT) { - if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder._SOURCE_FIELD)) { - factory.fetchSource(FetchSourceContext.parse(context)); - } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.SCRIPT_FIELDS_FIELD)) { + if (SearchSourceBuilder._SOURCE_FIELD.match(currentFieldName)) { + factory.fetchSource(FetchSourceContext.fromXContent(context.parser())); + } else if (SearchSourceBuilder.SCRIPT_FIELDS_FIELD.match(currentFieldName)) { List scriptFields = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { String scriptFieldName = parser.currentName(); @@ -621,10 +639,9 @@ public static TopHitsAggregationBuilder parse(String aggregationName, QueryParse if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token.isValue()) { - if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.SCRIPT_FIELD)) { - script = Script.parse(parser, context.getParseFieldMatcher(), context.getDefaultScriptLanguage()); - } 
else if (context.getParseFieldMatcher().match(currentFieldName, - SearchSourceBuilder.IGNORE_FAILURE_FIELD)) { + if (SearchSourceBuilder.SCRIPT_FIELD.match(currentFieldName)) { + script = Script.parse(parser, context.getDefaultScriptLanguage()); + } else if (SearchSourceBuilder.IGNORE_FAILURE_FIELD.match(currentFieldName)) { ignoreFailure = parser.booleanValue(); } else { throw new ParsingException(parser.getTokenLocation(), @@ -632,8 +649,8 @@ public static TopHitsAggregationBuilder parse(String aggregationName, QueryParse parser.getTokenLocation()); } } else if (token == XContentParser.Token.START_OBJECT) { - if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.SCRIPT_FIELD)) { - script = Script.parse(parser, context.getParseFieldMatcher(), context.getDefaultScriptLanguage()); + if (SearchSourceBuilder.SCRIPT_FIELD.match(currentFieldName)) { + script = Script.parse(parser, context.getDefaultScriptLanguage()); } else { throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + currentFieldName + "].", @@ -651,9 +668,9 @@ public static TopHitsAggregationBuilder parse(String aggregationName, QueryParse } } factory.scriptFields(scriptFields); - } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.HIGHLIGHT_FIELD)) { + } else if (SearchSourceBuilder.HIGHLIGHT_FIELD.match(currentFieldName)) { factory.highlighter(HighlightBuilder.fromXContent(context)); - } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.SORT_FIELD)) { + } else if (SearchSourceBuilder.SORT_FIELD.match(currentFieldName)) { List> sorts = SortBuilder.fromXContent(context); factory.sorts(sorts); } else { @@ -662,10 +679,10 @@ public static TopHitsAggregationBuilder parse(String aggregationName, QueryParse } } else if (token == XContentParser.Token.START_ARRAY) { - if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.STORED_FIELDS_FIELD)) { + if (SearchSourceBuilder.STORED_FIELDS_FIELD.match(currentFieldName)) { factory.storedFieldsContext = StoredFieldsContext.fromXContent(SearchSourceBuilder.STORED_FIELDS_FIELD.getPreferredName(), context); - } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.DOCVALUE_FIELDS_FIELD)) { + } else if (SearchSourceBuilder.DOCVALUE_FIELDS_FIELD.match(currentFieldName)) { List fieldDataFields = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { if (token == XContentParser.Token.VALUE_STRING) { @@ -676,11 +693,11 @@ public static TopHitsAggregationBuilder parse(String aggregationName, QueryParse } } factory.fieldDataFields(fieldDataFields); - } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.SORT_FIELD)) { + } else if (SearchSourceBuilder.SORT_FIELD.match(currentFieldName)) { List> sorts = SortBuilder.fromXContent(context); factory.sorts(sorts); - } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder._SOURCE_FIELD)) { - factory.fetchSource(FetchSourceContext.parse(context)); + } else if (SearchSourceBuilder._SOURCE_FIELD.match(currentFieldName)) { + factory.fetchSource(FetchSourceContext.fromXContent(context.parser())); } else { throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + currentFieldName + "].", parser.getTokenLocation()); @@ -716,7 +733,7 @@ protected boolean doEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { 
return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregator.java index 07292f1d29f82..f91a982dbd300 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregator.java @@ -29,6 +29,7 @@ import org.apache.lucene.search.TopFieldCollector; import org.apache.lucene.search.TopFieldDocs; import org.apache.lucene.search.TopScoreDocCollector; +import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.util.LongObjectPagedHashMap; @@ -38,12 +39,13 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.metrics.MetricsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.fetch.FetchPhase; import org.elasticsearch.search.fetch.FetchSearchResult; -import org.elasticsearch.search.internal.InternalSearchHit; -import org.elasticsearch.search.internal.InternalSearchHits; +import org.elasticsearch.search.SearchHit; +import org.elasticsearch.search.SearchHits; +import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.internal.SubSearchContext; +import org.elasticsearch.search.rescore.RescoreSearchContext; import org.elasticsearch.search.sort.SortAndFormats; import java.io.IOException; @@ -68,7 +70,7 @@ private static class TopDocsAndLeafCollector { final SubSearchContext subSearchContext; final LongObjectPagedHashMap topDocsCollectors; - public TopHitsAggregator(FetchPhase fetchPhase, SubSearchContext subSearchContext, String name, AggregationContext context, + public TopHitsAggregator(FetchPhase fetchPhase, SubSearchContext subSearchContext, String name, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); this.fetchPhase = fetchPhase; @@ -114,10 +116,21 @@ public void collect(int docId, long bucket) throws IOException { if (collectors == null) { SortAndFormats sort = subSearchContext.sort(); int topN = subSearchContext.from() + subSearchContext.size(); + if (sort == null) { + for (RescoreSearchContext rescoreContext : context.rescore()) { + topN = Math.max(rescoreContext.window(), topN); + } + } // In the QueryPhase we don't need this protection, because it is build into the IndexSearcher, // but here we create collectors ourselves and we need prevent OOM because of crazy an offset and size. topN = Math.min(topN, subSearchContext.searcher().getIndexReader().maxDoc()); - TopDocsCollector topLevelCollector = sort != null ? 
TopFieldCollector.create(sort.sort, topN, true, subSearchContext.trackScores(), subSearchContext.trackScores()) : TopScoreDocCollector.create(topN); + TopDocsCollector topLevelCollector; + if (sort == null) { + topLevelCollector = TopScoreDocCollector.create(topN); + } else { + topLevelCollector = TopFieldCollector.create(sort.sort, topN, true, subSearchContext.trackScores(), + subSearchContext.trackScores()); + } collectors = new TopDocsAndLeafCollector(topLevelCollector); collectors.leafCollector = collectors.topLevelCollector.getLeafCollector(ctx); collectors.leafCollector.setScorer(scorer); @@ -135,9 +148,18 @@ public InternalAggregation buildAggregation(long owningBucketOrdinal) { if (topDocsCollector == null) { topHits = buildEmptyAggregation(); } else { - final TopDocs topDocs = topDocsCollector.topLevelCollector.topDocs(); - - subSearchContext.queryResult().topDocs(topDocs, subSearchContext.sort() == null ? null : subSearchContext.sort().formats); + TopDocs topDocs = topDocsCollector.topLevelCollector.topDocs(); + if (subSearchContext.sort() == null) { + for (RescoreSearchContext ctx : context().rescore()) { + try { + topDocs = ctx.rescorer().rescore(topDocs, context, ctx); + } catch (IOException e) { + throw new ElasticsearchException("Rescore TopHits Failed", e); + } + } + } + subSearchContext.queryResult().topDocs(topDocs, + subSearchContext.sort() == null ? null : subSearchContext.sort().formats); int[] docIdsToLoad = new int[topDocs.scoreDocs.length]; for (int i = 0; i < topDocs.scoreDocs.length; i++) { docIdsToLoad[i] = topDocs.scoreDocs[i].doc; @@ -145,10 +167,10 @@ public InternalAggregation buildAggregation(long owningBucketOrdinal) { subSearchContext.docIdsToLoad(docIdsToLoad, 0, docIdsToLoad.length); fetchPhase.execute(subSearchContext); FetchSearchResult fetchResult = subSearchContext.fetchResult(); - InternalSearchHit[] internalHits = fetchResult.fetchResult().hits().internalHits(); + SearchHit[] internalHits = fetchResult.fetchResult().hits().internalHits(); for (int i = 0; i < internalHits.length; i++) { ScoreDoc scoreDoc = topDocs.scoreDocs[i]; - InternalSearchHit searchHitFields = internalHits[i]; + SearchHit searchHitFields = internalHits[i]; searchHitFields.shard(subSearchContext.shardTarget()); searchHitFields.score(scoreDoc.score); if (scoreDoc instanceof FieldDoc) { @@ -156,8 +178,8 @@ public InternalAggregation buildAggregation(long owningBucketOrdinal) { searchHitFields.sortValues(fieldDoc.fields, subSearchContext.sort().formats); } } - topHits = new InternalTopHits(name, subSearchContext.from(), subSearchContext.size(), topDocs, fetchResult.hits(), pipelineAggregators(), - metaData()); + topHits = new InternalTopHits(name, subSearchContext.from(), subSearchContext.size(), topDocs, fetchResult.hits(), + pipelineAggregators(), metaData()); } return topHits; } @@ -170,7 +192,8 @@ public InternalTopHits buildEmptyAggregation() { } else { topDocs = Lucene.EMPTY_TOP_DOCS; } - return new InternalTopHits(name, subSearchContext.from(), subSearchContext.size(), topDocs, InternalSearchHits.empty(), pipelineAggregators(), metaData()); + return new InternalTopHits(name, subSearchContext.from(), subSearchContext.size(), topDocs, SearchHits.empty(), + pipelineAggregators(), metaData()); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorFactory.java index b3389322d9c2a..6a41cc97f8ec5 100644 --- 
a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorFactory.java @@ -19,31 +19,23 @@ package org.elasticsearch.search.aggregations.metrics.tophits; -import org.elasticsearch.search.fetch.StoredFieldsContext; -import org.elasticsearch.script.ScriptContext; -import org.elasticsearch.script.SearchScript; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; -import org.elasticsearch.search.builder.SearchSourceBuilder.ScriptField; +import org.elasticsearch.search.fetch.StoredFieldsContext; import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext; -import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext.DocValueField; -import org.elasticsearch.search.fetch.subphase.DocValueFieldsFetchSubPhase; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; +import org.elasticsearch.search.fetch.subphase.ScriptFieldsContext; import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; +import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.internal.SubSearchContext; import org.elasticsearch.search.sort.SortAndFormats; -import org.elasticsearch.search.sort.SortBuilder; import java.io.IOException; -import java.util.Collections; import java.util.List; import java.util.Map; import java.util.Optional; -import java.util.Set; public class TopHitsAggregatorFactory extends AggregatorFactory { @@ -52,25 +44,25 @@ public class TopHitsAggregatorFactory extends AggregatorFactory> sorts; + private final Optional sort; private final HighlightBuilder highlightBuilder; private final StoredFieldsContext storedFieldsContext; private final List docValueFields; - private final Set scriptFields; + private final List scriptFields; private final FetchSourceContext fetchSourceContext; - public TopHitsAggregatorFactory(String name, Type type, int from, int size, boolean explain, boolean version, boolean trackScores, - List> sorts, HighlightBuilder highlightBuilder, StoredFieldsContext storedFieldsContext, - List docValueFields, Set scriptFields, FetchSourceContext fetchSourceContext, - AggregationContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactories, - Map metaData) throws IOException { - super(name, type, context, parent, subFactories, metaData); + public TopHitsAggregatorFactory(String name, int from, int size, boolean explain, boolean version, boolean trackScores, + Optional sort, HighlightBuilder highlightBuilder, StoredFieldsContext storedFieldsContext, + List docValueFields, List scriptFields, FetchSourceContext fetchSourceContext, + SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactories, Map metaData) + throws IOException { + super(name, context, parent, subFactories, metaData); this.from = from; this.size = size; this.explain = explain; this.version = version; this.trackScores = trackScores; - this.sorts = sorts; + this.sort = sort; this.highlightBuilder = highlightBuilder; this.storedFieldsContext = storedFieldsContext; this.docValueFields = docValueFields; @@ -81,45 +73,32 @@ 
public TopHitsAggregatorFactory(String name, Type type, int from, int size, bool @Override public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBucket, List pipelineAggregators, Map metaData) throws IOException { - SubSearchContext subSearchContext = new SubSearchContext(context.searchContext()); - subSearchContext.parsedQuery(context.searchContext().parsedQuery()); + SubSearchContext subSearchContext = new SubSearchContext(context); + subSearchContext.parsedQuery(context.parsedQuery()); subSearchContext.explain(explain); subSearchContext.version(version); subSearchContext.trackScores(trackScores); subSearchContext.from(from); subSearchContext.size(size); - if (sorts != null) { - Optional optionalSort = SortBuilder.buildSort(sorts, subSearchContext.getQueryShardContext()); - if (optionalSort.isPresent()) { - subSearchContext.sort(optionalSort.get()); - } + if (sort.isPresent()) { + subSearchContext.sort(sort.get()); } if (storedFieldsContext != null) { subSearchContext.storedFieldsContext(storedFieldsContext); } if (docValueFields != null) { - DocValueFieldsContext docValueFieldsContext = subSearchContext - .getFetchSubPhaseContext(DocValueFieldsFetchSubPhase.CONTEXT_FACTORY); - for (String field : docValueFields) { - docValueFieldsContext.add(new DocValueField(field)); - } - docValueFieldsContext.setHitExecutionNeeded(true); + subSearchContext.docValueFieldsContext(new DocValueFieldsContext(docValueFields)); } - if (scriptFields != null) { - for (ScriptField field : scriptFields) { - SearchScript searchScript = subSearchContext.scriptService().search(subSearchContext.lookup(), field.script(), - ScriptContext.Standard.SEARCH, Collections.emptyMap()); - subSearchContext.scriptFields().add(new org.elasticsearch.search.fetch.subphase.ScriptFieldsContext.ScriptField( - field.fieldName(), searchScript, field.ignoreFailure())); + for (ScriptFieldsContext.ScriptField field : scriptFields) { + subSearchContext.scriptFields().add(field); } - } if (fetchSourceContext != null) { subSearchContext.fetchSourceContext(fetchSourceContext); } if (highlightBuilder != null) { - subSearchContext.highlight(highlightBuilder.build(context.searchContext().getQueryShardContext())); + subSearchContext.highlight(highlightBuilder.build(context.getQueryShardContext())); } - return new TopHitsAggregator(context.searchContext().fetchPhase(), subSearchContext, name, context, parent, + return new TopHitsAggregator(context.fetchPhase(), subSearchContext, name, context, parent, pipelineAggregators, metaData); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/InternalValueCount.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/InternalValueCount.java index bfd186df5f247..835d98b1b1912 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/InternalValueCount.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/InternalValueCount.java @@ -80,7 +80,7 @@ public InternalAggregation doReduce(List aggregations, Redu @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { - builder.field(CommonFields.VALUE, value); + builder.field(CommonFields.VALUE.getPreferredName(), value); return builder; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ParsedValueCount.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ParsedValueCount.java new file mode 
100644 index 0000000000000..7430bca08de32 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ParsedValueCount.java @@ -0,0 +1,74 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.metrics.valuecount; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.ParsedAggregation; + +import java.io.IOException; + +public class ParsedValueCount extends ParsedAggregation implements ValueCount { + + private long valueCount; + + @Override + public double value() { + return getValue(); + } + + @Override + public long getValue() { + return valueCount; + } + + @Override + public String getValueAsString() { + // InternalValueCount doesn't print "value_as_string", but you can get a formatted value using + // getValueAsString() using the raw formatter and converting the value to double + return Double.toString(valueCount); + } + + @Override + public String getType() { + return ValueCountAggregationBuilder.NAME; + } + + @Override + protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + builder.field(CommonFields.VALUE.getPreferredName(), valueCount); + return builder; + } + + private static final ObjectParser PARSER = new ObjectParser<>(ParsedValueCount.class.getSimpleName(), true, + ParsedValueCount::new); + + static { + declareAggregationFields(PARSER); + PARSER.declareLong((agg, value) -> agg.valueCount = value, CommonFields.VALUE); + } + + public static ParsedValueCount fromXContent(XContentParser parser, final String name) { + ParsedValueCount sum = PARSER.apply(parser, null); + sum.setName(name); + return sum; + } +} \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregationBuilder.java index ce0c1fd3e5ba6..50916b4063cc4 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregationBuilder.java @@ -21,32 +21,44 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.aggregations.AggregationBuilder; import 
org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper; import org.elasticsearch.search.aggregations.support.ValuesSourceType; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; public class ValueCountAggregationBuilder extends ValuesSourceAggregationBuilder.LeafOnly { public static final String NAME = "value_count"; - public static final Type TYPE = new Type(NAME); + + private static final ObjectParser PARSER; + static { + PARSER = new ObjectParser<>(ValueCountAggregationBuilder.NAME); + ValuesSourceParserHelper.declareAnyFields(PARSER, true, true); + } + + public static AggregationBuilder parse(String aggregationName, QueryParseContext context) throws IOException { + return PARSER.parse(context.parser(), new ValueCountAggregationBuilder(aggregationName, null), context); + } public ValueCountAggregationBuilder(String name, ValueType targetValueType) { - super(name, TYPE, ValuesSourceType.ANY, targetValueType); + super(name, ValuesSourceType.ANY, targetValueType); } /** * Read from a stream. */ public ValueCountAggregationBuilder(StreamInput in) throws IOException { - super(in, TYPE, ValuesSourceType.ANY); + super(in, ValuesSourceType.ANY); } @Override @@ -60,9 +72,9 @@ protected boolean serializeTargetValueType() { } @Override - protected ValueCountAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, + protected ValueCountAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder) throws IOException { - return new ValueCountAggregatorFactory(name, type, config, context, parent, subFactoriesBuilder, metaData); + return new ValueCountAggregatorFactory(name, config, context, parent, subFactoriesBuilder, metaData); } @Override @@ -81,7 +93,7 @@ protected boolean innerEquals(Object obj) { } @Override - public String getWriteableName() { + public String getType() { return NAME; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregator.java index 4dbfb54e8da63..54eba8d1fffe0 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregator.java @@ -29,8 +29,8 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; import org.elasticsearch.search.aggregations.metrics.NumericMetricsAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -50,7 +50,7 @@ public class 
ValueCountAggregator extends NumericMetricsAggregator.SingleValue { LongArray counts; public ValueCountAggregator(String name, ValuesSource valuesSource, - AggregationContext aggregationContext, Aggregator parent, List pipelineAggregators, + SearchContext aggregationContext, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, aggregationContext, parent, pipelineAggregators, metaData); @@ -82,7 +82,7 @@ public void collect(int doc, long bucket) throws IOException { @Override public double metric(long owningBucketOrd) { - return valuesSource == null ? 0 : counts.get(owningBucketOrd); + return (valuesSource == null || owningBucketOrd >= counts.size()) ? 0 : counts.get(owningBucketOrd); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregatorFactory.java index 4569f3421d05f..80c8001b93c97 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregatorFactory.java @@ -22,12 +22,11 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.support.AggregationContext; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory; import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; +import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.List; @@ -35,9 +34,9 @@ public class ValueCountAggregatorFactory extends ValuesSourceAggregatorFactory { - public ValueCountAggregatorFactory(String name, Type type, ValuesSourceConfig config, AggregationContext context, + public ValueCountAggregatorFactory(String name, ValuesSourceConfig config, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, config, context, parent, subFactoriesBuilder, metaData); + super(name, config, context, parent, subFactoriesBuilder, metaData); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountParser.java deleted file mode 100644 index bd61e276fec75..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountParser.java +++ /dev/null @@ -1,53 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.search.aggregations.metrics.valuecount; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.AnyValuesSourceParser; -import org.elasticsearch.search.aggregations.support.XContentParseContext; -import org.elasticsearch.search.aggregations.support.ValueType; -import org.elasticsearch.search.aggregations.support.ValuesSource; -import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; -import org.elasticsearch.search.aggregations.support.ValuesSourceType; - -import java.io.IOException; -import java.util.Map; - -/** - * - */ -public class ValueCountParser extends AnyValuesSourceParser { - - public ValueCountParser() { - super(true, true); - } - - @Override - protected boolean token(String aggregationName, String currentFieldName, XContentParser.Token token, - XContentParseContext context, Map otherOptions) throws IOException { - return false; - } - - @Override - protected ValuesSourceAggregationBuilder createFactory( - String aggregationName, ValuesSourceType valuesSourceType, ValueType targetValueType, Map otherOptions) { - return new ValueCountAggregationBuilder(aggregationName, targetValueType); - } -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/AbstractPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/AbstractPipelineAggregationBuilder.java index 4fb2ed914011a..8d28195d55163 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/AbstractPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/AbstractPipelineAggregationBuilder.java @@ -171,4 +171,8 @@ public boolean equals(Object obj) { protected abstract boolean doEquals(Object obj); + @Override + public String getType() { + return type; + } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/BucketHelpers.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/BucketHelpers.java index 98b5b67b7cf5f..90665eeb9b0fc 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/BucketHelpers.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/BucketHelpers.java @@ -52,7 +52,7 @@ public class BucketHelpers { * "insert_zeros": empty buckets will be filled with zeros for all metrics * "ignore": empty buckets will simply be ignored */ - public static enum GapPolicy { + public enum GapPolicy { INSERT_ZEROS((byte) 0, "insert_zeros"), SKIP((byte) 1, "skip"); /** @@ -65,7 +65,7 @@ public static enum GapPolicy { public static GapPolicy parse(QueryParseContext context, String text, XContentLocation tokenLocation) { GapPolicy result = null; for (GapPolicy policy : values()) { - if (context.getParseFieldMatcher().match(text, policy.parseField)) { + if (policy.parseField.match(text)) { if (result == null) { result = policy; } else { @@ -87,7 +87,7 @@ public static GapPolicy parse(QueryParseContext context, 
String text, XContentLo private final byte id; private final ParseField parseField; - private GapPolicy(byte id, String name) { + GapPolicy(byte id, String name) { this.id = id; this.parseField = new ParseField(name); } @@ -147,13 +147,13 @@ public String getName() { * aggPath */ public static Double resolveBucketValue(MultiBucketsAggregation agg, - InternalMultiBucketAggregation.Bucket bucket, String aggPath, GapPolicy gapPolicy) { + InternalMultiBucketAggregation.InternalBucket bucket, String aggPath, GapPolicy gapPolicy) { List aggPathsList = AggregationPath.parse(aggPath).getPathElementsAsStringList(); return resolveBucketValue(agg, bucket, aggPathsList, gapPolicy); } public static Double resolveBucketValue(MultiBucketsAggregation agg, - InternalMultiBucketAggregation.Bucket bucket, List aggPathAsList, GapPolicy gapPolicy) { + InternalMultiBucketAggregation.InternalBucket bucket, List aggPathAsList, GapPolicy gapPolicy) { try { Object propertyValue = bucket.getProperty(agg.getName(), aggPathAsList); if (propertyValue == null) { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/InternalSimpleValue.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/InternalSimpleValue.java index b9d79b8e3297a..0f8eec4e66a34 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/InternalSimpleValue.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/InternalSimpleValue.java @@ -79,9 +79,9 @@ public InternalMax doReduce(List aggregations, ReduceContex @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { boolean hasValue = !(Double.isInfinite(value) || Double.isNaN(value)); - builder.field(CommonFields.VALUE, hasValue ? value : null); + builder.field(CommonFields.VALUE.getPreferredName(), hasValue ? value : null); if (hasValue && format != DocValueFormat.RAW) { - builder.field(CommonFields.VALUE_AS_STRING, format.format(value)); + builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), format.format(value)); } return builder; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/ParsedSimpleValue.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/ParsedSimpleValue.java new file mode 100644 index 0000000000000..7449ce66666e6 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/ParsedSimpleValue.java @@ -0,0 +1,58 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.pipeline; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.metrics.ParsedSingleValueNumericMetricsAggregation; + +import java.io.IOException; + +public class ParsedSimpleValue extends ParsedSingleValueNumericMetricsAggregation implements SimpleValue { + + @Override + public String getType() { + return InternalSimpleValue.NAME; + } + + private static final ObjectParser PARSER = new ObjectParser<>(ParsedSimpleValue.class.getSimpleName(), true, + ParsedSimpleValue::new); + + static { + declareSingleValueFields(PARSER, Double.NaN); + } + + public static ParsedSimpleValue fromXContent(XContentParser parser, final String name) { + ParsedSimpleValue simpleValue = PARSER.apply(parser, null); + simpleValue.setName(name); + return simpleValue; + } + + @Override + protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + boolean hasValue = Double.isNaN(value) == false; + builder.field(CommonFields.VALUE.getPreferredName(), hasValue ? value : null); + if (hasValue && valueAsString != null) { + builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), valueAsString); + } + return builder; + } +} \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricValue.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricValue.java new file mode 100644 index 0000000000000..be22679a4e1bf --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricValue.java @@ -0,0 +1,27 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.pipeline.bucketmetrics; + +import org.elasticsearch.search.aggregations.metrics.NumericMetricsAggregation; + +public interface BucketMetricValue extends NumericMetricsAggregation.SingleValue { + + String[] keys(); +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricsParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricsParser.java index 9dee002ca2905..e7954174aa3c9 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricsParser.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricsParser.java @@ -58,17 +58,17 @@ public final BucketMetricsPipelineAggregationBuilder parse(String pipelineAgg if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.VALUE_STRING) { - if (context.getParseFieldMatcher().match(currentFieldName, FORMAT)) { + if (FORMAT.match(currentFieldName)) { format = parser.text(); - } else if (context.getParseFieldMatcher().match(currentFieldName, BUCKETS_PATH)) { + } else if (BUCKETS_PATH.match(currentFieldName)) { bucketsPaths = new String[] { parser.text() }; - } else if (context.getParseFieldMatcher().match(currentFieldName, GAP_POLICY)) { + } else if (GAP_POLICY.match(currentFieldName)) { gapPolicy = GapPolicy.parse(context, parser.text(), parser.getTokenLocation()); } else { parseToken(pipelineAggregatorName, parser, context, currentFieldName, token, params); } } else if (token == XContentParser.Token.START_ARRAY) { - if (context.getParseFieldMatcher().match(currentFieldName, BUCKETS_PATH)) { + if (BUCKETS_PATH.match(currentFieldName)) { List paths = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { String path = parser.text(); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricsPipelineAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricsPipelineAggregator.java index 6bdbd5e6e4f9d..413862d3f1d2b 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricsPipelineAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricsPipelineAggregator.java @@ -27,7 +27,6 @@ import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.InternalAggregation.ReduceContext; import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation; -import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation.Bucket; import org.elasticsearch.search.aggregations.pipeline.BucketHelpers; import org.elasticsearch.search.aggregations.pipeline.BucketHelpers.GapPolicy; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; @@ -82,9 +81,8 @@ public final InternalAggregation doReduce(Aggregations aggregations, ReduceConte if (aggregation.getName().equals(bucketsPath.get(0))) { bucketsPath = bucketsPath.subList(1, bucketsPath.size()); InternalMultiBucketAggregation multiBucketsAgg = (InternalMultiBucketAggregation) aggregation; - List buckets = multiBucketsAgg.getBuckets(); - for (int i = 0; i < buckets.size(); i++) { - Bucket bucket = buckets.get(i); + List buckets = multiBucketsAgg.getBuckets(); + for 
(InternalMultiBucketAggregation.InternalBucket bucket : buckets) { Double bucketValue = BucketHelpers.resolveBucketValue(multiBucketsAgg, bucket, bucketsPath, gapPolicy); if (bucketValue != null && !Double.isNaN(bucketValue)) { collectBucketValue(bucket.getKeyAsString(), bucketValue); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/InternalBucketMetricValue.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/InternalBucketMetricValue.java index 6477728b3231d..8edcb684505d5 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/InternalBucketMetricValue.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/InternalBucketMetricValue.java @@ -19,6 +19,7 @@ package org.elasticsearch.search.aggregations.pipeline.bucketmetrics; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -31,8 +32,9 @@ import java.util.List; import java.util.Map; -public class InternalBucketMetricValue extends InternalNumericMetricsAggregation.SingleValue { +public class InternalBucketMetricValue extends InternalNumericMetricsAggregation.SingleValue implements BucketMetricValue { public static final String NAME = "bucket_metric_value"; + static final ParseField KEYS_FIELD = new ParseField("keys"); private double value; private String[] keys; @@ -72,6 +74,7 @@ public double value() { return value; } + @Override public String[] keys() { return keys; } @@ -87,7 +90,7 @@ public Object getProperty(List path) { return this; } else if (path.size() == 1 && "value".equals(path.get(0))) { return value(); - } else if (path.size() == 1 && "keys".equals(path.get(0))) { + } else if (path.size() == 1 && KEYS_FIELD.getPreferredName().equals(path.get(0))) { return keys(); } else { throw new IllegalArgumentException("path not supported for [" + getName() + "]: " + path); @@ -97,16 +100,15 @@ public Object getProperty(List path) { @Override public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { boolean hasValue = !Double.isInfinite(value); - builder.field(CommonFields.VALUE, hasValue ? value : null); + builder.field(CommonFields.VALUE.getPreferredName(), hasValue ? value : null); if (hasValue && format != DocValueFormat.RAW) { - builder.field(CommonFields.VALUE_AS_STRING, format.format(value)); + builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), format.format(value)); } - builder.startArray("keys"); + builder.startArray(KEYS_FIELD.getPreferredName()); for (String key : keys) { builder.value(key); } builder.endArray(); return builder; } - } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/ParsedBucketMetricValue.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/ParsedBucketMetricValue.java new file mode 100644 index 0000000000000..69e99352636b6 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/ParsedBucketMetricValue.java @@ -0,0 +1,73 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.pipeline.bucketmetrics; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.metrics.ParsedSingleValueNumericMetricsAggregation; + +import java.io.IOException; +import java.util.Collections; +import java.util.List; + +public class ParsedBucketMetricValue extends ParsedSingleValueNumericMetricsAggregation implements BucketMetricValue { + + private List keys = Collections.emptyList(); + + @Override + public String[] keys() { + return this.keys.toArray(new String[keys.size()]); + } + + @Override + public String getType() { + return InternalBucketMetricValue.NAME; + } + + @Override + protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + boolean hasValue = !Double.isInfinite(value); + builder.field(CommonFields.VALUE.getPreferredName(), hasValue ? value : null); + if (hasValue && valueAsString != null) { + builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), valueAsString); + } + builder.startArray(InternalBucketMetricValue.KEYS_FIELD.getPreferredName()); + for (String key : keys) { + builder.value(key); + } + builder.endArray(); + return builder; + } + + private static final ObjectParser PARSER = new ObjectParser<>( + ParsedBucketMetricValue.class.getSimpleName(), true, ParsedBucketMetricValue::new); + + static { + declareSingleValueFields(PARSER, Double.NEGATIVE_INFINITY); + PARSER.declareStringArray((agg, value) -> agg.keys = value, InternalBucketMetricValue.KEYS_FIELD); + } + + public static ParsedBucketMetricValue fromXContent(XContentParser parser, final String name) { + ParsedBucketMetricValue bucketMetricValue = PARSER.apply(parser, null); + bucketMetricValue.setName(name); + return bucketMetricValue; + } +} \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/InternalPercentilesBucket.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/InternalPercentilesBucket.java index 059d72a8e1013..4633aaa457dcf 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/InternalPercentilesBucket.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/InternalPercentilesBucket.java @@ -26,12 +26,12 @@ import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.metrics.InternalNumericMetricsAggregation; import org.elasticsearch.search.aggregations.metrics.max.InternalMax; -import org.elasticsearch.search.aggregations.metrics.percentiles.InternalPercentile; import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile; import 
org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import java.io.IOException; import java.util.Arrays; +import java.util.HashMap; import java.util.Iterator; import java.util.List; import java.util.Map; @@ -39,14 +39,26 @@ public class InternalPercentilesBucket extends InternalNumericMetricsAggregation.MultiValue implements PercentilesBucket { private double[] percentiles; private double[] percents; + private final transient Map percentileLookups = new HashMap<>(); public InternalPercentilesBucket(String name, double[] percents, double[] percentiles, DocValueFormat formatter, List pipelineAggregators, Map metaData) { super(name, pipelineAggregators, metaData); + if ((percentiles.length == percents.length) == false) { + throw new IllegalArgumentException("The number of provided percents and percentiles didn't match. percents: " + + Arrays.toString(percents) + ", percentiles: " + Arrays.toString(percentiles)); + } this.format = formatter; this.percentiles = percentiles; this.percents = percents; + computeLookup(); + } + + private void computeLookup() { + for (int i = 0; i < percents.length; i++) { + percentileLookups.put(percents[i], percentiles[i]); + } } /** @@ -57,6 +69,7 @@ public InternalPercentilesBucket(StreamInput in) throws IOException { format = in.readNamedWriteable(DocValueFormat.class); percentiles = in.readDoubleArray(); percents = in.readDoubleArray(); + computeLookup(); } @Override @@ -73,12 +86,12 @@ public String getWriteableName() { @Override public double percentile(double percent) throws IllegalArgumentException { - int index = Arrays.binarySearch(percents, percent); - if (index < 0) { + Double percentile = percentileLookups.get(percent); + if (percentile == null) { throw new IllegalArgumentException("Percent requested [" + String.valueOf(percent) + "] was not" + " one of the computed percentiles. Available keys are: " + Arrays.toString(percents)); } - return percentiles[index]; + return percentile; } @Override @@ -136,7 +149,7 @@ public boolean hasNext() { @Override public Percentile next() { - final Percentile next = new InternalPercentile(percents[i], percentiles[i]); + final Percentile next = new Percentile(percents[i], percentiles[i]); ++i; return next; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/ParsedPercentilesBucket.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/ParsedPercentilesBucket.java new file mode 100644 index 0000000000000..eebe296e531fe --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/ParsedPercentilesBucket.java @@ -0,0 +1,88 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.pipeline.bucketmetrics.percentile; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.metrics.percentiles.ParsedPercentiles; +import org.elasticsearch.search.aggregations.metrics.percentiles.Percentiles; + +import java.io.IOException; +import java.util.Map.Entry; + +public class ParsedPercentilesBucket extends ParsedPercentiles implements Percentiles { + + @Override + public String getType() { + return PercentilesBucketPipelineAggregationBuilder.NAME; + } + + @Override + public double percentile(double percent) throws IllegalArgumentException { + Double value = percentiles.get(percent); + if (value == null) { + throw new IllegalArgumentException("Percent requested [" + String.valueOf(percent) + "] was not" + + " one of the computed percentiles. Available keys are: " + percentiles.keySet()); + } + return value; + } + + @Override + public String percentileAsString(double percent) { + double value = percentile(percent); // check availability as unformatted value + String valueAsString = percentilesAsString.get(percent); + if (valueAsString != null) { + return valueAsString; + } else { + return Double.toString(value); + } + } + + @Override + public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + builder.startObject("values"); + for (Entry percent : percentiles.entrySet()) { + double value = percent.getValue(); + boolean hasValue = !(Double.isNaN(value)); + Double key = percent.getKey(); + builder.field(Double.toString(key), hasValue ? value : null); + String valueAsString = percentilesAsString.get(key); + if (hasValue && valueAsString != null) { + builder.field(key + "_as_string", valueAsString); + } + } + builder.endObject(); + return builder; + } + + private static ObjectParser PARSER = + new ObjectParser<>(ParsedPercentilesBucket.class.getSimpleName(), true, ParsedPercentilesBucket::new); + + static { + ParsedPercentiles.declarePercentilesFields(PARSER); + } + + public static ParsedPercentilesBucket fromXContent(XContentParser parser, String name) throws IOException { + ParsedPercentilesBucket aggregation = PARSER.parse(parser, null); + aggregation.setName(name); + return aggregation; + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/PercentilesBucketPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/PercentilesBucketPipelineAggregationBuilder.java index 435d0239cbe51..cea7d01136786 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/PercentilesBucketPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/PercentilesBucketPipelineAggregationBuilder.java @@ -113,7 +113,7 @@ public void doValidate(AggregatorFactory parent, AggregatorFactory[] aggFa @Override protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { if (percents != null) { - builder.field(PERCENTS_FIELD.getPreferredName(), percents); + builder.array(PERCENTS_FIELD.getPreferredName(), percents); } return builder; } @@ -138,7 +138,7 @@ protected PercentilesBucketPipelineAggregationBuilder buildFactory(String pipeli @Override protected boolean token(XContentParser 
parser, QueryParseContext context, String field, XContentParser.Token token, Map params) throws IOException { - if (context.getParseFieldMatcher().match(field, PERCENTS_FIELD) && token == XContentParser.Token.START_ARRAY) { + if (PERCENTS_FIELD.match(field) && token == XContentParser.Token.START_ARRAY) { DoubleArrayList percents = new DoubleArrayList(10); while (parser.nextToken() != XContentParser.Token.END_ARRAY) { percents.add(parser.doubleValue()); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/PercentilesBucketPipelineAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/PercentilesBucketPipelineAggregator.java index 2818d0f3f5ed8..7f51a99d79867 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/PercentilesBucketPipelineAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/PercentilesBucketPipelineAggregator.java @@ -88,7 +88,7 @@ protected InternalAggregation buildAggregation(List pipeline } } else { for (int i = 0; i < percents.length; i++) { - int index = (int)((percents[i] / 100.0) * data.size()); + int index = (int) Math.round((percents[i] / 100.0) * (data.size() - 1)); percentiles[i] = data.get(index); } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/ParsedStatsBucket.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/ParsedStatsBucket.java new file mode 100644 index 0000000000000..c7ddcc6ee9686 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/ParsedStatsBucket.java @@ -0,0 +1,46 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.metrics.stats.ParsedStats; + + +public class ParsedStatsBucket extends ParsedStats implements StatsBucket { + + @Override + public String getType() { + return StatsBucketPipelineAggregationBuilder.NAME; + } + + private static final ObjectParser PARSER = new ObjectParser<>( + ParsedStatsBucket.class.getSimpleName(), true, ParsedStatsBucket::new); + + static { + declareStatsFields(PARSER); + } + + public static ParsedStatsBucket fromXContent(XContentParser parser, final String name) { + ParsedStatsBucket parsedStatsBucket = PARSER.apply(parser, null); + parsedStatsBucket.setName(name); + return parsedStatsBucket; + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/extended/ExtendedStatsBucketParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/extended/ExtendedStatsBucketParser.java index b7fa49267dc55..dfa28c3dd2724 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/extended/ExtendedStatsBucketParser.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/extended/ExtendedStatsBucketParser.java @@ -46,7 +46,7 @@ protected ExtendedStatsBucketPipelineAggregationBuilder buildFactory(String pipe @Override protected boolean token(XContentParser parser, QueryParseContext context, String field, XContentParser.Token token, Map params) throws IOException { - if (context.getParseFieldMatcher().match(field, SIGMA) && token == XContentParser.Token.VALUE_NUMBER) { + if (SIGMA.match(field) && token == XContentParser.Token.VALUE_NUMBER) { params.put(SIGMA.getPreferredName(), parser.doubleValue()); return true; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/extended/ParsedExtendedStatsBucket.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/extended/ParsedExtendedStatsBucket.java new file mode 100644 index 0000000000000..d292249242396 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/extended/ParsedExtendedStatsBucket.java @@ -0,0 +1,46 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.extended; + +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.metrics.stats.extended.ParsedExtendedStats; + + +public class ParsedExtendedStatsBucket extends ParsedExtendedStats implements ExtendedStatsBucket { + + @Override + public String getType() { + return ExtendedStatsBucketPipelineAggregationBuilder.NAME; + } + + private static final ObjectParser PARSER = new ObjectParser<>( + ParsedExtendedStatsBucket.class.getSimpleName(), true, ParsedExtendedStatsBucket::new); + + static { + declareExtendedStatsFields(PARSER); + } + + public static ParsedExtendedStatsBucket fromXContent(XContentParser parser, final String name) { + ParsedExtendedStatsBucket parsedStatsBucket = PARSER.apply(parser, null); + parsedStatsBucket.setName(name); + return parsedStatsBucket; + } +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketscript/BucketScriptPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketscript/BucketScriptPipelineAggregationBuilder.java index cd7b1bb828ed7..7c96b04b1561a 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketscript/BucketScriptPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketscript/BucketScriptPipelineAggregationBuilder.java @@ -26,7 +26,6 @@ import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.script.Script; -import org.elasticsearch.script.Script.ScriptField; import org.elasticsearch.search.DocValueFormat; import org.elasticsearch.search.aggregations.pipeline.AbstractPipelineAggregationBuilder; import org.elasticsearch.search.aggregations.pipeline.BucketHelpers.GapPolicy; @@ -150,7 +149,7 @@ protected PipelineAggregator createInternal(Map metaData) throws @Override protected XContentBuilder internalXContent(XContentBuilder builder, Params params) throws IOException { builder.field(BUCKETS_PATH.getPreferredName(), bucketsPathsMap); - builder.field(ScriptField.SCRIPT.getPreferredName(), script); + builder.field(Script.SCRIPT_PARSE_FIELD.getPreferredName(), script); if (format != null) { builder.field(FORMAT.getPreferredName(), format); } @@ -171,21 +170,21 @@ public static BucketScriptPipelineAggregationBuilder parse(String reducerName, Q if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.VALUE_STRING) { - if (context.getParseFieldMatcher().match(currentFieldName, FORMAT)) { + if (FORMAT.match(currentFieldName)) { format = parser.text(); - } else if (context.getParseFieldMatcher().match(currentFieldName, BUCKETS_PATH)) { + } else if (BUCKETS_PATH.match(currentFieldName)) { bucketsPathsMap = new HashMap<>(); bucketsPathsMap.put("_value", parser.text()); - } else if (context.getParseFieldMatcher().match(currentFieldName, GAP_POLICY)) { + } else if (GAP_POLICY.match(currentFieldName)) { gapPolicy = GapPolicy.parse(context, parser.text(), parser.getTokenLocation()); - } else if (context.getParseFieldMatcher().match(currentFieldName, ScriptField.SCRIPT)) { - script = Script.parse(parser, context.getParseFieldMatcher(), context.getDefaultScriptLanguage()); + } else if (Script.SCRIPT_PARSE_FIELD.match(currentFieldName)) { + script = Script.parse(parser, 
context.getDefaultScriptLanguage()); } else { throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + reducerName + "]: [" + currentFieldName + "]."); } } else if (token == XContentParser.Token.START_ARRAY) { - if (context.getParseFieldMatcher().match(currentFieldName, BUCKETS_PATH)) { + if (BUCKETS_PATH.match(currentFieldName)) { List paths = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { String path = parser.text(); @@ -200,9 +199,9 @@ public static BucketScriptPipelineAggregationBuilder parse(String reducerName, Q "Unknown key for a " + token + " in [" + reducerName + "]: [" + currentFieldName + "]."); } } else if (token == XContentParser.Token.START_OBJECT) { - if (context.getParseFieldMatcher().match(currentFieldName, ScriptField.SCRIPT)) { - script = Script.parse(parser, context.getParseFieldMatcher(), context.getDefaultScriptLanguage()); - } else if (context.getParseFieldMatcher().match(currentFieldName, BUCKETS_PATH)) { + if (Script.SCRIPT_PARSE_FIELD.match(currentFieldName)) { + script = Script.parse(parser, context.getDefaultScriptLanguage()); + } else if (BUCKETS_PATH.match(currentFieldName)) { Map map = parser.map(); bucketsPathsMap = new HashMap<>(); for (Map.Entry entry : map.entrySet()) { @@ -223,7 +222,7 @@ public static BucketScriptPipelineAggregationBuilder parse(String reducerName, Q } if (script == null) { - throw new ParsingException(parser.getTokenLocation(), "Missing required field [" + ScriptField.SCRIPT.getPreferredName() + throw new ParsingException(parser.getTokenLocation(), "Missing required field [" + Script.SCRIPT_PARSE_FIELD.getPreferredName() + "] for series_arithmetic aggregation [" + reducerName + "]"); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketscript/BucketScriptPipelineAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketscript/BucketScriptPipelineAggregator.java index 008b0217c65d5..87df926ebab55 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketscript/BucketScriptPipelineAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketscript/BucketScriptPipelineAggregator.java @@ -31,14 +31,12 @@ import org.elasticsearch.search.aggregations.InternalAggregation.ReduceContext; import org.elasticsearch.search.aggregations.InternalAggregations; import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation; -import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation.Bucket; import org.elasticsearch.search.aggregations.pipeline.BucketHelpers.GapPolicy; import org.elasticsearch.search.aggregations.pipeline.InternalSimpleValue; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import java.io.IOException; import java.util.ArrayList; -import java.util.Collections; import java.util.HashMap; import java.util.List; import java.util.Map; @@ -89,13 +87,13 @@ public String getWriteableName() { @Override public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext reduceContext) { - InternalMultiBucketAggregation originalAgg = (InternalMultiBucketAggregation) aggregation; - List buckets = originalAgg.getBuckets(); + InternalMultiBucketAggregation originalAgg = + (InternalMultiBucketAggregation) aggregation; + List buckets = originalAgg.getBuckets(); - CompiledScript compiledScript = reduceContext.scriptService().compile(script, ScriptContext.Standard.AGGS, - 
Collections.emptyMap()); - List newBuckets = new ArrayList<>(); - for (Bucket bucket : buckets) { + CompiledScript compiledScript = reduceContext.scriptService().compile(script, ScriptContext.Standard.AGGS); + List newBuckets = new ArrayList<>(); + for (InternalMultiBucketAggregation.InternalBucket bucket : buckets) { Map vars = new HashMap<>(); if (script.getParams() != null) { vars.putAll(script.getParams()); @@ -123,13 +121,12 @@ public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext throw new AggregationExecutionException("series_arithmetic script for reducer [" + name() + "] must return a Number"); } - final List aggs = StreamSupport.stream(bucket.getAggregations().spliterator(), false).map((p) -> { - return (InternalAggregation) p; - }).collect(Collectors.toList()); + final List aggs = StreamSupport.stream(bucket.getAggregations().spliterator(), false).map( + (p) -> (InternalAggregation) p).collect(Collectors.toList()); aggs.add(new InternalSimpleValue(name(), ((Number) returned).doubleValue(), formatter, new ArrayList<>(), metaData())); InternalMultiBucketAggregation.InternalBucket newBucket = originalAgg.createBucket(new InternalAggregations(aggs), - (InternalMultiBucketAggregation.InternalBucket) bucket); + bucket); newBuckets.add(newBucket); } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketselector/BucketSelectorPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketselector/BucketSelectorPipelineAggregationBuilder.java index e3b423767286b..078f50a978f1e 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketselector/BucketSelectorPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketselector/BucketSelectorPipelineAggregationBuilder.java @@ -26,7 +26,6 @@ import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.script.Script; -import org.elasticsearch.script.Script.ScriptField; import org.elasticsearch.search.aggregations.pipeline.AbstractPipelineAggregationBuilder; import org.elasticsearch.search.aggregations.pipeline.BucketHelpers.GapPolicy; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; @@ -119,7 +118,7 @@ protected PipelineAggregator createInternal(Map metaData) throws @Override protected XContentBuilder internalXContent(XContentBuilder builder, Params params) throws IOException { builder.field(BUCKETS_PATH.getPreferredName(), bucketsPathsMap); - builder.field(ScriptField.SCRIPT.getPreferredName(), script); + builder.field(Script.SCRIPT_PARSE_FIELD.getPreferredName(), script); builder.field(GAP_POLICY.getPreferredName(), gapPolicy.getName()); return builder; } @@ -136,19 +135,19 @@ public static BucketSelectorPipelineAggregationBuilder parse(String reducerName, if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.VALUE_STRING) { - if (context.getParseFieldMatcher().match(currentFieldName, BUCKETS_PATH)) { + if (BUCKETS_PATH.match(currentFieldName)) { bucketsPathsMap = new HashMap<>(); bucketsPathsMap.put("_value", parser.text()); - } else if (context.getParseFieldMatcher().match(currentFieldName, GAP_POLICY)) { + } else if (GAP_POLICY.match(currentFieldName)) { gapPolicy = GapPolicy.parse(context, parser.text(), parser.getTokenLocation()); - } else if 
(context.getParseFieldMatcher().match(currentFieldName, ScriptField.SCRIPT)) { - script = Script.parse(parser, context.getParseFieldMatcher(), context.getDefaultScriptLanguage()); + } else if (Script.SCRIPT_PARSE_FIELD.match(currentFieldName)) { + script = Script.parse(parser, context.getDefaultScriptLanguage()); } else { throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + reducerName + "]: [" + currentFieldName + "]."); } } else if (token == XContentParser.Token.START_ARRAY) { - if (context.getParseFieldMatcher().match(currentFieldName, BUCKETS_PATH)) { + if (BUCKETS_PATH.match(currentFieldName)) { List paths = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { String path = parser.text(); @@ -163,9 +162,9 @@ public static BucketSelectorPipelineAggregationBuilder parse(String reducerName, "Unknown key for a " + token + " in [" + reducerName + "]: [" + currentFieldName + "]."); } } else if (token == XContentParser.Token.START_OBJECT) { - if (context.getParseFieldMatcher().match(currentFieldName, ScriptField.SCRIPT)) { - script = Script.parse(parser, context.getParseFieldMatcher(), context.getDefaultScriptLanguage()); - } else if (context.getParseFieldMatcher().match(currentFieldName, BUCKETS_PATH)) { + if (Script.SCRIPT_PARSE_FIELD.match(currentFieldName)) { + script = Script.parse(parser, context.getDefaultScriptLanguage()); + } else if (BUCKETS_PATH.match(currentFieldName)) { Map map = parser.map(); bucketsPathsMap = new HashMap<>(); for (Map.Entry entry : map.entrySet()) { @@ -186,7 +185,7 @@ public static BucketSelectorPipelineAggregationBuilder parse(String reducerName, } if (script == null) { - throw new ParsingException(parser.getTokenLocation(), "Missing required field [" + ScriptField.SCRIPT.getPreferredName() + throw new ParsingException(parser.getTokenLocation(), "Missing required field [" + Script.SCRIPT_PARSE_FIELD.getPreferredName() + "] for bucket_selector aggregation [" + reducerName + "]"); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketselector/BucketSelectorPipelineAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketselector/BucketSelectorPipelineAggregator.java index eabbad7213a4e..62eed8d4e0a08 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketselector/BucketSelectorPipelineAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketselector/BucketSelectorPipelineAggregator.java @@ -29,13 +29,11 @@ import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.InternalAggregation.ReduceContext; import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation; -import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation.Bucket; import org.elasticsearch.search.aggregations.pipeline.BucketHelpers.GapPolicy; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import java.io.IOException; import java.util.ArrayList; -import java.util.Collections; import java.util.HashMap; import java.util.List; import java.util.Map; @@ -84,12 +82,11 @@ public String getWriteableName() { public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext reduceContext) { InternalMultiBucketAggregation originalAgg = (InternalMultiBucketAggregation) aggregation; - List buckets = originalAgg.getBuckets(); + List buckets = originalAgg.getBuckets(); - 
CompiledScript compiledScript = reduceContext.scriptService().compile(script, ScriptContext.Standard.AGGS, - Collections.emptyMap()); - List newBuckets = new ArrayList<>(); - for (Bucket bucket : buckets) { + CompiledScript compiledScript = reduceContext.scriptService().compile(script, ScriptContext.Standard.AGGS); + List newBuckets = new ArrayList<>(); + for (InternalMultiBucketAggregation.InternalBucket bucket : buckets) { Map vars = new HashMap<>(); if (script.getParams() != null) { vars.putAll(script.getParams()); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/cumulativesum/CumulativeSumPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/cumulativesum/CumulativeSumPipelineAggregationBuilder.java index a83787f13650e..5ac185990b488 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/cumulativesum/CumulativeSumPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/cumulativesum/CumulativeSumPipelineAggregationBuilder.java @@ -141,16 +141,16 @@ public static CumulativeSumPipelineAggregationBuilder parse(String pipelineAggre if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.VALUE_STRING) { - if (context.getParseFieldMatcher().match(currentFieldName, FORMAT)) { + if (FORMAT.match(currentFieldName)) { format = parser.text(); - } else if (context.getParseFieldMatcher().match(currentFieldName, BUCKETS_PATH)) { + } else if (BUCKETS_PATH.match(currentFieldName)) { bucketsPaths = new String[] { parser.text() }; } else { throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + pipelineAggregatorName + "]: [" + currentFieldName + "]."); } } else if (token == XContentParser.Token.START_ARRAY) { - if (context.getParseFieldMatcher().match(currentFieldName, BUCKETS_PATH)) { + if (BUCKETS_PATH.match(currentFieldName)) { List paths = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { String path = parser.text(); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/cumulativesum/CumulativeSumPipelineAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/cumulativesum/CumulativeSumPipelineAggregator.java index 98c6f7b2fa29e..8a1b70fdd145b 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/cumulativesum/CumulativeSumPipelineAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/cumulativesum/CumulativeSumPipelineAggregator.java @@ -25,7 +25,7 @@ import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.InternalAggregation.ReduceContext; import org.elasticsearch.search.aggregations.InternalAggregations; -import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation; +import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation; import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation.Bucket; import org.elasticsearch.search.aggregations.bucket.histogram.HistogramFactory; import org.elasticsearch.search.aggregations.pipeline.BucketHelpers.GapPolicy; @@ -70,13 +70,14 @@ public String getWriteableName() { @Override public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext reduceContext) { - MultiBucketsAggregation histo = (MultiBucketsAggregation) aggregation; - List 
buckets = histo.getBuckets(); + InternalMultiBucketAggregation + histo = (InternalMultiBucketAggregation) aggregation; + List buckets = histo.getBuckets(); HistogramFactory factory = (HistogramFactory) histo; - - List newBuckets = new ArrayList<>(); + List newBuckets = new ArrayList<>(buckets.size()); double sum = 0; - for (Bucket bucket : buckets) { + for (InternalMultiBucketAggregation.InternalBucket bucket : buckets) { Double thisBucketValue = resolveBucketValue(histo, bucket, bucketsPaths()[0], GapPolicy.INSERT_ZEROS); sum += thisBucketValue; List aggs = StreamSupport.stream(bucket.getAggregations().spliterator(), false).map((p) -> { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/DerivativePipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/DerivativePipelineAggregationBuilder.java index 5ffc77669b8df..bb2a1c01a505d 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/DerivativePipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/DerivativePipelineAggregationBuilder.java @@ -32,6 +32,7 @@ import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregatorFactory; +import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval; import org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregatorFactory; import org.elasticsearch.search.aggregations.pipeline.AbstractPipelineAggregationBuilder; @@ -141,7 +142,7 @@ protected PipelineAggregator createInternal(Map metaData) throws } Long xAxisUnits = null; if (units != null) { - DateTimeUnit dateTimeUnit = DateHistogramAggregatorFactory.DATE_FIELD_UNITS.get(units); + DateTimeUnit dateTimeUnit = DateHistogramAggregationBuilder.DATE_FIELD_UNITS.get(units); if (dateTimeUnit != null) { xAxisUnits = dateTimeUnit.field(DateTimeZone.UTC).getDurationField().getUnitMillis(); } else { @@ -206,20 +207,20 @@ public static DerivativePipelineAggregationBuilder parse(String pipelineAggregat if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.VALUE_STRING) { - if (context.getParseFieldMatcher().match(currentFieldName, FORMAT_FIELD)) { + if (FORMAT_FIELD.match(currentFieldName)) { format = parser.text(); - } else if (context.getParseFieldMatcher().match(currentFieldName, BUCKETS_PATH_FIELD)) { + } else if (BUCKETS_PATH_FIELD.match(currentFieldName)) { bucketsPaths = new String[] { parser.text() }; - } else if (context.getParseFieldMatcher().match(currentFieldName, GAP_POLICY_FIELD)) { + } else if (GAP_POLICY_FIELD.match(currentFieldName)) { gapPolicy = GapPolicy.parse(context, parser.text(), parser.getTokenLocation()); - } else if (context.getParseFieldMatcher().match(currentFieldName, UNIT_FIELD)) { + } else if (UNIT_FIELD.match(currentFieldName)) { units = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + pipelineAggregatorName + "]: [" + currentFieldName + "]."); } } else if (token == XContentParser.Token.START_ARRAY) { - if (context.getParseFieldMatcher().match(currentFieldName, BUCKETS_PATH_FIELD)) { + if 
(BUCKETS_PATH_FIELD.match(currentFieldName)) { List paths = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { String path = parser.text(); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/DerivativePipelineAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/DerivativePipelineAggregator.java index 480f04f545a4e..3fe60f23cf31d 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/DerivativePipelineAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/DerivativePipelineAggregator.java @@ -25,7 +25,7 @@ import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.InternalAggregation.ReduceContext; import org.elasticsearch.search.aggregations.InternalAggregations; -import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation; +import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation; import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation.Bucket; import org.elasticsearch.search.aggregations.bucket.histogram.HistogramFactory; import org.elasticsearch.search.aggregations.pipeline.BucketHelpers.GapPolicy; @@ -77,14 +77,16 @@ public String getWriteableName() { @Override public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext reduceContext) { - MultiBucketsAggregation histo = (MultiBucketsAggregation) aggregation; - List buckets = histo.getBuckets(); + InternalMultiBucketAggregation + histo = (InternalMultiBucketAggregation) aggregation; + List buckets = histo.getBuckets(); HistogramFactory factory = (HistogramFactory) histo; List newBuckets = new ArrayList<>(); Number lastBucketKey = null; Double lastBucketValue = null; - for (Bucket bucket : buckets) { + for (InternalMultiBucketAggregation.InternalBucket bucket : buckets) { Number thisBucketKey = factory.getKey(bucket); Double thisBucketValue = resolveBucketValue(histo, bucket, bucketsPaths()[0], gapPolicy); if (lastBucketValue != null && thisBucketValue != null) { @@ -107,5 +109,4 @@ public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext } return factory.createAggregation(newBuckets); } - } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/ParsedDerivative.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/ParsedDerivative.java new file mode 100644 index 0000000000000..2b871a99d9a6a --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/ParsedDerivative.java @@ -0,0 +1,79 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.aggregations.pipeline.derivative; + +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser.ValueType; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.pipeline.ParsedSimpleValue; + +import java.io.IOException; + +public class ParsedDerivative extends ParsedSimpleValue implements Derivative { + + private double normalizedValue; + private String normalizedAsString; + private boolean hasNormalizationFactor; + private static final ParseField NORMALIZED_AS_STRING = new ParseField("normalized_value_as_string"); + private static final ParseField NORMALIZED = new ParseField("normalized_value"); + + @Override + public double normalizedValue() { + return this.normalizedValue; + } + + @Override + public String getType() { + return DerivativePipelineAggregationBuilder.NAME; + } + + private static final ObjectParser PARSER = new ObjectParser<>(ParsedDerivative.class.getSimpleName(), true, + ParsedDerivative::new); + + static { + declareSingleValueFields(PARSER, Double.NaN); + PARSER.declareField((agg, normalized) -> { + agg.normalizedValue = normalized; + agg.hasNormalizationFactor = true; + }, (parser, context) -> parseDouble(parser, Double.NaN), NORMALIZED, ValueType.DOUBLE_OR_NULL); + PARSER.declareString((agg, normalAsString) -> agg.normalizedAsString = normalAsString, NORMALIZED_AS_STRING); + } + + public static ParsedDerivative fromXContent(XContentParser parser, final String name) { + ParsedDerivative derivative = PARSER.apply(parser, null); + derivative.setName(name); + return derivative; + } + + @Override + protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { + super.doXContentBody(builder, params); + if (hasNormalizationFactor) { + boolean hasValue = Double.isNaN(normalizedValue) == false; + builder.field(NORMALIZED.getPreferredName(), hasValue ? normalizedValue : null); + if (hasValue && normalizedAsString != null) { + builder.field(NORMALIZED_AS_STRING.getPreferredName(), normalizedAsString); + } + } + return builder; + } +} \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgPipelineAggregationBuilder.java index f0aa1f8112639..bc973ad442f54 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgPipelineAggregationBuilder.java @@ -322,13 +322,13 @@ public static MovAvgPipelineAggregationBuilder parse( if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.VALUE_NUMBER) { - if (context.getParseFieldMatcher().match(currentFieldName, WINDOW)) { + if (WINDOW.match(currentFieldName)) { window = parser.intValue(); if (window <= 0) { throw new ParsingException(parser.getTokenLocation(), "[" + currentFieldName + "] value must be a positive, " + "non-zero integer. 
Value supplied was [" + predict + "] in [" + pipelineAggregatorName + "]."); } - } else if (context.getParseFieldMatcher().match(currentFieldName, PREDICT)) { + } else if (PREDICT.match(currentFieldName)) { predict = parser.intValue(); if (predict <= 0) { throw new ParsingException(parser.getTokenLocation(), "[" + currentFieldName + "] value must be a positive integer." @@ -339,20 +339,20 @@ public static MovAvgPipelineAggregationBuilder parse( "Unknown key for a " + token + " in [" + pipelineAggregatorName + "]: [" + currentFieldName + "]."); } } else if (token == XContentParser.Token.VALUE_STRING) { - if (context.getParseFieldMatcher().match(currentFieldName, FORMAT)) { + if (FORMAT.match(currentFieldName)) { format = parser.text(); - } else if (context.getParseFieldMatcher().match(currentFieldName, BUCKETS_PATH)) { + } else if (BUCKETS_PATH.match(currentFieldName)) { bucketsPaths = new String[] { parser.text() }; - } else if (context.getParseFieldMatcher().match(currentFieldName, GAP_POLICY)) { + } else if (GAP_POLICY.match(currentFieldName)) { gapPolicy = GapPolicy.parse(context, parser.text(), parser.getTokenLocation()); - } else if (context.getParseFieldMatcher().match(currentFieldName, MODEL)) { + } else if (MODEL.match(currentFieldName)) { model = parser.text(); } else { throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + pipelineAggregatorName + "]: [" + currentFieldName + "]."); } } else if (token == XContentParser.Token.START_ARRAY) { - if (context.getParseFieldMatcher().match(currentFieldName, BUCKETS_PATH)) { + if (BUCKETS_PATH.match(currentFieldName)) { List paths = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { String path = parser.text(); @@ -364,14 +364,14 @@ public static MovAvgPipelineAggregationBuilder parse( "Unknown key for a " + token + " in [" + pipelineAggregatorName + "]: [" + currentFieldName + "]."); } } else if (token == XContentParser.Token.START_OBJECT) { - if (context.getParseFieldMatcher().match(currentFieldName, SETTINGS)) { + if (SETTINGS.match(currentFieldName)) { settings = parser.map(); } else { throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + pipelineAggregatorName + "]: [" + currentFieldName + "]."); } } else if (token == XContentParser.Token.VALUE_BOOLEAN) { - if (context.getParseFieldMatcher().match(currentFieldName, MINIMIZE)) { + if (MINIMIZE.match(currentFieldName)) { minimize = parser.booleanValue(); } else { throw new ParsingException(parser.getTokenLocation(), @@ -403,11 +403,10 @@ public static MovAvgPipelineAggregationBuilder parse( factory.predict(predict); } if (model != null) { - MovAvgModel.AbstractModelParser modelParser = movingAverageMdelParserRegistry.lookup(model, context.getParseFieldMatcher(), - parser.getTokenLocation()); + MovAvgModel.AbstractModelParser modelParser = movingAverageMdelParserRegistry.lookup(model, parser.getTokenLocation()); MovAvgModel movAvgModel; try { - movAvgModel = modelParser.parse(settings, pipelineAggregatorName, factory.window(), context.getParseFieldMatcher()); + movAvgModel = modelParser.parse(settings, pipelineAggregatorName, factory.window()); } catch (ParseException exception) { throw new ParsingException(parser.getTokenLocation(), "Could not parse settings for model [" + model + "].", exception); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgPipelineAggregator.java 
b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgPipelineAggregator.java index 87aa5bfda63e0..196f7cca4737f 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgPipelineAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgPipelineAggregator.java @@ -26,6 +26,7 @@ import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.InternalAggregation.ReduceContext; import org.elasticsearch.search.aggregations.InternalAggregations; +import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation; import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation; import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation.Bucket; import org.elasticsearch.search.aggregations.bucket.histogram.HistogramFactory; @@ -93,8 +94,10 @@ public String getWriteableName() { @Override public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext reduceContext) { - MultiBucketsAggregation histo = (MultiBucketsAggregation) aggregation; - List buckets = histo.getBuckets(); + InternalMultiBucketAggregation + histo = (InternalMultiBucketAggregation) aggregation; + List buckets = histo.getBuckets(); HistogramFactory factory = (HistogramFactory) histo; List newBuckets = new ArrayList<>(); @@ -110,7 +113,7 @@ public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext model = minimize(buckets, histo, model); } - for (Bucket bucket : buckets) { + for (InternalMultiBucketAggregation.InternalBucket bucket : buckets) { Double thisBucketValue = resolveBucketValue(histo, bucket, bucketsPaths()[0], gapPolicy); // Default is to reuse existing bucket. Simplifies the rest of the logic, @@ -158,7 +161,7 @@ public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext }).collect(Collectors.toList()); aggs.add(new InternalSimpleValue(name(), predictions[i], formatter, new ArrayList(), metaData())); - Bucket newBucket = factory.createBucket(newKey, 0, new InternalAggregations(aggs)); + Bucket newBucket = factory.createBucket(newKey, bucket.getDocCount(), new InternalAggregations(aggs)); // Overwrite the existing bucket with the new version newBuckets.set(lastValidPosition + i + 1, newBucket); @@ -180,13 +183,14 @@ public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext return factory.createAggregation(newBuckets); } - private MovAvgModel minimize(List buckets, MultiBucketsAggregation histo, MovAvgModel model) { + private MovAvgModel minimize(List buckets, + MultiBucketsAggregation histo, MovAvgModel model) { int counter = 0; EvictingQueue values = new EvictingQueue<>(this.window); double[] test = new double[window]; - ListIterator iter = buckets.listIterator(buckets.size()); + ListIterator iter = buckets.listIterator(buckets.size()); // We have to walk the iterator backwards because we don't know if/how many buckets are empty. 
while (iter.hasPrevious() && counter < window) { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/EwmaModel.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/EwmaModel.java index c7e6b0e898004..26fb0333b188b 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/EwmaModel.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/EwmaModel.java @@ -20,7 +20,6 @@ package org.elasticsearch.search.aggregations.pipeline.movavg.models; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -127,8 +126,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws public static final AbstractModelParser PARSER = new AbstractModelParser() { @Override - public MovAvgModel parse(@Nullable Map settings, String pipelineName, int windowSize, - ParseFieldMatcher parseFieldMatcher) throws ParseException { + public MovAvgModel parse(@Nullable Map settings, String pipelineName, int windowSize) throws ParseException { double alpha = parseDoubleParam(settings, "alpha", DEFAULT_ALPHA); checkUnrecognizedParams(settings); return new EwmaModel(alpha); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/HoltLinearModel.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/HoltLinearModel.java index d8a591972ecd0..1819333738502 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/HoltLinearModel.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/HoltLinearModel.java @@ -20,7 +20,6 @@ package org.elasticsearch.search.aggregations.pipeline.movavg.models; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -191,8 +190,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws public static final AbstractModelParser PARSER = new AbstractModelParser() { @Override - public MovAvgModel parse(@Nullable Map settings, String pipelineName, int windowSize, - ParseFieldMatcher parseFieldMatcher) throws ParseException { + public MovAvgModel parse(@Nullable Map settings, String pipelineName, int windowSize) throws ParseException { double alpha = parseDoubleParam(settings, "alpha", DEFAULT_ALPHA); double beta = parseDoubleParam(settings, "beta", DEFAULT_BETA); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/HoltWintersModel.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/HoltWintersModel.java index 2130faf8674d8..92b2e4d3ea26e 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/HoltWintersModel.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/HoltWintersModel.java @@ -23,7 +23,6 @@ import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; import 
org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -59,17 +58,16 @@ public enum SeasonalityType { * Parse a string SeasonalityType into the byte enum * * @param text SeasonalityType in string format (e.g. "add") - * @param parseFieldMatcher Matcher for field names * @return SeasonalityType enum */ @Nullable - public static SeasonalityType parse(String text, ParseFieldMatcher parseFieldMatcher) { + public static SeasonalityType parse(String text) { if (text == null) { return null; } SeasonalityType result = null; for (SeasonalityType policy : values()) { - if (parseFieldMatcher.match(text, policy.parseField)) { + if (policy.parseField.match(text)) { result = policy; break; } @@ -379,8 +377,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws public static final AbstractModelParser PARSER = new AbstractModelParser() { @Override - public MovAvgModel parse(@Nullable Map settings, String pipelineName, int windowSize, - ParseFieldMatcher parseFieldMatcher) throws ParseException { + public MovAvgModel parse(@Nullable Map settings, String pipelineName, int windowSize) throws ParseException { double alpha = parseDoubleParam(settings, "alpha", DEFAULT_ALPHA); double beta = parseDoubleParam(settings, "beta", DEFAULT_BETA); @@ -399,7 +396,7 @@ public MovAvgModel parse(@Nullable Map settings, String pipeline Object value = settings.get("type"); if (value != null) { if (value instanceof String) { - seasonalityType = SeasonalityType.parse((String)value, parseFieldMatcher); + seasonalityType = SeasonalityType.parse((String)value); settings.remove("type"); } else { throw new ParseException("Parameter [type] must be a String, type `" diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/LinearModel.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/LinearModel.java index 089f3a430ca48..3eed0bf603baa 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/LinearModel.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/LinearModel.java @@ -21,7 +21,6 @@ import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -106,8 +105,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws public static final AbstractModelParser PARSER = new AbstractModelParser() { @Override - public MovAvgModel parse(@Nullable Map settings, String pipelineName, int windowSize, - ParseFieldMatcher parseFieldMatcher) throws ParseException { + public MovAvgModel parse(@Nullable Map settings, String pipelineName, int windowSize) throws ParseException { checkUnrecognizedParams(settings); return new LinearModel(); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/MovAvgModel.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/MovAvgModel.java index 0837eca38bd92..f64117236d6d4 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/MovAvgModel.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/MovAvgModel.java @@ -20,7 +20,6 @@ package 
org.elasticsearch.search.aggregations.pipeline.movavg.models; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.io.stream.NamedWriteable; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ToXContent; @@ -143,11 +142,10 @@ public abstract static class AbstractModelParser { * @param settings Map of settings, extracted from the request * @param pipelineName Name of the parent pipeline agg * @param windowSize Size of the window for this moving avg - * @param parseFieldMatcher Matcher for field names * @return A fully built moving average model */ public abstract MovAvgModel parse(@Nullable Map settings, String pipelineName, - int windowSize, ParseFieldMatcher parseFieldMatcher) throws ParseException; + int windowSize) throws ParseException; /** diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/SimpleModel.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/SimpleModel.java index 1454488188385..e30a59d288711 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/SimpleModel.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/SimpleModel.java @@ -20,7 +20,6 @@ package org.elasticsearch.search.aggregations.pipeline.movavg.models; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -99,8 +98,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws public static final AbstractModelParser PARSER = new AbstractModelParser() { @Override - public MovAvgModel parse(@Nullable Map settings, String pipelineName, int windowSize, - ParseFieldMatcher parseFieldMatcher) throws ParseException { + public MovAvgModel parse(@Nullable Map settings, String pipelineName, int windowSize) throws ParseException { checkUnrecognizedParams(settings); return new SimpleModel(); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/serialdiff/SerialDiffPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/serialdiff/SerialDiffPipelineAggregationBuilder.java index f20a4f8da42cf..0acd4c7f1b795 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/serialdiff/SerialDiffPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/serialdiff/SerialDiffPipelineAggregationBuilder.java @@ -162,18 +162,18 @@ public static SerialDiffPipelineAggregationBuilder parse(String reducerName, Que if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.VALUE_STRING) { - if (context.getParseFieldMatcher().match(currentFieldName, FORMAT)) { + if (FORMAT.match(currentFieldName)) { format = parser.text(); - } else if (context.getParseFieldMatcher().match(currentFieldName, BUCKETS_PATH)) { + } else if (BUCKETS_PATH.match(currentFieldName)) { bucketsPaths = new String[] { parser.text() }; - } else if (context.getParseFieldMatcher().match(currentFieldName, GAP_POLICY)) { + } else if (GAP_POLICY.match(currentFieldName)) { gapPolicy = GapPolicy.parse(context, parser.text(), parser.getTokenLocation()); } else { throw new 
ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + reducerName + "]: [" + currentFieldName + "]."); } } else if (token == XContentParser.Token.VALUE_NUMBER) { - if (context.getParseFieldMatcher().match(currentFieldName, LAG)) { + if (LAG.match(currentFieldName)) { lag = parser.intValue(true); if (lag <= 0) { throw new ParsingException(parser.getTokenLocation(), @@ -186,7 +186,7 @@ public static SerialDiffPipelineAggregationBuilder parse(String reducerName, Que "Unknown key for a " + token + " in [" + reducerName + "]: [" + currentFieldName + "]."); } } else if (token == XContentParser.Token.START_ARRAY) { - if (context.getParseFieldMatcher().match(currentFieldName, BUCKETS_PATH)) { + if (BUCKETS_PATH.match(currentFieldName)) { List paths = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { String path = parser.text(); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/serialdiff/SerialDiffPipelineAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/serialdiff/SerialDiffPipelineAggregator.java index 3216d5527dc76..d438104be7fe4 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/serialdiff/SerialDiffPipelineAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/serialdiff/SerialDiffPipelineAggregator.java @@ -26,10 +26,10 @@ import org.elasticsearch.search.DocValueFormat; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.InternalAggregation.ReduceContext; -import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation; +import org.elasticsearch.search.aggregations.InternalAggregations; +import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation; import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation.Bucket; import org.elasticsearch.search.aggregations.bucket.histogram.HistogramFactory; -import org.elasticsearch.search.aggregations.InternalAggregations; import org.elasticsearch.search.aggregations.pipeline.BucketHelpers.GapPolicy; import org.elasticsearch.search.aggregations.pipeline.InternalSimpleValue; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; @@ -80,15 +80,17 @@ public String getWriteableName() { @Override public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext reduceContext) { - MultiBucketsAggregation histo = (MultiBucketsAggregation) aggregation; - List buckets = histo.getBuckets(); + InternalMultiBucketAggregation + histo = (InternalMultiBucketAggregation) aggregation; + List buckets = histo.getBuckets(); HistogramFactory factory = (HistogramFactory) histo; List newBuckets = new ArrayList<>(); EvictingQueue lagWindow = new EvictingQueue<>(lag); int counter = 0; - for (Bucket bucket : buckets) { + for (InternalMultiBucketAggregation.InternalBucket bucket : buckets) { Double thisBucketValue = resolveBucketValue(histo, bucket, bucketsPaths()[0], gapPolicy); Bucket newBucket = bucket; @@ -111,17 +113,14 @@ public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext if (!Double.isNaN(thisBucketValue) && !Double.isNaN(lagValue)) { double diff = thisBucketValue - lagValue; - List aggs = StreamSupport.stream(bucket.getAggregations().spliterator(), false).map((p) -> { - return (InternalAggregation) p; - }).collect(Collectors.toList()); - aggs.add(new InternalSimpleValue(name(), diff, formatter, new ArrayList(), 
metaData())); + List aggs = StreamSupport.stream(bucket.getAggregations().spliterator(), false).map( + (p) -> (InternalAggregation) p).collect(Collectors.toList()); + aggs.add(new InternalSimpleValue(name(), diff, formatter, new ArrayList<>(), metaData())); newBucket = factory.createBucket(factory.getKey(bucket), bucket.getDocCount(), new InternalAggregations(aggs)); } - newBuckets.add(newBucket); lagWindow.add(thisBucketValue); - } return factory.createAggregation(newBuckets); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/support/AbstractValuesSourceParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/support/AbstractValuesSourceParser.java deleted file mode 100644 index 57eea9ccf65b4..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/support/AbstractValuesSourceParser.java +++ /dev/null @@ -1,220 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.search.aggregations.support; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParsingException; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.query.QueryParseContext; -import org.elasticsearch.script.Script; -import org.elasticsearch.script.Script.ScriptField; -import org.elasticsearch.search.aggregations.Aggregator; -import org.joda.time.DateTimeZone; - -import java.io.IOException; -import java.util.HashMap; -import java.util.Map; - -/** - * - */ -public abstract class AbstractValuesSourceParser - implements Aggregator.Parser { - static final ParseField TIME_ZONE = new ParseField("time_zone"); - - public abstract static class AnyValuesSourceParser extends AbstractValuesSourceParser { - - protected AnyValuesSourceParser(boolean scriptable, boolean formattable) { - super(scriptable, formattable, false, ValuesSourceType.ANY, null); - } - } - - public abstract static class NumericValuesSourceParser extends AbstractValuesSourceParser { - - protected NumericValuesSourceParser(boolean scriptable, boolean formattable, boolean timezoneAware) { - super(scriptable, formattable, timezoneAware, ValuesSourceType.NUMERIC, ValueType.NUMERIC); - } - } - - public abstract static class BytesValuesSourceParser extends AbstractValuesSourceParser { - - protected BytesValuesSourceParser(boolean scriptable, boolean formattable) { - super(scriptable, formattable, false, ValuesSourceType.BYTES, ValueType.STRING); - } - } - - public abstract static class GeoPointValuesSourceParser extends AbstractValuesSourceParser { - - protected GeoPointValuesSourceParser(boolean scriptable, boolean formattable) { - super(scriptable, formattable, false, ValuesSourceType.GEOPOINT, ValueType.GEOPOINT); - } - } - - private boolean scriptable = true; - private boolean 
formattable = false; - private boolean timezoneAware = false; - private ValuesSourceType valuesSourceType = null; - private ValueType targetValueType = null; - - private AbstractValuesSourceParser(boolean scriptable, boolean formattable, boolean timezoneAware, ValuesSourceType valuesSourceType, - ValueType targetValueType) { - this.timezoneAware = timezoneAware; - this.valuesSourceType = valuesSourceType; - this.targetValueType = targetValueType; - this.scriptable = scriptable; - this.formattable = formattable; - } - - @Override - public final ValuesSourceAggregationBuilder parse(String aggregationName, QueryParseContext context) - throws IOException { - - XContentParser parser = context.parser(); - String field = null; - Script script = null; - ValueType valueType = null; - String format = null; - Object missing = null; - DateTimeZone timezone = null; - Map otherOptions = new HashMap<>(); - XContentParseContext parserContext = - new XContentParseContext(parser, context.getParseFieldMatcher(), context.getDefaultScriptLanguage()); - - XContentParser.Token token; - String currentFieldName = null; - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if ("missing".equals(currentFieldName) && token.isValue()) { - missing = parser.objectText(); - } else if (timezoneAware && context.getParseFieldMatcher().match(currentFieldName, TIME_ZONE)) { - if (token == XContentParser.Token.VALUE_STRING) { - timezone = DateTimeZone.forID(parser.text()); - } else if (token == XContentParser.Token.VALUE_NUMBER) { - timezone = DateTimeZone.forOffsetHours(parser.intValue()); - } else { - throw new ParsingException(parser.getTokenLocation(), - "Unexpected token " + token + " [" + currentFieldName + "] in [" + aggregationName + "]."); - } - } else if (token == XContentParser.Token.VALUE_STRING) { - if ("field".equals(currentFieldName)) { - field = parser.text(); - } else if (formattable && "format".equals(currentFieldName)) { - format = parser.text(); - } else if (scriptable) { - if ("value_type".equals(currentFieldName) || "valueType".equals(currentFieldName)) { - valueType = ValueType.resolveForScript(parser.text()); - if (targetValueType != null && valueType.isNotA(targetValueType)) { - throw new ParsingException(parser.getTokenLocation(), - "Aggregation [" + aggregationName + "] was configured with an incompatible value type [" - + valueType + "]. 
It can only work on value of type [" - + targetValueType + "]"); - } - } else if (!token(aggregationName, currentFieldName, token, parserContext, otherOptions)) { - throw new ParsingException(parser.getTokenLocation(), - "Unexpected token " + token + " [" + currentFieldName + "] in [" + aggregationName + "]."); - } - } else if (!token(aggregationName, currentFieldName, token, parserContext, otherOptions)) { - throw new ParsingException(parser.getTokenLocation(), - "Unexpected token " + token + " [" + currentFieldName + "] in [" + aggregationName + "]."); - } - } else if (scriptable && token == XContentParser.Token.START_OBJECT) { - if (context.getParseFieldMatcher().match(currentFieldName, ScriptField.SCRIPT)) { - script = Script.parse(parser, context.getParseFieldMatcher(), context.getDefaultScriptLanguage()); - } else if (!token(aggregationName, currentFieldName, token, parserContext, otherOptions)) { - throw new ParsingException(parser.getTokenLocation(), - "Unexpected token " + token + " [" + currentFieldName + "] in [" + aggregationName + "]."); - } - } else if (!token(aggregationName, currentFieldName, token, parserContext, otherOptions)) { - throw new ParsingException(parser.getTokenLocation(), - "Unexpected token " + token + " [" + currentFieldName + "] in [" + aggregationName + "]."); - } - } - - ValuesSourceAggregationBuilder factory = createFactory(aggregationName, this.valuesSourceType, this.targetValueType, - otherOptions); - if (field != null) { - factory.field(field); - } - if (script != null) { - factory.script(script); - } - if (valueType != null) { - factory.valueType(valueType); - } - if (format != null) { - factory.format(format); - } - if (missing != null) { - factory.missing(missing); - } - if (timezone != null) { - factory.timeZone(timezone); - } - return factory; - } - - /** - * Creates a {@link ValuesSourceAggregationBuilder} from the information - * gathered by the subclass. Options parsed in - * {@link AbstractValuesSourceParser} itself will be added to the factory - * after it has been returned by this method. - * - * @param aggregationName - * the name of the aggregation - * @param valuesSourceType - * the type of the {@link ValuesSource} - * @param targetValueType - * the target type of the final value output by the aggregation - * @param otherOptions - * a {@link Map} containing the extra options parsed by the - * {@link #token(String, String, XContentParser.Token, XContentParseContext, Map)} - * method - * @return the created factory - */ - protected abstract ValuesSourceAggregationBuilder createFactory(String aggregationName, ValuesSourceType valuesSourceType, - ValueType targetValueType, Map otherOptions); - - /** - * Allows subclasses of {@link AbstractValuesSourceParser} to parse extra - * parameters and store them in a {@link Map} which will later be passed to - * {@link #createFactory(String, ValuesSourceType, ValueType, Map)}. 
- * - * @param aggregationName - * the name of the aggregation - * @param currentFieldName - * the name of the current field being parsed - * @param token - * the current token for the parser - * @param context - * the query context - * @param otherOptions - * a {@link Map} of options to be populated by successive calls - * to this method which will then be passed to the - * {@link #createFactory(String, ValuesSourceType, ValueType, Map)} - * method - * @return true if the current token was correctly parsed, - * false otherwise - * @throws IOException - * if an error occurs whilst parsing - */ - protected abstract boolean token(String aggregationName, String currentFieldName, XContentParser.Token token, - XContentParseContext context, Map otherOptions) throws IOException; -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/support/AggregationContext.java b/core/src/main/java/org/elasticsearch/search/aggregations/support/AggregationContext.java deleted file mode 100644 index 79549f87392d0..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/support/AggregationContext.java +++ /dev/null @@ -1,189 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.search.aggregations.support; - -import org.apache.lucene.util.BytesRef; -import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.geo.GeoPoint; -import org.elasticsearch.common.geo.GeoUtils; -import org.elasticsearch.common.util.BigArrays; -import org.elasticsearch.index.fielddata.IndexFieldData; -import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; -import org.elasticsearch.index.fielddata.IndexNumericFieldData; -import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData; -import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData; -import org.elasticsearch.search.SearchParseException; -import org.elasticsearch.search.aggregations.AggregationExecutionException; -import org.elasticsearch.search.internal.SearchContext; -import org.joda.time.DateTimeZone; - -import java.io.IOException; - -public class AggregationContext { - - private final SearchContext searchContext; - - public AggregationContext(SearchContext searchContext) { - this.searchContext = searchContext; - } - - public SearchContext searchContext() { - return searchContext; - } - - public BigArrays bigArrays() { - return searchContext.bigArrays(); - } - - /** Get a value source given its configuration. A return value of null indicates that - * no value source could be built. 
*/ - @Nullable - public VS valuesSource(ValuesSourceConfig config, SearchContext context) throws IOException { - if (!config.valid()) { - throw new IllegalStateException( - "value source config is invalid; must have either a field context or a script or marked as unwrapped"); - } - - final VS vs; - if (config.unmapped()) { - if (config.missing() == null) { - // otherwise we will have values because of the missing value - vs = null; - } else if (config.valueSourceType() == ValuesSourceType.NUMERIC) { - vs = (VS) ValuesSource.Numeric.EMPTY; - } else if (config.valueSourceType() == ValuesSourceType.GEOPOINT) { - vs = (VS) ValuesSource.GeoPoint.EMPTY; - } else if (config.valueSourceType() == ValuesSourceType.ANY || config.valueSourceType() == ValuesSourceType.BYTES) { - vs = (VS) ValuesSource.Bytes.WithOrdinals.EMPTY; - } else { - throw new SearchParseException(searchContext, "Can't deal with unmapped ValuesSource type " - + config.valueSourceType(), null); - } - } else { - vs = originalValuesSource(config); - } - - if (config.missing() == null) { - return vs; - } - - if (vs instanceof ValuesSource.Bytes) { - final BytesRef missing = new BytesRef(config.missing().toString()); - if (vs instanceof ValuesSource.Bytes.WithOrdinals) { - return (VS) MissingValues.replaceMissing((ValuesSource.Bytes.WithOrdinals) vs, missing); - } else { - return (VS) MissingValues.replaceMissing((ValuesSource.Bytes) vs, missing); - } - } else if (vs instanceof ValuesSource.Numeric) { - Number missing = null; - if (config.missing() instanceof Number) { - missing = (Number) config.missing(); - } else { - if (config.fieldContext() != null && config.fieldContext().fieldType() != null) { - missing = config.fieldContext().fieldType().docValueFormat(null, DateTimeZone.UTC) - .parseDouble(config.missing().toString(), false, context::nowInMillis); - } else { - missing = Double.parseDouble(config.missing().toString()); - } - } - return (VS) MissingValues.replaceMissing((ValuesSource.Numeric) vs, missing); - } else if (vs instanceof ValuesSource.GeoPoint) { - // TODO: also support the structured formats of geo points - final GeoPoint missing = GeoUtils.parseGeoPoint(config.missing().toString(), new GeoPoint()); - return (VS) MissingValues.replaceMissing((ValuesSource.GeoPoint) vs, missing); - } else { - // Should not happen - throw new SearchParseException(searchContext, "Can't apply missing values on a " + vs.getClass(), null); - } - } - - /** - * Return the original values source, before we apply `missing`. 
- */ - private VS originalValuesSource(ValuesSourceConfig config) throws IOException { - if (config.fieldContext() == null) { - if (config.valueSourceType() == ValuesSourceType.NUMERIC) { - return (VS) numericScript(config); - } - if (config.valueSourceType() == ValuesSourceType.BYTES) { - return (VS) bytesScript(config); - } - throw new AggregationExecutionException("value source of type [" + config.valueSourceType().name() - + "] is not supported by scripts"); - } - - if (config.valueSourceType() == ValuesSourceType.NUMERIC) { - return (VS) numericField(config); - } - if (config.valueSourceType() == ValuesSourceType.GEOPOINT) { - return (VS) geoPointField(config); - } - // falling back to bytes values - return (VS) bytesField(config); - } - - private ValuesSource.Numeric numericScript(ValuesSourceConfig config) throws IOException { - return new ValuesSource.Numeric.Script(config.script(), config.scriptValueType()); - } - - private ValuesSource.Numeric numericField(ValuesSourceConfig config) throws IOException { - - if (!(config.fieldContext().indexFieldData() instanceof IndexNumericFieldData)) { - throw new IllegalArgumentException("Expected numeric type on field [" + config.fieldContext().field() + - "], but got [" + config.fieldContext().fieldType().typeName() + "]"); - } - - ValuesSource.Numeric dataSource = new ValuesSource.Numeric.FieldData((IndexNumericFieldData)config.fieldContext().indexFieldData()); - if (config.script() != null) { - dataSource = new ValuesSource.Numeric.WithScript(dataSource, config.script()); - } - return dataSource; - } - - private ValuesSource bytesField(ValuesSourceConfig config) throws IOException { - final IndexFieldData indexFieldData = config.fieldContext().indexFieldData(); - ValuesSource dataSource; - if (indexFieldData instanceof ParentChildIndexFieldData) { - dataSource = new ValuesSource.Bytes.WithOrdinals.ParentChild((ParentChildIndexFieldData) indexFieldData); - } else if (indexFieldData instanceof IndexOrdinalsFieldData) { - dataSource = new ValuesSource.Bytes.WithOrdinals.FieldData((IndexOrdinalsFieldData) indexFieldData); - } else { - dataSource = new ValuesSource.Bytes.FieldData(indexFieldData); - } - if (config.script() != null) { - dataSource = new ValuesSource.WithScript(dataSource, config.script()); - } - return dataSource; - } - - private ValuesSource.Bytes bytesScript(ValuesSourceConfig config) throws IOException { - return new ValuesSource.Bytes.Script(config.script()); - } - - private ValuesSource.GeoPoint geoPointField(ValuesSourceConfig config) throws IOException { - - if (!(config.fieldContext().indexFieldData() instanceof IndexGeoPointFieldData)) { - throw new IllegalArgumentException("Expected geo_point type on field [" + config.fieldContext().field() + - "], but got [" + config.fieldContext().fieldType().typeName() + "]"); - } - - return new ValuesSource.GeoPoint.Fielddata((IndexGeoPointFieldData) config.fieldContext().indexFieldData()); - } - -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/support/GeoPointParser.java b/core/src/main/java/org/elasticsearch/search/aggregations/support/GeoPointParser.java deleted file mode 100644 index fd2f3636d173f..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/support/GeoPointParser.java +++ /dev/null @@ -1,100 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. 
Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.search.aggregations.support; - - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParsingException; -import org.elasticsearch.common.geo.GeoPoint; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.search.aggregations.InternalAggregation; - -import java.io.IOException; -import java.util.Map; - -/** - * - */ -public class GeoPointParser { - - private final InternalAggregation.Type aggType; - private final ParseField field; - - public GeoPointParser(InternalAggregation.Type aggType, ParseField field) { - this.aggType = aggType; - this.field = field; - } - - public boolean token(String aggName, String currentFieldName, XContentParser.Token token, XContentParser parser, - ParseFieldMatcher parseFieldMatcher, Map otherOptions) throws IOException { - if (!parseFieldMatcher.match(currentFieldName, field)) { - return false; - } - if (token == XContentParser.Token.VALUE_STRING) { - GeoPoint point = new GeoPoint(); - point.resetFromString(parser.text()); - otherOptions.put(field, point); - return true; - } - if (token == XContentParser.Token.START_ARRAY) { - double lat = Double.NaN; - double lon = Double.NaN; - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - if (Double.isNaN(lon)) { - lon = parser.doubleValue(); - } else if (Double.isNaN(lat)) { - lat = parser.doubleValue(); - } else { - throw new ParsingException(parser.getTokenLocation(), "malformed [" + currentFieldName + "] geo point array in [" - + aggName + "] " + aggType + " aggregation. a geo point array must be of the form [lon, lat]"); - } - } - GeoPoint point = new GeoPoint(lat, lon); - otherOptions.put(field, point); - return true; - } - if (token == XContentParser.Token.START_OBJECT) { - double lat = Double.NaN; - double lon = Double.NaN; - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (token == XContentParser.Token.VALUE_NUMBER) { - if ("lat".equals(currentFieldName)) { - lat = parser.doubleValue(); - } else if ("lon".equals(currentFieldName)) { - lon = parser.doubleValue(); - } - } - } - if (Double.isNaN(lat) || Double.isNaN(lon)) { - throw new ParsingException(parser.getTokenLocation(), - "malformed [" + currentFieldName + "] geo point object. 
either [lat] or [lon] (or both) are " + "missing in [" - + aggName + "] " + aggType + " aggregation"); - } - GeoPoint point = new GeoPoint(lat, lon); - otherOptions.put(field, point); - return true; - } - return false; - } - -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/support/MissingValues.java b/core/src/main/java/org/elasticsearch/search/aggregations/support/MissingValues.java index 28a4bd2567ce4..4a01b67b78078 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/support/MissingValues.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/support/MissingValues.java @@ -80,7 +80,7 @@ public int count() { } public static ValuesSource.Numeric replaceMissing(final ValuesSource.Numeric valuesSource, final Number missing) { - final boolean missingIsFloat = missing.longValue() != (long) missing.doubleValue(); + final boolean missingIsFloat = missing.doubleValue() % 1 != 0; final boolean isFloatingPoint = valuesSource.isFloatingPoint() || missingIsFloat; return new ValuesSource.Numeric() { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/support/ValueType.java b/core/src/main/java/org/elasticsearch/search/aggregations/support/ValueType.java index 9e0bf350beb96..6c918feb57a5c 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/support/ValueType.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/support/ValueType.java @@ -71,6 +71,7 @@ public boolean isNumeric() { } }, IP((byte) 6, "ip", "ip", ValuesSourceType.BYTES, IndexFieldData.class, DocValueFormat.IP), + // TODO: what is the difference between "number" and "numeric"? NUMERIC((byte) 7, "numeric", "numeric", ValuesSourceType.NUMERIC, IndexNumericFieldData.class, DocValueFormat.RAW) { @Override public boolean isNumeric() { @@ -82,6 +83,12 @@ public boolean isNumeric() { public boolean isGeoPoint() { return true; } + }, + BOOLEAN((byte) 9, "boolean", "boolean", ValuesSourceType.NUMERIC, IndexNumericFieldData.class, DocValueFormat.BOOLEAN) { + @Override + public boolean isNumeric() { + return super.isNumeric(); + } }; final String description; @@ -91,7 +98,7 @@ public boolean isGeoPoint() { private final byte id; private String preferredName; - private ValueType(byte id, String description, String preferredName, ValuesSourceType valuesSourceType, + ValueType(byte id, String description, String preferredName, ValuesSourceType valuesSourceType, Class fieldDataType, DocValueFormat defaultFormat) { this.id = id; this.description = description; @@ -153,7 +160,9 @@ public static ValueType resolveForScript(String type) { case "byte": return LONG; case "date": return DATE; case "ip": return IP; + case "boolean": return BOOLEAN; default: + // TODO: do not be lenient here return null; } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSource.java b/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSource.java index 124b7ba71f4b5..ca3424ae7042b 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSource.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSource.java @@ -23,7 +23,6 @@ import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.RandomAccessOrds; -import org.apache.lucene.index.SortedDocValues; import org.apache.lucene.index.SortedNumericDocValues; import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.Scorer; @@ -31,19 +30,16 
@@ import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.lucene.ScorerAware; import org.elasticsearch.index.fielddata.AtomicOrdinalsFieldData; -import org.elasticsearch.index.fielddata.AtomicParentChildFieldData; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; import org.elasticsearch.index.fielddata.IndexNumericFieldData; import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData; -import org.elasticsearch.index.fielddata.IndexParentChildFieldData; import org.elasticsearch.index.fielddata.MultiGeoPointValues; import org.elasticsearch.index.fielddata.SortedBinaryDocValues; import org.elasticsearch.index.fielddata.SortedNumericDoubleValues; import org.elasticsearch.index.fielddata.SortingBinaryDocValues; import org.elasticsearch.index.fielddata.SortingNumericDocValues; import org.elasticsearch.index.fielddata.SortingNumericDoubleValues; -import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData; import org.elasticsearch.script.LeafSearchScript; import org.elasticsearch.script.SearchScript; import org.elasticsearch.search.aggregations.support.ValuesSource.WithScript.BytesValues; @@ -154,40 +150,6 @@ public RandomAccessOrds globalOrdinalsValues(LeafReaderContext context) { } } - public static class ParentChild extends Bytes { - - protected final ParentChildIndexFieldData indexFieldData; - - public ParentChild(ParentChildIndexFieldData indexFieldData) { - this.indexFieldData = indexFieldData; - } - - public long globalMaxOrd(IndexSearcher indexSearcher, String type) { - DirectoryReader indexReader = (DirectoryReader) indexSearcher.getIndexReader(); - if (indexReader.leaves().isEmpty()) { - return 0; - } else { - LeafReaderContext atomicReaderContext = indexReader.leaves().get(0); - IndexParentChildFieldData globalFieldData = indexFieldData.loadGlobal(indexReader); - AtomicParentChildFieldData afd = globalFieldData.load(atomicReaderContext); - SortedDocValues values = afd.getOrdinalsValues(type); - return values.getValueCount(); - } - } - - public SortedDocValues globalOrdinalsValues(String type, LeafReaderContext context) { - final IndexParentChildFieldData global = indexFieldData.loadGlobal((DirectoryReader)context.parent.reader()); - final AtomicParentChildFieldData atomicFieldData = global.load(context); - return atomicFieldData.getOrdinalsValues(type); - } - - @Override - public SortedBinaryDocValues bytesValues(LeafReaderContext context) { - final AtomicParentChildFieldData atomicFieldData = indexFieldData.load(context); - return atomicFieldData.getBytesValues(); - } - } - public static class FieldData extends Bytes { protected final IndexFieldData indexFieldData; @@ -318,7 +280,7 @@ static class LongValues extends SortingNumericDocValues implements ScorerAware { private final SortedNumericDocValues longValues; private final LeafSearchScript script; - public LongValues(SortedNumericDocValues values, LeafSearchScript script) { + LongValues(SortedNumericDocValues values, LeafSearchScript script) { this.longValues = values; this.script = script; } @@ -346,7 +308,7 @@ static class DoubleValues extends SortingNumericDoubleValues implements ScorerAw private final SortedNumericDoubleValues doubleValues; private final LeafSearchScript script; - public DoubleValues(SortedNumericDoubleValues values, LeafSearchScript script) { + DoubleValues(SortedNumericDoubleValues values, LeafSearchScript script) { this.doubleValues = values; this.script = script; } @@ -462,7 +424,7 @@ static class 
BytesValues extends SortingBinaryDocValues implements ScorerAware { private final SortedBinaryDocValues bytesValues; private final LeafSearchScript script; - public BytesValues(SortedBinaryDocValues bytesValues, LeafSearchScript script) { + BytesValues(SortedBinaryDocValues bytesValues, LeafSearchScript script) { this.bytesValues = bytesValues; this.script = script; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceAggregationBuilder.java index 8f14a1ffaf9d1..de76d1490e942 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceAggregationBuilder.java @@ -18,29 +18,19 @@ */ package org.elasticsearch.search.aggregations.support; -import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.index.fielddata.IndexFieldData; -import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; -import org.elasticsearch.index.fielddata.IndexNumericFieldData; -import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.script.Script; -import org.elasticsearch.script.ScriptContext; -import org.elasticsearch.script.SearchScript; -import org.elasticsearch.search.DocValueFormat; -import org.elasticsearch.search.aggregations.AggregationInitializationException; import org.elasticsearch.search.aggregations.AbstractAggregationBuilder; +import org.elasticsearch.search.aggregations.AggregationInitializationException; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.internal.SearchContext; import org.joda.time.DateTimeZone; import java.io.IOException; -import java.util.Collections; import java.util.Objects; /** @@ -52,28 +42,29 @@ public abstract class ValuesSourceAggregationBuilder> extends ValuesSourceAggregationBuilder { - protected LeafOnly(String name, Type type, ValuesSourceType valuesSourceType, ValueType targetValueType) { - super(name, type, valuesSourceType, targetValueType); + protected LeafOnly(String name, ValuesSourceType valuesSourceType, ValueType targetValueType) { + super(name, valuesSourceType, targetValueType); } /** * Read an aggregation from a stream that does not serialize its targetValueType. This should be used by most subclasses. */ - protected LeafOnly(StreamInput in, Type type, ValuesSourceType valuesSourceType, ValueType targetValueType) throws IOException { - super(in, type, valuesSourceType, targetValueType); + protected LeafOnly(StreamInput in, ValuesSourceType valuesSourceType, ValueType targetValueType) throws IOException { + super(in, valuesSourceType, targetValueType); } /** * Read an aggregation from a stream that serializes its targetValueType. This should only be used by subclasses that override * {@link #serializeTargetValueType()} to return true. 
*/ - protected LeafOnly(StreamInput in, Type type, ValuesSourceType valuesSourceType) throws IOException { - super(in, type, valuesSourceType); + protected LeafOnly(StreamInput in, ValuesSourceType valuesSourceType) throws IOException { + super(in, valuesSourceType); } @Override public AB subAggregations(Builder subFactories) { - throw new AggregationInitializationException("Aggregator [" + name + "] of type [" + type + "] cannot accept sub-aggregations"); + throw new AggregationInitializationException("Aggregator [" + name + "] of type [" + + getType() + "] cannot accept sub-aggregations"); } } @@ -87,8 +78,8 @@ public AB subAggregations(Builder subFactories) { private DateTimeZone timeZone = null; protected ValuesSourceConfig config; - protected ValuesSourceAggregationBuilder(String name, Type type, ValuesSourceType valuesSourceType, ValueType targetValueType) { - super(name, type); + protected ValuesSourceAggregationBuilder(String name, ValuesSourceType valuesSourceType, ValueType targetValueType) { + super(name); if (valuesSourceType == null) { throw new IllegalArgumentException("[valuesSourceType] must not be null: [" + name + "]"); } @@ -99,9 +90,9 @@ protected ValuesSourceAggregationBuilder(String name, Type type, ValuesSourceTyp /** * Read an aggregation from a stream that does not serialize its targetValueType. This should be used by most subclasses. */ - protected ValuesSourceAggregationBuilder(StreamInput in, Type type, ValuesSourceType valuesSourceType, ValueType targetValueType) + protected ValuesSourceAggregationBuilder(StreamInput in, ValuesSourceType valuesSourceType, ValueType targetValueType) throws IOException { - super(in, type); + super(in); assert false == serializeTargetValueType() : "Wrong read constructor called for subclass that provides its targetValueType"; this.valuesSourceType = valuesSourceType; this.targetValueType = targetValueType; @@ -112,8 +103,8 @@ protected ValuesSourceAggregationBuilder(StreamInput in, Type type, ValuesSource * Read an aggregation from a stream that serializes its targetValueType. This should only be used by subclasses that override * {@link #serializeTargetValueType()} to return true. 
*/ - protected ValuesSourceAggregationBuilder(StreamInput in, Type type, ValuesSourceType valuesSourceType) throws IOException { - super(in, type); + protected ValuesSourceAggregationBuilder(StreamInput in, ValuesSourceType valuesSourceType) throws IOException { + super(in); assert serializeTargetValueType() : "Wrong read constructor called for subclass that serializes its targetValueType"; this.valuesSourceType = valuesSourceType; this.targetValueType = in.readOptionalWriteable(ValueType::readFromStream); @@ -294,102 +285,21 @@ public DateTimeZone timeZone() { } @Override - protected final ValuesSourceAggregatorFactory doBuild(AggregationContext context, AggregatorFactory parent, + protected final ValuesSourceAggregatorFactory doBuild(SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder) throws IOException { ValuesSourceConfig config = resolveConfig(context); ValuesSourceAggregatorFactory factory = innerBuild(context, config, parent, subFactoriesBuilder); return factory; } - protected ValuesSourceConfig resolveConfig(AggregationContext context) { - ValuesSourceConfig config = config(context); - return config; - } - - protected abstract ValuesSourceAggregatorFactory innerBuild(AggregationContext context, ValuesSourceConfig config, - AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder) throws IOException; - - public ValuesSourceConfig config(AggregationContext context) { - + protected ValuesSourceConfig resolveConfig(SearchContext context) { ValueType valueType = this.valueType != null ? this.valueType : targetValueType; - - if (field == null) { - if (script == null) { - @SuppressWarnings("unchecked") - ValuesSourceConfig config = new ValuesSourceConfig(ValuesSourceType.ANY); - config.format(resolveFormat(null, valueType)); - return config; - } - ValuesSourceType valuesSourceType = valueType != null ? valueType.getValuesSourceType() : this.valuesSourceType; - if (valuesSourceType == null || valuesSourceType == ValuesSourceType.ANY) { - // the specific value source type is undefined, but for scripts, - // we need to have a specific value source - // type to know how to handle the script values, so we fallback - // on Bytes - valuesSourceType = ValuesSourceType.BYTES; - } - ValuesSourceConfig config = new ValuesSourceConfig(valuesSourceType); - config.missing(missing); - config.timezone(timeZone); - config.format(resolveFormat(format, valueType)); - config.script(createScript(script, context.searchContext())); - config.scriptValueType(valueType); - return config; - } - - MappedFieldType fieldType = context.searchContext().smartNameFieldType(field); - if (fieldType == null) { - ValuesSourceType valuesSourceType = valueType != null ? valueType.getValuesSourceType() : this.valuesSourceType; - ValuesSourceConfig config = new ValuesSourceConfig<>(valuesSourceType); - config.missing(missing); - config.timezone(timeZone); - config.format(resolveFormat(format, valueType)); - config.unmapped(true); - if (valueType != null) { - // todo do we really need this for unmapped? 
- config.scriptValueType(valueType); - } - return config; - } - - IndexFieldData indexFieldData = context.searchContext().fieldData().getForField(fieldType); - - ValuesSourceConfig config; - if (valuesSourceType == ValuesSourceType.ANY) { - if (indexFieldData instanceof IndexNumericFieldData) { - config = new ValuesSourceConfig<>(ValuesSourceType.NUMERIC); - } else if (indexFieldData instanceof IndexGeoPointFieldData) { - config = new ValuesSourceConfig<>(ValuesSourceType.GEOPOINT); - } else { - config = new ValuesSourceConfig<>(ValuesSourceType.BYTES); - } - } else { - config = new ValuesSourceConfig(valuesSourceType); - } - - config.fieldContext(new FieldContext(field, indexFieldData, fieldType)); - config.missing(missing); - config.timezone(timeZone); - config.script(createScript(script, context.searchContext())); - config.format(fieldType.docValueFormat(format, timeZone)); - return config; - } - - private SearchScript createScript(Script script, SearchContext context) { - return script == null ? null - : context.scriptService().search(context.lookup(), script, ScriptContext.Standard.AGGS, Collections.emptyMap()); + return ValuesSourceConfig.resolve(context.getQueryShardContext(), + valueType, field, script, missing, timeZone, format); } - private static DocValueFormat resolveFormat(@Nullable String format, @Nullable ValueType valueType) { - if (valueType == null) { - return DocValueFormat.RAW; // we can't figure it out - } - DocValueFormat valueFormat = valueType.defaultFormat; - if (valueFormat instanceof DocValueFormat.Decimal && format != null) { - valueFormat = new DocValueFormat.Decimal(format); - } - return valueFormat; - } + protected abstract ValuesSourceAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig config, + AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder) throws IOException; @Override public final XContentBuilder internalXContent(XContentBuilder builder, Params params) throws IOException { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceAggregatorFactory.java index 36ab0d505b959..28d82f4cafd72 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceAggregatorFactory.java @@ -22,8 +22,8 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.InternalAggregation.Type; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; +import org.elasticsearch.search.internal.SearchContext; import org.joda.time.DateTimeZone; import java.io.IOException; @@ -35,9 +35,9 @@ public abstract class ValuesSourceAggregatorFactory config; - public ValuesSourceAggregatorFactory(String name, Type type, ValuesSourceConfig config, AggregationContext context, + public ValuesSourceAggregatorFactory(String name, ValuesSourceConfig config, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { - super(name, type, context, parent, subFactoriesBuilder, metaData); + super(name, context, parent, subFactoriesBuilder, metaData); this.config = config; } @@ -48,7 +48,7 @@ public DateTimeZone timeZone() { @Override 
public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBucket, List pipelineAggregators, Map metaData) throws IOException { - VS vs = context.valuesSource(config, context.searchContext()); + VS vs = config.toValuesSource(context.getQueryShardContext()); if (vs == null) { return createUnmapped(parent, pipelineAggregators, metaData); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceConfig.java b/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceConfig.java index cb6f76037cf03..e5fac62840fac 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceConfig.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceConfig.java @@ -18,15 +18,123 @@ */ package org.elasticsearch.search.aggregations.support; +import org.apache.lucene.util.BytesRef; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.geo.GeoPoint; +import org.elasticsearch.common.geo.GeoUtils; +import org.elasticsearch.index.fielddata.IndexFieldData; +import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; +import org.elasticsearch.index.fielddata.IndexNumericFieldData; +import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData; +import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.index.query.QueryShardContext; +import org.elasticsearch.script.Script; +import org.elasticsearch.script.ScriptContext; import org.elasticsearch.script.SearchScript; import org.elasticsearch.search.DocValueFormat; +import org.elasticsearch.search.aggregations.AggregationExecutionException; import org.joda.time.DateTimeZone; +import java.io.IOException; + /** - * + * A configuration that tells aggregations how to retrieve data from the index + * in order to run a specific aggregation. */ public class ValuesSourceConfig { + /** + * Resolve a {@link ValuesSourceConfig} given configuration parameters. + */ + public static ValuesSourceConfig resolve( + QueryShardContext context, + ValueType valueType, + String field, Script script, + Object missing, + DateTimeZone timeZone, + String format) { + + if (field == null) { + if (script == null) { + @SuppressWarnings("unchecked") + ValuesSourceConfig config = new ValuesSourceConfig<>(ValuesSourceType.ANY); + config.format(resolveFormat(null, valueType)); + return config; + } + ValuesSourceType valuesSourceType = valueType != null ? valueType.getValuesSourceType() : ValuesSourceType.ANY; + if (valuesSourceType == ValuesSourceType.ANY) { + // the specific value source type is undefined, but for scripts, + // we need to have a specific value source + // type to know how to handle the script values, so we fallback + // on Bytes + valuesSourceType = ValuesSourceType.BYTES; + } + ValuesSourceConfig config = new ValuesSourceConfig(valuesSourceType); + config.missing(missing); + config.timezone(timeZone); + config.format(resolveFormat(format, valueType)); + config.script(createScript(script, context)); + config.scriptValueType(valueType); + return config; + } + + MappedFieldType fieldType = context.fieldMapper(field); + if (fieldType == null) { + ValuesSourceType valuesSourceType = valueType != null ? 
valueType.getValuesSourceType() : ValuesSourceType.ANY; + ValuesSourceConfig config = new ValuesSourceConfig<>(valuesSourceType); + config.missing(missing); + config.timezone(timeZone); + config.format(resolveFormat(format, valueType)); + config.unmapped(true); + if (valueType != null) { + // todo do we really need this for unmapped? + config.scriptValueType(valueType); + } + return config; + } + + IndexFieldData indexFieldData = context.getForField(fieldType); + + ValuesSourceConfig config; + if (valueType == null) { + if (indexFieldData instanceof IndexNumericFieldData) { + config = new ValuesSourceConfig<>(ValuesSourceType.NUMERIC); + } else if (indexFieldData instanceof IndexGeoPointFieldData) { + config = new ValuesSourceConfig<>(ValuesSourceType.GEOPOINT); + } else { + config = new ValuesSourceConfig<>(ValuesSourceType.BYTES); + } + } else { + config = new ValuesSourceConfig<>(valueType.getValuesSourceType()); + } + + config.fieldContext(new FieldContext(field, indexFieldData, fieldType)); + config.missing(missing); + config.timezone(timeZone); + config.script(createScript(script, context)); + config.format(fieldType.docValueFormat(format, timeZone)); + return config; + } + + private static SearchScript createScript(Script script, QueryShardContext context) { + if (script == null) { + return null; + } else { + return context.getSearchScript(script, ScriptContext.Standard.AGGS); + } + } + + private static DocValueFormat resolveFormat(@Nullable String format, @Nullable ValueType valueType) { + if (valueType == null) { + return DocValueFormat.RAW; // we can't figure it out + } + DocValueFormat valueFormat = valueType.defaultFormat; + if (valueFormat instanceof DocValueFormat.Decimal && format != null) { + valueFormat = new DocValueFormat.Decimal(format); + } + return valueFormat; + } + private final ValuesSourceType valueSourceType; private FieldContext fieldContext; private SearchScript script; @@ -110,4 +218,126 @@ public DateTimeZone timezone() { public DocValueFormat format() { return format; } + + /** Get a value source given its configuration. A return value of null indicates that + * no value source could be built. 
*/ + @Nullable + public VS toValuesSource(QueryShardContext context) throws IOException { + if (!valid()) { + throw new IllegalStateException( + "value source config is invalid; must have either a field context or a script or marked as unwrapped"); + } + + final VS vs; + if (unmapped()) { + if (missing() == null) { + // otherwise we will have values because of the missing value + vs = null; + } else if (valueSourceType() == ValuesSourceType.NUMERIC) { + vs = (VS) ValuesSource.Numeric.EMPTY; + } else if (valueSourceType() == ValuesSourceType.GEOPOINT) { + vs = (VS) ValuesSource.GeoPoint.EMPTY; + } else if (valueSourceType() == ValuesSourceType.ANY || valueSourceType() == ValuesSourceType.BYTES) { + vs = (VS) ValuesSource.Bytes.WithOrdinals.EMPTY; + } else { + throw new IllegalArgumentException("Can't deal with unmapped ValuesSource type " + valueSourceType()); + } + } else { + vs = originalValuesSource(); + } + + if (missing() == null) { + return vs; + } + + if (vs instanceof ValuesSource.Bytes) { + final BytesRef missing = new BytesRef(missing().toString()); + if (vs instanceof ValuesSource.Bytes.WithOrdinals) { + return (VS) MissingValues.replaceMissing((ValuesSource.Bytes.WithOrdinals) vs, missing); + } else { + return (VS) MissingValues.replaceMissing((ValuesSource.Bytes) vs, missing); + } + } else if (vs instanceof ValuesSource.Numeric) { + Number missing = format.parseDouble(missing().toString(), false, context::nowInMillis); + return (VS) MissingValues.replaceMissing((ValuesSource.Numeric) vs, missing); + } else if (vs instanceof ValuesSource.GeoPoint) { + // TODO: also support the structured formats of geo points + final GeoPoint missing = GeoUtils.parseGeoPoint(missing().toString(), new GeoPoint()); + return (VS) MissingValues.replaceMissing((ValuesSource.GeoPoint) vs, missing); + } else { + // Should not happen + throw new IllegalArgumentException("Can't apply missing values on a " + vs.getClass()); + } + } + + /** + * Return the original values source, before we apply `missing`. 
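Editor's note: the resolution logic above replaces the deleted AggregationContext, so an aggregator factory now asks the config itself for its ValuesSource. Below is a minimal sketch of that flow, assuming a numeric field named "price" and a QueryShardContext already in hand (both illustrative); the resolve(...) and toValuesSource(...) signatures are the ones introduced in this patch, and the generic parameterization is assumed since the hunk does not show it.

```java
import org.elasticsearch.index.query.QueryShardContext;
import org.elasticsearch.search.aggregations.support.ValueType;
import org.elasticsearch.search.aggregations.support.ValuesSource;
import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;

import java.io.IOException;

class ValuesSourceResolutionSketch {
    // Illustrative only: "price" is a made-up field and the numeric parameterization is assumed.
    static ValuesSource.Numeric priceValuesSource(QueryShardContext shardContext) throws IOException {
        ValuesSourceConfig<ValuesSource.Numeric> config = ValuesSourceConfig.resolve(
                shardContext,       // replaces the old AggregationContext/SearchContext plumbing
                ValueType.NUMERIC,  // target value type
                "price",            // field name; null would mean a script-only source
                null,               // no script
                0,                  // `missing` value substituted for documents without the field
                null,               // no time zone
                null);              // no format override
        // toValuesSource returns null only for an unmapped field with no `missing` value.
        return config.toValuesSource(shardContext);
    }
}
```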
+ */ + private VS originalValuesSource() throws IOException { + if (fieldContext() == null) { + if (valueSourceType() == ValuesSourceType.NUMERIC) { + return (VS) numericScript(); + } + if (valueSourceType() == ValuesSourceType.BYTES) { + return (VS) bytesScript(); + } + throw new AggregationExecutionException("value source of type [" + valueSourceType().name() + + "] is not supported by scripts"); + } + + if (valueSourceType() == ValuesSourceType.NUMERIC) { + return (VS) numericField(); + } + if (valueSourceType() == ValuesSourceType.GEOPOINT) { + return (VS) geoPointField(); + } + // falling back to bytes values + return (VS) bytesField(); + } + + private ValuesSource.Numeric numericScript() throws IOException { + return new ValuesSource.Numeric.Script(script(), scriptValueType()); + } + + private ValuesSource.Numeric numericField() throws IOException { + + if (!(fieldContext().indexFieldData() instanceof IndexNumericFieldData)) { + throw new IllegalArgumentException("Expected numeric type on field [" + fieldContext().field() + + "], but got [" + fieldContext().fieldType().typeName() + "]"); + } + + ValuesSource.Numeric dataSource = new ValuesSource.Numeric.FieldData((IndexNumericFieldData)fieldContext().indexFieldData()); + if (script() != null) { + dataSource = new ValuesSource.Numeric.WithScript(dataSource, script()); + } + return dataSource; + } + + private ValuesSource bytesField() throws IOException { + final IndexFieldData indexFieldData = fieldContext().indexFieldData(); + ValuesSource dataSource; + if (indexFieldData instanceof IndexOrdinalsFieldData) { + dataSource = new ValuesSource.Bytes.WithOrdinals.FieldData((IndexOrdinalsFieldData) indexFieldData); + } else { + dataSource = new ValuesSource.Bytes.FieldData(indexFieldData); + } + if (script() != null) { + dataSource = new ValuesSource.WithScript(dataSource, script()); + } + return dataSource; + } + + private ValuesSource.Bytes bytesScript() throws IOException { + return new ValuesSource.Bytes.Script(script()); + } + + private ValuesSource.GeoPoint geoPointField() throws IOException { + + if (!(fieldContext().indexFieldData() instanceof IndexGeoPointFieldData)) { + throw new IllegalArgumentException("Expected geo_point type on field [" + fieldContext().field() + + "], but got [" + fieldContext().fieldType().typeName() + "]"); + } + + return new ValuesSource.GeoPoint.Fielddata((IndexGeoPointFieldData) fieldContext().indexFieldData()); + } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceParserHelper.java b/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceParserHelper.java new file mode 100644 index 0000000000000..7b174d789f4e4 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceParserHelper.java @@ -0,0 +1,103 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. 
See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.support; + +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.script.Script; +import org.joda.time.DateTimeZone; + +public final class ValuesSourceParserHelper { + static final ParseField TIME_ZONE = new ParseField("time_zone"); + + private ValuesSourceParserHelper() {} // utility class, no instantiation + + public static void declareAnyFields( + ObjectParser, QueryParseContext> objectParser, + boolean scriptable, boolean formattable) { + declareFields(objectParser, scriptable, formattable, false, null); + } + + public static void declareNumericFields( + ObjectParser, QueryParseContext> objectParser, + boolean scriptable, boolean formattable, boolean timezoneAware) { + declareFields(objectParser, scriptable, formattable, timezoneAware, ValueType.NUMERIC); + } + + public static void declareBytesFields( + ObjectParser, QueryParseContext> objectParser, + boolean scriptable, boolean formattable) { + declareFields(objectParser, scriptable, formattable, false, ValueType.STRING); + } + + public static void declareGeoFields( + ObjectParser, QueryParseContext> objectParser, + boolean scriptable, boolean formattable) { + declareFields(objectParser, scriptable, formattable, false, ValueType.GEOPOINT); + } + + private static void declareFields( + ObjectParser, QueryParseContext> objectParser, + boolean scriptable, boolean formattable, boolean timezoneAware, ValueType targetValueType) { + + + objectParser.declareField(ValuesSourceAggregationBuilder::field, XContentParser::text, + new ParseField("field"), ObjectParser.ValueType.STRING); + + objectParser.declareField(ValuesSourceAggregationBuilder::missing, XContentParser::objectText, + new ParseField("missing"), ObjectParser.ValueType.VALUE); + + objectParser.declareField(ValuesSourceAggregationBuilder::valueType, p -> { + ValueType valueType = ValueType.resolveForScript(p.text()); + if (targetValueType != null && valueType.isNotA(targetValueType)) { + throw new ParsingException(p.getTokenLocation(), + "Aggregation [" + objectParser.getName() + "] was configured with an incompatible value type [" + + valueType + "]. 
It can only work on value of type [" + + targetValueType + "]"); + } + return valueType; + }, new ParseField("value_type", "valueType"), ObjectParser.ValueType.STRING); + + if (formattable) { + objectParser.declareField(ValuesSourceAggregationBuilder::format, XContentParser::text, + new ParseField("format"), ObjectParser.ValueType.STRING); + } + + if (scriptable) { + objectParser.declareField(ValuesSourceAggregationBuilder::script, + (parser, context) -> Script.parse(parser, context.getDefaultScriptLanguage()), + Script.SCRIPT_PARSE_FIELD, ObjectParser.ValueType.OBJECT_OR_STRING); + } + + if (timezoneAware) { + objectParser.declareField(ValuesSourceAggregationBuilder::timeZone, p -> { + if (p.currentToken() == XContentParser.Token.VALUE_STRING) { + return DateTimeZone.forID(p.text()); + } else { + return DateTimeZone.forOffsetHours(p.intValue()); + } + }, TIME_ZONE, ObjectParser.ValueType.LONG); + } + } + +} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/support/XContentParseContext.java b/core/src/main/java/org/elasticsearch/search/aggregations/support/XContentParseContext.java deleted file mode 100644 index 07c33f1f47384..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/aggregations/support/XContentParseContext.java +++ /dev/null @@ -1,65 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.search.aggregations.support; - -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.xcontent.XContentParser; - -/** - * A minimal context for parsing xcontent into aggregation builders. - * Only a minimal set of dependencies and settings are available. - */ -public final class XContentParseContext { - - private final XContentParser parser; - - private final ParseFieldMatcher parseFieldMatcher; - - private final String defaultScriptLanguage; - - public XContentParseContext(XContentParser parser, ParseFieldMatcher parseFieldMatcher, String defaultScriptLanguage) { - this.parser = parser; - this.parseFieldMatcher = parseFieldMatcher; - this.defaultScriptLanguage = defaultScriptLanguage; - } - - public XContentParser getParser() { - return parser; - } - - public ParseFieldMatcher getParseFieldMatcher() { - return parseFieldMatcher; - } - - public String getDefaultScriptLanguage() { - return defaultScriptLanguage; - } - - /** - * Returns whether the parse field we're looking for matches with the found field name. - * - * Helper that delegates to {@link ParseFieldMatcher#match(String, ParseField)}. 
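Editor's note: with AbstractValuesSourceParser and XContentParseContext removed, the shared "field", "missing", "value_type", "format", "script" and "time_zone" declarations now live in the ObjectParser-based ValuesSourceParserHelper added above. A rough sketch of how a concrete builder's parser might wire this up follows; MyAvgAggregationBuilder is a hypothetical stand-in for a real ValuesSourceAggregationBuilder subclass, so this is a pattern sketch rather than a compilable unit on its own.

```java
import org.elasticsearch.common.xcontent.ObjectParser;
import org.elasticsearch.index.query.QueryParseContext;
import org.elasticsearch.search.aggregations.support.ValuesSourceParserHelper;

class MyAvgParserSketch {
    // MyAvgAggregationBuilder is hypothetical; any ValuesSourceAggregationBuilder subclass fits here.
    private static final ObjectParser<MyAvgAggregationBuilder, QueryParseContext> PARSER =
            new ObjectParser<>("my_avg");

    static {
        // scriptable = true, formattable = true, timezoneAware = false: registers the shared
        // "field", "missing", "value_type", "format" and "script" declarations on the parser.
        ValuesSourceParserHelper.declareNumericFields(PARSER, true, true, false);
    }
}
```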
- */ - public boolean matchField(String fieldName, ParseField parseField) { - return parseFieldMatcher.match(fieldName, parseField); - } - -} diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/support/values/ScriptLongValues.java b/core/src/main/java/org/elasticsearch/search/aggregations/support/values/ScriptLongValues.java index 06bf998d8dafc..503cec39b8d01 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/support/values/ScriptLongValues.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/support/values/ScriptLongValues.java @@ -50,15 +50,10 @@ public void setDocument(int docId) { resize(0); } - else if (value instanceof Number) { - resize(1); - values[0] = ((Number) value).longValue(); - } - else if (value.getClass().isArray()) { resize(Array.getLength(value)); for (int i = 0; i < count(); ++i) { - values[i] = ((Number) Array.get(value, i)).longValue(); + values[i] = toLongValue(Array.get(value, i)); } } @@ -66,18 +61,33 @@ else if (value instanceof Collection) { resize(((Collection) value).size()); int i = 0; for (Iterator it = ((Collection) value).iterator(); it.hasNext(); ++i) { - values[i] = ((Number) it.next()).longValue(); + values[i] = toLongValue(it.next()); } assert i == count(); } else { - throw new AggregationExecutionException("Unsupported script value [" + value + "]"); + resize(1); + values[0] = toLongValue(value); } sort(); } + private static long toLongValue(Object o) { + if (o instanceof Number) { + return ((Number) o).longValue(); + } else if (o instanceof Boolean) { + // We do expose boolean fields as boolean in scripts, however aggregations still expect + // that scripts return the same internal representation as regular fields, so boolean + // values in scripts need to be converted to a number, and the value formatter will + // make sure of using true/false in the key_as_string field + return ((Boolean) o).booleanValue() ? 
1L : 0L; + } else { + throw new AggregationExecutionException("Unsupported script value [" + o + "], expected a number"); + } + } + @Override public void setScorer(Scorer scorer) { script.setScorer(scorer); diff --git a/core/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java b/core/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java index 309e49448e9da..de476a103dc6c 100644 --- a/core/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java @@ -19,31 +19,31 @@ package org.elasticsearch.search.builder; -import com.carrotsearch.hppc.ObjectFloatHashMap; +import org.elasticsearch.Version; import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.script.Script; +import org.elasticsearch.search.SearchExtBuilder; import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories; -import org.elasticsearch.search.aggregations.AggregatorParsers; import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; +import org.elasticsearch.search.collapse.CollapseBuilder; import org.elasticsearch.search.fetch.StoredFieldsContext; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; @@ -56,17 +56,12 @@ import org.elasticsearch.search.sort.SortBuilders; import org.elasticsearch.search.sort.SortOrder; import org.elasticsearch.search.suggest.SuggestBuilder; -import org.elasticsearch.search.suggest.Suggesters; import java.io.IOException; import java.util.ArrayList; import java.util.Collections; import java.util.List; import java.util.Objects; -import java.util.stream.Collectors; -import java.util.stream.StreamSupport; - -import static org.elasticsearch.common.collect.Tuple.tuple; /** * A search source builder allowing to easily build search source. 
Simple @@ -75,7 +70,9 @@ * * @see org.elasticsearch.action.search.SearchRequest#source(SearchSourceBuilder) */ -public final class SearchSourceBuilder extends ToXContentToBytes implements Writeable { +public final class SearchSourceBuilder extends ToXContentToBytes implements Writeable, ToXContentObject { + private static final DeprecationLogger DEPRECATION_LOGGER = + new DeprecationLogger(Loggers.getLogger(SearchSourceBuilder.class)); public static final ParseField FROM_FIELD = new ParseField("from"); public static final ParseField SIZE_FIELD = new ParseField("size"); @@ -105,12 +102,13 @@ public final class SearchSourceBuilder extends ToXContentToBytes implements Writ public static final ParseField EXT_FIELD = new ParseField("ext"); public static final ParseField PROFILE_FIELD = new ParseField("profile"); public static final ParseField SEARCH_AFTER = new ParseField("search_after"); + public static final ParseField COLLAPSE = new ParseField("collapse"); public static final ParseField SLICE = new ParseField("slice"); + public static final ParseField ALL_FIELDS_FIELDS = new ParseField("all_fields"); - public static SearchSourceBuilder fromXContent(QueryParseContext context, AggregatorParsers aggParsers, - Suggesters suggesters) throws IOException { + public static SearchSourceBuilder fromXContent(QueryParseContext context) throws IOException { SearchSourceBuilder builder = new SearchSourceBuilder(); - builder.parseXContent(context, aggParsers, suggesters); + builder.parseXContent(context); return builder; } @@ -164,16 +162,17 @@ public static HighlightBuilder highlight() { private SuggestBuilder suggestBuilder; - private List> rescoreBuilders; + private List rescoreBuilders; - private ObjectFloatHashMap indexBoost = null; + private List indexBoosts = new ArrayList<>(); private List stats; - private BytesReference ext = null; + private List extBuilders = Collections.emptyList(); private boolean profile = false; + private CollapseBuilder collapse = null; /** * Constructs a new search source builder. 
@@ -187,34 +186,20 @@ public SearchSourceBuilder() { public SearchSourceBuilder(StreamInput in) throws IOException { aggregations = in.readOptionalWriteable(AggregatorFactories.Builder::new); explain = in.readOptionalBoolean(); - fetchSourceContext = in.readOptionalStreamable(FetchSourceContext::new); + fetchSourceContext = in.readOptionalWriteable(FetchSourceContext::new); docValueFields = (List) in.readGenericValue(); storedFieldsContext = in.readOptionalWriteable(StoredFieldsContext::new); from = in.readVInt(); highlightBuilder = in.readOptionalWriteable(HighlightBuilder::new); - int indexBoostSize = in.readVInt(); - if (indexBoostSize > 0) { - indexBoost = new ObjectFloatHashMap<>(indexBoostSize); - for (int i = 0; i < indexBoostSize; i++) { - indexBoost.put(in.readString(), in.readFloat()); - } - } + indexBoosts = in.readList(IndexBoost::new); minScore = in.readOptionalFloat(); postQueryBuilder = in.readOptionalNamedWriteable(QueryBuilder.class); queryBuilder = in.readOptionalNamedWriteable(QueryBuilder.class); if (in.readBoolean()) { - int size = in.readVInt(); - rescoreBuilders = new ArrayList<>(); - for (int i = 0; i < size; i++) { - rescoreBuilders.add(in.readNamedWriteable(RescoreBuilder.class)); - } + rescoreBuilders = in.readNamedWriteableList(RescoreBuilder.class); } if (in.readBoolean()) { - int size = in.readVInt(); - scriptFields = new ArrayList<>(size); - for (int i = 0; i < size; i++) { - scriptFields.add(new ScriptField(in)); - } + scriptFields = in.readList(ScriptField::new); } size = in.readVInt(); if (in.readBoolean()) { @@ -225,55 +210,44 @@ public SearchSourceBuilder(StreamInput in) throws IOException { } } if (in.readBoolean()) { - int size = in.readVInt(); - stats = new ArrayList<>(); - for (int i = 0; i < size; i++) { - stats.add(in.readString()); - } + stats = in.readList(StreamInput::readString); } suggestBuilder = in.readOptionalWriteable(SuggestBuilder::new); terminateAfter = in.readVInt(); timeout = in.readOptionalWriteable(TimeValue::new); trackScores = in.readBoolean(); version = in.readOptionalBoolean(); - ext = in.readOptionalBytesReference(); + extBuilders = in.readNamedWriteableList(SearchExtBuilder.class); profile = in.readBoolean(); searchAfterBuilder = in.readOptionalWriteable(SearchAfterBuilder::new); sliceBuilder = in.readOptionalWriteable(SliceBuilder::new); + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + collapse = in.readOptionalWriteable(CollapseBuilder::new); + } } @Override public void writeTo(StreamOutput out) throws IOException { out.writeOptionalWriteable(aggregations); out.writeOptionalBoolean(explain); - out.writeOptionalStreamable(fetchSourceContext); + out.writeOptionalWriteable(fetchSourceContext); out.writeGenericValue(docValueFields); out.writeOptionalWriteable(storedFieldsContext); out.writeVInt(from); out.writeOptionalWriteable(highlightBuilder); - int indexBoostSize = indexBoost == null ? 
0 : indexBoost.size(); - out.writeVInt(indexBoostSize); - if (indexBoostSize > 0) { - writeIndexBoost(out); - } + out.writeList(indexBoosts); out.writeOptionalFloat(minScore); out.writeOptionalNamedWriteable(postQueryBuilder); out.writeOptionalNamedWriteable(queryBuilder); boolean hasRescoreBuilders = rescoreBuilders != null; out.writeBoolean(hasRescoreBuilders); if (hasRescoreBuilders) { - out.writeVInt(rescoreBuilders.size()); - for (RescoreBuilder rescoreBuilder : rescoreBuilders) { - out.writeNamedWriteable(rescoreBuilder); - } + out.writeNamedWriteableList(rescoreBuilders); } boolean hasScriptFields = scriptFields != null; out.writeBoolean(hasScriptFields); if (hasScriptFields) { - out.writeVInt(scriptFields.size()); - for (ScriptField scriptField : scriptFields) { - scriptField.writeTo(out); - } + out.writeList(scriptFields); } out.writeVInt(size); boolean hasSorts = sorts != null; @@ -287,30 +261,19 @@ public void writeTo(StreamOutput out) throws IOException { boolean hasStats = stats != null; out.writeBoolean(hasStats); if (hasStats) { - out.writeVInt(stats.size()); - for (String stat : stats) { - out.writeString(stat); - } + out.writeStringList(stats); } out.writeOptionalWriteable(suggestBuilder); out.writeVInt(terminateAfter); out.writeOptionalWriteable(timeout); out.writeBoolean(trackScores); out.writeOptionalBoolean(version); - out.writeOptionalBytesReference(ext); + out.writeNamedWriteableList(extBuilders); out.writeBoolean(profile); out.writeOptionalWriteable(searchAfterBuilder); out.writeOptionalWriteable(sliceBuilder); - } - - private void writeIndexBoost(StreamOutput out) throws IOException { - List> ibs = StreamSupport - .stream(indexBoost.spliterator(), false) - .map(i -> tuple(i.key, i.value)).sorted((o1, o2) -> o1.v1().compareTo(o2.v1())) - .collect(Collectors.toList()); - for (Tuple ib : ibs) { - out.writeString(ib.v1()); - out.writeFloat(ib.v2()); + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + out.writeOptionalWriteable(collapse); } } @@ -352,6 +315,9 @@ public QueryBuilder postFilter() { * From index to start the search from. Defaults to 0. */ public SearchSourceBuilder from(int from) { + if (from < 0) { + throw new IllegalArgumentException("[from] parameter cannot be negative"); + } this.from = from; return this; } @@ -561,6 +527,16 @@ public SliceBuilder slice() { return sliceBuilder; } + + public CollapseBuilder collapse() { + return collapse; + } + + public SearchSourceBuilder collapse(CollapseBuilder collapse) { + this.collapse = collapse; + return this; + } + /** * Add an aggregation to perform as part of the search. */ @@ -649,7 +625,7 @@ public boolean profile() { /** * Gets the bytes representing the rescore builders for this request. */ - public List> rescores() { + public List rescores() { return rescoreBuilders; } @@ -658,11 +634,9 @@ public List> rescores() { * every hit */ public SearchSourceBuilder fetchSource(boolean fetch) { - if (this.fetchSourceContext == null) { - this.fetchSourceContext = new FetchSourceContext(fetch); - } else { - this.fetchSourceContext.fetchSource(fetch); - } + FetchSourceContext fetchSourceContext = this.fetchSourceContext != null ? 
this.fetchSourceContext + : FetchSourceContext.FETCH_SOURCE; + this.fetchSourceContext = new FetchSourceContext(fetch, fetchSourceContext.includes(), fetchSourceContext.excludes()); return this; } @@ -696,7 +670,9 @@ public SearchSourceBuilder fetchSource(@Nullable String include, @Nullable Strin * filter the returned _source */ public SearchSourceBuilder fetchSource(@Nullable String[] includes, @Nullable String[] excludes) { - fetchSourceContext = new FetchSourceContext(includes, excludes); + FetchSourceContext fetchSourceContext = this.fetchSourceContext != null ? this.fetchSourceContext + : FetchSourceContext.FETCH_SOURCE; + this.fetchSourceContext = new FetchSourceContext(fetchSourceContext.fetchSource(), includes, excludes); return this; } @@ -836,28 +812,26 @@ public List scriptFields() { } /** - * Sets the boost a specific index will receive when the query is executed + * Sets the boost a specific index or alias will receive when the query is executed * against it. * * @param index - * The index to apply the boost against + * The index or alias to apply the boost against * @param indexBoost * The boost to apply to the index */ public SearchSourceBuilder indexBoost(String index, float indexBoost) { - if (this.indexBoost == null) { - this.indexBoost = new ObjectFloatHashMap<>(); - } - this.indexBoost.put(index, indexBoost); + Objects.requireNonNull(index, "index must not be null"); + this.indexBoosts.add(new IndexBoost(index, indexBoost)); return this; } /** - * Gets the boost a specific indices will receive when the query is + * Gets the boost a specific indices or aliases will receive when the query is * executed against them. */ - public ObjectFloatHashMap indexBoost() { - return indexBoost; + public List indexBoosts() { + return indexBoosts; } /** @@ -875,13 +849,13 @@ public List stats() { return stats; } - public SearchSourceBuilder ext(XContentBuilder ext) { - this.ext = ext.bytes(); + public SearchSourceBuilder ext(List searchExtBuilders) { + this.extBuilders = Objects.requireNonNull(searchExtBuilders, "searchExtBuilders must not be null"); return this; } - public BytesReference ext() { - return ext; + public List ext() { + return extBuilders; } /** @@ -899,7 +873,7 @@ public boolean isSuggestOnly() { * infinitely. 
*/ public SearchSourceBuilder rewrite(QueryShardContext context) throws IOException { - assert (this.equals(shallowCopy(queryBuilder, postQueryBuilder))); + assert (this.equals(shallowCopy(queryBuilder, postQueryBuilder, sliceBuilder))); QueryBuilder queryBuilder = null; if (this.queryBuilder != null) { queryBuilder = this.queryBuilder.rewrite(context); @@ -910,86 +884,96 @@ public SearchSourceBuilder rewrite(QueryShardContext context) throws IOException } boolean rewritten = queryBuilder != this.queryBuilder || postQueryBuilder != this.postQueryBuilder; if (rewritten) { - return shallowCopy(queryBuilder, postQueryBuilder); + return shallowCopy(queryBuilder, postQueryBuilder, sliceBuilder); } return this; } - private SearchSourceBuilder shallowCopy(QueryBuilder queryBuilder, QueryBuilder postQueryBuilder) { - SearchSourceBuilder rewrittenBuilder = new SearchSourceBuilder(); - rewrittenBuilder.aggregations = aggregations; - rewrittenBuilder.explain = explain; - rewrittenBuilder.ext = ext; - rewrittenBuilder.fetchSourceContext = fetchSourceContext; - rewrittenBuilder.docValueFields = docValueFields; - rewrittenBuilder.storedFieldsContext = storedFieldsContext; - rewrittenBuilder.from = from; - rewrittenBuilder.highlightBuilder = highlightBuilder; - rewrittenBuilder.indexBoost = indexBoost; - rewrittenBuilder.minScore = minScore; - rewrittenBuilder.postQueryBuilder = postQueryBuilder; - rewrittenBuilder.profile = profile; - rewrittenBuilder.queryBuilder = queryBuilder; - rewrittenBuilder.rescoreBuilders = rescoreBuilders; - rewrittenBuilder.scriptFields = scriptFields; - rewrittenBuilder.searchAfterBuilder = searchAfterBuilder; - rewrittenBuilder.sliceBuilder = sliceBuilder; - rewrittenBuilder.size = size; - rewrittenBuilder.sorts = sorts; - rewrittenBuilder.stats = stats; - rewrittenBuilder.suggestBuilder = suggestBuilder; - rewrittenBuilder.terminateAfter = terminateAfter; - rewrittenBuilder.timeout = timeout; - rewrittenBuilder.trackScores = trackScores; - rewrittenBuilder.version = version; - return rewrittenBuilder; - } + /** + * Create a shallow copy of this builder with a new slice configuration. + */ + public SearchSourceBuilder copyWithNewSlice(SliceBuilder slice) { + return shallowCopy(queryBuilder, postQueryBuilder, slice); + } + + /** + * Create a shallow copy of this source replaced {@link #queryBuilder}, {@link #postQueryBuilder}, and {@link #sliceBuilder}. Used by + * {@link #rewrite(QueryShardContext)} and {@link #copyWithNewSlice(SliceBuilder)}. 
+ */ + private SearchSourceBuilder shallowCopy(QueryBuilder queryBuilder, QueryBuilder postQueryBuilder, SliceBuilder slice) { + SearchSourceBuilder rewrittenBuilder = new SearchSourceBuilder(); + rewrittenBuilder.aggregations = aggregations; + rewrittenBuilder.explain = explain; + rewrittenBuilder.extBuilders = extBuilders; + rewrittenBuilder.fetchSourceContext = fetchSourceContext; + rewrittenBuilder.docValueFields = docValueFields; + rewrittenBuilder.storedFieldsContext = storedFieldsContext; + rewrittenBuilder.from = from; + rewrittenBuilder.highlightBuilder = highlightBuilder; + rewrittenBuilder.indexBoosts = indexBoosts; + rewrittenBuilder.minScore = minScore; + rewrittenBuilder.postQueryBuilder = postQueryBuilder; + rewrittenBuilder.profile = profile; + rewrittenBuilder.queryBuilder = queryBuilder; + rewrittenBuilder.rescoreBuilders = rescoreBuilders; + rewrittenBuilder.scriptFields = scriptFields; + rewrittenBuilder.searchAfterBuilder = searchAfterBuilder; + rewrittenBuilder.sliceBuilder = slice; + rewrittenBuilder.size = size; + rewrittenBuilder.sorts = sorts; + rewrittenBuilder.stats = stats; + rewrittenBuilder.suggestBuilder = suggestBuilder; + rewrittenBuilder.terminateAfter = terminateAfter; + rewrittenBuilder.timeout = timeout; + rewrittenBuilder.trackScores = trackScores; + rewrittenBuilder.version = version; + rewrittenBuilder.collapse = collapse; + return rewrittenBuilder; + } /** * Parse some xContent into this SearchSourceBuilder, overwriting any values specified in the xContent. Use this if you need to set up * different defaults than a regular SearchSourceBuilder would have and use - * {@link #fromXContent(QueryParseContext, AggregatorParsers, Suggesters)} if you have normal defaults. + * {@link #fromXContent(QueryParseContext)} if you have normal defaults. 
*/ - public void parseXContent(QueryParseContext context, AggregatorParsers aggParsers, Suggesters suggesters) - throws IOException { - + public void parseXContent(QueryParseContext context) throws IOException { XContentParser parser = context.parser(); XContentParser.Token token = parser.currentToken(); String currentFieldName = null; if (token != XContentParser.Token.START_OBJECT && (token = parser.nextToken()) != XContentParser.Token.START_OBJECT) { - throw new ParsingException(parser.getTokenLocation(), "Expected [" + XContentParser.Token.START_OBJECT + "] but found [" + token + "]", - parser.getTokenLocation()); + throw new ParsingException(parser.getTokenLocation(), "Expected [" + XContentParser.Token.START_OBJECT + + "] but found [" + token + "]", parser.getTokenLocation()); } while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token.isValue()) { - if (context.getParseFieldMatcher().match(currentFieldName, FROM_FIELD)) { + if (FROM_FIELD.match(currentFieldName)) { from = parser.intValue(); - } else if (context.getParseFieldMatcher().match(currentFieldName, SIZE_FIELD)) { + } else if (SIZE_FIELD.match(currentFieldName)) { size = parser.intValue(); - } else if (context.getParseFieldMatcher().match(currentFieldName, TIMEOUT_FIELD)) { + } else if (TIMEOUT_FIELD.match(currentFieldName)) { timeout = TimeValue.parseTimeValue(parser.text(), null, TIMEOUT_FIELD.getPreferredName()); - } else if (context.getParseFieldMatcher().match(currentFieldName, TERMINATE_AFTER_FIELD)) { + } else if (TERMINATE_AFTER_FIELD.match(currentFieldName)) { terminateAfter = parser.intValue(); - } else if (context.getParseFieldMatcher().match(currentFieldName, MIN_SCORE_FIELD)) { + } else if (MIN_SCORE_FIELD.match(currentFieldName)) { minScore = parser.floatValue(); - } else if (context.getParseFieldMatcher().match(currentFieldName, VERSION_FIELD)) { + } else if (VERSION_FIELD.match(currentFieldName)) { version = parser.booleanValue(); - } else if (context.getParseFieldMatcher().match(currentFieldName, EXPLAIN_FIELD)) { + } else if (EXPLAIN_FIELD.match(currentFieldName)) { explain = parser.booleanValue(); - } else if (context.getParseFieldMatcher().match(currentFieldName, TRACK_SCORES_FIELD)) { + } else if (TRACK_SCORES_FIELD.match(currentFieldName)) { trackScores = parser.booleanValue(); - } else if (context.getParseFieldMatcher().match(currentFieldName, _SOURCE_FIELD)) { - fetchSourceContext = FetchSourceContext.parse(context); - } else if (context.getParseFieldMatcher().match(currentFieldName, STORED_FIELDS_FIELD)) { + } else if (_SOURCE_FIELD.match(currentFieldName)) { + fetchSourceContext = FetchSourceContext.fromXContent(context.parser()); + } else if (STORED_FIELDS_FIELD.match(currentFieldName)) { storedFieldsContext = StoredFieldsContext.fromXContent(SearchSourceBuilder.STORED_FIELDS_FIELD.getPreferredName(), context); - } else if (context.getParseFieldMatcher().match(currentFieldName, SORT_FIELD)) { + } else if (SORT_FIELD.match(currentFieldName)) { sort(parser.text()); - } else if (context.getParseFieldMatcher().match(currentFieldName, PROFILE_FIELD)) { + } else if (PROFILE_FIELD.match(currentFieldName)) { profile = parser.booleanValue(); - } else if (context.getParseFieldMatcher().match(currentFieldName, FIELDS_FIELD)) { + } else if (FIELDS_FIELD.match(currentFieldName)) { throw new ParsingException(parser.getTokenLocation(), "Deprecated field [" + SearchSourceBuilder.FIELDS_FIELD + 
"] used, expected [" + SearchSourceBuilder.STORED_FIELDS_FIELD + "] instead"); @@ -998,85 +982,105 @@ public void parseXContent(QueryParseContext context, AggregatorParsers aggParser parser.getTokenLocation()); } } else if (token == XContentParser.Token.START_OBJECT) { - if (context.getParseFieldMatcher().match(currentFieldName, QUERY_FIELD)) { + if (QUERY_FIELD.match(currentFieldName)) { queryBuilder = context.parseInnerQueryBuilder().orElse(null); - } else if (context.getParseFieldMatcher().match(currentFieldName, POST_FILTER_FIELD)) { + } else if (POST_FILTER_FIELD.match(currentFieldName)) { postQueryBuilder = context.parseInnerQueryBuilder().orElse(null); - } else if (context.getParseFieldMatcher().match(currentFieldName, _SOURCE_FIELD)) { - fetchSourceContext = FetchSourceContext.parse(context); - } else if (context.getParseFieldMatcher().match(currentFieldName, SCRIPT_FIELDS_FIELD)) { + } else if (_SOURCE_FIELD.match(currentFieldName)) { + fetchSourceContext = FetchSourceContext.fromXContent(context.parser()); + } else if (SCRIPT_FIELDS_FIELD.match(currentFieldName)) { scriptFields = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { scriptFields.add(new ScriptField(context)); } - } else if (context.getParseFieldMatcher().match(currentFieldName, INDICES_BOOST_FIELD)) { - indexBoost = new ObjectFloatHashMap<>(); + } else if (INDICES_BOOST_FIELD.match(currentFieldName)) { + DEPRECATION_LOGGER.deprecated( + "Object format in indices_boost is deprecated, please use array format instead"); while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token.isValue()) { - indexBoost.put(currentFieldName, parser.floatValue()); + indexBoosts.add(new IndexBoost(currentFieldName, parser.floatValue())); } else { - throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + currentFieldName + "].", - parser.getTokenLocation()); + throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + + " in [" + currentFieldName + "].", parser.getTokenLocation()); } } - } else if (context.getParseFieldMatcher().match(currentFieldName, AGGREGATIONS_FIELD) - || context.getParseFieldMatcher().match(currentFieldName, AGGS_FIELD)) { - aggregations = aggParsers.parseAggregators(context); - } else if (context.getParseFieldMatcher().match(currentFieldName, HIGHLIGHT_FIELD)) { + } else if (AGGREGATIONS_FIELD.match(currentFieldName) + || AGGS_FIELD.match(currentFieldName)) { + aggregations = AggregatorFactories.parseAggregators(context); + } else if (HIGHLIGHT_FIELD.match(currentFieldName)) { highlightBuilder = HighlightBuilder.fromXContent(context); - } else if (context.getParseFieldMatcher().match(currentFieldName, SUGGEST_FIELD)) { - suggestBuilder = SuggestBuilder.fromXContent(context, suggesters); - } else if (context.getParseFieldMatcher().match(currentFieldName, SORT_FIELD)) { + } else if (SUGGEST_FIELD.match(currentFieldName)) { + suggestBuilder = SuggestBuilder.fromXContent(context.parser()); + } else if (SORT_FIELD.match(currentFieldName)) { sorts = new ArrayList<>(SortBuilder.fromXContent(context)); - } else if (context.getParseFieldMatcher().match(currentFieldName, RESCORE_FIELD)) { + } else if (RESCORE_FIELD.match(currentFieldName)) { rescoreBuilders = new ArrayList<>(); rescoreBuilders.add(RescoreBuilder.parseFromXContent(context)); - } else if 
(context.getParseFieldMatcher().match(currentFieldName, EXT_FIELD)) { - XContentBuilder xContentBuilder = XContentFactory.jsonBuilder().copyCurrentStructure(parser); - ext = xContentBuilder.bytes(); - } else if (context.getParseFieldMatcher().match(currentFieldName, SLICE)) { + } else if (EXT_FIELD.match(currentFieldName)) { + extBuilders = new ArrayList<>(); + String extSectionName = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + extSectionName = parser.currentName(); + } else { + SearchExtBuilder searchExtBuilder = parser.namedObject(SearchExtBuilder.class, extSectionName, null); + if (searchExtBuilder.getWriteableName().equals(extSectionName) == false) { + throw new IllegalStateException("The parsed [" + searchExtBuilder.getClass().getName() + "] object has a " + + "different writeable name compared to the name of the section that it was parsed from: found [" + + searchExtBuilder.getWriteableName() + "] expected [" + extSectionName + "]"); + } + extBuilders.add(searchExtBuilder); + } + } + } else if (SLICE.match(currentFieldName)) { sliceBuilder = SliceBuilder.fromXContent(context); + } else if (COLLAPSE.match(currentFieldName)) { + collapse = CollapseBuilder.fromXContent(context); } else { throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + currentFieldName + "].", parser.getTokenLocation()); } } else if (token == XContentParser.Token.START_ARRAY) { - if (context.getParseFieldMatcher().match(currentFieldName, STORED_FIELDS_FIELD)) { + if (STORED_FIELDS_FIELD.match(currentFieldName)) { storedFieldsContext = StoredFieldsContext.fromXContent(STORED_FIELDS_FIELD.getPreferredName(), context); - } else if (context.getParseFieldMatcher().match(currentFieldName, DOCVALUE_FIELDS_FIELD)) { + } else if (DOCVALUE_FIELDS_FIELD.match(currentFieldName)) { docValueFields = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { if (token == XContentParser.Token.VALUE_STRING) { docValueFields.add(parser.text()); } else { - throw new ParsingException(parser.getTokenLocation(), "Expected [" + XContentParser.Token.VALUE_STRING + "] in [" - + currentFieldName + "] but found [" + token + "]", parser.getTokenLocation()); + throw new ParsingException(parser.getTokenLocation(), "Expected [" + XContentParser.Token.VALUE_STRING + + "] in [" + currentFieldName + "] but found [" + token + "]", parser.getTokenLocation()); } } - } else if (context.getParseFieldMatcher().match(currentFieldName, SORT_FIELD)) { + } else if (INDICES_BOOST_FIELD.match(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + indexBoosts.add(new IndexBoost(context)); + } + } else if (SORT_FIELD.match(currentFieldName)) { sorts = new ArrayList<>(SortBuilder.fromXContent(context)); - } else if (context.getParseFieldMatcher().match(currentFieldName, RESCORE_FIELD)) { + } else if (RESCORE_FIELD.match(currentFieldName)) { rescoreBuilders = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { rescoreBuilders.add(RescoreBuilder.parseFromXContent(context)); } - } else if (context.getParseFieldMatcher().match(currentFieldName, STATS_FIELD)) { + } else if (STATS_FIELD.match(currentFieldName)) { stats = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { if (token == XContentParser.Token.VALUE_STRING) { stats.add(parser.text()); } else { - throw new 
ParsingException(parser.getTokenLocation(), "Expected [" + XContentParser.Token.VALUE_STRING + "] in [" - + currentFieldName + "] but found [" + token + "]", parser.getTokenLocation()); + throw new ParsingException(parser.getTokenLocation(), "Expected [" + XContentParser.Token.VALUE_STRING + + "] in [" + currentFieldName + "] but found [" + token + "]", parser.getTokenLocation()); } } - } else if (context.getParseFieldMatcher().match(currentFieldName, _SOURCE_FIELD)) { - fetchSourceContext = FetchSourceContext.parse(context); - } else if (context.getParseFieldMatcher().match(currentFieldName, SEARCH_AFTER)) { - searchAfterBuilder = SearchAfterBuilder.fromXContent(parser, context.getParseFieldMatcher()); - } else if (context.getParseFieldMatcher().match(currentFieldName, FIELDS_FIELD)) { + } else if (_SOURCE_FIELD.match(currentFieldName)) { + fetchSourceContext = FetchSourceContext.fromXContent(context.parser()); + } else if (SEARCH_AFTER.match(currentFieldName)) { + searchAfterBuilder = SearchAfterBuilder.fromXContent(parser); + } else if (FIELDS_FIELD.match(currentFieldName)) { throw new ParsingException(parser.getTokenLocation(), "The field [" + SearchSourceBuilder.FIELDS_FIELD + "] is no longer supported, please use [" + SearchSourceBuilder.STORED_FIELDS_FIELD + "] to retrieve stored fields or _source filtering " + @@ -1095,12 +1099,6 @@ public void parseXContent(QueryParseContext context, AggregatorParsers aggParser @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(); - innerToXContent(builder, params); - builder.endObject(); - return builder; - } - - public void innerToXContent(XContentBuilder builder, Params params) throws IOException { if (from != -1) { builder.field(FROM_FIELD.getPreferredName(), from); } @@ -1177,29 +1175,26 @@ public void innerToXContent(XContentBuilder builder, Params params) throws IOExc } if (searchAfterBuilder != null) { - builder.field(SEARCH_AFTER.getPreferredName(), searchAfterBuilder.getSortValues()); + builder.array(SEARCH_AFTER.getPreferredName(), searchAfterBuilder.getSortValues()); } if (sliceBuilder != null) { builder.field(SLICE.getPreferredName(), sliceBuilder); } - if (indexBoost != null) { - builder.startObject(INDICES_BOOST_FIELD.getPreferredName()); - assert !indexBoost.containsKey(null); - final Object[] keys = indexBoost.keys; - final float[] values = indexBoost.values; - for (int i = 0; i < keys.length; i++) { - if (keys[i] != null) { - builder.field((String) keys[i], values[i]); - } + if (!indexBoosts.isEmpty()) { + builder.startArray(INDICES_BOOST_FIELD.getPreferredName()); + for (IndexBoost ib : indexBoosts) { + builder.startObject(); + builder.field(ib.index, ib.boost); + builder.endObject(); } - builder.endObject(); + builder.endArray(); } if (aggregations != null) { builder.field(AGGREGATIONS_FIELD.getPreferredName(), aggregations); - } + } if (highlightBuilder != null) { builder.field(HIGHLIGHT_FIELD.getPreferredName(), highlightBuilder); @@ -1221,15 +1216,106 @@ public void innerToXContent(XContentBuilder builder, Params params) throws IOExc builder.field(STATS_FIELD.getPreferredName(), stats); } - if (ext != null) { - builder.field(EXT_FIELD.getPreferredName()); - try (XContentParser parser = XContentFactory.xContent(XContentType.JSON).createParser(ext)) { - parser.nextToken(); - builder.copyCurrentStructure(parser); + if (extBuilders != null && extBuilders.isEmpty() == false) { + builder.startObject(EXT_FIELD.getPreferredName()); + for (SearchExtBuilder 
extBuilder : extBuilders) { + extBuilder.toXContent(builder, params); } + builder.endObject(); } + + if (collapse != null) { + builder.field(COLLAPSE.getPreferredName(), collapse); + } + builder.endObject(); + return builder; } + public static class IndexBoost implements Writeable, ToXContent { + private final String index; + private final float boost; + + IndexBoost(String index, float boost) { + this.index = index; + this.boost = boost; + } + + IndexBoost(StreamInput in) throws IOException { + index = in.readString(); + boost = in.readFloat(); + } + + IndexBoost(QueryParseContext context) throws IOException { + XContentParser parser = context.parser(); + XContentParser.Token token = parser.currentToken(); + + if (token == XContentParser.Token.START_OBJECT) { + token = parser.nextToken(); + if (token == XContentParser.Token.FIELD_NAME) { + index = parser.currentName(); + } else { + throw new ParsingException(parser.getTokenLocation(), "Expected [" + XContentParser.Token.FIELD_NAME + + "] in [" + INDICES_BOOST_FIELD + "] but found [" + token + "]", parser.getTokenLocation()); + } + token = parser.nextToken(); + if (token == XContentParser.Token.VALUE_NUMBER) { + boost = parser.floatValue(); + } else { + throw new ParsingException(parser.getTokenLocation(), "Expected [" + XContentParser.Token.VALUE_NUMBER + + "] in [" + INDICES_BOOST_FIELD + "] but found [" + token + "]", parser.getTokenLocation()); + } + token = parser.nextToken(); + if (token != XContentParser.Token.END_OBJECT) { + throw new ParsingException(parser.getTokenLocation(), "Expected [" + XContentParser.Token.END_OBJECT + + "] in [" + INDICES_BOOST_FIELD + "] but found [" + token + "]", parser.getTokenLocation()); + } + } else { + throw new ParsingException(parser.getTokenLocation(), "Expected [" + XContentParser.Token.START_OBJECT + + "] in [" + parser.currentName() + "] but found [" + token + "]", parser.getTokenLocation()); + } + } + + public String getIndex() { + return index; + } + + public float getBoost() { + return boost; + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeString(index); + out.writeFloat(boost); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field(index, boost); + builder.endObject(); + return builder; + } + + @Override + public int hashCode() { + return Objects.hash(index, boost); + } + + @Override + public boolean equals(Object obj) { + if (obj == null) { + return false; + } + if (getClass() != obj.getClass()) { + return false; + } + IndexBoost other = (IndexBoost) obj; + return Objects.equals(index, other.index) + && Objects.equals(boost, other.boost); + } + + } public static class ScriptField implements Writeable, ToXContent { private final boolean ignoreFailure; @@ -1272,17 +1358,17 @@ public ScriptField(QueryParseContext context) throws IOException { if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token.isValue()) { - if (context.getParseFieldMatcher().match(currentFieldName, SCRIPT_FIELD)) { - script = Script.parse(parser, context.getParseFieldMatcher(), context.getDefaultScriptLanguage()); - } else if (context.getParseFieldMatcher().match(currentFieldName, IGNORE_FAILURE_FIELD)) { + if (SCRIPT_FIELD.match(currentFieldName)) { + script = Script.parse(parser, context.getDefaultScriptLanguage()); + } else if (IGNORE_FAILURE_FIELD.match(currentFieldName)) { ignoreFailure = parser.booleanValue(); } else { throw 
new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + currentFieldName + "].", parser.getTokenLocation()); } } else if (token == XContentParser.Token.START_OBJECT) { - if (context.getParseFieldMatcher().match(currentFieldName, SCRIPT_FIELD)) { - script = Script.parse(parser, context.getParseFieldMatcher(), context.getDefaultScriptLanguage()); + if (SCRIPT_FIELD.match(currentFieldName)) { + script = Script.parse(parser, context.getDefaultScriptLanguage()); } else { throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + currentFieldName + "].", parser.getTokenLocation()); @@ -1344,9 +1430,10 @@ public boolean equals(Object obj) { @Override public int hashCode() { - return Objects.hash(aggregations, explain, fetchSourceContext, docValueFields, storedFieldsContext, from, - highlightBuilder, indexBoost, minScore, postQueryBuilder, queryBuilder, rescoreBuilders, scriptFields, - size, sorts, searchAfterBuilder, sliceBuilder, stats, suggestBuilder, terminateAfter, timeout, trackScores, version, profile); + return Objects.hash(aggregations, explain, fetchSourceContext, docValueFields, storedFieldsContext, from, highlightBuilder, + indexBoosts, minScore, postQueryBuilder, queryBuilder, rescoreBuilders, scriptFields, size, + sorts, searchAfterBuilder, sliceBuilder, stats, suggestBuilder, terminateAfter, timeout, trackScores, version, + profile, extBuilders, collapse); } @Override @@ -1365,7 +1452,7 @@ public boolean equals(Object obj) { && Objects.equals(storedFieldsContext, other.storedFieldsContext) && Objects.equals(from, other.from) && Objects.equals(highlightBuilder, other.highlightBuilder) - && Objects.equals(indexBoost, other.indexBoost) + && Objects.equals(indexBoosts, other.indexBoosts) && Objects.equals(minScore, other.minScore) && Objects.equals(postQueryBuilder, other.postQueryBuilder) && Objects.equals(queryBuilder, other.queryBuilder) @@ -1381,7 +1468,8 @@ public boolean equals(Object obj) { && Objects.equals(timeout, other.timeout) && Objects.equals(trackScores, other.trackScores) && Objects.equals(version, other.version) - && Objects.equals(profile, other.profile); + && Objects.equals(profile, other.profile) + && Objects.equals(extBuilders, other.extBuilders) + && Objects.equals(collapse, other.collapse); } - } diff --git a/core/src/main/java/org/elasticsearch/search/collapse/CollapseBuilder.java b/core/src/main/java/org/elasticsearch/search/collapse/CollapseBuilder.java new file mode 100644 index 0000000000000..93f4b8bf41c9a --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/collapse/CollapseBuilder.java @@ -0,0 +1,254 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.search.collapse; + +import org.apache.lucene.index.IndexOptions; +import org.elasticsearch.Version; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.mapper.KeywordFieldMapper; +import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.index.mapper.NumberFieldMapper; +import org.elasticsearch.index.query.InnerHitBuilder; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.SearchContextException; +import org.elasticsearch.search.internal.SearchContext; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Objects; + +/** + * A builder that enables field collapsing on search request. + */ +public class CollapseBuilder implements Writeable, ToXContentObject { + public static final ParseField FIELD_FIELD = new ParseField("field"); + public static final ParseField INNER_HITS_FIELD = new ParseField("inner_hits"); + public static final ParseField MAX_CONCURRENT_GROUP_REQUESTS_FIELD = new ParseField("max_concurrent_group_searches"); + private static final ObjectParser PARSER = + new ObjectParser<>("collapse", CollapseBuilder::new); + + static { + PARSER.declareString(CollapseBuilder::setField, FIELD_FIELD); + PARSER.declareInt(CollapseBuilder::setMaxConcurrentGroupRequests, MAX_CONCURRENT_GROUP_REQUESTS_FIELD); + PARSER.declareField((parser, builder, context) -> { + XContentParser.Token currentToken = parser.currentToken(); + if (currentToken == XContentParser.Token.START_OBJECT) { + builder.setInnerHits(InnerHitBuilder.fromXContent(context)); + } else if (currentToken == XContentParser.Token.START_ARRAY) { + List innerHitBuilders = new ArrayList<>(); + for (currentToken = parser.nextToken(); currentToken != XContentParser.Token.END_ARRAY; currentToken = parser.nextToken()) { + if (currentToken == XContentParser.Token.START_OBJECT) { + innerHitBuilders.add(InnerHitBuilder.fromXContent(context)); + } else { + throw new ParsingException(parser.getTokenLocation(), "Invalid token in inner_hits array"); + } + } + + builder.setInnerHits(innerHitBuilders); + } + }, INNER_HITS_FIELD, ObjectParser.ValueType.OBJECT_ARRAY); + } + + private String field; + private List innerHits = Collections.emptyList(); + private int maxConcurrentGroupRequests = 0; + + private CollapseBuilder() {} + + /** + * Public constructor + * @param field The name of the field to collapse on + */ + public CollapseBuilder(String field) { + Objects.requireNonNull(field, "field must be non-null"); + this.field = field; + } + + public CollapseBuilder(StreamInput in) throws IOException { + this.field = in.readString(); + this.maxConcurrentGroupRequests = in.readVInt(); + if (in.getVersion().onOrAfter(Version.V_5_5_0)) { + this.innerHits = in.readList(InnerHitBuilder::new); + } else { + InnerHitBuilder innerHitBuilder = in.readOptionalWriteable(InnerHitBuilder::new); + if (innerHitBuilder != null) { + 
this.innerHits = Collections.singletonList(innerHitBuilder); + } else { + this.innerHits = Collections.emptyList(); + } + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeString(field); + out.writeVInt(maxConcurrentGroupRequests); + if (out.getVersion().onOrAfter(Version.V_5_5_0)) { + out.writeList(innerHits); + } else { + boolean hasInnerHit = innerHits.isEmpty() == false; + out.writeBoolean(hasInnerHit); + if (hasInnerHit) { + innerHits.get(0).writeToCollapseBWC(out); + } + } + } + + public static CollapseBuilder fromXContent(QueryParseContext context) throws IOException { + CollapseBuilder builder = PARSER.parse(context.parser(), new CollapseBuilder(), context); + return builder; + } + + // for object parser only + private CollapseBuilder setField(String field) { + if (Strings.isEmpty(field)) { + throw new IllegalArgumentException("field name is null or empty"); + } + this.field = field; + return this; + } + + public CollapseBuilder setInnerHits(InnerHitBuilder innerHit) { + this.innerHits = Collections.singletonList(innerHit); + return this; + } + + public CollapseBuilder setInnerHits(List innerHits) { + this.innerHits = innerHits; + return this; + } + + public CollapseBuilder setMaxConcurrentGroupRequests(int num) { + if (num < 1) { + throw new IllegalArgumentException("maxConcurrentGroupRequests` must be positive"); + } + this.maxConcurrentGroupRequests = num; + return this; + } + + /** + * The name of the field to collapse against + */ + public String getField() { + return this.field; + } + + /** + * The inner hit options to expand the collapsed results + */ + public List getInnerHits() { + return this.innerHits; + } + + /** + * Returns the amount of group requests that are allowed to be ran concurrently in the inner_hits phase. 
+ */ + public int getMaxConcurrentGroupRequests() { + return maxConcurrentGroupRequests; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException { + builder.startObject(); + innerToXContent(builder); + builder.endObject(); + return builder; + } + + private void innerToXContent(XContentBuilder builder) throws IOException { + builder.field(FIELD_FIELD.getPreferredName(), field); + if (maxConcurrentGroupRequests > 0) { + builder.field(MAX_CONCURRENT_GROUP_REQUESTS_FIELD.getPreferredName(), maxConcurrentGroupRequests); + } + if (innerHits.isEmpty() == false) { + if (innerHits.size() == 1) { + builder.field(INNER_HITS_FIELD.getPreferredName(), innerHits.get(0)); + } else { + builder.startArray(INNER_HITS_FIELD.getPreferredName()); + for (InnerHitBuilder innerHit : innerHits) { + innerHit.toXContent(builder, ToXContent.EMPTY_PARAMS); + } + builder.endArray(); + } + } + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + + CollapseBuilder that = (CollapseBuilder) o; + + if (maxConcurrentGroupRequests != that.maxConcurrentGroupRequests) return false; + if (!field.equals(that.field)) return false; + return Objects.equals(innerHits, that.innerHits); + } + + @Override + public int hashCode() { + int result = Objects.hash(field, innerHits); + result = 31 * result + maxConcurrentGroupRequests; + return result; + } + + public CollapseContext build(SearchContext context) { + if (context.scrollContext() != null) { + throw new SearchContextException(context, "cannot use `collapse` in a scroll context"); + } + if (context.searchAfter() != null) { + throw new SearchContextException(context, "cannot use `collapse` in conjunction with `search_after`"); + } + if (context.rescore() != null && context.rescore().isEmpty() == false) { + throw new SearchContextException(context, "cannot use `collapse` in conjunction with `rescore`"); + } + + MappedFieldType fieldType = context.getQueryShardContext().fieldMapper(field); + if (fieldType == null) { + throw new SearchContextException(context, "no mapping found for `" + field + "` in order to collapse on"); + } + if (fieldType instanceof KeywordFieldMapper.KeywordFieldType == false && + fieldType instanceof NumberFieldMapper.NumberFieldType == false) { + throw new SearchContextException(context, "unknown type for collapse field `" + field + + "`, only keywords and numbers are accepted"); + } + + if (fieldType.hasDocValues() == false) { + throw new SearchContextException(context, "cannot collapse on field `" + field + "` without `doc_values`"); + } + if (fieldType.indexOptions() == IndexOptions.NONE && (innerHits != null && !innerHits.isEmpty())) { + throw new SearchContextException(context, "cannot expand `inner_hits` for collapse field `" + + field + "`, " + "only indexed field can retrieve `inner_hits`"); + } + + return new CollapseContext(fieldType, innerHits); + } +} diff --git a/core/src/main/java/org/elasticsearch/search/collapse/CollapseContext.java b/core/src/main/java/org/elasticsearch/search/collapse/CollapseContext.java new file mode 100644 index 0000000000000..cb1587cd7d9e9 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/collapse/CollapseContext.java @@ -0,0 +1,69 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.search.collapse; + +import org.apache.lucene.search.Sort; +import org.apache.lucene.search.grouping.CollapsingTopDocsCollector; +import org.elasticsearch.index.mapper.KeywordFieldMapper; +import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.index.mapper.NumberFieldMapper; +import org.elasticsearch.index.query.InnerHitBuilder; + +import java.io.IOException; +import java.util.Collections; +import java.util.List; + +/** + * Context used for field collapsing + */ +public class CollapseContext { + private final MappedFieldType fieldType; + private final List innerHits; + + public CollapseContext(MappedFieldType fieldType, InnerHitBuilder innerHit) { + this.fieldType = fieldType; + this.innerHits = Collections.singletonList(innerHit); + } + + public CollapseContext(MappedFieldType fieldType, List innerHits) { + this.fieldType = fieldType; + this.innerHits = innerHits; + } + + /** The field type used for collapsing **/ + public MappedFieldType getFieldType() { + return fieldType; + } + + /** The inner hit options to expand the collapsed results **/ + public List getInnerHit() { + return innerHits; + } + + public CollapsingTopDocsCollector createTopDocs(Sort sort, int topN, boolean trackMaxScore) throws IOException { + if (fieldType instanceof KeywordFieldMapper.KeywordFieldType) { + return CollapsingTopDocsCollector.createKeyword(fieldType.name(), sort, topN, trackMaxScore); + } else if (fieldType instanceof NumberFieldMapper.NumberFieldType) { + return CollapsingTopDocsCollector.createNumeric(fieldType.name(), sort, topN, trackMaxScore); + } else { + throw new IllegalStateException("unknown type for collapse field " + fieldType.name() + + ", only keywords and numbers are accepted"); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/controller/SearchPhaseController.java b/core/src/main/java/org/elasticsearch/search/controller/SearchPhaseController.java deleted file mode 100644 index 1766d41dde376..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/controller/SearchPhaseController.java +++ /dev/null @@ -1,537 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.search.controller; - -import com.carrotsearch.hppc.IntArrayList; -import com.carrotsearch.hppc.ObjectObjectHashMap; -import org.apache.lucene.index.Term; -import org.apache.lucene.search.CollectionStatistics; -import org.apache.lucene.search.FieldDoc; -import org.apache.lucene.search.ScoreDoc; -import org.apache.lucene.search.Sort; -import org.apache.lucene.search.SortField; -import org.apache.lucene.search.TermStatistics; -import org.apache.lucene.search.TopDocs; -import org.apache.lucene.search.TopFieldDocs; -import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.collect.HppcMaps; -import org.elasticsearch.common.component.AbstractComponent; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.lucene.Lucene; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.util.BigArrays; -import org.elasticsearch.common.util.concurrent.AtomicArray; -import org.elasticsearch.script.ScriptService; -import org.elasticsearch.search.aggregations.InternalAggregation; -import org.elasticsearch.search.aggregations.InternalAggregation.ReduceContext; -import org.elasticsearch.search.aggregations.InternalAggregations; -import org.elasticsearch.search.aggregations.pipeline.SiblingPipelineAggregator; -import org.elasticsearch.search.dfs.AggregatedDfs; -import org.elasticsearch.search.dfs.DfsSearchResult; -import org.elasticsearch.search.fetch.FetchSearchResult; -import org.elasticsearch.search.fetch.FetchSearchResultProvider; -import org.elasticsearch.search.internal.InternalSearchHit; -import org.elasticsearch.search.internal.InternalSearchHits; -import org.elasticsearch.search.internal.InternalSearchResponse; -import org.elasticsearch.search.profile.ProfileShardResult; -import org.elasticsearch.search.profile.SearchProfileShardResults; -import org.elasticsearch.search.query.QuerySearchResult; -import org.elasticsearch.search.query.QuerySearchResultProvider; -import org.elasticsearch.search.suggest.Suggest; -import org.elasticsearch.search.suggest.Suggest.Suggestion; -import org.elasticsearch.search.suggest.Suggest.Suggestion.Entry; -import org.elasticsearch.search.suggest.completion.CompletionSuggestion; - -import java.io.IOException; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.Collections; -import java.util.Comparator; -import java.util.HashMap; -import java.util.List; -import java.util.Map; -import java.util.stream.Collectors; -import java.util.stream.StreamSupport; - -/** - * - */ -public class SearchPhaseController extends AbstractComponent { - - public static final Comparator> QUERY_RESULT_ORDERING = (o1, o2) -> { - int i = o1.value.shardTarget().index().compareTo(o2.value.shardTarget().index()); - if (i == 0) { - i = o1.value.shardTarget().shardId().id() - o2.value.shardTarget().shardId().id(); - } - return i; - }; - - public static final ScoreDoc[] EMPTY_DOCS = new ScoreDoc[0]; - - private final BigArrays bigArrays; - private final ScriptService scriptService; - private final ClusterService clusterService; - - @Inject - public SearchPhaseController(Settings settings, BigArrays bigArrays, ScriptService scriptService, ClusterService clusterService) { - super(settings); - this.bigArrays = bigArrays; - this.scriptService = scriptService; - this.clusterService = clusterService; - } - - public AggregatedDfs aggregateDfs(AtomicArray results) { - ObjectObjectHashMap termStatistics = HppcMaps.newNoNullKeysMap(); - ObjectObjectHashMap fieldStatistics = 
HppcMaps.newNoNullKeysMap(); - long aggMaxDoc = 0; - for (AtomicArray.Entry lEntry : results.asList()) { - final Term[] terms = lEntry.value.terms(); - final TermStatistics[] stats = lEntry.value.termStatistics(); - assert terms.length == stats.length; - for (int i = 0; i < terms.length; i++) { - assert terms[i] != null; - TermStatistics existing = termStatistics.get(terms[i]); - if (existing != null) { - assert terms[i].bytes().equals(existing.term()); - // totalTermFrequency is an optional statistic we need to check if either one or both - // are set to -1 which means not present and then set it globally to -1 - termStatistics.put(terms[i], new TermStatistics(existing.term(), - existing.docFreq() + stats[i].docFreq(), - optionalSum(existing.totalTermFreq(), stats[i].totalTermFreq()))); - } else { - termStatistics.put(terms[i], stats[i]); - } - - } - - assert !lEntry.value.fieldStatistics().containsKey(null); - final Object[] keys = lEntry.value.fieldStatistics().keys; - final Object[] values = lEntry.value.fieldStatistics().values; - for (int i = 0; i < keys.length; i++) { - if (keys[i] != null) { - String key = (String) keys[i]; - CollectionStatistics value = (CollectionStatistics) values[i]; - assert key != null; - CollectionStatistics existing = fieldStatistics.get(key); - if (existing != null) { - CollectionStatistics merged = new CollectionStatistics( - key, existing.maxDoc() + value.maxDoc(), - optionalSum(existing.docCount(), value.docCount()), - optionalSum(existing.sumTotalTermFreq(), value.sumTotalTermFreq()), - optionalSum(existing.sumDocFreq(), value.sumDocFreq()) - ); - fieldStatistics.put(key, merged); - } else { - fieldStatistics.put(key, value); - } - } - } - aggMaxDoc += lEntry.value.maxDoc(); - } - return new AggregatedDfs(termStatistics, fieldStatistics, aggMaxDoc); - } - - private static long optionalSum(long left, long right) { - return Math.min(left, right) == -1 ? -1 : left + right; - } - - /** - * Returns a score doc array of top N search docs across all shards, followed by top suggest docs for each - * named completion suggestion across all shards. If more than one named completion suggestion is specified in the - * request, the suggest docs for a named suggestion are ordered by the suggestion name. - * - * @param ignoreFrom Whether to ignore the from and sort all hits in each shard result. - * Enabled only for scroll search, because that only retrieves hits of length 'size' in the query phase. - * @param resultsArr Shard result holder - */ - public ScoreDoc[] sortDocs(boolean ignoreFrom, AtomicArray resultsArr) throws IOException { - List> results = resultsArr.asList(); - if (results.isEmpty()) { - return EMPTY_DOCS; - } - - boolean canOptimize = false; - QuerySearchResult result = null; - int shardIndex = -1; - if (results.size() == 1) { - canOptimize = true; - result = results.get(0).value.queryResult(); - shardIndex = results.get(0).index; - } else { - // lets see if we only got hits from a single shard, if so, we can optimize... 
- for (AtomicArray.Entry entry : results) { - if (entry.value.queryResult().hasHits()) { - if (result != null) { // we already have one, can't really optimize - canOptimize = false; - break; - } - canOptimize = true; - result = entry.value.queryResult(); - shardIndex = entry.index; - } - } - } - if (canOptimize) { - int offset = result.from(); - if (ignoreFrom) { - offset = 0; - } - ScoreDoc[] scoreDocs = result.topDocs().scoreDocs; - ScoreDoc[] docs; - int numSuggestDocs = 0; - final Suggest suggest = result.queryResult().suggest(); - final List completionSuggestions; - if (suggest != null) { - completionSuggestions = suggest.filter(CompletionSuggestion.class); - for (CompletionSuggestion suggestion : completionSuggestions) { - numSuggestDocs += suggestion.getOptions().size(); - } - } else { - completionSuggestions = Collections.emptyList(); - } - int docsOffset = 0; - if (scoreDocs.length == 0 || scoreDocs.length < offset) { - docs = new ScoreDoc[numSuggestDocs]; - } else { - int resultDocsSize = result.size(); - if ((scoreDocs.length - offset) < resultDocsSize) { - resultDocsSize = scoreDocs.length - offset; - } - docs = new ScoreDoc[resultDocsSize + numSuggestDocs]; - for (int i = 0; i < resultDocsSize; i++) { - ScoreDoc scoreDoc = scoreDocs[offset + i]; - scoreDoc.shardIndex = shardIndex; - docs[i] = scoreDoc; - docsOffset++; - } - } - for (CompletionSuggestion suggestion: completionSuggestions) { - for (CompletionSuggestion.Entry.Option option : suggestion.getOptions()) { - ScoreDoc doc = option.getDoc(); - doc.shardIndex = shardIndex; - docs[docsOffset++] = doc; - } - } - return docs; - } - - @SuppressWarnings("unchecked") - AtomicArray.Entry[] sortedResults = results.toArray(new AtomicArray.Entry[results.size()]); - Arrays.sort(sortedResults, QUERY_RESULT_ORDERING); - QuerySearchResultProvider firstResult = sortedResults[0].value; - - int topN = topN(results); - int from = firstResult.queryResult().from(); - if (ignoreFrom) { - from = 0; - } - - final TopDocs mergedTopDocs; - if (firstResult.queryResult().topDocs() instanceof TopFieldDocs) { - TopFieldDocs firstTopDocs = (TopFieldDocs) firstResult.queryResult().topDocs(); - final Sort sort = new Sort(firstTopDocs.fields); - - final TopFieldDocs[] shardTopDocs = new TopFieldDocs[resultsArr.length()]; - for (AtomicArray.Entry sortedResult : sortedResults) { - TopDocs topDocs = sortedResult.value.queryResult().topDocs(); - // the 'index' field is the position in the resultsArr atomic array - shardTopDocs[sortedResult.index] = (TopFieldDocs) topDocs; - } - // TopDocs#merge can't deal with null shard TopDocs - for (int i = 0; i < shardTopDocs.length; ++i) { - if (shardTopDocs[i] == null) { - shardTopDocs[i] = new TopFieldDocs(0, new FieldDoc[0], sort.getSort(), Float.NaN); - } - } - mergedTopDocs = TopDocs.merge(sort, from, topN, shardTopDocs); - } else { - final TopDocs[] shardTopDocs = new TopDocs[resultsArr.length()]; - for (AtomicArray.Entry sortedResult : sortedResults) { - TopDocs topDocs = sortedResult.value.queryResult().topDocs(); - // the 'index' field is the position in the resultsArr atomic array - shardTopDocs[sortedResult.index] = topDocs; - } - // TopDocs#merge can't deal with null shard TopDocs - for (int i = 0; i < shardTopDocs.length; ++i) { - if (shardTopDocs[i] == null) { - shardTopDocs[i] = Lucene.EMPTY_TOP_DOCS; - } - } - mergedTopDocs = TopDocs.merge(from, topN, shardTopDocs); - } - - ScoreDoc[] scoreDocs = mergedTopDocs.scoreDocs; - final Map>> groupedCompletionSuggestions = new HashMap<>(); - // group 
suggestions and assign shard index - for (AtomicArray.Entry sortedResult : sortedResults) { - Suggest shardSuggest = sortedResult.value.queryResult().suggest(); - if (shardSuggest != null) { - for (CompletionSuggestion suggestion : shardSuggest.filter(CompletionSuggestion.class)) { - suggestion.setShardIndex(sortedResult.index); - List> suggestions = - groupedCompletionSuggestions.computeIfAbsent(suggestion.getName(), s -> new ArrayList<>()); - suggestions.add(suggestion); - } - } - } - if (groupedCompletionSuggestions.isEmpty() == false) { - int numSuggestDocs = 0; - List>> completionSuggestions = - new ArrayList<>(groupedCompletionSuggestions.size()); - for (List> groupedSuggestions : groupedCompletionSuggestions.values()) { - final CompletionSuggestion completionSuggestion = CompletionSuggestion.reduceTo(groupedSuggestions); - assert completionSuggestion != null; - numSuggestDocs += completionSuggestion.getOptions().size(); - completionSuggestions.add(completionSuggestion); - } - scoreDocs = new ScoreDoc[mergedTopDocs.scoreDocs.length + numSuggestDocs]; - System.arraycopy(mergedTopDocs.scoreDocs, 0, scoreDocs, 0, mergedTopDocs.scoreDocs.length); - int offset = mergedTopDocs.scoreDocs.length; - Suggest suggestions = new Suggest(completionSuggestions); - for (CompletionSuggestion completionSuggestion : suggestions.filter(CompletionSuggestion.class)) { - for (CompletionSuggestion.Entry.Option option : completionSuggestion.getOptions()) { - scoreDocs[offset++] = option.getDoc(); - } - } - } - return scoreDocs; - } - - public ScoreDoc[] getLastEmittedDocPerShard(List> queryResults, - ScoreDoc[] sortedScoreDocs, int numShards) { - ScoreDoc[] lastEmittedDocPerShard = new ScoreDoc[numShards]; - if (queryResults.isEmpty() == false) { - long fetchHits = 0; - for (AtomicArray.Entry queryResult : queryResults) { - fetchHits += queryResult.value.queryResult().topDocs().scoreDocs.length; - } - // from is always zero as when we use scroll, we ignore from - long size = Math.min(fetchHits, topN(queryResults)); - for (int sortedDocsIndex = 0; sortedDocsIndex < size; sortedDocsIndex++) { - ScoreDoc scoreDoc = sortedScoreDocs[sortedDocsIndex]; - lastEmittedDocPerShard[scoreDoc.shardIndex] = scoreDoc; - } - } - return lastEmittedDocPerShard; - - } - - /** - * Builds an array, with potential null elements, with docs to load. 
- */ - public void fillDocIdsToLoad(AtomicArray docIdsToLoad, ScoreDoc[] shardDocs) { - for (ScoreDoc shardDoc : shardDocs) { - IntArrayList shardDocIdsToLoad = docIdsToLoad.get(shardDoc.shardIndex); - if (shardDocIdsToLoad == null) { - shardDocIdsToLoad = new IntArrayList(); // can't be shared!, uses unsafe on it later on - docIdsToLoad.set(shardDoc.shardIndex, shardDocIdsToLoad); - } - shardDocIdsToLoad.add(shardDoc.doc); - } - } - - /** - * Enriches search hits and completion suggestion hits from sortedDocs using fetchResultsArr, - * merges suggestions, aggregations and profile results - * - * Expects sortedDocs to have top search docs across all shards, optionally followed by top suggest docs for each named - * completion suggestion ordered by suggestion name - */ - public InternalSearchResponse merge(boolean ignoreFrom, ScoreDoc[] sortedDocs, - AtomicArray queryResultsArr, - AtomicArray fetchResultsArr) { - - List> queryResults = queryResultsArr.asList(); - List> fetchResults = fetchResultsArr.asList(); - - if (queryResults.isEmpty()) { - return InternalSearchResponse.empty(); - } - - QuerySearchResult firstResult = queryResults.get(0).value.queryResult(); - - boolean sorted = false; - int sortScoreIndex = -1; - if (firstResult.topDocs() instanceof TopFieldDocs) { - sorted = true; - TopFieldDocs fieldDocs = (TopFieldDocs) firstResult.queryResult().topDocs(); - for (int i = 0; i < fieldDocs.fields.length; i++) { - if (fieldDocs.fields[i].getType() == SortField.Type.SCORE) { - sortScoreIndex = i; - } - } - } - - // count the total (we use the query result provider here, since we might not get any hits (we scrolled past them)) - long totalHits = 0; - long fetchHits = 0; - float maxScore = Float.NEGATIVE_INFINITY; - boolean timedOut = false; - Boolean terminatedEarly = null; - for (AtomicArray.Entry entry : queryResults) { - QuerySearchResult result = entry.value.queryResult(); - if (result.searchTimedOut()) { - timedOut = true; - } - if (result.terminatedEarly() != null) { - if (terminatedEarly == null) { - terminatedEarly = result.terminatedEarly(); - } else if (result.terminatedEarly()) { - terminatedEarly = true; - } - } - totalHits += result.topDocs().totalHits; - fetchHits += result.topDocs().scoreDocs.length; - if (!Float.isNaN(result.topDocs().getMaxScore())) { - maxScore = Math.max(maxScore, result.topDocs().getMaxScore()); - } - } - if (Float.isInfinite(maxScore)) { - maxScore = Float.NaN; - } - - // clean the fetch counter - for (AtomicArray.Entry entry : fetchResults) { - entry.value.fetchResult().initCounter(); - } - int from = ignoreFrom ? 
0 : firstResult.queryResult().from(); - int numSearchHits = (int) Math.min(fetchHits - from, topN(queryResults)); - // merge hits - List hits = new ArrayList<>(); - if (!fetchResults.isEmpty()) { - for (int i = 0; i < numSearchHits; i++) { - ScoreDoc shardDoc = sortedDocs[i]; - FetchSearchResultProvider fetchResultProvider = fetchResultsArr.get(shardDoc.shardIndex); - if (fetchResultProvider == null) { - continue; - } - FetchSearchResult fetchResult = fetchResultProvider.fetchResult(); - int index = fetchResult.counterGetAndIncrement(); - if (index < fetchResult.hits().internalHits().length) { - InternalSearchHit searchHit = fetchResult.hits().internalHits()[index]; - searchHit.score(shardDoc.score); - searchHit.shard(fetchResult.shardTarget()); - if (sorted) { - FieldDoc fieldDoc = (FieldDoc) shardDoc; - searchHit.sortValues(fieldDoc.fields, firstResult.sortValueFormats()); - if (sortScoreIndex != -1) { - searchHit.score(((Number) fieldDoc.fields[sortScoreIndex]).floatValue()); - } - } - hits.add(searchHit); - } - } - } - - // merge suggest results - Suggest suggest = null; - if (firstResult.suggest() != null) { - final Map> groupedSuggestions = new HashMap<>(); - for (AtomicArray.Entry queryResult : queryResults) { - Suggest shardSuggest = queryResult.value.queryResult().suggest(); - if (shardSuggest != null) { - for (Suggestion> suggestion : shardSuggest) { - List suggestionList = groupedSuggestions.computeIfAbsent(suggestion.getName(), s -> new ArrayList<>()); - suggestionList.add(suggestion); - } - } - } - if (groupedSuggestions.isEmpty() == false) { - suggest = new Suggest(Suggest.reduce(groupedSuggestions)); - if (!fetchResults.isEmpty()) { - int currentOffset = numSearchHits; - for (CompletionSuggestion suggestion : suggest.filter(CompletionSuggestion.class)) { - final List suggestionOptions = suggestion.getOptions(); - for (int scoreDocIndex = currentOffset; scoreDocIndex < currentOffset + suggestionOptions.size(); scoreDocIndex++) { - ScoreDoc shardDoc = sortedDocs[scoreDocIndex]; - FetchSearchResultProvider fetchSearchResultProvider = fetchResultsArr.get(shardDoc.shardIndex); - if (fetchSearchResultProvider == null) { - continue; - } - FetchSearchResult fetchResult = fetchSearchResultProvider.fetchResult(); - int fetchResultIndex = fetchResult.counterGetAndIncrement(); - if (fetchResultIndex < fetchResult.hits().internalHits().length) { - InternalSearchHit hit = fetchResult.hits().internalHits()[fetchResultIndex]; - CompletionSuggestion.Entry.Option suggestOption = - suggestionOptions.get(scoreDocIndex - currentOffset); - hit.score(shardDoc.score); - hit.shard(fetchResult.shardTarget()); - suggestOption.setHit(hit); - } - } - currentOffset += suggestionOptions.size(); - } - assert currentOffset == sortedDocs.length : "expected no more score doc slices"; - } - } - } - - // merge Aggregation - InternalAggregations aggregations = null; - if (firstResult.aggregations() != null && firstResult.aggregations().asList() != null) { - List aggregationsList = new ArrayList<>(queryResults.size()); - for (AtomicArray.Entry entry : queryResults) { - aggregationsList.add((InternalAggregations) entry.value.queryResult().aggregations()); - } - ReduceContext reduceContext = new ReduceContext(bigArrays, scriptService, clusterService.state()); - aggregations = InternalAggregations.reduce(aggregationsList, reduceContext); - List pipelineAggregators = firstResult.pipelineAggregators(); - if (pipelineAggregators != null) { - List newAggs = StreamSupport.stream(aggregations.spliterator(), false) - 
.map((p) -> (InternalAggregation) p) - .collect(Collectors.toList()); - for (SiblingPipelineAggregator pipelineAggregator : pipelineAggregators) { - InternalAggregation newAgg = pipelineAggregator.doReduce(new InternalAggregations(newAggs), reduceContext); - newAggs.add(newAgg); - } - aggregations = new InternalAggregations(newAggs); - } - } - - //Collect profile results - SearchProfileShardResults shardResults = null; - if (firstResult.profileResults() != null) { - Map profileResults = new HashMap<>(queryResults.size()); - for (AtomicArray.Entry entry : queryResults) { - String key = entry.value.queryResult().shardTarget().toString(); - profileResults.put(key, entry.value.queryResult().profileResults()); - } - shardResults = new SearchProfileShardResults(profileResults); - } - - InternalSearchHits searchHits = new InternalSearchHits(hits.toArray(new InternalSearchHit[hits.size()]), totalHits, maxScore); - - return new InternalSearchResponse(searchHits, aggregations, suggest, shardResults, timedOut, terminatedEarly); - } - - /** - * returns the number of top results to be considered across all shards - */ - private static int topN(List> queryResults) { - QuerySearchResultProvider firstResult = queryResults.get(0).value; - int topN = firstResult.queryResult().size(); - if (firstResult.includeFetch()) { - // if we did both query and fetch on the same go, we have fetched all the docs from each shards already, use them... - // this is also important since we shortcut and fetch only docs from "from" and up to "size" - topN *= queryResults.size(); - } - return topN; - } -} diff --git a/core/src/main/java/org/elasticsearch/search/dfs/AggregatedDfs.java b/core/src/main/java/org/elasticsearch/search/dfs/AggregatedDfs.java index d762540caabef..f7bfb6a805251 100644 --- a/core/src/main/java/org/elasticsearch/search/dfs/AggregatedDfs.java +++ b/core/src/main/java/org/elasticsearch/search/dfs/AggregatedDfs.java @@ -85,10 +85,10 @@ public void writeTo(final StreamOutput out) throws IOException { out.writeVInt(termStatistics.size()); for (ObjectObjectCursor c : termStatistics()) { - Term term = (Term) c.key; + Term term = c.key; out.writeString(term.field()); out.writeBytesRef(term.bytes()); - TermStatistics stats = (TermStatistics) c.value; + TermStatistics stats = c.value; out.writeBytesRef(stats.term()); out.writeVLong(stats.docFreq()); out.writeVLong(DfsSearchResult.addOne(stats.totalTermFreq())); diff --git a/core/src/main/java/org/elasticsearch/search/dfs/DfsPhase.java b/core/src/main/java/org/elasticsearch/search/dfs/DfsPhase.java index de06655f4146d..6be95a8bceb20 100644 --- a/core/src/main/java/org/elasticsearch/search/dfs/DfsPhase.java +++ b/core/src/main/java/org/elasticsearch/search/dfs/DfsPhase.java @@ -28,16 +28,19 @@ import org.apache.lucene.search.CollectionStatistics; import org.apache.lucene.search.TermStatistics; import org.elasticsearch.common.collect.HppcMaps; +import org.elasticsearch.search.SearchContextException; import org.elasticsearch.search.SearchPhase; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.rescore.RescoreSearchContext; +import org.elasticsearch.tasks.TaskCancelledException; import java.util.AbstractSet; import java.util.Collection; import java.util.Iterator; /** - * + * Dfs phase of a search request, used to make scoring 100% accurate by collecting additional info from each shard before the query phase. 
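
The DfsPhase and FetchPhase hunks in this section add cooperative cancellation checks inside their per-term and per-document loops. A minimal sketch of that pattern, assuming only the `SearchContext#isCancelled()` flag and `TaskCancelledException` usage shown in these hunks (the class, method, and loop below are invented for illustration):

```java
import org.elasticsearch.search.internal.SearchContext;
import org.elasticsearch.tasks.TaskCancelledException;

final class CancellableLoopSketch {

    /** Illustration only: bail out of a long per-shard loop once the search task is cancelled. */
    static void collectTermStatistics(SearchContext context, String[] terms) {
        for (String term : terms) {
            // Cheap flag check on every iteration; throwing unwinds the phase and
            // lets the task framework report the whole search as cancelled.
            if (context.isCancelled()) {
                throw new TaskCancelledException("cancelled");
            }
            // ... expensive per-term work would go here ...
        }
    }
}
```
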
+ * The additional information is used to better compare the scores coming from all the shards, which depend on local factors (e.g. idf) */ public class DfsPhase implements SearchPhase { @@ -58,6 +61,9 @@ public void execute(SearchContext context) { TermStatistics[] termStatistics = new TermStatistics[terms.length]; IndexReaderContext indexReaderContext = context.searcher().getTopReaderContext(); for (int i = 0; i < terms.length; i++) { + if(context.isCancelled()) { + throw new TaskCancelledException("cancelled"); + } // LUCENE 4 UPGRADE: cache TermContext? TermContext termContext = TermContext.build(indexReaderContext, terms[i]); termStatistics[i] = context.searcher().termStatistics(terms[i], termContext); @@ -69,6 +75,9 @@ public void execute(SearchContext context) { if (!fieldStatistics.containsKey(term.field())) { final CollectionStatistics collectionStatistics = context.searcher().collectionStatistics(term.field()); fieldStatistics.put(term.field(), collectionStatistics); + if(context.isCancelled()) { + throw new TaskCancelledException("cancelled"); + } } } diff --git a/core/src/main/java/org/elasticsearch/search/dfs/DfsPhaseExecutionException.java b/core/src/main/java/org/elasticsearch/search/dfs/DfsPhaseExecutionException.java index a6020b4498acd..dace1e271cff9 100644 --- a/core/src/main/java/org/elasticsearch/search/dfs/DfsPhaseExecutionException.java +++ b/core/src/main/java/org/elasticsearch/search/dfs/DfsPhaseExecutionException.java @@ -34,7 +34,11 @@ public DfsPhaseExecutionException(SearchContext context, String msg, Throwable t super(context, "Dfs Failed [" + msg + "]", t); } + public DfsPhaseExecutionException(SearchContext context, String msg) { + super(context, "Dfs Failed [" + msg + "]"); + } + public DfsPhaseExecutionException(StreamInput in) throws IOException { super(in); } -} \ No newline at end of file +} diff --git a/core/src/main/java/org/elasticsearch/search/dfs/DfsSearchResult.java b/core/src/main/java/org/elasticsearch/search/dfs/DfsSearchResult.java index 6e93e41058725..0cd624b00a36b 100644 --- a/core/src/main/java/org/elasticsearch/search/dfs/DfsSearchResult.java +++ b/core/src/main/java/org/elasticsearch/search/dfs/DfsSearchResult.java @@ -30,47 +30,24 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.search.SearchPhaseResult; import org.elasticsearch.search.SearchShardTarget; -import org.elasticsearch.transport.TransportResponse; import java.io.IOException; -/** - * - */ -public class DfsSearchResult extends TransportResponse implements SearchPhaseResult { +public class DfsSearchResult extends SearchPhaseResult { private static final Term[] EMPTY_TERMS = new Term[0]; private static final TermStatistics[] EMPTY_TERM_STATS = new TermStatistics[0]; - - private SearchShardTarget shardTarget; - private long id; private Term[] terms; private TermStatistics[] termStatistics; private ObjectObjectHashMap fieldStatistics = HppcMaps.newNoNullKeysMap(); private int maxDoc; public DfsSearchResult() { - } public DfsSearchResult(long id, SearchShardTarget shardTarget) { - this.id = id; - this.shardTarget = shardTarget; - } - - @Override - public long id() { - return this.id; - } - - @Override - public SearchShardTarget shardTarget() { - return shardTarget; - } - - @Override - public void shardTarget(SearchShardTarget shardTarget) { - this.shardTarget = shardTarget; + this.setSearchShardTarget(shardTarget); + this.requestId = id; } public DfsSearchResult maxDoc(int maxDoc) { @@ -105,16 +82,10 @@ public ObjectObjectHashMap fieldStatistics() 
{ return fieldStatistics; } - public static DfsSearchResult readDfsSearchResult(StreamInput in) throws IOException, ClassNotFoundException { - DfsSearchResult result = new DfsSearchResult(); - result.readFrom(in); - return result; - } - @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - id = in.readLong(); + requestId = in.readLong(); int termsSize = in.readVInt(); if (termsSize == 0) { terms = EMPTY_TERMS; @@ -131,11 +102,10 @@ public void readFrom(StreamInput in) throws IOException { maxDoc = in.readVInt(); } - @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - out.writeLong(id); + out.writeLong(requestId); out.writeVInt(terms.length); for (Term term : terms) { out.writeString(term.field()); diff --git a/core/src/main/java/org/elasticsearch/search/fetch/FetchPhase.java b/core/src/main/java/org/elasticsearch/search/fetch/FetchPhase.java index 997cb7caa4b99..d16eeb04605e7 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/FetchPhase.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/FetchPhase.java @@ -39,19 +39,20 @@ import org.elasticsearch.index.fieldvisitor.FieldsVisitor; import org.elasticsearch.index.mapper.DocumentMapper; import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.ObjectMapper; import org.elasticsearch.index.mapper.SourceFieldMapper; +import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.search.SearchHit; import org.elasticsearch.search.SearchHitField; -import org.elasticsearch.search.SearchParseElement; import org.elasticsearch.search.SearchPhase; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; +import org.elasticsearch.search.fetch.subphase.InnerHitsContext; import org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase; -import org.elasticsearch.search.internal.InternalSearchHit; -import org.elasticsearch.search.internal.InternalSearchHitField; -import org.elasticsearch.search.internal.InternalSearchHits; +import org.elasticsearch.search.SearchHits; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.lookup.SourceLookup; +import org.elasticsearch.tasks.TaskCancelledException; import java.io.IOException; import java.util.ArrayList; @@ -62,11 +63,11 @@ import java.util.Map; import java.util.Set; -import static java.util.Collections.unmodifiableMap; import static org.elasticsearch.common.xcontent.XContentFactory.contentBuilder; /** - * + * Fetch phase of a search request, used to fetch the actual top matching documents to be returned to the client, identified + * after reducing all of the matches returned by the query phase */ public class FetchPhase implements SearchPhase { @@ -77,15 +78,6 @@ public FetchPhase(List fetchSubPhases) { this.fetchSubPhases[fetchSubPhases.size()] = new InnerHitsFetchSubPhase(this); } - @Override - public Map parseElements() { - Map parseElements = new HashMap<>(); - for (FetchSubPhase fetchSubPhase : fetchSubPhases) { - parseElements.putAll(fetchSubPhase.parseElements()); - } - return unmodifiableMap(parseElements); - } - @Override public void preProcess(SearchContext context) { } @@ -109,11 +101,9 @@ public void execute(SearchContext context) { } else { for (String fieldName : context.storedFieldsContext().fieldNames()) { if (fieldName.equals(SourceFieldMapper.NAME)) { - if (context.hasFetchSourceContext()) { - context.fetchSourceContext().fetchSource(true); - } else { - 
context.fetchSourceContext(new FetchSourceContext(true)); - } + FetchSourceContext fetchSourceContext = context.hasFetchSourceContext() ? context.fetchSourceContext() + : FetchSourceContext.FETCH_SOURCE; + context.fetchSourceContext(new FetchSourceContext(true, fetchSourceContext.includes(), fetchSourceContext.excludes())); continue; } if (Regex.isSimpleMatchPattern(fieldName)) { @@ -141,23 +131,27 @@ public void execute(SearchContext context) { fieldsVisitor = new FieldsVisitor(loadSource); } else { fieldsVisitor = new CustomFieldsVisitor(fieldNames == null ? Collections.emptySet() : fieldNames, - fieldNamePatterns == null ? Collections.emptyList() : fieldNamePatterns, loadSource); + fieldNamePatterns == null ? Collections.emptyList() : fieldNamePatterns, loadSource); } } - InternalSearchHit[] hits = new InternalSearchHit[context.docIdsToLoadSize()]; + SearchHit[] hits = new SearchHit[context.docIdsToLoadSize()]; FetchSubPhase.HitContext hitContext = new FetchSubPhase.HitContext(); for (int index = 0; index < context.docIdsToLoadSize(); index++) { + if (context.isCancelled()) { + throw new TaskCancelledException("cancelled"); + } int docId = context.docIdsToLoad()[context.docIdsToLoadFrom() + index]; int readerIndex = ReaderUtil.subIndex(docId, context.searcher().getIndexReader().leaves()); LeafReaderContext subReaderContext = context.searcher().getIndexReader().leaves().get(readerIndex); int subDocId = docId - subReaderContext.docBase; - final InternalSearchHit searchHit; + final SearchHit searchHit; try { int rootDocId = findRootDocumentIfNested(context, subReaderContext, subDocId); if (rootDocId != -1) { - searchHit = createNestedSearchHit(context, docId, subDocId, rootDocId, fieldNames, fieldNamePatterns, subReaderContext); + searchHit = createNestedSearchHit(context, docId, subDocId, rootDocId, fieldNames, fieldNamePatterns, + subReaderContext); } else { searchHit = createSearchHit(context, fieldsVisitor, docId, subDocId, subReaderContext); } @@ -176,7 +170,7 @@ public void execute(SearchContext context) { fetchSubPhase.hitsExecute(context, hits); } - context.fetchResult().hits(new InternalSearchHits(hits, context.queryResult().topDocs().totalHits, context.queryResult().topDocs().getMaxScore())); + context.fetchResult().hits(new SearchHits(hits, context.queryResult().getTotalHits(), context.queryResult().getMaxScore())); } private int findRootDocumentIfNested(SearchContext context, LeafReaderContext subReaderContext, int subDocId) throws IOException { @@ -189,9 +183,10 @@ private int findRootDocumentIfNested(SearchContext context, LeafReaderContext su return -1; } - private InternalSearchHit createSearchHit(SearchContext context, FieldsVisitor fieldsVisitor, int docId, int subDocId, LeafReaderContext subReaderContext) { + private SearchHit createSearchHit(SearchContext context, FieldsVisitor fieldsVisitor, int docId, int subDocId, + LeafReaderContext subReaderContext) { if (fieldsVisitor == null) { - return new InternalSearchHit(docId); + return new SearchHit(docId); } loadStoredFields(context, subReaderContext, fieldsVisitor, subDocId); fieldsVisitor.postProcess(context.mapperService()); @@ -200,7 +195,7 @@ private InternalSearchHit createSearchHit(SearchContext context, FieldsVisitor f if (!fieldsVisitor.fields().isEmpty()) { searchFields = new HashMap<>(fieldsVisitor.fields().size()); for (Map.Entry> entry : fieldsVisitor.fields().entrySet()) { - searchFields.put(entry.getKey(), new InternalSearchHitField(entry.getKey(), entry.getValue())); + searchFields.put(entry.getKey(), new 
SearchHitField(entry.getKey(), entry.getValue())); } } @@ -211,7 +206,7 @@ private InternalSearchHit createSearchHit(SearchContext context, FieldsVisitor f } else { typeText = documentMapper.typeText(); } - InternalSearchHit searchHit = new InternalSearchHit(docId, fieldsVisitor.uid().id(), typeText, searchFields); + SearchHit searchHit = new SearchHit(docId, fieldsVisitor.uid().id(), typeText, searchFields); // Set _source if requested. SourceLookup sourceLookup = context.lookup().source(); sourceLookup.setSegmentAndDocument(subReaderContext, subDocId); @@ -221,24 +216,39 @@ private InternalSearchHit createSearchHit(SearchContext context, FieldsVisitor f return searchHit; } - private InternalSearchHit createNestedSearchHit(SearchContext context, int nestedTopDocId, int nestedSubDocId, int rootSubDocId, Set fieldNames, List fieldNamePatterns, LeafReaderContext subReaderContext) throws IOException { + private SearchHit createNestedSearchHit(SearchContext context, int nestedTopDocId, int nestedSubDocId, + int rootSubDocId, Set fieldNames, + List fieldNamePatterns, LeafReaderContext subReaderContext) throws IOException { // Also if highlighting is requested on nested documents we need to fetch the _source from the root document, // otherwise highlighting will attempt to fetch the _source from the nested doc, which will fail, // because the entire _source is only stored with the root document. - final FieldsVisitor rootFieldsVisitor = new FieldsVisitor(context.sourceRequested() || context.highlight() != null); - loadStoredFields(context, subReaderContext, rootFieldsVisitor, rootSubDocId); - rootFieldsVisitor.postProcess(context.mapperService()); + final Uid uid; + final BytesReference source; + final boolean needSource = context.sourceRequested() || context.highlight() != null; + if (needSource || (context instanceof InnerHitsContext.InnerHitSubContext == false)) { + FieldsVisitor rootFieldsVisitor = new FieldsVisitor(needSource); + loadStoredFields(context, subReaderContext, rootFieldsVisitor, rootSubDocId); + rootFieldsVisitor.postProcess(context.mapperService()); + uid = rootFieldsVisitor.uid(); + source = rootFieldsVisitor.source(); + } else { + // In case of nested inner hits we already know the uid, so no need to fetch it from stored fields again! 
+ uid = ((InnerHitsContext.InnerHitSubContext) context).getUid(); + source = null; + } - Map searchFields = getSearchFields(context, nestedSubDocId, fieldNames, fieldNamePatterns, subReaderContext); - DocumentMapper documentMapper = context.mapperService().documentMapper(rootFieldsVisitor.uid().type()); + + Map searchFields = + getSearchFields(context, nestedSubDocId, fieldNames, fieldNamePatterns, subReaderContext); + DocumentMapper documentMapper = context.mapperService().documentMapper(uid.type()); SourceLookup sourceLookup = context.lookup().source(); sourceLookup.setSegmentAndDocument(subReaderContext, nestedSubDocId); ObjectMapper nestedObjectMapper = documentMapper.findNestedObjectMapper(nestedSubDocId, context, subReaderContext); assert nestedObjectMapper != null; - InternalSearchHit.InternalNestedIdentity nestedIdentity = getInternalNestedIdentity(context, nestedSubDocId, subReaderContext, documentMapper, nestedObjectMapper); + SearchHit.NestedIdentity nestedIdentity = + getInternalNestedIdentity(context, nestedSubDocId, subReaderContext, context.mapperService(), nestedObjectMapper); - BytesReference source = rootFieldsVisitor.source(); if (source != null) { Tuple> tuple = XContentHelper.convertToMap(source, true); Map sourceAsMap = tuple.v2(); @@ -253,17 +263,28 @@ private InternalSearchHit createNestedSearchHit(SearchContext context, int neste String nestedPath = nested.getField().string(); current.put(nestedPath, new HashMap<>()); Object extractedValue = XContentMapValues.extractValue(nestedPath, sourceAsMap); - List> nestedParsedSource; + List nestedParsedSource; if (extractedValue instanceof List) { // nested field has an array value in the _source - nestedParsedSource = (List>) extractedValue; + nestedParsedSource = (List) extractedValue; } else if (extractedValue instanceof Map) { - // nested field has an object value in the _source. This just means the nested field has just one inner object, which is valid, but uncommon. - nestedParsedSource = Collections.singletonList((Map) extractedValue); + // nested field has an object value in the _source. This just means the nested field has just one inner object, + // which is valid, but uncommon. + nestedParsedSource = Collections.singletonList(extractedValue); } else { throw new IllegalStateException("extracted source isn't an object or an array"); } - sourceAsMap = nestedParsedSource.get(nested.getOffset()); + if ((nestedParsedSource.get(0) instanceof Map) == false && + nestedObjectMapper.parentObjectMapperAreNested(context.mapperService()) == false) { + // When one of the parent objects are not nested then XContentMapValues.extractValue(...) extracts the values + // from two or more layers resulting in a list of list being returned. This is because nestedPath + // encapsulates two or more object layers in the _source. + // + // This is why only the first element of nestedParsedSource needs to be checked. + throw new IllegalArgumentException("Cannot execute inner hits. One or more parent object fields of nested field [" + + nestedObjectMapper.name() + "] are not nested. 
All parent fields need to be nested fields too"); + } + sourceAsMap = (Map) nestedParsedSource.get(nested.getOffset()); if (nested.getChild() == null) { current.put(nestedPath, sourceAsMap); } else { @@ -278,22 +299,22 @@ private InternalSearchHit createNestedSearchHit(SearchContext context, int neste context.lookup().source().setSource(nestedSource); context.lookup().source().setSourceContentType(contentType); } - - return new InternalSearchHit(nestedTopDocId, rootFieldsVisitor.uid().id(), documentMapper.typeText(), nestedIdentity, searchFields); + return new SearchHit(nestedTopDocId, uid.id(), documentMapper.typeText(), nestedIdentity, searchFields); } - private Map getSearchFields(SearchContext context, int nestedSubDocId, Set fieldNames, List fieldNamePatterns, LeafReaderContext subReaderContext) { + private Map getSearchFields(SearchContext context, int nestedSubDocId, Set fieldNames, + List fieldNamePatterns, LeafReaderContext subReaderContext) { Map searchFields = null; if (context.hasStoredFields() && !context.storedFieldsContext().fieldNames().isEmpty()) { FieldsVisitor nestedFieldsVisitor = new CustomFieldsVisitor(fieldNames == null ? Collections.emptySet() : fieldNames, - fieldNamePatterns == null ? Collections.emptyList() : fieldNamePatterns, false); + fieldNamePatterns == null ? Collections.emptyList() : fieldNamePatterns, false); if (nestedFieldsVisitor != null) { loadStoredFields(context, subReaderContext, nestedFieldsVisitor, nestedSubDocId); nestedFieldsVisitor.postProcess(context.mapperService()); if (!nestedFieldsVisitor.fields().isEmpty()) { searchFields = new HashMap<>(nestedFieldsVisitor.fields().size()); for (Map.Entry> entry : nestedFieldsVisitor.fields().entrySet()) { - searchFields.put(entry.getKey(), new InternalSearchHitField(entry.getKey(), entry.getValue())); + searchFields.put(entry.getKey(), new SearchHitField(entry.getKey(), entry.getValue())); } } } @@ -301,15 +322,18 @@ private Map getSearchFields(SearchContext context, int n return searchFields; } - private InternalSearchHit.InternalNestedIdentity getInternalNestedIdentity(SearchContext context, int nestedSubDocId, LeafReaderContext subReaderContext, DocumentMapper documentMapper, ObjectMapper nestedObjectMapper) throws IOException { + private SearchHit.NestedIdentity getInternalNestedIdentity(SearchContext context, int nestedSubDocId, + LeafReaderContext subReaderContext, + MapperService mapperService, + ObjectMapper nestedObjectMapper) throws IOException { int currentParent = nestedSubDocId; ObjectMapper nestedParentObjectMapper; ObjectMapper current = nestedObjectMapper; String originalName = nestedObjectMapper.name(); - InternalSearchHit.InternalNestedIdentity nestedIdentity = null; + SearchHit.NestedIdentity nestedIdentity = null; do { Query parentFilter; - nestedParentObjectMapper = documentMapper.findParentObjectMapper(current); + nestedParentObjectMapper = current.getParentObjectMapper(mapperService); if (nestedParentObjectMapper != null) { if (nestedParentObjectMapper.nested().isNested() == false) { current = nestedParentObjectMapper; @@ -337,13 +361,14 @@ private InternalSearchHit.InternalNestedIdentity getInternalNestedIdentity(Searc int offset = 0; int nextParent = parentBits.nextSetBit(currentParent); - for (int docId = childIter.advance(currentParent + 1); docId < nextParent && docId != DocIdSetIterator.NO_MORE_DOCS; docId = childIter.nextDoc()) { + for (int docId = childIter.advance(currentParent + 1); docId < nextParent && docId != DocIdSetIterator.NO_MORE_DOCS; + docId = 
childIter.nextDoc()) { offset++; } currentParent = nextParent; current = nestedObjectMapper = nestedParentObjectMapper; int currentPrefix = current == null ? 0 : current.name().length() + 1; - nestedIdentity = new InternalSearchHit.InternalNestedIdentity(originalName.substring(currentPrefix), offset, nestedIdentity); + nestedIdentity = new SearchHit.NestedIdentity(originalName.substring(currentPrefix), offset, nestedIdentity); if (current != null) { originalName = current.name(); } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/FetchPhaseExecutionException.java b/core/src/main/java/org/elasticsearch/search/fetch/FetchPhaseExecutionException.java index 1f56ac871429b..5d17a1c95bf6c 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/FetchPhaseExecutionException.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/FetchPhaseExecutionException.java @@ -34,6 +34,10 @@ public FetchPhaseExecutionException(SearchContext context, String msg, Throwable super(context, "Fetch Failed [" + msg + "]", t); } + public FetchPhaseExecutionException(SearchContext context, String msg) { + super(context, "Fetch Failed [" + msg + "]"); + } + public FetchPhaseExecutionException(StreamInput in) throws IOException { super(in); } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/FetchSearchResult.java b/core/src/main/java/org/elasticsearch/search/fetch/FetchSearchResult.java index ed8c0358dbb6e..72960334ea14c 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/FetchSearchResult.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/FetchSearchResult.java @@ -21,59 +21,51 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.search.SearchHit; +import org.elasticsearch.search.SearchPhaseResult; import org.elasticsearch.search.SearchShardTarget; -import org.elasticsearch.search.internal.InternalSearchHits; -import org.elasticsearch.transport.TransportResponse; +import org.elasticsearch.search.SearchHits; +import org.elasticsearch.search.query.QuerySearchResult; import java.io.IOException; -import static org.elasticsearch.search.internal.InternalSearchHits.StreamContext; +public final class FetchSearchResult extends SearchPhaseResult { -/** - * - */ -public class FetchSearchResult extends TransportResponse implements FetchSearchResultProvider { - - private long id; - private SearchShardTarget shardTarget; - private InternalSearchHits hits; + private SearchHits hits; // client side counter private transient int counter; public FetchSearchResult() { - } public FetchSearchResult(long id, SearchShardTarget shardTarget) { - this.id = id; - this.shardTarget = shardTarget; - } - - @Override - public FetchSearchResult fetchResult() { - return this; + this.requestId = id; + setSearchShardTarget(shardTarget); } @Override - public long id() { - return this.id; + public QuerySearchResult queryResult() { + return null; } @Override - public SearchShardTarget shardTarget() { - return this.shardTarget; + public FetchSearchResult fetchResult() { + return this; } - @Override - public void shardTarget(SearchShardTarget shardTarget) { - this.shardTarget = shardTarget; + public void hits(SearchHits hits) { + assert assertNoSearchTarget(hits); + this.hits = hits; } - public void hits(InternalSearchHits hits) { - this.hits = hits; + private boolean assertNoSearchTarget(SearchHits hits) { + for (SearchHit hit : hits.hits()) { + assert hit.getShard() == null : "expected null but got: " + 
hit.getShard(); + } + return true; } - public InternalSearchHits hits() { + public SearchHits hits() { return hits; } @@ -95,14 +87,14 @@ public static FetchSearchResult readFetchSearchResult(StreamInput in) throws IOE @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - id = in.readLong(); - hits = InternalSearchHits.readSearchHits(in, InternalSearchHits.streamContext().streamShardTarget(StreamContext.ShardTargetType.NO_STREAM)); + requestId = in.readLong(); + hits = SearchHits.readSearchHits(in); } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - out.writeLong(id); - hits.writeTo(out, InternalSearchHits.streamContext().streamShardTarget(StreamContext.ShardTargetType.NO_STREAM)); + out.writeLong(requestId); + hits.writeTo(out); } } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/FetchSearchResultProvider.java b/core/src/main/java/org/elasticsearch/search/fetch/FetchSearchResultProvider.java deleted file mode 100644 index 5f4b81012989c..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/fetch/FetchSearchResultProvider.java +++ /dev/null @@ -1,30 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.search.fetch; - -import org.elasticsearch.search.SearchPhaseResult; - -/** - * - */ -public interface FetchSearchResultProvider extends SearchPhaseResult { - - FetchSearchResult fetchResult(); -} diff --git a/core/src/main/java/org/elasticsearch/search/fetch/FetchSubPhase.java b/core/src/main/java/org/elasticsearch/search/fetch/FetchSubPhase.java index 2180cc34128ad..6f34eba2129b8 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/FetchSubPhase.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/FetchSubPhase.java @@ -22,11 +22,9 @@ import org.apache.lucene.index.LeafReader; import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.IndexSearcher; -import org.elasticsearch.search.SearchParseElement; -import org.elasticsearch.search.internal.InternalSearchHit; +import org.elasticsearch.search.SearchHit; import org.elasticsearch.search.internal.SearchContext; -import java.util.Collections; import java.util.HashMap; import java.util.Map; @@ -36,20 +34,20 @@ public interface FetchSubPhase { class HitContext { - private InternalSearchHit hit; + private SearchHit hit; private IndexSearcher searcher; private LeafReaderContext readerContext; private int docId; private Map cache; - public void reset(InternalSearchHit hit, LeafReaderContext context, int docId, IndexSearcher searcher) { + public void reset(SearchHit hit, LeafReaderContext context, int docId, IndexSearcher searcher) { this.hit = hit; this.readerContext = context; this.docId = docId; this.searcher = searcher; } - public InternalSearchHit hit() { + public SearchHit hit() { return hit; } @@ -69,10 +67,6 @@ public IndexReader topLevelReader() { return searcher.getIndexReader(); } - public IndexSearcher topLevelSearcher() { - return searcher; - } - public Map cache() { if (cache == null) { cache = new HashMap<>(); @@ -82,34 +76,11 @@ public Map cache() { } - default Map parseElements() { - return Collections.emptyMap(); - } - /** * Executes the hit level phase, with a reader and doc id (note, its a low level reader, and the matching doc). */ default void hitExecute(SearchContext context, HitContext hitContext) {} - default void hitsExecute(SearchContext context, InternalSearchHit[] hits) {} - - /** - * This interface is in the fetch phase plugin mechanism. - * Whenever a new search is executed we create a new {@link SearchContext} that holds individual contexts for each {@link org.elasticsearch.search.fetch.FetchSubPhase}. - * Fetch phases that use the plugin mechanism must provide a ContextFactory to the SearchContext that creates the fetch phase context and also associates them with a name. - * See {@link SearchContext#getFetchSubPhaseContext(FetchSubPhase.ContextFactory)} - */ - interface ContextFactory { - - /** - * The name of the context. - */ - String getName(); - - /** - * Creates a new instance of a FetchSubPhaseContext that holds all information a FetchSubPhase needs to execute on hits. - */ - SubPhaseContext newContextInstance(); - } + default void hitsExecute(SearchContext context, SearchHit[] hits) {} } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/FetchSubPhaseContext.java b/core/src/main/java/org/elasticsearch/search/fetch/FetchSubPhaseContext.java deleted file mode 100644 index 856c0ad902f33..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/fetch/FetchSubPhaseContext.java +++ /dev/null @@ -1,49 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. 
See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.search.fetch; - -import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext; - -/** - * All configuration and context needed by the FetchSubPhase to execute on hits. - * The only required information in this base class is whether or not the sub phase needs to be run at all. - * It can be extended by FetchSubPhases to hold information the phase needs to execute on hits. - * See {@link org.elasticsearch.search.fetch.FetchSubPhase.ContextFactory} and also {@link DocValueFieldsContext} for an example. - */ -public class FetchSubPhaseContext { - - // This is to store if the FetchSubPhase should be executed at all. - private boolean hitExecutionNeeded = false; - - /** - * Set if this phase should be executed at all. - */ - public void setHitExecutionNeeded(boolean hitExecutionNeeded) { - this.hitExecutionNeeded = hitExecutionNeeded; - } - - /** - * Returns if this phase be executed at all. - */ - public boolean hitExecutionNeeded() { - return hitExecutionNeeded; - } - -} diff --git a/core/src/main/java/org/elasticsearch/search/fetch/QueryFetchSearchResult.java b/core/src/main/java/org/elasticsearch/search/fetch/QueryFetchSearchResult.java index f3271f933febc..8d1e6276e65d9 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/QueryFetchSearchResult.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/QueryFetchSearchResult.java @@ -21,25 +21,21 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.search.SearchPhaseResult; import org.elasticsearch.search.SearchShardTarget; import org.elasticsearch.search.query.QuerySearchResult; -import org.elasticsearch.search.query.QuerySearchResultProvider; import java.io.IOException; import static org.elasticsearch.search.fetch.FetchSearchResult.readFetchSearchResult; import static org.elasticsearch.search.query.QuerySearchResult.readQuerySearchResult; -/** - * - */ -public class QueryFetchSearchResult extends QuerySearchResultProvider implements FetchSearchResultProvider { +public final class QueryFetchSearchResult extends SearchPhaseResult { private QuerySearchResult queryResult; private FetchSearchResult fetchResult; public QueryFetchSearchResult() { - } public QueryFetchSearchResult(QuerySearchResult queryResult, FetchSearchResult fetchResult) { @@ -48,24 +44,27 @@ public QueryFetchSearchResult(QuerySearchResult queryResult, FetchSearchResult f } @Override - public long id() { - return queryResult.id(); + public long getRequestId() { + return queryResult.getRequestId(); } @Override - public SearchShardTarget shardTarget() { - return queryResult.shardTarget(); + public SearchShardTarget getSearchShardTarget() { + return queryResult.getSearchShardTarget(); } @Override - public void shardTarget(SearchShardTarget 
shardTarget) { - queryResult.shardTarget(shardTarget); - fetchResult.shardTarget(shardTarget); + public void setSearchShardTarget(SearchShardTarget shardTarget) { + super.setSearchShardTarget(shardTarget); + queryResult.setSearchShardTarget(shardTarget); + fetchResult.setSearchShardTarget(shardTarget); } @Override - public boolean includeFetch() { - return true; + public void setShardIndex(int requestIndex) { + super.setShardIndex(requestIndex); + queryResult.setShardIndex(requestIndex); + fetchResult.setShardIndex(requestIndex); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/fetch/ScrollQueryFetchSearchResult.java b/core/src/main/java/org/elasticsearch/search/fetch/ScrollQueryFetchSearchResult.java index dbaee5b64bb6c..55aa4a96d018c 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/ScrollQueryFetchSearchResult.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/ScrollQueryFetchSearchResult.java @@ -21,49 +21,64 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.search.SearchPhaseResult; import org.elasticsearch.search.SearchShardTarget; -import org.elasticsearch.transport.TransportResponse; +import org.elasticsearch.search.query.QuerySearchResult; import java.io.IOException; import static org.elasticsearch.search.fetch.QueryFetchSearchResult.readQueryFetchSearchResult; -/** - * - */ -public class ScrollQueryFetchSearchResult extends TransportResponse { +public final class ScrollQueryFetchSearchResult extends SearchPhaseResult { private QueryFetchSearchResult result; - private SearchShardTarget shardTarget; public ScrollQueryFetchSearchResult() { } public ScrollQueryFetchSearchResult(QueryFetchSearchResult result, SearchShardTarget shardTarget) { this.result = result; - this.shardTarget = shardTarget; + setSearchShardTarget(shardTarget); } public QueryFetchSearchResult result() { return result; } - public SearchShardTarget shardTarget() { - return shardTarget; + @Override + public void setSearchShardTarget(SearchShardTarget shardTarget) { + super.setSearchShardTarget(shardTarget); + result.setSearchShardTarget(shardTarget); + } + + @Override + public void setShardIndex(int shardIndex) { + super.setShardIndex(shardIndex); + result.setShardIndex(shardIndex); + } + + @Override + public QuerySearchResult queryResult() { + return result.queryResult(); + } + + @Override + public FetchSearchResult fetchResult() { + return result.fetchResult(); } @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - shardTarget = new SearchShardTarget(in); + SearchShardTarget searchShardTarget = new SearchShardTarget(in); result = readQueryFetchSearchResult(in); - result.shardTarget(shardTarget); + setSearchShardTarget(searchShardTarget); } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - shardTarget.writeTo(out); + getSearchShardTarget().writeTo(out); result.writeTo(out); } } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/ShardFetchRequest.java b/core/src/main/java/org/elasticsearch/search/fetch/ShardFetchRequest.java index 4087eb9a01cf8..dcea42e5ecb7f 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/ShardFetchRequest.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/ShardFetchRequest.java @@ -22,9 +22,12 @@ import com.carrotsearch.hppc.IntArrayList; import org.apache.lucene.search.FieldDoc; import org.apache.lucene.search.ScoreDoc; +import 
org.elasticsearch.action.search.SearchTask; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.lucene.Lucene; +import org.elasticsearch.tasks.Task; +import org.elasticsearch.tasks.TaskId; import org.elasticsearch.transport.TransportRequest; import java.io.IOException; @@ -106,4 +109,15 @@ public void writeTo(StreamOutput out) throws IOException { Lucene.writeScoreDoc(out, lastEmittedDoc); } } + + @Override + public Task createTask(long id, String type, String action, TaskId parentTaskId) { + return new SearchTask(id, type, action, getDescription(), parentTaskId); + } + + @Override + public String getDescription() { + return "id[" + id + "], size[" + size + "], lastEmittedDoc[" + lastEmittedDoc + "]"; + } + } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/ShardFetchSearchRequest.java b/core/src/main/java/org/elasticsearch/search/fetch/ShardFetchSearchRequest.java index f6738f9972511..fdfc582c95295 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/ShardFetchSearchRequest.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/ShardFetchSearchRequest.java @@ -23,7 +23,6 @@ import org.apache.lucene.search.ScoreDoc; import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.OriginalIndices; -import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -42,9 +41,9 @@ public ShardFetchSearchRequest() { } - public ShardFetchSearchRequest(SearchRequest request, long id, IntArrayList list, ScoreDoc lastEmittedDoc) { + public ShardFetchSearchRequest(OriginalIndices originalIndices, long id, IntArrayList list, ScoreDoc lastEmittedDoc) { super(id, list, lastEmittedDoc); - this.originalIndices = new OriginalIndices(request); + this.originalIndices = originalIndices; } @Override diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/DocValueFieldsContext.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/DocValueFieldsContext.java index 54185734f97b4..325d28e459282 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/DocValueFieldsContext.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/DocValueFieldsContext.java @@ -18,38 +18,23 @@ */ package org.elasticsearch.search.fetch.subphase; -import org.elasticsearch.search.fetch.FetchSubPhaseContext; - -import java.util.ArrayList; import java.util.List; /** * All the required context to pull a field from the doc values. 
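
The DocValueFieldsContext rework that follows replaces the mutable, FetchSubPhaseContext-based class with a plain holder of the requested docvalue field names. A hedged usage sketch, assuming the element type is `String` (the later DocValueFieldsFetchSubPhase hunk iterates the fields as strings); the field names and wrapper class are invented:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;

final class DocValueFieldsSketch {

    static DocValueFieldsContext example() {
        // Illustration only: the context is now just the list of field names
        // to load from doc values during the fetch phase.
        List<String> fields = new ArrayList<>(Arrays.asList("timestamp", "user.id")); // hypothetical fields
        DocValueFieldsContext ctx = new DocValueFieldsContext(fields);
        for (String field : ctx.fields()) {
            // each name is later resolved through MapperService and read from doc values
        }
        return ctx;
    }
}
```
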
*/ -public class DocValueFieldsContext extends FetchSubPhaseContext { - - public static class DocValueField { - private final String name; +public class DocValueFieldsContext { - public DocValueField(String name) { - this.name = name; - } - - public String name() { - return name; - } - } - - private List fields = new ArrayList<>(); - - public DocValueFieldsContext() { - } + private final List fields; - public void add(DocValueField field) { - this.fields.add(field); + public DocValueFieldsContext(List fields) { + this.fields = fields; } - public List fields() { + /** + * Returns the required docvalue fields + */ + public List fields() { return this.fields; } } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/DocValueFieldsFetchSubPhase.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/DocValueFieldsFetchSubPhase.java index 803cbb4348f9f..f5425b3c7e9ab 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/DocValueFieldsFetchSubPhase.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/DocValueFieldsFetchSubPhase.java @@ -23,10 +23,10 @@ import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.search.SearchHitField; import org.elasticsearch.search.fetch.FetchSubPhase; -import org.elasticsearch.search.internal.InternalSearchHitField; import org.elasticsearch.search.internal.SearchContext; import java.util.ArrayList; +import java.util.Collections; import java.util.HashMap; /** @@ -36,35 +36,30 @@ */ public final class DocValueFieldsFetchSubPhase implements FetchSubPhase { - public static final String NAME = "docvalue_fields"; - public static final ContextFactory CONTEXT_FACTORY = new ContextFactory() { - - @Override - public String getName() { - return NAME; - } - - @Override - public DocValueFieldsContext newContextInstance() { - return new DocValueFieldsContext(); - } - }; - @Override public void hitExecute(SearchContext context, HitContext hitContext) { - if (context.getFetchSubPhaseContext(CONTEXT_FACTORY).hitExecutionNeeded() == false) { + if (context.collapse() != null) { + // retrieve the `doc_value` associated with the collapse field + String name = context.collapse().getFieldType().name(); + if (context.docValueFieldsContext() == null) { + context.docValueFieldsContext(new DocValueFieldsContext(Collections.singletonList(name))); + } else if (context.docValueFieldsContext().fields().contains(name) == false) { + context.docValueFieldsContext().fields().add(name); + } + } + if (context.docValueFieldsContext() == null) { return; } - for (DocValueFieldsContext.DocValueField field : context.getFetchSubPhaseContext(CONTEXT_FACTORY).fields()) { + for (String field : context.docValueFieldsContext().fields()) { if (hitContext.hit().fieldsOrNull() == null) { hitContext.hit().fields(new HashMap<>(2)); } - SearchHitField hitField = hitContext.hit().fields().get(field.name()); + SearchHitField hitField = hitContext.hit().fields().get(field); if (hitField == null) { - hitField = new InternalSearchHitField(field.name(), new ArrayList<>(2)); - hitContext.hit().fields().put(field.name(), hitField); + hitField = new SearchHitField(field, new ArrayList<>(2)); + hitContext.hit().fields().put(field, hitField); } - MappedFieldType fieldType = context.mapperService().fullName(field.name()); + MappedFieldType fieldType = context.mapperService().fullName(field); if (fieldType != null) { AtomicFieldData data = context.fieldData().getForField(fieldType).load(hitContext.readerContext()); ScriptDocValues values = 
data.getScriptValues(); diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceContext.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceContext.java index 864de1628a723..128bef9ba146f 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceContext.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceContext.java @@ -25,96 +25,73 @@ import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.io.stream.Streamable; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.common.xcontent.support.XContentMapValues; import org.elasticsearch.rest.RestRequest; import java.io.IOException; import java.util.ArrayList; import java.util.Arrays; import java.util.List; +import java.util.Map; +import java.util.function.Function; /** * Context used to fetch the {@code _source}. */ -public class FetchSourceContext implements Streamable, ToXContent { +public class FetchSourceContext implements Writeable, ToXContent { + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(FetchSourceContext.class)); public static final ParseField INCLUDES_FIELD = new ParseField("includes", "include"); public static final ParseField EXCLUDES_FIELD = new ParseField("excludes", "exclude"); public static final FetchSourceContext FETCH_SOURCE = new FetchSourceContext(true); public static final FetchSourceContext DO_NOT_FETCH_SOURCE = new FetchSourceContext(false); - private boolean fetchSource; - private String[] includes; - private String[] excludes; + private final boolean fetchSource; + private final String[] includes; + private final String[] excludes; + private Function, Map> filter; - public static FetchSourceContext parse(QueryParseContext context) throws IOException { - FetchSourceContext fetchSourceContext = new FetchSourceContext(); - fetchSourceContext.fromXContent(context); - return fetchSourceContext; - } - - public FetchSourceContext() { + public FetchSourceContext(boolean fetchSource, String[] includes, String[] excludes) { + this.fetchSource = fetchSource; + this.includes = includes == null ? Strings.EMPTY_ARRAY : includes; + this.excludes = excludes == null ? Strings.EMPTY_ARRAY : excludes; } public FetchSourceContext(boolean fetchSource) { this(fetchSource, Strings.EMPTY_ARRAY, Strings.EMPTY_ARRAY); } - public FetchSourceContext(String include) { - this(include, null); - } - - public FetchSourceContext(String include, String exclude) { - this(true, - include == null ? Strings.EMPTY_ARRAY : new String[]{include}, - exclude == null ? 
Strings.EMPTY_ARRAY : new String[]{exclude}); - } - - public FetchSourceContext(String[] includes) { - this(true, includes, Strings.EMPTY_ARRAY); - } - - public FetchSourceContext(String[] includes, String[] excludes) { - this(true, includes, excludes); + public FetchSourceContext(StreamInput in) throws IOException { + fetchSource = in.readBoolean(); + includes = in.readStringArray(); + excludes = in.readStringArray(); } - public FetchSourceContext(boolean fetchSource, String[] includes, String[] excludes) { - this.fetchSource = fetchSource; - this.includes = includes == null ? Strings.EMPTY_ARRAY : includes; - this.excludes = excludes == null ? Strings.EMPTY_ARRAY : excludes; + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeBoolean(fetchSource); + out.writeStringArray(includes); + out.writeStringArray(excludes); } public boolean fetchSource() { return this.fetchSource; } - public FetchSourceContext fetchSource(boolean fetchSource) { - this.fetchSource = fetchSource; - return this; - } - public String[] includes() { return this.includes; } - public FetchSourceContext includes(String[] includes) { - this.includes = includes; - return this; - } - public String[] excludes() { return this.excludes; } - public FetchSourceContext excludes(String[] excludes) { - this.excludes = excludes; - return this; - } - public static FetchSourceContext parseFromRestRequest(RestRequest request) { Boolean fetchSource = null; String[] source_excludes = null; @@ -129,6 +106,10 @@ public static FetchSourceContext parseFromRestRequest(RestRequest request) { } else { source_includes = Strings.splitStringByCommaToArray(source); } + if (fetchSource != null && Booleans.isStrictlyBoolean(source) == false) { + DEPRECATION_LOGGER.deprecated("Expected a boolean [true/false] for request parameter [_source] but got [{}]", source); + } + } String sIncludes = request.param("_source_includes"); sIncludes = request.param("_source_include", sIncludes); @@ -148,8 +129,7 @@ public static FetchSourceContext parseFromRestRequest(RestRequest request) { return null; } - public void fromXContent(QueryParseContext context) throws IOException { - XContentParser parser = context.parser(); + public static FetchSourceContext fromXContent(XContentParser parser) throws IOException { XContentParser.Token token = parser.currentToken(); boolean fetchSource = true; String[] includes = Strings.EMPTY_ARRAY; @@ -170,7 +150,7 @@ public void fromXContent(QueryParseContext context) throws IOException { if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_ARRAY) { - if (context.getParseFieldMatcher().match(currentFieldName, INCLUDES_FIELD)) { + if (INCLUDES_FIELD.match(currentFieldName)) { List includesList = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { if (token == XContentParser.Token.VALUE_STRING) { @@ -181,7 +161,7 @@ public void fromXContent(QueryParseContext context) throws IOException { } } includes = includesList.toArray(new String[includesList.size()]); - } else if (context.getParseFieldMatcher().match(currentFieldName, EXCLUDES_FIELD)) { + } else if (EXCLUDES_FIELD.match(currentFieldName)) { List excludesList = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { if (token == XContentParser.Token.VALUE_STRING) { @@ -197,10 +177,13 @@ public void fromXContent(QueryParseContext context) throws IOException { + " in [" + currentFieldName + "].", 
parser.getTokenLocation()); } } else if (token == XContentParser.Token.VALUE_STRING) { - if (context.getParseFieldMatcher().match(currentFieldName, INCLUDES_FIELD)) { + if (INCLUDES_FIELD.match(currentFieldName)) { includes = new String[] {parser.text()}; - } else if (context.getParseFieldMatcher().match(currentFieldName, EXCLUDES_FIELD)) { + } else if (EXCLUDES_FIELD.match(currentFieldName)) { excludes = new String[] {parser.text()}; + } else { + throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + + " in [" + currentFieldName + "].", parser.getTokenLocation()); } } else { throw new ParsingException(parser.getTokenLocation(), "Unknown key for a " + token + " in [" + currentFieldName + "].", @@ -211,9 +194,7 @@ public void fromXContent(QueryParseContext context) throws IOException { throw new ParsingException(parser.getTokenLocation(), "Expected one of [" + XContentParser.Token.VALUE_BOOLEAN + ", " + XContentParser.Token.START_OBJECT + "] but found [" + token + "]", parser.getTokenLocation()); } - this.fetchSource = fetchSource; - this.includes = includes; - this.excludes = excludes; + return new FetchSourceContext(fetchSource, includes, excludes); } @Override @@ -229,22 +210,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws return builder; } - @Override - public void readFrom(StreamInput in) throws IOException { - fetchSource = in.readBoolean(); - includes = in.readStringArray(); - excludes = in.readStringArray(); - in.readBoolean(); // Used to be transformSource but that was dropped in 2.1 - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - out.writeBoolean(fetchSource); - out.writeStringArray(includes); - out.writeStringArray(excludes); - out.writeBoolean(false); // Used to be transformSource but that was dropped in 2.1 - } - @Override public boolean equals(Object o) { if (this == o) return true; @@ -266,4 +231,15 @@ public int hashCode() { result = 31 * result + (excludes != null ? Arrays.hashCode(excludes) : 0); return result; } + + /** + * Returns a filter function that expects the source map as an input and returns + * the filtered map. 
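
The `getFilter()` accessor added just below lazily builds a source-filtering function from the include/exclude patterns via `XContentMapValues.filter`. A rough sketch of how a caller might use the now-immutable FetchSourceContext, assuming the filter has the `Function<Map<String, Object>, Map<String, Object>>` shape implied by the field it is assigned to; the sample document map and field names are invented:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

import org.elasticsearch.search.fetch.subphase.FetchSourceContext;

final class SourceFilterSketch {

    static Map<String, Object> example() {
        // Keep everything under "user" except "user.secret".
        FetchSourceContext ctx =
            new FetchSourceContext(true, new String[] {"user.*"}, new String[] {"user.secret"});

        Map<String, Object> user = new HashMap<>();
        user.put("name", "kimchy");      // hypothetical _source contents
        user.put("secret", "s3cr3t");
        Map<String, Object> source = new HashMap<>();
        source.put("user", user);
        source.put("message", "hello");

        Function<Map<String, Object>, Map<String, Object>> filter = ctx.getFilter();
        return filter.apply(source);     // expected to drop "message" and "user.secret"
    }
}
```
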
+ */ + public Function, Map> getFilter() { + if (filter == null) { + filter = XContentMapValues.filter(includes, excludes); + } + return filter; + } } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceSubPhase.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceSubPhase.java index c67b96c7af588..3171ca4b00834 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceSubPhase.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceSubPhase.java @@ -36,9 +36,6 @@ public void hitExecute(SearchContext context, HitContext hitContext) { return; } SourceLookup source = context.lookup().source(); - if (source.internalSourceRef() == null) { - return; // source disabled in the mapping - } FetchSourceContext fetchSourceContext = context.fetchSourceContext(); assert fetchSourceContext.fetchSource(); if (fetchSourceContext.includes().length == 0 && fetchSourceContext.excludes().length == 0) { @@ -46,7 +43,12 @@ public void hitExecute(SearchContext context, HitContext hitContext) { return; } - Object value = source.filter(fetchSourceContext.includes(), fetchSourceContext.excludes()); + if (source.internalSourceRef() == null) { + throw new IllegalArgumentException("unable to fetch fields from _source field: _source is disabled in the mappings " + + "for index [" + context.indexShard().shardId().getIndexName() + "]"); + } + + final Object value = source.filter(fetchSourceContext); try { final int initialCapacity = Math.min(1024, source.internalSourceRef().length()); BytesStreamOutput streamOutput = new BytesStreamOutput(initialCapacity); diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/InnerHitsContext.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/InnerHitsContext.java index 44a6b13fd403d..9ec940b8f5b56 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/InnerHitsContext.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/InnerHitsContext.java @@ -19,84 +19,72 @@ package org.elasticsearch.search.fetch.subphase; -import org.apache.lucene.index.LeafReader; import org.apache.lucene.index.LeafReaderContext; -import org.apache.lucene.index.Term; -import org.apache.lucene.search.BooleanClause.Occur; -import org.apache.lucene.search.BooleanQuery; -import org.apache.lucene.search.ConstantScoreScorer; -import org.apache.lucene.search.ConstantScoreWeight; +import org.apache.lucene.search.CollectionTerminatedException; +import org.apache.lucene.search.Collector; +import org.apache.lucene.search.ConjunctionDISI; import org.apache.lucene.search.DocIdSetIterator; -import org.apache.lucene.search.DocValuesTermsQuery; -import org.apache.lucene.search.IndexSearcher; -import org.apache.lucene.search.Query; +import org.apache.lucene.search.LeafCollector; import org.apache.lucene.search.Scorer; -import org.apache.lucene.search.TermQuery; +import org.apache.lucene.search.ScorerSupplier; import org.apache.lucene.search.TopDocs; -import org.apache.lucene.search.TopDocsCollector; -import org.apache.lucene.search.TopFieldCollector; -import org.apache.lucene.search.TopScoreDocCollector; import org.apache.lucene.search.Weight; -import org.apache.lucene.search.join.BitSetProducer; -import org.apache.lucene.util.BitSet; -import org.elasticsearch.ExceptionsHelper; -import org.elasticsearch.common.lucene.Lucene; -import org.elasticsearch.common.lucene.search.Queries; -import org.elasticsearch.index.mapper.DocumentMapper; -import 
org.elasticsearch.index.mapper.MapperService; -import org.elasticsearch.index.mapper.ObjectMapper; -import org.elasticsearch.index.mapper.ParentFieldMapper; +import org.apache.lucene.util.Bits; import org.elasticsearch.index.mapper.Uid; -import org.elasticsearch.index.mapper.UidFieldMapper; -import org.elasticsearch.search.SearchHitField; -import org.elasticsearch.search.fetch.FetchSubPhase; -import org.elasticsearch.search.internal.InternalSearchHit; +import org.elasticsearch.search.SearchHit; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.internal.SubSearchContext; import java.io.IOException; +import java.util.Arrays; import java.util.HashMap; import java.util.Map; import java.util.Objects; -/** - */ public final class InnerHitsContext { - - private final Map innerHits; + private final Map innerHits; public InnerHitsContext() { this.innerHits = new HashMap<>(); } - public InnerHitsContext(Map innerHits) { + InnerHitsContext(Map innerHits) { this.innerHits = Objects.requireNonNull(innerHits); } - public Map getInnerHits() { + public Map getInnerHits() { return innerHits; } - public void addInnerHitDefinition(BaseInnerHits innerHit) { + public void addInnerHitDefinition(InnerHitSubContext innerHit) { if (innerHits.containsKey(innerHit.getName())) { throw new IllegalArgumentException("inner_hit definition with the name [" + innerHit.getName() + - "] already exists. Use a different inner_hit name or define one explicitly"); + "] already exists. Use a different inner_hit name or define one explicitly"); } innerHits.put(innerHit.getName(), innerHit); } - public abstract static class BaseInnerHits extends SubSearchContext { + /** + * A {@link SubSearchContext} that associates {@link TopDocs} to each {@link SearchHit} + * in the parent search context + */ + public abstract static class InnerHitSubContext extends SubSearchContext { private final String name; + protected final SearchContext context; private InnerHitsContext childInnerHits; - protected BaseInnerHits(String name, SearchContext context) { + // TODO: when types are complete removed just use String instead for the id: + private Uid uid; + + protected InnerHitSubContext(String name, SearchContext context) { super(context); this.name = name; + this.context = context; } - public abstract TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContext) throws IOException; + public abstract TopDocs[] topDocs(SearchHit[] hits) throws IOException; public String getName() { return name; @@ -107,236 +95,63 @@ public InnerHitsContext innerHits() { return childInnerHits; } - public void setChildInnerHits(Map childInnerHits) { + public void setChildInnerHits(Map childInnerHits) { this.childInnerHits = new InnerHitsContext(childInnerHits); } - } - - public static final class NestedInnerHits extends BaseInnerHits { - - private final ObjectMapper parentObjectMapper; - private final ObjectMapper childObjectMapper; - public NestedInnerHits(String name, SearchContext context, ObjectMapper parentObjectMapper, ObjectMapper childObjectMapper) { - super(name != null ? 
name : childObjectMapper.fullPath(), context); - this.parentObjectMapper = parentObjectMapper; - this.childObjectMapper = childObjectMapper; + protected Weight createInnerHitQueryWeight() throws IOException { + final boolean needsScores = size() != 0 && (sort() == null || sort().sort.needsScores()); + return context.searcher().createNormalizedWeight(query(), needsScores); } - @Override - public TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContext) throws IOException { - Query rawParentFilter; - if (parentObjectMapper == null) { - rawParentFilter = Queries.newNonNestedFilter(); - } else { - rawParentFilter = parentObjectMapper.nestedTypeFilter(); - } - BitSetProducer parentFilter = context.bitsetFilterCache().getBitSetProducer(rawParentFilter); - Query childFilter = childObjectMapper.nestedTypeFilter(); - Query q = Queries.filtered(query(), new NestedChildrenQuery(parentFilter, childFilter, hitContext)); - - if (size() == 0) { - return new TopDocs(context.searcher().count(q), Lucene.EMPTY_SCORE_DOCS, 0); - } else { - int topN = Math.min(from() + size(), context.searcher().getIndexReader().maxDoc()); - TopDocsCollector topDocsCollector; - if (sort() != null) { - try { - topDocsCollector = TopFieldCollector.create(sort().sort, topN, true, trackScores(), trackScores()); - } catch (IOException e) { - throw ExceptionsHelper.convertToElastic(e); - } - } else { - topDocsCollector = TopScoreDocCollector.create(topN); - } - try { - context.searcher().search(q, topDocsCollector); - } finally { - clearReleasables(Lifetime.COLLECTION); - } - return topDocsCollector.topDocs(from(), size()); - } + public SearchContext parentSearchContext() { + return context; } - // A filter that only emits the nested children docs of a specific nested parent doc - static class NestedChildrenQuery extends Query { - - private final BitSetProducer parentFilter; - private final Query childFilter; - private final int docId; - private final LeafReader leafReader; - - NestedChildrenQuery(BitSetProducer parentFilter, Query childFilter, FetchSubPhase.HitContext hitContext) { - this.parentFilter = parentFilter; - this.childFilter = childFilter; - this.docId = hitContext.docId(); - this.leafReader = hitContext.readerContext().reader(); - } - - @Override - public boolean equals(Object obj) { - if (sameClassAs(obj) == false) { - return false; - } - NestedChildrenQuery other = (NestedChildrenQuery) obj; - return parentFilter.equals(other.parentFilter) - && childFilter.equals(other.childFilter) - && docId == other.docId - && leafReader.getCoreCacheKey() == other.leafReader.getCoreCacheKey(); - } - - @Override - public int hashCode() { - int hash = classHash(); - hash = 31 * hash + parentFilter.hashCode(); - hash = 31 * hash + childFilter.hashCode(); - hash = 31 * hash + docId; - hash = 31 * hash + leafReader.getCoreCacheKey().hashCode(); - return hash; - } - - @Override - public String toString(String field) { - return "NestedChildren(parent=" + parentFilter + ",child=" + childFilter + ")"; - } - - @Override - public Weight createWeight(IndexSearcher searcher, boolean needsScores) throws IOException { - final Weight childWeight = childFilter.createWeight(searcher, false); - return new ConstantScoreWeight(this) { - @Override - public Scorer scorer(LeafReaderContext context) throws IOException { - // Nested docs only reside in a single segment, so no need to evaluate all segments - if (!context.reader().getCoreCacheKey().equals(leafReader.getCoreCacheKey())) { - return null; - } - - // If docId == 0 then we a 
parent doc doesn't have child docs, because child docs are stored - // before the parent doc and because parent doc is 0 we can safely assume that there are no child docs. - if (docId == 0) { - return null; - } - - final BitSet parents = parentFilter.getBitSet(context); - final int firstChildDocId = parents.prevSetBit(docId - 1) + 1; - // A parent doc doesn't have child docs, so we can early exit here: - if (firstChildDocId == docId) { - return null; - } - - final Scorer childrenScorer = childWeight.scorer(context); - if (childrenScorer == null) { - return null; - } - DocIdSetIterator childrenIterator = childrenScorer.iterator(); - final DocIdSetIterator it = new DocIdSetIterator() { - - int doc = -1; - - @Override - public int docID() { - return doc; - } - - @Override - public int nextDoc() throws IOException { - return advance(doc + 1); - } - - @Override - public int advance(int target) throws IOException { - target = Math.max(firstChildDocId, target); - if (target >= docId) { - // We're outside the child nested scope, so it is done - return doc = NO_MORE_DOCS; - } else { - int advanced = childrenIterator.advance(target); - if (advanced >= docId) { - // We're outside the child nested scope, so it is done - return doc = NO_MORE_DOCS; - } else { - return doc = advanced; - } - } - } - - @Override - public long cost() { - return Math.min(childrenIterator.cost(), docId - firstChildDocId); - } - - }; - return new ConstantScoreScorer(this, score(), it); - } - }; - } + public Uid getUid() { + return uid; } + public void setUid(Uid uid) { + this.uid = uid; + } } - public static final class ParentChildInnerHits extends BaseInnerHits { - - private final MapperService mapperService; - private final DocumentMapper documentMapper; - - public ParentChildInnerHits(String name, SearchContext context, MapperService mapperService, DocumentMapper documentMapper) { - super(name != null ? 
name : documentMapper.type(), context); - this.mapperService = mapperService; - this.documentMapper = documentMapper; + public static void intersect(Weight weight, Weight innerHitQueryWeight, Collector collector, LeafReaderContext ctx) throws IOException { + ScorerSupplier scorerSupplier = weight.scorerSupplier(ctx); + if (scorerSupplier == null) { + return; } + // use random access since this scorer will be consumed on a minority of documents + Scorer scorer = scorerSupplier.get(true); - @Override - public TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContext) throws IOException { - final Query hitQuery; - if (isParentHit(hitContext.hit())) { - String field = ParentFieldMapper.joinField(hitContext.hit().type()); - hitQuery = new DocValuesTermsQuery(field, hitContext.hit().id()); - } else if (isChildHit(hitContext.hit())) { - DocumentMapper hitDocumentMapper = mapperService.documentMapper(hitContext.hit().type()); - final String parentType = hitDocumentMapper.parentFieldMapper().type(); - SearchHitField parentField = hitContext.hit().field(ParentFieldMapper.NAME); - if (parentField == null) { - throw new IllegalStateException("All children must have a _parent"); - } - hitQuery = new TermQuery(new Term(UidFieldMapper.NAME, Uid.createUid(parentType, parentField.getValue()))); - } else { - return Lucene.EMPTY_TOP_DOCS; - } - - BooleanQuery q = new BooleanQuery.Builder() - .add(query(), Occur.MUST) - // Only include docs that have the current hit as parent - .add(hitQuery, Occur.FILTER) - // Only include docs that have this inner hits type - .add(documentMapper.typeFilter(), Occur.FILTER) - .build(); - if (size() == 0) { - final int count = context.searcher().count(q); - return new TopDocs(count, Lucene.EMPTY_SCORE_DOCS, 0); - } else { - int topN = Math.min(from() + size(), context.searcher().getIndexReader().maxDoc()); - TopDocsCollector topDocsCollector; - if (sort() != null) { - topDocsCollector = TopFieldCollector.create(sort().sort, topN, true, trackScores(), trackScores()); - } else { - topDocsCollector = TopScoreDocCollector.create(topN); - } - try { - context.searcher().search(q, topDocsCollector); - } finally { - clearReleasables(Lifetime.COLLECTION); - } - return topDocsCollector.topDocs(from(), size()); - } + ScorerSupplier innerHitQueryScorerSupplier = innerHitQueryWeight.scorerSupplier(ctx); + if (innerHitQueryScorerSupplier == null) { + return; } - - private boolean isParentHit(InternalSearchHit hit) { - return hit.type().equals(documentMapper.parentFieldMapper().type()); + // use random access since this scorer will be consumed on a minority of documents + Scorer innerHitQueryScorer = innerHitQueryScorerSupplier.get(true); + + final LeafCollector leafCollector; + try { + leafCollector = collector.getLeafCollector(ctx); + // Just setting the innerHitQueryScorer is ok, because that is the actual scoring part of the query + leafCollector.setScorer(innerHitQueryScorer); + } catch (CollectionTerminatedException e) { + return; } - private boolean isChildHit(InternalSearchHit hit) { - DocumentMapper hitDocumentMapper = mapperService.documentMapper(hit.type()); - return documentMapper.type().equals(hitDocumentMapper.parentFieldMapper().type()); + try { + Bits acceptDocs = ctx.reader().getLiveDocs(); + DocIdSetIterator iterator = ConjunctionDISI.intersectIterators(Arrays.asList(innerHitQueryScorer.iterator(), + scorer.iterator())); + for (int docId = iterator.nextDoc(); docId < DocIdSetIterator.NO_MORE_DOCS; docId = iterator.nextDoc()) { + if (acceptDocs == null || 
acceptDocs.get(docId)) { + leafCollector.collect(docId); + } + } + } catch (CollectionTerminatedException e) { + // ignore and continue } } } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/InnerHitsFetchSubPhase.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/InnerHitsFetchSubPhase.java index 23c63bc7eef0f..e5907425fee9b 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/InnerHitsFetchSubPhase.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/InnerHitsFetchSubPhase.java @@ -23,11 +23,12 @@ import org.apache.lucene.search.ScoreDoc; import org.apache.lucene.search.TopDocs; import org.elasticsearch.ExceptionsHelper; +import org.elasticsearch.index.mapper.Uid; +import org.elasticsearch.search.SearchHit; +import org.elasticsearch.search.SearchHits; import org.elasticsearch.search.fetch.FetchPhase; import org.elasticsearch.search.fetch.FetchSearchResult; import org.elasticsearch.search.fetch.FetchSubPhase; -import org.elasticsearch.search.internal.InternalSearchHit; -import org.elasticsearch.search.internal.InternalSearchHits; import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; @@ -43,39 +44,48 @@ public InnerHitsFetchSubPhase(FetchPhase fetchPhase) { } @Override - public void hitExecute(SearchContext context, HitContext hitContext) { + public void hitsExecute(SearchContext context, SearchHit[] hits) { if ((context.innerHits() != null && context.innerHits().getInnerHits().size() > 0) == false) { return; } - Map results = new HashMap<>(); - for (Map.Entry entry : context.innerHits().getInnerHits().entrySet()) { - InnerHitsContext.BaseInnerHits innerHits = entry.getValue(); - TopDocs topDocs; + + for (Map.Entry entry : context.innerHits().getInnerHits().entrySet()) { + InnerHitsContext.InnerHitSubContext innerHits = entry.getValue(); + TopDocs[] topDocs; try { - topDocs = innerHits.topDocs(context, hitContext); + topDocs = innerHits.topDocs(hits); } catch (IOException e) { throw ExceptionsHelper.convertToElastic(e); } - innerHits.queryResult().topDocs(topDocs, innerHits.sort() == null ? null : innerHits.sort().formats); - int[] docIdsToLoad = new int[topDocs.scoreDocs.length]; - for (int i = 0; i < topDocs.scoreDocs.length; i++) { - docIdsToLoad[i] = topDocs.scoreDocs[i].doc; - } - innerHits.docIdsToLoad(docIdsToLoad, 0, docIdsToLoad.length); - fetchPhase.execute(innerHits); - FetchSearchResult fetchResult = innerHits.fetchResult(); - InternalSearchHit[] internalHits = fetchResult.fetchResult().hits().internalHits(); - for (int i = 0; i < internalHits.length; i++) { - ScoreDoc scoreDoc = topDocs.scoreDocs[i]; - InternalSearchHit searchHitFields = internalHits[i]; - searchHitFields.score(scoreDoc.score); - if (scoreDoc instanceof FieldDoc) { - FieldDoc fieldDoc = (FieldDoc) scoreDoc; - searchHitFields.sortValues(fieldDoc.fields, innerHits.sort().formats); + for (int i = 0; i < hits.length; i++) { + SearchHit hit = hits[i]; + TopDocs topDoc = topDocs[i]; + + Map results = hit.getInnerHits(); + if (results == null) { + hit.setInnerHits(results = new HashMap<>()); + } + innerHits.queryResult().topDocs(topDoc, innerHits.sort() == null ? 
null : innerHits.sort().formats); + int[] docIdsToLoad = new int[topDoc.scoreDocs.length]; + for (int j = 0; j < topDoc.scoreDocs.length; j++) { + docIdsToLoad[j] = topDoc.scoreDocs[j].doc; + } + innerHits.docIdsToLoad(docIdsToLoad, 0, docIdsToLoad.length); + innerHits.setUid(new Uid(hit.getType(), hit.getId())); + fetchPhase.execute(innerHits); + FetchSearchResult fetchResult = innerHits.fetchResult(); + SearchHit[] internalHits = fetchResult.fetchResult().hits().internalHits(); + for (int j = 0; j < internalHits.length; j++) { + ScoreDoc scoreDoc = topDoc.scoreDocs[j]; + SearchHit searchHitFields = internalHits[j]; + searchHitFields.score(scoreDoc.score); + if (scoreDoc instanceof FieldDoc) { + FieldDoc fieldDoc = (FieldDoc) scoreDoc; + searchHitFields.sortValues(fieldDoc.fields, innerHits.sort().formats); + } } + results.put(entry.getKey(), fetchResult.hits()); } - results.put(entry.getKey(), fetchResult.hits()); } - hitContext.hit().setInnerHits(results); } } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/MatchedQueriesFetchSubPhase.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/MatchedQueriesFetchSubPhase.java index 56223b1ec4617..c28e07ff45526 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/MatchedQueriesFetchSubPhase.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/MatchedQueriesFetchSubPhase.java @@ -22,13 +22,13 @@ import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.ReaderUtil; import org.apache.lucene.search.Query; -import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.ScorerSupplier; import org.apache.lucene.search.Weight; import org.apache.lucene.util.Bits; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.search.fetch.FetchSubPhase; -import org.elasticsearch.search.internal.InternalSearchHit; +import org.elasticsearch.search.SearchHit; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.internal.SearchContext.Lifetime; @@ -42,7 +42,7 @@ public final class MatchedQueriesFetchSubPhase implements FetchSubPhase { @Override - public void hitsExecute(SearchContext context, InternalSearchHit[] hits) { + public void hitsExecute(SearchContext context, SearchHit[] hits) { if (hits.length == 0 || // in case the request has only suggest, parsed query is null context.parsedQuery() == null) { @@ -71,15 +71,15 @@ public void hitsExecute(SearchContext context, InternalSearchHit[] hits) { Bits matchingDocs = null; final IndexReader indexReader = context.searcher().getIndexReader(); for (int i = 0; i < hits.length; ++i) { - InternalSearchHit hit = hits[i]; + SearchHit hit = hits[i]; int hitReaderIndex = ReaderUtil.subIndex(hit.docId(), indexReader.leaves()); if (readerIndex != hitReaderIndex) { readerIndex = hitReaderIndex; LeafReaderContext ctx = indexReader.leaves().get(readerIndex); docBase = ctx.docBase; // scorers can be costly to create, so reuse them across docs of the same segment - Scorer scorer = weight.scorer(ctx); - matchingDocs = Lucene.asSequentialAccessBits(ctx.reader().maxDoc(), scorer); + ScorerSupplier scorerSupplier = weight.scorerSupplier(ctx); + matchingDocs = Lucene.asSequentialAccessBits(ctx.reader().maxDoc(), scorerSupplier); } if (matchingDocs.get(hit.docId() - docBase)) { matchedQueries[i].add(name); @@ -92,7 +92,7 @@ public void hitsExecute(SearchContext context, InternalSearchHit[] hits) { } catch (IOException e) { throw 
ExceptionsHelper.convertToElastic(e); } finally { - SearchContext.current().clearReleasables(Lifetime.COLLECTION); + context.clearReleasables(Lifetime.COLLECTION); } } } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/ParentFieldSubFetchPhase.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/ParentFieldSubFetchPhase.java index ccfbf3515fc35..9cfab2ff9812b 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/ParentFieldSubFetchPhase.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/ParentFieldSubFetchPhase.java @@ -26,7 +26,6 @@ import org.elasticsearch.index.mapper.ParentFieldMapper; import org.elasticsearch.search.SearchHitField; import org.elasticsearch.search.fetch.FetchSubPhase; -import org.elasticsearch.search.internal.InternalSearchHitField; import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; @@ -47,20 +46,28 @@ public void hitExecute(SearchContext context, HitContext hitContext) { } String parentId = getParentId(parentFieldMapper, hitContext.reader(), hitContext.docId()); + if (parentId == null) { + // hit has no _parent field. Can happen for nested inner hits if parent hit is a p/c document. + return; + } + Map fields = hitContext.hit().fieldsOrNull(); if (fields == null) { fields = new HashMap<>(); hitContext.hit().fields(fields); } - fields.put(ParentFieldMapper.NAME, new InternalSearchHitField(ParentFieldMapper.NAME, Collections.singletonList(parentId))); + fields.put(ParentFieldMapper.NAME, new SearchHitField(ParentFieldMapper.NAME, Collections.singletonList(parentId))); } public static String getParentId(ParentFieldMapper fieldMapper, LeafReader reader, int docId) { try { SortedDocValues docValues = reader.getSortedDocValues(fieldMapper.name()); + if (docValues == null) { + // hit has no _parent field. + return null; + } BytesRef parentId = docValues.get(docId); - assert parentId.length > 0; - return parentId.utf8ToString(); + return parentId.length > 0 ? 
parentId.utf8ToString() : null; } catch (IOException e) { throw ExceptionsHelper.convertToElastic(e); } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/ScriptFieldsFetchSubPhase.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/ScriptFieldsFetchSubPhase.java index 80638860f6cfa..d5f587db29dab 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/ScriptFieldsFetchSubPhase.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/ScriptFieldsFetchSubPhase.java @@ -21,7 +21,6 @@ import org.elasticsearch.script.LeafSearchScript; import org.elasticsearch.search.SearchHitField; import org.elasticsearch.search.fetch.FetchSubPhase; -import org.elasticsearch.search.internal.InternalSearchHitField; import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; @@ -70,7 +69,7 @@ public void hitExecute(SearchContext context, HitContext hitContext) { } else { values = Collections.singletonList(value); } - hitField = new InternalSearchHitField(scriptField.name(), values); + hitField = new SearchHitField(scriptField.name(), values); hitContext.hit().fields().put(scriptField.name(), hitField); } } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/AbstractHighlighterBuilder.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/AbstractHighlighterBuilder.java index 72bd436a88cb2..4699843153f25 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/AbstractHighlighterBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/AbstractHighlighterBuilder.java @@ -21,6 +21,7 @@ import org.apache.lucene.search.highlight.SimpleFragmenter; import org.apache.lucene.search.highlight.SimpleSpanFragmenter; +import org.elasticsearch.Version; import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; @@ -32,10 +33,12 @@ import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder.BoundaryScannerType; import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder.Order; import java.io.IOException; import java.util.Arrays; +import java.util.Locale; import java.util.Map; import java.util.Objects; import java.util.function.BiFunction; @@ -57,8 +60,10 @@ public abstract class AbstractHighlighterBuilderfvh this setting + * controls which scanner to use for fragment boundaries, and defaults to "simple". + */ + @SuppressWarnings("unchecked") + public HB boundaryScannerType(String boundaryScannerType) { + this.boundaryScannerType = BoundaryScannerType.fromString(boundaryScannerType); + return (HB) this; + } + + /** + * When using the highlighterType fvh this setting + * controls which scanner to use for fragment boundaries, and defaults to "simple". + */ + @SuppressWarnings("unchecked") + public HB boundaryScannerType(BoundaryScannerType boundaryScannerType) { + this.boundaryScannerType = boundaryScannerType; + return (HB) this; + } + + /** + * @return the value set by {@link #boundaryScannerType(String)} + */ + public BoundaryScannerType boundaryScannerType() { + return this.boundaryScannerType; + } + /** * When using the highlighterType fvh this setting * controls how far to look for boundary characters, and defaults to 20. 
@@ -366,6 +420,25 @@ public char[] boundaryChars() { return this.boundaryChars; } + /** + * When using the highlighterType fvh and boundaryScannerType break_iterator, this setting + * controls the locale to use by the BreakIterator, defaults to "root". + */ + @SuppressWarnings("unchecked") + public HB boundaryScannerLocale(String boundaryScannerLocale) { + if (boundaryScannerLocale != null) { + this.boundaryScannerLocale = Locale.forLanguageTag(boundaryScannerLocale); + } + return (HB) this; + } + + /** + * @return the value set by {@link #boundaryScannerLocale(String)} + */ + public Locale boundaryScannerLocale() { + return this.boundaryScannerLocale; + } + /** * Allows to set custom options for custom highlighters. */ @@ -491,12 +564,18 @@ void commonOptionsToXContent(XContentBuilder builder) throws IOException { if (highlightFilter != null) { builder.field(HIGHLIGHT_FILTER_FIELD.getPreferredName(), highlightFilter); } + if (boundaryScannerType != null) { + builder.field(BOUNDARY_SCANNER_FIELD.getPreferredName(), boundaryScannerType.name()); + } if (boundaryMaxScan != null) { builder.field(BOUNDARY_MAX_SCAN_FIELD.getPreferredName(), boundaryMaxScan); } if (boundaryChars != null) { builder.field(BOUNDARY_CHARS_FIELD.getPreferredName(), new String(boundaryChars)); } + if (boundaryScannerLocale != null) { + builder.field(BOUNDARY_SCANNER_LOCALE_FIELD.getPreferredName(), boundaryScannerLocale.toLanguageTag()); + } if (options != null && options.size() > 0) { builder.field(OPTIONS_FIELD.getPreferredName(), options); } @@ -523,8 +602,10 @@ static > BiFunction hb.boundaryChars(bc.toCharArray()) , BOUNDARY_CHARS_FIELD); + parser.declareString(HB::boundaryScannerLocale, BOUNDARY_SCANNER_LOCALE_FIELD); parser.declareString(HB::highlighterType, TYPE_FIELD); parser.declareString(HB::fragmenter, FRAGMENTER_FIELD); parser.declareInt(HB::noMatchSize, NO_MATCH_SIZE_FIELD); @@ -562,8 +643,8 @@ static > BiFunction terms) throws IOException { - if (query instanceof FunctionScoreQuery) { - query = ((FunctionScoreQuery) query).getSubQuery(); - extract(query, 1F, terms); - } else if (query instanceof FiltersFunctionScoreQuery) { - query = ((FiltersFunctionScoreQuery) query).getSubQuery(); - extract(query, 1F, terms); - } else if (terms.isEmpty()) { + if (terms.isEmpty()) { extractWeightedTerms(terms, query, 1F); } } protected void extract(Query query, float boost, Map terms) throws IOException { - if (query instanceof GeoPointInBBoxQuery) { - // skip all geo queries, see https://issues.apache.org/jira/browse/LUCENE-7293 and - // https://github.com/elastic/elasticsearch/issues/17537 - return; - } else if (query instanceof HasChildQueryBuilder.LateParsingQuery) { + if (isChildOrParentQuery(query.getClass())) { // skip has_child or has_parent queries, see: https://github.com/elastic/elasticsearch/issues/14999 return; + } else if (query instanceof FunctionScoreQuery) { + super.extract(((FunctionScoreQuery) query).getSubQuery(), boost, terms); + } else if (query instanceof FiltersFunctionScoreQuery) { + super.extract(((FiltersFunctionScoreQuery) query).getSubQuery(), boost, terms); + } else if (query instanceof ESToParentBlockJoinQuery) { + super.extract(((ESToParentBlockJoinQuery) query).getChildQuery(), boost, terms); + } else { + super.extract(query, boost, terms); } + } - super.extract(query, boost, terms); + /** + * Workaround to detect parent/child query + */ + private static final String PARENT_CHILD_QUERY_NAME = "HasChildQueryBuilder$LateParsingQuery"; + private static boolean isChildOrParentQuery(Class 
clazz) { + return clazz.getName().endsWith(PARENT_CHILD_QUERY_NAME); } } } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/FastVectorHighlighter.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/FastVectorHighlighter.java index 873567de44e51..1a2f3cbf78a78 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/FastVectorHighlighter.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/FastVectorHighlighter.java @@ -21,6 +21,7 @@ import org.apache.lucene.search.highlight.Encoder; import org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder; import org.apache.lucene.search.vectorhighlight.BoundaryScanner; +import org.apache.lucene.search.vectorhighlight.BreakIteratorBoundaryScanner; import org.apache.lucene.search.vectorhighlight.CustomFieldQuery; import org.apache.lucene.search.vectorhighlight.FieldFragList; import org.apache.lucene.search.vectorhighlight.FieldPhraseList.WeightedPhraseInfo; @@ -32,15 +33,20 @@ import org.apache.lucene.search.vectorhighlight.SimpleFieldFragList; import org.apache.lucene.search.vectorhighlight.SimpleFragListBuilder; import org.apache.lucene.search.vectorhighlight.SingleFragListBuilder; +import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.text.Text; import org.elasticsearch.index.mapper.FieldMapper; import org.elasticsearch.search.fetch.FetchPhaseExecutionException; import org.elasticsearch.search.fetch.FetchSubPhase; +import org.elasticsearch.search.fetch.subphase.highlight.SearchContextHighlight.Field; +import org.elasticsearch.search.fetch.subphase.highlight.SearchContextHighlight.FieldOptions; import org.elasticsearch.search.internal.SearchContext; +import java.text.BreakIterator; import java.util.Collections; import java.util.HashMap; +import java.util.Locale; import java.util.Map; /** @@ -48,13 +54,20 @@ */ public class FastVectorHighlighter implements Highlighter { - private static final SimpleBoundaryScanner DEFAULT_BOUNDARY_SCANNER = new SimpleBoundaryScanner(); + private static final BoundaryScanner DEFAULT_SIMPLE_BOUNDARY_SCANNER = new SimpleBoundaryScanner(); + private static final BoundaryScanner DEFAULT_SENTENCE_BOUNDARY_SCANNER = + new BreakIteratorBoundaryScanner(BreakIterator.getSentenceInstance(Locale.ROOT)); + private static final BoundaryScanner DEFAULT_WORD_BOUNDARY_SCANNER = + new BreakIteratorBoundaryScanner(BreakIterator.getWordInstance(Locale.ROOT)); + + public static final Setting SETTING_TV_HIGHLIGHT_MULTI_VALUE = + Setting.boolSetting("search.highlight.term_vector_multi_value", true, Setting.Property.NodeScope); private static final String CACHE_KEY = "highlight-fsv"; private final Boolean termVectorMultiValue; public FastVectorHighlighter(Settings settings) { - this.termVectorMultiValue = settings.getAsBoolean("search.highlight.term_vector_multi_value", true); + this.termVectorMultiValue = SETTING_TV_HIGHLIGHT_MULTI_VALUE.get(settings); } @Override @@ -65,11 +78,12 @@ public HighlightField highlight(HighlighterContext highlighterContext) { FieldMapper mapper = highlighterContext.mapper; if (canHighlight(mapper) == false) { - throw new IllegalArgumentException("the field [" + highlighterContext.fieldName - + "] should be indexed with term vector with position offsets to be used with fast vector highlighter"); + throw new IllegalArgumentException("the field [" + highlighterContext.fieldName + + "] should be indexed with term vector 
with position offsets to be used with fast vector highlighter"); } - Encoder encoder = field.fieldOptions().encoder().equals("html") ? HighlightUtils.Encoders.HTML : HighlightUtils.Encoders.DEFAULT; + Encoder encoder = field.fieldOptions().encoder().equals("html") ? + HighlightUtils.Encoders.HTML : HighlightUtils.Encoders.DEFAULT; if (!hitContext.cache().containsKey(CACHE_KEY)) { hitContext.cache().put(CACHE_KEY, new HighlighterEntry()); @@ -77,40 +91,12 @@ public HighlightField highlight(HighlighterContext highlighterContext) { HighlighterEntry cache = (HighlighterEntry) hitContext.cache().get(CACHE_KEY); try { - FieldQuery fieldQuery; - if (field.fieldOptions().requireFieldMatch()) { - if (cache.fieldMatchFieldQuery == null) { - /* - * we use top level reader to rewrite the query against all readers, with use caching it across hits (and across - * readers...) - */ - cache.fieldMatchFieldQuery = new CustomFieldQuery(highlighterContext.query, hitContext.topLevelReader(), - true, field.fieldOptions().requireFieldMatch()); - } - fieldQuery = cache.fieldMatchFieldQuery; - } else { - if (cache.noFieldMatchFieldQuery == null) { - /* - * we use top level reader to rewrite the query against all readers, with use caching it across hits (and across - * readers...) - */ - cache.noFieldMatchFieldQuery = new CustomFieldQuery(highlighterContext.query, hitContext.topLevelReader(), - true, field.fieldOptions().requireFieldMatch()); - } - fieldQuery = cache.noFieldMatchFieldQuery; - } - MapperHighlightEntry entry = cache.mappers.get(mapper); if (entry == null) { FragListBuilder fragListBuilder; BaseFragmentsBuilder fragmentsBuilder; - BoundaryScanner boundaryScanner = DEFAULT_BOUNDARY_SCANNER; - if (field.fieldOptions().boundaryMaxScan() != SimpleBoundaryScanner.DEFAULT_MAX_SCAN - || field.fieldOptions().boundaryChars() != SimpleBoundaryScanner.DEFAULT_BOUNDARY_CHARS) { - boundaryScanner = new SimpleBoundaryScanner(field.fieldOptions().boundaryMaxScan(), - field.fieldOptions().boundaryChars()); - } + final BoundaryScanner boundaryScanner = getBoundaryScanner(field); boolean forceSource = context.highlight().forceSource(field); if (field.fieldOptions().numberOfFragments() == 0) { fragListBuilder = new SingleFragListBuilder(); @@ -124,7 +110,7 @@ public HighlightField highlight(HighlighterContext highlighterContext) { } } else { fragListBuilder = field.fieldOptions().fragmentOffset() == -1 ? 
- new SimpleFragListBuilder() : new SimpleFragListBuilder(field.fieldOptions().fragmentOffset()); + new SimpleFragListBuilder() : new SimpleFragListBuilder(field.fieldOptions().fragmentOffset()); if (field.fieldOptions().scoreOrdered()) { if (!forceSource && mapper.fieldType().stored()) { fragmentsBuilder = new ScoreOrderFragmentsBuilder(field.fieldOptions().preTags(), @@ -138,24 +124,46 @@ public HighlightField highlight(HighlighterContext highlighterContext) { fragmentsBuilder = new SimpleFragmentsBuilder(mapper, field.fieldOptions().preTags(), field.fieldOptions().postTags(), boundaryScanner); } else { - fragmentsBuilder = new SourceSimpleFragmentsBuilder(mapper, context, field.fieldOptions().preTags(), + fragmentsBuilder = + new SourceSimpleFragmentsBuilder(mapper, context, field.fieldOptions().preTags(), field.fieldOptions().postTags(), boundaryScanner); } } } fragmentsBuilder.setDiscreteMultiValueHighlighting(termVectorMultiValue); entry = new MapperHighlightEntry(); + if (field.fieldOptions().requireFieldMatch()) { + /** + * we use top level reader to rewrite the query against all readers, + * with use caching it across hits (and across readers...) + */ + entry.fieldMatchFieldQuery = new CustomFieldQuery(highlighterContext.query, + hitContext.topLevelReader(), true, field.fieldOptions().requireFieldMatch()); + } else { + /** + * we use top level reader to rewrite the query against all readers, + * with use caching it across hits (and across readers...) + */ + entry.noFieldMatchFieldQuery = new CustomFieldQuery(highlighterContext.query, + hitContext.topLevelReader(), true, field.fieldOptions().requireFieldMatch()); + } entry.fragListBuilder = fragListBuilder; entry.fragmentsBuilder = fragmentsBuilder; if (cache.fvh == null) { // parameters to FVH are not requires since: - // first two booleans are not relevant since they are set on the CustomFieldQuery (phrase and fieldMatch) - // fragment builders are used explicitly + // first two booleans are not relevant since they are set on the CustomFieldQuery + // (phrase and fieldMatch) fragment builders are used explicitly cache.fvh = new org.apache.lucene.search.vectorhighlight.FastVectorHighlighter(); } CustomFieldQuery.highlightFilters.set(field.fieldOptions().highlightFilter()); cache.mappers.put(mapper, entry); } + final FieldQuery fieldQuery; + if (field.fieldOptions().requireFieldMatch()) { + fieldQuery = entry.fieldMatchFieldQuery; + } else { + fieldQuery = entry.noFieldMatchFieldQuery; + } cache.fvh.setPhraseLimit(field.fieldOptions().phraseLimit()); String[] fragments; @@ -168,13 +176,14 @@ public HighlightField highlight(HighlighterContext highlighterContext) { // we highlight against the low level reader and docId, because if we load source, we want to reuse it if possible // Only send matched fields if they were requested to save time. 
if (field.fieldOptions().matchedFields() != null && !field.fieldOptions().matchedFields().isEmpty()) { - fragments = cache.fvh.getBestFragments(fieldQuery, hitContext.reader(), hitContext.docId(), mapper.fieldType().name(), - field.fieldOptions().matchedFields(), fragmentCharSize, numberOfFragments, entry.fragListBuilder, - entry.fragmentsBuilder, field.fieldOptions().preTags(), field.fieldOptions().postTags(), encoder); + fragments = cache.fvh.getBestFragments(fieldQuery, hitContext.reader(), hitContext.docId(), + mapper.fieldType().name(), field.fieldOptions().matchedFields(), fragmentCharSize, + numberOfFragments, entry.fragListBuilder, entry.fragmentsBuilder, field.fieldOptions().preTags(), + field.fieldOptions().postTags(), encoder); } else { - fragments = cache.fvh.getBestFragments(fieldQuery, hitContext.reader(), hitContext.docId(), mapper.fieldType().name(), - fragmentCharSize, numberOfFragments, entry.fragListBuilder, entry.fragmentsBuilder, field.fieldOptions().preTags(), - field.fieldOptions().postTags(), encoder); + fragments = cache.fvh.getBestFragments(fieldQuery, hitContext.reader(), hitContext.docId(), + mapper.fieldType().name(), fragmentCharSize, numberOfFragments, entry.fragListBuilder, + entry.fragmentsBuilder, field.fieldOptions().preTags(), field.fieldOptions().postTags(), encoder); } if (fragments != null && fragments.length > 0) { @@ -183,11 +192,13 @@ public HighlightField highlight(HighlighterContext highlighterContext) { int noMatchSize = highlighterContext.field.fieldOptions().noMatchSize(); if (noMatchSize > 0) { - // Essentially we just request that a fragment is built from 0 to noMatchSize using the normal fragmentsBuilder + // Essentially we just request that a fragment is built from 0 to noMatchSize using + // the normal fragmentsBuilder FieldFragList fieldFragList = new SimpleFieldFragList(-1 /*ignored*/); fieldFragList.add(0, noMatchSize, Collections.emptyList()); - fragments = entry.fragmentsBuilder.createFragments(hitContext.reader(), hitContext.docId(), mapper.fieldType().name(), - fieldFragList, 1, field.fieldOptions().preTags(), field.fieldOptions().postTags(), encoder); + fragments = entry.fragmentsBuilder.createFragments(hitContext.reader(), hitContext.docId(), + mapper.fieldType().name(), fieldFragList, 1, field.fieldOptions().preTags(), + field.fieldOptions().postTags(), encoder); if (fragments != null && fragments.length > 0) { return new HighlightField(highlighterContext.fieldName, Text.convertFromStringArray(fragments)); } @@ -196,7 +207,8 @@ public HighlightField highlight(HighlighterContext highlighterContext) { return null; } catch (Exception e) { - throw new FetchPhaseExecutionException(context, "Failed to highlight field [" + highlighterContext.fieldName + "]", e); + throw new FetchPhaseExecutionException(context, + "Failed to highlight field [" + highlighterContext.fieldName + "]", e); } } @@ -206,15 +218,45 @@ public boolean canHighlight(FieldMapper fieldMapper) { && fieldMapper.fieldType().storeTermVectorPositions(); } + private static BoundaryScanner getBoundaryScanner(Field field) { + final FieldOptions fieldOptions = field.fieldOptions(); + final Locale boundaryScannerLocale = + fieldOptions.boundaryScannerLocale() != null ? fieldOptions.boundaryScannerLocale() : + Locale.ROOT; + final HighlightBuilder.BoundaryScannerType type = + fieldOptions.boundaryScannerType() != null ? 
fieldOptions.boundaryScannerType() : + HighlightBuilder.BoundaryScannerType.CHARS; + switch(type) { + case SENTENCE: + if (boundaryScannerLocale != null) { + return new BreakIteratorBoundaryScanner(BreakIterator.getSentenceInstance(boundaryScannerLocale)); + } + return DEFAULT_SENTENCE_BOUNDARY_SCANNER; + case WORD: + if (boundaryScannerLocale != null) { + return new BreakIteratorBoundaryScanner(BreakIterator.getWordInstance(boundaryScannerLocale)); + } + return DEFAULT_WORD_BOUNDARY_SCANNER; + case CHARS: + if (fieldOptions.boundaryMaxScan() != SimpleBoundaryScanner.DEFAULT_MAX_SCAN + || fieldOptions.boundaryChars() != SimpleBoundaryScanner.DEFAULT_BOUNDARY_CHARS) { + return new SimpleBoundaryScanner(fieldOptions.boundaryMaxScan(), fieldOptions.boundaryChars()); + } + return DEFAULT_SIMPLE_BOUNDARY_SCANNER; + default: + throw new IllegalArgumentException("Invalid boundary scanner type: " + type.toString()); + } + } + private class MapperHighlightEntry { public FragListBuilder fragListBuilder; public FragmentsBuilder fragmentsBuilder; + public FieldQuery noFieldMatchFieldQuery; + public FieldQuery fieldMatchFieldQuery; } private class HighlighterEntry { public org.apache.lucene.search.vectorhighlight.FastVectorHighlighter fvh; - public FieldQuery noFieldMatchFieldQuery; - public FieldQuery fieldMatchFieldQuery; public Map mappers = new HashMap<>(); } } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/HighlightBuilder.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/HighlightBuilder.java index fe4587826c7b8..5e49aa7395d57 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/HighlightBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/HighlightBuilder.java @@ -97,7 +97,7 @@ public class HighlightBuilder extends AbstractHighlighterBuilder fields = new ArrayList<>(); @@ -327,12 +327,18 @@ private static void transferOptions(AbstractHighlighterBuilder highlighterBuilde if (highlighterBuilder.requireFieldMatch != null) { targetOptionsBuilder.requireFieldMatch(highlighterBuilder.requireFieldMatch); } + if (highlighterBuilder.boundaryScannerType != null) { + targetOptionsBuilder.boundaryScannerType(highlighterBuilder.boundaryScannerType); + } if (highlighterBuilder.boundaryMaxScan != null) { targetOptionsBuilder.boundaryMaxScan(highlighterBuilder.boundaryMaxScan); } if (highlighterBuilder.boundaryChars != null) { targetOptionsBuilder.boundaryChars(convertCharArray(highlighterBuilder.boundaryChars)); } + if (highlighterBuilder.boundaryScannerLocale != null) { + targetOptionsBuilder.boundaryScannerLocale(highlighterBuilder.boundaryScannerLocale); + } if (highlighterBuilder.highlighterType != null) { targetOptionsBuilder.highlighterType(highlighterBuilder.highlighterType); } @@ -476,7 +482,7 @@ public void innerXContent(XContentBuilder builder) throws IOException { builder.field(FRAGMENT_OFFSET_FIELD.getPreferredName(), fragmentOffset); } if (matchedFields != null) { - builder.field(MATCHED_FIELDS_FIELD.getPreferredName(), matchedFields); + builder.array(MATCHED_FIELDS_FIELD.getPreferredName(), matchedFields); } builder.endObject(); } @@ -498,16 +504,12 @@ public enum Order implements Writeable { NONE, SCORE; public static Order readFromStream(StreamInput in) throws IOException { - int ordinal = in.readVInt(); - if (ordinal < 0 || ordinal >= values().length) { - throw new IOException("Unknown Order ordinal [" + ordinal + "]"); - } - return values()[ordinal]; + return 
in.readEnum(Order.class); } @Override public void writeTo(StreamOutput out) throws IOException { - out.writeVInt(this.ordinal()); + out.writeEnum(this); } public static Order fromString(String order) { @@ -522,4 +524,26 @@ public String toString() { return name().toLowerCase(Locale.ROOT); } } + + public enum BoundaryScannerType implements Writeable { + CHARS, WORD, SENTENCE; + + public static BoundaryScannerType readFromStream(StreamInput in) throws IOException { + return in.readEnum(BoundaryScannerType.class); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeEnum(this); + } + + public static BoundaryScannerType fromString(String boundaryScannerType) { + return valueOf(boundaryScannerType.toUpperCase(Locale.ROOT)); + } + + @Override + public String toString() { + return name().toLowerCase(Locale.ROOT); + } + } } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/HighlightField.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/HighlightField.java index 91fde32c8885d..7ff2147868a5b 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/HighlightField.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/HighlightField.java @@ -19,18 +19,27 @@ package org.elasticsearch.search.fetch.subphase.highlight; +import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.text.Text; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import java.io.IOException; +import java.util.ArrayList; import java.util.Arrays; +import java.util.List; +import java.util.Objects; + +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; /** * A field highlighted with its highlighted fragments. 
*/ -public class HighlightField implements Streamable { +public class HighlightField implements ToXContent, Streamable { private String name; @@ -40,7 +49,7 @@ public class HighlightField implements Streamable { } public HighlightField(String name, Text[] fragments) { - this.name = name; + this.name = Objects.requireNonNull(name, "missing highlight field name"); this.fragments = fragments; } @@ -112,4 +121,57 @@ public void writeTo(StreamOutput out) throws IOException { } } } + + public static HighlightField fromXContent(XContentParser parser) throws IOException { + ensureExpectedToken(XContentParser.Token.FIELD_NAME, parser.currentToken(), parser::getTokenLocation); + String fieldName = parser.currentName(); + Text[] fragments = null; + XContentParser.Token token = parser.nextToken(); + if (token == XContentParser.Token.START_ARRAY) { + List values = new ArrayList<>(); + while (parser.nextToken() != XContentParser.Token.END_ARRAY) { + values.add(new Text(parser.text())); + } + fragments = values.toArray(new Text[values.size()]); + } else if (token == XContentParser.Token.VALUE_NULL) { + fragments = null; + } else { + throw new ParsingException(parser.getTokenLocation(), + "unexpected token type [" + token + "]"); + } + return new HighlightField(fieldName, fragments); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.field(name); + if (fragments == null) { + builder.nullValue(); + } else { + builder.startArray(); + for (Text fragment : fragments) { + builder.value(fragment); + } + builder.endArray(); + } + return builder; + } + + @Override + public final boolean equals(Object obj) { + if (this == obj) { + return true; + } + if (obj == null || getClass() != obj.getClass()) { + return false; + } + HighlightField other = (HighlightField) obj; + return Objects.equals(name, other.name) && Arrays.equals(fragments, other.fragments); + } + + @Override + public final int hashCode() { + return Objects.hash(name, Arrays.hashCode(fragments)); + } + } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/HighlightPhase.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/HighlightPhase.java index 84890857c793f..cef765862f5e7 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/HighlightPhase.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/HighlightPhase.java @@ -29,6 +29,7 @@ import org.elasticsearch.index.mapper.SourceFieldMapper; import org.elasticsearch.index.mapper.StringFieldMapper; import org.elasticsearch.index.mapper.TextFieldMapper; +import org.elasticsearch.search.SearchHit; import org.elasticsearch.search.fetch.FetchSubPhase; import org.elasticsearch.search.internal.SearchContext; @@ -68,13 +69,13 @@ public void hitExecute(SearchContext context, HitContext hitContext) { SourceFieldMapper sourceFieldMapper = context.mapperService().documentMapper(hitContext.hit().type()).sourceMapper(); if (!sourceFieldMapper.enabled()) { throw new IllegalArgumentException("source is forced for fields " + fieldNamesToHighlight - + " but type [" + hitContext.hit().type() + "] has disabled _source"); + + " but type [" + hitContext.hit().type() + "] has disabled _source"); } } boolean fieldNameContainsWildcards = field.field().contains("*"); for (String fieldName : fieldNamesToHighlight) { - FieldMapper fieldMapper = getMapperForField(fieldName, context, hitContext); + FieldMapper fieldMapper = getMapperForField(fieldName, 
context, hitContext.hit()); if (fieldMapper == null) { continue; } @@ -107,7 +108,7 @@ public void hitExecute(SearchContext context, HitContext hitContext) { Highlighter highlighter = highlighters.get(highlighterType); if (highlighter == null) { throw new IllegalArgumentException("unknown highlighter type [" + highlighterType - + "] for the field [" + fieldName + "]"); + + "] for the field [" + fieldName + "]"); } Query highlightQuery = field.fieldOptions().highlightQuery(); @@ -115,7 +116,7 @@ public void hitExecute(SearchContext context, HitContext hitContext) { highlightQuery = context.parsedQuery().query(); } HighlighterContext highlighterContext = new HighlighterContext(fieldName, field, fieldMapper, context, - hitContext, highlightQuery); + hitContext, highlightQuery); if ((highlighter.canHighlight(fieldMapper) == false) && fieldNameContainsWildcards) { // if several fieldnames matched the wildcard then we want to skip those that we cannot highlight @@ -130,9 +131,85 @@ public void hitExecute(SearchContext context, HitContext hitContext) { hitContext.hit().highlightFields(highlightFields); } - private FieldMapper getMapperForField(String fieldName, SearchContext searchContext, HitContext hitContext) { - DocumentMapper documentMapper = searchContext.mapperService().documentMapper(hitContext.hit().type()); + @Override + public void hitsExecute(SearchContext context, SearchHit[] hits) { + if (context.highlight() == null) { + return; + } + + /** + * This is used to make sure that we log the deprecation of the {@link PostingsHighlighter} + * only once per request even though multiple documents/fields use the deprecated highlighter. + */ + for (SearchHit hit : hits) { + for (SearchContextHighlight.Field field : context.highlight().fields()) { + for (String fieldName : getFieldsToHighlight(context, field, hit)) { + FieldMapper fieldMapper = getMapperForField(fieldName, context, hit); + if (fieldMapper == null) { + continue; + } + + String highlighterType = field.fieldOptions().highlighterType(); + Highlighter highlighter = getHighlighterForField(fieldMapper, highlighterType); + if (highlighter instanceof PostingsHighlighter) { + deprecationLogger.deprecated("[postings] highlighter is deprecated, please use [unified] instead"); + return; + } + } + } + } + } + + /** + * Returns the list of field names that match the provided field definition in the mapping of the {@link SearchHit} + */ + private Collection getFieldsToHighlight(SearchContext context, SearchContextHighlight.Field field, SearchHit hit) { + Collection fieldNamesToHighlight; + if (Regex.isSimpleMatchPattern(field.field())) { + DocumentMapper documentMapper = context.mapperService().documentMapper(hit.type()); + fieldNamesToHighlight = documentMapper.mappers().simpleMatchToFullName(field.field()); + } else { + fieldNamesToHighlight = Collections.singletonList(field.field()); + } + + if (context.highlight().forceSource(field)) { + SourceFieldMapper sourceFieldMapper = context.mapperService().documentMapper(hit.type()).sourceMapper(); + if (!sourceFieldMapper.enabled()) { + throw new IllegalArgumentException("source is forced for fields " + fieldNamesToHighlight + + " but type [" + hit.type() + "] has disabled _source"); + } + } + return fieldNamesToHighlight; + } + + /** + * Returns the {@link FieldMapper} associated with the provided fieldName + */ + private FieldMapper getMapperForField(String fieldName, SearchContext searchContext, SearchHit hit) { + DocumentMapper documentMapper = 
searchContext.mapperService().documentMapper(hit.getType()); // TODO: no need to lookup the doc mapper with unambiguous field names? just look at the mapper service return documentMapper.mappers().smartNameFieldMapper(fieldName); } + + /** + * Returns the {@link Highlighter} for the given {@link FieldMapper} + */ + private Highlighter getHighlighterForField(FieldMapper fieldMapper, String name) { + if (name == null) { + for (String highlighterCandidate : STANDARD_HIGHLIGHTERS_BY_PRECEDENCE) { + if (highlighters.get(highlighterCandidate).canHighlight(fieldMapper)) { + name = highlighterCandidate; + break; + } + } + assert name != null; + } + Highlighter highlighter = highlighters.get(name); + if (highlighter == null) { + throw new IllegalArgumentException("unknown highlighter type [" + name + + "] for the field [" + fieldMapper.name() + "]"); + } + return highlighter; + } } + diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/PlainHighlighter.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/PlainHighlighter.java index 631d716f6f77a..f5b8abc8b4ae4 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/PlainHighlighter.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/PlainHighlighter.java @@ -31,6 +31,7 @@ import org.apache.lucene.search.highlight.SimpleHTMLFormatter; import org.apache.lucene.search.highlight.SimpleSpanFragmenter; import org.apache.lucene.search.highlight.TextFragment; +import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.BytesRefHash; import org.apache.lucene.util.CollectionUtil; import org.elasticsearch.ExceptionsHelper; @@ -109,7 +110,12 @@ public HighlightField highlight(HighlighterContext highlighterContext) { textsToHighlight = HighlightUtils.loadFieldValues(field, mapper, context, hitContext); for (Object textToHighlight : textsToHighlight) { - String text = textToHighlight.toString(); + String text; + if (textToHighlight instanceof BytesRef) { + text = mapper.fieldType().valueForDisplay(textToHighlight).toString(); + } else { + text = textToHighlight.toString(); + } try (TokenStream tokenStream = analyzer.tokenStream(mapper.fieldType().name(), text)) { if (!tokenStream.hasAttribute(CharTermAttribute.class) || !tokenStream.hasAttribute(OffsetAttribute.class)) { diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/PostingsHighlighter.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/PostingsHighlighter.java index 7ed50c7a1ddcc..6e15fe930b547 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/PostingsHighlighter.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/PostingsHighlighter.java @@ -25,7 +25,7 @@ import org.apache.lucene.search.postingshighlight.CustomPassageFormatter; import org.apache.lucene.search.postingshighlight.CustomPostingsHighlighter; import org.apache.lucene.search.postingshighlight.CustomSeparatorBreakIterator; -import org.apache.lucene.search.postingshighlight.Snippet; +import org.apache.lucene.search.highlight.Snippet; import org.apache.lucene.util.CollectionUtil; import org.elasticsearch.common.Strings; import org.elasticsearch.common.text.Text; @@ -44,6 +44,7 @@ import java.util.Locale; import java.util.Map; +@Deprecated public class PostingsHighlighter implements Highlighter { private static final String CACHE_KEY = "highlight-postings"; @@ -139,14 +140,14 @@ public boolean canHighlight(FieldMapper 
fieldMapper) { return fieldMapper.fieldType().indexOptions() == IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS; } - private static String mergeFieldValues(List fieldValues, char valuesSeparator) { + static String mergeFieldValues(List fieldValues, char valuesSeparator) { //postings highlighter accepts all values in a single string, as offsets etc. need to match with content //loaded from stored fields, we merge all values using a proper separator String rawValue = Strings.collectionToDelimitedString(fieldValues, String.valueOf(valuesSeparator)); return rawValue.substring(0, Math.min(rawValue.length(), Integer.MAX_VALUE - 1)); } - private static List filterSnippets(List snippets, int numberOfFragments) { + static List filterSnippets(List snippets, int numberOfFragments) { //We need to filter the snippets as due to no_match_size we could have //either highlighted snippets or non highlighted ones and we don't want to mix those up @@ -181,11 +182,11 @@ private static List filterSnippets(List snippets, int numberOf return filteredSnippets; } - private static class HighlighterEntry { + static class HighlighterEntry { Map mappers = new HashMap<>(); } - private static class MapperHighlighterEntry { + static class MapperHighlighterEntry { final CustomPassageFormatter passageFormatter; private MapperHighlighterEntry(CustomPassageFormatter passageFormatter) { diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/SearchContextHighlight.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/SearchContextHighlight.java index 9f2074d741248..7e7a1c62592d1 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/SearchContextHighlight.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/SearchContextHighlight.java @@ -20,11 +20,13 @@ package org.elasticsearch.search.fetch.subphase.highlight; import org.apache.lucene.search.Query; +import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder.BoundaryScannerType; import java.util.Arrays; import java.util.Collection; import java.util.HashMap; import java.util.LinkedHashMap; +import java.util.Locale; import java.util.Map; import java.util.Set; @@ -113,10 +115,14 @@ public static class FieldOptions { private String fragmenter; + private BoundaryScannerType boundaryScannerType; + private int boundaryMaxScan = -1; private Character[] boundaryChars = null; + private Locale boundaryScannerLocale; + private Query highlightQuery; private int noMatchSize = -1; @@ -171,6 +177,10 @@ public String fragmenter() { return fragmenter; } + public BoundaryScannerType boundaryScannerType() { + return boundaryScannerType; + } + public int boundaryMaxScan() { return boundaryMaxScan; } @@ -179,6 +189,10 @@ public Character[] boundaryChars() { return boundaryChars; } + public Locale boundaryScannerLocale() { + return boundaryScannerLocale; + } + public Query highlightQuery() { return highlightQuery; } @@ -263,6 +277,11 @@ Builder fragmenter(String fragmenter) { return this; } + Builder boundaryScannerType(BoundaryScannerType boundaryScanner) { + fieldOptions.boundaryScannerType = boundaryScanner; + return this; + } + Builder boundaryMaxScan(int boundaryMaxScan) { fieldOptions.boundaryMaxScan = boundaryMaxScan; return this; @@ -273,6 +292,11 @@ Builder boundaryChars(Character[] boundaryChars) { return this; } + Builder boundaryScannerLocale(Locale boundaryScannerLocale) { + fieldOptions.boundaryScannerLocale = boundaryScannerLocale; + return this; + } + Builder 
highlightQuery(Query highlightQuery) { fieldOptions.highlightQuery = highlightQuery; return this; @@ -327,12 +351,18 @@ Builder merge(FieldOptions globalOptions) { if (fieldOptions.requireFieldMatch == null) { fieldOptions.requireFieldMatch = globalOptions.requireFieldMatch; } + if (fieldOptions.boundaryScannerType == null) { + fieldOptions.boundaryScannerType = globalOptions.boundaryScannerType; + } if (fieldOptions.boundaryMaxScan == -1) { fieldOptions.boundaryMaxScan = globalOptions.boundaryMaxScan; } if (fieldOptions.boundaryChars == null && globalOptions.boundaryChars != null) { fieldOptions.boundaryChars = Arrays.copyOf(globalOptions.boundaryChars, globalOptions.boundaryChars.length); } + if (fieldOptions.boundaryScannerLocale == null) { + fieldOptions.boundaryScannerLocale = globalOptions.boundaryScannerLocale; + } if (fieldOptions.highlighterType == null) { fieldOptions.highlighterType = globalOptions.highlighterType; } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/UnifiedHighlighter.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/UnifiedHighlighter.java new file mode 100644 index 0000000000000..e5b16dacec70c --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/UnifiedHighlighter.java @@ -0,0 +1,202 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
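The `Builder.merge` changes above extend the existing pattern to the new boundary scanner options: a per-field value wins, and only unset values (null, or a `-1` sentinel for ints) inherit from the global options. A minimal sketch of that merge pattern, with simplified names and only two of the options, is shown below.

```java
import java.util.Locale;

final class OptionsMerging {
    static final class Options {
        Locale boundaryScannerLocale;   // null means "not set"
        int boundaryMaxScan = -1;       // -1 means "not set"
    }

    // Per-field settings take precedence; unset fields fall back to the global value.
    static Options merge(Options field, Options global) {
        if (field.boundaryScannerLocale == null) {
            field.boundaryScannerLocale = global.boundaryScannerLocale;
        }
        if (field.boundaryMaxScan == -1) {
            field.boundaryMaxScan = global.boundaryMaxScan;
        }
        return field;
    }
}
```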
+ */ +package org.elasticsearch.search.fetch.subphase.highlight; + +import org.apache.lucene.analysis.Analyzer; +import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.highlight.Encoder; +import org.apache.lucene.search.highlight.Snippet; +import org.apache.lucene.search.uhighlight.BoundedBreakIteratorScanner; +import org.apache.lucene.search.uhighlight.CustomPassageFormatter; +import org.apache.lucene.search.uhighlight.CustomUnifiedHighlighter; +import org.apache.lucene.search.uhighlight.UnifiedHighlighter.OffsetSource; +import org.apache.lucene.util.BytesRef; +import org.apache.lucene.util.CollectionUtil; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.text.Text; +import org.elasticsearch.index.mapper.FieldMapper; +import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.search.fetch.FetchPhaseExecutionException; +import org.elasticsearch.search.fetch.FetchSubPhase; +import org.elasticsearch.search.internal.SearchContext; + +import java.io.IOException; +import java.text.BreakIterator; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Locale; +import java.util.Map; +import java.util.stream.Collectors; + +import static org.apache.lucene.search.uhighlight.CustomUnifiedHighlighter.MULTIVAL_SEP_CHAR; +import static org.elasticsearch.search.fetch.subphase.highlight.PostingsHighlighter.filterSnippets; +import static org.elasticsearch.search.fetch.subphase.highlight.PostingsHighlighter.mergeFieldValues; + +public class UnifiedHighlighter implements Highlighter { + private static final String CACHE_KEY = "highlight-unified"; + + @Override + public boolean canHighlight(FieldMapper fieldMapper) { + return true; + } + + @Override + public HighlightField highlight(HighlighterContext highlighterContext) { + FieldMapper fieldMapper = highlighterContext.mapper; + SearchContextHighlight.Field field = highlighterContext.field; + SearchContext context = highlighterContext.context; + FetchSubPhase.HitContext hitContext = highlighterContext.hitContext; + + if (!hitContext.cache().containsKey(CACHE_KEY)) { + hitContext.cache().put(CACHE_KEY, new HighlighterEntry()); + } + + HighlighterEntry highlighterEntry = (HighlighterEntry) hitContext.cache().get(CACHE_KEY); + MapperHighlighterEntry mapperHighlighterEntry = highlighterEntry.mappers.get(fieldMapper); + + if (mapperHighlighterEntry == null) { + Encoder encoder = field.fieldOptions().encoder().equals("html") ? 
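Like the postings highlighter, the new `UnifiedHighlighter` keeps a per-hit cache (keyed by `CACHE_KEY`) holding one entry per field mapper, so the passage formatter is only built once while a single hit is highlighted. The snippet below is an illustrative, self-contained sketch of that two-level caching pattern under assumed, simplified names; it is not the actual implementation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

final class PerHitCache {
    // cache key -> (field name -> expensive per-field object such as a formatter)
    private final Map<String, Map<String, Object>> cache = new HashMap<>();

    Object formatterFor(String cacheKey, String field, Function<String, Object> build) {
        return cache.computeIfAbsent(cacheKey, k -> new HashMap<>())
                    .computeIfAbsent(field, build);
    }
}
```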
+ HighlightUtils.Encoders.HTML : HighlightUtils.Encoders.DEFAULT; + CustomPassageFormatter passageFormatter = + new CustomPassageFormatter(field.fieldOptions().preTags()[0], + field.fieldOptions().postTags()[0], encoder); + mapperHighlighterEntry = new MapperHighlighterEntry(passageFormatter); + } + + List snippets = new ArrayList<>(); + int numberOfFragments; + try { + Analyzer analyzer = + context.mapperService().documentMapper(hitContext.hit().type()).mappers().indexAnalyzer(); + List fieldValues = HighlightUtils.loadFieldValues(field, fieldMapper, context, hitContext); + fieldValues = fieldValues.stream().map(obj -> { + if (obj instanceof BytesRef) { + return fieldMapper.fieldType().valueForDisplay(obj).toString(); + } else { + return obj; + } + }).collect(Collectors.toList()); + final IndexSearcher searcher = new IndexSearcher(hitContext.reader()); + final CustomUnifiedHighlighter highlighter; + final String fieldValue = mergeFieldValues(fieldValues, MULTIVAL_SEP_CHAR); + final OffsetSource offsetSource = getOffsetSource(fieldMapper.fieldType()); + if (field.fieldOptions().numberOfFragments() == 0) { + // we use a control char to separate values, which is the only char that the custom break iterator + // breaks the text on, so we don't lose the distinction between the different values of a field and we + // get back a snippet per value + org.apache.lucene.search.postingshighlight.CustomSeparatorBreakIterator breakIterator = + new org.apache.lucene.search.postingshighlight + .CustomSeparatorBreakIterator(MULTIVAL_SEP_CHAR); + highlighter = new CustomUnifiedHighlighter(searcher, analyzer, offsetSource, + mapperHighlighterEntry.passageFormatter, field.fieldOptions().boundaryScannerLocale(), + breakIterator, fieldValue, field.fieldOptions().noMatchSize()); + numberOfFragments = fieldValues.size(); // we are highlighting the whole content, one snippet per value + } else { + //using paragraph separator we make sure that each field value holds a discrete passage for highlighting + BreakIterator bi = getBreakIterator(field); + highlighter = new CustomUnifiedHighlighter(searcher, analyzer, offsetSource, + mapperHighlighterEntry.passageFormatter, field.fieldOptions().boundaryScannerLocale(), bi, + fieldValue, field.fieldOptions().noMatchSize()); + numberOfFragments = field.fieldOptions().numberOfFragments(); + } + + if (field.fieldOptions().requireFieldMatch()) { + final String fieldName = highlighterContext.fieldName; + highlighter.setFieldMatcher((name) -> fieldName.equals(name)); + } else { + highlighter.setFieldMatcher((name) -> true); + } + + Snippet[] fieldSnippets = highlighter.highlightField(highlighterContext.fieldName, + highlighterContext.query, hitContext.docId(), numberOfFragments); + for (Snippet fieldSnippet : fieldSnippets) { + if (Strings.hasText(fieldSnippet.getText())) { + snippets.add(fieldSnippet); + } + } + } catch (IOException e) { + throw new FetchPhaseExecutionException(context, + "Failed to highlight field [" + highlighterContext.fieldName + "]", e); + } + + snippets = filterSnippets(snippets, field.fieldOptions().numberOfFragments()); + + if (field.fieldOptions().scoreOrdered()) { + //let's sort the snippets by score if needed + CollectionUtil.introSort(snippets, (o1, o2) -> Double.compare(o2.getScore(), o1.getScore())); + } + + String[] fragments = new String[snippets.size()]; + for (int i = 0; i < fragments.length; i++) { + fragments[i] = snippets.get(i).getText(); + } + + if (fragments.length > 0) { + return new HighlightField(highlighterContext.fieldName, 
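After highlighting, the code above optionally reorders the surviving snippets by score (`score_ordered`) before copying their text into the returned fragments. A small stand-alone sketch of that ordering step follows; the `Snippet` record here is a simplified stand-in for the Lucene class, which additionally carries a "highlighted" flag.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical stand-in for the Lucene Snippet: only text and score matter here.
record Snippet(String text, double score) {}

final class SnippetOrdering {
    // With score ordering the best-scoring fragments come first; otherwise
    // fragments stay in document order.
    static String[] toFragments(List<Snippet> snippets, boolean scoreOrdered) {
        List<Snippet> ordered = new ArrayList<>(snippets);
        if (scoreOrdered) {
            ordered.sort(Comparator.comparingDouble(Snippet::score).reversed());
        }
        return ordered.stream().map(Snippet::text).toArray(String[]::new);
    }
}
```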
Text.convertFromStringArray(fragments)); + } + return null; + } + + private BreakIterator getBreakIterator(SearchContextHighlight.Field field) { + final SearchContextHighlight.FieldOptions fieldOptions = field.fieldOptions(); + final Locale locale = + fieldOptions.boundaryScannerLocale() != null ? fieldOptions.boundaryScannerLocale() : + Locale.ROOT; + final HighlightBuilder.BoundaryScannerType type = + fieldOptions.boundaryScannerType() != null ? fieldOptions.boundaryScannerType() : + HighlightBuilder.BoundaryScannerType.SENTENCE; + int maxLen = fieldOptions.fragmentCharSize(); + switch (type) { + case SENTENCE: + if (maxLen > 0) { + return BoundedBreakIteratorScanner.getSentence(locale, maxLen); + } + return BreakIterator.getSentenceInstance(locale); + case WORD: + // ignore maxLen + return BreakIterator.getWordInstance(locale); + default: + throw new IllegalArgumentException("Invalid boundary scanner type: " + type.toString()); + } + } + + private OffsetSource getOffsetSource(MappedFieldType fieldType) { + if (fieldType.indexOptions() == IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS) { + return fieldType.storeTermVectors() ? OffsetSource.POSTINGS_WITH_TERM_VECTORS : OffsetSource.POSTINGS; + } + if (fieldType.storeTermVectorOffsets()) { + return OffsetSource.TERM_VECTORS; + } + return OffsetSource.ANALYSIS; + } + + private static class HighlighterEntry { + Map mappers = new HashMap<>(); + } + + private static class MapperHighlighterEntry { + final CustomPassageFormatter passageFormatter; + + private MapperHighlighterEntry(CustomPassageFormatter passageFormatter) { + this.passageFormatter = passageFormatter; + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/internal/AliasFilter.java b/core/src/main/java/org/elasticsearch/search/internal/AliasFilter.java new file mode 100644 index 0000000000000..ee598486acfad --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/internal/AliasFilter.java @@ -0,0 +1,143 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
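`getOffsetSource` above picks the cheapest offset source the field's mapping allows: offsets stored in postings first, term vectors next, and re-analysis of the stored text as the fallback. The decision tree is captured in this minimal sketch, using plain booleans in place of the `MappedFieldType` accessors.

```java
final class OffsetSourceChoice {
    enum OffsetSource { POSTINGS, POSTINGS_WITH_TERM_VECTORS, TERM_VECTORS, ANALYSIS }

    // Postings with offsets are cheapest, term vectors come next, re-analysis is the fallback.
    static OffsetSource pick(boolean offsetsInPostings, boolean termVectorsStored, boolean termVectorOffsets) {
        if (offsetsInPostings) {
            return termVectorsStored ? OffsetSource.POSTINGS_WITH_TERM_VECTORS : OffsetSource.POSTINGS;
        }
        if (termVectorOffsets) {
            return OffsetSource.TERM_VECTORS;
        }
        return OffsetSource.ANALYSIS;
    }
}
```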
+ */ + +package org.elasticsearch.search.internal; + +import org.elasticsearch.Version; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.CheckedFunction; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.index.query.QueryRewriteContext; + +import java.io.IOException; +import java.util.Arrays; +import java.util.Objects; +import java.util.Optional; + +/** + * Represents a {@link QueryBuilder} and a list of alias names that filters the builder is composed of. + */ +public final class AliasFilter implements Writeable { + + private final String[] aliases; + private final QueryBuilder filter; + private final boolean reparseAliases; + + public static final AliasFilter EMPTY = new AliasFilter(null, Strings.EMPTY_ARRAY); + + public AliasFilter(QueryBuilder filter, String... aliases) { + this.aliases = aliases == null ? Strings.EMPTY_ARRAY : aliases; + this.filter = filter; + reparseAliases = false; // no bwc here - we only do this if we parse the filter + } + + public AliasFilter(StreamInput input) throws IOException { + aliases = input.readStringArray(); + if (input.getVersion().onOrAfter(Version.V_5_1_1)) { + filter = input.readOptionalNamedWriteable(QueryBuilder.class); + reparseAliases = false; + } else { + reparseAliases = true; // alright we read from 5.0 + filter = null; + } + } + + private QueryBuilder reparseFilter(QueryRewriteContext context) { + if (reparseAliases) { + // we are processing a filter received from a 5.0 node - we need to reparse this on the executing node + final IndexMetaData indexMetaData = context.getIndexSettings().getIndexMetaData(); + /* Being static, parseAliasFilter doesn't have access to whatever guts it needs to parse a query. Instead of passing in a bunch + * of dependencies we pass in a function that can perform the parsing. 
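`AliasFilter` above handles backwards compatibility by deferring work: if the request came from a pre-5.1.1 node, only the alias names are available and the filter must be reparsed on the executing node before it may be used. The following is a generic, self-contained sketch of that "reparse lazily, fail if read too early" pattern; the class and method names are invented for illustration.

```java
import java.util.function.Supplier;

final class LazyFilter<Q> {
    private final Q filter;          // null while it still has to be reparsed
    private final boolean reparse;   // true when the remote node was too old to send the query
    private final Supplier<Q> reparser;

    LazyFilter(Q filter, boolean reparse, Supplier<Q> reparser) {
        this.filter = filter;
        this.reparse = reparse;
        this.reparser = reparser;
    }

    // Rewriting resolves the deferred parse; afterwards the filter is safe to read.
    LazyFilter<Q> rewrite() {
        return reparse ? new LazyFilter<>(reparser.get(), false, reparser) : this;
    }

    Q get() {
        if (reparse) {
            throw new IllegalStateException("filter must be rewritten first");
        }
        return filter;
    }
}
```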
*/ + CheckedFunction, IOException> filterParser = bytes -> { + try (XContentParser parser = XContentFactory.xContent(bytes).createParser(context.getXContentRegistry(), bytes)) { + return context.newParseContext(parser).parseInnerQueryBuilder(); + } + }; + return ShardSearchRequest.parseAliasFilter(filterParser, indexMetaData, aliases); + } + return filter; + } + + AliasFilter rewrite(QueryRewriteContext context) throws IOException { + QueryBuilder queryBuilder = reparseFilter(context); + if (queryBuilder != null) { + return new AliasFilter(QueryBuilder.rewriteQuery(queryBuilder, context), aliases); + } + return new AliasFilter(filter, aliases); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeStringArray(aliases); + if (out.getVersion().onOrAfter(Version.V_5_1_1)) { + out.writeOptionalNamedWriteable(filter); + } + } + + /** + * Returns the aliases patters that are used to compose the {@link QueryBuilder} + * returned from {@link #getQueryBuilder()} + */ + public String[] getAliases() { + return aliases; + } + + /** + * Returns the alias filter {@link QueryBuilder} or null if there is no such filter + */ + public QueryBuilder getQueryBuilder() { + if (reparseAliases) { + // this is only for BWC since 5.0 still only sends aliases so this must be rewritten on the executing node + // if we talk to an older node we also only forward/write the string array which is compatible with the consumers + // in 5.0 see ExplainRequest and QueryValidationRequest + throw new IllegalStateException("alias filter for aliases: " + Arrays.toString(aliases) + " must be rewritten first"); + } + return filter; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + AliasFilter that = (AliasFilter) o; + return reparseAliases == that.reparseAliases && + Arrays.equals(aliases, that.aliases) && + Objects.equals(filter, that.filter); + } + + @Override + public int hashCode() { + return Objects.hash(reparseAliases, Arrays.hashCode(aliases), filter); + } + + @Override + public String toString() { + return "AliasFilter{" + + "aliases=" + Arrays.toString(aliases) + + ", filter=" + filter + + ", reparseAliases=" + reparseAliases + + '}'; + } +} diff --git a/core/src/main/java/org/elasticsearch/search/internal/CancellableBulkScorer.java b/core/src/main/java/org/elasticsearch/search/internal/CancellableBulkScorer.java new file mode 100644 index 0000000000000..f5eceed52ea32 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/internal/CancellableBulkScorer.java @@ -0,0 +1,68 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.internal; + +import org.apache.lucene.search.BulkScorer; +import org.apache.lucene.search.LeafCollector; +import org.apache.lucene.util.Bits; + +import java.io.IOException; +import java.util.Objects; + +/** + * A {@link BulkScorer} wrapper that runs a {@link Runnable} on a regular basis + * so that the query can be interrupted. + */ +final class CancellableBulkScorer extends BulkScorer { + + // we use the BooleanScorer window size as a base interval in order to make sure that we do not + // slow down boolean queries + private static final int INITIAL_INTERVAL = 1 << 11; + + // No point in having intervals that are larger than 1M + private static final int MAX_INTERVAL = 1 << 20; + + private final BulkScorer scorer; + private final Runnable checkCancelled; + + CancellableBulkScorer(BulkScorer scorer, Runnable checkCancelled) { + this.scorer = Objects.requireNonNull(scorer); + this.checkCancelled = Objects.requireNonNull(checkCancelled); + } + + @Override + public int score(LeafCollector collector, Bits acceptDocs, int min, int max) throws IOException { + int interval = INITIAL_INTERVAL; + while (min < max) { + checkCancelled.run(); + final int newMax = (int) Math.min((long) min + interval, max); + min = scorer.score(collector, acceptDocs, min, newMax); + interval = Math.min(interval << 1, MAX_INTERVAL); + } + checkCancelled.run(); + return min; + } + + @Override + public long cost() { + return scorer.cost(); + } + +} diff --git a/core/src/main/java/org/elasticsearch/search/internal/ContextIndexSearcher.java b/core/src/main/java/org/elasticsearch/search/internal/ContextIndexSearcher.java index 8d33140e3eec7..5adf25e03b0aa 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/ContextIndexSearcher.java +++ b/core/src/main/java/org/elasticsearch/search/internal/ContextIndexSearcher.java @@ -20,25 +20,32 @@ package org.elasticsearch.search.internal; import org.apache.lucene.index.DirectoryReader; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.Term; import org.apache.lucene.index.TermContext; +import org.apache.lucene.search.BulkScorer; import org.apache.lucene.search.CollectionStatistics; +import org.apache.lucene.search.Collector; import org.apache.lucene.search.Explanation; import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.Query; import org.apache.lucene.search.QueryCache; import org.apache.lucene.search.QueryCachingPolicy; +import org.apache.lucene.search.Scorer; import org.apache.lucene.search.TermStatistics; import org.apache.lucene.search.Weight; import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.index.engine.Engine; import org.elasticsearch.search.dfs.AggregatedDfs; +import org.elasticsearch.search.profile.Timer; import org.elasticsearch.search.profile.query.ProfileWeight; import org.elasticsearch.search.profile.query.QueryProfileBreakdown; import org.elasticsearch.search.profile.query.QueryProfiler; import org.elasticsearch.search.profile.query.QueryTimingType; import java.io.IOException; +import java.util.List; +import java.util.Set; /** * Context-aware extension of {@link IndexSearcher}. 
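`CancellableBulkScorer` above scores documents in windows that double in size up to a cap, running the cancellation callback between windows, so a cancelled query stops quickly at first without slowing down long scoring runs later. The sketch below restates that loop over a generic work list rather than a Lucene `BulkScorer`; the constants mirror the ones in the change, everything else is illustrative.

```java
import java.util.function.IntConsumer;

final class ChunkedWork {
    private static final int INITIAL_INTERVAL = 1 << 11; // small first window, cheap to abandon
    private static final int MAX_INTERVAL = 1 << 20;     // cap so checks never become too rare

    static void run(int totalItems, Runnable checkCancelled, IntConsumer processItem) {
        int interval = INITIAL_INTERVAL;
        int position = 0;
        while (position < totalItems) {
            checkCancelled.run();
            int end = (int) Math.min((long) position + interval, totalItems);
            for (int i = position; i < end; i++) {
                processItem.accept(i);
            }
            position = end;
            interval = Math.min(interval << 1, MAX_INTERVAL); // double the window, up to the cap
        }
        checkCancelled.run();
    }
}
```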
@@ -57,6 +64,8 @@ public class ContextIndexSearcher extends IndexSearcher implements Releasable { // TODO revisit moving the profiler to inheritance or wrapping model in the future private QueryProfiler profiler; + private Runnable checkCancelled; + public ContextIndexSearcher(Engine.Searcher searcher, QueryCache queryCache, QueryCachingPolicy queryCachingPolicy) { super(searcher.reader()); @@ -75,6 +84,14 @@ public void setProfiler(QueryProfiler profiler) { this.profiler = profiler; } + /** + * Set a {@link Runnable} that will be run on a regular basis while + * collecting documents. + */ + public void setCheckCancelled(Runnable checkCancelled) { + this.checkCancelled = checkCancelled; + } + public void setAggregatedDfs(AggregatedDfs aggregatedDfs) { this.aggregatedDfs = aggregatedDfs; } @@ -116,12 +133,13 @@ public Weight createWeight(Query query, boolean needsScores) throws IOException // each invocation so that it can build an internal representation of the query // tree QueryProfileBreakdown profile = profiler.getQueryBreakdown(query); - profile.startTime(QueryTimingType.CREATE_WEIGHT); + Timer timer = profile.getTimer(QueryTimingType.CREATE_WEIGHT); + timer.start(); final Weight weight; try { weight = super.createWeight(query, needsScores); } finally { - profile.stopAndRecordTime(); + timer.stop(); profiler.pollLastElement(); } return new ProfileWeight(query, weight, profile); @@ -131,6 +149,53 @@ public Weight createWeight(Query query, boolean needsScores) throws IOException } } + @Override + protected void search(List leaves, Weight weight, Collector collector) throws IOException { + final Weight cancellableWeight; + if (checkCancelled != null) { + cancellableWeight = new Weight(weight.getQuery()) { + + @Override + public void extractTerms(Set terms) { + throw new UnsupportedOperationException(); + } + + @Override + public Explanation explain(LeafReaderContext context, int doc) throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public Scorer scorer(LeafReaderContext context) throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public float getValueForNormalization() throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public void normalize(float norm, float boost) { + throw new UnsupportedOperationException(); + } + + @Override + public BulkScorer bulkScorer(LeafReaderContext context) throws IOException { + BulkScorer in = weight.bulkScorer(context); + if (in != null) { + return new CancellableBulkScorer(in, checkCancelled); + } else { + return null; + } + } + }; + } else { + cancellableWeight = weight; + } + super.search(leaves, cancellableWeight, collector); + } + @Override public Explanation explain(Query query, int doc) throws IOException { if (aggregatedDfs != null) { diff --git a/core/src/main/java/org/elasticsearch/search/internal/FilteredSearchContext.java b/core/src/main/java/org/elasticsearch/search/internal/FilteredSearchContext.java index 4e52ab0d53413..fadf979d911d2 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/FilteredSearchContext.java +++ b/core/src/main/java/org/elasticsearch/search/internal/FilteredSearchContext.java @@ -23,11 +23,10 @@ import org.apache.lucene.search.FieldDoc; import org.apache.lucene.search.Query; import org.apache.lucene.util.Counter; +import org.elasticsearch.action.search.SearchTask; import org.elasticsearch.action.search.SearchType; -import org.elasticsearch.common.ParseFieldMatcher; import 
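The profiling change in `createWeight` above swaps the breakdown's `startTime`/`stopAndRecordTime` pair for an explicit `Timer` that is started before the work and stopped in a `finally` block, so the time is recorded even when weight creation throws. A minimal sketch of that pattern, with a toy timer, is shown below.

```java
import java.util.concurrent.Callable;

final class TimedSection {
    static final class Timer {
        private long start;
        private long total;
        void start() { start = System.nanoTime(); }
        void stop() { total += System.nanoTime() - start; }
        long totalNanos() { return total; }
    }

    // Time a section of work; the finally block guarantees the timer stops on failure too.
    static <T> T time(Timer timer, Callable<T> section) throws Exception {
        timer.start();
        try {
            return section.call();
        } finally {
            timer.stop();
        }
    }
}
```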
org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.BigArrays; -import org.elasticsearch.index.analysis.AnalysisService; import org.elasticsearch.index.cache.bitset.BitsetFilterCache; import org.elasticsearch.index.fielddata.IndexFieldDataService; import org.elasticsearch.index.mapper.MappedFieldType; @@ -35,17 +34,16 @@ import org.elasticsearch.index.mapper.ObjectMapper; import org.elasticsearch.index.query.ParsedQuery; import org.elasticsearch.index.query.QueryShardContext; -import org.elasticsearch.search.fetch.StoredFieldsContext; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.similarity.SimilarityService; -import org.elasticsearch.script.ScriptService; +import org.elasticsearch.search.SearchExtBuilder; import org.elasticsearch.search.SearchShardTarget; import org.elasticsearch.search.aggregations.SearchContextAggregations; +import org.elasticsearch.search.collapse.CollapseContext; import org.elasticsearch.search.dfs.DfsSearchResult; import org.elasticsearch.search.fetch.FetchPhase; import org.elasticsearch.search.fetch.FetchSearchResult; -import org.elasticsearch.search.fetch.FetchSubPhase; -import org.elasticsearch.search.fetch.FetchSubPhaseContext; +import org.elasticsearch.search.fetch.StoredFieldsContext; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; import org.elasticsearch.search.fetch.subphase.InnerHitsContext; import org.elasticsearch.search.fetch.subphase.ScriptFieldsContext; @@ -65,8 +63,6 @@ public abstract class FilteredSearchContext extends SearchContext { private final SearchContext in; public FilteredSearchContext(SearchContext in) { - //inner_hits in percolator ends up with null inner search context - super(in == null ? ParseFieldMatcher.EMPTY : in.parseFieldMatcher()); this.in = in; } @@ -101,13 +97,13 @@ protected void doClose() { } @Override - public void preProcess() { - in.preProcess(); + public void preProcess(boolean rewrite) { + in.preProcess(rewrite); } @Override - public Query searchFilter(String[] types) { - return in.searchFilter(types); + public Query buildFilteredQuery(Query query) { + return in.buildFilteredQuery(query); } @Override @@ -145,21 +141,11 @@ public float queryBoost() { return in.queryBoost(); } - @Override - public SearchContext queryBoost(float queryBoost) { - return in.queryBoost(queryBoost); - } - @Override public long getOriginNanoTime() { return in.getOriginNanoTime(); } - @Override - protected long nowInMillisImpl() { - return in.nowInMillisImpl(); - } - @Override public ScrollContext scrollContext() { return in.scrollContext(); @@ -260,21 +246,11 @@ public MapperService mapperService() { return in.mapperService(); } - @Override - public AnalysisService analysisService() { - return in.analysisService(); - } - @Override public SimilarityService similarityService() { return in.similarityService(); } - @Override - public ScriptService scriptService() { - return in.scriptService(); - } - @Override public BigArrays bigArrays() { return in.bigArrays(); @@ -310,6 +286,11 @@ public void terminateAfter(int terminateAfter) { in.terminateAfter(terminateAfter); } + @Override + public boolean lowLevelCancellation() { + return in.lowLevelCancellation(); + } + @Override public SearchContext minimumScore(float minimumScore) { return in.minimumScore(minimumScore); @@ -512,8 +493,13 @@ public Counter timeEstimateCounter() { } @Override - public SubPhaseContext getFetchSubPhaseContext(FetchSubPhase.ContextFactory contextFactory) { - return 
in.getFetchSubPhaseContext(contextFactory); + public void addSearchExt(SearchExtBuilder searchExtBuilder) { + in.addSearchExt(searchExtBuilder); + } + + @Override + public SearchExtBuilder getSearchExt(String name) { + return in.getSearchExt(name); } @Override @@ -528,4 +514,29 @@ public Profilers getProfilers() { public QueryShardContext getQueryShardContext() { return in.getQueryShardContext(); } + + @Override + public void setTask(SearchTask task) { + in.setTask(task); + } + + @Override + public SearchTask getTask() { + return in.getTask(); + } + + @Override + public boolean isCancelled() { + return in.isCancelled(); + } + + @Override + public SearchContext collapse(CollapseContext collapse) { + return in.collapse(collapse); + } + + @Override + public CollapseContext collapse() { + return in.collapse(); + } } diff --git a/core/src/main/java/org/elasticsearch/search/internal/InternalScrollSearchRequest.java b/core/src/main/java/org/elasticsearch/search/internal/InternalScrollSearchRequest.java index 82a74de73d295..f112c97dd0f63 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/InternalScrollSearchRequest.java +++ b/core/src/main/java/org/elasticsearch/search/internal/InternalScrollSearchRequest.java @@ -20,9 +20,12 @@ package org.elasticsearch.search.internal; import org.elasticsearch.action.search.SearchScrollRequest; +import org.elasticsearch.action.search.SearchTask; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.search.Scroll; +import org.elasticsearch.tasks.Task; +import org.elasticsearch.tasks.TaskId; import org.elasticsearch.transport.TransportRequest; import java.io.IOException; @@ -67,4 +70,15 @@ public void writeTo(StreamOutput out) throws IOException { out.writeLong(id); out.writeOptionalWriteable(scroll); } + + @Override + public Task createTask(long id, String type, String action, TaskId parentTaskId) { + return new SearchTask(id, type, action, getDescription(), parentTaskId); + } + + @Override + public String getDescription() { + return "id[" + id + "], scroll[" + scroll + "]"; + } + } diff --git a/core/src/main/java/org/elasticsearch/search/internal/InternalSearchHit.java b/core/src/main/java/org/elasticsearch/search/internal/InternalSearchHit.java deleted file mode 100644 index e8ba4d88aa702..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/internal/InternalSearchHit.java +++ /dev/null @@ -1,853 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
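The `InternalScrollSearchRequest` change above lets a scroll request describe itself and create a `SearchTask`, which is what makes scroll searches visible to (and cancellable through) the task management layer. The sketch below shows that request-to-task shape with simplified, stand-in types; it is not the Elasticsearch API itself.

```java
final class ScrollRequestSketch {
    final long scrollId;
    final String scroll;

    ScrollRequestSketch(long scrollId, String scroll) {
        this.scrollId = scrollId;
        this.scroll = scroll;
    }

    // Human-readable summary surfaced by task listings.
    String getDescription() {
        return "id[" + scrollId + "], scroll[" + scroll + "]";
    }

    // Stand-in for the task object the real request would create.
    record TaskSketch(long id, String type, String action, String description) {}

    TaskSketch createTask(long id, String type, String action) {
        return new TaskSketch(id, type, action, getDescription());
    }
}
```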
- */ - -package org.elasticsearch.search.internal; - -import org.apache.lucene.search.Explanation; -import org.apache.lucene.util.BytesRef; -import org.elasticsearch.ElasticsearchParseException; -import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.Strings; -import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.compress.CompressorFactory; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.text.Text; -import org.elasticsearch.common.xcontent.ToXContent; -import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentHelper; -import org.elasticsearch.search.DocValueFormat; -import org.elasticsearch.search.SearchHit; -import org.elasticsearch.search.SearchHitField; -import org.elasticsearch.search.SearchHits; -import org.elasticsearch.search.SearchShardTarget; -import org.elasticsearch.search.fetch.subphase.highlight.HighlightField; -import org.elasticsearch.search.internal.InternalSearchHits.StreamContext.ShardTargetType; -import org.elasticsearch.search.lookup.SourceLookup; - -import java.io.IOException; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.HashMap; -import java.util.Iterator; -import java.util.List; -import java.util.Map; - -import static java.util.Collections.emptyMap; -import static java.util.Collections.singletonMap; -import static java.util.Collections.unmodifiableMap; -import static org.elasticsearch.common.lucene.Lucene.readExplanation; -import static org.elasticsearch.common.lucene.Lucene.writeExplanation; -import static org.elasticsearch.search.fetch.subphase.highlight.HighlightField.readHighlightField; -import static org.elasticsearch.search.internal.InternalSearchHitField.readSearchHitField; - -/** - * - */ -public class InternalSearchHit implements SearchHit { - - private static final Object[] EMPTY_SORT_VALUES = new Object[0]; - - private transient int docId; - - private float score = Float.NEGATIVE_INFINITY; - - private Text id; - private Text type; - - private InternalNestedIdentity nestedIdentity; - - private long version = -1; - - private BytesReference source; - - private Map fields = emptyMap(); - - private Map highlightFields = null; - - private Object[] sortValues = EMPTY_SORT_VALUES; - - private String[] matchedQueries = Strings.EMPTY_ARRAY; - - private Explanation explanation; - - @Nullable - private SearchShardTarget shard; - - private Map sourceAsMap; - private byte[] sourceAsBytes; - - private Map innerHits; - - private InternalSearchHit() { - - } - - public InternalSearchHit(int docId) { - this(docId, null, null, null); - } - - public InternalSearchHit(int docId, String id, Text type, Map fields) { - this.docId = docId; - if (id != null) { - this.id = new Text(id); - } else { - this.id = null; - } - this.type = type; - this.fields = fields; - } - - public InternalSearchHit(int nestedTopDocId, String id, Text type, InternalNestedIdentity nestedIdentity, Map fields) { - this.docId = nestedTopDocId; - this.id = new Text(id); - this.type = type; - this.nestedIdentity = nestedIdentity; - this.fields = fields; - } - - public int docId() { - return this.docId; - } - - public void shardTarget(SearchShardTarget shardTarget) { - this.shard = shardTarget; - if (innerHits != null) { - for (InternalSearchHits searchHits : innerHits.values()) { - searchHits.shardTarget(shardTarget); - } - } - } - 
- public void score(float score) { - this.score = score; - } - - @Override - public float score() { - return this.score; - } - - @Override - public float getScore() { - return score(); - } - - public void version(long version) { - this.version = version; - } - - @Override - public long version() { - return this.version; - } - - @Override - public long getVersion() { - return this.version; - } - - @Override - public String index() { - return shard.index(); - } - - @Override - public String getIndex() { - return index(); - } - - @Override - public String id() { - return id != null ? id.string() : null; - } - - @Override - public String getId() { - return id(); - } - - @Override - public String type() { - return type != null ? type.string() : null; - } - - @Override - public String getType() { - return type(); - } - - @Override - public NestedIdentity getNestedIdentity() { - return nestedIdentity; - } - - /** - * Returns bytes reference, also un compress the source if needed. - */ - @Override - public BytesReference sourceRef() { - try { - this.source = CompressorFactory.uncompressIfNeeded(this.source); - return this.source; - } catch (IOException e) { - throw new ElasticsearchParseException("failed to decompress source", e); - } - } - - /** - * Sets representation, might be compressed.... - */ - public InternalSearchHit sourceRef(BytesReference source) { - this.source = source; - this.sourceAsBytes = null; - this.sourceAsMap = null; - return this; - } - - @Override - public BytesReference getSourceRef() { - return sourceRef(); - } - - /** - * Internal source representation, might be compressed.... - */ - public BytesReference internalSourceRef() { - return source; - } - - - @Override - public byte[] source() { - if (source == null) { - return null; - } - if (sourceAsBytes != null) { - return sourceAsBytes; - } - this.sourceAsBytes = BytesReference.toBytes(sourceRef()); - return this.sourceAsBytes; - } - - @Override - public boolean hasSource() { - return source == null; - } - - @Override - public Map getSource() { - return sourceAsMap(); - } - - @Override - public String sourceAsString() { - if (source == null) { - return null; - } - try { - return XContentHelper.convertToJson(sourceRef(), false); - } catch (IOException e) { - throw new ElasticsearchParseException("failed to convert source to a json string"); - } - } - - @Override - public String getSourceAsString() { - return sourceAsString(); - } - - @SuppressWarnings({"unchecked"}) - @Override - public Map sourceAsMap() throws ElasticsearchParseException { - if (source == null) { - return null; - } - if (sourceAsMap != null) { - return sourceAsMap; - } - - sourceAsMap = SourceLookup.sourceAsMap(source); - return sourceAsMap; - } - - @Override - public Iterator iterator() { - return fields.values().iterator(); - } - - @Override - public SearchHitField field(String fieldName) { - return fields().get(fieldName); - } - - @Override - public Map fields() { - return fields == null ? emptyMap() : fields; - } - - // returns the fields without handling null cases - public Map fieldsOrNull() { - return fields; - } - - @Override - public Map getFields() { - return fields(); - } - - public void fields(Map fields) { - this.fields = fields; - } - - public Map internalHighlightFields() { - return highlightFields; - } - - @Override - public Map highlightFields() { - return highlightFields == null ? 
emptyMap() : highlightFields; - } - - @Override - public Map getHighlightFields() { - return highlightFields(); - } - - public void highlightFields(Map highlightFields) { - this.highlightFields = highlightFields; - } - - public void sortValues(Object[] sortValues, DocValueFormat[] sortValueFormats) { - this.sortValues = Arrays.copyOf(sortValues, sortValues.length); - for (int i = 0; i < sortValues.length; ++i) { - if (this.sortValues[i] instanceof BytesRef) { - this.sortValues[i] = sortValueFormats[i].format((BytesRef) sortValues[i]); - } - } - } - - @Override - public Object[] sortValues() { - return sortValues; - } - - @Override - public Object[] getSortValues() { - return sortValues(); - } - - @Override - public Explanation explanation() { - return explanation; - } - - @Override - public Explanation getExplanation() { - return explanation(); - } - - public void explanation(Explanation explanation) { - this.explanation = explanation; - } - - @Override - public SearchShardTarget shard() { - return shard; - } - - @Override - public SearchShardTarget getShard() { - return shard(); - } - - public void shard(SearchShardTarget target) { - this.shard = target; - } - - public void matchedQueries(String[] matchedQueries) { - this.matchedQueries = matchedQueries; - } - - @Override - public String[] matchedQueries() { - return this.matchedQueries; - } - - @Override - public String[] getMatchedQueries() { - return this.matchedQueries; - } - - @Override - @SuppressWarnings("unchecked") - public Map getInnerHits() { - return (Map) innerHits; - } - - public void setInnerHits(Map innerHits) { - this.innerHits = innerHits; - } - - public static class Fields { - static final String _INDEX = "_index"; - static final String _TYPE = "_type"; - static final String _ID = "_id"; - static final String _VERSION = "_version"; - static final String _SCORE = "_score"; - static final String FIELDS = "fields"; - static final String HIGHLIGHT = "highlight"; - static final String SORT = "sort"; - static final String MATCHED_QUERIES = "matched_queries"; - static final String _EXPLANATION = "_explanation"; - static final String VALUE = "value"; - static final String DESCRIPTION = "description"; - static final String DETAILS = "details"; - static final String INNER_HITS = "inner_hits"; - } - - // public because we render hit as part of completion suggestion option - public XContentBuilder toInnerXContent(XContentBuilder builder, Params params) throws IOException { - List metaFields = new ArrayList<>(); - List otherFields = new ArrayList<>(); - if (fields != null && !fields.isEmpty()) { - for (SearchHitField field : fields.values()) { - if (field.values().isEmpty()) { - continue; - } - if (field.isMetadataField()) { - metaFields.add(field); - } else { - otherFields.add(field); - } - } - } - - // For inner_hit hits shard is null and that is ok, because the parent search hit has all this information. - // Even if this was included in the inner_hit hits this would be the same, so better leave it out. 
- if (explanation() != null && shard != null) { - builder.field("_shard", shard.shardId()); - builder.field("_node", shard.nodeIdText()); - } - if (nestedIdentity != null) { - nestedIdentity.toXContent(builder, params); - } else { - if (shard != null) { - builder.field(Fields._INDEX, shard.indexText()); - } - if (type != null) { - builder.field(Fields._TYPE, type); - } - if (id != null) { - builder.field(Fields._ID, id); - } - } - if (version != -1) { - builder.field(Fields._VERSION, version); - } - if (Float.isNaN(score)) { - builder.nullField(Fields._SCORE); - } else { - builder.field(Fields._SCORE, score); - } - for (SearchHitField field : metaFields) { - builder.field(field.name(), (Object) field.value()); - } - if (source != null) { - XContentHelper.writeRawField("_source", source, builder, params); - } - if (!otherFields.isEmpty()) { - builder.startObject(Fields.FIELDS); - for (SearchHitField field : otherFields) { - builder.startArray(field.name()); - for (Object value : field.getValues()) { - builder.value(value); - } - builder.endArray(); - } - builder.endObject(); - } - if (highlightFields != null && !highlightFields.isEmpty()) { - builder.startObject(Fields.HIGHLIGHT); - for (HighlightField field : highlightFields.values()) { - builder.field(field.name()); - if (field.fragments() == null) { - builder.nullValue(); - } else { - builder.startArray(); - for (Text fragment : field.fragments()) { - builder.value(fragment); - } - builder.endArray(); - } - } - builder.endObject(); - } - if (sortValues != null && sortValues.length > 0) { - builder.startArray(Fields.SORT); - for (Object sortValue : sortValues) { - builder.value(sortValue); - } - builder.endArray(); - } - if (matchedQueries.length > 0) { - builder.startArray(Fields.MATCHED_QUERIES); - for (String matchedFilter : matchedQueries) { - builder.value(matchedFilter); - } - builder.endArray(); - } - if (explanation() != null) { - builder.field(Fields._EXPLANATION); - buildExplanation(builder, explanation()); - } - if (innerHits != null) { - builder.startObject(Fields.INNER_HITS); - for (Map.Entry entry : innerHits.entrySet()) { - builder.startObject(entry.getKey()); - entry.getValue().toXContent(builder, params); - builder.endObject(); - } - builder.endObject(); - } - return builder; - } - - private void buildExplanation(XContentBuilder builder, Explanation explanation) throws IOException { - builder.startObject(); - builder.field(Fields.VALUE, explanation.getValue()); - builder.field(Fields.DESCRIPTION, explanation.getDescription()); - Explanation[] innerExps = explanation.getDetails(); - if (innerExps != null) { - builder.startArray(Fields.DETAILS); - for (Explanation exp : innerExps) { - buildExplanation(builder, exp); - } - builder.endArray(); - } - builder.endObject(); - - } - - @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(); - toInnerXContent(builder, params); - builder.endObject(); - return builder; - } - - public static InternalSearchHit readSearchHit(StreamInput in, InternalSearchHits.StreamContext context) throws IOException { - InternalSearchHit hit = new InternalSearchHit(); - hit.readFrom(in, context); - return hit; - } - - @Override - public void readFrom(StreamInput in) throws IOException { - readFrom(in, InternalSearchHits.streamContext().streamShardTarget(ShardTargetType.STREAM)); - } - - public void readFrom(StreamInput in, InternalSearchHits.StreamContext context) throws IOException { - score = in.readFloat(); - id = 
in.readOptionalText(); - type = in.readOptionalText(); - nestedIdentity = in.readOptionalStreamable(InternalNestedIdentity::new); - version = in.readLong(); - source = in.readBytesReference(); - if (source.length() == 0) { - source = null; - } - if (in.readBoolean()) { - explanation = readExplanation(in); - } - int size = in.readVInt(); - if (size == 0) { - fields = emptyMap(); - } else if (size == 1) { - SearchHitField hitField = readSearchHitField(in); - fields = singletonMap(hitField.name(), hitField); - } else { - Map fields = new HashMap<>(); - for (int i = 0; i < size; i++) { - SearchHitField hitField = readSearchHitField(in); - fields.put(hitField.name(), hitField); - } - this.fields = unmodifiableMap(fields); - } - - size = in.readVInt(); - if (size == 0) { - highlightFields = emptyMap(); - } else if (size == 1) { - HighlightField field = readHighlightField(in); - highlightFields = singletonMap(field.name(), field); - } else { - Map highlightFields = new HashMap<>(); - for (int i = 0; i < size; i++) { - HighlightField field = readHighlightField(in); - highlightFields.put(field.name(), field); - } - this.highlightFields = unmodifiableMap(highlightFields); - } - - size = in.readVInt(); - if (size > 0) { - sortValues = new Object[size]; - for (int i = 0; i < sortValues.length; i++) { - byte type = in.readByte(); - if (type == 0) { - sortValues[i] = null; - } else if (type == 1) { - sortValues[i] = in.readString(); - } else if (type == 2) { - sortValues[i] = in.readInt(); - } else if (type == 3) { - sortValues[i] = in.readLong(); - } else if (type == 4) { - sortValues[i] = in.readFloat(); - } else if (type == 5) { - sortValues[i] = in.readDouble(); - } else if (type == 6) { - sortValues[i] = in.readByte(); - } else if (type == 7) { - sortValues[i] = in.readShort(); - } else if (type == 8) { - sortValues[i] = in.readBoolean(); - } else { - throw new IOException("Can't match type [" + type + "]"); - } - } - } - - size = in.readVInt(); - if (size > 0) { - matchedQueries = new String[size]; - for (int i = 0; i < size; i++) { - matchedQueries[i] = in.readString(); - } - } - - if (context.streamShardTarget() == ShardTargetType.STREAM) { - if (in.readBoolean()) { - shard = new SearchShardTarget(in); - } - } else if (context.streamShardTarget() == ShardTargetType.LOOKUP) { - int lookupId = in.readVInt(); - if (lookupId > 0) { - shard = context.handleShardLookup().get(lookupId); - } - } - - size = in.readVInt(); - if (size > 0) { - innerHits = new HashMap<>(size); - for (int i = 0; i < size; i++) { - String key = in.readString(); - ShardTargetType shardTarget = context.streamShardTarget(); - InternalSearchHits value = InternalSearchHits.readSearchHits(in, context.streamShardTarget(ShardTargetType.NO_STREAM)); - context.streamShardTarget(shardTarget); - innerHits.put(key, value); - } - } - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - writeTo(out, InternalSearchHits.streamContext().streamShardTarget(ShardTargetType.STREAM)); - } - - public void writeTo(StreamOutput out, InternalSearchHits.StreamContext context) throws IOException { - out.writeFloat(score); - out.writeOptionalText(id); - out.writeOptionalText(type); - out.writeOptionalStreamable(nestedIdentity); - out.writeLong(version); - out.writeBytesReference(source); - if (explanation == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - writeExplanation(out, explanation); - } - if (fields == null) { - out.writeVInt(0); - } else { - out.writeVInt(fields.size()); - for (SearchHitField 
hitField : fields().values()) { - hitField.writeTo(out); - } - } - if (highlightFields == null) { - out.writeVInt(0); - } else { - out.writeVInt(highlightFields.size()); - for (HighlightField highlightField : highlightFields.values()) { - highlightField.writeTo(out); - } - } - - if (sortValues.length == 0) { - out.writeVInt(0); - } else { - out.writeVInt(sortValues.length); - for (Object sortValue : sortValues) { - if (sortValue == null) { - out.writeByte((byte) 0); - } else { - Class type = sortValue.getClass(); - if (type == String.class) { - out.writeByte((byte) 1); - out.writeString((String) sortValue); - } else if (type == Integer.class) { - out.writeByte((byte) 2); - out.writeInt((Integer) sortValue); - } else if (type == Long.class) { - out.writeByte((byte) 3); - out.writeLong((Long) sortValue); - } else if (type == Float.class) { - out.writeByte((byte) 4); - out.writeFloat((Float) sortValue); - } else if (type == Double.class) { - out.writeByte((byte) 5); - out.writeDouble((Double) sortValue); - } else if (type == Byte.class) { - out.writeByte((byte) 6); - out.writeByte((Byte) sortValue); - } else if (type == Short.class) { - out.writeByte((byte) 7); - out.writeShort((Short) sortValue); - } else if (type == Boolean.class) { - out.writeByte((byte) 8); - out.writeBoolean((Boolean) sortValue); - } else { - throw new IOException("Can't handle sort field value of type [" + type + "]"); - } - } - } - } - - if (matchedQueries.length == 0) { - out.writeVInt(0); - } else { - out.writeVInt(matchedQueries.length); - for (String matchedFilter : matchedQueries) { - out.writeString(matchedFilter); - } - } - - if (context.streamShardTarget() == ShardTargetType.STREAM) { - if (shard == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - shard.writeTo(out); - } - } else if (context.streamShardTarget() == ShardTargetType.LOOKUP) { - if (shard == null) { - out.writeVInt(0); - } else { - out.writeVInt(context.shardHandleLookup().get(shard)); - } - } - - if (innerHits == null) { - out.writeVInt(0); - } else { - out.writeVInt(innerHits.size()); - for (Map.Entry entry : innerHits.entrySet()) { - out.writeString(entry.getKey()); - ShardTargetType shardTarget = context.streamShardTarget(); - entry.getValue().writeTo(out, context.streamShardTarget(ShardTargetType.NO_STREAM)); - context.streamShardTarget(shardTarget); - } - } - } - - public static final class InternalNestedIdentity implements NestedIdentity, Streamable, ToXContent { - - private Text field; - private int offset; - private InternalNestedIdentity child; - - public InternalNestedIdentity(String field, int offset, InternalNestedIdentity child) { - this.field = new Text(field); - this.offset = offset; - this.child = child; - } - - InternalNestedIdentity() { - } - - @Override - public Text getField() { - return field; - } - - @Override - public int getOffset() { - return offset; - } - - @Override - public NestedIdentity getChild() { - return child; - } - - @Override - public void readFrom(StreamInput in) throws IOException { - field = in.readOptionalText(); - offset = in.readInt(); - child = in.readOptionalStreamable(InternalNestedIdentity::new); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - out.writeOptionalText(field); - out.writeInt(offset); - out.writeOptionalStreamable(child); - } - - @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(Fields._NESTED); - if (field != null) { - builder.field(Fields._NESTED_FIELD, 
field); - } - if (offset != -1) { - builder.field(Fields._NESTED_OFFSET, offset); - } - if (child != null) { - builder = child.toXContent(builder, params); - } - builder.endObject(); - return builder; - } - - public static class Fields { - - static final String _NESTED = "_nested"; - static final String _NESTED_FIELD = "field"; - static final String _NESTED_OFFSET = "offset"; - - } - } - -} diff --git a/core/src/main/java/org/elasticsearch/search/internal/InternalSearchHitField.java b/core/src/main/java/org/elasticsearch/search/internal/InternalSearchHitField.java deleted file mode 100644 index 114aa4999d1b6..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/internal/InternalSearchHitField.java +++ /dev/null @@ -1,115 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.search.internal; - -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.index.mapper.MapperService; -import org.elasticsearch.search.SearchHitField; - -import java.io.IOException; -import java.util.ArrayList; -import java.util.Iterator; -import java.util.List; - -/** - * - */ -public class InternalSearchHitField implements SearchHitField { - - private String name; - private List values; - - private InternalSearchHitField() { - } - - public InternalSearchHitField(String name, List values) { - this.name = name; - this.values = values; - } - - @Override - public String name() { - return name; - } - - @Override - public String getName() { - return name(); - } - - @Override - public Object value() { - if (values == null || values.isEmpty()) { - return null; - } - return values.get(0); - } - - @Override - public Object getValue() { - return value(); - } - - @Override - public List values() { - return values; - } - - @Override - public List getValues() { - return values(); - } - - @Override - public boolean isMetadataField() { - return MapperService.isMetadataField(name); - } - - @Override - public Iterator iterator() { - return values.iterator(); - } - - public static InternalSearchHitField readSearchHitField(StreamInput in) throws IOException { - InternalSearchHitField result = new InternalSearchHitField(); - result.readFrom(in); - return result; - } - - @Override - public void readFrom(StreamInput in) throws IOException { - name = in.readString(); - int size = in.readVInt(); - values = new ArrayList<>(size); - for (int i = 0; i < size; i++) { - values.add(in.readGenericValue()); - } - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - out.writeString(name); - out.writeVInt(values.size()); - for (Object value : values) { - out.writeGenericValue(value); - } - } -} \ No newline at end of file diff --git 
a/core/src/main/java/org/elasticsearch/search/internal/InternalSearchHits.java b/core/src/main/java/org/elasticsearch/search/internal/InternalSearchHits.java deleted file mode 100644 index 592d4b0751ef3..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/internal/InternalSearchHits.java +++ /dev/null @@ -1,263 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.search.internal; - -import com.carrotsearch.hppc.IntObjectHashMap; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.search.SearchHit; -import org.elasticsearch.search.SearchHits; -import org.elasticsearch.search.SearchShardTarget; - -import java.io.IOException; -import java.util.Arrays; -import java.util.IdentityHashMap; -import java.util.Iterator; -import java.util.Map; - -import static org.elasticsearch.search.internal.InternalSearchHit.readSearchHit; - -/** - * - */ -public class InternalSearchHits implements SearchHits { - - public static class StreamContext { - - public static enum ShardTargetType { - STREAM, - LOOKUP, - NO_STREAM - } - - private IdentityHashMap shardHandleLookup = new IdentityHashMap<>(); - private IntObjectHashMap handleShardLookup = new IntObjectHashMap<>(); - private ShardTargetType streamShardTarget = ShardTargetType.STREAM; - - public StreamContext reset() { - shardHandleLookup.clear(); - handleShardLookup.clear(); - streamShardTarget = ShardTargetType.STREAM; - return this; - } - - public IdentityHashMap shardHandleLookup() { - return shardHandleLookup; - } - - public IntObjectHashMap handleShardLookup() { - return handleShardLookup; - } - - public ShardTargetType streamShardTarget() { - return streamShardTarget; - } - - public StreamContext streamShardTarget(ShardTargetType streamShardTarget) { - this.streamShardTarget = streamShardTarget; - return this; - } - } - - private static final ThreadLocal cache = new ThreadLocal() { - @Override - protected StreamContext initialValue() { - return new StreamContext(); - } - }; - - public static StreamContext streamContext() { - return cache.get().reset(); - } - - public static InternalSearchHits empty() { - // We shouldn't use static final instance, since that could directly be returned by native transport clients - return new InternalSearchHits(EMPTY, 0, 0); - } - - public static final InternalSearchHit[] EMPTY = new InternalSearchHit[0]; - - private InternalSearchHit[] hits; - - public long totalHits; - - private float maxScore; - - InternalSearchHits() { - - } - - public InternalSearchHits(InternalSearchHit[] hits, long totalHits, float maxScore) { - this.hits = hits; - this.totalHits = totalHits; - this.maxScore = maxScore; - 
} - - public void shardTarget(SearchShardTarget shardTarget) { - for (InternalSearchHit hit : hits) { - hit.shard(shardTarget); - } - } - - @Override - public long totalHits() { - return totalHits; - } - - @Override - public long getTotalHits() { - return totalHits(); - } - - @Override - public float maxScore() { - return this.maxScore; - } - - @Override - public float getMaxScore() { - return maxScore(); - } - - @Override - public SearchHit[] hits() { - return this.hits; - } - - @Override - public SearchHit getAt(int position) { - return hits[position]; - } - - @Override - public SearchHit[] getHits() { - return hits(); - } - - @Override - public Iterator iterator() { - return Arrays.stream(hits()).iterator(); - } - - public InternalSearchHit[] internalHits() { - return this.hits; - } - - static final class Fields { - static final String HITS = "hits"; - static final String TOTAL = "total"; - static final String MAX_SCORE = "max_score"; - } - - @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(Fields.HITS); - builder.field(Fields.TOTAL, totalHits); - if (Float.isNaN(maxScore)) { - builder.nullField(Fields.MAX_SCORE); - } else { - builder.field(Fields.MAX_SCORE, maxScore); - } - builder.field(Fields.HITS); - builder.startArray(); - for (SearchHit hit : hits) { - hit.toXContent(builder, params); - } - builder.endArray(); - builder.endObject(); - return builder; - } - - public static InternalSearchHits readSearchHits(StreamInput in, StreamContext context) throws IOException { - InternalSearchHits hits = new InternalSearchHits(); - hits.readFrom(in, context); - return hits; - } - - public static InternalSearchHits readSearchHits(StreamInput in) throws IOException { - InternalSearchHits hits = new InternalSearchHits(); - hits.readFrom(in); - return hits; - } - - @Override - public void readFrom(StreamInput in) throws IOException { - readFrom(in, streamContext().streamShardTarget(StreamContext.ShardTargetType.LOOKUP)); - } - - public void readFrom(StreamInput in, StreamContext context) throws IOException { - totalHits = in.readVLong(); - maxScore = in.readFloat(); - int size = in.readVInt(); - if (size == 0) { - hits = EMPTY; - } else { - if (context.streamShardTarget() == StreamContext.ShardTargetType.LOOKUP) { - // read the lookup table first - int lookupSize = in.readVInt(); - for (int i = 0; i < lookupSize; i++) { - context.handleShardLookup().put(in.readVInt(), new SearchShardTarget(in)); - } - } - - hits = new InternalSearchHit[size]; - for (int i = 0; i < hits.length; i++) { - hits[i] = readSearchHit(in, context); - } - } - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - writeTo(out, streamContext().streamShardTarget(StreamContext.ShardTargetType.LOOKUP)); - } - - public void writeTo(StreamOutput out, StreamContext context) throws IOException { - out.writeVLong(totalHits); - out.writeFloat(maxScore); - out.writeVInt(hits.length); - if (hits.length > 0) { - if (context.streamShardTarget() == StreamContext.ShardTargetType.LOOKUP) { - // start from 1, 0 is for null! 
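The `writeTo` being removed above deduplicates `SearchShardTarget` instances by first writing a small lookup table and then referring to each target by an integer handle, with 0 reserved for "no shard". A self-contained sketch of that handle-table idea, using hypothetical names rather than the removed classes:

```java
import java.util.IdentityHashMap;

// Repeated values are registered once and later referenced on the wire by a small
// integer handle; 0 is never handed out so it can mean "no value".
final class HandleTable<T> {
    static final int NULL_HANDLE = 0;

    private final IdentityHashMap<T, Integer> handles = new IdentityHashMap<>();

    int handleOf(T value) {
        Integer handle = handles.get(value);
        if (handle == null) {
            handle = handles.size() + 1; // start from 1, 0 is for null
            handles.put(value, handle);
        }
        return handle;
    }

    int size() {
        return handles.size();
    }
}
```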
- int counter = 1; - for (InternalSearchHit hit : hits) { - if (hit.shard() != null) { - Integer handle = context.shardHandleLookup().get(hit.shard()); - if (handle == null) { - context.shardHandleLookup().put(hit.shard(), counter++); - } - } - } - out.writeVInt(context.shardHandleLookup().size()); - if (!context.shardHandleLookup().isEmpty()) { - for (Map.Entry entry : context.shardHandleLookup().entrySet()) { - out.writeVInt(entry.getValue()); - entry.getKey().writeTo(out); - } - } - } - - for (InternalSearchHit hit : hits) { - hit.writeTo(out, context); - } - } - } -} diff --git a/core/src/main/java/org/elasticsearch/search/internal/InternalSearchResponse.java b/core/src/main/java/org/elasticsearch/search/internal/InternalSearchResponse.java index 09a787ac3cb8a..eae65c2e95288 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/InternalSearchResponse.java +++ b/core/src/main/java/org/elasticsearch/search/internal/InternalSearchResponse.java @@ -19,144 +19,57 @@ package org.elasticsearch.search.internal; + import org.elasticsearch.Version; +import org.elasticsearch.action.search.SearchResponseSections; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.io.stream.Streamable; +import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.ToXContent; -import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.search.SearchHits; -import org.elasticsearch.search.aggregations.Aggregations; import org.elasticsearch.search.aggregations.InternalAggregations; -import org.elasticsearch.search.profile.ProfileShardResult; import org.elasticsearch.search.profile.SearchProfileShardResults; import org.elasticsearch.search.suggest.Suggest; import java.io.IOException; -import java.util.Collections; -import java.util.Map; - -import static org.elasticsearch.search.internal.InternalSearchHits.readSearchHits; /** - * + * {@link SearchResponseSections} subclass that can be serialized over the wire. 
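The rewritten `InternalSearchResponse` below drops the `Streamable` read-then-mutate style in favour of a constructor that takes a `StreamInput`, which is the usual shape of a `Writeable`. A minimal sketch of that pattern with a hypothetical class (only the `StreamInput`, `StreamOutput`, and `Writeable` types from this diff are assumed):

```java
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;

import java.io.IOException;

// Hypothetical example, not part of this change: fields can stay final because the
// deserializing constructor reads them in the same order writeTo writes them.
final class ExampleSection implements Writeable {
    private final long totalHits;
    private final boolean timedOut;

    ExampleSection(long totalHits, boolean timedOut) {
        this.totalHits = totalHits;
        this.timedOut = timedOut;
    }

    ExampleSection(StreamInput in) throws IOException {
        this.totalHits = in.readVLong();
        this.timedOut = in.readBoolean();
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        out.writeVLong(totalHits);
        out.writeBoolean(timedOut);
    }
}
```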
*/ -public class InternalSearchResponse implements Streamable, ToXContent { +public class InternalSearchResponse extends SearchResponseSections implements Writeable, ToXContent { public static InternalSearchResponse empty() { - return new InternalSearchResponse(InternalSearchHits.empty(), null, null, null, false, null); - } - - private InternalSearchHits hits; - - private InternalAggregations aggregations; - - private Suggest suggest; - - private SearchProfileShardResults profileResults; - - private boolean timedOut; - - private Boolean terminatedEarly = null; - - private InternalSearchResponse() { - } - - public InternalSearchResponse(InternalSearchHits hits, InternalAggregations aggregations, Suggest suggest, - SearchProfileShardResults profileResults, boolean timedOut, Boolean terminatedEarly) { - this.hits = hits; - this.aggregations = aggregations; - this.suggest = suggest; - this.profileResults = profileResults; - this.timedOut = timedOut; - this.terminatedEarly = terminatedEarly; - } - - public boolean timedOut() { - return this.timedOut; - } - - public Boolean terminatedEarly() { - return this.terminatedEarly; - } - - public SearchHits hits() { - return hits; + return new InternalSearchResponse(SearchHits.empty(), null, null, null, false, null, 1); } - public Aggregations aggregations() { - return aggregations; + public InternalSearchResponse(SearchHits hits, InternalAggregations aggregations, Suggest suggest, + SearchProfileShardResults profileResults, boolean timedOut, Boolean terminatedEarly, + int numReducePhases) { + super(hits, aggregations, suggest, timedOut, terminatedEarly, profileResults, numReducePhases); } - public Suggest suggest() { - return suggest; - } - - /** - * Returns the profile results for this search response (including all shards). - * An empty map is returned if profiling was not enabled - * - * @return Profile results - */ - public Map profile() { - if (profileResults == null) { - return Collections.emptyMap(); - } - return profileResults.getShardResults(); - } - - @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - hits.toXContent(builder, params); - if (aggregations != null) { - aggregations.toXContent(builder, params); - } - if (suggest != null) { - suggest.toXContent(builder, params); - } - if (profileResults != null) { - profileResults.toXContent(builder, params); - } - return builder; - } - - public static InternalSearchResponse readInternalSearchResponse(StreamInput in) throws IOException { - InternalSearchResponse response = new InternalSearchResponse(); - response.readFrom(in); - return response; - } - - @Override - public void readFrom(StreamInput in) throws IOException { - hits = readSearchHits(in); - if (in.readBoolean()) { - aggregations = InternalAggregations.readAggregations(in); - } - if (in.readBoolean()) { - suggest = Suggest.readSuggest(in); - } - timedOut = in.readBoolean(); - terminatedEarly = in.readOptionalBoolean(); - profileResults = in.readOptionalWriteable(SearchProfileShardResults::new); + public InternalSearchResponse(StreamInput in) throws IOException { + super( + SearchHits.readSearchHits(in), + in.readBoolean() ? InternalAggregations.readAggregations(in) : null, + in.readBoolean() ? Suggest.readSuggest(in) : null, + in.readBoolean(), + in.readOptionalBoolean(), + in.readOptionalWriteable(SearchProfileShardResults::new), + in.getVersion().onOrAfter(Version.V_5_4_0) ? 
in.readVInt() : 1 + ); } @Override public void writeTo(StreamOutput out) throws IOException { hits.writeTo(out); - if (aggregations == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - aggregations.writeTo(out); - } - if (suggest == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - suggest.writeTo(out); - } + out.writeOptionalStreamable((InternalAggregations)aggregations); + out.writeOptionalStreamable(suggest); out.writeBoolean(timedOut); out.writeOptionalBoolean(terminatedEarly); out.writeOptionalWriteable(profileResults); + if (out.getVersion().onOrAfter(Version.V_5_4_0)) { + out.writeVInt(numReducePhases); + } } } diff --git a/core/src/main/java/org/elasticsearch/search/internal/ScrollContext.java b/core/src/main/java/org/elasticsearch/search/internal/ScrollContext.java index 1b7bcfb93c788..163dbcc73d924 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/ScrollContext.java +++ b/core/src/main/java/org/elasticsearch/search/internal/ScrollContext.java @@ -22,11 +22,35 @@ import org.apache.lucene.search.ScoreDoc; import org.elasticsearch.search.Scroll; +import java.util.HashMap; +import java.util.Map; + /** Wrapper around information that needs to stay around when scrolling. */ -public class ScrollContext { +public final class ScrollContext { + + private Map context = null; public int totalHits = -1; public float maxScore; public ScoreDoc lastEmittedDoc; public Scroll scroll; + + /** + * Returns the object or null if the given key does not have a + * value in the context + */ + @SuppressWarnings("unchecked") // (T)object + public T getFromContext(String key) { + return context != null ? (T) context.get(key) : null; + } + + /** + * Puts the object into the context + */ + public void putInContext(String key, Object value) { + if (context == null) { + context = new HashMap<>(); + } + context.put(key, value); + } } diff --git a/core/src/main/java/org/elasticsearch/search/internal/SearchContext.java b/core/src/main/java/org/elasticsearch/search/internal/SearchContext.java index 65fe7ddad1530..ebb2157d981e7 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/SearchContext.java +++ b/core/src/main/java/org/elasticsearch/search/internal/SearchContext.java @@ -22,12 +22,10 @@ import org.apache.lucene.search.Collector; import org.apache.lucene.search.FieldDoc; import org.apache.lucene.search.Query; -import org.apache.lucene.store.AlreadyClosedException; import org.apache.lucene.util.Counter; -import org.apache.lucene.util.RefCount; +import org.elasticsearch.action.search.SearchTask; import org.elasticsearch.action.search.SearchType; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.unit.TimeValue; @@ -35,25 +33,24 @@ import org.elasticsearch.common.util.concurrent.AbstractRefCounted; import org.elasticsearch.common.util.concurrent.RefCounted; import org.elasticsearch.common.util.iterable.Iterables; -import org.elasticsearch.index.analysis.AnalysisService; import org.elasticsearch.index.cache.bitset.BitsetFilterCache; import org.elasticsearch.index.fielddata.IndexFieldDataService; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.ObjectMapper; +import org.elasticsearch.search.collapse.CollapseContext; import org.elasticsearch.index.query.ParsedQuery; 
import org.elasticsearch.index.query.QueryShardContext; -import org.elasticsearch.search.fetch.StoredFieldsContext; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.similarity.SimilarityService; -import org.elasticsearch.script.ScriptService; +import org.elasticsearch.search.SearchExtBuilder; import org.elasticsearch.search.SearchShardTarget; import org.elasticsearch.search.aggregations.SearchContextAggregations; import org.elasticsearch.search.dfs.DfsSearchResult; import org.elasticsearch.search.fetch.FetchPhase; import org.elasticsearch.search.fetch.FetchSearchResult; -import org.elasticsearch.search.fetch.FetchSubPhase; -import org.elasticsearch.search.fetch.FetchSubPhaseContext; +import org.elasticsearch.search.fetch.StoredFieldsContext; +import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; import org.elasticsearch.search.fetch.subphase.InnerHitsContext; import org.elasticsearch.search.fetch.subphase.ScriptFieldsContext; @@ -66,7 +63,7 @@ import org.elasticsearch.search.suggest.SuggestionSearchContext; import java.util.ArrayList; -import java.util.HashMap; +import java.util.EnumMap; import java.util.List; import java.util.Map; import java.util.concurrent.atomic.AtomicBoolean; @@ -84,35 +81,20 @@ // For reference why we use RefCounted here see #20095 public abstract class SearchContext extends AbstractRefCounted implements Releasable { - private static ThreadLocal current = new ThreadLocal<>(); public static final int DEFAULT_TERMINATE_AFTER = 0; - - public static void setCurrent(SearchContext value) { - current.set(value); - } - - public static void removeCurrent() { - current.remove(); - } - - public static SearchContext current() { - return current.get(); - } - private Map> clearables = null; private final AtomicBoolean closed = new AtomicBoolean(false); private InnerHitsContext innerHitsContext; - protected final ParseFieldMatcher parseFieldMatcher; - - protected SearchContext(ParseFieldMatcher parseFieldMatcher) { + protected SearchContext() { super("search_context"); - this.parseFieldMatcher = parseFieldMatcher; } - public ParseFieldMatcher parseFieldMatcher() { - return parseFieldMatcher; - } + public abstract void setTask(SearchTask task); + + public abstract SearchTask getTask(); + + public abstract boolean isCancelled(); @Override public final void close() { @@ -121,8 +103,6 @@ public final void close() { } } - private boolean nowInMillisUsed; - @Override protected final void closeInternal() { try { @@ -141,10 +121,13 @@ protected void alreadyClosed() { /** * Should be called before executing the main query and after all other parameters have been set. + * @param rewrite if the set query should be rewritten against the searcher returned from {@link #searcher()} */ - public abstract void preProcess(); + public abstract void preProcess(boolean rewrite); - public abstract Query searchFilter(String[] types); + /** Automatically apply all required filters to the given query such as + * alias filters, types filters, etc. 
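The javadoc above describes the contract of the new `buildFilteredQuery` declared just below: the context combines the caller's query with whatever alias and type filters apply. One common way to express that at the Lucene level is a non-scoring `FILTER` clause; a hypothetical sketch, not the actual implementation:

```java
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;

final class FilteredQueries {
    // Restrict matches with `filter` without letting it contribute to scoring.
    static Query filtered(Query main, Query filter) {
        return new BooleanQuery.Builder()
                .add(main, Occur.MUST)
                .add(filter, Occur.FILTER)
                .build();
    }
}
```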
*/ + public abstract Query buildFilteredQuery(Query query); public abstract long id(); @@ -160,25 +143,8 @@ protected void alreadyClosed() { public abstract float queryBoost(); - public abstract SearchContext queryBoost(float queryBoost); - public abstract long getOriginNanoTime(); - public final long nowInMillis() { - nowInMillisUsed = true; - return nowInMillisImpl(); - } - - public final boolean nowInMillisUsed() { - return nowInMillisUsed; - } - - public final void resetNowInMillisUsed() { - this.nowInMillisUsed = false; - } - - protected abstract long nowInMillisImpl(); - public abstract ScrollContext scrollContext(); public abstract SearchContext scrollContext(ScrollContext scroll); @@ -187,7 +153,9 @@ public final void resetNowInMillisUsed() { public abstract SearchContext aggregations(SearchContextAggregations aggregations); - public abstract SubPhaseContext getFetchSubPhaseContext(FetchSubPhase.ContextFactory contextFactory); + public abstract void addSearchExt(SearchExtBuilder searchExtBuilder); + + public abstract SearchExtBuilder getSearchExt(String name); public abstract SearchContextHighlight highlight(); @@ -226,18 +194,18 @@ public InnerHitsContext innerHits() { public abstract SearchContext fetchSourceContext(FetchSourceContext fetchSourceContext); + public abstract DocValueFieldsContext docValueFieldsContext(); + + public abstract SearchContext docValueFieldsContext(DocValueFieldsContext docValueFieldsContext); + public abstract ContextIndexSearcher searcher(); public abstract IndexShard indexShard(); public abstract MapperService mapperService(); - public abstract AnalysisService analysisService(); - public abstract SimilarityService similarityService(); - public abstract ScriptService scriptService(); - public abstract BigArrays bigArrays(); public abstract BitsetFilterCache bitsetFilterCache(); @@ -252,6 +220,14 @@ public InnerHitsContext innerHits() { public abstract void terminateAfter(int terminateAfter); + /** + * Indicates if the current index should perform frequent low level search cancellation check. + * + * Enabling low-level checks will make long running searches to react to the cancellation request faster. However, + * since it will produce more cancellation checks it might slow the search performance down. 
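To make the trade-off described in the `lowLevelCancellation` javadoc above concrete: a low-level check looks at the cancellation flag once per document, so long-running searches react almost immediately, at the price of an extra read per hit. A self-contained sketch with hypothetical names (not Elasticsearch code):

```java
import java.util.concurrent.atomic.AtomicBoolean;

final class CancellableScan {
    private final AtomicBoolean cancelled = new AtomicBoolean(false);

    void cancel() {
        cancelled.set(true);
    }

    long countEven(int[] docs, boolean lowLevelCancellation) {
        long matches = 0;
        for (int doc : docs) {
            // low-level check: once per document, so cancellation is noticed quickly
            if (lowLevelCancellation && cancelled.get()) {
                throw new IllegalStateException("scan cancelled");
            }
            if (doc % 2 == 0) {
                matches++;
            }
        }
        // without the low-level check, callers would only notice cancellation here
        return matches;
    }
}
```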
+ */ + public abstract boolean lowLevelCancellation(); + public abstract SearchContext minimumScore(float minimumScore); public abstract Float minimumScore(); @@ -268,6 +244,10 @@ public InnerHitsContext innerHits() { public abstract FieldDoc searchAfter(); + public abstract SearchContext collapse(CollapseContext collapse); + + public abstract CollapseContext collapse(); + public abstract SearchContext parsedPostFilter(ParsedQuery postFilter); public abstract ParsedQuery parsedPostFilter(); @@ -333,7 +313,9 @@ public InnerHitsContext innerHits() { public abstract void keepAlive(long keepAlive); - public abstract SearchLookup lookup(); + public SearchLookup lookup() { + return getQueryShardContext().lookup(); + } public abstract DfsSearchResult dfsResult(); @@ -354,7 +336,7 @@ public InnerHitsContext innerHits() { */ public void addReleasable(Releasable releasable, Lifetime lifetime) { if (clearables == null) { - clearables = new HashMap<>(); + clearables = new EnumMap<>(Lifetime.class); } List releasables = clearables.get(lifetime); if (releasables == null) { @@ -427,7 +409,11 @@ public String toString() { result.append("searchType=[").append(searchType()).append("]"); } if (scrollContext() != null) { - result.append("scroll=[").append(scrollContext().scroll.keepAlive()).append("]"); + if (scrollContext().scroll != null) { + result.append("scroll=[").append(scrollContext().scroll.keepAlive()).append("]"); + } else { + result.append("scroll=[null]"); + } } result.append(" query=[").append(query()).append("]"); return result.toString(); diff --git a/core/src/main/java/org/elasticsearch/search/internal/ShardSearchLocalRequest.java b/core/src/main/java/org/elasticsearch/search/internal/ShardSearchLocalRequest.java index d025d573c14ce..5a8d91fbaa86c 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/ShardSearchLocalRequest.java +++ b/core/src/main/java/org/elasticsearch/search/internal/ShardSearchLocalRequest.java @@ -19,21 +19,23 @@ package org.elasticsearch.search.internal; +import org.elasticsearch.Version; import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.action.search.SearchType; -import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.common.Strings; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.search.Scroll; import org.elasticsearch.search.builder.SearchSourceBuilder; import java.io.IOException; +import java.util.Optional; /** * Shard level search request that gets created and consumed on the local node. 
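A small detail from the `SearchContext` hunk above: the per-lifetime `clearables` map switches from `HashMap` to `EnumMap`, which is array-backed and iterates in declaration order when the keys are an enum. A simplified sketch of that shape (hypothetical class, reduced `Lifetime` enum):

```java
import java.util.ArrayList;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

final class ReleasableRegistry {
    enum Lifetime { COLLECTION, PHASE, CONTEXT }

    // Lazily allocated and keyed by an enum, so EnumMap is a natural fit.
    private Map<Lifetime, List<Runnable>> clearables = null;

    void add(Runnable releasable, Lifetime lifetime) {
        if (clearables == null) {
            clearables = new EnumMap<>(Lifetime.class);
        }
        clearables.computeIfAbsent(lifetime, key -> new ArrayList<>()).add(releasable);
    }

    void clear(Lifetime lifetime) {
        if (clearables == null) {
            return;
        }
        List<Runnable> releasables = clearables.remove(lifetime);
        if (releasables != null) {
            releasables.forEach(Runnable::run);
        }
    }
}
```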
@@ -57,12 +59,14 @@ public class ShardSearchLocalRequest implements ShardSearchRequest { + private String clusterAlias; private ShardId shardId; private int numberOfShards; private SearchType searchType; private Scroll scroll; private String[] types = Strings.EMPTY_ARRAY; - private String[] filteringAliases; + private AliasFilter aliasFilter; + private float indexBoost; private SearchSourceBuilder source; private Boolean requestCache; private long nowInMillis; @@ -72,33 +76,33 @@ public class ShardSearchLocalRequest implements ShardSearchRequest { ShardSearchLocalRequest() { } - ShardSearchLocalRequest(SearchRequest searchRequest, ShardRouting shardRouting, int numberOfShards, - String[] filteringAliases, long nowInMillis) { - this(shardRouting.shardId(), numberOfShards, searchRequest.searchType(), - searchRequest.source(), searchRequest.types(), searchRequest.requestCache()); + ShardSearchLocalRequest(SearchRequest searchRequest, ShardId shardId, int numberOfShards, + AliasFilter aliasFilter, float indexBoost, long nowInMillis, String clusterAlias) { + this(shardId, numberOfShards, searchRequest.searchType(), + searchRequest.source(), searchRequest.types(), searchRequest.requestCache(), aliasFilter, indexBoost); this.scroll = searchRequest.scroll(); - this.filteringAliases = filteringAliases; this.nowInMillis = nowInMillis; + this.clusterAlias = clusterAlias; } - public ShardSearchLocalRequest(String[] types, long nowInMillis) { + public ShardSearchLocalRequest(ShardId shardId, String[] types, long nowInMillis, AliasFilter aliasFilter) { this.types = types; this.nowInMillis = nowInMillis; - } - - public ShardSearchLocalRequest(String[] types, long nowInMillis, String[] filteringAliases) { - this(types, nowInMillis); - this.filteringAliases = filteringAliases; + this.aliasFilter = aliasFilter; + this.shardId = shardId; + indexBoost = 1.0f; } public ShardSearchLocalRequest(ShardId shardId, int numberOfShards, SearchType searchType, SearchSourceBuilder source, String[] types, - Boolean requestCache) { + Boolean requestCache, AliasFilter aliasFilter, float indexBoost) { this.shardId = shardId; this.numberOfShards = numberOfShards; this.searchType = searchType; this.source = source; this.types = types; this.requestCache = requestCache; + this.aliasFilter = aliasFilter; + this.indexBoost = indexBoost; } @@ -133,8 +137,13 @@ public SearchType searchType() { } @Override - public String[] filteringAliases() { - return filteringAliases; + public QueryBuilder filteringAliases() { + return aliasFilter.getQueryBuilder(); + } + + @Override + public float indexBoost() { + return indexBoost; } @Override @@ -162,6 +171,10 @@ public boolean isProfile() { return profile; } + void setSearchType(SearchType type) { + this.searchType = type; + } + protected void innerReadFrom(StreamInput in) throws IOException { shardId = ShardId.readShardId(in); searchType = SearchType.fromId(in.readByte()); @@ -169,9 +182,26 @@ protected void innerReadFrom(StreamInput in) throws IOException { scroll = in.readOptionalWriteable(Scroll::new); source = in.readOptionalWriteable(SearchSourceBuilder::new); types = in.readStringArray(); - filteringAliases = in.readStringArray(); + aliasFilter = new AliasFilter(in); + if (in.getVersion().onOrAfter(Version.V_5_2_0)) { + indexBoost = in.readFloat(); + } else { + // Nodes < 5.2.0 doesn't send index boost. Read it from source. 
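The comment above marks a backwards-compatibility path: the new `indexBoost` (and, further down, `clusterAlias`) fields are only read and written when the other node is on a version that knows about them, so mixed-version clusters keep working. The same gating pattern, distilled into a hypothetical example that assumes only the `getVersion()` accessors already used in this hunk:

```java
import org.elasticsearch.Version;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

import java.io.IOException;

final class BoostOnTheWire {
    float indexBoost = 1.0f;

    void readFrom(StreamInput in) throws IOException {
        if (in.getVersion().onOrAfter(Version.V_5_2_0)) {
            indexBoost = in.readFloat();
        } else {
            indexBoost = 1.0f; // sender is too old to include the field, fall back to a default
        }
    }

    void writeTo(StreamOutput out) throws IOException {
        if (out.getVersion().onOrAfter(Version.V_5_2_0)) {
            out.writeFloat(indexBoost);
        }
    }
}
```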
+ if (source != null) { + Optional boost = source.indexBoosts() + .stream() + .filter(ib -> ib.getIndex().equals(shardId.getIndexName())) + .findFirst(); + indexBoost = boost.isPresent() ? boost.get().getBoost() : 1.0f; + } else { + indexBoost = 1.0f; + } + } nowInMillis = in.readVLong(); requestCache = in.readOptionalBoolean(); + if (in.getVersion().onOrAfter(Version.V_5_6_0)) { + clusterAlias = in.readOptionalString(); + } } protected void innerWriteTo(StreamOutput out, boolean asKey) throws IOException { @@ -183,11 +213,17 @@ protected void innerWriteTo(StreamOutput out, boolean asKey) throws IOException out.writeOptionalWriteable(scroll); out.writeOptionalWriteable(source); out.writeStringArray(types); - out.writeStringArrayNullable(filteringAliases); + aliasFilter.writeTo(out); + if (out.getVersion().onOrAfter(Version.V_5_2_0)) { + out.writeFloat(indexBoost); + } if (!asKey) { out.writeVLong(nowInMillis); } out.writeOptionalBoolean(requestCache); + if (out.getVersion().onOrAfter(Version.V_5_6_0)) { + out.writeOptionalString(clusterAlias); + } } @Override @@ -203,10 +239,16 @@ public BytesReference cacheKey() throws IOException { public void rewrite(QueryShardContext context) throws IOException { SearchSourceBuilder source = this.source; SearchSourceBuilder rewritten = null; + aliasFilter = aliasFilter.rewrite(context); while (rewritten != source) { rewritten = source.rewrite(context); source = rewritten; } this.source = source; } + + @Override + public String getClusterAlias() { + return clusterAlias; + } } diff --git a/core/src/main/java/org/elasticsearch/search/internal/ShardSearchRequest.java b/core/src/main/java/org/elasticsearch/search/internal/ShardSearchRequest.java index 6c237322f0458..6c5c310eb1e72 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/ShardSearchRequest.java +++ b/core/src/main/java/org/elasticsearch/search/internal/ShardSearchRequest.java @@ -20,13 +20,24 @@ package org.elasticsearch.search.internal; import org.elasticsearch.action.search.SearchType; +import org.elasticsearch.cluster.metadata.AliasMetaData; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.collect.ImmutableOpenMap; +import org.elasticsearch.index.Index; +import org.elasticsearch.index.query.BoolQueryBuilder; +import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.indices.AliasFilterParsingException; +import org.elasticsearch.indices.InvalidAliasNameException; import org.elasticsearch.search.Scroll; import org.elasticsearch.search.builder.SearchSourceBuilder; import java.io.IOException; +import java.util.Optional; +import java.util.function.Function; /** * Shard level request that represents a search. @@ -47,7 +58,9 @@ public interface ShardSearchRequest { SearchType searchType(); - String[] filteringAliases(); + QueryBuilder filteringAliases(); + + float indexBoost(); long nowInMillis(); @@ -76,4 +89,63 @@ public interface ShardSearchRequest { * QueryBuilder. */ void rewrite(QueryShardContext context) throws IOException; + + /** + * Returns the filter associated with listed filtering aliases. + *
    + * The list of filtering aliases should be obtained by calling MetaData.filteringAliases. + * Returns null if no filtering is required.
    + */ + static QueryBuilder parseAliasFilter(CheckedFunction, IOException> filterParser, + IndexMetaData metaData, String... aliasNames) { + if (aliasNames == null || aliasNames.length == 0) { + return null; + } + Index index = metaData.getIndex(); + ImmutableOpenMap aliases = metaData.getAliases(); + Function parserFunction = (alias) -> { + if (alias.filter() == null) { + return null; + } + try { + return filterParser.apply(alias.filter().uncompressed()).orElse(null); + } catch (IOException ex) { + throw new AliasFilterParsingException(index, alias.getAlias(), "Invalid alias filter", ex); + } + }; + if (aliasNames.length == 1) { + AliasMetaData alias = aliases.get(aliasNames[0]); + if (alias == null) { + // This shouldn't happen unless alias disappeared after filteringAliases was called. + throw new InvalidAliasNameException(index, aliasNames[0], "Unknown alias name was passed to alias Filter"); + } + return parserFunction.apply(alias); + } else { + // we need to bench here a bit, to see maybe it makes sense to use OrFilter + BoolQueryBuilder combined = new BoolQueryBuilder(); + for (String aliasName : aliasNames) { + AliasMetaData alias = aliases.get(aliasName); + if (alias == null) { + // This shouldn't happen unless alias disappeared after filteringAliases was called. + throw new InvalidAliasNameException(index, aliasNames[0], + "Unknown alias name was passed to alias Filter"); + } + QueryBuilder parsedFilter = parserFunction.apply(alias); + if (parsedFilter != null) { + combined.should(parsedFilter); + } else { + // The filter might be null only if filter was removed after filteringAliases was called + return null; + } + } + return combined; + } + } + + /** + * Returns the cluster alias if this request is for a remote cluster or null if the request if targeted to the local + * cluster. 
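`parseAliasFilter` above resolves each alias to its filter and, when the request hits several aliases, combines them with `should` clauses so a document only needs to match one of them. Just that combination step, reduced to a hypothetical helper built on the query builders already used in this hunk:

```java
import java.util.List;

import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

final class AliasFilters {
    // null means "no filtering"; one filter is used as-is; several are OR-ed together.
    static QueryBuilder combine(List<QueryBuilder> filters) {
        if (filters.isEmpty()) {
            return null;
        }
        if (filters.size() == 1) {
            return filters.get(0);
        }
        BoolQueryBuilder combined = QueryBuilders.boolQuery();
        for (QueryBuilder filter : filters) {
            combined.should(filter);
        }
        return combined;
    }
}
```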
+ */ + String getClusterAlias(); + } diff --git a/core/src/main/java/org/elasticsearch/search/internal/ShardSearchTransportRequest.java b/core/src/main/java/org/elasticsearch/search/internal/ShardSearchTransportRequest.java index 93013b94b36a8..95f2f26e9aec0 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/ShardSearchTransportRequest.java +++ b/core/src/main/java/org/elasticsearch/search/internal/ShardSearchTransportRequest.java @@ -22,16 +22,19 @@ import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.OriginalIndices; import org.elasticsearch.action.search.SearchRequest; +import org.elasticsearch.action.search.SearchTask; import org.elasticsearch.action.search.SearchType; import org.elasticsearch.action.support.IndicesOptions; -import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.search.Scroll; import org.elasticsearch.search.builder.SearchSourceBuilder; +import org.elasticsearch.tasks.Task; +import org.elasticsearch.tasks.TaskId; import org.elasticsearch.transport.TransportRequest; import java.io.IOException; @@ -50,10 +53,15 @@ public class ShardSearchTransportRequest extends TransportRequest implements Sha public ShardSearchTransportRequest(){ } - public ShardSearchTransportRequest(SearchRequest searchRequest, ShardRouting shardRouting, int numberOfShards, - String[] filteringAliases, long nowInMillis) { - this.shardSearchLocalRequest = new ShardSearchLocalRequest(searchRequest, shardRouting, numberOfShards, filteringAliases, nowInMillis); - this.originalIndices = new OriginalIndices(searchRequest); + public ShardSearchTransportRequest(OriginalIndices originalIndices, SearchRequest searchRequest, ShardId shardId, int numberOfShards, + AliasFilter aliasFilter, float indexBoost, long nowInMillis, String clusterAlias) { + this.shardSearchLocalRequest = new ShardSearchLocalRequest(searchRequest, shardId, numberOfShards, aliasFilter, indexBoost, + nowInMillis, clusterAlias); + this.originalIndices = originalIndices; + } + + public void searchType(SearchType searchType) { + shardSearchLocalRequest.setSearchType(searchType); } @Override @@ -72,7 +80,6 @@ public IndicesOptions indicesOptions() { return originalIndices.indicesOptions(); } - @Override public ShardId shardId() { return shardSearchLocalRequest.shardId(); @@ -104,10 +111,15 @@ public SearchType searchType() { } @Override - public String[] filteringAliases() { + public QueryBuilder filteringAliases() { return shardSearchLocalRequest.filteringAliases(); } + @Override + public float indexBoost() { + return shardSearchLocalRequest.indexBoost(); + } + @Override public long nowInMillis() { return shardSearchLocalRequest.nowInMillis(); @@ -129,6 +141,7 @@ public void readFrom(StreamInput in) throws IOException { shardSearchLocalRequest = new ShardSearchLocalRequest(); shardSearchLocalRequest.innerReadFrom(in); originalIndices = OriginalIndices.readOriginalIndices(in); + } @Override @@ -157,4 +170,20 @@ public boolean isProfile() { public void rewrite(QueryShardContext context) throws IOException { shardSearchLocalRequest.rewrite(context); } + + @Override + public Task createTask(long id, String type, String action, TaskId parentTaskId) { + return new 
SearchTask(id, type, action, getDescription(), parentTaskId); + } + + @Override + public String getDescription() { + // Shard id is enough here, the request itself can be found by looking at the parent task description + return "shardId[" + shardSearchLocalRequest.shardId() + "]"; + } + + @Override + public String getClusterAlias() { + return shardSearchLocalRequest.getClusterAlias(); + } } diff --git a/core/src/main/java/org/elasticsearch/search/internal/SubSearchContext.java b/core/src/main/java/org/elasticsearch/search/internal/SubSearchContext.java index 077a0941a0bde..0a6d684ccf525 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/SubSearchContext.java +++ b/core/src/main/java/org/elasticsearch/search/internal/SubSearchContext.java @@ -22,13 +22,14 @@ import org.apache.lucene.util.Counter; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.index.query.ParsedQuery; -import org.elasticsearch.search.fetch.StoredFieldsContext; import org.elasticsearch.search.aggregations.SearchContextAggregations; +import org.elasticsearch.search.collapse.CollapseContext; import org.elasticsearch.search.fetch.FetchSearchResult; +import org.elasticsearch.search.fetch.StoredFieldsContext; +import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; import org.elasticsearch.search.fetch.subphase.ScriptFieldsContext; import org.elasticsearch.search.fetch.subphase.highlight.SearchContextHighlight; -import org.elasticsearch.search.lookup.SearchLookup; import org.elasticsearch.search.query.QuerySearchResult; import org.elasticsearch.search.rescore.RescoreSearchContext; import org.elasticsearch.search.sort.SortAndFormats; @@ -36,8 +37,6 @@ import java.util.List; -/** - */ public class SubSearchContext extends FilteredSearchContext { // By default return 3 hits per bucket. 
A higher default would make the response really large by default, since @@ -60,6 +59,7 @@ public class SubSearchContext extends FilteredSearchContext { private StoredFieldsContext storedFields; private ScriptFieldsContext scriptFields; private FetchSourceContext fetchSourceContext; + private DocValueFieldsContext docValueFieldsContext; private SearchContextHighlight highlight; private boolean explain; @@ -77,19 +77,14 @@ protected void doClose() { } @Override - public void preProcess() { + public void preProcess(boolean rewrite) { } @Override - public Query searchFilter(String[] types) { + public Query buildFilteredQuery(Query query) { throw new UnsupportedOperationException("this context should be read only"); } - @Override - public SearchContext queryBoost(float queryBoost) { - throw new UnsupportedOperationException("Not supported"); - } - @Override public SearchContext scrollContext(ScrollContext scrollContext) { throw new UnsupportedOperationException("Not supported"); @@ -154,6 +149,17 @@ public SearchContext fetchSourceContext(FetchSourceContext fetchSourceContext) { return this; } + @Override + public DocValueFieldsContext docValueFieldsContext() { + return docValueFieldsContext; + } + + @Override + public SearchContext docValueFieldsContext(DocValueFieldsContext docValueFieldsContext) { + this.docValueFieldsContext = docValueFieldsContext; + return this; + } + @Override public void timeout(TimeValue timeout) { throw new UnsupportedOperationException("Not supported"); @@ -311,6 +317,11 @@ public SearchContext docIdsToLoad(int[] docIdsToLoad, int docsIdsToLoadFrom, int return this; } + @Override + public CollapseContext collapse() { + return null; + } + @Override public void accessed(long accessTime) { throw new UnsupportedOperationException("Not supported"); @@ -331,16 +342,6 @@ public FetchSearchResult fetchResult() { return fetchSearchResult; } - private SearchLookup searchLookup; - - @Override - public SearchLookup lookup() { - if (searchLookup == null) { - searchLookup = new SearchLookup(mapperService(), fieldData(), request().types()); - } - return searchLookup; - } - @Override public Counter timeEstimateCounter() { throw new UnsupportedOperationException("Not supported"); diff --git a/core/src/main/java/org/elasticsearch/search/lookup/LeafDocLookup.java b/core/src/main/java/org/elasticsearch/search/lookup/LeafDocLookup.java index db20a03f8256b..5300202924b9c 100644 --- a/core/src/main/java/org/elasticsearch/search/lookup/LeafDocLookup.java +++ b/core/src/main/java/org/elasticsearch/search/lookup/LeafDocLookup.java @@ -33,12 +33,9 @@ import java.util.Map; import java.util.Set; -/** - * - */ -public class LeafDocLookup implements Map { +public class LeafDocLookup implements Map> { - private final Map localCacheFieldData = new HashMap<>(4); + private final Map> localCacheFieldData = new HashMap<>(4); private final MapperService mapperService; private final IndexFieldDataService fieldDataService; @@ -70,20 +67,20 @@ public void setDocument(int docId) { } @Override - public Object get(Object key) { + public ScriptDocValues get(Object key) { // assume its a string... 
String fieldName = key.toString(); - ScriptDocValues scriptValues = localCacheFieldData.get(fieldName); + ScriptDocValues scriptValues = localCacheFieldData.get(fieldName); if (scriptValues == null) { final MappedFieldType fieldType = mapperService.fullName(fieldName); if (fieldType == null) { - throw new IllegalArgumentException("No field found for [" + fieldName + "] in mapping with types " + Arrays.toString(types) + ""); + throw new IllegalArgumentException("No field found for [" + fieldName + "] in mapping with types " + Arrays.toString(types)); } // load fielddata on behalf of the script: otherwise it would need additional permissions // to deal with pagedbytes/ramusagestimator/etc - scriptValues = AccessController.doPrivileged(new PrivilegedAction() { + scriptValues = AccessController.doPrivileged(new PrivilegedAction>() { @Override - public ScriptDocValues run() { + public ScriptDocValues run() { return fieldDataService.getForField(fieldType).load(reader).getScriptValues(); } }); @@ -97,7 +94,7 @@ public ScriptDocValues run() { public boolean containsKey(Object key) { // assume its a string... String fieldName = key.toString(); - ScriptDocValues scriptValues = localCacheFieldData.get(fieldName); + ScriptDocValues scriptValues = localCacheFieldData.get(fieldName); if (scriptValues == null) { MappedFieldType fieldType = mapperService.fullName(fieldName); if (fieldType == null) { @@ -123,17 +120,17 @@ public boolean containsValue(Object value) { } @Override - public Object put(Object key, Object value) { + public ScriptDocValues put(String key, ScriptDocValues value) { throw new UnsupportedOperationException(); } @Override - public Object remove(Object key) { + public ScriptDocValues remove(Object key) { throw new UnsupportedOperationException(); } @Override - public void putAll(Map m) { + public void putAll(Map> m) { throw new UnsupportedOperationException(); } @@ -143,17 +140,17 @@ public void clear() { } @Override - public Set keySet() { + public Set keySet() { throw new UnsupportedOperationException(); } @Override - public Collection values() { + public Collection> values() { throw new UnsupportedOperationException(); } @Override - public Set entrySet() { + public Set>> entrySet() { throw new UnsupportedOperationException(); } } diff --git a/core/src/main/java/org/elasticsearch/search/lookup/LeafFieldsLookup.java b/core/src/main/java/org/elasticsearch/search/lookup/LeafFieldsLookup.java index a5f90aa2c909a..446a51d451fc7 100644 --- a/core/src/main/java/org/elasticsearch/search/lookup/LeafFieldsLookup.java +++ b/core/src/main/java/org/elasticsearch/search/lookup/LeafFieldsLookup.java @@ -22,13 +22,16 @@ import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.Nullable; import org.elasticsearch.index.fieldvisitor.SingleFieldsVisitor; +import org.elasticsearch.index.mapper.IdFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.mapper.UidFieldMapper; import java.io.IOException; import java.util.Arrays; import java.util.Collection; import java.util.HashMap; +import java.util.List; import java.util.Map; import java.util.Set; @@ -40,6 +43,7 @@ public class LeafFieldsLookup implements Map { private final MapperService mapperService; + private final boolean singleType; @Nullable private final String[] types; @@ -54,6 +58,7 @@ public class LeafFieldsLookup implements Map { LeafFieldsLookup(MapperService mapperService, @Nullable String[] types, LeafReader reader) { 
this.mapperService = mapperService; + this.singleType = mapperService.getIndexSettings().isSingleType(); this.types = types; this.reader = reader; this.fieldVisitor = new SingleFieldsVisitor(null); @@ -138,18 +143,22 @@ private FieldLookup loadFieldData(String name) { if (data == null) { MappedFieldType fieldType = mapperService.fullName(name); if (fieldType == null) { - throw new IllegalArgumentException("No field found for [" + name + "] in mapping with types " + Arrays.toString(types) + ""); + throw new IllegalArgumentException("No field found for [" + name + "] in mapping with types " + Arrays.toString(types)); } data = new FieldLookup(fieldType); cachedFieldData.put(name, data); } if (data.fields() == null) { String fieldName = data.fieldType().name(); + if (singleType && UidFieldMapper.NAME.equals(fieldName)) { + fieldName = IdFieldMapper.NAME; + } fieldVisitor.reset(fieldName); try { reader.document(docId, fieldVisitor); - fieldVisitor.postProcess(data.fieldType()); - data.fields(singletonMap(name, fieldVisitor.fields().get(data.fieldType().name()))); + fieldVisitor.postProcess(mapperService); + List storedFields = fieldVisitor.fields().get(data.fieldType().name()); + data.fields(singletonMap(name, storedFields)); } catch (IOException e) { throw new ElasticsearchParseException("failed to load field [{}]", e, name); } diff --git a/core/src/main/java/org/elasticsearch/search/lookup/LeafIndexLookup.java b/core/src/main/java/org/elasticsearch/search/lookup/LeafIndexLookup.java index a7b965dbfb1b0..9908f2830f85b 100644 --- a/core/src/main/java/org/elasticsearch/search/lookup/LeafIndexLookup.java +++ b/core/src/main/java/org/elasticsearch/search/lookup/LeafIndexLookup.java @@ -18,6 +18,7 @@ */ package org.elasticsearch.search.lookup; +import org.apache.logging.log4j.Logger; import org.apache.lucene.index.Fields; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.IndexReaderContext; @@ -26,6 +27,8 @@ import org.apache.lucene.index.ReaderUtil; import org.apache.lucene.search.IndexSearcher; import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.util.MinimalMap; import java.io.IOException; @@ -64,7 +67,19 @@ public class LeafIndexLookup extends MinimalMap { // computation is expensive private int numDeletedDocs = -1; + private boolean deprecationEmitted = false; + + private void logDeprecation() { + if (deprecationEmitted == false) { + Logger logger = Loggers.getLogger(getClass()); + DeprecationLogger deprecationLogger = new DeprecationLogger(logger); + deprecationLogger.deprecated("Using _index is deprecated. 
Create a custom ScriptEngine to access index internals."); + deprecationEmitted = true; + } + } + public int numDocs() { + logDeprecation(); if (numDocs == -1) { numDocs = parentReader.numDocs(); } @@ -72,6 +87,7 @@ public int numDocs() { } public int maxDoc() { + logDeprecation(); if (maxDoc == -1) { maxDoc = parentReader.maxDoc(); } @@ -79,6 +95,7 @@ public int maxDoc() { } public int numDeletedDocs() { + logDeprecation(); if (numDeletedDocs == -1) { numDeletedDocs = parentReader.numDeletedDocs(); } @@ -127,6 +144,7 @@ protected void setNextDocIdInFields() { */ @Override public IndexField get(Object key) { + logDeprecation(); String stringField = (String) key; IndexField indexField = indexFields.get(key); if (indexField == null) { @@ -146,19 +164,23 @@ public IndexField get(Object key) { * * */ public Fields termVectors() throws IOException { + logDeprecation(); assert reader != null; return reader.getTermVectors(docId); } LeafReader getReader() { + logDeprecation(); return reader; } public int getDocId() { + logDeprecation(); return docId; } public IndexReader getParentReader() { + logDeprecation(); if (parentReader == null) { return reader; } @@ -166,10 +188,12 @@ public IndexReader getParentReader() { } public IndexSearcher getIndexSearcher() { + logDeprecation(); return indexSearcher; } public IndexReaderContext getReaderContext() { + logDeprecation(); return getParentReader().getContext(); } } diff --git a/core/src/main/java/org/elasticsearch/search/lookup/SourceLookup.java b/core/src/main/java/org/elasticsearch/search/lookup/SourceLookup.java index 910f5daf7a3af..01610c6405f2b 100644 --- a/core/src/main/java/org/elasticsearch/search/lookup/SourceLookup.java +++ b/core/src/main/java/org/elasticsearch/search/lookup/SourceLookup.java @@ -27,6 +27,7 @@ import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.support.XContentMapValues; import org.elasticsearch.index.fieldvisitor.FieldsVisitor; +import org.elasticsearch.search.fetch.subphase.FetchSourceContext; import java.util.Collection; import java.util.List; @@ -130,8 +131,8 @@ public List extractRawValues(String path) { return XContentMapValues.extractRawValues(path, loadSourceIfNeeded()); } - public Object filter(String[] includes, String[] excludes) { - return XContentMapValues.filter(loadSourceIfNeeded(), includes, excludes); + public Object filter(FetchSourceContext context) { + return context.getFilter().apply(loadSourceIfNeeded()); } public Object extractValue(String path) { diff --git a/core/src/main/java/org/elasticsearch/search/profile/AbstractInternalProfileTree.java b/core/src/main/java/org/elasticsearch/search/profile/AbstractInternalProfileTree.java index 5d20080408b94..a93d3c419ad19 100644 --- a/core/src/main/java/org/elasticsearch/search/profile/AbstractInternalProfileTree.java +++ b/core/src/main/java/org/elasticsearch/search/profile/AbstractInternalProfileTree.java @@ -162,10 +162,9 @@ private ProfileResult doGetTree(int token) { // TODO this would be better done bottom-up instead of top-down to avoid // calculating the same times over and over...but worth the effort? 
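The TODO above notes that node times are computed top-down; with this change the `getNodeTime` helper is removed here and the total is instead derived inside `ProfileResult` itself by summing the breakdown map (see the constructor change further down). That summation in a tiny, hypothetical sketch:

```java
import java.util.Map;

final class BreakdownTimings {
    // A node's total time is just the sum of its breakdown entries.
    static long totalTime(Map<String, Long> timings) {
        long total = 0;
        for (long value : timings.values()) {
            total += value;
        }
        return total;
    }
}
```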
- long nodeTime = getNodeTime(timings); String type = getTypeFromElement(element); String description = getDescriptionFromElement(element); - return new ProfileResult(type, description, timings, childrenProfileResults, nodeTime); + return new ProfileResult(type, description, timings, childrenProfileResults); } protected abstract String getTypeFromElement(E element); @@ -184,19 +183,4 @@ private void updateParent(int childToken) { tree.set(parent, parentNode); } - /** - * Internal helper to calculate the time of a node, inclusive of children - * - * @param timings - * A map of breakdown timing for the node - * @return The total time at this node, inclusive of children - */ - private static long getNodeTime(Map timings) { - long nodeTime = 0; - for (long time : timings.values()) { - nodeTime += time; - } - return nodeTime; - } - } diff --git a/core/src/main/java/org/elasticsearch/search/profile/AbstractProfileBreakdown.java b/core/src/main/java/org/elasticsearch/search/profile/AbstractProfileBreakdown.java index 3698ea07da11f..f49ad4a8718a0 100644 --- a/core/src/main/java/org/elasticsearch/search/profile/AbstractProfileBreakdown.java +++ b/core/src/main/java/org/elasticsearch/search/profile/AbstractProfileBreakdown.java @@ -33,77 +33,29 @@ public abstract class AbstractProfileBreakdown> { /** * The accumulated timings for this query node */ - private final long[] timings; - - private final long[] counts; - - /** Scratch to store the current timing type. */ - private T currentTimingType; - - /** - * The temporary scratch space for holding start-times - */ - private long scratch; - - private T[] timingTypes; + private final Timer[] timings; + private final T[] timingTypes; /** Sole constructor. */ - public AbstractProfileBreakdown(T[] timingTypes) { - this.timingTypes = timingTypes; - timings = new long[timingTypes.length]; - counts = new long[timingTypes.length]; - } - - /** - * Begin timing a query for a specific Timing context - * @param timing The timing context being profiled - */ - public void startTime(T timing) { - assert currentTimingType == null; - assert scratch == 0; - counts[timing.ordinal()] += 1; - currentTimingType = timing; - scratch = System.nanoTime(); + public AbstractProfileBreakdown(Class clazz) { + this.timingTypes = clazz.getEnumConstants(); + timings = new Timer[timingTypes.length]; + for (int i = 0; i < timings.length; ++i) { + timings[i] = new Timer(); + } } - /** - * Halt the timing process and save the elapsed time. - * startTime() must be called for a particular context prior to calling - * stopAndRecordTime(), otherwise the elapsed time will be negative and - * nonsensical - * - * @return The elapsed time - */ - public long stopAndRecordTime() { - long time = Math.max(1, System.nanoTime() - scratch); - timings[currentTimingType.ordinal()] += time; - currentTimingType = null; - scratch = 0L; - return time; + public Timer getTimer(T timing) { + return timings[timing.ordinal()]; } /** Convert this record to a map from timingType to times. 
*/ public Map toTimingMap() { Map map = new HashMap<>(); for (T timingType : timingTypes) { - map.put(timingType.toString(), timings[timingType.ordinal()]); - map.put(timingType.toString() + "_count", counts[timingType.ordinal()]); + map.put(timingType.toString(), timings[timingType.ordinal()].getApproximateTiming()); + map.put(timingType.toString() + "_count", timings[timingType.ordinal()].getCount()); } return Collections.unmodifiableMap(map); } - - /** - * Add other's timings into this breakdown - * @param other Another Breakdown to merge with this one - */ - public void merge(AbstractProfileBreakdown other) { - assert(timings.length == other.timings.length); - for (int i = 0; i < timings.length; ++i) { - timings[i] += other.timings[i]; - } - assert(counts.length == other.counts.length); - for (int i = 0; i < counts.length; ++i) { - counts[i] += other.counts[i]; - } - } } diff --git a/core/src/main/java/org/elasticsearch/search/profile/ProfileResult.java b/core/src/main/java/org/elasticsearch/search/profile/ProfileResult.java index 4c745a5127dab..9ffc58bf7386d 100644 --- a/core/src/main/java/org/elasticsearch/search/profile/ProfileResult.java +++ b/core/src/main/java/org/elasticsearch/search/profile/ProfileResult.java @@ -23,8 +23,9 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import java.io.IOException; import java.util.ArrayList; @@ -33,6 +34,9 @@ import java.util.List; import java.util.Locale; import java.util.Map; +import java.util.Objects; + +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; /** * This class is the internal representation of a profiled Query, corresponding @@ -43,13 +47,14 @@ * Each InternalProfileResult has a List of InternalProfileResults, which will contain * "children" queries if applicable */ -public final class ProfileResult implements Writeable, ToXContent { +public final class ProfileResult implements Writeable, ToXContentObject { - private static final ParseField TYPE = new ParseField("type"); - private static final ParseField DESCRIPTION = new ParseField("description"); - private static final ParseField NODE_TIME = new ParseField("time"); - private static final ParseField CHILDREN = new ParseField("children"); - private static final ParseField BREAKDOWN = new ParseField("breakdown"); + static final ParseField TYPE = new ParseField("type"); + static final ParseField DESCRIPTION = new ParseField("description"); + static final ParseField NODE_TIME = new ParseField("time"); + static final ParseField NODE_TIME_NANOS = new ParseField("time_in_nanos"); + static final ParseField CHILDREN = new ParseField("children"); + static final ParseField BREAKDOWN = new ParseField("breakdown"); private final String type; private final String description; @@ -57,13 +62,12 @@ public final class ProfileResult implements Writeable, ToXContent { private final long nodeTime; private final List children; - public ProfileResult(String type, String description, Map timings, List children, - long nodeTime) { + public ProfileResult(String type, String description, Map timings, List children) { this.type = type; this.description = description; - this.timings = timings; + this.timings = 
Objects.requireNonNull(timings, "required timings argument missing"); this.children = children; - this.nodeTime = nodeTime; + this.nodeTime = getTotalTime(timings); } /** @@ -147,6 +151,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws .field(TYPE.getPreferredName(), type) .field(DESCRIPTION.getPreferredName(), description) .field(NODE_TIME.getPreferredName(), String.format(Locale.US, "%.10gms", getTime() / 1000000.0)) + .field(NODE_TIME_NANOS.getPreferredName(), getTime()) .field(BREAKDOWN.getPreferredName(), timings); if (!children.isEmpty()) { @@ -161,4 +166,65 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws return builder; } + public static ProfileResult fromXContent(XContentParser parser) throws IOException { + XContentParser.Token token = parser.currentToken(); + ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation); + String currentFieldName = null; + String type = null, description = null; + Map timings = new HashMap<>(); + List children = new ArrayList<>(); + while((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (TYPE.match(currentFieldName)) { + type = parser.text(); + } else if (DESCRIPTION.match(currentFieldName)) { + description = parser.text(); + } else if (NODE_TIME.match(currentFieldName)) { + // skip, total time is calculate by adding up 'timings' values in ProfileResult ctor + parser.text(); + } else if (NODE_TIME_NANOS.match(currentFieldName)) { + // skip, total time is calculate by adding up 'timings' values in ProfileResult ctor + parser.longValue(); + } else { + parser.skipChildren(); + } + } else if (token == XContentParser.Token.START_OBJECT) { + if (BREAKDOWN.match(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + ensureExpectedToken(parser.currentToken(), XContentParser.Token.FIELD_NAME, parser::getTokenLocation); + String name = parser.currentName(); + ensureExpectedToken(parser.nextToken(), XContentParser.Token.VALUE_NUMBER, parser::getTokenLocation); + long value = parser.longValue(); + timings.put(name, value); + } + } else { + parser.skipChildren(); + } + } else if (token == XContentParser.Token.START_ARRAY) { + if (CHILDREN.match(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + children.add(ProfileResult.fromXContent(parser)); + } + } else { + parser.skipChildren(); + } + } + } + return new ProfileResult(type, description, timings, children); + } + + /** + * @param timings a map of breakdown timing for the node + * @return The total time at this node + */ + private static long getTotalTime(Map timings) { + long nodeTime = 0; + for (long time : timings.values()) { + nodeTime += time; + } + return nodeTime; + } + } diff --git a/core/src/main/java/org/elasticsearch/search/profile/SearchProfileShardResults.java b/core/src/main/java/org/elasticsearch/search/profile/SearchProfileShardResults.java index 6794aa49399cb..b7fa39c42f3ab 100644 --- a/core/src/main/java/org/elasticsearch/search/profile/SearchProfileShardResults.java +++ b/core/src/main/java/org/elasticsearch/search/profile/SearchProfileShardResults.java @@ -24,6 +24,7 @@ import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; +import 
org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.search.profile.aggregation.AggregationProfileShardResult; import org.elasticsearch.search.profile.aggregation.AggregationProfiler; import org.elasticsearch.search.profile.query.QueryProfileShardResult; @@ -35,6 +36,9 @@ import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.TreeSet; + +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; /** * A container class to hold all the profile results across all shards. Internally @@ -42,6 +46,11 @@ */ public final class SearchProfileShardResults implements Writeable, ToXContent{ + private static final String SEARCHES_FIELD = "searches"; + private static final String ID_FIELD = "id"; + private static final String SHARDS_FIELD = "shards"; + public static final String PROFILE_FIELD = "profile"; + private Map shardResults; public SearchProfileShardResults(Map shardResults) { @@ -75,26 +84,80 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject("profile").startArray("shards"); - - for (Map.Entry entry : shardResults.entrySet()) { + builder.startObject(PROFILE_FIELD).startArray(SHARDS_FIELD); + // shardResults is a map, but we print entries in a json array, which is ordered. + // we sort the keys of the map, so that toXContent always prints out the same array order + TreeSet sortedKeys = new TreeSet<>(shardResults.keySet()); + for (String key : sortedKeys) { builder.startObject(); - builder.field("id", entry.getKey()); - builder.startArray("searches"); - for (QueryProfileShardResult result : entry.getValue().getQueryProfileResults()) { - builder.startObject(); + builder.field(ID_FIELD, key); + builder.startArray(SEARCHES_FIELD); + ProfileShardResult profileShardResult = shardResults.get(key); + for (QueryProfileShardResult result : profileShardResult.getQueryProfileResults()) { result.toXContent(builder, params); - builder.endObject(); } builder.endArray(); - entry.getValue().getAggregationProfileResults().toXContent(builder, params); + profileShardResult.getAggregationProfileResults().toXContent(builder, params); builder.endObject(); } - builder.endArray().endObject(); return builder; } + public static SearchProfileShardResults fromXContent(XContentParser parser) throws IOException { + XContentParser.Token token = parser.currentToken(); + ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation); + Map searchProfileResults = new HashMap<>(); + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.START_ARRAY) { + if (SHARDS_FIELD.equals(parser.currentName())) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + parseSearchProfileResultsEntry(parser, searchProfileResults); + } + } else { + parser.skipChildren(); + } + } else if (token == XContentParser.Token.START_OBJECT) { + parser.skipChildren(); + } + } + return new SearchProfileShardResults(searchProfileResults); + } + + private static void parseSearchProfileResultsEntry(XContentParser parser, + Map searchProfileResults) throws IOException { + XContentParser.Token token = parser.currentToken(); + ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation); + List queryProfileResults = new ArrayList<>(); + AggregationProfileShardResult aggProfileShardResult = null; + String id = null; + 
String currentFieldName = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (ID_FIELD.equals(currentFieldName)) { + id = parser.text(); + } else { + parser.skipChildren(); + } + } else if (token == XContentParser.Token.START_ARRAY) { + if (SEARCHES_FIELD.equals(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + queryProfileResults.add(QueryProfileShardResult.fromXContent(parser)); + } + } else if (AggregationProfileShardResult.AGGREGATIONS.equals(currentFieldName)) { + aggProfileShardResult = AggregationProfileShardResult.fromXContent(parser); + } else { + parser.skipChildren(); + } + } else { + parser.skipChildren(); + } + } + searchProfileResults.put(id, new ProfileShardResult(queryProfileResults, aggProfileShardResult)); + } + /** * Helper method to convert Profiler into InternalProfileShardResults, which * can be serialized to other nodes, emitted as JSON, etc. diff --git a/core/src/main/java/org/elasticsearch/search/profile/Timer.java b/core/src/main/java/org/elasticsearch/search/profile/Timer.java new file mode 100644 index 0000000000000..72b62ec562244 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/profile/Timer.java @@ -0,0 +1,96 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.profile; + +/** Helps measure how much time is spent running some methods. + * The {@link #start()} and {@link #stop()} methods should typically be called + * in a try/finally clause with {@link #start()} being called right before the + * try block and {@link #stop()} being called at the beginning of the finally + * block: + *
    + *  timer.start();
    + *  try {
    + *    // code to time
    + *  } finally {
    + *    timer.stop();
    + *  }
    + *  
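 + *
 + *  Note that not every call is timed individually: once a method has been called
 + *  frequently enough, {@link #start()} only samples a subset of the calls and
 + *  {@link #getApproximateTiming()} extrapolates the total from those samples, which keeps
 + *  the overhead of {@code System.nanoTime()} low for methods invoked in tight loops.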
    + */ +public class Timer { + + private boolean doTiming; + private long timing, count, lastCount, start; + + /** pkg-private for testing */ + long nanoTime() { + return System.nanoTime(); + } + + /** Start the timer. */ + public final void start() { + assert start == 0 : "#start call misses a matching #stop call"; + // We measure the timing of each method call for the first 256 + // calls, then 1/2 call up to 512 then 1/3 up to 768, etc. with + // a maximum interval of 1024, which is reached for 1024*2^8 ~= 262000 + // This allows to not slow down things too much because of calls + // to System.nanoTime() when methods are called millions of time + // in tight loops, while still providing useful timings for methods + // that are only called a couple times per search execution. + doTiming = (count - lastCount) >= Math.min(lastCount >>> 8, 1024); + if (doTiming) { + start = nanoTime(); + } + count++; + } + + /** Stop the timer. */ + public final void stop() { + if (doTiming) { + timing += (count - lastCount) * Math.max(nanoTime() - start, 1L); + lastCount = count; + start = 0; + } + } + + /** Return the number of times that {@link #start()} has been called. */ + public final long getCount() { + if (start != 0) { + throw new IllegalStateException("#start call misses a matching #stop call"); + } + return count; + } + + /** Return an approximation of the total time spent between consecutive calls of #start and #stop. */ + public final long getApproximateTiming() { + if (start != 0) { + throw new IllegalStateException("#start call misses a matching #stop call"); + } + // We don't have timings for the last `count-lastCount` method calls + // so we assume that they had the same timing as the lastCount first + // calls. This approximation is ok since at most 1/256th of method + // calls have not been timed. 
+ long timing = this.timing; + if (count > lastCount) { + assert lastCount > 0; + timing += (count - lastCount) * timing / lastCount; + } + return timing; + } +} diff --git a/core/src/main/java/org/elasticsearch/search/profile/aggregation/AggregationProfileBreakdown.java b/core/src/main/java/org/elasticsearch/search/profile/aggregation/AggregationProfileBreakdown.java index b4cb1efe5d350..84a525bf90784 100644 --- a/core/src/main/java/org/elasticsearch/search/profile/aggregation/AggregationProfileBreakdown.java +++ b/core/src/main/java/org/elasticsearch/search/profile/aggregation/AggregationProfileBreakdown.java @@ -24,7 +24,7 @@ public class AggregationProfileBreakdown extends AbstractProfileBreakdown { public AggregationProfileBreakdown() { - super(AggregationTimingType.values()); + super(AggregationTimingType.class); } } diff --git a/core/src/main/java/org/elasticsearch/search/profile/aggregation/AggregationProfileShardResult.java b/core/src/main/java/org/elasticsearch/search/profile/aggregation/AggregationProfileShardResult.java index df55c5592d6a4..95a41401d11bc 100644 --- a/core/src/main/java/org/elasticsearch/search/profile/aggregation/AggregationProfileShardResult.java +++ b/core/src/main/java/org/elasticsearch/search/profile/aggregation/AggregationProfileShardResult.java @@ -24,6 +24,7 @@ import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.search.profile.ProfileResult; import java.io.IOException; @@ -31,12 +32,15 @@ import java.util.Collections; import java.util.List; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + /** * A container class to hold the profile results for a single shard in the request. * Contains a list of query profiles, a collector tree and a total rewrite tree. 
*/ public final class AggregationProfileShardResult implements Writeable, ToXContent { + public static final String AGGREGATIONS = "aggregations"; private final List aggProfileResults; public AggregationProfileShardResult(List aggProfileResults) { @@ -69,11 +73,21 @@ public List getProfileResults() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startArray("aggregations"); + builder.startArray(AGGREGATIONS); for (ProfileResult p : aggProfileResults) { p.toXContent(builder, params); } builder.endArray(); return builder; } + + public static AggregationProfileShardResult fromXContent(XContentParser parser) throws IOException { + XContentParser.Token token = parser.currentToken(); + ensureExpectedToken(XContentParser.Token.START_ARRAY, token, parser::getTokenLocation); + List aggProfileResults = new ArrayList<>(); + while((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + aggProfileResults.add(ProfileResult.fromXContent(parser)); + } + return new AggregationProfileShardResult(aggProfileResults); + } } diff --git a/core/src/main/java/org/elasticsearch/search/profile/aggregation/ProfilingAggregator.java b/core/src/main/java/org/elasticsearch/search/profile/aggregation/ProfilingAggregator.java index 2883c2903e8f1..d96fbe0d86697 100644 --- a/core/src/main/java/org/elasticsearch/search/profile/aggregation/ProfilingAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/profile/aggregation/ProfilingAggregator.java @@ -23,7 +23,8 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.LeafBucketCollector; -import org.elasticsearch.search.aggregations.support.AggregationContext; +import org.elasticsearch.search.internal.SearchContext; +import org.elasticsearch.search.profile.Timer; import java.io.IOException; @@ -54,7 +55,7 @@ public String name() { } @Override - public AggregationContext context() { + public SearchContext context() { return delegate.context(); } @@ -70,9 +71,14 @@ public Aggregator subAggregator(String name) { @Override public InternalAggregation buildAggregation(long bucket) throws IOException { - profileBreakdown.startTime(AggregationTimingType.BUILD_AGGREGATION); - InternalAggregation result = delegate.buildAggregation(bucket); - profileBreakdown.stopAndRecordTime(); + Timer timer = profileBreakdown.getTimer(AggregationTimingType.BUILD_AGGREGATION); + timer.start(); + InternalAggregation result; + try { + result = delegate.buildAggregation(bucket); + } finally { + timer.stop(); + } return result; } @@ -89,9 +95,13 @@ public LeafBucketCollector getLeafCollector(LeafReaderContext ctx) throws IOExce @Override public void preCollection() throws IOException { this.profileBreakdown = profiler.getQueryBreakdown(delegate); - profileBreakdown.startTime(AggregationTimingType.INITIALIZE); - delegate.preCollection(); - profileBreakdown.stopAndRecordTime(); + Timer timer = profileBreakdown.getTimer(AggregationTimingType.INITIALIZE); + timer.start(); + try { + delegate.preCollection(); + } finally { + timer.stop(); + } profiler.pollLastElement(); } diff --git a/core/src/main/java/org/elasticsearch/search/profile/aggregation/ProfilingLeafBucketCollector.java b/core/src/main/java/org/elasticsearch/search/profile/aggregation/ProfilingLeafBucketCollector.java index addf910bc56a5..4db67967dcb2b 100644 --- 
a/core/src/main/java/org/elasticsearch/search/profile/aggregation/ProfilingLeafBucketCollector.java +++ b/core/src/main/java/org/elasticsearch/search/profile/aggregation/ProfilingLeafBucketCollector.java @@ -21,24 +21,28 @@ import org.apache.lucene.search.Scorer; import org.elasticsearch.search.aggregations.LeafBucketCollector; +import org.elasticsearch.search.profile.Timer; import java.io.IOException; public class ProfilingLeafBucketCollector extends LeafBucketCollector { private LeafBucketCollector delegate; - private AggregationProfileBreakdown profileBreakdown; + private Timer collectTimer; public ProfilingLeafBucketCollector(LeafBucketCollector delegate, AggregationProfileBreakdown profileBreakdown) { this.delegate = delegate; - this.profileBreakdown = profileBreakdown; + this.collectTimer = profileBreakdown.getTimer(AggregationTimingType.COLLECT); } @Override public void collect(int doc, long bucket) throws IOException { - profileBreakdown.startTime(AggregationTimingType.COLLECT); - delegate.collect(doc, bucket); - profileBreakdown.stopAndRecordTime(); + collectTimer.start(); + try { + delegate.collect(doc, bucket); + } finally { + collectTimer.stop(); + } } @Override diff --git a/core/src/main/java/org/elasticsearch/search/profile/query/CollectorResult.java b/core/src/main/java/org/elasticsearch/search/profile/query/CollectorResult.java index c517c8730e481..26623d300a159 100644 --- a/core/src/main/java/org/elasticsearch/search/profile/query/CollectorResult.java +++ b/core/src/main/java/org/elasticsearch/search/profile/query/CollectorResult.java @@ -24,19 +24,23 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import java.io.IOException; import java.util.ArrayList; import java.util.List; import java.util.Locale; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + /** * Public interface and serialization container for profiled timings of the * Collectors used in the search. 
Children CollectorResult's may be * embedded inside of a parent CollectorResult */ -public class CollectorResult implements ToXContent, Writeable { +public class CollectorResult implements ToXContentObject, Writeable { public static final String REASON_SEARCH_COUNT = "search_count"; public static final String REASON_SEARCH_TOP_HITS = "search_top_hits"; @@ -45,12 +49,14 @@ public class CollectorResult implements ToXContent, Writeable { public static final String REASON_SEARCH_MIN_SCORE = "search_min_score"; public static final String REASON_SEARCH_MULTI = "search_multi"; public static final String REASON_SEARCH_TIMEOUT = "search_timeout"; + public static final String REASON_SEARCH_CANCELLED = "search_cancelled"; public static final String REASON_AGGREGATION = "aggregation"; public static final String REASON_AGGREGATION_GLOBAL = "aggregation_global"; private static final ParseField NAME = new ParseField("name"); private static final ParseField REASON = new ParseField("reason"); private static final ParseField TIME = new ParseField("time"); + private static final ParseField TIME_NANOS = new ParseField("time_in_nanos"); private static final ParseField CHILDREN = new ParseField("children"); /** @@ -139,7 +145,8 @@ public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params par builder = builder.startObject() .field(NAME.getPreferredName(), getName()) .field(REASON.getPreferredName(), getReason()) - .field(TIME.getPreferredName(), String.format(Locale.US, "%.10gms", (double) (getTime() / 1000000.0))); + .field(TIME.getPreferredName(), String.format(Locale.US, "%.10gms", getTime() / 1000000.0)) + .field(TIME_NANOS.getPreferredName(), getTime()); if (!children.isEmpty()) { builder = builder.startArray(CHILDREN.getPreferredName()); @@ -151,4 +158,42 @@ public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params par builder = builder.endObject(); return builder; } + + public static CollectorResult fromXContent(XContentParser parser) throws IOException { + XContentParser.Token token = parser.currentToken(); + ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation); + String currentFieldName = null; + String name = null, reason = null; + long time = -1; + List children = new ArrayList<>(); + while((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (NAME.match(currentFieldName)) { + name = parser.text(); + } else if (REASON.match(currentFieldName)) { + reason = parser.text(); + } else if (TIME.match(currentFieldName)) { + // we need to consume this value, but we use the raw nanosecond value + parser.text(); + } else if (TIME_NANOS.match(currentFieldName)) { + time = parser.longValue(); + } else { + parser.skipChildren(); + } + } else if (token == XContentParser.Token.START_ARRAY) { + if (CHILDREN.match(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + children.add(CollectorResult.fromXContent(parser)); + } + } else { + parser.skipChildren(); + } + } else { + parser.skipChildren(); + } + } + return new CollectorResult(name, reason, time, children); + } } diff --git a/core/src/main/java/org/elasticsearch/search/profile/query/ProfileCollector.java b/core/src/main/java/org/elasticsearch/search/profile/query/ProfileCollector.java index c65d097890627..ea8dbb2f335ca 100644 --- 
a/core/src/main/java/org/elasticsearch/search/profile/query/ProfileCollector.java +++ b/core/src/main/java/org/elasticsearch/search/profile/query/ProfileCollector.java @@ -34,7 +34,7 @@ final class ProfileCollector extends FilterCollector { private long time; /** Sole constructor. */ - public ProfileCollector(Collector in) { + ProfileCollector(Collector in) { super(in); } diff --git a/core/src/main/java/org/elasticsearch/search/profile/query/ProfileScorer.java b/core/src/main/java/org/elasticsearch/search/profile/query/ProfileScorer.java index 51d0b14fc9601..e475bb6b7d90c 100644 --- a/core/src/main/java/org/elasticsearch/search/profile/query/ProfileScorer.java +++ b/core/src/main/java/org/elasticsearch/search/profile/query/ProfileScorer.java @@ -23,6 +23,7 @@ import org.apache.lucene.search.Scorer; import org.apache.lucene.search.TwoPhaseIterator; import org.apache.lucene.search.Weight; +import org.elasticsearch.search.profile.Timer; import java.io.IOException; import java.util.Collection; @@ -35,13 +36,16 @@ final class ProfileScorer extends Scorer { private final Scorer scorer; private ProfileWeight profileWeight; - private final QueryProfileBreakdown profile; + private final Timer scoreTimer, nextDocTimer, advanceTimer, matchTimer; ProfileScorer(ProfileWeight w, Scorer scorer, QueryProfileBreakdown profile) throws IOException { super(w); this.scorer = scorer; this.profileWeight = w; - this.profile = profile; + scoreTimer = profile.getTimer(QueryTimingType.SCORE); + nextDocTimer = profile.getTimer(QueryTimingType.NEXT_DOC); + advanceTimer = profile.getTimer(QueryTimingType.ADVANCE); + matchTimer = profile.getTimer(QueryTimingType.MATCH); } @Override @@ -51,11 +55,11 @@ public int docID() { @Override public float score() throws IOException { - profile.startTime(QueryTimingType.SCORE); + scoreTimer.start(); try { return scorer.score(); } finally { - profile.stopAndRecordTime(); + scoreTimer.stop(); } } @@ -70,7 +74,7 @@ public Weight getWeight() { } @Override - public Collection getChildren() { + public Collection getChildren() throws IOException { return scorer.getChildren(); } @@ -81,21 +85,21 @@ public DocIdSetIterator iterator() { @Override public int advance(int target) throws IOException { - profile.startTime(QueryTimingType.ADVANCE); + advanceTimer.start(); try { return in.advance(target); } finally { - profile.stopAndRecordTime(); + advanceTimer.stop(); } } @Override public int nextDoc() throws IOException { - profile.startTime(QueryTimingType.NEXT_DOC); + nextDocTimer.start(); try { return in.nextDoc(); } finally { - profile.stopAndRecordTime(); + nextDocTimer.stop(); } } @@ -122,21 +126,21 @@ public TwoPhaseIterator twoPhaseIterator() { @Override public int advance(int target) throws IOException { - profile.startTime(QueryTimingType.ADVANCE); + advanceTimer.start(); try { return inApproximation.advance(target); } finally { - profile.stopAndRecordTime(); + advanceTimer.stop(); } } @Override public int nextDoc() throws IOException { - profile.startTime(QueryTimingType.NEXT_DOC); + nextDocTimer.start(); try { return inApproximation.nextDoc(); } finally { - profile.stopAndRecordTime(); + nextDocTimer.stop(); } } @@ -153,11 +157,11 @@ public long cost() { return new TwoPhaseIterator(approximation) { @Override public boolean matches() throws IOException { - profile.startTime(QueryTimingType.MATCH); + matchTimer.start(); try { return in.matches(); } finally { - profile.stopAndRecordTime(); + matchTimer.stop(); } } diff --git 
a/core/src/main/java/org/elasticsearch/search/profile/query/ProfileWeight.java b/core/src/main/java/org/elasticsearch/search/profile/query/ProfileWeight.java index 9ca33f8454253..a080c22f78f72 100644 --- a/core/src/main/java/org/elasticsearch/search/profile/query/ProfileWeight.java +++ b/core/src/main/java/org/elasticsearch/search/profile/query/ProfileWeight.java @@ -25,7 +25,9 @@ import org.apache.lucene.search.Explanation; import org.apache.lucene.search.Query; import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.ScorerSupplier; import org.apache.lucene.search.Weight; +import org.elasticsearch.search.profile.Timer; import java.io.IOException; import java.util.Set; @@ -48,18 +50,50 @@ public ProfileWeight(Query query, Weight subQueryWeight, QueryProfileBreakdown p @Override public Scorer scorer(LeafReaderContext context) throws IOException { - profile.startTime(QueryTimingType.BUILD_SCORER); - final Scorer subQueryScorer; + ScorerSupplier supplier = scorerSupplier(context); + if (supplier == null) { + return null; + } + return supplier.get(false); + } + + @Override + public ScorerSupplier scorerSupplier(LeafReaderContext context) throws IOException { + Timer timer = profile.getTimer(QueryTimingType.BUILD_SCORER); + timer.start(); + final ScorerSupplier subQueryScorerSupplier; try { - subQueryScorer = subQueryWeight.scorer(context); + subQueryScorerSupplier = subQueryWeight.scorerSupplier(context); } finally { - profile.stopAndRecordTime(); + timer.stop(); } - if (subQueryScorer == null) { + if (subQueryScorerSupplier == null) { return null; } - return new ProfileScorer(this, subQueryScorer, profile); + final ProfileWeight weight = this; + return new ScorerSupplier() { + + @Override + public Scorer get(boolean randomAccess) throws IOException { + timer.start(); + try { + return new ProfileScorer(weight, subQueryScorerSupplier.get(randomAccess), profile); + } finally { + timer.stop(); + } + } + + @Override + public long cost() { + timer.start(); + try { + return subQueryScorerSupplier.cost(); + } finally { + timer.stop(); + } + } + }; } @Override diff --git a/core/src/main/java/org/elasticsearch/search/profile/query/QueryProfileBreakdown.java b/core/src/main/java/org/elasticsearch/search/profile/query/QueryProfileBreakdown.java index d0608eb01afa1..21ec507dbafef 100644 --- a/core/src/main/java/org/elasticsearch/search/profile/query/QueryProfileBreakdown.java +++ b/core/src/main/java/org/elasticsearch/search/profile/query/QueryProfileBreakdown.java @@ -30,6 +30,6 @@ public final class QueryProfileBreakdown extends AbstractProfileBreakdown queryProfileResults; @@ -90,15 +97,52 @@ public CollectorResult getCollectorResult() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startArray("query"); + builder.startObject(); + builder.startArray(QUERY_ARRAY); for (ProfileResult p : queryProfileResults) { p.toXContent(builder, params); } builder.endArray(); - builder.field("rewrite_time", rewriteTime); - builder.startArray("collector"); + builder.field(REWRITE_TIME, rewriteTime); + builder.startArray(COLLECTOR); profileCollector.toXContent(builder, params); builder.endArray(); + builder.endObject(); return builder; } + + public static QueryProfileShardResult fromXContent(XContentParser parser) throws IOException { + XContentParser.Token token = parser.currentToken(); + ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation); + String currentFieldName = null; + List queryProfileResults 
= new ArrayList<>(); + long rewriteTime = 0; + CollectorResult collector = null; + while((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (REWRITE_TIME.equals(currentFieldName)) { + rewriteTime = parser.longValue(); + } else { + parser.skipChildren(); + } + } else if (token == XContentParser.Token.START_ARRAY) { + if (QUERY_ARRAY.equals(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + queryProfileResults.add(ProfileResult.fromXContent(parser)); + } + } else if (COLLECTOR.equals(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + collector = CollectorResult.fromXContent(parser); + } + } else { + parser.skipChildren(); + } + } else { + parser.skipChildren(); + } + } + return new QueryProfileShardResult(queryProfileResults, rewriteTime, collector); + } } diff --git a/core/src/main/java/org/elasticsearch/search/query/CancellableCollector.java b/core/src/main/java/org/elasticsearch/search/query/CancellableCollector.java new file mode 100644 index 0000000000000..504a7f3d13da5 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/query/CancellableCollector.java @@ -0,0 +1,53 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.search.query; + +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.Collector; +import org.apache.lucene.search.FilterCollector; +import org.apache.lucene.search.LeafCollector; +import org.elasticsearch.tasks.TaskCancelledException; + +import java.io.IOException; +import java.util.function.BooleanSupplier; + +/** + * Collector that checks if the task it is executed under is cancelled. 
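 + *
 + * For example, the query phase wraps its top-level collector as
 + * {@code new CancellableCollector(searchContext.getTask()::isCancelled, collector)} so that the
 + * cancellation flag is checked each time collection moves on to a new segment.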
+ */ +public class CancellableCollector extends FilterCollector { + private final BooleanSupplier cancelled; + + /** + * Constructor + * @param cancelled supplier of the cancellation flag, the supplier will be called for each segment + * @param in wrapped collector + */ + public CancellableCollector(BooleanSupplier cancelled, Collector in) { + super(in); + this.cancelled = cancelled; + } + + @Override + public LeafCollector getLeafCollector(LeafReaderContext context) throws IOException { + if (cancelled.getAsBoolean()) { + throw new TaskCancelledException("cancelled"); + } + return super.getLeafCollector(context); + } +} diff --git a/core/src/main/java/org/elasticsearch/search/query/QueryPhase.java b/core/src/main/java/org/elasticsearch/search/query/QueryPhase.java index 189fead781396..99253867b80d6 100644 --- a/core/src/main/java/org/elasticsearch/search/query/QueryPhase.java +++ b/core/src/main/java/org/elasticsearch/search/query/QueryPhase.java @@ -34,22 +34,26 @@ import org.apache.lucene.search.ScoreDoc; import org.apache.lucene.search.Sort; import org.apache.lucene.search.TermQuery; -import org.apache.lucene.search.TimeLimitingCollector; import org.apache.lucene.search.TopDocs; import org.apache.lucene.search.TopDocsCollector; import org.apache.lucene.search.TopFieldCollector; import org.apache.lucene.search.TopScoreDocCollector; import org.apache.lucene.search.TotalHitCountCollector; import org.apache.lucene.search.Weight; +import org.apache.lucene.search.grouping.CollapsingTopDocsCollector; +import org.apache.lucene.util.Counter; +import org.elasticsearch.action.search.SearchTask; import org.elasticsearch.action.search.SearchType; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.lucene.MinimumScoreCollector; import org.elasticsearch.common.lucene.search.FilteredCollector; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.search.collapse.CollapseContext; import org.elasticsearch.search.DocValueFormat; import org.elasticsearch.search.SearchPhase; import org.elasticsearch.search.SearchService; import org.elasticsearch.search.aggregations.AggregationPhase; +import org.elasticsearch.search.internal.ContextIndexSearcher; import org.elasticsearch.search.internal.ScrollContext; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.profile.ProfileShardResult; @@ -60,15 +64,19 @@ import org.elasticsearch.search.rescore.RescoreSearchContext; import org.elasticsearch.search.sort.SortAndFormats; import org.elasticsearch.search.suggest.SuggestPhase; +import org.elasticsearch.tasks.TaskCancelledException; import java.util.AbstractList; import java.util.ArrayList; import java.util.Collections; import java.util.List; import java.util.concurrent.Callable; +import java.util.function.Consumer; + /** - * + * Query phase of a search request, used to run the query and get back from each shard information about the matching documents + * (document ids and score or sort criteria) so that matches can be reduced on the coordinating node */ public class QueryPhase implements SearchPhase { @@ -84,7 +92,7 @@ public QueryPhase(Settings settings) { @Override public void preProcess(SearchContext context) { - context.preProcess(); + context.preProcess(true); } @Override @@ -102,7 +110,8 @@ public void execute(SearchContext searchContext) throws QueryPhaseExecutionExcep // here to make sure it happens during the QUERY phase aggregationPhase.preProcess(searchContext); - boolean rescore = execute(searchContext, 
searchContext.searcher()); + final ContextIndexSearcher searcher = searchContext.searcher(); + boolean rescore = execute(searchContext, searcher, searcher::setCheckCancelled); if (rescore) { // only if we do a regular search rescorePhase.execute(searchContext); @@ -134,12 +143,12 @@ private static boolean returnsDocsInOrder(Query query, SortAndFormats sf) { * wire everything (mapperService, etc.) * @return whether the rescoring phase should be executed */ - static boolean execute(SearchContext searchContext, final IndexSearcher searcher) throws QueryPhaseExecutionException { + static boolean execute(SearchContext searchContext, final IndexSearcher searcher, + Consumer checkCancellationSetter) throws QueryPhaseExecutionException { QuerySearchResult queryResult = searchContext.queryResult(); queryResult.searchTimedOut(false); final boolean doProfile = searchContext.getProfilers() != null; - final SearchType searchType = searchContext.searchType(); boolean rescore = false; try { queryResult.from(searchContext.from()); @@ -162,17 +171,12 @@ static boolean execute(SearchContext searchContext, final IndexSearcher searcher if (searchContext.getProfilers() != null) { collector = new InternalProfileCollector(collector, CollectorResult.REASON_SEARCH_COUNT, Collections.emptyList()); } - topDocsCallable = new Callable() { - @Override - public TopDocs call() throws Exception { - return new TopDocs(totalHitCountCollector.getTotalHits(), Lucene.EMPTY_SCORE_DOCS, 0); - } - }; + topDocsCallable = () -> new TopDocs(totalHitCountCollector.getTotalHits(), Lucene.EMPTY_SCORE_DOCS, 0); } else { // Perhaps have a dedicated scroll phase? final ScrollContext scrollContext = searchContext.scrollContext(); assert (scrollContext != null) == (searchContext.request().scroll() != null); - final TopDocsCollector topDocsCollector; + final Collector topDocsCollector; ScoreDoc after = null; if (searchContext.request().scroll() != null) { numDocs = Math.min(searchContext.size(), totalNumDocs); @@ -205,52 +209,65 @@ public TopDocs call() throws Exception { numDocs = 1; } assert numDocs > 0; - if (searchContext.sort() != null) { - SortAndFormats sf = searchContext.sort(); - topDocsCollector = TopFieldCollector.create(sf.sort, numDocs, + if (searchContext.collapse() == null) { + if (searchContext.sort() != null) { + SortAndFormats sf = searchContext.sort(); + topDocsCollector = TopFieldCollector.create(sf.sort, numDocs, (FieldDoc) after, true, searchContext.trackScores(), searchContext.trackScores()); - sortValueFormats = sf.formats; + sortValueFormats = sf.formats; + } else { + rescore = !searchContext.rescore().isEmpty(); + for (RescoreSearchContext rescoreContext : searchContext.rescore()) { + numDocs = Math.max(rescoreContext.window(), numDocs); + } + topDocsCollector = TopScoreDocCollector.create(numDocs, after); + } } else { - rescore = !searchContext.rescore().isEmpty(); - for (RescoreSearchContext rescoreContext : searchContext.rescore()) { - numDocs = Math.max(rescoreContext.window(), numDocs); + Sort sort = Sort.RELEVANCE; + if (searchContext.sort() != null) { + sort = searchContext.sort().sort; + } + CollapseContext collapse = searchContext.collapse(); + topDocsCollector = collapse.createTopDocs(sort, numDocs, searchContext.trackScores()); + if (searchContext.sort() == null) { + sortValueFormats = new DocValueFormat[] {DocValueFormat.RAW}; + } else { + sortValueFormats = searchContext.sort().formats; } - topDocsCollector = TopScoreDocCollector.create(numDocs, after); } collector = topDocsCollector; if (doProfile) { 
collector = new InternalProfileCollector(collector, CollectorResult.REASON_SEARCH_TOP_HITS, Collections.emptyList()); } - topDocsCallable = new Callable() { - @Override - public TopDocs call() throws Exception { - TopDocs topDocs = topDocsCollector.topDocs(); - if (scrollContext != null) { - if (scrollContext.totalHits == -1) { - // first round - scrollContext.totalHits = topDocs.totalHits; - scrollContext.maxScore = topDocs.getMaxScore(); - } else { - // subsequent round: the total number of hits and - // the maximum score were computed on the first round - topDocs.totalHits = scrollContext.totalHits; - topDocs.setMaxScore(scrollContext.maxScore); - } - switch (searchType) { - case QUERY_AND_FETCH: - case DFS_QUERY_AND_FETCH: - // for (DFS_)QUERY_AND_FETCH, we already know the last emitted doc - if (topDocs.scoreDocs.length > 0) { - // set the last emitted doc - scrollContext.lastEmittedDoc = topDocs.scoreDocs[topDocs.scoreDocs.length - 1]; - } - break; - default: - break; + topDocsCallable = () -> { + final TopDocs topDocs; + if (topDocsCollector instanceof TopDocsCollector) { + topDocs = ((TopDocsCollector) topDocsCollector).topDocs(); + } else if (topDocsCollector instanceof CollapsingTopDocsCollector) { + topDocs = ((CollapsingTopDocsCollector) topDocsCollector).getTopDocs(); + } else { + throw new IllegalStateException("Unknown top docs collector " + topDocsCollector.getClass().getName()); + } + if (scrollContext != null) { + if (scrollContext.totalHits == -1) { + // first round + scrollContext.totalHits = topDocs.totalHits; + scrollContext.maxScore = topDocs.getMaxScore(); + } else { + // subsequent round: the total number of hits and + // the maximum score were computed on the first round + topDocs.totalHits = scrollContext.totalHits; + topDocs.setMaxScore(scrollContext.maxScore); + } + if (searchContext.request().numberOfShards() == 1) { + // if we fetch the document in the same roundtrip, we already know the last emitted doc + if (topDocs.scoreDocs.length > 0) { + // set the last emitted doc + scrollContext.lastEmittedDoc = topDocs.scoreDocs[topDocs.scoreDocs.length - 1]; } } - return topDocs; } + return topDocs; }; } @@ -350,14 +367,49 @@ public TopDocs call() throws Exception { } final boolean timeoutSet = searchContext.timeout() != null && !searchContext.timeout().equals(SearchService.NO_TIMEOUT); - if (timeoutSet && collector != null) { // collector might be null if no collection is actually needed + if (collector != null) { // collector might be null if no collection is actually needed + final Runnable timeoutRunnable; + if (timeoutSet) { + final Counter counter = searchContext.timeEstimateCounter(); + final long startTime = counter.get(); + final long timeout = searchContext.timeout().millis(); + final long maxTime = startTime + timeout; + timeoutRunnable = () -> { + final long time = counter.get(); + if (time > maxTime) { + throw new TimeExceededException(); + } + }; + } else { + timeoutRunnable = null; + } + + final Runnable cancellationRunnable; + if (searchContext.lowLevelCancellation()) { + SearchTask task = searchContext.getTask(); + cancellationRunnable = () -> { if (task.isCancelled()) throw new TaskCancelledException("cancelled"); }; + } else { + cancellationRunnable = null; + } + + final Runnable checkCancelled; + if (timeoutRunnable != null && cancellationRunnable != null) { + checkCancelled = () -> { timeoutRunnable.run(); cancellationRunnable.run(); }; + } else if (timeoutRunnable != null) { + checkCancelled = timeoutRunnable; + } else if (cancellationRunnable 
!= null) { + checkCancelled = cancellationRunnable; + } else { + checkCancelled = null; + } + + checkCancellationSetter.accept(checkCancelled); + final Collector child = collector; - // TODO: change to use our own counter that uses the scheduler in ThreadPool - // throws TimeLimitingCollector.TimeExceededException when timeout has reached - collector = Lucene.wrapTimeLimitingCollector(collector, searchContext.timeEstimateCounter(), searchContext.timeout().millis()); + collector = new CancellableCollector(searchContext.getTask()::isCancelled, collector); if (doProfile) { - collector = new InternalProfileCollector(collector, CollectorResult.REASON_SEARCH_TIMEOUT, - Collections.singletonList((InternalProfileCollector) child)); + collector = new InternalProfileCollector(collector, CollectorResult.REASON_SEARCH_CANCELLED, + Collections.singletonList((InternalProfileCollector) child)); } } @@ -368,7 +420,7 @@ public TopDocs call() throws Exception { } searcher.search(query, collector); } - } catch (TimeLimitingCollector.TimeExceededException e) { + } catch (TimeExceededException e) { assert timeoutSet : "TimeExceededException thrown even though timeout wasn't set"; queryResult.searchTimedOut(true); } catch (Lucene.EarlyTerminationException e) { @@ -395,4 +447,6 @@ public TopDocs call() throws Exception { throw new QueryPhaseExecutionException(searchContext, "Failed to execute main query", e); } } + + private static class TimeExceededException extends RuntimeException {} } diff --git a/core/src/main/java/org/elasticsearch/search/query/QuerySearchRequest.java b/core/src/main/java/org/elasticsearch/search/query/QuerySearchRequest.java index 15593abf0da5e..6229f29566162 100644 --- a/core/src/main/java/org/elasticsearch/search/query/QuerySearchRequest.java +++ b/core/src/main/java/org/elasticsearch/search/query/QuerySearchRequest.java @@ -21,11 +21,14 @@ import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.OriginalIndices; -import org.elasticsearch.action.search.SearchRequest; +import org.elasticsearch.action.search.SearchTask; import org.elasticsearch.action.support.IndicesOptions; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.search.dfs.AggregatedDfs; +import org.elasticsearch.tasks.Task; +import org.elasticsearch.tasks.TaskId; import org.elasticsearch.transport.TransportRequest; import java.io.IOException; @@ -46,10 +49,10 @@ public class QuerySearchRequest extends TransportRequest implements IndicesReque public QuerySearchRequest() { } - public QuerySearchRequest(SearchRequest request, long id, AggregatedDfs dfs) { + public QuerySearchRequest(OriginalIndices originalIndices, long id, AggregatedDfs dfs) { this.id = id; this.dfs = dfs; - this.originalIndices = new OriginalIndices(request); + this.originalIndices = originalIndices; } public long id() { @@ -85,4 +88,21 @@ public void writeTo(StreamOutput out) throws IOException { dfs.writeTo(out); OriginalIndices.writeOriginalIndices(originalIndices, out); } + + @Override + public Task createTask(long id, String type, String action, TaskId parentTaskId) { + return new SearchTask(id, type, action, getDescription(), parentTaskId); + } + + public String getDescription() { + StringBuilder sb = new StringBuilder(); + sb.append("id["); + sb.append(id); + sb.append("], "); + sb.append("indices["); + Strings.arrayToDelimitedString(originalIndices.indices(), ",", sb); + sb.append("]"); + return 
sb.toString(); + } + } diff --git a/core/src/main/java/org/elasticsearch/search/query/QuerySearchResult.java b/core/src/main/java/org/elasticsearch/search/query/QuerySearchResult.java index e583cfbf13e7e..f071c62f12c16 100644 --- a/core/src/main/java/org/elasticsearch/search/query/QuerySearchResult.java +++ b/core/src/main/java/org/elasticsearch/search/query/QuerySearchResult.java @@ -21,11 +21,10 @@ import org.apache.lucene.search.FieldDoc; import org.apache.lucene.search.TopDocs; -import org.elasticsearch.Version; -import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.search.DocValueFormat; +import org.elasticsearch.search.SearchPhaseResult; import org.elasticsearch.search.SearchShardTarget; import org.elasticsearch.search.aggregations.Aggregations; import org.elasticsearch.search.aggregations.InternalAggregations; @@ -42,36 +41,30 @@ import static org.elasticsearch.common.lucene.Lucene.readTopDocs; import static org.elasticsearch.common.lucene.Lucene.writeTopDocs; -/** - * - */ -public class QuerySearchResult extends QuerySearchResultProvider { +public final class QuerySearchResult extends SearchPhaseResult { - private long id; - private SearchShardTarget shardTarget; private int from; private int size; private TopDocs topDocs; private DocValueFormat[] sortValueFormats; private InternalAggregations aggregations; + private boolean hasAggs; private List pipelineAggregators; private Suggest suggest; private boolean searchTimedOut; private Boolean terminatedEarly = null; private ProfileShardResult profileShardResults; + private boolean hasProfileResults; + private boolean hasScoreDocs; + private int totalHits; + private float maxScore; public QuerySearchResult() { - } public QuerySearchResult(long id, SearchShardTarget shardTarget) { - this.id = id; - this.shardTarget = shardTarget; - } - - @Override - public boolean includeFetch() { - return false; + this.requestId = id; + setSearchShardTarget(shardTarget); } @Override @@ -79,20 +72,6 @@ public QuerySearchResult queryResult() { return this; } - @Override - public long id() { - return this.id; - } - - @Override - public SearchShardTarget shardTarget() { - return shardTarget; - } - - @Override - public void shardTarget(SearchShardTarget shardTarget) { - this.shardTarget = shardTarget; - } public void searchTimedOut(boolean searchTimedOut) { this.searchTimedOut = searchTimedOut; @@ -111,11 +90,34 @@ public Boolean terminatedEarly() { } public TopDocs topDocs() { + if (topDocs == null) { + throw new IllegalStateException("topDocs already consumed"); + } + return topDocs; + } + + /** + * Returns true iff the top docs have already been consumed. + */ + public boolean hasConsumedTopDocs() { + return topDocs == null; + } + + /** + * Returns and nulls out the top docs for this search results. This allows to free up memory once the top docs are consumed. + * @throws IllegalStateException if the top docs have already been consumed. 
+ */ + public TopDocs consumeTopDocs() { + TopDocs topDocs = this.topDocs; + if (topDocs == null) { + throw new IllegalStateException("topDocs already consumed"); + } + this.topDocs = null; return topDocs; } public void topDocs(TopDocs topDocs, DocValueFormat[] sortValueFormats) { - this.topDocs = topDocs; + setTopDocs(topDocs); if (topDocs.scoreDocs.length > 0 && topDocs.scoreDocs[0] instanceof FieldDoc) { int numFields = ((FieldDoc) topDocs.scoreDocs[0]).fields.length; if (numFields != sortValueFormats.length) { @@ -126,24 +128,58 @@ public void topDocs(TopDocs topDocs, DocValueFormat[] sortValueFormats) { this.sortValueFormats = sortValueFormats; } + private void setTopDocs(TopDocs topDocs) { + this.topDocs = topDocs; + hasScoreDocs = topDocs.scoreDocs.length > 0; + this.totalHits = topDocs.totalHits; + this.maxScore = topDocs.getMaxScore(); + } + public DocValueFormat[] sortValueFormats() { return sortValueFormats; } - public Aggregations aggregations() { - return aggregations; + /** + * Returns true if this query result has unconsumed aggregations + */ + public boolean hasAggs() { + return hasAggs; + } + + /** + * Returns and nulls out the aggregation for this search results. This allows to free up memory once the aggregation is consumed. + * @throws IllegalStateException if the aggregations have already been consumed. + */ + public Aggregations consumeAggs() { + if (aggregations == null) { + throw new IllegalStateException("aggs already consumed"); + } + Aggregations aggs = aggregations; + aggregations = null; + return aggs; } public void aggregations(InternalAggregations aggregations) { this.aggregations = aggregations; + hasAggs = aggregations != null; } /** - * Returns the profiled results for this search, or potentially null if result was empty - * @return The profiled results, or null + * Returns and nulls out the profiled results for this search, or potentially null if result was empty. + * This allows to free up memory once the profiled result is consumed. + * @throws IllegalStateException if the profiled result has already been consumed. */ - @Nullable public ProfileShardResult profileResults() { - return profileShardResults; + public ProfileShardResult consumeProfileResult() { + if (profileShardResults == null) { + throw new IllegalStateException("profile results already consumed"); + } + ProfileShardResult result = profileShardResults; + profileShardResults = null; + return result; + } + + public boolean hasProfileResults() { + return hasProfileResults; } /** @@ -152,6 +188,7 @@ public void aggregations(InternalAggregations aggregations) { */ public void profileResults(ProfileShardResult shardResults) { this.profileShardResults = shardResults; + hasProfileResults = shardResults != null; } public List pipelineAggregators() { @@ -179,6 +216,9 @@ public QuerySearchResult from(int from) { return this; } + /** + * Returns the maximum size of this results top docs. 
+ */ public int size() { return size; } @@ -188,10 +228,15 @@ public QuerySearchResult size(int size) { return this; } - /** Returns true iff the result has hits */ - public boolean hasHits() { - return (topDocs != null && topDocs.scoreDocs.length > 0) || - (suggest != null && suggest.hasScoreDocs()); + /** + * Returns true if this result has any suggest score docs + */ + public boolean hasSuggestHits() { + return (suggest != null && suggest.hasScoreDocs()); + } + + public boolean hasSearchContext() { + return hasScoreDocs || hasSuggestHits(); } public static QuerySearchResult readQuerySearchResult(StreamInput in) throws IOException { @@ -208,8 +253,7 @@ public void readFrom(StreamInput in) throws IOException { } public void readFromWithId(long id, StreamInput in) throws IOException { - this.id = id; -// shardTarget = readSearchShardTarget(in); + this.requestId = id; from = in.readVInt(); size = in.readVInt(); int numSortFieldsPlus1 = in.readVInt(); @@ -221,8 +265,8 @@ public void readFromWithId(long id, StreamInput in) throws IOException { sortValueFormats[i] = in.readNamedWriteable(DocValueFormat.class); } } - topDocs = readTopDocs(in); - if (in.readBoolean()) { + setTopDocs(readTopDocs(in)); + if (hasAggs = in.readBoolean()) { aggregations = InternalAggregations.readAggregations(in); } pipelineAggregators = in.readNamedWriteableList(PipelineAggregator.class).stream().map(a -> (SiblingPipelineAggregator) a) @@ -232,21 +276,18 @@ public void readFromWithId(long id, StreamInput in) throws IOException { } searchTimedOut = in.readBoolean(); terminatedEarly = in.readOptionalBoolean(); - - if (in.getVersion().onOrAfter(Version.V_2_2_0) && in.readBoolean()) { - profileShardResults = new ProfileShardResult(in); - } + profileShardResults = in.readOptionalWriteable(ProfileShardResult::new); + hasProfileResults = profileShardResults != null; } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - out.writeLong(id); + out.writeLong(requestId); writeToNoId(out); } public void writeToNoId(StreamOutput out) throws IOException { -// shardTarget.writeTo(out); out.writeVInt(from); out.writeVInt(size); if (sortValueFormats == null) { @@ -273,14 +314,14 @@ public void writeToNoId(StreamOutput out) throws IOException { } out.writeBoolean(searchTimedOut); out.writeOptionalBoolean(terminatedEarly); + out.writeOptionalWriteable(profileShardResults); + } - if (out.getVersion().onOrAfter(Version.V_2_2_0)) { - if (profileShardResults == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - profileShardResults.writeTo(out); - } - } + public int getTotalHits() { + return totalHits; + } + + public float getMaxScore() { + return maxScore; } } diff --git a/core/src/main/java/org/elasticsearch/search/query/QuerySearchResultProvider.java b/core/src/main/java/org/elasticsearch/search/query/QuerySearchResultProvider.java deleted file mode 100644 index 1ae3157fa5355..0000000000000 --- a/core/src/main/java/org/elasticsearch/search/query/QuerySearchResultProvider.java +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.search.query; - -import org.elasticsearch.search.SearchPhaseResult; -import org.elasticsearch.transport.TransportResponse; - -/** - * - */ -public abstract class QuerySearchResultProvider extends TransportResponse implements SearchPhaseResult { - - /** - * If both query and fetch happened on the same call. - */ - public abstract boolean includeFetch(); - - public abstract QuerySearchResult queryResult(); -} diff --git a/core/src/main/java/org/elasticsearch/search/query/ScrollQuerySearchResult.java b/core/src/main/java/org/elasticsearch/search/query/ScrollQuerySearchResult.java index bcdd94adf891f..6401459489955 100644 --- a/core/src/main/java/org/elasticsearch/search/query/ScrollQuerySearchResult.java +++ b/core/src/main/java/org/elasticsearch/search/query/ScrollQuerySearchResult.java @@ -21,49 +21,54 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.search.SearchPhaseResult; import org.elasticsearch.search.SearchShardTarget; -import org.elasticsearch.transport.TransportResponse; import java.io.IOException; import static org.elasticsearch.search.query.QuerySearchResult.readQuerySearchResult; -/** - * - */ -public class ScrollQuerySearchResult extends TransportResponse { +public final class ScrollQuerySearchResult extends SearchPhaseResult { - private QuerySearchResult queryResult; - private SearchShardTarget shardTarget; + private QuerySearchResult result; public ScrollQuerySearchResult() { } - public ScrollQuerySearchResult(QuerySearchResult queryResult, SearchShardTarget shardTarget) { - this.queryResult = queryResult; - this.shardTarget = shardTarget; + public ScrollQuerySearchResult(QuerySearchResult result, SearchShardTarget shardTarget) { + this.result = result; + setSearchShardTarget(shardTarget); } - public QuerySearchResult queryResult() { - return queryResult; + @Override + public void setSearchShardTarget(SearchShardTarget shardTarget) { + super.setSearchShardTarget(shardTarget); + result.setSearchShardTarget(shardTarget); } - public SearchShardTarget shardTarget() { - return shardTarget; + @Override + public void setShardIndex(int shardIndex) { + super.setShardIndex(shardIndex); + result.setShardIndex(shardIndex); + } + + @Override + public QuerySearchResult queryResult() { + return result; } @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - shardTarget = new SearchShardTarget(in); - queryResult = readQuerySearchResult(in); - queryResult.shardTarget(shardTarget); + SearchShardTarget shardTarget = new SearchShardTarget(in); + result = readQuerySearchResult(in); + setSearchShardTarget(shardTarget); } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - shardTarget.writeTo(out); - queryResult.writeTo(out); + getSearchShardTarget().writeTo(out); + result.writeTo(out); } } diff --git a/core/src/main/java/org/elasticsearch/search/rescore/QueryRescoreMode.java b/core/src/main/java/org/elasticsearch/search/rescore/QueryRescoreMode.java index 
70718b56c0c5c..51db82652fce0 100644 --- a/core/src/main/java/org/elasticsearch/search/rescore/QueryRescoreMode.java +++ b/core/src/main/java/org/elasticsearch/search/rescore/QueryRescoreMode.java @@ -86,16 +86,12 @@ public String toString() { public abstract float combine(float primary, float secondary); public static QueryRescoreMode readFromStream(StreamInput in) throws IOException { - int ordinal = in.readVInt(); - if (ordinal < 0 || ordinal >= values().length) { - throw new IOException("Unknown ScoreMode ordinal [" + ordinal + "]"); - } - return values()[ordinal]; + return in.readEnum(QueryRescoreMode.class); } @Override public void writeTo(StreamOutput out) throws IOException { - out.writeVInt(this.ordinal()); + out.writeEnum(this); } public static QueryRescoreMode fromString(String scoreMode) { @@ -111,4 +107,4 @@ public static QueryRescoreMode fromString(String scoreMode) { public String toString() { return name().toLowerCase(Locale.ROOT); } -} \ No newline at end of file +} diff --git a/core/src/main/java/org/elasticsearch/search/rescore/QueryRescorer.java b/core/src/main/java/org/elasticsearch/search/rescore/QueryRescorer.java index 2a57e41cfafc8..fe1b0577aa79f 100644 --- a/core/src/main/java/org/elasticsearch/search/rescore/QueryRescorer.java +++ b/core/src/main/java/org/elasticsearch/search/rescore/QueryRescorer.java @@ -24,10 +24,6 @@ import org.apache.lucene.search.Query; import org.apache.lucene.search.ScoreDoc; import org.apache.lucene.search.TopDocs; -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.xcontent.ObjectParser; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.search.internal.ContextIndexSearcher; import org.elasticsearch.search.internal.SearchContext; @@ -159,6 +155,8 @@ private TopDocs combine(TopDocs in, TopDocs resorted, QueryRescoreContext ctx) { // incoming first pass hits, instead of allowing recoring of just the top subset: Arrays.sort(in.scoreDocs, SCORE_DOC_COMPARATOR); } + // update the max score after the resort + in.setMaxScore(in.scoreDocs[0].score); return in; } diff --git a/core/src/main/java/org/elasticsearch/search/rescore/RescoreBuilder.java b/core/src/main/java/org/elasticsearch/search/rescore/RescoreBuilder.java index 16c9c9ba8c70e..ac136a7601e3b 100644 --- a/core/src/main/java/org/elasticsearch/search/rescore/RescoreBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/rescore/RescoreBuilder.java @@ -85,7 +85,7 @@ public static RescoreBuilder parseFromXContent(QueryParseContext parseContext if (token == XContentParser.Token.FIELD_NAME) { fieldName = parser.currentName(); } else if (token.isValue()) { - if (parseContext.getParseFieldMatcher().match(fieldName, WINDOW_SIZE_FIELD)) { + if (WINDOW_SIZE_FIELD.match(fieldName)) { windowSize = parser.intValue(); } else { throw new ParsingException(parser.getTokenLocation(), "rescore doesn't support [" + fieldName + "]"); diff --git a/core/src/main/java/org/elasticsearch/search/rescore/RescorePhase.java b/core/src/main/java/org/elasticsearch/search/rescore/RescorePhase.java index 395db4cdcd8c2..d3d4c75cd7b5a 100644 --- a/core/src/main/java/org/elasticsearch/search/rescore/RescorePhase.java +++ b/core/src/main/java/org/elasticsearch/search/rescore/RescorePhase.java @@ -29,6 +29,7 @@ import java.io.IOException; /** + * Rescore phase of a search request, used to run potentially expensive scoring models against the top matching documents. 
*/ public class RescorePhase extends AbstractComponent implements SearchPhase { diff --git a/core/src/main/java/org/elasticsearch/search/rescore/Rescorer.java b/core/src/main/java/org/elasticsearch/search/rescore/Rescorer.java index bf15e06bcb3a4..43a9dd9c64115 100644 --- a/core/src/main/java/org/elasticsearch/search/rescore/Rescorer.java +++ b/core/src/main/java/org/elasticsearch/search/rescore/Rescorer.java @@ -66,7 +66,6 @@ Explanation explain(int topLevelDocId, SearchContext context, RescoreSearchConte /** * Extracts all terms needed to execute this {@link Rescorer}. This method * is executed in a distributed frequency collection roundtrip for - * {@link SearchType#DFS_QUERY_AND_FETCH} and * {@link SearchType#DFS_QUERY_THEN_FETCH} */ void extractTerms(SearchContext context, RescoreSearchContext rescoreContext, Set termsSet); diff --git a/core/src/main/java/org/elasticsearch/search/searchafter/SearchAfterBuilder.java b/core/src/main/java/org/elasticsearch/search/searchafter/SearchAfterBuilder.java index 6ed4b0db5bc7d..e6eb2cfcb35de 100644 --- a/core/src/main/java/org/elasticsearch/search/searchafter/SearchAfterBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/searchafter/SearchAfterBuilder.java @@ -21,9 +21,10 @@ import org.apache.lucene.search.FieldDoc; import org.apache.lucene.search.SortField; +import org.apache.lucene.search.SortedNumericSortField; +import org.apache.lucene.search.SortedSetSortField; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -132,12 +133,27 @@ public static FieldDoc buildFieldDoc(SortAndFormats sort, Object[] values) { return new FieldDoc(Integer.MAX_VALUE, 0, fieldValues); } - private static Object convertValueFromSortField(Object value, SortField sortField, DocValueFormat format) { + /** + * Returns the inner {@link SortField.Type} expected for this sort field. 
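Reviewer note: the `extractSortType` helper that continues just below maps several `SortField` subclasses back to a primitive `SortField.Type` so that `search_after` values can be converted consistently. The sketch below isolates that dispatch using only Lucene's public sort classes; the `XFieldComparatorSource` and `LatLonPointSortField` branches of the real method are omitted, and the surrounding class name is made up for illustration.

```java
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.SortedNumericSortField;
import org.apache.lucene.search.SortedSetSortField;

// Simplified version of the SortField.Type dispatch used when converting search_after values.
final class SortTypeExtraction {
    static SortField.Type extractSortType(SortField sortField) {
        if (sortField instanceof SortedSetSortField) {
            // doc-values backed string sort
            return SortField.Type.STRING;
        } else if (sortField instanceof SortedNumericSortField) {
            // doc-values backed numeric sort keeps its numeric type
            return ((SortedNumericSortField) sortField).getNumericType();
        } else {
            return sortField.getType();
        }
    }

    public static void main(String[] args) {
        System.out.println(extractSortType(new SortedSetSortField("category", false)));                 // STRING
        System.out.println(extractSortType(new SortedNumericSortField("price", SortField.Type.LONG)));  // LONG
        System.out.println(extractSortType(new SortField("_score", SortField.Type.SCORE)));             // SCORE
    }
}
```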
+ */ + static SortField.Type extractSortType(SortField sortField) { if (sortField.getComparatorSource() instanceof IndexFieldData.XFieldComparatorSource) { - IndexFieldData.XFieldComparatorSource cmpSource = (IndexFieldData.XFieldComparatorSource) sortField.getComparatorSource(); - return convertValueFromSortType(sortField.getField(), cmpSource.reducedType(), value, format); + return ((IndexFieldData.XFieldComparatorSource) sortField.getComparatorSource()).reducedType(); + } else if (sortField instanceof SortedSetSortField) { + return SortField.Type.STRING; + } else if (sortField instanceof SortedNumericSortField) { + return ((SortedNumericSortField) sortField).getNumericType(); + } else if ("LatLonPointSortField".equals(sortField.getClass().getSimpleName())) { + // for geo distance sorting + return SortField.Type.DOUBLE; + } else { + return sortField.getType(); } - return convertValueFromSortType(sortField.getField(), sortField.getType(), value, format); + } + + static Object convertValueFromSortField(Object value, SortField sortField, DocValueFormat format) { + SortField.Type sortType = extractSortType(sortField); + return convertValueFromSortType(sortField.getField(), sortType, value, format); } private static Object convertValueFromSortType(String fieldName, SortField.Type sortType, Object value, DocValueFormat format) { @@ -202,10 +218,10 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws } void innerToXContent(XContentBuilder builder) throws IOException { - builder.field(SEARCH_AFTER.getPreferredName(), sortValues); + builder.array(SEARCH_AFTER.getPreferredName(), sortValues); } - public static SearchAfterBuilder fromXContent(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException { + public static SearchAfterBuilder fromXContent(XContentParser parser) throws IOException { SearchAfterBuilder builder = new SearchAfterBuilder(); XContentParser.Token token = parser.currentToken(); List values = new ArrayList<> (); diff --git a/core/src/main/java/org/elasticsearch/search/slice/SliceBuilder.java b/core/src/main/java/org/elasticsearch/search/slice/SliceBuilder.java index 905ac8991bf64..20d67c494e712 100644 --- a/core/src/main/java/org/elasticsearch/search/slice/SliceBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/slice/SliceBuilder.java @@ -20,6 +20,7 @@ package org.elasticsearch.search.slice; import org.apache.lucene.search.MatchAllDocsQuery; +import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.ParseField; @@ -27,12 +28,13 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.lucene.search.MatchNoDocsQuery; import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexNumericFieldData; +import org.elasticsearch.index.mapper.IdFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.UidFieldMapper; import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.index.query.QueryShardContext; @@ -194,9 +196,14 @@ public Query toFilter(QueryShardContext context, int shardId, int 
numShards) { throw new IllegalArgumentException("field " + field + " not found"); } + String field = this.field; boolean useTermQuery = false; if (UidFieldMapper.NAME.equals(field)) { - useTermQuery = true; + if (context.getIndexSettings().isSingleType()) { + // on new indices, the _id acts as a _uid + field = IdFieldMapper.NAME; + } + useTermQuery = true; } else if (type.hasDocValues() == false) { throw new IllegalArgumentException("cannot load numeric doc values on " + field); } else { diff --git a/core/src/main/java/org/elasticsearch/search/slice/TermsSliceQuery.java b/core/src/main/java/org/elasticsearch/search/slice/TermsSliceQuery.java index 429a3ebe89264..ddc02d32e55a9 100644 --- a/core/src/main/java/org/elasticsearch/search/slice/TermsSliceQuery.java +++ b/core/src/main/java/org/elasticsearch/search/slice/TermsSliceQuery.java @@ -33,6 +33,7 @@ import org.apache.lucene.search.ConstantScoreScorer; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.DocIdSetBuilder; +import org.apache.lucene.util.StringHelper; import java.io.IOException; @@ -46,6 +47,9 @@ * NOTE: Documents with no value for that field are ignored. */ public final class TermsSliceQuery extends SliceQuery { + // Fixed seed for computing term hashCode + public static final int SEED = 7919; + public TermsSliceQuery(String field, int id, int max) { super(field, id, max); } @@ -71,7 +75,9 @@ private DocIdSet build(LeafReader reader) throws IOException { final TermsEnum te = terms.iterator(); PostingsEnum docsEnum = null; for (BytesRef term = te.next(); term != null; term = te.next()) { - int hashCode = term.hashCode(); + // use a fixed seed instead of term.hashCode() otherwise this query may return inconsistent results when + // running on another replica (StringHelper sets its default seed at startup with current time) + int hashCode = StringHelper.murmurhash3_x86_32(term, SEED); if (contains(hashCode)) { docsEnum = te.postings(docsEnum, PostingsEnum.NONE); builder.add(docsEnum); diff --git a/core/src/main/java/org/elasticsearch/search/sort/FieldSortBuilder.java b/core/src/main/java/org/elasticsearch/search/sort/FieldSortBuilder.java index d519f74087003..db6177ab36f53 100644 --- a/core/src/main/java/org/elasticsearch/search/sort/FieldSortBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/sort/FieldSortBuilder.java @@ -21,11 +21,11 @@ import org.apache.lucene.search.SortField; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser.ValueType; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; import org.elasticsearch.index.fielddata.IndexNumericFieldData; @@ -39,17 +39,13 @@ import java.io.IOException; import java.util.Objects; -import java.util.Optional; /** * A sort builder to sort based on a document field. 
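Reviewer note: the TermsSliceQuery change above replaces `BytesRef#hashCode()` with `StringHelper.murmurhash3_x86_32(term, SEED)` because the default BytesRef hash is seeded per JVM at startup, so two replicas could disagree about which slice a term belongs to. The sketch below shows the slice-membership idea with a deterministic hash; CRC32 stands in for the murmur3 call and the `floorMod`-based `contains` check is an assumption, not the actual SliceQuery implementation.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Assign terms to slices with a hash function that has no per-JVM seed,
// so every node computes the same slice for the same term.
final class SliceMembership {
    private final int id;   // which slice this query selects
    private final int max;  // total number of slices

    SliceMembership(int id, int max) {
        this.id = id;
        this.max = max;
    }

    boolean contains(String term) {
        CRC32 crc = new CRC32();
        crc.update(term.getBytes(StandardCharsets.UTF_8));
        int hash = (int) crc.getValue();
        return Math.floorMod(hash, max) == id; // hypothetical membership test
    }

    public static void main(String[] args) {
        SliceMembership slice0 = new SliceMembership(0, 4);
        for (String term : new String[] {"kimchy", "elasticsearch", "lucene"}) {
            System.out.println(term + " in slice 0: " + slice0.contains(term));
        }
    }
}
```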
*/ public class FieldSortBuilder extends SortBuilder { public static final String NAME = "field_sort"; - public static final ParseField NESTED_PATH = new ParseField("nested_path"); - public static final ParseField NESTED_FILTER = new ParseField("nested_filter"); public static final ParseField MISSING = new ParseField("missing"); - public static final ParseField ORDER = new ParseField("order"); public static final ParseField SORT_MODE = new ParseField("mode"); public static final ParseField UNMAPPED_TYPE = new ParseField("unmapped_type"); @@ -239,10 +235,10 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(SORT_MODE.getPreferredName(), sortMode); } if (nestedFilter != null) { - builder.field(NESTED_FILTER.getPreferredName(), nestedFilter, params); + builder.field(NESTED_FILTER_FIELD.getPreferredName(), nestedFilter, params); } if (nestedPath != null) { - builder.field(NESTED_PATH.getPreferredName(), nestedPath); + builder.field(NESTED_PATH_FIELD.getPreferredName(), nestedPath); } builder.endObject(); builder.endObject(); @@ -283,9 +279,7 @@ public SortFieldAndFormat build(QueryShardContext context) throws IOException { && (sortMode == SortMode.SUM || sortMode == SortMode.AVG || sortMode == SortMode.MEDIAN)) { throw new QueryShardException(context, "we only support AVG, MEDIAN and SUM on number based fields"); } - IndexFieldData.XFieldComparatorSource fieldComparatorSource = fieldData - .comparatorSource(missing, localSortMode, nested); - SortField field = new SortField(fieldType.name(), fieldComparatorSource, reverse); + SortField field = fieldData.sortField(missing, localSortMode, nested, reverse); return new SortFieldAndFormat(field, fieldType.docValueFormat(null, null)); } } @@ -327,67 +321,17 @@ public String getWriteableName() { * in '{ "foo": { "order" : "asc"} }'. 
When parsing the inner object, the field name can be passed in via this argument */ public static FieldSortBuilder fromXContent(QueryParseContext context, String fieldName) throws IOException { - XContentParser parser = context.parser(); - - Optional nestedFilter = Optional.empty(); - String nestedPath = null; - Object missing = null; - SortOrder order = null; - SortMode sortMode = null; - String unmappedType = null; - - String currentFieldName = null; - XContentParser.Token token; - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (token == XContentParser.Token.START_OBJECT) { - if (context.getParseFieldMatcher().match(currentFieldName, NESTED_FILTER)) { - nestedFilter = context.parseInnerQueryBuilder(); - } else { - throw new ParsingException(parser.getTokenLocation(), "Expected " + NESTED_FILTER.getPreferredName() + " element."); - } - } else if (token.isValue()) { - if (context.getParseFieldMatcher().match(currentFieldName, NESTED_PATH)) { - nestedPath = parser.text(); - } else if (context.getParseFieldMatcher().match(currentFieldName, MISSING)) { - missing = parser.objectText(); - } else if (context.getParseFieldMatcher().match(currentFieldName, ORDER)) { - String sortOrder = parser.text(); - if ("asc".equals(sortOrder)) { - order = SortOrder.ASC; - } else if ("desc".equals(sortOrder)) { - order = SortOrder.DESC; - } else { - throw new ParsingException(parser.getTokenLocation(), "Sort order [{}] not supported.", sortOrder); - } - } else if (context.getParseFieldMatcher().match(currentFieldName, SORT_MODE)) { - sortMode = SortMode.fromString(parser.text()); - } else if (context.getParseFieldMatcher().match(currentFieldName, UNMAPPED_TYPE)) { - unmappedType = parser.text(); - } else { - throw new ParsingException(parser.getTokenLocation(), "Option [{}] not supported.", currentFieldName); - } - } - } + return PARSER.parse(context.parser(), new FieldSortBuilder(fieldName), context); + } - FieldSortBuilder builder = new FieldSortBuilder(fieldName); - nestedFilter.ifPresent(builder::setNestedFilter); - if (nestedPath != null) { - builder.setNestedPath(nestedPath); - } - if (missing != null) { - builder.missing(missing); - } - if (order != null) { - builder.order(order); - } - if (sortMode != null) { - builder.sortMode(sortMode); - } - if (unmappedType != null) { - builder.unmappedType(unmappedType); - } - return builder; + private static ObjectParser PARSER = new ObjectParser<>(NAME); + + static { + PARSER.declareField(FieldSortBuilder::missing, p -> p.objectText(), MISSING, ValueType.VALUE); + PARSER.declareString(FieldSortBuilder::setNestedPath , NESTED_PATH_FIELD); + PARSER.declareString(FieldSortBuilder::unmappedType , UNMAPPED_TYPE); + PARSER.declareString((b, v) -> b.order(SortOrder.fromString(v)) , ORDER_FIELD); + PARSER.declareString((b, v) -> b.sortMode(SortMode.fromString(v)), SORT_MODE); + PARSER.declareObject(FieldSortBuilder::setNestedFilter, SortBuilder::parseNestedFilter, NESTED_FILTER_FIELD); } } diff --git a/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java b/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java index f33dd0e2b1531..f175c8c7c0e6d 100644 --- a/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java @@ -19,6 +19,7 @@ package org.elasticsearch.search.sort; +import 
org.apache.lucene.document.LatLonDocValuesField; import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.NumericDocValues; import org.apache.lucene.search.DocIdSetIterator; @@ -28,10 +29,8 @@ import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.Version; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.geo.GeoDistance; -import org.elasticsearch.common.geo.GeoDistance.FixedSourceDistance; import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.common.geo.GeoUtils; import org.elasticsearch.common.io.stream.StreamInput; @@ -46,6 +45,7 @@ import org.elasticsearch.index.fielddata.MultiGeoPointValues; import org.elasticsearch.index.fielddata.NumericDoubleValues; import org.elasticsearch.index.fielddata.SortedNumericDoubleValues; +import org.elasticsearch.index.fielddata.plain.AbstractLatLonPointDVIndexFieldData.LatLonPointDVIndexFieldData; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.query.GeoValidationMethod; import org.elasticsearch.index.query.QueryBuilder; @@ -73,18 +73,14 @@ public class GeoDistanceSortBuilder extends SortBuilder private static final ParseField UNIT_FIELD = new ParseField("unit"); private static final ParseField DISTANCE_TYPE_FIELD = new ParseField("distance_type"); private static final ParseField VALIDATION_METHOD_FIELD = new ParseField("validation_method"); - private static final ParseField IGNORE_MALFORMED_FIELD = new ParseField("ignore_malformed") - .withAllDeprecated("use validation_method instead"); - private static final ParseField COERCE_FIELD = new ParseField("coerce", "normalize") - .withAllDeprecated("use validation_method instead"); + private static final ParseField IGNORE_MALFORMED_FIELD = new ParseField("ignore_malformed").withAllDeprecated("validation_method"); + private static final ParseField COERCE_FIELD = new ParseField("coerce", "normalize").withAllDeprecated("validation_method"); private static final ParseField SORTMODE_FIELD = new ParseField("mode", "sort_mode"); - private static final ParseField NESTED_PATH_FIELD = new ParseField("nested_path"); - private static final ParseField NESTED_FILTER_FIELD = new ParseField("nested_filter"); private final String fieldName; private final List points = new ArrayList<>(); - private GeoDistance geoDistance = GeoDistance.DEFAULT; + private GeoDistance geoDistance = GeoDistance.ARC; private DistanceUnit unit = DistanceUnit.DEFAULT; private SortMode sortMode = null; @@ -243,7 +239,7 @@ public GeoDistance geoDistance() { } /** - * The distance unit to use. Defaults to {@link org.elasticsearch.common.unit.DistanceUnit#KILOMETERS} + * The distance unit to use. Defaults to {@link org.elasticsearch.common.unit.DistanceUnit#METERS} */ public GeoDistanceSortBuilder unit(DistanceUnit unit) { this.unit = unit; @@ -251,7 +247,7 @@ public GeoDistanceSortBuilder unit(DistanceUnit unit) { } /** - * Returns the distance unit to use. Defaults to {@link org.elasticsearch.common.unit.DistanceUnit#KILOMETERS} + * Returns the distance unit to use. Defaults to {@link org.elasticsearch.common.unit.DistanceUnit#METERS} */ public DistanceUnit unit() { return this.unit; @@ -394,18 +390,18 @@ public int hashCode() { * Creates a new {@link GeoDistanceSortBuilder} from the query held by the {@link QueryParseContext} in * {@link org.elasticsearch.common.xcontent.XContent} format. 
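Reviewer note: earlier in this diff `FieldSortBuilder#fromXContent` stops walking XContent tokens by hand and instead delegates to a statically declared ObjectParser, a pattern repeated below for ScoreSortBuilder and ScriptSortBuilder. The sketch imitates that declare-once/apply-many structure with a tiny generic parser over a flat key/value map; it does not use Elasticsearch's ObjectParser, so every name in it is illustrative.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Field handlers are registered once in a static block and reused for every parsed object,
// replacing the per-field if/else chains of a hand-written parsing loop.
final class DeclarativeParserSketch {
    static final class SortSpec {
        String order = "asc";
        String missing;
        String unmappedType;

        @Override
        public String toString() {
            return "order=" + order + ", missing=" + missing + ", unmappedType=" + unmappedType;
        }
    }

    private static final Map<String, BiConsumer<SortSpec, String>> PARSER = new HashMap<>();
    static {
        PARSER.put("order", (spec, v) -> spec.order = v);
        PARSER.put("missing", (spec, v) -> spec.missing = v);
        PARSER.put("unmapped_type", (spec, v) -> spec.unmappedType = v);
    }

    static SortSpec parse(Map<String, String> fields) {
        SortSpec spec = new SortSpec();
        fields.forEach((name, value) -> {
            BiConsumer<SortSpec, String> handler = PARSER.get(name);
            if (handler == null) {
                throw new IllegalArgumentException("Option [" + name + "] not supported.");
            }
            handler.accept(spec, value);
        });
        return spec;
    }

    public static void main(String[] args) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("order", "desc");
        fields.put("missing", "_last");
        System.out.println(parse(fields)); // order=desc, missing=_last, unmappedType=null
    }
}
```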
* - * @param context the input parse context. The state on the parser contained in this context will be changed as a side effect of this - * method call - * @param elementName in some sort syntax variations the field name precedes the xContent object that specifies further parameters, e.g. - * in '{ "foo": { "order" : "asc"} }'. When parsing the inner object, the field name can be passed in via this argument + * @param context the input parse context. The state on the parser contained in this context will be changed as a + * side effect of this method call + * @param elementName in some sort syntax variations the field name precedes the xContent object that specifies + * further parameters, e.g. in '{ "foo": { "order" : "asc"} }'. When parsing the inner object, + * the field name can be passed in via this argument */ public static GeoDistanceSortBuilder fromXContent(QueryParseContext context, String elementName) throws IOException { XContentParser parser = context.parser(); - ParseFieldMatcher parseFieldMatcher = context.getParseFieldMatcher(); String fieldName = null; List geoPoints = new ArrayList<>(); DistanceUnit unit = DistanceUnit.DEFAULT; - GeoDistance geoDistance = GeoDistance.DEFAULT; + GeoDistance geoDistance = GeoDistance.ARC; SortOrder order = SortOrder.ASC; SortMode sortMode = null; Optional nestedFilter = Optional.empty(); @@ -425,7 +421,7 @@ public static GeoDistanceSortBuilder fromXContent(QueryParseContext context, Str fieldName = currentName; } else if (token == XContentParser.Token.START_OBJECT) { - if (parseFieldMatcher.match(currentName, NESTED_FILTER_FIELD)) { + if (NESTED_FILTER_FIELD.match(currentName)) { nestedFilter = context.parseInnerQueryBuilder(); } else { // the json in the format of -> field : { lat : 30, lon : 12 } @@ -442,27 +438,27 @@ public static GeoDistanceSortBuilder fromXContent(QueryParseContext context, Str geoPoints.add(point); } } else if (token.isValue()) { - if (parseFieldMatcher.match(currentName, ORDER_FIELD)) { + if (ORDER_FIELD.match(currentName)) { order = SortOrder.fromString(parser.text()); - } else if (parseFieldMatcher.match(currentName, UNIT_FIELD)) { + } else if (UNIT_FIELD.match(currentName)) { unit = DistanceUnit.fromString(parser.text()); - } else if (parseFieldMatcher.match(currentName, DISTANCE_TYPE_FIELD)) { + } else if (DISTANCE_TYPE_FIELD.match(currentName)) { geoDistance = GeoDistance.fromString(parser.text()); - } else if (parseFieldMatcher.match(currentName, COERCE_FIELD)) { + } else if (COERCE_FIELD.match(currentName)) { coerce = parser.booleanValue(); - if (coerce == true) { + if (coerce) { ignoreMalformed = true; } - } else if (parseFieldMatcher.match(currentName, IGNORE_MALFORMED_FIELD)) { + } else if (IGNORE_MALFORMED_FIELD.match(currentName)) { boolean ignore_malformed_value = parser.booleanValue(); if (coerce == false) { ignoreMalformed = ignore_malformed_value; } - } else if (parseFieldMatcher.match(currentName, VALIDATION_METHOD_FIELD)) { + } else if (VALIDATION_METHOD_FIELD.match(currentName)) { validation = GeoValidationMethod.fromString(parser.text()); - } else if (parseFieldMatcher.match(currentName, SORTMODE_FIELD)) { + } else if (SORTMODE_FIELD.match(currentName)) { sortMode = SortMode.fromString(parser.text()); - } else if (parseFieldMatcher.match(currentName, NESTED_PATH_FIELD)) { + } else if (NESTED_PATH_FIELD.match(currentName)) { nestedPath = parser.text(); } else if (token == Token.VALUE_STRING){ if (fieldName != null && fieldName.equals(currentName) == false) { @@ -508,12 +504,9 @@ public static 
GeoDistanceSortBuilder fromXContent(QueryParseContext context, Str @Override public SortFieldAndFormat build(QueryShardContext context) throws IOException { final boolean indexCreatedBeforeV2_0 = context.indexVersionCreated().before(Version.V_2_0_0); - // validation was not available prior to 2.x, so to support bwc percolation queries we only ignore_malformed on 2.x created indexes - List localPoints = new ArrayList(); - for (GeoPoint geoPoint : this.points) { - localPoints.add(new GeoPoint(geoPoint)); - } - + // validation was not available prior to 2.x, so to support bwc percolation queries we only ignore_malformed + // on 2.x created indexes + GeoPoint[] localPoints = points.toArray(new GeoPoint[points.size()]); if (!indexCreatedBeforeV2_0 && !GeoValidationMethod.isIgnoreMalformed(validation)) { for (GeoPoint point : localPoints) { if (GeoUtils.isValidLatitude(point.lat()) == false) { @@ -547,16 +540,23 @@ public SortFieldAndFormat build(QueryShardContext context) throws IOException { MappedFieldType fieldType = context.fieldMapper(fieldName); if (fieldType == null) { - throw new IllegalArgumentException("failed to find mapper for [" + fieldName + "] for geo distance based sort"); + throw new IllegalArgumentException("failed to find mapper for [" + fieldName + + "] for geo distance based sort"); } final IndexGeoPointFieldData geoIndexFieldData = context.getForField(fieldType); - final FixedSourceDistance[] distances = new FixedSourceDistance[localPoints.size()]; - for (int i = 0; i< localPoints.size(); i++) { - distances[i] = geoDistance.fixedSourceDistance(localPoints.get(i).lat(), localPoints.get(i).lon(), unit); - } - final Nested nested = resolveNested(context, nestedPath, nestedFilter); + if (geoIndexFieldData.getClass() == LatLonPointDVIndexFieldData.class // only works with 5.x geo_point + && nested == null + && finalSortMode == MultiValueMode.MIN // LatLonDocValuesField internally picks the closest point + && unit == DistanceUnit.METERS + && reverse == false + && localPoints.length == 1) { + return new SortFieldAndFormat( + LatLonDocValuesField.newDistanceSort(fieldName, localPoints[0].lat(), localPoints[0].lon()), + DocValueFormat.RAW); + } + IndexFieldData.XFieldComparatorSource geoDistanceComparatorSource = new IndexFieldData.XFieldComparatorSource() { @Override @@ -565,19 +565,21 @@ public SortField.Type reducedType() { } @Override - public FieldComparator newComparator(String fieldname, int numHits, int sortPos, boolean reversed) throws IOException { + public FieldComparator newComparator(String fieldname, int numHits, int sortPos, boolean reversed) { return new FieldComparator.DoubleComparator(numHits, null, null) { @Override - protected NumericDocValues getNumericDocValues(LeafReaderContext context, String field) throws IOException { + protected NumericDocValues getNumericDocValues(LeafReaderContext context, String field) + throws IOException { final MultiGeoPointValues geoPointValues = geoIndexFieldData.load(context).getGeoPointValues(); - final SortedNumericDoubleValues distanceValues = GeoDistance.distanceValues(geoPointValues, distances); + final SortedNumericDoubleValues distanceValues = GeoUtils.distanceValues(geoDistance, unit, + geoPointValues, localPoints); final NumericDoubleValues selectedValues; if (nested == null) { - selectedValues = finalSortMode.select(distanceValues, Double.MAX_VALUE); + selectedValues = finalSortMode.select(distanceValues, Double.POSITIVE_INFINITY); } else { final BitSet rootDocs = nested.rootDocs(context); final DocIdSetIterator 
innerDocs = nested.innerDocs(context); - selectedValues = finalSortMode.select(distanceValues, Double.MAX_VALUE, rootDocs, innerDocs, + selectedValues = finalSortMode.select(distanceValues, Double.POSITIVE_INFINITY, rootDocs, innerDocs, context.reader().maxDoc()); } return selectedValues.getRawDoubleValues(); @@ -587,14 +589,16 @@ protected NumericDocValues getNumericDocValues(LeafReaderContext context, String }; - return new SortFieldAndFormat(new SortField(fieldName, geoDistanceComparatorSource, reverse), DocValueFormat.RAW); + return new SortFieldAndFormat(new SortField(fieldName, geoDistanceComparatorSource, reverse), + DocValueFormat.RAW); } static void parseGeoPoints(XContentParser parser, List geoPoints) throws IOException { while (!parser.nextToken().equals(XContentParser.Token.END_ARRAY)) { if (parser.currentToken() == XContentParser.Token.VALUE_NUMBER) { - // we might get here if the geo point is " number, number] " and the parser already moved over the opening bracket - // in this case we cannot use GeoUtils.parseGeoPoint(..) because this expects an opening bracket + // we might get here if the geo point is " number, number] " and the parser already moved over the + // opening bracket in this case we cannot use GeoUtils.parseGeoPoint(..) because this expects an opening + // bracket double lon = parser.doubleValue(); parser.nextToken(); if (!parser.currentToken().equals(XContentParser.Token.VALUE_NUMBER)) { diff --git a/core/src/main/java/org/elasticsearch/search/sort/ScoreSortBuilder.java b/core/src/main/java/org/elasticsearch/search/sort/ScoreSortBuilder.java index 5b9b139e4958b..52f25301783c1 100644 --- a/core/src/main/java/org/elasticsearch/search/sort/ScoreSortBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/sort/ScoreSortBuilder.java @@ -20,13 +20,10 @@ package org.elasticsearch.search.sort; import org.apache.lucene.search.SortField; -import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.search.DocValueFormat; @@ -40,7 +37,6 @@ public class ScoreSortBuilder extends SortBuilder { public static final String NAME = "_score"; - public static final ParseField ORDER_FIELD = new ParseField("order"); private static final SortFieldAndFormat SORT_SCORE = new SortFieldAndFormat( new SortField(null, SortField.Type.SCORE), DocValueFormat.RAW); private static final SortFieldAndFormat SORT_SCORE_REVERSE = new SortFieldAndFormat( @@ -86,26 +82,13 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws * in '{ "foo": { "order" : "asc"} }'. 
When parsing the inner object, the field name can be passed in via this argument */ public static ScoreSortBuilder fromXContent(QueryParseContext context, String fieldName) throws IOException { - XContentParser parser = context.parser(); - ParseFieldMatcher matcher = context.getParseFieldMatcher(); - - XContentParser.Token token; - String currentName = parser.currentName(); - ScoreSortBuilder result = new ScoreSortBuilder(); - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentName = parser.currentName(); - } else if (token.isValue()) { - if (matcher.match(currentName, ORDER_FIELD)) { - result.order(SortOrder.fromString(parser.text())); - } else { - throw new ParsingException(parser.getTokenLocation(), "[" + NAME + "] failed to parse field [" + currentName + "]"); - } - } else { - throw new ParsingException(parser.getTokenLocation(), "[" + NAME + "] unexpected token [" + token + "]"); - } - } - return result; + return PARSER.apply(context.parser(), context); + } + + private static ObjectParser PARSER = new ObjectParser<>(NAME, ScoreSortBuilder::new); + + static { + PARSER.declareString((builder, order) -> builder.order(SortOrder.fromString(order)), ORDER_FIELD); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/sort/ScriptSortBuilder.java b/core/src/main/java/org/elasticsearch/search/sort/ScriptSortBuilder.java index 0a7cb5e1b3691..0527217bc14de 100644 --- a/core/src/main/java/org/elasticsearch/search/sort/ScriptSortBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/sort/ScriptSortBuilder.java @@ -26,13 +26,12 @@ import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.BytesRefBuilder; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.ConstructingObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser.ValueType; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.fielddata.FieldData; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; @@ -47,19 +46,16 @@ import org.elasticsearch.index.query.QueryShardException; import org.elasticsearch.script.LeafSearchScript; import org.elasticsearch.script.Script; -import org.elasticsearch.script.Script.ScriptField; import org.elasticsearch.script.ScriptContext; import org.elasticsearch.script.SearchScript; import org.elasticsearch.search.DocValueFormat; import org.elasticsearch.search.MultiValueMode; import java.io.IOException; -import java.util.Collections; -import java.util.HashMap; import java.util.Locale; -import java.util.Map; import java.util.Objects; -import java.util.Optional; + +import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg; /** * Script sort builder allows to sort based on a custom script expression. 
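Reviewer note: in the GeoDistanceSortBuilder change earlier in this section, `build` only uses `LatLonDocValuesField.newDistanceSort` when a narrow set of conditions holds and otherwise falls back to the generic comparator source. The sketch below isolates that gating decision; the types and method names are hypothetical, only the list of conditions mirrors the diff (the additional check that the field data is the 5.x LatLonPoint doc-values implementation is omitted).

```java
// Eligibility check guarding the optimized Lucene doc-values distance sort.
final class GeoSortFastPath {
    enum Mode { MIN, MAX, AVG, MEDIAN, SUM }
    enum Unit { METERS, KILOMETERS, MILES }

    static boolean canUseDocValuesDistanceSort(int pointCount, boolean reverse, Mode mode, Unit unit, boolean nested) {
        return pointCount == 1        // the optimized sort measures distance to a single origin
            && reverse == false       // it only supports ascending order
            && mode == Mode.MIN       // LatLonDocValuesField implicitly picks the closest point per document
            && unit == Unit.METERS    // distances come back in meters, so no unit conversion is possible
            && nested == false;       // nested sorting still needs the generic comparator
    }

    public static void main(String[] args) {
        System.out.println(canUseDocValuesDistanceSort(1, false, Mode.MIN, Unit.METERS, false)); // true
        System.out.println(canUseDocValuesDistanceSort(2, false, Mode.MIN, Unit.METERS, false)); // false
    }
}
```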
@@ -70,8 +66,6 @@ public class ScriptSortBuilder extends SortBuilder { public static final ParseField TYPE_FIELD = new ParseField("type"); public static final ParseField SCRIPT_FIELD = new ParseField("script"); public static final ParseField SORTMODE_FIELD = new ParseField("mode"); - public static final ParseField NESTED_PATH_FIELD = new ParseField("nested_path"); - public static final ParseField NESTED_FILTER_FIELD = new ParseField("nested_filter"); private final Script script; @@ -218,6 +212,19 @@ public XContentBuilder toXContent(XContentBuilder builder, Params builderParams) return builder; } + private static ConstructingObjectParser PARSER = new ConstructingObjectParser<>(NAME, + a -> new ScriptSortBuilder((Script) a[0], (ScriptSortType) a[1])); + + static { + PARSER.declareField(constructorArg(), (parser, context) -> Script.parse(parser, context.getDefaultScriptLanguage()), + Script.SCRIPT_PARSE_FIELD, ValueType.OBJECT_OR_STRING); + PARSER.declareField(constructorArg(), p -> ScriptSortType.fromString(p.text()), TYPE_FIELD, ValueType.STRING); + PARSER.declareString((b, v) -> b.order(SortOrder.fromString(v)), ORDER_FIELD); + PARSER.declareString((b, v) -> b.sortMode(SortMode.fromString(v)), SORTMODE_FIELD); + PARSER.declareString(ScriptSortBuilder::setNestedPath , NESTED_PATH_FIELD); + PARSER.declareObject(ScriptSortBuilder::setNestedFilter, SortBuilder::parseNestedFilter, NESTED_FILTER_FIELD); + } + /** * Creates a new {@link ScriptSortBuilder} from the query held by the {@link QueryParseContext} in * {@link org.elasticsearch.common.xcontent.XContent} format. @@ -228,66 +235,13 @@ public XContentBuilder toXContent(XContentBuilder builder, Params builderParams) * in '{ "foo": { "order" : "asc"} }'. When parsing the inner object, the field name can be passed in via this argument */ public static ScriptSortBuilder fromXContent(QueryParseContext context, String elementName) throws IOException { - XContentParser parser = context.parser(); - ParseFieldMatcher parseField = context.getParseFieldMatcher(); - Script script = null; - ScriptSortType type = null; - SortMode sortMode = null; - SortOrder order = null; - Optional nestedFilter = Optional.empty(); - String nestedPath = null; - - XContentParser.Token token; - String currentName = parser.currentName(); - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.FIELD_NAME) { - currentName = parser.currentName(); - } else if (token == XContentParser.Token.START_OBJECT) { - if (parseField.match(currentName, ScriptField.SCRIPT)) { - script = Script.parse(parser, parseField, context.getDefaultScriptLanguage()); - } else if (parseField.match(currentName, NESTED_FILTER_FIELD)) { - nestedFilter = context.parseInnerQueryBuilder(); - } else { - throw new ParsingException(parser.getTokenLocation(), "[" + NAME + "] failed to parse field [" + currentName + "]"); - } - } else if (token.isValue()) { - if (parseField.match(currentName, ORDER_FIELD)) { - order = SortOrder.fromString(parser.text()); - } else if (parseField.match(currentName, TYPE_FIELD)) { - type = ScriptSortType.fromString(parser.text()); - } else if (parseField.match(currentName, SORTMODE_FIELD)) { - sortMode = SortMode.fromString(parser.text()); - } else if (parseField.match(currentName, NESTED_PATH_FIELD)) { - nestedPath = parser.text(); - } else if (parseField.match(currentName, ScriptField.SCRIPT)) { - script = Script.parse(parser, parseField, context.getDefaultScriptLanguage()); - } else { - throw new 
ParsingException(parser.getTokenLocation(), "[" + NAME + "] failed to parse field [" + currentName + "]"); - } - } else { - throw new ParsingException(parser.getTokenLocation(), "[" + NAME + "] unexpected token [" + token + "]"); - } - } - - ScriptSortBuilder result = new ScriptSortBuilder(script, type); - if (order != null) { - result.order(order); - } - if (sortMode != null) { - result.sortMode(sortMode); - } - nestedFilter.ifPresent(result::setNestedFilter); - if (nestedPath != null) { - result.setNestedPath(nestedPath); - } - return result; + return PARSER.apply(context.parser(), context); } @Override public SortFieldAndFormat build(QueryShardContext context) throws IOException { - final SearchScript searchScript = context.getScriptService().search( - context.lookup(), script, ScriptContext.Standard.SEARCH, Collections.emptyMap()); + final SearchScript searchScript = context.getSearchScript(script, ScriptContext.Standard.SEARCH); MultiValueMode valueMode = null; if (sortMode != null) { @@ -387,18 +341,14 @@ public enum ScriptSortType implements Writeable { @Override public void writeTo(final StreamOutput out) throws IOException { - out.writeVInt(ordinal()); + out.writeEnum(this); } /** * Read from a stream. */ static ScriptSortType readFromStream(final StreamInput in) throws IOException { - int ordinal = in.readVInt(); - if (ordinal < 0 || ordinal >= values().length) { - throw new IOException("Unknown ScriptSortType ordinal [" + ordinal + "]"); - } - return values()[ordinal]; + return in.readEnum(ScriptSortType.class); } public static ScriptSortType fromString(final String str) { diff --git a/core/src/main/java/org/elasticsearch/search/sort/SortBuilder.java b/core/src/main/java/org/elasticsearch/search/sort/SortBuilder.java index c8f15f3a1e8c2..3af96e7eb4ead 100644 --- a/core/src/main/java/org/elasticsearch/search/sort/SortBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/sort/SortBuilder.java @@ -25,6 +25,7 @@ import org.apache.lucene.search.join.BitSetProducer; import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.io.stream.NamedWriteable; import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.common.xcontent.XContentParser; @@ -52,7 +53,11 @@ public abstract class SortBuilder> extends ToXContentToBytes implements NamedWriteable { protected SortOrder order = SortOrder.ASC; + + // parse fields common to more than one SortBuilder public static final ParseField ORDER_FIELD = new ParseField("order"); + public static final ParseField NESTED_FILTER_FIELD = new ParseField("nested_filter"); + public static final ParseField NESTED_PATH_FIELD = new ParseField("nested_path"); private static final Map> PARSERS; static { @@ -199,6 +204,16 @@ protected static Nested resolveNested(QueryShardContext context, String nestedPa return nested; } + protected static QueryBuilder parseNestedFilter(XContentParser parser, QueryParseContext context) { + try { + QueryBuilder builder = context.parseInnerQueryBuilder().orElseThrow(() -> new ParsingException(parser.getTokenLocation(), + "Expected " + NESTED_FILTER_FIELD.getPreferredName() + " element.")); + return builder; + } catch (Exception e) { + throw new ParsingException(parser.getTokenLocation(), "Expected " + NESTED_FILTER_FIELD.getPreferredName() + " element.", e); + } + } + @FunctionalInterface private interface Parser> { T fromXContent(QueryParseContext context, String elementName) 
throws IOException; diff --git a/core/src/main/java/org/elasticsearch/search/sort/SortMode.java b/core/src/main/java/org/elasticsearch/search/sort/SortMode.java index 21495798a89f9..07b5bfa98c23f 100644 --- a/core/src/main/java/org/elasticsearch/search/sort/SortMode.java +++ b/core/src/main/java/org/elasticsearch/search/sort/SortMode.java @@ -52,15 +52,11 @@ public enum SortMode implements Writeable { @Override public void writeTo(final StreamOutput out) throws IOException { - out.writeVInt(ordinal()); + out.writeEnum(this); } public static SortMode readFromStream(StreamInput in) throws IOException { - int ordinal = in.readVInt(); - if (ordinal < 0 || ordinal >= values().length) { - throw new IOException("Unknown SortMode ordinal [" + ordinal + "]"); - } - return values()[ordinal]; + return in.readEnum(SortMode.class); } public static SortMode fromString(final String str) { @@ -85,4 +81,4 @@ public static SortMode fromString(final String str) { public String toString() { return name().toLowerCase(Locale.ROOT); } -} \ No newline at end of file +} diff --git a/core/src/main/java/org/elasticsearch/search/sort/SortOrder.java b/core/src/main/java/org/elasticsearch/search/sort/SortOrder.java index cd0a3bb6d4642..fbcb7b4288e31 100644 --- a/core/src/main/java/org/elasticsearch/search/sort/SortOrder.java +++ b/core/src/main/java/org/elasticsearch/search/sort/SortOrder.java @@ -52,16 +52,12 @@ public String toString() { }; static SortOrder readFromStream(StreamInput in) throws IOException { - int ordinal = in.readVInt(); - if (ordinal < 0 || ordinal >= values().length) { - throw new IOException("Unknown SortOrder ordinal [" + ordinal + "]"); - } - return values()[ordinal]; + return in.readEnum(SortOrder.class); } @Override public void writeTo(StreamOutput out) throws IOException { - out.writeVInt(this.ordinal()); + out.writeEnum(this); } public static SortOrder fromString(String op) { diff --git a/core/src/main/java/org/elasticsearch/search/suggest/SortBy.java b/core/src/main/java/org/elasticsearch/search/suggest/SortBy.java index 3cd19c5c2fb22..328fc4e8218ed 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/SortBy.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/SortBy.java @@ -38,15 +38,11 @@ public enum SortBy implements Writeable { @Override public void writeTo(final StreamOutput out) throws IOException { - out.writeVInt(ordinal()); + out.writeEnum(this); } public static SortBy readFromStream(final StreamInput in) throws IOException { - int ordinal = in.readVInt(); - if (ordinal < 0 || ordinal >= values().length) { - throw new IOException("Unknown SortBy ordinal [" + ordinal + "]"); - } - return values()[ordinal]; + return in.readEnum(SortBy.class); } public static SortBy resolve(final String str) { diff --git a/core/src/main/java/org/elasticsearch/search/suggest/Suggest.java b/core/src/main/java/org/elasticsearch/search/suggest/Suggest.java index 95612693f8b4c..943fde8dbf56d 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/Suggest.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/Suggest.java @@ -19,13 +19,24 @@ package org.elasticsearch.search.suggest; import org.apache.lucene.util.CollectionUtil; +import org.apache.lucene.util.SetOnce; +import org.elasticsearch.common.CheckedFunction; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import 
org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.text.Text; +import org.elasticsearch.common.xcontent.ConstructingObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParserUtils; +import org.elasticsearch.rest.action.search.RestSearchAction; +import org.elasticsearch.search.aggregations.Aggregation; import org.elasticsearch.search.suggest.Suggest.Suggestion.Entry; import org.elasticsearch.search.suggest.Suggest.Suggestion.Entry.Option; import org.elasticsearch.search.suggest.completion.CompletionSuggestion; @@ -39,15 +50,20 @@ import java.util.HashMap; import java.util.Iterator; import java.util.List; +import java.util.Locale; import java.util.Map; import java.util.stream.Collectors; +import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg; +import static org.elasticsearch.common.xcontent.ConstructingObjectParser.optionalConstructorArg; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + /** * Top level suggest result, containing the result for each suggestion. */ public class Suggest implements Iterable>>, Streamable, ToXContent { - private static final String NAME = "suggest"; + public static final String NAME = "suggest"; public static final Comparator