diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 92b35e97baa05..6a4531f1bdefa 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -11,3 +11,4 @@ attention.
- If submitting code, have you built your formula locally prior to submission with `gradle check`?
- If submitting code, is your pull request against master? Unless there is a good reason otherwise, we prefer pull requests against master and will backport as needed.
- If submitting code, have you checked that your submission is for an [OS that we support](https://www.elastic.co/support/matrix#show_os)?
+- If you are submitting this code for a class then read our [policy](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md#contributing-as-part-of-a-class) for that.
diff --git a/.gitignore b/.gitignore
index d1810a5a83fc7..b4ec8795057e2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -33,6 +33,7 @@ dependency-reduced-pom.xml
# testing stuff
**/.local*
.vagrant/
+/logs/
# osx stuff
.DS_Store
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index b0f1e054e4693..f38a0588a9c0d 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -38,6 +38,11 @@ If you have a bugfix or new feature that you would like to contribute to Elastic
We enjoy working with contributors to get their code accepted. There are many approaches to fixing a problem and it is important to find the best approach before writing too much code.
+Note that it is unlikely the project will merge refactors for the sake of refactoring. These
+types of pull requests have a high cost to maintainers in reviewing and testing with little
+to no tangible benefit. This especially includes changes generated by tools. For example,
+converting all generic interface instances to use the diamond operator.
+
The process for contributing to any of the [Elastic repositories](https://github.com/elastic/) is similar. Details for individual projects can be found below.
### Fork and clone the repository
@@ -88,7 +93,8 @@ Contributing to the Elasticsearch codebase
**Repository:** [https://github.com/elastic/elasticsearch](https://github.com/elastic/elasticsearch)
Make sure you have [Gradle](http://gradle.org) installed, as
-Elasticsearch uses it as its build system.
+Elasticsearch uses it as its build system. Gradle must be at least
+version 3.3 in order to build successfully.
Eclipse users can automatically configure their IDE: `gradle eclipse`
then `File: Import: Existing Projects into Workspace`. Select the
@@ -137,3 +143,32 @@ Before submitting your changes, run the test suite to make sure that nothing is
```sh
gradle check
```
+
+Contributing as part of a class
+-------------------------------
+In general Elasticsearch is happy to accept contributions that were created as
+part of a class but strongly advise against making the contribution as part of
+the class. So if you have code you wrote for a class feel free to submit it.
+
+Please, please, please do not assign contributing to Elasticsearch as part of a
+class. If you really want to assign writing code for Elasticsearch as an
+assignment then the code contributions should be made to your private clone and
+opening PRs against the primary Elasticsearch clone must be optional, fully
+voluntary, not for a grade, and without any deadlines.
+
+Because:
+
+* While the code review process is likely very educational, it can take wildly
+varying amounts of time depending on who is available, where the change is, and
+how deep the change is. There is no way to predict how long it will take unless
+we rush.
+* We do not rush reviews without a very, very good reason. Class deadlines
+aren't a good enough reason for us to rush reviews.
+* We deeply discourage opening a PR you don't intend to work through the entire
+code review process because it wastes our time.
+* We don't have the capacity to absorb an entire class full of new contributors,
+especially when they are unlikely to become long time contributors.
+
+Finally, we require that you run `gradle check` before submitting a
+non-documentation contribution. This is mentioned above, but it is worth
+repeating in this section because it has come up in this context.
diff --git a/GRADLE.CHEATSHEET b/GRADLE.CHEATSHEET
index 3362b8571e7b9..2c9c34fe1b512 100644
--- a/GRADLE.CHEATSHEET
+++ b/GRADLE.CHEATSHEET
@@ -4,4 +4,4 @@ test -> test
verify -> check
verify -Dskip.unit.tests -> integTest
package -DskipTests -> assemble
-install -DskipTests -> install
+install -DskipTests -> publishToMavenLocal
diff --git a/NOTICE.txt b/NOTICE.txt
index c99b958193198..2126baed56547 100644
--- a/NOTICE.txt
+++ b/NOTICE.txt
@@ -1,5 +1,5 @@
Elasticsearch
-Copyright 2009-2016 Elasticsearch
+Copyright 2009-2017 Elasticsearch
-This product includes software developed by The Apache Software
-Foundation (http://www.apache.org/).
+This product includes software developed by The Apache Software Foundation
+(http://www.apache.org/).
diff --git a/README.textile b/README.textile
index 69d3fd54767e5..91895df93b4f8 100644
--- a/README.textile
+++ b/README.textile
@@ -50,16 +50,16 @@ h3. Indexing
Let's try and index some twitter like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):
-curl -XPUT 'http://localhost:9200/twitter/user/kimchy?pretty' -d '{ "name" : "Shay Banon" }'
+curl -XPUT 'http://localhost:9200/twitter/user/kimchy?pretty' -H 'Content-Type: application/json' -d '{ "name" : "Shay Banon" }'
-curl -XPUT 'http://localhost:9200/twitter/tweet/1?pretty' -d '
+curl -XPUT 'http://localhost:9200/twitter/tweet/1?pretty' -H 'Content-Type: application/json' -d '
{
"user": "kimchy",
"post_date": "2009-11-15T13:12:00",
"message": "Trying out Elasticsearch, so far so good?"
}'
-curl -XPUT 'http://localhost:9200/twitter/tweet/2?pretty' -d '
+curl -XPUT 'http://localhost:9200/twitter/tweet/2?pretty' -H 'Content-Type: application/json' -d '
{
"user": "kimchy",
"post_date": "2009-11-15T14:12:12",
@@ -87,7 +87,7 @@ curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=tru
We can also use the JSON query language Elasticsearch provides instead of a query string:
-curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
"query" : {
"match" : { "user": "kimchy" }
@@ -98,7 +98,7 @@ curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
Just for kicks, let's get all the documents stored (we should see the user as well):
-curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
"query" : {
"match_all" : {}
@@ -106,10 +106,10 @@ curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
}'
-We can also do range search (the @postDate@ was automatically identified as date)
+We can also do range search (the @post_date@ was automatically identified as date)
-curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
"query" : {
"range" : {
@@ -130,16 +130,16 @@ Elasticsearch supports multiple indices, as well as multiple types per index. In
Another way to define our simple twitter system is to have a different index per user (note, though that each index has an overhead). Here is the indexing curl's in this case:
-curl -XPUT 'http://localhost:9200/kimchy/info/1?pretty' -d '{ "name" : "Shay Banon" }'
+curl -XPUT 'http://localhost:9200/kimchy/info/1?pretty' -H 'Content-Type: application/json' -d '{ "name" : "Shay Banon" }'
-curl -XPUT 'http://localhost:9200/kimchy/tweet/1?pretty' -d '
+curl -XPUT 'http://localhost:9200/kimchy/tweet/1?pretty' -H 'Content-Type: application/json' -d '
{
"user": "kimchy",
"post_date": "2009-11-15T13:12:00",
"message": "Trying out Elasticsearch, so far so good?"
}'
-curl -XPUT 'http://localhost:9200/kimchy/tweet/2?pretty' -d '
+curl -XPUT 'http://localhost:9200/kimchy/tweet/2?pretty' -H 'Content-Type: application/json' -d '
{
"user": "kimchy",
"post_date": "2009-11-15T14:12:12",
@@ -152,7 +152,7 @@ The above will index information into the @kimchy@ index, with two types, @info@
Complete control on the index level is allowed. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index, to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):
-curl -XPUT http://localhost:9200/another_user?pretty -d '
+curl -XPUT http://localhost:9200/another_user?pretty -H 'Content-Type: application/json' -d '
{
"index" : {
"number_of_shards" : 1,
@@ -165,7 +165,7 @@ Search (and similar operations) are multi index aware. This means that we can ea
index (twitter user), for example:
-curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
"query" : {
"match_all" : {}
@@ -176,7 +176,7 @@ curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
Or on all the indices:
-curl -XGET 'http://localhost:9200/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
"query" : {
"match_all" : {}
@@ -200,7 +200,7 @@ We have just covered a very small portion of what Elasticsearch is all about. Fo
h3. Building from Source
-Elasticsearch uses "Gradle":https://gradle.org for its build system. You'll need to have a modern version of Gradle installed - 2.13 should do.
+Elasticsearch uses "Gradle":https://gradle.org for its build system. You'll need to have at least version 3.3 of Gradle installed.
In order to create a distribution, simply run the @gradle assemble@ command in the cloned directory.
diff --git a/TESTING.asciidoc b/TESTING.asciidoc
index 5046dc087b564..4b47f0409734d 100644
--- a/TESTING.asciidoc
+++ b/TESTING.asciidoc
@@ -41,6 +41,12 @@ run it using Gradle:
gradle run
-------------------------------------
+or to attach a remote debugger, run it as:
+
+-------------------------------------
+gradle run --debug-jvm
+-------------------------------------
+
=== Test case filtering.
- `tests.class` is a class-filtering shell-like glob pattern,
@@ -335,7 +341,7 @@ vagrant plugin install vagrant-cachier
. Validate your installed dependencies:
-------------------------------------
-gradle :qa:vagrant:checkVagrantVersion
+gradle :qa:vagrant:vagrantCheckVersion
-------------------------------------
. Download and smoke test the VMs with `gradle vagrantSmokeTest` or
@@ -361,24 +367,25 @@ VM running trusty by running
These are the linux flavors the Vagrantfile currently supports:
-* ubuntu-1204 aka precise
* ubuntu-1404 aka trusty
-* ubuntu-1504 aka vivid
-* debian-8 aka jessie, the current debian stable distribution
+* ubuntu-1604 aka xenial
+* debian-8 aka jessie
+* debian-9 aka stretch, the current debian stable distribution
* centos-6
* centos-7
-* fedora-22
+* fedora-25
+* fedora-26
+* oel-6 aka Oracle Enterprise Linux 6
* oel-7 aka Oracle Enterprise Linux 7
* sles-12
-* opensuse-13
+* opensuse-42 aka Leap
We're missing the following from the support matrix because there aren't high
quality boxes available in vagrant atlas:
* sles-11
-* oel-6
-We're missing the follow because our tests are very linux/bash centric:
+We're missing the following because our tests are very linux/bash centric:
* Windows Server 2012
@@ -427,19 +434,46 @@ and in another window:
----------------------------------------------------
vagrant up centos-7 --provider virtualbox && vagrant ssh centos-7
-cd $TESTROOT
-sudo bats $BATS/*rpm*.bats
+cd $BATS_ARCHIVES
+sudo -E bats $BATS_TESTS/*rpm*.bats
----------------------------------------------------
If you wanted to retest all the release artifacts on a single VM you could:
-------------------------------------------------
-gradle prepareTestRoot
-vagrant up ubuntu-1404 --provider virtualbox && vagrant ssh ubuntu-1404
-cd $TESTROOT
-sudo bats $BATS/*.bats
+gradle setupBats
+cd qa/vagrant; vagrant up ubuntu-1404 --provider virtualbox && vagrant ssh ubuntu-1404
+cd $BATS_ARCHIVES
+sudo -E bats $BATS_TESTS/*.bats
-------------------------------------------------
+You can also use Gradle to prepare the test environment and then starts a single VM:
+
+-------------------------------------------------
+gradle vagrantFedora25#up
+-------------------------------------------------
+
+Or any of vagrantCentos6#up, vagrantCentos7#up, vagrantDebian8#up,
+vagrantFedora25#up, vagrantOel6#up, vagrantOel7#up, vagrantOpensuse13#up,
+vagrantSles12#up, vagrantUbuntu1404#up, vagrantUbuntu1604#up.
+
+Once up, you can then connect to the VM using SSH from the elasticsearch directory:
+
+-------------------------------------------------
+vagrant ssh fedora-25
+-------------------------------------------------
+
+Or from another directory:
+
+-------------------------------------------------
+VAGRANT_CWD=/path/to/elasticsearch vagrant ssh fedora-25
+-------------------------------------------------
+
+Note: Starting vagrant VM outside of the elasticsearch folder requires to
+indicates the folder that contains the Vagrantfile using the VAGRANT_CWD
+environment variable.
+
+
== Coverage analysis
Tests can be run instrumented with jacoco to produce a coverage report in
@@ -475,12 +509,11 @@ gradle run --debug-jvm
== Building with extra plugins
Additional plugins may be built alongside elasticsearch, where their
dependency on elasticsearch will be substituted with the local elasticsearch
-build. To add your plugin, create a directory called x-plugins as a sibling
-of elasticsearch. Checkout your plugin underneath x-plugins and the build
-will automatically pick it up. You can verify the plugin is included as part
-of the build by checking the projects of the build.
+build. To add your plugin, create a directory called elasticsearch-extra as
+a sibling of elasticsearch. Checkout your plugin underneath elasticsearch-extra
+and the build will automatically pick it up. You can verify the plugin is
+included as part of the build by checking the projects of the build.
---------------------------------------------------------------------------
gradle projects
---------------------------------------------------------------------------
-
diff --git a/Vagrantfile b/Vagrantfile
index fc148ee444319..487594bba8a1a 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -22,27 +22,27 @@
# under the License.
Vagrant.configure(2) do |config|
- config.vm.define "ubuntu-1204" do |config|
- config.vm.box = "elastic/ubuntu-12.04-x86_64"
- ubuntu_common config
- end
config.vm.define "ubuntu-1404" do |config|
config.vm.box = "elastic/ubuntu-14.04-x86_64"
ubuntu_common config
end
- config.vm.define "ubuntu-1504" do |config|
- config.vm.box = "elastic/ubuntu-15.04-x86_64"
+ config.vm.define "ubuntu-1604" do |config|
+ config.vm.box = "elastic/ubuntu-16.04-x86_64"
ubuntu_common config, extra: <<-SHELL
# Install Jayatana so we can work around it being present.
[ -f /usr/share/java/jayatanaag.jar ] || install jayatana
SHELL
end
- # Wheezy's backports don't contain Openjdk 8 and the backflips required to
- # get the sun jdk on there just aren't worth it. We have jessie for testing
- # debian and it works fine.
+ # Wheezy's backports don't contain Openjdk 8 and the backflips
+ # required to get the sun jdk on there just aren't worth it. We have
+ # jessie and stretch for testing debian and it works fine.
config.vm.define "debian-8" do |config|
config.vm.box = "elastic/debian-8-x86_64"
- deb_common config, 'echo deb http://cloudfront.debian.net/debian jessie-backports main > /etc/apt/sources.list.d/backports.list', 'backports'
+ deb_common config
+ end
+ config.vm.define "debian-9" do |config|
+ config.vm.box = "elastic/debian-9-x86_64"
+ deb_common config
end
config.vm.define "centos-6" do |config|
config.vm.box = "elastic/centos-6-x86_64"
@@ -60,12 +60,16 @@ Vagrant.configure(2) do |config|
config.vm.box = "elastic/oraclelinux-7-x86_64"
rpm_common config
end
- config.vm.define "fedora-24" do |config|
- config.vm.box = "elastic/fedora-24-x86_64"
+ config.vm.define "fedora-25" do |config|
+ config.vm.box = "elastic/fedora-25-x86_64"
dnf_common config
end
- config.vm.define "opensuse-13" do |config|
- config.vm.box = "elastic/opensuse-13-x86_64"
+ config.vm.define "fedora-26" do |config|
+ config.vm.box = "elastic/fedora-26-x86_64"
+ dnf_common config
+ end
+ config.vm.define "opensuse-42" do |config|
+ config.vm.box = "elastic/opensuse-42-x86_64"
opensuse_common config
end
config.vm.define "sles-12" do |config|
@@ -77,6 +81,9 @@ Vagrant.configure(2) do |config|
# the elasticsearch project called vagrant....
config.vm.synced_folder ".", "/vagrant", disabled: true
config.vm.synced_folder ".", "/elasticsearch"
+ # Expose project directory
+ PROJECT_DIR = ENV['VAGRANT_PROJECT_DIR'] || Dir.pwd
+ config.vm.synced_folder PROJECT_DIR, "/project"
config.vm.provider "virtualbox" do |v|
# Give the boxes 3GB because Elasticsearch defaults to using 2GB
v.memory = 3072
@@ -105,16 +112,22 @@ SOURCE_PROMPT
source /etc/profile.d/elasticsearch_prompt.sh
SOURCE_PROMPT
SHELL
+ # Creates a file to mark the machine as created by vagrant. Tests check
+ # for this file and refuse to run if it is not present so that they can't
+ # be run unexpectedly.
+ config.vm.provision "markerfile", type: "shell", inline: <<-SHELL
+ touch /etc/is_vagrant_vm
+ SHELL
end
config.config_procs.push ['2', set_prompt]
end
end
def ubuntu_common(config, extra: '')
- deb_common config, 'apt-add-repository -y ppa:openjdk-r/ppa > /dev/null 2>&1', 'openjdk-r-*', extra: extra
+ deb_common config, extra: extra
end
-def deb_common(config, add_openjdk_repository_command, openjdk_list, extra: '')
+def deb_common(config, extra: '')
# http://foo-o-rama.com/vagrant--stdin-is-not-a-tty--fix.html
config.vm.provision "fix-no-tty", type: "shell" do |s|
s.privileged = false
@@ -124,24 +137,14 @@ def deb_common(config, add_openjdk_repository_command, openjdk_list, extra: '')
update_command: "apt-get update",
update_tracking_file: "/var/cache/apt/archives/last_update",
install_command: "apt-get install -y",
- java_package: "openjdk-8-jdk",
- extra: <<-SHELL
- export DEBIAN_FRONTEND=noninteractive
- ls /etc/apt/sources.list.d/#{openjdk_list}.list > /dev/null 2>&1 ||
- (echo "==> Importing java-8 ppa" &&
- #{add_openjdk_repository_command} &&
- apt-get update)
- #{extra}
-SHELL
- )
+ extra: extra)
end
def rpm_common(config)
provision(config,
update_command: "yum check-update",
update_tracking_file: "/var/cache/yum/last_update",
- install_command: "yum install -y",
- java_package: "java-1.8.0-openjdk-devel")
+ install_command: "yum install -y")
end
def dnf_common(config)
@@ -149,7 +152,7 @@ def dnf_common(config)
update_command: "dnf check-update",
update_tracking_file: "/var/cache/dnf/last_update",
install_command: "dnf install -y",
- java_package: "java-1.8.0-openjdk-devel")
+ install_command_retries: 5)
if Vagrant.has_plugin?("vagrant-cachier")
# Autodetect doesn't work....
config.cache.auto_detect = false
@@ -166,17 +169,12 @@ def suse_common(config, extra)
update_command: "zypper --non-interactive list-updates",
update_tracking_file: "/var/cache/zypp/packages/last_update",
install_command: "zypper --non-interactive --quiet install --no-recommends",
- java_package: "java-1_8_0-openjdk-devel",
extra: extra)
end
def sles_common(config)
extra = <<-SHELL
- zypper rr systemsmanagement_puppet
- zypper addrepo -t yast2 http://demeter.uni-regensburg.de/SLES12-x64/DVD1/ dvd1 || true
- zypper addrepo -t yast2 http://demeter.uni-regensburg.de/SLES12-x64/DVD2/ dvd2 || true
- zypper addrepo http://download.opensuse.org/repositories/Java:Factory/SLE_12/Java:Factory.repo || true
- zypper --no-gpg-checks --non-interactive refresh
+ zypper rr systemsmanagement_puppet puppetlabs-pc1
zypper --non-interactive install git-core
SHELL
suse_common config, extra
@@ -191,26 +189,50 @@ end
# is cached by vagrant-cachier.
# @param install_command [String] The command used to install a package.
# Required. Think `apt-get install #{package}`.
-# @param java_package [String] The name of the java package. Required.
# @param extra [String] Extra provisioning commands run before anything else.
# Optional. Used for things like setting up the ppa for Java 8.
def provision(config,
update_command: 'required',
update_tracking_file: 'required',
install_command: 'required',
- java_package: 'required',
+ install_command_retries: 0,
extra: '')
# Vagrant run ruby 2.0.0 which doesn't have required named parameters....
raise ArgumentError.new('update_command is required') if update_command == 'required'
raise ArgumentError.new('update_tracking_file is required') if update_tracking_file == 'required'
raise ArgumentError.new('install_command is required') if install_command == 'required'
- raise ArgumentError.new('java_package is required') if java_package == 'required'
- config.vm.provision "bats dependencies", type: "shell", inline: <<-SHELL
+ config.vm.provider "virtualbox" do |v|
+ # Give the box more memory and cpu because our tests are beasts!
+ v.memory = Integer(ENV['VAGRANT_MEMORY'] || 8192)
+ v.cpus = Integer(ENV['VAGRANT_CPUS'] || 4)
+ end
+ config.vm.synced_folder "#{Dir.home}/.gradle/caches", "/home/vagrant/.gradle/caches",
+ create: true,
+ owner: "vagrant"
+ config.vm.provision "dependencies", type: "shell", inline: <<-SHELL
set -e
set -o pipefail
+
+ # Retry install command up to $2 times, if failed
+ retry_installcommand() {
+ n=0
+ while true; do
+ #{install_command} $1 && break
+ let n=n+1
+ if [ $n -ge $2 ]; then
+ echo "==> Exhausted retries to install $1"
+ return 1
+ fi
+ echo "==> Retrying installing $1, attempt $((n+1))"
+ # Add a small delay to increase chance of metalink providing updated list of mirrors
+ sleep 5
+ done
+ }
+
installed() {
command -v $1 2>&1 >/dev/null
}
+
install() {
# Only apt-get update if we haven't in the last day
if [ ! -f #{update_tracking_file} ] || [ "x$(find #{update_tracking_file} -mtime +0)" == "x#{update_tracking_file}" ]; then
@@ -219,15 +241,24 @@ def provision(config,
touch #{update_tracking_file}
fi
echo "==> Installing $1"
- #{install_command} $1
+ if [ #{install_command_retries} -eq 0 ]
+ then
+ #{install_command} $1
+ else
+ retry_installcommand $1 #{install_command_retries}
+ fi
}
+
ensure() {
installed $1 || install $1
}
#{extra}
- installed java || install #{java_package}
+ installed java || {
+ echo "==> Java is not installed on vagrant box ${config.vm.box}"
+ return 1
+ }
ensure tar
ensure curl
ensure unzip
@@ -241,13 +272,39 @@ def provision(config,
/tmp/bats/install.sh /usr
rm -rf /tmp/bats
}
+
+ installed gradle || {
+ echo "==> Installing Gradle"
+ curl -sS -o /tmp/gradle.zip -L https://services.gradle.org/distributions/gradle-3.3-bin.zip
+ unzip -q /tmp/gradle.zip -d /opt
+ rm -rf /tmp/gradle.zip
+ ln -s /opt/gradle-3.3/bin/gradle /usr/bin/gradle
+ # make nfs mounted gradle home dir writeable
+ chown vagrant:vagrant /home/vagrant/.gradle
+ }
+
+
cat \<\ /etc/profile.d/elasticsearch_vars.sh
export ZIP=/elasticsearch/distribution/zip/build/distributions
export TAR=/elasticsearch/distribution/tar/build/distributions
export RPM=/elasticsearch/distribution/rpm/build/distributions
export DEB=/elasticsearch/distribution/deb/build/distributions
-export TESTROOT=/elasticsearch/qa/vagrant/build/testroot
-export BATS=/elasticsearch/qa/vagrant/src/test/resources/packaging/scripts
+export BATS=/project/build/bats
+export BATS_UTILS=/project/build/bats/utils
+export BATS_TESTS=/project/build/bats/tests
+export BATS_ARCHIVES=/project/build/bats/archives
+export GRADLE_HOME=/opt/gradle-3.3
VARS
+ cat \<\ /etc/sudoers.d/elasticsearch_vars
+Defaults env_keep += "ZIP"
+Defaults env_keep += "TAR"
+Defaults env_keep += "RPM"
+Defaults env_keep += "DEB"
+Defaults env_keep += "BATS"
+Defaults env_keep += "BATS_UTILS"
+Defaults env_keep += "BATS_TESTS"
+Defaults env_keep += "BATS_ARCHIVES"
+SUDOERS_VARS
+ chmod 0440 /etc/sudoers.d/elasticsearch_vars
SHELL
end
diff --git a/benchmarks/build.gradle b/benchmarks/build.gradle
index 3b8b92328e149..35b7c1a163d3e 100644
--- a/benchmarks/build.gradle
+++ b/benchmarks/build.gradle
@@ -34,13 +34,14 @@ apply plugin: 'com.github.johnrengelman.shadow'
// have the shadow plugin provide the runShadow task
apply plugin: 'application'
+// Not published so no need to assemble
+tasks.remove(assemble)
+build.dependsOn.remove('assemble')
+
archivesBaseName = 'elasticsearch-benchmarks'
mainClassName = 'org.openjdk.jmh.Main'
-// never try to invoke tests on the benchmark project - there aren't any
-check.dependsOn.remove(test)
-// explicitly override the test task too in case somebody invokes 'gradle test' so it won't trip
-task test(type: Test, overwrite: true)
+test.enabled = false
dependencies {
compile("org.elasticsearch:elasticsearch:${version}") {
@@ -60,7 +61,6 @@ compileJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-u
// enable the JMH's BenchmarkProcessor to generate the final benchmark classes
// needs to be added separately otherwise Gradle will quote it and javac will fail
compileJava.options.compilerArgs.addAll(["-processor", "org.openjdk.jmh.generators.BenchmarkProcessor"])
-compileTestJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked"
forbiddenApis {
// classes generated by JMH can use all sorts of forbidden APIs but we have no influence at all and cannot exclude these classes
diff --git a/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/AllocationBenchmark.java b/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/AllocationBenchmark.java
index 86902b380c86e..39cfdb6582d74 100644
--- a/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/AllocationBenchmark.java
+++ b/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/AllocationBenchmark.java
@@ -27,7 +27,6 @@
import org.elasticsearch.cluster.routing.RoutingTable;
import org.elasticsearch.cluster.routing.ShardRoutingState;
import org.elasticsearch.cluster.routing.allocation.AllocationService;
-import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
import org.elasticsearch.common.settings.Settings;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
@@ -160,11 +159,9 @@ private int toInt(String v) {
public ClusterState measureAllocation() {
ClusterState clusterState = initialClusterState;
while (clusterState.getRoutingNodes().hasUnassignedShards()) {
- RoutingAllocation.Result result = strategy.applyStartedShards(clusterState, clusterState.getRoutingNodes()
+ clusterState = strategy.applyStartedShards(clusterState, clusterState.getRoutingNodes()
.shardsWithState(ShardRoutingState.INITIALIZING));
- clusterState = ClusterState.builder(clusterState).routingResult(result).build();
- result = strategy.reroute(clusterState, "reroute");
- clusterState = ClusterState.builder(clusterState).routingResult(result).build();
+ clusterState = strategy.reroute(clusterState, "reroute");
}
return clusterState;
}
diff --git a/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/Allocators.java b/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/Allocators.java
index ad06f75074d53..860137cf559a0 100644
--- a/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/Allocators.java
+++ b/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/Allocators.java
@@ -22,10 +22,10 @@
import org.elasticsearch.cluster.ClusterModule;
import org.elasticsearch.cluster.EmptyClusterInfoService;
import org.elasticsearch.cluster.node.DiscoveryNode;
+import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.routing.allocation.AllocationService;
-import org.elasticsearch.cluster.routing.allocation.FailedRerouteAllocation;
+import org.elasticsearch.cluster.routing.allocation.FailedShard;
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
-import org.elasticsearch.cluster.routing.allocation.StartedRerouteAllocation;
import org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator;
import org.elasticsearch.cluster.routing.allocation.decider.AllocationDecider;
import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders;
@@ -35,9 +35,7 @@
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.gateway.GatewayAllocator;
-import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;
-import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
@@ -52,12 +50,12 @@ protected NoopGatewayAllocator() {
}
@Override
- public void applyStartedShards(StartedRerouteAllocation allocation) {
+ public void applyStartedShards(RoutingAllocation allocation, List startedShards) {
// noop
}
@Override
- public void applyFailedShards(FailedRerouteAllocation allocation) {
+ public void applyFailedShards(RoutingAllocation allocation, List failedShards) {
// noop
}
diff --git a/distribution/licenses/HdrHistogram-NOTICE.txt b/bootstrap.memory_lock
similarity index 100%
rename from distribution/licenses/HdrHistogram-NOTICE.txt
rename to bootstrap.memory_lock
diff --git a/build.gradle b/build.gradle
index f1b57d7857bfc..2c7fff04e53ea 100644
--- a/build.gradle
+++ b/build.gradle
@@ -17,17 +17,26 @@
* under the License.
*/
-import com.bmuschko.gradle.nexus.NexusPlugin
+import java.util.regex.Matcher
+import java.nio.file.Path
import org.eclipse.jgit.lib.Repository
import org.eclipse.jgit.lib.RepositoryBuilder
import org.gradle.plugins.ide.eclipse.model.SourceFolder
import org.apache.tools.ant.taskdefs.condition.Os
+import org.elasticsearch.gradle.BuildPlugin
+import org.elasticsearch.gradle.VersionProperties
+import org.elasticsearch.gradle.Version
// common maven publishing configuration
subprojects {
group = 'org.elasticsearch'
version = org.elasticsearch.gradle.VersionProperties.elasticsearch
description = "Elasticsearch subproject ${project.path}"
+}
+
+Path rootPath = rootDir.toPath()
+// setup pom license info, but only for artifacts that are part of elasticsearch
+configure(subprojects.findAll { it.projectDir.toPath().startsWith(rootPath) }) {
// we only use maven publish to add tasks for pom generation
plugins.withType(MavenPublishPlugin).whenPluginAdded {
@@ -52,89 +61,128 @@ subprojects {
}
}
}
+ plugins.withType(BuildPlugin).whenPluginAdded {
+ project.licenseFile = project.rootProject.file('LICENSE.txt')
+ project.noticeFile = project.rootProject.file('NOTICE.txt')
+ }
+}
- plugins.withType(NexusPlugin).whenPluginAdded {
- modifyPom {
- project {
- url 'https://github.com/elastic/elasticsearch'
- inceptionYear '2009'
-
- scm {
- url 'https://github.com/elastic/elasticsearch'
- connection 'scm:https://elastic@github.com/elastic/elasticsearch'
- developerConnection 'scm:git://github.com/elastic/elasticsearch.git'
- }
-
- licenses {
- license {
- name 'The Apache Software License, Version 2.0'
- url 'http://www.apache.org/licenses/LICENSE-2.0.txt'
- distribution 'repo'
- }
- }
- }
+// introspect all versions of ES that may be tested agains for backwards compatibility
+Version currentVersion = Version.fromString(VersionProperties.elasticsearch.minus('-SNAPSHOT'))
+File versionFile = file('core/src/main/java/org/elasticsearch/Version.java')
+List versionLines = versionFile.readLines('UTF-8')
+List versions = []
+// keep track of the major version's first version, so we know where wire compat begins
+int firstMajorIndex = -1 // index in the versions list of the first minor from this major
+for (String line : versionLines) {
+ Matcher match = line =~ /\W+public static final Version V_(\d+)_(\d+)_(\d+)(_UNRELEASED)? .*/
+ if (match.matches()) {
+ int major = Integer.parseInt(match.group(1))
+ int minor = Integer.parseInt(match.group(2))
+ int bugfix = Integer.parseInt(match.group(3))
+ boolean unreleased = match.group(4) != null
+ Version foundVersion = new Version(major, minor, bugfix, false, unreleased)
+ if (currentVersion != foundVersion) {
+ versions.add(foundVersion)
}
- extraArchive {
- javadoc = true
- tests = false
- }
- nexus {
- String buildSnapshot = System.getProperty('build.snapshot', 'true')
- if (buildSnapshot == 'false') {
- Repository repo = new RepositoryBuilder().findGitDir(project.rootDir).build()
- String shortHash = repo.resolve('HEAD')?.name?.substring(0,7)
- repositoryUrl = project.hasProperty('build.repository') ? project.property('build.repository') : "file://${System.getenv('HOME')}/elasticsearch-releases/${version}-${shortHash}/"
- }
- }
- // we have our own username/password prompts so that they only happen once
- // TODO: add gpg signing prompts, which is tricky, as the buildDeb/buildRpm tasks are executed before this code block
- project.gradle.taskGraph.whenReady { taskGraph ->
- if (taskGraph.allTasks.any { it.name == 'uploadArchives' }) {
- Console console = System.console()
- // no need for username/password on local deploy
- if (project.nexus.repositoryUrl.startsWith('file://')) {
- project.rootProject.allprojects.each {
- it.ext.nexusUsername = 'foo'
- it.ext.nexusPassword = 'bar'
- }
- } else {
- if (project.hasProperty('nexusUsername') == false) {
- String nexusUsername = console.readLine('\nNexus username: ')
- project.rootProject.allprojects.each {
- it.ext.nexusUsername = nexusUsername
- }
- }
- if (project.hasProperty('nexusPassword') == false) {
- String nexusPassword = new String(console.readPassword('\nNexus password: '))
- project.rootProject.allprojects.each {
- it.ext.nexusPassword = nexusPassword
- }
- }
- }
- }
+ if (major == currentVersion.major && firstMajorIndex == -1) {
+ firstMajorIndex = versions.size() - 1
}
}
}
+if (versions.toSorted { it.id } != versions) {
+ println "Versions: ${versions}"
+ throw new GradleException("Versions.java contains out of order version constants")
+}
+if (currentVersion.bugfix == 0) {
+ // If on a release branch, after the initial release of that branch, the bugfix version will
+ // be bumped, and will be != 0. On master and N.x branches, we want to test against the
+ // unreleased version of closest branch. So for those cases, the version includes -SNAPSHOT,
+ // and the bwc distribution will checkout and build that version.
+ Version last = versions[-1]
+ versions[-1] = new Version(last.major, last.minor, last.bugfix,
+ true, last.unreleased)
+}
+
+// build metadata from previous build, contains eg hashes for bwc builds
+String buildMetadataValue = System.getenv('BUILD_METADATA')
+if (buildMetadataValue == null) {
+ buildMetadataValue = ''
+}
+Map buildMetadataMap = buildMetadataValue.tokenize(';').collectEntries {
+ def (String key, String value) = it.split('=')
+ return [key, value]
+}
+// injecting groovy property variables into all projects
allprojects {
// injecting groovy property variables into all projects
project.ext {
// for ide hacks...
isEclipse = System.getProperty("eclipse.launcher") != null || gradle.startParameter.taskNames.contains('eclipse') || gradle.startParameter.taskNames.contains('cleanEclipse')
isIdea = System.getProperty("idea.active") != null || gradle.startParameter.taskNames.contains('idea') || gradle.startParameter.taskNames.contains('cleanIdea')
+ // for backcompat testing
+ indexCompatVersions = versions
+ wireCompatVersions = versions.subList(firstMajorIndex, versions.size())
+ buildMetadata = buildMetadataMap
+ }
+}
+
+task verifyVersions {
+ doLast {
+ if (gradle.startParameter.isOffline()) {
+ throw new GradleException("Must run in online mode to verify versions")
+ }
+ // Read the list from maven central
+ Node xml
+ new URL('https://repo1.maven.org/maven2/org/elasticsearch/elasticsearch/maven-metadata.xml').openStream().withStream { s ->
+ xml = new XmlParser().parse(s)
+ }
+ Set knownVersions = new TreeSet<>(xml.versioning.versions.version.collect { it.text() }.findAll { it ==~ /\d\.\d\.\d/ }.collect { Version.fromString(it) })
+
+ // Limit the known versions to those that should be index compatible, and are not future versions
+ knownVersions = knownVersions.findAll { it.major >= 2 && it.before(VersionProperties.elasticsearch) }
+
+ /* Limit the listed versions to those that have been marked as released.
+ * Versions not marked as released don't get the same testing and we want
+ * to make sure that we flip all unreleased versions to released as soon
+ * as possible after release. */
+ Set actualVersions = new TreeSet<>(indexCompatVersions.findAll { false == it.snapshot })
+
+ // Finally, compare!
+ if (knownVersions.equals(actualVersions) == false) {
+ throw new GradleException("out-of-date released versions\nActual :" + actualVersions + "\nExpected:" + knownVersions +
+ "\nUpdate Version.java. Note that Version.CURRENT doesn't count because it is not released.")
+ }
+ }
+}
+
+/*
+ * When adding backcompat behavior that spans major versions, temporarily
+ * disabling the backcompat tests is necessary. This flag controls
+ * the enabled state of every bwc task. It should be set back to true
+ * after the backport of the backcompat code is complete.
+ */
+allprojects {
+ ext.bwc_tests_enabled = true
+}
+
+task verifyBwcTestsEnabled {
+ doLast {
+ if (project.bwc_tests_enabled == false) {
+ throw new GradleException('Bwc tests are disabled. They must be re-enabled after completing backcompat behavior backporting.')
+ }
}
}
+task branchConsistency {
+ description 'Ensures this branch is internally consistent. For example, that versions constants match released versions.'
+ group 'Verification'
+ dependsOn verifyVersions, verifyBwcTestsEnabled
+}
+
subprojects {
project.afterEvaluate {
- // include license and notice in jars
- tasks.withType(Jar) {
- into('META-INF') {
- from project.rootProject.rootDir
- include 'LICENSE.txt'
- include 'NOTICE.txt'
- }
- }
// ignore missing javadocs
tasks.withType(Javadoc) { Javadoc javadoc ->
// the -quiet here is because of a bug in gradle, in that adding a string option
@@ -163,8 +211,9 @@ subprojects {
"org.elasticsearch.gradle:build-tools:${version}": ':build-tools',
"org.elasticsearch:rest-api-spec:${version}": ':rest-api-spec',
"org.elasticsearch:elasticsearch:${version}": ':core',
- "org.elasticsearch.client:rest:${version}": ':client:rest',
- "org.elasticsearch.client:sniffer:${version}": ':client:sniffer',
+ "org.elasticsearch.client:elasticsearch-rest-client:${version}": ':client:rest',
+ "org.elasticsearch.client:elasticsearch-rest-client-sniffer:${version}": ':client:sniffer',
+ "org.elasticsearch.client:elasticsearch-rest-high-level-client:${version}": ':client:rest-high-level',
"org.elasticsearch.client:test:${version}": ':client:test',
"org.elasticsearch.client:transport:${version}": ':client:transport',
"org.elasticsearch.test:framework:${version}": ':test:framework',
@@ -179,12 +228,23 @@ subprojects {
"org.elasticsearch.plugin:transport-netty4-client:${version}": ':modules:transport-netty4',
"org.elasticsearch.plugin:reindex-client:${version}": ':modules:reindex',
"org.elasticsearch.plugin:lang-mustache-client:${version}": ':modules:lang-mustache',
+ "org.elasticsearch.plugin:parent-join-client:${version}": ':modules:parent-join',
+ "org.elasticsearch.plugin:aggs-matrix-stats-client:${version}": ':modules:aggs-matrix-stats',
"org.elasticsearch.plugin:percolator-client:${version}": ':modules:percolator',
]
- configurations.all {
- resolutionStrategy.dependencySubstitution { DependencySubstitutions subs ->
- projectSubstitutions.each { k,v ->
- subs.substitute(subs.module(k)).with(subs.project(v))
+ if (wireCompatVersions[-1].snapshot) {
+ // if the most previous version is a snapshot, we need to connect that version to the
+ // bwc project which will checkout and build that snapshot version
+ ext.projectSubstitutions["org.elasticsearch.distribution.deb:elasticsearch:${wireCompatVersions[-1]}"] = ':distribution:bwc'
+ ext.projectSubstitutions["org.elasticsearch.distribution.rpm:elasticsearch:${wireCompatVersions[-1]}"] = ':distribution:bwc'
+ ext.projectSubstitutions["org.elasticsearch.distribution.zip:elasticsearch:${wireCompatVersions[-1]}"] = ':distribution:bwc'
+ }
+ project.afterEvaluate {
+ configurations.all {
+ resolutionStrategy.dependencySubstitution { DependencySubstitutions subs ->
+ projectSubstitutions.each { k,v ->
+ subs.substitute(subs.module(k)).with(subs.project(v))
+ }
}
}
}
@@ -341,3 +401,17 @@ task run(type: Run) {
group = 'Verification'
impliesSubProjects = true
}
+
+/* Remove assemble on all qa projects because we don't need to publish
+ * artifacts for them. */
+gradle.projectsEvaluated {
+ subprojects {
+ if (project.path.startsWith(':qa')) {
+ Task assemble = project.tasks.findByName('assemble')
+ if (assemble) {
+ project.tasks.remove(assemble)
+ project.build.dependsOn.remove('assemble')
+ }
+ }
+ }
+}
diff --git a/buildSrc/build.gradle b/buildSrc/build.gradle
index 1be5020f4f813..73115aab88fd3 100644
--- a/buildSrc/build.gradle
+++ b/buildSrc/build.gradle
@@ -23,14 +23,12 @@ apply plugin: 'groovy'
group = 'org.elasticsearch.gradle'
-// TODO: remove this when upgrading to a version that supports ProgressLogger
-// gradle 2.14 made internal apis unavailable to plugins, and gradle considered
-// ProgressLogger to be an internal api. Until this is made available again,
-// we can't upgrade without losing our nice progress logging
-// NOTE that this check duplicates that in BuildPlugin, but we need to check
-// early here before trying to compile the broken classes in buildSrc
-if (GradleVersion.current() != GradleVersion.version('2.13')) {
- throw new GradleException('Gradle 2.13 is required to build elasticsearch')
+if (GradleVersion.current() < GradleVersion.version('3.3')) {
+ throw new GradleException('Gradle 3.3+ is required to build elasticsearch')
+}
+
+if (JavaVersion.current() < JavaVersion.VERSION_1_8) {
+ throw new GradleException('Java 1.8 is required to build elasticsearch gradle tools')
}
if (project == rootProject) {
@@ -94,12 +92,18 @@ dependencies {
compile 'com.netflix.nebula:gradle-info-plugin:3.0.3'
compile 'org.eclipse.jgit:org.eclipse.jgit:3.2.0.201312181205-r'
compile 'com.perforce:p4java:2012.3.551082' // THIS IS SUPPOSED TO BE OPTIONAL IN THE FUTURE....
- compile 'de.thetaphi:forbiddenapis:2.2'
- compile 'com.bmuschko:gradle-nexus-plugin:2.3.1'
+ compile 'de.thetaphi:forbiddenapis:2.3'
compile 'org.apache.rat:apache-rat:0.11'
- compile 'ru.vyarus:gradle-animalsniffer-plugin:1.0.1'
+ compile "org.elasticsearch:jna:4.4.0-1"
}
+// Gradle 2.14+ removed ProgressLogger(-Factory) classes from the public APIs
+// Use logging dependency instead
+
+dependencies {
+ compileOnly "org.gradle:gradle-logging:${GradleVersion.current().getVersion()}"
+ compile 'ru.vyarus:gradle-animalsniffer-plugin:1.2.0' // Gradle 2.14 requires a version > 1.0.1
+}
/*****************************************************************************
* Bootstrap repositories *
@@ -108,11 +112,10 @@ dependencies {
if (project == rootProject) {
repositories {
- mavenCentral()
- maven {
- name 'sonatype-snapshots'
- url "https://oss.sonatype.org/content/repositories/snapshots/"
+ if (System.getProperty("repos.mavenLocal") != null) {
+ mavenLocal()
}
+ mavenCentral()
}
test.exclude 'org/elasticsearch/test/NamingConventionsCheckBadClasses*'
}
@@ -154,4 +157,11 @@ if (project != rootProject) {
testClass = 'org.elasticsearch.test.NamingConventionsCheckBadClasses$UnitTestCase'
integTestClass = 'org.elasticsearch.test.NamingConventionsCheckBadClasses$IntegTestCase'
}
+
+ task namingConventionsMain(type: org.elasticsearch.gradle.precommit.NamingConventionsTask) {
+ checkForTestsInMain = true
+ testClass = namingConventions.testClass
+ integTestClass = namingConventions.integTestClass
+ }
+ precommit.dependsOn namingConventionsMain
}
diff --git a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy
index e2230b116c714..c811a4f6c268e 100644
--- a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy
+++ b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy
@@ -33,7 +33,7 @@ class RandomizedTestingPlugin implements Plugin {
]
RandomizedTestingTask newTestTask = tasks.create(properties)
newTestTask.classpath = oldTestTask.classpath
- newTestTask.testClassesDir = oldTestTask.testClassesDir
+ newTestTask.testClassesDir = oldTestTask.project.sourceSets.test.output.classesDir
// hack so check task depends on custom test
Task checkTask = tasks.findByPath('check')
diff --git a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingTask.groovy b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingTask.groovy
index b28e7210ea41d..e24c226837d26 100644
--- a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingTask.groovy
+++ b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingTask.groovy
@@ -19,7 +19,7 @@ import org.gradle.api.tasks.Optional
import org.gradle.api.tasks.TaskAction
import org.gradle.api.tasks.util.PatternFilterable
import org.gradle.api.tasks.util.PatternSet
-import org.gradle.logging.ProgressLoggerFactory
+import org.gradle.internal.logging.progress.ProgressLoggerFactory
import org.gradle.util.ConfigureUtil
import javax.inject.Inject
@@ -69,6 +69,10 @@ class RandomizedTestingTask extends DefaultTask {
@Input
String ifNoTests = 'ignore'
+ @Optional
+ @Input
+ String onNonEmptyWorkDirectory = 'fail'
+
TestLoggingConfiguration testLoggingConfig = new TestLoggingConfiguration()
BalancersConfiguration balancersConfig = new BalancersConfiguration(task: this)
@@ -81,6 +85,7 @@ class RandomizedTestingTask extends DefaultTask {
String argLine = null
Map systemProperties = new HashMap<>()
+ Map environmentVariables = new HashMap<>()
PatternFilterable patternSet = new PatternSet()
RandomizedTestingTask() {
@@ -91,7 +96,7 @@ class RandomizedTestingTask extends DefaultTask {
@Inject
ProgressLoggerFactory getProgressLoggerFactory() {
- throw new UnsupportedOperationException();
+ throw new UnsupportedOperationException()
}
void jvmArgs(Iterable arguments) {
@@ -106,6 +111,10 @@ class RandomizedTestingTask extends DefaultTask {
systemProperties.put(property, value)
}
+ void environment(String key, Object value) {
+ environmentVariables.put(key, value)
+ }
+
void include(String... includes) {
this.patternSet.include(includes);
}
@@ -194,7 +203,9 @@ class RandomizedTestingTask extends DefaultTask {
haltOnFailure: true, // we want to capture when a build failed, but will decide whether to rethrow later
shuffleOnSlave: shuffleOnSlave,
leaveTemporary: leaveTemporary,
- ifNoTests: ifNoTests
+ ifNoTests: ifNoTests,
+ onNonEmptyWorkDirectory: onNonEmptyWorkDirectory,
+ newenvironment: true
]
DefaultLogger listener = null
@@ -250,6 +261,9 @@ class RandomizedTestingTask extends DefaultTask {
for (Map.Entry prop : systemProperties) {
sysproperty key: prop.getKey(), value: prop.getValue().toString()
}
+ for (Map.Entry envvar : environmentVariables) {
+ env key: envvar.getKey(), value: envvar.getValue().toString()
+ }
makeListeners()
}
} catch (BuildException e) {
diff --git a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/TestProgressLogger.groovy b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/TestProgressLogger.groovy
index 14f5d476be3cb..da25afa938916 100644
--- a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/TestProgressLogger.groovy
+++ b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/TestProgressLogger.groovy
@@ -25,8 +25,8 @@ import com.carrotsearch.ant.tasks.junit4.events.aggregated.AggregatedStartEvent
import com.carrotsearch.ant.tasks.junit4.events.aggregated.AggregatedSuiteResultEvent
import com.carrotsearch.ant.tasks.junit4.events.aggregated.AggregatedTestResultEvent
import com.carrotsearch.ant.tasks.junit4.listeners.AggregatedEventListener
-import org.gradle.logging.ProgressLogger
-import org.gradle.logging.ProgressLoggerFactory
+import org.gradle.internal.logging.progress.ProgressLogger
+import org.gradle.internal.logging.progress.ProgressLoggerFactory
import static com.carrotsearch.ant.tasks.junit4.FormattingUtils.formatDurationInSeconds
import static com.carrotsearch.ant.tasks.junit4.events.aggregated.TestStatus.ERROR
@@ -77,7 +77,7 @@ class TestProgressLogger implements AggregatedEventListener {
/** Have we finished a whole suite yet? */
volatile boolean suiteFinished = false
/* Note that we probably overuse volatile here but it isn't hurting us and
- lets us move things around without worying about breaking things. */
+ lets us move things around without worrying about breaking things. */
@Subscribe
void onStart(AggregatedStartEvent e) throws IOException {
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy
index 4d7bee866b824..c51fc0229eebe 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy
@@ -18,9 +18,12 @@
*/
package org.elasticsearch.gradle
+import com.carrotsearch.gradle.junit4.RandomizedTestingTask
import nebula.plugin.extraconfigurations.ProvidedBasePlugin
+import org.apache.tools.ant.taskdefs.condition.Os
import org.elasticsearch.gradle.precommit.PrecommitTasks
import org.gradle.api.GradleException
+import org.gradle.api.InvalidUserDataException
import org.gradle.api.JavaVersion
import org.gradle.api.Plugin
import org.gradle.api.Project
@@ -32,19 +35,20 @@ import org.gradle.api.artifacts.ModuleVersionIdentifier
import org.gradle.api.artifacts.ProjectDependency
import org.gradle.api.artifacts.ResolvedArtifact
import org.gradle.api.artifacts.dsl.RepositoryHandler
-import org.gradle.api.artifacts.maven.MavenPom
+import org.gradle.api.file.CopySpec
+import org.gradle.api.plugins.JavaPlugin
import org.gradle.api.publish.maven.MavenPublication
import org.gradle.api.publish.maven.plugins.MavenPublishPlugin
import org.gradle.api.publish.maven.tasks.GenerateMavenPom
import org.gradle.api.tasks.bundling.Jar
import org.gradle.api.tasks.compile.JavaCompile
+import org.gradle.api.tasks.javadoc.Javadoc
import org.gradle.internal.jvm.Jvm
import org.gradle.process.ExecResult
import org.gradle.util.GradleVersion
import java.time.ZoneOffset
import java.time.ZonedDateTime
-
/**
* Encapsulates build configuration for elasticsearch projects.
*/
@@ -54,6 +58,11 @@ class BuildPlugin implements Plugin {
@Override
void apply(Project project) {
+ if (project.pluginManager.hasPlugin('elasticsearch.standalone-rest-test')) {
+ throw new InvalidUserDataException('elasticsearch.standalone-test, '
+ + 'elasticearch.standalone-rest-test, and elasticsearch.build '
+ + 'are mutually exclusive')
+ }
project.pluginManager.apply('java')
project.pluginManager.apply('carrotsearch.randomized-testing')
// these plugins add lots of info to our jars
@@ -63,7 +72,6 @@ class BuildPlugin implements Plugin {
project.pluginManager.apply('nebula.info-java')
project.pluginManager.apply('nebula.info-scm')
project.pluginManager.apply('nebula.info-jar')
- project.pluginManager.apply('com.bmuschko.nexus')
project.pluginManager.apply(ProvidedBasePlugin)
globalBuildInfo(project)
@@ -71,6 +79,8 @@ class BuildPlugin implements Plugin {
configureConfigurations(project)
project.ext.versions = VersionProperties.versions
configureCompile(project)
+ configureJavadoc(project)
+ configureSourcesJar(project)
configurePomGeneration(project)
configureTest(project)
@@ -113,7 +123,7 @@ class BuildPlugin implements Plugin {
}
// enforce gradle version
- GradleVersion minGradle = GradleVersion.version('2.13')
+ GradleVersion minGradle = GradleVersion.version('3.3')
if (GradleVersion.current() < minGradle) {
throw new GradleException("${minGradle} or above is required to build elasticsearch")
}
@@ -157,7 +167,7 @@ class BuildPlugin implements Plugin {
private static String findJavaHome() {
String javaHome = System.getenv('JAVA_HOME')
if (javaHome == null) {
- if (System.getProperty("idea.active") != null) {
+ if (System.getProperty("idea.active") != null || System.getProperty("eclipse.launcher") != null) {
// intellij doesn't set JAVA_HOME, so we use the jdk gradle was run with
javaHome = Jvm.current().javaHome
} else {
@@ -194,19 +204,28 @@ class BuildPlugin implements Plugin {
/** Runs the given javascript using jjs from the jdk, and returns the output */
private static String runJavascript(Project project, String javaHome, String script) {
- File tmpScript = File.createTempFile('es-gradle-tmp', '.js')
- tmpScript.setText(script, 'UTF-8')
- ByteArrayOutputStream output = new ByteArrayOutputStream()
+ ByteArrayOutputStream stdout = new ByteArrayOutputStream()
+ ByteArrayOutputStream stderr = new ByteArrayOutputStream()
+ if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+ // gradle/groovy does not properly escape the double quote for windows
+ script = script.replace('"', '\\"')
+ }
+ File jrunscriptPath = new File(javaHome, 'bin/jrunscript')
ExecResult result = project.exec {
- executable = new File(javaHome, 'bin/jjs')
- args tmpScript.toString()
- standardOutput = output
- errorOutput = new ByteArrayOutputStream()
- ignoreExitValue = true // we do not fail so we can first cleanup the tmp file
+ executable = jrunscriptPath
+ args '-e', script
+ standardOutput = stdout
+ errorOutput = stderr
+ ignoreExitValue = true
}
- java.nio.file.Files.delete(tmpScript.toPath())
- result.assertNormalExitValue()
- return output.toString('UTF-8').trim()
+ if (result.exitValue != 0) {
+ project.logger.error("STDOUT:")
+ stdout.toString('UTF-8').eachLine { line -> project.logger.error(line) }
+ project.logger.error("STDERR:")
+ stderr.toString('UTF-8').eachLine { line -> project.logger.error(line) }
+ result.rethrowFailure()
+ }
+ return stdout.toString('UTF-8').trim()
}
/** Return the configuration name used for finding transitive deps of the given dependency. */
@@ -267,14 +286,9 @@ class BuildPlugin implements Plugin {
project.configurations.compile.dependencies.all(disableTransitiveDeps)
project.configurations.testCompile.dependencies.all(disableTransitiveDeps)
project.configurations.provided.dependencies.all(disableTransitiveDeps)
-
- // add exclusions to the pom directly, for each of the transitive deps of this project's deps
- project.modifyPom { MavenPom pom ->
- pom.withXml(fixupDependencies(project))
- }
}
- /** Adds repositores used by ES dependencies */
+ /** Adds repositories used by ES dependencies */
static void configureRepositories(Project project) {
RepositoryHandler repos = project.repositories
if (System.getProperty("repos.mavenlocal") != null) {
@@ -284,10 +298,6 @@ class BuildPlugin implements Plugin {
repos.mavenLocal()
}
repos.mavenCentral()
- repos.maven {
- name 'sonatype-snapshots'
- url 'http://oss.sonatype.org/content/repositories/snapshots/'
- }
String luceneVersion = VersionProperties.lucene
if (luceneVersion.contains('-snapshot')) {
// extract the revision number from the version with a regex matcher
@@ -303,12 +313,14 @@ class BuildPlugin implements Plugin {
* Returns a closure which can be used with a MavenPom for fixing problems with gradle generated poms.
*
*
- * Remove transitive dependencies (using wildcard exclusions, fixed in gradle 2.14)
- * Set compile time deps back to compile from runtime (known issue with maven-publish plugin)
+ * Remove transitive dependencies. We currently exclude all artifacts explicitly instead of using wildcards
+ * as Ivy incorrectly translates POMs with * excludes to Ivy XML with * excludes which results in the main artifact
+ * being excluded as well (see https://issues.apache.org/jira/browse/IVY-1531). Note that Gradle 2.14+ automatically
+ * translates non-transitive dependencies to * excludes. We should revisit this when upgrading Gradle.
+ * Set compile time deps back to compile from runtime (known issue with maven-publish plugin)
*
*/
private static Closure fixupDependencies(Project project) {
- // TODO: remove this when enforcing gradle 2.14+, it now properly handles exclusions
return { XmlProvider xml ->
// first find if we have dependencies at all, and grab the node
NodeList depsNodes = xml.asNode().get('dependencies')
@@ -331,6 +343,13 @@ class BuildPlugin implements Plugin {
depNode.scope*.value = 'compile'
}
+ // remove any exclusions added by gradle, they contain wildcards and systems like ivy have bugs with wildcards
+ // see https://github.com/elastic/elasticsearch/issues/24490
+ NodeList exclusionsNode = depNode.get('exclusions')
+ if (exclusionsNode.size() > 0) {
+ depNode.remove(exclusionsNode.get(0))
+ }
+
// collect the transitive deps now that we know what this dependency is
String depConfig = transitiveDepConfigName(groupId, artifactId, version)
Configuration configuration = project.configurations.findByName(depConfig)
@@ -343,10 +362,19 @@ class BuildPlugin implements Plugin {
continue
}
- // we now know we have something to exclude, so add a wildcard exclusion element
- Node exclusion = depNode.appendNode('exclusions').appendNode('exclusion')
- exclusion.appendNode('groupId', '*')
- exclusion.appendNode('artifactId', '*')
+ // we now know we have something to exclude, so add exclusions for all artifacts except the main one
+ Node exclusions = depNode.appendNode('exclusions')
+ for (ResolvedArtifact artifact : artifacts) {
+ ModuleVersionIdentifier moduleVersionIdentifier = artifact.moduleVersion.id;
+ String depGroupId = moduleVersionIdentifier.group
+ String depArtifactId = moduleVersionIdentifier.name
+ // add exclusions for all artifacts except the main one
+ if (depGroupId != groupId || depArtifactId != artifactId) {
+ Node exclusion = exclusions.appendNode('exclusion')
+ exclusion.appendNode('groupId', depGroupId)
+ exclusion.appendNode('artifactId', depArtifactId)
+ }
+ }
}
}
}
@@ -366,8 +394,11 @@ class BuildPlugin implements Plugin {
project.tasks.withType(GenerateMavenPom.class) { GenerateMavenPom t ->
// place the pom next to the jar it is for
t.destination = new File(project.buildDir, "distributions/${project.archivesBaseName}-${project.version}.pom")
- // build poms with assemble
- project.assemble.dependsOn(t)
+ // build poms with assemble (if the assemble task exists)
+ Task assemble = project.tasks.findByName('assemble')
+ if (assemble) {
+ assemble.dependsOn(t)
+ }
}
}
}
@@ -376,8 +407,9 @@ class BuildPlugin implements Plugin {
static void configureCompile(Project project) {
project.ext.compactProfile = 'compact3'
project.afterEvaluate {
- // fail on all javac warnings
project.tasks.withType(JavaCompile) {
+ File gradleJavaHome = Jvm.current().javaHome
+ // we fork because compiling lots of different classes in a shared jvm can eventually trigger GC overhead limitations
options.fork = true
options.forkOptions.executable = new File(project.javaHome, 'bin/javac')
options.forkOptions.memoryMaximumSize = "1g"
@@ -394,6 +426,7 @@ class BuildPlugin implements Plugin {
* -serial because we don't use java serialization.
*/
// don't even think about passing args with -J-xxx, oracle will ask you to submit a bug report :)
+ // fail on all javac warnings
options.compilerArgs << '-Werror' << '-Xlint:all,-path,-serial,-options,-deprecation' << '-Xdoclint:all' << '-Xdoclint:-missing'
// either disable annotation processor completely (default) or allow to enable them if an annotation processor is explicitly defined
@@ -402,21 +435,74 @@ class BuildPlugin implements Plugin {
}
options.encoding = 'UTF-8'
- //options.incremental = true
+ options.incremental = true
if (project.javaVersion == JavaVersion.VERSION_1_9) {
- // hack until gradle supports java 9's new "-release" arg
+ // hack until gradle supports java 9's new "--release" arg
assert minimumJava == JavaVersion.VERSION_1_8
- options.compilerArgs << '-release' << '8'
- project.sourceCompatibility = null
- project.targetCompatibility = null
+ options.compilerArgs << '--release' << '8'
+ if (GradleVersion.current().getBaseVersion() < GradleVersion.version("4.1")) {
+ // this hack is not needed anymore since Gradle 4.1, see https://github.com/gradle/gradle/pull/2474
+ doFirst {
+ sourceCompatibility = null
+ targetCompatibility = null
+ }
+ }
}
}
}
}
- /** Adds additional manifest info to jars, and adds source and javadoc jars */
+ static void configureJavadoc(Project project) {
+ String artifactsHost = VersionProperties.elasticsearch.endsWith("-SNAPSHOT") ? "https://snapshots.elastic.co" : "https://artifacts.elastic.co"
+ project.afterEvaluate {
+ project.tasks.withType(Javadoc) {
+ executable = new File(project.javaHome, 'bin/javadoc')
+ }
+ /*
+ * Order matters, the linksOffline for org.elasticsearch:elasticsearch must be the last one
+ * or all the links for the other packages (e.g org.elasticsearch.client) will point to core rather than their own artifacts
+ */
+ Closure sortClosure = { a, b -> b.group <=> a.group }
+ Closure depJavadocClosure = { dep ->
+ if (dep.group != null && dep.group.startsWith('org.elasticsearch')) {
+ String substitution = project.ext.projectSubstitutions.get("${dep.group}:${dep.name}:${dep.version}")
+ if (substitution != null) {
+ project.javadoc.dependsOn substitution + ':javadoc'
+ String artifactPath = dep.group.replaceAll('\\.', '/') + '/' + dep.name.replaceAll('\\.', '/') + '/' + dep.version
+ project.javadoc.options.linksOffline artifactsHost + "/javadoc/" + artifactPath, "${project.project(substitution).buildDir}/docs/javadoc/"
+ }
+ }
+ }
+ project.configurations.compile.dependencies.findAll().toSorted(sortClosure).each(depJavadocClosure)
+ project.configurations.provided.dependencies.findAll().toSorted(sortClosure).each(depJavadocClosure)
+ }
+ configureJavadocJar(project)
+ }
+
+ /** Adds a javadocJar task to generate a jar containing javadocs. */
+ static void configureJavadocJar(Project project) {
+ Jar javadocJarTask = project.task('javadocJar', type: Jar)
+ javadocJarTask.classifier = 'javadoc'
+ javadocJarTask.group = 'build'
+ javadocJarTask.description = 'Assembles a jar containing javadocs.'
+ javadocJarTask.from(project.tasks.getByName(JavaPlugin.JAVADOC_TASK_NAME))
+ project.assemble.dependsOn(javadocJarTask)
+ }
+
+ static void configureSourcesJar(Project project) {
+ Jar sourcesJarTask = project.task('sourcesJar', type: Jar)
+ sourcesJarTask.classifier = 'sources'
+ sourcesJarTask.group = 'build'
+ sourcesJarTask.description = 'Assembles a jar containing source files.'
+ sourcesJarTask.from(project.sourceSets.main.allSource)
+ project.assemble.dependsOn(sourcesJarTask)
+ }
+
+ /** Adds additional manifest info to jars */
static void configureJars(Project project) {
+ project.ext.licenseFile = null
+ project.ext.noticeFile = null
project.tasks.withType(Jar) { Jar jarTask ->
// we put all our distributable files under distributions
jarTask.destinationDir = new File(project.buildDir, 'distributions')
@@ -437,7 +523,21 @@ class BuildPlugin implements Plugin {
'Build-Java-Version': project.javaVersion)
if (jarTask.manifest.attributes.containsKey('Change') == false) {
logger.warn('Building without git revision id.')
- jarTask.manifest.attributes('Change': 'N/A')
+ jarTask.manifest.attributes('Change': 'Unknown')
+ }
+ }
+ // add license/notice files
+ project.afterEvaluate {
+ if (project.licenseFile == null || project.noticeFile == null) {
+ throw new GradleException("Must specify license and notice file for project ${project.path}")
+ }
+ jarTask.into('META-INF') {
+ from(project.licenseFile.parent) {
+ include project.licenseFile.name
+ }
+ from(project.noticeFile.parent) {
+ include project.noticeFile.name
+ }
}
}
}
@@ -449,16 +549,12 @@ class BuildPlugin implements Plugin {
jvm "${project.javaHome}/bin/java"
parallelism System.getProperty('tests.jvms', 'auto')
ifNoTests 'fail'
+ onNonEmptyWorkDirectory 'wipe'
leaveTemporary true
// TODO: why are we not passing maxmemory to junit4?
jvmArg '-Xmx' + System.getProperty('tests.heap.size', '512m')
jvmArg '-Xms' + System.getProperty('tests.heap.size', '512m')
- if (JavaVersion.current().isJava7()) {
- // some tests need a large permgen, but that only exists on java 7
- jvmArg '-XX:MaxPermSize=128m'
- }
- jvmArg '-XX:MaxDirectMemorySize=512m'
jvmArg '-XX:+HeapDumpOnOutOfMemoryError'
File heapdumpDir = new File(project.buildDir, 'heapdump')
heapdumpDir.mkdirs()
@@ -472,6 +568,8 @@ class BuildPlugin implements Plugin {
systemProperty 'tests.artifact', project.name
systemProperty 'tests.task', path
systemProperty 'tests.security.manager', 'true'
+ // Breaking change in JDK-9, revert to JDK-8 behavior for now, see https://github.com/elastic/elasticsearch/issues/21534
+ systemProperty 'jdk.io.permissionsUseCanonicalPath', 'true'
systemProperty 'jna.nosys', 'true'
// default test sysprop values
systemProperty 'tests.ifNoTests', 'fail'
@@ -484,11 +582,9 @@ class BuildPlugin implements Plugin {
}
}
- // System assertions (-esa) are disabled for now because of what looks like a
- // JDK bug triggered by Groovy on JDK7. We should look at re-enabling system
- // assertions when we upgrade to a new version of Groovy (currently 2.4.4) or
- // require JDK8. See https://issues.apache.org/jira/browse/GROOVY-7528.
- enableSystemAssertions false
+ boolean assertionsEnabled = Boolean.parseBoolean(System.getProperty('tests.asserts', 'true'))
+ enableSystemAssertions assertionsEnabled
+ enableAssertions assertionsEnabled
testLogging {
showNumFailuresAtEnd 25
@@ -529,11 +625,22 @@ class BuildPlugin implements Plugin {
/** Configures the test task */
static Task configureTest(Project project) {
- Task test = project.tasks.getByName('test')
+ RandomizedTestingTask test = project.tasks.getByName('test')
test.configure(commonTestConfig(project))
test.configure {
include '**/*Tests.class'
}
+
+ // Add a method to create additional unit tests for a project, which will share the same
+ // randomized testing setup, but by default run no tests.
+ project.extensions.add('additionalTest', { String name, Closure config ->
+ RandomizedTestingTask additionalTest = project.tasks.create(name, RandomizedTestingTask.class)
+ additionalTest.classpath = test.classpath
+ additionalTest.testClassesDir = test.testClassesDir
+ additionalTest.configure(commonTestConfig(project))
+ additionalTest.configure(config)
+ test.dependsOn(additionalTest)
+ });
return test
}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/NoticeTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/NoticeTask.groovy
new file mode 100644
index 0000000000000..928298db7bfc2
--- /dev/null
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/NoticeTask.groovy
@@ -0,0 +1,99 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.gradle
+
+import org.gradle.api.DefaultTask
+import org.gradle.api.Project
+import org.gradle.api.artifacts.Configuration
+import org.gradle.api.tasks.InputFile
+import org.gradle.api.tasks.OutputFile
+import org.gradle.api.tasks.TaskAction
+
+/**
+ * A task to create a notice file which includes dependencies' notices.
+ */
+public class NoticeTask extends DefaultTask {
+
+ @InputFile
+ File inputFile = project.rootProject.file('NOTICE.txt')
+
+ @OutputFile
+ File outputFile = new File(project.buildDir, "notices/${name}/NOTICE.txt")
+
+ /** Directories to include notices from */
+ private List licensesDirs = new ArrayList<>()
+
+ public NoticeTask() {
+ description = 'Create a notice file from dependencies'
+ // Default licenses directory is ${projectDir}/licenses (if it exists)
+ File licensesDir = new File(project.projectDir, 'licenses')
+ if (licensesDir.exists()) {
+ licensesDirs.add(licensesDir)
+ }
+ }
+
+ /** Add notices from the specified directory. */
+ public void licensesDir(File licensesDir) {
+ licensesDirs.add(licensesDir)
+ }
+
+ @TaskAction
+ public void generateNotice() {
+ StringBuilder output = new StringBuilder()
+ output.append(inputFile.getText('UTF-8'))
+ output.append('\n\n')
+ // This is a map rather than a set so that the sort order is the 3rd
+ // party component names, unaffected by the full path to the various files
+ Map seen = new TreeMap<>()
+ for (File licensesDir : licensesDirs) {
+ licensesDir.eachFileMatch({ it ==~ /.*-NOTICE\.txt/ }) { File file ->
+ String name = file.name.substring(0, file.name.length() - '-NOTICE.txt'.length())
+ if (seen.containsKey(name)) {
+ File prevFile = seen.get(name)
+ if (prevFile.text != file.text) {
+ throw new RuntimeException("Two different notices exist for dependency '" +
+ name + "': " + prevFile + " and " + file)
+ }
+ } else {
+ seen.put(name, file)
+ }
+ }
+ }
+ for (Map.Entry entry : seen.entrySet()) {
+ String name = entry.getKey()
+ File file = entry.getValue()
+ appendFile(file, name, 'NOTICE', output)
+ appendFile(new File(file.parentFile, "${name}-LICENSE.txt"), name, 'LICENSE', output)
+ }
+ outputFile.setText(output.toString(), 'UTF-8')
+ }
+
+ static void appendFile(File file, String name, String type, StringBuilder output) {
+ String text = file.getText('UTF-8')
+ if (text.trim().isEmpty()) {
+ return
+ }
+ output.append('================================================================================\n')
+ output.append("${name} ${type}\n")
+ output.append('================================================================================\n')
+ output.append(text)
+ output.append('\n\n')
+ }
+}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/Version.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/Version.groovy
new file mode 100644
index 0000000000000..ace8dd34fe9e9
--- /dev/null
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/Version.groovy
@@ -0,0 +1,84 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.gradle
+
+import groovy.transform.Sortable
+
+/**
+ * Encapsulates comparison and printing logic for an x.y.z version.
+ */
+@Sortable(includes=['id'])
+public class Version {
+
+ final int major
+ final int minor
+ final int bugfix
+ final int id
+ final boolean snapshot
+ /**
+ * Is the vesion listed as {@code _UNRELEASED} in Version.java.
+ */
+ final boolean unreleased
+
+ public Version(int major, int minor, int bugfix, boolean snapshot,
+ boolean unreleased) {
+ this.major = major
+ this.minor = minor
+ this.bugfix = bugfix
+ this.snapshot = snapshot
+ this.id = major * 100000 + minor * 1000 + bugfix * 10 +
+ (snapshot ? 1 : 0)
+ this.unreleased = unreleased
+ }
+
+ public static Version fromString(String s) {
+ String[] parts = s.split('\\.')
+ String bugfix = parts[2]
+ boolean snapshot = false
+ if (bugfix.contains('-')) {
+ snapshot = bugfix.endsWith('-SNAPSHOT')
+ bugfix = bugfix.split('-')[0]
+ }
+ return new Version(parts[0] as int, parts[1] as int, bugfix as int,
+ snapshot, false)
+ }
+
+ @Override
+ public String toString() {
+ String snapshotStr = snapshot ? '-SNAPSHOT' : ''
+ return "${major}.${minor}.${bugfix}${snapshotStr}"
+ }
+
+ public boolean before(String compareTo) {
+ return id < fromString(compareTo).id
+ }
+
+ public boolean onOrBefore(String compareTo) {
+ return id <= fromString(compareTo).id
+ }
+
+ public boolean onOrAfter(String compareTo) {
+ return id >= fromString(compareTo).id
+ }
+
+ public boolean after(String compareTo) {
+ return id > fromString(compareTo).id
+ }
+}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/DocsTestPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/DocsTestPlugin.groovy
index 0fefecc144646..d2802638ce512 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/DocsTestPlugin.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/DocsTestPlugin.groovy
@@ -30,7 +30,11 @@ public class DocsTestPlugin extends RestTestPlugin {
@Override
public void apply(Project project) {
+ project.pluginManager.apply('elasticsearch.standalone-rest-test')
super.apply(project)
+ // Docs are published separately so no need to assemble
+ project.tasks.remove(project.assemble)
+ project.build.dependsOn.remove('assemble')
Map defaultSubstitutions = [
/* These match up with the asciidoc syntax for substitutions but
* the values may differ. In particular {version} needs to resolve
@@ -38,7 +42,7 @@ public class DocsTestPlugin extends RestTestPlugin {
* the last released version for docs. */
'\\{version\\}':
VersionProperties.elasticsearch.replace('-SNAPSHOT', ''),
- '\\{lucene_version\\}' : VersionProperties.lucene,
+ '\\{lucene_version\\}' : VersionProperties.lucene.replaceAll('-snapshot-\\w+$', ''),
]
Task listSnippets = project.tasks.create('listSnippets', SnippetsTask)
listSnippets.group 'Docs'
@@ -53,17 +57,7 @@ public class DocsTestPlugin extends RestTestPlugin {
'List snippets that probably should be marked // CONSOLE'
listConsoleCandidates.defaultSubstitutions = defaultSubstitutions
listConsoleCandidates.perSnippet {
- if (
- it.console != null // Already marked, nothing to do
- || it.testResponse // It is a response
- ) {
- return
- }
- if ( // js almost always should be `// CONSOLE`
- it.language == 'js' ||
- // snippets containing `curl` *probably* should
- // be `// CONSOLE`
- it.curl) {
+ if (RestTestsFromSnippetsTask.isConsoleCandidate(it)) {
println(it.toString())
}
}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy
index dc4e6f5f70af4..2ec12fe341f6b 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy
@@ -41,6 +41,16 @@ public class RestTestsFromSnippetsTask extends SnippetsTask {
@Input
Map setups = new HashMap()
+ /**
+ * A list of files that contain snippets that *probably* should be
+ * converted to `// CONSOLE` but have yet to be converted. If a file is in
+ * this list and doesn't contain unconverted snippets this task will fail.
+ * If there are unconverted snippets not in this list then this task will
+ * fail. All files are paths relative to the docs dir.
+ */
+ @Input
+ List expectedUnconvertedCandidates = []
+
/**
* Root directory of the tests being generated. To make rest tests happy
* we generate them in a testRoot() which is contained in this directory.
@@ -56,6 +66,7 @@ public class RestTestsFromSnippetsTask extends SnippetsTask {
TestBuilder builder = new TestBuilder()
doFirst { outputRoot().delete() }
perSnippet builder.&handleSnippet
+ doLast builder.&checkUnconverted
doLast builder.&finishLastTest
}
@@ -67,6 +78,27 @@ public class RestTestsFromSnippetsTask extends SnippetsTask {
return new File(testRoot, '/rest-api-spec/test')
}
+ /**
+ * Is this snippet a candidate for conversion to `// CONSOLE`?
+ */
+ static isConsoleCandidate(Snippet snippet) {
+ /* Snippets that are responses or already marked as `// CONSOLE` or
+ * `// NOTCONSOLE` are not candidates. */
+ if (snippet.console != null || snippet.testResponse) {
+ return false
+ }
+ /* js snippets almost always should be marked with `// CONSOLE`. js
+ * snippets that shouldn't be marked `// CONSOLE`, like examples for
+ * js client, should always be marked with `// NOTCONSOLE`.
+ *
+ * `sh` snippets that contain `curl` almost always should be marked
+ * with `// CONSOLE`. In the exceptionally rare cases where they are
+ * not communicating with Elasticsearch, like the xamples in the ec2
+ * and gce discovery plugins, the snippets should be marked
+ * `// NOTCONSOLE`. */
+ return snippet.language == 'js' || snippet.curl
+ }
+
private class TestBuilder {
private static final String SYNTAX = {
String method = /(?GET|PUT|POST|HEAD|OPTIONS|DELETE)/
@@ -88,17 +120,34 @@ public class RestTestsFromSnippetsTask extends SnippetsTask {
*/
PrintWriter current
+ /**
+ * Files containing all snippets that *probably* should be converted
+ * to `// CONSOLE` but have yet to be converted. All files are paths
+ * relative to the docs dir.
+ */
+ Set unconvertedCandidates = new HashSet<>()
+
+ /**
+ * The last non-TESTRESPONSE snippet.
+ */
+ Snippet previousTest
+
/**
* Called each time a snippet is encountered. Tracks the snippets and
* calls buildTest to actually build the test.
*/
void handleSnippet(Snippet snippet) {
+ if (RestTestsFromSnippetsTask.isConsoleCandidate(snippet)) {
+ unconvertedCandidates.add(snippet.path.toString()
+ .replace('\\', '/'))
+ }
if (BAD_LANGUAGES.contains(snippet.language)) {
throw new InvalidUserDataException(
"$snippet: Use `js` instead of `${snippet.language}`.")
}
if (snippet.testSetup) {
setup(snippet)
+ previousTest = snippet
return
}
if (snippet.testResponse) {
@@ -107,6 +156,7 @@ public class RestTestsFromSnippetsTask extends SnippetsTask {
}
if (snippet.test || snippet.console) {
test(snippet)
+ previousTest = snippet
return
}
// Must be an unmarked snippet....
@@ -115,13 +165,37 @@ public class RestTestsFromSnippetsTask extends SnippetsTask {
private void test(Snippet test) {
setupCurrent(test)
- if (false == test.continued) {
+ if (test.continued) {
+ /* Catch some difficult to debug errors with // TEST[continued]
+ * and throw a helpful error message. */
+ if (previousTest == null || previousTest.path != test.path) {
+ throw new InvalidUserDataException("// TEST[continued] " +
+ "cannot be on first snippet in a file: $test")
+ }
+ if (previousTest != null && previousTest.testSetup) {
+ throw new InvalidUserDataException("// TEST[continued] " +
+ "cannot immediately follow // TESTSETUP: $test")
+ }
+ } else {
current.println('---')
current.println("\"line_$test.start\":")
+ /* The Elasticsearch test runner doesn't support the warnings
+ * construct unless you output this skip. Since we don't know
+ * if this snippet will use the warnings construct we emit this
+ * warning every time. */
+ current.println(" - skip:")
+ current.println(" features: ")
+ current.println(" - stash_in_key")
+ current.println(" - stash_in_path")
+ current.println(" - stash_path_replace")
+ current.println(" - warnings")
}
if (test.skipTest) {
- current.println(" - skip:")
- current.println(" features: always_skip")
+ if (test.continued) {
+ throw new InvalidUserDataException("Continued snippets "
+ + "can't be skipped")
+ }
+ current.println(" - always_skip")
current.println(" reason: $test.skipTest")
}
if (test.setup != null) {
@@ -148,6 +222,11 @@ public class RestTestsFromSnippetsTask extends SnippetsTask {
def (String path, String query) = pathAndQuery.tokenize('?')
if (path == null) {
path = '' // Catch requests to the root...
+ } else {
+ // Escape some characters that are also escaped by sense
+ path = path.replace('<', '%3C').replace('>', '%3E')
+ path = path.replace('{', '%7B').replace('}', '%7D')
+ path = path.replace('|', '%7C')
}
current.println(" - do:")
if (catchPart != null) {
@@ -250,5 +329,35 @@ public class RestTestsFromSnippetsTask extends SnippetsTask {
current = null
}
}
+
+ void checkUnconverted() {
+ List listedButNotFound = []
+ for (String listed : expectedUnconvertedCandidates) {
+ if (false == unconvertedCandidates.remove(listed)) {
+ listedButNotFound.add(listed)
+ }
+ }
+ String message = ""
+ if (false == listedButNotFound.isEmpty()) {
+ Collections.sort(listedButNotFound)
+ listedButNotFound = listedButNotFound.collect {' ' + it}
+ message += "Expected unconverted snippets but none found in:\n"
+ message += listedButNotFound.join("\n")
+ }
+ if (false == unconvertedCandidates.isEmpty()) {
+ List foundButNotListed =
+ new ArrayList<>(unconvertedCandidates)
+ Collections.sort(foundButNotListed)
+ foundButNotListed = foundButNotListed.collect {' ' + it}
+ if (false == "".equals(message)) {
+ message += "\n"
+ }
+ message += "Unexpected unconverted snippets:\n"
+ message += foundButNotListed.join("\n")
+ }
+ if (false == "".equals(message)) {
+ throw new InvalidUserDataException(message);
+ }
+ }
}
}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/SnippetsTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/SnippetsTask.groovy
index 41f74b45be143..94af22f4aa279 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/SnippetsTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/SnippetsTask.groovy
@@ -39,6 +39,7 @@ public class SnippetsTask extends DefaultTask {
private static final String SKIP = /skip:([^\]]+)/
private static final String SETUP = /setup:([^ \]]+)/
private static final String WARNING = /warning:(.+)/
+ private static final String CAT = /(_cat)/
private static final String TEST_SYNTAX =
/(?:$CATCH|$SUBSTITUTION|$SKIP|(continued)|$SETUP|$WARNING) ?/
@@ -89,6 +90,7 @@ public class SnippetsTask extends DefaultTask {
* tests cleaner.
*/
subst = subst.replace('$body', '\\$body')
+ subst = subst.replace('$_path', '\\$_path')
// \n is a new line....
subst = subst.replace('\\n', '\n')
snippet.contents = snippet.contents.replaceAll(
@@ -221,8 +223,17 @@ public class SnippetsTask extends DefaultTask {
substitutions = []
}
String loc = "$file:$lineNumber"
- parse(loc, matcher.group(2), /$SUBSTITUTION ?/) {
- substitutions.add([it.group(1), it.group(2)])
+ parse(loc, matcher.group(2), /(?:$SUBSTITUTION|$CAT) ?/) {
+ if (it.group(1) != null) {
+ // TESTRESPONSE[s/adsf/jkl/]
+ substitutions.add([it.group(1), it.group(2)])
+ } else if (it.group(3) != null) {
+ // TESTRESPONSE[_cat]
+ substitutions.add(['^', '/'])
+ substitutions.add(['\n$', '\\\\s*/'])
+ substitutions.add(['( +)', '$1\\\\s+'])
+ substitutions.add(['\n', '\\\\s*\n '])
+ }
}
}
return
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginBuildPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginBuildPlugin.groovy
index 0a454ee1006ff..2e11fdc2681bc 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginBuildPlugin.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginBuildPlugin.groovy
@@ -19,6 +19,7 @@
package org.elasticsearch.gradle.plugin
import org.elasticsearch.gradle.BuildPlugin
+import org.elasticsearch.gradle.NoticeTask
import org.elasticsearch.gradle.test.RestIntegTestTask
import org.elasticsearch.gradle.test.RunTask
import org.gradle.api.Project
@@ -49,33 +50,29 @@ public class PluginBuildPlugin extends BuildPlugin {
project.afterEvaluate {
boolean isModule = project.path.startsWith(':modules:')
String name = project.pluginProperties.extension.name
- project.jar.baseName = name
- project.bundlePlugin.baseName = name
+ project.archivesBaseName = name
if (project.pluginProperties.extension.hasClientJar) {
// for plugins which work with the transport client, we copy the jar
// file to a new name, copy the nebula generated pom to the same name,
// and generate a different pom for the zip
- project.signArchives.enabled = false
addClientJarPomGeneration(project)
addClientJarTask(project)
- if (isModule == false) {
- addZipPomGeneration(project)
- }
} else {
// no client plugin, so use the pom file from nebula, without jar, for the zip
project.ext.set("nebulaPublish.maven.jar", false)
}
- project.integTest.dependsOn(project.bundlePlugin)
+ project.integTestCluster.dependsOn(project.bundlePlugin)
project.tasks.run.dependsOn(project.bundlePlugin)
if (isModule) {
- project.integTest.clusterConfig.module(project)
+ project.integTestCluster.module(project)
project.tasks.run.clusterConfig.module(project)
} else {
- project.integTest.clusterConfig.plugin(project.path)
+ project.integTestCluster.plugin(project.path)
project.tasks.run.clusterConfig.plugin(project.path)
addZipPomGeneration(project)
+ addNoticeGeneration(project)
}
project.namingConventions {
@@ -99,7 +96,7 @@ public class PluginBuildPlugin extends BuildPlugin {
provided "com.vividsolutions:jts:${project.versions.jts}"
provided "org.apache.logging.log4j:log4j-api:${project.versions.log4j}"
provided "org.apache.logging.log4j:log4j-core:${project.versions.log4j}"
- provided "net.java.dev.jna:jna:${project.versions.jna}"
+ provided "org.elasticsearch:jna:${project.versions.jna}"
}
}
@@ -123,12 +120,15 @@ public class PluginBuildPlugin extends BuildPlugin {
// add the plugin properties and metadata to test resources, so unit tests can
// know about the plugin (used by test security code to statically initialize the plugin in unit tests)
SourceSet testSourceSet = project.sourceSets.test
- testSourceSet.output.dir(buildProperties.generatedResourcesDir, builtBy: 'pluginProperties')
+ testSourceSet.output.dir(buildProperties.descriptorOutput.parentFile, builtBy: 'pluginProperties')
testSourceSet.resources.srcDir(pluginMetadata)
// create the actual bundle task, which zips up all the files for the plugin
Zip bundle = project.tasks.create(name: 'bundlePlugin', type: Zip, dependsOn: [project.jar, buildProperties]) {
- from buildProperties // plugin properties file
+ from(buildProperties.descriptorOutput.parentFile) {
+ // plugin properties file
+ include(buildProperties.descriptorOutput.name)
+ }
from pluginMetadata // metadata (eg custom security policy)
from project.jar // this plugin's jar
from project.configurations.runtime - project.configurations.provided // the dep jars
@@ -152,7 +152,7 @@ public class PluginBuildPlugin extends BuildPlugin {
/** Adds a task to move jar and associated files to a "-client" name. */
protected static void addClientJarTask(Project project) {
Task clientJar = project.tasks.create('clientJar')
- clientJar.dependsOn('generatePomFileForJarPublication', project.jar, project.javadocJar, project.sourcesJar)
+ clientJar.dependsOn(project.jar, 'generatePomFileForClientJarPublication', project.javadocJar, project.sourcesJar)
clientJar.doFirst {
Path jarFile = project.jar.outputs.files.singleFile.toPath()
String clientFileName = jarFile.fileName.toString().replace(project.version, "client-${project.version}")
@@ -179,7 +179,10 @@ public class PluginBuildPlugin extends BuildPlugin {
static final Pattern GIT_PATTERN = Pattern.compile(/git@([^:]+):([^\.]+)\.git/)
/** Find the reponame. */
- protected static String urlFromOrigin(String origin) {
+ static String urlFromOrigin(String origin) {
+ if (origin == null) {
+ return null // best effort, the url doesnt really matter, it is just required by maven central
+ }
if (origin.startsWith('https')) {
return origin
}
@@ -197,9 +200,9 @@ public class PluginBuildPlugin extends BuildPlugin {
project.publishing {
publications {
- jar(MavenPublication) {
+ clientJar(MavenPublication) {
from project.components.java
- artifactId = artifactId + '-client'
+ artifactId = project.pluginProperties.extension.name + '-client'
pom.withXml { XmlProvider xml ->
Node root = xml.asNode()
root.appendNode('name', project.pluginProperties.extension.name)
@@ -213,7 +216,7 @@ public class PluginBuildPlugin extends BuildPlugin {
}
}
- /** Adds a task to generate a*/
+ /** Adds a task to generate a pom file for the zip distribution. */
protected void addZipPomGeneration(Project project) {
project.plugins.apply(MavenPublishPlugin.class)
@@ -221,7 +224,19 @@ public class PluginBuildPlugin extends BuildPlugin {
publications {
zip(MavenPublication) {
artifact project.bundlePlugin
- pom.packaging = 'pom'
+ }
+ /* HUGE HACK: the underlying maven publication library refuses to deploy any attached artifacts
+ * when the packaging type is set to 'pom'. But Sonatype's OSS repositories require source files
+ * for artifacts that are of type 'zip'. We already publish the source and javadoc for Elasticsearch
+ * under the various other subprojects. So here we create another publication using the same
+ * name that has the "real" pom, and rely on the fact that gradle will execute the publish tasks
+ * in alphabetical order. This lets us publish the zip file and even though the pom says the
+ * type is 'pom' instead of 'zip'. We cannot setup a dependency between the tasks because the
+ * publishing tasks are created *extremely* late in the configuration phase, so that we cannot get
+ * ahold of the actual task. Furthermore, this entire hack only exists so we can make publishing to
+ * maven local work, since we publish to maven central externally. */
+ zipReal(MavenPublication) {
+ artifactId = project.pluginProperties.extension.name
pom.withXml { XmlProvider xml ->
Node root = xml.asNode()
root.appendNode('name', project.pluginProperties.extension.name)
@@ -234,4 +249,19 @@ public class PluginBuildPlugin extends BuildPlugin {
}
}
}
+
+ protected void addNoticeGeneration(Project project) {
+ File licenseFile = project.pluginProperties.extension.licenseFile
+ if (licenseFile != null) {
+ project.bundlePlugin.from(licenseFile.parentFile) {
+ include(licenseFile.name)
+ }
+ }
+ File noticeFile = project.pluginProperties.extension.noticeFile
+ if (noticeFile != null) {
+ NoticeTask generateNotice = project.tasks.create('generateNotice', NoticeTask.class)
+ generateNotice.inputFile = noticeFile
+ project.bundlePlugin.from(generateNotice)
+ }
+ }
}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesExtension.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesExtension.groovy
index 5502266693653..1251be265da9a 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesExtension.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesExtension.groovy
@@ -39,10 +39,24 @@ class PluginPropertiesExtension {
@Input
String classname
+ @Input
+ boolean hasNativeController = false
+
/** Indicates whether the plugin jar should be made available for the transport client. */
@Input
boolean hasClientJar = false
+ /** A license file that should be included in the built plugin zip. */
+ @Input
+ File licenseFile = null
+
+ /**
+ * A notice file that should be included in the built plugin zip. This will be
+ * extended with notices from the {@code licenses/} directory.
+ */
+ @Input
+ File noticeFile = null
+
PluginPropertiesExtension(Project project) {
name = project.name
version = project.version
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesTask.groovy
index 7156c2650cbe0..91efe247a016b 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesTask.groovy
@@ -22,6 +22,7 @@ import org.elasticsearch.gradle.VersionProperties
import org.gradle.api.InvalidUserDataException
import org.gradle.api.Task
import org.gradle.api.tasks.Copy
+import org.gradle.api.tasks.OutputFile
/**
* Creates a plugin descriptor.
@@ -29,20 +30,22 @@ import org.gradle.api.tasks.Copy
class PluginPropertiesTask extends Copy {
PluginPropertiesExtension extension
- File generatedResourcesDir = new File(project.buildDir, 'generated-resources')
+
+ @OutputFile
+ File descriptorOutput = new File(project.buildDir, 'generated-resources/plugin-descriptor.properties')
PluginPropertiesTask() {
- File templateFile = new File(project.buildDir, 'templates/plugin-descriptor.properties')
+ File templateFile = new File(project.buildDir, "templates/${descriptorOutput.name}")
Task copyPluginPropertiesTemplate = project.tasks.create('copyPluginPropertiesTemplate') {
doLast {
- InputStream resourceTemplate = PluginPropertiesTask.getResourceAsStream('/plugin-descriptor.properties')
+ InputStream resourceTemplate = PluginPropertiesTask.getResourceAsStream("/${descriptorOutput.name}")
templateFile.parentFile.mkdirs()
templateFile.setText(resourceTemplate.getText('UTF-8'), 'UTF-8')
}
}
+
dependsOn(copyPluginPropertiesTemplate)
extension = project.extensions.create('esplugin', PluginPropertiesExtension, project)
- project.clean.delete(generatedResourcesDir)
project.afterEvaluate {
// check require properties are set
if (extension.name == null) {
@@ -55,8 +58,8 @@ class PluginPropertiesTask extends Copy {
throw new InvalidUserDataException('classname is a required setting for esplugin')
}
// configure property substitution
- from(templateFile)
- into(generatedResourcesDir)
+ from(templateFile.parentFile).include(descriptorOutput.name)
+ into(descriptorOutput.parentFile)
Map properties = generateSubstitutions()
expand(properties)
inputs.properties(properties)
@@ -76,7 +79,8 @@ class PluginPropertiesTask extends Copy {
'version': stringSnap(extension.version),
'elasticsearchVersion': stringSnap(VersionProperties.elasticsearch),
'javaVersion': project.targetCompatibility as String,
- 'classname': extension.classname
+ 'classname': extension.classname,
+ 'hasNativeController': extension.hasNativeController
]
}
}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/DependencyLicensesTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/DependencyLicensesTask.groovy
index 6fa37be309ec1..4d292d87ec39c 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/DependencyLicensesTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/DependencyLicensesTask.groovy
@@ -86,6 +86,9 @@ public class DependencyLicensesTask extends DefaultTask {
/** A map of patterns to prefix, used to find the LICENSE and NOTICE file. */
private LinkedHashMap mappings = new LinkedHashMap<>()
+ /** Names of dependencies whose shas should not exist. */
+ private Set ignoreShas = new HashSet<>()
+
/**
* Add a mapping from a regex pattern for the jar name, to a prefix to find
* the LICENSE and NOTICE file for that jar.
@@ -106,6 +109,15 @@ public class DependencyLicensesTask extends DefaultTask {
mappings.put(from, to)
}
+ /**
+ * Add a rule which will skip SHA checking for the given dependency name. This should be used for
+ * locally build dependencies, which cause the sha to change constantly.
+ */
+ @Input
+ public void ignoreSha(String dep) {
+ ignoreShas.add(dep)
+ }
+
@TaskAction
public void checkDependencies() {
if (dependencies.isEmpty()) {
@@ -139,19 +151,27 @@ public class DependencyLicensesTask extends DefaultTask {
for (File dependency : dependencies) {
String jarName = dependency.getName()
- logger.info("Checking license/notice/sha for " + jarName)
- checkSha(dependency, jarName, shaFiles)
+ String depName = jarName - ~/\-\d+.*/
+ if (ignoreShas.contains(depName)) {
+ // local deps should not have sha files!
+ if (getShaFile(jarName).exists()) {
+ throw new GradleException("SHA file ${getShaFile(jarName)} exists for ignored dependency ${depName}")
+ }
+ } else {
+ logger.info("Checking sha for " + jarName)
+ checkSha(dependency, jarName, shaFiles)
+ }
- String name = jarName - ~/\-\d+.*/
- Matcher match = mappingsPattern.matcher(name)
+ logger.info("Checking license/notice for " + depName)
+ Matcher match = mappingsPattern.matcher(depName)
if (match.matches()) {
int i = 0
while (i < match.groupCount() && match.group(i + 1) == null) ++i;
- logger.info("Mapped dependency name ${name} to ${mapped.get(i)} for license check")
- name = mapped.get(i)
+ logger.info("Mapped dependency name ${depName} to ${mapped.get(i)} for license check")
+ depName = mapped.get(i)
}
- checkFile(name, jarName, licenses, 'LICENSE')
- checkFile(name, jarName, notices, 'NOTICE')
+ checkFile(depName, jarName, licenses, 'LICENSE')
+ checkFile(depName, jarName, notices, 'NOTICE')
}
licenses.each { license, count ->
@@ -169,8 +189,12 @@ public class DependencyLicensesTask extends DefaultTask {
}
}
+ private File getShaFile(String jarName) {
+ return new File(licensesDir, jarName + SHA_EXTENSION)
+ }
+
private void checkSha(File jar, String jarName, Set shaFiles) {
- File shaFile = new File(licensesDir, jarName + SHA_EXTENSION)
+ File shaFile = getShaFile(jarName)
if (shaFile.exists() == false) {
throw new GradleException("Missing SHA for ${jarName}. Run 'gradle updateSHAs' to create")
}
@@ -215,6 +239,10 @@ public class DependencyLicensesTask extends DefaultTask {
}
for (File dependency : parentTask.dependencies) {
String jarName = dependency.getName()
+ String depName = jarName - ~/\-\d+.*/
+ if (parentTask.ignoreShas.contains(depName)) {
+ continue
+ }
File shaFile = new File(parentTask.licensesDir, jarName + SHA_EXTENSION)
if (shaFile.exists() == false) {
logger.lifecycle("Adding sha for ${jarName}")
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/ForbiddenPatternsTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/ForbiddenPatternsTask.groovy
index 7cb344bf47fc9..ed62e88c567fa 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/ForbiddenPatternsTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/ForbiddenPatternsTask.groovy
@@ -59,11 +59,16 @@ public class ForbiddenPatternsTask extends DefaultTask {
filesFilter.exclude('**/*.png')
// add mandatory rules
- patterns.put('nocommit', /nocommit/)
+ patterns.put('nocommit', /nocommit|NOCOMMIT/)
+ patterns.put('nocommit should be all lowercase or all uppercase',
+ /((?i)nocommit)(? arguments = new ArrayList<>()
+
+ @Input
+ public void args(Object... args) {
+ arguments.addAll(args)
+ }
+
+ /**
+ * Environment variables for the fixture process. The value can be any object, which
+ * will have toString() called at execution time.
+ */
+ private final Map environment = new HashMap<>()
+
+ @Input
+ public void env(String key, Object value) {
+ environment.put(key, value)
+ }
+
+ /** A flag to indicate whether the command should be executed from a shell. */
+ @Input
+ boolean useShell = false
+
+ /**
+ * A flag to indicate whether the fixture should be run in the foreground, or spawned.
+ * It is protected so subclasses can override (eg RunTask).
+ */
+ protected boolean spawn = true
+
+ /**
+ * A closure to call before the fixture is considered ready. The closure is passed the fixture object,
+ * as well as a groovy AntBuilder, to enable running ant condition checks. The default wait
+ * condition is for http on the http port.
+ */
+ @Input
+ Closure waitCondition = { AntFixture fixture, AntBuilder ant ->
+ File tmpFile = new File(fixture.cwd, 'wait.success')
+ ant.get(src: "http://${fixture.addressAndPort}",
+ dest: tmpFile.toString(),
+ ignoreerrors: true, // do not fail on error, so logging information can be flushed
+ retries: 10)
+ return tmpFile.exists()
+ }
+
+ private final Task stopTask
+
+ public AntFixture() {
+ stopTask = createStopTask()
+ finalizedBy(stopTask)
+ }
+
+ @Override
+ public Task getStopTask() {
+ return stopTask
+ }
+
+ @Override
+ protected void runAnt(AntBuilder ant) {
+ project.delete(baseDir) // reset everything
+ cwd.mkdirs()
+ final String realExecutable
+ final List realArgs = new ArrayList<>()
+ final Map realEnv = environment
+ // We need to choose which executable we are using. In shell mode, or when we
+ // are spawning and thus using the wrapper script, the executable is the shell.
+ if (useShell || spawn) {
+ if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+ realExecutable = 'cmd'
+ realArgs.add('/C')
+ realArgs.add('"') // quote the entire command
+ } else {
+ realExecutable = 'sh'
+ }
+ } else {
+ realExecutable = executable
+ realArgs.addAll(arguments)
+ }
+ if (spawn) {
+ writeWrapperScript(executable)
+ realArgs.add(wrapperScript)
+ realArgs.addAll(arguments)
+ }
+ if (Os.isFamily(Os.FAMILY_WINDOWS) && (useShell || spawn)) {
+ realArgs.add('"')
+ }
+ commandString.eachLine { line -> logger.info(line) }
+
+ ant.exec(executable: realExecutable, spawn: spawn, dir: cwd, taskname: name) {
+ realEnv.each { key, value -> env(key: key, value: value) }
+ realArgs.each { arg(value: it) }
+ }
+
+ String failedProp = "failed${name}"
+ // first wait for resources, or the failure marker from the wrapper script
+ ant.waitfor(maxwait: '30', maxwaitunit: 'second', checkevery: '500', checkeveryunit: 'millisecond', timeoutproperty: failedProp) {
+ or {
+ resourceexists {
+ file(file: failureMarker.toString())
+ }
+ and {
+ resourceexists {
+ file(file: pidFile.toString())
+ }
+ resourceexists {
+ file(file: portsFile.toString())
+ }
+ }
+ }
+ }
+
+ if (ant.project.getProperty(failedProp) || failureMarker.exists()) {
+ fail("Failed to start ${name}")
+ }
+
+ // the process is started (has a pid) and is bound to a network interface
+ // so now wait undil the waitCondition has been met
+ // TODO: change this to a loop?
+ boolean success
+ try {
+ success = waitCondition(this, ant) == false
+ } catch (Exception e) {
+ String msg = "Wait condition caught exception for ${name}"
+ logger.error(msg, e)
+ fail(msg, e)
+ }
+ if (success == false) {
+ fail("Wait condition failed for ${name}")
+ }
+ }
+
+ /** Returns a debug string used to log information about how the fixture was run. */
+ protected String getCommandString() {
+ String commandString = "\n${name} configuration:\n"
+ commandString += "-----------------------------------------\n"
+ commandString += " cwd: ${cwd}\n"
+ commandString += " command: ${executable} ${arguments.join(' ')}\n"
+ commandString += ' environment:\n'
+ environment.each { k, v -> commandString += " ${k}: ${v}\n" }
+ if (spawn) {
+ commandString += "\n [${wrapperScript.name}]\n"
+ wrapperScript.eachLine('UTF-8', { line -> commandString += " ${line}\n"})
+ }
+ return commandString
+ }
+
+ /**
+ * Writes a script to run the real executable, so that stdout/stderr can be captured.
+ * TODO: this could be removed if we do use our own ProcessBuilder and pump output from the process
+ */
+ private void writeWrapperScript(String executable) {
+ wrapperScript.parentFile.mkdirs()
+ String argsPasser = '"$@"'
+ String exitMarker = "; if [ \$? != 0 ]; then touch run.failed; fi"
+ if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+ argsPasser = '%*'
+ exitMarker = "\r\n if \"%errorlevel%\" neq \"0\" ( type nul >> run.failed )"
+ }
+ wrapperScript.setText("\"${executable}\" ${argsPasser} > run.log 2>&1 ${exitMarker}", 'UTF-8')
+ }
+
+ /** Fail the build with the given message, and logging relevant info*/
+ private void fail(String msg, Exception... suppressed) {
+ if (logger.isInfoEnabled() == false) {
+ // We already log the command at info level. No need to do it twice.
+ commandString.eachLine { line -> logger.error(line) }
+ }
+ logger.error("${name} output:")
+ logger.error("-----------------------------------------")
+ logger.error(" failure marker exists: ${failureMarker.exists()}")
+ logger.error(" pid file exists: ${pidFile.exists()}")
+ logger.error(" ports file exists: ${portsFile.exists()}")
+ // also dump the log file for the startup script (which will include ES logging output to stdout)
+ if (runLog.exists()) {
+ logger.error("\n [log]")
+ runLog.eachLine { line -> logger.error(" ${line}") }
+ }
+ logger.error("-----------------------------------------")
+ GradleException toThrow = new GradleException(msg)
+ for (Exception e : suppressed) {
+ toThrow.addSuppressed(e)
+ }
+ throw toThrow
+ }
+
+ /** Adds a task to kill an elasticsearch node with the given pidfile */
+ private Task createStopTask() {
+ final AntFixture fixture = this
+ final Object pid = "${ -> fixture.pid }"
+ Exec stop = project.tasks.create(name: "${name}#stop", type: LoggedExec)
+ stop.onlyIf { fixture.pidFile.exists() }
+ stop.doFirst {
+ logger.info("Shutting down ${fixture.name} with pid ${pid}")
+ }
+ if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+ stop.executable = 'Taskkill'
+ stop.args('/PID', pid, '/F')
+ } else {
+ stop.executable = 'kill'
+ stop.args('-9', pid)
+ }
+ stop.doLast {
+ project.delete(fixture.pidFile)
+ }
+ return stop
+ }
+
+ /**
+ * A path relative to the build dir that all configuration and runtime files
+ * will live in for this fixture
+ */
+ protected File getBaseDir() {
+ return new File(project.buildDir, "fixtures/${name}")
+ }
+
+ /** Returns the working directory for the process. Defaults to "cwd" inside baseDir. */
+ protected File getCwd() {
+ return new File(baseDir, 'cwd')
+ }
+
+ /** Returns the file the process writes its pid to. Defaults to "pid" inside baseDir. */
+ protected File getPidFile() {
+ return new File(baseDir, 'pid')
+ }
+
+ /** Reads the pid file and returns the process' pid */
+ public int getPid() {
+ return Integer.parseInt(pidFile.getText('UTF-8').trim())
+ }
+
+ /** Returns the file the process writes its bound ports to. Defaults to "ports" inside baseDir. */
+ protected File getPortsFile() {
+ return new File(baseDir, 'ports')
+ }
+
+ /** Returns an address and port suitable for a uri to connect to this node over http */
+ public String getAddressAndPort() {
+ return portsFile.readLines("UTF-8").get(0)
+ }
+
+ /** Returns a file that wraps around the actual command when {@code spawn == true}. */
+ protected File getWrapperScript() {
+ return new File(cwd, Os.isFamily(Os.FAMILY_WINDOWS) ? 'run.bat' : 'run')
+ }
+
+ /** Returns a file that the wrapper script writes when the command failed. */
+ protected File getFailureMarker() {
+ return new File(cwd, 'run.failed')
+ }
+
+ /** Returns a file that the wrapper script writes when the command failed. */
+ protected File getRunLog() {
+ return new File(cwd, 'run.log')
+ }
+}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy
index 6a2375efc6299..c9965caa96e7d 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy
@@ -20,8 +20,6 @@ package org.elasticsearch.gradle.test
import org.gradle.api.GradleException
import org.gradle.api.Project
-import org.gradle.api.artifacts.Configuration
-import org.gradle.api.file.FileCollection
import org.gradle.api.tasks.Input
/** Configuration for an elasticsearch cluster, used for integration tests. */
@@ -47,25 +45,56 @@ class ClusterConfiguration {
@Input
int transportPort = 0
+ /**
+ * An override of the data directory. This may only be used with a single node.
+ * The value is lazily evaluated at runtime as a String path.
+ */
+ @Input
+ Object dataDir = null
+
+ /** Optional override of the cluster name. */
+ @Input
+ String clusterName = null
+
@Input
boolean daemonize = true
@Input
boolean debug = false
+ /**
+ * if true each node will be configured with discovery.zen.minimum_master_nodes set
+ * to the total number of nodes in the cluster. This will also cause that each node has `0s` state recovery
+ * timeout which can lead to issues if for instance an existing clusterstate is expected to be recovered
+ * before any tests start
+ */
+ @Input
+ boolean useMinimumMasterNodes = true
+
@Input
String jvmArgs = "-Xms" + System.getProperty('tests.heap.size', '512m') +
- " " + "-Xmx" + System.getProperty('tests.heap.size', '512m') +
- " " + System.getProperty('tests.jvm.argline', '')
+ " " + "-Xmx" + System.getProperty('tests.heap.size', '512m') +
+ " " + System.getProperty('tests.jvm.argline', '')
/**
- * The seed nodes port file. In the case the cluster has more than one node we use a seed node
- * to form the cluster. The file is null if there is no seed node yet available.
+ * A closure to call which returns the unicast host to connect to for cluster formation.
*
- * Note: this can only be null if the cluster has only one node or if the first node is not yet
- * configured. All nodes but the first node should see a non null value.
+ * This allows multi node clusters, or a new cluster to connect to an existing cluster.
+ * The closure takes two arguments, the NodeInfo for the first node in the cluster, and
+ * an AntBuilder which may be used to wait on conditions before returning.
*/
- File seedNodePortsFile
+ @Input
+ Closure unicastTransportUri = { NodeInfo seedNode, NodeInfo node, AntBuilder ant ->
+ if (seedNode == node) {
+ return null
+ }
+ ant.waitfor(maxwait: '40', maxwaitunit: 'second', checkevery: '500', checkeveryunit: 'millisecond') {
+ resourceexists {
+ file(file: seedNode.transportPortsFile.toString())
+ }
+ }
+ return seedNode.transportUri()
+ }
/**
* A closure to call before the cluster is considered ready. The closure is passed the node info,
@@ -75,7 +104,13 @@ class ClusterConfiguration {
@Input
Closure waitCondition = { NodeInfo node, AntBuilder ant ->
File tmpFile = new File(node.cwd, 'wait.success')
- ant.get(src: "http://${node.httpUri()}/_cluster/health?wait_for_nodes=${numNodes}",
+ String waitUrl = "http://${node.httpUri()}/_cluster/health?wait_for_nodes=>=${numNodes}&wait_for_status=yellow"
+ ant.echo(message: "==> [${new Date()}] checking health: ${waitUrl}",
+ level: 'info')
+ // checking here for wait_for_nodes to be >= the number of nodes because its possible
+ // this cluster is attempting to connect to nodes created by another task (same cluster name),
+ // so there will be more nodes in that case in the cluster state
+ ant.get(src: waitUrl,
dest: tmpFile.toString(),
ignoreerrors: true, // do not fail on error, so logging buffers can be flushed by the wait task
retries: 10)
@@ -88,7 +123,9 @@ class ClusterConfiguration {
Map systemProperties = new HashMap<>()
- Map settings = new HashMap<>()
+ Map settings = new HashMap<>()
+
+ Map keystoreSettings = new HashMap<>()
// map from destination path, to source file
Map extraConfigFiles = new HashMap<>()
@@ -99,16 +136,23 @@ class ClusterConfiguration {
LinkedHashMap setupCommands = new LinkedHashMap<>()
+ List dependencies = new ArrayList<>()
+
@Input
void systemProperty(String property, String value) {
systemProperties.put(property, value)
}
@Input
- void setting(String name, String value) {
+ void setting(String name, Object value) {
settings.put(name, value)
}
+ @Input
+ void keystoreSetting(String name, String value) {
+ keystoreSettings.put(name, value)
+ }
+
@Input
void plugin(String path) {
Project pluginProject = project.project(path)
@@ -138,11 +182,9 @@ class ClusterConfiguration {
extraConfigFiles.put(path, sourceFile)
}
- /** Returns an address and port suitable for a uri to connect to this clusters seed node over transport protocol*/
- String seedNodeTransportUri() {
- if (seedNodePortsFile != null) {
- return seedNodePortsFile.readLines("UTF-8").get(0)
- }
- return null;
+ /** Add dependencies that must be run before the first task setting up the cluster. */
+ @Input
+ void dependsOn(Object... deps) {
+ dependencies.addAll(deps)
}
}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy
index 8819c63080a3b..ee225904cc93e 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy
@@ -23,6 +23,7 @@ import org.apache.tools.ant.taskdefs.condition.Os
import org.elasticsearch.gradle.LoggedExec
import org.elasticsearch.gradle.VersionProperties
import org.elasticsearch.gradle.plugin.PluginBuildPlugin
+import org.elasticsearch.gradle.plugin.PluginPropertiesExtension
import org.gradle.api.AntBuilder
import org.gradle.api.DefaultTask
import org.gradle.api.GradleException
@@ -30,13 +31,16 @@ import org.gradle.api.InvalidUserDataException
import org.gradle.api.Project
import org.gradle.api.Task
import org.gradle.api.artifacts.Configuration
+import org.gradle.api.artifacts.Dependency
import org.gradle.api.file.FileCollection
import org.gradle.api.logging.Logger
import org.gradle.api.tasks.Copy
import org.gradle.api.tasks.Delete
import org.gradle.api.tasks.Exec
+import java.nio.charset.StandardCharsets
import java.nio.file.Paths
+import java.util.concurrent.TimeUnit
/**
* A helper for creating tasks to build a cluster that is used by a task, and tear down the cluster when the task is finished.
@@ -46,24 +50,20 @@ class ClusterFormationTasks {
/**
* Adds dependent tasks to the given task to start and stop a cluster with the given configuration.
*
- * Returns a NodeInfo object for the first node in the cluster.
+ * Returns a list of NodeInfo objects for each node in the cluster.
*/
- static NodeInfo setup(Project project, Task task, ClusterConfiguration config) {
- if (task.getEnabled() == false) {
- // no need to add cluster formation tasks if the task won't run!
- return
- }
+ static List<NodeInfo> setup(Project project, String prefix, Task runner, ClusterConfiguration config) {
File sharedDir = new File(project.buildDir, "cluster/shared")
// first we remove everything in the shared cluster directory to ensure there are no leftovers in repos or anything
// in theory this should not be necessary but repositories are only deleted in the cluster-state and not on-disk
// such that snapshots survive failures / test runs and there is no simple way today to fix that.
- Task cleanup = project.tasks.create(name: "${task.name}#prepareCluster.cleanShared", type: Delete, dependsOn: task.dependsOn.collect()) {
+ Task cleanup = project.tasks.create(name: "${prefix}#prepareCluster.cleanShared", type: Delete, dependsOn: config.dependencies) {
delete sharedDir
doLast {
sharedDir.mkdirs()
}
}
- List<Task> startTasks = [cleanup]
+ List<Task> startTasks = []
List<NodeInfo> nodes = []
if (config.numNodes < config.numBwcNodes) {
throw new GradleException("numNodes must be >= numBwcNodes [${config.numNodes} < ${config.numBwcNodes}]")
@@ -72,45 +72,45 @@ class ClusterFormationTasks {
throw new GradleException("bwcVersion must not be null if numBwcNodes is > 0")
}
// this is our current version distribution configuration we use for all kinds of REST tests etc.
- String distroConfigName = "${task.name}_elasticsearchDistro"
- Configuration distro = project.configurations.create(distroConfigName)
- configureDistributionDependency(project, config.distribution, distro, VersionProperties.elasticsearch)
- if (config.bwcVersion != null && config.numBwcNodes > 0) {
+ Configuration currentDistro = project.configurations.create("${prefix}_elasticsearchDistro")
+ Configuration bwcDistro = project.configurations.create("${prefix}_elasticsearchBwcDistro")
+ Configuration bwcPlugins = project.configurations.create("${prefix}_elasticsearchBwcPlugins")
+ configureDistributionDependency(project, config.distribution, currentDistro, VersionProperties.elasticsearch)
+ if (config.numBwcNodes > 0) {
+ if (config.bwcVersion == null) {
+ throw new IllegalArgumentException("Must specify bwcVersion when numBwcNodes > 0")
+ }
// if we have a cluster that has a BWC cluster we also need to configure a dependency on the BWC version
// this version uses the same distribution etc. and only differs in the version we depend on.
// from here on everything else works the same as if it's the current version, we fetch the BWC version
// from mirrors using gradles built-in mechanism etc.
- project.configurations {
- elasticsearchBwcDistro
+
+ configureDistributionDependency(project, config.distribution, bwcDistro, config.bwcVersion)
+ for (Map.Entry<String, Project> entry : config.plugins.entrySet()) {
+ configureBwcPluginDependency("${prefix}_elasticsearchBwcPlugins", project, entry.getValue(), bwcPlugins, config.bwcVersion)
}
- configureDistributionDependency(project, config.distribution, project.configurations.elasticsearchBwcDistro, config.bwcVersion)
+ bwcDistro.resolutionStrategy.cacheChangingModulesFor(0, TimeUnit.SECONDS)
+ bwcPlugins.resolutionStrategy.cacheChangingModulesFor(0, TimeUnit.SECONDS)
}
-
- for (int i = 0; i < config.numNodes; ++i) {
+ for (int i = 0; i < config.numNodes; i++) {
// we start N nodes and out of these N nodes there might be M bwc nodes.
- // for each of those nodes we might have a different configuratioon
+ // for each of those nodes we might have a different configuration
String elasticsearchVersion = VersionProperties.elasticsearch
+ Configuration distro = currentDistro
if (i < config.numBwcNodes) {
elasticsearchVersion = config.bwcVersion
- distro = project.configurations.elasticsearchBwcDistro
- }
- NodeInfo node = new NodeInfo(config, i, project, task, elasticsearchVersion, sharedDir)
- if (i == 0) {
- if (config.seedNodePortsFile != null) {
- // we might allow this in the future to be set but for now we are the only authority to set this!
- throw new GradleException("seedNodePortsFile has a non-null value but first node has not been intialized")
- }
- config.seedNodePortsFile = node.transportPortsFile;
+ distro = bwcDistro
}
+ NodeInfo node = new NodeInfo(config, i, project, prefix, elasticsearchVersion, sharedDir)
nodes.add(node)
- startTasks.add(configureNode(project, task, cleanup, node, distro))
+ Task dependsOn = startTasks.empty ? cleanup : startTasks.get(0)
+ startTasks.add(configureNode(project, prefix, runner, dependsOn, node, config, distro, nodes.get(0)))
}
- Task wait = configureWaitTask("${task.name}#wait", project, nodes, startTasks)
- task.dependsOn(wait)
+ Task wait = configureWaitTask("${prefix}#wait", project, nodes, startTasks)
+ runner.dependsOn(wait)
- // delay the resolution of the uri by wrapping in a closure, so it is not used until read for tests
- return nodes[0]
+ return nodes
}
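For the BWC branch above, a build script only has to supply `numBwcNodes` and `bwcVersion`; `setup()` enforces that `numNodes >= numBwcNodes` and that a version is given. A hedged sketch (the version string is illustrative):

```groovy
// Hypothetical mixed-version cluster: the first two nodes run the BWC version,
// the remaining two run the current build, per the loop in setup().
integTestCluster {
    numNodes = 4
    numBwcNodes = 2
    bwcVersion = '5.4.0' // illustrative; resolved via gradle's normal dependency mechanism
}
```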
/** Adds a dependency on the given distribution */
@@ -124,6 +124,13 @@ class ClusterFormationTasks {
project.dependencies.add(configuration.name, "org.elasticsearch.distribution.${distro}:elasticsearch:${elasticsearchVersion}@${packaging}")
}
+ /** Adds a dependency on a different version of the given plugin, which will be retrieved using gradle's dependency resolution */
+ static void configureBwcPluginDependency(String name, Project project, Project pluginProject, Configuration configuration, String elasticsearchVersion) {
+ verifyProjectHasBuildPlugin(name, elasticsearchVersion, project, pluginProject)
+ PluginPropertiesExtension extension = pluginProject.extensions.findByName('esplugin');
+ project.dependencies.add(configuration.name, "org.elasticsearch.plugin:${extension.name}:${elasticsearchVersion}@zip")
+ }
+
/**
* Adds dependent tasks to start an elasticsearch cluster before the given task is executed,
* and stop it after it has finished executing.
@@ -141,49 +148,83 @@ class ClusterFormationTasks {
*
* @return a task which starts the node.
*/
- static Task configureNode(Project project, Task task, Object dependsOn, NodeInfo node, Configuration configuration) {
+ static Task configureNode(Project project, String prefix, Task runner, Object dependsOn, NodeInfo node, ClusterConfiguration config,
+ Configuration distribution, NodeInfo seedNode) {
// tasks are chained so their execution order is maintained
- Task setup = project.tasks.create(name: taskName(task, node, 'clean'), type: Delete, dependsOn: dependsOn) {
+ Task setup = project.tasks.create(name: taskName(prefix, node, 'clean'), type: Delete, dependsOn: dependsOn) {
delete node.homeDir
delete node.cwd
doLast {
node.cwd.mkdirs()
}
}
- setup = configureCheckPreviousTask(taskName(task, node, 'checkPrevious'), project, setup, node)
- setup = configureStopTask(taskName(task, node, 'stopPrevious'), project, setup, node)
- setup = configureExtractTask(taskName(task, node, 'extract'), project, setup, node, configuration)
- setup = configureWriteConfigTask(taskName(task, node, 'configure'), project, setup, node)
- setup = configureExtraConfigFilesTask(taskName(task, node, 'extraConfig'), project, setup, node)
- setup = configureCopyPluginsTask(taskName(task, node, 'copyPlugins'), project, setup, node)
+
+ setup = configureCheckPreviousTask(taskName(prefix, node, 'checkPrevious'), project, setup, node)
+ setup = configureStopTask(taskName(prefix, node, 'stopPrevious'), project, setup, node)
+ setup = configureExtractTask(taskName(prefix, node, 'extract'), project, setup, node, distribution)
+ setup = configureWriteConfigTask(taskName(prefix, node, 'configure'), project, setup, node, seedNode)
+ setup = configureCreateKeystoreTask(taskName(prefix, node, 'createKeystore'), project, setup, node)
+ setup = configureAddKeystoreSettingTasks(prefix, project, setup, node)
+
+ if (node.config.plugins.isEmpty() == false) {
+ if (node.nodeVersion == VersionProperties.elasticsearch) {
+ setup = configureCopyPluginsTask(taskName(prefix, node, 'copyPlugins'), project, setup, node, prefix)
+ } else {
+ setup = configureCopyBwcPluginsTask(taskName(prefix, node, 'copyBwcPlugins'), project, setup, node, prefix)
+ }
+ }
// install modules
for (Project module : node.config.modules) {
String actionName = pluginTaskName('install', module.name, 'Module')
- setup = configureInstallModuleTask(taskName(task, node, actionName), project, setup, node, module)
+ setup = configureInstallModuleTask(taskName(prefix, node, actionName), project, setup, node, module)
}
// install plugins
for (Map.Entry<String, Project> plugin : node.config.plugins.entrySet()) {
String actionName = pluginTaskName('install', plugin.getKey(), 'Plugin')
- setup = configureInstallPluginTask(taskName(task, node, actionName), project, setup, node, plugin.getValue())
+ setup = configureInstallPluginTask(taskName(prefix, node, actionName), project, setup, node, plugin.getValue(), prefix)
}
+ // sets up any extra config files that need to be copied over to the ES instance;
+ // it's run after plugins have been installed, as the extra config files may belong to plugins
+ setup = configureExtraConfigFilesTask(taskName(prefix, node, 'extraConfig'), project, setup, node)
+
// extra setup commands
for (Map.Entry<String, Object[]> command : node.config.setupCommands.entrySet()) {
// the first argument is the actual script name, relative to home
Object[] args = command.getValue().clone()
- args[0] = new File(node.homeDir, args[0].toString())
- setup = configureExecTask(taskName(task, node, command.getKey()), project, setup, node, args)
+ final Object commandPath
+ if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+ /*
+ * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to
+ * getting the short name requiring the path to already exist. Note that we have to capture the value of args[0] now
+ * otherwise we would stack overflow later since args[0] is replaced below.
+ */
+ String argsZero = args[0]
+ commandPath = "${-> Paths.get(NodeInfo.getShortPathName(node.homeDir.toString())).resolve(argsZero.toString()).toString()}"
+ } else {
+ commandPath = node.homeDir.toPath().resolve(args[0].toString()).toString()
+ }
+ args[0] = commandPath
+ setup = configureExecTask(taskName(prefix, node, command.getKey()), project, setup, node, args)
}
- Task start = configureStartTask(taskName(task, node, 'start'), project, setup, node)
+ Task start = configureStartTask(taskName(prefix, node, 'start'), project, setup, node)
if (node.config.daemonize) {
+ Task stop = configureStopTask(taskName(prefix, node, 'stop'), project, [], node)
// if we are running in the background, make sure to stop the server when the task completes
- Task stop = configureStopTask(taskName(task, node, 'stop'), project, [], node)
- task.finalizedBy(stop)
+ runner.finalizedBy(stop)
+ start.finalizedBy(stop)
+ for (Object dependency : config.dependencies) {
+ if (dependency instanceof Fixture) {
+ def depStop = ((Fixture)dependency).stopTask
+ runner.finalizedBy(depStop)
+ start.finalizedBy(depStop)
+ }
+ }
}
return start
}
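The `"${-> ...}"` construction that appears throughout `configureNode` is a lazy GString: the zero-argument closure is evaluated each time the string is rendered, not at configuration time, which is what lets the Windows short-path lookups wait until the path exists. A standalone illustration in plain Groovy, separate from this build:

```groovy
// The closure inside "${-> ...}" runs on each toString(), so the value can be
// produced after the GString is created (here, once 'path' is known).
def path = null
def lazy = "resolved to ${-> path}"
path = 'C:\\build\\cluster\\node0'
assert lazy.toString() == 'resolved to C:\\build\\cluster\\node0'
```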
@@ -222,9 +263,9 @@ class ClusterFormationTasks {
Object rpm = "${ -> configuration.singleFile}"
extract = project.tasks.create(name: name, type: LoggedExec, dependsOn: extractDependsOn) {
commandLine 'rpm', '--badreloc', '--nodeps', '--noscripts', '--notriggers',
- '--dbpath', rpmDatabase,
- '--relocate', "/=${rpmExtracted}",
- '-i', rpm
+ '--dbpath', rpmDatabase,
+ '--relocate', "/=${rpmExtracted}",
+ '-i', rpm
doFirst {
rpmDatabase.deleteDir()
rpmExtracted.deleteDir()
@@ -249,32 +290,37 @@ class ClusterFormationTasks {
}
/** Adds a task to write elasticsearch.yml for the given node configuration */
- static Task configureWriteConfigTask(String name, Project project, Task setup, NodeInfo node) {
+ static Task configureWriteConfigTask(String name, Project project, Task setup, NodeInfo node, NodeInfo seedNode) {
Map esConfig = [
'cluster.name' : node.clusterName,
'pidfile' : node.pidFile,
'path.repo' : "${node.sharedDir}/repo",
'path.shared_data' : "${node.sharedDir}/",
// Define a node attribute so we can test that it exists
- 'node.attr.testattr' : 'test',
+ 'node.attr.testattr' : 'test',
'repositories.url.allowed_urls': 'http://snapshot.test*'
]
+ // we set minimum master nodes to the total number of nodes in the cluster and
+ // basically skip initial state recovery to allow the cluster to form using a realistic master election.
+ // This means all nodes must be up, join the seed node, and complete a master election. This also allows new and
+ // old nodes in the BWC case to become the master
+ if (node.config.useMinimumMasterNodes && node.config.numNodes > 1) {
+ esConfig['discovery.zen.minimum_master_nodes'] = node.config.numNodes
+ esConfig['discovery.initial_state_timeout'] = '0s' // don't wait for state.. just start up quickly
+ }
esConfig['node.max_local_storage_nodes'] = node.config.numNodes
esConfig['http.port'] = node.config.httpPort
esConfig['transport.tcp.port'] = node.config.transportPort
+ // Default the watermarks to absurdly low to prevent the tests from failing on nodes without enough disk space
+ esConfig['cluster.routing.allocation.disk.watermark.low'] = '1b'
+ esConfig['cluster.routing.allocation.disk.watermark.high'] = '1b'
esConfig.putAll(node.config.settings)
Task writeConfig = project.tasks.create(name: name, type: DefaultTask, dependsOn: setup)
writeConfig.doFirst {
- if (node.nodeNum > 0) { // multi-node cluster case, we have to wait for the seed node to startup
- ant.waitfor(maxwait: '20', maxwaitunit: 'second', checkevery: '500', checkeveryunit: 'millisecond') {
- resourceexists {
- file(file: node.config.seedNodePortsFile.toString())
- }
- }
- // the seed node is enough to form the cluster - all subsequent nodes will get the seed node as a unicast
- // host and join the cluster via that.
- esConfig['discovery.zen.ping.unicast.hosts'] = "\"${node.config.seedNodeTransportUri()}\""
+ String unicastTransportUri = node.config.unicastTransportUri(seedNode, node, project.ant)
+ if (unicastTransportUri != null) {
+ esConfig['discovery.zen.ping.unicast.hosts'] = "\"${unicastTransportUri}\""
}
File configFile = new File(node.confDir, 'elasticsearch.yml')
logger.info("Configuring ${configFile}")
@@ -282,6 +328,42 @@ class ClusterFormationTasks {
}
}
+ /** Adds a task to create the elasticsearch keystore */
+ static Task configureCreateKeystoreTask(String name, Project project, Task setup, NodeInfo node) {
+ if (node.config.keystoreSettings.isEmpty()) {
+ return setup
+ } else {
+ /*
+ * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to
+ * getting the short name requiring the path to already exist.
+ */
+ final Object esKeystoreUtil = "${-> node.binPath().resolve('elasticsearch-keystore').toString()}"
+ return configureExecTask(name, project, setup, node, esKeystoreUtil, 'create')
+ }
+ }
+
+ /** Adds tasks to add settings to the keystore */
+ static Task configureAddKeystoreSettingTasks(String parent, Project project, Task setup, NodeInfo node) {
+ Map<String, String> kvs = node.config.keystoreSettings
+ Task parentTask = setup
+ /*
+ * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to getting
+ * the short name requiring the path to already exist.
+ */
+ final Object esKeystoreUtil = "${-> node.binPath().resolve('elasticsearch-keystore').toString()}"
+ for (Map.Entry entry in kvs) {
+ String key = entry.getKey()
+ String name = taskName(parent, node, 'addToKeystore#' + key)
+ Task t = configureExecTask(name, project, parentTask, node, esKeystoreUtil, 'add', key, '-x')
+ String settingsValue = entry.getValue() // eval this early otherwise it will not use the right value
+ t.doFirst {
+ standardInput = new ByteArrayInputStream(settingsValue.getBytes(StandardCharsets.UTF_8))
+ }
+ parentTask = t
+ }
+ return parentTask
+ }
+
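Taken together, the two keystore tasks amount to running `elasticsearch-keystore create` once and then one `add <key> -x` per entry, with the value supplied on stdin. A minimal standalone sketch of that stdin wiring (the path and value are illustrative):

```groovy
// Roughly what each addToKeystore# task does at execution time: the value is
// captured eagerly and piped to 'elasticsearch-keystore add <key> -x'.
String settingsValue = 'changeme'
Process proc = ['bin/elasticsearch-keystore', 'add', 'bootstrap.password', '-x'].execute()
proc.withOutputStream { OutputStream stdin ->
    stdin.write(settingsValue.getBytes('UTF-8'))
}
assert proc.waitFor() == 0
```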
static Task configureExtraConfigFilesTask(String name, Project project, Task setup, NodeInfo node) {
if (node.config.extraConfigFiles.isEmpty()) {
return setup
@@ -318,20 +400,15 @@ class ClusterFormationTasks {
* For each plugin, if the plugin has rest spec apis in its tests, those api files are also copied
* to the test resources for this project.
*/
- static Task configureCopyPluginsTask(String name, Project project, Task setup, NodeInfo node) {
- if (node.config.plugins.isEmpty()) {
- return setup
- }
+ static Task configureCopyPluginsTask(String name, Project project, Task setup, NodeInfo node, String prefix) {
Copy copyPlugins = project.tasks.create(name: name, type: Copy, dependsOn: setup)
List<FileCollection> pluginFiles = []
for (Map.Entry<String, Project> plugin : node.config.plugins.entrySet()) {
Project pluginProject = plugin.getValue()
- if (pluginProject.plugins.hasPlugin(PluginBuildPlugin) == false) {
- throw new GradleException("Task ${name} cannot project ${pluginProject.path} which is not an esplugin")
- }
- String configurationName = "_plugin_${pluginProject.path}"
+ verifyProjectHasBuildPlugin(name, node.nodeVersion, project, pluginProject)
+ String configurationName = "_plugin_${prefix}_${pluginProject.path}"
Configuration configuration = project.configurations.findByName(configurationName)
if (configuration == null) {
configuration = project.configurations.create(configurationName)
@@ -360,6 +437,33 @@ class ClusterFormationTasks {
return copyPlugins
}
+ /** Configures task to copy a plugin based on a zip file resolved using dependencies for an older version */
+ static Task configureCopyBwcPluginsTask(String name, Project project, Task setup, NodeInfo node, String prefix) {
+ Configuration bwcPlugins = project.configurations.getByName("${prefix}_elasticsearchBwcPlugins")
+ for (Map.Entry<String, Project> plugin : node.config.plugins.entrySet()) {
+ Project pluginProject = plugin.getValue()
+ verifyProjectHasBuildPlugin(name, node.nodeVersion, project, pluginProject)
+ String configurationName = "_plugin_bwc_${prefix}_${pluginProject.path}"
+ Configuration configuration = project.configurations.findByName(configurationName)
+ if (configuration == null) {
+ configuration = project.configurations.create(configurationName)
+ }
+
+ final String depName = pluginProject.extensions.findByName('esplugin').name
+
+ Dependency dep = bwcPlugins.dependencies.find {
+ it.name == depName
+ }
+ configuration.dependencies.add(dep)
+ }
+
+ Copy copyPlugins = project.tasks.create(name: name, type: Copy, dependsOn: setup) {
+ from bwcPlugins
+ into node.pluginsTmpDir
+ }
+ return copyPlugins
+ }
+
static Task configureInstallModuleTask(String name, Project project, Task setup, NodeInfo node, Project module) {
if (node.config.distribution != 'integ-test-zip') {
throw new GradleException("Module ${module.path} not allowed be installed distributions other than integ-test-zip because they should already have all modules bundled!")
@@ -374,11 +478,21 @@ class ClusterFormationTasks {
return installModule
}
- static Task configureInstallPluginTask(String name, Project project, Task setup, NodeInfo node, Project plugin) {
- FileCollection pluginZip = project.configurations.getByName("_plugin_${plugin.path}")
+ static Task configureInstallPluginTask(String name, Project project, Task setup, NodeInfo node, Project plugin, String prefix) {
+ final FileCollection pluginZip;
+ if (node.nodeVersion != VersionProperties.elasticsearch) {
+ pluginZip = project.configurations.getByName("_plugin_bwc_${prefix}_${plugin.path}")
+ } else {
+ pluginZip = project.configurations.getByName("_plugin_${prefix}_${plugin.path}")
+ }
// delay reading the file location until execution time by wrapping in a closure within a GString
- Object file = "${-> new File(node.pluginsTmpDir, pluginZip.singleFile.getName()).toURI().toURL().toString()}"
- Object[] args = [new File(node.homeDir, 'bin/elasticsearch-plugin'), 'install', file]
+ final Object file = "${-> new File(node.pluginsTmpDir, pluginZip.singleFile.getName()).toURI().toURL().toString()}"
+ /*
+ * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to getting
+ * the short name requiring the path to already exist.
+ */
+ final Object esPluginUtil = "${-> node.binPath().resolve('elasticsearch-plugin').toString()}"
+ final Object[] args = [esPluginUtil, 'install', file]
return configureExecTask(name, project, setup, node, args)
}
@@ -490,7 +604,7 @@ class ClusterFormationTasks {
anyNodeFailed |= node.failedMarker.exists()
}
if (ant.properties.containsKey("failed${name}".toString()) || anyNodeFailed) {
- waitFailed(nodes, logger, 'Failed to start elasticsearch')
+ waitFailed(project, nodes, logger, 'Failed to start elasticsearch')
}
// go through each node checking the wait condition
@@ -507,14 +621,14 @@ class ClusterFormationTasks {
}
if (success == false) {
- waitFailed(nodes, logger, 'Elasticsearch cluster failed to pass wait condition')
+ waitFailed(project, nodes, logger, 'Elasticsearch cluster failed to pass wait condition')
}
}
}
return wait
}
- static void waitFailed(List<NodeInfo> nodes, Logger logger, String msg) {
+ static void waitFailed(Project project, List<NodeInfo> nodes, Logger logger, String msg) {
for (NodeInfo node : nodes) {
if (logger.isInfoEnabled() == false) {
// We already log the command at info level. No need to do it twice.
@@ -534,6 +648,17 @@ class ClusterFormationTasks {
logger.error("|\n| [log]")
node.startLog.eachLine { line -> logger.error("| ${line}") }
}
+ if (node.pidFile.exists() && node.failedMarker.exists() == false &&
+ (node.httpPortsFile.exists() == false || node.transportPortsFile.exists() == false)) {
+ logger.error("|\n| [jstack]")
+ String pid = node.pidFile.getText('UTF-8')
+ ByteArrayOutputStream output = new ByteArrayOutputStream()
+ project.exec {
+ commandLine = ["${project.javaHome}/bin/jstack", pid]
+ standardOutput = output
+ }
+ output.toString('UTF-8').eachLine { line -> logger.error("| ${line}") }
+ }
logger.error("|-----------------------------------------")
}
throw new GradleException(msg)
@@ -558,11 +683,11 @@ class ClusterFormationTasks {
standardOutput = new ByteArrayOutputStream()
doLast {
String out = standardOutput.toString()
- if (out.contains("${pid} org.elasticsearch.bootstrap.Elasticsearch") == false) {
+ if (out.contains("${ext.pid} org.elasticsearch.bootstrap.Elasticsearch") == false) {
logger.error('jps -l')
logger.error(out)
- logger.error("pid file: ${pidFile}")
- logger.error("pid: ${pid}")
+ logger.error("pid file: ${node.pidFile}")
+ logger.error("pid: ${ext.pid}")
throw new GradleException("jps -l did not report any process with org.elasticsearch.bootstrap.Elasticsearch\n" +
"Did you run gradle clean? Maybe an old pid file is still lying around.")
} else {
@@ -599,11 +724,11 @@ class ClusterFormationTasks {
}
/** Returns a unique task name for this task and node configuration */
- static String taskName(Task parentTask, NodeInfo node, String action) {
+ static String taskName(String prefix, NodeInfo node, String action) {
if (node.config.numNodes > 1) {
- return "${parentTask.name}#node${node.nodeNum}.${action}"
+ return "${prefix}#node${node.nodeNum}.${action}"
} else {
- return "${parentTask.name}#${action}"
+ return "${prefix}#${action}"
}
}
@@ -625,4 +750,11 @@ class ClusterFormationTasks {
project.ant.project.removeBuildListener(listener)
return retVal
}
+
+ static void verifyProjectHasBuildPlugin(String name, String version, Project project, Project pluginProject) {
+ if (pluginProject.plugins.hasPlugin(PluginBuildPlugin) == false) {
+ throw new GradleException("Task [${name}] cannot add plugin [${pluginProject.path}] with version [${version}] to project's " +
+ "[${project.path}] dependencies: the plugin is not an esplugin")
+ }
+ }
}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/Fixture.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/Fixture.groovy
index 46b81624ba3fa..498a1627b3598 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/Fixture.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/Fixture.groovy
@@ -16,272 +16,15 @@
* specific language governing permissions and limitations
* under the License.
*/
-
package org.elasticsearch.gradle.test
-import org.apache.tools.ant.taskdefs.condition.Os
-import org.elasticsearch.gradle.AntTask
-import org.elasticsearch.gradle.LoggedExec
-import org.gradle.api.GradleException
-import org.gradle.api.Task
-import org.gradle.api.tasks.Exec
-import org.gradle.api.tasks.Input
-
/**
- * A fixture for integration tests which runs in a separate process.
+ * Any object that can produce an accompanying stop task, meant to tear down
+ * a previously instantiated service.
*/
-public class Fixture extends AntTask {
-
- /** The path to the executable that starts the fixture. */
- @Input
- String executable
-
- private final List<Object> arguments = new ArrayList<>()
-
- @Input
- public void args(Object... args) {
- arguments.addAll(args)
- }
-
- /**
- * Environment variables for the fixture process. The value can be any object, which
- * will have toString() called at execution time.
- */
- private final Map<String, Object> environment = new HashMap<>()
-
- @Input
- public void env(String key, Object value) {
- environment.put(key, value)
- }
-
- /** A flag to indicate whether the command should be executed from a shell. */
- @Input
- boolean useShell = false
-
- /**
- * A flag to indicate whether the fixture should be run in the foreground, or spawned.
- * It is protected so subclasses can override (eg RunTask).
- */
- protected boolean spawn = true
-
- /**
- * A closure to call before the fixture is considered ready. The closure is passed the fixture object,
- * as well as a groovy AntBuilder, to enable running ant condition checks. The default wait
- * condition is for http on the http port.
- */
- @Input
- Closure waitCondition = { Fixture fixture, AntBuilder ant ->
- File tmpFile = new File(fixture.cwd, 'wait.success')
- ant.get(src: "http://${fixture.addressAndPort}",
- dest: tmpFile.toString(),
- ignoreerrors: true, // do not fail on error, so logging information can be flushed
- retries: 10)
- return tmpFile.exists()
- }
+public interface Fixture {
/** A task which will stop this fixture. This should be used as a finalizedBy for any tasks that use the fixture. */
- public final Task stopTask
-
- public Fixture() {
- stopTask = createStopTask()
- finalizedBy(stopTask)
- }
-
- @Override
- protected void runAnt(AntBuilder ant) {
- project.delete(baseDir) // reset everything
- cwd.mkdirs()
- final String realExecutable
- final List realArgs = new ArrayList<>()
- final Map realEnv = environment
- // We need to choose which executable we are using. In shell mode, or when we
- // are spawning and thus using the wrapper script, the executable is the shell.
- if (useShell || spawn) {
- if (Os.isFamily(Os.FAMILY_WINDOWS)) {
- realExecutable = 'cmd'
- realArgs.add('/C')
- realArgs.add('"') // quote the entire command
- } else {
- realExecutable = 'sh'
- }
- } else {
- realExecutable = executable
- realArgs.addAll(arguments)
- }
- if (spawn) {
- writeWrapperScript(executable)
- realArgs.add(wrapperScript)
- realArgs.addAll(arguments)
- }
- if (Os.isFamily(Os.FAMILY_WINDOWS) && (useShell || spawn)) {
- realArgs.add('"')
- }
- commandString.eachLine { line -> logger.info(line) }
-
- ant.exec(executable: realExecutable, spawn: spawn, dir: cwd, taskname: name) {
- realEnv.each { key, value -> env(key: key, value: value) }
- realArgs.each { arg(value: it) }
- }
-
- String failedProp = "failed${name}"
- // first wait for resources, or the failure marker from the wrapper script
- ant.waitfor(maxwait: '30', maxwaitunit: 'second', checkevery: '500', checkeveryunit: 'millisecond', timeoutproperty: failedProp) {
- or {
- resourceexists {
- file(file: failureMarker.toString())
- }
- and {
- resourceexists {
- file(file: pidFile.toString())
- }
- resourceexists {
- file(file: portsFile.toString())
- }
- }
- }
- }
-
- if (ant.project.getProperty(failedProp) || failureMarker.exists()) {
- fail("Failed to start ${name}")
- }
-
- // the process is started (has a pid) and is bound to a network interface
- // so now wait undil the waitCondition has been met
- // TODO: change this to a loop?
- boolean success
- try {
- success = waitCondition(this, ant) == false
- } catch (Exception e) {
- String msg = "Wait condition caught exception for ${name}"
- logger.error(msg, e)
- fail(msg, e)
- }
- if (success == false) {
- fail("Wait condition failed for ${name}")
- }
- }
-
- /** Returns a debug string used to log information about how the fixture was run. */
- protected String getCommandString() {
- String commandString = "\n${name} configuration:\n"
- commandString += "-----------------------------------------\n"
- commandString += " cwd: ${cwd}\n"
- commandString += " command: ${executable} ${arguments.join(' ')}\n"
- commandString += ' environment:\n'
- environment.each { k, v -> commandString += " ${k}: ${v}\n" }
- if (spawn) {
- commandString += "\n [${wrapperScript.name}]\n"
- wrapperScript.eachLine('UTF-8', { line -> commandString += " ${line}\n"})
- }
- return commandString
- }
-
- /**
- * Writes a script to run the real executable, so that stdout/stderr can be captured.
- * TODO: this could be removed if we do use our own ProcessBuilder and pump output from the process
- */
- private void writeWrapperScript(String executable) {
- wrapperScript.parentFile.mkdirs()
- String argsPasser = '"$@"'
- String exitMarker = "; if [ \$? != 0 ]; then touch run.failed; fi"
- if (Os.isFamily(Os.FAMILY_WINDOWS)) {
- argsPasser = '%*'
- exitMarker = "\r\n if \"%errorlevel%\" neq \"0\" ( type nul >> run.failed )"
- }
- wrapperScript.setText("\"${executable}\" ${argsPasser} > run.log 2>&1 ${exitMarker}", 'UTF-8')
- }
-
- /** Fail the build with the given message, and logging relevant info*/
- private void fail(String msg, Exception... suppressed) {
- if (logger.isInfoEnabled() == false) {
- // We already log the command at info level. No need to do it twice.
- commandString.eachLine { line -> logger.error(line) }
- }
- logger.error("${name} output:")
- logger.error("-----------------------------------------")
- logger.error(" failure marker exists: ${failureMarker.exists()}")
- logger.error(" pid file exists: ${pidFile.exists()}")
- logger.error(" ports file exists: ${portsFile.exists()}")
- // also dump the log file for the startup script (which will include ES logging output to stdout)
- if (runLog.exists()) {
- logger.error("\n [log]")
- runLog.eachLine { line -> logger.error(" ${line}") }
- }
- logger.error("-----------------------------------------")
- GradleException toThrow = new GradleException(msg)
- for (Exception e : suppressed) {
- toThrow.addSuppressed(e)
- }
- throw toThrow
- }
-
- /** Adds a task to kill an elasticsearch node with the given pidfile */
- private Task createStopTask() {
- final Fixture fixture = this
- final Object pid = "${ -> fixture.pid }"
- Exec stop = project.tasks.create(name: "${name}#stop", type: LoggedExec)
- stop.onlyIf { fixture.pidFile.exists() }
- stop.doFirst {
- logger.info("Shutting down ${fixture.name} with pid ${pid}")
- }
- if (Os.isFamily(Os.FAMILY_WINDOWS)) {
- stop.executable = 'Taskkill'
- stop.args('/PID', pid, '/F')
- } else {
- stop.executable = 'kill'
- stop.args('-9', pid)
- }
- stop.doLast {
- project.delete(fixture.pidFile)
- }
- return stop
- }
-
- /**
- * A path relative to the build dir that all configuration and runtime files
- * will live in for this fixture
- */
- protected File getBaseDir() {
- return new File(project.buildDir, "fixtures/${name}")
- }
-
- /** Returns the working directory for the process. Defaults to "cwd" inside baseDir. */
- protected File getCwd() {
- return new File(baseDir, 'cwd')
- }
-
- /** Returns the file the process writes its pid to. Defaults to "pid" inside baseDir. */
- protected File getPidFile() {
- return new File(baseDir, 'pid')
- }
-
- /** Reads the pid file and returns the process' pid */
- public int getPid() {
- return Integer.parseInt(pidFile.getText('UTF-8').trim())
- }
-
- /** Returns the file the process writes its bound ports to. Defaults to "ports" inside baseDir. */
- protected File getPortsFile() {
- return new File(baseDir, 'ports')
- }
-
- /** Returns an address and port suitable for a uri to connect to this node over http */
- public String getAddressAndPort() {
- return portsFile.readLines("UTF-8").get(0)
- }
-
- /** Returns a file that wraps around the actual command when {@code spawn == true}. */
- protected File getWrapperScript() {
- return new File(cwd, Os.isFamily(Os.FAMILY_WINDOWS) ? 'run.bat' : 'run')
- }
-
- /** Returns a file that the wrapper script writes when the command failed. */
- protected File getFailureMarker() {
- return new File(cwd, 'run.failed')
- }
+ public Object getStopTask()
- /** Returns a file that the wrapper script writes when the command failed. */
- protected File getRunLog() {
- return new File(cwd, 'run.log')
- }
}
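With `Fixture` reduced to an interface, anything that can hand back a stop task can now serve as a cluster dependency (see the `dependencies` handling in `ClusterFormationTasks`). A minimal hypothetical implementer:

```groovy
// Hypothetical Fixture implementer: wraps an externally managed service whose
// teardown is a pre-existing Gradle task.
class ExampleServiceFixture implements Fixture {
    private final Object stop

    ExampleServiceFixture(Object stopTask) {
        this.stop = stopTask
    }

    @Override
    Object getStopTask() {
        return stop
    }
}
```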
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/JNAKernel32Library.java b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/JNAKernel32Library.java
new file mode 100644
index 0000000000000..4d069cd434fc0
--- /dev/null
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/JNAKernel32Library.java
@@ -0,0 +1,25 @@
+package org.elasticsearch.gradle.test;
+
+import com.sun.jna.Native;
+import com.sun.jna.WString;
+import org.apache.tools.ant.taskdefs.condition.Os;
+
+public class JNAKernel32Library {
+
+ private static final class Holder {
+ private static final JNAKernel32Library instance = new JNAKernel32Library();
+ }
+
+ static JNAKernel32Library getInstance() {
+ return Holder.instance;
+ }
+
+ private JNAKernel32Library() {
+ if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+ Native.register("kernel32");
+ }
+ }
+
+ native int GetShortPathNameW(WString lpszLongPath, char[] lpszShortPath, int cchBuffer);
+
+}
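`GetShortPathNameW` follows the usual Win32 two-call protocol: a first call with a null buffer returns the required length, and a second call fills an allocated buffer; `NodeInfo.getShortPathName` below does exactly this. A hedged sketch of a caller (the path is made up, and this only works on Windows since `Native.register` is conditional):

```groovy
// Hypothetical caller of the JNA binding above, mirroring the two-call pattern.
import com.sun.jna.Native
import com.sun.jna.WString

WString longPath = new WString('\\\\?\\C:\\a\\long path\\that exists')
int length = JNAKernel32Library.getInstance().GetShortPathNameW(longPath, null, 0) // sizing call
char[] buffer = new char[length]
JNAKernel32Library.getInstance().GetShortPathNameW(longPath, buffer, length)       // filling call
println Native.toString(buffer).substring(4) // drop the \\?\ prefix for cmd.exe
```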
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy
index 5d9961a0425ee..9a237c628de98 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy
@@ -18,10 +18,15 @@
*/
package org.elasticsearch.gradle.test
+import com.sun.jna.Native
+import com.sun.jna.WString
import org.apache.tools.ant.taskdefs.condition.Os
import org.gradle.api.InvalidUserDataException
import org.gradle.api.Project
-import org.gradle.api.Task
+
+import java.nio.file.Files
+import java.nio.file.Path
+import java.nio.file.Paths
/**
* A container for the files and configuration associated with a single node in a test cluster.
@@ -57,6 +62,9 @@ class NodeInfo {
/** config directory */
File confDir
+ /** data directory (as an Object, to allow lazy evaluation) */
+ Object dataDir
+
/** THE config file */
File configFile
@@ -82,24 +90,40 @@ class NodeInfo {
String executable
/** Path to the elasticsearch start script */
- File esScript
+ private Object esScript
/** script to run when running in the background */
- File wrapperScript
+ private File wrapperScript
/** buffer for ant output when starting this node */
ByteArrayOutputStream buffer = new ByteArrayOutputStream()
- /** Creates a node to run as part of a cluster for the given task */
- NodeInfo(ClusterConfiguration config, int nodeNum, Project project, Task task, String nodeVersion, File sharedDir) {
+ /** the version of elasticsearch that this node runs */
+ String nodeVersion
+
+ /** Holds node configuration for part of a test cluster. */
+ NodeInfo(ClusterConfiguration config, int nodeNum, Project project, String prefix, String nodeVersion, File sharedDir) {
this.config = config
this.nodeNum = nodeNum
this.sharedDir = sharedDir
- clusterName = "${task.path.replace(':', '_').substring(1)}"
- baseDir = new File(project.buildDir, "cluster/${task.name} node${nodeNum}")
+ if (config.clusterName != null) {
+ clusterName = config.clusterName
+ } else {
+ clusterName = project.path.replace(':', '_').substring(1) + '_' + prefix
+ }
+ baseDir = new File(project.buildDir, "cluster/${prefix} node${nodeNum}")
pidFile = new File(baseDir, 'es.pid')
+ this.nodeVersion = nodeVersion
homeDir = homeDir(baseDir, config.distribution, nodeVersion)
confDir = confDir(baseDir, config.distribution, nodeVersion)
+ if (config.dataDir != null) {
+ if (config.numNodes != 1) {
+ throw new IllegalArgumentException("Cannot set data dir for integ test with more than one node")
+ }
+ dataDir = config.dataDir
+ } else {
+ dataDir = new File(homeDir, "data")
+ }
configFile = new File(confDir, 'elasticsearch.yml')
// even for rpm/deb, the logs are under home because we don't start with real services
File logsDir = new File(homeDir, 'logs')
@@ -116,22 +140,37 @@ class NodeInfo {
args.add('/C')
args.add('"') // quote the entire command
wrapperScript = new File(cwd, "run.bat")
- esScript = new File(homeDir, 'bin/elasticsearch.bat')
+ /*
+ * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to
+ * getting the short name requiring the path to already exist.
+ */
+ esScript = "${-> binPath().resolve('elasticsearch.bat').toString()}"
} else {
- executable = 'sh'
+ executable = 'bash'
wrapperScript = new File(cwd, "run")
- esScript = new File(homeDir, 'bin/elasticsearch')
+ esScript = binPath().resolve('elasticsearch')
}
if (config.daemonize) {
- args.add("${wrapperScript}")
+ if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+ /*
+ * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to
+ * getting the short name requiring the path to already exist.
+ */
+ args.add("${-> getShortPathName(wrapperScript.toString())}")
+ } else {
+ args.add("${wrapperScript}")
+ }
} else {
args.add("${esScript}")
}
- env = [ 'JAVA_HOME' : project.javaHome ]
+ env = ['JAVA_HOME': project.javaHome]
args.addAll("-E", "node.portsfile=true")
String collectedSystemProperties = config.systemProperties.collect { key, value -> "-D${key}=${value}" }.join(" ")
String esJavaOpts = config.jvmArgs.isEmpty() ? collectedSystemProperties : collectedSystemProperties + " " + config.jvmArgs
+ if (Boolean.parseBoolean(System.getProperty('tests.asserts', 'true'))) {
+ esJavaOpts += " -ea -esa"
+ }
env.put('ES_JAVA_OPTS', esJavaOpts)
for (Map.Entry<String, String> property : System.properties.entrySet()) {
if (property.key.startsWith('tests.es.')) {
@@ -139,13 +178,68 @@ class NodeInfo {
args.add("${property.key.substring('tests.es.'.size())}=${property.value}")
}
}
- env.put('ES_JVM_OPTIONS', new File(confDir, 'jvm.options'))
- args.addAll("-E", "path.conf=${confDir}")
+ final File jvmOptionsFile = new File(confDir, 'jvm.options')
+ if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+ /*
+ * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to
+ * getting the short name requiring the path to already exist.
+ */
+ env.put('ES_JVM_OPTIONS', "${-> getShortPathName(jvmOptionsFile.toString())}")
+ } else {
+ env.put('ES_JVM_OPTIONS', jvmOptionsFile)
+ }
+ if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+ /*
+ * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to
+ * getting the short name requiring the path to already exist.
+ */
+ args.addAll("-E", "path.conf=${-> getShortPathName(confDir.toString())}")
+ } else {
+ args.addAll("-E", "path.conf=${confDir}")
+ }
+ if (!System.properties.containsKey("tests.es.path.data")) {
+ if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+ /*
+ * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to
+ * getting the short name requiring the path to already exist. This one is extra tricky because usually we rely on the node
+ * creating its data directory on startup but we simply cannot do that here because getting the short path name requires
+ * the directory to already exist. Therefore, we create this directory immediately before getting the short name.
+ */
+ args.addAll("-E", "path.data=${-> Files.createDirectories(Paths.get(dataDir.toString())); getShortPathName(dataDir.toString())}")
+ } else {
+ args.addAll("-E", "path.data=${-> dataDir.toString()}")
+ }
+ }
if (Os.isFamily(Os.FAMILY_WINDOWS)) {
args.add('"') // end the entire command, quoted
}
}
+ Path binPath() {
+ if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+ return Paths.get(getShortPathName(new File(homeDir, 'bin').toString()))
+ } else {
+ return Paths.get(new File(homeDir, 'bin').toURI())
+ }
+ }
+
+ static String getShortPathName(String path) {
+ assert Os.isFamily(Os.FAMILY_WINDOWS)
+ final WString longPath = new WString("\\\\?\\" + path)
+ // first we get the length of the buffer needed
+ final int length = JNAKernel32Library.getInstance().GetShortPathNameW(longPath, null, 0)
+ if (length == 0) {
+ throw new IllegalStateException("path [" + path + "] encountered error [" + Native.getLastError() + "]")
+ }
+ final char[] shortPath = new char[length]
+ // knowing the length of the buffer, now we get the short name
+ if (JNAKernel32Library.getInstance().GetShortPathNameW(longPath, shortPath, length) == 0) {
+ throw new IllegalStateException("path [" + path + "] encountered error [" + Native.getLastError() + "]")
+ }
+ // we have to strip the \\?\ away from the path for cmd.exe
+ return Native.toString(shortPath).substring(4)
+ }
+
/** Returns debug string for the command that started this node. */
String getCommandString() {
String esCommandString = "\nNode ${nodeNum} configuration:\n"
@@ -184,6 +278,19 @@ class NodeInfo {
return transportPortsFile.readLines("UTF-8").get(0)
}
+ /** Returns the file which contains the transport protocol ports for this node */
+ File getTransportPortsFile() {
+ return transportPortsFile
+ }
+
+ /** Returns the data directory for this node */
+ File getDataDir() {
+ if (!(dataDir instanceof File)) {
+ return new File(dataDir)
+ }
+ return dataDir
+ }
+
/** Returns the directory elasticsearch home is contained in for the given distribution */
static File homeDir(File baseDir, String distro, String nodeVersion) {
String path
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestIntegTestTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestIntegTestTask.groovy
index c6463d2881185..859c89c693806 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestIntegTestTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestIntegTestTask.groovy
@@ -20,80 +20,114 @@ package org.elasticsearch.gradle.test
import com.carrotsearch.gradle.junit4.RandomizedTestingTask
import org.elasticsearch.gradle.BuildPlugin
+import org.gradle.api.DefaultTask
import org.gradle.api.Task
+import org.gradle.api.execution.TaskExecutionAdapter
import org.gradle.api.internal.tasks.options.Option
import org.gradle.api.plugins.JavaBasePlugin
import org.gradle.api.tasks.Input
-import org.gradle.util.ConfigureUtil
+import org.gradle.api.tasks.TaskState
+
+import java.nio.charset.StandardCharsets
+import java.nio.file.Files
+import java.util.stream.Stream
/**
- * Runs integration tests, but first starts an ES cluster,
- * and passes the ES cluster info as parameters to the tests.
+ * A wrapper task around setting up a cluster and running rest tests.
*/
-public class RestIntegTestTask extends RandomizedTestingTask {
+public class RestIntegTestTask extends DefaultTask {
+
+ protected ClusterConfiguration clusterConfig
+
+ protected RandomizedTestingTask runner
- ClusterConfiguration clusterConfig
+ protected Task clusterInit
+
+ /** Info about nodes in the integ test cluster. Note this is *not* available until runtime. */
+ List<NodeInfo> nodes
/** Flag indicating whether the rest tests in the rest spec should be run. */
@Input
boolean includePackaged = false
public RestIntegTestTask() {
- description = 'Runs rest tests against an elasticsearch cluster.'
- group = JavaBasePlugin.VERIFICATION_GROUP
- dependsOn(project.testClasses)
- classpath = project.sourceSets.test.runtimeClasspath
- testClassesDir = project.sourceSets.test.output.classesDir
- clusterConfig = new ClusterConfiguration(project)
+ runner = project.tasks.create("${name}Runner", RandomizedTestingTask.class)
+ super.dependsOn(runner)
+ clusterInit = project.tasks.create(name: "${name}Cluster#init", dependsOn: project.testClasses)
+ runner.dependsOn(clusterInit)
+ runner.classpath = project.sourceSets.test.runtimeClasspath
+ runner.testClassesDir = project.sourceSets.test.output.classesDir
+ clusterConfig = project.extensions.create("${name}Cluster", ClusterConfiguration.class, project)
// start with the common test configuration
- configure(BuildPlugin.commonTestConfig(project))
+ runner.configure(BuildPlugin.commonTestConfig(project))
// override/add more for rest tests
- parallelism = '1'
- include('**/*IT.class')
- systemProperty('tests.rest.load_packaged', 'false')
+ runner.parallelism = '1'
+ runner.include('**/*IT.class')
+ runner.systemProperty('tests.rest.load_packaged', 'false')
+ // we pass all nodes to the rest cluster to allow the clients to round-robin between them
+ // this is more realistic than just talking to a single node
+ runner.systemProperty('tests.rest.cluster', "${-> nodes.collect{it.httpUri()}.join(",")}")
+ runner.systemProperty('tests.config.dir', "${-> nodes[0].confDir}")
+ // TODO: our "client" qa tests currently use the rest-test plugin. instead they should have their own plugin
+ // that sets up the test cluster and passes this transport uri instead of http uri. Until then, we pass
+ // both as separate sysprops
+ runner.systemProperty('tests.cluster', "${-> nodes[0].transportUri()}")
+
+ // dump errors and warnings from cluster log on failure
+ TaskExecutionAdapter logDumpListener = new TaskExecutionAdapter() {
+ @Override
+ void afterExecute(Task task, TaskState state) {
+ if (state.failure != null) {
+ for (NodeInfo nodeInfo : nodes) {
+ printLogExcerpt(nodeInfo)
+ }
+ }
+ }
+ }
+ runner.doFirst {
+ project.gradle.addListener(logDumpListener)
+ }
+ runner.doLast {
+ project.gradle.removeListener(logDumpListener)
+ }
// copy the rest spec/tests into the test resources
RestSpecHack.configureDependencies(project)
project.afterEvaluate {
- dependsOn(RestSpecHack.configureTask(project, includePackaged))
+ runner.dependsOn(RestSpecHack.configureTask(project, includePackaged))
}
// this must run after all projects have been configured, so we know any project
// references can be accessed as fully configured projects
project.gradle.projectsEvaluated {
- NodeInfo node = ClusterFormationTasks.setup(project, this, clusterConfig)
- systemProperty('tests.rest.cluster', "${-> node.httpUri()}")
- systemProperty('tests.config.dir', "${-> node.confDir}")
- // TODO: our "client" qa tests currently use the rest-test plugin. instead they should have their own plugin
- // that sets up the test cluster and passes this transport uri instead of http uri. Until then, we pass
- // both as separate sysprops
- systemProperty('tests.cluster', "${-> node.transportUri()}")
+ if (enabled == false) {
+ runner.enabled = false
+ clusterInit.enabled = false
+ return // no need to add cluster formation tasks if the task won't run!
+ }
+ nodes = ClusterFormationTasks.setup(project, "${name}Cluster", runner, clusterConfig)
+ super.dependsOn(runner.finalizedBy)
}
}
@Option(
- option = "debug-jvm",
- description = "Enable debugging configuration, to allow attaching a debugger to elasticsearch."
+ option = "debug-jvm",
+ description = "Enable debugging configuration, to allow attaching a debugger to elasticsearch."
)
public void setDebug(boolean enabled) {
clusterConfig.debug = enabled;
}
- @Input
- public void cluster(Closure closure) {
- ConfigureUtil.configure(closure, clusterConfig)
- }
-
- public ClusterConfiguration getCluster() {
- return clusterConfig
+ public List<NodeInfo> getNodes() {
+ return nodes
}
@Override
public Task dependsOn(Object... dependencies) {
- super.dependsOn(dependencies)
+ runner.dependsOn(dependencies)
for (Object dependency : dependencies) {
if (dependency instanceof Fixture) {
- finalizedBy(((Fixture)dependency).stopTask)
+ runner.finalizedBy(((Fixture)dependency).getStopTask())
}
}
return this
@@ -101,11 +135,54 @@ public class RestIntegTestTask extends RandomizedTestingTask {
@Override
public void setDependsOn(Iterable<?> dependencies) {
- super.setDependsOn(dependencies)
+ runner.setDependsOn(dependencies)
for (Object dependency : dependencies) {
if (dependency instanceof Fixture) {
- finalizedBy(((Fixture)dependency).stopTask)
+ runner.finalizedBy(((Fixture)dependency).getStopTask())
+ }
+ }
+ }
+
+ @Override
+ public Task mustRunAfter(Object... tasks) {
+ clusterInit.mustRunAfter(tasks)
+ }
+
+ /** Print out an excerpt of the log from the given node. */
+ protected static void printLogExcerpt(NodeInfo nodeInfo) {
+ File logFile = new File(nodeInfo.homeDir, "logs/${nodeInfo.clusterName}.log")
+ println("\nCluster ${nodeInfo.clusterName} - node ${nodeInfo.nodeNum} log excerpt:")
+ println("(full log at ${logFile})")
+ println('-----------------------------------------')
+ Stream<String> stream = Files.lines(logFile.toPath(), StandardCharsets.UTF_8)
+ try {
+ boolean inStartup = true
+ boolean inExcerpt = false
+ int linesSkipped = 0
+ for (String line : stream) {
+ if (line.startsWith("[")) {
+ inExcerpt = false // clear with the next log message
+ }
+ if (line =~ /(\[WARN\])|(\[ERROR\])/) {
+ inExcerpt = true // show warnings and errors
+ }
+ if (inStartup || inExcerpt) {
+ if (linesSkipped != 0) {
+ println("... SKIPPED ${linesSkipped} LINES ...")
+ }
+ println(line)
+ linesSkipped = 0
+ } else {
+ ++linesSkipped
+ }
+ if (line =~ /recovered \[\d+\] indices into cluster_state/) {
+ inStartup = false
+ }
}
+ } finally {
+ stream.close()
}
+ println('=========================================')
+
}
}
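After this refactor a `RestIntegTestTask` is only a coordinator: the tests themselves run in the auto-created `"${name}Runner"` task and the cluster is configured through the `"${name}Cluster"` extension. A hypothetical build-script view:

```groovy
// Hypothetical usage: creating 'integTest' also creates 'integTestRunner' and
// the 'integTestCluster' extension used to configure the cluster.
task integTest(type: RestIntegTestTask) {
    includePackaged = true
}

integTestCluster {
    numNodes = 2
    distribution = 'zip'
}
```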
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestSpecHack.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestSpecHack.groovy
index 43b5c2f6f38d2..296ae7115789f 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestSpecHack.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestSpecHack.groovy
@@ -43,18 +43,22 @@ public class RestSpecHack {
}
/**
- * Creates a task to copy the rest spec files.
+ * Creates a task (if necessary) to copy the rest spec files.
*
* @param project The project to add the copy task to
* @param includePackagedTests true if the packaged tests should be copied, false otherwise
*/
public static Task configureTask(Project project, boolean includePackagedTests) {
+ Task copyRestSpec = project.tasks.findByName('copyRestSpec')
+ if (copyRestSpec != null) {
+ return copyRestSpec
+ }
Map copyRestSpecProps = [
name : 'copyRestSpec',
type : Copy,
dependsOn: [project.configurations.restSpec, 'processTestResources']
]
- Task copyRestSpec = project.tasks.create(copyRestSpecProps) {
+ copyRestSpec = project.tasks.create(copyRestSpecProps) {
from { project.zipTree(project.configurations.restSpec.singleFile) }
include 'rest-api-spec/api/**'
if (includePackagedTests) {
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestTestPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestTestPlugin.groovy
index dc9aa7693889d..da1462412812a 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestTestPlugin.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestTestPlugin.groovy
@@ -18,18 +18,35 @@
*/
package org.elasticsearch.gradle.test
+import org.elasticsearch.gradle.BuildPlugin
+import org.gradle.api.InvalidUserDataException
import org.gradle.api.Plugin
import org.gradle.api.Project
+import org.gradle.api.plugins.JavaBasePlugin
-/** A plugin to add rest integration tests. Used for qa projects. */
+/**
+ * Adds support for starting an Elasticsearch cluster before running integration
+ * tests. Used in conjunction with {@link StandaloneRestTestPlugin} for qa
+ * projects and in conjunction with {@link BuildPlugin} for testing the rest
+ * client.
+ */
public class RestTestPlugin implements Plugin<Project> {
+ List<String> REQUIRED_PLUGINS = [
+ 'elasticsearch.build',
+ 'elasticsearch.standalone-rest-test']
@Override
public void apply(Project project) {
- project.pluginManager.apply(StandaloneTestBasePlugin)
+ if (false == REQUIRED_PLUGINS.any {project.pluginManager.hasPlugin(it)}) {
+ throw new InvalidUserDataException('elasticsearch.rest-test '
+ + 'requires either elasticsearch.build or '
+ + 'elasticsearch.standalone-rest-test')
+ }
RestIntegTestTask integTest = project.tasks.create('integTest', RestIntegTestTask.class)
- integTest.cluster.distribution = 'zip' // rest tests should run with the real zip
+ integTest.description = 'Runs rest tests against an elasticsearch cluster.'
+ integTest.group = JavaBasePlugin.VERIFICATION_GROUP
+ integTest.clusterConfig.distribution = 'zip' // rest tests should run with the real zip
integTest.mustRunAfter(project.precommit)
project.check.dependsOn(integTest)
}
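A qa project would therefore pair the plugin with one of its prerequisites; a minimal hypothetical `build.gradle`:

```groovy
// rest-test now refuses to apply on its own; elasticsearch.standalone-rest-test
// (or elasticsearch.build) must be applied first.
apply plugin: 'elasticsearch.standalone-rest-test'
apply plugin: 'elasticsearch.rest-test'

integTestCluster {
    numNodes = 1
}
```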
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RunTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RunTask.groovy
index f045c95740ba6..a88152d7865ff 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RunTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RunTask.groovy
@@ -16,8 +16,9 @@ public class RunTask extends DefaultTask {
clusterConfig.httpPort = 9200
clusterConfig.transportPort = 9300
clusterConfig.daemonize = false
+ clusterConfig.distribution = 'zip'
project.afterEvaluate {
- ClusterFormationTasks.setup(project, this, clusterConfig)
+ ClusterFormationTasks.setup(project, name, this, clusterConfig)
}
}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneRestTestPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneRestTestPlugin.groovy
new file mode 100644
index 0000000000000..66868aafac0b0
--- /dev/null
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneRestTestPlugin.groovy
@@ -0,0 +1,64 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+
+package org.elasticsearch.gradle.test
+
+import com.carrotsearch.gradle.junit4.RandomizedTestingPlugin
+import org.elasticsearch.gradle.BuildPlugin
+import org.elasticsearch.gradle.VersionProperties
+import org.elasticsearch.gradle.precommit.PrecommitTasks
+import org.gradle.api.InvalidUserDataException
+import org.gradle.api.Plugin
+import org.gradle.api.Project
+import org.gradle.api.plugins.JavaBasePlugin
+
+/**
+ * Configures the build to compile tests against Elasticsearch's test framework
+ * and run REST tests. Use BuildPlugin if you want to build main code as well
+ * as tests.
+ */
+public class StandaloneRestTestPlugin implements Plugin<Project> {
+
+ @Override
+ public void apply(Project project) {
+ if (project.pluginManager.hasPlugin('elasticsearch.build')) {
+ throw new InvalidUserDataException('elasticsearch.standalone-test '
+ + 'elasticsearch.standalone-rest-test, and elasticsearch.build '
+ + 'are mutually exclusive')
+ }
+ project.pluginManager.apply(JavaBasePlugin)
+ project.pluginManager.apply(RandomizedTestingPlugin)
+
+ BuildPlugin.globalBuildInfo(project)
+ BuildPlugin.configureRepositories(project)
+
+ // only setup tests to build
+ project.sourceSets.create('test')
+ project.dependencies.add('testCompile', "org.elasticsearch.test:framework:${VersionProperties.elasticsearch}")
+
+ project.eclipse.classpath.sourceSets = [project.sourceSets.test]
+ project.eclipse.classpath.plusConfigurations = [project.configurations.testRuntime]
+ project.idea.module.testSourceDirs += project.sourceSets.test.java.srcDirs
+ project.idea.module.scopes['TEST'] = [plus: [project.configurations.testRuntime]]
+
+ PrecommitTasks.create(project, false)
+ project.check.dependsOn(project.precommit)
+ }
+}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneTestBasePlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneTestBasePlugin.groovy
deleted file mode 100644
index af2b20e4abfab..0000000000000
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneTestBasePlugin.groovy
+++ /dev/null
@@ -1,54 +0,0 @@
-/*
- * Licensed to Elasticsearch under one or more contributor
- * license agreements. See the NOTICE file distributed with
- * this work for additional information regarding copyright
- * ownership. Elasticsearch licenses this file to you under
- * the Apache License, Version 2.0 (the "License"); you may
- * not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied. See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-
-
-package org.elasticsearch.gradle.test
-
-import com.carrotsearch.gradle.junit4.RandomizedTestingPlugin
-import org.elasticsearch.gradle.BuildPlugin
-import org.elasticsearch.gradle.VersionProperties
-import org.elasticsearch.gradle.precommit.PrecommitTasks
-import org.gradle.api.Plugin
-import org.gradle.api.Project
-import org.gradle.api.plugins.JavaBasePlugin
-
-/** Configures the build to have a rest integration test. */
-public class StandaloneTestBasePlugin implements Plugin {
-
- @Override
- public void apply(Project project) {
- project.pluginManager.apply(JavaBasePlugin)
- project.pluginManager.apply(RandomizedTestingPlugin)
-
- BuildPlugin.globalBuildInfo(project)
- BuildPlugin.configureRepositories(project)
-
- // only setup tests to build
- project.sourceSets.create('test')
- project.dependencies.add('testCompile', "org.elasticsearch.test:framework:${VersionProperties.elasticsearch}")
-
- project.eclipse.classpath.sourceSets = [project.sourceSets.test]
- project.eclipse.classpath.plusConfigurations = [project.configurations.testRuntime]
- project.idea.module.testSourceDirs += project.sourceSets.test.java.srcDirs
- project.idea.module.scopes['TEST'] = [plus: [project.configurations.testRuntime]]
-
- PrecommitTasks.create(project, false)
- project.check.dependsOn(project.precommit)
- }
-}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneTestPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneTestPlugin.groovy
index fefd08fe4e525..de52d75c6008c 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneTestPlugin.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneTestPlugin.groovy
@@ -25,12 +25,15 @@ import org.gradle.api.Plugin
import org.gradle.api.Project
import org.gradle.api.plugins.JavaBasePlugin
-/** A plugin to add tests only. Used for QA tests that run arbitrary unit tests. */
+/**
+ * Configures the build to compile against Elasticsearch's test framework and
+ * run integration and unit tests. Use BuildPlugin if you want to build main
+ * code as well as tests. */
public class StandaloneTestPlugin implements Plugin {
@Override
public void apply(Project project) {
- project.pluginManager.apply(StandaloneTestBasePlugin)
+ project.pluginManager.apply(StandaloneRestTestPlugin)
Map testOptions = [
name: 'test',
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/VagrantFixture.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/VagrantFixture.groovy
new file mode 100644
index 0000000000000..fa08a8f9c6667
--- /dev/null
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/VagrantFixture.groovy
@@ -0,0 +1,54 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.elasticsearch.gradle.test
+
+import org.elasticsearch.gradle.vagrant.VagrantCommandTask
+import org.gradle.api.Task
+
+/**
+ * A fixture for integration tests which runs in a virtual machine launched by Vagrant.
+ */
+class VagrantFixture extends VagrantCommandTask implements Fixture {
+
+ private VagrantCommandTask stopTask
+
+ public VagrantFixture() {
+ this.stopTask = project.tasks.create(name: "${name}#stop", type: VagrantCommandTask) {
+ command 'halt'
+ }
+ finalizedBy this.stopTask
+ }
+
+ @Override
+ void setBoxName(String boxName) {
+ super.setBoxName(boxName)
+ this.stopTask.setBoxName(boxName)
+ }
+
+ @Override
+ void setEnvironmentVars(Map environmentVars) {
+ super.setEnvironmentVars(environmentVars)
+ this.stopTask.setEnvironmentVars(environmentVars)
+ }
+
+ @Override
+ public Task getStopTask() {
+ return this.stopTask
+ }
+}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/BatsOverVagrantTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/BatsOverVagrantTask.groovy
index c68e0528c9b67..110f2fc7e8461 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/BatsOverVagrantTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/BatsOverVagrantTask.groovy
@@ -18,14 +18,7 @@
*/
package org.elasticsearch.gradle.vagrant
-import org.gradle.api.DefaultTask
import org.gradle.api.tasks.Input
-import org.gradle.api.tasks.TaskAction
-import org.gradle.logging.ProgressLoggerFactory
-import org.gradle.process.internal.ExecAction
-import org.gradle.process.internal.ExecActionFactory
-
-import javax.inject.Inject
/**
* Runs bats over vagrant. Pretty much like running it using Exec but with a
@@ -34,12 +27,15 @@ import javax.inject.Inject
public class BatsOverVagrantTask extends VagrantCommandTask {
@Input
- String command
+ String remoteCommand
BatsOverVagrantTask() {
- project.afterEvaluate {
- args 'ssh', boxName, '--command', command
- }
+ command = 'ssh'
+ }
+
+ void setRemoteCommand(String remoteCommand) {
+ this.remoteCommand = Objects.requireNonNull(remoteCommand)
+ setArgs(['--command', remoteCommand])
}
@Override
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/TapLoggerOutputStream.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/TapLoggerOutputStream.groovy
index 3f980c57a49a6..e15759a1fe588 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/TapLoggerOutputStream.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/TapLoggerOutputStream.groovy
@@ -19,11 +19,9 @@
package org.elasticsearch.gradle.vagrant
import com.carrotsearch.gradle.junit4.LoggingOutputStream
-import groovy.transform.PackageScope
import org.gradle.api.GradleScriptException
import org.gradle.api.logging.Logger
-import org.gradle.logging.ProgressLogger
-import org.gradle.logging.ProgressLoggerFactory
+import org.gradle.internal.logging.progress.ProgressLogger
import java.util.regex.Matcher
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantCommandTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantCommandTask.groovy
index d79c2533fabaf..aab120e8d049a 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantCommandTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantCommandTask.groovy
@@ -21,9 +21,15 @@ package org.elasticsearch.gradle.vagrant
import org.apache.commons.io.output.TeeOutputStream
import org.elasticsearch.gradle.LoggedExec
import org.gradle.api.tasks.Input
-import org.gradle.logging.ProgressLoggerFactory
+import org.gradle.api.tasks.Optional
+import org.gradle.api.tasks.TaskAction
+import org.gradle.internal.logging.progress.ProgressLoggerFactory
import javax.inject.Inject
+import java.util.concurrent.CountDownLatch
+import java.util.concurrent.locks.Lock
+import java.util.concurrent.locks.ReadWriteLock
+import java.util.concurrent.locks.ReentrantLock
/**
* Runs a vagrant command. Pretty much like Exec task but with a nicer output
@@ -31,17 +37,51 @@ import javax.inject.Inject
*/
public class VagrantCommandTask extends LoggedExec {
+ @Input
+ String command
+
+ @Input @Optional
+ String subcommand
+
@Input
String boxName
+ @Input
+ Map environmentVars
+
public VagrantCommandTask() {
executable = 'vagrant'
+
+ // We're using afterEvaluate here to slot in some logic that captures configurations and
+ // modifies the command line right before the main execution happens. The reason that we
+ // call doFirst instead of just doing the work in the afterEvaluate is that the latter
+ // restricts how subclasses can extend functionality. Calling afterEvaluate is like having
+ // all the logic of a task happening at construction time, instead of at execution time
+ // where a subclass can override or extend the logic.
project.afterEvaluate {
- // It'd be nice if --machine-readable were, well, nice
- standardOutput = new TeeOutputStream(standardOutput, createLoggerOutputStream())
+ doFirst {
+ if (environmentVars != null) {
+ environment environmentVars
+ }
+
+ // Build our command line for vagrant
+ def vagrantCommand = [executable, command]
+ if (subcommand != null) {
+ vagrantCommand = vagrantCommand + subcommand
+ }
+ commandLine([*vagrantCommand, boxName, *args])
+
+ // It'd be nice if --machine-readable were, well, nice
+ standardOutput = new TeeOutputStream(standardOutput, createLoggerOutputStream())
+ }
}
}
+ @Inject
+ ProgressLoggerFactory getProgressLoggerFactory() {
+ throw new UnsupportedOperationException()
+ }
+
protected OutputStream createLoggerOutputStream() {
return new VagrantLoggerOutputStream(
command: commandLine.join(' '),
@@ -50,9 +90,4 @@ public class VagrantCommandTask extends LoggedExec {
stuff starts with ==> $box */
squashedPrefix: "==> $boxName: ")
}
-
- @Inject
- ProgressLoggerFactory getProgressLoggerFactory() {
- throw new UnsupportedOperationException();
- }
}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantLoggerOutputStream.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantLoggerOutputStream.groovy
index 331a638b5cade..e899c0171298b 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantLoggerOutputStream.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantLoggerOutputStream.groovy
@@ -19,9 +19,7 @@
package org.elasticsearch.gradle.vagrant
import com.carrotsearch.gradle.junit4.LoggingOutputStream
-import org.gradle.api.logging.Logger
-import org.gradle.logging.ProgressLogger
-import org.gradle.logging.ProgressLoggerFactory
+import org.gradle.internal.logging.progress.ProgressLogger
/**
* Adapts an OutputStream being written to by vagrant into a ProcessLogger. It
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantPropertiesExtension.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantPropertiesExtension.groovy
new file mode 100644
index 0000000000000..f16913d5be64a
--- /dev/null
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantPropertiesExtension.groovy
@@ -0,0 +1,76 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.elasticsearch.gradle.vagrant
+
+import org.gradle.api.tasks.Input
+
+class VagrantPropertiesExtension {
+
+ @Input
+ List boxes
+
+ @Input
+ Long testSeed
+
+ @Input
+ String formattedTestSeed
+
+ @Input
+ String upgradeFromVersion
+
+ @Input
+ List upgradeFromVersions
+
+ @Input
+ String batsDir
+
+ @Input
+ Boolean inheritTests
+
+ @Input
+ Boolean inheritTestArchives
+
+ @Input
+ Boolean inheritTestUtils
+
+ VagrantPropertiesExtension(List availableBoxes) {
+ this.boxes = availableBoxes
+ this.batsDir = 'src/test/resources/packaging'
+ }
+
+ void boxes(String... boxes) {
+ this.boxes = Arrays.asList(boxes)
+ }
+
+ void setBatsDir(String batsDir) {
+ this.batsDir = batsDir
+ }
+
+ void setInheritTests(Boolean inheritTests) {
+ this.inheritTests = inheritTests
+ }
+
+ void setInheritTestArchives(Boolean inheritTestArchives) {
+ this.inheritTestArchives = inheritTestArchives
+ }
+
+ void setInheritTestUtils(Boolean inheritTestUtils) {
+ this.inheritTestUtils = inheritTestUtils
+ }
+}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantSupportPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantSupportPlugin.groovy
new file mode 100644
index 0000000000000..9dfe487e83018
--- /dev/null
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantSupportPlugin.groovy
@@ -0,0 +1,127 @@
+package org.elasticsearch.gradle.vagrant
+
+import org.gradle.api.GradleException
+import org.gradle.api.InvalidUserDataException
+import org.gradle.api.Plugin
+import org.gradle.api.Project
+import org.gradle.process.ExecResult
+import org.gradle.process.internal.ExecException
+
+/**
+ * Global configuration for if Vagrant tasks are supported in this
+ * build environment.
+ */
+class VagrantSupportPlugin implements Plugin {
+
+ @Override
+ void apply(Project project) {
+ if (project.rootProject.ext.has('vagrantEnvChecksDone') == false) {
+ Map vagrantInstallation = getVagrantInstallation(project)
+ Map virtualBoxInstallation = getVirtualBoxInstallation(project)
+
+ project.rootProject.ext.vagrantInstallation = vagrantInstallation
+ project.rootProject.ext.virtualBoxInstallation = virtualBoxInstallation
+ project.rootProject.ext.vagrantSupported = vagrantInstallation.supported && virtualBoxInstallation.supported
+ project.rootProject.ext.vagrantEnvChecksDone = true
+
+ // Finding that HOME needs to be set when performing vagrant updates
+ String homeLocation = System.getenv("HOME")
+ if (project.rootProject.ext.vagrantSupported && homeLocation == null) {
+ throw new GradleException("Could not locate \$HOME environment variable. Vagrant is enabled " +
+ "and requires \$HOME to be set to function properly.")
+ }
+ }
+
+ addVerifyInstallationTasks(project)
+ }
+
+ private Map getVagrantInstallation(Project project) {
+ try {
+ ByteArrayOutputStream pipe = new ByteArrayOutputStream()
+ ExecResult runResult = project.exec {
+ commandLine 'vagrant', '--version'
+ standardOutput pipe
+ ignoreExitValue true
+ }
+ String version = pipe.toString().trim()
+ if (runResult.exitValue == 0) {
+ if (version ==~ /Vagrant 1\.(8\.[6-9]|9\.[0-9])+/ || version ==~ /Vagrant 2\.[0-9]+\.[0-9]+/) {
+ return [ 'supported' : true ]
+ } else {
+ return [ 'supported' : false,
+ 'info' : "Illegal version of vagrant [${version}]. Need [Vagrant 1.8.6+]" ]
+ }
+ } else {
+ return [ 'supported' : false,
+ 'info' : "Could not read installed vagrant version:\n" + version ]
+ }
+ } catch (ExecException e) {
+ // Exec still throws this if it cannot find the command, regardless if ignoreExitValue is set.
+ // Swallow error. Vagrant isn't installed. Don't halt the build here.
+ return [ 'supported' : false, 'info' : "Could not find vagrant: " + e.message ]
+ }
+ }
+
+ private Map getVirtualBoxInstallation(Project project) {
+ try {
+ ByteArrayOutputStream pipe = new ByteArrayOutputStream()
+ ExecResult runResult = project.exec {
+ commandLine 'vboxmanage', '--version'
+ standardOutput = pipe
+ ignoreExitValue true
+ }
+ String version = pipe.toString().trim()
+ if (runResult.exitValue == 0) {
+ try {
+ String[] versions = version.split('\\.')
+ int major = Integer.parseInt(versions[0])
+ int minor = Integer.parseInt(versions[1])
+ if ((major < 5) || (major == 5 && minor < 1)) {
+ return [ 'supported' : false,
+ 'info' : "Illegal version of virtualbox [${version}]. Need [5.1+]" ]
+ } else {
+ return [ 'supported' : true ]
+ }
+ } catch (NumberFormatException | ArrayIndexOutOfBoundsException e) {
+ return [ 'supported' : false,
+ 'info' : "Unable to parse version of virtualbox [${version}]. Required [5.1+]" ]
+ }
+ } else {
+ return [ 'supported': false, 'info': "Could not read installed virtualbox version:\n" + version ]
+ }
+ } catch (ExecException e) {
+ // Exec still throws this if it cannot find the command, regardless if ignoreExitValue is set.
+ // Swallow error. VirtualBox isn't installed. Don't halt the build here.
+ return [ 'supported' : false, 'info' : "Could not find virtualbox: " + e.message ]
+ }
+ }
+
+ private void addVerifyInstallationTasks(Project project) {
+ createCheckVagrantVersionTask(project)
+ createCheckVirtualBoxVersionTask(project)
+ }
+
+ private void createCheckVagrantVersionTask(Project project) {
+ project.tasks.create('vagrantCheckVersion') {
+ description 'Check the Vagrant version'
+ group 'Verification'
+ doLast {
+ if (project.rootProject.vagrantInstallation.supported == false) {
+ throw new InvalidUserDataException(project.rootProject.vagrantInstallation.info)
+ }
+ }
+ }
+ }
+
+ private void createCheckVirtualBoxVersionTask(Project project) {
+ project.tasks.create('virtualboxCheckVersion') {
+ description 'Check the Virtualbox version'
+ group 'Verification'
+ doLast {
+ if (project.rootProject.virtualBoxInstallation.supported == false) {
+ throw new InvalidUserDataException(project.rootProject.virtualBoxInstallation.info)
+ }
+ }
+ }
+ }
+}
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantTestPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantTestPlugin.groovy
new file mode 100644
index 0000000000000..d0c529b263285
--- /dev/null
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantTestPlugin.groovy
@@ -0,0 +1,403 @@
+package org.elasticsearch.gradle.vagrant
+
+import org.elasticsearch.gradle.FileContentsTask
+import org.gradle.api.*
+import org.gradle.api.artifacts.dsl.RepositoryHandler
+import org.gradle.api.execution.TaskExecutionAdapter
+import org.gradle.api.internal.artifacts.dependencies.DefaultProjectDependency
+import org.gradle.api.tasks.Copy
+import org.gradle.api.tasks.Delete
+import org.gradle.api.tasks.Exec
+import org.gradle.api.tasks.TaskState
+
+class VagrantTestPlugin implements Plugin {
+
+ /** All available boxes **/
+ static List BOXES = [
+ 'centos-6',
+ 'centos-7',
+ 'debian-8',
+ 'debian-9',
+ 'fedora-25',
+ 'fedora-26',
+ 'oel-6',
+ 'oel-7',
+ 'opensuse-42',
+ 'sles-12',
+ 'ubuntu-1404',
+ 'ubuntu-1604'
+ ]
+
+ /** Boxes used when sampling the tests **/
+ static List SAMPLE = [
+ 'centos-7',
+ 'ubuntu-1404',
+ ]
+
+ /** All onboarded archives by default, available for Bats tests even if not used **/
+ static List DISTRIBUTION_ARCHIVES = ['tar', 'rpm', 'deb']
+
+ /** Packages onboarded for upgrade tests **/
+ static List UPGRADE_FROM_ARCHIVES = ['rpm', 'deb']
+
+ private static final BATS = 'bats'
+ private static final String BATS_TEST_COMMAND ="cd \$BATS_ARCHIVES && sudo bats --tap \$BATS_TESTS/*.$BATS"
+ private static final String PLATFORM_TEST_COMMAND ="rm -rf ~/elasticsearch && rsync -r /elasticsearch/ ~/elasticsearch && cd ~/elasticsearch && \$GRADLE_HOME/bin/gradle test integTest"
+
+ @Override
+ void apply(Project project) {
+
+ // Creates the Vagrant extension for the project
+ project.extensions.create('esvagrant', VagrantPropertiesExtension, listVagrantBoxes(project))
+
+ // Add required repositories for Bats tests
+ configureBatsRepositories(project)
+
+ // Creates custom configurations for Bats testing files (and associated scripts and archives)
+ createBatsConfiguration(project)
+
+ // Creates all the main Vagrant tasks
+ createVagrantTasks(project)
+
+ if (project.extensions.esvagrant.boxes == null || project.extensions.esvagrant.boxes.size() == 0) {
+ throw new InvalidUserDataException('Vagrant boxes cannot be null or empty for esvagrant')
+ }
+
+ for (String box : project.extensions.esvagrant.boxes) {
+ if (BOXES.contains(box) == false) {
+ throw new InvalidUserDataException("Vagrant box [${box}] not found, available virtual machines are ${BOXES}")
+ }
+ }
+
+ // Creates all tasks related to the Vagrant boxes
+ createVagrantBoxesTasks(project)
+ }
+
+ private List listVagrantBoxes(Project project) {
+ String vagrantBoxes = project.getProperties().get('vagrant.boxes', 'sample')
+ if (vagrantBoxes == 'sample') {
+ return SAMPLE
+ } else if (vagrantBoxes == 'all') {
+ return BOXES
+ } else {
+ return vagrantBoxes.split(',')
+ }
+ }
+
+ private static void configureBatsRepositories(Project project) {
+ RepositoryHandler repos = project.repositories
+
+ // Try maven central first, it'll have releases before 5.0.0
+ repos.mavenCentral()
+
+ /* Setup a repository that tries to download from
+ https://artifacts.elastic.co/downloads/elasticsearch/[module]-[revision].[ext]
+ which should work for 5.0.0+. This isn't a real ivy repository but gradle
+ is fine with that */
+ repos.ivy {
+ artifactPattern "https://artifacts.elastic.co/downloads/elasticsearch/[module]-[revision].[ext]"
+ }
+ }
+
+ private static void createBatsConfiguration(Project project) {
+ project.configurations.create(BATS)
+
+ final long seed
+ final String formattedSeed
+ String maybeTestsSeed = System.getProperty("tests.seed")
+ if (maybeTestsSeed != null) {
+ if (maybeTestsSeed.trim().isEmpty()) {
+ throw new GradleException("explicit tests.seed cannot be empty")
+ }
+ String masterSeed = maybeTestsSeed.tokenize(':').get(0)
+ seed = new BigInteger(masterSeed, 16).longValue()
+ formattedSeed = maybeTestsSeed
+ } else {
+ seed = new Random().nextLong()
+ formattedSeed = String.format("%016X", seed)
+ }
+
+ String upgradeFromVersion = System.getProperty("tests.packaging.upgradeVersion");
+ if (upgradeFromVersion == null) {
+ upgradeFromVersion = project.indexCompatVersions[new Random(seed).nextInt(project.indexCompatVersions.size())]
+ }
+
+ DISTRIBUTION_ARCHIVES.each {
+ // Adds a dependency for the current version
+ project.dependencies.add(BATS, project.dependencies.project(path: ":distribution:${it}", configuration: 'archives'))
+ }
+
+ UPGRADE_FROM_ARCHIVES.each {
+ // The version of elasticsearch that we upgrade *from*
+ project.dependencies.add(BATS, "org.elasticsearch.distribution.${it}:elasticsearch:${upgradeFromVersion}@${it}")
+ }
+
+ project.extensions.esvagrant.testSeed = seed
+ project.extensions.esvagrant.formattedTestSeed = formattedSeed
+ project.extensions.esvagrant.upgradeFromVersion = upgradeFromVersion
+ }
+
+ private static void createCleanTask(Project project) {
+ project.tasks.create('clean', Delete.class) {
+ description 'Clean the project build directory'
+ group 'Build'
+ delete project.buildDir
+ }
+ }
+
+ private static void createStopTask(Project project) {
+ project.tasks.create('stop') {
+ description 'Stop any tasks from tests that still may be running'
+ group 'Verification'
+ }
+ }
+
+ private static void createSmokeTestTask(Project project) {
+ project.tasks.create('vagrantSmokeTest') {
+ description 'Smoke test the specified vagrant boxes'
+ group 'Verification'
+ }
+ }
+
+ private static void createPrepareVagrantTestEnvTask(Project project) {
+ File batsDir = new File("${project.buildDir}/${BATS}")
+
+ Task createBatsDirsTask = project.tasks.create('createBatsDirs')
+ createBatsDirsTask.outputs.dir batsDir
+ createBatsDirsTask.doLast {
+ batsDir.mkdirs()
+ }
+
+ Copy copyBatsArchives = project.tasks.create('copyBatsArchives', Copy) {
+ dependsOn createBatsDirsTask
+ into "${batsDir}/archives"
+ from project.configurations[BATS]
+ }
+
+ Copy copyBatsTests = project.tasks.create('copyBatsTests', Copy) {
+ dependsOn createBatsDirsTask
+ into "${batsDir}/tests"
+ from {
+ "${project.extensions.esvagrant.batsDir}/tests"
+ }
+ }
+
+ Copy copyBatsUtils = project.tasks.create('copyBatsUtils', Copy) {
+ dependsOn createBatsDirsTask
+ into "${batsDir}/utils"
+ from {
+ "${project.extensions.esvagrant.batsDir}/utils"
+ }
+ }
+
+ // Now we iterate over dependencies of the bats configuration. When a project dependency is found,
+ // we bring back its own archives, test files or test utils.
+ project.afterEvaluate {
+ project.configurations.bats.dependencies.findAll {it.targetConfiguration == BATS }.each { d ->
+ if (d instanceof DefaultProjectDependency) {
+ DefaultProjectDependency externalBatsDependency = (DefaultProjectDependency) d
+ Project externalBatsProject = externalBatsDependency.dependencyProject
+ String externalBatsDir = externalBatsProject.extensions.esvagrant.batsDir
+
+ if (project.extensions.esvagrant.inheritTests) {
+ copyBatsTests.from(externalBatsProject.files("${externalBatsDir}/tests"))
+ }
+ if (project.extensions.esvagrant.inheritTestArchives) {
+ copyBatsArchives.from(externalBatsDependency.projectConfiguration.files)
+ }
+ if (project.extensions.esvagrant.inheritTestUtils) {
+ copyBatsUtils.from(externalBatsProject.files("${externalBatsDir}/utils"))
+ }
+ }
+ }
+ }
+
+ Task createVersionFile = project.tasks.create('createVersionFile', FileContentsTask) {
+ dependsOn createBatsDirsTask
+ file "${batsDir}/archives/version"
+ contents project.version
+ }
+
+ Task createUpgradeFromFile = project.tasks.create('createUpgradeFromFile', FileContentsTask) {
+ dependsOn createBatsDirsTask
+ file "${batsDir}/archives/upgrade_from_version"
+ contents project.extensions.esvagrant.upgradeFromVersion
+ }
+
+ Task vagrantSetUpTask = project.tasks.create('setupBats')
+ vagrantSetUpTask.dependsOn 'vagrantCheckVersion'
+ vagrantSetUpTask.dependsOn copyBatsTests, copyBatsUtils, copyBatsArchives, createVersionFile, createUpgradeFromFile
+ }
+
+ private static void createPackagingTestTask(Project project) {
+ project.tasks.create('packagingTest') {
+ group 'Verification'
+ description "Tests yum/apt packages using vagrant and bats.\n" +
+ " Specify the vagrant boxes to test using the gradle property 'vagrant.boxes'.\n" +
+ " 'sample' can be used to test a single yum and apt box. 'all' can be used to\n" +
+ " test all available boxes. The available boxes are: \n" +
+ " ${BOXES}"
+ dependsOn 'vagrantCheckVersion'
+ }
+ }
+
+ private static void createPlatformTestTask(Project project) {
+ project.tasks.create('platformTest') {
+ group 'Verification'
+ description "Test unit and integ tests on different platforms using vagrant.\n" +
+ " Specify the vagrant boxes to test using the gradle property 'vagrant.boxes'.\n" +
+ " 'all' can be used to test all available boxes. The available boxes are: \n" +
+ " ${BOXES}"
+ dependsOn 'vagrantCheckVersion'
+ }
+ }
+
+ private static void createVagrantTasks(Project project) {
+ createCleanTask(project)
+ createStopTask(project)
+ createSmokeTestTask(project)
+ createPrepareVagrantTestEnvTask(project)
+ createPackagingTestTask(project)
+ createPlatformTestTask(project)
+ }
+
+ private static void createVagrantBoxesTasks(Project project) {
+ assert project.extensions.esvagrant.boxes != null
+
+ assert project.tasks.stop != null
+ Task stop = project.tasks.stop
+
+ assert project.tasks.vagrantSmokeTest != null
+ Task vagrantSmokeTest = project.tasks.vagrantSmokeTest
+
+ assert project.tasks.vagrantCheckVersion != null
+ Task vagrantCheckVersion = project.tasks.vagrantCheckVersion
+
+ assert project.tasks.virtualboxCheckVersion != null
+ Task virtualboxCheckVersion = project.tasks.virtualboxCheckVersion
+
+ assert project.tasks.setupBats != null
+ Task setupBats = project.tasks.setupBats
+
+ assert project.tasks.packagingTest != null
+ Task packagingTest = project.tasks.packagingTest
+
+ assert project.tasks.platformTest != null
+ Task platformTest = project.tasks.platformTest
+
+ /*
+ * We always use the main project.rootDir as Vagrant's current working directory (VAGRANT_CWD)
+ * so that boxes are not duplicated for every Gradle project that use this VagrantTestPlugin.
+ */
+ def vagrantEnvVars = [
+ 'VAGRANT_CWD' : "${project.rootDir.absolutePath}",
+ 'VAGRANT_VAGRANTFILE' : 'Vagrantfile',
+ 'VAGRANT_PROJECT_DIR' : "${project.projectDir.absolutePath}"
+ ]
+
+ // Each box gets it own set of tasks
+ for (String box : BOXES) {
+ String boxTask = box.capitalize().replace('-', '')
+
+ // always add a halt task for all boxes, so clean makes sure they are all shutdown
+ Task halt = project.tasks.create("vagrant${boxTask}#halt", VagrantCommandTask) {
+ command 'halt'
+ boxName box
+ environmentVars vagrantEnvVars
+ }
+ stop.dependsOn(halt)
+
+ Task update = project.tasks.create("vagrant${boxTask}#update", VagrantCommandTask) {
+ command 'box'
+ subcommand 'update'
+ boxName box
+ environmentVars vagrantEnvVars
+ dependsOn vagrantCheckVersion, virtualboxCheckVersion
+ }
+ update.mustRunAfter(setupBats)
+
+ Task up = project.tasks.create("vagrant${boxTask}#up", VagrantCommandTask) {
+ command 'up'
+ boxName box
+ environmentVars vagrantEnvVars
+ /* Its important that we try to reprovision the box even if it already
+ exists. That way updates to the vagrant configuration take automatically.
+ That isn't to say that the updates will always be compatible. Its ok to
+ just destroy the boxes if they get busted but that is a manual step
+ because its slow-ish. */
+ /* We lock the provider to virtualbox because the Vagrantfile specifies
+ lots of boxes that only work properly in virtualbox. Virtualbox is
+ vagrant's default but its possible to change that default and folks do.
+ But the boxes that we use are unlikely to work properly with other
+ virtualization providers. Thus the lock. */
+ args '--provision', '--provider', 'virtualbox'
+ /* It'd be possible to check if the box is already up here and output
+ SKIPPED but that would require running vagrant status which is slow! */
+ dependsOn update
+ }
+
+ Task smoke = project.tasks.create("vagrant${boxTask}#smoketest", Exec) {
+ environment vagrantEnvVars
+ dependsOn up
+ finalizedBy halt
+ commandLine 'vagrant', 'ssh', box, '--command',
+ "set -o pipefail && echo 'Hello from ${project.path}' | sed -ue 's/^/ ${box}: /'"
+ }
+ vagrantSmokeTest.dependsOn(smoke)
+
+ Task packaging = project.tasks.create("vagrant${boxTask}#packagingTest", BatsOverVagrantTask) {
+ remoteCommand BATS_TEST_COMMAND
+ boxName box
+ environmentVars vagrantEnvVars
+ dependsOn up, setupBats
+ finalizedBy halt
+ }
+
+ TaskExecutionAdapter packagingReproListener = new TaskExecutionAdapter() {
+ @Override
+ void afterExecute(Task task, TaskState state) {
+ if (state.failure != null) {
+ println "REPRODUCE WITH: gradle ${packaging.path} " +
+ "-Dtests.seed=${project.extensions.esvagrant.formattedTestSeed} "
+ }
+ }
+ }
+ packaging.doFirst {
+ project.gradle.addListener(packagingReproListener)
+ }
+ packaging.doLast {
+ project.gradle.removeListener(packagingReproListener)
+ }
+ if (project.extensions.esvagrant.boxes.contains(box)) {
+ packagingTest.dependsOn(packaging)
+ }
+
+ Task platform = project.tasks.create("vagrant${boxTask}#platformTest", VagrantCommandTask) {
+ command 'ssh'
+ boxName box
+ environmentVars vagrantEnvVars
+ dependsOn up
+ finalizedBy halt
+ args '--command', PLATFORM_TEST_COMMAND + " -Dtests.seed=${-> project.extensions.esvagrant.formattedTestSeed}"
+ }
+ TaskExecutionAdapter platformReproListener = new TaskExecutionAdapter() {
+ @Override
+ void afterExecute(Task task, TaskState state) {
+ if (state.failure != null) {
+ println "REPRODUCE WITH: gradle ${platform.path} " +
+ "-Dtests.seed=${project.extensions.esvagrant.formattedTestSeed} "
+ }
+ }
+ }
+ platform.doFirst {
+ project.gradle.addListener(platformReproListener)
+ }
+ platform.doLast {
+ project.gradle.removeListener(platformReproListener)
+ }
+ if (project.extensions.esvagrant.boxes.contains(box)) {
+ platformTest.dependsOn(platform)
+ }
+ }
+ }
+}
diff --git a/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheck.java b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheck.java
index cbfa31d1aaf5b..9bd14675d34a4 100644
--- a/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheck.java
+++ b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheck.java
@@ -28,6 +28,7 @@
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.HashSet;
+import java.util.Objects;
import java.util.Set;
/**
@@ -49,6 +50,7 @@ public static void main(String[] args) throws IOException {
Path rootPath = null;
boolean skipIntegTestsInDisguise = false;
boolean selfTest = false;
+ boolean checkMainClasses = false;
for (int i = 0; i < args.length; i++) {
String arg = args[i];
switch (arg) {
@@ -64,6 +66,9 @@ public static void main(String[] args) throws IOException {
case "--self-test":
selfTest = true;
break;
+ case "--main":
+ checkMainClasses = true;
+ break;
case "--":
rootPath = Paths.get(args[++i]);
break;
@@ -73,28 +78,43 @@ public static void main(String[] args) throws IOException {
}
NamingConventionsCheck check = new NamingConventionsCheck(testClass, integTestClass);
- check.check(rootPath, skipIntegTestsInDisguise);
+ if (checkMainClasses) {
+ check.checkMain(rootPath);
+ } else {
+ check.checkTests(rootPath, skipIntegTestsInDisguise);
+ }
if (selfTest) {
- assertViolation("WrongName", check.missingSuffix);
- assertViolation("WrongNameTheSecond", check.missingSuffix);
- assertViolation("DummyAbstractTests", check.notRunnable);
- assertViolation("DummyInterfaceTests", check.notRunnable);
- assertViolation("InnerTests", check.innerClasses);
- assertViolation("NotImplementingTests", check.notImplementing);
- assertViolation("PlainUnit", check.pureUnitTest);
+ if (checkMainClasses) {
+ assertViolation(NamingConventionsCheckInMainTests.class.getName(), check.testsInMain);
+ assertViolation(NamingConventionsCheckInMainIT.class.getName(), check.testsInMain);
+ } else {
+ assertViolation("WrongName", check.missingSuffix);
+ assertViolation("WrongNameTheSecond", check.missingSuffix);
+ assertViolation("DummyAbstractTests", check.notRunnable);
+ assertViolation("DummyInterfaceTests", check.notRunnable);
+ assertViolation("InnerTests", check.innerClasses);
+ assertViolation("NotImplementingTests", check.notImplementing);
+ assertViolation("PlainUnit", check.pureUnitTest);
+ }
}
// Now we should have no violations
- assertNoViolations("Not all subclasses of " + check.testClass.getSimpleName()
- + " match the naming convention. Concrete classes must end with [Tests]", check.missingSuffix);
+ assertNoViolations(
+ "Not all subclasses of " + check.testClass.getSimpleName()
+ + " match the naming convention. Concrete classes must end with [Tests]",
+ check.missingSuffix);
assertNoViolations("Classes ending with [Tests] are abstract or interfaces", check.notRunnable);
assertNoViolations("Found inner classes that are tests, which are excluded from the test runner", check.innerClasses);
assertNoViolations("Pure Unit-Test found must subclass [" + check.testClass.getSimpleName() + "]", check.pureUnitTest);
assertNoViolations("Classes ending with [Tests] must subclass [" + check.testClass.getSimpleName() + "]", check.notImplementing);
+ assertNoViolations(
+ "Classes ending with [Tests] or [IT] or extending [" + check.testClass.getSimpleName() + "] must be in src/test/java",
+ check.testsInMain);
if (skipIntegTestsInDisguise == false) {
- assertNoViolations("Subclasses of " + check.integTestClass.getSimpleName() +
- " should end with IT as they are integration tests", check.integTestsInDisguise);
+ assertNoViolations(
+ "Subclasses of " + check.integTestClass.getSimpleName() + " should end with IT as they are integration tests",
+ check.integTestsInDisguise);
}
}
@@ -104,84 +124,76 @@ public static void main(String[] args) throws IOException {
private final Set> integTestsInDisguise = new HashSet<>();
private final Set> notRunnable = new HashSet<>();
private final Set> innerClasses = new HashSet<>();
+ private final Set> testsInMain = new HashSet<>();
private final Class> testClass;
private final Class> integTestClass;
public NamingConventionsCheck(Class> testClass, Class> integTestClass) {
- this.testClass = testClass;
+ this.testClass = Objects.requireNonNull(testClass, "--test-class is required");
this.integTestClass = integTestClass;
}
- public void check(Path rootPath, boolean skipTestsInDisguised) throws IOException {
- Files.walkFileTree(rootPath, new FileVisitor() {
- /**
- * The package name of the directory we are currently visiting. Kept as a string rather than something fancy because we load
- * just about every class and doing so requires building a string out of it anyway. At least this way we don't need to build the
- * first part of the string over and over and over again.
- */
- private String packageName;
-
+ public void checkTests(Path rootPath, boolean skipTestsInDisguised) throws IOException {
+ Files.walkFileTree(rootPath, new TestClassVisitor() {
@Override
- public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
- // First we visit the root directory
- if (packageName == null) {
- // And it package is empty string regardless of the directory name
- packageName = "";
- } else {
- packageName += dir.getFileName() + ".";
+ protected void visitTestClass(Class> clazz) {
+ if (skipTestsInDisguised == false && integTestClass.isAssignableFrom(clazz)) {
+ integTestsInDisguise.add(clazz);
+ }
+ if (Modifier.isAbstract(clazz.getModifiers()) || Modifier.isInterface(clazz.getModifiers())) {
+ notRunnable.add(clazz);
+ } else if (isTestCase(clazz) == false) {
+ notImplementing.add(clazz);
+ } else if (Modifier.isStatic(clazz.getModifiers())) {
+ innerClasses.add(clazz);
}
- return FileVisitResult.CONTINUE;
}
@Override
- public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
- // Go up one package by jumping back to the second to last '.'
- packageName = packageName.substring(0, 1 + packageName.lastIndexOf('.', packageName.length() - 2));
- return FileVisitResult.CONTINUE;
+ protected void visitIntegrationTestClass(Class> clazz) {
+ if (isTestCase(clazz) == false) {
+ notImplementing.add(clazz);
+ }
}
@Override
- public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
- String filename = file.getFileName().toString();
- if (filename.endsWith(".class")) {
- String className = filename.substring(0, filename.length() - ".class".length());
- Class> clazz = loadClassWithoutInitializing(packageName + className);
- if (clazz.getName().endsWith("Tests")) {
- if (skipTestsInDisguised == false && integTestClass.isAssignableFrom(clazz)) {
- integTestsInDisguise.add(clazz);
- }
- if (Modifier.isAbstract(clazz.getModifiers()) || Modifier.isInterface(clazz.getModifiers())) {
- notRunnable.add(clazz);
- } else if (isTestCase(clazz) == false) {
- notImplementing.add(clazz);
- } else if (Modifier.isStatic(clazz.getModifiers())) {
- innerClasses.add(clazz);
- }
- } else if (clazz.getName().endsWith("IT")) {
- if (isTestCase(clazz) == false) {
- notImplementing.add(clazz);
- }
- } else if (Modifier.isAbstract(clazz.getModifiers()) == false && Modifier.isInterface(clazz.getModifiers()) == false) {
- if (isTestCase(clazz)) {
- missingSuffix.add(clazz);
- } else if (junit.framework.Test.class.isAssignableFrom(clazz)) {
- pureUnitTest.add(clazz);
- }
- }
+ protected void visitOtherClass(Class> clazz) {
+ if (Modifier.isAbstract(clazz.getModifiers()) || Modifier.isInterface(clazz.getModifiers())) {
+ return;
+ }
+ if (isTestCase(clazz)) {
+ missingSuffix.add(clazz);
+ } else if (junit.framework.Test.class.isAssignableFrom(clazz)) {
+ pureUnitTest.add(clazz);
}
- return FileVisitResult.CONTINUE;
+ }
+ });
+ }
+
+ public void checkMain(Path rootPath) throws IOException {
+ Files.walkFileTree(rootPath, new TestClassVisitor() {
+ @Override
+ protected void visitTestClass(Class> clazz) {
+ testsInMain.add(clazz);
}
- private boolean isTestCase(Class> clazz) {
- return testClass.isAssignableFrom(clazz);
+ @Override
+ protected void visitIntegrationTestClass(Class> clazz) {
+ testsInMain.add(clazz);
}
@Override
- public FileVisitResult visitFileFailed(Path file, IOException exc) throws IOException {
- throw exc;
+ protected void visitOtherClass(Class> clazz) {
+ if (Modifier.isAbstract(clazz.getModifiers()) || Modifier.isInterface(clazz.getModifiers())) {
+ return;
+ }
+ if (isTestCase(clazz)) {
+ testsInMain.add(clazz);
+ }
}
});
+
}
/**
@@ -203,7 +215,7 @@ private static void assertNoViolations(String message, Set> set) {
* similar enough.
*/
private static void assertViolation(String className, Set> set) {
- className = "org.elasticsearch.test.NamingConventionsCheckBadClasses$" + className;
+ className = className.startsWith("org") ? className : "org.elasticsearch.test.NamingConventionsCheckBadClasses$" + className;
if (false == set.remove(loadClassWithoutInitializing(className))) {
System.err.println("Error in NamingConventionsCheck! Expected [" + className + "] to be a violation but wasn't.");
System.exit(1);
@@ -229,4 +241,74 @@ static Class> loadClassWithoutInitializing(String name) {
throw new RuntimeException(e);
}
}
+
+ abstract class TestClassVisitor implements FileVisitor {
+ /**
+ * The package name of the directory we are currently visiting. Kept as a string rather than something fancy because we load
+ * just about every class and doing so requires building a string out of it anyway. At least this way we don't need to build the
+ * first part of the string over and over and over again.
+ */
+ private String packageName;
+
+ /**
+ * Visit classes named like a test.
+ */
+ protected abstract void visitTestClass(Class> clazz);
+ /**
+ * Visit classes named like an integration test.
+ */
+ protected abstract void visitIntegrationTestClass(Class> clazz);
+ /**
+ * Visit classes not named like a test at all.
+ */
+ protected abstract void visitOtherClass(Class> clazz);
+
+ @Override
+ public final FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
+ // First we visit the root directory
+ if (packageName == null) {
+ // And it package is empty string regardless of the directory name
+ packageName = "";
+ } else {
+ packageName += dir.getFileName() + ".";
+ }
+ return FileVisitResult.CONTINUE;
+ }
+
+ @Override
+ public final FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
+ // Go up one package by jumping back to the second to last '.'
+ packageName = packageName.substring(0, 1 + packageName.lastIndexOf('.', packageName.length() - 2));
+ return FileVisitResult.CONTINUE;
+ }
+
+ @Override
+ public final FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
+ String filename = file.getFileName().toString();
+ if (filename.endsWith(".class")) {
+ String className = filename.substring(0, filename.length() - ".class".length());
+ Class> clazz = loadClassWithoutInitializing(packageName + className);
+ if (clazz.getName().endsWith("Tests")) {
+ visitTestClass(clazz);
+ } else if (clazz.getName().endsWith("IT")) {
+ visitIntegrationTestClass(clazz);
+ } else {
+ visitOtherClass(clazz);
+ }
+ }
+ return FileVisitResult.CONTINUE;
+ }
+
+ /**
+ * Is this class a test case?
+ */
+ protected boolean isTestCase(Class> clazz) {
+ return testClass.isAssignableFrom(clazz);
+ }
+
+ @Override
+ public final FileVisitResult visitFileFailed(Path file, IOException exc) throws IOException {
+ throw exc;
+ }
+ }
}
diff --git a/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainIT.java b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainIT.java
new file mode 100644
index 0000000000000..46adc7f065b16
--- /dev/null
+++ b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainIT.java
@@ -0,0 +1,26 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.test;
+
+/**
+ * This class should fail the naming conventions self test.
+ */
+public class NamingConventionsCheckInMainIT {
+}
diff --git a/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainTests.java b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainTests.java
new file mode 100644
index 0000000000000..27c0b41eb3f6a
--- /dev/null
+++ b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainTests.java
@@ -0,0 +1,26 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.test;
+
+/**
+ * This class should fail the naming conventions self test.
+ */
+public class NamingConventionsCheckInMainTests {
+}
diff --git a/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.standalone-rest-test.properties b/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.standalone-rest-test.properties
new file mode 100644
index 0000000000000..2daf4dc27c0ec
--- /dev/null
+++ b/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.standalone-rest-test.properties
@@ -0,0 +1,20 @@
+#
+# Licensed to Elasticsearch under one or more contributor
+# license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright
+# ownership. Elasticsearch licenses this file to you under
+# the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+implementation-class=org.elasticsearch.gradle.test.StandaloneRestTestPlugin
diff --git a/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.vagrant.properties b/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.vagrant.properties
new file mode 100644
index 0000000000000..844310fa9d7fa
--- /dev/null
+++ b/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.vagrant.properties
@@ -0,0 +1 @@
+implementation-class=org.elasticsearch.gradle.vagrant.VagrantTestPlugin
diff --git a/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.vagrantsupport.properties b/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.vagrantsupport.properties
new file mode 100644
index 0000000000000..73a3f4123496c
--- /dev/null
+++ b/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.vagrantsupport.properties
@@ -0,0 +1 @@
+implementation-class=org.elasticsearch.gradle.vagrant.VagrantSupportPlugin
\ No newline at end of file
diff --git a/buildSrc/src/main/resources/checkstyle_suppressions.xml b/buildSrc/src/main/resources/checkstyle_suppressions.xml
index 14c2bc8ca5a5f..7774fd8b13fe5 100644
--- a/buildSrc/src/main/resources/checkstyle_suppressions.xml
+++ b/buildSrc/src/main/resources/checkstyle_suppressions.xml
@@ -10,11 +10,12 @@
-
-
+
+
+
@@ -29,7 +30,6 @@
-
@@ -38,7 +38,6 @@
-
@@ -58,14 +57,8 @@
-
-
-
-
-
-
@@ -117,8 +110,6 @@
-
-
@@ -141,50 +132,28 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+
-
-
-
-
-
@@ -202,7 +171,6 @@
-
@@ -212,33 +180,23 @@
-
-
-
-
-
-
-
-
-
-
@@ -246,7 +204,6 @@
-
@@ -259,7 +216,6 @@
-
@@ -272,8 +228,6 @@
-
-
@@ -281,9 +235,7 @@
-
-
@@ -298,36 +250,22 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
@@ -339,14 +277,11 @@
-
-
-
@@ -363,67 +298,55 @@
-
+
+
-
-
-
+
+
-
+
+
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+
+
+
+
+
+
+
+
+
+
+
+
-
-
-
-
-
-
-
-
-
@@ -433,11 +356,8 @@
-
-
-
@@ -447,7 +367,6 @@
-
@@ -455,153 +374,103 @@
-
+
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+
+
+
+
+
+
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+
-
-
+
@@ -613,20 +482,15 @@
-
-
-
-
-
@@ -662,14 +526,12 @@
-
-
@@ -711,26 +573,22 @@
-
-
-
-
@@ -739,28 +597,23 @@
-
+
-
-
-
-
-
@@ -768,86 +621,71 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
+
-
-
-
-
-
-
-
+
+
+
+
+
+
-
+
+
+
+
-
-
-
-
+
+
-
+
+
+
+
+
-
-
+
-
-
-
+
+
+
-
-
-
-
-
-
-
+
@@ -856,7 +694,6 @@
-
@@ -871,10 +708,8 @@
-
-
@@ -889,29 +724,20 @@
-
-
-
-
-
-
-
-
-
@@ -919,15 +745,28 @@
-
+
+
+
+
+
+
+
+
+
+
+
+
+
+
-
+
@@ -950,28 +789,21 @@
-
-
-
-
-
-
-
-
+
@@ -980,68 +812,50 @@
-
-
-
-
-
+
-
-
-
-
-
+
-
-
-
-
-
-
+
-
-
-
-
-
@@ -1050,26 +864,21 @@
-
-
-
-
-
-
-
-
-
-
+
-
+
+
+
+
+
-
+
@@ -1077,29 +886,17 @@
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/buildSrc/src/main/resources/eclipse.settings/org.eclipse.jdt.core.prefs b/buildSrc/src/main/resources/eclipse.settings/org.eclipse.jdt.core.prefs
index 9bee5e587b03f..48c93f444ba2a 100644
--- a/buildSrc/src/main/resources/eclipse.settings/org.eclipse.jdt.core.prefs
+++ b/buildSrc/src/main/resources/eclipse.settings/org.eclipse.jdt.core.prefs
@@ -1,6 +1,5 @@
eclipse.preferences.version=1
-# previous configuration from maven build
# this is merged with gradle's generated properties during 'gradle eclipse'
# NOTE: null pointer analysis etc is not enabled currently, it seems very unstable
diff --git a/buildSrc/src/main/resources/forbidden/es-all-signatures.txt b/buildSrc/src/main/resources/forbidden/es-all-signatures.txt
index 37f03f4c91c28..1b655004cec3e 100644
--- a/buildSrc/src/main/resources/forbidden/es-all-signatures.txt
+++ b/buildSrc/src/main/resources/forbidden/es-all-signatures.txt
@@ -36,3 +36,12 @@ org.apache.lucene.document.FieldType#numericType()
java.lang.invoke.MethodHandle#invoke(java.lang.Object[])
java.lang.invoke.MethodHandle#invokeWithArguments(java.lang.Object[])
java.lang.invoke.MethodHandle#invokeWithArguments(java.util.List)
+
+# This method is misleading, and uses lenient boolean parsing under the hood. If you intend to parse
+# a system property as a boolean, use
+# org.elasticsearch.common.Booleans#parseBoolean(java.lang.String) on the result of
+# java.lang.SystemProperty#getProperty(java.lang.String) instead. If you were not intending to parse
+# a system property as a boolean, but instead parse a string to a boolean, use
+# org.elasticsearch.common.Booleans#parseBoolean(java.lang.String) directly on the string.
+@defaultMessage use org.elasticsearch.common.Booleans#parseBoolean(java.lang.String)
+java.lang.Boolean#getBoolean(java.lang.String)
diff --git a/buildSrc/src/main/resources/forbidden/http-signatures.txt b/buildSrc/src/main/resources/forbidden/http-signatures.txt
new file mode 100644
index 0000000000000..dcf20bbb09387
--- /dev/null
+++ b/buildSrc/src/main/resources/forbidden/http-signatures.txt
@@ -0,0 +1,45 @@
+# Licensed to Elasticsearch under one or more contributor
+# license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright
+# ownership. Elasticsearch licenses this file to you under
+# the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on
+# an 'AS IS' BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
+# either express or implied. See the License for the specific
+# language governing permissions and limitations under the License.
+
+@defaultMessage Explicitly specify the ContentType of HTTP entities when creating
+org.apache.http.entity.StringEntity#(java.lang.String)
+org.apache.http.entity.StringEntity#(java.lang.String,java.lang.String)
+org.apache.http.entity.StringEntity#(java.lang.String,java.nio.charset.Charset)
+org.apache.http.entity.ByteArrayEntity#(byte[])
+org.apache.http.entity.ByteArrayEntity#(byte[],int,int)
+org.apache.http.entity.FileEntity#(java.io.File)
+org.apache.http.entity.InputStreamEntity#(java.io.InputStream)
+org.apache.http.entity.InputStreamEntity#(java.io.InputStream,long)
+org.apache.http.nio.entity.NByteArrayEntity#(byte[])
+org.apache.http.nio.entity.NByteArrayEntity#(byte[],int,int)
+org.apache.http.nio.entity.NFileEntity#(java.io.File)
+org.apache.http.nio.entity.NStringEntity#(java.lang.String)
+org.apache.http.nio.entity.NStringEntity#(java.lang.String,java.lang.String)
+
+@defaultMessage Use non-deprecated constructors
+org.apache.http.nio.entity.NFileEntity#(java.io.File,java.lang.String)
+org.apache.http.nio.entity.NFileEntity#(java.io.File,java.lang.String,boolean)
+org.apache.http.entity.FileEntity#(java.io.File,java.lang.String)
+org.apache.http.entity.StringEntity#(java.lang.String,java.lang.String,java.lang.String)
+
+@defaultMessage BasicEntity is easy to mess up and forget to set content type
+org.apache.http.entity.BasicHttpEntity#()
+
+@defaultMessage EntityTemplate is easy to mess up and forget to set content type
+org.apache.http.entity.EntityTemplate#(org.apache.http.entity.ContentProducer)
+
+@defaultMessage SerializableEntity uses java serialization and makes it easy to forget to set content type
+org.apache.http.entity.SerializableEntity#(java.io.Serializable)
diff --git a/buildSrc/src/main/resources/plugin-descriptor.properties b/buildSrc/src/main/resources/plugin-descriptor.properties
index ebde46d326ba9..67c6ee39968cd 100644
--- a/buildSrc/src/main/resources/plugin-descriptor.properties
+++ b/buildSrc/src/main/resources/plugin-descriptor.properties
@@ -30,11 +30,15 @@ name=${name}
# 'classname': the name of the class to load, fully-qualified.
classname=${classname}
#
-# 'java.version' version of java the code is built against
+# 'java.version': version of java the code is built against
# use the system property java.specification.version
# version string must be a sequence of nonnegative decimal integers
# separated by "."'s and may have leading zeros
java.version=${javaVersion}
#
-# 'elasticsearch.version' version of elasticsearch compiled against
+# 'elasticsearch.version': version of elasticsearch compiled against
elasticsearch.version=${elasticsearchVersion}
+### optional elements for plugins:
+#
+# 'has.native.controller': whether or not the plugin has a native controller
+has.native.controller=${hasNativeController}
diff --git a/buildSrc/version.properties b/buildSrc/version.properties
index e96f98245958d..eab95c5ace916 100644
--- a/buildSrc/version.properties
+++ b/buildSrc/version.properties
@@ -1,18 +1,21 @@
-elasticsearch = 5.0.0-alpha6
-lucene = 6.2.0
+elasticsearch = 5.6.4
+lucene = 6.6.1
# optional dependencies
spatial4j = 0.6
jts = 1.13
-jackson = 2.8.1
+jackson = 2.8.6
snakeyaml = 1.15
-log4j = 2.6.2
+# when updating log4j, please update also docs/java-api/index.asciidoc
+log4j = 2.9.1
slf4j = 1.6.2
-jna = 4.2.2
+
+# when updating the JNA version, also update the version in buildSrc/build.gradle
+jna = 4.4.0-1
# test dependencies
-randomizedrunner = 2.3.2
-junit = 4.11
+randomizedrunner = 2.5.0
+junit = 4.12
httpclient = 4.5.2
httpcore = 4.4.5
commonslogging = 1.1.3
diff --git a/client/benchmark/build.gradle b/client/benchmark/build.gradle
index bd4abddbd1ddf..ec0a002b1ae45 100644
--- a/client/benchmark/build.gradle
+++ b/client/benchmark/build.gradle
@@ -37,6 +37,10 @@ apply plugin: 'application'
group = 'org.elasticsearch.client'
+// Not published so no need to assemble
+tasks.remove(assemble)
+build.dependsOn.remove('assemble')
+
archivesBaseName = 'client-benchmarks'
mainClassName = 'org.elasticsearch.client.benchmark.BenchmarkMain'
@@ -49,7 +53,7 @@ task test(type: Test, overwrite: true)
dependencies {
compile 'org.apache.commons:commons-math3:3.2'
- compile("org.elasticsearch.client:rest:${version}")
+ compile("org.elasticsearch.client:elasticsearch-rest-client:${version}")
// bottleneck should be the client, not Elasticsearch
compile project(path: ':client:client-benchmark-noop-api-plugin')
// for transport client
@@ -64,7 +68,3 @@ dependencies {
// No licenses for our benchmark deps (we don't ship benchmarks)
dependencyLicenses.enabled = false
-
-extraArchive {
- javadoc = false
-}
diff --git a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/ops/bulk/BulkBenchmarkTask.java b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/ops/bulk/BulkBenchmarkTask.java
index 214a75d12cc01..e9cde26e6c870 100644
--- a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/ops/bulk/BulkBenchmarkTask.java
+++ b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/ops/bulk/BulkBenchmarkTask.java
@@ -95,7 +95,7 @@ private static final class LoadGenerator {
private final BlockingQueue> bulkQueue;
private final int bulkSize;
- public LoadGenerator(Path bulkDataFile, BlockingQueue> bulkQueue, int bulkSize) {
+ LoadGenerator(Path bulkDataFile, BlockingQueue> bulkQueue, int bulkSize) {
this.bulkDataFile = bulkDataFile;
this.bulkQueue = bulkQueue;
this.bulkSize = bulkSize;
@@ -143,7 +143,7 @@ private static final class BulkIndexer implements Runnable {
private final BulkRequestExecutor bulkRequestExecutor;
private final SampleRecorder sampleRecorder;
- public BulkIndexer(BlockingQueue> bulkData, int warmupIterations, int measurementIterations,
+ BulkIndexer(BlockingQueue> bulkData, int warmupIterations, int measurementIterations,
SampleRecorder sampleRecorder, BulkRequestExecutor bulkRequestExecutor) {
this.bulkData = bulkData;
this.warmupIterations = warmupIterations;
diff --git a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/rest/RestClientBenchmark.java b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/rest/RestClientBenchmark.java
index b342d93fba5a1..9210526e7c81c 100644
--- a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/rest/RestClientBenchmark.java
+++ b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/rest/RestClientBenchmark.java
@@ -73,7 +73,7 @@ private static final class RestBulkRequestExecutor implements BulkRequestExecuto
private final RestClient client;
private final String actionMetaData;
- public RestBulkRequestExecutor(RestClient client, String index, String type) {
+ RestBulkRequestExecutor(RestClient client, String index, String type) {
this.client = client;
this.actionMetaData = String.format(Locale.ROOT, "{ \"index\" : { \"_index\" : \"%s\", \"_type\" : \"%s\" } }%n", index, type);
}
diff --git a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/transport/TransportClientBenchmark.java b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/transport/TransportClientBenchmark.java
index c38234ef30241..bf25fde24cfc4 100644
--- a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/transport/TransportClientBenchmark.java
+++ b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/transport/TransportClientBenchmark.java
@@ -28,6 +28,7 @@
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
+import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.plugin.noop.NoopPlugin;
import org.elasticsearch.plugin.noop.action.bulk.NoopBulkAction;
@@ -70,7 +71,7 @@ private static final class TransportBulkRequestExecutor implements BulkRequestEx
private final String indexName;
private final String typeName;
- public TransportBulkRequestExecutor(TransportClient client, String indexName, String typeName) {
+ TransportBulkRequestExecutor(TransportClient client, String indexName, String typeName) {
this.client = client;
this.indexName = indexName;
this.typeName = typeName;
@@ -80,7 +81,7 @@ public TransportBulkRequestExecutor(TransportClient client, String indexName, St
public boolean bulkIndex(List bulkData) {
NoopBulkRequestBuilder builder = NoopBulkAction.INSTANCE.newRequestBuilder(client);
for (String bulkItem : bulkData) {
- builder.add(new IndexRequest(indexName, typeName).source(bulkItem.getBytes(StandardCharsets.UTF_8)));
+ builder.add(new IndexRequest(indexName, typeName).source(bulkItem.getBytes(StandardCharsets.UTF_8), XContentType.JSON));
}
BulkResponse bulkResponse;
try {
diff --git a/client/client-benchmark-noop-api-plugin/build.gradle b/client/client-benchmark-noop-api-plugin/build.gradle
index 9f3af3ce002ae..bee41034c3ce5 100644
--- a/client/client-benchmark-noop-api-plugin/build.gradle
+++ b/client/client-benchmark-noop-api-plugin/build.gradle
@@ -20,7 +20,6 @@
group = 'org.elasticsearch.plugin'
apply plugin: 'elasticsearch.esplugin'
-apply plugin: 'com.bmuschko.nexus'
esplugin {
name 'client-benchmark-noop-api'
@@ -28,9 +27,12 @@ esplugin {
classname 'org.elasticsearch.plugin.noop.NoopPlugin'
}
+// Not published so no need to assemble
+tasks.remove(assemble)
+build.dependsOn.remove('assemble')
+
compileJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked"
// no unit tests
test.enabled = false
integTest.enabled = false
-
diff --git a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/NoopPlugin.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/NoopPlugin.java
index 343d3cf613a04..e8ed27715c10a 100644
--- a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/NoopPlugin.java
+++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/NoopPlugin.java
@@ -23,19 +23,27 @@
import org.elasticsearch.plugin.noop.action.bulk.TransportNoopBulkAction;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionResponse;
+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
+import org.elasticsearch.cluster.node.DiscoveryNodes;
+import org.elasticsearch.common.settings.ClusterSettings;
+import org.elasticsearch.common.settings.IndexScopedSettings;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.settings.SettingsFilter;
import org.elasticsearch.plugin.noop.action.search.NoopSearchAction;
import org.elasticsearch.plugin.noop.action.search.RestNoopSearchAction;
import org.elasticsearch.plugin.noop.action.search.TransportNoopSearchAction;
import org.elasticsearch.plugins.ActionPlugin;
import org.elasticsearch.plugins.Plugin;
+import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestHandler;
import java.util.Arrays;
import java.util.List;
+import java.util.function.Supplier;
public class NoopPlugin extends Plugin implements ActionPlugin {
@Override
- public List, ? extends ActionResponse>> getActions() {
+ public List> getActions() {
return Arrays.asList(
new ActionHandler<>(NoopBulkAction.INSTANCE, TransportNoopBulkAction.class),
new ActionHandler<>(NoopSearchAction.INSTANCE, TransportNoopSearchAction.class)
@@ -43,7 +51,11 @@ public class NoopPlugin extends Plugin implements ActionPlugin {
}
@Override
- public List> getRestHandlers() {
- return Arrays.asList(RestNoopBulkAction.class, RestNoopSearchAction.class);
+ public List getRestHandlers(Settings settings, RestController restController, ClusterSettings clusterSettings,
+ IndexScopedSettings indexScopedSettings, SettingsFilter settingsFilter, IndexNameExpressionResolver indexNameExpressionResolver,
+ Supplier nodesInCluster) {
+ return Arrays.asList(
+ new RestNoopBulkAction(settings, restController),
+ new RestNoopSearchAction(settings, restController));
}
}
diff --git a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/NoopBulkRequestBuilder.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/NoopBulkRequestBuilder.java
index ceaf9f8cc9d17..1034e722e8789 100644
--- a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/NoopBulkRequestBuilder.java
+++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/NoopBulkRequestBuilder.java
@@ -33,6 +33,7 @@
import org.elasticsearch.client.ElasticsearchClient;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.unit.TimeValue;
+import org.elasticsearch.common.xcontent.XContentType;
public class NoopBulkRequestBuilder extends ActionRequestBuilder
implements WriteRequestBuilder {
@@ -95,17 +96,17 @@ public NoopBulkRequestBuilder add(UpdateRequestBuilder request) {
/**
* Adds a framed data in binary format
*/
- public NoopBulkRequestBuilder add(byte[] data, int from, int length) throws Exception {
- request.add(data, from, length, null, null);
+ public NoopBulkRequestBuilder add(byte[] data, int from, int length, XContentType xContentType) throws Exception {
+ request.add(data, from, length, null, null, xContentType);
return this;
}
/**
* Adds a framed data in binary format
*/
- public NoopBulkRequestBuilder add(byte[] data, int from, int length, @Nullable String defaultIndex, @Nullable String defaultType)
- throws Exception {
- request.add(data, from, length, defaultIndex, defaultType);
+ public NoopBulkRequestBuilder add(byte[] data, int from, int length, @Nullable String defaultIndex, @Nullable String defaultType,
+ XContentType xContentType) throws Exception {
+ request.add(data, from, length, defaultIndex, defaultType, xContentType);
return this;
}
diff --git a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/RestNoopBulkAction.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/RestNoopBulkAction.java
index 814c05889b2b7..df712f537e43e 100644
--- a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/RestNoopBulkAction.java
+++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/RestNoopBulkAction.java
@@ -18,6 +18,7 @@
*/
package org.elasticsearch.plugin.noop.action.bulk;
+import org.elasticsearch.action.DocWriteRequest;
import org.elasticsearch.action.DocWriteResponse;
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.bulk.BulkRequest;
@@ -27,7 +28,6 @@
import org.elasticsearch.client.Requests;
import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.common.Strings;
-import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.index.shard.ShardId;
@@ -39,12 +39,13 @@
import org.elasticsearch.rest.RestResponse;
import org.elasticsearch.rest.action.RestBuilderListener;
+import java.io.IOException;
+
import static org.elasticsearch.rest.RestRequest.Method.POST;
import static org.elasticsearch.rest.RestRequest.Method.PUT;
import static org.elasticsearch.rest.RestStatus.OK;
public class RestNoopBulkAction extends BaseRestHandler {
- @Inject
public RestNoopBulkAction(Settings settings, RestController controller) {
super(settings);
@@ -57,7 +58,7 @@ public RestNoopBulkAction(Settings settings, RestController controller) {
}
@Override
- public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws Exception {
+ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {
BulkRequest bulkRequest = Requests.bulkRequest();
String defaultIndex = request.param("index");
String defaultType = request.param("type");
@@ -72,22 +73,24 @@ public void handleRequest(final RestRequest request, final RestChannel channel,
}
bulkRequest.timeout(request.paramAsTime("timeout", BulkShardRequest.DEFAULT_TIMEOUT));
bulkRequest.setRefreshPolicy(request.param("refresh"));
- bulkRequest.add(request.content(), defaultIndex, defaultType, defaultRouting, defaultFields, defaultPipeline, null, true);
+ bulkRequest.add(request.requiredContent(), defaultIndex, defaultType, defaultRouting, defaultFields,
+ null, defaultPipeline, null, true, request.getXContentType());
// short circuit the call to the transport layer
- BulkRestBuilderListener listener = new BulkRestBuilderListener(channel, request);
- listener.onResponse(bulkRequest);
-
+ return channel -> {
+ BulkRestBuilderListener listener = new BulkRestBuilderListener(channel, request);
+ listener.onResponse(bulkRequest);
+ };
}
private static class BulkRestBuilderListener extends RestBuilderListener {
- private final BulkItemResponse ITEM_RESPONSE = new BulkItemResponse(1, "update",
+ private final BulkItemResponse ITEM_RESPONSE = new BulkItemResponse(1, DocWriteRequest.OpType.UPDATE,
new UpdateResponse(new ShardId("mock", "", 1), "mock_type", "1", 1L, DocWriteResponse.Result.CREATED));
private final RestRequest request;
- public BulkRestBuilderListener(RestChannel channel, RestRequest request) {
+ BulkRestBuilderListener(RestChannel channel, RestRequest request) {
super(channel);
this.request = request;
}
@@ -99,9 +102,7 @@ public RestResponse buildResponse(BulkRequest bulkRequest, XContentBuilder build
builder.field(Fields.ERRORS, false);
builder.startArray(Fields.ITEMS);
for (int idx = 0; idx < bulkRequest.numberOfActions(); idx++) {
- builder.startObject();
ITEM_RESPONSE.toXContent(builder, request);
- builder.endObject();
}
builder.endArray();
builder.endObject();
diff --git a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/TransportNoopBulkAction.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/TransportNoopBulkAction.java
index dcc225c260309..aff5863bd9298 100644
--- a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/TransportNoopBulkAction.java
+++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/TransportNoopBulkAction.java
@@ -19,6 +19,7 @@
package org.elasticsearch.plugin.noop.action.bulk;
import org.elasticsearch.action.ActionListener;
+import org.elasticsearch.action.DocWriteRequest;
import org.elasticsearch.action.DocWriteResponse;
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.bulk.BulkRequest;
@@ -34,7 +35,7 @@
import org.elasticsearch.transport.TransportService;
public class TransportNoopBulkAction extends HandledTransportAction {
- private static final BulkItemResponse ITEM_RESPONSE = new BulkItemResponse(1, "update",
+ private static final BulkItemResponse ITEM_RESPONSE = new BulkItemResponse(1, DocWriteRequest.OpType.UPDATE,
new UpdateResponse(new ShardId("mock", "", 1), "mock_type", "1", 1L, DocWriteResponse.Result.CREATED));
@Inject
@@ -45,7 +46,7 @@ public TransportNoopBulkAction(Settings settings, ThreadPool threadPool, Transpo
@Override
protected void doExecute(BulkRequest request, ActionListener listener) {
- final int itemCount = request.subRequests().size();
+ final int itemCount = request.requests().size();
// simulate at least a realistic amount of data that gets serialized
BulkItemResponse[] bulkItemResponses = new BulkItemResponse[itemCount];
for (int idx = 0; idx < itemCount; idx++) {
diff --git a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/RestNoopSearchAction.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/RestNoopSearchAction.java
index 3520876af04b4..48a453c372507 100644
--- a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/RestNoopSearchAction.java
+++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/RestNoopSearchAction.java
@@ -20,10 +20,8 @@
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.client.node.NodeClient;
-import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.rest.BaseRestHandler;
-import org.elasticsearch.rest.RestChannel;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.action.RestStatusToXContentListener;
@@ -34,8 +32,6 @@
import static org.elasticsearch.rest.RestRequest.Method.POST;
public class RestNoopSearchAction extends BaseRestHandler {
-
- @Inject
public RestNoopSearchAction(Settings settings, RestController controller) {
super(settings);
controller.registerHandler(GET, "/_noop_search", this);
@@ -47,8 +43,8 @@ public RestNoopSearchAction(Settings settings, RestController controller) {
}
@Override
- public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws IOException {
+ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {
SearchRequest searchRequest = new SearchRequest();
- client.execute(NoopSearchAction.INSTANCE, searchRequest, new RestStatusToXContentListener<>(channel));
+ return channel -> client.execute(NoopSearchAction.INSTANCE, searchRequest, new RestStatusToXContentListener<>(channel));
}
}
diff --git a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/TransportNoopSearchAction.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/TransportNoopSearchAction.java
index c4397684bc41e..280e0b08f2c72 100644
--- a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/TransportNoopSearchAction.java
+++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/TransportNoopSearchAction.java
@@ -28,8 +28,8 @@
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.search.aggregations.InternalAggregations;
-import org.elasticsearch.search.internal.InternalSearchHit;
-import org.elasticsearch.search.internal.InternalSearchHits;
+import org.elasticsearch.search.SearchHit;
+import org.elasticsearch.search.SearchHits;
import org.elasticsearch.search.internal.InternalSearchResponse;
import org.elasticsearch.search.profile.SearchProfileShardResults;
import org.elasticsearch.search.suggest.Suggest;
@@ -49,10 +49,10 @@ public TransportNoopSearchAction(Settings settings, ThreadPool threadPool, Trans
@Override
protected void doExecute(SearchRequest request, ActionListener listener) {
listener.onResponse(new SearchResponse(new InternalSearchResponse(
- new InternalSearchHits(
- new InternalSearchHit[0], 0L, 0.0f),
+ new SearchHits(
+ new SearchHit[0], 0L, 0.0f),
new InternalAggregations(Collections.emptyList()),
new Suggest(Collections.emptyList()),
- new SearchProfileShardResults(Collections.emptyMap()), false, false), "", 1, 1, 0, new ShardSearchFailure[0]));
+ new SearchProfileShardResults(Collections.emptyMap()), false, false, 1), "", 1, 1, 0, 0, new ShardSearchFailure[0]));
}
}
diff --git a/client/rest-high-level/build.gradle b/client/rest-high-level/build.gradle
new file mode 100644
index 0000000000000..ba97605dba82e
--- /dev/null
+++ b/client/rest-high-level/build.gradle
@@ -0,0 +1,63 @@
+import org.elasticsearch.gradle.precommit.PrecommitTasks
+
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+apply plugin: 'elasticsearch.build'
+apply plugin: 'elasticsearch.rest-test'
+apply plugin: 'nebula.maven-base-publish'
+apply plugin: 'nebula.maven-scm'
+
+group = 'org.elasticsearch.client'
+archivesBaseName = 'elasticsearch-rest-high-level-client'
+
+publishing {
+ publications {
+ nebula {
+ artifactId = archivesBaseName
+ }
+ }
+}
+
+dependencies {
+ compile "org.elasticsearch:elasticsearch:${version}"
+ compile "org.elasticsearch.client:elasticsearch-rest-client:${version}"
+ compile "org.elasticsearch.plugin:parent-join-client:${version}"
+ compile "org.elasticsearch.plugin:aggs-matrix-stats-client:${version}"
+
+ testCompile "org.elasticsearch.client:test:${version}"
+ testCompile "org.elasticsearch.test:framework:${version}"
+ testCompile "com.carrotsearch.randomizedtesting:randomizedtesting-runner:${versions.randomizedrunner}"
+ testCompile "junit:junit:${versions.junit}"
+ testCompile "org.hamcrest:hamcrest-all:${versions.hamcrest}"
+}
+
+dependencyLicenses {
+ // Don't check licenses for dependency that are part of the elasticsearch project
+ // But any other dependency should have its license/notice/sha1
+ dependencies = project.configurations.runtime.fileCollection {
+ it.group.startsWith('org.elasticsearch') == false
+ }
+}
+
+forbiddenApisMain {
+ // core does not depend on the httpclient for compile so we add the signatures here. We don't add them for test as they are already
+ // specified
+ signaturesURLs += [PrecommitTasks.getResource('/forbidden/http-signatures.txt')]
+ signaturesURLs += [file('src/main/resources/forbidden/rest-high-level-signatures.txt').toURI().toURL()]
+}
\ No newline at end of file
diff --git a/client/rest-high-level/src/main/java/org/elasticsearch/client/Request.java b/client/rest-high-level/src/main/java/org/elasticsearch/client/Request.java
new file mode 100644
index 0000000000000..7a95553c3c003
--- /dev/null
+++ b/client/rest-high-level/src/main/java/org/elasticsearch/client/Request.java
@@ -0,0 +1,578 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.client;
+
+import org.apache.http.HttpEntity;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.entity.ContentType;
+import org.apache.lucene.util.BytesRef;
+import org.elasticsearch.action.DocWriteRequest;
+import org.elasticsearch.action.bulk.BulkRequest;
+import org.elasticsearch.action.delete.DeleteRequest;
+import org.elasticsearch.action.get.GetRequest;
+import org.elasticsearch.action.index.IndexRequest;
+import org.elasticsearch.action.search.ClearScrollRequest;
+import org.elasticsearch.action.search.SearchRequest;
+import org.elasticsearch.action.search.SearchScrollRequest;
+import org.elasticsearch.action.support.ActiveShardCount;
+import org.elasticsearch.action.support.IndicesOptions;
+import org.elasticsearch.action.support.WriteRequest;
+import org.elasticsearch.action.update.UpdateRequest;
+import org.elasticsearch.common.Nullable;
+import org.elasticsearch.common.Strings;
+import org.elasticsearch.common.SuppressForbidden;
+import org.elasticsearch.common.bytes.BytesReference;
+import org.elasticsearch.common.lucene.uid.Versions;
+import org.elasticsearch.common.unit.TimeValue;
+import org.elasticsearch.common.xcontent.NamedXContentRegistry;
+import org.elasticsearch.common.xcontent.ToXContent;
+import org.elasticsearch.common.xcontent.XContentBuilder;
+import org.elasticsearch.common.xcontent.XContentHelper;
+import org.elasticsearch.common.xcontent.XContentParser;
+import org.elasticsearch.common.xcontent.XContentType;
+import org.elasticsearch.index.VersionType;
+import org.elasticsearch.rest.action.search.RestSearchAction;
+import org.elasticsearch.search.fetch.subphase.FetchSourceContext;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Objects;
+import java.util.StringJoiner;
+
+public final class Request {
+
+ static final XContentType REQUEST_BODY_CONTENT_TYPE = XContentType.JSON;
+
+ private final String method;
+ private final String endpoint;
+ private final Map parameters;
+ private final HttpEntity entity;
+
+ public Request(String method, String endpoint, Map parameters, HttpEntity entity) {
+ this.method = Objects.requireNonNull(method, "method cannot be null");
+ this.endpoint = Objects.requireNonNull(endpoint, "endpoint cannot be null");
+ this.parameters = Objects.requireNonNull(parameters, "parameters cannot be null");
+ this.entity = entity;
+ }
+
+ public String getMethod() {
+ return method;
+ }
+
+ public String getEndpoint() {
+ return endpoint;
+ }
+
+ public Map getParameters() {
+ return parameters;
+ }
+
+ public HttpEntity getEntity() {
+ return entity;
+ }
+
+ @Override
+ public String toString() {
+ return "Request{" +
+ "method='" + method + '\'' +
+ ", endpoint='" + endpoint + '\'' +
+ ", params=" + parameters +
+ ", hasBody=" + (entity != null) +
+ '}';
+ }
+
+ static Request delete(DeleteRequest deleteRequest) {
+ String endpoint = endpoint(deleteRequest.index(), deleteRequest.type(), deleteRequest.id());
+
+ Params parameters = Params.builder();
+ parameters.withRouting(deleteRequest.routing());
+ parameters.withParent(deleteRequest.parent());
+ parameters.withTimeout(deleteRequest.timeout());
+ parameters.withVersion(deleteRequest.version());
+ parameters.withVersionType(deleteRequest.versionType());
+ parameters.withRefreshPolicy(deleteRequest.getRefreshPolicy());
+ parameters.withWaitForActiveShards(deleteRequest.waitForActiveShards());
+
+ return new Request(HttpDelete.METHOD_NAME, endpoint, parameters.getParams(), null);
+ }
+
+ static Request info() {
+ return new Request(HttpGet.METHOD_NAME, "/", Collections.emptyMap(), null);
+ }
+
+ static Request bulk(BulkRequest bulkRequest) throws IOException {
+ Params parameters = Params.builder();
+ parameters.withTimeout(bulkRequest.timeout());
+ parameters.withRefreshPolicy(bulkRequest.getRefreshPolicy());
+
+ // Bulk API only supports newline delimited JSON or Smile. Before executing
+ // the bulk, we need to check that all requests have the same content-type
+ // and this content-type is supported by the Bulk API.
+ XContentType bulkContentType = null;
+ for (int i = 0; i < bulkRequest.numberOfActions(); i++) {
+ DocWriteRequest> request = bulkRequest.requests().get(i);
+
+ DocWriteRequest.OpType opType = request.opType();
+ if (opType == DocWriteRequest.OpType.INDEX || opType == DocWriteRequest.OpType.CREATE) {
+ bulkContentType = enforceSameContentType((IndexRequest) request, bulkContentType);
+
+ } else if (opType == DocWriteRequest.OpType.UPDATE) {
+ UpdateRequest updateRequest = (UpdateRequest) request;
+ if (updateRequest.doc() != null) {
+ bulkContentType = enforceSameContentType(updateRequest.doc(), bulkContentType);
+ }
+ if (updateRequest.upsertRequest() != null) {
+ bulkContentType = enforceSameContentType(updateRequest.upsertRequest(), bulkContentType);
+ }
+ }
+ }
+
+ if (bulkContentType == null) {
+ bulkContentType = XContentType.JSON;
+ }
+
+ final byte separator = bulkContentType.xContent().streamSeparator();
+ final ContentType requestContentType = createContentType(bulkContentType);
+
+ ByteArrayOutputStream content = new ByteArrayOutputStream();
+ for (DocWriteRequest> request : bulkRequest.requests()) {
+ DocWriteRequest.OpType opType = request.opType();
+
+ try (XContentBuilder metadata = XContentBuilder.builder(bulkContentType.xContent())) {
+ metadata.startObject();
+ {
+ metadata.startObject(opType.getLowercase());
+ if (Strings.hasLength(request.index())) {
+ metadata.field("_index", request.index());
+ }
+ if (Strings.hasLength(request.type())) {
+ metadata.field("_type", request.type());
+ }
+ if (Strings.hasLength(request.id())) {
+ metadata.field("_id", request.id());
+ }
+ if (Strings.hasLength(request.routing())) {
+ metadata.field("_routing", request.routing());
+ }
+ if (Strings.hasLength(request.parent())) {
+ metadata.field("_parent", request.parent());
+ }
+ if (request.version() != Versions.MATCH_ANY) {
+ metadata.field("_version", request.version());
+ }
+
+ VersionType versionType = request.versionType();
+ if (versionType != VersionType.INTERNAL) {
+ if (versionType == VersionType.EXTERNAL) {
+ metadata.field("_version_type", "external");
+ } else if (versionType == VersionType.EXTERNAL_GTE) {
+ metadata.field("_version_type", "external_gte");
+ } else if (versionType == VersionType.FORCE) {
+ metadata.field("_version_type", "force");
+ }
+ }
+
+ if (opType == DocWriteRequest.OpType.INDEX || opType == DocWriteRequest.OpType.CREATE) {
+ IndexRequest indexRequest = (IndexRequest) request;
+ if (Strings.hasLength(indexRequest.getPipeline())) {
+ metadata.field("pipeline", indexRequest.getPipeline());
+ }
+ } else if (opType == DocWriteRequest.OpType.UPDATE) {
+ UpdateRequest updateRequest = (UpdateRequest) request;
+ if (updateRequest.retryOnConflict() > 0) {
+ metadata.field("_retry_on_conflict", updateRequest.retryOnConflict());
+ }
+ if (updateRequest.fetchSource() != null) {
+ metadata.field("_source", updateRequest.fetchSource());
+ }
+ }
+ metadata.endObject();
+ }
+ metadata.endObject();
+
+ BytesRef metadataSource = metadata.bytes().toBytesRef();
+ content.write(metadataSource.bytes, metadataSource.offset, metadataSource.length);
+ content.write(separator);
+ }
+
+ BytesRef source = null;
+ if (opType == DocWriteRequest.OpType.INDEX || opType == DocWriteRequest.OpType.CREATE) {
+ IndexRequest indexRequest = (IndexRequest) request;
+ BytesReference indexSource = indexRequest.source();
+ XContentType indexXContentType = indexRequest.getContentType();
+
+ try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, indexSource, indexXContentType)) {
+ try (XContentBuilder builder = XContentBuilder.builder(bulkContentType.xContent())) {
+ builder.copyCurrentStructure(parser);
+ source = builder.bytes().toBytesRef();
+ }
+ }
+ } else if (opType == DocWriteRequest.OpType.UPDATE) {
+ source = XContentHelper.toXContent((UpdateRequest) request, bulkContentType, false).toBytesRef();
+ }
+
+ if (source != null) {
+ content.write(source.bytes, source.offset, source.length);
+ content.write(separator);
+ }
+ }
+
+ HttpEntity entity = new ByteArrayEntity(content.toByteArray(), 0, content.size(), requestContentType);
+ return new Request(HttpPost.METHOD_NAME, "/_bulk", parameters.getParams(), entity);
+ }
+
+ static Request exists(GetRequest getRequest) {
+ Request request = get(getRequest);
+ return new Request(HttpHead.METHOD_NAME, request.endpoint, request.parameters, null);
+ }
+
+ static Request get(GetRequest getRequest) {
+ String endpoint = endpoint(getRequest.index(), getRequest.type(), getRequest.id());
+
+ Params parameters = Params.builder();
+ parameters.withPreference(getRequest.preference());
+ parameters.withRouting(getRequest.routing());
+ parameters.withParent(getRequest.parent());
+ parameters.withRefresh(getRequest.refresh());
+ parameters.withRealtime(getRequest.realtime());
+ parameters.withStoredFields(getRequest.storedFields());
+ parameters.withVersion(getRequest.version());
+ parameters.withVersionType(getRequest.versionType());
+ parameters.withFetchSourceContext(getRequest.fetchSourceContext());
+
+ return new Request(HttpGet.METHOD_NAME, endpoint, parameters.getParams(), null);
+ }
+
+ static Request index(IndexRequest indexRequest) {
+ String method = Strings.hasLength(indexRequest.id()) ? HttpPut.METHOD_NAME : HttpPost.METHOD_NAME;
+
+ boolean isCreate = (indexRequest.opType() == DocWriteRequest.OpType.CREATE);
+ String endpoint = endpoint(indexRequest.index(), indexRequest.type(), indexRequest.id(), isCreate ? "_create" : null);
+
+ Params parameters = Params.builder();
+ parameters.withRouting(indexRequest.routing());
+ parameters.withParent(indexRequest.parent());
+ parameters.withTimeout(indexRequest.timeout());
+ parameters.withVersion(indexRequest.version());
+ parameters.withVersionType(indexRequest.versionType());
+ parameters.withPipeline(indexRequest.getPipeline());
+ parameters.withRefreshPolicy(indexRequest.getRefreshPolicy());
+ parameters.withWaitForActiveShards(indexRequest.waitForActiveShards());
+
+ BytesRef source = indexRequest.source().toBytesRef();
+ ContentType contentType = createContentType(indexRequest.getContentType());
+ HttpEntity entity = new ByteArrayEntity(source.bytes, source.offset, source.length, contentType);
+
+ return new Request(method, endpoint, parameters.getParams(), entity);
+ }
+
+ static Request ping() {
+ return new Request(HttpHead.METHOD_NAME, "/", Collections.emptyMap(), null);
+ }
+
+ static Request update(UpdateRequest updateRequest) throws IOException {
+ String endpoint = endpoint(updateRequest.index(), updateRequest.type(), updateRequest.id(), "_update");
+
+ Params parameters = Params.builder();
+ parameters.withRouting(updateRequest.routing());
+ parameters.withParent(updateRequest.parent());
+ parameters.withTimeout(updateRequest.timeout());
+ parameters.withRefreshPolicy(updateRequest.getRefreshPolicy());
+ parameters.withWaitForActiveShards(updateRequest.waitForActiveShards());
+ parameters.withDocAsUpsert(updateRequest.docAsUpsert());
+ parameters.withFetchSourceContext(updateRequest.fetchSource());
+ parameters.withRetryOnConflict(updateRequest.retryOnConflict());
+ parameters.withVersion(updateRequest.version());
+ parameters.withVersionType(updateRequest.versionType());
+
+ // The Java API allows update requests with different content types
+ // set for the partial document and the upsert document. This client
+ // only accepts update requests that have the same content types set
+ // for both doc and upsert.
+ XContentType xContentType = null;
+ if (updateRequest.doc() != null) {
+ xContentType = updateRequest.doc().getContentType();
+ }
+ if (updateRequest.upsertRequest() != null) {
+ XContentType upsertContentType = updateRequest.upsertRequest().getContentType();
+ if ((xContentType != null) && (xContentType != upsertContentType)) {
+ throw new IllegalStateException("Update request cannot have different content types for doc [" + xContentType + "]" +
+ " and upsert [" + upsertContentType + "] documents");
+ } else {
+ xContentType = upsertContentType;
+ }
+ }
+ if (xContentType == null) {
+ xContentType = Requests.INDEX_CONTENT_TYPE;
+ }
+
+ HttpEntity entity = createEntity(updateRequest, xContentType);
+ return new Request(HttpPost.METHOD_NAME, endpoint, parameters.getParams(), entity);
+ }
+
+ static Request search(SearchRequest searchRequest) throws IOException {
+ String endpoint = endpoint(searchRequest.indices(), searchRequest.types(), "_search");
+ Params params = Params.builder();
+ params.putParam(RestSearchAction.TYPED_KEYS_PARAM, "true");
+ params.withRouting(searchRequest.routing());
+ params.withPreference(searchRequest.preference());
+ params.withIndicesOptions(searchRequest.indicesOptions());
+ params.putParam("search_type", searchRequest.searchType().name().toLowerCase(Locale.ROOT));
+ if (searchRequest.requestCache() != null) {
+ params.putParam("request_cache", Boolean.toString(searchRequest.requestCache()));
+ }
+ params.putParam("batched_reduce_size", Integer.toString(searchRequest.getBatchedReduceSize()));
+ if (searchRequest.scroll() != null) {
+ params.putParam("scroll", searchRequest.scroll().keepAlive());
+ }
+ HttpEntity entity = null;
+ if (searchRequest.source() != null) {
+ entity = createEntity(searchRequest.source(), REQUEST_BODY_CONTENT_TYPE);
+ }
+ return new Request(HttpGet.METHOD_NAME, endpoint, params.getParams(), entity);
+ }
+
+ static Request searchScroll(SearchScrollRequest searchScrollRequest) throws IOException {
+ HttpEntity entity = createEntity(searchScrollRequest, REQUEST_BODY_CONTENT_TYPE);
+ return new Request("GET", "/_search/scroll", Collections.emptyMap(), entity);
+ }
+
+ static Request clearScroll(ClearScrollRequest clearScrollRequest) throws IOException {
+ HttpEntity entity = createEntity(clearScrollRequest, REQUEST_BODY_CONTENT_TYPE);
+ return new Request("DELETE", "/_search/scroll", Collections.emptyMap(), entity);
+ }
+
+ private static HttpEntity createEntity(ToXContent toXContent, XContentType xContentType) throws IOException {
+ BytesRef source = XContentHelper.toXContent(toXContent, xContentType, false).toBytesRef();
+ return new ByteArrayEntity(source.bytes, source.offset, source.length, createContentType(xContentType));
+ }
+
+ static String endpoint(String[] indices, String[] types, String endpoint) {
+ return endpoint(String.join(",", indices), String.join(",", types), endpoint);
+ }
+
+ /**
+ * Utility method to build request's endpoint.
+ */
+ static String endpoint(String... parts) {
+ StringJoiner joiner = new StringJoiner("/", "/", "");
+ for (String part : parts) {
+ if (Strings.hasLength(part)) {
+ joiner.add(part);
+ }
+ }
+ return joiner.toString();
+ }
+
+ /**
+ * Returns a {@link ContentType} from a given {@link XContentType}.
+ *
+ * @param xContentType the {@link XContentType}
+ * @return the {@link ContentType}
+ */
+ @SuppressForbidden(reason = "Only allowed place to convert a XContentType to a ContentType")
+ public static ContentType createContentType(final XContentType xContentType) {
+ return ContentType.create(xContentType.mediaTypeWithoutParameters(), (Charset) null);
+ }
+
+ /**
+ * Utility class to build request's parameters map and centralize all parameter names.
+ */
+ static class Params {
+ private final Map params = new HashMap<>();
+
+ private Params() {
+ }
+
+ Params putParam(String key, String value) {
+ if (Strings.hasLength(value)) {
+ if (params.putIfAbsent(key, value) != null) {
+ throw new IllegalArgumentException("Request parameter [" + key + "] is already registered");
+ }
+ }
+ return this;
+ }
+
+ Params putParam(String key, TimeValue value) {
+ if (value != null) {
+ return putParam(key, value.getStringRep());
+ }
+ return this;
+ }
+
+ Params withDocAsUpsert(boolean docAsUpsert) {
+ if (docAsUpsert) {
+ return putParam("doc_as_upsert", Boolean.TRUE.toString());
+ }
+ return this;
+ }
+
+ Params withFetchSourceContext(FetchSourceContext fetchSourceContext) {
+ if (fetchSourceContext != null) {
+ if (fetchSourceContext.fetchSource() == false) {
+ putParam("_source", Boolean.FALSE.toString());
+ }
+ if (fetchSourceContext.includes() != null && fetchSourceContext.includes().length > 0) {
+ putParam("_source_include", String.join(",", fetchSourceContext.includes()));
+ }
+ if (fetchSourceContext.excludes() != null && fetchSourceContext.excludes().length > 0) {
+ putParam("_source_exclude", String.join(",", fetchSourceContext.excludes()));
+ }
+ }
+ return this;
+ }
+
+ Params withParent(String parent) {
+ return putParam("parent", parent);
+ }
+
+ Params withPipeline(String pipeline) {
+ return putParam("pipeline", pipeline);
+ }
+
+ Params withPreference(String preference) {
+ return putParam("preference", preference);
+ }
+
+ Params withRealtime(boolean realtime) {
+ if (realtime == false) {
+ return putParam("realtime", Boolean.FALSE.toString());
+ }
+ return this;
+ }
+
+ Params withRefresh(boolean refresh) {
+ if (refresh) {
+ return withRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
+ }
+ return this;
+ }
+
+ Params withRefreshPolicy(WriteRequest.RefreshPolicy refreshPolicy) {
+ if (refreshPolicy != WriteRequest.RefreshPolicy.NONE) {
+ return putParam("refresh", refreshPolicy.getValue());
+ }
+ return this;
+ }
+
+ Params withRetryOnConflict(int retryOnConflict) {
+ if (retryOnConflict > 0) {
+ return putParam("retry_on_conflict", String.valueOf(retryOnConflict));
+ }
+ return this;
+ }
+
+ Params withRouting(String routing) {
+ return putParam("routing", routing);
+ }
+
+ Params withStoredFields(String[] storedFields) {
+ if (storedFields != null && storedFields.length > 0) {
+ return putParam("stored_fields", String.join(",", storedFields));
+ }
+ return this;
+ }
+
+ Params withTimeout(TimeValue timeout) {
+ return putParam("timeout", timeout);
+ }
+
+ Params withVersion(long version) {
+ if (version != Versions.MATCH_ANY) {
+ return putParam("version", Long.toString(version));
+ }
+ return this;
+ }
+
+ Params withVersionType(VersionType versionType) {
+ if (versionType != VersionType.INTERNAL) {
+ return putParam("version_type", versionType.name().toLowerCase(Locale.ROOT));
+ }
+ return this;
+ }
+
+ Params withWaitForActiveShards(ActiveShardCount activeShardCount) {
+ if (activeShardCount != null && activeShardCount != ActiveShardCount.DEFAULT) {
+ return putParam("wait_for_active_shards", activeShardCount.toString().toLowerCase(Locale.ROOT));
+ }
+ return this;
+ }
+
+ Params withIndicesOptions(IndicesOptions indicesOptions) {
+ putParam("ignore_unavailable", Boolean.toString(indicesOptions.ignoreUnavailable()));
+ putParam("allow_no_indices", Boolean.toString(indicesOptions.allowNoIndices()));
+ String expandWildcards;
+ if (indicesOptions.expandWildcardsOpen() == false && indicesOptions.expandWildcardsClosed() == false) {
+ expandWildcards = "none";
+ } else {
+ StringJoiner joiner = new StringJoiner(",");
+ if (indicesOptions.expandWildcardsOpen()) {
+ joiner.add("open");
+ }
+ if (indicesOptions.expandWildcardsClosed()) {
+ joiner.add("closed");
+ }
+ expandWildcards = joiner.toString();
+ }
+ putParam("expand_wildcards", expandWildcards);
+ return this;
+ }
+
+ Map getParams() {
+ return Collections.unmodifiableMap(params);
+ }
+
+ static Params builder() {
+ return new Params();
+ }
+ }
+
+ /**
+ * Ensure that the {@link IndexRequest}'s content type is supported by the Bulk API and that it conforms
+ * to the current {@link BulkRequest}'s content type (if it's known at the time of this method get called).
+ *
+ * @return the {@link IndexRequest}'s content type
+ */
+ static XContentType enforceSameContentType(IndexRequest indexRequest, @Nullable XContentType xContentType) {
+ XContentType requestContentType = indexRequest.getContentType();
+ if (requestContentType != XContentType.JSON && requestContentType != XContentType.SMILE) {
+ throw new IllegalArgumentException("Unsupported content-type found for request with content-type [" + requestContentType
+ + "], only JSON and SMILE are supported");
+ }
+ if (xContentType == null) {
+ return requestContentType;
+ }
+ if (requestContentType != xContentType) {
+ throw new IllegalArgumentException("Mismatching content-type found for request with content-type [" + requestContentType
+ + "], previous requests have content-type [" + xContentType + "]");
+ }
+ return xContentType;
+ }
+}
diff --git a/client/rest-high-level/src/main/java/org/elasticsearch/client/RestHighLevelClient.java b/client/rest-high-level/src/main/java/org/elasticsearch/client/RestHighLevelClient.java
new file mode 100644
index 0000000000000..59684b18508ee
--- /dev/null
+++ b/client/rest-high-level/src/main/java/org/elasticsearch/client/RestHighLevelClient.java
@@ -0,0 +1,601 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.client;
+
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.elasticsearch.ElasticsearchException;
+import org.elasticsearch.ElasticsearchStatusException;
+import org.elasticsearch.action.ActionListener;
+import org.elasticsearch.action.ActionRequest;
+import org.elasticsearch.action.ActionRequestValidationException;
+import org.elasticsearch.action.bulk.BulkRequest;
+import org.elasticsearch.action.bulk.BulkResponse;
+import org.elasticsearch.action.delete.DeleteRequest;
+import org.elasticsearch.action.delete.DeleteResponse;
+import org.elasticsearch.action.get.GetRequest;
+import org.elasticsearch.action.get.GetResponse;
+import org.elasticsearch.action.index.IndexRequest;
+import org.elasticsearch.action.index.IndexResponse;
+import org.elasticsearch.action.main.MainRequest;
+import org.elasticsearch.action.main.MainResponse;
+import org.elasticsearch.action.search.ClearScrollRequest;
+import org.elasticsearch.action.search.ClearScrollResponse;
+import org.elasticsearch.action.search.SearchRequest;
+import org.elasticsearch.action.search.SearchResponse;
+import org.elasticsearch.action.search.SearchScrollRequest;
+import org.elasticsearch.action.update.UpdateRequest;
+import org.elasticsearch.action.update.UpdateResponse;
+import org.elasticsearch.common.CheckedFunction;
+import org.elasticsearch.common.ParseField;
+import org.elasticsearch.common.xcontent.ContextParser;
+import org.elasticsearch.common.xcontent.NamedXContentRegistry;
+import org.elasticsearch.common.xcontent.XContentParser;
+import org.elasticsearch.common.xcontent.XContentType;
+import org.elasticsearch.plugins.spi.NamedXContentProvider;
+import org.elasticsearch.rest.BytesRestResponse;
+import org.elasticsearch.rest.RestStatus;
+import org.elasticsearch.search.aggregations.Aggregation;
+import org.elasticsearch.search.aggregations.bucket.adjacency.AdjacencyMatrixAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.adjacency.ParsedAdjacencyMatrix;
+import org.elasticsearch.search.aggregations.bucket.filter.FilterAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.filter.ParsedFilter;
+import org.elasticsearch.search.aggregations.bucket.filters.FiltersAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.filters.ParsedFilters;
+import org.elasticsearch.search.aggregations.bucket.geogrid.GeoGridAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.geogrid.ParsedGeoHashGrid;
+import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.global.ParsedGlobal;
+import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.histogram.ParsedDateHistogram;
+import org.elasticsearch.search.aggregations.bucket.histogram.ParsedHistogram;
+import org.elasticsearch.search.aggregations.bucket.missing.MissingAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.missing.ParsedMissing;
+import org.elasticsearch.search.aggregations.bucket.nested.NestedAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.nested.ParsedNested;
+import org.elasticsearch.search.aggregations.bucket.nested.ParsedReverseNested;
+import org.elasticsearch.search.aggregations.bucket.nested.ReverseNestedAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.range.ip.IpRangeAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.range.ParsedBinaryRange;
+import org.elasticsearch.search.aggregations.bucket.range.ParsedRange;
+import org.elasticsearch.search.aggregations.bucket.range.RangeAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.range.date.DateRangeAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.range.date.ParsedDateRange;
+import org.elasticsearch.search.aggregations.bucket.range.geodistance.GeoDistanceAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.range.geodistance.ParsedGeoDistance;
+import org.elasticsearch.search.aggregations.bucket.sampler.InternalSampler;
+import org.elasticsearch.search.aggregations.bucket.sampler.ParsedSampler;
+import org.elasticsearch.search.aggregations.bucket.significant.ParsedSignificantLongTerms;
+import org.elasticsearch.search.aggregations.bucket.significant.ParsedSignificantStringTerms;
+import org.elasticsearch.search.aggregations.bucket.significant.SignificantLongTerms;
+import org.elasticsearch.search.aggregations.bucket.significant.SignificantStringTerms;
+import org.elasticsearch.search.aggregations.bucket.terms.DoubleTerms;
+import org.elasticsearch.search.aggregations.bucket.terms.LongTerms;
+import org.elasticsearch.search.aggregations.bucket.terms.ParsedDoubleTerms;
+import org.elasticsearch.search.aggregations.bucket.terms.ParsedLongTerms;
+import org.elasticsearch.search.aggregations.bucket.terms.ParsedStringTerms;
+import org.elasticsearch.search.aggregations.bucket.terms.StringTerms;
+import org.elasticsearch.search.aggregations.metrics.avg.AvgAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.avg.ParsedAvg;
+import org.elasticsearch.search.aggregations.metrics.cardinality.CardinalityAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.cardinality.ParsedCardinality;
+import org.elasticsearch.search.aggregations.metrics.geobounds.GeoBoundsAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.geobounds.ParsedGeoBounds;
+import org.elasticsearch.search.aggregations.metrics.geocentroid.GeoCentroidAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.geocentroid.ParsedGeoCentroid;
+import org.elasticsearch.search.aggregations.metrics.max.MaxAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.max.ParsedMax;
+import org.elasticsearch.search.aggregations.metrics.min.MinAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.min.ParsedMin;
+import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.InternalHDRPercentileRanks;
+import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.InternalHDRPercentiles;
+import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.ParsedHDRPercentileRanks;
+import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.ParsedHDRPercentiles;
+import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.InternalTDigestPercentileRanks;
+import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.InternalTDigestPercentiles;
+import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.ParsedTDigestPercentileRanks;
+import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.ParsedTDigestPercentiles;
+import org.elasticsearch.search.aggregations.metrics.scripted.ParsedScriptedMetric;
+import org.elasticsearch.search.aggregations.metrics.scripted.ScriptedMetricAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.stats.ParsedStats;
+import org.elasticsearch.search.aggregations.metrics.stats.StatsAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStatsAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.stats.extended.ParsedExtendedStats;
+import org.elasticsearch.search.aggregations.metrics.sum.ParsedSum;
+import org.elasticsearch.search.aggregations.metrics.sum.SumAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.tophits.ParsedTopHits;
+import org.elasticsearch.search.aggregations.metrics.tophits.TopHitsAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.valuecount.ParsedValueCount;
+import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCountAggregationBuilder;
+import org.elasticsearch.search.aggregations.pipeline.InternalSimpleValue;
+import org.elasticsearch.search.aggregations.pipeline.ParsedSimpleValue;
+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.InternalBucketMetricValue;
+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.ParsedBucketMetricValue;
+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.percentile.ParsedPercentilesBucket;
+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.percentile.PercentilesBucketPipelineAggregationBuilder;
+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.ParsedStatsBucket;
+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.StatsBucketPipelineAggregationBuilder;
+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.extended.ExtendedStatsBucketPipelineAggregationBuilder;
+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.extended.ParsedExtendedStatsBucket;
+import org.elasticsearch.search.aggregations.pipeline.derivative.DerivativePipelineAggregationBuilder;
+import org.elasticsearch.search.aggregations.pipeline.derivative.ParsedDerivative;
+import org.elasticsearch.search.suggest.Suggest;
+import org.elasticsearch.search.suggest.completion.CompletionSuggestion;
+import org.elasticsearch.search.suggest.phrase.PhraseSuggestion;
+import org.elasticsearch.search.suggest.term.TermSuggestion;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.ServiceLoader;
+import java.util.Set;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import static java.util.Collections.emptySet;
+import static java.util.Collections.singleton;
+import static java.util.stream.Collectors.toList;
+
+/**
+ * High level REST client that wraps an instance of the low level {@link RestClient} and allows to build requests and read responses.
+ * The provided {@link RestClient} is externally built and closed.
+ * Can be sub-classed to expose additional client methods that make use of endpoints added to Elasticsearch through plugins, or to
+ * add support for custom response sections, again added to Elasticsearch through plugins.
+ */
+public class RestHighLevelClient {
+
+ private final RestClient client;
+ private final NamedXContentRegistry registry;
+
+ /**
+ * Creates a {@link RestHighLevelClient} given the low level {@link RestClient} that it should use to perform requests.
+ */
+ public RestHighLevelClient(RestClient restClient) {
+ this(restClient, Collections.emptyList());
+ }
+
+ /**
+ * Creates a {@link RestHighLevelClient} given the low level {@link RestClient} that it should use to perform requests and
+ * a list of entries that allow to parse custom response sections added to Elasticsearch through plugins.
+ */
+ protected RestHighLevelClient(RestClient restClient, List namedXContentEntries) {
+ this.client = Objects.requireNonNull(restClient);
+ this.registry = new NamedXContentRegistry(
+ Stream.of(getDefaultNamedXContents().stream(), getProvidedNamedXContents().stream(), namedXContentEntries.stream())
+ .flatMap(Function.identity()).collect(toList()));
+ }
+
+ /**
+ * Executes a bulk request using the Bulk API
+ *
+ * See Bulk API on elastic.co
+ */
+ public BulkResponse bulk(BulkRequest bulkRequest, Header... headers) throws IOException {
+ return performRequestAndParseEntity(bulkRequest, Request::bulk, BulkResponse::fromXContent, emptySet(), headers);
+ }
+
+ /**
+ * Asynchronously executes a bulk request using the Bulk API
+ *
+ * See Bulk API on elastic.co
+ */
+    public void bulkAsync(BulkRequest bulkRequest, ActionListener<BulkResponse> listener, Header... headers) {
+ performRequestAsyncAndParseEntity(bulkRequest, Request::bulk, BulkResponse::fromXContent, listener, emptySet(), headers);
+ }
+
+ /**
+ * Pings the remote Elasticsearch cluster and returns true if the ping succeeded, false otherwise
+ */
+ public boolean ping(Header... headers) throws IOException {
+ return performRequest(new MainRequest(), (request) -> Request.ping(), RestHighLevelClient::convertExistsResponse,
+ emptySet(), headers);
+ }
+
+ /**
+ * Get the cluster info otherwise provided when sending an HTTP request to port 9200
+ */
+ public MainResponse info(Header... headers) throws IOException {
+ return performRequestAndParseEntity(new MainRequest(), (request) -> Request.info(), MainResponse::fromXContent, emptySet(),
+ headers);
+ }
+
+ /**
+ * Retrieves a document by id using the Get API
+ *
+ * See Get API on elastic.co
+ */
+ public GetResponse get(GetRequest getRequest, Header... headers) throws IOException {
+ return performRequestAndParseEntity(getRequest, Request::get, GetResponse::fromXContent, singleton(404), headers);
+ }
+
+ /**
+ * Asynchronously retrieves a document by id using the Get API
+ *
+ * See Get API on elastic.co
+ */
+    public void getAsync(GetRequest getRequest, ActionListener<GetResponse> listener, Header... headers) {
+ performRequestAsyncAndParseEntity(getRequest, Request::get, GetResponse::fromXContent, listener, singleton(404), headers);
+ }
+
+ /**
+ * Checks for the existence of a document. Returns true if it exists, false otherwise
+ *
+ * See Get API on elastic.co
+ */
+ public boolean exists(GetRequest getRequest, Header... headers) throws IOException {
+ return performRequest(getRequest, Request::exists, RestHighLevelClient::convertExistsResponse, emptySet(), headers);
+ }
+
+ /**
+ * Asynchronously checks for the existence of a document. Returns true if it exists, false otherwise
+ *
+ * See Get API on elastic.co
+ */
+    public void existsAsync(GetRequest getRequest, ActionListener<Boolean> listener, Header... headers) {
+ performRequestAsync(getRequest, Request::exists, RestHighLevelClient::convertExistsResponse, listener, emptySet(), headers);
+ }
+
+ /**
+ * Index a document using the Index API
+ *
+ * See Index API on elastic.co
+ */
+ public IndexResponse index(IndexRequest indexRequest, Header... headers) throws IOException {
+ return performRequestAndParseEntity(indexRequest, Request::index, IndexResponse::fromXContent, emptySet(), headers);
+ }
+
+ /**
+ * Asynchronously index a document using the Index API
+ *
+ * See Index API on elastic.co
+ */
+    public void indexAsync(IndexRequest indexRequest, ActionListener<IndexResponse> listener, Header... headers) {
+ performRequestAsyncAndParseEntity(indexRequest, Request::index, IndexResponse::fromXContent, listener, emptySet(), headers);
+ }
+
+ /**
+ * Updates a document using the Update API
+ *
+ * See Update API on elastic.co
+ */
+ public UpdateResponse update(UpdateRequest updateRequest, Header... headers) throws IOException {
+ return performRequestAndParseEntity(updateRequest, Request::update, UpdateResponse::fromXContent, emptySet(), headers);
+ }
+
+ /**
+ * Asynchronously updates a document using the Update API
+ *
+ * See Update API on elastic.co
+ */
+    public void updateAsync(UpdateRequest updateRequest, ActionListener<UpdateResponse> listener, Header... headers) {
+ performRequestAsyncAndParseEntity(updateRequest, Request::update, UpdateResponse::fromXContent, listener, emptySet(), headers);
+ }
+
+ /**
+     * Deletes a document by id using the Delete API
+ *
+ * See Delete API on elastic.co
+ */
+ public DeleteResponse delete(DeleteRequest deleteRequest, Header... headers) throws IOException {
+ return performRequestAndParseEntity(deleteRequest, Request::delete, DeleteResponse::fromXContent, Collections.singleton(404),
+ headers);
+ }
+
+ /**
+     * Asynchronously deletes a document by id using the Delete API
+ *
+ * See Delete API on elastic.co
+ */
+    public void deleteAsync(DeleteRequest deleteRequest, ActionListener<DeleteResponse> listener, Header... headers) {
+ performRequestAsyncAndParseEntity(deleteRequest, Request::delete, DeleteResponse::fromXContent, listener,
+ Collections.singleton(404), headers);
+ }
+
+ /**
+     * Executes a search using the Search API
+ *
+ * See Search API on elastic.co
+ */
+ public SearchResponse search(SearchRequest searchRequest, Header... headers) throws IOException {
+ return performRequestAndParseEntity(searchRequest, Request::search, SearchResponse::fromXContent, emptySet(), headers);
+ }
+
+ /**
+     * Asynchronously executes a search using the Search API
+ *
+ * See Search API on elastic.co
+ */
+    public void searchAsync(SearchRequest searchRequest, ActionListener<SearchResponse> listener, Header... headers) {
+ performRequestAsyncAndParseEntity(searchRequest, Request::search, SearchResponse::fromXContent, listener, emptySet(), headers);
+ }
+
+ /**
+     * Executes a search using the Search Scroll API
+     *
+     * See Search Scroll API on elastic.co
+ */
+ public SearchResponse searchScroll(SearchScrollRequest searchScrollRequest, Header... headers) throws IOException {
+ return performRequestAndParseEntity(searchScrollRequest, Request::searchScroll, SearchResponse::fromXContent, emptySet(), headers);
+ }
+
+ /**
+     * Asynchronously executes a search using the Search Scroll API
+     *
+     * See Search Scroll API on elastic.co
+ */
+    public void searchScrollAsync(SearchScrollRequest searchScrollRequest, ActionListener<SearchResponse> listener, Header... headers) {
+ performRequestAsyncAndParseEntity(searchScrollRequest, Request::searchScroll, SearchResponse::fromXContent,
+ listener, emptySet(), headers);
+ }
+
+ /**
+     * Clears one or more scroll ids using the Clear Scroll API
+     *
+     * See Clear Scroll API on elastic.co
+ */
+ public ClearScrollResponse clearScroll(ClearScrollRequest clearScrollRequest, Header... headers) throws IOException {
+ return performRequestAndParseEntity(clearScrollRequest, Request::clearScroll, ClearScrollResponse::fromXContent,
+ emptySet(), headers);
+ }
+
+ /**
+     * Asynchronously clears one or more scroll ids using the Clear Scroll API
+     *
+     * See Clear Scroll API on elastic.co
+ */
+    public void clearScrollAsync(ClearScrollRequest clearScrollRequest, ActionListener<ClearScrollResponse> listener, Header... headers) {
+ performRequestAsyncAndParseEntity(clearScrollRequest, Request::clearScroll, ClearScrollResponse::fromXContent,
+ listener, emptySet(), headers);
+ }
+
+    protected <Req extends ActionRequest, Resp> Resp performRequestAndParseEntity(Req request,
+            CheckedFunction<Req, Request, IOException> requestConverter,
+            CheckedFunction<XContentParser, Resp, IOException> entityParser,
+            Set<Integer> ignores, Header... headers) throws IOException {
+ return performRequest(request, requestConverter, (response) -> parseEntity(response.getEntity(), entityParser), ignores, headers);
+ }
+
+    protected <Req extends ActionRequest, Resp> Resp performRequest(Req request,
+            CheckedFunction<Req, Request, IOException> requestConverter,
+            CheckedFunction<Response, Resp, IOException> responseConverter,
+            Set<Integer> ignores, Header... headers) throws IOException {
+ ActionRequestValidationException validationException = request.validate();
+ if (validationException != null) {
+ throw validationException;
+ }
+ Request req = requestConverter.apply(request);
+ Response response;
+ try {
+ response = client.performRequest(req.getMethod(), req.getEndpoint(), req.getParameters(), req.getEntity(), headers);
+ } catch (ResponseException e) {
+ if (ignores.contains(e.getResponse().getStatusLine().getStatusCode())) {
+ try {
+ return responseConverter.apply(e.getResponse());
+ } catch (Exception innerException) {
+ //the exception is ignored as we now try to parse the response as an error.
+ //this covers cases like get where 404 can either be a valid document not found response,
+ //or an error for which parsing is completely different. We try to consider the 404 response as a valid one
+ //first. If parsing of the response breaks, we fall back to parsing it as an error.
+ throw parseResponseException(e);
+ }
+ }
+ throw parseResponseException(e);
+ }
+
+ try {
+ return responseConverter.apply(response);
+ } catch(Exception e) {
+ throw new IOException("Unable to parse response body for " + response, e);
+ }
+ }
+
+    protected <Req extends ActionRequest, Resp> void performRequestAsyncAndParseEntity(Req request,
+            CheckedFunction<Req, Request, IOException> requestConverter,
+            CheckedFunction<XContentParser, Resp, IOException> entityParser,
+            ActionListener<Resp> listener, Set<Integer> ignores, Header... headers) {
+ performRequestAsync(request, requestConverter, (response) -> parseEntity(response.getEntity(), entityParser),
+ listener, ignores, headers);
+ }
+
+    protected <Req extends ActionRequest, Resp> void performRequestAsync(Req request,
+            CheckedFunction<Req, Request, IOException> requestConverter,
+            CheckedFunction<Response, Resp, IOException> responseConverter,
+            ActionListener<Resp> listener, Set<Integer> ignores, Header... headers) {
+ ActionRequestValidationException validationException = request.validate();
+ if (validationException != null) {
+ listener.onFailure(validationException);
+ return;
+ }
+ Request req;
+ try {
+ req = requestConverter.apply(request);
+ } catch (Exception e) {
+ listener.onFailure(e);
+ return;
+ }
+
+ ResponseListener responseListener = wrapResponseListener(responseConverter, listener, ignores);
+ client.performRequestAsync(req.getMethod(), req.getEndpoint(), req.getParameters(), req.getEntity(), responseListener, headers);
+ }
+
+    <Resp> ResponseListener wrapResponseListener(CheckedFunction<Response, Resp, IOException> responseConverter,
+            ActionListener<Resp> actionListener, Set<Integer> ignores) {
+ return new ResponseListener() {
+ @Override
+ public void onSuccess(Response response) {
+ try {
+ actionListener.onResponse(responseConverter.apply(response));
+ } catch(Exception e) {
+ IOException ioe = new IOException("Unable to parse response body for " + response, e);
+ onFailure(ioe);
+ }
+ }
+
+ @Override
+ public void onFailure(Exception exception) {
+ if (exception instanceof ResponseException) {
+ ResponseException responseException = (ResponseException) exception;
+ Response response = responseException.getResponse();
+ if (ignores.contains(response.getStatusLine().getStatusCode())) {
+ try {
+ actionListener.onResponse(responseConverter.apply(response));
+ } catch (Exception innerException) {
+ //the exception is ignored as we now try to parse the response as an error.
+ //this covers cases like get where 404 can either be a valid document not found response,
+ //or an error for which parsing is completely different. We try to consider the 404 response as a valid one
+ //first. If parsing of the response breaks, we fall back to parsing it as an error.
+ actionListener.onFailure(parseResponseException(responseException));
+ }
+ } else {
+ actionListener.onFailure(parseResponseException(responseException));
+ }
+ } else {
+ actionListener.onFailure(exception);
+ }
+ }
+ };
+ }
+
+ /**
+ * Converts a {@link ResponseException} obtained from the low level REST client into an {@link ElasticsearchException}.
+ * If a response body was returned, tries to parse it as an error returned from Elasticsearch.
+ * If no response body was returned or anything goes wrong while parsing the error, returns a new {@link ElasticsearchStatusException}
+ * that wraps the original {@link ResponseException}. Any exception encountered while parsing is added to the returned
+ * exception as a suppressed exception; this method never lets a parsing failure itself escape.
+ */
+ protected ElasticsearchStatusException parseResponseException(ResponseException responseException) {
+ Response response = responseException.getResponse();
+ HttpEntity entity = response.getEntity();
+ ElasticsearchStatusException elasticsearchException;
+ if (entity == null) {
+ elasticsearchException = new ElasticsearchStatusException(
+ responseException.getMessage(), RestStatus.fromCode(response.getStatusLine().getStatusCode()), responseException);
+ } else {
+ try {
+ elasticsearchException = parseEntity(entity, BytesRestResponse::errorFromXContent);
+ elasticsearchException.addSuppressed(responseException);
+ } catch (Exception e) {
+ RestStatus restStatus = RestStatus.fromCode(response.getStatusLine().getStatusCode());
+ elasticsearchException = new ElasticsearchStatusException("Unable to parse response body", restStatus, responseException);
+ elasticsearchException.addSuppressed(e);
+ }
+ }
+ return elasticsearchException;
+ }
+
+    protected <Resp> Resp parseEntity(final HttpEntity entity,
+            final CheckedFunction<XContentParser, Resp, IOException> entityParser) throws IOException {
+ if (entity == null) {
+ throw new IllegalStateException("Response body expected but not returned");
+ }
+ if (entity.getContentType() == null) {
+ throw new IllegalStateException("Elasticsearch didn't return the [Content-Type] header, unable to parse response body");
+ }
+ XContentType xContentType = XContentType.fromMediaTypeOrFormat(entity.getContentType().getValue());
+ if (xContentType == null) {
+ throw new IllegalStateException("Unsupported Content-Type: " + entity.getContentType().getValue());
+ }
+ try (XContentParser parser = xContentType.xContent().createParser(registry, entity.getContent())) {
+ return entityParser.apply(parser);
+ }
+ }
+
+ static boolean convertExistsResponse(Response response) {
+ return response.getStatusLine().getStatusCode() == 200;
+ }
+
+    static List<NamedXContentRegistry.Entry> getDefaultNamedXContents() {
+        Map<String, ContextParser<Object, ? extends Aggregation>> map = new HashMap<>();
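+        // Each entry maps an aggregation's type name to the parser that turns its XContent into the matching Parsed* object.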
+ map.put(CardinalityAggregationBuilder.NAME, (p, c) -> ParsedCardinality.fromXContent(p, (String) c));
+ map.put(InternalHDRPercentiles.NAME, (p, c) -> ParsedHDRPercentiles.fromXContent(p, (String) c));
+ map.put(InternalHDRPercentileRanks.NAME, (p, c) -> ParsedHDRPercentileRanks.fromXContent(p, (String) c));
+ map.put(InternalTDigestPercentiles.NAME, (p, c) -> ParsedTDigestPercentiles.fromXContent(p, (String) c));
+ map.put(InternalTDigestPercentileRanks.NAME, (p, c) -> ParsedTDigestPercentileRanks.fromXContent(p, (String) c));
+ map.put(PercentilesBucketPipelineAggregationBuilder.NAME, (p, c) -> ParsedPercentilesBucket.fromXContent(p, (String) c));
+ map.put(MinAggregationBuilder.NAME, (p, c) -> ParsedMin.fromXContent(p, (String) c));
+ map.put(MaxAggregationBuilder.NAME, (p, c) -> ParsedMax.fromXContent(p, (String) c));
+ map.put(SumAggregationBuilder.NAME, (p, c) -> ParsedSum.fromXContent(p, (String) c));
+ map.put(AvgAggregationBuilder.NAME, (p, c) -> ParsedAvg.fromXContent(p, (String) c));
+ map.put(ValueCountAggregationBuilder.NAME, (p, c) -> ParsedValueCount.fromXContent(p, (String) c));
+ map.put(InternalSimpleValue.NAME, (p, c) -> ParsedSimpleValue.fromXContent(p, (String) c));
+ map.put(DerivativePipelineAggregationBuilder.NAME, (p, c) -> ParsedDerivative.fromXContent(p, (String) c));
+ map.put(InternalBucketMetricValue.NAME, (p, c) -> ParsedBucketMetricValue.fromXContent(p, (String) c));
+ map.put(StatsAggregationBuilder.NAME, (p, c) -> ParsedStats.fromXContent(p, (String) c));
+ map.put(StatsBucketPipelineAggregationBuilder.NAME, (p, c) -> ParsedStatsBucket.fromXContent(p, (String) c));
+ map.put(ExtendedStatsAggregationBuilder.NAME, (p, c) -> ParsedExtendedStats.fromXContent(p, (String) c));
+ map.put(ExtendedStatsBucketPipelineAggregationBuilder.NAME,
+ (p, c) -> ParsedExtendedStatsBucket.fromXContent(p, (String) c));
+ map.put(GeoBoundsAggregationBuilder.NAME, (p, c) -> ParsedGeoBounds.fromXContent(p, (String) c));
+ map.put(GeoCentroidAggregationBuilder.NAME, (p, c) -> ParsedGeoCentroid.fromXContent(p, (String) c));
+ map.put(HistogramAggregationBuilder.NAME, (p, c) -> ParsedHistogram.fromXContent(p, (String) c));
+ map.put(DateHistogramAggregationBuilder.NAME, (p, c) -> ParsedDateHistogram.fromXContent(p, (String) c));
+ map.put(StringTerms.NAME, (p, c) -> ParsedStringTerms.fromXContent(p, (String) c));
+ map.put(LongTerms.NAME, (p, c) -> ParsedLongTerms.fromXContent(p, (String) c));
+ map.put(DoubleTerms.NAME, (p, c) -> ParsedDoubleTerms.fromXContent(p, (String) c));
+ map.put(MissingAggregationBuilder.NAME, (p, c) -> ParsedMissing.fromXContent(p, (String) c));
+ map.put(NestedAggregationBuilder.NAME, (p, c) -> ParsedNested.fromXContent(p, (String) c));
+ map.put(ReverseNestedAggregationBuilder.NAME, (p, c) -> ParsedReverseNested.fromXContent(p, (String) c));
+ map.put(GlobalAggregationBuilder.NAME, (p, c) -> ParsedGlobal.fromXContent(p, (String) c));
+ map.put(FilterAggregationBuilder.NAME, (p, c) -> ParsedFilter.fromXContent(p, (String) c));
+ map.put(InternalSampler.PARSER_NAME, (p, c) -> ParsedSampler.fromXContent(p, (String) c));
+ map.put(GeoGridAggregationBuilder.NAME, (p, c) -> ParsedGeoHashGrid.fromXContent(p, (String) c));
+ map.put(RangeAggregationBuilder.NAME, (p, c) -> ParsedRange.fromXContent(p, (String) c));
+ map.put(DateRangeAggregationBuilder.NAME, (p, c) -> ParsedDateRange.fromXContent(p, (String) c));
+ map.put(GeoDistanceAggregationBuilder.NAME, (p, c) -> ParsedGeoDistance.fromXContent(p, (String) c));
+ map.put(FiltersAggregationBuilder.NAME, (p, c) -> ParsedFilters.fromXContent(p, (String) c));
+ map.put(AdjacencyMatrixAggregationBuilder.NAME, (p, c) -> ParsedAdjacencyMatrix.fromXContent(p, (String) c));
+ map.put(SignificantLongTerms.NAME, (p, c) -> ParsedSignificantLongTerms.fromXContent(p, (String) c));
+ map.put(SignificantStringTerms.NAME, (p, c) -> ParsedSignificantStringTerms.fromXContent(p, (String) c));
+ map.put(ScriptedMetricAggregationBuilder.NAME, (p, c) -> ParsedScriptedMetric.fromXContent(p, (String) c));
+ map.put(IpRangeAggregationBuilder.NAME, (p, c) -> ParsedBinaryRange.fromXContent(p, (String) c));
+ map.put(TopHitsAggregationBuilder.NAME, (p, c) -> ParsedTopHits.fromXContent(p, (String) c));
+        List<NamedXContentRegistry.Entry> entries = map.entrySet().stream()
+ .map(entry -> new NamedXContentRegistry.Entry(Aggregation.class, new ParseField(entry.getKey()), entry.getValue()))
+ .collect(Collectors.toList());
+ entries.add(new NamedXContentRegistry.Entry(Suggest.Suggestion.class, new ParseField(TermSuggestion.NAME),
+ (parser, context) -> TermSuggestion.fromXContent(parser, (String)context)));
+ entries.add(new NamedXContentRegistry.Entry(Suggest.Suggestion.class, new ParseField(PhraseSuggestion.NAME),
+ (parser, context) -> PhraseSuggestion.fromXContent(parser, (String)context)));
+ entries.add(new NamedXContentRegistry.Entry(Suggest.Suggestion.class, new ParseField(CompletionSuggestion.NAME),
+ (parser, context) -> CompletionSuggestion.fromXContent(parser, (String)context)));
+ return entries;
+ }
+
+ /**
+ * Loads and returns the {@link NamedXContentRegistry.Entry} parsers provided by plugins.
+ */
+    static List<NamedXContentRegistry.Entry> getProvidedNamedXContents() {
+        List<NamedXContentRegistry.Entry> entries = new ArrayList<>();
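+        // Use the JDK ServiceLoader mechanism (SPI) to pick up parsers that plugins register on the classpath.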
+ for (NamedXContentProvider service : ServiceLoader.load(NamedXContentProvider.class)) {
+ entries.addAll(service.getNamedXContentParsers());
+ }
+ return entries;
+ }
+}
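For orientation, here is a minimal sketch of how this new client is intended to be driven, based only on the API above. The host, index, type, and id values are placeholders; the low level client stays under the caller's control and must be closed by the caller:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

import java.util.Collections;

public class HighLevelClientSketch {
    public static void main(String[] args) throws Exception {
        // The RestHighLevelClient wraps, but never closes, the externally built low level client.
        try (RestClient lowLevelClient = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            RestHighLevelClient client = new RestHighLevelClient(lowLevelClient);
            // Index a document, then read it back through the typed request/response classes.
            client.index(new IndexRequest("index", "type", "1").source(Collections.singletonMap("field", "value")));
            System.out.println(client.get(new GetRequest("index", "type", "1")).getSourceAsString());
        }
    }
}
```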
diff --git a/client/rest-high-level/src/main/resources/forbidden/rest-high-level-signatures.txt b/client/rest-high-level/src/main/resources/forbidden/rest-high-level-signatures.txt
new file mode 100644
index 0000000000000..fb2330f3f083c
--- /dev/null
+++ b/client/rest-high-level/src/main/resources/forbidden/rest-high-level-signatures.txt
@@ -0,0 +1,21 @@
+# Licensed to Elasticsearch under one or more contributor
+# license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright
+# ownership. Elasticsearch licenses this file to you under
+# the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on
+# an 'AS IS' BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
+# either express or implied. See the License for the specific
+# language governing permissions and limitations under the License.
+
+@defaultMessage Use Request#createContentType(XContentType) to be sure to pass the right MIME type
+org.apache.http.entity.ContentType#create(java.lang.String)
+org.apache.http.entity.ContentType#create(java.lang.String,java.lang.String)
+org.apache.http.entity.ContentType#create(java.lang.String,java.nio.charset.Charset)
+org.apache.http.entity.ContentType#create(java.lang.String,org.apache.http.NameValuePair[])
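To make the rule concrete: the entries above ban every overload of ContentType#create so that request bodies cannot be sent with a MIME type that disagrees with how they were serialized. A sketch of the sanctioned pattern, assuming the Request#createContentType(XContentType) helper named in the @defaultMessage is visible from the same package:

```java
package org.elasticsearch.client;

import org.apache.http.entity.ContentType;
import org.elasticsearch.common.xcontent.XContentType;

final class ContentTypeUsageSketch {
    private ContentTypeUsageSketch() {}

    static ContentType jsonContentType() {
        // Forbidden here: ContentType.create("application/json"), which can silently drift from the
        // XContentType actually used to build the body. Deriving it keeps the two in sync.
        return Request.createContentType(XContentType.JSON);
    }
}
```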
diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java
new file mode 100644
index 0000000000000..a28686b27aa0d
--- /dev/null
+++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java
@@ -0,0 +1,704 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.client;
+
+import org.apache.http.entity.ContentType;
+import org.apache.http.entity.StringEntity;
+import org.elasticsearch.ElasticsearchException;
+import org.elasticsearch.ElasticsearchStatusException;
+import org.elasticsearch.action.DocWriteRequest;
+import org.elasticsearch.action.DocWriteResponse;
+import org.elasticsearch.action.bulk.BulkItemResponse;
+import org.elasticsearch.action.bulk.BulkProcessor;
+import org.elasticsearch.action.bulk.BulkRequest;
+import org.elasticsearch.action.bulk.BulkResponse;
+import org.elasticsearch.action.delete.DeleteRequest;
+import org.elasticsearch.action.delete.DeleteResponse;
+import org.elasticsearch.action.get.GetRequest;
+import org.elasticsearch.action.get.GetResponse;
+import org.elasticsearch.action.index.IndexRequest;
+import org.elasticsearch.action.index.IndexResponse;
+import org.elasticsearch.action.update.UpdateRequest;
+import org.elasticsearch.action.update.UpdateResponse;
+import org.elasticsearch.common.Strings;
+import org.elasticsearch.common.bytes.BytesReference;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.unit.ByteSizeUnit;
+import org.elasticsearch.common.unit.ByteSizeValue;
+import org.elasticsearch.common.xcontent.XContentBuilder;
+import org.elasticsearch.common.xcontent.XContentType;
+import org.elasticsearch.index.VersionType;
+import org.elasticsearch.index.get.GetResult;
+import org.elasticsearch.rest.RestStatus;
+import org.elasticsearch.script.Script;
+import org.elasticsearch.script.ScriptType;
+import org.elasticsearch.search.fetch.subphase.FetchSourceContext;
+import org.elasticsearch.threadpool.ThreadPool;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.Map;
+
+import java.util.concurrent.atomic.AtomicReference;
+
+import static java.util.Collections.singletonMap;
+
+public class CrudIT extends ESRestHighLevelClientTestCase {
+
+ public void testDelete() throws IOException {
+ {
+ // Testing deletion
+ String docId = "id";
+ highLevelClient().index(new IndexRequest("index", "type", docId).source(Collections.singletonMap("foo", "bar")));
+ DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId);
+ if (randomBoolean()) {
+ deleteRequest.version(1L);
+ }
+ DeleteResponse deleteResponse = execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync);
+ assertEquals("index", deleteResponse.getIndex());
+ assertEquals("type", deleteResponse.getType());
+ assertEquals(docId, deleteResponse.getId());
+ assertEquals(DocWriteResponse.Result.DELETED, deleteResponse.getResult());
+ }
+ {
+ // Testing non existing document
+ String docId = "does_not_exist";
+ DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId);
+ DeleteResponse deleteResponse = execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync);
+ assertEquals("index", deleteResponse.getIndex());
+ assertEquals("type", deleteResponse.getType());
+ assertEquals(docId, deleteResponse.getId());
+ assertEquals(DocWriteResponse.Result.NOT_FOUND, deleteResponse.getResult());
+ }
+ {
+ // Testing version conflict
+ String docId = "version_conflict";
+ highLevelClient().index(new IndexRequest("index", "type", docId).source(Collections.singletonMap("foo", "bar")));
+ DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId).version(2);
+ ElasticsearchException exception = expectThrows(ElasticsearchException.class,
+ () -> execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync));
+ assertEquals(RestStatus.CONFLICT, exception.status());
+ assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, reason=[type][" + docId + "]: " +
+ "version conflict, current version [1] is different than the one provided [2]]", exception.getMessage());
+ assertEquals("index", exception.getMetadata("es.index").get(0));
+ }
+ {
+ // Testing version type
+ String docId = "version_type";
+ highLevelClient().index(new IndexRequest("index", "type", docId).source(Collections.singletonMap("foo", "bar"))
+ .versionType(VersionType.EXTERNAL).version(12));
+ DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId).versionType(VersionType.EXTERNAL).version(13);
+ DeleteResponse deleteResponse = execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync);
+ assertEquals("index", deleteResponse.getIndex());
+ assertEquals("type", deleteResponse.getType());
+ assertEquals(docId, deleteResponse.getId());
+ assertEquals(DocWriteResponse.Result.DELETED, deleteResponse.getResult());
+ }
+ {
+ // Testing version type with a wrong version
+ String docId = "wrong_version";
+ highLevelClient().index(new IndexRequest("index", "type", docId).source(Collections.singletonMap("foo", "bar"))
+ .versionType(VersionType.EXTERNAL).version(12));
+ ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> {
+ DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId).versionType(VersionType.EXTERNAL).version(10);
+ execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync);
+ });
+ assertEquals(RestStatus.CONFLICT, exception.status());
+ assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, reason=[type][" +
+ docId + "]: version conflict, current version [12] is higher or equal to the one provided [10]]", exception.getMessage());
+ assertEquals("index", exception.getMetadata("es.index").get(0));
+ }
+ {
+ // Testing routing
+ String docId = "routing";
+ highLevelClient().index(new IndexRequest("index", "type", docId).source(Collections.singletonMap("foo", "bar")).routing("foo"));
+ DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId).routing("foo");
+ DeleteResponse deleteResponse = execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync);
+ assertEquals("index", deleteResponse.getIndex());
+ assertEquals("type", deleteResponse.getType());
+ assertEquals(docId, deleteResponse.getId());
+ assertEquals(DocWriteResponse.Result.DELETED, deleteResponse.getResult());
+ }
+ }
+
+ public void testExists() throws IOException {
+ {
+ GetRequest getRequest = new GetRequest("index", "type", "id");
+ assertFalse(execute(getRequest, highLevelClient()::exists, highLevelClient()::existsAsync));
+ }
+ String document = "{\"field1\":\"value1\",\"field2\":\"value2\"}";
+ StringEntity stringEntity = new StringEntity(document, ContentType.APPLICATION_JSON);
+ Response response = client().performRequest("PUT", "/index/type/id", Collections.singletonMap("refresh", "wait_for"), stringEntity);
+ assertEquals(201, response.getStatusLine().getStatusCode());
+ {
+ GetRequest getRequest = new GetRequest("index", "type", "id");
+ assertTrue(execute(getRequest, highLevelClient()::exists, highLevelClient()::existsAsync));
+ }
+ {
+ GetRequest getRequest = new GetRequest("index", "type", "does_not_exist");
+ assertFalse(execute(getRequest, highLevelClient()::exists, highLevelClient()::existsAsync));
+ }
+ {
+ GetRequest getRequest = new GetRequest("index", "type", "does_not_exist").version(1);
+ assertFalse(execute(getRequest, highLevelClient()::exists, highLevelClient()::existsAsync));
+ }
+ }
+
+ public void testGet() throws IOException {
+ {
+ GetRequest getRequest = new GetRequest("index", "type", "id");
+ ElasticsearchException exception = expectThrows(ElasticsearchException.class,
+ () -> execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync));
+ assertEquals(RestStatus.NOT_FOUND, exception.status());
+ assertEquals("Elasticsearch exception [type=index_not_found_exception, reason=no such index]", exception.getMessage());
+ assertEquals("index", exception.getMetadata("es.index").get(0));
+ }
+
+ String document = "{\"field1\":\"value1\",\"field2\":\"value2\"}";
+ StringEntity stringEntity = new StringEntity(document, ContentType.APPLICATION_JSON);
+ Response response = client().performRequest("PUT", "/index/type/id", Collections.singletonMap("refresh", "wait_for"), stringEntity);
+ assertEquals(201, response.getStatusLine().getStatusCode());
+ {
+ GetRequest getRequest = new GetRequest("index", "type", "id").version(2);
+ ElasticsearchException exception = expectThrows(ElasticsearchException.class,
+ () -> execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync));
+ assertEquals(RestStatus.CONFLICT, exception.status());
+ assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, " + "reason=[type][id]: " +
+ "version conflict, current version [1] is different than the one provided [2]]", exception.getMessage());
+ assertEquals("index", exception.getMetadata("es.index").get(0));
+ }
+ {
+ GetRequest getRequest = new GetRequest("index", "type", "id");
+ if (randomBoolean()) {
+ getRequest.version(1L);
+ }
+ GetResponse getResponse = execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync);
+ assertEquals("index", getResponse.getIndex());
+ assertEquals("type", getResponse.getType());
+ assertEquals("id", getResponse.getId());
+ assertTrue(getResponse.isExists());
+ assertFalse(getResponse.isSourceEmpty());
+ assertEquals(1L, getResponse.getVersion());
+ assertEquals(document, getResponse.getSourceAsString());
+ }
+ {
+ GetRequest getRequest = new GetRequest("index", "type", "does_not_exist");
+ GetResponse getResponse = execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync);
+ assertEquals("index", getResponse.getIndex());
+ assertEquals("type", getResponse.getType());
+ assertEquals("does_not_exist", getResponse.getId());
+ assertFalse(getResponse.isExists());
+ assertEquals(-1, getResponse.getVersion());
+ assertTrue(getResponse.isSourceEmpty());
+ assertNull(getResponse.getSourceAsString());
+ }
+ {
+ GetRequest getRequest = new GetRequest("index", "type", "id");
+ getRequest.fetchSourceContext(new FetchSourceContext(false, Strings.EMPTY_ARRAY, Strings.EMPTY_ARRAY));
+ GetResponse getResponse = execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync);
+ assertEquals("index", getResponse.getIndex());
+ assertEquals("type", getResponse.getType());
+ assertEquals("id", getResponse.getId());
+ assertTrue(getResponse.isExists());
+ assertTrue(getResponse.isSourceEmpty());
+ assertEquals(1L, getResponse.getVersion());
+ assertNull(getResponse.getSourceAsString());
+ }
+ {
+ GetRequest getRequest = new GetRequest("index", "type", "id");
+ if (randomBoolean()) {
+ getRequest.fetchSourceContext(new FetchSourceContext(true, new String[]{"field1"}, Strings.EMPTY_ARRAY));
+ } else {
+ getRequest.fetchSourceContext(new FetchSourceContext(true, Strings.EMPTY_ARRAY, new String[]{"field2"}));
+ }
+ GetResponse getResponse = execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync);
+ assertEquals("index", getResponse.getIndex());
+ assertEquals("type", getResponse.getType());
+ assertEquals("id", getResponse.getId());
+ assertTrue(getResponse.isExists());
+ assertFalse(getResponse.isSourceEmpty());
+ assertEquals(1L, getResponse.getVersion());
+            Map<String, Object> sourceAsMap = getResponse.getSourceAsMap();
+ assertEquals(1, sourceAsMap.size());
+ assertEquals("value1", sourceAsMap.get("field1"));
+ }
+ }
+
+ public void testIndex() throws IOException {
+ final XContentType xContentType = randomFrom(XContentType.values());
+ {
+ IndexRequest indexRequest = new IndexRequest("index", "type");
+ indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("test", "test").endObject());
+
+ IndexResponse indexResponse = execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync);
+ assertEquals(RestStatus.CREATED, indexResponse.status());
+ assertEquals(DocWriteResponse.Result.CREATED, indexResponse.getResult());
+ assertEquals("index", indexResponse.getIndex());
+ assertEquals("type", indexResponse.getType());
+ assertTrue(Strings.hasLength(indexResponse.getId()));
+ assertEquals(1L, indexResponse.getVersion());
+ assertNotNull(indexResponse.getShardId());
+ assertEquals(-1, indexResponse.getShardId().getId());
+ assertEquals("index", indexResponse.getShardId().getIndexName());
+ assertEquals("index", indexResponse.getShardId().getIndex().getName());
+ assertEquals("_na_", indexResponse.getShardId().getIndex().getUUID());
+ assertNotNull(indexResponse.getShardInfo());
+ assertEquals(0, indexResponse.getShardInfo().getFailed());
+ assertTrue(indexResponse.getShardInfo().getSuccessful() > 0);
+ assertTrue(indexResponse.getShardInfo().getTotal() > 0);
+ }
+ {
+ IndexRequest indexRequest = new IndexRequest("index", "type", "id");
+ indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("version", 1).endObject());
+
+ IndexResponse indexResponse = execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync);
+ assertEquals(RestStatus.CREATED, indexResponse.status());
+ assertEquals("index", indexResponse.getIndex());
+ assertEquals("type", indexResponse.getType());
+ assertEquals("id", indexResponse.getId());
+ assertEquals(1L, indexResponse.getVersion());
+
+ indexRequest = new IndexRequest("index", "type", "id");
+ indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("version", 2).endObject());
+
+ indexResponse = execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync);
+ assertEquals(RestStatus.OK, indexResponse.status());
+ assertEquals("index", indexResponse.getIndex());
+ assertEquals("type", indexResponse.getType());
+ assertEquals("id", indexResponse.getId());
+ assertEquals(2L, indexResponse.getVersion());
+
+ ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> {
+ IndexRequest wrongRequest = new IndexRequest("index", "type", "id");
+ wrongRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("field", "test").endObject());
+ wrongRequest.version(5L);
+
+ execute(wrongRequest, highLevelClient()::index, highLevelClient()::indexAsync);
+ });
+ assertEquals(RestStatus.CONFLICT, exception.status());
+ assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, reason=[type][id]: " +
+ "version conflict, current version [2] is different than the one provided [5]]", exception.getMessage());
+ assertEquals("index", exception.getMetadata("es.index").get(0));
+ }
+ {
+ ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> {
+ IndexRequest indexRequest = new IndexRequest("index", "type", "missing_parent");
+ indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("field", "test").endObject());
+ indexRequest.parent("missing");
+
+ execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync);
+ });
+
+ assertEquals(RestStatus.BAD_REQUEST, exception.status());
+ assertEquals("Elasticsearch exception [type=illegal_argument_exception, " +
+ "reason=can't specify parent if no parent field has been configured]", exception.getMessage());
+ }
+ {
+ ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> {
+ IndexRequest indexRequest = new IndexRequest("index", "type", "missing_pipeline");
+ indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("field", "test").endObject());
+ indexRequest.setPipeline("missing");
+
+ execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync);
+ });
+
+ assertEquals(RestStatus.BAD_REQUEST, exception.status());
+ assertEquals("Elasticsearch exception [type=illegal_argument_exception, " +
+ "reason=pipeline with id [missing] does not exist]", exception.getMessage());
+ }
+ {
+ IndexRequest indexRequest = new IndexRequest("index", "type", "external_version_type");
+ indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("field", "test").endObject());
+ indexRequest.version(12L);
+ indexRequest.versionType(VersionType.EXTERNAL);
+
+ IndexResponse indexResponse = execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync);
+ assertEquals(RestStatus.CREATED, indexResponse.status());
+ assertEquals("index", indexResponse.getIndex());
+ assertEquals("type", indexResponse.getType());
+ assertEquals("external_version_type", indexResponse.getId());
+ assertEquals(12L, indexResponse.getVersion());
+ }
+ {
+ final IndexRequest indexRequest = new IndexRequest("index", "type", "with_create_op_type");
+ indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("field", "test").endObject());
+ indexRequest.opType(DocWriteRequest.OpType.CREATE);
+
+ IndexResponse indexResponse = execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync);
+ assertEquals(RestStatus.CREATED, indexResponse.status());
+ assertEquals("index", indexResponse.getIndex());
+ assertEquals("type", indexResponse.getType());
+ assertEquals("with_create_op_type", indexResponse.getId());
+
+ ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> {
+ execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync);
+ });
+
+ assertEquals(RestStatus.CONFLICT, exception.status());
+ assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, reason=[type][with_create_op_type]: " +
+ "version conflict, document already exists (current version [1])]", exception.getMessage());
+ }
+ }
+
+ public void testUpdate() throws IOException {
+ {
+ UpdateRequest updateRequest = new UpdateRequest("index", "type", "does_not_exist");
+ updateRequest.doc(singletonMap("field", "value"), randomFrom(XContentType.values()));
+
+ ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () ->
+ execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync));
+ assertEquals(RestStatus.NOT_FOUND, exception.status());
+ assertEquals("Elasticsearch exception [type=document_missing_exception, reason=[type][does_not_exist]: document missing]",
+ exception.getMessage());
+ }
+ {
+ IndexRequest indexRequest = new IndexRequest("index", "type", "id");
+ indexRequest.source(singletonMap("field", "value"));
+ IndexResponse indexResponse = highLevelClient().index(indexRequest);
+ assertEquals(RestStatus.CREATED, indexResponse.status());
+
+ UpdateRequest updateRequest = new UpdateRequest("index", "type", "id");
+ updateRequest.doc(singletonMap("field", "updated"), randomFrom(XContentType.values()));
+
+ UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync);
+ assertEquals(RestStatus.OK, updateResponse.status());
+ assertEquals(indexResponse.getVersion() + 1, updateResponse.getVersion());
+
+ UpdateRequest updateRequestConflict = new UpdateRequest("index", "type", "id");
+ updateRequestConflict.doc(singletonMap("field", "with_version_conflict"), randomFrom(XContentType.values()));
+ updateRequestConflict.version(indexResponse.getVersion());
+
+ ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () ->
+ execute(updateRequestConflict, highLevelClient()::update, highLevelClient()::updateAsync));
+ assertEquals(RestStatus.CONFLICT, exception.status());
+ assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, reason=[type][id]: version conflict, " +
+ "current version [2] is different than the one provided [1]]", exception.getMessage());
+ }
+ {
+ ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> {
+ UpdateRequest updateRequest = new UpdateRequest("index", "type", "id");
+ updateRequest.doc(singletonMap("field", "updated"), randomFrom(XContentType.values()));
+ if (randomBoolean()) {
+ updateRequest.parent("missing");
+ } else {
+ updateRequest.routing("missing");
+ }
+ execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync);
+ });
+
+ assertEquals(RestStatus.NOT_FOUND, exception.status());
+ assertEquals("Elasticsearch exception [type=document_missing_exception, reason=[type][id]: document missing]",
+ exception.getMessage());
+ }
+ {
+ IndexRequest indexRequest = new IndexRequest("index", "type", "with_script");
+ indexRequest.source(singletonMap("counter", 12));
+ IndexResponse indexResponse = highLevelClient().index(indexRequest);
+ assertEquals(RestStatus.CREATED, indexResponse.status());
+
+ UpdateRequest updateRequest = new UpdateRequest("index", "type", "with_script");
+ Script script = new Script(ScriptType.INLINE, "painless", "ctx._source.counter += params.count", singletonMap("count", 8));
+ updateRequest.script(script);
+ updateRequest.fetchSource(true);
+
+ UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync);
+ assertEquals(RestStatus.OK, updateResponse.status());
+ assertEquals(DocWriteResponse.Result.UPDATED, updateResponse.getResult());
+ assertEquals(2L, updateResponse.getVersion());
+ assertEquals(20, updateResponse.getGetResult().sourceAsMap().get("counter"));
+ }
+ {
+ IndexRequest indexRequest = new IndexRequest("index", "type", "with_doc");
+ indexRequest.source("field_1", "one", "field_3", "three");
+ indexRequest.version(12L);
+ indexRequest.versionType(VersionType.EXTERNAL);
+ IndexResponse indexResponse = highLevelClient().index(indexRequest);
+ assertEquals(RestStatus.CREATED, indexResponse.status());
+ assertEquals(12L, indexResponse.getVersion());
+
+ UpdateRequest updateRequest = new UpdateRequest("index", "type", "with_doc");
+ updateRequest.doc(singletonMap("field_2", "two"), randomFrom(XContentType.values()));
+ updateRequest.fetchSource("field_*", "field_3");
+
+ UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync);
+ assertEquals(RestStatus.OK, updateResponse.status());
+ assertEquals(DocWriteResponse.Result.UPDATED, updateResponse.getResult());
+ assertEquals(13L, updateResponse.getVersion());
+ GetResult getResult = updateResponse.getGetResult();
+            assertEquals(13L, getResult.getVersion());
+            Map<String, Object> sourceAsMap = getResult.sourceAsMap();
+ assertEquals("one", sourceAsMap.get("field_1"));
+ assertEquals("two", sourceAsMap.get("field_2"));
+ assertFalse(sourceAsMap.containsKey("field_3"));
+ }
+ {
+ IndexRequest indexRequest = new IndexRequest("index", "type", "noop");
+ indexRequest.source("field", "value");
+ IndexResponse indexResponse = highLevelClient().index(indexRequest);
+ assertEquals(RestStatus.CREATED, indexResponse.status());
+ assertEquals(1L, indexResponse.getVersion());
+
+ UpdateRequest updateRequest = new UpdateRequest("index", "type", "noop");
+ updateRequest.doc(singletonMap("field", "value"), randomFrom(XContentType.values()));
+
+ UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync);
+ assertEquals(RestStatus.OK, updateResponse.status());
+ assertEquals(DocWriteResponse.Result.NOOP, updateResponse.getResult());
+ assertEquals(1L, updateResponse.getVersion());
+
+ updateRequest.detectNoop(false);
+
+ updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync);
+ assertEquals(RestStatus.OK, updateResponse.status());
+ assertEquals(DocWriteResponse.Result.UPDATED, updateResponse.getResult());
+ assertEquals(2L, updateResponse.getVersion());
+ }
+ {
+ UpdateRequest updateRequest = new UpdateRequest("index", "type", "with_upsert");
+ updateRequest.upsert(singletonMap("doc_status", "created"));
+ updateRequest.doc(singletonMap("doc_status", "updated"));
+ updateRequest.fetchSource(true);
+
+ UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync);
+ assertEquals(RestStatus.CREATED, updateResponse.status());
+ assertEquals("index", updateResponse.getIndex());
+ assertEquals("type", updateResponse.getType());
+ assertEquals("with_upsert", updateResponse.getId());
+ GetResult getResult = updateResponse.getGetResult();
+ assertEquals(1L, updateResponse.getVersion());
+ assertEquals("created", getResult.sourceAsMap().get("doc_status"));
+ }
+ {
+ UpdateRequest updateRequest = new UpdateRequest("index", "type", "with_doc_as_upsert");
+ updateRequest.doc(singletonMap("field", "initialized"));
+ updateRequest.fetchSource(true);
+ updateRequest.docAsUpsert(true);
+
+ UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync);
+ assertEquals(RestStatus.CREATED, updateResponse.status());
+ assertEquals("index", updateResponse.getIndex());
+ assertEquals("type", updateResponse.getType());
+ assertEquals("with_doc_as_upsert", updateResponse.getId());
+ GetResult getResult = updateResponse.getGetResult();
+ assertEquals(1L, updateResponse.getVersion());
+ assertEquals("initialized", getResult.sourceAsMap().get("field"));
+ }
+ {
+ UpdateRequest updateRequest = new UpdateRequest("index", "type", "with_scripted_upsert");
+ updateRequest.fetchSource(true);
+ updateRequest.script(new Script(ScriptType.INLINE, "painless", "ctx._source.level = params.test", singletonMap("test", "C")));
+ updateRequest.scriptedUpsert(true);
+ updateRequest.upsert(singletonMap("level", "A"));
+
+ UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync);
+ assertEquals(RestStatus.CREATED, updateResponse.status());
+ assertEquals("index", updateResponse.getIndex());
+ assertEquals("type", updateResponse.getType());
+ assertEquals("with_scripted_upsert", updateResponse.getId());
+
+ GetResult getResult = updateResponse.getGetResult();
+ assertEquals(1L, updateResponse.getVersion());
+ assertEquals("C", getResult.sourceAsMap().get("level"));
+ }
+ {
+ IllegalStateException exception = expectThrows(IllegalStateException.class, () -> {
+ UpdateRequest updateRequest = new UpdateRequest("index", "type", "id");
+ updateRequest.doc(new IndexRequest().source(Collections.singletonMap("field", "doc"), XContentType.JSON));
+ updateRequest.upsert(new IndexRequest().source(Collections.singletonMap("field", "upsert"), XContentType.YAML));
+ execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync);
+ });
+ assertEquals("Update request cannot have different content types for doc [JSON] and upsert [YAML] documents",
+ exception.getMessage());
+ }
+ }
+
+ public void testBulk() throws IOException {
+ int nbItems = randomIntBetween(10, 100);
+ boolean[] errors = new boolean[nbItems];
+
+ XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE);
+
+ BulkRequest bulkRequest = new BulkRequest();
+ for (int i = 0; i < nbItems; i++) {
+ String id = String.valueOf(i);
+ boolean erroneous = randomBoolean();
+ errors[i] = erroneous;
+
+ DocWriteRequest.OpType opType = randomFrom(DocWriteRequest.OpType.values());
+ if (opType == DocWriteRequest.OpType.DELETE) {
+ if (erroneous == false) {
+ assertEquals(RestStatus.CREATED,
+ highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status());
+ }
+ DeleteRequest deleteRequest = new DeleteRequest("index", "test", id);
+ bulkRequest.add(deleteRequest);
+
+ } else {
+ BytesReference source = XContentBuilder.builder(xContentType.xContent()).startObject().field("id", i).endObject().bytes();
+ if (opType == DocWriteRequest.OpType.INDEX) {
+ IndexRequest indexRequest = new IndexRequest("index", "test", id).source(source, xContentType);
+ if (erroneous) {
+ indexRequest.version(12L);
+ }
+ bulkRequest.add(indexRequest);
+
+ } else if (opType == DocWriteRequest.OpType.CREATE) {
+ IndexRequest createRequest = new IndexRequest("index", "test", id).source(source, xContentType).create(true);
+ if (erroneous) {
+ assertEquals(RestStatus.CREATED, highLevelClient().index(createRequest).status());
+ }
+ bulkRequest.add(createRequest);
+
+ } else if (opType == DocWriteRequest.OpType.UPDATE) {
+ UpdateRequest updateRequest = new UpdateRequest("index", "test", id)
+ .doc(new IndexRequest().source(source, xContentType));
+ if (erroneous == false) {
+ assertEquals(RestStatus.CREATED,
+ highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status());
+ }
+ bulkRequest.add(updateRequest);
+ }
+ }
+ }
+
+ BulkResponse bulkResponse = execute(bulkRequest, highLevelClient()::bulk, highLevelClient()::bulkAsync);
+ assertEquals(RestStatus.OK, bulkResponse.status());
+ assertTrue(bulkResponse.getTook().getMillis() > 0);
+ assertEquals(nbItems, bulkResponse.getItems().length);
+
+ validateBulkResponses(nbItems, errors, bulkResponse, bulkRequest);
+ }
+
+ public void testBulkProcessorIntegration() throws IOException, InterruptedException {
+ int nbItems = randomIntBetween(10, 100);
+ boolean[] errors = new boolean[nbItems];
+
+ XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE);
+
+        AtomicReference<BulkResponse> responseRef = new AtomicReference<>();
+        AtomicReference<BulkRequest> requestRef = new AtomicReference<>();
+        AtomicReference<Throwable> error = new AtomicReference<>();
+
+ BulkProcessor.Listener listener = new BulkProcessor.Listener() {
+ @Override
+ public void beforeBulk(long executionId, BulkRequest request) {
+
+ }
+
+ @Override
+ public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
+ responseRef.set(response);
+ requestRef.set(request);
+ }
+
+ @Override
+ public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
+ error.set(failure);
+ }
+ };
+
+ ThreadPool threadPool = new ThreadPool(Settings.builder().put("node.name", getClass().getName()).build());
+ try(BulkProcessor processor = new BulkProcessor.Builder(highLevelClient()::bulkAsync, listener, threadPool)
+ .setConcurrentRequests(0)
+ .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.GB))
+ .setBulkActions(nbItems + 1)
+ .build()) {
+ for (int i = 0; i < nbItems; i++) {
+ String id = String.valueOf(i);
+ boolean erroneous = randomBoolean();
+ errors[i] = erroneous;
+
+ DocWriteRequest.OpType opType = randomFrom(DocWriteRequest.OpType.values());
+ if (opType == DocWriteRequest.OpType.DELETE) {
+ if (erroneous == false) {
+ assertEquals(RestStatus.CREATED,
+ highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status());
+ }
+ DeleteRequest deleteRequest = new DeleteRequest("index", "test", id);
+ processor.add(deleteRequest);
+
+ } else {
+ if (opType == DocWriteRequest.OpType.INDEX) {
+ IndexRequest indexRequest = new IndexRequest("index", "test", id).source(xContentType, "id", i);
+ if (erroneous) {
+ indexRequest.version(12L);
+ }
+ processor.add(indexRequest);
+
+ } else if (opType == DocWriteRequest.OpType.CREATE) {
+ IndexRequest createRequest = new IndexRequest("index", "test", id).source(xContentType, "id", i).create(true);
+ if (erroneous) {
+ assertEquals(RestStatus.CREATED, highLevelClient().index(createRequest).status());
+ }
+ processor.add(createRequest);
+
+ } else if (opType == DocWriteRequest.OpType.UPDATE) {
+ UpdateRequest updateRequest = new UpdateRequest("index", "test", id)
+ .doc(new IndexRequest().source(xContentType, "id", i));
+ if (erroneous == false) {
+ assertEquals(RestStatus.CREATED,
+ highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status());
+ }
+ processor.add(updateRequest);
+ }
+ }
+ }
+ assertNull(responseRef.get());
+ assertNull(requestRef.get());
+ }
+
+ BulkResponse bulkResponse = responseRef.get();
+ BulkRequest bulkRequest = requestRef.get();
+
+ assertEquals(RestStatus.OK, bulkResponse.status());
+ assertTrue(bulkResponse.getTookInMillis() > 0);
+ assertEquals(nbItems, bulkResponse.getItems().length);
+ assertNull(error.get());
+
+ validateBulkResponses(nbItems, errors, bulkResponse, bulkRequest);
+
+ terminate(threadPool);
+ }
+
+ private void validateBulkResponses(int nbItems, boolean[] errors, BulkResponse bulkResponse, BulkRequest bulkRequest) {
+ for (int i = 0; i < nbItems; i++) {
+ BulkItemResponse bulkItemResponse = bulkResponse.getItems()[i];
+
+ assertEquals(i, bulkItemResponse.getItemId());
+ assertEquals("index", bulkItemResponse.getIndex());
+ assertEquals("test", bulkItemResponse.getType());
+ assertEquals(String.valueOf(i), bulkItemResponse.getId());
+
+ DocWriteRequest.OpType requestOpType = bulkRequest.requests().get(i).opType();
+ if (requestOpType == DocWriteRequest.OpType.INDEX || requestOpType == DocWriteRequest.OpType.CREATE) {
+ assertEquals(errors[i], bulkItemResponse.isFailed());
+ assertEquals(errors[i] ? RestStatus.CONFLICT : RestStatus.CREATED, bulkItemResponse.status());
+ } else if (requestOpType == DocWriteRequest.OpType.UPDATE) {
+ assertEquals(errors[i], bulkItemResponse.isFailed());
+ assertEquals(errors[i] ? RestStatus.NOT_FOUND : RestStatus.OK, bulkItemResponse.status());
+ } else if (requestOpType == DocWriteRequest.OpType.DELETE) {
+ assertFalse(bulkItemResponse.isFailed());
+ assertEquals(errors[i] ? RestStatus.NOT_FOUND : RestStatus.OK, bulkItemResponse.status());
+ }
+ }
+ }
+}
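CrudIT leans on an execute(request, syncMethod, asyncMethod) helper inherited from ESRestHighLevelClientTestCase, which is not part of this excerpt. A plausible shape for it, under the assumption that it randomly exercises either the blocking or the listener-based variant of each API; the interface and class names below are hypothetical:

```java
import org.apache.http.Header;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.PlainActionFuture;

import java.io.IOException;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical functional interfaces mirroring the two method references CrudIT passes in.
interface SyncMethod<Req, Resp> {
    Resp execute(Req request, Header... headers) throws IOException;
}

interface AsyncMethod<Req, Resp> {
    void execute(Req request, ActionListener<Resp> listener, Header... headers);
}

final class ExecuteHelperSketch {
    private ExecuteHelperSketch() {}

    static <Req, Resp> Resp execute(Req request, SyncMethod<Req, Resp> sync,
                                    AsyncMethod<Req, Resp> async, Header... headers) throws IOException {
        // Flip a coin so that, across many runs, every test covers both the sync and the async path.
        if (ThreadLocalRandom.current().nextBoolean()) { // the real helper would use ESTestCase#randomBoolean()
            return sync.execute(request, headers);
        }
        PlainActionFuture<Resp> future = PlainActionFuture.newFuture();
        async.execute(request, future, headers);
        return future.actionGet();
    }
}
```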
diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/CustomRestHighLevelClientTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/CustomRestHighLevelClientTests.java
new file mode 100644
index 0000000000000..18ca074dd58b1
--- /dev/null
+++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/CustomRestHighLevelClientTests.java
@@ -0,0 +1,209 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.client;
+
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpHost;
+import org.apache.http.ProtocolVersion;
+import org.apache.http.RequestLine;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.entity.ContentType;
+import org.apache.http.message.BasicHeader;
+import org.apache.http.message.BasicRequestLine;
+import org.apache.http.message.BasicStatusLine;
+import org.apache.lucene.util.BytesRef;
+import org.elasticsearch.Build;
+import org.elasticsearch.Version;
+import org.elasticsearch.action.ActionListener;
+import org.elasticsearch.action.main.MainRequest;
+import org.elasticsearch.action.main.MainResponse;
+import org.elasticsearch.action.support.PlainActionFuture;
+import org.elasticsearch.client.Request;
+import org.elasticsearch.client.Response;
+import org.elasticsearch.client.ResponseListener;
+import org.elasticsearch.client.RestClient;
+import org.elasticsearch.client.RestHighLevelClient;
+import org.elasticsearch.cluster.ClusterName;
+import org.elasticsearch.common.SuppressForbidden;
+import org.elasticsearch.common.xcontent.XContentHelper;
+import org.elasticsearch.common.xcontent.XContentType;
+import org.elasticsearch.test.ESTestCase;
+import org.junit.Before;
+
+import java.io.IOException;
+import java.lang.reflect.Method;
+import java.lang.reflect.Modifier;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.stream.Collectors;
+
+import static java.util.Collections.emptyMap;
+import static java.util.Collections.emptySet;
+import static org.hamcrest.Matchers.containsInAnyOrder;
+import static org.mockito.Matchers.any;
+import static org.mockito.Matchers.anyMapOf;
+import static org.mockito.Matchers.anyObject;
+import static org.mockito.Matchers.anyVararg;
+import static org.mockito.Matchers.eq;
+import static org.mockito.Mockito.doAnswer;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+/**
+ * Tests and demonstrates how {@link RestHighLevelClient} can be extended to support custom endpoints.
+ */
+public class CustomRestHighLevelClientTests extends ESTestCase {
+
+ private static final String ENDPOINT = "/_custom";
+
+ private CustomRestClient restHighLevelClient;
+
+ @Before
+ @SuppressWarnings("unchecked")
+ public void initClients() throws IOException {
+ if (restHighLevelClient == null) {
+ final RestClient restClient = mock(RestClient.class);
+ restHighLevelClient = new CustomRestClient(restClient);
+
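+            // stub both execution paths: in the sync performRequest call argument 4 is the first
+            // vararg Header; in the async call argument 4 is the ResponseListener and argument 5 the first Header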
+ doAnswer(mock -> mockPerformRequest((Header) mock.getArguments()[4]))
+ .when(restClient)
+ .performRequest(eq(HttpGet.METHOD_NAME), eq(ENDPOINT), anyMapOf(String.class, String.class), anyObject(), anyVararg());
+
+ doAnswer(mock -> mockPerformRequestAsync((Header) mock.getArguments()[5], (ResponseListener) mock.getArguments()[4]))
+ .when(restClient)
+ .performRequestAsync(eq(HttpGet.METHOD_NAME), eq(ENDPOINT), anyMapOf(String.class, String.class),
+ any(HttpEntity.class), any(ResponseListener.class), anyVararg());
+ }
+ }
+
+ public void testCustomEndpoint() throws IOException {
+ final MainRequest request = new MainRequest();
+ final Header header = new BasicHeader("node_name", randomAlphaOfLengthBetween(1, 10));
+
+ MainResponse response = restHighLevelClient.custom(request, header);
+ assertEquals(header.getValue(), response.getNodeName());
+
+ response = restHighLevelClient.customAndParse(request, header);
+ assertEquals(header.getValue(), response.getNodeName());
+ }
+
+ public void testCustomEndpointAsync() throws Exception {
+ final MainRequest request = new MainRequest();
+ final Header header = new BasicHeader("node_name", randomAlphaOfLengthBetween(1, 10));
+
+        PlainActionFuture<MainResponse> future = PlainActionFuture.newFuture();
+ restHighLevelClient.customAsync(request, future, header);
+ assertEquals(header.getValue(), future.get().getNodeName());
+
+ future = PlainActionFuture.newFuture();
+ restHighLevelClient.customAndParseAsync(request, future, header);
+ assertEquals(header.getValue(), future.get().getNodeName());
+ }
+
+ /**
+ * The {@link RestHighLevelClient} must declare the following execution methods using the protected modifier
+ * so that they can be used by subclasses to implement custom logic.
+ */
+ @SuppressForbidden(reason = "We're forced to uses Class#getDeclaredMethods() here because this test checks protected methods")
+ public void testMethodsVisibility() throws ClassNotFoundException {
+ final String[] methodNames = new String[]{"performRequest",
+ "performRequestAsync",
+ "performRequestAndParseEntity",
+ "performRequestAsyncAndParseEntity",
+ "parseEntity",
+ "parseResponseException"};
+
+        final List<String> protectedMethods = Arrays.stream(RestHighLevelClient.class.getDeclaredMethods())
+ .filter(method -> Modifier.isProtected(method.getModifiers()))
+ .map(Method::getName)
+ .collect(Collectors.toList());
+
+ assertThat(protectedMethods, containsInAnyOrder(methodNames));
+ }
+
+ /**
+ * Mocks the asynchronous request execution by calling the {@link #mockPerformRequest(Header)} method.
+ */
+ private Void mockPerformRequestAsync(Header httpHeader, ResponseListener responseListener) {
+ try {
+ responseListener.onSuccess(mockPerformRequest(httpHeader));
+ } catch (IOException e) {
+ responseListener.onFailure(e);
+ }
+ return null;
+ }
+
+ /**
+     * Mocks the synchronous request execution as if it were executed by Elasticsearch.
+ */
+ private Response mockPerformRequest(Header httpHeader) throws IOException {
+ final Response mockResponse = mock(Response.class);
+ when(mockResponse.getHost()).thenReturn(new HttpHost("localhost", 9200));
+
+ ProtocolVersion protocol = new ProtocolVersion("HTTP", 1, 1);
+ when(mockResponse.getStatusLine()).thenReturn(new BasicStatusLine(protocol, 200, "OK"));
+
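+        // echo the "node_name" request header back in the response body so tests can verify it round-trips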
+ MainResponse response = new MainResponse(httpHeader.getValue(), Version.CURRENT, ClusterName.DEFAULT, "_na", Build.CURRENT, true);
+ BytesRef bytesRef = XContentHelper.toXContent(response, XContentType.JSON, false).toBytesRef();
+ when(mockResponse.getEntity()).thenReturn(new ByteArrayEntity(bytesRef.bytes, ContentType.APPLICATION_JSON));
+
+ RequestLine requestLine = new BasicRequestLine(HttpGet.METHOD_NAME, ENDPOINT, protocol);
+ when(mockResponse.getRequestLine()).thenReturn(requestLine);
+
+ return mockResponse;
+ }
+
+ /**
+     * A custom high level client that provides custom methods to execute a request and get its associated response back.
+ */
+ static class CustomRestClient extends RestHighLevelClient {
+
+ private CustomRestClient(RestClient restClient) {
+ super(restClient);
+ }
+
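+        // each method delegates to one of the protected execution methods whose visibility is verified in testMethodsVisibility()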
+ MainResponse custom(MainRequest mainRequest, Header... headers) throws IOException {
+ return performRequest(mainRequest, this::toRequest, this::toResponse, emptySet(), headers);
+ }
+
+ MainResponse customAndParse(MainRequest mainRequest, Header... headers) throws IOException {
+ return performRequestAndParseEntity(mainRequest, this::toRequest, MainResponse::fromXContent, emptySet(), headers);
+ }
+
+        void customAsync(MainRequest mainRequest, ActionListener<MainResponse> listener, Header... headers) {
+ performRequestAsync(mainRequest, this::toRequest, this::toResponse, listener, emptySet(), headers);
+ }
+
+        void customAndParseAsync(MainRequest mainRequest, ActionListener<MainResponse> listener, Header... headers) {
+ performRequestAsyncAndParseEntity(mainRequest, this::toRequest, MainResponse::fromXContent, listener, emptySet(), headers);
+ }
+
+ Request toRequest(MainRequest mainRequest) throws IOException {
+ return new Request(HttpGet.METHOD_NAME, ENDPOINT, emptyMap(), null);
+ }
+
+ MainResponse toResponse(Response response) throws IOException {
+ return parseEntity(response.getEntity(), MainResponse::fromXContent);
+ }
+ }
+}
diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/ESRestHighLevelClientTestCase.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/ESRestHighLevelClientTestCase.java
new file mode 100644
index 0000000000000..cdd8317830909
--- /dev/null
+++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/ESRestHighLevelClientTestCase.java
@@ -0,0 +1,75 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.client;
+
+import org.apache.http.Header;
+import org.elasticsearch.action.ActionListener;
+import org.elasticsearch.action.support.PlainActionFuture;
+import org.elasticsearch.test.rest.ESRestTestCase;
+import org.junit.AfterClass;
+import org.junit.Before;
+
+import java.io.IOException;
+
+public abstract class ESRestHighLevelClientTestCase extends ESRestTestCase {
+
+ private static RestHighLevelClient restHighLevelClient;
+
+ @Before
+ public void initHighLevelClient() throws IOException {
+ super.initClient();
+ if (restHighLevelClient == null) {
+ restHighLevelClient = new RestHighLevelClient(client());
+ }
+ }
+
+ @AfterClass
+ public static void cleanupClient() {
+ restHighLevelClient = null;
+ }
+
+ protected static RestHighLevelClient highLevelClient() {
+ return restHighLevelClient;
+ }
+
+ /**
+ * Executes the provided request using either the sync method or its async variant, both provided as functions
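+     * (e.g. {@code execute(request, highLevelClient()::get, highLevelClient()::getAsync)})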
+ */
+    protected static <Req, Resp> Resp execute(Req request, SyncMethod<Req, Resp> syncMethod,
+            AsyncMethod<Req, Resp> asyncMethod, Header... headers) throws IOException {
+ if (randomBoolean()) {
+ return syncMethod.execute(request, headers);
+ } else {
+            PlainActionFuture<Resp> future = PlainActionFuture.newFuture();
+ asyncMethod.execute(request, future, headers);
+ return future.actionGet();
+ }
+ }
+
+ @FunctionalInterface
+    protected interface SyncMethod<Request, Response> {
+ Response execute(Request request, Header... headers) throws IOException;
+ }
+
+ @FunctionalInterface
+    protected interface AsyncMethod<Request, Response> {
+        void execute(Request request, ActionListener<Response> listener, Header... headers);
+ }
+}
diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/PingAndInfoIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/PingAndInfoIT.java
new file mode 100644
index 0000000000000..b22ded52655df
--- /dev/null
+++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/PingAndInfoIT.java
@@ -0,0 +1,51 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.client;
+
+import org.elasticsearch.action.main.MainResponse;
+
+import java.io.IOException;
+import java.util.Map;
+
+public class PingAndInfoIT extends ESRestHighLevelClientTestCase {
+
+ public void testPing() throws IOException {
+ assertTrue(highLevelClient().ping());
+ }
+
+ @SuppressWarnings("unchecked")
+ public void testInfo() throws IOException {
+ MainResponse info = highLevelClient().info();
+ // compare with what the low level client outputs
+        Map<String, Object> infoAsMap = entityAsMap(adminClient().performRequest("GET", "/"));
+ assertEquals(infoAsMap.get("cluster_name"), info.getClusterName().value());
+ assertEquals(infoAsMap.get("cluster_uuid"), info.getClusterUuid());
+
+ // only check node name existence, might be a different one from what was hit by low level client in multi-node cluster
+ assertNotNull(info.getNodeName());
+        Map<String, Object> versionMap = (Map<String, Object>) infoAsMap.get("version");
+ assertEquals(versionMap.get("build_hash"), info.getBuild().shortHash());
+ assertEquals(versionMap.get("build_date"), info.getBuild().date());
+ assertEquals(versionMap.get("build_snapshot"), info.getBuild().isSnapshot());
+ assertEquals(versionMap.get("number"), info.getVersion().toString());
+ assertEquals(versionMap.get("lucene_version"), info.getVersion().luceneVersion.toString());
+ }
+
+}
diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/RequestTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/RequestTests.java
new file mode 100644
index 0000000000000..8f52eb37fe95d
--- /dev/null
+++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/RequestTests.java
@@ -0,0 +1,945 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.client;
+
+import org.apache.http.HttpEntity;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.entity.ContentType;
+import org.apache.http.entity.StringEntity;
+import org.apache.http.util.EntityUtils;
+import org.elasticsearch.action.DocWriteRequest;
+import org.elasticsearch.action.bulk.BulkRequest;
+import org.elasticsearch.action.bulk.BulkShardRequest;
+import org.elasticsearch.action.delete.DeleteRequest;
+import org.elasticsearch.action.get.GetRequest;
+import org.elasticsearch.action.index.IndexRequest;
+import org.elasticsearch.action.search.ClearScrollRequest;
+import org.elasticsearch.action.search.SearchRequest;
+import org.elasticsearch.action.search.SearchScrollRequest;
+import org.elasticsearch.action.search.SearchType;
+import org.elasticsearch.action.support.IndicesOptions;
+import org.elasticsearch.action.support.WriteRequest;
+import org.elasticsearch.action.support.replication.ReplicatedWriteRequest;
+import org.elasticsearch.action.support.replication.ReplicationRequest;
+import org.elasticsearch.action.update.UpdateRequest;
+import org.elasticsearch.common.Strings;
+import org.elasticsearch.common.bytes.BytesArray;
+import org.elasticsearch.common.bytes.BytesReference;
+import org.elasticsearch.common.io.Streams;
+import org.elasticsearch.common.lucene.uid.Versions;
+import org.elasticsearch.common.xcontent.ToXContent;
+import org.elasticsearch.common.xcontent.XContentBuilder;
+import org.elasticsearch.common.xcontent.XContentHelper;
+import org.elasticsearch.common.xcontent.XContentParser;
+import org.elasticsearch.common.xcontent.XContentType;
+import org.elasticsearch.index.VersionType;
+import org.elasticsearch.index.query.TermQueryBuilder;
+import org.elasticsearch.rest.action.search.RestSearchAction;
+import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;
+import org.elasticsearch.search.aggregations.support.ValueType;
+import org.elasticsearch.search.builder.SearchSourceBuilder;
+import org.elasticsearch.search.collapse.CollapseBuilder;
+import org.elasticsearch.search.fetch.subphase.FetchSourceContext;
+import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
+import org.elasticsearch.search.rescore.QueryRescorerBuilder;
+import org.elasticsearch.search.suggest.SuggestBuilder;
+import org.elasticsearch.search.suggest.completion.CompletionSuggestionBuilder;
+import org.elasticsearch.test.ESTestCase;
+import org.elasticsearch.test.RandomObjects;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.Modifier;
+import java.util.HashMap;
+import java.util.Locale;
+import java.util.Map;
+import java.util.StringJoiner;
+import java.util.function.Consumer;
+import java.util.function.Function;
+
+import static java.util.Collections.singletonMap;
+import static org.elasticsearch.client.Request.enforceSameContentType;
+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertToXContentEquivalent;
+
+public class RequestTests extends ESTestCase {
+
+ public void testConstructor() throws Exception {
+ final String method = randomFrom("GET", "PUT", "POST", "HEAD", "DELETE");
+ final String endpoint = randomAlphaOfLengthBetween(1, 10);
+        final Map<String, String> parameters = singletonMap(randomAlphaOfLength(5), randomAlphaOfLength(5));
+ final HttpEntity entity = randomBoolean() ? new StringEntity(randomAlphaOfLengthBetween(1, 100), ContentType.TEXT_PLAIN) : null;
+
+ NullPointerException e = expectThrows(NullPointerException.class, () -> new Request(null, endpoint, parameters, entity));
+ assertEquals("method cannot be null", e.getMessage());
+
+ e = expectThrows(NullPointerException.class, () -> new Request(method, null, parameters, entity));
+ assertEquals("endpoint cannot be null", e.getMessage());
+
+ e = expectThrows(NullPointerException.class, () -> new Request(method, endpoint, null, entity));
+ assertEquals("parameters cannot be null", e.getMessage());
+
+ final Request request = new Request(method, endpoint, parameters, entity);
+ assertEquals(method, request.getMethod());
+ assertEquals(endpoint, request.getEndpoint());
+ assertEquals(parameters, request.getParameters());
+ assertEquals(entity, request.getEntity());
+
+        final Constructor<?>[] constructors = Request.class.getConstructors();
+ assertEquals("Expected only 1 constructor", 1, constructors.length);
+ assertTrue("Request constructor is not public", Modifier.isPublic(constructors[0].getModifiers()));
+ }
+
+ public void testClassVisibility() throws Exception {
+ assertTrue("Request class is not public", Modifier.isPublic(Request.class.getModifiers()));
+ }
+
+ public void testPing() {
+ Request request = Request.ping();
+ assertEquals("/", request.getEndpoint());
+ assertEquals(0, request.getParameters().size());
+ assertNull(request.getEntity());
+ assertEquals("HEAD", request.getMethod());
+ }
+
+ public void testInfo() {
+ Request request = Request.info();
+ assertEquals("/", request.getEndpoint());
+ assertEquals(0, request.getParameters().size());
+ assertNull(request.getEntity());
+ assertEquals("GET", request.getMethod());
+ }
+
+ public void testGet() {
+ getAndExistsTest(Request::get, "GET");
+ }
+
+ public void testDelete() throws IOException {
+ String index = randomAlphaOfLengthBetween(3, 10);
+ String type = randomAlphaOfLengthBetween(3, 10);
+ String id = randomAlphaOfLengthBetween(3, 10);
+ DeleteRequest deleteRequest = new DeleteRequest(index, type, id);
+
+        Map<String, String> expectedParams = new HashMap<>();
+
+ setRandomTimeout(deleteRequest, expectedParams);
+ setRandomRefreshPolicy(deleteRequest, expectedParams);
+ setRandomVersion(deleteRequest, expectedParams);
+ setRandomVersionType(deleteRequest, expectedParams);
+
+ if (frequently()) {
+ if (randomBoolean()) {
+ String routing = randomAlphaOfLengthBetween(3, 10);
+ deleteRequest.routing(routing);
+ expectedParams.put("routing", routing);
+ }
+ if (randomBoolean()) {
+ String parent = randomAlphaOfLengthBetween(3, 10);
+ deleteRequest.parent(parent);
+ expectedParams.put("parent", parent);
+ }
+ }
+
+ Request request = Request.delete(deleteRequest);
+ assertEquals("/" + index + "/" + type + "/" + id, request.getEndpoint());
+ assertEquals(expectedParams, request.getParameters());
+ assertEquals("DELETE", request.getMethod());
+ assertNull(request.getEntity());
+ }
+
+ public void testExists() {
+ getAndExistsTest(Request::exists, "HEAD");
+ }
+
+    private static void getAndExistsTest(Function<GetRequest, Request> requestConverter, String method) {
+ String index = randomAlphaOfLengthBetween(3, 10);
+ String type = randomAlphaOfLengthBetween(3, 10);
+ String id = randomAlphaOfLengthBetween(3, 10);
+ GetRequest getRequest = new GetRequest(index, type, id);
+
+        Map<String, String> expectedParams = new HashMap<>();
+ if (randomBoolean()) {
+ if (randomBoolean()) {
+ String preference = randomAlphaOfLengthBetween(3, 10);
+ getRequest.preference(preference);
+ expectedParams.put("preference", preference);
+ }
+ if (randomBoolean()) {
+ String routing = randomAlphaOfLengthBetween(3, 10);
+ getRequest.routing(routing);
+ expectedParams.put("routing", routing);
+ }
+ if (randomBoolean()) {
+ boolean realtime = randomBoolean();
+ getRequest.realtime(realtime);
+ if (realtime == false) {
+ expectedParams.put("realtime", "false");
+ }
+ }
+ if (randomBoolean()) {
+ boolean refresh = randomBoolean();
+ getRequest.refresh(refresh);
+ if (refresh) {
+ expectedParams.put("refresh", "true");
+ }
+ }
+ if (randomBoolean()) {
+ long version = randomLong();
+ getRequest.version(version);
+ if (version != Versions.MATCH_ANY) {
+ expectedParams.put("version", Long.toString(version));
+ }
+ }
+ if (randomBoolean()) {
+ VersionType versionType = randomFrom(VersionType.values());
+ getRequest.versionType(versionType);
+ if (versionType != VersionType.INTERNAL) {
+ expectedParams.put("version_type", versionType.name().toLowerCase(Locale.ROOT));
+ }
+ }
+ if (randomBoolean()) {
+ int numStoredFields = randomIntBetween(1, 10);
+ String[] storedFields = new String[numStoredFields];
+ StringBuilder storedFieldsParam = new StringBuilder();
+ for (int i = 0; i < numStoredFields; i++) {
+ String storedField = randomAlphaOfLengthBetween(3, 10);
+ storedFields[i] = storedField;
+ storedFieldsParam.append(storedField);
+ if (i < numStoredFields - 1) {
+ storedFieldsParam.append(",");
+ }
+ }
+ getRequest.storedFields(storedFields);
+ expectedParams.put("stored_fields", storedFieldsParam.toString());
+ }
+ if (randomBoolean()) {
+ randomizeFetchSourceContextParams(getRequest::fetchSourceContext, expectedParams);
+ }
+ }
+ Request request = requestConverter.apply(getRequest);
+ assertEquals("/" + index + "/" + type + "/" + id, request.getEndpoint());
+ assertEquals(expectedParams, request.getParameters());
+ assertNull(request.getEntity());
+ assertEquals(method, request.getMethod());
+ }
+
+ public void testIndex() throws IOException {
+ String index = randomAlphaOfLengthBetween(3, 10);
+ String type = randomAlphaOfLengthBetween(3, 10);
+ IndexRequest indexRequest = new IndexRequest(index, type);
+
+ String id = randomBoolean() ? randomAlphaOfLengthBetween(3, 10) : null;
+ indexRequest.id(id);
+
+        Map<String, String> expectedParams = new HashMap<>();
+
+ String method = "POST";
+ if (id != null) {
+ method = "PUT";
+ if (randomBoolean()) {
+ indexRequest.opType(DocWriteRequest.OpType.CREATE);
+ }
+ }
+
+ setRandomTimeout(indexRequest, expectedParams);
+ setRandomRefreshPolicy(indexRequest, expectedParams);
+
+ // There is some logic around _create endpoint and version/version type
+ if (indexRequest.opType() == DocWriteRequest.OpType.CREATE) {
+ indexRequest.version(randomFrom(Versions.MATCH_ANY, Versions.MATCH_DELETED));
+ expectedParams.put("version", Long.toString(Versions.MATCH_DELETED));
+ } else {
+ setRandomVersion(indexRequest, expectedParams);
+ setRandomVersionType(indexRequest, expectedParams);
+ }
+
+ if (frequently()) {
+ if (randomBoolean()) {
+ String routing = randomAlphaOfLengthBetween(3, 10);
+ indexRequest.routing(routing);
+ expectedParams.put("routing", routing);
+ }
+ if (randomBoolean()) {
+ String parent = randomAlphaOfLengthBetween(3, 10);
+ indexRequest.parent(parent);
+ expectedParams.put("parent", parent);
+ }
+ if (randomBoolean()) {
+ String pipeline = randomAlphaOfLengthBetween(3, 10);
+ indexRequest.setPipeline(pipeline);
+ expectedParams.put("pipeline", pipeline);
+ }
+ }
+
+ XContentType xContentType = randomFrom(XContentType.values());
+ int nbFields = randomIntBetween(0, 10);
+ try (XContentBuilder builder = XContentBuilder.builder(xContentType.xContent())) {
+ builder.startObject();
+ for (int i = 0; i < nbFields; i++) {
+ builder.field("field_" + i, i);
+ }
+ builder.endObject();
+ indexRequest.source(builder);
+ }
+
+ Request request = Request.index(indexRequest);
+ if (indexRequest.opType() == DocWriteRequest.OpType.CREATE) {
+ assertEquals("/" + index + "/" + type + "/" + id + "/_create", request.getEndpoint());
+ } else if (id != null) {
+ assertEquals("/" + index + "/" + type + "/" + id, request.getEndpoint());
+ } else {
+ assertEquals("/" + index + "/" + type, request.getEndpoint());
+ }
+ assertEquals(expectedParams, request.getParameters());
+ assertEquals(method, request.getMethod());
+
+ HttpEntity entity = request.getEntity();
+ assertTrue(entity instanceof ByteArrayEntity);
+ assertEquals(indexRequest.getContentType().mediaTypeWithoutParameters(), entity.getContentType().getValue());
+ try (XContentParser parser = createParser(xContentType.xContent(), entity.getContent())) {
+ assertEquals(nbFields, parser.map().size());
+ }
+ }
+
+ public void testUpdate() throws IOException {
+ XContentType xContentType = randomFrom(XContentType.values());
+
+        Map<String, String> expectedParams = new HashMap<>();
+ String index = randomAlphaOfLengthBetween(3, 10);
+ String type = randomAlphaOfLengthBetween(3, 10);
+ String id = randomAlphaOfLengthBetween(3, 10);
+
+ UpdateRequest updateRequest = new UpdateRequest(index, type, id);
+ updateRequest.detectNoop(randomBoolean());
+
+ if (randomBoolean()) {
+ BytesReference source = RandomObjects.randomSource(random(), xContentType);
+ updateRequest.doc(new IndexRequest().source(source, xContentType));
+
+ boolean docAsUpsert = randomBoolean();
+ updateRequest.docAsUpsert(docAsUpsert);
+ if (docAsUpsert) {
+ expectedParams.put("doc_as_upsert", "true");
+ }
+ } else {
+ updateRequest.script(mockScript("_value + 1"));
+ updateRequest.scriptedUpsert(randomBoolean());
+ }
+ if (randomBoolean()) {
+ BytesReference source = RandomObjects.randomSource(random(), xContentType);
+ updateRequest.upsert(new IndexRequest().source(source, xContentType));
+ }
+ if (randomBoolean()) {
+ String routing = randomAlphaOfLengthBetween(3, 10);
+ updateRequest.routing(routing);
+ expectedParams.put("routing", routing);
+ }
+ if (randomBoolean()) {
+ String parent = randomAlphaOfLengthBetween(3, 10);
+ updateRequest.parent(parent);
+ expectedParams.put("parent", parent);
+ }
+ if (randomBoolean()) {
+ String timeout = randomTimeValue();
+ updateRequest.timeout(timeout);
+ expectedParams.put("timeout", timeout);
+ } else {
+ expectedParams.put("timeout", ReplicationRequest.DEFAULT_TIMEOUT.getStringRep());
+ }
+ if (randomBoolean()) {
+ WriteRequest.RefreshPolicy refreshPolicy = randomFrom(WriteRequest.RefreshPolicy.values());
+ updateRequest.setRefreshPolicy(refreshPolicy);
+ if (refreshPolicy != WriteRequest.RefreshPolicy.NONE) {
+ expectedParams.put("refresh", refreshPolicy.getValue());
+ }
+ }
+ if (randomBoolean()) {
+ int waitForActiveShards = randomIntBetween(0, 10);
+ updateRequest.waitForActiveShards(waitForActiveShards);
+ expectedParams.put("wait_for_active_shards", String.valueOf(waitForActiveShards));
+ }
+ if (randomBoolean()) {
+ long version = randomLong();
+ updateRequest.version(version);
+ if (version != Versions.MATCH_ANY) {
+ expectedParams.put("version", Long.toString(version));
+ }
+ }
+ if (randomBoolean()) {
+ VersionType versionType = randomFrom(VersionType.values());
+ updateRequest.versionType(versionType);
+ if (versionType != VersionType.INTERNAL) {
+ expectedParams.put("version_type", versionType.name().toLowerCase(Locale.ROOT));
+ }
+ }
+ if (randomBoolean()) {
+ int retryOnConflict = randomIntBetween(0, 5);
+ updateRequest.retryOnConflict(retryOnConflict);
+ if (retryOnConflict > 0) {
+ expectedParams.put("retry_on_conflict", String.valueOf(retryOnConflict));
+ }
+ }
+ if (randomBoolean()) {
+ randomizeFetchSourceContextParams(updateRequest::fetchSource, expectedParams);
+ }
+
+ Request request = Request.update(updateRequest);
+ assertEquals("/" + index + "/" + type + "/" + id + "/_update", request.getEndpoint());
+ assertEquals(expectedParams, request.getParameters());
+ assertEquals("POST", request.getMethod());
+
+ HttpEntity entity = request.getEntity();
+ assertTrue(entity instanceof ByteArrayEntity);
+
+ UpdateRequest parsedUpdateRequest = new UpdateRequest();
+
+ XContentType entityContentType = XContentType.fromMediaTypeOrFormat(entity.getContentType().getValue());
+ try (XContentParser parser = createParser(entityContentType.xContent(), entity.getContent())) {
+ parsedUpdateRequest.fromXContent(parser);
+ }
+
+ assertEquals(updateRequest.scriptedUpsert(), parsedUpdateRequest.scriptedUpsert());
+ assertEquals(updateRequest.docAsUpsert(), parsedUpdateRequest.docAsUpsert());
+ assertEquals(updateRequest.detectNoop(), parsedUpdateRequest.detectNoop());
+ assertEquals(updateRequest.fetchSource(), parsedUpdateRequest.fetchSource());
+ assertEquals(updateRequest.script(), parsedUpdateRequest.script());
+ if (updateRequest.doc() != null) {
+ assertToXContentEquivalent(updateRequest.doc().source(), parsedUpdateRequest.doc().source(), xContentType);
+ } else {
+ assertNull(parsedUpdateRequest.doc());
+ }
+ if (updateRequest.upsertRequest() != null) {
+ assertToXContentEquivalent(updateRequest.upsertRequest().source(), parsedUpdateRequest.upsertRequest().source(), xContentType);
+ } else {
+ assertNull(parsedUpdateRequest.upsertRequest());
+ }
+ }
+
+ public void testUpdateWithDifferentContentTypes() throws IOException {
+ IllegalStateException exception = expectThrows(IllegalStateException.class, () -> {
+ UpdateRequest updateRequest = new UpdateRequest();
+ updateRequest.doc(new IndexRequest().source(singletonMap("field", "doc"), XContentType.JSON));
+ updateRequest.upsert(new IndexRequest().source(singletonMap("field", "upsert"), XContentType.YAML));
+ Request.update(updateRequest);
+ });
+ assertEquals("Update request cannot have different content types for doc [JSON] and upsert [YAML] documents",
+ exception.getMessage());
+ }
+
+ public void testBulk() throws IOException {
+        Map<String, String> expectedParams = new HashMap<>();
+
+ BulkRequest bulkRequest = new BulkRequest();
+ if (randomBoolean()) {
+ String timeout = randomTimeValue();
+ bulkRequest.timeout(timeout);
+ expectedParams.put("timeout", timeout);
+ } else {
+ expectedParams.put("timeout", BulkShardRequest.DEFAULT_TIMEOUT.getStringRep());
+ }
+
+ if (randomBoolean()) {
+ WriteRequest.RefreshPolicy refreshPolicy = randomFrom(WriteRequest.RefreshPolicy.values());
+ bulkRequest.setRefreshPolicy(refreshPolicy);
+ if (refreshPolicy != WriteRequest.RefreshPolicy.NONE) {
+ expectedParams.put("refresh", refreshPolicy.getValue());
+ }
+ }
+
+ XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE);
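+        // the bulk endpoint only supports JSON and SMILE bodies, so only those content types are randomized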
+
+ int nbItems = randomIntBetween(10, 100);
+ for (int i = 0; i < nbItems; i++) {
+ String index = randomAlphaOfLength(5);
+ String type = randomAlphaOfLength(5);
+ String id = randomAlphaOfLength(5);
+
+ BytesReference source = RandomObjects.randomSource(random(), xContentType);
+ DocWriteRequest.OpType opType = randomFrom(DocWriteRequest.OpType.values());
+
+            DocWriteRequest<?> docWriteRequest = null;
+ if (opType == DocWriteRequest.OpType.INDEX) {
+ IndexRequest indexRequest = new IndexRequest(index, type, id).source(source, xContentType);
+ docWriteRequest = indexRequest;
+ if (randomBoolean()) {
+ indexRequest.setPipeline(randomAlphaOfLength(5));
+ }
+ if (randomBoolean()) {
+ indexRequest.parent(randomAlphaOfLength(5));
+ }
+ } else if (opType == DocWriteRequest.OpType.CREATE) {
+ IndexRequest createRequest = new IndexRequest(index, type, id).source(source, xContentType).create(true);
+ docWriteRequest = createRequest;
+ if (randomBoolean()) {
+ createRequest.parent(randomAlphaOfLength(5));
+ }
+ } else if (opType == DocWriteRequest.OpType.UPDATE) {
+ final UpdateRequest updateRequest = new UpdateRequest(index, type, id).doc(new IndexRequest().source(source, xContentType));
+ docWriteRequest = updateRequest;
+ if (randomBoolean()) {
+ updateRequest.retryOnConflict(randomIntBetween(1, 5));
+ }
+ if (randomBoolean()) {
+ randomizeFetchSourceContextParams(updateRequest::fetchSource, new HashMap<>());
+ }
+ if (randomBoolean()) {
+ updateRequest.parent(randomAlphaOfLength(5));
+ }
+ } else if (opType == DocWriteRequest.OpType.DELETE) {
+ docWriteRequest = new DeleteRequest(index, type, id);
+ }
+
+ if (randomBoolean()) {
+ docWriteRequest.routing(randomAlphaOfLength(10));
+ }
+ if (randomBoolean()) {
+ docWriteRequest.version(randomNonNegativeLong());
+ }
+ if (randomBoolean()) {
+ docWriteRequest.versionType(randomFrom(VersionType.values()));
+ }
+ bulkRequest.add(docWriteRequest);
+ }
+
+ Request request = Request.bulk(bulkRequest);
+ assertEquals("/_bulk", request.getEndpoint());
+ assertEquals(expectedParams, request.getParameters());
+ assertEquals("POST", request.getMethod());
+ assertEquals(xContentType.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());
+ byte[] content = new byte[(int) request.getEntity().getContentLength()];
+ try (InputStream inputStream = request.getEntity().getContent()) {
+ Streams.readFully(inputStream, content);
+ }
+
+ BulkRequest parsedBulkRequest = new BulkRequest();
+ parsedBulkRequest.add(content, 0, content.length, xContentType);
+ assertEquals(bulkRequest.numberOfActions(), parsedBulkRequest.numberOfActions());
+
+ for (int i = 0; i < bulkRequest.numberOfActions(); i++) {
+            DocWriteRequest<?> originalRequest = bulkRequest.requests().get(i);
+            DocWriteRequest<?> parsedRequest = parsedBulkRequest.requests().get(i);
+
+ assertEquals(originalRequest.opType(), parsedRequest.opType());
+ assertEquals(originalRequest.index(), parsedRequest.index());
+ assertEquals(originalRequest.type(), parsedRequest.type());
+ assertEquals(originalRequest.id(), parsedRequest.id());
+ assertEquals(originalRequest.routing(), parsedRequest.routing());
+ assertEquals(originalRequest.parent(), parsedRequest.parent());
+ assertEquals(originalRequest.version(), parsedRequest.version());
+ assertEquals(originalRequest.versionType(), parsedRequest.versionType());
+
+ DocWriteRequest.OpType opType = originalRequest.opType();
+ if (opType == DocWriteRequest.OpType.INDEX) {
+ IndexRequest indexRequest = (IndexRequest) originalRequest;
+ IndexRequest parsedIndexRequest = (IndexRequest) parsedRequest;
+
+ assertEquals(indexRequest.getPipeline(), parsedIndexRequest.getPipeline());
+ assertToXContentEquivalent(indexRequest.source(), parsedIndexRequest.source(), xContentType);
+ } else if (opType == DocWriteRequest.OpType.UPDATE) {
+ UpdateRequest updateRequest = (UpdateRequest) originalRequest;
+ UpdateRequest parsedUpdateRequest = (UpdateRequest) parsedRequest;
+
+ assertEquals(updateRequest.retryOnConflict(), parsedUpdateRequest.retryOnConflict());
+ assertEquals(updateRequest.fetchSource(), parsedUpdateRequest.fetchSource());
+ if (updateRequest.doc() != null) {
+ assertToXContentEquivalent(updateRequest.doc().source(), parsedUpdateRequest.doc().source(), xContentType);
+ } else {
+ assertNull(parsedUpdateRequest.doc());
+ }
+ }
+ }
+ }
+
+ public void testBulkWithDifferentContentTypes() throws IOException {
+ {
+ BulkRequest bulkRequest = new BulkRequest();
+ bulkRequest.add(new DeleteRequest("index", "type", "0"));
+ bulkRequest.add(new UpdateRequest("index", "type", "1").script(mockScript("test")));
+ bulkRequest.add(new DeleteRequest("index", "type", "2"));
+
+ Request request = Request.bulk(bulkRequest);
+ assertEquals(XContentType.JSON.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());
+ }
+ {
+ XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE);
+ BulkRequest bulkRequest = new BulkRequest();
+ bulkRequest.add(new DeleteRequest("index", "type", "0"));
+ bulkRequest.add(new IndexRequest("index", "type", "0").source(singletonMap("field", "value"), xContentType));
+ bulkRequest.add(new DeleteRequest("index", "type", "2"));
+
+ Request request = Request.bulk(bulkRequest);
+ assertEquals(xContentType.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());
+ }
+ {
+ XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE);
+ UpdateRequest updateRequest = new UpdateRequest("index", "type", "0");
+ if (randomBoolean()) {
+ updateRequest.doc(new IndexRequest().source(singletonMap("field", "value"), xContentType));
+ } else {
+ updateRequest.upsert(new IndexRequest().source(singletonMap("field", "value"), xContentType));
+ }
+
+ Request request = Request.bulk(new BulkRequest().add(updateRequest));
+ assertEquals(xContentType.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());
+ }
+ {
+ BulkRequest bulkRequest = new BulkRequest();
+ bulkRequest.add(new IndexRequest("index", "type", "0").source(singletonMap("field", "value"), XContentType.SMILE));
+ bulkRequest.add(new IndexRequest("index", "type", "1").source(singletonMap("field", "value"), XContentType.JSON));
+ IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> Request.bulk(bulkRequest));
+ assertEquals("Mismatching content-type found for request with content-type [JSON], " +
+ "previous requests have content-type [SMILE]", exception.getMessage());
+ }
+ {
+ BulkRequest bulkRequest = new BulkRequest();
+ bulkRequest.add(new IndexRequest("index", "type", "0")
+ .source(singletonMap("field", "value"), XContentType.JSON));
+ bulkRequest.add(new IndexRequest("index", "type", "1")
+ .source(singletonMap("field", "value"), XContentType.JSON));
+ bulkRequest.add(new UpdateRequest("index", "type", "2")
+ .doc(new IndexRequest().source(singletonMap("field", "value"), XContentType.JSON))
+ .upsert(new IndexRequest().source(singletonMap("field", "value"), XContentType.SMILE))
+ );
+ IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> Request.bulk(bulkRequest));
+ assertEquals("Mismatching content-type found for request with content-type [SMILE], " +
+ "previous requests have content-type [JSON]", exception.getMessage());
+ }
+ {
+ XContentType xContentType = randomFrom(XContentType.CBOR, XContentType.YAML);
+ BulkRequest bulkRequest = new BulkRequest();
+ bulkRequest.add(new DeleteRequest("index", "type", "0"));
+ bulkRequest.add(new IndexRequest("index", "type", "1").source(singletonMap("field", "value"), XContentType.JSON));
+ bulkRequest.add(new DeleteRequest("index", "type", "2"));
+ bulkRequest.add(new DeleteRequest("index", "type", "3"));
+ bulkRequest.add(new IndexRequest("index", "type", "4").source(singletonMap("field", "value"), XContentType.JSON));
+ bulkRequest.add(new IndexRequest("index", "type", "1").source(singletonMap("field", "value"), xContentType));
+ IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> Request.bulk(bulkRequest));
+ assertEquals("Unsupported content-type found for request with content-type [" + xContentType
+ + "], only JSON and SMILE are supported", exception.getMessage());
+ }
+ }
+
+ public void testSearch() throws Exception {
+ SearchRequest searchRequest = new SearchRequest();
+ int numIndices = randomIntBetween(0, 5);
+ String[] indices = new String[numIndices];
+ for (int i = 0; i < numIndices; i++) {
+ indices[i] = "index-" + randomAlphaOfLengthBetween(2, 5);
+ }
+ searchRequest.indices(indices);
+ int numTypes = randomIntBetween(0, 5);
+ String[] types = new String[numTypes];
+ for (int i = 0; i < numTypes; i++) {
+ types[i] = "type-" + randomAlphaOfLengthBetween(2, 5);
+ }
+ searchRequest.types(types);
+
+        Map<String, String> expectedParams = new HashMap<>();
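+        // typed_keys is always sent so that aggregation and suggestion names come back prefixed with their type and can be parsed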
+ expectedParams.put(RestSearchAction.TYPED_KEYS_PARAM, "true");
+ if (randomBoolean()) {
+ searchRequest.routing(randomAlphaOfLengthBetween(3, 10));
+ expectedParams.put("routing", searchRequest.routing());
+ }
+ if (randomBoolean()) {
+ searchRequest.preference(randomAlphaOfLengthBetween(3, 10));
+ expectedParams.put("preference", searchRequest.preference());
+ }
+ if (randomBoolean()) {
+ searchRequest.searchType(randomFrom(SearchType.values()));
+ }
+ expectedParams.put("search_type", searchRequest.searchType().name().toLowerCase(Locale.ROOT));
+ if (randomBoolean()) {
+ searchRequest.requestCache(randomBoolean());
+ expectedParams.put("request_cache", Boolean.toString(searchRequest.requestCache()));
+ }
+ if (randomBoolean()) {
+ searchRequest.setBatchedReduceSize(randomIntBetween(2, Integer.MAX_VALUE));
+ }
+ expectedParams.put("batched_reduce_size", Integer.toString(searchRequest.getBatchedReduceSize()));
+ if (randomBoolean()) {
+ searchRequest.scroll(randomTimeValue());
+ expectedParams.put("scroll", searchRequest.scroll().keepAlive().getStringRep());
+ }
+
+ if (randomBoolean()) {
+ searchRequest.indicesOptions(IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean()));
+ }
+ expectedParams.put("ignore_unavailable", Boolean.toString(searchRequest.indicesOptions().ignoreUnavailable()));
+ expectedParams.put("allow_no_indices", Boolean.toString(searchRequest.indicesOptions().allowNoIndices()));
+ if (searchRequest.indicesOptions().expandWildcardsOpen() && searchRequest.indicesOptions().expandWildcardsClosed()) {
+ expectedParams.put("expand_wildcards", "open,closed");
+ } else if (searchRequest.indicesOptions().expandWildcardsOpen()) {
+ expectedParams.put("expand_wildcards", "open");
+ } else if (searchRequest.indicesOptions().expandWildcardsClosed()) {
+ expectedParams.put("expand_wildcards", "closed");
+ } else {
+ expectedParams.put("expand_wildcards", "none");
+ }
+
+ SearchSourceBuilder searchSourceBuilder = null;
+ if (frequently()) {
+ searchSourceBuilder = new SearchSourceBuilder();
+ if (randomBoolean()) {
+ searchSourceBuilder.size(randomIntBetween(0, Integer.MAX_VALUE));
+ }
+ if (randomBoolean()) {
+ searchSourceBuilder.from(randomIntBetween(0, Integer.MAX_VALUE));
+ }
+ if (randomBoolean()) {
+ searchSourceBuilder.minScore(randomFloat());
+ }
+ if (randomBoolean()) {
+ searchSourceBuilder.explain(randomBoolean());
+ }
+ if (randomBoolean()) {
+ searchSourceBuilder.profile(randomBoolean());
+ }
+ if (randomBoolean()) {
+ searchSourceBuilder.highlighter(new HighlightBuilder().field(randomAlphaOfLengthBetween(3, 10)));
+ }
+ if (randomBoolean()) {
+ searchSourceBuilder.query(new TermQueryBuilder(randomAlphaOfLengthBetween(3, 10), randomAlphaOfLengthBetween(3, 10)));
+ }
+ if (randomBoolean()) {
+ searchSourceBuilder.aggregation(new TermsAggregationBuilder(randomAlphaOfLengthBetween(3, 10), ValueType.STRING)
+ .field(randomAlphaOfLengthBetween(3, 10)));
+ }
+ if (randomBoolean()) {
+ searchSourceBuilder.suggest(new SuggestBuilder().addSuggestion(randomAlphaOfLengthBetween(3, 10),
+ new CompletionSuggestionBuilder(randomAlphaOfLengthBetween(3, 10))));
+ }
+ if (randomBoolean()) {
+ searchSourceBuilder.addRescorer(new QueryRescorerBuilder(
+ new TermQueryBuilder(randomAlphaOfLengthBetween(3, 10), randomAlphaOfLengthBetween(3, 10))));
+ }
+ if (randomBoolean()) {
+ searchSourceBuilder.collapse(new CollapseBuilder(randomAlphaOfLengthBetween(3, 10)));
+ }
+ searchRequest.source(searchSourceBuilder);
+ }
+
+ Request request = Request.search(searchRequest);
+ StringJoiner endpoint = new StringJoiner("/", "/", "");
+ String index = String.join(",", indices);
+ if (Strings.hasLength(index)) {
+ endpoint.add(index);
+ }
+ String type = String.join(",", types);
+ if (Strings.hasLength(type)) {
+ endpoint.add(type);
+ }
+ endpoint.add("_search");
+ assertEquals(endpoint.toString(), request.getEndpoint());
+ assertEquals(expectedParams, request.getParameters());
+ if (searchSourceBuilder == null) {
+ assertNull(request.getEntity());
+ } else {
+ assertToXContentBody(searchSourceBuilder, request.getEntity());
+ }
+ }
+
+ public void testSearchScroll() throws IOException {
+ SearchScrollRequest searchScrollRequest = new SearchScrollRequest();
+ searchScrollRequest.scrollId(randomAlphaOfLengthBetween(5, 10));
+ if (randomBoolean()) {
+ searchScrollRequest.scroll(randomPositiveTimeValue());
+ }
+ Request request = Request.searchScroll(searchScrollRequest);
+ assertEquals("GET", request.getMethod());
+ assertEquals("/_search/scroll", request.getEndpoint());
+ assertEquals(0, request.getParameters().size());
+ assertToXContentBody(searchScrollRequest, request.getEntity());
+ assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());
+ }
+
+ public void testClearScroll() throws IOException {
+ ClearScrollRequest clearScrollRequest = new ClearScrollRequest();
+ int numScrolls = randomIntBetween(1, 10);
+ for (int i = 0; i < numScrolls; i++) {
+ clearScrollRequest.addScrollId(randomAlphaOfLengthBetween(5, 10));
+ }
+ Request request = Request.clearScroll(clearScrollRequest);
+ assertEquals("DELETE", request.getMethod());
+ assertEquals("/_search/scroll", request.getEndpoint());
+ assertEquals(0, request.getParameters().size());
+ assertToXContentBody(clearScrollRequest, request.getEntity());
+ assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());
+ }
+
+ private static void assertToXContentBody(ToXContent expectedBody, HttpEntity actualEntity) throws IOException {
+ BytesReference expectedBytes = XContentHelper.toXContent(expectedBody, Request.REQUEST_BODY_CONTENT_TYPE, false);
+ assertEquals(XContentType.JSON.mediaTypeWithoutParameters(), actualEntity.getContentType().getValue());
+ assertEquals(expectedBytes, new BytesArray(EntityUtils.toByteArray(actualEntity)));
+ }
+
+ public void testParams() {
+ final int nbParams = randomIntBetween(0, 10);
+ Request.Params params = Request.Params.builder();
+        Map<String, String> expectedParams = new HashMap<>();
+ for (int i = 0; i < nbParams; i++) {
+ String paramName = "p_" + i;
+ String paramValue = randomAlphaOfLength(5);
+ params.putParam(paramName, paramValue);
+ expectedParams.put(paramName, paramValue);
+ }
+
+        Map<String, String> requestParams = params.getParams();
+ assertEquals(nbParams, requestParams.size());
+ assertEquals(expectedParams, requestParams);
+ }
+
+ public void testParamsNoDuplicates() {
+ Request.Params params = Request.Params.builder();
+ params.putParam("test", "1");
+
+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> params.putParam("test", "2"));
+ assertEquals("Request parameter [test] is already registered", e.getMessage());
+
+        Map<String, String> requestParams = params.getParams();
+ assertEquals(1L, requestParams.size());
+ assertEquals("1", requestParams.values().iterator().next());
+ }
+
+ public void testEndpoint() {
+ assertEquals("/", Request.endpoint());
+ assertEquals("/", Request.endpoint(Strings.EMPTY_ARRAY));
+ assertEquals("/", Request.endpoint(""));
+ assertEquals("/a/b", Request.endpoint("a", "b"));
+ assertEquals("/a/b/_create", Request.endpoint("a", "b", "_create"));
+ assertEquals("/a/b/c/_create", Request.endpoint("a", "b", "c", "_create"));
+ assertEquals("/a/_create", Request.endpoint("a", null, null, "_create"));
+ }
+
+ public void testCreateContentType() {
+ final XContentType xContentType = randomFrom(XContentType.values());
+ assertEquals(xContentType.mediaTypeWithoutParameters(), Request.createContentType(xContentType).getMimeType());
+ }
+
+ public void testEnforceSameContentType() {
+ XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE);
+ IndexRequest indexRequest = new IndexRequest().source(singletonMap("field", "value"), xContentType);
+ assertEquals(xContentType, enforceSameContentType(indexRequest, null));
+ assertEquals(xContentType, enforceSameContentType(indexRequest, xContentType));
+
+ XContentType bulkContentType = randomBoolean() ? xContentType : null;
+
+ IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () ->
+ enforceSameContentType(new IndexRequest().source(singletonMap("field", "value"), XContentType.CBOR), bulkContentType));
+ assertEquals("Unsupported content-type found for request with content-type [CBOR], only JSON and SMILE are supported",
+ exception.getMessage());
+
+ exception = expectThrows(IllegalArgumentException.class, () ->
+ enforceSameContentType(new IndexRequest().source(singletonMap("field", "value"), XContentType.YAML), bulkContentType));
+ assertEquals("Unsupported content-type found for request with content-type [YAML], only JSON and SMILE are supported",
+ exception.getMessage());
+
+ XContentType requestContentType = xContentType == XContentType.JSON ? XContentType.SMILE : XContentType.JSON;
+
+ exception = expectThrows(IllegalArgumentException.class, () ->
+ enforceSameContentType(new IndexRequest().source(singletonMap("field", "value"), requestContentType), xContentType));
+ assertEquals("Mismatching content-type found for request with content-type [" + requestContentType + "], "
+ + "previous requests have content-type [" + xContentType + "]", exception.getMessage());
+ }
+
+ /**
+     * Randomizes the {@link FetchSourceContext} request parameters.
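+     * Either disables source fetching entirely or sets random include and exclude lists, matching the
+     * {@code _source}, {@code _source_include} and {@code _source_exclude} parameters.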
+ */
+    private static void randomizeFetchSourceContextParams(Consumer<FetchSourceContext> consumer, Map<String, String> expectedParams) {
+ if (randomBoolean()) {
+ if (randomBoolean()) {
+ boolean fetchSource = randomBoolean();
+ consumer.accept(new FetchSourceContext(fetchSource));
+ if (fetchSource == false) {
+ expectedParams.put("_source", "false");
+ }
+ } else {
+ int numIncludes = randomIntBetween(0, 5);
+ String[] includes = new String[numIncludes];
+ StringBuilder includesParam = new StringBuilder();
+ for (int i = 0; i < numIncludes; i++) {
+ String include = randomAlphaOfLengthBetween(3, 10);
+ includes[i] = include;
+ includesParam.append(include);
+ if (i < numIncludes - 1) {
+ includesParam.append(",");
+ }
+ }
+ if (numIncludes > 0) {
+ expectedParams.put("_source_include", includesParam.toString());
+ }
+ int numExcludes = randomIntBetween(0, 5);
+ String[] excludes = new String[numExcludes];
+ StringBuilder excludesParam = new StringBuilder();
+ for (int i = 0; i < numExcludes; i++) {
+ String exclude = randomAlphaOfLengthBetween(3, 10);
+ excludes[i] = exclude;
+ excludesParam.append(exclude);
+ if (i < numExcludes - 1) {
+ excludesParam.append(",");
+ }
+ }
+ if (numExcludes > 0) {
+ expectedParams.put("_source_exclude", excludesParam.toString());
+ }
+ consumer.accept(new FetchSourceContext(true, includes, excludes));
+ }
+ }
+ }
+
+    private static void setRandomTimeout(ReplicationRequest<?> request, Map<String, String> expectedParams) {
+ if (randomBoolean()) {
+ String timeout = randomTimeValue();
+ request.timeout(timeout);
+ expectedParams.put("timeout", timeout);
+ } else {
+ expectedParams.put("timeout", ReplicationRequest.DEFAULT_TIMEOUT.getStringRep());
+ }
+ }
+
+    private static void setRandomRefreshPolicy(ReplicatedWriteRequest<?> request, Map<String, String> expectedParams) {
+ if (randomBoolean()) {
+ WriteRequest.RefreshPolicy refreshPolicy = randomFrom(WriteRequest.RefreshPolicy.values());
+ request.setRefreshPolicy(refreshPolicy);
+ if (refreshPolicy != WriteRequest.RefreshPolicy.NONE) {
+ expectedParams.put("refresh", refreshPolicy.getValue());
+ }
+ }
+ }
+
+    private static void setRandomVersion(DocWriteRequest<?> request, Map<String, String> expectedParams) {
+ if (randomBoolean()) {
+ long version = randomFrom(Versions.MATCH_ANY, Versions.MATCH_DELETED, Versions.NOT_FOUND, randomNonNegativeLong());
+ request.version(version);
+ if (version != Versions.MATCH_ANY) {
+ expectedParams.put("version", Long.toString(version));
+ }
+ }
+ }
+
+    private static void setRandomVersionType(DocWriteRequest<?> request, Map<String, String> expectedParams) {
+ if (randomBoolean()) {
+ VersionType versionType = randomFrom(VersionType.values());
+ request.versionType(versionType);
+ if (versionType != VersionType.INTERNAL) {
+ expectedParams.put("version_type", versionType.name().toLowerCase(Locale.ROOT));
+ }
+ }
+ }
+}
diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientExtTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientExtTests.java
new file mode 100644
index 0000000000000..8c12cbeb1e563
--- /dev/null
+++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientExtTests.java
@@ -0,0 +1,138 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.client;
+
+import org.apache.http.HttpEntity;
+import org.apache.http.entity.ContentType;
+import org.apache.http.entity.StringEntity;
+import org.elasticsearch.common.ParseField;
+import org.elasticsearch.common.xcontent.NamedXContentRegistry;
+import org.elasticsearch.common.xcontent.XContentParser;
+import org.elasticsearch.test.ESTestCase;
+import org.junit.Before;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.hamcrest.CoreMatchers.instanceOf;
+import static org.mockito.Mockito.mock;
+
+/**
+ * This test works against a {@link RestHighLevelClient} subclass that simulates how custom response sections returned by
+ * Elasticsearch plugins can be parsed using the high level client.
+ */
+public class RestHighLevelClientExtTests extends ESTestCase {
+
+ private RestHighLevelClient restHighLevelClient;
+
+ @Before
+ public void initClient() throws IOException {
+ RestClient restClient = mock(RestClient.class);
+ restHighLevelClient = new RestHighLevelClientExt(restClient);
+ }
+
+ public void testParseEntityCustomResponseSection() throws IOException {
+ {
+ HttpEntity jsonEntity = new StringEntity("{\"custom1\":{ \"field\":\"value\"}}", ContentType.APPLICATION_JSON);
+ BaseCustomResponseSection customSection = restHighLevelClient.parseEntity(jsonEntity, BaseCustomResponseSection::fromXContent);
+ assertThat(customSection, instanceOf(CustomResponseSection1.class));
+ CustomResponseSection1 customResponseSection1 = (CustomResponseSection1) customSection;
+ assertEquals("value", customResponseSection1.value);
+ }
+ {
+ HttpEntity jsonEntity = new StringEntity("{\"custom2\":{ \"array\": [\"item1\", \"item2\"]}}", ContentType.APPLICATION_JSON);
+ BaseCustomResponseSection customSection = restHighLevelClient.parseEntity(jsonEntity, BaseCustomResponseSection::fromXContent);
+ assertThat(customSection, instanceOf(CustomResponseSection2.class));
+ CustomResponseSection2 customResponseSection2 = (CustomResponseSection2) customSection;
+ assertArrayEquals(new String[]{"item1", "item2"}, customResponseSection2.values);
+ }
+ }
+
+ private static class RestHighLevelClientExt extends RestHighLevelClient {
+
+ private RestHighLevelClientExt(RestClient restClient) {
+ super(restClient, getNamedXContentsExt());
+ }
+
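+        // map each custom section name to its parser so that parseEntity can dispatch through the NamedXContentRegistry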
+        private static List<NamedXContentRegistry.Entry> getNamedXContentsExt() {
+            List<NamedXContentRegistry.Entry> entries = new ArrayList<>();
+ entries.add(new NamedXContentRegistry.Entry(BaseCustomResponseSection.class, new ParseField("custom1"),
+ CustomResponseSection1::fromXContent));
+ entries.add(new NamedXContentRegistry.Entry(BaseCustomResponseSection.class, new ParseField("custom2"),
+ CustomResponseSection2::fromXContent));
+ return entries;
+ }
+ }
+
+ private abstract static class BaseCustomResponseSection {
+
+ static BaseCustomResponseSection fromXContent(XContentParser parser) throws IOException {
+ assertEquals(XContentParser.Token.START_OBJECT, parser.nextToken());
+ assertEquals(XContentParser.Token.FIELD_NAME, parser.nextToken());
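+            // delegate to the parser registered for the current field name ("custom1" or "custom2")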
+ BaseCustomResponseSection custom = parser.namedObject(BaseCustomResponseSection.class, parser.currentName(), null);
+ assertEquals(XContentParser.Token.END_OBJECT, parser.nextToken());
+ return custom;
+ }
+ }
+
+ private static class CustomResponseSection1 extends BaseCustomResponseSection {
+
+ private final String value;
+
+ private CustomResponseSection1(String value) {
+ this.value = value;
+ }
+
+ static CustomResponseSection1 fromXContent(XContentParser parser) throws IOException {
+ assertEquals(XContentParser.Token.START_OBJECT, parser.nextToken());
+ assertEquals(XContentParser.Token.FIELD_NAME, parser.nextToken());
+ assertEquals("field", parser.currentName());
+ assertEquals(XContentParser.Token.VALUE_STRING, parser.nextToken());
+ CustomResponseSection1 responseSection1 = new CustomResponseSection1(parser.text());
+ assertEquals(XContentParser.Token.END_OBJECT, parser.nextToken());
+ return responseSection1;
+ }
+ }
+
+ private static class CustomResponseSection2 extends BaseCustomResponseSection {
+
+ private final String[] values;
+
+ private CustomResponseSection2(String[] values) {
+ this.values = values;
+ }
+
+ static CustomResponseSection2 fromXContent(XContentParser parser) throws IOException {
+ assertEquals(XContentParser.Token.START_OBJECT, parser.nextToken());
+ assertEquals(XContentParser.Token.FIELD_NAME, parser.nextToken());
+ assertEquals("array", parser.currentName());
+ assertEquals(XContentParser.Token.START_ARRAY, parser.nextToken());
+            List<String> values = new ArrayList<>();
+            while (parser.nextToken().isValue()) {
+ values.add(parser.text());
+ }
+ assertEquals(XContentParser.Token.END_ARRAY, parser.currentToken());
+ CustomResponseSection2 responseSection2 = new CustomResponseSection2(values.toArray(new String[values.size()]));
+ assertEquals(XContentParser.Token.END_OBJECT, parser.nextToken());
+ return responseSection2;
+ }
+ }
+}
diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientTests.java
new file mode 100644
index 0000000000000..a6d015afca713
--- /dev/null
+++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientTests.java
@@ -0,0 +1,691 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.client;
+
+import com.fasterxml.jackson.core.JsonParseException;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpHost;
+import org.apache.http.HttpResponse;
+import org.apache.http.ProtocolVersion;
+import org.apache.http.RequestLine;
+import org.apache.http.StatusLine;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.entity.ContentType;
+import org.apache.http.entity.StringEntity;
+import org.apache.http.message.BasicHttpResponse;
+import org.apache.http.message.BasicRequestLine;
+import org.apache.http.message.BasicStatusLine;
+import org.apache.http.nio.entity.NStringEntity;
+
+import org.elasticsearch.Build;
+import org.elasticsearch.ElasticsearchException;
+import org.elasticsearch.Version;
+import org.elasticsearch.action.ActionListener;
+import org.elasticsearch.action.ActionRequest;
+import org.elasticsearch.action.ActionRequestValidationException;
+import org.elasticsearch.action.main.MainRequest;
+import org.elasticsearch.action.main.MainResponse;
+import org.elasticsearch.action.search.ClearScrollRequest;
+import org.elasticsearch.action.search.ClearScrollResponse;
+import org.elasticsearch.action.search.SearchResponse;
+import org.elasticsearch.action.search.SearchResponseSections;
+import org.elasticsearch.action.search.SearchScrollRequest;
+import org.elasticsearch.action.search.ShardSearchFailure;
+import org.elasticsearch.cluster.ClusterName;
+import org.elasticsearch.common.CheckedFunction;
+import org.elasticsearch.common.xcontent.NamedXContentRegistry;
+import org.elasticsearch.common.xcontent.ToXContent;
+import org.elasticsearch.common.xcontent.XContentBuilder;
+import org.elasticsearch.common.xcontent.XContentParser;
+import org.elasticsearch.common.xcontent.cbor.CborXContent;
+import org.elasticsearch.common.xcontent.smile.SmileXContent;
+import org.elasticsearch.join.aggregations.ChildrenAggregationBuilder;
+import org.elasticsearch.rest.RestStatus;
+import org.elasticsearch.search.SearchHits;
+import org.elasticsearch.search.aggregations.Aggregation;
+import org.elasticsearch.search.aggregations.InternalAggregations;
+import org.elasticsearch.search.aggregations.matrix.stats.MatrixStatsAggregationBuilder;
+import org.elasticsearch.search.suggest.Suggest;
+import org.elasticsearch.test.ESTestCase;
+import org.elasticsearch.test.InternalAggregationTestCase;
+import org.junit.Before;
+import org.mockito.ArgumentMatcher;
+import org.mockito.internal.matchers.ArrayEquals;
+import org.mockito.internal.matchers.VarargMatcher;
+
+import java.io.IOException;
+import java.net.SocketTimeoutException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicReference;
+
+import static org.elasticsearch.client.RestClientTestUtil.randomHeaders;
+import static org.elasticsearch.common.xcontent.XContentHelper.toXContent;
+import static org.hamcrest.CoreMatchers.instanceOf;
+import static org.mockito.Matchers.anyMapOf;
+import static org.mockito.Matchers.anyObject;
+import static org.mockito.Matchers.anyString;
+import static org.mockito.Matchers.anyVararg;
+import static org.mockito.Matchers.argThat;
+import static org.mockito.Matchers.eq;
+import static org.mockito.Matchers.isNotNull;
+import static org.mockito.Matchers.isNull;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+public class RestHighLevelClientTests extends ESTestCase {
+
+ private static final ProtocolVersion HTTP_PROTOCOL = new ProtocolVersion("http", 1, 1);
+ private static final RequestLine REQUEST_LINE = new BasicRequestLine("GET", "/", HTTP_PROTOCOL);
+
+ private RestClient restClient;
+ private RestHighLevelClient restHighLevelClient;
+
+ @Before
+ public void initClient() {
+ restClient = mock(RestClient.class);
+ restHighLevelClient = new RestHighLevelClient(restClient);
+ }
+
+ public void testPingSuccessful() throws IOException {
+ Header[] headers = randomHeaders(random(), "Header");
+ Response response = mock(Response.class);
+ when(response.getStatusLine()).thenReturn(newStatusLine(RestStatus.OK));
+ when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class),
+ anyObject(), anyVararg())).thenReturn(response);
+ assertTrue(restHighLevelClient.ping(headers));
+ verify(restClient).performRequest(eq("HEAD"), eq("/"), eq(Collections.emptyMap()),
+ isNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers)));
+ }
+
+ public void testPing404NotFound() throws IOException {
+ Header[] headers = randomHeaders(random(), "Header");
+ Response response = mock(Response.class);
+ when(response.getStatusLine()).thenReturn(newStatusLine(RestStatus.NOT_FOUND));
+ when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class),
+ anyObject(), anyVararg())).thenReturn(response);
+ assertFalse(restHighLevelClient.ping(headers));
+ verify(restClient).performRequest(eq("HEAD"), eq("/"), eq(Collections.emptyMap()),
+ isNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers)));
+ }
+
+ public void testPingSocketTimeout() throws IOException {
+ Header[] headers = randomHeaders(random(), "Header");
+ when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class),
+ anyObject(), anyVararg())).thenThrow(new SocketTimeoutException());
+ expectThrows(SocketTimeoutException.class, () -> restHighLevelClient.ping(headers));
+ verify(restClient).performRequest(eq("HEAD"), eq("/"), eq(Collections.emptyMap()),
+ isNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers)));
+ }
+
+ public void testInfo() throws IOException {
+ Header[] headers = randomHeaders(random(), "Header");
+ MainResponse testInfo = new MainResponse("nodeName", Version.CURRENT, new ClusterName("clusterName"), "clusterUuid",
+ Build.CURRENT, true);
+ mockResponse(testInfo);
+ MainResponse receivedInfo = restHighLevelClient.info(headers);
+ assertEquals(testInfo, receivedInfo);
+ verify(restClient).performRequest(eq("GET"), eq("/"), eq(Collections.emptyMap()),
+ isNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers)));
+ }
+
+ public void testSearchScroll() throws IOException {
+ Header[] headers = randomHeaders(random(), "Header");
+ SearchResponse mockSearchResponse = new SearchResponse(new SearchResponseSections(SearchHits.empty(), InternalAggregations.EMPTY,
+ null, false, false, null, 1), randomAlphaOfLengthBetween(5, 10), 5, 5, 0, 100, new ShardSearchFailure[0]);
+ mockResponse(mockSearchResponse);
+ SearchResponse searchResponse = restHighLevelClient.searchScroll(new SearchScrollRequest(randomAlphaOfLengthBetween(5, 10)),
+ headers);
+ assertEquals(mockSearchResponse.getScrollId(), searchResponse.getScrollId());
+ assertEquals(0, searchResponse.getHits().totalHits);
+ assertEquals(5, searchResponse.getTotalShards());
+ assertEquals(5, searchResponse.getSuccessfulShards());
+ assertEquals(100, searchResponse.getTook().getMillis());
+ verify(restClient).performRequest(eq("GET"), eq("/_search/scroll"), eq(Collections.emptyMap()),
+ isNotNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers)));
+ }
+
+ public void testClearScroll() throws IOException {
+ Header[] headers = randomHeaders(random(), "Header");
+ ClearScrollResponse mockClearScrollResponse = new ClearScrollResponse(randomBoolean(), randomIntBetween(0, Integer.MAX_VALUE));
+ mockResponse(mockClearScrollResponse);
+ ClearScrollRequest clearScrollRequest = new ClearScrollRequest();
+ clearScrollRequest.addScrollId(randomAlphaOfLengthBetween(5, 10));
+ ClearScrollResponse clearScrollResponse = restHighLevelClient.clearScroll(clearScrollRequest, headers);
+ assertEquals(mockClearScrollResponse.isSucceeded(), clearScrollResponse.isSucceeded());
+ assertEquals(mockClearScrollResponse.getNumFreed(), clearScrollResponse.getNumFreed());
+ verify(restClient).performRequest(eq("DELETE"), eq("/_search/scroll"), eq(Collections.emptyMap()),
+ isNotNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers)));
+ }
+
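+    // serializes the given object to the request body content type and stubs the low-level client to return it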
+ private void mockResponse(ToXContent toXContent) throws IOException {
+ Response response = mock(Response.class);
+ ContentType contentType = ContentType.parse(Request.REQUEST_BODY_CONTENT_TYPE.mediaType());
+ String requestBody = toXContent(toXContent, Request.REQUEST_BODY_CONTENT_TYPE, false).utf8ToString();
+ when(response.getEntity()).thenReturn(new NStringEntity(requestBody, contentType));
+ when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class),
+ anyObject(), anyVararg())).thenReturn(response);
+ }
+
+ public void testRequestValidation() {
+ ActionRequestValidationException validationException = new ActionRequestValidationException();
+ validationException.addValidationError("validation error");
+ ActionRequest request = new ActionRequest() {
+ @Override
+ public ActionRequestValidationException validate() {
+ return validationException;
+ }
+ };
+
+ {
+ ActionRequestValidationException actualException = expectThrows(ActionRequestValidationException.class,
+ () -> restHighLevelClient.performRequest(request, null, null, null));
+ assertSame(validationException, actualException);
+ }
+ {
+ TrackingActionListener trackingActionListener = new TrackingActionListener();
+ restHighLevelClient.performRequestAsync(request, null, null, trackingActionListener, null);
+ assertSame(validationException, trackingActionListener.exception.get());
+ }
+ }
+
+ public void testParseEntity() throws IOException {
+ {
+ IllegalStateException ise = expectThrows(IllegalStateException.class, () -> restHighLevelClient.parseEntity(null, null));
+ assertEquals("Response body expected but not returned", ise.getMessage());
+ }
+ {
+ IllegalStateException ise = expectThrows(IllegalStateException.class,
+ () -> restHighLevelClient.parseEntity(new StringEntity("", (ContentType) null), null));
+ assertEquals("Elasticsearch didn't return the [Content-Type] header, unable to parse response body", ise.getMessage());
+ }
+ {
+ StringEntity entity = new StringEntity("", ContentType.APPLICATION_SVG_XML);
+ IllegalStateException ise = expectThrows(IllegalStateException.class, () -> restHighLevelClient.parseEntity(entity, null));
+ assertEquals("Unsupported Content-Type: " + entity.getContentType().getValue(), ise.getMessage());
+ }
+ {
+            CheckedFunction<XContentParser, String, IOException> entityParser = parser -> {
+ assertEquals(XContentParser.Token.START_OBJECT, parser.nextToken());
+ assertEquals(XContentParser.Token.FIELD_NAME, parser.nextToken());
+ assertTrue(parser.nextToken().isValue());
+ String value = parser.text();
+ assertEquals(XContentParser.Token.END_OBJECT, parser.nextToken());
+ return value;
+ };
+ HttpEntity jsonEntity = new StringEntity("{\"field\":\"value\"}", ContentType.APPLICATION_JSON);
+ assertEquals("value", restHighLevelClient.parseEntity(jsonEntity, entityParser));
+ HttpEntity yamlEntity = new StringEntity("---\nfield: value\n", ContentType.create("application/yaml"));
+ assertEquals("value", restHighLevelClient.parseEntity(yamlEntity, entityParser));
+ HttpEntity smileEntity = createBinaryEntity(SmileXContent.contentBuilder(), ContentType.create("application/smile"));
+ assertEquals("value", restHighLevelClient.parseEntity(smileEntity, entityParser));
+ HttpEntity cborEntity = createBinaryEntity(CborXContent.contentBuilder(), ContentType.create("application/cbor"));
+ assertEquals("value", restHighLevelClient.parseEntity(cborEntity, entityParser));
+ }
+ }
+
+ private static HttpEntity createBinaryEntity(XContentBuilder xContentBuilder, ContentType contentType) throws IOException {
+ try (XContentBuilder builder = xContentBuilder) {
+ builder.startObject();
+ builder.field("field", "value");
+ builder.endObject();
+ return new ByteArrayEntity(builder.bytes().toBytesRef().bytes, contentType);
+ }
+ }
+
+ public void testConvertExistsResponse() {
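+        // bias the random status towards OK so the exists=true branch is hit in roughly half of the runs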
+ RestStatus restStatus = randomBoolean() ? RestStatus.OK : randomFrom(RestStatus.values());
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus));
+ Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ boolean result = RestHighLevelClient.convertExistsResponse(response);
+ assertEquals(restStatus == RestStatus.OK, result);
+ }
+
+ public void testParseResponseException() throws IOException {
+ {
+ RestStatus restStatus = randomFrom(RestStatus.values());
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus));
+ Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(response);
+ ElasticsearchException elasticsearchException = restHighLevelClient.parseResponseException(responseException);
+ assertEquals(responseException.getMessage(), elasticsearchException.getMessage());
+ assertEquals(restStatus, elasticsearchException.status());
+ assertSame(responseException, elasticsearchException.getCause());
+ }
+ {
+ RestStatus restStatus = randomFrom(RestStatus.values());
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus));
+ httpResponse.setEntity(new StringEntity("{\"error\":\"test error message\",\"status\":" + restStatus.getStatus() + "}",
+ ContentType.APPLICATION_JSON));
+ Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(response);
+ ElasticsearchException elasticsearchException = restHighLevelClient.parseResponseException(responseException);
+ assertEquals("Elasticsearch exception [type=exception, reason=test error message]", elasticsearchException.getMessage());
+ assertEquals(restStatus, elasticsearchException.status());
+ assertSame(responseException, elasticsearchException.getSuppressed()[0]);
+ }
+ {
+ RestStatus restStatus = randomFrom(RestStatus.values());
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus));
+ httpResponse.setEntity(new StringEntity("{\"error\":", ContentType.APPLICATION_JSON));
+ Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(response);
+ ElasticsearchException elasticsearchException = restHighLevelClient.parseResponseException(responseException);
+ assertEquals("Unable to parse response body", elasticsearchException.getMessage());
+ assertEquals(restStatus, elasticsearchException.status());
+ assertSame(responseException, elasticsearchException.getCause());
+ assertThat(elasticsearchException.getSuppressed()[0], instanceOf(IOException.class));
+ }
+ {
+ RestStatus restStatus = randomFrom(RestStatus.values());
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus));
+ httpResponse.setEntity(new StringEntity("{\"status\":" + restStatus.getStatus() + "}", ContentType.APPLICATION_JSON));
+ Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(response);
+ ElasticsearchException elasticsearchException = restHighLevelClient.parseResponseException(responseException);
+ assertEquals("Unable to parse response body", elasticsearchException.getMessage());
+ assertEquals(restStatus, elasticsearchException.status());
+ assertSame(responseException, elasticsearchException.getCause());
+ assertThat(elasticsearchException.getSuppressed()[0], instanceOf(IllegalStateException.class));
+ }
+ }
+
+ public void testPerformRequestOnSuccess() throws IOException {
+ MainRequest mainRequest = new MainRequest();
+        CheckedFunction<MainRequest, Request, IOException> requestConverter = request ->
+ new Request("GET", "/", Collections.emptyMap(), null);
+ RestStatus restStatus = randomFrom(RestStatus.values());
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus));
+ Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class),
+ anyObject(), anyVararg())).thenReturn(mockResponse);
+ {
+ Integer result = restHighLevelClient.performRequest(mainRequest, requestConverter,
+ response -> response.getStatusLine().getStatusCode(), Collections.emptySet());
+ assertEquals(restStatus.getStatus(), result.intValue());
+ }
+ {
+ IOException ioe = expectThrows(IOException.class, () -> restHighLevelClient.performRequest(mainRequest,
+ requestConverter, response -> {throw new IllegalStateException();}, Collections.emptySet()));
+ assertEquals("Unable to parse response body for Response{requestLine=GET / http/1.1, host=http://localhost:9200, " +
+ "response=http/1.1 " + restStatus.getStatus() + " " + restStatus.name() + "}", ioe.getMessage());
+ }
+ }
+
+ public void testPerformRequestOnResponseExceptionWithoutEntity() throws IOException {
+ MainRequest mainRequest = new MainRequest();
+        CheckedFunction<MainRequest, Request, IOException> requestConverter = request ->
+ new Request("GET", "/", Collections.emptyMap(), null);
+ RestStatus restStatus = randomFrom(RestStatus.values());
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus));
+ Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(mockResponse);
+ when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class),
+ anyObject(), anyVararg())).thenThrow(responseException);
+ ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class,
+ () -> restHighLevelClient.performRequest(mainRequest, requestConverter,
+ response -> response.getStatusLine().getStatusCode(), Collections.emptySet()));
+ assertEquals(responseException.getMessage(), elasticsearchException.getMessage());
+ assertEquals(restStatus, elasticsearchException.status());
+ assertSame(responseException, elasticsearchException.getCause());
+ }
+
+ public void testPerformRequestOnResponseExceptionWithEntity() throws IOException {
+ MainRequest mainRequest = new MainRequest();
+        CheckedFunction<MainRequest, Request, IOException> requestConverter = request ->
+ new Request("GET", "/", Collections.emptyMap(), null);
+ RestStatus restStatus = randomFrom(RestStatus.values());
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus));
+ httpResponse.setEntity(new StringEntity("{\"error\":\"test error message\",\"status\":" + restStatus.getStatus() + "}",
+ ContentType.APPLICATION_JSON));
+ Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(mockResponse);
+ when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class),
+ anyObject(), anyVararg())).thenThrow(responseException);
+ ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class,
+ () -> restHighLevelClient.performRequest(mainRequest, requestConverter,
+ response -> response.getStatusLine().getStatusCode(), Collections.emptySet()));
+ assertEquals("Elasticsearch exception [type=exception, reason=test error message]", elasticsearchException.getMessage());
+ assertEquals(restStatus, elasticsearchException.status());
+ assertSame(responseException, elasticsearchException.getSuppressed()[0]);
+ }
+
+ public void testPerformRequestOnResponseExceptionWithBrokenEntity() throws IOException {
+ MainRequest mainRequest = new MainRequest();
+        CheckedFunction<MainRequest, Request, IOException> requestConverter = request ->
+ new Request("GET", "/", Collections.emptyMap(), null);
+ RestStatus restStatus = randomFrom(RestStatus.values());
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus));
+ httpResponse.setEntity(new StringEntity("{\"error\":", ContentType.APPLICATION_JSON));
+ Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(mockResponse);
+ when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class),
+ anyObject(), anyVararg())).thenThrow(responseException);
+ ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class,
+ () -> restHighLevelClient.performRequest(mainRequest, requestConverter,
+ response -> response.getStatusLine().getStatusCode(), Collections.emptySet()));
+ assertEquals("Unable to parse response body", elasticsearchException.getMessage());
+ assertEquals(restStatus, elasticsearchException.status());
+ assertSame(responseException, elasticsearchException.getCause());
+ assertThat(elasticsearchException.getSuppressed()[0], instanceOf(JsonParseException.class));
+ }
+
+ public void testPerformRequestOnResponseExceptionWithBrokenEntity2() throws IOException {
+ MainRequest mainRequest = new MainRequest();
+        CheckedFunction<MainRequest, Request, IOException> requestConverter = request ->
+ new Request("GET", "/", Collections.emptyMap(), null);
+ RestStatus restStatus = randomFrom(RestStatus.values());
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus));
+ httpResponse.setEntity(new StringEntity("{\"status\":" + restStatus.getStatus() + "}", ContentType.APPLICATION_JSON));
+ Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(mockResponse);
+ when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class),
+ anyObject(), anyVararg())).thenThrow(responseException);
+ ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class,
+ () -> restHighLevelClient.performRequest(mainRequest, requestConverter,
+ response -> response.getStatusLine().getStatusCode(), Collections.emptySet()));
+ assertEquals("Unable to parse response body", elasticsearchException.getMessage());
+ assertEquals(restStatus, elasticsearchException.status());
+ assertSame(responseException, elasticsearchException.getCause());
+ assertThat(elasticsearchException.getSuppressed()[0], instanceOf(IllegalStateException.class));
+ }
+
+ public void testPerformRequestOnResponseExceptionWithIgnores() throws IOException {
+ MainRequest mainRequest = new MainRequest();
+        CheckedFunction<MainRequest, Request, IOException> requestConverter = request ->
+ new Request("GET", "/", Collections.emptyMap(), null);
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND));
+ Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(mockResponse);
+ when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class),
+ anyObject(), anyVararg())).thenThrow(responseException);
+ //although we got an exception, we turn it into a successful response because the status code was provided among ignores
+ assertEquals(Integer.valueOf(404), restHighLevelClient.performRequest(mainRequest, requestConverter,
+ response -> response.getStatusLine().getStatusCode(), Collections.singleton(404)));
+ }
+
+ public void testPerformRequestOnResponseExceptionWithIgnoresErrorNoBody() throws IOException {
+ MainRequest mainRequest = new MainRequest();
+        CheckedFunction<MainRequest, Request, IOException> requestConverter = request ->
+ new Request("GET", "/", Collections.emptyMap(), null);
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND));
+ Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(mockResponse);
+ when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class),
+ anyObject(), anyVararg())).thenThrow(responseException);
+ ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class,
+ () -> restHighLevelClient.performRequest(mainRequest, requestConverter,
+ response -> {throw new IllegalStateException();}, Collections.singleton(404)));
+ assertEquals(RestStatus.NOT_FOUND, elasticsearchException.status());
+ assertSame(responseException, elasticsearchException.getCause());
+ assertEquals(responseException.getMessage(), elasticsearchException.getMessage());
+ }
+
+ public void testPerformRequestOnResponseExceptionWithIgnoresErrorValidBody() throws IOException {
+ MainRequest mainRequest = new MainRequest();
+        CheckedFunction<MainRequest, Request, IOException> requestConverter = request ->
+ new Request("GET", "/", Collections.emptyMap(), null);
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND));
+ httpResponse.setEntity(new StringEntity("{\"error\":\"test error message\",\"status\":404}",
+ ContentType.APPLICATION_JSON));
+ Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(mockResponse);
+ when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class),
+ anyObject(), anyVararg())).thenThrow(responseException);
+ ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class,
+ () -> restHighLevelClient.performRequest(mainRequest, requestConverter,
+ response -> {throw new IllegalStateException();}, Collections.singleton(404)));
+ assertEquals(RestStatus.NOT_FOUND, elasticsearchException.status());
+ assertSame(responseException, elasticsearchException.getSuppressed()[0]);
+ assertEquals("Elasticsearch exception [type=exception, reason=test error message]", elasticsearchException.getMessage());
+ }
+
+ public void testWrapResponseListenerOnSuccess() {
+ {
+ TrackingActionListener trackingActionListener = new TrackingActionListener();
+ ResponseListener responseListener = restHighLevelClient.wrapResponseListener(
+ response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet());
+ RestStatus restStatus = randomFrom(RestStatus.values());
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus));
+ responseListener.onSuccess(new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse));
+ assertNull(trackingActionListener.exception.get());
+ assertEquals(restStatus.getStatus(), trackingActionListener.statusCode.get());
+ }
+ {
+ TrackingActionListener trackingActionListener = new TrackingActionListener();
+ ResponseListener responseListener = restHighLevelClient.wrapResponseListener(
+ response -> {throw new IllegalStateException();}, trackingActionListener, Collections.emptySet());
+ RestStatus restStatus = randomFrom(RestStatus.values());
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus));
+ responseListener.onSuccess(new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse));
+ assertThat(trackingActionListener.exception.get(), instanceOf(IOException.class));
+ IOException ioe = (IOException) trackingActionListener.exception.get();
+ assertEquals("Unable to parse response body for Response{requestLine=GET / http/1.1, host=http://localhost:9200, " +
+ "response=http/1.1 " + restStatus.getStatus() + " " + restStatus.name() + "}", ioe.getMessage());
+ assertThat(ioe.getCause(), instanceOf(IllegalStateException.class));
+ }
+ }
+
+ public void testWrapResponseListenerOnException() {
+ TrackingActionListener trackingActionListener = new TrackingActionListener();
+ ResponseListener responseListener = restHighLevelClient.wrapResponseListener(
+ response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet());
+ IllegalStateException exception = new IllegalStateException();
+ responseListener.onFailure(exception);
+ assertSame(exception, trackingActionListener.exception.get());
+ }
+
+ public void testWrapResponseListenerOnResponseExceptionWithoutEntity() throws IOException {
+ TrackingActionListener trackingActionListener = new TrackingActionListener();
+ ResponseListener responseListener = restHighLevelClient.wrapResponseListener(
+ response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet());
+ RestStatus restStatus = randomFrom(RestStatus.values());
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus));
+ Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(response);
+ responseListener.onFailure(responseException);
+ assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class));
+ ElasticsearchException elasticsearchException = (ElasticsearchException) trackingActionListener.exception.get();
+ assertEquals(responseException.getMessage(), elasticsearchException.getMessage());
+ assertEquals(restStatus, elasticsearchException.status());
+ assertSame(responseException, elasticsearchException.getCause());
+ }
+
+ public void testWrapResponseListenerOnResponseExceptionWithEntity() throws IOException {
+ TrackingActionListener trackingActionListener = new TrackingActionListener();
+ ResponseListener responseListener = restHighLevelClient.wrapResponseListener(
+ response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet());
+ RestStatus restStatus = randomFrom(RestStatus.values());
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus));
+ httpResponse.setEntity(new StringEntity("{\"error\":\"test error message\",\"status\":" + restStatus.getStatus() + "}",
+ ContentType.APPLICATION_JSON));
+ Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(response);
+ responseListener.onFailure(responseException);
+ assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class));
+ ElasticsearchException elasticsearchException = (ElasticsearchException)trackingActionListener.exception.get();
+ assertEquals("Elasticsearch exception [type=exception, reason=test error message]", elasticsearchException.getMessage());
+ assertEquals(restStatus, elasticsearchException.status());
+ assertSame(responseException, elasticsearchException.getSuppressed()[0]);
+ }
+
+ public void testWrapResponseListenerOnResponseExceptionWithBrokenEntity() throws IOException {
+ {
+ TrackingActionListener trackingActionListener = new TrackingActionListener();
+ ResponseListener responseListener = restHighLevelClient.wrapResponseListener(
+ response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet());
+ RestStatus restStatus = randomFrom(RestStatus.values());
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus));
+ httpResponse.setEntity(new StringEntity("{\"error\":", ContentType.APPLICATION_JSON));
+ Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(response);
+ responseListener.onFailure(responseException);
+ assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class));
+ ElasticsearchException elasticsearchException = (ElasticsearchException)trackingActionListener.exception.get();
+ assertEquals("Unable to parse response body", elasticsearchException.getMessage());
+ assertEquals(restStatus, elasticsearchException.status());
+ assertSame(responseException, elasticsearchException.getCause());
+ assertThat(elasticsearchException.getSuppressed()[0], instanceOf(JsonParseException.class));
+ }
+ {
+ TrackingActionListener trackingActionListener = new TrackingActionListener();
+ ResponseListener responseListener = restHighLevelClient.wrapResponseListener(
+ response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet());
+ RestStatus restStatus = randomFrom(RestStatus.values());
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus));
+ httpResponse.setEntity(new StringEntity("{\"status\":" + restStatus.getStatus() + "}", ContentType.APPLICATION_JSON));
+ Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(response);
+ responseListener.onFailure(responseException);
+ assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class));
+ ElasticsearchException elasticsearchException = (ElasticsearchException)trackingActionListener.exception.get();
+ assertEquals("Unable to parse response body", elasticsearchException.getMessage());
+ assertEquals(restStatus, elasticsearchException.status());
+ assertSame(responseException, elasticsearchException.getCause());
+ assertThat(elasticsearchException.getSuppressed()[0], instanceOf(IllegalStateException.class));
+ }
+ }
+
+ public void testWrapResponseListenerOnResponseExceptionWithIgnores() throws IOException {
+ TrackingActionListener trackingActionListener = new TrackingActionListener();
+ ResponseListener responseListener = restHighLevelClient.wrapResponseListener(
+ response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.singleton(404));
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND));
+ Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(response);
+ responseListener.onFailure(responseException);
+ //although we got an exception, we turn it into a successful response because the status code was provided among ignores
+ assertNull(trackingActionListener.exception.get());
+ assertEquals(404, trackingActionListener.statusCode.get());
+ }
+
+ public void testWrapResponseListenerOnResponseExceptionWithIgnoresErrorNoBody() throws IOException {
+ TrackingActionListener trackingActionListener = new TrackingActionListener();
+        //response parsing throws an exception while handling ignores. this mirrors GetResponse#fromXContent failing when it
+        //tries to parse a 404 response that contains an error rather than a valid document not found response.
+ ResponseListener responseListener = restHighLevelClient.wrapResponseListener(
+ response -> { throw new IllegalStateException(); }, trackingActionListener, Collections.singleton(404));
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND));
+ Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(response);
+ responseListener.onFailure(responseException);
+ assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class));
+ ElasticsearchException elasticsearchException = (ElasticsearchException)trackingActionListener.exception.get();
+ assertEquals(RestStatus.NOT_FOUND, elasticsearchException.status());
+ assertSame(responseException, elasticsearchException.getCause());
+ assertEquals(responseException.getMessage(), elasticsearchException.getMessage());
+ }
+
+ public void testWrapResponseListenerOnResponseExceptionWithIgnoresErrorValidBody() throws IOException {
+ TrackingActionListener trackingActionListener = new TrackingActionListener();
+        //response parsing throws an exception while handling ignores. this mirrors GetResponse#fromXContent failing when it
+        //tries to parse a 404 response that contains an error rather than a valid document not found response.
+ ResponseListener responseListener = restHighLevelClient.wrapResponseListener(
+ response -> { throw new IllegalStateException(); }, trackingActionListener, Collections.singleton(404));
+ HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND));
+ httpResponse.setEntity(new StringEntity("{\"error\":\"test error message\",\"status\":404}",
+ ContentType.APPLICATION_JSON));
+ Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse);
+ ResponseException responseException = new ResponseException(response);
+ responseListener.onFailure(responseException);
+ assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class));
+ ElasticsearchException elasticsearchException = (ElasticsearchException)trackingActionListener.exception.get();
+ assertEquals(RestStatus.NOT_FOUND, elasticsearchException.status());
+ assertSame(responseException, elasticsearchException.getSuppressed()[0]);
+ assertEquals("Elasticsearch exception [type=exception, reason=test error message]", elasticsearchException.getMessage());
+ }
+
+ public void testDefaultNamedXContents() {
+        List<NamedXContentRegistry.Entry> namedXContents = RestHighLevelClient.getDefaultNamedXContents();
+ int expectedInternalAggregations = InternalAggregationTestCase.getDefaultNamedXContents().size();
+ int expectedSuggestions = 3;
+ assertEquals(expectedInternalAggregations + expectedSuggestions, namedXContents.size());
+        Map<Class<?>, Integer> categories = new HashMap<>();
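+        // count how many parsers were registered for each category class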
+ for (NamedXContentRegistry.Entry namedXContent : namedXContents) {
+ Integer counter = categories.putIfAbsent(namedXContent.categoryClass, 1);
+ if (counter != null) {
+ categories.put(namedXContent.categoryClass, counter + 1);
+ }
+ }
+ assertEquals(2, categories.size());
+ assertEquals(expectedInternalAggregations, categories.get(Aggregation.class).intValue());
+ assertEquals(expectedSuggestions, categories.get(Suggest.Suggestion.class).intValue());
+ }
+
+ public void testProvidedNamedXContents() {
+        List<NamedXContentRegistry.Entry> namedXContents = RestHighLevelClient.getProvidedNamedXContents();
+ assertEquals(2, namedXContents.size());
+        Map<Class<?>, Integer> categories = new HashMap<>();
+        List<String> names = new ArrayList<>();
+ for (NamedXContentRegistry.Entry namedXContent : namedXContents) {
+ names.add(namedXContent.name.getPreferredName());
+ Integer counter = categories.putIfAbsent(namedXContent.categoryClass, 1);
+ if (counter != null) {
+ categories.put(namedXContent.categoryClass, counter + 1);
+ }
+ }
+ assertEquals(1, categories.size());
+ assertEquals(Integer.valueOf(2), categories.get(Aggregation.class));
+ assertTrue(names.contains(ChildrenAggregationBuilder.NAME));
+ assertTrue(names.contains(MatrixStatsAggregationBuilder.NAME));
+ }
+
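+    // records the first status code or exception it receives and fails the test if it is notified more than once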
+    private static class TrackingActionListener implements ActionListener<Integer> {
+ private final AtomicInteger statusCode = new AtomicInteger(-1);
+        private final AtomicReference<Exception> exception = new AtomicReference<>();
+
+ @Override
+ public void onResponse(Integer statusCode) {
+ assertTrue(this.statusCode.compareAndSet(-1, statusCode));
+ }
+
+ @Override
+ public void onFailure(Exception e) {
+ assertTrue(exception.compareAndSet(null, e));
+ }
+ }
+
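+    // Mockito vararg matcher that compares the headers passed to performRequest against the expected array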
+    private static class HeadersVarargMatcher extends ArgumentMatcher<Header[]> implements VarargMatcher {
+ private Header[] expectedHeaders;
+
+ HeadersVarargMatcher(Header... expectedHeaders) {
+ this.expectedHeaders = expectedHeaders;
+ }
+
+ @Override
+ public boolean matches(Object varargArgument) {
+ if (varargArgument instanceof Header[]) {
+ Header[] actualHeaders = (Header[]) varargArgument;
+ return new ArrayEquals(expectedHeaders).matches(actualHeaders);
+ }
+ return false;
+ }
+ }
+
+ private static StatusLine newStatusLine(RestStatus restStatus) {
+ return new BasicStatusLine(HTTP_PROTOCOL, restStatus.getStatus(), restStatus.name());
+ }
+}
diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/SearchIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/SearchIT.java
new file mode 100644
index 0000000000000..6c53d191dfc98
--- /dev/null
+++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/SearchIT.java
@@ -0,0 +1,464 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.client;
+
+import org.apache.http.HttpEntity;
+import org.apache.http.entity.ContentType;
+import org.apache.http.entity.StringEntity;
+import org.apache.http.nio.entity.NStringEntity;
+import org.elasticsearch.ElasticsearchException;
+import org.elasticsearch.ElasticsearchStatusException;
+import org.elasticsearch.action.search.ClearScrollRequest;
+import org.elasticsearch.action.search.ClearScrollResponse;
+import org.elasticsearch.action.search.SearchRequest;
+import org.elasticsearch.action.search.SearchResponse;
+import org.elasticsearch.action.search.SearchScrollRequest;
+import org.elasticsearch.common.unit.TimeValue;
+import org.elasticsearch.common.xcontent.XContentBuilder;
+import org.elasticsearch.index.query.MatchQueryBuilder;
+import org.elasticsearch.join.aggregations.Children;
+import org.elasticsearch.join.aggregations.ChildrenAggregationBuilder;
+import org.elasticsearch.rest.RestStatus;
+import org.elasticsearch.search.SearchHit;
+import org.elasticsearch.search.aggregations.bucket.range.Range;
+import org.elasticsearch.search.aggregations.bucket.range.RangeAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.terms.Terms;
+import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;
+import org.elasticsearch.search.aggregations.matrix.stats.MatrixStats;
+import org.elasticsearch.search.aggregations.matrix.stats.MatrixStatsAggregationBuilder;
+import org.elasticsearch.search.aggregations.support.ValueType;
+import org.elasticsearch.search.builder.SearchSourceBuilder;
+import org.elasticsearch.search.sort.SortOrder;
+import org.elasticsearch.search.suggest.Suggest;
+import org.elasticsearch.search.suggest.SuggestBuilder;
+import org.elasticsearch.search.suggest.phrase.PhraseSuggestionBuilder;
+import org.junit.Before;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Collections;
+
+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
+import static org.hamcrest.Matchers.both;
+import static org.hamcrest.Matchers.containsString;
+import static org.hamcrest.Matchers.either;
+import static org.hamcrest.Matchers.equalTo;
+import static org.hamcrest.Matchers.greaterThan;
+import static org.hamcrest.Matchers.greaterThanOrEqualTo;
+import static org.hamcrest.Matchers.instanceOf;
+import static org.hamcrest.Matchers.lessThan;
+
+public class SearchIT extends ESRestHighLevelClientTestCase {
+
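+    // index five documents across two types so the queries and aggregations below see deterministic counts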
+ @Before
+ public void indexDocuments() throws IOException {
+ StringEntity doc1 = new StringEntity("{\"type\":\"type1\", \"num\":10, \"num2\":50}", ContentType.APPLICATION_JSON);
+ client().performRequest("PUT", "/index/type/1", Collections.emptyMap(), doc1);
+ StringEntity doc2 = new StringEntity("{\"type\":\"type1\", \"num\":20, \"num2\":40}", ContentType.APPLICATION_JSON);
+ client().performRequest("PUT", "/index/type/2", Collections.emptyMap(), doc2);
+ StringEntity doc3 = new StringEntity("{\"type\":\"type1\", \"num\":50, \"num2\":35}", ContentType.APPLICATION_JSON);
+ client().performRequest("PUT", "/index/type/3", Collections.emptyMap(), doc3);
+ StringEntity doc4 = new StringEntity("{\"type\":\"type2\", \"num\":100, \"num2\":10}", ContentType.APPLICATION_JSON);
+ client().performRequest("PUT", "/index/type/4", Collections.emptyMap(), doc4);
+ StringEntity doc5 = new StringEntity("{\"type\":\"type2\", \"num\":100, \"num2\":10}", ContentType.APPLICATION_JSON);
+ client().performRequest("PUT", "/index/type/5", Collections.emptyMap(), doc5);
+ client().performRequest("POST", "/index/_refresh");
+ }
+
+ public void testSearchNoQuery() throws IOException {
+ SearchRequest searchRequest = new SearchRequest();
+ SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync);
+ assertSearchHeader(searchResponse);
+ assertNull(searchResponse.getAggregations());
+ assertNull(searchResponse.getSuggest());
+ assertEquals(Collections.emptyMap(), searchResponse.getProfileResults());
+ assertEquals(5, searchResponse.getHits().totalHits);
+ assertEquals(5, searchResponse.getHits().getHits().length);
+ for (SearchHit searchHit : searchResponse.getHits().getHits()) {
+ assertEquals("index", searchHit.getIndex());
+ assertEquals("type", searchHit.getType());
+ assertThat(Integer.valueOf(searchHit.getId()), both(greaterThan(0)).and(lessThan(6)));
+ assertEquals(1.0f, searchHit.getScore(), 0);
+ assertEquals(-1L, searchHit.getVersion());
+ assertNotNull(searchHit.getSourceAsMap());
+ assertEquals(3, searchHit.getSourceAsMap().size());
+ assertTrue(searchHit.getSourceAsMap().containsKey("type"));
+ assertTrue(searchHit.getSourceAsMap().containsKey("num"));
+ assertTrue(searchHit.getSourceAsMap().containsKey("num2"));
+ }
+ }
+
+ public void testSearchMatchQuery() throws IOException {
+ SearchRequest searchRequest = new SearchRequest();
+ searchRequest.source(new SearchSourceBuilder().query(new MatchQueryBuilder("num", 10)));
+ SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync);
+ assertSearchHeader(searchResponse);
+ assertNull(searchResponse.getAggregations());
+ assertNull(searchResponse.getSuggest());
+ assertEquals(Collections.emptyMap(), searchResponse.getProfileResults());
+ assertEquals(1, searchResponse.getHits().totalHits);
+ assertEquals(1, searchResponse.getHits().getHits().length);
+ assertThat(searchResponse.getHits().getMaxScore(), greaterThan(0f));
+ SearchHit searchHit = searchResponse.getHits().getHits()[0];
+ assertEquals("index", searchHit.getIndex());
+ assertEquals("type", searchHit.getType());
+ assertEquals("1", searchHit.getId());
+ assertThat(searchHit.getScore(), greaterThan(0f));
+ assertEquals(-1L, searchHit.getVersion());
+ assertNotNull(searchHit.getSourceAsMap());
+ assertEquals(3, searchHit.getSourceAsMap().size());
+ assertEquals("type1", searchHit.getSourceAsMap().get("type"));
+ assertEquals(50, searchHit.getSourceAsMap().get("num2"));
+ }
+
+ public void testSearchWithTermsAgg() throws IOException {
+ SearchRequest searchRequest = new SearchRequest();
+ SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
+ searchSourceBuilder.aggregation(new TermsAggregationBuilder("agg1", ValueType.STRING).field("type.keyword"));
+ searchSourceBuilder.size(0);
+ searchRequest.source(searchSourceBuilder);
+ SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync);
+ assertSearchHeader(searchResponse);
+ assertNull(searchResponse.getSuggest());
+ assertEquals(Collections.emptyMap(), searchResponse.getProfileResults());
+ assertEquals(0, searchResponse.getHits().getHits().length);
+ assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f);
+ Terms termsAgg = searchResponse.getAggregations().get("agg1");
+ assertEquals("agg1", termsAgg.getName());
+ assertEquals(2, termsAgg.getBuckets().size());
+ Terms.Bucket type1 = termsAgg.getBucketByKey("type1");
+ assertEquals(3, type1.getDocCount());
+ assertEquals(0, type1.getAggregations().asList().size());
+ Terms.Bucket type2 = termsAgg.getBucketByKey("type2");
+ assertEquals(2, type2.getDocCount());
+ assertEquals(0, type2.getAggregations().asList().size());
+ }
+
+ public void testSearchWithRangeAgg() throws IOException {
+ {
+ SearchRequest searchRequest = new SearchRequest();
+ SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
+ searchSourceBuilder.aggregation(new RangeAggregationBuilder("agg1").field("num"));
+ searchSourceBuilder.size(0);
+ searchRequest.source(searchSourceBuilder);
+
+ ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class,
+ () -> execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync));
+ assertEquals(RestStatus.BAD_REQUEST, exception.status());
+ }
+
+ SearchRequest searchRequest = new SearchRequest();
+ SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
+ searchSourceBuilder.aggregation(new RangeAggregationBuilder("agg1").field("num")
+ .addRange("first", 0, 30).addRange("second", 31, 200));
+ searchSourceBuilder.size(0);
+ searchRequest.source(searchSourceBuilder);
+ SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync);
+ assertSearchHeader(searchResponse);
+ assertNull(searchResponse.getSuggest());
+ assertEquals(Collections.emptyMap(), searchResponse.getProfileResults());
+ assertEquals(5, searchResponse.getHits().totalHits);
+ assertEquals(0, searchResponse.getHits().getHits().length);
+ assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f);
+ Range rangeAgg = searchResponse.getAggregations().get("agg1");
+ assertEquals("agg1", rangeAgg.getName());
+ assertEquals(2, rangeAgg.getBuckets().size());
+ {
+ Range.Bucket bucket = rangeAgg.getBuckets().get(0);
+ assertEquals("first", bucket.getKeyAsString());
+ assertEquals(2, bucket.getDocCount());
+ }
+ {
+ Range.Bucket bucket = rangeAgg.getBuckets().get(1);
+ assertEquals("second", bucket.getKeyAsString());
+ assertEquals(3, bucket.getDocCount());
+ }
+ }
+
+ public void testSearchWithTermsAndRangeAgg() throws IOException {
+ SearchRequest searchRequest = new SearchRequest();
+ SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
+ TermsAggregationBuilder agg = new TermsAggregationBuilder("agg1", ValueType.STRING).field("type.keyword");
+ agg.subAggregation(new RangeAggregationBuilder("subagg").field("num")
+ .addRange("first", 0, 30).addRange("second", 31, 200));
+ searchSourceBuilder.aggregation(agg);
+ searchSourceBuilder.size(0);
+ searchRequest.source(searchSourceBuilder);
+ SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync);
+ assertSearchHeader(searchResponse);
+ assertNull(searchResponse.getSuggest());
+ assertEquals(Collections.emptyMap(), searchResponse.getProfileResults());
+ assertEquals(0, searchResponse.getHits().getHits().length);
+ assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f);
+ Terms termsAgg = searchResponse.getAggregations().get("agg1");
+ assertEquals("agg1", termsAgg.getName());
+ assertEquals(2, termsAgg.getBuckets().size());
+ Terms.Bucket type1 = termsAgg.getBucketByKey("type1");
+ assertEquals(3, type1.getDocCount());
+ assertEquals(1, type1.getAggregations().asList().size());
+ {
+ Range rangeAgg = type1.getAggregations().get("subagg");
+ assertEquals(2, rangeAgg.getBuckets().size());
+ {
+ Range.Bucket bucket = rangeAgg.getBuckets().get(0);
+ assertEquals("first", bucket.getKeyAsString());
+ assertEquals(2, bucket.getDocCount());
+ }
+ {
+ Range.Bucket bucket = rangeAgg.getBuckets().get(1);
+ assertEquals("second", bucket.getKeyAsString());
+ assertEquals(1, bucket.getDocCount());
+ }
+ }
+ Terms.Bucket type2 = termsAgg.getBucketByKey("type2");
+ assertEquals(2, type2.getDocCount());
+ assertEquals(1, type2.getAggregations().asList().size());
+ {
+ Range rangeAgg = type2.getAggregations().get("subagg");
+ assertEquals(2, rangeAgg.getBuckets().size());
+ {
+ Range.Bucket bucket = rangeAgg.getBuckets().get(0);
+ assertEquals("first", bucket.getKeyAsString());
+ assertEquals(0, bucket.getDocCount());
+ }
+ {
+ Range.Bucket bucket = rangeAgg.getBuckets().get(1);
+ assertEquals("second", bucket.getKeyAsString());
+ assertEquals(2, bucket.getDocCount());
+ }
+ }
+ }
+
+ public void testSearchWithMatrixStats() throws IOException {
+ SearchRequest searchRequest = new SearchRequest();
+ SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
+ searchSourceBuilder.aggregation(new MatrixStatsAggregationBuilder("agg1").fields(Arrays.asList("num", "num2")));
+ searchSourceBuilder.size(0);
+ searchRequest.source(searchSourceBuilder);
+ SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync);
+ assertSearchHeader(searchResponse);
+ assertNull(searchResponse.getSuggest());
+ assertEquals(Collections.emptyMap(), searchResponse.getProfileResults());
+ assertEquals(5, searchResponse.getHits().totalHits);
+ assertEquals(0, searchResponse.getHits().getHits().length);
+ assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f);
+ assertEquals(1, searchResponse.getAggregations().asList().size());
+ MatrixStats matrixStats = searchResponse.getAggregations().get("agg1");
+ assertEquals(5, matrixStats.getFieldCount("num"));
+ assertEquals(56d, matrixStats.getMean("num"), 0d);
+ assertEquals(1830d, matrixStats.getVariance("num"), 0d);
+ assertEquals(0.09340198804973057, matrixStats.getSkewness("num"), 0d);
+ assertEquals(1.2741646510794589, matrixStats.getKurtosis("num"), 0d);
+ assertEquals(5, matrixStats.getFieldCount("num2"));
+ assertEquals(29d, matrixStats.getMean("num2"), 0d);
+ assertEquals(330d, matrixStats.getVariance("num2"), 0d);
+ assertEquals(-0.13568039346585542, matrixStats.getSkewness("num2"), 1.0e-16);
+ assertEquals(1.3517561983471074, matrixStats.getKurtosis("num2"), 0d);
+ assertEquals(-767.5, matrixStats.getCovariance("num", "num2"), 0d);
+ assertEquals(-0.9876336291667923, matrixStats.getCorrelation("num", "num2"), 0d);
+ }
+
+ public void testSearchWithParentJoin() throws IOException {
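+        // set up a question/answer parent-child mapping and index one question with two answers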
+ StringEntity parentMapping = new StringEntity("{\n" +
+ " \"mappings\": {\n" +
+ " \"answer\" : {\n" +
+ " \"_parent\" : {\n" +
+ " \"type\" : \"question\"\n" +
+ " }\n" +
+ " }\n" +
+ " },\n" +
+ " \"settings\": {\n" +
+ " \"index.mapping.single_type\": false" +
+ " }\n" +
+ "}", ContentType.APPLICATION_JSON);
+ client().performRequest("PUT", "/child_example", Collections.emptyMap(), parentMapping);
+ StringEntity questionDoc = new StringEntity("{\n" +
+ " \"body\": \"I have Windows 2003 server and i bought a new Windows 2008 server...\",\n" +
+ " \"title\": \"Whats the best way to file transfer my site from server to a newer one?\",\n" +
+ " \"tags\": [\n" +
+ " \"windows-server-2003\",\n" +
+ " \"windows-server-2008\",\n" +
+ " \"file-transfer\"\n" +
+ " ]\n" +
+ "}", ContentType.APPLICATION_JSON);
+ client().performRequest("PUT", "/child_example/question/1", Collections.emptyMap(), questionDoc);
+ StringEntity answerDoc1 = new StringEntity("{\n" +
+ " \"owner\": {\n" +
+ " \"location\": \"Norfolk, United Kingdom\",\n" +
+ " \"display_name\": \"Sam\",\n" +
+ " \"id\": 48\n" +
+ " },\n" +
+ " \"body\": \"
Unfortunately you're pretty much limited to FTP...\",\n" +
+ " \"creation_date\": \"2009-05-04T13:45:37.030\"\n" +
+ "}", ContentType.APPLICATION_JSON);
+ client().performRequest("PUT", "child_example/answer/1", Collections.singletonMap("parent", "1"), answerDoc1);
+ StringEntity answerDoc2 = new StringEntity("{\n" +
+ " \"owner\": {\n" +
+ " \"location\": \"Norfolk, United Kingdom\",\n" +
+ " \"display_name\": \"Troll\",\n" +
+ " \"id\": 49\n" +
+ " },\n" +
+ " \"body\": \"
Use Linux...\",\n" +
+ " \"creation_date\": \"2009-05-05T13:45:37.030\"\n" +
+ "}", ContentType.APPLICATION_JSON);
+ client().performRequest("PUT", "/child_example/answer/2", Collections.singletonMap("parent", "1"), answerDoc2);
+ client().performRequest("POST", "/_refresh");
+
+ TermsAggregationBuilder leafTermAgg = new TermsAggregationBuilder("top-names", ValueType.STRING)
+ .field("owner.display_name.keyword").size(10);
+ ChildrenAggregationBuilder childrenAgg = new ChildrenAggregationBuilder("to-answers", "answer").subAggregation(leafTermAgg);
+ TermsAggregationBuilder termsAgg = new TermsAggregationBuilder("top-tags", ValueType.STRING).field("tags.keyword")
+ .size(10).subAggregation(childrenAgg);
+ SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
+ searchSourceBuilder.size(0).aggregation(termsAgg);
+ SearchRequest searchRequest = new SearchRequest("child_example");
+ searchRequest.source(searchSourceBuilder);
+
+ SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync);
+ assertSearchHeader(searchResponse);
+ assertNull(searchResponse.getSuggest());
+ assertEquals(Collections.emptyMap(), searchResponse.getProfileResults());
+ assertEquals(3, searchResponse.getHits().totalHits);
+ assertEquals(0, searchResponse.getHits().getHits().length);
+ assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f);
+ assertEquals(1, searchResponse.getAggregations().asList().size());
+ Terms terms = searchResponse.getAggregations().get("top-tags");
+ assertEquals(0, terms.getDocCountError());
+ assertEquals(0, terms.getSumOfOtherDocCounts());
+ assertEquals(3, terms.getBuckets().size());
+ for (Terms.Bucket bucket : terms.getBuckets()) {
+ assertThat(bucket.getKeyAsString(),
+ either(equalTo("file-transfer")).or(equalTo("windows-server-2003")).or(equalTo("windows-server-2008")));
+ assertEquals(1, bucket.getDocCount());
+ assertEquals(1, bucket.getAggregations().asList().size());
+ Children children = bucket.getAggregations().get("to-answers");
+ assertEquals(2, children.getDocCount());
+ assertEquals(1, children.getAggregations().asList().size());
+ Terms leafTerms = children.getAggregations().get("top-names");
+ assertEquals(0, leafTerms.getDocCountError());
+ assertEquals(0, leafTerms.getSumOfOtherDocCounts());
+ assertEquals(2, leafTerms.getBuckets().size());
+ assertEquals(2, leafTerms.getBuckets().size());
+ Terms.Bucket sam = leafTerms.getBucketByKey("Sam");
+ assertEquals(1, sam.getDocCount());
+ Terms.Bucket troll = leafTerms.getBucketByKey("Troll");
+ assertEquals(1, troll.getDocCount());
+ }
+ }
+
+ public void testSearchWithSuggest() throws IOException {
+ SearchRequest searchRequest = new SearchRequest();
+ SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
+ searchSourceBuilder.suggest(new SuggestBuilder().addSuggestion("sugg1", new PhraseSuggestionBuilder("type"))
+ .setGlobalText("type"));
+ searchSourceBuilder.size(0);
+ searchRequest.source(searchSourceBuilder);
+
+ SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync);
+ assertSearchHeader(searchResponse);
+ assertNull(searchResponse.getAggregations());
+ assertEquals(Collections.emptyMap(), searchResponse.getProfileResults());
+ assertEquals(0, searchResponse.getHits().totalHits);
+ assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f);
+ assertEquals(0, searchResponse.getHits().getHits().length);
+ assertEquals(1, searchResponse.getSuggest().size());
+
+ Suggest.Suggestion extends Suggest.Suggestion.Entry extends Suggest.Suggestion.Entry.Option>> sugg = searchResponse
+ .getSuggest().iterator().next();
+ assertEquals("sugg1", sugg.getName());
+ for (Suggest.Suggestion.Entry extends Suggest.Suggestion.Entry.Option> options : sugg) {
+ assertEquals("type", options.getText().string());
+ assertEquals(0, options.getOffset());
+ assertEquals(4, options.getLength());
+ assertEquals(2 ,options.getOptions().size());
+ for (Suggest.Suggestion.Entry.Option option : options) {
+ assertThat(option.getScore(), greaterThan(0f));
+ assertThat(option.getText().string(), either(equalTo("type1")).or(equalTo("type2")));
+ }
+ }
+ }
+
+ public void testSearchScroll() throws Exception {
+
+ for (int i = 0; i < 100; i++) {
+ XContentBuilder builder = jsonBuilder().startObject().field("field", i).endObject();
+ HttpEntity entity = new NStringEntity(builder.string(), ContentType.APPLICATION_JSON);
+ client().performRequest("PUT", "test/type1/" + Integer.toString(i), Collections.emptyMap(), entity);
+ }
+ client().performRequest("POST", "/test/_refresh");
+
+ SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder().size(35).sort("field", SortOrder.ASC);
+ SearchRequest searchRequest = new SearchRequest("test").scroll(TimeValue.timeValueMinutes(2)).source(searchSourceBuilder);
+ SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync);
+
+ try {
+ long counter = 0;
+ assertSearchHeader(searchResponse);
+ assertThat(searchResponse.getHits().getTotalHits(), equalTo(100L));
+ assertThat(searchResponse.getHits().getHits().length, equalTo(35));
+ for (SearchHit hit : searchResponse.getHits()) {
+ assertThat(((Number) hit.getSortValues()[0]).longValue(), equalTo(counter++));
+ }
+
+ searchResponse = execute(new SearchScrollRequest(searchResponse.getScrollId()).scroll(TimeValue.timeValueMinutes(2)),
+ highLevelClient()::searchScroll, highLevelClient()::searchScrollAsync);
+
+ assertThat(searchResponse.getHits().getTotalHits(), equalTo(100L));
+ assertThat(searchResponse.getHits().getHits().length, equalTo(35));
+ for (SearchHit hit : searchResponse.getHits()) {
+ assertEquals(counter++, ((Number) hit.getSortValues()[0]).longValue());
+ }
+
+ searchResponse = execute(new SearchScrollRequest(searchResponse.getScrollId()).scroll(TimeValue.timeValueMinutes(2)),
+ highLevelClient()::searchScroll, highLevelClient()::searchScrollAsync);
+
+ assertThat(searchResponse.getHits().getTotalHits(), equalTo(100L));
+ assertThat(searchResponse.getHits().getHits().length, equalTo(30));
+ for (SearchHit hit : searchResponse.getHits()) {
+ assertEquals(counter++, ((Number) hit.getSortValues()[0]).longValue());
+ }
+ } finally {
+ ClearScrollRequest clearScrollRequest = new ClearScrollRequest();
+ clearScrollRequest.addScrollId(searchResponse.getScrollId());
+ ClearScrollResponse clearScrollResponse = execute(clearScrollRequest,
+ // Not using a method reference to work around https://bugs.eclipse.org/bugs/show_bug.cgi?id=517951
+ (request, headers) -> highLevelClient().clearScroll(request, headers),
+ (request, listener, headers) -> highLevelClient().clearScrollAsync(request, listener, headers));
+ assertThat(clearScrollResponse.getNumFreed(), greaterThan(0));
+ assertTrue(clearScrollResponse.isSucceeded());
+
+ SearchScrollRequest scrollRequest = new SearchScrollRequest(searchResponse.getScrollId()).scroll(TimeValue.timeValueMinutes(2));
+ ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> execute(scrollRequest,
+ highLevelClient()::searchScroll, highLevelClient()::searchScrollAsync));
+ assertEquals(RestStatus.NOT_FOUND, exception.status());
+ assertThat(exception.getRootCause(), instanceOf(ElasticsearchException.class));
+ ElasticsearchException rootCause = (ElasticsearchException) exception.getRootCause();
+ assertThat(rootCause.getMessage(), containsString("No search context found for"));
+ }
+ }
+
+ private static void assertSearchHeader(SearchResponse searchResponse) {
+ assertThat(searchResponse.getTook().nanos(), greaterThanOrEqualTo(0L));
+ assertEquals(0, searchResponse.getFailedShards());
+ assertThat(searchResponse.getTotalShards(), greaterThan(0));
+ assertEquals(searchResponse.getTotalShards(), searchResponse.getSuccessfulShards());
+ assertEquals(0, searchResponse.getShardFailures().length);
+ }
+}
diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/CRUDDocumentationIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/CRUDDocumentationIT.java
new file mode 100644
index 0000000000000..30907cd050531
--- /dev/null
+++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/CRUDDocumentationIT.java
@@ -0,0 +1,962 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.client.documentation;
+
+import org.apache.http.HttpEntity;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.entity.ContentType;
+import org.apache.http.nio.entity.NStringEntity;
+import org.elasticsearch.Build;
+import org.elasticsearch.ElasticsearchException;
+import org.elasticsearch.Version;
+import org.elasticsearch.action.ActionListener;
+import org.elasticsearch.action.DocWriteRequest;
+import org.elasticsearch.action.DocWriteResponse;
+import org.elasticsearch.action.bulk.BackoffPolicy;
+import org.elasticsearch.action.bulk.BulkItemResponse;
+import org.elasticsearch.action.bulk.BulkProcessor;
+import org.elasticsearch.action.bulk.BulkRequest;
+import org.elasticsearch.action.bulk.BulkResponse;
+import org.elasticsearch.action.delete.DeleteRequest;
+import org.elasticsearch.action.delete.DeleteResponse;
+import org.elasticsearch.action.get.GetRequest;
+import org.elasticsearch.action.get.GetResponse;
+import org.elasticsearch.action.index.IndexRequest;
+import org.elasticsearch.action.index.IndexResponse;
+import org.elasticsearch.action.main.MainResponse;
+import org.elasticsearch.action.support.ActiveShardCount;
+import org.elasticsearch.action.support.WriteRequest;
+import org.elasticsearch.action.support.replication.ReplicationResponse;
+import org.elasticsearch.action.update.UpdateRequest;
+import org.elasticsearch.action.update.UpdateResponse;
+import org.elasticsearch.client.ESRestHighLevelClientTestCase;
+import org.elasticsearch.client.Response;
+import org.elasticsearch.client.RestHighLevelClient;
+import org.elasticsearch.cluster.ClusterName;
+import org.elasticsearch.common.Strings;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.unit.ByteSizeUnit;
+import org.elasticsearch.common.unit.ByteSizeValue;
+import org.elasticsearch.common.unit.TimeValue;
+import org.elasticsearch.common.xcontent.XContentBuilder;
+import org.elasticsearch.common.xcontent.XContentFactory;
+import org.elasticsearch.common.xcontent.XContentType;
+import org.elasticsearch.index.VersionType;
+import org.elasticsearch.index.get.GetResult;
+import org.elasticsearch.rest.RestStatus;
+import org.elasticsearch.script.Script;
+import org.elasticsearch.script.ScriptType;
+import org.elasticsearch.search.fetch.subphase.FetchSourceContext;
+import org.elasticsearch.threadpool.ThreadPool;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+import static java.util.Collections.emptyMap;
+import static java.util.Collections.singletonMap;
+
+/**
+ * This class is used to generate the Java CRUD API documentation.
+ * You need to wrap your code between two tags like:
+ * // tag::example[]
+ * // end::example[]
+ *
+ * Where example is your tag name.
+ *
+ * Then in the documentation, you can extract what is between tag and end tags with
+ * ["source","java",subs="attributes,callouts,macros"]
+ * --------------------------------------------------
+ * include-tagged::{doc-tests}/CRUDDocumentationIT.java[example]
+ * --------------------------------------------------
+ */
+public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
+
+ public void testIndex() throws IOException {
+ RestHighLevelClient client = highLevelClient();
+
+ {
+ //tag::index-request-map
+ Map jsonMap = new HashMap<>();
+ jsonMap.put("user", "kimchy");
+ jsonMap.put("postDate", new Date());
+ jsonMap.put("message", "trying out Elasticsearch");
+ IndexRequest indexRequest = new IndexRequest("posts", "doc", "1")
+ .source(jsonMap); // <1>
+ //end::index-request-map
+ IndexResponse indexResponse = client.index(indexRequest);
+ assertEquals(indexResponse.getResult(), DocWriteResponse.Result.CREATED);
+ }
+ {
+ //tag::index-request-xcontent
+ XContentBuilder builder = XContentFactory.jsonBuilder();
+ builder.startObject();
+ {
+ builder.field("user", "kimchy");
+ builder.field("postDate", new Date());
+ builder.field("message", "trying out Elasticsearch");
+ }
+ builder.endObject();
+ IndexRequest indexRequest = new IndexRequest("posts", "doc", "1")
+ .source(builder); // <1>
+ //end::index-request-xcontent
+ IndexResponse indexResponse = client.index(indexRequest);
+ assertEquals(indexResponse.getResult(), DocWriteResponse.Result.UPDATED);
+ }
+ {
+ //tag::index-request-shortcut
+ IndexRequest indexRequest = new IndexRequest("posts", "doc", "1")
+ .source("user", "kimchy",
+ "postDate", new Date(),
+ "message", "trying out Elasticsearch"); // <1>
+ //end::index-request-shortcut
+ IndexResponse indexResponse = client.index(indexRequest);
+ assertEquals(indexResponse.getResult(), DocWriteResponse.Result.UPDATED);
+ }
+ {
+ //tag::index-request-string
+ IndexRequest request = new IndexRequest(
+ "posts", // <1>
+ "doc", // <2>
+ "1"); // <3>
+ String jsonString = "{" +
+ "\"user\":\"kimchy\"," +
+ "\"postDate\":\"2013-01-30\"," +
+ "\"message\":\"trying out Elasticsearch\"" +
+ "}";
+ request.source(jsonString, XContentType.JSON); // <4>
+ //end::index-request-string
+
+ // tag::index-execute
+ IndexResponse indexResponse = client.index(request);
+ // end::index-execute
+ assertEquals(indexResponse.getResult(), DocWriteResponse.Result.UPDATED);
+
+ // tag::index-response
+ String index = indexResponse.getIndex();
+ String type = indexResponse.getType();
+ String id = indexResponse.getId();
+ long version = indexResponse.getVersion();
+ if (indexResponse.getResult() == DocWriteResponse.Result.CREATED) {
+ // <1>
+ } else if (indexResponse.getResult() == DocWriteResponse.Result.UPDATED) {
+ // <2>
+ }
+ ReplicationResponse.ShardInfo shardInfo = indexResponse.getShardInfo();
+ if (shardInfo.getTotal() != shardInfo.getSuccessful()) {
+ // <3>
+ }
+ if (shardInfo.getFailed() > 0) {
+ for (ReplicationResponse.ShardInfo.Failure failure : shardInfo.getFailures()) {
+ String reason = failure.reason(); // <4>
+ }
+ }
+ // end::index-response
+
+ // tag::index-execute-async
+ client.indexAsync(request, new ActionListener() {
+ @Override
+ public void onResponse(IndexResponse indexResponse) {
+ // <1>
+ }
+
+ @Override
+ public void onFailure(Exception e) {
+ // <2>
+ }
+ });
+ // end::index-execute-async
+ }
+ {
+ IndexRequest request = new IndexRequest("posts", "doc", "1");
+ // tag::index-request-routing
+ request.routing("routing"); // <1>
+ // end::index-request-routing
+ // tag::index-request-parent
+ request.parent("parent"); // <1>
+ // end::index-request-parent
+ // tag::index-request-timeout
+ request.timeout(TimeValue.timeValueSeconds(1)); // <1>
+ request.timeout("1s"); // <2>
+ // end::index-request-timeout
+ // tag::index-request-refresh
+ request.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL); // <1>
+ request.setRefreshPolicy("wait_for"); // <2>
+ // end::index-request-refresh
+ // tag::index-request-version
+ request.version(2); // <1>
+ // end::index-request-version
+ // tag::index-request-version-type
+ request.versionType(VersionType.EXTERNAL); // <1>
+ // end::index-request-version-type
+ // tag::index-request-op-type
+ request.opType(DocWriteRequest.OpType.CREATE); // <1>
+ request.opType("create"); // <2>
+ // end::index-request-op-type
+ // tag::index-request-pipeline
+ request.setPipeline("pipeline"); // <1>
+ // end::index-request-pipeline
+ }
+ {
+ // tag::index-conflict
+ IndexRequest request = new IndexRequest("posts", "doc", "1")
+ .source("field", "value")
+ .version(1);
+ try {
+ IndexResponse response = client.index(request);
+ } catch(ElasticsearchException e) {
+ if (e.status() == RestStatus.CONFLICT) {
+ // <1>
+ }
+ }
+ // end::index-conflict
+ }
+ {
+ // tag::index-optype
+ IndexRequest request = new IndexRequest("posts", "doc", "1")
+ .source("field", "value")
+ .opType(DocWriteRequest.OpType.CREATE);
+ try {
+ IndexResponse response = client.index(request);
+ } catch(ElasticsearchException e) {
+ if (e.status() == RestStatus.CONFLICT) {
+ // <1>
+ }
+ }
+ // end::index-optype
+ }
+ }
+
+ public void testUpdate() throws IOException {
+ RestHighLevelClient client = highLevelClient();
+ {
+ IndexRequest indexRequest = new IndexRequest("posts", "doc", "1").source("field", 0);
+ IndexResponse indexResponse = client.index(indexRequest);
+ assertSame(indexResponse.status(), RestStatus.CREATED);
+
+ XContentType xContentType = XContentType.JSON;
+ String script = XContentBuilder.builder(xContentType.xContent())
+ .startObject()
+ .startObject("script")
+ .field("lang", "painless")
+ .field("code", "ctx._source.field += params.count")
+ .endObject()
+ .endObject().string();
+ HttpEntity body = new NStringEntity(script, ContentType.create(xContentType.mediaType()));
+ Response response = client().performRequest(HttpPost.METHOD_NAME, "/_scripts/increment-field", emptyMap(), body);
+ assertEquals(response.getStatusLine().getStatusCode(), RestStatus.OK.getStatus());
+ }
+ {
+ //tag::update-request
+ UpdateRequest request = new UpdateRequest(
+ "posts", // <1>
+ "doc", // <2>
+ "1"); // <3>
+ //end::update-request
+ request.fetchSource(true);
+ //tag::update-request-with-inline-script
+ Map parameters = singletonMap("count", 4); // <1>
+
+ Script inline = new Script(ScriptType.INLINE, "painless", "ctx._source.field += params.count", parameters); // <2>
+ request.script(inline); // <3>
+ //end::update-request-with-inline-script
+ UpdateResponse updateResponse = client.update(request);
+ assertEquals(updateResponse.getResult(), DocWriteResponse.Result.UPDATED);
+ assertEquals(4, updateResponse.getGetResult().getSource().get("field"));
+
+ request = new UpdateRequest("posts", "doc", "1").fetchSource(true);
+ //tag::update-request-with-stored-script
+ Script stored =
+ new Script(ScriptType.STORED, "painless", "increment-field", parameters); // <1>
+ request.script(stored); // <2>
+ //end::update-request-with-stored-script
+ updateResponse = client.update(request);
+ assertEquals(updateResponse.getResult(), DocWriteResponse.Result.UPDATED);
+ assertEquals(8, updateResponse.getGetResult().getSource().get("field"));
+ }
+ {
+ //tag::update-request-with-doc-as-map
+ Map