Description
Elasticsearch version: 5.1.1
{
  "name" : "atom1",
  "cluster_name" : "atomcluster",
  "cluster_uuid" : "Ax602ATwRUanUplRc3d5iA",
  "version" : {
    "number" : "5.1.1",
    "build_hash" : "5395e21",
    "build_date" : "2016-12-06T12:36:15.409Z",
    "build_snapshot" : false,
    "lucene_version" : "6.3.0"
  },
  "tagline" : "You Know, for Search"
}
Plugins installed: []
none
JVM version: 1.8.0
[root@atom1 jenkins]# java -version
java version "1.8.0"
Java(TM) SE Runtime Environment (build 1.8.0-b132)
Java HotSpot(TM) 64-Bit Server VM (build 25.0-b70, mixed mode)
OS version: CentOS release 6.8 (Final), Linux 64-bit
[root@atom1 jenkins]# cat /etc/redhat-release
CentOS release 6.8 (Final)
[root@atom1 jenkins]# uname -a
Linux atom1 2.6.32-642.11.1.el6.x86_64 #1 SMP Fri Nov 18 19:25:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Description of the problem including expected versus actual behavior:
Steps to reproduce:
- Search with scroll enabled using the following query (a curl sketch follows the query body):
_search/?scroll=1m
{
  "size": 0,
  "query": {
    "constant_score": {
      "filter": {
        "bool": {
          "must": [
            {
              "range": {
                "@timestamp": {
                  "gte": "now-1d/d",
                  "lt": "now/d"
                }
              }
            },
            {
              "exists": {
                "field": "session"
              }
            }
          ]
        }
      }
    }
  },
  "aggs": {
    "2": {
      "terms": {
        "field": "session.id.raw"
      },
      "aggs": {
        "3": {
          "terms": {
            "field": "@timestamp",
            "size": 1,
            "order": {
              "_term": "asc"
            }
          },
          "aggs": {
            "4": {
              "top_hits": {
                "size": 1
              }
            }
          }
        }
      }
    }
  }
}
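For reference, a minimal sketch of how this first request might be issued with curl; the host/port are assumptions (not taken from the report), and the query body above is assumed to be saved as query.json:

# hypothetical invocation: host/port assumed, query.json holds the body shown above
curl -XGET 'http://localhost:9200/_search/?scroll=1m&pretty' -H 'Content-Type: application/json' -d @query.json

The response includes a _scroll_id, which the follow-up request in the next step passes back.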
- The initial search request succeeds. The second request, issued with the scroll_id returned in step 1 (sketched below), fails.
_search/scroll?scroll=1m
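A minimal sketch of this follow-up request as a curl call, assuming the same host/port; the scroll_id value is a placeholder for the one returned by the first request:

# hypothetical invocation: scroll_id is a placeholder, not the real id
curl -XGET 'http://localhost:9200/_search/scroll?scroll=1m&pretty' -H 'Content-Type: application/json' -d '{"scroll_id": "<scroll_id from step 1>"}'

This request fails with the response below: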
{
  "error": {
    "root_cause": [],
    "type": "reduce_search_phase_exception",
    "reason": "[reduce] inner finish failed",
    "phase": "fetch",
    "grouped": true,
    "failed_shards": [],
    "caused_by": {
      "type": "null_pointer_exception",
      "reason": null
    }
  },
  "status": 503
}
Cluster health:
[root@atom1 jenkins]# curl -XGET http://localhost:9200/_cluster/health?pretty=true
{
  "cluster_name" : "atomcluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 165,
  "active_shards" : 330,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
Cluster log:
[2017-01-09T13:48:06,677][DEBUG][o.e.i.f.p.SortedSetDVOrdinalsIndexFieldData] global-ordinals [session.id.raw][1972] took [3.6ms]
[2017-01-09T13:48:06,779][DEBUG][o.e.i.f.p.SortedSetDVOrdinalsIndexFieldData] global-ordinals [session.id.raw][2017] took [15.5ms]
[2017-01-09T13:48:06,914][WARN ][r.suppressed ] path: /_search/scroll, params: {scroll=1m}
org.elasticsearch.action.search.ReduceSearchPhaseException: [reduce] inner finish failed
at org.elasticsearch.action.search.SearchScrollQueryThenFetchAsyncAction.finishHim(SearchScrollQueryThenFetchAsyncAction.java:217) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.action.search.SearchScrollQueryThenFetchAsyncAction.executeFetchPhase(SearchScrollQueryThenFetchAsyncAction.java:175) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.action.search.SearchScrollQueryThenFetchAsyncAction.access$000(SearchScrollQueryThenFetchAsyncAction.java:44) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.action.search.SearchScrollQueryThenFetchAsyncAction$1.onResponse(SearchScrollQueryThenFetchAsyncAction.java:135) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.action.search.SearchScrollQueryThenFetchAsyncAction$1.onResponse(SearchScrollQueryThenFetchAsyncAction.java:129) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.action.ActionListenerResponseHandler.handleResponse(ActionListenerResponseHandler.java:46) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:978) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.transport.TcpTransport$1.doRun(TcpTransport.java:1289) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.common.util.concurrent.EsExecutors$1.execute(EsExecutors.java:109) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.transport.TcpTransport.handleResponse(TcpTransport.java:1281) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1250) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) [transport-netty4-5.1.1.jar:5.1.1]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [netty-transport-4.1.6.Final.jar:4.1.6.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [netty-transport-4.1.6.Final.jar:4.1.6.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351) [netty-transport-4.1.6.Final.jar:4.1.6.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293) [netty-codec-4.1.6.Final.jar:4.1.6.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:280) [netty-codec-4.1.6.Final.jar:4.1.6.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:396) [netty-codec-4.1.6.Final.jar:4.1.6.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248) [netty-codec-4.1.6.Final.jar:4.1.6.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [netty-transport-4.1.6.Final.jar:4.1.6.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [netty-transport-4.1.6.Final.jar:4.1.6.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351) [netty-transport-4.1.6.Final.jar:4.1.6.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.6.Final.jar:4.1.6.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [netty-transport-4.1.6.Final.jar:4.1.6.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [netty-transport-4.1.6.Final.jar:4.1.6.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.6.Final.jar:4.1.6.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129) [netty-transport-4.1.6.Final.jar:4.1.6.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651) [netty-transport-4.1.6.Final.jar:4.1.6.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:536) [netty-transport-4.1.6.Final.jar:4.1.6.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:490) [netty-transport-4.1.6.Final.jar:4.1.6.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450) [netty-transport-4.1.6.Final.jar:4.1.6.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873) [netty-common-4.1.6.Final.jar:4.1.6.Final]
at java.lang.Thread.run(Thread.java:744) [?:1.8.0]
Caused by: java.lang.NullPointerException
Both the session field and the @timestamp field exist in all documents within the now-1d time range.
Is there anything that can be done to fix this? Please let me know what additional information would help in resolving this issue.