Closed
Labels: :ml (Machine learning), >test-failure (Triaged test failures from CI)
Description
This test has failed three times in the past two weeks:
https://gradle-enterprise.elastic.co/s/mp3vxqny35j4e
Output:
org.elasticsearch.xpack.ml.integration.ForecastIT > testOverflowToDisk FAILED
org.elasticsearch.ElasticsearchStatusException: Test likely fails due to insufficient disk space on test machine, please free up space.
at __randomizedtesting.SeedInfo.seed([8263E7B83540FE6E:CDB359E25C3EF7DE]:0)
at org.elasticsearch.xpack.ml.integration.ForecastIT.testOverflowToDisk(ForecastIT.java:244)
Caused by:
org.elasticsearch.ElasticsearchStatusException: Cannot run forecast: Forecast cannot be executed as models exceed internal memory limit and available disk space is insufficient Minimum disk space required: [200mb]
at org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.badRequestException(ExceptionsHelper.java:75)
at org.elasticsearch.xpack.ml.action.TransportForecastJobAction.lambda$getForecastRequestStats$2(TransportForecastJobAction.java:125)
at org.elasticsearch.xpack.ml.job.persistence.JobResultsProvider.lambda$getForecastRequestStats$40(JobResultsProvider.java:1368)
at org.elasticsearch.xpack.ml.job.persistence.JobResultsProvider.lambda$searchSingleResult$33(JobResultsProvider.java:1165)
at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63)
at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43)
at org.elasticsearch.client.node.NodeClient.lambda$executeLocally$0(NodeClient.java:91)
at org.elasticsearch.tasks.TaskManager$1.onResponse(TaskManager.java:175)
at org.elasticsearch.tasks.TaskManager$1.onResponse(TaskManager.java:169)
at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43)
at org.elasticsearch.action.search.AbstractSearchAsyncAction.sendSearchResponse(AbstractSearchAsyncAction.java:545)
at org.elasticsearch.action.search.ExpandSearchPhase.run(ExpandSearchPhase.java:117)
at org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:350)
at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:344)
at org.elasticsearch.action.search.FetchSearchPhase.moveToNextPhase(FetchSearchPhase.java:231)
at org.elasticsearch.action.search.FetchSearchPhase.lambda$innerRun$1(FetchSearchPhase.java:119)
at org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:125)
at org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:95)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:706)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.lang.Thread.run(Thread.java:834)
I'm not too familiar with this part of the system, but I wonder whether we truly have less than 200mb of disk space available when running the test (in which case we could raise it as an infrastructure issue), or whether the test should instead be skipped when these conditions are not met.
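If the skip-the-test route is taken, one option is to check usable disk space up front and turn an environmental shortfall into a skipped test rather than a failure. The sketch below is only an illustration of that idea, not the actual test code: `DiskSpaceGuard`, `MIN_FREE_BYTES`, and the use of the temp directory are all assumptions; in the real test the boolean would presumably feed something like JUnit's `assumeTrue(...)`.

```java
import java.io.File;

public class DiskSpaceGuard {
    // 200mb, matching the minimum the forecast error message reports.
    static final long MIN_FREE_BYTES = 200L * 1024 * 1024;

    // Returns true if the given path has at least MIN_FREE_BYTES of
    // usable space, as reported by the filesystem.
    static boolean hasEnoughDiskSpace(File path) {
        return path.getUsableSpace() >= MIN_FREE_BYTES;
    }

    public static void main(String[] args) {
        // Hypothetical check against the temp dir the test might write to.
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        // In a test, a false result here could trigger assumeTrue(...) so
        // the test is skipped (not failed) on constrained CI machines.
        System.out.println(hasEnoughDiskSpace(tmp) ? "run" : "skip");
    }
}
```

This would distinguish "the machine cannot run this test" from "the forecast code is broken", which is exactly the ambiguity in the failure above.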