
Fatal error in discover app after upgrade from 6.3.2 to 6.4.0 #22355

Closed
dav3860 opened this issue Aug 24, 2018 · 26 comments
Labels: bug (Fixes for quality problems that affect the customer experience), Feature:Discover, Team:Operations, Team:Visualizations

dav3860 commented Aug 24, 2018

**Kibana version:** 6.4.0

**Elasticsearch version:** 6.4.0

**Server OS version:** CentOS 7

**Browser version:** Chrome 63

**Browser OS version:** Windows 10

**Original install method (e.g. download page, yum, from source, etc.):** yum

**Describe the bug:** Fatal error in the discover app after upgrading from 6.3.2 to 6.4.0.

Steps to reproduce:

1. Open the discover app.
2. A fatal error is printed: "Courier fetch: Cannot read property 'forEach' of undefined".

Expected behavior:

Screenshots (if relevant):

Errors in browser console (if relevant):
```
TypeError: Cannot read property 'forEach' of undefined
  at https://kibana/bundles/commons.bundle.js:3:1085607
  at Array.forEach ()
  at _callee$ (https://kibana/bundles/commons.bundle.js:3:1085555)
  at tryCatch (https://kibana/bundles/vendors.bundle.js:36:138904)
  at Generator.invoke [as _invoke] (https://kibana/bundles/vendors.bundle.js:36:142786)
  at Generator.prototype.(anonymous function) [as next] (https://kibana/bundles/vendors.bundle.js:36:140027)
  at step (https://kibana/bundles/commons.bundle.js:3:1081249)
  at https://kibana/bundles/commons.bundle.js:3:1081375
```

The error appears on this line:

```js
segregatedResponses.forEach(function(responses, strategyIndex) {
```

Provide logs and/or server output (if relevant):

Any additional context:


florian-asche commented Aug 26, 2018

I have a similar problem here. After upgrading from 6.x to 6.4.0 I get the following error message:

```
Error: Bad Gateway
ErrorAbstract@http://192.168.0.8/bundles/vendors.bundle.js:313:132203
StatusCodeError@http://192.168.0.8/bundles/vendors.bundle.js:313:135090
respond@http://192.168.0.8/bundles/vendors.bundle.js:313:149378
checkRespForFailure@http://192.168.0.8/bundles/vendors.bundle.js:313:148589
AngularConnector.prototype.request/<@http://192.168.0.8/bundles/vendors.bundle.js:313:157823
processQueue@http://192.168.0.8/bundles/vendors.bundle.js:197:199684
scheduleProcessQueue/<@http://192.168.0.8/bundles/vendors.bundle.js:197:200647
$digest@http://192.168.0.8/bundles/vendors.bundle.js:197:210409
$apply@http://192.168.0.8/bundles/vendors.bundle.js:197:213205
done@http://192.168.0.8/bundles/vendors.bundle.js:197:132704
completeRequest@http://192.168.0.8/bundles/vendors.bundle.js:197:136327
requestLoaded@http://192.168.0.8/bundles/vendors.bundle.js:197:135223
```

But it only appears for Metricbeat, not for Heartbeat.

I found out that the request to the /elasticsearch URL gets a 502 instead of a 200. nginx writes to its log file that the header is too big:

```
2018/08/26 21:39:59 [error] 17551#17551: *855 upstream sent too big header while reading response header from upstream, client: 193, server: kibana, request: "POST /elasticsearch/_msearch HTTP/1.1", upstream: "http://8:5601/elasticsearch/_msearch", host: "8", referrer: "http://8/app/kibana"
```

I fixed it by adding the following to my nginx config:

```nginx
proxy_buffers 4 256k;
proxy_buffer_size 128k;
proxy_busy_buffers_size 256k;
```
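For context, those directives belong inside the `location` block that proxies to Kibana. A minimal sketch of where they go, assuming a typical nginx-in-front-of-Kibana setup (server name, upstream address, and port are placeholders):

```nginx
server {
    listen 80;
    server_name kibana;

    location / {
        proxy_pass http://127.0.0.1:5601;
        # Enlarged buffers so large response headers from _msearch
        # no longer trigger "upstream sent too big header" (502).
        proxy_buffers 4 256k;
        proxy_buffer_size 128k;
        proxy_busy_buffers_size 256k;
    }
}
```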


dav3860 commented Aug 27, 2018

It doesn't seem to be the same error for me, as the request to /elasticsearch gets a 200 when the error appears:

```
10.0.28.176 - MYUSER [27/Aug/2018:10:46:19 +0200] "POST /elasticsearch/_msearch HTTP/1.1" 200 56177 "https://kibana/app/kibana" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"
```

@liza-mae liza-mae added :Discovery Team:Operations Team label for Operations Team triage_needed labels Aug 27, 2018
@joschi99

Same error after upgrade from 6.3.2 to 6.4.0 when I open a dashboard.

[screenshot of the error]

@musskopf

Same error over here:

Courier fetch: Cannot read property 'forEach' of undefined

I noticed the error goes away when I remove all "Scripted Fields" for the index pattern.

@florian-asche

Are there errors in your proxy? For example nginx?

@carolineLe

I did an upgrade from 6.3.2 to 6.4.0 and got the following similar error:
Courier fetch: responses is undefined
Version: 6.4.0
Build: 17929

```
Error: responses is undefined

_callee$@https://mydomainname/bundles/commons.bundle.js:3:1085535
tryCatch@https://mydomainname/bundles/vendors.bundle.js:36:138901
invoke@https://mydomainname/bundles/vendors.bundle.js:36:142786
defineIteratorMethods/</prototype[method]@https://mydomainname/bundles/vendors.bundle.js:36:140022
step@https://mydomainname/bundles/commons.bundle.js:3:1081241
step/<@https://mydomainname/bundles/commons.bundle.js:3:1081375
run@https://mydomainname/bundles/vendors.bundle.js:36:107333
notify/<@https://mydomainname/bundles/vendors.bundle.js:36:107573
flush@https://mydomainname/bundles/vendors.bundle.js:153:55572
```

The debugger console is a bit more verbose:

```
Possibly unhandled rejection: {}
TypeError: responses is undefined
Stack trace:
_callee$/<@https://mydomainname/bundles/commons.bundle.js:3:1085597
_callee$@https://mydomainname/bundles/commons.bundle.js:3:1085535
tryCatch@https://mydomainname/bundles/vendors.bundle.js:36:138901
invoke@https://mydomainname/bundles/vendors.bundle.js:36:142786
defineIteratorMethods/</prototype[method]@https://mydomainname/bundles/vendors.bundle.js:36:140022
step@https://mydomainname/bundles/commons.bundle.js:3:1081241
step/<@https://mydomainname/bundles/commons.bundle.js:3:1081375
run@https://mydomainname/bundles/vendors.bundle.js:36:107333
notify/<@https://mydomainname/bundles/vendors.bundle.js:36:107573
flush@https://mydomainname/bundles/vendors.bundle.js:153:55572
```


dav3860 commented Sep 5, 2018

For us, the error appears randomly but very often.


Bargs commented Sep 10, 2018

@cjcenizal I'm wondering if this is a side effect of the courier refactor, any chance you could take a look?

@Bargs Bargs added bug Fixes for quality problems that affect the customer experience and removed triage_needed labels Sep 10, 2018
@cjcenizal

@dav3860 @carolineLe Could you look under the network tab in your browser's dev tools and tell me what the response is from the call to _msearch? It should be the most recent call in the network activity log.

If the response is too large to easily share or if you want to keep your data confidential then you can just share the first part of the response with me. It should look something like this:

```
{"responses":[{"took":132,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":81,"max_score":null
```

@cjcenizal

@dav3860 @carolineLe Could you also share the request payload with me? That would help a lot.

I'm able to reproduce the error you've found, but I'm having trouble identifying the root cause. So far the information I have points to the problem lying within the response we get back from Elasticsearch, but more details on the particular requests you're making will help me.


carolineLe commented Sep 11, 2018

Here is the request that is sent:

```
{"index":"index_name-*","ignore_unavailable":true,"timeout":30000,"preference":1536647765711}
{"version":true,"size":500,"sort":[{"@timestamp":{"order":"desc","unmapped_type":"boolean"}}],"_source":{"excludes":[]},"aggs":{"2":{"date_histogram":{"field":"@timestamp","interval":"30s","time_zone":"Europe/Berlin","min_doc_count":1}}},"stored_fields":["*"],"script_fields":{},"docvalue_fields":["@timestamp", "OtherTimeField1", "OtherTimeField2", "OtherTimeField3", "OtherTimeField4"],"query":{"bool":{"must":[{"match_all":{}},{"range":{"@timestamp":{"gte":1536646868523,"lte":1536647768524,"format":"epoch_millis"}}}],"filter":[],"should":[],"must_not":[]}},"highlight":{"pre_tags":["@kibana-highlighted-field@"],"post_tags":["@/kibana-highlighted-field@"],"fields":{"*":{}},"fragment_size":2147483647}}
```

And the associated response:

```
{"responses":[{"took":414,"timed_out":false,"_shards":{"total":100,"successful":100,"skipped":90,"failed":0},
"hits":{"total":450519,"max_score":null,"hits":[...]},
"aggregations":{"2":{"buckets":[{"key_as_string":"2018-09-11T08:21:00.000+02:00","key":1536646860000,"doc_count":7342},{"key_as_string":"2018-09-11T08:21:30.000+02:00","key":1536646890000,"doc_count":14881},{"key_as_string":"2018-09-11T08:22:00.000+02:00","key":1536646920000,"doc_count":12212}, ... ]}},"status":200}]}
```

I had the bug with another index pattern and noticed the following warning: "Doc-value field [@timestamp] is not using a format.". Setting the format for the timestamp fields in that index "fixed" the problem, but it did not change anything for the one I'm having trouble with right now.
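For reference, Elasticsearch 6.4 accepts an object form for `docvalue_fields` entries with an explicit `format`, which is one way to address that warning. A hedged sketch against the request above (the `date_time` format name is the standard built-in date format; check your ES version's docs before relying on it):

```json
"docvalue_fields": [
  { "field": "@timestamp", "format": "date_time" },
  { "field": "OtherTimeField1", "format": "date_time" }
]
```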

@cjcenizal

@carolineLe Thank you for this info! For the response you shared, could you provide the entire shape of the object? For example, the response example I gave had a responses property as well.

@carolineLe

Sorry, the formatting removed the first line, I edited my answer.

@cjcenizal

@dav3860 When you say "randomly but very often" can you confirm for me that you mean that you're using the same query, which sometimes yields the error and sometimes doesn't? E.g. you have Discover open with "Absolute" time range selected (to ensure the time doesn't change), and you hit refresh 10 times in your browser, and out of those 10 times 2 or 3 will fail?

@timroes timroes added Feature:Discover Discover Application Team:Visualizations Visualization editors, elastic-charts and infrastructure and removed :Discovery labels Sep 16, 2018
@cjcenizal

@carolineLe I'm able to reproduce the original error, but your error Courier fetch: responses is undefined is turning out to be trickier to reproduce. Can you click on the top line in the stack trace and show me which line of code is generating that error? Thanks.

@cjcenizal

In terms of the original error, "Courier fetch: Cannot read property 'forEach' of undefined", this is causing a fatal error for the same reason that #22466 originally did. 6.4.0 contains a regression in which errors thrown by CallClient result in fatal errors, as in the case when a call to the es client results in a Promise rejection. This has been fixed in 6.4.1 (#22558).

Note that this just fixes the regression, which is an exacerbated symptom of an underlying error. The underlying error, "Courier fetch: Cannot read property 'forEach' of undefined", is due to a failed request sent through the es client, which I can't diagnose with the information in this issue.

@carolineLe

carolineLe commented Sep 19, 2018

```
columnNumber: 1085597
fileName: "https://mydomainname/bundles/commons.bundle.js"
lineNumber: 3
message: "responses is undefined"
```

Right before this error, I get another weird error. The "XML" appears malformed, but there is no XML involved; the response is JSON.

```
XML Parsing Error: not well-formed
Location: https://mydomainname/elasticsearch/_msearch
Line Number 1, Column 1:
_msearch:1:1
_callee$/<@https://mydomainname/bundles/commons.bundle.js:3:1085597
_callee$@https://mydomainname/bundles/commons.bundle.js:3:1085535
tryCatch@https://mydomainname/bundles/vendors.bundle.js:36:138901
invoke@https://mydomainname/bundles/vendors.bundle.js:36:142786
defineIteratorMethods/</prototype[method]@https://mydomainname/bundles/vendors.bundle.js:36:140022
step@https://mydomainname/bundles/commons.bundle.js:3:1081241
step/<@https://mydomainname/bundles/commons.bundle.js:3:1081375
run@https://mydomainname/bundles/vendors.bundle.js:36:107333
notify/<@https://mydomainname/bundles/vendors.bundle.js:36:107573
flush@https://mydomainname/bundles/vendors.bundle.js:153:55572
```

Thank you for investigating this!

@carolineLe

@cjcenizal OK, I changed the server's response to add the "Content-Type: application/json" header and it fixed my problem.
Should I report it to the elasticsearch team?

@cjcenizal

Hi @carolineLe thank you for the information. That sounds like an appropriate issue for the elasticsearch-js repo.

In terms of hunting down the origin of that error within our source, I'm still wondering if you can help me do that? Could you refer to the animation below and click on the second link in the commons.bundle.js stack trace in the dev tools console, to identify the line of code which originates the error? Thank you so much!

[animation: call_stack]

@cjcenizal

**6.4.1 parity with 6.3.0**

I've done some investigation to identify whether 6.4.1 is on par with 6.3.0 in terms of how errors within CallClient are surfaced.

Long story short, I believe 6.4.1 is on par with 6.3.0, if not slightly above par. Here are my findings.

**6.3.0**

- Throwing an error in getFetchParams results in a Notifier error. [screenshot: throw_error]
- msearch returning a rejected promise results in a Notifier error. [screenshot: throw_error]
- msearch resolving with undefined results in a different Notifier error: "Cannot read property responses of undefined". [screenshot: resolve_with_undefined]
- msearch resolving with an empty object results in a fatal error. [screenshot: resolve_with_empty_object]

**6.4.1**

- Throwing an error in getFetchParams results in a Notifier error. [screenshot: throw_error]
- msearch returning a rejected promise results in a Notifier error. [screenshot: throw_error_msearch]
- msearch resolving with undefined results in a different Notifier error: "Cannot read property responses of undefined". [screenshot: resolve_with_undefined]
- msearch resolving with an empty object results in a different Notifier error: "Cannot read property forEach of undefined". [screenshot: resolve_with_object]

@carolineLe

Here is where the exception occurs, at the responses.forEach:

```js
case 12:
  segregatedResponses = _context2.sent;
  responsesInOriginalRequestOrder = new Array(searchRequestsAndStatuses.length);
  segregatedResponses.forEach(function (responses, strategyIndex) {
    responses.forEach(function (response, responseIndex) {
      var searchRequest = searchStrategiesWithRequests[strategyIndex].searchRequests[responseIndex];
      var requestIndex = searchRequestsAndStatuses.indexOf(searchRequest);
      responsesInOriginalRequestOrder[requestIndex] = response;
    });
  });
```

@cjcenizal

Thank you @carolineLe! This is very helpful.

@cjcenizal

cjcenizal commented Sep 20, 2018

Looks like the two error messages we've been seeing (Cannot read property 'forEach' of undefined and responses is undefined) are the same error, just surfaced in different ways depending on whether you're using Chrome or Firefox. They're both due to the msearch request resolving with an empty object (for some reason).
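As an illustration of why the two messages diverge: both browsers fail on the same missing `responses` property, just with different TypeError wording. A minimal sketch of a defensive guard that would surface the bad payload with one consistent message (`extractResponses` is a hypothetical helper name, not the actual Kibana patch):

```javascript
// Hypothetical guard illustrating the failure mode discussed above.
// When the msearch result is an empty object, Chrome reports
// "Cannot read property 'forEach' of undefined" and Firefox reports
// "responses is undefined"; a check like this yields one clear error.
function extractResponses(msearchResult) {
  const responses = msearchResult && msearchResult.responses;
  if (!Array.isArray(responses)) {
    throw new Error('msearch result did not contain a "responses" array');
  }
  return responses;
}
```

Usage: `extractResponses({ responses: [...] })` returns the array, while `extractResponses({})` and `extractResponses(undefined)` throw the descriptive error instead of a browser-specific TypeError.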

@carolineLe

@cjcenizal Maybe this is due to the JSON being wrongly parsed as XML/HTML because the mimetype header is missing, what do you think? I don't know if the missing mimetype is specific to my installation.

@florian-asche

> Looks like the two error messages we've been seeing (Cannot read property 'forEach' of undefined and responses is undefined) are the same error, just surfaced in different ways depending on whether you're using Chrome or Firefox. They're both due to the msearch request resolving with an empty object (for some reason).

See #22355 (comment)

@cjcenizal

@carolineLe Do you happen to have a proxy in front of Elasticsearch, similar to what @florian-asche has set up? This will be the proxy located at the elasticsearch.url defined in your Kibana config file. If so, the error could be due to the proxy responding with some sort of XML or HTML response, for example an error page due to a 502 or similar.

I've created an issue to address this problem (elastic/elasticsearch-js#701). We currently suggest users configure their elasticsearch.url to point directly to Elasticsearch. In production, if you want high availability built into your deployment, you can configure an Elasticsearch coordinating node on the Kibana host, which can act as your endpoint. Let me know if you'd like to explore this option and I'll pull together some information for you.
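For anyone exploring that option: in Elasticsearch 6.x a coordinating-only node is configured by disabling the other node roles. A sketch under those assumptions (host names are placeholders, settings are the standard 6.x ones):

```yaml
# elasticsearch.yml on the Kibana host: a coordinating-only node
node.master: false
node.data: false
node.ingest: false
discovery.zen.ping.unicast.hosts: ["es-data-1", "es-data-2"]
```

Kibana's `elasticsearch.url` would then point at this local node, e.g. `http://localhost:9200`, removing the intermediate proxy from the request path.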

Thank you everyone for your help with this issue. I'm going to close it since it's not a problem we can solve on the Kibana side. Feel free to continue leaving questions here if you have any.
