$ curl -XGET "http://localhost:9200/test/_search" -H 'Content-Type: application/json' -d'
{
  "_source": {
    "include": ["s", "a"],
    "exclude": "t"
  }
  "query": {
    "match": {
      "s": "value"
    }
  }
}'
{"error":{"root_cause":[{"type":"json_parse_exception","reason":"Unexpected character ('\"' (code 34)): was expecting comma to separate Object entries\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@7d758e66; line: 7, column: 4]"}],"type":"json_parse_exception","reason":"Unexpected character ('\"' (code 34)): was expecting comma to separate Object entries\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@7d758e66; line: 7, column: 4]"},"status":500}
It seems that if the JSON is invalid we send the request as plain text. It would be better to send it as JSON anyway and surface the error above (the one cURL gets), since the user may not realize that the error is on the request side.
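For illustration, a minimal sketch of that idea in TypeScript (not Kibana's actual proxy code; the sendToElasticsearch helper and the fetch-based transport are assumptions): forward the raw body with Content-Type: application/json even when it does not parse locally, so Elasticsearch itself returns the json_parse_exception shown above.

async function sendToElasticsearch(
  esUrl: string,
  path: string,
  body: string
): Promise<Response> {
  // Intentionally do NOT gate the content type on JSON.parse succeeding:
  // if the body is invalid JSON, Elasticsearch responds with the
  // json_parse_exception from the cURL reproduction, which is the error
  // the user actually needs to see.
  return fetch(`${esUrl}${path}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body,
  });
}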
I wish there were a future-proof way of determining the encoding on the fly, so that we don't need to worry about potential future mismatches in requirements between Elasticsearch and the Kibana console, but it's clearly fragile one way or another, so leaning on the msearch/bulk endpoints may be the best option.
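As a sketch of what leaning on those endpoints could look like (the path-matching heuristic below is an assumption, not existing console behaviour): pick the content type from the request path instead of sniffing the body, since _bulk and _msearch take newline-delimited JSON while everything else takes plain JSON.

// Hypothetical helper: choose the content type per endpoint rather than
// by inspecting the body.
const NDJSON_ENDPOINTS = /\/_(bulk|msearch)(\/|\?|$)/;

function contentTypeFor(path: string): string {
  return NDJSON_ENDPOINTS.test(path)
    ? 'application/x-ndjson' // newline-delimited JSON for _bulk / _msearch
    : 'application/json';
}

// e.g. contentTypeFor('/test/_bulk')   -> 'application/x-ndjson'
//      contentTypeFor('/test/_search') -> 'application/json'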