
Hi everybody,

In one of my Elasticsearch indices, I have a problem with some documents.
I receive JSON from Filebeat and parse it with the json plugin.
But some of the documents contain a very large number of fields.

Example: response.0.field1 -> field28, response.1.field1 -> field28, response.2.field1 -> field28, ...

This goes up to more than response.400.*

I increased the index.mapping.total_fields.limit setting to 5000, then to 8000, and now to 12000.
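For reference, that limit can be bumped per index with a settings update (the index name my-index below is a placeholder):

```json
PUT my-index/_settings
{
  "index.mapping.total_fields.limit": 12000
}
```

But raising the limit only postpones the mapping-explosion problem, hence the flattening idea below.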

But I doubt anybody is actually using these fields.

So, to get rid of this and return to a more normal situation, I am thinking of flattening all response.0-999 fields.

Can I use a wildcard or regex as a field name to mark fields as flattened in a template?
Something like

"response.[0-9]*" : {
        "type": "flattened"
}

Or must I be explicit and list every possible name in it?
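In case it helps later readers: one way to sidestep wildcards entirely is to map the whole response object as flattened, since the flattened field type indexes the entire object as a single field, numbered subkeys included. A sketch against a hypothetical composable index template (the template name and index_patterns are placeholders):

```json
PUT _index_template/response-flattened
{
  "index_patterns": ["my-logs-*"],
  "template": {
    "mappings": {
      "properties": {
        "response": { "type": "flattened" }
      }
    }
  }
}
```

The trade-off is that leaf values under a flattened field are all indexed as keywords, so numeric range queries and aggregations on the subfields are limited.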

Another option is to prune the response.NNN fields where NNN is greater than 10, which I can do in Logstash after parsing the JSON, by blacklisting them.

Thank you for your attention and responses.

Hi everybody,

No answer after a week most probably means it is not possible to use a regex to flatten fields in a template.

I looked at the other solution, using the Logstash prune filter to blacklist/drop fields with names like response.[1-9][0-9]*, but according to this bug, prune fails to work on nested fields:
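Since prune does not reach into nested fields, a Logstash ruby filter can do the same pruning on the nested hash instead. A sketch of the key-filtering logic, assuming the numbered entries live as string keys under the response hash (in a ruby filter the hash would come from event.get('response') and be written back with event.set('response', ...)):

```ruby
# Drop numbered sub-entries of "response" whose key is >= keep_below.
# Non-numeric keys are kept untouched.
def prune_numbered(response, keep_below: 10)
  response.reject { |key, _value| key =~ /\A\d+\z/ && key.to_i >= keep_below }
end
```

Wrapped in a Logstash `ruby { code => "..." }` filter, this would run per event after the json filter has parsed the message.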