
With the change to log4j2, we can now use an action inside the DefaultRolloverStrategy to delete old log files by default and keep a certain number of days. We could even add a new ls.logs.keepdays setting that defaults to a fixed number of days and can be modified by users.

status = error
name = LogstashPropertiesConfig
appender.rolling.type = RollingFile
appender.rolling.name = plain_rolling
appender.rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
appender.rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %-.10000m%n
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:ls.logs}
appender.rolling.strategy.action.condition.type = IfLastModified
appender.rolling.strategy.action.condition.age = 14D
appender.rolling.strategy.action.PathConditions.type = IfFileName
appender.rolling.strategy.action.PathConditions.glob = logstash-${sys:ls.log.format}-*
appender.json_rolling.type = RollingFile
appender.json_rolling.name = json_rolling
appender.json_rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
appender.json_rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}.log
appender.json_rolling.policies.type = Policies
appender.json_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.json_rolling.policies.time.interval = 1
appender.json_rolling.policies.time.modulate = true
appender.json_rolling.layout.type = JSONLayout
appender.json_rolling.layout.compact = true
appender.json_rolling.layout.eventEol = true
rootLogger.level = ${sys:ls.log.level}
rootLogger.appenderRef.rolling.ref = ${sys:ls.log.format}_rolling
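For anyone who wants the Delete semantics spelled out: the IfFileName + IfLastModified pair above amounts to roughly the following standalone Python sketch (a hypothetical helper illustrating the equivalent logic, not log4j2 internals; paths are examples):

```python
import glob
import os
import time

def delete_old_logs(basepath, pattern, max_age_days):
    """Delete rolled log files matching `pattern` under `basepath`
    whose last-modified time is older than `max_age_days` days,
    mirroring the IfFileName + IfLastModified conditions above."""
    cutoff = time.time() - max_age_days * 86400
    deleted = []
    for path in glob.glob(os.path.join(basepath, pattern)):
        if os.path.getmtime(path) < cutoff:
            os.remove(path)
            deleted.append(path)
    return deleted

# Example (hypothetical directory layout):
# delete_old_logs("/var/log/logstash", "logstash-plain-*", 14)
```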

This adds a safety net: users keep a long history without running out of disk from hundreds of accumulated files.


@gmoskovicz Thanks a lot for this. Would you be so kind as to also provide a configuration example that rotates files after reaching a fixed size (instead of daily rotation)?

I believe a lot of users are already deleting LS/ES logs using find -mtime or similar, but that doesn't prevent a single daily log file from growing to an enormous size -- something that can easily happen in a mapping data-type conflict scenario.

You can add a size policy by adding these lines:

appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 10MB
appender.rolling.strategy.max = 5

and modify the pattern:

appender.rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log

It will limit the total size for one day to 5 × 10 MB = 50 MB:

logstash-plain-2017-06-22-1.log
logstash-plain-2017-06-22-2.log
logstash-plain-2017-06-22-3.log
logstash-plain-2017-06-22-4.log
logstash-plain-2017-06-22-5.log

Obviously, you will lose logs if you exceed this size (but you avoid filling your /var/log filesystem!)
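A tiny model of how strategy.max caps the %i counter within a single date bucket (a simplification, not log4j2's actual implementation; as I understand the default fileIndex ordering, higher indices are newer and the oldest file beyond the cap gets deleted):

```python
def rollover_names(date, rollover_count, max_files=5):
    """Names of the rolled files left after `rollover_count` size-based
    rollovers within one day, keeping at most `max_files` of them.
    Older files beyond the cap are deleted, so the surviving indices
    are always 1..min(rollover_count, max_files)."""
    kept = min(rollover_count, max_files)
    return [f"logstash-plain-{date}-{i}.log" for i in range(1, kept + 1)]

# After 8 size-based rollovers on one day, only 5 files survive:
# rollover_names("2017-06-22", 8)
```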

  • time-based rotation policy
  • My feeling is that time-based is the most intuitive, but size-based is safest.

    Can we make time-based have a size cap?

    Definitely +1 on this issue.

    We encountered disk capacity issues with our LS instances because the appender rolling strategy was missing. I'd also recommend making note of this in the official "Getting Started" documentation, but that is a different issue entirely, so I'll submit it over in documentation.

    Side note: it is our team's practice to use both time-based and size-based caps. We only keep 7 days of Logstash application logs and roll automatically after 1GB has been reached. There have been a few cases where we had several months of Logstash logs, several of them consuming well over 5GB due to unforeseen issues at the time (we have better monitoring in place for this now :) ).

    We use the following configuration across the board and it works very well for our own needs. Of course, all of these options are entirely subjective and depend on the views and needs of individual orgs. Nonetheless, I thought it would be useful to share what we are doing as a reference point for others looking for guidance/ideas for their own logging strategy.

    appender.rolling.strategy.type = DefaultRolloverStrategy
    appender.rolling.strategy.action.type = Delete
    appender.rolling.strategy.action.basepath = ${sys:ls.logs}
    appender.rolling.strategy.action.condition.type = IfLastModified
    appender.rolling.strategy.action.condition.age = 7D
    appender.rolling.strategy.action.PathConditions.type = IfFileName
    appender.rolling.strategy.action.PathConditions.glob = logstash-${sys:ls.log.format}-*
    appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
    appender.rolling.strategy.action.condition.nested_condition.exceeds = 1GB
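    My reading of the log4j2 Delete docs is that a nested condition is only evaluated for files the outer condition accepts, so the pairing above acts like an AND: a file must be older than 7 days, and the running total of old files must exceed 1GB, before it is deleted. A standalone Python sketch of that logic (hypothetical helper, hedged as my interpretation rather than log4j2 code):

```python
import os
import time

def files_to_delete(paths, max_age_days=7, size_cap_bytes=1 << 30):
    """Sketch of IfLastModified(age=7D) with a nested
    IfAccumulatedFileSize(exceeds=1GB): only files that pass the age
    check contribute to the accumulated size, and a file is flagged
    for deletion once that running total exceeds the cap."""
    now = time.time()
    # visit newest-first, as log4j2 sorts rolled files by last modified
    ordered = sorted(paths, key=os.path.getmtime, reverse=True)
    doomed, accumulated = [], 0
    for p in ordered:
        if (now - os.path.getmtime(p)) <= max_age_days * 86400:
            continue  # too young; the outer age condition rejects it
        accumulated += os.path.getsize(p)
        if accumulated > size_cap_bytes:
            doomed.append(p)
    return doomed
```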
              

    Not certain if this is the right place to post this, but I tried the log4j changes shown in the link above and the old files are not deleted after 7 days, as configured by "appender.rolling.strategy.max". Was this implemented correctly? My properties file looks like:

    status = error
    name = LogstashPropertiesConfig
    appender.rolling.type = RollingFile
    appender.rolling.name = plain_rolling
    appender.rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
    appender.rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
    appender.rolling.policies.type = Policies
    appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
    appender.rolling.policies.time.interval = 1
    appender.rolling.policies.time.modulate = true
    appender.rolling.layout.type = PatternLayout
    appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %-.10000m%n
    appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
    appender.rolling.policies.size.size = 100MB
    appender.rolling.strategy.type = DefaultRolloverStrategy
    appender.rolling.strategy.max = 7
    appender.json_rolling.type = RollingFile
    appender.json_rolling.name = json_rolling
    appender.json_rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
    appender.json_rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
    appender.json_rolling.policies.type = Policies
    appender.json_rolling.policies.time.type = TimeBasedTriggeringPolicy
    appender.json_rolling.policies.time.interval = 1
    appender.json_rolling.policies.time.modulate = true
    appender.json_rolling.layout.type = JSONLayout
    appender.json_rolling.layout.compact = true
    appender.json_rolling.layout.eventEol = true
    appender.json_rolling.policies.size.type = SizeBasedTriggeringPolicy
    appender.json_rolling.policies.size.size = 100MB
    appender.json_rolling.strategy.type = DefaultRolloverStrategy
    appender.json_rolling.strategy.max = 7
    rootLogger.level = ${sys:ls.log.level}
    rootLogger.appenderRef.rolling.ref = ${sys:ls.log.format}_rolling
              

    @EndlessTundra - thanks.

    After a bit of testing, it appears that the max setting only applies to size-based rollovers within the given time window.

    For example, appender.rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz

    will roll over every day, and WITHIN that day, if there are multiple rollovers due to size, then max gets applied. Specifically, in [DATE/TIME]-%i.log.gz, i is guaranteed to be <= max, and i increments for every size-based rollover within [DATE/TIME]. This keeps the number of files inside the [DATE/TIME] bucket in check. However, when rolling over ACROSS days (or whatever [DATE/TIME] dictates), i = 1 always if no size-based rollovers happen. max does not seem to apply ACROSS [DATE/TIME]; it seems the key to max is the value of %i.

    This is a neat feature to keep the per-day logs in check in case of a huge daily spike... but it wasn't the intended behavior with this config. I am still combing through log4j2's documentation trying to figure out how to get it to behave as desired (a simple max of X number of .log.gz files).

    Any log4j2 gurus know the right config?

    EDIT: It looks like @gmoskovicz's config in the original comment may be the configuration we are looking for (however, we should keep the max setting too).

    Hello, can anyone tell me which editor in CentOS 7 to use to modify the log4j2.properties file? I tried using vi, but the file didn't seem to work well with that editor. We're just setting up the latest version (6.4.1) of Elasticsearch with Logstash, Kibana, Filebeat, and Packetbeat. I'm currently working on performing the steps noted here: https://www.elastic.co/guide/en/elasticsearch/reference/6.4/logging.html Thank you.
    Eloy Sanchez

    I got this working with the example below (values are set arbitrarily low for testing), which offers:

  • size-based rotation at 1KB, with a max of 10 logs per day
  • time-based daily rotation
  • time- & size-based deletion if either of the below matches:
  • log age > 3 days
  • accumulated log size > 10MB

    appender.rolling.type = RollingFile
    appender.rolling.name = plain_rolling
    appender.rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
    appender.rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
    appender.rolling.policies.type = Policies
    appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
    appender.rolling.policies.time.interval = 1
    appender.rolling.policies.time.modulate = true
    appender.rolling.layout.type = PatternLayout
    appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %-.10000m%n
    appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
    appender.rolling.policies.size.size = 1KB
    appender.rolling.strategy.type = DefaultRolloverStrategy
    appender.rolling.strategy.max = 10
    appender.rolling.strategy.action.type = Delete
    appender.rolling.strategy.action.basepath = ${sys:ls.logs}
    appender.rolling.strategy.action.condition.type = IfFileName
    appender.rolling.strategy.action.condition.glob = logstash-${sys:ls.log.format}-*
    appender.rolling.strategy.action.ifAny.type = IfAny
    appender.rolling.strategy.action.ifAny.ifLastModified.type = IfLastModified
    appender.rolling.strategy.action.ifAny.ifLastModified.age = 3D
    appender.rolling.strategy.action.ifAny.ifAccumulatedFileSize.type = IfAccumulatedFileSize
    appender.rolling.strategy.action.ifAny.ifAccumulatedFileSize.exceeds = 10MB
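    In case it helps anyone, my understanding of the IfAny wrapper is a plain boolean OR over its nested conditions, so either the age limit or the accumulated-size cap triggers the deletion. Sketched in Python with hypothetical helper names (an illustration of the OR semantics, not log4j2 code):

```python
import os
import time

def any_condition_matches(path, accumulated_size, max_age_days=3,
                          size_cap_bytes=10 * 1024 * 1024):
    """OR of the two conditions nested under IfAny above:
    IfLastModified(age=3D) or IfAccumulatedFileSize(exceeds=10MB).
    `accumulated_size` is the total size of files already visited."""
    too_old = (time.time() - os.path.getmtime(path)) > max_age_days * 86400
    over_cap = accumulated_size + os.path.getsize(path) > size_cap_bytes
    return too_old or over_cap
```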
              

    I was banging my head against the wall because the configs mentioned above and in #8233 did not work for me. They just wouldn't delete the logs from /var/log/logstash. I tried setting --path.logs to another directory, and things worked!
    The only difference was that my /var/log/logstash was symlinked to /mnt/log/logstash.

    This is how a successful deletion shows up in TRACE level logs for log4j2

    2020-03-02 06:22:01,611 Log4j2-TF-2-RollingFileManager-5 TRACE IfFileName ACCEPTED: 'glob:logstash-plain-*' matches relative path 'logstash-plain-2020-03-02-06-18.log'
    2020-03-02 06:22:01,612 Log4j2-TF-2-RollingFileManager-5 TRACE IfLastModified ACCEPTED: logstash-plain-2020-03-02-06-18.log ageMillis '207888' >= 'PT3M'
    2020-03-02 06:22:01,612 Log4j2-TF-2-RollingFileManager-5 TRACE Deleting /mnt/log/logstash/logstash-plain-2020-03-02-06-18.log
    

    And with --path.logs set to a symlink, it looks like this

    2020-03-02 06:22:01,611 Log4j2-TF-2-RollingFileManager-5 TRACE IfFileName ACCEPTED: 'glob:logstash-plain-*' matches relative path 'logstash-plain-2020-03-02-06-18.log'
    2020-03-02 06:22:01,612 Log4j2-TF-2-RollingFileManager-5 TRACE IfLastModified ACCEPTED: logstash-plain-2020-03-02-06-18.log ageMillis '207888' >= 'PT3M'
    2020-03-02 06:22:01,612 Log4j2-TF-2-RollingFileManager-5 TRACE Deleting /mnt/log/logstash/logstash-plain-2020-03-02-06-18.log
    

    Adding the following line fixed the problem:
    appender.rolling.strategy.action.followlinks = true
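    To see why followlinks matters, note that file-tree walks typically skip symlinked directories unless told otherwise; Python's os.walk behaves the same way by default, which makes a quick demonstration (temporary paths, purely illustrative of the symlink effect, not of log4j2 itself):

```python
import os
import tempfile

def visible_logs(top, follow_links=False):
    """Files found by walking `top`; symlinked subdirectories are
    skipped unless follow_links is True (analogous to the Delete
    action's followlinks option discussed above)."""
    found = []
    for _root, _dirs, files in os.walk(top, followlinks=follow_links):
        found.extend(files)
    return sorted(found)

# Demo: a log file reachable only through a symlinked directory
base = tempfile.mkdtemp()   # stands in for /var/log
real = tempfile.mkdtemp()   # stands in for /mnt/log/logstash
open(os.path.join(real, "logstash-plain.log"), "w").close()
os.symlink(real, os.path.join(base, "logstash"))

# visible_logs(base)                    -> []  (symlink not followed)
# visible_logs(base, follow_links=True) -> ['logstash-plain.log']
```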