String
|
The key that will be sent to Kafka with every message. Optional value defaulting to
null
.
Any of the
Lookups
can be included.
|
Filter
|
A Filter to determine if the event should be handled by this Appender. More than one Filter
may be used by using a CompositeFilter.
|
The default is
true
, causing exceptions encountered while appending events to be
internally logged and then ignored. When set to
false
exceptions will be propagated to the
caller, instead. You must set this to
false
when wrapping this Appender in a
FailoverAppender
.
|
boolean
|
The default is
true
, causing sends to block until the record has been acknowledged by the
Kafka server. When set to
false
sends return immediately, allowing for lower latency and significantly
higher throughput.
New since 2.8. Be aware that this is a new addition, and it has not been extensively tested.
Any failure sending to Kafka will be reported as an error to the StatusLogger and the log event will be dropped
(the ignoreExceptions parameter will not be effective). Log events may arrive at the Kafka server out of order.
You can set properties in
Kafka producer properties
.
You need to set the
bootstrap.servers
property; sensible default values are provided for the others.
Do not set the
value.serializer
nor
key.serializer
properties.
<Appenders>
<Kafka name="Kafka" topic="log-test">
<PatternLayout pattern="%date %message"/>
<Property name="bootstrap.servers">localhost:9092</Property>
</Kafka>
</Appenders>
This appender is synchronous by default and will block until the record has been acknowledged by the Kafka server; the timeout
for this can be set with the
timeout.ms
property (defaults to 30 seconds). Wrap with
Async appender
and/or set syncSend to
false
to log asynchronously.
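For example, a minimal sketch of wrapping the Kafka appender in an Async appender; the appender names and broker address here are placeholders:
<Appenders>
  <Kafka name="Kafka" topic="log-test" syncSend="false">
    <PatternLayout pattern="%date %message"/>
    <Property name="bootstrap.servers">localhost:9092</Property>
  </Kafka>
  <Async name="AsyncKafka">
    <AppenderRef ref="Kafka"/>
  </Async>
</Appenders>
Loggers would then reference AsyncKafka so that the Kafka send happens off the application thread.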
This appender requires the
Kafka client library
. Note that you need to use a version of
the Kafka client library matching the Kafka server used.
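If you build with Maven, the dependency might be declared as follows; the version shown is a placeholder and should match your Kafka server:
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>1.1.1</version>
</dependency>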
Note:
Make sure not to let
org.apache.kafka
log to a Kafka appender on DEBUG level,
since that will cause recursive logging:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
...
<Loggers>
<Root level="DEBUG">
<AppenderRef ref="Kafka"/>
</Root>
<Logger name="org.apache.kafka" level="INFO" /> <!-- avoid recursive logging -->
</Loggers>
</Configuration>
New since 2.1. Be aware that this is a new addition, and although it has been
tested on several platforms, it does not have as much track record as the other file appenders.
The MemoryMappedFileAppender maps a part of the specified file into memory
and writes log events to this memory, relying on the operating system's
virtual memory manager to synchronize the changes to the storage device.
The main benefit of using memory mapped files is I/O performance. Instead of making system
calls to write to disk, this appender can simply change the program's local memory,
which is orders of magnitude faster. Also, in most operating systems the memory
region mapped actually is the kernel's
page
cache
(file cache), meaning that no copies need to be created in user space.
(TODO: performance tests that compare performance of this appender to
RandomAccessFileAppender and FileAppender.)
There is some overhead with mapping a file region into memory,
especially very large regions (half a gigabyte or more).
The default region size is 32 MB, which should strike a reasonable balance
between the frequency and the duration of remap operations.
(TODO: performance test remapping various sizes.)
Similar to the FileAppender and the RandomAccessFileAppender,
MemoryMappedFileAppender uses a MemoryMappedFileManager to actually perform the
file I/O. While MemoryMappedFileAppenders from different Configurations
cannot be shared, the MemoryMappedFileManagers can be if the Manager is
accessible. For example, two web applications in a servlet container can have
their own configuration and safely write to the same file if Log4j
is in a ClassLoader that is common to both of them.
<Configuration status="warn" name="MyApp">
<Appenders>
<MemoryMappedFile name="MyFile" fileName="logs/app.log">
<PatternLayout>
<Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
</PatternLayout>
</MemoryMappedFile>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="MyFile"/>
</Root>
</Loggers>
</Configuration>
The NoSQLAppender writes log events to a NoSQL database using an internal lightweight provider interface.
Provider implementations currently exist for MongoDB and Apache CouchDB, and writing a custom provider is
quite simple.
MemoryMappedFileAppender Parameters
When true - the default, records will be appended to the end
of the file. When set to false, the file will be cleared before
new records are written.
When set to true, each write will be followed by a
call to
MappedByteBuffer.force()
.
This will guarantee the data is written to the storage device.
The default for this parameter is
false
.
This means that the data is written to the storage device even
if the Java process crashes, but there may be data loss if the
operating system crashes.
Note that manually forcing a sync on every log event loses most
of the performance benefits of using a memory mapped file.
Flushing after every write is only useful when using this
appender with synchronous loggers. Asynchronous loggers and
appenders will automatically flush at the end of a batch of events,
even if immediateFlush is set to false. This also guarantees
the data is written to disk but is more efficient.
|
The length of the mapped region, defaults to 32 MB
(32 * 1024 * 1024 bytes). This parameter must be a value
between 256 and 1,073,741,824 (1 GB or 2^30);
values outside this range will be adjusted to the closest valid
value.
Log4j will round the specified value up to the nearest power of two.
|
Layout
|
The Layout to use to format the LogEvent. If no layout is supplied the default pattern layout
of "%m%n" will be used.
|
The default is
true
, causing exceptions encountered while appending events to be
internally logged and then ignored. When set to
false
exceptions will be propagated to the
caller, instead. You must set this to
false
when wrapping this Appender in a
FailoverAppender
.
|
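For illustration, a hypothetical declaration that overrides the default 32 MB region with the regionLength attribute described above; the 1 MB value is arbitrary, chosen only for the example:
<MemoryMappedFile name="MyFile" fileName="logs/app.log" regionLength="1048576">
  <PatternLayout>
    <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
  </PatternLayout>
</MemoryMappedFile>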
You specify which NoSQL provider to use by specifying the appropriate configuration element within the
<NoSql>
element. The types currently supported are
<MongoDb>
and
<CouchDb>
. To create your own custom provider, read the JavaDoc for the
NoSQLProvider
,
NoSQLConnection
, and
NoSQLObject
classes and the
documentation about creating Log4j plugins. We recommend you review the source code for the MongoDB and
CouchDB providers as a guide for creating your own provider.
The following example demonstrates how log events are persisted in NoSQL databases if represented in a JSON
format:
"level": "WARN",
"loggerName": "com.example.application.MyClass",
"message": "Something happened that you might want to know about.",
"source": {
"className": "com.example.application.MyClass",
"methodName": "exampleMethod",
"fileName": "MyClass.java",
"lineNumber": 81
"marker": {
"name": "SomeMarker",
"parent" {
"name": "SomeParentMarker"
"threadName": "Thread-1",
"millis": 1368844166761,
"date": "2013-05-18T02:29:26.761Z",
"thrown": {
"type": "java.sql.SQLException",
"message": "Could not insert record. Connection lost.",
"stackTrace": [
{ "className": "org.example.sql.driver.PreparedStatement$1", "methodName": "responder", "fileName": "PreparedStatement.java", "lineNumber": 1049 },
{ "className": "org.example.sql.driver.PreparedStatement", "methodName": "executeUpdate", "fileName": "PreparedStatement.java", "lineNumber": 738 },
{ "className": "com.example.application.MyClass", "methodName": "exampleMethod", "fileName": "MyClass.java", "lineNumber": 81 },
{ "className": "com.example.application.MainClass", "methodName": "main", "fileName": "MainClass.java", "lineNumber": 52 }
"cause": {
"type": "java.io.IOException",
"message": "Connection lost.",
"stackTrace": [
{ "className": "java.nio.channels.SocketChannel", "methodName": "write", "fileName": null, "lineNumber": -1 },
{ "className": "org.example.sql.driver.PreparedStatement$1", "methodName": "responder", "fileName": "PreparedStatement.java", "lineNumber": 1032 },
{ "className": "org.example.sql.driver.PreparedStatement", "methodName": "executeUpdate", "fileName": "PreparedStatement.java", "lineNumber": 738 },
{ "className": "com.example.application.MyClass", "methodName": "exampleMethod", "fileName": "MyClass.java", "lineNumber": 81 },
{ "className": "com.example.application.MainClass", "methodName": "main", "fileName": "MainClass.java", "lineNumber": 52 }
"contextMap": {
"ID": "86c3a497-4e67-4eed-9d6a-2e5797324d7b",
"username": "JohnDoe"
"contextStack": [
"topItem",
"anotherItem",
"bottomItem"
Starting with Log4j 2.11.0, we provide the following MongoDB modules:
Added in v2.11.0, dropped in v2.14.0:
log4j-mongodb2
defines the configuration element
MongoDb2
matching the MongoDB Driver version 2.
Added in v2.11.0:
log4j-mongodb3
defines the configuration element
MongoDb3
matching the MongoDB Driver version 3.
Added in v2.14.0:
log4j-mongodb4
defines the configuration element
MongoDb4
matching the MongoDB Driver version 4.
We no longer provide the module
log4j-mongodb
.
The module
log4j-mongodb2
aliases the old configuration element
MongoDb
to
MongoDb2
.
This section details specializations of the
NoSQLAppender
provider for MongoDB using
the MongoDB driver version 3. The NoSQLAppender writes log events to a NoSQL database using an
internal lightweight provider interface.
NoSQLAppender Parameters
The default is
true
, causing exceptions encountered while appending events to be
internally logged and then ignored. When set to
false
exceptions will be propagated to the
caller, instead. You must set this to
false
when wrapping this Appender in a
FailoverAppender
.
|
Filter
|
A Filter to determine if the event should be handled by this Appender. More than one Filter may be
used by using a CompositeFilter.
|
If an integer greater than 0, this causes the appender to buffer log events and flush whenever the
buffer reaches this size.
|
<Configuration status="error">
<Appenders>
<NoSql name="databaseAppender">
<MongoDb3 databaseName="applicationDb" collectionName="applicationLog" server="mongo.example.org"
username="loggingUser" password="abc123" />
</NoSql>
</Appenders>
<Loggers>
<Root level="warn">
<AppenderRef ref="databaseAppender"/>
</Root>
</Loggers>
</Configuration>
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="error">
<Appenders>
<NoSql name="databaseAppender">
<MongoDb3 collectionName="applicationLog" factoryClassName="org.example.db.ConnectionFactory"
factoryMethodName="getNewMongoClient" />
</NoSql>
</Appenders>
<Loggers>
<Root level="warn">
<AppenderRef ref="databaseAppender"/>
</Root>
</Loggers>
</Configuration>
You can define additional fields to log using
KeyValuePair
elements, for example:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
<Appenders>
<NoSql name="MongoDbAppender">
<MongoDb3 databaseName="testDb" collectionName="testCollection" server="localhost"
port="${sys:MongoDBTestPort:-27017}" />
<KeyValuePair key="A" value="1" />
<KeyValuePair key="B" value="2" />
<KeyValuePair key="env1" value="${env:PATH}" />
<KeyValuePair key="env2" value="$${env:PATH}" />
</NoSql>
</Appenders>
<Loggers>
<Root level="ALL">
<AppenderRef ref="MongoDbAppender" />
</Root>
</Loggers>
</Configuration>
This section details specializations of the
NoSQLAppender
provider for MongoDB using
the MongoDB driver version 4. The NoSQLAppender writes log events to a NoSQL database using an
internal lightweight provider interface.
MongoDB3 Provider Parameters
By default, the MongoDB provider inserts records with the instructions
com.mongodb.WriteConcern.ACKNOWLEDGED
. Use this optional attribute to specify the name of
a constant other than
ACKNOWLEDGED
.
|
Class
|
If you specify
writeConcernConstant
, you can use this attribute to specify a class other
than
com.mongodb.WriteConcern
to find the constant on (to create your own custom
instructions).
|
To provide a connection to the MongoDB database, you can use this attribute and
factoryMethodName
to specify a class and static method to get the connection from. The
method must return a
com.mongodb.client.MongoDatabase
or a
com.mongodb.MongoClient
. If the
com.mongodb.client.MongoDatabase
is not authenticated, you must also specify a
username
and
password
. If you use the factory method for providing a connection, you must not specify
the
databaseName
,
server
, or
port
attributes.
|
If you do not specify a
factoryClassName
and
factoryMethodName
for providing
a MongoDB connection, you must specify a MongoDB database name using this attribute. You must also
specify a
username
and
password
. You can optionally also specify a
server
(defaults to localhost), and a
port
(defaults to the default MongoDB
port).
|
Specify the size in bytes of the capped collection to use if enabled. The minimum size is 4096 bytes,
and larger sizes will be increased to the nearest integer multiple of 256. See the capped collection documentation
linked above for more information.
|
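Referring back to the writeConcernConstant parameter above, a hypothetical MongoDb3 element overriding the default write concern might look like this; the connection details are placeholders, and JOURNALED is one of the constants defined on com.mongodb.WriteConcern:
<NoSql name="databaseAppender">
  <MongoDb3 databaseName="applicationDb" collectionName="applicationLog"
            server="mongo.example.org" username="loggingUser" password="abc123"
            writeConcernConstant="JOURNALED" />
</NoSql>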
<Configuration status="WARN">
<Appenders>
<NoSql name="MongoDbAppender">
<MongoDb4 connection="mongodb://log4jUser:12345678@localhost:${sys:MongoDBTestPort:-27017}/testDb.testCollection" />
</NoSql>
</Appenders>
<Loggers>
<Root level="ALL">
<AppenderRef ref="MongoDbAppender" />
</Root>
</Loggers>
</Configuration>
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
<Appenders>
<NoSql name="MongoDbAppender">
<MongoDb4
connection="mongodb://localhost:${sys:MongoDBTestPort:-27017}/testDb.testCollection"
capped="true"
collectionSize="1073741824"/>
</NoSql>
</Appenders>
<Loggers>
<Root level="ALL">
<AppenderRef ref="MongoDbAppender" />
</Root>
</Loggers>
</Configuration>
You can define additional fields to log using
KeyValuePair
elements, for example:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
<Appenders>
<NoSql name="MongoDbAppender">
<MongoDb4 connection="mongodb://localhost:${sys:MongoDBTestPort:-27017}/testDb.testCollection" />
<KeyValuePair key="A" value="1" />
<KeyValuePair key="B" value="2" />
<KeyValuePair key="env1" value="${env:PATH}" />
<KeyValuePair key="env2" value="$${env:PATH}" />
</NoSql>
</Appenders>
<Loggers>
<Root level="ALL">
<AppenderRef ref="MongoDbAppender" />
</Root>
</Loggers>
</Configuration>
This section details specializations of the
NoSQLAppender
provider for CouchDB.
The NoSQLAppender writes log events to a NoSQL database using an internal lightweight provider interface.
MongoDB4 Provider Parameters
Specify the size in bytes of the capped collection to use if enabled. The minimum size is 4096 bytes,
and larger sizes will be increased to the nearest integer multiple of 256. See the capped collection documentation
linked above for more information.
|
<Configuration status="error">
<Appenders>
<NoSql name="databaseAppender">
<CouchDb databaseName="applicationDb" protocol="https" server="couch.example.org"
username="loggingUser" password="abc123" />
</NoSql>
</Appenders>
<Loggers>
<Root level="warn">
<AppenderRef ref="databaseAppender"/>
</Root>
</Loggers>
</Configuration>
The OutputStreamAppender provides the base for many of the other Appenders such as the File and Socket
appenders that write the event to an Output Stream. It cannot be directly configured. Support for
immediateFlush and buffering is provided by the OutputStreamAppender. The OutputStreamAppender uses an
OutputStreamManager to handle the actual I/O, allowing the stream to be shared by Appenders in multiple
configurations.
The RandomAccessFileAppender is similar to the standard
FileAppender
except it is always buffered (this cannot be switched off)
and internally it uses a
ByteBuffer + RandomAccessFile
instead of a
BufferedOutputStream
.
We saw a 20-200% performance improvement compared to
FileAppender with "bufferedIO=true" in our
measurements
.
Similar to the FileAppender,
RandomAccessFileAppender uses a RandomAccessFileManager to actually perform the
file I/O. While RandomAccessFileAppenders
from different Configurations
cannot be shared, the RandomAccessFileManagers can be if the Manager is
accessible. For example, two web applications in a
servlet container can have
their own configuration and safely
write to the same file if Log4j
is in a ClassLoader that is common to
both of them.
CouchDB Provider Parameters
To provide a connection to the CouchDB database, you can use this attribute and
factoryMethodName
to specify a class and static method to get the connection from. The
method must return a
org.lightcouch.CouchDbClient
or a
org.lightcouch.CouchDbProperties
. If you use the factory method for providing a connection,
you must not specify the
databaseName
,
protocol
,
server
,
port
,
username
, or
password
attributes.
|
If you do not specify a
factoryClassName
and
factoryMethodName
for providing
a CouchDB connection, you must specify a CouchDB database name using this attribute. You must also
specify a
username
and
password
. You can optionally also specify a
protocol
(defaults to http),
server
(defaults to localhost), and a
port
(defaults to 80 for http and 443 for https).
|
<Configuration status="warn" name="MyApp">
<Appenders>
<RandomAccessFile name="MyFile" fileName="logs/app.log">
<PatternLayout>
<Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
</PatternLayout>
</RandomAccessFile>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="MyFile"/>
</Root>
</Loggers>
</Configuration>
The RewriteAppender allows the LogEvent to be manipulated before it is processed by another Appender. This
can be used to mask sensitive information such as passwords or to inject information into each event.
The RewriteAppender must be configured with a
RewritePolicy
. The
RewriteAppender should be configured after any Appenders it references to allow it to shut down properly.
RandomAccessFileAppender Parameters
When true - the default, records will be appended to the end
of the file. When set to false,
the file will be cleared before
new records are written.
|
The name of the file to write to. If the file, or any of its
parent directories, do not exist,
they will be created.
|
A Filter to determine if the event should be handled by this
Appender. More than one Filter
may be used by using a CompositeFilter.
When set to true - the default, each write will be followed by a flush.
This will guarantee that the data is passed to the operating system for writing;
it does not guarantee that the data is actually written to a physical device
such as a disk drive.
Note that if this flag is set to false, and the logging activity is sparse,
there may be an indefinite delay in the data eventually making it to the
operating system, because it is held up in a buffer.
This can cause surprising effects such as the logs not
appearing in the tail output of a file immediately after writing to the log.
Flushing after every write is only useful when using this appender with
synchronous loggers. Asynchronous loggers and appenders will
automatically flush at the end of a batch of events, even if
immediateFlush is set to false. This also guarantees the data is passed
to the operating system but is more efficient.
|
Layout
|
The Layout to use to format the LogEvent. If no layout is supplied the default pattern layout
of "%m%n" will be used.
|
The default is
true
, causing exceptions encountered while appending events to be
internally logged and then ignored. When set to
false
exceptions will be propagated to the
caller, instead. You must set this to
false
when wrapping this Appender in a
FailoverAppender
.
|
RewritePolicy is an interface that allows implementations to inspect and possibly modify LogEvents
before they are passed to Appender. RewritePolicy declares a single method named rewrite that must
be implemented. The method is passed the LogEvent and can return the same event or create a new one.
MapRewritePolicy will evaluate LogEvents that contain a MapMessage and will add or update
elements of the Map.
The following configuration shows a RewriteAppender configured to add a product key and its value
to the MapMessage:
<Configuration status="warn" name="MyApp">
<Appenders>
<Console name="STDOUT" target="SYSTEM_OUT">
<PatternLayout pattern="%m%n"/>
</Console>
<Rewrite name="Rewrite">
<AppenderRef ref="STDOUT"/>
<MapRewritePolicy mode="Add">
<KeyValuePair key="product" value="TestProduct"/>
</MapRewritePolicy>
</Rewrite>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="Rewrite"/>
</Root>
</Loggers>
</Configuration>
PropertiesRewritePolicy will add properties configured on the policy to the ThreadContext Map
being logged. The properties will not be added to the actual ThreadContext Map. The property
values may contain variables that will be evaluated when the configuration is processed as
well as when the event is logged.
The following configuration shows a RewriteAppender configured to add a product key and its value
to the MapMessage:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyApp">
<Appenders>
<Console name="STDOUT" target="SYSTEM_OUT">
<PatternLayout pattern="%m%n"/>
</Console>
<Rewrite name="Rewrite">
<AppenderRef ref="STDOUT"/>
<PropertiesRewritePolicy>
<Property name="user">${sys:user.name}</Property>
<Property name="env">${sys:environment}</Property>
</PropertiesRewritePolicy>
</Rewrite>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="Rewrite"/>
</Root>
</Loggers>
</Configuration>
You can use this policy to make loggers in third party code less chatty by changing event levels.
The LoggerNameLevelRewritePolicy will rewrite log event levels for a given logger name prefix.
You configure a LoggerNameLevelRewritePolicy with a logger name prefix and pairs of levels,
where a pair defines a source level and a target level.
The following configuration shows a RewriteAppender configured to map level INFO to DEBUG and level
WARN to INFO for all loggers that start with
com.foo.bar
.
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyApp">
<Appenders>
<Console name="STDOUT" target="SYSTEM_OUT">
<PatternLayout pattern="%m%n"/>
</Console>
<Rewrite name="Rewrite">
<AppenderRef ref="STDOUT"/>
<LoggerNameLevelRewritePolicy logger="com.foo.bar">
<KeyValuePair key="INFO" value="DEBUG"/>
<KeyValuePair key="WARN" value="INFO"/>
</LoggerNameLevelRewritePolicy>
</Rewrite>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="Rewrite"/>
</Root>
</Loggers>
</Configuration>
The RollingFileAppender is an OutputStreamAppender that writes to the File named in the fileName parameter
and rolls the file over according to the TriggeringPolicy and the RolloverStrategy. The
RollingFileAppender uses a RollingFileManager (which extends OutputStreamManager) to actually perform the
file I/O and perform the rollover. While RollingFileAppenders from different Configurations cannot be
shared, the RollingFileManagers can be if the Manager is accessible. For example, two web applications in a
servlet container can have their own configuration and safely
write to the same file if Log4j is in a ClassLoader that is common to both of them.
A RollingFileAppender requires a
TriggeringPolicy
and a
RolloverStrategy
. The triggering policy determines if a rollover should
be performed while the RolloverStrategy defines how the rollover should be done. If no RolloverStrategy
is configured, RollingFileAppender will use the
DefaultRolloverStrategy
.
Since log4j-2.5, a
custom delete action
can be configured in the
DefaultRolloverStrategy to run at rollover. Since 2.8, if no file name is configured then
DirectWriteRolloverStrategy
will be used instead of
DefaultRolloverStrategy.
Since log4j-2.9, a
custom POSIX file attribute view action
can be configured in the
DefaultRolloverStrategy to run at rollover; if none is defined, the POSIX file attribute view inherited from the RollingFileAppender will be applied.
File locking is not supported by the RollingFileAppender.
RewriteAppender Parameters
String
|
The name of the Appenders to call after the LogEvent has been manipulated. Multiple AppenderRef
elements can be configured.
|
Filter
|
A Filter to determine if the event should be handled by this Appender. More than one Filter
may be used by using a CompositeFilter.
|
The default is
true
, causing exceptions encountered while appending events to be
internally logged and then ignored. When set to
false
exceptions will be propagated to the
caller, instead. You must set this to
false
when wrapping this Appender in a
FailoverAppender
.
|
When set to true - the default, each write will be followed by a flush.
This will guarantee that the data is passed to the operating system for writing;
it does not guarantee that the data is actually written to a physical device
such as a disk drive.
Note that if this flag is set to false, and the logging activity is sparse,
there may be an indefinite delay in the data eventually making it to the
operating system, because it is held up in a buffer.
This can cause surprising effects such as the logs not
appearing in the tail output of a file immediately after writing to the log.
Flushing after every write is only useful when using this appender with
synchronous loggers. Asynchronous loggers and appenders will
automatically flush at the end of a batch of events, even if
immediateFlush is set to false. This also guarantees the data is passed
to the operating system but is more efficient.
File attribute permissions in POSIX format to apply whenever the file is created.
The underlying file system must support the
POSIX
file attribute view.
Examples: rw------- or rw-rw-rw-, etc.
File owner to set whenever the file is created.
Changing a file's owner may be restricted for security reasons, in which case an IOException with "Operation not permitted" is thrown.
Only processes with an effective user ID equal to the user ID
of the file, or with appropriate privileges, may change the ownership of a file
if
_POSIX_CHOWN_RESTRICTED
is in effect for the path.
The underlying file system must support the file
owner
attribute view.
The
CompositeTriggeringPolicy
combines multiple triggering policies and returns true if
any of the configured policies return true. The
CompositeTriggeringPolicy
is configured
simply by wrapping other policies in a
Policies
element.
For example, the following XML fragment defines policies that rollover the log when the JVM starts,
when the log size reaches twenty megabytes, and when the current date no longer matches the log’s
start date.
<Policies>
<OnStartupTriggeringPolicy />
<SizeBasedTriggeringPolicy size="20 MB" />
<TimeBasedTriggeringPolicy />
</Policies>
The
CronTriggeringPolicy
triggers rollover based on a cron expression. This policy
is controlled by a timer and is asynchronous to processing log events, so it is possible that log events
from the previous or next time period may appear at the beginning or end of the log file. The
filePattern attribute of the Appender should contain a timestamp otherwise the target file will be
overwritten on each rollover.
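For example, a minimal sketch pairing a cron schedule with a timestamped filePattern so that each rollover targets a fresh file; the paths and the hourly schedule are placeholders:
<RollingFile name="RollingFile" fileName="logs/app.log"
             filePattern="logs/app-%d{yyyy-MM-dd-HH}.log.gz">
  <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
  <CronTriggeringPolicy schedule="0 0 * * * ?"/>
</RollingFile>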
RollingFileAppender Parameters
boolean
|
When true - the default, records will be appended to the end of the file. When set to false,
the file will be cleared before new records are written.
|
boolean
|
When true - the default, records will be written to a buffer and the data will be written to
disk when the buffer is full or, if immediateFlush is set, when the record is written.
File locking cannot be used with bufferedIO. Performance tests have shown that using buffered I/O
significantly improves performance, even if immediateFlush is enabled.
|
boolean
|
The appender creates the file on-demand. The appender only creates the file when a log event
passes all filters and is routed to this appender. Defaults to false.
|
Filter
|
A Filter to determine if the event should be handled by this Appender. More than one Filter
may be used by using a CompositeFilter.
|
String
|
The name of the file to write to. If the file, or any of its parent directories, do not exist,
they will be created.
|
String
|
The pattern of the file name of the archived log file. The format of the pattern is
dependent on the RolloverPolicy that is used. The DefaultRolloverPolicy will accept both
a date/time pattern compatible with
SimpleDateFormat
and/or a %i which represents an integer counter. The pattern also supports interpolation at
runtime so any of the Lookups (such as the
DateLookup
) can
be included in the pattern.
|
Layout
|
The Layout to use to format the LogEvent. If no layout is supplied the default pattern layout
of "%m%n" will be used.
|
The default is
true
, causing exceptions encountered while appending events to be
internally logged and then ignored. When set to
false
exceptions will be propagated to the
caller, instead. You must set this to
false
when wrapping this Appender in a
FailoverAppender
.
|
The
OnStartupTriggeringPolicy
policy causes a rollover if the log file is older than the
current JVM's start time and the minimum file size is met or exceeded.
CronTriggeringPolicy Parameters
String
|
The cron expression. The expression is the same as what is allowed in the Quartz scheduler. See
CronExpression
for a full description of the expression.
|
boolean
|
On startup the cron expression will be evaluated against the file's last modification timestamp.
If the cron expression indicates a rollover should have occurred between that time and the current
time the file will be immediately rolled over.
|
The minimum size the file must have to roll over. A size of zero will cause a roll over no matter
what the file size is. The default value is 1, which will prevent rolling over an empty file.
Google App Engine note:
When running in Google App Engine, the OnStartup policy causes a rollover if the log file is older
than
the time when Log4j initialized
.
(Google App Engine restricts access to certain classes, so Log4j cannot determine the JVM start time with
java.lang.management.ManagementFactory.getRuntimeMXBean().getStartTime()
and falls back to the Log4j initialization time instead.)
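A minimal sketch of the policy in use; the minSize value shown simply restates the default:
<Policies>
  <OnStartupTriggeringPolicy minSize="1" />
</Policies>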
The
SizeBasedTriggeringPolicy
causes a rollover once the file has reached the specified
size. The size can be specified in bytes, with the suffix KB, MB, GB, or TB, for example
20MB
.
The size may also contain a fractional value such as
1.5 MB
. The size is evaluated
using the Java root Locale so a period must always be used for the fractional unit.
When combined with a time-based triggering policy the file pattern must contain a
%i
otherwise the target file will be overwritten on every rollover, as the size-based triggering policy
will not cause the timestamp value in the file name to change. When used without a time-based
triggering policy, the size-based triggering policy will cause the timestamp value to change.
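For instance, a sketch combining the size-based policy with a time-based policy, with the required %i in the file pattern; the size and paths are placeholders:
<RollingFile name="RollingFile" fileName="logs/app.log"
             filePattern="logs/app-%d{yyyy-MM-dd}-%i.log.gz">
  <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
  <Policies>
    <TimeBasedTriggeringPolicy />
    <SizeBasedTriggeringPolicy size="20 MB"/>
  </Policies>
</RollingFile>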
The
TimeBasedTriggeringPolicy
causes a rollover once the date/time pattern no longer
applies to the active file. This policy accepts an
interval
attribute which indicates how
frequently the rollover should occur based on the time pattern and a
modulate
boolean
attribute.
OnStartupTriggeringPolicy Parameters
The default rollover strategy accepts both a date/time pattern and an integer from the filePattern
attribute specified on the RollingFileAppender itself. If the date/time pattern
is present it will be replaced with the current date and time values. If the pattern contains an integer
it will be incremented on each rollover. If the pattern contains both a date/time and integer
in the pattern the integer will be incremented until the result of the date/time pattern changes. If
the file pattern ends with ".gz", ".zip", ".bz2", ".deflate", ".pack200", or ".xz" the resulting archive
will be compressed using the compression scheme that matches the suffix. The formats bzip2, Deflate,
Pack200 and XZ require
Apache Commons Compress
.
In addition, XZ requires
XZ for Java
.
The pattern may also contain lookup references that can be resolved at runtime such as is shown in the example
below.
The default rollover strategy supports three variations for incrementing
the counter. To illustrate how it works, suppose that the min attribute
is set to 1, the max attribute is set to 3, the file name is "foo.log",
and the file name pattern is "foo-%i.log".
By way of contrast, when the fileIndex attribute is set to "min" but all the other settings are the
same, the "fixed window" strategy will be performed.
Finally, as of release 2.8, if the fileIndex attribute is set to "nomax" then the min and max values
will be ignored and file numbering will increment by 1 and each rollover will have an incrementally
higher value with no maximum number of files.
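To make the three variations concrete, hypothetical strategy elements mirroring the min=1/max=3 scenario above:
<!-- "max" (the default): newer archives get higher indexes -->
<DefaultRolloverStrategy fileIndex="max" min="1" max="3"/>
<!-- "min": fixed window, foo-1.log is always the newest archive -->
<DefaultRolloverStrategy fileIndex="min" min="1" max="3"/>
<!-- "nomax" (since 2.8): unbounded numbering, min and max are ignored -->
<DefaultRolloverStrategy fileIndex="nomax"/>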
TimeBasedTriggeringPolicy Parameters
integer
|
How often a rollover should occur based on the most specific time unit in the date pattern.
For example, with a date pattern with hours as the most specific item and an increment of 4, rollovers
would occur every 4 hours.
The default value is 1.
|
boolean
|
Indicates whether the interval should be adjusted to cause the next rollover to occur on
the interval boundary. For example, if the item is hours, the current hour is 3 am and the
interval is 4 then the first rollover will occur at 4 am and the next ones will occur at
8 am, noon, 4pm, etc. The default value is false.
|
Indicates the maximum number of seconds to randomly delay a rollover. By default,
this is 0 which indicates no delay. This setting is useful on servers where multiple
applications are configured to rollover log files at the same time and can spread
the load of doing so across time.
|
foo-1.log
|
During the first rollover foo.log is renamed to foo-1.log. A new foo.log file is created and
starts being written to.
|
foo-2.log, foo-1.log
|
During the second rollover foo.log is renamed to foo-2.log. A new foo.log file is created and
starts being written to.
|
foo-3.log, foo-2.log, foo-1.log
|
During the third rollover foo.log is renamed to foo-3.log. A new foo.log file is created and
starts being written to.
|
foo-3.log, foo-2.log, foo-1.log
|
In the fourth and subsequent rollovers, foo-1.log is deleted, foo-2.log is renamed to
foo-1.log, foo-3.log is renamed to foo-2.log and foo.log is renamed to
foo-3.log. A new foo.log file is created and starts being written to.
|
foo-1.log
|
During the first rollover foo.log is renamed to foo-1.log. A new foo.log file is created and
starts being written to.
|
foo-1.log, foo-2.log
|
During the second rollover foo-1.log is renamed to foo-2.log and foo.log is renamed to
foo-1.log. A new foo.log file is created and starts being written to.
|
foo-1.log, foo-2.log, foo-3.log
|
During the third rollover foo-2.log is renamed to foo-3.log, foo-1.log is renamed to foo-2.log and
foo.log is renamed to foo-1.log. A new foo.log file is created and starts being written to.
|
foo-1.log, foo-2.log, foo-3.log
|
In the fourth and subsequent rollovers, foo-3.log is deleted, foo-2.log is renamed to
foo-3.log, foo-1.log is renamed to foo-2.log and foo.log is renamed to
foo-1.log. A new foo.log file is created and starts being written to.
|
Sets the compression level, 0-9, where 0 = none, 1 = best speed, through 9 = best compression.
Only implemented for ZIP files.
The DirectWriteRolloverStrategy causes log events to be written directly to files represented by the
file pattern. With this strategy file renames are not performed. If the size-based triggering policy
causes multiple files to be written during the specified time period they will be numbered starting
at one and continually incremented until a time-based rollover occurs.
Warning: If the file pattern has a
suffix indicating compression should take place the current file will not be compressed when the
application is shut down. Furthermore, if the time changes such that the file pattern no longer
matches the current file it will not be compressed at startup either.
DefaultRolloverStrategy Parameters
String
|
If set to "max" (the default), files with a higher index will be newer than files with a
smaller index. If set to "min", file renaming and the counter will follow the Fixed Window strategy
described above.
|
integer
|
The maximum value of the counter. Once this value is reached older archives will be
deleted on subsequent rollovers. The default value is 7.
|
Sets the compression level, 0-9, where 0 = none, 1 = best speed, through 9 = best compression.
Only implemented for ZIP files.
Below is a sample configuration that uses a RollingFileAppender with both the time and size based
triggering policies, will create up to 7 archives on the same day (1-7) that are stored in a directory
based on the current year and month, and will compress each
archive using gzip:
<Configuration status="warn" name="MyApp">
<Appenders>
<RollingFile name="RollingFile" fileName="logs/app.log"
filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
<PatternLayout>
<Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy />
<SizeBasedTriggeringPolicy size="250 MB"/>
</Policies>
</RollingFile>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="RollingFile"/>
</Root>
</Loggers>
</Configuration>
This second example shows a rollover strategy that will keep up to 20 files before removing them.
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyApp">
<Appenders>
<RollingFile name="RollingFile" fileName="logs/app.log"
filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
<PatternLayout>
<Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy />
<SizeBasedTriggeringPolicy size="250 MB"/>
</Policies>
<DefaultRolloverStrategy max="20"/>
</RollingFile>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="RollingFile"/>
</Root>
</Loggers>
</Configuration>
Below is a sample configuration that uses a RollingFileAppender with both the time and size based
triggering policies, will create up to 7 archives on the same day (1-7) that are stored in a directory
based on the current year and month, and will compress each
archive using gzip and will roll every 6 hours when the hour is divisible by 6:
<Configuration status="warn" name="MyApp">
<Appenders>
<RollingFile name="RollingFile" fileName="logs/app.log"
filePattern="logs/$${date:yyyy-MM}/app-%d{yyyy-MM-dd-HH}-%i.log.gz">
<PatternLayout>
<Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy interval="6" modulate="true"/>
<SizeBasedTriggeringPolicy size="250 MB"/>
</Policies>
</RollingFile>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="RollingFile"/>
</Root>
</Loggers>
</Configuration>
This sample configuration uses a RollingFileAppender with both the cron and size based
triggering policies, and writes directly to an unlimited number of archive files. The cron
trigger causes a rollover every hour while the file size is limited to 250MB:
<Configuration status="warn" name="MyApp">
<Appenders>
<RollingFile name="RollingFile" filePattern="logs/app-%d{yyyy-MM-dd-HH}-%i.log.gz">
<PatternLayout>
<Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
</PatternLayout>
<Policies>
<CronTriggeringPolicy schedule="0 0 * * * ?"/>
<SizeBasedTriggeringPolicy size="250 MB"/>
</Policies>
</RollingFile>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="RollingFile"/>
</Root>
</Loggers>
</Configuration>
This sample configuration is the same as the previous but limits the number of files saved each hour to 10:
<Configuration status="warn" name="MyApp">
<Appenders>
<RollingFile name="RollingFile" filePattern="logs/app-%d{yyyy-MM-dd-HH}-%i.log.gz">
<PatternLayout>
<Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
</PatternLayout>
<Policies>
<CronTriggeringPolicy schedule="0 0 * * * ?"/>
<SizeBasedTriggeringPolicy size="250 MB"/>
</Policies>
<DirectWriteRolloverStrategy maxFiles="10"/>
</RollingFile>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="RollingFile"/>
</Root>
</Loggers>
</Configuration>
Log4j-2.5 introduces a
Delete
action that gives users more control
over what files are deleted at rollover time than what was possible with the DefaultRolloverStrategy
max
attribute.
The Delete action lets users configure one or more conditions that select the files to delete
relative to a base directory.
Note that it is possible to delete any file, not just rolled over log files, so use this action with care!
With the
testMode
parameter you can test your configuration without accidentally deleting the wrong files.
DirectWriteRolloverStrategy Parameters
String
|
The maximum number of files to allow in the time period matching the file pattern. If the
number of files is exceeded the oldest file will be deleted. If specified, the value must
be greater than 1. If the value is less than zero or omitted then the number of files will
not be limited.
|
If more than one condition is specified,
they all need to accept a path before it is deleted. Conditions can be nested, in which case the
inner condition(s) are evaluated only if the outer condition accepts the path.
If conditions are not nested they may be evaluated in any order.
Conditions can also be combined with the logical operators AND, OR and NOT by using the
IfAll
,
IfAny
and
IfNot
composite conditions.
Users can create custom conditions or use the built-in conditions:
IfFileName
- accepts files whose path (relative to the base path) matches a
regular expression
or a
glob
.
IfLastModified
- accepts files that are as old as or older than the specified
duration
.
IfAccumulatedFileSize
- accepts paths after the accumulated file size threshold is exceeded during the file tree walk.
IfAll - accepts a path if all nested conditions accept it (logical AND).
Nested conditions may be evaluated in any order.
IfAny - accepts a path if one of the nested conditions accepts it (logical OR).
Nested conditions may be evaluated in any order.
IfNot - accepts a path if the nested condition does not accept it (logical NOT).
Required if no PathConditions are specified.
A ScriptCondition element specifying a script.
The ScriptCondition should contain a
Script,
ScriptRef or ScriptFile
element that specifies the logic to be executed.
(See also the
ScriptFilter
documentation for more examples of
configuring ScriptFiles and ScriptRefs.)
The script is passed a number of
parameters
,
including a list of paths found under the base path (up to
maxDepth
)
and must return a list with the paths to delete.
Delete Parameters
The maximum number of levels of directories to visit. A value of 0
means that only the starting file (the base path itself) is visited,
unless denied by the security manager. A value of
Integer.MAX_VALUE indicates that all levels should be visited. The default is 1,
meaning only the files in the specified base directory.
|
boolean
|
If true, files are not deleted but instead a message is printed to the
status logger
at INFO level.
Use this to do a dry run to test if the configuration works as expected. Default is false.
|
A plugin implementing the
PathSorter
interface to sort the files before selecting the files to delete. The default is to sort most recently
modified files first.
|
IfAccumulatedFileCount
- accepts paths after some count threshold is exceeded during the file tree walk.
Required if regex not specified.
Matches the relative path (relative to the base path) using a limited pattern language that resembles regular expressions but with a
simpler syntax
.
|
Required if glob not specified.
Matches the relative path (relative to the base path) using a regular expression as defined by the
Pattern
class.
|
An optional set of nested
PathConditions
. If any nested conditions
exist they all need to accept the file before it is deleted. Nested conditions are only evaluated if the
outer condition accepts a file (if the path name matches).
|
An optional set of nested
PathConditions
. If any nested conditions
exist they all need to accept the file before it is deleted. Nested conditions are only evaluated if the
outer condition accepts a file (if the file is old enough).
|
An optional set of nested
PathConditions
. If any nested conditions
exist they all need to accept the file before it is deleted. Nested conditions are only evaluated if the
outer condition accepts a file (if the threshold count has been exceeded).
|
An optional set of nested
PathConditions
. If any nested conditions
exist they all need to accept the file before it is deleted. Nested conditions are only evaluated if the
outer condition accepts a file (if the threshold accumulated file size has been exceeded).
Below is a sample configuration that uses a RollingFileAppender with the cron
triggering policy configured to trigger every day at midnight.
Archives are stored in a directory based on the current year and month.
All files under the base directory that match the "*/app-*.log.gz" glob and are 60 days old
or older are deleted at rollover time.
<Configuration status="trace" name="MyApp">
<Properties>
<Property name="baseDir">logs</Property>
</Properties>
<Appenders>
<RollingFile name="RollingFile" fileName="${baseDir}/app.log"
filePattern="${baseDir}/$${date:yyyy-MM}/app-%d{yyyy-MM-dd}.log.gz">
<PatternLayout pattern="%d %p %c{1.} [%t] %m%n" />
<CronTriggeringPolicy schedule="0 0 0 * * ?"/>
<DefaultRolloverStrategy>
<Delete basePath="${baseDir}" maxDepth="2">
<IfFileName glob="*/app-*.log.gz" />
<IfLastModified age="P60D" />
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="RollingFile"/>
</Root>
</Loggers>
</Configuration>
Below is a sample configuration that uses a RollingFileAppender with both the time and size based
triggering policies, will create up to 100 archives on the same day (1-100) that are stored in a directory
based on the current year and month, and will compress each
archive using gzip and will roll every hour.
During every rollover, this configuration will delete files that match "*/app-*.log.gz"
and are 30 days old or older,
but keep the most recent 100 GB or the most recent 10 files, whichever comes first.
<Configuration status="trace" name="MyApp">
<Properties>
<Property name="baseDir">logs</Property>
</Properties>
<Appenders>
<RollingFile name="RollingFile" fileName="${baseDir}/app.log"
filePattern="${baseDir}/$${date:yyyy-MM}/app-%d{yyyy-MM-dd-HH}-%i.log.gz">
<PatternLayout pattern="%d %p %c{1.} [%t] %m%n" />
<Policies>
<TimeBasedTriggeringPolicy />
<SizeBasedTriggeringPolicy size="250 MB"/>
</Policies>
<DefaultRolloverStrategy max="100">
<!--
Nested conditions: the inner condition is only evaluated on files
for which the outer conditions are true.
-->
<Delete basePath="${baseDir}" maxDepth="2">
<IfFileName glob="*/app-*.log.gz">
<IfLastModified age="P30D">
<IfAny>
<IfAccumulatedFileSize exceeds="100 GB" />
<IfAccumulatedFileCount exceeds="10" />
</IfAny>
</IfLastModified>
</IfFileName>
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="RollingFile"/>
</Root>
</Loggers>
</Configuration>
Below is a sample configuration that uses a RollingFileAppender with the cron
triggering policy configured to trigger every day at midnight.
Archives are stored in a directory based on the current year and month.
The script returns a list of rolled over files under the base directory dated Friday the 13th.
The Delete action will delete all files returned by the script.
<Configuration status="trace" name="MyApp">
<Properties>
<Property name="baseDir">logs</Property>
</Properties>
<Appenders>
<RollingFile name="RollingFile" fileName="${baseDir}/app.log"
filePattern="${baseDir}/$${date:yyyy-MM}/app-%d{yyyyMMdd}.log.gz">
<PatternLayout pattern="%d %p %c{1.} [%t] %m%n" />
<CronTriggeringPolicy schedule="0 0 0 * * ?"/>
<DefaultRolloverStrategy>
<Delete basePath="${baseDir}" maxDepth="2">
<ScriptCondition>
<Script name="superstitious" language="groovy"><![CDATA[
import java.nio.file.*;

def result = [];
def pattern = ~/\d*\/app-(\d*)\.log\.gz/;
pathList.each { pathWithAttributes ->
    def relative = basePath.relativize pathWithAttributes.path
    statusLogger.trace 'SCRIPT: relative path=' + relative + " (base=$basePath)";

    // remove files dated Friday the 13th
    def matcher = pattern.matcher(relative.toString());
    if (matcher.find()) {
        def dateString = matcher.group(1);
        def calendar = Date.parse("yyyyMMdd", dateString).toCalendar();
        def friday13th = calendar.get(Calendar.DAY_OF_MONTH) == 13 \
            && calendar.get(Calendar.DAY_OF_WEEK) == Calendar.FRIDAY;
        if (friday13th) {
            result.add pathWithAttributes;
            statusLogger.trace 'SCRIPT: deleting path ' + pathWithAttributes;
        }
    }
}
statusLogger.trace 'SCRIPT: returning ' + result;
result;
]]>
</Script>
</ScriptCondition>
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="RollingFile"/>
</Root>
</Loggers>
</Configuration>
Log4j-2.9 introduces a
PosixViewAttribute
action that gives users more control
over which file attribute permissions, owner and group should be applied.
The PosixViewAttribute action lets users configure one or more conditions that select the eligible files
relative to a base directory.
ScriptCondition Parameters
Script, ScriptFile or ScriptRef
|
The Script element that specifies the logic to be executed. The script is passed
a list of paths found under the base path and must return the paths to delete as a
java.util.List<
PathWithAttributes
>
.
See also the
ScriptFilter
documentation for an example of
how ScriptFiles and ScriptRefs can be configured.
|
java.util.List<
PathWithAttributes
>
|
The list of paths found under the base path up to the specified max depth,
sorted most recently modified files first.
The script is free to modify and return this list.
|
File attribute permissions in POSIX format to apply when the action is executed.
The underlying file system must support the
POSIX
file attribute view.
Examples: rw------- or rw-rw-rw-, etc.
File owner to set when the action is executed.
Changing a file's owner may be restricted for security reasons, in which case an IOException with "Operation not permitted" is thrown.
Only processes with an effective user ID equal to the user ID
of the file, or with appropriate privileges, may change the ownership of a file
if
_POSIX_CHOWN_RESTRICTED
is in effect for the path.
The underlying file system must support the file
owner
attribute view.
<Configuration status="trace" name="MyApp">
<Properties>
<Property name="baseDir">logs</Property>
</Properties>
<Appenders>
<RollingFile name="RollingFile" fileName="${baseDir}/app.log"
filePattern="${baseDir}/$${date:yyyy-MM}/app-%d{yyyyMMdd}.log.gz"
filePermissions="rw-------">
<PatternLayout pattern="%d %p %c{1.} [%t] %m%n" />
<CronTriggeringPolicy schedule="0 0 0 * * ?"/>
<DefaultRolloverStrategy stopCustomActionsOnError="true">
<PosixViewAttribute basePath="${baseDir}/$${date:yyyy-MM}" filePermissions="r--r--r--">
<IfFileName glob="*.gz" />
</PosixViewAttribute>
</DefaultRolloverStrategy>
</RollingFile>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="RollingFile"/>
</Root>
</Loggers>
</Configuration>
The RollingRandomAccessFileAppender is similar to the standard
RollingFileAppender
except it is always buffered (this cannot be switched off) and
internally it uses a
ByteBuffer + RandomAccessFile
instead of a
BufferedOutputStream
.
We saw a 20-200% performance improvement compared to
RollingFileAppender with "bufferedIO=true"
in our
measurements
.
The RollingRandomAccessFileAppender writes to the File named in the
fileName parameter and rolls the file over according to the
TriggeringPolicy and the RolloverPolicy.
Similar to the RollingFileAppender, RollingRandomAccessFileAppender uses a RollingRandomAccessFileManager
to actually perform the file I/O and perform the rollover. While RollingRandomAccessFileAppenders
from different Configurations cannot be shared, the RollingRandomAccessFileManagers can be
if the Manager is accessible. For example, two web applications in a servlet
container can have their own configuration and safely write to the
same file if Log4j is in a ClassLoader that is common to both of them.
A RollingRandomAccessFileAppender requires a
TriggeringPolicy
and a
RolloverStrategy
.
The triggering policy determines if a rollover should be performed
while the RolloverStrategy defines how the rollover should be done.
If no RolloverStrategy is configured, RollingRandomAccessFileAppender will use the
DefaultRolloverStrategy
.
Since log4j-2.5, a
custom delete action
can be configured in the
DefaultRolloverStrategy to run at rollover.
File locking is not supported by the RollingRandomAccessFileAppender.
PosixViewAttribute Parameters
The maximum number of levels of directories to visit. A value of 0
means that only the starting file (the base path itself) is visited,
unless denied by the security manager. A value of
Integer.MAX_VALUE indicates that all levels should be visited. The default is 1,
meaning only the files in the specified base directory.
|
File attribute permissions in POSIX format to apply whenever the file is created.
The underlying file system must support the
POSIX
file attribute view.
Examples:
rw-------
or
rw-rw-rw-
etc.
File owner to set whenever the file is created.
Changing a file's owner may be restricted for security reasons, in which case an IOException with "Operation not permitted" is thrown.
Only processes with an effective user ID equal to the user ID
of the file, or with appropriate privileges, may change the ownership of a file
if
_POSIX_CHOWN_RESTRICTED
is in effect for the path.
The underlying file system must support the file
owner
attribute view.
Below is a sample configuration that uses a RollingRandomAccessFileAppender
with both the time and size based
triggering policies, will create
up to 7 archives on the same day (1-7) that
are stored in a
directory
based on the current year and month, and will compress each
archive using gzip:
<Configuration status="warn" name="MyApp">
<Appenders>
<RollingRandomAccessFile name="RollingRandomAccessFile" fileName="logs/app.log"
filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
<PatternLayout>
<Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy />
<SizeBasedTriggeringPolicy size="250 MB"/>
</Policies>
</RollingRandomAccessFile>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="RollingRandomAccessFile"/>
</Root>
</Loggers>
</Configuration>
This second example shows a rollover strategy that will keep up to
20 files before removing them.
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyApp">
<Appenders>
<RollingRandomAccessFile name="RollingRandomAccessFile" fileName="logs/app.log"
filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
<PatternLayout>
<Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy />
<SizeBasedTriggeringPolicy size="250 MB"/>
</Policies>
<DefaultRolloverStrategy max="20"/>
</RollingRandomAccessFile>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="RollingRandomAccessFile"/>
</Root>
</Loggers>
</Configuration>
Below is a sample configuration that uses a RollingRandomAccessFileAppender
with both the time and size based
triggering policies, will create
up to 7 archives on the same day (1-7) that
are stored in a
directory
based on the current year and month, and will compress each
archive using gzip and will roll every 6 hours when the hour is
divisible
by 6:
<Configuration status="warn" name="MyApp">
<Appenders>
<RollingRandomAccessFile name="RollingRandomAccessFile" fileName="logs/app.log"
filePattern="logs/$${date:yyyy-MM}/app-%d{yyyy-MM-dd-HH}-%i.log.gz">
<PatternLayout>
<Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy interval="6" modulate="true"/>
<SizeBasedTriggeringPolicy size="250 MB"/>
</Policies>
</RollingRandomAccessFile>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="RollingRandomAccessFile"/>
</Root>
</Loggers>
</Configuration>
The RoutingAppender evaluates LogEvents and then routes them to a subordinate Appender. The target
Appender may be an appender previously configured and may be referenced by its name or the
Appender can be dynamically created as needed. The RoutingAppender should be configured after any
Appenders it references to allow it to shut down properly.
You can also configure a RoutingAppender with scripts: you can run a script when the appender starts
and when a route is chosen for a log event.
RollingRandomAccessFileAppender Parameters
When true - the default, records will be appended to the end
of the file. When set to false,
the file will be cleared before
new records are written.
|
A Filter to determine if the event should be handled by this
Appender. More than one Filter
may be used by using a
CompositeFilter.
|
The name of the file to write to. If the file, or any of its
parent directories, do not exist,
they will be created.
The pattern of the file name of the archived log file. The format
of the pattern is dependent on the RolloverStrategy that is used. The DefaultRolloverStrategy
will accept both a date/time pattern compatible with
SimpleDateFormat
and/or a %i which represents an integer counter. The integer counter
allows specifying a padding, like %3i for space-padding the counter to
3 digits or (usually more useful) %03i for zero-padding the counter to
3 digits. The pattern also supports interpolation at runtime so any of the Lookups (such as the
DateLookup
can be included in the pattern.
When set to true - the default, each write will be followed by a flush.
This will guarantee the data is written
to disk but could impact performance.
Flushing after every write is only useful when using this
appender with synchronous loggers. Asynchronous loggers and
appenders will automatically flush at the end of a batch of events,
even if immediateFlush is set to false. This also guarantees
the data is written to disk but is more efficient.
|
Layout
|
The Layout to use to format the LogEvent. If no layout is supplied the default pattern layout
of "%m%n" will be used.
|
The default is
true
, causing exceptions encountered while appending events to be
internally logged and then ignored. When set to
false
exceptions will be propagated to the
caller, instead. You must set this to
false
when wrapping this Appender in a
FailoverAppender
.
|
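For instance, here is a minimal sketch (the file names are illustrative) of a filePattern that combines a date pattern with a zero-padded counter, producing archives such as app-2024-06-01-001.log.gz:
<RollingRandomAccessFile name="Rolling" fileName="logs/app.log"
                         filePattern="logs/app-%d{yyyy-MM-dd}-%03i.log.gz">
  <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
  <Policies>
    <SizeBasedTriggeringPolicy size="250 MB"/>
  </Policies>
</RollingRandomAccessFile>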
In this example, the script causes the "ServiceWindows" route to be the default route on Windows and "ServiceOther" on all other operating systems. Note that the List Appender is one of our test appenders; any appender can be used here, it is only used as a shorthand.
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN" name="RoutingTest">
<Appenders>
<Routing name="Routing">
<Script name="RoutingInit" language="JavaScript"><![CDATA[
java.lang.System.getProperty("os.name").search("Windows") > -1 ? "ServiceWindows" : "ServiceOther";]]>
</Script>
<Routes>
<Route key="ServiceOther">
<List name="List1" />
</Route>
<Route key="ServiceWindows">
<List name="List2" />
</Route>
</Routes>
</Routing>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="Routing" />
</Root>
</Loggers>
</Configuration>
The Routes element accepts a single attribute named "pattern". The pattern is evaluated
against all the registered Lookups and the result is used to select a Route. Each Route may be
configured with a key. If the key matches the result of evaluating the pattern then that Route
will be selected. If no key is specified on a Route then that Route is the default. Only one Route
can be configured as the default.
The Routes element may contain a Script child element. If specified, the Script is run for each
log event and returns the String Route key to use.
You must specify either the pattern attribute or the Script element, but not both.
Each Route must reference an Appender. If the Route contains a ref attribute then the
Route will reference an Appender that was defined in the configuration. If the Route contains an
Appender definition then an Appender will be created within the context of the RoutingAppender and
will be reused each time a matching Appender name is referenced through a Route.
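As a minimal sketch of these rules (the appender names are illustrative and drawn from the samples below), the following Routes element selects a Route by pattern, uses a keyed Route that references a previously configured appender, and falls back to a keyless default Route:
<Routing name="Routing">
  <Routes pattern="$${sd:type}">
    <!-- Selected when the pattern evaluates to "Audit"; references a configured appender. -->
    <Route key="Audit" ref="AuditLogger"/>
    <!-- No key: this Route is the default. -->
    <Route ref="STDOUT"/>
  </Routes>
</Routing>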
This script is passed the following variables:
RoutingAppender Parameters
Parameter Name | Type | Description
filter | Filter | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter.
ignoreExceptions | boolean | The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false, exceptions will be propagated to the caller instead. You must set this to false when wrapping this Appender in a FailoverAppender.
In this example, the script runs for each log event and picks a route based on the presence of a
Marker named "AUDIT".
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN" name="RoutingTest">
<Appenders>
<Console name="STDOUT" target="SYSTEM_OUT" />
<Flume name="AuditLogger" compress="true">
<Agent host="192.168.10.101" port="8800"/>
<Agent host="192.168.10.102" port="8800"/>
<RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
</Flume>
<Routing name="Routing">
<Routes>
<Script name="RoutingInit" language="JavaScript"><![CDATA[
if (logEvent.getMarker() != null && logEvent.getMarker().isInstanceOf("AUDIT")) {
    return "AUDIT";
} else if (logEvent.getContextMap().containsKey("UserId")) {
    return logEvent.getContextMap().get("UserId");
}
return "STDOUT";]]>
</Script>
<Route>
<RollingFile
name="Rolling-${mdc:UserId}"
fileName="${mdc:UserId}.log"
filePattern="${mdc:UserId}.%i.log.gz">
<PatternLayout>
<pattern>%d %p %c{1.} [%t] %m%n</pattern>
</PatternLayout>
<SizeBasedTriggeringPolicy size="500" />
</RollingFile>
</Route>
<Route ref="AuditLogger" key="AUDIT"/>
<Route ref="STDOUT" key="STDOUT"/>
</Routes>
<IdlePurgePolicy timeToLive="15" timeUnit="minutes"/>
</Routing>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="Routing" />
</Root>
</Loggers>
</Configuration>
The RoutingAppender can be configured with a PurgePolicy whose purpose is to stop and remove dormant Appenders that have been dynamically created by the RoutingAppender. Log4j currently provides the IdlePurgePolicy as the only PurgePolicy available for cleaning up the Appenders. The IdlePurgePolicy accepts two attributes: timeToLive, which is the number of timeUnits the Appender should survive without having any events sent to it, and timeUnit, the String representation of java.util.concurrent.TimeUnit, which is used with the timeToLive attribute.
Below is a sample configuration that uses a RoutingAppender to route all Audit events to a FlumeAppender, while all other events are routed to a RollingFileAppender that captures only the specific event type. Note that the AuditLogger appender was predefined, while the RollingFileAppenders are created as needed.
<Configuration status="warn" name="MyApp">
<Appenders>
<Flume name="AuditLogger" compress="true">
<Agent host="192.168.10.101" port="8800"/>
<Agent host="192.168.10.102" port="8800"/>
<RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
</Flume>
<Routing name="Routing">
<Routes pattern="$${sd:type}">
<Route>
<RollingFile name="Rolling-${sd:type}" fileName="${sd:type}.log"
filePattern="${sd:type}.%i.log.gz">
<PatternLayout>
<pattern>%d %p %c{1.} [%t] %m%n</pattern>
</PatternLayout>
<SizeBasedTriggeringPolicy size="500" />
</RollingFile>
</Route>
<Route ref="AuditLogger" key="Audit"/>
</Routes>
<IdlePurgePolicy timeToLive="15" timeUnit="minutes"/>
</Routing>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="Routing"/>
</Root>
</Loggers>
</Configuration>
Sends an e-mail when a specific logging event occurs, typically on errors or fatal errors.
The number of logging events delivered in this e-mail depends on the value of the BufferSize option. The SMTPAppender keeps only the last BufferSize logging events in its cyclic buffer. This keeps memory requirements at a reasonable level while still delivering useful application context. All events in the buffer are included in the email. The buffer will contain the most recent events of level TRACE to WARN preceding the event that triggered the email.
The default behavior is to trigger sending an email whenever an ERROR or higher severity event is logged and to format it as HTML. The circumstances under which the email is sent can be controlled by setting one or more filters on the Appender; see the sketch after the sample configuration below. As with other Appenders, the formatting can be controlled by specifying a Layout for the Appender.
<Configuration status="warn" name="MyApp">
<Appenders>
<SMTP name="Mail" subject="Error Log" to="[email protected]" from="[email protected]"
smtpHost="localhost" smtpPort="25" bufferSize="50">
</SMTP>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="Mail"/>
</Root>
</Loggers>
</Configuration>
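For instance, as a sketch of controlling when mail is sent (the filter settings here are an assumption for illustration), a ThresholdFilter can restrict the trigger to FATAL events:
<SMTP name="Mail" subject="Error Log" to="[email protected]" from="[email protected]"
      smtpHost="localhost" smtpPort="25" bufferSize="50">
  <!-- Only FATAL events trigger an email; all other events are denied by the filter. -->
  <ThresholdFilter level="FATAL" onMatch="ACCEPT" onMismatch="DENY"/>
</SMTP>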
When the configuration is built, the ScriptAppenderSelector appender calls a Script to compute an appender name. Log4j then creates one of the appenders listed under AppenderSet using the name of the ScriptAppenderSelector. After configuration, Log4j ignores the ScriptAppenderSelector. Log4j only builds the one selected appender from the configuration tree, and ignores the other AppenderSet child nodes.
In the following example, the script selects "MyCustomWindowsAppender" on Windows and "MySyslogAppender" on all other operating systems. The appender is recorded under the name of the ScriptAppenderSelector, "SelectIt", not the name of the selected appender.
<Configuration status="WARN" name="ScriptAppenderSelectorExample">
<Appenders>
<ScriptAppenderSelector name="SelectIt">
<Script language="JavaScript"><![CDATA[
java.lang.System.getProperty("os.name").search("Windows") > -1 ? "MyCustomWindowsAppender" : "MySyslogAppender";]]>
</Script>
<AppenderSet>
<MyCustomWindowsAppender name="MyAppender" ... />
<SyslogAppender name="MySyslog" ... />
</AppenderSet>
</ScriptAppenderSelector>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="SelectIt" />
</Root>
</Loggers>
</Configuration>
The SocketAppender is an OutputStreamAppender that writes its output to a remote destination specified by a host and port. The data can be sent over either TCP or UDP and can be sent in any format. You can optionally secure communication with SSL. Note that the TCP and SSL variants write to the socket as a stream and do not expect a response from the target destination. Due to limitations of the TCP protocol, when the target server closes its connection, some log events may continue to appear to succeed until a closed-connection exception is raised, causing those events to be lost. If guaranteed delivery is required, a protocol that requires acknowledgements must be used.
SocketAppender Parameters
<Configuration status="warn" name="MyApp">
<Appenders>
<Socket name="socket" host="localhost" port="9500">
<JsonLayout properties="true"/>
</Socket>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="socket"/>
</Root>
</Loggers>
</Configuration>
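The transport is selected with the protocol attribute; as a sketch (the host and port are illustrative), the same appender configured for UDP instead of the default TCP:
<Socket name="socket" host="localhost" port="9500" protocol="UDP">
  <JsonLayout properties="true"/>
</Socket>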
This is a secured SSL configuration:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyApp">
<Appenders>
<Socket name="socket" host="localhost" port="9500">
<JsonLayout properties="true"/>
<KeyStore location="log4j2-keystore.jks" passwordEnvironmentVariable="KEYSTORE_PASSWORD"/>
<TrustStore location="truststore.jks" passwordFile="${sys:user.home}/truststore.pwd"/>
</Socket>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="socket"/>
</Root>
</Loggers>
</Configuration>
Several appenders can be configured to use either a plain network connection or a Secure Socket Layer (SSL)
connection. This section documents the parameters available for SSL configuration.
SMTPAppender Parameters
Parameter Name | Type | Description
filter | Filter | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter.
ignoreExceptions | boolean | The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false, exceptions will be propagated to the caller instead. You must set this to false when wrapping this Appender in a FailoverAppender.
Parameter Name | Type | Description
host | String | The name or address of the system that is listening for log events. This parameter is required. If the host name resolves to multiple IP addresses the TCP and SSL variations will fail over to the next IP address when a connection is lost.
filter | Filter | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter.
immediateFail | boolean | When set to true, log events will not wait to try to reconnect and will fail immediately if the socket is not available.
immediateFlush | boolean | When set to true - the default, each write will be followed by a flush. This will guarantee the data is written to disk but could impact performance.
bufferedIo | boolean | When true - the default, events are written to a buffer and the data will be written to the socket when the buffer is full or, if immediateFlush is set, when the record is written.
reconnectionDelayMillis | integer | If set to a value greater than 0, after an error the SocketManager will attempt to reconnect to the server after waiting the specified number of milliseconds. If the reconnect fails then an exception will be thrown (which can be caught by the application if ignoreExceptions is set to false).
connectTimeoutMillis | integer | The connect timeout in milliseconds. The default is 0 (infinite timeout, like Socket.connect() methods).
ignoreExceptions | boolean | The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false, exceptions will be propagated to the caller instead. You must set this to false when wrapping this Appender in a FailoverAppender.
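As a minimal sketch combining the connection-related parameters above (the host, port, and timing values are illustrative), a TCP socket appender that waits 5 seconds before attempting to reconnect after an error and gives up connecting after 10 seconds:
<Socket name="socket" host="localhost" port="9500" protocol="TCP"
        reconnectionDelayMillis="5000" connectTimeoutMillis="10000"
        immediateFail="false">
  <JsonLayout properties="true"/>
</Socket>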
The trust store is meant to contain the CA certificates you are willing to trust when a remote party presents its certificate. It determines whether the remote authentication credentials (and thus the connection) should be trusted. In some cases, the key store and the trust store can be one and the same, although it is often better practice to use distinct stores (especially when they are file-based).
SSL Configuration Parameters
Parameter Name | Type | Description
protocol | String | The SSL protocol to use, TLS if omitted. A single value may enable multiple protocols, see the JVM documentation for details.
trustStore | TrustStore | Contains the CA certificates of the remote counterparty. Determines whether the remote authentication credentials (and thus the connection) should be trusted.
passwordEnvironmentVariable | String | Name of an environment variable that holds the password. Cannot be combined with either password or passwordFile.
type | String | Optional KeyStore type, e.g. JKS, PKCS12, PKCS11, BKS, Windows-MY/Windows-ROOT, KeychainStore, etc. The default is JKS. See also Standard types.
<KeyStore location="log4j2-keystore.jks" passwordEnvironmentVariable="KEYSTORE_PASSWORD"/>
<TrustStore location="truststore.jks" passwordFile="${sys:user.home}/truststore.pwd"/>
The SyslogAppender is a SocketAppender that writes its output to a remote destination specified by a host and port in a format that conforms with either the BSD Syslog format or the RFC 5424 format. The data can be sent over either TCP or UDP.
SyslogAppender Parameters
Below is a sample configuration with two SyslogAppenders, one using the BSD format and one using RFC 5424:
<Configuration status="warn" name="MyApp">
<Appenders>
<Syslog name="bsd" host="localhost" port="514" protocol="TCP"/>
<Syslog name="RFC5424" format="RFC5424" host="localhost" port="8514"
protocol="TCP" appName="MyApp" includeMDC="true"
facility="LOCAL0" enterpriseNumber="18060" newLine="true"
messageId="Audit" id="App"/>
</Appenders>
<Loggers>
<Logger name="com.mycorp" level="error">
<AppenderRef ref="RFC5424"/>
</Logger>
<Root level="error">
<AppenderRef ref="bsd"/>
</Root>
</Loggers>
</Configuration>
For SSL, this appender writes its output to a remote destination specified by a host and port over SSL, in a format that conforms with either the BSD Syslog format or the RFC 5424 format:
<Configuration status="warn" name="MyApp">
<Appenders>
<Syslog name="bsd" host="localhost" port="6514" protocol="SSL">
<KeyStore location="log4j2-keystore.jks" passwordEnvironmentVariable="KEYSTORE_PASSWORD"/>
<TrustStore location="truststore.jks" passwordFile="${sys:user.home}/truststore.pwd"/>
</Syslog>
</Appenders>
<Loggers>
<Root level="error">
<AppenderRef ref="bsd"/>
</Root>
</Loggers>
</Configuration>
The ZeroMQ appender uses the JeroMQ library to send log events to one or more ZeroMQ endpoints. This is a simple JeroMQ configuration:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration name="JeroMQAppenderTest" status="TRACE">
<Appenders>
<JeroMQ name="JeroMQAppender">
<Property name="endpoint">tcp://*:5556</Property>
<Property name="endpoint">ipc://info-topic</Property>
</JeroMQ>
</Appenders>
<Loggers>
<Root level="info">
<AppenderRef ref="JeroMQAppender"/>
</Root>
</Loggers>
</Configuration>
The table below describes all options. Please consult the JeroMQ and ZeroMQ documentation for details.
TrustStore Configuration Parameters
Parameter Name | Type | Description
passwordEnvironmentVariable | String | Name of an environment variable that holds the password. Cannot be combined with either password or passwordFile.
type | String | Optional KeyStore type, e.g. JKS, PKCS12, PKCS11, BKS, Windows-MY/Windows-ROOT, KeychainStore, etc. The default is JKS. See also Standard types.
Parameter Name | Type | Description
charset | String | The character set to use when converting the syslog String to a byte array. The String must be a valid Charset. If not specified, the default system Charset will be used.
connectTimeoutMillis | integer | The connect timeout in milliseconds. The default is 0 (infinite timeout, like Socket.connect() methods).
filter | Filter | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter.
facility | String | The facility is used to try to classify the message. The facility option must be set to one of "KERN", "USER", "MAIL", "DAEMON", "AUTH", "SYSLOG", "LPR", "NEWS", "UUCP", "CRON", "AUTHPRIV", "FTP", "NTP", "AUDIT", "ALERT", "CLOCK", "LOCAL0", "LOCAL1", "LOCAL2", "LOCAL3", "LOCAL4", "LOCAL5", "LOCAL6", or "LOCAL7". These values may be specified as upper or lower case characters.
format | String | If set to "RFC5424" the data will be formatted in accordance with RFC 5424. Otherwise, it will be formatted as a BSD Syslog record. Note that although BSD Syslog records are required to be 1024 bytes or shorter the SyslogLayout does not truncate them. The RFC5424Layout also does not truncate records since the receiver must accept records of up to 2048 bytes and may accept records that are longer.
id | String | The default structured data id to use when formatting according to RFC 5424. If the LogEvent contains a StructuredDataMessage the id from the Message will be used instead of this value.
ignoreExceptions | boolean | The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false, exceptions will be propagated to the caller instead. You must set this to false when wrapping this Appender in a FailoverAppender.
immediateFail | boolean | When set to true, log events will not wait to try to reconnect and will fail immediately if the socket is not available.
immediateFlush | boolean | When set to true - the default, each write will be followed by a flush. This will guarantee the data is written to disk but could impact performance.
includeMdc | boolean | Indicates whether data from the ThreadContextMap will be included in the RFC 5424 Syslog record. Defaults to true.
loggerFields | List of KeyValuePairs | Allows arbitrary PatternLayout patterns to be included as specified ThreadContext fields; no default specified. To use, include a <LoggerFields> nested element, containing one or more <KeyValuePair> elements. Each <KeyValuePair> must have a key attribute, which specifies the key name which will be used to identify the field within the MDC Structured Data element, and a value attribute, which specifies the PatternLayout pattern to use as the value. A sketch follows this table.
mdcExcludes | String | A comma separated list of mdc keys that should be excluded from the LogEvent. This is mutually exclusive with the mdcIncludes attribute. This attribute only applies to RFC 5424 syslog records.
mdcIncludes | String | A comma separated list of mdc keys that should be included in the LogEvent. Any keys in the MDC not found in the list will be excluded. This option is mutually exclusive with the mdcExcludes attribute. This attribute only applies to RFC 5424 syslog records.
mdcRequired | String | A comma separated list of mdc keys that must be present in the MDC. If a key is not present a LoggingException will be thrown. This attribute only applies to RFC 5424 syslog records.
mdcPrefix | String | A string that should be prepended to each MDC key in order to distinguish it from event attributes. The default string is "mdc:". This attribute only applies to RFC 5424 syslog records.
reconnectionDelayMillis | integer | If set to a value greater than 0, after an error the SocketManager will attempt to reconnect to the server after waiting the specified number of milliseconds. If the reconnect fails then an exception will be thrown (which can be caught by the application if ignoreExceptions is set to false).
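As a sketch of the loggerFields parameter described above (the key names and patterns are illustrative), the nested element looks like this inside a Syslog appender:
<Syslog name="RFC5424" format="RFC5424" host="localhost" port="8514"
        protocol="TCP" appName="MyApp" facility="LOCAL0">
  <LoggerFields>
    <!-- Each KeyValuePair maps a Structured Data field name to a PatternLayout pattern. -->
    <KeyValuePair key="thread" value="%t"/>
    <KeyValuePair key="priority" value="%p"/>
  </LoggerFields>
</Syslog>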
JeroMQ Parameters
Parameter Name | Type | Description
layout | Layout | The Layout to use to format the LogEvent. If no layout is supplied the default pattern layout of "%m%n" will be used.