
Log and trace InfluxDB Enterprise operations

InfluxDB writes log output, by default, to stderr. Depending on your use case, this log information can be written to another location. Some service managers may override this default.

Logging locations

Run InfluxDB directly

If you run InfluxDB directly, using the node binaries (influxdb-meta, influxd, or influx-enterprise), all logs are written to stderr. You can redirect this log output as you would any other output to stderr, like so:

influxdb-meta 2>$HOME/my_log_file # Meta nodes
influxd 2>$HOME/my_log_file # Data nodes
influx-enterprise 2>$HOME/my_log_file # Enterprise Web

Launched as a service

sysvinit

If InfluxDB was installed using a pre-built package, and then launched as a service, stderr is redirected to /var/log/influxdb/<node-type>.log, and all log data will be written to that file. You can override this location by setting the variable STDERR in the file /etc/default/<node-type>.

For example, if on a data node /etc/default/influxdb contains:

STDERR=/dev/null

all log data will be discarded. You can similarly redirect stdout by setting STDOUT in the same file. By default, stdout is sent to /dev/null when InfluxDB is launched as a service.

InfluxDB must be restarted to pick up any changes to /etc/default/<node-type>.
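
For example, a data node's /etc/default/influxdb could keep stderr in a custom log file while discarding stdout entirely (the log file path below is only an illustration):

STDERR=/var/log/influxdb/influxdb.log # example path; choose a location appropriate for your system
STDOUT=/dev/null

After editing the file, restart the service so the change takes effect:

sudo service influxdb restart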

Meta nodes

For meta nodes, the <node-type> is influxdb-meta. The default log file is /var/log/influxdb/influxdb-meta.log. The service configuration file is /etc/default/influxdb-meta.

Data nodes

For data nodes, the <node-type> is influxdb. The default log file is /var/log/influxdb/influxdb.log. The service configuration file is /etc/default/influxdb.

Enterprise Web

For Enterprise Web nodes, the <node-type> is influx-enterprise. The default log file is /var/log/influxdb/influx-enterprise.log. The service configuration file is /etc/default/influx-enterprise.

systemd

Starting with version 1.0, InfluxDB on systemd systems no longer writes files to /var/log/influxdb/<node-type>.log by default, and instead uses the system-configured default for logging (usually journald). On most systems, the logs are directed to the systemd journal and can be accessed with the command:

sudo journalctl -u <node-type>.service

Please consult the systemd journald documentation for configuring journald.
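
For example, on a data node you can follow new log entries as they arrive, or limit output to recent entries, using standard journalctl options:

sudo journalctl -u influxdb.service -f # follow new log entries
sudo journalctl -u influxdb.service --since "1 hour ago" # show only entries from the last hour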

Meta nodes

For meta nodes, the <node-type> is influxdb-meta. The default log command is sudo journalctl -u influxdb-meta.service. The service configuration file is /etc/default/influxdb-meta.

Data nodes

For data nodes, the <node-type> is influxdb. The default log command is sudo journalctl -u influxdb.service. The service configuration file is /etc/default/influxdb.

Enterprise Web

For Enterprise Web nodes, the <node-type> is influx-enterprise. The default log command is sudo journalctl -u influx-enterprise.service. The service configuration file is /etc/default/influx-enterprise.

Use logrotate

You can use logrotate to rotate the log files generated by InfluxDB on systems where logs are written to flat files. If you used the package install on a sysvinit system, the logrotate configuration file is installed in /etc/logrotate.d.
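
If you maintain your own rotation rule instead, a minimal sketch for a data node log might look like the following (the rotation frequency, retention count, and use of copytruncate are assumptions; adjust them for your environment):

/var/log/influxdb/influxdb.log {
    daily            # rotate once per day (assumption)
    rotate 7         # keep one week of rotated logs (assumption)
    compress
    missingok
    notifempty
    copytruncate     # truncate in place so the running process keeps writing to the same file
}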

Redirect HTTP access logging

InfluxDB 1.5 introduces the option to log HTTP request traffic separately from the other InfluxDB log output. When HTTP request logging is enabled, the HTTP logs are intermingled by default with internal InfluxDB logging. By redirecting the HTTP request log entries to a separate file, both log files are easier to read, monitor, and debug.

See Redirecting HTTP request logging in the InfluxDB OSS documentation.
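
As a sketch of what this looks like in practice (the exact option names are documented in the linked OSS page), the [http] section of the data node configuration file enables request logging and points it at a dedicated file:

[http]
  log-enabled = true                                  # log HTTP requests
  access-log-path = "/var/log/influxdb/access.log"    # example path; write request entries to a separate file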

Structured logging

InfluxDB 1.5 supports structured logging, which enables machine-readable and more developer-friendly log output formats. The two structured log formats, logfmt and json, provide easier filtering and searching with external tools and simplify integration of InfluxDB logs with Splunk, Papertrail, Elasticsearch, and other third-party tools.

See Structured logging in the InfluxDB OSS documentation.
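
For example, the output format is selected in the [logging] section of the node configuration file; a minimal sketch (option names follow the InfluxDB OSS configuration) is:

[logging]
  format = "logfmt"   # or "json"
  level = "info"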

Tracing

Logging has been enhanced to provide tracing of important InfluxDB operations. Tracing is useful for error reporting and discovering performance bottlenecks.

Logging keys used in tracing

Tracing identifier key

The trace_id key specifies a unique identifier for a specific instance of a trace. You can use this key to filter and correlate all related log entries for an operation.

All operation traces include consistent starting and ending log entries, with the same message (msg) describing the operation (e.g., “TSM compaction”), but adding the appropriate op_event context (either start or end). For an example, see Finding all trace log entries for an InfluxDB operation.

Example: trace_id=06R0P94G000
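
For instance, a single grep on that identifier collects every entry belonging to the trace:

$ grep -F 'trace_id=06R0P94G000' influxd.log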

Operation keys

The following operation keys identify an operation’s name, the start and end timestamps, and the elapsed execution time.

op_name

Unique identifier for an operation. You can filter on all operations of a specific name.

Example: op_name=tsm1_compact_group

op_event

Specifies the start and end of an event. The two possible values, start and end, indicate when an operation started or ended. For example, you can grep by values in op_name AND op_event to find all starting operation log entries. For an example, see Finding all starting operation log entries.

Example: op_event=start

op_elapsed

Duration of the operation execution. Logged with the ending trace log entry. Valid duration units are ns, µs, ms, and s.

Example: op_elapsed=352ms

Log identifier context key

The log identifier key (log_id) lets you easily identify every log entry for a single execution of an influxd process. There are other ways a log file could be split into separate executions, but the consistent log_id makes it easier to search log aggregation services.

Example: log_id=06QknqtW000
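
For example, to collect every entry written by the process execution identified above:

$ grep -F 'log_id=06QknqtW000' influxd.log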

Database context keys

  • db_instance: Database name
  • db_rp: Retention policy name
  • db_shard_id: Shard identifier
  • db_shard_group: Shard group identifier
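
For example, to narrow trace output to a single database and retention policy (the names below are hypothetical):

$ grep -F 'db_instance=telegraf' influxd.log | grep -F 'db_rp=autogen' # hypothetical database and retention policy names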

Tooling

Here are a couple of popular tools available for processing and filtering log files output in logfmt or json formats.

hutils

The hutils utility collection, maintained by Heroku, provides tools for working with logfmt-encoded logs, including:

  • lcut: Extracts values from a logfmt trace based on a specified field name.
  • lfmt: Prettifies logfmt lines as they emerge from a stream, and highlights their key sections.
  • ltap: Accesses messages from log providers in a consistent way to allow easy parsing by other utilities that operate on logfmt traces.
  • lviz: Visualizes logfmt output by building a tree out of a dataset combining common sets of key-value pairs into shared parent nodes.

lnav (Log File Navigator)

The lnav (Log File Navigator) tool is an advanced log file viewer useful for watching and analyzing your log files from a terminal. The lnav viewer provides a single log view, automatic log format detection, filtering, a timeline view, a pretty-print view, and querying logs using SQL.
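
For example, assuming logs are written to the flat file used on sysvinit systems, you can open them directly in the viewer:

$ lnav /var/log/influxdb/influxdb.log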

Operations

The following operations, listed by their operation name (op_name), are traced in the InfluxDB internal logs and are available for use without changes to the logging level.

Initial opening of data files

The tsdb_open operation traces include all events related to the initial opening of the tsdb_store.

Retention policy shard deletions

The retention.delete_check operation includes all shard deletions related to the retention policy.

TSM snapshotting in-memory cache to disk

The tsm1_cache_snapshot operation represents the snapshotting of the TSM in-memory cache to disk.

TSM compaction strategies

The tsm1_compact_group operation includes all trace log entries related to TSM compaction strategies and displays the related TSM compaction strategy keys:

  • tsm1_strategy: level or full
  • tsm1_level: 1, 2, or 3
  • tsm_optimize: true or false
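
For example, to narrow the compaction traces to full compactions, you can combine the operation name with the strategy key as it appears in the sample entries later on this page:

$ grep -F 'op_name=tsm1_compact_group' influxd.log | grep -F 'strategy=full'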

Series file compactions

The series_partition_compaction operation includes all trace log entries related to series file compactions.

Continuous query execution (if logging enabled)

The continuous_querier_execute operation includes all continuous query executions, if logging is enabled.
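
Continuous query logging is controlled in the data node configuration file; a minimal sketch of the relevant section (the option name follows the InfluxDB OSS configuration) is:

[continuous_queries]
  log-enabled = true   # emit a log entry for each continuous query execution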

TSI log file compaction

The tsi1_compact_log_file operation includes all trace log entries related to log file compactions.

TSI level compaction

The tsi1_compact_to_level operation includes all trace log entries for TSI level compactions.

Tracing examples

Finding all trace log entries for an InfluxDB operation

In the example below, you can see the log entries for all trace operations related to a “TSM compaction” process. Note that the initial entry shows the message “TSM compaction (start)” and the final entry displays the message “TSM compaction (end)”.

Log entries were grepped using the trace_id value and then the specified key values were displayed using lcut (an hutils tool).

$ grep "06QW92x0000" influxd.log | lcut ts lvl msg strategy level
2018-02-21T20:18:56.880065Z	info	TSM compaction (start)	full
2018-02-21T20:18:56.880162Z	info	Beginning compaction	full
2018-02-21T20:18:56.880185Z	info	Compacting file	full
2018-02-21T20:18:56.880211Z	info	Compacting file	full
2018-02-21T20:18:56.880226Z	info	Compacting file	full
2018-02-21T20:18:56.880254Z	info	Compacting file	full
2018-02-21T20:19:03.928640Z	info	Compacted file	full
2018-02-21T20:19:03.928687Z	info	Finished compacting files	full
2018-02-21T20:19:03.928707Z	info	TSM compaction (end)	full

Finding all starting operation log entries

To find all starting operation log entries, you can grep by values in op_name AND op_event. In the following example, the grep returned 101 entries, so the result below only displays the first entry. In the example result entry, the timestamp, level, strategy, trace_id, op_name, and op_event values are included.

$ grep -F 'op_name=tsm1_compact_group' influxd.log | grep -F 'op_event=start'
ts=2018-02-21T20:16:16.709953Z lvl=info msg="TSM compaction" log_id=06QVNNCG000 engine=tsm1 level=1 strategy=level trace_id=06QV~HHG000 op_name=tsm1_compact_group op_event=start
...

Using the lcut utility (in hutils), the following command extends the previous grep command with an lcut command that displays only the keys and values that are not identical in all of the entries. The result includes 19 unique log entries displaying the selected keys: ts, strategy, level, and trace_id.

$ grep -F 'op_name=tsm1_compact_group' influxd.log | grep -F 'op_event=start' | lcut ts strategy level trace_id | sort -u
2018-02-21T20:16:16.709953Z	level	1	06QV~HHG000
2018-02-21T20:16:40.707452Z	level	1	06QW0k0l000
2018-02-21T20:17:04.711519Z	level	1	06QW2Cml000
2018-02-21T20:17:05.708227Z	level	2	06QW2Gg0000
2018-02-21T20:17:29.707245Z	level	1	06QW3jQl000
2018-02-21T20:17:53.711948Z	level	1	06QW5CBl000
2018-02-21T20:18:17.711688Z	level	1	06QW6ewl000
2018-02-21T20:18:56.880065Z	full		06QW92x0000
2018-02-21T20:20:46.202368Z	level	3	06QWFizW000
2018-02-21T20:21:25.292557Z	level	1	06QWI6g0000
2018-02-21T20:21:49.294272Z	level	1	06QWJ_RW000
2018-02-21T20:22:13.292489Z	level	1	06QWL2B0000
2018-02-21T20:22:37.292431Z	level	1	06QWMVw0000
2018-02-21T20:22:38.293320Z	level	2	06QWMZqG000
2018-02-21T20:23:01.293690Z	level	1	06QWNygG000
2018-02-21T20:23:25.292956Z	level	1	06QWPRR0000
2018-02-21T20:24:33.291664Z	full		06QWTa2l000
2018-02-21T21:12:08.017055Z	full		06QZBpKG000
2018-02-21T21:12:08.478200Z	full		06QZBr7W000
