
Kapacitor alerts overview

Kapacitor makes it possible to handle alert messages in two different ways.

  • The messages can be pushed directly to an event handler exposed through the Alert node.
  • The messages can be published to a topic namespace to which one or more alert handlers can subscribe.

Whichever approach is used, the handler must be enabled and configured in the Kapacitor configuration file. If a handler requires sensitive information such as tokens or passwords, it can also be configured using the Kapacitor HTTP API.

Push to handler

Pushing messages to a handler is the basic approach presented in the Getting started with Kapacitor guide. It involves simply calling the relevant chaining method made available through the alert node. Messages can be pushed to log() files, the email() service, the httpOut() cache, and many third-party services.
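For example, a minimal TICKscript sketch that pushes critical CPU alerts directly to a log file and a Slack channel might look like the following (the measurement, threshold, log path, and channel name are illustrative):

stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 10)
        .log('/tmp/cpu_alerts.log')
        .slack()
        .channel('#kapacitor')

Each chaining method attaches one handler; the corresponding service (here, slack) must be enabled in the configuration file.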

Publish and subscribe

An alert topic is simply a namespace where alerts are grouped. When an alert event fires, it can be published to a topic. Multiple handlers can subscribe to (be bound to) that topic, and every subscribed handler processes each alert event published to it. Handlers are bound to topics through the kapacitor command line client and handler binding files. Handler binding files can be written in YAML or JSON. They contain four key fields and one optional one.

  • topic: declares the topic to which the handler will subscribe.
  • id: declares the identity of the binding.
  • kind: declares the type of event handler to be used. Note that this needs to be enabled in the kapacitord configuration.
  • match: (optional) declares a match expression used to filter which alert events will be processed. See the Match Expressions section below.
  • options: options specific to the handler in question. These are documented below in the List of handlers section.

Example 1: A handler binding file for the slack handler and cpu topic

topic: cpu
id: slack
kind: slack
options:
  channel: '#kapacitor'

Example 1 could be saved into a file named slack_cpu_handler.yaml.

The binding can then be defined as a Kapacitor topic handler through the command line client:

$ kapacitor define-topic-handler slack_cpu_handler.yaml
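For the binding to take effect, a task must publish alert events to the cpu topic. In TICKscript this is done with the .topic() property of the alert node; a minimal sketch (the measurement and threshold are illustrative):

stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 10)
        .topic('cpu')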

Handler bindings can also be created over the HTTP API. See the Create a Handler section of the HTTP API document.
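For example, the binding above could be created with a request along these lines (a sketch, assuming Kapacitor is listening on its default address localhost:9092 and the v1 alerts endpoint):

$ curl -XPOST 'http://localhost:9092/kapacitor/v1/alerts/topics/cpu/handlers' \
    -H 'Content-Type: application/json' \
    -d '{"id": "slack", "kind": "slack", "options": {"channel": "#kapacitor"}}'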

For a walk-through of defining and using alert topics, see the Using Alert Topics walk-through.

Handlers

A handler takes action on incoming alert events for a specific topic. Each handler operates on exactly one topic.

List of handlers

The following is a list of available alert event handlers:

Handler            Description
aggregate          Aggregate alert messages over a specified interval.
Alerta             Post alert messages to Alerta.
BigPanda           Send alert messages to BigPanda.
Discord            Send alert messages to Discord.
email              Send an email with alert data.
exec               Execute a command, passing alert data over STDIN.
HipChat            Post alert messages to a HipChat room.
Kafka              Send alerts to an Apache Kafka cluster.
log                Log alert data to a file.
Microsoft Teams    Send alert messages to a Microsoft Teams channel.
MQTT               Post alert messages to an MQTT broker.
OpsGenie v1        Send alerts to OpsGenie using their v1 API. (Deprecated)
OpsGenie v2        Send alerts to OpsGenie using their v2 API.
PagerDuty v1       Send alerts to PagerDuty using their v1 API. (Deprecated)
PagerDuty v2       Send alerts to PagerDuty using their v2 API.
post               HTTP POST data to a specified URL.
publish            Publish alerts to multiple Kapacitor topics.
Pushover           Send alerts to Pushover.
Sensu              Post alert messages to a Sensu client.
ServiceNow         Send alerts to ServiceNow.
Slack              Post alert messages to a Slack channel.
SNMPTrap           Trigger SNMP traps.
tcp                Send data to a specified address via raw TCP.
Telegram           Post alert messages to a Telegram client.
VictorOps          Send alerts to VictorOps.
Zenoss             Send alerts to Zenoss.

Match expressions

Alert handlers support match expressions that filter which alert events the handler processes.

A match expression is a TICKscript lambda expression. The data that triggered the alert is available to the match expression, including all fields and tags.

In addition to the data that triggered the alert, metadata about the alert itself is available to the match expression via the following functions:

Name      Type      Description
level     int       The alert level of the event: 0, 1, 2, or 3, corresponding to OK, INFO, WARNING, and CRITICAL.
changed   bool      Indicates whether the alert level changed with this event.
name      string    Returns the measurement name of the triggering data.
taskName  string    Returns the name of the task that generated the alert event.
duration  duration  Returns the duration the event has been in a non-OK state.

Additionally, the variables OK, INFO, WARNING, and CRITICAL are defined to correspond to the return values of the level() function.

For example, to send only critical alerts to a handler, use this match expression:

match: level() == CRITICAL

Examples

Send only changed events to the handler:

match: changed() == TRUE

Send only WARNING and CRITICAL events to the handler:

match: level() >= WARNING

Send events with the tag "host" equal to s001.example.com to the handler:

match: "\"host\" == 's001.example.com'"

Alert event data

Each alert event that gets sent to a handler contains the following alert data:

Name         Description
ID           The ID of the alert, user defined.
Message      The alert message, user defined.
Details      The alert details, user defined HTML content.
Time         The time the alert occurred.
Duration     The duration of the alert in nanoseconds.
Level        One of OK, INFO, WARNING, or CRITICAL.
Data         The influxql.Result containing the data that triggered the alert.
Recoverable  Indicates whether the alert is auto-recoverable, as determined by the .noRecoveries() property.

This data is used by event handlers in their handling of alert events.
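Handlers that forward the raw event, such as post and exec, receive this alert data serialized as JSON. A representative payload might look like the following (all values are illustrative, and the data field is elided):

{
    "id": "cpu:nil",
    "message": "cpu:nil is CRITICAL",
    "details": "",
    "time": "2023-01-01T00:00:00Z",
    "duration": 0,
    "level": "CRITICAL",
    "data": {"series": []},
    "recoverable": true
}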

Alert messages use Go templates and have access to the alert data:

|alert()
  // ...
  .message('{{ .ID }} is {{ .Level }} value:{{ index .Fields "value" }}, {{ if not .Recoverable }}non-recoverable{{ end }}')
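
The same template data is available to the .details() property, whose output populates the Details field and is commonly used as the HTML body of email alerts. A brief sketch:

|alert()
  // ...
  .details('''
<h1>{{ .ID }}</h1>
<b>{{ .Message }}</b>
''')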
