Configure Kapacitor

Basic installation and startup of the Kapacitor service is covered in Getting started with Kapacitor. The basic principles of working with Kapacitor described there should be understood before continuing here. This document presents Kapacitor configuration in greater detail.

Kapacitor service properties are configured using key-value pairs organized into groups. Any property key can be located by following its path in the configuration file (for example, [http].https-enabled or [slack].channel). Values for configuration keys are declared in the configuration file.

Kapacitor configuration file location

Kapacitor looks for configuration files at specific locations, depending on your operating system:

  • Linux: /etc/kapacitor/kapacitor.conf
  • macOS: /usr/local/etc/kapacitor.conf
  • Windows: same directory as kapacitord.exe

Define a custom location for your kapacitor.conf at startup with the -config flag. The path to the configuration file can also be declared using the environment variable KAPACITOR_CONFIG_PATH. Values declared in the configuration file are overridden by environment variables beginning with KAPACITOR_. Some values can also be dynamically altered using the HTTP API when the key [config-override].enabled is set to true.

Configuration precedence

Configure Kapacitor using one or more of the available configuration mechanisms. Configuration mechanisms are honored in the following order of precedence.

  1. Command line arguments
  2. HTTP API (for the InfluxDB connection and other optional services)
  3. Environment variables
  4. Configuration file values

Note: Setting the property skip-config-overrides in the configuration file to true will disable configuration overrides at startup.


Startup

To specify how to load and run the Kapacitor daemon, set the following command line options:

  • -config: Path to the configuration file.
  • -hostname: Hostname that will override the hostname specified in the configuration file.
  • -pidfile: File where the process ID will be written.
  • -log-file: File where logs will be written.
  • -log-level: Threshold for writing messages to the log file. Valid values include debug, info, warn, error.


Systemd

On POSIX systems, when the Kapacitor daemon starts as part of systemd, environment variables can be set in the file /etc/default/kapacitor.

  1. To start Kapacitor as part of systemd, do one of the following:

    $ sudo systemctl enable kapacitor
    $ sudo systemctl enable kapacitor --now
  2. Define where the PID file and log file will be written:

    1. Add a line like the following into the /etc/default/kapacitor file:

      KAPACITOR_OPTS="-pidfile=/home/kapacitor/kapacitor.pid -log-file=/home/kapacitor/logs/kapacitor.log"
    2. Restart Kapacitor:

      sudo systemctl restart kapacitor

The environment variable KAPACITOR_OPTS is one of a few special variables used by Kapacitor at startup. For more information on working with environment variables, see Kapacitor environment variables below.

Kapacitor configuration file

The default configuration can be displayed using the config command of the Kapacitor daemon.

kapacitord config

A sample configuration file is also available in the Kapacitor code base. The most current version can be accessed on GitHub.

Use the Kapacitor HTTP API to get current configuration settings and values that can be changed while the Kapacitor service is running. See Retrieving the current configuration.


TOML

The configuration file is based on TOML. Important configuration properties are identified by case-sensitive keys to which values are assigned. Key-value pairs are grouped into tables whose identifiers are delineated by brackets. Tables can also be grouped into arrays of tables.

The most common value types found in the Kapacitor configuration file include the following:

  • String (declared in double quotes)
    • Examples: host = "localhost", id = "myconsul", refresh-interval = "30s".
  • Integer
    • Examples: port = 80, timeout = 0, udp-buffer = 1000.
  • Float
    • Example: threshold = 0.0.
  • Boolean
    • Examples: enabled = true, global = false, no-verify = false.
  • Array
    • Examples: my_database = [ "default", "longterm" ], urls = ["http://localhost:8086"]
  • Inline Table
    • Example: basic-auth = { username = "my-user", password = "my-pass" }

Table grouping identifiers are declared within brackets. For example, [http], [deadman], [kubernetes].

An array of tables is declared within double brackets. For example, [[influxdb]], [[mqtt]], [[dns]].
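Putting these pieces together, a short sketch of the syntax (the values here are illustrative, not Kapacitor defaults) might look like this:

```toml
# A table grouping containing several value types
[http]
  bind-address = ":9092"            # string
  log-enabled = true                # boolean

# An array of tables: each [[influxdb]] table defines one connection
[[influxdb]]
  name = "localhost"
  urls = ["http://localhost:8086"]  # array of strings

[[influxdb]]
  name = "remote"
  urls = ["http://influxdb.example.com:8086"]
```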


Organization

Most keys are declared in the context of a table grouping, but the basic properties of the Kapacitor system are defined in the root context of the configuration file. The four basic properties of the Kapacitor service are:

  • hostname: String declaring the DNS hostname where the Kapacitor daemon runs.
  • data_dir: String declaring the file system directory where core Kapacitor data is stored.
  • skip-config-overrides: Boolean indicating whether or not to skip configuration overrides.
  • default-retention-policy: String declaring the default retention policy to be used on the InfluxDB database.
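For example, the root context of a kapacitor.conf might begin as follows (the paths shown are illustrative):

```toml
hostname = "localhost"
data_dir = "/var/lib/kapacitor"
skip-config-overrides = false
default-retention-policy = ""
```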

Table groupings and arrays of tables follow the basic properties and include essential and optional features, including specific alert handlers and mechanisms for service discovery and data scraping.

Essential configuration groups

Authentication and authorization (auth)

Use the [auth] group to enable and configure user authentication and authorization in Kapacitor.

  # Enable authentication for Kapacitor
  enabled = false
  # User permissions cache expiration time.
  cache-expiration = "10m"
  # Cost to compute bcrypt password hashes.
  # bcrypt rounds = 2^cost
  bcrypt-cost = 10
  # Address of a meta server.
  # If empty, then meta is not used as a user backend.
  # host:port
  meta-addr = ""
  meta-use-tls = false

  # Username for basic user authorization when using meta API. meta-password should also be set.
  meta-username = "kapauser"

  # Password for basic user authorization when using meta API. meta-username must also be set.
  meta-password = "kapapass"

  # Shared secret for JWT bearer token authentication when using meta API. 
  # If this is set, then the `meta-username` and `meta-password` settings are ignored.
  # This should match the `[meta] internal-shared-secret` setting on the meta nodes.
  meta-internal-shared-secret = "MyVoiceIsMyPassport"

  # Absolute path to PEM encoded Certificate Authority (CA) file.
  # A CA can be provided without a key/certificate pair.
  meta-ca = "/etc/kapacitor/ca.pem"

  # Absolute paths to PEM encoded private key and server certificate files.
  meta-cert = "/etc/kapacitor/cert.pem"
  meta-key = "/etc/kapacitor/key.pem"
  meta-insecure-skip-verify = false


HTTP

The Kapacitor service requires an HTTP connection. Use the [http] configuration group to configure HTTP properties such as the bind address and the path to an HTTPS certificate.

# ...
  # HTTP API Server for Kapacitor
  # This server is always on,
  # it serves both as a write endpoint
  # and as the API endpoint for all other
  # Kapacitor calls.
  bind-address = ":9092"
  # Require authentication when interacting with Kapacitor
  auth-enabled = false
  log-enabled = true
  write-tracing = false
  pprof-enabled = false
  https-enabled = false
  https-certificate = "/etc/ssl/influxdb-selfsigned.pem"
  shared-secret = ""
  # The shared secret must match on all data and kapacitor nodes.
  ### Use a separate private key location.
  # https-private-key = ""
# ...
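For example, to serve the API over HTTPS, a minimal sketch (assuming a certificate has already been generated at the path shown) might be:

```toml
[http]
  bind-address = ":9092"
  https-enabled = true
  https-certificate = "/etc/ssl/kapacitor-selfsigned.pem"
  # Uncomment if the private key is stored separately from the certificate:
  # https-private-key = "/etc/ssl/kapacitor-selfsigned.key"
```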

Transport Layer Security (TLS)

If TLS configuration settings are not specified, Kapacitor supports all of the cipher suite IDs listed and all of the TLS versions implemented in the Constants section of the Go crypto/tls package documentation, depending on the version of Go used to build Kapacitor. Use the SHOW DIAGNOSTICS command to see the version of Go used to build Kapacitor.

# ...

  ciphers = [
    "TLS_AES_128_GCM_SHA256",
    "TLS_AES_256_GCM_SHA384",
    "TLS_CHACHA20_POLY1305_SHA256"
  ]
  min-version = "tls1.3"
  max-version = "tls1.3"

# ...

Important: The order of the cipher suite IDs in the ciphers setting determines which algorithms are selected by priority. The TLS min-version and the max-version settings in the example above restrict support to TLS 1.3.


ciphers

List of available TLS cipher suites. Default is ["TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384", "TLS_CHACHA20_POLY1305_SHA256"].

For a list of ciphers available with the version of Go used to build Kapacitor, see the Go crypto/tls package. Use the SHOW DIAGNOSTICS query to see the version of Go used to build Kapacitor.


min-version

Minimum version of the TLS protocol that will be negotiated. Valid values include:

  • tls1.2
  • tls1.3 (default)

max-version

Maximum version of the TLS protocol that will be negotiated. Valid values include:

  • tls1.2
  • tls1.3 (default)

InfluxData recommends configuring your Kapacitor server’s TLS settings for “modern compatibility,” which provides a higher level of security and assumes that backward compatibility is not required. Our recommended TLS configuration settings for ciphers, min-version, and max-version are based on Mozilla’s “modern compatibility” TLS server configuration described in Security/Server Side TLS.

InfluxData’s recommended TLS settings for “modern compatibility” are specified in the configuration settings example above.

Config override

The [config-override] group contains a single key, enabled, which enables or disables the ability to override certain values through the HTTP API. It is enabled by default.

# ...

  # Enable/Disable the service for overriding configuration via the HTTP API.
  enabled = true



Logging

The Kapacitor service uses logging to monitor and inspect its behavior. The path to the log file and the log level threshold are defined in the [logging] group.

# ...

  # Destination for logs
  # Can be a path to a file or 'STDOUT', 'STDERR'.
  file = "/var/log/kapacitor/kapacitor.log"
  # Logging level can be one of:
  # DEBUG, INFO, WARN, ERROR, or OFF
  level = "INFO"



Load

Kapacitor can load TICKscript tasks when the service starts. Use the [load] group to enable this feature and specify the directory path for TICKscripts to load.

# ...

  # Enable/Disable the service for loading tasks/templates/handlers
  # from a directory
  enabled = true
  # Directory where task/template/handler files are set
  dir = "/etc/kapacitor/load"



Replay

Kapacitor can record data streams and batches for testing tasks before they are enabled. Use the [replay] group to specify the path to the directory where replay files are stored.

# ...

  # Where to store replay files.
  dir = "/var/lib/kapacitor/replay"

# ...


Task

Prior to Kapacitor 1.4, tasks were written to a special task database. The [task] group and its associated keys are deprecated and should only be used for migration purposes.


Storage

The Kapacitor service stores its configuration and other information in BoltDB, a file-based key-value data store. Use the [storage] group to define the location of the BoltDB database file on disk.

# ...

  # Where to store the Kapacitor boltdb database
  boltdb = "/var/lib/kapacitor/kapacitor.db"



Deadman

Use the [deadman] group to configure Kapacitor’s deadman’s switch globally. See the Deadman helper function topic in the AlertNode documentation.

# ...

  # Configure a deadman's switch
  # Globally configure deadman's switches on all tasks.
  # NOTE: for this to be of use you must also globally configure at least one alerting method.
  global = false
  # Threshold, if globally configured the alert will be triggered if the throughput in points/interval is <= threshold.
  threshold = 0.0
  # Interval, if globally configured the frequency at which to check the throughput.
  interval = "10s"
  # Id: the alert Id, NODE_NAME will be replaced with the name of the node being monitored.
  id = "node 'NODE_NAME' in task '{{ .TaskName }}'"
  # The message of the alert. INTERVAL will be replaced by the interval.
  message = "{{ .ID }} is {{ if eq .Level \"OK\" }}alive{{ else }}dead{{ end }}: {{ index .Fields \"collected\" | printf \"%0.3f\" }} points/INTERVAL."



InfluxDB

Use the [[influxdb]] group to configure an InfluxDB connection. Declare one [[influxdb]] group per InfluxDB connection, and identify one of them as the default.

InfluxDB user must have admin privileges

To use Kapacitor with an InfluxDB instance that requires authentication, the InfluxDB user must have admin privileges.

# ...

  # Connect to InfluxDB
  # Kapacitor can subscribe, query, and write to this cluster.
  # Using InfluxDB is not required and can be disabled.
  # To connect to InfluxDB OSS 1.x or InfluxDB Enterprise, 
  # use the following configuration:
  enabled = true
  default = true
  name = "localhost"
  urls = ["http://localhost:8086"]
  username = ""
  password = ""
  timeout = 0

  # To connect to InfluxDB OSS 2.x or InfluxDB Cloud, 
  # use the following configuration:
  enabled = true
  default = true
  name = "localhost"
  urls = ["http://localhost:8086"]
  token = ""
  timeout = 0 
  # By default, all data sent to InfluxDB is compressed in gzip format.
  # To turn off gzip compression, add the following config setting:
  compression = "none"

  # Absolute path to pem encoded CA file.
  # A CA can be provided without a key/cert pair
  #   ssl-ca = "/etc/kapacitor/ca.pem"
  # Absolute paths to pem encoded key and cert files.
  #   ssl-cert = "/etc/kapacitor/cert.pem"
  #   ssl-key = "/etc/kapacitor/key.pem"

  # Do not verify the TLS/SSL certificate.
  # This is insecure.
  insecure-skip-verify = false

  # Maximum time to try and connect to InfluxDB during startup
  startup-timeout = "5m"

  # Turn off all subscriptions
  disable-subscriptions = false

  # Subscription mode is either "cluster" or "server"
  subscription-mode = "server"

  # Which protocol to use for subscriptions
  # one of 'udp', 'http', or 'https'.
  subscription-protocol = "http"

  # Subscriptions resync time interval
  # Useful if you want to subscribe to newly created databases
  # without restarting Kapacitor
  subscriptions-sync-interval = "1m0s"

  # Override the global hostname option for this InfluxDB cluster.
  # Useful if the InfluxDB cluster is in a separate network and
  # needs special configuration to connect back to this Kapacitor instance.
  # Defaults to `hostname` if empty.
  kapacitor-hostname = ""

  # Override the global http port option for this InfluxDB cluster.
  # Useful if the InfluxDB cluster is in a separate network and
  # needs special configuration to connect back to this Kapacitor instance.
  # Defaults to the port from `[http] bind-address` if 0.
  http-port = 0

  # Host part of a bind address for UDP listeners.
  # For example if a UDP listener is using port 1234
  # and `udp-bind = "hostname_or_ip"`,
  # then the UDP port will be bound to `hostname_or_ip:1234`
  # The default empty value will bind to all addresses.
  udp-bind = ""
  # Subscriptions use the UDP network protocol.
  # The following options are for the UDP listeners created for each subscription.
  # Number of packets to buffer when reading packets off the socket.
  udp-buffer = 1000
  # The size in bytes of the OS read buffer for the UDP socket.
  # A value of 0 indicates use the OS default.
  udp-read-buffer = 0

    [influxdb.subscriptions]
      # Set of databases and retention policies to subscribe to.
      # If empty will subscribe to all, minus the list in
      # influxdb.excluded-subscriptions
      # Format
      # db_name = <list of retention policies>
      # Example:
      # my_database = [ "default", "longterm" ]

    [influxdb.excluded-subscriptions]
      # Set of databases and retention policies to exclude from the subscriptions.
      # If influxdb.subscriptions is empty it will subscribe to all
      # except databases listed here.
      # Format
      # db_name = <list of retention policies>
      # Example:
      # my_database = [ "default", "longterm" ]

# ...
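As a sketch, to subscribe only to a hypothetical telegraf database while excluding Kapacitor’s own statistics database, the subscription tables might look like this:

```toml
[[influxdb]]
  # ... connection settings ...

  [influxdb.subscriptions]
    # Subscribe only to the autogen retention policy of "telegraf"
    telegraf = [ "autogen" ]

  [influxdb.excluded-subscriptions]
    # Do not subscribe to Kapacitor's own stats database
    _kapacitor = [ "autogen" ]
```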

Internal configuration groups

Kapacitor includes configurable internal services that can be enabled or disabled.


Reporting

Kapacitor sends usage statistics back to InfluxData. Use the [reporting] group to enable, disable, and configure reporting.

# ...

  # Send usage statistics
  # every 12 hours to Enterprise.
  enabled = true
  url = ""



Stats

Kapacitor can write internal statistics to an InfluxDB database. Use the [stats] group to configure the collection frequency and the database in which to store statistics.

# ...

  # Emit internal statistics about Kapacitor.
  # To consume these stats, create a stream task
  # that selects data from the configured database
  # and retention policy.
  # Example:
  #  stream|from().database('_kapacitor').retentionPolicy('autogen')...
  enabled = true
  stats-interval = "10s"
  database = "_kapacitor"
  retention-policy= "autogen"

# ...


Alert

Use the [alert] group to globally configure alerts created by the AlertNode.

# ...

  # Persisting topics can become an I/O bottleneck under high load.
  # This setting disables them entirely.
  persist-topics = false
  # This setting sets the topic queue length.
  # Default is 5000. Minimum length is 1000.
  topic-buffer-length = 5000

# ...

Optional configuration groups

Optional table groupings are disabled by default and relate to specific features that can be leveraged by TICKscript nodes or used to discover and scrape information from remote locations. In the default configuration, these optional table groupings may be commented out or include a key enabled set to false (i.e., enabled = false). A feature defined by an optional table should be enabled whenever a relevant node or a handler for a relevant node is required by a task, or when an input source is needed.

Optional features include:

Event handlers

Event handlers manage communications from Kapacitor to third party services or across Internet standard messaging protocols. They are activated through chaining methods on the AlertNode.

Every event handler has the property enabled. Each also needs an endpoint to send messages to. Endpoints may include single properties (e.g., url and addr) or property pairs (e.g., host and port). Most include an authentication mechanism such as a token or a pair of properties like username and password.
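For instance, a sketch of a Slack handler configuration following this pattern (the url and channel values are placeholders, not working credentials) could look like this:

```toml
[slack]
  # Enable the handler so alerts can use the .slack() chaining method
  enabled = true
  # Endpoint: the Slack incoming webhook URL (placeholder value)
  url = "https://hooks.slack.com/services/XXXX/YYYY/ZZZZ"
  # Default channel for alert messages
  channel = "#alerts"
  # If true, all alerts are sent to Slack without needing .slack() in the TICKscript
  global = false
```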

For information about available event handlers and their configuration options:

View available event handlers

Docker services

Use Kapacitor to trigger changes in Docker Swarm and Kubernetes clusters with the swarmAutoscale and k8sAutoscale nodes.

# ...
  # Enable/Disable the Docker Swarm service.
  # Needed by the swarmAutoscale TICKscript node.
  enabled = false
  # Unique ID for this Swarm cluster
  # NOTE: This is not the ID generated by Swarm rather a user defined
  # ID for this cluster since Kapacitor can communicate with multiple clusters.
  id = ""
  # List of URLs for Docker Swarm servers.
  servers = ["http://localhost:2376"]
  # TLS/SSL Configuration for connecting to secured Docker daemons
  ssl-ca = ""
  ssl-cert = ""
  ssl-key = ""
  insecure-skip-verify = false
# ...
# ...
  # Enable/Disable the kubernetes service.
  # Needed by the k8sAutoscale TICKscript node.
  enabled = false
  # There are several ways to connect to the kubernetes API servers:
  # Via the proxy, start the proxy via the `kubectl proxy` command:
  #   api-servers = ["http://localhost:8001"]
  # From within the cluster itself, in which case
  # kubernetes secrets and DNS services are used
  # to determine the needed configuration.
  #   in-cluster = true
  # Direct connection, in which case you need to know
  # the URL of the API servers,  the authentication token and
  # the path to the ca cert bundle.
  # These value can be found using the `kubectl config view` command.
  #   api-servers = [""]
  #   token = "..."
  #   ca-path = "/path/to/kubernetes/ca.crt"
  # Kubernetes can also serve as a discoverer for scrape targets.
  # In that case the type of resources to discover must be specified.
  # Valid values are: "node", "pod", "service", and "endpoint".
  #   resource = "pod"
# ...

User defined functions (UDFs)

Use Kapacitor to run user defined functions (UDFs) as chaining methods in a TICKscript. Define a UDF configuration group in your kapacitor.conf using the group identifier pattern [udf.functions.<udf_name>].


A UDF configuration group requires the following properties:

  • prog: Path to the executable (string)
  • args: Arguments to pass to the executable (array of strings)
  • timeout: Executable response timeout (string)

Include environment variables in your UDF configuration group using the group pattern [udf.functions.<udf_name>.env].

Example UDF configuration
# ...
# Configuration for UDFs (User Defined Functions)
[udf.functions]
    # ...
    # Example Python UDF.
    # Use in TICKscript:
    #   stream.pyavg()
    #           .field('value')
    #           .size(10)
    #           .as('m_average')
    [udf.functions.pyavg]
        prog = "/usr/bin/python2"
        args = ["-u", "./udf/agent/examples/"]
        timeout = "10s"
        [udf.functions.pyavg.env]
            PYTHONPATH = "./udf/agent/py"
# ...

Additional examples can be found in the default configuration file.

Input methods

Use Kapacitor to receive and process data from data sources other than InfluxDB and then write the results to InfluxDB.

The following sources (in addition to InfluxDB) are supported:

  • Collectd: The POSIX daemon for collecting system, network, and service performance data.
  • OpenTSDB: The Open Time Series Database.
  • UDP: User datagram protocol.

Each input source has additional properties specific to its configuration.

# ...
  enabled = false
  bind-address = ":25826"
  database = "collectd"
  retention-policy = ""
  batch-size = 1000
  batch-pending = 5
  batch-timeout = "10s"
  typesdb = "/usr/share/collectd/types.db"
# ...

View collectd configuration properties

# ...
  enabled = false
  bind-address = ":4242"
  database = "opentsdb"
  retention-policy = ""
  consistency-level = "one"
  tls-enabled = false
  certificate = "/etc/ssl/influxdb.pem"
  batch-size = 1000
  batch-pending = 5
  batch-timeout = "1s"
# ...

View OpenTSDB configuration properties

User Datagram Protocol (UDP)

Use Kapacitor to collect raw data from a UDP connection.

# ...
  enabled = true
  bind-address = ":9100"
  database = "game"
  retention-policy = "autogen"
# ...

View UDP configuration properties

For examples of using Kapacitor to collect raw UDP data, see:

Service discovery and metric scraping

Kapacitor service discovery and metric scrapers let you discover and scrape metrics from data sources at runtime. This process is known as metric scraping and discovery. For more information, see Scraping and Discovery.

Use the [[scraper]] configuration group to configure scrapers and service discovery. One scraper can be bound to one discovery service.

Example scraper configuration
# ...
  enabled = false
  name = "myscraper"
  # Specify the id of a discoverer service specified below
  discoverer-id = "goethe-ec2"
  # Specify the type of discoverer service being used.
  discoverer-service = "ec2"
  db = "prometheus_raw"
  rp = "autogen"
  type = "prometheus"
  scheme = "http"
  metrics-path = "/metrics"
  scrape-interval = "1m0s"
  scrape-timeout = "10s"
  username = "schwartz.pudel"
  password = "f4usT!1808"
  bearer-token = ""
  ssl-ca = ""
  ssl-cert = ""
  ssl-key = ""
  ssl-server-name = ""
  insecure-skip-verify = false
# ...

Discovery services

Kapacitor supports the following discovery services:

  • Azure
  • Consul
  • DNS
  • EC2
  • File Discovery
  • GCE
  • Marathon
  • Nerve
  • ServerSet
  • Static Discovery
  • Triton
  • UDP

Each discovery service has an id property used to bind the service to a scraper. To see configuration properties unique to each discovery service, see the sample Kapacitor configuration file.

Example EC2 discovery service configuration
# ...
  enabled = false
  id = "goethe-ec2"
  region = "us-east-1"
  access-key = "ABCD1234EFGH5678IJKL"
  secret-key = "1nP00dl3N01rM4Su1v1Ju5qU3ch3ZM01"
  profile = "mph"
  refresh-interval = "1m0s"
  port = 80
# ...

Flux tasks

Use the [fluxtask] configuration group to enable and configure Kapacitor Flux tasks.

# ...
  # Configure flux tasks for kapacitor
  enabled = false
  # The InfluxDB instance name (from the [[influxdb]] config section)
  # to store historical task run data in 
  # Not recommended: use "none" to turn off historical task run data storage.
  task-run-influxdb = "localhost"
  # Bucket to store historical task run data in.
  # We recommend leaving this empty; by default, data is written to the
  # `kapacitor_fluxtask_logs` bucket or database.
  # If you have multiple Kapacitor instances and want to keep your data
  # separate, specify the InfluxDB 2.x bucket or InfluxDB 1.x database to
  # write to. For InfluxDB 1.x, use the `"mydb"` convention; the `"mydb/rp"`
  # convention with a retention policy is not supported.
  task-run-bucket = ""
  # The organization name or ID if storing historical task run data
  # in InfluxDB 2.x or InfluxDB Cloud
  task-run-org = ""
  task-run-orgid = ""
  # The measurement name for the historical task run data
  task-run-measurement = "runs" 

# ...

For more information about using Flux tasks with Kapacitor, see Use Flux tasks.

Kapacitor environment variables

Use environment variables to set global Kapacitor configuration settings or override properties in the configuration file.

Environment variables not in configuration file

  • KAPACITOR_OPTS: Options to pass to the kapacitord process when it is started by systemd (string)
  • KAPACITOR_CONFIG_PATH: Path to the Kapacitor configuration file (string)
  • KAPACITOR_URL: Kapacitor URL used by the kapacitor CLI (string)
  • KAPACITOR_UNSAFE_SSL: Allow the kapacitor CLI to skip certificate verification when using SSL (boolean)

Map configuration properties to environment variables

Kapacitor-specific environment variables begin with the token KAPACITOR followed by an underscore (_). Properties then follow their path through the configuration file tree with each node in the tree separated by an underscore. Dashes in configuration file identifiers are replaced with underscores. Table groupings in table arrays are identified by integer tokens.

Example environment variable mappings
# Set the skip-config-overrides configuration property
KAPACITOR_SKIP_CONFIG_OVERRIDES=true

# Set the value of the first URL in the first InfluxDB configuration group
# [[influxdb]][0].urls[0]
KAPACITOR_INFLUXDB_0_URLS_0="http://localhost:8086"

# Set the value of the [storage].boltdb configuration property
KAPACITOR_STORAGE_BOLTDB="/var/lib/kapacitor/kapacitor.db"

# Set the value of the authorization header in the first httppost configuration group
# [[httppost]][0].headers.{authorization:"some_value"}
KAPACITOR_HTTPPOST_0_HEADERS_authorization="some_value"

# Enable the Kubernetes service – [kubernetes].enabled
KAPACITOR_KUBERNETES_ENABLED=true

Configure with the HTTP API

Use the Kapacitor HTTP API to override certain configuration properties. This is helpful when a property may contain security sensitive information or when you need to reconfigure a service without restarting Kapacitor.

To view which properties are configurable through the API, use the GET request method with the /kapacitor/v1/config endpoint:

GET /kapacitor/v1/config
curl --request GET 'http://localhost:9092/kapacitor/v1/config'

To apply configuration overrides through the API, set the [config-override].enabled property in your Kapacitor configuration file to true.

View configuration sections

Most Kapacitor configuration groups or sections can be viewed as JSON files by using the GET request method and appending the group identifier to the /kapacitor/v1/config/ endpoint. For example, to get InfluxDB configuration properties:

GET /kapacitor/v1/config/influxdb
curl --request GET 'http://localhost:9092/kapacitor/v1/config/influxdb'

Sensitive fields such as passwords, keys, and security tokens are redacted when using the GET request method.

Modify configuration sections

To modify configuration properties, use the POST request method to send a JSON document to the configuration section endpoint. The JSON document must contain a set field with a map of the properties to override and their new values.

POST /kapacitor/v1/config/{config-group}
Enable the SMTP configuration
curl --request POST 'http://localhost:9092/kapacitor/v1/config/smtp' \
  --data '{
    "set": {
      "enabled": true
    }
  }'

To remove a configuration override, use the POST request method to send a JSON document with a delete field, listing the names of the overridden properties to remove, to the configuration endpoint.

Remove the SMTP configuration override
curl --request POST 'http://localhost:9092/kapacitor/v1/config/smtp' \
  --data '{
    "delete": [
      "enabled"
    ]
  }'

For detailed information about how to override configurations with the Kapacitor API, see Overriding configurations.

