
Track state changes across task executions


Problem

It’s common to use InfluxDB tasks to evaluate and assign states to your time series data and then detect changes in those states. Tasks process data in batches, but what happens if there is a state change across the batch boundary? The task won’t recognize it without knowing the final state of the previous task execution. This guide walks through creating a task that assigns a state to rows and then uses results from the previous task execution to detect any state changes across the batch boundary so you don’t miss any state changes.

Solution

Explicitly assign levels to your data based on thresholds.

Solution Advantages

This is the easiest solution to understand if you have never written a task with the monitor package.

Solution Disadvantages

You have to explicitly define your thresholds, which potentially requires more code.

Solution Overview

Create a task where you:

  1. Boilerplate. Import packages and define task options.
  2. Query your data.
  3. Assign states to your data based on thresholds. Store this data in a variable, for example, “states”.
  4. Write the “states” to a bucket.
  5. Find the latest value from the previous task run and store it in a variable “last_state_previous_task”.
  6. Union “states” and “last_state_previous_task”. Store this data in a variable “unioned_states”.
  7. Discover state changes in “unioned_states”. Store this data in a variable “state_changes”.
  8. Notify on state changes, including any that occur across task executions.

Solution Explained

  1. Import packages and define task options and secrets. Import the following packages:
  • Flux Telegram package: This package contains the telegram.message() function, which sends a single message to a Telegram channel using the Telegram Bot API.

  • Flux InfluxDB secrets package: This package contains the secrets.get() function which allows you to retrieve secrets from the InfluxDB secret store. Learn how to manage secrets in InfluxDB to use this package.

  • Flux InfluxDB monitoring package: This package contains functions and tools for monitoring your data.

    import "contrib/sranka/telegram"
    import "influxdata/influxdb/secrets"
    import "influxdata/influxdb/monitor"
    
    option task = {name: "State changes across tasks", every: 30m, offset: 5m}
    
    telegram_token = secrets.get(key: "telegram_token")
    telegram_channel_ID = secrets.get(key: "telegram_channel_ID")
    
  2. Query the data you want to monitor.

    data = from(bucket: "example-bucket")
        // Query data since the last successful task run, or since one task interval
        // (task.every) ago if there is no previous successful run.
        // This ensures that you won’t miss any data.
        |> range(start: tasks.lastSuccess(orTime: -task.every))
        |> filter(fn: (r) => r._measurement == "example-measurement")
        |> filter(fn: (r) => r.tagKey1 == "example-tag-value")
        |> filter(fn: (r) => r._field == "example-field")
    

    Where data might look like:

    _measurement        | tagKey1           | _field        | _value | _time
    --------------------|-------------------|---------------|--------|---------------------
    example-measurement | example-tag-value | example-field | 30.0   | 2022-01-01T00:00:00Z
    example-measurement | example-tag-value | example-field | 50.0   | 2022-01-01T00:01:00Z
  3. Assign states to your data based on thresholds and store this data in a variable, for example, “states”. To keep this example simple, there are only two states: “ok” and “crit” (a sketch that adds a third level follows the example output below). Store states in the _level column (required by the monitor package).

    states =
        data
            |> map(fn: (r) => ({r with _level: if r._value > 40.0 then "crit" else "ok"}))
    

    Where states might look like:

    _measurement        | tagKey1           | _field        | _value | _level | _time
    --------------------|-------------------|---------------|--------|--------|---------------------
    example-measurement | example-tag-value | example-field | 30.0   | ok     | 2022-01-01T00:00:00Z
    example-measurement | example-tag-value | example-field | 50.0   | crit   | 2022-01-01T00:01:00Z
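
    If you need more than two states, you might extend the conditional logic in map(). The following sketch adds a hypothetical “warn” level with a threshold of 35.0 (both the level name and the threshold are illustrative, not part of the original example):

    states =
        data
            |> map(
                fn: (r) =>
                    ({r with _level:
                        if r._value > 40.0 then
                            "crit"
                        else if r._value > 35.0 then
                            "warn"
                        else
                            "ok"
                    })
            )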
  4. Write “states” back to InfluxDB. You can write the data to a new measurement or to a different bucket. To write the data to a new measurement, use set() to update the value of the _measurement column in your “states” data; a sketch for writing to a separate bucket follows the code below.

    states
        // (Optional) Change the measurement name to write the data to a new measurement
        |> set(key: "_measurement", value: "new-measurement")
        |> to(bucket: "example-bucket")
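
    To keep the derived states separate from the raw data, you could instead write them to a different bucket. The bucket name below is a placeholder and the bucket must already exist:

    states
        // Write the state data to a separate bucket instead of renaming the measurement
        |> to(bucket: "example-states-bucket")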
    
  5. Find the latest value from the previous task run and store it in a variable, “last_state_previous_task”.

    last_state_previous_task =
        from(bucket: "example-bucket")
            // Query the last state written by the previous task run
            |> range(start: date.sub(d: task.every, from: tasks.lastSuccess(orTime: -task.every)))
            |> filter(fn: (r) => r._measurement == "example-measurement")
            |> filter(fn: (r) => r.tagKey1 == "example-tag-value")
            |> filter(fn: (r) => r._field == "example-field")
            |> last()
    

    Where last_state_previous_task might look like:

    _measurement        | tagKey1           | _field        | _value | _level | _time
    --------------------|-------------------|---------------|--------|--------|---------------------
    example-measurement | example-tag-value | example-field | 55.0   | crit   | 2021-12-31T23:59:00Z
  6. Union “states” and “last_state_previous_task”. Store this data in a variable “unioned_states”. Use sort() to ensure rows are ordered by time.

    unioned_states =
        union(tables: [states, last_state_previous_task])
            // Sort by time in ascending order so state changes are evaluated chronologically
            |> sort(columns: ["_time"])
    

    Where unioned_states might look like:

    _measurement        | tagKey1           | _field        | _value | _level | _time
    --------------------|-------------------|---------------|--------|--------|---------------------
    example-measurement | example-tag-value | example-field | 55.0   | crit   | 2021-12-31T23:59:00Z
    example-measurement | example-tag-value | example-field | 30.0   | ok     | 2022-01-01T00:00:00Z
    example-measurement | example-tag-value | example-field | 50.0   | crit   | 2022-01-01T00:01:00Z
  7. Use monitor.stateChangesOnly() to return only rows where the state changed in “unioned_states”. Store this data in a variable, “state_changes”. (To filter for changes to a specific level, see the sketch after the example output below.)

    state_changes =
        unioned_states 
            |> monitor.stateChangesOnly()
    

    Where state_changes might look like:

    _measurement        | tagKey1           | _field        | _value | _level | _time
    --------------------|-------------------|---------------|--------|--------|---------------------
    example-measurement | example-tag-value | example-field | 30.0   | ok     | 2022-01-01T00:00:00Z
    example-measurement | example-tag-value | example-field | 50.0   | crit   | 2022-01-01T00:01:00Z
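
    If you only care about transitions into a particular level, the monitor package also provides monitor.stateChanges(), which filters state changes by fromLevel and toLevel. For example, a sketch that keeps only changes into “crit” (the variable name crit_changes is illustrative):

    crit_changes =
        unioned_states
            // Return only rows where the level changed to "crit"
            |> monitor.stateChanges(toLevel: "crit")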
  8. Notify on state changes, including those that span the boundary between the last two task executions.

    state_changes
        |> map(
            fn: (r) =>
                ({
                    _value:
                        telegram.message(
                            token: telegram_token,
                            channel: telegram_channel_ID,
                            text: "state change at ${string(v: r._value)} at ${string(v: r._time)}",
                        ),
                }),
        )
    

    Using the unioned data, the following alerts would be sent to Telegram:

    • state change at 30.0 at 2022-01-01T00:00:00Z
    • state change at 50.0 at 2022-01-01T00:01:00Z
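
For reference, the steps above might be assembled into a single task script. The following is a sketch built only from the pieces in this guide; the bucket, measurement, tag, field, and threshold values are the placeholders used throughout this example:

    import "contrib/sranka/telegram"
    import "influxdata/influxdb/secrets"
    import "influxdata/influxdb/monitor"
    import "influxdata/influxdb/tasks"
    import "date"

    option task = {name: "State changes across tasks", every: 30m, offset: 5m}

    telegram_token = secrets.get(key: "telegram_token")
    telegram_channel_ID = secrets.get(key: "telegram_channel_ID")

    // 1. Query the data to monitor
    data =
        from(bucket: "example-bucket")
            |> range(start: tasks.lastSuccess(orTime: -task.every))
            |> filter(fn: (r) => r._measurement == "example-measurement")
            |> filter(fn: (r) => r.tagKey1 == "example-tag-value")
            |> filter(fn: (r) => r._field == "example-field")

    // 2. Assign a level to each row based on the example threshold
    states =
        data
            |> map(fn: (r) => ({r with _level: if r._value > 40.0 then "crit" else "ok"}))

    // 3. Write the states back to InfluxDB
    states
        // (Optional) Rename the measurement before writing
        // |> set(key: "_measurement", value: "new-measurement")
        |> to(bucket: "example-bucket")

    // 4. Get the last state written by the previous task run
    last_state_previous_task =
        from(bucket: "example-bucket")
            |> range(start: date.sub(d: task.every, from: tasks.lastSuccess(orTime: -task.every)))
            |> filter(fn: (r) => r._measurement == "example-measurement")
            |> filter(fn: (r) => r.tagKey1 == "example-tag-value")
            |> filter(fn: (r) => r._field == "example-field")
            |> last()

    // 5. Union the two streams and sort chronologically
    unioned_states =
        union(tables: [states, last_state_previous_task])
            |> sort(columns: ["_time"])

    // 6. Keep only rows where the level changed
    state_changes =
        unioned_states
            |> monitor.stateChangesOnly()

    // 7. Send a Telegram message for each state change
    state_changes
        |> map(
            fn: (r) =>
                ({
                    _value:
                        telegram.message(
                            token: telegram_token,
                            channel: telegram_channel_ID,
                            text: "state change at ${string(v: r._value)} at ${string(v: r._time)}",
                        ),
                }),
        )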
