

abstract syntax tree (AST)

Tree representation of source code that shows the structure, content, and rules of programming statements and discards additional syntax elements. The tree is hierarchical, with elements of program statements broken down into their parts.

For more information about AST design, see Abstract Syntax Tree on Wikipedia.


agent

A background process started by (or on behalf of) a user that typically requires user input.

Telegraf is an agent that requires user input (a configuration file) to gather metrics from declared input plugins and send metrics to declared output plugins, based on the plugins enabled for a configuration.

Related entries: input plugin, output plugin, daemon

aggregator plugin

Receives metrics from input plugins, creates aggregate metrics, and then passes aggregate metrics to configured output plugins.

Related entries: input plugin, output plugin, processor plugin


aggregate

A function that returns an aggregated value across a set of points. For a list of available aggregation functions, see Flux aggregate functions.

Related entries: function, selector, transformation


bar graph

A visual representation in the InfluxDB user interface used to compare variables (bars) and plot categorical data. A bar graph has spaces between bars, can be sorted in any order, and bars in the graph typically have the same width.

Related entries: histogram


batch

A collection of points in line protocol format, separated by newlines (0x0A). Submitting a batch of points using a single HTTP request to the write endpoints drastically increases performance by reducing the HTTP overhead. InfluxData typically recommends batch sizes of 5,000-10,000 points. In some use cases, performance may improve with significantly smaller or larger batches.

Related entries: line protocol, point
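As a sketch, a batch can be assembled in Python like this (the measurement, tag, and field names below are illustrative, not from the docs):

```python
# Assemble a batch of line protocol points, newline (0x0A) separated.
# Measurement, tag, and field names here are hypothetical examples.
points = [
    "cpu,host=server01 usage=0.64 1556813561098000000",
    "cpu,host=server02 usage=0.25 1556813561098000000",
    "mem,host=server01 used_percent=23.4 1556813561098000000",
]
batch = "\n".join(points)  # send this body in a single HTTP write request
print(batch.count("\n") + 1)  # number of points in the batch
```

The whole string is submitted as one write request body, amortizing HTTP overhead across all points.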

batch size

The number of lines or individual data points in a line protocol batch. The Telegraf agent sends metrics to output plugins in batches rather than individually. Batch size controls the size of each write batch that Telegraf sends to the output plugins.

Related entries: output plugin


bin

In a cumulative histogram, a bin includes all data points less than or equal to a specified upper bound. In a normal histogram, a bin includes all data points between the upper and lower bounds.


block

In Flux, a block is a possibly empty sequence of statements within matching braces ({ }). Two types of blocks exist in Flux:

  • Explicit blocks in the source code, for example:

    Block         = "{" StatementList "}"
    StatementList = { Statement }
  • Implicit blocks, including:

    • Universe: Encompasses all Flux source text.
    • Package: Each package includes a package block that contains Flux source text for the package.
    • File: Each file has a file block containing Flux source text in the file.
    • Function: Each function literal has a function block with Flux source text (even if not explicitly declared).

Related entries: implicit block, explicit block


boolean

A data type with two possible values: true or false. By convention, you can express true as the integer 1 and false as the integer 0 (zero). In annotated CSV, columns that contain boolean values are annotated with the boolean datatype.


bucket

A bucket is a named location where time series data is stored. All buckets have a retention period. A bucket belongs to an organization.

bucket schema

In InfluxDB Cloud, an explicit bucket schema lets you strictly enforce the data that can be written into one or more measurements in a bucket by defining the column names, tags, fields, and data types allowed for each measurement. By default, buckets in InfluxDB Cloud have an implicit schema that lets you write data without restrictions on columns, fields, or data types.

Learn how to manage bucket schemas in InfluxDB Cloud.

Related entries: data type, field, measurement



check

Checks are part of queries used in monitoring to read input data and assign a status (_level) based on specified conditions. For example:

  crit: (r) => r._value > 90.0,
  warn: (r) => r._value > 80.0,
  info: (r) => r._value > 60.0,
  ok:   (r) => r._value <= 20.0,
  messageFn: (r) => "The current level is ${r._level}",

This check gives rows with a _value greater than 90.0 a crit _level; rows greater than 80.0 get a warn _level, and so on.

Learn how to create a check.

Related entries: check status, notification rule, notification endpoint

check status

A check gets one of the following statuses (_level): crit, info, warn, or ok. Check statuses are written to a status measurement in the _monitoring bucket.

Related entries: check, notification rule, notification endpoint


CSV

Comma-separated values (CSV) delimits text between commas to separate values. A CSV file stores tabular data (numbers and text) in plain text. Each line of the file is a data record. Each record consists of one or more fields, separated by commas. CSV file format is not fully standardized.

InfluxData uses annotated CSV (comma-separated values) format to encode HTTP responses and results returned to the Flux csv.from() function. For more detail, see Annotated CSV.

co-monitoring dashboard

The prebuilt co-monitoring dashboard displays details of your instance based on metrics from Telegraf, allowing you to monitor overall performance.


collect

Collect and write time series data to InfluxDB using line protocol, Telegraf or InfluxDB scrapers, the InfluxDB v2 API, the influx command line interface (CLI), the InfluxDB user interface (UI), and client libraries.

collection interval

The default global interval for collecting data from each Telegraf input plugin. The collection interval can be overridden by each individual input plugin’s configuration.

Related entries: input plugin

collection jitter

Collection jitter prevents every input plugin from collecting metrics simultaneously, which can have a measurable effect on the system. For each collection interval, every Telegraf input plugin will sleep for a random time between zero and the collection jitter before collecting the metrics.

Related entries: collection interval, input plugin


column

InfluxDB data is stored in tables within rows and columns. Columns store tag sets (indexed) and field sets. The only required column is time, which stores timestamps and is included in all InfluxDB tables.


comment

Use comments with Flux statements to describe your functions.

common log format (CLF)

A standardized text file format used by the InfluxDB web server to create log entries when generating server log files.


compaction

Compressing time series data to optimize disk usage.

continuous query (CQ)

Continuous queries are the predecessor to tasks in InfluxDB Cloud. Continuous queries run automatically and periodically on a database.

Related entries: function



daemon

A background process that runs without user input.


dashboard

InfluxDB dashboards visualize time series data. Use dashboards to query and graph data.

dashboard variable

Dashboard template variables define components of a cell query. Dashboard variables make it easier to interact with and explore your dashboard data. Use the InfluxDB user interface (UI) to add predefined template variables or customize your own template variables.

Data Explorer

Use the Data Explorer in the InfluxDB user interface (UI) to view, add, or delete variables and functions manually or using the Script Editor.

data model

A data model organizes elements of data and standardizes how they relate to one another and to the properties of real-world entities.

Flux uses a data model built from basic data types: tables, records, columns, and streams.

data service

Stores time series data and handles writes and queries.

data source

A source of data that InfluxDB collects or queries data from. Examples include InfluxDB buckets, Prometheus, Postgres, MySQL, and InfluxDB clients.

Related entries: bucket

data type

A data type is defined by the values it can take, the programming language used, or the operations that can be performed on it.

InfluxDB supports the following data types:

Data type          Alias/annotation
float              double
integer            int, long
unsigned integer   uint, unsignedLong
time               dateTime



database

In InfluxDB 1.x, a database represented a logical container for users, retention policies, continuous queries, and time series data. The InfluxDB 2.x equivalent of this concept is an InfluxDB bucket.

Related entries: continuous query, retention policy, user


date-time

InfluxDB stores the date-time format for each data point in a timestamp with nanosecond-precision Unix time. Specifying a timestamp is optional. If a timestamp isn't specified for a data point, InfluxDB uses the server's local nanosecond timestamp in UTC.


downsample

Aggregating high resolution data into lower resolution data to preserve disk space.


duration

A data type that represents a duration of time, for example: 1s, 1m, 1h, 1d. Retention policies are set using durations. Data older than the duration is automatically dropped from the database.



event

Metrics gathered at irregular time intervals.

explicit block

In Flux, an explicit block is a possibly empty sequence of statements within matching braces ({ }) that is defined in the source code, for example:

Block         = "{" StatementList "}"
StatementList = { Statement }

Related entries: implicit block, block


expression

A combination of one or more constants, variables, operators, and functions.



field

The key-value pair in InfluxDB’s data structure that records metadata and the actual data value. Fields are required in InfluxDB’s data structure and they are not indexed - queries on field values scan all points that match the specified time range and, as a result, are not performant relative to tags.

Query tip: Compare fields to tags; tags are indexed.

Related entries: field key, field set, field value, tag

field key

The key of the key-value pair. Field keys are strings and they store metadata.

Related entries: field, field set, field value, tag key

field set

The collection of field keys and field values on a point.

Related entries: field, field key, field value, point

field value

The value of a key-value pair. Field values are the actual data; they can be strings, floats, integers, or booleans. A field value is always associated with a timestamp.

Field values are not indexed - queries on field values scan all points that match the specified time range and, as a result, are not performant.

Query tip: Compare field values to tag values; tag values are indexed.

Related entries: field, field key, field set, tag value, timestamp

file block

A file block is a fixed-length chunk of data read into memory when requested by an application.

Related entries: block


float

A real number written with a decimal point dividing the integer and fractional parts (1.0, 3.14, -20.1). InfluxDB supports 64-bit float values. In annotated CSV, columns that contain float values are annotated with the double datatype.

flush interval

The global interval for flushing data from each Telegraf output plugin to its destination. This value should not be set lower than the collection interval.

Related entries: collection interval, flush jitter, output plugin

flush jitter

Flush jitter prevents every Telegraf output plugin from sending writes simultaneously, which can overwhelm some data sinks. Each flush interval, every Telegraf output plugin will sleep for a random time between zero and the flush jitter before emitting metrics. Flush jitter smooths out write spikes when running a large number of Telegraf instances.

Related entries: flush interval, output plugin


Flux

A lightweight scripting language for querying databases (like InfluxDB) and working with data.


function

Flux functions aggregate, select, and transform time series data. For a complete list of Flux functions, see Flux functions.

Related entries: aggregate, selector, transformation

function block

In Flux, each file has a file block containing all Flux source text in that file. Each function literal has its own function block even if not explicitly declared.



gauge

A type of visualization that displays the single most recent value for a time series. A gauge typically displays one or more measures from a single row, and is not designed to display multiple rows of data. Elements include a range, major and minor tick marks (within the range), and a pointer (needle) indicating the single most recent value.


graph

A diagram that visually depicts the relation between variable quantities measured along specified axes.

group key

Group keys determine the schema and contents of tables in Flux output. A group key is a list of columns for which every row in the table has the same value. Columns with unique values in each row are not part of the group key.

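As a rough Python analogy (column names are hypothetical, in the style of Flux output), rows that share the same values in the group-key columns belong to the same table:

```python
from itertools import groupby

# Rows from a hypothetical Flux result. "_measurement" and "host" form the
# group key; "_value" differs per row, so it is not part of the group key.
rows = [
    {"_measurement": "cpu", "host": "a", "_value": 0.5},
    {"_measurement": "cpu", "host": "a", "_value": 0.6},
    {"_measurement": "cpu", "host": "b", "_value": 0.7},
]
group_key = ("_measurement", "host")
key_of = lambda r: tuple(r[c] for c in group_key)
tables = {k: list(g) for k, g in groupby(sorted(rows, key=key_of), key=key_of)}
print(len(tables))  # one table per unique group-key value
```

With two distinct (_measurement, host) combinations, the stream contains two tables.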

gzip

gzip is a type of data compression that compresses chunks of data; the original data is restored by unzipping the compressed file. gzip files use the .gz file extension.



histogram

A visual representation of statistical information that uses rectangles to show the frequency of data items in successive, equal intervals or bins.



identifier

Identifiers are tokens that refer to task names, bucket names, field keys, measurement names, tag keys, and user names. For examples and rules, see Flux language lexical elements.

Related entries: bucket, field key, measurement, tag key, user

implicit block

In Flux, an implicit block is a possibly empty sequence of statements within matching braces ({ }) that includes the following types:

  • Universe: Encompasses all Flux source text.
  • Package: Each package includes a package block that contains Flux source text for the package.
  • File: Each file has a file block containing Flux source text in the file.
  • Function: Each function literal has a function block with Flux source text (even if not explicitly declared).

Related entries: explicit block, block


influx

influx is a command line interface (CLI) that interacts with the InfluxDB daemon (influxd).


influxd

influxd is the InfluxDB daemon that runs the InfluxDB server and other required processes.


InfluxDB

An open source time series database (TSDB) developed by InfluxData. Written in Go and optimized for fast, high-availability storage and retrieval of time series data in fields such as operations monitoring, application metrics, Internet of Things sensor data, and real-time analytics.

InfluxDB UI

The graphical web interface provided by InfluxDB for visualizing data and managing InfluxDB functionality.


InfluxQL

The SQL-like query language used to query data in InfluxDB 1.x. The preferred method for querying data in InfluxDB Cloud is the Flux language.

input plugin

Telegraf input plugins actively gather metrics and deliver them to the core agent, where aggregator, processor, and output plugins can operate on the metrics. In order to activate an input plugin, it needs to be enabled and configured in Telegraf’s configuration file.

Related entries: aggregator plugin, collection interval, output plugin, processor plugin


instance

An entity comprising data on a server (or virtual server in cloud computing).


integer

A whole number that is positive, negative, or zero (0, -5, 143). InfluxDB supports 64-bit integers (minimum: -9223372036854775808, maximum: 9223372036854775807). In annotated CSV, columns that contain integers are annotated with the long datatype.

Related entries: unsigned integer



JWT

Typically, JSON web tokens (JWT) are used to authenticate users between an identity provider and a service provider. A server can generate a JWT to assert any business processes. For example, an “admin” token sent to a client can prove the client is logged in as admin. Tokens are signed by one party’s private key (typically, the server). Private keys are used by both parties to verify that a token is legitimate.

JWT uses an open standard specified in RFC 7519.


Jaeger

Open source tracing used in distributed systems to monitor and troubleshoot transactions.


JSON

JavaScript Object Notation (JSON) is an open-standard file format that uses human-readable text to transmit data objects consisting of attribute–value pairs and array data types.



keyword

A keyword is reserved by a program because it has special meaning. Every programming language has a set of keywords (reserved names) that cannot be used as an identifier.

See a list of Flux keywords.



literal

A literal is value in an expression, a number, character, string, function, record, or array. Literal values are interpreted as defined.

See examples of Flux literals.


logs

Logs record information. Event logs describe system events and activity that help to describe and diagnose problems. Transaction logs describe changes to stored data that help recover data if a database crashes or other errors occur.

The InfluxDB Cloud user interface (UI) can be used to view log history and data.

Line protocol (LP)

The text based format for writing points to InfluxDB. See line protocol.



measurement

The part of InfluxDB’s structure that describes the data stored in the associated fields. Measurements are strings.

With InfluxDB v3, a time series measurement equates to a relational database table with fields, tags, and timestamp as columns.

Related entries: field, series, table


member

A user in an organization.


metric

Data tracked over time.

metric buffer

The metric buffer caches individual metrics when writes are failing for a Telegraf output plugin. Telegraf will attempt to flush the buffer upon a successful write to the output. The oldest metrics are dropped first when this buffer fills.

Related entries: output plugin

missing values

Denoted by a null value. Identifies missing information, which may be useful to include in an error message.

The Flux data model includes missing values (null).



node

An independent influxd process.

Related entries: server

notification endpoint

The notification endpoint specifies the Slack or PagerDuty endpoint to send a notification and contains configuration details for connecting to the endpoint. Learn how to create a notification endpoint.

Related entries: check, notification rule

notification rule

A notification rule specifies a status level (and tags) to alert on, the notification message to send for the specified status level (or change in status level), and the interval or schedule you want to check the status level (and tags). If conditions are met, the notification rule sends a message to the notification endpoint and stores a receipt in a notification measurement in the _monitoring bucket. For example, a notification rule may specify a message to send to a Slack endpoint when a status level is critical (crit).

Learn how to create a notification rule.

Related entries: check, notification endpoint


now()

The local server’s nanosecond timestamp.

Related entries: timestamp


null

A data type that represents a missing or unknown value. Denoted by the null value.



operator

A symbol that usually represents an action or process. For example: +, -, >.


operand

The object or value on either side of an operator.


option

An option represents a storage location for any value of a specified type. Options are mutable and can hold different values during their lifetime.

See built-in Flux options.

option assignment

An option assignment binds an identifier to an option.

Learn about the option assignment in Flux.


organization

A workspace for a group of users. All dashboards, tasks, buckets, members, and so on, belong to an organization.


owner

A type of role for a user. Owners have read/write permissions. Users can have owner roles for bucket and organization resources.

Role permissions are separate from API token permissions. For additional information on API tokens, see token.

output plugin

Telegraf output plugins deliver metrics to their configured destination. To activate an output plugin, enable and configure the plugin in Telegraf’s configuration file.

Related entries: aggregator plugin, flush interval, input plugin, processor plugin



parameter

A key-value pair used to pass information to functions.


pipe

Method for passing information from one process to another. For example, an output parameter from one process is input to another process. Information passed through a pipe is retained until the receiving process reads the information.

pipe-forward operator

An operator (|>) used in Flux to chain operations together. The output of one function becomes the input to the next function.


point

In InfluxDB, a point represents a single data record, similar to a row in a SQL database table. Each point:

  • has a measurement, a tag set, a field key, a field value, and a timestamp;
  • is uniquely identified by its series and timestamp.

In a series, each point has a unique timestamp. If you write a point to a series with a timestamp that matches an existing point, the field set becomes a union of the old and new field set, where any ties go to the new field set.

Related entries: measurement, tag set, field set, timestamp
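The field-set union on a duplicate timestamp behaves like a Python dict merge; here is a minimal sketch with hypothetical field names:

```python
# Writing a point with the same series and timestamp as an existing point:
# the stored field set becomes the union of old and new; new values win ties.
old_fields = {"temp": 20.1, "humidity": 55.0}
new_fields = {"temp": 21.0, "pressure": 1012.0}
merged = {**old_fields, **new_fields}  # "temp" ties; the new value wins
print(merged)  # {'temp': 21.0, 'humidity': 55.0, 'pressure': 1012.0}
```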

primary key

With the InfluxDB v3 storage engine, the primary key is the list of columns used to uniquely identify each row in a table. Rows are uniquely identified by their timestamp and tag set.


precision

The precision configuration setting determines the timestamp precision retained for input data points. All incoming timestamps are truncated to the specified precision. Valid precisions are ns, us or µs, ms, and s.

In Telegraf, truncated timestamps are padded with zeros to create a nanosecond timestamp. Telegraf output plugins emit timestamps in nanoseconds. For example, if the precision is set to ms, the nanosecond epoch timestamp 1480000000123456789 is truncated to 1480000000123 in millisecond precision and padded with zeroes to make a new, less precise nanosecond timestamp of 1480000000123000000. Telegraf output plugins do not alter the timestamp further. The precision setting is ignored for service input plugins.

Related entries: aggregator plugin, input plugin, output plugin, processor plugin, service input plugin, timestamp
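The truncate-then-pad behavior can be sketched in Python, using the exact numbers from this entry:

```python
# Truncate a nanosecond timestamp to a coarser precision, then pad back to
# nanoseconds with zeros, mirroring the Telegraf behavior described above.
def truncate_and_pad(ns_timestamp: int, ns_per_unit: int) -> int:
    return (ns_timestamp // ns_per_unit) * ns_per_unit

ns = 1480000000123456789
print(truncate_and_pad(ns, 10**6))  # ms precision: 1480000000123000000
```

Integer division drops the sub-millisecond digits; multiplying back restores a nanosecond-scale value padded with zeros.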

predicate expression

A predicate expression compares two values and returns true or false based on the relationship between the two values. A predicate expression consists of a left operand, a comparison operator, and a right operand.

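In Python terms, a predicate expression is simply a comparison that evaluates to a boolean (the _value field name is borrowed from the check example earlier in this glossary):

```python
# Left operand, comparison operator, right operand -> boolean result.
row = {"_value": 95.0}
is_critical = row["_value"] > 90.0
print(is_critical)  # True
```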
predicate function

A Flux predicate function is an anonymous function that returns true or false based on one or more predicate expressions.

Example predicate function
(r) => r.foo == "bar" and r.baz != "quz"


process

A set of predetermined rules. A process can refer to instructions being executed by the computer processor or refer to the act of manipulating data.

In Flux, you can process data with InfluxDB tasks.

processor plugin

Telegraf processor plugins transform, decorate, and filter metrics collected by input plugins, passing the transformed metrics to the output plugins.

Related entries: aggregator plugin, input plugin, output plugin

Prometheus format

A simple text-based format for exposing metrics and ingesting them into Prometheus or InfluxDB using InfluxDB scrapers.

Collect data from any accessible endpoint that provides data in the Prometheus exposition format.



query

A Flux script that returns time series data, including tags and timestamps.

See Query data in InfluxDB.



REPL

A Read-Eval-Print Loop (REPL) is an interactive programming environment where you type a command and immediately see the result. See Flux REPL for information on building and using the REPL.


record

A tuple of named values represented using a record type.

regular expressions

Regular expressions (regex or regexp) are patterns used to match character combinations in strings.

rejected points

In a batch of data, points that InfluxDB couldn’t write to a bucket. Field type conflicts are a common cause of rejected points.

retention period

The duration of time that a bucket retains data. InfluxDB drops points with timestamps older than their bucket’s retention period. The minimum retention period is one hour.

Related entries: bucket, shard group duration

retention policy (RP)

Retention policy is an InfluxDB 1.x concept that represents the duration of time that each data point in the retention policy persists. The InfluxDB 2.x equivalent is retention period. For more information about retention policies, see the latest 1.x documentation.

Related entries: retention period

RFC3339 timestamp

A timestamp that uses the human-readable DateTime format proposed in RFC 3339 (for example: 2020-01-01T00:00:00.00Z). Flux and InfluxDB clients return query results with RFC3339 timestamps.

Related entries: RFC3339Nano timestamp, timestamp, unix timestamp
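For illustration, Python's standard library can produce an RFC 3339 timestamp like the example above (the epoch value is an assumption chosen to correspond to 2020-01-01):

```python
from datetime import datetime, timezone

# Format a unix timestamp (seconds) as an RFC 3339 date-time string in UTC.
dt = datetime.fromtimestamp(1577836800, tz=timezone.utc)
rfc3339 = dt.strftime("%Y-%m-%dT%H:%M:%SZ")
print(rfc3339)  # 2020-01-01T00:00:00Z
```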

RFC3339Nano timestamp

A Golang representation of the RFC 3339 DateTime format that uses nanosecond resolution, for example: 2006-01-02T15:04:05.999999999Z07:00.

InfluxDB clients can return RFC3339Nano timestamps in log events and CSV-formatted query results.

Related entries: RFC3339 timestamp, timestamp, unix timestamp



schema

How data is organized in InfluxDB. The fundamentals of the InfluxDB schema are buckets (which include retention policies), series, measurements, tag keys, tag values, and field keys.

Related entries: bucket, field key, measurement, series, tag key, tag value


scraper

InfluxDB scrapes data from specified targets at regular intervals and writes the data to an InfluxDB bucket. Data can be scraped from any accessible endpoint that provides data in the Prometheus exposition format.


secret

Secrets are key-value pairs that contain information you want to control access to, such as API keys, passwords, or certificates.


selector

A Flux function that returns a single point from the range of specified points. See Flux selector functions for a complete list of available selector functions.

Related entries: aggregate, function, transformation


series

A collection of timestamps and field values that share a common series key (measurement, tag set, and field key).

Related entries: field set, measurement, series key, tag set

series cardinality

The number of unique measurement, tag set, and field key combinations in an InfluxDB bucket.

For example, assume that an InfluxDB bucket has one measurement. The single measurement has two tag keys: email and status. If the data contains three different email values, and each email address is associated with two different status values, the series cardinality for the measurement is 6 (3 × 2 = 6):

email   status
…       start
…       finish
(each of the three email values has both a start and a finish row)

In some cases, this calculation may overestimate series cardinality because of the presence of dependent tags (tags scoped by another tag). Dependent tags do not increase series cardinality. Adding the tag firstname to the preceding example would not increase the series cardinality to 18 (3 × 2 × 3 = 18). The series cardinality would remain unchanged at 6, as firstname is already scoped by the email tag:

email   status  firstname
…       start   lorraine
…       finish  lorraine
…       start   marvin
…       finish  marvin
…       start   clifford
…       finish  clifford

Related entries: field key, measurement, tag key, tag set
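The 3 × 2 = 6 arithmetic above can be checked with a small Python sketch (the measurement name, field key, and email values are placeholders, not from the docs):

```python
# Series cardinality = number of unique (measurement, tag set, field key)
# combinations. Placeholder values stand in for the example's data.
emails = ["a@example.com", "b@example.com", "c@example.com"]
statuses = ["start", "finish"]
series = {
    ("m", (("email", e), ("status", s)), "field")
    for e in emails
    for s in statuses
}
print(len(series))  # 3 emails x 2 statuses = 6
```

Adding a firstname tag scoped by email would add a third element to each tag set without creating new combinations, leaving the set size at 6.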

series file

A file created and used by the InfluxDB OSS storage engine that contains a set of all series keys across the entire database.

series key

A series key identifies a particular series by measurement, tag set, and field key.

For example:

# measurement, tag set, field key
h2o_level, location=santa_monica, h2o_feet

Related entries: series
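A minimal Python sketch of composing the series key from the example above:

```python
# Compose a series key from measurement, tag set, and field key.
measurement = "h2o_level"
tag_set = {"location": "santa_monica"}
field_key = "h2o_feet"
tags = ",".join(f"{k}={v}" for k, v in sorted(tag_set.items()))
series_key = f"{measurement},{tags} {field_key}"
print(series_key)  # h2o_level,location=santa_monica h2o_feet
```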


server

A computer, virtual or physical, running InfluxDB.

Related entries: node

service input plugin

Telegraf input plugins that run in a passive collection mode while the Telegraf agent is running. Service input plugins listen on a socket for known protocol inputs, or apply their own logic to ingested metrics before delivering metrics to the Telegraf agent.

Related entries: aggregator plugin, input plugin, output plugin, processor plugin


shard

A shard contains encoded and compressed data for a specific set of series. A shard consists of one or more TSM files on disk. All points in a series in a given shard group are stored in the same shard (TSM file) on disk. A shard belongs to a single shard group.

For more information, see Shards and shard groups (OSS).

Related entries: series, shard group duration, shard group, tsm

shard group

Shard groups are logical containers for shards organized by bucket. Every bucket with data has at least one shard group. A shard group contains all shards with data for the time interval covered by the shard group. The interval spanned by each shard group is the shard group duration.

For more information, see Shards and shard groups (OSS).

Related entries: bucket, retention period, series, shard, shard group duration

shard group duration

The duration of time or interval that each shard group covers. Set the shard-group-duration for each bucket.


Single Stat

A visualization that displays the numeric value of the most recent point in a table (or series) returned by a query.

Snappy compression

InfluxDB uses Snappy compression to compress batches of points. To improve space and disk IO efficiency, each batch is compressed before being written to disk.


step-plot

A data visualization that displays time series data in a staircase graph. Generate a step-plot using the step interpolation option for line graphs.


stream

Flux processes streams of data. A stream includes a series of tables over a sequence of time intervals.


string

A data type used to represent text. In annotated CSV, columns that contain string values are annotated with the string datatype.



TCP

InfluxDB uses Transmission Control Protocol (TCP) port 8086 for client-server communication over the InfluxDB HTTP API.


table

With InfluxDB v3, a time series measurement equates to a relational database table with fields, tags, and timestamp as columns.

Related entries: measurement


tag

The key-value pair in InfluxDB’s data structure that records metadata. Tags are an optional part of InfluxDB’s data structure but they are useful for storing commonly queried metadata; tags are indexed so queries on tags are performant. Query tip: Compare tags to fields; fields are not indexed.

Related entries: field, tag key, tag set, tag value

tag key

The key of a tag key-value pair. Tag keys are strings and store metadata. Tag keys are indexed so queries on tag keys are processed quickly.

Query tip: Compare tag keys to field keys. Field keys are not indexed.

Related entries: field key, tag, tag set, tag value

tag set

The collection of tag keys and tag values on a point.

Related entries: point, series, tag, tag key, tag value

tag value

The value of a tag key-value pair. Tag values are strings and they store metadata. Tag values are indexed so queries on tag values are processed quickly.

Related entries: tag, tag key, tag set


task

A scheduled Flux query that runs periodically and may store results in a specified measurement. Examples include downsampling and batch jobs. For more information, see Process Data with InfluxDB tasks.

Related entries: function

technical preview

A new feature released to gather feedback from customers and the InfluxDB community. Send feedback to InfluxData via Community Slack or our Community Site.


Telegraf

A plugin-driven agent that collects, processes, aggregates, and writes metrics.

Related entries: Automatically configure Telegraf, Manually configure Telegraf, Telegraf plugins, Use Telegraf to collect data, View a Telegraf configuration

time (data type)

A data type that represents a single point in time with nanosecond precision.

time series data

Sequence of data points typically consisting of successive measurements made from the same source over a time interval. Time series data shows how data evolves over time. On a time series data graph, one of the axes is always time. Time series data may be regular or irregular. Regular time series data changes in constant intervals. Irregular time series data changes at non-constant intervals.


timestamp

The date and time associated with a point. In InfluxDB, a timestamp is a nanosecond-scale unix timestamp in UTC.

To specify time when writing data, see Elements of line protocol. To specify time when querying data, see Query InfluxDB with Flux.

Related entries: point, precision, RFC3339 timestamp, unix timestamp
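For example, an RFC3339 timestamp can be converted to a nanosecond-scale unix timestamp with Python's standard library:

```python
from datetime import datetime

# Parse an RFC3339 timestamp and convert it to nanoseconds since the Unix epoch.
dt = datetime.fromisoformat("2020-01-01T00:00:00+00:00")
ns = int(dt.timestamp()) * 1_000_000_000
print(ns)  # 1577836800000000000
```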


token

Tokens (or API tokens) verify user and organization permissions in InfluxDB. There are different types of API tokens:

  • All Access token: grants full read and write access to all resources in an organization.
  • Read/Write token: grants read or write access to specific resources in an organization.

Related entries: Create a token.


tracing

By default, tracing is disabled in InfluxDB OSS. To enable tracing or set other InfluxDB OSS configuration options, see InfluxDB OSS configuration options.


transformation

An InfluxQL function that returns a value or a set of values calculated from specified points, but does not return an aggregated value across those points. See InfluxQL functions for a complete list of available transformations.

Related entries: aggregate, function, selector

TSI (Time Series Index)

An index that supports high series cardinality by storing index data on disk rather than entirely in memory. TSI uses the operating system’s page cache to pull frequently accessed data into memory and keep infrequently accessed data on disk.


TSL (Time Series Logs)

The Time Series Logs (TSL) extension (.tsl) identifies Time Series Index (TSI) log files, generated by the tsi1 engine.

TSM (Time Structured Merge tree)

A data storage format that allows greater compaction and higher write and read throughput than B+ or LSM tree implementations. For more information, see Storage engine.

Related entries: TSI



UDP (User Datagram Protocol)

A connectionless transport protocol. When a request is made, UDP packets are sent to the recipient without verifying delivery; the sender immediately continues sending the next packets. This lets computers communicate more quickly, so UDP is used when speed is desirable and error correction is not necessary.
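The fire-and-forget behavior can be seen in a minimal Python sketch over the loopback interface (the payload and port choice are arbitrary examples):

```python
import socket

# Receiver bound to an OS-assigned loopback port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
addr = recv.getsockname()

# Sender: sendto() returns immediately; nothing confirms delivery.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"cpu usage=0.64", addr)

data, _ = recv.recvfrom(1024)
print(data)  # b'cpu usage=0.64'
send.close()
recv.close()
```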

universe block

An implicit block that encompasses all Flux source text in a package.

unix timestamp

Counts time since Unix Epoch (1970-01-01T00:00:00Z UTC) in specified units (precision). Specify timestamp precision when writing data to InfluxDB. InfluxDB supports the following unix timestamp precisions:

Precision | Description  | Example
ns        | Nanoseconds  | 1577836800000000000
us        | Microseconds | 1577836800000000
ms        | Milliseconds | 1577836800000
s         | Seconds      | 1577836800

The examples above represent 2020-01-01T00:00:00Z UTC.

Related entries: timestamp, RFC3339 timestamp
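The precision table above can be reproduced from a single epoch value; a small Python sketch:

```python
from datetime import datetime, timezone

# Seconds since the Unix epoch for 2020-01-01T00:00:00Z.
seconds = int(datetime(2020, 1, 1, tzinfo=timezone.utc).timestamp())

factors = {"s": 1, "ms": 1_000, "us": 1_000_000, "ns": 1_000_000_000}
for precision, factor in factors.items():
    print(precision, seconds * factor)
```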

unsigned integer

A whole number that is positive or zero (0, 143). Also known as a “uinteger.” InfluxDB supports 64-bit unsigned integers (minimum: 0, maximum: 18446744073709551615). In annotated CSV, columns that contain unsigned integers are annotated with the unsignedLong datatype.

Related entries: integer
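A quick Python check of the 64-bit bounds quoted above (the helper function is a hypothetical example, not part of any InfluxDB client):

```python
UINT64_MAX = 2**64 - 1  # 18446744073709551615

def fits_uint64(n: int) -> bool:
    """Return True if n is in range for an InfluxDB unsigned integer."""
    return 0 <= n <= UINT64_MAX

print(UINT64_MAX)        # 18446744073709551615
print(fits_uint64(143))  # True
print(fits_uint64(-1))   # False
```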


user

InfluxDB users are granted permission to access InfluxDB. Users are added as members of an organization and are given a unique API token.


values per second

The preferred measurement of the rate at which data is persisted to InfluxDB. Write speeds are generally quoted in values per second.

To calculate the values per second rate, multiply the number of points written per second by the number of values stored per point. For example, if the points have four fields each, and a batch of 5000 points is written 10 times per second, the values per second rate is:

4 field values per point × 5000 points per batch × 10 batches per second = 200,000 values per second

Related entries: batch, field, point
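The example calculation above, expressed as Python arithmetic:

```python
# Values per second = field values per point × points per batch × batches per second.
field_values_per_point = 4
points_per_batch = 5_000
batches_per_second = 10

values_per_second = field_values_per_point * points_per_batch * batches_per_second
print(values_per_second)  # 200000
```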


variable

A storage location (identified by a memory address) paired with an associated symbolic name (an identifier). A variable contains some known or unknown quantity of information referred to as a value.

variable assignment

A statement that sets or updates the value stored in a variable.

In Flux, the variable assignment creates a variable bound to an identifier and gives it a type and value. A variable keeps the same type and value for the remainder of its lifetime. An identifier assigned to a variable in a block cannot be reassigned in the same block.



windowing

Grouping data based on specified time intervals. For information about how to window in Flux, see Window and aggregate data with Flux.
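A minimal Python sketch of windowing (hypothetical points, 10-second windows, mean aggregation):

```python
from collections import defaultdict

# (unix-second timestamp, value) pairs -- example data.
points = [(1577836800, 1.0), (1577836805, 2.0), (1577836812, 3.0)]
window = 10  # window length in seconds

buckets = defaultdict(list)
for ts, value in points:
    buckets[ts - ts % window].append(value)  # floor timestamp to its window start

means = {start: sum(vs) / len(vs) for start, vs in sorted(buckets.items())}
print(means)  # {1577836800: 1.5, 1577836810: 3.0}
```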


The future of Flux

Flux is going into maintenance mode and will not be supported in InfluxDB 3.0. This was a decision based on the broad demand for SQL and the continued growth and adoption of InfluxQL. We are continuing to support Flux for users in 1.x and 2.x so you can continue using it with no changes to your code. If you are interested in transitioning to InfluxDB 3.0 and want to future-proof your code, we suggest using InfluxQL.

For information about the future of Flux, see the following:

  • InfluxDB Cloud powered by TSM