Annotated CSV

You can write data to InfluxDB using annotated CSV and the InfluxDB HTTP API or upload a CSV file in the InfluxDB UI.

CSV tables must be encoded in UTF-8 and Unicode Normal Form C as defined in UAX15. InfluxDB removes carriage returns before newline characters.

CSV response format

InfluxDB annotated CSV supports the encodings listed below.


A table may have the following rows and columns.


  • Annotation rows: describe column properties.

  • Header row: defines column labels (one header row per table).

  • Record row: describes data in the table (one record per row).



In addition to the data columns, a table may include the following columns:

  • Annotation column: Contains the name of an annotation. Used only in annotation rows and always the first column. The value can be empty or a supported annotation name. Because the response format uses a comma (,) to separate the annotation name from the values in its row, every row in the table starts with a leading comma, which appears as an empty first column for the entire length of the table.

  • Result column: Contains the name of the result specified by the query.

  • Table column: Contains a unique ID for each table in a result.
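For illustration, a minimal annotated CSV response with all three column kinds might look like the following (column names and values are hypothetical):

```csv
#datatype,string,long,dateTime:RFC3339,double
#default,_result,,,
,result,table,_time,_value
,,0,2020-01-01T00:00:00Z,1.2
,,0,2020-01-01T00:01:00Z,1.4
```

Every row starts with a leading comma (the annotation column); the empty result values fall back to the #default value _result, and the table column identifies table 0.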

Multiple tables and results

If a file or data stream contains multiple tables or results, the following requirements must be met:

  • A table column indicates which table a row belongs to.
  • All rows in a table are contiguous.
  • An empty row delimits a new table boundary in the following cases:
    • Between tables in the same result that do not share a common table schema.
    • Between concatenated CSV files.
  • Each new table boundary starts with new annotation and header rows.


Dialect options

Flux supports the following dialect options for the text/csv format.

Option         Description                                                                       Default
------         -----------                                                                       -------
header         If true, the header row is included.                                              true
delimiter      Character used to delimit columns.                                                ,
quoteChar      Character used to quote values that contain the delimiter.                        "
annotations    List of annotations to encode (datatype, group, or default).                      empty
commentPrefix  String prefix that identifies a comment. Always added to annotation rows.         #
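When querying over the HTTP API, dialect options like these can be set in the request body. Below is a sketch of a /api/v2/query request body; the query is a placeholder, and the exact set of dialect fields the API accepts should be checked against the API reference:

```json
{
  "query": "from(bucket: \"example-bucket\") |> range(start: -1h)",
  "dialect": {
    "header": true,
    "delimiter": ",",
    "annotations": ["datatype", "group", "default"],
    "commentPrefix": "#"
  }
}
```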


Annotations

Annotation rows describe column properties and start with # (or the commentPrefix value). The first column of an annotation row always contains the annotation name. Subsequent columns contain annotation values, as shown in the table below.

Annotation name  Values                                Description
---------------  ------                                -----------
datatype         a data type or line protocol element  Describes the type of data or which line protocol element the column represents.
group            boolean flag (true or false)          Indicates whether the column is part of the group key.
default          a value of the column's data type     Value to use for rows with an empty value.
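As a sketch of how the default annotation behaves, the helper below substitutes the #default value for empty cells. It is a hypothetical illustration, not InfluxDB's implementation:

```python
import csv
import io

def apply_defaults(annotated_csv: str) -> list:
    """Replace empty cells with the matching #default annotation value (sketch)."""
    defaults = []
    rows = []
    for row in csv.reader(io.StringIO(annotated_csv)):
        if not row:
            continue
        if row[0] == "#default":
            defaults = row[1:]          # values after the annotation-name column
        elif row[0].startswith("#"):
            continue                    # other annotation rows (#datatype, #group)
        else:
            values = row[1:]            # skip the leading annotation column
            rows.append([v if v != "" else d for v, d in zip(values, defaults)])
    return rows
```

For example, with `#default,_result,,` in effect, an empty result cell in a record row comes back as `_result`.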

Data types

Datatype      Description
--------      -----------
boolean       "true" or "false"
unsignedLong  unsigned 64-bit integer
long          signed 64-bit integer
double        IEEE-754 64-bit floating-point number
string        UTF-8 encoded string
base64Binary  base64 encoded sequence of bytes as defined in RFC 4648
dateTime      instant in time; may be followed by a colon (:) and a description of the format (number, RFC3339, RFC3339Nano)
duration      length of time represented as an unsigned 64-bit integer number of nanoseconds
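A rough sketch of how these datatypes might map to native values in a client (an illustration, not InfluxDB's parser):

```python
import base64
from datetime import datetime

def parse_value(datatype: str, raw: str):
    """Convert a raw annotated-CSV cell to a native Python value (sketch)."""
    if datatype == "boolean":
        return raw == "true"
    if datatype in ("long", "unsignedLong", "duration"):
        return int(raw)                 # duration is a nanosecond count
    if datatype == "double":
        return float(raw)
    if datatype == "base64Binary":
        return base64.b64decode(raw)
    if datatype.startswith("dateTime"):
        # dateTime:number is a Unix nanosecond timestamp;
        # dateTime:RFC3339 / RFC3339Nano are RFC 3339 strings
        if datatype == "dateTime:number":
            return int(raw)
        return datetime.fromisoformat(raw.replace("Z", "+00:00"))
    return raw                          # string and anything unrecognized
```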

Line protocol elements

The datatype annotation accepts data types and line protocol elements. Line protocol elements identify how columns are converted into line protocol when using the influx write command to write annotated CSV to InfluxDB.

Line protocol element  Description
---------------------  -----------
measurement            Column value is the measurement.
field (default)        Column header is the field key; column value is the field value.
tag                    Column header is the tag key; column value is the tag value.
time                   Column value is the timestamp (alias for dateTime).
ignore or ignored      Column is ignored and not included in line protocol.

Mixing data types and line protocol elements

Columns with data types (other than dateTime) in the #datatype annotation are treated as fields when converted to line protocol. Columns without a specified data type default to field, and their values are left unmodified in line protocol. See the example below, as well as line protocol data types and format.
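As an illustration of this conversion, the sketch below turns one annotated CSV table into line protocol. It is a hypothetical helper, not the influx write implementation: it passes field values through unmodified and skips line protocol escaping and type suffixes.

```python
import csv
import io

def to_line_protocol(annotated_csv: str) -> list:
    """Convert one annotated CSV table to line protocol (simplified sketch)."""
    datatypes, header, lines = [], [], []
    for row in csv.reader(io.StringIO(annotated_csv)):
        if not row:
            continue
        if row[0].startswith("#datatype"):
            # "#datatype measurement,tag,..." keeps the first element in the
            # first cell after a space
            head = row[0].split(" ", 1)
            datatypes = ([head[1]] if len(head) > 1 else []) + row[1:]
        elif row[0].startswith("#"):
            continue                    # #group and #default not handled here
        elif not header:
            header = row
        else:
            measurement, ts, tags, fields = "", "", [], []
            for dtype, key, value in zip(datatypes, header, row):
                if dtype == "measurement":
                    measurement = value
                elif dtype == "tag":
                    tags.append(f"{key}={value}")
                elif dtype in ("time", "dateTime") or dtype.startswith("dateTime:"):
                    ts = value
                elif dtype in ("ignore", "ignored"):
                    continue
                else:
                    # field, or any data type: treated as a field,
                    # value left unmodified
                    fields.append(f"{key}={value}")
            tag_str = "," + ",".join(tags) if tags else ""
            lines.append(f"{measurement}{tag_str} {','.join(fields)} {ts}")
    return lines
```

Given a table whose #datatype annotation is measurement,tag,field,time, each record row becomes one line of the form measurement,tagKey=tagValue fieldKey=fieldValue timestamp.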

Time columns

A column with the time or dateTime #datatype annotation is used as the timestamp when converting to line protocol. If there are multiple time or dateTime columns, the last (rightmost) column is used as the timestamp; other time columns are ignored, and the influx write command outputs a warning.

Time column values should be Unix nanosecond timestamps, RFC3339, or RFC3339Nano.
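For example, an RFC3339 time column value can be converted to a Unix nanosecond timestamp. A sketch using only the Python standard library:

```python
from datetime import datetime, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def rfc3339_to_ns(value: str) -> int:
    """Convert an RFC3339 timestamp to a Unix nanosecond timestamp."""
    dt = datetime.fromisoformat(value.replace("Z", "+00:00"))
    delta = dt - EPOCH
    return (delta.days * 86_400 + delta.seconds) * 1_000_000_000 \
        + delta.microseconds * 1_000

# rfc3339_to_ns("1970-01-01T00:00:01Z") == 1_000_000_000
```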

Example line protocol elements in datatype annotation

#group false,false,false,false,false,false,false
#datatype measurement,tag,tag,field,field,ignored,time
#default ,,,,,,
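These annotations would be followed by a header row and record rows along these lines (a reconstruction consistent with the resulting line protocol below; the header names and the ignored column's values are illustrative):

```csv
m,cpu,host,time_steal,usage_user,test,time
cpu,cpu1,host1,0,2.7,a,1482669077000000000
cpu,cpu1,host2,0,2.2,b,1482669087000000000
```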

Resulting line protocol:

cpu,cpu=cpu1,host=host1 time_steal=0,usage_user=2.7 1482669077000000000
cpu,cpu=cpu1,host=host2 time_steal=0,usage_user=2.2 1482669087000000000
Example of mixing data types and line protocol elements
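The input for this example is not shown here. One input consistent with the resulting line protocol might look like the following, with the measurement and tag values supplied via the #default annotation (column names and timestamps are illustrative):

```csv
#datatype measurement,tag,string,double,boolean,long,unsignedLong,duration,dateTime:RFC3339
#default test,annotatedDatatypes,,,,,,,
m,name,s,d,b,l,ul,dur,time
,,str1,1,true,1,1,1ms,2020-01-11T10:10:00Z
,,str2,2,false,2,2,2us,2020-01-11T10:10:10Z
```

Note how the duration values are converted to nanosecond counts (1ms becomes 1000000) and the RFC3339 timestamps become Unix nanosecond timestamps.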

Resulting line protocol:

test,name=annotatedDatatypes s="str1",d=1,b=true,l=1i,ul=1u,dur=1000000i 1578737400000000000
test,name=annotatedDatatypes s="str2",d=2,b=false,l=2i,ul=2u,dur=2000i 1578737410000000000


Errors

If an error occurs during execution, a table returns with:

  • An error column that contains an error message.
  • A reference column with a unique reference code to identify more information about the error.
  • A second row with error properties.

If an error occurs:

  • Before results materialize, the HTTP status code indicates an error. Error details are encoded in the CSV table.
  • After partial results are sent to the client, the error is encoded as the next table and remaining results are discarded. In this case, the HTTP status code remains 200 OK.

An error encoded with the datatype annotation:

#datatype,string,long
,error,reference
,Failed to parse query,897

