Use InfluxDB client libraries to write line protocol data

Limited availability

InfluxDB Clustered is currently only available to a limited group of InfluxData customers. If you are interested in being part of the limited access group, please contact the InfluxData Sales team.

Use InfluxDB client libraries to build line protocol, and then write it to an InfluxDB database.

Construct line protocol

With a basic understanding of line protocol, you can now construct line protocol and write data to InfluxDB. Consider a use case where you collect data from sensors in your home. Each sensor collects temperature, humidity, and carbon monoxide readings. To collect this data, use the following schema:

  • measurement: home
    • tags
      • room: Living Room or Kitchen
    • fields
      • temp: temperature in °C (float)
      • hum: percent humidity (float)
      • co: carbon monoxide in parts per million (integer)
    • timestamp: Unix timestamp in second precision

The following example shows how to construct and write points that follow this schema.
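Before writing any client code, it can help to see what a point in this schema looks like as raw line protocol. The following is a minimal, hand-rolled sketch of the format in plain Python (no client library). It omits details the real format requires, such as escaping special characters and quoting string field values:

```python
def to_line_protocol(measurement, tags, fields, timestamp):
    """Build a line protocol string: measurement,tag_set field_set timestamp."""
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())

    # Integer field values get an "i" suffix; floats are written as-is.
    def fmt(v):
        return f"{v}i" if isinstance(v, int) else str(v)

    field_str = ",".join(f"{k}={fmt(v)}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {timestamp}"

line = to_line_protocol("home",
                        {"room": "Kitchen"},
                        {"temp": 72.0, "hum": 20.2, "co": 9},
                        1641024000)
print(line)
# home,room=Kitchen temp=72.0,hum=20.2,co=9i 1641024000
```

The client libraries below build these strings for you from `Point` objects, so you never have to assemble or escape line protocol by hand.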

Set up your project

The examples in this guide assume you followed the Set up InfluxDB and Write data instructions in the Get started tutorial.

After setting up InfluxDB and your project, you should have the following:

  • InfluxDB Clustered credentials (cluster URL, database name, and database token).

  • A directory for your project.

  • Credentials stored as environment variables or in a project configuration file–for example, a .env (“dotenv”) file.

  • Client libraries installed for writing data to InfluxDB.
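For example, credentials can be exported as environment variables in your shell. The values below are placeholders, not real credentials; substitute your own cluster URL, database token, and organization name:

```shell
# Placeholder values: replace with your cluster URL, database token, and org.
export INFLUX_URL="https://cluster-host.com"
export INFLUX_TOKEN="DATABASE_TOKEN"
export INFLUX_ORG="ORG_NAME"
```

The sample code in this guide reads these variables at runtime, so none of your credentials are hard-coded into the source files.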

The following example shows how to construct Point objects that follow the example home schema, and then write the points as line protocol to an InfluxDB Clustered database.

Go

  1. Install Go 1.13 or later.

  2. Inside of your project directory, install the client package to your project dependencies:

    go get github.com/influxdata/influxdb-client-go/v2

Node.js

  1. Inside of your project directory, install the @influxdata/influxdb-client InfluxDB v2 JavaScript client library:

    npm install --save @influxdata/influxdb-client

Python

  1. Optional, but recommended: Use venv or conda to activate a virtual environment for installing and executing code. For example, inside of your project directory, enter the following command using venv to create and activate a virtual environment for the project:

    python3 -m venv envs/env1 && source ./envs/env1/bin/activate

  2. Install influxdb3-python, which provides the influxdb_client_3 Python client library module and also installs the pyarrow package for working with Arrow data:

    pip install influxdb3-python

Construct points and write line protocol

  1. Create a file for your module–for example: write-point.go.

  2. In write-point.go, enter the following sample code:

    package main

    import (
      "fmt"
      "os"
      "time"

      influxdb2 "github.com/influxdata/influxdb-client-go/v2"
    )

    func main() {
      // Set a log level constant
      const debugLevel uint = 4

      /** Define options for the client.
        * Instantiate the client with the following arguments:
        *   - An object containing InfluxDB URL and token credentials.
        *   - Write options for batch size and timestamp precision.
        **/
      clientOptions := influxdb2.DefaultOptions().
        SetBatchSize(20).
        SetLogLevel(debugLevel).
        SetPrecision(time.Second)

      client := influxdb2.NewClientWithOptions(os.Getenv("INFLUX_URL"),
        os.Getenv("INFLUX_TOKEN"), clientOptions)

      /** Create an asynchronous, non-blocking write client.
        * Provide your InfluxDB org and database as arguments.
        **/
      writeAPI := client.WriteAPI(os.Getenv("INFLUX_ORG"), "get-started")

      // Get the errors channel for the asynchronous write client.
      errorsCh := writeAPI.Errors()

      /** Create a point.
        * Provide measurement, tags, and fields as arguments.
        **/
      p := influxdb2.NewPointWithMeasurement("home").
            AddTag("room", "Kitchen").
            AddField("temp", 72.0).
            AddField("hum", 20.2).
            AddField("co", 9).
            SetTime(time.Now())

      // Define a proc for handling errors.
      go func() {
        for err := range errorsCh {
          fmt.Printf("write error: %s\n", err.Error())
        }
      }()

      // Write the point asynchronously.
      writeAPI.WritePoint(p)

      // Send pending writes from the buffer to the database.
      writeAPI.Flush()

      // Ensure background processes finish and release resources.
      client.Close()
    }
  1. Create a file for your module–for example: write-point.js.

  2. In write-point.js, enter the following sample code:

    'use strict'
    /** @module write
     * Use the JavaScript client library for Node.js to create a point and write it to InfluxDB.
     **/
    import {InfluxDB, Point} from '@influxdata/influxdb-client'

    /** Get credentials from the environment **/
    const url = process.env.INFLUX_URL
    const token = process.env.INFLUX_TOKEN
    const org = process.env.INFLUX_ORG

    /**
     * Instantiate a client with a configuration object
     * that contains your InfluxDB URL and token.
     **/
    const influxDB = new InfluxDB({url, token})

    /**
     * Create a write client configured to write to the database.
     * Provide your InfluxDB org and database.
     **/
    const writeApi = influxDB.getWriteApi(org, 'get-started')

    /**
     * Create a point and add tags and fields.
     * To add a field, call the field method for your data type.
     **/
    const point1 = new Point('home')
      .tag('room', 'Kitchen')
      .floatField('temp', 72.0)
      .floatField('hum', 20.2)
      .intField('co', 9)
    console.log(` ${point1}`)

    /** Add the point to the batch. **/
    writeApi.writePoint(point1)

    /**
     * Flush pending writes in the batch from the buffer and close the write client.
     **/
    writeApi.close().then(() => {
      console.log('WRITE FINISHED')
    })
  1. Create a file for your module–for example: write-point.py.

  2. In write-point.py, enter the following sample code to write data in batching mode:

    import os
    from influxdb_client_3 import (InfluxDBClient3, Point, InfluxDBError,
                                   write_client_options, WriteOptions)

    # Create an array of points with tags and fields.
    points = [Point("home")
                .tag("room", "Kitchen")
                .field("temp", 25.3)
                .field("hum", 20.2)
                .field("co", 9)]

    # With batching mode, define callbacks to execute after a successful or
    # failed write request. Callback methods receive the configuration and
    # data sent in the request.
    def success(self, data: str):
        print(f"Successfully wrote batch: data: {data}")

    def error(self, data: str, exception: InfluxDBError):
        print(f"Failed writing batch: config: {self}, data: {data} due: {exception}")

    def retry(self, data: str, exception: InfluxDBError):
        print(f"Failed retry writing batch: config: {self}, data: {data} retry: {exception}")

    # Configure options for batch writing.
    write_options = WriteOptions(batch_size=500,
                                 flush_interval=10_000,
                                 jitter_interval=2_000,
                                 retry_interval=5_000)

    # Create an options dict that sets callbacks and WriteOptions.
    wco = write_client_options(success_callback=success,
                               error_callback=error,
                               retry_callback=retry,
                               write_options=write_options)

    # Instantiate a synchronous instance of the client with your
    # InfluxDB credentials and write options.
    with InfluxDBClient3(host=os.getenv("INFLUX_HOST"),
                         token=os.getenv("INFLUX_TOKEN"),
                         database=os.getenv("INFLUX_DATABASE"),
                         write_client_options=wco) as client:
        client.write(points, write_precision="s")

The sample code does the following:

  1. Instantiates a client configured with the InfluxDB URL and API token.

  2. Uses the client to instantiate a write client with credentials.

  3. Constructs a Point object with the measurement name ("home").

  4. Adds a tag and fields to the point.

  5. Adds the point to a batch to be written to the database.

  6. Sends the batch to InfluxDB and waits for the response.

  7. Executes callbacks for the response, flushes the write buffer, and releases resources.

Run the example

To run the sample and write the data to your InfluxDB Clustered database, enter the command for your language in your terminal:

go run write-point.go

node write-point.js

python write-point.py

The example logs the point as line protocol to stdout, and then writes the point to the database. The line protocol is similar to the following:

Home sensor data line protocol

home,room=Kitchen co=9i,hum=20.2,temp=72 1641024000
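The trailing 1641024000 is the point's Unix timestamp in second precision (2022-01-01T08:00:00Z). A quick way to produce such a timestamp with the Python standard library:

```python
from datetime import datetime, timezone

# Convert an aware datetime to a second-precision Unix timestamp.
ts = int(datetime(2022, 1, 1, 8, 0, 0, tzinfo=timezone.utc).timestamp())
print(ts)  # 1641024000
```

If you write timestamps in a different precision (for example, nanoseconds), make sure the precision you configure on the write client matches, or InfluxDB will interpret the values incorrectly.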

