Python client library for InfluxDB v3
The InfluxDB v3 influxdb3-python Python client library integrates InfluxDB Clustered write and query operations with Python scripts and applications.
InfluxDB client libraries provide configurable batch writing of data to InfluxDB Clustered. Client libraries can be used to construct line protocol data, transform data from other formats to line protocol, and batch write line protocol data to InfluxDB HTTP APIs.
InfluxDB v3 client libraries can query InfluxDB Clustered using SQL or InfluxQL.
The influxdb3-python client library wraps the Apache Arrow pyarrow.flight client in a convenient InfluxDB v3 interface for executing SQL and InfluxQL queries, requesting server metadata, and retrieving data from InfluxDB Clustered using the Flight protocol with gRPC.
Code samples on this page use the Get started home sensor sample data.
- Installation
- Importing the module
- API reference
  - Classes
    - Class InfluxDBClient3
    - Class Point
    - Class WriteOptions
  - Functions
  - Constants
  - Exceptions
Installation
Install the client library and dependencies using pip:
pip install influxdb3-python
Importing the module
The influxdb3-python client library package provides the influxdb_client_3 module.
Import the module:
import influxdb_client_3
Import specific class methods from the module:
from influxdb_client_3 import InfluxDBClient3, Point, WriteOptions
- influxdb_client_3.InfluxDBClient3: a class for interacting with InfluxDB
- influxdb_client_3.Point: a class for constructing a time series data point
- influxdb_client_3.WriteOptions: a class for configuring client write options
API reference
The influxdb_client_3 module includes the following classes and functions.
Classes
Class InfluxDBClient3
Provides an interface for interacting with InfluxDB APIs for writing and querying data.
The InfluxDBClient3 constructor initializes and returns a client instance with the following:
- A singleton write client configured for writing to the database.
- A singleton Flight client configured for querying the database.
Parameters
- host (string): The host URL of the InfluxDB instance.
- database (string): The database to use for writing and querying.
- token (string): A database token with read/write permissions.
- Optional write_client_options (dict): Options to use when writing to InfluxDB. If None, writes are synchronous.
- Optional flight_client_options (dict): Options to use when querying InfluxDB.
Writing modes
When writing data, the client uses one of the following modes:
- Synchronous writing
- Batch writing
- Asynchronous writing: Deprecated
Synchronous writing
Default. When no write_client_options are provided during the initialization of InfluxDBClient3, writes are synchronous.
When writing data in synchronous mode, the client immediately tries to write the provided data to InfluxDB, doesn’t retry failed requests, and doesn’t invoke response callbacks.
Example: initialize a client with synchronous (non-batch) defaults
The following example initializes a client for writing and querying data in an InfluxDB Clustered database.
Given that write_client_options isn't specified, the client uses the default synchronous writing mode.
from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(host=f"cluster-host.com",
                         database=f"DATABASE_NAME",
                         token=f"DATABASE_TOKEN")
Replace the following:
- DATABASE_NAME: the name of your InfluxDB Clustered database
- DATABASE_TOKEN: an InfluxDB Clustered database token with read/write permissions on the specified database
To explicitly specify synchronous mode, create a client with write_options=SYNCHRONOUS. For example:
from influxdb_client_3 import InfluxDBClient3, write_client_options, SYNCHRONOUS
wco = write_client_options(write_options=SYNCHRONOUS)
client = InfluxDBClient3(host=f"cluster-host.com",
                         database=f"DATABASE_NAME",
                         token=f"DATABASE_TOKEN",
                         write_client_options=wco,
                         flight_client_options=None)
Replace the following:
- DATABASE_NAME: the name of your InfluxDB Clustered database
- DATABASE_TOKEN: an InfluxDB Clustered database token with write permissions on the specified database
Batch writing
Batch writing groups multiple writes into a single request to InfluxDB and is particularly useful for efficient bulk data operations. Options include setting batch size, flush intervals, retry intervals, and more.
In batching mode, the client adds the record or records to a batch, and then schedules the batch for writing to InfluxDB.
The client writes the batch to InfluxDB after reaching write_client_options.batch_size or write_client_options.flush_interval.
If a write fails, the client reschedules the write according to the write_client_options retry options.
Configuring write client options
Use WriteOptions and write_client_options to configure batch writing and response handling for the client:
1. Instantiate WriteOptions. To use batch defaults, call the constructor without specifying parameters.
2. Call write_client_options and use the write_options parameter to specify the WriteOptions instance from the preceding step. Specify callback parameters (success, error, and retry) to invoke functions on success or error.
3. Instantiate InfluxDBClient3 and use the write_client_options parameter to specify the dict output from the preceding step.
Example: initialize a client using batch defaults and callbacks
The following example shows how to use batch mode with defaults and specify callback functions for the response status (success, error, or retryable error).
from influxdb_client_3 import (InfluxDBClient3,
                               write_client_options,
                               WriteOptions,
                               InfluxDBError)

# Define callbacks for write responses
def success(self, data: str):
    print(f"Success writing batch: data: {data}")

def error(self, data: str, err: InfluxDBError):
    print(f"Error writing batch: config: {self}, data: {data}, error: {err}")

def retry(self, data: str, err: InfluxDBError):
    print(f"Retry error writing batch: config: {self}, data: {data}, error: {err}")

# Instantiate WriteOptions for batching
write_options = WriteOptions()
wco = write_client_options(success_callback=success,
                           error_callback=error,
                           retry_callback=retry,
                           write_options=write_options)

# Use the with...as statement to ensure the client is properly closed and
# resources are released.
with InfluxDBClient3(host=f"cluster-host.com",
                     database=f"DATABASE_NAME",
                     token=f"DATABASE_TOKEN",
                     write_client_options=wco) as client:

    client.write_file(file='./data/home-sensor-data.csv',
                      timestamp_column='time', tag_columns=["room"], write_precision='s')
Replace the following:
- DATABASE_NAME: the name of your InfluxDB Clustered database
- DATABASE_TOKEN: an InfluxDB Clustered database token with write permissions on the specified database
InfluxDBClient3 instance methods
InfluxDBClient3.write
Writes a record or a list of records to InfluxDB.
Parameters
- record (record or list): A record or list of records to write. A record can be a Point object, a dict that represents a point, a line protocol string, or a DataFrame.
- database (string): The database to write to. Default is to write to the database specified for the client.
- **kwargs: Additional write options, for example:
  - write_precision (string): Optional. Default is "ns". Specifies the precision ("ms", "s", "us", "ns") for timestamps in record.
  - write_client_options (dict): Optional. Specifies callback functions and options for batch writing mode. To generate the dict, use the write_client_options function.
Example: write a line protocol string
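The example below is a minimal sketch based on the home sensor sample data used elsewhere on this page; the measurement, tag, field values, and timestamp are illustrative.
from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host=f"cluster-host.com",
                         database=f"DATABASE_NAME",
                         token=f"DATABASE_TOKEN")

# Write a single line protocol record with a second-precision timestamp.
record = "home,room=Kitchen temp=21.5,hum=35.9 1641067200"
client.write(record, write_precision="s")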
Replace the following:
- DATABASE_NAME: the name of your InfluxDB Clustered database
- DATABASE_TOKEN: an InfluxDB Clustered database token with write permissions on the specified database
Example: write data using points
The influxdb_client_3.Point class provides an interface for constructing a data point for a measurement and setting fields, tags, and the timestamp for the point.
The following example shows how to create a Point object, and then write the data to InfluxDB.
from influxdb_client_3 import Point, InfluxDBClient3
point = Point("home").tag("room", "Kitchen").field("temp", 21.5).field("hum", .25)
client = InfluxDBClient3(host=f"cluster-host.com",
                         database=f"DATABASE_NAME",
                         token=f"DATABASE_TOKEN")
client.write(point)
The following sample code executes an InfluxQL query to retrieve the written data:
# Execute an InfluxQL query
table = client.query(query='''SELECT DISTINCT(temp) as val
FROM home
WHERE temp > 21.0
AND time >= now() - 10m''', language="influxql")
# table is a pyarrow.Table
df = table.to_pandas()
assert 21.5 in df['val'].values, f"Expected value in {df['val']}"
Replace the following:
- DATABASE_NAME: the name of your InfluxDB Clustered database
- DATABASE_TOKEN: an InfluxDB Clustered database token with write permissions on the specified database
Example: write data using a dict
InfluxDBClient3 can serialize a dictionary object into line protocol.
If you pass a dict to InfluxDBClient3.write, the client expects the dict to have the following point attributes:
- measurement (string): the measurement name
- tags (dict): a dictionary of tag key-value pairs
- fields (dict): a dictionary of field key-value pairs
- time: the timestamp for the record
The following example shows how to define a dict that represents a point, and then write the data to InfluxDB.
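This is a minimal sketch using the home sensor sample schema; the tag, field, and timestamp values are illustrative.
from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host=f"cluster-host.com",
                         database=f"DATABASE_NAME",
                         token=f"DATABASE_TOKEN")

# A dict that represents a single point in the "home" measurement.
points = {
    "measurement": "home",
    "tags": {"room": "Kitchen"},
    "fields": {"temp": 21.5, "hum": 35.9},
    "time": 1641067200
}

client.write(record=points, write_precision="s")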
Replace the following:
- DATABASE_NAME: the name of your InfluxDB Clustered database
- DATABASE_TOKEN: an InfluxDB Clustered database token with write permissions on the specified database
InfluxDBClient3.write_file
Writes data from a file to InfluxDB. Execution is synchronous.
Parameters
- file (string): A path to a file containing records to write to InfluxDB. The filename must end with one of the following supported extensions. For more information about encoding and formatting data, see the documentation for each supported format:
  - .feather: Feather
  - .parquet: Parquet
  - .csv: Comma-separated values
  - .json: JSON
  - .orc: ORC
- measurement_name (string): Defines the measurement name for records in the file. The specified value takes precedence over measurement and iox::measurement columns in the file. If no value is specified for the parameter, and a measurement column exists in the file, the measurement column value is used for the measurement name. If no value is specified for the parameter, and no measurement column exists, the iox::measurement column value is used for the measurement name.
- tag_columns (list): Tag column names. Columns not included in the list and not specified by another parameter are assumed to be fields.
- timestamp_column (string): The name of the column that contains timestamps. Default is 'time'.
- database (string): The database to write to. Default is to write to the database specified for the client.
- file_parser_options (callable): A function for providing additional arguments to the file parser.
- **kwargs: Additional options to pass to the WriteAPI, for example:
  - write_precision (string): Optional. Default is "ns". Specifies the precision ("ms", "s", "us", "ns") for timestamps in record.
  - write_client_options (dict): Optional. Specifies callback functions and options for batch writing mode. To generate the dict, use the write_client_options function.
Example: use batch options when writing file data
The following example shows how to specify customized write options for batching, retries, and response callbacks, and how to write data from CSV and JSON files to InfluxDB:
from influxdb_client_3 import (InfluxDBClient3, write_client_options,
                               WritePrecision, WriteOptions, InfluxDBError)

# Define the result object
result = {
    'config': None,
    'status': None,
    'data': None,
    'error': None
}

# Define callbacks for write responses
def success_callback(self, data: str):
    result['config'] = self
    result['status'] = 'success'
    result['data'] = data
    print(f"Successfully wrote data: {result['data']}")

def error_callback(self, data: str, exception: InfluxDBError):
    result['config'] = self
    result['status'] = 'error'
    result['data'] = data
    result['error'] = exception
    print(f"Error writing data: {result['error']}")

def retry_callback(self, data: str, exception: InfluxDBError):
    result['config'] = self
    result['status'] = 'retry_error'
    result['data'] = data
    result['error'] = exception
    print(f"Retryable error writing data: {result['error']}")

write_options = WriteOptions(batch_size=500,
                             flush_interval=10_000,
                             jitter_interval=2_000,
                             retry_interval=5_000,
                             max_retries=5,
                             max_retry_delay=30_000,
                             exponential_base=2)

wco = write_client_options(success_callback=success_callback,
                           error_callback=error_callback,
                           retry_callback=retry_callback,
                           write_options=write_options)

with InfluxDBClient3(host=f"cluster-host.com",
                     database=f"DATABASE_NAME",
                     token=f"DATABASE_TOKEN",
                     write_client_options=wco) as client:

    client.write_file(file='./data/home-sensor-data.csv', timestamp_column='time',
                      tag_columns=["room"], write_precision='s')

    client.write_file(file='./data/home-sensor-data.json', timestamp_column='time',
                      tag_columns=["room"], write_precision='s')
Replace the following:
- DATABASE_NAME: the name of your InfluxDB Clustered database
- DATABASE_TOKEN: an InfluxDB Clustered database token with write permissions on the specified database
InfluxDBClient3.query
Sends a Flight request to execute the specified SQL or InfluxQL query.
Returns all data in the query result as an Arrow table (a pyarrow.Table instance).
Parameters
- query (string): The SQL or InfluxQL query to execute.
- language (string): The query language used in the query parameter: "sql" or "influxql". Default is "sql".
- mode (string): Specifies the output to return from the pyarrow.flight.FlightStreamReader. Default is "all".
  - all: Read the entire contents of the stream and return it as a pyarrow.Table.
  - chunk: Read the next message (a FlightStreamChunk) and return data and app_metadata. Returns null if there are no more messages.
  - pandas: Read the contents of the stream and return it as a pandas.DataFrame.
  - reader: Convert the FlightStreamReader into a pyarrow.RecordBatchReader.
  - schema: Return the schema for all record batches in the stream.
- **kwargs: FlightCallOptions
Example: query using SQL
from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(host=f"cluster-host.com",
                         database=f"DATABASE_NAME",
                         token=f"DATABASE_TOKEN")
table = client.query("SELECT * from home WHERE time >= now() - INTERVAL '90 days'")
# Filter columns.
print(table.select(['room', 'temp']))
# Use PyArrow to aggregate data.
print(table.group_by('hum').aggregate([]))
In the examples, replace the following:
- DATABASE_NAME: the name of your InfluxDB Clustered database
- DATABASE_TOKEN: an InfluxDB Clustered database token with read permission on the specified database
Example: query using InfluxQL
from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(host=f"cluster-host.com",
                         database=f"DATABASE_NAME",
                         token=f"DATABASE_TOKEN")
query = "SELECT * from home WHERE time >= -90d"
table = client.query(query=query, language="influxql")
# Filter columns.
print(table.select(['room', 'temp']))
Example: read all data from the stream and return a pandas DataFrame
from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(host=f"cluster-host.com",
                         database=f"DATABASE_NAME",
                         token=f"DATABASE_TOKEN")
query = "SELECT * from home WHERE time >= now() - INTERVAL '90 days'"
df = client.query(query=query, mode="pandas")
# Print the pandas DataFrame formatted as a Markdown table.
print(df.to_markdown())
Example: view the schema for all batches in the stream
from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(host=f"cluster-host.com",
                         database=f"DATABASE_NAME",
                         token=f"DATABASE_TOKEN")
table = client.query("""SELECT *
from home
WHERE time >= now() - INTERVAL '90 days'""")
# View the table schema.
print(table.schema)
Example: retrieve the result schema and no data
from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(host=f"cluster-host.com",
                         database=f"DATABASE_NAME",
                         token=f"DATABASE_TOKEN")
query = "SELECT * from home WHERE time >= now() - INTERVAL '90 days'"
schema = client.query(query=query, mode="schema")
print(schema)
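Example: stream query results as record batches
The following sketch uses the "reader" mode described above to process the result stream one record batch at a time; printing row counts is only illustrative.
from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host=f"cluster-host.com",
                         database=f"DATABASE_NAME",
                         token=f"DATABASE_TOKEN")

query = "SELECT * from home WHERE time >= now() - INTERVAL '90 days'"

# mode="reader" returns a pyarrow.RecordBatchReader instead of a fully materialized table.
reader = client.query(query=query, mode="reader")

# Iterate over the stream one pyarrow.RecordBatch at a time.
for batch in reader:
    print(batch.num_rows)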
Specify a timeout
Pass timeout=<number of seconds> for FlightCallOptions to use a custom timeout.
from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(host=f"cluster-host.com",
                         database=f"DATABASE_NAME",
                         token=f"DATABASE_TOKEN")
query = "SELECT * from home WHERE time >= now() - INTERVAL '90 days'"
client.query(query=query, timeout=5)
InfluxDBClient3.close
Sends all remaining records from the batch to InfluxDB, and then closes the underlying write client and Flight client to release resources.
Example: close a client
from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(host=f"cluster-host.com",
                         database=f"DATABASE_NAME",
                         token=f"DATABASE_TOKEN")
client.close()
Class Point
Provides an interface for constructing a time series data point for a measurement, and setting fields, tags, and timestamp.
from influxdb_client_3 import Point
point = Point("home").tag("room", "Living Room").field("temp", 72)
See how to write data using points.
Class WriteOptions
Provides an interface for constructing options that customize batch writing behavior, such as batch size and retry.
from influxdb_client_3 import WriteOptions
write_options = WriteOptions(batch_size=500,
                             flush_interval=10_000,
                             jitter_interval=2_000,
                             retry_interval=5_000,
                             max_retries=5,
                             max_retry_delay=30_000,
                             exponential_base=2)
See how to use batch options for writing data.
Parameters
- batch_size: Default is 1000.
- flush_interval: Default is 1000.
- jitter_interval: Default is 0.
- retry_interval: Default is 5000.
- max_retries: Default is 5.
- max_retry_delay: Default is 125000.
- max_retry_time: Default is 180000.
- exponential_base: Default is 2.
- max_close_wait: Default is 300000.
- write_scheduler: Default is ThreadPoolScheduler(max_workers=1).
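For example, the following sketch overrides only batch_size and flush_interval and leaves the remaining options at their defaults (the values shown are arbitrary):
from influxdb_client_3 import WriteOptions

# Unspecified parameters keep the defaults listed above.
write_options = WriteOptions(batch_size=100,
                             flush_interval=5_000)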
Functions
Function write_client_options(**kwargs)
Returns a dict with the specified write client options.
Parameters
The function takes the following keyword arguments:
- write_options (WriteOptions): Specifies whether the client writes data using synchronous mode or batching mode. If using batching mode, the client uses the specified batching options.
- point_settings (dict): Default tags that the client adds to each point when writing the data to InfluxDB.
- success_callback (callable): If using batching mode, a function to call after data is written successfully to InfluxDB (HTTP status 204).
- error_callback (callable): If using batching mode, a function to call if data is not written successfully (the response has a non-204 HTTP status).
- retry_callback (callable): If using batching mode, a function to call if the request is a retry (using batching mode) and data is not written successfully.
Example: instantiate options for batch writing
from influxdb_client_3 import write_client_options, WriteOptions
from influxdb_client_3.write_client.client.write_api import WriteType
def success():
    print("Success")

def error():
    print("Error")

def retry():
    print("Retry error")
write_options = WriteOptions()
wco = write_client_options(success_callback=success,
                           error_callback=error,
                           retry_callback=retry,
                           write_options=write_options)
assert wco['success_callback']
assert wco['error_callback']
assert wco['retry_callback']
assert wco['write_options'].write_type == WriteType.batching
Example: instantiate options for synchronous writing
from influxdb_client_3 import write_client_options, SYNCHRONOUS
from influxdb_client_3.write_client.client.write_api import WriteType
wco = write_client_options(write_options=SYNCHRONOUS)
assert wco['write_options'].write_type == WriteType.synchronous
Function flight_client_options(**kwargs)
Returns a dict with the specified FlightClient parameters.
Parameters
- kwargs: keyword arguments for pyarrow.flight.FlightClient parameters
Example: specify the root certificate path
from influxdb_client_3 import InfluxDBClient3, flight_client_options
import certifi
fh = open(certifi.where(), "r")
cert = fh.read()
fh.close()
client = InfluxDBClient3(host=f"cluster-host.com",
                         database=f"DATABASE_NAME",
                         token=f"DATABASE_TOKEN",
                         flight_client_options=flight_client_options(
                             tls_root_certs=cert))
Replace the following:
- DATABASE_NAME: the name of your InfluxDB Clustered database
- DATABASE_TOKEN: an InfluxDB Clustered database token with read permission on the specified database
Constants
- influxdb_client_3.SYNCHRONOUS: Represents synchronous write mode
- influxdb_client_3.WritePrecision: Enum class that represents write precision
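For example, the following sketch shows typical use of both constants. It assumes WritePrecision exposes named precision values such as WritePrecision.S (provided by the underlying influxdb_client package); the line protocol record is illustrative.
from influxdb_client_3 import InfluxDBClient3, WritePrecision, SYNCHRONOUS, write_client_options

# SYNCHRONOUS selects the synchronous (non-batch) writing mode.
wco = write_client_options(write_options=SYNCHRONOUS)

client = InfluxDBClient3(host=f"cluster-host.com",
                         database=f"DATABASE_NAME",
                         token=f"DATABASE_TOKEN",
                         write_client_options=wco)

# WritePrecision.S is assumed here as the second-precision constant.
client.write("home,room=Kitchen temp=21.5 1641067200",
             write_precision=WritePrecision.S)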
Exceptions
- influxdb_client_3.InfluxDBError: Exception class raised for InfluxDB-related errors
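The following is a brief sketch of catching InfluxDBError around a write call. Whether a given failure surfaces as InfluxDBError or as another exception type can depend on the client version and writing mode, so treat this as illustrative.
from influxdb_client_3 import InfluxDBClient3, InfluxDBError

client = InfluxDBClient3(host=f"cluster-host.com",
                         database=f"DATABASE_NAME",
                         token=f"DATABASE_TOKEN")

try:
    # The record shown is illustrative, using the home sensor sample schema.
    client.write("home,room=Kitchen temp=21.5 1641067200", write_precision="s")
except InfluxDBError as e:
    print(f"InfluxDB error: {e}")
finally:
    client.close()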