Write to Google BigQuery
To write data to Google BigQuery with Flux:
- Import the sql package.
- Pipe-forward data into sql.to() and provide the following parameters:
  - driverName: bigquery
  - dataSourceName: See data source name
  - table: Table to write to
  - batchSize: Number of parameters or columns that can be queued within each call to Exec (default is 10000)
import "sql"
data
|> sql.to(
driverName: "bigquery",
dataSourceName: "bigquery://projectid/?apiKey=mySuP3r5ecR3tAP1K3y",
table: "exampleTable",
)
BigQuery data source name
The bigquery driver uses the following DSN syntaxes (also known as connection strings):
bigquery://projectid/?param1=value&param2=value
bigquery://projectid/location?param1=value&param2=value
Common BigQuery URL parameters
- dataset - BigQuery dataset ID. When set, you can use unqualified table names in queries.
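For example, the following sketch sets dataset in the DSN so the table parameter can be an unqualified table name. The project ID, location, dataset, and API key shown here are hypothetical placeholders:

import "sql"

// Hypothetical project ID, location, and dataset.
// Because dataset is set in the DSN, "exampleTable" does not need
// to be qualified with the dataset ID.
data
    |> sql.to(
        driverName: "bigquery",
        dataSourceName: "bigquery://my-project/us-east1?dataset=exampleDataset&apiKey=mySuP3r5ecR3tAP1K3y",
        table: "exampleTable",
    )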
BigQuery authentication parameters
The Flux BigQuery implementation uses the Google Cloud Go SDK. Provide your authentication credentials using one of the following methods:
- Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to identify the location of your credential JSON file.
- Provide your base-64 encoded service account, refresh token, or JSON credentials using the credentials URL parameter in your BigQuery DSN.
Example credentials URL parameter
bigquery://projectid/?credentials=eyJ0eXBlIjoiYXV0...
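If you use the GOOGLE_APPLICATION_CREDENTIALS environment variable instead, the DSN does not need a credentials or apiKey parameter. The following is a minimal sketch assuming the environment variable is set in the environment where Flux runs and points to a valid credential JSON file:

import "sql"

// Assumes the following is set where Flux runs (hypothetical path):
//   export GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials.json"
// With credentials supplied through the environment, the DSN only
// needs the project ID.
data
    |> sql.to(
        driverName: "bigquery",
        dataSourceName: "bigquery://projectid/",
        table: "exampleTable",
    )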
Flux to BigQuery data type conversion
sql.to() converts Flux data types to BigQuery data types.
| Flux data type | BigQuery data type |
| :------------- | :----------------- |
| int            | INT64              |
| float          | FLOAT64            |
| string         | STRING             |
| bool           | BOOL               |
| time           | TIMESTAMP          |
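Columns with Flux types not listed in this table (such as uint) may need to be converted to a supported type before writing. A minimal sketch, assuming a hypothetical uintValue column:

import "sql"

data
    // Hypothetical column: cast uintValue to int so it maps to INT64
    |> map(fn: (r) => ({r with uintValue: int(v: r.uintValue)}))
    |> sql.to(
        driverName: "bigquery",
        dataSourceName: "bigquery://projectid/?apiKey=mySuP3r5ecR3tAP1K3y",
        table: "exampleTable",
    )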