- Before you begin
- Use bulk insert SQL query
- Use SQL query once per page of records
- Use SQL query on first page only
- Data URI template
Set up a connection to Google BigQuery.
The bulk insert data option is ideal for large data volumes. integrator.io automatically builds the insert query for each batch of records. By default, each batch contains 100 records, but you can use the Batch size field to tune your imports as needed.
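For reference, a generated bulk insert for one batch might resemble the following sketch. The project, dataset, table, and column names here are hypothetical; integrator.io constructs the actual statement from your import mappings:

```sql
-- Hypothetical example of a generated bulk insert for a single batch.
-- Actual columns and values come from your import mappings.
INSERT INTO `my-project.my_dataset.orders` (id, customer, total)
VALUES
  (1001, 'Acme Corp', 250.00),
  (1002, 'Globex', 125.50);
  -- ...up to the configured batch size (default: 100 rows)
```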
By using metadata, you can now:
- Search for a specific project ID, dataset, table, or field to add to your query
- Use lookups to find data
- Refresh your data to get new updates automatically
You can also use Mapper 2.0 to map imported records.
Note: You can use any dataset and table name, regardless of what you specified in your connection, by entering the following format in your import's Destination table search: `project ID.dataset.table name`
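For example, to target a table outside the connection's default project or dataset, you might enter the fully qualified name below (all names are hypothetical):

```
my-other-project.sales_dataset.orders
```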
Execute a SQL query once per page of data. Write your SQL command in the SQL query text field, or click Edit to the right of the field to open the SQL query builder AFE.
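A per-page query might, for instance, clear out existing rows for the records on the current page before they are re-inserted. The sketch below is illustrative only; the table name is hypothetical, and the exact handlebars paths depend on the data available at your flow step:

```sql
-- Illustrative sketch; handlebars paths depend on your flow's data shape.
-- Removes existing rows for the current page of records.
DELETE FROM `my-project.my_dataset.orders`
WHERE id IN ({{#each data}}{{id}}{{#unless @last}}, {{/unless}}{{/each}});
```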
Use this option if you want to run a SQL command once per flow run. The SQL command executes as soon as the first page of data reaches the flow step. Use this option carefully to avoid race conditions in your flows.
Important: Do not use this option to delete or create tables that are also being loaded or updated in the same flow. To automate those tasks, create a separate flow for each task, then link the flows to run consecutively. Linking separate flows guarantees that the SQL commands execute in the correct order without colliding with each other.
When a flow run results in data errors, this field lets you link each error in your job dashboard to the target data in the import application (where possible). The field uses a handlebars template to generate dynamic links based on the data being imported.
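As an illustration, a template that links each errored record to its target data might look like the following. The URL and field name are placeholders, not values from a real application:

```
https://example.com/orders/{{record.id}}
```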