You can create an import into a JDBC application in either of these ways:
- as part of a flow, while designing or building it (Flow builder > Add destination app)
– or –
- as a standalone resource that you can later attach to a flow (Resources > Imports)
Tip
The navigation paths differ, but the importing task is the same.
Creating an import into a JDBC destination application consists of two logical parts: providing general details and configuring the import.
To import data, start by adding a destination application or an import resource and providing the general details:
1. From the main menu, select Tools > Flow builder.
2. In Flow builder, click the plus sign (+) next to Destinations & Lookups.
3. In Create destination/lookup, for Application, select JDBC.
4. For What would you like to do?, select Import records into destination application.
5. For Connection, select a previously created JDBC connection from the drop-down menu. Selecting an existing connection saves time; if you haven't created one yet, you can create it here.
6. Click Next.
– or –
1. From the main menu, select Resources > Imports.
2. Click + New import.
3. In Create import, for Application, select JDBC.
4. For Connection, select a previously created JDBC connection from the drop-down menu. Selecting an existing connection saves time; if you haven't created one yet, you can create it here.
5. For Name, enter a meaningful name based on the import's purpose, function, or any other aspect.
6. For Description, optionally provide details that will help in understanding the import.
7. Click Next.
You've completed providing the general import details. Next, you must configure the import.
Once you've created an import, you can reuse it later by selecting the Use existing import check box.
Use standard SQL queries to import and modify data. If your flow requires multiple SQL queries, you must create one import for each query. You can't use multiple SQL queries on a single import.
You must configure the required settings in the expanded sections for data to be imported successfully. Sections that contain only optional settings are collapsed; configure those settings for any further customization.
1. In Name your import, enter a meaningful name based on the import's purpose, function, or any other aspect. (If you used the Resources > Imports path to create the import, this value is filled in automatically.)
Note
When importing records, the default for One to many is No. If you have a scenario where a single record must internally create multiple records, see Create a one to many import including nested arrays.
2. Under How would you like the fields to be imported?, for Choose type, select one of the following options based on your requirements.
Use bulk insert SQL query (recommended)
Note
- The destination table for bulk insert operations must belong to the default schema; the JDBC connection does not currently support bulk inserting data into tables in non-default schemas.
- Bulk insert operations do not support the Oracle built-in data types BLOB, BFILE, RAW, and LONG RAW.
The bulk insert option is ideal for large data volumes. integrator.io builds the insert query for you automatically for each batch of records. You can select the destination table to receive bulk insert data by validated table name or by referencing a dynamic lookup with a handlebars expression that identifies the destination table. The default number of records bound in each batch is 100, but you can use the Batch size setting in the Advanced section to tune your imports as needed.
For bulk insert, provide the table name in the Destination table field.
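As an illustration only, the statement integrator.io generates for a batch resembles a standard multi-row insert. The table and column names below are hypothetical; you never write this query yourself.

```sql
-- Illustrative sketch: integrator.io builds a statement like this
-- automatically for each batch (up to Batch size rows per request).
-- product_orders and its columns are assumed example names.
INSERT INTO product_orders (id, description, price)
VALUES
  (1, 'USB cable', 4.99),
  (2, 'Keyboard', 29.99),
  (3, 'Monitor', 149.00);
```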
Use mapping for bulk insert option
When using the bulk insert option, you can also map the data:
1. Click the mappings icon.
2. Edit the mapping between your source fields and the JDBC import. If you need further configuration, click Settings to the right of a mapping. For more information on mapping, see Mapper 2.0.
3. Set the Data type based on your requirements.
4. Click Preview to view the output.
5. Click Save.
Use SQL query once per record
Execute an SQL query once for each record. Write your SQL command in the SQL query text box, or click Edit to the right of the text box to open the SQL query builder AFE. If your query uses dates, you must provide the data type explicitly. For example:

```sql
INSERT INTO product_orders (id, description, price, country, purchase_date)
VALUES ({{record.id}}, {{record.description}}, {{record.price}}, {{record.country}},
  TO_DATE({{record.purchase_date}}, 'YYYY-MM-DD"T"HH24:MI:SS"Z"'))
```
Use SQL query once per page of records
Execute an SQL query once per page of records. Write your SQL command in the SQL query text box, or click Edit to the right of the text box to open the SQL query builder AFE.
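A per-page query typically iterates over the records in the page with a handlebars block helper. This is a sketch only: the table, the column names, and the assumption that the page of records is exposed as an array named data all depend on your setup and are not confirmed by this article.

```sql
-- Sketch, not a verified template: assumes the page of records is
-- available as an array named data; product_orders and its columns
-- are hypothetical. Builds one multi-row insert per page.
INSERT INTO product_orders (id, description, price)
VALUES
{{#each data}}
  ({{id}}, {{description}}, {{price}}){{#unless @last}},{{/unless}}
{{/each}}
```

Use Preview to confirm the rendered statement before saving, since the variable that exposes the page can differ between setups.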
3. Click Preview to check the import. If the SQL query does not execute successfully, revise it.
4. Click Save.
When the import is saved:
- If you created it in a flow, the import is added to your flow in Flow builder.
- If you created it as a resource, it is added under Resources > Imports.
| Optional sections/settings | Instructions |
| --- | --- |
| Mock response | |
| Advanced | |
| Batch size | Shown only for a bulk insert. Batch size indicates the number of records that will be imported in one request. The default value is 100. |
| Concurrency ID lock template | |
| Data URI template | |
| Invoke | Copy the URL if you want to invoke this resource via an HTTP request. |