Before you begin
Set up a connection to PostgreSQL.
Create an import
From the Tools menu, select Flow builder. For the Destination application, click PostgreSQL. Select your PostgreSQL connection from the connection list and click Next.
–– OR ––
From the Resources menu, select Imports. In the resulting Imports page, click + New Import. From the application list, click PostgreSQL. Select your PostgreSQL connection, then add a name and a description for your import.
One to many (required): Use this option when a single source record internally contains multiple child records that each need to be imported separately. This covers advanced use cases where a parent record is being passed through a flow, but you actually need to process the child records contained within the parent record context. For example, if you're exporting Sales Order records out of NetSuite and importing them into Salesforce as Opportunity and Opportunity Line Item records, you would import the Opportunity Line Item records using this option. This field cannot be used when importing a CSV file.
- Path to many (required): If the records being processed are JSON objects, then use this field to specify the JSON path to the child records. This field does not need to be set for array/row-based data.
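As an illustrative sketch (the field names here are hypothetical), suppose each exported Sales Order record carries its line items in a child array:

```json
{
  "orderID": 1001,
  "customer": "Acme Corp",
  "items": [
    { "sku": "A-100", "quantity": 2 },
    { "sku": "B-200", "quantity": 5 }
  ]
}
```

With One to many enabled and Path to many set to `items`, each element of the `items` array is processed as its own record, so this one source record would produce two imported records.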
How would you like your records imported?
Query Type
Query type (required): Choose Insert if you are importing new records into the database. Choose Update if you are importing changes to existing records in the database. Choose Insert or Update if you want your import to dynamically check whether a record exists in the database. If so, that record will be updated; otherwise, a record will be created.
Insert
Choose Insert if you are importing new records into the database.
INSERT INTO afe_automation
(
name,
emp_id,
type,
notes,
comments,
doj
)
VALUES
(
{{record.Name}},
{{record.Order_NUmber__c}},
{{record.LeadSource}},
{{record.Description}},
{{record.Comments}},
{{record.Birthdate}}
)
Ignore existing records (optional): When importing new data, if the data might already exist in the import application, or if you are worried that someone might accidentally re-import the same data twice, use this field to tell integrator.io to ignore existing data. Having some protection against duplicates is a best practice, and this field is a good solution for most use cases. The only downside is the slight performance cost of first checking whether each record exists.
{
  "query": [
    "INSERT INTO public.\"POSTGRE_Import_Insert_IgnoreExisting_ExportField\"(\t\"Name\", \"Email\", \"Phone\", \"Fax\", \"Itemname\", \"Quantity\", \"Amount\", \"Notes\", \"Date\")\tVALUES ({{record.orderID}},{{record.FirstName}}, {{record.[PO#]}}, {{record.[PO#]}}, {{record.ItemName}}, {{record.Quantity}}, {{record.Amount}}, {{record.Notes}}, {{record.Date}});"
  ],
  "queryType": ["INSERT"],
  "ignoreExtract": "FirstName"
}
SQL query (required): Enter the exact SQL query to be sent to your database. The query can be static or dynamic, which means that handlebars templates are used to source data and settings from your flow.
Update
Choose Update if you are importing changes to existing records in the database.
UPDATE AFE_Automation_Update
SET
email={{record.email}},
phone={{record.phone}},
title={{record.title}},
type={{lookup "dynamic" record}},
memo={{lookup "static" record.memo}},
direction={{record.fax}}
WHERE name={{record.name}}
Ignore missing records (optional): When updating existing data, if it is possible (or simply harmless) for the data being imported to include records that do not exist in the import application, use this field to tell integrator.io to ignore that missing data, so that unnecessary errors are not returned. If all the data being imported is expected to exist in the import application, leave this field unchecked so that an error alerts you when the data being imported is not what you expected.
{
  "query": [
    "UPDATE public.\"POSTGRESQL_Import_Update_IgnoreMissing_ExportField\"\tSET \"FirstName\"={{record.FirstName}}, \"LastName\"={{record.LastName}}, \"ItemName\"={{record.ItemName}}, \"Quantity\"={{record.Quantity}}, \"Amount\"={{record.Amount}}, \"Notes\"={{record.Notes}}, units={{record.units}}, \"PO#\"={{record.[PO#]}}, \"Date\"={{record.Date}}\tWHERE \"orderID\" = {{record.orderID}};"
  ],
  "queryType": ["UPDATE"],
  "ignoreExtract": "FirstName"
}
SQL query (required): Enter the exact SQL query to be sent to your database. The query can be static or dynamic, which means that handlebars templates are used to source data and settings from your flow.
Insert or update
Choose Insert or Update if you want your import to dynamically check whether a record exists in the database. If so, that record will be updated; otherwise, a record will be created.
How would you like to identify existing records? (required): Choose the method that should be used to identify if a source record already exists in the destination application:
- Records have a specific field populated – This method checks whether a specific field in the source record already has a value; if so, the source record is considered an existing record.
{
  "query": [
    "UPDATE public.\"POSTGRE_Import_InsertorUpdate_Exportfield\"\tSET \"FirstName\"={{record.FirstName}}, \"LastName\"={{record.LastName}}, \"Email\"={{record.Email}}, \"ItemName\"={{record.ItemName}}\tWHERE \"orderID\"={{record.orderID}};",
    "INSERT INTO public.\"POSTGRE_Import_InsertorUpdate_Exportfield\"(\t\"orderID\", \"FirstName\", \"LastName\", \"Email\", \"ItemName\", \"Quantity\", \"Amount\", \"Notes\")\tVALUES ({{record.orderID}}, {{record.FirstName}}, {{record.LastName}}, {{record.Email}}, {{record.ItemName}}, {{record.Quantity}}, {{record.Amount}}, {{record.Notes}});"
  ],
  "queryType": ["UPDATE", "INSERT"],
  "updateExtract": "FirstName"
}
- Which field? (required): Enter the path to the field in the source record that should be used to identify existing records. If a value is found for this field, then the source record will be considered an existing record.
- SQL query (for inserts) (required): Enter the exact SQL query to be sent to your database. The query can be static or dynamic, which means that handlebars templates are used to source data and settings from your flow.
- SQL query (for updates) (required): Enter the exact SQL query to be sent to your database. The query can be static or dynamic, which means that handlebars templates are used to source data and settings from your flow.
- Run a dynamic lookup – Lookups provide a way to search the destination application on the fly to find existing records.
- Lookup (required): Choose an existing lookup from the list or click + to define a new lookup.
- SQL query (for inserts) (required): Enter the exact SQL query to be sent to your database. The query can be static or dynamic, which means that handlebars templates are used to source data and settings from your flow.
- SQL query (for updates) (required): Enter the exact SQL query to be sent to your database. The query can be static or dynamic, which means that handlebars templates are used to source data and settings from your flow.
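For instance, with Records have a specific field populated selected and Which field? set to `orderID` (an illustrative path), these two source records would take different branches:

```json
[
  { "orderID": 1001, "FirstName": "Jane" },
  { "FirstName": "Sam" }
]
```

The first record has `orderID` populated, so the SQL query (for updates) runs for it; the second does not, so the SQL query (for inserts) runs instead.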
Advanced
Concurrency ID lock template (optional): This field can be used to help prevent duplicate records from being submitted at the same time when the connection associated with this import is using a concurrency level greater than 1.
Put another way, the connection record associated with this import has fields that limit the number of concurrent requests that can be made at any one time. If you allow more than one request at a time, imports can override each other (a race condition) when multiple messages/updates for the same record are processed simultaneously.
This field allows you to enforce an ordering across concurrent requests such that imports for a specific record id will queue up and happen one at a time (while still allowing imports for different record ids to happen in parallel).
The value of this field should be a handlebars template that generates a unique ID for each exported record. Note that the raw exported data is used when generating the IDs, before any import or mapping logic is invoked. With this ID, the integrator.io back end ensures that no two records with the same ID are submitted for import at the same time.
For example, if you are exporting Zendesk records and importing them into NetSuite, you would most likely use '{{id}}' (the field Zendesk uses to identify unique records); then no two records with the same Zendesk id value would be imported into NetSuite at the same time.
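In the import's JSON configuration, that Zendesk example would reduce to a single setting (a sketch; whether the path is `{{id}}` or `{{record.id}}` depends on the shape of the exported data in your flow):

```json
{
  "idLockTemplate": "{{id}}"
}
```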
Data URI template (optional): When your flow runs and encounters data errors, this field lets you ensure that every error in your job dashboard includes a link to the target data in the import application (where possible).
This field uses a handlebars template to generate the dynamic links based on the data being imported.
Note: The template you provide will run against your data after it has been mapped, and then again after it has been submitted to the import application, to maximize the ability to link to the right place.
For example, if you are updating a customer record in Shopify, you would most likely set this field to the following value: https://your-store.myshopify.com/admin/customers/{{{id}}}.
{
  "idLockTemplate": "{{record.name}}",
  "dataURITemplate": "{{record._id}}",
  "region": "us-east-1",
  "method": "putItem",
  "tableName": "afe_dataurl_concurrency_import",
  "partitionKey": "_id",
  "itemDocument": "{\n\"id\":\"{{record._id}}\",\n\"email\":\"{{record.email}}\",\n\"name\":\"{{record.name}}\",\n\"type\":\"{{record.type}}\"\n}"
}