Import data into DynamoDB

Before you begin

Set up a connection to DynamoDB.

Create an import

From the Tools menu, select Flow builder. For the Destination application, click DynamoDB. Select your DynamoDB connection from the connection list and click Next.

–– OR ––

From the Resources menu, select Imports. On the Imports page, click + New Import. From the application list, click DynamoDB. Select your DynamoDB connection, then add a name and description for your import.

How would you like your records imported?

Method (required): Select the method to use for adding or updating items in DynamoDB (see the sketch after this list).

  • putItem: Creates an item or replaces an old item with a new item.

  • updateItem: Edits an existing item’s attributes or adds a new item to the table if it does not already exist.
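
For reference, here is a minimal boto3 (Python) sketch of the two underlying DynamoDB operations. The table name, keys, and attributes are hypothetical; integrator.io issues the equivalent requests for you based on this setting.

    import boto3

    # Hypothetical table, keys, and attributes; integrator.io makes the
    # equivalent DynamoDB calls for you based on the Method setting.
    table = boto3.resource("dynamodb", region_name="us-east-1").Table("orders")

    # putItem: writes the whole item, replacing any existing item that has
    # the same primary key.
    table.put_item(Item={"sku": "celigo-tshirt", "id": 99, "qty": 5})

    # updateItem: edits only the named attributes, creating the item if it
    # does not already exist.
    table.update_item(
        Key={"sku": "celigo-tshirt", "id": 99},
        UpdateExpression="SET qty = :q",
        ExpressionAttributeValues={":q": 10},
    )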

Region (required): Select the AWS region where your DynamoDB table is hosted and requests will be sent. If not set, US East (N. Virginia) [us-east-1] is selected by default.

Table name (required): Enter the name of the DynamoDB table that you want to import records into. For example: limit-test, orders, items, users, or customers.

Expression attribute names: An expression attribute name is a placeholder that you use in an Amazon DynamoDB expression as an alternative to an actual attribute name. An expression attribute name must begin with a pound sign (#) and be followed by one or more alphanumeric characters. Refer to the DynamoDB documentation for the expected expression syntax and operators. Example: { "#n1": "sku", "#n2": "id" }

Expression attribute values: If you need to compare an attribute with a value, define an expression attribute value as a placeholder. Expression attribute values in Amazon DynamoDB are substitutes for the actual values that you want to compare, which you might not know until runtime. An expression attribute value must begin with a colon (:) and be followed by one or more alphanumeric characters. Refer to the DynamoDB documentation for the expected expression syntax and operators. Example: { ":p1": "celigo-tshirt", ":p2": 99 }

Partition key (optional): The primary key that uniquely identifies each item in an Amazon DynamoDB table can be simple (a partition key only) or composite (a partition key combined with a sort key). Refer to the DynamoDB documentation for the expected key syntax and operators.

Sort key (optional): In an Amazon DynamoDB table, the primary key that uniquely identifies each item in the table can be composed not only of a partition key but also of a sort key. Refer to the DynamoDB documentation for the expected key syntax and operators.

Key condition expression (required): To specify the search criteria, you use a key condition expression—a string that determines the items to be read from the table or index. Example: #n1 = :p1 AND #n2 > :p2
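
Taken together, the expression attribute names, expression attribute values, and key condition expression map onto a DynamoDB request such as the following minimal boto3 (Python) sketch. It reuses the example placeholders from this article and assumes a hypothetical orders table whose partition key is sku and whose sort key is id:

    import boto3

    table = boto3.resource("dynamodb", region_name="us-east-1").Table("orders")

    # #n1/#n2 come from the expression attribute names, :p1/:p2 from the
    # expression attribute values, and the key condition expression ties
    # them together (partition key equality, sort key range).
    response = table.query(
        KeyConditionExpression="#n1 = :p1 AND #n2 > :p2",
        ExpressionAttributeNames={"#n1": "sku", "#n2": "id"},
        ExpressionAttributeValues={":p1": "celigo-tshirt", ":p2": 99},
    )
    print(response["Items"])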

Advanced

Concurrency ID lock template (optional): This field can be used to help prevent duplicate records from being submitted at the same time when the connection associated with this import is using a concurrency level greater than 1.

In other words, the connection record associated with this import has fields that limit the number of concurrent requests that can be made at any one time. If you allow more than one request at a time, it is possible for imports to override each other (i.e., a race condition) when multiple messages or updates for the same record are processed at the same time.

This field allows you to enforce an ordering across concurrent requests such that imports for a specific record id will queue up and happen one at a time (while still allowing imports for different record ids to happen in parallel).

The value of this field should be a handlebars template that generates a unique ID for each exported record. (Note that IDs are generated from the raw exported data, before any import or mapping logic is invoked.) With this ID, the integrator.io back-end makes sure that no two records with the same ID are submitted for import at the same time.

For example, if you are exporting Zendesk records and importing them into NetSuite, you would most likely use '{{id}}' (the field Zendesk uses to identify unique records), so that no two records with the same Zendesk id value would import into NetSuite at the same time.
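
Conceptually, the behavior resembles the following Python sketch, which serializes submissions per concurrency ID while letting different IDs proceed in parallel. This is only an illustration of the idea, not integrator.io's actual implementation:

    import threading
    from collections import defaultdict

    # One lock per concurrency ID: records that render the same template
    # value (e.g. the same Zendesk {{id}}) queue up and are submitted one
    # at a time, while records with different IDs run in parallel.
    # (Simplified: a production version would also guard lock creation.)
    locks = defaultdict(threading.Lock)

    def submit(record: dict, concurrency_id: str) -> None:
        with locks[concurrency_id]:
            # Placeholder for the actual import request to the destination.
            print(f"importing record {concurrency_id}: {record}")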

Data URI template (optional): When your flow runs and encounters data errors, this field lets you make sure that every error in your job dashboard includes a link to the target data in the import application (where possible).

This field uses a handlebars template to generate the dynamic links based on the data being imported.

Note

The template you provide will run against your data after it has been mapped, and then again after it has been submitted to the import application, to maximize the ability to link to the right place.

For example, if you are updating a customer record in Shopify, you would most likely set this field to the following value: https://your-store.myshopify.com/admin/customers/{{{id}}}.

Override child record trace key template (optional): Define a trace key that integrator.io will use to identify a unique record for a parent-child record combination. You can use a single field, such as {{{field1}}}, or a handlebars expression.

When this field is set, you will override the platform default child record trace key field. The child record trace key template value will include the parent record trace key in the format parent_record_trace_key - child_record_trace_key.

Ignore existing: When importing new data, if the data being imported may already exist in the import application, or if you are worried that someone might accidentally re-import the same data twice, you can use this field to tell integrator.io to ignore existing data. It is a best practice to have some protection against duplicates, and this field is a good solution for most use cases. The only downside of using this field is the slight performance cost of first checking whether a record already exists.

Ignore missing: When updating existing data, if it is possible (or harmless) for the data being imported to include records that do not exist in the import application, you can use this field to tell integrator.io to ignore the missing data (so that unnecessary errors are not returned). If all of the data being imported is expected to exist in the import application, it is better to leave this field unchecked so that you get an error alerting you that the data being imported is not what you expected.
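
At the DynamoDB API level, these two options correspond to conditional writes. The following minimal boto3 (Python) sketch shows one way to express the same semantics directly; the table and key names are hypothetical, and integrator.io performs the equivalent checks for you when these fields are set:

    import boto3
    from botocore.exceptions import ClientError

    table = boto3.resource("dynamodb", region_name="us-east-1").Table("orders")
    key = {"sku": "celigo-tshirt", "id": 99}

    # Ignore existing: only create the item if it is not already there.
    try:
        table.put_item(
            Item={**key, "qty": 5},
            ConditionExpression="attribute_not_exists(sku)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
            raise  # real errors still surface; duplicates are skipped

    # Ignore missing: only update the item if it already exists.
    try:
        table.update_item(
            Key=key,
            UpdateExpression="SET qty = :q",
            ExpressionAttributeValues={":q": 10},
            ConditionExpression="attribute_exists(sku)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
            raise  # real errors still surface; missing records are skipped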

Map data in DynamoDB

Map your data using Mapper 2.0. Mapper 2.0 provides a clear visual representation of both source and destination JSON structures, enabling you to reference sample input data from the source app while building the JSON structure to be sent to a destination app. You can easily build complex JSON structures that include nested arrays and make use of data type validation for required fields and incompatible data types.
