Data Loader allows you to import data files into NetSuite, Salesforce, or any other RESTful web application. Use Data Loader to upload files, configure settings, map fields, and run an import.
Note: You can't add a lookup to a Data Loader flow.
Get started
Access Data Loader from Tools > Data Loader.
The typical workflow for Data Loader has four steps:
- Upload the source file from a directory.
- Configure your import settings and parsing options.
- Map the data fields so that each field value in the source file can populate its counterpart in the destination application.
- Run the flow.
Select a file and configure your export options
Click Data Loader.
Use the Create export panel to define how the file you are uploading will be parsed.
Enter a name (and optional description) for the source application.
How would you like to parse the file?
You can use Data Loader to upload any of the following file types:
- CSV (or any delimited text file)
- JSON
- XLSX
- XML
The following sections explain the configuration options available for each file type.
CSV (or any delimited text file)
File Type: Use this drop-down menu to specify the file type so that integrator.io can parse the data. integrator.io reads any file in the format you select, regardless of filename or extension. For example, if you select CSV, then "csvData.txt", "csvData.dat", and "csvData.random" are all parsed as CSV. The other supported file formats are JSON, XLSX, and XML.
Sample file (that would be parsed): Click Choose file to provide a sample file that contains all possible fields that this transfer will need to parse when the flow runs. This sample file will help you configure the file parser options and map data for subsequent steps in your flow. The sample file can’t exceed 5 MB.
Note: Data Loader does not support File Transfer Protocol (FTP) for importing data to destination applications.
CSV parser helper: Click Launch to open the CSV parser helper. This advanced field editor allows you to define your parsing options in detail. If the file structure is simple, you can use the following basic options without using the CSV parser helper.
Column Delimiter: Select the column delimiter from the drop-down menu. Default options include:
- Comma (,)
- Pipe (|)
- Semicolon (;)
- Asterisk (*)
- Space
- Tab
Note: You can also type any other character or set of characters in this field to define a custom column delimiter.
Row delimiter: Select the row delimiter from the drop-down menu. Default options include:
- CR (\r): Carriage return.
- CRLF (\r\n): Carriage return followed by line feed.
- LF (\n): Line feed.
Note: You can also type any other character or set of characters in this field to define a custom row delimiter.
Trim Spaces: If checked, Data Loader removes all leading and trailing white space in columns (except for header rows).
Number Of Rows To Skip: CSV files occasionally contain header rows that do not describe the columns; for example, rows containing the filename or the date/time the file was created. Enter the number of such header rows to skip during parsing.
File Has Header: Check this box if the first row in the CSV file contains column names (and not actual data).
Preview data: This section displays the JSON output generated by the settings you've configured.
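For example (an illustrative file, not a required structure), suppose you upload the following file with Column Delimiter set to Comma, File Has Header checked, and Trim Spaces checked:

id,name,email
100, Mary ,mary@example.com

Preview data would then show JSON records along these lines:

[{ "id": "100", "name": "Mary", "email": "mary@example.com" }]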
JSON
Sample file (that would be parsed): Click Choose file to provide a sample file that contains all possible fields that this transfer will need to parse when the flow runs. This sample file will help you configure the file parser options and map data for subsequent steps in your flow. The sample file can’t exceed 5 MB.
Note: Data Loader does not support File Transfer Protocol (FTP) for importing data to destination applications.
Resource path: If no value is provided, integrator.io assumes that the resources can be found at the root of the file. Use "dot" notation to describe the path to your resources. For example, item.description.
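For example (an illustrative file, not a required structure), if your records are nested inside the file like this:

{ "item": { "description": [ { "sku": "A1" }, { "sku": "B2" } ] } }

then a Resource path of item.description tells integrator.io to treat each object in that array as a record.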
XLSX
Sample file (that would be parsed): Click Choose file to provide a sample file that contains all possible fields that this transfer will need to parse when the flow runs. This sample file will help you configure the file parser options and map data for subsequent steps in your flow. The sample file can’t exceed 5 MB.
Note: Data Loader does not support File Transfer Protocol (FTP) for importing data to destination applications.
File has header: Check this checkbox if the file contains a top-level header row (that is, the first row in the file you are uploading is reserved for column names rather than actual data).
Multiple rows per record: Check this checkbox if records in the spreadsheet span more than one row.
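For example (illustrative only), a spreadsheet in which each order spans several rows might look like this, with the order number repeated on each line-item row:

Order   Item     Qty
1001    Widget   2
1001    Gadget   1
1002    Widget   5

With Multiple rows per record checked, the rows belonging to order 1001 can be parsed as a single record rather than as two independent records.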
XML
Sample file (that would be parsed): Click Choose file to provide a sample file that contains all possible fields that this transfer will need to parse when the flow runs. This sample file will help you configure the file parser options and map data for subsequent steps in your flow. The sample file can’t exceed 5 MB.
Note: Data Loader does not support File Transfer Protocol (FTP) for importing data to destination applications.
XML parser helper: Click Launch to open the XML parser helper. This advanced field editor allows you to define your parsing options in detail. If the file structure is simple, you can use the following basic options without using the XML parser helper.
Resource path: If no value is provided, integrator.io assumes that the resources can be found at the root of the file. Use "forward slash" notation to describe the path to your resources. For example, /item/description.
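For example (an illustrative file, not a required structure), given this XML:

<item>
  <description><sku>A1</sku></description>
  <description><sku>B2</sku></description>
</item>

a Resource path of /item/description tells the parser to treat each description node as a record.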
Parse strategy: You can use a custom or an automatic parsing strategy for your JSON output record.
- Custom: Choose this option if you want more control over the JSON structure than can be achieved with automatic parsing. A custom parse strategy allows you to modify the following elements in the resulting JSON record:
- Trim trailing and leading spaces: Check this checkbox to remove leading and trailing whitespace characters from the output (newline characters are not removed).
- Strip newline characters: Check this checkbox to remove all newline characters from the output.
- Character to prepend on attribute names: Enter one or more characters to prepend to attribute names in the XML record. The JSON output adds the character(s) you enter in this field to the beginning of each attribute name (see the example after this list).
- Text node name: Use this field to rename a node so that, in the JSON output, the original XML name is replaced with the value you enter. For example, to rename an XML node named ID to Identifier, enter Identifier in this field.
- List nodes: Enter the nodes that should be treated as lists (arrays) in the parsed JSON output.
- Include only these nodes: If you only want to include specific nodes in the JSON output, you can list each node to include in this field. Use semicolons (;) to list multiple nodes.
- Exclude any of these nodes: If it is easier to list only the nodes you want excluded from the JSON output, list them here. Use semicolons (;) to separate multiple nodes.
- Automatic: Automatic parsing means integrator.io converts the XML data to JSON without any user configuration. This can generate more complex, harder-to-read JSON. If you want more control over what the JSON output looks like, use the Custom option.
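As a rough illustration of the custom options above (a sketch; the exact JSON shape depends on your parser settings), parsing the element <product id="7">Desk</product> with Character to prepend on attribute names set to an underscore would surface the attribute as a property named _id in the output:

{ "product": { "_id": "7", ... } }

keeping attributes visually distinct from child elements.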
How would you like to sort and group records?
The Sorting and Grouping option allows you to manage your flow’s files by sorting and grouping records by field.
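For example (illustrative), if your file contains one row per line item, you could sort on an Order ID field and then group on that same field so that rows sharing an Order ID arrive downstream together:

{ "orderId": "1001", "item": "Widget" }
{ "orderId": "1001", "item": "Gadget" }

would be grouped into a single record containing both rows, rather than processed as separate records.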
Sample data and record transformation mapping
Once you have added your source file to a flow that uses Data Loader, you can apply record transformation mappings by clicking the transform icon.
The Transform record advanced field editor displays.
Transform Rules
Transformations are an optional feature that lets you alter the representation of your record(s). By providing a set of rules, you can change the structure of your record. Each rule is made up of a pair of extract and generate JSON paths. These paths let you map where to get (extract) values and where to place (generate) them.
In the Transform rules section you can select the node from the incoming data in the left column and define its name and location for the output in the right column.
By providing transform rules, you can select which record values to keep and which to drop. Only properties that are specified in a transform rule will be part of the output or export record. You can also promote parent properties to child items.
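For example (an illustrative record and rule, not a required structure), given this record:

{ "customer": { "name": "Mary", "email": "mary@example.com" } }

a rule with an extract path of customer.email and a generate path of email produces:

{ "email": "mary@example.com" }

The customer.name value is dropped because no rule references it.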
You can click the JavaScript toggle button at the top of the Transform rules advanced field editor to use JavaScript mode to perform your transformation. If you enter JavaScript, then any rules you defined using the Rules mode for the editor are ignored. Keep these syntax differences in mind (a JavaScript sketch follows the list):
- JSON keys must be strings with double quotes:
"name":"Mary"
- JavaScript keys don't have quotes, but values do:
name:"Mary"
Advanced settings
File encoding type: File encoding defines how the individual characters in your data are represented on the file system. The default encoding is UTF-8, but you can optionally use Windows-1254 or UTF-16LE.
Page size: You can set the number of records per page for an export operation. The default (when left blank) is 20. There is no maximum value, but pages are capped if they exceed 5 MB. Most applications that you are importing data into have pre-established page size limits.
Data URI template: When your flow runs and data errors occur, this field ensures that each error on your job dashboard includes a working link to the original data in the export application. The field uses a handlebars template to generate dynamic links based on the data being exported. For example, for Shopify customer records:
"https://your-store.myshopify.com/admin/customers/{{{id}}}"
If you are exporting a CSV file, this field could instead be one or more columns from the file:
{{internal_id}}, {{Name}}, {{Email}}, etc.
- To include the raw (unescaped) data, use a triple-brace expression:
{{{name}}}
- To escape the data, use a double-brace expression:
{{name}}
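For example, if name contains Mary & Co:

{{{name}}} renders Mary & Co
{{name}} renders Mary &amp; Co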
You must provide a sample template file to proceed. This template automatically populates fields and helps you map fields later on. This file should contain all of the fields that you want to work with and must be in the same format that the export will read when deployed to your production environment.
If you do not have a sample template you can create one manually based on the specs provided by the export application provider.
Once you configure your Data Loader export, you can add your destination application import as you would for any other flow.
Configure your import
Click Add destination to create an import.
Choose the destination application from the drop-down menu. If the application you want to import data into is unlisted, you can use a universal connector (like REST or HTTP) to set up the destination application.
What would you like to do?: By default, Data Loader assumes you want to Import records into destination application. Data Loader flows assume that your flow does not need additional steps to generate the data you want to import into your destination application. For more information, see Imports.
Connection: If you have already created a connection for your destination application, you can select it from the drop-down menu. Otherwise, click the + button to create a connection.
Click Next.
Name the import and provide an optional description.
One to many: Select Yes (advanced) if the incoming source records contain child records that should instead be treated as the main records. For example, you might be processing order records but, for this step in the flow, want to treat the line items within the orders as the main records. If the records being processed are JSON objects, use the Path to many field to select or enter the JSON path to the child records. This field does not need to be set for array/row-based data.
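For example (an illustrative record, not a required structure), given an order like this:

{ "orderId": "1001", "items": [ { "sku": "A1" }, { "sku": "B2" } ] }

selecting Yes (advanced) and setting Path to many to items treats each object in the items array as a main record.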
Form view: Use the branded form view whenever possible to reduce the complexity of your import configurations. If you are importing data to an organization that is partnered with Celigo, the settings available on import are customized for that organization's services (as currently understood by Celigo developers). In cases where a change to an organization's platform has not yet been accommodated by Celigo's branded import settings, you can switch the form view to a generic universal form that displays all possible configuration options for that general request type (REST, HTTP, etc.).
The options available for import will vary widely depending on which service you are connecting to. Once you have configured your import options, you can set up import filters, map your data fields, and run the import.
Run the flow
Click the Run icon to run the flow.
Run Dashboard
The flow's status displays in the Run dashboard: it starts as Waiting in queue, changes to In progress, and then to Completed when the flow is finished. Results are color-coded:
- Green: success
- Red: error
- Blue: marked as resolved