This article describes how to export files from Microsoft Azure Blob Storage.
Start creating an export from Microsoft Azure Blob Storage in either of the following ways:
- From the Tools menu, select Flow builder. Then, click Add source.
– or –
- From the Resources menu, select Exports. Then, click + Create export.
From the Application drop-down list, select Microsoft Azure Blob Storage and your available Microsoft Azure Blob Storage connection.
Name (required): Provide a clear and distinguishable name. You will be able to choose this export throughout integrator.io, and a unique name will prove helpful later when selecting among a list of exports that you’ve created.
Description (optional): Describe your export so that you and others can quickly understand its purpose. Be sure to highlight any nuances that a user should be aware of before using this export in a flow. As you make changes to the resource, be sure to keep this description up to date.
Parse files being transferred: If you need to parse CSV, XML, or JSON files into records before sending them to other applications, select Yes. Select No if you want to transfer the files in the exact same format as they are stored in Microsoft Azure Blob Storage.
File type (required): Choose the type of file to be exported from Microsoft Azure Blob Storage. For example, choose CSV if you are exporting a flat, delimited text file, or XLSX for a binary Microsoft Excel file. The file type you select changes the fields available on the Create export panel. Acceptable file types include CSV, EDI X12, EDIFACT, Fixed width, JSON, XLSX, and XML.
Sample file (that would be parsed) (required): This field only displays if you choose CSV, JSON, XLSX, or XML as a file type. Choose a sample file to define the record structure. Click Choose file and navigate to a sample version of the files you will be exporting.
CSV parser helper: The CSV parser helper can be used to visualize and experiment with how integrator.io parses CSV files (or any other delimited text files) into the JSON records/rows that then get processed by your flow. See CSV parser helper.
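For instance, a small delimited file such as this hypothetical two-column sample:

  order_id,email
  1001,jane@example.com
  1002,sam@example.com

would be parsed into one JSON record per row, with the header row supplying the field names, roughly:

  { "order_id": "1001", "email": "jane@example.com" }
  { "order_id": "1002", "email": "sam@example.com" }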
EDI x12 format (required): Select the EDI X12 file format that matches the files you are exporting from Microsoft Azure Blob Storage.
EDIFACT format (required): Select the EDIFACT file format that matches the files you are exporting from Microsoft Azure Blob Storage.
Format (required): Select the Fixed width file format that matches the files you are exporting from Microsoft Azure Blob Storage.
Sample file (that would be parsed) (required): This field only displays if you choose CSV, JSON, XLSX, or XML as a file type. Choose a sample file to define the record structure. Click Choose file and navigate to a sample version of the files you will be exporting.
Resource path: Optionally, define the JSON path to the resources you are exporting from the JSON file. Handlebars syntax is supported.
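For example, if your JSON files nest the records you care about under a property (the structure below is hypothetical), you could point Resource path at that property:

  {
    "metadata": { "generated": "2023-01-01" },
    "orders": [
      { "id": 1 },
      { "id": 2 }
    ]
  }

Setting Resource path to orders would export only the objects in the orders array, not the surrounding metadata.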
Sample file (that would be parsed) (required): This field only displays if you choose CSV, JSON, XLSX, or XML as a file type. Choose a sample file to define the record structure. Click Choose file and navigate to a sample version of the files you will be exporting.
File has header: Check this box if the files you are exporting contain a header row, that is, if the first row of your XLSX file is reserved for column names rather than actual data.
Sample file (that would be parsed) (required): This field only displays if you choose CSV, JSON, XLSX, or XML as a file type. Choose a sample file to define the record structure. Click Choose file and navigate to a sample version of the files you will be exporting.
XML parser helper: The XML parser will give you immediate feedback on how your parse options are applied against raw XML data. See XML parser helper.
Container name (required): Specify the name of the Microsoft Azure Blob Storage container that stores the files to be transferred. integrator.io will transfer all files from the specified container and delete them from the container once the transfer completes. You can optionally configure integrator.io to leave files in the container, or to transfer only files whose names match a certain starts with or ends with pattern.
File name starts with: Use this field to specify a file name prefix to filter which files in the Microsoft Azure Blob Storage Container name will be transferred. For example, if you set this value to test then only files where the name starts with test will be transferred (like test-myFile.csv).
File name ends with: Use this field to specify a file name postfix to filter which files in the Microsoft Azure Blob Storage Container name will be transferred. For example, if you set this value to test.csv then only files where the name ends with test.csv will be transferred (like myFile-test.csv). You must specify the file extension for this filter to work correctly.
The Sorting and Grouping drop-down menu allows you to manage your flow’s files by sorting and grouping records by field. This feature gives you the ability to group records by field (column1, column2, column3…), and sort the records in ascending or descending order (ASC or DESC).
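For example, assuming a hypothetical customer_id column, grouping by customer_id and sorting it in ascending (ASC) order would take parsed rows such as:

  customer_id,order_id
  A102,5001
  A103,5002
  A102,5003

and deliver them as ordered groups, one group per customer_id value: [A102: 5001, 5003] and [A103: 5002].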
Decrypt files: This option is enabled if your connection is configured to use PGP encryption/decryption.
Decompress files: Set this field to True if you're exporting compressed files.
Leave file on server: If you check this box, integrator.io will NOT delete files from Microsoft Azure Blob Storage after your export completes. The files will stay in Microsoft Azure Blob Storage, and if the flow runs again, the same files will be exported again.
File encoding: The file encoding indicates how the individual characters in your data are represented on the file system. Depending on the source system of the data, the encoding can take on different formats. If left blank, the default encoding is UTF-8, but you can also choose Windows-1252 or UTF-16LE formats.
Backup files path: Enter a value relative to the root directory if you want to archive a copy of the exported files. For example, to have integrator.io store a copy of the exported file at ftps://ftp.my-site.com/integrations/backup, enter /integrations/backup. This path must already exist on the server.
Page size: Specify how many records you want in each page of data. The default Page size value (when you leave this field blank) is 20.
Data URI template: This field ensures that every error in your job dashboard includes a link to the original data in Microsoft Azure Blob Storage. This field uses handlebars templates to generate dynamic links based on the exported data. For example, if you are exporting a CSV file, this field could include one or more columns from the file, such as {{{internal_id}}} or {{{email}}}.
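For example, assuming your records contain an id column and your data is viewable at a hypothetical URL, a template such as https://my-site.example.com/records/{{{id}}} would render a clickable link for each failed record, with {{{id}}} replaced by that record's value.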
Do not store retry data: Check this box if you do NOT want integrator.io to store retry data for records that fail in your flow. Storing retry data can slow down your flow's overall performance if you are processing very large numbers of records that are failing. Storing retry data also means that anyone with access to your flow's dashboard can see the retry data in plain text.
Override trace key template (optional): Define a trace key that integrator.io will use to identify a unique record. You can use a single field such as {{{field1}}} or a handlebars expression. For example, the syntax {{join "_" field1 field2}} will generate a trace key of the form field1_field2. When this field is set, you override the platform default trace key field.
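For example, if a parsed record contains the hypothetical values field1 = "PO-88" and field2 = "2023-05-01", the expression {{join "_" field1 field2}} would resolve to the trace key PO-88_2023-05-01.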