You can debug a flow in test mode to ensure the flow processes your records as expected before deploying the flow to production. Enter mock data manually or make calls to your source and destination apps to retrieve live source data and response data to use for testing purposes. When you run a flow in test mode, you can review the processing behavior of each flow step option (transformation, mapping, filter, or hook) and fine-tune your flow configuration to handle all variations of results or responses from each application endpoint.
Warning: Update to the latest integrator.io bundle and SuiteApp versions for SuiteScript 2.0 to make sure no records are created, deleted, or updated in NetSuite during test runs.
Contents
- Provide mock data for a test run
- Switch to test mode
- Using test mode
- Run a test flow
- Cancel a test run
- Review test run results and errors
- Other test mode considerations
- NetSuite lookups
Important concepts:
- Licensing – Any type of license allows you to run any flow in test mode multiple times using as many endpoints as you like.
- Permissions – You can test run a flow in any integration that you have access to.
- NetSuite requirements – Update to the latest bundle version (v1.28.0.0) and SuiteApp version (v1.11.0) to make sure no records are created, deleted, or updated in NetSuite during test runs.
- Deleting files from server – No files are deleted even if the Leave file on the server option isn’t set.
- Pages – Test runs process only the first page of data.
- NetSuite and SuiteScript hooks – Be cautious while executing test runs that make a live call to NetSuite from SuiteScript hooks or any hook that uses a virtual export or import. In such cases, the live call can unintentionally create or update a record during the test run.
- Trace keys – By default, the record ID field is used as the trace key. You can define a different trace key with a trace key template. The trace key might not work as expected if the number of mock output records does not equal the number of exported records and the top-level record ID is mapped in response mapping.
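As a hedged illustration, a trace key template typically uses a handlebars expression to select a field from each record. Assuming the exported records contain an email field (a hypothetical example; confirm the available fields and template syntax for your export type), the template might be:

```handlebars
{{record.email}}
```

This would use each record's email address, rather than the record ID, as its trace key.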
Provide mock data for a test run
Test mode flow runs allow you to test your flow with live data as mock data or manually enter mock data for testing purposes. Before you can perform accurate test runs, you must first provide mock output data for the source flow step(s) and any lookup flow step you want to test. You must also provide mock response data for each destination import flow step.
- Live mock output data: You can retrieve live mock output data from the source application of any export or lookup that uses an HTTP-based connector. The Preview panel on the right allows you to see the constructed request and a sample record that represents what integrator.io will send to the destination flow step.
- Live mock response data: For imports, you can send the mock output record from the source application to the destination application API to generate a response, and use the returned response data to configure response mappings. Use caution if you use live data to test import flow steps because the live call can unintentionally create or update records in the destination application.
- Manually enter mock data: You can manually enter mock data to use as source output data or destination import response data. The mock data you enter in the Mock output or Mock response AFE is used for the test run, and no call to the application is made to retrieve testing data from the connected app.
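For illustration, mock output data for a source flow step is a page of records: a JSON array of record objects. The field names below are hypothetical placeholders, not a required schema — your records should mirror the actual data your source application returns:

```json
[
  { "id": "1001", "name": "Acme Corp", "email": "billing@acme.example" },
  { "id": "1002", "name": "Globex", "email": "ap@globex.example" }
]
```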
For exports and lookups, if you do not provide any mock output data in the Mock output setting (and do not click Populate with preview data) before you execute a test run, the flow will make a call to the source application to retrieve live data and will prioritize the test run call in the connection queue.
For imports, if you do not provide mock response data in the Mock response setting (and do not click Populate with live data), integrator.io inserts the following sample data in the Mock response setting to complete the test run.
You can use mock data that you provided to test flow behavior in future test runs.
- If an integration folder (or a single flow within an integration folder) is cloned, any mock data you provided for the cloned flow(s) is included in duplicate flow(s).
- Mock data that you provide while creating a template is retained after publishing and is also available after template installation.
- If you create a snapshot of an integration, the snapshot includes mock data that has been provided at the timestamp of the snapshot.
- If you revert a flow to a previous version, the mock data also reverts to the mock data that was present at the time of the revision timestamp.
- If you merge clone data with source mock data for a test run, then any changes made to source mock data are replaced by the clone mock data.
Sample mock response data:
```json
[{
  "id": "1234567890",
  "errors": [],
  "_json": { "mockResponse": "replace me" },
  "statusCode": 200,
  "ignored": false,
  "dataURI": "",
  "_headers": { "content-type": "application/json; charset=utf-8" }
}]
```
Using the above sample data will not provide meaningful test results, so you should instead replace the sample data in the Mock response AFE with accurate mock data in valid integrator.io canonical JSON format. Click Populate with live data to replace the mock response data with live response data.
Mock response data fields
Field | Description |
---|---|
id (optional) | Unique identifier of the record |
errors (optional) | Array of error objects, each containing code, message, and source. Structure: `[{ "code": <error_code>, "message": <error_message>, "source": <error_source> }]` |
_json (optional) | The full HTTP response returned by the call to the endpoint |
statusCode | The HTTP status code that indicates the success or failure of the import request. Use 200 to indicate success and 422 for failure |
dataURI (optional) | The HTTP URL to the processed data from the export application |
_headers (optional) | The HTTP headers returned from the live call to the endpoint |
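Tying these fields together, the following is a hedged sketch of mock response data representing a failed import record. The code, message, and source values are hypothetical placeholders — replace them with values that reflect the errors your destination application actually returns:

```json
[{
  "id": "1234567890",
  "errors": [
    {
      "code": "DUPLICATE",
      "message": "A record with this external ID already exists.",
      "source": "application"
    }
  ],
  "statusCode": 422,
  "_headers": { "content-type": "application/json; charset=utf-8" }
}]
```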
Test runs pass the mock output data from the source flow step to the next downstream flow step. Each subsequent step uses the mock data from the previous flow step. You can manually enter the data in the mock output editor or click Populate with preview data to populate the mock output data.
Important: Running a flow in test mode applies only to Flow Builder. Flow tests do not affect any other functionality beyond the flow you are testing. The Dashboard in the left navigation menu doesn’t display test run information, the integration flow list is unaffected by running flow tests, and disabled flows remain unaltered.
The testMode boolean flag is passed as an input parameter to hooks and can be used to customize script behavior during test runs, previews, or send calls. An export preview will not include the effects of the testMode flag in the hook, but any subsequent destination step will include the results triggered by the testMode flag.
Note: The testMode flag is supported only in the preSavePage, preSend, preMap, postMap, postSubmit, postAggregate, and postResponseMap export and import hooks.
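As a hedged JavaScript sketch, a preSavePage hook might branch on the testMode flag to skip side effects during test runs. The options fields shown (data, errors, testMode) and the return shape follow the structure described above; the record transformation itself is a hypothetical placeholder — verify the hook signature against the hook documentation for your account:

```javascript
// Minimal preSavePage hook sketch (assumptions noted in the lead-in).
// options.data    – the page of records exported from the source
// options.errors  – any errors raised so far for this page
// options.testMode – true when the flow is running in test mode
function preSavePage(options) {
  if (options.testMode) {
    // Test run: pass records through unchanged and avoid side effects
    // (e.g., skip any external calls your hook would normally make).
    return { data: options.data, errors: options.errors, abort: false };
  }
  // Normal run: apply a (hypothetical) transformation to each record.
  const data = options.data.map((record) => ({ ...record, processed: true }));
  return { data, errors: options.errors, abort: false };
}
```

The same pattern applies to the other supported hooks: check options.testMode early and short-circuit any logic that would create or update records.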
Switch to test mode
In Flow builder, when a flow is not enabled, it automatically switches to test mode. The Run flow button becomes Run test. When you enable the flow, the button returns to Run flow.
If you want to switch a flow to test mode, disable the flow.
Testing flows that have multiple sources
For flows that have multiple source steps, during both test runs and production runs, a lightning bolt icon on the left side of a source flow step indicates which source is currently feeding data into the flow. By default, the icon appears next to the top source when the flow is disabled for test runs. When the flow is enabled and running in production, the icon remains visible until the flow completes, and it is shown only when a single source is used.
Note: The preview data in the flow step is updated based on the selected source.
Set the source to be used in the test run
You can select only one source flow step to provide the mock data for the test run from the list of available sources next to the Run test button. The lightning bolt icon indicates the selected source flow step. By default, the first source is selected.
- Test flow run – The selected source is stored for the session and used for the flow test run
- Enabled flow run – The selected source is never stored. If you want to execute a flow for a single source, then you have to select it manually
- Scheduled flow run – If the flow is not triggered by selecting a source from the list, but instead runs on a schedule, is triggered by a webhook listener, or is set to run after another flow is complete, the entire flow (including all sources) is executed
- Listener export – A listener export cannot be manually triggered in production mode. In test mode, you can use mock output data as the source to run a test flow
Note: The selected source for a session is stored and used for the upcoming test flow run.
Using test mode
- Disable your flow to enter Test mode (any new flow is disabled and in test mode by default).
- Select a single source for your test run. If you have multiple sources, you can change the source that is used. To use an offline source for a test run, you must provide mock output data in one of the following ways:
- Provide your own output data from the API documentation.
- Populate with preview data pulled from the source (default if left blank).
- Provide mock response data for all destinations.
- Provide your own mock response data from the API documentation. You must provide a statusCode in your mock response data. If your mock response data contains an error, the test results will demonstrate flow behavior when such errors occur during enabled production flows.
- Populate with live data: This option is available only when your connection is online and for connectors that have a preview panel. Clicking this button makes a live call to the import application and creates a record in the destination app. The response returned by the call to the destination app will replace any existing mock response data. Test mode flow runs use only mock data, so no live calls will be made to the destination app when you run the flow in test mode.
- If you leave the mock response setting empty, integrator.io populates the step with dummy data during a test run, and it may not provide accurate test results. That auto-populated data remains in the setting. You can add your own data instead of “replace me”.
- Populate with canonical stub: Populates the mock output and mock response fields with a data object in the required integrator.io canonical JSON format. Replace the placeholder text with your own source data.
Run a test flow
You can run a test flow in either of two ways:
- Click the Run test button at the top right of Flow Builder
- Click the Run test button in the Run console panel
To run the test for a specific source, first select a source flow step from the list next to the Run test button.
Cancel a test run
During the test run, the Run test button is replaced with a Cancel test run button. Click Cancel test run to abort the test run. The on/off switch is disabled until the test run is completed or is canceled.
If a test flow run takes more than 100 seconds to process, the system displays a message about the long processing time. You can either cancel the flow run or wait for it to complete.
To avoid system performance issues, a test run is automatically canceled after five minutes in the following scenarios:
- The endpoint does not respond within two minutes of starting the test run
- An uncaught exception occurs
Review test run results and errors
After the test run is complete, you can view the test run results for each flow step option and review errors in the Run console. Each flow step also displays the error status of your test run.
Review test run results for each flow step option
The test result indicator displays next to each flow step option (transformation, filter, mapping, or hook) after the test run is complete.
Click the indicator to see the test results associated with that flow step.
The image below displays the filter indicator on the flow step. The window title changes based on the step whose test run details are displayed.
The Test run results section lists all records along with the trace key and status that were processed based on the selected processor. For example, Record 1, Record 2, and Record 3 as shown in the image. The record status can be success or error. In case of filter evaluation for successful records, the record status is shown as Success or Success (ignored).
- Success – Record successfully processed and passed the filter criteria
- Success (ignored) – Record successfully processed and did not pass the filter criteria
Click a record under Test run results to see the related input to the processor in the Input pane and the processor output for the selected record in the Output pane. Only one record can be viewed at a time. Use the toggle in the top right corner to change the editor view between Rules or JavaScript while making edits.
Note: All test run results are cleared when you edit a flow (including changing the test run source, modifying a step, changing step options, or reordering steps). Run a new test flow to see accurate results based on the updated flow steps.
The test run results are displayed only for the steps executed during the test run.
To see detailed test run results for NetSuite SuiteScript hooks, click the test result indicator.
The SuiteScript preSend hook is executed, and its input and output are available in the test run results, only when no mock output is provided.
- If the batch size is less than 10, the number of records shown in the mock output equals the batch size.
- If the page size is less than the batch size, the number of records shown in the mock output equals the page size.
Note: The number of records processed in a test run is based on the export configuration.
Review test run errors on flow steps and in the Run console
After a test flow completes, you can expect any of the following results in the Run console:
- Success
- Ignore
- Error
- Resolve
The Run history tab in the Run console panel is disabled while in test mode, that is, when a flow is switched off.
Errors and successes are indicated on the flow steps as shown in the production flow runs.
Note: In test mode, the test result is updated only for the selected source. The test run results are not available for other sources.
Perform the following steps to see the record error details:
- Click an error to open the Error details page and review the list of errors.
- By default, the first record is selected and the error details can be seen under Error details at the right.
- Click a record to see the error details.
- If you switch from a unified view to list view by using the toggle view, then to see the error details for the selected source, click View error details under the Actions overflow (...) menu.
Note: Test run results are not displayed for dynamic lookups.
Other test mode considerations
Test mode is currently in beta release while we work on improvements and gather customer feedback. Keep the following important tips in mind when getting started with test mode:
- Test results are cleared in any of the following cases:
  - Your Celigo platform session expires
  - You run the next test flow in the same session
  - You navigate away from Flow Builder
  - You change the URL
- Always add mock response data to each destination step before you run your first test. Otherwise, dummy data will auto-populate the first run, which will not provide accurate test results. You can add mock response data either by:
  - Clicking Populate with live data inside a destination’s mock response panel (running the import can result in records being updated or created)
  - Entering your own data from the app’s API documentation (must include a statusCode)
- Tests can be run only using a single source, which you can choose (if you have multiple sources). You can add your own mock output data to a source. Otherwise, live data from the source will auto-populate the first test run.
- After a test run has completed, the areas that have results are marked with the test result indicator.
- In test runs, only the first page of records is processed.
- Test results will not change until the next test run, even if you modify the flow configuration.
- Test results will be cleared and are not saved if you navigate away from your flow, refresh, run a new test, or enable your flow.
- Certain features – such as flow schedule, next flow, auto-resolve, and error email notifications – are not available in disabled flows.
Let us know your thoughts about test mode and which enhancements you want to see prioritized.
NetSuite lookups
You can perform the following actions for NetSuite only if you are using integrator.io SuiteBundle v1.31.0.0 and SuiteApp v1.14.0.0:
- If you configure a dynamic lookup in NetSuite mapping, you can review whether the test run results are as expected.
- Validate whether the test run results for How can we find existing records? are accurate. If needed, you can change the criteria to get accurate results.
Review flow step and script debug logs
You can review the test run debug logs for HTTP exports, imports, lookups, and listeners, and for the scripts within the flow, without starting the debug process. The debug logs from enabled runs are also available under the Debug logs tab.
Perform the following steps to view the execution logs for scripts added to a flow:
- On the Flow Builder page, click the Test run execution logs tab in the Run console panel.
- Select the script from the Scripts drop-down list to view its log details.
The script log details are displayed.
In test mode, the debug logs are available only for a single session.
Note: postResponseMap hook script execution logs are not captured in a test run.
Blob support
Blobs are supported in test mode with the updated canonical stub. If you run a flow in test mode for an FTP blob export or lookup and there are more than 200 files on the FTP server, the test run may be delayed or time out, and the test run results may not be visible.