A connection is a resource – specific to one endpoint account – that you can create and apply to any integrator.io flow or Data loader operation. Primarily, the connection securely stores your credentials for accessing an app, file server, or database system. Each connection also plays an equally important role: regulating the flow of data between the source and the destination.
A fundamental concept of integrator.io is that the connection also serves as a “channel,” or a queue, through which all data is processed. When you query an app to export data from it or import data into it, that request is added to the connection’s queue and processed in order, after earlier requests.
You can see the remaining items for each connection in the Queue size column of the Resources > Connection screen.
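The queue behavior described above can be sketched in a few lines. This is a hypothetical model for illustration only – not integrator.io’s implementation – and the class and method names are invented:

```python
from collections import deque

class Connection:
    """Toy model of a connection: a strictly first-in, first-out queue."""

    def __init__(self, name):
        self.name = name
        self.queue = deque()

    def enqueue(self, request):
        # Every export or import request is appended to the connection's queue.
        self.queue.append(request)

    def queue_size(self):
        # Corresponds to the "Queue size" column on Resources > Connections.
        return len(self.queue)

    def process_next(self):
        # Requests are processed in arrival order, after earlier requests.
        return self.queue.popleft()

conn = Connection("demo-endpoint")
conn.enqueue("export: new sales orders")
conn.enqueue("import: orders -> ERP")
print(conn.queue_size())    # 2
print(conn.process_next())  # "export: new sales orders" runs first
```

The key property is the FIFO ordering: a request added later cannot jump ahead of requests already waiting in the same connection’s queue.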
It might help to visualize the flow of data in these terms:
Imagine you’re at a ski resort. Think of the app’s maximum concurrent API requests limit as the chairlift: if the lift has five seats, then that’s all it can hold at a time, and there’s no way to get more out of that chairlift. Multiple connections, then, are like setting up different lines to board the chairlift.
During the holiday season, you might set up a VIP line just for your most important guests, so they can always go straight to the head of the line.
That’s really what these connections allow you to do, and they let you add as many lines as you want and then assign who gets to go in which line. This functionality [designating separate connections per flow step and borrowing concurrency among connections] allows you to prioritize certain requests over others.
To be clear, the app’s governance, or rate limit, is the “chairlift”: it allows only a certain amount of data through, and there’s no increasing that at all. Connections simply let you organize how you load all the passengers onto the chairlift.
The “chairlift,” like all metaphors, is helpful if imperfect. Consider, too:
- You might naturally assume a chairlift’s ascending to be an import, and its trip down the mountain an export. In practice, every call involves at least one request-response round trip (from integrator.io to the app through the connection and back, for a scheduled flow step; vice versa for a lookup).
- You may, in fact, be able to increase the API’s global concurrency limit by increasing the account level. See the vendor’s sales plans or documentation for opportunities to increase your limit, and then modify your integration’s connections accordingly.
- Not all APIs impose a limit, and some allow a virtually infinite number of concurrent calls. When a flow spans endpoints with different limits, keep both (or more) values in mind when optimizing data transfer. (One lift comes up from the bunny hill with a few passengers and drops off additional skiers into another line, heading up to the black diamond.)
One strategy for controlling the volume of requests is to adjust the connection’s “concurrency.” The Advanced settings section of every connection offers a field for you to specify the Concurrency level:
The concurrency level defines the maximum number of requests that a connection can run in parallel, while emptying a single queue. This setting applies to all data flowing through a connection.
Take the example of a concurrency level of 6, where one of the spaces is being used for a large export. You still have five spaces remaining to process the imports being queued from one or more exports.
You may select up to 25 concurrent requests. If you leave Concurrency level blank, integrator.io defaults to “burst mode,” sending requests as fast as possible, with high levels of concurrency.
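Conceptually, a concurrency level behaves like a semaphore with that many slots: a request must claim a free slot before it is sent, and it releases the slot when its round trip completes. The sketch below is a hypothetical model (not integrator.io code) showing how a level of 6 caps the number of in-flight requests:

```python
import threading
import time

CONCURRENCY_LEVEL = 6  # illustrative value, as set in Advanced settings
slots = threading.BoundedSemaphore(CONCURRENCY_LEVEL)

peak = 0      # highest number of requests observed in flight at once
active = 0
lock = threading.Lock()

def send_request(i):
    global peak, active
    with slots:                 # blocks until one of the 6 slots is free
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)        # simulate the API round trip
        with lock:
            active -= 1

# 20 queued requests compete for 6 slots; the rest wait their turn.
threads = [threading.Thread(target=send_request, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # peak in-flight requests; never exceeds 6
```

Leaving the level blank (burst mode) would correspond to skipping the semaphore entirely and dispatching requests as fast as possible.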
Tip: It is always advisable to consult your app’s API documentation to learn how many concurrent requests are accepted; be sure to check the limit for the account level that you purchased, if the vendor offers tiered subscriptions. If you are allowed one API request per minute and your Concurrency level is set to 3, you would get governance errors until you reduce the level.
You could exceed the limit of 25 requests by creating an additional connection, so that the combined concurrency levels total more than 25. Both queues would initiate near-parallel data transactions if scheduled simultaneously. This workaround, however, would most likely also require you to partition your flows and data sets.
Managing concurrency is an important factor in optimizing data transfer. For more information, see Fine-tune integrator.io for optimal data throughput.
To manage multiple connections’ requests, you can allow one connection to “borrow” throughput from another connection, as long as they both connect to the same app. (See Advanced settings, above.) All of the borrower’s concurrent requests are consolidated, along with all of the lending connection’s concurrent requests, within the same queue.
In other words, with borrowed concurrency, multiple connections can share the same concurrency level, such that the data flowing through both connections is submitted to the endpoint application together via a shared concurrency model. Borrow concurrency from applies only when Concurrency level is set for one of the connections.
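In a sketch (again a hypothetical model; the class and field names are invented), borrowing means the borrower has no slot pool of its own – it drains its queue through the lender’s slots:

```python
import threading

class Connection:
    """Toy model of a connection's concurrency settings."""

    def __init__(self, name, concurrency_level=None, borrow_from=None):
        self.name = name
        if borrow_from is not None:
            # "Borrow concurrency from": share the lender's pool of slots.
            self.slots = borrow_from.slots
        elif concurrency_level is not None:
            # "Concurrency level": a dedicated pool of this many slots.
            self.slots = threading.BoundedSemaphore(concurrency_level)
        else:
            self.slots = None  # burst mode: no explicit cap

invoices = Connection("invoices", concurrency_level=3)
products = Connection("products", borrow_from=invoices)
print(products.slots is invoices.slots)  # True: one shared pool of 3 slots
```

Because both connections draw from the same pool, the endpoint never sees more simultaneous requests than the lender’s concurrency level allows, no matter which connection’s queue the requests came from.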
That raises the question: why would an integration need more than one connection, with a dedicated queue and concurrency level, to the same endpoint? Consider the following use cases and the flexibility that integrator.io provides:
- Some service-level agreements require timely reporting, and therefore frequent cross-app integration, of purchases. You may want to have a dedicated queue for exporting sales invoices (with a low concurrency level), not shared with a less important flow that syncs product data overnight (at a higher concurrency level).
- Connections for the same company may require different sets of permissions, such as one connection for privileged payroll data and a separate connection for more general-audience accounting data.
When Borrow concurrency from references a connection, integrator.io validates the relationship and notifies you if you attempt to delete the lending connection or to borrow its concurrency again from a third connection.
Restricting multiple connections to the same shared concurrency level
Picture an integration with three separate connections to the same endpoint: Connection A, Connection B, and Connection C.
In this instance, it is important to stay within a maximum of two simultaneous requests to the endpoint across all three connections. That is, they must use a shared concurrency level of 2 for all requests, which can be accomplished with the following steps:
- Modify Connection A to set its Concurrency level to 2.
- Modify Connection B, by selecting Borrow concurrency from > Connection A. Save Connection B.
- Modify Connection C, by selecting Borrow concurrency from > Connection A. Save Connection C.
When the flows run, all three connections will share the same concurrency level of 2, even when more than one connection adds a request to the queue at the same time.
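The result of these steps can be represented as follows. The field names here are illustrative, not integrator.io’s actual settings schema; the point is that each borrower resolves its effective limit through Connection A:

```python
# Hypothetical snapshot of the three connections' Advanced settings
# after completing the steps above:
connections = {
    "Connection A": {"concurrency_level": 2,    "borrow_concurrency_from": None},
    "Connection B": {"concurrency_level": None, "borrow_concurrency_from": "Connection A"},
    "Connection C": {"concurrency_level": None, "borrow_concurrency_from": "Connection A"},
}

def effective_limit(name):
    """Follow the borrowing chain back to the connection that owns the level."""
    cfg = connections[name]
    lender = cfg["borrow_concurrency_from"]
    return effective_limit(lender) if lender else cfg["concurrency_level"]

for name in connections:
    print(name, "->", effective_limit(name))  # all three resolve to 2
```

This also illustrates why chained borrowing is disallowed: the validation described earlier keeps the chain one level deep, so every borrower points directly at the connection that owns the concurrency level.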
Configure two connections
Another scenario in which you might want to create two connections and set separate concurrency levels is as follows:
- One connection serves flows that generate high-volume, high-priority, time-sensitive data, such as orders and fulfillments.
- Another connection serves flows that generate high-volume, low-priority, time-insensitive data, such as syncing product information.
Such a configuration enables you to have greater levels of concurrency available for your high-priority data, which should get synced faster.
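A toy queue-position calculation (with hypothetical request counts) shows why the dedicated connection helps – this is the VIP line from the chairlift metaphor:

```python
from collections import deque

# One shared queue: an urgent order request lands behind 1,000 queued
# product-sync requests and must wait its turn.
shared = deque(f"product-{i}" for i in range(1000))
shared.append("order-123")
position_shared = list(shared).index("order-123") + 1

# Two connections: the order flow has a dedicated queue, so the same
# request is first in line regardless of the product-sync backlog.
orders = deque(["order-123"])
products = deque(f"product-{i}" for i in range(1000))
position_dedicated = list(orders).index("order-123") + 1

print(position_shared, position_dedicated)  # 1001 1
```

The endpoint’s rate limit is unchanged in both cases; only the ordering differs, which is exactly what separate connections let you control.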