I'm trying to speed up a new FTP-to-API transfer flow.
For my test, which I ran several times, I made 459 files available. Looking at the Run History, the FTP pickup took 34-37 minutes and the API export took 2-3 minutes. This fileset had four files that were significant in size (3.3 MB, 3.0 MB, 1.1 MB, and 0.5 MB); all of the remaining files were 17 KB or smaller.
The FTPS server is running on our LAN, we have gigabit-plus internet speed, we have checked our firewall settings, disabled deep packet inspection, etc., and we have run out of options. This FTPS server is dedicated to working with Celigo.
We ran an independent test connecting to this server from another SaaS service over the same protocol (FTPS), and we were able to pick up 1,700 files in 3 minutes, so the bottleneck does not seem to be our server, network, or internal security.
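To put numbers on the gap, here is a quick back-of-the-envelope comparison of the two throughput figures quoted above (using the midpoint of the 34-37 minute range for the Celigo pickup):

```python
# Throughput comparison from the figures in this post.
flow_files, flow_minutes = 459, 35.5    # Celigo FTP pickup, midpoint of 34-37 min
test_files, test_minutes = 1700, 3.0    # independent FTPS test from another service

flow_rate = flow_files / flow_minutes   # files per minute via Celigo
test_rate = test_files / test_minutes   # files per minute in the independent test

print(f"Celigo pickup:    {flow_rate:.1f} files/min")   # ~12.9 files/min
print(f"Independent test: {test_rate:.1f} files/min")   # ~566.7 files/min
print(f"Ratio:            {test_rate / flow_rate:.0f}x")  # ~44x
```

So the independent test is roughly 44 times faster per file than the Celigo pickup, which is well beyond even the 8-10x improvement I was hoping for.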
I have concurrency turned up to 4, but this made no noticeable improvement. Setting a batch size triggers an error message about blob transfers and won't let me save the change.
I was expecting speeds about 8 to 10 times faster than I'm seeing. Also, what happens if this flow is scheduled and the previous occurrence is still running? Will the new run wait until the previous one finishes, or does it know which files are already being worked on and skip them, or will it try to re-transfer the same files?