Can I throttle my Amazon Hybrid Connection?

Comments

12 comments

  • Scott Henderson CTO
    Celigo University Level 1: Skilled
    Answer Pro
    Top Contributor

    The default concurrency level for HTTP-based connections is actually undefined/empty (i.e. the UI shows 'Please select' in the dropdown field). When the value is left empty, we use burst mode, which blasts an API as fast as possible. If you want to set the concurrency level to 1, which is the lowest value possible, then you need to explicitly select 1 in the dropdown.

    0
  • Alex Baeza
    Answer Pro
    Celigo University Level 4: Legendary
    Awesome Follow-up

    Thank you for the information.

    This note on the Assign concurrency levels to data transfer page confused me:

    NOTE: If you do not select any Concurrency level value for a universal HTTP connection (that is, the setting is left blank), then the concurrency defaults to “1” in “burst mode.” ...

    It seems to imply that leaving the concurrency level unset defaults it to 1, and that this is the same as burst mode.
    Perhaps this documentation can be updated to more clearly indicate that concurrency level 1 is not burst mode?

    0
  • Alex Baeza
    Answer Pro
    Celigo University Level 4: Legendary
    Awesome Follow-up

    After setting the concurrency level to 1, I still get quota exceeded errors.

    Is there any way to throttle my requests, for example by setting a maximum number of requests per minute?

    0
  • Scott Henderson CTO
    Celigo University Level 1: Skilled
    Answer Pro
    Top Contributor

    Here are your primary tools for managing governance right now:

    1. Set the concurrency level to 1, and make sure that ALL flows in your account share the exact same connection resource. If you want to use multiple connections across different flows with different credential permissions, then be sure to have them all borrow concurrency from the same single connection.
    2. You can schedule your flows in accordance with the projected number of API calls, so that you hopefully do not exceed hourly limits.
    3. If you are doing "Lookups" in your flow, such that for each record being processed you make additional API calls back to that system, then oftentimes there is a better way to design these types of flows so that a per-record call is not needed.
    4. If you are doing imports, then it may be possible to increase the page size and batch size so that you submit larger requests to the API governing you, but fewer API requests overall.
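    As an illustration of the "maximum requests per minute" idea from the question above, here is a minimal client-side sliding-window limiter sketch. This is not an integrator.io feature or API, just a hedged example of the general technique; the class name and limits are made up:

```python
import time
from collections import deque

class RateLimiter:
    """Allow at most max_calls requests per `period` seconds (sliding window)."""

    def __init__(self, max_calls, period=60.0):
        self.max_calls = max_calls
        self.period = period
        self.calls = deque()  # timestamps of recent calls

    def wait(self):
        """Block until another request is allowed, then record it."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call in the window expires.
            time.sleep(self.period - (now - self.calls[0]))
        self.calls.append(time.monotonic())

# e.g. cap a hypothetical client at 30 requests per minute:
limiter = RateLimiter(max_calls=30, period=60.0)
```

    You would call `limiter.wait()` before each outbound request; the limiter only delays the caller, it never drops requests.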

    Do any of those help?  If not, here is what we are actively building for release next year.

    1. We are planning to introduce an auto recovery procedure for governance errors. The auto recovery will first lower concurrency, and if the governance errors persist, it will introduce a delay between requests that keeps growing until the governance error is gone.
    2. After we release the auto recovery outlined above, we will then look to expose the ability for everyone to use the 'delay helper' concept in their flows as well.
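    The growing-delay part of point 1 can be sketched as a retry loop with a delay that escalates while the governance error persists. This is only an illustration of the concept, not integrator.io internals; `send_request`, the status codes, and the delay values are all placeholders:

```python
import time

def send_with_recovery(send_request, max_attempts=6, base_delay=1.0, factor=2.0):
    """Retry on governance (HTTP 429) errors with a delay that keeps growing."""
    delay = base_delay
    for attempt in range(max_attempts):
        status, body = send_request()
        if status != 429:          # not a governance error: done
            return status, body
        time.sleep(delay)          # back off before the next attempt
        delay *= factor            # grow the delay each time the error persists
    raise RuntimeError("governance error persisted after retries")
```

    The multiplier keeps doubling the pause, so even a badly throttled endpoint is eventually called slowly enough to succeed or the loop gives up.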
    0
  • Alex Baeza
    Answer Pro
    Celigo University Level 4: Legendary
    Awesome Follow-up

    I don't think any of the suggestions apply to my case.

    The new auto recovery feature for governance seems very promising though.

    0
  • Scott Henderson CTO
    Celigo University Level 1: Skilled
    Answer Pro
    Top Contributor

    Hey Alex, I just heard this morning that we are planning a patch release soon to fix how we auto handle governance errors from the Amazon SP APIs. At a very high level, we already run some auto recovery logic for this API's governance/throttling errors, and the Amazon SP APIs return a numeric field telling you how long to wait before sending the next API request. Apparently the value they return is not always accurate, so our upcoming patch is going to add an increasingly large buffer in between requests. Fingers crossed this alleviates your governance errors. I will update this thread when the patch goes live.
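    The behavior described above, honoring the wait time the API reports but padding it a little more on each consecutive throttling error, can be sketched roughly like this. It is a hedged illustration only; `send_request`, its return shape, and the buffer values are assumptions, not the actual SP-API fields or integrator.io code:

```python
import time

def call_with_padded_wait(send_request, buffer_step=0.5, max_attempts=5):
    """Retry throttled calls, sleeping for the wait time the API reports
    plus a buffer that grows with each consecutive throttling error."""
    for attempt in range(max_attempts):
        status, reported_wait, body = send_request()
        if status != 429:
            return status, body
        # The reported wait is not always accurate, so pad it more each time.
        time.sleep(reported_wait + buffer_step * attempt)
    raise RuntimeError("still throttled after retries")
```

    The difference from plain exponential backoff is that the server-reported interval stays the baseline; only the safety buffer grows.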

    Also, just an FYI that we refreshed the article (linked above) over the weekend to address the doc bug you pointed out, plus we made several other improvements to the content.

    0
  • Alex Baeza
    Answer Pro
    Celigo University Level 4: Legendary
    Awesome Follow-up

    Awesome news. Thanks for the update!

    0
  • Scott Henderson CTO
    Celigo University Level 1: Skilled
    Answer Pro
    Top Contributor

    FYI, the patch was released a couple days ago.

    0
  • Alex Baeza
    Answer Pro
    Celigo University Level 4: Legendary
    Awesome Follow-up

    Do I need to adjust any settings to take advantage of this new update? It sounds as if it just detects quota exceeded errors and backs off requests until no more errors are encountered.

    0
  • Scott Henderson CTO
    Celigo University Level 1: Skilled
    Answer Pro
    Top Contributor

    Yeah, there is nothing for you to do related to what we patched. We released better logic to automatically slow down requests based on the error info Amazon APIs return to us. You still need to make sure your concurrency levels are set to 1, though, to keep parallel requests minimal.

    0
  • Alex Baeza
    Answer Pro
    Celigo University Level 4: Legendary
    Awesome Follow-up

    I just ran a test to see how this new feature works, but I am still getting quota exceeded errors.

    Maybe the auto slow down logic is not slowing down fast enough?

    I would expect that after the first quota error against spapi_orderitems there would be an exponential delay. Instead, I see 127 auto-resolved errors (all quota exceeded), which means the slowdown was not enough to prevent the subsequent errors. I also see 2 quota exceeded errors in the error column that were not auto resolved.

    0
  • Scott Henderson CTO
    Celigo University Level 1: Skilled
    Answer Pro
    Top Contributor

    Amazon APIs tell us how long to wait before making the next request, and our recent patch was only to wait a little bit longer than they tell us. We did not implement any sort of exponential backoff or anything fancy like that in the patch.

    The auto-resolved errors are how we give you visibility that we still encountered the errors from the Amazon API, but were able to successfully navigate around them.

    I asked our QA team to look into why there were 2 quota exceeded errors that we were not able to auto navigate around.

    Also, it seems like your flow and this Amazon API are not a good match. That is, if this specific Amazon API is reporting this many governance errors while the design of your flow submits lots of API requests, then maybe there is a better way to build it altogether, with a fundamentally lower number of API requests. You could attend an office hours session to get ideas, or perhaps purchase paid consulting hours to see if there is a better way to do what you are trying to do. It is difficult to help at this level here in the community.

    0