
Facebook seems to be experiencing a bug with their rate limits. The bug report has been open for several days at the time of writing, and it severely affects the client base of the developers involved.

The request limit seems sporadic and is not in line with the documentation. The actual throttling seems to have increased drastically, allowing only a percentage of the requests we can make compared to "normal". Several people seem to be affected:

https://developers.facebook.com/support/bugs/169774397034403/

Does anyone have any workarounds, suggestions, or insights to alleviate this problem?

The original bug report submitted:

Our application has been encountering the "GraphAPIError: (#4) Application request limit reached" error on and off for the past several days. Our application monitors several of our users' accounts and pulls data for every FB Page; for the past few years it has made a number of API calls to gather metrics on those accounts, which would normally complete in less than two hours every day.

On May 25th, we were able to make 1% of the API calls we normally make over a 24-hour period due to the application rate limit. On May 26th, we got 3% of the API calls we normally make over a 24-hour period due to the same application rate limit. Then for the 27th-29th it went back to normal: in less than 2 hours we were able to make 100% of the API calls we normally make, with no errors. Then on the 30th we were able to make 33% of the normal API calls, and thus far today, the 31st, we have been able to make 1% of the normal API calls.

Nothing has changed on our end, and there is no reason why we should only be able to make 1% of the API calls we normally make on some days and not on others, especially since our application has been doing the exact same thing for several years now. Any assistance is appreciated.

Riaan van Zyl

5 Answers

2

So we also are having issues with rate limits.

Our solution is twofold.

Step one: for clients who are consistently running into rate limits (because they only have one daily active user but manage hundreds of pages), we are adding users (employee users) to the app. Since our app is for scheduling posts, we have scheduled posts for each of these 'new' users to go out each day. This bumps the app's daily active user count, resulting in more throughput from the API.

The longer-term solution is that we are building a new service to manage all of the API calls. It will analyze the app's throughput, throttle API calls as needed, and provide reporting insight into which calls are being made and by which customer/app, so we can better optimize the calls going out.

It's easy to just install an SDK and go to town, but it looks like that just isn't going to cut it anymore.
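
In the meantime, here is a minimal sketch of what such a centralized call gateway could look like (Python, the hourly budget, and the API version are my own illustrative assumptions, not our actual AWS implementation):

```python
import threading
import time
from collections import defaultdict, deque

import requests

GRAPH_URL = "https://graph.facebook.com/v3.0"   # API version is an assumption


class GraphCallGateway:
    """Routes every Graph API call through one place so calls can be
    throttled per customer and reported on later (sketch only)."""

    def __init__(self, max_calls_per_hour=200):
        self.max_calls_per_hour = max_calls_per_hour   # assumed budget, tune per app
        self.calls = defaultdict(deque)                # customer_id -> call timestamps
        self.lock = threading.Lock()

    def _wait_for_slot(self, customer_id):
        """Block until this customer is under its hourly budget."""
        while True:
            with self.lock:
                window = self.calls[customer_id]
                now = time.time()
                while window and now - window[0] > 3600:   # drop calls older than 1h
                    window.popleft()
                if len(window) < self.max_calls_per_hour:
                    window.append(now)
                    return
            time.sleep(5)   # budget exhausted; wait before re-checking

    def get(self, customer_id, path, access_token, params=None):
        """Make a throttled GET against the Graph API."""
        self._wait_for_slot(customer_id)
        params = dict(params or {}, access_token=access_token)
        return requests.get(f"{GRAPH_URL}/{path}", params=params, timeout=30)
```

A real version would also persist which calls were made for which customer to feed the reporting side, but the throttling idea is the same.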

  • +1 on creating a dedicated service to handle API calls; it seems this is becoming a must-have to circumvent issues like this. – Riaan van Zyl Jun 25 '18 at 12:13
  • @RiaanvanZyl thanks. I know it's an ambiguous declaration, so I thought I'd share a bit more of how I'm going about it. We are going to leverage an Event Bus architecture using AWS to abstract all the calls from the core platform, and then use dedicated micro-services to analyze and process the API calls. These guys have an amazing set of tools for something like this: [https://www.leoplatform.io](https://www.leoplatform.io) – Don Rzeszut Jun 25 '18 at 22:17
1

My solution:

Because we were only accessing the page/{page-id} endpoint, we calculated the number of new posts per request and delayed the next request for that same resource.

So if we queried the API and received 1 new item out of 100 total items, we would significantly increase the wait time before that same resource (page-id) is called again.

When we receive a response that's closer to "full", e.g. 90 new items out of 100, we only increase the wait time slightly. This way we don't waste requests on "stale" data.

We also made sure to only call our "priority pages", reducing the total number of items competing for requests.
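
A rough sketch of that back-off logic (the thresholds, multipliers, and bounds below are illustrative assumptions, not our exact values):

```python
import time

MIN_WAIT = 600        # 10 minutes, assumed lower bound
MAX_WAIT = 6 * 3600   # 6 hours, assumed upper bound


def next_wait(current_wait, new_items, total_items):
    """Grow the polling interval for a page based on how 'fresh' the last
    response was: mostly stale -> back off hard, mostly new -> back off
    only slightly."""
    if total_items == 0:
        return min(current_wait * 2, MAX_WAIT)
    freshness = new_items / total_items
    if freshness < 0.1:        # e.g. 1 new item out of 100
        factor = 2.0           # significantly increase the wait
    elif freshness < 0.5:
        factor = 1.5
    else:                      # e.g. 90 new items out of 100
        factor = 1.1           # only a slight increase
    return max(MIN_WAIT, min(current_wait * factor, MAX_WAIT))


# usage (per page): wait[page_id] = next_wait(wait[page_id], new_count, len(items))
```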

Notes:

  • The Rate-Limit widget on the Facebook App Dashboard does not reflect the responses from the API:

[Screenshot: the App Dashboard rate-limit widget]

  • Even though the dashboard has not reflected the limits, we do receive the notifications:

{Application Name} has reached 100% of the hourly rate limit. All API calls to your app will fail until your app falls below the throttling limit.

  • According to the documentation, code 4 is specific to App Tokens:

https://developers.facebook.com/docs/graph-api/advanced/rate-limiting

  • Inspecting the response headers reveals the cause to be the "total_time" metric (requests were made exactly 10 seconds apart, until we received a 403 response); see the sketch after these notes:

[Screenshot: the rate-limit response headers, with total_time at its limit]
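
For completeness, the app-level usage comes back in the X-App-Usage response header as a small JSON object of percentages (call_count, total_time, total_cputime). A minimal sketch of checking it, assuming the Python requests library and placeholder ids/tokens:

```python
import json

import requests


def app_usage(response):
    """Parse the X-App-Usage header Facebook sends back on Graph API calls.
    Each field is the percentage of the hourly budget already consumed."""
    header = response.headers.get("x-app-usage")
    return json.loads(header) if header else None


resp = requests.get(
    "https://graph.facebook.com/v3.0/<page-id>/posts",   # placeholder page id
    params={"access_token": "<app-access-token>"},       # placeholder token
    timeout=30,
)
usage = app_usage(resp)
if usage and usage.get("total_time", 0) >= 100:
    # total_time is the metric that tripped the 403s in our case
    print("Throttled on total_time:", usage)
```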

Riaan van Zyl
  • I tried that, but it ends up happening again. My app makes a call every 10 minutes. I increased the gap to 15 and it worked. Several days later it started failing again, so I increased it to 30, then 45, then an hour. It only works temporarily. – Zerquix18 Jun 21 '18 at 13:08
  • Are you also getting limited on the total_time @Zerquix18? – Riaan van Zyl Jun 21 '18 at 13:20
1

My application regularly queries the posts for several of our own pages as well as our competitors' pages. (These are media-website Facebook pages linking to news articles; we like to compare the posts and their performance against the competition.)

What I've done to reduce the issue is use the app token for the competitors' posts, but a page-specific token for our own pages' posts. This significantly reduced the number of calls made on the app token, so the rate limit kicks in much less often.
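
Roughly, the split looks like this (a minimal sketch; the requests library, API version, and endpoint are illustrative assumptions, not necessarily my exact code):

```python
import requests

GRAPH = "https://graph.facebook.com/v3.0"        # API version is an assumption

APP_TOKEN = "<app-access-token>"                 # placeholder
OWN_PAGE_TOKEN = "<page-access-token>"           # placeholder


def fetch_posts(page_id, token):
    """Fetch recent posts for a page using whichever token is passed in."""
    resp = requests.get(
        f"{GRAPH}/{page_id}/posts",
        params={"access_token": token, "limit": 100},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])


# Competitor pages: app token, so these calls count against the app-level limit.
competitor_posts = fetch_posts("<competitor-page-id>", APP_TOKEN)

# Our own pages: page access token, so these calls count against the
# page-level limit instead, taking pressure off the app token.
own_posts = fetch_posts("<our-page-id>", OWN_PAGE_TOKEN)
```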

SVerhulst
    For more information: https://developers.facebook.com/blog/post/2016/06/16/page-level-rate-limits/ – Sam Jun 20 '18 at 15:55
0

Our application is having the same problem. Here is some (totally) empirical evidence I was able to gather. Our application gets data (posts and comments) from certain public pages. We use an app token (not a user token).

The rate limit error #4 always seems to happen when we try to get 2nd-level comments, that is, comments underneath other comments. It also occasionally happens when we try to get the reactions to a comment (even 1st-level comments).

Again, this is totally empirical evidence, but it would be good to hear whether other people can replicate these findings.

Reinaldo
0

This is what worked for me. If I limit my script to 200 API calls every 3650 seconds, it will run to completion. These numbers seem to be close to the best I can do. If I gradually increase the number of API calls or gradually reduce the number of seconds, the script starts to fail intermittently. If I change them too much, the script fails consistently.

This probably means that some scripts won't be able to complete in a day. Fortunately, mine completes in a couple of hours.
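
For reference, the throttle itself can be as simple as the sketch below (the actual API call is left as a stub; 200 and 3650 are the numbers from above):

```python
import time

MAX_CALLS = 200          # calls per window that ran reliably for me
WINDOW_SECONDS = 3650    # just over an hour


def run_throttled(jobs, do_api_call):
    """Run do_api_call(job) for every job, never exceeding
    MAX_CALLS calls per WINDOW_SECONDS."""
    window_start = time.time()
    calls_in_window = 0
    for job in jobs:
        if calls_in_window >= MAX_CALLS:
            remaining = WINDOW_SECONDS - (time.time() - window_start)
            if remaining > 0:
                time.sleep(remaining)        # wait out the rest of the window
            window_start = time.time()
            calls_in_window = 0
        do_api_call(job)                     # stub: the actual Graph API call
        calls_in_window += 1
```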

Zampano