
I'm building an application using PHP, HTML & JavaScript that accesses a user's Facebook data and does some analysis on the information returned. It requires making approximately 15 to 30 requests to the Graph API, depending on how much data a user has in their profile.

I have started by ensuring that all of the data is accessible and the wait time for the scraping is not too extensive. I have used the straightforward Server-side Flow (PHP) example from the Authentication page in the Facebook documentation and have now got a bunch of PHP scripts that I am optimising. Currently I just load the page and wait for the long PHP script to execute. Not ideal.

I have realised that from a front end perspective, after the user has authenticated the FB app, there will ideally not be a page refresh and the user should not have to wait whilst the page is continually loading (i.e. waiting for the long execution of a PHP script).

Therefore my question is: should I use the same PHP scripts that I've already written and (after user authentication) kick the scripts off with an AJAX request (then use AJAX to poll for completion), or should I rewrite the server-side logic in JavaScript and do the whole thing using the Facebook JavaScript SDK & AJAX?
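To make the first option concrete, here is a minimal sketch of the kick-off-and-poll pattern I'm considering. The endpoint names and the `{ done, progress }` response shape are placeholders, not code I've actually written:

```javascript
// Minimal sketch of the kick-off-and-poll pattern.
// checkFn stands in for an AJAX call to a status endpoint
// (e.g. a status.php that reports on the long-running job).
async function pollStatus(checkFn, { intervalMs = 1000, maxAttempts = 30 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = await checkFn(); // e.g. GET /status.php returning { done, progress }
    if (status.done) return status; // analysis finished; render results
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for the PHP job to finish");
}

// In the real app, checkFn would be something like:
//   () => fetch("/status.php?job=" + jobId).then((res) => res.json())
```

The PHP script that does the scraping would be started by a separate AJAX request and would record its progress somewhere the status endpoint can read (e.g. the session or a database row).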

In my opinion there is something about having the application rely on the client making a ton of HTTP requests that seems very flaky to me, plus I'd rather not rewrite everything I've already done! =)

Apologies for the long message. Trying to be as explicit as possible.

Thanks in advance, gfte


2 Answers


I had a lot of trouble with the server-side flow (PHP → cURL) on my app with 30,000+ DAUs. Try coding it in both languages (PHP and JS) and test it yourself to see which is faster for the app user. If possible, use JS instead of PHP for better performance.

mr_app

Why not write the script in PHP but call it with AJAX, and make sure your script provides user feedback on the current progress? It would still only be a couple of extra HTTP requests, and it would reuse the code you have already written.

It doesn't really matter how long the request takes as long as the user knows what is going on.
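For example, the client side of the progress feedback could be as simple as this sketch. The `/progress.php` endpoint and the `{ completed, total }` response shape are assumptions for illustration, not part of your existing code:

```javascript
// Turn a { completed, total } progress response into a user-facing message.
// The endpoint name and response shape are assumed for this sketch.
function formatProgress({ completed, total }) {
  const percent = total > 0 ? Math.round((completed / total) * 100) : 0;
  return `Fetched ${completed} of ${total} Graph API requests (${percent}%)`;
}

// Typical usage while polling:
//   const res = await fetch("/progress.php?job=" + jobId);
//   progressEl.textContent = formatProgress(await res.json());
```

On the PHP side, the scraping script would just update a counter (in the session or a database) after each Graph API request, and `/progress.php` would read it back.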

Rick Burgess
  • Hi Rick, this is the method I went with in the end, and it is working OK at the moment with a small group of testers. However, the scale of the data scraping has increased, and for one user there are now approximately 40 to 50 cURL requests. I may open another thread for this, but my concern now is how the server will cope with, for example, 200 or more simultaneous users all making 40 or 50 requests. If you or anyone reading this thread has any idea about this, it would be appreciated; otherwise I will submit this as a new question. – gfte Feb 24 '12 at 14:59