
I'm using Eventbrite's API to power a website with their events, but I'm stuck on how to implement part of the project.

Currently, I want to filter events by date and show them on the front page; however, the Eventbrite API limits results to 50 items per page, and this request returns 86 pages of 50 items each.

How can I go about presenting all of those events at once while lowering latency? What I currently have takes 104 seconds to respond and render the results to the page. I'm only scratching the surface of Big O notation theory, but I think I'm using O(n²).

Here's the code:

routes.py:

from flask import Flask, render_template
from eventbrite import Eventbrite
from datetime import datetime

import requests, json

app = Flask(__name__)


# TODO: Remove oauth key from file and register in sys
oKey = '<oauth-key>'

o_token_url = '&token={0}'.format(oKey)

@app.route("/")
def index():

    base_url = 'https://www.eventbriteapi.com/v3/'
    regions_url = base_url + 'system/regions/?token={0}'.format(oKey)
    regions_result = requests.get(regions_url)
    regions_obj = regions_result.json()

    e_url = base_url + 'events/search/?token={0}&location.address=<city>&start_date.range_start=2018-07-02T12%3A00%3A00Z&page=2'.format(oKey)
    e_result = requests.get(e_url)
    e_obj = e_result.json()

    list_events = []

    start_time = datetime.now()

    for pg in range(1, e_obj['pagination']['page_count'] + 1):
        print(pg)

        e_url = base_url + 'events/search/?token={0}&location.address=<city>&start_date.range_start=2018-07-02T12%3A00%3A00Z&page={1}'.format(oKey, pg)
        e_result = requests.get(e_url)
        e_obj = e_result.json()

        for i in e_obj['events']:
            an_event = {}
            an_event['name'] = i['name']['text']
            list_events.append(an_event)

    print(datetime.now() - start_time)
    return render_template('index.html', events=list_events)

if __name__ == "__main__":
    app.run(host='0.0.0.0')

index.html:

{% for event in events %}
{{event['name']}}<br>

{% endfor %}

I'm aware of the Eventbrite API's batched requests endpoint, but I can't wrap my head around how to send the list of URL dictionaries with requests.
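Here's as far as I've gotten in understanding it: if I'm reading the docs right, you POST a single `batch` form field holding a JSON-encoded list of method/relative_url dictionaries. Treat the field names and endpoint path below as my assumptions from the docs, not verified API details:

```python
import json

def build_batch(page_count):
    # One GET entry per search page; as I understand it, relative_url
    # is resolved against the /v3/ base path by the batch endpoint.
    return [{'method': 'GET',
             'relative_url': 'events/search/?location.address=<city>&page={0}'.format(p)}
            for p in range(1, page_count + 1)]

def batch_payload(page_count):
    # The whole list goes JSON-encoded into a single `batch` form field.
    return {'batch': json.dumps(build_batch(page_count))}
```

You would then send it with something like `requests.post('https://www.eventbriteapi.com/v3/batch/?token=' + oKey, data=batch_payload(86))`, though again, the `/batch/` path is my assumption.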

Any tips on how to optimize the code above would be appreciated. I figure I could limit the number of events shown at any given time, but I'm interested in how others would approach this issue.

LtWorf
limaBEAN
  • For event in events is an O(n) operation :) – Mars Oct 17 '18 at 02:53
  • 1
    For page in range, for event in events would be an O(u*v) operation, but in this case, we know events is <50, so it's pretty much a constant, again bringing you back to an O(n) operation. – Mars Oct 17 '18 at 02:55
  • But that aside, the title has nothing to do with what you want. What you're asking is for how to use EB's batched requests, or how else you can parallelize your O(n) operation or how you can reduce the size of your (n) – Mars Oct 17 '18 at 02:57
  • 1
    Talking you big O notation is not going to bring you more near roba solution. In long runtime of your code is not caused by any iteration or code complexity. It is just what the batchef requests to the API need. If you want to speed it up, you will have to cache the API response locally in some way. – Klaus D. Oct 17 '18 at 03:20
  • I think I'm getting a better grasp of things. So in reality, the 104 seconds it takes to load 4000+ events is not due to the operations, but has everything to do with the number of requests going on. Will definitely take a look at caching or batched requests. Thank you. – limaBEAN Oct 18 '18 at 01:34

1 Answer


It takes a long time not because it's a CPU-intensive operation (which could be solved with algorithmic optimization), but because each HTTP request takes time. To mitigate that, you can make the requests concurrent (i.e. simultaneous). To further reduce latency, you can re-use TCP connections, so you don't waste time creating a new TCP connection for each HTTP request.
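A minimal sketch of both ideas, assuming the search URL from the question (the helper names and the worker count are arbitrary choices): a thread pool fans the page requests out concurrently, and a single shared session object provides the connection re-use.

```python
from concurrent.futures import ThreadPoolExecutor

BASE_URL = 'https://www.eventbriteapi.com/v3/'

def fetch_page(session, token, page):
    # One search request; a shared session re-uses the TCP connection
    # underneath instead of opening a fresh one per request.
    url = (BASE_URL + 'events/search/?token={0}'
           '&location.address=<city>'
           '&start_date.range_start=2018-07-02T12%3A00%3A00Z'
           '&page={1}').format(token, page)
    return session.get(url).json()

def fetch_all_events(session, token, page_count, max_workers=10):
    # Fetch all pages concurrently, then flatten them into the same
    # list-of-dicts shape the template expects.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        pages = pool.map(lambda p: fetch_page(session, token, p),
                         range(1, page_count + 1))
        return [{'name': e['name']['text']}
                for page in pages for e in page['events']]
```

Call it with a `requests.Session`, which pools and re-uses TCP connections, e.g. `with requests.Session() as s: events = fetch_all_events(s, oKey, 86)`.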

Fine
  • Thank you for the 2 python references. Re-use TCP connections... That's very interesting. I'll probably look deeper into concurrent requests at the moment, however, will also review the TCP bit. Thanks. – limaBEAN Oct 18 '18 at 01:37
  • Have you reduced the lagging time? – Rochan Jan 15 '21 at 10:48
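Following up on the caching suggestion from the question's comments: a minimal sketch of a time-based in-process cache, so the site only re-fetches the API after a chosen interval instead of on every page load. The names and the 10-minute TTL here are arbitrary choices, nothing Eventbrite-specific.

```python
import time

_cache = {'events': None, 'fetched_at': 0.0}
CACHE_TTL = 600  # seconds; hit the API again at most every 10 minutes

def get_events(fetch, now=time.time):
    # `fetch` is whatever callable actually hits the API; it only runs
    # when the cached copy is missing or older than CACHE_TTL.
    if _cache['events'] is None or now() - _cache['fetched_at'] > CACHE_TTL:
        _cache['events'] = fetch()
        _cache['fetched_at'] = now()
    return _cache['events']
```

In the Flask route, `render_template('index.html', events=get_events(load_from_api))` would then serve cached results for repeat visitors, where `load_from_api` is whatever function does the actual paging.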