If I am not mistaken, you can't speed up an individual HTTP request, but there are ways to speed up a batch of requests, namely issuing them asynchronously or caching the responses.
Below is an example of a function that prints all the release assets in a GitHub repository.
from github import Github

def get_release_assets(access_token, repository):
    github = Github(access_token)
    repo = github.get_repo(repository)
    # list() forces the PaginatedList to issue its HTTP requests
    releases = list(repo.get_releases())
    for release in releases:
        # One more round of requests per release
        assets = list(release.get_assets())
        print(assets)
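I call it like this (the token and repository name here are just placeholders):

get_release_assets("ghp_XXXXXXXXXXXX", "PyGithub/PyGithub")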
This function takes around 9 seconds to finish on a repository with a relatively small number of releases.
I am aware that you can use asynchronous functions to speed this up. However, I know that this repository won't be changing constantly, so I am thinking of caching the results of running the function so that the next run finishes almost instantly (a rough sketch of what I mean is below).
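The naive in-memory version is easy enough, e.g. memoizing the whole function with functools.lru_cache (a sketch; the function name and return shape are mine, and this cache only lives for one process and never expires):

from functools import lru_cache

from github import Github

@lru_cache(maxsize=None)
def get_release_assets_cached(access_token, repository):
    # Return the data instead of printing so the cached value is reusable
    github = Github(access_token)
    repo = github.get_repo(repository)
    return [
        (release.tag_name, [asset.name for asset in release.get_assets()])
        for release in repo.get_releases()
    ]

But that only helps within a single run of the program; what I actually want is to cache the underlying HTTP responses so the speedup persists across runs.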
I am having trouble caching the responses from GitHub, though, as the caching libraries I have tried don't seem to work for PyGitHub methods like get_releases. For example, the requests-cache library only works for methods in the requests library.
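For reference, this is roughly what I tried with requests-cache, installing a global cache before creating the Github client (the cache name, expiry, token, and repository are all placeholder values of mine):

import requests_cache
from github import Github

# Patches requests globally so responses are cached in a local SQLite file
# ("github_cache" is an arbitrary name; expire_after is in seconds)
requests_cache.install_cache("github_cache", expire_after=3600)

github = Github("ghp_XXXXXXXXXXXX")  # placeholder token
repo = github.get_repo("PyGithub/PyGithub")  # placeholder repository
# Still takes ~9 seconds on repeat runs, so these PyGitHub calls
# do not appear to go through the patched, cached session
releases = list(repo.get_releases())

Is there a way to make a response cache like this apply to PyGitHub's requests, or another approach to persistently cache these results?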