I'm still a novice with Python, and using multiprocessing is a big undertaking for me.
So my question is: how do I speed up crawling the comments sections of YouTube videos with the YouTube API while using multiprocessing?
This project needs to crawl the comments of several hundred thousand videos in a limited time. I understand that multiprocessing is used with normal scraping methods such as BeautifulSoup/Scrapy, but what about when I use the YouTube API?
If I use the YouTube API (which requires API keys) to crawl the data, can multiprocessing distribute the work across multiple keys, or will it reuse the same key over and over again for different tasks?
To simplify: is it possible to use multiprocessing with code that relies on API keys, instead of normal scraping methods that don't require any?
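To make the question concrete, here is the kind of structure I have in mind: a rough sketch, not working code. `fetch_comments` is a placeholder (a real version would call the YouTube Data API via google-api-python-client), and the keys are dummy values.

```python
import itertools
from multiprocessing import Pool

# Dummy placeholder keys -- you would substitute your own API keys.
API_KEYS = ["KEY_A", "KEY_B", "KEY_C"]


def assign_keys(video_ids, keys):
    """Pair every video ID with an API key, cycling through the keys
    so requests (and quota usage) are spread evenly across them."""
    return list(zip(itertools.cycle(keys), video_ids))


def fetch_comments(job):
    """Placeholder worker. A real crawler would build a client with the
    assigned key, roughly like:

        youtube = googleapiclient.discovery.build(
            "youtube", "v3", developerKey=key)
        youtube.commentThreads().list(
            part="snippet", videoId=video_id).execute()
    """
    key, video_id = job
    return (video_id, f"fetched with {key}")


def crawl(video_ids, keys=API_KEYS, workers=4):
    # Each task carries its own key, so different processes can use
    # different keys instead of all sharing one.
    jobs = assign_keys(video_ids, keys)
    with Pool(processes=workers) as pool:
        return pool.map(fetch_comments, jobs)


if __name__ == "__main__":
    print(crawl([f"video{i}" for i in range(6)]))
```

Would something along these lines work, or does the API client break when used across processes like this?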
Does anyone have any ideas?