I am trying to scrape a certain website, let's call it "https://some-website.com". For the past few months I was able to do this without problems, but a few days ago I noticed the scraper no longer works: all requests return a 403 Forbidden status.
For the last 3 months I have been using the code below to scrape the data.
import requests
from fake_useragent import UserAgent
res = requests.get(<url>, headers={'User-Agent': UserAgent().random})
This always returned a nice 200 OK with the page I needed, until a few days ago, when I started getting a 403 Forbidden error. Somewhere in the response text I can spot the sentence "Enable JavaScript and cookies to continue".
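For reference, this is roughly how I inspect what comes back (the URL below is a placeholder for the real site):

import requests
from fake_useragent import UserAgent

url = "https://some-website.com"  # placeholder for the real site

res = requests.get(url, headers={'User-Agent': UserAgent().random})
print(res.status_code)            # 403
print(res.headers.get('Server'))  # whichever server / anti-bot layer answered
print(res.text[:500])             # block page containing "Enable JavaScript and cookies to continue"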
User-Agent issue
As you can see in the code, I already switch the User-Agent header randomly, which is the usual recommendation for fixing this kind of problem.
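To check whether specific User-Agent strings are the trigger rather than the randomization itself, something like the following sketch can log which UA was actually sent with each attempt (placeholder URL again, small request count to stay polite):

import requests
from fake_useragent import UserAgent

url = "https://some-website.com"  # placeholder for the real site
ua_source = UserAgent()

# Print which randomized User-Agent was sent and what status came back,
# to see whether particular UA strings correlate with the 403s.
for _ in range(5):
    ua = ua_source.random
    res = requests.get(url, headers={'User-Agent': ua})
    print(res.status_code, ua)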
IP issue
Naturally, I suspected they blacklisted my IP (maybe in combination with certain user agents) and no longer allow me to scrape. However, I implemented a solution that uses a proxy, and I still get a 403.
import requests
from fake_useragent import UserAgent

proxies = {
    "https": 'http://some_legit_proxy',
    "http": 'http://some_legit_proxy',
}

res = requests.get(<url>, headers={'User-Agent': UserAgent().random}, proxies=proxies)
The proxy is a residential proxy.
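To double-check that requests really go out through the proxy, I can verify the exit IP against a public echo service such as httpbin.org (the proxy address is a placeholder):

import requests

proxies = {
    "https": 'http://some_legit_proxy',  # placeholder for the real residential proxy
    "http": 'http://some_legit_proxy',
}

# httpbin.org/ip echoes back the IP it sees; it should be the proxy's IP, not mine.
res = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(res.json())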
Basic attempt actually works
What baffles me the most is that if I remove the random User-Agent part and use the default requests User-Agent, the scrape suddenly works.
import requests
res = requests.get(<url>) # 'User-Agent': 'python-requests/2.28.1'
# 200 OK
This tells me the website hasn't suddenly started requiring JavaScript, since the scrape does work; it seems they are somehow blocking me specifically.
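To confirm it really is just the User-Agent header and nothing else, this side-by-side check is the minimal reproduction of the behaviour (placeholder URL again):

import requests
from fake_useragent import UserAgent

url = "https://some-website.com"  # placeholder for the real site

# Identical requests, only the User-Agent differs.
default_res = requests.get(url)  # sends User-Agent: python-requests/2.28.1
random_res = requests.get(url, headers={'User-Agent': UserAgent().random})

print("default UA:", default_res.status_code)  # 200 in my case
print("random UA: ", random_res.status_code)   # 403 in my case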
I have a few ideas in mind to work around this, but since I don't understand how this is happening, I cannot be sure any of them will be scalable in the future.
Please help me understand what is happening here.