The server tries to prevent web scraping by default by denying requests that lack a User-Agent header. Instead of returning the success status code 200, it returns 403 Forbidden.
These are the headers I typically use for requests to mimic the browser:
WEB_HDRS = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36',
    'Accept': 'text/html,text/plain,application/xhtml+xml,application/xml,application/json;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Charset': 'Windows-1252,utf-8;q=0.7,*;q=0.3',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'en-US,en;q=0.8',
    'Connection': 'keep-alive'
}
Then pass the headers keyword argument to the aiohttp.ClientSession call, as described in the docs:

session = aiohttp.ClientSession('https://www.hepsiburada.com/', headers=WEB_HDRS)
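Putting it together, here is a minimal sketch of how I use those headers with aiohttp. The `fetch` helper and the abridged header dict are illustrative, not from the aiohttp docs; every request made through the session carries the headers automatically:

```python
import asyncio

import aiohttp

# Abridged version of the WEB_HDRS dict above; use the full dict in practice.
WEB_HDRS = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36',
    'Accept-Language': 'en-US,en;q=0.8',
    'Connection': 'keep-alive',
}

async def fetch(url: str) -> str:
    # Headers passed at session creation apply to every request on the session.
    async with aiohttp.ClientSession(headers=WEB_HDRS) as session:
        async with session.get(url) as resp:
            resp.raise_for_status()  # raises ClientResponseError on 403, etc.
            return await resp.text()

# Example usage (hits the network, so run it only when you mean to):
# html = asyncio.run(fetch('https://www.hepsiburada.com/'))
```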
See this related question for how other folks create sessions.