
I'm facing an issue accessing this URL via Python code:

https://www1.nseindia.com/content/historical/EQUITIES/2021/JAN/cm01JAN2021bhav.csv.zip

This had been working for the last 3 years, until 31-Dec-2020. It seems the site has implemented some restrictions. There's a solution for a similar problem here:

VB NSE ACCESS DENIED. This addition is made:

"User-Agent" : "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11"
"Referer" : "https://www1.nseindia.com/products/content/equities/equities/archieve_eq.htm"

Original code is here : https://github.com/naveen7v/Bhavcopy/blob/master/Bhavcopy.py

It's not working even after adding the following in the requests section:

headers = {'user-agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11'}
##print (Path)
a = requests.get(Path, headers)
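One thing worth checking first, independent of the site's restrictions: in requests, the headers dict has to be passed by keyword. A minimal sketch (URL and header values taken from the question) that prepares the request without actually sending it:

```python
import requests

# Illustrative sketch, not the original script: the download URL from the
# question plus the two headers suggested in the VB answer.
path = ("https://www1.nseindia.com/content/historical/EQUITIES/"
        "2021/JAN/cm01JAN2021bhav.csv.zip")
headers = {
    "User-Agent": ("Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 "
                   "(KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11"),
    "Referer": ("https://www1.nseindia.com/products/content/equities/"
                "equities/archieve_eq.htm"),
}

# Note: requests.get(Path, headers) binds the dict to the second positional
# parameter, which is params, so no custom headers get sent at all.
# Passing headers=headers attaches them, as preparing the request shows:
req = requests.Request("GET", path, headers=headers).prepare()
print(req.headers["Referer"])
```

If this still returns 403, the block is on the server side, but at least the headers are actually leaving your machine.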

Can someone help?

  • Extracting info from NSE is not a good idea; there will always be issues like this. What exactly are you trying to do? I can suggest alternatives if there are any – keerthan kumar Jan 23 '21 at 09:26
  • Thanks for the response @keerthankumar. This is standard practice to extract daily data; it works more like automation vs downloading it manually. Let me know if you have any suggestions – g1p Jan 25 '21 at 11:05
  • You mean like bhavcopy? For these things, open an account with API-supporting brokers and use their APIs free of cost; this is what I'm doing – keerthan kumar Jan 25 '21 at 12:34
  • Thanks for the advice, Keerthan. I'm not sure this service is provided free of cost by brokers. – g1p Jan 27 '21 at 03:41

2 Answers


import io
import zipfile
import requests

# month_dict maps the month number in the date string to the abbreviation
# used in the NSE URL.
month_dict = {'01': 'JAN', '02': 'FEB', '03': 'MAR', '04': 'APR',
              '05': 'MAY', '06': 'JUN', '07': 'JUL', '08': 'AUG',
              '09': 'SEP', '10': 'OCT', '11': 'NOV', '12': 'DEC'}

def download_bhavcopy(formated_date):
    """Download the NSE derivatives bhavcopy for a 'DD-MM-YYYY' date string."""
    day, month_num, year = formated_date.split('-')
    url = "https://www1.nseindia.com/content/historical/DERIVATIVES/{0}/{1}/fo{2}{1}{0}bhav.csv.zip".format(
        year, month_dict[month_num], day)
    print(url)

    hdr = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.190 Safari/537.36',
           'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
           'Accept-Encoding': 'gzip, deflate, br',
           'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
           'Accept-Language': 'en-IN,en;q=0.9,en-GB;q=0.8,en-US;q=0.7,hi;q=0.6',
           'Connection': 'keep-alive',
           'Cache-Control': 'max-age=0',
           'Host': 'www1.nseindia.com',
           'Referer': 'https://www1.nseindia.com/products/content/derivatives/equities/fo.htm',
           }
    cookie_dict = {'bm_sv': 'E2109FAE3F0EA09C38163BBF24DD9A7E~t53LAJFVQDcB/+q14T3amyom/sJ5dm1gV7z2R0E3DKg6WiKBpLgF0t1Mv32gad4CqvL3DIswsfAKTAHD16vNlona86iCn3267hHmZU/O7DrKPY73XE6C4p5geps7yRwXxoUOlsqqPtbPsWsxE7cyDxr6R+RFqYMoDc9XuhS7e18='}
    session = requests.session()
    for cookie in cookie_dict:
        session.cookies.set(cookie, cookie_dict[cookie])

    response = session.get(url, headers=hdr)
    if response.status_code == 200:
        print('Success!')
    elif response.status_code == 404:
        print('Not Found.')
    else:
        print('response.status_code ', response.status_code)

    file_name = "none"
    try:
        zipT = zipfile.ZipFile(io.BytesIO(response.content))
        zipT.extractall()
        file_name = zipT.filelist[0].filename
        print('file name ' + file_name)
    except zipfile.BadZipFile:  # the downloaded bytes were not a valid zip
        print('Error: Zip file is corrupted')
    except zipfile.LargeZipFile:  # raised when Zip64 support is needed but not enabled
        print('Error: File size is too large')

    print(file_name)
    return file_name
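The helper above expects its argument as a 'DD-MM-YYYY' string (day, month number, year), which the first line of the function splits apart. A small sketch, standard library only, of building that string for a trading date (the variable names are illustrative):

```python
from datetime import date

# 1 January 2021 is the date from the question's example URL.
trade_date = date(2021, 1, 1)
formated_date = trade_date.strftime('%d-%m-%Y')
print(formated_date)  # 01-01-2021
```

With that string, `download_bhavcopy(formated_date)` would request the fo01JAN2021 archive.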

Inspect the link in your web browser's developer tools and find the GET request for the download. Go to Headers and check the User-Agent, e.g. User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:88.0) Gecko/20100101 Firefox/88.0

Now modify your code as:

import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:88.0) Gecko/20100101 Firefox/88.0'
}

result = requests.get(URL, headers=headers)
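Once the download succeeds, the zip can be unpacked in memory rather than saved to disk, as the first answer does. A self-contained sketch using stand-in bytes in place of `result.content` (the file name and CSV contents here are made up for illustration):

```python
import io
import zipfile

# Build a tiny in-memory zip to stand in for result.content from a real
# download; the member name mimics the bhavcopy naming scheme.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('cm01JAN2021bhav.csv', 'SYMBOL,OPEN,CLOSE\n')
payload = buf.getvalue()

# Extract the first member without touching disk.
with zipfile.ZipFile(io.BytesIO(payload)) as zt:
    member = zt.namelist()[0]
    csv_bytes = zt.read(member)
print(member)  # cm01JAN2021bhav.csv
```

Swapping `payload` for `result.content` gives the CSV bytes directly, ready for csv or pandas.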