
I want to write a small Python program that automatically downloads the list of stock symbols from the New York Stock Exchange every day.

I have found that this data can be obtained in CSV format by pointing my browser here: http://www.nasdaq.com/screening/companies-by-industry.aspx?exchange=NYSE&render=download

But how do I get this data via curl from the bash shell? Doing the following doesn't work:

% curl http://www.nasdaq.com/screening/companies-by-industry.aspx?exchange=NYSE&render=download

Really I need to find a way to get this data into my python program. If I can do it with curl from the bash shell, I can then easily convert it to PyCurl. But how to do it? Is there a better way than PyCurl?

Saqib Ali
  • Why don't you use `urllib`? – t.m.adam Apr 02 '18 at 05:47
  • The first reason your `curl` command didn't work is that you can't use certain special characters, like `&`, in an argument without quoting the whole argument. Try `curl 'http://www.nasdaq.com/screening/companies-by-industry.aspx?exchange=NYSE&render=download'`. – abarnert Apr 02 '18 at 05:59
  • 1
    Meanwhile, "doesn't work" is something nobody can debug. You may get some answers telling you to do something completely different that, if you're lucky, happens to be what you want, but you're not going to get any help understanding why your existing code is wrong unless you tell us what's wrong—incorrect output, an error message, whatever, with the appropriate details. See the [mcve] section in the help. – abarnert Apr 02 '18 at 06:01
  • abarnert's answer was correct about the bash shell intercepting the ampersand. Dumb mistake. Sorry everyone. – Saqib Ali Apr 02 '18 at 07:01
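The quoting problem described in the comments can be reproduced with a plain `echo`, no network needed (the example.com URL here is just an illustration):

```shell
# Unquoted: the shell treats & as its job-control operator, so the
# command is backgrounded and render=download never reaches echo
echo http://example.com/page.aspx?exchange=NYSE&render=download

# Quoted: the whole URL is passed to the command as one argument
echo 'http://example.com/page.aspx?exchange=NYSE&render=download'
```

The same single quotes fix the original curl invocation.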

2 Answers


You can use urllib.request and the csv module as shown below (Python 3).

import csv
import urllib.request

url = 'http://www.nasdaq.com/screening/companies-by-industry.aspx?exchange=NYSE&render=download'

resp = urllib.request.urlopen(url)
# csv.reader expects an iterable of lines, not one big string,
# so split the decoded body into lines first
cr = csv.reader(resp.read().decode('utf-8').splitlines())
for row in cr:
    print(row)
Braden Holt

The requests library can do this.

pip3 install requests

Here is an example.

import requests

def download(file_url, output_path):
    r = requests.get(file_url)
    r.raise_for_status()  # fail loudly on HTTP errors
    # Stream the response body to disk in chunks
    with open(output_path, 'wb') as fd:
        for chunk in r.iter_content(chunk_size=10*1024*1024):
            fd.write(chunk)

download("http://www.nasdaq.com/screening/companies-by-industry.aspx?exchange=NYSE&render=download", "stock_symbols.csv")
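If you'd rather parse the response in memory than save it to disk, the csv module works on the downloaded text directly; a minimal sketch, where the sample string stands in for the real response body (`r.text` from the requests call above):

```python
import csv
import io

# Stand-in for the CSV text returned by the exchange-listing URL
sample = "Symbol,Name\nIBM,International Business Machines\nGE,General Electric\n"

# csv.reader needs an iterable of lines; io.StringIO provides one
rows = list(csv.reader(io.StringIO(sample)))
print(rows[1])  # ['IBM', 'International Business Machines']
```

The first row is the header; the rest are one list of fields per listed company.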
haolee