I would like to optimize the time it takes to retrieve stock prices. I am using the method suggested at http://blog.quanttrader.org/2011/03/downloading-sp-500-data-to-r/: I store a list of the index components in a .csv file and loop over it:
library(tseries)
library(timeDate)

# One-column .csv holding the index component symbols
symbols <- read.csv("/home/robo/workspace/R-Test/sp500.csv", header = F, stringsAsFactors = F)
nrStocks <- nrow(symbols)
dateStart <- "2000-01-01"

z <- zoo()
for (i in 1:nrStocks) {
  cat("Downloading ", i, " out of ", nrStocks, "\n")
  x <- get.hist.quote(instrument = symbols[i, ], start = dateStart,
                      quote = "AdjClose", retclass = "zoo", quiet = T)
  z <- merge(z, x)
}
This process takes quite some time. I remember there being another way of doing this by scraping the component list from an HTML page. There must be a more efficient, faster way.
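One variation I can think of (a sketch, not benchmarked; symbol names and start date are placeholders): keep the same get.hist.quote calls, but collect the series in a list and merge once at the end, so the growing z object is not re-merged on every iteration:

```r
library(tseries)

# Placeholder subset of symbols for illustration
symbols <- c("AAPL", "IBM", "CSCO")
dateStart <- "2000-01-01"

# Download each series into a list; a single merge at the end
# instead of re-merging the growing zoo object each iteration
quotes <- lapply(symbols, function(s)
  get.hist.quote(instrument = s, start = dateStart,
                 quote = "AdjClose", retclass = "zoo", quiet = TRUE))
z <- do.call(merge, quotes)
colnames(z) <- symbols
```
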
Thank you
ps: here is a good post on retrieving multiple symbols: Downloading Yahoo stock prices in R. My question differs in that I want the fastest way to run through a large quantity of symbols.
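If the quantmod route discussed in posts like that one applies, getSymbols accepts a whole vector of symbols in one call; a sketch, assuming quantmod is installed (symbols and start date are placeholders):

```r
library(quantmod)

symbols <- c("AAPL", "IBM", "CSCO")

# One call fetches all symbols; each series is assigned into env e
e <- new.env()
getSymbols(symbols, env = e, src = "yahoo", from = "2000-01-01")

# Pull the adjusted-close column of each series and merge into one object
z <- do.call(merge, lapply(symbols, function(s) Ad(get(s, envir = e))))
```
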
Reproducible example on a smaller subset of stocks:
library(tseries)
library(timeDate)

symbols <- c("AAPL", "IBM", "CSCO")
nrStocks <- length(symbols)
dateStart <- "2000-01-01"

z <- zoo()
for (i in 1:nrStocks) {
  cat("Downloading ", i, " out of ", nrStocks, "\n")
  x <- get.hist.quote(instrument = symbols[i], start = dateStart,
                      quote = "AdjClose", retclass = "zoo", quiet = T)
  z <- merge(z, x)
}
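Since most of the time is presumably spent waiting on the network, the per-symbol downloads could also run concurrently. A sketch assuming a POSIX system (parallel::mclapply forks worker processes, which is not available on Windows):

```r
library(parallel)
library(tseries)

symbols <- c("AAPL", "IBM", "CSCO")
dateStart <- "2000-01-01"

# Fork a couple of workers; each downloads one symbol's series
quotes <- mclapply(symbols, function(s)
  get.hist.quote(instrument = s, start = dateStart,
                 quote = "AdjClose", retclass = "zoo", quiet = TRUE),
  mc.cores = 2)
z <- do.call(merge, quotes)
```
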