I would like to train my neural network continuously as new input keeps coming in. However, as new data arrive, the normalized values will change over time. Let's say that at time one I get:
df <- "Factor1 Factor2 Factor3 Response
10 10000 0.4 99
15 10200 0 88
11 9200 1 99
13 10300 0.3 120"
df <- read.table(text=df, header=TRUE)
normalize <- function(x) {
return ((x - min(x)) / (max(x) - min(x)))
}
dfNorm <- as.data.frame(lapply(df, normalize))
### Keep old normalized values
dfNormOld <- dfNorm
library(neuralnet)
nn <- neuralnet(Response~Factor1+Factor2+Factor3, data=dfNorm, hidden=c(3,4),
linear.output=FALSE, threshold=0.10, lifesign="full", stepmax=20000)
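For completeness, this is how I read predictions back on the original scale; denormalize() is my own helper that inverts the min-max scaling above:
### Predictions come out on the normalized scale; invert the scaling
### with the min/max of the raw Response column to get original units.
denormalize <- function(x, orig) {
  return (x * (max(orig) - min(orig)) + min(orig))
}
pred <- compute(nn, dfNorm[, c("Factor1", "Factor2", "Factor3")])
denormalize(pred$net.result, df$Response)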
Then, at time two, a new batch arrives:
df2 <- "Factor1 Factor2 Factor3 Response
12 10100 0.2 101
14 10900 -0.7 108
11 9800 0.8 120
11 10300 0.3 113"
df2 <- read.table(text=df2, header=TRUE)
### Bind all-time data
df <- rbind(df2, df)
### Normalize all-time data in one shot
dfNorm <- as.data.frame(lapply(df, normalize))
### Continue training the network with the most recent (normalized) data
Wei <- nn$weights
nn <- neuralnet(Response~Factor1+Factor2+Factor3, data=dfNorm[1:nrow(df2),], hidden=c(3,4),
linear.output=FALSE, threshold=0.10, lifesign="full", stepmax=20000, startweights = Wei)
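The drift is easy to see: the same raw row maps to different normalized values once df2 changes the per-column min and max (Factor3, for example, now spans -0.7 to 1 instead of 0 to 1). Using only the objects defined above:
### Same raw observation, two different normalized values:
### row 1 of the original df now sits at row nrow(df2) + 1 after the rbind.
dfNormOld[1, ]            # scaled with time-one min/max
dfNorm[nrow(df2) + 1, ]   # scaled with all-time min/max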
This is how I would train the network over time. However, I was wondering whether there is an elegant way to reduce the bias that this continual retraining introduces, given that the normalization constants (each column's min and max) will inevitably shift as new data arrive. My assumption here is that training on non-normalized, or inconsistently normalized, values would bias the network.
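One idea I have toyed with (just a sketch; normalizeFixed() is my own helper, not part of neuralnet) is to freeze the per-column min and max from a reference window and reuse them for every batch, so a given raw value always maps to the same normalized value:
### Sketch: freeze the scaling constants from the raw time-one rows
### (which sit below df2 after the rbind) and reuse them for all batches.
ref <- df[(nrow(df2) + 1):nrow(df), ]
scaleMin <- sapply(ref, min)
scaleMax <- sapply(ref, max)
normalizeFixed <- function(x, lo, hi) {
  return ((x - lo) / (hi - lo))
}
### Note: values outside the reference range (e.g. Factor3 = -0.7) now
### land outside [0, 1].
dfNormFixed <- as.data.frame(Map(normalizeFixed, df, scaleMin, scaleMax))
This keeps the mapping stable over time, but new extremes fall outside [0, 1], which the logistic output layer cannot reproduce for the response, so I am not sure it is the right trade-off.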