I have written a Java program to scrape through huge log files. To process the files in parallel, I am using a thread pool. Below is the source code:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Fixed pool of 8 worker threads shared by all file-processing tasks
ExecutorService threadPool = Executors.newFixedThreadPool(8);
for (int i = 0; i < files.size(); i++)
{
    // One task per file; the index and filename are joined into a single string
    threadPool.execute(new ProcessInThreads(i + "`" + files.get(i), fr) {
        public void run()
        {
            long threadId = Thread.currentThread().getId();
            Initiate(fr, threadId);
        }
    });
}
// Stop accepting new tasks; already-submitted tasks continue to run
threadPool.shutdown();
When files.size() is 300, the program completes execution within minutes, but as files.size() increases, performance degrades. What could be the reason, and how can I overcome it?
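One thing I am unsure about: as far as I know, shutdown() only stops the pool from accepting new tasks and does not wait for the submitted tasks to finish. If that affects how the run time should be measured, I assume waiting would look roughly like this (just a sketch; the timeout value is arbitrary and TimeUnit comes from java.util.concurrent):

threadPool.shutdown();
try {
    // Block until all submitted tasks finish or the (arbitrary) timeout elapses
    if (!threadPool.awaitTermination(2, TimeUnit.HOURS)) {
        threadPool.shutdownNow(); // interrupt tasks that are still running
    }
} catch (InterruptedException e) {
    threadPool.shutdownNow();
    Thread.currentThread().interrupt(); // preserve the interrupt status
}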
Here files is a list of the filenames to be processed; populating it takes less than 10 seconds.
ProcessInThreads is a class that implements the Runnable interface. If the loop runs for 500 files, will 500 instances of ProcessInThreads be created? How can I kill/release each instance after its execution?
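For context, ProcessInThreads is roughly structured as below. This is only a simplified sketch: the field names, the FileWriter type used for fr, and the empty run() body are placeholders, not the real code.

import java.io.FileWriter;

public class ProcessInThreads implements Runnable {
    private final String indexedFileName; // e.g. "42`server.log", as built in the loop above
    private final FileWriter fr;          // placeholder type; the actual type of fr is not shown here

    public ProcessInThreads(String indexedFileName, FileWriter fr) {
        this.indexedFileName = indexedFileName;
        this.fr = fr;
    }

    public void run() {
        // the actual log parsing happens here (or in the run() overridden by the anonymous subclass)
    }
}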