I am reading about 6000 text files into memory with the following code in a loop:
void readDocs(const char *dir, char **array){
    DIR *dp = opendir(dir);
    struct dirent *ep;
    struct stat st;
    static uint count = 0;
    if (dp != NULL){
        while ((ep = readdir(dp)) != NULL){ // crawl through directory
            char name[strlen(dir) + strlen(ep->d_name) + 2];
            sprintf(name, "%s/%s", dir, ep->d_name);
            if (ep->d_type == DT_REG){ // regular file
                stat(name, &st);
                array[count] = (char*) malloc(st.st_size);
                int f;
                if ((f = open(name, O_RDONLY)) < 0) perror("open: ");
                read(f, array[count], st.st_size);
                if (close(f) < 0) perror("close: ");
                ++count;
            }
            else if (ep->d_type == DT_DIR && strcmp(ep->d_name, "..") && strcmp(ep->d_name, "."))
                // recurse into subdirectories
                readDocs(name, array);
        }
    }
}
In iteration 2826 I get a "Too many open files" error when opening the 2826th file.
No error occurred in the close operation up to this point.
Since it always fails in the 2826th iteration, I do not believe it is a matter of having to wait until a file is really closed after calling close().
I had the same issue using fopen, fread and fclose.
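Roughly, that variant looked like the following (a sketch from memory, not my exact code; same name, array, st and count variables as in the function above):

    FILE *fp = fopen(name, "rb");
    if (fp != NULL){
        array[count] = (char*) malloc(st.st_size);
        fread(array[count], 1, st.st_size, fp); // read the whole file at once
        if (fclose(fp) != 0) perror("fclose: ");
        ++count;
    } else {
        perror("fopen: ");
    }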
I don't think the surrounding context of this snippet matters here, but if you think it does, I will provide it.
Thanks for your time!
EDIT:
I put the program to sleep and checked /proc/<pid>/fd/ (thanks to nos). As you suspected, there were exactly 1024 open file descriptors, which I found to be the usual default limit (see the sketch after this list for how I count them).
+ I posted the whole function, which reads documents from a directory and all of its subdirectories.
+ The program runs on Linux! Sorry for forgetting to mention that!
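For reference, here is a small standalone sketch (Linux-specific; the function name printFdUsage is just made up, it is not part of the reader above) of how the descriptor usage can be checked from inside the program, by querying the soft limit with getrlimit and counting the entries in /proc/self/fd:

    #include <stdio.h>
    #include <dirent.h>
    #include <sys/resource.h>

    /* Prints the soft limit on open file descriptors and the number of
       descriptors the process currently has open (one entry per descriptor
       in /proc/self/fd, including the one used for this listing itself). */
    void printFdUsage(void){
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
            printf("descriptor limit (soft): %ld\n", (long) rl.rlim_cur);

        DIR *dp = opendir("/proc/self/fd");
        if (dp != NULL){
            int open_fds = 0;
            struct dirent *ep;
            while ((ep = readdir(dp)) != NULL)
                if (ep->d_name[0] != '.') // skip "." and ".."
                    ++open_fds;
            closedir(dp);
            printf("currently open descriptors: %d\n", open_fds);
        }
    }

Calling this every few hundred iterations shows how close the process is to the 1024 limit.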