
I am using an OBJ Loader library that I found on the 'net and I want to look into speeding it up.

It works by reading an .OBJ file line by line (it's basically a text file with lines of numbers).

I have a 12 MB OBJ file that equates to approx. 400,000 lines. Suffice to say, it takes forever to read line by line.

Is there a way to speed it up? It uses a BufferedReader to read the file (which is stored in my assets folder).
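Roughly, the read loop looks like this (a sketch on my part, not the library's exact code; the file name and method name are made up). One cheap tweak is passing a buffer size larger than BufferedReader's 8 KB default:

import android.content.res.AssetManager;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

void loadObj(AssetManager assets) throws IOException {
    // 64 KB buffer instead of the 8 KB default: fewer underlying reads
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(assets.open("model.obj")), 64 * 1024);
    try {
        String line;
        while ((line = reader.readLine()) != null) {
            // parse "v " / "vt " / "vn " / "f " lines here
        }
    } finally {
        reader.close();
    }
}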

Here is the link to the library: click me

Mr Pablo
  • BufferedReader is actually a good approach, and is used, for example, by Files.readAllLines (Java 7). I presume the time is lost elsewhere, such as inefficient buffering. Have you tried to analyze where the time is spent? – Carsten Dec 22 '14 at 12:12
  • basically, do not use the OBJ file; use your own format that is ready to load "as is" into memory (as vertices/indices) – see the sketch after these comments. – Selvin Dec 22 '14 at 12:12
  • Just to add to this, I'm also using a test OBJ file that is much smaller, at 10,000 lines. This takes approx. 20 seconds to load, which is terrible :( I would have thought it'd be much faster than that! – Mr Pablo Dec 22 '14 at 12:12
  • @Selvin how would I go about doing something like that? (I'm just kind of bashing this together) – Mr Pablo Dec 22 '14 at 12:13
  • So it turns out that in debug mode it's terribly slow loading just 10,000 lines. Without debug, it takes 3 seconds. Not too bad. I tried with the 400,000-line file and got an error, as the code uses Short numbers in the Vectors. I tried to change them all to Long, but I had errors in the TDModelPart class. – Mr Pablo Dec 22 '14 at 23:09
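For what it's worth, a minimal sketch of Selvin's suggestion; the binary file layout (an int vertex count followed by the raw floats, written by a hypothetical one-off converter using DataOutputStream) and the method name are assumptions, not part of the library. Loading becomes one bulk read instead of hundreds of thousands of readLine()/parse calls:

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Reads vertex positions from a precomputed binary file: an int count,
// then count * 3 big-endian floats (x, y, z), matching DataOutputStream.
FloatBuffer loadVertices(InputStream in) throws IOException {
    DataInputStream data = new DataInputStream(in);
    int vertexCount = data.readInt();
    byte[] raw = new byte[vertexCount * 3 * 4]; // 3 floats of 4 bytes per vertex
    data.readFully(raw);
    // BIG_ENDIAN here matches the writer; for OpenGL ES you would usually
    // copy this into a direct, native-order buffer afterwards.
    return ByteBuffer.wrap(raw).order(ByteOrder.BIG_ENDIAN).asFloatBuffer();
}

The indices could be stored and read the same way.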

1 Answer


Just an idea: you could first get the size of the file using the File class:

import java.io.File;

File file = new File(sdcard, "sample.txt"); // sdcard: a File for the storage directory (assumed defined)
long size = file.length(); // length in bytes

The size returned is in bytes. From there:

  • Divide the file size into a manageable number of chunks (e.g. 5, 10, 20, etc.), recording the byte range of each chunk.
  • Create a byte array the same size as each chunk, then "assign" each chunk to a separate worker thread, which reads its chunk into its byte array using the read(buffer, offset, length) method, i.e. read "length" bytes into the array "buffer" starting at index "offset".
  • Convert the bytes into characters.
  • Concatenate all the arrays to get the complete file contents.
  • Add checks on the chunk boundaries so no thread overlaps the others' ranges.

Again, this is just an idea; a rough sketch follows below.
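Here is a rough sketch of that idea, with some caveats that are assumptions on my part: the file must live on seekable storage (an asset would first need copying out to a file), the chunk boundaries here are byte-aligned rather than line-aligned (lines straddling two chunks would still need stitching before parsing), and on a single flash device parallel reads may not beat one sequential read:

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

byte[] readInChunks(File file, int chunkCount) throws Exception {
    long size = file.length();
    byte[] result = new byte[(int) size];                 // whole file ends up here
    long chunkSize = (size + chunkCount - 1) / chunkCount;
    Thread[] workers = new Thread[chunkCount];
    for (int i = 0; i < chunkCount; i++) {
        final long start = i * chunkSize;
        final int length = (int) Math.max(0, Math.min(chunkSize, size - start));
        workers[i] = new Thread(() -> {
            try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
                raf.seek(start);                            // jump to this chunk's byte range
                raf.readFully(result, (int) start, length); // fill the shared array in place
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
        workers[i].start();
    }
    for (Thread t : workers) t.join(); // wait for every chunk
    return result;                     // convert to chars/lines afterwards
}

Whether this beats one well-buffered sequential read is worth measuring first; the comments above suggest most of the time went to parsing and debug-mode overhead rather than the raw I/O.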

Peter