
I have a Java 6 embedded HttpServer. It has a handler that allows clients to download a big text file. The problem is that when the server has more than 10 simultaneous clients, I get an out-of-memory exception. I'm pretty sure the problem is around the HttpServer.

   HttpServer m_server = HttpServer.create(new InetSocketAddress(8080), 0);
   m_server.createContext("/DownloadFile", new DownloadFileHandler() );

   public class DownloadFileHandler implements HttpHandler {

         private static byte[] myFile = new String("....................").getBytes(); //string about 8M

         @Override
         public void handle(HttpExchange exchange) throws IOException {
                exchange.sendResponseHeaders(HTTP_OK, myFile.length);
                OutputStream responseBody = exchange.getResponseBody();
                responseBody.write(myFile);
                responseBody.close();
         } 
   }

Now the exception I get is:

java.lang.OutOfMemoryError: Java heap space 
at java.nio.HeapByteBuffer.<init>(Unknown Source)
at java.nio.ByteBuffer.allocate(Unknown Source)
at sun.net.httpserver.Request$WriteStream.write(Unknown Source)
at sun.net.httpserver.FixedLengthOutputStream.write(Unknown Source) 
at java.io.FilterOutputStream.write(Unknown Source) 
at sun.net.httpserver.PlaceholderOutputStream.write(Unknown Source) 
at com.shunra.javadestination.webservices.DownloadFileHandler.handle(Unknown Source) 
at com.sun.net.httpserver.Filter$Chain.doFilter(Unknown Source) 
at sun.net.httpserver.AuthFilter.doFilter(Unknown Source) 
at com.sun.net.httpserver.Filter$Chain.doFilter(Unknown Source) 
at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(Unknown Source) 
at com.sun.net.httpserver.Filter$Chain.doFilter(Unknown Source)
at sun.net.httpserver.ServerImpl$Exchange.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Exception in thread "pool-1-thread-24" java.lang.OutOfMemoryError: 

The suggestion regarding getBytes() doesn't change the exception. I have tried holding a static reference to the byte[] instead of creating it each time, and I still get the same exception.

skaffman
Sophie
  • Please put some "code quotes" around your code. – kgautron Oct 26 '11 at 14:17
  • @Sophie this sounds like this could be 2 issues at once; a Java issue, and a thread-handling issue. How many connections does your embedded HTTP server allow? What HTTP server are you using? – bakoyaro Oct 26 '11 at 16:15
  • @Sophie also if you set debug=true when you are compiling your source, you should get the line numbers in your stack traces – bakoyaro Oct 26 '11 at 16:16
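On the connection-limit question above: com.sun.net.httpserver.HttpServer runs exchanges on whatever Executor it is given, so one way to bound concurrency is a fixed thread pool. This sketch is not from the thread; the pool size of 10 and the class name BoundedServer are illustrative choices:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import com.sun.net.httpserver.HttpServer;

public class BoundedServer {
    public static HttpServer start() throws IOException {
        // Bind to an ephemeral port here; the question uses 8080.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        // A fixed pool caps how many handle() calls run at once, so N
        // simultaneous downloads cannot all hold response buffers at the same time.
        server.setExecutor(Executors.newFixedThreadPool(10));
        server.start();
        return server;
    }
}
```

Bounding the pool only limits how many handlers run concurrently; it does not by itself fix a handler that buffers the whole file, but it puts a ceiling on total memory pressure.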

6 Answers

7

Do not do that for large files:

byte[] bytesToSend = myFile.getBytes();

This is inefficient, and you need heap space to store the whole file's data. You waste lots of heap space when you first read the file completely and afterwards write it completely.

Instead, read/write the file data in chunks of a specific size, directly from the file to the response. You can write the code yourself or just use a utility class like IOUtils from Apache Commons IO.

It is important not to read the whole file before writing it. Instead, do it in smaller chunks. Use streams here and avoid anything that deals with byte[], except for buffering in small chunks.

Edit: Here's some code with Apache IO...

public static void main(String[] args) {
    HttpExchange exchange = ...;
    OutputStream responseBody = null;

    try {
        File file = new File("big-file.txt");
        long bytesToSkip = 4711; //determine how many bytes to skip

        exchange.sendResponseHeaders(200, file.length() - bytesToSkip);
        responseBody = exchange.getResponseBody();
        skipAndCopy(file, responseBody, bytesToSkip);           
    }
    catch (IOException e) {
        // handle it
    }
    finally {
        IOUtils.closeQuietly(responseBody);
    }
}


private static void skipAndCopy(File src, @WillNotClose OutputStream dest, long bytesToSkip) throws IOException {
    InputStream in = null;

    try {
        in = FileUtils.openInputStream(src);

        IOUtils.skip(in, bytesToSkip);
        IOUtils.copyLarge(in, dest);
    }
    finally {
        IOUtils.closeQuietly(in);
    }
}
Fabian Barney
  • The suggestion regarding the getBytes() doesn't change the exception. I have tried to hold a static reference to byte[] instead of creating it each time. And I still get the same exception. – Sophie Oct 26 '11 at 14:58
  • @Sophie this guy's explaining it fine, just does not provide implementation details. Tarlog's solution was just wrong, he was reading out the whole file in a different place (the static field). – MarianP Oct 26 '11 at 15:19
  • @Sophie Do not deal with a byte[] holding the whole data. Not at any point in your application. Instead read the file in chunks and write them "on-the-fly" into the response. I recommend to have a look at `IOUtils`. – Fabian Barney Oct 26 '11 at 15:39
  • @Fatal - thank you for your comment. I have a single file which I load once. Then for each client I select an offset from the beginning of the file and return from that point on. So I have a problem reading "on the fly"; I always have a huge string which I need to manipulate. – Sophie Oct 26 '11 at 16:03
  • @Sophie Just skip the offset and then send the rest "on-the-fly". Again, I recommend having a decent look at `IOUtils`, which provides a skip() method for such purposes. Otherwise, if you really cannot avoid loading the whole file into memory, then you have to implement a service which limits these operations and is aware of the total memory allocated by them. When these exceed a threshold, the service has to block new requests until memory is freed by previously completed calls. But this is MUCH harder to implement, especially when you're not familiar with it. – Fabian Barney Oct 26 '11 at 16:15
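The skip-then-stream idea from this comment can also be done with plain JDK streams, without Commons IO. A minimal sketch, assuming the hypothetical helper name serveFrom and a 4 KB buffer (note that InputStream.skip() may skip fewer bytes than requested, so it must be looped):

```java
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class OffsetServe {
    // Stream a file to `out` starting at `offset`, never holding more
    // than one 4 KB buffer of the file in memory at a time.
    public static long serveFrom(File src, long offset, OutputStream out) throws IOException {
        InputStream in = new FileInputStream(src);
        try {
            long remaining = offset;
            while (remaining > 0) {
                long skipped = in.skip(remaining); // may skip fewer bytes than asked
                if (skipped <= 0) throw new EOFException("offset past end of file");
                remaining -= skipped;
            }
            byte[] buf = new byte[4096];
            long sent = 0;
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
                sent += n;
            }
            return sent;
        } finally {
            in.close();
        }
    }
}
```

In the handler you would call exchange.sendResponseHeaders(200, file.length() - offset) first, then pass exchange.getResponseBody() as `out`.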
5

If you retrieve all of the bytes for the file at once, it has to read all of them into memory before writing them to the response. Try something like:

InputStream in = new FileInputStream(myFile);
try{
    byte buffer[] = new byte[4096];
    int numberOfBytes = 0;
    while ((numberOfBytes = in.read(buffer)) != -1){
        responseBody.write(buffer, 0, numberOfBytes); // write only the bytes actually read
    }
}catch(Exception e){
    //TODO do something with the exception.
}finally{
    in.close();
}
ryanb
  • The suggestion regarding the getBytes() doesn't change the exception. I have tried to hold a static reference to byte[] instead of creating it each time. And I still get the same exception. – Sophie Oct 26 '11 at 14:59
  • Do you know what the best size for a buffer is? I always use "4096" too, but for no specific reason. – Michael Oct 26 '11 at 15:14
  • 3
    I always use 4096 for file operations because that is the default allocation size for many file systems. – ryanb Oct 26 '11 at 15:31
  • @Sophie also: try to flush the stream after writing to it. So it won't try to buffer the response. – Tarlog Oct 26 '11 at 16:02
4

Use streams so that you don't have to write all the data at once.

See getRequestBody and getResponseBody. You'll want to open your file as a stream and write the bytes to the appropriate stream.
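Jeff's stream-to-stream idea can be sketched with a plain JDK copy loop; the class name StreamCopy and the 4096-byte buffer size are illustrative choices, not part of the answer:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public final class StreamCopy {
    // Copy in fixed-size chunks: only `buffer.length` bytes of the source
    // live on the heap at any time, regardless of how large the file is.
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[4096];
        long total = 0;
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read); // write only the bytes actually read
            total += read;
        }
        return total;
    }
}
```

With a helper like this, the handler would call sendResponseHeaders with file.length() first, then copy a FileInputStream into exchange.getResponseBody().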

Jeff Foster
4

With large amounts of data like this, it's best to stream the data. Streaming means that you send the data in chunks instead of sending it all at once. This is more memory-efficient because you don't have to store all the data in memory, just pieces of it.

Also, a more generic way of returning the file data is to use a regular InputStream instead of a Reader.

  • InputStream: used for reading any kind of data
  • Reader: used for reading text data

Using an InputStream means you don't have to worry about character encodings. It also makes your code more flexible because it allows you to send binary files too.

Here is a complete solution:

OutputStream responseBody = null;
InputStream in = null;
try{
  File file = new File("bigggggg-text-file.txt");
  in = new FileInputStream(file);
  exchange.sendResponseHeaders(HTTP_OK, file.length());
  responseBody = exchange.getResponseBody();
  int read;
  byte buffer[] = new byte[4096];
  while ((read = in.read(buffer)) != -1){
    responseBody.write(buffer, 0, read);
  }
} catch (FileNotFoundException e){
  //uh-oh, the file doesn't exist
} catch (IOException e){
  //uh-oh, there was a problem reading the file or sending the response
} finally {
  if (in != null){
    in.close();
  }
  if (responseBody != null){
    responseBody.close();
  }
}
Michael
0

The problem in your code is that myFile.getBytes() creates a new array for each request.

You can simply improve it by holding the byte array instead of String:

      private static byte[] bytesToSend = "....................".getBytes(); //string about 8M

     @Override
     public void handle(HttpExchange exchange) throws IOException {
            exchange.sendResponseHeaders(HTTP_OK, bytesToSend.length);
            OutputStream responseBody = exchange.getResponseBody();
            responseBody.write(bytesToSend);
            responseBody.close();
     } 

Btw, both this code and your code use getBytes(). This means it will use the default platform encoding, which is not good practice. It's better to call it with an explicit encoding, like getBytes("UTF-8").
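Tarlog's encoding note can be sketched as follows; EncodingDemo is a hypothetical helper, and the string-based getBytes("UTF-8") overload is used rather than StandardCharsets because the question targets Java 6:

```java
import java.io.UnsupportedEncodingException;

public class EncodingDemo {
    public static byte[] encode(String s) throws UnsupportedEncodingException {
        // Explicit charset: produces the same bytes on every platform,
        // unlike the no-arg getBytes(), which uses the platform default.
        return s.getBytes("UTF-8");
    }
}
```

The difference only shows up for non-ASCII text: "é" is one byte in Latin-1 but two bytes in UTF-8, so the declared Content-Length can be wrong if the encoding is left to chance.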

Another note: I corrected your code assuming it's your real code. In case your logic is more complex, e.g. you allow downloading multiple files, it's better to use streaming: read the input file in chunks and send the chunks to the requester. Don't keep too many chunks in memory.

Tarlog
  • The suggestion regarding the getBytes() doesn't change the exception. I have tried to hold a static reference to byte[] instead of creating it each time. And I still get the same exception. – Sophie Oct 26 '11 at 14:59
  • I think you misunderstood the problem. It lies not in losing empty memory with more requests, it is a problem of a single file that is just too big. – MarianP Oct 26 '11 at 15:22
  • @MarianP - But it works for 5-7 clients with file that big. So not sure that file is the problem. – Sophie Oct 26 '11 at 15:29
  • @Sophie it might be combination of those too. But the right solution is to use proper streams as other guys suggest anyway. – MarianP Oct 26 '11 at 15:32
  • @Sophie It's possible that the `responseBody` OutputStream stores data in its own buffer, which might be why you were still getting OutOfMemory errors even after using a statically-defined byte array. – Michael Oct 26 '11 at 15:35
  • @MarianP Sorry, but if the problem is as described, meaning one file of 8MB, I disagree: reading it using streams is a stupid overhead on modern computers. Only writing it may need streaming. – Tarlog Oct 26 '11 at 15:52
  • @Sophie did you change the JVM memory settings? By default JVM runs with 64MB of heap, which may be not enough in your case. – Tarlog Oct 26 '11 at 15:52
  • @Tarlog I disagree. Even with the large amounts of memory computers have today, 8MB is a lot and can add up if you are storing multiple, similarly-sized files in memory. – Michael Oct 26 '11 at 16:08
  • @Michael multiple - yes. Look at my answer, I don't suggest to store multiple files in memory. But if it's a single file, with many clients, I see no reason why not to cache it. – Tarlog Oct 26 '11 at 16:11
  • @Tarlog 8MB is quite a lot for a web server, and doing it all at once scales badly; JVMs are also set to use quite little memory by default (often just 256MB). Edit some character in your answer so that I can un-downvote it; it was a bit of a hasty decision on my side, as the answer is not that evil in light of the further discussion. – MarianP Oct 26 '11 at 16:26
0

Do not convert the whole String into bytes at once:

Writer writer = new OutputStreamWriter(responseBody, someEncoding);
try {
  writer.write(myFile);
}
finally {
  writer.close();
}
ysdx