
I need help integrating JOSS into existing code. My code uses the Consumer feature of Java 8.

Consumer<? super GHRepository> action = repo -> {
    try {
        if (github.getRateLimit().remaining > 0) {
            // Append this repository's dump to the output file.
            Files.write(this.path, (repo.toString() + "\n").getBytes(), StandardOpenOption.APPEND);
            totalIteration++; // instance field: a lambda cannot mutate a local variable
        } else {
            logger.info("Time to pause for " + (github.getRateLimit().reset.getTime() - new Date().getTime()) + " ms");
            // Wait until the rate limit has reset.
            do {
                Thread.sleep(60000);
            } while (github.getRateLimit().reset.after(new Date()));
        }
    } catch (Exception e) {
        logger.error("Error writing to the file: " + e.getMessage());
    }
};

This code works fine, but the disk space available on the machine is not enough, so I need to write the file directly to an OpenStack container.

I've read in the docs that JOSS uses this method to upload a file:

   StoredObject object = container.getObject("dog.png");
   object.uploadObject(new File("/dog.png"));
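
For context, a minimal sketch of how the `container` handle is typically obtained in JOSS (the credentials, auth URL, and container name here are placeholders):

    AccountConfig config = new AccountConfig();
    config.setUsername("user");                          // placeholder credentials
    config.setPassword("password");
    config.setAuthUrl("http://example.com/v2.0/tokens"); // placeholder auth endpoint
    config.setTenantName("tenant");
    Account account = new AccountFactory(config).createAccount();
    Container container = account.getContainer("my-container");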

This is the method to upload a file that has already been written to disk, but I need to write the data directly to the container. The uploadObject method can also take an InputStream as a parameter, so I want to use that. But I don't know how to integrate it with my existing code. Can you help me?

Scandinave

1 Answer


OK, I found the way:

// Files.write returns the Path it wrote to, so it can be chained
// directly into Files.newInputStream for the upload.
Path written = Files.write(this.path, (repo.toString() + "\n").getBytes(), StandardOpenOption.APPEND);
object.uploadObject(Files.newInputStream(written));
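
Note that this still stages everything in a local file. For illustration, a minimal sketch of skipping the disk entirely by buffering in memory, reusing the `github`, `logger`, and `container` handles from above (the object name and the re-upload-the-whole-buffer strategy are assumptions; Swift objects cannot be appended to, so each upload replaces the previous content):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.function.Consumer;

    import org.javaswift.joss.model.StoredObject;
    import org.kohsuke.github.GHRepository;

    // Accumulates the repository dumps in memory instead of on disk.
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    StoredObject object = container.getObject("repositories.txt"); // hypothetical name

    Consumer<? super GHRepository> action = repo -> {
        try {
            if (github.getRateLimit().remaining > 0) {
                buffer.write((repo.toString() + "\n").getBytes(StandardCharsets.UTF_8));
                // Swift has no append operation: each upload replaces the
                // object's content, so the whole buffer is re-sent every time.
                object.uploadObject(new ByteArrayInputStream(buffer.toByteArray()));
                totalIteration++;
            }
            // ... rate-limit pause handling unchanged ...
        } catch (Exception e) {
            logger.error("Error writing to the container: " + e.getMessage());
        }
    };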
Scandinave
  • There seems to be a problem with the JOSS Swift API InputStream implementation in that while it exposes an InputStream as part of the API, it does not provide the means to control the read block size and apparently it just does a read(all bytes from stream) implementation - which defeats a big purpose of using the stream in the first place (requires heap memory equal to the sum of all files concurrently uploaded). – Darrell Teague May 10 '16 at 18:44
  • @DarrellTeague No. I checked the code base for a recent release and they seem to copy with `IOUtils.copy(entity.getContent(), output);` from Apache `commons-io`, which uses a 4096-byte buffer size that I believe is optimal. – Raja Anbazhagan May 05 '17 at 17:50
  • The public API is only exposing a stream and not allowing read block size control. Whilst the server will (apparently) only use up to 4K per-read, it is not controllable nor does it provide an interceptor. The requirement concern is stopping the upload at some arbitrary 'max' size (without trusting the HTTP request headers of the sender). An implementation was written around this eventually using a PipedInputStream model that checks the total bytes uploaded with a configurable buffer size: https://docs.oracle.com/javase/7/docs/api/java/io/PipedInputStream.html – Darrell Teague May 09 '17 at 16:35
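
For illustration, a minimal sketch of that PipedInputStream model (the byte cap, chunk source, and error handling are assumptions, not the actual implementation referenced above):

    import java.io.IOException;
    import java.io.PipedInputStream;
    import java.io.PipedOutputStream;

    import org.javaswift.joss.model.StoredObject;

    public class PipedUploadSketch {

        // Hypothetical cap on the total bytes accepted from the sender.
        private static final long MAX_UPLOAD_BYTES = 10L * 1024 * 1024;

        public static void upload(StoredObject object, Iterable<byte[]> chunks) throws IOException {
            PipedInputStream in = new PipedInputStream(4096); // configurable pipe buffer
            PipedOutputStream out = new PipedOutputStream(in);

            // Producer thread: writes chunks into the pipe while counting bytes,
            // stopping once the configured maximum is exceeded. Only the pipe
            // buffer (not the whole upload) sits in heap at any moment.
            Thread producer = new Thread(() -> {
                long total = 0;
                try {
                    for (byte[] chunk : chunks) {
                        total += chunk.length;
                        if (total > MAX_UPLOAD_BYTES) {
                            throw new IOException("upload exceeds " + MAX_UPLOAD_BYTES + " bytes");
                        }
                        out.write(chunk);
                    }
                } catch (IOException e) {
                    // A real implementation would propagate the error; here the
                    // pipe is simply closed below, ending the stream early.
                } finally {
                    try {
                        out.close();
                    } catch (IOException ignored) {
                    }
                }
            });
            producer.start();

            // JOSS reads from the pipe on this thread as the producer fills it.
            object.uploadObject(in);
        }
    }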