I have to work with Amazon MQ, which is based on ActiveMQ. I found some code that should put a blob message (a PDF of about 230 kB) on a queue, but when I run the program it fails with the stack trace below.

This is my code:

import java.io.File;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQSession;
import org.apache.activemq.BlobMessage;

private final static String WIRE_LEVEL_ENDPOINT = "ssl://<examplednsname>-1.amazonaws.com:61617";
private final static String ACTIVE_MQ_USERNAME = "test123";
private final static String ACTIVE_MQ_PASSWORD = "test123";

public static void sendFileViaQueue(String uri, String queueName) throws JMSException {
    File file = new File("test.pdf");
    ConnectionFactory connectionFactory = null;
    Connection connection = null;
    Session session = null;
    BlobMessage blobMsg = null;
    MessageProducer producer = null;
    try {
        connectionFactory = new ActiveMQConnectionFactory(ACTIVE_MQ_USERNAME, ACTIVE_MQ_PASSWORD, WIRE_LEVEL_ENDPOINT);
        connection = connectionFactory.createConnection();
        connection.start();
        // transacted session (the acknowledge mode is ignored for transacted sessions)
        session = connection.createSession(Boolean.TRUE, Session.AUTO_ACKNOWLEDGE);
        producer = session.createProducer(session.createQueue(queueName));
        // BlobMessage is ActiveMQ-specific, so the session has to be cast
        blobMsg = ((ActiveMQSession) session).createBlobMessage(file);
        blobMsg.setStringProperty("FILE.NAME", file.getName());
        blobMsg.setLongProperty("FILE.SIZE", file.length());
        producer.send(blobMsg);
        session.commit();
    } finally {
        closeQuietly(producer);
        closeQuietly(session);
        closeQuietly(connection);
    }
}

It seems like it wants to upload something to port 8080, but I didn't configure anything locally. All it should do is upload a PDF to a queue, that's it.

Does anybody have an idea how to fix this? It shouldn't be that complicated to just upload a blob to a queue.

This is the stack trace I am getting:

javax.jms.JMSException: PUT failed to: http://localhost:8080/uploads/ID:bpSligro-PC-50920-1584558692848-1:1:1:1:1
    at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:72)
    at org.apache.activemq.command.ActiveMQBlobMessage.onSend(ActiveMQBlobMessage.java:177)
    at org.apache.activemq.ActiveMQSession.send(ActiveMQSession.java:1952)
    at org.apache.activemq.ActiveMQMessageProducer.send(ActiveMQMessageProducer.java:288)
    at org.apache.activemq.ActiveMQMessageProducer.send(ActiveMQMessageProducer.java:223)
    at org.apache.activemq.ActiveMQMessageProducerSupport.send(ActiveMQMessageProducerSupport.java:241)
    at nl.bpittens.mq.AmazonMQExample.sendFileViaQueue(AmazonMQExample.java:81)
    at nl.bpittens.mq.AmazonMQExample.main(AmazonMQExample.java:52)
Caused by: java.io.IOException: PUT failed to: http://localhost:8080/uploads/ID:bpSligro-PC-50920-1584558692848-1:1:1:1:1
    at org.apache.activemq.blob.DefaultBlobUploadStrategy.uploadStream(DefaultBlobUploadStrategy.java:67)
    at org.apache.activemq.blob.DefaultBlobUploadStrategy.uploadFile(DefaultBlobUploadStrategy.java:44)
    at org.apache.activemq.blob.BlobUploader.upload(BlobUploader.java:53)
    at org.apache.activemq.command.ActiveMQBlobMessage.onSend(ActiveMQBlobMessage.java:174)
    ... 6 more
Caused by: java.net.ConnectException: Connection refused: connect
    at java.net.DualStackPlainSocketImpl.connect0(Native Method)
    at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at java.net.Socket.connect(Socket.java:538)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
    at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
    at sun.net.www.http.HttpClient.New(HttpClient.java:339)
    at sun.net.www.http.HttpClient.New(HttpClient.java:357)
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1220)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
    at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984)
    at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1334)
    at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1309)
    at org.apache.activemq.blob.DefaultBlobUploadStrategy.uploadStream(DefaultBlobUploadStrategy.java:60)
    ... 9 more

Sending a normal JMS TextMessage works without a problem.

1 Answer

As noted in the documentation, a "blob" message:

allows massive BLOBs (Binary Large OBjects) to be sent around in some out-of-band transport mechanism. Possible out-of-band mechanisms could be HTTP or FTP or SCP or some other point-to-point protocol.

Notice that the actual binary data must be sent "in some out-of-band transport mechanism." In other words, the blob doesn't actually go to the queue. The blob is uploaded somewhere else and the message that goes to the queue simply points to that location.

You need to configure the transfer policy using the jms.blobTransferPolicy.uploadUrl parameter on the client URL. The default upload URL of the default transfer policy is http://localhost:8080/uploads/, which is what your client is trying to use to upload the binary data.
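
For example, a minimal sketch (assuming you run an HTTP server of your own that accepts PUT and that both producer and consumer can reach; the host name below is just a placeholder):

// Point the blob transfer policy at an upload server you control.
// "https://my-upload-server.example.com/uploads/" is a placeholder, not a real endpoint.
String brokerUrl = WIRE_LEVEL_ENDPOINT
        + "?jms.blobTransferPolicy.uploadUrl=https://my-upload-server.example.com/uploads/";
ActiveMQConnectionFactory factory =
        new ActiveMQConnectionFactory(ACTIVE_MQ_USERNAME, ACTIVE_MQ_PASSWORD, brokerUrl);

// Equivalently, the policy can be set on the factory directly:
// factory.getBlobTransferPolicy().setUploadUrl("https://my-upload-server.example.com/uploads/");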

If you want to send an arbitrarily large message directly to a queue rather than using some out-of-band mechanism, consider moving to ActiveMQ Artemis, which supports that functionality.

If you're stuck with Amazon MQ then I don't think you have any option other than a manual approach where you break the file into smaller chunks, put each chunk into an individual message, and re-assemble the chunks later in the consuming application.
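
A minimal sketch of the producing side of such a chunking approach could look like the following (the FILE.NAME, CHUNK.INDEX, and CHUNK.COUNT properties are just illustrative conventions for the consumer, not an ActiveMQ feature):

import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

import javax.jms.BytesMessage;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;

private static final int CHUNK_SIZE = 1024 * 1024; // 1 MB per message, adjust to taste

public static void sendFileInChunks(Session session, MessageProducer producer, File file)
        throws JMSException, IOException {
    int chunkCount = (int) ((file.length() + CHUNK_SIZE - 1) / CHUNK_SIZE);
    try (DataInputStream in = new DataInputStream(new FileInputStream(file))) {
        for (int index = 0; index < chunkCount; index++) {
            // the last chunk may be shorter than CHUNK_SIZE
            int length = (int) Math.min(CHUNK_SIZE, file.length() - (long) index * CHUNK_SIZE);
            byte[] chunk = new byte[length];
            in.readFully(chunk);

            BytesMessage msg = session.createBytesMessage();
            msg.writeBytes(chunk);
            msg.setStringProperty("FILE.NAME", file.getName());
            msg.setIntProperty("CHUNK.INDEX", index);
            msg.setIntProperty("CHUNK.COUNT", chunkCount);
            producer.send(msg);
        }
    }
    session.commit(); // with a transacted session the consumer sees all chunks or none
}

On the consuming side you would collect messages with the same FILE.NAME until CHUNK.COUNT chunks have arrived and write them out in CHUNK.INDEX order.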

  • I think if I want to upload a file like this, then this is the wrong solution. It seems that nobody is using this, because there is no information about it on the internet. If you use queueing, you do so for a reason, for example to get a reliable messaging solution; working with upload servers is not reliable. So my question is: if I want to upload big files, for example PDF files, to Amazon MQ, what is the best way to do so? – Ben Mar 19 '20 at 06:08
  • The best solution is the one that works for you and meets your requirements. – Tim Bish Mar 19 '20 at 13:11
  • Hi, thanks all for your comments. I think Amazon MQ is not the right platform for us if the requirements are sending big XML files > 100 MB as well as images and PDFs. Am I right with this conclusion? I have also read that the maximum message size on a queue is 32 MB in Amazon MQ and ActiveMQ. – Ben Mar 19 '20 at 15:30
  • If your requirements include the ability to send files > 100 MB and you don't want to use the blob message's out-of-band transfer mechanism, then I don't think Amazon MQ would be a good fit. As I mentioned before, ActiveMQ Artemis would be a good fit, though, as it supports arbitrarily large messages. – Justin Bertram Mar 19 '20 at 15:48
  • Thanks Justin, but if my info is right, Artemis is not implemented by Amazon MQ? – Ben Mar 19 '20 at 16:54
  • That's correct. Amazon MQ does not implement ActiveMQ Artemis. If it did, then you wouldn't be having this problem in the first place. Amazon MQ implements ActiveMQ 5.x. ActiveMQ Artemis is the next generation ActiveMQ broker and is planned to become ActiveMQ 6 when it's ready. – Justin Bertram Mar 19 '20 at 17:06
  • Hi @JustinBertram, I tried to upload an XML file of about 40 MB and the upload seems OK, but if I try to open the message by its ID in the ActiveMQ web console my server crashes. It restarts in failover mode so the second server becomes the active one. Is this the expected behaviour? I have chosen a t2.micro EC2 instance in AWS. – Ben Mar 19 '20 at 18:25
  • We're diverging from the original question at this point. Please start a new question with a full description of the issue. – Justin Bertram Mar 19 '20 at 18:55
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/209947/discussion-between-ben-and-justin-bertram). – Ben Mar 19 '20 at 19:18