
We are using Azure IoT Hub as the backend for our IoT solution. We noticed an unexpected spike in message volume during one day, and we exceeded our daily limit of 400k messages.

From the client application's logs we couldn't find anything out of the ordinary: based on the logs, the client kept sending data despite the daily limit being exceeded.

So my question is: what happens when the client application sends data using `DeviceClient.SendEventBatchAsync` when the daily message limit is already exceeded? Are the messages just dropped in cold blood, even though there were no exceptions? Or is something happening that I have failed to catch? We are using C# .NET Microsoft.Azure.Devices.Client version 1.19.0 and sending the data over the MQTT protocol.
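For reference, here is a minimal sketch of how we send the data (the connection string and payload here are placeholders):

```csharp
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class TelemetrySender
{
    static async Task Main()
    {
        // Placeholder connection string; the real one comes from our configuration.
        DeviceClient client = DeviceClient.CreateFromConnectionString(
            "HostName=...;DeviceId=...;SharedAccessKey=...",
            TransportType.Mqtt);

        var batch = new List<Message>
        {
            new Message(Encoding.UTF8.GetBytes("{\"temperature\": 21.5}"))
        };

        // According to our logs this call kept completing without any exception,
        // even after the daily message quota had been exceeded.
        await client.SendEventBatchAsync(batch);
    }
}
```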

2 Answers


According to the docs:

> To accommodate burst traffic, IoT Hub accepts requests above the throttle for a limited time. The first few of these requests are processed immediately. However, if the number of requests continues to violate the throttle, IoT Hub starts placing the requests in a queue and processing them at the limit rate. This effect is called traffic shaping. Furthermore, the size of this queue is limited. If the throttle violation continues, eventually the queue fills up, and IoT Hub starts rejecting requests with a 429 ThrottlingException.

> For example, you use a simulated device to send 200 device-to-cloud messages per second to your S1 IoT Hub (which has a limit of 100/sec D2C sends). For the first minute or two, the messages are processed immediately. However, since the device continues to send more messages than the throttle limit, IoT Hub begins to process only 100 messages per second and puts the rest in a queue. You start noticing increased latency. Eventually, you start getting a 429 ThrottlingException as the queue fills up, and the "number of throttle errors" metric in IoT Hub starts increasing.

So yes, messages get queued until IoT Hub starts throwing exceptions once the queue is full. You should reduce the number of messages, and consider choosing an MQTT library that supports client-side batching in case there's burst data.
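If you stay on the device SDK, a rough sketch of backing off when the hub starts rejecting requests could look like this (`SendWithBackoffAsync` is a hypothetical helper, and the batch is rebuilt on each attempt because a `Message` body may not be resendable once it has been consumed):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Microsoft.Azure.Devices.Client.Exceptions;

static class ThrottleAwareSender
{
    // Hypothetical helper: retries with exponential backoff once IoT Hub
    // starts rejecting requests with 429 ThrottlingException.
    public static async Task SendWithBackoffAsync(
        DeviceClient client,
        Func<IEnumerable<Message>> batchFactory,
        int maxRetries = 5)
    {
        TimeSpan delay = TimeSpan.FromSeconds(1);
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                // Rebuild the batch each attempt; a Message body may not be
                // re-readable after a failed send.
                await client.SendEventBatchAsync(batchFactory());
                return;
            }
            catch (IotHubThrottledException) when (attempt < maxRetries)
            {
                // The hub-side queue is full; back off and try again.
                await Task.Delay(delay);
                delay = TimeSpan.FromTicks(delay.Ticks * 2);
            }
        }
    }
}
```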

Jonny Lin
  • Thanks for your answer! However, our problem is clearly just a rise in device and data volume; it's not about burst data. Batching messages won't do any good here, since they are still counted against the daily message limit. – Jesse Ikola Jul 01 '19 at 05:45

Your MQTT device should be disconnected, and Send and Receive operations are blocked for this hub until the next UTC day.

In the case of using the HTTPS protocol, the following response is sent by Azure IoT Hub:

```json
{
  "Message": "{\"errorCode\":403002,\"trackingId\":\"c41eb2a0f7764132aa31a7f3ff97a1ce-G:3-TimeStamp:06/20/2019 12:36:43\",\"message\":\"Total number of messages on IotHub 'xxxxxxxxx' exceeded the allocated quota. Max allowed message count : '8000', current message count : '8448'. Send and Receive operations are blocked for this hub until the next UTC day. Consider increasing the units for this hub to increase the quota.\",\"timestampUtc\":\"2019-06-20T12:36:43.5570129Z\"}",
  "ExceptionMessage": ""
}
```

That's for the F1 scale tier, and it should be the same behavior for any scale tier once the daily quota of Send/Receive message operations has been exceeded.
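On the device side, a minimal sketch of reacting to that state could look like the following (this assumes the asker's C# SDK, where `QuotaExceededException` from `Microsoft.Azure.Devices.Client.Exceptions` is, as far as I know, the exception surfaced for error code 403002):

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Microsoft.Azure.Devices.Client.Exceptions;

static class QuotaAwareSender
{
    // Returns false when the hub reports that the daily quota is exhausted.
    public static async Task<bool> TrySendAsync(DeviceClient client, string payload)
    {
        try
        {
            await client.SendEventAsync(new Message(Encoding.UTF8.GetBytes(payload)));
            return true;
        }
        catch (QuotaExceededException)
        {
            // Send/Receive is blocked until the next UTC day, so stop sending
            // instead of hammering the hub with requests that will be rejected.
            DateTime now = DateTime.UtcNow;
            DateTime nextUtcDay = now.Date.AddDays(1);
            Console.WriteLine(
                $"Quota exceeded; resume after {nextUtcDay:u} " +
                $"({(nextUtcDay - now).TotalHours:F1} h from now).");
            return false;
        }
    }
}
```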

Roman Kiss
  • I went through more log files and saw that some of our devices were receiving `TimeoutException` when trying to send more data while the IoT Hub message limit was hit for the day. What's strange here is that this clearly didn't affect all of our devices, since some of them were able to keep sending data. – Jesse Ikola Jul 01 '19 at 05:50
  • I have tested this behavior on the F1 scale tier, using the MQTT protocol directly to see how Azure IoT Hub reacts. In my test, auto-reconnect was turned off for the MQTT devices, so when the send/receive daily limit was reached, the devices were disconnected by Azure IoT Hub. After that point, I sent a d2c message over the HTTPS protocol to obtain a detailed error response. – Roman Kiss Jul 01 '19 at 06:20