
We are running into issues with our blob-triggered function, which is written in JavaScript. We had a hard time putting an automated deployment process in place for it. Here are the steps we followed:

  1. Create the function app within an existing resource group, using an ARM template and a parameter file:

     New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile $templateFilePath -TemplateParameterFile $armParametersFilePath;

  2. Deploy the function code through the Kudu API:

     Invoke-RestMethod -Uri "$apiUrl" -Method Put -InFile "$functionCodeArchivePath" -Credential $credentials -DisableKeepAlive -UserAgent "powershell/1.0" -TimeoutSec 600

  3. Run the npm install command through the Kudu API:

     Invoke-RestMethod -Uri "$apiCommandUrl" -Method Post -Body $json -DisableKeepAlive -ContentType "application/json" -Credential $credentials -UserAgent "powershell/1.0" -TimeoutSec 1200

In the last step, the command that installs the dependencies (npm install) times out on Kudu; this seems to be a known issue.

To overcome this, we switched to using webpack to bundle all the dependencies into a single JavaScript file, following this approach (a sketch of the configuration is shown below).
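For reference, here is a minimal sketch of such a webpack configuration, assuming the function's entry point is index.js and the bundle is emitted into a dist folder; the file and folder names are illustrative, not our exact setup:

    // webpack.config.js - a minimal sketch, not our exact configuration.
    const path = require('path');

    module.exports = {
        entry: './index.js',            // the function's entry script
        target: 'node',                 // bundle for the Node.js runtime
        output: {
            path: path.resolve(__dirname, 'dist'),
            filename: 'index.js',       // the single bundled file that gets deployed
            libraryTarget: 'commonjs2'  // preserve module.exports for the Functions host
        }
    };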

The deployment is now much faster; however, the function does not seem to execute correctly every time.

When we drop a file into the blob container that the function is triggered from, the function does not always log the full execution trace. Some runs have the full logs, while others only contain "Function started" without any of our custom log statements.

Here are the logs, straight from Kudu (D:\home\LogFiles\Application\Functions\Function\functionname):

2017-03-03T11:24:33.835 Function started (Id=77b5b022-eee0-45e0-8e14-15e89de59835)
2017-03-03T11:24:35.167 JavaScript blob trigger function started with blob:
2017-03-03T11:24:35.167 Name: _1486988111937 
 Blob Size: 8926 Bytes
2017-03-03T11:24:35.167 Extracting file
2017-03-03T11:24:35.167 JavaScript blob trigger function processed blob 
 Name: _1486988111937 
 Blob Size: 8926 Bytes
2017-03-03T11:24:35.183 Function completed (Success, Id=77b5b022-eee0-45e0-8e14-15e89de59835)
2017-03-03T11:24:35.292 { Error: [** SENSITIVE ERROR MESSAGE, INTERNAL TO FUNCTION, REMOVED **] }
2017-03-03T11:28:34.929 Function started (Id=8bd96186-50bc-43b0-916c-fefe4bd0cf51)
2017-03-03T11:38:18.302 Function started (Id=7967cc93-73cf-4acf-8428-20b0c70bbac9)
2017-03-03T11:39:32.235 Function started (Id=a0abb823-9497-429d-b477-4f7a9421132e)
2017-03-03T11:49:25.164 Function started (Id=ab16b1d9-114c-4718-aab2-ffc426cfbc98)
2017-03-03T11:53:51.172 Function started (Id=87ed29bc-122f-46d2-a658-d933330580c9)
2017-03-03T11:56:06.512 Function started (Id=23f8ee3f-cda0-45a3-8dd0-4babe9e45e4e)
2017-03-03T12:02:58.886 Function started (Id=c7ef7ad5-62b8-4b43-a043-bc394d9b02f5)

PS: Our function code takes the blob (a zipped file), unzips it, and makes an API call for each of the files inside the archive. The error marked [** SENSITIVE ERROR MESSAGE, INTERNAL TO FUNCTION, REMOVED **] in the log is related to connectivity to our API.
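For context, here is a simplified sketch of the function's shape (the jszip module and the callApi helper are illustrative placeholders, not our exact code):

    // index.js - a simplified sketch, not our exact implementation.
    const JSZip = require('jszip');

    module.exports = function (context, myBlob) {
        context.log('JavaScript blob trigger function started with blob:',
            '\n Name:', context.bindingData.name,
            '\n Blob Size:', myBlob.length, 'Bytes');
        context.log('Extracting file');
        JSZip.loadAsync(myBlob)
            .then(zip => {
                // one API call per file inside the archive
                const calls = Object.keys(zip.files).map(name =>
                    zip.files[name].async('nodebuffer')
                        .then(content => callApi(name, content))); // callApi is hypothetical
                return Promise.all(calls);
            })
            .then(() => context.done())
            .catch(err => context.done(err)); // surface failures to the Functions host
    };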

Halima
  • How many blobs are in your container? – Don Lockhart Mar 03 '17 at 19:45
  • There is a cap on the number of socket connections you can open per Function App. Is it possible for you to batch the API calls? – Ling Toh Mar 03 '17 at 19:55
  • @don-lockhart There are not many blobs, 5 at most. Is the number of blobs going to impact the triggering of the function? – Halima Mar 04 '17 at 21:18
  • @ling-toh Good point, I suppose we could. Would reaching the cap impact the execution of other functions? – Halima Mar 04 '17 at 21:19
  • If you have a large number of blobs (more than 10,000) there's a possibility the trigger may not occur. So, that is not your issue. – Don Lockhart Mar 05 '17 at 13:47
  • @Halima, yes the cap is 300 connections and is set at the Function App level. You need to make sure that the total number of connections from all your Functions inside your Function App does not exceed that total. https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#per-sandbox-per-appper-site-numerical-limits – Ling Toh Mar 05 '17 at 19:55
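Following up on the batching suggestion in the comments above, here is a minimal sketch of processing the per-file API calls in small chunks, so that only a handful of connections are open at any one time (the batchSize value and the callApi helper are illustrative):

    // Process files in chunks so at most `batchSize` API connections are open at once.
    // `callApi` is an illustrative placeholder for the real API client.
    function processInBatches(files, batchSize) {
        let chain = Promise.resolve();
        for (let i = 0; i < files.length; i += batchSize) {
            const chunk = files.slice(i, i + batchSize);
            chain = chain.then(() => Promise.all(chunk.map(f => callApi(f))));
        }
        return chain;
    }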

1 Answer


It looks like blob triggering is not that reliable, at least according to this page: How to use Azure blob storage with the WebJobs SDK

The WebJobs SDK scans log files to watch for new or changed blobs. This process is not real-time; a function might not get triggered until several minutes or longer after the blob is created. In addition, storage logs are created on a "best efforts" basis; there is no guarantee that all events will be captured. Under some conditions, logs might be missed. If the speed and reliability limitations of blob triggers are not acceptable for your application, the recommended method is to create a queue message when you create the blob, and use the QueueTrigger attribute instead of the BlobTrigger attribute on the function that processes the blob.

You should probably change the logic and create a queue message for each file that you put in Blob storage, then process the blob from a queue-triggered function (see the sketch below).
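For illustration, a minimal sketch of what the queue-triggered side could look like, assuming function.json declares a queue trigger named queueItem (carrying the blob name) and a blob input binding named myBlob whose path uses {queueTrigger}; the binding names are illustrative:

    // index.js - a sketch of a queue-triggered function, not a definitive implementation.
    module.exports = function (context, queueItem) {
        context.log('Queue message received for blob:', queueItem);
        // the blob content is resolved by the input binding declared in function.json
        const myBlob = context.bindings.myBlob;
        // ... unzip and call the API per file, as in the original blob-triggered function ...
        context.done();
    };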

Robert Vuković
    I don't believe that this is the reason why @Halima's Function is not logging "Function completed..." entries. The Function is getting triggered as "Function started..." is being logged. To be clear, with reference to the bold phrases you highlighted, "logs might be missed" is referring to "storage logs", not "Function logs" (which is what the post is referring to). – Ling Toh Mar 05 '17 at 19:50
  • @LingToh The problem with the logs can be just a symptom, or a totally different issue. The main problem is the unreliable triggering of blob-triggered functions. As I read it, the advice is not to use them if you want reliable results, e.g. in production. – Robert Vuković Mar 05 '17 at 21:17
  • Hi, thank you all for your help. It seems that what @robert-vuković's answer describes might be causing our problems. We are looking into implementing a queue-triggered mechanism. Thanks – Halima Mar 06 '17 at 09:04