I have a timer-triggered Azure Function that runs every SECOND. The function reads data from API servers and stores it in ADLS. How can I optimize the performance of the function so that it can make more than 500 API calls, and store the data returned by each call, within that one second?
public static async Task Run([TimerTrigger("*/1 * * * * *")] TimerInfo myTimer, ILogger log)
{
    log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
    log.LogInformation($"Execution starts at: {DateTime.Now:hh.mm.ss.ffffff}");
    try
    {
        var ids = await GetIDs(); // makes 1 API call to fetch the list of IDs
        foreach (var id in ids)
        {
            await ReadAndWriteData(id); // reads data for each ID from the API server and stores it in ADLS
        }
    }
    catch (Exception e)
    {
        log.LogError($"An exception has been raised: {e}");
    }
    log.LogInformation($"C# Timer trigger function execution ended at: {DateTime.Now}");
}
public static async Task<List<string>> GetIDs()
{
    // Makes 1 API call to fetch the list of IDs.
    List<string> idList = await Task.Run(() => ReadIDs());
    return idList;
}
public static async Task ReadAndWriteData(string id)
{
    // Reads data for the given ID from the API server.
    var result = await Task.Run(() => ReadData());
    ...
    // Uploads the data to ADLS.
}
What is the best possible way to get data accurately for all IDs every second? I have tried some parallel programming / TPL approaches, but they only give the expected accuracy when I use a single ID, not for all of them. A sketch of what I tried is below.
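For reference, this is roughly the parallel variant I experimented with. It is only a sketch: the throttle limit of 100 is a guess at a sensible cap, and ReadDataAsync / UploadToAdlsAsync are placeholders standing in for my real API read and ADLS upload.

// Lives in the same static class as Run.
// Throttled fan-out: start one task per ID, but cap how many run at once.
private static readonly SemaphoreSlim throttle = new SemaphoreSlim(100); // placeholder limit

public static async Task ProcessAllIDs(List<string> ids)
{
    var tasks = ids.Select(async id =>
    {
        await throttle.WaitAsync(); // wait for a free slot
        try
        {
            var data = await ReadDataAsync(id);  // one API call per ID (placeholder)
            await UploadToAdlsAsync(id, data);   // one ADLS write per ID (placeholder)
        }
        finally
        {
            throttle.Release(); // free the slot for the next ID
        }
    });
    await Task.WhenAll(tasks); // all IDs in flight concurrently, limited by the semaphore
}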