The request is unusual. A Task isn't a value; it's a promise that something will complete in the future. To get the desired result, the code will have to await all the tasks, retrieve their results, and then return them in a Dictionary<string, int>.
There's almost certainly a better way to solve the actual problem.
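That said, if the caller really does need a materialized `Dictionary<string, int>`, the direct route is `Task.WhenAll`. A minimal sketch, assuming the tasks are keyed by name in a `Dictionary<string, Task<int>>` (the method name and parameter are illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

async Task<Dictionary<string, int>> AwaitAllAsync(Dictionary<string, Task<int>> tasks)
{
    // Wait until every task has completed (or one of them faults)
    await Task.WhenAll(tasks.Values);
    // Every task is completed at this point, so reading .Result doesn't block
    return tasks.ToDictionary(pair => pair.Key, pair => pair.Value.Result);
}
```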
One quick and dirty example would be:
async Task<ConcurrentDictionary<string, T>> GetValues<T>(CancellationToken token = default)
{
    var dict = new ConcurrentDictionary<string, T>();
    try
    {
        await Parallel.ForEachAsync(_urls, token, async (url, tk) =>
        {
            var res = await _httpClient.GetStringAsync(url, tk);
            // Parse<T> is a stand-in for whatever converts the response to a T
            dict[url] = Parse<T>(res);
        });
    }
    catch (OperationCanceledException) { }
    return dict;
}
There are far better ways to solve the actual problem though - executing interdependent HttpClient requests. .NET offers several ways to construct asynchronous processing pipelines: Dataflow blocks, Channels and IAsyncEnumerable.
Dataflow Blocks
For example, using Dataflow blocks you can create a pipeline that downloads CSV files, parses them, and inserts the data into a database.
The following options specify that 8 CSV files will be downloaded concurrently and two will be parsed concurrently:
var downloadDOP = 8;
var parseDOP = 2;
var tableName = "SomeTable";

var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };
var downloadOptions = new ExecutionDataflowBlockOptions
{
    MaxDegreeOfParallelism = downloadDOP
};
var parseOptions = new ExecutionDataflowBlockOptions
{
    MaxDegreeOfParallelism = parseDOP
};
The following code creates the pipeline:
HttpClient http = new HttpClient(...);
var downloader = new TransformBlock<(Uri, string), FileInfo>(async input =>
{
    // Lambda parameters can't destructure a tuple directly, so unpack it here
    var (uri, path) = input;
    var file = new FileInfo(path);
    using var stream = await http.GetStreamAsync(uri);
    using var fileStream = file.Create();
    await stream.CopyToAsync(fileStream);
    return file;
}, downloadOptions);
var parser = new TransformBlock<FileInfo, Foo[]>(file =>
{
    using var reader = file.OpenText();
    using var csv = new CsvReader(reader, CultureInfo.InvariantCulture);
    // GetRecords is synchronous, so the lambda doesn't need to be async
    var records = csv.GetRecords<Foo>().ToArray();
    return records;
}, parseOptions);
var importer = new ActionBlock<Foo[]>(async recs =>
{
    using var bcp = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock);
    bcp.DestinationTableName = tableName;
    //Map columns if needed
    ...
    using var reader = ObjectReader.Create(recs);
    await bcp.WriteToServerAsync(reader);
});
downloader.LinkTo(parser, linkOptions);
parser.LinkTo(importer, linkOptions);
Once you have the pipeline, you can start posting URLs to it and await the completion of the entire pipeline:
IEnumerable<(Uri, string)> filesToDownload = ...
foreach (var pair in filesToDownload)
{
    await downloader.SendAsync(pair);
}
downloader.Complete();
await importer.Completion;
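For comparison, the same producer/consumer shape can be sketched with `System.Threading.Channels`. A bounded channel gives you backpressure for free: the producer awaits when the consumer falls behind. The integer payloads below are stand-ins for real download/parse results, and the capacity of 8 is arbitrary:

```csharp
using System.Threading.Channels;
using System.Threading.Tasks;

// Bounded channel: WriteAsync awaits when the buffer of 8 items is full
var channel = Channel.CreateBounded<int>(8);

var producer = Task.Run(async () =>
{
    for (var i = 0; i < 20; i++)
        await channel.Writer.WriteAsync(i); // stand-in for posting work items
    channel.Writer.Complete(); // no more items; lets the reader's loop end
});

var consumer = Task.Run(async () =>
{
    var sum = 0;
    await foreach (var item in channel.Reader.ReadAllAsync()) // stand-in for parse/import
        sum += item;
    return sum;
});

await producer;
var total = await consumer;
```

Completing the writer plays the same role as `downloader.Complete()` above: it signals the consumer that no more items are coming.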