I'm trying to write a Roslyn analyzer that searches code comments for certain hashtags and generates a GPT prompt from them via the code-fix lightbulb. However, when sending the GPT request I get an exception stating:
Message:
    Test method GptAnalyzer.Test.GptAnalyzerUnitTest.TestMethod2 threw exception:
    System.IO.FileNotFoundException: Could not load file or assembly 'System.Diagnostics.DiagnosticSource, Version=6.0.0.0, Culture=neutral, PublicKeyToken='.
Stack Trace:
    DiagnosticScopeFactory.ctor(String clientNamespace, String resourceProviderNamespace, Boolean isActivityEnabled, Boolean suppressNestedClientActivities)
    ClientDiagnostics.ctor(String optionsNamespace, String providerNamespace, DiagnosticsOptions diagnosticsOptions, Nullable`1 suppressNestedClientActivities)
    ClientDiagnostics.ctor(ClientOptions options, Nullable`1 suppressNestedClientActivities)
    OpenAIClient.ctor(Uri endpoint, AzureKeyCredential keyCredential, OpenAIClientOptions options)
    OpenAIClient.ctor(Uri endpoint, AzureKeyCredential keyCredential)
I think some security mechanism is preventing the request from being sent. How and where can I configure that this request / URL is trusted? I'm fairly sure the request itself is technically correct, because the same code I use in the analyzer works fine in a console app.
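For context, the code fix is wired up roughly like this (a simplified sketch with placeholder class names, diagnostic id and titles, not my exact code); the registered action is where the OpenAIClient code shown under "Technical setup" below gets called and where the exception is thrown:

// Simplified sketch only (placeholder names), showing where the OpenAIClient call happens.
using System.Collections.Immutable;
using System.Composition;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CodeActions;
using Microsoft.CodeAnalysis.CodeFixes;

[ExportCodeFixProvider( LanguageNames.CSharp, Name = nameof( GptCodeFixProvider ) ), Shared]
public sealed class GptCodeFixProvider : CodeFixProvider
{
    // Placeholder id of the diagnostic the analyzer reports for hashtag comments.
    public override ImmutableArray<string> FixableDiagnosticIds =>
        ImmutableArray.Create( "GPT0001" );

    public override FixAllProvider GetFixAllProvider() =>
        WellKnownFixAllProviders.BatchFixer;

    public override Task RegisterCodeFixesAsync( CodeFixContext context )
    {
        Diagnostic diagnostic = context.Diagnostics[0];
        context.RegisterCodeFix(
            CodeAction.Create(
                title: "Generate GPT suggestion",
                createChangedDocument: ct => ApplyGptSuggestionAsync( context.Document, diagnostic, ct ),
                equivalenceKey: "GenerateGptSuggestion" ),
            diagnostic );
        return Task.CompletedTask;
    }

    // The OpenAIClient code from "Technical setup" below runs in here;
    // the FileNotFoundException is thrown when the client is constructed.
    private static Task<Document> ApplyGptSuggestionAsync(
        Document document, Diagnostic diagnostic, CancellationToken cancellationToken )
    {
        // ...build the prompt from the hashtag comment, call GetChatCompletionsAsync,
        // and rewrite the document with the result (elided here)...
        return Task.FromResult( document );
    }
}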
Technical setup:
- NuGet Azure.AI.OpenAI
- NuGet Microsoft.Extensions.Configuration
- creating the OpenAIClient:
OpenAIClient client = new OpenAIClient(
    someApiUrl,
    new AzureKeyCredential( someApiKey )
);
- sending the prompt:
Response<ChatCompletions> responseWithoutStream =
    await client.GetChatCompletionsAsync(
        "someModel",
        new ChatCompletionsOptions()
        {
            Messages =
            {
                new ChatMessage(
                    ChatRole.System,
                    @"somePrompt"
                ),
            },
            Temperature = ( float ) 0.7,
            MaxTokens = 400,
            NucleusSamplingFactor = ( float ) 0.95,
            FrequencyPenalty = 0,
            PresencePenalty = 0,
        }
    );
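For comparison, the console app in which the identical call succeeds is essentially just this (endpoint, key and deployment name are placeholders, and the configuration loading is left out):

// Minimal console version where the same request works (placeholder values).
using System;
using System.Threading.Tasks;
using Azure;
using Azure.AI.OpenAI;

internal static class Program
{
    private static async Task Main()
    {
        OpenAIClient client = new OpenAIClient(
            new Uri( "https://my-resource.openai.azure.com/" ), // placeholder endpoint
            new AzureKeyCredential( "someApiKey" )              // placeholder key
        );

        Response<ChatCompletions> response = await client.GetChatCompletionsAsync(
            "someModel", // placeholder deployment name
            new ChatCompletionsOptions()
            {
                Messages = { new ChatMessage( ChatRole.System, @"somePrompt" ) },
                Temperature = ( float ) 0.7,
                MaxTokens = 400,
            }
        );

        Console.WriteLine( response.Value.Choices[0].Message.Content );
    }
}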