
I'm trying to write a Roslyn analyzer that searches code comments for certain hashtags and generates a GPT prompt from them via the code-fix light bulb. However, when sending the GPT request I get an exception stating:

  Message: 
Test method GptAnalyzer.Test.GptAnalyzerUnitTest.TestMethod2 threw exception: 
System.IO.FileNotFoundException: Could not load file or assembly 'System.Diagnostics.DiagnosticSource, Version=6.0.0.0, Culture=neutral, PublicKeyToken='. 

  Stack Trace: 
DiagnosticScopeFactory.ctor(String clientNamespace, String resourceProviderNamespace, Boolean isActivityEnabled, Boolean suppressNestedClientActivities)
ClientDiagnostics.ctor(String optionsNamespace, String providerNamespace, DiagnosticsOptions diagnosticsOptions, Nullable`1 suppressNestedClientActivities)
ClientDiagnostics.ctor(ClientOptions options, Nullable`1 suppressNestedClientActivities)
OpenAIClient.ctor(Uri endpoint, AzureKeyCredential keyCredential, OpenAIClientOptions options)
OpenAIClient.ctor(Uri endpoint, AzureKeyCredential keyCredential)

I think some security mechanism is preventing the request from being sent. How and where can I configure that I trust this request / URL? I'm fairly sure the request itself is technically correct, because the same code works in a console app.
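For what it's worth, the stack trace points at assembly loading rather than networking: the Azure SDK's `ClientDiagnostics` constructor cannot load `System.Diagnostics.DiagnosticSource` inside the analyzer host process, where dependency probing differs from a console app. One commonly suggested workaround is to hook `AppDomain.AssemblyResolve` so missing dependencies are loaded from the analyzer's own directory. A minimal sketch (the `AnalyzerAssemblyResolver` type and `Register` method are hypothetical names, not part of any library):

```csharp
using System;
using System.IO;
using System.Reflection;

// Sketch of a workaround: when the host's default probing fails, try to
// load the requested assembly (e.g. System.Diagnostics.DiagnosticSource.dll)
// from the directory that contains the analyzer DLL itself.
internal static class AnalyzerAssemblyResolver
{
    public static void Register()
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            // args.Name is a full assembly name such as
            // "System.Diagnostics.DiagnosticSource, Version=6.0.0.0, ..."
            var simpleName = new AssemblyName(args.Name).Name;
            var analyzerDir = Path.GetDirectoryName(
                typeof(AnalyzerAssemblyResolver).Assembly.Location);
            var candidate = Path.Combine(analyzerDir, simpleName + ".dll");
            return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
        };
    }
}
```

`Register()` would need to run once before the `OpenAIClient` constructor executes, e.g. from a static constructor in the code-fix provider; this also assumes the dependency DLLs are actually shipped next to the analyzer assembly.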

Technical setup:

  • NuGet Azure.AI.OpenAI
  • NuGet Microsoft.Extensions.Configuration
  • creating the OpenAIClient:
OpenAIClient client = new OpenAIClient(
    someApiUrl,
    new AzureKeyCredential( someApiKey )
);
  • sending the prompt:
Response<ChatCompletions> responseWithoutStream = 
    await client.GetChatCompletionsAsync(
        "someModel",
        new ChatCompletionsOptions()
        {
            Messages =
            {
                new ChatMessage(
                    ChatRole.System,
                    @"somePrompt"
                ),
            },
            Temperature = 0.7f,
            MaxTokens = 400,
            NucleusSamplingFactor = 0.95f,
            FrequencyPenalty = 0,
            PresencePenalty = 0,
        } 
    );
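Since the exception names a missing assembly, one thing worth checking is whether all runtime dependencies are packaged next to the analyzer DLL; analyzer assemblies are not restored like ordinary library references, so transitive dependencies such as System.Diagnostics.DiagnosticSource don't come along automatically. A hedged sketch of the packaging pattern often used in analyzer projects (the package version, `Pkg*` path property, and target framework below are illustrative assumptions):

```xml
<!-- Sketch, assuming a netstandard2.0 analyzer project packed as a NuGet
     package. GeneratePathProperty exposes $(PkgAzure_AI_OpenAI) so the
     dependency DLLs can be copied into the analyzers folder of the package. -->
<ItemGroup>
  <PackageReference Include="Azure.AI.OpenAI" Version="1.0.0-beta.6"
                    GeneratePathProperty="true" PrivateAssets="all" />
</ItemGroup>
<ItemGroup>
  <None Include="$(PkgAzure_AI_OpenAI)\lib\netstandard2.0\*.dll"
        Pack="true" PackagePath="analyzers/dotnet/cs" Visible="false" />
</ItemGroup>
```

The same `None`/`Pack` entry would be repeated for each transitive dependency (Azure.Core, System.Diagnostics.DiagnosticSource, and so on) so that every DLL the client needs at runtime sits beside the analyzer assembly.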
ChopSeo
  • The failure isn't a security mechanism here but a packaging issue. But ignoring that for a moment, are you trying to use OpenAI in a code _fix_ or the actual analyzer itself? Because if the latter, that's going to cause massive, massive performance problems. You shouldn't do that. – Jason Malinowski Aug 24 '23 at 17:18
  • Thanks for your answer. My goal is to send a GPT request when activating the code fix. I don't care if the code fix itself takes long. Hopefully my understanding is correct that this will not affect the syntax-node parsing speed. But what can I do about the packaging issue? – ChopSeo Aug 25 '23 at 14:05

0 Answers