Ensure compatibility with various frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Minimize dependencies to avoid version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to perform transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is particularly useful for applications that require immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}"));

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}"));
```
After registering the event handlers, connect the transcriber, stream audio into it, and close the session when done:

```csharp
await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Using LeMUR for LLM Apps

The SDK integrates with LeMUR to let developers build large language model (LLM) applications on voice data. Here is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);
Console.WriteLine(response.Response);
```

Audio Intelligence Models

Additionally, the SDK has built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

For more information, visit the official AssemblyAI blog.

Image source: Shutterstock.
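The examples above assume the SDK is already installed. It is distributed as a NuGet package; the package id `AssemblyAI` used below is an assumption based on the official SDK repository, so verify it against NuGet before relying on it. A minimal setup sketch, guarded so it only attempts the install when run inside an actual .NET project:

```shell
# Install the AssemblyAI C# SDK from NuGet, assuming the package id "AssemblyAI".
# Only attempt the install when the .NET CLI and a project file are present;
# otherwise print the command for reference.
if command -v dotnet >/dev/null 2>&1 && [ -n "$(ls ./*.csproj 2>/dev/null)" ]; then
    dotnet add package AssemblyAI
else
    echo "Run from a .NET project directory: dotnet add package AssemblyAI"
fi
```

Pinning a specific version (`dotnet add package AssemblyAI --version <x.y.z>`) is worth considering given the article's advice to minimize dependency and version conflicts.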