| Type | Property | Description |
|---|---|---|
| string | contentUri [get, set] | Publicly facing URI. Format: uri |
| string | encoding [get, set] | The encoding of the original audio. Required. Example: Wav. Enum: Mpeg, Mp4, Wav, Webm, Webp, Aac, Avi, Ogg |
| string | languageCode [get, set] | Language spoken in the audio file. Required. Example: en-US |
| string | source [get, set] | Source of the audio file, e.g. Phone, RingCentral, GoogleMeet, Zoom. Example: RingCentral |
| string | audioType [get, set] | Type of the audio. Example: CallCenter. Enum: CallCenter, Meeting, EarningsCalls, Interview, PressConference, Voicemail |
| bool? | separateSpeakerPerChannel [get, set] | Indicates that the input audio is multi-channel and each channel has a separate speaker. |
| long? | speakerCount [get, set] | Number of speakers in the file; omit this parameter if unknown. Format: int32. Example: 2 |
| string[] | speakerIds [get, set] | Optional set of speakers to be identified from the call. Example: speakerId1, speakerId2 |
| bool? | enableVoiceActivityDetection [get, set] | Apply voice activity detection. |
| bool? | enablePunctuation [get, set] | Enables the Smart Punctuation API. |
| bool? | enableSpeakerDiarization [get, set] | Tags each word with the corresponding speaker. |
| SpeechContextPhrasesInput[] | speechContexts [get, set] | Words/phrases that will be used for boosting the transcript; this can help accuracy for cases like person names, company names, etc. |
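In practice, AsrInput is populated as a plain request body before being passed to the SDK's speech-to-text call (not shown here). Below is a minimal sketch, assuming only the properties listed in the table above; the contentUri value is a hypothetical placeholder.

```csharp
using RingCentral;

// Sketch: build an AsrInput request body using the documented properties.
var asrInput = new AsrInput
{
    contentUri = "https://example.com/recordings/call-123.wav", // hypothetical, publicly reachable URI
    encoding = "Wav",
    languageCode = "en-US",
    source = "RingCentral",
    audioType = "CallCenter",
    separateSpeakerPerChannel = false,
    speakerCount = 2,               // omit if the number of speakers is unknown
    enableVoiceActivityDetection = true,
    enablePunctuation = true,
    enableSpeakerDiarization = true
};
```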
◆ audioType

string RingCentral.AsrInput.audioType [get, set]

Type of the audio. Example: CallCenter. Enum: CallCenter, Meeting, EarningsCalls, Interview, PressConference, Voicemail
◆ contentUri

string RingCentral.AsrInput.contentUri [get, set]

Publicly facing URI. Format: uri
◆ enablePunctuation

bool? RingCentral.AsrInput.enablePunctuation [get, set]

Enables the Smart Punctuation API.
◆ enableSpeakerDiarization

bool? RingCentral.AsrInput.enableSpeakerDiarization [get, set]

Tags each word with the corresponding speaker.
◆ enableVoiceActivityDetection

bool? RingCentral.AsrInput.enableVoiceActivityDetection [get, set]

Apply voice activity detection.
◆ encoding

string RingCentral.AsrInput.encoding [get, set]

The encoding of the original audio. Required. Example: Wav. Enum: Mpeg, Mp4, Wav, Webm, Webp, Aac, Avi, Ogg
◆ languageCode

string RingCentral.AsrInput.languageCode [get, set]

Language spoken in the audio file. Required. Example: en-US
◆ separateSpeakerPerChannel

bool? RingCentral.AsrInput.separateSpeakerPerChannel [get, set]

Indicates that the input audio is multi-channel and each channel has a separate speaker.
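The following sketch contrasts the two speaker-separation options described on this page; the values are illustrative only, not prescriptive defaults.

```csharp
// Multi-channel audio where each channel already carries exactly one speaker:
var stereoInput = new AsrInput
{
    separateSpeakerPerChannel = true   // speakers are distinguished by channel
};

// Single-channel (mono) audio with several speakers mixed together:
var monoInput = new AsrInput
{
    enableSpeakerDiarization = true,   // tag each word with the corresponding speaker
    speakerCount = 2                   // omit if the speaker count is unknown
};
```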
◆ source

string RingCentral.AsrInput.source [get, set]

Source of the audio file, e.g. Phone, RingCentral, GoogleMeet, Zoom. Example: RingCentral
◆ speakerCount

long? RingCentral.AsrInput.speakerCount [get, set]

Number of speakers in the file; omit this parameter if unknown. Format: int32. Example: 2
◆ speakerIds

string[] RingCentral.AsrInput.speakerIds [get, set]

Optional set of speakers to be identified from the call. Example: speakerId1, speakerId2
◆ speechContexts

SpeechContextPhrasesInput[] RingCentral.AsrInput.speechContexts [get, set]

Words/phrases that will be used for boosting the transcript; this can help accuracy for cases like person names, company names, etc.
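A short sketch of supplying speech contexts follows. It assumes SpeechContextPhrasesInput exposes phrases and boost members, mirroring the underlying AI API; verify the exact member names against the generated class before use.

```csharp
// Sketch only: boost recognition of domain-specific terms.
// The phrases/boost members below are assumed, not taken from this page.
var asrInput = new AsrInput
{
    speechContexts = new[]
    {
        new SpeechContextPhrasesInput
        {
            phrases = new[] { "RingCentral", "Acme Corp", "Jane Doe" },
            boost = 2
        }
    }
};
```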
The documentation for this class was generated from the following file:
- RingCentral.Net/Definitions/AsrInput.cs