Top-level elements
The following table describes the top-level elements included in a JSON transcript.
Refer to V‑Blaze transcription parameters for more information on the stream tags used to generate the elements that appear in these sections.
Element | Availability | Type | Description |
---|---|---|---|
agentscore | V‑Blaze version 7.3+ | number | Predicts whether the speaker is the agent or the client, expressed as a value between -1 and 1. A negative value means the speaker is believed to be the client; a positive value corresponds to an agent. Values closer to -1 or 1 indicate greater confidence in the prediction, with -1 and 1 indicating the most confidence. Appears at the top level only when mono audio that was not diarized is submitted for transcription with the corresponding stream tag. |
 | V‑Spark only | object | A JSON object that stores metadata and application scores generated by V‑Spark. |
asr | V‑Blaze version 6.1+ | string | Version number of the automatic speech recognition server being used. |
audiosecs | V‑Blaze version 6.1+ | number | Duration of audio, in seconds, in the stream. As of V‑Blaze 7.2, this element will not appear in the JSON output if there was a problem processing audio. |
chaninfo | V‑Blaze version 7.3+ | array | Appears only for stereo or diarized audio. Contains one object for each audio channel. Depending on audio attributes and the stream tags specified with the request, each channel object may contain the same elements as the top-level elements described in this table. |
 | V‑Spark only | object | A JSON object that stores user-supplied call metadata associated with the audio file. |
confidence | All | number | A measure of how confident the speech recognition system is in its transcription results. Results range between 0 and 1 with 1 being the most confident. |
diascore | V‑Blaze version 7.3+ | number | Indicates the level of confidence the system has in its classification of agent and client for audio with two speakers on a single channel. Expressed as a range between 0 and 1, where 1 indicates the best speaker separation. |
donedate | All | string | Date and time the speech-to-text engine completed transcription of the file, that is, when the last utterance finished. |
emotion | V‑Blaze version 7.3+ | string | Describes the emotion detected in decoded speech. Emotional intelligence consists of both acoustic and linguistic information, and events can be assigned one of several emotion values. As of V‑Blaze version 7.3, the emotion field is always included at the top level, and its value is more dynamic: the emotion detected toward the end of a call is compared to the emotion detected closer to the beginning, so the value describes what the speaker's emotion was, or how it changed, over the transcribed audio. |
emotion | V‑Blaze version 7.2 and earlier | string | Emotional intelligence consists of both acoustic and linguistic information, and events can be assigned one of several emotion values. The emotion must be the same for all utterances for this element to be included at the top level. Additional emotion scoring is available in the utterances array. |
ended | V‑Blaze version 6.1+ | string | Date and time the stream ended. This is most useful for measuring real-time transcription. As of V‑Blaze 7.2, this element will not appear in the JSON output if there was a problem processing audio. |
gender | All | string | The gender identified for the audio. |
langinfo | V‑Blaze version 7.1+ | object | Breakdown of language information, added when more than one language was detected. The object contains several fields. |
last_modified | V‑Spark version 4.0.2-1+ only | string | The date and time at which the most recent update occurred. Certain events trigger an update to this value. |
license | All | string | Identification information for the license used. |
lidinfo | V‑Blaze version 5.6+ | object | Language identification (LID) results for the audio. |
model | All | string or array | Language model(s) specified for transcription: a string containing the model name if one model was specified, or an array of model names if multiple models were specified. As of V‑Blaze 7.2, this element does not appear in the JSON output if there was a problem processing the audio. |
musicinfo | V‑Blaze version 7.3+ | object | Appears only for stereo audio in which music was detected, when the audio was submitted for transcription with the corresponding stream tag. |
nchannels | All | number | Number of channels in the audio file. If diarization is set to true, a single-channel (1) file is broken up into 2 channels based on speaker separation. As of V‑Blaze 7.2, this element does not appear in the JSON output if there was a problem processing the audio. |
nsubs | V‑Blaze version 7.1+ | number | The number of substitutions applied. This element does not appear if no substitutions were applied. |
rawemotion | All | string | Acoustic emotion value. The set of possible values changed in version 7.1. |
recvdate | All | string | Date and time the audio file was received by the ASR engine and placed in the queue. |
recvtz | All | array | An array containing two values. |
requestid | All | string | The unique identifier for the request. |
resampleinfo | V‑Blaze version 7.4+ | array | Sample rates, in Hz, of the original file and the output file when the audio was resampled. |
scrubbed | All | boolean | If true, the audio was scrubbed so that all numbers are redacted. If false, this element does not appear in the JSON output. |
sentiment | All | string | Linguistic sentiment value for the transcript. |
sentiment_scores | All | array | An array of length 2: element [0] is the count of positive phrases and element [1] is the count of negative phrases in the file. |
source | All | string | The audio file name. |
started | V‑Blaze version 6.1+ | string | Date and time the stream started. This is most useful for measuring real-time transcription. |
streamtags | V‑Blaze version 6.1+ | object | The parameters and other values specified by the user. Useful for debugging and verification, and for tagging the output with user-level metadata (for example, tags that have meaning to the user for filtering or association). |
substinfo | V‑Blaze version 7.1+ | object | Detail for substitutions, included when requested with the corresponding stream tag. The object reports the total number of substitutions along with the sources and patterns used. |
 | V‑Blaze version 7.3+ | object | Text metrics for the audio transcript, including the amount of transcribed audio that was silence or contained words, overtalk metrics, and the total number of words spoken. |
utterances | All | array | Each audio file is broken up into segments of speech called utterances. The utterances array contains the word transcripts and corresponding metadata, organized by utterance. |
warning | All | V‑Blaze version 5.6.0-3+ | string | Describes a problem or issue that was encountered during transcription. A common example is substitution errors. |
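The table above can be made concrete with a small, hypothetical transcript. Every value in the sketch below is invented for illustration; a real transcript contains many more elements, and the exact contents depend on the audio and the stream tags used.

```python
import json

# A hypothetical, heavily trimmed transcript illustrating a few of the
# top-level elements described in the table; all values are invented.
sample = """
{
  "asr": "7.3",
  "audiosecs": 42.7,
  "confidence": 0.91,
  "nchannels": 2,
  "source": "example-call.wav",
  "sentiment_scores": [6, 2],
  "utterances": []
}
"""

transcript = json.loads(sample)

# sentiment_scores is an array of length 2:
# [0] = positive phrase count, [1] = negative phrase count.
positive, negative = transcript["sentiment_scores"]
print(f"net sentiment phrases: {positive - negative}")
```

For this invented sample, the script prints a net of 4 positive phrases.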
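The agentscore semantics (the sign encodes the speaker, the magnitude encodes confidence) can be sketched as a small helper. This is not part of any V‑Blaze API, and treating a score of exactly 0 as "agent" is an arbitrary choice made for the sketch.

```python
def classify_speaker(agentscore: float) -> tuple[str, float]:
    """Interpret an agentscore in [-1, 1] as described in the table:
    negative values suggest the client, positive values the agent, and
    the absolute value is the prediction confidence (1.0 = most
    confident). Treating exactly 0 as "agent" is an arbitrary choice."""
    if not -1.0 <= agentscore <= 1.0:
        raise ValueError("agentscore must be between -1 and 1")
    speaker = "client" if agentscore < 0 else "agent"
    return speaker, abs(agentscore)

print(classify_speaker(-0.82))  # a strongly client-leaning score
```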