# VoxConverse — Speaker Diarization in the Wild (MS-Swift Format)
This dataset is a reformatted version of VoxConverse for fine-tuning and evaluating multimodal large language models on speaker diarization, packaged in the MS-Swift Parquet format.
**Note:** The underlying audio is sourced from YouTube videos whose copyright remains with the original owners. This reformatted dataset is intended for research purposes only. For the original annotations and audio, please refer to the official VoxConverse repository.

**Content notice:** The data consists of political debates and news segments. The views and opinions expressed by speakers do not reflect positions of the original dataset authors, the University of Oxford, Naver Corporation, or the authors of this reformatted version.

**Bias notice:** The distribution of identities in this dataset may not be representative of the global human population. Be mindful of unintended societal, gender, racial, linguistic, and other biases when training or deploying models on this data.

**Task:** Given a 30-second audio clip of a multi-speaker conversation, identify who speaks in each half-second bin (diarization), or count the number of distinct speakers.
## Dataset Structure

### Columns

| Column | Type | Description |
|---|---|---|
| `messages` | list[{role, content}] | System / user / assistant conversation |
| `audios` | list[binary] | Raw 16 kHz mono WAV bytes (30 s per clip) |
| `videos` | list[binary] | Empty |
| `clip_id` | string | Source clip identifier for cross-window stitching |
| `win_start` | float32 | Window start time in seconds within the source clip |
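Each row's `audios` entry holds the raw WAV bytes for one 30-second window. As a minimal, stdlib-only sketch of decoding them (assuming well-formed 16-bit PCM headers; `decode_wav` is a hypothetical helper, not part of the dataset tooling):

```python
import io
import wave


def decode_wav(wav_bytes: bytes):
    """Decode raw WAV bytes into (sample_rate, list of int16 samples)."""
    with wave.open(io.BytesIO(wav_bytes)) as wf:
        rate = wf.getframerate()
        raw = wf.readframes(wf.getnframes())
    # 16-bit little-endian PCM, mono: two bytes per sample
    samples = [int.from_bytes(raw[i:i + 2], "little", signed=True)
               for i in range(0, len(raw), 2)]
    return rate, samples
```

In practice a library loader (e.g. soundfile or torchaudio) would return a float array directly; the sketch only shows what the stored bytes contain.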
### Splits
| Split | Source | Diarization examples | Speaker count examples |
|---|---|---|---|
| train | VoxConverse dev set (216 clips) | 4,543 | 4,543 |
| test | VoxConverse test set (232 clips) | 10,088 | 10,088 |
## Subsets

### diarization
Given a 30-second clip, output a 60-entry timeline of 0.5-second bins indicating which speaker(s) are active.
**System:**

```
You are an expert in speaker diarization.
Given a 30-second audio clip, identify who speaks in each 0.5-second bin.
Assign each distinct speaker a letter (A, B, C, ...) in order of first appearance.
Use '-' for silence and combined letters (e.g. 'AB') for simultaneous speech.
Provide your answer as a valid JSON object with exactly 60 entries:
{"timeline": ["A", "A", "AB", "B", "-", ...]}
```

**User:**

```
<audio>
For each of the 60 half-second bins in this clip, indicate which speaker(s)
are active. Use letters (A, B, ...) in order of first appearance, '-' for silence,
combined letters (e.g. 'AB') for overlap.
```

**Assistant:**

```
{"timeline": ["A", "A", "A", "-", "B", "B", "AB", "B", "A", "A", ...]}
```
### speaker_count
Given a 30-second clip, count the number of distinct speakers.
**System:**

```
You are an expert in speaker diarization.
Given a 30-second audio clip, count the number of distinct speakers present.
Provide your answer as a valid JSON object: {"num_speakers": N}.
```

**User:**

```
<audio>
How many distinct speakers are present in this 30-second clip?
```

**Assistant:**

```
{"num_speakers": 3}
```
## Windowing & Annotation Details
- Window: 30 seconds, stride: 15 seconds (50% overlap)
- Bin size: 0.5 seconds → 60 bins per window
- Active threshold: a speaker is active in a bin if their segment overlaps it by more than 0.25 s (the DER forgiveness collar from the original paper)
- Speaker normalization: raw IDs (spk00, spk01, ...) mapped to letters (A, B, ...) in order of first appearance within each window
- Audio: resampled to 16kHz mono WAV
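The active-threshold rule can be sketched as follows (a hypothetical re-implementation for illustration, not the script that built the dataset):

```python
BIN = 0.5      # bin width in seconds
N_BINS = 60    # 30-second window
THRESH = 0.25  # minimum overlap for a bin to count as active


def timeline_from_segments(segments):
    """segments: list of (speaker_letter, start_s, end_s) within one window.
    Returns a 60-entry timeline like ["A", "AB", "-", ...]."""
    bins = [set() for _ in range(N_BINS)]
    for spk, start, end in segments:
        for i in range(N_BINS):
            lo, hi = i * BIN, (i + 1) * BIN
            # Intersection of the speaker segment with this bin
            overlap = min(end, hi) - max(start, lo)
            if overlap > THRESH:
                bins[i].add(spk)
    return ["".join(sorted(b)) if b else "-" for b in bins]
```

For example, segments `("A", 0.0, 1.0)` and `("B", 0.6, 1.5)` yield `"A"` for bin 0, `"AB"` for bin 1 (both overlap it by more than 0.25 s), and `"B"` for bin 2.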
## Cross-window Speaker Stitching (Inference)
Since speaker labels are normalized independently per window, inference over a full clip requires stitching. The 50% overlap (15s = 30 shared bins) between consecutive windows allows Hungarian matching:
- Group rows by `clip_id`, sort by `win_start`
- For each adjacent window pair, build a co-occurrence matrix over the 30 shared bins
- Apply Hungarian algorithm to find the optimal speaker mapping
- Re-label speakers in window N+1 to be consistent with window N
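The steps above can be sketched with a stdlib-only brute-force assignment (illustrative only; the Hungarian algorithm, e.g. `scipy.optimize.linear_sum_assignment`, solves the same matching in polynomial time):

```python
from itertools import permutations


def stitch_mapping(prev_tail, next_head):
    """Map window-(N+1) speaker letters onto window-N letters using the
    30 timeline bins the two overlapping windows share.

    prev_tail: last 30 entries of window N's timeline
    next_head: first 30 entries of window N+1's timeline
    """
    prev_spks = sorted({s for e in prev_tail for s in e if s != "-"})
    next_spks = sorted({s for e in next_head for s in e if s != "-"})
    # Co-occurrence counts: bins where speaker p (window N) and
    # speaker n (window N+1) are active at the same time.
    co = {(p, n): 0 for p in prev_spks for n in next_spks}
    for pe, ne in zip(prev_tail, next_head):
        for p in pe:
            for n in ne:
                if p != "-" and n != "-":
                    co[(p, n)] += 1
    # Exhaustive search for the assignment maximizing total co-occurrence.
    k = min(len(prev_spks), len(next_spks))
    best, best_score = {}, -1
    for perm in permutations(prev_spks, k):
        score = sum(co[(p, n)] for p, n in zip(perm, next_spks))
        if score > best_score:
            best_score, best = score, dict(zip(next_spks, perm))
    return best
```

If window N labels a speaker `A` in the shared region and window N+1 calls the same voice `B`, the returned mapping sends `B` back to `A`, so the second window can be re-labeled consistently.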
## Speaker Distribution (train split)
| Speakers per window | Windows |
|---|---|
| 1 | 1,364 |
| 2 | 1,985 |
| 3 | 818 |
| 4 | 289 |
| 5+ | 87 |
## Evaluation

The diarization task is directly compatible with the standard Diarization Error Rate (DER):
DER = Missed Speech + False Alarm + Speaker Confusion
Convert the predicted timeline back to RTTM segments (each bin = 0.5 s) and evaluate with pyannote.metrics using a 0.25 s forgiveness collar. The original paper's audio-only baseline achieves ~20% DER on the dev set; its best audio-visual method achieves 7.7% DER.
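As one way to convert a predicted timeline back into segments before RTTM export (a hypothetical helper; runs of consecutive active bins per speaker are merged):

```python
BIN = 0.5  # seconds per timeline bin


def timeline_to_segments(timeline, offset=0.0):
    """Convert a per-bin timeline into (speaker, start_s, end_s) tuples,
    merging consecutive bins in which the same speaker is active.
    `offset` is the window's start time within the source clip."""
    segments = []
    open_segs = {}  # speaker -> start time of the current run
    for i, entry in enumerate(timeline + ["-"]):  # sentinel closes all runs
        t = offset + i * BIN
        active = set(entry) - {"-"}
        for spk in list(open_segs):
            if spk not in active:
                segments.append((spk, open_segs.pop(spk), t))
        for spk in active:
            open_segs.setdefault(spk, t)
    return sorted(segments)
```

Each resulting tuple maps onto one RTTM `SPEAKER` line (RTTM stores start time and duration rather than an end time).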
## Usage with MS-Swift

```shell
# Fine-tuning
swift sft \
  --model Qwen/Qwen3-Omni-7B \
  --dataset voxconverse-omni/diarization voxconverse-omni/speaker-count \
  --custom_plugin plugins/omni_dataset_plugin.py

# Evaluation
swift eval \
  --model Qwen/Qwen3-Omni-7B \
  --dataset voxconverse-omni/diarization \
  --custom_plugin plugins/omni_dataset_plugin.py \
  --split test
```
## Source Dataset
- Original dataset: VoxConverse — Chung et al., 2020
- 216 dev + 232 test clips from YouTube (political debates, panel discussions, news)
- Duration: 22s–1097s per clip, avg ~338s (dev) / ~675s (test)
- Annotations: RTTM format, manually verified, 0.1s boundary precision
- License: CC BY 4.0
## Citation

```bibtex
@inproceedings{chung2020spot,
  title={Spot the conversation: speaker diarisation in the wild},
  author={Chung, Joon Son and Huh, Jaesung and Nagrani, Arsha and Afouras, Triantafyllos and Zisserman, Andrew},
  booktitle={Interspeech},
  year={2020}
}
```