This dataset uses high-fidelity, independently recorded audio tracks to capture the natural interaction features of everyday conversation, such as interruptions, overlapping speech, intonation shifts, and emotional pauses. All conversations are annotated with multi-speaker labels and span diverse scenarios, providing robust training resources for AI models that must comprehend the intricate Japanese honorific system, colloquial ellipsis, and context-dependent logic.
Language: Japanese
Data Style: Conversational
Sampling Rate: 16 kHz
Bit Depth: 16-bit
Channels: 2
Total Audio Duration: 10,000+ hours
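Given the stated recording format (16 kHz sampling rate, 16-bit samples, 2 channels), files can be checked against these specs before training. The sketch below assumes the audio is delivered as WAV files (the container format is not stated above) and uses only Python's standard `wave` module:

```python
import os
import tempfile
import wave

def check_audio_specs(path):
    """Validate a WAV file against the dataset's stated format:
    16 kHz sampling rate, 16-bit samples, 2 channels.
    Returns the duration in seconds."""
    with wave.open(path, "rb") as wf:
        assert wf.getframerate() == 16000, "expected 16 kHz sampling rate"
        assert wf.getsampwidth() == 2, "expected 16-bit (2-byte) samples"
        assert wf.getnchannels() == 2, "expected 2 channels"
        return wf.getnframes() / wf.getframerate()

# Demo: write one second of silent stereo 16 kHz / 16-bit audio, then validate it.
tmp = tempfile.NamedTemporaryFile(suffix=".wav", delete=False)
tmp.close()
with wave.open(tmp.name, "wb") as wf:
    wf.setnchannels(2)
    wf.setsampwidth(2)
    wf.setframerate(16000)
    wf.writeframes(b"\x00\x00" * 2 * 16000)  # 16000 silent stereo frames
duration = check_audio_specs(tmp.name)
os.unlink(tmp.name)
```

A check like this is worth running across the whole corpus, since any file that deviates from the advertised sampling rate or channel count would silently degrade a model trained on resampled batches.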
ISO/IEC 27001 & ISO/IEC 27701:2019 compliant
Audio, text, image, and video multi-modal data
Conversational, scripted, and spontaneous data covering extensive domains
Quality assured by domain expertise