The Indian Institute of Science (IISc) and ARTPARK are partnering with Hugging Face to enable developers across the globe to access Vaani, India’s most diverse open-source, multi-modal, multilingual dataset. The organisations share a commitment to building inclusive, accessible, state-of-the-art AI technologies that honor linguistic and cultural diversity.
Partnership
The partnership between Hugging Face and IISc/ARTPARK aims to broaden access to the Vaani dataset and improve its usability, encouraging the development of AI systems that better understand India’s diverse languages and cater to the digital needs of its people.
About Vaani Dataset
Launched in 2022 by IISc/ARTPARK and Google, Project Vaani is a pioneering initiative aimed at creating an open-source multi-modal dataset that truly represents India’s linguistic diversity. The dataset is unique in its geo-centric approach, which captures dialects and languages spoken in remote regions rather than focusing solely on mainstream languages.
Vaani targets the collection of over 150,000 hours of speech and 15,000 hours of transcribed text data from 1 million people across all 773 districts, ensuring diversity in language, dialects, and demographics.
The dataset is being built in phases. Phase 1, covering 80 districts, has already been open-sourced. Phase 2 is currently underway, expanding the dataset to 100 more districts and further strengthening Vaani’s reach and impact across India’s diverse linguistic landscape.

Key highlights of the Vaani dataset open-sourced so far (as of 15-02-2025):
District-wise language distribution
The Vaani dataset shows a rich distribution of languages across India’s districts, highlighting linguistic diversity at a local level. This information is valuable for researchers, AI developers, and language technology innovators looking to build speech models tailored to specific regions and dialects. To explore the detailed district-wise language distribution, visit: Vaani Dataset on Hugging Face
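As a starting point for that kind of exploration, the sketch below tallies samples per (district, language) pair. It is a minimal sketch, not an official recipe: the dataset id "ARTPARK-IISc/Vaani" and the column names "district" and "language" are assumptions here — verify both against the dataset card on Hugging Face before use.

```python
# Sketch: tally Vaani samples by (district, language).
# ASSUMPTIONS: dataset id "ARTPARK-IISc/Vaani" and fields named
# "district" / "language" -- check the dataset card for the real schema.
from collections import Counter
from itertools import islice


def district_language_counts(records):
    """Count (district, language) pairs across an iterable of samples."""
    return Counter((r["district"], r["language"]) for r in records)


# Usage (requires network and `pip install datasets`):
#   from datasets import load_dataset
#   ds = load_dataset("ARTPARK-IISc/Vaani", split="train", streaming=True)
#   print(district_language_counts(islice(ds, 1000)).most_common(10))
```

Streaming mode (`streaming=True`) lets you peek at a large audio dataset without downloading it in full, which is useful given Vaani’s scale.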
Transcribed subset
If you want to access only transcribed data and skip the untranscribed audio-only data, a subset of the larger dataset has been open-sourced here. It contains 790 hours of transcribed audio from ~7 lakh (700,000) speakers covering 70K images. This resource includes smaller, segmented audio units matched with precise transcriptions, enabling a variety of tasks, including:
- Speech Recognition: Training models to accurately transcribe spoken language.
- Language Modeling: Building more refined language models.
- Segmentation Tasks: Identifying distinct speech units for improved transcription accuracy.
This additional dataset complements the main Vaani dataset, making it possible to develop end-to-end speech recognition systems and more targeted AI solutions.
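For speech recognition training, the first practical step is usually to pair each audio clip with its transcription and drop untranscribed rows. The helper below is a minimal sketch of that step; the field names "audio" and "transcript" are assumptions — consult the transcribed subset’s dataset card for the actual column names.

```python
# Sketch: select (audio, text) pairs for ASR training.
# ASSUMPTION: rows expose "audio" and "transcript" fields; the real
# column names may differ -- see the transcribed subset's dataset card.
def transcribed_pairs(records):
    """Yield (audio, text) pairs, skipping rows with no transcription."""
    for r in records:
        text = (r.get("transcript") or "").strip()
        if text:
            yield r["audio"], text
```

Because it works on any iterable of dict-like rows, the same helper can be applied to a streamed Hugging Face dataset or to a local manifest file.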
Utility of Vaani in the Age of LLMs
The Vaani dataset offers several key benefits: extensive language coverage (54 languages), representation across diverse regions, speakers from diverse educational and socio-economic backgrounds, very large speaker coverage, spontaneous speech data, and real-life data collection environments. These features can enable inclusive AI models for:
- Speech-to-Text and Text-to-Speech: Fine-tuning these models for both LLM and non-LLM-based applications. Moreover, the transcription tagging enables the development of code-switching (Indic and English) ASR models.
- Foundational Speech Models for Indic Languages: The dataset’s significant linguistic and geographical coverage supports the development of robust foundational models for Indic languages.
- Speaker Identification/Verification Models: With data from over 80,000 speakers, the dataset is well-suited for developing robust speaker identification and verification models.
- Language Identification Models: Enables the creation of language identification models for various real-world applications.
- Speech Enhancement Systems: The dataset’s tagging system supports the development of advanced speech enhancement technologies.
- Enhancing Multimodal LLMs: The unique data collection approach makes it valuable for building and improving multimodal capabilities in LLMs when combined with other multimodal datasets.
- Performance Benchmarking: The dataset is an ideal choice for benchmarking speech models due to its diverse linguistic, geographical, and real-world data properties.
These AI models can power a wide range of conversational AI applications. From educational tools to telemedicine platforms, healthcare solutions, voter helplines, media localization, and multilingual smart devices, the Vaani dataset can be a game-changer in real-world scenarios.
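On the benchmarking point above: the standard metric for comparing ASR systems on transcribed data like Vaani’s is word error rate (WER). The function below is a minimal, dependency-free sketch of WER via word-level Levenshtein distance; production benchmarks typically also normalize text (casing, punctuation) before scoring, which is omitted here.

```python
# Minimal word error rate (WER): word-level edit distance between a
# reference transcription and an ASR hypothesis, divided by the number
# of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j].
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            curr[j] = min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution (0 if words match)
            )
        prev = curr
    return prev[-1] / max(len(ref), 1)
```

For example, one substituted word out of three reference words gives a WER of 1/3.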
What’s next
IISc/ARTPARK and Google have extended the partnership to Phase 2 (an additional 100 districts). With this, Vaani covers all states in India! We’re excited to bring this dataset to all of you.

The map highlights the districts across India where data has been collected as of Feb 5, 2025.
How You Can Contribute
The most meaningful contribution you can make is to use the Vaani dataset. Whether building new AI applications, conducting research, or exploring innovative use cases, your engagement helps improve and expand the project.
We would be delighted to hear from you if you have feedback or insights from using the dataset. Please reach out to vaanicontact@gmail.com to share your experiences or inquire about collaboration opportunities, or fill out this feedback form.
Made with ❤️ for India’s linguistic diversity
