The 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2022), one of the world's leading NLP conferences, will take place July 10-15 in Seattle, Washington, and virtually. Magic Data is proud to be one of the sponsors of the event and to present our latest training data portfolio for artificial intelligence research and development to the estimated 3,000+ attendees.
Language is the foundation of communication, and barrier-free communication has long been a human aspiration. Differences in languages, dialects, and speaking styles, however, still create significant barriers between people. Advances in artificial intelligence are now breaking those barriers down: many multilingual speech recognition products on the market support not only Mandarin Chinese and English but also a range of other languages and dialects, enabling communication across national and regional boundaries. Achieving high recognition accuracy across multiple languages nevertheless remains a major challenge.
Sometimes, or more often than not, it is hard to tell whether the lady who answers your customer service call is a human being with a sweet voice or an AI chatbot. Advances in AI continue to pave the way for greater efficiency and a better experience in customer service, and AI is becoming central to its future. Still, there are many challenges to overcome. Learn why, and how to address them, by reading Do You Think the Sweet Customer Service Lady is Real.
Since the end of 2019, COVID-19 has upended many parts of our lives. The number of people working and teaching online worldwide has grown significantly, and the pandemic has become a major turning point in the shift from face-to-face meetings and classes to online ones. The online conference voice assistant is the largest application in online meeting and teaching scenarios, and compared with ordinary speech recognition applications, speech recognition in online conference scenarios faces greater challenges.
Viewers of the Disney Channel in the late '90s will remember the iconic (and slightly traumatizing) Disney Channel movie, Smart House. The story of an AI program that runs a family's home and becomes sentient enough to think she is human and take over their lives may have its flaws when it comes to accuracy, but it introduced many young minds to the concept of machine learning.
As of 2021, the adoption rate of intelligent voice interaction in Chinese passenger cars had reached 86%. Car cockpits are becoming increasingly intelligent, and the in-vehicle voice assistant is the core function of the intelligent cockpit. Because the driver's hands and eyes are occupied while driving, the in-vehicle scenario places higher demands on hands-free voice interaction.
Over two years into the pandemic, a lot has changed in the remote work landscape. As more jobs than ever before move to remote settings, communication with coworkers and customers has shifted online as well. With that shift comes a new set of trials and tribulations that didn't exist in face-to-face meetings.
As a collection of high-quality, richly annotated training data, MagicData-RAMC supports a wide range of research. This article introduces three experiments on speech recognition, speaker diarization, and keyword search based on MagicData-RAMC, conducted by Magic Data together with the Institute of Acoustics of the Chinese Academy of Sciences, Shanghai Jiao Tong University, and Northwestern Polytechnical University.
MagicHub, an open-source community for AI, has released a 180-hour Mandarin conversational speech dataset for free, enriching the open-source speech corpus and promoting the development of spoken language processing technology and conversational AI.
Magic Data has accumulated more than 200,000 hours of training data, including 140,000 hours of conversational AI training datasets and 60,000 hours of read speech datasets, covering Asian languages, English dialects, and European languages, supporting the rapid development of human-computer interaction in artificial intelligence.