This application is a continuation of U.S. 28, 2015, entitled METHOD AND APPARATUS FOR DISCOVERING TRENDING TERMS IN SPEECH REQUESTS, which claims priority from U.S. 11, 2014, entitled METHOD AND APPARATUS FOR DISCOVERING TRENDING TERMS IN SPEECH REQUESTS, which are hereby incorporated by reference in their entirety for all purposes.

This relates generally to automatic speech recognition and, more specifically, to discovering trending terms in automatic speech recognition.

Intelligent automated assistants (or virtual assistants) provide an intuitive interface between users and electronic devices. These assistants can allow users to interact with devices or systems using natural language in spoken and/or text forms. For example, a user can access the services of an electronic device by providing a spoken user input in natural language form to a virtual assistant associated with the electronic device. The virtual assistant can perform natural language processing on the spoken user input to infer the user's intent and operationalize the user's intent into tasks. The tasks can then be performed by executing one or more functions of the electronic device, and a relevant output can be returned to the user in natural language form.

In support of virtual assistants, speech-to-text transcription (e.g., dictation), and other speech applications, automatic speech recognition (ASR) systems are used to interpret user speech. These recognizers are expected to handle a wide variety of speech input, including a variety of different types of spoken requests for virtual assistants. Examples include speech and spoken requests related to web searches, knowledge questions, sending text messages, posting to social media networks, and the like. In addition, it is desirable that virtual assistants be sympathetic and fun to talk with, which can depend on having relevant and current knowledge.

Virtual assistant and speech transcription services, however, can become outdated as relevant language and knowledge changes. ASR systems and natural language understanding (NLU) systems can work well for predetermined training language, but ASR systems can have limited and relatively static vocabularies while NLU systems can be limited by expected word patterns. These systems can thus be ill-equipped to handle new names, words, phrases, requests, and the like as they are encountered or to handle fluctuations in popular terms, and updating the systems to accommodate changing language can be tedious and slow. Accordingly, without identifying and accommodating changes in relevant names, words, phrases, requests, and the like, speech recognizers can suffer poor recognition accuracy. As such, system utility can be impaired, and the user experience can suffer as a result.

Systems and processes are disclosed for discovering trending terms in automatic speech recognition. In one example, a candidate term can be identified based on a frequency of occurrence of the term in an electronic data source. In response to identifying the candidate term, an archive of speech traffic can be searched for the candidate term. The archive can include speech traffic of an automatic speech recognizer. The archive can be searched using phonetic matching. In response to finding the candidate term in the archive, a notification can be generated including the candidate term.

FIG. 1 illustrates an exemplary system for recognizing speech for a virtual assistant according to various examples.

FIG. 2 illustrates a block diagram of an exemplary user device according to various examples.
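The summary describes identifying a candidate term by its frequency of occurrence in an electronic data source, searching an archive of recognizer speech traffic for it, and generating a notification when it is found. The sketch below is one minimal, illustrative way such a pipeline could look; the function names, the spike-ratio threshold, and the notification shape are all assumptions, not details from this disclosure.

```python
from collections import Counter

def find_candidate_terms(current_docs, baseline_counts, spike_ratio=3.0, min_count=3):
    """Flag terms whose frequency in recent documents spikes versus a baseline.

    current_docs: iterable of token lists from a recent electronic data source
    (e.g., a news feed). baseline_counts: historical term -> count mapping.
    spike_ratio and min_count are illustrative thresholds.
    """
    current = Counter(tok.lower() for doc in current_docs for tok in doc)
    candidates = []
    for term, count in current.items():
        if count < min_count:
            continue
        base = baseline_counts.get(term, 0)
        # A term absent from the baseline is treated as maximally trending.
        if base == 0 or count / base >= spike_ratio:
            candidates.append(term)
    return candidates

def notify_if_in_archive(candidates, archive_transcripts):
    """Search an archive of recognizer transcripts for each candidate term
    and build a notification for every term that appears there."""
    notifications = []
    for term in candidates:
        if any(term in t.lower().split() for t in archive_transcripts):
            notifications.append({
                "term": term,
                "message": f"Trending term found in speech archive: {term}",
            })
    return notifications
```

For instance, a place name that suddenly dominates a news feed but is absent from the historical counts would be flagged, and a notification would be produced only if users have actually been saying it to the recognizer.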
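The summary also notes that the archive can be searched using phonetic matching, which tolerates the spelling drift between a written source term and a recognizer transcript. Below is a sketch using a simplified Soundex code (it omits the classic H/W adjacency rule); Soundex is one stand-in for whatever phonetic representation an implementation might actually use.

```python
def soundex(word):
    """Simplified Soundex code for an alphabetic word, e.g. 'Robert' -> 'R163'.

    Illustrative only: classic Soundex has an extra rule for consonants
    separated by 'h' or 'w', which this version ignores.
    """
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    first = word[0].upper()
    encoded = [codes.get(c, "") for c in word]  # vowels/h/w/y map to ""
    digits = []
    prev = encoded[0]  # the first letter's own code is kept only as 'prev'
    for code in encoded[1:]:
        if code and code != prev:  # skip repeats of the previous code
            digits.append(code)
        prev = code
    return (first + "".join(digits) + "000")[:4]

def phonetic_search(term, archive_transcripts):
    """Return archive transcripts containing a token that sounds like `term`."""
    target = soundex(term)
    return [t for t in archive_transcripts
            if any(soundex(tok) == target
                   for tok in t.lower().split() if tok.isalpha())]
```

With this scheme, a candidate term such as "Robert" would still be found in a transcript where the recognizer emitted "Rupert", since both reduce to the same code.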