Google began using federated learning on Android in March 2021 to improve "Hey Google" hotword accuracy. With an upcoming feature called "Personalized speech recognition," Google Assistant should get better at recognizing your frequently used words and names.
About APK Insight: In this APK Insight post, we've decompiled the latest version of an application that Google uploaded to the Play Store. When we decompile these files (called APKs, in the case of Android apps), we're able to see various lines of code that hint at possible future features. Keep in mind that Google may or may not ever ship these features, and our interpretation of what they are may be imperfect. We'll try to enable those that are closer to being finished, however, to show you how they'll look in case they do ship. With that in mind, read on.
According to strings in the latest version of the Google app for Android, Personalized speech recognition will appear in Google Assistant settings with the following description:
Store audio recordings on this device to help Google Assistant get better at recognizing what you say. Audio stays on this device and can be deleted any time by turning off personalized speech recognition. Learn more
That "Learn more" link could point to an existing support article about how Google uses federated learning to improve "Hey Google" hotword detection with audio recordings saved on users' devices:
It uses the voice data to learn how to adjust the model, and then it sends a summary of the model changes to Google servers. These summaries are aggregated across many users to provide a better model for everyone.
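The core idea in that description can be sketched in a few lines. This is a minimal, simplified illustration of federated averaging, not Google's actual implementation: each simulated "device" trains on its own private data and shares only a weight-update summary, and the server averages those summaries. The function names, learning rate, and toy data are all assumptions for illustration.

```python
# Minimal sketch of federated averaging: devices share only model-update
# "summaries" (deltas), never their raw audio/data. All names and values
# here are hypothetical, for illustration only.

def local_update(global_weights, device_data, lr=0.1):
    """Simulate on-device training: nudge each weight toward the
    device's local data and return only the resulting delta."""
    return [lr * (x - w) for w, x in zip(global_weights, device_data)]

def federated_average(global_weights, deltas):
    """Server side: average the per-device summaries and apply the
    combined update to the shared model."""
    n = len(deltas)
    avg = [sum(d[i] for d in deltas) / n for i in range(len(global_weights))]
    return [w + a for w, a in zip(global_weights, avg)]

# Three simulated devices, each holding its own private data.
global_w = [0.0, 0.0]
device_data = [[1.0, 2.0], [3.0, 0.0], [2.0, 4.0]]
deltas = [local_update(global_w, d) for d in device_data]
global_w = federated_average(global_w, deltas)
print(global_w)  # updated shared model, built without pooling raw data
```

The privacy-relevant point is in the data flow: only `deltas` ever leave a device, while `device_data` (standing in for audio recordings) stays local.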
The new feature looks to extend those machine learning-based improvements beyond "Hey Google" to your actual Assistant queries, particularly frequently used words and ones that include names (such as using your voice to message a contact). Audio from past utterances would be saved on-device and analyzed to improve future transcription accuracy.
Products like the second-generation Nest Hub and Nest Mini already include a machine learning chip that processes your most common queries locally for much faster response times. That idea may now be extending from smart home devices to Android phones as well.
Given Google's stance on Assistant and voice privacy, this will likely be an opt-in feature, just like the existing "Help improve Assistant" setting. Per the description available today, audio is stored on-device and deleted when the feature is turned off. Google warns that when you disable personalized speech recognition:
If you turn this off, your Assistant will be less accurate at recognizing names and other words that you say frequently. All audio used to improve speech recognition for you will be deleted from this device.
It's unclear when this capability will roll out or how big an improvement it will deliver. It comes as Google previewed at I/O 2022 how conversations with Assistant are set to become more natural, with the Assistant essentially ignoring, and even acknowledging, interruptions, natural pauses, and other self-corrections. That contrasts with today, where Assistant responds to a verbatim reading of what you said.