This Google Meet feature is the first AI tool I actually want to use

Android Police

Most mobile AI features feel like someone shoved them into an app because the roadmap needed an AI line item.

Google Meet’s live speech translation actually has a reason to be there.

In April 2026, Google began rolling out Speech Translation to the Meet apps on Android and iOS, bringing AI-powered live dubbing-style translation to mobile calls.

Powered by Google DeepMind technology, it listens to someone speaking another language and plays their words back aloud in the listener's own language.

And that’s the kind of AI feature the industry should be chasing.


Meet’s translation works because it still sounds human

Older audio translation systems transcribed incoming speech into text, translated that into text, and then converted the translated text back into synthetic speech.

That chained process added latency at every stage, and a conversation can't survive every response sitting behind a 10-second buffer.
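The cascaded approach can be sketched with hypothetical stand-in functions (none of these are real APIs; in a real system each stage would be a separate model, and each stage's latency adds to the total before the listener hears anything):

```python
# Sketch of the older cascaded (speech -> text -> text -> speech) pipeline.
# All three functions are illustrative stand-ins, not real APIs.

def transcribe(audio: bytes) -> str:
    # Stage 1: speech recognition. A real model adds its own latency here.
    return "hola, ¿cómo estás?"

def translate(text: str, target: str) -> str:
    # Stage 2: text-to-text machine translation.
    return "hello, how are you?" if target == "en" else text

def synthesize(text: str) -> str:
    # Stage 3: text-to-speech; the audio result is represented as a string.
    return f"<audio:{text}>"

def cascaded_dub(audio: bytes, target: str = "en") -> str:
    # Each stage must finish before the next begins, so the delays stack:
    # the listener waits for recognition + translation + synthesis combined.
    return synthesize(translate(transcribe(audio), target))

print(cascaded_dub(b"..."))  # <audio:hello, how are you?>
```

An end-to-end speech-to-speech model collapses these stages into one, which is what lets the delay stay short enough for conversation.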

DeepMind's system instead uses a unified end-to-end speech-to-speech model, and Google built the feature around a deliberately short processing delay.

Translate any faster, and the dubbed voice can overlap and confuse the listener. Wait longer, and the rhythm of normal conversation starts to break.

The synthetic AI voice plays at full volume over the call audio and mimics the speaker’s cadence and tone. The original language stays faintly audible in the background.

That faint track keeps some of the speaker’s emotion intact, so listeners can still hear laughter, urgency, or concern behind the translation.

Live dubbing is better suited to phones than captions

Most mobile AI features come with homework. Open a separate app. Write the right prompt. Correct the chatbot when it confidently mangles a basic fact.

Google Meet translation works inside the call that people were already going to make.

Users join a video call, open the Tools menu, and turn on Speech translation. They choose the language they speak and the language they want to hear.


For privacy, Google Meet may require a participant's consent before it processes that person's microphone feed and translates their voice for others on the call.


The audio-first design is a big upgrade over translated text captions. Captions force people to watch a small screen while trying to follow a conversation.

That’s annoying enough at a desk and worse on a phone. Live dubbing lets someone listen through earbuds and keep talking while doing anything that makes reading tiny subtitles a bad plan.

Google’s universal translator dream is not here yet

Google Meet is still nowhere close to a universal translator. The current mobile version allows only one language pair per meeting.

If a host sets up a meeting to translate between English and Spanish, those are the only two languages the system will process.

If a colleague in Paris joins and starts speaking French during an English-Spanish meeting, Meet will not translate that French audio unless the meeting switches to a supported English-French pair.

The language list is small, too. Meet currently supports bidirectional translation between English and Spanish, French, German, Portuguese, and Italian.

Google says translation accuracy and language availability will keep improving, but teams that need Mandarin or Japanese are out of luck for now.

There’s also a recording problem. Google’s documentation says translated audio streams are excluded from meeting recordings and livestreams, and Meet records only the original spoken language.

Google’s practical AI feature still comes with plan limits

Google has put this feature behind a paywall. Live dubbing doesn’t work with a standard free personal Google account.


Consumers need Google AI Pro or AI Ultra. Businesses need an eligible Workspace plan, such as Business Standard or Plus, Enterprise Standard or Plus, Frontline Plus, or supported education and AI access add-ons.

The good news is that only one eligible participant needs the paid tier to turn on translation for everyone else on the call. That’s better than making every attendee subscribe, but someone still has to pay.

Google still describes Speech Translation as a beta feature, so availability may continue to change.

This is the mobile AI standard we should chase

A good mobile AI feature should pass a simple test: would it still be useful if nobody called it AI? Meet's live translation passes.

That same thinking would make Google’s other apps more useful. Google Maps could translate local signs, transit alerts, menus, or spoken directions while someone is traveling.

Google Wallet could explain ticket rules, refund windows, warranties, and confusing receipts when those details actually matter. That is the version of mobile AI worth caring about.