

As the CEO of an AI language company, I get a lot of questions about Apple's new Live Translation feature. It's a huge moment for our industry when a tech giant does something new in this space, so I knew I had to test it out myself.
High level: I really wanted this to work seamlessly, like Star Trek’s Universal Translator or Hitchhiker’s Guide to the Galaxy’s Babelfish, and it’s clear that Apple wants that too; however, the experience just isn’t there yet – I still very much prefer turn-based translation to “live translation.” This is the same reaction I had when Google launched translation inside of Meet. In both cases, it is just too easy to get into a situation where neither party knows what to do to make the conversation flow.
Here is the deep dive into my experience:
Live Translation is a new feature (still in beta) enabled by Apple Intelligence, meaning it requires specific, newer hardware and a bit of setup. It cost me $1,000 to try this out, as I had to buy a brand-new iPhone to use it (fortunately, I needed an upgrade anyway).
The essential checklist for using Live Translation: an iPhone that supports Apple Intelligence, compatible AirPods with up-to-date firmware, Apple Intelligence turned on, and the languages you need downloaded to the device.
For those of you who haven’t tried it, Live Translation provides near-simultaneous translation of speech between two languages. The primary focus of this post is the Live Translation with AirPods use case.
Notably, the user experience differs depending on whether each participant has a Live Translation-enabled device and AirPods of their own. In all scenarios, a dynamic transcription and translation are displayed on the screen of the iPhone.
I found the simultaneous translation experience particularly disconcerting: even though the speaker’s voice is muffled, you can still hear both languages at once. This is especially distracting if you have any facility with the other language.
The experience is best suited to 1:1 conversations in a quiet environment. With multiple participants (or just loud people in the background), it can quickly become very confusing to follow, especially because there is a definite delay (sometimes as much as 5-10 seconds) between when someone speaks and when you hear the translation, which makes it easy to lose track of whose words you are hearing translated.
Live Translation is also available via the Phone app; however, I found that, in addition to the issues above, it is an especially confusing experience for the person on the other end of the call if they are not expecting a live-translated call. I feel there should be some sort of announcement, in the other person’s language, letting them know that translation has been enabled.
Live Translation in FaceTime is a different experience, with translations appearing as subtitles on the display. There was no spoken translation, even with my AirPods in. Of note, the translation experience is one-sided: the other person does not see translations on their end unless they also have a Live Translation-enabled device and have turned the feature on.
I am excited by all of the innovation happening in translation AI, as well as the accelerating pace of its integration into everyday communication tools. I love the idea of using AirPods to provide “in-ear” translation. However, I think it will be some time before Live Translation works seamlessly enough, and with high enough quality for all parties involved, to supplant turn-based translation.
Jaide Health is a technology company developing AI-powered healthcare-specific, foreign language interpretation and translation services. Leveraging recent advances in large language models, Jaide Health enables healthcare organizations to improve the overall experience for patients with Limited English Proficiency. Led by “techy clinicians” and security/privacy experts, Jaide Health is venture backed by Inovia Capital, Flare Capital, and Innovation Global. Follow us on LinkedIn and visit www.jaide-health.com to learn more.