Just like Google’s concept AR glasses announced this week, Apple’s Live Captions can take incoming audio and transcribe it instantly. The difference is that Apple’s version will ship “later this year,” which probably means it will arrive in this fall’s iOS 16 release. But the real news here is that this is Apple’s most obvious stab yet at trialing future Apple Glasses features in plain sight.

“As someone who has two parents who both have difficulty hearing, this stands to be a big help,” writes Apple-centric journalist Dan Moren on his personal Six Colors blog. “I am curious to see how well the feature actually works and how it handles a big FaceTime call with a lot of participants; Apple says it will attribute dialog to specific speakers.”

Sleight of Hand

Live Captions, which we’ll get to in a second, is far from the first AR glasses feature that Apple has trialed. The most obvious is the inclusion of LIDAR scanners in iPhones and iPads. These scanners build an accurate 3D map of the world around the device and let the iPhone overlay 3D models onto the real world shown through the camera.

So far, this tech has been used to let you preview new Apple computers on your own desk, play AR Lego games, test out IKEA furniture in your living room, and so on. That hardware is overkill for such novelties, which suggests it’s really there so Apple can hone the hardware and software for a genuine AR application: Apple Glasses.

It’s not just visual AR, either. AirPods have been adding neat AR features for years now. The latest, Spatial Audio, tricks our brains into thinking that sounds are coming from all around us, which makes movies and relaxing soundscapes far more immersive. It’s a great feature today, but it will be even better when it works with Apple’s expected future glasses product. Being able to place sounds in 3D space to match AR objects will really sell the illusion.

Or how about Live Text, the iOS 15 technology that recognizes and reads text in photos, or live through the iPhone’s camera? That’s another feature that is ideal for reading signs, menus, and other text through AR glasses.
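That kind of text recognition isn’t locked away inside the system, either; Apple already exposes it to developers through its Vision framework. Here’s a minimal sketch, using the public Vision API rather than whatever Live Text runs internally, of how a third-party app could pull text out of an image (the function name and flow are ours, not Apple’s):

```swift
import UIKit
import Vision

// A rough, hypothetical sketch of Live Text-style recognition using Apple's
// public Vision framework -- not how Apple implements Live Text itself.
func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }

    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Keep the top candidate string for each piece of text Vision found.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        completion(lines)
    }
    request.recognitionLevel = .accurate       // trade speed for accuracy
    request.usesLanguageCorrection = true

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

Point the same request at live camera frames and you have a rough approximation of what Live Text does through the viewfinder.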

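The LIDAR plumbing mentioned above is just as accessible to developers, via ARKit’s scene reconstruction, which is the kind of thing furniture-preview apps lean on. A minimal sketch, assuming a LIDAR-equipped device and a RealityKit ARView (the function name is illustrative, not Apple’s):

```swift
import ARKit
import RealityKit

// A minimal sketch of LIDAR-backed scene reconstruction via ARKit.
// On a LIDAR-equipped iPhone or iPad this builds a live mesh of the room,
// which is what lets virtual furniture sit convincingly on real surfaces.
func startLidarSession(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()

    // Scene reconstruction is only offered on devices with the LIDAR scanner.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }
    configuration.planeDetection = [.horizontal, .vertical]

    // Let RealityKit use the reconstructed mesh so virtual objects are
    // hidden behind (occluded by) real-world geometry.
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    arView.session.run(configuration)
}
```
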
Live Captions

Live Captions takes speech from a FaceTime call, video-conferencing apps, streaming video, and so on, and transcribes it on the fly, providing subtitles, as seen in this video.

That’s great, but what’s even better is that nothing ever leaves your iPhone. The captions, says Apple, are generated on-device instead of being sent off to a server. This is not only more private, it’s also a lot faster.

“I’m not sure we can trust Apple’s live translation better than Google’s new AR glasses, but I think we can trust that the competition will help breed the best results,” Kristen Bolig, founder of SecurityNerd, told Lifewire via email. “Now that the competition is public and the issues with this kind of technology (privacy, accuracy, etc.) are well known, both companies will not only be in a race to create the best product first but also to create the product that best solves these problems.”

We’d also expect some kind of built-in auto-translation, like the third-party app Navi already offers for FaceTime conversations, or perhaps a way to save these transcriptions during interviews for easier access later.

We’ve long enjoyed excellent accessibility features from Apple, letting us customize our iOS devices to an almost absurd extent, from tweaking the display to make colors and text easier to see, to controlling the entire user interface with external devices, to having the phone notify you when somebody rings the doorbell or a delivery van arrives outside.

Now, we’re all getting the benefits of Apple’s increased research into augmented reality tech. We might not care for IKEA or LEGO, or ever want to buy a pair of Apple’s fabled AR glasses, but that doesn’t mean we can’t all enjoy the fruits of that research.
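For developers who want to experiment with the on-device approach Apple describes, the public Speech framework already offers it. A minimal sketch, assuming speech-recognition permission has been granted and the chosen locale supports on-device recognition (the function name is ours, and this is not how Live Captions itself is implemented):

```swift
import Speech

// A hypothetical sketch of on-device transcription using Apple's public
// Speech framework: when on-device recognition is supported, the audio is
// processed locally rather than being sent to a server.
// A real app must first call SFSpeechRecognizer.requestAuthorization(_:).
func transcribeOnDevice(audioURL: URL,
                        locale: Locale = Locale(identifier: "en-US"),
                        onText: @escaping (String) -> Void) {
    guard let recognizer = SFSpeechRecognizer(locale: locale),
          recognizer.isAvailable else {
        return
    }

    let request = SFSpeechURLRecognitionRequest(url: audioURL)
    // Ask for processing to stay on the device where the locale allows it.
    if recognizer.supportsOnDeviceRecognition {
        request.requiresOnDeviceRecognition = true
    }
    request.shouldReportPartialResults = true  // stream captions as they form

    _ = recognizer.recognitionTask(with: request) { result, _ in
        guard let result = result else { return }
        onText(result.bestTranscription.formattedString)
    }
}
```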