Veteran tech journalist Steven Levy got a chance to talk to Eddy Cue, Craig Federighi, and some key machine learning experts at Apple about all things AI. The whole article is fascinating, as one expects from Levy:
Machine learning, my briefers say, is now found all over Apple’s products and services. Apple uses deep learning to detect fraud on the Apple store, to extend battery life between charges on all your devices, and to help it identify the most useful feedback from thousands of reports from its beta testers. Machine learning helps Apple choose news stories for you. It determines whether Apple Watch users are exercising or simply perambulating. It recognizes faces and locations in your photos. It figures out whether you would be better off leaving a weak Wi-Fi signal and switching to the cell network. It even knows what good filmmaking is, enabling Apple to quickly compile your snapshots and videos into a mini-movie at a touch of a button.
It’s fascinating to see all the places where neural nets and deep learning have come into play, and most importantly (and most Apple-y of all), how often they’re totally invisible to the user. We all think of Siri, but Apple’s AI ambitions clearly run so much deeper.
Also, this tangential point struck me:
When Acero arrived three years ago, Apple was still licensing much of its speech technology for Siri from a third party, a situation due for a change. Federighi notes that this is a pattern Apple repeats consistently. “As it becomes clear a technology area is critical to our ability to deliver a great product over time, we build our in-house capabilities to deliver the experience we want. To make it great, we want to own and innovate internally. Speech is an excellent example where we applied stuff available externally to get it off the ground.”
We’ve always kind of known this about Apple: it wants to control not just the whole widget, but the whole process, soup to nuts. It has become downright zealous about this self-reliant philosophy.