Dave Gershgorn at Quartz has an inside look at an Apple presentation on machine learning at an AI conference this past week:
This kind of work is essential for Apple, as a hardware company that makes mobile devices. By slimming down the neural network, iPhones and iPads can identify faces and locations in photos, or understand changes in a user’s heart rate, without needing to rely on remote servers. Keeping these processes on the phone makes the features available anywhere, and also ensures data doesn’t need to be encrypted and sent over wireless networks.
A lot of the details are kind of high-level, but perhaps the most interesting part of this is that Apple will now be able to publish and share its AI research. For a company that tends to be as tight-lipped as Apple, that’s a big shift, and I suspect it’s driven in no small part by the widespread perception that the company is behind Google in this arena. (The article also mentions that Apple says its photo-processing algorithm can run through twice as many images per second as Google’s.)
Just another sign that the Apple of 2016 definitely isn’t the Apple of 2006.