Apple is serious about machine learning, but maybe the wrong parts
By Dan Moren
April 17, 2018 11:39 AM PT
The latest entry in Apple’s Machine Learning Journal showed up a couple of days ago, and it details how the “Hey Siri” trigger phrase works, including how it prevents other people from activating it by mistake. It’s a fascinating, if technical, read.
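To give a flavor of the personalization piece: the Journal entry describes summarizing a user's enrollment utterances as speaker vectors and scoring new utterances against them. Here's a heavily simplified sketch of that idea using cosine similarity; the vectors, threshold, and function names are illustrative assumptions, not Apple's actual implementation.

```python
import math

def cosine_score(a, b):
    """Cosine similarity between two speaker vectors (toy stand-ins
    for the embeddings the Journal entry describes)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def accepts(profile_vectors, utterance_vector, threshold=0.7):
    """Accept the trigger only if the new utterance's vector is close
    enough to one of the enrolled speaker's vectors. The threshold is
    an arbitrary illustrative value."""
    best = max(cosine_score(p, utterance_vector) for p in profile_vectors)
    return best >= threshold

# Toy enrollment: two vectors from the device owner's setup phrases.
enrolled = [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]]
print(accepts(enrolled, [0.85, 0.15, 0.15]))  # similar voice -> True
print(accepts(enrolled, [0.0, 1.0, 0.0]))     # dissimilar voice -> False
```

The real system does far more, of course (acoustic modeling, multiple detection stages, on-device thresholds tuned against false accepts), but the gate-on-similarity shape is the part that keeps other people's "Hey Siri" from triggering your phone.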
But the timing of this entry is interesting. Just two weeks ago, Apple poached Google’s AI chief and today word comes that Apple is beefing up a Seattle-based machine learning team. It’s also the Journal’s first entry since December of last year, prior to which updates were coming at least monthly.
Set all of that against the backdrop of recent suggestions that the HomePod’s initial sales might be underwhelming, and it’s not hard to see a tacit narrative emerge. There’s a temptation to phrase it as “Apple gets serious about machine learning,” but that’s a facile interpretation: the engineering demonstrated in the “Hey Siri” implementation proves that the company doesn’t half-ass these things. If anything, you might argue the opposite: that Apple spends too much time perfecting these fine details.
Elegance and sophistication are a key part of Apple’s brand, but at times that polish seems to overtake the substance. It’s like meticulously landscaping every tree, shrub, and blade of grass in the yard of a house that’s a movie-set facade.
Perfect, as the old saying goes, is the enemy of good, and in this case I wonder if Apple has been perfecting some features at the expense of others that might be more crucial. It’s great that the HomePod can recognize me a couple rooms away even when loud music is playing, but if it can’t or won’t execute the commands that follow that trigger, what’s the point?