Jason Snell for Tom's Guide
October 13, 2019 8:28 AM PT
Apple’s newest contribution to the smartphone computational-photography arms race came wrapped in a fuzzy sweater, just in time for autumn. Deep Fusion is a method that, by all accounts, generates remarkably detailed photos on the iPhone 11, iPhone 11 Pro and iPhone 11 Pro Max. How? By fusing multiple 12-megapixel camera exposures into a single image, with every pixel of the image given the once-over by intense machine-learning algorithms.
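Apple hasn’t published how Deep Fusion actually works, but the general idea of merging several exposures pixel by pixel can be sketched in a few lines. The toy example below (NumPy, grayscale, my own simplification, not Apple’s algorithm) blends a stack of frames by giving each frame’s pixel a weight proportional to its local detail, so the sharpest source wins at every point in the image:

```python
import numpy as np

def fuse_exposures(frames):
    """Toy per-pixel fusion: weight each frame's pixel by local contrast.

    `frames` is a list of same-shaped grayscale images (H, W) as floats.
    This is NOT Apple's Deep Fusion (which is unpublished); it only
    illustrates merging several exposures pixel by pixel, favoring
    whichever frame has the most local detail at each position.
    """
    stack = np.stack(frames)                       # shape (N, H, W)
    n, h, w = stack.shape
    # Local contrast: absolute difference from a 3x3 box-blurred copy.
    pad = np.pad(stack, ((0, 0), (1, 1), (1, 1)), mode="edge")
    blur = sum(pad[:, dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
    weights = np.abs(stack - blur) + 1e-6          # per-pixel detail weight
    weights /= weights.sum(axis=0, keepdims=True)  # normalize across frames
    return (weights * stack).sum(axis=0)           # weighted per-pixel blend
```

Because the weights at each pixel are nonnegative and sum to one, the fused result is a convex blend of the input frames; a real pipeline would add frame alignment, noise modeling, and (in Deep Fusion’s case) learned rather than hand-tuned weighting.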
Like so many camera phone innovations of the last few years, it’s a remarkable combination of sensor technology, optics and software that’s transforming how we take pictures.
And yet … Deep Fusion and its sweater also say a lot about the perplexing era the smartphone industry currently finds itself in. What does it say about a feature that creates appreciably better photographs, but whose improvements are hard to notice unless you zoom in all the way and carefully toggle back and forth between samples? Why is it that the best picture Apple could use to show off Deep Fusion was an awkward shot of a dude in a sweater? And what does it mean that Deep Fusion received a clever name and several slides in Apple’s biggest media event, but is invisible to users of the iPhone’s camera app?
What I’m saying is, this sweater raises a lot of questions.