Apple wrote checks Camera Control can’t cash

The Camera Control button on the iPhone 16 family seemed like a good idea, but the devil’s always in the details, isn’t it? Apple made too many promises, all of them in conflict with one another, because they all rely on the same tiny piece of hardware to function. And as Apple ships more features, things aren’t getting better.
A half-baked half press
It was definitely confusing that Camera Control was introduced as a shutter button that could also be half-pressed—but the half-press gesture didn’t do the thing it does on every normal camera, which is to lock focus and/or exposure. Apple shipped Camera Control with a complex swipe-and-press interface to move among different functions but said that the most basic exposure/focus function would be coming later.
The new half-press feature is in direct conflict with the original, overloaded half-press behavior. To enable it, go to Settings -> Camera -> Camera Control, where there’s a toggle for AE/AF Lock.
In hindsight, it’s absolutely the right move to have this feature disabled by default. Not just because most ordinary people wouldn’t want to use it, and not just because it’s in such deep conflict with the tiny half-press menu overlay for the slider functions, but because it’s terribly executed.
First of all, you don’t always want to lock both AE and AF. Sometimes you do, but not always. We’ll set that aside for now. The way the iPhone handled AE/AF before is that you could tap on something, and it would set focus and exposure for the region you tapped. If that subject, or your camera, moved, the temporary lock would go away. If you tapped and held, you’d get an actual AE/AF lock, where the subject or the camera could move and the AE/AF would stay in place.
A way to get around the lack of independent exposure controls in the Camera app is to tap the sun icon overlay next to your single-tapped region and drag up or down to apply exposure compensation relative to the exposure the Camera app picked for you. This comes in handy when I take photos of neon signs. You can also get exposure compensation in one of the overlay submenus: tap the arrow at the top to reveal the bottom row, as you do, and look for the icon with a plus and minus in a circle. Not a sun. (It’s as perfectly logical and consistent as the rest of the interface.)
The problem with the AE/AF lock triggered by Camera Control is that it activates a large region in the center of the screen. On a real camera, you can set this to a significantly smaller center area: basically a crosshair, or a single phase-detection point in the center. Even the region you get by tapping and holding on the screen for AE/AF Lock is much smaller.
If a “subject” is in frame, like a person’s face, the Camera app draws a box specifically around the bounds of their face instead of the larger region box it draws for a landscape or other wide shot. It’s still not a tiny box you’re sticking to a person’s eye, but it does not cast the wide net that the oversized region box does.
The reason the region size matters is that if your subject is layered in depth (say, a foreground, middle ground, and background), you’ll capture some of another layer in what you’re trying to lock instead of just the center-most point. It’s a lack of precision, and it applies to exposure metering and focus together. Again, for some reason, you can tap, or tap and hold, to get a finer level of control than you can with the thing that has “control” in the name.

You can still lock onto the layer you want by moving around until only that subject falls inside the large center region, but that’s more effort than tapping, and more movement than you’d need with a real camera’s smaller center region, because you have to fit what you want into that big box.
There are no deep menus to go into to refine the region size or lock only exposure or focus. This is the entirety of the feature enabled by the buried toggle. On or off. Press the button gently, but not too gently. Also move a lot, maybe.
Otherwise, you can simply give up and tap the screen, which anyone with any model of iPhone can do. What a selling point for Camera Control!
This is absolutely where third-party camera apps can fill a void, but then what was the point of doing all this not-so-useful work for the official top-dog Camera app?
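For what it’s worth, the raw capability is there for any app to use. AVFoundation, the framework camera apps are built on, treats focus and exposure as separate, point-addressable controls. Here’s a minimal Swift sketch (the helper name and structure are my own, not anything Apple ships) of how a third-party app could lock just one or the other at a precise point:

```swift
import AVFoundation
import CoreGraphics

// Hypothetical helper: lock focus and/or exposure at a single normalized
// point (0,0 = top-left, 1,1 = bottom-right of the frame), independently.
func lockPointOfInterest(on device: AVCaptureDevice,
                         at point: CGPoint,
                         lockFocus: Bool,
                         lockExposure: Bool) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    if lockFocus,
       device.isFocusPointOfInterestSupported,
       device.isFocusModeSupported(.autoFocus) {
        device.focusPointOfInterest = point
        device.focusMode = .autoFocus        // focuses once at that point, then holds
    }

    if lockExposure,
       device.isExposurePointOfInterestSupported,
       device.isExposureModeSupported(.autoExpose) {
        device.exposurePointOfInterest = point
        device.exposureMode = .autoExpose    // meters once at that point, then holds
    }
}
```

The point of interest is a single normalized coordinate, not a big box, which is exactly the precision the Camera Control lock gives up.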
Lacking in Visual Intelligence
Apple also included Visual Intelligence in iOS 18.2, and it’s a huge disappointment. The two on-screen buttons always divert you to two different third-party services. If you select Ask, the image will be sent to ChatGPT. If you select Search, it will be sent to Google. There are appropriate warnings for both services, but again, Apple’s vaunted new feature is primarily a quick image upload to a partner.

Other options can be triggered if Visual Intelligence detects certain kinds of content, but it’s pretty picky about it, and unlike the ever-present “Ask” and “Search” buttons, it doesn’t tip you off that it can do more until you press the shutter.
In one case, I held my phone up to a yellow warning sign in Spanish, and it offered up a Translate button, but only after I hit the “shutter” button, which doesn’t save a photo but pauses the input so the software can examine it more thoroughly. Google’s apps and Apple’s own Translate app offer live translation without needing to hit a shutter to pause, but Visual Intelligence doesn’t have that option.

There is also the option to summarize the text you took a photo of with the shutter button. It’s probably the least likely thing I would want to do, but hey, it’s something the software can do, so why not?
Apple has many other machine learning models for all kinds of image recognition, but only the ones that use optical character recognition are present. I can’t use this to identify a plant, for example. I have to take a photo of a plant, go to the photo in my Camera Roll, expand the photo, thumb the whole thing upwards to reveal the info panel, and then tap on the plant identification option there. The same goes for animal and landmark identification.
Conversely, you can’t use Visual Intelligence’s “Ask” and “Search” features on a photo you’ve already taken from inside the Photos app, the way you can use those other recognition features. You can certainly send those images off to ChatGPT or drop them in the Google app. What gives? Why not put the “Ask” and “Search” buttons under every photo? Why not put them in context menus?
Maybe, someday, all of those things will be true, and Visual Intelligence will act as an umbrella for all the image-based models Apple has. Why make a promise about shipping this right now when it’s really not terribly beneficial to anyone, including Apple?
If the point was to appear as though they weren’t behind (Google Lens shipped one million years ago), then unfortunately the shipping product makes them look further behind than if it were still something in a lab they were promising. There is a danger that this trains customers to believe Visual Intelligence is not worth using, especially since it’s so hard to get to.
Dialing back the dial
Speaking of training customers, I’ve reached the point where Camera Control has trained me to turn off the features I keep accidentally triggering: Settings -> Camera -> Camera Control -> Accessibility, where I toggled off both Light-Press and Swipe. I’m not interested in accidentally triggering them, and there’s no reward for trying to do anything with them on purpose.
Apple has not addressed any critiques of Camera Control other than making good on the “we are totally shipping a half-press focus lock” promise from the launch. Anecdotally, most people use it as a Camera app launcher or shutter button that’s easier to reach than the volume-up button. Yay?
I’ll leave AE/AF Lock on for the time being, but the truth is that, with the way it works, it’ll likely go back to the default too, and all of that will have been for naught. I currently regret that so many of us asked Apple to give us this, because it was only ever going to be another thing piled on top of the complicated stack of decisions already made about what Camera Control is. They can’t take these things away, but maybe they can make profiles, or group them into modes, so the button does less under certain circumstances, rather than leaving people not wanting to mess with it at all. Perhaps it’s time to exercise some control.
[Joe Rosensteel is a VFX artist and writer based in Los Angeles.]