Our video gaming preferences and habits, thoughts on Nomad’s new thin Qi-rechargeable tracking card with Find My, how we connect non-iPhone devices on the go, and our top three software wishes for the upcoming WWDC.
Fascinating blog post from Panic designer Neven Mrgan about the feeling of getting a letter from a friend written by AI:
Where exactly would I draw the line between helpful features (“make this red shirt green instead”) and offensive takeovers (“generate an album cover in the style of barney bubbles, award-winning”)? As I said, until this email I was more bored than enraged by AI, so I didn’t have an immediate answer. I use computer crap all the time—it’s pretty cool! So what was different here? I thought I’d come up with some comparisons that capture different aspects of my friend’s AI email, in order to see how I feel about them.
I think that more than maybe any other technology in my lifetime, AI is going to be profoundly divisive in how it’s received by users.
Marvel Studios and ILM’s “What If…? – An Immersive Story” for Vision Pro, launching Thursday as a free app, isn’t an immersive video or a game. It’s something in between—a mixed-media experiment, roughly an hour long, that tries to use every feature of the Vision Pro to make a compelling entertainment experience.
Based on the Disney+ animated series (and, more distantly, the Marvel comic), “What If?” is built around the premise of variant versions of famous Marvel characters. The “What If?” comic was doing multiverse stories decades before they were cool.
In the new “What If?” immersive story, you’re called by The Watcher (the narrator and main character of the TV series, filling Rod Serling’s shoes but with a bit more agency) to intervene in various multiversal crises, aided by Sorcerer Supreme Wong, who equips you with magic spells to use during your adventure.
The scenes with The Watcher and Wong don’t take place in a fantasy world—they’re augmented reality scenes set in wherever you’re using the Vision Pro. It’s pretty funny to see The Watcher towering above Wong, his head clearly too tall to fit in my house (but somehow doing it anyway). In addition to augmented reality, the app contains extended animated scenes (very much like segments from the TV show, but in 3-D and displayed via a sort of crystal shard floating in front of you) and quite a few immersive environments.
Watching TV on a crystal shard, as you do.
I really enjoyed the environments, which are cleverly designed to resemble the style of the animated “What If?” TV series, but upgraded a bit so that they make sense in a 3-D, 360-degree context. I was especially impressed by a few surprising easter eggs littered around, and the design of a see-through pod containing something very interesting.
The environments are interactive in the sense that you can look around and drink them in, but you don’t really interact with them directly beyond that. You don’t move around, and your only interaction that affects the scene is when you cast spells. (If you want to revisit those environments later, and just look around, you can—the app lets you revisit any of its “chapters” from the main menu.)
Spellcasting works well with Vision Pro’s hand tracking.
Oh, casting spells: The app makes the clever choice of building the entire interactive mechanic around magic, which makes sense because—like the Vision Pro itself—Marvel Universe magic is controlled via hand gestures. Wong will train you to shield yourself, fire power blasts, seal away dangerous objects, and collect others.
If this all sounds very much like a video game… it’s not. There’s a reason the app is subtitled An Immersive Story: Your actions (while very fun!) are really just there to make the story move along. There’s no way to lose. The story will wait for you to complete your task, and then it continues on. It’s all done in a very subtle way—the music plays, the action continues—but “What If?” is trying to be an engaging story that you’re present in, not a game. Consider that this app is from two Disney-owned companies, and consider it sort of like an interactive theme park ride. You can do stuff, but you can’t really change the ride.
That is, except for one point in the story, where you’re offered a choice. It’s a real, legitimate “final choice” that results in different endings depending on what you choose. It’s the lightest dollop of branching on a story that otherwise goes in a straight line—clearly the budget of this project was not high enough to create numerous scenes that will only be seen by the fraction of the viewers who make those specific choices—but it’s a fun moment nonetheless.
It’s hard to judge “What If?” overall, because it really does seem like a sampler platter of ways this sort of entertainment could evolve in the future. Is there room for something that’s more interactive than watching TV, but less interactive than a full-on video game? I have no idea. But I do know that the hour I spent with “What If?” was maybe the best hour I’ve spent on the device since I got it. If Apple is looking for a single app that demonstrates all the features of the Vision Pro at its best, “What If?” may be the answer.
The making of “What If…?”
I got a chance to briefly talk to two people involved in making the app — Indira Guerrieri, technical art director, and Joe Ching, lead experience designer.
Guerrieri on translating the animated series to full environments: “At the beginning, it was all, ‘we’ve got this gorgeous artwork and these gorgeous characters, and we got to make them right.’ And then all sorts of discussion started happening around how much do we actually make it a little more realistic so that it feels like you’re in a 3D space? I remember the discussions were, do we want to make it more sort of toonish, do we want to add marks of a pencil from a toon-rendered drawing, or do you want to leave it? So we went somewhere in between.”
Ching on the level of interactivity in the app: “It was actually a very conscious decision that we made… we did not want any kind of a health bar where you say, ‘Oh, I died, and now I have to start this thing over.’ We really wanted it to be where the player could just do whatever they wanted in this world and not really have any sort of consequence to either not engaging or engaging. So that way, anybody at any skill level could really get through and sort of get the entire storyline.
“There are going to be particular players who are going to be focused on the narrative and going to want to sit there and watch everything that happens in front of them, and then there are going to be people who just want to, you know, take the time to look around. And that goes back to the no-consequences of, ‘If I don’t do anything, I’m not going to die.’ So that if a player only has 10 minutes and they just want to take a look at the environments, they can do that. There are some Easter eggs here and there. So, it’s really how the player wants to go through there, if they want to engage, and also the idea of replayability. You can play through the storyline the first time and then just go through and just appreciate everything that Indira and the art team have done. So, it’s sort of a play as you like.”
Guerrieri on targeting the Vision Pro: “We designed the project for a device that has so much range, that was the beauty of it. It’s like, you can go, ‘Yeah, I can do color. I can do, you know, nuance in the way things look.’ It’s pretty cool. That was a joy, to work with that range.”
Technology improvements are a bit like going to a movie or a magic show: you want to be wowed, but it works best when you don’t see what’s going on behind the scenes. You don’t want to know about the trapdoor, or the strings holding people up as they soar through the air—even if it gives some appreciation for the difficulty of the production, it robs it of some of its power and awe.
Apple ends up having to ride this line a lot. At the root of its ethos has been the desire to provide technology that feels magical and amazing to its customers. With every year that goes by, every new device that comes out, Apple wants to boast about its impressive new functionality, but some of its biggest technological breakthroughs happen at a level that is totally invisible to its users.
It’s cases like that where the company has the difficult task of impressing how advanced some of these technologies are without belaboring the point. And with the onslaught of artificial intelligence features, it also means that the company has its work cut out for it if it wants to continue being the best example of magical, invisible technology.
All my computing devices, save one, have color displays. The last time I regularly used a computer without a color display was probably in the mid-1990s. The only exception is my e-reader, which—since the very first Kindle I bought—has been a black-and-white E Ink screen that excelled at the boring job of displaying text. But… what if an e-reader added color?
We’ve reached the point where E Ink technology—which is unlike the normal display technology found in our phones and computers, but allows low-power reflective displays that work more like actual ink on paper—can actually display color decently and affordably. And so I’ve spent the last few weeks with my first color e-reader, the $219 Kobo Libra Colour.
In theory, color adds a new dimension to the e-reader. Highlights can be color coded, and book covers finally appear in full color. This is especially fun when I turn off the reader and a boldly colored book cover, designed for maximum marketing appeal, appears on the device’s screen. Unfortunately, a moment later the device’s front light turns off and the colors become muted unless the screen is in bright light.
I love e-readers, and for the last few years my e-reader of choice has been the Kobo Libra 2. It’s a small (7-inch diagonal) device that’s easy to hold, with physical page turn buttons. It’s a winner. And now, there’s one in color!
But the truth is, most of what I use an e-reader for is text on a page. Color isn’t really part of the equation. I spent some time reading a color comic book using the Libra Colour, and it worked—but it wasn’t fun. The screen is just too small to read comfortably, and the colors were muted, feeling more like I was reading on newsprint (or a very old comic book) than on a bright, modern iPad display.
You can read comics on the Kobo Libra Colour, but I wouldn’t recommend it.
And the ugly truth is that as miraculous as it is that E Ink displays can do color, the Libra Colour’s screen is actually inferior to the screen on the Libra 2. Up close, it’s clear that there’s some sort of visible background texture on the Libra Colour (sort of a yellowish-gray wash) that reduces contrast. And when I cranked the brightness up to 100% to read in bright sunlight, it was clear that the Libra 2 was brighter and clearer than the Libra Colour.
It’s hard to see, but the Kobo Libra 2 screen (left) is brighter and offers higher contrast, while the screen of the Kobo Libra Colour (right) has a patterned background that reduces contrast.
Physically, the Libra Colour is almost identical to the Libra 2. It’s a little thicker at the grip edge, has a different plastic texture on the back of the case (which I found more pleasant), and is a few grams lighter than the previous model. Unfortunately, it still has a recessed screen, meaning dust and hair can collect around the edges of the bezel. That’s a negative, but it makes it easy to find the edge of the display and slide your finger up and down to adjust brightness without fiddling with a more complex user interface. I’d still rather have a flush screen, though.
In terms of software, Kobo has seen fit to enable Dropbox support on the Libra Colour—it was previously only available on higher-end Kobos, not the Libra—and added support for Google Drive as well. This means it’s a lot easier to sideload books, comics, and random PDFs from your collection without having to attach the Kobo via USB-C. In practice, though, I found myself still using the Calibre app to sideload files to my Kobo unless I was really in a pinch, because Kobo’s own Dropbox import doesn’t “dress up” ePub files in any way, while Calibre has some nice plug-ins that convert generic ePubs to use some Kobo-specific extensions that improve the presentation of the books.
Color book covers are fine (when they’re in bright light), but is that enough? (Pictured: Sarah’s Kobo Clara Colour.)
The Libra Colour is not a bad e-reader, but it feels like a misstep by Kobo. Color isn’t really necessary for reading text, and the color display offers a warmer color temperature and worse contrast. All for a $40 higher list price—though at least cloud syncing isn’t being withheld from the Libra line anymore. I wouldn’t mind the move so much—I’m sure some people want to view color comics and PDFs and would be willing to put up with the small screen, and users of the optional Kobo Stylus 2 might enjoy having different ink colors for their markup—if Kobo had kept a non-color model around at a lower price. But as I write this, the Libra 2 is not available from Kobo.
If you’re a casual reader of eBooks and are barely motivated to buy a dedicated e-reader at all, the $120 base-level Kindle ($20 less if you let Amazon stick ads on it) is probably good enough, though it doesn’t have a flush screen, isn’t waterproof, and has no page-turn buttons, which I consider essential for pleasurable reading. The $130 Kobo Clara BW is similar, and has similar drawbacks. (My friend Sarah Hendrica Bickerton upgraded to the $150 Kobo Clara Colour, which was released at the same time—and she had the same issues with the screen being dimmer and off-color that I saw.) The Kindle Paperwhite also doesn’t have buttons, but it’s got a flush screen and is slightly cheaper than the Libra—$150 with ads, $170 without.
(Reader, I am filled with despair at the current state of e-reader options. More on this soon.)
In the end: I don’t mind the Kobo Libra Colour. It definitely fills a niche. But adding $40 to the price and degrading the screen quality a bit, all in the name of nice-but-not-necessary color, is really frustrating. The Libra 2 was my go-to recommendation for the discerning eBook reader; the Libra Colour might still take that crown, but it’s an all-around worse value than its predecessor, and that’s really a shame.
Apple pushes back against regulators as photos come back from the dead. Lastly, I should apparently not write about streaming service bundles around lunchtime.
Not going gentle into that good night
Apple is hoping that the path of legal recourse is a two-way street.
“Your honor, what even is a ‘lawsuit’? Dresswear for bills that have passed Congress? I ask you, have you ever heard of anything so ridiculous? Why, even the Bill in the Schoolhouse Rock video upon which our entire system of governance is founded was naked but for a ribbon and a button.”
Apple was happy to pony up for previous fines, which were in the paltry hundreds of millions, but a billion here and a billion there and pretty soon Tim Cook is walking down to Apple Legal asking what the hell is going on.
While Apple may be trying to stem the flood, the waters still seem to be rising.
Of course, in Japan it’s considered unusual to not be able to get apps from talking vending machines, clean and well-stocked 7-11s, and capybara cafes so this was probably inevitable.
Shutter bug
Apple recently introduced a new Photos feature allowing you to revisit magic moments that you might have forgotten about because you mistakenly deleted them.
Oh, wait, that’s not a feature, it’s a bug.
Well, now it’s patched, but not before it kind of freaked people out. Thursday afternoon, Apple elaborated to 9to5Mac about the bug, explaining that it was a database corruption problem and affirming that:
The bug only hit a small number of devices and photos.
If a device was properly reset before sale, photos would not magically return to it after it was sold (no matter what that dude on Reddit said).
Photos are not in a state of quantum flux in which they are both deleted and not deleted depending upon the act of observation. It’s just a bug.
Speaking of changing the state of things, remember how it seemed like every company from HBO to Paramount to Carl’s Jr. was making their own streaming service? Laugh if you want, but the show in which the Big Angus El Diablo Combo was a good cop on the edge was really good and deserved better than to be canceled after two seasons.
Chief: “Turn in your badge, Big Angus El Diablo Combo!”
Big Angus El Diablo Combo: “I’ll turn in my badge… after justice is served! Hot and spicy. With a Coke and fries. See local details for offer.”
Chief: “You’re out of line, Big Angus El Diablo Combo!”
Big Angus El Diablo Combo: “You’re out of line! This whole town is out of line! But get in line for charbroiled burger and breakfast combos daily.”
I could write this all day. His partner is Chicken Tender Wraps. There’s a lot of sexual tension between them.
Anyway, instead of every company having a separate streaming service, what if instead we combined the streaming services into some sort of, oh, package or bundle?
So, if you’re already paying for Comcast, you can get Netflix, Apple TV+ and the streaming service equivalent of a player to be named later for $8 less a month, as long as you don’t mind ads on two of them.
Honestly, of the two combos I’ve mentioned here, I’d rather have the Big Angus El Diablo.
[John Moltz is a Six Colors contributor. You can find him on Mastodon at Mastodon.social/@moltz and he sells items with references you might get on Cotton Bureau.]
One bug teaches us a lot about Apple’s unnecessary opacity, media dynamics, and taking Internet claims at face value. [More Colors & Backstage: 15 minutes more, including not getting your hopes up for WWDC AI announcements.]
My thanks to Kolide for sponsoring Six Colors this week.
None of us are as good at clocking deepfakes as we think we are. Even your mom, or your boss, or anyone in your IT department might not be able to tell the difference. We all think we’re clever enough to spot a fake, but in real life, people only catch voice clones about 50% of the time.
But the good news is that we can be trained to look past our vulnerabilities and recognize a suspicious phone call, even if the voice sounds just like someone we trust. Kolide has a blog post all about it. It’s a frank and thorough exploration of what we should be worried about when it comes to audio deepfakes.
Speaking to Chance Miller at 9to5Mac, Apple explained the issue (patched in the recent iOS 17.5.1 update) that caused old photos to spontaneously appear in some users’ libraries:
Apple confirmed to me that iCloud Photos is not to be blamed for this. Instead, it all boils down to the corrupt database entry that existed on the device’s file system itself.
According to Apple, the photos that did not fully delete from a user’s device were not synced to iCloud Photos. Those files were only on the device itself. However, the files could have persisted from one device to another when restoring from a backup, performing a device-to-device transfer, or when restoring from an iCloud Backup but not using iCloud Photos.
In the release notes for 17.5.1, Apple had attributed the resurrected photos to “database corruption.” This didn’t seem unreasonable—databases can be finicky and corruption is not uncommon—but it also didn’t go far enough into explaining the issue for those who were affected. The further clarification to Miller does hold water, though, especially in terms of how photos moved from device to device. (The company did also say that the single much reported story about a photo cropping up on a device that had been erased and given to someone else was false, which also makes sense: as John Gruber pointed out, it didn’t pass the sniff test.)
Attention has understandably focused on sensitive photos that have shown up in libraries, though I think that’s more of a human bias: what kind of picture are you going to notice popping up? (More to the point: which pictures are people likely deleting the most?)1
While it’s good that Apple has now (after several days of requests) clarified the issue, this does speak to a larger point: why is the company not more proactive in talking about these issues when they come up? For example, there still doesn’t seem to be any acknowledgement of the issue that locked users out of their Apple IDs/iCloud accounts last month. Some of this is undoubtedly an issue of scale—even a problem that seems widespread might only account for a tiny fraction of Apple’s overall user base. But when issues *do* hit the broader press, it still seems like the company only responds some of the time—and though the complaints may subside when the immediate issue is resolved (even if silently), it does all contribute to the feeling that Apple’s devices and services are a bit of a black box.
Personally, the pictures I delete the most are screenshots I take for writing pieces. Would I notice if one from several years ago suddenly popped up in my library? Probably not. ↩
There’s been a lot of buzz lately involving a bug in Photos that caused some deleted items to reappear in libraries, including some (apparent) misinformation that blew the entire story out of proportion. Is Apple’s “Secure Erase” feature truly insecure?
As far as I can tell, claims made about securely erased devices recovering old images originate from a single post on Reddit, since deleted by the person who posted it. Although that brought a series of cogent responses pointing out how that isn’t possible, it was picked up and amplified elsewhere, under the title “iOS 17.5 Bug May Also Resurface Deleted Photos on Wiped, Sold Devices,” which is manifestly incorrect. Sadly, even those who should know better have piled in and reported that single, retracted claim as established fact.
Good information and a useful lesson about not believing everything you read on the Internet.
Do we multitask on our iPads? What AI uses do we actually find helpful? Plus, the email apps we use and whether we’ve experimented with cloning our voice.
Picture this. (Photo illustration by Joe Rosensteel)
For the past year, pressure has really been ramping up on the tech industry to do stuff with AI. What “AI” actually means can vary, but usually refers to a large language model (LLM) chatbot that takes natural language input.
LLMs are artificial, but not really intelligent. They can be quite wrong, or simply malfunction. But they are far better at conversational threads, and at understanding context, than original-recipe voice assistants like Siri. It was enough to spook Apple’s executives and lead them to begin a crash AI program.
OpenAI, Microsoft, Meta, Google—you name it. It’s a land grab. Everyone is trying to find a way around smartphone platforms, search monopolies, data brokers, ad sales, SEO, publishers, photographers, stock footage… pretty much everything. The urgency, the sheer sweatiness of tech companies to show their AI relevance is palpable.
Apple doesn’t want anyone to see them sweat, but at WWDC they’re going to have to break out the AI buzzwords and show where they fit into the current zeitgeist. Here’s what Apple can learn from the mistakes other companies are making when it comes to demonstrating AI prowess.
Summaries and slop
Don’t show off summarizing a conversation. I know Mark Gurman suggests this might be a new feature, but every demo of it from other companies has gone over like a lead balloon. Summarization demos say one thing: “How can I more efficiently ignore the nuance and humanity of the people around me?” Also, demos of summaries are just plain boring.
Google I/O featured several instances of summarization that were not useful and borderline disrespectful. There was a theoretical conversation between a power couple and their prospective roofer. Google’s “helpful” summary said a quote was agreed to, but didn’t say what the quote actually was! The price—seems like a key element of a quote—didn’t appear until a follow-up question. It also omitted all the nuance of the roofer’s interactions with the husband in the scenario. Who would trust that summary?
LLM summaries remove words, collapse context, kill tone, and neuter meaning. Busy technology executives eat these demos up, though!
Don’t demo things that snoop on a user’s calls or their device’s screen. At Google I/O, a demo displayed a fraud warning during a phone call. That means there was an AI model listening to the phone conversation. Even if that’s happening entirely on your device, it’s still unnerving that Google is now listening to the contents of my phone calls. The same goes for Microsoft’s Recall, another on-device feature that watches everything you do—so long as you forget Microsoft’s lousy track record securing people’s devices.
Under no circumstances should there be a chatbot in a conversation with real people, jumping in to offer to help coordinate times or issue reminders. Fortunately, Apple doesn’t ship a workplace chat platform, so we’re unlikely to get Google’s demo of “Chip,” the nosy virtual chat kibitzer shown at Google I/O. But I don’t want that bot in my iMessage threads, either.
No generative slop. Don’t show off AI-written poetry or book reports. If people ask for help writing a cover letter, show them an example of a cover letter. AI should point users to vetted and approved templates. (But there should be an AI story with Xcode at WWDC, or why even have it be about AI? It just needs to be respectful of developers’ needs, and actually useful in helping developers with their jobs.)
I think Apple already learned a valuable lesson about visual metaphor when they smashed instruments of human expression into a thin iPad, but just to reiterate: Don’t do that again.
Speaking of creation: Don’t show off images generated out of nothing but a prompt. Any generative elements should be augmented from source images or video. Show off altering aspect ratio on an image, object and lens flare removal, creating thumbnail images, sharpening, denoising, or focus effects.
Even then, keep it grounded to what a reasonable person would want to do with their photos. The Photos app doesn’t need to become Midjourney, or Stable Diffusion, and it certainly doesn’t need to use any models with opaque, legally questionable sources to augment a photo of you smiling at the beach. It should still be that photo at the end of the day.
As for partner demos, I would recommend against demonstrations from companies that have AI models that allow people to make a logo or icon for their company or product without using an artist. Under no circumstances should Midjourney, Dall-E, or any of the other generators that scraped art and photos off the internet be used as a demo. That sends the wrong message, even if it is absolutely a use case that can be demoed to show how the neural engine makes creating a logo 90% faster than on Intel.
AI video tools that handle retiming, color grading, detail recovery, and noise reduction are all acceptable, especially if they can lean on Apple’s multifaceted imaging pipeline, or can use Apple’s depth data as part of the dataset in processing the footage.
For example: Apple is interested in customers shooting Spatial Video, but there are technical shortcomings with the different lenses. Show us how data can be transferred from one eye to the other to help reduce artifacts, and increase resolution. Do an easy-to-use version of something akin to Ocula.
It is possible to preserve AI/ML as a tool without having AI/ML take over the output. There should always be a kernel of reality in every demo to ground it. It should apply to real life, not try to compete in the crowded hallucination market.
Hey, Siri
Now that the lede is good and buried, let’s talk about Siri.
We’d all love a senior Apple exec to get on stage and issue a mea culpa before launching the new version, but it’s probably going to be something more like, “Millions of people use Siri every day, which is why we’re excited to announce Siri is even better than before.”
Unfortunately, Mark Gurman has kind of burst the bubble:
The big missing item here is a chatbot. Apple’s generative AI technology isn’t advanced enough for the company to release its own equivalent of ChatGPT or Gemini. Moreover, some of its top executives are allergic to the idea of Apple going in that direction. Chatbot mishaps have brought controversy to companies like Google, and they could hurt Apple’s reputation.
But the company knows consumers will demand such a feature, and so it’s teaming up with OpenAI to add the startup’s technology to iOS 18, the next version of the iPhone’s software. The companies are preparing a major announcement of their partnership at WWDC, with Sam Altman-led OpenAI now racing to ensure it has the capacity to support the influx of users later this year.
Baffling. I have no idea what that demo will look like, but I hope it isn’t “Showing results from ChatGPT on your iPhone” and there’s a big modal window of ChatGPT output.
It is worth noting that not everyone is enamored with ChatGPT, despite the enthusiasm over its features.
Apple certainly won’t be demoing the imposter Scarlett Johansson voice from OpenAI at WWDC like OpenAI did at their spring event. You know, on account of them being sued, and all.
Google demoed integration with Google Workspace (Drive, Sheets, Gmail, Gchat (lol), etc.) and Apple should show that Siri can pull in information and context from Mail, Messages, Calendar, Photos, Reminders, etc. Ideally, it would be great to work with apps beyond that, but it needs to be able to plug into at least that data.
That means there needs to be a privacy interface for what apps Siri can access, especially if it is relaying it to a third party, and a privacy story about how Apple won’t be looking into every app on your device if you don’t want it to.
I fear that Apple simply won’t address anything but ChatGPT basics shoved into Siri windows. Which is possibly worse than continuing to work quietly on whatever the hell it is they’re working on. I’ll still run through some examples I’d love to see:
Show us someone asking a HomePod or Watch to do something, and instead of saying it can’t, it’ll execute it on your iPhone. Tell us the story about how Siri is secure and functional across devices under your Apple ID.
Demo someone telling Siri to play something on TV, then asking their Apple Watch to “pause the TV”—where Siri knows “the TV” is the one I started playing something on (and my iPhone is nearby, based on Bluetooth), even if there are many TVs attached to my Apple ID.
Put on a little show of someone asking Siri where something is in the interface, or how they can do something. “Hey Siri, where are my saved passwords?” It whisks the person right to the Passwords section of Settings. “Hey Siri, I turned down the brightness all the way but it’s still too bright, what can I do?” and it surfaces the Reduce White Point control. Conversationally, “How can I only turn on Reduce White Point late at night?” and it offers a Shortcut based around the sleep and wake-up times.
Demo someone using new Siri with CarPlay, an essential application of Siri, where someone can conversationally talk to Siri to “Play ‘Mona Lisa Overdrive'” and then follow that up with “Play the rest of the album” and it’ll queue up the tracks after instead of doing something completely random like it does now.
Absolutely demo someone pausing music on their Mac, and telling their HomePod to “play what I was last listening to” and it can go resume playback on the HomePod exactly as if you had just hit play on your Mac.
Demo Siri being able to understand what’s currently on-screen when asked. “Hey Siri, who is the actor in this video?” Then conversationally follow that up with “What have I seen them in recently?” Where it could look through what was recently watched through the TV app and check that against the roles that actor has played. That’s not putting anyone out of a job (Well, except Casey. Sorry, buddy.)
Above all else, demo to the audience that when Siri doesn’t know what to do, it’ll ask. Show us a graceful failure state that reassures people how Apple can behave responsibly.
Let me illustrate what not to do with a recent interaction I had with Current Siri:
Me: “Play the soundtrack for The Last Starfighter.”
Siri: “Here’s The Last Starfighter.”
[Opens the TV app on iOS and starts playing The Last Starfighter from my video library.]
Me: “Play The Last Starfighter soundtrack.”
Siri: “Here’s Dan + Shay.”
[The Music app starts playing Dan + Shay’s “Alone Together.”]
Me: “Play The Last Starfighter Original Motion Picture Soundtrack.”
Siri: “Here’s The Last Starfighter by Craig Safan.”
It seems, however, that nothing is really rumored along these lines. Oh well, guess I’ll listen to some more Dan + Shay!
Ethics? Anyone?
A very troubling aspect of these rumors is Apple partnering with OpenAI. OpenAI didn’t ethically acquire the rights to the information used to train its models, just as it didn’t take Scarlett Johansson’s no for an answer. It’s in active lawsuits with various media companies.
Even companies that have struck deals with OpenAI—like Stack Overflow and Reddit—were bought off only after their sites had already been scraped. The users, who generated all the value on those sites, can’t even delete their posts in protest.
Is Apple going to endorse OpenAI by giving them a thumbs up and slotting them into their next operating system releases without comment? They absolutely shouldn’t show anyone from OpenAI in their WWDC presentation, especially not Sam Altman.
There’s an easy way to draw a parallel to Google. Companies sue Google all the time over rights, and Apple still includes Google.
Of course, they are taking money from Google to be the default search engine on iOS, and then trying to have Safari insert Spotlight suggestions to pretend there’s a privacy angle. That Google deal now means that the default search will go through Google’s AI Overview. So Apple is already going to endorse Google’s approach to AI too, even if they don’t strike a deal for anything more.
And let’s not forget the ethics of Apple’s climate pledge. There should be a point in the WWDC keynote where Apple communicates how they can harness AI and still stay on target for their climate goals. That probably seems like a small thing, but people are getting pretty hand-wavy about maintaining their commitments while also putting their models to use.
Regardless of what happens, I suspect there will be plenty of disappointment and outrage to go around in the aftermath of WWDC. These are the times we live in. I just hope Apple takes some lessons from that thing with the hydraulic press and the iPad and doesn’t step in it too badly, just to show that they’re keeping up with the AI hype from the bozos of the tech world.
[Joe Rosensteel is a VFX artist and writer based in Los Angeles.]
It’s kind of hard to believe today, given how wildly successful the iPhone has been, but in the product’s early days there was only a single iPhone model for sale. It wasn’t until ten years ago, in 2014, that Apple introduced two different iPhone models, the iPhone 6 and iPhone 6 Plus. But that opened the floodgates, and Apple has spent the last ten years trying to find the right combination of new iPhone models to maximize the money it makes from its most important product.
If reports are true, next year Apple’s going to be switching things up again, dropping the iPhone Plus for a dramatic new model. After the discontinuation of the iPhone Mini after two years and the (apparent) death of the iPhone Plus after three, what does Apple have up its sleeve for 2025?
Today Marvel and ILM dropped the trailer for the forthcoming “What If?” immersive story, which was just announced a couple of weeks ago, and (surprise!) is launching next Wednesday in the visionOS App Store as a free app.
Marvel and ILM say the immersive story is about an hour long, and is directly connected to the “What If?” animated series on Disney+, which itself is a multiversal riff on various Marvel Cinematic Universe movies. (Lest you think that this multiverse stuff is new, “What If?” is based on a comic that I absolutely devoured when I was a kid. I still have my issue #1 in a box—what if Spider-Man joined the Fantastic Four?—not too far from where I’m typing this.)
The story will feature the cosmic being The Watcher asking you, the Vision Pro user, for help in facing off with dangerous “variants,” alternate-universe versions of various Marvel characters. Sorcerer Supreme Wong will instruct players on how to cast spells (presumably with the Vision Pro’s hand tracking features) and use the Infinity Stones. Versions of characters such as Thanos, Hela, the Collector, and Red Guardian will appear.
Marvel and ILM describe the app as including both full virtual environments as well as mixed-reality scenarios which involve the real world around players. It sounds very much like an interesting mash-up of interactive and animated elements. I’m very much looking forward to seeing how the “What If?” animation style translates into 3-D. I guess my wait will be over in a week!
Myke returns to the show to give his personal iPad Pro review, and we discuss Apple’s accessibility announcements, worrying Apple AI reports, and some mystifying rumors about future iPhones for this year and next.