While the clip was full of fawning descriptions of each other — and a suspicious number of “ordinary San Franciscans” with the apparent gift of time travel — it was light on details about what Ive and Altman are cooking up. But since the trailer for the Seth Rogen flick filming in San Francisco has yet to drop, The Standard’s staff had nothing better to do than break down the video in excruciating detail. Enjoy!
Details include spotting the extras, being judgmental about the café selection, and questioning Sam Altman’s grasp of Silicon Valley history.
Ever since OpenAI announced a couple of days ago that it’s acquiring Jony Ive’s hardware startup, I’ve been struggling with what to write about it.
Struggling because it’s obviously an important technology topic, and needs to be taken seriously. But also struggling because I’ve seen a lot of people who think, talk and write about this stuff for a living reacting to the announcement with enthusiasm and positivity, and I just don’t feel that, not even a little bit.
And struggling because I don’t want to judge any project based entirely on the red flashing light going off in my head suggesting that it’s a load of bullshit. It’s the same light that flashed when I first heard about Quibi or the Humane Ai Pin.
So OpenAI and Apple’s legendary former design chief are embarking on a journey to build some new AI-enabled hardware. They’re coy about what it will be—probably not a phone, definitely not a watch, maybe not “something you wear”—but my gut feeling is that it’ll be something we’ve actually seen before. My true prediction is that it’ll be more like the Humane Ai Pin or that AI Pendant, but that they’re embarrassed to be associated with those products, so they’re going to wait a little longer to let the stink clear.
I’m skeptical about OpenAI in general, because while I think AI is so powerful that aspects of it will legitimately change the world, I also think it has been overhyped more than just about anything I’ve seen in my three decades of writing about technology. Sam Altman strikes me as being a drinker of his own Kool-Aid1, but it’s also his job to make everyone in the world think that AI is inevitable and amazing and that his company is the unassailable leader while it’s bleeding cash.
I’m skeptical about the premise that people want to give up their smartphones. Two rich guys, one of whom made a fortune by designing the iPhone, have decided that the most successful and important tech product in history is bad for you, actually, and that the solution is, unsurprisingly, a new and different tech product.
But people love their smartphones. Really love them.
Just as with the Humane pin, it strikes me as unlikely that there’s much an AI accessory device can offer that can’t already be done by a powerful smartphone and maybe some earbuds or smart glasses or a smartwatch working in concert. It’s not impossible, but it feels unlikely that there’s a space for something to unseat the smartphone given all of its advantages and the fact that people really, really like it.
I’m skeptical of the composition of the io leadership team, which features an awful lot of product designers and not a lot of hardware engineers. I’m sure there are talented engineers there too—the OpenAI announcement refers to “physicists, scientists, researchers” among the team members—but the fact remains that this is a startup whose leader and key lieutenants appear to all be designers.
Designers aren’t bad. They’re good. But designers are part of a team. You can’t make a football team out of quarterbacks or a baseball team out of pitchers. I’ve worked with some very talented designers over the years, and while they can be incredibly creative, the magic happens when they work in collaboration with the other members of a team, where their design sense can be steered by practicalities and, in turn, steer non-designers away from bad approaches.
Which brings me to Sir Jony Ive himself. Ive is undeniably one of the most famous and important designers of our lifetime. His early days at Apple were frustrating, but when Steve Jobs arrived, the two of them clicked, and the results were spectacular. We all know what they did. It’s undeniable.
I would argue that what worked about that partnership is that Jobs grounded Ive, bringing a sense of the customer and user of Apple products that perhaps tempered some of Ive’s design tendencies. When Jobs died, Apple made a great effort to push Ive to the forefront, mostly as a signal that the magic of Apple hadn’t died with Jobs, but was still alive and well, even though an operations guy was now the CEO. Ive provided Apple with cover until the rapid acceleration of iPhone sales made it unnecessary.
But in that post-Jobs era of Apple, Ive was unfettered. He was put in charge of software design, so his portfolio expanded. And who at Apple was going to say no to Sir Jony Ive? Who was going to tell him, in Steve Jobs fashion, that some of his ideas sucked? (And who is going to do that at io and OpenAI? Forgive me if I’m dubious about Sam Altman having both the skill and desire to do that.)
The post-Jobs Apple era was one of great financial success, but the design failures and bizarre dead ends are there for all to see, and it’s hard not to imagine that an unchallenged Ive was a major part of that dynamic. Solid gold watches, butterfly keyboards to meet impossible laptop design goals, removing unsightly ports on pro laptops, and the introduction of a $3500 VR headset with sparkling chrome and a luxurious 3D knitted headband and a set of outward-facing displays to “encourage human connection.” To me, all of this is the legacy of Ive’s design culture.
Meanwhile, Apple’s success made Ive a very rich man. He was knighted, did work for the King, drove fancy cars, designed a bunch of expensive jackets… it’s hard to look at Jony Ive’s last decade and a half and not wonder if he’s entirely lost touch with the part of him that collaborated with Jobs on the iMac, the iPod mini and the original iPhone. He seems to move in luxurious circles, among billionaires (like Sam Altman), with expensive tastes and interests. It felt like he was bored at Apple, and he seems to be excited about working with Altman on this new project, but are a bunch of designers who’ve been to the mountaintop and reaped the rewards really going to be tied in to the next big consumer hardware product?
I’ll say this: Never count out Jony Ive and the talented people that surround him. They’ve gotten the band back together, thanks to an enormous investment of AI money, and we’re going to find out—eventually—what they want to put into the world.
But right now, all we have are words and an awkward video of Sam and Jony drinking espresso. The words are all vague. I’ll believe whatever they’re going to release when I see it. Until then, like so much in the AI world in particular and the tech world in general, it’s meaningless hype, signifying nothing.
Yes, I know it was Flavor Aid that was used at Jonestown. The metaphor taking one step away from the truth of that event was probably a good idea. Also, I was just trying not to use the phrase “high on his own supply,” but I can’t fool you, footnote reader. ↩
Bad vibes about Apple dealing with the U.S. government and also bad vibes about the Sam and Jony show. [More Colors and Backstage subscribers also get 20 minutes of bonus vibes about Google and Apple existential crises.]
My thanks to Clic for Sonos for sponsoring Six Colors this week. Clic for Sonos is the fastest native Sonos client for iPhone, iPad, Mac, Apple Watch, and visionOS. It’s easy to get set up and get going, whether you’re playing to a single device or grouping multiple speakers together.
Clic for Sonos offers deep integration with native Apple technologies, with support for Widgets, Live Activities, Shortcuts, a Mac Menu Bar app, and support for Control Center. It works with your Sonos library, Apple Music, Spotify, Plex, Tidal, and TuneIn, and supports lossless and Dolby Atmos.
Try it for yourself and you’ll see. Six Colors readers can get one year for just $9.99 (30% off) or lifetime updates for $30 (50% off). Go to clic.dance/sixcolors for all the details.
Here’s the problem with CarPlay Ultra: It’s still CarPlay.
Based on what we’ve seen of CarPlay Ultra, Apple believes that if it controls the appearance of the displays in cars, then using the car will be a good experience. I’m not sure that’s an assumption I’d make, especially when styling isn’t directly connected to function—as is the case with most of what distinguishes CarPlay Ultra from CarPlay.
The real hallmark of Apple is a bad settings screen. (Image: Top Gear.)
There’s so much more Apple needs to do with CarPlay, fixes that would also benefit CarPlay Ultra. I use CarPlay all the time, and there are plenty of issues that don’t seem to be on Apple’s roadmap. If Apple improves CarPlay, it also improves CarPlay Ultra. That being said, here are some of my biggest outstanding issues with CarPlay today.
At the center of things
Whether you’re driving a fancy car with CarPlay Ultra or you’ve just got basic CarPlay, the interface on your vehicle’s central touchscreen is the main stage. In early CarPlay Ultra demos, that very familiar CarPlay interface is still front and center.
The entire approach to notifications needs to be rethought. When a new notification appears, it displays for a second and then fades away. If you’re busy driving the car and, you know, paying attention to the road, you won’t know that you have missed the text message that your friend is running late or has canceled. If Messages is not in the dock, there is no visible badge, and it’s not added to the dock based on incoming notifications, but rather on when you last used it.
A glanceable, non-distracting indicator that there are active notifications that need attention would be nice. Perhaps Apple could even use some of that vaunted Apple Intelligence to detect what sorts of messages were a priority in the context of driving a car.
When notifications appear, they also float above existing tap targets in the interface. If I am parked and trying to select my dentist’s office in Apple Maps, a calendar alert reminding me to go to the dentist will appear and block me from completing my task. CarPlay Ultra adds even more new overlays, like vehicle warnings and climate controls. I don’t know what the answer is—push down the screen? have a dedicated area of the screen for warnings?—but it’s a problem in need of a solution.
Organizing the apps displayed on CarPlay could also be improved. Right now, this is accomplished by using the Settings app to reorder the list of apps on a per-vehicle basis, but the vertical list offered in Settings doesn’t match how those items are displayed in their icon grid in the car! Since the settings are per-vehicle, Apple knows the exact dimensions of the screen, so it knows how many rows and columns there are, and where the page breaks will be. It should also be easier to sync these layouts across devices. I’m not a current Apple Music subscriber, but it’s the second app in any default CarPlay homescreen, and there’s nothing I can do to prevent that from appearing in every rental car I connect to my phone.
Connectivity quirks
I had a Honda with CarPlay, and my boyfriend and I currently share an Audi with CarPlay. Even though both use wired connections, both periodically flake out. We’ve rented numerous cars with both wired and wireless CarPlay when traveling, and there has been no consistency in connectivity in any of these vehicles. The wireless version in one Chevy car had unacceptable lag that made the screen unusable, requiring a wired connection. In a recent Toyota rental, the wired connection didn’t work, but the wireless connection was rock solid.
There’s no quality guarantee from Apple or automakers about how well CarPlay will work with any given car, but I’ve built a mental list of which cars seem to work better than others through trial and error. That list informs my animus toward certain makes and models that can persist even if the CarPlay experience has improved, because there’s no rating system or seal of approval. I’m not sure what Apple can do here, but some sort of CarPlay certification process might allow Apple to inform automakers about choices that lead to unreliable connectivity and unhappy customers.
CarPlay Ultra disconnects won’t affect the instruments and essential functions of the car because they’re rendered locally by the vehicle. I have no safety concerns about dropped connections. However, we haven’t seen how gracefully the phone-generated part of the non-essential interface degrades when there are connection issues. I don’t believe Apple wants to be the one to show people anything less than ideal function, even if we all know that’s not realistic.
Regardless of what the connection failure states are: If Apple pushes out a buggy iOS release again, will people drive their CarPlay Ultra cars around with only essential, locally-rendered instruments for two weeks, or revert to their car’s interface and be hesitant to go back?
Talking to Siri
Ideally, when you’re driving, you’re not fiddling with touchscreens, but talking to Siri and keeping most of your attention on the road. I believe it’s one of the reasons Apple marketing VP Bob Borchers said, “This next generation of CarPlay gives drivers a smarter, safer way to use their iPhone in the car.” (Emphasis added.)
CarPlay Ultra isn’t adding or augmenting lane guidance, crash avoidance, or self-driving features, but in theory, it’s safer because you can now tell Siri to turn on the seat warmer.
But we’ve all used Siri. It doesn’t just fail; it can also execute the wrong command with the utmost confidence, which is a distraction all its own! With CarPlay Ultra, Siri can now cause that distraction with car functions, not just by playing the wrong music.
There’s also another issue at the crossroads of Siri and connectivity, and that’s what happens when Siri can’t connect to the Internet. I’m sure you’ve all had the pleasure of getting in the car, pulling out of the driveway, and saying, “Hey Siri, give me directions to a place,” only to have it spin or glow and give up. Not only can it not get the directions, but it also eats the command, and you have to say the whole thing over again.
This needs to be smarter. The iPhone should recognize that since it’s just connected to a car, its nearby Wi-Fi connection is likely to disappear, so prioritizing the cellular network might be a smart move. And if there is a temporary connectivity failure, perhaps Siri should hang on to that command and send it again when connection resumes, or offer to resubmit the request instead of requiring me to do it personally.
(Remind me: I’m a person and my iPhone is a computer. Which one of us should be doing the repetitive tasks, again?)
In the event of a failure, I also never see Siri attempt to use the iPhone’s on-device dictation model to decode my instructions and pass them on to Apple Maps, which has been helpfully preloaded with offline maps.1 Remember to be online when you want to use your offline maps.
When sharing isn’t caring
The car has a volume setting for audio playback and a separate one for navigation audio, but they aren’t per-device, so the different audio settings on my iPhone and my boyfriend’s iPhone result in one of us getting into a very loud or very quiet car, or the navigation audio being too loud for him in Google Maps and too quiet for me in Apple Maps.
This is the lowest level annoyance of all the annoyances, but it’s worth mentioning in light of how it might apply to CarPlay Ultra. To what degree are my settings carried over to my iPhone, including climate, radio, and instrument cluster layout? To what extent does my iPhone simply set those things in the car at the time of my request, and then pick up whatever state the settings are in when my iPhone reconnects later?
If it’s like audio settings are right now, where the settings are just whatever they were when the last person drove the car, then what are we even doing with our smartphones connected to these cars instead of relying on Android Automotive profiles?
It’s even more complicated when both of us are in the car with our individual devices. With wired CarPlay, the phone plugged in is the CarPlay phone. But with wireless CarPlay and multiple phones, it’s a crapshoot—it’s which phone gets in range first, or maybe which one was connected most recently. CarPlay doesn’t offer a switcher if it connects to the wrong phone, or if you just want to switch from one phone to another.
When the locally rendered instrument cluster in CarPlay Ultra boots up before it connects to my iPhone, is it what my boyfriend had the instrument cluster set to? Does it change to mine while I’m using the car, and back to his, or will we be overriding each other each time we connect to the car, as we are currently with volume settings? Are we overriding each other’s climate settings?
I would love to know if CarPlay Ultra offers a more seamless user switching experience, but I’m unsure if it has occurred to Apple that we’re not a two-Aston-Martin household.
Put it in the parking lot
Apple improving CarPlay would help everyone. It would be a better sales pitch for CarPlay Ultra, because “All the same annoyances as before, but across your whole dashboard!” is not a great slogan.
I would never buy another car without CarPlay, because even when it’s flaky, or Siri bumbles something, it’s handling my media and my personalized navigation better than any car can. I can’t say the same thing about CarPlay Ultra, which feels more like applying an iOS-styled WinAmp skin to the speedometer. For CarPlay Ultra to succeed, Apple needs to do more than woo reluctant automakers. It needs the discipline to address the long list of existing CarPlay annoyances. A rising tide lifts all boats. Er, cars. You get what I’m saying.
If you put your iPhone into Airplane Mode and disconnect from Wi-Fi, you can ask it for directions to points of interest stored in your offline maps, and Siri can’t do it. You can open the Maps app and use speech-to-text dictation in the search field to get directions. Shocking, I know. ↩
[Joe Rosensteel is a VFX artist and writer based in Los Angeles.]
The future of CarPlay is here, sort of, as we break down Apple’s sales pitch for CarPlay Ultra. Also, Tim Cook’s India iPhone plan gets noticed, Apple and Epic go around again on Fortnite, and Bloomberg portrays Apple’s internal AI struggles.
Parallels Desktop allows straightforward use of Windows and other ARM (and now x86) operating systems within a window or commingled in macOS. (Source: Parallels)
Six Colors subscriber Brandon Minich asked:
I haven’t been able to keep track of…what is the current status of the emulation of Windows on the Mac after the changeover to Apple silicon? Is this even worth doing like it used to be in the Intel days? And how does the ARM-based Windows work with Mac emulation, if at all?
It’s been a somewhat twisty road at times, but there is one strong option and one also-ran for running Windows in a virtual machine on a Mac, at performance not far below what you’d get from today’s comparable ARM-based Windows PCs. As of January 2025, you can even run Intel 64-bit x86 versions of Windows, though you will find it painfully slow.
Giving Boot Camp the boot
The M-series Macs came out of the gate blazing fast. While many pundits had predicted for years that Apple would release Macs running silicon the company had designed, the ARM-based Apple chips’ performance exceeded everyone’s expectations. But this caused immediate trouble for people who relied on Windows virtualization (using VMware or Parallels) or who used Apple’s dual-boot Boot Camp option: ARM processors couldn’t run x86 Windows code.
The fortunate timing was that the computer industry was already making a shift from Intel’s long-running x86 architecture to ARM-based chips before Apple released the first M1 Macs. As a result, Microsoft had a version of Windows for ARM well into testing, although the company hadn’t yet authorized its virtualized use.1 And Windows for ARM would seemingly require new versions of popular software, too.
The sheer computational power of Apple silicon beckoned us to give up our Intel Macs, yet any dependency on Windows software meant the shiny red ball was held just out of reach of our grasping hands.
But there were glimmers of what was to come. Apple had wisely shipped the M1 Macs with Rosetta 2, which enabled the emulation of Intel Mac code on ARM.2 And Microsoft was no slouch, either, including 32-bit x86 emulation in Windows 10; it added x64 (64-bit) emulation in Windows 11, using a system it calls Prism.
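As a side note, it’s easy to see Rosetta 2 translation in action on macOS. This is a minimal sketch, assuming a POSIX shell on a Mac; the `sysctl.proc_translated` flag it reads is 1 for a process running translated x86_64 code, 0 for native ARM code, and absent entirely on Intel Macs (or any non-macOS system):

```shell
#!/bin/sh
# Report whether the current shell process is running under Rosetta 2.
# sysctl.proc_translated is 1 for translated (x86_64-on-ARM) processes,
# 0 for native ones, and absent on Intel Macs and non-macOS systems.
rosetta_status() {
    translated=$(sysctl -n sysctl.proc_translated 2>/dev/null)
    case "$translated" in
        1) echo "translated: running x86_64 code under Rosetta 2" ;;
        0) echo "native: running ARM code on Apple silicon" ;;
        *) echo "no translation layer (Intel Mac or non-macOS)" ;;
    esac
}

rosetta_status
```

Launching the same script via `arch -x86_64 sh` on an Apple silicon Mac (with Rosetta 2 installed) flips the flag to 1, which is a handy way to confirm which of your tools are still running translated.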
With Boot Camp dead, since you can’t start up an Apple silicon Mac in an Intel environment, Parallels and VMware Fusion became the only alternatives, with Parallels releasing a version supporting Windows 11 for ARM in August 2021 and VMware following over a year later. Microsoft initially didn’t offer official support for running its operating system virtualized on Apple silicon, but blessed Parallels Desktop in 2023. VMware took this as a blessing, too, though I’ve never understood why.
Just pause for a minute to realize something absurd: an M-series processor, even the original M1, was powerful enough to run a virtualized version of Windows for ARM, which could handle executing x86 Windows software in emulation while you also ran Intel-based Mac software in the main macOS environment.
You make a dead Mac cry
Windows 11 for ARM works seamlessly and natively in a virtual machine on Apple silicon. (Source: Microsoft)
The landscape today puts Windows emulation on Apple silicon on stable ground with a boost in January 2025 that gives you even more options, depending on your needs.
Parallels seems to have won the consumer desktop game in a quirky way. Both Parallels and VMware have continued developing their virtualization software for Apple silicon. However, Parallels’s feature set now significantly outpaces VMware Fusion’s, and VMware has given up on selling licenses for Fusion, which is free for personal and commercial use as of November 2024—that may speak to its future. Parallels’s integration with macOS has always been better, from installation to drag-and-drop support to commingling Mac and Windows apps in its unique Coherence mode.
Parallels has two versions that most people would consider: Parallels Desktop Standard Edition and Pro Edition. Standard has limitations on the amount of RAM and processing power you can throw at a virtual machine and is meant to run a single VM at a time. Pro strips the limits off, adds graphical processing support, and allows multiple VMs to run simultaneously. You might find Standard sufficient if you’re not engaged in development, documentation, or testing, but its memory, CPU, and GPU restrictions could chafe.
Because of its relationship with Microsoft, you can purchase Parallels Desktop and install Windows 11 with a couple of clicks. Windows 11 can be used without paying for a license and activating it, though this prevents personalization and may prevent downloading non-critical updates. The OS will also nag you (including in a persistent background image) about activation. For occasional use, this might be ok, although you can purchase highly discounted legal Windows 11 licenses.
In January 2025, Parallels announced the release of one missing piece: Desktop can now fully emulate a 64-bit x86 processor (although not all Parallels features are available in this mode). It’s also apparently incredibly slow, taking several minutes just to boot. However, with a sufficiently powerful M-series Mac and patience, you can now run 32-bit or 64-bit x86 apps in a 64-bit x86 version of Windows as a virtual machine, if you have a need that Windows on ARM’s built-in x86 emulation couldn’t meet.
Parallels Desktop Standard Edition is $100 per year (currently discounted to $65) or can be purchased with no recurring fee for $220. The Pro Edition is available only as a subscription: $120 per year, currently discounted to $80. Those reduced rates are for the first year. However, in the past, when I needed an active subscription, I was typically able to find a reduced license fee through a bundle with another product I had already purchased; I’m not sure if this is still the case.
[Got a question for the column? You can email glenn@sixcolors.com or use /glenn in our subscriber-only Discord community.]
Thanks to Jackson for emailing that VMware had updated its terms to include free commercial use for its emulators!
Virtualization typically involves creating a walled-off area inside a computer that believes itself to be a fully functional computer; a complete operating system runs inside that area, executing code that’s native to the processor. A hypervisor is software that aids in creating these virtual machines. Apple has built-in hypervisor support. Most of the time, full operating systems have to be native to the processor to work as a virtual machine on another computer or server. Emulation is used to bridge that gap; see next note. ↩
Emulation lets software built for one processor run on another, either on the fly or via a one-time conversion of the non-native code. By my count, Apple has pulled off this emulation transition four times since the original Mac operating system—or five, if you count using iOS/iPadOS apps on M-series Macs. See my 2021 TidBITS article, “Emulation, Virtualization, and Rosetta 2: A Blend of Old, New, and Yet To Come.” ↩
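The difference between on-the-fly execution and one-time conversion can be made concrete with a toy sketch. Nothing below is how Rosetta 2 or Prism actually works; the guest instruction set, opcode table, and function names are all invented for illustration. A program written for an imaginary accumulator-based guest CPU is translated once into host-level operations, then executed natively:

```python
# Toy illustration of emulation via one-time conversion. A "guest" program
# for an imaginary accumulator CPU is translated into host (Python) callables
# up front, then run at host speed, rather than re-decoding each instruction
# on every execution. All names here are invented for illustration.

GUEST_PROGRAM = [
    ("LOAD", 10),   # acc = 10
    ("ADD", 32),    # acc += 32
    ("MUL", 2),     # acc *= 2
    ("HALT", None),
]

# Each guest opcode maps to one host-level operation.
HOST_OPS = {
    "LOAD": lambda acc, n: n,
    "ADD":  lambda acc, n: acc + n,
    "MUL":  lambda acc, n: acc * n,
}

def translate(program):
    """One-time conversion of guest instructions into host callables."""
    translated = []
    for opcode, operand in program:
        if opcode == "HALT":
            break
        translated.append((HOST_OPS[opcode], operand))
    return translated

def run(translated):
    """Execute the already-translated program natively on the host."""
    acc = 0
    for host_op, operand in translated:
        acc = host_op(acc, operand)
    return acc

print(run(translate(GUEST_PROGRAM)))  # prints 84: (10 + 32) * 2
```

An on-the-fly emulator would instead decode each `(opcode, operand)` pair inside the execution loop every time it ran; caching the translation is the key speed trick both Rosetta 2 and Prism rely on.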
Later this year, the App Store will feature cards indicating the accessibility features each app supports. Hopefully that’s the usual “later this year,” not the Apple Intelligence “later this year” or the CarPlay 2 “later this year.”
Apple is not done in the accessibility field, however.
If that sounds creepy and terrible, it’s really not. While Elon Musk is working on brain implants because he wants to extract the thoughts of normal humans and implant them into his own brain to see what real human emotions are like, Apple’s efforts here are more noble.
Researchers believe that Brain Computer Interfaces, such as the Stentrode and Neuralink, will revolutionize the ability of people with diseases like amyotrophic lateral sclerosis, or ALS, to interact with their devices.
With over 5,000 five-star reviews, Magic Lasso Adblock is simply the best Safari ad blocker for your iPhone, iPad, and Mac.
Designed from the ground up to protect your privacy, Magic Lasso blocks all intrusive ads, trackers and annoyances – stopping you from being followed by ads around the web.
Howard Oakley notes the deprecation of the Apple Filing Protocol, which used to be the foundation of peer-to-peer file sharing on the Mac. (These days most Macs share files using the SMB protocol.) The end of AFP has some serious ramifications for some older Apple hardware:
In case you missed it, Apple has just announced that a “future version of macOS” will no longer support AFP, Apple Filing Protocol. This is included in the Enterprise release notes for macOS 15.5 Sequoia….
Greatest problems come with Apple’s old Time Capsules, most of which are still used with AFP, as they can only support SMB version 1, not versions 2 or 3. If you’re still using a Time Capsule, or an old NAS that doesn’t support SMB version 3, then access to your network storage may well still be reliant on AFP.
Oakley’s recommendation is also mine: If you’re using a Time Capsule or another old NAS that doesn’t support SMB 3, it’s finally time to say goodbye. If you recently put in a new drive, move it to a new NAS. And buy a new Wi-Fi access point. So it goes.
The rise of sports as a driver in streaming, NBC’s big 2026, and Netflix’s live reality moves. Plus: our TV picks. [Downstream+ subscribers also get: ESPN and WB’s solid branding decisions and Fox’s smart strategy.]
The somewhat half-baked set of features we saw announced at WWDC in June 2024 was, according to Bloomberg’s Mark Gurman, the result of Apple going “all-in on AI” nearly a year before. Apple added a bunch of features into iOS 18 and macOS 15 and came up with a marketing plan built around the new Apple Intelligence brand, but by Apple standards it was a rush job.
Contrast that with the current status of Apple Intelligence. In the last few months, Apple has had to pull back on features it already promised and bring in new leadership. Meanwhile, WWDC 2025 looms. There’s not a lot of time to decide how Apple’s going to approach its AI functionality over the next year.
When I consider what’s going on at Apple right now, I keep thinking back to one of my favorite movies, “Apollo 13,” in which a bunch of engineers back in Houston are guided through a series of intense analytical steps by Flight Director Gene Kranz in order to understand what’s happened and how they can best work the problem and save the crew.
This exchange, between Kranz (played by Ed Harris) and Flight Controller Sy Liebergot (Clint Howard), is what I’ve kept thinking about:
Gene Kranz: Can we review our status here, Sy? Let’s look at this thing from a standpoint of status. What have we got on the spacecraft that’s good?
Sy Liebergot: I’ll get back to you, Gene.
While there are no lives at stake, this is very much a situation where there is a daunting technical challenge that demands an immediate response. So what do you do if you’re Federighi and Mike Rockwell (the new head of Siri)? You do what Gene Kranz did (and yes, kids, “Apollo 13” is a true story): look at the entire thing from a standpoint of status.
What does Apple have in artificial intelligence that’s good?
It’s triage, which involves reviewing a list of projects and determining what’s feasible. Balancing the needs of WWDC marketing with the art of the possible has to be one of the toughest things Federighi and company have done in the last few months.
First up: What’s the current status of the items announced last year and delayed back in March? Is it close to shipping, or is it far off? Federighi and company need to find out whether these features are just lagging, or if the initial conception was misguided and things need to be reconceptualized.
Another question for both Federighi and Apple’s marketing group is how to handle features already promised a year ago. Do they get re-promised? Are they not mentioned? If they’re reconceptualized, how does that get communicated? Apple has always been reluctant to admit to failure, so do revised features just get re-announced without any acknowledgement that they were previously promised?
Next: What’s the state of the stuff being worked on that hasn’t been announced? Between the features being built by Federighi’s team and the work he’s inheriting from AI chief John Giannandrea, there are undoubtedly a whole bunch of new items that were intended for the 2025-2026 OS cycle.
Obviously, the first step is a status check, to get a realistic sense of when a feature will be ready to ship to customers. But there’s another aspect to this part of the job: The whole group needs to consider all the mistakes they made last year in terms of gauging readiness. Obviously, last year’s judgment about what was ready to be announced was… flawed. How does Apple avoid that this time around? And then, considering those mistakes, what features are really going to ship by spring 2026? Everything else gets held for WWDC 2026.
Someone also needs to look critically at Apple’s own AI models and judge whether they’re suitable for deployment. One would hope that over the past year, Apple has developed better versions of the models it currently ships on devices, but even those new models may still lag behind the functionality of models from other providers. Some reports suggest that Federighi has softened on the use of third-party models in Apple features and functionality.
If Apple’s models are not state-of-the-art (and they are almost certainly not), are there “quick wins” Apple could accomplish by integrating third-party models? Could they be integrated into specific features? Does Apple have time to build a modular AI system that lets users choose which models—Apple, OpenAI, Perplexity, Google, whatever—they’d prefer to use? (And, separately, is Apple going to provide tools for app developers to use to integrate AI functionality as well?)
At WWDC, we’ll get our first sense of what Apple, with a revised software structure and still feeling the sting from failing to ship what it promised last year, thinks it can deliver. Failure is not an option.
If you guessed that automaker was an exceptionally high-end company selling a relatively small number of vehicles, you’d be right: hello, Aston Martin. According to Apple, the update is available in new Aston Martin vehicles in the U.S. and Canada starting today and will start appearing in compatible existing models in “the coming weeks” via a dealer-provided software update. It requires an iPhone 12 or later running iOS 18.5 or later.
CarPlay Ultra’s goals, according to Apple, are to provide deeper integration with the car experience, taking over all of the vehicle’s screens, including the instrument cluster, where it can display real-time information. It’s intended to move beyond navigation and entertainment, letting drivers also interact with car features like climate control.
This update has traveled a bumpy road with a lot of detours since its initial introduction at WWDC 2022. At the time, Apple said the first car models with support would be announced in late 2023, and named a variety of partners, none of which has yet delivered a product. Aston Martin, notably, was not on that initial list. In January of this year, Apple edited its website to remove a note about the first vehicles shipping in 2024.
The CarPlay 2 traffic jam has been largely the result of conflicting priorities between Apple and its partners where the rubber meets the road. Apple, of course, wants to exercise control over the whole system, but automakers are loath to cede that control to a third party, especially when they’d like to generate revenue by selling their own services—see General Motors, which said in 2023 that it would be discontinuing support for CarPlay and Android Auto.
That’s one reason Apple’s press release revving up CarPlay Ultra is replete with language about “reflecting the automaker’s look and feel” and “deeply integrating with the car’s systems and showcasing the unique look and feel of each automaker.” In fact, there’s a whole section of the announcement called “A Design Unique to Each Automaker” that stresses how Apple creates the design in collaboration with automakers.
Apple claims to be working with additional car manufacturers, specifically naming Hyundai, Kia, and Genesis. But it stops short of providing specific timelines or vehicles that might support the new system.
Personally, I’m a longtime CarPlay user and fan—I put a compatible head unit in my car back in 2019 and haven’t looked back. I appreciate what Apple’s trying to do here—let’s all agree that most automakers aren’t great at the software side of the business—but it’s also clear Apple has an uphill drive to get carmakers to sign on and give up their control.
Dealing with the imperfect realities of partner companies is a place where Apple doesn’t always have the best track record. While I’m glad to see the company isn’t pumping the brakes on next-generation CarPlay, it does still feel like they’ve taken their foot off the gas.
[Dan Moren is the East Coast Bureau Chief of Six Colors, as well as an author, podcaster, and two-time Jeopardy! champion. You can find him on Mastodon at @dmoren@zeppelin.flights or reach him by email at dan@sixcolors.com. His next novel, the sci-fi adventure Eternity's Tomb, will be released in November 2026.]
Default Finder window in Slovene. It’s still called the Finder, while Maps is translated as Zemljevidi. I can find no logic in why some app names are translated and others are not. Overall, the word I would use to grade Apple’s translation of its operating systems is “decent.”
When I wrote about switching to the Mac in July of 2023, I lamented the Mac’s lack of support for my language, Slovene. Apple has never gotten around to supporting many languages, but for Slovene, it all changed with iOS 18 and macOS 15. So I am writing this on my Mac mini with the default language set to my native tongue.
Why did Apple do this after 18 years of iOS and more than 40 years of the Mac? The simple answer is: Regulation works.
In April of last year, the Slovenian parliament passed a new version of The Public Use of the Slovenian Language Act stating that if you want to sell products in Slovenia, they must “speak” Slovene. Before I discuss the reasoning behind the law, it’s worth explaining why Apple’s past exclusion of Slovene was so unusual.